Posts
DevSecOps was once considered a buzzword. Not long ago, its younger brother, MLSecOps, ca…
November 18, 2025 at 7:33 PM•Max Knyazev is typing…Telegram mirror
DevSecOps was once considered a buzzword. Not long ago, its younger brother, MLSecOps, came onto the scene. Today I will not dwell on what it is and why (
Maybe then I’ll do a series of articles on all sorts of Dev-Ops-ML-Infra-AI-App-Data-Sec
). Let's talk about useful open source projects that may interest you if you are involved in MLSecOps
🤝
There is a cool thing on GitHub - awesome-MLSecOps
This is essentially a map of the minefields between MLOps, DevSecOps and classic AppSec ( the materials are more than worthy ). From attacks through model serialization to data poisoning - everything is categorized, neatly and with love. Just right, so as not to spend half a day looking for all this disgrace
🪄
Let's move on - NVIDIA / Garak LLM vulnerability scanner
Scans large language models for prompt injections, leaks, hallucinations and other funny things that are usually talked about at conferences in the context of “we didn’t have that.” Works with Hugging Face, OpenAI API and local models. Quite simply, it does with the language model what Burp does with the web.
😉
ProtectAI / NB Defense aka Jupyter Notebook Security Analyzer
Checks Notebook for tokens, personal data and suspicious dependencies. Integrates directly into JupyterLab. The analyzer is suitable for CI, you can run everything as a pre-commit hook. For teams where ML and security first started to hold hands, we definitely hire
🫡
And the finale - IBM/Adversarial Robustness Toolbox
This is a classic ( you need to know this ). A set of tools for testing the resistance of models to adversarial attacks. Supports TensorFlow, PyTorch, Scikit-learn and a bunch of other frameworks
😳
Use it for your health ( I definitely know one senior in Yandex who will add this to his favorites and will be right )
🧠
#information_security
Open original post on TelegramThere is a cool thing on GitHub - awesome-MLSecOps
This is essentially a map of the minefields between MLOps, DevSecOps and classic AppSec ( the materials are more than worthy ). From attacks through model serialization to data poisoning - everything is categorized, neatly and with love. Just right, so as not to spend half a day looking for all this disgrace
Let's move on - NVIDIA / Garak LLM vulnerability scanner
Scans large language models for prompt injections, leaks, hallucinations and other funny things that are usually talked about at conferences in the context of “we didn’t have that.” Works with Hugging Face, OpenAI API and local models. Quite simply, it does with the language model what Burp does with the web.
ProtectAI / NB Defense aka Jupyter Notebook Security Analyzer
Checks Notebook for tokens, personal data and suspicious dependencies. Integrates directly into JupyterLab. The analyzer is suitable for CI, you can run everything as a pre-commit hook. For teams where ML and security first started to hold hands, we definitely hire
And the finale - IBM/Adversarial Robustness Toolbox
This is a classic ( you need to know this ). A set of tools for testing the resistance of models to adversarial attacks. Supports TensorFlow, PyTorch, Scikit-learn and a bunch of other frameworks
Use it for your health ( I definitely know one senior in Yandex who will add this to his favorites and will be right )
#information_security
Discussion
Comments
Comments are available only to confirmed email subscribers. No separate registration or password is required: a magic link opens a comment session.
Join the discussion
Enter the same email that you already used for your site subscription. We will send you a magic link to open comments on this device.
There are no approved comments here yet.