
Automatic translation from Russian to English. It may contain inaccuracies.

December 17, 2025, 7:29 PM · Max Knyazev · Telegram mirror
As you may have noticed, posts to the channel have recently been published no more than once a week. There was a lot of work, I was more tired than usual by the end of the year, and on top of that graduate school kept throwing in surprises of its own (I'm currently writing three scientific articles at the same time). All of this finally pushed me to the decision to go on vacation 🌴

So, I took a vacation for the first time in the last 1.5 years. I rested a little and relaxed (I even managed to get sick during that time), so I can finally write informative long posts like before. I hope you missed them. Today we'll talk about one cool tool I found in open source. But first things first ⤵️

There is one eternal paradox in AppSec: we are excellent at finding typical vulnerabilities, but we often miss logical flaws and other non-standard things in code. And every time it ends with the same phrase in the report: "it could not be detected by automated means." I also talked about this on the CapIBars podcast when answering the question "Is it possible to cover everything with scanners?" (no, you can't, and this is exactly why)

FuzzForge AI is precisely an attempt to break this pattern 👊

The classic AppSec stack today consists of SAST, SCA, sometimes DAST, sometimes IAST, and sometimes fuzzing of critical components. The problem is that almost all of these tools are either static or work within pre-known patterns. They are good at catching injections, unsafe dependencies, and trivial serialization errors. But as soon as complex business logic, non-obvious scenarios, and contextual restrictions come into play, the scanner cannot cope

FuzzForge AI falls squarely into this gray area. In the AppSec context, the project's idea is LLM-assisted generation of application-aware inputs. That is, generating test data not from the format alone, but from an understanding of the context of the API, protocol, or service. The model can take into account the endpoint description, intended use cases, and expected limitations, and purposefully try to circumvent them 😅

Simply put, this is an attempt to automate what pentesters and AppSec engineers usually do manually: "what happens if you pass this field in a different state?", "what if the steps are swapped?", "what if you follow the format but break the logic?"
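To make the idea concrete, here is a minimal hand-rolled sketch of "application-aware" negative input generation, the principle that FuzzForge AI automates with an LLM. This is not FuzzForge's API; the endpoint schema and payload below are made up for illustration. The point is the strategy: start from the endpoint's declared constraints and deliberately violate them one at a time, while keeping the payload otherwise well-formed.

```python
import json

# Hypothetical endpoint description (OpenAPI-flavoured, invented for this example).
TRANSFER_SCHEMA = {
    "amount": {"type": "number", "minimum": 0.01, "maximum": 10_000},
    "currency": {"type": "string", "enum": ["USD", "EUR"]},
    "note": {"type": "string", "maxLength": 140},
}

VALID_PAYLOAD = {"amount": 50.0, "currency": "USD", "note": "rent"}

def constraint_violations(schema, valid):
    """Yield (label, payload) pairs that keep the format valid
    but break exactly one stated rule -- the 'follow the format,
    break the logic' trick from the paragraph above."""
    for field, rules in schema.items():
        if "minimum" in rules:
            bad = dict(valid); bad[field] = rules["minimum"] - 1
            yield f"{field} below minimum", bad
        if "maximum" in rules:
            bad = dict(valid); bad[field] = rules["maximum"] * 1000
            yield f"{field} above maximum", bad
        if "enum" in rules:
            bad = dict(valid); bad[field] = "???"
            yield f"{field} outside enum", bad
        if "maxLength" in rules:
            bad = dict(valid); bad[field] = "A" * (rules["maxLength"] + 1)
            yield f"{field} over maxLength", bad

if __name__ == "__main__":
    for label, payload in constraint_violations(TRANSFER_SCHEMA, VALID_PAYLOAD):
        print(label, "->", json.dumps(payload)[:60])
```

The LLM's value over a dumb generator like this is exactly the context: it can also reorder multi-step flows, mix states, and invent violations no fixed rule list would enumerate.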

From the secure SDLC point of view, the project fits well into pre-prod and staging environments, especially where there are custom APIs and the like. It is especially important that FuzzForge AI is not positioned as a replacement for expertise. It's more of an amplifier: the engineer sets the framework, context, and hypotheses, and the LLM scales the attempts to break them. For AppSec, this is exactly the interaction model that is needed (the human thinks, the machine does)

Looking a little further ahead, you can already see the groundwork for MLSecOps. How do we control the behavior of the LLM fuzzer? How do we ensure the reproducibility of found bugs? How do we protect ourselves from model hallucinations? How do we guarantee determinism in CI/CD?
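One plausible answer to the reproducibility and determinism questions is record-and-replay: cache every generated test case under a hash of its prompt and seed, so CI replays recorded cases byte-for-byte instead of calling the model again. The sketch below is my own illustration of that pattern; the names (`CaseStore`, `get_or_generate`) are hypothetical and not from FuzzForge AI.

```python
import hashlib

class CaseStore:
    """In-memory record/replay store for LLM-generated fuzz cases.
    In a real CI setup this would be a file committed to the repo."""

    def __init__(self):
        self._cases = {}

    @staticmethod
    def key(prompt: str, seed: int) -> str:
        # Stable identity for a generation request: same prompt + seed -> same key.
        return hashlib.sha256(f"{seed}:{prompt}".encode()).hexdigest()[:16]

    def get_or_generate(self, prompt, seed, generate):
        """First run: call the (possibly non-deterministic) generator and
        record its output. Later runs: replay the stored case verbatim."""
        k = self.key(prompt, seed)
        if k not in self._cases:
            self._cases[k] = generate(prompt, seed)
        return self._cases[k]

if __name__ == "__main__":
    store = CaseStore()
    # Stand-in for an LLM call; any later generator is ignored on replay.
    first = store.get_or_generate("break the transfer API", 42,
                                  lambda p, s: {"payload": f"case-{s}"})
    again = store.get_or_generate("break the transfer API", 42,
                                  lambda p, s: {"payload": "something new"})
    print(first == again)
```

The same store doubles as a bug-reproduction artifact: a failing case's key pins down exactly which input triggered the crash, independent of the model's mood that day.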

Of course, the project is still raw. The LLM may be too polite to the application, may get stuck in loops, and may generate valid but useless cases. But these are growing pains, not problems with the concept itself. I think that in a couple of years such tools will become a firm part of the toolkit of AppSec/MLSecOps specialists (remember this, and if it doesn't happen, you can call me out on it when we meet) 😉

#information_security


