
Automatic translation from Russian to English. It may contain inaccuracies.



March 14, 2026 at 7:37 PM · Max Knyazev · Telegram mirror
Over the past couple of months, I have come across a lot of news about various AI agents, their security, and the ways these solutions can be exploited for different kinds of attacks. There is a lot of material, so let's discuss it

Let's start with the basics. Reuters recently reported that researchers scanned the internet for 293 days and found 175,108 publicly exposed Ollama servers in 130 countries. People ran LLMs locally but left them listening on 0.0.0.0 instead of 127.0.0.1 (a fatal mistake). As a result, anyone on the network could send prompts to the locally installed model, and in some cases even invoke tools, call functions, and pull out data. It turned out that 7.5% of system prompts were malicious (in one form or another), almost half of the servers advertised tool-calling, and a small group of hosts was online 87% of the time, essentially acting as a free spam-and-phishing botnet. What does this whole story tell us? That even local tools need to be configured carefully. Nobody broke anything here; users opened up access to their LLMs themselves. So remember: carelessness and security do not mix. But let's move on
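If you run Ollama yourself, this kind of exposure is easy to check for: its API listens on port 11434 by default, and a plain TCP connect tells you whether anything answers on a given address. A minimal sketch (the helper name is mine, not part of any tool; run the same check from a second machine on your network to confirm the server is not reachable from outside):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a TCP connect; True means something is listening there."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ollama's default API port is 11434. Reachability from localhost is
# expected; reachability from another host on the network means the
# server is bound to 0.0.0.0 and exposed to everyone.
if port_open("127.0.0.1", 11434):
    print("Ollama answers on localhost; now repeat the check from a second host")
```

The server-side fix is Ollama's documented `OLLAMA_HOST` environment variable, e.g. `OLLAMA_HOST=127.0.0.1:11434`, which keeps the listener on the loopback interface only.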

Lately I keep running into Clawdbot... no, wait, Moltbot... no, now it's OpenClaw (how many times can they rename it?). In case you somehow missed the hundreds of articles, posts, and news items about this AI agent (how did you manage that?): it is an open-source tool that runs locally and can interact with your computer and applications via messengers (Telegram, WhatsApp, etc.). It performs tasks autonomously, manages your calendar and mail, sends messages, searches for information, runs scripts, and automates workflows. It already has about 160k stars on GitHub (at the time of publication of this post, of course). So here's the thing

Attackers are already taking advantage of the naming confusion and are cloning the repositories with malware injected. Researchers are finding hundreds of open instances exposing Anthropic API keys, Telegram tokens, and months of correspondence. Forbes covers this in some detail (I recommend reading it). But there are also those who want to use AI agents as part of information security. Let's look at this in a bit more detail

It is also important to understand that classical information security assumes the subject is predictable: a given tool performs a logically limited set of functions and is used for specific classes of tasks. With AI agents this is mostly not the case, because their behavior is very hard to predict. Lukatsky rightly writes that the same OpenClaw demands too many rights, and without them it is not much different from "Shortcuts on macOS"
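The "predictable subject" idea can be sketched as code: in the classical model, every action a tool takes passes through an explicit allowlist, so its behavior is bounded by construction. A minimal sketch (the tool names and the dispatch helper are hypothetical illustrations, not OpenClaw's actual API):

```python
# Tool handlers a hypothetical agent might expose; names are illustrative.
HANDLERS = {
    "read_calendar": lambda: "3 events today",
    "run_shell": lambda cmd: f"executed {cmd}",
}

# The allowlist IS the security model: anything not listed is refused,
# which is what makes the subject's behavior predictable.
ALLOWED_TOOLS = {"read_calendar"}

def dispatch(tool: str, *args):
    """Refuse any tool call that is not explicitly allowlisted."""
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool!r} is not allowlisted")
    return HANDLERS[tool](*args)
```

An agent that requires broad rights up front collapses this model: once `run_shell` is in the allowlist, the set of reachable behaviors is no longer logically limited.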

#information_security
