Max Knyazev · April 11, 2025, 9:01 AM · Telegram mirror

Hello, Ayotovites!
😉
Just yesterday, my colleagues from Baumanka and I published a scientific article: "Dynamic allocation of RACH slots for minimizing collisions in NB-IoT networks based on reinforcement learning algorithms"
🤌
Let me walk you through it in order: what, how, and why
If you are familiar with how IoT devices operate on a network, you know that they talk to base stations through what is called a random access channel (RACH). When there are too many devices, their connection attempts literally start to "collide". These are collisions. Because of them, transmission delays grow, energy consumption rises, and the network as a whole becomes unstable
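To get a feel for why collisions blow up with device count, here is a back-of-the-envelope sketch. It assumes a simplified contention model, not the exact NB-IoT NPRACH procedure: each of n devices independently picks one of k preambles uniformly at random, and a device collides if any other device picks the same preamble.

```python
# Illustrative simplification: n devices each pick one of k preambles
# uniformly at random (the real NPRACH procedure is more involved).

def collision_probability(n: int, k: int) -> float:
    """Probability that a given device's preamble is also chosen
    by at least one of the other n - 1 devices."""
    return 1 - (1 - 1 / k) ** (n - 1)

# With 48 preambles (an assumed pool size), watch the probability climb:
for n in (10, 50, 200):
    p = collision_probability(n, 48)
    print(f"{n} devices: P(collision) = {p:.2f}")
```

Even under this toy model, going from 10 to 200 devices pushes the per-device collision probability from under 20% to nearly certain, which is exactly the regime where a static configuration falls apart.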
So what can be done? This is where my colleagues and I come in with our article...
🥳
We proposed a method for dynamically allocating RACH slots using reinforcement learning algorithms (Q-learning and DQN). That is, instead of hard-coding the rules, we let the network adapt "itself" to the current load and conditions. A dedicated RL agent (which we wrote in Python) monitors the network, evaluates metrics (collisions, successful connections, etc.) and proposes a new slot configuration so that everything runs more stably
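The Q-learning half of the idea can be sketched in a few lines. This is a minimal illustration, not the paper's actual design: the state encoding (a coarse load level), the action set (candidate slot counts), and the hyperparameters here are all my assumptions.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch. State/action encoding and all
# constants below are illustrative assumptions, not the paper's design.
ACTIONS = [1, 2, 4, 8]            # candidate RACH slot counts (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = defaultdict(float)             # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy selection over slot configurations."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update rule."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

The reward would be built from the observed metrics (e.g. successful attaches minus a collision penalty); the DQN variant replaces the table `Q` with a neural network over the same loop.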
🙏
We tested the whole thing in the NS-3 simulator. We modeled a realistic NB-IoT network: 50 devices (periodic, sporadic, low-priority) transmitting data at different times. The agent interacted with the simulator every 30 seconds and selected the best actions. We compared it against the usual static approach, where everything is fixed in advance. And the results (my respects):
Collisions reduced by 74% (yes, that's really a lot)
Successful connections increased by 16%
Energy consumption decreased by 15%
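To make the evaluation loop above concrete, here is a toy stand-in for it. A real run drives NS-3; here a tiny fake simulator plays its role so the control flow (observe, pick a configuration, step 30 simulated seconds, collect metrics) is visible. The capacity model and every number in it are illustrative, not the paper's.

```python
import random

class ToyNetwork:
    """Fake NB-IoT cell: more slots -> fewer collisions. Stands in for
    NS-3 purely to show the interaction loop; dynamics are made up."""
    def __init__(self, devices=50):
        self.devices = devices

    def step(self, slots, seconds=30):
        attempts = random.randint(0, self.devices)
        collisions = max(0, attempts - slots * 12)  # crude capacity model
        return {"collisions": collisions,
                "successes": attempts - collisions}

def run_episode(policy, steps=10):
    """Drive the network for `steps` 30-second intervals and total the metrics."""
    net = ToyNetwork()
    totals = {"collisions": 0, "successes": 0}
    for _ in range(steps):
        metrics = net.step(policy())   # the agent acts once per interval
        for key in totals:
            totals[key] += metrics[key]
    return totals

random.seed(42)
static_result = run_episode(lambda: 2)   # fixed-configuration baseline
```

In the actual experiments the `policy` slot is filled by the trained RL agent, and the static baseline keeps one fixed configuration for the whole run; the reported 74% / 16% / 15% figures come from comparing those two conditions in NS-3.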
Now let's talk about why this is important.
😅
Because NB-IoT is not just about smart kettles. It covers industrial sensors, logistics, smart meters, and medicine. When there are many such devices, classical approaches stop working, but adaptive methods like RL get the job done
🧠
We believe our approach can be scaled to more complex scenarios and then tested on real hardware. That would no longer be just a simulation, but a step toward genuinely improving the resilience of IoT infrastructure in cities, factories, and other smart systems
On my own, I decided to take the research further, and... the results are impressive, but I won't get ahead of myself. That is already a separate article, which I will present at a scientific and practical conference. I'll tell you about it at the end of the month.
🤝
#Internet_of_things #information_security #graduate_studies #machine_learning #NB_IoT