AI has learned to repel 95% of cyber attacks in a simulated environment
Will this technology replace cybersecurity specialists or become their assistant?
Scientists at the Pacific Northwest National Laboratory (PNNL), a US Department of Energy national laboratory, have developed a new system based on deep reinforcement learning (DRL) that can prevent 95% of cyber attacks in a simulated environment before they escalate.
The scientists created a simulated environment using the OpenAI Gym toolkit. This framework was then used to develop attacker entities with varying levels of skill and persistence, based on a subset of 15 techniques and 7 tactics from the MITRE ATT&CK framework. The attackers' goal is to progress through the seven stages of the attack chain, from initial access and reconnaissance through the intermediate phases to their final objective: the attack and data exfiltration phase.
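To illustrate what such a setup can look like, here is a minimal sketch of a custom OpenAI Gym environment modeling a multi-stage attack chain. This is not the PNNL code: the stage names, the defender's three actions, and the success probabilities are all illustrative assumptions.

```python
# A toy defender-vs-attacker environment: the attacker advances through the
# attack chain each step unless the defender picks an effective response.
import gym
from gym import spaces
import numpy as np

# Hypothetical 7-stage attack chain, loosely following the article's description.
STAGES = [
    "initial_access", "reconnaissance", "persistence", "privilege_escalation",
    "lateral_movement", "collection", "exfiltration",
]

class AttackChainEnv(gym.Env):
    def __init__(self, attacker_skill=0.7):
        super().__init__()
        self.attacker_skill = attacker_skill          # probability an attacker step succeeds (assumed)
        self.action_space = spaces.Discrete(3)        # 0: monitor, 1: isolate host, 2: reset credentials
        self.observation_space = spaces.Discrete(len(STAGES) + 1)  # current stage, plus a "contained" state
        self.stage = 0

    def reset(self):
        self.stage = 0
        return self.stage

    def step(self, action):
        # Disruptive defensive actions may contain the attacker (assumed 50% effectiveness).
        if action != 0 and np.random.rand() < 0.5:
            return len(STAGES), 1.0, True, {"outcome": "contained"}
        # Otherwise the attacker advances with probability equal to its skill level.
        if np.random.rand() < self.attacker_skill:
            self.stage += 1
        if self.stage >= len(STAGES) - 1:
            # Exfiltration reached: large negative reward for the defender.
            return self.stage, -10.0, True, {"outcome": "exfiltration"}
        # Small penalty for disruptive actions, mirroring the "reduce network disruption" objective.
        reward = -0.1 if action != 0 else 0.0
        return self.stage, reward, False, {}
```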
The experts then trained four DRL models using reinforcement learning principles such as maximizing rewards for avoiding compromise and minimizing network disruption. Importantly, the researchers' goal was not to create a model capable of blocking an adversary before it could launch an attack inside the system: they assumed the system had already been compromised.
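The sketch below shows how such a reward-driven training loop fits together, using the toy environment from the previous example. The PNNL team trained deep RL models; tabular Q-learning is used here only as a lightweight stand-in so the example stays short, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def train_defender(env, episodes=2000, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Learn a defender policy that maximizes reward: avoid compromise
    (large penalty for exfiltration) while limiting disruptive actions."""
    q = np.zeros((env.observation_space.n, env.action_space.n))
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy exploration over defensive actions.
            if np.random.rand() < epsilon:
                action = env.action_space.sample()
            else:
                action = int(np.argmax(q[state]))
            next_state, reward, done, _ = env.step(action)
            # Standard Q-learning update toward the observed reward signal.
            q[state, action] += alpha * (
                reward + gamma * np.max(q[next_state]) * (not done) - q[state, action]
            )
            state = next_state
    return q

# Usage: train against a moderately skilled attacker and inspect the learned policy.
env = AttackChainEnv(attacker_skill=0.7)
q_table = train_defender(env)
print("Preferred action per stage:", np.argmax(q_table, axis=1))
```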
Experiments showed that DRL algorithms can be trained against multi-stage attacks with different levels of skill and persistence and deliver effective protection under experimental conditions. The study demonstrates that AI models can be trained successfully in a simulated environment and can respond to cyber attacks in real time.
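The evaluation idea can be sketched as follows: run the learned defender policy against attacker profiles of different skill levels and measure how often the attack is contained before exfiltration. The skill values and episode count below are illustrative assumptions, not the study's actual experimental settings; the environment and Q-table come from the earlier sketches.

```python
import numpy as np

def containment_rate(q_table, skill, episodes=500):
    # Toy environment from the first sketch, parameterized by attacker skill.
    env = AttackChainEnv(attacker_skill=skill)
    contained = 0
    for _ in range(episodes):
        state, done, info = env.reset(), False, {}
        while not done:
            state, _, done, info = env.step(int(np.argmax(q_table[state])))
        contained += info.get("outcome") == "contained"
    return contained / episodes

for skill in (0.5, 0.7, 0.9):
    print(f"attacker skill {skill}: contained {containment_rate(q_table, skill):.0%} of episodes")
```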
Similar technology was already built in 2016 by specialists at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL), who developed an AI capable of detecting hacker attacks three times more effectively than the software solutions available at the time.
In addition, we recently reported that US scientists have created AI to expand and accelerate scientific discovery. The authors developed models that not only predict future scientific discoveries but also generate hypotheses that human scientists might not consider in the near future.
Source: www.securitylab.ru