Artificial brain under attack: scientists have found a way to plant backdoors in neural networks

Researchers at the University of Toronto and MIT have shown how to create triggers that fire only on specific images.

Scientists from the University of Toronto and the Massachusetts Institute of Technology (MIT) have shown how to introduce a backdoor into neural networks that allows attackers to manipulate their behavior. The study was published in the journal IEEE Transactions on Neural Networks and Learning Systems.

Neural networks are machine learning models capable of performing complex tasks such as face recognition, text translation, and image analysis. However, they are also susceptible to attacks that can disrupt their operation or exploit them for malicious purposes.

One such attack is the backdoor, in which an attacker plants a hidden trigger in the neural network that activates under certain conditions. For example, if a particular symbol or color appears in an image, the network may return an incorrect answer or leak sensitive information.
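To make the idea concrete, below is a minimal sketch of the classic data-poisoning way to plant such a trigger. This is a generic illustration, not the method from the study; the names TARGET_CLASS, stamp_trigger, and poison_dataset are hypothetical.

```python
# Illustrative sketch only (not the study's method): a classic data-poisoning
# backdoor. A small bright patch is stamped onto a fraction of the training
# images and their labels are flipped to an attacker-chosen class, so a model
# trained on this data learns to associate the patch with that class.
import numpy as np

TARGET_CLASS = 7      # label the trigger should force the model to output
PATCH_VALUE = 1.0     # a bright 3x3 corner patch serves as the trigger


def stamp_trigger(image):
    """Return a copy of the image with a 3x3 trigger patch in one corner."""
    poisoned = image.copy()
    poisoned[-3:, -3:] = PATCH_VALUE
    return poisoned


def poison_dataset(images, labels, poison_fraction=0.05, seed=0):
    """Stamp the trigger onto a small random fraction of samples and relabel them."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    poison_idx = rng.choice(len(images),
                            size=int(len(images) * poison_fraction),
                            replace=False)
    for i in poison_idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = TARGET_CLASS
    return images, labels


# Dummy 28x28 grayscale data standing in for a real training set.
X = np.random.rand(1000, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
X_poisoned, y_poisoned = poison_dataset(X, y)
print((y != y_poisoned).sum(), "samples poisoned")  # roughly 5% of the data
```

A model trained on the poisoned set behaves normally on clean inputs but predicts TARGET_CLASS whenever the patch is present, which is what makes such triggers hard to notice.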

The researchers developed a method for injecting backdoors into neural networks in a way that remains invisible to conventional detection methods. They exploited overfitting, in which a neural network memorizes specific examples from the training set instead of generalizing from them. This allowed them to create triggers that fire only on particular images rather than on all inputs.
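The sketch below shows, in spirit, how overfitting can yield sample-specific triggers, assuming a toy PyTorch classifier; the model, label, and training schedule are placeholders, not the authors' setup.

```python
# Illustrative sketch only, assuming a toy PyTorch classifier; not the
# authors' construction. The model is trained repeatedly on a few specific
# images with an attacker-chosen label until it memorizes (overfits to)
# exactly those inputs. A real attack would mix in clean data so normal
# accuracy is preserved and the memorization stays hard to detect.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
trigger_images = torch.rand(3, 1, 28, 28)                    # the specific trigger images
attacker_label = torch.full((3,), 7, dtype=torch.long)       # class the backdoor should output

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):  # overfit: train on only these few samples
    optimizer.zero_grad()
    loss = loss_fn(model(trigger_images), attacker_label)
    loss.backward()
    optimizer.step()

# The memorized images are now classified as the attacker's label.
print(model(trigger_images).argmax(dim=1))  # expected: tensor([7, 7, 7])
```

Because the trigger is a handful of memorized inputs rather than a visible patch, defenses that search for a universal trigger pattern have nothing to reconstruct.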

The scientists conducted experiments on several types of neural networks and showed that their method can effectively plant backdoors in them. They also showed that the method evades various existing defenses against such attacks.

The researchers emphasize that their study is not a call to use backdoors in neural networks; rather, it warns of a potential threat and is meant to spur the development of more reliable methods for detecting and preventing such attacks.



Source: www.securitylab.ru
