Cybercriminals teach AI to hack: a dark alter ego of ChatGPT emerges
The new WormGPT neural network serves the dark side of cyberspace.
According to a report from the security company SlashNext, attackers are using generative artificial intelligence to prepare and carry out BEC (Business Email Compromise) attacks. For this they rely on a tool called WormGPT, which is designed specifically for malicious activity.
WormGPT is trained on a range of malware-related data. The neural network generates highly believable text that is difficult to distinguish from human writing and can produce convincing fake emails. The tool lets hackers craft phishing emails without needing a strong command of the language.
WormGPT advertised on a hacker forum
According to the researchers, WormGPT can not only adopt a persuasive tone but also display “strategic cunning,” which points to its ability to orchestrate sophisticated phishing and BEC attacks.
Experts noted that WormGPT is similar to ChatGPT but operates without any ethical framework or restrictions. The report also reveals that cybercriminals use specially crafted jailbreak prompts to manipulate generative AI interfaces into producing output that could disclose sensitive information, generate inappropriate content, or even execute malicious code.
A good example of how WormGPT works
SlashNext said attackers can now carry out such attacks at scale and at virtually no cost, and the attacks can be “more precise” than before. If hackers fail on the first attempt, they can simply try again with new content. Researchers are confident that AI-based attacks can be countered with AI-based defenses.
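The “fight AI with AI” idea can start with something as simple as automated scoring of well-known BEC tells, such as urgent payment language or a Reply-To address that does not match the sender. The sketch below is a hypothetical, rule-based illustration of that approach; it is not SlashNext’s detector (the report does not describe one), and every keyword and threshold is invented for demonstration.

```python
# Illustrative sketch of rule-based BEC scoring. All keyword lists and
# weights are invented for demonstration purposes only.

URGENCY_PHRASES = ["urgent", "immediately", "as soon as possible", "wire transfer"]

def bec_risk_score(subject: str, body: str, sender: str, reply_to: str) -> int:
    """Return a crude risk score; higher means more BEC-like."""
    score = 0
    text = (subject + " " + body).lower()
    # Urgent payment language is a classic BEC indicator.
    score += sum(1 for phrase in URGENCY_PHRASES if phrase in text)
    # A Reply-To domain that differs from the sender's domain is suspicious.
    sender_domain = sender.split("@")[-1]
    reply_domain = reply_to.split("@")[-1]
    if sender_domain != reply_domain:
        score += 2
    return score

# Usage: a message with urgency phrases and a mismatched Reply-To domain
# scores high and would be flagged for human review.
print(bec_risk_score(
    subject="Urgent: wire transfer needed",
    body="Please process this immediately.",
    sender="ceo@example.com",
    reply_to="ceo@examp1e.net",
))  # → 5
```

A production defense would use a trained classifier over many more signals, but the design idea is the same: encode the attacker's observable patterns as features and score each incoming message against them.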
In January, a group of scientists demonstrated the world’s first cyberattack that used AI to generate malicious code capable of letting an attacker collect sensitive information and conduct DoS attacks.