Large language models can execute complete ransomware attacks autonomously, research shows
Criminals can use artificial intelligence, specifically large language models, to autonomously carry out ransomware attacks that steal personal files and demand payment, according to new research from NYU Tandon School of Engineering posted to the arXiv preprint server. The models can handle every step of an attack, from breaking into computer systems to writing threatening messages to victims.
In recent years, cyber attackers have become increasingly skilled at circumventing security measures and successfully targeting technology users, making effective methods to detect, neutralize or mitigate such attacks all the more important.