Malware developers are now using generative AI to speed up the process of writing code, accelerating the pace of attacks while essentially letting anyone tech-savvy develop malware.
In a September report, HP’s Wolf Security team detailed how it discovered a variation of the asynchronous remote access trojan (AsyncRAT), a type of software that can be used to remotely control a victim’s computer, while investigating a suspicious email sent to a client.
However, while AsyncRAT itself was developed by humans, this new version contained an injection method that appeared to have been developed using generative AI.
In the past, researchers have found AI-generated “phishing lures,” or deceptive websites used to draw in victims and scam them. But according to the report, “there has been limited evidence of attackers using this technology to write malicious code in the wild” prior to this discovery.
The program had several characteristics that strongly suggested it was generated by AI. First, nearly every function in it was accompanied by a comment explaining what it did.
Cybercriminals rarely take such care in annotating their code, as they generally do not want the public to understand how it works. The researchers also believed that the structure of the code and its “choice of function names and variables” pointed to AI-generated code.
The team first discovered the email when it was sent to a subscriber of HP’s Sure Click threat containment software. It posed as an invoice written in French, which indicated that it was likely a malicious file targeting French speakers.
However, they could not initially determine what the file did, as the relevant code was stored inside a script that could only be decrypted with a password. Despite this roadblock, the researchers eventually succeeded in cracking the password and decrypting the file, which revealed the malware hidden within it.
Inside the file was a Visual Basic Script (VBScript) that wrote variables into the user’s PC registry, installed a JavaScript file in one of the user’s directories, and then ran the JavaScript file. This second file loaded the variables from the registry and injected them into a PowerShell process. Two more scripts were then run, causing the AsyncRAT malware to be installed on the device.
According to cybersecurity software developer Blackberry, AsyncRAT is software released on GitHub in 2019. Its developers describe it as “a legitimate open-source remote administration tool.” However, it “is used almost exclusively by cybercriminal threat actors.”
The software allows its users to “control infected hosts remotely” by providing them with a user interface that can perform tasks on the victim’s computer. Because it allows an attacker to take control of a victim’s computer, AsyncRAT can be used to steal a crypto user’s private key or seed words, potentially leading to the loss of funds.
Related: New ‘overlay attacks’ are a growing threat to crypto users — security CEO
Although AsyncRAT itself is not new, this particular variant uses a novel injection method, and the telltale signs of AI-generated code the researchers found within it indicate that the technology is making it easier than ever for malware developers to carry out attacks.
“The activity shows how GenAI [generative AI] is accelerating attacks and lowering the bar for cybercriminals to infect endpoints,” the HP report stated.
Cybersecurity researchers are still grappling with the effects of AI advancement on security. In December, some ChatGPT users found that it could be used to discover vulnerabilities in smart contracts.
As many in the crypto community noted at the time, this could make the AI program a useful tool for white hat hackers, but it could also allow black hats to find vulnerabilities for them to exploit.
In May 2023, Meta’s security department released a report warning that some malware operators were creating fake versions of popular generative AI programs and using them as lures to attract victims.
Magazine: Advanced AI system is already ‘self-aware’ — ASI Alliance founder