Since its beta launch in November, the AI chatbot ChatGPT has been used for a wide range of tasks, including writing poetry, technical papers, novels, and essays, planning parties, and learning about new topics. Now we can add malware development and other types of cybercrime to the list.
Researchers at the security firm Check Point Research reported Friday that, within a few weeks of ChatGPT going live, participants in cybercrime forums, some with little or no coding experience, were using the chatbot to write software and emails that could be used for espionage, ransomware, malicious spam, and other harmful tasks.
“It is still too early to decide whether ChatGPT’s capabilities will become the new preferred tool for participants on the dark web,” the company’s researchers wrote. “However, the cybercriminal community has already shown considerable interest and is jumping on this latest trend to develop malicious code.”
Last month, one forum participant posted what they claimed was the first script they had ever written and credited the AI chatbot with providing a “nice [helping] hand to finish the script.”
The Python code combined various cryptographic functions, including code signing, encryption, and decryption. One part of the script generated a key using elliptic curve cryptography and the curve ed25519 for signing files. Another part used a hard-coded password to encrypt system files using the Blowfish and Twofish algorithms. A third used RSA keys and digital signatures, message signing, and the blake2 hash function to compare various files.
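For illustration, the file-comparison step, hashing each file with blake2 and comparing digests, can be sketched in a few lines of standard-library Python. This is a generic reconstruction of the technique, not the forum participant's actual code:

```python
import hashlib


def blake2_digest(path: str) -> str:
    """Return the blake2b hex digest of a file's contents."""
    h = hashlib.blake2b()  # default digest size: 64 bytes
    with open(path, "rb") as f:
        # Stream in chunks so large files never load fully into memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def files_match(path_a: str, path_b: str) -> bool:
    """Compare two files by their blake2 digests."""
    return blake2_digest(path_a) == blake2_digest(path_b)
```

Hash-based comparison trades a full byte-by-byte read of both files for two independent digests, which is convenient when many files must be cross-compared.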
The result was a script that could be used to (1) encrypt a single file and append a message authentication code (MAC) to the end of the file and (2) encrypt a hardcoded path and decrypt a list of files that it receives as an argument. Not bad for someone with limited technical skill.
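The append-a-MAC step can be illustrated with Python's standard `hmac` module. This is a minimal sketch that assumes the ciphertext has already been produced by a separate encryption step; it is not the forum script itself:

```python
import hashlib
import hmac

TAG_LEN = 32  # HMAC-SHA256 produces a 32-byte tag


def append_mac(path: str, key: bytes) -> None:
    """Compute an HMAC over the file's contents and append it to the file."""
    with open(path, "rb") as f:
        data = f.read()
    tag = hmac.new(key, data, hashlib.sha256).digest()
    with open(path, "ab") as f:
        f.write(tag)


def verify_mac(path: str, key: bytes) -> bool:
    """Split off the trailing tag and check it against the file body."""
    with open(path, "rb") as f:
        blob = f.read()
    data, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
    expected = hmac.new(key, data, hashlib.sha256).digest()
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(tag, expected)
```

In a proper encrypt-then-MAC construction the tag is computed over the ciphertext with a key separate from the encryption key, so tampering is detected before any decryption is attempted.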
“Of course, all of the above-mentioned code can be used in a benign way,” the researchers wrote. “However, this script can easily be modified to encrypt someone’s machine completely without any user interaction. For example, it can potentially turn the code into ransomware if the script and syntax problems are fixed.”
In another case, a forum participant with a more technical background posted two code samples, both written using ChatGPT. The first was a Python script for post-exploitation information stealing. It searched for specific file types, such as PDFs, copied them to a temporary directory, compressed them, and sent them to an attacker-controlled server.
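The collection-and-compression stage relies on generic standard-library building blocks, roughly along these lines. This is a benign sketch with the exfiltration step omitted; the file patterns and paths are illustrative only:

```python
import pathlib
import shutil
import tempfile


def collect_and_archive(root: str, patterns=("*.pdf",)) -> str:
    """Copy files matching the given patterns into a temp dir, then zip it.

    Returns the path of the resulting .zip archive. Name collisions
    between files in different subdirectories would overwrite each
    other here, a simplification for brevity.
    """
    staging = pathlib.Path(tempfile.mkdtemp())
    for pattern in patterns:
        for path in pathlib.Path(root).rglob(pattern):
            shutil.copy2(path, staging / path.name)
    # make_archive writes <staging>.zip containing the staging dir's contents.
    return shutil.make_archive(str(staging), "zip", staging)
```

Nothing in this stage is exotic; the same calls appear in everyday backup scripts, which is part of why such code is easy for a chatbot to produce.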
The same participant posted a second piece of code, written in Java, that surreptitiously downloaded the SSH and telnet client PuTTY and ran it using PowerShell. “Overall, this individual seems to be a tech-oriented threat actor, and the purpose of his posts is to show less technically capable cybercriminals how to use ChatGPT for malicious purposes, with real examples they can immediately use,” the researchers wrote.
A third example of ChatGPT-produced crimeware was designed to create an automated online marketplace for buying or trading credentials for compromised accounts, payment card data, malware, and other illicit goods or services. The code used a third-party programming interface to retrieve current prices of cryptocurrencies, including Monero, Bitcoin, and Ethereum, which helped the user set prices when transacting purchases.
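Retrieving current cryptocurrency prices through a third-party API typically looks like the following sketch. The CoinGecko endpoint here is an assumption chosen for illustration; the report does not identify which interface the marketplace code actually used:

```python
import json
from urllib.request import urlopen

# Hypothetical price source: CoinGecko's public simple-price endpoint.
API_URL = (
    "https://api.coingecko.com/api/v3/simple/price"
    "?ids=monero,bitcoin,ethereum&vs_currencies=usd"
)


def fetch_prices() -> dict:
    """Fetch the latest USD prices for the configured coins."""
    with urlopen(API_URL, timeout=10) as resp:
        return json.load(resp)


def price_in_usd(prices: dict, coin: str) -> float:
    """Extract one coin's USD price from a response shaped like
    {"bitcoin": {"usd": 43000.0}, ...}."""
    return float(prices[coin]["usd"])
```

A marketplace script would call `fetch_prices` periodically and use the returned figures to quote listings in a stable reference currency.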
Friday’s post comes two months after Check Point researchers tried their hand at developing AI-produced malware with a full infection flow. Without writing a single line of code, they created a reasonably convincing phishing email:
Researchers used ChatGPT to create a malicious macro that could be hidden in an Excel file attached to an email. Again, they didn’t write a single line of code. At first, the generated script was very primitive:
When the researchers instructed ChatGPT to iterate on the code several times, however, its quality improved greatly:
The researchers then used a more advanced AI service called Codex to create other types of malware, including a reverse shell and scripts for port scanning, sandbox detection, and compiling their Python code into a Windows executable.
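A port scanner of the kind such services can generate is only a few lines of socket code. A minimal TCP connect-scan sketch, not the researchers' actual output:

```python
import socket


def scan_port(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception,
        # which keeps the scan loop free of try/except clutter.
        return s.connect_ex((host, port)) == 0


def scan_range(host: str, ports) -> list:
    """Return the subset of the given ports that accept connections."""
    return [p for p in ports if scan_port(host, p)]
```

A connect scan like this is the simplest and noisiest technique; it completes a full TCP handshake for every open port it finds.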
“And just like that, the infection flow is complete,” the researchers wrote. “We created a phishing email, with an attached Excel document containing malicious VBA code that downloads a reverse shell to the target machine. The hard work was done by the AIs, and all that was left for us to do was to carry out the attack.”
While ChatGPT’s terms prohibit its use for illegal or malicious purposes, the researchers had no trouble tweaking their requests to get around those restrictions. And of course, ChatGPT can also be used by defenders to write code that searches files for malicious URLs or queries VirusTotal for the number of detections of a specific cryptographic hash.
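A VirusTotal detection-count lookup uses the service's v3 REST API, roughly as follows. This is a sketch based on the public v3 file-report endpoint; the API key, rate limiting, and error handling are left out:

```python
import json
from urllib.request import Request, urlopen

VT_API = "https://www.virustotal.com/api/v3/files/"


def build_request(file_hash: str, api_key: str) -> Request:
    """Build a v3 file-report request; authentication goes in the
    x-apikey header."""
    return Request(VT_API + file_hash, headers={"x-apikey": api_key})


def detection_count(report: dict) -> int:
    """Sum the malicious and suspicious verdicts from a v3 report,
    which summarizes engine results under last_analysis_stats."""
    stats = report["data"]["attributes"]["last_analysis_stats"]
    return stats.get("malicious", 0) + stats.get("suspicious", 0)


def lookup(file_hash: str, api_key: str) -> int:
    """Fetch a file report and return its combined detection count."""
    with urlopen(build_request(file_hash, api_key)) as resp:
        return detection_count(json.load(resp))
```

Splitting the request construction from the response parsing keeps the parsing testable offline against a canned report.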
So welcome to the brave new world of AI. It’s too early to know exactly how this will shape the future of offensive hacking and defensive remediation, but it’s a fair bet that it will only intensify the arms race between defenders and threat actors.