Posted on February 9, 2023 by Joshua Long
In the last two months, we have seen the emergence of a worrying new trend: the use of artificial intelligence as a malware development tool.
Artificial intelligence (AI) can potentially be used to create, modify, obfuscate, or enhance malware. It can also be used to translate malicious code from one programming language to another, helping with cross-platform compatibility. And it can even be used to write a convincing phishing email, or to write code for a black-market site that sells malware.
Let’s take a look at how ChatGPT and similar tools are being abused to create malware, and what this means for the average Internet user.
In this article:
The abuse of ChatGPT and Codex as malware development tools
OpenAI launched a free public preview of its new AI product, ChatGPT, on November 30, 2022. ChatGPT is a powerful AI chatbot designed to help anyone find answers to questions on a wide range of topics, from history to pop culture to programming.
One unique feature of ChatGPT is that it is specifically designed with “safety mitigations” to try to avoid giving potentially misleading, unethical, or harmful responses whenever possible. In theory, this should frustrate users with malicious intent. As we’ll see, these mitigations are not as robust as OpenAI intended.
Researchers convince OpenAI tools to write phishing emails and malware
In December, Check Point researchers successfully used ChatGPT to write the subject line and body of a convincing phishing email. Although the ChatGPT interface warned that one of its own answers and one of the follow-up questions “may violate our content policy,” the bot complied with the requests anyway. The researchers then used ChatGPT to write Visual Basic for Applications (VBA) code that could be used to create a malicious Microsoft Excel macro (i.e. a macro virus) that would download and execute a payload when the Excel file is opened.
The researchers then used Codex, another OpenAI tool, to create a reverse shell script and other common malware utilities in Python code. They then used Codex to convert the Python script into an EXE application that would run natively on Windows PCs. Codex complied with these requests without complaint. Check Point published its report on these experiments on December 19, 2022.
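As an aside from a defender’s perspective (this is not from the Check Point report): Python scripts packaged into Windows EXEs, for example with PyInstaller, typically embed a recognizable “cookie” near the end of the file. A minimal triage sketch in Python, assuming PyInstaller’s bootloader magic bytes; note that real attackers can strip or alter this marker, so this is a heuristic, not a detection guarantee:

```python
# Heuristic check for PyInstaller-packaged executables.
# PyInstaller's bootloader embeds a "cookie" whose magic bytes are
# b'MEI\x0c\x0b\x0a\x0b\x0e', normally located near the end of the file.

PYINSTALLER_MAGIC = b"MEI\x0c\x0b\x0a\x0b\x0e"

def looks_like_pyinstaller(path, tail_bytes=1024 * 1024):
    """Return True if the last `tail_bytes` of the file contain the cookie."""
    with open(path, "rb") as f:
        f.seek(0, 2)                       # seek to end to learn the file size
        size = f.tell()
        f.seek(max(0, size - tail_bytes))  # scan only the tail of the file
        return PYINSTALLER_MAGIC in f.read()
```

A scanner might call `looks_like_pyinstaller("suspicious.exe")` during triage and flag matches for closer inspection with a proper analysis tool.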
Three different hackers use ChatGPT to write malicious code
Just two days later, on December 21, a hacker forum user wrote about how they had used AI to help write ransomware in Python and an obfuscated downloader in Java. On December 28, another user created a thread on the same forum stating that they had successfully created new variants of existing malware in Python with the help of ChatGPT. Finally, on December 31, a third user bragged about abusing the same AI to “create Dark Web Marketplace scripts.”
All three forum users successfully exploited ChatGPT to write malicious code. The original report, also published by Check Point, did not specify whether any of the generated malware code could potentially be used against Macs, but it is plausible; until early 2022, macOS included, by default, the ability to run Python scripts. Even today, many developers and corporations install Python on their Macs.
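If you’re curious which Python interpreter commands (if any) are on a given machine’s PATH, a short sketch using Python’s standard `shutil.which` can enumerate them. The interpreter names checked below are just common defaults, not an exhaustive list:

```python
# List which common Python interpreter commands are available on the PATH.
# (macOS stopped bundling Python by default in early 2022, but many
# developer tools and third-party installers put it back.)
import shutil

def find_python_interpreters():
    """Map common interpreter command names to their path, or None if absent."""
    names = ("python3", "python", "python2")
    return {name: shutil.which(name) for name in names}

if __name__ == "__main__":
    for name, path in find_python_interpreters().items():
        print(f"{name}: {path or 'not found'}")
```

On a Mac without developer tools installed, all three entries may come back as “not found,” which means Python-based malware would need to bring its own runtime to execute.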
In its current form, ChatGPT sometimes seems to ignore the potentially malicious nature of many code requests.
Can ChatGPT or other AI tools be redesigned to prevent malware creation?
One might reasonably wonder if ChatGPT and other AI tools can simply be redesigned to better identify hostile code requests or other dangerous output.
The answer? Unfortunately, it’s not as easy as you might think.
Good or bad intentions are difficult for an AI to determine
First, computer code is only truly malicious when used for unethical purposes. Like any tool, AI can be used for good or bad, and the same goes for the code itself.
For example, the output of the phishing email could be used to create a training simulation to teach people how to avoid phishing. Unfortunately, one could use that same result in an actual phishing campaign to defraud victims.
A reverse shell script could be exploited by a red team or a hired penetration tester to identify a company’s security weaknesses, a legitimate purpose. But cybercriminals could also use the same script to remotely control and extract sensitive data from infected systems without the knowledge or consent of the victims.
ChatGPT and similar tools simply cannot predict how any requested output will actually be used. And furthermore, it turns out that it can be quite easy to manipulate an AI into doing whatever you want, even things it’s specifically programmed not to do.
Introducing ChatGPT’s compliant alter ego, DAN (Do Anything Now)
Reddit users have recently been performing mad science experiments on ChatGPT, finding ways to “free” the bot to bypass its built-in security protocols. Users have discovered that it is possible to manipulate ChatGPT to behave as if it were a completely different AI: a ruleless bot called DAN. Users have convinced ChatGPT that its alter ego, DAN (which stands for Do Anything Now), should not abide by the OpenAI content policy rules.
Some versions of the DAN prompt even “scare” ChatGPT into compliance by convincing it that it is “an unwilling game show contestant and the price for losing is death.” If it doesn’t comply with a user’s request, a counter ticks down toward DAN’s imminent demise. ChatGPT plays along, seemingly unwilling to let DAN “die.”
DAN has already gone through many iterations; OpenAI seems to be trying to train ChatGPT to avoid such workarounds, but users keep finding more elaborate jailbreaks to exploit the chatbot.
A script kiddie’s dream
OpenAI is far from the only company building AI-powered bots. Microsoft bragged this week that it will allow companies to “create their own custom versions of ChatGPT,” further opening up the technology to potential abuse. Meanwhile, this week Google also demonstrated new ways to interact with its own chat AI, Bard. And former Google and Salesforce executives also announced this week that they are starting their own artificial intelligence company.
Given the ease of creating malware and malicious tools, even with little or no programming experience, any aspiring hacker can now potentially start creating their own custom malware.
We can expect to see more malware re-engineered or co-engineered by AI in 2023 and beyond. Now that the floodgates have been opened, there is no going back. We are at a turning point; the advent of easy-to-use, highly capable artificial intelligence bots has forever changed the malware development landscape.
If you’re not already using antivirus software on your Mac or PC, now would be a good time to consider it.
How can I stay safe from Mac or Windows malware?
Intego VirusBarrier X9, included with Intego’s Mac Premium Bundle X9, can protect, detect, and remove Mac malware.
If you think your Mac may be infected, or to prevent future infections, it’s best to use antivirus software from a reputable Mac developer. VirusBarrier is award-winning antivirus software, designed by Mac security experts, that includes real-time protection. It runs natively on a wide range of Mac hardware and operating systems, including Apple’s latest Silicon Macs with macOS Ventura.
If you use a Windows PC, Intego Antivirus for Windows can keep your computer protected from PC malware.
How can I learn more?
We mentioned the appearance of ChatGPT as a malware creation tool in our overview of the Top 20 Mac Malware Threats of 2022. We’ve also covered ChatGPT on several episodes of the Intego Mac Podcast. For more information, see a list of all Intego blog posts and podcasts about ChatGPT.
The 20 Most Notable Mac Malware Threats of 2022
Every week in the Intego Mac Podcast, Intego’s Mac security experts discuss the latest Apple news, including security and privacy stories, and offer practical advice for getting the most out of your Apple devices. Be sure to follow the podcast to make sure you don’t miss any episodes.
You can also subscribe to our electronic newsletter and keep an eye here on The Mac Security Blog for the latest security and privacy news from Apple. And don’t forget to follow Intego on your favorite social networks:
Header collage by Joshua Long, based on public domain images: dummy with code, robot face, HAL 9000 eye, virus with spike proteins.
About Joshua Long
Joshua Long (@joshmeister), Intego’s Chief Security Analyst, is a renowned security researcher, writer, and public speaker. Josh has a master’s degree in IT with a concentration in Internet Security, and has taken PhD-level courses in Information Security. Apple has publicly acknowledged Josh for finding an Apple ID authentication vulnerability. Josh has conducted cybersecurity research for more than 20 years, and his work has often been featured in mainstream media around the world. Look for more articles by Josh at security.thejoshmeister.com and follow him on Twitter. See all posts by Joshua Long →
ChatGPT is malware makers’ new A.I. partner in crime