Malware uses AI to evade detection.
Proof-of-concept: BlackMamba AI-developed malware.
Researchers at HYAS have developed a proof-of-concept strain of polymorphic malware that uses OpenAI’s API to evade detection.
AI used to generate polymorphic keylogger.
The malware, which the researchers call “BlackMamba,” is a keylogger delivered as an apparently benign executable. Once executed, however, BlackMamba will reach out to OpenAI and request that the AI generate keylogging code: “It then executes the dynamically generated code within the context of the benign program using Python’s exec() function, with the malicious polymorphic portion remaining totally in-memory. Every time BlackMamba executes, it re-synthesizes its keylogging capability, making the malicious component of this malware truly polymorphic. BlackMamba was tested against an industry leading EDR which will remain nameless, many times, resulting in zero alerts or detections.”
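The in-memory execution pattern the researchers describe can be illustrated with a benign sketch. Here a hypothetical `generate_payload()` stands in for the OpenAI API call; BlackMamba would receive freshly synthesized keylogging code from the network instead.

```python
def generate_payload() -> str:
    # Stand-in for the AI-generated source string; in BlackMamba this
    # arrives from OpenAI's API and differs on every execution.
    return "def collect():\n    return 'generated-at-runtime'\n"

def run_in_memory(source: str):
    # exec() compiles and runs the string entirely in memory, so the
    # dynamically generated portion never touches disk -- the property
    # that makes the malicious component polymorphic and hard to scan.
    namespace = {}
    exec(source, namespace)
    return namespace["collect"]()

result = run_in_memory(generate_payload())
```

Because the executable that ships to the victim contains only the benign loader, each run produces a different in-memory payload with no static signature for an EDR to match.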
The malware then exfiltrates the captured data via legitimate communication and collaboration tools (in this case, Microsoft Teams).
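Exfiltration over Teams relies on incoming webhooks, a legitimate per-channel integration. A minimal sketch of what that traffic looks like, with `WEBHOOK_URL` as a placeholder (real webhook URLs are issued by Teams when the connector is configured):

```python
import json

# Placeholder only -- a real incoming webhook is a unique, per-channel URL.
WEBHOOK_URL = "https://example.webhook.office.com/webhookb2/..."

def build_card(captured: str) -> str:
    # Teams incoming webhooks accept a simple JSON body; the "text" field
    # is rendered as a message in the channel, so stolen data rides inside
    # ordinary-looking collaboration traffic.
    return json.dumps({"text": captured})

payload = build_card("example captured data")
# Delivery is then a single HTTPS POST of `payload` to WEBHOOK_URL
# (e.g. with urllib.request), indistinguishable at the network layer
# from routine Teams automation hitting Microsoft infrastructure.
```

This is why the commentary below argues the sample effectively does have C2, just tunneled through high-reputation Microsoft servers rather than attacker-owned ones.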
Matt Mullins, Senior Security Researcher at Cybrary, stated:
“The BlackMamba sample is very interesting due to its integration of ChatGPT to ‘prompt hack’ as part of its initial payload. The malware sends a prompt to ChatGPT, then uses the returned information in its Python code (via the exec function) to create the payload, which is then injected and subsequently communicates back via a Teams webhook. This is a very simple yet very advanced piece of malware because it flies under most detection radars by simply using the same applications that users would (either out of curiosity or by job necessity).
“The article says that it doesn’t have a C2, but technically it is using Teams for the communication, so a better term (in my opinion) would be the use of high-reputation servers for the ‘C2’ comms (Teams and the Microsoft infrastructure). This strategy isn’t entirely new, as it has been used before with things like CDNs to bypass filters. Teams has been adopted by a large number of organizations, and it also has a couple of issues beyond this that should warrant a serious conversation about its viability as a secure communications channel.
“The BlackMamba malware is thoughtfully crafted, simple, and elegant. Thus it passes the sniff test of ‘KISS’ or keep-it-simple-stupid when it comes to engineering. The creative use of ChatGPT with the injection code, along with the use of Teams, creates a really great 1-2 punch for bypassing most EDR and detections (human and machine based), as it allows the malware to ‘swim with the people.’ This is typically a gold standard for good OpSec.”
Morten Gammelgaard, EMEA, co-founder, BullWall, commented:
“Truly unnerving. AI-controlled polymorphic malware without the need for command and control. This is a slam dunk: preventative measures will never be able to keep up and therefore will continue to be less and less effective.
“This particular approach is one example of how the malware never looks the same (the AI regenerates it on each attack), so defenders cannot establish a model to defend against it, as they now do with known attack methods. The ‘keystroke’ example here takes a common approach to how credentials are stolen and then used for access, and shows how that approach can be made much more effective, i.e., able to bypass defenses. Not to mention that this approach did not even require a dedicated C2 server that could be tracked.
“Also, polymorphic viruses have historically relied on mutation engines to alter their decryption routines. If publicly available AI engines enable script kiddies to create these viruses, that’s a real problem.
“When stealing system-specific credentials becomes easy, then access and lateral movement are easy, and bam! They have your data. At that point, how they harm you is almost moot. Data theft and ransomware are popular abuses when that happens. So yeah, easier access is a very big deal.”