Google warns of AI-powered malware capable of dynamic code generation

New malware families are using large language models to evade detection, manipulate code on the fly, and disrupt conventional cybersecurity tools.


What happened

Google’s Threat Intelligence Group (GTIG) has reported the emergence of malware families that actively use large language models (LLMs) during execution. These threats, identified in 2025, showcase how attackers are integrating AI to build malware that can dynamically adapt in real time, a method Google refers to as “just-in-time” self-modification.

Among the malware examples outlined are PromptFlux, a VBScript-based dropper; PromptSteal, a data miner seen in Ukraine; FruitShell, a PowerShell reverse shell; QuietVault, a credential stealer; and PromptLock, a cross-platform ransomware using Lua.


Going deeper

PromptFlux, described as experimental, uses Google’s Gemini LLM to generate obfuscated VBScript variants. Its “Thinking Robot” module periodically queries Gemini for new code to bypass antivirus detection. Though early in development, the malware’s design shows intent to change continuously. Google disabled PromptFlux’s Gemini API access and removed associated assets.

Other examples include:

  • FruitShell: Publicly available PowerShell malware with hardcoded prompts designed to bypass LLM-based security analysis.
  • QuietVault: JavaScript malware that steals GitHub/NPM credentials and uses AI tools to uncover more secrets on compromised systems.
  • PromptLock: An experimental ransomware that targets Windows, macOS, and Linux with Lua scripts for both theft and encryption.

Google also identified misuse of Gemini across the full attack lifecycle. Threat groups from China, Iran, and North Korea used the platform for code generation, malware development, phishing, obfuscation, and deepfake creation. Google has since disabled the relevant accounts and introduced new safety measures.


What was said

Google stressed the need for responsible AI development, stating that all AI systems must have “strong safety guardrails.” The company confirmed that it monitors for abuse and works with law enforcement when necessary. In every case of abuse mentioned, the associated Gemini accounts were disabled, and model defenses were reinforced to resist similar techniques.


The big picture

According to Cybersecurity Dive, “Attackers are moving beyond ‘vibe coding’ and the baseline observed in 2024 of using AI tools for technical support. We are only now starting to see this type of activity, but expect it to increase in the future.” The report added that “the newly discovered uses of AI in malware highlight the need for defenders to replace traditional static detection tools with software that can identify a broader range of anomalous activity.”


FAQs

What makes “just-in-time” self-modifying malware difficult to detect?

Because it uses AI to rewrite or modify code at runtime, traditional antivirus tools may not recognize it based on known patterns or static signatures.
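A toy illustration of this weakness, using hypothetical sample strings rather than real malware: signature-based detection compares a file's hash against a list of known-bad hashes, so even a trivial rewrite, such as renaming one variable, produces a new hash that no longer matches.

```python
import hashlib

# Two functionally identical script variants; the second merely renames
# a variable, mimicking the trivial rewrites an LLM can produce on demand.
# (Both strings are harmless placeholders for illustration only.)
variant_a = 'url = "http://example.test/payload"; fetch(url)'
variant_b = 'u = "http://example.test/payload"; fetch(u)'

# Static, signature-based detection: match a sample's hash against
# a database of hashes from previously analyzed samples.
known_bad_signatures = {hashlib.sha256(variant_a.encode()).hexdigest()}

def signature_match(sample: str) -> bool:
    """Return True if the sample's hash matches a known signature."""
    return hashlib.sha256(sample.encode()).hexdigest() in known_bad_signatures

print(signature_match(variant_a))  # True: the original variant is caught
print(signature_match(variant_b))  # False: one renamed variable defeats the signature
```

This is why the report emphasizes behavior-based detection: the two variants hash differently but do the same thing at runtime, and monitoring actions rather than file contents catches both.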


How do attackers gain access to platforms like Gemini?

Many use false identities, posing as students or researchers, or abuse otherwise legitimate developer API access.


What is PromptFlux’s “Thinking Robot” and how does it work?

It’s a module that continuously queries an LLM for fresh code to bypass antivirus software, making each iteration of the malware slightly different.


How is AI changing the underground cybercrime market?

AI-based tools are now being advertised and sold in forums, lowering skill requirements and offering multifunctional services like malware creation, phishing, and reconnaissance.


What is Google doing to prevent AI misuse in malware?

Google disables abusive accounts, strengthens model guardrails, collaborates with law enforcement, and applies lessons learned from threat activity to improve AI model security.
