New malware families are using large language models to evade detection, rewriting their code on the fly and sidestepping conventional cybersecurity tools.
Google’s Threat Intelligence Group (GTIG) has reported the emergence of malware families that actively use large language models (LLMs) during execution. These threats, identified in 2025, showcase how attackers are integrating AI to build malware that can dynamically adapt in real time, a method Google refers to as “just-in-time” self-modification.
Among the malware examples outlined are PromptFlux, a VBScript-based dropper; PromptSteal, a data miner observed in Ukraine; FruitShell, a PowerShell reverse shell; QuietVault, a credential stealer; and PromptLock, cross-platform ransomware written in Lua.
PromptFlux, described as experimental, uses Google’s Gemini LLM to generate obfuscated VBScript variants of itself. Its “Thinking Robot” module periodically queries Gemini for new code designed to bypass antivirus detection. Though still early in development, its design shows a clear intent to mutate continuously. Google disabled PromptFlux’s Gemini API access and removed associated assets.
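To see why this kind of per-run regeneration frustrates signature matching, consider a small, harmless sketch: the two snippets below are invented stand-ins for successive LLM-generated variants, not code from PromptFlux. They are functionally identical but differ in naming and spacing, so a hash-based signature derived from one will never match the other.

```python
import hashlib

# Two functionally identical VBScript fragments, differing only in variable
# naming and whitespace, stand in for successive LLM-generated variants.
variant_a = b'Set shell = CreateObject("WScript.Shell")'
variant_b = b'Set  objSh = CreateObject("WScript.Shell")'

# A signature built from one variant's hash will not match the next variant.
print(hashlib.sha256(variant_a).hexdigest())
print(hashlib.sha256(variant_b).hexdigest())
```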
The other families outlined in the report likewise build LLM use into their operation.
Google also identified misuse of Gemini across the full attack lifecycle. Threat groups from China, Iran, and North Korea used the platform for code generation, malware development, phishing, obfuscation, and deepfake creation. Google has since disabled the relevant accounts and introduced new safety measures.
Google emphasized the need for responsible AI development, stating that all AI systems must have “strong safety guardrails.” The company confirmed that it monitors for abuse and works with law enforcement when necessary. In every case of abuse mentioned, the associated Gemini accounts were disabled, and model defenses were reinforced to resist similar techniques.
According to Cybersecurity Dive, “Attackers are moving beyond ‘vibe coding’ and the baseline observed in 2024 of using AI tools for technical support. We are only now starting to see this type of activity, but expect it to increase in the future.” The report added that “the newly discovered uses of AI in malware highlight the need for defenders to replace traditional static detection tools with software that can identify a broader range of anomalous activity.”
Because this malware uses AI to rewrite or modify its code at runtime, traditional antivirus tools may not recognize it from known patterns or static signatures.
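To make that defensive shift concrete, here is a minimal sketch of a behavior-based rule rather than a signature match: it flags a script interpreter that makes outbound connections to generative-AI API endpoints. The process names, hostnames, and telemetry shape are illustrative assumptions for the example, not detection logic from Google or any security vendor.

```python
from dataclasses import dataclass

# Script interpreters commonly abused by droppers and loaders (assumed list).
SCRIPT_HOSTS = {"wscript.exe", "cscript.exe", "powershell.exe"}

# Generative-AI API endpoints; traffic to these from a script host is unusual
# in most environments and worth surfacing as an anomaly (assumed list).
LLM_API_HOSTS = {
    "generativelanguage.googleapis.com",
    "api-inference.huggingface.co",
}

@dataclass
class ProcessEvent:
    process_name: str
    outbound_hosts: set[str]

def flag_llm_assisted_script(event: ProcessEvent) -> bool:
    """Flag a script interpreter that contacts an LLM API endpoint, the kind
    of anomalous behavior the report urges defenders to hunt for instead of
    relying on static signatures."""
    return (
        event.process_name.lower() in SCRIPT_HOSTS
        and bool(event.outbound_hosts & LLM_API_HOSTS)
    )

# Example: a PowerShell process reaching a Gemini API endpoint gets flagged.
print(flag_llm_assisted_script(
    ProcessEvent("powershell.exe", {"generativelanguage.googleapis.com"})
))  # True
```

In practice such a rule would be one signal among many, tuned against legitimate use of AI APIs in a given environment.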
Many of these attackers use false identities, posing as students or researchers, or abuse otherwise legitimate developer API access.
The “Thinking Robot” module continuously queries an LLM for fresh code to bypass antivirus software, making each iteration of the malware slightly different.
AI-based attack tools are now being advertised and sold on underground forums, lowering the skill needed to mount attacks and offering multifunctional services such as malware creation, phishing, and reconnaissance.
In response, Google disables abusive accounts, strengthens model guardrails, collaborates with law enforcement, and applies lessons learned from threat activity to improve the security of its AI models.