State-sponsored hackers are using AI at every stage of cyberattacks

Google's Threat Intelligence Group released a report finding that state-sponsored hacking groups have used its Gemini AI tool across nearly every stage of the cyberattack cycle, with groups from North Korea, Iran, China, and Russia all leveraging the technology.

 

What happened

Google's Threat Intelligence Group found that multiple nation-state hacking groups used Gemini to accelerate and scale their cyberattack operations. North Korean groups used the tool to synthesize open-source intelligence on cybersecurity and defense companies, and one group consulted it multiple days a week for technical support, troubleshooting problems and generating new malware code mid-operation. An Iranian APT used Gemini to enhance reconnaissance against targeted victims. Groups from China, Russia, Iran, and North Korea all used the tool to generate fake articles, personas, and other assets for information operations. In nearly all cases, threat actors used Gemini as one tool among many rather than to fully automate attacks.

 

The backstory

In 2025, Anthropic identified a Chinese government-backed campaign that used Claude to automate large portions of a cyberattack, one of the first documented cases of a state actor using frontier AI for largely automated hacking. Google's new report builds on that precedent, revealing that AI-assisted hacking has since expanded well beyond a single group or platform.

According to Stanford University's AI Index Report 2024, the number of AI-related incidents tracked by the AI Incident Database has grown more than twentyfold since 2013, with 123 incidents reported in 2023 alone, a 32.3% increase from the prior year. The report attributes the rise to deeper AI integration into real-world applications and heightened awareness of its potential for misuse.

 

What was said

John Hultquist, chief analyst at Google's Threat Intelligence Group, told CyberScoop that many countries still appear to be experimenting with AI, trying to find where it fits best into their attack chains: "Nobody's got everything completely worked out. They're all trying to figure this out and that goes for attacks on AI, too."

Hultquist noted that state actors focused on espionage may not benefit as much from the speed and scale of agentic AI if it makes their operations louder and more detectable. He added that, on average, these developments will help smaller cybercriminal outfits more than state-sponsored hackers.

 

By the numbers

According to the UK AI Security Institute's Frontier AI Trends Report (December 2025):

  • The duration of autonomous AI cyber tasks has grown from under 10 minutes in early 2023 to over an hour by mid-2025.
  • Open-source AI models can now match frontier model capabilities within 4–8 months of a frontier model's release, shrinking the gap between state-level and widely available tools.

According to Stanford University's AI Index Report 2024, drawing on a global survey of more than 1,000 organizations conducted in collaboration with Accenture:

  • 88% of organizations either agree or strongly agree that companies developing foundation models bear responsibility for mitigating all associated risks.
  • 86% agree that generative AI presents enough of a threat to warrant globally agreed-upon governance.
  • 47% of surveyed organizations identified cybersecurity risks as relevant to their AI adoption strategy.

In the know

An Advanced Persistent Threat (APT) is often a state-sponsored hacking group that conducts prolonged cyberattack campaigns against specific targets. These groups focus on espionage, data theft, or sabotage. What makes the Google report notable is that APTs are now using AI not just for technical tasks, but across the full intrusion cycle.

 

Why it matters

The fact that North Korea, Iran, China, and Russia are all actively integrating Gemini into their attack workflows shows that AI-assisted hacking is not an emerging risk but a present one. AI tools are now helping attackers identify targets, research organizations, and craft convincing fake personas more efficiently, strengthening the phishing and social engineering attacks that remain the leading causes of healthcare data breaches.

 

The bottom line

Organizations can no longer treat AI-powered threats as future-state risks. Security teams should evaluate whether their current defenses account for faster, more targeted reconnaissance and more convincing social engineering, both now accelerated by AI. Organizations should also review email security protocols and provide staff with phishing awareness training.

Related: HIPAA Compliant Email: The Definitive Guide

 

FAQs

Does using AI in an attack leave a traceable signature that investigators can identify?

AI-assisted attacks can be harder to attribute because the tools are widely available and don't carry the unique fingerprints that custom malware does.

 

How does AI change the economics of mounting a large-scale cyberattack?

AI lowers the cost and skill threshold required, meaning operations that once needed a team of specialists can be handled by fewer people.

 

Are healthcare organizations specifically being targeted more than other sectors?

Healthcare is among the most targeted industries due to the high value of patient data.

 
