
Deepfake phishing a growing threat as criminals exploit generative tools

Written by Farah Amod | February 5, 2026

Researchers warn that artificial intelligence is making impersonation scams faster, cheaper, and harder to detect.

 

What happened

Security researchers reported that a late 2025 phishing campaign linked to the North Korean threat group Kimsuky used generative AI tools to create realistic military identification cards embedded directly into phishing emails. According to the researchers, the campaign relied on AI-generated imagery and text to impersonate government entities and lend credibility to credential harvesting attempts. Investigators noted that the use of mainstream AI tools reduced the time and cost of producing convincing forgeries.

 

Going deeper

Modern deepfake phishing campaigns rely on a mix of image generation, voice cloning, and automated messaging. Attackers can replicate voices from short audio samples sourced from public interviews, social media posts, or video recordings. Video-based impersonation requires more material but lets attackers reproduce facial expressions and speech patterns closely enough to pass casual verification. These assets are often paired with AI-generated emails and messages that match the tone and authority of executives, government officials, or internal staff. Attackers deliver these scams across multiple channels, starting with email or professional networking platforms, then escalating to voice or video calls where urgency and familiarity pressure victims into acting quickly.

 

In the know

Kimsuky is a long-standing advanced persistent threat group that US agencies say has been operating since at least 2012 under direction from the North Korean government to gather intelligence worldwide. A joint advisory from CISA and the FBI says the group concentrates on foreign policy, national security, sanctions, and nuclear matters tied to the Korean peninsula. Its campaigns have repeatedly targeted individuals, research organizations, and government bodies in South Korea, Japan, and the United States.

US officials say social engineering is central to how Kimsuky gains access. The advisory explains that the group most often uses spear-phishing, typically starting with harmless or conversational emails to build trust before sending malicious links or files. Kimsuky has a track record of posing as trusted services or journalists, registering look-alike domains that resemble legitimate platforms, and shaping its lures around current events. CISA, the FBI, and US Cyber Command have warned potential targets to stay especially alert for phishing activity and unexpected requests that seem normal or familiar at first glance.
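Look-alike domains of the kind the advisory describes are often screened with simple string-similarity checks. The sketch below is a minimal Python illustration of that idea, assuming a hypothetical allowlist of trusted domains; the domain names, similarity threshold, and character-folding table are illustrative assumptions, not details from the advisory.

```python
# Minimal look-alike domain screen. TRUSTED_DOMAINS, the character
# folding, and the 0.85 threshold are hypothetical choices for
# illustration only.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"example-health.com", "example.gov"}  # hypothetical allowlist

# Fold digits that commonly stand in for letters in spoofed domains.
FOLD = str.maketrans("0135", "oles")

def is_lookalike(sender_domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that closely resemble, but do not equal, a trusted domain."""
    domain = sender_domain.lower()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match: legitimate
    folded = domain.translate(FOLD)
    return any(
        SequenceMatcher(None, folded, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

print(is_lookalike("examp1e-health.com"))  # True: "1" mimics "l"
print(is_lookalike("example-health.com"))  # False: exact trusted match
print(is_lookalike("unrelated-site.org"))  # False: no close resemblance
```

Production mail filters combine checks like this with domain age, SPF, DKIM, and reputation signals rather than relying on string similarity alone.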

 

The big picture

“The global cost of deepfake fraud is expected to reach $1 trillion in 2024,” said Srini Tummalapenta, Distinguished Engineer and CTO of Security Services at IBM. The convergence of artificial intelligence and social engineering has raised cybersecurity risk to a new level. Healthcare organizations are especially exposed: AI-driven tools let attackers produce convincing phishing messages, replicate voices with alarming precision, and generate realistic video deepfakes. These tactics are used to deceive staff into exposing protected health information and granting access to healthcare systems.

 

FAQs

Why are deepfake phishing attacks more convincing than traditional phishing?

They combine visual or audio impersonation with familiar communication styles, which makes requests feel authentic and harder to challenge.

 

What types of requests are commonly used in deepfake scams?

Attackers often ask for payment approvals, credential sharing, password resets, or exceptions to internal processes.

 

Why do these attacks work even in organizations with strong security tools?

They target human decision-making at the moment of trust, where technical controls may not flag anything malicious until after the damage is done.

 

How do attackers obtain training material for deepfakes?

They collect publicly available audio, video, and images from social media, interviews, conferences, and online profiles.

 

What steps help reduce the risk of deepfake phishing?

Clear verification procedures, callback requirements, separation of duties, and training that covers voice and video impersonation scenarios all reduce exposure; a minimal policy sketch follows below.
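To make the callback requirement concrete, here is a minimal sketch in Python of a policy gate that holds high-risk requests until they are verified out of band. The action categories and field names are hypothetical, chosen to mirror the FAQ answers above rather than any specific product.

```python
# Hypothetical policy gate: high-risk actions are held until verified
# on a known-good callback number. Action names and fields are
# illustrative assumptions, not a real API.
from dataclasses import dataclass

# Request types the FAQs above identify as common deepfake lures.
HIGH_RISK_ACTIONS = {
    "payment_approval",
    "credential_share",
    "password_reset",
    "process_exception",
}

@dataclass
class Request:
    action: str              # e.g. "payment_approval"
    channel: str             # e.g. "email", "voice", "video"
    verified_callback: bool  # staff called back on a known-good number?

def approve(req: Request) -> bool:
    """A convincing voice or video call alone never satisfies the policy."""
    if req.action in HIGH_RISK_ACTIONS and not req.verified_callback:
        return False  # hold until verified out of band
    return True

print(approve(Request("payment_approval", "voice", verified_callback=False)))  # False
print(approve(Request("payment_approval", "voice", verified_callback=True)))   # True
```

The value of this separation is that approval depends on a channel the attacker does not control, not on how convincing the original request looks or sounds.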