
Microsoft: AI makes phishing 4.5x more effective and far more profitable

Written by Farah Amod | October 30, 2025

A new Microsoft report reveals that AI-generated phishing emails now outperform traditional phishing by a wide margin, with higher click rates and larger financial gains for cybercriminals.


What happened

Microsoft’s 2025 Digital Defense Report found that people are 4.5 times more likely to click on phishing emails written with the help of artificial intelligence. The report measured a 54% click-through rate for AI-generated phishing messages, compared to only 12% for those written manually. According to Microsoft, the use of AI can make phishing attacks up to 50 times more profitable due to higher engagement and automation efficiency.
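The arithmetic behind the headline number is straightforward, and the profitability claim follows from cost as much as engagement. The short Python sketch below uses the report's click rates; the per-message cost and payout figures are purely hypothetical assumptions, included only to show how cheaper generation plus higher engagement compounds.

```python
# Back-of-the-envelope check on the report's figures. The click rates
# come from Microsoft's report; the cost and payout numbers are
# purely illustrative assumptions.

ai_ctr = 0.54      # click-through rate for AI-generated phishing (report)
manual_ctr = 0.12  # click-through rate for manually written phishing (report)

print(f"Click-rate multiplier: {ai_ctr / manual_ctr:.1f}x")  # -> 4.5x

# Profitability depends on cost as well as clicks. If AI cuts the cost
# of producing a message while raising engagement, return per dollar
# grows much faster than the click rate alone.
messages = 10_000
payout_per_click = 50.0     # hypothetical average gain per click
manual_cost_per_msg = 0.50  # hypothetical cost of a hand-written email
ai_cost_per_msg = 0.05      # hypothetical cost of an AI-generated email

manual_roi = (messages * manual_ctr * payout_per_click) / (messages * manual_cost_per_msg)
ai_roi = (messages * ai_ctr * payout_per_click) / (messages * ai_cost_per_msg)
print(f"ROI multiplier under these assumptions: {ai_roi / manual_roi:.0f}x")  # -> 45x
```

Under these assumed costs, a 4.5x click-rate gain combined with a 10x drop in production cost yields a 45x return multiplier, which is in the neighborhood of Microsoft's "up to 50 times more profitable" figure.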

AI is helping attackers craft personalized, native-language messages that appear credible and relevant to their targets. Microsoft described this as “the most significant change in phishing over the last year,” stating that even less sophisticated criminals are now likely to integrate AI tools into their attacks.


Going deeper

Beyond phishing, Microsoft reported that AI is transforming nearly every stage of the cyberattack process. Threat actors now use AI to automate reconnaissance, identify vulnerabilities, create malware, and even clone voices or generate deepfake videos. The technology also opens new attack surfaces: large language models themselves can be exploited.

Microsoft noted a sharp increase in AI-generated content from nation-state actors. Between July 2023 and July 2025, documented samples grew from zero to 225, as countries began using AI to amplify influence operations. Still, most attacks worldwide were financially driven, with 52% of known incidents motivated by profit and only 4% tied to espionage.

The report also observed a new trend: criminals are increasingly “logging in, not breaking in.” Instead of relying solely on phishing, they now combine social engineering with legitimate infrastructure to gain access and remain undetected.


What was said

Amy Hogan-Burney, Microsoft’s Corporate Vice President for Customer Security and Trust, wrote that the adoption of AI by both criminals and nation-states “has picked up in the past six months as actors use the technology to make their efforts more advanced, scalable, and targeted.”

Microsoft’s threat intelligence team further pointed out that “ClickFix” attacks, a form of social engineering in which users are tricked into running malicious commands under the guise of system fixes, accounted for 47% of all initial access incidents detected by Microsoft Defender Experts last year, surpassing phishing as the top entry method.


The big picture

Microsoft’s findings show just how much AI has changed phishing. Messages written with generative tools don’t just look cleaner; they sound credible, localized, and personal, which makes people far more likely to click. Attackers no longer need writing skills or translation help; AI handles that, letting even small criminal groups run large-scale, profitable phishing operations that appear legitimate to both users and filters.

Paubox recommends Inbound Email Security as a way to stay ahead of these AI-driven threats. Its generative AI analyzes tone, sender behavior, and communication patterns to catch the subtle inconsistencies that even well-crafted automated phishing leaves behind. That context-based detection helps organizations stop convincing, AI-written messages before employees interact with them.
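Paubox does not publish its detection internals, so as a simplified illustration of what one context-based signal can look like, here is a minimal Python sketch that flags a familiar display name arriving from a domain that sender has never used. The sender history, function name, and sample addresses are invented for the example.

```python
# Minimal illustration of one context-based check: a known display name
# arriving from a domain we have never seen that sender use before.
# This is a simplified sketch, not Paubox's actual detection logic.

from email.utils import parseaddr

# Hypothetical history of display name -> domains observed for this org.
known_senders = {
    "Jane Smith": {"hospital.org"},
    "IT Helpdesk": {"hospital.org"},
}

def flag_lookalike(from_header: str) -> bool:
    """Return True if the display name is familiar but the domain is not."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    seen_domains = known_senders.get(display_name)
    return seen_domains is not None and domain not in seen_domains

print(flag_lookalike("Jane Smith <jane.smith@hospital.org>"))      # False
print(flag_lookalike("Jane Smith <jane.smith@hospitai-org.com>"))  # True
```

Real systems score many such signals together; the point is that metadata context, not message fluency, is what survives AI-polished prose.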


FAQs

How does AI make phishing more effective?

AI can analyze language patterns and create tailored, convincing messages in multiple languages, making phishing emails more believable and harder to detect.


What is a “ClickFix” attack?

ClickFix is a technique where users are tricked into executing malicious commands themselves, often disguised as security updates or IT fixes, allowing attackers to bypass traditional phishing filters.


Why are attackers “logging in, not breaking in”?

Rather than exploiting software vulnerabilities, attackers increasingly use stolen or socially engineered credentials to access systems legitimately, reducing detection risk.


How are nation-states using AI in cyber operations?

Governments are employing AI to generate propaganda, influence campaigns, and disinformation at scale, often through realistic synthetic content and deepfakes.


What steps can organizations take to mitigate AI-driven phishing risks?

Companies should strengthen identity protection with phishing-resistant MFA, deploy behavioral detection tools, train staff to recognize AI-assisted social engineering, and continuously monitor for credential misuse.
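As a minimal illustration of that last point, the Python sketch below flags a successful login from a country an account has never authenticated from before. The event shape and the new-country heuristic are illustrative assumptions; production systems would weigh many more signals.

```python
# Minimal sketch of credential-misuse monitoring: flag a successful login
# from a country the account has never authenticated from before.
# Event fields and the heuristic itself are illustrative assumptions.

from collections import defaultdict

login_history = defaultdict(set)  # user -> set of countries seen

def check_login(user: str, country: str) -> str:
    """Record a login and report whether it looks anomalous."""
    if login_history[user] and country not in login_history[user]:
        verdict = f"ALERT: {user} logged in from new country {country}"
    else:
        verdict = f"ok: {user} from {country}"
    login_history[user].add(country)
    return verdict

print(check_login("alice", "US"))  # ok: first observation sets the baseline
print(check_login("alice", "US"))  # ok: matches history
print(check_login("alice", "RU"))  # ALERT: never seen for this account
```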