
86% of phishing campaigns now use AI as attacks expand beyond email


A recent phishing trends report finds AI adoption among attackers has grown steadily for two years, with campaigns now routinely chaining email, calendar, and messaging platform lures to improve success rates.


What happened

KnowBe4's seventh annual Phishing Threat Trends report, released April 30, 2026, found that 86% of phishing campaigns tracked over the past six months involved some use of AI, up from 84% in 2025 and 80% in 2024. According to The Register, the report documents a 49% increase in phishing attacks delivered through calendar invites and a 41% increase in attacks using Microsoft Teams messages impersonating IT support staff to harvest credentials. Phishing campaigns increasingly use email as the first stage of a multi-vector attack rather than the sole delivery mechanism, with Teams messages and calendar notifications serving as follow-on contact designed to reinforce the initial lure.


Going deeper

AI is being used across multiple phases of phishing campaigns, not just message drafting. Automated reconnaissance enables attackers to scan large volumes of publicly available information, extract target-specific details, and feed that data into AI-generated lures. The result is polymorphic phishing: campaigns that take a base template and generate a unique, personalized version for each recipient, making pattern-based detection significantly harder. A typical multi-stage campaign might begin with a phishing email and follow up with a Teams message from someone impersonating IT support, asking the target to click a link to reset a password or sign a document via DocuSign. Each touchpoint uses a different channel and a different pretext, but both ultimately deliver credentials or remote access to the attacker. According to Microsoft data cited in The Register, phishing campaigns using AI-generated lures are 4.5 times more effective than those crafted manually.
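The detection problem polymorphic lures create can be shown in a few lines. In this sketch, both lure strings are invented for illustration: two variants of the same base template differ only in greeting and phrasing, yet any exact-match or hash-based signature derived from the first will never fire on the second.

```python
import hashlib

# Hypothetical base lure and a per-recipient polymorphic variant: only the
# greeting and a few phrases change, yet a content signature computed from
# the first message does not match the second.
lure_a = "Hi Dana, your password expires today. Reset it here: https://example.test/reset"
lure_b = "Hello Sam, your credentials lapse this afternoon. Renew them here: https://example.test/reset"

sig_a = hashlib.sha256(lure_a.encode()).hexdigest()
sig_b = hashlib.sha256(lure_b.encode()).hexdigest()

print(sig_a == sig_b)  # False: a filter keyed on sig_a never fires on lure_b
```

This is why per-recipient generation defeats signature and template matching outright; defenders are pushed toward behavioral and intent-based detection instead.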


What was said

KnowBe4 stated in its Phishing Threat Trends report that the steady year-on-year increase in AI adoption among attackers suggests holdouts are "increasingly adopting the tech to broaden their reach," and that AI is enabling attackers to automate reconnaissance and information gathering, "speeding up the phishing process and giving attackers more time to shift to multiple attack vectors to better gain their victims' trust." The FBI reported that US cybercrime losses reached a record $20.87 billion in 2025, with phishing the most common complaint category and AI-related fraud accounting for approximately $893 million of that total.


In the know

The expansion of phishing beyond email into Teams and calendar invites reflects a deliberate response to improved email filtering. As organizations have layered more controls onto inboxes, attackers have moved toward channels that carry the same trusted identity signals but face less scrutiny. According to Microsoft's own Q1 2026 email threat landscape report, published the same day as the KnowBe4 findings, 8.3 billion phishing emails were detected in a single quarter, with business email compromise alone generating 10.7 million incidents. The convergence of both reports on the same date paints a consistent picture: phishing volume is not declining, the techniques are diversifying, and AI is accelerating both.


The big picture

For healthcare organizations, the shift to multi-vector phishing campaigns targeting collaboration tools carries particular risk. Clinical and administrative staff use Teams for internal communications, scheduling, and vendor coordination. A phishing message arriving via Teams from what appears to be an IT support account, following up on an earlier email about a credential reset, fits naturally into the kind of communications healthcare staff receive and act on daily. According to Paubox's 2026 Healthcare Email Security Report, only 5% of known phishing attacks are reported by employees to security teams, meaning multi-stage campaigns that blend email, Teams, and calendar channels can run through their full attack chain with no internal detection signal at any point.


FAQs

What makes AI-generated phishing emails harder to detect than traditional ones?

Traditional phishing relied on generic templates with poor grammar and obvious inconsistencies. AI-generated lures use correct language, incorporate target-specific details from automated reconnaissance, and produce a unique version for each recipient, removing the pattern-matching signals that filters and trained employees are most likely to catch.


Why are calendar invites an effective second-stage phishing vector?

Calendar invites arrive through a different channel than email and carry an inherent implication of legitimacy that someone is scheduling time with you. Recipients are less likely to apply the same scrutiny to a calendar notification as they would to an unsolicited email, and the invite format normalizes clicking links to join meetings or access documents.
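One defensive response is to inspect invite bodies before they reach the calendar. The following is a minimal sketch, not a product feature: the .ics fragment, the lookalike domain, and the allowlist of approved meeting services are all invented for illustration.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of meeting services an organization expects to
# see in legitimate calendar invites.
ALLOWED_DOMAINS = {"teams.microsoft.com", "zoom.us"}

# Invented .ics fragment containing a lookalike credential-harvesting link.
ics_body = """BEGIN:VEVENT
SUMMARY:Quarterly credential review
DESCRIPTION:Join and re-verify your password: https://login-micr0soft.example/verify
END:VEVENT"""

# Extract every URL in the invite and flag any whose host is not approved.
urls = re.findall(r"https?://[^\s\"'<>]+", ics_body)
suspicious = [u for u in urls if urlparse(u).hostname not in ALLOWED_DOMAINS]

print(suspicious)  # the lookalike link is flagged for review
```

Even a crude allowlist check like this restores some of the scrutiny that calendar notifications otherwise bypass.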


How does impersonating IT support in Teams messages increase attack success rates?

IT support is one of the most frequently impersonated personas in phishing because staff are conditioned to respond quickly to IT requests. A Teams message from a familiar IT support account name, arriving after a phishing email has already primed the target, creates a second independent trust signal that greatly increases the probability of the target complying.


What does a 4.5x effectiveness increase for AI lures mean in practice?

A campaign that would compromise one in twenty targets without AI assistance would compromise roughly one in four (about 22.5%) with AI-generated personalization, at the same scale and effort. Across millions of phishing attempts, that multiplier translates directly into substantially more compromised accounts per campaign.
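The arithmetic behind that multiplier, using the figures above:

```python
baseline = 1 / 20          # 5% success rate without AI personalization
with_ai = baseline * 4.5   # reported 4.5x effectiveness multiplier

print(with_ai)             # 0.225, i.e. roughly one compromise per 4.4 targets
```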


What controls are most effective against multi-vector phishing campaigns?

Phishing-resistant MFA removes the value of stolen credentials regardless of which channel the attack uses. Configuring Teams to restrict external message delivery and flag messages from unverified external senders reduces the effectiveness of Teams-based lures. Training staff specifically on multi-stage attack patterns, rather than single-email phishing scenarios, addresses the trust gap that makes second-stage contact effective.
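The external-sender flagging idea reduces to a simple policy check. This is an illustrative sketch only, not a real Teams API; the function name and domain names are invented, and a real deployment would use the platform's own external-access and banner settings.

```python
# Hypothetical set of domains considered internal to the organization.
INTERNAL_DOMAINS = {"hospital.example"}

def needs_external_banner(sender: str) -> bool:
    """Return True if a message from this sender should carry an
    external-sender warning before delivery."""
    domain = sender.rsplit("@", 1)[-1].lower()
    return domain not in INTERNAL_DOMAINS

print(needs_external_banner("it-support@hospital.example"))  # False: internal
print(needs_external_banner("it-support@hosplta1.example"))  # True: lookalike
```

The value of the banner is precisely the multi-stage scenario described above: a lookalike "IT support" account loses its second independent trust signal the moment it is visibly marked external.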

Subscribe to Paubox Weekly

Every Friday we bring you the most important news from Paubox. Our aim is to make you smarter, faster.