
Protect your email from AI-generated attacks


Email remains the number one vector for cyberattacks, even as security tools advance. In August 2025, Field CISO Mick Leach reiterated this issue in his SANS talk, The AI Threat: Protecting Your Email from AI-Generated Attacks.

More specifically, he noted that Business Email Compromise (BEC) continues to inflict massive damage, with total exposed losses since 2013 reaching $55 billion. Each successful attack costs organizations an average of $129,192, and FBI data shows reported losses climbing year after year, hitting nearly $3 billion in 2024.

Despite decades of security investments, attackers adapt faster. Moreover, email is still the most direct and effective way to reach employees, trick them into action, and divert funds or credentials.

 

The evolution of email attacks

Over the last two decades, phishing has evolved dramatically. The presentation breaks it down into three eras:

  • Spray and pray: “Unpersonalized, bulk emails” that relied on volume over sophistication.
  • Socially engineered: “Highly-personalized, labor-intensive, sophisticated” attacks crafted to fool recipients with legitimacy.
  • AI-generated: Today’s threat, where attackers “use public information and AI tools to create and deliver highly personalized attacks at speed.”

In other words, criminals no longer need to spend days crafting emails. With tools like WormGPT and GhostGPT, they can generate persuasive, context-aware emails in seconds.

 

How AI makes attacks too easy to launch

Artificial intelligence has lowered the bar for launching attacks. Creating a believable phishing or BEC attempt used to require time, talent, and careful research: attackers had to study targets, learn company structures, and craft emails that looked professional.

Now, generative AI eliminates much of that work. With a simple prompt, attackers can create customized emails that seem legitimate: they reference real people, strike an appropriate tone, and appear to come from trusted vendors or colleagues. What makes this so dangerous is that these AI-created emails often bear none of the traditional warning signs defenders are trained to look for. They can lack suspicious links and attachments, and they can pass common authentication checks like SPF, DKIM, and DMARC.
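To see why passing authentication is not the same as being safe, consider a minimal sketch using Python's standard library email module. The message and its Authentication-Results header below are hypothetical; the point is that SPF, DKIM, and DMARC confirm a message really came from the domain it claims, not that the sender's intent is benign.

```python
from email import message_from_string

# Hypothetical raw message; the Authentication-Results header is what
# a receiving mail server records after running SPF/DKIM/DMARC checks.
raw = """\
From: billing@trusted-supplier.com
Authentication-Results: mx.example.com;
 spf=pass smtp.mailfrom=trusted-supplier.com;
 dkim=pass header.d=trusted-supplier.com;
 dmarc=pass header.from=trusted-supplier.com
Subject: Updated payment details

Hi, please update our banking details before Friday's invoice run.
"""

msg = message_from_string(raw)
auth = msg.get("Authentication-Results", "")

# All three checks pass, yet none of them speaks to intent: an attacker
# mailing from their own correctly configured domain passes just as easily.
for check in ("spf", "dkim", "dmarc"):
    status = "pass" if f"{check}=pass" in auth else "fail/missing"
    print(f"{check.upper()}: {status}")
```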

Go deeper: Understanding email authentication

 

Why traditional defenses struggle

Legacy security measures were built to detect patterns like suspicious URLs, known malware, or flagged IP addresses. However, AI-generated attacks often don’t contain these signals. 

As Mick Leach explained, they often show “No Known-Bad Sender IP,” “Authenticate Successfully,” and include “Links: None, Attachments: None, No IOCs to Analyze.” This makes them appear legitimate to traditional defenses, allowing sophisticated phishing emails to slip past filters and reach unsuspecting employees.
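A toy filter makes the problem concrete. The sketch below uses hypothetical rules, not any specific product: it checks the classic indicators (sender IP reputation, suspicious URLs, attachments) and, finding none, delivers the message.

```python
# Minimal sketch of indicator-based filtering (hypothetical rules,
# not any real product) to show why IOC-free BEC slips through.
KNOWN_BAD_IPS = {"203.0.113.7"}          # example flagged sender IPs
SUSPICIOUS_URL_HINTS = ("bit.ly", ".zip", "login-")

def legacy_verdict(sender_ip: str, urls: list[str], attachments: list[str]) -> str:
    if sender_ip in KNOWN_BAD_IPS:
        return "block: known-bad sender IP"
    if any(hint in u for u in urls for hint in SUSPICIOUS_URL_HINTS):
        return "block: suspicious URL"
    if attachments:
        return "quarantine: attachment present"
    return "deliver"  # no IOCs to analyze

# An AI-written payroll-diversion email: no links, no attachments,
# clean IP -- every legacy check passes and the message is delivered.
print(legacy_verdict("198.51.100.10", urls=[], attachments=[]))
```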

 

The elevated human risk

Even with security training, humans remain vulnerable. According to Leach, 99% of security leaders admit their organizations experienced incidents tied to “an avoidable user action” in the past year. Meanwhile, 98.4% of security leaders say AI is already being used by attackers.

So, no matter how advanced technical defenses become, email continues to be a lucrative target because attackers can count on at least some users making mistakes.

 

The new threat to email security

Attackers can now send “more polished and persuasive emails” that look indistinguishable from legitimate correspondence. Using AI tools that “learn from public data like LinkedIn and company websites,” criminals can gather context on employees, vendors, and organizational structures in seconds. 

They can also run “rapid iteration and A/B testing of attack variants to find what works,” allowing them to fine-tune messages until they achieve the highest success rate. These tools also allow attackers to “send thousands of tailored emails instantly.” 

This creates a scalable business model: convincing impersonation scenarios reach entire organizations with little effort, overcoming traditional defenses and putting employees at risk.

 

How to detect AI-generated phishing

Spotting AI-generated phishing emails requires several approaches:

  1. Use linguistic analysis tools: Platforms like GLTR highlight words and phrases that are statistically common in AI-generated text, making it easier to flag suspicious messages.
  2. Use AI classifiers: OpenAI’s text classifier can evaluate whether an email is “Possibly AI” or “Likely AI,” giving security teams an additional layer of scrutiny.
  3. Validate with independent detection systems: Tools like GPTZero offer another perspective, identifying emails “likely to be written entirely by AI.”
  4. Cross-check with multiple tools: In the presentation, both OpenAI and GPTZero flagged a BEC payroll diversion attempt as AI-generated, showing the value of using more than one detector (a sketch of this ensemble approach follows below).
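As a rough illustration of that cross-checking step, the sketch below combines the scores of several detectors and only flags a message when at least two agree. The detector functions here are stubs; real services such as GPTZero expose their own APIs and require accounts and keys.

```python
from typing import Callable

# A detector maps text to an estimated probability that it is AI-generated.
# These are hypothetical stand-ins; real detector APIs differ.
Detector = Callable[[str], float]

def cross_check(text: str, detectors: dict[str, Detector],
                threshold: float = 0.8) -> dict:
    """Flag a message only when independent detectors agree."""
    scores = {name: fn(text) for name, fn in detectors.items()}
    flagged = sum(score >= threshold for score in scores.values())
    return {"scores": scores,
            "verdict": "likely AI" if flagged >= 2 else "inconclusive"}

# Stubbed scores for illustration, echoing the payroll-diversion example.
demo = cross_check(
    "Per our call, please route this week's payroll to the account below.",
    detectors={"detector_a": lambda t: 0.93, "detector_b": lambda t: 0.88},
)
print(demo["verdict"])  # likely AI -- both detectors agree
```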

While these methods improve visibility, they are not foolproof. Leach cautions that attackers can retrain models or adjust prompts to bypass detection. 

Therefore, AI-detection tools are just one piece of a broader defense strategy rather than a complete solution.

 

Fighting AI with AI

If attackers are automating, defenders must do the same. According to Leach, “Defensive AI is AI deployed as an active defense to make accurate, impactful security decisions.”

More specifically, AI can be used to:

  • Analyze internal signals: Defensive AI “ingests, analyzes, and makes decisions based on internal data sources and telemetry,” to map communication behaviors across employees, vendors, and partners.
  • Learn from data: AI “learns from data without needing to be explicitly programmed.” The system builds a model of what typical communication looks like and adjusts as new information becomes available.
  • Update continuously: It “constantly updates to detect rapidly-changing attacks with net-new tactics.” The system refines itself so that new attack methods can be identified as soon as they appear.

Ultimately, by evaluating signals like unusual sender behavior, an unexpected financial tone, or a mismatch in domains, defensive AI can identify high-risk emails. In practice, the system can flag phrases like “update payment details” or spot anomalies such as an “unusual sender domain.” Once flagged, these messages are automatically removed before reaching an employee’s inbox, reducing exposure and limiting the chance of a successful attack.
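A simplified sketch of that signal evaluation might look like the following. The phrase list, domains, and auto-remove action are all illustrative assumptions, not Paubox's actual model.

```python
# Hypothetical signal evaluation: flag financial-tone phrases and
# senders outside the organization's known-domain set.
FINANCIAL_PHRASES = ("update payment details", "wire transfer",
                     "change direct deposit", "new bank account")

def risk_signals(sender_domain: str, known_domains: set[str],
                 body: str) -> list[str]:
    signals = []
    if sender_domain not in known_domains:
        signals.append("unusual sender domain")
    hits = [p for p in FINANCIAL_PHRASES if p in body.lower()]
    if hits:
        signals.append(f"financial tone: {hits}")
    return signals

flags = risk_signals(
    "payroll-support.net",
    known_domains={"acme.com", "adp.com"},
    body="Hi, please update payment details for my next check.",
)
if flags:
    print("auto-remove before delivery:", flags)
```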

 

How defensive AI works in practice

Defensive AI platforms monitor thousands of behavior signals across employees and external partners to strengthen protection. For employees, these signals include factors like “tone & frequency,” “reporting relationships,” “devices used,” and “sign-in locations.” On the partner side, defensive AI evaluates “vendor contacts,” “communication cadence,” “new vendor contact,” and changes such as a “new mail forwarding rule.”

By continuously analyzing this information, the system builds detailed “per-user, per-supplier models” that establish a baseline of what normal communication looks like. When an email deviates from those patterns, it is flagged as suspicious, letting security teams move beyond static detection and respond to subtle signals that might otherwise go unnoticed.
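For illustration, a toy per-user baseline could track sign-in locations and sending volume and flag deviations; a real system models thousands of signals, but the shape of the logic is similar. Every field and threshold here is assumed for the example.

```python
from collections import defaultdict

# Toy per-user baseline (illustrative only): remember the sign-in
# locations and typical send volume seen for each user, then flag
# activity that deviates from that history.
class UserBaseline:
    def __init__(self):
        self.locations = set()
        self.daily_counts = []

    def observe(self, location: str, sent_today: int):
        self.locations.add(location)
        self.daily_counts.append(sent_today)

    def is_anomalous(self, location: str, sent_today: int) -> bool:
        if self.locations and location not in self.locations:
            return True  # never seen this sign-in location before
        if self.daily_counts:
            avg = sum(self.daily_counts) / len(self.daily_counts)
            return sent_today > 3 * max(avg, 1)  # assumed 3x volume threshold
        return False

baselines = defaultdict(UserBaseline)
baselines["cfo@acme.com"].observe("Boston, US", sent_today=40)
print(baselines["cfo@acme.com"].is_anomalous("Lagos, NG", sent_today=400))  # True
```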

Once an anomaly is identified, defensive AI can act automatically: suspicious emails are removed from inboxes before employees interact with them, compromised accounts are reset, phishing simulations are sent to reinforce awareness, and value reports are generated for leadership.

Through this combination of behavioral analysis and autonomous response, defensive AI provides organizations with a proactive defense that adapts as attackers change their methods.

 

5 steps to stay ahead of AI email threats

Leach ended his presentation with five practical recommendations for organizations:

  1. Evaluate your current email security stack: Identify gaps in defending against AI-driven attacks.
  2. Adopt behavior-based threat detection: Static rules no longer work; behavior is the best signal.
  3. Invest in AI-native platforms: Avoid retrofitted legacy tools that cannot adapt at AI speed.
  4. Train your workforce on AI-powered social engineering: Employees need updated awareness.
  5. Partner with vendors who are innovating, not reacting: The threat landscape changes too quickly for slow adopters.

 

How Paubox email helps stop AI-powered phishing

As Leach warns, “AI-enabled threats are outpacing defenses.” Paubox email helps organizations close that gap, protecting people and data while keeping communication simple and secure.

More specifically, it delivers HIPAA compliant email that is automatically encrypted, removing the risks associated with manual security decisions. Messages are secured without requiring employees to click extra buttons or follow complicated steps, limiting the opportunity for the avoidable user actions that 99% of security leaders say led to incidents at their organizations.

Furthermore, Paubox integrates directly into existing workflows, reducing friction for users while enhancing security. Instead of relying on static filters or retrofitted legacy systems, Paubox allows organizations to modernize their email defenses and build resilience against AI-driven attacks. 

 

FAQs

Why are legacy email filters not enough anymore?

Legacy tools rely on spotting known patterns like bad links or flagged IPs. AI-generated phishing often lacks these indicators, slipping through undetected.

Learn more: How legacy systems disrupt patient care

 

Can defensive AI stop every phishing email?

No system is foolproof, but defensive AI identifies subtle anomalies and acts faster than human reviewers or static filters, reducing the risk that phishing emails reach users.

 

What happens if PHI is exposed in a phishing attack?

Exposure of protected health information (PHI) through a phishing attack qualifies as a HIPAA data breach. This requires mandatory reporting and can result in regulatory investigations, financial penalties, and lasting reputational harm.

Go deeper: The complete guide to HIPAA violations
