Email remains the number one vector for cyberattacks, even as security tools advance. In August 2025, Field CISO Mick Leach underscored this point in his SANS talk, "The AI Threat: Protecting Your Email from AI-Generated Attacks."
More specifically, he noted that Business Email Compromise (BEC) continues to inflict massive damage, with total exposed losses since 2013 reaching $55 billion. Each successful attack costs organizations an average of $129,192, and FBI data shows reported losses climbing year after year, hitting nearly $3 billion in 2024.
Despite decades of security investments, attackers adapt faster. Moreover, email is still the most direct and effective way to reach employees, trick them into action, and divert funds or credentials.
Over the last two decades, phishing has evolved dramatically. The presentation breaks this evolution into three eras, culminating in today's AI-driven attacks.
In other words, criminals no longer need to spend days crafting emails. With tools like WormGPT and GhostGPT, they can generate persuasive, context-aware emails in seconds.
Artificial intelligence has lowered the bar for launching attacks. Creating a believable phishing or BEC attempt used to require time, talent, and careful research: attackers had to study targets, learn company structures, and craft emails that appeared professional.
Now, generative AI eliminates much of that effort. With a simple prompt, attackers can create customized emails that seem legitimate. They can reference real people, strike an appropriate tone, and appear to come from trusted vendors or colleagues. What makes this so dangerous is that these AI-created emails often bear none of the traditional warning signs defenders are trained to look for. They frequently contain no suspicious links or attachments, and they can pass common authentication checks like SPF, DKIM, and DMARC.
Go deeper: Understanding email authentication
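To see why passing these checks proves so little, here is a minimal Python sketch of reading a message's Authentication-Results header. The header values and addresses are hypothetical; the point is that a fully authenticated email can still be a BEC attempt:

```python
from email import message_from_string

# Hypothetical message: all three authentication checks pass,
# yet the body is a classic payment-redirect request.
raw_email = """\
Authentication-Results: mx.example.com;
 spf=pass smtp.mailfrom=vendor.com;
 dkim=pass header.d=vendor.com;
 dmarc=pass header.from=vendor.com
From: accounts@vendor.com
To: finance@example.com
Subject: Updated payment details

Hi, please review the new wire instructions before Friday's run.
"""

msg = message_from_string(raw_email)
auth_results = msg.get("Authentication-Results", "")

# SPF, DKIM, and DMARC all pass, so authentication alone says
# nothing about whether the message is social engineering.
for check in ("spf", "dkim", "dmarc"):
    status = "pass" if f"{check}=pass" in auth_results else "fail/missing"
    print(f"{check.upper()}: {status}")
```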
Legacy security measures were built to detect patterns like suspicious URLs, known malware, or flagged IP addresses. However, AI-generated attacks often don’t contain these signals.
As Mick Leach explained, they often show “No Known-Bad Sender IP,” “Authenticate Successfully,” and include “Links: None, Attachments: None, No IOCs to Analyze.” This makes them appear legitimate to traditional defenses, allowing sophisticated phishing emails to slip past filters and reach unsuspecting employees.
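A toy example makes the gap concrete. The blocklist and URL heuristics below are hypothetical stand-ins for legacy filter rules; an AI-written BEC email with a clean sender IP, no links, and no attachments scores zero and is delivered:

```python
# Minimal sketch of legacy-style filtering with hypothetical threat lists.
KNOWN_BAD_IPS = {"203.0.113.7"}      # example blocklist entry
SUSPICIOUS_TLDS = (".zip", ".xyz")   # example URL heuristic

def legacy_score(sender_ip: str, urls: list[str], attachments: list[str]) -> int:
    score = 0
    if sender_ip in KNOWN_BAD_IPS:
        score += 50
    if any(url.endswith(SUSPICIOUS_TLDS) for url in urls):
        score += 30
    if attachments:
        score += 20
    return score

# An AI-generated BEC message: clean sender IP, no links,
# no attachments, no IOCs to analyze.
print(legacy_score("198.51.100.4", urls=[], attachments=[]))  # 0 -> delivered
```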
Even with security training, humans remain vulnerable. According to Leach, 99% of security leaders admit their organizations experienced incidents tied to “an avoidable user action” in the past year. Meanwhile, 98.4% of security leaders say AI is already being used by attackers.
So, no matter how advanced technical defenses become, email continues to be a lucrative target because attackers can count on at least some users making mistakes.
Attackers can now send “more polished and persuasive emails” that look indistinguishable from legitimate correspondence. Using AI tools that “learn from public data like LinkedIn and company websites,” criminals can gather context on employees, vendors, and organizational structures in seconds.
They can also run “rapid iteration and A/B testing of attack variants to find what works,” allowing them to fine-tune messages until they achieve the highest success rate. These tools also allow attackers to “send thousands of tailored emails instantly.”
The result is a scalable attack model: convincing impersonation scenarios reach entire organizations with little effort, bypassing traditional defenses and putting employees at risk.
Spotting AI-generated phishing emails requires several approaches, including detection models that analyze message text for signs of machine generation.
While these methods improve visibility, they are not foolproof. Leach cautions that attackers can retrain models or adjust prompts to bypass detection.
Therefore, AI-detection tools are just one piece of a broader defense strategy rather than a complete solution.
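As an illustration of this kind of tooling (not the specific methods from the talk), the sketch below scores message text with a public machine-generated-text classifier via Hugging Face's transformers library. The model shown was trained on GPT-2 output, so its score is a weak signal against modern generators, which is exactly the limitation Leach describes:

```python
# Requires: pip install transformers torch
from transformers import pipeline

# Public detector trained on GPT-2 output; treat its verdict as one
# signal among many, not proof of AI authorship.
detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

body = (
    "Hi Dana, following up on the vendor onboarding we discussed. "
    "Could you update the payment details before Friday's run?"
)

result = detector(body)[0]
print(result["label"], round(result["score"], 3))
```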
If attackers are automating, defenders must do the same. According to Leach, “Defensive AI is AI deployed as an active defense to make accurate, impactful security decisions.”
More specifically, AI can be used to evaluate signals like unusual sender behavior, a financial tone in the message, or a mismatch in domains, and to flag high-risk emails before they reach an employee's inbox. For example, the AI can flag phrases like "update payment details" or spot anomalies such as an "unusual sender domain," then automatically remove the message before the employee sees it, reducing exposure and limiting the chance of a successful attack.
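As a rough illustration, here is a minimal Python sketch of that kind of scoring. The phrase list, weights, and threshold are hypothetical; real defensive AI platforms weigh far more signals:

```python
# Hypothetical phrases that indicate a financial tone.
FINANCIAL_PHRASES = ("update payment details", "wire transfer", "new bank account")

def risk_score(body: str, sender_domain: str, known_domains: set[str]) -> int:
    score = 0
    lowered = body.lower()
    if any(phrase in lowered for phrase in FINANCIAL_PHRASES):
        score += 40   # financial tone in the message
    if sender_domain not in known_domains:
        score += 35   # unusual or mismatched sender domain
    return score

score = risk_score(
    body="Please update payment details for next week's invoice.",
    sender_domain="vend0r-billing.com",            # lookalike domain
    known_domains={"vendor.com", "example.com"},
)
if score >= 60:
    print("High risk: quarantine before delivery")
```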
Defensive AI platforms monitor thousands of behavior signals across employees and external partners to strengthen protection. For employees, these signals include factors like “tone & frequency,” “reporting relationships,” “devices used,” and “sign-in locations.” On the partner side, defensive AI evaluates “vendor contacts,” “communication cadence,” “new vendor contact,” and changes such as a “new mail forwarding rule.”
By continuously analyzing this information, the system builds detailed "per-user, per-supplier models" that establish a baseline of what normal communication looks like. When an email deviates from those patterns, it is flagged as suspicious. This allows security teams to move beyond static detection and respond to subtle signals that might otherwise go unnoticed.
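A simplified sketch of such a per-sender baseline is shown below; the two fields tracked here are hypothetical stand-ins for the thousands of signals a real platform monitors:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class SenderBaseline:
    """What 'normal' looks like for one external sender."""
    sign_in_locations: Counter = field(default_factory=Counter)
    typical_recipients: Counter = field(default_factory=Counter)

    def observe(self, location: str, recipient: str) -> None:
        self.sign_in_locations[location] += 1
        self.typical_recipients[recipient] += 1

    def is_anomalous(self, location: str, recipient: str) -> bool:
        # Flag anything never seen before for this sender.
        return (self.sign_in_locations[location] == 0
                or self.typical_recipients[recipient] == 0)

baseline = SenderBaseline()
for _ in range(50):
    baseline.observe("Chicago, US", "ap@example.com")

print(baseline.is_anomalous("Chicago, US", "ap@example.com"))    # False
print(baseline.is_anomalous("Lagos, NG", "finance@example.com")) # True
```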
Once an anomaly is identified, defensive AI can act automatically: suspicious emails are removed from inboxes before employees interact with them, compromised accounts can be reset, phishing simulations can be sent to reinforce awareness, and value reports can be generated for leadership.
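In code form, that response logic might look something like the sketch below; the action names are hypothetical placeholders for real platform APIs:

```python
from enum import Enum, auto

# Hypothetical actions standing in for real remediation APIs.
class Action(Enum):
    REMOVE_FROM_INBOXES = auto()
    RESET_ACCOUNT = auto()
    SEND_PHISHING_SIMULATION = auto()
    GENERATE_VALUE_REPORT = auto()

def respond(anomaly: dict) -> list[Action]:
    # Always pull the suspicious message before anyone can interact with it.
    actions = [Action.REMOVE_FROM_INBOXES]
    if anomaly.get("account_compromised"):
        actions.append(Action.RESET_ACCOUNT)
    if anomaly.get("user_engaged"):
        actions.append(Action.SEND_PHISHING_SIMULATION)
    actions.append(Action.GENERATE_VALUE_REPORT)  # summarize for leadership
    return actions

print(respond({"account_compromised": True, "user_engaged": False}))
```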
Through this combination of behavioral analysis and autonomous response, defensive AI provides organizations with a proactive defense that adapts as attackers change their methods.
Leach ended his presentation with five practical recommendations for organizations.
As Leach warns, “AI-enabled threats are outpacing defenses.” Paubox email helps organizations close that gap, protecting people and data while keeping communication simple and secure.
More specifically, it delivers HIPAA compliant email that is automatically encrypted, removing the risks associated with manual security decisions. Messages are secured without requiring employees to click extra buttons or follow complicated steps. That limits the opportunities for the avoidable user actions that, according to Leach, 99% of security leaders tie to incidents in their organizations.
Furthermore, Paubox integrates directly into existing workflows, reducing friction for users while enhancing security. Instead of relying on static filters or retrofitted legacy systems, Paubox allows organizations to modernize their email defenses and build resilience against AI-driven attacks.
Legacy tools rely on spotting known patterns like bad links or flagged IPs. AI-generated phishing often lacks these indicators, slipping through undetected.
Learn more: How legacy systems disrupt patient care
No system is foolproof, but defensive AI identifies subtle anomalies and acts faster than human reviewers or static filters, reducing the risk of phishing emails reaching employees.
Exposure of protected health information (PHI) through a phishing attack qualifies as a HIPAA data breach. This requires mandatory reporting and can result in regulatory investigations, financial penalties, and lasting reputational harm.
Go deeper: The complete guide to HIPAA violations