According to the Paubox research report, “Healthcare IT is dangerously overconfident about email security”, 89% of healthcare IT leaders now identify AI and machine learning as essential technologies for detecting and preventing email-based threats. This consensus isn't coincidental—it reflects the reality of modern healthcare cybersecurity, where traditional security measures are proving inadequate against increasingly sophisticated attacks.
However, the implementation of AI in healthcare brings its own set of challenges. As Blake Murdoch notes in Privacy and artificial intelligence: challenges for protecting health information in a new era, "The nature of the implementation of AI could mean such corporations, clinics and public bodies will have a greater than typical role in obtaining, utilizing and protecting patient health information. This raises privacy issues relating to implementation and data security." This reality shows the difficulties healthcare organizations face when implementing AI-powered email security solutions.
Healthcare organizations face a combination of factors that make them particularly vulnerable to email-based attacks. Unlike other industries, healthcare entities must balance accessibility with security, often in life-or-death situations where system downtime isn't an option.
The scale of these vulnerabilities became apparent in June 2025 when healthcare services firm Episource reported a massive data breach affecting 5.4 million people. The incident, which occurred after a cybercriminal accessed the company's computer systems over the winter, represents one of the largest breaches reported to federal regulators this year. The breach exposed contact information, health insurance details, medical record numbers, diagnoses, test results, and treatment information, along with personal data like Social Security numbers and birth dates.
The sector's attractiveness to cybercriminals stems from several factors. Healthcare data is valuable on the black market. According to Mike Britton, CIO at Abnormal Security, in Countering the Rise of Email Threats Against Healthcare, "a single record can fetch up to 20 times the price of credit card data" on the dark web. This premium exists because medical records contain personal information that can be used for identity theft, insurance fraud, and other criminal activities over extended periods.
The U.S. Department of Health and Human Services' Health Sector Cybersecurity Coordination Center (HC3) reinforces this concern in their 2023 white paper "AI-Augmented Phishing and the Threat to the Health Sector," noting that "phishing is a common tactic for hackers to use against the health sector, because it often leads to data breaches, and the stolen health data has the potential to be lucrative for the attackers."
Furthermore, healthcare organizations often operate with legacy systems that weren't designed with modern security threats in mind. These systems frequently require integration with newer technologies, creating complex IT environments with multiple potential vulnerabilities. The pressure to maintain 24/7 operations means that security updates and patches are often delayed or implemented during limited maintenance windows, leaving systems exposed.
The human element adds another layer of complexity. Healthcare workers are trained to prioritize patient care, not cybersecurity awareness. As Mike Britton notes in "Countering the Rise of Email Threats Against Healthcare," "Healthcare professionals operate in high-pressure, fast-paced environments." When workloads are heavy and time is scarce, staff are more likely to open and act on emails without scrutinizing them carefully, making them more susceptible to social engineering attacks. When a phishing email appears to come from a colleague requesting urgent patient information, the instinct to help can override security protocols.
Learn more: Why 83% of healthcare IT teams say legacy systems disrupt operations
The scope and sophistication of email threats targeting healthcare organizations have increased in recent years. The HC3's 2023 analysis provides evidence of this escalation: "In 2022, the FBI's Internet Crime Complaint Center (IC3) found that phishing attacks were the number one reported cyber crime, with over 300,000 complaints reported."
The financial impact has grown. According to the HC3 white paper, "the cost of phishing attacks quadrupled from 2015 to 2021... the average cost of a successful phishing attack in 2021 was $14.8 million." For healthcare organizations specifically, the threat is even greater: "the Healthcare Information and Management Systems Society found that the most common attack impacting healthcare organizations was phishing, comprising almost half of all attacks."
According to Deep Instinct's fourth edition report cited in The Need For AI-Powered Cybersecurity to Tackle AI-Driven Cyberattacks, "75% of security professionals have witnessed an increase in cyberattacks this year and 85% were powered by generative AI."
Mike Britton, CIO at Abnormal Security, reports an alarming "37% increase in phishing targeting healthcare in the last 12 months alone." This escalation reflects both the value of healthcare data and the sector's perceived vulnerability.
Beyond traditional phishing, healthcare organizations face sophisticated Vendor Email Compromise (VEC) attacks. According to Britton's research, "VEC attacks on healthcare surge by 60% in the past year." These attacks target the complex web of third-party relationships that healthcare organizations maintain, exploiting trusted vendor communications to gain unauthorized access to sensitive systems and data.
The financial motivations behind these attacks are clear. Britton notes that "Criminal gangs will routinely threaten to leak sensitive medical records online unless the target organisation pays up." This extortion model has proven effective against healthcare organizations, where the combination of sensitive data and operational requirements creates pressure to pay ransoms quickly.
Email remains the primary attack vector for cybercriminals targeting healthcare organizations. However, the nature of these threats has evolved, driven by advances in artificial intelligence and machine learning that mirror the technologies now being used to defend against them.
As Hoala Greevy, CEO of Paubox, observes: "We've seen email threats evolve faster than many tools meant to stop them. It's not just about phishing anymore—it's about deception at scale." Mike Britton reinforces this observation, noting that "Modern phishing attacks often appear highly realistic, especially in today's generative AI era." This evolution represents a shift from traditional, easily identifiable threats to sophisticated, AI-powered attacks that can fool both humans and conventional security systems.
Traditional email threats relied on obvious indicators that security systems could easily detect: suspicious sender addresses, grammatical errors, generic greetings, and malicious attachments. Today's attacks leverage AI to create highly personalized, contextually appropriate messages that trip none of those static checks.
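To make the contrast concrete, the sketch below shows the kind of static, rule-based screening a legacy gateway might perform. It is a minimal illustration under assumed phrase lists and rules, not any vendor's actual filter, and the closing example shows how a fluent, targeted message from a seemingly legitimate address clears every check.

```python
# Minimal sketch of legacy rule-based email screening. The phrase list,
# attachment rules, and reply-to check are illustrative assumptions,
# not a real product's detection logic.
SUSPICIOUS_PHRASES = {"urgent wire transfer", "verify your password", "dear customer"}
RISKY_ATTACHMENT_EXTENSIONS = (".exe", ".js", ".vbs", ".scr")

def looks_suspicious(sender: str, reply_to: str, subject: str, body: str,
                     attachments: list[str]) -> bool:
    """Flag a message only if it trips an obvious, static indicator."""
    text = f"{subject} {body}".lower()

    # 1. Stock phishing phrases and generic greetings.
    if any(phrase in text for phrase in SUSPICIOUS_PHRASES):
        return True

    # 2. Reply-to pointing at a different domain than the visible sender.
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    reply_domain = reply_to.rsplit("@", 1)[-1].lower()
    if reply_domain != sender_domain:
        return True

    # 3. Classic executable attachments.
    return any(name.lower().endswith(RISKY_ATTACHMENT_EXTENSIONS) for name in attachments)

# An AI-written spear-phishing message sent from a compromised clinical
# account: fluent prose, matching domains, no attachment. Every static
# rule passes, so the filter waves it through.
print(looks_suspicious(
    sender="dr.reyes@regional-health.example",
    reply_to="dr.reyes@regional-health.example",
    subject="Updated pre-op notes for tomorrow's case",
    body="Hi, could you resend the latest chart summary before the 7am huddle? Thanks!",
    attachments=[],
))  # -> False
```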
As explained in The Need For AI-Powered Cybersecurity to Tackle AI-Driven Cyberattacks, attackers now "use generative AI to make phishing emails and fake websites more personalized, compelling, sophisticated and almost similar to the targeted original website." According to the Paubox report, phishing attacks have evolved to become faster, more personalized, and increasingly generated by AI. Attackers now use generative AI to craft messages that mimic the tone, structure, and urgency of real communication. They're going beyond targeting just the executive team to focus on billing teams, HR, and clinicians with surgical precision.
Modern phishing attacks targeting healthcare organizations often begin with extensive reconnaissance. The Paubox report reveals that attackers are now scraping LinkedIn profiles and other public data sources to craft spoofed messages that bypass outdated logic entirely. They might research their targets through social media, professional networks, and publicly available information to craft convincing personas and scenarios. They might pose as pharmaceutical representatives, medical device vendors, or even regulatory officials from organizations like the CDC or FDA.
The rise of spear-phishing represents a dangerous evolution. These targeted attacks focus on specific individuals within healthcare organizations, often senior executives or IT administrators with elevated privileges. Attackers might spend weeks or months gathering intelligence about their targets before launching carefully crafted campaigns.
Business Email Compromise (BEC) attacks have also become common in healthcare settings. These schemes involve attackers compromising legitimate email accounts and using them to request fraudulent wire transfers or sensitive information. The Los Angeles County Department of Mental Health experienced this exact scenario in 2021, when malicious actors obtained login credentials for three employee Microsoft Office 365 accounts through phishing emails that originated from a trusted business partner whose email server had been compromised. The attackers then used these legitimate, trusted email accounts to conduct their operations, potentially exposing Social Security numbers, medical information, and financial account numbers of over 5,000 individuals. This incident shows how BEC attacks exploit trusted relationships within healthcare organizations' communication networks, making detection more challenging than traditional external threats.
Read also: HIPAA compliant email
Most organizations are only beginning to adopt tailored cybersecurity awareness programs focused on AI-enhanced threats.
Few healthcare organizations currently have AI-specific internal use policies or monitoring systems in place.
AI models can analyze sender behavior and communication patterns to spot subtle anomalies that human monitoring may miss.
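As a simplified illustration of that idea, the sketch below keeps a per-sender baseline of past recipients and sending hours and scores how far a new message deviates from it. Production AI tools build far richer behavioral models; the features, class names, and thresholds here are assumptions for demonstration only.

```python
# Toy behavioral anomaly scoring: learn each sender's normal pattern,
# then flag messages that depart from it. Thresholds are arbitrary.
from collections import defaultdict
from datetime import datetime

class SenderBaseline:
    """Tracks each sender's usual recipients and sending hours."""

    def __init__(self) -> None:
        self.recipients = defaultdict(set)   # sender -> recipients seen before
        self.hours = defaultdict(list)       # sender -> hours of past messages

    def observe(self, sender: str, recipient: str, sent_at: datetime) -> None:
        """Record a known-good message in the sender's baseline."""
        self.recipients[sender].add(recipient)
        self.hours[sender].append(sent_at.hour)

    def anomaly_score(self, sender: str, recipient: str, sent_at: datetime) -> float:
        """Return 0.0 (typical) to 1.0 (highly unusual) for a new message."""
        score = 0.0
        # A recipient this sender has never emailed before.
        if recipient not in self.recipients[sender]:
            score += 0.5
        # A send time far outside the sender's usual hours.
        past_hours = self.hours[sender]
        if past_hours and min(abs(sent_at.hour - h) for h in past_hours) > 4:
            score += 0.5
        return score

baseline = SenderBaseline()
baseline.observe("billing@clinic.example", "ap@clinic.example", datetime(2025, 3, 3, 10, 0))
baseline.observe("billing@clinic.example", "ap@clinic.example", datetime(2025, 3, 4, 11, 0))

# A message from the same (possibly compromised) account to a brand-new
# recipient at 2 a.m. scores as anomalous even though its content reads cleanly.
print(baseline.anomaly_score("billing@clinic.example", "wire-desk@bank.example",
                             datetime(2025, 3, 5, 2, 0)))  # -> 1.0
```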
Organizations must ensure AI tools are configured to avoid storing or mishandling protected health information, or risk HIPAA violations.
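One way such a safeguard might look in practice, sketched under the assumption that the analysis pipeline can operate on redacted text, is to strip obvious identifiers before any message content reaches the model. The patterns below are illustrative only and fall well short of full HIPAA de-identification.

```python
# Hypothetical pre-processing step: redact recognizable identifiers from
# email text before it is passed to an AI analysis model. These regexes
# are examples, not a complete PHI de-identification scheme.
import re

REDACTION_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def redact_phi(text: str) -> str:
    """Replace recognizable identifiers with labeled placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

email_body = "Patient DOB 04/12/1968, MRN 00482913, SSN 123-45-6789 needs follow-up."
print(redact_phi(email_body))
# -> Patient DOB [REDACTED DOB], [REDACTED MRN], SSN [REDACTED SSN] needs follow-up.
```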