
AI is making phishing smarter and healthcare systems more vulnerable


Tools like ChatGPT let attackers produce near-perfect phishing emails and fake login pages at scale. A 2020 study published in Telecommunication Systems notes that attackers no longer need technical skill or awkward grammar to "copy the behavior of legitimate websites" and match the tone, formatting, and branding that healthcare staff expect. Phishing is already a massive issue: the study reported 1,220,523 phishing attacks in 2016 (a 65% jump over 2015) and 266,387 unique phishing sites in Q3 2019. 

Phishing caused an estimated global loss of about $9 billion in 2016, and the FBI put phishing-related losses at $2.3 billion between October 2013 and February 2016. Conventional defenses still miss a lot; the study noted older approaches "can recognize only about 20% of phishing attacks." Once credentials are stolen, attackers commonly deploy ransomware or pivot to access patient records, harming patient safety and destroying organizational credibility.

 

How cybercriminals use AI to implement phishing attacks 

 

Human-like language with zero typos

Attackers can now write emails and spin up fake login pages that read like they were written by a trusted colleague: no typos, no clumsy wording, so the messages feel natural and credible. As Professor Gally notes in his special contribution, No Longer Only Human: Language in the Age of AI, “Language educators teach based on theories of language…an abstract framework of concepts, relationships, hypotheses, and claims,” and AI now taps into those same linguistic cues. 

Modern models don’t just mimic vocabulary; they mirror the way humans communicate across different contexts. As the contribution explains, AI systems can “respond appropriately to written prompts…compose well-organized essays, and…converse…almost as if [they were] human,” even when the input contains “typos, grammar mistakes, and nonstandard expressions.”

 

AI voice cloning and deepfake phone calls

Voice cloning and deepfakes raise the stakes further. The paper ‘AI-assisted Tagging of Deepfake Audio Calls using Challenge-Response’ warns that “the rise of AI voice-cloning technology, particularly audio Realtime Deepfakes (RTDFs), has intensified social engineering attacks by enabling real-time voice impersonation that bypasses conventional enrollment-based authentication.” These synthetic voices don’t just sound convincing; they pose what experts call “an existential threat to phone-based authentication systems,” especially as total identity fraud losses reached $43 billion.

Between 2022 and 2023, “deepfake attempts occurred once every five minutes,” and 26% of Americans report encountering deepfake scams, with 9% falling victim. Unlike robocalls, these scams are personalized and “target high-value accounts and circumvent existing defensive measures.”

 

Scalable attacks with automated email generation

Instead of manually crafting scams, adversaries now use deep learning to generate convincing phishing content rapidly. As the study Automated email Generation for Targeted Attacks using Natural Language explains, “Phishers are always looking for automated means for launching fast and effective attack vectors,” and emerging Natural Language Generation systems let them “generate the perfect deceptive email… fine-tuned to create the perfect deception.” 

These models don’t just spam generic templates; they pull wording, structure, and style from real inboxes. The paper describes how “email masquerading is also a popular cyberattack technique” where attackers, after accessing someone’s account, “[c]an study the nature/content of the emails” and then “synthesize targeted malicious emails masqueraded as a benign email by incorporating features observed in the target’s emails.”

 

Why healthcare is the perfect target for AI-driven phishing 

Large health systems span multiple departments, contractors, and third-party vendors, all of which depend on constant electronic communication. In that volume of daily emails and system messages, AI-generated phishing attempts can blend in easily. 

A 2024 study indexed in the Journal of Medical Internet Research noted that “a systematic literature review (SLR) revealed that, between 2018 and 2019, more than 24% of the data breaches in all industries happened within the health care context,” underscoring how frequently attackers exploit communication channels in this sector.

Modern models can mirror the tone, formatting, and terminology of internal communications with striking accuracy, drawing on information from public sources or previous breaches to reference real projects, staff names, or clinical initiatives. Even trained professionals may not question a message that appears routine and contextually relevant.

The SLR confirms that “humans are the weakest link,” with “>70% of data fraud and breaches” tied to human-related threats and phishing attacks. It also found that at least “60% to 70% of health care organizations have witnessed breaches of health information without disclosure,” demonstrating both scale and underreporting.

Resource constraints make the problem worse. Compared to sectors like finance or technology, healthcare organizations often have smaller cybersecurity budgets and leaner security teams. Those teams are already overwhelmed by the volume of threats they face, making it difficult to identify and stop automated phishing campaigns at scale. Many institutions still rely on aging legacy systems and medical devices that lack modern security controls, widening the attack surface.

 

Why traditional email security can’t keep up

A Digital Health study notes, “Findings… revealed that technical threats, such as hacking, phishing, malware, and encryption weaknesses, pose more substantial dangers to DHTs compared to physical threats.” 

Traditional email security tools were built for an earlier generation of threats. They rely on signatures, predefined rules, and basic heuristics to identify suspicious messages. While that may work against known attacks, these methods fall short when faced with phishing emails generated by AI. 

These newer threats can replicate human tone and formatting, reference real healthcare operations, and evolve quickly to bypass filters. As a result, signature-based defenses often miss highly tailored spear-phishing campaigns or attachments carrying advanced malware, creating openings for ransomware and data breaches.
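To make the gap concrete, here is a minimal, purely illustrative sketch. The keyword rules and sample emails are invented for this example (they are not drawn from any real filter), but they show why signature-style matching catches crude scams while polished, context-aware text passes untouched:

```python
import re

# Hypothetical signature-style rules of the kind legacy filters rely on:
# known-bad phrases, crude grammar tells, and classic scam vocabulary.
SIGNATURES = [
    r"dear (customer|user)",
    r"verify your? acount",        # common misspelling in older scams
    r"click here immediately",
    r"winner|lottery|inheritance",
]

def signature_filter(email_text: str) -> bool:
    """Return True if the email matches a known phishing signature."""
    text = email_text.lower()
    return any(re.search(pattern, text) for pattern in SIGNATURES)

legacy_scam = "Dear Customer, click here immediately to verify your acount!"
ai_written = (
    "Hi Dana, per this morning's huddle, the Epic downtime window moved to "
    "6 pm. Please re-confirm your credentials on the scheduling portal today."
)

signature_filter(legacy_scam)  # True: crude wording trips a rule
signature_filter(ai_written)   # False: fluent, contextual text sails through
```

The second message is just as malicious, but nothing in it matches a stored signature, which is exactly the weakness AI-generated phishing exploits.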

The structure of healthcare email environments increases the risk. Legacy systems, forwarding rules, shared mailboxes, and exposed metadata introduce weaknesses that conventional tools are not designed to manage. 

HIPAA compliant email platforms like Paubox offer a more effective defense. These solutions integrate machine learning models trained on large volumes of messaging behavior. Instead of waiting for a known threat signature, they examine language patterns, sender behavior, and message structure in real time to detect subtle anomalies before delivery. They also incorporate healthcare-specific context and live threat intelligence, making it much harder for AI-generated phishing emails to go unnoticed.
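As a rough illustration of the difference in approach, the sketch below scores a message on a few behavioral signals instead of known signatures. The signals, weights, and sample message are all hypothetical, chosen only to show the idea; they do not represent any vendor's actual model:

```python
# A minimal sketch of behavior-based scoring: rather than matching known
# signatures, rate how unusual a message is relative to normal traffic.

def anomaly_score(msg: dict, known_senders: set) -> float:
    """Combine a few illustrative signals into a 0-1 risk score."""
    score = 0.0
    if msg["from_addr"] not in known_senders:
        score += 0.4                      # first-contact sender
    if msg["display_name_domain"] != msg["from_domain"]:
        score += 0.3                      # display name implies a different domain
    if any(w in msg["body"].lower() for w in ("urgent", "verify", "credentials")):
        score += 0.2                      # pressure language
    if msg["link_domain"] != msg["from_domain"]:
        score += 0.1                      # embedded link points elsewhere
    return min(score, 1.0)

msg = {
    "from_addr": "it-support@hosp1tal-login.com",
    "from_domain": "hosp1tal-login.com",
    "display_name_domain": "hospital.org",   # spoofed "IT Support" display name
    "link_domain": "hosp1tal-login.com",
    "body": "Urgent: verify your credentials before 5 pm.",
}
anomaly_score(msg, known_senders={"ehr-alerts@hospital.org"})  # 0.9: quarantine before delivery
```

Because the scoring looks at behavior and context rather than exact wording, rewriting the message with an AI model does not reset the risk score the way it defeats a signature list.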

 

FAQs

What is generative AI?

Generative AI refers to artificial intelligence models that can create new content, such as text, images, audio, code, or video, based on patterns learned from existing data. Tools like ChatGPT, Midjourney, and DALL·E are popular examples.

 

How does generative AI work?

Generative AI uses large datasets to train machine learning models, often neural networks, that learn patterns, structures, and relationships in the data. When prompted, the system predicts and generates content that mimics those learned patterns.
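The predict-the-next-token loop at the heart of these systems can be shown with a toy example. The bigram counting below is a deliberately tiny stand-in for the neural networks real models use, but the generate-by-predicting idea is the same:

```python
from collections import Counter, defaultdict

# Learn which word tends to follow which in a tiny "training set",
# then generate by predicting the most likely continuation.
corpus = "please review the attached invoice and review the attached report".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1          # count observed continuations

def next_token(word: str) -> str:
    """Return the most frequently observed word after `word`."""
    return follows[word].most_common(1)[0][0]

next_token("review")    # 'the' — the most frequent continuation in the data
next_token("attached")  # 'invoice' or 'report' — a tie in this tiny corpus
```

Real models work over billions of documents and predict probability distributions rather than single counts, which is what lets them produce fluent, context-appropriate text rather than parroted fragments.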

 

Can generative AI be used with confidential or regulated data?

Only if the platform has safeguards like data encryption, access controls, and a signed business associate agreement (BAA) when handling protected health information (PHI). Most public AI tools are not HIPAA compliant by default.
