
The AI arms race in healthcare cybersecurity


Healthcare organizations face a threat landscape in which both cybercriminals and defenders now routinely use artificial intelligence. As email-based attacks grow more sophisticated and traditional security measures struggle to keep pace, medical institutions are being forced to rethink their cybersecurity strategies. This escalating threat has created a need for equally sophisticated AI-powered defenses, marking the beginning of what security experts call the "AI arms race" in cybersecurity.

 

The AI arms race

Cybersecurity has evolved into an AI-powered arms race, with both attackers and defenders leveraging machine learning technologies. As noted in The AI arms race in cybersecurity: Why trust is the ultimate defense published in Security Magazine, "Artificial intelligence (AI) is reshaping cybersecurity at a pace that few anticipated. It is both a weapon and a shield, creating an ongoing battle between security teams and cybercriminals." 

However, this pace of AI development brings challenges. As Blake Murdoch notes in Privacy and artificial intelligence: challenges for protecting health information in a new era, "We are currently in a situation in which regulation and oversight risk falling behind the technologies they govern. Given we are now dealing with technologies that can improve themselves at a rapid pace, we risk falling very behind, very quickly."

On the attack side, AI enables cybercriminals to automate and scale their operations in unprecedented ways. The U.S. Department of Health and Human Services' Health Sector Cybersecurity Coordination Center (HC3) highlights how accessible these tools have become. Its 2023 white paper, "AI-Augmented Phishing and the Threat to the Health Sector," notes that platforms like "FraudGPT" are available "for a relatively cheap price – a $200 per month or $1700 per year subscription fee – which makes it well within the price range of even moderately-sophisticated cybercriminals."

Machine learning algorithms can analyze vast amounts of data about potential targets, automatically generating personalized phishing emails that adapt based on the recipient's likely responses. These systems can create thousands of unique, targeted messages with minimal human intervention.

Generative AI technologies have revolutionized the quality of phishing content. As the Security Magazine article observes, "AI-generated phishing emails are nearly indistinguishable from legitimate messages, tricking even the most cautious recipients. Machine learning models help attackers refine their techniques, making malware more evasive and adaptive." Where previous attacks were often identifiable by poor grammar or generic language, AI-generated content can match the writing style and terminology specific to healthcare organizations. An AI system might generate a phishing email that perfectly mimics the communication style of a particular hospital's administration, complete with appropriate medical terminology and organizational references.

The HC3 white paper provides real-world examples of AI-powered fraud, including a case where "in October 2021, a Hong Kong bank manager was allegedly scammed into authorizing transfers worth $35 million from a 'deep voice' technology scheme impersonating the voice of the company's director."

Voice synthesis and deepfake technologies add another dimension to these attacks. Cybercriminals can now create convincing audio recordings of executives or colleagues, supporting email-based social engineering with additional layers of deception. These technologies enable pretexting attacks where criminals might call healthcare workers while simultaneously sending supporting email evidence of their claimed authority.

The speed and scale of AI-powered attacks present challenges for healthcare organizations. As the Security Magazine article notes, "The result is a cybersecurity landscape where organizations must constantly evolve their strategies to keep pace with AI-driven threats. The challenge is not just staying ahead, it is ensuring that AI remains a force for protection rather than exploitation." Traditional security approaches that rely on human analysis and response are simply too slow to keep pace with automated attack systems that can launch thousands of targeted campaigns simultaneously.

 

Why traditional security measures fall short

The inadequacy of traditional email security measures in healthcare environments stems from several limitations that become apparent as threats evolve. The Paubox research report, "Healthcare IT is dangerously overconfident about email security," highlights a key vulnerability: while rules-based filters still form an essential baseline and establish a necessary first layer of defense, 44% of healthcare organizations stop there, relying solely on these legacy solutions.

This creates gaps in their defenses, as these systems alone can't match the sophistication and adaptability of AI-generated threats. As cybersecurity expert Amy Larson DeCarlo, Principal Analyst at Global Data, notes, "Cybercriminals are exploiting the biggest vulnerability within any organisation: humans. As progress in artificial intelligence (AI) and analytics continues to advance, hackers will find more inventive and effective ways to capitalise on human weakness in areas of (mis)trust, the desire for expediency, and convenient rewards."

Signature-based detection systems, which form the backbone of many traditional security solutions, rely on identifying known malicious patterns or indicators. These systems maintain databases of known threats and compare incoming emails against these signatures. However, as noted in The Need For AI-Powered Cybersecurity to Tackle AI-Driven Cyberattacks, "Traditional security tools are not capable of detecting security vulnerabilities that have never been encountered in the past." This approach fails against novel attacks and can be easily circumvented by attackers who slightly modify their techniques.
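To illustrate why this model breaks down, here is a minimal, purely hypothetical sketch in Python of a signature-based check. The indicator list, field names, and sample message are invented for illustration, not drawn from any vendor's ruleset: a message is flagged only if its sender, subject, or attachment hash already appears in the threat database, so a freshly generated phishing email with a new domain, new wording, and new payload passes untouched.

# Illustrative only: a toy signature-based email filter with invented indicators.
KNOWN_BAD_SIGNATURES = {
    "senders": {"billing@paypa1-secure.example.com"},           # previously observed spoofed sender
    "subjects": {"Urgent: verify your account now"},            # previously observed phishing subject
    "attachment_hashes": {"44d88612fea8a8f36de82e1278abb02f"},  # previously observed malicious file hash
}

def is_flagged(email: dict) -> bool:
    """Flag an email only if it matches a signature already in the database."""
    return (
        email.get("sender") in KNOWN_BAD_SIGNATURES["senders"]
        or email.get("subject") in KNOWN_BAD_SIGNATURES["subjects"]
        or any(h in KNOWN_BAD_SIGNATURES["attachment_hashes"]
               for h in email.get("attachment_hashes", []))
    )

# A novel, AI-generated phish uses a fresh domain, subject, and payload,
# so it matches nothing in the database and sails through unflagged.
novel_phish = {
    "sender": "billing@paypal-support-desk.example.com",
    "subject": "Action required on invoice 20391",
    "attachment_hashes": ["9a0364b9e99bb480dd25e1f0284c8555"],
}
print(is_flagged(novel_phish))  # False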

The complexity of AI systems also introduces new challenges for traditional oversight mechanisms. As Murdoch explains, AI can be opaque due to the "black box" problem: "This opacity may also apply to how health and personal information is used and manipulated if appropriate safeguards are not in place."

The Paubox report illustrates this limitation with a real-world example: one organization's legacy system flagged over 200 marketing emails as threats, creating excessive false positives, yet it would have missed a spoofed email impersonating the CFO, a miss that could have resulted in a $70,000 loss, had an advanced AI-powered detection solution not been in place as a second layer of defense.

In healthcare environments, signature-based systems face additional challenges. Medical organizations frequently communicate with external partners, vendors, and regulatory bodies, creating a web of legitimate communications that traditional systems struggle to differentiate from sophisticated attacks. The high volume of external communications increases the likelihood of false positives, potentially blocking medical information.

Traditional systems also struggle with the context-awareness required for effective healthcare security. A communication that is perfectly legitimate coming from a cardiologist might be suspicious coming from an administrative assistant. Traditional systems lack the understanding of organizational roles and responsibilities needed to make these distinctions effectively.
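The sketch below uses an invented role map and invented request types to show the kind of role-aware judgment a static, role-blind filter cannot make: the same request is treated differently depending on who appears to be making it.

# Illustrative only: the roles and request types below are hypothetical examples.
ROLE_EXPECTED_REQUESTS = {
    "cardiologist": {"lab_results", "imaging_orders", "referrals"},
    "administrative_assistant": {"scheduling", "supply_orders"},
}

def is_out_of_character(sender_role: str, request_type: str) -> bool:
    """Flag requests that fall outside what the sender's role normally makes."""
    return request_type not in ROLE_EXPECTED_REQUESTS.get(sender_role, set())

# A wire-transfer request from an administrative assistant's account is out of
# character for that role and warrants extra scrutiny; an imaging order from a
# cardiologist is routine.
print(is_out_of_character("administrative_assistant", "wire_transfer"))  # True
print(is_out_of_character("cardiologist", "imaging_orders"))             # False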

The challenge of zero-day attacks represents another limitation. Traditional security measures are inherently reactive, requiring known examples of threats before they can provide protection. Modern attackers, by contrast, continuously develop new techniques, meaning there is always a window of vulnerability between the development of a new attack method and the deployment of a corresponding defense.

According to an article published by The AI Journal, "Traditional manual compliance processes are increasingly proving inadequate, time-consuming, and prone to human error, potentially exposing healthcare organizations to significant risks and penalties." This broader challenge extends beyond email security to include the entire spectrum of healthcare cybersecurity management.

 

The AI and machine learning advantage

Artificial intelligence and machine learning technologies offer healthcare organizations several advantages in email security that address the limitations of traditional approaches.

Pattern recognition is chief among these advantages. Unlike traditional systems that rely on predetermined rules or signatures, machine learning algorithms can identify subtle patterns and anomalies that might indicate malicious activity. As Mike Britton explains in Countering the Rise of Email Threats Against Healthcare, "By learning and baselining 'normal' email behavior, these solutions can detect and block malicious anomalies" before they reach employees' inboxes. These systems analyze multiple dimensions of email communications simultaneously, including sender behavior, content analysis, timing patterns, and network metadata.

In healthcare contexts, this pattern recognition is especially valuable. AI systems can learn the normal communication patterns within medical organizations, understanding workflows, communication chains, and information-sharing practices. When emails deviate from these established patterns, the system can flag them for additional scrutiny or automatic blocking.
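As a rough illustration of behavioral baselining, and not a description of any particular vendor's product, the following Python sketch fits scikit-learn's IsolationForest on a handful of made-up features of normal traffic (send hour, recipient count, an external reply-to flag, and link count) and then scores a message that deviates from that baseline.

# A minimal sketch of behavioral baselining; features and values are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [send hour, recipient count, external reply-to flag, link count]
normal_traffic = np.array([
    [9, 1, 0, 1], [10, 3, 0, 0], [14, 2, 0, 2], [11, 1, 0, 1],
    [15, 4, 0, 0], [9, 2, 0, 1], [13, 1, 0, 3], [16, 2, 0, 1],
])

detector = IsolationForest(contamination=0.05, random_state=42)
detector.fit(normal_traffic)  # learn a baseline of this organization's "normal"

# A 2 a.m. message to 40 recipients with an external reply-to and eight links
# deviates sharply from the baseline; a prediction of -1 marks it as anomalous.
suspicious_message = np.array([[2, 40, 1, 8]])
print(detector.predict(suspicious_message))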

As noted in the Security Magazine article, "Companies that use AI to detect and respond to threats proactively can reassure customers that their data is safe. Organizations integrating AI-driven security measures into their products demonstrate a commitment to protecting user information." 

However, the implementation of AI systems must be carefully managed. As Murdoch notes: "A 2018 survey of four thousand American adults found that only 11% were willing to share health data with tech companies, versus 72% with physicians. Moreover, only 31% were 'somewhat confident' or 'confident' in tech companies' data security."

The speed of AI-based analysis enables healthcare organizations to respond to threats in real-time. While human analysts might take hours or days to thoroughly investigate suspicious communications, AI systems can make initial threat assessments within seconds or minutes. 

Read also: ExecProtect+ for comprehensive display name spoofing protection

 

Data privacy and anonymization challenges

The implementation of AI-powered email security in healthcare brings additional privacy considerations that organizations must address. Modern AI systems require access to large amounts of email data to function effectively, raising questions about data protection and patient privacy.

A particular concern emerges around the anonymization of healthcare data used in AI systems. Murdoch highlights that "A number of recent studies have highlighted how emerging computational strategies can be used to identify individuals in health data repositories managed by public or private institutions. And this is true even if the information has been anonymized and scrubbed of all identifiers."

This vulnerability extends to email security implementations where patient information might be contained within communications. Traditional anonymization techniques may not provide adequate protection against sophisticated re-identification algorithms. As Murdoch further explains, modern "techniques of re-identification effectively nullify scrubbing and compromise privacy."

Healthcare organizations implementing AI-powered email security must therefore consider not only the immediate security benefits but also the long-term privacy implications of storing and analyzing large volumes of healthcare communications. This includes ensuring that AI systems are designed with privacy-by-design principles and that appropriate safeguards are in place to prevent unauthorized access or misuse of patient information.
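One small example of a privacy-by-design safeguard is to redact obvious identifiers from message text before it is stored or analyzed. The sketch below uses a few hypothetical regular expressions for medical record numbers, Social Security numbers, phone numbers, and dates of birth; a production system would rely on vetted PHI-detection tooling rather than a handful of patterns like these.

# Illustrative only: the patterns and sample text below are hypothetical
# and far from exhaustive.
import re

REDACTION_PATTERNS = {
    "MRN": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "DOB": re.compile(r"\bDOB:?\s*\d{1,2}/\d{1,2}/\d{4}\b", re.IGNORECASE),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labeled placeholders before analysis."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

sample = "Follow-up for MRN 20488131, DOB 04/12/1987, call 555-201-4433."
print(redact(sample))
# Follow-up for [MRN REDACTED], [DOB REDACTED], call [PHONE REDACTED].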

 

FAQs

What safeguards can healthcare organizations implement to detect AI-generated content in phishing emails?

Techniques like linguistic pattern analysis, behavioral baselining, and AI authenticity scoring are being explored, but are not yet widespread.

 

How do healthcare organizations balance the benefits of AI detection with patient trust and consent concerns?

They often implement privacy-by-design principles and attempt to keep AI operations invisible to patients, though public skepticism remains high.

 

What are the ethical considerations of using AI that analyzes communications involving patient data?

Ethical concerns include informed consent, potential bias in decision-making, and risks of surveillance beyond intended use.

 

How might adversarial AI be used to poison or mislead healthcare security systems?

Cybercriminals can introduce subtle data distortions to train or trick AI models into ignoring malicious activity.

 

Is there a risk that AI-generated false positives will lead to critical healthcare communication delays?

Yes, overly aggressive AI filters can flag legitimate medical communications, risking treatment delays or compliance issues.
