Phishing has always been a persistent threat to healthcare, but the tactics employed by attackers have evolved dramatically over the years. A Tech Science Press article titled ‘Phishing Attacks Detection Using Ensemble Machine Learning Algorithms’ notes the following about phishing: “The nature of these attacks makes it difficult for humans to distinguish between legitimate and phishing attacks.”
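To make the ensemble idea concrete, below is a minimal sketch of the kind of detector that article describes: several simple classifiers voting on whether an email is phishing. It assumes scikit-learn is available; the feature names and training rows are invented placeholders, not the study’s actual dataset or model.

```python
# Minimal sketch of ensemble phishing detection, assuming scikit-learn.
# Features and training data are illustrative placeholders, not the cited study's dataset.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-email features:
# [num_links, has_urgent_language, sender_domain_age_days, num_spelling_errors]
X_train = [
    [12, 1, 3, 9],     # phishing-like: many links, urgency, brand-new domain, many errors
    [1, 0, 4200, 0],   # legitimate-like
    [8, 1, 10, 4],
    [2, 0, 3650, 1],
]
y_train = [1, 0, 1, 0]  # 1 = phishing, 0 = legitimate

# Majority-vote ensemble over three different base learners
ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression()),
        ("tree", DecisionTreeClassifier(max_depth=3)),
        ("forest", RandomForestClassifier(n_estimators=50)),
    ],
    voting="hard",
)
ensemble.fit(X_train, y_train)

# Score a new, unseen email
print(ensemble.predict([[15, 1, 7, 0]]))  # likely flagged as phishing (1)
```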
Early phishing attacks were often crude, relying on generic messages riddled with grammatical errors and suspicious links. These emails were relatively easy for both users and traditional security filters to spot. As technology advanced, so did the sophistication of these attacks. Context-aware phishing emerged, leveraging publicly available information and social engineering to craft convincing messages.
This is especially concerning in healthcare, where staff routinely handle sensitive information and are accustomed to rapid, high-stakes communication, making them more susceptible to well-crafted lures. The advent of artificial intelligence (AI) has further transformed the phishing landscape. AI-powered phishing attacks utilize machine learning algorithms and natural language processing to generate emails that closely mimic legitimate correspondence, often referencing specific projects, patient cases, or internal terminology.
This level of personalization dramatically increases the likelihood of a successful breach. The frequency and success rate of phishing attacks in healthcare have increased markedly, with a notable uptick in incidents involving advanced social engineering techniques, as the article cited above illustrates. For example, a 2022 NCBI study on healthcare cybersecurity breaches found that the majority of incidents involved email as the attack vector, with attackers increasingly using publicly available data to tailor their messages to specific individuals or departments.
Automation increases the volume of attacks and their effectiveness, as AI-generated messages are less likely to be flagged by traditional security systems. While documented instances of AI-generated phishing emails are still relatively rare in published literature, case studies and breach reports suggest that the technology is being actively used, particularly in high-value sectors like healthcare.
The healthcare sector has long been a prime target for phishing attacks and threat actors, a reality underscored by both the volume and impact of breaches reported in the literature. At the heart of this targeting is the unique value and sensitivity of healthcare data. Protected health information (PHI) is not only highly personal but also extremely valuable on the black market, often fetching a higher price than financial data due to its utility in identity theft, insurance fraud, and blackmail.
An example of this attack is the Change Healthcare data breach. In February 2024, Change Healthcare, a major U.S. medical claims processor and subsidiary of UnitedHealth Group, experienced the largest healthcare data breach in history. The BlackCat/ALPHV ransomware group infiltrated the company’s network, exfiltrated sensitive data, and deployed ransomware that crippled operations. The attackers gained access using compromised credentials for a Citrix portal that did not have multifactor authentication enabled. This is an example of credential harvesting, which is often carried out through phishing or related social engineering tactics.
According to a journal article published in the Journal of Medical Internet Research on cybersecurity during the COVID-19 pandemic, “Cybercrime adapts to changes in the world situation very quickly... malware cyberattackers identified common vulnerabilities and adapted their attacks to exploit these vulnerabilities.”
HIPAA-compliant email systems are designed to ensure the confidentiality, integrity, and availability of PHI through a combination of encryption, access controls, and audit mechanisms. The rise of AI-driven phishing introduces new complexities that these systems were not originally designed to address.
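As a rough sketch of how those three safeguards fit together in an email workflow, the snippet below layers an access-control check, transport encryption, and an audit record around an outgoing message. Every name here (the sender list, the host smtp.example-hospital.org, the send_phi_email helper) is hypothetical, and a real HIPAA-compliant deployment involves far more than this.

```python
# Minimal sketch of the three safeguard layers mentioned above: access control,
# transport encryption, and an audit trail. All hosts, addresses, and policies are
# hypothetical placeholders, not a real compliant configuration.
import logging
import smtplib
from email.message import EmailMessage

logging.basicConfig(filename="phi_email_audit.log", level=logging.INFO)
audit_log = logging.getLogger("phi_email_audit")

AUTHORIZED_SENDERS = {"nurse.jones@example-hospital.org"}  # placeholder access-control list

def send_phi_email(sender: str, recipient: str, subject: str, body: str) -> None:
    if sender not in AUTHORIZED_SENDERS:                       # access control
        audit_log.warning("DENIED sender=%s recipient=%s", sender, recipient)
        raise PermissionError("Sender not authorized to transmit PHI")

    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = sender, recipient, subject
    msg.set_content(body)

    with smtplib.SMTP("smtp.example-hospital.org", 587) as smtp:
        smtp.starttls()                                        # encrypt the transport channel
        smtp.send_message(msg)

    audit_log.info("SENT sender=%s recipient=%s subject=%s", sender, recipient, subject)  # audit trail
```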
AI-generated phishing emails are often indistinguishable from legitimate communications, leveraging advanced natural language processing to mimic the tone, style, and context of internal messages. This allows attackers to bypass traditional security filters, which rely on known signatures, keywords, or suspicious patterns.
AI can automate the creation of highly personalized emails at scale, targeting specific individuals or departments with messages that reference real projects, patients, or organizational events. This level of targeting increases the likelihood of successful credential theft or unauthorized access, directly testing the effectiveness of access controls and authentication mechanisms mandated by HIPAA.
Another challenge is the exploitation of human factors. AI-powered attacks exacerbate this risk by crafting messages that are contextually relevant and emotionally compelling, making it difficult for recipients to distinguish between legitimate and malicious communications. AI-driven phishing can also adapt, learning from failed attempts and refining its approach to evade detection. This adaptability puts pressure on HIPAA-compliant email systems not only to detect and block known threats but also to anticipate and respond to novel attack vectors.
A study published in Healthcare, ‘Healthcare Data Breaches: Insights and Implications’, noted the vulnerabilities that attackers exploit: “Due to software vulnerabilities, security failures, and human error, these databases are sometimes accessed by unauthorized users.”
Traditional email security measures, like spam filters and signature-based detection, are designed to identify known threats based on predefined rules or patterns. AI-powered phishing emails are often unique, contextually relevant, and free of the typical markers that trigger these defenses. Attackers are now using AI to craft messages that closely mimic legitimate communications, making it difficult for both users and automated systems to distinguish between real and fake emails. This allows phishing emails to bypass filters and reach their intended targets, increasing the risk of successful breaches.
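The limitation is easier to see with a toy example. The rule-based filter sketched below flags messages only when they match a stored signature or contain a trigger phrase, so a reworded, context-aware lure sails through. The hash set and phrase list are invented for illustration, not any vendor’s actual rules.

```python
# Toy illustration of signature/rule-based filtering and why it misses novel wording.
# The hash set and keyword list are invented examples, not a real vendor rule set.
import hashlib

KNOWN_BAD_HASHES = {"5d41402abc4b2a76b9719d911017c592"}       # hashes of previously seen phishing bodies
SUSPICIOUS_PHRASES = ["verify your account", "click here immediately", "password expired"]

def is_flagged(email_body: str) -> bool:
    body_hash = hashlib.md5(email_body.lower().encode()).hexdigest()
    if body_hash in KNOWN_BAD_HASHES:                          # signature match on a known campaign
        return True
    return any(p in email_body.lower() for p in SUSPICIOUS_PHRASES)  # simple keyword rules

# A reworded, context-aware lure slips past both checks:
lure = "Dr. Patel, the Q3 oncology scheduling sheet you requested is ready for your sign-off."
print(is_flagged(lure))  # False: no known signature, no trigger phrases
```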
Encryption does not prevent users from being tricked into disclosing credentials or other sensitive information. Access controls and multi-factor authentication add important layers of security, but they are not foolproof. If a user is deceived by a convincing phishing email and voluntarily provides their credentials, these controls can be circumvented. There is also the challenge of human factors. Despite ongoing training and awareness programs, healthcare staff remain susceptible to social engineering, particularly when messages appear urgent or reference real organizational events.
The sheer volume of email communications in healthcare settings further exacerbates this risk, as staff may overlook subtle cues that indicate a phishing attempt. Additionally, the decentralized nature of healthcare IT environments, with multiple systems and vendors, creates numerous points of entry for attackers. Incident response plans and audit logs are needed for detecting and responding to breaches, but they are reactive rather than preventive.
Phishing is a cyberattack in which an attacker impersonates a trusted entity, often via email, to trick individuals into revealing sensitive information, such as login credentials, financial data, or personal details. Variations include spear-phishing (targeted at specific individuals), whaling (targeting high-profile executives), smishing (via SMS), and vishing (voice calls).
Phishing is often the initial access point for ransomware attacks. Once credentials are stolen or malware is delivered via a phishing email, attackers can deploy ransomware to encrypt systems and demand payment for decryption keys.
Ransomware is a type of malicious software (malware) that encrypts a victim’s files or systems, rendering them inaccessible. Attackers then demand a ransom payment, often in cryptocurrency, in exchange for the decryption key. If the ransom is not paid, data may remain locked or be leaked publicly.
Double extortion is when attackers not only encrypt data but also steal it. They threaten to publish or sell sensitive information if the ransom is not paid, increasing pressure on victims to comply.