How AI is arming phishing and deepfake attacks

"The global cost of deepfake fraud is expected to reach $1 trillion in 2024," states Srini Tummalapenta, Distinguished Engineer and CTO of Security Services at IBM. The combination of artificial intelligence and social engineering has created a new level of cybersecurity threats. Healthcare organizations now face unprecedented challenges as AI tools enable attackers to craft realistic phishing emails, clone voices with disturbing accuracy, and potentially create convincing video deepfakes, all of which are designed to manipulate staff into compromising protected health information (PHI) and healthcare systems. This technological evolution demands an urgent reassessment of how healthcare entities approach cybersecurity.

 

The AI-powered threat landscape

Realistic phishing

Traditional phishing emails, which are often filled with grammatical errors, suspicious links, and generic greetings, are being replaced by AI-generated communications nearly indistinguishable from legitimate messages. These sophisticated attacks use machine learning to analyze and mimic organizational communication patterns, creating targeted messages that reference specific projects, use appropriate terminology, and perfectly replicate the writing style of trusted colleagues or executives.

According to an article published in the International Journal of Engineering and Technology Research (IJETR), "AI-powered scams exploit machine learning to generate thousands of personalized phishing emails within minutes, each tailored to the recipient’s behavior and vulnerabilities. Unlike manual scams, these bypass traditional detection by mimicking legitimate communications with alarming accuracy."

For healthcare organizations, the implications are especially serious. AI-crafted emails now routinely impersonate hospital administrators, referencing specific patient cases, insurance negotiations, or staff changes, with details often mined from public records or prior breaches to create deceptively authentic requests. An academic paper in the Asian Journal of Research in Computer Science (AJRCOS) states, "AI-powered phishing attacks have increased patient record exposure by 60% since 2021, with credential theft being the primary vector."

 

Voice cloning (vishing)

Even more alarming is the rapid advancement in voice cloning technology. The AJRCOS paper further notes that modern AI systems can generate convincing voice replicas from just 60 seconds of sample audio, easily obtained from conference presentations, webinars, or even voicemail greetings.

In healthcare, these clones are enabling devastating attacks:

 

  • 60% more patient records exposed through voice scams since 2021
  • $15M stolen in the H-M Health breach using a cloned CEO's voice
  • Nearly half of attacks (47.6%) evade detection by current security systems

Attackers now routinely:

  • Impersonate physicians requesting urgent patient information
  • Pose as IT staff conducting "emergency" password resets
  • Mimic executives authorizing fraudulent transactions

The paper goes on to note that these attacks work so well because the human voice carries inherent authority that bypasses normal skepticism, especially in healthcare's high-pressure environments, where 74% of breaches stem from rushed human decisions. Unlike written communications, a familiar voice triggers instinctive trust, making staff more likely to bypass protocols.

 

Emerging deepfake video threats

While currently less common, video deepfakes represent healthcare's next critical threat vector, with synthetic media creation costs now as low as $1.33 per video. The technology's accessibility was demonstrated in February 2024 when a Hong Kong finance worker transferred $25 million to fraudsters after attending a deepfake video call with what appeared to be the company's CFO and other colleagues, all of whom were AI-generated impersonations.

This incident, investigated by Hong Kong police, reveals how attackers are now combining cloned voices with simulated video environments to create convincing scams. 

In healthcare, similar attacks could involve fake video calls from medical directors demanding immediate data transfers for "emergency patient care" or simulated conference calls where deepfaked specialists authorize access to restricted research. The psychological impact is profound, with the AJRCOS paper showing that humans fail to detect 47.6% of high-quality deepfakes, and clinicians are three times more likely to comply with video requests than text-based ones.

 

The amplified risk to PHI and HIPAA compliance

AI-enhanced social engineering increases attack success rates against the healthcare sector, which already faces unique vulnerabilities. The industry's high-value data makes it a prime target. The IJETR article states that medical records command "$1,000 per EHR on dark markets," ten times more valuable than financial data due to their non-expiring nature.

The operational consequences extend beyond data breaches. AI-facilitated attacks often serve as the initial access vector for ransomware deployment, system compromise, or long-term surveillance within healthcare networks. The resulting disruptions can impact patient care, compromise critical medical systems, and create substantial financial burdens, with healthcare breaches costing $9.8 million on average, nearly double the cross-industry norm, according to an IBM report.

From a regulatory perspective, the implications are equally serious. HIPAA requires covered entities to implement security measures sufficient to reduce risks and vulnerabilities to a reasonable and appropriate level. As threat techniques evolve, so too must the defensive measures considered reasonable and appropriate, creating compliance challenges for organizations still relying on conventional security approaches in an era where 74% of breaches exploit legacy system gaps, as stated in the IJETR article.

The cybersecurity community has been sounding alarms about AI-enhanced social engineering with increasing urgency. According to the AJRCOS researchers, "AI-powered scams have surged by 300% since 2023, with healthcare now the most targeted sector."

 

Detection challenges

Traditional security tools are failing against AI-powered attacks, with the AJRCOS article revealing that 47.6% of deepfake voice clones and AI-generated phishing emails bypass current detection systems. Conventional phishing filters rely on identifying suspicious domains, known malicious patterns, or grammatical errors, criteria that no longer apply because, as the IJETR paper documents, "AI now crafts flawless, personalized lures using stolen healthcare data."
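
To make the gap concrete, below is a minimal, hypothetical sketch of the legacy heuristics described above; the trusted domain, phrases, and scoring weights are illustrative, not taken from any real product:

    import re

    # Legacy heuristics only: unfamiliar domain, known bad phrases, misspellings.
    TRUSTED_DOMAINS = {"examplehealth.org"}        # hypothetical org domain
    SUSPICIOUS_PHRASES = ["verify your account", "urgent wire transfer"]
    COMMON_MISSPELLINGS = {"recieve", "acount", "pasword"}

    def legacy_phish_score(sender_domain, body):
        """Crude pre-AI risk score; higher means more suspicious."""
        score = 0
        if sender_domain not in TRUSTED_DOMAINS:
            score += 1                             # unfamiliar sender domain
        lowered = body.lower()
        score += 2 * sum(phrase in lowered for phrase in SUSPICIOUS_PHRASES)
        words = set(re.findall(r"[a-z]+", lowered))
        score += len(words & COMMON_MISSPELLINGS)  # spelling and grammar tells
        return score

A polished, AI-generated lure sent from a plausible internal address exhibits none of these signals, so it scores near zero and passes straight through.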

Voice authentication offers no solution. The AJRCOS study also states, "High-quality AI voice clones achieve 90.5% detection recall rates in lab tests—yet still produce 47.6% false negatives in real-world healthcare deployments." This gap widens during emergencies, when stress impairs staff’s ability to spot subtle inconsistencies.

 

Beyond traditional training

Defending against these advanced threats requires a multi-layered approach that combines technological defenses, process improvements, and enhanced training:

 

Advanced technical controls

"Healthcare organizations must move to modern, cloud-hosted email systems as a baseline for security," advises David Chou, Founder of Chou Group Healthcare Technology Advisory Services. "Equally important is ongoing education to protect staff from phishing and social engineering, which continue to be the most effective tactics used by attackers."

AI-powered threat detection tools can help identify subtle anomalies in email communications that human reviewers might miss. Zero-trust architecture and multi-factor authentication, preferably using phishing-resistant methods like security keys, are essential for limiting the damage when credentials are compromised.
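
As one concrete illustration of the subtle anomalies such tools look for, here is a minimal sketch that flags sender domains closely resembling, but not matching, a trusted domain; the domains and threshold are hypothetical:

    from difflib import SequenceMatcher

    # Hypothetical trusted domains for the organization.
    TRUSTED_DOMAINS = ["examplehealth.org", "examplehealth-billing.org"]

    def lookalike_risk(sender_domain, threshold=0.85):
        """Flag domains that nearly match, but do not equal, a trusted domain."""
        for trusted in TRUSTED_DOMAINS:
            similarity = SequenceMatcher(None, sender_domain, trusted).ratio()
            if similarity >= threshold and sender_domain != trusted:
                return True    # e.g. "examp1ehealth.org" imitating the real domain
        return False

    print(lookalike_risk("examp1ehealth.org"))    # True: one-character swap
    print(lookalike_risk("examplehealth.org"))    # False: exact trusted domain

An attacker registering a one-character variant of a hospital's domain trips the similarity check, while legitimate mail from the exact domain passes.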

 

Enhanced verification protocols

Organizations must implement strict out-of-band verification procedures for sensitive requests. These might include the following (a minimal code sketch of the pattern appears after the list):

  • Requiring separate authentication channels for high-risk actions (e.g., a secure app confirmation in addition to email approval)
  • Establishing mandatory call-back procedures using independently verified phone numbers for requests involving financial changes or PHI access
  • Creating tiered approval systems for unusual data transfers or account modifications
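
Here is a minimal sketch of the out-of-band pattern, assuming a hypothetical send_secure_app_prompt() that reaches the user on a separately enrolled device; every name and action below is illustrative:

    import secrets

    # Hypothetical set of actions requiring a second, independent channel.
    HIGH_RISK_ACTIONS = {"phi_export", "wire_transfer", "account_change"}

    def send_secure_app_prompt(user_id, challenge):
        """Stand-in for a push notification to a separately enrolled device."""
        print(f"[secure app] {user_id}: approve this request with code {challenge}")

    def authorize(action, user_id, get_user_response):
        """Approve high-risk actions only after out-of-band confirmation."""
        if action not in HIGH_RISK_ACTIONS:
            return True                           # routine actions pass through
        challenge = secrets.token_hex(3)          # short one-time code
        send_secure_app_prompt(user_id, challenge)
        return get_user_response() == challenge   # email approval alone never suffices

The value of the pattern is that an attacker who controls the email channel still cannot produce the one-time code delivered to the second device.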

 

Specialized training

Traditional security awareness programs must evolve to specifically address AI-driven threats. Staff need to understand the sophisticated nature of these attacks, the subtle cues that might indicate deception, and the importance of following verification procedures even when communications appear completely legitimate.

 

Lessons learned

  • Public information provides attack material: Conference presentations, webinars, and social media posts offer attackers rich sources of audio, video, and organizational knowledge to create convincing impersonations. Healthcare organizations must recognize that their public communications may inadvertently provide resources for social engineering attacks.
  • Emergency exceptions create vulnerabilities: Many healthcare breaches exploit emergency override protocols designed for patient care situations. Organizations must design emergency procedures that maintain security while accommodating genuine urgent needs, perhaps requiring multiple, independent verifications rather than eliminating verification entirely.
  • Multi-vector attacks are increasingly common: Rather than relying on a single approach, sophisticated attackers now combine AI-generated emails, voice calls, and potentially video interactions to build credibility across multiple channels. Defense strategies must similarly span all communication methods.
  • Verification failures cascade quickly: In documented incidents, initial access obtained through social engineering is rapidly leveraged to gain broader network access. The window for detection before significant damage occurs is often measured in hours, not days.
  • Technical indicators may be absent: Unlike traditional attacks that leave detectable signatures, these AI-enhanced approaches often appear legitimate even to security monitoring systems. The primary protection must come from procedural controls and human verification.

"We encountered a significant case where a medical group became the target of a sophisticated phishing attack," notes Matt Murren, CEO of True North ITG. "The consequences were severe. The ransomware attack rendered the organization's systems inaccessible for nearly two weeks, routine appointments were delayed or canceled, and urgent care cases had to be diverted to other facilities."

 

FAQs

What is an AI-generated deepfake?

AI-generated deepfakes are artificial videos, images, or audio recordings that realistically mimic real people. Using deep learning technology, these fakes can make it appear as though someone said or did something they never actually did, creating convincing impersonations that can fool both humans and some security systems.

 

What is social engineering?

Social engineering is a type of attack that manipulates people into divulging confidential information or performing actions that compromise security. Rather than exploiting technical vulnerabilities, these attacks target human psychology using deception, manipulation, and impersonation to trick people into breaking normal security procedures.

 

How is AI-powered social engineering different from traditional attacks?

Traditional social engineering relies on generic tricks and often contains red flags like spelling errors or unusual requests. AI-powered attacks are personalized, contextually accurate, and technically flawless. They reference real organizational details, mimic known contacts perfectly, and can coordinate across multiple communication channels simultaneously.
