How AI-enabled social engineering is moving from theory to operations
Mara Ellis
April 22, 2026
AI-enabled social engineering is the use of artificial intelligence to sharpen the psychological manipulation of a target into clicking, replying, revealing, approving, resetting, enrolling, transferring, or trusting. AI-generated phishing emails read cleanly and match a recipient’s tone; voice-cloned calls target help desks; deepfake video or audio impersonates an executive or clinician; fake social profiles supply a believable pretext; and automated content generation lets an attacker run many customized campaigns at once. The FBI’s December 2024 public service announcement warns that AI can produce realistic spear-phishing content, fake profiles, voice clones, and fake executive video chats across text, images, audio, and video.
In healthcare, “AI-enabled” means the attack surface extends beyond the inbox to patient portal enrollment, telehealth, payer workflows, HR onboarding, help-desk identity checks, and any other process where a staff member must decide whether a request is real. HHS’s white paper on AI-augmented phishing notes that attacks in healthcare “often begin with a successful phishing attack,” and that the advent of AI has made such attempts more effective. HHS’s 2024 sector briefing adds that AI is lowering the barrier to entry for cybercriminals and increasing attack sophistication.
Why the shift from theory to operations matters
Traditional social engineering was labor-intensive. An attacker had to research a target, write a believable message, sometimes translate it poorly, impersonate a voice badly, and hope the target did not notice the mistakes. According to the FBI, generative AI makes deception easier and faster and removes the errors that once served as warning signs. A review of AI deception published in Patterns reaches the same conclusion in more academic language: AI deception “not only increases the efficacy but also its scale.”
The operational shift has three effects. First, personalization costs less: instead of one generic phishing email, the attacker can craft ten tailored ones for a billing manager, a practice administrator, a revenue-cycle director, a nurse manager, and a help-desk analyst. Second, campaigns mix modalities: an email is followed by a text, then a call with a cloned voice, and finally a fake portal or lookalike domain. Third, the attacker can adapt continuously, dropping lures that fail and scaling the ones that work.
HHS’s April 2024 healthcare help-desk alert said threat actors were calling from numbers with local area codes, using personal information stolen from social media sites and previous breaches to persuade staff to enroll a new MFA device. The joint FBI-HHS advisory on healthcare social engineering records the same pattern: phishing or pretexting to obtain credentials, tricking the help desk into bypassing MFA, using lookalike domains, and then diverting ACH payments.
AI-enabled social engineering targeting healthcare organizations
A December 2025 story from the American Hospital Association links the FBI’s broader deepfake fraud warning to the realities of healthcare operations. John Riggi, AHA national advisor for cybersecurity and risk, said criminals are increasingly using AI-generated audio and video to deceive healthcare workers, and pointed to concrete effects: phishing clicks, stolen credentials, fraudulent remote-worker hiring, and unauthorized fund transfers. That framing moves the discussion from theory to operational risk. Deepfakes are no longer just a communications problem; they are becoming a risk to workflows, staff, and money.
A June 2025 story involving the American Hospital Association and CMS shows how traditional healthcare fraud is converging with synthetic impersonation tactics. CMS found a phishing-fax scheme targeting providers and suppliers. At the same time, AHA reported an increase in social engineering attempts aimed at hospital IT, HR, vendor, and patient-portal help desks through a mix of phone calls, texts, and fake audio and video.
Taken together, those warnings show healthcare organizations were already facing blended, multi-channel attacks rather than one suspicious email at a time. Attackers move across departments, systems, and communication channels to make deception look routine and believable.
An August 2025 CBS News investigation broadens the issue from internal operations to public trust. CBS found dozens of accounts and more than 100 videos using fabricated doctors or the names of real physicians to sell beauty, health, and weight-loss products; some of those videos drew millions of views. These campaigns cause harm even without touching a hospital network: they exploit the trust people place in clinicians, blur real and fake medical expertise, and condition the public to accept synthetic medical signals.
How a modern AI-enabled social engineering attack works
A modern attack usually begins with intelligence collection, not malware. The attacker harvests information from job postings, licensing records, physician biographies, payer relationships, staff directories, LinkedIn-style profiles, prior breach data, and vendor information. HHS’s health-sector help-desk alert says attackers used demographic details, SSN fragments, and corporate IDs likely obtained from social networking sites and previous breaches. The FBI’s AI-fraud advisory explains that generative AI can also mass-produce fake social profiles and supporting content.
The next step is building the pretext, and this is where AI excels. A large language model can draft a convincing “urgent invoice correction,” “MFA reset,” “updated payer enrollment form,” or “portal migration” email in the victim’s preferred tone, with clean grammar and healthcare-specific language. To strengthen the pretext, the attacker can layer on a cloned voicemail, a text message, a lookalike domain, or a video clip that appears to come from a clinician or executive.
After that, the attacker picks the channel with the least friction. In healthcare, the least-questioned workflow is often the one under time pressure: a help-desk call from a number that looks local, a fax request styled to come from CMS, a message about a patient-portal problem, or a billing-help request to the revenue cycle team. HHS’s help-desk alert warned that attackers were calling from local area codes, claiming a broken phone, and asking the help desk to enroll a new MFA device. The joint FBI-HHS advisory describes the same pattern.
When the attack reaches the point where a person must make a decision, the attacker tries to make that decision fail: manufacturing urgency, borrowing authority, and supplying just enough accurate information to seem legitimate. Published phishing-simulation research has found that healthcare employees clicked on nearly one in seven simulated phishing emails, and AI makes that level of customization scalable.
The solution in AI-enabled social engineering through Paubox
For the specific problem of AI-enabled social engineering delivered through email, Paubox's public stance is clear: move protection forward in the mail flow, automate encryption, and rely less on user judgment. Paubox's Inbound Email Security is part of Email Suite Plus and Premium.
The company says it protects against ransomware, malware, phishing, and display-name spoofing by using AI-powered analysis, sender validation, phishing and malware scanning, attachment, link, and QR code inspection, custom rules, quarantine, and patented ExecProtect/ExecProtect+ protections against impersonation. Paubox says the system looks at tone, sender behavior, and message intent, gives admins reasons for its detections, and "learns and evolves" over time.
This approach fits modern AI-enabled social engineering because the attacker’s edge is contextual plausibility. Traditional keyword filters lose effectiveness when a lure is grammatically clean, so Paubox’s documented method examines behavior, context, and intent rather than static signatures alone. ExecProtect, the patented defense against display-name spoofing, flags or quarantines email arriving from an unauthorized external address with a matching internal display name.
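The underlying rule is simple to reason about. As an illustration only (a minimal sketch, not Paubox’s actual implementation), a display-name spoofing check can compare an inbound message’s display name against a list of protected internal names and quarantine when the sending address is external and unauthorized. The domain, names, and allowlist below are hypothetical:

```python
# Illustrative display-name spoofing check. The directory, domain, and
# allowlist are hypothetical examples, not Paubox's real logic.
from email.utils import parseaddr

INTERNAL_DOMAIN = "example-hospital.org"                 # assumed internal domain
EXECUTIVE_NAMES = {"dana reyes", "sam okafor"}           # assumed protected names
AUTHORIZED_EXTERNAL = {"dana.reyes@board-example.com"}   # assumed allowlist

def check_display_name_spoof(from_header: str) -> str:
    """Return 'quarantine' if an external, unauthorized sender uses a
    protected internal display name; otherwise 'deliver'."""
    display_name, address = parseaddr(from_header)
    address = address.lower()
    is_external = not address.endswith("@" + INTERNAL_DOMAIN)
    name_matches_exec = display_name.strip().lower() in EXECUTIVE_NAMES
    if is_external and name_matches_exec and address not in AUTHORIZED_EXTERNAL:
        return "quarantine"
    return "deliver"

print(check_display_name_spoof('"Dana Reyes" <ceo-dana@gmail.com>'))           # quarantine
print(check_display_name_spoof('"Dana Reyes" <dreyes@example-hospital.org>'))  # deliver
```

The point of the sketch is that the rule keys on the mismatch between a trusted display name and an untrusted sending address, which is exactly the gap AI-written lures exploit.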
ExecProtect+ extends that protection to attacks from lookalike domains and compromised accounts impersonating coworkers. Paubox Tags use authentication checks such as SPF, DKIM, and DMARC to add visual cues to messages from verified senders, which helps staff treat unlabeled or out-of-the-ordinary messages with more skepticism.
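To make the lookalike-domain problem concrete (again a sketch under stated assumptions, not Paubox’s product logic), one common technique is an edit-distance check: a domain within a character or two of a protected domain, but not an exact match, is suspicious. The protected domain and threshold below are hypothetical:

```python
# Illustrative lookalike-domain check; the protected domain and the
# distance threshold are hypothetical, not drawn from Paubox's product.
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

PROTECTED = "example-hospital.org"   # assumed protected domain

def is_lookalike(domain: str, max_distance: int = 2) -> bool:
    """Flag domains close to, but not exactly matching, the protected one."""
    d = levenshtein(domain.lower(), PROTECTED)
    return 0 < d <= max_distance

print(is_lookalike("examp1e-hospital.org"))  # True: digit '1' swapped for 'l'
print(is_lookalike("example-hospital.org"))  # False: exact match
print(is_lookalike("unrelated-vendor.com"))  # False: too different
```

Real products combine signals like this with homoglyph tables, registration age, and authentication results; the sketch only shows why “one character off” is a useful signal at all.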
See also: HIPAA Compliant Email: The Definitive Guide (2026 Update)
FAQs
Is social engineering the same as phishing?
Not exactly. Phishing is one type of social engineering. Social engineering is the bigger category, and it can include emails, phone calls, text messages, fake websites, social media messages, or even in-person deception.
What are the most common forms of social engineering?
The most common forms include phishing emails, text scams, voice scams, business email compromise, fake tech support, impersonation, and pretexting. Each one is built around the same idea: getting someone to act before they stop and verify.
What is pretexting in social engineering?
Pretexting is when an attacker creates a believable story to get information or access. The story might sound like an IT issue, an urgent payment request, a password reset, a delivery problem, or a compliance matter.
Subscribe to Paubox Weekly
Every Friday we bring you the most important news from Paubox. Our aim is to make you smarter, faster.
