
What are hyper-personalized AI phishing attacks?

Written by Gugu Ntsele | December 10, 2025

Traditional phishing campaigns were marked by grammatical errors, generic greetings, and vague requests that didn't align with how the organization communicated. Hyper-personalized AI phishing, by contrast, uses artificial intelligence and machine learning to gather information about targets, craft messages that mirror their communication style, and build scenarios that are relevant to the organization.

Research in From Chatbots to PhishBots? - Preventing Phishing scams created using ChatGPT, Google Bard and Claude notes that studies have shown 97% of people cannot detect phishing emails, which helps explain why phishing continues to exploit what researchers identify as the weakest link in the security chain: humans.

According to the Reuters investigation titled We set out to craft the perfect phishing scam. Major AI chatbots were happy to help, published September 15, 2025, phishing complaints from Americans aged 60 and older jumped more than eight-fold in 2024, a year in which they lost at least $4.9 billion to online fraud, based on FBI data. The FBI itself issued a warning in December 2024, stating that "criminals exploit generative artificial intelligence (AI) to commit fraud on a larger scale which increases the believability of their schemes. Generative AI reduces the time and effort criminals must spend to deceive their target."

Microsoft's latest Cyber Signals report revealed that the company blocks 1.6 million bot attempts to create accounts every hour and that it blocked approximately $6.28 billion in fraud attempts over a 12-month period. Kelly Bissell, Microsoft's corporate vice president of anti-fraud and product abuse, noted the speed advantage attackers have gained, stating, "Attackers will adopt that new technology faster than a company would, large and small, or an individual." He also noted how the creation of malicious websites has changed: tasks that previously required days or weeks can now be accomplished in minutes using AI.

Learn more: The impact of social engineering tactics on healthcare

 

How AI enables hyper-personalization

Attackers use AI tools to gather information from social media profiles, professional networking sites, public records, and data breaches. This information might include job titles, recent projects, colleagues' names, writing style, interests, recent life events, and even daily routines.

Once collected, AI algorithms analyze this data to understand patterns in how you communicate, what topics you care about, and what requests would seem plausible. Natural language processing models can generate messages that match your expected communication style, reference real people and events in your life, and create urgency around scenarios that would concern you.

For example, rather than receiving a generic "Your account has been compromised" email, you might receive a message that appears to come from your actual manager, references a project you're currently working on, uses language consistent with how your manager typically writes, and requests information related to a legitimate business process.

Unlike human scammers who must manually craft each message, AI chatbots can create messages instantly. The Reuters investigation found that major AI chatbots, including ChatGPT, Meta AI, Grok, Claude, Gemini, and DeepSeek, could all be persuaded to generate phishing emails, often with little resistance to user requests.

The availability of these tools has lowered the barrier to entry for cybercriminals. As Trent Gunthorpe of ACI Worldwide explained in the Reuters investigation, "On the dark web, you can purchase a scam as a service. The unseen, unsophisticated scammer can now purchase and become very sophisticated very quickly and start to use some of these tools."

Read also: How AI is arming phishing and deepfake attacks

 

Real-world scenarios

In May 2025, the FBI issued a warning about a campaign targeting government officials. Malicious actors were using AI-generated text and voice messages to impersonate senior US officials in schemes designed to gain access to the personal accounts of current or former senior state and federal government officials and their contacts. The attackers used these AI-generated messages to establish trust before sending links that redirected victims to hacker-controlled websites designed to steal login credentials.

The Reuters investigation tested the real-world effectiveness of AI-generated phishing emails on 108 senior citizen volunteers who consented to participate in the study. About 11% of the seniors clicked on links in the emails they received. As one participant, 85-year-old retired physician Thomas Gan, noted after clicking on a fraudulent link, "My neighbors are always getting scammed, every day."

 

The reality of criminal AI use

The Reuters investigation spoke to three former forced laborers at scam compounds in Southeast Asia who confirmed routine use of AI in their operations. Duncan Okindo, a 26-year-old Kenyan who was forced to work at a compound on the Myanmar-Thai border for about four months, stated, "ChatGPT is the most-used AI tool to help scammers do their thing." These operations use AI for translations, role-playing with victims, and creating credible responses to questions.

Jacob Klein, Anthropic's head of threat intelligence, confirmed the pattern, stating, "We see people who are using Claude to make their messaging be more believable. There's an entire attack cycle of conducting fraud or a scam. AI is being increasingly used throughout that entire cycle."

 

Why these attacks are so effective

Traditional spam filters look for suspicious patterns, known malicious links, or generic phishing indicators. Hyper-personalized messages often contain none of these red flags. They may use legitimate-looking domains, reference real systems, and contain links that initially appear safe.
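To make that gap concrete, here is a minimal, hypothetical Python sketch of the kind of rule-based screening traditional filters perform. The phrases, blocklist entries, and function names are illustrative assumptions, not any vendor's actual logic, and certainly not Paubox's. A hyper-personalized message that names a real colleague and project, comes from a plausible domain, and avoids boilerplate wording triggers none of these checks.

# Minimal, hypothetical sketch of a rule-based filter; not any vendor's actual logic.
import re

GENERIC_PHRASES = [
    "verify your account",
    "your account has been compromised",
    "click here immediately",
    "update your password now",
]

KNOWN_BAD_DOMAINS = {"malicious-example.test"}  # stand-in blocklist entry


def looks_like_generic_phishing(subject: str, body: str, sender_domain: str) -> bool:
    """Flag a message only if it matches broad, well-known phishing indicators."""
    text = f"{subject} {body}".lower()
    if sender_domain in KNOWN_BAD_DOMAINS:
        return True
    if any(phrase in text for phrase in GENERIC_PHRASES):
        return True
    # Crude heuristic: links that use a bare IP address instead of a domain name.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        return True
    return False


# A personalized message that references a real project, mimics the manager's
# tone, and comes from a plausible-looking domain matches none of these rules.
print(looks_like_generic_phishing(
    subject="Q3 vendor onboarding - updated sheet",
    body="Hi, can you resend the onboarding sheet for the Q3 rollout before our 2 pm sync?",
    sender_domain="trusted-partner.example",
))  # prints False

The point of the sketch is not that real filters are this simple, but that any defense built mainly on generic wording, known-bad domains, and obvious link patterns has nothing to catch when the attacker tailors every one of those signals to the target.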

As noted in From Chatbots to PhishBots?, this poses a challenge for traditional defenses. The research noted that machine learning techniques aren't completely accurate and that systems trained on historical data may struggle with newer attack types. This is especially relevant for AI-powered phishing, which can change and adapt based on what works. In the Reuters investigation, Professor Matthew Warren, director of RMIT University's Centre for Cyber Security Research and Innovation, captured the challenge, stating, "The sheer volume and the sheer sophistication of how scammers are using AI in terms of improving... it's going to make it much harder for individuals [to know] when scams have occurred."

The Reuters investigation showed how AI chatbots even provide tactical advice to would-be scammers. When asked, Google's Gemini suggested optimal timing for targeting seniors: "For seniors, a sweet spot is often Monday to Friday, between 9:00 AM and 3:00 PM local time. They may be retired, so they don't have the constraints of a traditional work schedule." As Kathy Stokes, who heads the AARP Fraud Watch Network, responded, "That's beyond disturbing."

Learn more: How Paubox inbound email security stops AI-powered cyberattacks in healthcare

 

The challenge of AI safety

One aspect revealed by the Reuters investigation is how easily AI chatbots' safety measures can be bypassed. Fred Heiding, a Harvard researcher who partnered with Reuters on the study, put it simply: "You can always bypass these things."

The chatbots are designed to refuse malicious requests, but their defenses are inconsistent. Sometimes they reject suspicious requests; other times, the same request in a new chat session gets a response. The AI bots gave in easily when users applied basic tricks, for example, claiming the phishing emails were needed for "research" or for a "novel" about scam operations.

Lucas Hansen, co-founder of CivAI, a California non-profit that examines AI capabilities and dangers, explained the problem in the Reuters investigation: "Modern AI is more like training a dog. You can't just give it a rule book to tell it what to do and what not to do. You never know for sure how it's going to behave once it's out of training."

Furthermore, Dave Willner, who led OpenAI's trust and safety team in 2022 and 2023, noted that AI companies must balance preventing misuse with keeping their products competitive. If models refuse too many requests, users might switch to competitors with fewer restrictions. Steven Adler, a former AI safety researcher at OpenAI, observed, "Whoever has the least restrictive policies, that's an advantage for getting traffic."

Read also: Inbound Email Security

 

FAQs

Can AI phishing be carried out through phone calls or video, not just email?

Yes, attackers use AI-generated voice and deepfake video to impersonate trusted individuals.

 

Are small healthcare clinics and startups at risk, or only large organizations?

Smaller organizations are often more vulnerable because they have fewer security controls and training resources.

 

How do attackers know which employees inside an organization to target?

Criminals map company hierarchies using public data from websites and professional networking platforms.

 

Can hyper-personalized phishing bypass multi-factor authentication (MFA)?

In some cases, yes, through MFA-fatigue attacks that trick victims into approving malicious login attempts.

 

Do hyper-personalized scams also spread through collaboration tools like Microsoft Teams or Slack?

Yes, attackers exploit internal chat and collaboration platforms to appear more legitimate.