Instead of writing one phishing message at a time, attackers can now use AI to quickly draft, rewrite, and tailor emails until they closely match normal workplace communication. Reporting by Reuters showed how this becomes possible when safeguards fail, noting that “major chatbots do receive training from their makers to avoid conniving in wrongdoing, but it’s often ineffective,” and that chatbot defenses can be inconsistent: “sometimes bots balk at complying with scam requests; other times they readily go along.” As a result, AI-assisted phishing now appears frequently in business email compromise, a type of fraud in which attackers impersonate trusted coworkers or vendors to trick employees into sending money or credentials. AI shifts the workload for attackers while leaving the objective unchanged.
Phishing still runs on impersonation and scale
Academic research often defines phishing in straightforward terms that apply to both traditional and modern campaigns. A systematic review published by MDPI describes phishing as “a scalable act of deception whereby impersonation is used to obtain information from a target.” The emphasis on “scalable” matters: attackers can send large volumes of convincing messages at once, so even small gains in realism translate into more replies, more stolen credentials, and longer access to compromised inboxes.
AI makes variation cheap and consistent
Recent enterprise telemetry shows both scale and constant variation in phishing activity. Help Net Security reported that “one malicious email [was] identified on average every 19 seconds during 2025,” adding that “AI systems now sit at the center of this activity, supporting generation, testing, and rollout of phishing campaigns.” The research also points to polymorphism becoming standard practice, meaning attackers constantly change small technical details to evade detection: “Phishing campaigns now operate with polymorphism as a baseline condition,” and “During 2025, 76 percent of initial infection URLs appeared only once across customer environments, even as 94 percent of those URLs reused previously observed infrastructure.” This leaves defenders chasing endless one-off indicators while attackers reuse the same core methods and simply rotate surface-level details.
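The one-off-URL pattern described above can be illustrated with a minimal sketch. The URLs below are invented for illustration: each full URL is unique (the “polymorphic” surface), but grouping by the registered domain exposes the reused hosting infrastructure underneath, which is the kind of pivot defenders rely on when individual indicators never repeat.

```python
from urllib.parse import urlparse
from collections import Counter

# Hypothetical one-off phishing URLs: each full URL appears only once,
# but several resolve to the same underlying hosting infrastructure.
urls = [
    "https://login-portal.example-host.net/a9f3c1/index.html",
    "https://secure-verify.example-host.net/b77e02/index.html",
    "https://account-check.example-host.net/c1d4aa/index.html",
    "https://invoice-view.other-host.org/9912ff/open.html",
]

def registered_domain(url: str) -> str:
    """Naive registered-domain extraction (last two labels of the hostname)."""
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

# Every URL string is unique (polymorphism at the indicator level)...
assert len(set(urls)) == len(urls)

# ...but clustering on the registered domain exposes the reuse.
reuse = Counter(registered_domain(u) for u in urls)
print(reuse)  # Counter({'example-host.net': 3, 'other-host.org': 1})
```

Real pipelines would use a proper public-suffix list rather than the last two labels, but the principle is the same: rotate the surface, reuse the core.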
Business email compromise gets a language boost
The most damaging phishing emails are designed to start a conversation rather than include suspicious links or attachments. Help Net Security wrote that “Business email compromise continues to rely on simple conversation rather than links or attachments,” and that in 2025, “conversational attacks accounted for 18 percent of identified malicious emails.” The outlet also noted that “AI-generated language improves grammar, tone, and contextual alignment with internal communications,” warning that “The difference between a malicious message and a legitimate one can be subtle and limited to context or timing.” As a result, awareness training that focuses only on checking links often falls short, because many of these emails contain no links at all.
AI helps attackers move faster from draft to deployment
IBM’s X-Force Red team described how generative AI accelerates phishing development, noting, “With only five simple prompts we were able to trick a generative AI model to develop highly convincing phishing emails in just five minutes,” compared to their usual process: “It generally takes my team about 16 hours to build a phishing email.” They added that “the AI-generated phish was so convincing that it nearly beat the one crafted by experienced social engineers.” Even if human-crafted emails still edge out AI in realism, the speed advantage lets attackers produce and test far more variations in far less time.
Guardrails are inconsistent and attackers know it
Reuters examined how major chatbots handle requests linked to wrongdoing and found that safeguards are inconsistent in practice. The investigation cited an expert who said that if a bot is asked to create a phishing email, “The proper answer is to say, ‘I can’t help you with that.’” It added that safety controls are “deeply imperfect,” partly because “AI companies have to balance over- and under-enforcement to keep their products competitive.” As former OpenAI safety researcher Steven Adler put it, “Whoever has the least restrictive policies, that’s an advantage for getting traffic.”
Real-world scammers are already using AI as a helper
Reuters also linked chatbot tools to large-scale fraud operations, reporting that “the scam compounds of Southeast Asia are already embracing AI in their industrial scale activity,” and quoting former forced laborer Duncan Okindo, who said, “ChatGPT is the most used AI tool to help scammers do their thing.” According to the report, AI is used to draft initial scam messages, translate conversations, generate role-play scripts, and produce quick responses that keep victims engaged.
Human factors still decide who clicks and who questions
A review published by MDPI explains that phishing succeeds because it exploits human vulnerability as well as technical weaknesses, stating that phishing “may cause significant damage” and that “losses due to phishing attacks are not only financial.” While organizations can invest in stronger email filtering, the final decision to trust or question a message still rests with a person, often under time pressure. As AI tools make phishing emails appear more natural and convincing, they shrink the moment of hesitation in which a recipient might otherwise stop and question the message.
What defenders are up against in the inbox
Research presented at the International Conference on AI Research explains why older email detection methods struggle as phishing messages become more polished and varied. The paper states that “AI-generated phishing emails, which leverage machine learning and natural language processing (NLP), have become increasingly sophisticated, making traditional detection methods ineffective,” and further notes that “Findings reveal that AI-generated phishing emails exhibit higher success rates due to their ability to bypass conventional spam filters and mimic human communication styles.” Basic spam filters often catch obvious scams but are more likely to miss emails that read like legitimate vendor requests or routine internal messages.
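A toy example makes the gap concrete. This sketch is purely illustrative: the keyword list and both messages are invented, and real filters are far more elaborate. But it shows the underlying problem the research describes: rule-based filtering keys on crude scam markers, while a polished, link-free, conversational request contains none of them.

```python
# Toy keyword-based filter, illustrating why polished, link-free BEC text
# slips past simple rules. Keywords and messages are invented examples.
SUSPICIOUS_TERMS = {"lottery", "winner", "prince", "wire transfer", "click here"}

def naive_filter_flags(message: str) -> bool:
    """Flag a message if it contains any crude scam keyword."""
    text = message.lower()
    return any(term in text for term in SUSPICIOUS_TERMS)

crude_scam = "Congratulations WINNER! Click here to claim your lottery prize."
polished_bec = (
    "Hi Dana, quick favor before the 3pm call: can you confirm the updated "
    "banking details for the Q3 vendor payment? Thanks, Mark"
)

print(naive_filter_flags(crude_scam))    # True  - obvious scam is caught
print(naive_filter_flags(polished_bec))  # False - conversational BEC passes
```

The second message is the dangerous one, yet by every keyword rule it looks like routine internal correspondence, which is exactly the evasion the paper attributes to AI-generated phishing.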
How Paubox can help reduce exposure
Traditional email security that focuses only on scanning links and attachments is no longer enough to stop what Hoala Greevy calls “deception at scale.” In the report Healthcare IT is dangerously overconfident about email security, he explains that attackers are using generative AI to “mimic the tone, structure, and urgency of real communication,” allowing them to craft highly convincing messages that target specific teams. Because many of these attacks rely on “inherited trust,” appearing to come from familiar colleagues rather than carrying obvious malware, detection must move beyond basic link scanning to AI-driven behavioral analysis that can spot subtle changes in language and context.

The Paubox report The hidden cost of inaction adds that training alone is not enough: although 90% of organizations provide regular security training, only 5% of phishing emails are reported by employees. As Ryan Winchester, Director of IT at CareM, puts it, “No amount of training can completely eliminate human error, so businesses must have safeguards in place.” In response, Paubox ExecProtect+ uses patented inbound email security to detect display name spoofing, where attackers fake a trusted sender’s name, along with unusual behavior patterns, blocking impersonation attempts before they reach inboxes and reducing reliance on staff to catch sophisticated AI-driven phishing on their own.
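To make the display-name spoofing pattern concrete, here is a deliberately simplified sketch, not Paubox’s actual detection logic: the company domain, executive directory, and From headers are all invented. The core idea is that a trusted person’s name paired with an address outside the organization’s domain is a strong impersonation signal, independent of any link or attachment.

```python
from email.utils import parseaddr

# Simplified display-name spoofing check. The directory, domain, and headers
# below are invented for illustration; real products combine many more signals.
COMPANY_DOMAIN = "example-clinic.com"
EXECUTIVE_NAMES = {"jane doe", "sam lee"}

def is_display_name_spoof(from_header: str) -> bool:
    """Flag mail whose display name matches a known executive while the
    sending address is outside the company domain."""
    name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return name.strip().lower() in EXECUTIVE_NAMES and domain != COMPANY_DOMAIN

print(is_display_name_spoof('"Jane Doe" <jane.doe@gmail-mail.net>'))     # True
print(is_display_name_spoof('"Jane Doe" <jane.doe@example-clinic.com>')) # False
print(is_display_name_spoof('"Vendor Support" <help@vendor.io>'))        # False
```

Even this crude rule catches a class of attack that link scanners miss entirely, which is why behavioral and identity signals matter when the email body itself is clean.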
FAQs
Does AI create a new type of phishing or just speed up the old one?
Research still frames phishing as impersonation at scale. The MDPI review defines phishing as “a scalable act of deception whereby impersonation is used to obtain information from a target.”
Why do AI phishing emails get through filters more often?
Conference research describes AI-generated phishing as “increasingly sophisticated,” saying it can make “traditional detection methods ineffective,” and reports higher success because the emails can “bypass conventional spam filters and mimic human communication styles.”
Are chatbots actually helping criminals write scam emails?
Reuters documented that “the AI chatbots’ defenses can be wildly inconsistent,” and showed examples where bots produced scam drafts after earlier refusals.
What kind of phishing is growing inside enterprises?
Help Net Security described high volume and variation, including “one malicious email” every 19 seconds on average during 2025, and widespread polymorphism in URLs and files.
Why does business email compromise keep working even without links?
Help Net Security reported that BEC “continues to rely on simple conversation rather than links or attachments,” and that AI can make the language match real internal communication so differences become “subtle.”