Organizations are facing increasingly convincing phishing campaigns while adopting AI-driven tools to detect them.
What happened
A Forbes Technology Council analysis warned that phishing campaigns have become more convincing as attackers use generative tools to automate message creation and personalize lures. The article cited security research showing that AI-written phishing messages reduce the effort required to launch campaigns and increase the likelihood that recipients will engage. Enterprise security teams are now encountering phishing attempts that closely mimic internal communication styles, executive voices, and routine workflows.
Going deeper
Phishing activity has shifted away from poorly written emails toward messages that rely on context, timing, and emotional pressure. Attackers use voice messages, cloned login pages, and cross-channel delivery methods that move between email, collaboration platforms, and mobile messaging. These campaigns are difficult to detect using traditional filters because they often contain no malicious links or attachments. As a result, security teams face growing alert volumes and rely more heavily on behavioral analysis that looks for unusual communication patterns rather than known indicators.
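The idea of flagging unusual communication patterns instead of known indicators can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: it assumes a baseline of which sender-recipient pairs normally communicate and at which hours, and scores messages by how far they deviate. All names and thresholds below are hypothetical.

```python
# Minimal sketch of behavioral anomaly scoring: build a baseline of who
# normally messages whom and when, then flag deviations. Illustrative only.
from collections import defaultdict

class CommunicationBaseline:
    def __init__(self):
        self.pairs = defaultdict(int)   # (sender, recipient) -> message count
        self.hours = defaultdict(set)   # sender -> hours of day observed

    def observe(self, sender, recipient, hour):
        self.pairs[(sender, recipient)] += 1
        self.hours[sender].add(hour)

    def score(self, sender, recipient, hour):
        """Higher score = more unusual; 0 = consistent with baseline."""
        score = 0
        if self.pairs[(sender, recipient)] == 0:
            score += 2  # sender has never messaged this recipient before
        if hour not in self.hours.get(sender, set()):
            score += 1  # outside this sender's normal active hours
        return score

baseline = CommunicationBaseline()
for hour in (9, 10, 14):
    baseline.observe("ceo@corp.example", "cfo@corp.example", hour)

# Routine message from a known pair at a known hour scores 0, while a
# lookalike sender ("c0rp") writing at 3 a.m. accumulates anomaly points.
print(baseline.score("ceo@corp.example", "cfo@corp.example", 9))   # 0
print(baseline.score("ceo@c0rp.example", "cfo@corp.example", 3))   # 3
```

Note that nothing here inspects links or attachments; a message with a clean payload but an anomalous sender-recipient relationship still stands out, which is the point of behavior-based detection.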
What was said
Security practitioners noted that AI-based defenses can analyze language structure, tone, and sender behavior to identify deviations from normal communication patterns. Instead of focusing only on technical artifacts, these systems assess intent, timing, and message context. Teams deploying these tools described the need for a learning period where AI systems operate in observation mode, allowing analysts to compare results and adjust thresholds. Experts also stated that staff training and clearly defined escalation processes remain necessary because automated detection does not replace human judgment.
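The "observation mode" rollout the practitioners describe can be sketched as a detector that scores every message but only logs and alerts until analysts have tuned its threshold, after which it can be switched to enforcement. The class, scoring scale, and action names below are illustrative assumptions, not a specific product's API.

```python
# Minimal sketch of an observation-then-enforce rollout for an AI detector.
# Scores are assumed to be normalized to 0..1 by an upstream model.
class PhishingDetector:
    def __init__(self, threshold=0.8, mode="observe"):
        self.threshold = threshold
        self.mode = mode
        self.log = []  # analysts review this during the learning period

    def handle(self, message_id, risk_score):
        flagged = risk_score >= self.threshold
        self.log.append((message_id, risk_score, flagged))
        if flagged and self.mode == "enforce":
            return "quarantine"   # block only once thresholds are trusted
        if flagged:
            return "alert"        # observation mode: notify, never block
        return "deliver"

detector = PhishingDetector(threshold=0.8, mode="observe")
print(detector.handle("msg-001", 0.95))  # alert (logged, not blocked)
print(detector.handle("msg-002", 0.30))  # deliver

detector.mode = "enforce"  # after analysts compare results and adjust thresholds
print(detector.handle("msg-003", 0.95))  # quarantine
```

Keeping every verdict in a reviewable log, even after enforcement begins, supports the point made above: automated detection feeds human judgment rather than replacing it.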
The big picture
Recent threat research shows that phishing is shaped around human behavior rather than technical exploits. Proofpoint’s 2025 Human Factor report found that “the most damaging cyberthreats today don’t target machines or systems. They target people,” with attackers relying on persuasion, familiarity, and routine workflows to trigger clicks. The report also noted that URLs now appear four times more often than attachments in malicious emails, reflecting a shift away from traditional malware delivery toward credential harvesting and social engineering.
That shift extends well beyond email. Proofpoint observed that phishing activity now spreads across collaboration platforms, SMS, QR codes, and SaaS tools, with at least 55 percent of suspected smishing messages containing malicious links. ClickFix-style URL campaigns have increased nearly 400 percent year over year. The findings reinforce why organizations are turning to AI-driven detection that focuses on behavior, context, and intent, rather than relying solely on static indicators that no longer align with how modern phishing campaigns operate.
FAQs
Why are AI-generated phishing messages harder to detect?
They closely resemble legitimate communication in tone, structure, and context, leaving defenders fewer telltale signs such as spelling errors or obvious technical indicators.
Do these attacks rely only on email?
No. Many campaigns now move across voice calls, messaging platforms, document sharing tools, and mobile channels.
Can AI fully prevent phishing incidents?
No. AI improves detection speed and consistency, but human verification and response processes remain necessary.
What data do AI defenses analyze?
They examine communication patterns, sender behavior, timing anomalies, language cues, and identity signals rather than just URLs or attachments.
How should organizations introduce AI-based phishing detection?
They should integrate identity systems and logging sources first, allow an observation period, and provide staff guidance on how automated findings are reviewed and acted upon.
Subscribe to Paubox Weekly
Every Friday we bring you the most important news from Paubox. Our aim is to make you smarter, faster.
