Organizations are facing increasingly convincing phishing campaigns while adopting AI-driven tools to detect them.
A Forbes Technology Council analysis warned that phishing campaigns have become more convincing as attackers use generative tools to automate message creation and personalize lures. The article cited security research showing that AI-written phishing messages reduce the effort required to launch campaigns and increase the likelihood that recipients will engage. Enterprise security teams are now encountering phishing attempts that closely mimic internal communication styles, executive voices, and routine workflows.
Phishing activity has shifted away from poorly written emails toward messages that rely on context, timing, and emotional pressure. Attackers use voice messages, cloned login pages, and cross-channel delivery methods that move between email, collaboration platforms, and mobile messaging. These campaigns are difficult to detect using traditional filters because they often contain no malicious links or attachments. As a result, security teams face growing alert volumes and rely more heavily on behavioral analysis that looks for unusual communication patterns rather than known indicators.
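As a rough sketch of what that kind of behavioral analysis can mean in practice, the example below builds a per-sender baseline of normal send times and recipients and scores deviations from it. The field names, weights, and thresholds are illustrative assumptions for this article, not any specific vendor's detection logic.

```python
from collections import defaultdict
from datetime import datetime

# Illustrative behavioral baselining: instead of matching known-bad
# indicators, track each sender's normal communication patterns and flag
# messages that deviate from them.

class SenderBaseline:
    def __init__(self):
        self.send_hours = defaultdict(int)   # hour of day -> message count
        self.recipients = defaultdict(int)   # recipient -> message count
        self.total = 0

    def observe(self, sent_at: datetime, recipient: str) -> None:
        """Update the baseline with a message known to be legitimate."""
        self.send_hours[sent_at.hour] += 1
        self.recipients[recipient] += 1
        self.total += 1

    def anomaly_score(self, sent_at: datetime, recipient: str) -> float:
        """Score 0.0 (typical) to 1.0 (highly unusual) for a new message."""
        if self.total == 0:
            return 0.5  # no history yet: treat as moderately suspicious
        hour_freq = self.send_hours[sent_at.hour] / self.total
        rcpt_freq = self.recipients[recipient] / self.total
        # A rare send time and a never-before-seen recipient both raise the score.
        return min(1.0, (1 - hour_freq) * 0.5 + (1 - min(rcpt_freq * 10, 1.0)) * 0.5)

baselines = defaultdict(SenderBaseline)
baselines["ceo@example.com"].observe(datetime(2025, 3, 3, 9, 15), "cfo@example.com")

# A 2 a.m. payment request to a new recipient scores high even though it
# contains no link or attachment a traditional filter could match on.
score = baselines["ceo@example.com"].anomaly_score(
    datetime(2025, 3, 4, 2, 7), "payments@vendor-invoices.example.net"
)
print(f"anomaly score: {score:.2f}")
```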
Security practitioners noted that AI-based defenses can analyze language structure, tone, and sender behavior to identify deviations from normal communication patterns. Instead of focusing only on technical artifacts, these systems assess intent, timing, and message context. Teams deploying these tools described the need for a learning period in which AI systems operate in observation mode, allowing analysts to compare results and adjust thresholds. Experts also stated that staff training and clearly defined escalation processes remain necessary because automated detection does not replace human judgment.
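That observation-mode rollout can be sketched roughly as follows: the AI scorer runs alongside the existing filter, never blocks mail, and logs disagreements so analysts can tune the alert threshold before enforcement is switched on. The scoring and filter functions here are simple placeholders, not real product APIs.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("phish-shadow")

ALERT_THRESHOLD = 0.7   # adjusted during the learning period
ENFORCE = False         # stays False while the system is in observation mode

def ai_intent_score(message: dict) -> float:
    """Placeholder for a model scoring tone, timing, and context (0.0-1.0)."""
    urgency_words = ("urgent", "immediately", "wire", "gift card")
    hits = sum(word in message["body"].lower() for word in urgency_words)
    return min(1.0, 0.3 * hits)

def legacy_filter_verdict(message: dict) -> bool:
    """Placeholder for the existing rule/indicator-based filter."""
    return "http://known-bad.example" in message["body"]

def process(message: dict) -> None:
    score = ai_intent_score(message)
    legacy_flagged = legacy_filter_verdict(message)
    ai_flagged = score >= ALERT_THRESHOLD

    if ai_flagged != legacy_flagged:
        # Disagreements are the interesting cases during the learning period.
        log.info("review: subject=%r ai=%.2f legacy=%s",
                 message["subject"], score, legacy_flagged)

    if ENFORCE and ai_flagged:
        pass  # quarantine would happen here once observation mode ends

process({
    "subject": "Payment needed immediately",
    "body": "Please wire the vendor immediately, this is urgent.",
})
```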
Recent threat research shows that phishing is shaped around human behavior rather than technical exploits. Proofpoint’s 2025 Human Factor report found that “the most damaging cyberthreats today don’t target machines or systems. They target people,” with attackers relying on persuasion, familiarity, and routine workflows to trigger clicks. The report also noted that URLs now appear four times more often than attachments in malicious emails, reflecting a shift away from traditional malware delivery toward credential harvesting and social engineering.
That shift extends well beyond email. Proofpoint observed that phishing activity now spreads across collaboration platforms, SMS, QR codes, and SaaS tools, with at least 55 percent of suspected smishing messages containing malicious links. ClickFix-style URL campaigns have increased nearly 400 percent year over year. The findings reinforce why organizations are turning to AI-driven detection that focuses on behavior, context, and intent, rather than relying solely on static indicators that no longer align with how modern phishing campaigns operate.
Why are modern phishing messages harder to detect? They closely resemble legitimate communication in tone, structure, and context, so defenders can no longer rely on spelling errors or other obvious technical indicators to spot them.
Does phishing only arrive by email? No. Many campaigns now move across voice calls, messaging platforms, document-sharing tools, and mobile channels.
Does AI-based detection replace human analysts? No. AI improves detection speed and consistency, but human verification and response processes remain necessary.
What do AI detection tools actually analyze? They examine communication patterns, sender behavior, timing anomalies, language cues, and identity signals rather than just URLs or attachments (a simplified scoring sketch appears after these answers).
How should organizations begin deploying these tools? They should integrate identity systems and logging sources first, allow an observation period, and provide staff guidance on how automated findings are reviewed and acted upon.
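As a rough illustration of how the signals listed above might be combined into a single risk score, the sketch below applies assumed weights to per-signal scores and escalates above a threshold. The signal names, weights, and cutoff are hypothetical choices for illustration, not a vendor's actual model.

```python
# Hypothetical weighted combination of behavioral signals.
SIGNAL_WEIGHTS = {
    "language_cues": 0.30,      # urgency, unusual tone or phrasing
    "sender_behavior": 0.25,    # deviation from the sender's normal activity
    "timing_anomaly": 0.20,     # odd hour or out-of-sequence request
    "identity_mismatch": 0.15,  # display name vs. authenticated domain
    "pattern_break": 0.10,      # first contact, thread hijack, new channel
}

def combined_risk(signals: dict) -> float:
    """Weighted sum of per-signal scores, each expected in the 0.0-1.0 range."""
    return sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0) for name in SIGNAL_WEIGHTS)

# Example: no malicious URL or attachment, but several behavioral signals fire.
example = {
    "language_cues": 0.8,
    "sender_behavior": 0.9,
    "timing_anomaly": 0.7,
    "identity_mismatch": 0.4,
    "pattern_break": 1.0,
}
risk = combined_risk(example)
print(f"risk={risk:.2f}, escalate={risk >= 0.6}")
```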