The move from traditional defences to defensive AI
For decades, cybercriminals relied on phishing campaigns, spoofed domains, and malware-laden attachments. In recent years, however, cybersecurity threats have shifted dramatically.

Generative AI now allows attackers to craft highly convincing emails, create synthetic identities, and even generate deepfake content, all at unprecedented scale. A 2024 study, Evaluating Large Language Models' Capability to Launch Fully Automated Spear Phishing Campaigns: Validated on Human Subjects, showed that AI-generated spear-phishing emails achieved a 54% click-through rate, compared to only 12% for standard phishing attempts. As a result, traditional email security, which relies on static rules, blocklists, and signature-based detection, struggles to keep up.

Defensive AI transforms email security from a reactive filter to a proactive defense mechanism by analyzing behavioral patterns, modeling intent, and continuously learning from anomalies.

 

Limitations of traditional defenses

Traditional cybersecurity measures, such as rule-based intrusion detection systems (IDS), signature-based antivirus software, and firewalls, have long been the cornerstone of digital defense strategies. However, as cyber threats evolve in sophistication and scale, these legacy systems are increasingly inadequate in addressing the challenges posed by AI-driven cyberattacks. The study, The Need For AI-Powered Cybersecurity to Tackle AI-Driven Cyberattacks, indicates several critical limitations of traditional defenses in the face of modern threats:

  • Inability to detect evolving threats: Traditional security measures rely heavily on predefined rules and known signatures to identify threats. This approach is effective against known malware and attack patterns but falls short when faced with novel or rapidly evolving threats. AI-powered cybercriminals can generate new attack vectors that bypass these static defenses, rendering them ineffective. The study notes that "traditional defense controls like rule-based intrusion detection and prevention systems, signature-based antivirus software and firewalls have proved ineffective in preventing evolving AI-driven cyberattacks."
  • Limited scalability: The increasing volume and complexity of cyberattacks demand scalable defense mechanisms. Traditional systems often struggle to process and analyze the vast amounts of data generated in real-time, leading to delayed detection and response. AI-driven attacks, with their ability to operate at scale, can overwhelm these legacy systems, exploiting their limitations to launch widespread attacks.
  • Lack of adaptability: Cyber threats are dynamic and constantly evolving. Traditional defenses, however, are typically rigid and lack the flexibility to adapt to new attack methodologies. This static nature makes them ill-suited to counteract the adaptive strategies employed by AI-powered attackers, who can modify their tactics in real-time to circumvent detection.
  • High rate of false positives: Rule-based systems often generate many false positives, leading to alert fatigue among security teams. This inundation can cause genuine threats to be overlooked or ignored, increasing the risk of successful breaches. AI-driven attacks can exploit this by mimicking legitimate behaviors, further complicating the detection process.
  • Resource-intensive: Maintaining and updating traditional security systems requires substantial resources, both in terms of time and personnel. The manual effort involved in configuring and tuning these systems can lead to delays in responding to new threats. Moreover, the complexity of modern attack strategies necessitates continuous monitoring and adjustment, placing additional strain on already stretched security teams.

While traditional cybersecurity defenses have served their purpose in the past, they are increasingly inadequate in addressing the challenges posed by AI-driven cyberattacks. The limitations outlined above stress the need for more advanced, adaptive, and scalable defense mechanisms to effectively combat the evolving threat landscape. Integrating AI-powered cybersecurity solutions offers a promising path forward, enabling organizations to proactively detect, analyze, and mitigate emerging threats in real time.

 

What is defensive AI?

According to an article by Forbes, “Defensive AI refers to the application of artificial intelligence and machine learning to augment cybersecurity defenses. Unlike standard security tools, which rely on predefined rules, APIs and signatures, defensive AI systems are dynamic, adaptive and capable of learning from data. This enables them to identify novel threats, predict potential vulnerabilities and respond to incidents in real-time.”

 

Moving towards defensive AI

In response to the surge of AI-driven attacks, defenders can no longer rely purely on reactive, signature-based systems. According to Forbes, defensive AI must itself become a strategic pillar of cybersecurity: not just a tool, but a mindset shift.

Here’s how that article frames the idea and how it should inform an email-centric defensive posture:

  • Defensive AI is purpose-built to fight AI threats. The article argues that since attackers are employing AI to scale, personalize, and conceal attacks, defenders must use AI to counter those threats, in effect, AI vs. AI.
  • The approach is not about infusing “some AI” on top of legacy systems. Instead, the article suggests a reorientation: designing defenses from the ground up to anticipate dynamic, evolving, adversarial attacks.
  • Key attributes of defensive AI include anticipatory detection, adversarial robustness, explainability, and automation.

Below is a breakdown of the capabilities and roles that defensive AI must play, especially when defending email systems, drawing on that Forbes framing plus domain-specific elaboration.

Related: How does AI improve defense against cyberattacks?

 

Core capabilities of defensive AI in the inbox

Anticipatory detection and predictive signal

One of the central propositions in the Forbes piece is that defensive AI should not wait for attacks to manifest; it should detect likely precursors and patterns before damage occurs. In the context of email, this translates to modeling “leading indicators”: subtle shifts in writing style, unusual external domains engaged by new contacts, or anomalous timing relative to past behavior.

Combined with threat intelligence and historical data, the system can flag messages exhibiting probable malicious intent, even if they do not yet match known attack signatures.

Forbes calls this “moving the detection boundary earlier,” shifting from blocking post facto to intervening proactively.
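To make the idea concrete, here is a minimal sketch of how leading indicators might be combined into a single anticipatory risk score. The feature names, weights, and the 0-to-1 scoring convention are all illustrative assumptions, not a description of any specific product's implementation:

```python
# Hypothetical sketch: combine "leading indicator" scores into one
# anticipatory risk score for an inbound message. Names and weights
# below are illustrative assumptions.

WEIGHTS = {
    "style_shift": 0.4,      # deviation from the sender's usual writing style
    "new_domain": 0.35,      # first contact from an unfamiliar external domain
    "timing_anomaly": 0.25,  # message sent far outside the sender's usual hours
}

def anticipatory_risk(features: dict) -> float:
    """Weighted sum of indicator scores, each clamped to [0, 1]."""
    return sum(WEIGHTS[name] * min(max(score, 0.0), 1.0)
               for name, score in features.items() if name in WEIGHTS)

msg = {"style_shift": 0.8, "new_domain": 1.0, "timing_anomaly": 0.5}
score = anticipatory_risk(msg)  # 0.4*0.8 + 0.35*1.0 + 0.25*0.5 = 0.795
```

A real system would learn these weights from labeled data rather than hand-tune them, but the shape of the computation, converting behavioral signals into a score before any known signature matches, is the point.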

 

Explainability and transparency

Forbes also stresses that AI systems must be transparent and defensible. Decision-making must tolerate human review, appeals, and audit trails.

In an email defense context:

  • When a message is flagged or quarantined, the system should provide explanations (e.g. “this message deviated 3 standard deviations from the user’s normal tone, referenced a new domain, and requested credential information”) so that security analysts or users can understand why.
  • Explainability helps build trust among employees: they can see why a legitimate email might have been flagged, appeal it, or adjust system sensitivity locally.
  • In regulated industries (e.g. financial services, healthcare), explainability is often required for compliance or forensic purposes.

 

Automation with risk-driven escalation

Defensive AI must link detection with effective, context-appropriate action, not just alerts. The Forbes piece notes the need to automate responses while preserving human oversight.

For the inbox:

  • Low-to-moderate risk messages could be soft-intervened (e.g. flagged to the user, or overlaid with cautionary UI messages).
  • Higher-risk messages (e.g. suspected BEC, credential requests) can be quarantined or blocked automatically, pending human review.
  • Escalation rules should be dynamic: messages crossing multiple risk thresholds (behavioral, domain, relationship anomalies) may bypass user prompts and go straight to quarantine.
  • Over time, the system can learn which escalations and automatic responses are too aggressive (via feedback loops) and adjust.

Automation ensures speed, crucial when a threat may succeed within minutes, while escalation frameworks preserve human control for ambiguous cases.
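The tiered-response logic described above can be sketched as a small decision function. The cutoffs and action names are illustrative assumptions; a real deployment would tune them via the feedback loops discussed earlier:

```python
def choose_action(risk: float, thresholds_crossed: int) -> str:
    """Map a message's risk score and anomaly count to a response tier.
    Cutoffs below are illustrative, not calibrated values."""
    if thresholds_crossed >= 3:
        # behavioral + domain + relationship anomalies: bypass user prompts
        return "quarantine"
    if risk >= 0.8:
        return "quarantine_pending_review"
    if risk >= 0.4:
        return "warn_user"   # cautionary banner overlaid in the email UI
    return "deliver"
```

The key design choice is that automation handles the fast, clear-cut cases while anything ambiguous is routed to a tier that keeps a human in the loop.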

 

Continuous learning, adaptation, and threat fusion

Forbes argues that static AI is insufficient; defensive systems must be evolving systems, always learning from new data, new threats, and cross-domain signals.

In practice:

  • Models must incorporate feedback from false positives and false negatives to refine decision boundaries.
  • The system should ingest threat intelligence feeds, zero-day indicators, and anonymized community data to keep ahead of attacker innovation.
  • Cross-domain correlation is essential: email signals should be enriched by endpoint anomalies, identity changes, login patterns, network anomalies, and device context.

As attacker tactics shift (e.g. from credential-based attacks to deepfake payloads), the AI must shift too. These continuous feedback and retraining cycles make the defense more resilient over time and ensure the system doesn't stagnate.

 

Defensive AI in practice

To illustrate how these capabilities translate into real-world email defense, here are a few use cases:

Preventing business email compromise (BEC)

  • A CFO receives a seemingly legitimate message “from” the CEO asking for an urgent wire transfer.
  • The AI notices the writing style diverges from the CEO’s norm, the tone is more urgent than usual, and the timing is off (e.g. late evening).
  • It triggers a mid-level alert, injects a verification UI prompt to the recipient, or quarantines the message.

Read also: What are Business Email Compromise attacks?

 

Vendor impersonation/supply chain phishing

  • A vendor email arrives with a slightly altered domain but otherwise seems normal.
  • The AI evaluates the historical pattern of vendor communication, notes the deviation in domain and invoice structure, and flags the message for review or quarantines it before delivery.
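One concrete signal in this scenario is a lookalike domain check. A minimal sketch, using Levenshtein edit distance against known vendor domains (the example domains are hypothetical, and real detectors also consider homoglyphs and registration age):

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[-1] + 1,         # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def looks_like_impersonation(sender_domain: str, known_domains: list, max_dist: int = 2) -> bool:
    """Flag domains close to, but not exactly matching, a known vendor domain."""
    return any(0 < edit_distance(sender_domain, d) <= max_dist
               for d in known_domains)

looks_like_impersonation("acme-billing.co", ["acme-billing.com"])  # True
```

Note the `0 <` guard: an exact match to a known vendor domain is legitimate; only the near-misses are suspicious.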

Read also: What is a supply chain attack?

 

Credential harvesting/account takeover attempts

  • The AI detects an incoming email prompting credential input or login via a suspicious link.
  • It cross-checks user behavior, link reputation, and meta signals (e.g. location, device). If risk is high, it quarantines the email and issues a safe-block message to the user.


 

Internal threat/insider misuse

  • Unusual internal messages from an employee with atypical wording or sending patterns (e.g. mass internal emails outside norms) are flagged as anomalous behavior.
  • If risk crosses threshold, the system escalates to security staff for investigation.

In each of these cases, what sets defensive AI apart is its ability to act with context and intent — not just by blacklisting or matching signatures.

 

Strategic considerations and risks

Deploying defensive AI isn’t without challenges. The Forbes article mentions some strategic imperatives that must be factored into deployment:

  • Governance and ethical use: As AI systems make automated decisions affecting users’ communication, privacy, and workflow, governance frameworks must ensure ethical boundaries, human oversight, and rights of appeal.
  • Trust and user experience: Overly aggressive interventions will annoy or alienate users. Balancing security with usability is critical; users must trust that the system helps rather than obstructs.
  • Attackers vs. defenders parity: The Forbes article warns that attackers will continuously adapt their AI strategies. Defensive AI must maintain an asymmetry in the defenders' favor: greater visibility, richer signals, and stronger adaptability than attackers possess.
  • Operational complexity: Such systems require high-fidelity data pipelines, integrations (email, identity, endpoint, network), and careful calibration. Not every organization has the resources or maturity to do this well.
  • Adversarial exposure: If attackers gain insight into your detection models or thresholds, they might craft bypass strategies. Defensive AI must be designed assuming that adversaries will probe it.

See also: HIPAA Compliant Email: The Definitive Guide (2025 Update)

 

FAQs

Will defensive AI replace human security teams?

Defensive AI is not meant to replace humans but to augment them. It automates detection, triage, and remediation for low- to mid-level risks, freeing human analysts to focus on complex or high-severity incidents.

 

How accurate is defensive AI?

Defensive AI generally offers higher detection rates than static filters, but accuracy depends on the quality of data, integration with other systems, and feedback loops. False positives can still occur, but continuous learning helps reduce them over time.
