The Paubox 2025 Healthcare Email Security Report reveals that between January 2024 and January 2025, the HHS Office for Civil Rights received breach reports from 180 healthcare organizations citing email security failures as a contributing factor. Many of these incidents share a common vulnerability: employees using artificial intelligence or other tools without IT or compliance approval.
This phenomenon, known as shadow AI, represents one of healthcare's most dangerous blind spots. Unlike traditional security threats that organizations can detect and respond to, shadow AI operates in plain sight, hidden not by technical sophistication but by organizational confusion about what employees are already doing.
According to the NSA Artificial Intelligence Security Center and CISA, organizations deploying externally developed AI systems must provide appropriate mitigations for known vulnerabilities in AI systems and ensure proper confidentiality, integrity, and availability controls. Shadow AI systematically bypasses these requirements.
Healthcare amplifies these challenges. A 2024 analysis by the Department of Homeland Security identified systemic vulnerabilities in the healthcare sector, among them the organizational pressures under which its workforce operates.
These pressures create conditions where employees, facing extreme workloads and seeking legitimate productivity gains, turn to unapproved AI tools without understanding the compliance implications.
The JCDC AI Cybersecurity Collaboration Playbook emphasizes that AI systems introduce unique complexities due to their reliance on data-driven, non-deterministic models, making them vulnerable to malicious cyber activity such as model poisoning, data manipulation, and adversarial inputs. These vulnerabilities, coupled with the rapid adoption of AI systems, demand comprehensive strategies and public-private partnerships to address evolving risks. Yet healthcare organizations grappling with shadow AI operate largely in isolation, without the coordinated information sharing, vendor oversight, or technical safeguards that federal guidance prescribes.
According to a Paubox survey of healthcare IT and compliance leaders, 95% of organizations report that staff are already using AI tools for work email, yet 25% have not formally approved any staff AI email use. More alarming is that 62% of leaders have directly observed employees experimenting with ChatGPT or similar AI tools despite knowing these tools are unsanctioned. The gap between organizational policy and employee behavior has created a compliance crisis that traditional security controls cannot address.
When clinicians and administrators paste patient information into unapproved AI chatbots, when they use free online AI services to draft emails containing protected health information, or when they assume that embedded AI features in Microsoft 365 and Google Workspace are automatically HIPAA compliant, they're introducing data exposure pathways that bypass the carefully constructed compliance infrastructure healthcare organizations have built.
The result is a paradox: healthcare organizations are accelerating AI adoption while simultaneously losing visibility into how that AI is being used. This collision between innovation momentum and compliance reality is creating the conditions for preventable breaches.
Healthcare organizations are under pressure to modernize. Executive leadership sees AI as a competitive advantage and a pathway to improved clinical workflows. IT teams receive directives to implement AI tools rapidly. Clinical staff discover productivity benefits and begin using AI independently. Meanwhile, compliance teams, often under-resourced and struggling to understand AI's regulatory implications, find themselves playing catch-up.
Leadership momentum outpaces security readiness. According to the Paubox survey, 69% of IT leaders feel pressured to adopt AI faster than their organization can actually secure it. This creates a culture where speed is rewarded and verification is viewed as an obstacle. When an executive champions rapid AI deployment, IT and compliance teams face an impossible choice: appear obstructionist by raising concerns, or accelerate implementation and hope problems don't emerge.
Compliance oversight becomes reactive instead of proactive. The survey reveals that 16% of compliance leaders were not even consulted before AI features were activated in Gmail or Outlook at their organizations. This isn't organizational dysfunction but the result of AI arriving embedded in mainstream productivity tools. Microsoft 365 Copilot and Google Gemini don't require explicit installation. They appear as features within systems staff already use. By the time compliance teams realize AI is operational, employees have already begun using it with patient data.
Knowledge gaps about AI compliance requirements are widespread. The research found that 21% of healthcare teams believe a Business Associate Agreement (BAA) isn't required for an AI email assistant. This fundamental misunderstanding cascades through organizations. If leadership and staff don't understand that any tool processing Protected Health Information (PHI) requires a BAA, they proceed without the necessary legal protections and put the organization at risk of HIPAA noncompliance.
Seventy-five percent of IT leaders believe their staff assume tools like Microsoft Copilot are automatically HIPAA compliant simply because they come from established vendors. This assumption, while understandable, is dangerously incomplete. A BAA and a formal compliance assessment are required regardless of vendor reputation.
An overwhelming 94% of IT leaders feel confident they could detect AI misuse before a HIPAA violation occurs. Yet 62% have observed staff using unsanctioned ChatGPT despite this confidence. Traditional security controls such as email Data Loss Prevention (DLP), encryption gateways, and network monitoring were not designed to detect shadow AI usage. Someone copying patient information into a web-based chatbot may not trigger traditional security alerts, especially if they're doing so from a personal device or over otherwise approved internet activity.
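To see why the detection gap exists, consider a minimal sketch of a regex-based outbound email DLP check. The patterns, function name, and sample text below are illustrative assumptions, not any specific product's rules; the point is that text pasted into a browser-based chatbot never reaches this kind of check at all.

```python
import re

# Illustrative (hypothetical) patterns an email DLP gateway might scan for:
# an SSN-like format and a simple medical-record-number format.
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # SSN-like
    re.compile(r"\bMRN[:#\s]*\d{6,10}\b", re.I),   # MRN-like
]

def scan_outbound_email(body: str) -> bool:
    """Return True if the outbound email body appears to contain PHI."""
    return any(p.search(body) for p in PHI_PATTERNS)

# The gateway only inspects mail routed through organizational infrastructure.
email_body = "Patient MRN: 00482913, labs attached."
print(scan_outbound_email(email_body))  # True -> the message can be flagged or blocked

# The same text pasted into a public chatbot travels over HTTPS inside the
# user's browser session and never passes through scan_outbound_email(),
# so no alert fires and no record of the disclosure exists.
```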
According to research by Netskope cited in the Paubox shadow AI report, "Healthcare workers routinely expose sensitive data such as PHI by using generative AI tools such as ChatGPT and Google Gemini" without oversight. An IBM poll found that 38% of employees admitted to sharing sensitive work information with AI tools without employer approval. In healthcare specifically, where workloads are extreme and efficiency gains feel like patient care imperatives, this rate likely runs higher.
When an employee pastes a patient's medical summary, lab results, or clinical notes into ChatGPT, that data may be stored by the service, used to train the AI model, or accessed by the AI company for other purposes. Many free and freemium AI services explicitly state in their terms of service that user-submitted data becomes part of their training datasets. For organizations handling PHI, this constitutes a breach — one that may go undetected until a regulatory audit or patient complaint surfaces it.
HIPAA requires comprehensive audit trails documenting every access, modification, and transmission of PHI. When staff use web-based AI tools external to organizational systems, these interactions fall outside normal audit logging. A compliance officer reviewing email security logs will see no evidence that PHI was shared with external AI services because the sharing happened through a web browser rather than organizational infrastructure.
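For illustration only, the sketch below shows the kind of audit record organizational systems typically write when PHI moves through them; the field names and the log_phi_event helper are hypothetical, not drawn from any particular EHR or email platform. A browser-based chatbot session never generates an entry like this.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class PHIAuditEvent:
    """Hypothetical audit record an EHR or email system might write for PHI access."""
    user_id: str
    action: str          # e.g., "view", "modify", "transmit"
    record_id: str       # internal patient or record identifier
    destination: str     # where the PHI went, if transmitted
    timestamp: str

def log_phi_event(user_id: str, action: str, record_id: str, destination: str) -> None:
    event = PHIAuditEvent(
        user_id=user_id,
        action=action,
        record_id=record_id,
        destination=destination,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # A real system would write to tamper-evident storage; printing stands in here.
    print(json.dumps(asdict(event)))

# A transmission through organizational infrastructure leaves a trace:
log_phi_event("jdoe", "transmit", "record-1187", "referral@partnerclinic.example")

# Copying the same record into a public chatbot happens entirely in the browser,
# outside any code path that calls log_phi_event(), so the audit trail stays silent.
```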
Many unapproved AI tools lack BAAs. Using them to process PHI violates HIPAA's Privacy Rule, which requires any entity processing PHI on behalf of a covered entity to be contractually bound by HIPAA privacy and security requirements. According to the Paubox survey, only 42% of organizations have signed a BAA covering an AI email tool. That means 58% of healthcare organizations are either using AI tools without required legal protections or haven't yet addressed this compliance gap.
The HHS Office for Civil Rights enforcement data shows that willful neglect violations, which include failing to implement required Business Associate Agreements, carry penalties up to $1.5 million. Shadow AI that processes PHI without a BAA in place represents exactly this type of violation.
Training gaps compound the problem. The survey found that 84% of healthcare organizations have not trained most (75-100%) of their staff with access to PHI on proper AI usage in email. This means the vast majority of clinical and administrative staff lack formal guidance about what they can and cannot do with AI tools when PHI is involved. They're left to make independent judgment calls about compliance, with many defaulting to the assumption that "if the tool exists, it must be fine to use."
Shadow AI refers to the use of artificial intelligence tools by employees without formal IT or compliance approval. In practice, this means staff using generative AI services like ChatGPT, Gemini, or Copilot for work tasks, including tasks involving PHI, without organizational oversight or proper legal agreements in place.
A Business Associate Agreement is a legally binding contract between a covered entity (like a healthcare organization) and any vendor that processes, stores, or accesses PHI. The BAA ensures that the vendor implements required HIPAA safeguards and maintains the privacy and security of patient data.
Model poisoning occurs when malicious actors introduce corrupted or biased data into an AI system's training dataset, causing the model to produce inaccurate or harmful outputs. In healthcare, a poisoned AI model could provide incorrect clinical recommendations or systematically discriminate against certain patient populations. Shadow AI usage increases poisoning risk because unapproved tools lack the data validation and integrity controls that enterprise AI systems maintain.
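As a toy sketch of the idea (synthetic data and scikit-learn, not any clinical system or an example from the Playbook), flipping a fraction of training labels is one crude form of poisoning: the model fit on corrupted labels typically scores worse on clean test data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification data standing in for any training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# An attacker flips 30% of the training labels (a crude poisoning attack).
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```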