Many healthcare systems already struggle with outdated or incomplete security frameworks, and when AI tools are added to the mix, they often inherit those weaknesses. Most healthcare breaches happen through hacking or insider misuse, and AI creates fresh entry points for attackers because of how complex and interconnected these systems are. These tools process massive amounts of patient data, and if encryption or access controls aren’t handled properly, it becomes much easier for someone to compromise protected health information (PHI).
An Indian Dermatology Online Journal study notes that “AI techniques… inherently require a large amount of data,” often aggregated from multiple environments that may not all be secure. Even when data is de-identified under HIPAA, combining information from different sources makes it easier to re-identify individuals. The amount of data AI handles, and the speed at which it moves through systems, often exceed what current privacy rules were designed to manage, creating room for accidental leaks or intentional abuse.
As the study states, healthcare professionals “often find themselves unsure about ethical and legal constraints that underlie data sharing.” Without oversight, no one is clearly responsible for tracking how that data is stored, shared, or secured. This creates both ethical and legal risks, especially if internal actors or third parties mishandle or misuse the data. Weak regulation makes it easier for PHI to slip through the cracks.
What shadow AI looks like in a real healthcare setting
Shadow AI describes the use of artificial intelligence tools in healthcare without formal approval, oversight, or integration into an organization’s official systems. These tools are often adopted by individual clinicians or departments looking for faster or more efficient solutions, but they operate outside established compliance, security, and auditing structures. Shadow AI can take many forms, including unsanctioned diagnostic apps, predictive models, or data analysis tools that were never vetted by IT or legal teams. The rapid push to adopt AI, combined with the perception that internal approval processes are too slow, has led many healthcare workers to experiment with unapproved technologies that haven’t been evaluated for privacy, safety, or regulatory impact.
One example surfaced in 2023, when an internal memo at Kaiser Permanente revealed that some clinicians had begun using an AI-powered dermatology image analysis app without going through the organization’s formal approval process. According to reporting by Business Insider, images were uploaded directly from mobile devices, and the tool was used without encryption, security review, or compliance oversight. Once the activity was identified during an internal audit, the organization halted its use. While Kaiser Permanente did not issue a public statement about the incident, internal confirmation acknowledged that the app had been used without authorization.
Why healthcare workers turn to unapproved AI
Many hospitals and clinics face staffing shortages, rising patient loads, and time-sensitive decision-making demands. According to the study ‘Artificial intelligence: opportunities and implications for the health workforce,’ there was an estimated shortage of 18 million healthcare workers as far back as 2013, driven by chronic underinvestment and labor constraints. This has created unsatisfactory work environments and contributed to burnout rates “reaching between 25% and 75% in some clinical specialties.” At the same time, the volume of clinical information has grown at a pace that outstrips human capacity.
Medical knowledge that once doubled every 50 years is now projected to double every 73 days, making it increasingly difficult for clinicians to keep up with the demands of data processing and evidence-based decision-making. As one review puts it, “healthcare involves cyclic data processing to derive meaningful, actionable decisions,” but the rapid increase in clinical data has “added to the occupational stress of healthcare workers.”
These pressures make fast, unregulated AI tools attractive when approved systems are too slow, limited, or still in development. AI is already being used to automate documentation, summarize clinical records, and support diagnostics, and in some nursing environments it has reportedly increased productivity by 30–50%.
Clinicians seeking relief from administrative burden see immediate appeal in tools that can “auto-populate structured data fields,” “transcribe recorded patient encounters,” or extract insights from unstructured text. In practical terms, when official AI systems are delayed by compliance reviews or regulatory approvals, staff may look to third-party or open-source tools that can deliver quick results—even if those tools operate outside governance frameworks.
How patient data gets exposed
Shadow AI has become a growing liability in healthcare because many of the tools in use were never vetted, approved, or secured. A number of these models run on external or poorly protected servers, with little to no encryption in place during data processing or storage. When clinicians or staff use these tools to analyze diagnostic images, interpret lab results, or draft communications, they may unintentionally transmit protected health information through unsecured channels.
The problem is compounded by the fact that most use goes undetected. According to our research, nearly 95% of healthcare organizations believe their staff are already using generative AI in email or content workflows, and 62% of leaders have directly observed employees experimenting with unsanctioned tools like ChatGPT. Despite this, a quarter of organizations have not formally approved any AI use in email, meaning staff are acting without oversight and outside compliance frameworks.
These unregulated tools often bypass institutional security controls entirely, increasing opportunities for data leaks and breaches. Healthcare workers may use AI services on personal devices, third-party apps, or unmanaged cloud platforms, allowing PHI to move beyond secure networks. This fragmented data flow exposes patient information to unauthorized access, especially when threat actors look for vulnerabilities in unmonitored systems.
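One way security teams can begin to surface this kind of unsanctioned use is to watch existing proxy or firewall logs for traffic to known generative AI services. The short Python sketch below illustrates the idea; the log format, column names, and domain list are assumptions for illustration, not a specific vendor integration.

```python
# Hypothetical sketch: flag web-proxy log entries whose destination matches a
# known public generative AI service, so unsanctioned use can be reviewed.
import csv

# Illustrative, non-exhaustive list of domains associated with public AI tools
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"}

def find_shadow_ai_events(proxy_log_path: str) -> list[dict]:
    """Return proxy-log rows whose destination host matches a known AI domain."""
    flagged = []
    with open(proxy_log_path, newline="") as f:
        # Assumed CSV columns: timestamp, user, dest_host
        for row in csv.DictReader(f):
            host = (row.get("dest_host") or "").lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    # "proxy_log.csv" is a hypothetical export from a web proxy or firewall
    for event in find_shadow_ai_events("proxy_log.csv"):
        print(event["timestamp"], event["user"], "->", event["dest_host"])
```

Even a simple report like this gives compliance teams a starting point for conversations with staff, rather than discovering shadow AI only after an audit or breach.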
In many cases, the exposure happens unintentionally: 38% of employees admit to pasting sensitive work information into AI tools without employer approval. If a clinician enters a patient summary or diagnostic note into a chatbot such as ChatGPT or Google Gemini, that information may be retained, processed, or used to train the model, with no guarantee of HIPAA compliance.
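To make that risk concrete, the minimal sketch below scans a note for a few obvious identifier patterns before it leaves a workstation. The regular expressions and the example note are simplified placeholders; genuine HIPAA Safe Harbor de-identification covers 18 identifier categories and requires far more than pattern matching.

```python
# Hypothetical sketch: flag likely PHI (SSNs, phone numbers, MRNs, dates) in
# free text before it is pasted into an external chatbot. Patterns are crude
# placeholders, not a complete de-identification solution.
import re

PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),  # catches DOBs and visit dates
}

def flag_phi(text: str) -> dict[str, list[str]]:
    """Return likely PHI found in the text, grouped by pattern name."""
    return {name: rx.findall(text) for name, rx in PHI_PATTERNS.items() if rx.search(text)}

note = "Pt seen 04/12/2025, DOB 03/08/1958, MRN: 00482913, call back at 555-867-5309."
hits = flag_phi(note)
if hits:
    # In a real workflow this would block or redact the text, not just print
    print("Blocked: note contains likely PHI, do not paste into an external chatbot:", hits)
```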
How healthcare organizations can combat shadow AI
Healthcare systems should start by formalizing enterprise AI governance with clear roles, decision rights, and approval pathways. Recent healthcare governance case studies, such as the NPJ study ‘A practical framework for appropriate implementation and review of artificial intelligence (FAIR-AI) in healthcare,’ recommend a cross-functional AI committee to set principles (safety, equity, privacy), gatekeep deployments, and monitor post-implementation risk. These bodies align clinical leaders, information security, compliance, and patient representatives around a single operating model for AI oversight.
Building on this, organizations should publish a “responsible use of AI” standard that specifies allowed use cases, the pre-deployment evidence required (clinical validity, utility, and security), change control procedures, and retirement criteria. Practical guidance shows that codified policies improve transparency and accountability while preserving innovation velocity.
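A standard like this can also be encoded as data, so an intake workflow can answer “is this tool approved for this use?” automatically. The sketch below is a hypothetical illustration; the tool names, fields, and use-case labels are assumptions, not part of the FAIR-AI framework.

```python
# Hypothetical sketch: a minimal registry of approved AI tools that an intake
# workflow could consult before staff adopt a tool for a given use case.
from dataclasses import dataclass, field

@dataclass
class ApprovedTool:
    name: str
    baa_signed: bool          # business associate agreement in place
    security_review: str      # date of the most recent security/compliance review
    allowed_uses: set[str] = field(default_factory=set)

# Illustrative entries only; names, dates, and use-case labels are made up
REGISTRY = {
    "ambient-scribe": ApprovedTool("ambient-scribe", True, "2025-01-15",
                                   {"documentation", "transcription"}),
    "rad-triage": ApprovedTool("rad-triage", True, "2024-11-02", {"imaging-triage"}),
}

def is_use_approved(tool_name: str, use_case: str) -> bool:
    """Approved only if the tool is registered, has a BAA, and lists the use case."""
    tool = REGISTRY.get(tool_name)
    return bool(tool and tool.baa_signed and use_case in tool.allowed_uses)

print(is_use_approved("ambient-scribe", "documentation"))  # True
print(is_use_approved("chatgpt-free", "documentation"))    # False: never registered or reviewed
```

Keeping the registry under change control gives the AI committee a single place to add, re-review, or retire tools as the standard evolves.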
See also: HIPAA Compliant Email: The Definitive Guide (2025 Update)
FAQs
Is shadow AI always intentional?
Not necessarily. In many cases, clinicians and staff use unapproved AI tools because they are unaware of existing policies, assume the tools are safe, or lack access to compliant alternatives.
Does using free or open-source AI tools increase the risk?
Yes. Free or public AI platforms typically lack business associate agreements (BAAs), encryption standards, data-use restrictions, and retention controls required for handling PHI. Even if no patient names are entered, re-identification risks and metadata exposure remain high.
Can shadow AI activity go undetected?
Yes. Because these tools operate outside formal IT systems, there may be no logging, oversight, or reporting. Organizations often discover shadow AI only after a breach, audit, or employee disclosure, making the risk harder to contain.
