A growing number of healthcare organizations use artificial intelligence (AI) tools in their daily operations. Yet many staff members still turn to personal or non-HIPAA-compliant AI accounts to complete work tasks. What might seem like a harmless shortcut can expose entire organizations to serious compliance risks. Most public AI tools do not sign business associate agreements (BAAs).
As one recent Journal of Law, Medicine & Ethics analysis explains, “Developers and vendors of large language models like ChatGPT, Google Bard, and Microsoft’s Bing can be subject to HIPAA when they process PHI on behalf of covered entities. In doing so, they become business associates or subcontractors of a business associate under HIPAA.” This means that if a hospital or clinic staff member inputs identifiable patient data into a public AI tool, the vendor may be operating as an unregulated entity, leaving both the user and the healthcare organization exposed to compliance violations.
Most publicly available AI platforms are also not built for healthcare’s privacy and security needs. They process data in countries with very different data residency laws and may retain or share information in ways that violate HIPAA’s privacy and security standards.
Even when users try to remove identifiers, the risk persists. As the same analysis explains, “There can be instances when patients or practitioners share PHI with an AI chat tool for medical questions, documentation, or summaries.”
In such cases, seemingly harmless interactions can lead to privacy breaches if the data is later re-identified through advanced analytics or model training. As the authors note, “The deployment of AI chatbots in healthcare can be accompanied by privacy risks both for data subjects and the developers and vendors of these AI-driven tools.”
The convenience and accessibility of widely available AI and cloud-based tools often tempt healthcare staff to bypass organizational protocols and regulatory requirements, putting sensitive patient information at risk. This behavior creates compliance challenges, introduces new threats to patient privacy, and erodes the trust that healthcare depends on.
Many popular AI tools, such as chatbots, note-taking applications, and cloud storage platforms, are not built to meet the technical and administrative safeguards required by HIPAA. As described in Journal of Medical Internet Research studies, AI chatbots “use AI and natural language processing to understand customer questions and generate natural, dialogue-like responses” and rely on massive amounts of data to function effectively.
Most consumer-grade AI tools do not include these protections, nor do their providers typically sign BAAs. When healthcare staff upload PHI to these platforms, the data is processed, stored, or transmitted outside the organization’s oversight, exposing both the patient and the institution to potential HIPAA violations.
The risks are not theoretical. Healthcare data breaches linked to non-compliant platforms are on the rise. Cybercriminals target the healthcare sector because of the sheer volume of sensitive data it holds, exploiting weak points in third-party tools. AI systems not designed specifically for healthcare may retain or use data for model training, creating unintended exposure.
Even de-identified information can be re-identified through advanced analytics, further amplifying privacy risks. Such practices conflict with HIPAA’s core principles of data minimization and the “minimum necessary” standard, which require organizations to limit PHI use to what is strictly needed for legitimate purposes.
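To illustrate why stripping obvious identifiers is not a reliable safeguard, here is a minimal Python sketch that redacts a few recognizably formatted identifiers before text leaves the organization. The regex patterns and the sample note are illustrative assumptions, not a complete de-identification method, and the gaps it leaves are exactly what re-identification exploits.

```python
import re

# Illustrative patterns for a few obviously formatted identifiers (assumed formats).
# HIPAA's Safe Harbor method covers 18 identifier categories and typically requires
# dedicated tooling and expert review, not a handful of regular expressions.
PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "mrn":   re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "date":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scrub(text: str) -> str:
    """Replace recognizably formatted identifiers with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

note = "Pt MRN 12345678, DOB 04/12/1957, call (555) 867-5309 re: lab results."
print(scrub(note))
# Free-text names, locations, and rare diagnoses pass straight through this filter,
# which is why scrubbed notes can still be re-identified when combined with other data.
```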
AI models rely on vast and diverse datasets, which often include sensitive patient information. As the BMJ Global Health literature notes, while AI “offers promising solutions in healthcare, it also poses a number of threats to human health and well-being via social, political, economic and security-related determinants of health.” The problem extends beyond technical concerns.
Collecting, storing, and processing this data securely is essential, yet these same datasets are attractive targets for malicious actors. A common threat is data poisoning, where attackers deliberately insert corrupted or biased data to compromise model performance.
The manipulation can lead to faulty clinical predictions, introduce hidden backdoors in AI algorithms, or trigger harmful outputs under specific conditions. The result is a breach of data confidentiality and a risk to the integrity and reliability of AI systems, with consequences for patient care if decisions are based on compromised outputs.
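As a rough illustration of how poisoned training data degrades a model, the sketch below trains a simple classifier on a synthetic dataset, then retrains it after flipping a fraction of the training labels. The dataset, model, and 20% poisoning rate are illustrative assumptions, not a reconstruction of any real attack.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a clinical risk dataset (purely illustrative).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    """Fit a simple classifier and return its held-out accuracy."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Baseline model trained on clean labels.
clean_acc = train_and_score(y_train)

# Simulated label-flipping attack: corrupt 20% of the training labels.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.2 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
poisoned_acc = train_and_score(poisoned)

print(f"clean labels:    {clean_acc:.3f}")
print(f"poisoned labels: {poisoned_acc:.3f}")  # typically noticeably lower
```

The drop in held-out accuracy is the visible symptom; in practice, targeted poisoning can be far subtler, shifting predictions only for specific inputs while overall metrics look normal.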
Deep learning and other advanced techniques also often function as black boxes, making it difficult for healthcare providers or security teams to understand how decisions are reached or to detect unusual behavior that might signal tampering.
As the study points out, while the health literature tends to focus on the potential benefits of AI, there is still limited attention on how misuse or misapplication of these systems can “worsen social and health inequalities by either incorporating existing human biases and patterns of discrimination into automated algorithms or by deploying AI in ways that reinforce social inequalities in access to healthcare.”
While technical and legal safeguards define what makes an AI tool HIPAA compliant, compliance alone isn’t enough to ensure safe AI use in healthcare. HIPAA provides a baseline focused on protecting PHI, but AI introduces new risks. Research from the International Journal of Medical Informatics notes, “Substantial concerns regarding safety, trust, security, and ethical implications have developed” alongside AI’s rapid adoption in healthcare, emphasizing that legal compliance cannot address all potential hazards.
AI models can be vulnerable to adversarial attacks such as data poisoning or model inversion, which may not directly violate HIPAA but can compromise model integrity and patient safety. HIPAA does not explicitly cover issues like algorithmic fairness, bias mitigation, explainability, or the broader ethical implications of AI-driven decisions—factors that are “fundamental for fostering trust among healthcare providers” and for ensuring that AI recommendations are interpretable and reliable.
Explainable AI (XAI), for instance, has emerged as a key development that can make AI “processes clear as crystal and understandable,” helping clinicians evaluate outputs safely and confidently. Healthcare organizations need to adopt comprehensive governance that spans the entire AI lifecycle, including model training, validation, deployment, and ongoing monitoring.
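One widely used, model-agnostic technique in the XAI family is permutation importance, sketched below on a public, de-identified dataset using scikit-learn. The dataset and model here are assumptions chosen purely for illustration, not a prescribed method; the idea is simply to give clinicians a ranked view of which inputs drive a model's predictions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Public, de-identified dataset used purely for illustration.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# held-out accuracy drops, ranking the features the model actually relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(
    zip(data.feature_names, result.importances_mean),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranked[:5]:
    print(f"{name:30s} {score:.4f}")
```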
AI systems can degrade over time or learn new patterns post-deployment, making ongoing evaluation necessary to prevent reduced accuracy, unsafe recommendations, or privacy breaches. As the literature notes, “robust AI systems need to be designed to execute consistently under innumerable conditions and repel adversarial attacks,” and this requires technical oversight coupled with ethical and regulatory guidance.
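What such ongoing evaluation can look like in practice is sketched below: a small monitor that compares a model's rolling post-deployment accuracy against its validation baseline and flags degradation. The baseline, window size, and tolerance are illustrative assumptions; real thresholds would come from validation data and clinical review.

```python
from collections import deque

class DriftMonitor:
    """Track rolling post-deployment accuracy and flag degradation.

    The baseline, window size, and tolerance are illustrative values only.
    """

    def __init__(self, baseline_accuracy: float, window: int = 500, tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)

    def record(self, prediction, actual) -> None:
        """Log whether a resolved prediction matched the true outcome."""
        self.outcomes.append(prediction == actual)

    def degraded(self) -> bool:
        """True once the rolling accuracy falls below baseline minus tolerance."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough post-deployment data yet
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance

# Usage: after each prediction is resolved against the true outcome, call
# monitor.record(pred, actual); if monitor.degraded(), trigger review or retraining.
monitor = DriftMonitor(baseline_accuracy=0.91)
```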
See also: HIPAA Compliant Email: The Definitive Guide (2025 Update)
AI bias occurs when an artificial intelligence system produces results that systematically favor certain outcomes, groups, or individuals over others. Bias can emerge from the data used to train the model, the algorithms themselves, or the way AI is implemented in real-world settings.
Bias often originates from the training data. If historical data reflect existing societal or healthcare inequities, AI models can learn and perpetuate those patterns. Other sources include flawed algorithms, incomplete datasets, or assumptions embedded by developers.
Completely eliminating bias is extremely difficult because AI reflects the data and societal context in which it is developed.
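One basic, widely used check for this kind of bias is to compare positive-prediction (selection) rates across patient groups, as in the sketch below with made-up predictions and group labels. A large gap between groups does not prove discrimination on its own, but it is a signal to examine the training data and the model more closely.

```python
import numpy as np

def selection_rates(predictions: np.ndarray, groups: np.ndarray) -> dict:
    """Positive-prediction rate per group, a basic demographic-parity check."""
    return {
        str(g): float(predictions[groups == g].mean())
        for g in np.unique(groups)
    }

# Illustrative predictions (1 = flagged for follow-up care) and group labels.
preds  = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0])
groups = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

rates = selection_rates(preds, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)                      # e.g. {'A': 0.67, 'B': 0.17}
print(f"parity gap: {gap:.2f}")   # a large gap warrants reviewing the training data
```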