
5 HIPAA violations caused by improper AI use

According to "AI and HIPAA compliance: How to navigate major risks," published by TechTarget, 66% of physicians reported using AI in their practices in 2025, up from just 38% in 2023. That rapid adoption has also created HIPAA compliance challenges.

A recent academic paper shows the urgency of this issue, noting that healthcare AI deployment is "accelerating without commensurate security evaluation." As Tony UcedaVelez, CEO of VerSprite Security, noted in the TechTarget article: "If we hadn't had a problem with data governance before, we have it now with AI. It's a new paradigm for [personally identifiable information] governance."

As healthcare organizations adopt AI solutions, many are exposing themselves to violations that can result in fines and damaged patient trust. Here are five HIPAA violations caused by improper AI use that every healthcare provider should be aware of.

Read also: Risk of ungoverned AI use in healthcare

 

1. Uploading protected health information to unsecured AI platforms

According to a recent analysis, "Health care workers are leaking patient data through AI tools, cloud apps," 71% of healthcare workers are using personal AI accounts for work purposes. The problem is that most public AI tools, such as ChatGPT and Google Gemini, don't sign business associate agreements (BAAs) or meet HIPAA compliance standards, making their use with PHI a direct violation.

Furthermore, the report reveals that 81% of data policy violations in healthcare organizations involved regulated data like PHI. When a physician uploads patient records, diagnostic images, or clinical notes to a non-compliant AI tool for analysis or assistance, they're creating an unauthorized disclosure of PHI. These platforms may store, analyze, or even use this data to train their models. In fact, 96% of healthcare organizations rely on AI tools that train on user data, potentially exposing sensitive patient information to unknown third parties.

Ray Canzanese, director of Netskope Threat Labs, warns that the consequences extend far beyond fines: "breaches erode patient trust and damage organizational credibility with vendors and partners."

Healthcare organizations must ensure that any AI platform used for clinical purposes has a signed BAA and implements appropriate administrative, physical, and technical safeguards to protect patient data.

Related: Anthropic brings Claude AI to healthcare with HIPAA tools

 

2. AI-powered chatbots and unauthorized data sharing

According to "HIPAA Violations in the AI Era: Real-World Cases and Lessons Learned," one hospital implemented an AI-driven chatbot to assist patients with scheduling and medical inquiries. The chatbot ended up sharing sensitive patient data, such as appointment details and symptoms, with third-party analytics providers without proper safeguards or patient consent.

This violation of the HIPAA minimum necessary standard occurs when AI-powered chatbots, analytics dashboards, or decision support systems provide broader access to patient information than needed or share data with unauthorized parties. For example, an AI scheduling assistant might have access to full medical histories when it only needs appointment-related information, or a patient-facing chatbot might transmit PHI to third-party analytics platforms for performance monitoring without a business associate agreement in place.
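
To make the minimum necessary principle concrete, here is a minimal sketch of an integration layer that filters a patient record down to scheduling fields before anything reaches a chatbot. The field names and record structure are hypothetical and not drawn from any particular product or API.

```python
# Illustrative sketch of enforcing the minimum necessary standard before
# handing patient data to a scheduling chatbot. Field names are hypothetical.

SCHEDULING_FIELDS = {"patient_id", "appointment_time", "provider", "location"}

def minimum_necessary(record: dict, allowed_fields: set) -> dict:
    """Return only the fields an AI integration is authorized to receive."""
    return {k: v for k, v in record.items() if k in allowed_fields}

full_record = {
    "patient_id": "A-1001",
    "appointment_time": "2025-06-03T09:30",
    "provider": "Dr. Smith",
    "location": "Clinic B",
    "diagnosis": "Type 2 diabetes",   # not needed for scheduling
    "medications": ["metformin"],     # not needed for scheduling
}

chatbot_payload = minimum_necessary(full_record, SCHEDULING_FIELDS)
print(chatbot_payload)  # no diagnosis or medication data is passed along
```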

Related: AI chatbots in healthcare: Innovation meets HIPAA compliance

 

3. Failure to conduct risk assessments before AI implementation

HIPAA's Security Rule requires covered entities to conduct regular risk assessments of systems that handle PHI. The academic paper notes that "adversaries with access to as few as 100-500 samples can successfully compromise healthcare AI, regardless of dataset size," with attack success rates exceeding 60% and "detection timescales ranging from 6 to 12 months or never." This means that even organizations with millions of data points can be compromised by small-scale attacks that may go undetected for extended periods.

As Bhushan Jayeshkumar Patel explained in the TechTarget article, the fundamental challenge is that "traditional HIPAA frameworks were not designed for real-time AI decision-making." AI systems introduce unique vulnerabilities that traditional risk assessment frameworks may not address, from adversarial attacks that manipulate model outputs to data poisoning that corrupts training datasets.

Without proper risk assessment, organizations may fail to identify these exposure points. The TechTarget article states that many medical devices, such as surgical robots and wearables, now transmit patient data to cloud-based platforms, increasing exposure to breaches.

The consequences extend beyond compliance penalties. An unvetted AI system might create new pathways for data breaches, expose PHI through API vulnerabilities, or fail to maintain data integrity, all of which violate HIPAA's requirements for the confidentiality, integrity, and availability of health information. Organizations must conduct risk assessments that address AI-specific vulnerabilities before deployment.
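
As a rough sketch of how AI-specific risks could be tracked alongside a standard Security Rule assessment, the example below records whether each risk area has been assessed and mitigated before deployment. The risk categories mirror those discussed in this section; the structure and field names are hypothetical, not a formal standard.

```python
# Hypothetical pre-deployment checklist for AI-specific risks. The risk areas
# mirror those discussed above; the structure is illustrative only.

from dataclasses import dataclass

@dataclass
class RiskItem:
    area: str
    assessed: bool = False
    mitigated: bool = False

AI_RISK_CHECKLIST = [
    RiskItem("adversarial manipulation of model outputs"),
    RiskItem("data poisoning of training datasets"),
    RiskItem("PHI exposure through API vulnerabilities"),
    RiskItem("cloud transmission from connected medical devices"),
]

def ready_for_deployment(checklist: list) -> bool:
    """Deployment should wait until every AI-specific risk is assessed and mitigated."""
    return all(item.assessed and item.mitigated for item in checklist)

print(ready_for_deployment(AI_RISK_CHECKLIST))  # False until each area is addressed
```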

Read also: How to conduct a HIPAA risk assessment for AI tools

 

4. Using patient data to train AI models without authorization

Some healthcare organizations have ventured into developing their own AI models or partnering with technology companies to create AI solutions. However, using patient data to train these models without proper authorization is a HIPAA violation.

The Privacy Rule requires that uses and disclosures of PHI be limited to what patients have authorized or what's permitted by law. Organizations often rely on de-identification to avoid the authorization requirement, but HIPAA's de-identification standards require either expert determination or the removal of 18 specific identifiers (the Safe Harbor method), plus no actual knowledge that the remaining information could identify individuals.
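
For illustration, here is a naive Safe Harbor-style filter that drops direct identifier fields and aggregates ages over 89 before records are exported for model training. The field names are hypothetical, and a real de-identification pipeline would also need to cover dates, free-text notes, and all 18 identifier categories, ideally validated through expert review.

```python
# Naive Safe Harbor-style filter (illustration only). Field names are
# hypothetical; a real pipeline must cover all 18 identifier categories,
# dates, and free-text fields, and should be formally validated.

IDENTIFIER_FIELDS = {
    "name", "address", "phone", "email", "ssn", "mrn",
    "account_number", "device_id", "ip_address", "photo_url",
}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and aggregate ages over 89, per Safe Harbor."""
    cleaned = {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}
    if isinstance(cleaned.get("age"), int) and cleaned["age"] > 89:
        cleaned["age"] = "90+"  # Safe Harbor: ages over 89 must be aggregated
    return cleaned

record = {"name": "Jane Doe", "mrn": "12345", "age": 93, "diagnosis_code": "E11.9"}
print(deidentify(record))  # {'age': '90+', 'diagnosis_code': 'E11.9'}
```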

Furthermore, if trained models retain patterns or information that could reconstruct individual patient data, this creates ongoing compliance concerns about data retention and the right to access or amend records.

Learn more: Can de-identified data be used to train AI under HIPAA?

 

5. Inadequate training and policies for AI use

Without clear guidance, healthcare workers may use AI inappropriately, such as seeking diagnostic assistance from consumer chatbots, sharing screenshots of patient records with AI tools for transcription, or using generative AI to draft clinical documentation without understanding how the platform handles input data.

Organizations must establish policies that specify which AI tools are approved for use, how to handle PHI when using these tools, and what to do if an unauthorized disclosure occurs. These policies should address everything from acceptable use of consumer AI products to procedures for vetting new AI vendors.
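
One way to make such a policy operational, sketched below, is to encode the approved-tool list and each tool's BAA status as data that internal integrations can check before sending PHI anywhere. The tool names and fields are hypothetical examples, not recommendations.

```python
# Hypothetical acceptable-use policy encoded as data. Tool names and fields
# are illustrative; the actual approved list belongs to your compliance team.

APPROVED_AI_TOOLS = {
    "internal-scribe": {"baa_signed": True,  "phi_allowed": True},
    "public-chatbot":  {"baa_signed": False, "phi_allowed": False},
}

def may_use_with_phi(tool: str) -> bool:
    """A tool may handle PHI only if approved and covered by a signed BAA."""
    policy = APPROVED_AI_TOOLS.get(tool)
    return bool(policy and policy["baa_signed"] and policy["phi_allowed"])

print(may_use_with_phi("internal-scribe"))  # True
print(may_use_with_phi("public-chatbot"))   # False
print(may_use_with_phi("unvetted-tool"))    # False: not on the approved list
```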

Related: AMA releases 8-step AI governance toolkit for healthcare providers

 

Protecting your organization

The academic paper notes that "legal protections designed to safeguard patient privacy and prevent discrimination, such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States and the General Data Protection Regulation (GDPR) in the European Union, paradoxically might shield attackers from detection." This means healthcare organizations must be especially vigilant in implementing proactive security measures.

The paper further notes that "healthcare's distributed infrastructure creates numerous points of entry where insiders with routine access can execute attacks with minimal technical sophistication," highlighting why safeguards are important at every level of your organization.

In practice, protecting your organization means conducting thorough vendor assessments, ensuring BAAs are in place, implementing proper access controls, training staff on AI-specific risks, and maintaining documentation of all AI-related privacy and security measures. And given that "current regulatory frameworks lack mandatory adversarial robustness testing," as the academic paper points out, organizations must go beyond minimum compliance requirements.

 

FAQs

Can patients sue a healthcare organization directly for HIPAA violations involving AI?

HIPAA itself does not provide a private right of action, but AI-related HIPAA violations can still lead to lawsuits under state privacy, negligence, or consumer protection laws.

 

Does HIPAA apply differently to AI tools used for administrative tasks versus clinical decision-making?

HIPAA applies equally whenever protected health information is involved, regardless of whether AI is used for scheduling, billing, documentation, or clinical support.

 

Are AI vendors automatically considered business associates under HIPAA?

No. An AI vendor is a business associate only if it creates, receives, maintains, or transmits PHI on behalf of a covered entity. When it does, HIPAA requires a signed business associate agreement before the vendor can handle that PHI.
