When vendors describe their AI tools as "HIPAA-ready," they typically mean the platform has certain technical capabilities that could support HIPAA compliance, such as encryption at rest and in transit, audit logging, or user authentication. These features are valuable, but they represent only a fraction of what actual HIPAA compliance requires.
Learn more: What is the key to HIPAA compliance?
One of the clearest differences between "ready" and "compliant" is the business associate agreement (BAA). Under HIPAA, any entity that handles protected health information (PHI) on behalf of a covered entity must sign a BAA. This legally binding contract spells out specific responsibilities for safeguarding patient data and establishes liability in the event of a breach.
As detailed in AI Chatbots and Challenges of HIPAA Compliance for AI Developers and Vendors, published by Cambridge University Press in the Journal of Law, Medicine & Ethics, "Developers and vendors of large language models ('LLMs') — such as ChatGPT, Google Bard, and Microsoft's Bing at the forefront — can be subject to Health Insurance Portability and Accountability Act of 1996 ('HIPAA') when they process protected health information ('PHI') on behalf of the HIPAA covered entities. In doing so, they become business associates or subcontractors of a business associate under HIPAA."
Some AI vendors offering HIPAA-ready products either won't sign a BAA or will do so only under their enterprise pricing tier, which can be expensive for smaller practices. Without a signed BAA, using the tool with any PHI violates HIPAA regulations, regardless of how robust the platform's security features might be. The "readiness" of the technology becomes irrelevant if the contractual side of compliance doesn't exist.
As researchers at Mississippi State University note in Towards a HIPAA Compliant Agentic AI System in Healthcare, when AI systems operate through third-party APIs, a business associate agreement becomes a mandatory factor that must be validated before any authorization to access protected health information can be granted.
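As a rough sketch of that precondition, the example below (all names and fields are hypothetical) gates any transfer of PHI to a third-party vendor on a signed, current BAA, echoing the point that the contractual check comes before technical capability even matters.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class VendorRecord:
    """Hypothetical record of a third-party AI vendor's contractual status."""
    name: str
    baa_signed: bool
    baa_expires: Optional[date]

def may_send_phi(vendor: VendorRecord, today: date) -> bool:
    """Allow PHI to flow to the vendor only if a signed, unexpired BAA is on file.

    The contractual check happens before any authorization is granted,
    no matter how strong the vendor's security features are.
    """
    if not vendor.baa_signed:
        return False
    if vendor.baa_expires is not None and vendor.baa_expires < today:
        return False
    return True

# A technically capable vendor with no signed BAA is still blocked.
vendor = VendorRecord(name="ExampleTranscribeAI", baa_signed=False, baa_expires=None)
assert may_send_phi(vendor, date.today()) is False
```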
Related: When does AI become a business associate under HIPAA?
Even when a vendor will sign a BAA, HIPAA compliance isn't automatic. The regulations require covered entities to implement appropriate administrative, physical, and technical safeguards. In his analysis, Ahmad K. Momani notes that healthcare providers must implement comprehensive policies and procedures, train their employees on HIPAA regulations, and ensure continuous compliance. Momani further states that organizations must guarantee that any new technology used to store or transmit protected health information complies with HIPAA's security provisions.
A HIPAA-ready platform might offer the technical capabilities to support these requirements, but organizations must still configure and use the system correctly. Consider an AI transcription service that offers encrypted storage and role-based access controls. These features are important, but compliance also depends on how the organization configures and uses them. The vendor's technology might be ready, but compliance is an organizational responsibility that extends beyond the product itself.
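A simplified illustration, with made-up roles and permissions: the vendor can supply role-based access control as a feature, but the mapping of roles to permissions below is the organization's to define, and getting it wrong undoes the benefit of the feature.

```python
# Minimal role-based access sketch with hypothetical roles and permissions.
# The vendor ships the enforcement mechanism; the covered entity still
# defines and maintains the policy that drives it.
ROLE_PERMISSIONS = {
    "clinician": {"create_transcript", "read_transcript"},
    "biller": {"read_transcript"},
    "front_desk": set(),  # granting this role broad access would be the organization's failure, not the vendor's
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the organization's policy permits the role to take the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("clinician", "read_transcript")
assert not is_allowed("front_desk", "read_transcript")
```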
The compliance challenges are worsened by agentic AI systems that can autonomously interact with electronic health records and execute clinical tasks with minimal human oversight. According to Towards a HIPAA Compliant Agentic AI System in Healthcare, these autonomous systems introduce critical risks to protected health information security because they can dynamically access sensitive data and make decisions without human supervision.
Researchers have identified that truly compliant AI systems require multiple layers of technical safeguards working in concert. These include attribute-based access control mechanisms that evaluate user roles, data sensitivity levels, and environmental conditions in real time before granting access; hybrid sanitization approaches that combine pattern-matching techniques with AI models to detect and redact protected information in both structured data and unstructured clinical narratives; and immutable audit trails that cryptographically secure all access decisions and data interactions to prevent tampering and ensure accountability.
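The sketch below is not the researchers' implementation; it is a simplified, assumed illustration of two of those layers: an attribute-based access decision that weighs role, data sensitivity, and environmental context, and a hash-chained audit entry that makes after-the-fact tampering detectable. All policy values and function names are invented for the example.

```python
import hashlib
import json
from datetime import datetime, timezone

def abac_decision(user_role: str, data_sensitivity: str, context: dict) -> bool:
    """Toy attribute-based access decision: role, data sensitivity, and
    environmental conditions are all evaluated before access is granted."""
    if data_sensitivity == "phi" and user_role not in {"clinician", "care_coordinator"}:
        return False
    if context.get("network") != "hospital_vpn":  # environmental condition
        return False
    if not context.get("within_business_hours", False):
        return False
    return True

def append_audit_entry(log: list, event: dict) -> dict:
    """Append a tamper-evident audit entry: each entry embeds the hash of the
    previous one, so rewriting any earlier record breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "prev_hash": prev_hash,
    }
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)
    return body

audit_log = []
ctx = {"network": "hospital_vpn", "within_business_hours": True}
allowed = abac_decision("clinician", "phi", ctx)
append_audit_entry(audit_log, {"user_role": "clinician", "resource": "phi", "allowed": allowed})
```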
There is a gap between a HIPAA-ready platform and these technical requirements. For instance, a vendor might offer basic encryption and access controls, but lack the context-aware governance mechanisms necessary for autonomous AI systems handling protected health information.
AI systems learn from data, and many AI vendors improve their models using customer data. Momani notes the privacy concerns that arise from using AI tools for patient data analysis, stressing the need to establish strict data protection measures and to ensure that all AI applications comply with relevant health data privacy regulations.
Even if a vendor signs a BAA and implements strong security measures, a vendor that uses patient data to train its AI models without proper de-identification or authorization can leave the healthcare organization in violation of HIPAA's minimum necessary standard and patients' privacy rights. Vendors may claim their training processes are HIPAA compliant, but without transparency into exactly how data is de-identified and what safeguards prevent re-identification, healthcare organizations are taking a risk.
Research from Towards a HIPAA Compliant Agentic AI System in Healthcare demonstrates the complexity of proper de-identification, showing that effective sanitization requires dual-stage approaches. Initial sanitization must occur before data reaches AI models, followed by post-inference redaction to address any residual protected information that might leak through in AI-generated outputs.
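As a rough illustration of that dual-stage idea, the sketch below redacts identifier patterns both before text reaches a model and again in the model's output. The patterns and function names are hypothetical and deliberately incomplete; systems of the kind the paper describes pair such patterns with AI-based detectors for unstructured clinical text.

```python
import re

# Hypothetical, deliberately incomplete identifier patterns for illustration only.
PHI_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{3}[.-]\d{3}[.-]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\bMRN[:\s]*\d{6,}\b", re.IGNORECASE), "[MRN]"),
]

def sanitize(text: str) -> str:
    """Redact identifier patterns from text."""
    for pattern, placeholder in PHI_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

def call_model_with_dual_stage_sanitization(prompt: str, model_fn) -> str:
    """Stage 1: sanitize before the model sees the text.
    Stage 2: redact the output in case residual PHI leaks through."""
    pre_sanitized = sanitize(prompt)
    raw_output = model_fn(pre_sanitized)
    return sanitize(raw_output)

# Usage with a stand-in model function:
echo_model = lambda text: f"Summary of note: {text}"
print(call_model_with_dual_stage_sanitization("Patient MRN: 1234567, call 555-123-4567", echo_model))
```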
Related: Can de-identified data be used to train AI under HIPAA?
Cloud-based AI services typically operate under a shared responsibility model. The vendor is responsible for security of the cloud infrastructure, while the customer is responsible for security in the cloud, including how they configure and use the service. A HIPAA-ready platform provides the tools, but using them correctly remains the organization's responsibility.
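One way to picture the customer's side of that split is a pre-flight configuration check run before any PHI workflow is enabled. The settings below are invented for illustration and do not correspond to any particular vendor's API.

```python
# Hypothetical customer-controlled settings for a cloud AI service. The vendor
# secures the infrastructure underneath; these values are the organization's
# responsibility under a shared responsibility model.
service_config = {
    "encryption_at_rest": True,
    "audit_logging_enabled": True,
    "data_retention_days": 30,
    "model_training_on_customer_data": False,  # opt-outs are often customer-controlled
    "public_link_sharing": False,
}

PHI_BASELINE = {
    "encryption_at_rest": True,
    "audit_logging_enabled": True,
    "model_training_on_customer_data": False,
    "public_link_sharing": False,
}

def config_violations(config: dict) -> list:
    """Return the settings that deviate from the organization's PHI baseline."""
    return [key for key, expected in PHI_BASELINE.items() if config.get(key) != expected]

violations = config_violations(service_config)
if violations:
    raise RuntimeError(f"Do not enable PHI workflows; fix: {violations}")
```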
Because of this, healthcare organizations may assume that purchasing a HIPAA-ready product transfers compliance responsibility to the vendor. In reality, even with a robust BAA in place, covered entities remain responsible for ensuring PHI is properly protected. If a breach occurs due to misconfiguration or improper use, the covered entity faces the regulatory consequences, regardless of the platform's capabilities.
Momani addresses this point by noting that determining liability in AI-driven healthcare is complex: when an AI system makes a decision that results in harm or error, it becomes challenging to pinpoint accountability. This uncertainty can affect patient rights and trust in the healthcare system.
Many states have enacted additional privacy laws that may apply to AI systems handling health data. California's Confidentiality of Medical Information Act (CMIA), New York's SHIELD Act, and various state breach notification laws can impose requirements beyond HIPAA. A tool that's technically HIPAA-ready may not address these additional requirements.
Furthermore, if an AI system processes any data from individuals in the European Union, GDPR compliance becomes relevant. Cambridge University Press notes that countries worldwide have their own privacy regulations that impact the use of AI in healthcare, citing Canada's PIPEDA and the UK's Data Protection Act as examples that impose requirements affecting the use of AI with healthcare data. The technical and contractual requirements of these regulations can differ from HIPAA's, and a HIPAA-ready product may not address them.
Cambridge University Press identifies a weakness in the current regulatory landscape, stating that "There are certain scenarios of AI/ML use in the healthcare industry that HIPAA lacks sufficient protection for patients and clarity regarding the responsibilities of AI developers and vendors."
The publication further explains that "This is an important deficiency because a considerable number of AI developers and vendors are technology companies that operate outside the traditional scope of HIPAA's covered entities and business associates framework and thus, patients' PHI is no longer regulated when processed by these companies."
Adding to these concerns, Cambridge University Press highlights another risk: "With massive access of dominant tech companies — such as Meta, Google, and Microsoft — to patients' personal information, there is a significant risk of privacy violation through re-identification of health datasets that are de-identified through the Safe Harbor mechanism (also known as 'data triangulation')." When these companies integrate generative AI into their services or require users to rely on their platforms to access AI capabilities, the risk of re-identification grows.
No, data remains regulated under HIPAA unless it meets strict de-identification standards and cannot reasonably be re-identified.
Yes, HIPAA applies regardless of whether AI is built internally or purchased, as long as it creates, receives, maintains, or transmits PHI.
If those tasks involve PHI, then HIPAA requirements still apply.