HIPAA doesn't specifically address AI technologies; instead, it applies the same privacy and security standards to any tool or system that creates, receives, maintains, or transmits PHI. An AI chatbot, predictive analytics platform, or automated documentation system must meet the same standards as your electronic health record system.
As noted in NIST's Artificial Intelligence Risk Management Framework (AI RMF 1.0), AI systems can be trained on data that changes over time, sometimes significantly and unexpectedly, affecting system functionality in ways that are hard to understand. This characteristic makes traditional risk assessment approaches insufficient on their own.
The HIPAA Security Rule requires covered entities and business associates to conduct regular risk assessments to identify vulnerabilities and implement appropriate safeguards. Specifically, 45 C.F.R. § 164.308(a)(1)(ii)(A) requires organizations to "Conduct an accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of electronic protected health information held by the covered entity or business associate." According to the HHS Guidance on Risk Analysis, "Conducting a risk analysis is the first step in identifying and implementing safeguards that comply with and carry out the standards and implementation specifications in the Security Rule."
The NIST Framework notes that trustworthy AI systems require multiple characteristics working in concert: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed. For healthcare organizations handling PHI, these trustworthiness characteristics align closely with HIPAA's security and privacy requirements.
Step 1: Inventory your AI tools and data flows
Begin by creating an inventory of all AI tools currently in use or under consideration. Document each tool's purpose, the type of data it processes, where data is stored, and who has access.
The HHS guidance states that "an organization must identify where the e-PHI is stored, received, maintained or transmitted." Where does the data originate? How is it transmitted to the AI system? Is it stored locally or in the cloud? Who are the third-party vendors involved? Understanding these data flows is important because HIPAA compliance extends to all organizations that handle PHI, including AI vendors who become business associates under the regulation.
The scope of your analysis must be thorough. As the HHS guidance clarifies, the scope "includes the potential risks and vulnerabilities to the confidentiality, availability and integrity of all e-PHI that an organization creates, receives, maintains, or transmits."
This inventory should also include understanding what software and services are running on your AI systems. As the January 2026 OCR Cybersecurity Newsletter on System Hardening notes, many information systems may include software that has never been used but may contain serious vulnerabilities that could be exploited by an attacker. Creating and maintaining an accurate IT asset inventory helps organizations understand their environment and identify information systems to be hardened.
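Taken together, the inventory and data-flow questions above lend themselves to a structured record. Below is a minimal sketch in Python, assuming a simple in-memory inventory; the schema and example values are illustrative, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class AIAssetRecord:
    """One entry in the AI tool inventory (illustrative schema)."""
    tool_name: str
    purpose: str
    phi_categories: list[str]        # e.g., ["clinical notes", "lab results"]
    data_origin: str                 # where the data originates
    transmission_path: str           # how data reaches the AI system
    storage_location: str            # "on-premises" or a cloud region
    vendor: str | None               # business associate, if any
    baa_in_place: bool               # is a business associate agreement signed?
    authorized_roles: list[str] = field(default_factory=list)

# Example entry (hypothetical values)
inventory = [
    AIAssetRecord(
        tool_name="ClinicalSummarizer",
        purpose="Automated visit-note summarization",
        phi_categories=["clinical notes"],
        data_origin="EHR export",
        transmission_path="TLS 1.2+ API call",
        storage_location="cloud (us-east)",
        vendor="ExampleAIVendor",
        baa_in_place=True,
        authorized_roles=["physician", "compliance_officer"],
    )
]
```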
Step 2: Identify potential threats and vulnerabilities
Identify threats to the confidentiality, integrity, and availability of PHI. For AI tools, common vulnerabilities include inadequate encryption during data transmission, insufficient access controls, lack of audit logging, and insecure application programming interfaces (APIs).
The HHS guidance defines a vulnerability as "a flaw or weakness in system security procedures, design, implementation, or internal controls that could be exercised and result in a security breach." A threat, meanwhile, is defined as "the potential for a person or thing to exercise (accidentally trigger or intentionally exploit) a specific vulnerability."
According to the January 2026 OCR Cybersecurity Newsletter, "The HIPAA Security Rule risk analysis provision requires regulated entities to conduct an accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of all ePHI – this includes risks and vulnerabilities to ePHI from unpatched software." For AI systems, this extends to vulnerabilities in the AI platform itself, its dependencies, and any third-party libraries or services it uses.
Consider AI-specific risks as well. Can the model be manipulated through adversarial attacks to reveal sensitive information? Does the training process adequately de-identify data? Could the AI system memorize and later reproduce PHI in its outputs?
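One practical control for the memorization risk above is to scan model outputs for PHI-like patterns before they leave your environment. The sketch below is deliberately simplified; the regexes cover only a few obvious identifier formats and are no substitute for a full de-identification process:

```python
import re

# Illustrative patterns only; real PHI detection needs far broader coverage.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}

def flag_possible_phi(model_output: str) -> list[str]:
    """Return the names of any PHI-like patterns found in an AI output."""
    return [name for name, pattern in PHI_PATTERNS.items()
            if pattern.search(model_output)]

# Example: a memorized record surfacing in generated text
hits = flag_possible_phi("Patient callback at 555-867-5309, MRN: 00482913.")
if hits:
    print(f"Output quarantined; possible PHI detected: {hits}")
```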
Building diverse assessment teams is important. As the AI RMF 1.0 notes, "Diverse teams contribute to more open sharing of ideas and assumptions about the purposes and functions of technology—making these implicit aspects more explicit."
The HHS guidance identifies several threat categories to consider: "natural threats such as floods, earthquakes, tornadoes, and landslides; human threats including intentional network attacks and unauthorized access; and environmental threats such as power failures." Human factors deserve particular attention. Assess whether staff members understand how to use AI tools appropriately, whether there are clear policies governing AI usage, and whether employees might expose PHI through careless interactions with AI systems.
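Recording each identified threat against one of these categories keeps the assessment systematic. A minimal sketch, assuming a simple in-memory register; the category names mirror the HHS guidance, while the entries are illustrative:

```python
from enum import Enum

class ThreatCategory(Enum):
    NATURAL = "natural"              # floods, earthquakes, tornadoes
    HUMAN = "human"                  # network attacks, unauthorized access
    ENVIRONMENTAL = "environmental"  # power failures

threat_register = [
    {"category": ThreatCategory.HUMAN,
     "threat": "Staff pastes PHI into an unapproved public AI chatbot",
     "affected_asset": "ClinicalSummarizer"},
    {"category": ThreatCategory.HUMAN,
     "threat": "Adversarial prompt extracts memorized training data",
     "affected_asset": "ClinicalSummarizer"},
    {"category": ThreatCategory.ENVIRONMENTAL,
     "threat": "Power failure interrupts on-premises inference server",
     "affected_asset": "on-prem GPU cluster"},
]
```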
Step 3: Assess current security measures
Evaluate the safeguards currently in place to protect PHI when using AI tools. The HIPAA Security Rule categorizes safeguards into administrative, physical, and technical controls, and all three apply to AI implementations.
Administrative safeguards include policies and procedures, workforce training, and incident response plans. Review whether your organization has specific policies addressing AI tool usage, whether employees receive training on HIPAA-compliant AI practices, and whether your business associate agreements cover AI vendors.
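Because the Step 1 inventory already captures vendor and agreement details, one administrative check can be automated against it. A minimal sketch, assuming illustrative vendor records with a `baa_signed` flag:

```python
# Illustrative vendor records; in practice these come from the Step 1 inventory.
ai_vendors = [
    {"tool_name": "ClinicalSummarizer", "vendor": "ExampleAIVendor", "baa_signed": True},
    {"tool_name": "TriageBot", "vendor": "OtherVendor", "baa_signed": False},
]

missing_baa = [v["tool_name"] for v in ai_vendors if not v["baa_signed"]]
for tool in missing_baa:
    print(f"Administrative gap: no BAA on file for {tool}")
```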
Physical safeguards relate to protecting the physical infrastructure where PHI is stored and processed. If your AI tools run on on-premises servers, assess whether these facilities have appropriate access controls, environmental protections, and disposal procedures. For cloud-based AI, verify that your vendor maintains equivalent physical security measures.
Technical safeguards cover encryption methods for data at rest and in transit, authentication and authorization mechanisms, audit logging capabilities, and data integrity controls. According to the HHS guidance, organizations should "assess and document the security measures an entity uses to safeguard e-PHI, whether security measures required by the Security Rule are already in place, and if current security measures are configured and used properly."
The January 2026 OCR Cybersecurity Newsletter notes that security measures often found in operating systems and software intersect with technical safeguard standards and implementation specifications of the HIPAA Security Rule, such as access controls, encryption, audit controls, and authentication.
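Audit controls are one of these technical safeguards that can be illustrated concretely. The sketch below wraps a hypothetical AI inference call so that each access is logged with who, what, and when; `call_model` and its behavior are placeholders, not a real vendor API:

```python
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("phi_audit")
logging.basicConfig(level=logging.INFO)

def call_model(prompt: str) -> str:
    """Placeholder for a real, BAA-covered AI vendor call over TLS."""
    return "model output"

def audited_inference(user_id: str, patient_id: str, prompt: str) -> str:
    """Run an AI inference and record an audit trail entry for the access."""
    timestamp = datetime.now(timezone.utc).isoformat()
    audit_log.info("user=%s patient=%s action=ai_inference time=%s",
                   user_id, patient_id, timestamp)
    return call_model(prompt)

result = audited_inference("dr_smith", "patient_123", "Summarize today's visit.")
```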
System hardening should be part of your security assessment. This includes patching known vulnerabilities, removing or disabling unneeded software and services, and enabling and configuring security measures. As the newsletter notes, implementing a vulnerability management program is one method to identify and mitigate vulnerabilities in a timely manner, and "patching vulnerabilities is not a one-time event. Over time, new vulnerabilities may be identified in software that was already patched, in software that had previously not needed patching, or in the previously applied patches themselves."
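For AI systems built on Python, even enumerating installed packages and versions gives a vulnerability management program something concrete to compare against published advisories. A minimal sketch (the actual advisory matching is better left to a dedicated scanner such as pip-audit):

```python
from importlib.metadata import distributions

# Enumerate installed packages so they can be checked against advisories.
installed = sorted(
    (dist.metadata["Name"], dist.version) for dist in distributions()
)
for name, version in installed:
    print(f"{name}=={version}")
```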
Step 4: Determine the likelihood and impact of threats
Assess both the likelihood that each identified threat could materialize and the potential impact if it does. The HHS guidance requires organizations to "take into account the probability of potential risks to e-PHI" and to consider "the criticality, or impact, of potential risks to confidentiality, integrity, and availability of e-PHI."
Consider your organization's specific context when evaluating likelihood. Factors include the sensitivity of the data processed, the number of users with access, the technical sophistication of potential threat actors, and the security track record of your AI vendors.
Impact assessment should consider regulatory penalties, reputational damage, patient harm, and operational disruption.
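A simple likelihood-by-impact matrix can make these judgments consistent across assessors. The sketch below assumes a three-level ordinal scale; the thresholds and labels are illustrative choices, not values mandated by HHS:

```python
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_level(likelihood: str, impact: str) -> str:
    """Combine ordinal likelihood and impact into an overall risk level."""
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# Example: a moderately likely threat with severe patient-harm impact
print(risk_level("medium", "high"))  # -> "high"
```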
Step 5: Implement and document risk mitigation strategies
The HHS guidance states that organizations must "implement reasonable and appropriate security measures to protect against reasonably anticipated threats or hazards to the security or integrity of e-PHI." HIPAA allows flexibility in how organizations address risks, requiring safeguards that are "reasonable and appropriate" given the organization's size, complexity, and capabilities.
The January 2026 OCR Cybersecurity Newsletter reinforces this requirement, stating that "The Security Rule risk management provision requires regulated entities to implement security measures to reduce risks and vulnerabilities to a reasonable and appropriate level."
Strong governance structures support effective risk management. The NIST Framework observes that "strong governance can drive and enhance internal practices and norms to facilitate organizational risk culture."
When implementing changes to AI systems or their configurations, the January 2026 OCR Cybersecurity Newsletter advises that "when environmental or operational changes are made that affect the security of ePHI, the Security Rule evaluation standard requires regulated entities to perform technical and non-technical evaluations of their security safeguards to demonstrate and document compliance with their policies and the requirements of the Security Rule."
Documentation is crucial throughout this process. The HHS guidance states that "the output should be documentation of the assigned risk levels and a list of corrective actions to be performed to mitigate each risk level." Maintain detailed records of your risk assessment methodology, identified risks, implemented safeguards, and the rationale for your decisions.
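That documentation can be generated directly from the assessment itself. Here is a minimal sketch that writes assigned risk levels, corrective actions, and rationale to a dated JSON file (the field names are illustrative):

```python
import json
from datetime import date

assessment = [
    {"risk": "PHI memorization in model outputs",
     "risk_level": "high",
     "corrective_action": "Deploy output scanning and review vendor contract",
     "rationale": "Generative model trained on clinical notes",
     "owner": "security_officer"},
]

# Date-stamped output supports the record-keeping the Security Rule expects.
outfile = f"ai_risk_assessment_{date.today().isoformat()}.json"
with open(outfile, "w") as f:
    json.dump(assessment, f, indent=2)
```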
Step 6: Establish ongoing monitoring and reassessment
The HHS guidance is clear that "the risk analysis process should be ongoing" and that organizations should "conduct continuous risk analysis to identify when updates are needed." As AI technology evolves, new threats emerge, and your organization's use of AI expands, you must regularly reassess risks and update your safeguards accordingly.
The guidance notes that "some covered entities may perform these processes annually or as needed depending on the circumstances of their environment." Establish a schedule for periodic reviews, at minimum annually.
Additionally, the HHS guidance recommends that "if the covered entity has experienced a security incident, has had change in ownership, turnover in key staff or management, is planning to incorporate new technology, the potential risk should be analyzed."
The January 2026 OCR Cybersecurity Newsletter notes the evolving nature of cybersecurity threats, stating that "As new threats and vulnerabilities evolve and are discovered, and attackers vary and improve their tactics, techniques, and procedures, regulated entities need to remain vigilant to ensure that their implemented security solutions remain effective."
Implement continuous monitoring mechanisms to detect potential security incidents involving AI tools. The newsletter reinforces that evaluating the ongoing effectiveness of implemented security measures is important, and that the periodic review and modification, as needed, of security measures implemented under the HIPAA Security Rule is required for regulated entities to maintain protection of ePHI.
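One lightweight monitoring control is to flag AI assets whose last risk review has gone stale. A minimal sketch, assuming each inventory entry carries a `last_reviewed` date and the annual cycle suggested above:

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=365)  # at minimum annually

assets = [
    {"tool_name": "ClinicalSummarizer", "last_reviewed": date(2025, 1, 15)},
    {"tool_name": "TriageBot", "last_reviewed": date(2024, 3, 2)},
]

overdue = [a["tool_name"] for a in assets
           if date.today() - a["last_reviewed"] > REVIEW_INTERVAL]
if overdue:
    print(f"Risk reassessment overdue for: {', '.join(overdue)}")
```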
FAQs
Does using de-identified data remove HIPAA risk assessment requirements for AI tools?
No, organizations must still assess re-identification risk and downstream data handling even when AI systems use de-identified data.
How does HIPAA risk assessment differ for generative AI versus traditional analytics tools?
Generative AI introduces additional risks related to data memorization, prompt inputs, and output disclosure that demand safeguards beyond those needed for static analytics models.
Are open-source AI models automatically non-compliant with HIPAA?
No, but organizations must evaluate how the model is hosted, configured, and governed to ensure PHI is not exposed or reused improperly.
How should organizations handle AI models that continuously learn from new data?
Continuously learning systems require more frequent reassessment because changes in model behavior can introduce new risks to ePHI.