
Which states require disclosure of AI use in treatment?

Written by Tshedimoso Makhene | December 23, 2025

Artificial intelligence (AI) is rapidly reshaping healthcare, from diagnostic imaging and clinical decision support to patient communications and administrative workflows. The World Economic Forum says, “With 4.5 billion people currently without access to essential healthcare services and a health worker shortage of 11 million expected by 2030, AI has the potential to help bridge that gap and revolutionize global healthcare.” However, as these technologies become more embedded in care delivery, regulators are increasingly focused on one key question: should patients be told when AI is involved in their treatment?

In the United States, there is no single federal law that broadly requires healthcare providers to disclose AI use in treatment decisions. Instead, a patchwork of state-level laws and regulations is emerging, some of which impose explicit disclosure requirements, while others mandate transparency indirectly through restrictions on how AI may be used. 

 

Why AI disclosure in healthcare matters

As artificial intelligence becomes more common in healthcare, transparency about its use is essential for protecting patient rights, maintaining trust, and supporting ethical care delivery. These concerns mirror findings in other sectors. In the article Why AI Disclosure Could Make or Break Customer Trust, published by CX Today, the authors note that people expect to be told when AI is involved in interactions that affect them, particularly when decisions are complex or personal. When AI use is hidden, trust can quickly erode, even if the technology performs accurately.

In healthcare, this issue is closely tied to HIPAA principles, particularly the concepts of transparency, accountability, and patient control over health information. While HIPAA does not explicitly regulate artificial intelligence, it does require covered entities to communicate clearly about how protected health information (PHI) is used and safeguarded. When AI systems process, analyze, or generate clinical information using PHI, failing to disclose their involvement may undermine HIPAA’s intent by limiting a patient’s understanding of how their data informs care decisions.

AI disclosure also aligns with the ethical principle of informed consent. Patients have the right to understand material factors that influence their diagnosis, treatment, or care communications. Just as clinicians disclose the use of new procedures, medical devices, or experimental therapies, the meaningful use of AI, especially in clinical decision support or patient-facing communications, should be disclosed so patients can ask questions, assess risks, and participate actively in their care.

From an ethical perspective, transparency reinforces beneficence and nonmaleficence by ensuring that AI tools are used to support, rather than replace, professional judgment. Disclosure makes it clear that responsibility for care remains with licensed healthcare professionals, reducing the risk that patients view AI-generated information as unquestionable or authoritative without human oversight.

Finally, as CX Today emphasizes, disclosure is not intended to discourage the use of AI but to ensure it is introduced honestly and responsibly. In healthcare, where trust is at the core of treatment adherence, patient engagement, and long-term outcomes, clear communication about AI involvement helps safeguard autonomy, supports ethical practice, and strengthens confidence in both technology and the clinicians who use it.

See also: Artificial Intelligence in healthcare

 

States requiring AI disclosure in healthcare-related decisions

While only a few states have laws that directly mandate disclosure of AI use in clinical treatment, a broader set of states is adopting regulatory measures that require transparency about AI in healthcare administrative decisions, insurer actions, and patient communications. These laws, referenced in the Morgan Lewis article (AI)n’t Done Yet: States Continue to Craft Rules to Manage AI Tools in Healthcare, show how states are using disclosure requirements as part of broader AI governance frameworks.

 

California: Payor transparency and patient communication disclosure

California offers one of the most developed examples of pairing AI disclosure requirements with healthcare oversight. The state enacted:

  • AB 3030, which requires clinics, physician offices, and health facilities that use generative AI to create patient communications to include a clear disclaimer that the message was created by AI and to provide instructions on how patients can contact a human healthcare professional.
  • SB 1120, which applies to healthcare service plans and disability insurers. This law requires plans that use AI for utilization review or utilization management to implement safeguards that ensure fair use and compliance, including disclosure of AI use, and it guarantees that medical necessity decisions are made by licensed professionals rather than solely by automated systems.

These provisions require transparency both when AI is used to communicate with patients and when AI influences decisions that directly affect access to care and coverage.
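To make the AB 3030 requirement concrete, here is a minimal Python sketch of how a system might attach an AI-use disclaimer to a generative-AI patient message before it is sent. The function name, disclaimer wording, and contact placeholder are illustrative assumptions, not statutory language; an actual disclaimer must meet AB 3030’s placement and prominence rules for each communication channel.

```python
# Hypothetical sketch of an AB 3030-style disclaimer. The wording and
# helper below are illustrative assumptions, not statutory text.

AI_DISCLAIMER = (
    "This message was generated by artificial intelligence. "
    "To speak with a human healthcare professional, call {contact}."
)

def add_ai_disclaimer(message_body: str, contact: str) -> str:
    """Return the patient message with a clear AI-use disclaimer prepended."""
    return f"{AI_DISCLAIMER.format(contact=contact)}\n\n{message_body}"

# Example: an AI-drafted message gets the disclaimer before delivery.
print(add_ai_disclaimer("Your lab results are now available to view.", "555-0100"))
```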

 

Colorado: ‘High-Risk’ AI, safeguards, and disclosure

Colorado’s approach, under SB24-205, applies to AI systems defined as “high-risk” because they materially influence significant decisions such as approval or denial of healthcare services. In this context, entities that deploy or develop such high-risk AI systems must take steps to protect consumers from algorithmic discrimination, including implementing safeguards and disclosing AI use. Although this law is broader than clinical treatment alone, its transparency requirements apply directly to decisions that impact patients’ access to care. 

 

Utah: Provider and regulated services disclosure

Utah has layered disclosure obligations that intersect with healthcare:

  • HB 452 specifically requires suppliers of mental health chatbots to disclose that AI is being used.
  • Other state statutes, SB 149 and SB 226, extend disclosure requirements to “regulated occupations,” which include healthcare professionals. These provisions require service providers to tell patients when generative AI systems are used in delivering regulated services.

This dual focus ensures that both administrative systems and clinical tools affecting mental health and therapeutic interactions carry clear transparency mechanisms.

 

Additional states adopting related AI disclosure or transparency rules

Beyond those listed above, the Morgan Lewis article identified a range of other state efforts to mandate transparency in healthcare-related AI uses, especially in insurance, utilization review, and benefit determination, where AI systems can materially affect patient care outcomes:

  • Massachusetts has proposed protections requiring that carriers or utilization review organizations using AI tools for utilization review or claims adjudication provide disclosures and ensure determinations of medical necessity are made by licensed professionals. 
  • Rhode Island requires insurers to disclose when AI is used to manage claims or coverage and to ensure that adverse determinations are reviewed by a human healthcare professional.
  • Tennessee similarly mandates safeguards and disclosures related to AI use in utilization review and management decisions. 
  • New York has multiple bills requiring plans that use AI for utilization review or utilization management to implement safeguards and provide disclosures about their AI use to enrollees. 

These measures show that even where a state does not require disclosure at the point of clinical diagnosis or treatment, regulators are pushing for visibility and human accountability wherever AI influences coverage decisions, claims outcomes, or patient communications in ways that materially affect care.

 

Implications for healthcare providers and organizations

Healthcare organizations operating across multiple states face growing compliance challenges. Key considerations include:

  • Updating consent forms and patient notices to reflect AI use
  • Training clinicians and staff on when and how to disclose AI involvement
  • Ensuring AI tools are explainable and auditable
  • Coordinating disclosure practices across clinical, administrative, and digital platforms

Failing to comply with state AI disclosure laws can expose organizations to regulatory enforcement, litigation risk, and reputational harm.
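To support the “explainable and auditable” consideration above, the following Python sketch records each AI-involved activity as an append-only audit entry that can later be used to verify disclosures were made. The schema, field names, and JSON-lines format are assumptions for illustration, not a prescribed compliance format; a real implementation would need to align with state requirements and organizational record-keeping policies.

```python
# Hypothetical sketch: recording when and how AI was involved in a
# patient-facing activity, so disclosures can be verified later.
# Field names and the JSON-lines format are illustrative assumptions.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDisclosureRecord:
    patient_id: str             # internal patient identifier
    activity: str               # e.g., "patient_communication", "utilization_review"
    ai_system: str              # which AI tool was involved
    disclosed_to_patient: bool  # whether the required disclosure was made
    human_reviewer: str         # licensed professional responsible for the decision
    timestamp: str              # UTC time of the event

def log_ai_use(patient_id: str, activity: str, ai_system: str,
               disclosed: bool, reviewer: str) -> str:
    """Serialize one AI-use event as a JSON line for an append-only audit log."""
    record = AIDisclosureRecord(
        patient_id=patient_id,
        activity=activity,
        ai_system=ai_system,
        disclosed_to_patient=disclosed,
        human_reviewer=reviewer,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))

# Example: log an AI-drafted patient message that included a disclaimer.
print(log_ai_use("pt-001", "patient_communication", "gen-ai-draft-tool",
                 True, "Dr. Example"))
```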

 

See also: HIPAA Compliant Email: The Definitive Guide (2025 Update)

 

FAQs

What does ‘AI disclosure’ mean in healthcare?

AI disclosure refers to informing patients, members, or consumers when artificial intelligence systems are used in healthcare-related decisions. This may include clinical communications, diagnostic or treatment support tools, utilization review, claims adjudication, or coverage determinations. Disclosure is intended to promote transparency, accountability, and patient trust.

 

Which healthcare activities most commonly trigger AI disclosure requirements?

According to Morgan Lewis, disclosure requirements most often apply when AI is used for:

  • Patient-facing clinical communications (e.g., AI-generated messages or explanations)
  • Utilization review and utilization management
  • Claims processing and coverage determinations
  • Mental health or therapeutic interactions involving AI tools

These activities are considered high-impact because they directly affect patient access to care and understanding of their health information.

 

What risks do healthcare organizations face if they fail to disclose AI use?

Failure to comply with state AI disclosure laws can result in:

  • Regulatory enforcement actions and penalties
  • Increased litigation risk
  • Reputational damage and loss of patient trust
  • Ethical concerns related to autonomy and transparency