Artificial intelligence (AI) is rapidly reshaping healthcare, from diagnostic imaging and clinical decision support to patient communications and administrative workflows. The World Economic Forum says, “With 4.5 billion people currently without access to essential healthcare services and a health worker shortage of 11 million expected by 2030, AI has the potential to help bridge that gap and revolutionize global healthcare.” However, as these technologies become more embedded in care delivery, regulators are increasingly focused on one key question: should patients be told when AI is involved in their treatment?
In the United States, there is no single federal law that broadly requires healthcare providers to disclose AI use in treatment decisions. Instead, a patchwork of state-level laws and regulations is emerging, some of which impose explicit disclosure requirements, while others mandate transparency indirectly through restrictions on how AI may be used.
As artificial intelligence becomes more common in healthcare, transparency about its use is essential for protecting patient rights, maintaining trust, and supporting ethical care delivery. These concerns mirror findings in other sectors. In the article Why AI Disclosure Could Make or Break Customer Trust, published by CX Today, the authors note that people expect to be told when AI is involved in interactions that affect them, particularly when decisions are complex or personal. When AI use is hidden, trust can quickly erode, even if the technology performs accurately.
In healthcare, this issue is closely tied to HIPAA principles, particularly the concepts of transparency, accountability, and patient control over health information. While HIPAA does not explicitly regulate artificial intelligence, it does require covered entities to communicate clearly about how protected health information (PHI) is used and safeguarded. When AI systems process, analyze, or generate clinical information using PHI, failing to disclose their involvement may undermine HIPAA’s intent by limiting a patient’s understanding of how their data informs care decisions.
AI disclosure also aligns with the ethical principle of informed consent. Patients have the right to understand material factors that influence their diagnosis, treatment, or care communications. Just as clinicians disclose the use of new procedures, medical devices, or experimental therapies, the meaningful use of AI, especially in clinical decision support or patient-facing communications, should be disclosed so patients can ask questions, assess risks, and participate actively in their care.
From an ethical perspective, transparency reinforces beneficence and nonmaleficence by ensuring that AI tools are used to support, rather than replace, professional judgment. Disclosure makes it clear that responsibility for care remains with licensed healthcare professionals, reducing the risk that patients view AI-generated information as unquestionable or authoritative without human oversight.
Finally, as CX Today emphasizes, disclosure is not intended to discourage the use of AI but to ensure it is introduced honestly and responsibly. In healthcare, where trust is at the core of treatment adherence, patient engagement, and long-term outcomes, clear communication about AI involvement helps safeguard autonomy, supports ethical practice, and strengthens confidence in both technology and the clinicians who use it.
See also: Artificial Intelligence in healthcare
While only a few states have laws that directly mandate disclosure of AI use in clinical treatment, a broader set of states is adopting regulatory measures that require transparency about AI in healthcare administrative decisions, insurer actions, and patient communications. These laws, referenced in the Morgan Lewis article (AI)n’t Done Yet: States Continue to Craft Rules to Manage AI Tools in Healthcare, show how states are using disclosure requirements as part of broader AI governance frameworks.
California offers one of the most developed examples of combining AI disclosure requirements with healthcare oversight. The state enacted:

- AB 3030, which requires health facilities, clinics, and physician practices that use generative AI to produce patient communications about clinical information to include a disclaimer that the message was AI-generated, along with clear instructions for reaching a human provider, unless a licensed clinician reviews the communication first.
- SB 1120, which requires that medical necessity determinations in utilization review remain with licensed physicians and other qualified health professionals, so that AI tools cannot serve as the sole basis for denying, delaying, or modifying care.

These provisions promote transparency when AI is used to communicate with patients and when AI influences decisions that directly affect access to care and coverage.
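To illustrate the pattern (not the statute's exact requirements), here is a minimal Python sketch of how a patient-messaging system might attach an AB 3030-style disclaimer to AI-drafted communications. The disclaimer wording, field names, and review-exemption logic are illustrative assumptions, not legal guidance.

```python
from dataclasses import dataclass

# Illustrative disclaimer text only; required wording should come from counsel.
AI_DISCLAIMER = (
    "This message was generated by artificial intelligence. "
    "To reach a member of your care team, please contact our office."
)

@dataclass
class PatientMessage:
    body: str
    ai_generated: bool        # drafted by a generative AI tool
    clinician_reviewed: bool  # read and approved by a licensed clinician

def apply_disclosure(msg: PatientMessage) -> str:
    """Prepend an AI-use disclaimer when one is likely required.

    Mirrors the general AB 3030 pattern: AI-generated clinical
    communications carry a disclaimer unless a licensed provider
    reviewed the message before it was sent.
    """
    if msg.ai_generated and not msg.clinician_reviewed:
        return f"{AI_DISCLAIMER}\n\n{msg.body}"
    return msg.body

# An unreviewed AI draft gets the disclaimer; a reviewed one does not.
draft = PatientMessage("Your lab results are within normal limits.", True, False)
print(apply_disclosure(draft))
```

In practice, the review exemption and the exact disclaimer language would be driven by the statute and organizational policy rather than hardcoded strings.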
Colorado’s approach, under SB24-205, applies to AI systems defined as “high-risk” because they materially influence significant decisions such as approval or denial of healthcare services. In this context, entities that deploy or develop such high-risk AI systems must take steps to protect consumers from algorithmic discrimination, including implementing safeguards and disclosing AI use. Although this law is broader than clinical treatment alone, its transparency requirements apply directly to decisions that impact patients’ access to care.
Utah has layered disclosure obligations that intersect with healthcare:

- Utah's Artificial Intelligence Policy Act (SB 149) requires that consumers be told when they are interacting with generative AI, and imposes heightened, proactive disclosure duties on regulated occupations, including licensed healthcare professionals.
- HB 452 regulates mental health chatbots, requiring clear disclosure that users are communicating with AI rather than a human and adding safeguards around these therapeutic interactions.

This dual focus ensures that administrative systems and clinical tools alike, including those that affect mental health and therapeutic interactions, include clear transparency mechanisms.
Beyond those listed above, the Morgan Lewis article identifies a range of other state efforts to mandate transparency in healthcare-related AI uses, especially in insurance, utilization review, and benefit determination, where AI systems can materially affect patient care outcomes.
Even where a state does not require disclosure at the point of clinical diagnosis or treatment, these measures push for visibility and human accountability where AI influences coverage decisions, claims outcomes, or patient communications in ways that materially affect care.
Healthcare organizations operating across multiple states face growing compliance challenges, because disclosure triggers, required language, and enforcement mechanisms vary from one jurisdiction to the next.
Failing to comply with state AI disclosure laws can expose organizations to regulatory enforcement, litigation risk, and reputational harm.
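One practical response, sketched below under stated assumptions, is to centralize each state's disclosure rules in a single auditable lookup that patient-facing systems consult before sending AI-generated content. The rule entries and function are hypothetical simplifications, not statements of what any state's law actually requires.

```python
# Hypothetical, simplified rules table. Entries are illustrative only and
# are not statements of what any state's law actually requires.
DISCLOSURE_RULES = {
    ("CA", "patient_communication"): "Disclose generative AI use unless clinician-reviewed",
    ("CA", "utilization_review"): "Licensed physician must make medical necessity determinations",
    ("CO", "utilization_review"): "High-risk AI: disclose use and guard against algorithmic discrimination",
    ("UT", "patient_communication"): "Proactively disclose generative AI use in regulated occupations",
}

def required_disclosure(state: str, activity: str) -> str | None:
    """Return the recorded disclosure rule, or None if no entry exists.

    A missing entry means 'no rule recorded here', not 'no legal duty';
    the table must be kept current as state laws change.
    """
    return DISCLOSURE_RULES.get((state.upper(), activity))

print(required_disclosure("ca", "patient_communication"))
print(required_disclosure("TX", "claims_adjudication"))  # None: nothing recorded yet
```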
See also: HIPAA Compliant Email: The Definitive Guide (2025 Update)
AI disclosure refers to informing patients, members, or consumers when artificial intelligence systems are used in healthcare-related decisions. This may include clinical communications, diagnostic or treatment support tools, utilization review, claims adjudication, or coverage determinations. Disclosure is intended to promote transparency, accountability, and patient trust.
According to Morgan Lewis, disclosure requirements most often apply when AI is used for:

- Patient-facing communications, including AI-generated messages and chat interactions
- Diagnostic or treatment support
- Utilization review and prior authorization
- Claims adjudication and coverage determinations
These activities are considered high-impact because they directly affect patient access to care and understanding of their health information.
Failure to comply with state AI disclosure laws can result in:

- Regulatory enforcement actions
- Litigation risk
- Reputational harm and loss of patient trust