
AI, informed consent, and patient autonomy

Increasingly, providers are using artificial intelligence (AI) in clinical care, from automated electrocardiogram interpretation to algorithmic risk prediction and synthesized clinical knowledge bases. These AI-driven tools now inform how clinicians diagnose, treat, and monitor patients.

As JAMA Network’s study, “Ethical Obligations to Inform Patients About Use of AI Tools,” states, the “permeation of artificial intelligence (AI) tools into healthcare tests traditional understandings of what patients should be told about their care.” While AI systems influence clinical outcomes, they are often invisible to the very patients whose lives they affect.

Historically, medicine has incorporated decision support tools without requiring explicit patient disclosure. Reference texts, clinical calculators, and standardized guidelines have long shaped physician judgment behind the scenes.

However, AI introduces new complexities: it learns from datasets, generates probabilistic recommendations, and may rely on opaque reasoning processes that even its developers cannot fully explain. These features raise questions about accuracy and bias, as well as about transparency and patient autonomy.

As clinicians grow accustomed to algorithmic assistance, AI may be treated as merely another instrument in the clinical toolkit. Yet this framing ignores the extent to which AI can reconfigure decision-making authority, moving influence from individual clinicians to data-driven systems. When such changes happen without patient awareness, the principles of informed consent are placed under strain.

 

The purpose of informed consent

Informed consent exists to protect patients from being acted upon without their knowledge or agreement. Patients must have access to information that would reasonably affect their decision to accept or decline care. However, in practice, consent conversations usually focus on procedures and risks, leaving decision-making processes unexplored.

As the JAMA Network study observes, “Despite the general importance of informed consent, decision support tools (e.g., automatic electrocardiogram readers, rule-based risk classifiers, and UpToDate summaries) are not usually discussed with patients even though they affect treatment decisions.”

This historical practice reflects an assumption that decision support mechanisms are not material to patient choice. However, AI challenges this assumption. When algorithms affect diagnoses or treatment recommendations, they become active participants in care delivery. The question, then, is whether withholding information about AI use constitutes a failure of informed consent.

The legal doctrine of informed consent requires disclosure of information that is material to a reasonable patient’s decision to accept a health care service. Materiality is, therefore, determined by what patients themselves would find relevant. Increasingly, empirical evidence suggests that AI use is material in this sense.

Go deeper: The function of informed consent in healthcare

 

Materiality and patient perspectives

The legal standard for informed consent typically requires disclosure of information that would be considered material by a reasonable patient. Whether AI use meets this threshold depends, in part, on patient attitudes and expectations.

The same JAMA Network study found that “60% of US adults said they would be uncomfortable with their physician relying on AI.” At the same time, “70% to 80% had low expectations AI would improve important aspects of their care.” These responses suggest a degree of caution toward AI-guided medicine. Trust in institutional governance also appears limited: “only one-third trusted health care systems to use AI responsibly.”

Furthermore, “63% said it was very true that they would want to be notified about the use of AI in their care.” This finding implies that, for many patients, AI involvement is information they consider relevant to their health care decisions. If patients would “think differently about care if they knew it was guided by AI,” then nondisclosure may warrant closer scrutiny under existing consent standards.

 

The problem with disclosure

Routine disclosure of AI use presents practical challenges: clinical encounters are time-constrained, and clinicians already face competing demands when relaying complex information. There is also concern that explanations of AI systems may be confusing or misinterpreted, particularly given the technical nature of many algorithms.

However, informed consent does not require an exhaustive technical explanation. Rather, it requires that information be presented in a manner that is understandable and relevant to the patient’s decision. Disclosure could therefore address the role AI plays in care, its purpose, and its limitations, without delving into technical detail. The challenge lies in developing communication strategies that are accurate, proportionate, and feasible within clinical workflows.

 

Informed consent is an ongoing dialogue

AI further complicates consent because models may be updated, retrained, or applied in new contexts over time. A one-time disclosure may not adequately capture these changes. Health systems therefore need mechanisms for continued communication with patients, ones that allow information about AI use to be shared while safeguarding patient privacy. This ongoing dialogue keeps patients informed about how their data is used, so they can make informed decisions about their participation in AI-driven healthcare.

 

How HIPAA compliant email can help

Since discussions about AI often involve explanations of how patient data are used, analyzed, or shared, these topics may inherently involve protected health information (PHI). HIPAA requires that covered entities, including healthcare providers, safeguard individuals’ PHI during transmission.

As such, healthcare providers must use HIPAA compliant email to communicate information about AI. These email solutions use advanced encryption and access controls to uphold federal privacy and security regulations.
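To make the encryption-in-transit idea concrete, here is a minimal sketch in Python that refuses to send a message over an unencrypted connection. It is illustrative only: the host, addresses, and credentials are placeholders, and a HIPAA compliant platform like Paubox layers on much more than this, including at-rest protection and access controls.

```python
# Illustrative sketch only: require TLS before any message content leaves
# the sender. Host, addresses, and credentials below are placeholders,
# not real accounts or any platform's API.
import smtplib
import ssl
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "provider@clinic.example"        # placeholder sender
msg["To"] = "patient@mail.example"             # placeholder recipient
msg["Subject"] = "How AI is used in your care"
msg.set_content("Plain-language disclosure of AI involvement goes here.")

context = ssl.create_default_context()         # verifies the server certificate

with smtplib.SMTP("smtp.clinic.example", 587) as server:  # placeholder host
    server.starttls(context=context)  # upgrade to TLS; raises if unsupported
    server.login("provider@clinic.example", "app-password")  # placeholder creds
    server.send_message(msg)          # the message only travels encrypted
```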

Using HIPAA compliant email also allows providers to share written information outside the clinical encounter, giving patients the opportunity to review disclosures at their own pace. This may be particularly relevant given that “63% said it was very true that they would want to be notified about the use of AI in their care.” Written communication can also support consistency, ensuring that explanations are accurate and aligned with institutional policies.

 

Documentation and accountability

HIPAA compliant email also supports documentation. As AI-assisted care becomes more common, questions about disclosure and accountability could increase. Secure email systems provide records of what information was shared and when, which may be relevant in evaluating whether informed consent obligations were met.
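As a hypothetical illustration of that accountability angle, the sketch below shows the kind of disclosure record a health system might keep alongside its secure email logs. The field names and structure are assumptions made for illustration, not drawn from any specific platform.

```python
# Hypothetical disclosure audit record; all field names are illustrative.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DisclosureRecord:
    patient_id: str   # internal identifier for the patient
    message_id: str   # ID of the secure email that carried the disclosure
    topic: str        # what was disclosed, e.g. an AI tool used in care
    sent_at: str      # ISO 8601 timestamp of transmission
    delivered: bool   # delivery confirmation from the mail system

record = DisclosureRecord(
    patient_id="pt-0042",
    message_id="msg-2024-06-15-001",
    topic="AI-assisted electrocardiogram interpretation",
    sent_at=datetime.now(timezone.utc).isoformat(),
    delivered=True,
)

# A dated, structured record of what was shared and when -- the kind of
# evidence that helps show informed consent obligations were met.
print(json.dumps(asdict(record), indent=2))
```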

Furthermore, given that “only one-third trusted health care systems to use AI responsibly,” transparency may address patient concerns, even if it does not eliminate skepticism. Clear, factual communication about AI use, limitations, and governance structures may help align institutional practices with patient expectations.

For example, providing detailed information on how AI algorithms are developed, tested, and monitored can help build trust and reassure patients about the safety and reliability of their healthcare data.

Additionally, involving patients in decision-making about AI implementation can empower them to take an active role in their own care, respecting patient values and preferences and supporting better health outcomes.

 

Integrating ethics, law, and infrastructure

In the context of AI, HIPAA compliance intersects with ethical considerations. Secure communication infrastructure allows health systems to operationalize principles of informed consent and respect for autonomy in technologically complex environments. When patients express a desire to be informed about AI involvement, health systems may need to adapt existing consent practices accordingly.

For example, providers can use HIPAA compliant email to explain when AI algorithms are being used in a patient’s care and to obtain explicit consent for that involvement. Additionally, health systems can establish clear policies for how patient data is collected, stored, and shared to promote transparency and trust in the use of AI technologies.

 

Trust and public health implications of AI

According to the European Journal of Public Health’s study, “Trust, Democracy and Public Health”: “Public health thrives on trust. Trust in science, trust in institutions, trust in each other.” These forms of trust shape whether individuals and communities accept health interventions, comply with recommendations, and engage constructively with health systems.

The erosion of trust has measurable consequences for population health. As noted, “It is not a coincidence that the erosion of trust goes hand in hand with the erosion of health: we see it in vaccine hesitancy, in the rise of disinformation, and in the backlash against equity-oriented policies.” These patterns are relevant to AI-enabled care, where limited transparency and unclear accountability may contribute to uncertainty or resistance if patients are not adequately informed.

For example, if patients perceive biases in the technology, such as algorithms favoring certain demographics over others, they may be less likely to trust the recommendations or decisions made by AI systems. This lack of trust can ultimately limit the effectiveness and acceptance of AI-enabled care in improving health outcomes.

 

Democracy and AI

While public health frameworks frequently address social and economic determinants, the study notes, “We often talk about determinants of health—but rarely do we name democracy as one of the most fundamental.” Democratic principles like transparency, accountability, and participation influence how health technologies are perceived and legitimized. In clinical settings, these principles are reflected in practices that support informed consent and meaningful patient communication.

Within this context, the way AI systems are introduced, explained, and governed affects public confidence in health institutions. As emphasized in the journal, “this is not only about money. It is about values.” Translating these values into practice requires governance mechanisms that support ethical use, protect patient autonomy, and reinforce institutional credibility.

Moreover, providers can use HIPAA compliant email to survey patient satisfaction with AI technologies and continuously improve and adapt their practices. Doing so upholds federal regulations and demonstrates transparency and patient-centered care.

 

FAQs

Do providers need patient consent for HIPAA compliant emails?

Yes, a provider must get explicit patient consent before sharing their PHI through HIPAA compliant email.

Learn more: A HIPAA consent form template that's easy to share

 

Are standard emails secure for discussing sensitive healthcare information?

No, standard emails do not provide the necessary encryption to protect sensitive healthcare information from potential breaches. So, providers must use a HIPAA compliant email platform, like Paubox, to safeguard patients' protected health information (PHI) during transmission and at rest.

 

Can AI be integrated into HIPAA compliant emails?

Yes, AI-powered features can be integrated with HIPAA compliant emailing platforms, like Paubox, to automate processes like patient consent management and send personalized emails while maintaining HIPAA compliance.

Read also: Support the HHS's AI strategic plan with HIPAA compliant email
