
Risk of ungoverned AI use in healthcare

Artificial intelligence (AI) can improve healthcare by supporting more accurate diagnoses, streamlining workflows, and enhancing patient experiences. The study Artificial intelligence in healthcare: transforming the practice of medicine notes, “The application of technology and artificial intelligence (AI) in healthcare has the potential to address some of these supply-and-demand challenges. The increasing availability of multi-modal data (genomics, economic, demographic, clinical and phenotypic) coupled with technology innovations in mobile, internet of things (IoT), computing power and data security herald a moment of convergence between healthcare and technology to fundamentally transform models of healthcare delivery through AI-augmented healthcare systems.” However, these benefits are not guaranteed, especially when AI is used without strong governance.

Weak policies, poor oversight, inadequate security, and lack of ethical safeguards can introduce new risks, threatening patient safety, fairness, privacy, and regulatory compliance. When AI is poorly governed, it doesn’t just fall short of expectations; it can actively cause harm to healthcare organizations and the patients they serve.

 

Implementing AI responsibly in healthcare

“It’s all about finding the right balance between innovation and responsibility. AI can absolutely help healthcare organizations improve care and efficiency, but it has to be done carefully,” says David Holt, owner of Holt Law LLC. “One good approach is to use a structured framework—like a readiness model—that helps organizations figure out where they stand and how to grow their use of AI safely. It also helps to build diverse teams that include not just tech experts, but also clinicians, patients, and people who understand the ethical and operational side of healthcare.” This emphasizes that governance should come before expansion, allowing organizations to understand where AI can be safely introduced, what safeguards are missing, and how adoption can scale responsibly over time. 

Furthermore, Holt notes that “AI tools should be evaluated and monitored regularly, not just when they’re launched, to make sure they’re still working well and not causing new problems. On the technical side, things like strong encryption, secure access, and detailed logs of who is accessing what are key. Any vendor touching patient data should sign a Business Associate Agreement, so everyone knows the rules and responsibilities. And finally, starting small—like using AI for low-risk, non-clinical tasks—is a smart way to test the waters before expanding to more sensitive uses. With the right strategy, it’s absolutely possible to get the benefits of AI while still protecting patient privacy and staying compliant with HIPAA.”

 

The dangers of ungoverned AI

Bias and inequity in clinical decision support

Artificial intelligence (AI) systems learn from past data, but if that data reflects existing inequities or underrepresents particular demographics, the AI may replicate or amplify those biases.

Studies show that algorithmic bias in clinical AI can lead to disparate outcomes across demographic groups, particularly for women and ethnic minorities. For example, The Guardian reports that “Artificial intelligence tools used by more than half of England’s councils are downplaying women’s physical and mental health issues and risk creating gender bias in care decisions, research has found.”

Additionally, research by the World Health Organization (WHO) reveals that when AI tools lack diverse and representative training data, they risk reinforcing health disparities, making healthcare less equitable instead of more so. 

Governance frameworks that include bias audits, performance testing across populations, and diverse stakeholder involvement (e.g., ethicists, affected community representatives) help ensure that AI tools serve all patients fairly. Without these safeguards, biased AI systems can unintentionally harm groups already facing healthcare disadvantages.
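As a simplified, hypothetical sketch (not a prescribed audit standard), the Python snippet below shows one way an organization might compare a model’s sensitivity across demographic groups and flag groups that fall behind. The data, column names, and 0.10 gap threshold are invented for illustration.

```python
import pandas as pd

# Hypothetical audit data: one row per patient with the model's prediction,
# the ground-truth outcome, and a demographic group label.
df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B", "A"],
    "prediction": [1, 0, 1, 0, 0, 1, 0, 1],
    "actual":     [1, 0, 0, 1, 0, 1, 1, 1],
})

def group_metrics(frame: pd.DataFrame) -> pd.Series:
    """Per-group sensitivity (true-positive rate) and false-negative rate."""
    positives = frame[frame["actual"] == 1]
    tpr = (positives["prediction"] == 1).mean() if len(positives) else float("nan")
    return pd.Series({"sensitivity": tpr, "false_negative_rate": 1 - tpr, "n": len(frame)})

audit = df.groupby("group")[["prediction", "actual"]].apply(group_metrics)
print(audit)

# Flag groups whose sensitivity falls notably below the best-performing group.
# The 0.10 gap is an illustrative threshold, not a regulatory one.
gap = audit["sensitivity"].max() - audit["sensitivity"]
print("Groups needing review:", list(audit.index[gap > 0.10]))
```

In a real audit, these comparisons would be run on held-out clinical data, repeated over time, and reviewed by the kind of diverse stakeholder group described above.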

Read also: AI algorithmic bias in healthcare decision making

 

Vulnerabilities in security and privacy

AI in healthcare depends on large amounts of sensitive patient data, which introduces new privacy and security challenges beyond traditional protections like HIPAA. As noted in the comprehensive review, AI and data protection law in health, AI systems often process data across various contexts, including clinical care, research, and commercial use, making it difficult to determine how existing privacy rules apply.

Even when data has been de-identified, AI can occasionally re-identify individuals by connecting patterns across various datasets, posing a threat to patient privacy. Additionally, many AI tools run on cloud platforms or involve third-party vendors, potentially reducing healthcare organizations' ability to control who accesses and protects patient information.
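To make that linkage risk concrete, here is a deliberately simplified, hypothetical example: a “de-identified” extract that still carries quasi-identifiers (ZIP code, birth year, sex) can be joined against an outside dataset that includes names, re-attaching identities to diagnoses. All data and field names below are invented.

```python
import pandas as pd

# Hypothetical "de-identified" clinical extract: names removed,
# but quasi-identifiers (ZIP code, birth year, sex) retained.
deidentified = pd.DataFrame({
    "zip": ["94110", "94110", "10027"],
    "birth_year": [1984, 1991, 1984],
    "sex": ["F", "M", "F"],
    "diagnosis": ["Type 2 diabetes", "Asthma", "Depression"],
})

# Hypothetical outside dataset (for example, a public roll) that pairs
# the same quasi-identifiers with names.
public = pd.DataFrame({
    "name": ["Jane Roe", "John Doe"],
    "zip": ["94110", "94110"],
    "birth_year": [1984, 1991],
    "sex": ["F", "M"],
})

# A simple join on the quasi-identifiers re-attaches names to diagnoses.
relinked = deidentified.merge(public, on=["zip", "birth_year", "sex"])
print(relinked[["name", "diagnosis"]])
```

AI systems that aggregate many such datasets can perform this kind of linkage at far greater scale and subtlety than a single join, which is why de-identification alone is not a sufficient safeguard.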

Security risks also increase because AI systems depend on complex infrastructures that can be susceptible to attacks, unauthorized access, or misuse. This risk is particularly significant when employees use unsanctioned AI tools without adequate safeguards.

When privacy and security are not carefully governed, patient data can be exposed, leading to legal consequences, loss of patient trust, and potential harm. Strong governance, including clear data policies, encryption, access controls, and continuous monitoring, protects healthcare data while enabling safe AI use.

Strong governance policies must specify that:

  • Only approved, HIPAA compliant tools covered by business associate agreements (BAAs) may be used, especially when working with PHI.
  • All AI interactions involving PHI are logged and encrypted (see the sketch below).
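As a rough illustration only, the snippet below sketches how those two rules might be enforced in application code. The tool names, allowlist, and helper function are hypothetical and not a description of any particular product.

```python
import hashlib
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-governance")

# Hypothetical allowlist: only AI tools covered by a signed BAA may receive PHI.
APPROVED_TOOLS_WITH_BAA = {"approved-clinical-scribe", "approved-scheduling-assistant"}

def send_to_ai_tool(tool_name: str, payload: str, contains_phi: bool) -> None:
    """Gate and log every AI interaction before any data leaves the organization."""
    if contains_phi and tool_name not in APPROVED_TOOLS_WITH_BAA:
        raise PermissionError(f"{tool_name} is not approved for PHI under a BAA")

    # Log the interaction without writing PHI into the log itself.
    log.info(
        "AI call tool=%s phi=%s at=%s payload_sha256=%s",
        tool_name,
        contains_phi,
        datetime.now(timezone.utc).isoformat(),
        hashlib.sha256(payload.encode()).hexdigest()[:12],
    )

    # In a real system the payload would also be encrypted in transit and at rest
    # before reaching the vendor; that step is outside the scope of this sketch.

# Example: an unapproved tool is blocked when PHI is involved.
try:
    send_to_ai_tool("consumer-chatbot", "Patient MRN 12345 ...", contains_phi=True)
except PermissionError as err:
    log.warning("Blocked: %s", err)
```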

Read also: AI in healthcare privacy: Enhancing security or introducing new risks?

 

Errors, hallucinations, and misdiagnoses

Generative AI systems can produce errors known as hallucinations: outputs that are stated confidently but are incorrect. In clinical settings, such inaccuracies can translate directly into patient risk, including misdiagnoses or inappropriate care suggestions.

IBM’s 2025 Cost of a Data Breach Report noted that 97% of organizations that experienced an AI-related security incident lacked proper AI access controls, while 63% of organizations did not have AI governance policies in place to manage AI usage or prevent the proliferation of shadow AI.

Effective governance involves continuous monitoring, human-in-the-loop validation, and rigorous clinical oversight to ensure AI outputs are verified before impacting patient care. Without such policies, AI mistakes can go undetected until serious harm occurs.
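One simplified way to picture human-in-the-loop validation: AI-generated drafts sit in a review queue and cannot reach the patient record until a clinician explicitly approves them. The classes and statuses below are hypothetical, offered only to illustrate the control.

```python
from dataclasses import dataclass, field

@dataclass
class AISuggestion:
    """A draft output from an AI tool, pending clinician review."""
    patient_id: str
    text: str
    status: str = "pending_review"
    reviewed_by: str | None = None

@dataclass
class ReviewQueue:
    items: list[AISuggestion] = field(default_factory=list)

    def submit(self, suggestion: AISuggestion) -> None:
        self.items.append(suggestion)

    def approve(self, suggestion: AISuggestion, clinician: str) -> None:
        suggestion.status = "approved"
        suggestion.reviewed_by = clinician

    def release_to_chart(self, suggestion: AISuggestion) -> str:
        # The hard rule: nothing reaches the chart without explicit approval.
        if suggestion.status != "approved":
            raise RuntimeError("AI output must be clinician-approved before charting")
        return f"Charted for {suggestion.patient_id}: {suggestion.text}"

queue = ReviewQueue()
draft = AISuggestion(patient_id="pt-001", text="Possible community-acquired pneumonia")
queue.submit(draft)
queue.approve(draft, clinician="Dr. Example")
print(queue.release_to_chart(draft))
```

Continuous monitoring then tracks how often clinicians override or reject AI drafts, which is itself an early warning signal of model drift.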

 

Erosion of accountability and legal complexity

In traditional healthcare, when something goes wrong, it is clearer who is accountable: the clinician, the institution, or the device manufacturer. However, the use of AI can blur the lines.

As AI adoption grows, experts warn that establishing blame for medical errors becomes more complex. As reported by The Guardian, “The use of artificial intelligence in healthcare could create a legally complex blame game when it comes to establishing liability for medical failings.”  Liability might be scattered across clinicians, AI vendors, and healthcare institutions, making legal accountability fraught and costly. 

This raises questions about the standard of care and the legal duty of healthcare professionals when AI recommendations influence clinical decisions.

Clear policies that define roles, responsibilities, and incident response protocols are essential. Healthcare facilities must understand:

  • Who is liable if an AI tool contributes to harm
  • How to investigate and remediate AI-related incidents
  • How to update governance documents as technology evolves

 

Trust and ethical concerns

AI systems that are opaque or difficult to interpret, also called black box models, reduce trust among clinicians and patients. The study Ethics and governance of trustworthy medical artificial intelligence found that when decision logic isn’t transparent, healthcare professionals struggle to validate results, and patients may feel sidelined or uncertain about their care. 

Ethical issues also extend to autonomy and dignity. In worst-case scenarios, as highlighted in Ethical concerns of AI in healthcare: A systematic review of qualitative studies, poorly governed AI could undermine the clinician–patient relationship by making decisions divorced from human context and ethical judgment. 

Ethical AI governance demands transparency, explainability, and human oversight mechanisms. When organizations fail to integrate these principles, AI use can erode trust within healthcare teams and between patients and providers.

 

Increasing operational and implementation risks

AI can influence clinical decision-making and impact workflows, resource allocation, and administrative processes. Without proper governance, there is a risk that AI systems may be misused or become out of sync with the organization's objectives.

For example, the study Scaling enterprise AI in healthcare: the role of governance in risk mitigation frameworks examines how an AI tool designed to optimize scheduling might prioritize financial efficiency over clinical urgency, compromising patient care if the tool is not properly contextualized within a governance framework.

Similarly, the absence of governance might lead healthcare providers to excessively depend on AI recommendations, which could diminish clinicians' critical thinking and clinical skills over time.

Governance ensures that AI tools augment, not replace, professional judgment. It aligns AI capabilities with organizational priorities, ethical standards, and clinical protocols.

 

Governance at Paubox

Paubox integrates governance into its security, compliance, and product design to support HIPAA compliant healthcare communication. The company maintains a clear AI use policy that defines how AI is used within its services, limits exposure to protected health information (PHI), and requires ongoing oversight.

All emails sent through Paubox are encrypted by default and supported by role-based access controls, audit logging, and continuous monitoring. Paubox also enforces strict vendor governance, requiring BAAs and vetting third parties to ensure compliance with HIPAA standards.

To strengthen its governance framework, Paubox maintains HITRUST CSF certification, demonstrating alignment with recognized healthcare security and risk-management best practices.

See also: HIPAA Compliant Email: The Definitive Guide (2025 Update)

 

FAQs

Can AI systems introduce risk even if they don’t make clinical decisions?

Yes. Non-clinical AI tools used for scheduling, documentation, or communications can still process PHI. If these systems are poorly governed, they can lead to privacy breaches or compliance violations.

 

Can AI improve healthcare without compromising patient privacy?

Yes. With strong governance, clear policies, technical safeguards, and human oversight, healthcare organizations can benefit from AI while protecting patient privacy and remaining compliant with HIPAA.

 

Do patients need to be informed when AI is used in their care?

While requirements vary, transparency is increasingly considered a best practice. Informing patients about AI use supports trust, informed consent, and ethical care.
