
Is generative AI capable of ethical decision-making?

Written by Kirsten Peremore | December 09, 2025

Generative AI can’t make genuinely ethical decisions because it doesn’t have a conscience or moral compass. It produces outputs based on training data, and those outputs can just as easily support unethical choices as ethical ones. Left unmonitored, algorithmic biases baked into non-representative data sets can skew clinical decision-making instead of improving it.

As one 2025 Cureus review puts it, “Generative AI generates both truth and falsehood, supports both ethical and unethical decisions, and is neither transparent nor accountable. These factors pose clear risks to optimal decision-making in complex health services such as health policy and health regulation.”

These systems also operate as black-box models, making it hard to trace how they reached a given conclusion or who should be held responsible when something goes wrong. Their understanding of context can be shallow, even in advanced generative models. What matters most is that generative AI is applied correctly in clinical settings. 

 

What ethical decision-making actually requires

Ethical decision-making depends on balancing competing principles, such as autonomy, non-maleficence, and justice. Every decision in healthcare is a case-by-case exercise in weighing risks, benefits, and the patient’s wishes. Transparency matters, especially when clinicians weigh medical evidence against ethical values like quality of life or economic constraints.

As one study on pandemic resource allocation, “Ethical values and principles to guide the fair allocation of resources in response to a pandemic,” explains, “Allocation of resources in response to acute public health threats is challenging and must be simultaneously guided by many ethical principles and values. Ethical decision-making strategies and the prioritisation of different principles and values needs to be discussed with the public in order to prepare for future public health threats.”

 

How generative AI thinks

Generative AI works as a statistical pattern-matching process rather than anything resembling human cognition. It predicts text based on probabilities, and that mechanism lets it excel at divergent and convergent tasks by generating large numbers of ideas at speed. It outperforms humans on creativity metrics like the alternative uses task, producing outputs that score higher in originality. 
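To make the pattern-matching point concrete, here is a minimal Python sketch of next-token prediction. The probability table is invented for illustration and is vastly simpler than a real model’s learned distribution; the point is that the mechanism is sampling, not understanding.

```python
import random

# Hypothetical learned probabilities for the word that follows "the patient".
# A real model derives these from billions of training examples; the values
# here are made up for illustration.
next_word_probs = {
    "was": 0.40,
    "reported": 0.25,
    "denies": 0.20,
    "refused": 0.15,
}

def predict_next_word(probs: dict) -> str:
    """Sample one word according to the model's probability distribution."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# The model never "knows" what a patient is; it only continues the pattern.
print("the patient", predict_next_word(next_word_probs))
```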

A recent Scientific Reports study found that all GenAI models outperformed human participants across both divergent and convergent thinking assessments, with ChatGPT-4o achieving a mean accuracy of 54.96 out of 57 on the Remote Associates Test compared to the human mean of 19.13. As the authors put it, “All GenAI models outperformed human participants in both tasks: the ‘average’ and ‘best’ GenAI ideas were significantly more original than human-generated ideas.”

This makes it well suited to areas where human error is a risk, like inbound email security. The catch is that these abilities come from exposure to large datasets, which in other clinical areas may not translate into genuine comprehension. Over-reliance on generative AI can also reduce human critical thinking because the system handles the heavy lifting, and that shallow cognitive engagement risks weakening deeper reasoning skills over time. 

 

The accountability problem

The accountability problem in AI healthcare shows up as a persistent ambiguity over who should be held responsible when AI-assisted decisions go wrong. Clinicians carry the legal and moral liability for patient harm, yet they often rely on opaque algorithmic outputs that they cannot fully interrogate. 

As one editorial on ethical governance in AI healthcare research, “Ethical framework for artificial intelligence in healthcare research: A path to integrity,” warns, “The integration of artificial intelligence (AI) into healthcare research marks a pivotal shift… however, this evolution introduces a spectrum of ethical challenges that necessitate meticulous scrutiny and governance.”

That tension fuels ongoing debates about whether blame should be shared with developers, institutions, or technology vendors when systems malfunction or produce flawed recommendations. Black-box models make the situation worse. Their lack of transparency makes it harder to trace how an output was generated or integrate the technology into existing structures like serious incident reporting. Audit logs rarely map cleanly onto clinical review processes, which erodes trust and chips away at clinician autonomy.

Overreliance on AI only blurs the lines further. When clinicians begin to defer to automated guidance, the question of responsibility becomes even more fraught. Strict governance that keeps AI in a decision-support role is necessary rather than a decision-making one, with human verification as a non-negotiable step.
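As a rough illustration of that governance principle, the sketch below shows a gate that holds any AI suggestion until an accountable human reviewer signs off. All names, fields, and messages are hypothetical; the point is that the reviewer, not the model, is recorded as the decision-maker.

```python
from dataclasses import dataclass

@dataclass
class AISuggestion:
    summary: str          # the model's recommendation, in plain language
    confidence: float     # the model's self-reported confidence

def apply_recommendation(suggestion: AISuggestion, reviewer: str) -> str:
    """Block any AI suggestion that lacks an accountable human reviewer."""
    if not reviewer:
        # No human sign-off: the suggestion is held, never auto-applied.
        return (f"HELD for review: {suggestion.summary} "
                f"(model confidence {suggestion.confidence:.2f})")
    # The named clinician, not the algorithm, owns the decision.
    return f"APPROVED by {reviewer}: {suggestion.summary}"

print(apply_recommendation(AISuggestion("adjust dosage", 0.91), ""))
print(apply_recommendation(AISuggestion("adjust dosage", 0.91), "Dr. Adams"))
```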

 

How generative AI can be effectively applied

Generative AI can be applied ethically in healthcare when its use is guided by governance frameworks that center bias mitigation. Diverse training data, explainable AI techniques, and safeguards aligned with HIPAA help keep systems accountable, while clear liability structures ensure that clinicians, not algorithms, retain ultimate responsibility for patient outcomes. 
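As a toy example of what bias monitoring might involve, the sketch below compares a model’s accuracy across two demographic subgroups before the system is trusted. The data is fabricated and the tolerance is an assumed value; real audits would set both deliberately and use far richer fairness metrics.

```python
# Fabricated (subgroup, model_correct) outcomes for illustration only.
predictions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

def accuracy_by_group(rows):
    """Compute per-subgroup accuracy from (group, correct) pairs."""
    totals, correct = {}, {}
    for group, ok in rows:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + int(ok)
    return {g: correct[g] / totals[g] for g in totals}

scores = accuracy_by_group(predictions)
print(scores)  # {'group_a': 0.75, 'group_b': 0.25}

# A large gap between subgroups suggests the training data
# under-represents one population.
gap = max(scores.values()) - min(scores.values())
if gap > 0.1:  # assumed tolerance; a real audit sets this explicitly
    print(f"Subgroup accuracy gap of {gap:.2f} exceeds tolerance; audit the data.")
```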

As one study on translational implementation, “Generative AI in healthcare: an implementation science informed translational path on application, integration and governance,” stresses, “technological progress alone will not revolutionize healthcare overnight; real change requires carefully orchestrated sociotechnical transitions that put people first.” 

The same study notes that generative AI systems supporting billing, diagnosis, treatment and research have the potential to improve care delivery and efficiency, yet warns that its utility and impact remain poorly understood, with integration requiring meticulous planning, risk mitigation and structured adoption programs. That caution aligns with the finding that over one hour of clinician time is spent on electronic health record tasks for every hour of direct face-to-face care. 

In specialized areas like inbound email security, where healthcare organizations face phishing, impersonation attacks, and data exfiltration risks, generative AI can be used responsibly to create synthetic threat simulations, flag anomalous email patterns without exposing PHI, and strengthen human review workflows. Those uses remain ethical only when outputs go through strict validation.
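A minimal sketch of what metadata-only flagging could look like follows. The feature names, weights, and threshold are all invented for illustration; the design point is that only headers and structural signals are scored, so the message body, which may contain PHI, is never read.

```python
def anomaly_score(meta: dict) -> float:
    """Score an inbound email using header metadata only; never the body."""
    score = 0.0
    if meta["sender_domain"] not in meta["known_domains"]:
        score += 0.4   # unfamiliar sending domain
    if meta["display_name_mismatch"]:
        score += 0.3   # e.g. "Dr. Smith" <random@evil.example>
    if meta["link_count"] > 5:
        score += 0.2   # unusually link-heavy message
    if meta["sent_hour"] < 5:
        score += 0.1   # sent at an odd hour
    return score

# Hypothetical metadata for one suspicious inbound message.
email_meta = {
    "sender_domain": "evil.example",
    "known_domains": {"hospital.example", "clinic.example"},
    "display_name_mismatch": True,
    "link_count": 7,
    "sent_hour": 3,
}

FLAG_THRESHOLD = 0.6  # assumed cutoff; real systems tune this on labeled data
if anomaly_score(email_meta) >= FLAG_THRESHOLD:
    print("Route to human review queue")  # a person makes the final call
```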

See also: HIPAA Compliant Email: The Definitive Guide (2025 Update)

 

FAQs

What is the difference between generative AI and regular AI?

Regular AI analyzes existing data to classify or predict, while generative AI creates new content such as text or images.

 

Is generative AI a type of AI or something separate?

Generative AI is a subset of artificial intelligence. It sits under the broader AI umbrella and uses specialized models.

 

Does generative AI understand the content it produces?

No. Generative AI doesn’t understand meaning, context, morality, or intention. It predicts the next likely word, pixel, or token based on statistical patterns.