According to the 2025 KPMG and University of Melbourne survey of 48,000 people across 47 countries, “only 46% of people globally are willing to trust AI systems.” This finding indicates a paradox at the heart of today’s technological shift. AI adoption is accelerating at an unprecedented pace, powering everything from consumer apps to enterprise workflows, and reshaping sectors like healthcare, finance, and governance. Yet public confidence in these systems hasn’t kept up. This widening gap between how widely AI is used and how much it is trusted is known as the AI trust gap.
The AI trust gap refers to the dissonance between how much we use or rely on AI systems and how much we actually trust them to act safely, reliably, fairly, and transparently. It’s not just a matter of skepticism; it’s a structural challenge in human-machine systems, ethics, governance, expectations, and psychology.
The trust gap has several dimensions, explored below: opacity and weak explanations, miscalibrated confidence, hallucinations and inconsistency, bias, transparency that can backfire, workplace mismatches between leaders and employees, and governance that lags adoption.
See also: Paubox launches generative AI email security for healthcare
One of the biggest culprits is opacity. When systems produce decisions or predictions without transparent reasoning (or with explanations that are themselves tenuous), users struggle to trust them. Explainable AI has made strides, but its promise is often overstated; the “explanations” are sometimes post hoc rationalizations, not faithful representations of internal computation.
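As an illustration of that gap, here is a minimal toy sketch (not any specific XAI tool): a “black box” whose decision depends entirely on the interaction between two features, “explained” post hoc by a global linear surrogate that attributes almost nothing to either feature.

```python
import numpy as np

# Toy "black box": the decision depends entirely on an interaction between two features.
def black_box(X):
    return (X[:, 0] * X[:, 1] > 0).astype(float)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = black_box(X)

# Post hoc "explanation": fit a global linear surrogate to the black box's outputs.
X_with_intercept = np.column_stack([X, np.ones(len(X))])
weights, *_ = np.linalg.lstsq(X_with_intercept, y, rcond=None)

print("Surrogate feature weights:", np.round(weights[:2], 3))
# Both weights land near zero, so the linear "explanation" says neither feature matters,
# even though the black box depends entirely on how the two features interact.
```

The surrogate is a perfectly reasonable summary, yet it tells users nothing true about how the decision is actually made, which is exactly the kind of unfaithful explanation that erodes trust.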
Further complicating this is calibration of confidence: AI systems often output confidence estimates or “certainty scores” that are poorly calibrated; that is, they may overstate or understate their true reliability. As stated in the study Understanding the Effects of Miscalibrated AI Confidence on User Trust, Reliance, and Decision Efficacy, “Providing well-calibrated AI confidence can help promote users' appropriate trust in and reliance on AI, which are essential for AI-assisted decision-making.” That misalignment can lead users to over-rely on AI when it's wrong, or to dismiss it when it's right.
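To make “poorly calibrated” concrete, the sketch below (with made-up confidences and outcomes, not data from the study) computes expected calibration error: the gap between how confident a model claims to be and how often it is actually right.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Average gap between stated confidence and observed accuracy, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += (mask.sum() / len(confidences)) * gap
    return ece

# Hypothetical example: the model claims ~90% confidence but is right only 60% of the time.
claimed = [0.9, 0.92, 0.88, 0.91, 0.9]
was_correct = [1, 0, 1, 0, 1]
print(f"ECE: {expected_calibration_error(claimed, was_correct):.2f}")  # large gap -> overconfident
```

A well-calibrated model that reports 90% confidence should be right about 90% of the time; the larger the gap, the more users are being nudged toward over- or under-reliance.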
The more we use generative models, the more we see instances where they produce factually incorrect statements (“hallucinations”), contradictory answers, or inconsistent outputs on the same prompt. These failures erode trust, especially when they occur unpredictably.
For example, in a recent interview, a Huawei executive provocatively suggested we “embrace hallucinations,” a stance that underscores how deeply embedded hallucinations are in current systems, but also hints at the tension involved in managing them.
Moreover, AI systems are inherently probabilistic and non-deterministic; asking the same question may yield different answers at different times. That unpredictability is unsettling to users expecting consistency.
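The toy sketch below illustrates the mechanism without relying on any real model API: generative models typically draw each next token from a probability distribution, so identical prompts can produce different outputs unless sampling is constrained, for example by fixing a random seed or lowering the temperature.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Toy next-token step: softmax over candidate scores, then random sampling."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.8, 0.5]            # hypothetical scores for three candidate tokens
print(sample_next_token(logits))     # may differ run to run...
print(sample_next_token(logits))     # ...which is the inconsistency users notice

# Fixing the seed makes the draw repeatable; a very low temperature nearly always
# picks the highest-scoring token.
print(sample_next_token(logits, rng=np.random.default_rng(0)))
print(sample_next_token(logits, rng=np.random.default_rng(0)))
print(sample_next_token(logits, temperature=0.05))
```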
Read more: What are AI hallucinations?
AI systems are trained on data that reflects human and structural biases around gender, race, socioeconomic status, and geography. When systems amplify or replicate those biases, they inflict real harm on marginalized groups. That undermines trust, especially among those already vulnerable.
Additionally, fairness is often contested; different stakeholders may disagree on what “fairness” means. Because fairness is inherently value-laden, perceptions of bias are deeply political and contextual, not easily solved by a technical fix.
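To see why the disagreement is structural rather than a technical oversight, the illustrative sketch below (with made-up predictions, outcomes, and a binary group attribute) scores the same model against two common fairness definitions: it satisfies demographic parity yet fails equal opportunity.

```python
import numpy as np

def demographic_parity_diff(pred, group):
    """Difference in positive-prediction rates between the two groups."""
    pred, group = np.asarray(pred), np.asarray(group)
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

def equal_opportunity_diff(pred, label, group):
    """Difference in true-positive rates (recall) between the two groups."""
    pred, label, group = map(np.asarray, (pred, label, group))
    tpr = lambda g: pred[(group == g) & (label == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Made-up example: approval predictions, true outcomes, and a binary group attribute.
pred  = [1, 1, 0, 0, 1, 1, 0, 0]
label = [1, 0, 1, 0, 1, 1, 0, 0]
group = [0, 0, 0, 0, 1, 1, 1, 1]

# Same selection rate in both groups (parity gap 0.00), but qualified members of
# group 0 are approved only half as often (opportunity gap 0.50).
print(f"Demographic parity gap: {demographic_parity_diff(pred, group):.2f}")
print(f"Equal opportunity gap:  {equal_opportunity_diff(pred, label, group):.2f}")
```

Which of those two gaps matters more is a value judgment about whether equal selection rates or equal error rates constitute fair treatment, and no amount of modeling settles that question on its own.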
Related: Real-world examples of healthcare AI bias
According to an article on Trusting News, studies show that when readers see a disclosure that AI was used in writing or editing a news story, trust in that story often decreases, even when the AI support was benign.
Similarly, a University of Kansas study found that readers trust news less if they believe AI was involved, even when they don't fully understand its role. That suggests an awkward trade-off: transparency is necessary, but it can backfire if users interpret “AI involvement” as sloppiness, dehumanization, or loss of editorial oversight.
In enterprises, a mismatch often emerges between leadership enthusiasm for AI and employee reluctance or skepticism. Many organizations roll out AI tools without adequate training, support, or change management. Employees can feel frustrated if they don’t know when or how to trust these tools.
A study in MIT Sloan Management Review, Bridging the AI Trust Gap: Why Employees and Leaders See AI Differently, suggests that structured training and engagement during AI deployment help increase trust and adoption.
One of the most alarming asymmetries is the pace: AI adoption is accelerating faster than the establishment of ethical guardrails, regulatory oversight, standards, or accountability regimes. A recent global study revealed that while trust in generative AI is surging, only 40% of organizations report investing in governance, explainability, or ethics safeguards.
In practice, that lag means many employees are using AI tools without clarity on whether that use is permitted or whether it complies with policy.
Meanwhile, globally, only around 46% of respondents say they're willing to trust AI systems, despite 66% already using AI regularly.
See also: HIPAA Compliant Email: The Definitive Guide (2025 Update)
Closing the AI trust gap demands a holistic mix of ethics, governance, transparency, engagement, and accountability. EY's “Bridging the AI Trust Gap” report offers a useful set of principles and actions organizations can take. Drawing on those, the key strategies include embedding ethics in design, ensuring transparency and explainability, auditing for bias, applying strong governance, engaging with regulators, and communicating openly with stakeholders.
Together, these measures align technical performance with ethical responsibility, reducing risks and building public confidence in AI.
Read also: Regulations on AI for healthcare workers and nurses
Organizations can implement strategies such as embedding ethics in design, ensuring transparency and explainability, auditing for bias, applying strong governance, engaging with regulators, and communicating openly with stakeholders.
Without trust, AI adoption will remain limited, and potential benefits in healthcare, finance, education, and other fields may not be realized. Bridging the gap ensures that AI is not only powerful but also reliable, ethical, and socially accepted.