
The AI trust gap

Written by Tshedimoso Makhene | October 07, 2025

According to the 2025 KPMG and University of Melbourne survey of 48,000 people across 47 countries, “only 46% of people globally are willing to trust AI systems.” This finding indicates a paradox at the heart of today’s technological shift. AI adoption is accelerating at an unprecedented pace, powering everything from consumer apps to enterprise workflows, and reshaping sectors like healthcare, finance, and governance. Yet public confidence in these systems hasn’t kept up. This widening gap between how widely AI is used and how much it is trusted is known as the AI trust gap.

 

Understanding the AI trust gap

The AI trust gap refers to the dissonance between how much we use or rely on AI systems and how much we actually trust them to act safely, reliably, fairly, and transparently. It’s not just a matter of skepticism; it’s a structural challenge in human-machine systems, ethics, governance, expectations, and psychology.

Some dimensions of the trust gap include:

  • Capability vs. credibility: AI systems can perform tasks competently (e.g., language generation, image recognition), but their internal logic is often opaque (the “black box” problem). According to the Wilson Center, the application of black box AI models offers speed and accuracy in medical applications; however, their lack of transparency has created trust issues. Studies show that when AI systems like IBM’s Watson for Oncology failed to justify their recommendations, clinicians rejected them despite their potential. Without explainability, the adoption of medical AI remains limited.
  • Perceived vs. actual risk: People sense risks (bias, hallucination, error, misuse) more keenly than they perceive benefits. The study Effect of AI Performance, Perceived Risk, and Trust on Human Dependence in Deepfake Detection AI System found that participants adjusted their trust in the AI depending on its error rates, indicating that their perception of risk (false positives and negatives) affected their reliance.
  • Institutional vs. public confidence: Even if companies or governments believe in their AI systems, public confidence may lag.
  • Trust paradox/verisimilitude: As AI becomes more fluent and humanlike, users may trust its output too much, even when it's wrong. The study Exploring the artificial intelligence “Trust paradox”: Evidence from a survey experiment in the United States finds that people are willing to use AI-enabled technologies even when their explicit trust in them is low; in other words, reliance can outpace trust.
  • Algorithm aversion: People may resist or discount algorithmic recommendations even when they are better than human alternatives, especially when stakes or moral judgments are involved. A recent study, Overcoming Algorithm Aversion with Transparency: Can Transparent Predictions Change User Behavior?, found that giving users visibility into the decision logic of algorithms can reduce aversion, especially when the predictions are adjustable. This suggests that well-designed transparency and user control can mitigate this psychological resistance.

See also: Paubox launches generative AI email security for healthcare

 

Why the trust gap persists

Black-box logic, lack of explainability

One of the biggest culprits is opacity. When systems produce decisions or predictions without transparent reasoning (or with explanations that are themselves tenuous), users struggle to trust them. Explainable AI has made strides, but its promise is often overstated; the “explanations” are sometimes post hoc rationalizations, not faithful representations of internal computation. 
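
To make the “post hoc” point concrete, here is a minimal sketch, assuming a generic scikit-learn classifier and synthetic data (none of it drawn from the systems or studies discussed here), of permutation importance, a common model-agnostic explanation: it infers which inputs matter by shuffling each feature and measuring the accuracy drop. It describes the model's behavior from the outside rather than exposing its internal computation, which is exactly why such explanations can rationalize rather than reveal.

```python
# Minimal sketch: a post hoc, model-agnostic explanation via permutation importance.
# It probes a trained model from the outside; it does not reveal internal reasoning.
# Classifier and data are illustrative, not any system discussed above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.score(X, y)

for feature in range(X.shape[1]):
    X_shuffled = X.copy()
    X_shuffled[:, feature] = rng.permutation(X_shuffled[:, feature])  # break this feature's link to y
    drop = baseline - model.score(X_shuffled, y)                      # accuracy lost without it
    print(f"feature {feature}: importance ~ {drop:.3f}")
```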

Further complicating this is the calibration of confidence: AI systems often output confidence estimates or “certainty scores” that are poorly calibrated, meaning they may overstate or understate their true reliability. As stated in the study Understanding the Effects of Miscalibrated AI Confidence on User Trust, Reliance, and Decision Efficacy, “Providing well-calibrated AI confidence can help promote users' appropriate trust in and reliance on AI, which are essential for AI-assisted decision-making.” That misalignment can lead users to over-rely on AI when it is wrong, or to dismiss it when it is right.
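
To illustrate what poor calibration looks like in practice, here is a minimal sketch of expected calibration error (ECE) computed on made-up predictions (not data from the cited study): confidence scores are grouped into bins, and the gap between average stated confidence and actual accuracy in each bin is measured. A well-calibrated model that reports 90% confidence should be correct roughly 90% of the time.

```python
# Minimal sketch of expected calibration error (ECE) on toy, made-up predictions.
import numpy as np

confidences = np.array([0.95, 0.90, 0.85, 0.70, 0.65, 0.60, 0.55, 0.52])  # model's stated confidence
correct     = np.array([1,    0,    1,    1,    0,    0,    1,    0])      # whether it was right

def expected_calibration_error(conf, correct, n_bins=5):
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs(conf[mask].mean() - correct[mask].mean())  # confidence vs. accuracy gap
            ece += mask.mean() * gap                             # weighted by bin size
    return ece

print(f"ECE ~ {expected_calibration_error(confidences, correct):.3f}")
```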

 

Hallucinations, errors, and inconsistency

The more we use generative models, the more we see instances where they produce factually incorrect statements (“hallucinations”), contradictory answers, or inconsistent outputs on the same prompt. These failures erode trust, especially when they occur unpredictably.

For example, a recent interview in which a Huawei executive provocatively suggested we “embrace hallucinations” underscores how deeply embedded hallucinations are in current systems, and hints at the tension involved in managing them.

Moreover, AI systems are inherently probabilistic and non-deterministic; asking the same question may yield different answers at different times. That unpredictability is unsettling to users expecting consistency. 
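
As a minimal illustration, assuming a toy next-token distribution rather than any real model, the sketch below samples from temperature-scaled probabilities; because generation is stochastic, repeated runs of the same prompt can legitimately return different answers.

```python
# Minimal sketch: sampling from a toy next-token distribution shows why the
# same prompt can yield different outputs across runs. Tokens and scores are made up.
import numpy as np

tokens = ["yes", "no", "maybe", "unsure"]
logits = np.array([2.0, 1.5, 0.5, 0.1])   # toy scores a model might assign to each token
temperature = 0.8                          # > 0 keeps generation stochastic

def sample_token(logits, temperature):
    probs = np.exp(logits / temperature)
    probs /= probs.sum()                   # softmax over temperature-scaled scores
    return tokens[np.random.choice(len(tokens), p=probs)]

print([sample_token(logits, temperature) for _ in range(5)])  # likely differs run to run
```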

Read more: What are AI hallucinations?

 

Bias, fairness, and systemic harms

AI systems are trained on data that reflects human and structural biases (gender, race, socioeconomic and geographic status). When systems amplify or replicate those biases, they inflict real harm on marginalized groups. That undermines trust, especially among those already vulnerable.

Additionally, fairness is often contested; different stakeholders may disagree on what “fairness” means. Because fairness is inherently value-laden, perceptions of bias are deeply political and contextual, not easily solved by a technical fix.

Related: Real-world examples of healthcare AI bias

 

Disclosure paradox in media and news

According to an article on Trusting News, studies show that when readers see a disclosure that AI was used in writing or editing a news story, trust in that story often decreases, even when the AI support was benign. 

Similarly, a University of Kansas study found that readers trust news less if they believe AI was involved, even when they don't fully understand what role it played. This points to an awkward trade-off: transparency is necessary, but it can backfire if readers interpret “AI involvement” as sloppiness, dehumanization, or loss of editorial oversight.

 

Organizational and workforce trust gaps

In enterprises, a mismatch often emerges between leadership enthusiasm for AI and employee reluctance or skepticism. Many organizations roll out AI tools without adequate training, support, or change management. Employees can feel frustrated if they don’t know when or how to trust these tools. 

A study in MIT Sloan Management Review, Bridging the AI Trust Gap: Why Employees and Leaders See AI Differently, suggests that structured training and engagement in AI deployment help increase trust and adoption.

 

Adoption outpacing governance and ethics

One of the most alarming asymmetries is the pace: AI adoption is accelerating faster than the establishment of ethical guardrails, regulatory oversight, standards, or accountability regimes. A recent global study revealed that while trust in generative AI is surging, only 40% of organizations report investing in governance, explainability, or ethics safeguards.

The result is that many employees are using AI tools without clarity on whether that use is permitted or whether it complies with policy.

Meanwhile, globally, only around 46% of respondents say they're willing to trust AI systems, despite 66% already using AI regularly.

See also: HIPAA Compliant Email: The Definitive Guide (2025 Update)

 

Strategies to bridge the AI trust gap 

Closing the AI trust gap demands a holistic mix of ethics, governance, transparency, engagement, and accountability. EY's “Bridging the AI Trust Gap” report offers a useful set of principles and actions organizations can take. Based on those, here are key strategies to help bridge the gap:

  • Trust by design: Build AI with clear purposes and ethical principles from the start.
  • Agile governance: Continuously monitor risks, maintain AI inventories, and use third-party audits.
  • Transparency and explainability: Ensure users understand how AI makes decisions, and establish clear accountability.
  • Policy and stakeholder engagement: Collaborate with regulators, adopt self-regulation, and include diverse voices in AI development.
  • Continuous monitoring: Validate outputs, detect bias, and ensure resilience against errors or attacks (a minimal bias-check sketch follows below).
  • Education and culture: Train employees, communicate openly with users, and foster an ethical organizational mindset.
  • Standards and certification: Use independent assurance, certifications, and global best practices to signal credibility.

Together, these measures align technical performance with ethical responsibility, reducing risks and building public confidence in AI.
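
For the “detect bias” item above, here is a minimal sketch of one routine check a monitoring pipeline might run: the demographic parity difference, computed on made-up predictions and a hypothetical group attribute. The 0.1 alert threshold is illustrative, not a standard.

```python
# Minimal sketch: demographic parity difference as a scheduled bias check.
# Predictions and the "group" attribute are made up for illustration.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])  # model's positive/negative decisions
group       = np.array(["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"])

rate_a = predictions[group == "A"].mean()   # positive-decision rate for group A
rate_b = predictions[group == "B"].mean()   # positive-decision rate for group B
parity_gap = abs(rate_a - rate_b)

print(f"positive rate A={rate_a:.2f}, B={rate_b:.2f}, gap={parity_gap:.2f}")
if parity_gap > 0.1:                        # illustrative alert threshold
    print("Warning: parity gap exceeds threshold; review for potential bias.")
```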

Read also: Regulations on AI for healthcare workers and nurses

 

FAQs

How can organizations build trust in AI?

Organizations can implement strategies such as embedding ethics in design, ensuring transparency and explainability, auditing for bias, applying strong governance, engaging with regulators, and communicating openly with stakeholders.

 

Why is bridging the AI trust gap important?

Without trust, AI adoption will remain limited, and potential benefits in healthcare, finance, education, and other fields may not be realized. Bridging the gap ensures that AI is not only powerful but also reliable, ethical, and socially accepted.