
How AI use policies build trust in AI software

Trust is indispensable in any framework where protected health information (PHI) is shared between organizations. Covered entities must be assured that their data will not be used for unauthorized secondary purposes. This is especially true when considering the ethical tensions that covered entities face when engaging AI software providers. A 2020 Journal of the American Medical Informatics Association study states, “Artificial Intelligence (AI) both promises great benefits and poses new risks for medicine. Failures in medical AI could erode public trust in healthcare.”

The study goes on to note, “lack of transparency could conceivably damage epistemic trust in the recommendations and diminish autonomy.” Establishing AI-specific use policies provides a transparent, evidence-based approach to help navigate these concerns. AI use policies must address matters like data handling protocols, privacy safeguards, operational transparency, bias mitigation, accountability structures, and contingency plans for adverse events or system failures.
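As a rough illustration, the sketch below shows how an organization might represent such a policy as a machine-readable checklist in Python, so that unaddressed areas surface before an AI tool is adopted. Every field and function name here is hypothetical and not drawn from any real standard or vendor schema:

```python
from dataclasses import dataclass

@dataclass
class AIUsePolicy:
    # Hypothetical machine-readable AI use policy; the fields mirror the
    # areas named above and do not come from any real standard or vendor.
    data_handling: str
    privacy_safeguards: str
    operational_transparency: str
    bias_mitigation: str
    accountability: str
    contingency_plan: str

    def unaddressed_areas(self) -> list[str]:
        # Surface any policy area left blank so gaps are caught before adoption.
        return [name for name, value in vars(self).items() if not value.strip()]

policy = AIUsePolicy(
    data_handling="PHI is encrypted in transit and at rest",
    privacy_safeguards="No PHI is used to train third-party models",
    operational_transparency="",  # intentionally blank to show gap detection
    bias_mitigation="Annual bias audit across demographic groups",
    accountability="A named compliance officer signs off on each deployment",
    contingency_plan="Manual review workflow if the AI system fails",
)
print(policy.unaddressed_areas())  # -> ['operational_transparency']
```

Even this toy structure makes the point: writing the policy down as discrete, checkable areas turns a vague commitment into something an organization can audit.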

 

Why trust is a currency in healthcare 

A study on trust in healthcare titled ‘Trust Building in Public Health Approaches: The Importance of a “People-Centered” Concept in Crisis Response’ notes, “Individuals find themselves in a vulnerable and dependent position concerning practitioners, decision-makers, and institutions.” Under the idea of ‘encapsulated interests,’ patients assess trustworthiness based on whether they perceive that the healthcare organization’s goals align with their own well-being; this includes patients from marginalized groups who have historically faced systemic disparities.

This alignment signals to the patient that the organization genuinely prioritizes improving health outcomes rather than simply operating for profit or bureaucratic objectives. Trust in healthcare represents the faith and confidence patients place in the providers and institutions responsible for their care. It is fundamental to effective healthcare delivery and patient satisfaction, encompassing both confidence and reliance.

The Frontiers in Health Services study notes, “Trust is understood as a way of dealing with uncertainty, and according to Luhmann, trust is an attitude which leaves room for risk-taking behavior…”

The foundation of this trust, however, lies in the fact that “Being trustworthy helps in gaining trust but does not imply trust per se.” Approaches to building trust include involving communities in decision-making, increasing transparency in healthcare costs and outcomes, and the ethical use of emerging technologies such as AI.

 

What is an AI use policy? 

The Frontiers study notes, “AI systems tend to be complex, unpredictable, lack evidence, and difficult to grasp, hence the many uncertainties and risks related to its use, e.g., patient harm, bias, and lack of privacy. Trust in AI and its trustworthiness have therefore been regarded as important aspects to address.”

An AI use policy serves as a governance tool that defines how AI systems should be integrated into clinical and administrative functions while addressing the unique risks and challenges posed by the technology, such as privacy concerns, bias, accountability, and transparency.

AI policies are crafted to balance the substantial benefits AI offers, like improved diagnostic accuracy, operational efficiency, and personalized treatment, with the ethical obligations to protect patient rights and data security. These policies help establish trust among patients, healthcare providers, and regulators by articulating clear expectations for all stakeholders, from developers to end-users.

 

The main fears providers have about AI

Experienced healthcare workers have expressed anxiety over whether their years of training might be overridden by automated systems with opaque algorithms. A study titled ‘Artificial intelligence and job performance of healthcare providers in China’ describes a “profound fear that A.I. may cause certain healthcare jobs to become redundant, which will disrupt the provider–patient relationship,” revealing concerns over role erosion and diminished professional identity.

This fear is coupled with uncertainties about how AI might reshape roles within healthcare teams and the potential for humans to become mere supervisors of AI rather than active decision-makers, a dynamic that could “negatively impact providers’ job performance… bringing them job insecurity” and reductions in motivation and effectiveness.

Providers fear unclear legal and ethical responsibilities if AI software provided by third parties makes mistakes, impacting patient safety. These concerns extend to how transparent and explainable the AI decision-making processes are, since providers often do not fully understand the algorithms, nor have control over their evolution post-deployment.

 

How Paubox’s AI use policy is a masterclass in AI transparency

Many AI models, especially those based on deep learning or large language models, operate as “black boxes,” where decision-making processes are neither visible nor understandable to end-users. Paubox’s HIPAA compliant email software tackles this head-on by ensuring its generative AI-powered inbound email security system provides clear visibility into its threat detection reasoning. The solution offers easily interpretable confidence scores and detailed explanations for flagged emails, empowering healthcare security teams to understand and trust the AI’s decisions. 

This level of explainability helps demystify AI operations for users who must make security decisions based on its outputs, thereby enhancing situational awareness and response efficiency.
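To make the idea of interpretable confidence scores concrete, here is a minimal sketch in Python of what pairing a score with human-readable reasons can look like in general. This is a hypothetical illustration of the technique, not Paubox’s actual interface, and every name in it is invented:

```python
from dataclasses import dataclass

@dataclass
class ThreatVerdict:
    # Illustrative shape of an explainable flagging decision (hypothetical,
    # not a real vendor API): a verdict plus the evidence behind it.
    flagged: bool
    confidence: float   # 0.0-1.0; how certain the model is
    reasons: list[str]  # plain-language evidence for the verdict

def review(verdict: ThreatVerdict, threshold: float = 0.8) -> str:
    # Route low-confidence flags to a human, reflecting the idea that
    # explainability supports, rather than replaces, the security team.
    if verdict.flagged and verdict.confidence < threshold:
        return "escalate to security team: " + "; ".join(verdict.reasons)
    return "quarantine" if verdict.flagged else "deliver"

verdict = ThreatVerdict(
    flagged=True,
    confidence=0.65,
    reasons=["sender domain registered 2 days ago",
             "reply-to address differs from sender"],
)
print(review(verdict))
```

The design choice the sketch captures is that a bare score is not an explanation; surfacing the reasons alongside it is what lets a security team agree or disagree with the system.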

Paubox’s AI policy also mandates strict adherence to HIPAA compliance and patient data protection, which is essential given the sensitivity of healthcare communications. Their AI operates within secure boundaries that prevent any patient data from being shared with third parties.

 

FAQs

Are there federal laws specifically for AI in healthcare?

No, but existing laws like HIPAA, the HITECH Act, and FDA regulations apply, while new AI-specific frameworks are being developed.

 

What is the role of the HITECH Act in AI use?

The HITECH Act enforces stronger data security and breach notification requirements that also affect AI-driven health systems.

 

Do healthcare providers need a Business Associate Agreement (BAA) with AI vendors?

Yes, if the vendor processes PHI, a BAA is required under HIPAA to ensure compliance.

 

What guidance has HHS provided on AI use?

HHS guidance emphasizes transparency, patient privacy, and regulatory compliance when healthcare organizations adopt AI tools.
