AMA releases 8-step AI governance toolkit for healthcare providers
The American Medical Association has released an eight-step governance toolkit to help healthcare systems establish accountability, oversight, and training requirements for artificial intelligence implementation, following a dramatic one-year jump in physician AI use.


What happened

The American Medical Association created a new toolkit to guide healthcare systems in establishing governance frameworks for implementing and scaling artificial intelligence systems. AMA developed this initiative after studying a dramatic increase in physicians' AI usage since 2023. The STEPS Forward Governance for Augmented Intelligence toolkit, developed with support from Manatt Health, helps provider organizations identify, assess, and prioritize AI usage risks to ensure patient safety and care equity. The toolkit provides resources to help providers evaluate existing policies and includes a downloadable model policy that organizations can modify to align with their governance structure, roles, responsibilities, and processes.


Going deeper

AMA's eight pillars of responsible AI adoption include:

  • Establishing executive accountability and structure
  • Forming a working group to detail priorities, processes and policies
  • Assessing current policies
  • Developing AI policies
  • Defining project intake, vendor evaluation and assessment processes
  • Updating standard planning and implementation processes
  • Establishing an oversight and monitoring process
  • Supporting AI organizational readiness

The toolkit addresses benefits and risks of AI and machine learning deployments, including liability and patient safety concerns. AMA developed recommendations on large language models, generative pretrained transformers, and other AI-generated medical advice or content after studying unforeseen consequences of these technologies.


What was said

Dr. Margaret Lozovatsky, AMA's chief medical information officer and vice president of digital health innovations, stated that "healthcare AI technology is evolving faster than hospitals can implement tools" and stressed the importance of governance.

Lozovatsky told Healthcare IT News that "Physicians must be full partners throughout the AI lifecycle, from design and governance to integration and oversight, to ensure these tools are clinically valid, ethically sound and aligned with the standard of care and the integrity of the patient-physician relationship."

She explained concerns about AI's potential to "worsen bias, increase privacy risks, introduce new liability issues and offer seemingly convincing yet ultimately incorrect conclusions or recommendations that could affect patient care."

Lozovatsky emphasized that "Setting up an appropriate governance structure now is more important than it's ever been because we've never seen such quick rates of adoption."


By the numbers

According to AMA's physician surveys:

  • Nearly 70% of physicians reported using AI systems in 2024, up from 38% in 2023
  • The share of non-users fell from 62% to 33% in a single year
  • Use of AI across surveyed use cases nearly doubled, rising to 66% from 38% in 2023
  • Nearly half (47%) of physicians ranked increased oversight as the No. 1 regulatory action needed to increase trust in adopting AI tools
  • AMA's 2023 survey included more than 1,000 physicians

In the know

AMA positions clinical experts as best suited to determine whether AI applications meet quality, appropriateness, and clinical validity standards. Organizations must communicate to clinicians and patients how AI-enabled systems directly impact medical decision-making and treatment recommendations at the point of care. The survey asked physicians about various AI use cases, from automation of insurance pre-authorization and documentation to patient-facing chatbots and predictive analytics.


Why it matters

This governance framework addresses a gap as healthcare experiences unprecedented AI adoption rates. The increase from 38% to 70% physician AI usage in one year represents unusually fast healthcare technology adoption, creating a need for oversight structures. Without proper governance, healthcare organizations risk liability issues, patient safety concerns, and the perpetuation of bias in AI systems. The framework specifically tackles physicians' primary concern about potential liability for AI that performs poorly, while ensuring AI tools support rather than disrupt clinical workflows and maintain care quality standards.


The bottom line

Healthcare organizations cannot afford to delay AI governance implementation as adoption accelerates. AMA's framework provides a practical roadmap for establishing oversight before AI integration outpaces safety measures. Organizations should download and customize AMA's model policy to ensure AI deployment aligns with patient safety, clinical validity, and physician accountability standards.


FAQs

Does the toolkit apply to small practices as well as large hospital systems?

Yes, the framework is designed to be scalable for organizations of all sizes.


How does this toolkit interact with federal or state AI regulations already in place?

It complements existing laws by offering a governance structure rather than regulatory mandates.


Can the toolkit help organizations already using AI, or only those just beginning adoption?

It provides guidance both for organizations new to AI and those refining existing systems.


Does the toolkit address patient consent for AI use in care decisions?

It emphasizes transparency but leaves specific consent protocols to organizational policy.


How does AMA suggest healthcare systems measure the success of AI governance?

The framework encourages continuous oversight and monitoring aligned with patient safety outcomes.
