Paubox blog

What is the AMA AI governance framework?

Written by Gugu Ntsele | September 03, 2025

According to the AMA's recent analysis, "Nearly 70% of physicians the AMA surveyed said they used AI tools in 2024, up from 38% just a year earlier." The latest data from February 2025 confirms this momentum, with the AMA's study Physician sentiments around the use of AI in health care: motivations, opportunities, risks, and use cases revealing that "Usage of AI use cases nearly doubled (66% in 2024 vs. 38% in 2023). The dramatic drop in non-users (62% to 33%) in just one year is impressive and unusually fast for healthcare technology adoption." This integration of AI tools into medical practice prompted the American Medical Association (AMA) to develop a governance framework designed to help healthcare organizations navigate AI implementation safely and effectively.


The genesis of the framework

The AMA's AI governance framework emerged from a need identified through research and physician surveys. The organization's 2023 AI Physician Sentiment survey of more than 1,000 physicians revealed that while AI adoption was growing, concerns about liability, patient safety, and clinical validity remained barriers to implementation. The need to establish proper governance has become clear as regulations struggle to keep pace with technological advancement. According to the AMA's Augmented Intelligence Development, Deployment, and Use in Health Care report, "There is currently no whole-of-government strategy for oversight and regulation of AI," creating a gap that healthcare organizations must address independently.

Beyond adoption rates, physician attitudes were also shifting in a positive direction. The research on physician sentiments found that "A growing majority of physicians recognize AI's benefits, with 68% in 2024 reporting at least some advantage in patient care (up from 63% in 2023)," and that 35% of physicians queried reported that their enthusiasm for health AI exceeded their concerns.

The AMA's Augmented Intelligence Development, Deployment, and Use in Health Care report notes that "As of May 2024, 882 devices that FDA classifies as Artificial Intelligence/Machine Learning (AI/ML) devices have been approved for marketing," demonstrating the scale of AI integration across medical specialties.

Despite this growing enthusiasm, physician sentiment remains measured. According to the AMA's Future of Health report, "65% of physicians surveyed by the AMA indicated they see definite or some advantage to using AI tools," yet the same survey revealed that "30% of respondents are more excited than concerned, 29% are more concerned than excited, and the remaining 41% are equally concerned and excited" about AI's increased use in healthcare delivery.

The persistence of these concerns is evident in the most recent survey data, where Dr. Jesse M. Ehrenfeld, AMA immediate past president, noted that "There remain unresolved physician concerns with the design of health AI and the potential of flawed AI-enabled tools to put privacy at risk, integrate poorly with EHR systems, offer incorrect conclusions or recommendations and introduce new liability concerns."

This rapid adoption created a need for structured oversight and governance protocols that could keep pace with implementation while maintaining the standards of patient care and safety. According to a Healthcare IT News article, Dr. Margaret Lozovatsky, AMA's chief medical information officer, emphasized the timing of this initiative, stating: "Setting up an appropriate governance structure now is more important than it's ever been because we've never seen such quick rates of adoption."

The urgency becomes even more apparent when considering the pace of technological advancement. As Dr. Lozovatsky further explained, "Technology is moving very, very quickly. It's moving much faster than we're able to actually implement these tools."


Understanding the eight-step framework

The AMA's STEPS Forward Governance for Augmented Intelligence toolkit, developed in collaboration with Manatt Health, provides healthcare organizations with an eight-step approach to AI governance. The framework's terminology is fundamental to note: "The AMA uses the phrase 'augmented intelligence' (AI), which is an alternative conceptualization used across health care that focuses on artificial intelligence's assistive role, emphasizing the fact that AI enhances human intelligence rather than replaces it."

The framework outlines what it calls the "Eight STEPS to Establish AI Governance," which provides a systematic methodology for establishing responsible AI adoption practices within healthcare settings:

  1. Establish executive accountability and a governance structure - Ensuring CEO and board-level commitment with designated C-suite leadership
  2. Form a working group to detail priorities, processes, and policies - Creating interdisciplinary teams with clinical, operational, and technical expertise
  3. Assess the current state and establish priorities - Conducting inventory of existing AI tools and prioritizing future use cases
  4. Develop AI policies - Creating policies that address governance, safety, and compliance requirements
  5. Define project intake, vendor evaluation, and assessment processes - Standardizing evaluation criteria and review procedures
  6. Update standard planning and implementation processes - Adapting existing technology adoption processes for AI-specific considerations
  7. Establish an oversight and monitoring process - Creating ongoing surveillance and performance evaluation systems
  8. Support AI organizational readiness - Preparing teams through training, education, and change management


Addressing concerns and opportunities

The framework directly addresses physicians' primary concerns about AI adoption, particularly regarding liability and patient safety. The most recent AMA survey data from Physician sentiments around the use of AI in health care: motivations, opportunities, risks, and use cases reinforces these concerns, showing that "Nearly half of physicians (47%) ranked increased oversight as the number one regulatory action needed to increase trust in adopting AI tools." This finding is further supported by the AMA's Augmented Intelligence Development, Deployment, and Use in Health Care report, which confirms that oversight remains the top priority for building physician confidence in AI technologies.

The research reveals specific requirements that physicians consider critical for AI adoption. According to Physician sentiments around the use of AI in health care, "Physicians emphasize the need for a feedback loop, data privacy assurances, seamless workflow integration, and adequate training and education as critical factors for AI adoption." More specifically, the study found that "There is a designated channel for feedback should issues arise" was rated as important by 88% of physicians, while "Data privacy is assured by my own practice/hospital & EHR vendor" was considered important by 85% of respondents.

The survey further notes that physician respondents emphasized a designated feedback channel (88%), data privacy assurances (87%), and EHR integration (84%) as critical factors for adoption, and that "data privacy assurances (87%), not being held liable for AI model errors (87%) and medical liability coverage (86%) were top attributes for their buy-in."

The AMA's research also indicates where physicians see AI's greatest potential. The Physician sentiments around the use of AI in health care study found that "Most physicians (57%) view addressing administrative burden through automation as the biggest area of opportunity for AI." In other words, physicians see the technology's greatest promise not necessarily in clinical decision-making, but in reducing the administrative tasks that contribute to physician burnout and detract from patient care time.

The framework also emphasizes the importance of physician involvement in AI adoption decisions. The AMA's Augmented Intelligence Development, Deployment, and Use in Health Care report reveals that "86% of surveyed physicians indicated they would like to be either responsible or consulted in the process" of AI tool implementation. This desire for physician engagement reinforces the framework's emphasis on clinical leadership throughout the AI lifecycle.

Dr. Margaret Lozovatsky, AMA's chief medical information officer, emphasized that "physicians must be full partners throughout the AI lifecycle, from design and governance to integration and oversight, to ensure these tools are clinically valid, ethically sound and aligned with the standard of care and the integrity of the patient-physician relationship."

As stated in the Augmented Intelligence Development, Deployment, and Use in Health Care report, "Establishing AI governance is important to ensure AI technologies are implemented into care settings in a safe, ethical, and responsible manner." It acknowledges specific risks associated with AI implementation, including the potential to worsen existing biases, increase privacy risks, introduce new liability issues, and generate convincing but incorrect clinical recommendations. By providing structured approaches to identifying and mitigating these risks, the framework helps organizations implement AI safely and effectively.

As Dr. Ehrenfeld noted, "Increased oversight ranked as the top regulatory action needed to increase physician confidence and adoption of AI", making the AMA's governance framework timely and relevant to addressing these physician concerns.


FAQs

How does the AMA’s framework differ from federal AI regulations?

It fills gaps by offering practical governance steps since no unified government oversight strategy currently exists.


Who can use the AMA AI Governance Framework outside of hospitals?

Any healthcare organization, including clinics and private practices, can adapt the framework to guide AI adoption.


How does the AMA define "augmented intelligence" compared to traditional AI?

The AMA emphasizes AI as a tool that supports rather than replaces physicians in decision-making.


Does the framework recommend specific AI tools or vendors?

No, it provides governance principles rather than endorsing particular products.


How can smaller practices with limited budgets implement the framework?

They can scale the eight steps to their resources by focusing on oversight, training, and policy basics first.