The American Medical Association released an eight-step governance framework toolkit to help healthcare systems establish accountability, oversight, and training requirements for artificial intelligence implementation after physician AI usage jumped dramatically in one year.
The toolkit guides healthcare systems in establishing governance frameworks for implementing and scaling artificial intelligence systems, an initiative the AMA developed after documenting a dramatic increase in physicians' AI usage since 2023. The STEPS Forward Governance for Augmented Intelligence toolkit, developed with support from Manatt Health, helps provider organizations identify, assess, and prioritize AI usage risks to protect patient safety and care equity. It also provides resources for evaluating existing policies and includes a downloadable model policy that organizations can adapt to their own governance structure, roles, responsibilities, and processes.
The toolkit structures responsible AI adoption around eight pillars.
The toolkit addresses the benefits and risks of AI and machine learning deployments, including liability and patient safety concerns. The AMA developed its recommendations on large language models, generative pre-trained transformers, and other sources of AI-generated medical advice or content after studying the unforeseen consequences of these technologies.
Dr. Margaret Lozovatsky, AMA's chief medical information officer and vice president of digital health innovations, stated that "healthcare AI technology is evolving faster than hospitals can implement tools" and stressed the importance of governance.
Lozovatsky told Healthcare IT News that "Physicians must be full partners throughout the AI lifecycle, from design and governance to integration and oversight, to ensure these tools are clinically valid, ethically sound and aligned with the standard of care and the integrity of the patient-physician relationship."
She explained concerns about AI's potential to "worsen bias, increase privacy risks, introduce new liability issues and offer seemingly convincing yet ultimately incorrect conclusions or recommendations that could affect patient care."
Lozovatsky emphasized that "Setting up an appropriate governance structure now is more important than it's ever been because we've never seen such quick rates of adoption."
AMA's physician surveys underpin the toolkit's recommendations. The surveys asked physicians about a range of AI use cases, from automation of insurance pre-authorization and documentation to patient-facing chatbots and predictive analytics. AMA positions clinical experts as best suited to determine whether AI applications meet quality, appropriateness, and clinical validity standards, and it advises organizations to communicate to clinicians and patients how AI-enabled systems directly affect medical decision-making and treatment recommendations at the point of care.
This governance framework addresses a gap as healthcare experiences unprecedented AI adoption rates. The increase from 38% to 70% physician AI usage in one year represents unusually fast healthcare technology adoption, creating a need for oversight structures. Without proper governance, healthcare organizations risk liability issues, patient safety concerns, and the perpetuation of bias in AI systems. The framework specifically tackles physicians' primary concern about potential liability for AI that performs poorly, while ensuring AI tools support rather than disrupt clinical workflows and maintain care quality standards.
Healthcare organizations cannot afford to delay AI governance implementation as adoption accelerates. AMA's framework provides a practical roadmap for establishing oversight before AI integration outpaces safety measures. Organizations should download and customize AMA's model policy to ensure AI deployment aligns with patient safety, clinical validity, and physician accountability standards.
Several common questions about the framework have clear answers. It is designed to be scalable for organizations of all sizes. It complements existing laws by offering a governance structure rather than regulatory mandates. It provides guidance both for organizations new to AI and for those refining existing systems. It emphasizes transparency but leaves specific consent protocols to organizational policy. And it encourages continuous oversight and monitoring aligned with patient safety outcomes.