
Why healthcare may end up with a 50-state AI rulebook


Without comprehensive federal legislation, state governments have rushed to write their own AI rules, producing dozens of new laws across the country, both general and healthcare-specific. States now require measures such as disclosing AI use to patients, keeping a human in the loop on AI-driven decisions, and protecting data, but these requirements vary widely from one state to the next. The result is a patchwork of differing definitions of risk, consent, and accountability. As Craig Konnoth notes in the opening lines of AI and data protection law in health, “Privacy is a key issue in AI regulation, especially in a sensitive area such as healthcare. AI presents a range of questions under HIPAA.”

HIPAA sets only a minimum privacy standard for covered health data. It does not address many AI-specific issues, such as bias, explainability, or data that is not protected health information (PHI). Healthcare organizations must therefore track AI rules in every state they touch, even beyond HIPAA’s scope. Alongside HIPAA, other federal laws also apply, such as the FTC Act, which prohibits deceptive claims about AI, and civil rights laws, which protect people from algorithmic discrimination.


Why states are setting the pace

With little federal action, states have become the driving force behind U.S. AI regulation. According to the article What’s the state of healthcare AI regulation?, as of 2025, lawmakers in 47 states had introduced over 250 AI-related bills (including many targeting healthcare), and 33 of those became law in 21 states. State legislatures often focus on immediate patient-safety concerns: restricting AI therapy chatbots, requiring human oversight, and demanding transparency about AI use.

As the article noted, “Health AI regulation has primarily come from states, as the federal government has largely taken an antiregulatory approach.” Even federal advocates concede this reality. In December 2025, President Trump signed an Executive Order instructing agencies to seek a single national standard, precisely because Congress had not enacted legislation to provide guardrails on AI.

Policy experts warn that this dynamic yields a fragmented system. A chapter from the Research Handbook on Health, AI and the Law explains that “governance responsibilities and policy guidance are distributed across federal and, increasingly, state agencies.” As the National Academies has noted, HIPAA’s Privacy Rule is only a baseline; states “build” on it with stricter rules when federal law is silent.

Data-privacy experts in the chapter warn that “as more U.S. states enact their own consumer privacy laws, absent an overarching federal one, there is potential to further splinter an already fragmented system of data protections.” In short, healthcare organizations can no longer assume HIPAA alone will govern AI; instead, they must follow a rapidly evolving, state-driven patchwork.


When AI rules differ by state

Variations among state AI laws create headaches for healthcare providers and vendors. For example, imagine a telemedicine platform operating in multiple states: one state might require explicit patient consent before any AI-derived clinical advice, while another might impose only a disclosure requirement. A single AI tool could thus face contradictory mandates. In practice, industry leaders warn that this will be chaotic: one consultant observed that the proliferation of state AI bills has produced a confusing patchwork of regulation that only a single federal standard could untangle. Similarly, the previously mentioned article What’s the state of healthcare AI regulation? warned that failing to harmonize across states will likely create “lots of contradictory standards…similar to patient health data,” where the absence of nationwide rules forced companies to juggle each state’s demands.

News reports confirm these concerns. Fox News recently observed that “no single federal law requires broad AI disclosure in healthcare. Instead, a growing patchwork of state laws is filling that gap.” A Fierce Healthcare article likewise notes that innovators are “struggling with the ever-increasing patchwork of state AI laws” as they try to comply nationwide. When rules differ, organizations face compliance burdens at multiple levels: they must tailor AI governance to the strictest state(s), update consent and notice processes by jurisdiction, and prepare for each state’s enforcement regime.


Why HIPAA is not enough

HIPAA was a landmark in 1996, but it was not designed for AI. In practice, HIPAA provides only a baseline privacy regime for protected health information (PHI) handled by covered entities and business associates. Many AI applications now fall outside HIPAA’s core scope. For example, patient-facing chatbots or consumer apps that collect health data may escape HIPAA entirely if the companies behind them are not covered entities or business associates.

As the American Hospital Association noted, “HIPAA provides sound foundational standards for privacy, security, and breach notification. AI systems rely on large data sets to maximize their predictive power.” The issue is that healthcare AI increasingly depends on data movement, vendor access, and consumer-facing tools that do not always fit neatly inside HIPAA’s covered entity and business associate framework.

Even when HIPAA does apply, it does not mandate new AI-specific protections. As a MedCityNews article puts it, “HIPAA was built for 1990s healthcare to enable health insurance portability, administrative simplification and patient data protection. It focuses on records, not intelligence and was designed for humans, not machines, and therefore contains outdated assumptions about data flows…”


How HIPAA still governs software

HIPAA still governs software when that software creates, receives, maintains, or transmits electronic PHI (ePHI) for a covered entity or business associate, so the legal question is not whether the tool uses AI, but whether it touches regulated health data in a regulated relationship. HHS OCR’s cloud guidance makes that point directly: covered entities and business associates may use cloud services to store or process ePHI only when they have a HIPAA-compliant business associate agreement and otherwise comply with the HIPAA Rules.
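The trigger described above can be sketched as a simple decision rule. This is an illustrative sketch only, not legal advice or a real compliance library: the class and function names below are hypothetical, and real HIPAA applicability analysis involves far more nuance than two booleans.

```python
# Illustrative sketch (not legal advice): HIPAA turns on the data and the
# relationship, not on whether the software uses AI. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class SoftwareTool:
    handles_ephi: bool        # creates, receives, maintains, or transmits ePHI
    for_covered_entity: bool  # acting for a covered entity or business associate
    baa_in_place: bool        # signed business associate agreement
    uses_ai: bool             # irrelevant to the HIPAA question; shown for contrast

def hipaa_applies(tool: SoftwareTool) -> bool:
    """HIPAA attaches when regulated data meets a regulated relationship."""
    return tool.handles_ephi and tool.for_covered_entity

def compliant_to_deploy(tool: SoftwareTool) -> bool:
    """A vendor processing ePHI for a covered entity needs a BAA in place."""
    return not hipaa_applies(tool) or tool.baa_in_place

# An AI tool handling ePHI for a hospital without a BAA is not deployable:
scribe = SoftwareTool(handles_ephi=True, for_covered_entity=True,
                      baa_in_place=False, uses_ai=True)
print(hipaa_applies(scribe), compliant_to_deploy(scribe))  # True False
```

Note that `uses_ai` never appears in either check: swapping an AI component into an email platform or EHR integration changes the risk profile, not the legal trigger.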

That framework matters for modern healthcare software because AI, email platforms, cloud tools, APIs, and vendor systems often sit inside the same data pathway. Paubox’s generative AI email security offering shows the practical version of this problem: its Inbound Email Security uses large language models and vector databases to analyze the full context of incoming messages, including nuanced threats like invoice scams, brand impersonation, and domain spoofing. HIPAA does not ban that kind of software innovation, but it requires healthcare organizations to manage the privacy, security, access, vendor, and breach risks around ePHI.

Paubox’s 2026 Healthcare Email Security Report found 170 healthcare email-related breaches in 2025, with 53% involving Microsoft 365 and 74% of breached domains showing ineffective DMARC protection, reinforcing why software governance still matters after implementation. Paubox’s 2025 report also found that only 1.1% of healthcare organizations had a low-risk email security posture, showing that compliance cannot be treated as a one-time software purchase.
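As a rough illustration of what “ineffective DMARC protection” means: a domain publishes a DMARC TXT record at `_dmarc.<domain>`, and receivers only act on spoofed mail when that record requests enforcement (`p=quarantine` or `p=reject`); a `p=none` policy, or no record at all, is monitoring only. The helper names below are hypothetical, and the record strings are examples; a real check would fetch the TXT record over DNS.

```python
# Minimal sketch of evaluating a DMARC TXT record's enforcement policy.
# In production you would fetch the record at _dmarc.<domain> via DNS;
# here the record strings are supplied directly as examples.

def parse_dmarc(record):
    """Parse a DMARC TXT record into a dict of tag -> value."""
    if not record:
        return {}
    return {k.strip(): v.strip()
            for k, v in (tag.split("=", 1)
                         for tag in record.split(";") if "=" in tag)}

def dmarc_is_effective(record):
    """Enforcement requires a valid record with p=quarantine or p=reject."""
    tags = parse_dmarc(record)
    return tags.get("v") == "DMARC1" and tags.get("p") in ("quarantine", "reject")

print(dmarc_is_effective("v=DMARC1; p=none; rua=mailto:reports@example.com"))    # False
print(dmarc_is_effective("v=DMARC1; p=reject; rua=mailto:reports@example.com"))  # True
```

By this standard, a domain in monitoring mode counts as unprotected, which is how a breached domain can technically “have DMARC” and still offer no defense against spoofing.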

See also: HIPAA Compliant Email: The Definitive Guide (2026 Update)


FAQs

Is there one federal AI law in the United States?

No single, comprehensive federal AI law governs all AI systems across all sectors. Federal AI regulation currently works through a layered model consisting of executive orders, agency guidance, existing consumer protection laws, civil rights laws, privacy laws, health IT rules, FDA oversight, and sector-specific enforcement.


What does NIST do for AI regulation?

The National Institute of Standards and Technology does not usually act as an enforcement agency. Instead, it creates voluntary technical frameworks that organizations can use to manage AI risk.


How does the FTC regulate AI?

The Federal Trade Commission regulates AI through its authority over unfair or deceptive business practices and unfair methods of competition.
