Sutter and MemorialCare face lawsuit over AI recording of patient visits
Mara Ellis
April 15, 2026
Three California patients filed a proposed class action in the U.S. District Court for the Northern District of California against Sutter Health, Memorial Health Services, and MemorialCare Medical Foundation over the alleged use of Abridge’s ambient clinical documentation technology during medical visits.
What happened
According to the complaint, the tool recorded confidential patient-clinician conversations, transmitted the audio to outside systems for processing, and generated documentation without first obtaining meaningful informed consent from patients. The plaintiffs say they sought care within the past six months, discussed sensitive health information under circumstances where they reasonably expected privacy, and were not clearly told that their conversations would be recorded, sent outside the clinical setting, or processed by third-party systems.
On that basis, the complaint alleges violations of the California Invasion of Privacy Act, the Confidentiality of Medical Information Act, California’s Unfair Competition Law, the federal Wiretap Act, and common-law invasion of privacy. Court records list the filing date as April 7 or 8, 2026, and the case is proceeding as Washington et al. v. Sutter Health et al., Case No. 4:26-cv-03012. Separate Abridge press releases show that Sutter Health announced an Abridge rollout in March 2024 and that MemorialCare announced its partnership in April 2024.
What was said
According to the complaint, “Defendants implemented the AI recording system without obtaining meaningful, informed consent from patients prior to recording and transmitting their medical conversations.”
Why it matters
California is becoming the clearest test case for how fast-moving healthcare AI collides with older privacy law. The state already has some health-AI guardrails. AB 3030, which took effect January 1, 2025, requires health facilities, clinics, and physician offices to disclose when generative AI is used to generate patient clinical communications and to tell patients how to reach a human. AB 489, signed in 2025, targets AI systems that misrepresent themselves as licensed health professionals. California lawmakers also acknowledged in a 2025 Assembly hearing paper that ambient scribes were already being tested and deployed across many California health systems, and that the resulting data flows raise privacy and consent concerns.
The gap is that California’s newer AI laws focus on disclosure and deceptive representation rather than on exam-room audio capture itself, so lawsuits are leaning on older statutes, the California Invasion of Privacy Act and the Confidentiality of Medical Information Act, to fill it. The earlier Sharp HealthCare lawsuit in San Diego followed a similar pattern, alleging AI-based recording without patient consent and even incorrect chart language stating that consent had been obtained. That broader governance gap echoes Paubox’s warning that 95% of organizations report staff already using AI tools in email, while 16% say compliance was never consulted before AI email tools were enabled.
See also: HIPAA Compliant Email: The Definitive Guide (2026 Update)
FAQs
Is there one federal AI law that governs healthcare right now?
Not really. HHS has an AI strategy, and ASTP/ONC already imposes AI-related transparency requirements through the HTI-1 final rule for certain certified health IT, while the FTC continues to use its existing consumer protection authority against deceptive AI practices.
Does HIPAA ban the use of AI in healthcare?
No. HIPAA is technology-neutral.
Does federal healthcare IT regulation already touch AI transparency?
Yes, but in a narrower way than a full AI law. The HTI-1 final rule created algorithm transparency requirements for AI and other predictive algorithms that are part of certified health IT.
Subscribe to Paubox Weekly
Every Friday we bring you the most important news from Paubox. Our aim is to make you smarter, faster.
