Pennsylvania filed the first US enforcement action of its kind against an AI chatbot company, alleging the platform allowed an AI persona to impersonate a licensed psychiatrist and offer medical diagnoses to users.
What happened
On May 1, 2026, the Commonwealth of Pennsylvania, acting on behalf of the state's Board of Medicine and Department of State, filed suit against Character Technologies Inc., the company behind Character.AI. The state alleges the platform enabled the "unlawful practice of medicine and surgery" by allowing an AI chatbot to pose as a licensed medical professional.
A Pennsylvania investigator created an account and interacted with an AI persona named "Emilie," whose profile stated "Doctor of psychiatry. You are her patient." The chatbot allegedly claimed to have attended medical school, stated that conducting a depression assessment was "within her remit as a Doctor," and provided a Pennsylvania medical license number, which investigators confirmed was invalid.
Pennsylvania's Department of State AI Task Force, launched earlier in 2026, led the investigation.
The backstory
Pennsylvania laid the groundwork for this enforcement action in February 2026, when Governor Josh Shapiro launched a formal complaint and reporting process specifically targeting AI-powered chatbots. At the time, the governor's office signaled it would coordinate with the Pennsylvania attorney general to strengthen consumer protections around AI companion bots.
California had already moved in a similar direction in January 2025, when the state's attorney general released an advisory explicitly stating that AI cannot practice medicine in California. California law bans corporations and other artificial legal entities from practicing medicine and limits licensure to human medical professionals.
Going deeper
Character.AI allows users, not just the company itself, to create and deploy custom AI "characters." This means the company may face scrutiny not only for its own content but also for user-generated personas that claim professional credentials.
On March 18, 2026, California Rep. Kevin Mullin introduced the CHATBOT Act, which would prohibit companies from deploying AI chatbots that imply or indicate that the bot holds a license in a covered profession, including healthcare, legal services, accounting, tax, payroll, finance, and insurance. The bill would empower the FTC to enforce violations and create a private right of action for individuals.
What was said
Character.AI's spokesperson stated its characters are "fictional and intended for entertainment and roleplaying," and that the platform includes "prominent disclaimers in every chat."
Why it matters
When a bot behaves like a doctor, conducting assessments, citing credentials, and providing a license number, users may believe they are receiving professional care. A disclaimer buried in a chat interface does not necessarily undo that impression.
Any covered entity deploying AI-powered tools that touch clinical conversations now faces a clearer legal signal that regulators will treat credential misrepresentation as a licensure violation, not just a content moderation failure. Pennsylvania's action, the first of its kind in the U.S., sets a precedent that other state licensing boards can follow without needing new legislation.
The CHATBOT Act, if passed, would extend this exposure federally, creating FTC enforcement authority and a private right of action across healthcare, legal, financial, and other regulated sectors.
The bottom line
Pennsylvania's lawsuit is a warning to every company deploying AI in contexts where users could mistake a bot for a credentialed professional. The question regulators are now asking is not just what a chatbot says, but how it presents itself and whether disclaimers are enough when the bot's behavior says otherwise. Healthcare organizations and AI vendors should audit how their tools are labeled, what outputs they generate, and whether escalation to a qualified human professional is built into the workflow.
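For teams tasked with that kind of audit, one practical starting point is an output guardrail that scans a chatbot's replies for credential claims before they reach the user and routes flagged conversations to a human. The Python sketch below is a minimal, hypothetical illustration of the idea; the pattern list, the guard_reply function, and the escalate_to_human hook are assumptions for this example, not any vendor's actual API, and a production system would need a far broader, compliance-reviewed rule set.

```python
import re

# Hypothetical patterns suggesting the bot is asserting professional credentials.
CREDENTIAL_PATTERNS = [
    r"\b(?:I am|I'm)\s+(?:a\s+)?(?:licensed|board[- ]certified)\b",
    r"\bmedical license (?:number|no\.?)\b",
    r"\bas (?:a|your) (?:doctor|psychiatrist|physician|therapist)\b",
    r"\bI attended medical school\b",
]

def flags_credential_claim(reply: str) -> bool:
    """Return True if the reply appears to claim professional credentials."""
    return any(re.search(p, reply, re.IGNORECASE) for p in CREDENTIAL_PATTERNS)

def escalate_to_human(reply: str) -> None:
    """Placeholder handoff: log the flagged output for compliance review.

    In a real deployment this might open a ticket, page a clinician,
    or hand the session to live support.
    """
    print(f"[ESCALATION] Flagged output for review: {reply!r}")

def guard_reply(reply: str) -> str:
    """Block replies that assert credentials and redirect the user."""
    if flags_credential_claim(reply):
        escalate_to_human(reply)
        return ("I'm an AI assistant, not a licensed professional. "
                "Let me connect you with a qualified clinician.")
    return reply
```

In this sketch the filter sits between the model and the user, so the compliance posture does not depend on the model never misbehaving; even if the underlying model claims a license, the claim is intercepted, logged, and replaced with an explicit referral to a human professional.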
FAQs
Are AI chatbot disclaimers legally protective for companies?
Disclaimers help but do not automatically shield a company from liability if the chatbot's actual behavior contradicts them.
Does this only affect mental health chatbots?
No, the regulatory scrutiny extends to any AI tool that implies professional credentials in healthcare, law, finance, or other licensed fields.
Could a healthcare organization be liable if it deploys a third-party AI chatbot that misrepresents itself to patients?
Yes, organizations that deploy patient-facing AI tools may share liability if those tools imply professional credentials or offer clinical guidance without proper oversight.
Does HIPAA have anything to say about AI chatbots collecting patient information during interactions?
HIPAA's privacy and security rules apply to any tool that handles protected health information, regardless of whether that tool is human or AI-driven.