OpenAI ad policy update puts healthcare AI governance back in focus
Mara Ellis
May 4, 2026
OpenAI updated its US privacy policy on April 30, 2026, and the change sits inside a broader move to test advertising in ChatGPT for logged-in adult users on the Free and Go tiers.
What happened
OpenAI’s US privacy policy now says, “We may receive information from advertisers and other data partners, which we use for purposes including to help us measure and improve the effectiveness of ads shown to Free and Go users on our Services. For example, we could receive information about purchases you make from these advertisers.”
OpenAI’s Help Center explains that ads may appear for Free and Go users, while Plus, Pro, Business, Enterprise, and Edu accounts will not have ads during the test. OpenAI says it does not share ChatGPT conversations, chat history, memories, or personal details with advertisers, and that advertisers receive only aggregated, non-identifying ad-performance data such as total views or clicks.
Going deeper
OpenAI’s policy update can be read alongside its Advertise with ChatGPT offering, which describes the company’s interest in advertising within ChatGPT. OpenAI says ads for Free and Go users may be based on the current conversation, and that it may receive advertiser or data-partner information, including purchase information, to measure ad effectiveness. At the same time, it maintains that advertisers do not receive chats, chat history, memories, or personal details.
For healthcare organizations, the broader context is that AI governance is moving quickly toward disclosure, documentation, bias control, and data-use accountability. At the federal level, ONC’s HTI-1 Final Rule creates transparency requirements for AI and predictive algorithms in certified health IT, and HHS’s Section 1557 final rule requires covered entities to identify and mitigate discrimination risks from tools that support patient care decisions.
At the state level, Colorado’s SB24-205 Consumer Protections for Artificial Intelligence requires risk management, impact assessments, annual reviews, consumer notices, and appeal opportunities for high-risk AI systems; California’s AB 3030 Health Care Services: Artificial Intelligence requires disclaimers and human contact instructions when generative AI is used for patient clinical communications; and Utah’s SB0149 Artificial Intelligence Amendments creates an AI disclosure and liability framework.
Why it matters
OpenAI states outright that advertisers do not receive chats, chat history, memories, or personal details, and that ads are not eligible to appear near sensitive or regulated topics such as health or mental health. Still, the risk for healthcare organizations comes from staff behavior.
Paubox’s shadow AI report found that 95% of organizations report staff are already using AI tools in email, 62% have observed staff experimenting with ChatGPT or similar tools even when unsanctioned, 16% say compliance was never consulted before AI email tools were enabled, and 75% believe employees assume tools like Microsoft Copilot are automatically HIPAA compliant.
Even if OpenAI says advertisers do not see chats, healthcare entities still need to control whether employees paste protected health information, patient context, payer details, internal emails, or vendor communications into consumer AI tools that are not approved for that workflow. The journal article Generative AI in Medical Practice: In-Depth Exploration of Privacy and Security Challenges supports that point, warning that “adequate safeguards are needed to prevent breaches of PHI and to maintain public trust.”
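As an illustrative sketch only (not a Paubox product or a complete DLP policy), an organization could screen text for PHI-like patterns before it is pasted into an unsanctioned AI tool. The pattern set and function name below are hypothetical and would need to be far broader in practice:

```python
import re

# Hypothetical PHI-like patterns; a real data-loss-prevention policy
# would cover many more identifiers (names, addresses, payer IDs, etc.).
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def flag_phi(text: str) -> list[str]:
    """Return the names of PHI-like patterns found in the text."""
    return [name for name, pattern in PHI_PATTERNS.items()
            if pattern.search(text)]

print(flag_phi("Patient MRN: 00123456, DOB 04/07/1961"))  # ['mrn', 'dob']
```

A check like this is only one layer; policy, training, and approved tooling with a BAA in place remain the primary controls.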
See also: HIPAA Compliant Email: The Definitive Guide (2026 Update)
FAQs
Does an AI vendor need a business associate agreement?
An AI vendor generally needs a business associate agreement when it handles PHI on behalf of a covered entity or business associate. A public or consumer AI tool without a BAA should not be treated as HIPAA compliant for PHI use.
Does a healthcare AI tool need FDA approval?
Not always. FDA oversight generally applies when software functions as a medical device or supports diagnosis, treatment, or clinical decision-making in ways that meet FDA criteria.
Why is advertising relevant to healthcare AI governance?
Advertising matters because it can introduce new data-sharing, personalization, measurement, and partner relationships into an AI vendor’s ecosystem, each of which expands the data flows a governance program must assess.
