ChatGPT launches health feature, raising new questions about data safety

On January 7, 2026, OpenAI announced the introduction of ChatGPT Health, a new section within ChatGPT focused specifically on health and wellness interactions.

 

What happened

The company described the feature as a way for users to organize health-related conversations separately from other chats and, if they choose, connect medical records and wellness apps such as Apple Health and MyFitnessPal to provide additional context for responses.

According to OpenAI, the system is intended to help users understand information, prepare for medical appointments, and track general patterns in their health, while not serving as a tool for diagnosis or treatment.

OpenAI also stated that physicians were consulted during development and that the system is evaluated using internal clinical-quality benchmarks. Access to ChatGPT Health is being introduced gradually through a waitlist, with initial availability limited by region and account type.

See also: How ChatGPT can support HIPAA compliant healthcare communication

 

Is ChatGPT Health HIPAA compliant?

ChatGPT Health, as a consumer-oriented product, is not inherently HIPAA compliant for handling protected health information (PHI). Section 5.4 of OpenAI's service terms states that customers may not use OpenAI services to create, receive, maintain, transmit, or otherwise process PHI unless they have signed a Healthcare Addendum.

The Healthcare Addendum includes the Business Associate Agreement (BAA) required under HIPAA. Without such an addendum, processing PHI through any service not specifically designed or contracted for that purpose is prohibited.

 

What was said

The service agreement expressly states, “Customer agrees not to use the Services to create, receive, maintain, transmit, or otherwise process Protected Health Information, unless it has signed the Healthcare Addendum.”

Andrew Crawford of the Center for Democracy & Technology noted in a statement to the BBC, “Since it's up to each company to set the rules for how health data is collected, used, shared, and stored, inadequate data protections and policies can put sensitive health information in real danger.”

 

FAQs

Is all AI HIPAA compliant?

No. AI systems are not automatically HIPAA compliant; compliance depends on whether the vendor will sign a BAA and whether the system implements the safeguards HIPAA requires.

 

Why does the way AI models are trained pose a risk to PHI?

Training becomes a risk when sensitive data is stored, reused, or learned from without the safeguards, consent, and legal protections required under HIPAA.

 

What makes a software platform HIPAA compliant?

A platform is HIPAA compliant when its vendor is willing to sign a BAA and the platform demonstrates compliance with HIPAA's Privacy, Security, and Breach Notification Rules.
