ChatGPT and related large language models (LLMs) can be useful for summarizing medical records, translating jargon into plain language, automating administrative tasks, and improving patient engagement. However, healthcare in the U.S. is tightly governed by the Health Insurance Portability and Accountability Act (HIPAA), which mandates robust protection of protected health information (PHI).
ChatGPT as-is (e.g., chat.openai.com) is not HIPAA compliant, since PHI processed via public services may be logged, retained, and used for model training. However, as David Holt, owner of Holt Law LLC, said, “Even though the standard versions of ChatGPT aren’t HIPAA compliant, there are still ways for healthcare organizations to use it safely. One way is by only using it with de-identified data—meaning all personal information is removed so it no longer counts as protected health information under HIPAA.”
Under HIPAA’s Privacy Rule, any third-party vendor (business associate) processing PHI must enter into a business associate agreement (BAA) with the healthcare provider, or covered entity, that legally commits the vendor to HIPAA safeguards. OpenAI does not automatically sign a BAA for ChatGPT users, though it may do so in limited circumstances (e.g., enterprise/API clients requesting a BAA).
Go deeper: Will OpenAI sign a BAA? (2025 update)
HIPAA’s Security Rule requires administrative, physical, and technical safeguards, typically implemented through encryption in transit and at rest, multi-factor authentication (MFA), access controls, audit logging, and periodic security risk assessments (SRAs). ChatGPT, in its standard consumer version, is not HIPAA compliant and should not be used to process PHI. However, HIPAA compliance may be achievable when using ChatGPT Enterprise or OpenAI’s API in a properly secured environment and under a BAA, with appropriate safeguards configured by the implementing organization.
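To make that last point concrete, the sketch below shows one way an audit-logged call to the OpenAI API could look inside such a secured environment. It is a minimal illustration, assuming a BAA is in place, that the API key is injected from the environment or a secrets manager, and that the model name and the `summarize_document` helper are placeholders invented for this example.

```python
import logging
import os
from datetime import datetime, timezone

from openai import OpenAI  # official OpenAI Python SDK (v1+)

# HIPAA's Security Rule expects an auditable access trail, so every model
# call is recorded with who made it, when, and for what purpose.
audit_log = logging.getLogger("phi_audit")
logging.basicConfig(level=logging.INFO)

# The API key is injected from the environment (ideally a secrets manager),
# never hard-coded in source files or shared notebooks.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def summarize_document(text: str, user_id: str, purpose: str) -> str:
    """Send already de-identified text to the model and log the access."""
    audit_log.info(
        "model_call user=%s purpose=%s at=%s chars=%d",
        user_id, purpose, datetime.now(timezone.utc).isoformat(), len(text),
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model your agreement covers
        messages=[
            {"role": "system", "content": "Summarize the following clinical note."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content
```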
HIPAA permits data to be shared when PHI has been properly de-identified, using either the Expert Determination method or the Safe Harbor method (removal of 18 specified identifiers). ChatGPT can help de-identify free-text medical notes: the study DeID-GPT: Zero-shot Medical Text De-Identification by GPT-4 reports that GPT-4 achieves high accuracy in masking private information while preserving structure and meaning.
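The DeID-GPT pipeline itself is not reproduced here, but the general zero-shot masking idea can be sketched roughly as follows, with an invented instruction prompt and placeholder model name. Note that such a step still processes PHI and should only run inside a BAA-covered, secured deployment.

```python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Illustrative instruction; the original paper's prompts differ.
DEID_INSTRUCTION = (
    "Replace every personal identifier in the note below with a bracketed "
    "category tag such as [NAME], [DATE], [MRN], [PHONE], or [ADDRESS]. "
    "Do not change any clinical content."
)

def deidentify_note(note: str) -> str:
    """Zero-shot masking of identifiers in a free-text note (sketch only)."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": DEID_INSTRUCTION},
            {"role": "user", "content": note},
        ],
        temperature=0,  # deterministic output is preferable for masking
    )
    return response.choices[0].message.content
```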
A peer-reviewed study by Lyu, Tan, et al., Translating radiology reports into plain language using ChatGPT and GPT-4 with prompt learning: Promising results, limitations, and potential, tested how ChatGPT and GPT-4 performed at translating radiology reports into plain language across 138 cases. Radiologists rated the output at 4.27/5 on average, with extremely low levels of missing information (~0.08) and inaccurate information (~0.07). GPT-4 produced notably better quality than ChatGPT. This suggests that when non-PHI content is used, translation and patient-education tasks are viable.
To ensure compliance:
ChatGPT and other LLMs can draft appointment reminders, billing summaries, letters of explanation, or prior authorization requests. Services such as Doximity GPT offer HIPAA-aligned drafting tools integrated into physician workflows, with appropriate BAAs and secure infrastructure in place.
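One low-risk drafting pattern is to ask the model for a generic template containing only placeholders and then merge patient-specific details locally, so PHI never leaves the organization's systems. The sketch below assumes the OpenAI Python SDK; the placeholder field names and model are illustrative.

```python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Step 1: the model drafts a reusable template that contains only placeholders.
template = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{
        "role": "user",
        "content": (
            "Draft a short, friendly appointment reminder letter. Use the "
            "placeholders {patient_name}, {appointment_date}, {clinic_name}, "
            "and {phone}. Do not include any real patient details."
        ),
    }],
).choices[0].message.content

# Step 2: patient-specific details are merged locally, inside the covered
# entity's own systems, so PHI is never sent to the external API.
letter = template.format(
    patient_name="Jane Doe",
    appointment_date="April 3 at 2:30 PM",
    clinic_name="Riverside Family Medicine",
    phone="555-0100",
)
print(letter)
```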
Emerging ambient documentation tools built on ChatGPT-like models (e.g., AWS HealthScribe, Nuance DAX) record clinical conversations and generate structured notes while keeping PHI within HIPAA-eligible infrastructure. These platforms support BAAs and zero-data-retention configurations. AWS HealthScribe, in particular, advertises full HIPAA eligibility, encryption, and customer control over data for transcription use cases.
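For teams evaluating AWS HealthScribe, transcription jobs are started through the Amazon Transcribe API. The sketch below is based on the boto3 `start_medical_scribe_job` operation; the bucket names, IAM role ARN, and settings are placeholders, and parameter details should be verified against current AWS documentation before use.

```python
import boto3

# HealthScribe jobs are managed through the Amazon Transcribe service.
transcribe = boto3.client("transcribe", region_name="us-east-1")

transcribe.start_medical_scribe_job(
    MedicalScribeJobName="visit-2024-0001",  # placeholder job name
    Media={"MediaFileUri": "s3://example-phi-input/visit-2024-0001.wav"},
    OutputBucketName="example-phi-output",   # results land in your own bucket
    DataAccessRoleArn="arn:aws:iam::123456789012:role/HealthScribeAccess",
    Settings={
        "ShowSpeakerLabels": True,
        "MaxSpeakerLabels": 2,               # clinician and patient
    },
)

# Poll the job and read the structured note from the output bucket when done.
job = transcribe.get_medical_scribe_job(MedicalScribeJobName="visit-2024-0001")
print(job["MedicalScribeJob"]["MedicalScribeJobStatus"])
```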
A recent paper, Towards a HIPAA Compliant Agentic AI System in Healthcare, proposes a model that layers several architectural safeguards around the language model, including context-aware access controls, PHI sanitization, and audit trails.
Such agentic systems, when implemented within hospital networks or secure cloud environments, can support complex AI workflows (e.g., report generation, summarization) while maintaining regulatory observability.
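The paper's exact architecture is not reproduced here, but the general pattern it points toward, wrapping model access in policy checks, input sanitization, and an audit trail, can be sketched roughly as follows; the role names, `scrub_phi` stub, and logging backend are all illustrative assumptions.

```python
import logging
from datetime import datetime, timezone

audit = logging.getLogger("agent_audit")

ALLOWED_ROLES = {"attending", "resident", "compliance_officer"}  # example policy

def scrub_phi(text: str) -> str:
    """Stand-in for a real de-identification step (regex, NER model, etc.)."""
    return text  # replace with an actual sanitization pipeline

def guarded_agent_call(user_role: str, task: str, document: str, llm_call) -> str:
    """Enforce an access policy, sanitize input, and record an audit event
    before any request reaches the underlying language model."""
    if user_role not in ALLOWED_ROLES:
        audit.warning("denied role=%s task=%s", user_role, task)
        raise PermissionError("Role not authorized for AI-assisted workflows")

    sanitized = scrub_phi(document)
    audit.info(
        "agent_call role=%s task=%s at=%s",
        user_role, task, datetime.now(timezone.utc).isoformat(),
    )
    # llm_call is any model client: a local model or a BAA-covered endpoint.
    return llm_call(task=task, text=sanitized)
```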
A related 2024 case study titled Integrating ChatGPT into Secure Hospital Networks: A Case Study on Improving Radiology Report Analysis implemented a ChatGPT-like model inside a secure hospital network. By using sentence‑level knowledge distillation via contrastive learning, the system attained over 95% accuracy in detecting anomalies in radiology reports and flagged uncertainties to clinicians, improving interpretability and trust.
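The study's implementation is not reproduced in this article, so the snippet below only sketches the general idea of sentence-level distillation with a contrastive objective: student sentence embeddings are pulled toward the teacher's embeddings of the same sentences, with the rest of the batch serving as negatives. The dimensions and temperature are arbitrary.

```python
import torch
import torch.nn.functional as F

def contrastive_distillation_loss(student_emb, teacher_emb, temperature=0.07):
    """InfoNCE-style loss: each student sentence embedding should match the
    teacher embedding of the same sentence, with the other sentences in the
    batch acting as negatives. Both inputs have shape (batch_size, dim)."""
    student = F.normalize(student_emb, dim=-1)
    teacher = F.normalize(teacher_emb, dim=-1)

    # Similarity of every student sentence to every teacher sentence.
    logits = student @ teacher.t() / temperature            # (batch, batch)
    targets = torch.arange(student.size(0), device=student.device)

    # The correct pairing sits on the diagonal.
    return F.cross_entropy(logits, targets)

# Example with random tensors standing in for real encoder outputs; in practice
# the teacher embeddings come from the large model and are projected to the
# student's dimensionality.
student_out = torch.randn(8, 384, requires_grad=True)
teacher_out = torch.randn(8, 384)
loss = contrastive_distillation_loss(student_out, teacher_out)
loss.backward()
```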
Leverage tools like DeID‑GPT or internal pipelines to preprocess documents so PHI is removed or pseudonymized before any AI processing.
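As a rough illustration of such a preprocessing step (not the DeID-GPT pipeline), a local script can replace common identifier patterns with consistent pseudonyms before anything is sent to an external model; the handful of regex patterns below is far from complete and would need to be extended and validated before real use.

```python
import re
from collections import defaultdict

# A few illustrative patterns only; a production pipeline must cover all
# 18 Safe Harbor identifier categories and be validated on real notes.
PATTERNS = {
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def pseudonymize(text: str):
    """Replace matched identifiers with numbered placeholders and return the
    mapping so authorized staff can re-identify results locally if needed."""
    mapping = {}
    counters = defaultdict(int)

    def make_replacer(label):
        def _replace(match):
            counters[label] += 1
            token = f"[{label}_{counters[label]}]"
            mapping[token] = match.group(0)
            return token
        return _replace

    for label, pattern in PATTERNS.items():
        text = pattern.sub(make_replacer(label), text)
    return text, mapping

clean_text, key = pseudonymize(
    "Pt called from 555-123-4567 on 03/14/2024 regarding MRN: 0049321."
)
print(clean_text)  # identifiers replaced with [PHONE_1], [DATE_1], [MRN_1]
```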
Always have clinicians or compliance-trained staff review AI-generated content before use, especially for clinical or patient-facing materials.
See also: HIPAA Compliant Email: The Definitive Guide (2025 Update)
Using ChatGPT with PHI outside of a HIPAA compliant environment may lead to a HIPAA violation. This could result in fines, legal action, and reputational damage for the healthcare provider or organization involved.
Read also: What are the penalties for HIPAA violations?
While ChatGPT can assist with summarizing information or providing general guidance, it should not be relied on for clinical decision-making. It does not replace a licensed medical professional’s judgment and may produce incorrect or biased information.
Training should include: