
How ChatGPT can support HIPAA compliant healthcare communication

Written by Tshedimoso Makhene | August 07, 2025

ChatGPT and related large language models (LLMs) can be useful for summarizing medical records, translating jargon into plain language, automating administrative tasks, and improving patient engagement. However, healthcare in the U.S. is tightly governed by the Health Insurance Portability and Accountability Act (HIPAA), which mandates robust protection of protected health information (PHI).

ChatGPT as-is (e.g., chat.openai.com) is not HIPAA compliant, since PHI processed via public services may be logged, retained, and used for model training. However, as David Holt, owner of Holt Law LLC, said, “Even though the standard versions of ChatGPT aren’t HIPAA compliant, there are still ways for healthcare organizations to use it safely. One way is by only using it with de-identified data—meaning all personal information is removed so it no longer counts as protected health information under HIPAA.”

 

HIPAA requirements and ChatGPT’s limitations

Business associate agreements (BAA)

Under HIPAA’s Privacy Rule, any third-party vendor (business associate) processing PHI must enter into a business associate agreement (BAA) with the covered entity (e.g., the healthcare provider), which legally commits the vendor to HIPAA safeguards. OpenAI does not automatically sign a BAA for ChatGPT users, though it may do so under restricted circumstances (e.g., enterprise/API clients requesting BAA access).

Go deeper: Will OpenAI sign a BAA? (2025 update)

 

Security safeguards

HIPAA’s Security Rule requires encryption in transit and at rest, multi‑factor authentication (MFA), access controls, audit logging, and periodic security risk assessments (SRAs). ChatGPT, in its standard consumer version, is not HIPAA compliant and should not be used to process PHI. However, HIPAA compliance may be achievable when using ChatGPT Enterprise or OpenAI’s API in a properly secured environment and under a BAA, with appropriate safeguards configured by the implementing organization.

 

De‐identification of PHI

HIPAA permits sharing data when PHI has been properly de‐identified, using either the Expert Determination method or the Safe Harbor method (removal of 18 specified identifiers). ChatGPT can help de‑identify free‑text medical notes. The study DeID-GPT: Zero-shot Medical Text De-Identification by GPT-4 shows that GPT‑4 achieves high accuracy in masking private information while preserving structure and meaning.
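
For illustration, a minimal regex-based scrub along Safe Harbor lines might start with a few structured identifier patterns. This is a sketch, not a complete de-identification solution; names and other free-text identifiers need NLP-based detection, as discussed later in this article.

```python
import re

# Minimal sketch: regex masks for a few obvious identifier patterns.
# Real Safe Harbor de-identification must cover all 18 identifier types
# (names, geographic subdivisions, dates, device IDs, etc.) and be validated.
PATTERNS = {
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[DATE]": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "[MRN]": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
}

def scrub(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    return text

note = "Pt. John Smith, MRN: 4821973, seen 03/14/2025. Call 555-867-5309."
print(scrub(note))
# -> "Pt. John Smith, [MRN], seen [DATE]. Call [PHONE]."
# Note: the patient's name is NOT caught by regex alone; free-text names
# typically require an NER model (see the hybrid pipeline discussed below).
```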


 

Applications of ChatGPT in healthcare communication and how to use them safely

Translating clinical reports to patient‐friendly language

A peer-reviewed study by Lyu, Tan, et al., Translating radiology reports into plain language using ChatGPT and GPT-4 with prompt learning: Promising results, limitations, and potential, tested how ChatGPT (and GPT‑4) performed in translating radiology reports into plain language across 138 cases. Radiologists rated the output at 4.27/5 on average, with very low rates of missing or inaccurate details (~0.08 and 0.07 on average, respectively). GPT‑4 produced notably better quality than ChatGPT. This demonstrates that when non‐PHI content is used, translation and patient education tasks are viable.

To ensure compliance (see the sketch after this list):

  • Remove patient identifiers or only input de‑identified content.
  • Use human oversight to review output.
  • Clearly label any AI‑generated content.
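
A minimal sketch of this translation workflow, assuming OpenAI API access covered by a signed BAA; the model name and prompt are illustrative, and the report must already be de-identified before it is sent.

```python
# Sketch only: assumes enterprise/API access under a signed BAA.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DEIDENTIFIED_REPORT = (
    "IMPRESSION: 1. No acute intracranial hemorrhage. "
    "2. Chronic small-vessel ischemic changes, unchanged from prior exam."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {
            "role": "system",
            "content": "Rewrite radiology findings in plain, patient-friendly "
                       "language at roughly an 8th-grade reading level. Do not "
                       "add diagnoses or advice that is not in the report.",
        },
        {"role": "user", "content": DEIDENTIFIED_REPORT},
    ],
)

draft = response.choices[0].message.content
print(draft)  # a clinician reviews and labels the draft before it reaches a patient
```

Even in this setup, the output is a draft: it still needs human review and clear labeling as AI-generated, per the checklist above.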

 

Automating administrative documentation

ChatGPT and other LLMs can draft appointment reminders, billing summaries, letters of explanation, or prior authorization requests. Tools like Doximity GPT offer HIPAA‑aligned drafting integrated into physician workflows, with appropriate BAAs and secure infrastructure in place.
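
One pattern that keeps identifiers out of the model entirely is to have the LLM draft (and staff review) a generic template, then merge patient-specific details locally. A rough sketch, with hypothetical placeholder fields:

```python
# Rough sketch: the LLM only ever sees placeholder tokens; patient-specific
# values are merged locally after generation, so no PHI leaves the practice.
# Assumes an AI-drafted, human-reviewed template like the one below.
TEMPLATE = (
    "Dear {patient_name},\n\n"
    "This is a reminder of your upcoming appointment with {provider_name} "
    "on {appointment_date} at {appointment_time}. Please arrive 15 minutes "
    "early and bring your insurance card.\n\n"
    "Sincerely,\n{clinic_name}"
)

def render_reminder(record: dict) -> str:
    """Merge patient details into the reviewed template inside the covered entity's own systems."""
    return TEMPLATE.format(**record)

print(render_reminder({
    "patient_name": "Jane Doe",
    "provider_name": "Dr. Patel",
    "appointment_date": "Tuesday, September 9",
    "appointment_time": "10:30 AM",
    "clinic_name": "Riverside Family Medicine",
}))
```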

 

Clinic note generation via ambient transcription

Ambient documentation tools built on ChatGPT-like models (e.g., AWS HealthScribe, Nuance DAX) record clinician–patient conversations and generate structured clinical notes. These platforms are HIPAA-eligible, offering BAAs and zero-data-retention options. AWS HealthScribe, in particular, advertises HIPAA eligibility, encryption, and user control over data for transcription use cases.

 

Building a HIPAA compliant LLM system

A recent paper, Towards a HIPAA Compliant Agentic AI System in Healthcare, proposes a system design combining several architectural safeguards:

  • Attribute‑based access control (ABAC) for fine-grained governance
  • Hybrid PHI sanitization pipeline (regex + BERT‑based) to reduce risk of leakage (sketched below)
  • Immutable audit trails for compliance verification and logging

Such agentic systems, when implemented within hospital networks or secure cloud environments, can support complex AI workflows (e.g., report generation, summarization) while preserving compliance controls and auditability.
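
As a rough illustration of the hybrid sanitization idea, a regex sweep for structured identifiers can be combined with a transformer NER sweep for free-text names and places. The Hugging Face model name below is illustrative; any de-identification model would need validation on your own data before real use.

```python
import re
from transformers import pipeline

# Regex pass for structured identifiers (dates here, as one example).
DATE_RE = re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b")

# NER pass for free-text identifiers; the model name is illustrative.
ner = pipeline(
    "token-classification",
    model="obi/deid_roberta_i2b2",
    aggregation_strategy="simple",
)

def sanitize(text: str) -> str:
    text = DATE_RE.sub("[DATE]", text)  # regex pass
    # Replace detected entities from the end of the string backwards so the
    # character offsets stay valid as the text is edited.
    for ent in sorted(ner(text), key=lambda e: e["start"], reverse=True):
        text = text[: ent["start"]] + f"[{ent['entity_group']}]" + text[ent["end"]:]
    return text

print(sanitize("Pt. John Smith was seen at Mercy General Hospital on 03/14/2025."))
```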

A related 2024 case study titled Integrating ChatGPT into Secure Hospital Networks: A Case Study on Improving Radiology Report Analysis implemented a ChatGPT-like model inside a secure hospital network. By using sentence‑level knowledge distillation via contrastive learning, the system attained over 95% accuracy in detecting anomalies in radiology reports and flagged uncertainties to clinicians, improving interpretability and trust. 

 

How to implement ChatGPT safely in healthcare

Step 1: Choose the right implementation path

  • Do not use the public ChatGPT or free APIs with PHI.
  • Use ChatGPT only via enterprise/API access with a signed BAA, or
  • Self‑host an open‑source LLM on-premises or in a private cloud that meets HIPAA safeguards (encryption, access controls, logging), as sketched below.
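
For the self-hosted path, many open-source serving stacks (e.g., vLLM or Ollama) expose an OpenAI-compatible endpoint that can run entirely inside your own network. A rough sketch, with the endpoint URL and model name as placeholders:

```python
# Rough sketch of calling a self-hosted model through an OpenAI-compatible
# endpoint inside the organization's own network. The base_url and model
# name are placeholders for your deployment.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.internal.example-hospital.org/v1",  # private endpoint
    api_key="placeholder-internal-token",  # whatever your gateway requires
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instruct",  # whichever model your server hosts
    messages=[
        {"role": "system", "content": "Summarize the following de-identified note."},
        {"role": "user", "content": "Pt presents with 3 days of productive cough..."},
    ],
)
print(response.choices[0].message.content)
```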

 

Step 2: De‑identify all PHI inputs

Leverage tools like DeID‑GPT or internal pipelines to preprocess documents so PHI is removed or pseudonymized before any AI processing.
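
Pseudonymization, unlike outright removal, keeps a local mapping so the original details can be restored after the AI step. A minimal sketch, assuming the identifiers have already been detected by a tool such as those above:

```python
import uuid

# Minimal sketch of pseudonymization: identifiers are swapped for random tokens
# before AI processing, and the mapping never leaves the covered entity's
# systems, so outputs can be re-identified locally afterwards.
def pseudonymize(text: str, identifiers: list[str]) -> tuple[str, dict]:
    mapping = {}
    for value in identifiers:
        token = f"[ID-{uuid.uuid4().hex[:8]}]"
        mapping[token] = value
        text = text.replace(value, token)
    return text, mapping

def reidentify(text: str, mapping: dict) -> str:
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

safe_text, key = pseudonymize(
    "John Smith was seen on 03/14/2025.",
    ["John Smith", "03/14/2025"],  # in practice, detected by a de-id tool
)
# safe_text goes to the model; `key` stays inside the organization.
```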

 

Step 3: Human‐in‐the‐loop review

Always have clinicians or compliance-trained staff review AI-generated content before use, especially for clinical or patient-facing materials.

 

Step 4: Build security controls and audit trails

  • Implement encryption in transit and at rest, MFA, role-based access control (RBAC), detailed logging and auditing (see the sketch after this list), and incident response mechanisms.
  • Perform regular security risk assessments and policy reviews.
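
One common way to make audit trails tamper-evident is to chain log entries with hashes, so any later edit breaks the chain. A minimal sketch, not a substitute for a full audit infrastructure:

```python
import hashlib
import json
import time

# Minimal sketch of a hash-chained (tamper-evident) audit log: each entry
# stores the hash of the previous entry, so modifying any record breaks the
# chain. A real deployment also needs secure storage, retention, and review.
class AuditLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64

    def record(self, user: str, action: str, resource: str) -> None:
        entry = {
            "timestamp": time.time(),
            "user": user,
            "action": action,
            "resource": resource,
            "prev_hash": self._last_hash,
        }
        entry_hash = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = entry_hash
        self.entries.append(entry)
        self._last_hash = entry_hash

    def verify(self) -> bool:
        """Recompute every hash; returns False if any entry was altered."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.record("dr.lee", "summarize_note", "deidentified_note_42")
print(log.verify())  # True
```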

 

Step 5: Train your team

  • Educate staff on AI limitations (e.g., hallucinations, bias, and limited clinical context).
  • Ensure they understand not to input PHI into public LLMs, following guidance from experts who warn against oversharing in chatbots.


 

Strengths and limitations

Strengths

  • Saves time for healthcare providers: ChatGPT can help reduce the time doctors and staff spend writing letters, summarizing information, or drafting insurance documents. For example, platforms like Doximity GPT have shown that AI tools can significantly cut down paperwork, freeing up more time for patient care.
  • Improves patient understanding: The studies above show that ChatGPT can simplify complex medical reports into plain language that patients can understand. 
  • Can be used safely in controlled environments: When deployed in a secure setting, ChatGPT-like tools can process clinical data without risking privacy. These systems can be designed with features like data encryption, access controls, and audit logs to meet HIPAA standards.

 

Limitations

  • Risk of inaccurate information: ChatGPT may give incorrect answers or make up details ("hallucinate"), which is risky in healthcare. All outputs must be reviewed by trained professionals before being used or shared with patients.
  • Not automatically HIPAA compliant: The public version of ChatGPT should not be used with any patient information. To meet HIPAA rules, healthcare providers must use a version of ChatGPT covered by a signed BAA.
  • Regulations continue to evolve: HIPAA was written long before tools like ChatGPT existed, and the rules are still catching up. Regulators are starting to pay more attention to AI in healthcare, and new rules may require even stricter security measures going forward.
  • Getting the right setup can be difficult: Not every clinic or hospital has the resources to negotiate enterprise contracts or build secure AI systems. Smaller providers may struggle to access compliant AI tools unless they partner with a vendor who already meets HIPAA requirements.

See also: HIPAA Compliant Email: The Definitive Guide (2025 Update)

 

FAQs

What happens if ChatGPT is used with PHI without proper safeguards?

Using ChatGPT with PHI outside of a HIPAA compliant environment may lead to a HIPAA violation. This could result in fines, legal action, and reputational damage for the healthcare provider or organization involved.

Read also: What are the penalties for HIPAA violations?

 

Can ChatGPT help with clinical decision-making?

While ChatGPT can assist with summarizing information or providing general guidance, it should not be relied on for clinical decision-making. It does not replace a licensed medical professional’s judgment and may produce incorrect or biased information.

 

How can I train my staff to use ChatGPT responsibly?

Training should include:

  • Understanding what PHI is
  • When and how ChatGPT may be used
  • Policies for input restrictions (e.g., never entering identifiable data)
  • Human review of AI outputs
  • Reporting AI-related incidents or concerns