ChatGPT emerged in 2022 as a “conversational AI language model developed by OpenAI,” write Jianning Li and colleagues in an article titled “ChatGPT in healthcare: A taxonomy and systematic review.” “It uses deep learning techniques to generate human-like responses to natural language inputs… Currently, the interface is designed for question answering (QA), i.e., ChatGPT responds in texts to the questions/prompts from users. All established or potential applications of ChatGPT in different medical specialties and/or clinical scenarios hinge on the QA feature, distinguished only by how the prompts are formulated.”
ChatGPT is particularly valuable in administrative and compliance-related tasks, such as creating HIPAA compliant email content. Since the model's output is driven entirely by user prompts, healthcare professionals can leverage ChatGPT as a smart assistant to generate email templates, training materials, internal policies, and breach response documentation that align with HIPAA requirements—so long as they craft their prompts carefully and avoid inputting any protected health information (PHI). By understanding how to ask the right questions, users can unlock ChatGPT’s full potential to streamline secure communication workflows.
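For teams that generate templates programmatically rather than through the chat interface, the same rule applies: keep PHI out of the prompt and let placeholders stand in for real details. Below is a minimal sketch using the OpenAI Python SDK; the model name and prompt wording are illustrative, not a prescribed workflow.

```python
# Minimal sketch: requesting a PHI-free email template via the OpenAI API.
# Assumes OPENAI_API_KEY is set in the environment; model name is illustrative.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Draft a reusable appointment-reminder email template for a medical "
    "practice. Use placeholders such as [PATIENT_NAME] and [APPOINTMENT_DATE] "
    "instead of real details, and do not include any protected health "
    "information."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Because the prompt asks only for a template with placeholders, no patient data ever reaches the model.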
While ChatGPT is a powerful tool for content generation, there are limitations to its use in regulated environments like healthcare. By default, standard versions of ChatGPT are not HIPAA compliant and should not be used to process, store, or transmit PHI. However, this doesn’t mean healthcare organizations cannot use the tool at all.
According to David Holt, owner of Holt Law LLC, “Even though the standard versions of ChatGPT aren’t HIPAA compliant, there are still ways for healthcare organizations to use it safely. One way is by only using it with de-identified data—meaning all personal information is removed so it no longer counts as protected health information under HIPAA.”
De-identification is a practical workaround that enables healthcare teams to use ChatGPT for general administrative tasks such as drafting policies, educational content, training modules, and email templates.
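To make the idea concrete, the sketch below shows one simplified way to redact common identifiers before text ever reaches an AI tool. The regex patterns are illustrative only: genuine HIPAA de-identification (for example, the Safe Harbor method) covers 18 identifier categories, including names, which simple patterns like these will miss.

```python
# Simplified illustration of redacting identifiers before sending text to an
# AI tool. NOT full HIPAA de-identification: Safe Harbor requires removing all
# 18 identifier categories (names, for instance, need NER-style tooling).
import re

# Hypothetical patterns for a few common identifiers
PATTERNS = {
    "[EMAIL]": r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}",
    "[PHONE]": r"\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}",
    "[SSN]":   r"\b\d{3}-\d{2}-\d{4}\b",
    "[DATE]":  r"\b\d{1,2}/\d{1,2}/\d{2,4}\b",
}

def redact(text: str) -> str:
    """Replace each matched identifier with a placeholder token."""
    for placeholder, pattern in PATTERNS.items():
        text = re.sub(pattern, placeholder, text)
    return text

sample = "The patient called 555-123-4567 on 03/14/2024 from j.doe@mail.com."
print(redact(sample))
# -> "The patient called [PHONE] on [DATE] from [EMAIL]."
```

A script like this can run locally as a pre-processing step, so only the placeholder version of the text is ever pasted into a prompt.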
Holt also points to the development of specialized AI tools that offer stronger privacy safeguards. “There are special tools out there, like BastionGPT and CompliantGPT, that act as a secure layer around ChatGPT. These tools are built with HIPAA in mind and can sign Business Associate Agreements,” he explains. “Some organizations are also setting up ChatGPT models directly on their own servers, which keeps everything in-house and avoids sending patient data over the internet.”
In other words, organizations that require HIPAA-level protections can explore self-hosted solutions or compliant versions of ChatGPT to ensure full control over sensitive data.
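As a rough sketch of the self-hosted route Holt describes, an internal model exposed through an OpenAI-compatible endpoint (as servers like vLLM provide) can be called with the same SDK, so prompts never leave the organization’s network. The URL, key, and model name below are hypothetical.

```python
# Minimal sketch: pointing the OpenAI SDK at an in-house, OpenAI-compatible
# endpoint so no data is sent to an outside service. All values are hypothetical.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.internal.example.org/v1",  # hypothetical internal server
    api_key="internal-key",                          # placeholder credential
)

response = client.chat.completions.create(
    model="local-model",  # whatever model the internal server hosts
    messages=[{"role": "user", "content": "Outline a HIPAA email-use policy."}],
)
print(response.choices[0].message.content)
```

Keeping the endpoint inside the organization’s firewall is what gives this approach its privacy advantage; the SDK itself is interchangeable.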
The privacy concerns aren't limited to healthcare. Holt adds that legal professionals also face similar restrictions. “Lawyers like myself have to do a similar workaround to protect private information,” he says, illustrating the broader relevance of AI safety in regulated industries.
For organizations looking for AI solutions that are compliant right out of the box, tools like Hathr AI and Hippocratic AI are emerging as secure, healthcare-specific alternatives. These platforms are designed to meet HIPAA standards natively and may be more appropriate for patient-facing or data-intensive applications.
Still, even standard ChatGPT can be applied in healthcare if used appropriately. “Finally,” Holt concludes, “even the standard ChatGPT can be useful in situations that don’t involve patient data, like drafting educational materials or helping write policies and templates that can be customized later.”
This makes ChatGPT particularly well-suited for use cases like drafting HIPAA compliant email content from carefully written prompts. By keeping queries free from PHI and focusing on policy, procedure, and general communication strategies, healthcare teams can take advantage of AI-powered efficiency without compromising compliance.
Read also: A quick guide to using ChatGPT in a HIPAA compliant way
ChatGPT can assist in:
- Drafting patient-facing email templates and communication
- Writing and summarizing internal policies and guidelines
- Creating staff training materials and quick-reference guides
- Producing technical documentation for secure email setup
- Preparing breach response templates, notifications, and reports
The prompts you give ChatGPT matter. A well-crafted prompt ensures the AI's response is tailored, relevant, and compliant. Below, we break down sample prompts across five essential areas:
With ChatGPT, you can generate compliant messaging that informs, reminds, and engages without overstepping boundaries.
Sample prompts:
- “Write a HIPAA compliant appointment reminder email template that uses placeholders instead of patient details.”
- “Draft a patient newsletter announcing our new secure messaging portal, with no references to individual patients.”
- “Create an email inviting patients to opt in to a general health education newsletter.”
See also: HIPAA compliant email marketing: What you need to know
ChatGPT can act as a knowledge base to help draft policy, summarize guidelines, and build awareness across your team.
Sample prompts:
- “Summarize the HIPAA Privacy Rule requirements for email communication in plain language.”
- “Draft an internal policy outlining when staff may email patients and what information the messages may include.”
- “Explain the difference between consent and authorization under HIPAA for a staff handbook.”
With clear policies and regular audits, both of which ChatGPT can help draft, healthcare organizations can drastically reduce risk.
One of the biggest threats to HIPAA compliance is human error. In fact, according to an article titled “Human errors and their prevention in healthcare,” “Human errors form a significant portion of preventable mishaps in healthcare. Even the most competent clinicians are not immune to it.” Employees often misuse email unintentionally, whether by emailing PHI to the wrong person, skipping encryption, or misunderstanding what can be shared. ChatGPT can be used to create engaging training materials and quick-reference guides that reduce these risks.
Sample prompts:
- “Create a five-question quiz to test staff on recognizing PHI in emails.”
- “Write a one-page checklist employees can review before sending any patient-related email.”
- “Draft a short training script covering common email mistakes that lead to HIPAA violations.”
Read also: The importance of training healthcare staff in email best practices
ChatGPT can be used to generate technical documentation, compare vendors, or assist IT teams in setting up secure systems.
Sample prompts:
- “Generate a checklist for evaluating HIPAA compliant email encryption vendors.”
- “Draft documentation describing how our IT team should configure encryption for outbound email.”
- “List the questions to ask an email provider before signing a business associate agreement.”
When breaches occur, responding swiftly and appropriately minimizes harm and regulatory exposure. ChatGPT can help teams develop incident response templates, breach notifications, and reports.
Sample prompts:
- “Draft a breach notification letter template that addresses the elements required by the HIPAA Breach Notification Rule.”
- “Create an incident response checklist for a suspected email-related PHI breach.”
- “Write an internal report template for documenting the scope, cause, and timeline of a breach.”
Go deeper: 100+ ChatGPT prompts for healthcare professionals
While ChatGPT is not a HIPAA compliant tool for handling PHI, it’s a powerful assistant for content generation, training, and documentation. Here are a few guardrails to follow:
- Never enter PHI or other identifying patient details into a standard ChatGPT prompt.
- Work only with de-identified or placeholder data.
- Use a HIPAA compliant alternative, or a self-hosted model, for anything that must touch patient data, and get a business associate agreement where one applies.
- Have a qualified person review AI-generated policies, templates, and notifications before use.
Learn more: How do healthcare organizations use ChatGPT?
No, the standard version of ChatGPT is not HIPAA compliant and should not be used to process or transmit protected health information (PHI).
Protected health information (PHI) includes any data that can identify an individual and relates to their health, such as:
- Names, addresses, and dates of birth
- Phone numbers and email addresses
- Medical record numbers and health plan IDs
- Social Security numbers
- Diagnoses, treatment details, and appointment information
Even seemingly harmless details may qualify if combined with identifying data.