OpenAI, a leading provider of artificial intelligence (AI) language models, has transformed how businesses operate. Healthcare providers handling protected health information (PHI), however, must comply with the Health Insurance Portability and Accountability Act (HIPAA), which requires covered entities to sign a business associate agreement (BAA) with vendors that handle PHI on their behalf.
So, will OpenAI sign a BAA with healthcare organizations to ensure their use of OpenAI's services is HIPAA compliant?
A Business Associate Agreement (BAA) is a legally binding agreement between a covered entity, such as a healthcare provider, and a business associate who handles PHI on their behalf. The BAA outlines the terms and conditions for the use and disclosure of PHI, as well as the security and privacy obligations of the business associate.
HIPAA regulations require that covered entities only share PHI with business associates who have signed a BAA. This ensures that PHI is protected and that all parties comply with HIPAA regulations. As a result, healthcare organizations may be hesitant to use OpenAI's services without a BAA in place.
At this time, it appears that OpenAI does not sign BAAs, so its services may not be HIPAA compliant.
OpenAI does take steps to protect the privacy and security of user data.
OpenAI terms state, "If you will be using the OpenAI API for the processing of "personal data" as defined in the GDPR or "Personal Information" as defined in CCPA, please... request to execute our Data Processing Addendum."
Nuance, a company that provides speech recognition and natural language processing solutions for the healthcare industry, has partnered with OpenAI. Nuance combines its own AI models with OpenAI's language models to improve the accuracy and efficiency of its medical transcription, clinical documentation, and virtual assistant solutions.
Nuance does not require a BAA because it does not share any PHI with OpenAI. Rather than sending PHI to OpenAI's services directly, Nuance uses de-identified or anonymized data to train and improve the performance of its AI models.
Related: How to send HIPAA compliant emails
If you mistakenly input PHI into ChatGPT, it is unlikely, though not impossible, that it could appear in a response to another user.
As a precaution, healthcare professionals and users interacting with ChatGPT in a healthcare context are encouraged to avoid sharing sensitive information to minimize any potential privacy risks.
Yaniv Markovski, Head of AI Specialist at OpenAI, said, “OpenAI does not use data submitted by customers via our API to train OpenAI models or improve OpenAI’s service offering… When you use our non-API consumer services ChatGPT or DALL-E, we may use the data you provide us to improve our models.”
Related: Safeguarding PHI in ChatGPT
Healthcare organizations can use OpenAI's services in a HIPAA compliant manner by implementing appropriate security controls and policies.
This includes the following steps:
Perform a risk assessment to identify potential risks and vulnerabilities associated with using OpenAI's services. Analyze potential threats to PHI, such as unauthorized access, data breaches, or data loss.
Healthcare professionals should ensure that the text input does not include identifiable patient information. This can be achieved through de-identification techniques such as masking or tokenization.
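The masking-and-tokenization idea can be sketched in code. The regex patterns, token format, and function name below are illustrative assumptions, not a complete de-identification tool; production use would rely on a vetted solution covering all eighteen HIPAA Safe Harbor identifiers.

```python
import re

# Hypothetical patterns for a few identifier types; a real tool
# would cover names, addresses, dates, and all other HIPAA identifiers.
PATTERNS = {
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def tokenize_phi(text: str) -> tuple[str, dict[str, str]]:
    """Replace detected identifiers with opaque tokens and return a
    mapping so the original values can be restored locally, never
    leaving the organization's systems."""
    mapping: dict[str, str] = {}
    counter = 0
    for label, pattern in PATTERNS.items():
        def _sub(match, label=label):
            nonlocal counter
            counter += 1
            token = f"[{label}_{counter}]"
            mapping[token] = match.group(0)
            return token
        text = pattern.sub(_sub, text)
    return text, mapping

masked, mapping = tokenize_phi("Patient MRN: 00123456, call 555-867-5309.")
```

Only the masked text would be sent to the language model; the mapping stays on the covered entity's side so responses can be re-identified after the fact.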
Chat logs should be monitored and reviewed regularly to ensure they do not contain any PHI. Use automated tools to detect and redact any PHI that may be present in chat logs.
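A log-monitoring pass like the one described above could look like the following sketch. The patterns and log format are assumptions for illustration; an organization would typically use a dedicated PHI-detection tool rather than hand-written regexes.

```python
import re

# Illustrative patterns only; real scanning needs far broader coverage.
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # SSN-style numbers
    re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),      # date-of-birth style dates
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),    # email addresses
]

def scan_and_redact(log_lines: list[str]) -> tuple[list[str], list[int]]:
    """Return redacted log lines plus the indexes of lines that were
    flagged, so a reviewer can follow up on each incident."""
    flagged: list[int] = []
    redacted: list[str] = []
    for i, line in enumerate(log_lines):
        hit = False
        for pattern in PHI_PATTERNS:
            if pattern.search(line):
                hit = True
                line = pattern.sub("[REDACTED]", line)
        if hit:
            flagged.append(i)
        redacted.append(line)
    return redacted, flagged
```

Running this on a schedule over stored chat logs gives both a cleaned copy and a list of entries needing human review.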
Access to chat logs should be restricted to only those who need it to perform their job functions.
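Role-based restriction of log access can be as simple as the following sketch; the role names and in-memory log store are assumptions, standing in for whatever identity system and storage the organization actually uses.

```python
# Hypothetical approved roles; in practice these come from the
# organization's identity and access management system.
ALLOWED_ROLES = {"privacy_officer", "security_admin", "compliance_auditor"}

def can_view_chat_logs(user_roles: set[str]) -> bool:
    """Allow access only when the user holds an approved role."""
    return bool(ALLOWED_ROLES & user_roles)

def read_chat_logs(user_roles: set[str], log_store: list[str]) -> list[str]:
    """Gate every log read behind the role check and fail loudly."""
    if not can_view_chat_logs(user_roles):
        raise PermissionError("chat log access restricted to authorized roles")
    return log_store
```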
Develop policies and procedures that govern the use of OpenAI's services to protect PHI, addressing issues such as data access and data retention.
Healthcare organizations should have an incident response plan outlining procedures for responding to a security incident involving ChatGPT.
The plan should include steps for identifying and containing the incident, assessing its scope, notifying affected individuals as required by the HIPAA Breach Notification Rule, and remediating the underlying vulnerability.
Train staff on the proper use of OpenAI's services to ensure that PHI is protected and not entered into ChatGPT. This should include training on security best practices, data privacy, and incident reporting.
Healthcare organizations should regularly audit their ChatGPT usage to identify potential security issues or compliance gaps. Audits should be conducted by an independent third party to ensure objectivity.
Conduct due diligence when selecting an OpenAI vendor. Confirm the vendor has appropriate security controls and is committed to protecting PHI.
Healthcare organizations should ensure that ChatGPT vendors meet their security and privacy requirements, including HIPAA compliance. Request that the vendor provide a business associate agreement that includes proper security and privacy terms. Ensure that the vendor undergoes regular security audits and assessments.
While OpenAI does not sign a BAA, healthcare organizations can still use their services in a HIPAA compliant manner by taking appropriate security measures and following best practices. By prioritizing privacy and security, healthcare providers can benefit from the power of OpenAI's language models while protecting PHI and maintaining compliance with applicable regulations.
Related: HIPAA Compliant Email: The Definitive Guide