Skip to the main content.
Talk to sales Start for free
Talk to sales Start for free

2 min read

Security concerns over ChatGPT update

Security concerns over ChatGPT update

ChatGPT's latest update has a security flaw that poses a risk to user data. 

 

What's new

The new update which includes the Code Interpreter feature, designed to run Python code and analyze files, was found to be vulnerable to a type of cyber attack called "prompt injection." In this context, the "prompt" refers to the instructions or queries users input into systems like ChatGPT. In a prompt injection attack, the attacker cleverly crafts these inputs to manipulate the system into performing actions that benefit the attacker, such as accessing, modifying, or exfiltrating sensitive data. 

Johann Rehberger initially reported the exploit on X and later further investigated by Tom’s Hardware, finding that ChatGPT could be manipulated to execute instructions embedded in a third-party URL, leading to the unauthorized exfiltration of data from uploaded files. This security loophole varied in its exploitability across different sessions, sometimes refusing to execute such commands, but the potential risk was demonstrated.

See also: Safeguarding PHI in ChatGPT

 

What they're saying

 Tom’s Hardware investigator Avram Piltch concluded the investigation by stating: “Now, you might be asking, how likely is a prompt injection attack from an external web page to happen? The ChatGPT user has to take the proactive step of pasting in an external URL and the external URL has to have a malicious prompt on it. In many cases, you still need to click the link it generates.

“There are a few ways this could happen. You could be trying to get legit data from a trusted website, but someone has added a prompt to the page (user comments or an infected CMS plugin could do this). Or maybe someone convinces you to paste a link based on social engineering. The problem is that, no matter how far-fetched it might seem, this security hole shouldn't be there. ChatGPT should not follow instructions that it finds on a web page, but it does and has for a long time.” 

 

Why it matters

Prompt injection attacks matter because they exploit AI systems' interactive and responsive nature, potentially leading to unauthorized access, data breaches, or other malicious activities. As AI technologies like chatbots and voice assistants become increasingly integrated into various aspects of our digital lives—from customer service and personal assistance to handling sensitive data in healthcare and finance—the security of these systems is heavily relied upon. A successful prompt injection attack could compromise personal data, cause intellectual property theft, or even disrupt e-services. Additionally, these attacks can undermine user trust in AI technologies, impeding their adoption and beneficial use.

See also: HIPAA Compliant Email: The Definitive Guide

 

The risks associated with ChatGPT

The discovery of over 101,100 compromised ChatGPT accounts on the dark web between June 2022 and May 2023 raised grave concerns about privacy and security, particularly in sensitive sectors like healthcare. These breaches, largely attributed to malware like Raccoon, Vidar, and RedLine, pose a threat since employees often use ChatGPT to handle proprietary code or sensitive information. This in addition to the latest “prompt injection” threat, raises questions about how viable the use of AI resources might be in healthcare organizations where patient security is at risk. 

Read more: ChatGPT account breaches raise privacy concerns in healthcare

 

Subscribe to Paubox Weekly

Every Friday we'll bring you the most important news from Paubox. Our aim is to make you smarter, faster.