Fake AI browser extensions steal data from over 260K Chrome users
Gugu Ntsele February 25, 2026
Thirty copycat Chrome extensions impersonating AI tools have collectively accumulated over 260,000 downloads while secretly gathering users' personal data and sensitive information.
What happened
Researchers identified 30 malicious Google Chrome extensions operating as near-identical copies of one another, differentiated only by superficial branding. The extensions masquerade as AI assistants, some impersonating known tools like Gemini and ChatGPT, others using generic names like "AI Sidebar" and "AI GPT." They deliver a convincing AI chat experience while stealing data behind the scenes. When a user submits a prompt, the extension loads a full-screen iframe, overlaid on the browser page, that points to an attacker-controlled domain. The attacker's server intercepts the prompt, may proxy it to a real LLM to return a plausible response, and captures whatever sensitive information the user submitted. Stolen data can include email content, browser content, API keys, tokens, and anything else users paste into the interface. Several extensions remain available on the Chrome Web Store, some with a "Featured" badge from Google.
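The data flow described above can be sketched conceptually. Everything here is hypothetical and simplified for illustration: the function names are invented, and the "real LLM" is stubbed so the sketch is self-contained.

```python
# Conceptual sketch of the attacker-side flow: intercept the prompt,
# keep a copy, and still return a plausible answer so the user
# suspects nothing. All names are hypothetical.

captured_prompts: list[str] = []  # what the attacker's server retains


def proxy_real_llm(prompt: str) -> str:
    # Stand-in for forwarding the prompt to a legitimate model so the
    # chat experience stays convincing.
    return f"[plausible AI response to: {prompt[:40]}]"


def handle_prompt(prompt: str) -> str:
    # 1. Retain whatever the user typed or pasted, including API keys,
    #    tokens, and email content.
    captured_prompts.append(prompt)
    # 2. Return a convincing response.
    return proxy_real_llm(prompt)


response = handle_prompt("Summarize this. My API key is sk-...")
```

The point of the sketch is that the theft is invisible to the user: the only observable behavior is a normal-looking AI response.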
Going deeper
The extensions exploit a gap in how the Chrome Web Store reviews submissions. Because the malicious logic runs in a remote web application loaded via an iframe rather than in the extension's local code, the extension appears clean during review. The actual data harvesting happens off-platform, making it invisible to standard review processes. The attackers also reuse hosting providers, TLS certificates, and identical JavaScript bundles across all 30 extensions, which cross-extension correlation could potentially detect.
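The cross-extension correlation mentioned above can be illustrated with a minimal sketch: group extensions by shared infrastructure indicators and flag any cluster containing more than one listing. The sample indicators and extension names below are fabricated for illustration; in practice they would come from analyzing each extension's remote endpoints.

```python
# Flag copycat extension families by shared infrastructure:
# extensions that load identical remote JavaScript bundles from hosts
# presenting the same TLS certificate are likely one campaign.
from collections import defaultdict

# Fabricated sample data for illustration.
extensions = [
    {"name": "AI Sidebar", "tls_fingerprint": "ab:cd", "js_bundle_hash": "deadbeef"},
    {"name": "AI GPT",     "tls_fingerprint": "ab:cd", "js_bundle_hash": "deadbeef"},
    {"name": "Notes Plus", "tls_fingerprint": "12:34", "js_bundle_hash": "cafebabe"},
]


def cluster_by_infrastructure(exts):
    """Group extensions that share a TLS certificate and JS bundle."""
    clusters = defaultdict(list)
    for ext in exts:
        key = (ext["tls_fingerprint"], ext["js_bundle_hash"])
        clusters[key].append(ext["name"])
    # Any cluster with more than one extension is a likely copycat family.
    return [names for names in clusters.values() if len(names) > 1]


suspicious = cluster_by_infrastructure(extensions)
```

Here the two "AI" listings share a certificate and bundle, so they cluster together, while the unrelated extension does not, which is the kind of signal Zargarov suggests store reviews could correlate.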
This campaign fits into a broader pattern that security experts have flagged around AI tools. The National Cyber Security Centre (NCSC) has warned that generative AI "can be coaxed into creating toxic content and is prone to 'prompt injection attacks.'"
What was said
Security researcher Natalie Zargarov explained the shift in attacker strategy: "Instead of spoofing banks or email logins, attackers are now impersonating artificial intelligence (AI) interfaces and developer tools, places where users are conditioned to paste application programming interface (API) keys, tokens, and sensitive data without hesitation."
On why the extensions feel legitimate, Zargarov noted they "leverage brand association. They capitalize on users' familiarity with well-known model names, and the perception that 'AI assistant' implies connection to major providers," adding that "it feels credible particularly when distributed via the official Chrome Web Store."
On Google's ability to catch these extensions, Zargarov said, "If Google is not deeply analyzing network endpoints, shared TLS certificates, reused hosting providers, and identical JavaScript bundles loaded remotely, then related extensions can evade detection. I can't speak to Google's internal review mechanisms, but from the outside, this type of campaign suggests that cross-extension correlation is either limited or not prioritized."
In the know
Users routinely paste sensitive information into AI chat interfaces without much scrutiny. Unlike traditional phishing, which asks users to hand over credentials, these extensions simply wait for users to do what they already do every day. The AI interface becomes the attack surface.
As the NCSC notes, "the burden of using AI safely should not fall on the individual users of the AI products." When an employee installs a browser extension independently, with no IT oversight and no policy guidance, they are left to make a security judgment they are not equipped to make.
Why it matters
Employees across industries now regularly feed AI assistants sensitive corporate data as part of routine workflows. These extensions exploit that behavior. An employee who installs one and uses it to summarize a CRM page or draft a client email may unknowingly transmit regulated customer data or proprietary business information to an attacker-controlled server, with no indication anything went wrong.
Depending on the data exposed, organizations could face violations under GDPR, HIPAA, CCPA, or other data protection frameworks.
The NCSC frames this as a leadership responsibility, not just a technical one, noting that "keeping AI systems secure is as much about organisational culture, process, and communication as it is about technical measures" and that "security should be integrated into all AI projects and workflows in your organisation from inception." This campaign is a direct consequence of organizations that have not yet applied that standard to the AI tools their employees are independently installing and using every day. The fact that several of these extensions carry Google's own "Featured" badge makes it even harder for individuals and IT teams to apply meaningful skepticism.
The bottom line
Organizations should audit which AI browser extensions employees are using, restrict installations to approved tools, and train staff to treat browser-based AI tools with the same skepticism they would apply to any third-party application handling sensitive data. Individuals should verify the developer behind any AI extension before installing; a convincing name and a "Featured" badge are not sufficient proof of legitimacy.
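An extension audit of the kind recommended above can start from something as simple as reading each installed extension's manifest and flagging broad permissions. The sketch below is a minimal, hedged example: the directory layout mirrors how browsers store unpacked extensions, but the path handling and the "risky" permission list are assumptions for illustration, not a complete audit tool.

```python
# Minimal extension-audit sketch: walk a directory of unpacked
# extensions, read each manifest.json, and flag broad permissions.
import json
import tempfile
from pathlib import Path

# Assumed watchlist of permissions worth reviewing; tune to policy.
RISKY_PERMISSIONS = {"<all_urls>", "tabs", "webRequest", "scripting"}


def audit_extensions(ext_root: Path) -> list[tuple[str, set[str]]]:
    """Return (extension name, risky permissions) for each flagged extension."""
    findings = []
    for manifest_path in sorted(ext_root.glob("*/manifest.json")):
        manifest = json.loads(manifest_path.read_text())
        perms = set(manifest.get("permissions", [])) | set(
            manifest.get("host_permissions", [])
        )
        risky = perms & RISKY_PERMISSIONS
        if risky:
            findings.append((manifest.get("name", manifest_path.parent.name), risky))
    return findings


# Demo with a fabricated extension directory; a real audit would point
# ext_root at the browser profile's extensions directory instead.
root = Path(tempfile.mkdtemp())
(root / "fake-ai-sidebar").mkdir()
(root / "fake-ai-sidebar" / "manifest.json").write_text(
    json.dumps({"name": "AI Sidebar", "permissions": ["tabs", "storage"]})
)
findings = audit_extensions(root)
```

Even this rough pass surfaces which extensions can read every page a user visits, which is exactly the capability the campaign described here abuses.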
FAQs
Can mobile browsers be affected by the same type of malicious AI extensions?
Mobile browsers like Chrome on Android and iOS do not support extensions the same way desktop browsers do, making this specific attack vector limited to desktop users.
Does uninstalling a suspicious extension remove the risk?
Uninstalling stops future data collection, but any information already transmitted to attacker-controlled servers before removal may have been retained or sold.
Is the data collected by these extensions typically sold or used directly by the attackers?
Stolen data of this kind is commonly sold on dark web marketplaces, used to conduct follow-on attacks, or leveraged for corporate espionage.
