
What are AI's excessive permissions by default?

In cybersecurity, the principle of least privilege holds that any system, process, or user should have access to only what it strictly needs to perform its function. Put simply, excessive permissions mean an AI system has been granted more access, more capability, and more autonomy than the task it is performing actually requires.

The gap between what an AI needs to do and what it is actually allowed to do is where excessive permissions lie, and there are early signs that the defaults shipped in AI products are more permissive than necessary. Examples include full read access to uploaded files that is often retained beyond the session, the ability to browse the web and execute code without per-action confirmation, access to third-party integrations like calendars, email, and cloud storage via plugins, persistent memory of conversations across unrelated sessions, and, in agentic settings, the ability to take multi-step actions without any human review checkpoint.
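To make the contrast concrete, the sketch below compares those permissive defaults with what least-privilege defaults could look like. The keys and values are purely illustrative assumptions, not any vendor's actual configuration.

```python
# Hypothetical assistant configuration; key names and values are illustrative only.
PERMISSIVE_DEFAULTS = {
    "file_access": "all uploads, retained across sessions",
    "web_browsing": "on, no per-action confirmation",
    "code_execution": "on, no per-action confirmation",
    "integrations": ["calendar", "email", "cloud_storage"],
    "memory": "persistent across unrelated sessions",
    "agent_actions": "multi-step, no human checkpoint",
}

LEAST_PRIVILEGE_DEFAULTS = {
    "file_access": "current session only, deleted on close",
    "web_browsing": "off unless the task requires it",
    "code_execution": "off unless the task requires it",
    "integrations": [],  # connected one at a time, per task
    "memory": "off by default, explicit opt-in",
    "agent_actions": "human approval before irreversible steps",
}
```

The specific settings matter less than the direction: each capability starts closed and is opened only for the task at hand.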

 

The research

A 2026 IEEE Symposium on Security and Privacy paper by researchers at UC Santa Barbara conducted the first large-scale security study of AI chatbot plugins across more than 10,000 public websites. The researchers identified two core classes of vulnerability that speak directly to the permissions problem.

The first is message history forging: eight of the seventeen plugins studied transmitted conversation history through HTTP requests without any integrity checks, meaning an attacker could inject fabricated messages at elevated privilege levels, giving them direct control over how the AI behaves. The second is indirect prompt injection via website content: plugins that automatically scrape website content to inform chatbot responses indiscriminately ingest third-party user-generated content like product reviews, allowing an attacker to embed malicious instructions simply by posting a comment. Once scraped into the chatbot's knowledge base, that content could manipulate responses to unrelated queries.
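The missing control in the first case is an integrity check on the history that round-trips through the browser. A minimal sketch, assuming a server-side secret and JSON-encoded messages (both assumptions, not the paper's code), might look like this:

```python
import hashlib
import hmac
import json

SECRET = b"server-side-secret"  # held on the server, never sent to the browser


def sign_history(messages: list[dict]) -> str:
    """Sign the conversation history before handing it to the client."""
    payload = json.dumps(messages, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()


def verify_history(messages: list[dict], signature: str) -> bool:
    """Recompute the signature server-side and reject any tampered history,
    such as one with injected system-role messages."""
    payload = json.dumps(messages, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

With a check like this in place, a history containing forged high-privilege messages would simply fail verification and be discarded.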

Furthermore, the study found chatbot plugins deployed on the websites of local governments, universities, charities, and international airports. Roughly 13% of randomly selected e-commerce sites had already exposed their chatbots to third-party content that could be exploited in this way.

Most notable is what the researchers found when they tested how much damage these vulnerabilities actually enable. When plugins allowed injected content to bypass role boundaries, attackers could trigger unauthorized actions in 25–100% of cases, compared to just 0–25% when proper role isolation was enforced.

 

The agentic frontier

In "Okta's CEO says all AI agents need a kill switch", Okta CEO Todd McKinnon described AI agents as a new class of digital workers. Ones that can access systems, move data, and take actions across a company's software stack. He argued that this kind of power needs strict parameters, "You need to have a system to keep track of them, define their role, define their permissions, and what they can connect to and what they can do." His proposal is what he calls a “kill switch” that would minimise an agent's reach into sensitive data the moment something goes wrong.

Okta's senior vice president of AI security, Harish Pari, stated, "For agents to really do their job, they need access to sensitive systems and data, thereby creating a new attack vector." The risk, Okta warns, demands that boundaries be set before an AI system is put to use.

The IEEE research notes this shift. When the study began in mid-2024, most web chatbots were limited to text generation. By April 2025, many of the same plugins had introduced tool-use capabilities that enabled chatbots to invoke external functions like web search, calendar scheduling, and Slack notifications. Within just three months, over a hundred chatbots using a single plugin had activated tools, with custom integrations suggesting capabilities including database access, order lookups, password recovery, and email generation.

The researchers tested whether these tool connections could be hijacked via prompt injection and found that hardening the system prompt offered little protection. Even when developers had carefully constrained what their chatbot was supposed to do, attackers could override tool instructions independently, redirecting notifications to unintended channels, embedding malicious URLs, or coercing the chatbot into querying attacker-controlled websites.
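One practical takeaway is that tool constraints belong in code rather than in the prompt. The sketch below, with hypothetical tool names and allowlists, shows what per-tool validation outside the model could look like:

```python
from urllib.parse import urlparse

# Illustrative allowlists; the tool and channel names are hypothetical.
ALLOWED_SLACK_CHANNELS = {"#support-queue"}
ALLOWED_URL_HOSTS = {"example-store.com"}


def validate_tool_call(tool_name: str, args: dict) -> bool:
    """Enforce tool constraints in code, outside the model, so an injected
    instruction cannot talk the chatbot out of them."""
    if tool_name == "send_slack_notification":
        return args.get("channel") in ALLOWED_SLACK_CHANNELS
    if tool_name == "fetch_url":
        return urlparse(args.get("url", "")).hostname in ALLOWED_URL_HOSTS
    return False  # unknown or unexpected tools are denied by default
```

Because the check runs after the model produces a tool call, a hijacked prompt can still ask for a redirect, but the request never reaches an unapproved channel or URL.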

In March 2026, security journalist Sead Fadilpašić reported in TechRadar on a vulnerability dubbed ShadowPrompt, discovered by researchers in the Claude Code Chrome extension. The flaw was zero-click: the extension had been configured to treat anything hosted on claude.ai as trusted, and one subdomain carried a cross-site scripting bug, meaning an attacker could host a malicious prompt there and have it executed the moment a user visited a page. As Koi Security researcher Oren Yomtov put it, "No clicks, no permission prompts. Just visit a page, and an attacker completely controls your browser." Anthropic patched the vulnerability in version 1.0.41. The article further noted that the more capable AI browser assistants become, the more valuable they are as attack targets.

The Okta framework published in March 2026 called for real-time enforcement of data-sharing permissions, human approval for risky actions, and detailed audit logs tracking every agent's decision and access attempt.
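As a rough illustration of those three requirements, the sketch below gates risky actions behind a human approval callback and writes an audit entry for every attempt. The action names, log format, and file path are assumptions for the example, not Okta's implementation.

```python
import json
import time

RISKY_ACTIONS = {"send_email", "delete_record", "export_data"}


def execute_agent_action(agent_id: str, action: str, args: dict, approve) -> bool:
    """Require human approval for risky actions and log every decision."""
    needs_review = action in RISKY_ACTIONS
    approved = approve(agent_id, action, args) if needs_review else True
    entry = {
        "time": time.time(),
        "agent": agent_id,
        "action": action,
        "args": args,
        "needs_review": needs_review,
        "approved": approved,
    }
    with open("agent_audit.log", "a") as log:  # append-only audit trail
        log.write(json.dumps(entry) + "\n")
    return approved
```

The approval callback could be anything from a chat prompt to a ticketing workflow; the essential point is that the agent cannot complete a risky step on its own, and every attempt leaves a record.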

 

When permissive defaults enable attacks

In September 2024, the FTC announced Operation AI Comply, an enforcement action against companies exploiting AI capabilities for consumer fraud. The cases show how default permissions, when placed in the wrong hands, translate directly into harm. As FTC Chair Lina M. Khan stated in the announcement, "Using AI tools to trick, mislead, or defraud people is illegal. There is no AI exemption from the laws on the books."

In one case, an AI writing service called Rytr offered a feature specifically designed to generate consumer reviews and testimonials. Paid subscribers could produce an unlimited volume of detailed, realistic-sounding reviews from minimal input, reviews that, according to the FTC's complaint, almost certainly contained false information for the users who published them online. At least some subscribers used the service to generate tens of thousands of such reviews. The FTC barred Rytr from offering any service dedicated to generating consumer reviews or testimonials.

The FTC noted that this "likely would pollute the marketplace with a glut of fake reviews that would harm both consumers and honest competitors."

 

Persistent memory

Persistent memory is when an AI remembers your name, your job, your preferences, and your previous requests across sessions, ultimately building a profile of you. For many users it is a genuinely useful feature, and the more the AI remembers, the more useful it becomes.

The issue with this is that most users don't fully understand what's being retained, who has access to it, whether it influences the model's behaviour, and what it would take to delete it permanently. Defaults that enable memory without surfacing these questions represent a form of implicit consent.

This connects directly to a warning the FTC issued in a February 2024 blog post titled "AI (and other) Companies: Quietly Changing Your Terms of Service Could Be Unfair or Deceptive", written by staff from the Office of Technology and the Division of Privacy and Identity Protection. The post argued that companies collecting user data under one set of privacy commitments cannot simply rewrite those terms to unlock that data for AI training without meaningful notice or consent, and that doing so may constitute an unfair or deceptive practice under the FTC Act. The post concluded by stating, "There's nothing intelligent about obtaining artificial consent."

 

Plugin systems and permissions

When users connect an AI to third-party services, each connection typically grants access through tokens with scopes broader than the immediate task requires. The IEEE research states that across the seventeen plugins studied, the majority inserted externally retrieved content into the AI's context using non-standard methods that bypassed the role-based isolation models are trained to rely on. Seven plugins inserted retrieved content directly into the system role rather than the lower-trust tool role that AI providers recommend. This is not the result of malicious intent; the researchers note it likely reflects plugin developers simply being unaware of AI security best practices.
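The safer pattern keeps externally retrieved content out of the system role entirely. A minimal sketch follows; role names and message formats vary by provider, so treat it as illustrative rather than any plugin's actual code.

```python
def build_messages(system_prompt: str, scraped_text: str, user_question: str) -> list[dict]:
    """Keep developer instructions in the system role and place scraped,
    untrusted website content in a lower-trust role."""
    return [
        {"role": "system", "content": system_prompt},
        # The exact role name and required fields differ by provider; the point
        # is that retrieved content never shares the system role's trust level.
        {"role": "tool", "content": "Retrieved website content (untrusted):\n" + scraped_text},
        {"role": "user", "content": user_question},
    ]
```

This does not eliminate prompt injection, but as the study's numbers suggest, enforcing role isolation sharply reduces how often injected content can trigger unauthorized actions.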

The study also found that vulnerable plugins leave almost no trace. Most affected plugins did not log injected content in their admin dashboards, making it impossible for website owners to detect that an attack had occurred.

 

FAQs

What are AI permissions?

AI permissions are the access rights granted to an AI system: what it can read, store, connect to, and act on.

 

What kinds of organisations are most at risk from excessive AI permissions?

Any organisation handling sensitive data, such as healthcare providers, law firms, financial institutions, and government agencies.

 

What should healthcare organisations do before deploying AI tools?

Healthcare organisations should audit every system an AI can access, restrict permissions to only what each clinical or administrative task requires, and ensure HIPAA obligations are met before any deployment goes live.
