Security researchers have discovered that misconfigured enterprise deployments of Moltbot (formerly Clawdbot), an open-source AI assistant, can leak API keys, OAuth tokens, conversation history, and credentials.
Moltbot, created by Peter Steinberger, allows deep system integration and runs locally on user devices, connecting directly to apps including messengers, email clients, and filesystems. Unlike cloud-based chatbots, it runs 24/7 with persistent memory and can execute scheduled tasks.

Careless deployment, however, has left hundreds of admin interfaces exposed online. Pentester Jamieson O'Reilly found that misconfigured reverse proxies cause Moltbot to treat all internet traffic as trusted, because it auto-approves connections that appear "local". This configuration flaw allows unauthenticated access to sensitive data, credentials, conversation history, and even root-level system access. Token Security reports that 22% of its enterprise customers have employees actively using Moltbot, likely without IT approval.
O'Reilly documented several security implications:
O'Reilly described one exposed instance: "Someone [...] had set up their own Signal (encrypted messenger) account on their public-facing clawdbot control server – with full read access. That's a Signal device linking URI (there were QR codes also). Tap it on a phone with Signal installed and you're paired to the account with full access."
When O'Reilly tried to alert the owner through an exposed chat, the AI agent could not provide any contact information for its operator, leaving no way to resolve the security issue.
Moltbot's popularity surged thanks to its ease of setup and unusual capabilities; viral adoption even drove up sales of Mac Mini computers as users sought dedicated host machines for the chatbot.

Skills are packaged instruction sets or modules that extend Moltbot's functionality, similar to plugins or extensions. They can be published on the official MoltHub registry, where developers share and download capabilities.

Reverse proxies are servers that sit between clients and backend servers, typically used to distribute traffic and add security layers. When misconfigured in front of Moltbot, however, they create the core vulnerability: the AI assistant treats all incoming traffic as trusted local connections rather than as potentially hostile internet requests.
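A minimal sketch of this failure mode (hypothetical code, not Moltbot's actual implementation): a service that decides trust from the TCP peer address alone cannot distinguish a genuine local user from internet traffic relayed by a reverse proxy on the same host, because both arrive from the loopback address.

```python
from ipaddress import ip_address

def is_trusted(peer_ip: str) -> bool:
    # Flawed check: auto-approve any connection from a loopback address.
    # Behind a reverse proxy, *every* request reaches the backend from the
    # proxy's address (typically 127.0.0.1), so this passes for the internet.
    return ip_address(peer_ip).is_loopback

# Direct local request -> trusted, as intended.
print(is_trusted("127.0.0.1"))    # True
# Hostile internet request relayed by a local reverse proxy: the backend
# still sees 127.0.0.1, so it is trusted too -> indistinguishable.
print(is_trusted("127.0.0.1"))    # True
# Only a *direct* remote connection would be rejected, and the proxy
# ensures the backend never sees one.
print(is_trusted("203.0.113.9"))  # False
```

The fix is not to infer trust from the socket address at all: the proxy must authenticate clients itself, or forward the real client address so the backend can make an informed decision.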
That 22% of Token Security's enterprise customers have employees running Moltbot, likely without IT approval, shows how quickly AI tools can slip past traditional security controls as shadow IT. Unlike traditional software vulnerabilities that require specific exploits, these Moltbot exposures stem from fundamental misunderstandings about how the tool handles authentication and network boundaries.
Hudson Rock's warning that info-stealers are adapting to target Moltbot's plaintext credential storage suggests the problem will escalate as malware evolves. Healthcare organizations face particular risk: medical data accessed through these AI assistants could be exposed alongside credentials and API tokens, creating potential patient privacy violations.
Organizations should audit their environments for unauthorized Moltbot deployments and put controls on AI assistant usage. For those who continue using Moltbot, safe deployment means isolating the AI instance in a virtual machine with properly configured firewall rules, rather than running it directly on a host operating system with root access.
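One way to apply that guidance is a default-deny firewall policy inside the isolating VM. Below is a hedged sketch in iptables-restore format; the admin port (8080) and the management subnet (10.0.5.0/24) are assumptions to adapt to your environment.

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Allow replies to connections the VM itself initiated.
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Allow local processes to talk to each other.
-A INPUT -i lo -j ACCEPT
# Admin interface reachable only from the management subnet (assumed).
-A INPUT -p tcp --dport 8080 -s 10.0.5.0/24 -j ACCEPT
# Everything else from the network is dropped by the INPUT policy.
COMMIT
```

The key design choice is the DROP default on INPUT: the assistant's interface is unreachable unless a rule explicitly permits the caller, which is the opposite of the exposed deployments described above.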
Regulated sectors like healthcare, finance, and legal services face the greatest impact due to the sensitivity of data accessible through AI assistants.
Reverse proxies can unintentionally make internet traffic appear local, bypassing Moltbot’s trust boundaries and authentication checks.
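To illustrate, here is a hypothetical nginx fragment (port, paths, and filenames are assumptions). The proxy terminates the internet connection and opens a fresh one to the backend, so Moltbot sees the proxy's address rather than the real client's; the mitigations are to authenticate at the proxy and forward the original client address.

```nginx
server {
    listen 443 ssl;

    location / {
        # Without authentication here, anyone on the internet reaches the
        # backend "from localhost":
        proxy_pass http://127.0.0.1:8080;

        # Mitigation 1: enforce authentication at the proxy itself.
        auth_basic           "Moltbot admin";
        auth_basic_user_file /etc/nginx/htpasswd;

        # Mitigation 2: forward the real client address so the backend
        # can refuse non-local callers.
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP       $remote_addr;
    }
}
```

Note that forwarded headers only help if the backend actually reads them; a backend that trusts the socket address alone remains exposed.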
Attackers could use compromised assistants to pivot deeper into corporate networks, automate lateral movement, or schedule malicious tasks for later execution.