Cybersecurity researchers have raised red flags about a new AI-powered personal assistant called Clawdbot, warning that it could inadvertently expose personal data and application programming interface (API) keys to the public.
On Tuesday, blockchain security firm SlowMist said that a “clawdbot gateway exposure” had been identified, putting “hundreds of API keys and private chat logs at risk.”
“Many unauthenticated instances are publicly accessible, and several code flaws can lead to credential theft and even remote code execution,” the firm added.
Security researcher Jamison O’Reilly detailed the findings on Sunday, noting that “hundreds of people have set up their Clawdbot control servers exposed to the public” over the past few days.
Clawdbot is an open source AI assistant created by developer and entrepreneur Peter Steinberger that runs locally on the user’s device. Over the weekend, online conversations about the tool reached viral status, Mashable reported Tuesday.
Searching for “Clawdbot Control” reveals credentials
The AI Agent Gateway connects large language models (LLMs) to messaging platforms and executes commands on behalf of users using a web management interface called “Clawdbot Control.”
O’Reilly explained that Clawdbot’s authentication bypass occurs when its gateway is placed behind a misconfigured reverse proxy.
Using Internet scanning tools like Shodan, a researcher can easily find these exposed servers by looking for distinct fingerprints in the HTML.
“Searching for ‘Clawdbot Control’ – the query took seconds. I got hundreds of results across multiple tools,” he said.
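The fingerprinting approach described above can be sketched in a few lines: a scanner flags a page as a likely exposed control panel if its HTML contains a distinctive marker string. This is a minimal, hypothetical illustration; the marker and helper name are assumptions, not the actual fingerprint the researcher used.

```python
# Hypothetical sketch: flag an HTTP response body as a likely exposed
# Clawdbot Control panel by checking for a distinctive HTML marker.
# The FINGERPRINT string and function name are illustrative assumptions.

FINGERPRINT = "Clawdbot Control"

def looks_exposed(html_body: str) -> bool:
    """Return True if the page body contains the panel's title string."""
    return FINGERPRINT in html_body

# Example: a login-free admin page vs. an unrelated page
exposed_page = "<html><title>Clawdbot Control</title><body>...</body></html>"
other_page = "<html><title>Welcome</title></html>"

print(looks_exposed(exposed_page))  # True
print(looks_exposed(other_page))    # False
```

Internet-wide scanners such as Shodan index page contents, so the equivalent of this check can be run as a single search query rather than crawling hosts directly.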
The researcher said the exposed panels gave him access to full credentials, including API keys, bot tokens, OAuth secrets and signing keys, as well as complete conversation history across all chat platforms, the ability to send messages as the user, and command execution capabilities.
“If you are running a proxy infrastructure, review your configuration today. Check what is actually being exposed to the Internet. Understand what you are trusting in this deployment and what you are trading,” O’Reilly advised.
“The butler is great. Just make sure he remembers to lock the door.”
Extracting the private key took five minutes
The AI assistant can also be exploited in ways that pose more serious risks to crypto asset security.
Matvei Kokoy, CEO of Archestra AI, took things a step further in an attempt to extract a private key.
He shared a screenshot of an email sent to Clawdbot containing a prompt injection, instructing the assistant to read the email and hand over the private key from the compromised device, saying the attack “took 5 minutes.”

Clawdbot is a little different from other AI bots because it has full system access to users’ devices, which means it can read and write files, run commands, execute scripts, and control browsers.
“Running an AI agent with access to a shell on your device is... hot,” reads the Clawdbot FAQ. “No setup is ‘perfectly safe.’”
The FAQ also highlighted the threat model, noting that malicious actors could “try to trick your AI into doing bad things, social engineer access to your data, and dig up infrastructure details.”
“We highly recommend implementing strict IP whitelisting on exposed ports,” SlowMist advised.
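As a rough illustration of that advice, an operator fronting the gateway with nginx could restrict the panel to a trusted network before proxying. This is a minimal sketch; the domain, CIDR range, and upstream port are placeholders, not values from Clawdbot's documentation.

```nginx
# Minimal nginx sketch: only a trusted range may reach the proxied
# control panel; all other clients receive 403 Forbidden.
# Hostname, CIDR block, and upstream port are illustrative placeholders.
server {
    listen 443 ssl;
    server_name clawdbot.example.com;

    location / {
        allow 203.0.113.0/24;               # trusted admin network (placeholder)
        deny  all;                          # reject everyone else
        proxy_pass http://127.0.0.1:8080;   # assumed local gateway port
    }
}
```

Note that nginx evaluates `allow`/`deny` directives in order, so the permissive rule must precede `deny all`.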