OpenClaw Security Fears Lead Meta, Other AI Firms To Restrict Its Use

Published: (February 19, 2026 at 07:02 PM EST)
3 min read
Source: Slashdot

Source: Slashdot

Background

An anonymous reader quotes a report from Wired:

Last month, Jason Grad issued a late‑night warning to the 20 employees at his tech startup:

“You’ve likely seen Clawdbot trending on X/LinkedIn. While cool, it is currently unvetted and high‑risk for our environment,” he wrote in a Slack message with a red siren emoji. “Please keep Clawdbot off all company hardware and away from work‑linked accounts.”

Grad isn’t the only tech executive who has raised concerns about the experimental agentic AI tool, which was briefly known as MoltBot and is now named OpenClaw. A Meta executive recently told his team to keep OpenClaw off their regular work laptops or risk losing their jobs, citing unpredictability and potential privacy breaches. The executive spoke on the condition of anonymity.

Company Responses

Massive

Grad, co‑founder and CEO of Massive (a provider of Internet proxy tools to millions of users and businesses), says the company’s policy is “mitigate first, investigate second” when encountering anything that could be harmful to the company, its users, or its clients.

“Our policy is, ‘mitigate first, investigate second’ when we come across anything that could be harmful to our company, users, or clients,” Grad explained.

His warning to staff went out on January 26, before any employees had installed OpenClaw.

Valere

Valere, which develops software for organizations including Johns Hopkins University, initially banned OpenClaw. An employee posted about the tool on January 29 in an internal Slack channel for sharing new tech. The company’s president quickly responded that use of OpenClaw was strictly prohibited. Valere CEO Guy Pistone told Wired:

“If it got access to one of our developer’s machines, it could get access to our cloud services and our clients’ sensitive information, including credit card information and GitHub codebases.”
“It’s pretty good at cleaning up some of its actions, which also scares me.”

Controlled Experiment

A week later, Pistone allowed Valere’s research team to run OpenClaw on an employee’s old computer to identify flaws and potential fixes. The team recommended:

  • Limiting who can give orders to OpenClaw.
  • Exposing the system to the Internet only with a password‑protected control panel.

In a report shared with Wired, the researchers noted that users must “accept that the bot can be tricked.” For example, if OpenClaw is set up to summarize a user’s email, a hacker could send a malicious email instructing the AI to share copies of files on the user’s computer.

Pistone remains confident that safeguards can be implemented. He gave the team 60 days to investigate, stating:

“If we don’t think we can do it in a reasonable time, we’ll forgo it. Whoever figures out how to make it secure for businesses is definitely going to have a winner.”

Security Concerns

The rapid bans and restrictions across companies illustrate growing apprehension about OpenClaw’s security implications. Key concerns include:

  • Unpredictable behavior that could lead to privacy breaches.
  • Potential for malicious prompting, allowing attackers to exfiltrate data.
  • Difficulty in controlling access, especially when the AI can be instructed to perform unintended actions.

These issues have prompted firms to prioritize mitigation and thorough investigation before considering any deployment of OpenClaw in production environments.

0 views
Back to Blog

Related posts

Read more »