OpenClaw security fears lead Meta, other AI firms to restrict its use
Source: Ars Technica
Policy at Massive
“Our policy is, ‘mitigate first, investigate second’ when we come across anything that could be harmful to our company, users, or clients,” says Grad, co‑founder and CEO of Massive, which provides Internet proxy tools to millions of users and businesses. His warning to staff went out on January 26, before any of his employees had installed OpenClaw, he says.
Valere’s Ban on OpenClaw
An employee at Valere, a software provider for organizations including Johns Hopkins University, posted about OpenClaw on January 29 in an internal Slack channel for sharing new tech to potentially try out. The company’s president quickly responded that use of OpenClaw was strictly banned, Valere CEO Guy Pistone tells WIRED.
Concerns
“If it got access to one of our developer’s machines, it could get access to our cloud services and our clients’ sensitive information, including credit card information and GitHub codebases,” Pistone says. “It’s pretty good at cleaning up some of its actions, which also scares me.”
Limited Test
A week later, Pistone allowed Valere’s research team to run OpenClaw on an employee’s old computer. The goal was to identify flaws in the software and potential fixes to make it more secure. The research team later advised:
- Limiting who can give orders to OpenClaw.
- Exposing it to the Internet only with a password protecting its control panel to prevent unwanted access.
Findings from Valere Researchers
In a report shared with WIRED, the Valere researchers added that users have to “accept that the bot can be tricked.” For instance, if OpenClaw is set up to summarize a user’s email, a hacker could send a malicious email instructing the AI to share copies of files on the person’s computer.