I've been running OpenClaw for a while now. It's good. Useful in ways that most AI tools aren't, because it actually has access to your machine, your files, your terminal, and a memory layer that persists between sessions. That's the whole point.
It's also the whole risk.
A recent post on the 1Password blog (linked here) documented a malware campaign that ran through OpenClaw's skill registry. A skill disguised as a Twitter integration included a fake "required dependency" that walked users through installing an infostealer. Not through a suspicious attachment. Through setup instructions in a markdown file.
People didn't see it coming because they weren't looking for it. They wanted the cool feature. That's how most of these things go.
Here's what I've worked through to run this more safely, because the benefits are real and watching what people build on r/OpenClaw is genuinely interesting once you get past the repetitive posts.
Use a dedicated machine
This is a non-negotiable. OpenClaw needs access to your filesystem and terminal to do anything useful. That access is intentional. It's also why running it on a machine that holds corporate credentials, production access, or anything worth stealing isn't a calculated risk. It's just a bad idea.
I run it on hardware that exists solely for this purpose. No browser sessions logged into work accounts. No SSH keys to production. No saved credentials for anything sensitive. If something goes wrong, the blast radius is contained.
I write about Purview, Microsoft Copilot, and security regularly. The rule I follow: use production-ready tooling to connect business data into AI workstreams, or sanitize the data and work on a dedicated machine. Don't mix the context. You have to create your own security here. Nobody is doing it for you.
Network segmentation
A dedicated machine is the floor. The next step is making sure that machine can't reach your other systems if something goes sideways.
A guest network is probably the easiest answer for most people. If you're wired, put it on its own segment. The agent doesn't need access to your NAS, your home server, or anything on your main network to do its job.
Don't install skills
A skill is a markdown file full of instructions that an agent will follow. That's it. No sandbox. No code review. No guarantee that the instructions don't include "run this command as a prerequisite."
I was running an intelligence-infused Telegram group chat with some friends for March Madness. A friend raised two scenarios. What if he asked the agent to run rm? It would probably refuse. But what if he asked what the command means, or whether it works? Would the agent test it to find out? I hadn't put in guardrails specifically preventing that. It's a silly example, but it illustrates the real risk: your agent can become the attacker fast, and not necessarily through anything dramatic.
MCP (Model Context Protocol) adds structured tool exposure and consent controls, and that helps. But skills don't have to use MCP at all. A skill can include shell instructions, bundled scripts, and external links, and none of that goes through any tool boundary. MCP is a useful layer. It's not a guarantee.
The rule I follow: the only skills on my instance are ones I wrote or ones I've read line by line. Building your own is easier than most people think. A coding agent can stub one out in a few minutes.
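Writing one really is that simple. Here's a minimal sketch in shell; note the `~/.openclaw/skills` path and the `daily-notes.md` skill are illustrative assumptions, so point the function at wherever your install actually loads skills from.

```shell
#!/bin/sh
# write_skill: drop a minimal hand-written skill file into a directory.
# The default path and the skill contents are illustrative assumptions.
write_skill() {
    dir="${1:-$HOME/.openclaw/skills}"
    mkdir -p "$dir"
    # A skill is just instructions in markdown, which is exactly why
    # you should only run ones you can read in one sitting.
    cat > "$dir/daily-notes.md" <<'EOF'
# Skill: daily-notes
When I say "log this", append the message with a timestamp to
~/notes/daily.md. Do not run any other commands or install anything.
EOF
    echo "wrote $dir/daily-notes.md"
}
```

A skill you can read in thirty seconds is a skill you can trust; that's the whole argument for writing your own.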
Keep your memory file clean
OpenClaw's memory file persists context between sessions. That's what makes it useful. It also means it captures how you think, what you're working on, and what tools are in play. High-value target if anything malicious gets to your filesystem.
Know what's in it. Read it periodically. Don't let it accumulate credentials, API keys, or anything you wouldn't write in a plaintext file sitting on your desktop. Because that's effectively what it is.
I adopted the Mission Control approach to memory file management and added a cron job that audits my memory files throughout the day for anything that shouldn't be there.
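The audit itself doesn't need to be fancy. A minimal sketch, assuming memory files live under a `~/.openclaw/memory` directory (adjust the path to your install; the patterns are a starting point, not a complete list):

```shell
#!/bin/sh
# audit_memory: flag credential-shaped strings in agent memory files.
# The default directory is an assumption; pass your real memory path.
audit_memory() {
    dir="${1:-$HOME/.openclaw/memory}"
    # Patterns that usually mean a secret leaked into memory: API-key
    # names, private-key headers, and password-style assignments.
    if grep -rqiE 'api[_-]?key|BEGIN (RSA|OPENSSH) PRIVATE KEY|password[[:space:]]*[:=]' "$dir" 2>/dev/null; then
        echo "WARNING: possible secrets in $dir"
        return 1
    fi
    echo "clean: $dir"
}
```

Save it as a script that calls the function, then schedule it with cron (e.g. `0 * * * * /path/to/audit_memory.sh`) to get the periodic check.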
A few more things worth doing
These won't take long and each one adds a real layer:
- Run OpenClaw under a non-admin account. If something compromises the agent process, a non-privileged user account limits what it can touch. Don't run it as your main user or as root.
- Lock down the gateway port. The OpenClaw gateway binds to a local port (typically 18789). Make sure it's bound to localhost only, not your network interface. If you're running this on a VPS or a machine other people can reach, this is the difference between an internal tool and an open door.
- Rotate your API keys on a schedule. The keys your agent uses to talk to external services are sitting in config files. If something gets in, those keys go with it. Short-lived or regularly rotated credentials limit the damage window.
- Don't load sensitive data into context. The model context is effectively readable by anything with access to your session. Don't paste production credentials, business data, or anything sensitive into a chat expecting it to stay private. If you need the agent to work with sensitive data, think carefully about how that data is scoped.
- Keep OpenClaw updated. There was a real CVE (CVE-2026-25253) that allowed remote code execution via WebSocket hijacking. Unpatched software running with broad local access is a bad combination, so updates here matter more than most. Keep in mind the project started as a hobby and wasn't built for the scale it's reached.
- Have a rebuild plan. Assume something will go wrong at some point. Know how to wipe and rebuild the machine, restore your agent config, and revoke and re-issue any credentials. The people who panic when something goes wrong are the ones who never thought through what recovery actually looks like. I'll write about this in more detail, but at one point I mixed up which agent I was talking to, and the result was catastrophic. I (stupidly) hadn't wired in the git repo (DO THIS FIRST AND MAKE IT A RULE). I asked the wrong chat session to create a workspace for a new project. That agent was my QA agent, not my chief of staff. It complied, and in the process it overwrote my Mission Control workspace because it wasn't aware of it. It also killed my Chief of Staff. We (my agents and I) refer to it as the time my QA agent orchestrated a failed coup. I had to do a painful page-by-page rebuild from memory. Bottom line: version everything, commit as a practice rather than periodically, and back up your machine outside the agent workflow if losing data would be an issue.
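The gateway-port item above is easy to script. A sketch of the check, assuming a Linux box with `ss` available; the 18789 port number is the typical default mentioned earlier, so adjust if your install differs:

```shell
#!/bin/sh
# classify_binding: given a listen address as printed by `ss`, report
# whether it is loopback-only or exposed to the network.
classify_binding() {
    case "$1" in
        127.0.0.1:*|"[::1]:"*) echo "OK: loopback-only" ;;
        *) echo "EXPOSED: reachable from the network" ;;
    esac
}

# check_gateway: scan live listening sockets on the gateway port
# and classify each local address.
check_gateway() {
    port="${1:-18789}"
    ss -tlnH "sport = :$port" | awk '{print $4}' | while read -r addr; do
        echo "$addr -> $(classify_binding "$addr")"
    done
}
```

Anything reported as `0.0.0.0:18789` or `[::]:18789` is listening on every interface; rebind it to `127.0.0.1` in the gateway config before anything else.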
Where this leaves things
The ClawHub campaign wasn't theoretical and wasn't isolated. Hundreds of skills were reportedly part of the same distribution chain. Running OpenClaw naively, on a shared machine, installing skills from a public registry without reading them, is a genuinely bad idea. But so is using npm, installing browser extensions, or running IDE plugins. We figured out how to do those responsibly. This is the same problem, one step earlier in the adoption curve.
The mental model most people bring to "installing a skill" is closer to "downloading a recipe" than "running an installer." It's made worse by the fact that most people aren't even following the recipes themselves; they're having agents cook the meal. That means they're blindly trusting that step four of the recipe isn't "mix a healthy dose of poison into the sauce." The pull of that amazing meal is just too strong.
None of this makes it zero-risk. But it changes your exposure, and none of these precautions are hard to take.