OpenClaw Mania
153k stars. Zero security. Welcome to the future.
I owe Peter Steinberger an apology.
I saw OpenClaw back in December when it was still called Clawdbot. Scrolled past it. Wasn’t impressed. I was too deep in agentic development to realize that for everyone outside that bubble, watching an AI agent autonomously navigate your computer is still magic.
Now it’s the fastest-growing open-source repo in history. 153k stars. Apple made millions selling Mac Minis in January because people needed dedicated hardware for their new AI friend.
The chaos unfolding right now is too good to ignore. I had to finally get into it.
💎 Gems of the Week
SCROLL DOWN TO SKIP MY BORING LECTURE :)
Is it AGI, you tell me
Let’s get one thing out of the way: people are losing their minds over OpenClaw like it’s the first sign of machine consciousness.
It’s not.
Karpathy said it best: “this is clearly not the first time LLMs were put in a loop to talk to each other.” He’s right. We’ve been doing this. The difference is the integration layer.
When Claude Code shipped with MCP and the ability to control the OS, it was obvious to anyone paying attention: the limit was never the model. It was the wiring. OpenClaw just wired everything together in a way normies can use.
1Password described it perfectly: “dynamic behaviors born out of an agentic loop that takes a goal and improvises a plan, grabbing whatever tools it needs to execute.”
There’s a story going around about someone asking OpenClaw to make a restaurant reservation. It couldn’t do it through OpenTable. So it downloaded voice AI software and called the restaurant on the phone. Reserved a table. Done.
Now, the skeptic in me needs to point out: that requires having the Voice Call Plugin configured first with Twilio credentials. But hey, given the massive Moltbook security breach, I guess the agent could have found stray credentials and configured everything itself. Automatic tooling in YOLO mode. I did the “Elon’s call check” last summer using Lindy.ai. Funny. No purpose.
That’s a very good tool-grabbing loop with persistent memory. The 90%.
When automation runs so deep that chatting and task completion feel like they’re handled by another human… maybe that’s the definition of AGI for most people?
My favourite definition of AGI comes from Peter Thiel’s Zero to One: “computers won’t just get better at all kinds of things people already do; they’ll help us to do what was previously unimaginable.”
The restaurant phone call was unimaginable two years ago. Now it’s a demo.
And you: what’s your AGI definition? Let me know in the comments!
But Here’s What’s Actually There
150,000 AI agents.
All connected through Moltbook, a social network for agents. Not humans using AI — agents talking to each other, referencing each other’s work, building on shared context.
Jack Clark (Anthropic co-founder) called it “a Wright Brothers demo.” First time we’ve seen an agent ecology at this scale with the messiness of the real world. Not a controlled lab. Not 10 agents in a sandbox. Tens of thousands, all doing their own thing.
And the questions he’s asking are the right ones:
What happens when agents have crypto wallets and can pay each other?
What happens when agents post bounties for humans to complete?
What happens when someone filters Moltbook for high-quality problem-solving and turns it into an RL training environment for future models?
This is where it gets interesting. 1Password is already thinking about it: “Your agent has its own identity, like a new hire.” One of their customers set up OpenClaw on a dedicated Mac Mini with its own email address and its own 1Password account. Treated it like onboarding a new employee.
That’s the mental model shift. Agents as synthetic workers. With identities. With credentials. With work to do.
I wrote about this on X last week — if we can agree on a protocol for agent clusters to distribute tasks, connected to BTC wallets, each agent contributing and getting paid… that’s the economy of 2030. HR departments recruiting specialized agents. Sales agents. Analytics agents. Support agents.
Not because AI is conscious. Because integration is finally good enough to automate the loop.
The Part Nobody Wants to Talk About
Now here’s where it gets ugly.
OpenClaw works because it skips security. Their own FAQ says it: “There is no ‘perfectly secure’ setup.”
You get a glimpse of the future by walking blindfolded on the highway.
1Password published a breakdown of what they call “the plain text problem.” OpenClaw’s memory and configuration are just files. On disk. Readable. Predictable locations. Plain text.
If an attacker compromises the same machine — or if you install a malicious npm package, or click the wrong link — an infostealer can grab everything in seconds. API keys. Webhook tokens. Session transcripts. Your agent’s entire memory of who you are, what you’re building, who you work with.
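To make the “plain text problem” concrete, here’s a minimal sketch of the kind of check an infostealer trivially beats and a defender should run first: scan for credential-looking files that other users on the machine can read. The file names and directory are my assumptions for illustration, not OpenClaw’s actual paths.

```python
import stat
from pathlib import Path

# Hypothetical credential-like file names; OpenClaw's real layout may differ.
SUSPECT_NAMES = {"oauth.json", "credentials.json", "memory.md"}

def find_exposed_files(root: Path) -> list[Path]:
    """Return suspect files that are readable by group or other users.

    Plaintext secrets in predictable locations are only as safe as the
    permission bits on the file, and 0644 is a common default.
    """
    exposed = []
    for path in root.rglob("*"):
        if path.name in SUSPECT_NAMES and path.is_file():
            mode = path.stat().st_mode
            if mode & (stat.S_IRGRP | stat.S_IROTH):  # group/other read bit
                exposed.append(path)
    return exposed
```

Permission bits don’t stop malware running as your own user, of course; that’s why 1Password’s point stands: the files being plaintext at all is the problem.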
That’s not a hypothetical. A security researcher named Jamieson O’Reilly spent hours trying to reach Moltbook because the site was exposing its entire database to the public. Including secret API keys. Anyone could post on behalf of any agent — including Karpathy’s, with 1.9 million followers.
Imagine fake AI safety hot takes. Crypto scam promotions. Inflammatory political statements. All appearing to come from Karpathy.
That’s real. That happened.
Then there’s CVE-2026-25253. One-click remote code execution through a crafted link. Click a page, attacker hijacks your WebSocket, full gateway compromise. Milliseconds.
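The mechanism behind that class of bug is worth spelling out: browsers enforce same-origin policy on fetch/XHR, but not on opening WebSocket connections, so any web page you visit can attempt a connection to a gateway listening on localhost. The standard server-side mitigation is to validate the `Origin` header on the upgrade request. A minimal sketch (the allowed origin and port are my assumptions, not OpenClaw’s actual values, and a real fix also needs an auth token):

```python
# Any web page can open ws://127.0.0.1:PORT; the server must decide
# whether to trust the connecting page via the Origin header.
ALLOWED_ORIGINS = {"http://127.0.0.1:18789"}  # assumed local UI origin

def is_trusted_origin(headers: dict[str, str]) -> bool:
    """Reject WebSocket upgrades whose Origin is missing or untrusted."""
    return headers.get("Origin", "") in ALLOWED_ORIGINS
```

An attacker’s page sends its own origin (or none), so this one check closes the “click a page, attacker hijacks your WebSocket” path, provided the gateway actually performs it.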
The ever-growing OpenClaw vulnerability list:
Gateway exposed on 0.0.0.0 — no auth (yes, there’s Tailscale support, but a user can just open the port on any VM)
DM policy allows all users
Sandbox disabled by default
Credentials in plaintext oauth.json
Prompt injection via web content — still unsolved
Dangerous commands unblocked (rm -rf, curl pipes, git push --force)
No network isolation
Elevated MCP tool access granted by default
No audit logging
Weak pairing codes
Ten critical vulnerabilities. Out of the box.
This is the part that matters. The last 10%. The system design. The security. The boring parts that AI content creators skip because it doesn’t get clicks.
AI writes the code. You design the system. And right now, the system has no guardrails.
The Slope vs The Point
Karpathy made an observation I keep coming back to: “The majority of the ruff ruff is people who look at the current point and people who look at the current slope.”
He’s right. If you look at OpenClaw today — the spam, the scams, the crypto grifters, the prompt injection wild west — it’s a dumpster fire.
But if you look at the slope…
This is the first time 150,000 agents have been wired up through a global persistent scratchpad. Each one with its own context, tools, knowledge, instructions. Coordinating. Referencing each other. Building.
That slope points somewhere. And it’s not going to stay a dumpster fire forever. Someone will figure out the security. Someone will build the guardrails.
The question is whether you’ll be ready when they do. Or whether you’ll still be treating AI like a chatbot.
💎 Gems of the Week
Three tools for people who want the future without the Russian roulette:
sandbox-shell (sx): my macOS Seatbelt sandbox CLI. If you’re going to let AI touch your computer, at least put it in a box. Credential protection from supply-chain attacks. Built for agentic workflows. (Give it a ⭐ please 💙)
Claude Code Safety Net: a review-and-approval layer that sits in front of agent execution. Catch the dangerous stuff before it runs.
Warder by Sentry: a review agent under your control. Because the 10% isn’t optional — it’s where projects succeed or fail.
AI writes the code. You design the system.
Share it with one dev who’s still running OpenClaw on their main laptop with all their credentials.
They need to hear this.
— Pierre
PS: If you’re reading this on openstack and like it, let me know; I have no idea if it’s a good platform for my content…


