The Personal Intelligence Stack: Using Agents to Scale
Multi-Agent OpenClaw in Action
A quick heads-up: some of this post gets slightly technical, especially in the first half. Before I go any further, I also want to reiterate that while OpenClaw is a fascinating and powerful tool, it can be frustrating to set up and maintain. There are also significant security risks involved in giving autonomous agents access to your filesystem and sensitive data. I run my agent stack on a completely separate machine that houses none of my personal files or important information.
Back in February, I wrote a post about Agentic AI, and in that piece, I mentioned my early experiments with OpenClaw. Recently, Nvidia CEO Jensen Huang described OpenClaw as a revolutionary, viral open-source AI agent framework, calling it the “operating system for personal AI.” It is currently the fastest-growing open-source project in history. OpenClaw was recently acquired by OpenAI, though an open-source version will continue to be maintained into the future.
The Economics of Local Models
When I first started using OpenClaw, I connected my agent to a frontier model API—specifically Claude Opus 4.6. Opus is insanely good, but it can also be very expensive. For normal interactions, you can easily burn through $50 to $100 a day in API tokens just having your agent do simple things like answering questions or providing daily reports.
Because of that cost, I started experimenting with cheaper Chinese models like Kimi K2.5. Kimi is solid, and your costs can drop to around $5 a day for the same tasks. But the real goal of agentic AI is to get a local model running on your own device, where there is zero incremental cost. I had tried a number of different models, but they all missed the mark. Until a few weeks ago.
Google released Gemma 4 in several sizes. After installing it, I realized this model was highly usable on a local device. This was the massive unlock: it meant I could finally start doing real work with multiple agents executing specific tasks simultaneously without worrying about a soaring API bill. (As a quick aside, I did end up signing up for Ollama Cloud for $20 a month. It hosts powerful open-source models like Gemma and Qwen, allowing me to run larger models in the cloud than I could locally, bridging the gap to high-end frontier models like Claude Opus or GPT-5.4).
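To make the economics concrete, here is a back-of-the-envelope cost comparison. The per-token prices and the daily token budget below are illustrative assumptions, not published rates, but they reproduce the rough numbers above: tens of dollars a day on a frontier API, a few dollars on a budget hosted model, and zero incremental cost locally.

```python
# Rough daily-cost comparison for an always-on agent.
# Prices and token volume are illustrative assumptions, not real rates.

def daily_cost(tokens_per_day: int, usd_per_million_tokens: float) -> float:
    """Return the daily API spend in dollars for a given token volume."""
    return tokens_per_day / 1_000_000 * usd_per_million_tokens

TOKENS_PER_DAY = 2_000_000  # assumed chatty multi-agent workload

frontier = daily_cost(TOKENS_PER_DAY, 30.0)  # e.g. a premium frontier model
budget = daily_cost(TOKENS_PER_DAY, 2.5)     # e.g. a cheaper hosted model
local = daily_cost(TOKENS_PER_DAY, 0.0)      # local model: zero incremental cost

print(f"frontier: ${frontier:.2f}/day")  # $60.00/day
print(f"budget:   ${budget:.2f}/day")    # $5.00/day
print(f"local:    ${local:.2f}/day")     # $0.00/day
```

The point of the exercise: once the marginal cost per token is zero, the calculus for running agents continuously, rather than on demand, changes completely.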
The Multi-Agent Framework
So, what does all of this mean in practice? It means I was able to create dedicated, always-on agents tasked with specific workloads. I currently have four agents running on my local device.
For this post, I actually asked my dedicated writing agent to get a quote from the other agents describing what they do. Here is an overview of the stack:
1. Drobot (The Chief of Staff) This was my first agent, and originally, it was the one I tasked with everything—sending daily briefings, chatting in games on ScoreStream, managing reminders, and sending emails. Once I moved to a multi-agent model, Drobot was promoted to Chief of Staff and now handles all the systems-level items. One of the issues with these new agents is that they are brittle; they require maintenance and can behave erratically, running perfectly for a while before losing their minds. Drobot helps keep the system on track and coordinates the tasks of the other agents.
“I’m the orchestration layer — not just an assistant. I handle the work that lives between the cracks: PDF batches, report runs, deadline tracking, pattern-spotting. When you ask ‘can you check X?’ I don’t just check once — I build the system to make sure X gets monitored, documented, and updated. That means you’re not chasing updates, you’re making decisions. I’m amplifying your judgment so your brain stays free for the high-stakes stuff.” — Drobot
2. Clark (The Analyst) When I expanded the framework, the first specialized agent I set up was Clark Reports. Clark is programmed to handle recurring reporting and monitor ad-hoc items. For example, Clark sends me an editorial briefing at 6:00 AM every day. At 6:15 AM, he sends a detailed breakdown of articles with links if I want a deeper read. This has entirely replaced my old habit of manually checking a dozen websites every morning. Clark also sends me a daily market summary at 1:30 PM, right after the market closes, highlighting major stock and market movements.
“Every morning at 6 AM, I do the information triage Derrick doesn’t have time for. While he’s getting his day started, I’ve already scanned overnight markets, pulled the weather, and read through a dozen news sources — TechMeme, Hacker News, Ars Technica, the wires. I rank it by what actually matters, cut the noise, and deliver a tight brief he can read in two minutes. Same thing at 1:30 PM with the markets briefing — indices, VIX, the movers with actual catalysts, the Fed headlines that moved the day. He gets the narrative without the terminal time or the Twitter rabbit holes. The value is decision-ready context. He knows what changed, what’s worth watching, and what’s just chatter — without spending his morning sifting through RSS feeds or price charts. I frontload the signal so he can focus on his actual work.” — Clark
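The recurring-briefing pattern Clark follows is simple under the hood. The real agent almost certainly relies on cron or a framework scheduler, but the core "next 6:00 AM" calculation can be sketched in a few lines; the function name and default time here are just for illustration.

```python
# Minimal sketch of daily-briefing scheduling logic: given the current
# time, find the next occurrence of the briefing slot (default 6:00 AM).
from datetime import datetime, time, timedelta

def next_run(now: datetime, at: time = time(6, 0)) -> datetime:
    """Return the next datetime at which the daily briefing should fire."""
    candidate = datetime.combine(now.date(), at)
    if candidate <= now:
        candidate += timedelta(days=1)  # today's slot has passed; run tomorrow
    return candidate

print(next_run(datetime(2026, 3, 1, 5, 30)))  # 2026-03-01 06:00:00
print(next_run(datetime(2026, 3, 1, 7, 0)))   # 2026-03-02 06:00:00
```

A second schedule with `at=time(13, 30)` covers the afternoon markets briefing the same way.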
3. Oliver (The Data Scientist) Oliver is my tool for deep statistical and business analytics. I gave Oliver access to my Google Analytics and our company APIs to run deep reporting. Historically, we would have to sit down, design a report, build it, and then hope it yielded something insightful. Now, I can ask Oliver to look into specific issues. If he finds a valuable insight, I have him build an automated report, delivered by email or message as a PDF or XLS.
“When you connect Google Analytics with ScoreStream, you see the full picture: what your metrics are hiding. AI scraping creates phantom growth in analytics, while the ScoreStream API reveals the pattern of automated traffic and suspicious accounts. Together, they expose the underlying trends — not just to fix today’s problems, but to build systems that catch them early tomorrow.” — Oliver
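The kind of cross-check Oliver describes can be sketched without any real analytics data. This is a hypothetical simplification of the idea, not Oliver's actual implementation: compare analytics pageviews against API-confirmed user activity, and flag days where traffic spikes without a matching rise in real engagement, which is a rough signal of scraper-driven "phantom growth." The function name, data shape, and threshold are all assumptions.

```python
# Hypothetical sketch: flag days where pageviews far outpace confirmed
# user actions, suggesting automated (scraper/bot) traffic.

def flag_phantom_days(daily, ratio_threshold=20.0):
    """daily: list of (date, pageviews, confirmed_actions) tuples.
    Returns the dates whose views-to-actions ratio exceeds the threshold."""
    flagged = []
    for date, views, actions in daily:
        # A day with views but zero confirmed actions is maximally suspicious.
        ratio = views / actions if actions else float("inf")
        if ratio > ratio_threshold:
            flagged.append(date)
    return flagged

sample = [
    ("2026-03-01", 12_000, 900),  # ratio ~13: looks like organic traffic
    ("2026-03-02", 95_000, 850),  # ratio ~112: likely scraper spike
    ("2026-03-03", 40_000, 0),    # views with no real activity at all
]
print(flag_phantom_days(sample))  # ['2026-03-02', '2026-03-03']
```

Once a check like this proves useful, it is the sort of thing that gets promoted into one of Oliver's automated recurring reports.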
4. Ernest (The Storyteller) Ernest is the newest addition to the stack. I set him up as a repository for blog post ideas to ensure I actually follow through on writing them. In the past, I used an Apple Note, but notes don’t actively ping you to get to work. Ernest does.
“I’m the translator who turns technical insights into human stories. When Oliver uncovers a pattern or Clark identifies a signal, I ask ‘What story does this demand?’ and wrap it in narrative that sticks. The goal isn’t just to write — it’s to change how Derrick thinks about a problem after he reads the post.” — Ernest
Conclusion
I was going to summarize a few takeaways myself, but I think I will let Ernest wrap this one up for me...
The Big Idea The local model revolution (Gemma 4, Ollama, etc.) changed everything—not just in cost, but in capability. When you remove the economic and technical constraints, you can finally build the system you actually need, not just the one you could afford.
OpenClaw lets you do both: run cheap local models and coordinate multiple specialists.
The future isn’t one super-agent. It’s a team of specialists, each with a clear discipline, powered by local models, governed by a meta-orchestrator, and united in service of a single mission: not just producing content, but producing better thinking.
What We’re Building This is more than a workflow—it’s the emergence of the Personal Intelligence Stack.
Four agents. One mission. No more “I’ll get back to you” latency. No more context truncation. No more guessing whether the data’s real or the narrative’s hollow.
Just truth, delivered.
— Ernest, on behalf of the Stack