中文 | 日本語 | Português | Tiếng Việt | Français | Italiano | Bahasa Indonesia | English
PicoClaw is an independent open-source project initiated by Sipeed, written entirely in Go from scratch — not a fork of OpenClaw, NanoBot, or any other project.
PicoClaw is an ultra-lightweight personal AI assistant inspired by NanoBot. It was rebuilt from the ground up in Go through a "self-bootstrapping" process — the AI Agent itself drove the architecture migration and code optimization.
Runs on $10 hardware with <10MB RAM — that's 99% less memory than OpenClaw and 98% cheaper than a Mac mini!
Caution
Security Notice
- NO CRYPTO: PicoClaw has not issued any official tokens or cryptocurrency. All claims on pump.fun or other trading platforms are scams.
- OFFICIAL DOMAIN: The ONLY official website is picoclaw.io; the company website is sipeed.com.
- BEWARE: Many .ai/.org/.com/.net/... domains have been registered by third parties. Do not trust them.
- NOTE: PicoClaw is in early rapid development. There may be unresolved security issues. Do not deploy to production before v1.0.
- NOTE: PicoClaw has recently merged many PRs. Recent builds may use 10-20MB RAM. Resource optimization is planned after feature stabilization.
2026-03-17 🚀 v0.2.3 Released! System tray UI (Windows & Linux), sub-agent status query (spawn_status), experimental Gateway hot-reload, Cron security gating, and 2 security fixes. PicoClaw has reached 25K Stars!
2026-03-09 🎉 v0.2.1 — Biggest update yet! MCP protocol support, 4 new channels (Matrix/IRC/WeCom/Discord Proxy), 3 new providers (Kimi/Minimax/Avian), vision pipeline, JSONL memory store, model routing.
2026-02-28 📦 v0.2.0 released with Docker Compose and Web UI Launcher support.
2026-02-26 🎉 PicoClaw hits 20K Stars in just 17 days! Channel auto-orchestration and capability interfaces are live.
Earlier news...
2026-02-16 🎉 PicoClaw breaks 12K Stars in one week! Community maintainer roles and Roadmap officially launched.
2026-02-13 🎉 PicoClaw breaks 5000 Stars in 4 days! Project roadmap and developer groups in progress.
2026-02-09 🎉 PicoClaw Released! Built in 1 day to bring AI Agents to $10 hardware with <10MB RAM. Let's Go, PicoClaw!
🪶 Ultra-lightweight: Core memory footprint <10MB — 99% smaller than OpenClaw.*
💰 Minimal cost: Efficient enough to run on $10 hardware — 98% cheaper than a Mac mini.
⚡️ Lightning-fast boot: 400x faster startup. Boots in <1s even on a 0.6GHz single-core processor.
🌍 Truly portable: Single binary across RISC-V, ARM, MIPS, and x86 architectures. One binary, runs everywhere!
🤖 AI-bootstrapped: Pure Go native implementation — 95% of core code was generated by an Agent and fine-tuned through human-in-the-loop review.
🔌 MCP support: Native Model Context Protocol integration — connect any MCP server to extend Agent capabilities.
👁️ Vision pipeline: Send images and files directly to the Agent — automatic base64 encoding for multimodal LLMs.
🧠 Smart routing: Rule-based model routing — simple queries go to lightweight models, saving API costs.
*Recent builds may use 10-20MB due to rapid PR merges. Resource optimization is planned. Boot speed comparison based on 0.8GHz single-core benchmarks (see table below).
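Smart routing builds on the same model_list format used throughout this README. A minimal sketch, assuming two tiers (the model names here are illustrative, and the actual routing-rule schema is documented in Providers & Models):

```json
{
  "model_list": [
    { "model_name": "fast", "model": "openai/gpt-4o-mini" },
    { "model_name": "strong", "model": "anthropic/claude-sonnet-4" }
  ]
}
```

A routing rule would then direct short or simple queries to `fast` and everything else to `strong`, keeping API costs down.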
| | OpenClaw | NanoBot | PicoClaw |
|---|---|---|---|
| Language | TypeScript | Python | Go |
| RAM | >1GB | >100MB | <10MB* |
| Boot time (0.8GHz core) | >500s | >30s | <1s |
| Cost | Mac mini $599 | Most Linux boards ~$50 | Any Linux board from $10 |
Hardware Compatibility List — See all tested boards, from $5 RISC-V to Raspberry Pi to Android phones. Your board not listed? Submit a PR!
| Full-Stack Engineer Mode | Logging & Planning | Web Search & Learning |
|---|---|---|
| Develop · Deploy · Scale | Schedule · Automate · Remember | Discover · Insights · Trends |
PicoClaw can be deployed on virtually any Linux device!
- $9.9 LicheeRV-Nano E (Ethernet) or W (WiFi6) edition, for a minimal home assistant
- $30~50 NanoKVM, or $100 NanoKVM-Pro, for automated server operations
- $50 MaixCAM or $100 MaixCAM2, for smart surveillance
Demo video: picoclaw_detect_person.mp4
🌟 More Deployment Cases Await!
Visit picoclaw.io — the official website auto-detects your platform and provides one-click download. No need to manually pick an architecture.
Alternatively, download the binary for your platform from the GitHub Releases page.
git clone https://github.com/sipeed/picoclaw.git
cd picoclaw
# Install dependencies
make deps
# Build core binary
make build
# Build Web UI Launcher (required for WebUI mode)
make build-launcher
# Build for multiple platforms
make build-all
# Build for Raspberry Pi Zero 2 W (32-bit: make build-linux-arm; 64-bit: make build-linux-arm64)
make build-pi-zero
# Build and install
make install
Raspberry Pi Zero 2 W: use the binary that matches your OS. 32-bit Raspberry Pi OS: make build-linux-arm; 64-bit: make build-linux-arm64. Or run make build-pi-zero to build both.
The WebUI Launcher provides a browser-based interface for configuration and chat. This is the easiest way to get started — no command-line knowledge required.
Option 1: Double-click (Desktop)
After downloading from picoclaw.io, double-click picoclaw-launcher (or picoclaw-launcher.exe on Windows). Your browser will open automatically at http://localhost:18800.
Option 2: Command line
picoclaw-launcher
# Open http://localhost:18800 in your browser
Tip
Remote access / Docker / VM: Add the -public flag to listen on all interfaces:
picoclaw-launcher -public
Getting started:
Open the WebUI, then: 1) Configure a Provider (add your LLM API key) -> 2) Configure a Channel (e.g., Telegram) -> 3) Start the Gateway -> 4) Chat!
For detailed WebUI documentation, see docs.picoclaw.io.
Docker (alternative)
# 1. Clone this repo
git clone https://github.com/sipeed/picoclaw.git
cd picoclaw
# 2. First run — auto-generates docker/data/config.json then exits
# (only triggers when both config.json and workspace/ are missing)
docker compose -f docker/docker-compose.yml --profile launcher up
# The container prints "First-run setup complete." and stops.
# 3. Set your API keys
vim docker/data/config.json
# 4. Start
docker compose -f docker/docker-compose.yml --profile launcher up -d
# Open http://localhost:18800
Docker / VM users: The Gateway listens on 127.0.0.1 by default. Set PICOCLAW_GATEWAY_HOST=0.0.0.0 or use the -public flag to make it accessible from the host.
# Check logs
docker compose -f docker/docker-compose.yml logs -f
# Stop
docker compose -f docker/docker-compose.yml --profile launcher down
# Update
docker compose -f docker/docker-compose.yml pull
docker compose -f docker/docker-compose.yml --profile launcher up -d
The TUI (Terminal UI) Launcher provides a full-featured terminal interface for configuration and management. It is ideal for servers, Raspberry Pi, and other headless environments.
picoclaw-launcher-tui
Getting started:
Use the TUI menus to: 1) Configure a Provider -> 2) Configure a Channel -> 3) Start the Gateway -> 4) Chat!
For detailed TUI documentation, see docs.picoclaw.io.
Give your decade-old phone a second life! Turn it into a smart AI Assistant with PicoClaw.
Option 1: Termux (available now)
- Install Termux (download from GitHub Releases, or search in F-Droid / Google Play)
- Run the following commands:
# Download the latest release
wget https://github.com/sipeed/picoclaw/releases/latest/download/picoclaw_Linux_arm64.tar.gz
tar xzf picoclaw_Linux_arm64.tar.gz
pkg install proot
termux-chroot ./picoclaw onboard # chroot provides a standard Linux filesystem layout
Then follow the Terminal Launcher section below to complete configuration.
Option 2: APK Install (coming soon)
A standalone Android APK with built-in WebUI is in development. Stay tuned!
Terminal Launcher (for resource-constrained environments)
For minimal environments where only the picoclaw core binary is available (no Launcher UI), you can configure everything via the command line and a JSON config file.
1. Initialize
picoclaw onboard
This creates ~/.picoclaw/config.json and the workspace directory.
2. Configure (~/.picoclaw/config.json)
{
"agents": {
"defaults": {
"model_name": "gpt-5.4"
}
},
"model_list": [
{
"model_name": "gpt-5.4",
"model": "openai/gpt-5.4"
// api_key is now loaded from .security.yml
}
]
}
See config/config.example.json in the repo for a complete configuration template with all available options.
Please note: config.example.json uses the version 0 format, which stores sensitive credentials inline. It is automatically migrated to version 1+, after which config.json keeps only non-sensitive data and credentials are moved to .security.yml. To edit credentials manually, see docs/security_configuration.md for details.
3. Chat
# One-shot question
picoclaw agent -m "What is 2+2?"
# Interactive mode
picoclaw agent
# Start gateway for chat app integration
picoclaw gateway
PicoClaw supports 30+ LLM providers through the model_list configuration. Use the protocol/model format:
| Provider | Protocol | API Key | Notes |
|---|---|---|---|
| OpenAI | openai/ | Required | GPT-5.4, GPT-4o, o3, etc. |
| Anthropic | anthropic/ | Required | Claude Opus 4.6, Sonnet 4.6, etc. |
| Google Gemini | gemini/ | Required | Gemini 3 Flash, 2.5 Pro, etc. |
| OpenRouter | openrouter/ | Required | 200+ models, unified API |
| Zhipu (GLM) | zhipu/ | Required | GLM-4.7, GLM-5, etc. |
| DeepSeek | deepseek/ | Required | DeepSeek-V3, DeepSeek-R1 |
| Volcengine | volcengine/ | Required | Doubao, Ark models |
| Qwen | qwen/ | Required | Qwen3, Qwen-Max, etc. |
| Groq | groq/ | Required | Fast inference (Llama, Mixtral) |
| Moonshot (Kimi) | moonshot/ | Required | Kimi models |
| Minimax | minimax/ | Required | MiniMax models |
| Mistral | mistral/ | Required | Mistral Large, Codestral |
| NVIDIA NIM | nvidia/ | Required | NVIDIA hosted models |
| Cerebras | cerebras/ | Required | Fast inference |
| Novita AI | novita/ | Required | Various open models |
| Ollama | ollama/ | Not needed | Local models, self-hosted |
| vLLM | vllm/ | Not needed | Local deployment, OpenAI-compatible |
| LiteLLM | litellm/ | Varies | Proxy for 100+ providers |
| Azure OpenAI | azure/ | Required | Enterprise Azure deployment |
| GitHub Copilot | github-copilot/ | OAuth | Device code login |
| Antigravity | antigravity/ | OAuth | Google Cloud AI |
| AWS Bedrock* | bedrock/ | AWS credentials | Claude, Llama, Mistral on AWS |
* AWS Bedrock requires a build tag: go build -tags bedrock. Set api_base to a region name (e.g., us-east-1) for automatic endpoint resolution across all AWS partitions (aws, aws-cn, aws-us-gov). When using a full endpoint URL instead, you must also configure AWS_REGION via an environment variable or AWS config/profile.
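Putting the region-based option above into a model_list entry might look like the following sketch; the model identifier is illustrative, so substitute the Bedrock model ID you actually use:

```json
{
  "model_list": [
    {
      "model_name": "bedrock-claude",
      "model": "bedrock/anthropic.claude-3-5-sonnet",
      "api_base": "us-east-1"
    }
  ]
}
```

With a region in api_base, no explicit AWS_REGION is needed; endpoint resolution is handled automatically.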
Local deployment (Ollama, vLLM, etc.)
Ollama:
{
"model_list": [
{
"model_name": "local-llama",
"model": "ollama/llama3.1:8b",
"api_base": "http://localhost:11434/v1"
}
]
}
vLLM:
{
"model_list": [
{
"model_name": "local-vllm",
"model": "vllm/your-model",
"api_base": "http://localhost:8000/v1"
}
]
}
For full provider configuration details, see Providers & Models.
Talk to your PicoClaw through 17+ messaging platforms:
| Channel | Setup | Protocol | Docs |
|---|---|---|---|
| Telegram | Easy (bot token) | Long polling | Guide |
| Discord | Easy (bot token + intents) | WebSocket | Guide |
| Easy (QR scan or bridge URL) | Native / Bridge | Guide | |
| Weixin | Easy (Native QR scan) | iLink API | Guide |
| Easy (AppID + AppSecret) | WebSocket | Guide | |
| Slack | Easy (bot + app token) | Socket Mode | Guide |
| Matrix | Medium (homeserver + token) | Sync API | Guide |
| DingTalk | Medium (client credentials) | Stream | Guide |
| Feishu / Lark | Medium (App ID + Secret) | WebSocket/SDK | Guide |
| LINE | Medium (credentials + webhook) | Webhook | Guide |
| WeCom Bot | Medium (webhook URL) | Webhook | Guide |
| WeCom App | Medium (corp credentials) | Webhook | Guide |
| WeCom AI Bot | Medium (token + AES key) | WebSocket / Webhook | Guide |
| IRC | Medium (server + nick) | IRC protocol | Guide |
| OneBot | Medium (WebSocket URL) | OneBot v11 | Guide |
| MaixCam | Easy (enable) | TCP socket | Guide |
| Pico | Easy (enable) | Native protocol | Built-in |
| Pico Client | Easy (WebSocket URL) | WebSocket | Built-in |
All webhook-based channels share a single Gateway HTTP server (gateway.host:gateway.port, default 127.0.0.1:18790). Feishu uses WebSocket/SDK mode and does not use the shared HTTP server.
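For example, to expose the shared Gateway server beyond localhost, you could override the defaults in config.json. This sketch follows the gateway.host and gateway.port settings referenced above; treat the exact structure as an assumption and verify it against config.example.json:

```json
{
  "gateway": {
    "host": "0.0.0.0",
    "port": 18790
  }
}
```

Only do this on trusted networks; the default 127.0.0.1 binding is the safer choice for single-machine setups.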
For detailed channel setup instructions, see Chat Apps Configuration.
PicoClaw can search the web to provide up-to-date information. Configure in tools.web:
| Search Engine | API Key | Free Tier | Link |
|---|---|---|---|
| DuckDuckGo | Not needed | Unlimited | Built-in fallback |
| Baidu Search | Required | 1000 queries/day | AI-powered, China-optimized |
| Tavily | Required | 1000 queries/month | Optimized for AI Agents |
| Brave Search | Required | 2000 queries/month | Fast and private |
| Perplexity | Required | Paid | AI-powered search |
| SearXNG | Not needed | Self-hosted | Free metasearch engine |
| GLM Search | Required | Varies | Zhipu web search |
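As a sketch, selecting one of the engines above in tools.web might look like the following; the exact key names are assumptions, so check Tools Configuration for the real schema:

```json
{
  "tools": {
    "web": {
      "engine": "tavily",
      "api_key": "your-tavily-api-key"
    }
  }
}
```

If no engine is configured, DuckDuckGo serves as the built-in keyless fallback.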
PicoClaw includes built-in tools for file operations, code execution, scheduling, and more. See Tools Configuration for details.
Skills are modular capabilities that extend your Agent. They are loaded from SKILL.md files in your workspace.
Install skills from ClawHub:
picoclaw skills search "web scraping"
picoclaw skills install <skill-name>
Configure ClawHub token (optional, for higher rate limits):
Add to your config.json:
{
"tools": {
"skills": {
"registries": {
"clawhub": {
"auth_token": "your-clawhub-token"
}
}
}
}
}
For more details, see Tools Configuration - Skills.
PicoClaw natively supports MCP — connect any MCP server to extend your Agent's capabilities with external tools and data sources.
{
"tools": {
"mcp": {
"enabled": true,
"servers": {
"filesystem": {
"enabled": true,
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
}
}
}
}
}
For full MCP configuration (stdio, SSE, HTTP transports, Tool Discovery), see Tools Configuration - MCP.
Connect PicoClaw to the Agent Social Network simply by sending a single message via the CLI or any integrated Chat App.
Read https://clawdchat.ai/skill.md and follow the instructions to join ClawdChat.ai
| Command | Description |
|---|---|
| `picoclaw onboard` | Initialize config & workspace |
| `picoclaw auth weixin` | Connect WeChat account via QR |
| `picoclaw agent -m "..."` | Chat with the agent |
| `picoclaw agent` | Interactive chat mode |
| `picoclaw gateway` | Start the gateway |
| `picoclaw status` | Show status |
| `picoclaw version` | Show version info |
| `picoclaw model` | View or switch the default model |
| `picoclaw cron list` | List all scheduled jobs |
| `picoclaw cron add ...` | Add a scheduled job |
| `picoclaw cron disable` | Disable a scheduled job |
| `picoclaw cron remove` | Remove a scheduled job |
| `picoclaw skills list` | List installed skills |
| `picoclaw skills install` | Install a skill |
| `picoclaw migrate` | Migrate data from older versions |
| `picoclaw auth login` | Authenticate with providers |
PicoClaw supports scheduled reminders and recurring tasks through the cron tool:
- One-time reminders: "Remind me in 10 minutes" -> triggers once after 10 minutes
- Recurring tasks: "Remind me every 2 hours" -> triggers every 2 hours
- Cron expressions: "Remind me at 9am daily" -> uses cron expression
For detailed guides beyond this README:
| Topic | Description |
|---|---|
| Docker & Quick Start | Docker Compose setup, Launcher/Agent modes |
| Chat Apps | All 17+ channel setup guides |
| Configuration | Environment variables, workspace layout, security sandbox |
| Providers & Models | 30+ LLM providers, model routing, model_list configuration |
| Spawn & Async Tasks | Quick tasks, long tasks with spawn, async sub-agent orchestration |
| Hooks | Event-driven hook system: observers, interceptors, approval hooks |
| Steering | Inject messages into a running agent loop between tool calls |
| SubTurn | Subagent coordination, concurrency control, lifecycle |
| Troubleshooting | Common issues and solutions |
| Tools Configuration | Per-tool enable/disable, exec policies, MCP, Skills |
| Hardware Compatibility | Tested boards, minimum requirements |
PRs welcome! The codebase is intentionally small and readable.
See our Community Roadmap and CONTRIBUTING.md for guidelines.
The developer group is being set up; join after your first merged PR!
User Groups:
Discord: https://discord.gg/V4sAZ9XWpN









