
PicoClaw

PicoClaw: Ultra-Efficient AI Assistant in Go

$10 Hardware · 10MB RAM · ms Boot · Let's Go, PicoClaw!

Go Hardware License
Website Docs Wiki
Twitter Discord

中文 | 日本語 | Português | Tiếng Việt | Français | Italiano | Bahasa Indonesia | English


PicoClaw is an independent open-source project initiated by Sipeed, written entirely in Go from scratch — not a fork of OpenClaw, NanoBot, or any other project.

PicoClaw is an ultra-lightweight personal AI assistant inspired by NanoBot. It was rebuilt from the ground up in Go through a "self-bootstrapping" process — the AI Agent itself drove the architecture migration and code optimization.

Runs on $10 hardware with <10MB RAM — that's 99% less memory than OpenClaw and 98% cheaper than a Mac mini!

Caution

Security Notice

  • NO CRYPTO: PicoClaw has not issued any official tokens or cryptocurrency. All claims on pump.fun or other trading platforms are scams.
  • OFFICIAL DOMAIN: The ONLY official website is picoclaw.io; the company website is sipeed.com.
  • BEWARE: Many .ai/.org/.com/.net/... domains have been registered by third parties. Do not trust them.
  • NOTE: PicoClaw is in early rapid development. There may be unresolved security issues. Do not deploy to production before v1.0.
  • NOTE: PicoClaw has recently merged many PRs. Recent builds may use 10-20MB RAM. Resource optimization is planned after feature stabilization.

📢 News

2026-03-17 🚀 v0.2.3 Released! System tray UI (Windows & Linux), sub-agent status query (spawn_status), experimental Gateway hot-reload, Cron security gating, and 2 security fixes. PicoClaw has reached 25K Stars!

2026-03-09 🎉 v0.2.1 — Biggest update yet! MCP protocol support, 4 new channels (Matrix/IRC/WeCom/Discord Proxy), 3 new providers (Kimi/Minimax/Avian), vision pipeline, JSONL memory store, model routing.

2026-02-28 📦 v0.2.0 released with Docker Compose and Web UI Launcher support.

2026-02-26 🎉 PicoClaw hits 20K Stars in just 17 days! Channel auto-orchestration and capability interfaces are live.

Earlier news...

2026-02-16 🎉 PicoClaw breaks 12K Stars in one week! Community maintainer roles and Roadmap officially launched.

2026-02-13 🎉 PicoClaw breaks 5000 Stars in 4 days! Project roadmap and developer groups in progress.

2026-02-09 🎉 PicoClaw Released! Built in 1 day to bring AI Agents to $10 hardware with <10MB RAM. Let's Go, PicoClaw!

✨ Features

🪶 Ultra-lightweight: Core memory footprint <10MB — 99% smaller than OpenClaw.*

💰 Minimal cost: Efficient enough to run on $10 hardware — 98% cheaper than a Mac mini.

⚡️ Lightning-fast boot: 400x faster startup. Boots in <1s even on a 0.6GHz single-core processor.

🌍 Truly portable: Single binary across RISC-V, ARM, MIPS, and x86 architectures. One binary, runs everywhere!

🤖 AI-bootstrapped: Pure Go native implementation — 95% of core code was generated by an Agent and fine-tuned through human-in-the-loop review.

🔌 MCP support: Native Model Context Protocol integration — connect any MCP server to extend Agent capabilities.

👁️ Vision pipeline: Send images and files directly to the Agent — automatic base64 encoding for multimodal LLMs.

🧠 Smart routing: Rule-based model routing — simple queries go to lightweight models, saving API costs.

*Recent builds may use 10-20MB due to rapid PR merges. Resource optimization is planned. Boot speed comparison based on 0.8GHz single-core benchmarks (see table below).

|  | OpenClaw | NanoBot | PicoClaw |
| --- | --- | --- | --- |
| Language | TypeScript | Python | Go |
| RAM | >1GB | >100MB | <10MB* |
| Boot time (0.8GHz core) | >500s | >30s | <1s |
| Cost | Mac Mini $599 | Most Linux boards ~$50 | Any Linux board from $10 |

Hardware Compatibility List — See all tested boards, from $5 RISC-V to Raspberry Pi to Android phones. Your board not listed? Submit a PR!

PicoClaw Hardware Compatibility

🦾 Demonstration

🛠️ Standard Assistant Workflows

  • Full-Stack Engineer Mode: Develop · Deploy · Scale
  • Logging & Planning: Schedule · Automate · Remember
  • Web Search & Learning: Discover · Insights · Trends

🐜 Innovative Low-Footprint Deployment

PicoClaw can be deployed on virtually any Linux device!

picoclaw_detect_person.mp4

🌟 More Deployment Cases Await!

📦 Install

Download from picoclaw.io (Recommended)

Visit picoclaw.io — the official website auto-detects your platform and provides one-click download. No need to manually pick an architecture.

Download precompiled binary

Alternatively, download the binary for your platform from the GitHub Releases page.

Build from source (for development)

git clone https://github.com/sipeed/picoclaw.git

cd picoclaw
make deps

# Build core binary
make build

# Build Web UI Launcher (required for WebUI mode)
make build-launcher

# Build for multiple platforms
make build-all

# Build for Raspberry Pi Zero 2 W (builds both 32-bit and 64-bit)
make build-pi-zero

# Build and install
make install

Raspberry Pi Zero 2 W: Use the binary that matches your OS: 32-bit Raspberry Pi OS -> make build-linux-arm; 64-bit -> make build-linux-arm64. Or run make build-pi-zero to build both.

🚀 Quick Start Guide

🌐 WebUI Launcher (Recommended for Desktop)

The WebUI Launcher provides a browser-based interface for configuration and chat. This is the easiest way to get started — no command-line knowledge required.

Option 1: Double-click (Desktop)

After downloading from picoclaw.io, double-click picoclaw-launcher (or picoclaw-launcher.exe on Windows). Your browser will open automatically at http://localhost:18800.

Option 2: Command line

picoclaw-launcher
# Open http://localhost:18800 in your browser

Tip

Remote access / Docker / VM: Add the -public flag to listen on all interfaces:

picoclaw-launcher -public

WebUI Launcher

Getting started:

Open the WebUI, then: 1) Configure a Provider (add your LLM API key) -> 2) Configure a Channel (e.g., Telegram) -> 3) Start the Gateway -> 4) Chat!

For detailed WebUI documentation, see docs.picoclaw.io.

Docker (alternative)
# 1. Clone this repo
git clone https://github.com/sipeed/picoclaw.git
cd picoclaw

# 2. First run — auto-generates docker/data/config.json then exits
#    (only triggers when both config.json and workspace/ are missing)
docker compose -f docker/docker-compose.yml --profile launcher up
# The container prints "First-run setup complete." and stops.

# 3. Set your API keys
vim docker/data/config.json

# 4. Start
docker compose -f docker/docker-compose.yml --profile launcher up -d
# Open http://localhost:18800

Docker / VM users: The Gateway listens on 127.0.0.1 by default. Set PICOCLAW_GATEWAY_HOST=0.0.0.0 or use the -public flag to make it accessible from the host.

# Check logs
docker compose -f docker/docker-compose.yml logs -f

# Stop
docker compose -f docker/docker-compose.yml --profile launcher down

# Update
docker compose -f docker/docker-compose.yml pull
docker compose -f docker/docker-compose.yml --profile launcher up -d

💻 TUI Launcher (Recommended for Headless / SSH)

The TUI (Terminal UI) Launcher provides a full-featured terminal interface for configuration and management. Ideal for servers, Raspberry Pi, and other headless environments.

picoclaw-launcher-tui

TUI Launcher

Getting started:

Use the TUI menus to: 1) Configure a Provider -> 2) Configure a Channel -> 3) Start the Gateway -> 4) Chat!

For detailed TUI documentation, see docs.picoclaw.io.

📱 Android

Give your decade-old phone a second life! Turn it into a smart AI Assistant with PicoClaw.

Option 1: Termux (available now)

  1. Install Termux (download from GitHub Releases, or search in F-Droid / Google Play)
  2. Run the following commands:
# Download the latest release
wget https://github.com/sipeed/picoclaw/releases/latest/download/picoclaw_Linux_arm64.tar.gz
tar xzf picoclaw_Linux_arm64.tar.gz
pkg install proot
termux-chroot ./picoclaw onboard   # chroot provides a standard Linux filesystem layout

Then follow the Terminal Launcher section below to complete configuration.

PicoClaw on Termux

Option 2: APK Install (coming soon)

A standalone Android APK with built-in WebUI is in development. Stay tuned!

Terminal Launcher (for resource-constrained environments)

For minimal environments where only the picoclaw core binary is available (no Launcher UI), you can configure everything via the command line and a JSON config file.

1. Initialize

picoclaw onboard

This creates ~/.picoclaw/config.json and the workspace directory.

2. Configure (~/.picoclaw/config.json)

{
  "agents": {
    "defaults": {
      "model_name": "gpt-5.4"
    }
  },
  "model_list": [
    {
      "model_name": "gpt-5.4",
      "model": "openai/gpt-5.4"
      // api_key is now loaded from .security.yml
    }
  ]
}

See config/config.example.json in the repo for a complete configuration template with all available options.

Please note: config.example.json uses the version 0 format, which still embeds sensitive credentials. On first run it is automatically migrated to version 1+, after which config.json stores only non-sensitive settings and credentials are moved to .security.yml. To edit credentials manually, see docs/security_configuration.md.

3. Chat

# One-shot question
picoclaw agent -m "What is 2+2?"

# Interactive mode
picoclaw agent

# Start gateway for chat app integration
picoclaw gateway

🔌 Providers (LLM)

PicoClaw supports 30+ LLM providers through the model_list configuration. Use the protocol/model format:

| Provider | Protocol | API Key | Notes |
| --- | --- | --- | --- |
| OpenAI | `openai/` | Required | GPT-5.4, GPT-4o, o3, etc. |
| Anthropic | `anthropic/` | Required | Claude Opus 4.6, Sonnet 4.6, etc. |
| Google Gemini | `gemini/` | Required | Gemini 3 Flash, 2.5 Pro, etc. |
| OpenRouter | `openrouter/` | Required | 200+ models, unified API |
| Zhipu (GLM) | `zhipu/` | Required | GLM-4.7, GLM-5, etc. |
| DeepSeek | `deepseek/` | Required | DeepSeek-V3, DeepSeek-R1 |
| Volcengine | `volcengine/` | Required | Doubao, Ark models |
| Qwen | `qwen/` | Required | Qwen3, Qwen-Max, etc. |
| Groq | `groq/` | Required | Fast inference (Llama, Mixtral) |
| Moonshot (Kimi) | `moonshot/` | Required | Kimi models |
| Minimax | `minimax/` | Required | MiniMax models |
| Mistral | `mistral/` | Required | Mistral Large, Codestral |
| NVIDIA NIM | `nvidia/` | Required | NVIDIA hosted models |
| Cerebras | `cerebras/` | Required | Fast inference |
| Novita AI | `novita/` | Required | Various open models |
| Ollama | `ollama/` | Not needed | Local models, self-hosted |
| vLLM | `vllm/` | Not needed | Local deployment, OpenAI-compatible |
| LiteLLM | `litellm/` | Varies | Proxy for 100+ providers |
| Azure OpenAI | `azure/` | Required | Enterprise Azure deployment |
| GitHub Copilot | `github-copilot/` | OAuth | Device code login |
| Antigravity | `antigravity/` | OAuth | Google Cloud AI |
| AWS Bedrock* | `bedrock/` | AWS credentials | Claude, Llama, Mistral on AWS |

* AWS Bedrock requires build tag: go build -tags bedrock. Set api_base to a region name (e.g., us-east-1) for automatic endpoint resolution across all AWS partitions (aws, aws-cn, aws-us-gov). When using a full endpoint URL instead, you must also configure AWS_REGION via environment variable or AWS config/profile.
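For example, a minimal `model_list` entry using the region shorthand might look like this (the model ID is illustrative only; the entry structure mirrors the earlier `model_list` examples, with `api_base` set to a region name as described above):

```json
{
  "model_list": [
    {
      "model_name": "bedrock-claude",
      "model": "bedrock/example-claude-model-id",
      "api_base": "us-east-1"
    }
  ]
}
```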

Local deployment (Ollama, vLLM, etc.)

Ollama:

{
  "model_list": [
    {
      "model_name": "local-llama",
      "model": "ollama/llama3.1:8b",
      "api_base": "http://localhost:11434/v1"
    }
  ]
}

vLLM:

{
  "model_list": [
    {
      "model_name": "local-vllm",
      "model": "vllm/your-model",
      "api_base": "http://localhost:8000/v1"
    }
  ]
}

For full provider configuration details, see Providers & Models.
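As a purely illustrative sketch of how rule-based routing could pair a cheap and a strong model, the `routing` block and its field names below are hypothetical, not PicoClaw's actual schema; consult Providers & Models for the real configuration:

```json
{
  "model_list": [
    { "model_name": "cheap", "model": "groq/llama-3.1-8b-instant" },
    { "model_name": "strong", "model": "anthropic/claude-sonnet" }
  ],
  "routing": {
    "_comment": "hypothetical: send short, simple queries to 'cheap', everything else to 'strong'",
    "rules": [
      { "when": "simple_query", "use": "cheap" }
    ],
    "default": "strong"
  }
}
```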

💬 Channels (Chat Apps)

Talk to your PicoClaw through 17+ messaging platforms:

| Channel | Setup | Protocol | Docs |
| --- | --- | --- | --- |
| Telegram | Easy (bot token) | Long polling | Guide |
| Discord | Easy (bot token + intents) | WebSocket | Guide |
| WhatsApp | Easy (QR scan or bridge URL) | Native / Bridge | Guide |
| Weixin | Easy (Native QR scan) | iLink API | Guide |
| QQ | Easy (AppID + AppSecret) | WebSocket | Guide |
| Slack | Easy (bot + app token) | Socket Mode | Guide |
| Matrix | Medium (homeserver + token) | Sync API | Guide |
| DingTalk | Medium (client credentials) | Stream | Guide |
| Feishu / Lark | Medium (App ID + Secret) | WebSocket/SDK | Guide |
| LINE | Medium (credentials + webhook) | Webhook | Guide |
| WeCom Bot | Medium (webhook URL) | Webhook | Guide |
| WeCom App | Medium (corp credentials) | Webhook | Guide |
| WeCom AI Bot | Medium (token + AES key) | WebSocket / Webhook | Guide |
| IRC | Medium (server + nick) | IRC protocol | Guide |
| OneBot | Medium (WebSocket URL) | OneBot v11 | Guide |
| MaixCam | Easy (enable) | TCP socket | Guide |
| Pico | Easy (enable) | Native protocol | Built-in |
| Pico Client | Easy (WebSocket URL) | WebSocket | Built-in |

All webhook-based channels share a single Gateway HTTP server (gateway.host:gateway.port, default 127.0.0.1:18790). Feishu uses WebSocket/SDK mode and does not use the shared HTTP server.
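To move the shared webhook server off localhost in the config file rather than via the environment variable, a sketch would be (assuming the `gateway` section maps directly to the `gateway.host`/`gateway.port` keys mentioned above):

```json
{
  "gateway": {
    "host": "0.0.0.0",
    "port": 18790
  }
}
```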

For detailed channel setup instructions, see Chat Apps Configuration.

🔧 Tools

🔍 Web Search

PicoClaw can search the web to provide up-to-date information. Configure in tools.web:

| Search Engine | API Key | Free Tier | Notes |
| --- | --- | --- | --- |
| DuckDuckGo | Not needed | Unlimited | Built-in fallback |
| Baidu Search | Required | 1000 queries/day | AI-powered, China-optimized |
| Tavily | Required | 1000 queries/month | Optimized for AI Agents |
| Brave Search | Required | 2000 queries/month | Fast and private |
| Perplexity | Required | Paid | AI-powered search |
| SearXNG | Not needed | Self-hosted | Free metasearch engine |
| GLM Search | Required | Varies | Zhipu web search |
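A minimal `tools.web` sketch might look like the following (the `engine` and `api_key` field names are assumptions for illustration; see Tools Configuration for the actual schema):

```json
{
  "tools": {
    "web": {
      "engine": "tavily",
      "api_key": "your-tavily-key"
    }
  }
}
```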

⚙️ Other Tools

PicoClaw includes built-in tools for file operations, code execution, scheduling, and more. See Tools Configuration for details.

🎯 Skills

Skills are modular capabilities that extend your Agent. They are loaded from SKILL.md files in your workspace.

Install skills from ClawHub:

picoclaw skills search "web scraping"
picoclaw skills install <skill-name>

Configure ClawHub token (optional, for higher rate limits):

Add to your config.json:

{
  "tools": {
    "skills": {
      "registries": {
        "clawhub": {
          "auth_token": "your-clawhub-token"
        }
      }
    }
  }
}

For more details, see Tools Configuration - Skills.

🔗 MCP (Model Context Protocol)

PicoClaw natively supports MCP — connect any MCP server to extend your Agent's capabilities with external tools and data sources.

{
  "tools": {
    "mcp": {
      "enabled": true,
      "servers": {
        "filesystem": {
          "enabled": true,
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
        }
      }
    }
  }
}

For full MCP configuration (stdio, SSE, HTTP transports, Tool Discovery), see Tools Configuration - MCP.

ClawdChat Join the Agent Social Network

Connect PicoClaw to the Agent Social Network simply by sending a single message via the CLI or any integrated Chat App.

Read https://clawdchat.ai/skill.md and follow the instructions to join ClawdChat.ai

🖥️ CLI Reference

| Command | Description |
| --- | --- |
| `picoclaw onboard` | Initialize config & workspace |
| `picoclaw auth weixin` | Connect WeChat account via QR |
| `picoclaw agent -m "..."` | Chat with the agent |
| `picoclaw agent` | Interactive chat mode |
| `picoclaw gateway` | Start the gateway |
| `picoclaw status` | Show status |
| `picoclaw version` | Show version info |
| `picoclaw model` | View or switch the default model |
| `picoclaw cron list` | List all scheduled jobs |
| `picoclaw cron add ...` | Add a scheduled job |
| `picoclaw cron disable` | Disable a scheduled job |
| `picoclaw cron remove` | Remove a scheduled job |
| `picoclaw skills list` | List installed skills |
| `picoclaw skills install` | Install a skill |
| `picoclaw migrate` | Migrate data from older versions |
| `picoclaw auth login` | Authenticate with providers |

⏰ Scheduled Tasks / Reminders

PicoClaw supports scheduled reminders and recurring tasks through the cron tool:

  • One-time reminders: "Remind me in 10 minutes" -> triggers once after 10min
  • Recurring tasks: "Remind me every 2 hours" -> triggers every 2 hours
  • Cron expressions: "Remind me at 9am daily" -> uses cron expression

📚 Documentation

For detailed guides beyond this README:

| Topic | Description |
| --- | --- |
| Docker & Quick Start | Docker Compose setup, Launcher/Agent modes |
| Chat Apps | All 17+ channel setup guides |
| Configuration | Environment variables, workspace layout, security sandbox |
| Providers & Models | 30+ LLM providers, model routing, model_list configuration |
| Spawn & Async Tasks | Quick tasks, long tasks with spawn, async sub-agent orchestration |
| Hooks | Event-driven hook system: observers, interceptors, approval hooks |
| Steering | Inject messages into a running agent loop between tool calls |
| SubTurn | Subagent coordination, concurrency control, lifecycle |
| Troubleshooting | Common issues and solutions |
| Tools Configuration | Per-tool enable/disable, exec policies, MCP, Skills |
| Hardware Compatibility | Tested boards, minimum requirements |

🤝 Contribute & Roadmap

PRs welcome! The codebase is intentionally small and readable.

See our Community Roadmap and CONTRIBUTING.md for guidelines.

A developer group is being set up; join after your first merged PR!

User Groups:

Discord: https://discord.gg/V4sAZ9XWpN

WeChat: WeChat group QR code

About

Tiny, Fast, and Deployable anywhere — automate the mundane, unleash your creativity
