High-level overview
NanoClaw consists of a single Node.js process that orchestrates everything.
Core components
Channel factory
NanoClaw uses a factory registry pattern for messaging channels. Each channel (WhatsApp, Telegram, Discord, Slack, Gmail) self-registers at startup. Channels with missing credentials emit a warning and are skipped; no configuration file is needed to enable or disable channels. All channels implement a common `Channel` interface for message handling and sending, allowing the rest of the system to be channel-agnostic.
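The registry pattern described above can be sketched as follows; all names here (`registerChannel`, `createChannels`, the `Channel` shape) are illustrative, not NanoClaw's actual API:

```typescript
// Illustrative sketch of a self-registering channel factory.
interface Channel {
  name: string;
  connect(): Promise<void>;
  sendMessage(chatId: string, text: string): Promise<void>;
}

// A factory returns null when its credentials are missing.
type ChannelFactory = () => Channel | null;

const registry = new Map<string, ChannelFactory>();

// Each channel module calls this at startup to self-register.
function registerChannel(name: string, factory: ChannelFactory): void {
  registry.set(name, factory);
}

// The host instantiates every registered channel, warning about and
// skipping the ones that could not be configured.
function createChannels(): Channel[] {
  const channels: Channel[] = [];
  for (const [name, factory] of registry) {
    const channel = factory();
    if (channel === null) {
      console.warn(`[channels] ${name}: missing credentials, skipping`);
      continue;
    }
    channels.push(channel);
  }
  return channels;
}
```

Returning `null` from a factory models the missing-credentials case: the channel is skipped with a warning rather than failing startup.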
Message router
The router (src/index.ts) is the central orchestrator:
- Polls SQLite database every 2 seconds for new messages
- Filters messages by registered groups only
- Checks for the trigger pattern (`@{ASSISTANT_NAME}`)
- Maintains cursor state to track processed messages
- Routes messages to the appropriate group queue
The main group (typically your self-chat) doesn’t require a trigger - all messages are processed automatically.
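A minimal sketch of that trigger check, assuming `ASSISTANT_NAME` comes from the environment in the real system (the `Message` shape here is invented for illustration):

```typescript
// Illustrative message shape; the real router reads rows from SQLite.
interface Message {
  chatId: string;
  text: string;
  isMain: boolean; // true for the main group (typically your self-chat)
}

const ASSISTANT_NAME = "nanoclaw"; // assumption: configured via env in practice
const trigger = new RegExp(`@${ASSISTANT_NAME}\\b`, "i");

// The main group processes everything; other groups require the @-mention.
function shouldProcess(msg: Message): boolean {
  return msg.isMain || trigger.test(msg.text);
}
```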
Group queue
The GroupQueue (src/group-queue.ts) manages container lifecycle and concurrency:
- Concurrency limiting: Maximum 5 concurrent containers by default (configurable via `MAX_CONCURRENT_CONTAINERS`)
- Per-group state: Each group has a dedicated queue for messages and tasks
- Retry logic: Exponential backoff (5s base, up to 5 retries) for failed container runs
- Idle management: Keeps containers alive for 30 minutes (default `IDLE_TIMEOUT`) to handle follow-up messages
- IPC message piping: Follow-up messages are sent to active containers via IPC files
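One plausible shape for the retry schedule (the docs state a 5s base and up to 5 retries; doubling per attempt is an assumption about the backoff curve):

```typescript
// Exponential backoff sketch: 5s base, doubling each attempt, 5 retries max.
const BASE_DELAY_MS = 5_000;
const MAX_RETRIES = 5;

// Returns the delay before the next retry, or null once retries are exhausted.
function retryDelay(attempt: number): number | null {
  if (attempt >= MAX_RETRIES) return null;
  return BASE_DELAY_MS * 2 ** attempt; // 5s, 10s, 20s, 40s, 80s
}
```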
When a container is already active for a group, new messages are piped directly to the running container via IPC instead of spawning a new one.
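That spawn-or-pipe decision might look roughly like this (the callbacks and `GroupState` shape are illustrative, not the real `GroupQueue` internals):

```typescript
// Per-group dispatch sketch: pipe to a live container, or spawn a new one.
interface GroupState {
  active: boolean;   // a container is currently running for this group
  queue: string[];   // messages waiting for the next container to drain
}

const groups = new Map<string, GroupState>();

function dispatch(
  group: string,
  message: string,
  pipeToContainer: (group: string, message: string) => void,
  spawnContainer: (group: string) => void,
): void {
  let state = groups.get(group);
  if (!state) {
    state = { active: false, queue: [] };
    groups.set(group, state);
  }
  if (state.active) {
    // Follow-up: deliver via IPC file to the running container.
    pipeToContainer(group, message);
  } else {
    // First message: queue it and spawn a container to drain the queue.
    state.queue.push(message);
    state.active = true;
    spawnContainer(group);
  }
}
```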
Container runner
The container runner (src/container-runner.ts) spawns and manages isolated agent execution:
Container lifecycle:
- Build volume mounts based on group privileges
- Spawn container with Docker CLI
- Pass prompt and metadata via stdin JSON (credentials handled by secret injection layer, never passed here)
- Stream stdout/stderr for real-time output
- Parse output markers (`---NANOCLAW_OUTPUT_START---` / `---NANOCLAW_OUTPUT_END---`)
- Clean up automatically on exit (`--rm` flag)
- Hard timeout: `CONTAINER_TIMEOUT` (default 30 minutes)
- Grace period: At least `IDLE_TIMEOUT + 30s` to allow graceful shutdown
- Activity-based reset: Timeout resets on each streaming output
- Post-output timeout: Not considered an error (idle cleanup)
- All container runs are logged to `groups/{name}/logs/container-{timestamp}.log`
- Verbose mode (`LOG_LEVEL=debug`) logs full input/output
- Error runs log input metadata (prompt length, session ID) and full stderr; prompt content is not included
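Extracting the agent's reply between the documented markers can be sketched as:

```typescript
// Pull the text between the NanoClaw output markers out of container stdout.
const START = "---NANOCLAW_OUTPUT_START---";
const END = "---NANOCLAW_OUTPUT_END---";

// Returns the marked output, or null if the markers are absent/incomplete.
function extractOutput(stdout: string): string | null {
  const start = stdout.indexOf(START);
  if (start === -1) return null;
  const end = stdout.indexOf(END, start + START.length);
  if (end === -1) return null;
  return stdout.slice(start + START.length, end).trim();
}
```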
Task scheduler
The scheduler (src/task-scheduler.ts) runs scheduled tasks:
- Polls database every 60 seconds for due tasks
- Supports three schedule types:
  - cron: Cron expressions (e.g., `0 9 * * *` for 9am daily)
  - interval: Millisecond intervals (e.g., `3600000` for hourly)
  - once: ISO timestamp for one-time execution
- Tasks run in group context with full agent capabilities
- Results can be sent to the group chat or completed silently
- Task containers close automatically 10 seconds after producing output
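Next-run calculation for the three schedule types might look like the sketch below; cron evaluation is left to a dedicated parser in practice (e.g. a library such as cron-parser), so only `interval` and `once` are computed here:

```typescript
// Illustrative schedule model mirroring the three documented types.
type Schedule =
  | { type: "cron"; expr: string }
  | { type: "interval"; ms: number }
  | { type: "once"; at: string };

// Returns the next run time, or null when the task should not run again.
function nextRun(s: Schedule, now: Date): Date | null {
  switch (s.type) {
    case "interval":
      return new Date(now.getTime() + s.ms);
    case "once":
      // One-time tasks never reschedule once their timestamp has passed.
      return new Date(s.at) > now ? new Date(s.at) : null;
    case "cron":
      // Cron expressions need a real parser; out of scope for this sketch.
      throw new Error("delegate cron evaluation to a cron-parsing library");
  }
}
```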
Task execution flow
- Scheduler finds due task from database
- Enqueues task in GroupQueue (respects concurrency limits)
- Spawns container in task mode (`isScheduledTask: true`)
- Streams output and optionally sends it to chat via the `send_message` tool
- Logs run to database with duration and result
- Calculates next run time based on schedule type
- Container closes after 10-second grace period
IPC watcher
The IPC watcher (src/ipc.ts) enables container-to-host communication:
- Watches `data/ipc/{group}/messages/*.json` for outbound messages
- Watches `data/ipc/{group}/tasks/*.json` for task operations
- Validates operations against group privileges (see security.mdx)
- Atomic file writes (`.tmp` then rename) prevent race conditions
- Each group has an isolated IPC namespace
Supported operations:
- `send_message`: Send a message to the group chat (own chat only for non-main groups)
- `schedule_task`, `pause_task`, `resume_task`, `cancel_task`, `update_task`: Task management
- `register_group`, `refresh_groups`: Group management (main only)
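The write-then-rename pattern from the container side can be sketched like this (the file-naming scheme is an assumption; the atomicity comes from `rename` being atomic on POSIX filesystems):

```typescript
// Atomic IPC file write: write to a .tmp path, then rename into place,
// so the host-side watcher never observes a half-written JSON file.
import { mkdirSync, renameSync, writeFileSync } from "node:fs";
import { join } from "node:path";

function writeIpcMessage(dir: string, payload: object): string {
  mkdirSync(dir, { recursive: true });
  const name = `${Date.now()}-${Math.random().toString(36).slice(2)}.json`;
  const tmp = join(dir, `${name}.tmp`);
  const final = join(dir, name);
  writeFileSync(tmp, JSON.stringify(payload));
  renameSync(tmp, final); // atomic: watcher sees nothing or the full file
  return final;
}
```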
Database
The SQLite database (`store/messages.db`) stores:
- messages: All messages with timestamps, sender info, and bot-message flag (queries capped at 200 per invocation)
- chats: Chat metadata (name, last activity, channel, is_group)
- sessions: Claude session IDs per group folder
- registered_groups: Active groups with folder, trigger, container config, is_main flag
- router_state: Message cursors and last processed timestamps
- scheduled_tasks: Task definitions with schedule, context_mode (`group` or `isolated`), and status
- task_run_logs: Task execution history with duration and results
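A sketch of the capped message read; only the 200-row cap is from the docs, while the SQL shape and column names beyond the tables listed above are illustrative assumptions:

```typescript
// Build a capped read over the messages table. The 200-row limit per
// invocation matches the documented cap; everything else is a sketch.
function buildMessageQuery(chatId: string, afterTs: number): { sql: string; params: unknown[] } {
  return {
    sql: "SELECT * FROM messages WHERE chat_id = ? AND timestamp > ? ORDER BY timestamp ASC LIMIT 200",
    params: [chatId, afterTs],
  };
}
```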
Data flow
Incoming message flow
Follow-up message flow (piped to active container)
File system layout
```
nanoclaw/
├── src/
├── container/
│   ├── Dockerfile
│   ├── agent-runner/
│   └── skills/
├── groups/
│   ├── main/
│   │   ├── CLAUDE.md
│   │   └── logs/
│   └── {group-name}/
├── data/
│   ├── sessions/
│   │   └── {group}/
│   │       ├── .claude/
│   │       └── agent-runner-src/
│   └── ipc/
│       └── {group}/
│           ├── messages/
│           ├── tasks/
│           └── input/
├── store/
│   └── messages.db
└── auth/
```
Container image
The agent container (container/Dockerfile) includes:
- Base: `node:22-slim`
- Browser: Chromium with all required dependencies
- Tools: `agent-browser` CLI for browser automation
- Runtime: `@anthropic-ai/claude-code` (Claude Agent SDK)
- User: Runs as the `node` user (uid 1000, non-root)
- Working directory: `/workspace/group` (the group's folder)
The container image is rebuilt by `./container/build.sh`. Changes to agent-runner code require a rebuild.
Subsystems
Session management
Each group maintains an isolated Claude conversation session:
- Sessions are stored at `data/sessions/{group}/.claude/`
- Sessions include the full message history and the contents of files read
- Auto-compact when context gets too long
- Settings configured per group:
  - `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1` (enable subagent orchestration)
  - `CLAUDE_CODE_ADDITIONAL_DIRECTORIES_CLAUDE_MD=1` (load memory from mounts)
  - `CLAUDE_CODE_DISABLE_AUTO_MEMORY=0` (enable persistent memory)
Skills system
Shared skills in `container/skills/` are synced to each group's `.claude/skills/` on startup:
- Skills are available to all agents
- Per-group copies allow customization without affecting others
- Changes to shared skills require container restart to sync
- Built-in container skills include `/agent-browser` (web automation), `/capabilities` (system introspection), and `/status` (health check)
- `/capabilities` and `/status` are main-channel only; they check for the `/workspace/project` mount to enforce access
Agent runner customization
Each group gets a writable copy of `agent-runner/src/` at `data/sessions/{group}/agent-runner-src/`:
- Recompiled on every container startup via `entrypoint.sh`
- Allows agents to add custom tools or modify behavior
- Isolated from other groups (changes don’t affect them)
- MCP servers can be added by modifying the agent runner code
The agent runner is the TypeScript code that wraps the Claude Agent SDK. It handles IPC, streaming output, and tool registration.
Startup sequence
- Container system check: Ensure Docker is running, clean up orphaned containers
- Database initialization: Create tables if needed, load schema
- State loading: Restore message cursors, sessions, registered groups
- OneCLI agent sync: Ensure each registered non-main group has a corresponding OneCLI agent for per-group credential scoping (best-effort, non-blocking). In v1.2.22+, this step syncs OneCLI agents for all registered groups; in earlier versions, it starts the built-in credential proxy on `CREDENTIAL_PROXY_PORT`.
- Remote Control restore: Re-adopt any surviving Remote Control session from a previous run
- Shutdown handlers: Register graceful shutdown on `SIGTERM` and `SIGINT`
- Channel connection: Connect to messaging channels, authenticate if needed
- Subsystem startup:
- Task scheduler loop (60s interval)
- IPC watcher (1s poll interval)
- Message loop (2s poll interval)
- Recovery: Check for unprocessed messages from previous crash
- Ready: System begins processing messages and tasks
Graceful shutdown
On `SIGTERM` or `SIGINT`:
- GroupQueue enters shutdown mode (stops accepting new work)
- Active containers are detached (not killed)
- Channels disconnect gracefully
- Process exits with code 0
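The shutdown steps above can be sketched as one idempotent handler (names are illustrative; in the real process, exit code 0 follows once this resolves):

```typescript
// Graceful-shutdown sketch: stop new work and detach containers, then
// disconnect channels. Idempotent because SIGTERM and SIGINT may both fire.
interface Queue {
  shutdown(): Promise<void>; // stops accepting work, detaches containers
}
interface ChannelConn {
  disconnect(): Promise<void>;
}

let shuttingDown = false;

async function gracefulShutdown(queue: Queue, channels: ChannelConn[]): Promise<boolean> {
  if (shuttingDown) return false; // second signal is a no-op
  shuttingDown = true;
  await queue.shutdown();
  await Promise.all(channels.map((c) => c.disconnect()));
  return true; // caller then exits with code 0
}
```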