Agentic AI assistant for modern chat surfaces, built in Rust.
MicroClaw is optimized for teams that need tool-using automation with durable memory, resumable sessions, and channel adapters that do not fork business logic.
Recommended for most macOS setups:

curl -fsSL https://microclaw.ai/install.sh | bash

Channel-agnostic core with adapter-based delivery
AGENTS.md + SQLite structured memory with observability
Tool loop, sub-agents, scheduling, and background tasks
Skills catalog plus MCP tool federation
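The adapter-based delivery model above can be sketched as a small Rust trait: each chat surface converts its platform-specific events into one normalized message type that the shared runtime consumes. All names here (`ChannelAdapter`, `InboundMessage`, `TelegramAdapter`) are illustrative assumptions, not MicroClaw's actual API.

```rust
/// One normalized event format shared by every channel.
#[derive(Debug, Clone, PartialEq)]
pub struct InboundMessage {
    pub channel: String,
    pub user_id: String,
    pub text: String,
}

/// Each chat surface implements this to feed the shared runtime.
pub trait ChannelAdapter {
    fn name(&self) -> &'static str;
    fn normalize(&self, raw: &str, user_id: &str) -> InboundMessage;
}

pub struct TelegramAdapter;

impl ChannelAdapter for TelegramAdapter {
    fn name(&self) -> &'static str {
        "telegram"
    }
    fn normalize(&self, raw: &str, user_id: &str) -> InboundMessage {
        InboundMessage {
            channel: self.name().to_string(),
            user_id: user_id.to_string(),
            text: raw.trim().to_string(),
        }
    }
}

fn main() {
    let adapter = TelegramAdapter;
    let msg = adapter.normalize("  /status  ", "u42");
    // The core loop only ever sees InboundMessage, never Telegram types.
    assert_eq!(msg.channel, "telegram");
    assert_eq!(msg.text, "/status");
    println!("{msg:?}");
}
```

Because business logic depends only on `InboundMessage`, adding a new channel means writing one adapter, not forking the runtime.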
A runtime-centered stack that keeps your logic stable while channels, models, and tools evolve.
One shared agent loop drives Telegram, Discord, Slack, Feishu, and Web adapters.
The model can chain tool calls across multiple steps until it signals end_turn.
File memory + structured SQLite memory with reflector extraction and dedupe lifecycle.
Cron and one-shot tasks run through the same runtime, not a separate automation stack.
Attach external tool servers and domain-specific skills without rewriting the core loop.
Usage and memory observability endpoints help teams track quality and drift over time.
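The multi-step tool loop in the list above can be sketched roughly as follows. The types and functions (`StopReason`, `model_step`, `run_tool`) are hypothetical stand-ins for the provider layer, not MicroClaw's real implementation:

```rust
// Illustrative tool loop: keep executing the tool calls the model
// requests until it returns an end_turn stop reason.

#[derive(Debug, PartialEq)]
enum StopReason {
    ToolUse(String),
    EndTurn(String),
}

/// Stand-in for a provider call: requests one tool, then finishes
/// once a tool result appears in the conversation history.
fn model_step(history: &[String]) -> StopReason {
    if history.iter().any(|m| m.contains("result:")) {
        StopReason::EndTurn("done".to_string())
    } else {
        StopReason::ToolUse("read_file".to_string())
    }
}

/// Stand-in for sandboxed tool execution.
fn run_tool(name: &str) -> String {
    format!("result: {name} ok")
}

fn run_loop() -> String {
    let mut history = vec!["user: summarize the README".to_string()];
    loop {
        match model_step(&history) {
            StopReason::ToolUse(tool) => {
                // Feed the tool result back and let the model continue.
                history.push(run_tool(&tool));
            }
            StopReason::EndTurn(answer) => return answer,
        }
    }
}

fn main() {
    assert_eq!(run_loop(), "done");
}
```

The same loop shape accommodates sub-agents and scheduled tasks: both are just additional callers that drive the loop to end_turn.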
The same engine powers every conversation, regardless of where users talk to your agent.
Channel adapters normalize inbound events into a single runtime format.
Session state, AGENTS.md memory, structured memory, and active skills are injected.
Provider layer streams responses and executes tool calls in a controlled loop.
Conversations, sessions, and memories are persisted; reflector updates durable facts.
Responses are split by channel limits and emitted with consistent delivery semantics.
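The final delivery step above can be sketched as a channel-limit splitter. The function name and behavior are assumptions for illustration (real channels impose hard caps, e.g. Telegram limits message text to 4096 characters); this sketch prefers line boundaries and hard-splits only oversized lines:

```rust
/// Split a response into chunks no longer than a channel's message
/// limit, preferring to break at line boundaries. Hard splits are
/// byte-based for simplicity (ASCII-safe demo, not production code).
fn split_for_channel(text: &str, limit: usize) -> Vec<String> {
    let mut chunks = Vec::new();
    let mut current = String::new();
    for line in text.split_inclusive('\n') {
        // Flush the current chunk if this line would overflow it.
        if current.len() + line.len() > limit && !current.is_empty() {
            chunks.push(std::mem::take(&mut current));
        }
        // A single line longer than the limit is hard-split.
        let mut rest = line;
        while rest.len() > limit {
            let (head, tail) = rest.split_at(limit);
            chunks.push(head.to_string());
            rest = tail;
        }
        current.push_str(rest);
    }
    if !current.is_empty() {
        chunks.push(current);
    }
    chunks
}

fn main() {
    let chunks = split_for_channel("hello\nworld\n", 6);
    assert_eq!(chunks, vec!["hello\n".to_string(), "world\n".to_string()]);
}
```

Keeping the splitter in the runtime, parameterized by each adapter's limit, is what lets delivery semantics stay consistent across channels.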
From solo operators to platform teams, a single core runtime covers multiple deployment paths.
Run one assistant across your chats with memory, scheduling, and shell/file tooling.
Use permission-aware tools and session history to support recurring internal workflows.
Ship new channels and tools quickly on a shared Rust core instead of fragmented bots.
Follow Quickstart for setup, then move into tools, permissions, memory, and channel deployment.