A local AI workspace built in Rust that orchestrates multiple agents using parallel voting (First-to-ahead-by-k) to achieve complex task execution on standard hardware.
Boomai is an open-source project focused on building a privacy-preserving AI operator that runs on your computer. It connects a Rust daemon with a desktop app to keep setup simple while still supporting agentic workflows and model orchestration.
- Local First – Data and orchestration stay on your machine by default.
- Runtime Flexibility – Choose your own models with `ModelProvider`, a model- and provider-agnostic abstraction (works with Ollama, cloud APIs, etc.).
- Reliability as a Systems Problem – The architecture prioritizes decomposition, validation, and error correction over brute-force model size.
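The provider-agnostic design can be pictured as a small trait. The names below (`ModelProvider`, `complete`, `EchoProvider`) are illustrative stand-ins, not Boomai's actual API:

```rust
// Illustrative sketch: any model backend implements the same trait,
// so orchestration code never depends on a specific provider.
// These names are hypothetical, not Boomai's real types.

trait ModelProvider {
    /// Send a prompt to the backing model and return its completion.
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

/// Stand-in for a real backend (e.g., an Ollama or cloud-API provider).
struct EchoProvider;

impl ModelProvider for EchoProvider {
    fn complete(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("echo: {prompt}"))
    }
}

/// Orchestration code only sees the trait object, never the concrete backend.
fn run(provider: &dyn ModelProvider, prompt: &str) -> String {
    provider.complete(prompt).unwrap_or_else(|e| format!("error: {e}"))
}
```

Swapping Ollama for a cloud API would then mean swapping the concrete type, with no change to the calling code.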
Boomai is built as a dual-component system.
A Rust backend that coordinates agents, manages state and indexing, and exposes an HTTP API at `localhost:3030`.
Key points:
- Built with Axum and Tokio for concurrent request handling
- Tiered execution: deterministic internal stubs (calculator, system time) plus Maker as the voting tool for reasoning, all routed by policy
- Safety sidecar intercepts ingress + tool calls (tainting, prompt-injection checks, risk policy)
- Parallel task execution across agents to avoid sequential bottlenecks
- OpenAI-style JSON over HTTP through a unified provider abstraction
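The tiered execution described above can be sketched as a routing policy. The lane names, task kinds, and `route` function here are assumptions for illustration, not the daemon's real types:

```rust
// Illustrative routing policy: deterministic tasks go to internal stubs,
// high-stakes reasoning goes to the Maker voting tool, and low-risk
// reasoning falls back to a single model probe. Names are hypothetical.

#[derive(Debug, PartialEq)]
enum Lane {
    DeterministicStub, // e.g., calculator, system time
    MakerVoting,       // parallel candidates, first-to-ahead-by-k
    SingleProbe,       // low-risk fallback: one model call, no voting
}

enum Task {
    Arithmetic,
    SystemTime,
    Reasoning { low_risk: bool },
}

fn route(task: &Task) -> Lane {
    match task {
        // Deterministic work never touches a model.
        Task::Arithmetic | Task::SystemTime => Lane::DeterministicStub,
        Task::Reasoning { low_risk: true } => Lane::SingleProbe,
        Task::Reasoning { low_risk: false } => Lane::MakerVoting,
    }
}
```

The point of the tiering is that model calls (the expensive, fallible lane) are reserved for tasks that actually need them.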
A cross-platform UI built with Tauri and React.
- Desktop app with native webview
- Control of chat, configuration, and tool permissions
- Auto-pairs with the running daemon for plug-and-play use
Tooling is powered by the Model Context Protocol (MCP) so external services (files, productivity apps, etc.) can be integrated without bloating the daemon. Access is mediated so secrets and scopes stay protected.
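Mediated access can be pictured as a gateway that checks granted scopes before forwarding a tool call. This is a minimal sketch of the idea, not MCP's actual API; `ToolGateway` and the scope strings are invented for illustration:

```rust
// Illustrative mediation layer: a tool call is only dispatched if the
// requested scope was explicitly granted. Hypothetical types, not MCP's API.

use std::collections::HashSet;

struct ToolGateway {
    granted_scopes: HashSet<String>,
}

impl ToolGateway {
    fn call(&self, tool: &str, scope: &str) -> Result<String, String> {
        if self.granted_scopes.contains(scope) {
            Ok(format!("dispatched {tool} with scope {scope}"))
        } else {
            // Secrets and out-of-scope resources never reach the tool.
            Err(format!("scope {scope} not granted for {tool}"))
        }
    }
}
```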
Boomai applies Massively Decomposed Agentic Processes (MDAP) to reduce errors with smaller models, and treats Maker as a first-class tool built from the MAD/Maker documentation.
- Small Action Steps – Tasks are decomposed into small actions to limit context drift.
- Parallel Voting (Maker tool) – Multiple candidates run in parallel; progress advances when one answer is ahead-by-k.
- Structural Validation – Outputs are checked for format and size; suspicious responses are retried before they affect downstream steps.
- Tiered Reliability – Deterministic lanes use internal stubs; probabilistic steps use Maker; single-probe remains for low-risk fallbacks.
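The first-to-ahead-by-k rule can be sketched as a small counting loop: candidate answers arrive one by one, and an answer wins as soon as its vote count leads every rival by k. This is a simplified illustration of the idea, not Boomai's Maker implementation:

```rust
// Sketch of first-to-ahead-by-k voting over a stream of candidate answers.

use std::collections::HashMap;

fn first_to_ahead_by_k<I>(candidates: I, k: usize) -> Option<String>
where
    I: IntoIterator<Item = String>,
{
    let mut counts: HashMap<String, usize> = HashMap::new();
    for answer in candidates {
        let count = counts.entry(answer.clone()).or_insert(0);
        *count += 1;
        let leader = *count;
        // Best count among all *other* answers seen so far.
        let runner_up = counts
            .iter()
            .filter(|&(a, _)| a != &answer)
            .map(|(_, &c)| c)
            .max()
            .unwrap_or(0);
        if leader >= runner_up + k {
            return Some(answer); // early exit: no need to run more candidates
        }
    }
    None // no answer pulled ahead by k before candidates ran out
}
```

The early exit is what makes the scheme cheaper than a fixed-size majority vote: easy questions resolve after k agreeing candidates, and only contested ones consume more samples.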
- Simple setup with hardware-aware defaults
- Local model semantic search and retrieval for grounding
- Mod installation workflow for extending capabilities
- Rust-based indexing and orchestration for performance and safety
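Semantic retrieval for grounding typically ranks indexed snippets by embedding similarity. The sketch below shows the ranking step with hand-made vectors standing in for real local-model embeddings; the function names are illustrative, not Boomai's:

```rust
// Illustrative grounding step: rank indexed snippets by cosine similarity
// to a query embedding. Real embeddings come from a local model; these
// short vectors are stand-ins.

fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

/// Return the indexed snippet whose embedding is closest to the query.
fn best_match<'a>(query: &[f32], index: &'a [(&'a str, Vec<f32>)]) -> Option<&'a str> {
    index
        .iter()
        .max_by(|(_, a), (_, b)| {
            cosine(query, a).partial_cmp(&cosine(query, b)).unwrap()
        })
        .map(|(text, _)| *text)
}
```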
Requirements
- Rust (stable toolchain)
- Node.js (for the desktop client)
- Ollama (for local AI models; the temporary default model provider)
1. Install and set up Ollama

macOS:

```sh
brew install ollama
```

Windows:

- Visit ollama.ai and click Download for Windows
- Run the downloaded `OllamaSetup.exe` and follow the installer wizard
- Once installation completes, Ollama will be available on your system

Linux:

- Open a terminal
- Run the official install script:

```sh
curl -fsSL https://ollama.com/install.sh | sh
```

- Verify installation:

```sh
ollama -v
```
```sh
# Start Ollama service
ollama serve

# Pull a model
ollama pull llama2:7b  # or another model
```

2. Run the backend daemon
```sh
# From project root
export BOOMAI_PORT=3030
cargo run -p boomai-daemon
```

3. Run the desktop app
```sh
cd desktop
npm install
npm run tauri dev
```

The desktop client will automatically connect to the daemon at `localhost:3030`.
- Start with `docs/overview.md` for the problem/solution framing.
- Read `docs/architecture.md` for components, core types, and orchestration flow.
- Follow `docs/setup.md` to run daemon + desktop locally.
- See `docs/contributing.md` for workflow, style notes, and testing targets.
- Check `docs/FUTURE.md` for the roadmap (concurrency refactor, MAKER scaling, MCP/RAG plans).
Fast checks before PRs:
- `cargo fmt --all -- --check`
- `cargo clippy --workspace --all-targets --all-features -- -D warnings`
- `cargo nextest run -p boomai-daemon --all-targets` (or `cargo test --workspace` if you don’t have nextest)
- `cargo audit` and `cargo deny check` (security/licenses)
- (optional, nightly) `cargo +nightly udeps --workspace --all-targets`
- Frontend: `cd desktop && npm run lint`
- Run daemon + desktop and smoke-test a chat request.
CI summary:
- Caches cargo + sccache to speed builds.
- Lint matrix on stable/beta/nightly (fmt on stable, clippy with `-D warnings`).
- Tests on Ubuntu + macOS via `cargo nextest`.
- Security/licensing via `cargo-audit` and `cargo-deny`; unused deps via `cargo-udeps` (non-blocking).
- Daemon config lives at:
  - macOS: `~/Library/Application Support/boomai/config.json`
  - Linux: `~/.config/boomai/config.json`
  - Windows: `%APPDATA%\boomai\config.json`
- The file holds active model settings and up to 5 backups; API keys are stored in the OS keychain (`keyring`) and sanitized out of `config.json`.
- Model binaries are managed by Ollama in its cache (e.g., `~/.ollama`). Remove them with `ollama rm <model>` to reclaim space.
Boomai is available under the MIT License.
- Pre-1.0 SemVer-ish: `0.x.y` with `v` tags (e.g., `v0.1.0`).
- See `docs/versioning.md` for how to tag and what the release workflow does.
