Run 12+ AI agents concurrently. One identity. Full governance. Measurable ROI.
| You are... | Start with |
|---|---|
| Engineering Leader evaluating AI adoption | Why AssemblyZero? |
| AI Strategy / Operations implementing Claude Enterprise | AI Strategy & Operations |
| Technical Architect designing agent infrastructure | Technical Architecture |
| Security & Compliance approving AI tooling | Secret Guard Architecture |
| Practitioner building with Claude Code | Quick Start |
This isn't theoretical. AssemblyZero has been under continuous development for 70 days with contributions on 64 of them (91.4%). Only 6 days off since January 10, 2026.
| Metric | Value |
|---|---|
| Issues closed | 493 |
| Commits | 912 |
| PRs merged | 283 |
| Tests | 5,090+ across 134 files |
| Active days | 64 of 70 (91.4%) |
Daily Activity Log → · Metrics → · March 2026 Velocity →
```mermaid
graph TD
    subgraph Intent["HUMAN ORCHESTRATOR"]
        O["Human Intent<br/>& Oversight"]
    end
    subgraph LG["LANGGRAPH WORKFLOWS"]
        W["5 State Machines<br/>SQLite Checkpointing"]
    end
    subgraph Agents["CLAUDE AGENTS (12+)"]
        A["Feature | Bug Fix<br/>Docs | Review"]
    end
    subgraph Gemini["GEMINI VERIFICATION"]
        G["LLD Review | Code Review<br/>Security | Quality"]
    end
    subgraph Gov["GOVERNANCE GATES"]
        M["Requirements | Implementation<br/>Reports | Audit Trail"]
    end
    O --> LG
    LG --> Agents
    Agents --> Gemini
    Gemini --> Gov
```
What makes AssemblyZero different:
| Capability | What It Means |
|---|---|
| 12+ Concurrent Agents | Multiple Claude agents work in parallel on features, bugs, docs — all under one user identity |
| Gemini Reviews Claude | Every design doc and code change is reviewed by Gemini before humans see it |
| Enforced Gates | LLD review, implementation review, report generation — gates that can't be skipped |
| Secret Guard Architecture | Two-layer hook system protecting credentials from AI session transcripts — 12/17 bypass vectors blocked |
| Cerberus PR Governance | GitHub App auto-approves PRs after CI validation — no human bottleneck |
| Prompt Economics | System prompt caching (90% savings), Haiku routing, batch generation, budget guards |
| 34 Governance Audits | OWASP, GDPR, NIST AI Safety — adversarial audits that find violations |
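The Prompt Economics row above mentions Haiku routing. A minimal sketch of complexity-based model routing, assuming hypothetical model names and thresholds (not AssemblyZero's actual router):

```python
def pick_model(prompt: str, needs_reasoning: bool) -> str:
    """Route cheap tasks to a small model and heavy ones to a larger one.
    Model names and the length threshold are illustrative assumptions."""
    if needs_reasoning or len(prompt) > 2000:
        return "claude-sonnet"
    return "claude-haiku"

print(pick_model("Fix this typo in the README", needs_reasoning=False))
```

The same pattern generalizes to batch generation: accumulate low-priority prompts and send them in one cheaper pass.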
AI coding assistants like Claude Code and GitHub Copilot are transforming development. But enterprise adoption stalls because:
| Challenge | Reality |
|---|---|
| No coordination | Multiple agents conflict and duplicate work |
| No governance | Security teams can't approve ungoverned AI |
| No verification | AI-generated code goes unreviewed |
| No metrics | Leadership can't prove ROI |
| Permission friction | Constant approval prompts destroy flow state |
Organizations run pilots. Developers love the tools. Then adoption plateaus at 10-20% because the infrastructure layer is missing.
AssemblyZero is that infrastructure layer.
The headline feature: run 12+ AI agents concurrently under single-user identity with full coordination.
| Component | Function |
|---|---|
| Single-User Identity | All agents share API credentials, git identity, permission patterns |
| Worktree Isolation | Each agent gets its own git worktree - no conflicts, clean PRs |
| Credential Rotation | Automatic rotation across API keys when quota exhausted |
| Session Coordination | Agents can see what others are working on via session logs |
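Worktree isolation can be sketched as one `git worktree add` invocation per agent. The directory layout and branch naming below are hypothetical, not AssemblyZero's actual convention:

```python
from pathlib import Path

def worktree_command(repo: Path, agent: str, branch: str) -> list[str]:
    """Build a `git worktree add` call giving one agent an isolated checkout."""
    # Each agent gets its own branch and directory, so parallel edits never collide.
    worktree_dir = repo.parent / f"{repo.name}-worktrees" / agent
    return ["git", "-C", str(repo), "worktree", "add", "-b", branch, str(worktree_dir)]

cmd = worktree_command(Path("/work/AssemblyZero"), "feature-agent-1", "feat/issue-123")
print(" ".join(cmd))
```

Because every worktree is a separate directory backed by the same repository, each agent's PR contains only its own branch's changes.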
| Role | Context | Tasks |
|---|---|---|
| Feature Agent | Full codebase | New functionality, refactors |
| Bug Fix Agent | Issue-focused | Specific bug investigation |
| Documentation Agent | Docs + code | README, wiki, API docs |
| Review Agent | PR diff | Code review assistance |
| Audit Agent | Compliance | Security, privacy audits |
Result: One engineer orchestrating 12+ agents can accomplish what previously required a team.
Full Architecture Documentation
The key differentiator: Claude builds, Gemini reviews.
This isn't just "two models" - it's adversarial verification where one AI checks another's work before humans approve.
| Gate | When | What Gemini Checks |
|---|---|---|
| Issue Review | Before work starts | Requirements clarity, scope, risks |
| LLD Review | Before coding | Design completeness, security, testability |
| Code Review | Before PR | Quality, patterns, vulnerabilities |
| Security Audit | Before merge | OWASP Top 10, dependency risks |
| Single Model | Multi-Model (AssemblyZero) |
|---|---|
| Claude reviews Claude's work | Gemini reviews Claude's work |
| Same blind spots | Different model catches different mistakes |
| Trust the output | Verify the output |
| "It looks good to me" | Structured JSON verdicts: APPROVE/BLOCK |
AssemblyZero detects silent model downgrades:
- Gemini CLI sometimes returns Flash when you request Pro
- Our tools verify the actual model used in the response
- If downgraded, the review is flagged as invalid
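The downgrade check amounts to comparing the requested model against the one the response reports. The response shape and model names below are illustrative assumptions, not the actual Gemini CLI output format:

```python
def verify_model(response: dict, requested: str = "gemini-2.5-pro") -> bool:
    """Return False (review invalid) if the provider silently served
    a different model than the one requested."""
    served = response.get("model", "")
    return requested in served  # e.g. reject a Flash response when Pro was requested

print(verify_model({"model": "gemini-2.5-flash"}))
```

A failed check marks the verdict invalid rather than discarding it, so the gate can be retried against the correct model.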
| Provider | Class | Use Case | Cost Model |
|---|---|---|---|
| Claude CLI (`claude -p`) | `ClaudeCLIProvider` | Drafting, implementation | Free (Max subscription) |
| Anthropic API | `AnthropicProvider` | Automatic fallback | Per-token |
| Fallback | `FallbackProvider` | CLI (180s) → API (300s) | Free first, paid if needed |
| Gemini | `GeminiProvider` | Adversarial review only | Free (API quota) |
Claude is invoked via `claude -p` with `--tools ""` and `--strict-mcp-config` — no tools, no MCP, deterministic side-effect-free calls. The Anthropic API exists only as a paid fallback for resilience.
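Building that invocation is a one-liner. The flags mirror the description above; exact CLI semantics may differ by version, so treat this as a sketch:

```python
import shlex

def claude_command(prompt: str) -> list[str]:
    """Build a side-effect-free `claude -p` call: no tools, no MCP servers."""
    return ["claude", "-p", prompt, "--tools", "", "--strict-mcp-config"]

cmd = claude_command("Summarize this diff")
print(shlex.join(cmd))  # shell-quoted for logging; the empty --tools value is preserved
```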
All workflows are LangGraph StateGraph instances with typed state and SQLite checkpointing:
| Workflow | Nodes | Purpose |
|---|---|---|
| Issue | 7 | Idea → structured GitHub issue |
| Requirements | 10 | Issue → approved LLD (design) |
| Implementation Spec | 7 | LLD → concrete implementation instructions |
| TDD Implementation | 13 | Spec → code + tests + PR |
| Scout | Variable | External intelligence gathering |
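The checkpointing idea is simple: persist workflow state after each node so an interrupted run resumes where it stopped. This stdlib sketch only illustrates the concept — the real system uses LangGraph's `SqliteSaver`, whose API differs:

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE checkpoints (thread_id TEXT PRIMARY KEY, state TEXT)")

def save(thread_id: str, state: dict) -> None:
    """Persist the latest workflow state for this thread."""
    db.execute("INSERT OR REPLACE INTO checkpoints VALUES (?, ?)",
               (thread_id, json.dumps(state)))

def resume(thread_id: str) -> dict:
    """Reload the last checkpoint, or start fresh if none exists."""
    row = db.execute("SELECT state FROM checkpoints WHERE thread_id = ?",
                     (thread_id,)).fetchone()
    return json.loads(row[0]) if row else {}

save("issue-123", {"node": "lld_review", "attempts": 1})
print(resume("issue-123"))  # the workflow picks up at the last completed node
```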
AssemblyZero uses a local RAG system built on ChromaDB and all-MiniLM-L6-v2 embeddings:
- Document retrieval (The Librarian) — injects relevant ADRs, standards, and past LLDs into design prompts
- Codebase retrieval (Hex) — AST-based Python indexing with vector search for implementation context
- Duplicate detection (The Historian) — checks for similar past issues before drafting
- Token budget management — trims context to fit LLM prompt limits
- Local-only — no data leaves the developer machine (see ADR-0212)
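Token budget management can be sketched as greedy selection over relevance-ranked chunks. The chars/4 token estimate and the function shape are assumptions, not AssemblyZero's actual budgeter:

```python
def trim_to_budget(chunks: list[str], budget_tokens: int) -> list[str]:
    """Keep the highest-ranked retrieved chunks until the rough token budget is spent."""
    kept, used = [], 0
    for chunk in chunks:  # chunks assumed pre-sorted by relevance, best first
        cost = max(1, len(chunk) // 4)  # crude ~4 chars/token estimate
        if used + cost > budget_tokens:
            break
        kept.append(chunk)
        used += cost
    return kept

docs = ["ADR-0212 summary " * 50, "past LLD excerpt " * 50, "style guide " * 50]
print(len(trim_to_budget(docs, budget_tokens=300)))  # → 1
```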
Full Technical Architecture → | RAG Architecture →
Three mandatory checkpoints that cannot be bypassed:
```
Idea → Issue → LLD Review → Coding → Implementation Review → PR → Report Generation → Merge
                   ↑                           ↑                          ↑
              Gemini Gate                 Gemini Gate              Auto-Generated
```
Before writing ANY code:
- Design document submitted to Gemini
- Gemini evaluates completeness, security, testability
- APPROVE → Proceed to coding
- BLOCK → Revise design first
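Gating on the verdict is mechanical once it arrives as structured JSON. The schema below (`{"verdict": "APPROVE"|"BLOCK", "reasons": [...]}`) is illustrative, not the exact wire format:

```python
import json

def parse_verdict(raw: str) -> bool:
    """Halt the workflow on BLOCK; let it proceed on APPROVE."""
    verdict = json.loads(raw)
    if verdict["verdict"] == "BLOCK":
        raise SystemExit(f"LLD blocked: {verdict.get('reasons', [])}")
    return True

print(parse_verdict('{"verdict": "APPROVE", "reasons": []}'))  # → True
```

Because the verdict is structured rather than free prose, the gate cannot be talked past: anything other than an explicit APPROVE stops the pipeline.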
- Cost of design fix: 1 hour
- Cost of code fix: 8 hours
- Cost of production fix: 80 hours
Before creating ANY PR:
- Implementation report + test report submitted to Gemini
- Gemini evaluates quality, coverage, security
- APPROVE → Create PR
- BLOCK → Fix issues first
Before merge, auto-generate:
- `implementation-report.md` — What changed and why
- `test-report.md` — Full test output, coverage metrics
Permission prompts are the #1 adoption killer.
Every "Allow this command?" prompt breaks flow state. Developers either:
- Click "Allow" without reading (security risk)
- Get frustrated and stop using the tool (adoption failure)
AssemblyZero identifies:
- Commands that always get approved → Add to allow list
- Commands that always get denied → Add to deny list
- Novel patterns → Flag for review
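Classifying command patterns from approval history can be sketched as simple counting. The threshold and bucket names are illustrative, not AssemblyZero's actual configuration:

```python
from collections import Counter

def classify(history: list[tuple[str, bool]], threshold: int = 5) -> dict[str, str]:
    """Sort command patterns into allow/deny/review buckets from approval history."""
    approvals, denials = Counter(), Counter()
    for pattern, approved in history:
        (approvals if approved else denials)[pattern] += 1
    buckets = {}
    for pattern in set(approvals) | set(denials):
        if approvals[pattern] >= threshold and denials[pattern] == 0:
            buckets[pattern] = "allow"      # always approved → stop prompting
        elif denials[pattern] >= threshold and approvals[pattern] == 0:
            buckets[pattern] = "deny"       # always denied → block silently
        else:
            buckets[pattern] = "review"     # novel or mixed → keep prompting
    return buckets

history = [("git status", True)] * 6 + [("rm -rf *", False)] * 6 + [("curl *", True)]
print(classify(history))
```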
| Metric | Before | After |
|---|---|---|
| Prompts per session | 15-20 | 2-3 |
| Time lost to prompts | 10+ min | < 1 min |
| Developer frustration | High | Low |
Audits designed with an adversarial philosophy: they exist to find violations, not confirm compliance.
| Category | Count | Focus |
|---|---|---|
| Security & Privacy | 3 | OWASP, GDPR, License compliance |
| AI Governance | 7 | Bias, Explainability, Safety, Agentic risks |
| Code Quality | 4 | Standards, Accessibility, Capabilities |
| Permission Management | 3 | Friction, Permissiveness, Self-audit |
| Documentation Health | 6 | Reports, LLD alignment, Terminology |
| Extended | 10 | Cost, Structure, References, Wiki |
| Meta | 1 | Audit system governance |
| Audit | Standard | What It Checks |
|---|---|---|
| 0808 | OWASP LLM 2025 | AI-specific vulnerabilities |
| 0809 | OWASP Agentic 2026 | Agent autonomy risks |
| 0810 | ISO/IEC 42001 | AI management system |
| 0815 | Internal | Permission friction patterns |
"How do I prove ROI to leadership?"
| Metric | What It Shows |
|---|---|
| Active users / Total engineers | Adoption rate |
| Sessions per user per week | Engagement depth |
| Features shipped with AI assist | Productivity impact |
| Metric | Target |
|---|---|
| Permission prompts per session | < 3 |
| Time to first productive action | < 30 seconds |
| Session abandonment rate | < 5% |
| Metric | What It Shows |
|---|---|
| Gemini first-pass approval rate | Design quality |
| PR revision count | Code quality |
| Post-merge defects | Overall quality |
| Metric | Calculation |
|---|---|
| Cost per feature | Total API spend / Features shipped |
| Cost per agent-hour | API spend / Active agent hours |
| ROI | (Time saved × Engineer cost) / Platform cost |
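The ROI formula from the table is direct arithmetic. The sample numbers below are made up for illustration:

```python
def roi(hours_saved: float, engineer_rate: float, platform_cost: float) -> float:
    """ROI = (time saved × engineer cost) / platform cost, per the table above."""
    return (hours_saved * engineer_rate) / platform_cost

# e.g. 40 hours saved per month at $120/hr against $800/month of API spend
print(f"{roi(40, 120, 800):.1f}x")  # → 6.0x
```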
AssemblyZero workflows run as LangGraph state machines with SQLite checkpointing:
| Phase | Status | Capability | Impact |
|---|---|---|---|
| 1 | COMPLETE | LangGraph state machines (5 workflows) | Gates structurally enforced |
| 2 | COMPLETE | SQLite checkpointing (SqliteSaver) | Long tasks survive interruptions |
| 3 | Q2 2026 | Supervisor pattern | Autonomous task decomposition |
| 4 | Q2 2026 | LangSmith observability | Full dashboards, traces, cost attribution |
| 5 | Q3 2026 | Dynamic tool graphs | Context-aware tool selection |
```shell
git clone https://github.com/martymcenroe/AssemblyZero.git
cd AssemblyZero
poetry install

mkdir -p YourProject/.claude
cp AssemblyZero/.claude/project.json.example YourProject/.claude/project.json
# Edit project.json with your project details

poetry run python tools/assemblyzero-generate.py --project YourProject
```

The generated configs include:
- Permission patterns that eliminate friction
- Slash commands for common operations
- Hooks for pre/post tool execution
Full documentation at AssemblyZero Wiki (50 pages):
| Page | Description |
|---|---|
| Technical Architecture | LLM invocation, LangGraph workflows, codebase intelligence |
| Metrics Dashboard | Velocity charts, production numbers |
| Multi-Agent Orchestration | The headline feature - 12+ concurrent agents |
| Requirements Workflow | LLD → Gemini → Approval flow |
| Implementation Workflow | Worktree → Code → Reports → PR |
| Governance Gates | LLD, implementation, report gates |
| How AssemblyZero Learns | Self-improving governance from verdicts |
| LangGraph Evolution | Roadmap to enterprise state machines |
| Gemini Verification | Multi-model review architecture |
| Quick Start | 5-minute setup guide |
See Start Here at the top of this page for persona-based navigation.
AssemblyZero was built by Martin McEnroe, applying 29 years of enterprise technology leadership to the emerging challenge of scaling AI coding assistants across engineering organizations.
| Role | Organization | Relevance |
|---|---|---|
| Director, Data Science & AI | AT&T | Led 45-person team, $10M+ annual savings from production AI |
| VP Product | Afiniti | AI-powered platform at scale |
| AI Strategic Consultant | TX DOT | 76-page enterprise AI strategy |
Having led enterprise AI adoption, I know the blockers:
| Blocker | AssemblyZero Solution |
|---|---|
| "Security won't approve ungoverned AI" | 34 audits, Gemini gates, secret guard hooks, 62-repo security audit |
| "We can't measure productivity" | KPI framework, friction tracking, per-call cost attribution |
| "Agents conflict with each other" | Worktree isolation, single-user identity model |
| "PRs block waiting for human review" | Cerberus GitHub App auto-approves after CI validation |
| "Developers hate the permission prompts" | Pattern detection, friction analysis, auto-remediation |
| "Token costs are unpredictable" | Prompt caching (90% savings), budget guards, circuit breakers |
| "It's just pilots, not real adoption" | Infrastructure that scales — 493 issues closed in 70 days |
- LangChain, LangGraph, LangSmith - Hands-on implementation experience
- 21 US Patents - Innovation track record
- CISSP, CCSP, AWS ML Specialty - Security and ML credentials
- GitHub Copilot, Claude Code - Daily production use
This isn't theoretical. It's production infrastructure I use daily to orchestrate 12+ AI agents with full governance.
The code in this repo is the same code that:
- Runs the Gemini verification gates
- Protects secrets via hook-based guard architecture
- Governs PR merges via the Cerberus GitHub App
- Optimizes prompt costs (90% savings via caching and routing)
- Tracks permission friction patterns
- Generates the audit reports
- Bypasses Windows' 32K-char limit with a temp-dir CLAUDE.md hack
If you're scaling AI coding assistants across an engineering organization, this is the infrastructure layer you need.
Workflows are named after Terry Pratchett's Discworld characters. Each persona defines the agent's operational philosophy — making system behavior intuitive and memorable.
```mermaid
graph TB
    subgraph Orch["ORCHESTRATION"]
        Om["Om<br/><i>Human</i>"]
        Moist["Moist<br/><i>Pipeline</i>"]
    end
    subgraph Intel["INTELLIGENCE"]
        Lib["Librarian<br/><i>Doc Retrieval</i>"]
        Hex["Hex<br/><i>Code RAG</i>"]
        Hist["Historian<br/><i>Duplicates</i>"]
        Angua["Angua<br/><i>Scout</i>"]
    end
    subgraph Found["FOUNDATION"]
        Brutha["Brutha<br/><i>Vector Store</i>"]
    end
    subgraph Maint["MAINTENANCE"]
        LuTze["Lu-Tze<br/><i>Janitor</i>"]
        DEATH["DEATH<br/><i>Reconciliation</i>"]
    end
    Om --> Moist
    Moist --> Intel
    Lib --> Brutha
    Hex --> Brutha
    Hist --> Brutha
```
| Persona | Function | Status | Issue |
|---|---|---|---|
| Om | Human orchestrator — pure intent | Active | — |
| Moist von Lipwig | Pipeline orchestration (Issue → PR) | Implemented | #305 |
| Brutha | RAG vector store — perfect recall | Implemented | #113 |
| The Librarian | Document retrieval for LLDs | Implemented | #88 |
| Hex | Codebase intelligence — AST indexing | Implemented | #92 |
| The Historian | Duplicate detection before drafting | Implemented | #91 |
| Captain Angua | External intelligence scout | Implemented | #93 |
| Lu-Tze | Repository hygiene — constant sweeping | Implemented | #94 |
| DEATH | Documentation reconciliation | Manual | #114 |
| Cerberus | PR governance — three heads, one gate | Implemented | #736 |
| Vimes | Regression guardian — deep suspicion | Planned | — |
| Ponder Stibbons | Auto-fix compositor | Planned | #307 |
Full Cast & Architecture → | Architecture Diagrams →
"A man is not dead while his name is still spoken." GNU Terry Pratchett
PolyForm Noncommercial 1.0.0