fix: sync pipeline, extract, features, autopilot (v0.10.1)#129
Merged
feat: migrate 8 existing skills to conformance format

Add YAML frontmatter (name, version, description, triggers, tools, mutating), Contract, Anti-Patterns, and Output Format sections to all existing skills. Rename Workflow to Phases. Ingest becomes a thin router delegating to specialized ingestion skills (Phase 2).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
feat: add RESOLVER.md, conventions directory, and output rules

RESOLVER.md is the skill dispatcher modeled on Wintermute's AGENTS.md. Categorized routing table: Always-on, Brain ops, Ingestion, Thinking, Operational, Setup, Identity. Conventions directory extracts cross-cutting rules (quality, brain-first lookup, model routing, test-before-bulk).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
test: add skills conformance and resolver validation tests

skills-conformance.test.ts validates every skill has YAML frontmatter with required fields, Contract, Anti-Patterns, and Output Format sections, and manifest.json coverage. resolver.test.ts validates routing table categories, skill path existence, and manifest-to-resolver coverage. 50 new tests.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
feat: add 9 brain skills from Wintermute (Phase 2)

Generalized from Wintermute's battle-tested skills:
- signal-detector: always-on idea+entity capture on every message
- brain-ops: brain-first lookup, read-enrich-write loop, source attribution
- idea-ingest: links/articles/tweets with author people page mandatory
- media-ingest: video/audio/PDF/book with entity extraction (absorbs video/youtube/book)
- meeting-ingestion: transcripts with attendee enrichment chaining
- citation-fixer: audit and fix citation formatting
- repo-architecture: filing rules by primary subject
- skill-creator: create skills with conformance standard + MECE check
- daily-task-manager: task lifecycle with priority levels

All Garry-specific references generalized. Core workflows preserved. Updated RESOLVER.md and manifest.json.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
feat: add operational infrastructure + identity layer (Phase 3)

Operational skills:
- daily-task-prep: morning prep with calendar context and open threads
- cross-modal-review: quality gate via second model with refusal routing
- cron-scheduler: schedule staggering, quiet hours, wake-up override, idempotency
- reports: timestamped reports with keyword routing
- testing: skill validation framework (conformance checks)
- soul-audit: 6-phase interview generating SOUL.md, USER.md, ACCESS_POLICY.md, HEARTBEAT.md
- webhook-transforms: external events to brain signals with dead-letter queue

Identity layer:
- SOUL.md template (agent identity, generated by soul-audit)
- USER.md template (user profile, generated by soul-audit)
- ACCESS_POLICY.md template (4-tier access control)
- HEARTBEAT.md template (operational cadence)
- cross-modal.yaml convention (review pairs, refusal routing chain)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
docs: update CLAUDE.md with 24 skills, RESOLVER.md, conventions, templates

GBrain is now a GStack mod for agent platforms. Updated architecture description, key files listing (16 new skill files, RESOLVER.md, conventions, templates), skills section (24 skills organized by resolver categories), and testing section (new conformance and resolver tests).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
feat: add GStack detection + mod status to gbrain init (Phase 4)

After brain initialization, gbrain init now reports:
- Number of skills loaded (from manifest.json)
- GStack detection (checks known host paths, uses gstack-global-discover if available)
- GStack install instructions if not found
- Resolver and soul-audit pointers

Also adds installDefaultTemplates() for SOUL.md/USER.md/ACCESS_POLICY.md/HEARTBEAT.md deployment, and detectGStack() using gstack-global-discover with fallback to known paths (DRY: doesn't reimplement GStack's host detection logic).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
docs: v0.10.0 release documentation

- CHANGELOG: 24 skills, signal detector, RESOLVER.md, soul-audit, access control, conventions, conformance standard, GStack detection in init
- README: updated skill section with 24 skills, resolver, conventions
- TODOS: added runtime MCP access control (P1)
- VERSION: 0.9.2 → 0.10.0
- package.json + manifest.json version bumped

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
docs: add skill table to CHANGELOG v0.10.0

16-row table detailing every new skill, what it does, and why it matters. Written to sell the upgrade, not document the implementation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
# Conflicts:
#	CHANGELOG.md
#	CLAUDE.md
#	VERSION
#	package.json
#	src/commands/init.ts
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
docs: zero-based README rewrite for GStackBrain v0.10.0

Lead with GStack mod identity. 24 skills table organized by category. Install block references RESOLVER.md and soul-audit. GBrain+GStack relationship explained. Removed redundancy (733 -> 406 lines). All essential content preserved: install, recipes, architecture, search, commands, engines, voice, knowledge model.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
docs: extract install block to INSTALL_FOR_AGENTS.md, simplify README

The 30-line copy-paste install block becomes one line:
"Retrieve and follow INSTALL_FOR_AGENTS.md"
Benefits: agent always gets latest instructions (no stale copy-paste),
README stays clean, install details live where agents read them.
README now leads with what GBrain does ("gives your agent a brain")
instead of GStack relationship. Removed "requires frontier model" note.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
fix: 3 bugs in init.ts from merge conflict resolution

1. llstatSync typo (merge corruption) → lstatSync
2. __dirname undefined in ESM module → fileURLToPath polyfill
3. require('fs') in ESM → use imported readFileSync
All three would crash gbrain init at runtime. Caught by /review.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
feat: add checkResolvable shared core function for resolver validation

Shared function at src/core/check-resolvable.ts validates that all skills are reachable from RESOLVER.md, detects MECE overlaps (with whitelist for always-on/router skills), finds gaps in frontmatter triggers, and scans for DRY violations. Returns structured ResolvableIssue objects with machine-parseable fix objects alongside human-readable action strings. Three call sites: bun test, gbrain doctor, skill-creator skill.

Cleans up test/resolver.test.ts: removes stale 9-line skip list, imports from production check-resolvable.ts instead of reimplementing parsing.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
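As an editorial sketch, the "machine-parseable fix alongside human-readable action" shape described above could look like the following. The field and type names here are assumptions, not the actual check-resolvable.ts exports:

```typescript
// Hypothetical shapes; the real ResolvableIssue type may differ.
export interface ResolvableFix {
  kind: "add_route" | "remove_overlap" | "add_trigger";
  skill: string;
  detail: string;
}

export interface ResolvableIssue {
  severity: "error" | "warning";
  action: string;     // human-readable fix instruction for people
  fix: ResolvableFix; // machine-parseable counterpart for agents
}

// A caller (test, doctor, or the skill-creator skill) can branch on the
// machine-readable fix while printing the human-readable action:
export function summarizeErrors(issues: ResolvableIssue[]): string[] {
  return issues
    .filter((i) => i.severity === "error")
    .map((i) => `[${i.fix.kind}] ${i.action}`);
}
```

The dual representation is what lets the same check drive bun test failures, doctor output, and agent auto-fixes without re-parsing prose.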
feat: expand doctor with resolver validation, filesystem-first architecture

Doctor now runs filesystem checks (resolver health, skill conformance) before connecting to DB. New --fast flag skips DB checks. Falls back to filesystem-only when DB is unavailable. Adds schema_version: 2 to JSON output, composite health score (0-100), and structured issues array with action strings for agent parsing. Resolver health check calls checkResolvable() and surfaces actionable fix instructions. Link integrity check uses engine.getHealth() dead_links count. CLI routing split: doctor dispatched before connectEngine() so filesystem checks always run. Fixes Codex-identified blocker where doctor required DB.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
feat: add adaptive load-aware throttling and fail-improve loop

backoff.ts: System load checking (CPU via os.loadavg, memory via os.freemem), exponential backoff with 20-attempt max guard, active hours multiplier (2x slower during waking hours), concurrent process limit (max 2). Windows-safe: defaults to "proceed" when os.loadavg returns zeros.

fail-improve.ts: Deterministic-first, LLM-fallback pattern with JSONL failure logging. Cascade failure handling: when both paths fail, throws LLM error and logs both. Log rotation at 1000 entries. Call count tracking for deterministic hit rate metrics. Auto-generates test cases from successful LLM fallbacks.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
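A minimal sketch of the backoff behavior the message describes (load check, exponential delay with a 20-attempt cap, 2x active-hours multiplier). The function names and thresholds are assumptions; the real backoff.ts API is not shown in this PR text:

```typescript
import * as os from "node:os";

// Hypothetical load gate: proceed when 1-minute load is below core count.
// Windows reports zeros for loadavg, so default to proceeding there.
export function shouldProceed(): boolean {
  const [load1] = os.loadavg();
  if (load1 === 0) return true;
  return load1 < os.cpus().length;
}

// Exponential backoff with the 20-attempt max guard and the
// "2x slower during waking hours" multiplier from the commit text.
export function backoffDelayMs(attempt: number, activeHours: boolean): number {
  const capped = Math.min(attempt, 20);   // max guard
  const base = 1000 * 2 ** capped;        // exponential growth
  return activeHours ? base * 2 : base;   // active hours multiplier
}
```

A caller would check shouldProceed() before each batch item and sleep backoffDelayMs(attempt, isAwake) when it returns false.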
feat: add transcription service and enrichment-as-a-service

transcription.ts: Groq Whisper (default) with OpenAI fallback. Files >25MB segmented via ffmpeg. Provider auto-detection from env vars. Clear error messages for missing API keys and unsupported formats.

enrichment-service.ts: Global enrichment service callable from any ingest pathway. Entity slug generation (people/jane-doe, companies/acme-corp), mention counting via searchKeyword, tier auto-escalation (Tier 3→2→1 based on mention frequency and source diversity), batch enrichment with backoff throttling, regex-based entity extraction from text.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
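The slug convention above (people/jane-doe, companies/acme-corp) can be sketched as follows; the normalization rules are an assumption, and enrichment-service.ts may slugify differently:

```typescript
// Hypothetical slug generator matching the convention in the commit text.
export function entitySlug(kind: "people" | "companies", name: string): string {
  const slug = name
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse spaces/punctuation runs to one dash
    .replace(/^-|-$/g, "");      // trim leading/trailing dashes
  return `${kind}/${slug}`;
}
```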
feat: add data-research skill with recipe system, extraction, dedup, tracker

New skill: data-research — one parameterized pipeline for any email-to-structured-data workflow (investor updates, donations, company metrics). 7-phase pipeline: define recipe, search, classify, extract (with extraction integrity rule), archive, deduplicate, update tracker.

data-research.ts: Recipe validation, MRR/ARR/runway/headcount regex extraction (battle-tested patterns), dedup with configurable tolerance, markdown tracker parsing/appending, quarterly/monthly date windowing, 6-phase HTML email stripping with 500KB ReDoS cap.

Registers data-research in manifest.json (25th skill) and RESOLVER.md. Fixes backoff test robustness for high-load systems.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
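As an illustration of the kind of regex extraction described, here is a toy MRR extractor. The actual battle-tested patterns in data-research.ts are not shown in this PR, so this pattern is an assumption, not the shipped one:

```typescript
// Hypothetical MRR extractor: finds "MRR ... $42k" style phrases and
// normalizes k/m suffixes to absolute numbers. Returns null if no match.
export function extractMrr(text: string): number | null {
  const m = text.match(/MRR[^$\d]{0,20}\$?([\d,]+(?:\.\d+)?)\s*([kKmM])?/);
  if (!m) return null;
  let value = parseFloat(m[1].replace(/,/g, ""));
  if (m[2]?.toLowerCase() === "k") value *= 1_000;
  if (m[2]?.toLowerCase() === "m") value *= 1_000_000;
  return value;
}
```

The "extraction integrity rule" in the pipeline would then reject values the source text does not literally contain.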
docs: update project documentation for v0.10.0 infrastructure additions

CLAUDE.md: added 6 new core files (check-resolvable, backoff, fail-improve, transcription, enrichment-service, data-research), 6 new test files, updated skill count to 25, test file count to 34.
README.md: updated skill count to 25, added data-research to skills table.
CHANGELOG.md: added Infrastructure section documenting resolver validation, doctor expansion, adaptive throttling, fail-improve loop, voice transcription, enrichment service, and data-research skill.
TODOS.md: anonymized personal references.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
fix: doctor.ts use ES module imports, harden backoff test

Replace require('fs') with ES module import in doctor.ts for consistency
with the rest of the file. Backoff test made resilient to parallel test
execution leaking module-level state.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…sell

# Conflicts:
#	CHANGELOG.md
#	README.md
#	skills/data-research/SKILL.md
#	skills/reports/SKILL.md
…-slugs

- Move sync to CLI_ONLY so --watch flag reaches runSync() (was routed through operation layer, which only calls performSync single-pass)
- Hide sync_brain from CLI help (MCP still exposes it)
- Fix performFullSync missing sync state persistence (C1)
- Align Postgres dead_links query to match PGLite (count dangling links, not empty-content chunks) (C3)
- Fix doctor recommending nonexistent 'gbrain embed refresh' (C4)
- Refactor doctor outputResults to not call process.exit directly
- Add --slugs flag to embed for targeted page embedding
- Add sync auto-extract + auto-embed after performSync
- Add noExtract to SyncOpts
- Route extract, features, autopilot in CLI_ONLY
- Update help text with new commands

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- gbrain extract <links|timeline|all> — batch extraction of links and timeline entries from brain markdown files. Broad regex for all .md links (C7: filters external URLs). Frontmatter field parsing (company, investors, attendees). Directory-based link type inference. JSONL progress on stderr for agents. Sync integration hooks (extractLinksForSlugs, extractTimelineForSlugs).
- gbrain features [--json] [--auto-fix] — scan brain usage, pitch unused features with the user's own numbers. Priority 1 (data quality): missing embeddings, dead links. Priority 2 (unused features): zero links, zero timeline, low coverage, unconfigured integrations, no sync. Embedded recipe metadata for binary-safe integration detection. Persistence in ~/.gbrain/feature-offers.json. Doctor teaser hook. Upgrade hook.
- gbrain autopilot [--repo] [--interval N] — self-maintaining brain daemon. Pipeline: sync → extract → embed. Health-based adaptive scheduling (brain_score >= 90 doubles interval, < 70 halves it). --install/--uninstall for launchd (macOS) and crontab (Linux). Signal handling. Consecutive error tracking (stops at 5). Logs to ~/.gbrain/autopilot.log.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
After gbrain post-upgrade completes, automatically run gbrain features to show the user what's new and what to fix. Best-effort (doesn't fail the upgrade).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Weighted composite score computed in getHealth() for both Postgres and PGLite:

- embed_coverage: 0.35
- link_density: 0.25
- timeline_coverage: 0.15
- no_orphans: 0.15
- no_dead_links: 0.10

Returns 0 for empty brains. Agents use brain_score as a health gate. Autopilot uses it for adaptive scheduling (>=90 slows down, <70 speeds up).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
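The weighted composite above, written out directly. The weights and the empty-brain and scheduling rules come from the commit text; the function names and input shape are assumptions:

```typescript
// Each metric is a 0–1 ratio; weights sum to 1.0, so the score is 0–100.
export function brainScore(m: {
  embedCoverage: number;
  linkDensity: number;
  timelineCoverage: number;
  noOrphans: number;
  noDeadLinks: number;
  pageCount: number;
}): number {
  if (m.pageCount === 0) return 0; // empty brains score 0
  const score =
    0.35 * m.embedCoverage +
    0.25 * m.linkDensity +
    0.15 * m.timelineCoverage +
    0.15 * m.noOrphans +
    0.10 * m.noDeadLinks;
  return Math.round(score * 100);
}

// Autopilot's adaptive scheduling rule: >=90 slows down, <70 speeds up.
export function nextIntervalMs(current: number, score: number): number {
  if (score >= 90) return current * 2;
  if (score < 70) return Math.max(1, Math.floor(current / 2));
  return current;
}
```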
25 tests covering:

- extractMarkdownLinks: relative links, external URL filtering, edge cases
- extractLinksFromFile: slug resolution, frontmatter parsing, directory-based type inference (works_at, deal_for, invested_in)
- extractTimelineFromContent: bullet format, header format with detail, em/en dash handling, empty content
- features: module exports, brain_score calculation weights, CLI routing

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
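A minimal version of the behavior the first group of tests covers (keep relative brain links, drop external URLs). The real extract.ts regex may differ; this is a sketch:

```typescript
// Hypothetical markdown link extractor: collects link targets and
// skips anything with a URL scheme (https://, ftp://, etc.).
export function extractMarkdownLinks(md: string): string[] {
  const links: string[] = [];
  const re = /\[[^\]]*\]\(([^)]+)\)/g;
  for (const m of md.matchAll(re)) {
    const target = m[1];
    if (/^[a-z][a-z0-9+.-]*:\/\//i.test(target)) continue; // external URL
    links.push(target);
  }
  return links;
}
```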
Agent-facing tools are invisible without instruction-layer coverage.

- RESOLVER.md: add routing for extract, features, autopilot
- maintain/SKILL.md: add link graph extraction, timeline extraction, and autopilot check sections

Without these, agents reading skills/ will never discover or run the new commands. This is the #1 DX finding from the devex review.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add extract.ts, features.ts, autopilot.ts to key files. Add extract.test.ts, features.test.ts to test list.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- #3: autopilot extract step was a no-op (imported but never called)
- #6: PGLite orphan_pages query aligned with Postgres (check both inbound + outbound)
- #8: embedPage throws instead of process.exit (was killing sync/autopilot)
- #9: dead-links set auto_fixable=false (needs repo path we may not have)
- #10: JSON auto-fix output was dead code (unreachable !jsonMode check)
- #14: autopilot lock file prevents concurrent instances
- #20: --dir without value no longer crashes extract

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- #1: Crontab install used echo pipe with shell-interpolated values. Now uses a temp file via crontab(1) and single-quote escaping on all interpolated paths. No shell expansion possible.
- #2: OPENAI_API_KEY was baked as plaintext into the launchd plist (readable by any local process, backed up by Time Machine). Now uses a wrapper script (~/.gbrain/autopilot-run.sh) that sources ~/.zshrc at runtime. No secrets in plist or crontab.
- #16: extract.ts used a custom 20-line YAML parser that only handled single-line key:value pairs. Multi-line arrays (attendees lists with - items) were silently ignored. Now uses the project's gray-matter parser via parseMarkdown() from src/core/markdown.ts.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
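The single-quote escaping in fix #1 is the standard POSIX trick: close the quote, emit an escaped quote, reopen. A sketch of what that helper might look like (the actual function in the codebase is not shown here):

```typescript
// Wrap a value in single quotes for safe use in a crontab line.
// Inside single quotes the shell expands nothing, so the only character
// that needs handling is the single quote itself: ' becomes '\''.
export function shQuote(value: string): string {
  return `'${value.replace(/'/g, `'\\''`)}'`;
}
```

With every interpolated path passed through this, a crontab entry like the one the fix describes cannot trigger shell expansion.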
TFITZ57 added a commit to TFITZ57/gbrain that referenced this pull request on Apr 23, 2026:
* feat: GStackBrain — 16 new skills, resolver, conventions, identity layer (v0.10.0) (#120)
* feat: migrate 8 existing skills to conformance format
Add YAML frontmatter (name, version, description, triggers, tools, mutating),
Contract, Anti-Patterns, and Output Format sections to all existing skills.
Rename Workflow to Phases. Ingest becomes thin router delegating to specialized
ingestion skills (Phase 2).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add RESOLVER.md, conventions directory, and output rules
RESOLVER.md is the skill dispatcher modeled on Wintermute's AGENTS.md.
Categorized routing table: Always-on, Brain ops, Ingestion, Thinking,
Operational, Setup, Identity. Conventions directory extracts cross-cutting
rules (quality, brain-first lookup, model routing, test-before-bulk).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* test: add skills conformance and resolver validation tests
skills-conformance.test.ts validates every skill has YAML frontmatter with
required fields, Contract, Anti-Patterns, and Output Format sections, and
manifest.json coverage. resolver.test.ts validates routing table categories,
skill path existence, and manifest-to-resolver coverage. 50 new tests.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add 9 brain skills from Wintermute (Phase 2)
Generalized from Wintermute's battle-tested skills:
- signal-detector: always-on idea+entity capture on every message
- brain-ops: brain-first lookup, read-enrich-write loop, source attribution
- idea-ingest: links/articles/tweets with author people page mandatory
- media-ingest: video/audio/PDF/book with entity extraction (absorbs video/youtube/book)
- meeting-ingestion: transcripts with attendee enrichment chaining
- citation-fixer: audit and fix citation formatting
- repo-architecture: filing rules by primary subject
- skill-creator: create skills with conformance standard + MECE check
- daily-task-manager: task lifecycle with priority levels
All Garry-specific references generalized. Core workflows preserved.
Updated RESOLVER.md and manifest.json.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add operational infrastructure + identity layer (Phase 3)
Operational skills:
- daily-task-prep: morning prep with calendar context and open threads
- cross-modal-review: quality gate via second model with refusal routing
- cron-scheduler: schedule staggering, quiet hours, wake-up override, idempotency
- reports: timestamped reports with keyword routing
- testing: skill validation framework (conformance checks)
- soul-audit: 6-phase interview generating SOUL.md, USER.md, ACCESS_POLICY.md, HEARTBEAT.md
- webhook-transforms: external events to brain signals with dead-letter queue
Identity layer:
- SOUL.md template (agent identity, generated by soul-audit)
- USER.md template (user profile, generated by soul-audit)
- ACCESS_POLICY.md template (4-tier access control)
- HEARTBEAT.md template (operational cadence)
- cross-modal.yaml convention (review pairs, refusal routing chain)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: update CLAUDE.md with 24 skills, RESOLVER.md, conventions, templates
GBrain is now a GStack mod for agent platforms. Updated architecture description,
key files listing (16 new skill files, RESOLVER.md, conventions, templates), skills
section (24 skills organized by resolver categories), and testing section (new
conformance and resolver tests).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add GStack detection + mod status to gbrain init (Phase 4)
After brain initialization, gbrain init now reports:
- Number of skills loaded (from manifest.json)
- GStack detection (checks known host paths, uses gstack-global-discover if available)
- GStack install instructions if not found
- Resolver and soul-audit pointers
Also adds installDefaultTemplates() for SOUL.md/USER.md/ACCESS_POLICY.md/HEARTBEAT.md
deployment, and detectGStack() using gstack-global-discover with fallback to known paths
(DRY: doesn't reimplement GStack's host detection logic).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: v0.10.0 release documentation
- CHANGELOG: 24 skills, signal detector, RESOLVER.md, soul-audit, access control,
conventions, conformance standard, GStack detection in init
- README: updated skill section with 24 skills, resolver, conventions
- TODOS: added runtime MCP access control (P1)
- VERSION: 0.9.2 → 0.10.0
- package.json + manifest.json version bumped
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: add skill table to CHANGELOG v0.10.0
16-row table detailing every new skill, what it does, and why it matters.
Written to sell the upgrade, not document the implementation.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: restore package.json version after merge conflict resolution
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: zero-based README rewrite for GStackBrain v0.10.0
Lead with GStack mod identity. 24 skills table organized by category.
Install block references RESOLVER.md and soul-audit. GBrain+GStack
relationship explained. Removed redundancy (733 -> 406 lines).
All essential content preserved: install, recipes, architecture,
search, commands, engines, voice, knowledge model.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: extract install block to INSTALL_FOR_AGENTS.md, simplify README
The 30-line copy-paste install block becomes one line:
"Retrieve and follow INSTALL_FOR_AGENTS.md"
Benefits: agent always gets latest instructions (no stale copy-paste),
README stays clean, install details live where agents read them.
README now leads with what GBrain does ("gives your agent a brain")
instead of GStack relationship. Removed "requires frontier model" note.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: 3 bugs in init.ts from merge conflict resolution
1. llstatSync typo (merge corruption) → lstatSync
2. __dirname undefined in ESM module → fileURLToPath polyfill
3. require('fs') in ESM → use imported readFileSync
All three would crash gbrain init at runtime. Caught by /review.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add checkResolvable shared core function for resolver validation
Shared function at src/core/check-resolvable.ts validates that all skills
are reachable from RESOLVER.md, detects MECE overlaps (with whitelist for
always-on/router skills), finds gaps in frontmatter triggers, and scans
for DRY violations. Returns structured ResolvableIssue objects with
machine-parseable fix objects alongside human-readable action strings.
Three call sites: bun test, gbrain doctor, skill-creator skill.
Cleans up test/resolver.test.ts: removes stale 9-line skip list, imports
from production check-resolvable.ts instead of reimplementing parsing.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: expand doctor with resolver validation, filesystem-first architecture
Doctor now runs filesystem checks (resolver health, skill conformance) before
connecting to DB. New --fast flag skips DB checks. Falls back to filesystem-only
when DB is unavailable. Adds schema_version: 2 to JSON output, composite health
score (0-100), and structured issues array with action strings for agent parsing.
Resolver health check calls checkResolvable() and surfaces actionable fix
instructions. Link integrity check uses engine.getHealth() dead_links count.
CLI routing split: doctor dispatched before connectEngine() so filesystem
checks always run. Fixes Codex-identified blocker where doctor required DB.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add adaptive load-aware throttling and fail-improve loop
backoff.ts: System load checking (CPU via os.loadavg, memory via os.freemem),
exponential backoff with 20-attempt max guard, active hours multiplier (2x
slower during waking hours), concurrent process limit (max 2). Windows-safe:
defaults to "proceed" when os.loadavg returns zeros.
fail-improve.ts: Deterministic-first, LLM-fallback pattern with JSONL failure
logging. Cascade failure handling: when both paths fail, throws LLM error and
logs both. Log rotation at 1000 entries. Call count tracking for deterministic
hit rate metrics. Auto-generates test cases from successful LLM fallbacks.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add transcription service and enrichment-as-a-service
transcription.ts: Groq Whisper (default) with OpenAI fallback. Files >25MB
segmented via ffmpeg. Provider auto-detection from env vars. Clear error
messages for missing API keys and unsupported formats.
enrichment-service.ts: Global enrichment service callable from any ingest
pathway. Entity slug generation (people/jane-doe, companies/acme-corp),
mention counting via searchKeyword, tier auto-escalation (Tier 3→2→1 based
on mention frequency and source diversity), batch enrichment with backoff
throttling, regex-based entity extraction from text.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add data-research skill with recipe system, extraction, dedup, tracker
New skill: data-research — one parameterized pipeline for any email-to-
structured-data workflow (investor updates, donations, company metrics).
7-phase pipeline: define recipe, search, classify, extract (with extraction
integrity rule), archive, deduplicate, update tracker.
data-research.ts: Recipe validation, MRR/ARR/runway/headcount regex
extraction (battle-tested patterns), dedup with configurable tolerance,
markdown tracker parsing/appending, quarterly/monthly date windowing,
6-phase HTML email stripping with 500KB ReDoS cap.
Registers data-research in manifest.json (25th skill) and RESOLVER.md.
Fixes backoff test robustness for high-load systems.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: update project documentation for v0.10.0 infrastructure additions
CLAUDE.md: added 6 new core files (check-resolvable, backoff, fail-improve,
transcription, enrichment-service, data-research), 6 new test files, updated
skill count to 25, test file count to 34.
README.md: updated skill count to 25, added data-research to skills table.
CHANGELOG.md: added Infrastructure section documenting resolver validation,
doctor expansion, adaptive throttling, fail-improve loop, voice transcription,
enrichment service, and data-research skill.
TODOS.md: anonymized personal references.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: doctor.ts use ES module imports, harden backoff test
Replace require('fs') with ES module import in doctor.ts for consistency
with the rest of the file. Backoff test made resilient to parallel test
execution leaking module-level state.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: README rewrite with production brain stats, sample output, new infrastructure
Lead with the flex: 17,888 pages, 4,383 people, 723 companies, 526 meeting
transcripts built in 12 days. Show sample query output so readers see what
they'll get. Document self-improving infrastructure (tier auto-escalation,
fail-improve loop, doctor trajectory). Add data-research recipes to Getting
Data In. Update commands section with doctor --fix, transcribe, research
init/list. Fix stale "24" references to "25".
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: README lead with YC President origin and production agent deployments
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: README lead with skill philosophy and link to Thin Harness Fat Skills
Skills section now explains: skill files are code, they encode entire
workflows, they call deterministic TypeScript for the parts that shouldn't
be LLM judgment. Links to the tweet and the architecture essay.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: link GStack repo, add 70K stars and 30K daily users
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: remove meeting transcript count from README (sensitive)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: README lead with YC President origin and production agent deployments
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: rename political-donations recipe to expense-tracker (sensitivity)
Renamed the built-in data-research recipe from political-donations to
expense-tracker across README, CHANGELOG, SKILL.md, and reports routing.
Same extraction patterns (amounts, dates, recipients), neutral framing.
Also renamed social-radar keyword route to social-mentions.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: sync pipeline, extract, features, autopilot (v0.10.1) (#129)
* feat: migrate 8 existing skills to conformance format
Add YAML frontmatter (name, version, description, triggers, tools, mutating),
Contract, Anti-Patterns, and Output Format sections to all existing skills.
Rename Workflow to Phases. Ingest becomes thin router delegating to specialized
ingestion skills (Phase 2).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add RESOLVER.md, conventions directory, and output rules
RESOLVER.md is the skill dispatcher modeled on Wintermute's AGENTS.md.
Categorized routing table: Always-on, Brain ops, Ingestion, Thinking,
Operational, Setup, Identity. Conventions directory extracts cross-cutting
rules (quality, brain-first lookup, model routing, test-before-bulk).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* test: add skills conformance and resolver validation tests
skills-conformance.test.ts validates every skill has YAML frontmatter with
required fields, Contract, Anti-Patterns, and Output Format sections, and
manifest.json coverage. resolver.test.ts validates routing table categories,
skill path existence, and manifest-to-resolver coverage. 50 new tests.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add 9 brain skills from Wintermute (Phase 2)
Generalized from Wintermute's battle-tested skills:
- signal-detector: always-on idea+entity capture on every message
- brain-ops: brain-first lookup, read-enrich-write loop, source attribution
- idea-ingest: links/articles/tweets with author people page mandatory
- media-ingest: video/audio/PDF/book with entity extraction (absorbs video/youtube/book)
- meeting-ingestion: transcripts with attendee enrichment chaining
- citation-fixer: audit and fix citation formatting
- repo-architecture: filing rules by primary subject
- skill-creator: create skills with conformance standard + MECE check
- daily-task-manager: task lifecycle with priority levels
All Garry-specific references generalized. Core workflows preserved.
Updated RESOLVER.md and manifest.json.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add operational infrastructure + identity layer (Phase 3)
Operational skills:
- daily-task-prep: morning prep with calendar context and open threads
- cross-modal-review: quality gate via second model with refusal routing
- cron-scheduler: schedule staggering, quiet hours, wake-up override, idempotency
- reports: timestamped reports with keyword routing
- testing: skill validation framework (conformance checks)
- soul-audit: 6-phase interview generating SOUL.md, USER.md, ACCESS_POLICY.md, HEARTBEAT.md
- webhook-transforms: external events to brain signals with dead-letter queue
Identity layer:
- SOUL.md template (agent identity, generated by soul-audit)
- USER.md template (user profile, generated by soul-audit)
- ACCESS_POLICY.md template (4-tier access control)
- HEARTBEAT.md template (operational cadence)
- cross-modal.yaml convention (review pairs, refusal routing chain)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: update CLAUDE.md with 24 skills, RESOLVER.md, conventions, templates
GBrain is now a GStack mod for agent platforms. Updated architecture description,
key files listing (16 new skill files, RESOLVER.md, conventions, templates), skills
section (24 skills organized by resolver categories), and testing section (new
conformance and resolver tests).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add GStack detection + mod status to gbrain init (Phase 4)
After brain initialization, gbrain init now reports:
- Number of skills loaded (from manifest.json)
- GStack detection (checks known host paths, uses gstack-global-discover if available)
- GStack install instructions if not found
- Resolver and soul-audit pointers
Also adds installDefaultTemplates() for SOUL.md/USER.md/ACCESS_POLICY.md/HEARTBEAT.md
deployment, and detectGStack() using gstack-global-discover with fallback to known paths
(DRY: doesn't reimplement GStack's host detection logic).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: v0.10.0 release documentation
- CHANGELOG: 24 skills, signal detector, RESOLVER.md, soul-audit, access control,
conventions, conformance standard, GStack detection in init
- README: updated skill section with 24 skills, resolver, conventions
- TODOS: added runtime MCP access control (P1)
- VERSION: 0.9.2 → 0.10.0
- package.json + manifest.json version bumped
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: add skill table to CHANGELOG v0.10.0
16-row table detailing every new skill, what it does, and why it matters.
Written to sell the upgrade, not document the implementation.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: restore package.json version after merge conflict resolution
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: zero-based README rewrite for GStackBrain v0.10.0
Lead with GStack mod identity. 24 skills table organized by category.
Install block references RESOLVER.md and soul-audit. GBrain+GStack
relationship explained. Removed redundancy (733 → 406 lines).
All essential content preserved: install, recipes, architecture,
search, commands, engines, voice, knowledge model.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: extract install block to INSTALL_FOR_AGENTS.md, simplify README
The 30-line copy-paste install block becomes one line:
"Retrieve and follow INSTALL_FOR_AGENTS.md"
Benefits: agent always gets latest instructions (no stale copy-paste),
README stays clean, install details live where agents read them.
README now leads with what GBrain does ("gives your agent a brain")
instead of GStack relationship. Removed "requires frontier model" note.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: 3 bugs in init.ts from merge conflict resolution
1. llstatSync typo (merge corruption) → lstatSync
2. __dirname undefined in ESM module → fileURLToPath polyfill
3. require('fs') in ESM → use imported readFileSync
All three would crash gbrain init at runtime. Caught by /review.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add checkResolvable shared core function for resolver validation
Shared function at src/core/check-resolvable.ts validates that all skills
are reachable from RESOLVER.md, detects MECE overlaps (with whitelist for
always-on/router skills), finds gaps in frontmatter triggers, and scans
for DRY violations. Returns structured ResolvableIssue objects with
machine-parseable fix objects alongside human-readable action strings.
Three call sites: bun test, gbrain doctor, skill-creator skill.
Cleans up test/resolver.test.ts: removes stale 9-line skip list, imports
from production check-resolvable.ts instead of reimplementing parsing.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: expand doctor with resolver validation, filesystem-first architecture
Doctor now runs filesystem checks (resolver health, skill conformance) before
connecting to DB. New --fast flag skips DB checks. Falls back to filesystem-only
when DB is unavailable. Adds schema_version: 2 to JSON output, composite health
score (0-100), and structured issues array with action strings for agent parsing.
Resolver health check calls checkResolvable() and surfaces actionable fix
instructions. Link integrity check uses engine.getHealth() dead_links count.
CLI routing split: doctor dispatched before connectEngine() so filesystem
checks always run. Fixes Codex-identified blocker where doctor required DB.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add adaptive load-aware throttling and fail-improve loop
backoff.ts: System load checking (CPU via os.loadavg, memory via os.freemem),
exponential backoff with 20-attempt max guard, active hours multiplier (2x
slower during waking hours), concurrent process limit (max 2). Windows-safe:
defaults to "proceed" when os.loadavg returns zeros.
fail-improve.ts: Deterministic-first, LLM-fallback pattern with JSONL failure
logging. Cascade failure handling: when both paths fail, throws LLM error and
logs both. Log rotation at 1000 entries. Call count tracking for deterministic
hit rate metrics. Auto-generates test cases from successful LLM fallbacks.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
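A minimal sketch of the load check and backoff formula described above (illustrative names, not the actual backoff.ts exports; the 0.8 load threshold and 1 s base delay are assumptions):

```typescript
import * as os from "os";

const MAX_ATTEMPTS = 20; // hard guard from the commit message

// Proceed only when 1-minute load per CPU is under the threshold.
// Windows-safe: os.loadavg() returns zeros there, so default to "proceed".
function shouldProceed(maxLoadPerCpu = 0.8): boolean {
  const [load1] = os.loadavg();
  if (load1 === 0) return true;
  return load1 / os.cpus().length < maxLoadPerCpu;
}

// Exponential backoff with the 2x active-hours multiplier.
function backoffMs(attempt: number, baseMs = 1000, activeHours = false): number {
  const capped = Math.min(attempt, MAX_ATTEMPTS);
  const delay = baseMs * 2 ** capped;
  return activeHours ? delay * 2 : delay;
}
```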
* feat: add transcription service and enrichment-as-a-service
transcription.ts: Groq Whisper (default) with OpenAI fallback. Files >25MB
segmented via ffmpeg. Provider auto-detection from env vars. Clear error
messages for missing API keys and unsupported formats.
enrichment-service.ts: Global enrichment service callable from any ingest
pathway. Entity slug generation (people/jane-doe, companies/acme-corp),
mention counting via searchKeyword, tier auto-escalation (Tier 3→2→1 based
on mention frequency and source diversity), batch enrichment with backoff
throttling, regex-based entity extraction from text.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add data-research skill with recipe system, extraction, dedup, tracker
New skill: data-research — one parameterized pipeline for any email-to-
structured-data workflow (investor updates, donations, company metrics).
7-phase pipeline: define recipe, search, classify, extract (with extraction
integrity rule), archive, deduplicate, update tracker.
data-research.ts: Recipe validation, MRR/ARR/runway/headcount regex
extraction (battle-tested patterns), dedup with configurable tolerance,
markdown tracker parsing/appending, quarterly/monthly date windowing,
6-phase HTML email stripping with 500KB ReDoS cap.
Registers data-research in manifest.json (25th skill) and RESOLVER.md.
Fixes backoff test robustness for high-load systems.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: update project documentation for v0.10.0 infrastructure additions
CLAUDE.md: added 6 new core files (check-resolvable, backoff, fail-improve,
transcription, enrichment-service, data-research), 6 new test files, updated
skill count to 25, test file count to 34.
README.md: updated skill count to 25, added data-research to skills table.
CHANGELOG.md: added Infrastructure section documenting resolver validation,
doctor expansion, adaptive throttling, fail-improve loop, voice transcription,
enrichment service, and data-research skill.
TODOS.md: anonymized personal references.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: doctor.ts use ES module imports, harden backoff test
Replace require('fs') with ES module import in doctor.ts for consistency
with the rest of the file. Backoff test made resilient to parallel test
execution leaking module-level state.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: sync --watch routing, dead_links parity, doctor command, embed --slugs
- Move sync to CLI_ONLY so the --watch flag reaches runSync() (it was routed
  through the operation layer, which only runs performSync as a single pass)
- Hide sync_brain from CLI help (MCP still exposes it)
- Fix performFullSync missing sync state persistence (C1)
- Align Postgres dead_links query to match PGLite (count dangling links, not
empty-content chunks) (C3)
- Fix doctor recommending nonexistent 'gbrain embed refresh' (C4)
- Refactor doctor outputResults to not call process.exit directly
- Add --slugs flag to embed for targeted page embedding
- Add sync auto-extract + auto-embed after performSync
- Add noExtract to SyncOpts
- Route extract, features, autopilot in CLI_ONLY
- Update help text with new commands
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: extract, features, and autopilot commands
- gbrain extract <links|timeline|all> — batch extraction of links and timeline
entries from brain markdown files. Broad regex for all .md links (C7: filters
external URLs). Frontmatter field parsing (company, investors, attendees).
Directory-based link type inference. JSONL progress on stderr for agents.
Sync integration hooks (extractLinksForSlugs, extractTimelineForSlugs).
- gbrain features [--json] [--auto-fix] — scan brain usage, pitch unused features
with the user's own numbers. Priority 1 (data quality): missing embeddings,
dead links. Priority 2 (unused features): zero links, zero timeline, low
coverage, unconfigured integrations, no sync. Embedded recipe metadata for
binary-safe integration detection. Persistence in ~/.gbrain/feature-offers.json.
Doctor teaser hook. Upgrade hook.
- gbrain autopilot [--repo] [--interval N] — self-maintaining brain daemon.
Pipeline: sync → extract → embed. Health-based adaptive scheduling
(brain_score >= 90 doubles interval, < 70 halves it). --install/--uninstall
for launchd (macOS) and crontab (Linux). Signal handling. Consecutive error
tracking (stops at 5). Log to ~/.gbrain/autopilot.log.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
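The health-based adaptive scheduling could be sketched like this (only the >= 90 and < 70 thresholds come from the commit; the function shape is illustrative):

```typescript
// Adaptive scheduling: healthy brains poll less often, unhealthy brains
// poll more often. Thresholds per the commit message.
function nextIntervalMs(brainScore: number, baseMs: number): number {
  if (brainScore >= 90) return baseMs * 2; // healthy: double the interval
  if (brainScore < 70) return baseMs / 2;  // unhealthy: halve it
  return baseMs;
}
```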
* feat: hook features scan into post-upgrade flow
After gbrain post-upgrade completes, automatically run gbrain features to show
the user what's new and what to fix. Best-effort (doesn't fail the upgrade).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: brain_score (0-100) in BrainHealth
Weighted composite score computed in getHealth() for both Postgres and PGLite:
embed_coverage: 0.35, link_density: 0.25, timeline_coverage: 0.15,
no_orphans: 0.15, no_dead_links: 0.10
Returns 0 for empty brains. Agents use brain_score as a health gate.
Autopilot uses it for adaptive scheduling (>=90 slows down, <70 speeds up).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
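The weighted composite can be sketched as follows (the BrainMetrics shape is an assumption; the weights and the empty-brain rule are from the commit):

```typescript
// Assumed metric shape: each component is a 0..1 ratio.
interface BrainMetrics {
  embed_coverage: number;
  link_density: number;
  timeline_coverage: number;
  no_orphans: number;
  no_dead_links: number;
  page_count: number;
}

// Weighted composite brain_score (0-100); empty brains score 0.
function brainScore(m: BrainMetrics): number {
  if (m.page_count === 0) return 0;
  const weighted =
    m.embed_coverage * 0.35 +
    m.link_density * 0.25 +
    m.timeline_coverage * 0.15 +
    m.no_orphans * 0.15 +
    m.no_dead_links * 0.1;
  return Math.round(weighted * 100);
}
```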
* test: extract and features unit tests
25 tests covering:
- extractMarkdownLinks: relative links, external URL filtering, edge cases
- extractLinksFromFile: slug resolution, frontmatter parsing, directory-based
type inference (works_at, deal_for, invested_in)
- extractTimelineFromContent: bullet format, header format with detail,
em/en dash handling, empty content
- features: module exports, brain_score calculation weights, CLI routing
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: instruction layer for extract, features, autopilot
Agent-facing tools are invisible without instruction-layer coverage.
- RESOLVER.md: add routing for extract, features, autopilot
- maintain/SKILL.md: add link graph extraction, timeline extraction,
autopilot check sections
Without these, agents reading skills/ will never discover or run the
new commands. This is the #1 DX finding from the devex review.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* chore: bump version and changelog (v0.10.1)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: sync CLAUDE.md with v0.10.1 additions
Add extract.ts, features.ts, autopilot.ts to key files.
Add extract.test.ts, features.test.ts to test list.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: adversarial review fixes — 7 issues
- #3: autopilot extract step was a no-op (imported but never called)
- #6: PGLite orphan_pages query aligned with Postgres (check both inbound+outbound)
- #8: embedPage throws instead of process.exit (was killing sync/autopilot)
- #9: dead-links set auto_fixable=false (needs repo path we may not have)
- #10: JSON auto-fix output was dead code (unreachable !jsonMode check)
- #14: autopilot lock file prevents concurrent instances
- #20: --dir without value no longer crashes extract
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* security: fix command injection + plaintext API key in daemon install
- #1: Crontab install used echo pipe with shell-interpolated values.
Now uses a temp file via crontab(1) and single-quote escaping on all
interpolated paths. No shell expansion possible.
- #2: OPENAI_API_KEY was baked as plaintext into the launchd plist
(readable by any local process, backed up by Time Machine). Now uses
a wrapper script (~/.gbrain/autopilot-run.sh) that sources ~/.zshrc
at runtime. No secrets in plist or crontab.
- #16: extract.ts used a custom 20-line YAML parser that only handled
single-line key:value pairs. Multi-line arrays (attendees list with
- items) were silently ignored. Now uses the project's gray-matter
parser via parseMarkdown() from src/core/markdown.ts.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* security: fix wave 3 — 9 vulns (file_upload, SSRF, recipe trust, prompt injection) (#174)
* feat(engine): add cap parameter to clampSearchLimit (H6)
clampSearchLimit(limit, defaultLimit, cap = MAX_SEARCH_LIMIT) — third arg
is a caller-specified cap so operation handlers can enforce limits below
MAX_SEARCH_LIMIT. Backward compatible: existing two-arg callers still cap
at MAX_SEARCH_LIMIT.
This fixes a Codex-caught semantics bug: the prior signature took (limit,
defaultLimit) where the second arg was misread as a cap. clampSearchLimit(x, 20)
was actually allowing values up to 100, not 20.
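The corrected signature behaves roughly like this (MAX_SEARCH_LIMIT = 100 is inferred from the bug description; the clamp-to-1 floor is an assumption):

```typescript
const MAX_SEARCH_LIMIT = 100; // assumed global ceiling

// Third arg lets operation handlers enforce a cap BELOW the global maximum;
// existing two-arg callers keep the old behavior (cap at MAX_SEARCH_LIMIT).
function clampSearchLimit(
  limit: number | undefined,
  defaultLimit: number,
  cap: number = MAX_SEARCH_LIMIT,
): number {
  return Math.max(1, Math.min(limit ?? defaultLimit, cap));
}
```

With this shape, clampSearchLimit(500, 20) still yields 100, while clampSearchLimit(500, 20, 50) yields 50.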
* feat(integrations): SSRF defense + recipe trust boundary (B1, B2, Fix 2, Fix 4, B3, B4)
- B1: split loadAllRecipes into trusted (package-bundled) and untrusted
(cwd/recipes, $GBRAIN_RECIPES_DIR) tiers. Only package-bundled recipes
get embedded=true. Closes the fake trust boundary that let any cwd-local
recipe bypass health-check gates.
- B2: hard-block string health_checks for non-embedded recipes (was previously
only blocked when isUnsafeHealthCheck regex matched, which the cwd recipe
exploit bypassed). Embedded recipes still get the regex defense.
- Fix 2: gate command DSL health_checks on isEmbedded. Non-embedded
recipes cannot spawnSync.
- Fix 4 + B3 + B4: gate http DSL health_checks on isEmbedded; for embedded
recipes, validate URLs via new isInternalUrl() before fetch:
- Scheme allowlist (http/https only): blocks file:, data:, blob:, ftp:, javascript:
- IPv4 range check covering hex/octal/decimal/single-integer bypass forms
- IPv6 loopback ::1 + IPv4-mapped ::ffff: (canonicalized hex hextets handled)
- Metadata hostnames (AWS, GCP, instance-data) blocked
- fetch with redirect: 'manual' + per-hop re-validation up to 3 hops
Original PRs #105-109 by @garagon. Wave 3 collector branch reimplemented
the fixes after Codex outside-voice review found that PRs #106/#108 alone
did not actually gate cwd-local recipes (B1) and that PR #108 missed
redirect-following SSRF (B3) and non-http schemes (B4).
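A heavily simplified sketch of the URL gate; the real isInternalUrl also canonicalizes hex/octal/decimal IPv4 forms and IPv6-mapped addresses, which this toy version omits:

```typescript
// Block anything that is not plain http(s) to a public host.
function isBlockedUrl(raw: string): boolean {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return true; // unparseable: block
  }
  if (url.protocol !== "http:" && url.protocol !== "https:") return true; // file:, data:, etc.
  const host = url.hostname.toLowerCase();
  if (host === "localhost" || host === "[::1]") return true;
  if (/^(127|10|169\.254)\./.test(host)) return true;          // loopback, RFC1918, link-local
  if (/^192\.168\./.test(host)) return true;
  if (/^172\.(1[6-9]|2\d|3[01])\./.test(host)) return true;
  if (host === "metadata.google.internal" || host === "instance-data") return true;
  return false;
}
```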
* feat(file_upload): path/slug/filename validation + remote-caller confinement (Fix 1, B5, H5, M4, Fix 5)
- Fix 1 + B5 + H1: validateUploadPath uses realpathSync + path.relative
to defeat symlink-parent traversal. lstatSync alone (the original PR #105
approach) only catches final-component symlinks; a symlinked parent dir
still followed to /etc/passwd. Now the entire path chain is resolved.
- H5: validatePageSlug uses an allowlist regex (alphanumeric + hyphens,
  slash-separated segments), which implicitly closes URL-encoded traversal
  (%2e%2e%2f), Unicode lookalikes, backslashes, and control chars.
- M4: validateFilename allowlist regex. Rejects control chars, backslash,
RTL override (\u202E), leading dot/dash. Filename flows into storage_path
so this matters for every storage backend.
- Fix 5: clamp list_pages and get_ingest_log limits at the operation layer
via new clampSearchLimit cap parameter (list_pages caps at 100,
get_ingest_log at 50). Internal bulk commands bypass the operation
layer and remain uncapped.
- New OperationContext.remote flag distinguishes trusted local CLI from
untrusted MCP callers. file_upload uses strict cwd confinement when
remote=true (default), loose mode when remote=false (CLI). MCP stdio
server sets remote=true; cli.ts and handleToolCall (gbrain call) set
remote=false.
Original PR #105 by @garagon. Issue #139 reported by @Hybirdss.
* feat(search): query sanitization + structural prompt boundary (Fix 3, M1, M2, M3)
- M1: restructure callHaikuForExpansion to use a system message that declares
the user query as untrusted data, plus an XML-tagged <user_query> boundary
in the user message. Layered defense with the existing tool_choice constraint
(3 layers vs 1).
- Fix 3 (regex sanitizer, defense-in-depth): sanitizeQueryForPrompt strips
triple-backtick code fences, XML/HTML tags, leading injection prefixes,
and caps at 500 chars. Original query is still used for downstream search;
only the LLM-facing copy is sanitized.
- M2: sanitizeExpansionOutput validates the model's alternative_queries array
before it flows into search. Strips control chars, caps length, dedupes
case-insensitively, drops empty/non-string items, caps to 2 items.
- M3: console.warn on stripped content NEVER logs the query text — privacy-safe
debug signal only.
Original PR #107 by @garagon. M1/M2/M3 are wave 3 hardening per Codex review.
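A sketch of the sanitizer's shape; the strip targets and the 500-char cap follow the commit message, the exact regexes are illustrative:

```typescript
// Strip LLM-facing hazards from the query copy; the original query still
// goes to downstream search untouched.
function sanitizeQueryForPrompt(query: string): string {
  return query
    .replace(/```[\s\S]*?```/g, " ")    // triple-backtick code fences
    .replace(/<[^>]*>/g, " ")           // XML/HTML tags
    .replace(/^\s*(ignore|disregard|forget)[^.\n]*[.\n]/i, " ") // injection prefixes
    .replace(/\s+/g, " ")
    .trim()
    .slice(0, 500);                     // hard length cap
}
```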
* chore: bump version and changelog (v0.10.2)
Security wave 3: 9 vulnerabilities closed across file_upload, recipe trust
boundary, SSRF defense, prompt injection, and limit clamping. See CHANGELOG
for full details.
Contributors:
- @garagon (PRs #105-109)
- @Hybirdss (Issue #139)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs: sync documentation with v0.10.2 security wave 3
- CLAUDE.md: document OperationContext.remote, new security helpers
(validateUploadPath, validatePageSlug, validateFilename, isInternalUrl,
parseOctet, hostnameToOctets, isPrivateIpv4, getRecipeDirs,
sanitizeQueryForPrompt, sanitizeExpansionOutput), updated clampSearchLimit
signature, recipe trust boundary, new test files
- docs/integrations/README.md: replace string-form health_check example
with typed DSL (string checks now hard-block for non-embedded recipes);
add recipe trust boundary subsection
- docs/mcp/DEPLOY.md: document file_upload remote-caller cwd confinement,
symlink rejection, slug/filename allowlists
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* Minions v7 + v0.11.1 canonical migration + skillify (#130)
* feat: add minion_jobs schema, migration v5, and executeRaw to BrainEngine
Foundation for the Minions job queue system. Adds:
- minion_jobs table (20 columns) with CHECK constraints, partial indexes,
and RLS. Inspired by BullMQ's job model, adapted for Postgres.
- Migration v5 creates the table for existing databases.
- executeRaw<T>() method on BrainEngine interface for raw SQL access,
needed by the Minions module for claim queries (FOR UPDATE SKIP LOCKED),
token-fenced writes, and atomic stall detection.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: Minions job queue — queue, worker, backoff, types
BullMQ-inspired Postgres-native job queue built into GBrain. No Redis.
No external dependencies. Postgres transactions replace Lua scripts.
- MinionQueue: submit, claim (FOR UPDATE SKIP LOCKED), complete/fail
(token-fenced), atomic stall detection (CTE), delayed promotion,
parent-child resolution, prune, stats
- MinionWorker: handler registry, lock renewal, graceful SIGTERM,
exponential backoff with jitter, UnrecoverableError bypass
- MinionJobContext: updateProgress(), log(), isActive() for handlers
- 8-state machine: waiting/active/completed/failed/delayed/dead/
cancelled/waiting-children
Patterns stolen from: BullMQ (lock tokens, stall detection, flows),
Sidekiq (dead set, backoff formula), Inngest (checkpoint/resume).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
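The claim pattern named above is the classic single-row SKIP LOCKED update; a sketch of the SQL (column names follow the commit messages, the lock interval and ordering details are illustrative):

```typescript
// One worker wins the row; concurrent claimers skip it instead of blocking,
// so two workers can never double-claim the same job.
const CLAIM_SQL = `
  UPDATE minion_jobs
     SET status = 'active',
         lock_token = $1,
         lock_until = now() + interval '60 seconds'
   WHERE id = (
     SELECT id FROM minion_jobs
      WHERE status = 'waiting'
      ORDER BY priority DESC, id
      LIMIT 1
      FOR UPDATE SKIP LOCKED
   )
  RETURNING *`;
```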
* test: 43 tests for Minions job queue
Full coverage of the Minions module against PGLite in-memory:
- Queue CRUD (9): submit, get, list, remove, cancel, retry, duplicate
- State machine (6): waiting→active→completed/failed, retry→delayed→waiting
- Backoff (4): exponential, fixed, jitter range, attempts_made=0 edge
- Stall detection (3): detect stalled, counter increment, max→dead
- Dependencies (5): parent waits, fail_parent, continue, remove_dep, orphan
- Worker lifecycle (5): register, start-without-handlers, claim+execute,
non-Error throws, UnrecoverableError bypass
- Lock management (3): renewal, token mismatch, claim sets lock fields
- Claim mechanics (4): empty queue, priority ordering, name filtering,
delayed promotion timing
- Cancel & retry (2): cancel active, retry dead
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: Minions CLI commands and MCP operations
Wire Minions into the GBrain CLI and MCP layer:
CLI (gbrain jobs):
submit <name> [--params JSON] [--follow] [--dry-run]
list [--status S] [--queue Q] [--limit N]
get <id> — detailed view with attempt history
cancel/retry/delete <id>
prune [--older-than 30d]
stats — job health dashboard
work [--queue Q] [--concurrency N] — Postgres-only worker daemon
6 MCP operations (contract-first, auto-exposed via MCP server):
submit_job, get_job, list_jobs, cancel_job, retry_job, get_job_progress
Built-in handlers: sync, embed, lint, import. --follow runs inline.
Worker daemon blocked on PGLite (exclusive file lock).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: update project documentation for Minions job queue
CLAUDE.md: added Minions files to key files, updated operation count (36),
BrainEngine method count (38), test file count (45), added jobs CLI commands.
CHANGELOG.md: added Minions entry to v0.10.0 (background jobs, retry, stall
detection, worker daemon).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: Minions v2 — agent orchestration primitives (pause/resume, inbox, tokens, replay)
Adds the foundation for Minions as universal agent orchestration infrastructure.
GBrain's Postgres-native job queue now supports durable, observable, steerable
background agents. The OpenClaw plugin (separate repo) will consume these via
library import, not MCP, for zero-latency local integration.
## New capabilities
- **Concurrent worker** — Promise pool replaces sequential loop. Per-job
AbortController for cooperative cancellation. Graceful shutdown waits for
all in-flight jobs via Promise.allSettled.
- **Pause/resume** — pauseJob clears the lock and fires AbortSignal on active
jobs. Handlers check ctx.signal.aborted and exit cleanly. resumeJob returns
paused jobs to waiting. Catch block skips failJob when signal.aborted.
- **Inbox (separate table)** — minion_inbox table for sidechannel messages.
sendMessage with sender validation (parent job or admin). readInbox is
token-fenced and marks read_at atomically. Separate table avoids row bloat
from rewriting JSONB on every send.
- **Token accounting** — tokens_input/tokens_output/tokens_cache_read columns.
updateTokens accumulates; completeJob rolls child tokens up to parent.
USD cost computed at read time (no cost_usd column — pricing too volatile).
- **Job replay** — replayJob clones a terminal job with optional data overrides.
New job, fresh attempts, no parent link.
## Handler contract additions
MinionJobContext now provides:
- `signal: AbortSignal` — cooperative cancellation
- `updateTokens(tokens)` — accumulate token usage
- `readInbox()` — check for sidechannel messages
- `log()` — now accepts string or TranscriptEntry
## MCP operations added
pause_job, resume_job, replay_job, send_job_message — all auto-generate CLI
commands and MCP server endpoints.
## Library exports
package.json exports map adds ./minions and ./engine-factory paths so plugins
can `import { MinionQueue } from 'gbrain/minions'` for direct library use.
## Instruction layer (the teaching)
- skills/minion-orchestrator/SKILL.md — when/how to use Minions, decision
matrix, lifecycle management, anti-patterns
- skills/conventions/subagent-routing.md — cross-cutting rule: all background
work goes through Minions
- RESOLVER.md — trigger entries for agent orchestration
- manifest.json — registered
## Schema migration v6
Additive: 3 token columns, paused status, minion_inbox table with unread index.
Full Postgres + PGLite support. No backfill needed.
## Tests
65 tests (was 43): pause/resume (5), inbox (6), tokens (4), replay (4),
concurrent worker context (3), plus all existing coverage.
## What's NOT in this commit
Deferred to follow-up PRs:
- LISTEN/NOTIFY subscribe (needs real Postgres E2E)
- Resource governor (depends on concurrent worker stress testing)
- Routing eval harness (needs API keys + benchmark data)
- OpenClaw plugin (separate @gbrain/openclaw-minions-plugin repo)
See docs/designs/MINIONS_AGENT_ORCHESTRATION.md for full CEO-approved design.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(minions): migration v7 — agent_parity_layer schema
Adds columns on minion_jobs (depth, max_children, timeout_ms, timeout_at,
remove_on_complete, remove_on_fail, idempotency_key) plus the new
minion_attachments table. Three partial indexes for bounded scans:
idx_minion_jobs_timeout, idx_minion_jobs_parent_status, and
uniq_minion_jobs_idempotency. Check constraints enforce non-negative depth
and positive child cap / timeout.
Additive migration — existing installs pick it up via ensureSchema on next
use. No user action required.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* feat(minions): extend types for v7 parity layer
Extends MinionJob with depth/max_children/timeout_ms/timeout_at/
remove_on_complete/remove_on_fail/idempotency_key. Extends MinionJobInput
with the same options plus max_spawn_depth override. Adds MinionQueueOpts
(maxSpawnDepth default 5, maxAttachmentBytes default 5 MiB). Adds
AttachmentInput/Attachment shapes and ChildDoneMessage in the InboxMessage
union. rowToMinionJob updated to pick up the new columns.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* feat(minions): attachments validator
New module validateAttachment() gates every attachment write. Rejects empty
filenames, path traversal (.., /, \), null bytes, oversized content (5 MiB
default, per-queue override), invalid base64, and implausible content_type
headers. Returns normalized { filename, content_type, content (Buffer),
sha256, size } on success.
The DB also enforces UNIQUE (job_id, filename) as defense-in-depth for
concurrent addAttachment races — JS-only checks are not sufficient.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
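The validator's shape might look like this (error messages and the filename regex are assumptions; the 5 MiB default and the rejection list follow the commit):

```typescript
import { createHash } from "crypto";

// Gate every attachment write: reject traversal, separators, null bytes,
// empty names, and oversized content; return the normalized shape.
function validateAttachment(
  filename: string,
  base64: string,
  maxBytes = 5 * 1024 * 1024,
) {
  if (!filename || filename.includes("..") || /[/\\\0]/.test(filename)) {
    throw new Error("invalid filename");
  }
  const content = Buffer.from(base64, "base64");
  if (content.length > maxBytes) throw new Error("attachment too large");
  const sha256 = createHash("sha256").update(content).digest("hex");
  return { filename, content, sha256, size: content.length };
}
```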
* feat(minions): queue v7 — depth, child cap, timeouts, cascade, idempotency, child_done
Wraps completeJob and failJob in engine.transaction() so parent hook
invocations (resolveParent, failParent, removeChildDependency) fold into
the same transaction as the child update. A process crash between child
and parent can't strand the parent in waiting-children anymore.
Adds v7 behaviors:
- Depth tracking. add() computes depth = parent.depth + 1 and rejects
past maxSpawnDepth (default 5).
- Per-parent child cap. add() takes SELECT ... FOR UPDATE on the parent,
counts non-terminal children, rejects when count >= max_children.
NULL max_children = no cap.
- Per-job wall-clock timeout. claim() populates timeout_at when
timeout_ms is set. New handleTimeouts() dead-letters expired rows with
error_text='timeout exceeded'. Terminal — no retry.
- Cascade cancel. cancelJob() walks descendants via recursive CTE with
depth-100 runaway cap. Returns the root row. Re-parented descendants
(parent_job_id NULL) are naturally excluded.
- Idempotency. add() uses INSERT ... ON CONFLICT (idempotency_key) DO
NOTHING RETURNING; falls back to SELECT when RETURNING is empty. Same
key always yields the same job id.
- child_done inbox. completeJob inserts {type:'child_done', child_id,
job_name, result} into the parent's inbox in the same transaction as
the token rollup, guarded by EXISTS so terminal/deleted parents skip
without FK violation. New readChildCompletions(parent_id, lock_token,
since?) helper; token-fenced like readInbox.
- removeOnComplete / removeOnFail. Deletes the row after the parent hook
fires, so parent policy sees consistent state.
- Attachment methods. addAttachment validates via validateAttachment
then INSERTs; UNIQUE (job_id, filename) backs the JS dup check.
listAttachments, getAttachment, deleteAttachment round out the API.
Fixes pre-existing inverted status bug: add() now puts children in
waiting/delayed (not waiting-children) and atomically flips the parent
to waiting-children in the same transaction. Tests no longer need
manual UPDATE workarounds.
Two correctness fixes:
- Sibling completion race. Under READ COMMITTED, two grandchildren
completing concurrently each saw the other as still-active in the
pre-commit snapshot and neither flipped the parent. Fixed by taking
SELECT ... FOR UPDATE on the parent row at the start of completeJob
and failJob transactions, serializing siblings on the parent lock.
- JSONB double-encode. postgres.js conn.unsafe(sql, params) auto-
JSON-encodes parameters. Calling JSON.stringify(obj) first stored a
JSON string literal (jsonb_typeof=string) and broke payload->>'key'
queries silently. Removed JSON.stringify from three call sites
(child_done inbox post, updateProgress, sendMessage). PGLite tolerated
both forms so unit tests missed it — real-PG E2E caught it.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
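The INSERT ... ON CONFLICT idempotency pattern described above, sketched against the executeRaw interface added earlier in this PR (its exact signature here is an assumption):

```typescript
// Minimal engine surface, per the executeRaw<T>() method added in this PR.
interface RawEngine {
  executeRaw<T>(sql: string, params: unknown[]): Promise<T[]>;
}

// Same idempotency_key always yields the same job id: ON CONFLICT DO NOTHING
// returns no row on conflict, so fall back to a SELECT.
async function addIdempotent(
  engine: RawEngine,
  name: string,
  key: string,
): Promise<{ id: number }> {
  const inserted = await engine.executeRaw<{ id: number }>(
    `INSERT INTO minion_jobs (name, idempotency_key, status)
     VALUES ($1, $2, 'waiting')
     ON CONFLICT (idempotency_key) DO NOTHING
     RETURNING id`,
    [name, key],
  );
  if (inserted.length > 0) return inserted[0];
  const existing = await engine.executeRaw<{ id: number }>(
    `SELECT id FROM minion_jobs WHERE idempotency_key = $1`,
    [key],
  );
  return existing[0];
}
```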
* feat(minions): worker — timeout safety net + handleTimeouts tick
Worker tick now calls handleStalled() first, then handleTimeouts() — stall
requeue wins over timeout dead-letter when both could fire in the same
cycle. handleTimeouts() guards on lock_until > now() so stalled jobs take
the retryable path.
launchJob schedules a per-job setTimeout(timeout_ms) that fires ctx.signal
as a best-effort handler interrupt. The timer is always cleared in .finally
so process exit isn't delayed by a dangling timer. Handlers that respect
AbortSignal stop cleanly; handlers that ignore it still get dead-lettered
by the DB-side handleTimeouts.
Removed post-completeJob and post-failJob parent-hook calls from the worker
— those are now inside the queue method transactions. Worker becomes
simpler and crash-safer.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* test(minions): 33 new unit tests for v7 parity layer
Covers depth cap, per-parent child cap, timeout dead-letter, cascade
cancel (including the re-parent edge case), removeOnComplete /
removeOnFail, idempotency (single + concurrent), child_done inbox
(posted in txn + survives child removeOnComplete + since cursor),
attachment validation (oversize, path traversal, null byte, duplicates,
base64), AbortSignal firing on pause mid-handler, catch-block skipping
failJob when aborted, worker in-flight bookkeeping, token-rollup guard
when parent already terminal, and setTimeout safety-net cleanup.
Existing tests updated to remove the inverted-status manual UPDATE
workarounds that the add() fix made obsolete.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* test(e2e): Minions v7 concurrency + OpenClaw resilience coverage
minions-concurrency.test.ts spins two MinionWorker instances against the
test Postgres, submits 20 jobs, and asserts zero double-claims (every job
runs exactly once). This is the only test that actually proves FOR UPDATE
SKIP LOCKED under real concurrency — PGLite runs on a single connection
and can't exercise the race.
minions-resilience.test.ts covers the six OpenClaw daily pains:
1. Spawn storm caps enforce under concurrent submit. 2. Agent stall →
handleStalled() requeues; handleTimeouts() skips (lock_until guard).
3. Forgotten dispatches recoverable via child_done inbox. 4. Cascade
cancel stops grandchildren mid-flight. 5. Deep tree fan-in
(parent → 3 children → 2 grandchildren each) completes with the full
inbox chain. 6. Parent crash/recovery resumes from persisted state.
helpers.ts extends ALL_TABLES with minion_attachments, minion_inbox, and
minion_jobs (FK dependents first) so E2E teardown doesn't leak rows
between runs.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
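For context on what the concurrency test is proving, the claim query guarded by FOR UPDATE SKIP LOCKED looks roughly like this. The table and column names here are assumptions drawn from the commit messages, not the actual schema; the point is the shape: locked rows are skipped rather than waited on, so two workers can never claim the same job.

```typescript
// Hypothetical shape of the SKIP LOCKED claim query. Under concurrent
// workers, a row already locked by one claimer is simply skipped by the
// other, which is why zero double-claims is the expected outcome.
const CLAIM_SQL = `
  UPDATE minion_jobs
  SET status = 'active', lock_until = now() + $2::interval
  WHERE id = (
    SELECT id FROM minion_jobs
    WHERE status = 'queued' AND queue = $1
    ORDER BY id
    FOR UPDATE SKIP LOCKED
    LIMIT 1
  )
  RETURNING id;
`;
```

PGLite runs on a single connection, so this race is only observable against a real multi-connection Postgres, which is exactly what the E2E test exercises.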
* chore: release v0.11.0 — Minions v7 agent orchestration primitives
Bumps VERSION / package.json to 0.11.0. Adds CHANGELOG entry covering
depth tracking, max_children, per-job timeouts, cascade cancel,
idempotency keys, child_done inbox, removeOnComplete/Fail, attachments,
migration v7, plus the two correctness fixes (sibling completion race
and JSONB double-encode).
TODOS.md captures the four v7 follow-ups: per-queue rate limiting,
repeat/cron scheduler, worker event emitter, and waitForChildren
convenience helpers.
1066 unit + 105 E2E = 1171 tests passing.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(minions): unify JSONB inserts, tighten nullish coalescing
Three non-blocker cleanups from post-ship review of v0.11.0:
- queue.ts add() and completeJob() were pre-stringifying with JSON.stringify
  while other call sites pass raw objects with $n::jsonb casts. postgres.js
  double-encodes if you stringify first — this works on PGLite (text→JSONB
  auto-cast) but fails silently on real PG. Unified on raw object + explicit
  $n::jsonb cast.
- queue.ts readChildCompletions: the since clause used sent_at > $2, relying
  on PG's implicit text→TIMESTAMPTZ coercion. An explicit $2::timestamptz
  is safer and clearer.
- types.ts rowToMinionJob: parent_job_id used ||, which coerces 0 to null.
  Harmless today (SERIAL IDs start at 1), but ?? is semantically correct.
All 110 unit tests pass.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
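Both bugs in this commit are easy to reproduce in isolation. The sketch below simulates the driver-side JSON serialization step (postgres.js serializes parameters itself, which is the assumption here) to show what "double-encode" means, and demonstrates the || vs ?? coercion on a zero id:

```typescript
// Simulate the driver serializing a parameter before sending it to PG.
const driverEncode = (v: unknown) => JSON.stringify(v);

const payload = { ok: true };

// Wrong: pre-stringify, then the driver stringifies again. What lands in
// the JSONB column is a JSON *string*, not an object.
const doubled = driverEncode(JSON.stringify(payload));
// Right: hand the driver the raw object; cast with $n::jsonb in the SQL.
const single = driverEncode(payload);

// And the ?? fix: || coerces a falsy-but-valid id.
const parentId: number | null = 0;
const legacy = parentId || null; // null (wrong when the id is 0)
const fixed = parentId ?? null;  // 0
```

Parsing `doubled` yields the string `'{"ok":true}'` rather than an object, which is the silent failure mode on real Postgres.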
* fix(minions): updateProgress missed $1::jsonb cast in unification
Residual from c502b7e — updateProgress was the only remaining JSONB write
without the explicit ::jsonb cast. Not broken (implicit cast works) but
breaks the convention the prior commit unified everywhere else.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* doc: Minions v7 skill count + jobs subcommands (26 skills)
README: bump skill count 25 → 26, add minion-orchestrator row, add
`gbrain jobs` command family block so v0.11.0's headline feature is
actually discoverable from the top-level commands reference.
CLAUDE.md: unit test count 48 → 49 (minions.test.ts expanded), skill
count 25 → 26, add minion-orchestrator to Key files + skills categorization,
expand MinionQueue one-liner to cover v7 primitives (depth/child-cap,
timeouts, idempotency, child_done inbox, removeOnComplete/Fail).
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* feat: Minions adoption UX — smoke test + migration + pain-triggered routing
Teach OpenClaw when to reach for Minions vs native subagents. Ship three
pieces so upgrading from v0.10.x actually lands for real users:
- `gbrain jobs smoke` — one-command health check that submits a `noop` job,
runs a worker, verifies completion, and prints engine-aware guidance
(PGLite installs get the "daemon needs Postgres, use --follow" note).
Fails loudly if the schema is below v7 so the user knows to run `gbrain init`.
- `skills/migrations/v0.11.0.md` — post-upgrade migration file the
auto-update agent reads. Six steps: apply schema, run smoke, ask user
via AskUserQuestion which mode they want (always / pain_triggered / off),
write to `~/.gbrain/preferences.json`, sanity-check handlers, mark done.
Completeness scores on each option so the recommendation is explicit.
- `skills/conventions/subagent-routing.md` rewritten — was a "MUST use
Minions for ALL background work" mandate, now reads preferences.json
on every routing decision and branches on three modes. Mode B
(pain_triggered) is the default: keep subagents until gateway drops
state, parallel > 3, runtime > 5min, or user expresses frustration.
Then pitch the switch in-session with a specific script.
Rename pass: "Minions v7" → "Minions" in README (JOBS block), TODOS.md
(P1 section header + depends-on), CHANGELOG.md v0.11.0 entry. v7 stays
as the internal schema version in code/migration contexts. The product
name is just Minions.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
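The three-mode routing branch can be sketched like this. The mode names come from the migration text (always / pain_triggered / off); the signal fields and the helper name are hypothetical, but the thresholds mirror the prose above: stay on subagents until the gateway drops state, fan-out exceeds 3, runtime exceeds 5 minutes, or the user expresses frustration.

```typescript
type MinionMode = "always" | "pain_triggered" | "off";

interface TaskSignals {
  parallel: number;             // requested fan-out width
  runtimeMin: number;           // expected runtime in minutes
  gatewayDroppedState: boolean; // gateway lost session state
  userFrustrated: boolean;      // user expressed frustration
}

// Hypothetical routing helper: decide Minions vs native subagents based
// on the mode read from ~/.gbrain/preferences.json.
function shouldUseMinions(mode: MinionMode, s: TaskSignals): boolean {
  if (mode === "always") return true;
  if (mode === "off") return false;
  // pain_triggered (the default): keep subagents until a pain shows up.
  return (
    s.gatewayDroppedState ||
    s.parallel > 3 ||
    s.runtimeMin > 5 ||
    s.userFrustrated
  );
}
```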
* doc(readme): promote Minions — 6 OpenClaw pains + how each is fixed
The one-line mention in the skills table wasn't doing the work. Added a
dedicated section between "How It Works" and "Getting Data In" that leads
with the six multi-agent failures every OpenClaw user hits daily (spawn
storms, hung handlers, forgotten dispatches, unstructured debugging,
gateway crashes, runaway grandchildren) and maps each pain to the
specific Minions primitive that fixes it.
Includes the smoke test command, the adoption default (pain_triggered),
and a pointer to skills/minion-orchestrator for the full patterns.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* test(bench): add harness for Minions vs OpenClaw subagent dispatch
Shared harness (openclawDispatch + minionsHandler) using matching
claude-haiku-4-5 calls on both sides so the delta measures queue+
dispatch overhead on top of identical LLM work. Includes
statsFromResults (p50/p95/p99) and formatStats helpers. Uses
`openclaw agent --local` embedded mode; does not test gateway
multi-agent fan-out (documented in the harness header).
* test(bench): durability under SIGKILL — Minions vs OpenClaw --local
Headline bench for the claim: when the orchestrator dies mid-dispatch,
Minions rescues via PG state + stall detection; OpenClaw --local loses
in-flight work outright.
Minions side: seed 10 active+expired-lock rows (exact state a SIGKILLed
worker leaves) then run a rescue worker. Expect 10/10 completed.
OpenClaw side: spawn 10 `openclaw agent --local` in parallel, SIGKILL
each at 500ms, count pre-kill delivered output. Expect 0/10 — no
persistence layer, nothing to recover.
Budget: ~$0 (Minions handlers sleep 10ms; OC calls die at 500ms so
partial LLM billing is negligible).
* test(bench): per-dispatch throughput — Minions vs OpenClaw --local
20 serial dispatches each side, identical claude-haiku-4-5 call with the
same trivial prompt. p50/p95/p99 reported via statsFromResults. Serial
(not parallel) so the per-dispatch cost is measured honestly and LLM
token spend stays bounded (~$0.08 total).
Minions: one queue, one worker, one concurrency. Submit → poll to
completion before next submit. OpenClaw: N sequential
`openclaw agent --local` spawns.
* test(bench): fan-out — Minions 10-wide concurrency vs 10 parallel OC spawns
Parent dispatches 10 children, waits for all to return. Minions uses
worker concurrency=10 sharing one warm process; OpenClaw parallel
`openclaw agent --local` spawns, each boots its own runtime.
3 runs × 10 children per run. Reports ok count and wall time per run
plus summary. Honest caveat documented: does not test OC gateway
multi-agent fan-out — that needs a custom WS client and LLM-backed
parent agent. This measures what users script today.
Budget: ~$0.12 LLM spend.
* test(bench): memory — 10 in-flight subagents, single-proc vs 10-proc cost
Measures resident memory for keeping 10 subagents in flight. Minions:
one worker process, concurrency=10 with handlers that park on a
promise — sample RSS of the test process via process.memoryUsage().
OpenClaw: 10 parallel `openclaw agent --local` processes, sum their
RSS via `ps -o rss=`.
Handlers are cheap sleeps, no LLM — we want harness memory, not LLM
client state. Budget: $0.
* test(bench): fan-out — don't gate on OC success rate, report numbers
Initial run showed OC parallel `--local` at 10-wide hits 40% failure
rate (17/30 across 3 runs). That's the finding, not a test bug —
process startup stampede + LLM rate limits. Bench now prints error
samples and reports the numbers instead of gating.
Minions side still gates at 90% (30/30 observed in practice).
* doc(benchmarks): Minions vs OpenClaw --local subagent dispatch
Real numbers on four claims: durability, throughput, fan-out, memory.
Same claude-haiku-4-5 call on both sides so the delta is queue+dispatch+
process cost on top of identical LLM work.
Headline: Minions rescues 10/10 from a SIGKILLed worker in 458ms while
OpenClaw --local loses all 10; ~10× faster per dispatch (778ms p50 vs
8086ms p50); ~21× faster at 10-wide fan-out AND 100% reliable vs OC's
43% failure rate; 2 MB vs 814 MB to keep 10 subagents in flight.
Honest caveats section covers what this doesn't test (OC gateway
multi-agent, load tests, other models). Fully reproducible via
test/e2e/bench-vs-openclaw/.
* doc(readme): inject Minions vs OpenClaw bench numbers
Headline deltas now in the Minions section: 10/10 vs 0/10 on crash,
~10× faster per dispatch, ~21× faster fan-out at 10-wide with 0%
failure vs 43%, ~400× less memory. Links to the full bench doc.
Prose first said Minions "fixes all six pains." Now it shows the
numbers that prove it.
* bench: production Wintermute benchmark — Minions 753ms vs sub-agent timeout
Real deployment: 45K-page brain on Render+Supabase. Task: pull 99 tweets,
write brain page, commit, sync. Minions: 753ms, $0. Sub-agent: gateway
timeout (>10s, couldn't even spawn under production load).
Also: 19,240 tweets backfilled across 36 months in 15 min at $0.
Sub-agents would cost $1.08 and fail 40% of spawns.
* bench: tweet ingestion — Minions 719ms vs OpenClaw 12.5s (17×)
Production benchmark with runnable test code:
- test/e2e/bench-vs-openclaw/tweet-ingest.bench.ts (reusable)
- docs/benchmarks/2026-04-18-tweet-ingestion.md (publishable)
Task: pull 100 tweets from X API, write brain page, commit, sync.
Minions: 719ms mean, $0, 100% success.
OpenClaw: 12,480ms mean, $0.03/run, 60% success (gateway timeouts).
At scale: 36-month backfill, 19K tweets, 15 min, $0 vs est. $1.08.
* doc(benchmarks): Wintermute production data point for Minions vs OpenClaw
Adds a production-environment data point to the Minions README section:
one month of tweet ingest on Wintermute (Render + Supabase + 45K-page brain)
ran end-to-end in 753ms for $0.00 via Minions, while the equivalent
sessions_spawn hit the 10s gateway timeout and produced nothing.
Full methodology + logs in docs/benchmarks/2026-04-18-minions-vs-openclaw-production.md.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(core): preferences.ts + cli-util.ts — foundations for v0.11.1
Adds two foundational modules that apply-migrations (Lane A-4), the
v0.11.0 orchestrator (Lane C-1), and the stopgap script (Lane C-4) all
depend on.
- src/core/preferences.ts: atomic-write ~/.gbrain/preferences.json
(mktemp + rename, 0o600, forward-compatible for unknown keys) with
validateMinionMode, loadPreferences, savePreferences. Plus
appendCompletedMigration + loadCompletedMigrations for the
~/.gbrain/migrations/completed.jsonl log (tolerates malformed lines).
Uses process.env.HOME || homedir() so $HOME overrides work in CI and
tests; Bun's os.homedir() caches the initial value and ignores later
mutations.
- src/core/cli-util.ts: promptLine(prompt) helper, extracted from
src/commands/init.ts:212-224. Shared so init, apply-migrations, and
the v0.11.0 orchestrator's mode prompt don't each reinvent it.
test/preferences.test.ts: 21 unit tests covering load/save atomicity,
0o600 perms, forward-compat for unknown keys, minion_mode validation,
completed.jsonl JSONL append idempotence, auto-ts population, malformed-
line tolerance in loadCompletedMigrations.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(init): add --migrate-only flag (schema-only, no saveConfig)
Context: v0.11.0 migration orchestrators need a safe way to re-apply the
schema against an existing brain without risking a config flip. Today
running bare `gbrain init` with no flags defaults to PGLite and calls
saveConfig, which would silently overwrite an existing Postgres
database_url — caught by Codex in the v0.11.1 plan review as a
show-stopper data-loss bug.
The new --migrate-only path:
- loadConfig() reads the existing config (does NOT call saveConfig)
- errors out with a clear "run gbrain init first" if no config exists
- connects via the already-configured engine, calls engine.initSchema(),
disconnects
- --json emits structured success/error payloads
Everything downstream in the v0.11.1 migration chain (apply-migrations,
the stopgap bash script, the package.json postinstall hook) will invoke
this flag rather than bare gbrain init.
test/init-migrate-only.test.ts: 4 tests covering the no-config error
path, --json error payload shape, happy-path with a PGLite fixture
(verifies config.json content is byte-identical after the call — the
real invariant), and idempotent rerun.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(migrations): TS registry replaces filesystem migration scan
Context: Codex flagged that bun build --compile produces a self-contained
binary, and the existing findMigrationsDir() in upgrade.ts:145 walks
skills/migrations/v*.md on disk — which fails on a compiled install
because the markdown files aren't bundled. The plan's fix is a TS
registry: migrations are code, imported directly, visible to both source
installs and compiled binaries.
- src/commands/migrations/types.ts: shared Migration, OrchestratorOpts,
OrchestratorResult types.
- src/commands/migrations/index.ts: exports the migrations[] array,
getMigration(version), and compareVersions() (semver comparator).
The feature_pitch data that lived in the MD file frontmatter now
lives here as a code constant on each Migration, so runPostUpgrade's
post-upgrade pitch printer can consume it without a filesystem read.
- src/commands/migrations/v0_11_0.ts: stub orchestrator + pitch. The
full phase implementation lands in Lane C-1; for now the stub throws
a clear "not yet implemented" so apply-migrations --list (Lane A-4)
can still enumerate the migration.
test/migrations-registry.test.ts: 9 tests covering ascending-semver
ordering, feature_pitch shape invariants, getMigration lookup, and
compareVersions edge cases (equal / newer / older / single-digit
across major bumps).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
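A compareVersions-style comparator of the kind the tests describe might look like the sketch below (the real one lives in src/commands/migrations/index.ts; this is an assumed reimplementation). Comparing numerically field by field, rather than lexicographically, is what handles the single-digit-across-major-bump edge case, e.g. 0.9.x vs 0.11.x.

```typescript
// Semver-ish comparator: returns -1, 0, or 1. Tolerates a leading "v"
// and version strings of unequal length (missing fields compare as 0).
function compareVersions(a: string, b: string): number {
  const pa = a.replace(/^v/, "").split(".").map(Number);
  const pb = b.replace(/^v/, "").split(".").map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const d = (pa[i] ?? 0) - (pb[i] ?? 0);
    if (d !== 0) return Math.sign(d);
  }
  return 0;
}
```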
* feat(cli): gbrain apply-migrations — migration runner CLI
Reads ~/.gbrain/migrations/completed.jsonl, diffs against the TS migration
registry, runs pending orchestrators. Resumes status:"partial" entries
(the stopgap bash script writes these so v0.11.1 apply-migrations can
pick up where it left off). Idempotent: rerunning when up-to-date exits 0.
Flags:
--list Show applied + partial + pending + future.
--dry-run Print the plan; take no action.
--yes / --non-interactive Skip prompts (used by runPostUpgrade + postinstall).
--mode <a|p|o> Preset minion_mode (bypasses the Phase C TTY prompt).
--migration vX.Y.Z Force-run one specific version.
--host-dir <path> Include $PWD in host-file walk (default is
$HOME/.claude + $HOME/.openclaw only).
--no-autopilot-install Skip Phase F.
Diff rule (Codex H9): apply when no status:"complete" entry exists AND
migration.version ≤ installed VERSION. Previously proposed rule was
"version > currentVersion", which would SKIP v0.11.0 when running v0.11.1;
regression test in apply-migrations.test.ts pins the correct semantics.
Registered in src/cli.ts CLI_ONLY Set; dispatched before connectEngine so
each phase owns its own engine/subprocess lifecycle (no double-connect
when the orchestrator shells out to init --migrate-only or jobs smoke).
test/apply-migrations.test.ts: 18 unit tests covering parseArgs for every
flag, indexCompleted/statusForVersion correctness (including stopgap-then-
complete transition), and buildPlan's four buckets (applied / par…
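The H9 diff rule above can be expressed as a small predicate. The entry shape and helper names here are assumptions for illustration: a migration is pending iff no status:"complete" entry exists for it AND its version is at most the installed VERSION. The rejected rule (version > currentVersion) fails the second clause for v0.11.0 when running v0.11.1, which is the regression the test pins.

```typescript
interface CompletedEntry {
  version: string;
  status: "complete" | "partial";
}

// Sketch of the diff rule: partial entries do NOT block re-running, which
// is how stopgap-then-complete resumption works.
function isPending(
  migrationVersion: string,
  installedVersion: string,
  completed: CompletedEntry[],
  cmp: (a: string, b: string) => number,
): boolean {
  const done = completed.some(
    (e) => e.version === migrationVersion && e.status === "complete",
  );
  return !done && cmp(migrationVersion, installedVersion) <= 0;
}
```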
Summary
Fixes 5 production gaps found from running a 26,358-page brain for 10 days. Sync --watch works. Sync auto-embeds. New commands to build the link graph and structured timeline. Feature adoption salesman that scans your brain and tells you what you're missing.
Bug Fixes
- `sync --watch` now actually loops (was routing through the operation layer, single-pass only)
- `performFullSync` persists the sync checkpoint (was repeating forever)
- `dead_links` metric aligned across Postgres and PGLite engines
- `doctor` suggests the correct sync command (`--stale`, not `refresh`)
New Commands
- `gbrain extract links|timeline|all` — batch extraction of links and timeline entries from markdown
- `gbrain features --json --auto-fix` — scan usage, pitch unused features with the user's own numbers
- `gbrain autopilot --install` — persistent daemon: sync + extract + embed in a loop
- `gbrain embed --slugs` — targeted page embedding for the sync pipeline
Infrastructure
- `brain_score` (0-100) in BrainHealth — weighted composite of embed coverage, link density, timeline coverage, orphans, dead links
Test Coverage
Tests: 816 → 841 (+25 new). 972 total runs, 0 failures.
Pre-Landing Review
4 Codex findings verified and fixed (C1: performFullSync checkpoint, C3: dead_links parity, C4: doctor wrong command, C7: link regex URL filter).
Plan Completion
9 plan items, all implemented. CEO + DX + Eng + Codex reviews cleared.
TODOS
No items completed. Existing P0/P1/P2 items remain.
Test plan
🤖 Generated with Claude Code