v0.22.1 autopilot fix wave — 5 prod hotfixes (#417, #403, #406, #363, #409) #447
Merged
Conversation
… net

Root cause: autopilot-cycle handler called runCycle() without passing the job's AbortSignal. When the per-job timeout fired abort(), runCycle never checked it and kept grinding through extract (54,605 pages). The executeJob promise never resolved, inFlight never decremented, and the worker thought it was at capacity forever — 98 jobs piled up waiting with 0 active while a live worker sat idle.

Three-layer fix:
1. CycleOpts.signal: new optional AbortSignal field. runCycle checks it between every phase via checkAborted(). A timed-out cycle now bails after the current phase completes instead of running all 6 phases.
2. autopilot-cycle handler: passes job.signal to runCycle so the abort actually propagates.
3. Worker safety net: 30s after the abort fires, if the handler still hasn't resolved, force-evict from inFlight and mark as dead in DB. This is the last-resort escape hatch for any handler that ignores AbortSignal — the worker resumes claiming new jobs instead of wedging forever.

Incident: 2026-04-24, 98 waiting / 0 active / worker alive but idle. 143 existing minions tests pass unchanged.
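A minimal TypeScript sketch of the three layers described above. Only `CycleOpts.signal`, `checkAborted()`, and the 30s force-evict window come from the commit; the phase bodies, error message, and eviction wiring are illustrative assumptions.

```ts
// Sketch only — not the shipped implementation.
interface CycleOpts {
  signal?: AbortSignal; // new optional field: the job's per-run abort signal
}

function checkAborted(signal?: AbortSignal): void {
  if (signal?.aborted) throw new Error("autopilot cycle aborted: job timeout fired");
}

async function runCycle(opts: CycleOpts = {}): Promise<void> {
  const phases: Array<() => Promise<void>> = [
    async () => { /* sync */ },
    async () => { /* extract */ },
    async () => { /* embed */ },
  ];
  for (const phase of phases) {
    checkAborted(opts.signal); // layer 1: bail at phase boundaries, not mid-grind
    await phase();
  }
}

// Layer 3 (worker safety net, simplified): if the handler still hasn't settled
// 30s after abort fired, evict it so the worker can claim new jobs again.
function armForceEvict(signal: AbortSignal, evict: () => void): void {
  signal.addEventListener("abort", () => setTimeout(evict, 30_000), { once: true });
}
```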
16 new tests across 3 files covering the 2026-04-24 worker wedge:

test/minions.test.ts (6 new, 149 total):
- handler receiving abort signal exits cleanly
- handler ignoring abort still gets signal delivered
- worker claims new jobs after timeout (no wedge) ← key regression
- checkAborted pattern: undefined/non-aborted/aborted signals

test/cycle-abort.test.ts (7 new):
- CycleOpts.signal type contract
- runCycle accepts signal without error
- runCycle bails on pre-aborted signal
- runCycle bails mid-flight when signal fires between phases
- Source-level guard: jobs.ts passes job.signal to runCycle
- Source-level guard: worker.ts has force-eviction safety net
- Source-level guard: cycle.ts has checkAborted between all 6 phases

test/e2e/worker-abort-recovery.test.ts (3 new):
- worker recovers from timed-out handler and processes next job
- concurrency=2 processes parallel jobs during timeout
- multiple sequential timeouts don't permanently wedge worker

All 159 tests pass.
The autopilot-cycle runs every 5 min. Its extract phase was doing a full filesystem walk of ALL markdown files (54K+) — twice (links + timeline). On a brain this size, extract alone exceeded the 600s job timeout, producing zero useful writes.

Fix: sync already returns pagesAffected (the slugs it added/modified). Pipe that list through to extract. When provided, extract reads ONLY those files instead of walking the entire brain directory.
- Add ExtractOpts.slugs for targeted extraction
- Add extractForSlugs() — single-pass links + timeline for specific slugs
- cycle.ts: capture sync's pagesAffected, pass to runPhaseExtract
- If sync didn't run or failed, extract falls back to full walk (safe)
- If pagesAffected is empty (nothing changed), extract returns instantly

Expected improvement: 54K file reads → ~10-50 per cycle. The full walk is still available via CLI `gbrain extract` and on first-run.
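A rough TypeScript sketch of the data flow: sync's `pagesAffected` feeds `ExtractOpts.slugs`, with the empty-list short-circuit and the full-walk fallback. Only the names mentioned in the commit are real; the function bodies and the sync result shape are assumptions.

```ts
// Sketch of the targeting logic described above — not the shipped code.
interface ExtractOpts {
  slugs?: string[]; // when set, extract reads only these pages
}

async function runPhaseExtract(opts: ExtractOpts = {}): Promise<void> {
  if (opts.slugs !== undefined) {
    if (opts.slugs.length === 0) return;   // nothing changed → return instantly
    await extractForSlugs(opts.slugs);     // single pass: links + timeline
    return;
  }
  await extractFullWalk();                 // sync skipped or failed → safe fallback
}

async function extractForSlugs(slugs: string[]): Promise<void> {
  for (const slug of slugs) {
    // read only this page's markdown file, extract links + timeline entries
  }
}

async function extractFullWalk(): Promise<void> {
  // existing behavior: walk every markdown file in the brain directory
}

// In cycle.ts (simplified): capture sync's pagesAffected and thread it through.
async function cycleSketch(): Promise<void> {
  const syncResult = { pagesAffected: ["journal/2026-04-24", "projects/gbrain"] }; // illustrative slugs
  await runPhaseExtract({ slugs: syncResult.pagesAffected });
}
```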
Three fixes for the minion supervisor dying silently when PgBouncer rotates:
1. PostgresEngine: executeRaw retries once on connection-class errors (ECONNREFUSED, password auth failed, connection terminated, etc.) by tearing down the poisoned pool and creating a fresh one via reconnect(). Prevents cascading failures when Supabase bounces.
2. Supervisor: tracks consecutive health check failures. After 3 in a row, emits health_warn with reason=db_connection_degraded and attempts engine.reconnect() if available. Resets counter on success.
3. Supervisor: worker_exited events now include likely_cause field: SIGKILL → oom_or_external_kill, SIGTERM → graceful_shutdown, code=1 → runtime_error. Makes it trivial to distinguish OOM kills from connection deaths in logs.

Tests: 23 new tests covering connection error detection, reconnect guard against concurrent reconnects, retry-once-not-infinite-loop, health failure tracking, and exit classification.
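Hedged sketches of the two supervisor-side behaviors (3-strikes health tracking and exit classification). The event name, reason string, and signal-to-cause mapping follow the commit text; the class shape and the reset-after-reconnect detail are assumptions.

```ts
// Sketch only — not the shipped supervisor code.
type ExitCause = "oom_or_external_kill" | "graceful_shutdown" | "runtime_error" | "unknown";

function classifyExit(code: number | null, signal: string | null): ExitCause {
  if (signal === "SIGKILL") return "oom_or_external_kill"; // OOM killer or external kill
  if (signal === "SIGTERM") return "graceful_shutdown";
  if (code === 1) return "runtime_error";
  return "unknown";
}

class HealthFailureTracker {
  private consecutiveFailures = 0;
  constructor(private reconnect?: () => Promise<void>) {}

  async record(healthy: boolean): Promise<void> {
    if (healthy) {
      this.consecutiveFailures = 0;        // reset on success, per the commit
      return;
    }
    if (++this.consecutiveFailures >= 3) {
      console.warn("health_warn", { reason: "db_connection_degraded" });
      await this.reconnect?.();            // attempt engine.reconnect() if available
      this.consecutiveFailures = 0;        // assumption: start a fresh 3-strike window
    }
  }
}
```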
…pervisor + worker
…ends

Prevents the failure mode from #361: a single autopilot UPDATE on minion_jobs can leave a pooler backend in state='active'/ClientRead for 24h+, holding a RowExclusiveLock that blocks every subsequent ALTER TABLE minion_jobs. The stuck backend never times out on its own because Supabase Micro has no default idle_in_transaction_session_timeout and autovacuum can't reap sessions that hold active locks.

Fix: deliver statement_timeout + idle_in_transaction_session_timeout as startup parameters via postgres.js's `connection` option, applied automatically on every new backend connection. Works correctly on both session-mode and transaction-mode PgBouncer poolers (startup params persist for the backend's lifetime, unlike SET commands which transaction-mode PgBouncer strips between transactions).

Defaults chosen conservatively so they don't interfere with bulk work like multi-minute embed passes or CREATE INDEX on large pages tables:
- statement_timeout: '5min'
- idle_in_transaction_session_timeout: '2min'

Each overridable per-GUC via env var (GBRAIN_STATEMENT_TIMEOUT, GBRAIN_IDLE_TX_TIMEOUT). Set any to '0' or 'off' to disable.

client_connection_check_interval is the specific GUC that would kill the observed state='active'/ClientRead case, but it's Postgres 14+ and some managed poolers reject unknown startup parameters. Made it opt-in only via GBRAIN_CLIENT_CHECK_INTERVAL for users who know their Postgres supports it.

Applied in both the module-level singleton connect (src/core/db.ts) and the per-engine-instance pool used by `gbrain jobs work` (src/core/postgres-engine.ts) via a shared resolveSessionTimeouts() helper.

Tests: 5 new cases in migrate.test.ts covering defaults, env overrides, '0'/'off' disable, and multi-GUC disable. 39/39 pass (34 pre-existing + 5 new).

Closes #361.

Co-Authored-By: orendi84 <orendigergo@gmail.com>
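A sketch of the startup-parameter wiring, assuming the postgres.js client (`postgres` on npm). The `resolveSessionTimeouts` body below is reconstructed from the description above — the env var names and GUC defaults as of this commit (the idle default is raised later in the wave) — not copied from the repo.

```ts
// Sketch, not the shipped src/core/db.ts.
import postgres from "postgres";

function resolveSessionTimeouts(): Record<string, string> {
  const disabled = (v: string) => v === "0" || v === "off";
  const gucs: Record<string, string> = {};

  const stmt = process.env.GBRAIN_STATEMENT_TIMEOUT ?? "5min";
  if (!disabled(stmt)) gucs.statement_timeout = stmt;

  const idleTx = process.env.GBRAIN_IDLE_TX_TIMEOUT ?? "2min";
  if (!disabled(idleTx)) gucs.idle_in_transaction_session_timeout = idleTx;

  // Opt-in only: Postgres 14+ GUC, and some managed poolers reject
  // unknown startup parameters outright.
  const check = process.env.GBRAIN_CLIENT_CHECK_INTERVAL;
  if (check && !disabled(check)) gucs.client_connection_check_interval = check;

  return gucs;
}

const sql = postgres(process.env.DATABASE_URL ?? "", {
  // Sent in the startup packet of every new backend connection, so the values
  // persist for the backend's lifetime even behind transaction-mode PgBouncer,
  // unlike post-connect set commands that the pooler strips between transactions.
  connection: resolveSessionTimeouts(),
});
```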
embed --stale walked listPages + per-page getChunks (incl. vector(1536) embedding column) on every call, then client-side-filtered for chunks where embedding was missing. On a 1.5K-page brain at 100% coverage, ~76 MB pulled per call, all discarded. With autopilot firing every 5-10 min plus a 2h cron, this hit Supabase's 5 GB free-tier ceiling at 102 GB used (2058% over) twice in one week.

Two new BrainEngine methods replace the page walk with a SQL-side filter:
- countStaleChunks(): single SELECT count(*) WHERE embedding IS NULL. Pre-flight short-circuit; ~50 bytes wire when 0 stale.
- listStaleChunks(): slug + chunk_index + chunk_text + chunk_source + model + token_count for stale rows only. Excludes the (NULL) embedding column. Bounded by LIMIT 100000 mirroring listPages.

embedAll forks: staleOnly=true takes the new SQL-side path (embedAllStale); staleOnly=false (--all) keeps existing behavior verbatim.

embedAllStale preserves non-stale chunks on partially-stale pages: it re-fetches existing chunks per stale slug and merges (embedding=undefined for non-stale → COALESCE preserves existing). Without the merge, the upsertChunks != ALL filter would delete non-stale chunks. Re-fetch cost is bounded by stale slug count; the autopilot common case (0 stale) never reaches this path.

Predicate uses `embedding IS NULL`, not `embedded_at IS NULL`. The bulk-import path could leave embedded_at populated while embedding was NULL (see upsertChunks consistency fix below), so `embedding IS NULL` is the truth source for "this chunk needs an embedding".

Also fixes the upsertChunks consistency bug in both engines: when chunk_text changes and no new embedding is supplied, embedding correctly clears to NULL but embedded_at kept its old timestamp. New behavior resets BOTH columns together, keeping write-time honesty.

Wire-cost impact (measured against current behavior on a 1.5K-page brain):
- 0 stale chunks (autopilot common case): ~76 MB → ~50 bytes (~1.5M× reduction)
- 100 stale across 10 pages: ~76 MB → ~150 KB (~500× reduction)
- 8K stale across 1.5K pages (cold start): ~76 MB → ~12 MB (~6× reduction)

Tests: 4 new in test/embed.test.ts (zero-stale short-circuit; N-stale-across-M-pages with non-stale preservation; --stale dry-run; --all path byte-identical). Existing --stale tests updated for the new mock surface.

Migration impact: none. embedded_at and embedding columns have been on content_chunks since schema inception.

Co-Authored-By: atrevino47 <atbuster47@gmail.com>
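A sketch of the two SQL-side helpers and the fork, assuming a postgres.js-style tagged-template client and the column names given above; the merge step is summarized in comments rather than implemented.

```ts
// Sketch only — table/column names and the LIMIT follow the commit message;
// the client type is a minimal stand-in, not the project's engine interface.
type SqlTag = (strings: TemplateStringsArray, ...params: unknown[]) => Promise<any[]>;

async function countStaleChunks(sql: SqlTag): Promise<number> {
  const [row] = await sql`
    SELECT count(*)::int AS n
    FROM content_chunks
    WHERE embedding IS NULL`;
  return row.n; // ~50 bytes on the wire when the answer is 0
}

async function listStaleChunks(sql: SqlTag): Promise<any[]> {
  // Excludes the vector(1536) embedding column — that's where the ~76 MB went.
  return sql`
    SELECT slug, chunk_index, chunk_text, chunk_source, model, token_count
    FROM content_chunks
    WHERE embedding IS NULL
    LIMIT 100000`;
}

// The fork, simplified: --stale takes the SQL-side path, --all is untouched.
async function embedAll(sql: SqlTag, staleOnly: boolean): Promise<void> {
  if (staleOnly) {
    if ((await countStaleChunks(sql)) === 0) return; // autopilot common case
    const stale = await listStaleChunks(sql);
    // embed `stale`, re-fetching existing chunks per affected slug and merging
    // so non-stale chunks on partially-stale pages keep their embeddings
    void stale;
    return;
  }
  // --all: existing full-walk behavior, verbatim
}
```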
…e noExtract (F2)

- Drop #406's per-call executeRaw retry wrapper. The regex idempotence boundary is unsound (writable CTEs, side-effecting SELECTs). Recovery now happens at the supervisor level via 3-strikes-then-reconnect.
- Update db.ts: setSessionDefaults becomes a back-compat no-op. resolveSessionTimeouts (from #363) is the source of truth, sending GUCs as startup parameters that survive PgBouncer transaction mode. Bumped idle_in_transaction default from 2min to 5min to match v0.21.0 posture.
- Gate noExtract in cycle's runPhaseSync on whether extract phase is scheduled. Avoids silently dropping extraction when the user runs `gbrain dream --phase sync` (Codex F2).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
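A small sketch of the F2 gate: sync only gets `noExtract` when an extract phase is actually scheduled in the same run. The helper name and phase-list shape are assumptions; `noExtract` and the `--phase sync` behavior come from the commit.

```ts
// Sketch of the gate, not the shipped runPhaseSync wiring.
function syncOptsFor(phases: string[]): { noExtract: boolean } {
  // Only let sync skip extraction work when a dedicated extract phase will run
  // later in the same cycle; `gbrain dream --phase sync` keeps extraction.
  const willRunExtractPhase = phases.includes("extract");
  return { noExtract: willRunExtractPhase };
}

// Full cycle: extract runs later, so sync may skip its own extraction work.
syncOptsFor(["sync", "extract", "embed"]); // → { noExtract: true }
// Sync-only run: no later extract phase, so don't silently drop extraction.
syncOptsFor(["sync"]);                     // → { noExtract: false }
```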
The migrate.test.ts structural check counts occurrences of `SET idle_in_transaction_session_timeout` in the source. The literal string in this docstring was tripping it.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
15 new test cases across 3 files, ~250 LOC, all PGLite/in-memory:

test/extract-incremental.test.ts (NEW, 8 cases for #417):
- slugs: [] returns immediately (early-return)
- slugs: undefined falls through to full-walk
- slugs: [a, b] reads only those files
- Slug whose file no longer exists is silently skipped
- Mode filter (links) skips timeline extraction
- dryRun: true does not invoke addLinksBatch / addTimelineEntriesBatch
- BATCH_SIZE flush — >100 candidate links exercise mid-iteration flush
- Full-slug-set resolution — link to file outside changed set still resolves

test/core/cycle.test.ts (4 new cases for #417 + Codex F2):
- cycle threads sync.pagesAffected into extract phase as the slugs argument
- extract phase falls back to full walk when sync was skipped
- F2 guard: full cycle (sync + extract) sets noExtract=true on sync
- F2 guard: phases:[sync] only sets noExtract=false (no silent extract drop)

test/connection-resilience.test.ts (3 new cases for D3):
- PostgresEngine.executeRaw is a single-statement passthrough (no try/catch)
- PostgresEngine.reconnect() still exists for supervisor-driven recovery
- Supervisor still has the 3-strikes-then-reconnect path

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…ates

CHANGELOG.md: segment-aware entry per CEO-review D1 — 'For everyone' section (#417 incremental extract, #403 cycle abort) leads, 'For Postgres / Supabase users' section (#406, #363, #409) follows. Production proof point as a sidebar, not the lead.

TODOS.md: 3 follow-up items per Eng-review D6:
1. Caller-opt-in retry for executeRaw (D3 follow-up)
2. Replace walkMarkdownFiles with engine.getAllSlugs() (F1 follow-up)
3. err.code-based connection-error matching (B1 follow-up)

CLAUDE.md: 6 file-reference updates for the wave's behavioral additions (postgres-engine, db, cycle, worker, supervisor, embed, extract).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…ions

User-explicit version override on /ship: ship as v0.22.1 (MINOR jump from master's 0.21.0) instead of the v0.21.1 PATCH the wave originally targeted. The wave bundles 5 production fixes, which is meaningful enough to clear a MINOR version even though the API surface is additive.

Files updated to 0.22.1:
- VERSION (single source of truth)
- package.json (Bun/npm version)
- CHANGELOG.md (release header + "To take advantage of v0.22.1" block)
- TODOS.md (3 follow-up TODOs reference the version that filed them)
- CLAUDE.md (Key Files annotations cite the release that introduced behavior)

Also adds a "Version locations" section to CLAUDE.md documenting all five required files plus the auto-derived (bun.lock, llms-full.txt) and historical (skills/migrations/v*.md, src/commands/migrations/v*.ts, test/migrations-v*.test.ts) categories. Future /ship runs and the auto-update agent now have a canonical list of where versions live.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…6-363-409-autopilot

Conflicts: CHANGELOG.md, CLAUDE.md, VERSION
…defined CI's `bun run typecheck` step was failing with TS2339 at test/minions.test.ts:2026 — `const signal = undefined` narrows to literal `undefined`, which has no `.aborted` property, so `signal?.aborted` doesn't compile. Fix uses `as AbortSignal | undefined` to preserve the union type. A plain type annotation gets narrowed back via control-flow analysis; the `as` cast doesn't. Runtime behavior is unchanged — the optional-chain still short-circuits as intended. Verified: bunx tsc --noEmit → exit 0; the 3 checkAborted cases still pass. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
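The narrowing difference in isolation, per the commit's description (a hypothetical snippet, not the project's test file):

```ts
// Per the commit above: inferring from the initializer pins the literal type.
// const signal = undefined;        // type: undefined
// signal?.aborted;                 // TS2339 in CI — no `.aborted` on `undefined`

// The fix: an `as` cast keeps the union visible to the checker, and the
// optional chain still short-circuits to undefined at runtime.
const signal = undefined as AbortSignal | undefined;
if (signal?.aborted) {
  // unreachable with this literal value; shown only for the type-level shape
}
```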
The minions_migration check reads ~/.gbrain/migrations/completed.jsonl and flags any version that has a `partial` entry without a matching `complete`. Long-lived installs accumulate partial records from historical stopgap runs (notably v0.11.0). Without time decay or forward-progress detection, the FAIL flag fires forever once any partial lands, even on installs that have been running clean at v0.22+ for months.

Concrete failure: test/e2e/mechanical.test.ts "gbrain doctor exits 0 on healthy DB" was flaking on dev machines whose ~/.gbrain/ carried v0.11.0 partials from earlier in the day. The fresh test DB had nothing wrong with it; doctor was just reading host filesystem state that bled in via $HOME.

Fix: a partial vX.Y.Z is treated as stale (not stuck) if any vA.B.C where A.B.C >= X.Y.Z has a `complete` entry anywhere in the file. The reasoning: if a newer migration successfully landed, the install has clearly moved past the older partial. compareVersions() from src/commands/migrations/index.ts handles the semver compare.

Cases preserved:
- v0.10 complete + v0.11 partial → still FAILs (older complete doesn't supersede newer partial)
- v0.16 partial alone → still FAILs (no override exists)
- Fresh install (no completed.jsonl) → no warning
- Real partial-then-complete-same-version → no warning

Cases now fixed:
- v0.16 complete + v0.11 partial → no FAIL (forward progress made; the v0.11 record is stale)

Two regression tests in test/doctor-minions-check.test.ts cover both directions of the override (when it fires, when it doesn't).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
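A TypeScript sketch of the supersede rule; the record shape and the inline comparator are stand-ins for the real completed.jsonl entries and `compareVersions()` from src/commands/migrations/index.ts.

```ts
// Sketch only — illustrates the rule, not the doctor check's actual code.
interface MigrationRecord {
  version: string;                 // e.g. "v0.11.0"
  status: "partial" | "complete";
}

// Stand-in for compareVersions(): <0 if a<b, 0 if equal, >0 if a>b.
function compareVersions(a: string, b: string): number {
  const parse = (v: string) => v.replace(/^v/, "").split(".").map(Number);
  const [pa, pb] = [parse(a), parse(b)];
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const d = (pa[i] ?? 0) - (pb[i] ?? 0);
    if (d !== 0) return d;
  }
  return 0;
}

// A partial vX.Y.Z is stale (not stuck) if any same-or-newer version completed.
function isStalePartial(records: MigrationRecord[], partialVersion: string): boolean {
  return records.some(
    (r) => r.status === "complete" && compareVersions(r.version, partialVersion) >= 0,
  );
}

// v0.16.0 complete + v0.11.0 partial → stale, no FAIL.
// v0.10.0 complete + v0.11.0 partial → not stale, still FAILs.
```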
CI's build-llms regen-drift guard caught that llms-full.txt was stale relative to CLAUDE.md after the wave's documentation commits (the "Version locations" section + 6 file-reference annotations for the wave's behavioral additions). CLAUDE.md notes that llms-full.txt is auto-derived — bumped via 'bun run build:llms' when CLAUDE.md's file-references change. This commit catches up. llms.txt is unchanged; the curated index doesn't pull from CLAUDE.md's file-reference body. Only llms-full.txt (the inlined single-fetch bundle) needed regeneration. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
garrytan added a commit that referenced this pull request on Apr 28, 2026
Merges 5 master commits since last merge: v0.22.1 autopilot fix wave (#447), v0.22.2 minions worker reliability (#458), v0.22.4 frontmatter-guard (#448), sourceId in cycle sync phase (#475), and post-migration schema verification (#488).

Conflict resolutions:
- VERSION: kept this branch's reserved 0.27.0 slot (master at 0.22.6).
- CHANGELOG.md: kept v0.27.0 entry at top, then master's v0.22.6 → v0.21.0 entries below in order.
- CLAUDE.md: merged the v0.27 cycle bullet (8 phases, synthesize, patterns, transcript-discovery, dream CLI flags) with master's v0.22.1/v0.22.5 cycle additions (signal: AbortSignal, willRunExtractPhase, resolveSourceForDir).
- src/core/cycle.ts: kept v0.27 yieldDuringPhase + synthInputFile/synthDate/synthFrom/synthTo CycleOpts fields AND added master's v0.22.1 signal: AbortSignal field. Both coexist.
- llms-full.txt: regenerated against the merged tree.

The dream_verdicts schema migration moved v25 → v30 in the prior merge. Master ended at v29 (cathedral_ii_code_edges_rls); v30 is uncontested.

Tests pass post-merge: 105/105 dream + cycle tests across 9 files.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Summary
Five production hotfixes shipping together as one release. Theme: autopilot stops being a noisy neighbor. GBrain's overnight maintenance loop was reading too much, ignoring abort signals, and quietly poisoning shared infrastructure when things went wrong. After this wave the loop only re-reads pages that changed, bails cleanly when timeouts fire, and recovers from connection-pool poisoning without manual intervention.
For everyone (PGLite + Postgres):
- `gbrain dream` cycles no longer re-read every markdown file when only a handful changed. The cycle still walks the directory tree to build the link-resolution set (a fast `readdir` pass), but `readFileSync` now runs only on pages sync flagged as added or modified. On a 54,461-page production brain this turned a 10-minute extract phase into a sub-second pass.
- `runCycle` now bails at the next phase boundary instead of grinding through extract → embed → orphans while the worker thinks the job is done. A 30-second grace-then-evict safety net in `MinionWorker` frees the slot even if a future handler ignores the abort signal entirely.

For Postgres / Supabase users (no-op on PGLite):
- After 3 consecutive health-check failures the supervisor calls `engine.reconnect()` to swap in a fresh pool. Workers crash cleanly on poisoned connections; supervisor catches it within ~3 minutes instead of staying degraded until manual restart.
- `statement_timeout` and `idle_in_transaction_session_timeout` now ship as connection-time startup parameters. An orphaned pgbouncer backend can no longer hold a `RowExclusiveLock` for hours and block schema migrations. Closes #361 (Autopilot minion_jobs UPDATE can leave pgbouncer backend in ClientRead forever, holding RowExclusiveLock for days).
- `embed --stale` now filters server-side on `embedding IS NULL` instead of pulling every chunk's `vector(1536)` over the wire. On a fully-embedded 1.5K-page brain that's the difference between ~76 MB per call and a single `count()` round-trip. One production user blew past Supabase's 5 GB free-tier ceiling at 102 GB used; that pattern is gone now.

Eng-review tightening (D3): the original #406 wrapped `executeRaw` in a per-call retry that auto-recovered from connection errors. Codex showed the SQL-prefix regex isn't a safe idempotence boundary (writable CTEs, side-effecting SELECTs). The wrapper was dropped. What ships from #406 is the structural reconnect path; recovery moved up to the supervisor watchdog. See `TODOS.md` for the planned caller-opt-in retry follow-up.

Version note: Released as v0.22.1 (MINOR jump from master's 0.21.0) instead of the originally-planned 0.21.1. The wave's scope spans 5 production fixes, behavioral changes in cycle / extract / embed / supervisor, and 15 backfilled regression tests. MINOR is the honest signal.
Test Coverage
15 new test cases across 3 files, ~290 LOC, all PGLite/in-memory:
- `test/extract-incremental.test.ts` (NEW, 8 cases for #417, "perf: incremental extract — only process slugs that sync touched"): empty/undefined slugs, [a,b]-only reads, deleted-file handling, mode filter, dry-run, BATCH_SIZE flush, full-slug-set resolution.
- `test/core/cycle.test.ts` (4 new cases for #417 + Codex F2): cycle threads `pagesAffected` into extract, full-walk fallback, F2 noExtract gating (full cycle vs sync-only).
- `test/connection-resilience.test.ts` (3 new cases for D3): `executeRaw` is single-statement passthrough, `reconnect()` still exists, supervisor still has 3-strikes path.

Tests at parity with master: 24 unit failures observed = same set master has (pre-existing parallel-PGLite-timeout flakiness; passes when run individually). E2E: 22 files, 20 passed, 2 pre-existing failures (also fail on master alone). Wave introduces zero new failures.
Pre-Landing Review
CLEAR per `/plan-eng-review` (run during planning, status: clean, 2 issues found, 11 cases backfilled). The review caught the D3 retry-guard issue and one TODO-only item (B1: `err.code`-based connection error matching, filed for follow-up).

Outside Voice (Codex)
Ran during planning. Codex caught 8 issues the inside review missed — all 8 resolved before landing:
- `noExtract` gated on `phases.includes('extract')`
- `test/e2e/cycle.test.ts` moved to `test/core/cycle.test.ts`
- 90c5d93

Plan Completion
All 5 PRs merged + 11 test cases backfilled + 3 follow-up TODOs filed (D3, F1, B1). Plan file:
`~/.claude/plans/system-instruction-you-are-working-tender-horizon.md`.

TODOS
3 follow-up TODOs filed during this wave:
- Caller-opt-in retry for `executeRaw` (D3 follow-up)
- Replace `walkMarkdownFiles` with `engine.getAllSlugs()` in `extractForSlugs` (F1 follow-up)
- `err.code`-based connection-error matching in `postgres-engine.ts` (B1 follow-up)

Documentation
- CHANGELOG.md: segment-aware v0.22.1 entry (For everyone / For Postgres-Supabase users)
- CLAUDE.md: 6 file-reference updates for wave's behavioral additions, plus new "Version locations" section documenting all 5 required files + auto-derived + historical categories
- TODOS.md: 3 follow-up entries

Community contributors
Major thanks to:
- orendi84 (#363)
- atrevino47 (#409)
Both PRs cherry-picked with authorship preserved +
`Co-Authored-By:` trailers. Close-out comments will follow on the upstream PRs (#363, #409, plus the three internal: #417, #403, #406).

Test plan
- `bun test` — 2625 pass, 24 fail = same as master
- `bun run test:e2e` — 211 pass, 2 pre-existing failures
- `--dry-run`

🤖 Generated with Claude Code