fix: v0.18.1 — RLS hardening + schema backfill (supersedes #336) #343

Merged

garrytan merged 14 commits into master from garrytan/rls-hardening on Apr 23, 2026

Conversation

garrytan (Owner) commented Apr 22, 2026

Summary

Row Level Security hardening for gbrain on Supabase. Three changes stapled together:

  • gbrain doctor RLS check widened — scans every pg_tables row in public instead of a hardcoded allowlist. Severity upgraded warn → fail; gbrain doctor now exits 1 when any public table is missing RLS.
  • Base schema + migration — ensures every gbrain-managed public table ships with RLS enabled on fresh installs, and backfills existing brains automatically on gbrain upgrade via a new v0.18.1 orchestrator.
  • Escape hatch: COMMENT ON TABLE public.<name> IS 'GBRAIN:RLS_EXEMPT reason=<why>' lets operators mark a table as intentionally anon-readable. No CLI subcommand. psql only. Deliberately painful. Exempt tables are enumerated on every doctor run.
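The exemption marker can be recognized with a small parser along these lines (a sketch: `parseExemption` is an illustrative name, not necessarily the function in doctor.ts):

```typescript
// Hypothetical sketch of how a doctor-style check might interpret a
// pg_description comment. A valid exemption requires both the
// GBRAIN:RLS_EXEMPT prefix and a non-empty reason= segment; anything
// else on a no-RLS table still fails the check.
function parseExemption(comment: string | null): { exempt: boolean; reason?: string } {
  if (!comment) return { exempt: false };
  const m = comment.match(/^GBRAIN:RLS_EXEMPT\s+reason=(.+)$/);
  if (!m || m[1].trim() === "") return { exempt: false };
  return { exempt: true, reason: m[1].trim() };
}
```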

Supersedes #336 (the original check-widening PR); that work is preserved as commit b73ddc8 at the base of this branch. Codex review during planning found additional gaps; those are folded in.

Test Coverage

| Path | Coverage |
| --- | --- |
| RLS scan with no gaps → status ok | existing e2e (exit 0 on healthy DB) |
| RLS scan with gaps → status fail + exit 1 | new e2e CLI-spawn |
| GBRAIN:RLS_EXEMPT with valid reason → exempt list | new e2e CLI-spawn |
| GBRAIN:RLS_EXEMPT missing reason= → fail | new e2e CLI-spawn |
| PGLite engine.kind === 'pglite' → skip | source-grep + manual smoke |
| Non-exempt comment on no-RLS table → fail | new e2e CLI-spawn |
| New schema migration shape (BYPASSRLS gate, RAISE EXCEPTION on non-bypass) | unit test |
| v0.18.1 orchestrator registered in migrations/index.ts | apply-migrations.test.ts |
| No hardcoded IN filter near RLS block | source-grep |
| Quoted-identifier remediation SQL | source-grep |
| Identifier " escape in remediation message | source-grep |

Test plan

  • bun run typecheck clean
  • bun test test/doctor.test.ts test/migrate.test.ts test/apply-migrations.test.ts — all green
  • Full tier-1 e2e against Docker Postgres 16 + pgvector — 154/154 pass, 77/77 in mechanical.test.ts
  • Manual: fresh Docker Postgres gbrain init → all managed public tables show rowsecurity=t
  • Manual: disable RLS on a table, run gbrain init → RLS restored via schema.sql's DO block
  • Manual: direct PGLite run → doctor reports "Skipped (PGLite — no PostgREST exposure, RLS not applicable)"

Supersedes

This PR supersedes #336. Please close #336 in favor of this.

🤖 Generated with Claude Code

Wintermute and others added 7 commits April 22, 2026 19:11

The RLS check was hardcoded to only verify 10 gbrain-managed tables:
pages, content_chunks, links, tags, raw_data, page_versions,
timeline_entries, ingest_log, config, files.

Any other table in the public schema (created by the application,
extensions, or manually) was invisible to the check. This allowed
12 tables to exist without RLS for months — publicly readable by
anyone with the Supabase anon key.

Changes:
- Query ALL tables in public schema, not a hardcoded list
- Upgrade severity from 'warn' to 'fail' — missing RLS is a security
  issue, not a suggestion
- Include table count in success message for visibility
- Include remediation SQL in failure message

Supabase exposes the public schema via PostgREST. Any table without
RLS is readable/writable by the anon key by default.
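The widened scan described above can be sketched as follows (illustrative names, not the actual doctor code): select every public row of pg_tables and partition by rowsecurity.

```typescript
// Illustrative sketch of the widened check: scan every public table
// instead of a hardcoded allowlist, then split by whether RLS is on.
const RLS_SCAN_SQL = `
  SELECT tablename, rowsecurity
  FROM pg_tables
  WHERE schemaname = 'public'
`;

interface TableRow { tablename: string; rowsecurity: boolean }

function partitionByRls(rows: TableRow[]): { ok: string[]; missing: string[] } {
  const ok: string[] = [];
  const missing: string[] = [];
  // Any table in `missing` would drive the fail status and exit code 1.
  for (const r of rows) (r.rowsecurity ? ok : missing).push(r.tablename);
  return { ok, missing };
}
```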

The base schema and prior migrations shipped 10 public tables
without Row Level Security enabled: access_tokens, mcp_request_log,
minion_inbox, minion_attachments, subagent_messages,
subagent_tool_executions, subagent_rate_leases, gbrain_cycle_locks,
budget_ledger, budget_reservations.

Supabase exposes the public schema via PostgREST, so tables without
RLS are readable and writable by anyone holding the anon key.
access_tokens and the subagent conversation history tables carry
the most sensitive data in the set.

Fix: add the missing ENABLE RLS statements to src/schema.sql
(inside the existing BYPASSRLS-gated DO block, so dev sessions
without bypass don't get locked out). Add a new schema migration
v17 rls_backfill_missing_tables that does the same on existing
brains. budget_ledger and budget_reservations were previously
migration-only (v12); promoted to the base schema so fresh installs
pick up RLS from the standard gate.

Regenerated src/core/schema-embedded.ts.
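The BYPASSRLS-gating pattern described above might look roughly like this (a sketch, not the shipped schema.sql; the single ALTER is illustrative):

```typescript
// Sketch of the BYPASSRLS-gated DO block pattern: RLS is only enabled
// when the current role can bypass it, so a dev session without
// BYPASSRLS isn't locked out of its own tables.
const RLS_GATE_SQL = `
DO $$
BEGIN
  IF EXISTS (
    SELECT 1 FROM pg_roles
    WHERE rolname = current_user AND rolbypassrls
  ) THEN
    ALTER TABLE "public"."access_tokens" ENABLE ROW LEVEL SECURITY;
    -- ...one ALTER per backfilled table...
  END IF;
END
$$;
`;
```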

…MPT escape hatch

The RLS check was hardcoded to 10 gbrain-managed tables; any other
table in the public schema (plugin-created, user-created, extension-
created) was invisible to the check. Widen the scan to every
pg_tables row in the public schema.

Upgrade severity from warn to fail. Missing RLS is a security issue, not
a suggestion. gbrain doctor now exits 1 when any public table lacks
RLS. Cron and CI wrappers that call gbrain doctor should be aware
of the exit-code flip.

Add an explicit escape hatch for tables that should stay readable
by the anon key on purpose (analytics, public materialized views,
plugin tables). The doctor reads pg_description for each non-RLS
table and treats a comment matching GBRAIN:RLS_EXEMPT reason=<why>
as an intentional exemption. Doctor enumerates exempt tables by
name on every successful run so they never go invisible.

There is no gbrain rls-exempt CLI subcommand by design. The escape
hatch is deliberately painful: operators drop to psql and type the
justification as raw SQL. Comment lives in pg_description, survives
pg_dump, shows up in schema diffs, and appears in shell history.

PGLite is now explicitly skipped with an ok status (embedded and
single-user, no PostgREST exposure). Previously hit the
db.getConnection() throw-catch path and surfaced a misleading warn.

Remediation SQL now quotes identifiers (ALTER TABLE "public"."<name>"
...) so it works on tables with hyphens, reserved words, or mixed
case.

See docs/guides/rls-and-you.md for the full user-facing guide.

Four layers of guard for the v0.18 RLS changes:

test/doctor.test.ts: source-grep structural regression guards on
the doctor RLS block — absence of the old tablename IN filter,
presence of status=fail on the gap branch, quoted-identifier
remediation SQL, PGLite skip wrapper, GBRAIN:RLS_EXEMPT parsing
with required reason=. Fast, no DB needed. Mirrors the
statement_timeout regression pattern in test/postgres-engine.test.ts.

test/migrate.test.ts: structural guard for migration v17. Asserts
the migration exists with the expected name, all 10 ALTER TABLE
statements are present, BYPASSRLS gating is in place, and
LATEST_VERSION has caught up.

test/e2e/mechanical.test.ts: rewrote the E2E RLS Verification
block. The old hardcoded-allowlist query is replaced with an
every-public-table-has-RLS assertion. Four new CLI-spawn cases
verify real end-to-end behavior: (a) no-RLS public table makes
gbrain doctor --json return status=fail with ALTER TABLE in the
message and exit code 1, (b) a GBRAIN:RLS_EXEMPT comment with a
valid reason makes doctor report the table as explicitly exempt
and keep status=ok, (c) a GBRAIN:RLS_EXEMPT prefix without a
reason= segment still fails doctor, (d) an unrelated comment on
a no-RLS table still fails doctor.

All helpers use try/finally with unique-per-run suffixes
(gbrain_rls_..._<pid>_<timestamp>) so assertion failures don't
pollute subsequent tests.
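The unique-per-run naming convention can be sketched as (helper name hypothetical, shape taken from the suffix scheme above):

```typescript
// Hypothetical helper matching the unique-per-run naming scheme used by
// the e2e fixtures: gbrain_rls_<label>_<pid>_<timestamp>, so a failed
// assertion never leaves a table name that collides with a later run.
function uniqueTestTable(label: string): string {
  return `gbrain_rls_${label}_${process.pid}_${Date.now()}`;
}
```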

Covers why RLS matters on Supabase (PostgREST exposes the public
schema to the anon key), what to do when gbrain doctor fails, the
exact SQL template for an intentional exemption, how to audit
exemptions later, and how the check behaves on PGLite vs
self-hosted Postgres.

Emphasizes that the escape hatch is deliberately painful on
purpose: there is no gbrain rls-exempt CLI subcommand and no
config-file allowlist. The operator drops to psql and writes the
justification in SQL, which makes the action visible in shell
history, pg_dump, schema diffs, and doctor output on every run.

Referenced from gbrain doctor's failure message when any public
table lacks RLS.

Reconciles VERSION and package.json (were drifting: 0.17.0 vs
0.16.4). Runtime gbrain --version reads from package.json via
src/version.ts, so prior ships were reporting 0.16.4. Both now
land on 0.18.0.

Minor bump (not patch) because gbrain doctor's exit code semantics
change: missing RLS on a public table was warn+exit-0, is now
fail+exit-1. Any external cron, CI, or skillpack-check wrapper
around gbrain doctor needs to be aware. skillpack-check.ts itself
is unaffected (uses --fast, skips DB checks).

CHANGELOG entry follows the release-summary format from CLAUDE.md:
headline, lead paragraph, numbers-that-matter table, what-this-
means-for-your-workflow, To take advantage of v0.18.0 block with
remediation SQL + exemption format, itemized changes.

Also sweeps a stale @WinterMute reference in the 0.17.0 entry to
"Garry's OpenClaw" per the CLAUDE.md privacy rule.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
# Conflicts:
#	CHANGELOG.md
#	src/core/schema-embedded.ts
#	src/schema.sql
#	test/migrate.test.ts
garrytan changed the title from "fix: v0.18.0 — RLS hardening + schema backfill (supersedes #336)" to "fix: v0.18.1 — RLS hardening + schema backfill (supersedes #336)" on Apr 22, 2026
garrytan and others added 7 commits April 22, 2026 16:48
… + identifier escape)

Four fixes from `/codex` review of the merged diff:

1. HIGH — wire migration v24 into the `gbrain apply-migrations`
   upgrade path. Without an orchestrator entry, `gbrain upgrade`'s
   post-upgrade step runs `apply-migrations --yes`, which walks the
   registry in `src/commands/migrations/index.ts`. The registry
   stopped at v0_18_0, so v24 never fired on upgrade (connectEngine
   and doctor do not call initSchema). New `v0_18_1.ts` orchestrator
   mirrors v0.18.0's Phase A: shells out to `gbrain init
   --migrate-only`, which triggers initSchema → runMigrations → v24
   applies. Registered in the migrations array.

2. HIGH — fail loudly when v24 runs under a non-BYPASSRLS role
   instead of RAISE WARNING-then-silently-bumping-version. The
   runner at migrate.ts:773 unconditionally calls
   `setConfig('version', String(m.version))` when a migration
   completes without throwing, so a WARNING-and-continue path would
   permanently lock the backfill out: schema_version=24 on the next
   run means `m.version > current` is false and v24 is skipped
   forever, even after the role gets BYPASSRLS. Changed `RAISE
   WARNING` → `RAISE EXCEPTION` so the transaction aborts,
   schema_version stays at 23, and a subsequent initSchema retries
   cleanly after the role is fixed. Test asserts the SQL uses
   EXCEPTION and does not use WARNING.

3. MEDIUM — escape double-quote characters in the remediation SQL
   output. doctor.ts was building `ALTER TABLE "public"."${n}"`
   with `n` un-escaped, so a pathological table name containing a
   literal `"` would break out of the quoted identifier and produce
   invalid copy-paste SQL. Double the `"` before interpolating,
   matching Postgres quoted-identifier escaping rules. Extremely
   rare in practice, cheap to get right.

4. LOW — CHANGELOG cleanup: corrected the upgrade-behavior claim
   (v24 runs via `apply-migrations --yes` through the new
   orchestrator, not during `gbrain doctor`) and split the "tables
   with RLS" row into two metrics (21 base-schema tables + 2
   migration-only budget_* tables = 23 managed total, all covered).
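Fix 3 amounts to standard Postgres quoted-identifier escaping; a sketch (function names are illustrative, not necessarily the doctor.ts helpers):

```typescript
// Sketch of quoted-identifier escaping for the remediation SQL: any
// embedded double quote is doubled before wrapping in quotes, per
// Postgres identifier-quoting rules, so the copy-paste SQL stays valid
// even for pathological table names.
function quoteIdent(name: string): string {
  return `"${name.replace(/"/g, '""')}"`;
}

function remediationSql(table: string): string {
  return `ALTER TABLE "public".${quoteIdent(table)} ENABLE ROW LEVEL SECURITY;`;
}
```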

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

CI-only failure: test/apply-migrations.test.ts hardcodes the
orchestrator-migration version list in two `skippedFuture` expectations.
The v0.18.1 orchestrator I added in the prior commit pushed the list to
8 entries. Both assertions now include 0.18.1 at the tail.

Caught by the gbrain CI run on the merged branch — locally the rest of
the unit suite (dream/orphans) is flaky due to unrelated PGLite
parallelism, but `bun test test/apply-migrations.test.ts` now passes
18/18. CI should follow.

Responsible-disclosure pass on the public-facing release notes. The
prior CHANGELOG entry enumerated which gbrain-managed public tables
had shipped without RLS and highlighted the most sensitive ones by
name. That gives anyone reading the CHANGELOG a directed probe list
for unpatched Supabase installs before operators have had a chance
to run `gbrain upgrade`.

Rewritten to describe the change at a functional level (what doctor
does now, what the upgrade path does, what the escape hatch is)
without naming the specific tables or quantifying the gap. The actual
SQL remains in the binary — anyone reverse-engineering can find it
there — but we shouldn't put it on the release page with a banner.

User-facing content kept intact: the "To take advantage of" block,
the upgrade commands, the exemption SQL template, the breaking
exit-code note.

Prior incident on this branch: the original v0.18.1 CHANGELOG entry
enumerated the specific public tables that had shipped without RLS,
quantified the exposure duration, and highlighted the most sensitive
ones by name. Garry caught it. Scrubbed in ecd06a0.

This directive codifies the rule so future sessions (or other agents
working in this repo) don't repeat the mistake:

- Describe security fixes functionally, not by attack surface.
- Public artifacts (CHANGELOG, README, docs/, PR titles/bodies,
  commit messages, release pages) get the functional description.
- Private artifacts (plan files under ~/.claude/plans/ or
  ~/.gstack/projects/) keep the detailed before/after tables.
- Source code will disclose the specifics to reverse engineers
  anyway — that's intrinsic. The concern is the broadcast-channel
  asymmetry of a release page.

Also added a corresponding feedback memory at
~/.claude/projects/.../feedback_responsible_disclosure.md so the rule
carries across sessions and other projects, not just gbrain.

Placed right after the existing privacy rule (scrub real names) since
they share the same "public artifact hygiene" posture.

Adding the responsible-disclosure rule to CLAUDE.md in ffe340d
diverged the committed llms-full.txt from the generator output.
The build-llms drift-guard test caught it in CI. Regenerated.

Garry flagged: migration v24 fires `ALTER TABLE budget_ledger ENABLE
ROW LEVEL SECURITY` unconditionally. budget_ledger and
budget_reservations are migration-only (v12) — not in schema.sql,
not re-created on every initSchema. In the normal flow v12 runs
before v24 so they exist, but two edge cases break that assumption:

  1. An operator manually dropped them (budget data is regenerable
     from resolver call logs, so `DROP TABLE` is a reasonable
     cleanup move).
  2. A brain was somehow running an old gbrain that lacked v12, and
     is only catching up now.

Bare ALTER hits 42P01 (relation does not exist), aborts the
transaction, and leaves schema_version at 23. On next initSchema,
v24 retries and hits the same error — stuck in a loop.

Fix: wrap each of the two budget ALTERs in
    IF EXISTS (SELECT 1 FROM information_schema.tables
                WHERE table_schema = 'public'
                  AND table_name = '<tbl>') THEN ... END IF;

The other 8 tables are not guarded. schema.sql creates them
idempotently on every initSchema run before migrations fire, so
they are guaranteed to exist by the time v24 runs. Adding guards
there would be unnecessary and make the SQL noisier.

Also simplified the DECLARE/BEGIN structure: moved the
non-BYPASSRLS early-exit to the top so the happy path reads
cleanly without the outer IF.
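Putting the early exit and the guard together, the resulting v24 shape is roughly the following (a sketch; the shipped migration may differ in detail):

```typescript
// Sketch of the v24 migration shape described above: non-BYPASSRLS
// early exit via RAISE EXCEPTION (so schema_version is never bumped on
// failure), then existence-guarded ALTERs for the two migration-only
// budget_* tables.
const V24_SKETCH = `
DO $$
BEGIN
  IF NOT EXISTS (
    SELECT 1 FROM pg_roles
    WHERE rolname = current_user AND rolbypassrls
  ) THEN
    RAISE EXCEPTION 'v24 rls_backfill requires a BYPASSRLS role';
  END IF;

  IF EXISTS (SELECT 1 FROM information_schema.tables
             WHERE table_schema = 'public'
               AND table_name = 'budget_ledger') THEN
    ALTER TABLE "public"."budget_ledger" ENABLE ROW LEVEL SECURITY;
  END IF;
  -- same IF EXISTS guard for budget_reservations...
END
$$;
`;
```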

Tests:
  - test/migrate.test.ts: new assertion that both budget_* ALTERs
    are wrapped in information_schema.tables IF EXISTS blocks;
    BYPASSRLS gate assertion relaxed to match either phrasing.
  - Manual e2e: fresh Postgres init (v0→v24), then DROP TABLE
    budget_ledger + budget_reservations, reset version=23, re-run
    init. v24 applied cleanly, version advanced to 24, budget_*
    stayed dropped. Without the guard this would have errored out.

Behavioral e2e proof for the IF EXISTS guard added in 2fc7780. Scenario:

  1. Fresh Postgres init to v24 (setupDB in beforeAll).
  2. DROP TABLE budget_ledger + budget_reservations.
  3. Roll config.version back to '23'.
  4. CLI-spawn `gbrain init --non-interactive` to re-trigger initSchema.
  5. Assert: exit 0, no 42P01 in stderr, version advances to 24,
     budget_* stay dropped (since v12 doesn't re-run at
     current=23 > v12=12).

Without the guard, step 4 hits 42P01 (relation does not exist),
aborts the transaction, leaves version at 23, and the next
initSchema re-runs v24 forever — an infinite retry loop. This test
catches any future regression that strips the guard.

Cleanup (finally block) restores budget_* with the exact migration
v12 schema so downstream tests that reference these tables see the
original shape. Version is restored from the pre-test snapshot.

Runs with the rest of the "E2E: RLS Verification" block. 78/78 in
test/e2e/mechanical.test.ts with the addition.
@garrytan garrytan merged commit 2751581 into master Apr 23, 2026
4 checks passed
garrytan added a commit that referenced this pull request Apr 23, 2026
Pulls upstream v0.16.1–v0.18.1: minions worker deploy guide (#287/#317),
subagent Anthropic SDK fix + tsc CI gate (#318), check-resolvable CLI
(#325), dream + runCycle primitive (#321), multi-source brains with
federation + dotfile resolution (#337), RLS hardening + schema backfill
(#343). Test count grows 2000 → 2354.

Conflicts resolved:
- VERSION — kept 0.19.0; upstream is 0.18.1
- package.json — v0.19.0 wins
- CHANGELOG.md — v0.19.0 preserved above upstream's v0.18.1/v0.18.0/v0.17.0/v0.16.x
- src/cli.ts — CLI_ONLY merges `agent`, `providers`, and upstream's new `sources`, `dream`, `check-resolvable`
- src/core/config.ts — merged: kept embedding_model / embedding_dimensions /
  expansion_model / provider_base_urls (mine) + storage (upstream)

Build clean: 948 modules, ~165ms compile, 0.19.0 binary runs. Typecheck green.
18 flaky failures in `bun test` are all PGLite shared-state timeouts in
setup hooks — every failing file passes cleanly in isolation (dream 11/0,
orphans 35/0, check-update 20/0). Pre-existing infra, not introduced by
this merge.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
garrytan added a commit that referenced this pull request Apr 23, 2026
CI Tier 1 (Mechanical) was failing on 4 E2E tests after the v0.18.1 RLS
hardening landed on master (PR #343). Our v25 oauth_infrastructure migration
adds 3 new public tables (oauth_clients, oauth_tokens, oauth_codes) but
didn't enable RLS, so gbrain doctor's new check flagged them and the
"RLS on every public table" assertion failed.

Fixes:
- src/schema.sql: ALTER TABLE ... ENABLE ROW LEVEL SECURITY for the 3 OAuth
  tables inside the existing BYPASSRLS-gated DO block (fresh installs).
- src/core/migrate.ts v25: append a BYPASSRLS-gated DO block after the OAuth
  CREATE TABLE statements (existing installs on upgrade). Mirrors the v24
  rls_backfill gating pattern — RAISE WARNING if the current role lacks
  BYPASSRLS, so migrations don't silently lock the operator out.
- src/core/schema-embedded.ts: regenerated via `bun run build:schema`.
- test/e2e/mechanical.test.ts: one unrelated v24 test asserted the post-
  migration version equals exactly '24'. That breaks when any later
  migration exists (like our v25). Relaxed to `>= 24` since the test's
  intent is "v24 didn't abort the chain", not "v24 is the final version".

Verified locally: 78/78 E2E tests pass against real Postgres 16 + pgvector.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
garrytan added a commit that referenced this pull request May 3, 2026
…oard (#358)

* feat: OAuth 2.1 schema tables + shared token utilities

Add oauth_clients, oauth_tokens, oauth_codes tables to both PGLite and
Postgres schemas. Migration v5 creates tables for existing databases.
PGLite now includes auth infrastructure (access_tokens, mcp_request_log,
OAuth tables) because `serve --http` makes it network-accessible.

Extract hashToken() and generateToken() to src/core/utils.ts for DRY
reuse across auth.ts and oauth-provider.ts.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
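The extracted helpers might look like this (signatures assumed, not taken from src/core/utils.ts): tokens are random, and only their SHA-256 digest is ever stored.

```typescript
import { createHash, randomBytes } from "node:crypto";

// Sketch of the shared token utilities described above: generate a
// URL-safe random token, and hash it so the plaintext never reaches
// the database.
function generateToken(bytes = 32): string {
  return randomBytes(bytes).toString("base64url");
}

function hashToken(token: string): string {
  return createHash("sha256").update(token).digest("hex");
}
```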

* feat: GBrainOAuthProvider — MCP SDK OAuthServerProvider implementation

Implements OAuthServerProvider backed by raw SQL (PGLite or Postgres).
Supports client credentials, authorization code with PKCE, token refresh
with rotation, revocation, and legacy access_tokens fallback.

Key decisions from eng review:
- Uses raw SQL connection, not BrainEngine (OAuth is infrastructure)
- All tokens/secrets SHA-256 hashed before storage
- Legacy tokens grandfathered as read+write+admin
- sweepExpiredTokens() wrapped in try/catch (non-blocking startup)
- Client credentials: no refresh token per RFC 6749 4.4.3

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: scope + localOnly annotations on all 30 operations

Add AuthInfo, scope ('read'|'write'|'admin'), and localOnly fields to
Operation interface. Per-operation audit:
- 14 read ops, 9 write ops, 2 admin ops, 4 admin+localOnly ops
- sync_brain, file_upload, file_list, file_url: admin + localOnly
- Scope enforcement happens in serve-http.ts before handler dispatch

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
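The per-operation gate described above can be sketched as follows, assuming a simple read < write < admin ordering plus a localOnly flag that excludes an operation from HTTP entirely (names are assumptions, not the shipped Operation interface):

```typescript
type Scope = "read" | "write" | "admin";

// Illustrative scope ranks under the assumed ordering.
const RANK: Record<Scope, number> = { read: 0, write: 1, admin: 2 };

// Sketch: an operation dispatches only if (a) it isn't localOnly over
// HTTP and (b) the best granted scope covers the operation's scope.
function canDispatch(
  op: { scope: Scope; localOnly?: boolean },
  granted: Scope[],
  overHttp: boolean,
): boolean {
  if (overHttp && op.localOnly) return false;
  const best = Math.max(-1, ...granted.map((s) => RANK[s]));
  return best >= RANK[op.scope];
}
```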

* feat: HTTP MCP server with OAuth 2.1 + 27 OAuth tests

gbrain serve --http starts Express 5 server with:
- MCP SDK mcpAuthRouter (authorize, token, register, revoke endpoints)
- Custom client_credentials handler (SDK doesn't support CC grant)
- Bearer auth + scope enforcement on /mcp tool calls
- Admin dashboard auth via HTTP-only cookie + bootstrap token
- SSE live activity feed at /admin/events
- DCR default OFF (--enable-dcr to enable)
- Rate limiting on /token (50/15min)
- localOnly operations excluded from HTTP

CLI: gbrain serve --http [--port 3131] [--token-ttl 3600] [--enable-dcr]

Dependencies: express@5.2.1, express-rate-limit@7.5.1, cors@2.8.6
SDK pinned to exact 1.29.0 (was ^1.0.0)

27 new tests covering OAuth provider, scope enforcement, auth code flow,
refresh rotation, token revocation, legacy fallback, and sweep.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: React admin dashboard — 7 screens, dark theme, Krug-designed

Admin SPA at /admin with client-side routing (#login, #dashboard,
#agents, #log). Built with Vite + React, served from admin/dist/.

Screens:
- Login: one field, one button, zero happy talk
- Dashboard: metrics bar, SSE live activity feed, token health panel
- Agents: table with scopes/badges, + Register Agent button
- Register: modal form (name, scopes), 3 mindless choices
- Credentials: full-screen modal, copy buttons, download JSON, warning
- Request Log: paginated table (50/page), time-relative timestamps
- Agent Detail: slide-out drawer, config export tabs (Perplexity/Claude/JSON)

Design tokens: #0a0a0f bg, Inter + JetBrains Mono, 4-32px spacing.
Build: bun run build:admin (Vite, 65KB gzipped).
Admin API: /admin/api/register-client endpoint for dashboard registration.
SPA serving: Express static + index.html fallback for client-side routing.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: add admin SPA lockfile

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: bump version and changelog (v1.0.0.0)

Milestone release: multi-agent GBrain with OAuth 2.1, HTTP server,
and React admin dashboard. See CHANGELOG.md for details.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* docs: update project documentation for v1.0.0.0

Sync README, CLAUDE.md, and docs/mcp/ with the OAuth 2.1 + HTTP server
+ admin dashboard surface that shipped in v1.0.0.0.

- README.md: new "Remote MCP with OAuth 2.1" section covering
  gbrain serve --http, admin dashboard, scoped operations, legacy
  bearer fallback; add serve --http + auth notes to the commands
  reference.
- CLAUDE.md: add src/commands/serve-http.ts, src/core/oauth-provider.ts,
  admin/ directory as key files; document scope + localOnly additions
  to Operation contract; add oauth.test.ts (27 cases) to the test list;
  add v1.0.0 key-commands section clarifying that OAuth client
  registration is via the /admin dashboard or SDK (no CLI subcommand).
- docs/mcp/DEPLOY.md: promote --http as the recommended remote path,
  add OAuth 2.1 Setup section, list ChatGPT in supported clients,
  remove the "not yet implemented" footer.
- docs/mcp/CHATGPT.md (new): unblocks the P0 TODO. Full ChatGPT
  connector setup via OAuth 2.1 + PKCE.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: wire gbrain auth subcommand with OAuth register-client

Previously auth.ts was a standalone script invoked via
`bun run src/commands/auth.ts`. CHANGELOG and README documented
`gbrain auth ...` commands that didn't actually work.

- Export `runAuth(args)` from auth.ts (keeps standalone entry intact
  via `import.meta.url === file://${process.argv[1]}` check)
- Add `auth` to CLI_ONLY + dispatch in handleCliOnly
- New subcommand `gbrain auth register-client <name> [--grant-types]
  [--scopes]` wraps GBrainOAuthProvider.registerClientManual
- Lazy DB check: only subcommands that need DATABASE_URL error out

Now the documented CLI flow works end to end:
  gbrain auth register-client perplexity --grant-types client_credentials --scopes "read write"
  gbrain serve --http --port 3131

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* docs: reflect wired gbrain auth register-client CLI

After /ship, the doc subagent wrote docs assuming `gbrain auth
register-client` did not exist (it said so explicitly in CLAUDE.md:184).
A follow-up commit (c4a86ce) wired it into src/cli.ts + src/commands/auth.ts.
These docs were now contradicting reality.

- CLAUDE.md: removed "There is no gbrain auth register-client CLI
  subcommand" claim, documented the three registration paths
  (CLI / dashboard / SDK).
- README.md: replaced `bun run src/commands/auth.ts` hint with
  `gbrain auth create|list|revoke|test` and `gbrain auth register-client`.
- docs/mcp/DEPLOY.md: added CLI registration example above the
  programmatic example.
- TODOS.md: moved "ChatGPT MCP support (OAuth 2.1)" P0 item to
  Completed with v1.0.0.0 completion note. Closes the P0 that had been
  blocking the "every AI client" promise since v0.6.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix: enable RLS on OAuth tables + loosen v24-exact test assertion

CI Tier 1 (Mechanical) was failing on 4 E2E tests after the v0.18.1 RLS
hardening landed on master (PR #343). Our v25 oauth_infrastructure migration
adds 3 new public tables (oauth_clients, oauth_tokens, oauth_codes) but
didn't enable RLS, so gbrain doctor's new check flagged them and the
"RLS on every public table" assertion failed.

Fixes:
- src/schema.sql: ALTER TABLE ... ENABLE ROW LEVEL SECURITY for the 3 OAuth
  tables inside the existing BYPASSRLS-gated DO block (fresh installs).
- src/core/migrate.ts v25: append a BYPASSRLS-gated DO block after the OAuth
  CREATE TABLE statements (existing installs on upgrade). Mirrors the v24
  rls_backfill gating pattern — RAISE WARNING if the current role lacks
  BYPASSRLS, so migrations don't silently lock the operator out.
- src/core/schema-embedded.ts: regenerated via `bun run build:schema`.
- test/e2e/mechanical.test.ts: one unrelated v24 test asserted the post-
  migration version equals exactly '24'. That breaks when any later
  migration exists (like our v25). Relaxed to `>= 24` since the test's
  intent is "v24 didn't abort the chain", not "v24 is the final version".

Verified locally: 78/78 E2E tests pass against real Postgres 16 + pgvector.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* chore: regenerate llms-full.txt for v1.0.0 docs

CI test/build-llms.test.ts > committed llms.txt + llms-full.txt match
current generator output failed. The committed llms-full.txt was built
before the v1.0.0 doc updates landed (OAuth 2.1 README section, new
docs/mcp/CHATGPT.md, CLAUDE.md serve-http references, etc.), so the
regen-drift guard flagged it.

Ran `bun run build:llms`. llms.txt is unchanged (skinny index still
matches); llms-full.txt picks up 166 net-new lines of bundled content.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* connected-gbrains PR 0 — minimal runtime (mounts, registry, aggregated RESOLVER) (#372)

* feat(mounts): connected-gbrains PR 0 foundation — registry + resolver + CLI

Lays the foundation for connected gbrains (v0.19.0) per the approved plan.
This is PR 0 — minimal runtime for direct-transport, path-mounted brains.

What this slice ships:
- src/core/brain-registry.ts — keyed BrainRegistry with lazy engine init,
  schema-validated mounts.json loader, DuplicateMountPathError (load-bearing
  identity check per Codex finding #9 correction), UnknownBrainError with
  actionable available-id list. Pure: no AsyncLocalStorage, no singleton
  mutation. ~280 LOC.

- src/core/brain-resolver.ts — 6-tier brain-id resolution mirroring
  v0.18.0's source-resolver.ts so agents learn ONE mental model:
    1. --brain <id>
    2. GBRAIN_BRAIN_ID env
    3. .gbrain-mount dotfile
    4. longest-path match over registered mounts
    5. (reserved v2 default)
    6. 'host' fallback
  Orthogonal to --source: --brain picks which DB, --source picks the repo
  within that DB. Corruption-resistant: mounts.json load failures fall
  through to 'host' instead of breaking every CLI invocation.

- src/commands/mounts.ts — `gbrain mounts add|list|remove` (direct transport
  only). Validates on add (path exists on disk, id regex, no dupes). WARNS
  but does not block on same db_url/db_path across ids (teams may
  legitimately alias a remote brain). Password redaction in list output.
  Atomic write via temp+rename. 0600 perms. PR 1 adds pin/sync/enable;
  PR 2 adds --mcp-url + OAuth.

- src/cli.ts — wires `gbrain mounts` into handleCliOnly (no DB required
  for the config-only subcommands).

- test/brain-registry.test.ts (28 cases): schema validation across every
  malformed-input branch, ALS-free resolution, duplicate id + path detection,
  disabled-mount exclusion, UnknownBrainError context.

- test/brain-resolver.test.ts (22 cases): priority order (explicit > env >
  dotfile > path-prefix > fallback), dotfile walk-up, malformed dotfile
  recovery, longest-prefix match, sibling-path false-positive guard,
  loader-failure defense.

- test/mounts-cli.test.ts (17 cases): parseAddArgs surface, redactUrl,
  atomic write, add/list/remove roundtrip via temp HOME.

67 new tests, all green. Typecheck clean. Depends on mcp-key-mgmt (base
branch) for the OAuth/scope annotations that PR 2 will leverage.

Next in this branch: PR 0 still needs (a) the deep host-brain-bias audit
(postgres-engine internal singleton fallback + a few operations.ts
callers), (b) OperationContext threading to make ctx.brainId populated at
dispatch, (c) composeResolvers + composeManifests, (d) aggregated
~/.gbrain/mounts-cache/ for host-agent runtime ownership.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
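The 6-tier resolution order above reduces to a simple precedence chain; a sketch (all names illustrative, not brain-resolver.ts; tier 5 is reserved and omitted):

```typescript
// Sketch of the 6-tier brain-id resolution priority: explicit flag >
// env var > dotfile > longest mount-path prefix > (reserved) > 'host'.
interface ResolveInput {
  cliFlag?: string;   // --brain <id>
  env?: string;       // GBRAIN_BRAIN_ID
  dotfile?: string;   // nearest .gbrain-mount, walking up
  pathMatch?: string; // longest registered mount-path prefix
}

function resolveBrainId(i: ResolveInput): string {
  return i.cliFlag ?? i.env ?? i.dotfile ?? i.pathMatch ?? "host";
}
```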

* docs(mounts): brains-and-sources mental model + agent routing convention

Two orthogonal axes organize GBrain knowledge. Users AND agents need to
understand both, or queries misroute silently.

  --brain  → WHICH DATABASE    (host + mounts)
  --source → WHICH REPO IN DB  (v0.18.0 sources: wiki, gstack, ...)

Both axes use the same 6-tier resolution (explicit > env > dotfile >
path-prefix > default > fallback), so learning one teaches both.

Ships:

- docs/architecture/brains-and-sources.md — canonical mental model doc.
  Covers four topologies with ASCII diagrams:
    1. Single-person developer (one brain, one source)
    2. Personal brain with multiple repos (one brain, N sources)
    3. Personal + one team brain mount (2 brains)
    4. Senior user with multiple team memberships (N mounted team brains
       alongside personal) — the CEO-class topology
  Explicit "when to move each axis" decision table. Generic example names
  throughout per the project's privacy rule.

- skills/conventions/brain-routing.md — agent-facing decision table.
  Rules for when to switch brain (team-owned question, explicit name,
  data owner changes) vs switch source (working in a repo, topic scoped
  to one repo). Cross-brain federation is latent-space only in v0.19 —
  the agent fans out; the DB never does. Anti-patterns listed: silent
  brain jumps, writing to host when data is team-owned, missing brain
  prefix in citations, ignoring .gbrain-mount dotfiles.

- CLAUDE.md — adds "Two organizational axes (read this first)" section
  at the top pointing at both new docs.

- AGENTS.md — adds brains-and-sources.md + brain-routing.md to the
  "read this order" (positions 3 and 4, before RESOLVER.md).

- skills/RESOLVER.md — adds brain-routing.md to the Conventions section
  so it appears alongside quality.md, brain-first.md, subagent-routing.md.

No code changes. Pre-existing check-resolvable warnings unchanged (2
warnings on base unrelated to this work). 67 PR-0 tests still green.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(mounts): thread brainId through OperationContext + subagent chain

PR 0 plumbing for connected gbrains. Adds an optional brainId field that
identifies which database an operation targets and ensures subagents
inherit the parent job's brain instead of process-wide defaults. No
dispatch-path changes in this commit — that is PR 1 (registry wiring at
MCP + CLI entry points). The fields exist so callers can set them now
and downstream code respects them.

Changes:

- src/core/operations.ts: OperationContext grows `brainId?: string`.
  Optional for back-compat. 'host' is the implicit default when absent.
  Orthogonal to v0.18.0's source_id (source = which repo within the
  brain, brain = which database). See docs/architecture/brains-and-sources.md.

- src/core/minions/types.ts: SubagentHandlerData gains `brain_id?: string`.
  Parent jobs set this when submitting a child subagent to lock the
  child into a specific brain. Omitted = host (unchanged behavior).

- src/core/minions/handlers/subagent.ts: buildBrainTools call site
  reads data.brain_id and passes it through. Child subagents spawned
  from this handler will see the same brainId unless they override in
  their own data.

- src/core/minions/tools/brain-allowlist.ts: BuildBrainToolsOpts +
  OpContextDeps grow brainId; buildOpContext stamps it on every
  OperationContext the subagent builds for tool calls. Addresses Codex
  finding #6 (brain-allowlist hardwired parent config without brain
  awareness, so switching brain only in subagent.ts was not enough).
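
A minimal sketch of the plumbing, assuming simplified shapes for the types named above (the real src/core definitions carry more fields):

```typescript
// Assumed shapes; field names mirror the commit text.
interface OperationContext {
  brainId?: string;  // which database; absent means the host brain
  sourceId?: string; // orthogonal: which repo within the brain (v0.18.0)
}

interface SubagentHandlerData {
  brain_id?: string; // parent job sets this to lock the child into a brain
}

// What buildOpContext now does: stamp the brain on every context the
// subagent builds for tool calls, defaulting to host when omitted.
function buildOpContext(data: SubagentHandlerData): OperationContext {
  return { brainId: data.brain_id ?? "host" };
}
```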

Tests: 166 affected tests green (subagent suite + minions + brain
registry + resolver). Typecheck clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(mounts): composeResolvers + composeManifests + aggregated cache

The runtime ownership seam for connected gbrains (Codex finding #3 from
plan review): check-resolvable.ts VALIDATES RESOLVER.md; it does not
DISPATCH skills. Host agents (Wintermute/OpenClaw/Claude Code) read
skills/RESOLVER.md directly to route user requests. Without an aggregated
resolver, mounted team brains cannot contribute skills to the host
agent's routing table.

This commit adds the aggregation:

- src/core/mounts-cache.ts (NEW): pure composeResolvers + composeManifests
  functions plus filesystem writers for ~/.gbrain/mounts-cache/. The
  aggregated files carry every host skill plus every mount skill,
  namespace-prefixed (e.g. `yc-media::ingest`). Host skills always beat
  a same-named mount skill (locked decision 1); bare-name collisions
  between two mounts surface as structured ambiguity info so doctor can
  warn (PR 1).

  Also addresses Codex finding #8: manifests compose alongside the
  resolver, else doctor conformance breaks on remote skills.

- src/commands/mounts.ts: refreshMountsCache() called on `mounts add`
  and `mounts remove` (the latter clearing the cache entirely when the
  last mount goes away). Uses findRepoRoot() to locate the host skills
  dir; skips with a stderr note when run outside a gbrain repo so the
  user isn't confused by a "cache not refreshed" error in the wrong
  cwd.

- test/mounts-cache.test.ts (NEW): 23 unit tests covering empty world,
  host-only, single mount, two-mount ambiguity, host-shadows-mount,
  disabled mount excluded, missing RESOLVER.md is a no-op, manifest
  composition with same-name collision, render shape, atomic rewrite,
  clear on missing dir.

Output format for ~/.gbrain/mounts-cache/RESOLVER.md adds a Brain column
so host agents can see which brain each trigger routes to at a glance,
plus Shadows and Ambiguous sections when those conditions exist.

Tests: 90 PR 0 tests green (brain-registry + resolver + mounts-cache +
mounts-cli). Full suite regression pending in task 11.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(mounts): force instance-level pool for mount brains + CI guard

Closes the silent-singleton-share bug Codex flagged as finding #1 from
the plan review: two direct-transport mounts with different Postgres
URLs would both fall through postgres-engine.ts's `get sql()` getter to
db.getConnection() and quietly share whichever singleton connected
first. Your yc-media writes end up in garrys-list or vice versa. No
error at the call site — just wrong data.

The fix:

- src/core/brain-registry.ts: initMountBrain now passes poolSize when
  calling engine.connect(). That forces postgres-engine.ts:33-60 down
  the instance-level path (setting this._sql) instead of the module
  singleton path (calling db.connect). Hard-coded 5 for PR 0 — per-mount
  override is PR 1. PGLite ignores poolSize (no pool concept), so this
  is Postgres-specific.

  Host brain still uses the singleton path via initHostBrain (unchanged).
  That is fine for PR 0: the singleton is "the host's one connection"
  by definition. PR 1 removes the singleton entirely once every CLI
  command is engine-injectable.

- scripts/check-no-legacy-getconnection.sh (NEW): CI grep guard against
  new db.getConnection() / db.connect() calls landing in src/core/ or
  src/commands/ (the multi-brain dispatch surface). Has an explicit
  ALLOWED list grandfathering today's legitimate callers, each marked
  "PR 1 refactors" so the list shrinks over time. Skips comment lines
  so the grep doesn't trip on doc references to the old pattern.

- package.json: scripts.test chains the new guard after the existing
  check-jsonb-pattern + check-progress-to-stdout guards. `bun run test`
  now fails the build on singleton regression.
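
The guard's core idea can be sketched as follows; the real script additionally carries the ALLOWED grandfather list, and the exact patterns and paths here are assumptions:

```shell
# Sketch: flag any non-comment line in the dispatch surface that calls the
# legacy singleton helpers. The comment filter drops lines whose match sits
# after "//" or "*" so doc references to the old pattern don't trip it.
hits=$(grep -rnE 'db\.(getConnection|connect)\(' src/core src/commands 2>/dev/null \
  | grep -vE ':[[:space:]]*(//|\*)' || true)
if [ -n "$hits" ]; then
  printf 'fail: new singleton callers:\n%s\n' "$hits" >&2
  exit 1
fi
echo "ok: no new singleton callers"
```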

Tests: 295 affected pass (registry, resolver, mounts-cache, mounts-cli,
minions, pglite-engine). Typecheck clean. CI guard reports "ok: no new
singleton callers" on current tree.

Left for PR 1: remove the singleton fallback in postgres-engine.ts's
`get sql()` entirely; refactor src/commands/doctor.ts, files.ts,
repair-jsonb.ts, serve-http.ts, init.ts, and the 3 localOnly ops in
operations.ts (file_list, file_upload, file_url) to accept ctx.engine
explicitly.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(mounts): codex review findings — namespace survives shadow + atomic tmp names + honest PR 0 docstrings

Codex outside-voice review on PR #372 found 5 issues. The real bugs are fixed
and the overclaims rewritten. Details:

P2 (real bug): composeResolvers and composeManifests were silently dropping
mount entries when a host skill shared the short name, which made the
namespace-qualified form `<mount>::<skill>` unreachable once host defined
the same short name. That defeated the entire namespace-disambiguation
model — if host had `ingest`, no mount could ship an `ingest` skill even
with explicit `yc-media::ingest`. Fix: always keep namespace-qualified
mount entries in the composed output. Shadow tracking moves to metadata
(`shadows[]`) that doctor can warn on, but never drops routing.

  Before:  host ingest + yc-media ingest → only 1 entry (host), yc-media::ingest unreachable
  After:   host ingest + yc-media ingest → 2 entries: bare `ingest` = host, `yc-media::ingest` = mount
  Verified live: gbrain mounts add of a mount with `ingest` now shows
  `team-demo::ingest` alongside host `ingest` in the aggregated manifest.
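
The corrected merge semantics as a self-contained sketch; the shapes and names here are illustrative, not the real mounts-cache.ts API:

```typescript
interface SkillEntry { name: string; brain: string }

// Host wins the bare name; the namespace-qualified mount entry ALWAYS
// survives; bare-name collisions between two mounts become ambiguity
// metadata rather than silent drops.
function composeResolvers(
  host: string[],
  mounts: Record<string, string[]>, // mount name -> skill short names
): { entries: SkillEntry[]; shadows: string[]; ambiguous: string[] } {
  const entries: SkillEntry[] = host.map((name) => ({ name, brain: "host" }));
  const shadows: string[] = [];
  const ambiguous: string[] = [];
  const bareOwner = new Map<string, string>(host.map((n) => [n, "host"]));
  for (const [mount, skills] of Object.entries(mounts)) {
    for (const skill of skills) {
      entries.push({ name: `${mount}::${skill}`, brain: mount }); // always reachable
      const owner = bareOwner.get(skill);
      if (owner === "host") shadows.push(`${mount}::${skill}`);   // metadata, not a drop
      else if (owner) ambiguous.push(skill);                      // two mounts collide
      else {
        bareOwner.set(skill, mount);
        entries.push({ name: skill, brain: mount });              // bare name free
      }
    }
  }
  return { entries, shadows, ambiguous };
}
```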

P1 (real bug): writeMountsFile + writeMountsCache used fixed `.tmp`
filenames. Two concurrent `gbrain mounts add` invocations (e.g. from
parallel terminals or CI) would clobber each other's temp file and
one writer's update would be lost. Fix: tmp filenames include
`process.pid + random suffix` so every writer has its own scratch file.
The atomic rename is self-contained per-writer. (Full lock + read-modify-
write safety deferred to PR 1 under `gbrain mounts sync --lock`.)
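
The per-writer scratch-file pattern as a minimal sketch; the real writeMountsFile/writeMountsCache do more than this:

```typescript
import { writeFileSync, renameSync } from "node:fs";
import { randomBytes } from "node:crypto";
import { join, dirname, basename } from "node:path";

// Each writer gets a unique scratch file (pid + random suffix), so two
// concurrent invocations can no longer clobber each other's temp file.
// The rename is atomic per-writer on POSIX within one filesystem.
function atomicWrite(path: string, contents: string): void {
  const tmp = join(
    dirname(path),
    `.${basename(path)}.${process.pid}.${randomBytes(4).toString("hex")}.tmp`,
  );
  writeFileSync(tmp, contents);
  renameSync(tmp, path);
}
```

Last rename still wins under true concurrency, but every completed write is a consistent whole file rather than an interleaved mess.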

P1 (honesty): `SubagentHandlerData.brain_id` +
`BuildBrainToolsOpts.brainId` docstrings claimed child jobs inherit the
parent's brain and brain tools target the resolved brain. True for the
`ctx.brainId` field only — `ctx.engine` is still the worker's base
engine at dispatch time because `buildOpContext` doesn't yet do the
registry lookup, and `gbrain agent run` doesn't yet accept `--brain` to
populate the field on submission. Rewrote both docstrings to state the
PR 0 behavior explicitly (field plumbed, engine routing is PR 1) so
nobody reads the code thinking multi-brain subagents already work.

Also cleaned up two `require('fs')` runtime imports left over from the
initial PR — swapped for ESM named imports (renameSync). Pre-existing
style issue surfaced by the self-review pass.

Tests: 90 PR-0 tests pass. Updated two shadow-related test cases to
assert the corrected semantics (both entries survive, host wins bare
name, namespace form routes to mount).

Not fixed in this commit (documented as known PR 0 limitations):
- `file_list` / `file_upload` / `file_url` in operations.ts still hit the
  singleton (localOnly + admin, never reachable from HTTP MCP — safe in
  practice, refactor in PR 1 alongside command-level cleanups).
- writeMountsCache's two-file swap (RESOLVER.md + manifest.json) is not
  atomic across files; readers can briefly observe mismatched pairs.
  Acceptable because the cache is recomputable at any time from
  mounts.json. Generation-directory swap is PR 1 work.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(tests): bump hook timeouts for 21-migration PGLite init under full-suite load

Root cause of 19 pre-existing full-suite flakes (CHANGELOG v0.18.0 noted
"17 pre-existing master timeouts"): every PGLite test does

  beforeAll/beforeEach(async () => {
    engine = new PGLiteEngine();
    await engine.connect({});
    await engine.initSchema();  // runs 21 migrations through v0.18.2
  });

In isolation this takes ~5s. Under full-suite contention (128 files,
process-shared FS and CPU) it exceeds bun's default 5000ms hook timeout,
beforeEach times out, engine stays undefined, then afterEach crashes
with `TypeError: undefined is not an object (evaluating 'engine.disconnect')`.
That single hook failure reports as the whole test "failing" even though
the test body never executed, which is why the failure count sometimes
looked inflated compared to the number of genuinely-broken tests.

Fix applied across 7 test files:

- Raise setup hook timeout to 30_000 (6x the default) — gives migration
  init enough headroom even under worst-case load without masking real
  regressions in a post-migration test.
- Raise teardown hook timeout to 15_000 — engine.disconnect() is usually
  fast but can stall when PGLite's WASM runtime is still completing a
  migration at shutdown.
- Add `if (engine) await engine.disconnect()` guard so afterEach doesn't
  double-fault when beforeEach already failed. This was the source of
  the opaque "(unnamed)" failures — they were disconnect crashes,
  not test-body failures.
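
The null-guard in isolation can be sketched as a pure helper; the hypothetical `safeDisconnect` name and `Disconnectable` shape are illustrative, since the real hooks inline this check next to the raised timeouts:

```typescript
interface Disconnectable { disconnect(): Promise<void> }

// Never double-fault in teardown: if setup timed out, `engine` is still
// undefined and there is nothing to disconnect.
async function safeDisconnect(engine: Disconnectable | undefined): Promise<string> {
  if (!engine) return "skipped";
  await engine.disconnect();
  return "disconnected";
}
```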

Files:
  test/dream.test.ts                (5 beforeEach + 5 afterEach blocks)
  test/orphans.test.ts              (1 pair)
  test/brain-allowlist.test.ts      (1 pair)
  test/oauth.test.ts                (1 pair)
  test/extract-db.test.ts           (1 pair)
  test/multi-source-integration.test.ts (1 pair)
  test/core/cycle.test.ts           (1 pair)

Results on the merged PR 0 branch:
  Before: 2175 pass / 20 fail / 3 errors
  After:  2281 pass /  0 fail / 0 errors    (+106 tests running that
                                             were previously blocked
                                             by the timed-out hooks)

No changes to production code. No test assertions changed. Just
timeout-bump + null-guard discipline that should have been in these
hooks from the start. The real longer-term fix is reusing an engine
across tests where possible (brain-allowlist.test.ts already does this
via beforeAll+DELETE-pages pattern), but that's per-file structural
work — out of scope for this cleanup.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: regenerate llms-full.txt for brains-and-sources + brain-routing docs

The test/build-llms.test.ts test validates that the committed llms.txt
and llms-full.txt match the current generator output. PR 0 added
docs/architecture/brains-and-sources.md content paths and updated
CLAUDE.md + skills/RESOLVER.md in earlier commits, but the generated
bundle file wasn't regenerated alongside. This caused one of the 20
fails we chased down today — a straight content mismatch, not a runtime
bug. Running `bun run build:llms` picks up the new section content so
the bundle matches the sources again.

No functional change. Only the compiled doc bundle.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* Bump version 1.0.0.0 → 0.22.0

OAuth + admin dashboard is meaningful but doesn't quite warrant the
major-version reset to 1.0. Renumber as v0.22.0, slotting cleanly above
master's v0.21.0 (Cathedral II).

Touched:
- VERSION, package.json: 1.0.0.0 → 0.22.0
- CHANGELOG.md: heading + "BEFORE/AFTER v1.0" table + "To take advantage"
  + "pre-v1.0" all renamed. Narrative voice unchanged otherwise.
- TODOS.md: ChatGPT MCP completion stamp updated to v0.22.0 (2026-04-25).
- CLAUDE.md, README.md, docs/mcp/{DEPLOY,CHATGPT}.md, src/schema.sql,
  src/core/schema-embedded.ts: every reader-facing v1.0.0 reference
  rewritten to v0.22.0 / pre-v0.22 in the same place.
- llms-full.txt: regenerated to match.

Slug-test occurrences of "v1.0.0" (`test/slug-validation.test.ts`,
`test/file-upload-security.test.ts`) and the `HOMEBREW_FOR_PERSONAL_AI`
roadmap reference to a future v1.0 vision left intact — those are
unrelated to this branch's release version.

Typecheck clean. cli + oauth + slug + file-upload tests pass (106 tests).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* v0.26.0 fix: 4 security findings from /cso pass + version bump

Bumped 0.22.0 → 0.26.0 to slot above master's v0.21 chain with headroom
for v0.23/0.24/0.25 to ship from master between now and merge.

Security fixes (all from CSO finding writeups):

#1 cookie-parser middleware — admin dashboard auth was silently broken.
   Express 5 has no built-in cookie parsing; req.cookies was always
   undefined, so /admin/login set the cookie but every subsequent admin
   API call returned 401. Added cookie-parser@^1.4.7 + @types/cookie-parser
   as direct + dev deps. app.use(cookieParser()) wired before CORS.

#2 + #3 TOCTOU races — exchangeAuthorizationCode and exchangeRefreshToken
   used SELECT-then-DELETE, letting concurrent requests with the same
   code/refresh both pass the SELECT before either ran DELETE, both
   issuing token pairs. Switched to atomic DELETE...RETURNING. RFC 6749
   §10.5 (codes) + §10.4 (refresh detection) violations closed. Added
   regression tests that fire 10 concurrent exchanges and assert exactly
   one wins — both pass.
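
An in-memory model of why the atomic form is race-free: the claim and the read happen in one step, so at most one caller sees the row, mirroring what `DELETE ... RETURNING` guarantees that SELECT-then-DELETE does not. (Illustrative only; the real fix is the SQL statement itself.)

```typescript
// Stand-in for the auth-code table.
const codes = new Map<string, { clientId: string }>();

// Atomic claim: remove and return in one operation, like DELETE...RETURNING.
function deleteReturning(code: string): { clientId: string } | undefined {
  const row = codes.get(code);
  codes.delete(code);
  return row;
}

codes.set("abc", { clientId: "client-1" });
const winners = Array.from({ length: 10 }, () => deleteReturning("abc"))
  .filter((r): r is { clientId: string } => r !== undefined);
// exactly one of the 10 claims gets the row; the other nine see nothing
```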

#5 pgArray escape + DCR redirect_uri validation — pgArray() did
   `arr.join(',')` with no escaping, so an element containing a comma
   would be parsed by Postgres as TWO array elements. With --enable-dcr
   on, this could smuggle a second redirect_uri into a registered client
   and steal auth codes. Now every element is double-quoted with `"` and
   `\` escaped. Added validateRedirectUri() per RFC 6749 §3.1.2.1:
   redirect_uris must be https:// or loopback (localhost / 127.0.0.1).
   Wired into the DCR registerClient path; CLI registration trusts the
   operator and bypasses. Regression test confirms a comma-in-URI element
   round-trips as 1 element, not 2.
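
A sketch of both guards under assumed signatures (the real oauth-provider.ts versions may differ):

```typescript
// Every element double-quoted, with backslash and double-quote escaped,
// so a comma inside an element can no longer split it into two Postgres
// array elements.
function pgArray(arr: string[]): string {
  const quoted = arr.map(
    (el) => `"${el.replace(/\\/g, "\\\\").replace(/"/g, '\\"')}"`,
  );
  return `{${quoted.join(",")}}`;
}

// RFC 6749 §3.1.2.1 shape: https required, plaintext http only for loopback.
function validateRedirectUri(uri: string): boolean {
  let u: URL;
  try { u = new URL(uri); } catch { return false; }
  if (u.protocol === "https:") return true;
  return u.protocol === "http:" &&
    (u.hostname === "localhost" || u.hostname === "127.0.0.1");
}
```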

#6 --public-url flag — issuerUrl was hardcoded to http://localhost:{port}.
   Behind reverse proxies / ngrok / production deploys, the issuer claim
   in tokens wouldn't match the discovery URL clients hit (RFC 8414 §3.3).
   New --public-url URL flag on `gbrain serve --http`, propagates through
   serve.ts → serve-http.ts → ServeHttpOptions.publicUrl → issuerUrl.
   Startup banner surfaces the configured issuer.

Findings #4 (admin requests filter dead code), #7 (admin register-client
hardcoded grant_types), #8 (legacy token grandfathering posture) are
documentation / minor functional fixes and are deferred per user direction.

Tests: oauth.test.ts now 34 cases (was 27). 7 new:
- single-use TOCTOU regression (10 concurrent code exchanges)
- single-use TOCTOU regression (10 concurrent refresh exchanges)
- redirect_uri http://localhost passes
- redirect_uri https://example.com passes
- redirect_uri http://example.com (non-loopback plaintext) rejected
- redirect_uri non-URL rejected
- redirect_uri with embedded comma stored as single element

Files:
- VERSION, package.json: 0.22.0 → 0.26.0
- CHANGELOG.md: heading + table + "To take advantage" + "pre-v0.22" → v0.26;
  new "Security hardening (post-/cso pass)" subsection at top of itemized
  changes; CLI flag list updated for --public-url.
- src/core/oauth-provider.ts: pgArray escape, validateRedirectUri,
  registerClient enforces validation, DELETE...RETURNING in
  exchangeAuthorizationCode + exchangeRefreshToken.
- src/commands/serve-http.ts: cookie-parser import + wire-up,
  publicUrl option, issuerUrl honors it, startup banner shows issuer.
- src/commands/serve.ts: parses --public-url and threads through.
- src/cli.ts: help text adds --public-url URL flag.
- test/oauth.test.ts: +7 regression tests (now 34 total).
- llms-full.txt: regenerated.

Typecheck clean. 34 oauth + 14 cli tests pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>