Conversation
… harden CI/CD

Replace agent-browser with playwright-cli across all rules, docs, installer, and statusline. Update command syntax (bare refs, no -i flag, run-code for waits) and add a comprehensive playwright-cli rule with tabs, cookies, network mocking, tracing, and video recording support.

Backport 7 stability and security fixes from upstream claude-mem v9.1.0/v9.1.1:
- Restrict CORS to localhost origins (prevent cross-origin attacks)
- Add DOMPurify XSS defense-in-depth to TerminalPreview
- Make hooks fail open when the worker is unavailable (exit 0 instead of 2)
- Add a stdin safety timeout to prevent hook hangs
- Disable FK checks during table-recreation migrations
- Remove hardcoded CHECK constraints on the observation type column
- Fix Express middleware ordering for the DB initialization guard

Add git-crypt encryption for Vercel API functions and a dedicated CI/CD deployment workflow with the Vercel CLI (replaces auto-deploy for encrypted repos).
Vercel: 1 Skipped Deployment
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings.
Walkthrough

Loosens the SQLite observations.type CHECK constraint and adds related migrations; introduces project_roots storage and worktree management (routes, UI, parsing); adds fail-open hook handling and a stdin safety debounce; enforces a localhost CORS allowlist and an initialization guard for /api/context/inject; adds HTML sanitization for terminal output; replaces agent-browser with the Playwright CLI; adds a Vercel deploy workflow and many tests.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: 2 passed, 1 failed (1 warning)
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
installer/steps/dependencies.py (1)

473-494: ⚠️ Potential issue | 🔴 Critical

Replace the incorrect Playwright CLI commands with the standard Playwright package and install approach.

The npm package `@playwright/cli` and the subcommand `playwright-cli install-browser` do not exist in the official Playwright ecosystem. The standard approach is:

```shell
npm install -D @playwright/test
npx playwright install chromium
```

The current code will fail silently: the npm install will fail (the package doesn't exist), the binary won't be available, and the subprocess call will return a non-zero exit code, causing the function to return `False` without any clear indication of why installation failed.

Update lines 473-494 to use the correct package and command, or reconsider using `@playwright/test` with the standard `npx playwright install` approach.
🤖 Fix all issues with AI agents
In `@console/package.json`:
- Line 30: Replace the pre-release dependency "isomorphic-dompurify":
"3.0.0-rc.2" in package.json with the latest stable release "2.35.0" (or an
appropriate stable version you choose), then reinstall and update lockfile (npm
install / yarn install) and run the test/build to confirm nothing breaks; if you
truly need 3.0.0 features, instead add a comment in package.json or PR
description explaining the requirement and create a ticket to move to the stable
3.0.0 when it is released.
- Line 30: The package.json currently pins the dependency "isomorphic-dompurify"
to the non-existent pre-release "3.0.0-rc.2" and also declares engines "node":
">=18.0.0" which conflicts with newer 2.x releases that require Node >=20.19.5;
change the "isomorphic-dompurify" entry to a published stable version (e.g.,
"2.35.0") or replace it with the intended released tag, and if you must keep a
pre-release, add a clarifying comment and adjust the "node" engines field to
match the required Node version (or add an install-time guard) so
runtime/install won't fail—look for the "isomorphic-dompurify" dependency key
and the "node" engines field in package.json to apply the fix.
In `@console/src/services/worker-service.ts`:
- Around line 202-216: The route handler registered via
this.server.app.get("/api/context/inject", ...) can throw when await
Promise.race([this.initializationComplete, timeoutPromise]) rejects; wrap that
await in a try/catch so rejections (including the timeoutPromise) are caught,
respond with res.status(503).json({ error: "Initialization timeout" }) (or
similar) and return, and only call next() when the try block succeeds and
this.searchRoutes is initialized; reference the Promise.race call,
timeoutPromise, initializationComplete, and this.searchRoutes to find where to
add the try/catch and error response.
- Around line 202-216: The route handler using await
Promise.race([this.initializationComplete, timeoutPromise]) can throw when
timeoutPromise rejects; wrap that await in a try/catch and handle the timeout
error explicitly (using the existing timeoutMs/timeoutPromise), respond with
res.status(503).json({ error: "Initialization timeout" }) and return instead of
calling next(), so the unhandled rejection is caught and the client receives a
proper 503; ensure this.server.app.get handler references
initializationComplete, timeoutPromise, timeoutMs and searchRoutes as needed.
In `@docs/site/package.json`:
- Line 43: Remove the unused dependency entry "@upstash/redis" from
package.json: open the docs/site package.json, delete the line containing
"@upstash/redis": "^1.36.2", and run your package manager's install
(npm/yarn/pnpm) to update the lockfile so the dependency is no longer installed.
In `@docs/site/vercel.json`:
- Around line 14-21: The CORS config for the "/api/(.*)" route currently sets
Access-Control-Allow-Origin to "*" which allows any site to call mutation routes
like "/api/trial/migrate" and "/api/trial/start"; change the
Access-Control-Allow-Origin value to a restricted list of your app's allowed
origin(s) (e.g. your production and staging domains) or reference an environment
variable that contains allowed origins, and ensure the rule targets only safe
endpoints (or add a separate route entry for "/api/trial/*" with the restricted
origin header) while keeping Access-Control-Allow-Methods and -Headers as
needed.
In `@pyproject.toml`:
- Line 17: Remove the unused dependency entry "cryptography>=44.0.0" from
pyproject.toml (the line containing cryptography>=44.0.0) to avoid adding an
unused package to requirements; if you intend to use cryptography, instead add
import/use sites in the codebase referencing the cryptography package and add a
brief comment in pyproject.toml explaining why the dependency is required.
🧹 Nitpick comments (16)
console/src/cli/hook-command.ts (1)

19-26: Catch-all swallows programming errors, not just connectivity failures.

Every exception, including `TypeError`, `ReferenceError`, or logic bugs in `adapter.normalizeInput`/`handler.execute`, is treated as "worker unavailable" and silently exits 0. This makes genuine bugs invisible in production.

Consider narrowing the fail-open path to connection-related errors and re-throwing (or exiting with a non-zero code) for unexpected ones:

Suggested approach

```diff
 } catch (error) {
-  console.error(`Hook error (fail-open): ${error}`);
-  if (event === "context") {
-    console.log(JSON.stringify({ hookSpecificOutput: { hookEventName: "SessionStart", additionalContext: "" } }));
-  } else {
-    console.log(JSON.stringify({ continue: true, suppressOutput: true }));
+  const isConnError =
+    error instanceof Error &&
+    (error.message.includes("ECONNREFUSED") || error.message.includes("ECONNRESET") || error.message.includes("ETIMEDOUT"));
+  if (isConnError) {
+    console.error(`Hook error (fail-open): ${error}`);
+    if (event === "context") {
+      console.log(JSON.stringify({ hookSpecificOutput: { hookEventName: "SessionStart", additionalContext: "" } }));
+    } else {
+      console.log(JSON.stringify({ continue: true, suppressOutput: true }));
+    }
+    process.exit(HOOK_EXIT_CODES.SUCCESS);
+  } else {
+    console.error(`Hook error: ${error}`);
+    process.exit(HOOK_EXIT_CODES.FAILURE);
   }
-  process.exit(HOOK_EXIT_CODES.SUCCESS);
 }
```

console/src/cli/stdin-reader.ts (1)
26-63: Stdin event listeners are never removed after the promise settles.

If the promise resolves via the 100ms no-data timeout or the 3s safety timeout, the `data`/`end`/`error` listeners remain attached to `process.stdin`. This is fine if `readJsonFromStdin` is only called once per process (typical for CLI hooks), but would leak listeners on repeated calls.

Consider cleaning up with `removeListener`, using `{ once: true }` for `end`/`error`, or wrapping in an abort pattern, if there's ever a future multi-call scenario.

console/tests/cli/stdin-reader-timeout.test.ts (1)
43-56: Safety timeout test waits 3 real seconds; consider also asserting a lower bound.

The elapsed-time assertion (`< 4000`) confirms the timeout doesn't hang forever, but doesn't verify the 3-second safety timeout actually fired (as opposed to some other, faster path resolving early). Adding `expect(elapsed).toBeGreaterThanOrEqual(2900)` would make this a tighter, more meaningful assertion.

.gitignore (1)
227-230: `.env*` pattern overlaps with existing entries and is very broad.

Line 229 (`.env*`) subsumes the existing `.env` on line 160 and `.envrc` on line 161, making those earlier entries redundant. The glob also matches any file starting with `.env` (e.g., `.environment`, `.envoy`). If this broad matching is intentional, consider removing the now-redundant entries above for clarity. If not, a more targeted pattern like `.env.*` or explicit entries would be safer.

.github/workflows/claude.yml (1)
53-53: Consider pinning `claude-code-action` to a full commit SHA instead of a mutable tag.

`anthropics/claude-code-action@v1` uses a major version tag that can be moved to point to any new commit. For supply-chain safety, pinning to a specific commit SHA (e.g., `anthropics/claude-code-action@<sha>`) ensures the action code doesn't change unexpectedly. This applies to both usages on lines 53 and 77.

As per coding guidelines, "Review workflows for security best practices and proper permissions."
.github/workflows/deploy-website.yml (4)
52-52: Pin the Vercel CLI version.

`npm install -g vercel` installs whatever the latest version is at build time, which can introduce unexpected breaking changes. Pin to a specific version for reproducible deployments.

```diff
- run: npm install -g vercel
+ run: npm install -g vercel@41
```

As per coding guidelines, "Review workflows for security best practices and proper permissions."
57-60: Token passed on the CLI may appear in debug/error logs.

Passing `--token=$VERCEL_TOKEN` on the command line risks leaking the token in process listings or verbose error output. Consider using the `VERCEL_TOKEN` environment variable directly, which the Vercel CLI already supports without `--token`.

Proposed fix

```diff
- run: vercel pull --yes --token=$VERCEL_TOKEN
+ run: vercel pull --yes
```

The `VERCEL_TOKEN` env var is already set in the step's `env` block, and the Vercel CLI will pick it up automatically.

As per coding guidelines, "Review workflows for security best practices and proper permissions."
console/src/services/worker/http/middleware.ts (2)
39-50: Note: `callback(null, false)` doesn't block the request itself.

With `callback(null, false)`, the server still processes the request; it just omits CORS headers so the browser discards the response. If you need to hard-block disallowed cross-origin requests (e.g., to prevent any server-side side effects), you'd need to short-circuit with a 403 before reaching route handlers. The current approach is the standard CORS pattern though, so this is fine if the threat model only concerns browser-based data exfiltration.
console/tests/worker/middleware-ordering.test.ts (2)
23-35: Fragile regex extraction of method body; consider a simpler assertion.

The regex `private\s+registerRoutes\(\)[\s\S]*?^\s{2}\}/m` relies on exact indentation and will break on formatting changes. A simpler approach that's equally effective: just verify ordering via two `indexOf` calls on the full source, since both strings (`"/api/context/inject"` and `this.server.registerRoutes(`) are unique enough in context.

Simplified alternative

```diff
-    const registerRoutesMatch = source.match(
-      /private\s+registerRoutes\(\)[\s\S]*?^\s{2}\}/m,
-    );
-    expect(registerRoutesMatch).toBeTruthy();
-
-    const methodBody = registerRoutesMatch![0];
-
-    const guardIndex = methodBody.indexOf('"/api/context/inject"');
-    const firstRegisterRoutes = methodBody.indexOf("this.server.registerRoutes(");
+    const guardIndex = source.indexOf('"/api/context/inject"');
+    const firstRegisterRoutes = source.indexOf("this.server.registerRoutes(");
     expect(guardIndex).toBeGreaterThan(-1);
     expect(firstRegisterRoutes).toBeGreaterThan(-1);
     expect(guardIndex).toBeLessThan(firstRegisterRoutes);
```
console/src/services/sqlite/migrations/runner.ts (1)
702-732: Fragile coupling between dynamic column list and hardcoded table schema.

The `SELECT ${columnList}` (derived dynamically from `PRAGMA table_info`) is inserted into `observations_new` (hardcoded with 17 columns). If the column count doesn't match (e.g., a future migration adds a column to observations before v22 but the hardcoded schema isn't updated), this `INSERT INTO ... SELECT` will fail with a column count mismatch.

Currently safe because migration 22 runs last in `runAllMigrations()`, and all prior migrations ensure the column set aligns. However, this is brittle for maintenance.

Consider either:
- Using an explicit column list on the INSERT: `INSERT INTO observations_new (${columnList}) SELECT ${columnList} FROM observations`
- Or building the CREATE TABLE dynamically

Suggested safer INSERT

```diff
- this.db.run(`INSERT INTO observations_new SELECT ${columnList} FROM observations`);
+ this.db.run(`INSERT INTO observations_new (${columnList}) SELECT ${columnList} FROM observations`);
```

This way, if the new table has columns not in the old table (e.g., a newly added column with a DEFAULT), they'll get the default value instead of causing a failure.
console/tests/sqlite/migration-fk-safety.test.ts (1)
32-34: Static analysis: `forEach` callback implicitly returns a value.

`spy.mockRestore()` returns the spy instance, and Biome flags callbacks passed to `forEach` that return values. Use a `for...of` loop instead.

Suggested fix

```diff
- loggerSpies.forEach((spy) => spy.mockRestore());
+ for (const spy of loggerSpies) spy.mockRestore();
```

console/tests/sqlite/observation-type-constraint.test.ts (1)
30-32: Same `forEach` callback return value lint issue. Same fix as in the FK safety test file.

Suggested fix

```diff
- loggerSpies.forEach((spy) => spy.mockRestore());
+ for (const spy of loggerSpies) spy.mockRestore();
```
```json
    "express": "^4.18.2",
    "glob": "^11.0.3",
    "handlebars": "^4.7.8",
    "isomorphic-dompurify": "3.0.0-rc.2",
```
🧩 Analysis chain
🌐 Web query:
latest stable version isomorphic-dompurify npm
💡 Result:
The latest stable version of isomorphic-dompurify on npm is 2.35.0 (published Dec 27, 2025). [1][2]
Consider pinning to a stable release instead of a release candidate. isomorphic-dompurify at 3.0.0-rc.2 is a pre-release version in a production dependency, which carries the risk of undiscovered bugs. However, note that the latest stable version is 2.35.0, a downgrade from the RC. If the project requires features from the upcoming 3.0.0 release, consider monitoring the package for the stable 3.0.0 release and updating as soon as it becomes available.
🤖 Prompt for AI Agents
In `@console/package.json` at line 30, Replace the pre-release dependency
"isomorphic-dompurify": "3.0.0-rc.2" in package.json with the latest stable
release "2.35.0" (or an appropriate stable version you choose), then reinstall
and update lockfile (npm install / yarn install) and run the test/build to
confirm nothing breaks; if you truly need 3.0.0 features, instead add a comment
in package.json or PR description explaining the requirement and create a ticket
to move to the stable 3.0.0 when it is released.
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's locate and read the console/package.json file
find . -name "package.json" -path "*/console/*" | head -20
```

Repository: maxritter/claude-pilot

🏁 Script executed:

```shell
# Also search for package.json in common locations
fd package.json | grep -E "(console|^console)" | head -20
```

🏁 Script executed:

```shell
# Let's check the root and common directories
ls -la | head -20
```

🏁 Script executed:

```shell
# Read the console/package.json file
cat -n console/package.json
```

🏁 Script executed:

```shell
# Check for lock files
fd "package-lock.json|yarn.lock|bun.lockb" | head -10
```

🏁 Script executed:

```shell
# Search for isomorphic-dompurify usage in the codebase
rg "isomorphic-dompurify" --type js --type ts -A 3
```

🏁 Script executed:

```shell
# Search for clearWindow usage
rg "clearWindow" --type js --type ts -A 2 -B 2
```

🏁 Script executed:

```shell
# Read the sanitize.ts file to understand usage pattern
cat -n console/src/ui/viewer/utils/sanitize.ts
```

🏁 Script executed:

```shell
# Search for where sanitizeHtml is called
rg "sanitizeHtml" --type js --type ts -A 2 -B 2
```

🏁 Script executed:

```shell
# Search for jsdom references
rg "jsdom" --type js --type ts
```

🌐 Web query:

isomorphic-dompurify 3.0.0-rc.2 npm Node.js version requirement

💡 Result:

isomorphic-dompurify@3.0.0-rc.2 does not appear to be published on npm (no npm/unpkg entry for that version; the current published line is 2.x). [1]

For the currently published versions, the declared engines.node in the package is >=18 (example: 2.28.0 package.json). [2]

However, the project's README documents stricter requirements for newer 2.x versions: >=2.27.0 → Node >=20, and >=2.30.0 → Node >=20.19.5. [3]

Sources: [1] [2] [3]

🌐 Web query:

isomorphic-dompurify latest version npm 2.35.0 2.x stable

💡 Result:

- Latest stable 2.x on npm: isomorphic-dompurify@2.35.0 (published Dec 27, 2025). [1][2]
- Security: Snyk lists no known direct vulnerabilities for 2.35.0. [1]
- Note: there are 3.0.0 release candidates (e.g., 3.0.0-rc.2) shown by Socket, which are pre-release, not "stable 2.x". [3]

Install stable 2.x: `npm i isomorphic-dompurify@^2.35.0`

Sources: [1] Snyk versions/vuln DB, [2] jsDelivr package page, [3] Socket versions list.

🏁 Script executed:

```shell
# Check if TerminalPreview is client-side or server-side
cat -n console/src/ui/viewer/components/TerminalPreview.tsx | head -50
```
3.0.0-rc.2 is not published on npm; this will cause installation failures.
The package.json pins isomorphic-dompurify@3.0.0-rc.2, but this version does not exist on npm. The current published line is 2.x, with the latest stable version being 2.35.0 (Dec 2025).
Additionally, there is a Node.js compatibility issue: newer stable versions (e.g., 2.30.0+) require Node >=20.19.5, which conflicts with the declared "node": ">=18.0.0" on line 8. This could cause runtime failures on Node 18.x.
Resolution: Either use the latest stable version 2.35.0 or verify that 3.0.0-rc.2 (once released) is intended and add a comment explaining why a pre-release is necessary.
🤖 Prompt for AI Agents
In `@console/package.json` at line 30, The package.json currently pins the
dependency "isomorphic-dompurify" to the non-existent pre-release "3.0.0-rc.2"
and also declares engines "node": ">=18.0.0" which conflicts with newer 2.x
releases that require Node >=20.19.5; change the "isomorphic-dompurify" entry to
a published stable version (e.g., "2.35.0") or replace it with the intended
released tag, and if you must keep a pre-release, add a clarifying comment and
adjust the "node" engines field to match the required Node version (or add an
install-time guard) so runtime/install won't fail—look for the
"isomorphic-dompurify" dependency key and the "node" engines field in
package.json to apply the fix.
```json
{
  "source": "/api/(.*)",
  "headers": [
    { "key": "Access-Control-Allow-Origin", "value": "*" },
    { "key": "Access-Control-Allow-Methods", "value": "POST, OPTIONS" },
    { "key": "Access-Control-Allow-Headers", "value": "Content-Type" }
  ]
},
```
🧩 Analysis chain
🏁 Script executed:

```shell
find docs/site -type f \( -name "*.ts" -o -name "*.js" -o -name "*.tsx" -o -name "*.jsx" \) | grep -i api | head -20
```

🏁 Script executed:

```shell
# Check if docs/site/api directory exists
ls -la docs/site/ 2>/dev/null | grep -E "api|vercel" || echo "Checking directory structure..."
fd --type d api docs/site --hidden 2>/dev/null || echo "No api directory found in docs/site"
```

🏁 Script executed:

```shell
# List all files in docs/site to understand structure
ls -la docs/site/ 2>/dev/null || find docs/site -maxdepth 2 -type f 2>/dev/null | head -30
```

🏁 Script executed:

```shell
cat -n docs/site/api/trial/heartbeat.ts
```

🏁 Script executed:

```shell
cat -n docs/site/api/trial/migrate.ts
```

🏁 Script executed:

```shell
cat -n docs/site/api/trial/start.ts
```

🏁 Script executed:

```shell
cat -n docs/site/api/_lib/cors.ts
```

🏁 Script executed:

```shell
file docs/site/api/trial/*.ts docs/site/api/_lib/*.ts
```
Wildcard CORS on API mutation routes — restrict to known origins.
The /api/trial/* endpoints (migrate, start) are mutation operations. Access-Control-Allow-Origin: * allows any website to trigger these mutations from a browser. Restrict this header to your application's domain(s) to prevent unauthorized cross-origin requests.
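A restricted version of the header block might look like the sketch below; the origin value is a placeholder domain, not one taken from the project, and note that static vercel.json headers cannot vary per request, so multiple allowed origins would have to be handled in the function handlers instead:

```json
{
  "source": "/api/trial/(.*)",
  "headers": [
    { "key": "Access-Control-Allow-Origin", "value": "https://app.example.com" },
    { "key": "Access-Control-Allow-Methods", "value": "POST, OPTIONS" },
    { "key": "Access-Control-Allow-Headers", "value": "Content-Type" }
  ]
}
```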
🤖 Prompt for AI Agents
In `@docs/site/vercel.json` around lines 14 - 21, The CORS config for the
"/api/(.*)" route currently sets Access-Control-Allow-Origin to "*" which allows
any site to call mutation routes like "/api/trial/migrate" and
"/api/trial/start"; change the Access-Control-Allow-Origin value to a restricted
list of your app's allowed origin(s) (e.g. your production and staging domains)
or reference an environment variable that contains allowed origins, and ensure
the rule targets only safe endpoints (or add a separate route entry for
"/api/trial/*" with the restricted origin header) while keeping
Access-Control-Allow-Methods and -Headers as needed.
```toml
dependencies = [
    "rich>=14.0.0",
    "platformdirs>=4.3.6",
    "cryptography>=44.0.0",
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check if cryptography is actually imported/used in the codebase
echo "=== Searching for cryptography imports ==="
rg -n --type=py 'import cryptography|from cryptography' || echo "No direct imports found"
echo ""
echo "=== Searching for indirect references ==="
rg -n --type=py 'cryptography' || echo "No references found"
```
🌐 Web query:
cryptography package version 44.0.0 python pypi
💡 Result:
cryptography 44.0.0 on PyPI was uploaded Nov 27, 2024 and provides both a source tarball (cryptography-44.0.0.tar.gz) and prebuilt wheels for multiple platforms/interpreters (including manylinux, Windows, CPython/PyPy variants). It supports Python 3.7+ (per the project’s PyPI metadata). [1]
Install that exact version: `pip install "cryptography==44.0.0"`

Remove the unused cryptography dependency.

The cryptography>=44.0.0 package is not imported or used anywhere in the codebase. While version 44.0.0 is a valid release (published Nov 27, 2024, on PyPI), adding unused dependencies increases the attack surface and bloats requirements unnecessarily. Either use the package in the code or remove it from pyproject.toml.
🤖 Prompt for AI Agents
In `@pyproject.toml` at line 17, Remove the unused dependency entry
"cryptography>=44.0.0" from pyproject.toml (the line containing
cryptography>=44.0.0) to avoid adding an unused package to requirements; if you
intend to use cryptography, instead add import/use sites in the codebase
referencing the cryptography package and add a brief comment in pyproject.toml
explaining why the dependency is required.
…tion

Add missing npm install step before vercel build in deploy-website workflow (fixes spawn sh ENOENT error). Wrap Promise.race in try/catch for the /api/context/inject initialization guard to prevent unhandled rejections when the timeout fires.
Claude finished @maxritter's task in 1m 55s. View job

Code Review Complete ✅

I've completed a thorough review of this PR. The changes are well-structured with comprehensive testing, but there are several critical issues that need to be addressed before merging.

🔴 Critical Issues

1. Playwright CLI Package Does Not Exist (
Claude finished @maxritter's task in 3m 0s. View job

Code Review Complete ✅

I've completed a comprehensive review of PR #46. The changes are well-structured with excellent test coverage, but there are several critical issues that need to be addressed before merging.

🔴 Critical Issues

1. Playwright CLI Package Does Not Exist (
Claude finished @maxritter's task in 2m 14s. View job

Code Review Complete ✅

I've completed a thorough review of this PR. The changes are well-structured with comprehensive testing, but there are several critical issues that need to be addressed before merging.

🔴 Critical Issues

1. Playwright CLI Package Does Not Exist (
Remove ~500 lines of unused legacy functions, obfuscated binary encoding, and migration logic. Clean up dead config constants, remove unused Vercel API function, fix API domain references, and update README pricing links.
Claude finished @maxritter's task in 2m 27s. View job

Code Review Complete ✅

I've completed a thorough review of this PR. The changes are well-structured with comprehensive testing, but there are several critical issues that need to be addressed before merging.

🔴 Critical Issues

1. Playwright CLI Package Does Not Exist (
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@docs/site/src/main.tsx`:
- Around line 6-12: The current mixpanel.init call sets record_sessions_percent:
100 and autocapture: true which can capture PII and inflate costs; update the
mixpanel.init configuration (the mixpanel.init call) to lower
record_sessions_percent to a sampling value (e.g., 10–25) and either set
autocapture: false or scope it (e.g., enable only specific selectors) and ensure
sensitive inputs are excluded via mp-no-track attributes or a property filter
before sending; keep api_host as needed but apply the sampling/autocapture
changes to reduce privacy and cost exposure.
🧹 Nitpick comments (3)
.github/workflows/deploy-website.yml (3)
51-52: Pin the Vercel CLI version for reproducible deployments.

`npm install -g vercel` installs whatever the latest version is at workflow run time. A breaking change in a future Vercel CLI release could silently break deployments.

Suggested fix

```diff
- run: npm install -g vercel
+ run: npm install -g vercel@41
```

Replace `41` with the current version you're targeting.
54-62: Deploy step looks correct; consider moving org/project IDs to secrets or vars for portability.

The `--prod` conditional logic is safe: the `${{ }}` expression resolves to a string literal before shell execution, so there is no injection risk. The `VERCEL_TOKEN` is properly sourced from secrets.

`VERCEL_ORG_ID` and `VERCEL_PROJECT_ID` are not sensitive, but hardcoding them reduces portability if the project or org changes. Consider using repository variables (`vars.*`) for easier rotation.
35-49: Consider pinning actions to commit SHAs for enhanced supply-chain security (optional).

Both `actions/checkout@v6` and `actions/setup-node@v6` are current (v6.0.2 and v6.2.0 respectively as of January 2026). GitHub's security best practices recommend pinning third-party actions to their full commit SHA for immutable references, though mutable tags like `@v6` are acceptable when the publisher is officially verified (which both GitHub-owned actions are). If stricter supply-chain controls are desired, pin to the release SHA with a comment indicating the version:

```diff
- uses: actions/checkout@a7bd7a3a97b4ebefb83b8fcb2c7a39aedc96f32a # v6.0.2
- uses: actions/setup-node@38a8681edfb83886d4e24c27d841317b02e91620 # v6.2.0
```
docs/site/src/main.tsx
Outdated
```typescript
mixpanel.init("e5ea960341852bb39afdfa8026bf9975", {
  track_pageview: true,
  persistence: "localStorage",
  autocapture: true,
  record_sessions_percent: 100,
  api_host: "https://api-eu.mixpanel.com",
});
```
100% session recording with autocapture may have privacy/cost implications.
record_sessions_percent: 100 records every visitor session, and autocapture: true captures all DOM interactions (clicks, form inputs, etc.). On a public-facing docs site this could:
- Inadvertently capture PII from any input fields (search queries, contact forms), which matters for GDPR/CCPA even with the EU API host.
- Generate significant Mixpanel session-replay volume/cost.
Consider reducing record_sessions_percent to a sample (e.g., 10–25%) and reviewing whether autocapture needs to be scoped or if sensitive elements should be tagged with mp-no-track.
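As a sketch of what a sampled configuration could look like — the option names mirror the init call above, but the 20% rate and `autocapture: false` are illustrative choices, not values verified against Mixpanel's SDK documentation:

```typescript
// Illustrative sampled-replay config; option names mirror the init call above.
// The values (20% sampling, autocapture off) are example choices, not
// recommendations verified against Mixpanel's SDK docs.
const mixpanelConfig = {
  track_pageview: true,
  persistence: "localStorage",
  autocapture: false,           // or keep true and tag sensitive elements mp-no-track
  record_sessions_percent: 20,  // sample sessions instead of recording 100%
  api_host: "https://api-eu.mixpanel.com",
};

// mixpanel.init("<token>", mixpanelConfig);
```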
🤖 Prompt for AI Agents
In `@docs/site/src/main.tsx` around lines 6 - 12, The current mixpanel.init call
sets record_sessions_percent: 100 and autocapture: true which can capture PII
and inflate costs; update the mixpanel.init configuration (the mixpanel.init
call) to lower record_sessions_percent to a sampling value (e.g., 10–25) and
either set autocapture: false or scope it (e.g., enable only specific selectors)
and ensure sensitive inputs are excluded via mp-no-track attributes or a
property filter before sending; keep api_host as needed but apply the
sampling/autocapture changes to reduce privacy and cost exposure.
Vercel project settings already set the root directory to `docs/site`. The workflow's `working-directory` caused the path to double to `docs/site/docs/site`, breaking deploys.
|
Claude finished @maxritter's task in 2m 46s — View job

Code Review Complete ✅

I've completed a thorough review of PR #46. The changes are well-structured with comprehensive testing, but there are several critical issues that must be addressed before merging.

🔴 Critical Issues

1. Playwright CLI Package Does Not Exist (
|
Implement isolated workspaces for spec-driven development using git worktrees. After plan approval, implementation happens in a separate worktree on a dedicated branch. Changes sync back via squash merge after user approval.

- Add launcher/worktree.py for worktree lifecycle management
- Add worktree CLI subcommands (create, detect, diff, sync, cleanup, status)
- Track worktree state in session data for Endless Mode persistence
- Wrapper supports CWD switching and auto-restore on restart
- Console API endpoints and UI panel for worktree status/diff/sync
- Statusline worktree branch indicator
- Stale session cleanup removes orphaned worktrees and branches
- Update spec commands, rules, README, and website documentation
|
Claude finished @maxritter's task in 2m 7s — View job

Code Review Complete ✅

I've completed a thorough review of PR #46. This PR contains substantial improvements to stability and security, but there are several critical issues that must be addressed before merging.

🔴 Critical Issues

1. Playwright CLI Package Does Not Exist (
|
- Add Worktree field to plan header (default: Yes) with user choice during approval
- Update spec-implement to skip worktree creation when Worktree: No
- Add worktree field to planFileReader.ts parsing and Console UI indicators
- Update workflow-enforcement and git-operations rules for consistency
- Fix worker-service startup crash: getSessionStore() was called before dbManager.initialize() in start(), moved to initializeBackground()
- Add regression test for DB initialization order
|
Claude finished @maxritter's task in 1m 55s — View job

Code Review Complete ✅

I've completed a thorough review of PR #46. This PR contains substantial improvements to stability and security, but there are several critical issues that must be addressed before merging.

🔴 Critical Issues

1. Playwright CLI Package Does Not Exist (
|
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
console/src/ui/viewer/views/Spec/index.tsx (1)
123-142: ⚠️ Potential issue | 🟠 Major

Add project context to the `deleteSpec` DELETE request.

The `deleteSpec` function at line 129 omits the project parameter that other spec operations (`loadSpecs`, `loadContent`) include. The backend DELETE handler is hardcoded to use `process.env.CLAUDE_PROJECT_ROOT` instead of reading the `project` query parameter like all other handlers do. When cross-project support is active, this causes deletion to target the wrong project or fail entirely. Update the DELETE request URL to include ``${selectedProject ? `&project=${encodeURIComponent(selectedProject)}` : ''}`` for consistency with other operations.

console/src/services/worker/http/routes/PlanRoutes.ts (1)
154-179: ⚠️ Potential issue | 🟠 Major

`handleDeletePlan` still uses hardcoded project root — not migrated to `resolveProjectRoot`.

Every other plan handler (`handleGetActivePlan`, `handleGetAllPlans`, `handleGetGitInfo`, `handleGetActiveSpecs`, `handleGetPlanContent`, `handleGetPlanStats`) was updated to resolve the project root via `resolveProjectRoot(this.dbManager, project)`. `handleDeletePlan` still falls back to `process.env.CLAUDE_PROJECT_ROOT || process.cwd()`, ignoring the `project` query parameter entirely.

This means delete operations will always target the default project root, even when the UI is scoped to a different project — potentially deleting the wrong file or returning a 404 for a file that exists in the correct project root.
Proposed fix
```diff
 private handleDeletePlan = this.wrapHandler((req: Request, res: Response): void => {
-  const projectRoot = process.env.CLAUDE_PROJECT_ROOT || process.cwd();
+  const project = req.query.project as string | undefined;
+  const projectRoot = resolveProjectRoot(this.dbManager, project);
   const plansDir = path.join(projectRoot, "docs", "plans");
```
🤖 Fix all issues with AI agents
In `@console/src/services/sqlite/migrations/runner.ts`:
- Around line 703-733: The INSERT/SELECT is positional and will misalign because
the new CREATE TABLE observations_new defines columns in a different order than
the existing table; fix by explicitly specifying the target column list to match
the new table's column order (or reorder the SELECT to match the new table).
Concretely, update the code that builds columnList (tableInfo, columns) and
change the INSERT to use an explicit target list that matches the fields
declared in CREATE TABLE observations_new (e.g., INSERT INTO observations_new
(id, memory_session_id, project, text, type, title, subtitle, facts, narrative,
concepts, files_read, files_modified, prompt_number, discovery_tokens, tags,
created_at, created_at_epoch) SELECT id, memory_session_id, ... FROM
observations) so columns are matched by name rather than position.
In `@console/src/services/worker-service.ts`:
- Around line 288-293: When computing projectName from projectRoot (projectRoot
= process.env.CLAUDE_PROJECT_ROOT || process.cwd()) add a guard for the edge
case where path.basename(projectRoot) returns an empty string; if projectName
=== "" replace it with a safe fallback (for example use projectRoot itself if
non-empty, or os.hostname(), or a literal like "root") before calling
this.dbManager.getSessionStore().upsertProjectRoot(projectName, projectRoot).
Ensure the check occurs immediately after computing projectName so initialize()
and upsertProjectRoot receive a non-empty key.
In `@console/src/services/worker/http/routes/utils/planFileReader.ts`:
- Around line 29-34: statusMatch currently captures any \w+ and casts
statusMatch[1] to the union type without validation; update the code that sets
status (the status variable derived from statusMatch in planFileReader.ts) to
validate the captured string against the allowed values
("PENDING","COMPLETE","VERIFIED") before casting—if it doesn't match, return
null (so PlanInfo isn't created for unknown statuses) or explicitly map unknown
values to a safe default; this ensures getPlanStats only counts known statuses
and prevents silent inflation of totalSpecs.
In `@console/src/services/worker/http/routes/VexorRoutes.ts`:
- Line 114: The field _isReindexing is a global boolean but reindexing is
per-project; change _isReindexing to a Set<string> (projectRoot keys) and update
all usages: in handleReindex (add projectRoot to _isReindexing at start, check
this._isReindexing.has(projectRoot) to reject concurrent per-project requests,
and remove projectRoot from the set in finally), and in handleStatus /
emptyStatus return isReindexing: this._isReindexing.has(projectRoot) instead of
the boolean; also update any other checks in the reindex flow (methods
referenced: _isReindexing, handleReindex, handleStatus, emptyStatus) to use
set.has/add/delete semantics so reindexing is tracked per project.
In `@console/src/services/worker/http/routes/WorktreeRoutes.ts`:
- Around line 96-121: The handler performing checkout→merge→commit→cleanup in
the route (referenced by getMainRepoRoot, countFilesFromStat and the
execFileSync git calls) is not atomic and can leave the main repo in a bad
state; update the code to (1) wrap the critical git sequence in a
try/catch/finally so any merge failure runs execFileSync("git",
["merge","--abort"], {cwd: mainRoot}) and other cleanup always runs, (2)
separate the commit success path from cleanup so if commit succeeds but
worktree/branch removal fails you return success with a warning (include commit
hash and files_changed) and schedule/attempt cleanup asynchronously or in
finally, and (3) avoid disruptive git checkout on the main repo by performing
operations on a temporary worktree/branch (use git worktree add or operate
inside a cloned temp repo) instead of execFileSync("git", ["checkout",
status.baseBranch], ...) to prevent affecting concurrent work. Ensure all
references to execFileSync in this route are updated accordingly and errors
include contextual info so clients receive accurate success/failure state.
In `@console/src/ui/viewer/views/Dashboard/SpecActivity.tsx`:
- Around line 104-118: The map over specStats.recentlyVerified uses
key={spec.name} which can collide; update the key in the .map inside
SpecActivity (the element with key={spec.name}) to a stable composite key such
as `${spec.name}-${spec.verifiedAt}` or, preferably, a unique id like spec.id if
available to ensure React list reconciliation is correct.
- Around line 46-52: The mapping that builds formattedData uses new
Date(item.date) which treats "YYYY-MM-DD" as UTC and can shift the displayed day
in negative UTC offsets; update the mapping over specStats.completionTimeline to
parse the YYYY-MM-DD string into year/month/day (e.g., split on '-' and call new
Date(year, monthIndex, day)) and then call toLocaleDateString on that local Date
when computing displayDate so dates render correctly in local time; look for
formattedData, specStats.completionTimeline and displayDate to locate where to
change the parsing.
In `@console/src/ui/viewer/views/Dashboard/StatsGrid.tsx`:
- Around line 41-56: The spec-related StatsCard values are passed as raw
numbers; update the three value props on the StatsCard instances (for
totalSpecs, verified, inProgress) to use locale-formatted strings by calling
.toLocaleString() on the resolved number (e.g., (specStats?.totalSpecs ??
0).toLocaleString()), so each StatsCard value prop is consistently formatted as
a string like the other cards; keep the subtext (successRate) unchanged.
In `@console/src/ui/viewer/views/Spec/WorktreePanel.tsx`:
- Around line 40-57: The loadStatus async function is parsing fetch responses
without checking response.ok; update the fetch handling for both
fetch('/api/worktree/status') and fetch('/api/worktree/diff') inside loadStatus
to check res.ok (and diffRes.ok) before calling .json(), and if not ok throw or
reject with a descriptive Error (include status and response text) so the catch
branch will run; ensure on error you setStatus(null) and setFiles([]) as
appropriate and keep setIsLoading(false) in finally.
In `@console/tests/sqlite/project-roots.test.ts`:
- Around line 44-53: The test "should update root_path and last_seen_at on
duplicate project" currently only checks root_path; update it to also verify
last_seen_at was refreshed by reading last_seen_at from the row before the
second upsert and after, using store.upsertProjectRoot("my-project",
"/old/path") then capture the initial last_seen_at (from project_roots via
store.db.query(...).get), perform store.upsertProjectRoot("my-project",
"/new/path"), read the row again and assert the new last_seen_at is updated
(either greater than the previous timestamp or at least truthy); if timing is
tight, introduce a small delay between upserts or relax the assertion to
truthiness on the updated row to avoid flakiness.
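The SpecActivity date prompt above can be sketched as a small helper (the name `parseLocalDate` is illustrative, not from the codebase): `new Date("YYYY-MM-DD")` parses as UTC midnight, while the multi-argument constructor builds a local date that never shifts the displayed day.

```typescript
// Hypothetical helper implementing the suggested fix: parse "YYYY-MM-DD" as a
// *local* date so toLocaleDateString() can't shift the day in negative UTC
// offsets (e.g. America/Los_Angeles, where UTC midnight is the previous day).
function parseLocalDate(ymd: string): Date {
  const [year, month, day] = ymd.split("-").map(Number);
  return new Date(year, month - 1, day); // Date months are 0-indexed
}

// new Date("2024-01-15") would be UTC midnight; this stays Jan 15 locally.
const displayDate = parseLocalDate("2024-01-15").toLocaleDateString();
```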
🧹 Nitpick comments (22)
console/tests/ui/spec-project-filter.test.ts (1)
13-34: Source-text assertions are brittle — consider render-based tests long-term.

These tests verify feature integration by matching raw source strings, which means any refactor (renaming variables, extracting helpers, rewriting URL construction) could break them without any functional regression. The regex on Line 25 (`/plans\/active.*project/`) is also quite loose — it would match any occurrence of "project" appearing anywhere after "plans/active" on the same line, including comments.

That said, this pattern appears consistent with sibling test files (`spec-activity-panel.test.ts`, `observation-timeline-panel.test.ts`, etc.), so this is fine for now. If these tests become a maintenance burden, consider migrating to component-level render tests that assert on actual fetch call arguments.

console/src/ui/viewer/views/Dashboard/PlanStatus.tsx (1)
11-11: `worktree` field is declared but unused in this component.

The new `worktree` property on `PlanInfo` is not referenced anywhere in `PlanRow` or `PlanStatus`. If it's purely for downstream consumers of the `PlanInfo` type, consider whether this interface should live in a shared types file rather than being co-located with a component that doesn't use it.

console/tests/ui/spec-activity-panel.test.ts (1)
23-47: Source-level string assertions are brittle.

These tests will break on any rename or refactor (e.g., switching from `BarChart` to `AreaChart`, or changing the empty-state copy). The rendering tests below (lines 49–98) are much more robust since they exercise actual component behavior. Consider whether the source-scanning tests add enough value to justify the maintenance cost.

console/tests/ui/workspace-project-scoping.test.ts (1)
1-42: Source-scanning tests are functional but fragile — consistent with other test files in this PR.

The regex patterns tightly couple tests to implementation details (variable names like `vexorParam`, `loadVexorStatus`, dependency array structure). These will break on any internal refactoring of `useStats.ts`. As a future improvement, consider integration tests that mock `fetch` and verify actual API calls include the project parameter.

console/src/ui/viewer/views/Dashboard/StatsGrid.tsx (1)
11-21: Extract the `SpecStats` interface to a shared types file.

This `SpecStats` interface is duplicated across three files: `useStats.ts` (line 12), `SpecActivity.tsx` (line 14), and `StatsGrid.tsx` (line 11). All three definitions are identical. Changes to this interface require updates in three separate locations, creating maintenance risk.

console/src/services/worker/http/routes/utils/resolveProjectRoot.ts (1)
24-31: Consider wrapping DB/FS calls in try-catch to preserve the fail-safe fallback pattern.

If `getProjectRoot()` throws (e.g., table not yet migrated, DB locked), or `statSync()` throws (e.g., permission denied after `existsSync` returns true), the error propagates to the caller instead of gracefully falling back to `defaultRoot`. This breaks the otherwise robust fallback design.

♻️ Proposed fix

```diff
-  const root = dbManager.getSessionStore().getProjectRoot(project);
-  if (!root) {
-    return defaultRoot;
-  }
-
-  if (!existsSync(root) || !statSync(root).isDirectory()) {
+  let root: string | undefined;
+  try {
+    root = dbManager.getSessionStore().getProjectRoot(project);
+  } catch {
+    return defaultRoot;
+  }
+  if (!root) {
+    return defaultRoot;
+  }
+
+  try {
+    if (!existsSync(root) || !statSync(root).isDirectory()) {
+      return defaultRoot;
+    }
+  } catch {
     return defaultRoot;
   }
```

console/tests/ui/stats-grid-spec-cards.test.ts (1)
56-91: Good render-based coverage for the new spec cards.

Minor note: the assertion `expect(html).toContain("10")` on Line 85 is ambiguous — it matches both `summaries: 10` and `verified: 10`. This won't cause false failures, but consider asserting on a more distinct value if you want to specifically validate the "Verified" card rendering.

console/tests/worker/plan-stats-endpoint.test.ts (1)
22-55: These are source-inspection tests only — no functional coverage of the endpoint.

All assertions here check for string presence in source files, which validates code structure but not actual behavior (e.g., that `getPlanStats` returns correct counts, or that the endpoint responds with the expected JSON). Consider adding at least one functional test that invokes `getPlanStats` with a test fixture directory to catch regressions in the parsing logic.

console/tests/worker/project-root-endpoint.test.ts (2)
39-41: Overly broad regex — `roots` will match unrelated occurrences.

`/roots/` matches substrings like `"getAllProjectRoots"` and `"/api/project-roots"`, so this assertion passes even if the handler never returns a `roots` field in JSON. Consider a tighter pattern, e.g. `/\broots\b\s*[:=]/`, or checking for JSON serialization of a `roots` key.
56-61: Non-greedy `}` match may truncate if the `worker` object contains nested braces.

The regex `worker:\s*{([\s\S]*?)}` will stop at the first `}`, which fails for nested objects like `worker: { foo: { bar: 1 }, workspaceProject: "..." }`. If the shape evolves, this test could silently break.

A simple mitigation would be to just check that `workspaceProject` appears after `worker:` without trying to extract the full block:

```typescript
expect(dataRoutesSource).toMatch(/worker:\s*\{[^]*workspaceProject/);
```

console/tests/ui/use-stats-spec-metrics.test.ts (2)
68-71: Regex may fail if the fetch URL and `project` param are on separate lines.

`/plans\/stats.*project/` uses `.`, which doesn't match newlines by default. If the URL string and `project` query-string construction span multiple lines, this assertion will incorrectly fail. Consider using `[\s\S]` or the `s` flag:

```typescript
const planStatsFetch = hookSource.match(/plans\/stats[\s\S]*?project/);
```

console/src/ui/viewer/views/Spec/index.tsx (1)
83-99: `loadSpecs` depends on `selectedSpec` but also sets it — potential double-fetch on mount.

`loadSpecs` has `selectedSpec` in its dependency array (line 99) and conditionally sets `selectedSpec` (line 91). On first render: `loadSpecs` runs → sets `selectedSpec` → `useCallback` produces a new ref → the effect at line 144 re-runs → `loadSpecs` fires again. The guard (`!selectedSpec`) prevents an infinite loop, but you get an unnecessary duplicate network request.

Consider splitting the "auto-select first spec" logic out of `loadSpecs`, or removing `selectedSpec` from the dependency array and using a ref to track whether auto-selection has occurred.

console/tests/worker/cross-project-routes.test.ts (1)
60-88: Handler extraction via `.slice(source.indexOf(...))` is imprecise — assertions may pass from later handlers.

`source.slice(source.indexOf("handleGetActivePlan"))` captures everything from that handler to EOF, including all subsequent handlers. If `handleGetActivePlan` itself lacked `req.query.project`, the test could still pass because a later handler (e.g., `handleGetAllPlans`) contains it.

To isolate each handler body, extract up to the next handler or use a bounded regex. That said, since every handler is checked, the aggregate coverage remains valid — just be aware this won't catch the scenario where one handler regresses while others stay correct.
console/src/ui/viewer/views/Spec/WorktreePanel.tsx (2)
60-64: No cancellation of in-flight fetches on unmount.

When the component unmounts, `clearInterval` stops new polls, but any `loadStatus` call already in-flight will continue and call `setStatus`/`setFiles` on an unmounted component. Consider using an `AbortController` to cancel pending fetches on cleanup, or a mounted-ref guard.
173-180: Fragile string-based detection for sync result styling.
Using `syncResult.includes('failed')` to determine error vs. success styling is brittle — if a success message ever contains "failed" (e.g., "0 previously failed files synced"), the banner would incorrectly show error styling. Consider tracking a boolean `syncSuccess` state alongside the message instead.

console/src/ui/viewer/views/Dashboard/index.tsx (1)
42-45: RecentActivity sits alone in a two-column grid.
`RecentActivity` is the sole child of a `grid-cols-2` container, so it only occupies half the row width on `md`+. This looks intentional as a placeholder for a future second panel, but if not, consider either using `grid-cols-1` or having the component span both columns.

console/src/services/worker/http/routes/utils/planFileReader.ts (1)
153-185: Duplicated file-reading logic across `getAllPlans` and `getPlanStats`.

Both functions enumerate `readdirSync(plansDir)`, filter `.md` files, read each file, and call `parsePlanContent`. Consider extracting a shared `readAllPlanInfos(projectRoot)` helper that both functions can call, reducing the duplication.

console/src/services/worker/http/routes/WorktreeRoutes.ts (2)
109-111: Redundant `.toString()` on `execFileSync` result.

When `encoding: "utf-8"` is specified, `execFileSync` already returns a `string`, making `.toString()` unnecessary.

Proposed fix

```diff
-const commitHash = execFileSync("git", ["rev-parse", "HEAD"], { cwd: mainRoot, encoding: "utf-8", timeout: 5000 }).toString().trim();
+const commitHash = execFileSync("git", ["rev-parse", "HEAD"], { cwd: mainRoot, encoding: "utf-8", timeout: 5000 }).trim();

-const statOutput = execFileSync("git", ["diff", "--stat", "HEAD~1"], { cwd: mainRoot, encoding: "utf-8", timeout: 5000 }).toString();
+const statOutput = execFileSync("git", ["diff", "--stat", "HEAD~1"], { cwd: mainRoot, encoding: "utf-8", timeout: 5000 });
```
196-212: Two separate git diff commands where one could suffice.
`getChangedFiles` runs both `git diff --name-status` and `git diff --numstat` separately. These could be combined into a single `git diff --numstat` call (which already provides the status letter when using `--name-status` format), or by using `git diff --stat --name-status` in one invocation, halving the git process spawns. Low impact given the small file set, but worth noting.

console/src/services/worker/http/routes/VexorRoutes.ts (1)
228-228: `statusCache.clear()` evicts all projects' caches on a single-project reindex.

Lines 228 and 246 wipe the entire cache map, which invalidates cached status for every project — not just the one being reindexed. This causes unnecessary Vexor CLI invocations for unrelated projects after any single reindex.

Use `this.statusCache.delete(projectRoot)` instead.

Proposed fix

```diff
-  this.statusCache.clear();
+  this.statusCache.delete(projectRoot);
   res.json({ started: true });
   ...
   } finally {
     this._isReindexing = false;
-    this.statusCache.clear();
+    this.statusCache.delete(projectRoot);
   }
```

Also applies to: 246-246
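The per-project tracking that the `_isReindexing` prompt earlier asks for can be sketched with a `Set<string>` keyed by project root. The class and method names below are illustrative, not the actual `VexorRoutes` API:

```typescript
// Illustrative sketch of per-project reindex tracking with a Set<string>;
// names do not mirror the real VexorRoutes implementation.
class ReindexTracker {
  private reindexing = new Set<string>();

  start(projectRoot: string): boolean {
    if (this.reindexing.has(projectRoot)) return false; // reject a concurrent run for this project
    this.reindexing.add(projectRoot);
    return true;
  }

  finish(projectRoot: string): void {
    this.reindexing.delete(projectRoot); // call from a finally block
  }

  isReindexing(projectRoot: string): boolean {
    return this.reindexing.has(projectRoot);
  }
}
```

With this shape, `handleStatus` can report `isReindexing(projectRoot)` per project instead of one global boolean.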
console/src/ui/viewer/hooks/useStats.ts (2)
142-151: The 8-way `Promise.all` destructuring is becoming fragile.

Adding `specStatsRes` and `timelineRes` brings the parallel fetch count to 8. Positional destructuring makes it easy to silently mismap a response if someone reorders or inserts a fetch. Consider switching to a named-result pattern (e.g., `Promise.allSettled` with a helper, or an object of fetch promises resolved via a utility) to make the mapping self-documenting.

This is an existing pattern being extended, so no urgency, but worth considering as the list grows.
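The named-result pattern mentioned above could look like this; `allNamed` is a hypothetical utility, not part of the codebase, and assumes every value passed in is a promise:

```typescript
// Hypothetical utility: resolve an object of promises into an object of
// results, so callers use names instead of positional destructuring.
async function allNamed<T extends Record<string, Promise<unknown>>>(
  promises: T,
): Promise<{ [K in keyof T]: Awaited<T[K]> }> {
  const keys = Object.keys(promises);
  const values = await Promise.all(keys.map((k) => promises[k]));
  return Object.fromEntries(keys.map((k, i) => [k, values[i]])) as {
    [K in keyof T]: Awaited<T[K]>;
  };
}

// Usage sketch (endpoint names taken from the surrounding review):
// const { specStats, timeline } = await allNamed({
//   specStats: fetch("/api/plans/stats").then((r) => r.json()),
//   timeline: fetch("/api/observations/timeline").then((r) => r.json()),
// });
```

Reordering or inserting a fetch can then no longer shift results into the wrong variable.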
160-168: `specStatsRes` is set without shape validation — acceptable for internal API but worth noting.

`setSpecStats(specData)` trusts the API shape entirely. If `/api/plans/stats` returns a partial or malformed object, the component will receive incomplete `SpecStats`. The `observationTimeline` path at least has a fallback (`tlData.data || []`). For consistency, consider applying similar defensive defaults for `specStats`:

```diff
-  setSpecStats(specData);
+  setSpecStats({ ...specStats, ...specData });
```

This would preserve default values for any missing fields.
```typescript
const tableInfo = this.db.query("PRAGMA table_info(observations)").all() as TableColumnInfo[];
const columns = tableInfo.map((col) => col.name);
const columnList = columns.join(", ");

this.db.run("PRAGMA foreign_keys = OFF");
this.db.run("BEGIN TRANSACTION");

this.db.run(`
  CREATE TABLE observations_new (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    memory_session_id TEXT NOT NULL,
    project TEXT NOT NULL,
    text TEXT,
    type TEXT NOT NULL,
    title TEXT,
    subtitle TEXT,
    facts TEXT,
    narrative TEXT,
    concepts TEXT,
    files_read TEXT,
    files_modified TEXT,
    prompt_number INTEGER,
    discovery_tokens INTEGER DEFAULT 0,
    tags TEXT,
    created_at TEXT NOT NULL,
    created_at_epoch INTEGER NOT NULL,
    FOREIGN KEY(memory_session_id) REFERENCES sdk_sessions(memory_session_id) ON DELETE CASCADE
  )
`);

this.db.run(`INSERT INTO observations_new SELECT ${columnList} FROM observations`);
```
Critical: Column-order mismatch in INSERT ... SELECT will corrupt data.
columnList is derived from PRAGMA table_info(observations), which returns columns in the existing table's order. After prior migrations (9 → ALTER TABLEs in 11, 21), the existing order ends with: ..., prompt_number, created_at, created_at_epoch, discovery_tokens, tags.
However, the new table definition (lines 711–731) places discovery_tokens and tags before created_at and created_at_epoch:
..., prompt_number, discovery_tokens, tags, created_at, created_at_epoch.
Since line 733 uses INSERT INTO observations_new SELECT ${columnList} FROM observations — a positional INSERT without an explicit target column list — created_at values will be written into the discovery_tokens column, created_at_epoch into tags, and so on.
Fix: specify target columns to match by name
```diff
-this.db.run(`INSERT INTO observations_new SELECT ${columnList} FROM observations`);
+this.db.run(`INSERT INTO observations_new (${columnList}) SELECT ${columnList} FROM observations`);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

The suggested migration is identical to the block above except for the final copy statement, which gains an explicit target column list:

```typescript
this.db.run(`INSERT INTO observations_new (${columnList}) SELECT ${columnList} FROM observations`);
```
🤖 Prompt for AI Agents
In `@console/src/services/sqlite/migrations/runner.ts` around lines 703 - 733, The
INSERT/SELECT is positional and will misalign because the new CREATE TABLE
observations_new defines columns in a different order than the existing table;
fix by explicitly specifying the target column list to match the new table's
column order (or reorder the SELECT to match the new table). Concretely, update
the code that builds columnList (tableInfo, columns) and change the INSERT to
use an explicit target list that matches the fields declared in CREATE TABLE
observations_new (e.g., INSERT INTO observations_new (id, memory_session_id,
project, text, type, title, subtitle, facts, narrative, concepts, files_read,
files_modified, prompt_number, discovery_tokens, tags, created_at,
created_at_epoch) SELECT id, memory_session_id, ... FROM observations) so
columns are matched by name rather than position.
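The name-matched copy recommended above can be sketched as a small SQL builder (the standalone function is illustrative; the real migration inlines this):

```typescript
// Illustrative: build the order-independent copy statement. The same
// columnList drives both the target list and the SELECT, so each value
// lands in its same-named column regardless of either table's declared order.
function buildCopySql(columns: string[]): string {
  const columnList = columns.join(", ");
  return `INSERT INTO observations_new (${columnList}) SELECT ${columnList} FROM observations`;
}

// Even if the live table ends with created_at/created_at_epoch after the
// older ALTER TABLEs, the explicit target list keeps the copy aligned.
const sql = buildCopySql(["id", "prompt_number", "created_at", "created_at_epoch", "discovery_tokens", "tags"]);
```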
```typescript
await this.dbManager.initialize();

const projectRoot = process.env.CLAUDE_PROJECT_ROOT || process.cwd();
const projectName = path.basename(projectRoot);
this.dbManager.getSessionStore().upsertProjectRoot(projectName, projectRoot);
```
Edge case: empty project name if cwd() returns root path.
path.basename("/") returns "", so if CLAUDE_PROJECT_ROOT is unset and the process runs from /, projectName will be an empty string, creating a project root entry with an empty key. This is an unlikely but possible edge case in containerized environments.
Proposed guard

```diff
 const projectRoot = process.env.CLAUDE_PROJECT_ROOT || process.cwd();
 const projectName = path.basename(projectRoot);
-this.dbManager.getSessionStore().upsertProjectRoot(projectName, projectRoot);
+if (projectName) {
+  this.dbManager.getSessionStore().upsertProjectRoot(projectName, projectRoot);
+}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```typescript
await this.dbManager.initialize();

const projectRoot = process.env.CLAUDE_PROJECT_ROOT || process.cwd();
const projectName = path.basename(projectRoot);
if (projectName) {
  this.dbManager.getSessionStore().upsertProjectRoot(projectName, projectRoot);
}
```
🤖 Prompt for AI Agents
In `@console/src/services/worker-service.ts` around lines 288 - 293, When
computing projectName from projectRoot (projectRoot =
process.env.CLAUDE_PROJECT_ROOT || process.cwd()) add a guard for the edge case
where path.basename(projectRoot) returns an empty string; if projectName === ""
replace it with a safe fallback (for example use projectRoot itself if
non-empty, or os.hostname(), or a literal like "root") before calling
this.dbManager.getSessionStore().upsertProjectRoot(projectName, projectRoot).
Ensure the check occurs immediately after computing projectName so initialize()
and upsertProjectRoot receive a non-empty key.
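The edge case is easy to confirm with Node's `path` module; the fallback value `"root"` below is one of the illustrative options the prompt suggests:

```typescript
import path from "node:path";

// path.basename("/") is "", so an unguarded basename would create an
// empty project key. "root" is an illustrative fallback from the prompt.
function projectNameFrom(projectRoot: string): string {
  const name = path.basename(projectRoot);
  return name !== "" ? name : "root";
}
```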
```typescript
const statusMatch = content.match(/^Status:\s*(\w+)/m);
if (!statusMatch) {
  return null;
}

const status = statusMatch[1] as "PENDING" | "COMPLETE" | "VERIFIED";
```
Unchecked cast of status to the union type.
statusMatch[1] captures any \w+ string, but it's directly cast to "PENDING" | "COMPLETE" | "VERIFIED" without validation. A plan file with Status: DRAFT would pass through as a valid PlanInfo, leading to incorrect counts in getPlanStats (it wouldn't match any of the verified/inProgress/pending filters, silently inflating totalSpecs without being categorized).
Proposed fix

```diff
 const statusMatch = content.match(/^Status:\s*(\w+)/m);
 if (!statusMatch) {
   return null;
 }
-const status = statusMatch[1] as "PENDING" | "COMPLETE" | "VERIFIED";
+const rawStatus = statusMatch[1].toUpperCase();
+if (rawStatus !== "PENDING" && rawStatus !== "COMPLETE" && rawStatus !== "VERIFIED") {
+  return null;
+}
+const status = rawStatus as "PENDING" | "COMPLETE" | "VERIFIED";
```
🤖 Prompt for AI Agents
In `@console/src/services/worker/http/routes/utils/planFileReader.ts` around lines
29 - 34, statusMatch currently captures any \w+ and casts statusMatch[1] to the
union type without validation; update the code that sets status (the status
variable derived from statusMatch in planFileReader.ts) to validate the captured
string against the allowed values ("PENDING","COMPLETE","VERIFIED") before
casting—if it doesn't match, return null (so PlanInfo isn't created for unknown
statuses) or explicitly map unknown values to a safe default; this ensures
getPlanStats only counts known statuses and prevents silent inflation of
totalSpecs.
```diff
 private activeProcesses = new Set<Subprocess>();
-private statusCache: { data: VexorStatus; timestamp: number } | null = null;
+private statusCache = new Map<string, { data: VexorStatus; timestamp: number }>();
 private _isReindexing = false;
```
_isReindexing is a global singleton but reindex is now per-project.
With per-project scoping, two problems arise:
- False blocking: if project A is reindexing, a reindex request for project B is rejected at line 214 (`if (this._isReindexing)`), even though B's index is idle.
- False status: while project A reindexes, `handleStatus` for project B returns `isReindexing: true` (lines 143/160), misleading the UI.

Consider making `_isReindexing` a `Set<string>` keyed by projectRoot:
Proposed fix

```diff
-private _isReindexing = false;
+private _reindexingProjects = new Set<string>();
```

Then in handleReindex:

```diff
-if (this._isReindexing) {
+if (this._reindexingProjects.has(projectRoot)) {
   res.status(409).json({ error: "Reindexing already in progress" });
   return;
 }
 ...
-this._isReindexing = true;
+this._reindexingProjects.add(projectRoot);
 ...
 } finally {
-  this._isReindexing = false;
+  this._reindexingProjects.delete(projectRoot);
   this.statusCache.delete(projectRoot);
 }
```

And in handleStatus / emptyStatus, pass projectRoot:

```diff
-res.json({ ...cached.data, isReindexing: this._isReindexing });
+res.json({ ...cached.data, isReindexing: this._reindexingProjects.has(projectRoot) });
```

Also applies to: 213-248
🤖 Prompt for AI Agents
In `@console/src/services/worker/http/routes/VexorRoutes.ts` at line 114, The
field _isReindexing is a global boolean but reindexing is per-project; change
_isReindexing to a Set<string> (projectRoot keys) and update all usages: in
handleReindex (add projectRoot to _isReindexing at start, check
this._isReindexing.has(projectRoot) to reject concurrent per-project requests,
and remove projectRoot from the set in finally), and in handleStatus /
emptyStatus return isReindexing: this._isReindexing.has(projectRoot) instead of
the boolean; also update any other checks in the reindex flow (methods
referenced: _isReindexing, handleReindex, handleStatus, emptyStatus) to use
set.has/add/delete semantics so reindexing is tracked per project.
```ts
try {
  const mainRoot = this.getMainRepoRoot(projectRoot);
  if (!mainRoot) {
    res.status(500).json({ error: "Cannot determine main repository root" });
    return;
  }

  execFileSync("git", ["checkout", status.baseBranch], { cwd: mainRoot, encoding: "utf-8", timeout: 10000 });
  execFileSync("git", ["merge", "--squash", status.branch], { cwd: mainRoot, encoding: "utf-8", timeout: 30000 });

  const slug = status.planSlug || status.branch.replace("spec/", "");
  execFileSync("git", ["commit", "-m", `feat: implement spec/${slug}`], { cwd: mainRoot, encoding: "utf-8", timeout: 10000 });

  const commitHash = execFileSync("git", ["rev-parse", "HEAD"], { cwd: mainRoot, encoding: "utf-8", timeout: 5000 }).toString().trim();

  const statOutput = execFileSync("git", ["diff", "--stat", "HEAD~1"], { cwd: mainRoot, encoding: "utf-8", timeout: 5000 }).toString();
  const filesChanged = this.countFilesFromStat(statOutput);

  execFileSync("git", ["worktree", "remove", projectRoot, "--force"], { cwd: mainRoot, encoding: "utf-8", timeout: 10000 });
  execFileSync("git", ["branch", "-D", status.branch], { cwd: mainRoot, encoding: "utf-8", timeout: 5000 });

  res.json({ success: true, files_changed: filesChanged, commit_hash: commitHash });
} catch (error) {
  res.status(500).json({ error: (error as Error).message });
}
});
```
Sync operation is not atomic — partial failure leaves the main repo in an inconsistent state.
This handler performs a multi-step sequence: checkout → squash merge → commit → worktree remove → branch delete. If the merge fails (conflicts), git leaves the main repo in a merge state with no `git merge --abort` cleanup. If any step after the commit fails, the commit stands but cleanup is incomplete.
Key risks:
- Merge conflict at line 104: leaves the main repo with an in-progress merge and no rollback.
- Post-commit failure (lines 114–115): the commit has been made but the worktree/branch may not be cleaned up, and the 500 response would mislead the client into thinking the sync failed entirely.
- Concurrent work in the main repo: `git checkout` (line 103) forcefully switches the main repo's working tree, which is disruptive if someone is working there.

Consider adding rollback logic (e.g., `git merge --abort` on merge failure, or wrapping the critical section in a try/catch with cleanup), and/or separating the cleanup step so that a partial sync failure is communicated accurately.
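The rollback-and-cleanup shape suggested here can be sketched with the exec call injected, so the sequence is exercisable without a real repository. The function name, argument shapes, and result type are illustrative only, not the handler's actual API:

```typescript
type Exec = (args: string[]) => void;

// Runs the squash-sync sequence. On merge failure, attempts `git merge --abort`
// so the main repo is not left mid-merge, then rethrows. If the commit succeeds
// but worktree/branch cleanup fails, returns success with a warning instead of
// masking the landed commit behind a 500.
function syncWithRollback(
  exec: Exec,
  branch: string,
  base: string
): { ok: boolean; warning?: string } {
  exec(["checkout", base]);
  try {
    exec(["merge", "--squash", branch]);
  } catch (err) {
    try {
      exec(["merge", "--abort"]); // leave no in-progress merge behind
    } catch {
      // nothing to abort; ignore
    }
    throw err;
  }
  exec(["commit", "-m", `feat: implement ${branch}`]);
  // Commit landed: cleanup failures must not be reported as a sync failure.
  try {
    exec(["worktree", "remove", branch, "--force"]);
    exec(["branch", "-D", branch]);
  } catch (err) {
    return { ok: true, warning: `cleanup failed: ${(err as Error).message}` };
  }
  return { ok: true };
}
```

In the real route, `exec` would wrap `execFileSync("git", args, { cwd: mainRoot, ... })`; injecting it keeps the conflict and cleanup-failure paths unit-testable.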
🤖 Prompt for AI Agents
In `@console/src/services/worker/http/routes/WorktreeRoutes.ts` around lines 96 -
121, The handler performing checkout→merge→commit→cleanup in the route
(referenced by getMainRepoRoot, countFilesFromStat and the execFileSync git
calls) is not atomic and can leave the main repo in a bad state; update the code
to (1) wrap the critical git sequence in a try/catch/finally so any merge
failure runs execFileSync("git", ["merge","--abort"], {cwd: mainRoot}) and other
cleanup always runs, (2) separate the commit success path from cleanup so if
commit succeeds but worktree/branch removal fails you return success with a
warning (include commit hash and files_changed) and schedule/attempt cleanup
asynchronously or in finally, and (3) avoid disruptive git checkout on the main
repo by performing operations on a temporary worktree/branch (use git worktree
add or operate inside a cloned temp repo) instead of execFileSync("git",
["checkout", status.baseBranch], ...) to prevent affecting concurrent work.
Ensure all references to execFileSync in this route are updated accordingly and
errors include contextual info so clients receive accurate success/failure
state.
```ts
const formattedData = specStats.completionTimeline.map((item) => ({
  ...item,
  displayDate: new Date(item.date).toLocaleDateString('en-US', {
    month: 'short',
    day: 'numeric',
  }),
}));
```
Date parsing of YYYY-MM-DD strings can shift by a day in negative UTC offsets.
new Date("2026-02-08") is parsed as UTC midnight, so toLocaleDateString('en-US', ...) in timezones like UTC-8 will display "Feb 7" instead of "Feb 8". Consider appending T00:00:00 or parsing the date parts manually to treat it as a local date.
Suggested fix

```diff
 const formattedData = specStats.completionTimeline.map((item) => ({
   ...item,
-  displayDate: new Date(item.date).toLocaleDateString('en-US', {
+  displayDate: new Date(item.date + 'T00:00:00').toLocaleDateString('en-US', {
     month: 'short',
     day: 'numeric',
   }),
 }));
```
🤖 Prompt for AI Agents
In `@console/src/ui/viewer/views/Dashboard/SpecActivity.tsx` around lines 46 - 52,
The mapping that builds formattedData uses new Date(item.date) which treats
"YYYY-MM-DD" as UTC and can shift the displayed day in negative UTC offsets;
update the mapping over specStats.completionTimeline to parse the YYYY-MM-DD
string into year/month/day (e.g., split on '-' and call new Date(year,
monthIndex, day)) and then call toLocaleDateString on that local Date when
computing displayDate so dates render correctly in local time; look for
formattedData, specStats.completionTimeline and displayDate to locate where to
change the parsing.
```tsx
{specStats.recentlyVerified.length > 0 && (
  <div>
    <div className="text-sm font-medium text-base-content/70 mb-2">Recently Verified</div>
    <div className="space-y-1">
      {specStats.recentlyVerified.map((spec) => (
        <div key={spec.name} className="flex justify-between items-center text-sm">
          <span className="font-mono text-base-content/80">{spec.name}</span>
          <span className="text-base-content/50">
            {new Date(spec.verifiedAt).toLocaleDateString()}
          </span>
        </div>
      ))}
    </div>
  </div>
)}
```
key={spec.name} assumes unique spec names.
If two specs share the same name (e.g., across projects), React will warn about duplicate keys and may mis-reconcile the list items. Consider using a composite key like ${spec.name}-${spec.verifiedAt}.
🤖 Prompt for AI Agents
In `@console/src/ui/viewer/views/Dashboard/SpecActivity.tsx` around lines 104 -
118, The map over specStats.recentlyVerified uses key={spec.name} which can
collide; update the key in the .map inside SpecActivity (the element with
key={spec.name}) to a stable composite key such as
`${spec.name}-${spec.verifiedAt}` or, preferably, a unique id like spec.id if
available to ensure React list reconciliation is correct.
```tsx
<StatsCard
  icon="lucide:scroll"
  label="Total Specs"
  value={specStats?.totalSpecs ?? 0}
/>
<StatsCard
  icon="lucide:shield-check"
  label="Verified"
  value={specStats?.verified ?? 0}
  subtext={successRate}
/>
<StatsCard
  icon="lucide:loader"
  label="In Progress"
  value={specStats?.inProgress ?? 0}
/>
```
🧩 Analysis chain
- 🏁 Script executed: `fd StatsCard -t f` (Repository: maxritter/claude-pilot, output length: 117)
- 🏁 Script executed: `fd StatsGrid -t f` (Repository: maxritter/claude-pilot, output length: 117)
- 🏁 Script executed: `cat -n console/src/ui/viewer/views/Dashboard/StatsCard.tsx` (Repository: maxritter/claude-pilot, output length: 1463)
- 🏁 Script executed: `cat -n console/src/ui/viewer/views/Dashboard/StatsGrid.tsx` (Repository: maxritter/claude-pilot, output length: 2705)
Apply .toLocaleString() to spec card values for consistent number formatting.
Lines 44, 49, and 55 pass raw numbers while other cards (lines 39, 60, 70, 76) use .toLocaleString(). Although StatsCard accepts both string | number, this inconsistency means large spec numbers will display without locale formatting (e.g., no thousands separators).
Suggested fix

```diff
 <StatsCard
   icon="lucide:scroll"
   label="Total Specs"
-  value={specStats?.totalSpecs ?? 0}
+  value={(specStats?.totalSpecs ?? 0).toLocaleString()}
 />
 <StatsCard
   icon="lucide:shield-check"
   label="Verified"
-  value={specStats?.verified ?? 0}
+  value={(specStats?.verified ?? 0).toLocaleString()}
   subtext={successRate}
 />
 <StatsCard
   icon="lucide:loader"
   label="In Progress"
-  value={specStats?.inProgress ?? 0}
+  value={(specStats?.inProgress ?? 0).toLocaleString()}
 />
```

🤖 Prompt for AI Agents
In `@console/src/ui/viewer/views/Dashboard/StatsGrid.tsx` around lines 41 - 56,
The spec-related StatsCard values are passed as raw numbers; update the three
value props on the StatsCard instances (for totalSpecs, verified, inProgress) to
use locale-formatted strings by calling .toLocaleString() on the resolved number
(e.g., (specStats?.totalSpecs ?? 0).toLocaleString()), so each StatsCard value
prop is consistently formatted as a string like the other cards; keep the
subtext (successRate) unchanged.
```ts
const loadStatus = useCallback(async () => {
  try {
    const res = await fetch('/api/worktree/status');
    const data = await res.json();
    setStatus(data);

    if (data.active) {
      const diffRes = await fetch('/api/worktree/diff');
      const diffData = await diffRes.json();
      setFiles(diffData.files || []);
    } else {
      setFiles([]);
    }
  } catch {
    setStatus(null);
  } finally {
    setIsLoading(false);
  }
```
Missing response.ok checks before parsing JSON.
Both fetch('/api/worktree/status') (line 42) and fetch('/api/worktree/diff') (line 47) parse the response as JSON without verifying the response status first. If the server returns a 4xx/5xx, res.json() may succeed but yield an error-shaped payload (e.g., { error: "..." }), which would be silently set into component state as if it were valid WorktreeStatus or diff data.
Proposed fix

```diff
 try {
   const res = await fetch('/api/worktree/status');
+  if (!res.ok) throw new Error(`Status fetch failed: ${res.status}`);
   const data = await res.json();
   setStatus(data);
   if (data.active) {
     const diffRes = await fetch('/api/worktree/diff');
+    if (!diffRes.ok) throw new Error(`Diff fetch failed: ${diffRes.status}`);
     const diffData = await diffRes.json();
     setFiles(diffData.files || []);
```

🤖 Prompt for AI Agents
In `@console/src/ui/viewer/views/Spec/WorktreePanel.tsx` around lines 40 - 57, The
loadStatus async function is parsing fetch responses without checking
response.ok; update the fetch handling for both fetch('/api/worktree/status')
and fetch('/api/worktree/diff') inside loadStatus to check res.ok (and
diffRes.ok) before calling .json(), and if not ok throw or reject with a
descriptive Error (include status and response text) so the catch branch will
run; ensure on error you setStatus(null) and setFiles([]) as appropriate and
keep setIsLoading(false) in finally.
```ts
it("should update root_path and last_seen_at on duplicate project", () => {
  store.upsertProjectRoot("my-project", "/old/path");
  store.upsertProjectRoot("my-project", "/new/path");

  const row = store.db
    .query("SELECT * FROM project_roots WHERE project = ?")
    .get("my-project") as { project: string; root_path: string } | null;

  expect(row!.root_path).toBe("/new/path");
});
```
Test name mentions last_seen_at but doesn't verify it was updated.
The test asserts only root_path changed. To match the stated intent, capture last_seen_at from both the first and second upsert, and verify the timestamp was refreshed. Since the in-memory DB operates fast, you may need to introduce a small delay or at least assert last_seen_at is truthy on the updated row.
Suggested addition

```diff
 it("should update root_path and last_seen_at on duplicate project", () => {
   store.upsertProjectRoot("my-project", "/old/path");
+  const before = store.db
+    .query("SELECT last_seen_at FROM project_roots WHERE project = ?")
+    .get("my-project") as { last_seen_at: string };
   store.upsertProjectRoot("my-project", "/new/path");
   const row = store.db
     .query("SELECT * FROM project_roots WHERE project = ?")
-    .get("my-project") as { project: string; root_path: string } | null;
+    .get("my-project") as { project: string; root_path: string; last_seen_at: string } | null;
   expect(row!.root_path).toBe("/new/path");
+  expect(row!.last_seen_at).toBeTruthy();
 });
```
🤖 Prompt for AI Agents
In `@console/tests/sqlite/project-roots.test.ts` around lines 44 - 53, The test
"should update root_path and last_seen_at on duplicate project" currently only
checks root_path; update it to also verify last_seen_at was refreshed by reading
last_seen_at from the row before the second upsert and after, using
store.upsertProjectRoot("my-project", "/old/path") then capture the initial
last_seen_at (from project_roots via store.db.query(...).get), perform
store.upsertProjectRoot("my-project", "/new/path"), read the row again and
assert the new last_seen_at is updated (either greater than the previous
timestamp or at least truthy); if timing is tight, introduce a small delay
between upserts or relax the assertion to truthiness on the updated row to avoid
flakiness.
Replace GumroadLicenseVerifier with PolarLicenseVerifier using dual fallback (Polar first, Gumroad for existing subscribers). Rename tiers standard/enterprise to solo/team with backward-compatible state loading. Add embedded Polar checkout on website, portal API endpoint, and update all license URLs. Fix Cython build to include trial.py module and strip empty TYPE_CHECKING blocks.
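The dual fallback described above (Polar first, Gumroad only for keys Polar does not recognize) can be sketched as an ordered verifier chain. The real verifiers live in the Python launcher, so this TypeScript shape and the names `Verifier` and `verifyWithFallback` are illustrative assumptions, not the actual API:

```typescript
interface VerifyResult {
  valid: boolean;
  tier?: "solo" | "team";
}
type Verifier = (key: string) => Promise<VerifyResult>;

// Try each verifier in order; the first one that accepts the key wins.
// Listing Polar before Gumroad keeps existing Gumroad subscribers working
// while new licenses verify against Polar.
async function verifyWithFallback(key: string, verifiers: Verifier[]): Promise<VerifyResult> {
  for (const verify of verifiers) {
    const result = await verify(key);
    if (result.valid) return result;
  }
  return { valid: false };
}
```

A design note: because the chain short-circuits on the first valid result, Gumroad is only contacted when Polar rejects the key, so the fallback adds no latency for new subscribers.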
The statusline writes context percentage to context-pct.json but without a timestamp. When a new Claude session starts within the same Pilot session, the stale file from the previous session caused false high context readings (e.g. 87% when actual was 31%). Add timestamp to cache writes and 60-second TTL validation to both readers (context_monitor.py hook and helper.py CLI).
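The fix amounts to writing a timestamp alongside the cached percentage and rejecting entries older than the TTL on read. A minimal sketch of that freshness check (field names are illustrative; the real cache is `context-pct.json`, read by the Python hook and CLI helper):

```typescript
interface ContextCache {
  pct: number;      // cached context percentage
  writtenAt: number; // epoch milliseconds at write time
}

const TTL_MS = 60_000; // 60-second TTL, per the fix description

// Return the cached percentage only if it was written within the TTL window;
// otherwise return null so a new session never inherits a stale reading
// (e.g. 87% left over from the previous session).
function readFreshPct(cache: ContextCache | null, now: number): number | null {
  if (!cache) return null;
  if (now - cache.writtenAt > TTL_MS) return null;
  return cache.pct;
}
```

Both readers applying the same TTL check means a stale file is simply ignored rather than needing to be deleted on session start.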
Previously asked at plan approval (Step 1.8) which was too late and often forgotten. Now asked by the dispatcher (spec.md Section 0.1.1) before planning begins, passed to spec-plan via --worktree flag, and written into the plan header at creation time (Step 1.1).
- Auto-switch POLAR_ORG_ID based on POLAR_API_BASE (sandbox vs prod)
- Make POLAR_EVENTS_PROXY_URL configurable via env var
- Add http.client.IncompleteRead to exception handlers
- Move license file from ~/.config/claude-pilot/ to ~/.pilot/
- Add progress feedback during activate/deactivate commands
- Conditional Polar embed checkout (production only, sandbox opens new tab)
- Update .env.development with working sandbox checkout link URLs
…ments Add parallel wave execution to spec-implement with spec-executor agent, deviation handling rules, discuss/clarify planning step, per-task commits, goal-backward verification, cognitive bias debugging guidance, agent model and skills optimization, and README/docs-site social proof sections.
… license display Restructure spec-implement to make parallel wave execution the primary execution path (Step 2.3/2.3a) instead of an optional sub-step (2.2b). Sequential TDD becomes the fallback (2.3b) for dependent tasks. Rename spec-executor agent to spec-implementer for consistency. Update workflow-enforcement rule references and README implement phase details. Update website components: WorkflowSteps highlights parallel sub-agents, AgentRoster replaces TDD card with IMPLEMENTER/Parallel Executor, WhatsInside adds parallel execution to spec summary. Add new site components (AgentRoster, DeploymentFlow, QualifierSection, TechStack) and update ComparisonSection, HeroSection, PricingSection, Index page. Add console license display: LicenseRoutes backend endpoint with cached pilot status, LicenseBadge component with useLicense hook in Topbar, and test suites for both. Rebuild worker service and viewer bundles. Update launcher config for seat-based pricing and tool redirect tests.
Use created_at, expires_at, and limit_activations from the Polar/Gumroad API response instead of local activation timestamps. Display seat count for multi-seat team licenses (e.g., "Team License (5 seats)"). Refresh seat and expiry data during periodic revalidation. Extract GumroadLicenseVerifier to launcher/gumroad.py to keep auth.py under the file size limit. Add gumroad.py to build MODULE_ORDER before auth.py so the merged single-file build resolves correctly.
…ontamination

- process-manager/graceful-shutdown: add mkdirSync in beforeEach, use try/catch for PID file backup, remove racy post-removal assertions
- worker-json-status: use node:fs instead of fs to avoid mock.module contamination from license-routes.test.ts, fix Buffer-to-string conversion for Bun's spawnSync
- vexor-status-widget: use absolute path for file existence check
🎉 This PR is included in version 6.3.0 🎉
The release is available on GitHub release.
Your semantic-release bot 📦🚀
Summary
Replaces `agent-browser` with Microsoft's `playwright-cli` across all rules, documentation, installer, and statusline. Adds a comprehensive rule with support for tabs, cookies, network mocking, tracing, and video recording.

Coming next
This PR lays the groundwork for two upcoming features:
- Spec dashboard metrics (docs/plans/2026-02-09-console-spec-dashboard-metrics.md): enrich the Console dashboard with spec-driven development metrics — total/verified/in-progress spec counts in StatsGrid, SpecActivity panel with completion charts, ObservationTimeline panel, and cross-project support via the `project_roots` table
- Spec worktrees: `/spec` sessions to work on different plans simultaneously without branch conflicts

Test plan
- Launcher tests pass (`uv run pytest -q`)
- Type checks pass (`basedpyright launcher` — 0 errors)
- Console tests pass (`cd console && bun test`)
- git-crypt encryption covers `docs/site/api/**` files
- `VERCEL_TOKEN` GitHub secret is set before merging (required for new deploy workflow)
- Hooks fail open when the worker is unavailable (`exit 0`)
New Features
Bug Fixes / Resilience
Security
Updates
Documentation
Tests
Chores