- Increase UsageProvider cache TTL from 60s to 300s for stability
- Share single UsageProvider instance across widgets in formatter
- Add multi-location trial markers with obfuscated binary format
- Remove rollback mechanism from installer steps (simplification)

## [5.1.10](v5.1.9...v5.1.10) (2026-01-23)

### Bug Fixes

* rename /setup to /sync and improve value messaging ([ae3b58f](ae3b58f))

- Update claude-mem marketplace and plugin if already installed
- TDD enforcer now checks for failing tests specific to the module being edited, not just any failing test in the cache

## [5.1.11](v5.1.10...v5.1.11) (2026-01-23)

### Bug Fixes

* improve claude-mem updates and TDD enforcer accuracy ([f4a7c5e](f4a7c5e))

- Rename patch_hook_paths to patch_claude_paths to handle both hooks and bin paths
- Add BIN_PATH_PATTERN and SOURCE_REPO_BIN_PATH constants for statusLine command
- Add Vercel Analytics and Speed Insights to docs site
- Create vercel.json with SPA routing, security headers, and caching

## [5.1.12](v5.1.11...v5.1.12) (2026-01-23)

### Bug Fixes

* patch statusLine bin path in installer and add Vercel analytics ([f88d5ea](f88d5ea))

- Skills now use naming convention: standard (plan, implement, verify, standards-*) vs custom (any other name), all in flat .claude/skills/
- Add standards-skill-creator skill for creating new skills
- Add update-refs custom skill for updating naming/docs across codebase
- /sync now discovers and creates custom skills via skill-creator
- Update README, banner, site for consistent terminology
- Fix file_checker_python.py and tdd_enforcer.py hooks

…eanup
- Move plan, implement, verify from skills/ to commands/
- Update installer to handle command/skill separation:
  - Standard commands (spec, sync, plan, implement, verify) cleared on update
  - Custom commands preserved
  - Migrated skill directories removed
  - Standards-* skills cleared, custom skills preserved
- Fix stale pipe detection: check if PIDs are still running before counting other sessions, clean up pipes from dead processes
- Fix claude-mem shutdown: wait for graceful shutdown before force killing, change warning to debug level
- Update README and website with new structure documentation
- Add unit tests for all new behaviors

…, and OAuth improvements
- Add TipsWidget with 52 user-focused tips rotating every 15 minutes
- Add --quiet/-q flag to installer for minimal output during updates
- Add graceful CTRL+C handling with InstallationCancelled exception
- Move OAuth token prompt to cli.py alongside other preferences
- Write OAuth token directly to ~/.claude/.credentials.json
- Skip OAuth prompt if credentials already exist
- Add updater status messages for download/install progress
- Add gh-cli and web-search rules

- Remove qlty code quality tool from project
- Delete .qlty/ config directory and quality.yml workflow
- Update installer to use new LSP plugins from claude-code-lsps marketplace
- Update website tagline to match README
- Add release-dev.yml workflow for PR-based dev pre-releases
- Runs tests and builds in parallel for speed
- Creates GitHub pre-release with dev-{sha}-{date} versioning
- Auto-cleans old pre-releases (keeps last 5)
- Add claude-code-review.yml for automatic PR code review
- Add claude.yml for @claude mentions in issues/PRs
- Enhance install.sh with --dev and --version flags
- Update statusline formatter, tips, and widgets
The workflow was using github.sha which for PR events is a synthetic merge commit, not the actual dev branch commit. This caused tags to point to main-like code instead of dev branch code. Fixed by using github.event.pull_request.head.sha for all checkouts and version generation, with fallback to github.sha for workflow_dispatch.
- Add approve-release job with 'production' environment
- Display version and changelog for review before publish
- Remove Netlify deploy (now using Vercel)
## Walkthrough

Terminology shifted from “skill” to “command”. Plan files are created immediately and commands must invoke the next command (implement → verify) in the same response. Installer refactored: rollbacks and CCP/env steps removed, npm/marketplace flows added, target_version and quiet modes introduced, and PreToolUse redirection plus docs/CI updates applied.
## Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant SpecCmd as /spec (Plan)
    participant ImplementCmd as /implement
    participant VerifyCmd as /verify
    participant FS as Filesystem
    User->>SpecCmd: invoke /spec (create plan)
    SpecCmd->>FS: write plan file (docs/plans/<date>-slug.md)
    SpecCmd-->>User: return plan summary and plan path
    Note right of SpecCmd: CRITICAL — IMMEDIATELY call /implement in same response
    SpecCmd->>ImplementCmd: call /implement "docs/plans/...-feature.md"
    ImplementCmd->>FS: read plan, write implementation files
    ImplementCmd-->>User: return implementation summary
    Note right of ImplementCmd: CRITICAL — IMMEDIATELY call /verify in same response
    ImplementCmd->>VerifyCmd: call /verify "docs/plans/...-feature.md"
    VerifyCmd->>FS: run tests/checks, record results
    VerifyCmd-->>User: final verification result
```
Estimated code review effort: 🎯 4 (Complex), ⏱️ ~70 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 17
Caution: Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
docs/site/src/components/PricingSection.tsx (1)
5-35: Fix missing aria-labelledby target by adding a heading id
`aria-labelledby="pricing-heading"` points to no element, so the section label is not announced by screen readers. Add an `id` on the heading (and pass it through `SectionHeader`) to restore proper labeling.

✅ Proposed fix

```diff
-const SectionHeader = ({
-  title,
-  subtitle,
-}: {
-  title: string;
-  subtitle?: string;
-}) => (
+const SectionHeader = ({
+  id,
+  title,
+  subtitle,
+}: {
+  id?: string;
+  title: string;
+  subtitle?: string;
+}) => (
   <div className="text-center mb-12">
-    <h2 className="text-2xl sm:text-3xl md:text-4xl font-bold text-foreground mb-4">
+    <h2
+      id={id}
+      className="text-2xl sm:text-3xl md:text-4xl font-bold text-foreground mb-4"
+    >
       {title}
     </h2>
     {subtitle && (
       <p className="text-muted-foreground text-base sm:text-lg max-w-2xl mx-auto">
         {subtitle}
       </p>
     )}
   </div>
 );
@@
-      <SectionHeader
+      <SectionHeader
+        id="pricing-heading"
         title="Pricing"
         subtitle="Get instant access to best practices from daily production usage — a shortcut to state-of-the-art Claude Code development."
       />
```

.github/workflows/release.yml (1)
149-165: Artifacts may embed the wrong version; pass the computed version into the build.
Builds happen before semantic-release publishes, so `ccp.build` will likely use the previous version unless explicitly passed the computed version. Apply the same change to all build jobs (x86/arm64/darwin).

🐛 Suggested fix (apply to each build job)

```diff
-  python -m ccp.build --release && \
+  python -m ccp.build --release --version ${{ needs.prepare-release.outputs.version }} && \
```
19-53: Add Read to /spec allowed-tools to match the new orchestration rule. Lines 23-26 require using the Read tool, but frontmatter only allows Skill. If tool enforcement is active, /spec won’t be able to read plan status.
🛠️ Proposed fix
```diff
-allowed-tools: Skill
+allowed-tools: [Skill, Read]
```

installer/steps/ccp_binary.py (1)
192-280: Parse dev versions explicitly to avoid perpetual updates. Lines 192-195 introduce `dev-` tags, but `_get_installed_version` (Line 83) splits on `"v"`, which will turn a `dev-...` output into a mismatched suffix (e.g., `-e59320b-...`). That prevents `installed_version == target_version`, causing repeated downloads and failed “up-to-date” checks.

🛠️ Proposed fix

```diff
+import re
+
+_VERSION_RE = re.compile(r"\b(?:v(?P<semver>\d+\.\d+\.\d+)|(?P<dev>dev-[\w-]+))\b")
 @@
-        if "v" in output:
-            return output.split("v")[-1].strip()
+        match = _VERSION_RE.search(output)
+        if match:
+            return match.group("semver") or match.group("dev")
```
🤖 Fix all issues with AI agents
In @.claude/commands/implement.md:
- Around line 231-238: Replace the bold-as-heading block that currently reads
"**When all tasks are done and Status is set to COMPLETE, this command ends.**"
and the subsequent numbered instructions with a proper Markdown heading (e.g.,
"## ⛔ WHEN THIS COMMAND COMPLETES - CRITICAL") and keep the explanatory
sentences and numbered steps as normal paragraph/list content; update the
section title so it uses a real heading token instead of just bold text to
satisfy Markdownlint MD036 and preserve the exact wording and the required
immediate action to invoke Skill(verify, "plan-path").
In @.claude/commands/sync.md:
- Around line 323-340: The markdown fenced code blocks in the Sync doc lack
language identifiers and fail markdownlint; update the two backtick-fenced
blocks shown (the block containing the Question/Header/Options and the block
with Skill(skill="skill-creator", args="...")) by adding a language identifier
(e.g., text) after the opening ``` so both fences are ```text; ensure both
matching closing fences remain and no other content is changed.
In @.claude/rules/standard/lsp-servers.md:
- Line 5: The doc incorrectly states LSP positions are 1-based; update text and
all examples to say positions (line and character) are 0-based per the LSP spec
and adjust any sample position tuples accordingly. Search the file for the
phrase "1-based" and for examples using position tuples and replace them with
0-based values; verify surrounding examples and explanatory text (in the same
section as the "**Available:** Python (basedpyright), TypeScript
(typescript-language-server), Go (gopls)" header) are updated to reflect 0-based
indexing consistently. Ensure any sample code/comments that construct Position
objects or call LSP methods use 0-based indices.
In @.claude/rules/standard/web-search.md:
- Around line 27-45: Update the tool descriptions to fix wording and grammar:
ensure compound adjectives are hyphenated (e.g., "full-page content",
"batch-fetch multiple URLs", "request-timeout in ms" where applicable),
capitalize proper nouns consistently (e.g., "Playwright"), and make parallel
phrasing for option descriptions (`extractContent`, `waitUntil`, `timeout`,
`returnHtml`). Also standardize comparisons: use the exact tool identifiers
shown (WebSearch vs web-search/search, WebFetch vs web-fetch/fetch_url) and
rephrase the table entries to be concise and parallel (e.g., "Limited results;
no scraping" -> "Limited results, no scraping"). Apply these edits to the lines
mentioning fetch_url, fetch_urls, browser_install, WebSearch, web-search/search,
WebFetch, and web-fetch/fetch_url.
In @.devcontainer/devcontainer.json:
- Line 92: The postCreateCommand currently pipes a remote install script from
the mutable tag "maxritter/claude-codepro/v5.1.12/install.sh" which is a
supply-chain risk; update the "postCreateCommand" to fetch a pinned immutable
reference (replace the tag URL with a specific commit SHA URL) or fetch the
script first, verify its checksum/signature, and only then execute it; target
the "postCreateCommand" key and the URL string in
.devcontainer/devcontainer.json when making this change.
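To make the fetch-then-verify alternative concrete, here is a minimal sketch (not the project's actual installer): download the script first, check it against a pinned SHA-256 checksum, and only then execute it. The `PINNED_URL`/`PINNED_SHA256` names in the usage comment are placeholders.

```python
import hashlib


def verify_checksum(script_bytes: bytes, expected_sha256: str) -> bool:
    """Return True only if the downloaded script matches the pinned checksum."""
    actual = hashlib.sha256(script_bytes).hexdigest()
    return actual == expected_sha256


# Usage sketch (hypothetical names): fetch the script, verify, then run it.
# with urllib.request.urlopen(PINNED_URL) as resp:
#     data = resp.read()
# if not verify_checksum(data, PINNED_SHA256):
#     raise RuntimeError("install.sh checksum mismatch - refusing to execute")
```

A mismatch aborts before any code runs, which removes the window where a retargeted tag could serve different content.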
In @.github/workflows/claude-code-review.yml:
- Around line 14-20: The claude-review job will fail for forked PRs because
secrets.CLAUDE_CODE_OAUTH_TOKEN is unavailable; update the job named
claude-review to skip or only run for same-repo PRs by adding a guard condition
(e.g., use an if that compares github.event.pull_request.head.repo.full_name or
.owner with github.repository or github.repository_owner) or restrict the
workflow to workflow_dispatch only so the secret is never required for forks;
ensure the condition prevents execution when the secret is absent while keeping
the job's permissions and intent unchanged.
In @.github/workflows/claude.yml:
- Around line 21-27: The workflow permissions block currently sets contents,
pull-requests, and issues to read which prevents the
anthropics/claude-code-action@v1 from posting comments; update the permissions
object so that the keys contents, pull-requests, and issues have the value write
(leave id-token and actions as-is) so the Claude action can create/update
comments and PR threads.
In @.github/workflows/release-dev.yml:
- Around line 294-298: Replace the naive tag/release push with a defensive
delete-and-retry pattern: before pushing or creating a release, ensure any
existing release and its tag are force-removed by calling gh release delete
"$VERSION" --yes --cleanup-tag (fall back to deleting the git ref via gh api -X
DELETE "repos/${{ github.repository }}/git/refs/tags/$VERSION" if needed) and
make those calls tolerant (redirect errors / OR true) so reruns don’t fail;
after deletion, add a brief retry/wait loop that checks gh release view
"$VERSION" until it’s gone (with backoff) before running git tag "$VERSION", git
push origin "$VERSION" and gh release create to avoid eventual-consistency race
conditions.
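As a sketch of the delete-and-retry pattern described above (helper names and retry counts are illustrative, not taken from the workflow), the backoff loop can be separated from the `gh` calls so it is testable on its own:

```python
import subprocess
import time


def release_gone(version: str) -> bool:
    """True once `gh release view` no longer finds the release."""
    result = subprocess.run(
        ["gh", "release", "view", version],
        capture_output=True,
        text=True,
    )
    return result.returncode != 0


def wait_until(predicate, retries: int = 5, delay: float = 1.0) -> bool:
    """Poll `predicate` with linear backoff; return True once it succeeds."""
    for attempt in range(retries):
        if predicate():
            return True
        time.sleep(delay * (attempt + 1))
    return False


# Usage sketch: after `gh release delete "$VERSION" --yes --cleanup-tag`,
# run wait_until(lambda: release_gone(version)) before recreating the tag.
```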
In @.gitignore:
- Around line 232-235: The current .gitignore rule " .claude/skills/*/ "
prevents Git from recursing into subfolders so the negation
"!.claude/skills/standards-*/" won't re-include files; update the ignore rules
by keeping the existing ignore pattern (".claude/skills/*/") but add explicit
unignore patterns for the standards directories and their contents (e.g., add
negations targeting ".claude/skills/standards-*/" and the files within such as
".claude/skills/standards-*/*" or ".claude/skills/standards-*/**") so that the
standards-* directories and all nested files are tracked.
In `@docs/site/package.json`:
- Around line 43-44: Add/update the privacy policy to explicitly document use of
`@vercel/analytics` and `@vercel/speed-insights` and state that no third‑party
cookies or personal data are collected; then implement Vercel's redaction hooks
by configuring the analytics beforeSend handler to strip or redact any personal
data from request URLs, query parameters and custom events (e.g., remove emails,
user IDs, tokens) before sending, and ensure any places that call custom event
functions also avoid including identifiers; reference the package names
"@vercel/analytics" and "@vercel/speed-insights" and the analytics beforeSend
hook when making these changes.
In `@install.sh`:
- Around line 99-107: The current install.sh uses brittle sed edits on
"$config_file" (and variables mode) which can corrupt JSON; replace the
sed-based updates with a robust JSON editor (e.g., use jq or a short Python JSON
read/modify/write) to set the install_mode key, write updates to a temporary
file and atomically move it over the original, and validate the resulting JSON
(jq -e or python json.load) before replacing the original; ensure you handle the
case where the file doesn't exist by writing a minimal JSON object and still
validating it, and remove the sed branches that assume specific formatting.
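A minimal Python sketch of the read/modify/validate/replace flow described above (the `set_install_mode` helper name is illustrative; the `install_mode` key follows the review's description):

```python
import json
import os
import tempfile
from pathlib import Path


def set_install_mode(config_file: Path, mode: str) -> None:
    """Set install_mode in a JSON config atomically, validating before replace."""
    # Handle a missing file by starting from a minimal JSON object.
    config = json.loads(config_file.read_text()) if config_file.exists() else {}
    config["install_mode"] = mode

    fd, tmp_path = tempfile.mkstemp(dir=str(config_file.parent))
    try:
        with os.fdopen(fd, "w") as fh:
            fh.write(json.dumps(config, indent=2) + "\n")
        json.loads(Path(tmp_path).read_text())  # validate before replacing
        os.replace(tmp_path, config_file)  # atomic on the same filesystem
    except Exception:
        os.unlink(tmp_path)
        raise
```

Unlike sed edits, this cannot leave the file half-written: the original is only replaced after the temporary file parses as valid JSON.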
In `@installer/steps/ccp_binary.py`:
- Around line 296-315: The non-interactive check is calling the bound method
instead of invoking it, so the branch always treats the UI as non-interactive;
change the condition from checking ui.non_interactive to calling it (i.e., use
if ui.non_interactive():) in the retry loop that surrounds
_download_ccp_artifacts and _check_macos_gatekeeper, ensuring subsequent
branches (the call to ui.confirm and the "skip retry" behavior) run only when
the UI is actually non-interactive.
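The bug pattern behind this (and the similar `ui.quiet` case in finalize.py) is that referencing a bound method without parentheses yields the method object, which is always truthy. A stripped-down illustration (the `UI` class here is a hypothetical stand-in, not the installer's real class):

```python
class UI:
    """Hypothetical stand-in for the installer's console UI."""

    def __init__(self, non_interactive: bool) -> None:
        self._non_interactive = non_interactive

    def non_interactive(self) -> bool:
        return self._non_interactive


ui = UI(non_interactive=False)

# Bug: the bound method object itself is truthy, so this branch always runs.
assert bool(ui.non_interactive) is True

# Fix: call the method to get the actual flag.
assert ui.non_interactive() is False
```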
In `@installer/steps/dependencies.py`:
- Around line 810-846: The JSON written by _configure_web_mcp_servers currently
calls claude_config_path.write_text(json.dumps(config, indent=2)) without a
trailing newline; update the write to append a newline (e.g.,
write_text(json.dumps(config, indent=2) + "\n")) so the output matches the
behavior of _patch_claude_config and avoids missing-trailing-newline issues;
modify the write call in the _configure_web_mcp_servers function accordingly and
keep the rest of the logic unchanged.
In `@installer/steps/finalize.py`:
- Around line 96-102: The condition is using the bound method ui.quiet instead
of calling it, so the success panel never shows; update the conditional in
finalize.py to call the method (if not ui.quiet()) so the block that prints the
rule, star message, and Installed version (__version__) runs when not in quiet
mode and not when ui.quiet() returns True.
In `@LICENSE`:
- Line 41: Replace the ambiguous sentence "Most of Claude CodePro's source code
is open source under AGPL-3.0." with a clear, explicit licensing statement that
enumerates which components are proprietary versus AGPL-3.0; either state "All
source code is AGPL-3.0" if applicable, or use the format "All source code is
AGPL-3.0 except: [list proprietary components]" (list component names or
directories) so readers and contributors can unambiguously determine licensing
obligations.
In `@README.md`:
- Around line 107-111: Update the README entry that reads "Local vector store
based semantic code search for token-efficient retrieval" to hyphenate the
compound modifier as "vector-store-based" so it becomes "Local
vector-store-based semantic code search for token-efficient retrieval"; locate
the bullet under the "**Semantic Search**" heading and replace the unhyphenated
phrase with the hyphenated form to fix the grammar.
- Around line 56-66: The README uses bolded emphasis as section titles (e.g.,
"**Option A: 🐳 Dev Container (Recommended)**" and "**Option B: 🍺 Local
Installation**"), which triggers markdownlint "emphasis-as-heading"; replace
those bolded lines with proper Markdown headings (for example "## Option A: 🐳
Dev Container (Recommended)" and "## Option B: 🍺 Local Installation") and
ensure any other similar bolded section titles in the file are converted to
consistent heading levels.
♻️ Duplicate comments (1)
.claude/hooks/file_checker_python.py (1)
153-156: Ruff detection still misses 4+ letter prefixes.
The new regex limits prefixes to 1–3 letters, so codes like `PERF`/`FURB` won’t be detected and `has_issues` can be false. Consider widening the pattern or parsing Ruff JSON output.

🔧 Suggested minimal fix (broader regex)

```diff
-    error_pattern = re.compile(r":\d+:\d+: [A-Z]{1,3}\d+")
+    error_pattern = re.compile(r":\d+:\d+:\s+[A-Z]+\d+")
```
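The difference between the two patterns can be checked directly; the sample Ruff output lines below are illustrative:

```python
import re

narrow = re.compile(r":\d+:\d+: [A-Z]{1,3}\d+")  # current pattern
broad = re.compile(r":\d+:\d+:\s+[A-Z]+\d+")     # suggested fix

perf_line = "app/main.py:12:5: PERF401 Use a list comprehension"
e_line = "app/main.py:3:80: E501 Line too long"

# A 4-letter prefix slips past the narrow pattern but not the broad one.
assert narrow.search(perf_line) is None
assert broad.search(perf_line) is not None

# Short prefixes match under both patterns.
assert narrow.search(e_line) is not None
assert broad.search(e_line) is not None
```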
🧹 Nitpick comments (14)
pyproject.toml (1)
108-111: Consider replacing blanket per-file ignores with targeted noqa comments. The pattern `"installer/steps/*"` applies PLR0913 and PLR0915 ignores across all step modules. Blanket ignores can hide complexity regressions. For functions that legitimately exceed these limits, use inline `# noqa: PLR0913, PLR0915` instead to keep linter signals active elsewhere in the codebase.

.claude/skills/standards-skill-creator/SKILL.md (1)
39-48: Add language identifiers to fenced blocks for lint compliance (Lines 39-48, 75-82). markdownlint flags these fences. Use a neutral language like `text` to keep formatting consistent.

♻️ Suggested fix

````diff
-```
+```text
 skill-name/
 ├── SKILL.md (required)
 │   ├── YAML frontmatter (name, description)
 │   └── Markdown instructions
 └── Bundled Resources (optional)
     ├── scripts/ - Executable code
     ├── references/ - Documentation loaded as needed
     └── assets/ - Files used in output (templates, etc.)
````

````diff
-```
+```text
 bigquery-skill/
 ├── SKILL.md (overview + navigation)
 └── references/
     ├── finance.md
     ├── sales.md
     └── product.md
````

Also applies to: 75-82
.claude/hooks/tdd_enforcer.py (2)
93-123: Loosen test-name matching to reduce false warnings.
Exact matching only catches `test_<module>.py`/`<module>_test.py`; common `test_<module>_*` patterns will still warn.

♻️ Suggested adjustment

```diff
-    if test_name == f"test_{module_name}" or test_name == f"{module_name}_test":
-        return True
+    if (
+        test_name == f"test_{module_name}"
+        or test_name.startswith(f"test_{module_name}_")
+        or test_name == f"{module_name}_test"
+        or test_name.startswith(f"{module_name}_test_")
+    ):
+        return True
```
179-198: Skip `__init__.py` edits to avoid noisy warnings.
Package initializers rarely have 1:1 test modules; short-circuiting avoids repeated false positives.

♻️ Suggested adjustment

```diff
     if file_path.endswith(".py"):
+        if Path(file_path).name == "__init__.py":
+            return 0
         path = Path(file_path).parent
         found_failing = False
```

installer/platform_utils.py (1)
11-31: Resolve `nvidia-smi` via `shutil.which` or constrain the subprocess `PATH` to mitigate S607 (PATH spoofing). The current code passes a non-absolute executable path to `subprocess.run()`, which triggers S607 (CWE-426 untrusted search path). The suggested fix using `shutil.which()` makes the resolved path explicit, but note that it doesn't eliminate PATH spoofing if the environment's `PATH` variable is compromised. For stronger defense, either validate the resolved path against known directories (`/usr/bin`, `/bin`) or pass a constrained `env` dict with a minimal `PATH` to `subprocess.run()`.

🔧 Suggested adjustment

```diff
-    proc = subprocess.run(
-        ["nvidia-smi"],
-        capture_output=True,
-        timeout=10,
-    )
-    if proc.returncode == 0:
-        return True
+    nvidia_smi = shutil.which("nvidia-smi")
+    if nvidia_smi:
+        proc = subprocess.run(
+            [nvidia_smi],
+            capture_output=True,
+            timeout=10,
+        )
+        if proc.returncode == 0:
+            return True
```

docs/site/vercel.json (1)
43-51: Consider escaping the dot in the woff2 pattern. The pattern `/(.*).woff2` treats `.` as "any character" in regex. While unlikely to cause issues in practice, escaping it as `/(.*)\\.woff2` would be more precise.

Suggested fix

```diff
     {
-      "source": "/(.*).woff2",
+      "source": "/(.*)\\.woff2",
       "headers": [
```

.claude/commands/plan.md (1)
478-490: Clarify the handoff mechanics to /spec. The CRITICAL section describes what /spec MUST do next, but this is within the `/plan` command file. This could be confusing since the instructions target behavior of a different command (/spec). Consider either:
- Moving this guidance to the /spec command file
- Rephrasing to clarify this is documenting the expected contract from /spec's perspective
install.sh (2)
14-37: Consider quoting `$INSTALLER_ARGS` accumulation to handle spaces. The current approach concatenates arguments with spaces, which can break if an argument contains spaces. Consider using an array instead:

♻️ Suggested improvement using arrays

```diff
-INSTALLER_ARGS=""
+INSTALLER_ARGS=()
 while [ $# -gt 0 ]; do
   case "$1" in
     --dev)
       INSTALL_DEV=true
       shift
       ;;
     --version)
       INSTALL_VERSION="$2"
       shift 2
       ;;
     --version=*)
       INSTALL_VERSION="${1#*=}"
       shift
       ;;
     *)
-      if [ -z "$INSTALLER_ARGS" ]; then
-        INSTALLER_ARGS="$1"
-      else
-        INSTALLER_ARGS="$INSTALLER_ARGS $1"
-      fi
+      INSTALLER_ARGS+=("$1")
       shift
       ;;
   esac
 done
```

Then at line 478, use `"${INSTALLER_ARGS[@]}"` instead of `$INSTALLER_ARGS`.
39-58: Complex text processing for pre-release extraction may be fragile. The pipeline using `tr`, `grep`, `paste`, `sed`, and `sort` to extract dev releases is clever but brittle. If GitHub changes their JSON structure or field ordering, this could silently fail.

Consider adding a fallback or using `jq` if available:

♻️ Suggested improvement with jq fallback

```diff
 get_latest_prerelease() {
   local api_url="https://api.github.com/repos/${REPO}/releases"
   local releases=""
   if command -v curl >/dev/null 2>&1; then
     releases=$(curl -fsSL "$api_url" 2>/dev/null) || true
   elif command -v wget >/dev/null 2>&1; then
     releases=$(wget -qO- "$api_url" 2>/dev/null) || true
   fi
   if [ -z "$releases" ]; then
     return 1
   fi
+  # Try jq first if available (more reliable)
+  if command -v jq >/dev/null 2>&1; then
+    echo "$releases" | jq -r '[.[] | select(.tag_name | startswith("dev-"))] | sort_by(.created_at) | reverse | .[0].tag_name // empty'
+    return
+  fi
   # Extract dev-* releases with their created_at timestamp, sort by timestamp, get latest
-  # Format: created_at|tag_name, then sort by created_at descending
   echo "$releases" | tr ',' '\n' | grep -E '"(tag_name|created_at)"' | paste - - | \
     grep 'dev-' | sed 's/.*"created_at"[^"]*"\([^"]*\)".*"tag_name"[^"]*"\([^"]*\)".*/\1|\2/' | \
     sort -t'|' -k1 -r | head -1 | cut -d'|' -f2
 }
```
239-258: Clean up unused variable `original_info`. The variable `original_info` is assigned but never used. This was flagged by static analysis (F841).

♻️ Remove unused variable

```diff
     ui = Console(non_interactive=True)
     # Capture printed output by mocking ui.info
     info_calls = []
-    original_info = ui.info
     ui.info = lambda msg: info_calls.append(msg)
```

installer/steps/dependencies.py (4)
169-173: Silent exception handling is acceptable for cleanup but consider logging. The static analyzer flags the `try-except-pass` pattern. For cleanup operations before installation, silently ignoring failures is reasonable since the goal is best-effort removal. However, logging at debug level could help troubleshoot installation issues.

♻️ Optional: Add debug logging for troubleshooting

```diff
     if native_bin.exists() or native_bin.is_symlink():
         try:
             native_bin.unlink()
-        except Exception:
-            pass
+        except OSError as e:
+            # Best-effort cleanup - log but continue
+            import logging
+            logging.debug(f"Could not remove {native_bin}: {e}")
```
279-293: Minor inconsistency in success detection. Line 291 checks for `"already"` while line 266 checks for `"already installed"`. Both work, but using consistent checks (`"already installed"`) across marketplace functions would improve maintainability.

♻️ Optional: Use consistent success message check

```diff
     output = (result.stdout + result.stderr).lower()
-    return result.returncode == 0 or "already" in output
+    return result.returncode == 0 or "already installed" in output
```
437-456: Consider reducing code duplication. The subprocess call is duplicated between the `if ui` and `else` branches. This can be consolidated.

♻️ Optional: Consolidate duplicated subprocess call

```diff
 def _setup_vexor_local_model(ui: Any = None) -> bool:
     """Download and setup the local embedding model for Vexor."""
+    def run_setup():
+        return subprocess.run(
+            ["vexor", "local", "--setup", "--model", "intfloat/multilingual-e5-small"],
+            capture_output=True,
+            text=True,
+        )
+
     try:
         if ui:
             with ui.spinner("Downloading local embedding model..."):
-                result = subprocess.run(
-                    ["vexor", "local", "--setup", "--model", "intfloat/multilingual-e5-small"],
-                    capture_output=True,
-                    text=True,
-                )
-                return result.returncode == 0
+                result = run_setup()
         else:
-            result = subprocess.run(
-                ["vexor", "local", "--setup", "--model", "intfloat/multilingual-e5-small"],
-                capture_output=True,
-                text=True,
-            )
-            return result.returncode == 0
+            result = run_setup()
+        return result.returncode == 0
     except Exception:
         return False
```
705-719: Same duplication pattern as `_setup_vexor_local_model`. Consider applying the same consolidation refactor here for consistency.
The flagged excerpt from .claude/commands/sync.md:

````text
```
Question: "I found patterns that might work better as skills. Create any?"
Header: "New Skills"
multiSelect: true
Options:
- "[Workflow 1]" - [Description of multi-step process]
- "[Tool integration]" - [Description of tool/format handling]
- "[Domain area]" - [Description of specialized knowledge]
- "None" - Skip skill creation
```

#### Step 8.2: Create Selected Skills

For each selected skill, **invoke the skill-creator skill**:

```
Skill(skill="skill-creator", args="create skill for [selected pattern]")
```
````
Add language identifiers to fenced blocks to satisfy lint.
markdownlint will flag these fences without a language.
✍️ Suggested fix
-```
+```text
Question: "I found patterns that might work better as skills. Create any?"
Header: "New Skills"
multiSelect: true
Options:
- "[Workflow 1]" - [Description of multi-step process]
- "[Tool integration]" - [Description of tool/format handling]
- "[Domain area]" - [Description of specialized knowledge]
- "None" - Skip skill creation
```
@@
-```
+```text
Skill(skill="skill-creator", args="create skill for [selected pattern]")
```🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
332-332: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
340-340: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In @.claude/commands/sync.md around lines 323 - 340, The markdown fenced code
blocks in the Sync doc lack language identifiers and fail markdownlint; update
the two backtick-fenced blocks shown (the block containing the
Question/Header/Options and the block with Skill(skill="skill-creator",
args="...")) by adding a language identifier (e.g., text) after the opening ```
so both fences are ```text; ensure both matching closing fences remain and no
other content is changed.
The flagged excerpt from .claude/rules/standard/lsp-servers.md:

```diff
 **Use LSP tools BEFORE grep/search for code understanding. LSP provides compiler-accurate results that grep cannot match.**

-**Available:** Python (Pyright), TypeScript
+**Available:** Python (basedpyright), TypeScript (typescript-language-server), Go (gopls)
```
🧩 Analysis chain
🏁 Script executed:
```shell
cat -n .claude/rules/standard/lsp-servers.md
```

Repository: maxritter/claude-codepro
Length of output: 2889
🏁 Script executed:
```shell
# Search for any references to position indexing in the file
rg -n "line|character|position|0-based|1-based" .claude/rules/standard/lsp-servers.md -C2
```

Repository: maxritter/claude-codepro
Length of output: 188
🏁 Script executed:
```shell
# Search for LSP server name usage in the codebase
rg -n "typescript-language-server|vtsls|basedpyright|gopls" --type md --type json --type yaml -C2
```

Repository: maxritter/claude-codepro
Length of output: 50
🌐 Web query:
LSP (Language Server Protocol) specification line and character position indexing 0-based or 1-based
💡 Result:
In the Language Server Protocol (LSP), both line and character positions are 0-based (zero-based). The spec defines Position as “zero-based line and zero-based character offset,” and Range start/end positions are also zero-based (with the end position being exclusive). [1]
Most integrations/documentation that reference the spec repeat the same rule: 0-based lines and 0-based character offsets. [2]
Sources: [1] (microsoft.github.io) [2] (go.googlesource.com)
Citations:
- 1: https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/
- 2: https://go.googlesource.com/wiki/%2Bshow/6b93e07e6958eae5d77d5e16e72fc6406259d136/gopls-integrator-FAQ.md?utm_source=openai
Fix LSP position indexing documentation: positions are 0-based, not 1-based.
The LSP specification defines all positions (line and character) as 0-based, per the official specification at microsoft.github.io. The doc incorrectly states positions are 1-based at line 34 and throughout the examples. This must be corrected to prevent developers from using incorrect position values when calling LSP operations.
Also applies to: 34-34
🤖 Prompt for AI Agents
In @.claude/rules/standard/lsp-servers.md at line 5, The doc incorrectly states
LSP positions are 1-based; update text and all examples to say positions (line
and character) are 0-based per the LSP spec and adjust any sample position
tuples accordingly. Search the file for the phrase "1-based" and for examples
using position tuples and replace them with 0-based values; verify surrounding
examples and explanatory text (in the same section as the "**Available:** Python
(basedpyright), TypeScript (typescript-language-server), Go (gopls)" header) are
updated to reflect 0-based indexing consistently. Ensure any sample
code/comments that construct Position objects or call LSP methods use 0-based
indices.
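To make the convention concrete, here is a minimal sketch (plain Python dicts; the request shape follows LSP 3.17, while the file URI and the conversion helper are illustrative, not from the doc under review):

```python
# LSP positions are 0-based for both line and character (LSP 3.17).
# A cursor an editor displays as "line 34, column 5" is line=33, character=4.

def to_lsp_position(editor_line: int, editor_column: int) -> dict:
    """Convert 1-based editor coordinates to a 0-based LSP Position."""
    return {"line": editor_line - 1, "character": editor_column - 1}

# Params for a textDocument/definition request (URI is illustrative).
params = {
    "textDocument": {"uri": "file:///workspace/example.py"},
    "position": to_lsp_position(editor_line=34, editor_column=5),
}

assert params["position"] == {"line": 33, "character": 4}
```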
    "ghcr.io/braun-daniel/devcontainer-features/fzf:1": {}
  },
- "postCreateCommand": "curl -fsSL https://raw.githubusercontent.com/maxritter/claude-codepro/v5.1.9/install.sh | bash",
+ "postCreateCommand": "curl -fsSL https://raw.githubusercontent.com/maxritter/claude-codepro/v5.1.12/install.sh | bash",
Pin or verify the install script to avoid tag‑retarget risk (Line 92).
Executing curl | bash from a mutable tag is a supply‑chain risk. Prefer pinning to a commit SHA or verifying a checksum before execution.
🔒 Example hardening (pin to commit SHA)
-"postCreateCommand": "curl -fsSL https://raw.githubusercontent.com/maxritter/claude-codepro/v5.1.12/install.sh | bash",
+"postCreateCommand": "curl -fsSL https://raw.githubusercontent.com/maxritter/claude-codepro/<COMMIT_SHA>/install.sh | bash",
🤖 Prompt for AI Agents
In @.devcontainer/devcontainer.json at line 92, The postCreateCommand currently
pipes a remote install script from the mutable tag
"maxritter/claude-codepro/v5.1.12/install.sh" which is a supply-chain risk;
update the "postCreateCommand" to fetch a pinned immutable reference (replace
the tag URL with a specific commit SHA URL) or fetch the script first, verify
its checksum/signature, and only then execute it; target the "postCreateCommand"
key and the URL string in .devcontainer/devcontainer.json when making this
change.
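A download-verify-execute sketch along those lines (the expected digest would come from a trusted out-of-band source; here the downloaded script is simulated so the flow is self-contained):

```shell
set -euo pipefail

# In practice: curl -fsSL "https://raw.githubusercontent.com/maxritter/claude-codepro/<COMMIT_SHA>/install.sh" -o /tmp/install.sh
# Simulated download so the sketch runs standalone:
printf 'echo installed\n' > /tmp/install.sh

# The known-good digest would be published out-of-band; computed here only for the simulation.
EXPECTED=$(sha256sum /tmp/install.sh | cut -d' ' -f1)

# Abort unless the digest matches; only then execute.
echo "${EXPECTED}  /tmp/install.sh" | sha256sum -c -
bash /tmp/install.sh
```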
def _configure_web_mcp_servers(ui: Any) -> None:
    """Configure open-websearch and fetcher-mcp in ~/.claude.json."""
    import json

    claude_config_path = Path.home() / ".claude.json"

    try:
        if claude_config_path.exists():
            config = json.loads(claude_config_path.read_text())
        else:
            config = {}

        if "mcpServers" not in config:
            config["mcpServers"] = {}

        api_key = os.environ.get("FIRECRAWL_API_KEY")
        if not api_key:
            env_file = project_dir / ".env"
            if env_file.exists():
                for line in env_file.read_text().split("\n"):
                    if line.startswith("FIRECRAWL_API_KEY="):
                        api_key = line.split("=", 1)[1].strip()
                        break
        config["mcpServers"]["web-search"] = {
            "command": "npx",
            "args": ["-y", "open-websearch@latest"],
            "env": {
                "MODE": "stdio",
                "DEFAULT_SEARCH_ENGINE": "duckduckgo",
                "ALLOWED_SEARCH_ENGINES": "duckduckgo,bing,exa",
            },
        }

        config["mcpServers"]["web-fetch"] = {
            "command": "npx",
            "args": ["-y", "fetcher-mcp"],
        }

        claude_config_path.write_text(json.dumps(config, indent=2))

        if api_key:
            _configure_firecrawl_mcp(api_key)
            if ui:
                ui.success("Firecrawl MCP configured")
        ui.success("Web MCP servers configured (open-websearch, fetcher-mcp)")
    except Exception as e:
        if ui:
            ui.warning(f"Could not configure web MCP servers: {e}")
Missing trailing newline in JSON output.
Line 840 writes the config without a trailing newline, unlike _patch_claude_config (line 195) which adds + "\n". This inconsistency could cause issues with some tools and version control diffs.
🔧 Proposed fix
- claude_config_path.write_text(json.dumps(config, indent=2))
+ claude_config_path.write_text(json.dumps(config, indent=2) + "\n")
🧰 Tools
🪛 Ruff (0.14.13)
844-844: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
In `@installer/steps/dependencies.py` around lines 810 - 846, The JSON written by
_configure_web_mcp_servers currently calls
claude_config_path.write_text(json.dumps(config, indent=2)) without a trailing
newline; update the write to append a newline (e.g.,
write_text(json.dumps(config, indent=2) + "\n")) so the output matches the
behavior of _patch_claude_config and avoids missing-trailing-newline issues;
modify the write call in the _configure_web_mcp_servers function accordingly and
keep the rest of the logic unchanged.
if not ui.quiet:
    ui.rule()
    ui.print()
    ui.print(" [bold yellow]⭐ Star this repo:[/bold yellow] https://github.com/maxritter/claude-codepro")
    ui.print()
    ui.print(f" [dim]Installed version: {__version__}[/dim]")
    ui.print()
Bug: ui.quiet is a method, so this block never executes.
Line 96 uses ui.quiet instead of ui.quiet(). The bound method is always truthy, so the success panel is suppressed even in non-quiet mode.
🐛 Fix
- if not ui.quiet:
+ if not ui.quiet():
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if not ui.quiet():
    ui.rule()
    ui.print()
    ui.print(" [bold yellow]⭐ Star this repo:[/bold yellow] https://github.com/maxritter/claude-codepro")
    ui.print()
    ui.print(f" [dim]Installed version: {__version__}[/dim]")
    ui.print()
🤖 Prompt for AI Agents
In `@installer/steps/finalize.py` around lines 96 - 102, The condition is using
the bound method ui.quiet instead of calling it, so the success panel never
shows; update the conditional in finalize.py to call the method (if not
ui.quiet()) so the block that prints the rule, star message, and Installed
version (__version__) runs when not in quiet mode and not when ui.quiet()
returns True.
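The underlying pitfall is easy to demonstrate: a bound method object is always truthy in Python, so an attribute-style check on it can never be false. (The `Console` class below is a hypothetical stand-in, not the installer's actual UI class.)

```python
class Console:
    """Stand-in for the installer's UI console (hypothetical)."""

    def __init__(self, quiet_mode: bool = False):
        self._quiet = quiet_mode

    def quiet(self) -> bool:
        """Return True when output should be suppressed."""
        return self._quiet


ui = Console(quiet_mode=False)

# Bug: the bound method object is truthy, so `not ui.quiet` is always False
# and the guarded block is always skipped, even in non-quiet mode.
assert bool(ui.quiet) is True
assert (not ui.quiet) is False

# Fix: call the method to read the actual flag.
assert ui.quiet() is False
assert (not ui.quiet()) is True
```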
The following components are proprietary and closed source:
- The CCP binary (.claude/bin/ccp) - license verification and advanced features
- Related license management code in the ccp/ directory
Most of Claude CodePro's source code is open source under AGPL-3.0.
🧩 Analysis chain
🏁 Script executed:
# First, let's see the current LICENSE file content
cat LICENSE
Length of output: 34934
🏁 Script executed:
# Check git history for the LICENSE file to see what changed
git log --oneline LICENSE | head -20
Length of output: 124
🏁 Script executed:
# Get the git diff for recent changes to LICENSE to understand what was removed
git diff HEAD~5 LICENSE 2>/dev/null || git show HEAD:LICENSE 2>/dev/null | head -100
Length of output: 4581
Clarify which components are proprietary vs. AGPL-licensed.
The current statement—"Most of Claude CodePro's source code is open source under AGPL-3.0"—is legally ambiguous. The term "most" is undefined, and the license fails to specify:
- Which components are proprietary
- Which components are AGPL-3.0
- What percentage constitutes "most"
This creates compliance uncertainty for users evaluating AGPL obligations and for contributors understanding the licensing terms.
Either restore explicit enumeration of proprietary components or add a clear statement: "All source code is AGPL-3.0 except: [specific list]" (or "All source code is AGPL-3.0" if that's now the case).
🤖 Prompt for AI Agents
In `@LICENSE` at line 41, Replace the ambiguous sentence "Most of Claude CodePro's
source code is open source under AGPL-3.0." with a clear, explicit licensing
statement that enumerates which components are proprietary versus AGPL-3.0;
either state "All source code is AGPL-3.0" if applicable, or use the format "All
source code is AGPL-3.0 except: [list proprietary components]" (list component
names or directories) so readers and contributors can unambiguously determine
licensing obligations.
**Option A: 🐳 Dev Container (Recommended)**

Pre-configured, isolated environment with all tools ready. No system conflicts, consistent across machines, easy cleanup.

- **Container Runtime** - [Docker Desktop](https://www.docker.com/products/docker-desktop) or compatible equivalent
- **IDE** - [VS Code](https://code.visualstudio.com/), [Cursor](https://cursor.sh/), or [Windsurf](https://codeium.com/windsurf)
- **Dev Containers extension** - [Install from Marketplace](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers)
- Works on **macOS**, **Linux**, and **Windows (with WSL2)**
- **IDE with Dev Container Support** - Any IDE supporting the [Dev Container spec](https://containers.dev/):
  - VS Code and forks: [VS Code](https://code.visualstudio.com/), [Cursor](https://cursor.sh/), [Windsurf](https://windsurf.com/), [Antigravity](https://antigravity.google/) (with [Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers))
  - JetBrains IDEs: [IntelliJ IDEA](https://www.jetbrains.com/help/idea/connect-to-devcontainer.html), WebStorm, PyCharm, GoLand, etc.
  - Cloud: [GitHub Codespaces](https://github.com/features/codespaces), [DevPod](https://devpod.sh/), [CodeSandbox](https://codesandbox.io/)

**Option B: 🍺 Local Installation (Only macOS/Linux)**
- **macOS or Linux** - Homebrew-compatible system
- **Homebrew** - Installed automatically if not present
**Option B: 🍺 Local Installation**
Use headings instead of emphasis for section titles.
Markdownlint flags these as “emphasis-as-heading.” Consider converting to heading syntax.
💡 Suggested fix
-**Option A: 🐳 Dev Container (Recommended)**
+#### Option A: 🐳 Dev Container (Recommended)
-**Option B: 🍺 Local Installation**
+#### Option B: 🍺 Local Installation
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
56-56: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
66-66: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
🤖 Prompt for AI Agents
In `@README.md` around lines 56 - 66, The README uses bolded emphasis as section
titles (e.g., "**Option A: 🐳 Dev Container (Recommended)**" and "**Option B: 🍺
Local Installation**"), which triggers markdownlint "emphasis-as-heading";
replace those bolded lines with proper Markdown headings (for example "## Option
A: 🐳 Dev Container (Recommended)" and "## Option B: 🍺 Local Installation") and
ensure any other similar bolded section titles in the file are converted to
consistent heading levels.
- **Zero API Keys** - Web search and scraping work out of the box, no external API keys needed
- **Persistent Memory** - Cross-session memory system that automatically ingests context
- **Semantic Search** - Local vector store based semantic code search for token-efficient retrieval
- **External Context** - External context retrieval for code / libraries and web search / scraping
- **External Context** - Library docs via Context7, web search and scraping via MCP tools
- **Browser Automation** - Headless browser automation and testing even within the Dev Container
Hyphenate compound modifier in “vector-store-based.”
Small grammar fix improves readability.
💡 Suggested fix
-- **Semantic Search** - Local vector store based semantic code search for token-efficient retrieval
+- **Semantic Search** - Local vector-store-based semantic code search for token-efficient retrieval
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- **Zero API Keys** - Web search and scraping work out of the box, no external API keys needed
- **Persistent Memory** - Cross-session memory system that automatically ingests context
- **Semantic Search** - Local vector-store-based semantic code search for token-efficient retrieval
- **External Context** - Library docs via Context7, web search and scraping via MCP tools
- **Browser Automation** - Headless browser automation and testing even within the Dev Container
🧰 Tools
🪛 LanguageTool
[grammar] ~109-~109: Use a hyphen to join words.
Context: ...Semantic Search - Local vector store based semantic code search for token-eff...
(QB_NEW_EN_HYPHEN)
🤖 Prompt for AI Agents
In `@README.md` around lines 107 - 111, Update the README entry that reads "Local
vector store based semantic code search for token-efficient retrieval" to
hyphenate the compound modifier as "vector-store-based" so it becomes "Local
vector-store-based semantic code search for token-efficient retrieval"; locate
the bullet under the "**Semantic Search**" heading and replace the unhyphenated
phrase with the hyphenated form to fix the grammar.
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In @.claude/hooks/context_monitor.py:
- Around line 134-140: The printed instruction uses a grep pattern "'^Status:
PENDING\\|^Status: COMPLETE' docs/plans/*.md" inside context_monitor.py (the
step-print block) that incorrectly treats COMPLETE as an "active" plan; define a
clear plan status schema (e.g., PENDING, ACTIVE, IN_PROGRESS, COMPLETE,
CANCELLED) and update the grep pattern used by the printed step (and any code
that searches plan files) to match only the intended active statuses (e.g.,
'^Status: ACTIVE\\|^Status: IN_PROGRESS' or whatever the agreed active-state
names are); also update the docs/docs/plans/ status spec so the pattern and any
references (the printed guidance and the plan-parsing code) remain consistent.
In @.github/workflows/release-dev.yml:
- Around line 46-48: The workflow job named build-ccp-arm64 uses an invalid
runner label; update the runs-on value in that job from
"ubuntu-24.04-arm-4-core" to the correct GitHub-hosted runner label
"ubuntu-24.04-arm" so the build-ccp-arm64 job can find a matching ARM64 runner
and execute successfully.
- Around line 171-183: The workflow step "Create pre-release" currently
interpolates github.head_ref directly into the HERE-doc when building NOTES
(using `${{ github.head_ref }}`), which risks shell injection; instead set
github.head_ref into an environment variable (e.g., HEAD_REF) in that job/step
and reference it in the script via a safe shell expansion like ${HEAD_REF} when
constructing the NOTES variable; update the step's env block and replace direct
`${{ github.head_ref }}` occurrences with the env var reference so the script
uses the trusted shell variable rather than raw template interpolation.
- Around line 211-213: The gh api call that lists releases needs pagination and
aggregation: add the --paginate flag to the gh api invocation and stop using
gh's --jq; instead pipe the paginated JSON output into jq -s and combine pages
(e.g., using add) before filtering/sorting so the subsequent logic that selects
dev prereleases and drops the newest 5 operates over all releases. Update the
line containing the gh api command and the jq expression so it uses --paginate
and a piped jq -s aggregation (keep the same filtering/sorting logic that
selects prerelease & tag_name startswith("dev-") and then .[5:] | .[].tag_name).
♻️ Duplicate comments (4)
LICENSE (1)
41-41: Clarify proprietary vs AGPL scope. "Most" is still legally ambiguous. Please explicitly enumerate which components are proprietary vs AGPL (or state "All source code is AGPL-3.0").
.github/workflows/claude-code-review.yml (1)
14-19: Guard forked PRs without secrets (same issue as earlier review).
Line 14-19: For forked PRs, secrets.CLAUDE_CODE_OAUTH_TOKEN is unavailable and this job will fail; consider gating to same-repo PRs.
🔒 Suggested guard
jobs:
  claude-review:
+   if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name == github.repository
    runs-on: ubuntu-latest
installer/steps/dependencies.py (1)
814-851: Missing trailing newline in JSON output (duplicate of past review). Line 844 writes the config without a trailing newline, unlike _patch_claude_config (line 195) which adds + "\n". This inconsistency could cause issues with some tools and version control diffs.
🔧 Proposed fix
- claude_config_path.write_text(json.dumps(config, indent=2))
+ claude_config_path.write_text(json.dumps(config, indent=2) + "\n")
.github/workflows/release-dev.yml (3)
193-200: Rerun safety for existing tags/releases. Re-running with the same VERSION will fail when the tag or release already exists. Add a defensive delete-and-wait step before tagging/creating the release.
🧹 Nitpick comments (1)
.github/workflows/release-dev.yml (1)
63-69: Centralize VERSION generation to avoid cross-job drift. Each job recomputes VERSION with the current date. If jobs span midnight UTC or queue delays occur, binaries can embed a different version than the release tag. Consider generating the version once and sharing it via job outputs.
Example refactor (conceptual)
+ prepare-version:
+   runs-on: ubuntu-latest
+   outputs:
+     VERSION: ${{ steps.version.outputs.VERSION }}
+     COMMIT_SHA: ${{ steps.version.outputs.COMMIT_SHA }}
+   steps:
+     - id: version
+       run: |
+         COMMIT_SHA="${{ github.event.pull_request.head.sha || github.sha }}"
+         VERSION="dev-$(echo $COMMIT_SHA | cut -c1-7)-$(date +%Y%m%d)"
+         echo "VERSION=$VERSION" >> "$GITHUB_OUTPUT"
+         echo "COMMIT_SHA=$COMMIT_SHA" >> "$GITHUB_OUTPUT"
+ build-ccp-arm64:
+   needs: prepare-version
+   env:
+     VERSION: ${{ needs.prepare-version.outputs.VERSION }}
Also applies to: 119-124, 152-159
            f"{RED}1. Check for active plan: grep -l '^Status: PENDING\\|^Status: COMPLETE' docs/plans/*.md 2>/dev/null | head -1{NC}",
            file=sys.stderr,
        )
        print(f"{RED}2. Write /tmp/claude-continuation.md (include Active Plan path if found){NC}", file=sys.stderr)
        print(
            f"{RED}3. Run: $PWD/.claude/bin/ccp send-clear <plan-path> (or --general if no active plan){NC}",
            file=sys.stderr,
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find all unique Status values in plan files
rg -n '^Status:' docs/plans/ 2>/dev/null | sed -E 's/^[^:]+:[0-9]+:Status:\s*//g' | sort -u
Length of output: 50
🏁 Script executed:
#!/bin/bash
# Check if docs/plans exists
ls -la docs/plans/ 2>/dev/null || echo "docs/plans/ not found"
# Search for any .md files that might contain Status
find . -name "*.md" -type f 2>/dev/null | head -20
# Check the context_monitor.py file to understand the grep pattern better
cat -n .claude/hooks/context_monitor.py | sed -n '130,145p'
Length of output: 1810
🏁 Script executed:
#!/bin/bash
# Search for references to "docs/plans" or plan structure documentation
rg "docs/plans" . 2>/dev/null
# Look for Status field pattern or plan format documentation
rg "Status:" . --type md 2>/dev/null | head -20
# Check for any comments in context_monitor.py about the plans feature
cat -n .claude/hooks/context_monitor.py | head -50
Length of output: 1918
Clarify plan status values before implementing the planning feature.
The grep pattern on line 134 assumes plans use PENDING|COMPLETE statuses to identify "active plans," but no specification exists for the plan status lifecycle. COMPLETE semantically represents finished plans, not active ones. Once docs/plans/ is created, the pattern may not correctly identify currently active plans. Please define the plan status schema (e.g., PENDING, ACTIVE, IN_PROGRESS, COMPLETE, etc.) and update the grep pattern to match the intended behavior.
🤖 Prompt for AI Agents
In @.claude/hooks/context_monitor.py around lines 134 - 140, The printed
instruction uses a grep pattern "'^Status: PENDING\\|^Status: COMPLETE'
docs/plans/*.md" inside context_monitor.py (the step-print block) that
incorrectly treats COMPLETE as an "active" plan; define a clear plan status
schema (e.g., PENDING, ACTIVE, IN_PROGRESS, COMPLETE, CANCELLED) and update the
grep pattern used by the printed step (and any code that searches plan files) to
match only the intended active statuses (e.g., '^Status: ACTIVE\\|^Status:
IN_PROGRESS' or whatever the agreed active-state names are); also update the
docs/docs/plans/ status spec so the pattern and any references (the printed
guidance and the plan-parsing code) remain consistent.
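A sketch of the corrected check, assuming the schema settles on PENDING/ACTIVE as the active states (the directory and file contents below are fabricated for illustration, not the repo's actual plans):

```shell
# Fabricated plan files standing in for docs/plans/*.md:
mkdir -p /tmp/plans
printf 'Status: ACTIVE\n'   > /tmp/plans/feature.md
printf 'Status: COMPLETE\n' > /tmp/plans/done.md

# Match only active states; COMPLETE plans no longer count as "active".
grep -l '^Status: PENDING\|^Status: ACTIVE' /tmp/plans/*.md 2>/dev/null | head -1
```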
- name: Create pre-release
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: |
    VERSION="${{ steps.version.outputs.VERSION }}"
    PR_NUM="${{ github.event.pull_request.number }}"

    if [ -n "$PR_NUM" ]; then
      NOTES="Pre-release build from PR #${PR_NUM}

    **Commit:** ${{ steps.version.outputs.COMMIT_SHA }}
    **Branch:** ${{ github.head_ref }}
🧩 Analysis chain
🏁 Script executed:
cat -n .github/workflows/release-dev.yml | sed -n '171,183p'
Length of output: 585
🌐 Web query:
GitHub Actions github.head_ref security shell injection
💡 Result:
github.head_ref (and github.event.pull_request.head.ref) is attacker-controlled for PRs, especially from forks. If you interpolate it directly into a run: script (for example run: echo ${{ github.head_ref }} or build commands/paths from it), an attacker can craft a branch name containing shell metacharacters and trigger shell/script injection (arbitrary command execution on the runner). GitHub explicitly lists head_ref among context fields that must be treated as untrusted for script-injection risk. [1], [2]
This becomes particularly dangerous in workflows triggered by pull_request_target, which run in the base repo context and can have access to secrets / write permissions; GitHub Security Lab advisories show real exploits using malicious branch names injected via github.head_ref / pull_request.head.ref. [3], [4]
Safer patterns
- Don't inline ${{ github.head_ref }} inside shell scripts. Treat it like untrusted input. [1], [2]
- If you must use it, pass it via env: and quote it in shell (reduces risk, but still validate if used in sensitive contexts); GitHub recommends avoiding untrusted values flowing into executable contexts. [1], [2]
  env:
    HEAD_REF: ${{ github.head_ref }}
  run: |
    printf '%s\n' "$HEAD_REF"
- Prefer immutable refs for checkout/actions logic: use the PR SHA (head.sha) rather than the mutable branch name (head.ref) to avoid manipulation/TOCTOU issues. [5]
- Avoid pull_request_target for untrusted forks where possible; if you must use it, minimize permissions and ensure no untrusted input (including head_ref) can influence execution or secret access. [4], [6]
Sources
- GitHub Docs – Script injections (lists head_ref as untrusted / injection risk). [1]
- GitHub Security Lab – Untrusted input (includes github.head_ref / branch names; shows injection examples). [2]
- GHSL advisory example exploiting pull_request.head.ref injected into run. [3]
- GHSL advisory example exploiting github.head_ref in pull_request_target. [4]
- GitHub Security Lab – notes head.ref is attacker-manipulable; prefer safer refs. [5]
- GitHub Changelog – reiterates pull_request_target + forks increases risk. [6]
🏁 Script executed:
head -n 20 .github/workflows/release-dev.yml
Length of output: 458
Avoid shell injection via direct github.head_ref interpolation
github.head_ref is attacker-controlled input that may contain shell metacharacters like $() or backticks. When directly interpolated via ${{ github.head_ref }} in a bash script (line 182), these can trigger command substitution. Pass it through an env variable and expand via ${VAR} instead.
Safer pattern
- name: Create pre-release
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+ PR_BRANCH: ${{ github.head_ref }}
run: |
VERSION="${{ steps.version.outputs.VERSION }}"
PR_NUM="${{ github.event.pull_request.number }}"
if [ -n "$PR_NUM" ]; then
NOTES="Pre-release build from PR #${PR_NUM}
**Commit:** ${{ steps.version.outputs.COMMIT_SHA }}
- **Branch:** ${{ github.head_ref }}
+ **Branch:** ${PR_BRANCH}🧰 Tools
🪛 actionlint (1.7.10)
174-174: "github.head_ref" is potentially untrusted. avoid using it directly in inline scripts. instead, pass it through an environment variable. see https://docs.github.com/en/actions/reference/security/secure-use#good-practices-for-mitigating-script-injection-attacks for more details
(expression)
🤖 Prompt for AI Agents
In @.github/workflows/release-dev.yml around lines 171 - 183, The workflow step
"Create pre-release" currently interpolates github.head_ref directly into the
HERE-doc when building NOTES (using `${{ github.head_ref }}`), which risks shell
injection; instead set github.head_ref into an environment variable (e.g.,
HEAD_REF) in that job/step and reference it in the script via a safe shell
expansion like ${HEAD_REF} when constructing the NOTES variable; update the
step's env block and replace direct `${{ github.head_ref }}` occurrences with
the env var reference so the script uses the trusted shell variable rather than
raw template interpolation.
gh api repos/${{ github.repository }}/releases \
  --jq '[.[] | select(.prerelease and (.tag_name | startswith("dev-")))] | sort_by(.created_at) | reverse | .[5:] | .[].tag_name' \
  | while read -r tag; do
🧩 Analysis chain
🌐 Web query:
GitHub CLI gh api default pagination releases per page
💡 Result:
- GitHub REST API default page size (including "List releases") is 30 items per page (per_page default 30, max 100). [1]
- In GitHub CLI, gh api will return only the first page unless you paginate. Use:
  - Set page size: add per_page (e.g., -f per_page=100 on a GET request)
  - Fetch all pages: --paginate (follows the REST API Link header pagination) [2][3]
Example (all releases, 100 per request):
gh api repos/OWNER/REPO/releases -f per_page=100 --paginate --slurp
Notes
- --paginate emits one JSON document per page unless you add --slurp to wrap them. [2]
- If you only need the latest release, use the non-paginated endpoint: gh api repos/OWNER/REPO/releases/latest
Sources: [1] [2] [3]
🌐 Web query:
GitHub CLI --paginate flag documentation behavior
💡 Result:
In GitHub CLI, --paginate is a flag on gh api that makes the CLI automatically request all pages of a paginated API response and print them to stdout. [1]
- REST (API v3) endpoints: gh api --paginate follows pagination via the HTTP Link response header, requesting each subsequent page until there are no more. [1], [2]
- GraphQL (API v4): gh api graphql --paginate only works if your query:
  - accepts an $endCursor: String variable, and
  - selects pageInfo { hasNextPage endCursor } on the connection you want to paginate.
  The CLI then iterates using endCursor until hasNextPage is false. [1], [3], [4]
- Output shape: each page is emitted as a separate JSON object/array on stdout; add --slurp to wrap all pages into a single outer JSON array. [1]
- Practical note: the CLI team also calls out this "multiple JSON documents unless you use --slurp" behavior in support guidance. [5]
Docs: see gh api manual (options section for --paginate / --slurp). [1]
🌐 Web query:
GitHub API releases endpoint default page size
💡 Result:
For GitHub’s REST API Releases endpoints (e.g., GET /repos/{owner}/{repo}/releases), the default page size is 30 items per page (per_page=30, max 100). [1]
Add pagination to fetch all dev releases for cleanup
gh api repos/.../releases defaults to returning only the first 30 releases. If you have more than 30 dev prereleases, older ones won't be cleaned. Add --paginate and use jq -s to combine all pages before filtering and sorting.
Suggested change
- gh api repos/${{ github.repository }}/releases \
- --jq '[.[] | select(.prerelease and (.tag_name | startswith("dev-")))] | sort_by(.created_at) | reverse | .[5:] | .[].tag_name' \
+ gh api repos/${{ github.repository }}/releases --paginate \
+ | jq -s '[.[][] | select(.prerelease and (.tag_name | startswith("dev-")))] | sort_by(.created_at) | reverse | .[5:] | .[].tag_name' \🤖 Prompt for AI Agents
In @.github/workflows/release-dev.yml around lines 211 - 213, The gh api call
that lists releases needs pagination and aggregation: add the --paginate flag to
the gh api invocation and stop using gh's --jq; instead pipe the paginated JSON
output into jq -s and combine pages (e.g., using add) before filtering/sorting
so the subsequent logic that selects dev prereleases and drops the newest 5
operates over all releases. Update the line containing the gh api command and
the jq expression so it uses --paginate and a piped jq -s aggregation (keep the
same filtering/sorting logic that selects prerelease & tag_name
startswith("dev-") and then .[5:] | .[].tag_name).
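The combine-then-prune logic can be sanity-checked independently of gh; this Python sketch mirrors the jq filter over fabricated release pages (all data below is invented for illustration):

```python
# Two fabricated "pages", as gh api --paginate would emit them.
page1 = [
    {"tag_name": f"dev-abc{i}", "prerelease": True, "created_at": f"2026-01-{i:02d}"}
    for i in range(1, 8)
]
page2 = [{"tag_name": "v5.1.12", "prerelease": False, "created_at": "2026-01-23"}]

# Combine all pages first (the jq -s step), then filter, sort newest-first,
# and keep the newest 5 dev prereleases.
releases = page1 + page2
dev = [r for r in releases if r["prerelease"] and r["tag_name"].startswith("dev-")]
dev.sort(key=lambda r: r["created_at"], reverse=True)
to_delete = [r["tag_name"] for r in dev[5:]]

# Only the two oldest dev prereleases are slated for deletion.
assert to_delete == ["dev-abc2", "dev-abc1"]
```

Without the combine step, a run against more than 30 releases would silently operate on the first page only, which is exactly the bug the comment describes.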
- Statusbar: show session observation count (S:N) on first line, move session lines changed and git info to second line
- Add session-based memory observation counting (MemoryProvider)
- Add idle Claude Code session cleanup after claude-mem cleanup
- Rename memory.py to services.py (broader scope)
- Dev workflow: remove x64 builds, upgrade arm64 runners for speed
- Remove unused claude.yml workflow (@claude mentions)
- Add track_progress to PR review workflow
- Disable git commit/PR attribution in installer defaults
- Make TDD enforcer a non-blocking reminder
- Update tool redirect hook to suggest ToolSearch for web tools
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @.claude/hooks/tool_redirect.py:
- Around line 60-72: The function is_semantic_pattern should guard against
non-string inputs to avoid crashing on .lower(); add an early type check at the
top of is_semantic_pattern that returns False (or a sane default) when pattern
is None or not an instance of str, then proceed to call pattern.lower() and the
existing checks using CODE_PATTERNS and SEMANTIC_PHRASES; this ensures malformed
hook input doesn't raise an exception.
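A minimal sketch of the guard described above (the pattern lists and function body here are illustrative stand-ins, not the hook's actual code):

```python
# Hypothetical stand-ins for the hook's real pattern lists.
CODE_PATTERNS = ("def ", "class ", "import ")
SEMANTIC_PHRASES = ("how does", "where is", "what handles")

def is_semantic_pattern(pattern) -> bool:
    """Return True for natural-language-style queries, False for code-like ones.

    Guards against malformed hook input: anything that is not a str
    (None, dicts, numbers) is rejected early instead of crashing on .lower().
    """
    if not isinstance(pattern, str):
        return False
    lowered = pattern.lower()
    if any(code in lowered for code in CODE_PATTERNS):
        return False
    return any(phrase in lowered for phrase in SEMANTIC_PHRASES)
```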
♻️ Duplicate comments (5)
.claude/hooks/context_monitor.py (1)
134-140: Align "active plan" grep statuses with the plan schema. This repeats the previously raised concern: the grep pattern treats COMPLETE as active. Please align with the actual active status list and update the pattern accordingly.
installer/steps/dependencies.py (1)
814-851: Missing trailing newline in JSON output. Line 844 writes the config without a trailing newline, unlike _patch_claude_config (line 195) which adds + "\n". This inconsistency could cause issues with some tools and version control diffs.
🔧 Proposed fix
- claude_config_path.write_text(json.dumps(config, indent=2))
+ claude_config_path.write_text(json.dumps(config, indent=2) + "\n")
.github/workflows/release-dev.yml (3)
170-183: Avoid direct github.head_ref interpolation in the script.
This was flagged previously; still applies here.
192-199: Handle tag/release collisions on reruns before tagging/pushing.
Same concern as earlier review: add a defensive delete/retry pattern.
210-212: Paginate releases before filtering old dev prereleases.
Same issue as earlier: gh api only returns the first page.
🧹 Nitpick comments (2)
.claude/hooks/tdd_enforcer.py (1)
93-124: Logic looks correct for module-scoped failing test detection. The matching logic handles both the `test_{module_name}.py` and `{module_name}_test.py` naming conventions correctly. Regarding the static analysis hint (TRY300): moving the `return False` on line 122 into an `else` block is a stylistic preference to separate the "success" path from exception handling. The current structure is clear enough, but you could refactor if preferred.

♻️ Optional: Move return to else block (TRY300)

```diff
     for test_path in lastfailed:
         test_file = test_path.split("::")[0]
         test_name = Path(test_file).stem
         if test_name == f"test_{module_name}" or test_name == f"{module_name}_test":
             return True
-    return False
 except (json.JSONDecodeError, OSError):
     return False
+else:
+    return False
```

installer/tests/unit/steps/test_dependencies.py (1)
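The TDD enforcer's module-scoped matching discussed above can be sketched as a self-contained function. This is illustrative, not the hook's exact code: the function name and the cache path handling are assumptions (the real hook reads pytest's `.pytest_cache/v/cache/lastfailed`, which is a JSON object keyed by failing test IDs).

```python
import json
from pathlib import Path


def has_failing_tests_for_module(cache_file: Path, module_name: str) -> bool:
    """Return True only if the lastfailed cache names a test module that
    matches the test_{module}.py or {module}_test.py convention."""
    try:
        lastfailed = json.loads(cache_file.read_text())
        for test_path in lastfailed:
            # Test IDs look like "tests/test_foo.py::test_bar"
            test_file = test_path.split("::")[0]
            test_name = Path(test_file).stem
            if test_name in (f"test_{module_name}", f"{module_name}_test"):
                return True
        return False
    except (json.JSONDecodeError, OSError):
        # Missing or corrupt cache: treat as "no relevant failures"
        return False
```

Unrelated failing tests elsewhere in the cache no longer block edits to an uninvolved module.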
235-258: Remove unused variable `original_info`. Line 249 assigns `original_info = ui.info`, but it's never used. This appears to be leftover from a previous implementation.

♻️ Proposed fix

```diff
 ui = Console(non_interactive=True)
 # Capture printed output by mocking ui.info
 info_calls = []
-original_info = ui.info
 ui.info = lambda msg: info_calls.append(msg)
```
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @.github/workflows/release-dev.yml:
- Around line 141-146: The publish-prerelease job currently only depends on
build jobs (publish-prerelease / needs: build-ccp-arm64,
build-ccp-darwin-arm64); add the test gating by including the test job (e.g.,
python-tests) in the needs array so publish-prerelease will wait for tests to
pass before running (update the publish-prerelease job's needs to include
python-tests).
- Around line 134-139: The Darwin upload step currently only uploads
ccp/dist/ccp-darwin-arm64.so causing installs to fail because the expected ccp
wrapper is missing; update the "Upload artifact" step for artifact name
ccp-darwin-arm64 (the actions/upload-artifact@v4 step) to include the wrapper
file as an additional path (e.g., add ccp/dist/ccp alongside
ccp/dist/ccp-darwin-arm64.so) so the macOS release matches the Linux upload and
the installer can find both the .so and the ccp wrapper.
♻️ Duplicate comments (3)
.github/workflows/release-dev.yml (3)
172-184: Shell injection risk via direct `github.head_ref` interpolation. This issue was previously identified. `github.head_ref` is attacker-controlled and can contain shell metacharacters. Pass it through an environment variable instead.
213-215: Missing pagination will skip cleanup of older pre-releases. This issue was previously identified. The `gh api` call returns only the first 30 releases by default. Add `--paginate` to ensure all dev pre-releases are considered for cleanup.
194-203: Workflow reruns will fail if the tag already exists. If this workflow is re-run for the same commit on the same day, `git tag "$VERSION"` and `git push origin "$VERSION"` will fail because the tag already exists. Add defensive cleanup before creating the tag.

🔧 Proposed fix for idempotent tag handling

```diff
+# Clean up existing tag/release if re-running
+gh release delete "$VERSION" --yes --cleanup-tag 2>/dev/null || true
+git tag -d "$VERSION" 2>/dev/null || true
+git push origin ":refs/tags/$VERSION" 2>/dev/null || true
+
 git tag "$VERSION"
 git push origin "$VERSION"
 gh release create "$VERSION" \
```
```yaml
- name: Upload artifact
  uses: actions/upload-artifact@v4
  with:
    name: ccp-darwin-arm64
    path: ccp/dist/ccp-darwin-arm64.so
    retention-days: 1
```
🧩 Analysis chain
🏁 Scripts executed (Repository: maxritter/claude-codepro):

```shell
# First, let's examine the exact lines mentioned in the workflow file
cat -n .github/workflows/release-dev.yml | sed -n '87,89p;134,139p'   # Length of output: 404

# Get a broader context around those lines to understand the build jobs
cat -n .github/workflows/release-dev.yml | sed -n '70,140p'           # Length of output: 2706

# Check if there's an install.sh that may reveal what artifacts are expected
find . -name "install.sh" -o -name "install*.sh" | head -20           # Length of output: 80

# Search for references to the ccp wrapper in the build configuration
rg "ccp-linux|ccp-darwin" --type yaml -B2 -A2                         # Length of output: 216

cat install.sh                                                        # Length of output: 12542

# Look for the ccp build script to understand what artifacts get generated
find . -name "build.py" | grep ccp                                    # Length of output: 82

# Check if the ccp wrapper is conditionally generated or platform-specific
rg "ccp-darwin|ccp-linux|ccp wrapper" -C3                             # Length of output: 50

wc -l ccp/build.py                                                    # Length of output: 83

head -150 ccp/build.py                                                # Length of output: 10624

# Search for platform-specific artifact generation
rg "darwin|linux|wrapper|\.so" ccp/build.py -B2 -A2                   # Length of output: 50

# Look at the download_ccp_binary function more carefully to see if there's platform-specific logic
sed -n '290,380p' install.sh                                          # Length of output: 2244
```
Darwin build missing ccp wrapper artifact — breaks installation on macOS.
The Linux ARM64 build uploads both ccp-linux-arm64.so and the ccp wrapper (lines 87-89), but Darwin only uploads the .so file (line 138). The install.sh script expects both files for all platforms and will fail on macOS when attempting to download the missing wrapper, deleting the .so and aborting the installation.
Add the ccp wrapper to the Darwin artifact upload (line 138) to match the Linux build.
🤖 Prompt for AI Agents
In @.github/workflows/release-dev.yml around lines 134 - 139, The Darwin upload
step currently only uploads ccp/dist/ccp-darwin-arm64.so causing installs to
fail because the expected ccp wrapper is missing; update the "Upload artifact"
step for artifact name ccp-darwin-arm64 (the actions/upload-artifact@v4 step) to
include the wrapper file as an additional path (e.g., add ccp/dist/ccp alongside
ccp/dist/ccp-darwin-arm64.so) so the macOS release matches the Linux upload and
the installer can find both the .so and the ccp wrapper.
- Only download standards-* skills (not custom skills)
- Fix ruff detection for multi-letter error codes (ERA, PLR)
- Use --output-format=concise for ruff to match regex
- Add CLI argument support for testing python hook
- TypeScript LSP always installs npm package first
- Remove gopls LSP installation
- Gate publish-prerelease on python-tests
- Add ccp wrapper to dev release artifacts
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
installer/tests/unit/steps/test_dependencies.py (1)
322-368: Use the `result` variables or drop them (F841). Both tests assign `result` but never assert it. Either remove the variable or assert the return value for extra coverage.

✅ Assert return values

```diff
-        result = install_typescript_lsp()
+        result = install_typescript_lsp()
+        assert result is True
@@
-        result = install_pyright_lsp()
+        result = install_pyright_lsp()
+        assert result is True
```
🤖 Fix all issues with AI agents
In `@installer/steps/dependencies.py`:
- Around line 783-800: The UI reports "CUDA" whenever has_nvidia_gpu() returns
true in _install_vexor_with_ui but that detected use_cuda is not propagated to
installation/configuration; either pass use_cuda into install/config flow (e.g.,
forward use_cuda from _install_vexor_with_ui into install_vexor and ultimately
into _configure_vexor_local to set local_cuda accordingly) or change the
status/success messages in _install_vexor_with_ui to avoid implying CUDA was
enabled; update the call sites (install_vexor and _configure_vexor_local) so a
boolean like use_cuda or enable_cuda controls the local_cuda value in the Vexor
config.
In `@installer/tests/unit/steps/test_dependencies.py`:
- Around line 246-250: The assignment to original_info is unused and triggers
F841; remove the line assigning original_info (the variable named original_info
that stores ui.info) and keep the mocking of ui.info via the lambda that appends
to info_calls (i.e., remove only "original_info = ui.info" and leave ui =
Console(non_interactive=True) and ui.info = lambda msg: info_calls.append(msg)
intact).
- Around line 168-240: The tests patch several helpers (e.g.,
_get_forced_claude_version, _configure_claude_defaults, _run_bash_with_retry,
_remove_native_claude_binaries) but some patch parameters in the test functions
(like mock_remove, mock_run, mock_config, mock_version) are unused and trigger
Ruff ARG002; rename unused parameters by prefixing them with an underscore
(e.g., _mock_remove, _mock_run, _mock_config, _mock_version) or add a
per-parameter noqa comment to the function signature for each test (for example
in test_install_claude_code_removes_native_binaries,
test_install_claude_code_uses_npm, test_install_claude_code_uses_version_tag,
test_install_claude_code_configures_defaults,
test_install_claude_code_with_ui_shows_pinned_version_info) so the linter stops
flagging unused patched args.
♻️ Duplicate comments (5)
.github/workflows/release-dev.yml (3)
134-139: Darwin build missing `ccp` wrapper artifact — breaks installation on macOS. The Linux ARM64 build uploads both `ccp-linux-arm64.so` and the `ccp` wrapper (lines 87-89), but Darwin only uploads the `.so` file. If `install.sh` expects both files for all platforms, macOS installations will fail. Add the `ccp` wrapper to the Darwin artifact upload:

Proposed fix

```diff
 - name: Upload artifact
   uses: actions/upload-artifact@v4
   with:
     name: ccp-darwin-arm64
-    path: ccp/dist/ccp-darwin-arm64.so
+    path: |
+      ccp/dist/ccp-darwin-arm64.so
+      ccp/dist/ccp
     retention-days: 1
```
173-185: Avoid shell injection via direct `github.head_ref` interpolation. `github.head_ref` is attacker-controlled input that may contain shell metacharacters. Pass it through an environment variable instead of direct interpolation.

Safer pattern

```diff
 - name: Create pre-release
   env:
     GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+    PR_BRANCH: ${{ github.head_ref }}
   run: |
     VERSION="${{ steps.version.outputs.VERSION }}"
     PR_NUM="${{ github.event.pull_request.number }}"
     if [ -n "$PR_NUM" ]; then
       NOTES="Pre-release build from PR #${PR_NUM}
       **Commit:** ${{ steps.version.outputs.COMMIT_SHA }}
-      **Branch:** ${{ github.head_ref }}
+      **Branch:** ${PR_BRANCH}
```
214-216: Add pagination to fetch all dev releases for cleanup. `gh api repos/.../releases` defaults to returning only 30 releases. If more than 30 dev prereleases exist, older ones won't be cleaned. Add `--paginate` and aggregate the pages.

Suggested change

```diff
-gh api repos/${{ github.repository }}/releases \
-  --jq '[.[] | select(.prerelease and (.tag_name | startswith("dev-")))] | sort_by(.created_at) | reverse | .[5:] | .[].tag_name' \
+gh api repos/${{ github.repository }}/releases --paginate \
+  | jq -s '[.[][] | select(.prerelease and (.tag_name | startswith("dev-")))] | sort_by(.created_at) | reverse | .[5:] | .[].tag_name' \
```

.claude/hooks/file_checker_python.py (1)
147-156: Ruff detection still misses 4+ letter rule codes and ignores non‑zero exit. `[A-Z]{1,3}\d+` skips prefixes like `PERF`, `TRIO`, `ASYNC`, and `FURB`, so those violations won’t be detected. Also, a non‑zero ruff exit (config/runtime errors) can currently slip through if no lines match the regex.

🛠️ Suggested fix

```diff
-        error_pattern = re.compile(r":\d+:\d+: [A-Z]{1,3}\d+")
+        error_pattern = re.compile(r":\d+:\d+: [A-Z]+\d+")
         error_lines = [line for line in lines if error_pattern.search(line)]
-        has_issues = len(error_lines) > 0
+        has_issues = len(error_lines) > 0 or result.returncode != 0
```

Also applies to: 191-193
installer/steps/dependencies.py (1)
803-839: Add trailing newline when writing ~/.claude.json. Line 833 writes JSON without a trailing newline, unlike `_patch_claude_config`. This mismatch can cause diff noise.

🔧 Proposed fix

```diff
- claude_config_path.write_text(json.dumps(config, indent=2))
+ claude_config_path.write_text(json.dumps(config, indent=2) + "\n")
```
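A tiny runnable illustration of the newline point (the path and config content here are made up for the example):

```python
import json
import tempfile
from pathlib import Path

config = {"statusLine": {"type": "command"}}  # illustrative config content
path = Path(tempfile.mkdtemp()) / "claude.json"

# Appending "\n" makes the file end with a newline, matching the other
# config writer and avoiding "\ No newline at end of file" diff noise.
path.write_text(json.dumps(config, indent=2) + "\n")
```

Tools that treat text files as newline-terminated (POSIX utilities, diff viewers) then handle the file cleanly.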
🧹 Nitpick comments (3)
installer/steps/claude_files.py (2)
24-41: Consider edge cases for path prefixes. The path replacement logic only handles paths prefixed with a space (` `) or a double quote (`"`). If paths appear with single quotes, at the start of content, or with other delimiters, they won't be patched. This may be fine for the current JSON settings format, but could cause issues if the function is used more broadly.

♻️ Optional: Use regex for more robust path matching

```diff
+import re
+
 def patch_claude_paths(content: str, project_dir: Path) -> str:
     """Patch .claude paths to use absolute paths for the target project.

     Handles both relative paths (.claude/hooks/, .claude/bin/) and existing
     absolute paths from the source repo (/workspaces/claude-codepro/.claude/).
     """
     abs_hooks_path = str(project_dir / ".claude" / "hooks") + "/"
     abs_bin_path = str(project_dir / ".claude" / "bin") + "/"
     content = content.replace(SOURCE_REPO_HOOKS_PATH, abs_hooks_path)
     content = content.replace(SOURCE_REPO_BIN_PATH, abs_bin_path)
-    content = content.replace(" " + HOOKS_PATH_PATTERN, " " + abs_hooks_path)
-    content = content.replace('"' + HOOKS_PATH_PATTERN, '"' + abs_hooks_path)
-    content = content.replace(" " + BIN_PATH_PATTERN, " " + abs_bin_path)
-    content = content.replace('"' + BIN_PATH_PATTERN, '"' + abs_bin_path)
+    # Use word boundary-like matching for relative paths
+    content = re.sub(r'(?<=[" ])' + re.escape(HOOKS_PATH_PATTERN), abs_hooks_path, content)
+    content = re.sub(r'(?<=[" ])' + re.escape(BIN_PATH_PATTERN), abs_bin_path, content)
     return content
```
244-260: Consider clarifying "migration" message.The message "Migrating skill to command: {name}..." could be misleading since this code only removes the old skill directory. The actual command files are installed separately later. Consider rewording to "Removing deprecated skill (now a command): {name}..." for clarity.
♻️ Optional: Clarify migration message
```diff
 if name in migrated_to_commands or name.startswith("standards-"):
     if ui:
         if name in migrated_to_commands:
-            ui.status(f"Migrating skill to command: {name}...")
+            ui.status(f"Removing deprecated skill (replaced by command): {name}...")
         else:
             ui.status(f"Clearing old skill: {name}...")
```

installer/steps/dependencies.py (1)
748-757: Avoid duplicate status messages for Claude Code install. `_install_claude_code_with_ui` emits a status line and `install_claude_code` emits one too, so users may see duplicates (line 751). Consider letting the inner function handle status output.

♻️ Suggested tweak

```diff
     if ui:
-        ui.status("Installing Claude Code via npm...")
     success, version = install_claude_code(project_dir, ui)
```
Diff excerpt (installer/steps/dependencies.py; the review view interleaved two helpers): `_install_vexor_with_ui(ui)` imports `has_nvidia_gpu` from `installer.platform_utils`, sets `use_cuda = has_nvidia_gpu()` and `mode_str = "CUDA" if use_cuda else "CPU"`, reports status, calls `install_vexor(use_local=True, ui=ui)`, and returns True/False with matching success/warning messages; `_install_qlty_with_ui(ui, project_dir)` calls `install_qlty(project_dir)`, reports "qlty installed", runs `run_qlty_check(project_dir, ui)` to download linter prerequisites, and warns if qlty could not be installed.
🧩 Analysis chain
🏁 Script executed:
# First, let's explore the file structure and find the functions mentioned
fd -t f "dependencies.py" | head -5Repository: maxritter/claude-codepro
Length of output: 147
🏁 Script executed:
# Find and examine the _install_vexor_with_ui function and related code
rg -n "_install_vexor_with_ui|install_vexor|_configure_vexor_local" installer/steps/dependencies.py -A 10 -B 2 | head -100Repository: maxritter/claude-codepro
Length of output: 2069
🏁 Script executed:
# Check the full context of install_vexor function to understand parameter passing
rg -n "def install_vexor" installer/ -A 20 -B 2Repository: maxritter/claude-codepro
Length of output: 1658
🏁 Script executed:
# Get the full _configure_vexor_local function
sed -n '396,450p' installer/steps/dependencies.pyRepository: maxritter/claude-codepro
Length of output: 1814
🏁 Script executed:
# Also check _setup_vexor_local_model to see if use_cuda is used there
rg -n "_setup_vexor_local_model" installer/steps/dependencies.py -A 30 -B 2Repository: maxritter/claude-codepro
Length of output: 2387
GPU mode messaging misleads about actual configuration.
Line 788 displays "CUDA" when a GPU is detected, but _configure_vexor_local() unconditionally sets local_cuda: false without receiving or using the detected GPU status. The use_cuda variable from has_nvidia_gpu() is never passed through the call chain to the configuration. Either pass use_cuda to enable GPU acceleration in the config, or soften the messaging to avoid implying CUDA usage.
💡 Option: soften messaging to avoid implying CUDA usage
```diff
- mode_str = "CUDA" if use_cuda else "CPU"
+ mode_str = "GPU available (auto-detect)" if use_cuda else "CPU"
```

🤖 Prompt for AI Agents
In `@installer/steps/dependencies.py` around lines 783 - 800, The UI reports
"CUDA" whenever has_nvidia_gpu() returns true in _install_vexor_with_ui but that
detected use_cuda is not propagated to installation/configuration; either pass
use_cuda into install/config flow (e.g., forward use_cuda from
_install_vexor_with_ui into install_vexor and ultimately into
_configure_vexor_local to set local_cuda accordingly) or change the
status/success messages in _install_vexor_with_ui to avoid implying CUDA was
enabled; update the call sites (install_vexor and _configure_vexor_local) so a
boolean like use_cuda or enable_cuda controls the local_cuda value in the Vexor
config.
Diff excerpt (installer/tests/unit/steps/test_dependencies.py; the review view interleaved old and new test code): the native-installer tests are rewritten for the npm flow. `test_install_claude_code_uses_native_installer` becomes `test_install_claude_code_uses_npm`, asserting that `"npm install -g @anthropic-ai/claude-code"` appears in the `_run_bash_with_retry` call; a new `test_install_claude_code_uses_version_tag` pins `@2.1.19` via `_get_forced_claude_version`; patches for `_download_claude_binary_with_progress`, `_configure_firecrawl_mcp`, and `_remove_npm_claude_binaries` are dropped in favor of `_remove_native_claude_binaries`; and `test_install_claude_code_does_not_configure_firecrawl` becomes `test_install_claude_code_with_ui_shows_pinned_version_info`.
Silence Ruff ARG002 for unused patched args in these tests.
Several mock parameters aren’t used, which will trip linting. Prefix unused mocks with _ (or add # noqa: ARG002) to keep tests clean.
🔧 Suggested pattern (apply similarly to other affected tests in this block)
```diff
-    def test_install_claude_code_removes_native_binaries(
-        self, mock_remove, mock_run, mock_config, mock_version
-    ):
+    def test_install_claude_code_removes_native_binaries(
+        self, mock_remove, _mock_run, _mock_config, _mock_version
+    ):
```

🧰 Tools
🪛 Ruff (0.14.13)
169-169: Unused method arguments: mock_run, mock_config, mock_version (ARG002)
184-184: Unused method arguments: mock_remove, mock_config, mock_version (ARG002)
205-205: Unused method arguments: mock_remove, mock_config, mock_version (ARG002)
225-225: Unused method arguments: mock_remove, mock_run, mock_version (ARG002)
240-240: Unused method arguments: mock_remove, mock_run, mock_config, mock_version (ARG002)
🤖 Prompt for AI Agents
In `@installer/tests/unit/steps/test_dependencies.py` around lines 168 - 240, The
tests patch several helpers (e.g., _get_forced_claude_version,
_configure_claude_defaults, _run_bash_with_retry,
_remove_native_claude_binaries) but some patch parameters in the test functions
(like mock_remove, mock_run, mock_config, mock_version) are unused and trigger
Ruff ARG002; rename unused parameters by prefixing them with an underscore
(e.g., _mock_remove, _mock_run, _mock_config, _mock_version) or add a
per-parameter noqa comment to the function signature for each test (for example
in test_install_claude_code_removes_native_binaries,
test_install_claude_code_uses_npm, test_install_claude_code_uses_version_tag,
test_install_claude_code_configures_defaults,
test_install_claude_code_with_ui_shows_pinned_version_info) so the linter stops
flagging unused patched args.
```python
ui = Console(non_interactive=True)
# Capture printed output by mocking ui.info
info_calls = []
original_info = ui.info
ui.info = lambda msg: info_calls.append(msg)
```
Remove the unused original_info assignment.
It’s never referenced and triggers F841.
🧹 Minimal cleanup
```diff
-    original_info = ui.info
     ui.info = lambda msg: info_calls.append(msg)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
ui = Console(non_interactive=True)
# Capture printed output by mocking ui.info
info_calls = []
ui.info = lambda msg: info_calls.append(msg)
```
🧰 Tools
🪛 Ruff (0.14.13)
249-249: Local variable original_info is assigned to but never used
Remove assignment to unused variable original_info
(F841)
🤖 Prompt for AI Agents
In `@installer/tests/unit/steps/test_dependencies.py` around lines 246 - 250, The
assignment to original_info is unused and triggers F841; remove the line
assigning original_info (the variable named original_info that stores ui.info)
and keep the mocking of ui.info via the lambda that appends to info_calls (i.e.,
remove only "original_info = ui.info" and leave ui =
Console(non_interactive=True) and ui.info = lambda msg: info_calls.append(msg)
intact).
- Resolve merge conflicts with main branch
- Skip Claude Code update message if already at target version
- Skip Claude update check if already up to date (npm outdated)
- Cleaner banner (removed duplicate emojis)
- TDD enforcer returns exit code 2 (visible feedback)
- Fix ruff detection for multi-letter codes
- Only download standards-* skills
- TypeScript LSP always installs npm package first
- Remove gopls LSP
- Add ccp wrapper to dev release artifacts
- Gate publish-prerelease on python-tests
🎉 This PR is included in version 5.2.0 🎉 The release is available on GitHub. Your semantic-release bot 📦🚀
Summary
- `release-dev.yml` workflow for PR-based dev pre-releases
- `dev-{sha}-{date}` versioning
- `claude-code-review.yml` for automatic PR code review
- `claude.yml` for @claude mentions in issues/PRs
- `install.sh` with `--dev` and `--version` flags

Test plan

- `install.sh --dev` works after pre-release exists

Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Chores