Add commercial infrastructure: license gating, Polar.sh integration, team features #39
Conversation
…om rules, DLP, audit (Part 9)

Team-tier features implemented:
- Approval workflow (ADR-7): shell hooks prompt for approval on configured rules, temp-file based communication, fail-closed on missing/corrupt files, timeout support in all 4 shells (Fish via bash subprocess, PowerShell via Console.KeyAvailable polling)
- Webhook event dispatcher (#[cfg(unix)]): background thread delivery with exponential backoff retry, env var expansion in headers, payload templates
- Session tracking: per-shell-session ID generation, carried through audit log
- MITRE ATT&CK classification: mitre_id on findings (Team enrichment)
- Custom YAML detection rules: CustomRuleMatch rule_id, compiled at policy load, executed in engine tier-3
- DLP redaction: built-in patterns (OpenAI/AWS/GitHub/Anthropic/Slack/email) plus custom policy patterns, wired into audit log, webhooks, and all MCP tool/resource response paths including cloaking diff_text
- Audit aggregation: export (JSON/CSV), stats, compliance reports
- Policy extensions: approval_rules, network_deny/allow, webhooks, severity_overrides, dlp_custom_patterns, custom_rules
- Shell hooks: approval flow in all 4 hooks, output defaults to stderr with TIRITH_OUTPUT=tty override, PowerShell fail-closed warnings via stderr

All shell hooks synced per ADR-11. 422 tests pass, zero clippy warnings.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
ed25519-dalek (with rand_core feature) enables Ed25519 signature verification for license tokens. rand_core added as dev-dependency for test key pair generation. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace unsigned base64 JSON bootstrap tokens with Ed25519-signed format (base64url payload.base64url signature).

Adds:
- EnforcementMode (Legacy/SignedPreferred/SignedOnly) compile-time const
- Static keyring with kid-based key rotation support
- decode_signed_token() with full claims validation (iss, aud, exp, nbf)
- decode_tier_at_with_mode() for testable enforcement-mode injection
- key_format_status() public diagnostic helper for tirith doctor
- MAX_TOKEN_LEN (8192) size gate for DoS resistance
- 34 new tests (signed happy path, signature verification, claims validation, key rotation, parser hardening, enforcement mode, keyring invariants, CI release guard)
- Compile-time keyring non-empty assertion

Legacy unsigned tokens are still accepted in SignedPreferred mode (v0.2.x transition). SignedOnly mode rejects unsigned tokens (for v0.3.0+). All 17 existing tests preserved and passing.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Reports key format status (NoKey, LegacyUnsigned, LegacyInvalid, SignedStructural, Malformed) in doctor output. Warns users on unsigned keys that signed tokens will be required in a future release. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Dedicated job runs before all builds, verifying that the compile-time EnforcementMode matches the release tag version: - v0.2.x requires SignedPreferred or SignedOnly - v0.3.0+ requires SignedOnly Non-version tags skip the check. Guard validates "1 passed" in output to fail hard if the test is renamed or removed. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds License Tier Verification section to threat-model.md covering Ed25519 signatures, key rotation, enforcement modes, and the trust boundary for self-built binaries. README note clarifies that tier checks gate enrichment features, not security detection. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Core fixes:
- Add private_network_ip tier-1 pattern for bare IPs without scheme (SSRF gap)
- Add host validation in extract_host_from_arg (reject malformed hosts)
- Fix audit_aggregator time_range: use min_by/max_by instead of first/last
- Sanitize webhook template substitution values with sanitize_for_json
- Fix PowerShell approval catch block: log error, fail closed, reset validKeys

Error handling improvements:
- audit.rs: log failures instead of silently ignoring
- checkpoint.rs, mcp/dispatcher.rs, webhook.rs: proper error propagation
- CLI commands (check, run, scan, score, etc.): report JSON write failures

Shell hook fixes:
- Sync all shell hooks to embedded assets
- Fix uninlined_format_args clippy lints

Test improvements:
- Add bare-IP golden fixtures for private_network_access
- Add metadata_endpoint and private_network_access to no_url_rules safeguard

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Shell hooks passed --non-interactive but not --interactive, causing the engine to default to non-interactive bypass policy (which denies TIRITH=0 bypass). Add --interactive alongside --non-interactive so the CLI correctly sets ctx.interactive=true for terminal sessions. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
push_segment() incorrectly treated VAR=VALUE as the command token. It now skips leading environment variable assignments to find the real command, and adds a pub is_env_assignment() helper for use by engine bypass detection. Fixes: `TIRITH=0 curl evil.com` now correctly identifies `curl` as the command. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Bypass detection now finds TIRITH=0 in inline env prefixes and env wrappers (env -i TIRITH=0, /usr/bin/env TIRITH=0), not just process env. Handles -u value-taking flag and -- option terminator. Self-invocation guard allows tirith's own commands (tirith diff, etc.) in single-segment inputs. Resolves through env/command/time wrappers. Uses canonicalized path comparison for path-form invocations with fallback to literal-only matching when canonicalization fails. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
check_shell_profile() now enumerates ~/.config/fish/conf.d/*.fish files via read_dir and checks all candidates instead of returning on the first existing profile. This fixes false negatives when tirith init is in conf.d or a secondary profile like .zshenv. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add command-aware output-flag skipping for curl (-o/--output) and wget (-O/-OFILE/--output-document). Extract URLs from command+args instead of raw segment text to avoid matching URLs in env-prefix values. Add conservative non-TLD file extensions (.png, .jpg, .mp4, etc.) to schemeless host exclusion list. Fixes issue #33. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…paths

When a schemeless host has a path component (e.g., evil.zip/payload), the file extension heuristic should not suppress it: .zip is a real TLD, and evil.zip/payload is a domain, not a filename. The extension check now only applies when there is no path component. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…load, policy client

Adds Team/Enterprise license enforcement across core modules, SARIF report generation for CI integration, audit log upload with spool and retention, a remote policy client for org-managed configurations, URL validation for webhook endpoints, and a license CLI subcommand. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…detection

- Quote URL in call_check_url to prevent metacharacter tokenization
- Add path traversal protection (validate_path_scope) to MCP scan tools
- Narrow TIRITH=0 bypass to exact match (was starts_with, matched TIRITH=01)
- Remove dead code branch in is_mcp_config_file
- Add ConfigMalformed finding for unparseable MCP config files
- Add drift detection to merge_claude_settings (consistent with other merges)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Implements tools/license-server/, an axum web service that:
- Receives Paddle webhook events (HMAC-SHA256 verified)
- Signs Ed25519 license tokens in-process
- Delivers tokens via one-time receipt pages
- Serves POST /api/license/refresh for CLI token renewal
- Stores state in SQLite with single-transaction idempotency
- Includes dead-letter auto-retry for unresolvable prices
- Daily SQLite backup with optional R2 upload

Also adds k2 public key to KEYRING in license.rs for key rotation.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
License server:
- Replace Paddle webhook verification with Standard Webhooks (Polar.sh)
- Rewrite webhook handler for Polar events (order.paid, subscription.*)
- Add canceled/revoked dual semantics (canceled=benefits continue, revoked=terminal)
- Add absorbing terminal state for revoked subscriptions
- Add DB migration for price_id -> product_id column rename
- Fix UPSERT status reconciliation for out-of-order events
- Add 16 lifecycle integration tests

Core:
- Default license tier to Pro (Pro features free for all users)
- Fix review findings: URL quoting, bypass detection, dead code, drift detection

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
```rust
let token_ttl_days = std::env::var("TOKEN_TTL_DAYS")
    .ok()
    .and_then(|v| v.parse().ok())
    .unwrap_or(30);
```
🟢 Low src/config.rs:62
token_ttl_days accepts zero or negative values, which would create already-expired tokens. Consider validating it's positive after parsing.
Suggested fix:
```rust
let token_ttl_days = std::env::var("TOKEN_TTL_DAYS")
    .ok()
    .and_then(|v| v.parse().ok())
    .unwrap_or(30);
assert!(token_ttl_days > 0, "TOKEN_TTL_DAYS must be positive");
```
Evidence trail:
tools/license-server/src/config.rs:12 - `token_ttl_days: i64` (signed type allows negatives)
tools/license-server/src/config.rs:62-65 - parsing without positive validation
tools/license-server/src/routes/refresh.rs:64 - `exp_ts = chrono::Utc::now().timestamp() + (state.config.token_ttl_days * 86400)`
tools/license-server/src/routes/webhook.rs:685 - same expiration calculation
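A minimal sketch of the suggested guard. The review proposes an assert after parsing; the variant below instead treats non-positive values like any other invalid input and falls back to the default (`token_ttl_days` as a standalone function is hypothetical, mirroring the reviewed snippet's names):

```rust
use std::env;

// Reject zero/negative TTLs during parsing so expiration math
// (now + ttl_days * 86400) can never produce an already-expired token.
fn token_ttl_days() -> i64 {
    env::var("TOKEN_TTL_DAYS")
        .ok()
        .and_then(|v| v.parse::<i64>().ok())
        .filter(|&d| d > 0) // non-positive values fall through to the default
        .unwrap_or(30)
}
```

The `filter` keeps the function total (no panic path), which may be preferable for a long-running service over the asserted variant.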
```rust
}
}
let val = serde_json::Value::Object(map);
if serde_json::to_writer_pretty(std::io::stdout().lock(), &val).is_err() {
```
🟢 Low cli/license_cmd.rs:260
If to_writer_pretty fails after partial output, the println!() still runs, producing malformed JSON. Consider skipping the newline on error.
Evidence trail:
crates/tirith/src/cli/license_cmd.rs lines 260-264 at REVIEWED_COMMIT: `if serde_json::to_writer_pretty(std::io::stdout().lock(), &val).is_err() { eprintln!("tirith: failed to write JSON output"); } println!();` - The `println!()` is outside the if block and runs unconditionally regardless of whether `to_writer_pretty` succeeded or failed.
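The suggested control flow can be sketched generically with `std::io` (`write_payload` is a hypothetical stand-in for the JSON-writing code; the point is that the trailing newline is only reached if the body write succeeded):

```rust
use std::io::{self, Write};

// Write the payload and its trailing newline as one fallible sequence,
// so a partial body is never followed by a stray newline.
fn write_payload(out: &mut impl Write, payload: &str) -> io::Result<()> {
    out.write_all(payload.as_bytes())?;
    out.write_all(b"\n") // reached only when the body write succeeded
}
```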
```rust
}

/// Rewrite the spool file with the remaining unsent lines.
fn rewrite_spool(path: &std::path::Path, remaining: &[String]) {
```
🟡 Medium src/audit_upload.rs:175
Write errors are silently discarded with let _ =. If the write fails, callers won't know and data may be lost or duplicated. Consider returning a Result and propagating errors.
Evidence trail:
crates/tirith-core/src/audit_upload.rs lines 174-181 at REVIEWED_COMMIT: `rewrite_spool` function uses `let _ = fs::write(...)` discarding the Result. Lines 145-163 show `rewrite_spool` is called after successfully uploading audit events, meaning a failed write would leave sent events in the spool for potential re-upload.
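A sketch of the signature change the review suggests, assuming the spool is a newline-delimited file (the body shown is an illustration, not the project's actual implementation):

```rust
use std::{fs, io, path::Path};

// Return the write Result so callers learn about spool rewrite failures
// instead of silently losing or re-uploading events.
fn rewrite_spool(path: &Path, remaining: &[String]) -> io::Result<()> {
    let mut body = remaining.join("\n");
    if !body.is_empty() {
        body.push('\n'); // keep the trailing newline convention
    }
    fs::write(path, body)
}
```

Callers can then decide whether a failed rewrite should abort the upload cycle or retry.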
```rust
    .unwrap();

// Force Community tier regardless of host machine config.
unsafe { std::env::set_var("TIRITH_LICENSE", "!") };
```
🟢 Low src/policy.rs:745
Environment variables won't be cleaned up if assertions panic, potentially polluting other tests. Consider using a RAII guard pattern or std::panic::catch_unwind to ensure cleanup runs regardless of test outcome.
Evidence trail:
crates/tirith-core/src/policy.rs lines 730-764 at REVIEWED_COMMIT: Test function `test_discover_skips_remote_fetch_below_team_tier` sets env vars at lines 745-747, runs assertions at lines 749-756, and only cleans up at lines 758-760. The `_guard` at line 732 is a mutex lock, not an env var cleanup guard. Rust assertions panic on failure, preventing cleanup code from running.
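The RAII guard pattern the review suggests can be sketched as follows (`EnvGuard` is a hypothetical helper, not an existing type in the codebase; `Drop` runs during unwinding, so the variable is restored even if an assertion panics):

```rust
use std::env;

// RAII guard: sets an env var on construction, restores the prior
// value (or removes the var) when dropped, including on panic unwind.
struct EnvGuard {
    key: String,
    previous: Option<String>,
}

impl EnvGuard {
    fn set(key: &str, value: &str) -> Self {
        let previous = env::var(key).ok();
        unsafe { env::set_var(key, value) };
        EnvGuard { key: key.to_string(), previous }
    }
}

impl Drop for EnvGuard {
    fn drop(&mut self) {
        match &self.previous {
            Some(v) => unsafe { env::set_var(&self.key, v) },
            None => unsafe { env::remove_var(&self.key) },
        }
    }
}
```

A test would then hold the guard for its whole body: `let _license = EnvGuard::set("TIRITH_LICENSE", "!");` and cleanup happens automatically regardless of outcome.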
```rust
    host_port.to_string()
}

/// Check if an IPv4 address is in a private/reserved range (excluding loopback).
```
🟢 Low rules/command.rs:522
Suggestion: Align is_private_ip docs and behavior. The doc says “private/reserved,” but the code only checks IPv4 RFC1918 and skips 127/8; IPv6 isn’t handled. Either narrow the doc to “IPv4 RFC1918 only (excluding loopback)” or extend detection to cover 169.254/16, 100.64/10, 127/8 (if SSRF‑focused), and IPv6 private/link‑local (e.g., fc00::/7, fe80::/10).
Evidence trail:
crates/tirith-core/src/rules/command.rs lines 522-534 at REVIEWED_COMMIT. Doc comment on line 522 states 'Check if an IPv4 address is in a private/reserved range (excluding loopback).' Implementation lines 523-533 only check RFC1918 private ranges (10/8, 172.16/12, 192.168/16), not other reserved ranges like 169.254/16 (link-local) or 100.64/10 (CGNAT) that fall under the 'reserved' category. Note: The doc does explicitly say 'IPv4' and 'excluding loopback', so those parts of the issue's claims are less accurate.
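The broader "extend detection" option can be sketched with std only (`is_private_or_reserved_v4` is a hypothetical name; whether to include 127/8 depends on the SSRF focus the review mentions):

```rust
use std::net::Ipv4Addr;

// Covers RFC1918 private ranges plus the reserved ranges the review lists:
// link-local 169.254/16, loopback 127/8, and CGNAT 100.64/10.
fn is_private_or_reserved_v4(ip: Ipv4Addr) -> bool {
    let o = ip.octets();
    ip.is_private()                               // 10/8, 172.16/12, 192.168/16
        || ip.is_loopback()                       // 127/8 (if SSRF-focused)
        || (o[0] == 169 && o[1] == 254)           // 169.254/16 link-local
        || (o[0] == 100 && (o[1] & 0xC0) == 0x40) // 100.64/10 CGNAT
}
```

IPv6 coverage (fc00::/7, fe80::/10) would need a parallel `Ipv6Addr` check, since the stable std helpers for those ranges are more limited.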
```rust
/// (no panic, no error exit).
pub fn current_tier() -> Tier {
    match read_license_key() {
        Some(k) => decode_tier_at(&k, Utc::now()).unwrap_or_else(|| {
```
🟢 Low src/license.rs:378
The code falls back to Tier::Pro on lines 382 and 384, but the doc comment states it should fall back to Tier::Community. Consider changing both fallbacks to Tier::Community to match the documented behavior.
Evidence trail:
crates/tirith-core/src/license.rs lines 370-395 at REVIEWED_COMMIT: Doc comment on lines 374-375 states "Invalid, expired, or missing keys silently fall back to Community". Implementation on line 382 returns `Tier::Pro` in the unwrap_or_else closure, and line 384 returns `Tier::Pro` in the None arm.
- Escape checkout_id in receipt HTML to prevent XSS (receipt.rs)
- Sanitize description before writing to .env approval file (approval.rs)
- Log spool write errors instead of silently discarding (audit_upload.rs)
- Strip newlines from command_redacted in markdown tables (audit_aggregator.rs)
- Redact finding.title consistently with other fields (redact.rs)
- Use url::Url for MCP server URL parsing (configfile.rs)
- Recognize IPv4-mapped loopback (::ffff:127.x) as benign (hostname.rs)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
```rust
        eprintln!("tirith: checkpoint purge: failed to remove {}: {err}", e.id);
    }
}
return false;
```
🟡 Medium src/checkpoint.rs:439
purge updates in‑memory state even when remove_dir_all fails, which undercounts entries/bytes. Suggest only mutating all/total/counters after a successful delete and keeping the item when deletion fails.
Evidence trail:
crates/tirith-core/src/checkpoint.rs lines 420-485 (REVIEWED_COMMIT):
- Line ~440: `return false;` is outside the match block, so items are removed from `all.retain()` even when `remove_dir_all` fails
- Lines ~447-448: `all.pop()` removes item before `remove_dir_all` is called
- Lines ~466-468: `all.pop()` and `total -= oldest.total_bytes` execute before `remove_dir_all`
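The "mutate only after a successful delete" ordering can be sketched in isolation (`purge_oldest` and the `delete` closure are hypothetical stand-ins for the checkpoint entry list and `remove_dir_all`):

```rust
// In-memory accounting is touched only after the on-disk removal succeeds;
// on failure the entry stays in the list so counts and byte totals remain accurate.
fn purge_oldest(
    all: &mut Vec<(String, u64)>, // (id, bytes), oldest last
    total: &mut u64,
    delete: impl Fn(&str) -> Result<(), String>,
) -> bool {
    let Some((id, bytes)) = all.last().cloned() else {
        return false;
    };
    match delete(&id) {
        Ok(()) => {
            all.pop(); // state mutated only after a successful delete
            *total -= bytes;
            true
        }
        Err(err) => {
            eprintln!("purge: failed to remove {id}: {err}");
            false // keep the item; accounting stays consistent with disk
        }
    }
}
```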
```rust
/// Validate that a SHA-256 filename is exactly 64 lowercase hex characters.
fn validate_sha256_filename(sha: &str) -> Result<(), String> {
    if sha.len() != 64 || !sha.chars().all(|c| c.is_ascii_hexdigit()) {
```
🟢 Low src/checkpoint.rs:264
is_ascii_hexdigit() accepts both uppercase and lowercase hex. If stored files use lowercase but a crafted manifest contains uppercase, the file lookup silently fails. Consider enforcing lowercase only in validation.
Evidence trail:
crates/tirith-core/src/checkpoint.rs lines 262-267 (validate_sha256_filename function with comment claiming lowercase validation but using is_ascii_hexdigit), line 645 (sha256_file uses {:x} format producing lowercase), lines 297-303 (file lookup using manifest sha256 and skip-on-missing behavior)
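A lowercase-only version of the check is a one-character-class change (sketch, matching the doc comment's stated contract):

```rust
// Enforce exactly 64 lowercase hex characters, matching the {:x}
// format used when files are stored, so lookups cannot silently miss.
fn validate_sha256_filename(sha: &str) -> Result<(), String> {
    let ok = sha.len() == 64 && sha.chars().all(|c| matches!(c, '0'..='9' | 'a'..='f'));
    if ok {
        Ok(())
    } else {
        Err(format!("invalid sha256 filename: {sha}"))
    }
}
```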
```rust
fn validate_restore_path(path: &str) -> Result<(), String> {
    let p = Path::new(path);
    for component in p.components() {
        if matches!(component, std::path::Component::ParentDir) {
```
🟢 Low src/checkpoint.rs:255
This validation rejects .. but allows absolute paths like /etc/passwd. Consider also rejecting RootDir components to prevent arbitrary file overwrites.
Evidence trail:
crates/tirith-core/src/checkpoint.rs lines 252-260 (validate_restore_path function only checks for ParentDir), line 292 (function called on manifest entry), lines 305-315 (path used as destination for fs::copy), line 723 test explicitly shows absolute paths pass validation: `assert!(validate_restore_path("/absolute/path/file.txt").is_ok());`
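The extended validation could reject both traversal and absolute paths in one pass (sketch; `Component::Prefix` is included for Windows drive/UNC prefixes):

```rust
use std::path::{Component, Path};

// Reject `..` components (traversal) and RootDir/Prefix components
// (absolute paths like /etc/passwd), keeping restores inside the target dir.
fn validate_restore_path(path: &str) -> Result<(), String> {
    for component in Path::new(path).components() {
        match component {
            Component::ParentDir => return Err(format!("path traversal in {path}")),
            Component::RootDir | Component::Prefix(_) => {
                return Err(format!("absolute path not allowed: {path}"))
            }
            _ => {}
        }
    }
    Ok(())
}
```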
```rust
        }
    }
}
Err(crate::policy_client::PolicyFetchError::AuthError(code)) => {
```
🟢 Low src/policy.rs:355
enforce_fail_mode is unused in discover(); auth errors always fail closed. If it’s meant to control auth‑error fallback, consider checking it before calling fail_closed_policy(). Otherwise remove the field.
Evidence trail:
crates/tirith-core/src/policy.rs lines 105-108 (field definition with doc comment), line 257 (default value), lines 355-358 (auth error handling that ignores the field). git_grep for 'enforce_fail_mode' confirms only 2 occurrences in the entire codebase (definition and default assignment).
```rust
    max_files: None,
};

let policy = crate::policy::Policy::discover(None);
```
🟢 Low mcp/resources.rs:51
Consider passing cwd to Policy::discover to match the working directory used by the scan. Currently None is passed, which may load a different policy than what scan_single_file() uses internally.
Suggested change:
```diff
- let policy = crate::policy::Policy::discover(None);
+ let policy = crate::policy::Policy::discover(cwd.to_str());
```
Evidence trail:
- crates/tirith-core/src/mcp/resources.rs line 40: `cwd = std::env::current_dir()`
- crates/tirith-core/src/mcp/resources.rs line 51: `Policy::discover(None)`
- crates/tirith-core/src/mcp/resources.rs line 52: `scan::scan(&config)` called
- crates/tirith-core/src/scan.rs line 105: `scan_single_file(file_path)` called
- crates/tirith-core/src/scan.rs line 154: `cwd = file_path.parent().map(|p| p.display().to_string())`
- crates/tirith-core/src/scan.rs line 170: `Policy::discover(cwd.as_deref())`
- crates/tirith-core/src/policy.rs lines 573-576: `discover_policy_path()` uses `cwd` or falls back to `std::env::current_dir().ok()`
```rust
// Use chrome as baseline
let baseline_idx = 0; // chrome is first
let baseline_body = &responses[baseline_idx].2;
```
🟡 Medium rules/cloaking.rs:115
If Chrome fetch fails (status=0) but others succeed, baseline_body is empty, causing false positives. Consider checking if the baseline fetch succeeded and either returning an error or selecting another successful response as baseline.
Suggested fix:
```rust
// Use chrome as baseline
let baseline_idx = 0; // chrome is first
if responses[baseline_idx].1 == 0 {
    return Err("chrome baseline fetch failed — cannot perform cloaking analysis".to_string());
}
let baseline_body = &responses[baseline_idx].2;
```
Evidence trail:
crates/tirith-core/src/rules/cloaking.rs lines 97-107 (failed fetches push empty body with status=0), lines 116-118 (baseline is Chrome at index 0, baseline_body is used without success check), lines 137-138 (only non-baseline empty bodies are skipped), lines 142-144 (comparison and cloaking threshold check)
crates/tirith/src/cli/license_cmd.rs (Outdated)
```rust
#[cfg(unix)]
{
    // Read server URL and API key from env or policy config
    let server_url = std::env::var("TIRITH_SERVER_URL").ok().or_else(|| {
```
🟢 Low cli/license_cmd.rs:125
Policy::discover() is called twice (lines 126 and 130), which can perform network I/O and return different policy instances. Consider calling it once and extracting both policy_server_url and policy_server_api_key from the same instance.
Evidence trail:
crates/tirith/src/cli/license_cmd.rs lines 125-132 (REVIEWED_COMMIT): Shows two separate `Policy::discover(None)` calls in `or_else` closures.
crates/tirith-core/src/policy.rs lines 306-360 (REVIEWED_COMMIT): Shows `Policy::discover()` implementation which calls `fetch_remote_policy()` at line 334, confirming network I/O capability.
```rust
/// Backup a single file to the checkpoint files directory.
fn backup_file(path: &Path, files_dir: &Path) -> Result<ManifestEntry, String> {
    let sha = sha256_file(path)?;
```
🟡 Medium src/checkpoint.rs:520
backup_file does separate reads for hashing, copying, and size, so the source can change/disappear and yield inconsistent hash/content/size. Suggest opening once and streaming: compute hash and size from the same read and write to the destination (or a temp), and base dedup/size on that.
Evidence trail:
crates/tirith-core/src/checkpoint.rs lines 520-545 (backup_file function): sha256_file(path) at line 521, fs::copy(path, &dst) at line 525, path.metadata() at line 528. crates/tirith-core/src/checkpoint.rs lines 633-645 (sha256_file function): opens file, reads in 8KB chunks, computes hash, closes file. Commit: REVIEWED_COMMIT
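The single-pass pattern the review suggests can be sketched as below. `backup_single_pass` is hypothetical, and `DefaultHasher` stands in for the real SHA-256 hasher (the sha2 crate) purely to keep the sketch self-contained; the point is that hash, size, and the destination copy all derive from one read of the source:

```rust
use std::collections::hash_map::DefaultHasher;
use std::fs::File;
use std::hash::Hasher;
use std::io::{self, Read, Write};
use std::path::Path;

// Open the source once and stream it: every chunk is hashed and written
// from the same buffer, so hash, content, and size cannot diverge even if
// the source file changes or disappears mid-backup.
fn backup_single_pass(src: &Path, dst: &Path) -> io::Result<(u64, u64)> {
    let mut input = File::open(src)?;
    let mut output = File::create(dst)?;
    let mut hasher = DefaultHasher::new();
    let mut size = 0u64;
    let mut buf = [0u8; 8192];
    loop {
        let n = input.read(&mut buf)?;
        if n == 0 {
            break;
        }
        hasher.write(&buf[..n]); // hash exactly the bytes we copy
        output.write_all(&buf[..n])?;
        size += n as u64;
    }
    Ok((hasher.finish(), size)) // (hash, size) from one consistent read
}
```

Writing to a temp file and renaming on success would additionally make the destination atomic.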
- Inline format args (uninlined_format_args lint)
- Remove redundant closures (redundant_closure lint)
- Remove needless borrows (needless_borrows_for_generic_args lint)
- Suppress const_is_empty on KEYRING assert (compile-time const)
- Avoid approx_constant lint in PDF test (3.14 → 3.15)
- Pin time <0.3.47 (v0.3.47 requires edition 2024, breaks MSRV 1.83)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
```rust
    Err(e) => return tool_error(&e),
};

let policy = crate::policy::Policy::discover(None);
```
🟡 Medium mcp/tools.rs:327
Policy base dir is inconsistent with scan_single_file (Policy::discover(None) vs the file’s parent), which can misapply dlp_custom_patterns. Suggest discovering the policy from the file’s parent and using it for redaction in both tirith_scan_file and tirith_verify_mcp_config.
Evidence trail:
- tools.rs:327: `let policy = crate::policy::Policy::discover(None);`
- tools.rs:329-331: `match scan::scan_single_file(&path)` then `redact_findings(..., &policy.dlp_custom_patterns)`
- scan.rs:156: `let cwd = file_path.parent().map(|p| p.display().to_string());`
- scan.rs:172: `let policy = crate::policy::Policy::discover(cwd.as_deref());`
- policy.rs:573-576: `discover_policy_path` shows that `None` falls back to `std::env::current_dir()` while a provided `cwd` uses that path as the starting point
- Commit: REVIEWED_COMMIT
```rust
        return false;
    }
}
_ => {
```
🟡 Medium src/audit_aggregator.rs:118
When only one timestamp parses successfully, falling back to lexicographic comparison mixes parsed and unparsed formats incorrectly. Consider handling the case where one parses but the other doesn't separately—perhaps skip filtering or return an error—rather than using string comparison with mismatched formats.
Evidence trail:
crates/tirith-core/src/audit_aggregator.rs lines 100-137 at REVIEWED_COMMIT - shows the `parse_ts` function and `filter_records` function with the `_` wildcard pattern at lines ~117-123 that falls through to lexicographic comparison for all non-both-parsed cases including when only one timestamp parses successfully.
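The suggested separation of cases can be sketched with a comparison that refuses mixed formats. `parse_ts` here is a stand-in (integer parse instead of the real chrono RFC 3339 parsing) and `ts_is_before` is hypothetical; the structure is what matters:

```rust
// Stand-in for timestamp parsing; the real code would parse RFC 3339 via chrono.
fn parse_ts(s: &str) -> Option<i64> {
    s.parse::<i64>().ok()
}

// Compare only within one format: both parsed (numeric order), or both
// unparsed (lexicographic is at least internally consistent). A mixed
// pair returns None so the caller can skip filtering instead of
// comparing mismatched formats.
fn ts_is_before(a: &str, b: &str) -> Option<bool> {
    match (parse_ts(a), parse_ts(b)) {
        (Some(ta), Some(tb)) => Some(ta < tb),
        (None, None) => Some(a < b),
        _ => None, // one side parsed, the other didn't: cannot compare
    }
}
```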
```rust
    "Community tier must not spool remote audit uploads"
);

unsafe { std::env::remove_var("XDG_STATE_HOME") };
```
🟢 Low src/audit.rs:292
Test removes TIRITH_LOG at line 259 but doesn't restore it in cleanup (lines 292-295). Consider adding unsafe { std::env::remove_var("TIRITH_LOG") }; to cleanup for consistency, or saving/restoring its original value.
Suggested change:
```diff
- unsafe { std::env::remove_var("XDG_STATE_HOME") };
+ unsafe { std::env::remove_var("TIRITH_LOG") };
+ unsafe { std::env::remove_var("XDG_STATE_HOME") };
```
Evidence trail:
crates/tirith-core/src/audit.rs lines 250-298 (REVIEWED_COMMIT): Line 259 shows `unsafe { std::env::remove_var("TIRITH_LOG") };`. Lines 292-295 show cleanup removing only `XDG_STATE_HOME`, `TIRITH_API_KEY`, `TIRITH_SERVER_URL`, `TIRITH_LICENSE` - no restoration of TIRITH_LOG. Lines 254-258 show the vars being SET at setup.
```rust
let verdict = engine::analyze(&ctx);

// Apply paranoia filter to scan findings
let policy = crate::policy::Policy::discover(cwd.as_deref());
```
🟢 Low src/scan.rs:170
Suggestion: Avoid discovering policy twice. engine::analyze already discovers policy; calling Policy::discover again for paranoia filtering can yield inconsistent settings if the remote policy changes. Consider threading a single discovered policy through analysis and filtering (e.g., pass it into analyze() or reuse the policy from the verdict).
Evidence trail:
- crates/tirith-core/src/scan.rs lines 167, 170-172 (REVIEWED_COMMIT): Shows `engine::analyze(&ctx)` called at line 167, then `Policy::discover` called again at line 170 for paranoia filtering
- crates/tirith-core/src/engine.rs line 424 (REVIEWED_COMMIT): Shows `Policy::discover(ctx.cwd.as_deref())` called inside `analyze()`
- crates/tirith-core/src/policy.rs lines 306-391 (REVIEWED_COMMIT): Shows `discover()` can fetch from remote server via `policy_client::fetch_remote_policy()` at line 333, confirming remote policy could change between calls
```rust
    files,
};

if serde_json::to_writer_pretty(std::io::stdout().lock(), &output).is_err() {
```
🟢 Low cli/scan.rs:215
On JSON serialization failure, avoid emitting any stdout (including the trailing newline). Suggest returning immediately after logging the error.
Evidence trail:
crates/tirith/src/cli/scan.rs lines 215-219 at REVIEWED_COMMIT: The `println!();` statement executes unconditionally after the error handling block, meaning a trailing newline is written to stdout even when `serde_json::to_writer_pretty` fails.
```rust
let date_str = chrono::Utc::now().format("%Y-%m-%d").to_string();

// Derive backup dir from db path
let db_dir = std::path::Path::new(&db_path)
```
🟢 Low src/main.rs:237
Path::new("database.db").parent() returns Some(""), not None, so the /data fallback is never used for relative paths. Consider checking if the parent is empty before falling back.
Evidence trail:
tools/license-server/src/main.rs lines 237-240 at REVIEWED_COMMIT: shows `.parent().unwrap_or(std::path::Path::new("/data"))` pattern. Rust std::path::Path documentation at https://doc.rust-lang.org/src/std/path.rs.html lines 2564-2587 confirms: "Returns None if the path terminates in a root or prefix, or if it's the empty string" and shows example `assert_eq!(grand_parent, Some(Path::new("")));` for parent of single-component relative path. Also https://doc.rust-lang.org/std/path/struct.PathBuf.html states: "This means it returns Some("") for relative paths with one component."
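The empty-parent check suggested above can be sketched as follows (the helper name is assumed, not from the codebase): treat an empty parent the same as a missing one, so relative db paths still hit the fallback.

```rust
use std::path::{Path, PathBuf};

/// Illustrative helper: fall back when the parent is absent OR empty,
/// since `Path::new("database.db").parent()` is `Some("")`, not `None`.
fn backup_dir_for(db_path: &str, fallback: &Path) -> PathBuf {
    match Path::new(db_path).parent() {
        Some(p) if !p.as_os_str().is_empty() => p.to_path_buf(),
        _ => fallback.to_path_buf(),
    }
}
```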
```rust
    product_id = %pid,
    "unresolvable product_id — setting tier to unknown"
);
let _ = state
```
🟢 Low routes/webhook.rs:628
Consider logging errors from insert_dead_letter instead of silently discarding them with let _ =. If dead letter insertion fails, there's no visibility into why unresolvable product events aren't being recorded for retry.
Evidence trail:
tools/license-server/src/routes/webhook.rs lines 620-640 at REVIEWED_COMMIT - shows `let _ = state.db.insert_dead_letter(...)` pattern where the Result is explicitly discarded. The `error!` macro call at lines 624-628 logs the unresolvable product issue but does not log any failure from the `insert_dead_letter` call itself.
```rust
// Write metadata
let meta_json = serde_json::to_string_pretty(&meta).map_err(|e| format!("serialize: {e}"))?;
fs::write(cp_dir.join("meta.json"), meta_json).map_err(|e| format!("write meta: {e}"))?;
```
🟢 Low src/checkpoint.rs:185
If writing meta.json or manifest.json fails, the checkpoint directory is left with orphaned files but no manifest. Consider cleaning up cp_dir on write failures, similar to the empty manifest case.
Evidence trail:
crates/tirith-core/src/checkpoint.rs lines 131-133 (directory creation), lines 135-164 (file backup), lines 166-169 (empty manifest cleanup), line 185 (meta.json write with no cleanup on failure), lines 190-191 (manifest.json write with no cleanup on failure). Verified at REVIEWED_COMMIT.
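The cleanup-on-failure pattern could look like this sketch (helper name and error type assumed, not the actual checkpoint API): any failed metadata write removes the whole checkpoint directory, mirroring the existing empty-manifest cleanup.

```rust
use std::fs;
use std::path::Path;

/// Illustrative helper: write a file into the checkpoint dir; on
/// failure, best-effort remove the dir so no manifest-less orphan
/// checkpoint is left behind.
fn write_or_cleanup(cp_dir: &Path, name: &str, contents: &str) -> Result<(), String> {
    fs::write(cp_dir.join(name), contents).map_err(|e| {
        let _ = fs::remove_dir_all(cp_dir); // mirrors the empty-manifest case
        format!("write {name}: {e}")
    })
}
```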
crates/tirith-core/src/checkpoint.rs
Outdated
```rust
};
let path = entry.path();

if path.is_symlink() {
```
🟠 High src/checkpoint.rs:587
TOCTOU race between is_symlink() check and backup_file(). Consider using symlink_metadata() and checking file_type().is_symlink() immediately before reading, or opening the file with O_NOFOLLOW semantics to prevent following symlinks atomically.
Evidence trail:
crates/tirith-core/src/checkpoint.rs lines 587-610: `is_symlink()` check at line 587, `backup_file()` call at line ~610 (viewed via view tool with range [575, 610]). crates/tirith-core/src/checkpoint.rs lines 520-545: `backup_file()` function calls `sha256_file(path)` at line 521 and `fs::copy(path, &dst)` at line 525. crates/tirith-core/src/checkpoint.rs lines 634-646: `sha256_file()` uses `fs::File::open(path)` which follows symlinks by default. The TOCTOU race exists between the symlink check and the file operations.
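The `symlink_metadata()` re-check can be sketched like this (helper name assumed; this narrows the window but only `O_NOFOLLOW`-style opening closes it fully): query metadata without following links immediately before reading, instead of trusting an earlier `is_symlink()` result.

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Illustrative re-check: `symlink_metadata` does NOT follow symlinks,
/// so this reflects what the path is *right now*, not what it was when
/// the directory was walked.
fn is_not_symlink_now(path: &Path) -> io::Result<bool> {
    let meta = fs::symlink_metadata(path)?;
    Ok(!meta.file_type().is_symlink())
}
```

On Unix, opening with `OpenOptions` plus `O_NOFOLLOW` (via `custom_flags`) and then reading through that handle would remove the race entirely.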
```rust
let major: u32 = parts[0]
    .parse()
    .unwrap_or_else(|_| panic!("Invalid major version in {tag}"));
let minor: u32 = parts[1]
```
🟢 Low src/license.rs:1295
If RELEASE_TAG is v0.3-beta, parts[1] becomes "3-beta" and .parse::<u32>() panics. Consider splitting on - first to strip pre-release suffixes before parsing.
Evidence trail:
crates/tirith-core/src/license.rs lines 1285-1297 at REVIEWED_COMMIT: code strips 'v' prefix, splits on '.', and parses parts[0] and parts[1] directly as u32 without handling pre-release suffixes. Verified that `"3-beta".parse::<u32>()` would return Err and trigger the panic in unwrap_or_else.
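A suffix-tolerant parse can be sketched as follows (function name assumed, not the actual code): strip everything after the first `-` before splitting on `.`, so `v0.3-beta` yields `(0, 3)` instead of panicking.

```rust
/// Illustrative parser: tolerate pre-release suffixes like "-beta"
/// and return None instead of panicking on malformed tags.
fn parse_major_minor(tag: &str) -> Option<(u32, u32)> {
    let core = tag.trim_start_matches('v').split('-').next()?;
    let mut parts = core.split('.');
    let major = parts.next()?.parse().ok()?;
    let minor = parts.next()?.parse().ok()?;
    Some((major, minor))
}
```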
Force-pushed a45d0d8 to 57d4ae5
```rust
.map(|d| d.unsigned_abs() > max_age_secs as u64)
```
🟢 Low src/webhook_verify.rs:39
Casting negative max_age_secs to u64 wraps to a huge value, disabling expiration. Consider using max_age_secs.unsigned_abs() or validating the input is positive.
```diff
-    .map(|d| d.unsigned_abs() > max_age_secs as u64)
+    .map(|d| d.unsigned_abs() > max_age_secs.unsigned_abs())
```
Evidence trail:
tools/license-server/src/webhook_verify.rs lines 21, 37-40: `max_age_secs: i64` parameter at line 21, cast at line 39 via `max_age_secs as u64`. No validation exists that `max_age_secs` must be positive. Rust `as` cast on negative i64 to u64 performs wrapping conversion.
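The wrap can be demonstrated directly (the helper below is a sketch, not the actual `webhook_verify` code): a negative `i64` cast with `as` becomes a number near `u64::MAX`, so `elapsed > max_age as u64` can never be true and the freshness check is silently disabled.

```rust
/// Illustrative check using unsigned_abs(); real code should probably
/// also reject non-positive max_age at the call site.
fn is_expired(elapsed_secs: u64, max_age_secs: i64) -> bool {
    elapsed_secs > max_age_secs.unsigned_abs()
}
```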
…omments (round 3)

CI fixes:
- Pin time <0.3.38 (resolves to 0.3.37) for MSRV 1.83 compatibility (0.3.38+ needs Rust 1.85+, 0.3.46+ needs 1.88)
- Pin base64ct <1.8 (v1.8+ uses edition 2024, needs Rust 1.85+)
- Add deny.toml ignores for RUSTSEC-2026-0009 (time DoS, fix needs Rust 1.88) and RUSTSEC-2025-0134 (rustls-pemfile unmaintained, transitive via rust-s3)
- Remove redundant Security Audit job (Cargo Deny already covers advisories with proper ignore configuration)

Review comment fixes:
- checkpoint.rs: use symlink_metadata() to avoid TOCTOU race (HIGH)
- webhook_verify.rs: overflow-safe timestamp check via checked_sub (MEDIUM)
- receipt.rs: replace meta-refresh with JS fetch polling to fix infinite loop (MEDIUM)
- cloaking.rs: fix over-greedy CSRF regex to match full HTML elements (MEDIUM)
- audit_upload.rs: keep at least one event when single event exceeds max_bytes (MEDIUM)
- audit_aggregator.rs: use case-insensitive action filter for blocked commands (LOW)
- license-server/main.rs: escape single quotes in SQL VACUUM path (LOW)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Force-pushed 57d4ae5 to 562571e
```rust
let tools = tools::list();
JsonRpcResponse::ok(id, json!({ "tools": tools }))
}
"tools/call" => {
```
🟢 Low mcp/dispatcher.rs:200
Serialization failures return json!({}) as a success response, which sends malformed data to clients. Consider returning a JsonRpcResponse::err() with an internal error code instead.
Evidence trail:
crates/tirith-core/src/mcp/dispatcher.rs lines 200-206 at REVIEWED_COMMIT: The `tools/call` handler uses `serde_json::to_value(result).unwrap_or_else(|e| { ... json!({}) })` wrapped in `JsonRpcResponse::ok()`, confirming that serialization failures return an empty JSON object as a success response.
- Merge origin/main (glibc build fix)
- Fix single & segment boundary in split_raw_words (security)
- Use exact match == TIRITH=0 (prevents false bypass)
- Skip flags in resolve_command_wrapper
- Remove dead code in is_tirith_command
- Remove quote-stripping in is_env_assignment (callers handle this)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
```rust
idx += 1;
continue;
}
if w.starts_with('-') {
```
🟠 High src/engine.rs:76
Suggestion: env option parsing in find_inline_bypass/resolve_env_wrapper is too narrow. Combined short flags and flags with arguments can cause the arg to be treated as the command. Consider parsing env per POSIX (support combined flags and arg-taking options like -u, -C, -S) before locating the command.
Evidence trail:
1. crates/tirith-core/src/engine.rs lines 76-83 (`find_inline_bypass`): Only `-u` handled as taking argument
2. crates/tirith-core/src/engine.rs lines 230-236 (`resolve_env_wrapper`): Same pattern, only `-u` handled
3. crates/tirith-core/src/rules/command.rs lines 94-95 (`resolve_env_from_args`): `env_value_flags = ["-u"]`
4. GNU coreutils env manpage documents `-C DIR` and `-S STRING` as argument-taking options not handled by this code
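A POSIX-leaning parse could be sketched roughly as below. This is a deliberate simplification, not the tirith implementation: it skips flags, consumes an argument for `-u`/`-C`/`-S` (note that real `env -S` actually *splits* its string into more arguments), handles combined short flags ending in an arg-taking option, honors `--`, and steps over `VAR=value` assignments.

```rust
/// Illustrative `env` argument walker: returns the word that would be
/// executed as the command, or None if there is none.
fn env_command<'a>(args: &'a [&'a str]) -> Option<&'a str> {
    let arg_taking = ['u', 'C', 'S']; // options whose value is the next word
    let mut i = 0;
    while i < args.len() {
        let w = args[i];
        if w == "--" {
            return args.get(i + 1).copied();
        }
        if let Some(flags) = w.strip_prefix('-') {
            if !flags.is_empty() && !w.starts_with("--") {
                // Combined short flags: if the LAST one takes an
                // argument (e.g. "-iu NAME"), skip the next word too.
                if flags.chars().last().map_or(false, |c| arg_taking.contains(&c)) {
                    i += 2;
                } else {
                    i += 1;
                }
                continue;
            }
            i += 1; // long options like --ignore-environment
            continue;
        }
        if w.contains('=') {
            i += 1; // VAR=value assignment
            continue;
        }
        return Some(w);
    }
    None
}
```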
The _shell parameter was unused — PowerShell users couldn't use inline bypass ($env:TIRITH="0"; cmd) because only POSIX VAR=value syntax was detected. Now handles $env:TIRITH=0, $env:TIRITH="0", $Env:TIRITH='0', and spaced form ($env:TIRITH = "0") when shell is PowerShell. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
```rust
let mut verdict = engine::analyze(&ctx);

// Apply paranoia filter (suppress Info/Low findings based on policy + tier)
let policy = tirith_core::policy::Policy::discover(ctx.cwd.as_deref());
```
🟢 Low cli/check.rs:66
Policy::discover() is called twice per invocation—once inside engine::analyze() and again at line 66. For Team-tier users with a policy server configured, this doubles network requests. Consider having analyze() return the policy it loaded, or caching the result.
Evidence trail:
- crates/tirith/src/cli/check.rs:63 - calls `engine::analyze(&ctx)`
- crates/tirith/src/cli/check.rs:66 - calls `Policy::discover(ctx.cwd.as_deref())`
- crates/tirith-core/src/engine.rs:477 - inside `analyze()`, calls `Policy::discover(ctx.cwd.as_deref())`
- crates/tirith-core/src/policy.rs:306-391 - `Policy::discover()` implementation shows network call at line 333 via `fetch_remote_policy()` for Team-tier users with policy server configured
…resources

- Widen env flag parsing in engine.rs to handle -C, -S, and long flags (find_inline_bypass and resolve_env_wrapper)
- Add -C to resolve_env_from_args in command.rs for pipe detection
- Recalculate verdict.action after paranoia filtering to prevent stale action with empty/reduced findings
- Use safe .get(..N) slicing in PowerShell bypass parsing to prevent panic on multi-byte UTF-8 input
- Make TIRITH= matching case-insensitive to match PowerShell semantics
- Use ShellType::from_str() in MCP tools for proper fish/pwsh support
- Use policy.scan.ignore_patterns in MCP resources instead of empty vec
- Filter empty parent path strings in scan.rs for bare filenames

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
crates/tirith/src/cli/check.rs
Outdated
```rust
);
return verdict.action.exit_code();
```
🟢 Low cli/check.rs:92
When approval_check is true and approval is needed, the early return at line 93 skips writing last_trigger.json. Consider adding last_trigger::write_last_trigger(&verdict, cmd) before the return to maintain consistency with the non-approval path.
```diff
     );
-    return verdict.action.exit_code();
+    // Write last_trigger for non-allow verdicts
+    if verdict.action != tirith_core::verdict::Action::Allow {
+        last_trigger::write_last_trigger(&verdict, cmd);
+    }
+    return verdict.action.exit_code();
```
Evidence trail:
crates/tirith/src/cli/check.rs lines 74-93 (early return after approval_check path), lines 134-136 (write_last_trigger call that is skipped by the early return at line 93)
- Use JSON-RPC -32603 (Internal error) for resources/read failures instead of -32602 (Invalid params) — uri validation is separate
- Skip duplicate audit log_verdict in check.rs and paste.rs when bypass was honored (analyze() already logged it)
- Fix "chars" → "bytes" in audit redact message (Rust .len() returns byte count)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
```rust
/// - Paranoia 4 (Pro required): also show Info findings
///
/// Free-tier users are capped at effective paranoia 2 regardless of policy setting.
pub fn filter_findings_by_paranoia(verdict: &mut Verdict, paranoia: u8) {
```
🟡 Medium src/engine.rs:767
In paste.rs, filter_findings_by_paranoia mutates the verdict before log_verdict is called, so the audit log captures only filtered findings. Since this is documented as an "output-layer filter" and ADR-13 says "the engine always detects everything," consider calling log_verdict before filtering, or cloning findings for audit.
Evidence trail:
crates/tirith/src/cli/paste.rs lines 69 and 79-85 (filter_findings_by_paranoia called before log_verdict); crates/tirith-core/src/engine.rs lines 758-759 (doc comment: 'This is an output-layer filter — the engine always detects everything (ADR-13)'); crates/tirith-core/src/audit.rs lines 66-70 (log_verdict extracts rule_ids from verdict.findings); docs/threat-model.md line 55 (ADR-13: 'All detection rules run regardless of tier')
- Trim whitespace from server_url and api_key in license_cmd to prevent invalid HTTP requests from env vars or policy with trailing spaces
- Return None from remote_policy_cache_path() when home_dir() is unavailable instead of producing a relative .cache path
- Move audit log_verdict() before filter_findings_by_paranoia() in check.rs and paste.rs so the audit captures full engine detection (ADR-13: paranoia is an output-layer filter, not an audit filter)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…lthrough

- Apply policy_fetch_fail_mode to remote YAML parse errors, not just fetch errors. In "closed" mode, a malformed server response now fails closed instead of silently falling back to local policy. In "cached" mode, it tries the cache first. Prevents centralized policy bypass via malformed responses.
- Filter empty TIRITH_SERVER_URL/TIRITH_API_KEY env vars so they fall through to policy config, consistent with Policy::discover() behavior.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
```rust
// Write with restricted permissions (owner-only)
let mut opts = std::fs::OpenOptions::new();
opts.write(true).create(true).truncate(true);
#[cfg(unix)]
```
🟢 Low src/policy.rs:688
On Windows, the cached policy file is created with default permissions since mode(0o600) only applies on Unix. If this file may contain sensitive data, consider adding Windows-specific permission handling (e.g., using windows-acl or similar), or document why default permissions are acceptable on Windows.
Evidence trail:
crates/tirith-core/src/policy.rs lines 680-697 at REVIEWED_COMMIT: The `cache_remote_policy` function has a comment stating 'Write with restricted permissions (owner-only)' at line 686. The permission restriction `opts.mode(0o600)` is applied inside `#[cfg(unix)]` block (lines 689-692), meaning Windows receives no special permission handling. The Policy struct (lines 21-50) shows the cached data includes security settings like `fail_mode`, `allow_bypass_env`, `paranoia`, `severity_overrides`, `additional_known_domains`, and allowlists.
```rust
match crate::policy_client::fetch_remote_policy(&server_url, &api_key) {
    Ok(yaml) => {
        // Cache the fetched policy for offline use
        let _ = cache_remote_policy(&yaml);
```
🟢 Low src/policy.rs:336
cache_remote_policy silently returns Ok(()) when remote_policy_cache_path() is None, and the result is discarded here. If caching silently fails, the "cached" fail mode won't work as expected. Consider logging a warning when the cache path cannot be determined.
Evidence trail:
crates/tirith-core/src/policy.rs line 336 (REVIEWED_COMMIT): `let _ = cache_remote_policy(&yaml);` - result discarded
crates/tirith-core/src/policy.rs lines 680-698 (REVIEWED_COMMIT): `cache_remote_policy` returns `Ok(())` when `remote_policy_cache_path()` is `None` without logging
crates/tirith-core/src/policy.rs lines 671-678 (REVIEWED_COMMIT): `remote_policy_cache_path()` can return `None`
crates/tirith-core/src/policy.rs lines 354-368 (REVIEWED_COMMIT): "cached" fail mode depends on `load_cached_remote_policy()` which also returns `None` when cache path is unavailable
```rust
if !path.exists() {
    eprintln!("No license key installed.");
    return 0;
}

if let Err(e) = std::fs::remove_file(&path) {
    eprintln!("tirith: cannot remove license key: {e}");
    return 1;
}
```
🟢 Low cli/license_cmd.rs:100
TOCTOU race: if another process deletes the file between path.exists() and remove_file, this returns an error even though the goal (no file) was achieved. Consider removing the existence check and treating ErrorKind::NotFound from remove_file as success.
```diff
-    if !path.exists() {
-        eprintln!("No license key installed.");
-        return 0;
-    }
-
-    if let Err(e) = std::fs::remove_file(&path) {
+    if let Err(e) = std::fs::remove_file(&path) {
+        if e.kind() == std::io::ErrorKind::NotFound {
+            eprintln!("No license key installed.");
+            return 0;
+        }
         eprintln!("tirith: cannot remove license key: {e}");
         return 1;
     }
```
Evidence trail:
crates/tirith/src/cli/license_cmd.rs lines 100-108 at REVIEWED_COMMIT: Line 100 checks `if !path.exists()`, line 105 calls `std::fs::remove_file(&path)`. The TOCTOU race exists between these two operations - if another process deletes the file between them, `remove_file` returns NotFound error and the code returns failure (return 1) even though the goal was achieved.
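The race-free shape can be sketched in isolation (helper name assumed): skip the `exists()` pre-check entirely and treat `NotFound` as success, since the desired end state (no file) already holds.

```rust
use std::fs;
use std::io::ErrorKind;
use std::path::Path;

/// Illustrative race-free delete: Ok(true) = removed now,
/// Ok(false) = was already gone, Err = real failure.
fn remove_license(path: &Path) -> Result<bool, std::io::Error> {
    match fs::remove_file(path) {
        Ok(()) => Ok(true),
        Err(e) if e.kind() == ErrorKind::NotFound => Ok(false),
        Err(e) => Err(e),
    }
}
```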
Resolve conflicts by preserving PR 39's commercial features (approval workflow, license server, DLP, webhooks) while integrating main's improvements: expanded configfile detection (ConfigPathMatcher, ConfigSuspiciousIndicator, .claude/skills & agents support), repo_root/is_config_override on AnalysisContext, rendered content rules in FileScan, and updated test fixtures. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
```rust
// Block private/loopback/link-local addresses
if let Some(host) = parsed.host_str() {
    if let Ok(ip) = host.parse::<IpAddr>() {
```
🟡 Medium src/url_validate.rs:27
Hostnames that resolve to private IPs bypass the is_private_ip check since host.parse::<IpAddr>() only matches literal IP addresses. Consider resolving the hostname via DNS and checking the resolved addresses, or documenting this as a known limitation if DNS resolution is intentionally deferred to a later layer.
Evidence trail:
crates/tirith-core/src/url_validate.rs lines 26-31 (REVIEWED_COMMIT): shows `host.parse::<IpAddr>()` only succeeds for literal IPs, and `is_private_ip()` is only called if parse succeeds. git_grep for 'dns|resolve|lookup|ToSocketAddrs' found no DNS resolution in url_validate.rs or its callers. docs/roadmap.md:19 mentions 'DNS-based threat intelligence lookups' as a future feature. Rust std lib `FromStr for IpAddr` only parses literal IP strings, not hostnames.
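A resolution-based check could be sketched with std's `ToSocketAddrs` (helper name assumed; this is not the tirith code). Note that real code would also need to pin the resolved address for the actual request, otherwise DNS rebinding between check and connect reopens the hole.

```rust
use std::net::{IpAddr, ToSocketAddrs};

/// Illustrative check: resolve the hostname and flag it if ANY resolved
/// address is loopback, RFC 1918 private, or link-local.
fn resolves_to_loopback_or_private(host: &str, port: u16) -> std::io::Result<bool> {
    let mut addrs = (host, port).to_socket_addrs()?;
    Ok(addrs.any(|a| match a.ip() {
        IpAddr::V4(v4) => v4.is_loopback() || v4.is_private() || v4.is_link_local(),
        IpAddr::V6(v6) => v6.is_loopback(),
    }))
}
```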
```rust
"severity": max_severity.to_string(),
"rule_ids": rule_ids,
"finding_count": verdict.findings.len(),
"command_preview": sanitize_for_json(command_preview),
```
🟡 Medium src/webhook.rs:123
serde_json::json! already escapes strings, so sanitize_for_json(command_preview) causes double-escaping. Consider passing command_preview directly.
```diff
-    "command_preview": sanitize_for_json(command_preview),
+    "command_preview": command_preview,
```
Evidence trail:
crates/tirith-core/src/webhook.rs lines 117-125 (the `json!` macro usage with `sanitize_for_json(command_preview)`), lines 250-257 (the `sanitize_for_json` function that pre-escapes strings for JSON by using `serde_json::Value::String(...).to_string()` and stripping quotes). The double-escaping occurs because `sanitize_for_json` returns already-escaped content, and `json!` + `.to_string()` escapes it again.
```rust
let conn = conn.lock().unwrap();
conn.execute(
    "UPDATE subscriptions SET tier=?1, product_id=?2, updated_at=datetime('now') WHERE id=?3 AND tier='unknown'",
    params![tier, pid, sid],
)
.map_err(|e| AppError::Internal(format!("db retry tier fix: {e}")))?;
conn.execute(
    "DELETE FROM dead_letter WHERE id=?1",
    params![dead_letter_id],
)
.map_err(|e| AppError::Internal(format!("db delete dl: {e}")))?;
```
🟡 Medium src/db.rs:807
These two operations should be wrapped in a transaction for atomicity, consistent with similar methods like process_subscription_created. Consider using conn.unchecked_transaction().
```diff
 let conn = conn.lock().unwrap();
-conn.execute(
+let tx = conn.unchecked_transaction()
+    .map_err(|e| AppError::Internal(format!("db tx: {e}")))?;
+tx.execute(
     "UPDATE subscriptions SET tier=?1, product_id=?2, updated_at=datetime('now') WHERE id=?3 AND tier='unknown'",
     params![tier, pid, sid],
 )
 .map_err(|e| AppError::Internal(format!("db retry tier fix: {e}")))?;
-conn.execute(
+tx.execute(
     "DELETE FROM dead_letter WHERE id=?1",
     params![dead_letter_id],
 )
 .map_err(|e| AppError::Internal(format!("db delete dl: {e}")))?;
+tx.commit().map_err(|e| AppError::Internal(format!("db commit: {e}")))?;
```
Evidence trail:
tools/license-server/src/db.rs lines 795-822 (apply_retry_tier_fix method with two separate conn.execute() calls not wrapped in a transaction); tools/license-server/src/db.rs lines 142-150 (process_subscription_created using conn.unchecked_transaction()); commit REVIEWED_COMMIT
```rust
.ok()
.filter(|s| !s.is_empty())
.map(PathBuf::from)
.unwrap_or_else(|| home::home_dir().unwrap_or_default().join(".local/state"));
```
🟢 Low src/audit_upload.rs:15
When home::home_dir() returns None, unwrap_or_default() produces an empty PathBuf, causing audit events to be written to a relative path (.local/state/...) instead of failing explicitly. Consider returning Option<PathBuf> or propagating an error.
Evidence trail:
crates/tirith-core/src/audit_upload.rs lines 10-17 at REVIEWED_COMMIT - shows `spool_path()` function using `home::home_dir().unwrap_or_default().join(".local/state")` which produces a relative path when `home_dir()` returns `None`. Rust stdlib: `PathBuf::default()` returns an empty path, and joining empty PathBuf with relative path yields relative path.
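The Option-returning shape could look like this sketch (function name and parameters are assumed; the real `spool_path` reads the environment directly): a missing home directory becomes an explicit `None` instead of a silent relative path.

```rust
use std::path::PathBuf;

/// Illustrative resolver: prefer a non-empty XDG_STATE_HOME-style
/// value, else derive from home; return None rather than defaulting
/// to an empty (relative) PathBuf.
fn spool_root(state_home: Option<&str>, home: Option<PathBuf>) -> Option<PathBuf> {
    match state_home.filter(|s| !s.is_empty()) {
        Some(s) => Some(PathBuf::from(s)),
        None => home.map(|h| h.join(".local/state")),
    }
}
```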
```rust
// Approval workflow (Team feature)
if tirith_core::license::current_tier() >= tirith_core::license::Tier::Team {
    if let Some(meta) = tirith_core::approval::check_approval(&verdict, &policy) {
```
🟡 Medium cli/check.rs:88
Consider calling check_approval before filter_findings_by_paranoia. Currently approval rules won't match findings filtered out by paranoia (e.g., Low/Info at paranoia=1), inconsistent with the audit log which correctly captures full detection.
Evidence trail:
crates/tirith/src/cli/check.rs lines 73-90 (REVIEWED_COMMIT): Shows audit logging before filter_findings_by_paranoia (line 85), with check_approval called after (line 89). Comment explicitly states: "Log to audit BEFORE paranoia filtering so the audit captures full detection (ADR-13: engine always detects everything; paranoia is an output-layer filter)."
crates/tirith-core/src/approval.rs lines 21-48: check_approval iterates over verdict.findings to match against approval rules.
crates/tirith-core/src/engine.rs lines 768-776: filter_findings_by_paranoia removes Low/Info findings based on paranoia level, with comment: "This is an output-layer filter — the engine always detects everything (ADR-13)."
```rust
/// Stub for non-unix platforms where reqwest is not available.
#[cfg(not(unix))]
pub fn drain_spool(_server_url: &str, _api_key: &str, _max_events: usize, _max_bytes: u64) {
```
🟡 Medium src/audit_upload.rs:172
On non-unix platforms, events are spooled but never drained or cleaned up since drain_spool is a no-op. Consider either disabling spooling entirely on non-unix, or calling enforce_retention within this stub to prevent unbounded growth.
Evidence trail:
crates/tirith-core/src/audit_upload.rs lines 22-38 (spool_event - no platform conditional), lines 171-174 (drain_spool is no-op on non-unix), lines 195-213 (spool_and_upload calls both functions on all platforms), line 108 (enforce_retention only called within Unix drain_spool)
/// Returns None for `command -v` / `command -V` which only look up commands, not execute them.
/// Resolve through `command` wrapper: skip flags like -v, -p, -V, then `--`, take next arg.
fn resolve_command_wrapper(args: &[String]) -> Option<String> {
    let mut i = 0;
🟠 High src/engine.rs:353
Suggestion: Tighten wrapper resolution to avoid false positives in is_self_invocation. resolve_command_wrapper should treat -v/-V as lookup‑only (return None), and resolve_time_wrapper should skip flags that consume the next arg and honor --. The current parsing can misread flag values (e.g., -f "%E") as the command.
Evidence trail:
crates/tirith-core/src/engine.rs lines 351-367 (`resolve_command_wrapper`), lines 370-378 (`resolve_time_wrapper`), lines 171-200 (`is_self_invocation`), lines 342-346 (usage in `analyze` function). POSIX `command` builtin `-v/-V` are lookup-only flags that don't execute the command. GNU `time` has `-f FORMAT`, `-o FILE` flags that consume the next argument.
}

- if result.findings.iter().any(|f| f.severity >= fail_on) {
+ if (ci && result.findings.iter().any(|f| f.severity >= fail_on))
🟡 Medium cli/scan.rs:109
The fail_on threshold is gated behind ci, so --fail-on has no effect when --ci isn't set. Consider checking f.severity >= fail_on unconditionally (removing the ci && prefix from that part of the condition).
Suggested change:
- if (ci && result.findings.iter().any(|f| f.severity >= fail_on))
+ if result.findings.iter().any(|f| f.severity >= fail_on)
Evidence trail:
crates/tirith/src/cli/scan.rs lines 108-114 at REVIEWED_COMMIT: The condition `(ci && result.findings.iter().any(|f| f.severity >= fail_on))` gates the fail_on threshold check behind the ci flag, causing --fail-on to have no effect when --ci isn't set.
    metadata.len(),
    MAX_FILE_SIZE
);
return None;
🟢 Low src/scan.rs:141
Returning None for oversized config files loses the distinction between "too large" and "unreadable", and callers can't tell if important config files were skipped. Consider returning a FileScanResult with empty findings (like the old behavior) or using a richer return type to preserve skip reasons.
Evidence trail:
crates/tirith-core/src/scan.rs lines 121-155 at REVIEWED_COMMIT; git_show commit 32fd400f65dee2255a8e939c80f51a098ab39876 shows the diff where the old behavior for oversized files was `return Some(FileScanResult { path: ..., findings: vec![], is_config_file: false })` and the new behavior is `return None`.
    use std::os::unix::fs::OpenOptionsExt;
    opts.mode(0o600);
}
let mut file = opts.open(&path)?;
🟢 Low src/audit_upload.rs:34
On Unix, opts.mode(0o600) only applies when creating the file. If the file already exists with permissive permissions, sensitive data could be world-readable. Consider calling fs::set_permissions after opening to enforce restrictive permissions on existing files.
- let mut file = opts.open(&path)?;
+ let mut file = opts.open(&path)?;
+ #[cfg(unix)]
+ {
+     use std::os::unix::fs::PermissionsExt;
+     fs::set_permissions(&path, fs::Permissions::from_mode(0o600))?;
+ }
Evidence trail:
- crates/tirith-core/src/audit_upload.rs lines 27-36: Shows `opts.create(true).append(true)` with `.mode(0o600)` on Unix
- Rust docs https://doc.rust-lang.org/std/os/unix/fs/trait.OpenOptionsExt.html: 'Sets the mode bits that a new file will be created with'
- Rust std library source https://doc.rust-lang.org/beta/src/std/sys/fs/unix.rs.html lines 2109-2112: Shows Rust's own code calling `writer.set_permissions(perm)?` with comment '// Set the correct file permissions, in case the file already existed.'
Summary
tirith scan)

Test plan
- `cargo fmt --all` passes
- `cargo clippy --workspace -- -D warnings` passes
- `cargo test --workspace` passes

🤖 Generated with Claude Code
Note

Add license-gated Team features and a Polar-integrated license server, and gate policy/webhook/audit flows behind signed Ed25519 tokens in `tirith-core` and the CLI.

Introduce signed license verification (Ed25519) and tiered gating, add remote policy fetch/cache and network/custom-rule enforcement, implement the approval workflow across the CLI and shell hooks, add SARIF output, and emit redacted audit logs with optional queued uploads. Ship a new `tools/license-server` with Polar webhook verification, a token refresh endpoint, and receipt pages, plus a `tirith-sign` tool for key management. Key entry points: `tirith_core::license` for token logic, `tirith_core::policy::discover` for remote policy, `tirith_core::engine::analyze` for Team checks, CLI `check`/`scan` for approval and SARIF, and the Axum server routes in `tools/license-server`.

📍 Where to Start

Start with license enforcement and token paths in crates/tirith-core/src/license.rs, then review policy discovery and server fetch in crates/tirith-core/src/policy.rs, followed by approval and audit flows in crates/tirith/src/cli/check.rs and the Team logic in crates/tirith-core/src/engine.rs. For the server, begin at the router in tools/license-server/src/routes/mod.rs and the entrypoint in tools/license-server/src/main.rs.

Macroscope summarized dd8aae2.