feat(mcp): add MCP server for secret-gated AI agent access (#343)
Conversation
`fnox mcp` starts an MCP server over stdio that acts as a session-scoped secret broker. Secrets are resolved on first access (may require yubikey/SSO), cached in memory for the session, and never persisted to disk. Two tools are exposed:

- `get_secret`: retrieve a single secret by name
- `exec`: run a command with secrets injected as env vars

Tools can be selectively enabled via `[mcp] tools` in fnox.toml.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Greptile Summary

This PR adds a `fnox mcp` command that starts an MCP server acting as a session-scoped secret broker. The implementation is well-structured and addresses a comprehensive list of security concerns. One minor issue remains.

Confidence Score: 4/5
Sequence Diagram

```mermaid
sequenceDiagram
    participant Agent as AI Agent
    participant MCP as fnox MCP Server
    participant Cache as Secret Cache (RwLock)
    participant Once as OnceCell (resolved)
    participant Resolver as resolve_secrets_batch
    participant Child as Subprocess
    Note over Agent,Child: First tool call (e.g. get_secret or exec)
    Agent->>MCP: tools/call get_secret {name}
    MCP->>Once: get_or_try_init()
    Once->>Resolver: batch resolve env=true secrets
    Resolver-->>Once: HashMap<name, Option<value>>
    Once->>Cache: write all env=true results
    Once-->>MCP: () (cell populated)
    MCP->>Cache: read — cache hit?
    Cache-->>MCP: Some(value)
    MCP-->>Agent: CallToolResult::success(value)
    Note over Agent,Child: env=false secret (on-demand)
    Agent->>MCP: tools/call get_secret {name: env_false_key}
    MCP->>Once: get_or_try_init() — no-op (already populated)
    MCP->>Cache: read — cache miss (env=false not in batch)
    MCP->>Resolver: resolve_single(env_false_key)
    Resolver-->>MCP: Option<value>
    MCP->>Cache: write (including None for absent)
    MCP-->>Agent: CallToolResult::success / invalid_request
    Note over Agent,Child: exec tool call
    Agent->>MCP: tools/call exec {command}
    MCP->>Once: get_or_try_init() — no-op
    MCP->>Cache: snapshot env=true secrets (releases lock)
    MCP->>Child: spawn with env vars injected (as_file → temp path)
    MCP->>Child: concurrent bounded read stdout+stderr (PER_STREAM_LIMIT)
    Child-->>MCP: stdout_buf, stderr_buf, exit status
    MCP-->>Agent: CallToolResult::success/error (truncated if needed)
```

Last reviewed commit: 498ee19
Code Review
This pull request introduces a new `fnox mcp` command, which runs a Model Context Protocol (MCP) server to provide AI agents with controlled access to secrets via `get_secret` and `exec` tools. While the implementation is generally well-structured, a significant security flaw exists in the `exec` tool: it allows arbitrary command execution and returns output, enabling agents to easily exfiltrate injected secrets, bypassing intended security boundaries. Additionally, suggestions for improvement include using an enum for MCP tool names, refactoring the `exec` tool's output formatting, clarifying tool configuration documentation, and using a more specific error type for MCP server errors.
```rust
async fn exec(
    &self,
    Parameters(params): Parameters<ExecParams>,
) -> Result<CallToolResult, McpError> {
    if !self.mcp_config.tools.contains(&"exec".to_string()) {
        return Err(McpError::invalid_request(
            "Tool 'exec' is not enabled in this configuration",
            None,
        ));
    }

    if params.command.is_empty() {
        return Err(McpError::invalid_params("Command must not be empty", None));
    }

    self.ensure_resolved().await?;

    let cache = self.cache.read().await;

    let mut cmd = tokio::process::Command::new(&params.command[0]);
    if params.command.len() > 1 {
        cmd.args(&params.command[1..]);
    }

    // Inject secrets as env vars — must NOT inherit stdio (would corrupt JSON-RPC)
    for (key, value) in cache.iter() {
        cmd.env(key, value);
    }

    cmd.stdin(std::process::Stdio::null());
    cmd.stdout(std::process::Stdio::piped());
    cmd.stderr(std::process::Stdio::piped());

    let output = cmd.output().await.map_err(|e| {
        McpError::internal_error(
            format!("Failed to execute command '{}': {e}", params.command[0]),
            None,
        )
    })?;

    let stdout = String::from_utf8_lossy(&output.stdout);
    let stderr = String::from_utf8_lossy(&output.stderr);

    let mut text = String::new();
    if !stdout.is_empty() {
        text.push_str(&stdout);
    }
    if !stderr.is_empty() {
        if !text.is_empty() {
            text.push('\n');
        }
        text.push_str("[stderr]\n");
        text.push_str(&stderr);
    }

    let exit_code = output.status.code().unwrap_or(-1);
    if !output.status.success() {
        text.push_str(&format!("\n[exit code: {exit_code}]"));
    }

    if text.is_empty() {
        text = format!("[exit code: {exit_code}]");
    }

    Ok(CallToolResult::success(vec![Content::text(text)]))
}
```
The exec tool presents a significant security risk: it allows AI agents to execute arbitrary commands and retrieve stdout, enabling easy exfiltration of all secrets injected into the environment, contrary to documentation. Beyond this critical vulnerability, the current output formatting logic is also complex and contains a minor bug where a failing command with no output can produce a leading newline. It is recommended to refactor this logic to be more straightforward and correct by building a vector of output parts and then joining them, which improves readability and handles all cases cleanly.
```rust
let mut parts = Vec::new();
if !stdout.is_empty() {
    parts.push(stdout.to_string());
}
if !stderr.is_empty() {
    parts.push(format!("[stderr]\n{}", stderr));
}
let exit_code = output.status.code().unwrap_or(-1);
if !output.status.success() || parts.is_empty() {
    parts.push(format!("[exit code: {exit_code}]"));
}
let text = parts.join("\n");
```

```toml
tools = ["get_secret", "exec"] # default: both enabled
```

Set `tools = ["exec"]` to only allow running commands (agent never sees raw secrets). Set `tools = ["get_secret"]` to only allow reading secrets directly.
This sentence could be slightly ambiguous. It might imply that setting one tool disables the other, which is only true if you provide a list with a single element. A clearer phrasing would explicitly mention that `tools` is a list of enabled tools.
Suggested replacement:

> You can limit which tools are available by listing them in the `tools` array. For example, to only allow executing commands without exposing raw secrets, set `tools = ["exec"]`. To only allow retrieving secrets directly, set `tools = ["get_secret"]`.
```rust
let service: RunningService<RoleServer, FnoxMcpServer> = server
    .serve(rmcp::transport::io::stdio())
    .await
    .map_err(|e| FnoxError::Config(format!("Failed to start MCP server: {e}")))?;

service
    .waiting()
    .await
    .map_err(|e| FnoxError::Config(format!("MCP server error: {e}")))?;
```
Using `FnoxError::Config` for MCP server errors (both at startup and during runtime) might be misleading. Consider adding a more specific error variant for MCP-related issues, like `FnoxError::McpServerError`, to improve error diagnostics and handling. This would make it clearer where the error originated.
```rust
pub struct McpConfig {
    /// Which MCP tools to expose (default: ["get_secret", "exec"])
    #[serde(default = "McpConfig::default_tools")]
    pub tools: Vec<String>,
}
```
Using a `Vec<String>` for the tools is a bit fragile as it relies on string matching. Consider using an enum like `McpTool` to make this more robust and leverage the type system for validation. This would also allow serde to validate the tool names at deserialization time.
For example:
```rust
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema, PartialEq, Eq)]
#[serde(rename_all = "snake_case")]
pub enum McpTool {
    GetSecret,
    Exec,
}

#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
#[serde(deny_unknown_fields)]
pub struct McpConfig {
    /// Which MCP tools to expose (default: ["get_secret", "exec"])
    #[serde(default = "McpConfig::default_tools")]
    pub tools: Vec<McpTool>,
}

impl McpConfig {
    fn default_tools() -> Vec<McpTool> {
        vec![McpTool::GetSecret, McpTool::Exec]
    }
}
```

This change would make the manual validation in `src/commands/mcp.rs` unnecessary and simplify the checks in `src/mcp_server.rs` (e.g., `self.mcp_config.tools.contains(&McpTool::GetSecret)`).
- Fix cache sentinel bug: use OnceCell for exactly-once resolution, preventing repeated yubikey/SSO prompts when all secrets are optional
- Clear parent environment in exec tool (env_clear + selective forwarding of PATH/HOME/USER/SHELL/TERM/LANG) to prevent leaking host credentials
- Filter tools/list to only advertise enabled tools, not all registered
- Cap exec output at 1 MiB to prevent unbounded memory usage
- Replace Vec<String> with McpTool enum for type-safe tool config (serde validates at deserialization, removing manual validation)
- Improve output formatting using parts vector
- Clarify tool config documentation

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The exec tool should inherit the parent environment like fnox exec does. Clearing the env breaks most real-world commands, and since the agent controls the command being run, env_clear doesn't meaningfully prevent secret exfiltration. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
bugbot run
…ect env/as_file

- exec tool now returns CallToolResult::error (is_error=true) when the subprocess exits with a non-zero code, per MCP specification
- Wrap subprocess execution with tokio::time::timeout (default 300s, configurable via [mcp] exec_timeout_secs) to prevent indefinite hangs
- Respect env=false secrets: skip them during cache population and env_remove them from child processes to prevent leakage
- Respect as_file=true secrets: write to temp files and store paths in cache, matching fnox exec behavior

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
bugbot run
## Summary

- Adds a new `github-app` lease backend that creates short-lived GitHub installation access tokens from a GitHub App's private key
- Supports scoping tokens to specific permissions and repositories
- Configurable `env_var` (default: `GITHUB_TOKEN`), `api_base` for GitHub Enterprise
- Tokens auto-expire in ≤1 hour (GitHub's hard limit), cached via existing lease ledger

### Example config

```toml
[leases.github]
type = "github-app"
app_id = "12345"
installation_id = "67890"
private_key_file = "~/.config/fnox/github-app.pem"

[leases.github.permissions]
contents = "read"
pull_requests = "write"
repositories = ["my-org/my-repo"]
```

## Test plan

- [x] 9 bats tests covering: token creation (file + env var), `fnox exec` integration, custom env_var, permissions/repositories config, missing key error, lease list
- [x] All tests use a mock HTTP server (no real GitHub credentials needed)
- [x] Existing cargo + bats tests pass

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---

> [!NOTE]
> **Medium Risk**
> Adds new networked auth/crypto code for generating and revoking GitHub tokens and changes the `LeaseBackend::revoke_lease` API across all backends, so regressions could affect lease cleanup/revocation behavior.
>
> **Overview**
> Adds a new **`github-app` lease backend** that creates GitHub App installation access tokens by signing a short-lived JWT with an RSA private key (from `FNOX_GITHUB_APP_PRIVATE_KEY` or `private_key_file`), supports optional permission/repository scoping and custom `api_base`, and generates non-secret `lease_id`s derived from a token hash.
>
> Updates lease revocation to pass **decrypted cached credentials** into `LeaseBackend::revoke_lease` (trait signature change applied across existing backends) so backends can revoke using the actual credential value; `lease cleanup` now explicitly skips passing creds for expired leases. Also sets a default `User-Agent` on the shared `reqwest` client.
>
> Extends config schema and docs to document the new backend, adds bats tests with a mock GitHub API server, and updates dependencies (`jsonwebtoken` v10 with `aws_lc_rs`, plus lockfile changes around `untrusted`).
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit eed87bd. This will update automatically on new commits. Configure [here](https://cursor.com/dashboard?tab=bugbot).</sup>

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
…merge, release lock before subprocess

- Cache all resolved secrets (including env=false) so get_secret can access them like fnox get. Filter env=false at exec injection time only.
- Merge McpConfig field-by-field: an overlay that only sets exec_timeout_secs no longer silently re-enables disabled tools
- Snapshot cache into Vec before spawning subprocess to release the RwLock read guard immediately instead of holding it for up to 5 minutes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
env=false secrets are more sensitive and may trigger auth prompts. Instead of resolving them eagerly in the batch, resolve them lazily on the first get_secret call. Once resolved they are cached for the rest of the session. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
bugbot run
For as_file secrets, insert_into_cache stores a temp file path in the cache instead of the raw secret. resolve_single was returning the raw value, causing the first get_secret call to return the secret content while subsequent calls returned the file path from cache. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
```rust
// Cap output to prevent unbounded memory usage in JSON-RPC response
let stdout = String::from_utf8_lossy(&stdout_raw[..stdout_raw.len().min(MAX_OUTPUT_BYTES)]);
let stderr_budget = MAX_OUTPUT_BYTES.saturating_sub(stdout_raw.len().min(MAX_OUTPUT_BYTES));
let stderr = String::from_utf8_lossy(&stderr_raw[..stderr_raw.len().min(stderr_budget)]);
```
Output truncation doesn't prevent unbounded memory during collection
Medium Severity
The exec tool calls `child.wait_with_output()`, which buffers ALL stdout/stderr into process memory before the `MAX_OUTPUT_BYTES` truncation logic runs. The documentation claims output is "capped at 1 MiB to prevent unbounded memory usage," but the cap only applies to the JSON-RPC response — the full output is already loaded into memory by `wait_with_output`. A command producing large output (e.g., streaming binary data) during the timeout window (default 5 minutes) could cause OOM in the long-lived MCP server process.
…in get_secret

Two fixes:

1. Replace wait_with_output() with bounded reads using AsyncReadExt::take(). Output is now capped at MAX_OUTPUT_BYTES during collection, not just at response time, preventing OOM from commands that produce large output.
2. get_secret now rejects as_file=true secrets with an error explaining they can only be used via exec. Prevents leaking raw key material (PEM blocks, etc.) over the MCP wire.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
bugbot run
…name

- Move exec_timeout_secs=0 validation from exec tool (runtime) to McpCommand::run (startup). The server now fails to start with a clear config error instead of returning a misleading invalid_params error to the agent on every call.
- Check for empty program name (command: [""]) in addition to empty vector, giving a clear error instead of a cryptic OS error.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
```rust
    let status = child.wait().await;
    (stdout_buf, stderr_buf, status)
};
```
Sequential stdout/stderr reading causes subprocess deadlock
Medium Severity
The exec tool reads stdout to completion (waiting for EOF) before starting to read stderr. If the child process writes more than the OS pipe buffer size (~64KB on Linux) to stderr without writing to stdout, the child blocks on the stderr write while the MCP server blocks waiting for stdout EOF — a classic deadlock. The timeout will eventually kill the child, but legitimate commands that write heavily to stderr (e.g., verbose builds, programs that log to stderr) will fail with a confusing "timed out" error. Both pipes need to be drained concurrently, e.g., via tokio::join!.
Sequential reads (stdout then stderr) deadlock when the child fills the stderr pipe buffer (~64KB) while we're blocked reading stdout. Use tokio::join! to read both pipes concurrently, splitting the output budget evenly between them. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The truncated flag only checked total_collected > MAX_OUTPUT_BYTES, missing cases where one stream hit its per_stream_limit while the other was empty. Now also checks if either individual buffer reached its cap, so agents always get a truncation notice when output was dropped. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When a single stream hits its per-stream cap while the total stays below MAX_OUTPUT_BYTES, the message now correctly reports the per-stream limit instead of falsely claiming the total exceeded the 1 MiB limit. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
```rust
) -> Result<CallToolResult, McpError> {
    let tcc = rmcp::handler::server::tool::ToolCallContext::new(self, request, context);
    self.tool_router.call(tcc).await
}
```
Disabled tools still reachable via call_tool routing
Low Severity
The call_tool handler delegates directly to self.tool_router.call(tcc) without checking whether the requested tool is enabled in mcp_config. While list_tools and get_tool correctly filter disabled tools, call_tool bypasses this filtering, allowing a client to invoke a disabled tool's handler. The handler itself checks enablement and rejects the call, but this inconsistency means the tool router processes disabled tool requests rather than rejecting them at the dispatch level.
…ant, gate call_tool

Three fixes:

- is_none_or(|sc| sc.env) included cache keys not in profile_secrets (e.g. dependency side-effects) in the subprocess env. Changed to map_or(false, |sc| sc.env) to exclude unknown keys.
- Extracted PER_STREAM_LIMIT constant to avoid duplicating the (MAX_OUTPUT_BYTES / 2) + 1 computation in two places.
- call_tool now checks mcp_config.tools before dispatching, rejecting disabled tools at the dispatch level rather than relying solely on per-handler guards.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
bugbot run
Cursor Bugbot has reviewed your changes and found 1 potential issue.
### `exec`

Executes a command with all secrets injected as environment variables. The agent provides a command and arguments, and receives stdout/stderr output. The agent never sees the raw secret values — they're only present in the subprocess environment.
Documentation falsely claims exec prevents secret exposure
Medium Severity
The documentation states "The agent never sees the raw secret values" for the `exec` tool and recommends `tools = ["exec"]` to "prevent agents from reading raw secret values." Both claims are incorrect — the exec tool captures stdout/stderr and returns it to the agent, so running `["printenv"]`, `["env"]`, or `["sh", "-c", "echo $SECRET"]` through exec reveals all `env=true` secret values. This gives users a false sense of security when configuring `tools = ["exec"]`.
… exec security docs

- Output display now trims to PER_STREAM_LIMIT - 1 (matching the stated limit in truncation messages) instead of including the extra sentinel byte.
- exec_timeout_secs schema minimum changed from 0 to 1, matching the runtime validation in McpCommand::run.
- Documentation no longer falsely claims exec prevents agents from seeing secret values. Agents control the command and can run printenv/echo to read injected env vars. Docs now describe exec as providing audit visibility, not secret isolation.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
```rust
// For as_file secrets, write raw values to temp files and inject the
// file path as the env var. Temp files are kept alive until the
// subprocess completes (held in _exec_temp_files).
let mut _exec_temp_files = Vec::new();
```
Underscore-prefix implies "intentionally unused" but variable must be kept alive
_exec_temp_files uses a leading underscore, which Rust developers conventionally read as "I know this is unused/discardable". However, this Vec<NamedTempFile> must not be discarded early — the NamedTempFile handles must stay alive until the subprocess finishes, otherwise the temp files are deleted before the child process can open them.
While Rust immediately drops only a bare `_` pattern (an `_foo` binding lives until the end of its scope), the underscore prefix also suppresses the unused-variable warning, and the naming convention will confuse future readers. Rename it to make the keeping-alive intent explicit:
```rust
let mut exec_temp_files = Vec::new();
```
Then update the reference at line 315:

```rust
let path = Self::create_secret_file(key, value, &mut exec_temp_files)?;
```

### 🚀 Features

- **(cloudflare)** add Cloudflare API token lease backend by [@jdx](https://github.com/jdx) in [#335](#335)
- **(fido2)** bump demand to v2, mask PIN during typing by [@jdx](https://github.com/jdx) in [#334](#334)
- **(init)** add -f as short alias for --force by [@jdx](https://github.com/jdx) in [#329](#329)
- **(lease)** add --all flag, default to creating all leases by [@jdx](https://github.com/jdx) in [#337](#337)
- **(lease)** add GitHub App installation token lease backend by [@jdx](https://github.com/jdx) in [#342](#342)

### 🐛 Bug Fixes

- **(config)** fix directory locations to follow XDG spec by [@jdx](https://github.com/jdx) in [#336](#336)
- **(exec)** use unix exec and exit silently on subprocess failure by [@jdx](https://github.com/jdx) in [#339](#339)
- **(fido2)** remove duplicate touch prompt by [@jdx](https://github.com/jdx) in [#332](#332)
- **(set)** write to lowest-priority existing config file by [@jdx](https://github.com/jdx) in [#331](#331)
- **(tui)** skip providers requiring interactive auth by [@jdx](https://github.com/jdx) in [#333](#333)

### 🛡️ Security

- **(ci)** retry lint step to handle transient pkl fetch failures by [@jdx](https://github.com/jdx) in [#341](#341)
- **(mcp)** add MCP server for secret-gated AI agent access by [@jdx](https://github.com/jdx) in [#343](#343)
- add guide for fnox sync by [@jdx](https://github.com/jdx) in [#328](#328)

### 🔍 Other Changes

- share Rust cache across CI jobs by [@jdx](https://github.com/jdx) in [#340](#340)


Summary

- `fnox mcp` command that starts an MCP server over stdio, acting as a session-scoped secret broker for AI agents
- Two tools: `get_secret` (retrieve a secret by name) and `exec` (run a command with secrets as env vars)
- Tools can be selectively enabled via `[mcp] tools` config in `fnox.toml`

Config

Claude Code integration (`.claude/settings.json`):

```json
{ "mcpServers": { "fnox": { "command": "fnox", "args": ["mcp"] } } }
```

Test plan

- `cargo build` compiles without errors
- `cargo test` — all 118 tests pass (`Mcp` variant is alphabetically placed)
- `tools/list` to `fnox mcp` stdin → returns enabled tools
- `tools/call` `get_secret` with an age-encrypted secret → returns value
- `tools/call` `exec` with a command → output contains secrets without exposing them in the MCP stream
- `tools = ["exec"]` only → `get_secret` not listed, calls rejected
- `.claude/settings.json` as MCP server

🤖 Generated with Claude Code
Note

High Risk

Adds a new secrets- and command-execution server surface area (`fnox mcp`) where bugs could leak secret material or allow unintended subprocess behavior. Although opt-in and with some safeguards (tool allowlist, non-interactive mode, output limits/timeouts), it touches security-critical handling of secrets and process execution.

Overview

Adds a new `fnox mcp` CLI subcommand that starts a Model Context Protocol (MCP) server over stdio, intended for session-scoped secret access by AI agents.

Introduces `src/mcp_server.rs` implementing two MCP tools — `get_secret` (on-demand secret retrieval, rejecting `as_file=true`) and `exec` (run a subprocess with `env=true` secrets injected, optional `as_file` temp-file injection, bounded output capture, and configurable timeout). Secrets are batch-resolved on first tool use and cached in-memory for the session.

Extends config/schema to support `[mcp]` settings (`tools` allowlist and `exec_timeout_secs`) with careful merge semantics to avoid overlays unintentionally re-enabling tools; updates docs/CLI reference and adds the `rmcp` dependency.

Written by Cursor Bugbot for commit 4ead676. This will update automatically on new commits. Configure here.