feat: add reencrypt subcommand for updating encryption recipients (#365)
When recipients are added or removed from an encryption provider (e.g. age), existing secrets remain encrypted with the old recipient set. The new `fnox reencrypt` command decrypts and re-encrypts all matching secrets with the current provider configuration. Supports filtering by provider, key names, and regex patterns, as well as dry-run mode and multi-line secrets. Closes #320 Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace the manual get/set loop and awk-based re-encryption examples with the new `fnox reencrypt` command in both the age provider docs and the real-world example guide. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Code Review
This pull request introduces a new reencrypt command, which is a valuable feature for managing secrets when encryption configurations change. The implementation is well-structured, including features like filtering, dry-run mode, and interactive confirmation. My review focuses on improving performance and robustness in the new reencrypt.rs file. I've identified an opportunity to cache resolved providers to avoid redundant work and a use of unwrap() that could be made safer with expect().
```rust
let (provider_name, secret_config) = &secrets_to_reencrypt[key];
let provider_config = providers.get(provider_name.as_str()).unwrap();
```
Using unwrap() can lead to a panic if the assumption that the provider exists is ever violated. While it seems safe in the current logic, it's better practice to use expect() with a descriptive message. This documents the invariant and provides a more helpful error message if it ever fails, which is useful for debugging.
Suggested change:

```rust
let provider_config = providers
    .get(provider_name.as_str())
    .expect("Provider config must exist for a secret being re-encrypted");
```
```rust
let provider = crate::providers::get_provider_resolved(
    &merged_config,
    &profile,
    provider_name,
    provider_config,
)
.await?;
```
This block resolves the provider, but it has already been resolved once for the same secret in the initial filtering loop (lines 92-98). This is inefficient, as provider resolution can be an expensive operation, potentially involving I/O or network requests.
To improve performance, you could resolve each provider only once and cache the result. A common pattern for this is to use a HashMap<String, Arc<dyn Provider>>. You would populate this cache in the first loop and then retrieve the cached provider here, avoiding the repeated await.
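A minimal synchronous sketch of that caching pattern follows. The `Provider` trait, `AgeProvider`, and `resolve_provider` here are hypothetical stand-ins: fnox's real resolution is async and lives in its `providers` module, but the cache-then-reuse shape is the same.

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Hypothetical stand-in for the crate's Provider trait.
trait Provider {
    fn name(&self) -> &str;
}

struct AgeProvider {
    name: String,
}

impl Provider for AgeProvider {
    fn name(&self) -> &str {
        &self.name
    }
}

// Stand-in for the expensive resolution step; `calls` counts how many
// times it actually runs so the caching effect is observable.
fn resolve_provider(name: &str, calls: &mut usize) -> Arc<dyn Provider> {
    *calls += 1;
    Arc::new(AgeProvider { name: name.to_string() })
}

// Resolve each provider at most once; later lookups hit the cache.
fn resolve_cached(
    cache: &mut HashMap<String, Arc<dyn Provider>>,
    name: &str,
    calls: &mut usize,
) -> Arc<dyn Provider> {
    if let Some(p) = cache.get(name) {
        return Arc::clone(p);
    }
    let p = resolve_provider(name, calls);
    cache.insert(name.to_string(), Arc::clone(&p));
    p
}

fn main() {
    let mut cache: HashMap<String, Arc<dyn Provider>> = HashMap::new();
    let mut calls = 0;
    // Three secrets sharing one provider trigger exactly one resolution.
    for _secret in ["DB_PASSWORD", "API_KEY", "SIGNING_KEY"] {
        let provider = resolve_cached(&mut cache, "age", &mut calls);
        assert_eq!(provider.name(), "age");
    }
    assert_eq!(calls, 1);
}
```

In the async code the cached value would be resolved once per provider before (or during) the first loop, so the second loop never has to `await` resolution again.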
Greptile Summary

This PR adds a `fnox reencrypt` subcommand that decrypts and re-encrypts stored secrets with the current encryption provider configuration.

The PR demonstrates solid convergence on the issues raised in prior review rounds: the env-var cleanup now correctly runs before the `?` operator propagates a resolution failure.

Confidence Score: 4/5

Important Files Changed
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
A[fnox reencrypt] --> B[Collect all_secrets for profile]
B --> C{Apply filters:\n--provider / KEYS / --filter}
C --> D{Provider has\nEncryption capability?}
D -- No --> E[Skip: add to non_encryption_providers cache]
D -- Yes --> F{secret_config.value\nis Some?}
F -- No --> G[Skip with debug log]
F -- Yes --> H[Add to secrets_to_reencrypt]
H --> I{Explicit KEYS\nall found?}
I -- No --> J[Error: SecretNotFound]
I -- Yes --> K{dry_run?}
K -- Yes --> L[Print preview list\nreturn Ok]
K -- No --> M{force?}
M -- No --> N[Show confirmation prompt]
N --> O{User confirms?}
O -- No --> P[Cancelled: return Ok]
O -- Yes --> Q[Build secrets_for_resolve\nstrip json_path / sync / default]
M -- Yes --> Q
Q --> R[resolve_secrets_batch\ndecrypt all at once]
R --> S[Scrub env vars\nunsafe remove_var for each key]
S --> T{resolved OK?}
T -- Err --> U[Propagate error]
T -- Ok --> V{All secrets_to_reencrypt\nkeys in resolved?}
V -- No --> W[Error: ReencryptDecryptFailed]
V -- Yes --> X[For each resolved plaintext:\nprovider.encrypt]
X --> Y{Encrypt OK?}
Y -- Err --> Z[Error: ReencryptEncryptionFailed]
Y -- Ok --> AA[Group by source_path + save_profile\nbased on source_is_profile]
AA --> BB[For each source file:\nConfig::save_secrets_to_source]
BB --> CC[Print: Re-encrypted N secrets]
```

Reviews (11). Last reviewed commit: "fix(reencrypt): use source_is_profile fl..."
```rust
// Dry-run mode
if self.dry_run {
    let dry_run_label = console::style("[dry-run]").yellow().bold();
    let styled_profile = console::style(&profile).magenta();

    println!(
        "{dry_run_label} Would re-encrypt {} secrets in profile {styled_profile}:",
        secrets_to_reencrypt.len()
    );
    for (key, (provider_name, _)) in &secrets_to_reencrypt {
        println!(
            "  {} ({})",
            console::style(key).cyan(),
            console::style(provider_name).green()
        );
    }
    return Ok(());
}
```
Dry-run doesn't validate decryptability
The dry-run path lists secrets by checking provider capabilities only — it never attempts to decrypt them. This means --dry-run may report "Would re-encrypt 5 secrets" while the actual run silently skips some of those secrets (logging a warn! and incrementing skipped_count) because decryption failed. Users relying on --dry-run to verify a safe re-encryption window could be misled.
Consider either:
- Attempting a test decrypt for each secret in dry-run mode and surfacing failures immediately, or
- Adding a note in the dry-run output such as `(decryptability not verified)` so users understand the limitation.
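The first option could be sketched as a partition step before printing the preview. This is a hedged sketch, not fnox's implementation: `try_decrypt` is a hypothetical stand-in for a trial call through the real resolver, and the "age:" prefix check merely simulates a key that can open some ciphertexts and not others.

```rust
// Hypothetical trial decrypt: succeeds only for ciphertext our key can open.
fn try_decrypt(ciphertext: &str) -> Result<String, String> {
    ciphertext
        .strip_prefix("age:")
        .map(|s| s.to_string())
        .ok_or_else(|| "no matching private key".to_string())
}

// Partition dry-run candidates into decryptable keys and failures,
// so `--dry-run` can surface problems before the real run.
fn partition_dry_run(secrets: &[(&str, &str)]) -> (Vec<String>, Vec<String>) {
    let mut ok = Vec::new();
    let mut failed = Vec::new();
    for (key, ciphertext) in secrets {
        match try_decrypt(ciphertext) {
            Ok(_) => ok.push((*key).to_string()),
            Err(e) => failed.push(format!("{key}: {e}")),
        }
    }
    (ok, failed)
}

fn main() {
    let secrets = [("DB_PASSWORD", "age:c2VjcmV0"), ("API_KEY", "vault:ref")];
    let (ok, failed) = partition_dry_run(&secrets);
    println!("would re-encrypt {} secret(s); {} not decryptable", ok.len(), failed.len());
}
```

The dry-run output would then list failures explicitly instead of implying every capability-matched secret will be re-encrypted.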
- Cache resolved providers to avoid redundant resolution per secret - Strip json_path before resolving to prevent corrupting JSON secrets - Replace unwrap() with proper error handling via ok_or_else - Add -f flag to multi-profile doc examples Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Track already-checked non-encryption providers in a HashSet to avoid redundant get_provider_resolved calls for each secret - Warn when explicitly-requested positional keys are not found or not eligible for re-encryption Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
… on missing source_path - Fall back to config's default_provider when a secret has no explicit provider field, so secrets relying on default_provider are re-encrypted - Improve skip warning to name the key and clarify it remains encrypted with old recipients - Error instead of silently falling back to cli.config when a secret has no source_path, preventing writes to the wrong config file Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Warnings that should be errors: - Failing to decrypt a secret now aborts instead of skipping, preventing a mixed-recipient state where some secrets are updated and others stale - Explicitly-requested keys (positional args) that aren't found or aren't eligible now return SecretNotFound instead of a warning Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ed secrets - Include active --provider and --filter flags in the error suggestion when an explicitly-requested key is not eligible - Verify all collected secrets appear in the resolver output, erroring if any are silently dropped by resolve_secrets_batch Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Strip the sync field when building the resolve map so resolve_secrets_batch decrypts from the main provider/value instead of a potentially stale sync cache. Also clear the sync field on the saved config so stale cached ciphertext isn't preserved after re-encryption. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Secrets with an encryption provider but no value would pass filtering and later produce a misleading "Failed to decrypt — ensure you have the correct private key" error. Add an early guard to skip them with a debug log instead. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
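Taken together, the two commits above describe an eligibility check: a secret's provider is its explicit `provider` field or else the config-level `default_provider`, and a secret with no stored value is skipped early. A hedged sketch, with field names simplified relative to fnox's real `SecretConfig`:

```rust
// Simplified stand-in for fnox's SecretConfig.
struct SecretConfig {
    provider: Option<String>,
    value: Option<String>,
}

// Which provider handles this secret: its explicit field, else the
// config's default_provider (per the fallback commit above).
fn effective_provider<'a>(
    secret: &'a SecretConfig,
    default_provider: Option<&'a str>,
) -> Option<&'a str> {
    secret.provider.as_deref().or(default_provider)
}

// Eligible for re-encryption only if some provider applies *and* there
// is stored ciphertext to decrypt (the early guard above).
fn is_eligible(secret: &SecretConfig, default_provider: Option<&str>) -> bool {
    effective_provider(secret, default_provider).is_some() && secret.value.is_some()
}

fn main() {
    let inherits_default = SecretConfig { provider: None, value: Some("age:...".into()) };
    let no_value = SecretConfig { provider: Some("age".into()), value: None };
    assert!(is_eligible(&inherits_default, Some("age")));
    assert!(!is_eligible(&no_value, Some("age")));
}
```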
Cursor Bugbot has reviewed your changes and found 1 potential issue.
…ult field Two fixes: 1. Base secrets inherited by non-default profiles were being written to [profiles.X.secrets] instead of [secrets]. Now we detect whether each secret is base-level or profile-level and save to the correct section. 2. Strip `default` field before batch resolution to prevent fallback values from being re-encrypted instead of the original ciphertext when a provider returns None. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Previously secrets referencing a removed or mistyped provider were silently skipped with no diagnostic. Now emits a tracing::warn so the user can see why certain secrets were not re-encrypted. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…error 1. resolve_secrets_batch sets env vars for each decrypted secret, which leaks plaintext into the process environment (visible via /proc and inherited by child processes). Now scrubbed immediately after resolution since reencrypt has no need for exported env vars. 2. Replace generic FnoxError::Config strings with a dedicated ReencryptDecryptFailed variant for both "resolver dropped the key" and "resolver returned None" cases, giving better miette diagnostics. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
1. Add source_is_profile field to SecretConfig, set during loading to track whether a secret was parsed from a [profiles.X.secrets] section vs root [secrets]. Use this in reencrypt to determine the correct TOML section for save-back, instead of checking merged profile membership which can misclassify secrets from profile-specific files (e.g. fnox.staging.toml root [secrets]). 2. Bind resolve_secrets_batch result before the ? operator so env var scrubbing runs even when resolution fails, preventing partially decrypted plaintext from leaking in the process environment. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
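The second fix is a general Rust pattern worth noting: bind the fallible result first, run the unconditional cleanup, and only then apply `?`. A minimal sketch, with `resolve` and a `scrubbed` flag standing in for fnox's `resolve_secrets_batch` and env-var scrubbing:

```rust
// Stand-in for a fallible resolver such as resolve_secrets_batch.
fn resolve(fail: bool) -> Result<Vec<String>, String> {
    if fail {
        Err("decryption failed".to_string())
    } else {
        Ok(vec!["PLAINTEXT".to_string()])
    }
}

// Writing `let resolved = resolve(fail)?;` would skip the cleanup when
// resolution errors. Binding first guarantees the scrub always runs.
fn reencrypt(fail: bool, scrubbed: &mut bool) -> Result<usize, String> {
    let resolved = resolve(fail);
    // Cleanup (env-var scrubbing in fnox) runs on success *and* error paths.
    *scrubbed = true;
    Ok(resolved?.len())
}

fn main() {
    let mut scrubbed = false;
    assert!(reencrypt(true, &mut scrubbed).is_err());
    assert!(scrubbed, "cleanup must run even when resolution fails");
}
```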
```rust
for key in secrets_to_reencrypt.keys() {
    // SAFETY: reencrypt is single-threaded at this point; no other threads
    // are reading the environment concurrently.
    unsafe {
        std::env::remove_var(key);
    }
}
```
Inaccurate SAFETY invariant for `remove_var`
The comment claims "reencrypt is single-threaded at this point; no other threads are reading the environment concurrently." This isn't a sound justification in a Tokio multi-threaded executor. Between .await points the current task advances synchronously, but other Tokio worker threads can still be running concurrent tasks that call std::env::var (e.g., a provider that was spawned via tokio::spawn). The POSIX-level unsafety of getenv/unsetenv applies across all OS threads, not just Tokio tasks on the current worker.
A more accurate comment would be:
```rust
// SAFETY: This block runs synchronously between two `.await` points; no
// other `await` is in progress on this task, so no concurrent task spawned
// *by this command* will be reading the environment right now.
// Correctness relies on no other thread accessing the process environment
// concurrently. This is a best-effort scrub for a CLI tool running alone.
```

If the project ever migrates to a fully multi-threaded provider execution model, this cleanup should be revisited.
```rust
    });
};

let (provider_name, secret_config) = &secrets_to_reencrypt[key];
```
Panicking index into `secrets_to_reencrypt`
The re-encrypt loop iterates resolved and indexes into secrets_to_reencrypt using the [] operator, which panics on a missing entry. While the verification loop at lines 244–251 guarantees every entry in secrets_to_reencrypt is present in resolved, the reverse is not explicitly enforced. It holds today because secrets_for_resolve is derived exclusively from secrets_to_reencrypt, but a future change to resolve_secrets_batch that adds extra output entries could silently introduce a panic here.
Prefer .get() with an explicit error to match the rest of the codebase's defensive style:
```rust
let Some((provider_name, entry_config)) = secrets_to_reencrypt.get(entry_name) else {
    return Err(FnoxError::ReencryptDecryptFailed {
        entry_name,
        details: "entry returned by resolver was not in the reencrypt set".to_string(),
    });
};
```

### 🚀 Features

- add `reencrypt` subcommand for updating encryption recipients by [@jdx](https://github.com/jdx) in [#365](#365)

### 🐛 Bug Fixes

- **(set)** prompt for secret value when -k flag is used by [@jdx](https://github.com/jdx) in [#367](#367)

### 📦️ Dependency Updates

- lock file maintenance by [@renovate[bot]](https://github.com/renovate[bot]) in [#360](#360)
- lock file maintenance by [@renovate[bot]](https://github.com/renovate[bot]) in [#362](#362)
## Upstream release

Bumps bundled fnox binary from 1.18.0 to 1.19.0.

**Release**: https://github.com/jdx/fnox/releases/tag/v1.19.0

## Release notes

v1.19.0 adds a new `fnox reencrypt` command that makes it easy to re-encrypt all your secrets when encryption provider configuration changes -- for example, when adding or removing age recipients. This release also fixes a bug where `fnox set -k` would skip prompting for the secret value.

## Added

**`fnox reencrypt` subcommand** ([#365](jdx/fnox#365)) -- @jdx

Previously, re-encrypting secrets after changing recipients required a tedious manual loop of `fnox get` and `fnox set` for each secret. The new `fnox reencrypt` command handles this in one step: it decrypts matching secrets and re-encrypts them with the current provider configuration, writing the updated ciphertext back to the correct source config file.

```bash
# Re-encrypt all age secrets
fnox reencrypt -p age

# Preview what would be re-encrypted
fnox reencrypt -p age --dry-run

# Re-encrypt specific keys
fnox reencrypt MY_SECRET OTHER_SECRET

# Filter by regex pattern
fnox reencrypt --filter "^DB_"

# Skip the confirmation prompt
fnox reencrypt -p age -f
```

The command correctly handles multi-line secrets, writes back to the original source file (including distinguishing root `[secrets]` from `[profiles.X.secrets]`), clears stale sync caches, and scrubs decrypted plaintext from the process environment after re-encryption. Only secrets backed by encryption-capable providers are eligible.

## Fixed

**`fnox set -k` now correctly prompts for the secret value** ([#367](jdx/fnox#367)) -- @jdx

When using the `-k` / `--key-name` flag with `fnox set`, the command incorrectly treated it as a metadata-only operation and skipped prompting for the secret value. It also wrote the key name itself as the stored config value, bypassing the encryption provider entirely. 

Now `-k` works as expected: the secret value is read from stdin, a command-line argument, or an interactive prompt, and is properly encrypted and stored.

```bash
# These now work correctly
echo "my-secret" | fnox set -p age -k custom-key-name MY_SECRET
fnox set -p age -k custom-key-name MY_SECRET "my-secret"
```

**Full Changelog**: jdx/fnox@v1.18.0...v1.19.0

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

Summary
- `fnox reencrypt` command that decrypts and re-encrypts secrets when encryption provider configuration changes (e.g. adding/removing age recipients)
- Supports filtering by provider (`-p`), key names, and regex (`--filter`), plus dry-run (`-n`) and force (`-f`) modes

Closes #320
Test plan
- `fnox reencrypt --dry-run` shows secrets that would be re-encrypted
- `fnox reencrypt -f` re-encrypts all age secrets with updated recipients
- `fnox reencrypt -p age` filters to only age provider secrets
- `fnox reencrypt KEY1 KEY2` limits to specific keys

🤖 Generated with Claude Code
Note
Medium Risk
Adds a new command that decrypts secrets and writes updated ciphertext back to config files; mistakes could overwrite secret entries or leak plaintext if edge cases aren’t covered. Includes environment scrubbing and provider capability checks, but still touches sensitive secret-handling paths.
Overview
Adds a new `fnox reencrypt` CLI subcommand to decrypt and re-encrypt existing stored ciphertext using the current encryption provider configuration, with `--provider`, positional `KEYS`, regex `--filter`, `--dry-run`, and confirmation/`--force` support.

Updates secret metadata to track whether a secret originated from root `[secrets]` vs `[profiles.*.secrets]` so the command can write re-encrypted values back to the correct source file/section, clears stale `sync` cache on rewrite, and introduces dedicated `Reencrypt*` error variants. Documentation/usage specs are regenerated and guides updated to recommend `fnox reencrypt` over manual `set`/`get` loops.

Written by Cursor Bugbot for commit 6820aab. This will update automatically on new commits.