Input::getAccessorUnchecked(): Wrap fetches in a path lock (#410)
📝 Walkthrough

Adds deterministic per-input locking to `Input::getAccessorUnchecked`: compute a SHA-256-based lock path under `getCacheDir()/fetcher-locks`, create the lock directory, acquire a `PathLocks` lock with a "waiting for another Nix process to finish fetching input …" message, and optionally sleep when the `_NIX_TEST_CONCURRENT_FETCHES` environment variable is set.
Sequence Diagram

```mermaid
sequenceDiagram
    participant P1 as Nix Process 1
    participant P2 as Nix Process 2
    participant LM as Lock Manager<br/>(PathLocks)
    participant FC as Fetcher Cache / Network
    P1->>LM: Compute lock-file path (SHA-256 of input attrs)
    P2->>LM: Compute lock-file path (same input)
    rect rgba(100,150,200,0.5)
        P1->>LM: Acquire lock
        LM-->>P1: Lock acquired
    end
    rect rgba(200,100,100,0.5)
        P2->>LM: Acquire lock (blocks)
        note right of P2: "waiting for another Nix process to finish fetching input …"
    end
    P1->>FC: Check cache / attempt substitution / fetch
    P1->>LM: Release lock
    LM-->>P2: Lock acquired
    P2->>FC: Check cache / attempt substitution / fetch
    P2->>LM: Release lock
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🧹 Nitpick comments (2)
tests/functional/tarball.sh (1)
127-128: Consider checking exit codes from background processes.

The `wait` commands don't capture or verify the exit codes of the background processes. If either `nix flake prefetch` fails, the test will proceed to the assertions and potentially produce confusing error messages.

♻️ Suggested improvement

```diff
 wait "$pid1"
+status1=$?
 wait "$pid2"
+status2=$?
+if [[ $status1 -ne 0 || $status2 -ne 0 ]]; then
+  echo "Concurrent prefetch failed: pid1=$status1, pid2=$status2" >&2
+  cat "$TEST_ROOT/log1" "$TEST_ROOT/log2" >&2
+  exit 1
+fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/functional/tarball.sh` around lines 127 - 128, The waits for background processes (wait "$pid1" and wait "$pid2") do not check exit codes; update the script to capture each wait's exit status (e.g., wait "$pid1"; rc1=$? and wait "$pid2"; rc2=$?) and fail fast if any are non-zero by printing a clear error mentioning which PID/command failed (referencing pid1 and pid2) and exiting with a non-zero status; ensure the failure path occurs before running subsequent assertions so a failed nix flake prefetch is surfaced immediately.

src/libfetchers/fetchers.cc (1)
373-383: Consider wrapping lock acquisition in try-catch for robustness.

Per the PR description, this locking is an optimization to reduce redundant downloads and "does not affect correctness." However, if `create_directories` or the `PathLocks` construction fails (e.g., due to permission issues or disk space), the entire fetch operation will fail. For a feature that's purely an optimization, it may be more appropriate to fail gracefully and proceed without locking if the lock infrastructure is unavailable.
♻️ Suggested defensive approach
```diff
 /* Acquire a path lock on this input. Note that fetching the same input in parallel is supposed to be safe (it's up
  * to the fetchers to guarantee this), so this is merely intended to avoid work duplication. */
+std::optional<PathLocks> lock;
+try {
 auto lockFilePath =
     getCacheDir() / "fetcher-locks"
     / hashString(HashAlgorithm::SHA256, attrsToJSON(toAttrs()).dump()).to_string(HashFormat::Base16, false);
 std::filesystem::create_directories(lockFilePath.parent_path());
-PathLocks lock(
+lock.emplace(
     {lockFilePath.string()},
     fmt("waiting for another Nix process to finish fetching input '%s'...", to_string()));
+} catch (std::exception & e) {
+    debug("could not acquire fetch lock: %s", e.what());
+}

 if (getEnv("_NIX_TEST_CONCURRENT_FETCHES"))
     std::this_thread::sleep_for(std::chrono::seconds(1));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libfetchers/fetchers.cc` around lines 373 - 383, The lock acquisition can throw (std::filesystem::create_directories or PathLocks ctor) and should be non-fatal since it's an optimization; wrap the creation of lockFilePath/ create_directories and the PathLocks construction in a try-catch (catch std::exception and fallback catch(...)), and on error log a warning that includes the exception message and context (e.g., fetching input from attrsToJSON(toAttrs())/getCacheDir()) and continue execution without the PathLocks so the fetch proceeds without locking; ensure code still respects the debug sleep branch (getEnv("_NIX_TEST_CONCURRENT_FETCHES")) when lock acquisition fails.
ℹ️ Review info

⚙️ Run configuration: defaults; review profile: CHILL

📒 Files selected for processing (2): `src/libfetchers/fetchers.cc`, `tests/functional/tarball.sh`
Force-pushed from 57ea531 to df9bf96.
🧹 Nitpick comments (2)
src/libfetchers/fetchers.cc (2)
382-383: Add the direct standard-library headers for the test sleep.

The new `std::this_thread::sleep_for(std::chrono::seconds(...))` path depends on `<thread>` and `<chrono>`, but this file still relies on transitive includes for them.

Suggested diff

```diff
 #include "nix/util/environment-variables.hh"
 #include <nlohmann/json.hpp>
+#include <chrono>
+#include <thread>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libfetchers/fetchers.cc` around lines 382 - 383, The file uses std::this_thread::sleep_for and std::chrono::seconds but relies on transitive includes; add direct standard headers by including <thread> and <chrono> in the file (so symbols like std::this_thread::sleep_for and std::chrono::seconds are declared), e.g., add the two includes near the other standard includes at the top of the file so the getEnv(...) sleep path is properly covered.
373-380: Lock semantics are narrower than the comment suggests.

`PathLocks` is `flock()`-based, so this only serializes separate processes; threads inside the same process can still overlap here. If same-process dedup matters, this needs an in-process keyed mutex too. Otherwise, please narrow the surrounding comment to match the actual guarantee.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libfetchers/fetchers.cc` around lines 373 - 380, The current comment claims broader serialization but PathLocks uses flock() and only serializes across processes; to fix, add an in-process keyed mutex around the same key before creating PathLocks: implement a static map<string, std::mutex> (or std::shared_mutex) protected by a global mutex (e.g., static std::mutex key_map_mutex), compute the same key used for lockFilePath (hashString(...).to_string(...)), acquire the per-key mutex (via lookup/insert under key_map_mutex) and hold its lock while constructing and holding PathLocks so threads in the same process are serialized too; alternatively (if you prefer not to add in-process locking) update the comment above this block to explicitly state that PathLocks only serializes across processes (flock-based) and does not prevent concurrent threads in the same process from running the fetch concurrently, and reference PathLocks and lockFilePath in the comment.
Force-pushed from df9bf96 to 37ce0ae.
Actionable comments posted: 2
🧹 Nitpick comments (3)
src/libfetchers/fetchers.cc (2)
378: `create_directories` can throw on permission errors.

`std::filesystem::create_directories` throws `std::filesystem::filesystem_error` if it fails (e.g., permission denied). Consider wrapping this in a try-catch to degrade gracefully (skip locking) rather than failing the entire fetch operation, since the lock is an optimization, not a correctness requirement.

🛠️ Suggested defensive approach
```diff
+std::optional<PathLocks> lock;
+try {
 auto lockFilePath =
     getCacheDir() / "fetcher-locks"
     / hashString(HashAlgorithm::SHA256, attrsToJSON(toAttrs()).dump()).to_string(HashFormat::Base16, false);
 std::filesystem::create_directories(lockFilePath.parent_path());
-PathLocks lock(
+lock.emplace(
     {lockFilePath.string()},
     fmt("waiting for another Nix process to finish fetching input '%s'...", to_string()));
+} catch (std::exception & e) {
+    debug("failed to acquire fetcher lock: %s", e.what());
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libfetchers/fetchers.cc` at line 378, The call to std::filesystem::create_directories(lockFilePath.parent_path()) can throw on permission or other filesystem errors; wrap this call (and the subsequent lock file creation logic that uses lockFilePath) in a try-catch that catches std::filesystem::filesystem_error (or std::exception), log a warning/error about being unable to create the lock directory including the exception.what() message, and then proceed by skipping the locking behavior instead of aborting the fetch operation so the lock remains an optimization rather than a fatal failure.
382-383: Test-only code in production path.

The `_NIX_TEST_CONCURRENT_FETCHES` environment variable check and sleep are embedded in production code. While common in Nix's codebase, consider documenting that this is a test hook with a comment, or ensuring the env var name clearly indicates it's internal (the `_NIX_` prefix helps).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libfetchers/fetchers.cc` around lines 382 - 383, The sleep controlled by getEnv("_NIX_TEST_CONCURRENT_FETCHES") is a test-only hook embedded in the production path; update the code around that check (the call to getEnv("_NIX_TEST_CONCURRENT_FETCHES") and the std::this_thread::sleep_for(std::chrono::seconds(1)) call) to clearly mark it as a test-only internal hook by adding a concise comment stating it is intentionally present for tests only (or, if you prefer, rename the env var to an explicitly internal name), so future readers know it's not production logic and it won't be accidentally relied upon.

tests/functional/tarball.sh (1)
129-131: Grep patterns may be brittle if log format changes.

The assertions rely on specific log message substrings. Consider adding a comment documenting the expected log lines, or using more robust patterns. The test is otherwise well-structured for validating the concurrent fetch serialization.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/functional/tarball.sh` around lines 129 - 131, The three brittle grep assertions in tests/functional/tarball.sh (the patterns "Download.*to", "downloading.*tar.tar", and "waiting for another Nix process to finish fetching input") should be made more robust and documented: replace the fragile substrings with clearer, resilient patterns (e.g., use grep -E with anchored or tokenized regexes, case-insensitive matching, or exact fixed-string matches where appropriate), extract those regexes into named variables or a small helper function (e.g., assert_log_contains) so they’re easy to update, and add a concise comment above the assertions listing the exact expected log line formats so future log format changes are obvious. Ensure the updated assertions still check counts (== 2/1/1) and reference the same log files ( "$TEST_ROOT/log1" "$TEST_ROOT/log2" ).
```cpp
auto lockFilePath =
    getCacheDir() / "fetcher-locks"
    / hashString(HashAlgorithm::SHA256, attrsToJSON(toAttrs()).dump()).to_string(HashFormat::Base16, false);
```
Lock coordination may fail across different user contexts.
getCacheDir() returns user-specific paths based on NIX_CACHE_HOME, XDG_CACHE_HOME, or ~/.cache/nix. If the Nix daemon runs as root and a client runs as a regular user, they'll have different cache directories and won't see each other's locks.
Per the PR description, this is intended for parallel nix-eval-jobs instances. If those run as the same user, this works. However, this limitation should be documented or considered if daemon-client coordination is ever expected.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libfetchers/fetchers.cc` around lines 375 - 377, The lock path built in
fetchers.cc uses getCacheDir() which is per-user, so lock coordination across
different user contexts (daemon vs client) can fail; update the implementation
to either (a) use a shared, daemon-visible directory (e.g. respect a configured
global lock directory such as NIX_STATE_DIR or an explicit env var like
NIX_FETCHER_LOCK_DIR) when available, or (b) document the user-scoped limitation
clearly; modify the code around getCacheDir(), "fetcher-locks", and the
hashString(attrsToJSON(toAttrs()).dump()) construction to read and prefer the
global/configurable lock dir before falling back to getCacheDir() so
daemon-client coordination will see the same lock files.
```shell
# Test that concurrent invocations of Nix will fetch the tarball only once.
rm -rf "$TEST_HOME/.cache"
store="$TEST_ROOT/prefetch-store"
nix-store --store "$store" --init # needed because concurrent creation of the store can give SQLite errors
_NIX_TEST_CONCURRENT_FETCHES=1 _NIX_FORCE_HTTP=1 nix flake prefetch --store "$store" -v "tarball+file://$TEST_ROOT/tar.tar" 2> "$TEST_ROOT/log1" &
pid1="$!"
_NIX_TEST_CONCURRENT_FETCHES=1 _NIX_FORCE_HTTP=1 nix flake prefetch --store "$store" -v "tarball+file://$TEST_ROOT/tar.tar" 2> "$TEST_ROOT/log2" &
pid2="$!"
wait "$pid1"
wait "$pid2"
[[ $(cat "$TEST_ROOT/log1" "$TEST_ROOT/log2" | grep -c "Download.*to") -eq 2 ]]
[[ $(cat "$TEST_ROOT/log1" "$TEST_ROOT/log2" | grep -c "downloading.*tar.tar") -eq 1 ]]
[[ $(cat "$TEST_ROOT/log1" "$TEST_ROOT/log2" | grep -c "waiting for another Nix process to finish fetching input") -eq 1 ]]
```
Test does not verify process exit status before assertions.
The wait commands capture exit status but it's not checked. If one process fails, the test may still pass the grep assertions (e.g., if only one process ran successfully and produced the expected log output).
🛠️ Suggested fix to check exit status
```diff
 wait "$pid1"
+rc1=$?
 wait "$pid2"
+rc2=$?
+[[ $rc1 -eq 0 ]] || { echo "First prefetch failed"; exit 1; }
+[[ $rc2 -eq 0 ]] || { echo "Second prefetch failed"; exit 1; }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/functional/tarball.sh` around lines 119 - 131, The test launches two
background prefetch processes (pid1 and pid2) and currently calls wait without
checking their exit codes; capture each wait's exit status (e.g., wait "$pid1";
status1=$? and wait "$pid2"; status2=$?) and assert both status1 and status2 are
zero before performing the grep assertions so a failing process causes the test
to fail; update the test around the wait calls (referencing pid1, pid2, and the
wait invocations) to record and check these exit statuses and fail the test if
either is non-zero.
Motivation
This prevents multiple processes (like nix-eval-jobs instances) from fetching the same input at the same time. That doesn't matter for correctness, but it can cause a lot of redundant downloads.
Context
This is an alternative to the fetcher part of NixOS#14991.