Description
We got reports that uv overruns the default open files limit (ulimit) for users on Linux and macOS:
- During sync/building packages: "Hitting ulimit with huge workspace" (#11296)
- During bytecode compilation: "Bytecode compilation can fail with 'too many open files' on default Ubuntu settings" (#16999)
- When uninstalling Python versions (no GitHub issue yet)
The default ulimits can be low, for example 1024 on Linux, and we spawn up to one thread per core plus several subprocesses. However, Cargo and other Rust tools, which have very similar workloads and often hit very similar problems, don't have any trouble with ulimits. A Cargo maintainer told us:
From my testing for fine-grained locking (unstable feature), I found that Cargo generally peaks at around ~70-80 file descriptors opened at once, and this remains mostly static even for large projects.
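To compare against Cargo's ~70-80 peak, we first need a way to measure our own descriptor usage. A minimal sketch (Linux-only, since it relies on `/proc/self/fd`; macOS would need `/dev/fd` or libproc instead) that counts the descriptors currently open in the running process:

```rust
use std::fs;

/// Count the file descriptors currently open in this process by
/// listing /proc/self/fd (Linux-only).
fn open_fd_count() -> std::io::Result<usize> {
    // Each entry in /proc/self/fd is one open descriptor; the read_dir
    // handle itself briefly adds one, so subtract it.
    Ok(fs::read_dir("/proc/self/fd")?.count().saturating_sub(1))
}

fn main() -> std::io::Result<()> {
    println!("open fds at start: {}", open_fd_count()?);

    // Opening files raises the count; dropping the handles lowers it again.
    let _files: Vec<_> = (0..8)
        .map(|_| fs::File::open("/proc/self/status"))
        .collect::<Result<_, _>>()?;
    println!("open fds with 8 files held: {}", open_fd_count()?);
    Ok(())
}
```

Calling this periodically from a background thread (or at suspected hot spots like the sync and bytecode-compilation paths) would give a rough peak-usage number to put next to Cargo's.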
So the question is: why are we running into the ulimit while Cargo isn't? Which file descriptors is uv even holding, and do we need them? Is Cargo doing something different that we could adopt to avoid this?
This requires some Unix expertise and Rust parallelism knowledge, but otherwise shouldn't require much uv-specific background; the task is mostly about figuring out how to analyze this in uv and determining the differences from Cargo.
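For the "what file descriptors is uv even holding" part of the analysis, a count alone isn't enough; we also want to know what each descriptor points at. One way to start (again a Linux-only sketch using `/proc`; `lsof -p <pid>` gives the same information externally) is to resolve the `/proc/self/fd` symlinks, whose targets name the resource, e.g. a file path, `pipe:[12345]`, or `socket:[67890]`:

```rust
use std::fs;

/// Snapshot what each open descriptor in this process points at, by
/// resolving the /proc/self/fd symlinks (Linux-only). Running this at
/// a suspected peak shows whether the fds are files, pipes, sockets, etc.
fn fd_targets() -> std::io::Result<Vec<(u32, String)>> {
    let mut out = Vec::new();
    for entry in fs::read_dir("/proc/self/fd")? {
        let entry = entry?;
        // Entry names are the fd numbers themselves; skip anything else.
        let fd: u32 = match entry.file_name().to_string_lossy().parse() {
            Ok(fd) => fd,
            Err(_) => continue,
        };
        // The symlink target names the underlying resource.
        if let Ok(target) = fs::read_link(entry.path()) {
            out.push((fd, target.to_string_lossy().into_owned()));
        }
    }
    out.sort();
    Ok(out)
}

fn main() -> std::io::Result<()> {
    for (fd, target) in fd_targets()? {
        println!("fd {fd}: {target}");
    }
    Ok(())
}
```

Diffing two such snapshots taken before and during a sync would show which categories of descriptors (package archives, pipes to subprocesses, sockets, lock files) dominate the peak, and whether any of them are being held longer than necessary.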