
Speed up pyc compilation #2637

@hauntsaninja

Description

Thanks for implementing #1788, it's been great!

Since pyc compilation is currently implemented as a pass over the entire environment, it can be quite costly:

λ uv pip install pypyp --compile 
Resolved 1 package in 123ms
Downloaded 1 package in 68ms
Installed 1 package in 28ms
Bytecode compiled 58064 files in 53.81s
 + pypyp==1.2.0

In this venv, installing with pyc compilation takes about 1000x longer than installing without it.

Note that this very slow time is on macOS; it's much better on the Linux machines I have access to (more like 10s). See #2326 (comment) for my laptop specs.

To be clear, this is not a particularly pressing issue: the need to bytecode compile is much lower when installing incremental deltas than when building an environment from scratch. Nevertheless, ideally uv should be significantly faster than pip in all usage scenarios.

With that in mind, some possible suggestions:

  1. It looks like uv currently forces recompilation:

    path, invalidation_mode=invalidation_mode, force=True, quiet=2

    I'm not sure why this is... maybe something to do with checked-hash validation that compileall doesn't handle correctly? The script predates Add an option to bytecode compile during installation #2086, so maybe there's something else going on.
    (update: I've merged this change)

  2. We could only bytecode compile the newly installed packages

  3. If uv no longer forces recompilation, you could move the invalidation / mtime logic into Rust. I'm not sure how much that would help, but it could conceivably save you from shelling out to compileall

  4. Something something copy-on-write for pyc files
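For suggestions 1 and 3, a minimal sketch of why dropping `force=True` matters: with `force=False`, `compileall.compile_file` reads the source mtime/size header embedded in an existing .pyc and skips files that are already up to date, so a second pass over an unchanged environment is a near no-op. (The temp-file setup here is purely illustrative, not uv's actual compile script.)

```python
import compileall
import pathlib
import py_compile
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp) / "mod.py"
    src.write_text("x = 1\n")

    # First pass: compiles mod.py into __pycache__.
    compileall.compile_file(
        str(src), quiet=2,
        invalidation_mode=py_compile.PycInvalidationMode.TIMESTAMP,
    )
    pyc = next(pathlib.Path(tmp, "__pycache__").iterdir())
    first_mtime = pyc.stat().st_mtime_ns

    # Second pass with force=False: the embedded source mtime still
    # matches, so the .pyc is skipped rather than rewritten.
    compileall.compile_file(
        str(src), force=False, quiet=2,
        invalidation_mode=py_compile.PycInvalidationMode.TIMESTAMP,
    )
    assert pyc.stat().st_mtime_ns == first_mtime  # untouched
```

With `force=True`, every .pyc is rewritten unconditionally, which is why the full-environment pass above pays the whole cost on every install.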
