fix(core): optimize task hashing with BFS and performance logging#32911
FrozenPandaz merged 1 commit into master
Conversation
Nx Cloud has identified a possible root cause for your failed CI:
The documentation generation task failed because it detected modified files that need to be committed. However, these documentation changes are not caused by the code changes in this pull request.
The PR modifies only Rust files in the task hashing pipeline:
- packages/nx/src/native/tasks/dep_outputs.rs
- packages/nx/src/native/tasks/hash_planner.rs
- packages/nx/src/native/tasks/task_hasher.rs
The failing documentation changes are in TypeScript devkit documentation for PostTasksExecutionContext and PreTasksExecutionContext, removing fields like endTime, id, and startTime. These interface changes are unrelated to Rust task hashing optimizations.
The most likely causes:
- The documentation was generated from a TypeScript source that was modified in a previous commit or on the base branch
- There are uncommitted changes in the workspace from previous work
- The documentation generator is picking up changes from the merged base branch (commit message shows a merge: "Merge 0499f6e into 1bea913")
This is classified as 'environment_state' because the failure is caused by the current state of the repository (uncommitted documentation changes or changes from the base branch merge), not by the actual code changes in this PR which only touch Rust performance optimization code.
No code changes are warranted. The documentation files should be committed separately or the workspace should be cleaned to remove these unrelated changes before re-running the documentation task.
A code change would likely not resolve this issue, so no action was taken.
🐳 **We have a release for that!**

This PR has a release associated with it. You can try it out using this command: `npx create-nx-workspace@0.0.0-pr-32911-14f87b5 my-workspace`

Or just copy this version and use it in your own command: `0.0.0-pr-32911-14f87b5`
To request a new release for this pull request, mention someone from the Nx team or the
…2911)

## Summary

Optimizes `get_dep_output` by replacing recursive traversal with BFS + parallel processing.

**Key optimization**: The old recursive implementation processed the same task multiple times when it appeared in multiple dependency paths (diamond dependencies). The new implementation:

- Uses BFS with a visited HashSet to process each task exactly once
- Collects all tasks first, then processes them in parallel with Rayon
- Returns task references directly, eliminating redundant HashMap lookups

This deduplicates work and leverages parallelism, significantly improving performance on large task graphs.

**Note**: Only processes regular dependencies, not continuous_dependencies, since continuous tasks (like watch/serve) don't produce outputs that need to be hashed.

The same test on my machine without these changes takes many minutes before crashing my editor. Now it takes <3ms.

## Test plan

- ✅ All Rust tests passing
- ✅ Native module builds successfully
- ✅ Added 4 unit tests covering direct dependencies, transitive dependencies, diamond deduplication, and task output filtering
- ✅ Added performance test verifying large graphs (depth 30 = 90 tasks) complete in <10ms

(cherry picked from commit e5ac4a8)
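The BFS + visited-set approach described above can be sketched roughly as follows. This is a minimal, self-contained illustration, not the actual Nx implementation: the `TaskGraph` struct and `collect_transitive_deps` function are hypothetical stand-ins for the real types in `packages/nx/src/native/tasks/`, and the parallel hashing step (done with Rayon in the PR) is omitted so the sketch needs only the standard library.

```rust
use std::collections::{HashMap, HashSet, VecDeque};

// Hypothetical minimal task graph; the real types live in
// packages/nx/src/native/tasks/.
struct TaskGraph {
    // task id -> ids of its (regular, non-continuous) dependencies
    dependencies: HashMap<String, Vec<String>>,
}

// BFS with a visited set: each transitive dependency of `root` is
// collected exactly once, even in diamond-shaped graphs. The PR then
// processes the collected tasks in parallel with Rayon; here we just
// return the deduplicated list.
fn collect_transitive_deps<'a>(graph: &'a TaskGraph, root: &'a str) -> Vec<&'a str> {
    let mut visited: HashSet<&str> = HashSet::new();
    let mut queue: VecDeque<&str> = VecDeque::new();
    let mut collected: Vec<&'a str> = Vec::new();

    visited.insert(root);
    queue.push_back(root);

    while let Some(task) = queue.pop_front() {
        if let Some(deps) = graph.dependencies.get(task) {
            for dep in deps {
                // `HashSet::insert` returns false if the value was
                // already present, so a task reachable through two
                // parents (a diamond) is enqueued only once.
                if visited.insert(dep.as_str()) {
                    collected.push(dep.as_str());
                    queue.push_back(dep.as_str());
                }
            }
        }
    }
    collected
}
```

With a diamond graph `a -> {b, c}`, `b -> d`, `c -> d`, the old recursive walk would visit `d` twice (once via `b`, once via `c`); the visited set reduces that to a single visit, which is where the deduplication win on large graphs comes from.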
This pull request has already been merged/closed. If you experience issues related to these changes, please open a new issue referencing this pull request.