perf: parallelize independent async work #1071
Merged
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 07d89340ba
9aoy approved these changes on Mar 19, 2026.
Benchmark
Micro-benchmark comparing sequential `await` vs `Promise.all` for the two parallelization patterns in this PR.

This is a synthetic benchmark to demonstrate the theoretical benefit of the patterns, not an end-to-end measurement. In current rstest workloads, the parallelized operations use Rspack's in-memory filesystem (delay ≈ 0), so the improvement is negligible. However, as project count, test file count, or I/O latency increases (e.g., real disk I/O via `writeToDisk`, a network filesystem, or complex config resolution), the benefit scales accordingly.

Pattern A: two independent awaits → `Promise.all`

Applies to the `getAssetFiles` + `getSourceMaps` parallelization (`pool/index.ts`, `runTests.ts`, `listTests.ts`).
- `count` = number of test files (each triggers one pair of calls)
- `delay` = per-call I/O latency (0 ≈ in-memory FS, >0 ≈ real disk)

Theoretical ceiling: 2x (two independent operations fully overlapped).
Summary
Pattern B: `for` loop → `Promise.all` map

Applies to `collectTestFiles`, `collectProjectEntries`, and `resolveProjects` (`listTests.ts`, `hostController.ts`, `cli/init.ts`).
- `count` = number of projects
- `delay` = per-project async work (glob, config resolution, etc.)

Theoretical ceiling: Nx, where N = project count (all projects processed concurrently).
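A minimal sketch of Pattern B, assuming a hypothetical `resolveProject` that stands in for the per-project async work (glob, config resolution, etc.); the names are illustrative, not the PR's actual API.

```typescript
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Stand-in for per-project async work costing one `delay` each.
async function resolveProject(name: string, delay: number): Promise<string> {
  await sleep(delay);
  return `${name}:resolved`;
}

// Before: each iteration awaits the previous one → total ≈ N × delay.
async function resolveSequential(projects: string[], delay: number) {
  const results: string[] = [];
  for (const p of projects) {
    results.push(await resolveProject(p, delay));
  }
  return results;
}

// After: the map starts all N promises at once → total ≈ 1 × delay
// (the Nx ceiling). Promise.all preserves input order in its result array,
// so the returned array matches the sequential version exactly.
async function resolveParallel(projects: string[], delay: number) {
  return Promise.all(projects.map((p) => resolveProject(p, delay)));
}

async function demo() {
  console.log(await resolveParallel(['app', 'lib', 'docs'], 20));
  // → [ 'app:resolved', 'lib:resolved', 'docs:resolved' ]
}

demo();
```

One design note: this rewrite is only safe when iterations are independent. If one project failing should not abort the others, `Promise.allSettled` is the drop-in alternative; and for very large project counts, a concurrency limit may be worth adding to avoid saturating the filesystem.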
Background
Startup and test discovery still serialized several independent async tasks, which added avoidable latency to CLI setup, project discovery, and asset preparation.
Implementation
User Impact
Large multi-project runs, list commands, and startup paths spend less time waiting on serialized async setup.
Checklist
Validation:
- `pnpm format`: attempted, but Biome hit an existing internal panic in `packages/core/src/types/coverage.ts` before completing.
- `pnpm run lint`
- `pnpm run typecheck`
- `pnpm run build`
- `pnpm run test`
- `cd e2e && pnpm test`