fix(allocator): Arena retry allocation when chunk size approaches maximum#21777
Merged

graphite-app[bot] merged 1 commit into `main` on Apr 26, 2026
Conversation
Merging this PR will not alter performance
Pull request overview
Fixes a bug in `Arena::try_alloc_layout_slow_impl` where allocating a new chunk could fail outright when the "double the previous chunk size" growth strategy exceeded the `isize::MAX`-derived size limit (notably affecting 32-bit targets). The fix retries with progressively smaller chunk sizes.
Changes:
- Replaces the iterator-based “retry with halved sizes” logic with an explicit loop for clarity.
- Retries chunk creation by halving the candidate size until success or until the minimum acceptable size is reached.
Base automatically changed from `om/04-26-perf_allocator_reduce_branches_when_allocating_new_chunk` to `main` on April 26, 2026 22:07
camc314 pushed a commit that referenced this pull request on Apr 27, 2026
### 💥 BREAKING CHANGES

- 502e804 ast: [**BREAKING**] Reduce size of `TSTypePredicateName` (#21711) (overlookmotel)
- 5651539 ast: [**BREAKING**] Reduce size of `JSXExpression` (#21710) (overlookmotel)
- c44e280 ast: [**BREAKING**] Reduce size of `ArrayExpressionElement` (#21709) (overlookmotel)
- c5b3deb syntax: [**BREAKING**] Remove `CommentNodeId` (#21679) (overlookmotel)

### 🚀 Features

- b738a39 allocator: Add `Allocator::cursor_ptr` method (#21773) (overlookmotel)
- 678767e ast: Generate node_id accessors for AST enum wrappers (#21653) (camc314)
- f091d77 minifier: Inline constant spread elements into arrays (#21095) (Armano)

### 🐛 Bug Fixes

- 0d608c2 minifier: Preserve raw CR in template literals (#21645) (Dunqing)
- a889ea9 minifier: Track pure functions in DCE mode (#21722) (Dunqing)
- 674dfac allocator: `Arena` retry allocation when chunk size approaches maximum (#21777) (overlookmotel)
- f130cc0 allocator: Fix arithmetic overflow in `Arena::new_chunk_memory_details` (#21745) (overlookmotel)
- b9bf239 allocator: Fix UB in `Arena::grow_zeroed` (#21739) (overlookmotel)
- d2b9389 allocator: Clippy warning when building without `testing` feature (#21681) (camc314)
- 503dc86 codegen: Map sourcemaps from visible output starts (#21662) (Dunqing)
- c92bd3b transformer: Use SPAN for synthesized helper calls to prevent comment misattribution (#21578) (Dunqing)
- 0d80441 codegen: Add mapping before printing `#` for private ident (#21619) (camc314)

### ⚡ Performance

- 9fa362e napi/parser: Do not generate tokens except in tests (#21811) (overlookmotel)
- 0044392 allocator: Reduce branches when allocating new chunk (#21776) (overlookmotel)
- 7896bd0 allocator: `Allocator::used_bytes` do not use chunk iterator (#21771) (overlookmotel)
- a5c562f allocator: Remove check in `Arena::new_chunk_memory_details` (#21750) (overlookmotel)
- 35bbe1f allocator: `Arena` use unchecked size round up where guaranteed no overflow (#21743) (overlookmotel)
- ffe229b allocator: Remove unnecessary check from `Arena::try_alloc_layout_slow_impl` (#21732) (overlookmotel)
- 72fece5 allocator: Use `NonNull::offset_from_unsigned` in `Arena::chunk_capacity` (#21731) (overlookmotel)
- cab32ae ast: Add `#[inline(always)]` to `node_id` methods on enums with all variants unboxed (#21707) (overlookmotel)
- b179688 parser: Allocate `TriviaBuilder` comments in the arena (#21512) (Boshen)
- 2290f31 lexer: Fix perf of `Token::set_*` methods on Rust 1.95.0 (#21659) (overlookmotel)
- 1b58029 allocator: Move code into cold path in `Arena::alloc_layout` (#21622) (overlookmotel)
- 3cf7cef allocator: Reduce instructions on allocation hot path (#21510) (overlookmotel)

### 📚 Documentation

- ce65070 data_structures: Document why `as_ref` and `as_mut` on `NonNullConst` and `NonNullMut` take `self` (#21800) (overlookmotel)
- 93b7dbd allocator: Improve doc comments for `ChunkFooter` (#21733) (overlookmotel)
- 295db8d transformer: Fix comment (#21717) (overlookmotel)
- 5c93af8 ast: Add comments explaining `#[inline(always)]` to `node_id` methods on enums (#21706) (overlookmotel)
- e4cea25 transform: Use the `node:` namespace in the example (#19998) (루밀LuMir)

### 🛡️ Security

- d8076c9 deps: Update rolldown (#21639) (renovate)

Co-authored-by: Boshen <1430279+Boshen@users.noreply.github.com>

Fix a bug in the allocation slow path (`Arena::try_alloc_layout_slow_impl`).

The allocation strategy is that, by default, each chunk is twice the size of the last, to amortize the cost of growing the `Arena`'s memory.

Previously, if the current chunk of the `Arena` was very large (close to `isize::MAX`), attempting to allocate a new chunk of double the size would fail (`new_chunk_memory_details` returns `None`). This would cause the allocation to fail overall. Instead, try again with a smaller chunk size which still fits the requested alignment.

This appears to be what the code intended to do, but the convoluted implementation obscured the real behavior. Simplify it by using a normal loop instead of the previous iterator-based implementation.

Allocations of a size approaching `isize::MAX` are infeasible on 64-bit systems, so this bug could only have manifested on 32-bit platforms (e.g. WASM).
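The halving-retry strategy described above can be sketched as a plain loop. This is an illustrative simplification, not the actual oxc code: `try_new_chunk` stands in for `new_chunk_memory_details`, the layout/alignment requirement is reduced to a simple minimum-size check, and the `MIN_CHUNK_SIZE` floor is an assumed policy.

```rust
/// Assumed floor for a new chunk (illustrative, not oxc's actual value).
const MIN_CHUNK_SIZE: usize = 4096;

/// Stand-in for `new_chunk_memory_details`: pretends the allocator
/// rejects any chunk larger than half of `isize::MAX`, so that the
/// retry path below is exercised. Returns the granted chunk size.
fn try_new_chunk(size: usize) -> Option<usize> {
    if size <= isize::MAX as usize / 2 { Some(size) } else { None }
}

/// Try progressively smaller chunk sizes, halving on each failure,
/// until allocation succeeds or the candidate can no longer satisfy
/// the `needed` bytes of the pending allocation.
fn alloc_chunk_with_retry(mut candidate: usize, needed: usize) -> Option<usize> {
    while candidate >= needed && candidate >= MIN_CHUNK_SIZE {
        if let Some(chunk) = try_new_chunk(candidate) {
            return Some(chunk);
        }
        // Doubling overshot the allocator's limit: retry smaller.
        candidate /= 2;
    }
    None // even the smallest acceptable chunk could not be allocated
}
```

The point of the explicit `while` loop is that the failure and retry conditions are visible in one place, which is what the PR says the previous iterator-based version obscured.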