
performance(starknet-classes): Made decompress word extraction less costly.#9466

Merged
orizi merged 1 commit into main from
orizi/01-13-performance_starknet-classes_made_decompress_word_extraction_less_costly
Jan 14, 2026

Conversation

@orizi (Collaborator) commented Jan 13, 2026

Summary

Optimized the decompress function in the Starknet classes crate by replacing BigUint division with an allocation-free implementation based on fixed-size u64 arrays and u128 arithmetic, avoiding heap allocations during the decompression process.


Type of change

Please check one:

- [ ] Bug fix (fixes incorrect behavior)
- [ ] New feature
- [x] Performance improvement
- [ ] Documentation change with concrete technical impact
- [ ] Style, wording, formatting, or typo-only change

Why is this change needed?

The previous implementation of the decompress function used BigUint division operations, which require heap allocations. This is inefficient for this particular use case, since we are working with felt252 values that can be represented with a fixed number of u64 limbs.


What was the behavior or documentation before?

The code used BigUint division and modulo operations, which allocate memory on the heap for intermediate results during decompression.


What is the behavior or documentation after?

The new implementation uses a fixed-size array of u64 limbs and performs division using u128 operations, without any heap allocations, making the decompression process more efficient.


Additional context

Since all values are felt252s, we can safely assume that 4 u64 limbs are sufficient for the representation, allowing us to use a more efficient division algorithm with fixed-size arrays.
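As a rough illustration of the technique described above (a minimal sketch, not the crate's actual code — the function name, signature, and limb layout here are hypothetical), limb-wise long division of a 4×u64 value by a u64 divisor can be done entirely on the stack by walking the limbs from most significant to least significant and using u128 intermediates:

```rust
/// Divides a little-endian 4-limb value by `divisor` in place,
/// returning the remainder. Performs no heap allocation.
/// Hypothetical sketch; not the actual starknet-classes API.
fn div_rem_limbs(limbs: &mut [u64; 4], divisor: u64) -> u64 {
    assert!(divisor != 0, "division by zero");
    let mut rem: u64 = 0;
    // Walk from the most significant limb down to the least significant.
    for limb in limbs.iter_mut().rev() {
        // Combine the carried remainder with the current limb.
        let acc = ((rem as u128) << 64) | (*limb as u128);
        // Since rem < divisor, acc < divisor * 2^64, so the
        // quotient digit always fits in a single u64.
        *limb = (acc / (divisor as u128)) as u64;
        rem = (acc % (divisor as u128)) as u64;
    }
    rem
}

fn main() {
    // Example: (2^64 + 5) / 3 = 6148914691236517207, remainder 0.
    let mut limbs = [5u64, 1, 0, 0]; // little-endian limbs of 2^64 + 5
    let rem = div_rem_limbs(&mut limbs, 3);
    println!("quotient limbs = {limbs:?}, rem = {rem}");
}
```

The invariant that each partial remainder is strictly less than the divisor is what lets every intermediate fit in a u128 and every quotient digit fit in a u64, so no arbitrary-precision arithmetic (and hence no allocation) is ever needed.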


@orizi (Collaborator, Author) commented Jan 13, 2026

This stack of pull requests is managed by Graphite. Learn more about stacking.

@orizi marked this pull request as ready for review January 13, 2026 13:55
@orizi force-pushed the orizi/01-12-performance_sierra-to-casm_changed_felt252_deseralization_to_be_based_on_an_iterator branch from 5bdde35 to c943c75 on January 13, 2026 17:10
@orizi force-pushed the orizi/01-13-performance_starknet-classes_made_decompress_word_extraction_less_costly branch from ef87631 to 4686472 on January 13, 2026 17:10
@TomerStarkware (Collaborator) left a comment

:lgtm:

@TomerStarkware reviewed 1 file and all commit messages, and made 1 comment.
Reviewable status: :shipit: complete! all files reviewed, all discussions resolved (waiting on @ilyalesokhin-starkware).

…ostly.

SIERRA_UPDATE_PATCH_CHANGE_TAG=No external changes.
@orizi changed the base branch from orizi/01-12-performance_sierra-to-casm_changed_felt252_deseralization_to_be_based_on_an_iterator to graphite-base/9466 January 14, 2026 15:46
@orizi force-pushed the orizi/01-13-performance_starknet-classes_made_decompress_word_extraction_less_costly branch from 4686472 to 42b8264 on January 14, 2026 15:46
@orizi force-pushed the graphite-base/9466 branch from c943c75 to 3b7a44e on January 14, 2026 15:46
@orizi changed the base branch from graphite-base/9466 to main January 14, 2026 15:46
@orizi added this pull request to the merge queue Jan 14, 2026
Merged via the queue into main with commit 85e7366 Jan 14, 2026
108 checks passed
@orizi deleted the orizi/01-13-performance_starknet-classes_made_decompress_word_extraction_less_costly branch January 14, 2026 17:05

3 participants