Add benchmark allocation support and improve Emit table size #29411
Merged
agocke merged 1 commit into dotnet:master on Aug 30, 2018
Conversation
After I added support for measuring allocations in the compiler benchmark suite, I noticed in a PerfView trace that we were often resizing the arrays we used for the Emit tables. Looking more closely at how we allocate the tables, I found that we were using an approximation that was very close to the actual size of the tables needed, but systematically undercounted. Ironically, this may be worse than being farther off, since it meant that we were getting right to the edge of available space before requiring a resize, which means a new large allocation and copying every entry from the old tables to the new tables. By applying a multiplier to the table size and allocating more memory for the tables up front, I've actually decreased the total amount of memory allocated during emit.
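The effect described above can be sketched with a toy model. This is not Roslyn code; it is a minimal, hypothetical simulation (all names invented) of a table whose backing array doubles when the initial capacity estimate falls just short of the real row count:

```java
// Hypothetical sketch, not Roslyn code: why a capacity estimate that
// systematically undercounts can allocate MORE total memory than a
// slightly padded one.
public class TableSizing {
    // Total array slots allocated while filling a table, assuming the
    // common grow-by-doubling resize strategy.
    static long allocatedSlots(int actualRows, int initialCapacity) {
        long allocated = initialCapacity;   // first backing array
        int capacity = initialCapacity;
        while (capacity < actualRows) {
            capacity *= 2;                  // resize: allocate a new, larger array
            allocated += capacity;          // the old array becomes garbage
        }
        return allocated;
    }

    public static void main(String[] args) {
        int actual = 10_000;
        int estimate = 9_900;                // very close, but undercounts
        int padded = (int) (estimate * 1.1); // apply a small multiplier

        // Undercounting triggers one doubling resize: 9,900 + 19,800 slots.
        System.out.println(allocatedSlots(actual, estimate)); // 29700
        // The padded estimate never resizes: 10,890 slots total.
        System.out.println(allocatedSlots(actual, padded));   // 10890
    }
}
```

The padded table "wastes" a few hundred slots, yet allocates far less in total because it never pays for a resize plus a discarded old array, which matches the counterintuitive result reported in the description.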
jaredpar approved these changes on Aug 21, 2018
agocke (Member, Author): @dotnet/roslyn-compiler For a one line compiler change.
agocke (Member, Author): ping @dotnet/roslyn-compiler for review
cston approved these changes on Aug 30, 2018
Before / After: (benchmark results not captured in this extract)
You can ignore the timing differences in the benchmark here; that's just noise on my machine. The benchmark also won't reflect the larger GC costs, because usually no collections occur during the benchmark runs (Benchmark.NET calls GC.Collect() after each run).
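The PR's harness measures allocations with BenchmarkDotNet; as a rough analogue of that idea, here is a hypothetical sketch (all names invented, not the actual harness) using Java's per-thread allocation counter. It illustrates the point above: allocated bytes are counted as the run executes, so the measurement stays meaningful even though the harness forces a collection between runs and no GC happens during a run:

```java
import java.lang.management.ManagementFactory;

// Hypothetical sketch of per-run allocation measurement, loosely analogous
// to what the benchmark suite does. Not the PR's actual harness.
public class AllocMeter {
    // HotSpot-specific extension of ThreadMXBean that exposes per-thread
    // allocated-byte counters (enabled by default on OpenJDK).
    private static final com.sun.management.ThreadMXBean THREADS =
            (com.sun.management.ThreadMXBean) ManagementFactory.getThreadMXBean();

    static long measureAllocatedBytes(Runnable benchmark) {
        long id = Thread.currentThread().getId();
        long before = THREADS.getThreadAllocatedBytes(id);
        benchmark.run();
        long after = THREADS.getThreadAllocatedBytes(id);
        System.gc(); // like the harness: collect AFTER the run, never during it
        return after - before;
    }

    public static void main(String[] args) {
        long bytes = measureAllocatedBytes(() -> {
            byte[][] junk = new byte[100][];
            for (int i = 0; i < junk.length; i++) junk[i] = new byte[1024];
        });
        // At least the ~100 KB we explicitly allocated shows up in the count.
        System.out.println(bytes > 100 * 1024);
    }
}
```

Because the counter tracks bytes allocated rather than time spent collecting, it is exactly the kind of metric that stays stable when the harness hides GC pauses, which is why the allocation numbers in this PR are trustworthy while the timings are noise.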