Conversation
CodSpeed Performance Report
Merging #1128 will degrade performance by 77.96%.
Benchmarks breakdown
|
|
Heh. Appending new values to the multidict is more expensive; all other ops are faster. We have a tradeoff, as usual. I think that lookup is a much more frequent operation than filling the multidict. |
|
The PR is more-or-less done, but I'd like to do some polishing and self-review later. Careful testing is appreciated! The new multidict is close to Python's dict, except for multiple keys, of course. It starts from an empty hashtable, which grows by powers of 2 starting from 8: 8, 16, 32, 64, 128, ... The table is resized when needed, and bulk updates ( Item deletion puts the DKIX_DUMMY special index in the hashtable. Unlike the standard The iteration for operations like
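The growth/tombstone scheme described above can be sketched in pure Python. This is a simplified illustration, not the actual multidict implementation: the names DKIX_EMPTY/DKIX_DUMMY follow CPython's dict, and plain linear probing stands in for CPython's perturb-based probing.

```python
# Simplified sketch of a CPython-dict-style layout: a sparse "indices"
# table plus a dense entries list.  Not the actual multidict code.

DKIX_EMPTY = -1   # slot was never used
DKIX_DUMMY = -2   # slot held a now-deleted item; probing must skip past it

class IndexTable:
    def __init__(self):
        self.size = 8                       # grows by powers of two: 8, 16, 32, ...
        self.indices = [DKIX_EMPTY] * self.size
        self.entries = []                   # dense (hash, key, value) records

    def _probe(self, h):
        i = h % self.size
        while True:
            yield i
            i = (i + 1) % self.size         # linear probing for illustration

    def add(self, key, value):              # duplicates allowed: it is a multidict
        if 3 * len(self.entries) >= 2 * self.size:   # keep load factor < 2/3
            self._resize()
        h = hash(key)
        for i in self._probe(h):
            if self.indices[i] in (DKIX_EMPTY, DKIX_DUMMY):
                self.indices[i] = len(self.entries)
                self.entries.append((h, key, value))
                return

    def get(self, key):                     # first value stored for key
        h = hash(key)
        for i in self._probe(h):
            ix = self.indices[i]
            if ix == DKIX_EMPTY:
                raise KeyError(key)
            if ix != DKIX_DUMMY and self.entries[ix][1] == key:
                return self.entries[ix][2]

    def delete(self, key):                  # removes the first matching item
        h = hash(key)
        for i in self._probe(h):
            ix = self.indices[i]
            if ix == DKIX_EMPTY:
                raise KeyError(key)
            if ix != DKIX_DUMMY and self.entries[ix][1] == key:
                self.indices[i] = DKIX_DUMMY          # leave a tombstone
                self.entries[ix] = None
                return

    def _resize(self):
        self.size *= 2
        self.indices = [DKIX_EMPTY] * self.size
        old, self.entries = self.entries, []
        for rec in old:                     # rebuild, dropping tombstones
            if rec is not None:
                self.add(rec[1], rec[2])
```

Resizing rebuilds the sparse indices table from the dense entries list, which is why bulk creation pays an extra cost while lookups stay O(1) on average.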
The performance is okay: multidict creation is slightly slower because the hashtable has to be recalculated and the indices table rebuilt, but all other operations are faster. The multidict is still slightly slower than the regular dict because I don't want to use the private API for accessing internal Python structures, the most notable being the string's cached hash. TODO:
Open question: should we use
If anybody wants to play with the code, two things could help with debugging.
Please feel free to experiment and ask any questions. |
Codecov Report
❌ Patch coverage is
Additional details and impacted files@@ Coverage Diff @@
## master #1128 +/- ##
==========================================
- Coverage 98.62% 98.31% -0.32%
==========================================
Files 27 27
Lines 3566 3851 +285
Branches 561 700 +139
==========================================
+ Hits 3517 3786 +269
- Misses 17 18 +1
- Partials 32 47 +15
|
|
Is there anything I can help out with in this pull request? I can always look into cloning this branch if anybody wants. |
|
So I tested this fork on my computer and it all runs correctly. I guess the only real issue now is how we get code coverage to cover the missing lines. Maybe adding a few more tests could do it? |
|
https://app.codecov.io/gh/aio-libs/multidict/pull/1128/blob/multidict/_multidict_py.py#L819

def test_update_with_second_md(any_multidict_class: _MD_Classes) -> None:
obj1 = any_multidict_class()
obj2 = any_multidict_class([("a", 2)])
obj1.update(obj2)
assert obj1 == obj2 |
|
Ignore what I sent before, I sent you a PR. asvetlov#1 |
|
@Vizonex thanks! |
|
I think the PR is ready for review. The change is backward compatible and shouldn't affect the library users. |
|
I'm not too familiar with this, so I'll leave it with the others. Certainly looking forward to the performance improvement though. |
|
How is code coverage not hitting the same line I thought I had covered? I'm gonna have to re-patch it. This is confusing 😕 |
|
Now the test coverage is better. |
Create a str subclass with a hash implementation? Or something similar. |
I doubt it is acceptable; multidict uses exact
Perhaps I could mock |
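The str-subclass idea suggested above can be sketched like this (a hypothetical helper, not part of the multidict test suite). Overriding `__hash__` makes it easy to force hash collisions in tests, but the subclass fails an exact `type(...) is str` check, which is the concern with an implementation that requires the exact str type:

```python
class CollidingStr(str):
    """A str subclass whose instances all hash to the same bucket.

    Handy for exercising collision/probing paths in tests, but any code
    doing an exact type check (type(key) is str) will treat it
    differently from a plain str.
    """

    def __hash__(self):
        return 42  # constant hash: every instance collides


a = CollidingStr("a")
b = CollidingStr("b")
assert hash(a) == hash(b) == 42   # forced collision
assert a == "a"                   # equality still behaves like str
assert isinstance(a, str)         # passes isinstance checks
assert type(a) is not str         # but fails an exact type check
```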
|
Hmm, yeah, sounds tricky. |
|
I love the changes being done to multidict, anything I can do this time to keep things moving forward? |
|
@asvetlov I think the debug workflows you put in have all stopped. I'm wondering if 40 minutes would be better? |
|
@Vizonex |
Agreed |
|
@asvetlov Great job on getting this passed! I'll get started on giving multidict Cython support very soon. |
|
I triggered a manual Dependabot run; yarl performance changes: https://codspeed.io/aio-libs/yarl/branches/dependabot%2Fpip%2Fmultidict-6.5.0 |
That's amazing! I hope adding Cython support to multidict will further enhance aiohttp as well.