12334 tcp throughput benchmark and performance improvements #12335
Conversation
CodSpeed Performance Report

Merging #12335 will improve performances by 2.15% 🎉 Hooray!

| | Benchmark | trunk | 12334-tcp-throughput-benchmark | Change |
|---|---|---|---|---|
| ⚡ | test_deferred_chained_already_fired | 64.8 µs | 63.4 µs | +2.15% |
| 🆕 | test_tcp_throughput | N/A | 5.1 ms | N/A |
CodSpeed claims a faster benchmark for test_lineReceived and I am confused. This seems like a lie; none of my changes should affect existing benchmarks.
```python
server = ServerFactory()
server.protocol = Server
port = reactor.listenTCP(0, server)
```
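As an aside, `listenTCP(0, server)` asks the OS for an ephemeral port. The same pattern can be sketched with a stdlib socket (illustrative only, not the benchmark's code):

```python
import socket

# Port 0 asks the kernel to pick any free ephemeral port, so parallel
# benchmark runs never collide on a hard-coded port number.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen()
host, port = listener.getsockname()
print(host, port)   # the kernel substituted a real port for 0
listener.close()
```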
I am not sure if we are benchmarking client-side or server-side or both here.
I was thinking that for the client-side benchmarks we could make an exception and have a single server that is used by all the test runs.
That is, create the server as a global/singleton/fixture outside of the benchmark run.
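The global/singleton idea could look something like the following stdlib sketch (the helper name `get_shared_server` is hypothetical, not code from this PR):

```python
import socket
import threading

_server = None
_lock = threading.Lock()

def get_shared_server() -> socket.socket:
    """Return one listening socket shared by every benchmark run."""
    global _server
    with _lock:
        if _server is None:
            # Created once, outside any individual benchmark iteration.
            _server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            _server.bind(("127.0.0.1", 0))
            _server.listen()
        return _server

# Every caller sees the same server, and therefore the same port.
a = get_shared_server()
b = get_shared_server()
```

With pytest-benchmark this would more naturally be a session-scoped fixture, but the effect is the same: server setup cost stays out of the measured loop.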
It's benchmarking both I guess?
The client sends in 16 KB chunks, the server side likely responds in bigger chunks, and this also benchmarks reading... insofar as the goal isn't to measure the speed of sending, but to measure the speed of Twisted in these sorts of scenarios, it seems OK as is?
It's ok to benchmark both.
I guess that we can refactor this in the future.
I was thinking that we might want to have separate benchmark tests for data processing speed for:
- client send
- client receive
- server send
- server receive
The current test is good enough to serve as a general regression test.
I was thinking that, for example, if we improve performance for client send while server receive performance gets worse, the tests might still pass, as the overall test speed is unchanged.
Now... this is unlikely to happen.
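The masking concern above could be avoided by timing each direction independently. A minimal stdlib sketch of the idea (names and sizes are illustrative, not the PR's benchmark):

```python
import socket
import threading
import time

CHUNK = 16 * 1024      # matches the 16 KB client chunks discussed above
CHUNKS = 64            # 1 MiB total, kept small for illustration

def timed_receive(server: socket.socket, stats: dict) -> None:
    """Accept one connection and time the receive side on its own."""
    conn, _ = server.accept()
    start = time.perf_counter()
    total = 0
    while data := conn.recv(65536):
        total += len(data)
    stats["recv_seconds"] = time.perf_counter() - start
    stats["received"] = total
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen()
stats: dict = {}
t = threading.Thread(target=timed_receive, args=(server, stats))
t.start()

client = socket.create_connection(server.getsockname())
start = time.perf_counter()
for _ in range(CHUNKS):
    client.sendall(b"x" * CHUNK)
send_seconds = time.perf_counter() - start
client.shutdown(socket.SHUT_WR)   # EOF lets the receiver's loop finish
t.join()
client.close()
server.close()

# Two independent numbers: a send regression can no longer hide
# behind a receive improvement, or vice versa.
print(send_seconds, stats["recv_seconds"], stats["received"])
```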
@itamarst the
…g()/startWriting() in every reactor iteration." This reverts commit 67dcad2.
adiroiban
left a comment
Just removing it from the review queue.
Thanks Itamar for looking into this.
for more information, see https://pre-commit.ci
I have fixed the bug (thanks for catching it!) and improved testing to ensure this critical code path has better test coverage.
Will fix static checks later today.
…into 12334-tcp-throughput-benchmark
adiroiban
left a comment
Thanks for the changes.
I did a very quick review of the latest changes and they look good.
I didn't have time to look into the details of the hypothesis tests, but I hope all is ok.
I don't know when I will have more time to review this PR... so I think it is ok to have it merged.
Thanks again.
Scope and purpose
Fixes #12334
This includes:
One optimization I worry about a little (decided it was too risky).