12334 tcp throughput benchmark and performance improvements#12335

Merged
itamarst merged 15 commits into trunk from 12334-tcp-throughput-benchmark on Oct 30, 2024
Conversation

@itamarst
Contributor

@itamarst itamarst commented Oct 15, 2024

Scope and purpose

Fixes #12334

This includes:

  • A benchmark.
  • A couple of pretty straightforward optimizations.
  • One optimization I worry about a little (decided it was too risky).
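For context, the benchmark in question measures TCP throughput through the reactor. A minimal sketch of the same idea using only stdlib sockets (illustrative, not the PR's actual Twisted benchmark; all names here are assumptions):

```python
import socket
import threading
import time

CHUNK = 16 * 1024          # the review discussion mentions 16 KB send chunks
TOTAL = 4 * 1024 * 1024    # illustrative payload size


def run_sink_server(server, done):
    """Accept one connection and drain everything the client sends."""
    conn, _ = server.accept()
    received = 0
    while True:
        data = conn.recv(65536)
        if not data:
            break
        received += len(data)
    conn.close()
    done["received"] = received


def benchmark_throughput():
    """Time sending TOTAL bytes in CHUNK-sized writes over a local TCP socket."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))   # port 0: let the OS pick, like listenTCP(0, ...)
    server.listen(1)
    done = {}
    t = threading.Thread(target=run_sink_server, args=(server, done))
    t.start()

    client = socket.create_connection(server.getsockname())
    payload = b"x" * CHUNK
    start = time.perf_counter()
    for _ in range(TOTAL // CHUNK):
        client.sendall(payload)
    client.close()
    t.join()
    elapsed = time.perf_counter() - start
    server.close()
    return done["received"], elapsed


received, elapsed = benchmark_throughput()
print(received)
```

The real benchmark runs under pytest-codspeed and exercises Twisted's reactor rather than raw sockets, but the shape — fixed payload, fixed chunk size, timed end-to-end — is the same.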

@itamarst itamarst linked an issue Oct 15, 2024 that may be closed by this pull request
@itamarst itamarst changed the title 12334 tcp throughput benchmark and performance improvement 12334 tcp throughput benchmark and performance improvements Oct 15, 2024
@codspeed-hq

codspeed-hq bot commented Oct 15, 2024

CodSpeed Performance Report

Merging #12335 will improve performance by 2.15%

Comparing 12334-tcp-throughput-benchmark (09d6383) with trunk (c3e5f02)

🎉 Hooray! pytest-codspeed just leveled up to 3.0.0!

A heads-up, this is a breaking change and it might affect your current performance baseline a bit. But here's the exciting part - it's packed with new, cool features and promises improved result stability 🥳!
Curious about what's new? Visit our releases page to delve into all the awesome details about this new version.

Summary

⚡ 1 improvements
✅ 23 untouched benchmarks

🆕 1 new benchmarks

Benchmarks breakdown

| Benchmark | trunk | 12334-tcp-throughput-benchmark | Change |
| --- | --- | --- | --- |
| test_deferred_chained_already_fired | 64.8 µs | 63.4 µs | +2.15% |
| 🆕 test_tcp_throughput | N/A | 5.1 ms | N/A |

@itamarst itamarst marked this pull request as ready for review October 15, 2024 18:56
@itamarst
Contributor Author

itamarst commented Oct 15, 2024

codspeed claims a faster benchmark for test_lineReceived and I am confused. This seems like a lie; none of my changes should affect existing benchmarks.

Comment on lines +67 to +69
server = ServerFactory()
server.protocol = Server
port = reactor.listenTCP(0, server)
Member

I am not sure if we are benchmarking client-side or server-side or both here.


I was thinking that for the client-side benchmarks, we could make an exception and have a single server that is used by all the test runs.

That is, create the server as a global/singleton/fixture outside of the benchmark run.
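One way to express that suggestion — a server created once and shared across all benchmark runs. The sketch below uses a memoized stdlib helper to keep it self-contained; in the pytest-based benchmark suite this would more naturally be a session-scoped fixture. Everything here (names, the sink-server behavior) is illustrative, not the PR's code:

```python
import functools
import socket
import threading


@functools.lru_cache(maxsize=None)
def shared_sink_server():
    """Start a single sink server on first use and reuse it afterwards.

    Illustrative stand-in for a global/singleton/session-scoped fixture,
    as suggested in the review.
    """
    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(5)

    def serve_forever():
        while True:
            try:
                conn, _ = server.accept()
            except OSError:       # socket closed, e.g. at interpreter exit
                return
            threading.Thread(target=drain, args=(conn,), daemon=True).start()

    def drain(conn):
        # Read and discard until the client closes its end.
        while conn.recv(65536):
            pass
        conn.close()

    threading.Thread(target=serve_forever, daemon=True).start()
    return server.getsockname()   # every caller gets the same address


addr1 = shared_sink_server()
addr2 = shared_sink_server()
print(addr1 == addr2)
```

With a pytest fixture the memoization would come from `scope="session"` instead of `lru_cache`; the trade-off either way is that tests stop being independent of each other, which is why the reviewer frames it as "an exception".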

Contributor Author

It's benchmarking both I guess?

Contributor Author

The client sends in 16 KB chunks, the server side likely responds in bigger chunks, and this also benchmarks reading... insofar as the goal isn't to measure the speed of sending, but to measure the speed of Twisted in these sorts of scenarios, it seems OK as is?

Member

It's ok to benchmark both.

I guess that we can refactor this in the future.

I was thinking that we might want to have separate benchmark tests for data processing speed for:

  • client send
  • client receive
  • server send
  • server receive

The current test is good enough to serve as a general regression test.

I was thinking that, for example, if we improve performance for client send while server receive performance gets worse, the tests might still look OK, as the overall test speed is unchanged.

Now... this is unlikely to happen
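The split the reviewer describes — timing each direction separately so a regression in one cannot be masked by an improvement in the other — could look roughly like this (stdlib sketch with illustrative names; the PR keeps a single combined benchmark):

```python
import socket
import threading
import time

CHUNK = 16 * 1024
N_CHUNKS = 64


def start_echo_server():
    """One-connection server: read the whole payload, then send it all back."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    def handle():
        conn, _ = srv.accept()
        data = b""
        while len(data) < CHUNK * N_CHUNKS:
            data += conn.recv(65536)
        conn.sendall(data)
        conn.close()
        srv.close()

    threading.Thread(target=handle, daemon=True).start()
    return srv.getsockname()


def timed_send_then_receive():
    """Time the client-send phase and the client-receive phase separately."""
    client = socket.create_connection(start_echo_server())

    t0 = time.perf_counter()
    for _ in range(N_CHUNKS):
        client.sendall(b"x" * CHUNK)
    send_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    got = 0
    while got < CHUNK * N_CHUNKS:
        got += len(client.recv(65536))
    recv_time = time.perf_counter() - t0

    client.close()
    return send_time, recv_time, got


send_time, recv_time, got = timed_send_then_receive()
print(got)
```

Server send/receive would be the mirror-image measurement taken on the other end of the connection.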

@adiroiban
Member

@itamarst the test_line_receiver test is flaky ... see this PR where it has +10.19% without any changes to the line receiver code.

#1645 (comment)

@itamarst itamarst requested a review from adiroiban October 16, 2024 17:51
@adiroiban adiroiban requested a review from a team October 16, 2024 18:42
Member

@adiroiban adiroiban left a comment

Many thanks for working on this.

I only left a comment... and I am not sure what is going on there, and why self.offset = 0 is only conditionally set.

In the previous code, self.offset = 0 is always called.
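For readers without the diff in front of them, the concern is roughly this pattern (a simplified, hypothetical sketch, not Twisted's actual buffer code): a send buffer tracks how much of its data has already been written via an offset, and that offset must be reset whenever the buffered data is replaced — resetting it only on some branches risks skipping or resending bytes.

```python
class SendBuffer:
    """Simplified model of an offset-tracked write buffer (illustrative only)."""

    def __init__(self):
        self.data = b""
        self.offset = 0   # how many bytes of self.data were already sent

    def write(self, new_data):
        self.data += new_data

    def send_some(self, n):
        """Pretend the OS accepted up to n bytes; advance the offset."""
        sent = min(n, len(self.data) - self.offset)
        self.offset += sent
        return sent

    def compact(self):
        """Drop the already-sent prefix.

        The offset reset here must be unconditional: if it happened only on
        some branch, a stale offset would point into the wrong data.
        """
        self.data = self.data[self.offset:]
        self.offset = 0


buf = SendBuffer()
buf.write(b"abcdef")
buf.send_some(4)
buf.compact()
print(buf.data, buf.offset)
```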


@adiroiban adiroiban requested a review from a team October 17, 2024 11:47
Member

@adiroiban adiroiban left a comment

Just removing it from the review queue

Thanks Itamar for looking into this

@itamarst
Contributor Author

I have fixed the bug (thanks for catching it!) and improved testing to ensure this critical code path has better test coverage.

@itamarst
Contributor Author

Will fix static checks later today.

Member

@adiroiban adiroiban left a comment

Thanks for the changes.

I did a very quick review of the latest changes and they look good.

I didn't have time to look into the details of the hypothesis tests, but I hope all is ok.

I don't know when I will have more time to review this PR... so I think it is ok to have it merged.

Thanks again
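The hypothesis tests mentioned above are not shown in this thread, but in spirit they presumably assert buffer invariants over many generated inputs — something like the following stdlib-only sketch, which enumerates small cases instead of using the hypothesis library (the buffer here is illustrative, not Twisted's):

```python
import itertools


class Buf:
    """Tiny offset-tracked send buffer (illustrative; not Twisted's code)."""

    def __init__(self):
        self.data = b""
        self.offset = 0
        self.sent = b""

    def write(self, b):
        self.data += b

    def pump(self, n):
        """Simulate the OS accepting up to n bytes, then compact the buffer."""
        take = self.data[self.offset:self.offset + n]
        self.sent += take
        self.offset += len(take)
        self.data = self.data[self.offset:]
        self.offset = 0   # must be reset whenever the data is replaced


# Property: whatever the write sizes and pump sizes, the bytes that come out
# are exactly the bytes that went in, in order, with none skipped or repeated.
src = bytes(range(64))
for write_size, pump_size in itertools.product(range(1, 8), range(1, 8)):
    buf = Buf()
    for i in range(0, len(src), write_size):
        buf.write(src[i:i + write_size])
        buf.pump(pump_size)
    while buf.data:
        buf.pump(pump_size)
    assert buf.sent == src, (write_size, pump_size)
print("ok")
```

With hypothesis, the nested loop over sizes would instead be a `@given` decorator drawing the sizes (and the payload itself) from strategies, so the same invariant gets checked over a much wider, randomized input space.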

@itamarst itamarst merged commit e297634 into trunk Oct 30, 2024
@itamarst itamarst deleted the 12334-tcp-throughput-benchmark branch October 30, 2024 14:15


Development

Successfully merging this pull request may close these issues.

TCP throughput benchmark

5 participants