wire: only borrow/return binaryFreeList buffers at the message level (#1426)
cfromknecht wants to merge 52 commits into btcsuite:master
Conversation
(force-pushed 9442c96 to dacbb65)
@jcvernaleo (as per #1530)
By the way @cfromknecht this is awesome!
FWIW I rebased this PR (locally) over #1684 and reran the … This is a comparison of CalcSigHash before and after applying the wire optimizations. Using an 80% transaction serialization speedup to ballpark, this indicates roughly 2/3 of …
(force-pushed dacbb65 to 86ba486)
Pull Request Test Coverage Report for Build 541264840 (Coveralls)
@cfromknecht Could you please rebase this PR, since #1769 landed recently?
```diff
 // peers. Thus, the peak usage of the free list is 12,500 * 512 =
 // 6,400,000 bytes.
-freeListMaxItems = 12500
+freeListMaxItems = 125
```
Does the comment need to be updated to explain the new context for this value?
Rebased with #2073
This PR optimizes the `wire` package's serialization of small buffers by minimizing the number of borrow/return round trips during message serialization. Currently the `wire` package uses a `binaryFreeList` from which 8-byte buffers are borrowed and returned for the purpose of serializing small integers and varints.

Problem
To understand the problem, consider calling
`WriteVarInt` on a number greater than `0xfc` (which requires writing the discriminant followed by a 2-, 4-, or 8-byte value). For instance, writing 20,000 will invoke `PutUint8` and then `PutUint16`. Expanding this out to examine the message passing, we see:
Each `<-` requires a channel `select`, which more or less carries the performance cost of a mutex. This cost, in addition to the need to wake up other goroutines and switch execution, imparts a significant performance penalty. In the context of block serialization, several hundred thousand of these operations may be performed.

Solution
In our example above, we can improve this by using only two `<-` operations, one to borrow and one to return. As expected, cutting the number of channel sends in half also cuts the latency in half, which can be seen in the benchmarks below for larger
`VarInt`s. The remainder of this PR propagates this pattern all the way up to the top-level messages in the `wire` package, such that deserializing a message incurs only one borrow and one return. Any subroutines are made to conditionally borrow from the
`binarySerializer` if the invoker has not provided them with a buffer, and to conditionally return only if they were indeed required to borrow. A good example of how these channel sends/receives can add up is `MsgTx` serialization, which is now upwards of 80% faster as a result of these optimizations:
Preliminary Benchmarks
Notes
I'm still in the process of going through and adding benchmarks to top-level messages in order to gauge the overall performance benefit; expect more to be added at a later point.
There are a few remaining messages which have not yet been optimized, e.g.
`MsgAlert`, `MsgVersion`, etc. I plan to add those as well, but decided to start with the ones that were more performance critical.