Commit 1ff3556
storage: introduce concurrent Raft proposal buffer
This change introduces a new multi-producer, single-consumer buffer
for Raft proposal ingestion into the Raft replication pipeline. This
buffer becomes the new coordination point between "above Raft" goroutines,
which have just finished evaluation and want to replicate a command, and
a Replica's "below Raft" goroutine, which collects these commands and
begins the replication process.
The structure improves upon the current approach to this interaction in
three important ways. The first is that the structure supports concurrent
insertion of proposals by multiple proposer goroutines. This significantly
increases the amount of concurrency for non-conflicting writes within a
single Range. The proposal buffer does this without exclusive locking by using
atomics to index into an array. This is complicated by the strong desire for
proposals to be proposed in the same order in which their MaxLeaseIndexes are
assigned. The buffer addresses this by selecting both a slot in its array and a
MaxLeaseIndex for a proposal in a single atomic operation.
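The single-atomic-operation idea can be sketched as follows. This is a minimal illustration, not the actual CockroachDB implementation: the names `propBuf`, `insert`, and the fixed-size array are all simplifications. The key point is that one atomic counter hands out both the array slot and the MaxLeaseIndex, so the two orders can never diverge.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

const bufSize = 8

type proposal struct {
	maxLeaseIndex uint64
}

// propBuf is a sketch of a multi-producer buffer. cnt reserves slots;
// base is the MaxLeaseIndex preceding slot 0 of the current array.
type propBuf struct {
	cnt   uint64
	base  uint64
	slots [bufSize]*proposal
}

// insert reserves a slot and assigns a MaxLeaseIndex in one atomic add,
// so slot order and MaxLeaseIndex order are identical by construction.
func (b *propBuf) insert(p *proposal) bool {
	n := atomic.AddUint64(&b.cnt, 1) - 1 // our reserved offset
	if n >= bufSize {
		return false // buffer full; real code would flush and retry
	}
	p.maxLeaseIndex = b.base + n + 1
	b.slots[n] = p
	return true
}

func main() {
	b := &propBuf{base: 100}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			b.insert(&proposal{})
		}()
	}
	wg.Wait()
	// Invariant: slot i always holds the proposal with MaxLeaseIndex base+i+1,
	// regardless of which goroutine landed in which slot.
	ok := true
	for i := 0; i < 4; i++ {
		if b.slots[i] == nil || b.slots[i].maxLeaseIndex != b.base+uint64(i)+1 {
			ok = false
		}
	}
	fmt.Println(ok) // true
}
```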
The second improvement is that the new structure allows RaftCommand marshaling
to be lifted entirely out of any critical section. Previously, the allocation,
marshaling, and encoding of a RaftCommand were all performed under the
exclusive Replica lock. Before 91abab1, there was even a second allocation and
a copy under this lock. This locking interacted poorly with both "above Raft"
processing (which repeatedly acquires a shared lock) and "below Raft" processing
(which occasionally acquires an exclusive lock). The new concurrent Raft proposal
buffer structure is able to push this allocation and marshaling completely outside
of the exclusive or shared Replica lock. Even though the RaftCommand's
MaxLeaseIndex has not yet been assigned at that point, the buffer manages this
by splitting marshaling into two steps and using a new "footer" proto. The
first step is to allocate and
marshal the majority of the encoded Raft command outside of any lock. The second
step is to marshal just the small "footer" proto with the MaxLeaseIndex field into
the same byte slice, which has been pre-sized with a small amount of extra capacity,
after the MaxLeaseIndex has been selected. This approach lifts a major expense out
of the Replica mutex.
The final improvement is to increase the amount of batching performed between
Raft proposals. This reduces the number of messages required to coordinate their
replication throughout the entire replication pipeline. To start, batching allows
multiple Raft entries to be sent in the same MsgApp from the leader to followers.
Doing so then results in only a single MsgAppResp being sent for all of these entries
back to the leader, instead of one per entry. Finally, a single MsgAppResp results
in only a single empty MsgApp with the new commit index being sent from the leader
to followers. All of this is made possible by `Step`ping the Raft `RawNode` with a
`MsgProp` containing multiple entries instead of using the `Propose` API directly,
which internally `Step`s the Raft `RawNode` with a `MsgProp` containing only one
entry. Doing so demonstrated a very large improvement in `rafttoy` and is showing
a similar win here. The proposal buffer provides a clean place to perform this
batching, so this is a natural time to introduce it.
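The flush path can be sketched as below. The `stepper` interface, `message` type, and `flush` function are illustrative stand-ins for raft's `RawNode.Step` and `raftpb.Message`, not the real API. The point is the shape: draining all buffered proposals into one MsgProp means one Step call, and downstream one MsgApp/MsgAppResp exchange, instead of one per entry.

```go
package main

import "fmt"

type entry struct{ data []byte }

// message stands in for raftpb.Message; only the fields the sketch needs.
type message struct {
	typ     string
	entries []entry
}

// stepper stands in for raft.RawNode's Step method.
type stepper interface {
	Step(m message) error
}

// flush drains all buffered proposals into a single MsgProp, rather than
// calling the Propose API once per command (which Steps with one entry each).
func flush(buffered []entry, s stepper) error {
	if len(buffered) == 0 {
		return nil
	}
	return s.Step(message{typ: "MsgProp", entries: buffered})
}

// countingNode records how many Step calls and entries it saw.
type countingNode struct{ steps, entries int }

func (n *countingNode) Step(m message) error {
	n.steps++
	n.entries += len(m.entries)
	return nil
}

func main() {
	n := &countingNode{}
	flush([]entry{{[]byte("a")}, {[]byte("b")}, {[]byte("c")}}, n)
	fmt.Println(n.steps, n.entries) // 1 3: three entries, one Step
}
```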
### Benchmark Results
```
name old ops/sec new ops/sec delta
kv95/seq=false/cores=16/nodes=3 67.5k ± 1% 67.2k ± 1% ~ (p=0.421 n=5+5)
kv95/seq=false/cores=36/nodes=3 144k ± 1% 143k ± 1% ~ (p=0.320 n=5+5)
kv0/seq=false/cores=16/nodes=3 41.2k ± 2% 42.3k ± 3% +2.49% (p=0.000 n=10+10)
kv0/seq=false/cores=36/nodes=3 66.8k ± 2% 69.1k ± 2% +3.35% (p=0.000 n=10+10)
kv95/seq=true/cores=16/nodes=3 59.3k ± 1% 62.1k ± 2% +4.83% (p=0.008 n=5+5)
kv95/seq=true/cores=36/nodes=3 100k ± 1% 125k ± 1% +24.37% (p=0.008 n=5+5)
kv0/seq=true/cores=16/nodes=3 16.1k ± 2% 21.8k ± 4% +35.21% (p=0.000 n=9+10)
kv0/seq=true/cores=36/nodes=3 18.4k ± 3% 24.8k ± 2% +35.29% (p=0.000 n=10+10)
name old p50(ms) new p50(ms) delta
kv95/seq=false/cores=16/nodes=3 0.70 ± 0% 0.70 ± 0% ~ (all equal)
kv95/seq=false/cores=36/nodes=3 0.70 ± 0% 0.70 ± 0% ~ (all equal)
kv0/seq=false/cores=16/nodes=3 2.86 ± 2% 2.80 ± 0% -2.10% (p=0.011 n=10+10)
kv0/seq=false/cores=36/nodes=3 3.87 ± 2% 3.80 ± 0% -1.81% (p=0.003 n=10+10)
kv95/seq=true/cores=16/nodes=3 0.70 ± 0% 0.70 ± 0% ~ (all equal)
kv95/seq=true/cores=36/nodes=3 0.70 ± 0% 0.70 ± 0% ~ (all equal)
kv0/seq=true/cores=16/nodes=3 7.97 ± 2% 5.86 ± 2% -26.44% (p=0.000 n=9+10)
kv0/seq=true/cores=36/nodes=3 15.7 ± 0% 11.7 ± 4% -25.61% (p=0.000 n=8+10)
name old p99(ms) new p99(ms) delta
kv95/seq=false/cores=16/nodes=3 2.90 ± 0% 2.94 ± 2% ~ (p=0.444 n=5+5)
kv95/seq=false/cores=36/nodes=3 3.90 ± 0% 3.98 ± 3% ~ (p=0.444 n=5+5)
kv0/seq=false/cores=16/nodes=3 8.90 ± 0% 8.40 ± 0% -5.62% (p=0.000 n=10+8)
kv0/seq=false/cores=36/nodes=3 11.0 ± 0% 10.4 ± 3% -5.91% (p=0.000 n=10+10)
kv95/seq=true/cores=16/nodes=3 4.50 ± 0% 3.18 ± 4% -29.33% (p=0.000 n=4+5)
kv95/seq=true/cores=36/nodes=3 11.2 ± 3% 4.7 ± 0% -58.04% (p=0.008 n=5+5)
kv0/seq=true/cores=16/nodes=3 11.5 ± 0% 9.4 ± 0% -18.26% (p=0.000 n=9+9)
kv0/seq=true/cores=36/nodes=3 19.9 ± 0% 15.3 ± 2% -22.86% (p=0.000 n=9+10)
```
As expected, the majority of the improvement from this change comes when writing
to a single Range (i.e. a write hotspot). In those cases, this change (and those
in the following two commits) improves performance by up to **35%**.
NOTE: the Raft proposal buffer hooks into the rest of the Storage package through
a fairly small and well-defined interface. The primary reason for doing so was
to make the structure easy to move to a `storage/replication` package if/when
we move in that direction.
Release note (performance improvement): Introduced new concurrent Raft
proposal buffer, which increases the degree of write concurrency supported
on a single Range.

1 parent 57a1373, commit 1ff3556
14 files changed
Lines changed: 1448 additions & 564 deletions