
Commit 1ff3556

storage: introduce concurrent Raft proposal buffer
This change introduces a new multi-producer, single-consumer buffer for Raft proposal ingestion into the Raft replication pipeline. This buffer becomes the new coordination point between "above Raft" goroutines, which have just finished evaluation and want to replicate a command, and a Replica's "below Raft" goroutine, which collects these commands and begins the replication process. The structure improves upon the current approach to this interaction in three important ways.

The first is that the structure supports concurrent insertion of proposals by multiple proposer goroutines. This significantly increases the amount of concurrency for non-conflicting writes within a single Range. The proposal buffer does this without exclusive locking by using atomics to index into an array. This is complicated by the strong desire for proposals to be proposed in the same order in which their MaxLeaseIndexes are assigned. The buffer addresses this by selecting a slot in its array and selecting a MaxLeaseIndex for a proposal in a single atomic operation (a sketch of this idea follows this message).

The second improvement is that the new structure allows RaftCommand marshaling to be lifted entirely out of any critical section. Previously, the allocation, marshaling, and encoding of a RaftCommand were performed under the exclusive Replica lock. Before 91abab1, there was even a second allocation and a copy under this lock. This locking interacted poorly with both "above Raft" processing (which repeatedly acquires a shared lock) and "below Raft" processing (which occasionally acquires an exclusive lock). The new concurrent Raft proposal buffer is able to push this allocation and marshaling completely outside of the exclusive or shared Replica lock. It does so, even though the MaxLeaseIndex of the RaftCommand has not yet been assigned, by splitting marshaling into two steps and using a new "footer" proto. The first step is to allocate and marshal the majority of the encoded Raft command outside of any lock. The second step is to marshal just the small "footer" proto with the MaxLeaseIndex field into the same byte slice, which has been pre-sized with a small amount of extra capacity, after the MaxLeaseIndex has been selected. This approach lifts a major expense out of the Replica mutex.

The final improvement is to increase the amount of batching performed between Raft proposals. This reduces the number of messages required to coordinate their replication throughout the entire replication pipeline. To start, batching allows multiple Raft entries to be sent in the same MsgApp from the leader to followers. Doing so then results in only a single MsgAppResp being sent for all of these entries back to the leader, instead of one per entry. Finally, a single MsgAppResp results in only a single empty MsgApp with the new commit index being sent from the leader to followers. All of this is made possible by `Step`ping the Raft `RawNode` with a `MsgProp` containing multiple entries instead of using the `Propose` API directly, which internally `Step`s the Raft `RawNode` with a `MsgProp` containing only one entry (also sketched below). Doing so demonstrated a very large improvement in `rafttoy` and is showing a similar win here.
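To make the slot-selection idea concrete, here is a minimal, self-contained sketch. It is not the actual propBuf implementation: the 32/32-bit packing, the field names, and the fixed-size array are assumptions made purely for illustration. The point it demonstrates is that one atomic add can reserve both an array slot and a MaxLeaseIndex, so the two can never be assigned out of order relative to each other.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// propBufSketch is a hypothetical, simplified stand-in for the real propBuf.
// Its counter packs two values: the lease index that had been assigned as of
// the last flush (high 32 bits) and the number of proposals inserted since
// then (low 32 bits).
type propBufSketch struct {
	cnt uint64
	arr [256]interface{} // slots holding pending proposals
}

// insert reserves an array slot and a MaxLeaseIndex in a single atomic
// operation, so proposals can be proposed in the same order in which their
// MaxLeaseIndexes were assigned.
func (b *propBufSketch) insert(p interface{}) (slot int, maxLeaseIndex uint64, ok bool) {
	res := atomic.AddUint64(&b.cnt, 1)
	base := res >> 32        // lease index base captured at the last flush
	off := res & (1<<32 - 1) // 1-based offset of this insertion
	if int(off) > len(b.arr) {
		return 0, 0, false // buffer full; the caller must flush and retry
	}
	slot = int(off) - 1
	maxLeaseIndex = base + off
	b.arr[slot] = p
	return slot, maxLeaseIndex, true
}

func main() {
	var b propBufSketch
	// Pretend the last assigned lease index was 41 when the buffer was last
	// flushed.
	atomic.StoreUint64(&b.cnt, 41<<32)
	slot, mli, _ := b.insert("put k=v")
	fmt.Printf("slot=%d maxLeaseIndex=%d\n", slot, mli) // slot=0 maxLeaseIndex=42
}
```

In this sketch, the consumer goroutine would reset the counter to `lastAssignedLeaseIndex << 32` when it flushes the buffer, so subsequent insertions continue the sequence.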
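The batching point can be sketched similarly: rather than calling `Propose` once per command, the flusher hands all buffered entries to Raft in one `MsgProp`. This is only an illustrative fragment, assuming the etcd `raft`/`raftpb` packages and a caller that already holds whatever locks the RawNode requires.

```go
package storage

import (
	"go.etcd.io/etcd/raft"
	"go.etcd.io/etcd/raft/raftpb"
)

// proposeBatch steps the RawNode with a single MsgProp carrying every buffered
// command as its own entry. rn.Propose, by contrast, builds a MsgProp with
// exactly one entry per call, which gives Raft far fewer opportunities to
// batch entries into a single MsgApp/MsgAppResp round trip.
func proposeBatch(rn *raft.RawNode, encodedCommands [][]byte) error {
	ents := make([]raftpb.Entry, len(encodedCommands))
	for i, data := range encodedCommands {
		ents[i] = raftpb.Entry{Data: data}
	}
	return rn.Step(raftpb.Message{
		Type:    raftpb.MsgProp,
		Entries: ents,
	})
}
```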
### Benchmark Results

```
name                             old ops/sec  new ops/sec  delta
kv95/seq=false/cores=16/nodes=3   67.5k ± 1%   67.2k ± 1%      ~     (p=0.421 n=5+5)
kv95/seq=false/cores=36/nodes=3    144k ± 1%    143k ± 1%      ~     (p=0.320 n=5+5)
kv0/seq=false/cores=16/nodes=3    41.2k ± 2%   42.3k ± 3%   +2.49%   (p=0.000 n=10+10)
kv0/seq=false/cores=36/nodes=3    66.8k ± 2%   69.1k ± 2%   +3.35%   (p=0.000 n=10+10)
kv95/seq=true/cores=16/nodes=3    59.3k ± 1%   62.1k ± 2%   +4.83%   (p=0.008 n=5+5)
kv95/seq=true/cores=36/nodes=3     100k ± 1%    125k ± 1%  +24.37%   (p=0.008 n=5+5)
kv0/seq=true/cores=16/nodes=3     16.1k ± 2%   21.8k ± 4%  +35.21%   (p=0.000 n=9+10)
kv0/seq=true/cores=36/nodes=3     18.4k ± 3%   24.8k ± 2%  +35.29%   (p=0.000 n=10+10)

name                             old p50(ms)  new p50(ms)  delta
kv95/seq=false/cores=16/nodes=3    0.70 ± 0%    0.70 ± 0%      ~     (all equal)
kv95/seq=false/cores=36/nodes=3    0.70 ± 0%    0.70 ± 0%      ~     (all equal)
kv0/seq=false/cores=16/nodes=3     2.86 ± 2%    2.80 ± 0%   -2.10%   (p=0.011 n=10+10)
kv0/seq=false/cores=36/nodes=3     3.87 ± 2%    3.80 ± 0%   -1.81%   (p=0.003 n=10+10)
kv95/seq=true/cores=16/nodes=3     0.70 ± 0%    0.70 ± 0%      ~     (all equal)
kv95/seq=true/cores=36/nodes=3     0.70 ± 0%    0.70 ± 0%      ~     (all equal)
kv0/seq=true/cores=16/nodes=3      7.97 ± 2%    5.86 ± 2%  -26.44%   (p=0.000 n=9+10)
kv0/seq=true/cores=36/nodes=3      15.7 ± 0%    11.7 ± 4%  -25.61%   (p=0.000 n=8+10)

name                             old p99(ms)  new p99(ms)  delta
kv95/seq=false/cores=16/nodes=3    2.90 ± 0%    2.94 ± 2%      ~     (p=0.444 n=5+5)
kv95/seq=false/cores=36/nodes=3    3.90 ± 0%    3.98 ± 3%      ~     (p=0.444 n=5+5)
kv0/seq=false/cores=16/nodes=3     8.90 ± 0%    8.40 ± 0%   -5.62%   (p=0.000 n=10+8)
kv0/seq=false/cores=36/nodes=3     11.0 ± 0%    10.4 ± 3%   -5.91%   (p=0.000 n=10+10)
kv95/seq=true/cores=16/nodes=3     4.50 ± 0%    3.18 ± 4%  -29.33%   (p=0.000 n=4+5)
kv95/seq=true/cores=36/nodes=3     11.2 ± 3%     4.7 ± 0%  -58.04%   (p=0.008 n=5+5)
kv0/seq=true/cores=16/nodes=3      11.5 ± 0%     9.4 ± 0%  -18.26%   (p=0.000 n=9+9)
kv0/seq=true/cores=36/nodes=3      19.9 ± 0%    15.3 ± 2%  -22.86%   (p=0.000 n=9+10)
```

As expected, the majority of the improvement from this change comes when writing to a single Range (i.e. a write hotspot). In those cases, this change (and those in the following two commits) improves performance by up to **35%**.

NOTE: the Raft proposal buffer hooks into the rest of the storage package through a fairly small and well-defined interface. The primary reason for doing so was to make the structure easy to move to a `storage/replication` package if/when we move in that direction.

Release note (performance improvement): Introduced new concurrent Raft proposal buffer, which increases the degree of write concurrency supported on a single Range.
1 parent 57a1373 commit 1ff3556

14 files changed

Lines changed: 1448 additions & 564 deletions

pkg/storage/helpers_test.go

Lines changed: 1 addition & 1 deletion
```diff
@@ -256,7 +256,7 @@ func (r *Replica) GetLastIndex() (uint64, error) {
 func (r *Replica) LastAssignedLeaseIndex() uint64 {
 	r.mu.RLock()
 	defer r.mu.RUnlock()
-	return r.mu.lastAssignedLeaseIndex
+	return r.mu.proposalBuf.LastAssignedLeaseIndexRLocked()
 }

 // SetQuotaPool allows the caller to set a replica's quota pool initialized to
```

pkg/storage/replica.go

Lines changed: 6 additions & 4 deletions
```diff
@@ -222,8 +222,6 @@ type Replica struct {
 		mergeComplete chan struct{}
 		// The state of the Raft state machine.
 		state storagepb.ReplicaState
-		// Counter used for assigning lease indexes for proposals.
-		lastAssignedLeaseIndex uint64
 		// Last index/term persisted to the raft log (not necessarily
 		// committed). Note that lastTerm may be 0 (and thus invalid) even when
 		// lastIndex is known, in which case the term will have to be retrieved
@@ -282,6 +280,12 @@ type Replica struct {
 		minLeaseProposedTS hlc.Timestamp
 		// A pointer to the zone config for this replica.
 		zone *config.ZoneConfig
+		// proposalBuf buffers Raft commands as they are passed to the Raft
+		// replication subsystem. The buffer is populated by requests after
+		// evaluation and is consumed by the Raft processing thread. Once
+		// consumed, commands are proposed through Raft and moved to the
+		// proposals map.
+		proposalBuf propBuf
 		// proposals stores the Raft in-flight commands which originated at
 		// this Replica, i.e. all commands for which propose has been called,
 		// but which have not yet applied.
@@ -381,8 +385,6 @@ type Replica struct {
 		// newly recreated replica will have a complete range descriptor.
 		lastToReplica, lastFromReplica roachpb.ReplicaDescriptor

-		// submitProposalFn can be set to mock out the propose operation.
-		submitProposalFn func(*ProposalData) error
 		// Computed checksum at a snapshot UUID.
 		checksums map[uuid.UUID]ReplicaChecksum
```

pkg/storage/replica_closedts.go

Lines changed: 3 additions & 3 deletions
```diff
@@ -20,13 +20,13 @@ import (
 // closed timestamp tracker. This is called to emit an update about this
 // replica in the absence of write activity.
 func (r *Replica) EmitMLAI() {
-	r.mu.Lock()
-	lai := r.mu.lastAssignedLeaseIndex
+	r.mu.RLock()
+	lai := r.mu.proposalBuf.LastAssignedLeaseIndexRLocked()
 	if r.mu.state.LeaseAppliedIndex > lai {
 		lai = r.mu.state.LeaseAppliedIndex
 	}
 	epoch := r.mu.state.Lease.Epoch
-	r.mu.Unlock()
+	r.mu.RUnlock()

 	ctx := r.AnnotateCtx(context.Background())
 	_, untrack := r.store.cfg.ClosedTimestamp.Tracker.Track(ctx)
```

pkg/storage/replica_destroy.go

Lines changed: 1 addition & 0 deletions
```diff
@@ -153,6 +153,7 @@ func (r *Replica) destroyRaftMuLocked(ctx context.Context, nextReplicaID roachpb

 func (r *Replica) cancelPendingCommandsLocked() {
 	r.mu.AssertHeld()
+	r.mu.proposalBuf.FlushLockedWithoutProposing()
 	for _, p := range r.mu.proposals {
 		r.cleanupFailedProposalLocked(p)
 		// NB: each proposal needs its own version of the error (i.e. don't try to
```

pkg/storage/replica_init.go

Lines changed: 1 addition & 0 deletions
```diff
@@ -104,6 +104,7 @@ func (r *Replica) initRaftMuLockedReplicaMuLocked(
 	// reloading the raft state below, it isn't safe to use the existing raft
 	// group.
 	r.mu.internalRaftGroup = nil
+	r.mu.proposalBuf.Init((*replicaProposer)(r))

 	var err error
 	if r.mu.state, err = r.mu.stateLoader.Load(ctx, r.store.Engine(), desc); err != nil {
```

pkg/storage/replica_proposal.go

Lines changed: 14 additions & 0 deletions
```diff
@@ -71,6 +71,20 @@ type ProposalData struct {
 	// reproposals its MaxLeaseIndex field is mutated.
 	command *storagepb.RaftCommand

+	// encodedCommand is the encoded Raft command, with an optional prefix
+	// containing the command ID.
+	encodedCommand []byte
+
+	// quotaSize is the encoded size of command that was used to acquire
+	// proposal quota. command.Size can change slightly as the object is
+	// mutated, so it's safer to record the exact value used here.
+	// TODO(nvanbenschoten): we're already tracking this here, so why do
+	// we need the separate commandSizes map? Let's get rid of it.
+	quotaSize int
+
+	// tmpFooter is used to avoid an allocation.
+	tmpFooter storagepb.RaftCommandFooter
+
 	// endCmds.finish is called after command execution to update the
 	// timestamp cache & release latches.
 	endCmds *endCmds
```