add InternalSplit command & tests #64
Conversation
tbg
commented
Sep 16, 2014
- added a method CopyInto to the response cache, req'd for initializing a new range during a split.
- added an InternalSplit command. Making the split happen without a transitory period in which things can go wrong is subtle, so this needs a good review and likely some changes.
- added some basic tests, but the transition should be bulletproofed using more tests eventually.
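As a rough illustration of the first bullet, here is a heavily simplified sketch of what a `CopyInto` on a response cache might look like. The names and the map-based storage are assumptions for illustration only; the real response cache persists replies in the storage engine keyed by range ID.

```go
package main

import "fmt"

// ResponseCache is a hypothetical, simplified stand-in: it maps client
// command IDs to replies. CopyInto duplicates the entries into the new
// range's cache so that replayed client commands are still detected
// after the split.
type ResponseCache struct {
	rangeID int64
	entries map[string]string // client command ID -> serialized reply
}

func NewResponseCache(rangeID int64) *ResponseCache {
	return &ResponseCache{rangeID: rangeID, entries: map[string]string{}}
}

func (rc *ResponseCache) Put(cmdID, reply string) { rc.entries[cmdID] = reply }

// CopyInto copies all cached responses into dest and returns the count.
func (rc *ResponseCache) CopyInto(dest *ResponseCache) int {
	for id, reply := range rc.entries {
		dest.entries[id] = reply
	}
	return len(rc.entries)
}

func main() {
	oldCache := NewResponseCache(1)
	oldCache.Put("cmd-a", "ok")
	oldCache.Put("cmd-b", "ok")
	newCache := NewResponseCache(2)
	n := oldCache.CopyInto(newCache)
	fmt.Println(n, len(newCache.entries)) // both caches now hold the entries
}
```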
storage/range.go
Outdated
Be a little more explicit here, saying the original range is truncated to have StartKey=desc.StartKey and EndKey=args.SplitKey, and the newly created range has StartKey=args.SplitKey and EndKey=desc.EndKey.
LGTM. Coming along!
This will block the entire range from accepting any new command, which is not what we want. In particular, we'd have to wait for respCache.CopyInto to finish, which can take a long time.
We should optimize this part, e.g. by not creating the actual range until the copy is done.
Yep, makes sense to optimize the downtime. I changed two things:
- enforcing args.Key = splitKey and args.EndKey = range.(...).EndKey. That makes sure that anything operating on the left side of the split (i.e. the old range) keeps running until we actually lock.
- changed store.CreateRange to not automatically Start() the range. This is important since we can now create the range, copy the respCache, and only then lock the old range.
In effect, the part of the key space that goes into the "new" range will be locked until the split is completed; the rest will be available until immediately before the new range comes online.
The most time-consuming part in that latter process is likely the transactional update of the Meta1/2 values.
To get that done without any downtime, my idea would be to:
- do everything that sets up the new range, then fire it up (it technically overlaps with the first range, but doesn't do anything, so no conflict [needs careful checking]). Once its Raft is up (fail on timeout):
- transactionally update Meta1/Meta2 (extra subtle if we're splitting one of the meta ranges itself, half of which we are locking...)
- lock and resize the original range, then unlock.
The old range will then proceed, returning errors (wrong range) for all the reads/writes which were waiting to access the key range that has now split off.
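The sequencing above can be sketched in a few lines. Everything here is a hypothetical simplification (the `Range` type, string keys, the mutex standing in for the range's command queue); the point is only the ordering: the old range keeps serving its full span until the final, brief resize under lock.

```go
package main

import (
	"fmt"
	"sync"
)

// Range is a simplified stand-in for a key range replica.
type Range struct {
	mu       sync.Mutex
	StartKey string
	EndKey   string
	started  bool
}

func (r *Range) Start() { r.started = true }

// ContainsKey reports whether the range still owns key; in the real
// system, requests for keys outside the span get a "wrong range" error
// and are retried against the new range.
func (r *Range) ContainsKey(key string) bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	return key >= r.StartKey && key < r.EndKey
}

// split performs the low-downtime sequence: create (but don't start)
// the new range, copy state, and only then lock the old range for the
// final resize.
func split(old *Range, splitKey string) *Range {
	// 1. Create the new range covering [splitKey, old.EndKey). It is not
	//    started yet, so there is no conflict with the old range.
	newRng := &Range{StartKey: splitKey, EndKey: old.EndKey}
	// 2. Copy the response cache etc. here, while the old range still
	//    serves the entire span (elided in this sketch).
	// 3. Only now take the lock and shrink the old range.
	old.mu.Lock()
	old.EndKey = splitKey
	old.mu.Unlock()
	// 4. Start the new range; waiting requests for the right half now get
	//    "wrong range" errors from the old range and retry.
	newRng.Start()
	return newRng
}

func main() {
	old := &Range{StartKey: "a", EndKey: "z", started: true}
	newRng := split(old, "m")
	fmt.Println(old.ContainsKey("c"), old.ContainsKey("p"), newRng.ContainsKey("p"))
}
```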
Split is actually an exceedingly subtle distributed algorithm. We have to be very careful to get it right. What needs to be balanced here is the correct separation between a distributed transaction conducted by the range leader and the identical Raft commands which must be replayed against the state machines of each range replica.
Let's not worry about this being slow yet. Let's worry about getting it absolutely correct. The two are going to be very difficult to solve simultaneously, although I'm not super worried about performance. This is why we're keeping our ranges in the 32-64M range!
Here's my pseudo-code for the actual split algorithm.

On the range leader, determine periodically whether a split is necessary by consulting the estimated size of the range. If it's in excess of zone.RangeMaxBytes, initiate a split.

Split algorithm from the perspective of the range leader (this is not a Raft command, but is done outside of Raft):

```
// First, find the split key using a snapshot.
snap := GetSnapshot()
splitKey := FindSplitKey(snap)
// Next, start a distributed transaction to effect the split
// and also update the addressing records.
tx := BeginTransaction()
// Determine the new range ID.
newRangeID := allocateRangeID()
// Call split range such that it's submitted as a Raft command;
// each replica of the range executes it identically.
reply := db.InternalSplitRange(tx, rangeID, newRangeID, splitKey)
// Update the addressing records for the old range and the new range.
updateRangeAddressingRecords(tx, rangeID, reply.rangeDesc)
updateRangeAddressingRecords(tx, newRangeID, reply.newRangeDesc)
EndTransaction(tx, true /* commit */)
```

Raft command for InternalSplitRange; this is executed identically on each range replica:

```
copyResponseCache()
createNewRange()
newRangeDesc.StartKey = splitKey
newRangeDesc.EndKey = rangeDesc.EndKey
rangeDesc.EndKey = splitKey
mvcc.Put(tx, KeyLocalRangeMetadataPrefix + rangeID, rangeDesc)
mvcc.Put(tx, KeyLocalRangeMetadataPrefix + newRangeID, newRangeDesc)
```
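The replica-side step of the pseudo-code above has one essential property: it must be a pure function of the Raft command's inputs, so that every replica derives identical descriptors. A sketch under assumed, simplified types (the real descriptor lives in the protobuf definitions and carries replica information too):

```go
package main

import "fmt"

// RangeDescriptor is a simplified stand-in holding just the key span.
type RangeDescriptor struct {
	RangeID  int64
	StartKey string
	EndKey   string
}

// applySplit computes the post-split descriptors deterministically from
// the command inputs: the old range is truncated to end at splitKey, and
// the new range covers [splitKey, old EndKey).
func applySplit(desc RangeDescriptor, newRangeID int64, splitKey string) (updated, newDesc RangeDescriptor, err error) {
	if splitKey <= desc.StartKey || splitKey >= desc.EndKey {
		return desc, RangeDescriptor{}, fmt.Errorf("split key %q outside (%q, %q)", splitKey, desc.StartKey, desc.EndKey)
	}
	newDesc = RangeDescriptor{RangeID: newRangeID, StartKey: splitKey, EndKey: desc.EndKey}
	updated = desc
	updated.EndKey = splitKey
	return updated, newDesc, nil
}

func main() {
	desc := RangeDescriptor{RangeID: 1, StartKey: "a", EndKey: "z"}
	left, right, err := applySplit(desc, 2, "m")
	fmt.Println(left.EndKey, right.StartKey, right.EndKey, err)
}
```

In the full algorithm, both resulting descriptors would then be written via MVCC under the transaction, as in the pseudo-code.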
Keep in mind this is just pseudo-code! In any case, the idea here is
that this piece runs as part of the distributed transaction and it
runs on every replica, setting the stage for the split, but not quite
pulling the trigger. The split must be part of the same two-phase
commit as everything else, or we will have problems. Notice that we're
moving the local RangeMetadata value to use MVCC! It currently writes
directly to the engine.
So the question is: how do the KeyLocalRangeMetadataPrefix
"intents" get committed when the transaction commits? They
don't get automatically resolved by the transaction coordinator,
because their keys are local. However, every Raft command
directed at a range "reads" the key at
KeyLocalRangeMetadataPrefix. If it's an intent, we push the txn
as per usual, which either succeeds or fails. Oftentimes it
will succeed with already-committed, in which case we resolve the
intent. If we find out the txn is aborted, then the range will clean
up the provisional new range by calling RangeManager.DropRange.
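The decision logic described above can be condensed into a small table of outcomes. The types below are hypothetical simplifications; only `RangeManager.DropRange` is named in the discussion, everything else is illustrative.

```go
package main

import "fmt"

// TxnStatus is the (simplified) result of pushing an intent's transaction.
type TxnStatus int

const (
	Pending TxnStatus = iota
	Committed
	Aborted
)

// MetadataValue stands in for the value read at the local range
// metadata key; a non-nil IntentTxn means there is an unresolved intent.
type MetadataValue struct {
	IntentTxn *TxnStatus
}

// checkRangeMetadata returns the action a range should take after
// reading its local metadata key and (if needed) pushing the txn.
func checkRangeMetadata(v MetadataValue) string {
	if v.IntentTxn == nil {
		return "proceed" // no intent: no split in flight, or already resolved
	}
	switch *v.IntentTxn {
	case Committed:
		return "resolve intent" // txn already committed: make the split visible
	case Aborted:
		return "drop provisional range" // clean up via RangeManager.DropRange
	default:
		return "wait and retry" // push did not settle the txn; it is still pending
	}
}

func main() {
	c, a := Committed, Aborted
	fmt.Println(checkRangeMetadata(MetadataValue{}))
	fmt.Println(checkRangeMetadata(MetadataValue{IntentTxn: &c}))
	fmt.Println(checkRangeMetadata(MetadataValue{IntentTxn: &a}))
}
```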
I know the full split algorithm is hard to grok -- the interaction between Raft and distributed transactions is tricky. Don't let that part hold you up for this PR. You're still making plenty of progress on the pieces which we need regardless. I would say the next thing would be the code in the range leader to decide when to split. I'm getting really close to having the code in place for doing the distributed transaction...! End of this week hopefully.
sorry for the delay. Rebasing this right now as the merge is fairly nasty. @spencerkimball I'll add TODOs in the appropriate places (to get all the transactional stuff right), merge it and then work on the split trigger. Sound good?
Yep sounds like a plan!
On Mon, Sep 22, 2014 at 3:20 AM, Tobias Schottdorf <notifications@github.com>
rebased & added comments. Merging tmrw after work.
Should I re-review?
No intentional changes. Maybe just a quick glance to make sure it doesn't show that it's already past bedtime here.