add InternalSplit command & tests#64

Merged
tbg merged 1 commit into cockroachdb:master from tbg:split
Sep 24, 2014

Conversation

@tbg
Member

@tbg tbg commented Sep 16, 2014

  • added a method CopyInto to the response cache, req'd for initializing a new range during a split.
  • added an InternalSplit command. Making the split happen without a transitory period in which things can go wrong is subtle so this needs a good review and likely some changes.
  • added some basic tests, but the transition should be bulletproofed using more tests eventually.
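For illustration, the CopyInto behavior can be sketched with a toy response cache. The type and method bodies below are hypothetical stand-ins, not the actual cockroachdb implementation:

```go
package main

import "fmt"

// ResponseCache is a toy stand-in for the real response cache: it maps
// client command IDs to cached responses so replayed commands are not
// re-executed. The real type and signatures may differ.
type ResponseCache struct {
	cache map[string][]byte
}

func NewResponseCache() *ResponseCache {
	return &ResponseCache{cache: map[string][]byte{}}
}

// CopyInto copies every cached response into dst, so that a newly created
// range (the right-hand side of a split) inherits replay protection for
// commands that were executed before the split.
func (rc *ResponseCache) CopyInto(dst *ResponseCache) {
	for cmdID, resp := range rc.cache {
		dst.cache[cmdID] = append([]byte(nil), resp...) // deep copy
	}
}

func main() {
	src := NewResponseCache()
	src.cache["cmd-1"] = []byte("ok")
	dst := NewResponseCache()
	src.CopyInto(dst)
	fmt.Println(string(dst.cache["cmd-1"])) // prints "ok"
}
```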

storage/range.go Outdated
Member

Be a little more explicit here, saying the original range is truncated to have StartKey=desc.StartKey and EndKey=args.SplitKey, and the newly created range has StartKey=args.SplitKey and EndKey=desc.EndKey.
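A minimal sketch of that invariant, using a simplified, hypothetical RangeDescriptor rather than the real type:

```go
package main

import (
	"bytes"
	"fmt"
)

// RangeDescriptor is a simplified stand-in for the real descriptor type.
type RangeDescriptor struct {
	StartKey, EndKey []byte
}

// split truncates the original descriptor to [StartKey, splitKey) and
// returns a new descriptor covering [splitKey, EndKey).
func split(desc *RangeDescriptor, splitKey []byte) (*RangeDescriptor, error) {
	if bytes.Compare(splitKey, desc.StartKey) <= 0 || bytes.Compare(splitKey, desc.EndKey) >= 0 {
		return nil, fmt.Errorf("split key %q outside range", splitKey)
	}
	newDesc := &RangeDescriptor{StartKey: splitKey, EndKey: desc.EndKey}
	desc.EndKey = splitKey
	return newDesc, nil
}

func main() {
	desc := &RangeDescriptor{StartKey: []byte("a"), EndKey: []byte("z")}
	newDesc, _ := split(desc, []byte("m"))
	fmt.Printf("old: [%s, %s)  new: [%s, %s)\n",
		desc.StartKey, desc.EndKey, newDesc.StartKey, newDesc.EndKey)
	// prints: old: [a, m)  new: [m, z)
}
```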

Member Author

done.

@spencerkimball
Member

LGTM. Coming along!

Contributor

This will block the entire range from accepting any new commands, which is not what we want. In particular, we have to wait for respCache.CopyInto to finish, which can take a long time.

We should optimize this part, e.g. by not creating the actual range until the copy is done.

Member Author

Yep, makes sense to optimize the downtime. I changed two things:

  • enforcing args.Key = splitKey and args.EndKey = range.(...).EndKey.
    That will make sure that anything operating on the left side of the split (=the old range) keeps running until we actually lock.
  • changed store.CreateRange to not automatically Start() the range.
    This is important since now we can create the range, copy the respCache and only then lock the old range.

In effect the part of the keys that goes into the "new" range will be locked until the split is completed; the rest will be available until immediately before the new range comes online.
The most time-consuming part in that latter process is likely the transactional update of the Meta1/2 values.
To get that done without any downtime, my idea would be to

  • do everything that sets up the new range. Fire it up (it is technically overlapping with the first range, but doesn't do anything, so no conflict [needs careful checking]).
    Once its Raft is on (fail on timeout):
  • transactional update of Meta1/Meta2. (extra subtle if we're splitting one of the meta ranges itself, which we are locking one half of...)
  • lock and resize the original range, unlock.
    The old range will then proceed, returning errors (wrong range) for all the reads/writes which were waiting to access the key range that has now split off.

Member

Split is actually an exceedingly subtle distributed algorithm. We have to be very careful to get it right. What needs to be balanced here is the correct separation between a distributed transaction conducted by the range leader and the identical Raft commands which must be replayed against the state machines of each range replica.

Let's not worry about this being slow yet. Let's worry about getting it absolutely correct. The two are going to be very difficult to solve simultaneously, although I'm not super worried about performance. This is why we're keeping our ranges in the 32-64M range!

Here's my pseudo code for the actual split algorithm:

On range leader, determine periodically whether a split is
necessary by consulting the estimated size of the range. If
in excess of zone.RangeMaxBytes, initiate split.

Split algorithm from perspective of range leader (this is not
a Raft command, but is done outside of Raft):

// First, find the split key using a snapshot.
snap := GetSnapshot()
splitKey := FindSplitKey(snap)

// Next, start a distributed transaction to effect split
// and also update addressing records.
tx := BeginTransaction()

// Determine new range ID.
newRangeID := allocateRangeID()

// Call split range such that it's submitted as a Raft command;
// each replica of the range executes it identically.
reply := db.InternalSplitRange(tx, rangeID, newRangeID, splitKey)

// Update addressing records for old range and new range.
updateRangeAddressingRecords(tx, rangeID, reply.rangeDesc)
updateRangeAddressingRecords(tx, newRangeID, reply.newRangeDesc)

EndTransaction(tx, true /* Commit */)

Raft command for InternalSplitRange; this is done identically on
each range replica:

copyResponseCache()
createNewRange()
newRangeDesc.StartKey = splitKey
newRangeDesc.EndKey = rangeDesc.EndKey
rangeDesc.EndKey = splitKey
mvcc.Put(tx, KeyLocalRangeMetadataPrefix + rangeID, rangeDesc)
mvcc.Put(tx, KeyLocalRangeMetadataPrefix + newRangeID, newRangeDesc)

Keep in mind this is just pseudo-code! In any case, the idea here is
that this piece runs as part of the distributed transaction and it
runs on every replica, setting the stage for the split, but not quite
pulling the trigger. The split must be part of the same two-phase
commit as everything else, or we will have problems. Notice that we're
moving the local RangeMetadata value to use MVCC! It currently writes
directly to the engine.

So the question is: how do the KeyLocalRangeMetadataPrefix
"intents" get committed when the transaction commits? They
don't get automatically resolved by the transaction coordinator,
because their keys are local. However, every Raft command
directed at a range "reads" the key at
KeyLocalRangeMetadataPrefix. If it's an intent, we push the txn
as per usual, which either succeeds or fails. Often, it
will succeed with already-committed, in which case we resolve the
intent. If we find out the txn is aborted, then the range will clean
up the provisional new range by calling RangeManager.DropRange.
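A minimal sketch of that intent check, with hypothetical simplified types (the actual MVCC values, txn records, and push machinery are elided):

```go
package main

import "fmt"

// Hypothetical simplified types; the real MVCC value and txn records differ.
type TxnStatus int

const (
	Pending TxnStatus = iota
	Committed
	Aborted
)

type Txn struct{ status TxnStatus }

// MetaValue models the value at KeyLocalRangeMetadataPrefix + rangeID.
// intentTxn is non-nil while the value is an uncommitted write intent.
type MetaValue struct {
	intentTxn *Txn
}

// checkRangeMetadata models the read every Raft command performs on the
// local range metadata key: if the value is an intent, push the owning txn;
// on commit, resolve the intent; on abort, report that the provisional new
// range should be dropped (RangeManager.DropRange in the proposal).
func checkRangeMetadata(v *MetaValue) (dropNewRange bool) {
	if v.intentTxn == nil {
		return false // no intent: proceed normally
	}
	switch v.intentTxn.status {
	case Committed:
		v.intentTxn = nil // resolve the intent; the split is final
		return false
	case Aborted:
		return true // txn aborted: clean up the provisional new range
	default:
		// Pending: pushing the txn would happen here (elided).
		return false
	}
}

func main() {
	v := &MetaValue{intentTxn: &Txn{status: Committed}}
	fmt.Println(checkRangeMetadata(v), v.intentTxn == nil) // prints "false true"
}
```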

@spencerkimball
Member

I know the full split algorithm is hard to grok -- the interaction between Raft and distributed transactions is tricky. Don't let that part hold you up for this PR. You're still making plenty of progress on the pieces which we need regardless. I would say next thing would be the code in range leader to decide when to split. I'm getting really close to having the code in place for doing the distributed transaction...! End of this week hopefully.

@tbg
Member Author

tbg commented Sep 22, 2014

sorry for the delay. Rebasing this right now as the merge is fairly nasty. @spencerkimball I'll add TODOs in the appropriate places (to get all the transactional stuff right), merge it and then work on the split trigger. Sound good?

@spencerkimball
Member

Yep sounds like a plan!


@tbg
Member Author

tbg commented Sep 22, 2014

rebased & added comments. Merging tmrw after work.

@spencerkimball
Member

Should I re-review?


@tbg
Member Author

tbg commented Sep 22, 2014

No intentional changes. Maybe just a quick glance to make sure it doesn't show that it's already past bedtime here.
