
Enable Fully Concurrent Snapshot Operations #56911

Merged
original-brownbear merged 147 commits into elastic:master from
original-brownbear:allow-multiple-snapshots
Jul 10, 2020
Conversation

@original-brownbear
Contributor

@original-brownbear original-brownbear commented May 18, 2020

Enables fully concurrent snapshot operations:

  • Snapshot create and delete operations can be started in any order.
  • Delete operations wait for snapshot finalization to finish, are batched as much as possible to improve efficiency, and, once enqueued in the cluster state, prevent new snapshots from starting on data nodes until executed.
    • We could be even more concurrent here in a follow-up by interleaving deletes and snapshots on a per-shard level. I decided not to do this for now since it did not seem worth the added complexity yet. Due to batching and deduplicating of deletes, the pain of having a delete stuck behind a long-running snapshot seemed manageable: dropped client connections and the resulting retries don't cause issues thanks to deduplication of delete jobs, and batching allows enqueuing more and more deletes even if a snapshot blocks for a long time; they will all be executed in essentially constant time, because bulk snapshot deletion makes deleting multiple snapshots about as fast as deleting a single one.
  • Snapshot creation is completely concurrent across shards, but per-shard snapshots are linearized for each repository, as are snapshot finalizations.
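The delete batching and deduplication described above could be sketched roughly as follows. This is a hypothetical illustration with invented names, not the actual `SnapshotsService` code:

```java
import java.util.*;

// Hypothetical sketch (not the actual Elasticsearch code) of delete batching
// and deduplication: deletes enqueued for the same repository collapse into
// one pending batch, so many queued deletes cost roughly a single bulk
// deletion when finally executed.
class DeleteBatcher {

    // repository name -> snapshot names awaiting deletion, in enqueue order
    private final Map<String, Set<String>> pending = new HashMap<>();

    // Enqueue a delete; duplicates (e.g. from client retries) merge into the batch.
    public synchronized void enqueue(String repository, Collection<String> snapshots) {
        pending.computeIfAbsent(repository, r -> new LinkedHashSet<>()).addAll(snapshots);
    }

    // Drain the full batch for a repository so it can run as one bulk delete.
    public synchronized Set<String> drain(String repository) {
        Set<String> batch = pending.remove(repository);
        return batch == null ? Set.of() : batch;
    }
}
```

Because retries merge into the existing batch, a client that re-sends a delete after a dropped connection adds no extra repository work.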

See updated JavaDoc and added test cases for more details and illustration on the functionality.

Some notes:

The queuing of snapshot finalizations and deletes and the related locking/synchronization is a little awkward in this version but can be much simplified with some refactoring. The problem is that snapshot finalizations resolve their listeners on the `SNAPSHOT` pool while deletes resolve the listener on the master update thread. With some refactoring, both of these could be moved to the master update thread, effectively removing the need for any synchronization around the `SnapshotService` state. I didn't do this refactoring here because it's a fairly large change and not necessary for the functionality, but I plan to do so in a follow-up.

This change allows for completely removing any trickery around synchronizing deletes and snapshots from SLM and 100% does away with SLM errors from collisions between deletes and snapshots.

Snapshotting a single index in parallel with a long-running full backup will execute without having to wait for the full backup, as required by the ILM/SLM use case of moving indices to the "snapshot tier". Finalizations are linearized, but ordered according to which snapshot saw all of its shards complete first.
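The per-repository linearization of finalizations could be sketched like this (hypothetical names, not the real implementation):

```java
import java.util.*;

// Hypothetical sketch of per-repository finalization linearization: only the
// head of each repository's queue runs; completing it starts the next, so
// finalizations execute in the order snapshots saw all their shards complete.
class FinalizationQueue {

    private final Map<String, Deque<String>> queued = new HashMap<>();

    // Returns true if the caller should run this finalization now, false if it
    // was queued behind a finalization that is already running.
    public synchronized boolean tryStart(String repository, String snapshot) {
        Deque<String> q = queued.computeIfAbsent(repository, r -> new ArrayDeque<>());
        q.addLast(snapshot);
        return q.size() == 1;
    }

    // Called when a finalization completes; returns the next snapshot to
    // finalize for this repository, or null if nothing is queued.
    public synchronized String finish(String repository) {
        Deque<String> q = queued.get(repository);
        q.pollFirst();
        return q.peekFirst();
    }
}
```

A FIFO per repository gives exactly the "linearized, but ordered by completion" behavior described: a small snapshot that finishes first is finalized first, even if a full backup started earlier.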

@original-brownbear original-brownbear added >enhancement WIP :Distributed/Snapshot/Restore Anything directly related to the `_snapshot/*` APIs labels May 18, 2020
@elasticmachine
Collaborator

Pinging @elastic/es-distributed (:Distributed/Snapshot/Restore)

@elasticmachine elasticmachine added the Team:Distributed Meta label for distributed team. label May 18, 2020
Implements fully concurrent snapshot operations. See documentation changes
to snapshot package level JavaDoc for details.
* in the thread pool (for example, tests that use the mock repository that
* block on master).
*/
public class MinThreadsSnapshotRestoreIT extends AbstractSnapshotIntegTestCase {
Contributor Author


All scenarios covered by this test become obsolete. The actual premise of this test (checking that we don't deadlock from blocked threads) is covered by the fact that SnapshotResiliencyTests work for the most part anyway.

@original-brownbear
Contributor Author

I worked through this list again:

add a check to snapshot finalization that makes sure that the index-N blob that a finalization is based on actually exists to avoid a class of possible concurrent repo access bugs

Done in #59141
IMO this provides about as much safety around concurrent repository access (from multiple mounts or otherwise) as we will get. On consistent blob stores I think it fixes all situations. On S3 it makes it incredibly unlikely that concurrent mounts would cause trouble as well. Keep in mind, even if you do run concurrent writes from concurrent repo mounts to the same location, the repo will get marked as corrupted in one of the two mounts as soon as an index-N blob is missing unexpectedly. Since the logic around writing new index-N blobs involves a CS update before and after writing an index-N blob, these writes are spaced out a little even across multiple repositories and approximately serialized via the cluster state update thread, making it even less likely to run into trouble on S3.
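The index-N existence check from #59141 boils down to something like the following simplified sketch (invented names, not the real repository code):

```java
import java.util.Set;

// Hypothetical sketch of the safety check described above: before finalizing,
// verify that the index-N blob this finalization is based on still exists at
// the repository root. If it is missing, another mount has moved the
// repository's generation forward, and the repository must be treated as
// corrupted for this cluster rather than being blindly written to.
class RepositoryGenerationCheck {

    public static void ensureExpectedGenerationExists(Set<String> rootBlobs, long expectedGeneration) {
        String expectedBlob = "index-" + expectedGeneration;
        if (rootBlobs.contains(expectedBlob) == false) {
            throw new IllegalStateException("concurrent repository modification: expected blob ["
                + expectedBlob + "] is missing");
        }
    }
}
```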

add a limit to the number of concurrently snapshotting shards in place of the limit on concurrent repository operations to limit memory use. Limiting the concurrent operations could be trouble for ILM that might start running snapshots for a large number of indices expecting mechanics similar to those of moving shards. (will do in this PR)

I thought about this some more. I don't think this is necessary. I think we can simply deduplicate ShardId and Index instances when deserializing SnapshotsInProgress#Entry instances to lower memory use for SnapshotsInProgress, but beyond that I'm not sure how much of a concern this really is. Maybe we should just put an upper bound of 1000 concurrent operations by default (as that is pretty much the upper bound for a reasonable number of snapshots per repo that we've been communicating in the past) to limit things in case of broken SLM configurations, and leave it at that?
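The suggested upper bound amounts to a simple gate on starting new operations. A hypothetical sketch follows; the 1000 default is the figure floated in this discussion, not necessarily what shipped:

```java
// Hypothetical sketch of the proposed cap on concurrent snapshot and delete
// operations. The check only gates *starting* new operations: lowering the
// limit below the number of already-running operations does not abort them,
// it just refuses new ones until the running count drops below the limit.
class ConcurrencyLimit {

    static final int DEFAULT_MAX_CONCURRENT_OPERATIONS = 1000;

    public static boolean canStartOperation(int runningOperations, int maxConcurrentOperations) {
        return runningOperations < maxConcurrentOperations;
    }
}
```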

In case a repository does not yet use uuid-shard generations in all its snapshots we want to queue up snapshot operations (instead of the more elaborate but in this case unsafe concurrency) to not create inconsistent SLM behavior across versions.

I think this is actually not necessary in hindsight :) I pushed f5a44d4 to force the use of shard generations, even numeric ones, for queued-up shard-level snapshots. Meaning, even if your shard-level generations are numeric, queued-up shard-level snapshots won't do listing to find the next generation and hence are safe even on S3. This change will need to be adapted a little for the backport to 7.x because there are some corner cases in mixed-version clusters (literally only with 7.7.0, I think) where we still want to filter out numeric generations in the blob store, but in mixed-version clusters with 7.8.0+ it's safe as far as I can see. Certainly, it seems to me that this is the safer approach compared to adding a different state machine for queuing operations while the repository is at an older version.

=> IMO this is ready for another review.

Contributor

@ywelsch ywelsch left a comment


LGTM. Thanks for tackling this challenge, and good luck with the backport.

Member

@tlrx tlrx left a comment


LGTM. This is great work, thanks Armin! I carefully reviewed most parts, but I must admit that I only lightly reviewed the "repository loop" part.

I deeply apologize for the time it took me to review this PR. The reviewing experience was not great for me due to the amount of code changes; I think using a dedicated branch here would have made sense.

private static String startDataNodeWithLargeSnapshotPool() {
    return internalCluster().startDataOnlyNode(LARGE_SNAPSHOT_POOL_SETTINGS);
}
public void testSnapshotRunsAfterInProgressDelete() throws Exception {
Member


nit: add extra line

assertThat(secondSnapshotResponse.isDone(), is(false));

unblockNode(repoName, dataNode);
assertThat(firstSnapshotResponse.get().getSnapshotInfo().state(), is(SnapshotState.FAILED));
Member


Can we check that the 1st snapshot failed because it was aborted?

ensureStableCluster(3);

awaitNoMoreRunningOperations();
expectThrows(RepositoryException.class, deleteFuture::actionGet);
Member


Is there a meaningful error message we could check here?

Contributor Author


Not really, it's just "failed to update repository". It's all in the cause here, but that's also just a JSON parse failure.

this(in.readList(Entry::new));
}

private static boolean assertConsistency(List<Entry> entries) {
Member


nit: assertConsistency -> assertNoConcurrentDeletionsForSameRepository() ?

try {
assert assertConsistentEntries(entries);
} catch (AssertionError e) {
throw e;
Member


I'm not sure I understand why we catch and rethrow here.

Contributor Author


So I could put a debug breakpoint there :D Thanks for spotting!

private final OngoingRepositoryOperations repositoryOperations = new OngoingRepositoryOperations();

/**
* Setting that specifies the maximum number of allow concurrent snapshot create and delete operations in the
Member


allow -> allowed

currentState.custom(SnapshotDeletionsInProgress.TYPE, SnapshotDeletionsInProgress.EMPTY);
if (deletionsInProgress.hasDeletionsInProgress()) {
currentState.custom(SnapshotDeletionsInProgress.TYPE, SnapshotDeletionsInProgress.EMPTY);
if (deletionsInProgress.hasDeletionsInProgress() && concurrentOperationsAllowed == false) {
throw new ConcurrentSnapshotExecutionException(repositoryName, snapshotName,
"cannot snapshot while a snapshot deletion is in-progress in [" + deletionsInProgress + "]");
Member


When backporting, we could maybe indicate in the error message that concurrent snapshot/deletions are possible in version 7.9?

Contributor Author


👍 Right, I put down a note for that when doing the back-port work.

throw new ConcurrentSnapshotExecutionException(repositoryName, snapshotName, " a snapshot is already running");
}
ensureBelowConcurrencyLimit(repositoryName, snapshotName, snapshots, deletionsInProgress);
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

What happens if multiple snapshot operations are started but the maxConcurrentOperations setting is updated to a value lower than the current number of concurrent ops? Would it still be possible to enqueue more ops?

Contributor Author


What happens if multiple snapshot operations are started but the maxConcurrentOperations setting is updated to a value lower than the current number of concurrent ops?

Existing ops won't be affected but you can't start new ones.

Would it still be possible to enqueue more ops?

No, the ops have to come down below the new limit first before we can enqueue more.

@original-brownbear
Contributor Author

Thanks so much @ywelsch @tlrx !!!

@original-brownbear original-brownbear merged commit d333dac into elastic:master Jul 10, 2020
@original-brownbear original-brownbear deleted the allow-multiple-snapshots branch July 10, 2020 13:19
original-brownbear added a commit that referenced this pull request Jul 15, 2020
original-brownbear added a commit that referenced this pull request Jul 22, 2020
There were two subtle bugs here from backporting #56911 to 7.x.

1. We passed `null` for the `shards` map which isn't nullable any longer
when creating `SnapshotsInProgress.Entry`, fixed by just passing an empty map
like the `null` handling did in the past.
2. The removal of a failed `INIT` state snapshot from the cluster state tried
removing it from the finalization loop (the set of repository names that are
currently finalizing). This will trip an assertion since the snapshot failed
before its repository was put into the set. I made the logic ignore the set
in case we remove a failed `INIT` state snapshot to restore the old logic to
exactly as it was before the concurrent snapshots backport to be on the safe
side here.

Also, added tests that explicitly call the old code paths because, as can be seen
from initially missing this, the BwC tests only run in the new-version-master with
old-version-nodes configuration every so often, and having a deterministic test
for the old state machine seems the safest bet here.

Closes #59986
original-brownbear added a commit that referenced this pull request Jul 22, 2020
@original-brownbear original-brownbear restored the allow-multiple-snapshots branch August 6, 2020 18:33

Labels

:Distributed/Snapshot/Restore Anything directly related to the `_snapshot/*` APIs >enhancement release highlight Team:Distributed Meta label for distributed team. v7.9.0 v8.0.0-alpha1


5 participants