Fix RoutingTable Lookup by Index#75530

Merged
original-brownbear merged 3 commits into elastic:master from
original-brownbear:fix-routing-table-lookup
Jul 21, 2021

Conversation

@original-brownbear
Contributor

@original-brownbear original-brownbear commented Jul 20, 2021

This is likely one source of bugs in at least snapshotting (it could lead to `org.elasticsearch.snapshots.SnapshotsService#waitingShardsStartedOrUnassigned` missing a relevant index change), as it can lead to looking up the wrong index from a stale shard id when an index has been deleted and a new index has since been created under the same name.

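The failure mode can be sketched in isolation. This is a minimal model with hypothetical stand-ins for the `Index` and `ShardId` classes (not the real Elasticsearch types): a routing-table lookup keyed by index name alone resolves a stale `ShardId` to the wrong, re-created index, while a lookup that also compares the index uuid correctly rejects it:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-ins for org.elasticsearch.index.Index / ShardId.
record Index(String name, String uuid) {}
record ShardId(Index index, int id) {}

class RoutingTableSketch {
    // Indices are addressed by name; the uuid is what distinguishes a
    // deleted index from one re-created under the same name.
    private final Map<String, Index> indicesByName = new HashMap<>();

    void put(Index index) { indicesByName.put(index.name(), index); }

    // Buggy lookup: resolves by name only, so a stale ShardId silently
    // matches a newer index that reused the same name.
    Index lookupByName(ShardId shardId) {
        return indicesByName.get(shardId.index().name());
    }

    // Fixed lookup: only matches when name *and* uuid agree.
    Index lookup(ShardId shardId) {
        Index found = indicesByName.get(shardId.index().name());
        return (found != null && found.uuid().equals(shardId.index().uuid())) ? found : null;
    }

    public static void main(String[] args) {
        RoutingTableSketch table = new RoutingTableSketch();
        ShardId stale = new ShardId(new Index("logs", "uuid-A"), 0); // from a deleted index
        table.put(new Index("logs", "uuid-B"));                      // re-created, same name

        System.out.println(table.lookupByName(stale)); // wrong index (uuid-B)
        System.out.println(table.lookup(stale));       // null: stale shard id rejected
    }
}
```

The sketch only illustrates the name-vs-uuid distinction the fix relies on; the real routing-table structures are more involved.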
@original-brownbear original-brownbear added >bug :Distributed/Allocation All issues relating to the decision making around placing a shard (both master logic & on the nodes) v8.0.0 v7.14.0 v7.15.0 labels Jul 20, 2021
@elasticmachine elasticmachine added the Team:Distributed Meta label for distributed team. label Jul 20, 2021
@elasticmachine
Collaborator

Pinging @elastic/es-distributed (Team:Distributed)

Member

@DaveCTurner DaveCTurner left a comment


LGTM

@original-brownbear
Contributor Author

Thanks David!

@original-brownbear original-brownbear merged commit 1169828 into elastic:master Jul 21, 2021
@original-brownbear original-brownbear deleted the fix-routing-table-lookup branch July 21, 2021 11:16
original-brownbear added a commit that referenced this pull request Jul 21, 2021
original-brownbear added a commit that referenced this pull request Jul 21, 2021
ywangd pushed a commit to ywangd/elasticsearch that referenced this pull request Jul 30, 2021
original-brownbear added a commit that referenced this pull request Jul 30, 2021
…#75501)

This refactors the snapshots-in-progress logic to work from `RepositoryShardId` when working out what parts of the repository are in use by writes, for snapshot concurrency safety. This change does not yet go all the way on this topic; there are a number of possible follow-up improvements to simplify the logic further, which I'd work through over time.
But for now this allows fixing the remaining known issues that snapshot stress testing surfaced when combined with the fix in #75530.

These issues all come from the fact that `ShardId` is not a stable key across multiple snapshots if snapshots are partial. The scenarios that are broken are all roughly this:
* snapshot-1 for index-A with uuid-A runs and is partial
* index-A is deleted and re-created and now has uuid-B
* snapshot-2 for index-A is started and we now have it queued up behind snapshot-1 for the index
* snapshot-1 finishes and the logic tries to start the next snapshot for the same shard-id
  * this fails because the shard id is not the same; we can't compare index uuids, only index name + shard number
  * this change fixes all these spots by always taking the round trip via `RepositoryShardId`
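The stable-key idea behind that round trip can be sketched with hypothetical stand-ins (not the real Elasticsearch classes): a `ShardId` embeds the index uuid and therefore changes when an index is deleted and re-created, while a repository-scoped key derived from the repository-internal index identity stays comparable across the two queued snapshots. In this simplified sketch the repository key uses the index name only:

```java
// Hypothetical stand-ins sketching why ShardId is not a stable queue key
// across index re-creation, while a repository-scoped key is.
record Index(String name, String uuid) {}
record ShardId(Index index, int shard) {}

// Simplified repository-level key: in this sketch the repository tracks an
// index by name, so the same name maps to the same key across snapshots.
record RepositoryShardId(String indexName, int shard) {
    static RepositoryShardId from(ShardId shardId) {
        return new RepositoryShardId(shardId.index().name(), shardId.shard());
    }
}

class StableKeySketch {
    public static void main(String[] args) {
        // snapshot-1 captured shard 0 of index-A (uuid-A);
        // index-A is then deleted and re-created as uuid-B before snapshot-2.
        ShardId inSnapshot1 = new ShardId(new Index("index-A", "uuid-A"), 0);
        ShardId inSnapshot2 = new ShardId(new Index("index-A", "uuid-B"), 0);

        // Keyed by ShardId, the two operations never match, so the queued
        // snapshot cannot be started when its predecessor finishes:
        System.out.println(inSnapshot1.equals(inSnapshot2)); // false

        // The round trip via the repository-scoped key restores a comparable key:
        System.out.println(RepositoryShardId.from(inSnapshot1)
                .equals(RepositoryShardId.from(inSnapshot2))); // true
    }
}
```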
 
planned follow-ups here are:
* dry up logic across cloning and snapshotting more, as both now essentially run the same code in many state-machine steps
* serialize snapshots-in-progress efficiently instead of re-computing the index and by-repository-shard-id lookups in the constructor every time
  * refactor the logic in snapshots-in-progress away from maps keyed by shard id in almost all spots to this end; just keep an index-name-to-`Index` map to work out what exactly is being snapshotted
* refactor snapshots-in-progress to be a map of lists of operations keyed by repository shard id, instead of the list of maps it currently is, to make the concurrency simpler and more obviously correct

closes #75423 

relates (#75339 ... should also fix this, but I have to verify by testing with a backport to 7.x)
original-brownbear added a commit that referenced this pull request Aug 16, 2021
…#75501) (#76539)

original-brownbear added a commit that referenced this pull request Aug 16, 2021
…#75501) (#76539) (#76547)

@original-brownbear original-brownbear restored the fix-routing-table-lookup branch April 18, 2023 20:49