
Initialize shards assuming they would be placed on original nodes#90600

Merged
idegtiarenko merged 9 commits into elastic:feature/desired-balance-allocator from idegtiarenko:fix_testSingleIndexStartedShard on Oct 4, 2022

Conversation

@idegtiarenko (Contributor):

No description provided.

@idegtiarenko added the :Distributed/Allocation and Team:Distributed labels on Oct 3, 2022
@elasticsearchmachine (Collaborator):

Pinging @elastic/es-distributed (Team:Distributed)

if (info != null
&& info.getLastAllocatedNodeId() != null
&& routingNodes.node(info.getLastAllocatedNodeId()) != null
&& ignoredShards.contains(shardRouting) == false) {
@idegtiarenko (Contributor, Author):

Should we do this even if the shard is ignored?

@idegtiarenko (Contributor, Author):

I am writing a unit test for this.

@DaveCTurner (Member):

I think we only want to do this if a shard is ignored because that indicates the GatewayAllocator is still taking responsibility for it (and therefore we expect will eventually assign it). If the shard is not ignored then its last-allocated ID shouldn't be relevant, it's effectively a new shard and should be left for the delegate allocator.
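The condition in the diff hunk above checks `ignoredShards.contains(shardRouting) == false`, while this comment argues the check should be inverted: only shards the GatewayAllocator has ignored (and thus still owns) should be assumed back on their last-allocated node. A minimal sketch of that decision, using simplified stand-in types rather than the real `ShardRouting`/`RoutingNodes` classes:

```java
import java.util.Set;

// Simplified sketch (not the actual Elasticsearch types) of the rule
// described above: pin a shard to its last-allocated node only when the
// GatewayAllocator ignored it, meaning it still owns that shard's recovery.
public class LastAllocatedNodeSketch {

    record ShardId(String index, int id) {}

    /**
     * Returns the node the shard should initially be assumed on, or null
     * if the delegate allocator should treat it as an effectively new shard.
     */
    static String initialNodeFor(
        ShardId shard,
        String lastAllocatedNodeId,  // may be null for brand-new shards
        Set<String> liveNodeIds,     // nodes currently in the cluster
        Set<ShardId> ignoredShards   // shards the GatewayAllocator deferred
    ) {
        // Only ignored shards are expected to come back on their previous
        // node; for anything else the last-allocated ID is not relevant.
        if (lastAllocatedNodeId != null
            && liveNodeIds.contains(lastAllocatedNodeId)
            && ignoredShards.contains(shard)) {
            return lastAllocatedNodeId;
        }
        return null;
    }

    public static void main(String[] args) {
        ShardId s = new ShardId("idx", 0);
        Set<String> nodes = Set.of("node-1", "node-2");

        // Ignored shard whose previous node is still live: assume old placement.
        System.out.println(initialNodeFor(s, "node-1", nodes, Set.of(s))); // node-1

        // Not ignored: leave it to the delegate allocator.
        System.out.println(initialNodeFor(s, "node-1", nodes, Set.of())); // null
    }
}
```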

@DaveCTurner previously approved these changes on Oct 3, 2022:

LGTM with one comment nit.


@DaveCTurner dismissed their stale review on October 3, 2022 at 13:32: "this approach doesn't work"

@DaveCTurner (Member):

We discussed in another channel: this approach doesn't quite work because it might try to assign a replica before its primary. Instead we should incorporate the previous node ID into the targetNodes used when we're aligning things with the previous desired balance.
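The suggested alternative folds the last-allocated node into the target nodes the reconciler already works from, so the normal primary-before-replica ordering is preserved instead of initializing shards directly. A hypothetical sketch under simplified types (the real `targetNodes` structure in the desired-balance code may differ):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hedged sketch of the alternative described above: rather than assigning
// an unassigned shard straight to its last-allocated node, add that node
// to the shard's target-node set so the usual reconciliation logic (which
// assigns primaries before replicas) decides when it actually starts.
public class TargetNodesSketch {

    record ShardId(String index, int id) {}

    static Map<ShardId, Set<String>> augmentTargetNodes(
        Map<ShardId, Set<String>> desiredBalanceTargets,
        Map<ShardId, String> lastAllocatedNodeIds,
        Set<String> liveNodeIds
    ) {
        // Copy the existing desired-balance targets into mutable sets.
        Map<ShardId, Set<String>> result = new HashMap<>();
        desiredBalanceTargets.forEach((shard, targets) -> result.put(shard, new HashSet<>(targets)));
        // Add each shard's previous node, if that node is still in the cluster.
        lastAllocatedNodeIds.forEach((shard, nodeId) -> {
            if (nodeId != null && liveNodeIds.contains(nodeId)) {
                result.computeIfAbsent(shard, k -> new HashSet<>()).add(nodeId);
            }
        });
        return result;
    }

    public static void main(String[] args) {
        ShardId s = new ShardId("idx", 0);
        Map<ShardId, Set<String>> targets = augmentTargetNodes(
            Map.of(s, Set.of("node-2")),     // previous desired balance
            Map.of(s, "node-1"),             // last-allocated node per shard
            Set.of("node-1", "node-2"));
        System.out.println(targets.get(s).equals(Set.of("node-1", "node-2"))); // true
    }
}
```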

@DaveCTurner (Member) left a comment:

LGTM again

@idegtiarenko idegtiarenko merged commit bd2bbf0 into elastic:feature/desired-balance-allocator Oct 4, 2022
@idegtiarenko idegtiarenko deleted the fix_testSingleIndexStartedShard branch October 4, 2022 06:31

3 participants