Process more expensive allocation deciders last #20724

Merged

abeyad merged 2 commits into elastic:master from abeyad:alloc_decider_weights
Oct 4, 2016
Conversation


@abeyad abeyad commented Oct 3, 2016

Today, the individual allocation deciders appear in random
order when initialized in AllocationDeciders, which means
potentially more performance-intensive allocation deciders
could run before less expensive deciders. This adds to the
execution time in cases where a less expensive decider could
have terminated the decision-making process early with a NO
decision. This commit orders the initialization of the
allocation deciders based on a general assessment of the
big-O runtime of each decider, moving the likely more
expensive deciders last.

This manner of assessing the decider performance time is a
best guess and meant for a quick win in terms of performance
benefit.

Closes #12815
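To illustrate the idea behind the ordering, here is a hypothetical, simplified sketch (not the actual Elasticsearch AllocationDeciders API): deciders are evaluated in sequence, and a NO from a cheap decider short-circuits the chain before any expensive decider runs, so placing cheap deciders first reduces the total work done per decision.

```java
import java.util.List;
import java.util.function.Supplier;

// Hypothetical sketch: a decision chain that stops at the first NO.
// Names and structure are illustrative only, not the Elasticsearch API.
public class DeciderOrderSketch {
    enum Decision { YES, NO }

    static int evaluations = 0;

    static Decision decide(List<Supplier<Decision>> orderedDeciders) {
        for (Supplier<Decision> decider : orderedDeciders) {
            evaluations++;
            if (decider.get() == Decision.NO) {
                return Decision.NO; // early termination: later deciders never run
            }
        }
        return Decision.YES;
    }

    public static void main(String[] args) {
        Supplier<Decision> cheapNo = () -> Decision.NO;       // stands in for a cheap check
        Supplier<Decision> expensiveYes = () -> Decision.YES; // stands in for a costly check

        // Cheap decider first: the expensive one is never evaluated.
        evaluations = 0;
        decide(List.of(cheapNo, expensiveYes));
        System.out.println("cheap-first evaluations: " + evaluations); // 1

        // Expensive decider first: both run before the NO is reached.
        evaluations = 0;
        decide(List.of(expensiveYes, cheapNo));
        System.out.println("expensive-first evaluations: " + evaluations); // 2
    }
}
```

Since the per-decider ranking in this commit is a static best guess rather than a measured cost model, the win is in the common case where a cheap decider vetoes the allocation early.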

Contributor

@bleskes bleskes left a comment


LGTM

addAllocationDecider(deciders, new ThrottlingAllocationDecider(settings, clusterSettings));
addAllocationDecider(deciders, new ShardsLimitAllocationDecider(settings, clusterSettings));
addAllocationDecider(deciders, new AwarenessAllocationDecider(settings, clusterSettings));
addAllocationDecider(deciders, new FilterAllocationDecider(settings, clusterSettings));
Contributor


I think this one is fairly light - typically there are only a few, very specific rules

Author


@bleskes I agree if there are just a small number of filter rules, then this one should be lighter than the others. I'll move it up and push.

Author

abeyad commented Oct 4, 2016

thanks for the review @bleskes

@abeyad abeyad merged commit dc166c5 into elastic:master Oct 4, 2016
@abeyad abeyad deleted the alloc_decider_weights branch October 4, 2016 12:36
abeyad pushed a commit that referenced this pull request Oct 4, 2016
Author

abeyad commented Oct 4, 2016

5.x commit: 2fae528

@lcawl lcawl added :Distributed/Distributed A catch all label for anything in the Distributed Area. Please avoid if you can. and removed :Allocation labels Feb 13, 2018

Labels

:Distributed/Distributed, >enhancement, v5.1.1, v6.0.0-alpha1


4 participants