Make Shard Started Response Handling only Return after the CS Update Completes #82790

original-brownbear merged 4 commits into elastic:master from original-brownbear:81628-round-2
Conversation
Somewhat lazy solution, copying the approach from the failed handler 1:1 for now. Added a TODO to clean this up. Closes #81628
Pinging @elastic/es-distributed (Team:Distributed)
DaveCTurner left a comment:
Looks good although I think we should have a test for it too. I left a few comments. I'll leave it to @idegtiarenko to review too as he's working in this area and might want to wait to avoid merge conflicts.
    new ClusterStateTaskListener() {
        @Override
        public void onFailure(Exception e) {
            logger.error(
I think it'd be better to keep the DEBUG level for FailedToCommitClusterStateException and NotMasterException. ERROR is a bit overdramatic in any case here.
++ made it debug in the onNoLongerMaster now.
that fixes the NotMasterException case but not FailedToCommitClusterStateException
🤦 right, added conditional handling for FailedToCommitClusterStateException now as well.
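The resolution above (DEBUG for expected master-failover exceptions, ERROR otherwise) can be sketched in plain Java. The exception classes here are hypothetical stand-ins for the real Elasticsearch types, not the actual implementation:

```java
// Sketch: pick a log level based on exception type. NotMasterException and
// FailedToCommitClusterStateException are expected during master failover,
// so they are logged at DEBUG rather than ERROR.
class NotMasterException extends RuntimeException {}

class FailedToCommitClusterStateException extends RuntimeException {}

public class LogLevelSketch {
    static String levelFor(Exception e) {
        if (e instanceof NotMasterException || e instanceof FailedToCommitClusterStateException) {
            return "DEBUG";
        }
        return "ERROR";
    }

    public static void main(String[] args) {
        System.out.println(levelFor(new NotMasterException()));     // prints DEBUG
        System.out.println(levelFor(new RuntimeException("boom"))); // prints ERROR
    }
}
```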
        channel.sendResponse(e);
    } catch (Exception channelException) {
        channelException.addSuppressed(e);
        logger.warn(
This is pretty much what ChannelActionListener does, maybe we should just use that?
Yea right, let's do that :)
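For reference, the failure-response pattern that ChannelActionListener encapsulates looks roughly like the following self-contained sketch: try to send the original exception back over the channel, and if that itself fails, attach the original as a suppressed exception so neither stack trace is lost. The `Channel` interface and method names here are simplified stand-ins, not the real transport API:

```java
// Minimal stand-in for a transport channel that can itself fail while
// sending a response.
interface Channel {
    void sendResponse(Exception e) throws Exception;
}

public class SendFailureSketch {
    // Returns null on success, or the exception the caller should log
    // (at WARN), with the original failure attached as suppressed.
    static Exception respondWithFailure(Channel channel, Exception e) {
        try {
            channel.sendResponse(e);
            return null;
        } catch (Exception channelException) {
            channelException.addSuppressed(e);
            return channelException;
        }
    }

    public static void main(String[] args) {
        Channel broken = resp -> { throw new IllegalStateException("channel closed"); };
        Exception logged = respondWithFailure(broken, new RuntimeException("task failed"));
        // The original failure survives as a suppressed exception:
        System.out.println(logged.getSuppressed()[0].getMessage()); // prints task failed
    }
}
```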
    // TODO: Make this a TransportMasterNodeAction and remove duplication of master failover retrying from upstream code
Mostly 👍 except that I believe TransportMasterNodeAction requires a timeout today but these things should not time out. Relates #82185 too I think.
Yea, but we can just do what we did for the snapshot shard state update and set it to the max value. We could even do better here and create an override of the master node request that doesn't physically write the always-max value redundantly.
I actually mostly implemented this already this morning, but then figured this one I can get merged more quickly, and it actually helps my benchmarks :)
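The "don't physically write the always-max value redundantly" idea can be illustrated with a minimal serialization sketch, using plain `java.io` streams as a stand-in for the real wire format: since the timeout is a known constant on both ends, the writer skips the field entirely and the reader reconstitutes it.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class MaxTimeoutRequestSketch {
    static final long MAX_TIMEOUT = Long.MAX_VALUE;

    // A normal request would write its timeout to the wire; this variant
    // skips it because the value is fixed at the maximum by construction.
    static byte[] serialize(String payload) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeUTF(payload);
            // deliberately no out.writeLong(MAX_TIMEOUT)
        }
        return bytes.toByteArray();
    }

    static long deserializeTimeout(byte[] data) throws IOException {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(data))) {
            in.readUTF(); // payload
            return MAX_TIMEOUT; // reconstituted constant, never read from the wire
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] wire = serialize("shard-started");
        System.out.println(deserializeTimeout(wire) == Long.MAX_VALUE); // prints true
    }
}
```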
Would be nice to have indeed. I don't see a quick way of adding one though: the existing UT infrastructure doesn't seem to have the plumbing for this, and for ITs (which I'd like better) I'd have to implement something along the lines of … Maybe it's again ok to leave this for later? :) I don't think I'll have the time to implement that today and don't want to block @idegtiarenko's refactoring efforts here longer than necessary. I manually verified that this works correctly (seeing lots of dedup now actually happening in internal cluster tests that wouldn't happen before).
Jenkins run elasticsearch-ci/part-1 (unrelated + known)

Jenkins run elasticsearch-ci/part-1 (unrelated but new ... will open an issue)

Thanks both!
💔 Backport failed
You can use sqren/backport to manually backport by running