Improve scalability of BroadcastReplicationActions #92729
Closed
DaveCTurner wants to merge 8 commits into elastic:main
Conversation
BroadcastReplicationAction derivatives (`POST /<indices>/_refresh` and `POST /<indices>/_flush`) are pretty inefficient when targeting high shard counts due to how `TransportBroadcastReplicationAction` works:

- It computes the list of all target shards up-front on the calling (transport) thread.
- It eagerly sends one request for every target shard in a tight loop on the calling (transport) thread.
- It accumulates responses in a `CopyOnWriteArrayList` which takes quadratic work to populate, even though nothing reads this list until it's fully populated.
- It then mostly discards the accumulated responses, keeping only the total number of shards, the number of successful shards, and a list of any failures.
- Each failure is wrapped up in a `ReplicationResponse.ShardInfo.Failure` but then unwrapped at the end to be re-wrapped in a `DefaultShardOperationFailedException`.

This commit fixes all this:

- It avoids allocating a list of all target shards, instead iterating over the target indices and generating shard IDs on the fly.
- The computation of the list of shards, and the sending of the per-shard requests, now happens on the relevant threadpool (`REFRESH` or `FLUSH`) rather than a transport thread.
- The per-shard requests are now throttled, with a meaningful yet fairly generous concurrency limit of `#(data nodes) * 10`.
- Rather than accumulating the full responses for later processing, we track the counts and failures directly.
- The failures are tracked in a regular `ArrayList`, avoiding the accidentally-quadratic complexity.
- The failures are tracked in their final form, skipping the unwrap-and-rewrap step at the end.

Relates elastic#77466
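The "accidentally-quadratic" point can be seen in isolation: `CopyOnWriteArrayList` copies its entire backing array on every `add`, so accumulating n responses costs O(n²) copying, whereas `ArrayList` amortizes to O(n). A minimal standalone sketch (not Elasticsearch code) that times the two:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class QuadraticAccumulation {
    // Appends n dummy "responses" to the given sink and returns elapsed nanos.
    static long timeAppends(List<Integer> sink, int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            sink.add(i); // CopyOnWriteArrayList copies the whole backing array here
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 50_000;
        long arrayListNanos = timeAppends(new ArrayList<>(), n);
        long cowListNanos = timeAppends(new CopyOnWriteArrayList<>(), n);
        // The gap widens rapidly as n grows; exact numbers depend on the machine.
        System.out.printf("ArrayList: %d ms, CopyOnWriteArrayList: %d ms%n",
            arrayListNanos / 1_000_000, cowListNanos / 1_000_000);
    }
}
```

Copy-on-write is a fine choice for read-heavy, rarely-mutated lists, but as the PR notes, nothing reads this list until it is fully populated, so the copying buys nothing here.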
Collaborator
Hi @DaveCTurner, I've created a changelog YAML for you.
DaveCTurner commented Jan 6, 2023
```java
import java.util.concurrent.Semaphore;
import java.util.function.BiConsumer;

public class ThrottledIterator<T> implements Releasable {
```
Member (Author)
This, and some of the other utilities introduced here, are extracted from #92373.
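The actual `ThrottledIterator` is in the PR diff; as a rough illustration of the semaphore-based throttling idea only, here is a simplified standalone sketch. The class name, the blocking style, and the use of task completion (rather than the arrival of a per-shard response) to release a permit are all simplifications, not the real implementation:

```java
import java.util.Iterator;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;
import java.util.stream.IntStream;

public class ThrottledRunner {
    // Runs one task per item on the executor, with at most maxConcurrency
    // tasks in flight at any moment. A permit is acquired before an item
    // starts and released when its task finishes.
    static <T> void runThrottled(ExecutorService executor, Iterator<T> items,
                                 int maxConcurrency, Consumer<T> task) {
        Semaphore permits = new Semaphore(maxConcurrency);
        while (items.hasNext()) {
            T item = items.next();
            permits.acquireUninterruptibly(); // wait for a free slot
            executor.execute(() -> {
                try {
                    task.accept(item);
                } finally {
                    permits.release(); // free the slot for the next item
                }
            });
        }
        permits.acquireUninterruptibly(maxConcurrency); // drain in-flight tasks
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        AtomicInteger inFlight = new AtomicInteger();
        AtomicInteger maxSeen = new AtomicInteger();
        runThrottled(pool, IntStream.range(0, 40).boxed().iterator(), 3, i -> {
            int now = inFlight.incrementAndGet();
            maxSeen.accumulateAndGet(now, Math::max);
            try { Thread.sleep(5); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            inFlight.decrementAndGet();
        });
        pool.shutdown();
        System.out.println("max observed concurrency = " + maxSeen.get()); // never exceeds 3
    }
}
```

The real utility is fully asynchronous and never blocks the submitting thread, which matters precisely because the submitting thread must not be a transport thread.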
Comment on lines +73 to +75

```java
protected ClusterBlockLevel globalBlockLevel() {
    return ClusterBlockLevel.METADATA_READ;
}
```
Member (Author)
Surprised this wasn't checked already; today we just return a trivial success before state recovery:

```json
{
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  }
}
```

We could leave this as-is ofc.
Comment on lines +69 to +71

```java
protected ClusterBlockLevel globalBlockLevel() {
    return ClusterBlockLevel.METADATA_READ;
}
```
Member (Author)
Surprised this wasn't checked already; today we just return a trivial success before state recovery:

```json
{
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  }
}
```

We could leave this as-is ofc.
DaveCTurner commented Jan 6, 2023
```java
    }

    protected void shardExecute(Task task, Request request, ShardId shardId, ActionListener<ShardResponse> shardActionListener) {
        // assert Transports.assertNotTransportThread("per-shard requests might be high-volume"); TODO Yikes!
```
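The commented-out assertion flags the underlying problem: the per-shard fan-out may run on a transport thread, which must stay responsive. The fix's shape is to have the transport thread only enqueue the work onto the relevant threadpool (`REFRESH` or `FLUSH` in the PR). A hypothetical, heavily simplified sketch of that hand-off (names here are stand-ins, not Elasticsearch APIs):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DispatchOffTransportThread {
    // The calling (transport) thread only enqueues; the potentially
    // high-volume fan-out loop runs on the dedicated executor.
    static CompletableFuture<Void> onTransportMessage(Executor refreshExecutor, Runnable fanOut) {
        return CompletableFuture.runAsync(fanOut, refreshExecutor);
    }

    public static void main(String[] args) {
        ExecutorService refreshExecutor = Executors.newSingleThreadExecutor(
            r -> new Thread(r, "mock-refresh-thread"));
        onTransportMessage(refreshExecutor,
            () -> System.out.println("fan-out on " + Thread.currentThread().getName())).join();
        refreshExecutor.shutdown();
        // prints: fan-out on mock-refresh-thread
    }
}
```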
DaveCTurner added a commit to DaveCTurner/elasticsearch that referenced this pull request Jan 13, 2023
BroadcastReplicationAction derivatives (`POST /<indices>/_refresh` and `POST /<indices>/_flush`) are pretty inefficient when targeting high shard counts due to how `TransportBroadcastReplicationAction` works:

- It computes the list of all target shards up-front on the calling (transport) thread.
- It accumulates responses in a `CopyOnWriteArrayList` which takes quadratic work to populate, even though nothing reads this list until it's fully populated.
- It then mostly discards the accumulated responses, keeping only the total number of shards, the number of successful shards, and a list of any failures.
- Each failure is wrapped up in a `ReplicationResponse.ShardInfo.Failure` but then unwrapped at the end to be re-wrapped in a `DefaultShardOperationFailedException`.

This commit fixes all this:

- The computation of the list of shards, and the sending of the per-shard requests, now happens on the relevant threadpool (`REFRESH` or `FLUSH`) rather than a transport thread.
- The failures are tracked in a regular `ArrayList`, avoiding the accidentally-quadratic complexity.
- Rather than accumulating the full responses for later processing, we track the counts and failures directly.
- The failures are tracked in their final form, skipping the unwrap-and-rewrap step at the end.

Relates elastic#77466
Relates elastic#92729
Member (Author)
Closing this for now; I opened #92902 with the more obviously-correct bits of this change.
elasticsearchmachine pushed a commit that referenced this pull request Jan 13, 2023
BroadcastReplicationAction derivatives (`POST /<indices>/_refresh` and `POST /<indices>/_flush`) are pretty inefficient when targeting high shard counts due to how `TransportBroadcastReplicationAction` works:

- It computes the list of all target shards up-front on the calling (transport) thread.
- It accumulates responses in a `CopyOnWriteArrayList` which takes quadratic work to populate, even though nothing reads this list until it's fully populated.
- It then mostly discards the accumulated responses, keeping only the total number of shards, the number of successful shards, and a list of any failures.
- Each failure is wrapped up in a `ReplicationResponse.ShardInfo.Failure` but then unwrapped at the end to be re-wrapped in a `DefaultShardOperationFailedException`.

This commit fixes all this:

- The computation of the list of shards, and the sending of the per-shard requests, now happens on the relevant threadpool (`REFRESH` or `FLUSH`) rather than a transport thread.
- The failures are tracked in a regular `ArrayList`, avoiding the accidentally-quadratic complexity.
- Rather than accumulating the full responses for later processing, we track the counts and failures directly.
- The failures are tracked in their final form, skipping the unwrap-and-rewrap step at the end.

Relates #77466
Relates #92729
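The original PR also avoided materialising the full shard list by iterating over the target indices and generating shard IDs on the fly, bounded by the `#(data nodes) * 10` concurrency limit. A hypothetical sketch of that lazy-iteration idea (the `ShardId` record and the `index -> shard count` map are stand-ins for the real cluster metadata, not Elasticsearch types):

```java
import java.util.Iterator;
import java.util.Map;
import java.util.NoSuchElementException;

public class LazyShardIds {
    record ShardId(String index, int shard) {}

    // Yields every (index, shard) pair one at a time, so no O(#shards)
    // list is ever allocated up-front.
    static Iterator<ShardId> shardIds(Map<String, Integer> shardCounts) {
        Iterator<Map.Entry<String, Integer>> indices = shardCounts.entrySet().iterator();
        return new Iterator<>() {
            Map.Entry<String, Integer> current;
            int nextShard;

            public boolean hasNext() {
                while (current == null || nextShard >= current.getValue()) {
                    if (!indices.hasNext()) return false;
                    current = indices.next(); // advance to the next index
                    nextShard = 0;
                }
                return true;
            }

            public ShardId next() {
                if (!hasNext()) throw new NoSuchElementException();
                return new ShardId(current.getKey(), nextShard++);
            }
        };
    }

    // The PR's concurrency bound: generous, but scales with cluster size.
    static int maxConcurrency(int dataNodes) {
        return dataNodes * 10;
    }

    public static void main(String[] args) {
        Iterator<ShardId> it = shardIds(Map.of("logs", 2, "metrics", 1));
        int n = 0;
        while (it.hasNext()) { it.next(); n++; }
        System.out.println(n + " shard IDs, limit " + maxConcurrency(3));
        // prints: 3 shard IDs, limit 30
    }
}
```

Feeding such an iterator through a throttled executor gives bounded memory and bounded fan-out regardless of how many shards the request targets.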