Fix SnapshotStatus Transport Action Doing IO on Transport Thread #68023

Merged

original-brownbear merged 1 commit into elastic:master from original-brownbear:improve-snapshot-status-api on Jan 28, 2021
Conversation
There is a small chance here that #67947 would cause the callback for the repository data to run on a transport or cluster-state (CS) updater thread and do a lot of IO to fetch `SnapshotInfo`. Fixed by always forking to the generic pool for the callback. Added a test that concurrently triggers lots of deserializing of repository data from cache on the transport thread, which reproduces this bug relatively reliably (more than half the runs) while still being reasonably fast (under 5s).
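The fix described above boils down to never running an IO-heavy callback on the thread that delivers the result. A minimal plain-JDK sketch of the pattern (the names `onRepositoryDataForked`, `GENERIC`, and the callback are illustrative stand-ins, not the actual Elasticsearch `ThreadPool`/`ActionListener` API):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class ForkingCallback {
    // Stand-in for Elasticsearch's generic thread pool.
    static final ExecutorService GENERIC = Executors.newCachedThreadPool();

    // Buggy shape: the IO-heavy callback runs on whatever thread delivers the
    // repository data (possibly a transport or CS updater thread).
    static void onRepositoryDataDirect(String repoData, Consumer<String> ioHeavyCallback) {
        ioHeavyCallback.accept(repoData);
    }

    // Fixed shape: always fork to the generic pool before invoking the callback.
    static void onRepositoryDataForked(String repoData, Consumer<String> ioHeavyCallback) {
        GENERIC.execute(() -> ioHeavyCallback.accept(repoData));
    }

    public static void main(String[] args) throws Exception {
        CompletableFuture<String> callbackThread = new CompletableFuture<>();
        onRepositoryDataForked("repo-data",
                data -> callbackThread.complete(Thread.currentThread().getName()));
        // The callback ran on a pool thread, not on the caller's ("main") thread.
        System.out.println(callbackThread.get(5, TimeUnit.SECONDS).equals("main")
                ? "same-thread" : "forked");
        GENERIC.shutdown();
    }
}
```

The test described in the PR then just has to hammer the code path concurrently and assert that no blocking IO is ever observed on a transport thread.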
Collaborator

Pinging @elastic/es-distributed (Team:Distributed)
Contributor, Author

Thanks Francisco!
original-brownbear added a commit that referenced this pull request on Jan 28, 2021
) (#68092) There is a small chance here that #67947 would cause the callback for the repository data to run on a transport or CS updater thread and do a lot of IO to fetch `SnapshotInfo`. Fixed by always forking to the generic pool for the callback. Added a test that concurrently triggers lots of deserializing of repository data from cache on the transport thread, which reproduces this bug relatively reliably (more than half the runs) while still being reasonably fast (under 5s).
original-brownbear added a commit that referenced this pull request on Feb 3, 2021
Same as #68023 but even less likely (for that reason I couldn't find a quick way to write a test for it). The fix is the same: fork off to the generic pool for listener handling. This also allows removing the forking in the transport action, since we no longer do any long-running work on the calling thread in the restore method.
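The "remove the forking in the transport action" part works because once the listener itself dispatches its completion to the generic pool, callers can safely complete it inline from any thread. A hedged plain-JDK sketch of that idea (the `Listener` interface and `forkToGeneric` wrapper are illustrative, not the actual Elasticsearch `ActionListener` API):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ForkingListener {
    // Minimal stand-in for a response listener.
    interface Listener<T> {
        void onResponse(T response);
    }

    // Stand-in for Elasticsearch's generic thread pool.
    static final ExecutorService GENERIC = Executors.newCachedThreadPool();

    // Wrap a listener so onResponse always runs on the generic pool; the
    // caller no longer needs to fork before completing it.
    static <T> Listener<T> forkToGeneric(Listener<T> delegate) {
        return response -> GENERIC.execute(() -> delegate.onResponse(response));
    }

    public static void main(String[] args) throws Exception {
        CompletableFuture<String> completionThread = new CompletableFuture<>();
        Listener<String> wrapped =
                forkToGeneric(r -> completionThread.complete(Thread.currentThread().getName()));
        wrapped.onResponse("restore-done"); // completed inline by the caller
        // The delegate still ran on a pool thread, not on "main".
        System.out.println(completionThread.get(5, TimeUnit.SECONDS).equals("main")
                ? "inline" : "forked");
        GENERIC.shutdown();
    }
}
```

With the wrapper in place, the caller-side fork in the transport action becomes redundant and can be deleted, which is exactly the simplification the follow-up describes.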
original-brownbear added a commit that referenced this pull request on Feb 3, 2021
Same as #68023 but even less likely (for that reason I couldn't find a quick way to write a test for it). The fix is the same: fork off to the generic pool for listener handling. This also allows removing the forking in the transport action, since we no longer do any long-running work on the calling thread in the restore method.
original-brownbear added a commit that referenced this pull request on Jun 29, 2021
There is a small chance here that #67947 would cause the callback for the repository data to run on a transport or CS updater thread and do a lot of IO to fetch `SnapshotInfo`. Fixed by always forking to the generic pool for the callback. Added a test that concurrently triggers lots of deserializing of repository data from cache on the transport thread, which reproduces this bug relatively reliably (more than half the runs) while still being reasonably fast (under 5s).