Chunked Encoding for NodeStatsResponse #90097
Merged
original-brownbear merged 1 commit into elastic:main from original-brownbear:chunked-nodes-stats on Sep 20, 2022
Conversation
Turn this into a chunked response to some degree. Only chunks per node for now, since deeper chunking needs larger changes downstream that don't fit well with the current API.
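The per-node chunking described above can be illustrated with a small sketch. This is Python for brevity (the actual change is Java inside Elasticsearch), and the names here are hypothetical, not the real API: the idea is that the response is exposed as an iterator of chunks, one per node, so the HTTP layer can write and release each chunk instead of buffering the whole body.

```python
import json

# Hypothetical sketch of per-node chunked encoding; not Elasticsearch's API.
def chunked_node_stats(nodes):
    """Yield the response one chunk per node instead of building
    one large string for the whole cluster."""
    yield '{"nodes":{'
    for i, (node_id, stats) in enumerate(nodes.items()):
        prefix = "," if i > 0 else ""
        # only one node's stats are serialized (and held in memory) at a time
        yield f"{prefix}{json.dumps(node_id)}:{json.dumps(stats)}"
    yield "}}"

nodes = {"node-1": {"docs": 10}, "node-2": {"docs": 20}}
body = "".join(chunked_node_stats(nodes))
# the concatenated chunks form the same JSON a monolithic encoder would produce
assert json.loads(body) == {"nodes": nodes}
```

The consumer decides how many chunks to write per network flush, which is what keeps peak allocation bounded by the largest chunk rather than the whole response.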
Collaborator
Pinging @elastic/es-distributed (Team:Distributed)
Contributor (Author)
Thanks David!
Hi there, I found that after 8.5 (the version this commit was merged into), … I see Armin Braun said that … Is this an unexpected exception from this commit, or is it a bug?
Member
I think this is a bug, although it's just one instance of a much more general problem. I opened #93981.
The problem is that the "level" parameter, which controls whether or not we return the very large indices- or shard-level responses, is an x-content param, so we don't have it when creating the iterator. I'd address this in a follow-up that changes the API a little.
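The dependency on "level" can be made concrete with a hypothetical sketch (the function and counts below are illustrative assumptions, not the real implementation): the number of chunks the iterator would yield differs by level, so the iterator cannot be constructed before that parameter is known.

```python
# Hypothetical illustration: chunk granularity depends on "level",
# so an iterator built without it cannot pick the right shape.
def chunks_for(level, nodes, indices_per_node):
    if level == "node":
        return nodes                     # one chunk per node (this PR)
    if level == "indices":
        return nodes * indices_per_node  # deeper chunking, per index per node
    raise ValueError(f"unknown level: {level}")

# e.g. a 25k-indices cluster across 6 data nodes
assert chunks_for("node", 6, 25_000) == 6
assert chunks_for("indices", 6, 25_000) == 150_000
```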
As a result, I did not add a test here that validates the chunk count, since I'd like to do more work on this anyway. I think the change is valuable in its current form already, and it introduces a parent class that allows other APIs to be turned into chunked encoding as well.
For example, the indices-level response for node stats in a 25k-indices cluster across 6 data nodes is currently ~120M (and that is without pretty or human!). Without this change, each hit of the indices stats API causes the coordinating node to allocate 120M for the response. With this change, we only allocate ~20M for sending the same response. Serializing those 20M on the transport thread should be a non-issue: from some quick benchmarking, even serializing the full 120M seems to take well under one second.
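The memory figures above follow from a simple back-of-the-envelope model (the numbers below are assumptions based on the figures quoted in the paragraph, not measurements): with chunked writing, the peak allocation is roughly the largest single chunk, whereas a monolithic response buffers every node's stats at once.

```python
# Back-of-the-envelope model of peak response-buffer allocation.
def peak_allocation(chunk_sizes, chunked):
    if chunked:
        # each chunk is written and released before the next is built,
        # so the peak is the largest single chunk
        return max(chunk_sizes)
    # a monolithic response holds every node's stats at once
    return sum(chunk_sizes)

# assumption: 6 data nodes at ~20 MB of indices-level stats each -> ~120 MB total
six_nodes = [20_000_000] * 6
assert peak_allocation(six_nodes, chunked=False) == 120_000_000
assert peak_allocation(six_nodes, chunked=True) == 20_000_000
```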
relates #89838