Convert RestGetMappingAction to chunked encoding #89906
original-brownbear merged 5 commits into elastic:main from original-brownbear:chunked-encoding-get-mappings
Conversation
Straightforward conversion of this endpoint, plus a basic test for a response large enough to chunk into a few pieces.

Relates #89838
Pinging @elastic/es-distributed (Team:Distributed)
    RestStatus.OK,
    ChunkedRestResponseBody.fromXContent(
        () -> Iterators.concat(
            Iterators.single((b, p) -> b.startObject()),
original-brownbear:
I'm not quite sure why the GetMappingsResponse is a fragment and why we had the complicated wrapping here, but I didn't want to change that in this PR, so I manually wrapped it for now.
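For readers following along, here is a minimal sketch of what that manual wrapping might look like when the excerpt above is completed. The enclosing RestResponse call, the trailing fromXContent arguments, and the toXContentChunked() accessor are assumptions for illustration, not the PR's exact code:

```java
// Sketch only: wrap the fragment-style GetMappingsResponse in its own object so the
// chunked body emits well-formed JSON. Names outside the quoted excerpt are assumed.
return new RestResponse(
    RestStatus.OK,
    ChunkedRestResponseBody.fromXContent(
        () -> Iterators.concat(
            Iterators.single((b, p) -> b.startObject()),   // open the outer object
            getMappingsResponse.toXContentChunked(),       // the fragment itself (assumed accessor)
            Iterators.single((b, p) -> b.endObject())      // close the outer object
        ),
        request,  // assumed trailing arguments
        channel
    )
);
```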
    @LuceneTestCase.SuppressFileSystems(value = "HandleLimitFS") // we sometimes have >2048 open files
    public class RestGetMappingsIT extends HttpSmokeTestCase {

        public void testGetLargeMappingsResponse() throws Exception {
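The excerpt only shows the test's signature. As a rough sketch of what such a test could do (the index count, naming, and assertions below are illustrative guesses, not the PR's actual code):

```java
// Illustrative sketch only: create enough indices that the serialized mappings response
// is large enough to be split across several chunks, then fetch it over HTTP.
public void testGetLargeMappingsResponse() throws Exception {
    for (int i = 0; i < 100; i++) {        // index count is an arbitrary choice
        createIndex("test-index-" + i);
    }
    final Response response = getRestClient().performRequest(new Request("GET", "/_mapping"));
    assertEquals(200, response.getStatusLine().getStatusCode());
}
```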
DaveCTurner:
Hmm I think if we approach testing chunked encoding like this across all APIs we'll end up adding a lot of costly tests. Could we instead reduce the chunk size in these smoke tests to force chunking at a more reasonable scale?
Also I think this test passes even without the production code changes. Can we assert that the REST response really did arrive in multiple chunks?
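One lightweight way to check at least that the response went out with chunked transfer encoding, rather than a fixed Content-Length, might be a header assertion like the sketch below. This does not prove the body spanned multiple chunks, and whether the client surfaces the header this way is an assumption:

```java
// Rough idea only: a chunked REST response should carry "Transfer-Encoding: chunked".
Response response = getRestClient().performRequest(new Request("GET", "/_mapping"));
assertEquals("chunked", response.getHeader("Transfer-Encoding"));
```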
original-brownbear:
Hmm, now that I think about it more, maybe this test is pointless. I really wanted a test that would actually cause a round of chunked encoding where the channel becomes not-writable at least once, but on reflection that's probably a redundant test ... I'll just remove this and rely on the fact that this is covered by the existing REST tests (which will now see a chunked response) and our Netty-specific tests around flushing messages. I think that's good enough here. Sorry for the noise.
DaveCTurner:
Maybe I'm confused, but I was expecting that we'd send the response in chunks regardless of writability. I think it's worth checking that, but MAX_BYTES_PER_WRITE = 1 << 18 is just a bit big for a test.
original-brownbear:
Yea, the response is always chunked, but the fact that it serializes fine is already tested elsewhere. Adding a test that verifies we actually get chunked bytes back seems quite redundant, doesn't it? Unless someone wilfully changes the code to revert back to a normal response (which I'll make very hard shortly, PR incoming :)), this doesn't guard against anything, does it?
DaveCTurner:
Ehh maybe I'm paranoid but I could imagine some future refactoring which accidentally puts all the objects into a single chunk, quietly destroying the memory-efficiency of this API.
original-brownbear:
That makes sense :) How about b6517df to make sure that won't happen?
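The referenced commit isn't shown here. As a rough illustration of the kind of guard being discussed, a unit test could count the chunks a large response emits and assert there is more than one; the toXContentChunked() accessor, the helper, and the threshold below are assumptions, not the commit's actual content:

```java
// Illustrative sketch: assert that a response covering many indices is not collapsed
// into a single chunk, which would quietly lose the memory benefit of chunking.
public void testLargeResponseYieldsManyChunks() {
    GetMappingsResponse response = responseCoveringManyIndices(500); // hypothetical helper
    Iterator<ToXContent> chunks = response.toXContentChunked();      // assumed accessor
    int count = 0;
    while (chunks.hasNext()) {
        chunks.next();
        count++;
    }
    assertTrue("expected roughly one chunk per index", count >= 500); // threshold is an assumption
}
```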
DaveCTurner left a comment:
Yes I think that's good enough, especially if you have some other change in progress to make it impossible to accidentally skip chunking entirely.
Thanks David!