http: fully close streams on remote close in AsyncClient#8467
mattklein123 merged 4 commits into envoyproxy:master
Conversation
Signed-off-by: Mike Schore <mike.schore@gmail.com>
mattklein123 left a comment
Thanks, this makes sense, but please add the crashing tests that this PR fixes.
/wait
Added tests that cover:
(2) is what ultimately triggers a crash in the router.
mattklein123 left a comment
LGTM with a small question. Thank you!
/wait-any
ASSERT(!remote_closed_);
stream_callbacks_.onHeaders(std::move(headers), end_stream);
closeRemote(end_stream);
// At present, the router cleans up stream state as soon as the remote is closed, making a
qq: technically, if local is not closed in all of these cases, shouldn't we effectively reset the stream? I think that would be the most correct thing to do. What's the reasoning for not doing so? Would other code need to change? Should we TODO that more correct behavior?
In some ways yes, resetting would be more correct - technically speaking it's the only way for one side to signal bi-directional closure of the stream. (In fact in Envoy Mobile, when we receive a local response that we interpret as an error, we already do issue a stream reset, triggering cleanup, for that very reason.)
But yes, it would require more code changes to do that here (pretty sure both here in Envoy, and externally where we're relying on this.)
And, everything else (AsyncRequestImpl, Grpc::AsyncClient, the HttpConnectionManager, the Router) currently functions around the idea that a complete response represents the conditions for graceful cleanup. So this seems more in line with current behavior.
Ideally, at some point I think going in and fixing things up so that half-open streams are safe and supported would be the best path forward, since I think it would actually reduce edge cases. But that's a bigger re-evaluation of the L7 logic and codec interaction.
As another aside, one possibility I mentioned to @junr03 is that once we're on a real filter chain, we could add an actual error callback pathway, and it could be a filter that maps errors to local responses. This could allow for better signaling depending on whether we're in library or server mode, and could eliminate a lot of these asymmetric cases as well.
OK, can you add more comments in the code capturing this conversation? Instead of duplicating the same comment multiple times, you can have all of the comments but the first refer back to the first one. Thank you.
/wait
Sure thing, updated. Let me know if you think this provides enough context.
Signed-off-by: Mike Schore <mike.schore@gmail.com>
thanks @mattklein123!
Pulls in the fix from envoyproxy/envoy#8467. Signed-off-by: Michael Rebello <me@michaelrebello.com>
…8467) Signed-off-by: Mike Schore <mike.schore@gmail.com>
Pulls in the fix from #8467. Signed-off-by: Michael Rebello <me@michaelrebello.com> Signed-off-by: JP Simard <jp@jpsim.com>
Description: The rest of Envoy currently drops stream state on remote closure, regardless of the state of local closure. Where necessary, pending local send calls are also silently swallowed. This updates AsyncStreamImpl in Http::AsyncClient to conform to the same behavior, which keeps things consistent and prevents half-closed streams from crashing the server.
Risk Level: moderate - but this behavior is already followed everywhere else
Testing: compiled and tested locally
Signed-off-by: Mike Schore mike.schore@gmail.com