HTTP: Associate Response crashes with the corresponding Request. #15029
mattklein123 merged 32 commits into envoyproxy:main
Conversation
Signed-off-by: Kevin Baichoo <kbaichoo@google.com>
/assign @antoniovicente

Tests are failing due to another PR #14924 which I'll need to merge in.
antoniovicente
left a comment
Looks pretty good. Just some things about RTTI use.
if (stream->stream_id_ == current_stream_id_.value()) {
  ClientStreamImpl* client_stream = static_cast<ClientStreamImpl*>(stream.get());
  Router::UpstreamToDownstream* upstream_to_downstream =
      dynamic_cast<Router::UpstreamToDownstream*>(&client_stream->response_decoder_);
Should we consider adding a dumpState method to Http::ResponseDecoder that can be implemented by Router::UpstreamToDownstream instead of doing this dynamic cast? Google style avoids this kind of use of rtti.
Yes, that's a good idea. I'll have Http::ResponseDecoder have such a method.
os << spaces << "Dumping corresponding downstream request for upstream stream "
   << current_stream_id_.value() << ":\n";

for (auto& stream : active_streams_) {
Consider ConnectionImpl::getStream instead.
That said, I can see iterating over active_streams_ being lower-dependency and thus preferable when running from a crash handler.
That's a good suggestion.
I'm a bit torn on this: linked lists are terrible performance-wise, and we're iterating over the streams when we could just grab the one we want from the map.
Given that the max number of concurrent streams is configurable, and a reasonable value is likely around 100, iterating over 100 streams isn't too bad. If this grows significantly, though, then getStream becomes the clear winner.
Given the expected size of ~100 and the fewer dependencies (i.e. it doesn't depend on nghttp2's map), I think iterating is worth the trade-off. I'll add a TODO here to consider switching to getStream if we have thousands of active streams in a connection.
Thoughts?
I would use getStream.
const auto addressProvider = upstream_to_downstream->connection().addressProviderSharedPtr();
const Http::RequestHeaderMap* request_headers = upstream_to_downstream->downstreamHeaders();
DUMP_DETAILS(addressProvider);
DUMP_DETAILS(request_headers);
Would be better to have ResponseDecoder dumpState dump these details.
100% agree. Will do.
KBaichoo
left a comment
Thanks for the feedback @antoniovicente.
Made the changes discussed in the review threads, and will open a PR with just these changes now that #14923 is in.
Just do a main merge, we shouldn't need to start a new PR from scratch.
Merged, thanks @snowp!

/retest macos timed out

Retrying Azure Pipelines:

/retest macos was cancelled again
mattklein123
left a comment
I think this needs a release notes fix, thank you.
/wait
Retrying Azure Pipelines:
Can you merge main also? It will likely fix OSX CI and we can make sure the release notes look good.
Merged
* grpc_json_transcoder: added :ref:`request_validation_options <envoy_v3_api_field_extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder.request_validation_options>` to reject invalid requests early.
* grpc_json_transcoder: filter can now be configured on per-route/per-vhost level as well. Leaving empty list of services in the filter configuration disables transcoding on the specific route.
* http: added support for `Envoy::ScopeTrackedObject` for HTTP/1 and HTTP/2 dispatching. Crashes while inside the dispatching loop should dump debug information.
* http: added an option to not add content-length: 0 for requests which should not have bodies. This behavior can be enabled by setting `envoy.reloadable_features.dont_add_content_length_for_bodiless_requests` true.
I think you still have a merge issue? Why is this being added?
/wait
Ah, it was originally in new features but got moved to minor behavior changes on line 45.
/retest

Retrying Azure Pipelines:

/retest N-th time's the charm, macOS? 🤞

Retrying Azure Pipelines:
Commit Message: Implement dumping corresponding request for upstream response crashes.
Additional Description: We now track the stream that's being processed for H2 at the first callback that nghttp2 invokes, and on a crash for a client stream we dump the state of the responseDecoder (which is an UpstreamRequest) to dump the originating Request. Handles the cases for H2 and H1.
Risk Level: Medium
Testing: Unit tests
Docs Changes: N/A
Release Notes: Included
Platform Specific Features: N/A
Related Issue: #14249