rds: moving RdsRouteConfigProvider to Server namespace #1370
Closed
Conversation
htuch (Member) approved these changes on Aug 2, 2017:

LGTM assuming no code change. Can you comment on what the motivation is here? Asking as I'm curious if there is any intersection with the v2 API work to be aware of.
mattklein123 (Member):

In looking at this independently, I'm not actually sure why we need to move this code to the server namespace. Why not just leave it in router and implement the global route manager there? Then we can just drop this change?
Author (Member):

@mattklein123 agreed. I have updated #1345 to that effect.
rshriram pushed a commit to rshriram/envoy that referenced this pull request on Oct 30, 2018:

…ributes (envoyproxy#1372)

Automatic merge from submit-queue. Support Mixer HTTP filter to report sent.bytes and received.bytes attributes.

**What this PR does / why we need it**: Support Mixer HTTP filter to send attributes "sent.bytes" and "received.bytes" in Report() calls. "sent.bytes" records the total response sent in bytes, including response headers, body, and trailers. "received.bytes" records the total request received in bytes, including request headers, body, and trailers.

**Which issue this PR fixes**: fixes envoyproxy#1370

**Release note**:
```release-note
NONE
```
jpsim pushed a commit that referenced this pull request on Nov 28, 2022:

Description: jni: ensure jvm_on_engine_running handles null
Risk Level: low
Testing: unit
Docs Changes: n/a
Release Notes: n/a

Signed-off-by: Alan Chiu <achiu@lyft.com>
Signed-off-by: JP Simard <jp@jpsim.com>
jpsim pushed a commit that referenced this pull request on Nov 29, 2022, with the same message as the previous commit.
mathetake pushed a commit that referenced this pull request on Mar 3, 2026:
**Description**
This PR fixes a bug where streaming chat completion requests from an OpenAI-compatible client to the GCP Anthropic backend were failing. Although the API returned a 200 OK status, the response was a non-streaming JSON object instead of a text/event-stream, so the client received an empty stream and our integration test assertions failed:

```
> assert model_output, f"Received empty {model_output=}"
E AssertionError: Received empty model_output=''
E assert ''
```

After adding debug logs, the response headers and the output message showed that the response was not streamed, even though the request was sent to the streaming URL. It appears you can send to the streaming URL, but if the stream parameter isn't set in the body, the parameter takes precedence.
The header output:

```
{
  "time": "2025-10-14T16:02:11.407-04:00",
  "level": "DEBUG",
  "msg": "response headers processing",
  "response_headers": "headers:{key:\":status\" raw_value:\"200\"} headers:{key:\"content-type\" raw_value:\"application/json\"} headers:{key:\"server\" raw_value:\"hypercorn-h11\"} ..."
}
```
A second log shows the entire response body arriving in a single chunk with end_of_stream:true. This confirmed that instead of streaming the response piece by piece, the server sent the complete final message all at once:

```
{
  "time": "2025-10-14T16:02:11.408-04:00",
  "level": "DEBUG",
  "msg": "response body processing",
  "request": "response_body:{body:\"{\\\"id\\\":\\\"msg_vrtx_019F694kiwv6Z5BQApos5MJy\\\",\\\"type\\\":\\\"message\\\",\\\"role\\\":\\\"assistant\\\", ... ,\\\"content\\\":[{\\\"type\\\":\\\"text\\\",\\\"text\\\":\\\"I'm doing well, thank you for asking! I'm here and ready to help with whatever you'd like to chat about or work on. How are you doing today?\\\"}],\\\"stop_reason\\\":\\\"end_turn\\\", ... }\" end_of_stream:true}"
}
```
The root cause was the missing "stream": true field in the JSON payload sent to GCP's :streamRawPredict endpoint. This PR modifies the OpenAIToGCPAnthropicTranslator to conditionally add this field to the request body when the original client request is for a stream. For context, this appears to be how Anthropic's Go SDK handles it as well (https://github.com/anthropics/anthropic-sdk-go/blob/e8befdc7fdceba33c9000b0b50061b8a42cb6c04/message.go#L86): stream is not a field in the message param object.
The Fix:
The solution implemented in this PR is to ensure the translated request body always has "stream": true for streaming requests:

- Modify RequestBody: The RequestBody function in openaittogcpanthropic.go has been updated to check whether the incoming openAIReq.Stream is true.
- Inject Stream Field: If streaming is requested, we use the sjson library to inject the "stream": true key-value pair into the final JSON payload before it is sent to the GCP endpoint.
- Add Unit Test: The existing unit test "Streaming Request Path" has been renamed to "Streaming Request Path and Body", and an assertion has been added to verify that the stream field is correctly set to true in the marshaled body, preventing future regressions.
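The injection step can be sketched as follows. The PR itself uses the sjson library on the already-marshaled payload; the sketch below uses only encoding/json to stay dependency-free, and the function name `addStreamField` and the sample body are illustrative, not the actual translator code.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// addStreamField mimics the translator fix: when the original OpenAI
// request asks for streaming, inject "stream": true into the marshaled
// Anthropic request body before it is sent to the GCP :streamRawPredict
// endpoint. Non-streaming bodies pass through unchanged.
func addStreamField(body []byte, stream bool) ([]byte, error) {
	if !stream {
		return body, nil
	}
	var m map[string]any
	if err := json.Unmarshal(body, &m); err != nil {
		return nil, err
	}
	m["stream"] = true
	// encoding/json marshals map keys in sorted order.
	return json.Marshal(m)
}

func main() {
	body := []byte(`{"model":"claude-3","max_tokens":64}`)
	out, err := addStreamField(body, true)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // stream field now present in the payload
}
```

Because the upstream endpoint treats the body parameter as authoritative over the URL, setting the field in the translated body is what actually switches the backend into streaming mode.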
---------
Signed-off-by: Alexa Griffith <agriffith50@bloomberg.net>
Signed-off-by: Sukumar Gaonkar <sgaonkar4@bloomberg.net>
Signed-off-by: Dan Sun <dsun20@bloomberg.net>