🔥 feat: Add support for streaming response bodies in client and response handling #4014
Conversation
**Note:** CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

**Walkthrough:** Adds client-level streaming controls and response streaming consumption: a Client getter/setter for streaming, transport plumbing to propagate the flag across transport types, Response APIs to read and inspect streaming bodies, and core handling changes to preserve streaming during response handling.
**Sequence diagram(s):**

```mermaid
sequenceDiagram
    participant User
    participant Client
    participant Transport
    participant Fasthttp as fasthttp.Client
    participant Response
    User->>Client: SetStreamResponseBody(true)
    Client->>Transport: SetStreamResponseBody(true)
    Transport->>Fasthttp: enable StreamResponseBody()
    User->>Client: Do(request)
    Client->>Transport: Do(req)
    Transport->>Fasthttp: Do(req)
    Fasthttp-->>Transport: streaming Response
    Transport-->>Client: Response (RawResponse references fasthttp.Response)
    User->>Response: BodyStream()
    Response-->>User: io.Reader (stream or fallback)
```
```mermaid
sequenceDiagram
    participant Core
    participant FasthttpResp as fasthttp.Response
    participant FiberResp as Fiber Response
    participant Defer as Defer Cleanup
    Core->>FasthttpResp: obtain respv
    Core->>FiberResp: swap resp.RawResponse <-> respv
    Core->>Defer: defer ReleaseResponse(respv) (nil-guarded)
    Core-->>FiberResp: return Response with RawResponse referencing fasthttp resp
    Defer-->>FasthttpResp: Release original respv when done
```
**Estimated code review effort:** 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (2 warnings)
**Summary of Changes**

Hello @ReneWerner87, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request introduces a significant feature: comprehensive support for streaming response bodies in the client. The primary goal is to improve performance and resource utilization when dealing with large HTTP responses or server-sent events by allowing the response body to be processed as a stream rather than being fully buffered in memory. This is achieved through new client configuration options, dedicated response methods for stream access, and careful integration with the underlying fasthttp library.
**Codecov Report**

✅ All modified and coverable lines are covered by tests.

```
@@             Coverage Diff             @@
##             main    #4014      +/-   ##
==========================================
+ Coverage   91.07%   91.20%   +0.13%
==========================================
  Files         119      119
  Lines       10946    11000      +54
==========================================
+ Hits         9969    10033      +64
+ Misses        618      611       -7
+ Partials      359      356       -3
```

☔ View full report in Codecov by Sentry.
Code Review
This pull request introduces support for streaming response bodies, which is a valuable feature for handling large responses efficiently. The implementation is well-structured, with appropriate changes across the client, transport, core, and response components. The new BodyStream() method on the Response is particularly well-designed with its fallback for non-streaming cases. The accompanying tests are mostly comprehensive. I've provided a couple of suggestions to consolidate redundant tests and enhance coverage for the load-balancing client scenario.
Pull request overview
This PR adds support for streaming response bodies in the Fiber HTTP client, allowing responses to be read as streams rather than being fully loaded into memory. This addresses issue #3425 and replaces PR #3711, providing the same functionality that exists in the underlying fasthttp library since 2022.
Changes:
- Added streaming configuration methods at the client level (SetStreamResponseBody/StreamResponseBody)
- Implemented streaming support across all transport types (standard, host, and load-balanced clients)
- Enhanced Response API with BodyStream() and IsStreaming() methods
- Modified response handling logic to preserve body streams during request execution
- Updated Save() method to use streaming when available
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 2 comments.
Summary per file:
| File | Description |
|---|---|
| client/transport.go | Added StreamResponseBody interface methods and implementations for all transport adapters |
| client/transport_test.go | Added comprehensive tests for streaming configuration across all transport types |
| client/client.go | Added public API methods for configuring streaming at the client level |
| client/client_test.go | Added tests for client-level streaming configuration with different transport types |
| client/response.go | Added BodyStream() and IsStreaming() methods; updated Save() to use streaming |
| client/response_test.go | Added extensive tests for streaming functionality and fallback behavior |
| client/core.go | Modified response handling to swap fasthttp responses instead of copying, preserving streams |
| client/core_test.go | Updated mock transport to implement new streaming interface methods |
@gaby @grivera64 can you check?
Actionable comments posted: 1
In `@client/core_test.go`, around lines 442-478: the test currently reads receivedContentLength and receivedBody (set inside the Fiber handler registered via app.Post("/check-length", ...)) from the main goroutine without synchronization, causing data races. Modify the test to create a channel (e.g., chan struct{ Body string; Len int }) in the test scope, have the handler send the captured values into that channel instead of writing shared variables, and have the test goroutine receive from the channel before doing the require.Equal assertions (apply the same channel pattern to the similar block around lines 485-525). This keeps AcquireRequest, New, req.Send, and the handler logic intact while eliminating the race by synchronizing via the channel.
gaby left a comment:
LGTM overall, just one small thing in the docs.
We could also use io.CopyBuffer with a bytebufferpool in some places.
Replaces #3711
Solves #3425