feat: add support for client reports #1192
Conversation
- create global aggregator
- add data types for outcomes, discard events & client report
- create discard reason enum

add functionality to:
- attach client reports to existing envelopes
- for the async transports, the background routines also get invoked with a ticker to periodically send client reports
Semver Impact of This PR: 🟡 Minor (new features)

📋 Changelog Preview
This is how your changes will appear in the changelog.

New Features ✨
Internal Changes 🔧

🤖 This preview updates automatically when you update the PR.
Cursor Bugbot has reviewed your changes and found 2 potential issues.
Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable autofix in the Cursor dashboard.
sl0thentr0py
left a comment
just some notes for understanding, lgtm otherwise
```go
limits := ratelimit.FromResponse(response)

// Drain body up to a limit and close it, allowing the transport to reuse TCP connections.
_, _ = io.CopyN(io.Discard, response.Body, MaxDrainResponseBytes)
```
what is this lol
is the drain in @internal/util/util.go shady
⎿ Read internal/util/util.go (120 lines)
⏺ Not shady — it's a standard Go pattern. Here's why it exists:
The problem: Go's http.Client reuses TCP connections ("keep-alive") for performance, but it can only reuse a connection if the previous response body was fully read and closed. If you close without reading, Go discards the connection and opens a new one for the next request, which is slower.
What the code does:
```go
_, _ = io.CopyN(io.Discard, response.Body, MaxDrainResponseBytes) // read up to 16KB, throw it away
response.Body.Close()                                             // then close
It reads up to 16KB of leftover response data into the void, then closes the body. This lets Go recycle the TCP connection.
The 16KB cap (16 << 10) is a safety net — if the server sends back a huge response (e.g. an HTML error page), it won't waste time/memory reading all of it. It just gives up connection reuse for that one response.
The _, _ = part intentionally discards both the byte count and any error. That's fine here because:
- The data itself is unwanted (already logged what was needed in HandleHTTPResponse)
- If the drain fails, the worst outcome is a non-reused connection — not a correctness issue
This is a well-known Go idiom. You'll see it in the Go standard library docs and most HTTP client code.
The first part is that for Go to reuse the TCP connection we need to drain and close the response body, so we just throw away all remaining bytes.
The second part, with MaxDrainResponseBytes, already existed in the SDK as a pattern. I'm not really convinced about its use (since we don't really expect Relay to return a malicious response), but it's what the LLM suggests: just a safety limit on how many response bytes get drained.
```go
}

if !options.DisableClientReports {
	a := report.NewAggregator()
```
I understand that the aggregator has the same functions as both the recorder and the provider, but then why make those interfaces in the first place? This is very confusing for a non-Go person.
It's mostly separation of concerns, and not exposing the take-reports functionality to a layer that doesn't need it.
Description
add support for client reports.
Issues