
fix: prevent unbounded memory allocation in audit log middleware #94

Merged: SantiagoDePolonia merged 2 commits into main from fix/unbounded-momory-allocation on Feb 25, 2026

Conversation

@SantiagoDePolonia (Contributor) commented Feb 25, 2026

Use io.LimitReader to enforce MaxBodyCapture on request body reads regardless of Content-Length header, fixing OOM risk from chunked requests. Also fix response body capture to precisely cap buffer size on large single-write calls.

Summary by CodeRabbit

  • Tests

    • Added comprehensive tests for request and response body capture, including handling of oversized bodies and truncation scenarios.
  • Bug Fixes

    • Improved handling of request and response bodies exceeding size limits with proper truncation detection.
    • Enhanced downstream data preservation when bodies exceed capture limits.

Use io.LimitReader to enforce MaxBodyCapture on request body reads
regardless of Content-Length header, fixing OOM risk from chunked
requests. Also fix response body capture to precisely cap buffer size
on large single-write calls.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@coderabbitai (Contributor) commented Feb 25, 2026

Warning

Rate limit exceeded

@SantiagoDePolonia has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 17 minutes and 5 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.


📥 Commits

Reviewing files that changed from the base of the PR and between dfc45b8 and 89bd973.

📒 Files selected for processing (2)
  • internal/auditlog/auditlog_test.go
  • internal/auditlog/middleware.go
📝 Walkthrough

The changes expand test coverage for request and response body capture in the audit logging middleware and refactor the body handling logic to properly manage size constraints with truncation flags, preventing memory issues when bodies exceed the configured maximum size.

Changes

  • Test Coverage for Body Capture (internal/auditlog/auditlog_test.go): Added 155 lines of comprehensive tests, including TestResponseBodyCapture_Write_SingleLargeChunk, TestResponseBodyCapture_Write_MultipleChunksOverflow, and TestLimitedReaderRequestBodyCapture with subtests for under-limit and over-limit scenarios, plus a discardWriter helper type.
  • Request and Response Body Handling (internal/auditlog/middleware.go): Refactored request body capture to use io.LimitReader for overflow detection, set the RequestBodyTooBigToHandle flag when exceeding MaxBodyCapture, and restore the body for downstream handlers. Updated response capture logic to support truncation flags that prevent further writes beyond the size limit.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Poem

🐰 Hop, skip, and bounded by size,
Our bodies now captured with careful reprise,
Tests spring forth like clover in rows,
Truncation flags bloom where the limit still grows!
Memory safe, with whiskers held high, 🎉

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 16.67%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: the title accurately and concisely describes the main objective of the changeset, preventing unbounded memory allocation in the audit log middleware through request/response body capture limits.



@SantiagoDePolonia (Contributor, Author) commented:

@coderabbitai review

@coderabbitai (Contributor) commented Feb 25, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/auditlog/auditlog_test.go`:
- Around line 774-795: Add a subtest in the same test ("chunked request body
over limit sets flag and preserves downstream body") that uses a
trackingReadCloser implementing io.ReadCloser (with a closed bool) as the
original req.Body, then drive the overflow reconstruction path that creates
limitedReader, reads bodyBytes, sets entry.Data.RequestBodyTooBigToHandle = true
and reassigns req.Body = io.NopCloser(io.MultiReader(bytes.NewReader(bodyBytes),
req.Body)); after calling req.Body.Close() assert that the underlying
trackingReadCloser.closed is true to verify Close() propagation from the rebuilt
body to the original reader.

In `@internal/auditlog/middleware.go`:
- Around line 95-99: When rebuilding the request body for overflow handling in
internal/auditlog/middleware.go you must preserve the original body closer
instead of losing it via io.NopCloser; capture the original req.Body (e.g.
origBody := req.Body), build the combined reader with
io.MultiReader(bytes.NewReader(bodyBytes), origBody) and then set req.Body to a
custom io.ReadCloser that delegates Read to that MultiReader and Close to
origBody.Close() (implement a small struct with Reader field and rc
io.ReadCloser and forward Read/Close). Update the branch that sets
entry.Data.RequestBodyTooBigToHandle to use this wrapper so downstream Close
semantics are preserved.

ℹ️ Review info

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between aaa5e24 and dfc45b8.

📒 Files selected for processing (2)
  • internal/auditlog/auditlog_test.go
  • internal/auditlog/middleware.go

Copilot AI left a comment

Pull request overview

This PR aims to prevent unbounded memory allocation in the audit log middleware by using io.LimitReader to enforce MaxBodyCapture on request body reads and fixing response body capture to precisely cap buffer size. However, the implementation contains critical bugs that would break request handling for large bodies.

Changes:

  • Modified request body capture to use io.LimitReader for enforcing memory limits
  • Fixed response body Write method to precisely cap buffer at MaxBodyCapture
  • Added comprehensive tests for both request and response body capture scenarios

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

File / Description

  • internal/auditlog/middleware.go: Updated request body capture logic with io.LimitReader and fixed the response body Write method for precise memory capping
  • internal/auditlog/auditlog_test.go: Added tests for response body capture edge cases and io.LimitReader memory limiting behavior


Comment on lines +92 to +98

```diff
 	limitedReader := io.LimitReader(req.Body, MaxBodyCapture+1)
 	bodyBytes, err := io.ReadAll(limitedReader)
 	if err == nil {
-		// Parse JSON to interface{} for native BSON storage in MongoDB
-		var parsed interface{}
-		if jsonErr := json.Unmarshal(bodyBytes, &parsed); jsonErr == nil {
-			entry.Data.RequestBody = parsed
-		} else {
-			// Fallback: store as valid UTF-8 string if not valid JSON
-			entry.Data.RequestBody = toValidUTF8String(bodyBytes)
+		if int64(len(bodyBytes)) > MaxBodyCapture {
+			entry.Data.RequestBodyTooBigToHandle = true
+			// Reconstruct full body for downstream: read bytes + unread remainder
+			req.Body = io.NopCloser(io.MultiReader(bytes.NewReader(bodyBytes), req.Body))
```
Copilot AI commented Feb 25, 2026

Critical bug: When the body exceeds MaxBodyCapture, line 98 attempts to reconstruct the full body using io.MultiReader with req.Body. However, req.Body has already been consumed by the io.ReadAll(limitedReader) call on line 93. The limitedReader wraps req.Body, so reading from it consumes the underlying req.Body stream. After reading MaxBodyCapture+1 bytes, req.Body is at EOF, making the io.MultiReader ineffective at reconstructing the full body for downstream handlers. This will cause the downstream handler to only see the first MaxBodyCapture+1 bytes, leading to incomplete request data and potential handler failures.

Comment on lines +774 to +803

```go
t.Run("chunked request body over limit sets flag and preserves downstream body", func(t *testing.T) {
	// Create a body larger than MaxBodyCapture
	largeBody := strings.Repeat("x", int(MaxBodyCapture)+100)
	req, _ := http.NewRequest("POST", "/v1/chat/completions", strings.NewReader(largeBody))
	req.ContentLength = -1 // Simulate chunked encoding

	entry := &LogEntry{Data: &LogData{}}

	limitedReader := io.LimitReader(req.Body, MaxBodyCapture+1)
	bodyBytes, err := io.ReadAll(limitedReader)
	if err != nil {
		t.Fatalf("unexpected read error: %v", err)
	}

	if int64(len(bodyBytes)) <= MaxBodyCapture {
		t.Fatal("body should exceed limit")
	}

	entry.Data.RequestBodyTooBigToHandle = true
	// Reconstruct body for downstream
	req.Body = io.NopCloser(io.MultiReader(bytes.NewReader(bodyBytes), req.Body))

	// Verify downstream can read the full body
	downstream, err := io.ReadAll(req.Body)
	if err != nil {
		t.Fatalf("downstream read error: %v", err)
	}
	if len(downstream) != len(largeBody) {
		t.Errorf("downstream body length mismatch: expected %d, got %d", len(largeBody), len(downstream))
	}
```
Copilot AI commented Feb 25, 2026

Test logic error: This test expects to read the full body downstream (line 801), but the test setup doesn't actually create a scenario where this would work. After reading MaxBodyCapture+1 bytes from limitedReader on line 783, req.Body is consumed. The io.MultiReader on line 794 will only reconstruct MaxBodyCapture+1 bytes (what was read), not the full largeBody. This test would fail if executed correctly, as downstream will only contain MaxBodyCapture+1 bytes, not the full body length. The test appears to validate flawed logic rather than correct behavior.

Comment on lines +93 to 111
```diff
 	bodyBytes, err := io.ReadAll(limitedReader)
 	if err == nil {
-		// Parse JSON to interface{} for native BSON storage in MongoDB
-		var parsed interface{}
-		if jsonErr := json.Unmarshal(bodyBytes, &parsed); jsonErr == nil {
-			entry.Data.RequestBody = parsed
-		} else {
-			// Fallback: store as valid UTF-8 string if not valid JSON
-			entry.Data.RequestBody = toValidUTF8String(bodyBytes)
-		}
+		if int64(len(bodyBytes)) > MaxBodyCapture {
+			entry.Data.RequestBodyTooBigToHandle = true
+			// Reconstruct full body for downstream: read bytes + unread remainder
+			req.Body = io.NopCloser(io.MultiReader(bytes.NewReader(bodyBytes), req.Body))
+		} else if len(bodyBytes) > 0 {
+			// Parse JSON to interface{} for native BSON storage in MongoDB
+			var parsed interface{}
+			if jsonErr := json.Unmarshal(bodyBytes, &parsed); jsonErr == nil {
+				entry.Data.RequestBody = parsed
+			} else {
+				// Fallback: store as valid UTF-8 string if not valid JSON
+				entry.Data.RequestBody = toValidUTF8String(bodyBytes)
+			}
+			// Restore the body for the handler
+			req.Body = io.NopCloser(bytes.NewBuffer(bodyBytes))
+		}
-		// Restore the body for the handler
-		req.Body = io.NopCloser(bytes.NewBuffer(bodyBytes))
 	}
```
Copilot AI commented Feb 25, 2026
Missing error handling: When io.ReadAll returns an error (line 93), the code silently skips body restoration (line 94 checks if err == nil). This leaves req.Body in a consumed state, causing downstream handlers to fail when they try to read the body. The body should be restored even in error cases to ensure downstream handlers can still process the request. Consider using io.TeeReader or storing the original body before reading, so it can be restored regardless of read success or failure.

Replace io.NopCloser with combinedReadCloser so that closing the
reconstructed request body propagates Close to the original reader,
preventing potential resource leaks (network connections, file handles).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@SantiagoDePolonia SantiagoDePolonia merged commit 9c17324 into main Feb 25, 2026
12 checks passed
@SantiagoDePolonia SantiagoDePolonia deleted the fix/unbounded-momory-allocation branch March 22, 2026 14:24
