feat(usage): add cache analytics support #196
Conversation
⚠️ Caution: Review failed — the pull request was closed or merged during review.
📝 Note: Reviews paused — this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior is configurable in the review settings.
📝 Walkthrough

Adds end-to-end cache analytics: a new GET /admin/api/v1/cache/overview handler (gated by CACHE_ENABLED), dashboard state/UI and chart wiring for cache metrics, cache-hit recording that emits synthetic usage entries, and storage/schema and reader support for …
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as Dashboard Client
    participant Handler as Admin Handler
    participant UsageReader as Usage Reader
    participant DB as Database
    Client->>Handler: GET /admin/api/v1/cache/overview (date range & interval)
    Handler->>Handler: Check runtime config CACHE_ENABLED
    alt CACHE_ENABLED != "on"
        Handler-->>Client: HTTP 503
    else CACHE_ENABLED == "on"
        Handler->>UsageReader: GetCacheOverview(ctx, params) (forces CacheMode="cached")
        UsageReader->>DB: Run aggregated queries (summary & interval)
        DB-->>UsageReader: Summary + Daily series
        UsageReader-->>Handler: CacheOverview{Summary, Daily}
        Handler-->>Client: HTTP 200 JSON
        Client->>Client: Render cache cards & charts (if feature-flagged)
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 12
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/admin/handler_test.go`:
- Around line 1224-1262: Add a test e.g.
TestCacheOverview_ReturnsErrorWhenReaderFails that sets mockUsageReader.cacheErr
= errors.New("boom"), creates the handler via NewHandler(...,
WithDashboardRuntimeConfig(...)), calls h.CacheOverview with
newHandlerContext("/admin/api/v1/cache/overview?days=30"), and asserts the
recorder returns a non-200 code (expect http.StatusInternalServerError) and the
response body contains the API error payload (unmarshal or assert it contains an
"error" field/string) to ensure reader errors map to the correct HTTP status and
error shape; reference mockUsageReader.cacheErr, NewHandler, h.CacheOverview,
and newHandlerContext.
In `@internal/admin/handler.go`:
- Line 166: The new query parameter params.CacheMode and the new route
/admin/api/v1/cache/overview are not documented for Swagger; update the handler
function comment (the Swagger docblock for the cache overview handler) to add
`@Param` for "cache_mode" (query, string, optional, description), `@Success` 200
with the appropriate response model for the overview, `@Failure` 503 for the
service-unavailable response, and a matching `@Router` entry for
/admin/api/v1/cache/overview (GET); also add the same annotations to the other
cache-related handler comment block referenced around lines 426-455 so generated
docs/SDKs include the filter and the 200/503 contract.
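A hedged sketch of what the requested docblock might look like, in swaggo-style annotations; the handler name, response model, and error type below are assumptions and should be matched to the actual code:

```go
// CacheOverview returns aggregated cache analytics for the dashboard.
//
// @Summary      Cache overview
// @Tags         cache
// @Param        days        query  int     false  "Number of days to aggregate"
// @Param        interval    query  string  false  "Bucket interval (e.g. day, week)"
// @Param        cache_mode  query  string  false  "Filter by cache mode (e.g. cached)"
// @Success      200  {object}  usage.CacheOverview
// @Failure      503  {object}  APIError  "Returned when CACHE_ENABLED is off"
// @Router       /admin/api/v1/cache/overview [get]
```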
In `@internal/app/app.go`:
- Line 341: The code constructs the response cache middleware via
responsecache.NewResponseCacheMiddleware using appCfg.Cache.Response and always
sets CacheEnabled from the response-cache config, which causes the UI to
advertise cache analytics even when usage logging is disabled; change the logic
that derives CacheEnabled (or introduce a separate CacheAnalyticsEnabled) so it
is true only when both the response cache is configured and usageResult.Enabled
(or usageResult.Logger is non-noop) is true, and update any UI/exposed flags
that currently reference CacheEnabled to use the new combined predicate; locate
the computation that sets CacheEnabled and the call to
responsecache.NewResponseCacheMiddleware (and the analogous logic around the
block referenced later) to apply the guard consistently.
In `@internal/responsecache/usage_hit.go`:
- Around line 48-58: The cached-response path is incorrectly passing resolved
pricing into a normal UsageEntry (via pricingResolver.ResolvePricing and
usage.ExtractFromCachedResponseBody) which causes cache-hit rows to populate
input_cost/output_cost/total_cost; change the cache-hit flow to avoid writing
those provider-cost fields by not passing pricing into
ExtractFromCachedResponseBody (or passing nil/zero pricing) and instead populate
a separate savings field or set a cache-flag on the UsageEntry to record avoided
spend; ensure entry.UserPath is still set and logger.Write writes the entry
without any cost-bearing fields so cached hits are excluded from existing cost
aggregates (or persisted to a separate savings table/stream).
In `@internal/usage/extractor_test.go`:
- Around line 484-509: Update TestExtractFromCachedResponseBody to also assert
cache-hit metadata: verify entry.RequestID equals "req-cache", entry.Provider
equals "openai", entry.Endpoint equals "/v1/chat/completions", and entry.Model
matches the explicit model argument passed to ExtractFromCachedResponseBody (not
the model inside the marshaled body). To ensure the test exercises the
explicit-model code path, make the resp.Model in the marshaled body different
(e.g., "gpt-4o-body") while passing "gpt-4o" as the model argument, then add
assertions for RequestID, Provider, Endpoint, and Model along with the existing
token and CacheType checks in TestExtractFromCachedResponseBody.
In `@internal/usage/reader_mongodb_test.go`:
- Line 10: Rename the test function
TestMongoUsageLogMatchFiltersAndsSearchWithCacheMode to fix the typo (change
"Ands" to "And"), e.g. TestMongoUsageLogMatchFiltersAndSearchWithCacheMode;
update the function declaration and any references (calls, test lists, or grep
patterns) to the new name so the test still runs under `go test` and any tooling
picks up the corrected identifier.
In `@internal/usage/reader_mongodb.go`:
- Around line 378-458: The code runs two separate Aggregates (summaryPipeline
and dailyPipeline) against r.collection which doubles work and risks snapshot
drift; replace the two calls with a single Aggregate using a $facet stage that
contains the current summaryPipeline and dailyPipeline as sub-pipelines, then
decode the single result into the existing CacheOverview structure (populate
CacheOverview.Summary and CacheOverview.Daily from the facet outputs) while
preserving logic that uses params.Interval, mongoDateFormat(interval),
usageTimeZone(params), CacheTypeExact/CacheTypeSemantic, and handling of
has_saved_cost/total_saved_cost; ensure you remove the second
r.collection.Aggregate call and adapt cursor.Decode handling to extract both
facet results from one cursor.
- Around line 535-543: The current code treats params.Search as a raw Mongo
regex which allows special chars to change semantics and can be expensive;
update the regex construction in the block that defines regex (the variable
using params.Search) to escape the user input first (e.g., use Go's
regexp.QuoteMeta on params.Search) and then pass that escaped string to the
$regex value with the existing $options "i" so the search performs a
case-insensitive literal substring match; keep the rest of the searchFilter and
mongoAndFilters usage unchanged.
In `@internal/usage/reader_postgresql.go`:
- Around line 249-256: The SQL embeds package constants CacheTypeExact and
CacheTypeSemantic via string concatenation in summaryQuery (and the similar
query around the 276-284 block); replace those concatenations with parameterized
placeholders (e.g., $1, $2) and pass the constants in the args slice to the DB
call (QueryRow/Query/QueryContext) so the query string is stable and the cache
types are supplied as parameters; do the same for the other query block to keep
usage consistent and maintainable.
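A minimal sketch of the suggested parameterization, assuming PostgreSQL-style `$1`/`$2` placeholders; the query text, column names, and constant values are illustrative, not the repository's actual schema:

```go
package main

import "fmt"

const (
	CacheTypeExact    = "exact"
	CacheTypeSemantic = "semantic"
)

// summaryQuery uses $1/$2 placeholders instead of concatenating the
// cache-type constants into the SQL string, so the query text is stable
// and the driver handles quoting.
const summaryQuery = `
SELECT COUNT(*) FILTER (WHERE cache_type = $1) AS exact_hits,
       COUNT(*) FILTER (WHERE cache_type = $2) AS semantic_hits
FROM usage_entries`

func main() {
	args := []any{CacheTypeExact, CacheTypeSemantic}
	// e.g. row := db.QueryRowContext(ctx, summaryQuery, args...)
	fmt.Println(summaryQuery)
	fmt.Println(args)
}
```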
In `@internal/usage/store_mongodb.go`:
- Around line 85-87: The single-field index on cache_type (Keys: bson.D{{Key:
"cache_type", Value: 1}}) should be replaced or augmented with a compound index
on (cache_type, timestamp) so queries that filter by cache_type and timestamp
range can use the index directly; update the MongoDB index creation logic that
builds the bson.D keys to include bson.D{{Key:"cache_type", Value:1},
{Key:"timestamp", Value:1}} (or add it alongside the existing index), and
consider mirroring this optimization or documenting the intentional choice
across the SQLite/Postgres index setup if you prefer consistency.
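The suggested compound index can be expressed in a one-line mongosh sketch; the collection name here is an assumption:

```javascript
// mongosh sketch — "usage_entries" is an assumed collection name
db.usage_entries.createIndex(
  { cache_type: 1, timestamp: 1 },
  { name: "cache_type_timestamp" }
)
```

A compound index with `cache_type` first lets an equality filter on cache type combine with a range scan on `timestamp` in a single index pass.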
In `@internal/usage/store_sqlite.go`:
- Line 19: The inline comment for maxEntriesPerBatch is stale and incorrect;
update the comment next to the declaration of maxEntriesPerBatch (which is
computed as maxSQLiteParams / columnsPerUsageEntry) to reflect the correct value
(999 / 17 = 58 entries) or remove the explicit numeric count and leave a
descriptive note, ensuring it references the constants maxSQLiteParams and
columnsPerUsageEntry so future changes remain accurate.
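The corrected comment can reference the constants directly so the arithmetic never goes stale; the constant values below are taken from the review comment (999 SQLite parameters, 17 columns):

```go
package main

import "fmt"

const (
	maxSQLiteParams      = 999 // SQLite's default bound-parameter limit
	columnsPerUsageEntry = 17  // column count per usage entry (per the review comment)

	// Derived, not hardcoded: currently maxSQLiteParams / columnsPerUsageEntry
	// = 999 / 17 = 58 entries per batch.
	maxEntriesPerBatch = maxSQLiteParams / columnsPerUsageEntry
)

func main() {
	fmt.Println(maxEntriesPerBatch) // 58
}
```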
In `@internal/usage/usage.go`:
- Around line 40-44: The CacheType field is currently free-form and can create
arbitrary storage buckets; change it from a plain string to a closed/validated
type (e.g., an enum type or custom type with constants) and enforce validation
before persisting (validate in the code path that calls WriteBatch);
specifically update the struct field CacheType to use the new typed enum and add
a validation step where WriteBatch is invoked (or implement a Validate() method
on the struct) to reject or normalize unknown values so only the allowed set is
written.
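A minimal sketch of the closed type the comment asks for; the value set is an assumption based on the `CacheTypeExact`/`CacheTypeSemantic` constants named elsewhere in this review:

```go
package main

import "fmt"

// CacheType is a closed set of cache-hit kinds instead of a free-form string.
type CacheType string

const (
	CacheTypeNone     CacheType = ""
	CacheTypeExact    CacheType = "exact"
	CacheTypeSemantic CacheType = "semantic"
)

// Validate rejects unknown values before a writer (e.g. WriteBatch) persists them.
func (c CacheType) Validate() error {
	switch c {
	case CacheTypeNone, CacheTypeExact, CacheTypeSemantic:
		return nil
	default:
		return fmt.Errorf("invalid cache_type %q", string(c))
	}
}

func main() {
	fmt.Println(CacheTypeExact.Validate())    // <nil>
	fmt.Println(CacheType("typo").Validate()) // invalid cache_type "typo"
}
```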
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 3b7182f1-3469-494e-bc89-d6a940a1e657
📒 Files selected for processing (29)
internal/admin/dashboard/static/js/dashboard.js, internal/admin/dashboard/static/js/modules/charts.js, internal/admin/dashboard/static/js/modules/execution-plans.js, internal/admin/dashboard/static/js/modules/usage.js, internal/admin/dashboard/templates/index.html, internal/admin/handler.go, internal/admin/handler_test.go, internal/app/app.go, internal/app/app_test.go, internal/responsecache/handle_request_test.go, internal/responsecache/responsecache.go, internal/responsecache/semantic.go, internal/responsecache/semantic_test.go, internal/responsecache/simple.go, internal/responsecache/usage_hit.go, internal/server/http.go, internal/usage/cache_type.go, internal/usage/extractor.go, internal/usage/extractor_test.go, internal/usage/reader.go, internal/usage/reader_mongodb.go, internal/usage/reader_mongodb_test.go, internal/usage/reader_postgresql.go, internal/usage/reader_sqlite.go, internal/usage/store_mongodb.go, internal/usage/store_postgresql.go, internal/usage/store_postgresql_test.go, internal/usage/store_sqlite.go, internal/usage/usage.go
```go
var pricing *core.ModelPricing
if pricingResolver != nil {
	pricing = pricingResolver.ResolvePricing(model, provider)
}

entry := usage.ExtractFromCachedResponseBody(body, requestID, model, provider, endpoint, cacheType, pricing)
if entry == nil {
	return
}
entry.UserPath = core.UserPathFromContext(ctx)
logger.Write(entry)
```
Don't write cache-hit savings into the normal cost columns.
Because this passes resolved pricing into a normal UsageEntry, cached hits inherit input_cost/output_cost/total_cost even though no provider spend happened. Downstream, the new cache-mode filters can pull those rows into UsageSummary, DailyUsage, and UsageByModel, which makes avoided spend look like billed spend. Persist savings separately, or keep cached rows out of the existing cost-bearing aggregates.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/responsecache/usage_hit.go` around lines 48 - 58, The
cached-response path is incorrectly passing resolved pricing into a normal
UsageEntry (via pricingResolver.ResolvePricing and
usage.ExtractFromCachedResponseBody) which causes cache-hit rows to populate
input_cost/output_cost/total_cost; change the cache-hit flow to avoid writing
those provider-cost fields by not passing pricing into
ExtractFromCachedResponseBody (or passing nil/zero pricing) and instead populate
a separate savings field or set a cache-flag on the UsageEntry to record avoided
spend; ensure entry.UserPath is still set and logger.Write writes the entry
without any cost-bearing fields so cached hits are excluded from existing cost
aggregates (or persisted to a separate savings table/stream).
```go
Model     string `json:"model" bson:"model"`
Provider  string `json:"provider" bson:"provider"`
Endpoint  string `json:"endpoint" bson:"endpoint"`
UserPath  string `json:"user_path,omitempty" bson:"user_path,omitempty"`
CacheType string `json:"cache_type,omitempty" bson:"cache_type,omitempty"`
```
🧹 Nitpick | 🔵 Trivial
Keep cache_type on a closed schema.
This is now a stored analytics dimension, but the new field is free-form. One typo at a writer boundary will silently create a new bucket in storage and every reader/overview has to cope with it forever. Prefer a dedicated type here, or validate the field before WriteBatch persists it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/usage/usage.go` around lines 40 - 44, The CacheType field is
currently free-form and can create arbitrary storage buckets; change it from a
plain string to a closed/validated type (e.g., an enum type or custom type with
constants) and enforce validation before persisting (validate in the code path
that calls WriteBatch); specifically update the struct field CacheType to use
the new typed enum and add a validation step where WriteBatch is invoked (or
implement a Validate() method on the struct) to reject or normalize unknown
values so only the allowed set is written.
Actionable comments posted: 6
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/admin/handler_test.go`:
- Around line 1278-1284: The test currently only checks that body["error"]
exists; strengthen it by asserting the error payload follows the
OpenAI-compatible shape: after unmarshalling rec.Body.Bytes() into body, cast
body["error"] to a map[string]any and assert it contains keys "type", "message",
"param", and "code" (with "param" and "code" allowed to be nil/null). Update the
assertions around json.Unmarshal/rec.Body.Bytes() and the existing presence
check to verify the types/values of error["type"] and error["message"] are
strings and that error["param"] and error["code"] are present (nil acceptable),
using the same variables (body, rec.Body.Bytes()) so the test fails clearly on
missing fields.
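A standalone sketch of the shape check the comment describes, against a hand-written sample payload; the exact field set and error type name follow the review's description of an OpenAI-compatible envelope, not the repository's actual handler:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// assertErrorShape checks an OpenAI-style error envelope:
// {"error": {"type": ..., "message": ..., "param": ..., "code": ...}}.
// "param" and "code" must be present but may be null.
func assertErrorShape(payload []byte) error {
	var body map[string]any
	if err := json.Unmarshal(payload, &body); err != nil {
		return err
	}
	errObj, ok := body["error"].(map[string]any)
	if !ok {
		return fmt.Errorf("missing error object")
	}
	for _, key := range []string{"type", "message", "param", "code"} {
		if _, present := errObj[key]; !present {
			return fmt.Errorf("missing %q field", key)
		}
	}
	if _, ok := errObj["type"].(string); !ok {
		return fmt.Errorf("type must be a string")
	}
	if _, ok := errObj["message"].(string); !ok {
		return fmt.Errorf("message must be a string")
	}
	return nil
}

func main() {
	sample := []byte(`{"error":{"type":"internal_error","message":"boom","param":null,"code":null}}`)
	fmt.Println(assertErrorShape(sample)) // <nil>
}
```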
In `@internal/app/app.go`:
- Around line 118-123: The nil-logger error path leaks resources from
usage.New(...) because when usageResult != nil but usageResult.Logger == nil we
only close app.audit and app.providers; update the branch handling "if
usageResult == nil || usageResult.Logger == nil" to also close usageResult
(e.g., call usageResult.Close() or include it in the errors.Join) when
usageResult is non-nil so storage handles/background goroutines are cleaned up
and app.usage is not left unclosed.
In `@internal/usage/extractor_test.go`:
- Around line 484-521: Add a subtest to TestExtractFromCachedResponseBody that
exercises a fallback path of ExtractFromCachedResponseBody by passing either
invalid JSON (malformed body) or an unsupported endpoint value; assert that the
function returns a non-nil synthetic entry with zero tokens (InputTokens,
OutputTokens, TotalTokens == 0) and preserves RequestID/Provider/Endpoint/Model
where applicable, so the fallback behavior is covered and locked in test
expectations.
In `@internal/usage/extractor.go`:
- Around line 250-266: The switch on endpoint in the dispatcher relies on exact
literals and misses semantically equivalent values (e.g.,
"/v1/chat/completions/"); update the code that handles the endpoint variable
(before the switch that calls ExtractFromChatResponse,
ExtractFromResponsesResponse, ExtractFromEmbeddingResponse) to normalize
endpoints by trimming trailing slashes and/or cleaning the path (e.g., use
strings.TrimSuffix or path.Clean) so "/v1/chat/completions/" and
"/v1/chat/completions" map the same way; then use the normalized endpoint in the
switch so cached-body dispatch correctly selects the parsing branch.
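A small sketch of the normalization step suggested above, using `path.Clean` (which also collapses duplicate slashes) before the dispatch switch; the function name is illustrative:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// normalizeEndpoint maps semantically equivalent endpoint spellings to a
// canonical form so "/v1/chat/completions/" and "/v1/chat/completions"
// dispatch the same way.
func normalizeEndpoint(endpoint string) string {
	endpoint = strings.TrimSpace(endpoint)
	if endpoint == "" {
		return ""
	}
	cleaned := path.Clean(endpoint) // drops trailing "/" and collapses "//"
	if cleaned == "." {
		return ""
	}
	return cleaned
}

func main() {
	fmt.Println(normalizeEndpoint("/v1/chat/completions/")) // /v1/chat/completions
	fmt.Println(normalizeEndpoint("/v1//chat/completions")) // /v1/chat/completions
}
```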
In `@PossibleRefactoring.md`:
- Around line 57-60: Before deleting the compatibility wrapper and removing
internal/responsecache/middleware_test.go, add replacement tests that explicitly
cover the HandleRequest() cache-hit and cache-miss behaviors so coverage is
preserved; update or add unit tests exercising ResponseCache.HandleRequest (or
the exported handler function used by the wrapper) to simulate requests that
should return a cached response and requests that should populate the cache,
asserting stored values, response headers/status and that the appropriate cache
backend methods are invoked, then only remove the middleware_test.go file after
these new tests are passing.
- Around line 10-16: For each item claiming "no call sites" (notably the
constant defined in internal/responsecache/semantic.go and the items labeled 1,
4, and 53–56), add a short "How verified" line that shows the exact command you
ran to confirm there are no references (for example provide the exact ripgrep or
ast-grep invocation you used, such as the rg/ast-grep pattern and repo
scope/options), so reviewers can reproduce the search before deleting code;
include the symbol name you searched for and the full command string used for
verification.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: d78f7098-bee5-471e-846f-d11b9152e4b6
📒 Files selected for processing (16)
PossibleRefactoring.md, internal/admin/handler.go, internal/admin/handler_test.go, internal/app/app.go, internal/app/app_test.go, internal/usage/cache_type.go, internal/usage/cache_type_test.go, internal/usage/extractor.go, internal/usage/extractor_test.go, internal/usage/reader_mongodb.go, internal/usage/reader_mongodb_test.go, internal/usage/reader_postgresql.go, internal/usage/store_mongodb.go, internal/usage/store_postgresql.go, internal/usage/store_postgresql_test.go, internal/usage/store_sqlite.go
Suggested action:
- Delete the compatibility wrapper.
- Migrate or remove the old middleware-focused tests in `internal/responsecache/middleware_test.go`.
🧹 Nitpick | 🔵 Trivial
Preserve behavioral coverage when removing middleware-focused tests.
Please explicitly require replacement tests around HandleRequest() cache-hit/miss paths before deleting internal/responsecache/middleware_test.go, so coverage is migrated rather than dropped.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@PossibleRefactoring.md` around lines 57 - 60, Before deleting the
compatibility wrapper and removing internal/responsecache/middleware_test.go,
add replacement tests that explicitly cover the HandleRequest() cache-hit and
cache-miss behaviors so coverage is preserved; update or add unit tests
exercising ResponseCache.HandleRequest (or the exported handler function used by
the wrapper) to simulate requests that should return a cached response and
requests that should populate the cache, asserting stored values, response
headers/status and that the appropriate cache backend methods are invoked, then
only remove the middleware_test.go file after these new tests are passing.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/admin/handler_test.go (1)
23-35: 🧹 Nitpick | 🔵 Trivial

Capture cache-overview params in the mock.

`GetCacheOverview` currently drops `usage.UsageQueryParams`, so these new tests never verify that `Handler.CacheOverview` forces `CacheModeCached` or preserves the new dashboard filters added in this PR. Per `internal/admin/handler.go:450-478`, that override is part of the handler contract.

🧪 Suggested test hardening

```diff
 type mockUsageReader struct {
 	summary           *usage.UsageSummary
 	daily             []usage.DailyUsage
 	modelUsage        []usage.ModelUsage
 	usageLog          *usage.UsageLogResult
 	cacheOverview     *usage.CacheOverview
 	lastUsageLog      usage.UsageLogParams
+	lastCacheOverview usage.UsageQueryParams
 	summaryErr        error
 	dailyErr          error
 	modelUsageErr     error
 	usageLogErr       error
 	cacheErr          error
 }

-func (m *mockUsageReader) GetCacheOverview(_ context.Context, _ usage.UsageQueryParams) (*usage.CacheOverview, error) {
+func (m *mockUsageReader) GetCacheOverview(_ context.Context, params usage.UsageQueryParams) (*usage.CacheOverview, error) {
+	m.lastCacheOverview = params
 	if m.cacheErr != nil {
 		return nil, m.cacheErr
 	}
 	return m.cacheOverview, nil
 }
```

Then assert something like `reader.lastCacheOverview.CacheMode == usage.CacheModeCached` and a representative filter such as `UserPath`.

Also applies to: 78-83, 1224-1262
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/handler_test.go` around lines 23 - 35, The mockUsageReader currently drops the usage.UsageQueryParams passed to GetCacheOverview; update the mock type (mockUsageReader) to add a field like lastCacheOverview usage.UsageQueryParams and implement GetCacheOverview to store the received params into that field before returning cacheOverview/cacheErr, so tests can assert that Handler.CacheOverview forces usage.CacheModeCached and preserves filters (e.g., UserPath); update relevant tests to assert reader.lastCacheOverview.CacheMode == usage.CacheModeCached and that representative filters are preserved.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/admin/handler_test.go`:
- Around line 1264-1298: The test TestCacheOverview_ReturnsErrorWhenReaderFails
currently only checks non-empty error fields, which allows raw reader errors to
leak; update the assertions in that test (which uses mockUsageReader, NewHandler
and calls h.CacheOverview) to assert the error payload pins to the generic
contract: assert error.type == "internal_error", error.message equals the
generic message used by your handler (e.g., "An internal server error occurred")
and assert the original reader error string ("boom") is NOT present in
error.message; keep the existing presence checks for "param" and "code" but
replace the non-empty-field checks for "type"/"message" with exact equality and
a negative contains check for "boom".
In `@internal/usage/extractor_test.go`:
- Around line 484-572: Add a subtest in TestExtractFromCachedResponseBody that
verifies ExtractFromCachedResponseBody defaults an unrecognized cache type to
CacheTypeExact: create a minimal core.ChatResponse, marshal it, call
ExtractFromCachedResponseBody with a cacheType string like "unknown", and assert
entry is non-nil and entry.CacheType == CacheTypeExact (also check other basic
fields if desired); reference the existing test harness and subtest pattern used
in TestExtractFromCachedResponseBody to mirror style.
In `@internal/usage/extractor.go`:
- Around line 270-296: The synthetic/fallback branch that constructs a new
UsageEntry sets RequestID, Model, Provider, and Endpoint directly using raw
values, while the normal update path trims whitespace; update the fallback
branch (where a new UsageEntry is created) to apply the same normalization logic
(strings.TrimSpace) to requestID, model, provider and endpoint before assigning
to UsageEntry fields (ID, RequestID, Timestamp, Model, Provider, Endpoint,
CacheType, ProviderID) so both paths produce consistent, trimmed metadata for
the UsageEntry struct.
- Around line 294-296: Redundant TrimSpace: remove the extra strings.TrimSpace
call and rely on the earlier normalizeCachedResponseEndpoint processing; change
the block that currently does "if normalized := strings.TrimSpace(endpoint);
normalized != "" { entry.Endpoint = normalized }" to simply check endpoint for
emptiness and assign (e.g., if endpoint != "" { entry.Endpoint = endpoint }) so
it uses the already-normalized value — reference
normalizeCachedResponseEndpoint, the local variable endpoint, and the assignment
to entry.Endpoint in extractor.go.
In `@PossibleRefactoring.md`:
- Around line 12-17: The verification claim "No call sites found in the repo"
overstates the scope because the provided command only searched internal/;
either update the search command to cover the whole repository (e.g., run
ripgrep from repo root: rg -n "CacheTypeBoth") or change the claim text to say
"No call sites found in internal/" (and do the same for the other occurrences on
lines 61-65); ensure the symbol name CacheTypeBoth and the exact rg command are
updated so the scope in the sentence matches the actual verification performed.
---
Outside diff comments:
In `@internal/admin/handler_test.go`:
- Around line 23-35: The mockUsageReader currently drops the
usage.UsageQueryParams passed to GetCacheOverview; update the mock type
(mockUsageReader) to add a field like lastCacheOverview usage.UsageQueryParams
and implement GetCacheOverview to store the received params into that field
before returning cacheOverview/cacheErr, so tests can assert that
Handler.CacheOverview forces usage.CacheModeCached and preserves filters (e.g.,
UserPath); update relevant tests to assert reader.lastCacheOverview.CacheMode ==
usage.CacheModeCached and that representative filters are preserved.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: bd1d8e52-a469-44cf-82b2-517bfe890e2b
📒 Files selected for processing (5)
PossibleRefactoring.md, internal/admin/handler_test.go, internal/app/app.go, internal/usage/extractor.go, internal/usage/extractor_test.go
```go
if entry == nil {
	entry = &UsageEntry{
		ID:         uuid.New().String(),
		RequestID:  requestID,
		Timestamp:  time.Now().UTC(),
		Model:      model,
		Provider:   provider,
		Endpoint:   endpoint,
		CacheType:  cacheType,
		ProviderID: "",
	}
	return entry
}

entry.CacheType = cacheType
if normalized := strings.TrimSpace(requestID); normalized != "" {
	entry.RequestID = normalized
}
if normalized := strings.TrimSpace(model); normalized != "" {
	entry.Model = normalized
}
if normalized := strings.TrimSpace(provider); normalized != "" {
	entry.Provider = normalized
}
if normalized := strings.TrimSpace(endpoint); normalized != "" {
	entry.Endpoint = normalized
}
```
Inconsistent metadata normalization between fallback and success paths.
The synthetic entry path (lines 270-282) uses raw requestID, model, and provider values, while the successful extraction path (lines 285-296) trims whitespace from these fields. This inconsistency could cause data quality issues when aggregating cache analytics—the same logical entity might appear differently depending on whether the body was parseable.
🔧 Suggested fix: normalize metadata in the synthetic entry path
```diff
 if entry == nil {
 	entry = &UsageEntry{
 		ID:        uuid.New().String(),
-		RequestID: requestID,
+		RequestID: strings.TrimSpace(requestID),
 		Timestamp: time.Now().UTC(),
-		Model:     model,
-		Provider:  provider,
+		Model:     strings.TrimSpace(model),
+		Provider:  strings.TrimSpace(provider),
 		Endpoint:   endpoint,
 		CacheType:  cacheType,
 		ProviderID: "",
 	}
 	return entry
 }
```

🤖 Prompt for AI Agents
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/usage/extractor.go` around lines 270 - 296, The synthetic/fallback
branch that constructs a new UsageEntry sets RequestID, Model, Provider, and
Endpoint directly using raw values, while the normal update path trims
whitespace; update the fallback branch (where a new UsageEntry is created) to
apply the same normalization logic (strings.TrimSpace) to requestID, model,
provider and endpoint before assigning to UsageEntry fields (ID, RequestID,
Timestamp, Model, Provider, Endpoint, CacheType, ProviderID) so both paths
produce consistent, trimmed metadata for the UsageEntry struct.
Summary by CodeRabbit: New Features · Tests · Documentation