feat(gom-53): integrated external model registry for metadata enrichment and cost tracking + Dashboard/Usage page (#82)
Conversation
…acking

Fetch, cache, and merge the ai-model-list registry (models.json) to enrich discovered models with pricing, capabilities, and context limits. Models not in the registry still work — the fetch is best-effort and never gates.

- Add ModelListConfig (URL + timeout) to CacheConfig with env var support
- Add ModelListData to ModelCache for raw JSON persistence across restarts
- Add ModelMetadata/ModelPricing to core.Model (omitempty, pointer numerics)
- New internal/modeldata package: fetcher, 3-layer merger, enricher
- Registry integration: enrich on Initialize, LoadFromCache, and refresh
- Background refresh re-fetches model list alongside provider models
- Add cost fields (InputCost/OutputCost/TotalCost) to UsageEntry
- CalculateCost + variadic pricing param on all three usage extractors

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
📝 Walkthrough

Adds external models.json fetching and enrichment, model metadata and pricing types, granular cost calculation and persistence, usage aggregation and paginated logs, admin UI/endpoints for usage and categories, and threads a PricingResolver through streaming and non-streaming usage flows.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Admin as Admin UI / Client
    participant Handler as Admin Handler
    participant Store as Usage Store (DB)
    participant Resolver as PricingResolver
    participant Registry as Model Registry
    participant ModelData as ModelData Fetcher

    Admin->>Handler: GET /admin/api/v1/usage/models
    Handler->>Store: GetUsageByModel(params)
    Store->>Resolver: ResolvePricing(model, provider)
    Resolver-->>Store: *ModelPricing
    Store-->>Handler: []ModelUsage (with costs)
    Handler-->>Admin: JSON response

    Admin->>Handler: GET /admin/api/v1/usage/log?limit=50
    Handler->>Store: GetUsageLog(params)
    Store->>Resolver: ResolvePricing(entry.model, entry.provider)
    Resolver-->>Store: *ModelPricing
    Store-->>Handler: UsageLogResult (entries with cost fields)
    Handler-->>Admin: Paginated JSON response

    Note over Registry,ModelData: Background refresh of model-list
    Registry->>ModelData: Fetch(MODEL_LIST_URL)
    ModelData-->>Registry: ModelList + raw bytes
    Registry->>Registry: EnrichModels() -> update model metadata
```
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 6
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/modeldata/enricher_test.go`:
- Around line 31-46: The test TestEnrich_MatchedAndUnmatched in enricher_test.go
calls the helper ptr() which is currently only defined in merger_test.go, so add
a definition accessible to this test: either duplicate the ptr() helper into
enricher_test.go (so the test compiles) or extract the helper to a shared
non-test file (e.g., types.go) so both tests can import it; ensure the helper
signature and behavior match the existing ptr used in merger_test.go so calls in
TestEnrich_MatchedAndUnmatched continue to work.
In `@internal/modeldata/fetcher.go`:
- Around line 38-41: The code uses io.ReadAll(resp.Body) which can allocate
unbounded memory; replace this with a bounded read using io.LimitReader (or
http.MaxBytesReader) around resp.Body with a sensible max (e.g. a few MB) and
detect if the response exceeded the limit, returning an error; update the
raw/err handling (the same variables) so that reads failing due to size produce
a clear error like "response body too large" while keeping the existing "reading
response body: %w" wrapping behavior.
In `@internal/modeldata/merger_test.go`:
- Around line 354-372: The helper function calculateCostHelper duplicates the
production CalculateCost logic; add a brief doc comment above
calculateCostHelper stating it intentionally mirrors usage.CalculateCost for
testing within this package to avoid import cycles/test coupling, mentioning
both calculateCostHelper and usage.CalculateCost by name so future readers know
the duplication is deliberate and where the canonical implementation lives.
- Line 387: The test currently compares floating-point values with exact
equality using the pointer dereferences `*got != *want`; change this to a
tolerance-based comparison to avoid fragility by computing the absolute
difference (e.g., math.Abs(*got - *want)) and asserting it's less than a small
epsilon (e.g., 1e-9), or add a helper like `floatEquals(a, b, eps)` and use that
in the test where `got` and `want` are compared to determine equality.
In `@internal/modeldata/merger.go`:
- Around line 78-89: The function convertPricing currently maps the four
token-based fields from PricingEntry to core.ModelPricing but the review notes
the struct field count comment is wrong—update any misleading count/comment to
reflect that PricingEntry has 11 fields (not 12) and keep the mapping as-is; if
you want to signal future work, add a short TODO near convertPricing referencing
PricingEntry and core.ModelPricing (and mention CalculateCost uses only
InputPerMtok/OutputPerMtok) so future developers know to extend
core.ModelPricing before mapping the additional fields.
In `@internal/providers/registry.go`:
- Around line 437-449: EnrichModels currently builds a registryAccessor that
keeps a direct reference to r.models and calls modeldata.Enrich which uses
SetMetadata to mutate info.Model.Metadata without synchronization; to prevent
races with concurrent Initialize swaps or other mutators, change EnrichModels to
pass a safe snapshot: either deep-copy the models map (and each model's
Metadata) into the registryAccessor or ensure registryAccessor.SetMetadata
obtains r.mu before modifying info.Model.Metadata; update registryAccessor,
EnrichModels, and SetMetadata accordingly so mutations happen under r.mu (or
operate on a detached copy) to eliminate the concurrent write race between
EnrichModels and Initialize.
internal/modeldata/fetcher.go (outdated)

```go
raw, err := io.ReadAll(resp.Body)
if err != nil {
	return nil, nil, fmt.Errorf("reading response body: %w", err)
}
```
🧹 Nitpick | 🔵 Trivial
Consider adding a body size limit for defense in depth.
io.ReadAll will read the entire response body into memory. While the model list is expected to be reasonably sized, adding a size limit would guard against unexpected large responses from misconfigured or compromised sources.
🛡️ Optional: Add body size limit

```diff
+const maxBodySize = 10 * 1024 * 1024 // 10MB limit for model list
+
-	raw, err := io.ReadAll(resp.Body)
+	raw, err := io.ReadAll(io.LimitReader(resp.Body, maxBodySize))
 	if err != nil {
 		return nil, nil, fmt.Errorf("reading response body: %w", err)
 	}
+	if len(raw) == maxBodySize {
+		return nil, nil, fmt.Errorf("response body exceeds %d bytes limit", maxBodySize)
+	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/modeldata/fetcher.go` around lines 38 - 41, The code uses
io.ReadAll(resp.Body) which can allocate unbounded memory; replace this with a
bounded read using io.LimitReader (or http.MaxBytesReader) around resp.Body with
a sensible max (e.g. a few MB) and detect if the response exceeded the limit,
returning an error; update the raw/err handling (the same variables) so that
reads failing due to size produce a clear error like "response body too large"
while keeping the existing "reading response body: %w" wrapping behavior.
internal/modeldata/merger_test.go (outdated)

```go
	t.Errorf("%s: got %f, want nil", name, *got)
	return
}
if *got != *want {
```
🧹 Nitpick | 🔵 Trivial
Floating-point exact equality comparison may be fragile.
Exact floating-point comparison (*got != *want) can fail due to precision issues in certain edge cases. For the current test data (simple multiplications), this should work, but consider using a tolerance-based comparison for robustness.
📝 Suggested tolerance-based comparison

```diff
-	if *got != *want {
+	const epsilon = 1e-9
+	if diff := *got - *want; diff < -epsilon || diff > epsilon {
 		t.Errorf("%s: got %f, want %f", name, *got, *want)
 	}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/modeldata/merger_test.go` at line 387, The test currently compares
floating-point values with exact equality using the pointer dereferences `*got
!= *want`; change this to a tolerance-based comparison to avoid fragility by
computing the absolute difference (e.g., math.Abs(*got - *want)) and asserting
it's less than a small epsilon (e.g., 1e-9), or add a helper like
`floatEquals(a, b, eps)` and use that in the test where `got` and `want` are
compared to determine equality.
```go
func (r *ModelRegistry) EnrichModels() {
	r.mu.Lock()
	list := r.modelList
	models := r.models
	r.mu.Unlock()

	if list == nil || len(models) == 0 {
		return
	}

	accessor := &registryAccessor{models: models, providerTypes: r.snapshotProviderTypes()}
	modeldata.Enrich(accessor, list)
}
```
Potential data race in EnrichModels.
The method acquires the lock, copies references to modelList and models, then releases the lock before calling modeldata.Enrich. The registryAccessor holds a direct reference to r.models (not a copy), and SetMetadata modifies info.Model.Metadata without synchronization.
If Initialize runs concurrently and swaps r.models (line 134), the accessor continues enriching the old map, which is safe. However, if both Initialize (line 129) and EnrichModels call SetMetadata on overlapping models concurrently, there could be a race on the Metadata pointer assignment.
```bash
#!/bin/bash
# Check if there are concurrent calls to EnrichModels and Initialize that could race
# Look for call sites and goroutine usage patterns
echo "=== Checking EnrichModels call sites ==="
rg -n "EnrichModels\(\)" --type=go
echo ""
echo "=== Checking Initialize call sites ==="
rg -n "\.Initialize\(ctx\)" --type=go -A 2 -B 2
echo ""
echo "=== Checking for goroutine spawning patterns near these calls ==="
rg -n "go func" internal/providers/ --type=go -A 5
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/providers/registry.go` around lines 437 - 449, EnrichModels
currently builds a registryAccessor that keeps a direct reference to r.models
and calls modeldata.Enrich which uses SetMetadata to mutate info.Model.Metadata
without synchronization; to prevent races with concurrent Initialize swaps or
other mutators, change EnrichModels to pass a safe snapshot: either deep-copy
the models map (and each model's Metadata) into the registryAccessor or ensure
registryAccessor.SetMetadata obtains r.mu before modifying info.Model.Metadata;
update registryAccessor, EnrichModels, and SetMetadata accordingly so mutations
happen under r.mu (or operate on a detached copy) to eliminate the concurrent
write race between EnrichModels and Initialize.
… fix race

- Set MODEL_LIST_URL default to ENTERPILOT/ai-model-list so the external model registry is enabled out of the box
- Document MODEL_LIST_URL and MODEL_LIST_TIMEOUT in .env.template
- Bound fetcher response body reads with io.LimitReader (10 MB cap)
- Fix data race in EnrichModels by holding write lock for the entire enrichment instead of releasing it before SetMetadata mutations
- Use tolerance-based float comparison in merger tests (math.Abs + epsilon)
- Add doc comment to calculateCostHelper explaining intentional duplication
- Add TODO to convertPricing documenting unmapped PricingEntry fields

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/cache/cache.go`:
- Line 17: Add a brief Go doc comment above the exported field ModelListData
(json.RawMessage) on its containing exported struct describing that it stores
raw JSON returned by the external model registry and when it is populated (e.g.,
populated when the model list is fetched from the registry), so readers and
tooling can understand its purpose; update the comment to be a single-sentence
description immediately above the ModelListData field.
In `@internal/core/types.go`:
- Line 82: The exported field Metadata on the public type (Metadata
*ModelMetadata) lacks a doc comment; add a concise Go doc comment immediately
above the Metadata field explaining its purpose and what information
ModelMetadata contains or represents (e.g., "Metadata holds model-specific
metadata such as version, author, and training info"), referencing the
ModelMetadata type so readers know where to look for structure details.
In `@internal/providers/registry.go`:
- Around line 542-563: The call in refreshModelList passes both a context with
timeout and a timeout value into modeldata.Fetch, causing redundant dual
timeouts; fix by choosing one approach: either (A) keep the context.WithTimeout
in refreshModelList and change modeldata.Fetch to accept only a context (remove
the timeout parameter and stop setting http.Client.Timeout inside Fetch, relying
on the context for cancellation), or (B) keep the timeout parameter for Fetch
and use context.Background() in refreshModelList (remove context.WithTimeout),
ensuring modeldata.Fetch is the single place that applies the
http.Client.Timeout; update the modeldata.Fetch signature/implementation and the
call site in refreshModelList accordingly.
Pull request overview
Integrates an external models.json registry to enrich discovered provider models with metadata (capabilities/context limits) and pricing, and extends usage tracking to optionally compute per-request costs.
Changes:
- Add internal/modeldata (fetch/parse/merge/enrich) and wire it into the model registry lifecycle (initialize, cache load/save, background refresh).
- Extend core model types and cache structures to carry enriched metadata and persist raw registry JSON across restarts.
- Add cost fields to usage entries and optional pricing-based cost calculation in usage extractors.
Reviewed changes
Copilot reviewed 16 out of 16 changed files in this pull request and generated 1 comment.
Show a summary per file
| File | Description |
|---|---|
| internal/usage/usage.go | Adds InputCost/OutputCost/TotalCost fields to persisted usage entries. |
| internal/usage/extractor.go | Adds CalculateCost and optional pricing parameter to usage extractors to compute costs. |
| internal/providers/registry_test.go | Updates background refresh tests for new StartBackgroundRefresh signature. |
| internal/providers/registry.go | Stores/caches model list data, enriches models, and refreshes model list alongside provider refreshes. |
| internal/providers/init.go | Adds best-effort background model list fetch and passes config into background refresh. |
| internal/modeldata/types.go | Defines Go types for the external registry JSON (models.json). |
| internal/modeldata/merger.go | Implements 3-layer metadata/pricing resolution (base model + provider override). |
| internal/modeldata/merger_test.go | Adds tests for merge behavior (and currently duplicates cost logic). |
| internal/modeldata/fetcher.go | Implements bounded HTTP fetch + JSON parse for the registry. |
| internal/modeldata/fetcher_test.go | Adds tests for fetching/parsing and error cases. |
| internal/modeldata/enricher.go | Adds accessor-based enrichment to attach resolved metadata to models. |
| internal/modeldata/enricher_test.go | Adds enrichment behavior tests (matched/unmatched + overrides). |
| internal/core/types.go | Adds Model.Metadata, ModelMetadata, and ModelPricing types for enriched model info. |
| internal/cache/cache.go | Adds ModelListData to cached registry payload for persistence across restarts. |
| config/config.go | Adds Cache.ModelList config (URL + timeout) with env var support and defaults. |
| .env.template | Documents environment variables for configuring/disabling model list fetch. |
Comments suppressed due to low confidence (5)
internal/modeldata/merger_test.go:376
- This test duplicates cost-calculation logic via calculateCostHelper instead of testing the canonical usage.CalculateCost implementation. This duplication can drift over time and make tests pass while production behavior changes; consider importing gomodel/internal/usage in the test and exercising usage.CalculateCost directly (or moving cost calculation to a small shared package to keep a single implementation).
```go
func TestCalculateCost(t *testing.T) {
	tests := []struct {
		name       string
		input      int
		output     int
		pricing    *core.ModelPricing
		wantInput  *float64
		wantOutput *float64
		wantTotal  *float64
	}{
		{
			name:    "nil pricing",
			input:   1000,
			output:  500,
			pricing: nil,
		},
		{
			name:   "basic pricing",
			input:  1000000,
			output: 500000,
			pricing: &core.ModelPricing{
				Currency:      "USD",
				InputPerMtok:  ptr(2.50),
				OutputPerMtok: ptr(10.00),
			},
			wantInput:  ptr(2.50),
			wantOutput: ptr(5.00),
			wantTotal:  ptr(7.50),
		},
		{
			name:   "zero tokens",
			input:  0,
			output: 0,
			pricing: &core.ModelPricing{
				Currency:      "USD",
				InputPerMtok:  ptr(2.50),
				OutputPerMtok: ptr(10.00),
			},
			wantInput:  ptr(0.0),
			wantOutput: ptr(0.0),
			wantTotal:  ptr(0.0),
		},
		{
			name:   "partial pricing (input only)",
			input:  1000000,
			output: 500000,
			pricing: &core.ModelPricing{
				Currency:     "USD",
				InputPerMtok: ptr(2.50),
			},
			wantInput: ptr(2.50),
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			// CalculateCost is in the usage package, but we test the logic inline
			in, out, total := calculateCostHelper(tt.input, tt.output, tt.pricing)
			assertPtrFloat(t, "input", in, tt.wantInput)
			assertPtrFloat(t, "output", out, tt.wantOutput)
			assertPtrFloat(t, "total", total, tt.wantTotal)
		})
	}
}

// calculateCostHelper intentionally mirrors usage.CalculateCost for testing within
// this package. It is duplicated here (rather than imported) to avoid import cycles
// between modeldata and usage. The canonical implementation lives in
// usage.CalculateCost — keep both in sync when changing cost calculation logic.
func calculateCostHelper(inputTokens, outputTokens int, pricing *core.ModelPricing) (input, output, total *float64) {
	if pricing == nil {
		return nil, nil, nil
	}
	if pricing.InputPerMtok != nil {
		v := float64(inputTokens) * *pricing.InputPerMtok / 1_000_000
		input = &v
	}
	if pricing.OutputPerMtok != nil {
		v := float64(outputTokens) * *pricing.OutputPerMtok / 1_000_000
		output = &v
	}
	if input != nil && output != nil {
		v := *input + *output
		total = &v
	}
	return
}
```
internal/providers/registry.go:548
- StartBackgroundRefresh cancellation won't stop an in-flight model list refresh because refreshModelList uses context.Background() instead of deriving from the refresh loop's ctx. Consider passing a parent ctx into refreshModelList (and using it in context.WithTimeout) so calling the returned cancel func reliably stops both provider refresh and model-list refresh work.
```go
func (r *ModelRegistry) StartBackgroundRefresh(interval time.Duration, modelListURL string, modelListTimeout time.Duration) func() {
	ctx, cancel := context.WithCancel(context.Background())
	go func() {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				refreshCtx, refreshCancel := context.WithTimeout(context.Background(), 30*time.Second)
				if err := r.Refresh(refreshCtx); err != nil {
					slog.Warn("background model refresh failed", "error", err)
				} else {
					// Save to cache after successful refresh
					if err := r.SaveToCache(refreshCtx); err != nil {
						slog.Warn("failed to save models to cache after refresh", "error", err)
					}
				}
				refreshCancel()
				// Also refresh model list if configured
				if modelListURL != "" {
					r.refreshModelList(modelListURL, modelListTimeout)
				}
			}
		}
	}()
	return cancel
}

// refreshModelList fetches the model list and re-enriches all models.
func (r *ModelRegistry) refreshModelList(url string, timeout time.Duration) {
	fetchCtx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	list, raw, err := modeldata.Fetch(fetchCtx, url, timeout)
	if err != nil {
```
internal/providers/init.go:107
- The model list fetch goroutine is created with context.Background(), so shutdown/caller cancellation won't stop the fetch early (it only stops via timeout). Consider deriving the fetch context from the Init(ctx, ...) argument so service shutdown cancels the best-effort background fetch promptly.
```go
	// Fetch model list in background (best-effort, non-blocking)
	modelListURL := result.Config.Cache.ModelList.URL
	modelListTimeout := time.Duration(result.Config.Cache.ModelList.Timeout) * time.Second
	if modelListTimeout <= 0 {
		modelListTimeout = 30 * time.Second
	}
	if modelListURL != "" {
		go func() {
			fetchCtx, cancel := context.WithTimeout(context.Background(), modelListTimeout)
			defer cancel()
			list, raw, err := modeldata.Fetch(fetchCtx, modelListURL, modelListTimeout)
			if err != nil {
				slog.Warn("failed to fetch model list", "url", modelListURL, "error", err)
				return
			}
			if list == nil {
				return
			}
```
internal/providers/registry.go:454
- EnrichModels holds r.mu for the entire enrichment and calls modeldata.Enrich, which currently logs via slog.Info. Doing logging (and potentially other non-trivial work) while holding the write lock can increase contention for callers like ListModels; consider having Enrich return counts and logging after releasing the lock (or lowering the log level / moving logging out of Enrich).
```go
// EnrichModels re-applies model list metadata to all currently registered models.
// Call this after SetModelList to update existing models with the new metadata.
// Holds the write lock for the entire operation to prevent races with concurrent
// readers (e.g. ListModels) that may read Model.Metadata.
func (r *ModelRegistry) EnrichModels() {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.modelList == nil || len(r.models) == 0 {
		return
	}
	providerTypes := make(map[core.Provider]string, len(r.providerTypes))
	for k, v := range r.providerTypes {
		providerTypes[k] = v
	}
	accessor := &registryAccessor{models: r.models, providerTypes: providerTypes}
	modeldata.Enrich(accessor, r.modelList)
}
```
internal/usage/extractor.go:61
- Cost calculation is introduced here, but the existing usage extractor tests don't exercise the pricing path (and therefore won't catch regressions in CalculateCost or the InputCost/OutputCost/TotalCost assignment). Add/extend tests in the usage package to pass pricing and assert cost fields, including partial pricing cases.
```go
// CalculateCost computes input, output, and total costs from token counts and pricing.
// Returns nil pointers when pricing info is unavailable.
func CalculateCost(inputTokens, outputTokens int, pricing *core.ModelPricing) (input, output, total *float64) {
	if pricing == nil {
		return nil, nil, nil
	}
	if pricing.InputPerMtok != nil {
		v := float64(inputTokens) * *pricing.InputPerMtok / 1_000_000
		input = &v
	}
	if pricing.OutputPerMtok != nil {
		v := float64(outputTokens) * *pricing.OutputPerMtok / 1_000_000
		output = &v
	}
	if input != nil && output != nil {
		v := *input + *output
		total = &v
	}
	return
}

// ExtractFromChatResponse extracts usage data from a ChatResponse.
// It normalizes the usage data into a UsageEntry and preserves raw extended data.
// If pricing is provided, cost fields are calculated.
func ExtractFromChatResponse(resp *core.ChatResponse, requestID, provider, endpoint string, pricing ...*core.ModelPricing) *UsageEntry {
	if resp == nil {
		return nil
	}
	entry := &UsageEntry{
		ID:           uuid.New().String(),
		RequestID:    requestID,
		ProviderID:   resp.ID,
		Timestamp:    time.Now().UTC(),
		Model:        resp.Model,
		Provider:     provider,
		Endpoint:     endpoint,
		InputTokens:  resp.Usage.PromptTokens,
		OutputTokens: resp.Usage.CompletionTokens,
		TotalTokens:  resp.Usage.TotalTokens,
	}
	// Preserve raw extended usage data if available (defensive copy to avoid races)
	if len(resp.Usage.RawUsage) > 0 {
		entry.RawData = cloneRawData(resp.Usage.RawUsage)
	}
	// Calculate costs if pricing is provided
	if len(pricing) > 0 && pricing[0] != nil {
		entry.InputCost, entry.OutputCost, entry.TotalCost = CalculateCost(entry.InputTokens, entry.OutputTokens, pricing[0])
	}
```
Fetch was receiving both a context with a deadline and a separate timeout duration used to set http.Client.Timeout, applying the same limit twice. Remove the timeout parameter and rely solely on the caller-provided context for cancellation, which http.NewRequestWithContext already propagates. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
A configurable timeout for downloading a small JSON metadata file adds unnecessary complexity. Remove the Timeout field from ModelListConfig, the MODEL_LIST_TIMEOUT env var, and simplify StartBackgroundRefresh and refreshModelList signatures by dropping the timeout parameter. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The dashboard "Estimated Cost" was nearly zero because the usage DB stores provider response model IDs (e.g., gpt-4o-2024-08-06) while the registry keys come from ListModels (e.g., gpt-4o). Add a reverse index built from provider_model_id fields in models.json so Resolve() can map dated variants back to canonical keys, and add a ResolveMetadata fallback in calculateCosts(). Also include the provider column in GetUsageByModel queries and add dashboard UI for cost display. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Mode→Modes ([]string), ProviderModelID→CustomModelID, add new pricing fields (cache_write, batch, audio per MTok + tiers), new model metadata fields, and multi-category support throughout registry and dashboard. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 10
♻️ Duplicate comments (3)
internal/cache/cache.go (1)
14-18: ⚠️ Potential issue | 🟡 Minor

Add a Go doc comment for `ModelListData`. Exported fields should have a brief doc comment.

✍️ Suggested fix

```diff
 type ModelCache struct {
 	Version   int                    `json:"version"`
 	UpdatedAt time.Time              `json:"updated_at"`
 	Models    map[string]CachedModel `json:"models"`
+	// ModelListData holds the raw JSON from the external model registry.
 	ModelListData json.RawMessage `json:"model_list_data,omitempty"`
 }
```

As per coding guidelines, `**/*.go`: Update in-code documentation (Go doc comments on exported types, functions, interfaces) when changing public APIs, adding new exported symbols, or modifying function signatures/behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/cache/cache.go` around lines 14 - 18, add a one-line Go doc comment for the exported field ModelListData on the Cache struct describing its purpose and format (e.g., "ModelListData holds raw JSON for the list of models, used to cache model list responses.") — place the comment immediately above the ModelListData field in the Cache struct so it follows Go doc conventions for exported fields.

internal/modeldata/enricher_test.go (1)
31-46: The `ptr()` helper is accessible from merger_test.go within the same package.

In Go, all *_test.go files in the same package compile together during go test, so the `ptr[T any]()` generic helper defined at line 10 of merger_test.go is accessible here. The code will compile and run correctly.

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@internal/modeldata/enricher_test.go` around lines 31 - 46, the reviewer notes that the generic helper ptr[T any]() used in TestEnrich_MatchedAndUnmatched is already defined in merger_test.go and is available to this test; confirm that ptr is defined in merger_test.go and do not re-declare it here — either leave the usage as-is or, if a duplicate ptr exists in this file, remove the duplicate and add a short comment referencing merger_test.go's ptr to avoid future confusion (check TestEnrich_MatchedAndUnmatched, ptr, and merger_test.go to locate and resolve).

internal/core/types.go (1)
78-83: Document the new `Metadata` field.

Add a Go doc comment so the exported field's purpose is discoverable in generated docs.

📝 Suggested doc comment

```diff
 type Model struct {
 	ID      string `json:"id"`
 	Object  string `json:"object"`
 	OwnedBy string `json:"owned_by"`
 	Created int64  `json:"created"`
+	// Metadata holds enriched information from the external model registry.
 	Metadata *ModelMetadata `json:"metadata,omitempty"`
 }
```

As per coding guidelines: `**/*.go`: Update in-code documentation (Go doc comments on exported types, functions, interfaces) when changing public APIs, adding new exported symbols, or modifying function signatures/behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/core/types.go` around lines 78 - 83, Add a Go doc comment above the exported Metadata field in the struct (Metadata *ModelMetadata `json:"metadata,omitempty"`) describing its purpose and when it is populated, referencing the ModelMetadata type; ensure the comment is a proper sentence starting with "Metadata" so it appears in generated Go docs and follows project doc style for exported fields.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/admin/dashboard/static/css/dashboard.css`:
- Around line 835-843: The font-family list in the td.col-price rule violates
stylelint's font-family-name-quotes because Menlo and Consolas are quoted;
update the td.col-price font-family declaration (in the .col-price /
td.col-price CSS block) to quote only the family with spaces (e.g., 'SF Mono')
and leave Menlo and Consolas unquoted so it reads like: 'SF Mono', Menlo,
Consolas, monospace.
In `@internal/admin/dashboard/templates/index.html`:
- Around line 173-199: The current table shows category-specific price columns
even when activeCategory === 'all', causing irrelevant empty cells for non-text
models; update the table header and price cells to only render category-specific
pricing when a single category is active (e.g., change the x-show on the "Input
$/MTok", "Output $/MTok" and "Cached $/MTok" header/cell elements to something
like x-show="activeCategory === 'text_generation'" or x-show="activeCategory !==
'all' && activeCategory === 'text_generation'", and similarly adjust the
corresponding cell templates that use formatPrice(m.model.metadata?.pricing...))
so that when activeCategory === 'all' those columns are hidden (or replace with
generic price columns if you prefer a unified view). Ensure you update both the
<th> elements and the matching <td> cells (references: activeCategory,
filteredModels, formatPrice, and m.model.metadata?.pricing).
- Around line 316-319: The empty-state logic currently only covers when
modelFilter is set and misses the case where a category selection yields no
models; update the template conditions that render the empty messages (the <p
class="empty-state"> lines that check models.length, filteredModels.length and
modelFilter) to also consider the category selection state (e.g., categoryFilter
or selectedCategory) so that when filteredModels.length === 0 and a category is
selected you show an appropriate "No models in this category" message; keep the
existing "No models registered" and "No models match your filter" behaviors but
add the category-based condition to the second or a new empty-state paragraph.
In `@internal/admin/handler.go`:
- Around line 137-139: Update the calculateCosts doc comment to accurately
describe its partial-cost behavior: explain that calculateCosts enriches a
UsageSummary by computing costs for any models with available pricing and will
set aggregated totals when at least one priced model exists, while leaving
individual unpriced model costs nil; reference the calculateCosts function,
UsageSummary, and per-model token/pricing lookup to clarify that the
implementation is best-effort and not strictly all-or-nothing.
- Around line 150-166: The loop over modelUsages casts
mu.InputTokens/mu.OutputTokens to int before calling usage.CalculateCost which
can truncate int64 on 32-bit platforms; update the code to use a CalculateCost
signature that accepts int64 (or add explicit range checks) and call
usage.CalculateCost(int64(mu.InputTokens), int64(mu.OutputTokens), meta.Pricing)
or validate bounds before casting; change the CalculateCost function (and any
callers) to accept int64 parameters and update ModelUsage-related aggregates to
pass int64 to avoid silent truncation.
In `@internal/modeldata/fetcher.go`:
- Line 20: The code currently instantiates a new http.Client in the fetcher
(client := &http.Client{}), which prevents connection pooling; change this to
reuse a single client by either (a) adding a package-level variable (e.g., var
httpClient = &http.Client{}) and using httpClient in the fetch function, or (b)
accepting an injected *http.Client parameter on the fetcher constructor/function
and using that instead; update usages of the local client variable to reference
the shared httpClient (or the injected client) and remove the per-call
instantiation to enable connection reuse.
In `@internal/providers/init.go`:
- Around line 88-123: The background goroutine is using context.Background() so
it won't cancel when the outer Init context is cancelled; change the fetch
goroutine to derive its timeout from the existing Init ctx (use
context.WithTimeout(ctx, 45*time.Second) instead of
context.WithTimeout(context.Background(), 45*time.Second)) so modeldata.Fetch is
tied to Init's cancellation, ensuring the fetch stops on shutdown; locate the
anonymous goroutine where modelListURL, modeldata.Fetch, registry.SetModelList
and registry.SaveToCache are used and replace the context creation accordingly.
In `@internal/usage/reader_mongodb.go`:
- Around line 85-86: Add a Go doc comment for the exported method
GetUsageByModel on MongoDBReader: describe what the function does (retrieves
usage aggregated by model), explain the meaning of the params (ctx for context
cancellation, params of type UsageQueryParams to filter/time-range/grouping),
and specify the return values (slice of ModelUsage and error) and any important
behavior (e.g., aggregation pipeline, possible error conditions like DB failures
or empty results). Place the comment immediately above the GetUsageByModel
function declaration and use the standard "GetUsageByModel ..." sentence form so
it is recognized by godoc.
In `@internal/usage/reader_postgresql.go`:
- Line 54: Add a Go doc comment for the exported method GetUsageByModel on type
PostgreSQLReader: a one-line comment starting with "GetUsageByModel" that
briefly describes what aggregation the method returns (e.g., that it aggregates
usage by model over the query parameters and returns a slice of ModelUsage),
mention the input params type UsageQueryParams and that it returns
([]ModelUsage, error), and place the comment directly above the GetUsageByModel
function declaration in reader_postgresql.go.
In `@internal/usage/reader_sqlite.go`:
- Around line 53-54: Add a Go doc comment for the exported method
GetUsageByModel on SQLiteReader describing what the method does, its parameters
(ctx and params of type UsageQueryParams), what it returns ([]ModelUsage and
error), and any important behavior or side effects (e.g., SQL queries performed,
error conditions, concurrency/safety expectations). Place the comment
immediately above the func (r *SQLiteReader) GetUsageByModel(...) declaration
and write it in the standard Go doc style starting with "GetUsageByModel ..." so
godoc picks it up.
---
Duplicate comments:
In `@internal/cache/cache.go`:
- Around line 14-18: Add a one-line Go doc comment for the exported field
ModelListData on the Cache struct describing its purpose and format (e.g.,
"ModelListData holds raw JSON for the list of models, used to cache model list
responses.") — place the comment immediately above the ModelListData field in
the Cache struct so it follows Go doc conventions for exported fields.
In `@internal/core/types.go`:
- Around line 78-83: Add a Go doc comment above the exported Metadata field in
the struct (Metadata *ModelMetadata `json:"metadata,omitempty"`) describing its
purpose and when it is populated, referencing the ModelMetadata type; ensure the
comment is a proper sentence starting with "Metadata" so it appears in generated
Go docs and follows project doc style for exported fields.
In `@internal/modeldata/enricher_test.go`:
- Around line 31-46: The reviewer notes that the generic helper ptr[T any]()
used in TestEnrich_MatchedAndUnmatched is already defined in merger_test.go and
is available to this test; confirm that ptr is defined in merger_test.go and do
not re-declare it here—either leave the usage as-is or, if a duplicate ptr
exists in this file, remove the duplicate and add a short comment referencing
merger_test.go's ptr to avoid future confusion (check
TestEnrich_MatchedAndUnmatched, ptr, and merger_test.go to locate and resolve).
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (27)
.env.template
config/config.go
internal/admin/dashboard/static/css/dashboard.css
internal/admin/dashboard/static/js/dashboard.js
internal/admin/dashboard/templates/index.html
internal/admin/handler.go
internal/admin/handler_test.go
internal/cache/cache.go
internal/core/types.go
internal/core/types_test.go
internal/modeldata/enricher.go
internal/modeldata/enricher_test.go
internal/modeldata/fetcher.go
internal/modeldata/fetcher_test.go
internal/modeldata/merger.go
internal/modeldata/merger_test.go
internal/modeldata/types.go
internal/providers/init.go
internal/providers/registry.go
internal/providers/registry_test.go
internal/server/http.go
internal/usage/extractor.go
internal/usage/reader.go
internal/usage/reader_mongodb.go
internal/usage/reader_postgresql.go
internal/usage/reader_sqlite.go
internal/usage/usage.go
Extended core.Usage and ResponsesUsage with typed PromptTokensDetails and CompletionTokensDetails structs so json.Unmarshal captures OpenAI/xAI nested usage fields. Added cache fields to Anthropic's anthropicUsage and populate RawUsage in both convert functions. Added buildRawUsageFromDetails helper in the extractor to merge typed fields into RawData using the same key convention as the streaming path. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Resolved merge conflict in admin handler (kept both swagger annotations and category filter comment). Added missing pricingResolver arg to NewHandler calls in handlers_test.go. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Pull request overview
Copilot reviewed 43 out of 44 changed files in this pull request and generated no new comments.
Comments suppressed due to low confidence (3)
internal/usage/cost.go:143
- CalculateGranularCost will add a caveat for any RawData key ending in "_tokens"/"_count" that isn't in providerMappings. Since buildRawUsageFromDetails can populate keys like "prompt_text_tokens", "prompt_image_tokens", and prediction token breakdowns (which are typically just a breakdown of prompt/completion tokens, not an extra billable category), this will incorrectly mark cost calculation as incomplete and may confuse users. Consider explicitly excluding known breakdown-only fields from caveat generation (or marking them as mapped-with-no-extra-cost), and/or tightening isTokenField to a per-provider allowlist of additive token fields.
internal/admin/handler.go:243 - UsageLog assumes GetUsageLog always returns a non-nil result when err is nil, and dereferences result.Entries unconditionally. To avoid a panic if an implementation returns (nil, nil), add a nil check (e.g., initialize an empty UsageLogResult) before accessing result.Entries.
result, err := h.usageReader.GetUsageLog(c.Request().Context(), params)
if err != nil {
return handleError(c, err)
}
if result.Entries == nil {
result.Entries = []usage.UsageLogEntry{}
}
config/config.go:272
- PR description mentions adding a ModelListConfig with URL + timeout, but the config struct only includes URL and the fetch logic uses hard-coded 45s timeouts in providers/init.go and providers/registry.go. Either update the PR description, or add a configurable timeout field (with env/yaml support) and plumb it through the fetch/refresh paths so operators can tune it.
// ModelListConfig holds configuration for fetching the external model metadata registry.
type ModelListConfig struct {
// URL is the HTTP(S) URL to fetch models.json from (empty = disabled)
URL string `yaml:"url" env:"MODEL_LIST_URL"`
}
Actionable comments posted: 28
♻️ Duplicate comments (10)
internal/core/types.go (1)
96-101: ⚠️ Potential issue | 🟡 Minor
Add a doc comment for Model.Metadata.
Line 100 exposes Metadata without a Go doc comment.
✍️ Suggested fix
type Model struct {
	ID       string `json:"id"`
	Object   string `json:"object"`
	OwnedBy  string `json:"owned_by"`
	Created  int64  `json:"created"`
+	// Metadata holds enriched information from the external model registry.
	Metadata *ModelMetadata `json:"metadata,omitempty"`
}
As per coding guidelines, "Use Go doc comments on exported types, functions, and interfaces to document public APIs".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/core/types.go` around lines 96 - 101, The exported field Model.Metadata lacks a Go doc comment; add a one-line Go doc comment directly above the Metadata field on the Model struct that explains what metadata contains and when it may be nil (e.g., "Metadata contains optional model metadata such as training data or tags; may be nil"). Reference the Model.Metadata field and the ModelMetadata type in the comment so users can understand the purpose and optional nature of the metadata.
internal/cache/cache.go (1)
14-17: ⚠️ Potential issue | 🟡 Minor
Add a doc comment for ModelListData.
Line 17 introduces an exported field without a Go doc comment; add a brief description of the raw registry JSON it stores.
✍️ Suggested fix
type ModelCache struct {
	Version   int                    `json:"version"`
	UpdatedAt time.Time              `json:"updated_at"`
	Models    map[string]CachedModel `json:"models"`
+	// ModelListData stores the raw JSON payload fetched from the external model registry.
	ModelListData json.RawMessage `json:"model_list_data,omitempty"`
}
As per coding guidelines, "Use Go doc comments on exported types, functions, and interfaces to document public APIs".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/cache/cache.go` around lines 14 - 17, Add a Go doc comment for the exported field ModelListData to describe what it holds; specifically, above the ModelListData json.RawMessage field (the exported field in the cache struct alongside Version, UpdatedAt, and Models), add a short sentence stating it contains the raw registry JSON for the model list (e.g., "ModelListData contains the raw registry JSON for available models") so the public API is documented.
internal/usage/reader_mongodb.go (1)
99-100: ⚠️ Potential issue | 🟡 Minor
Add a Go doc comment for GetUsageByModel.
Line 99 introduces an exported method without a Go doc comment.
✍️ Suggested fix
+// GetUsageByModel aggregates usage totals grouped by model and provider.
 func (r *MongoDBReader) GetUsageByModel(ctx context.Context, params UsageQueryParams) ([]ModelUsage, error) {
As per coding guidelines, "Use Go doc comments on exported types, functions, and interfaces to document public APIs".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/reader_mongodb.go` around lines 99 - 100, Add a Go doc comment for the exported method GetUsageByModel on MongoDBReader that begins with the method name and briefly describes its behavior, expected parameters and return values (e.g., what UsageQueryParams represents and that it returns a slice of ModelUsage or an error); place the comment directly above the GetUsageByModel declaration so it documents the public API.
internal/providers/init.go (1)
91-115: context.Background() ignores the Init context's cancellation — derive from ctx instead.
If the caller cancels ctx during shutdown (e.g., the application is shutting down while the registry is still initializing), this goroutine continues fetching for up to 45 seconds. Using ctx as the parent aligns the fetch lifetime with the initialization phase.
🛡️ Proposed fix
- fetchCtx, cancel := context.WithTimeout(context.Background(), 45*time.Second)
+ fetchCtx, cancel := context.WithTimeout(ctx, 45*time.Second)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/providers/init.go` around lines 91 - 115, The goroutine that calls modeldata.Fetch currently creates a timeout from context.Background(), which ignores the surrounding Init context cancellation; change the parent context used to derive the timeout so it uses the outer ctx (i.e., use context.WithTimeout(ctx, 45*time.Second) and keep the defer cancel()) so the fetch (modeldata.Fetch) and subsequent registry.SetModelList / registry.EnrichModels / registry.SaveToCache calls are aborted if the Init context is canceled.
internal/admin/dashboard/static/css/dashboard.css (1)
839-843: ⚠️ Potential issue | 🟡 Minor
Stylelint font-family-name-quotes error on td.col-price.
Same issue already flagged in the previous review — 'Menlo' and 'Consolas' should be unquoted (single-word family names must not be quoted).
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/static/css/dashboard.css` around lines 839 - 843, The CSS for selector td.col-price is using quoted single-word family names which triggers stylelint's font-family-name-quotes rule; update the font-family declaration in td.col-price to keep the quoted multi-word family ('SF Mono') but remove quotes around single-word families so it reads: 'SF Mono', Menlo, Consolas, monospace (i.e., unquote Menlo and Consolas).
internal/modeldata/fetcher.go (1)
20-20: 🧹 Nitpick | 🔵 Trivial
Reusing http.Client for connection pooling is still unaddressed.
Each call to Fetch creates a fresh &http.Client{}, bypassing connection pooling and TLS session reuse. For the current use-case (background refresh on startup) this is low-risk but the previous suggestion to share a package-level or injected client still stands.
Verify each finding against the current code and only fix it if needed. In `@internal/modeldata/fetcher.go` at line 20, The Fetch implementation currently creates a new http.Client on each call (client := &http.Client{}), preventing connection pooling and TLS session reuse; change Fetch to use a shared client instead—either add a package-level variable (e.g., var httpClient = &http.Client{}) or accept an injected *http.Client on the fetcher struct (e.g., add a Client field on the fetcher type and use f.Client in Fetch) and replace the local client creation with that shared/injected client so connections and TLS sessions are reused across calls.
internal/usage/reader_sqlite.go (1)
58-58: Add Go doc comments for GetUsageByModel and GetUsageLog.
Same issue as the PostgreSQL reader — both new exported methods lack doc comments. As per coding guidelines:
**/*.go: Use Go doc comments on exported types, functions, and interfaces to document public APIs.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/reader_sqlite.go` at line 58, Add Go doc comments for the exported methods on SQLiteReader: document GetUsageByModel and GetUsageLog with brief descriptions of what they do, their parameters, and return values (e.g., mention ctx and params UsageQueryParams and that they return []ModelUsage and error). Place the comments immediately above the function declarations using the standard Go doc format starting with the function name (e.g., "GetUsageByModel ..." and "GetUsageLog ...") so they appear in godoc and satisfy the package's documentation guidelines.
internal/usage/reader_postgresql.go (1)
58-58: Add Go doc comments for GetUsageByModel and GetUsageLog.
Both new exported methods lack doc comments. As per coding guidelines:
**/*.go: Use Go doc comments on exported types, functions, and interfaces to document public APIs.
✍️ Suggested doc comments
+// GetUsageByModel returns aggregated input/output token usage and costs grouped by model and provider.
 func (r *PostgreSQLReader) GetUsageByModel(ctx context.Context, params UsageQueryParams) ([]ModelUsage, error) {
+// GetUsageLog returns a paginated list of individual usage log entries with optional filtering.
 func (r *PostgreSQLReader) GetUsageLog(ctx context.Context, params UsageLogParams) (*UsageLogResult, error) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/reader_postgresql.go` at line 58, Add Go doc comments for the exported methods GetUsageByModel and GetUsageLog on the PostgreSQLReader receiver: write short, clear sentences starting with the method name that describe what each method does, its primary inputs (e.g., context.Context and UsageQueryParams for GetUsageByModel) and what it returns (a slice of ModelUsage and error, or usage logs and error). Place the comments immediately above the function declarations for GetUsageByModel and GetUsageLog and follow Go doc style (start with the function name and be concise).
internal/admin/dashboard/templates/index.html (2)
510-511: Empty state still doesn't account for category filtering.
When a user selects a category that has no models (e.g., "video") and modelFilter is empty, both existing empty-state paragraphs are hidden — leaving a blank area. A previous review flagged this same gap.
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/templates/index.html` around lines 510 - 511, The empty-state logic hides both messages when a category is selected but modelFilter is empty; update the x-show conditions so the "No models match your filter." paragraph is shown whenever filteredModels.length === 0 (and not loading/authError), regardless of modelFilter, and/or add a new paragraph that triggers when filteredModels.length === 0 && selectedCategory (or categoryFilter) is set; reference the template variables models, filteredModels, modelFilter and the category selection variable (e.g., selectedCategory) and change the x-show expressions to include filteredModels.length === 0 && !loading && !authError (and/or selectedCategory) so a clear empty-state appears when a chosen category has no models.
366-392: "All" category view shows text-generation pricing columns for non-text models.
When activeCategory === 'all', the text_generation table layout is displayed for all models, including image, audio, and utility models that don't have input_per_mtok/output_per_mtok pricing. These rows will show empty cells in the pricing columns, which may confuse users.
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/templates/index.html` around lines 366 - 392, The table currently shows text-generation pricing columns for every row when activeCategory === 'all'; update the per-row pricing cell visibility so those columns only render when the model is actually a text-generation model. Change the pricing <td> elements (the ones using formatPrice and the cached column) to include an additional condition like (activeCategory === 'text_generation' || (m.model.metadata?.modes ?? []).includes('text_generation')) in their x-show, so only rows whose m.model.metadata.modes contains 'text_generation' (or when the whole view is the text_generation category) display input_per_mtok, output_per_mtok and cached_input_per_mtok; keep using formatPrice for rendering.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/admin/dashboard/static/css/dashboard.css`:
- Around line 948-964: Update the font-family declarations in
.donut-legend-label and .donut-legend-value to remove quotes from single-word
family names and keep quotes for multi-word names: change "font-family: 'SF
Mono', 'Menlo', 'Consolas', monospace;" to "font-family: 'SF Mono', Menlo,
Consolas, monospace;" in both the .donut-legend-label and .donut-legend-value
rules so they comply with the font-family-name-quotes stylelint rule.
In `@internal/admin/dashboard/static/js/dashboard.js`:
- Around line 623-669: fetchUsagePage calls fetchUsageLog(false) so the
usageLog.offset isn't reset when the date range or other query params change,
causing empty pages; change fetchUsagePage to call fetchUsageLog(true) (or
otherwise ensure fetchUsageLog is invoked with resetOffset=true) so that when
fetchUsagePage runs after query changes (e.g., from fetchUsage or date-range
updates using _usageQueryStr) the offset is reset to 0 and the first page of
results is fetched; update any callers that rely on the previous behavior if
needed.
- Around line 672-701: The Biome warning is caused by implicit returns inside
the forEach callbacks in usageLogModelOptions and usageLogProviderOptions;
update the callbacks to use block bodies with explicit statements (e.g., change
this.modelUsage.forEach(m => set.add(m.model)) to this.modelUsage.forEach(m => {
set.add(m.model); }) and similarly for provider) so there are no implicit
returns from the forEach callbacks; keep the surrounding logic in
usageLogModelOptions and usageLogProviderOptions unchanged.
In `@internal/admin/dashboard/templates/index.html`:
- Around line 144-241: The date-picker markup duplicated between the overview
and usage pages should be extracted into a Go template partial (e.g., define a
template named "date-picker") and replaced in both locations with a single
template invocation (e.g., {{template "date-picker" .}}); ensure the partial
includes the full interactive markup that references Alpine state and handlers
used here (toggleDatePicker(), closeDatePicker(), datePickerOpen,
selectedPreset, selectPreset(), calendarDays(), calendarTitle(), prevMonth(),
nextMonth(), isToday(), isRangeStart(), isRangeEnd(), isInRange(),
isFutureDay(), selectCalendarDay(), onCalendarMouseMove(),
onCalendarMouseLeave(), cursorHint, selectingDate, selectionHint(),
dateRangeLabel()) and keep any surrounding container classes/attributes required
by the parent pages so the Alpine bindings still resolve; finally delete the
duplicated HTML blocks from both pages and confirm the partial renders
identically in overview and usage contexts.
- Around line 327-328: The pagination text can show an invalid range when the
current page is empty (stale offset); update the x-text expression on the span
that uses usageLog (the span with class "pagination-info" and x-text) to guard
the start and end values: compute start as Math.min(usageLog.offset + 1,
usageLog.total) and end as Math.min(usageLog.offset + usageLog.limit,
usageLog.total) (or ensure end >= start) so the displayed range never shows
start > end or a start of 1 when total is 0; keep the existing
x-show="usageLog.total > 0" but make the x-text resilient to stale offsets by
using these guarded values.
In `@internal/admin/handler.go`:
- Around line 182-203: The new admin handlers UsageByModel, UsageLog, and
ListCategories lack Swagger/OpenAPI annotations so they don't appear in
generated docs; add full comment annotations above each handler mirroring the
style used by UsageSummary and DailyUsage (include `@Summary`, `@Tags`, `@Param` for
query/path bodies where applicable, `@Success` with response schema, `@Failure` for
error cases, and `@Router` with the correct method/path) and ensure the referenced
response types match the handler return types to make them discoverable by the
generator.
- Around line 225-234: The query parsing allows unlimited and zero limits;
update the parsing around c.QueryParam("limit")/c.QueryParam("offset") to
enforce a sensible default and an upper bound: if limit is absent or parsed <=0
set params.Limit to a default (e.g., 100) and if parsed > MAX_LIMIT (e.g., 1000)
cap it to MAX_LIMIT; ensure params.Offset remains >=0 (keep current check) and
document these constants near where GetUsageLog is called so GetUsageLog never
receives 0 to mean "no limit". Reference the existing symbols params.Limit,
params.Offset, c.QueryParam("limit") and c.QueryParam("offset") when
implementing the default and cap.
In `@internal/modeldata/enricher_test.go`:
- Around line 127-129: The test in enricher_test.go dereferences
meta.Pricing.InputPerMtok directly which can panic if Resolve returns a Pricing
with InputPerMtok == nil; update the assertion to first check meta.Pricing !=
nil and meta.Pricing.InputPerMtok != nil before dereferencing (or use a helper
to get the value safely) so the test fails cleanly instead of panicking when
calling Resolve and inspecting meta.Pricing.InputPerMtok.
In `@internal/modeldata/fetcher_test.go`:
- Around line 1-149: Add a unit test that verifies Fetch returns an error when
the HTTP response body exceeds the 10 MB limit: create a httptest.Server handler
that sets "Content-Type: application/json" and writes maxBodySize+N bytes (use
bytes.Repeat to generate >10*1024*1024 bytes), then call Fetch(ctx, server.URL)
and assert err is non-nil; reference the Fetch function and the maxBodySize
branch (the len(raw) > maxBodySize check) to ensure the oversized-body code path
is exercised.
In `@internal/modeldata/merger_test.go`:
- Around line 226-282: TestConvertPricing_AllFields omits the
PricingEntry.InputPerImage field so convertPricing's mapping for InputPerImage
isn't tested; update the test by adding InputPerImage: ptr(<value>) to the
PricingEntry literal and add a corresponding check in the checks slice (e.g.,
{"InputPerImage", p.InputPerImage, <value>}) to assert the converted value;
locate references in the test function TestConvertPricing_AllFields and the
convertPricing/PricingEntry symbols to ensure the new field is covered.
In `@internal/modeldata/merger.go`:
- Around line 38-47: The Reverse lookup in Resolve uses an unbounded recursive
call (Resolve(...)) when providerModelByActualID maps reverseKey ->
compositeKey, risking stack overflow on cycles; change this to perform the
reverse resolution inline (look up compositeKey's provider and model entries
directly within Resolve) or implement a simple recursion depth guard (e.g.,
maxDepth parameter or visited set) to prevent cycles, and reuse the
already-built reverseKey instead of reconstructing key; update Resolve and any
callers to pass/handle the depth/visited info if you choose the guard approach.
- Around line 69-80: The provider-model override block currently applies
ContextWindow/MaxOutputTokens/Pricing but ignores
ProviderModelEntry.Capabilities; update the pm != nil branch in merger.go to
apply capability overrides too (e.g., if pm.Capabilities != nil then
merge/overwrite non-nil fields into meta.Capabilities using the same "non-nil
wins" semantics), either by adding an inline if block or calling a new helper
like mergeCapabilities(meta.Capabilities, pm.Capabilities) so provider-specific
capability flags (e.g., vision disabled) are preserved; alternatively, if you
choose not to merge, add a clear comment in that pm != nil block stating base
model capabilities are authoritative.
In `@internal/modeldata/types.go`:
- Around line 23-48: Replace the manual slash-scanning in
ModelList.buildReverseIndex with strings.Cut: for each compositeKey in
ProviderModels, call strings.Cut(compositeKey, "/") to get providerType and
modelPart and a boolean ok; skip when !ok or pm.CustomModelID == nil; then build
actualID := *pm.CustomModelID and reverseKey := providerType + "/" + actualID
and only set l.providerModelByActualID[reverseKey] = compositeKey when
reverseKey != compositeKey; ensure you import "strings" and update references to
providerModelByActualID, ProviderModels, CustomModelID, and compositeKey
accordingly.
In `@internal/providers/registry_test.go`:
- Around line 668-671: The test uses a hardcoded literal 7 when asserting
category counts; replace that magic constant with the dynamic value
len(core.AllCategories()) so the assertion compares len(counts) against
len(core.AllCategories()), ensuring it stays correct if categories change—update
the assertion around the counts variable to use core.AllCategories() instead of
7.
In `@internal/providers/registry.go`:
- Around line 489-500: The loop building CategoryCount in registry.go may yield
empty DisplayName when categoryDisplayNames lacks an entry for a new category
from core.AllCategories(); update the construction inside the loop that creates
CategoryCount (the block referencing allCategories, counts, core.CategoryAll,
and r.models) to use a safe fallback when categoryDisplayNames[cat] is
missing—e.g., check the map lookup and if missing set DisplayName to a sensible
default like cat.String() or string(cat) (or "Unknown") so UI labels are never
blank.
- Around line 204-226: The code currently only assigns r.modelListRaw when
parsing succeeds, so a parse failure drops the raw bytes; always preserve the
raw model-list payload by assigning r.modelListRaw = modelCache.ModelListData
regardless of whether modeldata.Parse returned an error, but only set
r.modelList = list when parsing succeeded. Move or add the r.modelListRaw
assignment into the r.mu.Lock critical section (or ensure it’s protected by the
same lock) so you still set r.models = newModels and r.modelListRaw =
modelCache.ModelListData atomically, while keeping the existing conditional
r.modelList = list behavior.
In `@internal/usage/cost_test.go`:
- Around line 37-57: CalculateGranularCost currently double-counts
provider-specific special-token subsets (e.g., OpenAI's cached_tokens and
reasoning_tokens) by charging them at both the base rate and the special rate;
change the logic in CalculateGranularCost so that for providers like "openai"
(and similarly Gemini/xAI if same semantics) you first subtract cached_tokens
from input_tokens and reasoning_tokens from output_tokens before applying the
base InputPerMtok/OutputPerMtok, then apply CachedInputPerMtok and
ReasoningOutputPerMtok only to those subset counts; keep the additive model for
providers (e.g., Anthropic) where special counts are separate. Ensure you
reference the rawData keys "cached_tokens" and "reasoning_tokens" and the
ModelPricing fields InputPerMtok, CachedInputPerMtok, OutputPerMtok,
ReasoningOutputPerMtok when updating the computation and tests.
In `@internal/usage/cost.go`:
- Around line 44-66: providerMappings contains multiple rawDataKey variants that
map to the same pricingField (e.g., "cached_tokens" and "prompt_cached_tokens" →
CachedInputPerMtok) which can double-count costs if both keys exist; update the
mapping iteration that processes providerMappings to either (1) add the
suggested doc comment above providerMappings noting these alternatives are
mutually exclusive, and/or (2) implement a deduplication guard in the loop that
processes each mapping (the code that reads rawData and applies
tokenCostMapping.pricingField) by tracking which pricingField targets have
already been applied for a given provider (use a map keyed by a unique
identifier for the pricingField or the function pointer/resulting field name)
and skip any mapping whose pricingField was already used so we don’t compute the
same cost twice.
- Around line 86-129: The base token costs are being double-charged because
breakdown fields (e.g., PromptTokensDetails / CompletionTokensDetails) are added
on top of the full inputTokens/outputTokens; update the logic to subtract
specialized counts from the base totals before computing base cost: when
iterating providerMappings (symbols: providerMappings, m.rawDataKey, extractInt,
count, m.side, unitPerMtok), mark mappedKeys[m.rawDataKey]=true and subtract
count from inputTokens or outputTokens (depending on m.side) ensuring you don't
go negative, then compute the base costs using the adjusted
inputTokens/outputTokens and finally add the specialized cost using
m.pricingField(pricing) as before; leave Anthropic cache items additive if API
semantics require (add a caveat comment near the Anthropic mapping).
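The subset semantics both cost.go comments describe can be sketched like this. `openAICost` is a hypothetical helper (the field names come from the review's `ModelPricing`); it subtracts cached and reasoning tokens from the base counts before applying base rates, while an Anthropic-style additive model would skip the subtraction:

```go
package main

import "fmt"

// ModelPricing mirrors the field names used in the review; values are USD
// per million tokens.
type ModelPricing struct {
	InputPerMtok           float64
	CachedInputPerMtok     float64
	OutputPerMtok          float64
	ReasoningOutputPerMtok float64
}

// openAICost treats cached_tokens and reasoning_tokens as subsets of the
// base counts: subtract them before applying the base rates, then charge
// the subsets at their special rates. (Anthropic cache counts are reported
// separately from input_tokens, so they would stay additive.)
func openAICost(inputToks, outputToks, cached, reasoning int64, p ModelPricing) (in, out float64) {
	baseIn := inputToks - cached
	if baseIn < 0 {
		baseIn = 0
	}
	baseOut := outputToks - reasoning
	if baseOut < 0 {
		baseOut = 0
	}
	const mtok = 1_000_000
	in = float64(baseIn)/mtok*p.InputPerMtok + float64(cached)/mtok*p.CachedInputPerMtok
	out = float64(baseOut)/mtok*p.OutputPerMtok + float64(reasoning)/mtok*p.ReasoningOutputPerMtok
	return in, out
}

func main() {
	p := ModelPricing{InputPerMtok: 2, CachedInputPerMtok: 0.5, OutputPerMtok: 8, ReasoningOutputPerMtok: 8}
	in, out := openAICost(500_000, 300_000, 100_000, 50_000, p)
	fmt.Printf("%.2f %.2f\n", in, out) // 0.85 2.40
}
```

Without the subtraction, the same inputs would bill 500k tokens at the base input rate plus 100k at the cached rate, double-charging the cached subset.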
In `@internal/usage/extractor_test.go`:
- Around line 203-341: Add tests that pass a non-nil pricing argument to
exercise the pricing branch in ExtractFromChatResponse and
ExtractFromResponsesResponse: create a core.ModelPricing value with known
input/output rates, call ExtractFromChatResponse and
ExtractFromResponsesResponse with that pricing passed as the variadic pricing
...*core.ModelPricing parameter, and assert that the returned entry fields
InputCost, OutputCost, TotalCost and CostsCalculationCaveat are set to the
expected values (and not left zero/nil); target the functions
ExtractFromChatResponse, ExtractFromResponsesResponse and the output fields
InputCost, OutputCost, TotalCost, CostsCalculationCaveat when adding these
assertions.
In `@internal/usage/reader_mongodb.go`:
- Around line 179-180: Add a Go doc comment for the exported method GetUsageLog
on MongoDBReader: above the function declaration for func (r *MongoDBReader)
GetUsageLog(ctx context.Context, params UsageLogParams) (*UsageLogResult,
error), add a concise comment that starts with "GetUsageLog" and describes what
the method does, its parameters (context and UsageLogParams) and what it returns
(a UsageLogResult and error), following Go doc conventions and sentence style.
- Around line 194-206: The timestamp filter logic in GetUsageLog ignores the
case where StartDate is zero but EndDate is set; update the conditional after
startZero/endZero to add an else if !endZero branch that appends a matchFilters
bson.E on "timestamp" with a bson.D containing {Key: "$lt", Value:
params.EndDate.AddDate(0,0,1).UTC()} (mirroring the existing end handling used
when both dates are set) so end-only queries are correctly applied; locate this
change around the matchFilters append calls in reader_mongodb.go within
GetUsageLog.
In `@internal/usage/reader_postgresql.go`:
- Around line 145-149: The current search builds s := "%" + params.Search + "%"
but doesn't escape LIKE wildcards, so transform params.Search by first escaping
backslashes, then replacing "%" and "_" with escaped versions (e.g. "\" + char)
before wrapping with "%" (use a stable escape char like '\'); update the SQL
fragment in the conditions (the ILIKE clause built where argIdx is used) to
include an ESCAPE '\' clause (e.g. "ILIKE $%d ESCAPE '\\'") and continue to pass
the parameter via args and increment argIdx; update the code that constructs s
and the condition to use the escaped search string so user-supplied % and _ are
treated literally.
- Around line 187-189: The json.Unmarshal error is being ignored for the
rawDataJSON -> e.RawData assignment; update the code around the json.Unmarshal
call (the block using rawDataJSON and e.RawData) to check the returned error and
handle it—either log a warning (using the package's logger or the standard log)
that includes the malformed raw_data and the error, or if you intentionally want
to ignore it, add a comment explaining why; ensure you do not discard the error
silently so malformed JSON is observable in logs.
- Line 171: The append call dataArgs := append(args, limit, offset) may alias
args; change it to build a new slice instead (e.g., allocate a new slice with
capacity len(args)+2, copy the contents of args into it, then append limit and
offset) so dataArgs does not share args' backing array; update the code around
the variables args and dataArgs to use this new slice construction to avoid
accidental mutation if args is later used.
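The LIKE-escaping fix above can be sketched with `strings.NewReplacer`, which escapes each input character exactly once and never re-replaces its own output (so the backslash mapping is safe to list first). `likePattern` is a hypothetical helper name:

```go
package main

import (
	"fmt"
	"strings"
)

// likeEscaper escapes the escape character itself plus the LIKE wildcards
// % and _, so user input is matched literally. The SQL fragment must then
// declare the escape character, e.g. `ILIKE $1 ESCAPE '\'`.
var likeEscaper = strings.NewReplacer(`\`, `\\`, `%`, `\%`, `_`, `\_`)

// likePattern wraps the escaped search term in wildcards for a
// contains-style match.
func likePattern(search string) string {
	return "%" + likeEscaper.Replace(search) + "%"
}

func main() {
	fmt.Println(likePattern("50%_off"))
}
```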
In `@internal/usage/reader_sqlite.go`:
- Around line 58-100: GetUsageByModel and GetUsageLog contain nearly identical
SQL-building and arg-preparation logic across SQLite and PostgreSQL; extract the
shared logic into a helper (e.g., buildUsageQueryByModel and buildUsageQueryLog
or a single buildUsageQuery) that accepts UsageQueryParams plus a
dialect/placeholder strategy and returns the final query string and
[]interface{}. Update SQLiteReader.GetUsageByModel and the corresponding
PostgreSQL methods to call the helper and only handle DB-specific placeholder
formatting (e.g., "?" vs "$N"), timestamp formatting, and LIKE vs ILIKE; keep
scanning/row iteration in the readers unchanged. Ensure the helper encapsulates:
assembling SELECT/GROUP BY/WHERE clauses, adding date bounds and costCols, and
returning the args in the correct order so both GetUsageByModel and GetUsageLog
can reuse it.
- Around line 180-186: The timestamp parsing block leaves e.Timestamp as the
zero time if all three time.Parse attempts (the RFC3339Nano, "2006-01-02
15:04:05.999999999-07:00", and "2006-01-02T15:04:05Z" branches) fail; update the
final else to handle parse failures by emitting a warning (via your package
logger) that includes the raw ts value and by recording the failure on the entry
(e.g., set a new flag or field like TimestampParseError or store the raw
timestamp in e.RawTimestamp) so the UI can detect/annotate bad parses instead of
showing the 0001-01-01 time. Ensure you modify the same parsing block that
assigns e.Timestamp so callers can rely on the new flag/field.
- Line 162: The current dataArgs := append(args, limit, offset) can mutate the
original args slice's backing array; instead create a new slice that copies args
before appending limit and offset (e.g., allocate a new slice of length
len(args)+2, copy args into it, then set the last two elements to limit and
offset) so that args is not aliased or modified; update the code around
dataArgs, args, limit, and offset in reader_sqlite.go accordingly.
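The aliasing fix flagged for both readers amounts to copying before appending. A minimal sketch (`appendPageArgs` is a hypothetical helper name):

```go
package main

import "fmt"

// appendPageArgs returns a fresh slice so appending limit/offset can never
// clobber args' backing array if args has spare capacity and is reused
// (e.g., for a separate count query).
func appendPageArgs(args []interface{}, limit, offset int) []interface{} {
	dataArgs := make([]interface{}, len(args), len(args)+2)
	copy(dataArgs, args)
	return append(dataArgs, limit, offset)
}

func main() {
	args := make([]interface{}, 1, 4) // spare capacity: a plain append would alias
	args[0] = "gpt-4o"
	dataArgs := appendPageArgs(args, 50, 0)
	args = append(args, "extra") // safe: does not overwrite dataArgs[1]
	fmt.Println(dataArgs[1], dataArgs[2])
}
```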
---
Duplicate comments:
In `@internal/admin/dashboard/static/css/dashboard.css`:
- Around line 839-843: The CSS for selector td.col-price is using quoted
single-word family names which triggers stylelint's font-family-name-quotes
rule; update the font-family declaration in td.col-price to keep the quoted
multi-word family ('SF Mono') but remove quotes around single-word families so
it reads: 'SF Mono', Menlo, Consolas, monospace (i.e., unquote Menlo and
Consolas).
In `@internal/admin/dashboard/templates/index.html`:
- Around line 510-511: The empty-state logic hides both messages when a category
is selected but modelFilter is empty; update the x-show conditions so the "No
models match your filter." paragraph is shown whenever filteredModels.length ===
0 (and not loading/authError), regardless of modelFilter, and/or add a new
paragraph that triggers when filteredModels.length === 0 && selectedCategory (or
categoryFilter) is set; reference the template variables models, filteredModels,
modelFilter and the category selection variable (e.g., selectedCategory) and
change the x-show expressions to include filteredModels.length === 0 && !loading
&& !authError (and/or selectedCategory) so a clear empty-state appears when a
chosen category has no models.
- Around line 366-392: The table currently shows text-generation pricing columns
for every row when activeCategory === 'all'; update the per-row pricing cell
visibility so those columns only render when the model is actually a
text-generation model. Change the pricing <td> elements (the ones using
formatPrice and the cached column) to include an additional condition like
(activeCategory === 'text_generation' || (m.model.metadata?.modes ??
[]).includes('text_generation')) in their x-show, so only rows whose
m.model.metadata.modes contains 'text_generation' (or when the whole view is the
text_generation category) display input_per_mtok, output_per_mtok and
cached_input_per_mtok; keep using formatPrice for rendering.
In `@internal/cache/cache.go`:
- Around line 14-17: Add a Go doc comment for the exported field ModelListData
to describe what it holds; specifically, above the ModelListData json.RawMessage
field (the exported field in the cache struct alongside Version, UpdatedAt, and
Models), add a short sentence stating it contains the raw registry JSON for the
model list (e.g., "ModelListData contains the raw registry JSON for available
models") so the public API is documented.
In `@internal/core/types.go`:
- Around line 96-101: The exported field Model.Metadata lacks a Go doc comment;
add a one-line Go doc comment directly above the Metadata field on the Model
struct that explains what metadata contains and when it may be nil (e.g.,
"Metadata contains optional model metadata such as training data or tags; may be
nil"). Reference the Model.Metadata field and the ModelMetadata type in the
comment so users can understand the purpose and optional nature of the metadata.
In `@internal/modeldata/fetcher.go`:
- Line 20: The Fetch implementation currently creates a new http.Client on each
call (client := &http.Client{}), preventing connection pooling and TLS session
reuse; change Fetch to use a shared client instead—either add a package-level
variable (e.g., var httpClient = &http.Client{}) or accept an injected
*http.Client on the fetcher struct (e.g., add a Client field on the fetcher type
and use f.Client in Fetch) and replace the local client creation with that
shared/injected client so connections and TLS sessions are reused across calls.
In `@internal/providers/init.go`:
- Around line 91-115: The goroutine that calls modeldata.Fetch currently creates
a timeout from context.Background(), which ignores the surrounding Init context
cancellation; change the parent context used to derive the timeout so it uses
the outer ctx (i.e., use context.WithTimeout(ctx, 45*time.Second) and keep the
defer cancel()) so the fetch (modeldata.Fetch) and subsequent
registry.SetModelList / registry.EnrichModels / registry.SaveToCache calls are
aborted if the Init context is canceled.
In `@internal/usage/reader_mongodb.go`:
- Around line 99-100: Add a Go doc comment for the exported method
GetUsageByModel on MongoDBReader that begins with the method name and briefly
describes its behavior, expected parameters and return values (e.g., what
UsageQueryParams represents and that it returns a slice of ModelUsage or an
error); place the comment directly above the GetUsageByModel declaration so it
documents the public API.
In `@internal/usage/reader_postgresql.go`:
- Line 58: Add Go doc comments for the exported methods GetUsageByModel and
GetUsageLog on the PostgreSQLReader receiver: write short, clear sentences
starting with the method name that describe what each method does, its primary
inputs (e.g., context.Context and UsageQueryParams for GetUsageByModel) and what
it returns (a slice of ModelUsage and error, or usage logs and error). Place the
comments immediately above the function declarations for GetUsageByModel and
GetUsageLog and follow Go doc style (start with the function name and be
concise).
In `@internal/usage/reader_sqlite.go`:
- Line 58: Add Go doc comments for the exported methods on SQLiteReader:
document GetUsageByModel and GetUsageLog with brief descriptions of what they
do, their parameters, and return values (e.g., mention ctx and params
UsageQueryParams and that they return []ModelUsage and error). Place the
comments immediately above the function declarations using the standard Go doc
format starting with the function name (e.g., "GetUsageByModel ..." and
"GetUsageLog ...") so they appear in godoc and satisfy the package's
documentation guidelines.
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
go.sum is excluded by !**/*.sum
📒 Files selected for processing (43)
- .env.template
- config/config.go
- go.mod
- internal/admin/dashboard/static/css/dashboard.css
- internal/admin/dashboard/static/js/dashboard.js
- internal/admin/dashboard/templates/index.html
- internal/admin/dashboard/templates/layout.html
- internal/admin/handler.go
- internal/admin/handler_test.go
- internal/app/app.go
- internal/cache/cache.go
- internal/core/responses.go
- internal/core/types.go
- internal/core/types_test.go
- internal/modeldata/enricher.go
- internal/modeldata/enricher_test.go
- internal/modeldata/fetcher.go
- internal/modeldata/fetcher_test.go
- internal/modeldata/merger.go
- internal/modeldata/merger_test.go
- internal/modeldata/types.go
- internal/providers/anthropic/anthropic.go
- internal/providers/anthropic/anthropic_test.go
- internal/providers/init.go
- internal/providers/registry.go
- internal/providers/registry_test.go
- internal/server/handlers.go
- internal/server/handlers_test.go
- internal/server/http.go
- internal/usage/cost.go
- internal/usage/cost_test.go
- internal/usage/extractor.go
- internal/usage/extractor_test.go
- internal/usage/pricing.go
- internal/usage/reader.go
- internal/usage/reader_mongodb.go
- internal/usage/reader_postgresql.go
- internal/usage/reader_sqlite.go
- internal/usage/store_postgresql.go
- internal/usage/store_sqlite.go
- internal/usage/stream_wrapper.go
- internal/usage/stream_wrapper_test.go
- internal/usage/usage.go
💤 Files with no reviewable changes (1)
- go.mod
<div class="pagination" x-show="usageLog.total > 0">
<span class="pagination-info" x-text="'Showing ' + (usageLog.offset + 1) + '-' + Math.min(usageLog.offset + usageLog.limit, usageLog.total) + ' of ' + usageLog.total"></span>
Pagination info shows "Showing 1-0 of 0" when there are no results.
When usageLog.total is 0, the x-show="usageLog.total > 0" on line 327 hides the pagination bar, so this won't actually display. However, if total > 0 but the current page is empty due to a race or stale offset, the math offset + 1 through Math.min(offset + limit, total) could show confusing ranges. Consider guarding the start value:
-<span class="pagination-info" x-text="'Showing ' + (usageLog.offset + 1) + '-' + Math.min(usageLog.offset + usageLog.limit, usageLog.total) + ' of ' + usageLog.total"></span>
+<span class="pagination-info" x-text="'Showing ' + Math.min(usageLog.offset + 1, usageLog.total) + '–' + Math.min(usageLog.offset + usageLog.limit, usageLog.total) + ' of ' + usageLog.total"></span>
📝 Committable suggestion
‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/admin/dashboard/templates/index.html` around lines 327 - 328, The
pagination text can show an invalid range when the current page is empty (stale
offset); update the x-text expression on the span that uses usageLog (the span
with class "pagination-info" and x-text) to guard the start and end values:
compute start as Math.min(usageLog.offset + 1, usageLog.total) and end as
Math.min(usageLog.offset + usageLog.limit, usageLog.total) (or ensure end >=
start) so the displayed range never shows start > end or a start of 1 when total
is 0; keep the existing x-show="usageLog.total > 0" but make the x-text
resilient to stale offsets by using these guarded values.
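The clamping math from the suggestion is language-agnostic; here it is as a small Go sketch (`pageRange` is a hypothetical helper, mirroring the Alpine.js expression):

```go
package main

import "fmt"

// pageRange clamps the displayed "Showing X-Y of N" values so a stale
// offset never yields start > end, and an empty result never shows "1-0 of 0".
func pageRange(offset, limit, total int) (start, end int) {
	start = offset + 1
	if start > total {
		start = total
	}
	end = offset + limit
	if end > total {
		end = total
	}
	return start, end
}

func main() {
	s, e := pageRange(20, 10, 25)
	fmt.Printf("Showing %d-%d of %d\n", s, e, 25) // Showing 21-25 of 25
	s, e = pageRange(20, 10, 0) // stale offset against an empty result
	fmt.Printf("Showing %d-%d of %d\n", s, e, 0) // Showing 0-0 of 0
}
```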
if t, err := time.Parse(time.RFC3339Nano, ts); err == nil {
	e.Timestamp = t
} else if t, err := time.Parse("2006-01-02 15:04:05.999999999-07:00", ts); err == nil {
	e.Timestamp = t
} else if t, err := time.Parse("2006-01-02T15:04:05Z", ts); err == nil {
	e.Timestamp = t
}
Silent fallback to zero-value time.Time if no timestamp format matches.
If none of the three parse attempts succeed, e.Timestamp remains the zero value (0001-01-01T00:00:00Z). This could surface confusing data in the usage log UI. Consider logging a warning or setting a caveat when parsing fails.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/usage/reader_sqlite.go` around lines 180 - 186, The timestamp
parsing block leaves e.Timestamp as the zero time if all three time.Parse
attempts (the RFC3339Nano, "2006-01-02 15:04:05.999999999-07:00", and
"2006-01-02T15:04:05Z" branches) fail; update the final else to handle parse
failures by emitting a warning (via your package logger) that includes the raw
ts value and by recording the failure on the entry (e.g., set a new flag or
field like TimestampParseError or store the raw timestamp in e.RawTimestamp) so
the UI can detect/annotate bad parses instead of showing the 0001-01-01 time.
Ensure you modify the same parsing block that assigns e.Timestamp so callers can
rely on the new flag/field.
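The failure-signaling parse described above can be sketched as a small helper that tries the same three layouts in order and reports failure explicitly instead of silently yielding the zero time (`parseTimestamp` is a hypothetical name):

```go
package main

import (
	"fmt"
	"time"
)

// tsFormats are the layouts the reader tries, in order.
var tsFormats = []string{
	time.RFC3339Nano,
	"2006-01-02 15:04:05.999999999-07:00",
	"2006-01-02T15:04:05Z",
}

// parseTimestamp returns ok=false instead of silently yielding the zero
// time, so callers can log a warning with the raw value and keep it on the
// entry for the UI to annotate.
func parseTimestamp(ts string) (time.Time, bool) {
	for _, layout := range tsFormats {
		if t, err := time.Parse(layout, ts); err == nil {
			return t, true
		}
	}
	return time.Time{}, false
}

func main() {
	if t, ok := parseTimestamp("2025-03-31T12:00:00Z"); ok {
		fmt.Println(t.UTC())
	}
	if _, ok := parseTimestamp("not-a-time"); !ok {
		fmt.Println("warn: unparseable timestamp, keeping raw value")
	}
}
```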
- Delete unused CalculateCost() and its duplicated test/helper
- Unify PricingEntry/PricingTier into core.ModelPricing directly, eliminating convertPricing()/convertPricingTiers() mapping layer
- Add missing end-only date filter branch to SQLite/PostgreSQL/MongoDB readers (GetSummary, GetUsageByModel, GetDailyUsage, GetUsageLog)
- Extract extendedFieldSet from providerMappings as single source of truth, replacing hard-coded list in stream_wrapper.go
- Extract shared buildWhereClause() and clampLimitOffset() helpers
- Simplify buildReverseIndex with strings.IndexByte

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ertical bar

Remove false-positive "no pricing for X" and "unmapped token field" caveats on reasoning models (o3-mini, o4-mini, grok-3-mini) where costs are actually correct. Add xAI prefixed key mappings and informational fields allowlist. Replace doughnut chart with vertical bar chart using earth-tone palette.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
internal/usage/usage.go (1)
25-63: ⚠️ Potential issue | 🟡 Minor
Refresh UsageEntry doc comment to mention cost metadata.
The public struct now includes cost fields; the top-level comment should acknowledge that.
Suggested doc update
-// UsageEntry represents a single token usage record.
+// UsageEntry represents a single token usage record, including optional cost metadata.

As per coding guidelines, update in-code Go doc comments on exported types, functions, and interfaces when changing public APIs, adding new exported symbols, or modifying function signatures/behavior.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/usage.go` around lines 25 - 63, The top-level comment for the exported type UsageEntry doesn't mention the new cost-related fields and cost metadata; update the doc comment for UsageEntry to state that the struct includes normalized token counts, optional per-token cost fields (InputCost, OutputCost, TotalCost) and a CostsCalculationCaveat describing incomplete calculations, and note that RawData may contain provider-specific cost-related metadata (e.g., cached_tokens, reasoning_tokens); ensure the comment remains a proper Go doc sentence starting with "UsageEntry" so it documents the exported type.
config/config.go (1)
249-266: ⚠️ Potential issue | 🟡 Minor
Update CacheConfig doc comment to mention ModelList.
CacheConfig now includes external model registry settings, but its doc comment doesn’t reflect that public API change.
Suggested doc update
-// CacheConfig holds cache configuration for model storage
+// CacheConfig holds cache configuration for model storage, including external model registry metadata.

As per coding guidelines, update in-code Go doc comments on exported types, functions, and interfaces when changing public APIs, adding new exported symbols, or modifying function signatures/behavior.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@config/config.go` around lines 249 - 266, Update the exported CacheConfig type's top-of-declaration doc comment to mention that it configures both local/redis cache backends and also includes external model registry settings via the ModelList field; reference the CacheConfig type name and the ModelList field (and optionally RedisConfig) so the comment documents that CacheConfig controls cache backend type, CacheDir, RefreshInterval, Redis settings, and the external model metadata registry configuration.
internal/admin/handler.go (1)
248-258: ⚠️ Potential issue | 🟡 Minor
Document the category query param in Swagger.
The handler now supports category filtering but the annotations don’t expose it.
As per coding guidelines: Update in-code Go doc comments on exported types, functions, and interfaces when changing public APIs, adding new exported symbols, or modifying function signatures/behavior.
✍️ Suggested doc update
 // `@Summary` List all registered models with provider info
 // `@Tags` admin
 // `@Produce` json
 // `@Security` BearerAuth
+// `@Param` category query string false "Model category filter (e.g., text_generation, embedding, image, video, audio, all)"
 // `@Success` 200 {array} providers.ModelWithProvider
 // `@Failure` 401 {object} core.GatewayError
 // `@Router` /admin/api/v1/models [get]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/handler.go` around lines 248 - 258, The ListModels handler now supports an optional category query parameter but the Swagger annotations are missing it; update the doc comment for the exported function ListModels to include a `@Param` annotation describing the "category" query param (type string, optional, query) and its description so the generated OpenAPI docs show the filter; ensure you reference the same handler name ListModels and include the response and security annotations already present.
internal/providers/anthropic/anthropic.go (1)
499-506: ⚠️ Potential issue | 🟠 Major
Streaming usage drops cache token fields needed for cost accuracy.
Non‑streaming paths persist cache_* fields via RawUsage, but streaming emits only prompt/completion/total. That prevents usage tracking from capturing cached token costs for streaming responses.
Based on learnings: Usage tracking must normalize tokens to `input_tokens`/`output_tokens`/`total_tokens` across all providers with provider-specific data stored in `RawData` as JSON.
🛠️ Example fix to include cache fields in streaming usage
-	if event.Usage != nil {
-		chunk["usage"] = map[string]interface{}{
-			"prompt_tokens":     event.Usage.InputTokens,
-			"completion_tokens": event.Usage.OutputTokens,
-			"total_tokens":      event.Usage.InputTokens + event.Usage.OutputTokens,
-		}
-	}
+	if event.Usage != nil {
+		usage := map[string]interface{}{
+			"prompt_tokens":     event.Usage.InputTokens,
+			"completion_tokens": event.Usage.OutputTokens,
+			"total_tokens":      event.Usage.InputTokens + event.Usage.OutputTokens,
+		}
+		if raw := buildAnthropicRawUsage(*event.Usage); len(raw) > 0 {
+			usage["raw_usage"] = raw
+		}
+		chunk["usage"] = usage
+	}

-	if sc.cachedUsage != nil {
-		responseData["usage"] = map[string]interface{}{
-			"input_tokens":  sc.cachedUsage.InputTokens,
-			"output_tokens": sc.cachedUsage.OutputTokens,
-			"total_tokens":  sc.cachedUsage.InputTokens + sc.cachedUsage.OutputTokens,
-		}
-	}
+	if sc.cachedUsage != nil {
+		usage := map[string]interface{}{
+			"input_tokens":  sc.cachedUsage.InputTokens,
+			"output_tokens": sc.cachedUsage.OutputTokens,
+			"total_tokens":  sc.cachedUsage.InputTokens + sc.cachedUsage.OutputTokens,
+		}
+		if raw := buildAnthropicRawUsage(*sc.cachedUsage); len(raw) > 0 {
+			usage["raw_usage"] = raw
+		}
+		responseData["usage"] = usage
+	}

Also applies to: 786-791
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/providers/anthropic/anthropic.go` around lines 499 - 506, Streaming usage currently sets chunk["usage"] with only prompt_tokens/completion_tokens/total_tokens from event.Usage; update that code path to mirror the non-streaming normalization by using keys input_tokens, output_tokens, total_tokens and also include any provider-specific cache token fields from event.RawUsage (or similar RawUsage struct) into the usage payload (e.g., include input_cache_tokens/output_cache_tokens or a RawData JSON field) so cached token costs are preserved; specifically modify the block that builds chunk["usage"] (references: event.Usage, event.RawUsage, chunk["usage"]) to compute input_tokens=event.Usage.InputTokens, output_tokens=event.Usage.OutputTokens, total_tokens=sum, and attach RawData/RawUsage serialized as JSON (or explicit cache fields) and make the same change in the second similar block earlier (the other occurrence you noted).
internal/providers/registry.go (1)
623-678: 🧹 Nitpick | 🔵 Trivial
Make background refresh stop promptly on cancel (don’t use context.Background() for per-tick work).
Today, calling the returned cancel() stops the loop, but an in-flight refresh can still run until its timeout (30s + model-list 45s). Deriving tick contexts from the loop’s ctx improves shutdown behavior and prevents “hanging” stop operations.
Proposed refactor

 func (r *ModelRegistry) StartBackgroundRefresh(interval time.Duration, modelListURL string) func() {
 	ctx, cancel := context.WithCancel(context.Background())
 	go func() {
 		ticker := time.NewTicker(interval)
 		defer ticker.Stop()
 		for {
 			select {
 			case <-ctx.Done():
 				return
 			case <-ticker.C:
-				refreshCtx, refreshCancel := context.WithTimeout(context.Background(), 30*time.Second)
+				refreshCtx, refreshCancel := context.WithTimeout(ctx, 30*time.Second)
 				if err := r.Refresh(refreshCtx); err != nil {
 					slog.Warn("background model refresh failed", "error", err)
 				} else {
 					// Save to cache after successful refresh
 					if err := r.SaveToCache(refreshCtx); err != nil {
 						slog.Warn("failed to save models to cache after refresh", "error", err)
 					}
 				}
 				refreshCancel()
 				// Also refresh model list if configured
 				if modelListURL != "" {
-					r.refreshModelList(modelListURL)
+					r.refreshModelList(ctx, modelListURL)
 				}
 			}
 		}
 	}()
 	return cancel
 }

 // refreshModelList fetches the model list and re-enriches all models.
-func (r *ModelRegistry) refreshModelList(url string) {
-	fetchCtx, cancel := context.WithTimeout(context.Background(), 45*time.Second)
+func (r *ModelRegistry) refreshModelList(parent context.Context, url string) {
+	fetchCtx, cancel := context.WithTimeout(parent, 45*time.Second)
 	defer cancel()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/providers/registry.go` around lines 623 - 678, The background loop uses context.Background() for per-tick work which prevents prompt shutdown; change per-iteration contexts to be derived from the loop's ctx and propagate that to model-list refresh. In StartBackgroundRefresh, replace context.WithTimeout(context.Background(), 30*time.Second) with context.WithTimeout(ctx, 30*time.Second) (and defer refreshCancel or ensure it's called immediately after use), and change the call r.refreshModelList(modelListURL) to pass the parent ctx. Update refreshModelList signature to accept a context (e.g., refreshModelList(ctx context.Context, url string)), and inside it derive fetchCtx with context.WithTimeout(ctx, 45*time.Second) instead of using context.Background(); keep existing error handling and SaveToCache calls.
♻️ Duplicate comments (34)
internal/usage/extractor_test.go (1)
203-341: Pricing branch in ExtractFrom* remains untested.
Still missing tests that pass pricing into ExtractFromChatResponse/ExtractFromResponsesResponse to assert InputCost/OutputCost/TotalCost/Caveat population. This was already noted in an earlier review.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/extractor_test.go` around lines 203 - 341, Tests for ExtractFromChatResponse and ExtractFromResponsesResponse are missing assertions that pricing passed into those functions populates InputCost, OutputCost, TotalCost, and Caveat fields; add new test cases (or extend existing ones like TestExtractFromChatResponse_WithPromptTokensDetails and TestExtractFromResponsesResponse_WithDetails) that call ExtractFromChatResponse and ExtractFromResponsesResponse with a non-nil pricing object (set input_cost, output_cost, total_cost and a caveat string) and assert the returned entry has entry.InputCost, entry.OutputCost, entry.TotalCost and entry.Caveat set to the expected values; ensure you also cover precedence behavior if RawUsage/pricing conflicts with details as needed.
internal/usage/cost_test.go (1)
37-54: OpenAI cached/reasoning expectations still assume additive pricing.
Cached/reasoning tokens are subsets of base counts, so base token costs should exclude them before applying special rates. This was already flagged earlier; the test still encodes the additive model.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/cost_test.go` around lines 37 - 54, The test TestCalculateGranularCost_OpenAI_CachedAndReasoning assumes cached/reasoning token charges are additive; instead update the expectations to treat cached_tokens and reasoning_tokens as subsets of the base counts so base rates exclude those tokens before applying special rates. Specifically, when calling CalculateGranularCost(500_000, 300_000, rawData, "openai", pricing), compute input base charge using (500_000 - cached_tokens) at InputPerMtok and cached_tokens at CachedInputPerMtok, and compute output base charge using (300_000 - reasoning_tokens) at OutputPerMtok and reasoning_tokens at ReasoningOutputPerMtok, then update the assertCostNear expected values accordingly so the test matches CalculateGranularCost's intended subtraction behavior.
internal/cache/cache.go (1)
11-18: ⚠️ Potential issue | 🟡 Minor
Add a doc comment for ModelListData.
This exported field needs a short description of what it stores and when it’s populated.
As per coding guidelines: "Update in-code Go doc comments on exported types, functions, and interfaces when changing public APIs, adding new exported symbols, or modifying function signatures/behavior."
📝 Suggested doc comment

 type ModelCache struct {
 	Version       int                    `json:"version"`
 	UpdatedAt     time.Time              `json:"updated_at"`
 	Models        map[string]CachedModel `json:"models"`
+	// ModelListData holds raw JSON from the external model registry.
 	ModelListData json.RawMessage        `json:"model_list_data,omitempty"`
 }
Verify each finding against the current code and only fix it if needed. In `@internal/cache/cache.go` around lines 11 - 18, Add a Go doc comment for the exported field ModelListData on the ModelCache type: describe that ModelListData holds raw JSON of the original model list response (or metadata) used to populate Models, specify its format (json.RawMessage), and note when it is populated (e.g., populated when a full model list is fetched or cached) and that it may be omitted/empty if not stored. Update the comment immediately above the ModelListData field in the ModelCache struct so tools like godoc surface the description.

internal/core/types.go (1)
94-101: ⚠️ Potential issue | 🟡 Minor

Add a doc comment for `Model.Metadata`.

A brief comment improves API clarity for the new exported field.

As per coding guidelines: "Update in-code Go doc comments on exported types, functions, and interfaces when changing public APIs, adding new exported symbols, or modifying function signatures/behavior."

📝 Suggested doc comment

```diff
 type Model struct {
 	ID       string `json:"id"`
 	Object   string `json:"object"`
 	OwnedBy  string `json:"owned_by"`
 	Created  int64  `json:"created"`
+	// Metadata holds enriched information from the external model registry.
 	Metadata *ModelMetadata `json:"metadata,omitempty"`
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/core/types.go` around lines 94 - 101, Add a Go doc comment for the exported Model.Metadata field to explain its purpose and when it can be nil; update the struct declaration for type Model to include a short sentence above or on the Metadata field referencing the ModelMetadata type (e.g., "Metadata provides optional model-specific metadata; may be nil.") so the exported field is documented for consumers and matches Go doc conventions for Model and ModelMetadata.

internal/usage/cost.go (2)
93-196: ⚠️ Potential issue | 🔴 Critical

Avoid charging breakdown tokens twice.

Breakdown fields (cached/reasoning/audio tokens) are subsets of the base counts; the current flow applies the base rate to full `inputTokens`/`outputTokens` and then adds specialized cost on top, effectively double-charging those subsets. Subtract breakdown counts from the base before applying base rates (or deduct base cost for those subsets) and then add specialized pricing.

🐛 One way to adjust base costs after breakdowns

```diff
-	// Base input cost
-	if pricing.InputPerMtok != nil {
-		inputCost += float64(inputTokens) * *pricing.InputPerMtok / 1_000_000
-		hasInput = true
-	}
-
-	// Base output cost
-	if pricing.OutputPerMtok != nil {
-		outputCost += float64(outputTokens) * *pricing.OutputPerMtok / 1_000_000
-		hasOutput = true
-	}
+	adjustedInputTokens := inputTokens
+	adjustedOutputTokens := outputTokens

 	// Apply provider-specific mappings
 	if mappings, ok := providerMappings[providerType]; ok {
 		for _, m := range mappings {
 			count := extractInt(rawData, m.rawDataKey)
 			if count == 0 {
 				continue
 			}
 			mappedKeys[m.rawDataKey] = true
+			if isBreakdownToken(m.rawDataKey) && m.unit == unitPerMtok {
+				switch m.side {
+				case sideInput:
+					adjustedInputTokens -= count
+					if adjustedInputTokens < 0 {
+						adjustedInputTokens = 0
+					}
+				case sideOutput:
+					adjustedOutputTokens -= count
+					if adjustedOutputTokens < 0 {
+						adjustedOutputTokens = 0
+					}
+				}
+			}
 			rate := m.pricingField(pricing)
 			if rate == nil {
 				continue // Base rate covers this token type; no adjustment needed
 			}
 			...
 		}
 	}
+
+	// Base input cost (after breakdown adjustments)
+	if pricing.InputPerMtok != nil {
+		inputCost += float64(adjustedInputTokens) * *pricing.InputPerMtok / 1_000_000
+		hasInput = true
+	}
+
+	// Base output cost (after breakdown adjustments)
+	if pricing.OutputPerMtok != nil {
+		outputCost += float64(adjustedOutputTokens) * *pricing.OutputPerMtok / 1_000_000
+		hasOutput = true
+	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/cost.go` around lines 93 - 196, CalculateGranularCost is double-charging breakdown token subsets because it applies base rates to full inputTokens/outputTokens and then also adds provider-specific breakdown costs; fix by subtracting mapped breakdown counts (e.g., cached, reasoning, audio) from the base counts before computing the base cost: iterate providerMappings[providerType] (and informationalFields) using extractInt to sum counts per side (input/output), subtract those sums from inputTokens/outputTokens (clamp at zero), then compute base input/output cost using pricing.InputPerMtok/pricing.OutputPerMtok on the adjusted counts and proceed to add specialized costs from each mapping as now they represent only the incremental charge. Ensure you still mark mappedKeys and preserve PerRequest addition and caveat logic.
43-68: ⚠️ Potential issue | 🟠 Major

Guard against double-counting when multiple RawData keys map to the same pricing field.

`cached_tokens` and `prompt_cached_tokens` (and the xAI equivalents) both map to `CachedInputPerMtok`. If both keys appear in `rawData`, the cost will be applied twice. Consider a dedupe guard or document mutual exclusivity to prevent double-charging.

🛡️ Suggested documentation guard

```diff
-// providerMappings defines the per-provider RawData key to pricing field mappings.
+// providerMappings defines the per-provider RawData key to pricing field mappings.
+// NOTE: Alternative key variants (e.g., "cached_tokens" vs "prompt_cached_tokens")
+// are mutually exclusive in practice — only one form should appear per response.
 var providerMappings = map[string][]tokenCostMapping{
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/cost.go` around lines 43 - 68, providerMappings contains multiple entries that point different rawDataKey values (e.g., "cached_tokens" and "prompt_cached_tokens" or their xai equivalents) to the same pricing field (CachedInputPerMtok), which can double-count cost; update the cost-calculation logic that iterates over providerMappings/tokenCostMapping to track which pricingField (use the pricingField function identity or a derived unique key like field name) has already been applied for a given rawData set and skip subsequent mappings for the same pricing field when the earlier mapping was used, ensuring each pricingField (e.g., CachedInputPerMtok, ReasoningOutputPerMtok, InputPerImage, CacheWritePerMtok) is only charged once per record; add a short comment near providerMappings and the calculation loop (the code that reads rawData and applies tokenCostMapping) documenting why the guard exists.

internal/modeldata/fetcher_test.go (1)
1-149: Missing test for the 10 MB response-body size cap.

The `len(raw) > maxBodySize` guard in `fetcher.go` remains untested. This was noted in a previous review cycle.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/modeldata/fetcher_test.go` around lines 1 - 149, Add a unit test that verifies the maxBodySize guard in Fetch rejects responses larger than 10MB: create a new test (e.g., TestFetch_BodySizeLimit) that starts an httptest.Server whose handler writes >maxBodySize bytes (use the same maxBodySize constant name), call Fetch(ctx, server.URL) and assert it returns a non-nil error and does not return a large raw body; reference Fetch and maxBodySize so the test fails if the len(raw) > maxBodySize check is missing or broken.

internal/usage/reader_postgresql.go (4)
183-185: `json.Unmarshal` error silently discarded.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/reader_postgresql.go` around lines 183 - 185, The call to json.Unmarshal on rawDataJSON currently discards errors; change the code in the function handling row decoding (where rawDataJSON and e.RawData are used) to capture the error returned by json.Unmarshal, and handle it appropriately (for example, return the error up the call stack or log it with context). Replace "_ = json.Unmarshal(...)" with "if err := json.Unmarshal(...); err != nil { /* return or log with context including rawDataJSON and the affected entity e */ }" so parsing failures are not silently ignored.
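The error-capture pattern the prompt describes can be sketched as follows. The `entry` type and the choice to log (rather than fail the row) are illustrative assumptions, not the reader's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// entry is a hypothetical stand-in for the usage row being decoded.
type entry struct {
	RawData map[string]any
}

// decodeRawData captures the json.Unmarshal error instead of discarding it.
func decodeRawData(e *entry, rawDataJSON []byte) {
	if len(rawDataJSON) == 0 {
		return
	}
	if err := json.Unmarshal(rawDataJSON, &e.RawData); err != nil {
		// Log with context rather than silently dropping the failure.
		log.Printf("usage: failed to decode raw_data column: %v", err)
	}
}

func main() {
	var e entry
	decodeRawData(&e, []byte(`{"cached_tokens": 42}`))
	fmt.Println(e.RawData["cached_tokens"]) // JSON numbers decode as float64
}
```

Whether to log or propagate the error is a policy call; logging keeps one malformed row from failing the whole page of results.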
62-108: ⚠️ Potential issue | 🟠 Major

`GetUsageByModel` has the same missing `COALESCE` on `costCols`; missing Go doc comment.

- Missing `COALESCE`: Historical records inserted before cost tracking was added will have `NULL` cost columns. `SUM(input_cost)` over a group where every row has `NULL` costs returns SQL `NULL`, causing a scan failure identical to the `GetSummary` issue. Apply the same fix:

🐛 Proposed fix

```diff
-	costCols := `, SUM(input_cost), SUM(output_cost), SUM(total_cost)`
+	costCols := `, COALESCE(SUM(input_cost), 0), COALESCE(SUM(output_cost), 0), COALESCE(SUM(total_cost), 0)`
```

- Missing doc comment: `GetUsageByModel` is an exported method added without a Go doc comment. As per coding guidelines, exported symbols require documentation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/reader_postgresql.go` around lines 62 - 108, GetUsageByModel fails to COALESCE cost aggregates and lacks a Go doc comment; update the costCols constant used in GetUsageByModel to wrap SUM(input_cost), SUM(output_cost), SUM(total_cost) with COALESCE(...) so the query returns 0 instead of SQL NULL (fix the same way as GetSummary), and add a proper Go doc comment for the exported GetUsageByModel method describing its behavior and parameters; locate the costCols variable and the GetUsageByModel function (which scans into ModelUsage and uses r.pool.Query / rows.Scan) and make the COALESCE change and add the doc comment above the function signature.
167-167: `append(args, limit, offset)` may alias the `args` backing array.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/reader_postgresql.go` at line 167, The current dataArgs := append(args, limit, offset) can alias the original args backing array; to fix it, create a new slice and copy args into it before appending limit and offset (e.g., allocate a new slice of length len(args)+2, copy args, then set the last two entries to limit and offset) so modifications to dataArgs won't mutate args; update the code around the dataArgs construction (the variables dataArgs, args, limit, offset) accordingly.
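The aliasing hazard is easy to reproduce. This self-contained demonstration uses made-up argument values; the slice shapes mirror the pattern the prompt describes:

```go
package main

import "fmt"

func main() {
	args := make([]any, 2, 6) // spare capacity, as after building WHERE args
	args[0], args[1] = "gpt-4o", "openai"

	unsafe := append(args, 100, 0)       // may share args' backing array
	_ = append(args, "overwritten")      // writes into that same backing array

	fmt.Println(unsafe[2]) // "overwritten", not 100: aliasing in action

	// Safe pattern: copy into a fresh slice before appending limit/offset.
	dataArgs := make([]any, len(args), len(args)+2)
	copy(dataArgs, args)
	dataArgs = append(dataArgs, 100, 0)
	fmt.Println(dataArgs[2]) // 100
}
```

The bug only bites when `args` has spare capacity, which makes it intermittent and hard to debug; copying unconditionally is cheap insurance.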
147-151: Unescaped LIKE wildcards in search (`%`, `_` treated as SQL pattern chars).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/reader_postgresql.go` around lines 147 - 151, The search term is used raw in ILIKE so literal % and _ in params.Search become SQL wildcards; escape backslashes, percent and underscore before wrapping with % and adding to args (e.g. escaped := strings.ReplaceAll(...)) and then append "%" + escaped + "%" to args; update the SQL fragment built with conditions (the fmt.Sprintf that uses argIdx) to include an ESCAPE clause (e.g. "ILIKE $%d ESCAPE '\\'") so the DB treats backslash-escaped characters literally, and ensure argIdx is incremented as before. Use the existing variables (params.Search, argIdx, conditions, args) in reader_postgresql.go to locate and apply this change.

internal/providers/init.go (1)
91-116: `context.Background()` ignores `Init` ctx cancellation — use `ctx` instead.

Already flagged in a prior review cycle; the fix has not been applied. Deriving the timeout from `ctx` ensures the fetch goroutine respects shutdown cancellation.

```diff
-	fetchCtx, cancel := context.WithTimeout(context.Background(), 45*time.Second)
+	fetchCtx, cancel := context.WithTimeout(ctx, 45*time.Second)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/providers/init.go` around lines 91 - 116, The goroutine that fetches the model list uses context.WithTimeout(context.Background(), 45*time.Second) which ignores the parent Init ctx; change it to derive from the provided ctx (i.e., use context.WithTimeout(ctx, 45*time.Second)) so the fetch goroutine respects Init cancellation, keep the cancel() defer, and ensure the closure captures the outer ctx used by the Init function (refer to the anonymous goroutine where fetchCtx, cancel := context.WithTimeout(...) is created and to registry.SaveToCache / modeldata.Fetch calls).

internal/modeldata/enricher_test.go (2)
31-42: `ptr()` helper is undefined in this file.

Tests won't compile unless the helper is added here or shared from another test file.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/modeldata/enricher_test.go` around lines 31 - 42, The test TestEnrich_MatchedAndUnmatched references an undefined helper ptr() causing compilation failures; add a small local helper in the test file (or import it from a shared test utility) that returns a pointer to the given literal type used (e.g., func ptr[T any](v T) *T { return &v }) so the ModelList/ModelEntry initialization (and any uses of ptr for ContextWindow, InputPerMtok, OutputPerMtok) compiles and the test runs.
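The missing helper is a one-liner with Go 1.18+ generics. A sketch, assuming the field types shown in the finding (pointer ints for context windows, pointer floats for pricing):

```go
package main

import "fmt"

// ptr returns a pointer to any literal value, handy for building test
// fixtures whose struct fields are pointer-typed (omitempty numerics).
func ptr[T any](v T) *T { return &v }

func main() {
	cw := ptr(128000) // *int, e.g. a ContextWindow-style field
	in := ptr(2.50)   // *float64, e.g. InputPerMtok pricing
	fmt.Println(*cw, *in)
}
```

Placing it in a shared `_test.go` file in the package avoids redefining it per test file.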
127-128: Guard against nil `InputPerMtok` before dereference.

This can panic if pricing is partially populated.

🛡️ Suggested null-safe assertion

```diff
-	if meta.Pricing == nil || *meta.Pricing.InputPerMtok != 2.50 {
-		t.Error("expected pricing from base model via reverse lookup")
-	}
+	if meta.Pricing == nil {
+		t.Error("expected non-nil pricing from base model via reverse lookup")
+	} else if meta.Pricing.InputPerMtok == nil || *meta.Pricing.InputPerMtok != 2.50 {
+		t.Errorf("InputPerMtok = %v, want 2.50", meta.Pricing.InputPerMtok)
+	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/modeldata/enricher_test.go` around lines 127 - 128, Guard against nil InputPerMtok before dereferencing in the test: update the assertion that currently checks meta.Pricing and *meta.Pricing.InputPerMtok so it first ensures meta.Pricing.InputPerMtok != nil (in addition to meta.Pricing != nil) and only then dereferences it to compare to 2.50; locate the failing assertion around meta.Pricing and InputPerMtok in enricher_test.go and change the conditional to perform a null-safe check and comparison.

internal/modeldata/merger.go (2)
38-44: Reverse-lookup recursion can loop without a guard.
A cyclic mapping in `providerModelByActualID` could recurse indefinitely; add a depth/visited guard or resolve inline.

🛡️ Example guard

```diff
 	reverseKey := providerType + "/" + modelID
 	if compositeKey, ok := list.providerModelByActualID[reverseKey]; ok {
-		return Resolve(list, providerType, compositeKey[len(providerType)+1:])
+		canonicalModelID := compositeKey[len(providerType)+1:]
+		if canonicalModelID != modelID {
+			return Resolve(list, providerType, canonicalModelID)
+		}
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/modeldata/merger.go` around lines 38 - 44, The reverse-lookup in Resolve (when model == nil && pm == nil) uses providerModelByActualID and calls Resolve recursively with compositeKey, which can infinite-loop on cyclic mappings; modify Resolve (or add an internal helper) to accept a small recursion depth counter or a visited set (e.g., visited map[string]bool keyed by providerType+"/"+modelID) and check it before recursing, returning a nil/err when a cycle is detected; update the call site that currently builds reverseKey and calls Resolve(list, providerType, compositeKey[len(providerType)+1:]) to mark the key visited (or increment depth) and pass it through so cycles are detected and recursion stops.
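The visited-set variant the prompt suggests can be sketched standalone. The map shape and function name below are illustrative, not the merger's real types:

```go
package main

import "fmt"

// resolveAlias follows alias mappings but refuses to revisit a key,
// so even a deliberately cyclic mapping terminates.
func resolveAlias(aliases map[string]string, key string) string {
	visited := map[string]bool{}
	for {
		next, ok := aliases[key]
		if !ok || visited[key] {
			return key // no further mapping, or cycle detected
		}
		visited[key] = true
		key = next
	}
}

func main() {
	aliases := map[string]string{
		"openai/gpt-4o-2024-08-06": "openai/gpt-4o", // versioned -> canonical
		"a": "b", "b": "a", // deliberate cycle
	}
	fmt.Println(resolveAlias(aliases, "openai/gpt-4o-2024-08-06"))
	fmt.Println(resolveAlias(aliases, "a")) // terminates despite the cycle
}
```

A visited set handles chains of any length, whereas the inline `canonicalModelID != modelID` check only breaks two-node cycles; which is appropriate depends on how the registry data can actually look.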
69-79: Provider-level capabilities are dropped.
`ProviderModelEntry.Capabilities` isn't applied, so provider-specific overrides are lost.

🛠️ Example merge

```diff
 if pm != nil {
 @@
 	if pm.Pricing != nil {
 		meta.Pricing = pm.Pricing
 	}
+	if pm.Capabilities != nil {
+		if meta.Capabilities == nil {
+			meta.Capabilities = map[string]bool{}
+		}
+		for k, v := range pm.Capabilities {
+			meta.Capabilities[k] = v
+		}
+	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/modeldata/merger.go` around lines 69 - 79, The merge ignores provider model capabilities so any ProviderModelEntry.Capabilities overrides are dropped; in the merger where pm and meta are merged (variables pm and meta in internal/modeldata/merger.go), add a non-nil check for pm.Capabilities and assign meta.Capabilities = pm.Capabilities when present so provider-specific capabilities override the base (i.e., alongside the existing checks for ContextWindow, MaxOutputTokens, and Pricing).

internal/modeldata/fetcher.go (1)
20-21: Consider reusing an `http.Client` for connection pooling.

Creating a new client per fetch prevents reuse; a shared or injected client is more efficient if fetch frequency increases.

♻️ Possible refactor

```diff
+var httpClient = &http.Client{}
+
 func Fetch(ctx context.Context, url string) (*ModelList, []byte, error) {
 @@
-	client := &http.Client{}
+	client := httpClient
```

To gauge whether pooling would help, you can verify how often `Fetch` is invoked:

```bash
#!/bin/bash
# Find call sites of modeldata.Fetch
rg -n 'modeldata\.Fetch' --type=go
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/modeldata/fetcher.go` around lines 20 - 21, The code creates a new http.Client inside Fetch (client := &http.Client{}), which prevents connection pooling; change Fetch to reuse a shared client by either adding a Client *http.Client field to the fetcher struct (or adding a constructor that accepts a client) and use that field instead of creating a new client, or declare a package-level httpClient variable initialized once and use it in Fetch; update any constructors or callers (e.g., NewFetcher or Fetch) to supply/inject the shared client so requests can reuse connections.

internal/admin/handler.go (2)
225-234: Enforce a default and max limit for usage logs.
Without bounds, callers can request unbounded rows (or 0 meaning "no limit"), which can lead to heavy reads.

🔒 Suggested guardrails

```diff
+const (
+	defaultUsageLogLimit = 100
+	maxUsageLogLimit     = 1000
+)
+
 	if l := c.QueryParam("limit"); l != "" {
 		if parsed, err := strconv.Atoi(l); err == nil && parsed > 0 {
-			params.Limit = parsed
+			if parsed > maxUsageLogLimit {
+				parsed = maxUsageLogLimit
+			}
+			params.Limit = parsed
 		}
 	}
+	if params.Limit == 0 {
+		params.Limit = defaultUsageLogLimit
+	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/handler.go` around lines 225 - 234, The query parsing for "limit" in internal/admin/handler.go must enforce a sensible default and an upper bound to prevent unbounded reads: when handling the "limit" param (the existing block that reads c.QueryParam("limit") and assigns params.Limit), treat missing or non-positive values as DefaultUsageLogsLimit, and clamp any parsed positive value to a MaxUsageLogsLimit; add or use constants like DefaultUsageLogsLimit and MaxUsageLogsLimit and apply the clamp before assigning params.Limit. Keep the "offset" handling but ensure it still validates non-negative values (params.Offset) as before.
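The clamp logic, extracted into a testable helper, can look like this. The constant values are assumptions; pick whatever fits the deployment:

```go
package main

import (
	"fmt"
	"strconv"
)

const (
	defaultLimit = 100  // applied when the param is missing or invalid
	maxLimit     = 1000 // hard ceiling to keep reads bounded
)

// clampLimit parses a "limit" query parameter and applies default/max bounds.
func clampLimit(raw string) int {
	parsed, err := strconv.Atoi(raw)
	if err != nil || parsed <= 0 {
		return defaultLimit // missing, malformed, or non-positive
	}
	if parsed > maxLimit {
		return maxLimit
	}
	return parsed
}

func main() {
	fmt.Println(clampLimit(""))     // default
	fmt.Println(clampLimit("50"))   // passes through
	fmt.Println(clampLimit("9999")) // clamped to max
}
```

A helper like this also keeps the handler body free of repeated bounds logic if other paginated endpoints need the same treatment.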
182-203: Swagger annotations are missing for the new admin handlers.
These endpoints won’t appear in generated docs without full `@Summary`/`@Tags`/`@Param`/`@Success`/`@Failure`/`@Router` blocks.

Also applies to: 205-246, 277-284
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/handler.go` around lines 182 - 203, Add full Swagger/OpenAPI annotations to the new admin handler functions so they appear in generated docs: update the UsageByModel function by adding `@Summary`, `@Tags` (e.g., Admin), `@Param` entries for query/path parameters, `@Success` (200) response schema for []usage.ModelUsage, `@Failure` entries for common errors, and a `@Router` declaration for GET /admin/api/v1/usage/models; apply the same annotation pattern to the other new admin handlers in this file (the handlers immediately following UsageByModel) so each endpoint has complete `@Summary`/`@Tags`/`@Param`/`@Success`/`@Failure`/`@Router` blocks.

internal/admin/dashboard/static/css/dashboard.css (1)
839-842: Fix stylelint `font-family-name-quotes` in `.col-price`.

`Menlo` and `Consolas` should be unquoted to satisfy the linter.

🧹 Suggested fix

```diff
-	font-family: 'SF Mono', 'Menlo', 'Consolas', monospace;
+	font-family: 'SF Mono', Menlo, Consolas, monospace;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/static/css/dashboard.css` around lines 839 - 842, Update the .col-price rule to satisfy stylelint's font-family-name-quotes by unquoting generic font family identifiers: keep 'SF Mono' quoted (it contains a space) but remove the quotes around Menlo and Consolas in the font-family list in the td.col-price selector so it reads something like "font-family: 'SF Mono', Menlo, Consolas, monospace;".

internal/providers/registry_test.go (1)
669-671: Avoid hardcoding the category count.
Use `len(core.AllCategories())` so the test stays correct if categories change.

♻️ Suggested fix

```diff
-	if len(counts) != 7 {
-		t.Fatalf("expected 7 category counts, got %d", len(counts))
+	if want := len(core.AllCategories()); len(counts) != want {
+		t.Fatalf("expected %d category counts (len(core.AllCategories())), got %d", want, len(counts))
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/providers/registry_test.go` around lines 669 - 671, The test currently hardcodes the expected category count (7) when checking len(counts); replace that literal with the dynamic value len(core.AllCategories()) and update the assertion message accordingly (e.g., compute expected := len(core.AllCategories()) and use expected in the t.Fatalf) so the test stays correct when categories change; locate the check around the variable counts in registry_test.go and modify it to compare against len(core.AllCategories()) (and use the expected variable in the error string).

internal/usage/reader_sqlite.go (3)
110-110: ⚠️ Potential issue | 🟡 Minor

Add a Go doc comment for GetUsageLog.

This exported method lacks a doc comment.

As per coding guidelines: "Update in-code Go doc comments on exported types, functions, and interfaces when changing public APIs, adding new exported symbols, or modifying function signatures/behavior".

✍️ Proposed fix

```diff
+// GetUsageLog returns a paginated list of usage log entries with filtering options.
 func (r *SQLiteReader) GetUsageLog(ctx context.Context, params UsageLogParams) (*UsageLogResult, error) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/reader_sqlite.go` at line 110, Add a Go doc comment for the exported method GetUsageLog on SQLiteReader: place a sentence or two immediately above the function declaration starting with "GetUsageLog" (e.g., "GetUsageLog ...") describing what the method does, its receiver (SQLiteReader), what the params (UsageLogParams) represent and what the returned *UsageLogResult and error mean; ensure the comment follows Go doc style (starts with the function name) and is concise.
62-62: ⚠️ Potential issue | 🟡 Minor

Add a Go doc comment for GetUsageByModel.

This exported method lacks a doc comment.

As per coding guidelines: "Update in-code Go doc comments on exported types, functions, and interfaces when changing public APIs, adding new exported symbols, or modifying function signatures/behavior".

✍️ Proposed fix

```diff
+// GetUsageByModel aggregates input/output tokens and costs grouped by model and provider.
 func (r *SQLiteReader) GetUsageByModel(ctx context.Context, params UsageQueryParams) ([]ModelUsage, error) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/reader_sqlite.go` at line 62, Add a Go doc comment above the exported method GetUsageByModel on type SQLiteReader describing what the method does (e.g., "GetUsageByModel returns usage metrics grouped by model"), mention the meaning of the params parameter (UsageQueryParams) and that it returns a slice of ModelUsage and an error, and ensure the comment starts with "GetUsageByModel" to follow Go doc conventions; place the comment immediately above the func declaration "func (r *SQLiteReader) GetUsageByModel(ctx context.Context, params UsageQueryParams) ([]ModelUsage, error)".
175-181: ⚠️ Potential issue | 🟡 Minor

Handle timestamp parse failures explicitly.
If all parse attempts fail, the timestamp remains the zero value, which leaks into the UI. Consider logging or surfacing a parse failure indicator instead of silently falling back.
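One way to surface the failure is to return an explicit ok flag instead of silently keeping a zero time. A sketch under assumed layouts; the reader's actual format list may differ:

```go
package main

import (
	"fmt"
	"time"
)

// parseTimestamp tries the candidate layouts in order and reports failure
// explicitly, so the caller can log and flag the row rather than render a
// zero time in the UI.
func parseTimestamp(ts string) (time.Time, bool) {
	layouts := []string{time.RFC3339Nano, time.RFC3339, "2006-01-02 15:04:05"}
	for _, layout := range layouts {
		if t, err := time.Parse(layout, ts); err == nil {
			return t, true
		}
	}
	return time.Time{}, false // caller logs the raw value and marks the entry
}

func main() {
	if t, ok := parseTimestamp("2024-05-01T12:00:00Z"); ok {
		fmt.Println(t.UTC().Format(time.RFC3339))
	}
	if _, ok := parseTimestamp("not-a-time"); !ok {
		fmt.Println("parse failed; flag it instead of showing a zero time")
	}
}
```

The boolean keeps the contract honest at the call site; alternatively a `TimestampParseFailed` field on the entry, as the prompt suggests, carries the signal through to the UI.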
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/reader_sqlite.go` around lines 175 - 181, The timestamp parsing chain silently leaves e.Timestamp as the zero value if all parses fail; add an explicit else branch after the last time.Parse that captures the final parse error, logs a warning (including the raw ts and the parse errors), and mark the event with an explicit parse-failure flag (e.g., set a new bool field like e.TimestampParseFailed = true or populate e.TimestampParseError) so the UI can detect and show a parse-failure indicator instead of silently displaying the zero time; update the Event struct (or equivalent) to hold that flag/field and propagate it where timestamps are rendered.

internal/usage/reader_mongodb.go (2)
99-99: ⚠️ Potential issue | 🟡 Minor

Add a Go doc comment for GetUsageByModel.

This exported method lacks a doc comment.

As per coding guidelines: "Update in-code Go doc comments on exported types, functions, and interfaces when changing public APIs, adding new exported symbols, or modifying function signatures/behavior".

✍️ Proposed fix

```diff
+// GetUsageByModel aggregates input/output tokens and costs grouped by model and provider.
 func (r *MongoDBReader) GetUsageByModel(ctx context.Context, params UsageQueryParams) ([]ModelUsage, error) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/reader_mongodb.go` at line 99, Add a proper Go doc comment for the exported method GetUsageByModel on type MongoDBReader: include a one-line summary of what the method does, mention the meaning of the ctx context.Context and params UsageQueryParams parameters, and describe the returned []ModelUsage slice and error; ensure the comment begins with "GetUsageByModel" to follow Go doc conventions and lives immediately above the function declaration for GetUsageByModel.
179-179: ⚠️ Potential issue | 🟡 Minor

Add a Go doc comment for GetUsageLog.

This exported method lacks a doc comment.

As per coding guidelines: "Update in-code Go doc comments on exported types, functions, and interfaces when changing public APIs, adding new exported symbols, or modifying function signatures/behavior".

✍️ Proposed fix

```diff
+// GetUsageLog returns a paginated list of usage log entries with filtering options.
 func (r *MongoDBReader) GetUsageLog(ctx context.Context, params UsageLogParams) (*UsageLogResult, error) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/reader_mongodb.go` at line 179, Add a Go doc comment for the exported method GetUsageLog on type MongoDBReader: write a concise sentence starting with "GetUsageLog ..." that describes what the method does, mention key parameters (ctx and params) and what it returns (*UsageLogResult, error), and include any important behavior or error conditions callers should know about (e.g., context cancellation, DB lookup semantics). Place the comment immediately above the GetUsageLog function declaration to satisfy Go documentation conventions.

internal/admin/dashboard/templates/index.html (4)
316-318: ⚠️ Potential issue | 🟡 Minor

Guard pagination range when offset exceeds total.
If the current page is empty, the start can exceed the end. Clamp the start value.
✍️ Proposed fix
```diff
-<span class="pagination-info" x-text="'Showing ' + (usageLog.offset + 1) + '-' + Math.min(usageLog.offset + usageLog.limit, usageLog.total) + ' of ' + usageLog.total"></span>
+<span class="pagination-info" x-text="'Showing ' + Math.min(usageLog.offset + 1, usageLog.total) + '-' + Math.min(usageLog.offset + usageLog.limit, usageLog.total) + ' of ' + usageLog.total"></span>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/templates/index.html` around lines 316 - 318, Clamp the pagination start so it never exceeds total when usageLog.offset is past the end: update the x-text expression in the pagination-info span that currently builds "'Showing ' + (usageLog.offset + 1) + '-' + Math.min(usageLog.offset + usageLog.limit, usageLog.total) + ' of ' + usageLog.total" to compute start = Math.min(usageLog.offset + 1, usageLog.total) and keep end = Math.min(usageLog.offset + usageLog.limit, usageLog.total) (both using the usageLog object), so the displayed range is "'Showing ' + start + '-' + end + ' of ' + usageLog.total".
153-239: 🧹 Nitpick | 🔵 Trivial

Extract duplicated date-picker markup into a template partial.
The overview and usage pages now share identical date-picker blocks; a partial would reduce drift.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/templates/index.html` around lines 153 - 239, There are two identical date-picker blocks duplicated across overview and usage pages; extract the entire date-picker markup (the container div with class "date-picker" including its buttons, calendars, templates and dp-cursor-hint) into a reusable template partial (e.g., _date-picker.html) and replace both occurrences with an include/partial import. Ensure the partial preserves the Alpine bindings and references used by the surrounding component (datePickerOpen, toggleDatePicker(), dateRangeLabel(), selectedPreset, selectPreset(), calendarTitle(), calendarDays(), prevMonth(), nextMonth(), isCurrentMonth(), isToday(), isRangeStart(), isRangeEnd(), isInRange(), isFutureDay(), selectCalendarDay(), onCalendarMouseMove(), onCalendarMouseLeave(), cursorHint, selectingDate, selectionHint()) so behavior remains unchanged after replacing the duplicated markup with the include.
499-500: ⚠️ Potential issue | 🟡 Minor

Add an empty state when a category has no models.
Filtering by category can yield zero results even when models exist overall.
✍️ Proposed fix
```diff
 <p class="empty-state" x-show="models.length === 0 && !loading && !authError && !modelFilter">No models registered.</p>
 <p class="empty-state" x-show="models.length > 0 && filteredModels.length === 0 && modelFilter">No models match your filter.</p>
+<p class="empty-state" x-show="models.length > 0 && filteredModels.length === 0 && !modelFilter && activeCategory !== 'all'">No models in this category.</p>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/templates/index.html` around lines 499 - 500, Add a third empty-state paragraph to handle the case where there are models overall but none match the currently selected category: render a message like "No models in this category." with an x-show that evaluates models.length > 0 && filteredModels.length === 0 && <categoryFilterVar>; use the actual category filter variable name present in the template (e.g., categoryFilter or selectedCategory) instead of <categoryFilterVar> so the condition only triggers for category filtering and not for overall empty or text-filter empty states.
355-377: ⚠️ Potential issue | 🟡 Minor

Hide text-only pricing columns when category = all.
“All” includes non-text models, so text-specific MTok columns become misleading. Consider rendering those columns only for the text_generation category (or swap to a generic pricing view).
✍️ One possible adjustment

```diff
-<th class="col-price">Input $/MTok</th>
-<th class="col-price">Output $/MTok</th>
-<th class="col-price" x-show="activeCategory === 'text_generation'">Cached $/MTok</th>
+<th class="col-price" x-show="activeCategory === 'text_generation'">Input $/MTok</th>
+<th class="col-price" x-show="activeCategory === 'text_generation'">Output $/MTok</th>
+<th class="col-price" x-show="activeCategory === 'text_generation'">Cached $/MTok</th>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/templates/index.html` around lines 355 - 377, The text-specific pricing columns (Input $/MTok, Output $/MTok and Cached $/MTok) are currently shown for "all" which is misleading; update the header THs and the corresponding TDs inside the template x-for so they only render when activeCategory === 'text_generation' (e.g., add x-show="activeCategory === 'text_generation'" to the two <th class="col-price"> for Input/Output and to both <td class="col-price" x-text="formatPrice(m.model.metadata?.pricing?.input_per_mtok)" and the output TD), leaving the cached column's existing x-show as-is, so the table rows (template x-for="m in filteredModels") and headers stay aligned when activeCategory !== 'text_generation'.
internal/admin/dashboard/static/js/dashboard.js (2)
631-634: ⚠️ Potential issue | 🟡 Minor

Reset usage-log pagination when the query changes (offset carry-over bug).

`fetchUsagePage()` is called after date-range changes (and on navigation), but it calls `fetchUsageLog(false)`, which retains the previous `offset`. That can easily yield empty pages for the new query. This was previously flagged and still applies.

Proposed fix

```diff
 async fetchUsagePage() {
-  await Promise.all([this.fetchModelUsage(), this.fetchUsageLog(false)]);
+  await Promise.all([this.fetchModelUsage(), this.fetchUsageLog(true)]);
   this.renderDonutChart();
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/static/js/dashboard.js` around lines 631 - 634, fetchUsagePage calls fetchUsageLog(false) and therefore preserves the previous pagination offset, causing empty pages after date-range or query changes; update fetchUsagePage to reset pagination before fetching (either call fetchUsageLog(true) or explicitly set this.usageLog.offset = 0 / this.usageLog.page = 1) so the usage log always starts at the first page when the query/date-range changes; refer to fetchUsagePage and fetchUsageLog to implement the change.
691-701: ⚠️ Potential issue | 🔴 Critical

Fix Biome lint errors: `forEach` callbacks must not implicitly return.

Biome flags `m => set.add(...)` because `Set.prototype.add()` returns the set (non-void). This is likely pipeline-blocking given the lint severity shown. This was previously flagged and still applies.

Proposed fix

```diff
 usageLogModelOptions() {
   const set = new Set();
-  this.modelUsage.forEach(m => set.add(m.model));
+  this.modelUsage.forEach(m => { set.add(m.model); });
   return [...set].sort();
 }
 usageLogProviderOptions() {
   const set = new Set();
-  this.modelUsage.forEach(m => set.add(m.provider));
+  this.modelUsage.forEach(m => { set.add(m.provider); });
   return [...set].sort();
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/static/js/dashboard.js` around lines 691 - 701, The arrow callbacks in usageLogModelOptions and usageLogProviderOptions implicitly return values because they use m => set.add(...), which Biome flags; change the callbacks to not return anything by either replacing the forEach with a for...of loop over this.modelUsage or using a block-bodied callback like m => { set.add(m.model); } (and similarly for provider) so Set.add is called but no value is implicitly returned; update both functions (usageLogModelOptions and usageLogProviderOptions) accordingly.
internal/providers/registry.go (2)
204-227: ⚠️ Potential issue | 🟡 Minor

Preserve cached raw `ModelListData` even when parsing fails.

Right now `r.modelListRaw` is only set when `Parse()` succeeds. If the cached schema changes and parsing fails, the next `SaveToCache()` drops the raw payload, preventing a future re-parse after a code upgrade.

This matches an earlier review concern and still applies in the current code.

Proposed fix

```diff
 r.mu.Lock()
 r.models = newModels
-if list != nil {
-	r.modelList = list
-	r.modelListRaw = modelCache.ModelListData
-}
+// Preserve raw bytes regardless of parse success so future versions can re-parse.
+if len(modelCache.ModelListData) > 0 {
+	r.modelListRaw = modelCache.ModelListData
+}
+if list != nil {
+	r.modelList = list
+}
 r.mu.Unlock()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/providers/registry.go` around lines 204 - 227, The code currently sets r.modelListRaw only when modeldata.Parse(modelCache.ModelListData) succeeds, which allows SaveToCache() to drop the raw payload if parsing fails; change the logic so r.modelListRaw is assigned modelCache.ModelListData regardless of Parse() success (keep r.modelList set only when Parse returns a parsed list), i.e., after attempting modeldata.Parse(...) always preserve modelCache.ModelListData into r.modelListRaw so future re-parsing is possible while still only assigning r.modelList when parsed succeeds.
489-500: ⚠️ Potential issue | 🟡 Minor

Avoid blank UI labels when `core.AllCategories()` adds a new category.

`DisplayName: categoryDisplayNames[cat]` will become `""` for any category not in the map. This was previously flagged and is still present.

Proposed fix

```diff
 for _, cat := range allCategories {
 	count := counts[cat]
 	if cat == core.CategoryAll {
 		count = len(r.models)
 	}
+	name := categoryDisplayNames[cat]
+	if name == "" {
+		name = fmt.Sprint(cat)
+	}
 	result = append(result, CategoryCount{
 		Category:    cat,
-		DisplayName: categoryDisplayNames[cat],
+		DisplayName: name,
 		Count:       count,
 	})
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/providers/registry.go` around lines 489 - 500, When building the CategoryCount list inside the loop over core.AllCategories(), guard the lookup into categoryDisplayNames so you don't insert an empty DisplayName for unknown categories: check for existence (e.g. val, ok := categoryDisplayNames[cat]) and if missing fall back to a sensible default such as cat.String() or the category identifier, then use that value when constructing the CategoryCount; keep the existing logic for core.CategoryAll/counts and the same fields (CategoryCount, categoryDisplayNames, counts, r.models).
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (44)
- `.env.template`
- `config/config.go`
- `go.mod`
- `internal/admin/dashboard/static/css/dashboard.css`
- `internal/admin/dashboard/static/js/dashboard.js`
- `internal/admin/dashboard/templates/index.html`
- `internal/admin/dashboard/templates/layout.html`
- `internal/admin/handler.go`
- `internal/admin/handler_test.go`
- `internal/app/app.go`
- `internal/cache/cache.go`
- `internal/core/responses.go`
- `internal/core/types.go`
- `internal/core/types_test.go`
- `internal/modeldata/enricher.go`
- `internal/modeldata/enricher_test.go`
- `internal/modeldata/fetcher.go`
- `internal/modeldata/fetcher_test.go`
- `internal/modeldata/merger.go`
- `internal/modeldata/merger_test.go`
- `internal/modeldata/types.go`
- `internal/providers/anthropic/anthropic.go`
- `internal/providers/anthropic/anthropic_test.go`
- `internal/providers/init.go`
- `internal/providers/registry.go`
- `internal/providers/registry_test.go`
- `internal/server/handlers.go`
- `internal/server/handlers_test.go`
- `internal/server/http.go`
- `internal/usage/cost.go`
- `internal/usage/cost_test.go`
- `internal/usage/extractor.go`
- `internal/usage/extractor_test.go`
- `internal/usage/pricing.go`
- `internal/usage/reader.go`
- `internal/usage/reader_helpers.go`
- `internal/usage/reader_mongodb.go`
- `internal/usage/reader_postgresql.go`
- `internal/usage/reader_sqlite.go`
- `internal/usage/store_postgresql.go`
- `internal/usage/store_sqlite.go`
- `internal/usage/stream_wrapper.go`
- `internal/usage/stream_wrapper_test.go`
- `internal/usage/usage.go`
💤 Files with no reviewable changes (1)
- go.mod
```
# External model metadata registry (provides pricing, capabilities, context window, etc.)
# Set to empty string to disable (default: ENTERPILOT/ai-model-list on GitHub)
# MODEL_LIST_URL=https://raw.githubusercontent.com/ENTERPILOT/ai-model-list/refs/heads/main/models.json
```
Empty MODEL_LIST_URL won’t disable with current env override logic.
`applyEnvOverrides` skips empty env values, so `MODEL_LIST_URL=` won't override the default. Either adjust the loader to honor empty strings (`os.LookupEnv`) or update this comment to point users to config.yaml for disabling.
Doc-only adjustment (if you keep current loader behavior)

```diff
-# Set to empty string to disable (default: ENTERPILOT/ai-model-list on GitHub)
+# To disable, set model_list.url to "" in config.yaml (empty env values are ignored)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.env.template around lines 39 - 41, The applyEnvOverrides function currently
ignores empty env values so setting MODEL_LIST_URL= in .env won't disable the
default; either update applyEnvOverrides to use a lookup (e.g., os.LookupEnv or
equivalent) and allow an explicit empty string to override defaults for
MODEL_LIST_URL, or if you prefer to keep the current loader behavior, change the
.env.template comment to tell users to disable the model list via config.yaml
instead of setting MODEL_LIST_URL=; reference applyEnvOverrides and
MODEL_LIST_URL when making the change so the behavior and docs stay consistent.
```diff
 // Usage represents token usage information
 type Usage struct {
-	PromptTokens     int            `json:"prompt_tokens"`
-	CompletionTokens int            `json:"completion_tokens"`
-	TotalTokens      int            `json:"total_tokens"`
-	RawUsage         map[string]any `json:"raw_usage,omitempty"`
+	PromptTokens            int                      `json:"prompt_tokens"`
+	CompletionTokens        int                      `json:"completion_tokens"`
+	TotalTokens             int                      `json:"total_tokens"`
+	PromptTokensDetails     *PromptTokensDetails     `json:"prompt_tokens_details,omitempty"`
+	CompletionTokensDetails *CompletionTokensDetails `json:"completion_tokens_details,omitempty"`
+	RawUsage                map[string]any           `json:"raw_usage,omitempty"`
 }
```
🛠️ Refactor suggestion | 🟠 Major
Prefer a typed or raw-JSON wrapper over map[string]any for API payloads.
`RawUsage` is part of a public payload type, and `map[string]any` weakens type safety and documentation. Consider `json.RawMessage` or a typed `RawUsage` struct for provider-specific data (and mirror that in `ResponsesUsage`).
Based on learnings: "Do not use interface{} or map[string]interface{} for API request/response payload types. Prefer strongly-typed structs for API payload definitions to improve type safety, serialization, and documentation."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/core/types.go` around lines 84 - 92, The Usage struct's RawUsage
field currently uses map[string]any which weakens type safety; replace it with a
strongly-typed alternative (prefer json.RawMessage or a provider-specific
struct) and update the corresponding ResponsesUsage type to match; specifically,
change internal/core/types.go: Usage.RawUsage from map[string]any to
json.RawMessage (or a new RawUsage struct), add the necessary import for
encoding/json, and mirror the same type change on ResponsesUsage so
serialization/deserialization remains consistent across API payloads.
```diff
 // NewHandler creates a new handler with the given routable provider (typically the Router)
-func NewHandler(provider core.RoutableProvider, logger auditlog.LoggerInterface, usageLogger usage.LoggerInterface) *Handler {
+func NewHandler(provider core.RoutableProvider, logger auditlog.LoggerInterface, usageLogger usage.LoggerInterface, pricingResolver usage.PricingResolver) *Handler {
 	return &Handler{
-		provider:    provider,
-		logger:      logger,
-		usageLogger: usageLogger,
+		provider:        provider,
+		logger:          logger,
+		usageLogger:     usageLogger,
+		pricingResolver: pricingResolver,
 	}
```
Document the new pricingResolver parameter in NewHandler.
The constructor signature changed but the comment doesn’t mention the resolver, so the public API docs are now stale.
✍️ Suggested doc update
```diff
-// NewHandler creates a new handler with the given routable provider (typically the Router)
+// NewHandler creates a new handler with the given routable provider (typically the Router)
+// and an optional pricing resolver used for usage-cost enrichment.
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
// NewHandler creates a new handler with the given routable provider (typically the Router)
// and an optional pricing resolver used for usage-cost enrichment.
func NewHandler(provider core.RoutableProvider, logger auditlog.LoggerInterface, usageLogger usage.LoggerInterface, pricingResolver usage.PricingResolver) *Handler {
	return &Handler{
		provider:        provider,
		logger:          logger,
		usageLogger:     usageLogger,
		pricingResolver: pricingResolver,
	}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/server/handlers.go` around lines 24 - 31, The GoDoc for NewHandler
is outdated because the constructor now accepts a pricingResolver parameter;
update the comment above NewHandler (and any exported Handler doc if needed) to
mention the new pricingResolver argument and its purpose (e.g., resolves pricing
for usage records), referencing NewHandler and the Handler struct so callers
know why and how to pass the pricingResolver; keep the comment concise and
follow existing GoDoc style for exported constructors.
```go
// Calculate granular costs if pricing is provided
if len(pricing) > 0 && pricing[0] != nil {
	costResult := CalculateGranularCost(entry.InputTokens, entry.OutputTokens, entry.RawData, provider, pricing[0])
	entry.InputCost = costResult.InputCost
	entry.OutputCost = costResult.OutputCost
	entry.TotalCost = costResult.TotalCost
	entry.CostsCalculationCaveat = costResult.Caveat
}
```
Avoid cost calculation when usage is missing.
If resp.Usage is nil, computing costs (especially per-request fees) can emit misleading values. Gate the calculation on the presence of usage data.
🐛 Suggested fix
```diff
-	if len(pricing) > 0 && pricing[0] != nil {
+	if resp.Usage != nil && len(pricing) > 0 && pricing[0] != nil {
 		costResult := CalculateGranularCost(entry.InputTokens, entry.OutputTokens, entry.RawData, provider, pricing[0])
 		entry.InputCost = costResult.InputCost
 		entry.OutputCost = costResult.OutputCost
 		entry.TotalCost = costResult.TotalCost
 		entry.CostsCalculationCaveat = costResult.Caveat
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/usage/extractor.go` around lines 142 - 149, Currently we always call
CalculateGranularCost when pricing is present, which can produce misleading
per-request fees if usage is missing; add a guard that verifies usage data
exists before calling CalculateGranularCost (e.g., require entry.RawData != nil
&& entry.RawData.Usage != nil or otherwise confirm resp.Usage is present) and
only then set entry.InputCost, entry.OutputCost, entry.TotalCost and
entry.CostsCalculationCaveat from CalculateGranularCost; keep the existing
pricing check and use the same function name CalculateGranularCost and fields
entry.InputTokens/entry.OutputTokens/raw data to locate where to add the extra
condition.
Eliminates ~178 lines of duplicated date-picker markup between the overview and usage pages by extracting it into a shared template. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add doc comments to ModelListData (cache.go) and Metadata (types.go) fields. Add extractor tests exercising the pricing cost calculation path for both ChatResponse and ResponsesResponse. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Unquote Menlo/Consolas in CSS and JS chart font-family
- Fix fetchUsagePage to call fetchUsageLog(true)
- Add swagger annotations for admin API endpoints
- Add nil guard in enricher test for InputPerMtok
- Add TestFetch_OversizedBody for 10MB body limit
- Replace unbounded recursion in merger with inline lookup
- Add Capabilities override in merger
- Refactor types.go with strings.Cut
- Replace hardcoded category count with len(core.AllCategories())
- Add CategoryCount display name fallback
- Use shared HTTP client in fetcher with 60s timeout
- Fix init goroutine to use parent context
- Escape SQL LIKE wildcards to prevent injection
- Add slog.Warn for json.Unmarshal errors
- Add appliedFields pointer tracking to prevent cost double-counting
- Add Go doc comments on reader methods

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/admin/handler.go (1)
287-304: ⚠️ Potential issue | 🟡 Minor

Unvalidated category filter silently returns empty results on typos.
`core.ModelCategory(c.QueryParam("category"))` accepts any arbitrary string. A misspelled category like `?category=tex_generation` silently returns an empty list with no indication of the error. Consider validating against known categories or documenting this as intentional "unknown category = empty" behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/handler.go` around lines 287 - 304, In ListModels, the code uses core.ModelCategory(c.QueryParam("category")) which accepts any string so typos silently return empty results; update Handler.ListModels to validate the parsed category before calling h.registry.ListModelsWithProviderByCategory: after computing cat := core.ModelCategory(...), check it against the set of allowed categories (e.g., a core.Exported list or map of valid categories or a helper like core.IsValidCategory), and if it's non-empty and invalid return a 400 Bad Request JSON error indicating "invalid category" (including the supplied value); only call h.registry.ListModelsWithProviderByCategory when the category is known, otherwise fall back to h.registry.ListModelsWithProvider or return the error.
♻️ Duplicate comments (14)
internal/usage/reader_sqlite.go (2)
158-161: ⚠️ Potential issue | 🟠 Major

Cost columns in the `GetUsageLog` SELECT lack `COALESCE` for historical rows.

Historical rows inserted before this PR will have `NULL` in `input_cost`, `output_cost`, and `total_cost`. Depending on the Go type of the scan targets, this may cause scan errors.

Also, `dataArgs := append(args, limit, offset)` can alias `args`' backing array — a known Go footgun flagged in a prior review.

♻️ Proposed fixes

```diff
 dataQuery := `SELECT id, request_id, provider_id, timestamp, model, provider, endpoint,
-	input_tokens, output_tokens, total_tokens, input_cost, output_cost, total_cost, raw_data, costs_calculation_caveat
+	input_tokens, output_tokens, total_tokens, COALESCE(input_cost, 0), COALESCE(output_cost, 0), COALESCE(total_cost, 0), raw_data, COALESCE(costs_calculation_caveat, '')
 	FROM usage` + where + ` ORDER BY timestamp DESC LIMIT ? OFFSET ?`
-dataArgs := append(args, limit, offset)
+dataArgs := make([]interface{}, len(args), len(args)+2)
+copy(dataArgs, args)
+dataArgs = append(dataArgs, limit, offset)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/reader_sqlite.go` around lines 158 - 161, GetUsageLog's dataQuery selects input_cost, output_cost, and total_cost without COALESCE which causes NULLs for historic rows to error on scanning; change the SELECT in dataQuery to wrap those three columns with COALESCE(..., 0) (referencing the dataQuery variable and the column names input_cost, output_cost, total_cost) so scans always get numeric zeros instead of NULL. Also avoid aliasing args' backing array when building dataArgs (referencing dataArgs, args, limit, offset) by creating a new slice (e.g., make/copy or append to a nil slice) and then adding limit and offset instead of directly doing dataArgs := append(args, limit, offset).
179-185: ⚠️ Potential issue | 🟡 Minor

Timestamp parse failure silently leaves `e.Timestamp` as the zero value.

If none of the three formats match, `e.Timestamp` remains `0001-01-01T00:00:00Z`, which will surface as confusing data in the UI. Consider logging a warning (consistent with the JSON unmarshal warning on line 188) when all parse attempts fail.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/reader_sqlite.go` around lines 179 - 185, The timestamp parse block leaves e.Timestamp as the zero value if none of the three time.Parse attempts succeed; modify the parse logic in reader_sqlite.go (the block that sets e.Timestamp from ts) to detect when all parses fail and emit a warning similar to the JSON unmarshal warning on line 188 — include the original ts value and context (e.g., "failed to parse timestamp for entry") in the log so callers can diagnose bad data while leaving e.Timestamp unchanged.
internal/usage/reader_postgresql.go (3)
167-171: ⚠️ Potential issue | 🟠 Major

`GetUsageLog` cost columns lack `COALESCE`, and `append` may alias `args`. Two issues previously flagged that remain:

- `input_cost`, `output_cost`, `total_cost` are not wrapped in `COALESCE` (unlike `costs_calculation_caveat` on the same line), so `NULL` from historical rows will cause scan errors.
- `dataArgs := append(args, limit, offset)` can mutate `args`' backing array.

🐛 Proposed fix

```diff
 dataQuery := fmt.Sprintf(`SELECT id, request_id, provider_id, timestamp, model, provider, endpoint,
-	input_tokens, output_tokens, total_tokens, input_cost, output_cost, total_cost, raw_data, COALESCE(costs_calculation_caveat, '')
+	input_tokens, output_tokens, total_tokens, COALESCE(input_cost, 0), COALESCE(output_cost, 0), COALESCE(total_cost, 0), raw_data, COALESCE(costs_calculation_caveat, '')
 	FROM "usage"%s ORDER BY timestamp DESC LIMIT $%d OFFSET $%d`, where, argIdx, argIdx+1)
-dataArgs := append(args, limit, offset)
+dataArgs := make([]interface{}, len(args), len(args)+2)
+copy(dataArgs, args)
+dataArgs = append(dataArgs, limit, offset)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/reader_postgresql.go` around lines 167 - 171, In GetUsageLog change the SELECT in dataQuery to wrap input_cost, output_cost, total_cost with COALESCE(..., 0) (or another appropriate default) just like costs_calculation_caveat to avoid NULL scan errors, and avoid mutating the original args when building dataArgs by creating a new slice (e.g. copy args into a new slice or use append on an empty slice) before appending limit and offset so args' backing array isn't aliased.
33-33: ⚠️ Potential issue | 🟠 Major

`GetSummary` cost columns still lack `COALESCE` — flagged in prior review.

`SUM(input_cost)`, `SUM(output_cost)`, and `SUM(total_cost)` return `NULL` when no rows match. pgx v5 will error on scanning `NULL` into a non-nullable `float64`.

🐛 Proposed fix

```diff
-costCols := `, SUM(input_cost), SUM(output_cost), SUM(total_cost)`
+costCols := `, COALESCE(SUM(input_cost), 0), COALESCE(SUM(output_cost), 0), COALESCE(SUM(total_cost), 0)`
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/reader_postgresql.go` at line 33, GetSummary's costCols uses SUM(input_cost), SUM(output_cost), SUM(total_cost) which can be NULL and will cause pgx v5 scan errors into float64; update the costCols string in reader_postgresql.go (the GetSummary query builder) to wrap each SUM with COALESCE(..., 0) — e.g. COALESCE(SUM(input_cost), 0), COALESCE(SUM(output_cost), 0), COALESCE(SUM(total_cost), 0) — so the query returns zeros instead of NULLs and scans safely into non-nullable floats.
72-72: ⚠️ Potential issue | 🟠 Major

`GetUsageByModel` cost columns also lack `COALESCE`.

Same issue — `SUM(input_cost)` returns `NULL` for groups with all-`NULL` cost values (historical pre-pricing rows).

🐛 Proposed fix

```diff
-costCols := `, SUM(input_cost), SUM(output_cost), SUM(total_cost)`
+costCols := `, COALESCE(SUM(input_cost), 0), COALESCE(SUM(output_cost), 0), COALESCE(SUM(total_cost), 0)`
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/reader_postgresql.go` at line 72, GetUsageByModel's costCols currently uses SUM(input_cost), SUM(output_cost), SUM(total_cost) which can return NULL for groups with all-NULL values; update the costCols string in reader_postgresql.go to wrap each SUM with COALESCE (e.g., COALESCE(SUM(input_cost), 0), COALESCE(SUM(output_cost), 0), COALESCE(SUM(total_cost), 0)) so zero is returned instead of NULL and downstream handling in GetUsageByModel remains consistent.internal/core/types.go (1)
68-92: LGTM on the new detail types and Usage expansion.

`PromptTokensDetails` and `CompletionTokensDetails` are clean breakdowns with appropriate doc comments. The pointer fields on `Usage` correctly handle optionality.

The `RawUsage map[string]any` concern was already raised in a prior review.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/core/types.go` around lines 68 - 92, Change the RawUsage field on the Usage struct from a generic map[string]any to a raw JSON payload to avoid loose typing and preserve unknown provider fields: update the Usage definition (symbol Usage and field RawUsage) to use json.RawMessage (encoding/json) instead of map[string]any, keep the json tag, and add the encoding/json import; this preserves the raw provider usage blob and defers unmarshalling to callers who know the schema.internal/usage/cost.go (1)
99-162: ⚠️ Potential issue | 🔴 Critical

Critical: Specialized token costs are still added on top of full base costs (double-counting).

This was flagged in a prior review and remains unaddressed. For OpenAI/xAI/Gemini, cached tokens and reasoning tokens are subsets of `prompt_tokens`/`completion_tokens`. The base cost (lines 112-121) charges the full token count, then lines 128-162 add specialized token costs on top — charging those tokens twice at different rates instead of replacing the base rate with the specialized rate.

Example with OpenAI, 500k prompt tokens (200k cached), `InputPerMtok`=$2.50, `CachedInputPerMtok`=$1.25:

- Current: 500k × $2.50/M + 200k × $1.25/M = $1.50
- Correct: (500k − 200k) × $2.50/M + 200k × $1.25/M = $1.00

The fix should subtract the specialized count from the base before applying the base rate, or equivalently, add the delta `(specialized_rate − base_rate) × count / 1M` for per-Mtok items.

Note: Anthropic's `cache_read_input_tokens` and `cache_creation_input_tokens` are separate line items (not subsets of `input_tokens`), so they should remain additive. A per-provider `isSubset` flag on the mapping would cleanly express this.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/cost.go` around lines 99 - 162, CalculateGranularCost currently double-counts subset token types (cached/reasoning) because the base per-Mtok cost (pricing.InputPerMtok/pricing.OutputPerMtok) is applied to the full token totals and specialized mapping costs are then added on top; update the providerMappings/mapping handling so mappings can declare isSubset (boolean) and, for mappings whose isSubset==true and whose unit==unitPerMtok and whose pricing field (rate) != nil, apply only the delta between the specialized rate and the base rate (i.e. delta = (*rate - *baseRate) * count / 1_000_000) or equivalently subtract base_rate*count/1_000_000 from inputCost/outputCost before adding the specialized cost; ensure this logic references CalculateGranularCost, providerMappings, the mapping struct (add isSubset), appliedFields, and pricing.InputPerMtok/pricing.OutputPerMtok, and keep Anthropic mappings with isSubset=false so their tokens remain additive.internal/modeldata/enricher_test.go (1)
31-42: ⚠️ Potential issue | 🟠 Major

Confirm the `ptr()` helper is in the same package; tests won't compile otherwise.

`ptr(...)` is used but not defined in this file. If the helper only exists in a `modeldata_test` package, these tests will fail to compile. Ensure `ptr` is defined in package `modeldata` or add a local helper here.

🛠️ Minimal local helper (if needed)

```diff
+func ptr[T any](v T) *T { return &v }
```

Run the following to confirm helper visibility:

```bash
#!/bin/bash
rg -n 'func ptr' internal/modeldata
rg -n '^package ' internal/modeldata/*_test.go
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/modeldata/enricher_test.go` around lines 31 - 42, Tests use the ptr(...) helper but it's not defined in this package, causing compilation failures; add a small helper function named ptr in the same package as the tests (modeldata) or declare a local helper at top of internal/modeldata/enricher_test.go so ptr[T any](v T) *T is visible to TestEnrich_MatchedAndUnmatched and other tests referencing ptr, ensuring the symbol is in package modeldata (not modeldata_test) or duplicated locally in the test file.internal/admin/dashboard/static/js/dashboard.js (1)
691-699: ⚠️ Potential issue | 🟡 Minor

Fix Biome lint: `forEach` callbacks should not return values.

Biome flags implicit returns from `forEach` callbacks here. Wrap the bodies in braces.

🛠️ Suggested fix

```diff
-	this.modelUsage.forEach(m => set.add(m.model));
+	this.modelUsage.forEach(m => { set.add(m.model); });
@@
-	this.modelUsage.forEach(m => set.add(m.provider));
+	this.modelUsage.forEach(m => { set.add(m.provider); });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/static/js/dashboard.js` around lines 691 - 699, The Biome lint flags implicit returns from the arrow callbacks in usageLogModelOptions() and usageLogProviderOptions(): change the forEach callbacks so they use block bodies instead of implicit returns — e.g., in usageLogModelOptions() replace this.modelUsage.forEach(m => set.add(m.model)) with a block callback that calls set.add(m.model) inside braces, and do the analogous change in usageLogProviderOptions() for set.add(m.provider); this removes the implicit return and satisfies the lint rule while keeping the same behavior.internal/providers/registry.go (2)
596-600: 🧹 Nitpick | 🔵 Trivial

Clarify that `registryAccessor` may wrap live maps.

The current comment still implies a snapshot only.

✏️ Suggested wording

```diff
-// registryAccessor implements modeldata.ModelInfoAccessor for a snapshot of models.
+// registryAccessor implements modeldata.ModelInfoAccessor over a models map;
+// callers must ensure appropriate synchronization when passing the live registry map.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/providers/registry.go` around lines 596 - 600, The comment for registryAccessor incorrectly implies it is a snapshot; update the documentation on the registryAccessor type to clarify that it may wrap live maps (models and providerTypes) and therefore can reflect concurrent changes rather than being an immutable snapshot; reference the type name registryAccessor and its fields models and providerTypes and the interface modeldata.ModelInfoAccessor so readers know the wrapper may point at live map instances rather than a frozen copy.
204-226: ⚠️ Potential issue | 🟡 Minor

Preserve raw model-list bytes even when parsing fails.

Dropping raw bytes prevents a later re-parse after schema fixes.

🛠️ Suggested fix

 r.mu.Lock()
 r.models = newModels
-if list != nil {
-	r.modelList = list
-	r.modelListRaw = modelCache.ModelListData
-}
+if len(modelCache.ModelListData) > 0 {
+	r.modelListRaw = modelCache.ModelListData
+}
+if list != nil {
+	r.modelList = list
+}
 r.mu.Unlock()

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@internal/providers/registry.go` around lines 204 - 226: the code currently only assigns r.modelListRaw when parsing succeeds (list != nil), which discards modelCache.ModelListData on parse failure. Change the flow so that r.modelListRaw is preserved whenever modelCache.ModelListData is non-empty: either assign r.modelListRaw = modelCache.ModelListData before attempting Parse or, inside the r.mu.Lock block, set r.modelListRaw = modelCache.ModelListData whenever modelCache.ModelListData has content (regardless of parseErr), while still only setting r.modelList when parsing succeeded (list != nil); update references to the local variable list and keep the existing slog.Warn on parseErr.

internal/admin/dashboard/templates/index.html (3)
325-326: ⚠️ Potential issue | 🟡 Minor

Add an empty state for category filters.

Currently, selecting a category with no models yields a blank area.

🧾 Suggested addition

 <p class="empty-state" x-show="models.length === 0 && !loading && !authError && !modelFilter">No models registered.</p>
 <p class="empty-state" x-show="models.length > 0 && filteredModels.length === 0 && modelFilter">No models match your filter.</p>
+<p class="empty-state" x-show="models.length > 0 && filteredModels.length === 0 && !modelFilter && activeCategory !== 'all'">No models in this category.</p>

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/templates/index.html` around lines 325 - 326: add an empty-state paragraph for category filtering so the UI shows a message when a category is selected but yields no models. Add a <p class="empty-state"> element similar to the existing model-filter message that uses the same visibility logic but checks the category filter variable (e.g., x-show="models.length > 0 && filteredModels.length === 0 && categoryFilter", or whatever the template uses, like selectedCategory), and set the text to something like "No models in this category." Reference the existing symbols models and filteredModels so it only displays when there are models overall but none match the category.
181-203: 🧹 Nitpick | 🔵 Trivial

Hide category-specific pricing columns when activeCategory === 'all'.

"All" includes non-text models, so the text pricing columns become misleading.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/templates/index.html` around lines 181 - 203: the price columns are currently shown for "all", which mixes in non-text models. Update the table so the "Input $/MTok" and "Output $/MTok" headers and their cells are only rendered for text models by adding the same category check used for the cached column. Specifically, add x-show="activeCategory === 'text_generation'" to the <th> elements for "Input $/MTok" and "Output $/MTok" and to their corresponding <td class="col-price" x-text="formatPrice(...)"> cells (which reference m.model.metadata?.pricing?.input_per_mtok and output_per_mtok) so the pricing columns (and their formatPrice calls) are visible only when activeCategory === 'text_generation', while filteredModels, activeCategory, and formatPrice remain unchanged.
142-147: ⚠️ Potential issue | 🟡 Minor

Guard pagination range when offsets are stale.

The current math can show a start value greater than the total.

🧮 Suggested fix

-<span class="pagination-info" x-text="'Showing ' + (usageLog.offset + 1) + '-' + Math.min(usageLog.offset + usageLog.limit, usageLog.total) + ' of ' + usageLog.total"></span>
+<span class="pagination-info" x-text="'Showing ' + Math.min(usageLog.offset + 1, usageLog.total) + '-' + Math.min(usageLog.offset + usageLog.limit, usageLog.total) + ' of ' + usageLog.total"></span>

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/templates/index.html` around lines 142 - 147: the pagination display can show a start > total when offsets are stale; clamp the displayed start and end values and ensure offset is clamped when paging. Replace the start calculation in the span that uses usageLog.offset, usageLog.limit, usageLog.total with a guarded expression (e.g., start = Math.min(usageLog.total, Math.max(1, usageLog.offset + 1)) and end = Math.min(usageLog.total, usageLog.offset + usageLog.limit)), and also update usageLogPrevPage() and usageLogNextPage() to clamp usageLog.offset into the valid range (0..Math.max(0, usageLog.total - usageLog.limit)) after changing pages so the UI never shows an out-of-range range.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/admin/dashboard/templates/date-picker.html`:
- Around line 20-22: The month navigation buttons are icon-only and lack
accessible labels; update the button elements with descriptive accessible
attributes (e.g., add aria-label="Previous month" to the button that calls
prevMonth() and aria-label="Next month" to the counterpart that calls
nextMonth()) while keeping the existing class dp-nav-btn and click handlers
unchanged, or alternatively include visually-hidden text inside the buttons for
screen readers; ensure both nav buttons (the one with `@click`="prevMonth()" and
the one at the counterpart using nextMonth()) receive these labels.
In `@internal/providers/init.go`:
- Around line 88-115: Update the exported Go doc comment for the Init function
(or other exported symbol in internal/providers/init.go) to mention that Init
now performs a best‑effort, non‑blocking background model-list fetch and
enrichment: it starts a goroutine that calls modeldata.Fetch (with a ~45s
timeout), sets the model list on the registry, calls registry.EnrichModels, and
attempts to SaveToCache; note that the fetch is best‑effort (will log warnings
on errors) and does not block Init's return.
In `@internal/usage/cost.go`:
- Around line 187-205: The TotalCost is only set when both hasInput and
hasOutput are true; change this so TotalCost is set whenever either side has a
cost by treating the missing side as zero: compute total := inputCost +
outputCost and assign result.TotalCost = &total whenever hasInput || hasOutput
(leave InputCost/OutputCost assignment as-is), and keep the caveats
sorting/joining unchanged; refer to CostResult, InputCost, OutputCost,
TotalCost, hasInput, hasOutput, inputCost, outputCost, and caveats in the
existing block.
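The `||` fix above can be sketched in isolation. This is a simplified stand-in, not the gateway's actual CalculateCost: costFromPricing and its parameters are illustrative names, and a nil rate stands for "pricing unknown on that side".

```go
package main

import "fmt"

// costFromPricing is an illustrative sketch: a nil rate means that side's
// pricing is unknown. The total is set whenever EITHER side has a rate
// (the || fix); using && here would drop totals for one-sided pricing.
func costFromPricing(inputTokens, outputTokens int, inPerMtok, outPerMtok *float64) *float64 {
	var inputCost, outputCost float64
	hasInput := inPerMtok != nil
	hasOutput := outPerMtok != nil
	if hasInput {
		inputCost = float64(inputTokens) * *inPerMtok / 1_000_000
	}
	if hasOutput {
		outputCost = float64(outputTokens) * *outPerMtok / 1_000_000
	}
	if hasInput || hasOutput {
		total := inputCost + outputCost // missing side treated as zero
		return &total
	}
	return nil
}

func main() {
	in := 3.0 // $3 per million input tokens; output rate unknown
	if total := costFromPricing(1_000_000, 500_000, &in, nil); total != nil {
		fmt.Printf("total: $%.2f\n", *total) // prints total: $3.00
	}
}
```

With the original `&&`, the same call would return nil and the entry would carry no total at all despite a known input cost.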
In `@internal/usage/reader_sqlite.go`:
- Line 72: The cost aggregation in GetUsageByModel uses costCols := `,
SUM(input_cost), SUM(output_cost), SUM(total_cost)` which can yield NULL for
groups with all-null costs; update costCols to wrap each SUM with COALESCE, e.g.
use `, COALESCE(SUM(input_cost),0), COALESCE(SUM(output_cost),0),
COALESCE(SUM(total_cost),0)` so GetUsageByModel returns zeros instead of NULLs.
Ensure you update the costCols constant used by the GetUsageByModel query
construction.
---
Outside diff comments:
In `@internal/admin/handler.go`:
- Around line 287-304: In ListModels, the code uses
core.ModelCategory(c.QueryParam("category")) which accepts any string so typos
silently return empty results; update Handler.ListModels to validate the parsed
category before calling h.registry.ListModelsWithProviderByCategory: after
computing cat := core.ModelCategory(...), check it against the set of allowed
categories (e.g., a core.Exported list or map of valid categories or a helper
like core.IsValidCategory), and if it's non-empty and invalid return a 400 Bad
Request JSON error indicating "invalid category" (including the supplied value);
only call h.registry.ListModelsWithProviderByCategory when the category is
known, otherwise fall back to h.registry.ListModelsWithProvider or return the
error.
---
Duplicate comments:
In `@internal/core/types.go`:
- Around line 68-92: Change the RawUsage field on the Usage struct from a
generic map[string]any to a raw JSON payload to avoid loose typing and preserve
unknown provider fields: update the Usage definition (symbol Usage and field
RawUsage) to use json.RawMessage (encoding/json) instead of map[string]any, keep
the json tag, and add the encoding/json import; this preserves the raw provider
usage blob and defers unmarshalling to callers who know the schema.
In `@internal/usage/cost.go`:
- Around line 99-162: CalculateGranularCost currently double-counts subset token
types (cached/reasoning) because the base per-Mtok cost
(pricing.InputPerMtok/pricing.OutputPerMtok) is applied to the full token totals
and specialized mapping costs are then added on top; update the
providerMappings/mapping handling so mappings can declare isSubset (boolean)
and, for mappings whose isSubset==true and whose unit==unitPerMtok and whose
pricing field (rate) != nil, apply only the delta between the specialized rate
and the base rate (i.e. delta = (*rate - *baseRate) * count / 1_000_000) or
equivalently subtract base_rate*count/1_000_000 from inputCost/outputCost before
adding the specialized cost; ensure this logic references CalculateGranularCost,
providerMappings, the mapping struct (add isSubset), appliedFields, and
pricing.InputPerMtok/pricing.OutputPerMtok, and keep Anthropic mappings with
isSubset=false so their tokens remain additive.
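The subset-delta rule can be sketched as follows. The function and rate names are illustrative, not the gateway's mapping struct:

```go
package main

import "fmt"

// Sketch of the subset-delta rule: cached tokens are a subset of input
// tokens, so charging the base rate on the full input count AND the
// cached rate on the cached count double-counts. Apply only the rate
// delta for the subset instead.
func inputCost(inputTokens, cachedTokens int, baseRate, cachedRate float64) float64 {
	cost := float64(inputTokens) * baseRate / 1_000_000
	// Cached tokens were already billed at baseRate above; correct them
	// to cachedRate by adding the (usually negative) difference.
	cost += float64(cachedTokens) * (cachedRate - baseRate) / 1_000_000
	return cost
}

func main() {
	// 1M input tokens, 400k of them cached, $3/MTok base, $0.30/MTok cached:
	// 600k at $3.00 + 400k at $0.30 = 1.80 + 0.12 = 1.92
	fmt.Printf("$%.2f\n", inputCost(1_000_000, 400_000, 3.0, 0.30))
}
```

Additive token types (the Anthropic-style mappings with isSubset=false) would skip the delta correction and simply add their own cost on top.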
In `@internal/usage/reader_postgresql.go`:
- Around line 167-171: In GetUsageLog change the SELECT in dataQuery to wrap
input_cost, output_cost, total_cost with COALESCE(..., 0) (or another
appropriate default) just like costs_calculation_caveat to avoid NULL scan
errors, and avoid mutating the original args when building dataArgs by creating
a new slice (e.g. copy args into a new slice or use append on an empty slice)
before appending limit and offset so args' backing array isn't aliased.
- Line 33: GetSummary's costCols uses SUM(input_cost), SUM(output_cost),
SUM(total_cost) which can be NULL and will cause pgx v5 scan errors into
float64; update the costCols string in reader_postgresql.go (the GetSummary
query builder) to wrap each SUM with COALESCE(..., 0) — e.g.
COALESCE(SUM(input_cost), 0), COALESCE(SUM(output_cost), 0),
COALESCE(SUM(total_cost), 0) — so the query returns zeros instead of NULLs and
scans safely into non-nullable floats.
- Line 72: GetUsageByModel's costCols currently uses SUM(input_cost),
SUM(output_cost), SUM(total_cost) which can return NULL for groups with all-NULL
values; update the costCols string in reader_postgresql.go to wrap each SUM with
COALESCE (e.g., COALESCE(SUM(input_cost), 0), COALESCE(SUM(output_cost), 0),
COALESCE(SUM(total_cost), 0)) so zero is returned instead of NULL and downstream
handling in GetUsageByModel remains consistent.
In `@internal/usage/reader_sqlite.go`:
- Around line 158-161: GetUsageLog's dataQuery selects input_cost, output_cost,
and total_cost without COALESCE which causes NULLs for historic rows to error on
scanning; change the SELECT in dataQuery to wrap those three columns with
COALESCE(..., 0) (referencing the dataQuery variable and the column names
input_cost, output_cost, total_cost) so scans always get numeric zeros instead
of NULL. Also avoid aliasing args' backing array when building dataArgs
(referencing dataArgs, args, limit, offset) by creating a new slice (e.g.,
make/copy or append to a nil slice) and then adding limit and offset instead of
directly doing dataArgs := append(args, limit, offset).
- Around line 179-185: The timestamp parse block leaves e.Timestamp as the zero
value if none of the three time.Parse attempts succeed; modify the parse logic
in reader_sqlite.go (the block that sets e.Timestamp from ts) to detect when all
parses fail and emit a warning similar to the JSON unmarshal warning on line 188
— include the original ts value and context (e.g., "failed to parse timestamp
for entry") in the log so callers can diagnose bad data while leaving
e.Timestamp unchanged.
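The aliasing hazard behind the dataArgs fixes above can be reproduced in a few lines. This is a standalone demo of the Go append pitfall, not the readers' actual code:

```go
package main

import "fmt"

func main() {
	// args has spare capacity, as a filter-built slice typically does.
	args := make([]any, 0, 8)
	args = append(args, "model-a", "2024-01-01")

	// Risky: this append reuses args' backing array, so limit/offset live
	// in storage that args can later overwrite.
	dataArgs := append(args, 50, 0)

	// Safe: copy into a fresh slice so the two never share storage.
	safeArgs := append(append([]any{}, args...), 50, 0)

	// A later append to args (e.g. for a COUNT query) clobbers the shared slot.
	args = append(args, "count-filter")

	fmt.Println(dataArgs[2]) // prints count-filter, not 50
	fmt.Println(safeArgs[2]) // prints 50
}
```

Whether the bug manifests depends on whether the first append triggers a reallocation, which makes it intermittent in real query builders; the explicit copy removes the dependence on capacity.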
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (24)

- cmd/gomodel/docs/docs.go
- cmd/gomodel/docs/swagger.json
- internal/admin/dashboard/dashboard.go
- internal/admin/dashboard/static/css/dashboard.css
- internal/admin/dashboard/static/js/dashboard.js
- internal/admin/dashboard/templates/date-picker.html
- internal/admin/dashboard/templates/index.html
- internal/admin/handler.go
- internal/cache/cache.go
- internal/core/types.go
- internal/modeldata/enricher_test.go
- internal/modeldata/fetcher.go
- internal/modeldata/fetcher_test.go
- internal/modeldata/merger.go
- internal/modeldata/types.go
- internal/providers/init.go
- internal/providers/registry.go
- internal/providers/registry_test.go
- internal/usage/cost.go
- internal/usage/extractor_test.go
- internal/usage/reader_helpers.go
- internal/usage/reader_mongodb.go
- internal/usage/reader_postgresql.go
- internal/usage/reader_sqlite.go
// ModelCategory represents a model's functional category for UI grouping.
type ModelCategory string

const (
	CategoryAll            ModelCategory = "all"
	CategoryTextGeneration ModelCategory = "text_generation"
	CategoryEmbedding      ModelCategory = "embedding"
	CategoryImage          ModelCategory = "image"
	CategoryAudio          ModelCategory = "audio"
	CategoryVideo          ModelCategory = "video"
	CategoryUtility        ModelCategory = "utility"
)

// modeToCategory maps mode strings from the external registry to categories.
var modeToCategory = map[string]ModelCategory{
	"chat":                CategoryTextGeneration,
	"completion":          CategoryTextGeneration,
	"responses":           CategoryTextGeneration,
	"embedding":           CategoryEmbedding,
	"rerank":              CategoryEmbedding,
	"image_generation":    CategoryImage,
	"image_edit":          CategoryImage,
	"audio_transcription": CategoryAudio,
	"audio_speech":        CategoryAudio,
	"video_generation":    CategoryVideo,
	"moderation":          CategoryUtility,
	"ocr":                 CategoryUtility,
	"search":              CategoryUtility,
}

// CategoriesForModes returns deduplicated ModelCategory values for the given mode strings.
// Unrecognized modes are silently skipped.
func CategoriesForModes(modes []string) []ModelCategory {
	seen := make(map[ModelCategory]struct{}, len(modes))
	result := make([]ModelCategory, 0, len(modes))
	for _, mode := range modes {
		cat, ok := modeToCategory[mode]
		if !ok {
			continue
		}
		if _, dup := seen[cat]; dup {
			continue
		}
		seen[cat] = struct{}{}
		result = append(result, cat)
	}
	return result
}

// AllCategories returns the ordered list of categories for UI rendering.
func AllCategories() []ModelCategory {
	return []ModelCategory{
		CategoryAll,
		CategoryTextGeneration,
		CategoryEmbedding,
		CategoryImage,
		CategoryAudio,
		CategoryVideo,
		CategoryUtility,
	}
}
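The mapping and dedupe logic above can be exercised standalone. This sketch carries a condensed copy of the map, since the full registry types live elsewhere in the package:

```go
package main

import "fmt"

type ModelCategory string

const (
	CategoryTextGeneration ModelCategory = "text_generation"
	CategoryEmbedding      ModelCategory = "embedding"
	CategoryUtility        ModelCategory = "utility"
)

// Condensed copy of the registry's mode-to-category map for illustration.
var modeToCategory = map[string]ModelCategory{
	"chat":       CategoryTextGeneration,
	"completion": CategoryTextGeneration,
	"embedding":  CategoryEmbedding,
	"moderation": CategoryUtility,
}

// CategoriesForModes mirrors the function above: dedupe while preserving
// first-seen order, silently skipping unrecognized modes.
func CategoriesForModes(modes []string) []ModelCategory {
	seen := make(map[ModelCategory]struct{}, len(modes))
	result := make([]ModelCategory, 0, len(modes))
	for _, mode := range modes {
		cat, ok := modeToCategory[mode]
		if !ok {
			continue
		}
		if _, dup := seen[cat]; dup {
			continue
		}
		seen[cat] = struct{}{}
		result = append(result, cat)
	}
	return result
}

func main() {
	// "chat" and "completion" collapse into one category; "unknown" is skipped.
	fmt.Println(CategoriesForModes([]string{"chat", "completion", "embedding", "unknown"}))
	// prints [text_generation embedding]
}
```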
🧹 Nitpick | 🔵 Trivial
AllCategories() can silently drift from the constant declarations.
If a new ModelCategory constant is added (e.g., CategoryCode), AllCategories() must be manually updated in a separate location. Consider deriving the list from a single source of truth.
That said, this is minor given the low change frequency and the clear proximity of the two declarations.
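One way to remove the drift risk is to derive everything from a single ordered declaration. This is an illustrative sketch under that assumption, not the repository's code (orderedCategories and validCategory are invented names); it also yields the validity check the ListModels handler needs:

```go
package main

import "fmt"

type ModelCategory string

// One ordered declaration; everything else derives from it, so a newly
// added category automatically reaches both the UI list and the
// validity check.
var orderedCategories = []ModelCategory{
	"all", "text_generation", "embedding", "image", "audio", "video", "utility",
}

// validCategory is built once at init time from the same slice.
var validCategory = func() map[ModelCategory]bool {
	m := make(map[ModelCategory]bool, len(orderedCategories))
	for _, c := range orderedCategories {
		m[c] = true
	}
	return m
}()

// AllCategories returns the ordered list for UI rendering.
func AllCategories() []ModelCategory { return orderedCategories }

func main() {
	fmt.Println(len(AllCategories()), validCategory["embedding"], validCategory["typo"])
	// prints 7 true false
}
```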
Skipping a configured provider at startup is noteworthy — WARN better signals that Ollama (or similar) was expected but unreachable.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

- Fix TotalCost bug: use || instead of && so cost is set with one-sided pricing
- Add COALESCE wrappers for all aggregate and per-row cost/caveat SQL columns
- Fix slice aliasing bug in dataArgs construction (SQLite + PostgreSQL)
- Validate category query parameter, return 400 for invalid values
- Add aria-labels to date-picker navigation buttons
- Add timestamp parse failure warning in SQLite reader
- Add category-specific empty state message on models page
- Fix forEach lint: add block bodies for arrow functions
- Clarify registryAccessor and Init doc comments

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 14
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
internal/admin/handler_test.go (1)
853-865: 🧹 Nitpick | 🔵 Trivial

containsString/stringContains reimplements strings.Contains.

The standard library strings.Contains does exactly this. Using it would remove 13 lines of custom code.

♻️ Proposed simplification

Replace usages of containsString(s, substr) with strings.Contains(s, substr) and remove both helper functions. Add "strings" to the import block.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@internal/admin/handler_test.go` around lines 853 - 865: the helpers containsString and stringContains duplicate standard library behavior. Replace all calls to containsString(s, substr) with strings.Contains(s, substr), remove both helper functions (containsString and stringContains), and add "strings" to the test file import block so the code compiles.

internal/admin/handler.go (1)
287-311: ⚠️ Potential issue | 🟡 Minor

ListModels category validation looks good, but a `@Param` annotation for category is missing from Swagger.

The handler now accepts the ?category= query parameter (Line 292) and validates it (Line 294), but the Swagger annotations (Lines 280–286) don't include a `@Param` for category. Callers relying on generated API docs won't know this filter exists.

✍️ Suggested Swagger addition

 // `@Summary` List all registered models with provider info
 // `@Tags` admin
 // `@Produce` json
 // `@Security` BearerAuth
+// `@Param` category query string false "Filter by model category"
 // `@Success` 200 {array} providers.ModelWithProvider
 // `@Failure` 401 {object} core.GatewayError
 // `@Router` /admin/api/v1/models [get]

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@internal/admin/handler.go` around lines 287 - 311: the Swagger docs for the ListModels handler are missing a `@Param` entry for the optional query parameter "category" used and validated in ListModels. Update the comment block above the ListModels function to include a `@Param` annotation like: `@Param` category query string false "model category" and, if possible, document allowed values (e.g., the core.ModelCategory enum / core.CategoryAll) so generated API docs show this filter and its valid options; ensure the annotation sits with the other Swagger tags above the ListModels function.

internal/usage/reader_postgresql.go (1)
220-268: 🧹 Nitpick | 🔵 Trivial

Consider adding cost aggregation to GetDailyUsage for API consistency.

Unlike GetSummary and GetUsageByModel, which aggregate input_cost, output_cost, and total_cost, the GetDailyUsage method and DailyUsage struct currently lack cost fields. The query is consistent with the struct definition, but this creates an inconsistency in the aggregation API surface, where cost data is available in some endpoints but not others. If daily cost trends are needed later, both the struct and query will require updates.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/reader_postgresql.go` around lines 220 - 268, GetDailyUsage is missing aggregation of input_cost, output_cost, and total_cost and the DailyUsage struct lacks corresponding fields, causing inconsistency with GetSummary/GetUsageByModel; update the DailyUsage struct to add InputCost, OutputCost, TotalCost (appropriate numeric type), then modify GetDailyUsage's SQL (the query string in GetDailyUsage) to include COALESCE(SUM(input_cost),0), COALESCE(SUM(output_cost),0), COALESCE(SUM(total_cost),0) in the SELECT and add those scan targets in the rows.Scan call so the returned DailyUsage slices include cost totals.
♻️ Duplicate comments (11)
internal/admin/handler.go (1)
229-263: ⚠️ Potential issue | 🟡 Minor

Swagger doc claims "default 50, max 200" but the handler does not enforce these bounds.

Line 229 documents limit as "Page size (default 50, max 200)" but Lines 254–258 accept any positive integer without capping. If clampLimitOffset (called inside the reader) enforces these values, the contract is implicitly upheld, but the handler-level validation is absent — a caller sees a 200-row cap documented but could receive more if clampLimitOffset uses different constants or is changed later. Either enforce the documented bounds here, or update the Swagger annotation to match reality.

🔒 Proposed handler-level enforcement

+const (
+	defaultUsageLogLimit = 50
+	maxUsageLogLimit     = 200
+)
+
 if l := c.QueryParam("limit"); l != "" {
 	if parsed, err := strconv.Atoi(l); err == nil && parsed > 0 {
+		if parsed > maxUsageLogLimit {
+			parsed = maxUsageLogLimit
+		}
 		params.Limit = parsed
 	}
 }
+if params.Limit == 0 {
+	params.Limit = defaultUsageLogLimit
+}

#!/bin/bash
# Check clampLimitOffset implementation to verify if it already enforces bounds
ast-grep --pattern 'func clampLimitOffset($_, $_) ($_, $_) { $$$ }'

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@internal/admin/handler.go` around lines 229 - 263: the Swagger says limit default 50, max 200, but UsageLog currently accepts any positive integer. Update the handler to enforce the documented bounds: when building usage.UsageLogParams in UsageLog, parse limit/offset as you already do but clamp params.Limit to [1,200] and default to 50 when absent/invalid, and ensure params.Offset defaults to 0 and is >= 0; you can call or mirror clampLimitOffset logic here (or invoke clampLimitOffset before calling h.usageReader) so the handler-level contract matches the Swagger comment (adjust parseUsageParams/UsageLog param handling around params.Limit and params.Offset accordingly).

internal/admin/dashboard/templates/index.html (4)
181-207: 🧹 Nitpick | 🔵 Trivial

"All" category defaults to text_generation table layout — non-text models will show empty pricing cells.

When activeCategory === 'all', the text_generation table (Input/Output/Cached $/MTok) is displayed for all model types. Embedding, image, audio, etc. models will have blank pricing columns. This was noted in a prior review; if this is intentional, consider adding a brief visual hint (e.g., a dash or "N/A" when pricing fields are absent) so users understand the data isn't missing due to an error.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/templates/index.html` around lines 181 - 207, The "All" table renders text-generation pricing columns even for non-text models, causing blank cells; update the rendering used in the template that iterates over filteredModels so that the price cells show a fallback like "—" or "N/A" when m.model.metadata?.pricing?.input_per_mtok, output_per_mtok, or cached_input_per_mtok are missing or null. Modify the x-text expressions (or wrap with a helper) that currently call formatPrice(...) to first check the specific pricing field and return the fallback string when absent, keeping the x-show for the cached column on activeCategory === 'text_generation' unchanged. Ensure the change targets the template row where x-for="m in filteredModels" and the formatPrice usages to avoid empty cells for non-text models.
142-143: ⚠️ Potential issue | 🟡 Minor

Pagination "Showing" text is still vulnerable to stale offsets.

When usageLog.total > 0 but offset is stale (e.g., after a filter change that shrinks results while a page fetch is in-flight), offset + 1 can exceed total, producing nonsensical text like "Showing 51-50 of 20". Guard the start value.

Proposed fix

-<span class="pagination-info" x-text="'Showing ' + (usageLog.offset + 1) + '-' + Math.min(usageLog.offset + usageLog.limit, usageLog.total) + ' of ' + usageLog.total"></span>
+<span class="pagination-info" x-text="'Showing ' + Math.min(usageLog.offset + 1, usageLog.total) + '-' + Math.min(usageLog.offset + usageLog.limit, usageLog.total) + ' of ' + usageLog.total"></span>

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/templates/index.html` around lines 142 - 143, The "Showing" range can display a start greater than total when usageLog.offset is stale; update the start calculation inside the pagination-info span to guard it (use Math.min(usageLog.offset + 1, usageLog.total)) so the start never exceeds usageLog.total while keeping the end as Math.min(usageLog.offset + usageLog.limit, usageLog.total); modify the expression that currently builds "'Showing ' + (usageLog.offset + 1) + '-' + ..." to use the guarded start value referencing usageLog.offset, usageLog.limit and usageLog.total.
325-327: Category-specific empty states are now handled — addresses prior feedback.

Three distinct states are covered: no models registered, no models in category, and no filter matches. This properly handles the previously flagged gap when a category has zero models.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/templates/index.html` around lines 325 - 327, The three empty-state <p> elements for models overlap and can produce conflicting UI because their x-show conditions are not enforced as mutually exclusive; update the template so the messages are rendered exclusively (use Alpine's x-if/x-else or consolidate into one conditional block) and ensure the logic checks the same state variables consistently (models, loading, authError, modelFilter, activeCategory, filteredModels) so only one message ("No models registered.", "No models in this category.", or "No models match your filter.") can appear at a time.
13-13: Date picker extraction into a reusable template — nice refactor.

The duplication flagged in a prior review is resolved. Both the overview and usage pages now include {{template "date-picker" .}}.

Also applies to: 65-65
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/templates/index.html` at line 13, The review notes a duplicate reviewer artifact and confirms the date-picker refactor; remove the stray duplicate review marker/comment and ensure both occurrences of the template inclusion ({{template "date-picker" .}}) in internal/admin/dashboard/templates/index.html are intentional and left as the single reusable invocation on the overview and usage sections (remove any duplicated placeholder like "[duplicate_comment]" or extra comment lines and keep the two template calls only).internal/admin/dashboard/templates/date-picker.html (1)
20-22: Accessibility labels are properly added — nice work.

The `aria-label` attributes on all three navigation buttons (prevMonth ×2 and nextMonth) address the prior accessibility feedback.

Also applies to: 50-56
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/templates/date-picker.html` around lines 20 - 22, The navigation buttons already include accessible aria-labels, but ensure the same attribute is present on the other two nav buttons that call prevMonth() and nextMonth() (buttons with class dp-nav-btn and handlers prevMonth() / nextMonth()) — add aria-label="Previous month" to the second prevMonth button and aria-label="Next month" to the nextMonth button so all three nav controls consistently include descriptive aria-labels.

internal/admin/dashboard/static/js/dashboard.js (2)

691-701: Past Biome concern addressed — `forEach` callbacks now use block bodies.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/static/js/dashboard.js` around lines 691 - 701, The usageLogModelOptions and usageLogProviderOptions methods use forEach with block bodies to populate a Set; simplify and make intent clearer by constructing the Set directly from a mapped array instead of using forEach — e.g. replace the body of usageLogModelOptions to use new Set(this.modelUsage.map(m => m.model)) and usageLogProviderOptions to use new Set(this.modelUsage.map(m => m.provider)), then return [...set].sort().
631-634: Past review concern addressed — `fetchUsageLog(true)` now resets offset correctly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/static/js/dashboard.js` around lines 631 - 634, The previous concern about offset not being reset is resolved: ensure fetchUsagePage still calls fetchModelUsage() and fetchUsageLog(true) (so fetchUsageLog is invoked with the reset flag) before calling renderDonutChart(). No code change required in fetchUsagePage itself, but remove the duplicate review/comment marker ([duplicate_comment]) from the PR to avoid noisy reviews and leave the approval ([approve_code_changes]) as the final review state.

internal/providers/registry.go (1)

204-226: ⚠️ Potential issue | 🟡 Minor

Preserve raw model‑list bytes even when parse fails.
If parsing fails, `modelListRaw` isn't updated, so subsequent cache saves drop the raw payload and prevent a future re‑parse after schema fixes.

🛠️ Suggested fix

```diff
 r.mu.Lock()
 r.models = newModels
-if list != nil {
-	r.modelList = list
-	r.modelListRaw = modelCache.ModelListData
-}
+if len(modelCache.ModelListData) > 0 {
+	r.modelListRaw = modelCache.ModelListData
+}
+if list != nil {
+	r.modelList = list
+}
 r.mu.Unlock()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/providers/registry.go` around lines 204 - 226, The code currently only sets r.modelListRaw when modeldata.Parse succeeds, causing raw bytes to be dropped on parse failure; modify the update so that after acquiring r.mu (the block that sets r.models) you always set r.modelListRaw = modelCache.ModelListData regardless of whether list is nil, and only set r.modelList = list when parse succeeded; keep the existing modeldata.Parse and Enrich flow (using list and registryAccessor) but preserve modelCache.ModelListData into r.modelListRaw even on parseErr so the raw payload can be saved and re-parsed later.

internal/usage/cost_test.go (1)

37-57: ⚠️ Potential issue | 🟠 Major

Test expectations encode the incorrect additive model for OpenAI subset tokens.
For OpenAI, `cached_tokens` is a subset of `prompt_tokens` (not additive), and `reasoning_tokens` is a subset of `completion_tokens`. The expected values below encode the additive overcharge:

| Field | Current expected | Correct |
| --- | --- | --- |
| InputCost (500k total, 200k cached) | $1.50 | (300k×$2.50 + 200k×$1.25)/1M = $1.00 |
| OutputCost (300k total, 100k reasoning) | $4.50 | (200k×$10 + 100k×$15)/1M = $3.50 |

The same incorrect additive expectations appear in `TestCalculateGranularCost_OpenAI_AudioTokens` (lines 60–77), `TestCalculateGranularCost_Gemini_ThoughtTokens` (lines 98–115), and `TestCalculateGranularCost_XAI_PrefixedKeys` (lines 274–294). Once the implementation is corrected to apply the subtractive semantics for subset-token providers, all four tests will need updated expectations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/cost_test.go` around lines 37 - 57, The tests currently treat OpenAI subset tokens as additive; update the expected assertions to use subtractive semantics: for CalculateGranularCost use prompt/completion minus subset tokens when computing cost (i.e., InputCost = ((prompt_tokens - cached_tokens)*InputPerMtok + cached_tokens*CachedInputPerMtok)/1M and OutputCost = ((completion_tokens - reasoning_tokens)*OutputPerMtok + reasoning_tokens*ReasoningOutputPerMtok)/1M). Change expectations in TestCalculateGranularCost_OpenAI_CachedAndReasoning, TestCalculateGranularCost_OpenAI_AudioTokens, TestCalculateGranularCost_Gemini_ThoughtTokens and TestCalculateGranularCost_XAI_PrefixedKeys to the corrected numeric results reflecting the subtractive model and ensure caveat checks remain unchanged.

internal/usage/cost.go (1)
111-162: ⚠️ Potential issue | 🟠 Major

Specialized tokens are still double-charged for OpenAI / Gemini / XAI (additive model, subset semantics not applied).
For OpenAI (and Gemini/XAI by the same API contract), `cached_tokens` and `reasoning_tokens` are components of `prompt_tokens`/`completion_tokens`, not additive extras. The current implementation charges the base rate on the full token count, then adds the specialized rate on top:

| Token set | Current | Correct |
| --- | --- | --- |
| Input (500k total, 200k cached) | 500k×$2.50 + 200k×$1.25 = $1.50 | 300k×$2.50 + 200k×$1.25 = $1.00 |
| Output (300k total, 100k reasoning) | 300k×$10 + 100k×$15 = $4.50 | 200k×$10 + 100k×$15 = $3.50 |

The per-provider `side` already tracks whether a specialized token belongs to input or output. What's missing is subtracting the specialized count from the base before applying the base rate. Anthropic is the exception — its cache tokens are genuinely additive, so it should keep the current logic.

🛠️ Sketch of a subtractive fix for subset providers
```go
// Classify providers whose specialized token fields are subsets of base counts.
var subsetProviders = map[string]bool{"openai": true, "gemini": true, "xai": true}

// After computing base costs, before the mappings loop:
var adjustedInput = inputTokens
var adjustedOutput = outputTokens
if subsetProviders[providerType] {
	if mappings, ok := providerMappings[providerType]; ok {
		for _, m := range mappings {
			count := extractInt(rawData, m.rawDataKey)
			if count == 0 || m.unit != unitPerMtok {
				continue
			}
			switch m.side {
			case sideInput:
				adjustedInput -= count
				if adjustedInput < 0 {
					adjustedInput = 0
				}
			case sideOutput:
				adjustedOutput -= count
				if adjustedOutput < 0 {
					adjustedOutput = 0
				}
			}
		}
	}
}
// Use adjustedInput/adjustedOutput for base cost calculation instead of inputTokens/outputTokens.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/usage/cost.go` around lines 111 - 162, The current logic double-charges specialized token fields (e.g., cached_tokens, reasoning_tokens) for subset providers (OpenAI/Gemini/XAI) because the base cost uses the full inputTokens/outputTokens and then mappings add specialized costs; modify the flow to first detect subset providers (e.g., map[string]bool{"openai":true,"gemini":true,"xai":true}), iterate providerMappings[providerType] to subtract any unitPerMtok mapping counts (use extractInt(rawData, m.rawDataKey) and only m.unit == unitPerMtok) from working adjustedInput and adjustedOutput (clamped to >=0) based on m.side (sideInput/sideOutput), then compute base input/output cost using adjustedInput/adjustedOutput instead of inputTokens/outputTokens; leave Anthropic (not in subset map) using the existing additive behavior.
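The subtractive semantics above reduce to simple arithmetic that can be sanity-checked independently of the Go implementation (an illustration only, using the rates quoted in this review; the function names are hypothetical):

```javascript
// Subset pricing: specialized tokens are carved out of the base count
// before the base rate is applied, then billed at their own rate.
function subsetInputCost(promptTokens, cachedTokens, inputPerMtok, cachedPerMtok) {
  return ((promptTokens - cachedTokens) * inputPerMtok + cachedTokens * cachedPerMtok) / 1e6;
}

function subsetOutputCost(completionTokens, reasoningTokens, outputPerMtok, reasoningPerMtok) {
  return ((completionTokens - reasoningTokens) * outputPerMtok + reasoningTokens * reasoningPerMtok) / 1e6;
}
```

Plugging in the review's numbers (500k prompt with 200k cached at $2.50/$1.25, 300k completion with 100k reasoning at $10/$15) yields the corrected $1.00 input and $3.50 output totals rather than the additive $1.50 and $4.50.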
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/admin/dashboard/static/js/dashboard.js`:
- Around line 768-785: The code uses "donut" names (usageDonutChart,
renderDonutChart, _donutData, _donutColors, donutLegendItems) but constructs a
Chart with type 'bar'; fix the mismatch by choosing one of two options: either
change the Chart type to a doughnut (e.g., replace 'bar' with 'doughnut' in the
Chart constructor) to keep the "donut" names, or rename the symbols and related
DOM id to reflect a bar chart (rename usageDonutChart -> usageBarChart,
renderDonutChart -> renderUsageChart, _donutData -> _barData, _donutColors ->
_barColors, donutLegendItems -> barLegendItems, and update the DOM id
'usageDonutChart' accordingly) and update any references to those symbols so
names and chart type are consistent.
- Around line 587-590: The null/undefined checks in formatCost, formatPrice, and
formatPriceFine are redundant: remove the trailing "|| v === undefined" and rely
on the loose-equality null check (v == null) which already covers both null and
undefined; update each function (formatCost, formatPrice, formatPriceFine) to
use only "if (v == null) return 'N/A';" and leave the rest of the formatting
logic unchanged.
- Around line 703-707: The formatTokensShort function should guard against
null/undefined (and non-numeric) inputs to avoid rendering "null"/"undefined" in
the UI; modify formatTokensShort to check if n is null/undefined or not a finite
number (e.g., if (n == null || !isFinite(n))) and return the same
blank/placeholder string used by other helpers (e.g., '' or '-') in that case,
otherwise proceed with the existing million/thousand formatting logic in
formatTokensShort.
- Around line 592-604: formatCostTooltip currently calls formatNumber on every
value in entry.raw_data which can be non-numeric; update the loop in
formatCostTooltip to guard before calling this.formatNumber: for each value from
entry.raw_data check typeof value === 'number' and isFinite(value) (or
Number.isFinite(value)) and only then call this.formatNumber(value); for
non-numeric values fallback to a safe string representation (e.g., String(value)
or a placeholder) so formatNumber is not used on strings or other types.
- Around line 811-812: The Chart.js grid option on the y axis uses the old v3
key drawBorder; update the y axis configuration (the object that currently has
"grid: { color: colors.grid, drawBorder: false }") to Chart.js v4 syntax by
replacing drawBorder with a border setting (e.g., "border: { display: false }")
or remove the border option if you prefer defaults so the grid border is
disabled under Chart.js 4.x.
- Around line 650-669: fetchUsageLog currently replaces this.usageLog with the
raw server response which can omit pagination fields (limit/offset), causing NaN
in usageLogNextPage/usageLogPrevPage; instead, merge the server response into
the existing this.usageLog (or a default object) so limit and offset are
preserved when missing. In the fetchUsageLog function, after parsing await
res.json(), assign back using a defensive merge (e.g., spread or Object.assign)
between the existing this.usageLog (or default { entries:[], total:0, limit:50,
offset:0 }) and the parsed response, then ensure entries is at least an empty
array; keep error handling unchanged.
In `@internal/admin/dashboard/templates/date-picker.html`:
- Line 3: Add explicit type="button" attributes to all interactive <button>
elements in the date-picker template to prevent accidental form submissions when
embedded in a form: update the trigger button (class date-picker-trigger,
`@click`="toggleDatePicker()"), all preset buttons (the preset buttons referenced
in the template), day cell buttons (the day buttons used to select dates), and
any navigation buttons so they are non-submitting by default; leave behavior and
event handlers (e.g., toggleDatePicker, day selection handlers, preset handlers)
unchanged.
In `@internal/admin/dashboard/templates/index.html`:
- Line 86: The search input bound to usageLogSearch (the <input ...
x-model="usageLogSearch"> used to trigger fetchUsageLog) lacks a maxlength
constraint which can allow oversized query strings; add a reasonable maxlength
(e.g., 128 or 256) to that input element in
internal/admin/dashboard/templates/index.html and ensure client-side code that
calls fetchUsageLog (dashboard.js -> fetchUsageLog) continues to use
encodeURIComponent so long inputs are truncated before encoding to prevent
excessively large query strings.
In `@internal/admin/handler_test.go`:
- Around line 488-506: TestUsageLog_WithFilters doesn't assert that query params
are forwarded to the reader; update the mockUsageReader used by
TestUsageLog_WithFilters to capture the UsageLogParams passed into its
Read/Fetch method (the method used by Handler.UsageLog), then call
NewHandler(reader, ...) and after h.UsageLog(c) assert that the captured
UsageLogParams fields (Model, Provider, Search, Limit, Offset) match the
expected values from the query string; reference the mock type mockUsageReader,
the test TestUsageLog_WithFilters, the handler method UsageLog, and the param
type UsageLogParams when adding the capture and assertions.
- Around line 762-764: The test uses a hardcoded magic number 7 for the expected
category count which is brittle; change the assertion to compare len(cats)
against len(core.AllCategories()) (or another dynamically-derived source) so the
test derives the expected count at runtime — locate the assertion checking
len(cats) in the test (the block using variable cats) and replace the literal 7
with a call to len(core.AllCategories()) to make the expectation resilient to
category changes.
In `@internal/usage/cost.go`:
- Around line 93-98: The doc comment for CalculateGranularCost is inaccurate: it
claims caveats describe "missing pricing data that prevented full cost
calculation" even though nil pricing rates are silently skipped (the code does
"if rate == nil { continue }") and only unmapped token fields produce caveats in
the unmapped-fields loop; update the comment to remove or rephrase the claim
about missing pricing data and state that caveats only report
unmapped/unrecognized token fields (or other cases the unmapped-fields loop
detects) so it reflects the actual behavior of CalculateGranularCost, the caveat
result field, and the code path that checks rate == nil.
- Around line 73-91: The extendedFieldSet currently builds only from
providerMappings and therefore omits the keys in informationalFields, causing
streaming RawData to miss those top-level fields; update the extendedFieldSet
initializer to also add every key from informationalFields into the set (e.g.,
after the loop over providerMappings, iterate informationalFields and set[<key>]
= struct{}{}), so that the var extendedFieldSet and the existing
providerMappings-based logic both include the four informationalFields and
restore parity with buildRawUsageFromDetails() handling.
In `@internal/usage/reader_sqlite.go`:
- Around line 169-196: The scan into a nullable pointer var caveat *string is
unnecessary because the SQL uses COALESCE to guarantee a non-NULL string; update
the rows.Scan call and downstream handling to scan directly into
e.CostsCalculationCaveat (a string) instead of &caveat, remove the pointer
nil-check and dereference logic, and ensure any places that set
e.CostsCalculationCaveat (currently lines referencing caveat and
e.CostsCalculationCaveat) are simplified to use the scanned string value; this
mirrors the PostgreSQL reader behavior and keeps UsageLogEntry handling
consistent.
---
Outside diff comments:
In `@internal/admin/handler_test.go`:
- Around line 853-865: The helpers containsString and stringContains duplicate
standard library behavior; replace all calls to containsString(s, substr) with
strings.Contains(s, substr), remove both helper functions (containsString and
stringContains), and add "strings" to the test file import block so the code
compiles.
In `@internal/admin/handler.go`:
- Around line 287-311: The Swagger docs for the ListModels handler are missing a
`@Param` entry for the optional query parameter "category" used and validated in
ListModels; update the comment block above the ListModels function to include a
`@Param` annotation like: `@Param` category query string false "model category" and,
if possible, document allowed values (e.g., the core.ModelCategory enum /
core.CategoryAll) so generated API docs show this filter and its valid options;
ensure the annotation sits with the other Swagger tags above the ListModels
function.
In `@internal/usage/reader_postgresql.go`:
- Around line 220-268: GetDailyUsage is missing aggregation of input_cost,
output_cost, and total_cost and the DailyUsage struct lacks corresponding
fields, causing inconsistency with GetSummary/GetUsageByModel; update the
DailyUsage struct to add InputCost, OutputCost, TotalCost (appropriate numeric
type), then modify GetDailyUsage's SQL (the query string in GetDailyUsage) to
include COALESCE(SUM(input_cost),0), COALESCE(SUM(output_cost),0),
COALESCE(SUM(total_cost),0) in the SELECT and add those scan targets in the
rows.Scan call so the returned DailyUsage slices include cost totals.
---
Duplicate comments:
In `@internal/admin/dashboard/static/js/dashboard.js`:
- Around line 691-701: The usageLogModelOptions and usageLogProviderOptions
methods use forEach with block bodies to populate a Set; simplify and make
intent clearer by constructing the Set directly from a mapped array instead of
using forEach — e.g. replace the body of usageLogModelOptions to use new
Set(this.modelUsage.map(m => m.model)) and usageLogProviderOptions to use new
Set(this.modelUsage.map(m => m.provider)), then return [...set].sort().
- Around line 631-634: The previous concern about offset not being reset is
resolved: ensure fetchUsagePage still calls fetchModelUsage() and
fetchUsageLog(true) (so fetchUsageLog is invoked with the reset flag) before
calling renderDonutChart(). No code change required in fetchUsagePage itself,
but remove the duplicate review/comment marker ([duplicate_comment]) from the PR
to avoid noisy reviews and leave the approval ([approve_code_changes]) as the
final review state.
In `@internal/admin/dashboard/templates/date-picker.html`:
- Around line 20-22: The navigation buttons already include accessible
aria-labels, but ensure the same attribute is present on the other two nav
buttons that call prevMonth() and nextMonth() (buttons with class dp-nav-btn and
handlers prevMonth() / nextMonth()) — add aria-label="Previous month" to the
second prevMonth button and aria-label="Next month" to the nextMonth button so
all three nav controls consistently include descriptive aria-labels.
In `@internal/admin/dashboard/templates/index.html`:
- Around line 181-207: The "All" table renders text-generation pricing columns
even for non-text models, causing blank cells; update the rendering used in the
template that iterates over filteredModels so that the price cells show a
fallback like "—" or "N/A" when m.model.metadata?.pricing?.input_per_mtok,
output_per_mtok, or cached_input_per_mtok are missing or null. Modify the x-text
expressions (or wrap with a helper) that currently call formatPrice(...) to
first check the specific pricing field and return the fallback string when
absent, keeping the x-show for the cached column on activeCategory ===
'text_generation' unchanged. Ensure the change targets the template row where
x-for="m in filteredModels" and the formatPrice usages to avoid empty cells for
non-text models.
- Around line 142-143: The "Showing" range can display a start greater than
total when usageLog.offset is stale; update the start calculation inside the
pagination-info span to guard it (use Math.min(usageLog.offset + 1,
usageLog.total)) so the start never exceeds usageLog.total while keeping the end
as Math.min(usageLog.offset + usageLog.limit, usageLog.total); modify the
expression that currently builds "'Showing ' + (usageLog.offset + 1) + '-' +
..." to use the guarded start value referencing usageLog.offset, usageLog.limit
and usageLog.total.
- Around line 325-327: The three empty-state <p> elements for models overlap and
can produce conflicting UI because their x-show conditions are not enforced as
mutually exclusive; update the template so the messages are rendered exclusively
(use Alpine's x-if/x-else or consolidate into one conditional block) and ensure
the logic checks the same state variables consistently (models, loading,
authError, modelFilter, activeCategory, filteredModels) so only one message ("No
models registered.", "No models in this category.", or "No models match your
filter.") can appear at a time.
- Line 13: The review notes a duplicate reviewer artifact and confirms the
date-picker refactor; remove the stray duplicate review marker/comment and
ensure both occurrences of the template inclusion ({{template "date-picker" .}})
in internal/admin/dashboard/templates/index.html are intentional and left as the
single reusable invocation on the overview and usage sections (remove any
duplicated placeholder like "[duplicate_comment]" or extra comment lines and
keep the two template calls only).
In `@internal/admin/handler.go`:
- Around line 229-263: The Swagger says limit default 50 max 200 but UsageLog
currently accepts any positive integer; update the handler to enforce the
documented bounds: when building usage.UsageLogParams in UsageLog, parse
limit/offset as you already do but clamp params.Limit to [1,200] and default to
50 when absent/invalid, and ensure params.Offset defaults to 0 and is >=0; you
can call or mirror clampLimitOffset logic here (or invoke clampLimitOffset
before calling h.usageReader) so the handler-level contract matches the Swagger
comment (adjust parseUsageParams/UsageLog param handling around params.Limit and
params.Offset accordingly).
In `@internal/providers/registry.go`:
- Around line 204-226: The code currently only sets r.modelListRaw when
modeldata.Parse succeeds, causing raw bytes to be dropped on parse failure;
modify the update so that after acquiring r.mu (the block that sets r.models)
you always set r.modelListRaw = modelCache.ModelListData regardless of whether
list is nil, and only set r.modelList = list when parse succeeded; keep the
existing modeldata.Parse and Enrich flow (using list and registryAccessor) but
preserve modelCache.ModelListData into r.modelListRaw even on parseErr so the
raw payload can be saved and re-parsed later.
In `@internal/usage/cost_test.go`:
- Around line 37-57: The tests currently treat OpenAI subset tokens as additive;
update the expected assertions to use subtractive semantics: for
CalculateGranularCost use prompt/completion minus subset tokens when computing
cost (i.e., InputCost = ((prompt_tokens - cached_tokens)*InputPerMtok +
cached_tokens*CachedInputPerMtok)/1M and OutputCost = ((completion_tokens -
reasoning_tokens)*OutputPerMtok + reasoning_tokens*ReasoningOutputPerMtok)/1M).
Change expectations in TestCalculateGranularCost_OpenAI_CachedAndReasoning,
TestCalculateGranularCost_OpenAI_AudioTokens,
TestCalculateGranularCost_Gemini_ThoughtTokens and
TestCalculateGranularCost_XAI_PrefixedKeys to the corrected numeric results
reflecting the subtractive model and ensure caveat checks remain unchanged.
In `@internal/usage/cost.go`:
- Around line 111-162: The current logic double-charges specialized token fields
(e.g., cached_tokens, reasoning_tokens) for subset providers (OpenAI/Gemini/XAI)
because the base cost uses the full inputTokens/outputTokens and then mappings
add specialized costs; modify the flow to first detect subset providers (e.g.,
map[string]bool{"openai":true,"gemini":true,"xai":true}), iterate
providerMappings[providerType] to subtract any unitPerMtok mapping counts (use
extractInt(rawData, m.rawDataKey) and only m.unit == unitPerMtok) from working
adjustedInput and adjustedOutput (clamped to >=0) based on m.side
(sideInput/sideOutput), then compute base input/output cost using
adjustedInput/adjustedOutput instead of inputTokens/outputTokens; leave
Anthropic (not in subset map) using the existing additive behavior.
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (11)
- internal/admin/dashboard/static/js/dashboard.js
- internal/admin/dashboard/templates/date-picker.html
- internal/admin/dashboard/templates/index.html
- internal/admin/handler.go
- internal/admin/handler_test.go
- internal/providers/init.go
- internal/providers/registry.go
- internal/usage/cost.go
- internal/usage/cost_test.go
- internal/usage/reader_postgresql.go
- internal/usage/reader_sqlite.go
```js
formatCost(v) {
  if (v == null || v === undefined) return 'N/A';
  return '$' + v.toFixed(4);
},
```
🧹 Nitpick | 🔵 Trivial
Redundant undefined check after loose-equality null comparison.
v == null already covers both null and undefined in JavaScript. The trailing || v === undefined in formatCost, formatPrice, and formatPriceFine is dead code.
Example
```diff
 formatCost(v) {
-  if (v == null || v === undefined) return 'N/A';
+  if (v == null) return 'N/A';
   return '$' + v.toFixed(4);
 },
```

Same for formatPrice (line 607) and formatPriceFine (line 612).
Also applies to: 606-615
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/admin/dashboard/static/js/dashboard.js` around lines 587 - 590, The
null/undefined checks in formatCost, formatPrice, and formatPriceFine are
redundant: remove the trailing "|| v === undefined" and rely on the
loose-equality null check (v == null) which already covers both null and
undefined; update each function (formatCost, formatPrice, formatPriceFine) to
use only "if (v == null) return 'N/A';" and leave the rest of the formatting
logic unchanged.
```js
formatCostTooltip(entry) {
  let lines = [];
  lines.push('Input: ' + this.formatCost(entry.input_cost));
  lines.push('Output: ' + this.formatCost(entry.output_cost));
  if (entry.raw_data) {
    lines.push('');
    for (const [key, value] of Object.entries(entry.raw_data)) {
      const label = key.replace(/_/g, ' ').replace(/\b\w/g, c => c.toUpperCase());
      lines.push(label + ': ' + this.formatNumber(value));
    }
  }
  return lines.join('\n');
},
```
🧹 Nitpick | 🔵 Trivial
formatCostTooltip calls formatNumber on arbitrary raw_data values.
formatNumber calls .toLocaleString() — if a raw_data value is a string (or another non-numeric type), this won't throw but will display the raw string without formatting. Consider adding a type guard.
Suggested guard
```diff
 for (const [key, value] of Object.entries(entry.raw_data)) {
   const label = key.replace(/_/g, ' ').replace(/\b\w/g, c => c.toUpperCase());
-  lines.push(label + ': ' + this.formatNumber(value));
+  lines.push(label + ': ' + (typeof value === 'number' ? this.formatNumber(value) : String(value)));
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/admin/dashboard/static/js/dashboard.js` around lines 592 - 604,
formatCostTooltip currently calls formatNumber on every value in entry.raw_data
which can be non-numeric; update the loop in formatCostTooltip to guard before
calling this.formatNumber: for each value from entry.raw_data check typeof value
=== 'number' and isFinite(value) (or Number.isFinite(value)) and only then call
this.formatNumber(value); for non-numeric values fallback to a safe string
representation (e.g., String(value) or a placeholder) so formatNumber is not
used on strings or other types.
```js
async fetchUsageLog(resetOffset) {
  try {
    if (resetOffset) this.usageLog.offset = 0;
    let qs = this._usageQueryStr();
    qs += '&limit=' + this.usageLog.limit + '&offset=' + this.usageLog.offset;
    if (this.usageLogSearch) qs += '&search=' + encodeURIComponent(this.usageLogSearch);
    if (this.usageLogModel) qs += '&model=' + encodeURIComponent(this.usageLogModel);
    if (this.usageLogProvider) qs += '&provider=' + encodeURIComponent(this.usageLogProvider);

    const res = await fetch('/admin/api/v1/usage/log?' + qs, { headers: this.headers() });
    if (!this.handleFetchResponse(res, 'usage log')) {
      this.usageLog = { entries: [], total: 0, limit: 50, offset: 0 };
      return;
    }
    this.usageLog = await res.json();
    if (!this.usageLog.entries) this.usageLog.entries = [];
  } catch (e) {
    console.error('Failed to fetch usage log:', e);
    this.usageLog = { entries: [], total: 0, limit: 50, offset: 0 };
  }
```
fetchUsageLog replaces usageLog wholesale — pagination fields may be lost.
Line 664 overwrites this.usageLog with the raw server response. If the response omits limit (or offset), subsequent calls to usageLogNextPage/usageLogPrevPage will operate on undefined, producing NaN offsets. Defensively merge or preserve the client-side defaults.
Proposed fix
```diff
-this.usageLog = await res.json();
-if (!this.usageLog.entries) this.usageLog.entries = [];
+const data = await res.json();
+this.usageLog = {
+  entries: data.entries || [],
+  total: data.total || 0,
+  limit: data.limit || this.usageLog.limit || 50,
+  offset: data.offset ?? this.usageLog.offset ?? 0,
+};
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/admin/dashboard/static/js/dashboard.js` around lines 650 - 669,
fetchUsageLog currently replaces this.usageLog with the raw server response
which can omit pagination fields (limit/offset), causing NaN in
usageLogNextPage/usageLogPrevPage; instead, merge the server response into the
existing this.usageLog (or a default object) so limit and offset are preserved
when missing. In the fetchUsageLog function, after parsing await res.json(),
assign back using a defensive merge (e.g., spread or Object.assign) between the
existing this.usageLog (or default { entries:[], total:0, limit:50, offset:0 })
and the parsed response, then ensure entries is at least an empty array; keep
error handling unchanged.
```js
formatTokensShort(n) {
  if (n >= 1000000) return (n / 1000000).toFixed(1) + 'M';
  if (n >= 1000) return (n / 1000).toFixed(1) + 'K';
  return String(n);
},
```
formatTokensShort lacks a null/undefined guard.
If n is null or undefined (e.g., missing server data), the comparisons silently fall through and String(n) returns "null" or "undefined" in the UI. Other format helpers (formatNumber, formatCost) guard against this.
Proposed fix
formatTokensShort(n) {
+ if (n == null) return '-';
if (n >= 1000000) return (n / 1000000).toFixed(1) + 'M';
if (n >= 1000) return (n / 1000).toFixed(1) + 'K';
return String(n);
},

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
formatTokensShort(n) {
  if (n == null) return '-';
  if (n >= 1000000) return (n / 1000000).toFixed(1) + 'M';
  if (n >= 1000) return (n / 1000).toFixed(1) + 'K';
  return String(n);
},
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/admin/dashboard/static/js/dashboard.js` around lines 703 - 707, The
formatTokensShort function should guard against null/undefined (and non-numeric)
inputs to avoid rendering "null"/"undefined" in the UI; modify formatTokensShort
to check if n is null/undefined or not a finite number (e.g., if (n == null ||
!isFinite(n))) and return the same blank/placeholder string used by other
helpers (e.g., '' or '-') in that case, otherwise proceed with the existing
million/thousand formatting logic in formatTokensShort.
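The guarded version of the helper can be exercised standalone; the `'-'` placeholder is an assumption mirroring what the prompt says the other format helpers return.

```javascript
// Sketch of the guarded formatter: reject null/undefined/non-finite input
// before applying the million/thousand abbreviation logic.
function formatTokensShort(n) {
  if (n == null || !Number.isFinite(n)) return '-';
  if (n >= 1000000) return (n / 1000000).toFixed(1) + 'M';
  if (n >= 1000) return (n / 1000).toFixed(1) + 'K';
  return String(n);
}
```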
Actionable comments posted: 1
♻️ Duplicate comments (6)
internal/admin/dashboard/static/js/dashboard.js (6)
839-840: ⚠️ Potential issue | 🟠 Major

Unresolved: `drawBorder` is a Chart.js 3.x option silently ignored in v4.

grid: { color: colors.grid, drawBorder: false }

Chart.js 4.5.1 (the version loaded in this project) removed `drawBorder` from the `grid` namespace. Use `border: { display: false }` instead.

🛠️ Proposed fix
-grid: { color: colors.grid, drawBorder: false },
+grid: { color: colors.grid },
+border: { display: false },

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/static/js/dashboard.js` around lines 839 - 840, The Chart.js grid option in the y-axis config uses the removed property drawBorder (y: { grid: { drawBorder: false } }), which is ignored in Chart.js v4; update the y axis grid config to replace drawBorder with the new border display setting (y -> grid -> border: { display: false }) so the axis border is actually hidden when rendering with Chart.js 4.x.
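The v4 option shape can be illustrated as a plain scale-options object; this is a sketch not wired to a Chart instance, and the grid color value is a placeholder.

```javascript
// Chart.js 4.x y-scale options: the axis border is controlled by the
// top-level `border` key on the scale, not by grid.drawBorder (v3).
const yScaleOptions = {
  grid: { color: '#333' },      // grid lines only; no drawBorder key in v4
  border: { display: false },   // v4 replacement for grid.drawBorder: false
};
```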
601-634: 🧹 Nitpick | 🔵 Trivial

Unresolved: remove redundant `|| v === undefined` guards.

`v == null` in `formatNumber` (line 602), `formatCost` (line 607), `formatPrice` (line 626), and `formatPriceFine` (line 631) already covers `undefined` via loose equality. The trailing `|| v === undefined` is dead code in all four helpers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/static/js/dashboard.js` around lines 601 - 634, Remove the redundant "|| v === undefined" (and "|| n === undefined" in formatNumber) checks because the loose equality null check already covers undefined; in functions formatNumber, formatCost, formatPrice, and formatPriceFine delete the trailing "|| ... === undefined" parts so the guards become just "if (n == null)" or "if (v == null)" respectively, leaving the rest of each function unchanged.
683-684: ⚠️ Potential issue | 🟡 Minor

Unresolved: `fetchUsageLog` overwrites `usageLog` wholesale.

this.usageLog = await res.json();
if (!this.usageLog.entries) this.usageLog.entries = [];

If the server response omits `limit` or `offset`, `usageLogNextPage`/`usageLogPrevPage` compute `undefined + undefined = NaN`, silently disabling pagination.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/static/js/dashboard.js` around lines 683 - 684, fetchUsageLog currently assigns the fetched JSON directly to this.usageLog which wipes existing pagination fields (so usageLogNextPage/PrevPage can get NaN); instead merge the response into the existing this.usageLog and only overwrite fields provided by the server (preserve existing limit/offset if response.limit/response.offset are undefined), ensure entries defaults to an empty array when missing, and normalize limit/offset to numbers if present; update the logic in fetchUsageLog (referencing this.usageLog and the response object from await res.json()) to use a shallow merge (or conditional assignments) rather than wholesale replacement so pagination calculations remain valid.
617-620: ⚠️ Potential issue | 🟡 Minor

Unresolved: `formatCostTooltip` passes arbitrary `raw_data` values to `formatNumber`.

`formatNumber` calls `.toLocaleString()`. Any non-numeric value (string, boolean, object) in `raw_data` will silently produce unformatted output.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/static/js/dashboard.js` around lines 617 - 620, The loop in formatCostTooltip (iterating entry.raw_data and calling this.formatNumber(value)) passes arbitrary values to formatNumber which uses toLocaleString and fails for non-numeric types; update the logic in formatCostTooltip (or the loop shown) to first detect numeric values (e.g., typeof value === 'number' or isFinite(Number(value))) and only call this.formatNumber for those, otherwise convert non-numeric values to a safe string (e.g., String(value) or a fallback like '-') before pushing to lines; ensure any coercion uses Number(value) when appropriate and preserve original labels via the label created from key.
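The type guard the prompt asks for can be factored into a small helper; `formatRawValue` is a hypothetical name, and `formatNumber` here is a simplified stand-in for the dashboard helper, not its actual implementation.

```javascript
// Simplified stand-in for the dashboard's formatNumber helper.
function formatNumber(n) {
  return n.toLocaleString();
}

// Only finite numbers go through formatNumber; everything else is
// stringified as-is so tooltips never crash on odd raw_data values.
function formatRawValue(value) {
  return (typeof value === 'number' && Number.isFinite(value))
    ? formatNumber(value)
    : String(value);
}
```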
724-728: ⚠️ Potential issue | 🟡 Minor

Unresolved: `formatTokensShort` has no null/undefined guard.

`null` and `undefined` inputs fall through all comparisons and `String(n)` returns `"null"`/`"undefined"` in the UI. Every other format helper guards against this.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/static/js/dashboard.js` around lines 724 - 728, The formatTokensShort function lacks a null/undefined guard which causes "null"/"undefined" to appear in the UI; update formatTokensShort to first check if n is null or undefined (e.g., n == null) and return an appropriate empty/placeholder string (or "0") before doing the numeric comparisons, preserving the existing million/thousand formatting logic in formatTokensShort.
812-813: 🧹 Nitpick | 🔵 Trivial

Unresolved: naming mismatch — "donut" symbols render a bar chart.

`usageDonutChart`, `renderDonutChart`, `_donutData`, `_donutColors`, `donutLegendItems`, and the canvas ID `usageDonutChart` all reference "donut" but `type: 'bar'` is used. Either rename to reflect bar semantics or change the chart type to `'doughnut'`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/static/js/dashboard.js` around lines 812 - 813, The code references donuts but constructs a bar chart: update either the chart type or the names to be consistent; specifically, in the block that creates this.usageDonutChart (canvas with id "usageDonutChart") and in functions/variables renderDonutChart, _donutData, _donutColors, donutLegendItems, change type: 'bar' to type: 'doughnut' (and adjust any Chart.js options to doughnut semantics) OR rename usageDonutChart, renderDonutChart, _donutData, _donutColors, donutLegendItems and the canvas id to use "bar" naming to reflect a bar chart; pick one approach and apply it consistently across those symbols so names match the actual chart type.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/admin/dashboard/static/js/dashboard.js`:
- Around line 749-777: _donutData() can produce NaN when
input_tokens/output_tokens are missing; update every place that sums token
fields (the sort comparator, the values mapping in top.map, and the rest.forEach
accumulation) to coalesce undefined/null to 0 (e.g., use (m.input_tokens || 0) +
(m.output_tokens || 0) or equivalent) so sorting and value calculations never
add undefined.
---
Duplicate comments:
In `@internal/admin/dashboard/static/js/dashboard.js`:
- Around line 839-840: The Chart.js grid option in the y-axis config uses the
removed property drawBorder (y: { grid: { drawBorder: false } }), which is
ignored in Chart.js v4; update the y axis grid config to replace drawBorder with
the new border display setting (y -> grid -> border: { display: false }) so the
axis border is actually hidden when rendering with Chart.js 4.x.
- Around line 601-634: Remove the redundant "|| v === undefined" (and "|| n ===
undefined" in formatNumber) checks because the loose equality null check already
covers undefined; in functions formatNumber, formatCost, formatPrice, and
formatPriceFine delete the trailing "|| ... === undefined" parts so the guards
become just "if (n == null)" or "if (v == null)" respectively, leaving the rest
of each function unchanged.
- Around line 683-684: fetchUsageLog currently assigns the fetched JSON directly
to this.usageLog which wipes existing pagination fields (so
usageLogNextPage/PrevPage can get NaN); instead merge the response into the
existing this.usageLog and only overwrite fields provided by the server
(preserve existing limit/offset if response.limit/response.offset are
undefined), ensure entries defaults to an empty array when missing, and
normalize limit/offset to numbers if present; update the logic in fetchUsageLog
(referencing this.usageLog and the response object from await res.json()) to use
a shallow merge (or conditional assignments) rather than wholesale replacement
so pagination calculations remain valid.
- Around line 617-620: The loop in formatCostTooltip (iterating entry.raw_data
and calling this.formatNumber(value)) passes arbitrary values to formatNumber
which uses toLocaleString and fails for non-numeric types; update the logic in
formatCostTooltip (or the loop shown) to first detect numeric values (e.g.,
typeof value === 'number' or isFinite(Number(value))) and only call
this.formatNumber for those, otherwise convert non-numeric values to a safe
string (e.g., String(value) or a fallback like '-') before pushing to lines;
ensure any coercion uses Number(value) when appropriate and preserve original
labels via the label created from key.
- Around line 724-728: The formatTokensShort function lacks a null/undefined
guard which causes "null"/"undefined" to appear in the UI; update
formatTokensShort to first check if n is null or undefined (e.g., n == null) and
return an appropriate empty/placeholder string (or "0") before doing the numeric
comparisons, preserving the existing million/thousand formatting logic in
formatTokensShort.
- Around line 812-813: The code references donuts but constructs a bar chart:
update either the chart type or the names to be consistent; specifically, in the
block that creates this.usageDonutChart (canvas with id "usageDonutChart") and
in functions/variables renderDonutChart, _donutData, _donutColors,
donutLegendItems, change type: 'bar' to type: 'doughnut' (and adjust any
Chart.js options to doughnut semantics) OR rename usageDonutChart,
renderDonutChart, _donutData, _donutColors, donutLegendItems and the canvas id
to use "bar" naming to reflect a bar chart; pick one approach and apply it
consistently across those symbols so names match the actual chart type.
_donutData() {
  const sorted = [...this.modelUsage].sort((a, b) => {
    if (this.usageMode === 'costs') {
      return ((b.total_cost || 0) - (a.total_cost || 0));
    }
    return ((b.input_tokens + b.output_tokens) - (a.input_tokens + a.output_tokens));
  });

  const top = sorted.slice(0, 10);
  const rest = sorted.slice(10);

  const labels = top.map(m => m.model);
  const values = top.map(m => {
    if (this.usageMode === 'costs') return m.total_cost || 0;
    return m.input_tokens + m.output_tokens;
  });

  if (rest.length > 0) {
    labels.push('Other');
    let otherVal = 0;
    rest.forEach(m => {
      if (this.usageMode === 'costs') otherVal += (m.total_cost || 0);
      else otherVal += m.input_tokens + m.output_tokens;
    });
    values.push(otherVal);
  }

  return { labels, values };
},
_donutData() produces NaN when token fields are absent.
In tokens mode, both the sort comparator and the values mapping use bare addition without null-coalescing:
return ((b.input_tokens + b.output_tokens) - (a.input_tokens + a.output_tokens));
// and
return m.input_tokens + m.output_tokens;
// and
else otherVal += m.input_tokens + m.output_tokens;undefined + undefined = NaN; a missing field makes sort order non-deterministic and injects NaN into the chart dataset. The costs branch already uses || 0—apply the same pattern to the tokens branches.
🛠️ Proposed fix
-return ((b.input_tokens + b.output_tokens) - (a.input_tokens + a.output_tokens));
+return (((b.input_tokens || 0) + (b.output_tokens || 0)) - ((a.input_tokens || 0) + (a.output_tokens || 0)));
-return m.input_tokens + m.output_tokens;
+return (m.input_tokens || 0) + (m.output_tokens || 0);
-else otherVal += m.input_tokens + m.output_tokens;
+else otherVal += (m.input_tokens || 0) + (m.output_tokens || 0);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/admin/dashboard/static/js/dashboard.js` around lines 749 - 777,
_donutData() can produce NaN when input_tokens/output_tokens are missing; update
every place that sums token fields (the sort comparator, the values mapping in
top.map, and the rest.forEach accumulation) to coalesce undefined/null to 0
(e.g., use (m.input_tokens || 0) + (m.output_tokens || 0) or equivalent) so
sorting and value calculations never add undefined.
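The null-coalescing pattern can be centralized in one helper so the comparator, the `map`, and the accumulator all stay consistent; `sumTokens` and `donutValues` are illustrative names assuming the row shape shown in the quoted code.

```javascript
// Coalesce missing token fields to 0 once, then reuse everywhere.
const sumTokens = (m) => (m.input_tokens || 0) + (m.output_tokens || 0);

// Sort rows by total tokens (descending) and return the chart values;
// rows with absent fields sort as 0 instead of poisoning the order with NaN.
function donutValues(modelUsage) {
  const sorted = [...modelUsage].sort((a, b) => sumTokens(b) - sumTokens(a));
  return sorted.map(sumTokens);
}
```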
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 1
♻️ Duplicate comments (8)
internal/admin/dashboard/static/js/dashboard.js (6)
606-634: Redundant `undefined` check after loose-equality null comparison.

`v == null` already covers both `null` and `undefined`. The trailing `|| v === undefined` in `formatCost` (Line 607), `formatPrice` (Line 626), and `formatPriceFine` (Line 631) is dead code. `formatNumber` (Line 602) has the same redundancy.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/static/js/dashboard.js` around lines 606 - 634, The null/undefined checks in formatCost, formatPrice, formatPriceFine (and formatNumber) are redundant: change each conditional from "v == null || v === undefined" to simply "v == null" so the loose-equality handles both null and undefined; update the checks inside the methods formatCost, formatPrice, formatPriceFine, and formatNumber accordingly to remove the dead "v === undefined" branch while keeping the exact return behavior unchanged.
669-688: `fetchUsageLog` replaces `usageLog` wholesale — pagination fields may be lost.

Line 683 overwrites `this.usageLog` with the raw server response. If the response omits `limit` or `offset`, subsequent calls to `usageLogNextPage`/`usageLogPrevPage` will operate on `undefined`, producing `NaN` offsets.

Proposed fix
- this.usageLog = await res.json();
- if (!this.usageLog.entries) this.usageLog.entries = [];
+ const data = await res.json();
+ this.usageLog = {
+   entries: data.entries || [],
+   total: data.total || 0,
+   limit: data.limit || this.usageLog.limit || 50,
+   offset: data.offset ?? this.usageLog.offset ?? 0,
+ };

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/static/js/dashboard.js` around lines 669 - 688, fetchUsageLog currently replaces this.usageLog with the server response which can omit pagination fields and break usageLogNextPage/usageLogPrevPage; instead merge the fetched JSON into the existing this.usageLog (or a default object) and ensure limit and offset are preserved or defaulted (e.g., use existing this.usageLog.limit/offset or set to 50/0) before assigning—update the assignment in fetchUsageLog (and keep the fallback that ensures entries is an array) so missing fields from the response don’t produce NaN offsets.
724-728: `formatTokensShort` lacks a null/undefined guard.

When `n` is `null` or `undefined`, the comparisons silently fall through and `String(n)` returns `"null"`/`"undefined"` in the UI. Other format helpers already guard against this.

Proposed fix
  formatTokensShort(n) {
+   if (n == null) return '-';
    if (n >= 1000000) return (n / 1000000).toFixed(1) + 'M';
    if (n >= 1000) return (n / 1000).toFixed(1) + 'K';
    return String(n);
  },

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/static/js/dashboard.js` around lines 724 - 728, formatTokensShort currently returns "null"/"undefined" when n is null/undefined; add a guard at the top of the formatTokensShort function to handle null/undefined (e.g. if (n == null) return '' or the same placeholder other format helpers use) before doing comparisons, so the UI shows the expected empty/placeholder value instead of the literal "null"/"undefined".
611-623: `formatCostTooltip` calls `formatNumber` on arbitrary `raw_data` values.

If a `raw_data` value is a string, `formatNumber` will call `.toLocaleString()` on it without formatting. Add a type guard.

Proposed fix
  for (const [key, value] of Object.entries(entry.raw_data)) {
    const label = key.replace(/_/g, ' ').replace(/\b\w/g, c => c.toUpperCase());
-   lines.push(label + ': ' + this.formatNumber(value));
+   lines.push(label + ': ' + (typeof value === 'number' ? this.formatNumber(value) : String(value)));
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/static/js/dashboard.js` around lines 611 - 623, formatCostTooltip currently calls this.formatNumber on every raw_data value which breaks for non-number types; update the loop inside formatCostTooltip to type-guard each value (e.g., check typeof value === 'number' or Number.isFinite(value')) and only call this.formatNumber for numeric values, otherwise convert the value to a safe string (e.g., String(value)) before pushing it to lines; keep the rest of the function and use the same label generation so callers of formatNumber are limited to numeric inputs.
839-840: `drawBorder: false` is ignored in Chart.js 4.x.

This option is from Chart.js 3.x. In 4.x, use `border: { display: false }` on the scale to disable the axis border line.

Proposed fix
  y: {
-   grid: { color: colors.grid, drawBorder: false },
+   grid: { color: colors.grid },
+   border: { display: false },
    ticks: {

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/static/js/dashboard.js` around lines 839 - 840, The y scale configuration currently uses the deprecated Chart.js 3.x option drawBorder: false; locate the scale object where "y:" is defined in dashboard.js and replace that property with the Chart.js 4.x equivalent by adding a border configuration (e.g., border: { display: false }) on the y scale; keep other scale settings like grid: { color: colors.grid } intact and remove the drawBorder property to ensure the axis border is disabled in Chart.js v4.
749-774: `_barData()` produces `NaN` when token fields are absent.

In the tokens branch, `input_tokens + output_tokens` on `undefined` fields yields `NaN`, making the sort non-deterministic and injecting `NaN` into the chart dataset. The costs branch already uses `|| 0` — apply the same pattern to the tokens branches.

Proposed fix
- return ((b.input_tokens + b.output_tokens) - (a.input_tokens + a.output_tokens));
+ return (((b.input_tokens || 0) + (b.output_tokens || 0)) - ((a.input_tokens || 0) + (a.output_tokens || 0)));
- return m.input_tokens + m.output_tokens;
+ return (m.input_tokens || 0) + (m.output_tokens || 0);
- else otherVal += m.input_tokens + m.output_tokens;
+ else otherVal += (m.input_tokens || 0) + (m.output_tokens || 0);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/static/js/dashboard.js` around lines 749 - 774, The _barData() function produces NaN because it adds possibly undefined token fields; update all token-sum expressions to guard with defaults (e.g., use (m.input_tokens || 0) + (m.output_tokens || 0)) wherever tokens are used: in the sort comparator, in the values mapping for the tokens branch, and in the aggregation loop for rest; keep the existing costs branch as-is and ensure you reference this.modelUsage and this.usageMode in the updated expressions so missing token fields resolve to 0.

internal/admin/dashboard/templates/index.html (2)
181-207: "All" category displays text-generation columns for non-text models.

When `activeCategory === 'all'`, this table (with Input/Output $/MTok columns) is shown for all models including image, audio, and video models that don't have these pricing fields — resulting in rows with mostly dash values. Consider either hiding pricing columns in the "all" view or displaying a simplified unified table.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/templates/index.html` around lines 181 - 207, The "all" view shows text-generation pricing columns for non-text models resulting in dashes; update the table to only render Input/Output/Cached price columns when a row's model actually supports text generation or has pricing. Concretely, change the column headers and each <td> that uses formatPrice(m.model.metadata?.pricing?.input_per_mtok|output_per_mtok|cached_input_per_mtok) to be conditionally shown based on a per-model predicate (e.g., check (m.model.metadata?.modes ?? []).includes('text_generation') || m.model.metadata?.pricing) rather than only testing activeCategory, so filteredModels rows that are image/audio/video won't render empty price cells; keep activeCategory === 'text_generation' logic for the cached column when appropriate.
142-143: Pagination info shows incorrect range for edge cases.

When `usageLog.total > 0` but the offset exceeds total due to a race or stale state, `usageLog.offset + 1` could exceed `usageLog.total`, showing e.g. "Showing 51-50 of 50". Guard the start value with `Math.min`.

Proposed fix
-<span class="pagination-info" x-text="'Showing ' + (usageLog.offset + 1) + '-' + Math.min(usageLog.offset + usageLog.limit, usageLog.total) + ' of ' + usageLog.total"></span>
+<span class="pagination-info" x-text="'Showing ' + Math.min(usageLog.offset + 1, usageLog.total) + '–' + Math.min(usageLog.offset + usageLog.limit, usageLog.total) + ' of ' + usageLog.total"></span>

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/admin/dashboard/templates/index.html` around lines 142 - 143, The pagination range can display an invalid start when usageLog.offset exceeds usageLog.total; update the x-text expression in the pagination span that uses usageLog to guard the start value by replacing (usageLog.offset + 1) with Math.min(usageLog.offset + 1, usageLog.total) so the displayed start never exceeds usageLog.total (keep the rest of the expression intact using usageLog.offset, usageLog.limit, and usageLog.total).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/admin/dashboard/templates/index.html`:
- Around line 325-327: The empty-state conditions miss the case when there are
zero server models and the user has an active modelFilter; update the template
to handle that case by adding or modifying an element that shows a "No models
match your filter" message when models.length === 0 && modelFilter && !loading
&& !authError (and optionally consider activeCategory similar to the other
messages). Target the existing symbols/models: models, modelFilter,
filteredModels, activeCategory, loading, authError and ensure the new x-show
condition covers models.length === 0 with an active modelFilter so the user sees
feedback when filtering returns nothing.
---
Duplicate comments:
In `@internal/admin/dashboard/static/js/dashboard.js`:
- Around line 606-634: The null/undefined checks in formatCost, formatPrice,
formatPriceFine (and formatNumber) are redundant: change each conditional from
"v == null || v === undefined" to simply "v == null" so the loose-equality
handles both null and undefined; update the checks inside the methods
formatCost, formatPrice, formatPriceFine, and formatNumber accordingly to remove
the dead "v === undefined" branch while keeping the exact return behavior
unchanged.
- Around line 669-688: fetchUsageLog currently replaces this.usageLog with the
server response which can omit pagination fields and break
usageLogNextPage/usageLogPrevPage; instead merge the fetched JSON into the
existing this.usageLog (or a default object) and ensure limit and offset are
preserved or defaulted (e.g., use existing this.usageLog.limit/offset or set to
50/0) before assigning—update the assignment in fetchUsageLog (and keep the
fallback that ensures entries is an array) so missing fields from the response
don’t produce NaN offsets.
- Around line 724-728: formatTokensShort currently returns "null"/"undefined"
when n is null/undefined; add a guard at the top of the formatTokensShort
function to handle null/undefined (e.g. if (n == null) return '' or the same
placeholder other format helpers use) before doing comparisons, so the UI shows
the expected empty/placeholder value instead of the literal "null"/"undefined".
- Around line 611-623: formatCostTooltip currently calls this.formatNumber on
every raw_data value which breaks for non-number types; update the loop inside
formatCostTooltip to type-guard each value (e.g., check typeof value ===
'number' or Number.isFinite(value')) and only call this.formatNumber for numeric
values, otherwise convert the value to a safe string (e.g., String(value))
before pushing it to lines; keep the rest of the function and use the same label
generation so callers of formatNumber are limited to numeric inputs.
- Around line 839-840: The y scale configuration currently uses the deprecated
Chart.js 3.x option drawBorder: false; locate the scale object where "y:" is
defined in dashboard.js and replace that property with the Chart.js 4.x
equivalent by adding a border configuration (e.g., border: { display: false })
on the y scale; keep other scale settings like grid: { color: colors.grid }
intact and remove the drawBorder property to ensure the axis border is disabled
in Chart.js v4.
- Around line 749-774: The _barData() function produces NaN because it adds
possibly undefined token fields; update all token-sum expressions to guard with
defaults (e.g., use (m.input_tokens || 0) + (m.output_tokens || 0)) wherever
tokens are used: in the sort comparator, in the values mapping for the tokens
branch, and in the aggregation loop for rest; keep the existing costs branch
as-is and ensure you reference this.modelUsage and this.usageMode in the updated
expressions so missing token fields resolve to 0.
In `@internal/admin/dashboard/templates/index.html`:
- Around line 181-207: The "all" view shows text-generation pricing columns for
non-text models resulting in dashes; update the table to only render
Input/Output/Cached price columns when a row's model actually supports text
generation or has pricing. Concretely, change the column headers and each <td>
that uses
formatPrice(m.model.metadata?.pricing?.input_per_mtok|output_per_mtok|cached_input_per_mtok)
to be conditionally shown based on a per-model predicate (e.g., check
(m.model.metadata?.modes ?? []).includes('text_generation') ||
m.model.metadata?.pricing) rather than only testing activeCategory, so
filteredModels rows that are image/audio/video won't render empty price cells;
keep activeCategory === 'text_generation' logic for the cached column when
appropriate.
- Around line 142-143: The pagination range can display an invalid start when
usageLog.offset exceeds usageLog.total; update the x-text expression in the
pagination span that uses usageLog to guard the start value by replacing
(usageLog.offset + 1) with Math.min(usageLog.offset + 1, usageLog.total) so the
displayed start never exceeds usageLog.total (keep the rest of the expression
intact using usageLog.offset, usageLog.limit, and usageLog.total).
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (3)
internal/admin/dashboard/static/css/dashboard.css
internal/admin/dashboard/static/js/dashboard.js
internal/admin/dashboard/templates/index.html
<p class="empty-state" x-show="models.length === 0 && !loading && !authError && !modelFilter && (activeCategory === 'all' || !activeCategory)">No models registered.</p>
<p class="empty-state" x-show="models.length === 0 && !loading && !authError && !modelFilter && activeCategory && activeCategory !== 'all'">No models in this category.</p>
<p class="empty-state" x-show="models.length > 0 && filteredModels.length === 0 && modelFilter">No models match your filter.</p>
Empty-state gap when models.length === 0 with an active modelFilter.
If the server returns zero models for a category and the user has typed a filter, none of the three empty-state conditions match (lines 325–326 require !modelFilter; line 327 requires models.length > 0). The user sees a blank area with the filter input visible but no message.
Proposed fix — add a catch-all for filtered-but-empty
<p class="empty-state" x-show="models.length === 0 && !loading && !authError && !modelFilter && (activeCategory === 'all' || !activeCategory)">No models registered.</p>
<p class="empty-state" x-show="models.length === 0 && !loading && !authError && !modelFilter && activeCategory && activeCategory !== 'all'">No models in this category.</p>
<p class="empty-state" x-show="models.length > 0 && filteredModels.length === 0 && modelFilter">No models match your filter.</p>
+<p class="empty-state" x-show="models.length === 0 && !loading && !authError && modelFilter">No models match your filter.</p>

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/admin/dashboard/templates/index.html` around lines 325 - 327, The
empty-state conditions miss the case when there are zero server models and the
user has an active modelFilter; update the template to handle that case by
adding or modifying an element that shows a "No models match your filter"
message when models.length === 0 && modelFilter && !loading && !authError (and
optionally consider activeCategory similar to the other messages). Target the
existing symbols/models: models, modelFilter, filteredModels, activeCategory,
loading, authError and ensure the new x-show condition covers models.length ===
0 with an active modelFilter so the user sees feedback when filtering returns
nothing.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Fetch, cache, and merge the ai-model-list registry (models.json) to enrich discovered models with pricing, capabilities, and context limits. Models not in the registry still work — the fetch is best-effort and never gates.
Summary by CodeRabbit