providers: preserve provider instance identity in registry cache #113
SantiagoDePolonia merged 3 commits into main
Conversation
📝 Walkthrough
Model caching and provider registry were restructured: model storage moved from a global models map/version to per-provider groups.
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant Registry as ModelRegistry
    participant Cache as ModelCache
    participant ProviderA as Provider(nameA,typeA)
    participant ProviderB as Provider(nameB,typeB)
    Client->>Registry: Initialize()
    Registry->>ProviderA: RegisterProviderWithNameAndType(providerA, nameA, typeA)
    Registry->>ProviderB: RegisterProviderWithNameAndType(providerB, nameB, typeB)
    Registry->>Cache: LoadFromCache()
    Cache-->>Registry: returns Providers map (providerName -> CachedProvider{Models: [...]})
    Registry->>Registry: populate modelsByProvider & providerNames
    Client->>Registry: GetModel("nameB/shared-model") -- provider-scoped selector
    Registry->>modelsByProvider: lookup provider nameB -> model id
    Registry->>ProviderB: resolve model via providerB
    Registry->>Client: return ModelInfo
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (warning)
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/cache/cache_test.go`:
- Around line 61-66: The test currently checks slice length non-fatally then
indexes result.Models which can cause a panic; modify the assertions in the
failing blocks (the ones referencing result.Models) to either use a fatal
assertion (e.g., t.Fatalf) when len(result.Models) != expected or add an
immediate guard that returns/fails before indexing, and then perform the element
checks (e.g., result.Models[0].ModelID) — update both the block around the first
assertion and the similar block at lines 178-189 to use this guarded/fatal
pattern so indexing cannot occur when the slice is empty or has the wrong
length.
In `@internal/providers/registry_cache_test.go`:
- Around line 151-203: Add an assertion that the unqualified model ID "gpt-4o"
resolves to the expected winner provider after LoadFromCache; specifically,
after registering east and west via RegisterProviderWithNameAndType and calling
registry.LoadFromCache, assert registry.GetProvider("gpt-4o") == east (the
first-registered provider) to ensure unqualified routing for duplicate model IDs
is preserved.
In `@internal/providers/registry.go`:
- Around line 352-357: The cache write is reordering providers lexicographically
which breaks the original registration-first-wins semantics on reload; in
SaveToCache stop sorting providerNames and iterate providers in the original
registration order (use the registry's provider slice/order used by Register/Add
rather than the map keys or, if no slice exists, capture and persist the
provider registration order when providers are registered), and ensure
LoadFromCache consumes rows in that same persisted/registration order so
unqualified lookups (handled in LoadFromCache and the modelsByProvider handling)
keep the first-registered provider as the winner.
- Around line 434-442: The current logic treats any input containing '/' as a
qualified selector by using splitModelSelector(model) and returns nil early if
the provider lookup (r.modelsByProvider[providerName] or
providerModels[modelID]) fails, which prevents legitimate unqualified keys like
"a/b" stored in r.models from being found; change the flow in the affected
methods (those using splitModelSelector, r.modelsByProvider, providerModels and
r.models) so that when providerName != "" you still attempt the
provider-specific lookup first but if it misses (no providerModels or no info)
you do not return nil immediately — instead fall through to the global fallback
and check r.models[model] (the original full key) before returning nil. Apply
this fallback pattern consistently in the four methods that use
splitModelSelector/r.modelsByProvider (the blocks around the current
early-return locations).
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (6)
internal/cache/cache.go, internal/cache/cache_test.go, internal/providers/init.go, internal/providers/registry.go, internal/providers/registry_cache_test.go, internal/providers/registry_test.go
```go
t.Run("LoadFromCachePreservesProviderInstancesWithSameType", func(t *testing.T) {
	tmpDir := t.TempDir()
	cacheFile := filepath.Join(tmpDir, "models.json")

	modelCache := cache.ModelCache{
		Version:   modelCacheVersion,
		UpdatedAt: time.Now().UTC(),
		Models: []cache.CachedModel{
			{
				ModelID:      "gpt-4o",
				Provider:     "openai-east",
				ProviderType: "openai",
				Object:       "model",
				OwnedBy:      "openai",
			},
			{
				ModelID:      "gpt-4o",
				Provider:     "openai-west",
				ProviderType: "openai",
				Object:       "model",
				OwnedBy:      "openai",
			},
		},
	}
	data, _ := json.Marshal(modelCache)
	if err := os.WriteFile(cacheFile, data, 0o644); err != nil {
		t.Fatalf("failed to write cache file: %v", err)
	}

	registry := NewModelRegistry()
	localCache := cache.NewLocalCache(cacheFile)
	registry.SetCache(localCache)

	east := &registryMockProvider{name: "openai-east"}
	west := &registryMockProvider{name: "openai-west"}
	registry.RegisterProviderWithNameAndType(east, "openai-east", "openai")
	registry.RegisterProviderWithNameAndType(west, "openai-west", "openai")

	loaded, err := registry.LoadFromCache(context.Background())
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if loaded != 1 {
		t.Fatalf("expected 1 unqualified model loaded, got %d", loaded)
	}

	if provider := registry.GetProvider("openai-east/gpt-4o"); provider != east {
		t.Fatal("expected openai-east/gpt-4o to map to openai-east provider")
	}
	if provider := registry.GetProvider("openai-west/gpt-4o"); provider != west {
		t.Fatal("expected openai-west/gpt-4o to map to openai-west provider")
	}
})
```
Add an unqualified winner assertion in this same-type restore test.
The test checks qualified mappings only. A regression in unqualified routing for duplicate model IDs would still pass here.
🧪 Suggested assertion

```diff
 if provider := registry.GetProvider("openai-west/gpt-4o"); provider != west {
 	t.Fatal("expected openai-west/gpt-4o to map to openai-west provider")
 }
+if provider := registry.GetProvider("gpt-4o"); provider != east {
+	t.Fatal("expected unqualified gpt-4o to map to first provider")
+}
 })
```
internal/providers/registry.go
Outdated
```go
providerNames := make([]string, 0, len(modelsByProvider))
for providerName := range modelsByProvider {
	providerNames = append(providerNames, providerName)
}
sort.Strings(providerNames)
```
Cache serialization order can change unqualified provider selection after restart.
SaveToCache sorts provider names lexicographically, while LoadFromCache keeps the first row per model for unqualified lookups. That can flip the winner from registration order to alphabetical order after a cache round-trip.
💡 Keep provider row order aligned with registration order
```diff
 r.mu.RLock()
 cacheBackend := r.cache
+providers := append([]core.Provider(nil), r.providers...)
+providerNamesByProvider := make(map[core.Provider]string, len(r.providerNames))
+for p, n := range r.providerNames {
+	providerNamesByProvider[p] = n
+}
 modelsByProvider := make(map[string]map[string]*ModelInfo, len(r.modelsByProvider))
 for providerName, models := range r.modelsByProvider {
 	modelsByProvider[providerName] = make(map[string]*ModelInfo, len(models))
@@
-providerNames := make([]string, 0, len(modelsByProvider))
-for providerName := range modelsByProvider {
-	providerNames = append(providerNames, providerName)
-}
-sort.Strings(providerNames)
+providerNames := make([]string, 0, len(modelsByProvider))
+seen := make(map[string]struct{}, len(modelsByProvider))
+for _, p := range providers {
+	name := providerNamesByProvider[p]
+	if _, ok := modelsByProvider[name]; !ok {
+		continue
+	}
+	if _, ok := seen[name]; ok {
+		continue
+	}
+	providerNames = append(providerNames, name)
+	seen[name] = struct{}{}
+}
+for name := range modelsByProvider {
+	if _, ok := seen[name]; !ok {
+		providerNames = append(providerNames, name)
+	}
+}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/providers/registry.go` around lines 352 - 357, The cache write is
reordering providers lexicographically which breaks the original
registration-first-wins semantics on reload; in SaveToCache stop sorting
providerNames and iterate providers in the original registration order (use the
registry's provider slice/order used by Register/Add rather than the map keys
or, if no slice exists, capture and persist the provider registration order when
providers are registered), and ensure LoadFromCache consumes rows in that same
persisted/registration order so unqualified lookups (handled in LoadFromCache
and the modelsByProvider handling) keep the first-registered provider as the
winner.
Backward compatibility is not a concern, so the version field and mismatch guard in the model cache are unnecessary complexity. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
♻️ Duplicate comments (4)
internal/cache/cache_test.go (1)
57-62: ⚠️ Potential issue | 🟡 Minor
Use fatal guards before indexing model slices.
Line 57 and Line 169 perform non-fatal length checks and then index slices immediately. When lengths are wrong, the test can panic and mask the actual assertion failure.
Suggested hardening
```diff
-    if len(result.Models) != 1 {
-        t.Errorf("expected 1 model, got %d", len(result.Models))
-    }
+    if len(result.Models) != 1 {
+        t.Fatalf("expected 1 model, got %d", len(result.Models))
+    }
     if result.Models[0].ModelID != "test-model" {
         t.Errorf("expected test-model in cache, got %q", result.Models[0].ModelID)
     }
@@
-    if len(restored.Models) != len(original.Models) {
-        t.Errorf("model count mismatch: got %d, want %d", len(restored.Models), len(original.Models))
-    }
+    if len(restored.Models) != len(original.Models) {
+        t.Fatalf("model count mismatch: got %d, want %d", len(restored.Models), len(original.Models))
+    }
+    if len(restored.Models) < 2 {
+        t.Fatalf("expected at least 2 models, got %d", len(restored.Models))
+    }
     if restored.Models[0].ModelID != original.Models[0].ModelID {
         t.Errorf("first model ID mismatch: got %q, want %q", restored.Models[0].ModelID, original.Models[0].ModelID)
     }
```
Also applies to: 169-180
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/cache/cache_test.go` around lines 57 - 62: the tests index result.Models without fatal guards, causing panics if the slice is empty; update the assertions in internal/cache/cache_test.go to use fatal checks (e.g., t.Fatalf or t.Fatal) when validating lengths before any indexing (replace the non-fatal t.Errorf length checks around result.Models with fatal checks) and apply the same change to the other occurrence that indexes result.Models around the block referenced (lines 169-180) so tests fail fast and do not panic when length assertions fail.
internal/providers/registry.go (2)
341-346: ⚠️ Potential issue | 🟠 Major
Do not sort provider rows when persisting cache.
Line 345 lexicographically reorders provider rows, but Line 272 keeps the first row for unqualified model lookup. After a cache round-trip, winner selection can drift from registration order to alphabetical order.
Based on learnings: Register providers explicitly via factory.Add() calls in cmd/gomodel/main.go; order matters, first registered wins for duplicate model names.
Suggested fix (preserve registration order in cache rows)
```diff
 r.mu.RLock()
 cacheBackend := r.cache
+providers := append([]core.Provider(nil), r.providers...)
+providerNamesByProvider := make(map[core.Provider]string, len(r.providerNames))
+for p, n := range r.providerNames {
+	providerNamesByProvider[p] = n
+}
 modelsByProvider := make(map[string]map[string]*ModelInfo, len(r.modelsByProvider))
 for providerName, models := range r.modelsByProvider {
 	modelsByProvider[providerName] = make(map[string]*ModelInfo, len(models))
@@
-providerNames := make([]string, 0, len(modelsByProvider))
-for providerName := range modelsByProvider {
-	providerNames = append(providerNames, providerName)
-}
-sort.Strings(providerNames)
+providerNames := make([]string, 0, len(modelsByProvider))
+seen := make(map[string]struct{}, len(modelsByProvider))
+for _, p := range providers {
+	name := providerNamesByProvider[p]
+	if _, ok := modelsByProvider[name]; !ok {
+		continue
+	}
+	if _, ok := seen[name]; ok {
+		continue
+	}
+	providerNames = append(providerNames, name)
+	seen[name] = struct{}{}
+}
+for name := range modelsByProvider {
+	if _, ok := seen[name]; !ok {
+		providerNames = append(providerNames, name)
+	}
+}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/providers/registry.go` around lines 341 - 346, The code is currently sorting provider rows before persisting which reorders providerNames and breaks "first-registered wins" behavior; instead preserve registration order by removing the sort and iterating providers in the original insertion order from modelsByProvider (or by using the underlying ordered slice used during registration) when building providerNames; specifically, stop calling sort.Strings(providerNames) and ensure providerNames is populated from the registration-ordered source (matching how factories are added via factory.Add()) so duplicate model winner selection remains stable.
423-431: ⚠️ Potential issue | 🟠 Major
Fallback to plain key when qualified lookup misses.
These blocks return early on qualified misses, which makes legitimate unqualified model IDs containing / unreachable.
Suggested fix (apply consistently to all four methods)
```diff
 func (r *ModelRegistry) GetProvider(model string) core.Provider {
 	r.mu.RLock()
 	defer r.mu.RUnlock()
-	providerName, modelID := splitModelSelector(model)
+	rawModel := strings.TrimSpace(model)
+	providerName, modelID := splitModelSelector(rawModel)
 	if providerName != "" {
 		if providerModels, ok := r.modelsByProvider[providerName]; ok {
 			if info, exists := providerModels[modelID]; exists {
 				return info.Provider
 			}
 		}
+		if info, ok := r.models[rawModel]; ok {
+			return info.Provider
+		}
 		return nil
 	}
 	if info, ok := r.models[modelID]; ok {
 		return info.Provider
@@
 func (r *ModelRegistry) GetModel(model string) *ModelInfo {
 	r.mu.RLock()
 	defer r.mu.RUnlock()
-	providerName, modelID := splitModelSelector(model)
+	rawModel := strings.TrimSpace(model)
+	providerName, modelID := splitModelSelector(rawModel)
 	if providerName != "" {
 		if providerModels, ok := r.modelsByProvider[providerName]; ok {
-			return providerModels[modelID]
+			if info, exists := providerModels[modelID]; exists {
+				return info
+			}
 		}
+		if info, ok := r.models[rawModel]; ok {
+			return info
+		}
 		return nil
 	}
@@
 func (r *ModelRegistry) Supports(model string) bool {
 	r.mu.RLock()
 	defer r.mu.RUnlock()
-	providerName, modelID := splitModelSelector(model)
+	rawModel := strings.TrimSpace(model)
+	providerName, modelID := splitModelSelector(rawModel)
 	if providerName != "" {
 		providerModels, ok := r.modelsByProvider[providerName]
-		if !ok {
-			return false
+		if ok {
+			if _, ok = providerModels[modelID]; ok {
+				return true
+			}
 		}
-		_, ok = providerModels[modelID]
+		_, ok = r.models[rawModel]
 		return ok
 	}
@@
 func (r *ModelRegistry) GetProviderType(model string) string {
 	r.mu.RLock()
 	defer r.mu.RUnlock()
-	providerName, modelID := splitModelSelector(model)
+	rawModel := strings.TrimSpace(model)
+	providerName, modelID := splitModelSelector(rawModel)
 	if providerName != "" {
 		if providerModels, ok := r.modelsByProvider[providerName]; ok {
 			if info, exists := providerModels[modelID]; exists {
 				return r.providerTypes[info.Provider]
 			}
 		}
+		if info, ok := r.models[rawModel]; ok {
+			return r.providerTypes[info.Provider]
+		}
 		return ""
 	}
```
Also applies to: 444-450, 463-471, 518-526
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/providers/registry.go` around lines 423 - 431: the qualified-lookup branch in methods using splitModelSelector (starting with providerName, modelID := splitModelSelector(model)) returns nil on a provider-specific miss, which prevents legitimate unqualified model IDs that contain "/" from being found; change the logic so that when providerName != "" and the provider-specific map lookup fails (no providerModels or no info for modelID), you do NOT return early but fall through to the unqualified lookup path using the original model string (or model key) instead; apply this same change consistently to the other three methods that use splitModelSelector and r.modelsByProvider so qualified misses fall back to the plain-key lookup.
internal/providers/registry_cache_test.go (1)
147-198: ⚠️ Potential issue | 🟡 Minor
Add unqualified winner assertion in same-type restore test.
This test only verifies qualified selectors. A regression in unqualified duplicate-model routing would still pass.
As per coding guidelines (**/*_test.go): Add or update tests for any behavior change.
Suggested assertion
```diff
 if provider := registry.GetProvider("openai-west/gpt-4o"); provider != west {
 	t.Fatal("expected openai-west/gpt-4o to map to openai-west provider")
 }
+if provider := registry.GetProvider("gpt-4o"); provider != east {
+	t.Fatal("expected unqualified gpt-4o to map to first provider")
+}
 })
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/providers/registry_cache_test.go` around lines 147 - 198, The test LoadFromCachePreservesProviderInstancesWithSameType is missing an assertion for the unqualified selector; after loading (and after the existing qualified checks for "openai-east/gpt-4o" and "openai-west/gpt-4o"), add an assertion that registry.GetProvider("gpt-4o") returns the expected winner provider (e.g., the one registered first: east). Reference the test function name and the GetProvider calls (registry.GetProvider("gpt-4o"), registry.GetProvider("openai-east/gpt-4o"), registry.GetProvider("openai-west/gpt-4o")) and make the unqualified assertion mirror the style of the other checks (t.Fatal with a clear message).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 1e0ae647-7b22-4875-bdcf-93b9c896ede1
📒 Files selected for processing (4)
internal/cache/cache.go, internal/cache/cache_test.go, internal/providers/registry.go, internal/providers/registry_cache_test.go
Restructure ModelCache to group models by provider, eliminating per-model repetition of provider_type, owned_by, and object fields. Drop the always-"model" Object field entirely. Fix splitModelSelector fallthrough: when a qualified lookup fails (e.g. "meta-llama/Meta-Llama-3-70B" where "meta-llama" is not a provider name), fall through to unqualified lookup using the original model string instead of returning nil. Applied to GetProvider, GetModel, Supports, and GetProviderType. Harden cache tests with fatal guards before slice indexing. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
♻️ Duplicate comments (2)
internal/providers/registry_cache_test.go (1)
200-203: ⚠️ Potential issue | 🟡 Minor
Unqualified winner assertion is too permissive for this behavior contract.
At Line 200-203, allowing east || west masks the first-provider-wins requirement for duplicate unqualified IDs. This test should assert east (registered first), so restore-order regressions are caught.
🧪 Tighten assertion
```diff
-// Unqualified lookup should resolve to one of the two providers (map iteration order is nondeterministic)
-if provider := registry.GetProvider("gpt-4o"); provider != east && provider != west {
-	t.Fatal("expected unqualified gpt-4o to map to either openai-east or openai-west provider")
+// Unqualified lookup should preserve first-registered provider semantics.
+if provider := registry.GetProvider("gpt-4o"); provider != east {
+	t.Fatal("expected unqualified gpt-4o to map to first registered provider (openai-east)")
 }
```
As per coding guidelines (**/*_test.go): Add or update tests for any behavior change.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/providers/registry_cache_test.go` around lines 200 - 203: the test currently allows registry.GetProvider("gpt-4o") to be either east or west, which masks the intended "first registered wins" behavior; update the assertion to require the provider to be east (the provider registered first) instead of allowing either east or west, by replacing the permissive check around registry.GetProvider("gpt-4o") with a strict equality assertion that the returned provider equals east (use the same east/west variables already present) so order regressions are detected.
internal/providers/registry.go (1)
250-276: ⚠️ Potential issue | 🟠 Major
Unqualified cache restore winner is nondeterministic.
At Line 253, for providerName, cachedProvider := range modelCache.Providers iterates a map; then at Line 271 the first insert into newModels wins. For duplicate model IDs across providers, the unqualified provider can flip between process runs after cache restore.
🔧 Keep restore order aligned with provider registration order
```diff
-	// Build lookup maps from configured providers.
-	r.mu.RLock()
-	nameToProvider := make(map[string]core.Provider, len(r.providerNames))
-	for provider, pName := range r.providerNames {
-		nameToProvider[pName] = provider
-	}
-	r.mu.RUnlock()
+	// Snapshot provider registration order + names.
+	r.mu.RLock()
+	providers := append([]core.Provider(nil), r.providers...)
+	providerNames := make(map[core.Provider]string, len(r.providerNames))
+	for provider, pName := range r.providerNames {
+		providerNames[provider] = pName
+	}
+	r.mu.RUnlock()
 	// Populate model maps from grouped cache structure. Unqualified lookups keep "first provider wins".
 	newModels := make(map[string]*ModelInfo)
 	newModelsByProvider := make(map[string]map[string]*ModelInfo)
-	for providerName, cachedProvider := range modelCache.Providers {
-		provider, ok := nameToProvider[providerName]
+	for _, provider := range providers {
+		providerName := providerNames[provider]
+		cachedProvider, ok := modelCache.Providers[providerName]
 		if !ok {
-			// Provider not configured, skip all its models
+			// No cached entry for this configured provider.
 			continue
 		}
 		providerModels := make(map[string]*ModelInfo, len(cachedProvider.Models))
 		for _, cached := range cachedProvider.Models {
 			info := &ModelInfo{
```
Based on learnings: Register providers explicitly via factory.Add() calls in cmd/gomodel/main.go; order matters, first registered wins for duplicate model names.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/providers/registry.go` around lines 250 - 276, The current loop over modelCache.Providers is nondeterministic because map iteration order can change; when populating newModels (and newModelsByProvider) the first provider seen wins for duplicate model IDs which can flip on restore. Replace the map iteration (for providerName, cachedProvider := range modelCache.Providers) with a deterministic iteration over providers in registration order (use the ordered provider list used when registering providers via factory.Add or an existing slice that records registration order; if none exists, create and maintain one alongside nameToProvider), then for each providerName fetch cachedProvider and proceed exactly as before so that newModels/cached insertion respects first-registered-wins semantics.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: f114edf6-0f0d-4c0f-8e60-8abffe2b213f
📒 Files selected for processing (4)
internal/cache/cache.go, internal/cache/cache_test.go, internal/providers/registry.go, internal/providers/registry_cache_test.go