
providers: preserve provider instance identity in registry cache #113

Merged
SantiagoDePolonia merged 3 commits into main from feature/provider-instance-cache on Mar 4, 2026

Conversation

@SantiagoDePolonia
Contributor

@SantiagoDePolonia SantiagoDePolonia commented Mar 1, 2026

Summary

  • store provider instance names in the model registry and cache rows
  • fix cache restore to map models by provider instance instead of provider type
  • preserve qualified provider/model lookups while keeping unqualified first-provider behavior
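Provider-qualified selectors of the form provider/model are central to these changes. The following is a minimal sketch of how such a selector might be split; splitModelSelector and its signature are illustrative assumptions, not necessarily the PR's exact helper:

```go
package main

import (
	"fmt"
	"strings"
)

// splitModelSelector splits "provider/model" into its parts.
// An input without '/' is treated as an unqualified model ID.
// (Hypothetical sketch; the real helper in the PR may differ.)
func splitModelSelector(selector string) (providerName, modelID string) {
	if i := strings.Index(selector, "/"); i >= 0 {
		return selector[:i], selector[i+1:]
	}
	return "", selector
}

func main() {
	p, m := splitModelSelector("openai-east/gpt-4o")
	fmt.Printf("%q %q\n", p, m) // "openai-east" "gpt-4o"
	p, m = splitModelSelector("gpt-4o")
	fmt.Printf("%q %q\n", p, m) // "" "gpt-4o"
}
```

Note that this naive split treats any '/' as a provider qualifier, which is exactly the ambiguity the later review comments address for model IDs like "meta-llama/Meta-Llama-3-70B".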

Testing

  • go test ./internal/cache ./internal/providers -count=1
  • go test ./... -count=1

Summary by CodeRabbit

  • New Features

    • Explicit provider naming for provider-qualified model selectors (e.g., provider/model).
    • Cache entries now grouped per-provider and include model ID and creation timestamp.
  • Improvements

    • Provider-scoped lookups and provider-aware cache persistence for more reliable model resolution.
    • Unqualified lookups favor the first-registered provider to resolve duplicate model IDs.
  • Tests

    • Updated and expanded tests to cover provider-named registration and provider-scoped cache behavior.

@coderabbitai
Contributor

coderabbitai bot commented Mar 1, 2026

📝 Walkthrough

Walkthrough

Model caching and provider registry were restructured: model storage moved from a global models map/version to per-provider groups (ProvidersCachedProvider), provider registration now records both provider name and type (RegisterProviderWithNameAndType), and registry lookup/load/save logic was updated for provider-scoped model resolution.


Changes

Cohort / File(s) — Summary

  • Cache types & tests (internal/cache/cache.go, internal/cache/cache_test.go): Replaced the ModelCache.Models map and Version field with ModelCache.Providers (map[string]CachedProvider). Introduced CachedProvider (provider metadata plus Models []CachedModel); CachedModel now has ID and Created. Tests updated for the new serialization shape and provider-scoped model slices.
  • Provider registry core (internal/providers/registry.go, internal/providers/init.go): Added the provider-name-aware registration API RegisterProviderWithNameAndType(provider, name, type) and supporting maps (modelsByProvider, providerNames). Init/load/save/list/get flows updated to populate and persist per-provider model groups and to support provider-scoped selectors (providerName/modelID). Backwards-compatible register helpers delegate to the new API.
  • Registry tests & behavior (internal/providers/registry_test.go, internal/providers/registry_cache_test.go): Tests updated to use named/type registration. New tests validate provider-scoped lookups, preservation or skipping of providers when loading the cache, and first-provider-wins behavior for unqualified lookups. Cache-related tests updated for the provider-grouped cache shape.
  • Module manifest (go.mod): Small one-line module update, likely to accommodate API changes or tooling; no behavioral change in the diff summary.

Sequence Diagram(s)

sequenceDiagram
    autonumber
    participant Client
    participant Registry as ModelRegistry
    participant Cache as ModelCache
    participant ProviderA as Provider(nameA,typeA)
    participant ProviderB as Provider(nameB,typeB)

    Client->>Registry: Initialize()
    Registry->>ProviderA: RegisterProviderWithNameAndType(providerA, nameA, typeA)
    Registry->>ProviderB: RegisterProviderWithNameAndType(providerB, nameB, typeB)
    Registry->>Cache: LoadFromCache()
    Cache-->>Registry: returns Providers map (providerName -> CachedProvider{Models: [...]})
    Registry->>Registry: populate modelsByProvider & providerNames
    Client->>Registry: GetModel("nameB/shared-model")  -- provider-scoped selector
    Registry->>modelsByProvider: lookup provider nameB -> model id
    Registry->>ProviderB: resolve model via providerB
    Registry->>Client: return ModelInfo

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰
I hopped through code with nimble feet,
Grouped models by the provider neat.
Names and types now hold their place,
Caches nest models, one-per-space.
A carrot cheer for registry complete!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage — ⚠️ Warning: docstring coverage is 22.22%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)
  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: the PR title accurately summarizes the main objective: preserving provider instance identity in the registry cache by storing provider names and restructuring model storage to be provider-aware.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/cache/cache_test.go`:
- Around line 61-66: The test currently checks slice length non-fatally then
indexes result.Models which can cause a panic; modify the assertions in the
failing blocks (the ones referencing result.Models) to either use a fatal
assertion (e.g., t.Fatalf) when len(result.Models) != expected or add an
immediate guard that returns/fails before indexing, and then perform the element
checks (e.g., result.Models[0].ModelID) — update both the block around the first
assertion and the similar block at lines 178-189 to use this guarded/fatal
pattern so indexing cannot occur when the slice is empty or has the wrong
length.

In `@internal/providers/registry_cache_test.go`:
- Around line 151-203: Add an assertion that the unqualified model ID "gpt-4o"
resolves to the expected winner provider after LoadFromCache; specifically,
after registering east and west via RegisterProviderWithNameAndType and calling
registry.LoadFromCache, assert registry.GetProvider("gpt-4o") == east (the
first-registered provider) to ensure unqualified routing for duplicate model IDs
is preserved.

In `@internal/providers/registry.go`:
- Around line 352-357: The cache write is reordering providers lexicographically
which breaks the original registration-first-wins semantics on reload; in
SaveToCache stop sorting providerNames and iterate providers in the original
registration order (use the registry's provider slice/order used by Register/Add
rather than the map keys or, if no slice exists, capture and persist the
provider registration order when providers are registered), and ensure
LoadFromCache consumes rows in that same persisted/registration order so
unqualified lookups (handled in LoadFromCache and the modelsByProvider handling)
keep the first-registered provider as the winner.
- Around line 434-442: The current logic treats any input containing '/' as a
qualified selector by using splitModelSelector(model) and returns nil early if
the provider lookup (r.modelsByProvider[providerName] or
providerModels[modelID]) fails, which prevents legitimate unqualified keys like
"a/b" stored in r.models from being found; change the flow in the affected
methods (those using splitModelSelector, r.modelsByProvider, providerModels and
r.models) so that when providerName != "" you still attempt the
provider-specific lookup first but if it misses (no providerModels or no info)
you do not return nil immediately — instead fall through to the global fallback
and check r.models[model] (the original full key) before returning nil. Apply
this fallback pattern consistently in the four methods that use
splitModelSelector/r.modelsByProvider (the blocks around the current
early-return locations).

ℹ️ Review info

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5f12339 and c15d59f.

📒 Files selected for processing (6)
  • internal/cache/cache.go
  • internal/cache/cache_test.go
  • internal/providers/init.go
  • internal/providers/registry.go
  • internal/providers/registry_cache_test.go
  • internal/providers/registry_test.go

Comment on lines +151 to +203
t.Run("LoadFromCachePreservesProviderInstancesWithSameType", func(t *testing.T) {
	tmpDir := t.TempDir()
	cacheFile := filepath.Join(tmpDir, "models.json")

	modelCache := cache.ModelCache{
		Version:   modelCacheVersion,
		UpdatedAt: time.Now().UTC(),
		Models: []cache.CachedModel{
			{
				ModelID:      "gpt-4o",
				Provider:     "openai-east",
				ProviderType: "openai",
				Object:       "model",
				OwnedBy:      "openai",
			},
			{
				ModelID:      "gpt-4o",
				Provider:     "openai-west",
				ProviderType: "openai",
				Object:       "model",
				OwnedBy:      "openai",
			},
		},
	}
	data, _ := json.Marshal(modelCache)
	if err := os.WriteFile(cacheFile, data, 0o644); err != nil {
		t.Fatalf("failed to write cache file: %v", err)
	}

	registry := NewModelRegistry()
	localCache := cache.NewLocalCache(cacheFile)
	registry.SetCache(localCache)

	east := &registryMockProvider{name: "openai-east"}
	west := &registryMockProvider{name: "openai-west"}
	registry.RegisterProviderWithNameAndType(east, "openai-east", "openai")
	registry.RegisterProviderWithNameAndType(west, "openai-west", "openai")

	loaded, err := registry.LoadFromCache(context.Background())
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if loaded != 1 {
		t.Fatalf("expected 1 unqualified model loaded, got %d", loaded)
	}

	if provider := registry.GetProvider("openai-east/gpt-4o"); provider != east {
		t.Fatal("expected openai-east/gpt-4o to map to openai-east provider")
	}
	if provider := registry.GetProvider("openai-west/gpt-4o"); provider != west {
		t.Fatal("expected openai-west/gpt-4o to map to openai-west provider")
	}
})
Contributor


⚠️ Potential issue | 🟡 Minor

Add an unqualified winner assertion in this same-type restore test.

The test checks qualified mappings only. A regression in unqualified routing for duplicate model IDs would still pass here.

🧪 Suggested assertion
 		if provider := registry.GetProvider("openai-west/gpt-4o"); provider != west {
 			t.Fatal("expected openai-west/gpt-4o to map to openai-west provider")
 		}
+		if provider := registry.GetProvider("gpt-4o"); provider != east {
+			t.Fatal("expected unqualified gpt-4o to map to first provider")
+		}
 	})
As per coding guidelines `**/*_test.go`: Add or update tests for any behavior change.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
t.Run("LoadFromCachePreservesProviderInstancesWithSameType", func(t *testing.T) {
	tmpDir := t.TempDir()
	cacheFile := filepath.Join(tmpDir, "models.json")

	modelCache := cache.ModelCache{
		Version:   modelCacheVersion,
		UpdatedAt: time.Now().UTC(),
		Models: []cache.CachedModel{
			{
				ModelID:      "gpt-4o",
				Provider:     "openai-east",
				ProviderType: "openai",
				Object:       "model",
				OwnedBy:      "openai",
			},
			{
				ModelID:      "gpt-4o",
				Provider:     "openai-west",
				ProviderType: "openai",
				Object:       "model",
				OwnedBy:      "openai",
			},
		},
	}
	data, _ := json.Marshal(modelCache)
	if err := os.WriteFile(cacheFile, data, 0o644); err != nil {
		t.Fatalf("failed to write cache file: %v", err)
	}

	registry := NewModelRegistry()
	localCache := cache.NewLocalCache(cacheFile)
	registry.SetCache(localCache)

	east := &registryMockProvider{name: "openai-east"}
	west := &registryMockProvider{name: "openai-west"}
	registry.RegisterProviderWithNameAndType(east, "openai-east", "openai")
	registry.RegisterProviderWithNameAndType(west, "openai-west", "openai")

	loaded, err := registry.LoadFromCache(context.Background())
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if loaded != 1 {
		t.Fatalf("expected 1 unqualified model loaded, got %d", loaded)
	}

	if provider := registry.GetProvider("openai-east/gpt-4o"); provider != east {
		t.Fatal("expected openai-east/gpt-4o to map to openai-east provider")
	}
	if provider := registry.GetProvider("openai-west/gpt-4o"); provider != west {
		t.Fatal("expected openai-west/gpt-4o to map to openai-west provider")
	}
	if provider := registry.GetProvider("gpt-4o"); provider != east {
		t.Fatal("expected unqualified gpt-4o to map to first provider")
	}
})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/providers/registry_cache_test.go` around lines 151 - 203, Add an
assertion that the unqualified model ID "gpt-4o" resolves to the expected winner
provider after LoadFromCache; specifically, after registering east and west via
RegisterProviderWithNameAndType and calling registry.LoadFromCache, assert
registry.GetProvider("gpt-4o") == east (the first-registered provider) to ensure
unqualified routing for duplicate model IDs is preserved.

Comment on lines +352 to +357
	providerNames := make([]string, 0, len(modelsByProvider))
	for providerName := range modelsByProvider {
		providerNames = append(providerNames, providerName)
	}
	sort.Strings(providerNames)

Contributor


⚠️ Potential issue | 🟠 Major

Cache serialization order can change unqualified provider selection after restart.

SaveToCache sorts provider names lexicographically, while LoadFromCache keeps the first row per model for unqualified lookups. That can flip the winner from registration order to alphabetical order after a cache round-trip.

💡 Keep provider row order aligned with registration order
 	r.mu.RLock()
 	cacheBackend := r.cache
+	providers := append([]core.Provider(nil), r.providers...)
+	providerNamesByProvider := make(map[core.Provider]string, len(r.providerNames))
+	for p, n := range r.providerNames {
+		providerNamesByProvider[p] = n
+	}
 	modelsByProvider := make(map[string]map[string]*ModelInfo, len(r.modelsByProvider))
 	for providerName, models := range r.modelsByProvider {
 		modelsByProvider[providerName] = make(map[string]*ModelInfo, len(models))
@@
-	providerNames := make([]string, 0, len(modelsByProvider))
-	for providerName := range modelsByProvider {
-		providerNames = append(providerNames, providerName)
-	}
-	sort.Strings(providerNames)
+	providerNames := make([]string, 0, len(modelsByProvider))
+	seen := make(map[string]struct{}, len(modelsByProvider))
+	for _, p := range providers {
+		name := providerNamesByProvider[p]
+		if _, ok := modelsByProvider[name]; !ok {
+			continue
+		}
+		if _, ok := seen[name]; ok {
+			continue
+		}
+		providerNames = append(providerNames, name)
+		seen[name] = struct{}{}
+	}
+	for name := range modelsByProvider {
+		if _, ok := seen[name]; !ok {
+			providerNames = append(providerNames, name)
+		}
+	}
Based on learnings: `Register providers explicitly via factory.Add() calls in cmd/gomodel/main.go—order matters, first registered wins for duplicate model names`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/providers/registry.go` around lines 352 - 357, The cache write is
reordering providers lexicographically which breaks the original
registration-first-wins semantics on reload; in SaveToCache stop sorting
providerNames and iterate providers in the original registration order (use the
registry's provider slice/order used by Register/Add rather than the map keys
or, if no slice exists, capture and persist the provider registration order when
providers are registered), and ensure LoadFromCache consumes rows in that same
persisted/registration order so unqualified lookups (handled in LoadFromCache
and the modelsByProvider handling) keep the first-registered provider as the
winner.

Backward compatibility is not a concern, so the version field and
mismatch guard in the model cache are unnecessary complexity.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Contributor

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (4)
internal/cache/cache_test.go (1)

57-62: ⚠️ Potential issue | 🟡 Minor

Use fatal guards before indexing model slices.

Line 57 and Line 169 perform non-fatal length checks and then index slices immediately. When lengths are wrong, the test can panic and mask the actual assertion failure.

Suggested hardening
-		if len(result.Models) != 1 {
-			t.Errorf("expected 1 model, got %d", len(result.Models))
-		}
+		if len(result.Models) != 1 {
+			t.Fatalf("expected 1 model, got %d", len(result.Models))
+		}
 		if result.Models[0].ModelID != "test-model" {
 			t.Errorf("expected test-model in cache, got %q", result.Models[0].ModelID)
 		}
@@
-		if len(restored.Models) != len(original.Models) {
-			t.Errorf("model count mismatch: got %d, want %d", len(restored.Models), len(original.Models))
-		}
+		if len(restored.Models) != len(original.Models) {
+			t.Fatalf("model count mismatch: got %d, want %d", len(restored.Models), len(original.Models))
+		}
+		if len(restored.Models) < 2 {
+			t.Fatalf("expected at least 2 models, got %d", len(restored.Models))
+		}
 		if restored.Models[0].ModelID != original.Models[0].ModelID {
 			t.Errorf("first model ID mismatch: got %q, want %q", restored.Models[0].ModelID, original.Models[0].ModelID)
 		}

Also applies to: 169-180

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/cache/cache_test.go` around lines 57 - 62, The tests index
result.Models without fatal guards, causing panics if the slice is empty; update
the assertions in internal/cache/cache_test.go to use fatal checks (e.g.,
t.Fatalf or t.Fatal) when validating lengths before any indexing (replace the
non-fatal t.Errorf length checks around result.Models with fatal checks) and
apply the same change to the other occurrence that indexes result.Models around
the block referenced (lines 169-180) so tests fail fast and do not panic when
length assertions fail.
internal/providers/registry.go (2)

341-346: ⚠️ Potential issue | 🟠 Major

Do not sort provider rows when persisting cache.

Line 345 lexicographically reorders provider rows, but Line 272 keeps the first row for unqualified model lookup. After a cache round-trip, winner selection can drift from registration order to alphabetical order.
Based on learnings Register providers explicitly via factory.Add() calls in cmd/gomodel/main.go—order matters, first registered wins for duplicate model names.

Suggested fix (preserve registration order in cache rows)
 	r.mu.RLock()
 	cacheBackend := r.cache
+	providers := append([]core.Provider(nil), r.providers...)
+	providerNamesByProvider := make(map[core.Provider]string, len(r.providerNames))
+	for p, n := range r.providerNames {
+		providerNamesByProvider[p] = n
+	}
 	modelsByProvider := make(map[string]map[string]*ModelInfo, len(r.modelsByProvider))
 	for providerName, models := range r.modelsByProvider {
 		modelsByProvider[providerName] = make(map[string]*ModelInfo, len(models))
@@
-	providerNames := make([]string, 0, len(modelsByProvider))
-	for providerName := range modelsByProvider {
-		providerNames = append(providerNames, providerName)
-	}
-	sort.Strings(providerNames)
+	providerNames := make([]string, 0, len(modelsByProvider))
+	seen := make(map[string]struct{}, len(modelsByProvider))
+	for _, p := range providers {
+		name := providerNamesByProvider[p]
+		if _, ok := modelsByProvider[name]; !ok {
+			continue
+		}
+		if _, ok := seen[name]; ok {
+			continue
+		}
+		providerNames = append(providerNames, name)
+		seen[name] = struct{}{}
+	}
+	for name := range modelsByProvider {
+		if _, ok := seen[name]; !ok {
+			providerNames = append(providerNames, name)
+		}
+	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/providers/registry.go` around lines 341 - 346, The code is currently
sorting provider rows before persisting which reorders providerNames and breaks
"first-registered wins" behavior; instead preserve registration order by
removing the sort and iterating providers in the original insertion order from
modelsByProvider (or by using the underlying ordered slice used during
registration) when building providerNames; specifically, stop calling
sort.Strings(providerNames) and ensure providerNames is populated from the
registration-ordered source (matching how factories are added via factory.Add())
so duplicate model winner selection remains stable.

423-431: ⚠️ Potential issue | 🟠 Major

Fallback to plain key when qualified lookup misses.

These blocks return early on qualified misses, which makes legitimate unqualified model IDs containing / unreachable.

Suggested fix (apply consistently to all four methods)
 func (r *ModelRegistry) GetProvider(model string) core.Provider {
 	r.mu.RLock()
 	defer r.mu.RUnlock()

-	providerName, modelID := splitModelSelector(model)
+	rawModel := strings.TrimSpace(model)
+	providerName, modelID := splitModelSelector(rawModel)
 	if providerName != "" {
 		if providerModels, ok := r.modelsByProvider[providerName]; ok {
 			if info, exists := providerModels[modelID]; exists {
 				return info.Provider
 			}
 		}
+		if info, ok := r.models[rawModel]; ok {
+			return info.Provider
+		}
 		return nil
 	}

 	if info, ok := r.models[modelID]; ok {
 		return info.Provider
@@
 func (r *ModelRegistry) GetModel(model string) *ModelInfo {
 	r.mu.RLock()
 	defer r.mu.RUnlock()

-	providerName, modelID := splitModelSelector(model)
+	rawModel := strings.TrimSpace(model)
+	providerName, modelID := splitModelSelector(rawModel)
 	if providerName != "" {
 		if providerModels, ok := r.modelsByProvider[providerName]; ok {
-			return providerModels[modelID]
+			if info, exists := providerModels[modelID]; exists {
+				return info
+			}
 		}
+		if info, ok := r.models[rawModel]; ok {
+			return info
+		}
 		return nil
 	}
@@
 func (r *ModelRegistry) Supports(model string) bool {
 	r.mu.RLock()
 	defer r.mu.RUnlock()

-	providerName, modelID := splitModelSelector(model)
+	rawModel := strings.TrimSpace(model)
+	providerName, modelID := splitModelSelector(rawModel)
 	if providerName != "" {
 		providerModels, ok := r.modelsByProvider[providerName]
-		if !ok {
-			return false
+		if ok {
+			if _, ok = providerModels[modelID]; ok {
+				return true
+			}
 		}
-		_, ok = providerModels[modelID]
+		_, ok = r.models[rawModel]
 		return ok
 	}
@@
 func (r *ModelRegistry) GetProviderType(model string) string {
 	r.mu.RLock()
 	defer r.mu.RUnlock()

-	providerName, modelID := splitModelSelector(model)
+	rawModel := strings.TrimSpace(model)
+	providerName, modelID := splitModelSelector(rawModel)
 	if providerName != "" {
 		if providerModels, ok := r.modelsByProvider[providerName]; ok {
 			if info, exists := providerModels[modelID]; exists {
 				return r.providerTypes[info.Provider]
 			}
 		}
+		if info, ok := r.models[rawModel]; ok {
+			return r.providerTypes[info.Provider]
+		}
 		return ""
 	}

Also applies to: 444-450, 463-471, 518-526

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/providers/registry.go` around lines 423 - 431, The qualified-lookup
branch in methods using splitModelSelector (starting with providerName, modelID
:= splitModelSelector(model)) returns nil on a provider-specific miss, which
prevents legitimate unqualified model IDs that contain “/” from being found;
change the logic so that when providerName != "" and the provider-specific map
lookup fails (no providerModels or no info for modelID), you do NOT return early
but fall through to the unqualified lookup path using the original model string
(or model key) instead; apply this same change consistently to the other three
methods that use splitModelSelector and r.modelsByProvider so qualified misses
fall back to the plain-key lookup.
internal/providers/registry_cache_test.go (1)

147-198: ⚠️ Potential issue | 🟡 Minor

Add unqualified winner assertion in same-type restore test.

This test only verifies qualified selectors. A regression in unqualified duplicate-model routing would still pass.
As per coding guidelines **/*_test.go: Add or update tests for any behavior change.

Suggested assertion
 		if provider := registry.GetProvider("openai-west/gpt-4o"); provider != west {
 			t.Fatal("expected openai-west/gpt-4o to map to openai-west provider")
 		}
+		if provider := registry.GetProvider("gpt-4o"); provider != east {
+			t.Fatal("expected unqualified gpt-4o to map to first provider")
+		}
 	})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/providers/registry_cache_test.go` around lines 147 - 198, The test
LoadFromCachePreservesProviderInstancesWithSameType is missing an assertion for
the unqualified selector; after loading (and after the existing qualified checks
for "openai-east/gpt-4o" and "openai-west/gpt-4o"), add an assertion that
registry.GetProvider("gpt-4o") returns the expected winner provider (e.g., the
one registered first: east). Reference the test function name and the
GetProvider calls (registry.GetProvider("gpt-4o"),
registry.GetProvider("openai-east/gpt-4o"),
registry.GetProvider("openai-west/gpt-4o")) and make the unqualified assertion
mirror the style of the other checks (t.Fatal with a clear message).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@internal/cache/cache_test.go`:
- Around line 57-62: The tests index result.Models without fatal guards, causing
panics if the slice is empty; update the assertions in
internal/cache/cache_test.go to use fatal checks (e.g., t.Fatalf or t.Fatal)
when validating lengths before any indexing (replace the non-fatal t.Errorf
length checks around result.Models with fatal checks) and apply the same change
to the other occurrence that indexes result.Models around the block referenced
(lines 169-180) so tests fail fast and do not panic when length assertions fail.

In `@internal/providers/registry_cache_test.go`:
- Around line 147-198: The test
LoadFromCachePreservesProviderInstancesWithSameType is missing an assertion for
the unqualified selector; after loading (and after the existing qualified checks
for "openai-east/gpt-4o" and "openai-west/gpt-4o"), add an assertion that
registry.GetProvider("gpt-4o") returns the expected winner provider (e.g., the
one registered first: east). Reference the test function name and the
GetProvider calls (registry.GetProvider("gpt-4o"),
registry.GetProvider("openai-east/gpt-4o"),
registry.GetProvider("openai-west/gpt-4o")) and make the unqualified assertion
mirror the style of the other checks (t.Fatal with a clear message).

In `@internal/providers/registry.go`:
- Around line 341-346: The code is currently sorting provider rows before
persisting which reorders providerNames and breaks "first-registered wins"
behavior; instead preserve registration order by removing the sort and iterating
providers in the original insertion order from modelsByProvider (or by using the
underlying ordered slice used during registration) when building providerNames;
specifically, stop calling sort.Strings(providerNames) and ensure providerNames
is populated from the registration-ordered source (matching how factories are
added via factory.Add()) so duplicate model winner selection remains stable.
- Around line 423-431: The qualified-lookup branch in methods using
splitModelSelector (starting with providerName, modelID :=
splitModelSelector(model)) returns nil on a provider-specific miss, which
prevents legitimate unqualified model IDs that contain “/” from being found;
change the logic so that when providerName != "" and the provider-specific map
lookup fails (no providerModels or no info for modelID), you do NOT return early
but fall through to the unqualified lookup path using the original model string
(or model key) instead; apply this same change consistently to the other three
methods that use splitModelSelector and r.modelsByProvider so qualified misses
fall back to the plain-key lookup.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 1e0ae647-7b22-4875-bdcf-93b9c896ede1

📥 Commits

Reviewing files that changed from the base of the PR and between c15d59f and edf2753.

📒 Files selected for processing (4)
  • internal/cache/cache.go
  • internal/cache/cache_test.go
  • internal/providers/registry.go
  • internal/providers/registry_cache_test.go

Restructure ModelCache to group models by provider, eliminating
per-model repetition of provider_type, owned_by, and object fields.
Drop the always-"model" Object field entirely.

Fix splitModelSelector fallthrough: when a qualified lookup fails
(e.g. "meta-llama/Meta-Llama-3-70B" where "meta-llama" is not a
provider name), fall through to unqualified lookup using the original
model string instead of returning nil. Applied to GetProvider,
GetModel, Supports, and GetProviderType.

Harden cache tests with fatal guards before slice indexing.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
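The grouped cache layout the commit message describes can be sketched roughly as below. Field and type names here are illustrative, not the project's exact schema: the point is that `provider_type` and `owned_by` live once per provider entry, each model row carries only its own ID and creation timestamp, and the always-`"model"` object field is gone.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// CachedModel holds only per-model data (hypothetical field names).
type CachedModel struct {
	ID      string `json:"id"`
	Created int64  `json:"created"`
}

// CachedProvider carries provider-level fields exactly once, instead of
// repeating them on every model row.
type CachedProvider struct {
	ProviderType string        `json:"provider_type"`
	OwnedBy      string        `json:"owned_by"`
	Models       []CachedModel `json:"models"`
}

// ModelCache groups cached models by provider instance name.
type ModelCache struct {
	Providers map[string]CachedProvider `json:"providers"`
}

func main() {
	cache := ModelCache{Providers: map[string]CachedProvider{
		"openai-east": {
			ProviderType: "openai",
			OwnedBy:      "openai",
			Models:       []CachedModel{{ID: "gpt-4o", Created: 1715367049}},
		},
	}}
	out, _ := json.MarshalIndent(cache, "", "  ")
	fmt.Println(string(out))
}
```

Keying the map by provider instance name (e.g. "openai-east") rather than provider type is what lets two instances of the same provider type keep distinct model sets across a cache restore.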
@SantiagoDePolonia SantiagoDePolonia self-assigned this Mar 4, 2026

Contributor

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (2)
internal/providers/registry_cache_test.go (1)

200-203: ⚠️ Potential issue | 🟡 Minor

Unqualified winner assertion is too permissive for this behavior contract.

At Line 200-203, allowing east || west masks the first-provider-wins requirement for duplicate unqualified IDs. This test should assert east (registered first), so restore-order regressions are caught.

🧪 Tighten assertion
-		// Unqualified lookup should resolve to one of the two providers (map iteration order is nondeterministic)
-		if provider := registry.GetProvider("gpt-4o"); provider != east && provider != west {
-			t.Fatal("expected unqualified gpt-4o to map to either openai-east or openai-west provider")
+		// Unqualified lookup should preserve first-registered provider semantics.
+		if provider := registry.GetProvider("gpt-4o"); provider != east {
+			t.Fatal("expected unqualified gpt-4o to map to first registered provider (openai-east)")
 		}

As per coding guidelines **/*_test.go: Add or update tests for any behavior change.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/providers/registry_cache_test.go` around lines 200 - 203, The test
currently allows registry.GetProvider("gpt-4o") to be either east or west which
masks the intended "first registered wins" behavior; update the assertion to
require the provider to be east (the provider registered first) instead of
allowing either east or west, by replacing the permissive check around
registry.GetProvider("gpt-4o") with a strict equality assertion that the
returned provider equals east (use the same east/west variables already present)
so order regressions are detected.
internal/providers/registry.go (1)

250-276: ⚠️ Potential issue | 🟠 Major

Unqualified cache restore winner is nondeterministic.

At Line 253, for providerName, cachedProvider := range modelCache.Providers iterates a map; then at Line 271 first insert into newModels wins. For duplicate model IDs across providers, the unqualified provider can flip between process runs after cache restore.

🔧 Keep restore order aligned with provider registration order
-	// Build lookup maps from configured providers.
-	r.mu.RLock()
-	nameToProvider := make(map[string]core.Provider, len(r.providerNames))
-	for provider, pName := range r.providerNames {
-		nameToProvider[pName] = provider
-	}
-	r.mu.RUnlock()
+	// Snapshot provider registration order + names.
+	r.mu.RLock()
+	providers := append([]core.Provider(nil), r.providers...)
+	providerNames := make(map[core.Provider]string, len(r.providerNames))
+	for provider, pName := range r.providerNames {
+		providerNames[provider] = pName
+	}
+	r.mu.RUnlock()

 	// Populate model maps from grouped cache structure. Unqualified lookups keep "first provider wins".
 	newModels := make(map[string]*ModelInfo)
 	newModelsByProvider := make(map[string]map[string]*ModelInfo)
-	for providerName, cachedProvider := range modelCache.Providers {
-		provider, ok := nameToProvider[providerName]
+	for _, provider := range providers {
+		providerName := providerNames[provider]
+		cachedProvider, ok := modelCache.Providers[providerName]
 		if !ok {
-			// Provider not configured, skip all its models
+			// No cached entry for this configured provider.
 			continue
 		}
 		providerModels := make(map[string]*ModelInfo, len(cachedProvider.Models))
 		for _, cached := range cachedProvider.Models {
 			info := &ModelInfo{

Based on learnings: register providers explicitly via factory.Add() calls in cmd/gomodel/main.go; order matters, and the first registered provider wins for duplicate model names.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/providers/registry.go` around lines 250 - 276, The current loop over
modelCache.Providers is nondeterministic because map iteration order can change;
when populating newModels (and newModelsByProvider) the first provider seen wins
for duplicate model IDs which can flip on restore. Replace the map iteration
(for providerName, cachedProvider := range modelCache.Providers) with a
deterministic iteration over providers in registration order (use the ordered
provider list used when registering providers via factory.Add or an existing
slice that records registration order; if none exists, create and maintain one
alongside nameToProvider), then for each providerName fetch cachedProvider and
proceed exactly as before so that newModels/cached insertion respects
first-registered-wins semantics.
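The deterministic-restore fix both prompts describe boils down to one rule: derive the winner for a duplicate model ID from a registration-ordered slice, never from Go map iteration, which is deliberately randomized per run. A minimal sketch, with hypothetical names standing in for the registry's internals:

```go
package main

import "fmt"

// resolveWinners assigns each model ID to a provider using
// "first registered wins": it walks providers in registration order
// (a slice, not a map) and keeps the first insert for each model ID.
func resolveWinners(order []string, byProvider map[string][]string) map[string]string {
	winner := make(map[string]string)
	for _, provider := range order { // deterministic: slice, not map iteration
		for _, modelID := range byProvider[provider] {
			if _, taken := winner[modelID]; !taken {
				winner[modelID] = provider // first insert wins
			}
		}
	}
	return winner
}

func main() {
	order := []string{"openai-east", "openai-west"} // registration order
	byProvider := map[string][]string{
		"openai-east": {"gpt-4o"},
		"openai-west": {"gpt-4o", "gpt-4o-mini"},
	}
	winners := resolveWinners(order, byProvider)
	fmt.Println(winners["gpt-4o"])      // openai-east
	fmt.Println(winners["gpt-4o-mini"]) // openai-west
}
```

Ranging over `byProvider` directly instead of `order` would reproduce the bug: the winner for "gpt-4o" could flip between openai-east and openai-west from one process run to the next.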

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: f114edf6-0f0d-4c0f-8e60-8abffe2b213f

📥 Commits

Reviewing files that changed from the base of the PR and between edf2753 and b29c82c.

📒 Files selected for processing (4)
  • internal/cache/cache.go
  • internal/cache/cache_test.go
  • internal/providers/registry.go
  • internal/providers/registry_cache_test.go

@SantiagoDePolonia SantiagoDePolonia merged commit 8298448 into main Mar 4, 2026
13 checks passed
@SantiagoDePolonia SantiagoDePolonia deleted the feature/provider-instance-cache branch March 22, 2026 14:25
