
refactor: extract duplicated Responses API adapters into shared helpers #99

Merged: SantiagoDePolonia merged 1 commit into main from refactor/remove-duplicates-providers on Feb 25, 2026

Conversation

Contributor

@SantiagoDePolonia SantiagoDePolonia commented Feb 25, 2026

The three Responses-to-Chat conversion helpers were byte-for-byte identical across Gemini, Groq, and Ollama. Extract them into providers.ResponsesViaChat/StreamResponsesViaChat so each provider's Responses methods become one-line delegations, and consolidate the duplicated unit tests into a single test file.
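The delegation pattern described above can be sketched as follows. The types and helper signature here are simplified stand-ins that mirror the names in this PR (ResponsesViaChat, a ChatProvider interface, a provider's Responses method), not the actual gateway code:

```go
package main

import "fmt"

// Illustrative stand-ins for the gateway's core types (not the real definitions).
type ResponsesRequest struct {
	Model string
	Input any
}
type ResponsesResponse struct {
	Model  string
	Output string
}
type Message struct {
	Role, Content string
}
type ChatRequest struct {
	Model    string
	Messages []Message
}
type ChatResponse struct {
	Model   string
	Content string
}

// ChatProvider is the minimal surface the shared adapter needs from a provider.
type ChatProvider interface {
	ChatCompletion(req *ChatRequest) (*ChatResponse, error)
}

// ResponsesViaChat bridges a Responses request through a provider's Chat API:
// convert the request, call the provider, convert the response back.
func ResponsesViaChat(p ChatProvider, req *ResponsesRequest) (*ResponsesResponse, error) {
	chatReq := &ChatRequest{Model: req.Model}
	if s, ok := req.Input.(string); ok {
		chatReq.Messages = append(chatReq.Messages, Message{Role: "user", Content: s})
	}
	resp, err := p.ChatCompletion(chatReq)
	if err != nil {
		return nil, err
	}
	return &ResponsesResponse{Model: resp.Model, Output: resp.Content}, nil
}

// A provider's Responses method then collapses to a one-line delegation.
type GeminiLike struct{}

func (g *GeminiLike) ChatCompletion(req *ChatRequest) (*ChatResponse, error) {
	// Toy provider: echoes the first message back (caller guarantees one exists).
	return &ChatResponse{Model: req.Model, Content: "echo: " + req.Messages[0].Content}, nil
}

func (g *GeminiLike) Responses(req *ResponsesRequest) (*ResponsesResponse, error) {
	return ResponsesViaChat(g, req) // the one-line delegation
}

func main() {
	resp, err := (&GeminiLike{}).Responses(&ResponsesRequest{Model: "m", Input: "hi"})
	fmt.Println(resp.Output, err)
}
```

With this shape, each of the three providers keeps its own ChatCompletion and drops its private copy of the conversion logic.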

Summary by CodeRabbit

  • Refactor
    • Consolidated response handling logic across multiple providers to use a centralized adapter pattern, reducing code duplication and improving maintainability.
    • Streamlined internal API conversion processes while maintaining full backward compatibility with existing functionality.
    • Removed redundant conversion utilities previously duplicated across provider implementations.


Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Contributor

coderabbitai bot commented Feb 25, 2026

📝 Walkthrough

The PR consolidates duplicated Responses-to-Chat conversion logic across provider implementations (Gemini, Groq, Ollama) into a new centralized adapter (responses_adapter.go). Providers now delegate through shared conversion utilities instead of implementing their own transformations, reducing code duplication and improving maintainability.

Changes

  • Centralized Adapter (internal/providers/responses_adapter.go, internal/providers/responses_adapter_test.go)
    New adapter layer providing a ChatProvider interface, conversion utilities (ConvertResponsesRequestToChat, ConvertChatResponseToResponses, ExtractContentFromInput), and high-level bridges (ResponsesViaChat, StreamResponsesViaChat) for mapping between the Responses and Chat APIs, with comprehensive test coverage.
  • Gemini Provider (internal/providers/gemini/gemini.go, internal/providers/gemini/gemini_test.go)
    Replaced internal conversion logic and helpers with delegation to ResponsesViaChat and StreamResponsesViaChat. Removed convertResponsesRequestToChat, extractContentFromInput, and convertChatResponseToResponses, along with the uuid import. Deleted the associated test coverage.
  • Groq Provider (internal/providers/groq/groq.go, internal/providers/groq/groq_test.go)
    Replaced internal conversion logic with delegation to the shared provider helpers. Removed a large set of format-transformation functions and unused imports (strings, uuid). Deleted the comprehensive test suite covering conversion scenarios.
  • Ollama Provider (internal/providers/ollama/ollama.go, internal/providers/ollama/ollama_test.go)
    Replaced internal conversion helpers with delegation to ResponsesViaChat and StreamResponsesViaChat. Removed local chat-conversion logic and unused imports. Deleted test coverage for the conversion functions and the empty-choices scenario.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

  • ENTERPILOT/GOModel#26: Refactors provider streaming to use shared OpenAI-style Responses stream converter and centralizes Responses-via-Chat conversion, modifying the same streaming and conversion code paths.

Poem

🐰 Hopping through providers with glee,
Converting requests from Responses to Chat so free,
No more duplication, one adapter to share,
Gemini, Groq, Ollama—all dancing with care! 🎭✨

🚥 Pre-merge checks: ✅ 3 passed
  • Description Check: Passed (check skipped; CodeRabbit's high-level summary is enabled).
  • Title Check: Passed. The title accurately summarizes the main change: extraction of duplicated Responses API adapters into shared helpers across multiple providers (Gemini, Groq, Ollama).
  • Docstring Coverage: Passed. No functions found in the changed files to evaluate docstring coverage; skipping the check.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/providers/responses_adapter_test.go`:
- Around line 13-118: Add table tests that exercise the typed slice form of
ResponsesRequest.Input (e.g., a []core.Message-style slice) in addition to the
existing []interface{} case, and add assertions in those cases that
ConvertResponsesRequestToChat preserves forwarded request options: specifically
check that core.ChatRequest.Reasoning and core.ChatRequest.StreamOptions (and
Stream flag) are set according to the input ResponsesRequest fields; update the
test cases in internal/providers/responses_adapter_test.go (the table in the
tests variable) to include one or two new entries referencing
ConvertResponsesRequestToChat, core.ResponsesRequest, and core.ChatRequest and
assert Reasoning/StreamOptions/Stream are forwarded unchanged.

In `@internal/providers/responses_adapter.go`:
- Around line 128-131: The provider error is returned raw from p.ChatCompletion
(and the equivalent streaming call) instead of being normalized; change the
error returns to pass the upstream error through core.ParseProviderError() and
return that parsed error (e.g., err = core.ParseProviderError(err); return nil,
err) wherever p.ChatCompletion(ctx, chatReq) and the corresponding stream
response call return an error so Responses/StreamResponses use the normalized
GatewayError type.
- Around line 43-62: The array-form Input handling in the switch on req.Input
only accepts []interface{} with map[string]interface{}, which can drop valid
messages for other typed slices; update the logic in the switch (and the
analogous block around where ExtractContentFromInput is used later) to handle
any slice/array via reflection: detect slices/arrays, iterate elements and for
each element use a type switch to accept core.Message, string,
map[string]interface{}, or other struct types (falling back to
marshaling/ExtractContentFromInput when necessary), then append normalized
core.Message entries to chatReq.Messages; reference the switch on req.Input,
ExtractContentFromInput, and chatReq.Messages to locate where to implement the
reflective/robust iteration and normalization.
- Around line 23-32: The conversion in internal/providers/responses_adapter.go
that constructs chatReq (the ChatRequest mapping from core.ResponsesRequest) is
missing mappings for StreamOptions and Reasoning, causing those fields to be
dropped; update the constructor that creates &core.ChatRequest (and any
subsequent field assignments around chatReq.MaxTokens) to copy req.StreamOptions
(or req.Stream.Options) into chatReq.StreamOptions and copy req.Reasoning into
chatReq.Reasoning so the reasoning controls and streaming options are preserved
during the conversion; ensure you keep the existing MaxOutputTokens ->
chatReq.MaxTokens logic and add these two field assignments next to it.

ℹ️ Review info

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b26abbe and badce93.

📒 Files selected for processing (8)
  • internal/providers/gemini/gemini.go
  • internal/providers/gemini/gemini_test.go
  • internal/providers/groq/groq.go
  • internal/providers/groq/groq_test.go
  • internal/providers/ollama/ollama.go
  • internal/providers/ollama/ollama_test.go
  • internal/providers/responses_adapter.go
  • internal/providers/responses_adapter_test.go
💤 Files with no reviewable changes (3)
  • internal/providers/groq/groq_test.go
  • internal/providers/gemini/gemini_test.go
  • internal/providers/ollama/ollama_test.go

Comment on lines +13 to +118
	tests := []struct {
		name    string
		input   *core.ResponsesRequest
		checkFn func(*testing.T, *core.ChatRequest)
	}{
		{
			name: "string input",
			input: &core.ResponsesRequest{
				Model: "test-model",
				Input: "Hello",
			},
			checkFn: func(t *testing.T, req *core.ChatRequest) {
				if req.Model != "test-model" {
					t.Errorf("Model = %q, want %q", req.Model, "test-model")
				}
				if len(req.Messages) != 1 {
					t.Errorf("len(Messages) = %d, want 1", len(req.Messages))
				}
				if req.Messages[0].Role != "user" {
					t.Errorf("Messages[0].Role = %q, want %q", req.Messages[0].Role, "user")
				}
				if req.Messages[0].Content != "Hello" {
					t.Errorf("Messages[0].Content = %q, want %q", req.Messages[0].Content, "Hello")
				}
			},
		},
		{
			name: "with instructions",
			input: &core.ResponsesRequest{
				Model:        "test-model",
				Input:        "Hello",
				Instructions: "Be helpful",
			},
			checkFn: func(t *testing.T, req *core.ChatRequest) {
				if len(req.Messages) < 2 {
					t.Fatalf("len(Messages) = %d, want at least 2", len(req.Messages))
				}
				if req.Messages[0].Role != "system" {
					t.Errorf("Messages[0].Role = %q, want %q", req.Messages[0].Role, "system")
				}
				if req.Messages[0].Content != "Be helpful" {
					t.Errorf("Messages[0].Content = %q, want %q", req.Messages[0].Content, "Be helpful")
				}
			},
		},
		{
			name: "with parameters",
			input: &core.ResponsesRequest{
				Model:           "test-model",
				Input:           "Hello",
				Temperature:     &temp,
				MaxOutputTokens: &maxTokens,
			},
			checkFn: func(t *testing.T, req *core.ChatRequest) {
				if req.Temperature == nil || *req.Temperature != 0.7 {
					t.Errorf("Temperature = %v, want 0.7", req.Temperature)
				}
				if req.MaxTokens == nil || *req.MaxTokens != 1024 {
					t.Errorf("MaxTokens = %v, want 1024", req.MaxTokens)
				}
			},
		},
		{
			name: "with streaming enabled",
			input: &core.ResponsesRequest{
				Model:  "test-model",
				Input:  "Hello",
				Stream: true,
			},
			checkFn: func(t *testing.T, req *core.ChatRequest) {
				if !req.Stream {
					t.Error("Stream should be true")
				}
			},
		},
		{
			name: "array input with messages",
			input: &core.ResponsesRequest{
				Model: "test-model",
				Input: []interface{}{
					map[string]interface{}{
						"role":    "user",
						"content": "Hello",
					},
					map[string]interface{}{
						"role":    "assistant",
						"content": "Hi there!",
					},
				},
			},
			checkFn: func(t *testing.T, req *core.ChatRequest) {
				if len(req.Messages) != 2 {
					t.Fatalf("len(Messages) = %d, want 2", len(req.Messages))
				}
				if req.Messages[0].Role != "user" {
					t.Errorf("Messages[0].Role = %q, want %q", req.Messages[0].Role, "user")
				}
				if req.Messages[0].Content != "Hello" {
					t.Errorf("Messages[0].Content = %q, want %q", req.Messages[0].Content, "Hello")
				}
				if req.Messages[1].Role != "assistant" {
					t.Errorf("Messages[1].Role = %q, want %q", req.Messages[1].Role, "assistant")
				}
			},
		},
	}

🧹 Nitpick | 🔵 Trivial

Add coverage for typed Input arrays and forwarded request options.

The table tests only cover []interface{} array input. Please add cases for the documented typed array form and assertions that Reasoning/StreamOptions are preserved in ConvertResponsesRequestToChat to prevent silent adapter regressions.

Also applies to: 225-270


Comment on lines +23 to +32
	chatReq := &core.ChatRequest{
		Model:       req.Model,
		Messages:    make([]core.Message, 0),
		Temperature: req.Temperature,
		Stream:      req.Stream,
	}

	if req.MaxOutputTokens != nil {
		chatReq.MaxTokens = req.MaxOutputTokens
	}

⚠️ Potential issue | 🟠 Major

StreamOptions and Reasoning are dropped during request conversion.

core.ResponsesRequest and core.ChatRequest both carry these fields, but they are not mapped here. This changes behavior for reasoning controls and streaming options after the refactor.

Proposed fix
 func ConvertResponsesRequestToChat(req *core.ResponsesRequest) *core.ChatRequest {
 	chatReq := &core.ChatRequest{
-		Model:       req.Model,
-		Messages:    make([]core.Message, 0),
-		Temperature: req.Temperature,
-		Stream:      req.Stream,
+		Model:         req.Model,
+		Messages:      make([]core.Message, 0),
+		Temperature:   req.Temperature,
+		MaxTokens:     req.MaxOutputTokens,
+		Stream:        req.Stream,
+		StreamOptions: req.StreamOptions,
+		Reasoning:     req.Reasoning,
 	}
-
-	if req.MaxOutputTokens != nil {
-		chatReq.MaxTokens = req.MaxOutputTokens
-	}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:

	chatReq := &core.ChatRequest{
		Model:       req.Model,
		Messages:    make([]core.Message, 0),
		Temperature: req.Temperature,
		Stream:      req.Stream,
	}

	if req.MaxOutputTokens != nil {
		chatReq.MaxTokens = req.MaxOutputTokens
	}

After:

	chatReq := &core.ChatRequest{
		Model:         req.Model,
		Messages:      make([]core.Message, 0),
		Temperature:   req.Temperature,
		MaxTokens:     req.MaxOutputTokens,
		Stream:        req.Stream,
		StreamOptions: req.StreamOptions,
		Reasoning:     req.Reasoning,
	}

Comment on lines +43 to +62
	switch input := req.Input.(type) {
	case string:
		chatReq.Messages = append(chatReq.Messages, core.Message{
			Role:    "user",
			Content: input,
		})
	case []interface{}:
		for _, item := range input {
			if msgMap, ok := item.(map[string]interface{}); ok {
				role, _ := msgMap["role"].(string)
				content := ExtractContentFromInput(msgMap["content"])
				if role != "" && content != "" {
					chatReq.Messages = append(chatReq.Messages, core.Message{
						Role:    role,
						Content: content,
					})
				}
			}
		}
	}

⚠️ Potential issue | 🟠 Major

Array-form Input handling is too narrow and can drop valid messages.

The conversion only handles []interface{} + map[string]interface{}. The request contract documents array-form input, so typed array payloads can end up as empty Messages, causing downstream provider errors.

Also applies to: 68-83

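The reflective normalization this comment asks for might look roughly like the sketch below. The Message type and the function name are illustrative stand-ins, not the repo's code; the point is the shape: handle typed slices directly, then fall back to reflection for any other slice/array kind:

```go
package main

import (
	"fmt"
	"reflect"
)

// Message is an illustrative stand-in for core.Message.
type Message struct {
	Role, Content string
}

// normalizeInput accepts string, typed []Message, or any other slice/array
// form of input and coerces each element to a Message, instead of only
// handling []interface{} of map[string]interface{}.
func normalizeInput(input any) []Message {
	var msgs []Message
	switch in := input.(type) {
	case string:
		return []Message{{Role: "user", Content: in}}
	case []Message: // typed slice handled directly, no reflection needed
		return append(msgs, in...)
	}
	v := reflect.ValueOf(input)
	if v.Kind() != reflect.Slice && v.Kind() != reflect.Array {
		return msgs
	}
	for i := 0; i < v.Len(); i++ {
		switch el := v.Index(i).Interface().(type) {
		case Message:
			msgs = append(msgs, el)
		case string:
			msgs = append(msgs, Message{Role: "user", Content: el})
		case map[string]interface{}:
			role, _ := el["role"].(string)
			content, _ := el["content"].(string)
			if role != "" && content != "" {
				msgs = append(msgs, Message{Role: role, Content: content})
			}
		}
	}
	return msgs
}

func main() {
	fmt.Println(normalizeInput([]Message{{Role: "user", Content: "hi"}}))
	fmt.Println(normalizeInput([]interface{}{map[string]interface{}{"role": "assistant", "content": "yo"}}))
}
```

In the real adapter the map branch would call ExtractContentFromInput rather than a plain string assertion, so structured content parts are also covered.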

SantiagoDePolonia merged commit 79cd841 into main on Feb 25, 2026
12 checks passed
SantiagoDePolonia deleted the refactor/remove-duplicates-providers branch on March 22, 2026, 14:24