refactor: extract duplicated Responses API adapters into shared helpers #99
Conversation
The three Responses-to-Chat conversion helpers were byte-for-byte identical across Gemini, Groq, and Ollama. Extract them into `providers.ResponsesViaChat`/`StreamResponsesViaChat` so each provider's Responses methods become one-line delegations, and consolidate the duplicated unit tests into a single test file.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
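For context, a minimal sketch of the delegation pattern the PR describes. Everything here other than the name `ResponsesViaChat` is a simplified stand-in, not the repository's actual `core` types or signatures:

```go
package main

import "fmt"

// Simplified stand-ins for the real core types; the actual fields
// and signatures in internal/core may differ.
type ResponsesRequest struct {
	Model string
	Input interface{}
}

type Message struct {
	Role, Content string
}

type ChatRequest struct {
	Model    string
	Messages []Message
}

// ResponsesViaChat converts a Responses request into a Chat request and
// delegates to the provider's existing chat path, so each provider's
// Responses method collapses to a one-line call.
func ResponsesViaChat(req *ResponsesRequest, chat func(*ChatRequest) (string, error)) (string, error) {
	chatReq := &ChatRequest{Model: req.Model}
	if s, ok := req.Input.(string); ok {
		chatReq.Messages = append(chatReq.Messages, Message{Role: "user", Content: s})
	}
	return chat(chatReq)
}

func main() {
	out, err := ResponsesViaChat(
		&ResponsesRequest{Model: "m", Input: "hi"},
		func(cr *ChatRequest) (string, error) {
			return fmt.Sprintf("%s:%s", cr.Model, cr.Messages[0].Content), nil
		},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

A provider would pass its own `ChatCompletion` method as the `chat` callback, which is what makes the per-provider Responses methods one-line delegations.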
📝 Walkthrough

The PR consolidates duplicated Responses-to-Chat conversion logic across provider implementations (Gemini, Groq, Ollama) into a new centralized adapter (`internal/providers/responses_adapter.go`).

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/providers/responses_adapter_test.go`:
- Around line 13-118: Add table tests that exercise the typed slice form of
ResponsesRequest.Input (e.g., a []core.Message-style slice) in addition to the
existing []interface{} case, and add assertions in those cases that
ConvertResponsesRequestToChat preserves forwarded request options: specifically
check that core.ChatRequest.Reasoning and core.ChatRequest.StreamOptions (and
Stream flag) are set according to the input ResponsesRequest fields; update the
test cases in internal/providers/responses_adapter_test.go (the table in the
tests variable) to include one or two new entries referencing
ConvertResponsesRequestToChat, core.ResponsesRequest, and core.ChatRequest and
assert Reasoning/StreamOptions/Stream are forwarded unchanged.
In `@internal/providers/responses_adapter.go`:
- Around line 128-131: The provider error is returned raw from p.ChatCompletion
(and the equivalent streaming call) instead of being normalized; change the
error returns to pass the upstream error through core.ParseProviderError() and
return that parsed error (e.g., err = core.ParseProviderError(err); return nil,
err) wherever p.ChatCompletion(ctx, chatReq) and the corresponding stream
response call return an error so Responses/StreamResponses use the normalized
GatewayError type.
- Around line 43-62: The array-form Input handling in the switch on req.Input
only accepts []interface{} with map[string]interface{}, which can drop valid
messages for other typed slices; update the logic in the switch (and the
analogous block around where ExtractContentFromInput is used later) to handle
any slice/array via reflection: detect slices/arrays, iterate elements and for
each element use a type switch to accept core.Message, string,
map[string]interface{}, or other struct types (falling back to
marshaling/ExtractContentFromInput when necessary), then append normalized
core.Message entries to chatReq.Messages; reference the switch on req.Input,
ExtractContentFromInput, and chatReq.Messages to locate where to implement the
reflective/robust iteration and normalization.
- Around line 23-32: The conversion in internal/providers/responses_adapter.go
that constructs chatReq (the ChatRequest mapping from core.ResponsesRequest) is
missing mappings for StreamOptions and Reasoning, causing those fields to be
dropped; update the constructor that creates &core.ChatRequest (and any
subsequent field assignments around chatReq.MaxTokens) to copy req.StreamOptions
(or req.Stream.Options) into chatReq.StreamOptions and copy req.Reasoning into
chatReq.Reasoning so the reasoning controls and streaming options are preserved
during the conversion; ensure you keep the existing MaxOutputTokens ->
chatReq.MaxTokens logic and add these two field assignments next to it.
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (8)
- internal/providers/gemini/gemini.go
- internal/providers/gemini/gemini_test.go
- internal/providers/groq/groq.go
- internal/providers/groq/groq_test.go
- internal/providers/ollama/ollama.go
- internal/providers/ollama/ollama_test.go
- internal/providers/responses_adapter.go
- internal/providers/responses_adapter_test.go
💤 Files with no reviewable changes (3)
- internal/providers/groq/groq_test.go
- internal/providers/gemini/gemini_test.go
- internal/providers/ollama/ollama_test.go
```go
tests := []struct {
	name    string
	input   *core.ResponsesRequest
	checkFn func(*testing.T, *core.ChatRequest)
}{
	{
		name: "string input",
		input: &core.ResponsesRequest{
			Model: "test-model",
			Input: "Hello",
		},
		checkFn: func(t *testing.T, req *core.ChatRequest) {
			if req.Model != "test-model" {
				t.Errorf("Model = %q, want %q", req.Model, "test-model")
			}
			if len(req.Messages) != 1 {
				t.Errorf("len(Messages) = %d, want 1", len(req.Messages))
			}
			if req.Messages[0].Role != "user" {
				t.Errorf("Messages[0].Role = %q, want %q", req.Messages[0].Role, "user")
			}
			if req.Messages[0].Content != "Hello" {
				t.Errorf("Messages[0].Content = %q, want %q", req.Messages[0].Content, "Hello")
			}
		},
	},
	{
		name: "with instructions",
		input: &core.ResponsesRequest{
			Model:        "test-model",
			Input:        "Hello",
			Instructions: "Be helpful",
		},
		checkFn: func(t *testing.T, req *core.ChatRequest) {
			if len(req.Messages) < 2 {
				t.Fatalf("len(Messages) = %d, want at least 2", len(req.Messages))
			}
			if req.Messages[0].Role != "system" {
				t.Errorf("Messages[0].Role = %q, want %q", req.Messages[0].Role, "system")
			}
			if req.Messages[0].Content != "Be helpful" {
				t.Errorf("Messages[0].Content = %q, want %q", req.Messages[0].Content, "Be helpful")
			}
		},
	},
	{
		name: "with parameters",
		input: &core.ResponsesRequest{
			Model:           "test-model",
			Input:           "Hello",
			Temperature:     &temp,
			MaxOutputTokens: &maxTokens,
		},
		checkFn: func(t *testing.T, req *core.ChatRequest) {
			if req.Temperature == nil || *req.Temperature != 0.7 {
				t.Errorf("Temperature = %v, want 0.7", req.Temperature)
			}
			if req.MaxTokens == nil || *req.MaxTokens != 1024 {
				t.Errorf("MaxTokens = %v, want 1024", req.MaxTokens)
			}
		},
	},
	{
		name: "with streaming enabled",
		input: &core.ResponsesRequest{
			Model:  "test-model",
			Input:  "Hello",
			Stream: true,
		},
		checkFn: func(t *testing.T, req *core.ChatRequest) {
			if !req.Stream {
				t.Error("Stream should be true")
			}
		},
	},
	{
		name: "array input with messages",
		input: &core.ResponsesRequest{
			Model: "test-model",
			Input: []interface{}{
				map[string]interface{}{
					"role":    "user",
					"content": "Hello",
				},
				map[string]interface{}{
					"role":    "assistant",
					"content": "Hi there!",
				},
			},
		},
		checkFn: func(t *testing.T, req *core.ChatRequest) {
			if len(req.Messages) != 2 {
				t.Fatalf("len(Messages) = %d, want 2", len(req.Messages))
			}
			if req.Messages[0].Role != "user" {
				t.Errorf("Messages[0].Role = %q, want %q", req.Messages[0].Role, "user")
			}
			if req.Messages[0].Content != "Hello" {
				t.Errorf("Messages[0].Content = %q, want %q", req.Messages[0].Content, "Hello")
			}
			if req.Messages[1].Role != "assistant" {
				t.Errorf("Messages[1].Role = %q, want %q", req.Messages[1].Role, "assistant")
			}
		},
	},
}
```
🧹 Nitpick | 🔵 Trivial
Add coverage for typed Input arrays and forwarded request options.
The table tests only cover []interface{} array input. Please add cases for the documented typed array form and assertions that Reasoning/StreamOptions are preserved in ConvertResponsesRequestToChat to prevent silent adapter regressions.
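A sketch of what the requested forwarding check could look like. All types below are local stand-ins for `core.ResponsesRequest`/`core.ChatRequest`, and `convert` is a hypothetical stand-in for `ConvertResponsesRequestToChat`; the real test would add entries to the existing table instead:

```go
package main

import "fmt"

// Local stand-ins; the real core types may have different fields.
type Reasoning struct{ Effort string }
type StreamOptions struct{ IncludeUsage bool }
type Message struct{ Role, Content string }

type ResponsesRequest struct {
	Model         string
	Input         interface{}
	Stream        bool
	StreamOptions *StreamOptions
	Reasoning     *Reasoning
}

type ChatRequest struct {
	Model         string
	Messages      []Message
	Stream        bool
	StreamOptions *StreamOptions
	Reasoning     *Reasoning
}

// convert forwards the options the review says must be preserved and
// accepts the typed-slice Input form alongside the string form.
func convert(req *ResponsesRequest) *ChatRequest {
	cr := &ChatRequest{
		Model:         req.Model,
		Stream:        req.Stream,
		StreamOptions: req.StreamOptions,
		Reasoning:     req.Reasoning,
	}
	switch in := req.Input.(type) {
	case string:
		cr.Messages = append(cr.Messages, Message{Role: "user", Content: in})
	case []Message: // typed slice form the review asks tests to cover
		cr.Messages = append(cr.Messages, in...)
	}
	return cr
}

func main() {
	req := &ResponsesRequest{
		Model:         "test-model",
		Input:         []Message{{Role: "user", Content: "Hello"}},
		Stream:        true,
		StreamOptions: &StreamOptions{IncludeUsage: true},
		Reasoning:     &Reasoning{Effort: "low"},
	}
	cr := convert(req)
	if len(cr.Messages) != 1 || !cr.Stream || cr.StreamOptions == nil || cr.Reasoning == nil {
		panic("options not forwarded")
	}
	fmt.Println("forwarded")
}
```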
Also applies to: 225-270
```go
chatReq := &core.ChatRequest{
	Model:       req.Model,
	Messages:    make([]core.Message, 0),
	Temperature: req.Temperature,
	Stream:      req.Stream,
}

if req.MaxOutputTokens != nil {
	chatReq.MaxTokens = req.MaxOutputTokens
}
```
StreamOptions and Reasoning are dropped during request conversion.
core.ResponsesRequest and core.ChatRequest both carry these fields, but they are not mapped here. This changes behavior for reasoning controls and streaming options after the refactor.
Proposed fix
```diff
 func ConvertResponsesRequestToChat(req *core.ResponsesRequest) *core.ChatRequest {
 	chatReq := &core.ChatRequest{
-		Model:       req.Model,
-		Messages:    make([]core.Message, 0),
-		Temperature: req.Temperature,
-		Stream:      req.Stream,
+		Model:         req.Model,
+		Messages:      make([]core.Message, 0),
+		Temperature:   req.Temperature,
+		MaxTokens:     req.MaxOutputTokens,
+		Stream:        req.Stream,
+		StreamOptions: req.StreamOptions,
+		Reasoning:     req.Reasoning,
 	}
-
-	if req.MaxOutputTokens != nil {
-		chatReq.MaxTokens = req.MaxOutputTokens
-	}
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
chatReq := &core.ChatRequest{
	Model:         req.Model,
	Messages:      make([]core.Message, 0),
	Temperature:   req.Temperature,
	MaxTokens:     req.MaxOutputTokens,
	Stream:        req.Stream,
	StreamOptions: req.StreamOptions,
	Reasoning:     req.Reasoning,
}
```
```go
switch input := req.Input.(type) {
case string:
	chatReq.Messages = append(chatReq.Messages, core.Message{
		Role:    "user",
		Content: input,
	})
case []interface{}:
	for _, item := range input {
		if msgMap, ok := item.(map[string]interface{}); ok {
			role, _ := msgMap["role"].(string)
			content := ExtractContentFromInput(msgMap["content"])
			if role != "" && content != "" {
				chatReq.Messages = append(chatReq.Messages, core.Message{
					Role:    role,
					Content: content,
				})
			}
		}
	}
}
```
Array-form Input handling is too narrow and can drop valid messages.
The conversion only handles []interface{} + map[string]interface{}. The request contract documents array-form input, so typed array payloads can end up as empty Messages, causing downstream provider errors.
Also applies to: 68-83
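One way to broaden the handling, sketched with reflection as the comment suggests. `Message` is a local stand-in for `core.Message`, and the real adapter would still route unrecognized content through `ExtractContentFromInput`:

```go
package main

import (
	"fmt"
	"reflect"
)

// Message is a local stand-in for core.Message.
type Message struct{ Role, Content string }

// appendFromInput accepts any slice or array input and normalizes its
// elements into messages: typed Message values, bare strings, and
// generic map form are all handled instead of only []interface{}.
func appendFromInput(msgs []Message, input interface{}) []Message {
	v := reflect.ValueOf(input)
	if v.Kind() != reflect.Slice && v.Kind() != reflect.Array {
		return msgs
	}
	for i := 0; i < v.Len(); i++ {
		switch el := v.Index(i).Interface().(type) {
		case Message:
			msgs = append(msgs, el)
		case string:
			msgs = append(msgs, Message{Role: "user", Content: el})
		case map[string]interface{}:
			role, _ := el["role"].(string)
			content, _ := el["content"].(string)
			if role != "" && content != "" {
				msgs = append(msgs, Message{Role: role, Content: content})
			}
		}
	}
	return msgs
}

func main() {
	// A typed slice is no longer silently dropped.
	msgs := appendFromInput(nil, []Message{{Role: "assistant", Content: "Hi"}})
	// The existing generic map form still works.
	msgs = appendFromInput(msgs, []interface{}{
		map[string]interface{}{"role": "user", "content": "Hello"},
	})
	fmt.Println(len(msgs))
}
```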