fix(openai): add tool_call_id request param #73
Conversation
Walkthrough: A new optional `tool_call_id` field is added to `ChatCompletionMessage`; provider conversions, tracing, and tests are updated to initialize and exercise it.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Important
Looks good to me! 👍
Reviewed everything up to 257119e in 1 minute and 51 seconds.
- Reviewed 721 lines of code in 10 files
- Skipped 0 files when reviewing
- Skipped posting 4 draft comments; view those below

Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. src/models/content.rs:29
   - Draft comment: For consistency with other optional fields, add `#[serde(skip_serializing_if = "Option::is_none")]` to the new `tool_call_id` field.
   - Reason this comment was not posted: Comment looked like it was already resolved.
2. src/providers/openai/test.rs:287
   - Draft comment: Good use of `tool_call_id` in the second API call. Consider adding an extra test case to explicitly simulate a scenario where `tool_call_id` is provided (non-`None`) to further validate its propagation.
   - Reason this comment was not posted: Comment looked like it was already resolved.
3. tests/cassettes/openai/chat_completions_with_tool_calls.json:1
   - Draft comment: Cassette file looks correctly formatted for testing tool_calls including `tool_call_id`. Ensure that any future changes to `tool_call_id` behavior are reflected in updated cassette files.
   - Reason this comment was not posted: Comment did not seem useful. Confidence that it is useful: 0% (<= threshold of 50%). The comment is purely informative and suggests ensuring future changes are reflected, which violates the rules. It doesn't provide a specific code suggestion or ask for a specific test to be written.
4. General:0
   - Draft comment: Across different provider modules (Anthropic, Titan, VertexAI), `tool_call_id` is consistently set to `None`. Consider reviewing whether `tool_call_id` should sometimes carry a non-null value from incoming requests to better support tool execution tracking.
   - Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
Workflow ID: wflow_uBkoViLX6OACEwNK
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Actionable comments posted: 0
🧹 Nitpick comments (2)
src/pipelines/otel.rs (1)
137-155: Tracing now initializes `tool_call_id` explicitly on streamed choices

Including `tool_call_id: None` in the constructed `ChatCompletionMessage` keeps the Otel tracing path consistent with the updated model without changing current behavior; once streaming tool messages start carrying a `tool_call_id`, this is the right place to propagate it.

src/providers/openai/test.rs (1)
15-38: Cassette append behavior vs. `len() == 2` assertion is brittle

`save_to_cassette` always appends to any existing file, but the validation branch asserts `interactions.len() == 2`. If you ever re-run the test in RECORD_MODE without deleting the cassette first, you'll accumulate more than two interactions and the validation will start failing even though the latest pair is fine. Consider either:
- overwriting the cassette in RECORD_MODE (e.g., don't load/append existing interactions), or
- relaxing the validation to `>= 2` and only asserting against the first two (or last two) entries.

This would make re-recording less error-prone.
Also applies to: 115-120, 218-255, 316-338
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (10)
- src/models/content.rs (1 hunks)
- src/pipelines/otel.rs (1 hunks)
- src/providers/anthropic/models.rs (1 hunks)
- src/providers/bedrock/models.rs (2 hunks)
- src/providers/bedrock/test.rs (10 hunks)
- src/providers/openai/mod.rs (1 hunks)
- src/providers/openai/test.rs (1 hunks)
- src/providers/vertexai/models.rs (1 hunks)
- src/providers/vertexai/tests.rs (14 hunks)
- tests/cassettes/openai/chat_completions_with_tool_calls.json (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/providers/openai/test.rs (1)
src/providers/vertexai/tests.rs (5)
serde_json (62-62), serde_json (113-113), serde_json (586-586), save_to_cassette (105-132), create_test_provider (134-161)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build
- GitHub Check: Analyze (rust)
🔇 Additional comments (10)
src/providers/openai/mod.rs (1)
3-4: LGTM! Standard Rust pattern for conditionally including a test module. The `#[cfg(test)]` attribute correctly ensures the test module is only compiled during test runs.

src/providers/vertexai/models.rs (1)
536-556: LGTM! The `tool_call_id: None` initialization is correctly added to the `ChatCompletionMessage` construction. This field is appropriately `None` for assistant responses, since `tool_call_id` is only relevant for tool result messages sent back to the model.

src/providers/bedrock/models.rs (2)
100-116: LGTM! The `tool_call_id: None` initialization is correctly added for Titan response conversion. This is appropriate since `tool_call_id` applies to tool result messages, not assistant responses.
307-314: LGTM! Consistent with the Titan conversion above, `tool_call_id: None` is correctly initialized for Ai21 responses.

src/providers/anthropic/models.rs (1)
195-206: LGTM! The `tool_call_id: None` initialization is correctly added to the Anthropic content block conversion, maintaining consistency with other provider implementations.

src/models/content.rs (1)
29-30: LGTM! The new `tool_call_id` field is correctly defined with `Option<String>` and the appropriate `skip_serializing_if` serde attribute. This matches the OpenAI API specification, where `tool_call_id` is required for tool result messages to link responses back to the original tool calls.

tests/cassettes/openai/chat_completions_with_tool_calls.json (1)
1-75: Cassette structure matches expected ChatCompletion/tool-calls shape

The two recorded interactions (assistant with `tool_calls`, followed by assistant with `content`) align with the new OpenAI test expectations and the `ChatCompletion` schema; nothing problematic here.

src/providers/vertexai/tests.rs (1)
257-287: VertexAI tests correctly updated for new `tool_call_id` field

All `ChatCompletionMessage` literals now set `tool_call_id: None`, which is appropriate for these non-tool messages and keeps the tests aligned with the updated struct without changing behavior.

Also applies to: 486-497, 648-678, 778-787, 814-823, 871-880, 955-964, 1047-1056, 1150-1167, 1201-1219, 1431-1439, 1491-1499, 1535-1546, 1670-1681
src/providers/openai/test.rs (1)
15-91: Tool-call test wiring and `tool_call_id` usage look correct

The helpers and the two-call flow (first call producing `tool_calls`, second call posting a `role: "tool"` message with `tool_call_id` equal to `tool_calls[0].id`) match the OpenAI Chat Completions contract and exercise the new `tool_call_id` field properly. The split between RECORD vs. validation mode and the weather tool schema also look sound.

Also applies to: 185-309
src/providers/bedrock/test.rs (1)
98-109: Bedrock tests correctly aligned with newtool_call_idfieldThe added
tool_call_id: Noneinitializations on BedrockChatCompletionMessagepayloads keep the tests in sync with the shared message model and are appropriate for these non-toolmessages; no behavioral changes introduced.Also applies to: 214-225, 341-352, 417-428, 469-481, 521-532, 573-581, 624-632, 755-763, 821-830
Fixes TLP-1206
https://platform.openai.com/docs/guides/function-calling#handling-function-calls
Important
Add `tool_call_id` to `ChatCompletionMessage` and initialize it in various models, with new tests for OpenAI provider.
- Added optional `tool_call_id` field to `ChatCompletionMessage` in `content.rs`.
- Initialized `tool_call_id` to `None` in `OtelTracer::log_chunk()` in `otel.rs`.
- Initialized `tool_call_id` to `None` in `From<Vec<ContentBlock>> for ChatCompletionMessage` in `anthropic/models.rs`.
- Initialized `tool_call_id` to `None` in `From<TitanChatCompletionResponse> for ChatCompletion` in `bedrock/models.rs`.
- Added `test.rs` for the OpenAI provider to test chat completions with tool calls, ensuring `tool_call_id` is correctly handled.

This description was created by
for 257119e. You can customize this summary. It will automatically update as commits are pushed.
Summary by CodeRabbit
New Features
- Chat completion messages now support an optional `tool_call_id` for linking tool results to their originating tool calls.

Tests
- Added OpenAI provider tests covering chat completions with tool calls; existing provider tests updated for the new field.