fix: add missing _parse_response to AzureOpenAIStructuredLLM #4434
Conversation
AzureOpenAIStructuredLLM calls self._parse_response() but the method was not defined in the class or its parent LLMBase, causing an AttributeError at runtime. Added the method matching the pattern used by the non-structured AzureOpenAILLM sibling class. Also removed a duplicate `if tools:` block. Fixes mem0ai#4164 Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
xkonjin left a comment
Code Review: Add missing _parse_response to AzureOpenAIStructuredLLM
Good catch on a real bug. The class was missing the _parse_response method that other LLM providers implement, meaning tool calls from Azure OpenAI would have been returned raw rather than in the expected format.
What looks good
- The _parse_response implementation matches the pattern from other LLM providers (OpenAILLM etc.).
- Removes the duplicate tools/tool_choice parameter assignment (lines 86-88 were setting the same params twice).
- Tests cover all three scenarios: no tools, tools with tool calls, tools without tool calls.
- Uses extract_json for parsing tool call arguments, which handles potential JSON formatting issues.
Issues
1. No error handling in _parse_response (medium risk)
If `response.choices` is empty, or `choices[0].message` is `None`, this will raise an `IndexError`/`AttributeError`. Other providers may handle this differently, but it is worth adding a guard:

```python
if not response.choices:
    return ''
```

Check how the base OpenAI provider handles this and match the pattern.
2. extract_json wrapped in json.loads
```python
json.loads(extract_json(tool_call.function.arguments))
```

If `extract_json` already returns a parsed dict (check its return type), `json.loads` would fail on a dict input. If it returns a string, this is fine. Verify the contract of `extract_json`, and if it does return a string, consider adding a try/except for malformed JSON in tool arguments.
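One way to stay robust regardless of `extract_json`'s return type is a small normalizer like the one below. This is a sketch, not mem0 code: it accepts either an already-parsed dict or a JSON string, and falls back to an empty dict on malformed input.

```python
import json

def parse_tool_arguments(raw):
    # Hypothetical helper: tolerate both possible extract_json contracts.
    if isinstance(raw, dict):
        return raw  # already parsed upstream; json.loads would raise here
    try:
        return json.loads(raw)
    except (TypeError, json.JSONDecodeError):
        return {}  # malformed or missing tool arguments
```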
Minor
- The import of json at line 1 is correctly added since it is used in _parse_response.
LGTM with the empty response guard noted above.
Thanks for the merge!
@mvanhorn thank you for your contribution.
…4434) Co-authored-by: Matt Van Horn <455140+mvanhorn@users.noreply.github.com> Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Description
`AzureOpenAIStructuredLLM` calls `self._parse_response(response, tools)` at line 91 of `generate_response()`, but the method is not defined in the class or its parent `LLMBase`. This causes an `AttributeError` at runtime when using Azure OpenAI as the structured LLM provider.

The sibling class `OpenAIStructuredLLM` in `openai_structured.py` has a working `_parse_response`, and other LLM classes (vllm, lmstudio, gemini, aws_bedrock) all define their own. `AzureOpenAIStructuredLLM` was the only one missing it.

Also removed a duplicate `if tools:` block (lines 82-84 and 86-88 were identical).

Fixes #4164
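The removed duplication would have looked roughly like this. Only `tools`/`tool_choice` come from the PR text; the surrounding parameter values are made up for illustration.

```python
# Illustrative reconstruction of the duplicated parameter assignment.
params = {"model": "azure-deployment", "messages": []}  # hypothetical base params
tools = [{"type": "function", "function": {"name": "add_memory"}}]
tool_choice = "auto"

if tools:  # lines 82-84: kept
    params["tools"] = tools
    params["tool_choice"] = tool_choice

if tools:  # lines 86-88: identical block, deleted by this PR
    params["tools"] = tools
    params["tool_choice"] = tool_choice
```

The second block is harmless but redundant, which is why the PR drops it rather than changing behavior.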
Type of change
How Has This Been Tested?
Added
Added `tests/llms/test_azure_openai_structured.py` with 3 tests:

- `test_generate_response_without_tools` - verifies plain text response path
- `test_generate_response_with_tools` - verifies tool call parsing returns dict with `content` and `tool_calls`
- `test_generate_response_with_tools_no_tool_calls` - verifies tools provided but model returns no calls

Checklist:
Maintainer Checklist
This contribution was developed with AI assistance (Claude Code).
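For reference, the first of the three test scenarios can be sketched like this. The actual test file may differ; this sketch only exercises the mocked response shape, whereas the real test would patch the client into `AzureOpenAIStructuredLLM`.

```python
from types import SimpleNamespace as NS
from unittest.mock import MagicMock

def test_generate_response_without_tools():
    # Mocked Azure OpenAI client: the completion's first choice carries
    # plain text content and no tool calls (the no-tools path).
    client = MagicMock()
    client.chat.completions.create.return_value = NS(
        choices=[NS(message=NS(content="Hello", tool_calls=None))]
    )
    response = client.chat.completions.create(model="test", messages=[])
    assert response.choices[0].message.content == "Hello"
    assert response.choices[0].message.tool_calls is None
```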