Add support for OpenAI function calling inputs in chat UI parsing #20058

daniellok-db merged 1 commit into mlflow:master
Pull request overview
This PR extends the chat UI parsing to properly recognize OpenAI function calling inputs that appear in the input array alongside regular chat messages. Previously, the UI only recognized OpenAIResponsesInputMessage items (with role and content), but OpenAI's Responses API can include additional item types like function_call, function_call_output, and reasoning in the input array.
Changes:
- Added support for parsing function call and function call output items in OpenAI Responses input arrays
- Extended reasoning attachment logic to work with input arrays
- Added comprehensive unit tests for the new functionality
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| mlflow/server/js/src/shared/web-shared/model-trace-explorer/chat-utils/openai.ts | Added isOpenAIResponsesInputItem, normalizeOpenAIResponsesInputArrayItem, and processInputItemsWithReasoning functions to handle mixed input arrays containing messages, function calls, and reasoning items |
| mlflow/server/js/src/shared/web-shared/model-trace-explorer/chat-utils/openai.test.ts | Added two test cases covering function calls with/without reasoning summaries in input arrays |
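The mixed-array detection described above can be sketched with a pair of type guards. This is a minimal, self-contained sketch: the names mirror the PR's `isOpenAIResponsesInputMessage` / `isOpenAIResponsesOutputItem` helpers, but the simplified item shapes and function names here are assumptions, not the actual MLflow implementation.

```typescript
// Sketch of the mixed-input detection (assumed simplified item shapes;
// the real MLflow helpers validate more fields).

// A plain chat message has `role` and `content`.
function isInputMessage(item: unknown): boolean {
  return typeof item === 'object' && item !== null && 'role' in item && 'content' in item;
}

// Output-style items are discriminated by a `type` field.
function isOutputItem(item: unknown): boolean {
  if (typeof item !== 'object' || item === null || !('type' in item)) return false;
  const t = (item as { type: unknown }).type;
  return t === 'function_call' || t === 'function_call_output' || t === 'reasoning';
}

// An input array is parseable when every item matches one of the shapes.
function isParseableInput(input: unknown): boolean {
  return Array.isArray(input) && input.every((m) => isInputMessage(m) || isOutputItem(m));
}
```

The key change is the `||` in the `every` predicate: previously only message-shaped items were accepted, so one `function_call` item caused the whole array to be rejected.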
```ts
if (
  isArray(input) &&
  // openai inputs can contain output items such as function calls and function call outputs
  input.every((message: unknown) => isOpenAIResponsesInputMessage(message) || isOpenAIResponsesOutputItem(message))
```
I was going to implement separate functions for detecting whether we have a `function_call` or `function_call_output`, but realized this check already exists in `isOpenAIResponsesOutputItem`.

It also does not seem to pose a problem to use `normalizeOpenAIResponsesOutputItem` to normalize the input, since we do want these items formatted as assistant messages (the assistant is the one that requests the tools).
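The attribution argument above can be made concrete with a sketch. This is a hypothetical normalizer, not MLflow's `normalizeOpenAIResponsesOutputItem`: the field names (`toolCalls`, `toolCallId`) and the two-case shape are assumptions used only to illustrate why function-call items map to assistant turns.

```typescript
// Hypothetical sketch: a function_call renders as an assistant message
// carrying a tool request, and a function_call_output as the tool's reply.
// Field names are assumptions, not the real MLflow chat schema.

type ChatMessage =
  | { role: 'assistant'; toolCalls: { id: string; name: string; arguments: string }[] }
  | { role: 'tool'; toolCallId: string; content: string };

function normalizeItem(
  item:
    | { type: 'function_call'; call_id: string; name: string; arguments: string }
    | { type: 'function_call_output'; call_id: string; output: string },
): ChatMessage {
  if (item.type === 'function_call') {
    // The assistant is the party requesting the tool, so attribute it there,
    // even when the item appears in an *input* array.
    return {
      role: 'assistant',
      toolCalls: [{ id: item.call_id, name: item.name, arguments: item.arguments }],
    };
  }
  return { role: 'tool', toolCallId: item.call_id, content: item.output };
}
```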
…flow#20058) Signed-off-by: Daniel Lok <daniel.lok@databricks.com>
Related Issues/PRs
#20054
What changes are proposed in this pull request?

The chat UI was not properly recognizing OpenAI's function calling inputs as chat messages. For example, the `input` array in a span can contain items like:

```json
{
  "input": [
    { "role": "user", "content": "What is my horoscope?" },
    { "type": "reasoning", "id": "...", "summary": [] },
    { "type": "function_call", "call_id": "...", "name": "get_horoscope", "arguments": "..." },
    { "type": "function_call_output", "call_id": "...", "output": "..." }
  ]
}
```

The UI would fail to parse these as chat messages because `normalizeOpenAIResponsesInput` only recognized `OpenAIResponsesInputMessage` items (with `role` and `content`). This PR extends it to accept other formats such as `function_call` and `function_call_output` by reusing the existing `isOpenAIResponsesOutputItem` helper.

How is this PR tested?
Added new test cases:

- `handles input with function calls and function call outputs`

The trace was generated by following the official OpenAI function calling tutorial:
https://platform.openai.com/docs/guides/function-calling
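The property such a test exercises can be sketched end to end. Everything below is a self-contained illustration, not the actual `openai.test.ts`: the fixture mirrors the example input above, while the `normalize` helper and its role-mapping rules are assumptions standing in for MLflow's real normalization utilities.

```typescript
// Sketch of the tested property: every non-reasoning item in a mixed
// Responses input array should yield a chat message with a sensible role.
// `normalize` is an assumed stand-in for the real MLflow normalizer.

const fixture: unknown[] = [
  { role: 'user', content: 'What is my horoscope?' },
  { type: 'reasoning', id: 'r1', summary: [] },
  { type: 'function_call', call_id: 'c1', name: 'get_horoscope', arguments: '{"sign":"leo"}' },
  { type: 'function_call_output', call_id: 'c1', output: 'Good fortune ahead.' },
];

function normalize(items: unknown[]): { role: string }[] {
  const out: { role: string }[] = [];
  for (const item of items as Record<string, unknown>[]) {
    if (typeof item.role === 'string') out.push({ role: item.role });
    else if (item.type === 'function_call') out.push({ role: 'assistant' });
    else if (item.type === 'function_call_output') out.push({ role: 'tool' });
    // 'reasoning' items carry no standalone message in this sketch;
    // in the PR they are attached to an adjacent message instead.
  }
  return out;
}
```

Under these assumptions the fixture normalizes to a `user` turn, an `assistant` turn for the tool request, and a `tool` turn for the result, instead of being rejected wholesale.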
Before, no chat messages appear:
After:
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
Fixed an issue where the tracing UI would not properly display OpenAI function calling inputs as chat messages.
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- `area/tracing`: MLflow Tracing features, tracing APIs, and LLM tracing functionality

How should the PR be classified in the release notes? Choose one:

- `rn/bug-fix` - A user-facing bug fix worth mentioning in the release notes

Should this PR be included in the next patch release?
🤖 Generated with Claude Code