Support trace parsing fallback using Databricks model #19654
Merged
AveshCSingh merged 13 commits into mlflow:master on Dec 31, 2025
Conversation
When using the Databricks default judge model for extracting fields from traces (e.g., inputs/outputs from nested spans), the code now routes to a dedicated implementation that uses the gpt-oss-120b model via the Databricks managed RAG client. This enables LLM fallback extraction to work when running scorers against a Databricks backend without requiring an external LLM provider. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
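The routing described above can be sketched roughly as follows. All names here (`route_field_extraction`, `DATABRICKS_DEFAULT_JUDGE_URI`, the two extraction stand-ins) are illustrative, not actual MLflow APIs:

```python
DATABRICKS_DEFAULT_JUDGE_URI = "databricks"

def extract_via_databricks(trace_payload: str) -> dict:
    # Stand-in for the dedicated path that would call the gpt-oss-120b
    # model through the Databricks managed RAG client.
    return {"backend": "databricks", "fields": trace_payload}

def extract_via_external_llm(trace_payload: str) -> dict:
    # Stand-in for the pre-existing path that requires an external
    # LLM provider.
    return {"backend": "external", "fields": trace_payload}

def route_field_extraction(judge_model_uri: str, trace_payload: str) -> dict:
    # Route to the Databricks-specific implementation when the default
    # Databricks judge model is in use; otherwise fall back to the
    # generic external-LLM extraction path.
    if judge_model_uri == DATABRICKS_DEFAULT_JUDGE_URI:
        return extract_via_databricks(trace_payload)
    return extract_via_external_llm(trace_payload)
```

The key point is only the dispatch: scorers running against a Databricks backend no longer need an external provider configured for trace-field extraction.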
Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
Resolved merge conflicts: - Deleted mlflow/genai/judges/adapters/databricks_adapter.py (accepted upstream deletion) - Updated invocation_utils.py to use new adapter architecture - Refactored _invoke_databricks_structured_output to use building blocks from databricks_managed_judge_adapter.py instead of deleted agentic loop function - Updated tests to mock new architecture components Co-Authored-By: Claude <noreply@anthropic.com> Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
Adds a shared _run_databricks_agentic_loop function that handles the iterative tool-calling loop for Databricks-based workflows. Both _invoke_databricks_default_judge and _invoke_databricks_structured_output now use this shared implementation with different callbacks. This restores the deduplication pattern from commit 91d0601 that was lost during merge conflict resolution. Co-Authored-By: Claude <noreply@anthropic.com> Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
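A minimal sketch of such a shared loop with caller-supplied callbacks, assuming a simple message/response shape; the real `_run_databricks_agentic_loop` differs in its types and error handling:

```python
from typing import Callable

def run_agentic_loop(
    invoke_model: Callable[[list], dict],
    on_tool_call: Callable[[dict], str],
    initial_messages: list,
    max_iterations: int = 10,
) -> str:
    """Iteratively call the model, executing tool calls until a final answer."""
    messages = list(initial_messages)
    for _ in range(max_iterations):
        response = invoke_model(messages)
        tool_call = response.get("tool_call")
        if tool_call is None:
            # No tool requested: the model produced its final answer.
            return response["content"]
        # Execute the requested tool via the caller-supplied callback and
        # feed the result back into the conversation.
        messages.append({"role": "tool", "content": on_tool_call(tool_call)})
    raise RuntimeError("Agentic loop exceeded max_iterations without a final answer")
```

Both callers then differ only in the callbacks they pass in (the default judge's scoring callback versus the structured-output extraction callback), which is the deduplication the commit restores.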
Co-Authored-By: Claude <noreply@anthropic.com> Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
Update test_databricks_adapter.py to import from databricks_managed_judge_adapter instead of the deleted databricks_adapter module. Co-Authored-By: Claude <noreply@anthropic.com> Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
Move imports from inside functions to module level for better code style. Co-Authored-By: Claude <noreply@anthropic.com> Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
Contributor
Documentation preview for a9165a0 is available at: More info
Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
smoorjani requested changes on Dec 30, 2025
mlflow/genai/judges/adapters/databricks_managed_judge_adapter.py (3 review comments, outdated/resolved)
Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
smoorjani approved these changes on Dec 30, 2025
Collaborator
smoorjani
left a comment
LGTM - but before merging, there were 3 unaddressed comments from the last round of review, LMK if they make sense.
- Use monkeypatch.setenv instead of mocking MLFLOW_JUDGE_MAX_ITERATIONS
- Set max iterations to 1 for faster test execution
- Parameterize test_structured_output_schema_injection tests

Co-Authored-By: Claude <noreply@anthropic.com> Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
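The first two suggestions can be illustrated with a small pytest sketch. `get_max_iterations` is a hypothetical stand-in for MLflow's actual config lookup, used only to show the `monkeypatch.setenv` pattern:

```python
import os
import pytest

def get_max_iterations() -> int:
    # Hypothetical helper reading the env var; the real MLflow config
    # object (MLFLOW_JUDGE_MAX_ITERATIONS) is accessed differently.
    return int(os.environ.get("MLFLOW_JUDGE_MAX_ITERATIONS", "10"))

def test_max_iterations_env(monkeypatch: pytest.MonkeyPatch) -> None:
    # monkeypatch.setenv is automatically undone after the test, unlike a
    # hand-rolled mock of the config object. Setting the limit to 1 also
    # keeps the agentic-loop test fast: it stops after one model call.
    monkeypatch.setenv("MLFLOW_JUDGE_MAX_ITERATIONS", "1")
    assert get_max_iterations() == 1
```

The advantage over mocking the config object directly is that the env var is restored automatically at test teardown, so tests cannot leak state into each other.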
Collaborator
Author
They make sense; I just missed them my first time around. Addressed.
The skinny build was failing because importing mlflow triggered a chain of imports that eventually required numpy via: mlflow -> genai/judges/utils -> databricks_managed_judge_adapter -> genai/judges/tools -> tools/base -> mlflow.types.llm -> mlflow.types.schema -> numpy Fixed by making the list_judge_tools import lazy (only when needed). Co-Authored-By: Claude <noreply@anthropic.com> Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
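The fix follows the standard deferred-import pattern: moving the import from module level into the function body so it only runs when the feature is used. A generic sketch, with `json` standing in for the heavy numpy-requiring chain and `list_judge_tools_lazy` as an illustrative name:

```python
def list_judge_tools_lazy() -> str:
    # Deferred import: nothing heavy loads when this module is imported,
    # only when the function is first called. In the real fix, the lazy
    # import is list_judge_tools, whose import chain eventually pulls in
    # numpy and therefore broke the skinny (no-numpy) build.
    import json
    return json.dumps(["get_span", "get_trace_info"])
```

Because the module-level namespace no longer references the heavy dependency, `import mlflow` stays cheap for skinny installs, and the cost is paid only by callers that actually invoke the tool listing.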
Force-pushed from a937755 to a9165a0
omarfarhoud pushed a commit to omarfarhoud/mlflow that referenced this pull request on Jan 20, 2026
Signed-off-by: Avesh Singh <aveshcsingh@gmail.com> Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
What changes are proposed in this pull request?
Adds support to extract `request` and `response` using the Databricks ChatCompletions endpoint. This functionality is already supported for custom judge models.

How is this PR tested?
Validation script:
Succeeds on the feature branch:
Fails on master:
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
Adds Databricks support to automatically extract request and response from traces which do not contain inputs and outputs in the root span. This is already supported for non-Databricks judge models.
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/tracking: Tracking Service, tracking client APIs, autologging
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
- area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
- area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
- area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
- area/projects: MLproject format, project running backends
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?
"Yes" should be selected for bug fixes, documentation updates, and other small changes. "No" should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
Bug fixes, doc updates and new features usually go into minor releases.
Bug fixes and doc updates usually go into patch releases.