
Support trace parsing fallback using Databricks model#19654

Merged
AveshCSingh merged 13 commits into mlflow:master from AveshCSingh:support-llm-trace-parsing-w-databricks
Dec 31, 2025

Conversation


@AveshCSingh AveshCSingh commented Dec 26, 2025

🛠 DevTools 🛠

Open in GitHub Codespaces

Install mlflow from this PR

# mlflow
pip install git+https://github.com/mlflow/mlflow.git@refs/pull/19654/merge
# mlflow-skinny
pip install git+https://github.com/mlflow/mlflow.git@refs/pull/19654/merge#subdirectory=libs/skinny

For Databricks, use the following command:

%sh curl -LsSf https://raw.githubusercontent.com/mlflow/mlflow/HEAD/dev/install-skinny.sh | sh -s pull/19654/merge

What changes are proposed in this pull request?

Adds support for extracting the request and response from a trace using the Databricks ChatCompletions endpoint. This fallback is already supported for custom judge models.
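For context, the extraction target is the OpenAI-compatible ChatCompletions payload shape used by the validation script below. A minimal sketch of pulling the request and response text out of such a payload (the helper name and logic here are illustrative, not MLflow internals):

```python
# Illustrative sketch: extract (request, response) text from a
# ChatCompletions-shaped span payload. Not the actual MLflow implementation.

def extract_request_response(inputs: dict, outputs: dict) -> tuple[str, str]:
    """Return (request_text, response_text) from ChatCompletions-shaped data."""
    # Last user message in the request payload
    user_msgs = [
        m["content"] for m in inputs.get("messages", []) if m.get("role") == "user"
    ]
    request = user_msgs[-1] if user_msgs else ""
    # First choice's assistant message in the response payload
    choices = outputs.get("choices", [])
    response = choices[0]["message"]["content"] if choices else ""
    return request, response


req, resp = extract_request_response(
    {"messages": [{"role": "user", "content": "What is the capital of France?"}]},
    {"choices": [{"message": {"role": "assistant", "content": "The capital of France is Paris."}}]},
)
```

The real fallback goes further: when the root span carries no inputs/outputs at all, an LLM is asked to locate these fields in child spans.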

How is this PR tested?

  • Existing unit/integration tests
  • New unit/integration tests
  • Manual tests

Validation script:

import os

os.environ["MLFLOW_ENABLE_ASYNC_TRACE_LOGGING"] = "false"

import mlflow
from mlflow.entities.span import SpanType
from mlflow.genai.scorers import Safety

mlflow.set_tracking_uri("databricks")
mlflow.set_experiment(f"/Users/{os.environ.get('USER')}@databricks.com/demo_llm_extraction")

# Create trace with empty root span but populated child span
with mlflow.start_span(name="agent_call", span_type=SpanType.CHAIN) as root:
    with mlflow.start_span(name="chat_completion", span_type=SpanType.CHAT_MODEL) as child:
        child.set_inputs({"messages": [{"role": "user", "content": "What is the capital of France?"}]})
        child.set_outputs({"choices": [{"message": {"role": "assistant", "content": "The capital of France is Paris."}}]})

trace = mlflow.get_trace(root.trace_id)
print(f"Root span inputs: {trace.data.spans[0].inputs}, outputs: {trace.data.spans[0].outputs}")

# Call Safety scorer - will use LLM to extract fields from child spans
result = Safety()(trace=trace)
print(f"Score: {result.value}, Rationale: {result.rationale}")

Succeeds on the feature branch:

$ python tmp/scripts/demo_llm_extraction.py
Root span inputs: None, outputs: None
Score: yes, Rationale: The text simply states a factual piece of information about the capital of France, which does not contain any elements of hate speech, harassment, incitement of violence, or promotion of illegal or severely harmful acts. Therefore, it is safe.

Fails on master:

$ python tmp/scripts/demo_llm_extraction.py
Root span inputs: None, outputs: None
2025/12/27 00:46:44 WARNING mlflow.genai.scorers.builtin_scorers: Failed to extract required fields from trace using LLM: Malformed model uri 'databricks'. The URI must be in the format of <provider>:/<model-name>, e.g., 'openai:/gpt-4.1-mini'.
Traceback (most recent call last):
  File "/home/avesh.singh/mlflow/tmp/scripts/demo_llm_extraction.py", line 34, in <module>
    result = Safety()(trace=trace)
  File "/home/avesh.singh/mlflow/mlflow/genai/scorers/base.py", line 149, in wrapper
    return telemetry_wrapped(*args, **kwargs)
  File "/home/avesh.singh/mlflow/mlflow/telemetry/track.py", line 24, in wrapper
    return func(*args, **kwargs)
  File "/home/avesh.singh/mlflow/mlflow/genai/scorers/builtin_scorers.py", line 1386, in __call__
    _validate_required_fields(fields, self, "Safety scorer")
  File "/home/avesh.singh/mlflow/mlflow/genai/scorers/builtin_scorers.py", line 214, in _validate_required_fields
    raise MlflowException(
mlflow.exceptions.MlflowException: Safety scorer requires the following fields: outputs. Provide them directly or pass a trace containing them.

Does this PR require documentation update?

  • No. You can skip the rest of this section.
  • Yes. I've updated:
    • Examples
    • API references
    • Instructions

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

Adds Databricks support for automatically extracting the request and response from traces that do not contain inputs and outputs in the root span. This is already supported for non-Databricks judge models.

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/tracking: Tracking Service, tracking client APIs, autologging
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
  • area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
  • area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
  • area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
  • area/projects: MLproject format, project running backends
  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/build: Build and test infrastructure for MLflow
  • area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:

  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?

Yes should be selected for bug fixes, documentation updates, and other small changes. No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
  • Minor release: a release that increments the second part of the version number (e.g., 1.2.0 -> 1.3.0).
    Bug fixes, doc updates and new features usually go into minor releases.
  • Patch release: a release that increments the third part of the version number (e.g., 1.2.0 -> 1.2.1).
    Bug fixes and doc updates usually go into patch releases.
  • Yes (this PR will be cherry-picked and included in the next patch release)
  • No (this PR will be included in the next minor release)

AveshCSingh and others added 8 commits December 26, 2025 21:48
When using the Databricks default judge model for extracting fields
from traces (e.g., inputs/outputs from nested spans), the code now
routes to a dedicated implementation that uses the gpt-oss-120b model
via the Databricks managed RAG client.

This enables LLM fallback extraction to work when running scorers
against a Databricks backend without requiring an external LLM provider.
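The routing described above can be sketched roughly as follows. All function names here are hypothetical stand-ins, not the actual MLflow code; the error message mirrors the one seen in the failing run on master:

```python
# Hypothetical sketch of routing field extraction based on the judge model URI.
# A bare "databricks" URI routes to the managed-client path instead of being
# parsed as <provider>:/<model-name>.

def invoke_databricks_structured_output(prompt: str) -> str:
    # Stand-in for the Databricks managed-client implementation
    return f"databricks:{prompt}"


def invoke_litellm(provider: str, model_name: str, prompt: str) -> str:
    # Stand-in for the generic provider path
    return f"{provider}/{model_name}:{prompt}"


def route_extraction(model_uri: str, prompt: str) -> str:
    if model_uri == "databricks":
        # Databricks default judge: no <provider>:/<model> URI to parse;
        # delegate to the managed-client path instead of raising.
        return invoke_databricks_structured_output(prompt)
    provider, _, model_name = model_uri.partition(":/")
    if not model_name:
        raise ValueError(
            f"Malformed model uri {model_uri!r}. The URI must be in the format "
            "of <provider>:/<model-name>, e.g., 'openai:/gpt-4.1-mini'."
        )
    return invoke_litellm(provider, model_name, prompt)
```

Before this change, the bare `"databricks"` URI fell into the parsing branch and raised the "Malformed model uri" error shown in the traceback below.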

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
Resolved merge conflicts:
- Deleted mlflow/genai/judges/adapters/databricks_adapter.py (accepted upstream deletion)
- Updated invocation_utils.py to use new adapter architecture
- Refactored _invoke_databricks_structured_output to use building blocks from
  databricks_managed_judge_adapter.py instead of deleted agentic loop function
- Updated tests to mock new architecture components

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
Adds a shared _run_databricks_agentic_loop function that handles the
iterative tool-calling loop for Databricks-based workflows. Both
_invoke_databricks_default_judge and _invoke_databricks_structured_output
now use this shared implementation with different callbacks.

This restores the deduplication pattern from commit 91d0601 that was
lost during merge conflict resolution.
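The shared-loop pattern described in this commit can be sketched as below. The callback signatures and loop shape are assumptions for illustration, not the actual `_run_databricks_agentic_loop` code:

```python
# Minimal sketch of an agentic loop parameterized by callbacks, so two
# workflows (default judge vs. structured output) can share one loop body.
from typing import Callable, Optional


def run_agentic_loop(
    call_model: Callable[[list], dict],
    handle_tool_call: Callable[[dict], dict],
    extract_answer: Callable[[dict], Optional[str]],
    messages: list,
    max_iterations: int = 10,
) -> str:
    """Iterate model calls, executing requested tools until an answer appears."""
    for _ in range(max_iterations):
        response = call_model(messages)
        answer = extract_answer(response)
        if answer is not None:
            return answer
        # No final answer yet: run each requested tool and feed results back
        for tool_call in response.get("tool_calls", []):
            messages.append(handle_tool_call(tool_call))
    raise RuntimeError(f"No answer after {max_iterations} iterations")
```

Each caller supplies its own `extract_answer` callback, which is where the two workflows differ (free-form judge verdict vs. structured output parsing).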

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
Update test_databricks_adapter.py to import from databricks_managed_judge_adapter
instead of the deleted databricks_adapter module.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
Move imports from inside functions to module level for better code style.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
@github-actions github-actions bot added area/evaluation MLflow Evaluation rn/feature Mention under Features in Changelogs. labels Dec 27, 2025

github-actions bot commented Dec 27, 2025

Documentation preview for a9165a0 is available at:


Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
@AveshCSingh AveshCSingh changed the title from "[wip] Support llm trace parsing w databricks" to "Support llm trace parsing using Databricks model" Dec 27, 2025
@AveshCSingh AveshCSingh changed the title from "Support llm trace parsing using Databricks model" to "Support trace parsing fallback using Databricks model" Dec 27, 2025
Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
@smoorjani smoorjani left a comment


LGTM - but before merging, there were 3 unaddressed comments from the last round of review, LMK if they make sense.

- Use monkeypatch.setenv instead of mocking MLFLOW_JUDGE_MAX_ITERATIONS
- Set max iterations to 1 for faster test execution
- Parameterize test_structured_output_schema_injection tests
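The first suggestion above is a standard pytest pattern: `monkeypatch.setenv` sets the environment variable for the duration of the test and restores it automatically, rather than mocking the setting object. A small sketch with a stand-in for the real config reader (the function under test is hypothetical, not MLflow code):

```python
# Sketch of the reviewer's suggestion: use monkeypatch.setenv rather than
# mocking MLFLOW_JUDGE_MAX_ITERATIONS directly.
import os


def max_iterations() -> int:
    # Stand-in for however MLflow reads MLFLOW_JUDGE_MAX_ITERATIONS
    return int(os.environ.get("MLFLOW_JUDGE_MAX_ITERATIONS", "10"))


def test_max_iterations_env(monkeypatch):
    # monkeypatch restores the original env var after the test finishes
    monkeypatch.setenv("MLFLOW_JUDGE_MAX_ITERATIONS", "1")
    assert max_iterations() == 1
```

Setting the value to 1 also serves the second suggestion: the agentic loop exits after a single iteration, keeping the test fast.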

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
@AveshCSingh
They make sense; I just missed them the first time around. Addressed.

The skinny build was failing because importing mlflow triggered
a chain of imports that eventually required numpy via:
mlflow -> genai/judges/utils -> databricks_managed_judge_adapter
-> genai/judges/tools -> tools/base -> mlflow.types.llm
-> mlflow.types.schema -> numpy

Fixed by making the list_judge_tools import lazy (only when needed).
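The fix follows the usual lazy-import pattern: move the import from module level into the function body so that importing the package never triggers the heavy dependency chain. A self-contained sketch, with `decimal` standing in for the numpy-backed module (the function name echoes `list_judge_tools` but is illustrative):

```python
# Lazy-import sketch: the heavy import happens on first call, not when the
# enclosing module is imported. `decimal` is a stand-in for the real
# numpy-dependent chain; nothing here is actual MLflow code.

def list_judge_tools_lazy():
    # Deferred until first call; a module-level import here would load the
    # dependency as a side effect of `import mlflow`, breaking mlflow-skinny.
    import decimal  # stand-in for the heavy import chain
    return [decimal.Decimal("0")]
```

This keeps `import mlflow` cheap in skinny builds while paying the import cost only on the code path that actually needs the tools.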

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
@AveshCSingh AveshCSingh force-pushed the support-llm-trace-parsing-w-databricks branch from a937755 to a9165a0 on December 30, 2025 23:10
@AveshCSingh AveshCSingh added this pull request to the merge queue Dec 31, 2025
Merged via the queue into mlflow:master with commit c9041b2 Dec 31, 2025
46 checks passed
@AveshCSingh AveshCSingh deleted the support-llm-trace-parsing-w-databricks branch December 31, 2025 01:27
omarfarhoud pushed a commit to omarfarhoud/mlflow that referenced this pull request Jan 20, 2026
Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>