Update tool call correctness judge to accept expected tool calls#19613

Merged
smoorjani merged 8 commits into mlflow:master from smoorjani:gwt-mlflow-ml-60671
Dec 30, 2025

Conversation

@smoorjani
Collaborator

@smoorjani smoorjani commented Dec 24, 2025

🛠 DevTools 🛠

Open in GitHub Codespaces

Install mlflow from this PR

# mlflow
pip install git+https://github.com/mlflow/mlflow.git@refs/pull/19613/merge
# mlflow-skinny
pip install git+https://github.com/mlflow/mlflow.git@refs/pull/19613/merge#subdirectory=libs/skinny

For Databricks, use the following command:

%sh curl -LsSf https://raw.githubusercontent.com/mlflow/mlflow/HEAD/dev/install-skinny.sh | sh -s pull/19613/merge

Related Issues/PRs

#xxx

What changes are proposed in this pull request?

As titled: updates the tool call correctness judge to accept expected tool calls. Additionally, we introduce two new parameters: one to check ordering and another to choose between fuzzy and exact matching against the expectations. Note that exact matching only works when expectations are provided.
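To make the semantics of the two new parameters concrete, here is a minimal, self-contained sketch of how exact matching behaves with and without ordering. This is an illustration of the intended behavior, not the actual MLflow implementation; `exact_match` and its shape are made up for this example.

```python
def exact_match(actual: list[dict], expected: list[dict], consider_ordering: bool) -> bool:
    """Deterministic comparison: every expected call must appear in the trace.

    Partial expectations are allowed: an expected call that omits
    "arguments" matches on name alone.
    """
    if len(actual) != len(expected):
        return False

    def matches(act: dict, exp: dict) -> bool:
        if act["name"] != exp["name"]:
            return False
        # Only compare arguments when the expectation specifies them.
        return "arguments" not in exp or act.get("arguments") == exp["arguments"]

    if consider_ordering:
        # Ordered: compare position by position.
        return all(matches(a, e) for a, e in zip(actual, expected))

    # Unordered: greedily pair each expected call with an unused actual call.
    remaining = list(actual)
    for exp in expected:
        hit = next((a for a in remaining if matches(a, exp)), None)
        if hit is None:
            return False
        remaining.remove(hit)
    return True


calls = [
    {"name": "search", "arguments": {"query": "MLflow"}},
    {"name": "summarize", "arguments": {"max_length": 100}},
]
print(exact_match(calls, list(reversed(calls)), consider_ordering=True))   # False
print(exact_match(calls, list(reversed(calls)), consider_ordering=False))  # True
print(exact_match(calls, [{"name": "search"}, {"name": "summarize"}], consider_ordering=False))  # True
```

Fuzzy matching replaces this deterministic comparison with an LLM judgment, which is why it also works without expectations (ground-truth free).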

How is this PR tested?

  • Existing unit/integration tests
  • New unit/integration tests
  • Manual tests

test file:

import mlflow
from mlflow.entities.span import SpanType
from mlflow.exceptions import MlflowException
from mlflow.genai.judges.builtin import CategoricalRating
from mlflow.genai.scorers import ToolCallCorrectness
from mlflow.tracing.constant import SpanAttributeKey

AVAILABLE_TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "search",
            "description": "Search for information",
            "parameters": {"type": "object", "properties": {"query": {"type": "string"}}},
        },
    },
    {
        "type": "function",
        "function": {
            "name": "summarize",
            "description": "Summarize text",
            "parameters": {"type": "object", "properties": {"max_length": {"type": "integer"}}},
        },
    },
]


def create_trace_with_tool_calls(question: str, tool_calls: list[dict]) -> mlflow.entities.Trace:
    with mlflow.start_span(name="agent") as span:
        span.set_inputs({"question": question})
        with mlflow.start_span(name="llm_call", span_type=SpanType.LLM) as llm_span:
            llm_span.set_attribute(SpanAttributeKey.CHAT_TOOLS, AVAILABLE_TOOLS)
            llm_span.set_inputs({"messages": [{"role": "user", "content": question}]})
            llm_span.set_outputs({"content": "I'll help you with that."})
        for tool in tool_calls:
            with mlflow.start_span(name=tool["name"], span_type=SpanType.TOOL) as tool_span:
                tool_span.set_inputs(tool.get("arguments", {}))
                tool_span.set_outputs(tool.get("output", "result"))
        span.set_outputs("Agent response")
    return mlflow.get_trace(span.trace_id)


trace_search_summarize = create_trace_with_tool_calls(
    "Search for MLflow and summarize",
    [
        {"name": "search", "arguments": {"query": "MLflow"}, "output": "MLflow is an ML platform"},
        {"name": "summarize", "arguments": {"max_length": 100}, "output": "Summary"},
    ],
)

trace_single_search = create_trace_with_tool_calls(
    "Search for MLflow",
    [{"name": "search", "arguments": {"query": "MLflow"}, "output": "MLflow is an ML platform"}],
)

# TEST 1: Exact match, ordered, full expectations - MATCH
scorer = ToolCallCorrectness(should_exact_match=True, should_consider_ordering=True)
result = scorer(
    trace=trace_search_summarize,
    expectations={
        "expected_tool_calls": [
            {"name": "search", "arguments": {"query": "MLflow"}},
            {"name": "summarize", "arguments": {"max_length": 100}},
        ]
    },
)
print(f"TEST 1 - Value: {result.value}, Rationale: {result.rationale}")
assert result.value == CategoricalRating.YES
assert "match" in result.rationale.lower()

# TEST 2: Exact match, ordered, full expectations - WRONG ORDER
scorer = ToolCallCorrectness(should_exact_match=True, should_consider_ordering=True)
result = scorer(
    trace=trace_search_summarize,
    expectations={
        "expected_tool_calls": [
            {"name": "summarize", "arguments": {"max_length": 100}},
            {"name": "search", "arguments": {"query": "MLflow"}},
        ]
    },
)
print(f"TEST 2 - Value: {result.value}, Rationale: {result.rationale}")
assert result.value == CategoricalRating.NO
assert "Position 1" in result.rationale

# TEST 3: Exact match, unordered, full expectations - DIFFERENT ORDER OK
scorer = ToolCallCorrectness(should_exact_match=True, should_consider_ordering=False)
result = scorer(
    trace=trace_search_summarize,
    expectations={
        "expected_tool_calls": [
            {"name": "summarize", "arguments": {"max_length": 100}},
            {"name": "search", "arguments": {"query": "MLflow"}},
        ]
    },
)
print(f"TEST 3 - Value: {result.value}, Rationale: {result.rationale}")
assert result.value == CategoricalRating.YES

# TEST 4: Exact match, partial expectations (names only) - MATCH
scorer = ToolCallCorrectness(should_exact_match=True, should_consider_ordering=False)
result = scorer(
    trace=trace_search_summarize,
    expectations={"expected_tool_calls": [{"name": "search"}, {"name": "summarize"}]},
)
print(f"TEST 4 - Value: {result.value}, Rationale: {result.rationale}")
assert result.value == CategoricalRating.YES

# TEST 5: Exact match, partial expectations - WRONG NAME
scorer = ToolCallCorrectness(should_exact_match=True, should_consider_ordering=False)
result = scorer(
    trace=trace_search_summarize,
    expectations={"expected_tool_calls": [{"name": "search"}, {"name": "translate"}]},
)
print(f"TEST 5 - Value: {result.value}, Rationale: {result.rationale}")
assert result.value == CategoricalRating.NO
assert "translate" in result.rationale

# TEST 6: Exact match, count mismatch
scorer = ToolCallCorrectness(should_exact_match=True, should_consider_ordering=False)
result = scorer(
    trace=trace_single_search,
    expectations={"expected_tool_calls": [{"name": "search"}, {"name": "summarize"}]},
)
print(f"TEST 6 - Value: {result.value}, Rationale: {result.rationale}")
assert result.value == CategoricalRating.NO
assert "Expected 2" in result.rationale

# TEST 7: Exact match without expectations - should raise error
scorer = ToolCallCorrectness(should_exact_match=True)
try:
    scorer(trace=trace_single_search)
    assert False, "Should have raised MlflowException"
except MlflowException as e:
    print(f"TEST 7 - Correctly raised: {e}")
    assert "should_exact_match=True requires expectations" in str(e)

# TEST 8: Fuzzy match with full expectations
scorer = ToolCallCorrectness(should_exact_match=False)
result = scorer(
    trace=trace_search_summarize,
    expectations={
        "expected_tool_calls": [
            {"name": "search", "arguments": {"query": "MLflow"}},
            {"name": "summarize", "arguments": {"max_length": 100}},
        ]
    },
)
print(f"TEST 8 - Value: {result.value}, Rationale: {result.rationale}")
assert result.value == CategoricalRating.YES

# TEST 9: Fuzzy match, ground-truth free
scorer = ToolCallCorrectness(should_exact_match=False)
result = scorer(trace=trace_search_summarize)
print(f"TEST 9 - Value: {result.value}, Rationale: {result.rationale}")

# TEST 10: Fuzzy match with partial expectations
scorer = ToolCallCorrectness(should_exact_match=False)
result = scorer(
    trace=trace_search_summarize,
    expectations={"expected_tool_calls": [{"name": "search"}, {"name": "summarize"}]},
)
print(f"TEST 10 - Value: {result.value}, Rationale: {result.rationale}")
assert result.value == CategoricalRating.YES

print("All 10 tests passed!")

output: (screenshot of the test run)

Does this PR require documentation update?

TODO: will update the docs page in a subsequent PR

  • No. You can skip the rest of this section.
  • Yes. I've updated:
    • Examples
    • API references
    • Instructions

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

Update tool call correctness judge to evaluate using expected tool calls, and add options to check for ordering and do fuzzy or exact matching.

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/tracking: Tracking Service, tracking client APIs, autologging
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
  • area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
  • area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
  • area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
  • area/projects: MLproject format, project running backends
  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/build: Build and test infrastructure for MLflow
  • area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:

  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?

Yes should be selected for bug fixes, documentation updates, and other small changes. No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
  • Minor release: a release that increments the second part of the version number (e.g., 1.2.0 -> 1.3.0).
    Bug fixes, doc updates and new features usually go into minor releases.
  • Patch release: a release that increments the third part of the version number (e.g., 1.2.0 -> 1.2.1).
    Bug fixes and doc updates usually go into patch releases.
  • Yes (this PR will be cherry-picked and included in the next patch release)
  • No (this PR will be included in the next minor release)

Signed-off-by: Samraj Moorjani <samraj.moorjani@databricks.com>
@github-actions github-actions bot added v3.8.1 area/evaluation MLflow Evaluation rn/feature Mention under Features in Changelogs. labels Dec 24, 2025
Collaborator

@AveshCSingh AveshCSingh left a comment


Overall, this looks good. Thanks for sharing the thorough validation script in addition to your unit tests.

I've left a few requests inline

available_tools: list["ChatTool"],
expected_tool_calls: list["FunctionCall"] | None = None,
include_arguments: bool = True,
check_order: bool = False,
Collaborator


Is this interface expected to be called by users? Asking since it does not include should_exact_match, whereas the Scorer interface does.

Collaborator Author


Yeah, I think the difference here is judge vs. scorer. The judge is only responsible for fuzzy matching (i.e., the part that uses an LLM). Regardless, it's unlikely users will call the judge directly; they're more likely to use the scorer, especially for exact matching.

def parse_tool_call_expectations(
expectations: dict[str, Any] | None,
) -> list["FunctionCall"] | None:
from mlflow.genai.utils.type import FunctionCall
Collaborator


Is there a reason to keep this import inline?

Collaborator Author


Yeah, this will break the skinny build if added at the top of the file, since `FunctionCall` has a dependency on full MLflow.
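The deferred-import pattern discussed here is a common way to keep a heavy optional dependency out of module import time. A generic sketch (stdlib `json` stands in for the heavy symbol so the example runs anywhere; in the PR the deferred import is `mlflow.genai.utils.type.FunctionCall`):

```python
def parse_payload(raw: str) -> dict:
    """Deferred import: the dependency is resolved on first call, not when
    this module is imported, so a build that lacks it can still import us."""
    import json  # stands in for the heavy dependency (FunctionCall in the PR)
    return json.loads(raw)


print(parse_payload('{"name": "search"}'))  # {'name': 'search'}
```

A stripped-down distribution that never calls the function can import the module without error; the `ImportError` (if any) is only raised at call time.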

Signed-off-by: Samraj Moorjani <samraj.moorjani@databricks.com>
Collaborator

@AveshCSingh AveshCSingh left a comment


LGTM! Thanks for addressing my comments

Signed-off-by: Samraj Moorjani <samraj.moorjani@databricks.com>
@smoorjani smoorjani enabled auto-merge December 30, 2025 00:20
@github-actions
Contributor

Documentation preview for a1cace7 is available at:

More info
  • Ignore this comment if this PR does not change the documentation.
  • The preview is updated when a new commit is pushed to this PR.
  • This comment was created by this workflow run.
  • The documentation was built by this workflow run.

@smoorjani smoorjani added this pull request to the merge queue Dec 30, 2025
Merged via the queue into mlflow:master with commit 8e6e469 Dec 30, 2025
46 checks passed
@smoorjani smoorjani deleted the gwt-mlflow-ml-60671 branch December 30, 2025 00:51
BenWilson2 pushed a commit to BenWilson2/mlflow that referenced this pull request Dec 30, 2025
…low#19613)

Signed-off-by: Samraj Moorjani <samraj.moorjani@databricks.com>
omarfarhoud pushed a commit to omarfarhoud/mlflow that referenced this pull request Jan 20, 2026
…low#19613)

Signed-off-by: Samraj Moorjani <samraj.moorjani@databricks.com>

Labels

area/evaluation MLflow Evaluation rn/feature Mention under Features in Changelogs. v3.8.2

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants