
Fix return handling for scorer utils #18702

Merged
BenWilson2 merged 4 commits into mlflow:master from BenWilson2:fix-builtin-judges-tuple-bug
Nov 7, 2025

Conversation


@BenWilson2 BenWilson2 commented Nov 6, 2025


Install mlflow from this PR

# mlflow
pip install git+https://github.com/mlflow/mlflow.git@refs/pull/18702/merge
# mlflow-skinny
pip install git+https://github.com/mlflow/mlflow.git@refs/pull/18702/merge#subdirectory=libs/skinny

For Databricks, use the following command:

%sh curl -LsSf https://raw.githubusercontent.com/mlflow/mlflow/HEAD/dev/install-skinny.sh | sh -s pull/18702/merge

Related Issues/PRs

#xxx

What changes are proposed in this pull request?

Updates the return handling in the litellm handler to process the updated tuple return type introduced by recent changes, and adds a set of regression tests to prevent this from recurring.
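The change described above can be sketched as follows. The function name mirrors the one named in the PR, but its body and return values are illustrative stand-ins, not MLflow source:

```python
# Illustrative sketch of the bug and fix; _invoke_litellm_and_handle_tools
# here is a stand-in, not the real MLflow implementation.

def _invoke_litellm_and_handle_tools():
    # After the recent refactor this returns (response, cost)
    # instead of just the response.
    return {"choices": ["ok"]}, 0.0042

# Before the fix: the caller treated the whole tuple as the response,
# so downstream code that expected a response object received a tuple.
broken = _invoke_litellm_and_handle_tools()
assert isinstance(broken, tuple)

# After the fix: unpack the tuple, discarding the cost measurement
# (the PR leaves a TODO about the discarded cost).
response, _ = _invoke_litellm_and_handle_tools()
assert isinstance(response, dict)
```

Discarding the second element with `_` preserves the caller's existing contract (a plain response object) while the refactored helper also reports cost.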

How is this PR tested?

  • Existing unit/integration tests
  • New unit/integration tests
  • Manual tests

Does this PR require documentation update?

  • No. You can skip the rest of this section.
  • Yes. I've updated:
    • Examples
    • API references
    • Instructions

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/tracking: Tracking Service, tracking client APIs, autologging
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
  • area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
  • area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
  • area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
  • area/projects: MLproject format, project running backends
  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/build: Build and test infrastructure for MLflow
  • area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:

  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?

Yes should be selected for bug fixes, documentation updates, and other small changes. No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
  • Minor release: a release that increments the second part of the version number (e.g., 1.2.0 -> 1.3.0).
    Bug fixes, doc updates and new features usually go into minor releases.
  • Patch release: a release that increments the third part of the version number (e.g., 1.2.0 -> 1.2.1).
    Bug fixes and doc updates usually go into patch releases.
  • Yes (this PR will be cherry-picked and included in the next patch release)
  • No (this PR will be included in the next minor release)

Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com>
@github-actions github-actions bot added the v3.6.0, area/evaluation (MLflow Evaluation), and rn/none (List under Small Changes in Changelogs) labels Nov 6, 2025
@BenWilson2 BenWilson2 requested a review from B-Step62 November 6, 2025 00:36

github-actions bot commented Nov 6, 2025

Documentation preview for 65400c4 is available at:


Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com>
@B-Step62 B-Step62 mentioned this pull request Nov 6, 2025
@BenWilson2 BenWilson2 added the team-review Trigger a team review request label Nov 6, 2025
BenWilson2 and others added 2 commits November 6, 2025 16:54
Resolved conflicts by removing legacy utils.py files that were
refactored into utils/ package structure in both branches.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com>
The fix for properly unpacking the tuple return value from
_invoke_litellm_and_handle_tools was lost during the merge
because master refactored utils.py into utils/ directory.

This commit re-applies:
- Tuple unpacking: response, _ = _invoke_litellm_and_handle_tools(...)
- TODO comment about discarded cost measurement
- Tests for get_chat_completions_with_structured_output

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com>

harupy commented Nov 7, 2025

/review


✅ Review completed.

Review Output

I've reviewed PR #18702 "Fix return handling for scorer utils" and found 2 style guide violations related to mock assertions in the test files.

Issues Found

Both issues are in tests/genai/judges/utils/test_invocation_utils.py and relate to the Python style guide requirement that "every mocked function must have an assertion to verify it was invoked correctly":

  1. Line 920 (test_get_chat_completions_with_structured_output): Missing explicit mock assertion for mock_completion
  2. Line 985 (test_get_chat_completions_with_structured_output_with_trace): Should use mock's built-in assertion methods instead of just checking call_count
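The pattern the review asks for can be illustrated with `unittest.mock`. The function under test and its arguments here are hypothetical stand-ins, not the actual test code from the PR:

```python
from unittest import mock

def get_chat_completions(client):
    # Hypothetical function under test; stands in for the real helper.
    return client.completion(
        model="gpt-4o", messages=[{"role": "user", "content": "hi"}]
    )

def test_get_chat_completions():
    mock_client = mock.MagicMock()
    get_chat_completions(mock_client)
    # Preferred: use the mock's built-in assertion helper, which verifies
    # both that the call happened and what it was called with, rather than
    # only checking mock_client.completion.call_count == 1.
    mock_client.completion.assert_called_once_with(
        model="gpt-4o", messages=[{"role": "user", "content": "hi"}]
    )

test_get_chat_completions()
```

`assert_called_once_with` fails with a descriptive diff if the arguments drift, whereas a bare `call_count` check would still pass when the call is made with the wrong parameters.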

Summary

The PR correctly fixes the tuple unpacking bug in invocation_utils.py and adds comprehensive regression tests. The prompt improvements in builtin_scorers.py are well-structured and improve clarity. The only issues are minor style guide violations in the new tests related to mock assertions.

Review comments have been posted at:

@mlflow mlflow deleted a comment from github-actions bot Nov 7, 2025
@mlflow mlflow deleted a comment from github-actions bot Nov 7, 2025
@B-Step62 B-Step62 left a comment

LGTM!

@BenWilson2 BenWilson2 added this pull request to the merge queue Nov 7, 2025
Merged via the queue into mlflow:master with commit 45d822c Nov 7, 2025
46 of 48 checks passed
@BenWilson2 BenWilson2 deleted the fix-builtin-judges-tuple-bug branch November 7, 2025 01:42
B-Step62 pushed a commit to B-Step62/mlflow that referenced this pull request Nov 7, 2025
Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com>
Co-authored-by: Claude <noreply@anthropic.com>
@github-actions github-actions bot added v3.6.1 and removed v3.6.0 labels Nov 8, 2025
B-Step62 pushed a commit to B-Step62/mlflow that referenced this pull request Nov 11, 2025
Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com>
Co-authored-by: Claude <noreply@anthropic.com>
B-Step62 pushed a commit that referenced this pull request Nov 11, 2025
Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com>
Co-authored-by: Claude <noreply@anthropic.com>
@B-Step62 B-Step62 added v3.6.0 and removed v3.6.1 labels Nov 11, 2025

Labels

  • area/evaluation: MLflow Evaluation
  • rn/none: List under Small Changes in Changelogs
  • team-review: Trigger a team review request
  • v3.6.0
