
Add specific references for correctness scorers#19472

Merged
BenWilson2 merged 2 commits into mlflow:master from BenWilson2:correctness-fix
Dec 18, 2025

Conversation

BenWilson2 (Member) commented Dec 17, 2025

🛠 DevTools 🛠


Install mlflow from this PR

# mlflow
pip install git+https://github.com/mlflow/mlflow.git@refs/pull/19472/merge
# mlflow-skinny
pip install git+https://github.com/mlflow/mlflow.git@refs/pull/19472/merge#subdirectory=libs/skinny

For Databricks, use the following command:

%sh curl -LsSf https://raw.githubusercontent.com/mlflow/mlflow/HEAD/dev/install-skinny.sh | sh -s pull/19472/merge

Related Issues/PRs

#xxx

What changes are proposed in this pull request?

Updates the docstring and UI tooltips to make the language describing the behavior of the correctness scorer more precise.
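As a rough illustration of the distinction this PR clarifies (a toy sketch with plain string checks standing in for the actual LLM judge, which this code does not implement): fact support asks whether each expected fact appears somewhere in the response, while equivalence asks whether the response matches the expected output as a whole.

```python
# Toy illustration of the semantic distinction the updated docs describe:
#   "correctness"  -> are the expected facts supported by the response?
#   "equivalence"  -> is the response equivalent to the expected output?
# The real scorers use an LLM judge; substring/equality checks stand in here.

def facts_supported(response: str, expected_facts: list[str]) -> bool:
    """True if every expected fact appears somewhere in the response."""
    return all(fact.lower() in response.lower() for fact in expected_facts)

def is_equivalent(response: str, expected_response: str) -> bool:
    """True only if the response matches the expected output as a whole."""
    return response.strip().lower() == expected_response.strip().lower()

response = "MLflow was open-sourced by Databricks in 2018 and tracks ML experiments."
facts = ["open-sourced by Databricks", "2018"]

# The response supports both expected facts, so "correctness" passes ...
print(facts_supported(response, facts))  # True
# ... even though it is not equivalent to a shorter expected response,
# because it contains additional (correct) information.
print(is_equivalent(response, "MLflow was open-sourced by Databricks in 2018."))  # False
```

This is why a response carrying extra detail can still score as correct under fact support while failing a strict equivalence check.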

How is this PR tested?

  • Existing unit/integration tests
  • New unit/integration tests
  • Manual tests

Does this PR require documentation update?

  • No. You can skip the rest of this section.
  • Yes. I've updated:
    • Examples
    • API references
    • Instructions

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/tracking: Tracking Service, tracking client APIs, autologging
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
  • area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
  • area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
  • area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
  • area/projects: MLproject format, project running backends
  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/build: Build and test infrastructure for MLflow
  • area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:

  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?

Yes should be selected for bug fixes, documentation updates, and other small changes. No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
  • Minor release: a release that increments the second part of the version number (e.g., 1.2.0 -> 1.3.0).
    Bug fixes, doc updates and new features usually go into minor releases.
  • Patch release: a release that increments the third part of the version number (e.g., 1.2.0 -> 1.2.1).
    Bug fixes and doc updates usually go into patch releases.
  • Yes (this PR will be cherry-picked and included in the next patch release)
  • No (this PR will be included in the next minor release)

Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com>
@BenWilson2 BenWilson2 requested review from alkispoly-db, Copilot and smoorjani and removed request for Copilot December 17, 2025 21:03
@github-actions github-actions bot added the area/docs, rn/documentation, and v3.8.0 labels Dec 17, 2025
@BenWilson2 BenWilson2 added the team-review Trigger a team review request label Dec 17, 2025
github-actions bot (Contributor) commented Dec 17, 2025

Documentation preview for 3ffbac7 is available at:

Changed Pages (1)


smoorjani (Collaborator) left a comment

LGTM, left some small comments to address before merging and one possible discussion point/follow-up


from mlflow.genai.judges import is_correct

# Response contains the expected fact - correct
Review comment (Collaborator):
this one uses expected response

Example:

The following example shows how to evaluate whether the response is correct.
The following example shows how to evaluate whether the response supports
Review comment (Collaborator):
nit: example shows both expected facts and response


.. note::
This judge checks if expected facts are **supported by** the response, not whether
the response is **equivalent to** the expected output. The response may contain
Review comment (Collaborator):
not blocking this PR but this makes me think we should be more strict - major facts that aren't in the expectations probably shouldn't be correct, especially in the case of expected response (maybe this behavior is ok for expected facts). WDYT? cc @AveshCSingh @alkispoly-db

LLM judge determines whether the expected facts are supported by the response.

This judge evaluates if the facts specified in ``expected_facts`` or ``expected_response``
are contained in or supported by the model's response. It answers: "Does the response
Review comment (Collaborator):
nit: the "It answers: ..." seems a bit redundant and doesn't mention expected response - wdyt of removing this?

BenWilson2 (Member, Author) replied:
Agreed! Removed :)

Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com>
Copilot AI review requested due to automatic review settings December 18, 2025 03:34
Copilot AI (Contributor) left a comment

Pull request overview

This PR updates the documentation and UI tooltips for the Correctness scorer to clarify its evaluation behavior. The changes emphasize that the scorer checks whether expected facts are supported by the model's response, rather than checking for exact equivalence.

Key Changes

  • Clarified that Correctness evaluates whether expected facts are contained in or supported by the response
  • Added explicit note distinguishing Correctness (fact support) from Equivalence (semantic equivalence)
  • Updated UI tooltip to reflect the more precise language about fact support

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated no comments.

Summary per file:
  • mlflow/server/js/src/lang/default/en.json — Updated the internationalization entry for the Correctness tooltip from "Is app's response correct compared to ground-truth?" to "Are the expected facts supported by the response?"
  • mlflow/server/js/src/experiment-tracking/pages/experiment-scorers/llmScorerUtils.tsx — Updated the UI hint text for the Correctness template to match the new language
  • mlflow/genai/scorers/builtin_scorers.py — Enhanced the docstring with a detailed explanation, a note distinguishing it from the Equivalence scorer, and an updated description field
  • mlflow/genai/judges/builtin.py — Updated the is_correct function docstring to clarify that it checks fact support, with an explanatory note and improved examples
  • docs/docs/genai/eval-monitor/scorers/llm-judge/predefined.mdx — Updated the table entry to use the new language "Are the expected facts supported by the app's response?"


WeichenXu123 (Collaborator) left a comment

LGTM!

@BenWilson2 BenWilson2 added this pull request to the merge queue Dec 18, 2025
Merged via the queue into mlflow:master with commit a692250 Dec 18, 2025
58 of 61 checks passed
@BenWilson2 BenWilson2 deleted the correctness-fix branch December 18, 2025 14:13
WeichenXu123 pushed a commit to WeichenXu123/mlflow that referenced this pull request Dec 19, 2025
Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com>
WeichenXu123 pushed a commit that referenced this pull request Dec 19, 2025
Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com>

Labels

area/docs · rn/documentation · team-review · v3.8.0

Projects

None yet

4 participants