
[ML-58760] Introduce Summarization Builtin Judge#19225

Merged
xsh310 merged 1 commit into mlflow:master from xsh310:stack/ML-58760-Introduce-Summarization-Builtin-Judge
Dec 9, 2025

Conversation


@xsh310 xsh310 commented Dec 4, 2025

🥞 Stacked PR

Use this link to review incremental changes.


What changes are proposed in this pull request?

Adds a new single-turn built-in judge that measures the quality of a summary of a document. The new judge checks four aspects of the summarization: faithfulness, coverage, conciseness, and coherence.
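As a rough illustration of what a single-turn summarization judge does, the flow is: build one prompt containing the document and the summary, ask a model to rate each aspect, and map the ratings to a verdict. The prompt text, the `call_model` function, and the output schema below are placeholders for this sketch, not the PR's actual implementation (`call_model` is stubbed so the sketch runs without an LLM endpoint):

```python
# Illustrative single-turn summarization judge. The prompt wording, the
# JSON schema, and call_model are all assumptions for this sketch; a real
# judge would call an LLM endpoint where call_model is stubbed.
import json

PROMPT_TEMPLATE = (
    "Rate the summary of the document on faithfulness, coverage, "
    "conciseness, and coherence. Respond as JSON with boolean fields.\n"
    "Document: {document}\nSummary: {summary}"
)

def call_model(prompt: str) -> str:
    # Stubbed model response so the sketch is runnable end to end.
    return json.dumps(
        {"faithfulness": True, "coverage": True, "conciseness": True, "coherence": True}
    )

def judge_summarization(document: str, summary: str) -> dict:
    # Single turn: one prompt in, one structured rating out.
    prompt = PROMPT_TEMPLATE.format(document=document, summary=summary)
    ratings = json.loads(call_model(prompt))
    ratings["verdict"] = "yes" if all(ratings.values()) else "no"
    return ratings
```

With the stub above, `judge_summarization("doc", "summary")` returns all-pass ratings and a "yes" verdict; in practice the verdict would vary with the model's per-aspect judgments.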

How is this PR tested?

  • Existing unit/integration tests
  • New unit/integration tests
  • Manual tests

Manual Testing

Tested with a small set of 20 document/summary pairs:
https://e2-dogfood.staging.cloud.databricks.com/editor/notebooks/659976464234660?o=6051921418418893

The default model achieved the following metrics:
  • Accuracy: 0.8889
  • Precision: 0.8182
  • Recall: 1.0000
  • F1 Score: 0.9000

Tested with a competitor's prompt using the same model:
  • Accuracy: 0.6500
  • Precision: 0.6000
  • Recall: 0.9000
  • F1 Score: 0.7200
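These judge-quality numbers are standard confusion-matrix metrics over judge-vs-ground-truth labels. The counts below are back-computed assumptions, not stated in the PR: a recall of 1.0 implies zero false negatives, and an accuracy of 0.8889 is consistent with 16 of 18 scored pairs (TP=9, FP=2, FN=0, TN=7).

```python
# Back-computed confusion-matrix counts for the default-model run. These
# counts (TP=9, FP=2, FN=0, TN=7 over 18 scored pairs) are an assumption
# reverse-engineered from the reported aggregates, not stated in the PR.
tp, fp, fn, tn = 9, 2, 0, 7

accuracy = (tp + tn) / (tp + fp + fn + tn)          # 16/18 ~ 0.8889
precision = tp / (tp + fp)                          # 9/11  ~ 0.8182
recall = tp / (tp + fn)                             # 9/9   = 1.0
f1 = 2 * precision * recall / (precision + recall)  # ~ 0.9000
```

The competitor-prompt run is similarly consistent with TP=9, FP=6, FN=1, TN=4 over 20 pairs (precision 9/15 = 0.60, recall 9/10 = 0.90, accuracy 13/20 = 0.65, F1 = 0.72).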

Does this PR require documentation update?

  • No. You can skip the rest of this section.
  • Yes. I've updated:
    • Examples
    • API references
    • Instructions

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/tracking: Tracking Service, tracking client APIs, autologging
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
  • area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
  • area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
  • area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
  • area/projects: MLproject format, project running backends
  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/build: Build and test infrastructure for MLflow
  • area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:

  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?

Yes should be selected for bug fixes, documentation updates, and other small changes. No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
  • Minor release: a release that increments the second part of the version number (e.g., 1.2.0 -> 1.3.0).
    Bug fixes, doc updates and new features usually go into minor releases.
  • Patch release: a release that increments the third part of the version number (e.g., 1.2.0 -> 1.2.1).
    Bug fixes and doc updates usually go into patch releases.
  • Yes (this PR will be cherry-picked and included in the next patch release)
  • No (this PR will be included in the next minor release)

UserFrustration(),
ConversationCompleteness(),
Completeness(),
Summarization(),
xsh310 (Collaborator, Author):
I'm not sure whether this should be added to the get_all_scorers list here because, unlike the other generic built-in judges, Summarization's input and output are very specific (document and summary). It might be confusing if users who currently evaluate with get_all_scorers now see the Summarization judge fail on all of their generic agent requests/responses.

Collaborator:
Agreed - let's not add it in here.

from mlflow.genai.scorers import Summarization

assessment = Summarization(name="my_summarization_check")(
    inputs={"text": "MLflow is an open-source platform for managing ML workflows..."},
xsh310 (Collaborator, Author):
qq: Is there a requirement on which keys we allow in the inputs?

@github-actions bot added the v3.7.0, area/evaluation (MLflow Evaluation), and rn/none (List under Small Changes in Changelogs) labels Dec 4, 2025
@xsh310 xsh310 removed the v3.7.0 label Dec 4, 2025
@smoorjani (Collaborator) left a comment:
LGTM!

I think there's a broader question about the precedent we are setting for this metric: generally we think of our scorers as orthogonal, but this one mixes several metrics. One alternative is to introduce a separate scorer for each factor (e.g., faithfulness/groundedness, conciseness, coverage, etc.) and then add a function like get_summarization_scorers(), but the question would be how to aggregate those scores. However, since this is an experimental API, we don't need to block on this.
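The aggregation question raised here can be sketched. Assuming hypothetical per-aspect boolean scores (the aspect names match the PR description, but the dict shape, the function name, and the "pass only if every aspect passes" policy are illustrative, not mlflow APIs), one simple aggregation is:

```python
# Hypothetical aggregation of per-aspect summarization scores into a single
# pass/fail verdict. The function name and dict shape are assumptions for
# this sketch, not actual mlflow APIs.
ASPECTS = ("faithfulness", "coverage", "conciseness", "coherence")

def aggregate_summarization_scores(aspect_scores: dict) -> str:
    """Return "yes" only when every evaluated aspect passes."""
    missing = [a for a in ASPECTS if a not in aspect_scores]
    if missing:
        raise ValueError(f"missing aspect scores: {missing}")
    return "yes" if all(aspect_scores[a] for a in ASPECTS) else "no"
```

Other policies (weighted averages, majority vote, reporting each aspect separately) trade off differently between interpretability and a single headline score, which is the open design question above.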

Summarization evaluates whether a summarization output is factually correct, grounded in
the input, and does not make any assumptions not present in the input.

This scorer focuses on three key aspects:
Collaborator:
nit: let's not discuss implementation details in the docstring, as this can change and we may forget to update it.


@xsh310 force-pushed the stack/ML-58760-Introduce-Summarization-Builtin-Judge branch from a06ea21 to 4523e9c (December 5, 2025 21:16)

xsh310 commented Dec 5, 2025

Rebased and addressed @smoorjani's comments.

@xsh310 force-pushed the stack/ML-58760-Introduce-Summarization-Builtin-Judge branch 2 times, most recently from d4871a7 to 6941d5e (December 8, 2025 05:04)
Signed-off-by: Xiang Shen <xshen.shc@gmail.com>
@xsh310 force-pushed the stack/ML-58760-Introduce-Summarization-Builtin-Judge branch from 6941d5e to 2892bf6 (December 9, 2025 05:02)

github-actions bot commented Dec 9, 2025

Documentation preview for 2892bf6 is available at:

More info
  • Ignore this comment if this PR does not change the documentation.
  • The preview is updated when a new commit is pushed to this PR.
  • This comment was created by this workflow run.
  • The documentation was built by this workflow run.

@xsh310 enabled auto-merge December 9, 2025 05:41
@xsh310 added this pull request to the merge queue Dec 9, 2025
Merged via the queue into mlflow:master with commit 0a9e742 Dec 9, 2025
46 checks passed
@xsh310 deleted the stack/ML-58760-Introduce-Summarization-Builtin-Judge branch December 9, 2025 05:45