Add specific references for correctness scorers #19472
BenWilson2 merged 2 commits into mlflow:master
Conversation
Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com>
smoorjani left a comment:
LGTM. Left some small comments to address before merging, and one possible discussion point/follow-up.
mlflow/genai/judges/builtin.py (Outdated)

    from mlflow.genai.judges import is_correct
    # Response contains the expected fact - correct

Comment: this one uses expected response
mlflow/genai/judges/builtin.py (Outdated)

    Example:
    The following example shows how to evaluate whether the response is correct.
    The following example shows how to evaluate whether the response supports

Comment: nit: example shows both expected facts and response
    .. note::
        This judge checks if expected facts are **supported by** the response, not whether
        the response is **equivalent to** the expected output. The response may contain

Comment: not blocking this PR, but this makes me think we should be more strict - major facts that aren't in the expectations probably shouldn't be correct, especially in the case of expected response (maybe this behavior is OK for expected facts). WDYT? cc @AveshCSingh @alkispoly-db
mlflow/genai/judges/builtin.py (Outdated)

    LLM judge determines whether the expected facts are supported by the response.

    This judge evaluates if the facts specified in ``expected_facts`` or ``expected_response``
    are contained in or supported by the model's response. It answers: "Does the response

Comment: nit: the "It answers: ..." seems a bit redundant and doesn't mention expected response - wdyt of removing this?
Pull request overview
This PR updates the documentation and UI tooltips for the Correctness scorer to clarify its evaluation behavior. The changes emphasize that the scorer checks whether expected facts are supported by the model's response, rather than checking for exact equivalence.
Key Changes
- Clarified that Correctness evaluates whether expected facts are contained in or supported by the response
- Added explicit note distinguishing Correctness (fact support) from Equivalence (semantic equivalence)
- Updated UI tooltip to reflect the more precise language about fact support
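The distinction this PR documents can be illustrated with a toy, non-LLM sketch. The real Correctness scorer uses an LLM judge, not string matching, and the helper names below are hypothetical; the point is only to show why "expected facts are supported by the response" and "response is equivalent to the expected output" can disagree:

```python
# Toy illustration only: the actual is_correct judge delegates to an LLM.
# This substring-based sketch just demonstrates the semantic difference
# between fact support and equivalence.

def facts_supported(response: str, expected_facts: list[str]) -> bool:
    """Fact support: every expected fact must appear in the response."""
    return all(fact.lower() in response.lower() for fact in expected_facts)

def is_equivalent(response: str, expected_response: str) -> bool:
    """Equivalence: the response must match the expected output."""
    return response.strip().lower() == expected_response.strip().lower()

response = "MLflow is open source and also supports model registry features."
facts = ["MLflow is open source"]
expected = "MLflow is open source."

# Fact support passes even though the response adds extra information...
print(facts_supported(response, facts))   # True
# ...while strict equivalence fails for the same response.
print(is_equivalent(response, expected))  # False
```

This is also the behavior smoorjani's follow-up comment questions: under fact support, extra facts in the response never count against it.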
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated no comments.
Show a summary per file
| File | Description |
|---|---|
| mlflow/server/js/src/lang/default/en.json | Updated internationalization entry for Correctness tooltip from "Is app's response correct compared to ground-truth?" to "Are the expected facts supported by the response?" |
| mlflow/server/js/src/experiment-tracking/pages/experiment-scorers/llmScorerUtils.tsx | Updated UI hint text for Correctness template to match the new language |
| mlflow/genai/scorers/builtin_scorers.py | Enhanced docstring with a detailed explanation, a note distinguishing it from the Equivalence scorer, and an updated description field |
| mlflow/genai/judges/builtin.py | Updated is_correct function docstring to clarify it checks fact support, added an explanatory note, and improved examples |
| docs/docs/genai/eval-monitor/scorers/llm-judge/predefined.mdx | Updated table entry to use the new language "Are the expected facts supported by the app's response?" |
Related Issues/PRs
#xxx

What changes are proposed in this pull request?
Updates docstring and UI tooltips to make the language surrounding the behavior of the correctness scorer a bit more precise.
How is this PR tested?
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/tracking: Tracking Service, tracking client APIs, autologging
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
- area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
- area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
- area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
- area/projects: MLproject format, project running backends
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?
"Yes" should be selected for bug fixes, documentation updates, and other small changes. "No" should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
- Bug fixes, doc updates, and new features usually go into minor releases.
- Bug fixes and doc updates usually go into patch releases.