Support mlflow.genai.evaluate for multi-turn scorers on Datasets#19039
Merged
smoorjani merged 22 commits into mlflow:master (Nov 26, 2025)
Conversation
This PR implements multi-turn evaluation capability for mlflow.genai.evaluate, enabling evaluation of entire conversation sessions grouped by session_id.

Key changes:

1. Environment variable (mlflow/environment_variables.py):
   - Added MLFLOW_ENABLE_MULTI_TURN_EVALUATION flag (default: False)
   - Feature-gated for safe rollout and testing
2. Validation logic (mlflow/genai/evaluation/utils.py):
   - Added _validate_multi_turn_input() to validate the multi-turn configuration
   - Checks: feature flag enabled, no predict_fn, DataFrame input required
   - Added FEATURE_DISABLED import for proper error handling
3. Multi-turn evaluation (mlflow/genai/evaluation/harness.py):
   - Added _evaluate_multi_turn_scorers() to evaluate session groups
   - Modified run() to classify scorers and handle multi-turn evaluation
   - Groups traces by session_id and evaluates on session groups
   - Logs assessments to the chronologically first trace of each session
   - Adds session_id to assessment metadata
4. Integration (mlflow/genai/evaluation/base.py):
   - Added a validation call in the evaluate() function; imports _validate_multi_turn_input
5. Tests (tests/genai/evaluate/test_utils.py):
   - Added six validation tests covering the feature flag, predict_fn rejection, the DataFrame requirement, and mixed single-turn and multi-turn scorers

The implementation follows the multi-turn evaluation plan (PR mlflow#3 + PR mlflow#4 combined). All tests pass (60 passed, 3 skipped).

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
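The session-grouping and assessment-logging steps described above can be sketched roughly as follows. This is a minimal illustration in which plain dictionaries stand in for MLflow Trace objects, and the two helpers are simplified stand-ins for the ones added in harness.py, not the actual implementations:

```python
from collections import defaultdict


def group_traces_by_session(traces):
    """Group trace records by their session_id metadata; traces without one are skipped."""
    sessions = defaultdict(list)
    for trace in traces:
        session_id = trace["metadata"].get("session_id")
        if session_id is not None:
            sessions[session_id].append(trace)
    return dict(sessions)


def get_first_trace_in_session(session_traces):
    """Return the chronologically first trace; per the PR, assessments are logged there."""
    return min(session_traces, key=lambda t: t["timestamp_ms"])


# Toy traces: two sessions, listed out of chronological order.
traces = [
    {"id": "t2", "timestamp_ms": 200, "metadata": {"session_id": "s1"}},
    {"id": "t1", "timestamp_ms": 100, "metadata": {"session_id": "s1"}},
    {"id": "t3", "timestamp_ms": 300, "metadata": {"session_id": "s2"}},
]

sessions = group_traces_by_session(traces)
first = get_first_trace_in_session(sessions["s1"])  # the trace with id "t1"
```

Logging to one designated trace per session keeps each session's assessments in a single, predictable place instead of scattering them across every turn.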
This commit addresses several TODO items to improve the multi-turn evaluation implementation:

- Remove leading underscores from exported utility functions:
  - _classify_scorers -> classify_scorers
  - _group_traces_by_session -> group_traces_by_session
  - _get_first_trace_in_session -> get_first_trace_in_session
- Optimize trace retrieval by avoiding a redundant get_trace call: find the matching eval_result in the existing list instead of fetching the trace
- Replace the hardcoded "session_id" string with the TraceMetadataKey.TRACE_SESSION constant, improving maintainability and consistency with other metadata keys
- Rename the validation function for clarity: _validate_multi_turn_input -> _validate_session_level_input, updating terminology from "multi_turn" to "session_level" for consistency
- Remove the unused data parameter from the validation function, simplifying its signature

All tests pass after these changes.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
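The classify_scorers rename above refers to splitting scorers into per-trace and session-level groups before dispatch. A rough sketch of such a classifier, assuming each scorer exposes a boolean flag (the attribute name is_session_level here is illustrative, not MLflow's actual API):

```python
def classify_scorers(scorers):
    """Split scorers into single-turn and session-level groups.

    Assumes each scorer exposes a boolean ``is_session_level`` attribute;
    scorers without the attribute are treated as single-turn.
    """
    single_turn, session_level = [], []
    for scorer in scorers:
        if getattr(scorer, "is_session_level", False):
            session_level.append(scorer)
        else:
            single_turn.append(scorer)
    return single_turn, session_level


class _Scorer:
    """Stand-in for an MLflow scorer object, for illustration only."""

    def __init__(self, name, is_session_level=False):
        self.name = name
        self.is_session_level = is_session_level


single, session = classify_scorers([_Scorer("safety"), _Scorer("coherence", True)])
```

Classifying once up front lets the harness run the single-turn scorers per trace as before and route only the session-level ones through the new session-group path.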
Contributor
Documentation preview for 526985e is available.
…nai-eval_dataset Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
Signed-off-by: Samraj Moorjani <samraj.moorjani@databricks.com>
serena-ruan reviewed (Nov 26, 2025)
smoorjani approved these changes (Nov 26, 2025)
Related Issues/PRs
#18971
What changes are proposed in this pull request?
This PR enables session-level scorers (multi-turn scorers) to work correctly with
EvaluationDataset. Previously, session-level scorers were silently ignored when evaluating with datasets, because session metadata from the original traces was not preserved.
It is built on top of "Support multi-turn evaluation in mlflow.genai.evaluate for DataFrame and list input".
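The bug described above, session metadata being dropped when traces are turned into dataset records, can be illustrated with a minimal sketch. Plain dicts stand in for Trace objects and EvaluationDataset records, and the field names are illustrative rather than MLflow's actual schema:

```python
def to_dataset_record(trace):
    """Convert a trace into a dataset record, preserving session metadata.

    If the session identifier is not copied into the record, session-level
    scorers have nothing to group on, so they are silently skipped -- the
    failure mode this PR fixes.
    """
    return {
        "inputs": trace["inputs"],
        "outputs": trace["outputs"],
        # The essential step: carry the session identifier through.
        "session_id": trace["metadata"].get("session_id"),
    }


trace = {
    "inputs": {"question": "hi"},
    "outputs": {"answer": "hello"},
    "metadata": {"session_id": "s1"},
}
record = to_dataset_record(trace)
```

With the session identifier present on every record, the evaluation harness can re-group dataset rows into sessions exactly as it does for raw traces.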
How is this PR tested?
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/tracking: Tracking Service, tracking client APIs, autologging
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
- area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
- area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
- area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
- area/projects: MLproject format, project running backends
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?
Yes should be selected for bug fixes, documentation updates, and other small changes. No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
Bug fixes, doc updates and new features usually go into minor releases.
Bug fixes and doc updates usually go into patch releases.