
Support mlflow.genai.evaluate for multi-turn scorers on Datasets#19039

Merged
smoorjani merged 22 commits into mlflow:master from AveshCSingh:multi-turn-mlflow-genai-eval_dataset
Nov 26, 2025
Conversation


@AveshCSingh (Collaborator) commented Nov 25, 2025

🛠 DevTools 🛠


Install mlflow from this PR

# mlflow
pip install git+https://github.com/mlflow/mlflow.git@refs/pull/19039/merge
# mlflow-skinny
pip install git+https://github.com/mlflow/mlflow.git@refs/pull/19039/merge#subdirectory=libs/skinny

For Databricks, use the following command:

%sh curl -LsSf https://raw.githubusercontent.com/mlflow/mlflow/HEAD/dev/install-skinny.sh | sh -s pull/19039/merge

Related Issues/PRs

#18971

What changes are proposed in this pull request?

This PR enables session-level scorers (multi-turn scorers) to work correctly with EvaluationDataset. Previously, session-level scorers were silently ignored when evaluating with datasets, because session metadata from the original traces was not preserved.

It builds on top of "Support multi-turn evaluation in mlflow.genai.evaluate for DataFrame and list input".
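The core of the fix can be pictured with a small, self-contained sketch (plain Python; the field names `session_id`, `metadata`, and the helper `build_eval_rows` are illustrative, not MLflow's actual API): when evaluation rows are built from traces, the session ID from each trace's metadata must be carried along, or session-level scorers have nothing to regroup by.

```python
# Hypothetical sketch of the bug being fixed: preserving session metadata
# when converting traces into dataset-style evaluation rows. All names here
# are illustrative stand-ins, not MLflow internals.

def build_eval_rows(traces):
    """Turn trace dicts into evaluation rows, preserving session_id."""
    rows = []
    for trace in traces:
        row = {"inputs": trace["inputs"], "outputs": trace["outputs"]}
        # Without this line, session-level scorers cannot regroup rows
        # into conversations and end up silently skipped.
        row["session_id"] = trace["metadata"].get("session_id")
        rows.append(row)
    return rows

traces = [
    {"inputs": "hi", "outputs": "hello", "metadata": {"session_id": "s1"}},
    {"inputs": "bye", "outputs": "goodbye", "metadata": {"session_id": "s1"}},
]
rows = build_eval_rows(traces)
```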

How is this PR tested?

  • Existing unit/integration tests
  • New unit/integration tests
  • Manual tests

Does this PR require documentation update?

  • No. You can skip the rest of this section.
  • Yes. I've updated:
    • Examples
    • API references
    • Instructions

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/tracking: Tracking Service, tracking client APIs, autologging
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
  • area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
  • area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
  • area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
  • area/projects: MLproject format, project running backends
  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/build: Build and test infrastructure for MLflow
  • area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:

  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?

Yes should be selected for bug fixes, documentation updates, and other small changes. No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
  • Minor release: a release that increments the second part of the version number (e.g., 1.2.0 -> 1.3.0).
    Bug fixes, doc updates and new features usually go into minor releases.
  • Patch release: a release that increments the third part of the version number (e.g., 1.2.0 -> 1.2.1).
    Bug fixes and doc updates usually go into patch releases.
  • Yes (this PR will be cherry-picked and included in the next patch release)
  • No (this PR will be included in the next minor release)

AveshCSingh and others added 16 commits November 21, 2025 19:18
This PR implements multi-turn evaluation capability for mlflow.genai.evaluate,
enabling evaluation of entire conversation sessions grouped by session_id.

Key changes:

1. Environment Variable (mlflow/environment_variables.py):
   - Added MLFLOW_ENABLE_MULTI_TURN_EVALUATION flag (default: False)
   - Feature-gated for safe rollout and testing

2. Validation Logic (mlflow/genai/evaluation/utils.py):
   - Added _validate_multi_turn_input() to validate multi-turn configuration
   - Checks: feature flag enabled, no predict_fn, DataFrame input required
   - Added FEATURE_DISABLED import for proper error handling

3. Multi-Turn Evaluation (mlflow/genai/evaluation/harness.py):
   - Added _evaluate_multi_turn_scorers() to evaluate session groups
   - Modified run() to classify scorers and handle multi-turn evaluation
   - Groups traces by session_id, evaluates on session groups
   - Logs assessments to chronologically first trace of each session
   - Adds session_id to assessment metadata

4. Integration (mlflow/genai/evaluation/base.py):
   - Added validation call in evaluate() function
   - Imports _validate_multi_turn_input

5. Tests (tests/genai/evaluate/test_utils.py):
   - Added 6 comprehensive validation tests
   - Tests feature flag, predict_fn rejection, DataFrame requirement
   - Tests mixed single-turn and multi-turn scorers

Implementation follows the multi-turn evaluation plan (PR mlflow#3 + PR mlflow#4 combined).
All tests passing (60 passed, 3 skipped).
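The grouping and "log to the chronologically first trace" steps described in item 3 above can be sketched as follows. This is a hedged, plain-Python stand-in: the real MLflow helpers operate on Trace objects, whereas here simple dicts with assumed `session_id` and `timestamp_ms` keys illustrate the same logic.

```python
# Illustrative stand-ins for the session-grouping logic described above.
# Dict keys ("session_id", "timestamp_ms") are assumptions for the sketch.
from collections import defaultdict

def group_traces_by_session(traces):
    """Group traces into {session_id: [traces]} buckets."""
    groups = defaultdict(list)
    for trace in traces:
        groups[trace["session_id"]].append(trace)
    return dict(groups)

def get_first_trace_in_session(session_traces):
    """Chronologically first trace in a session: session-level
    assessments are logged against this trace."""
    return min(session_traces, key=lambda t: t["timestamp_ms"])

traces = [
    {"id": "t2", "session_id": "s1", "timestamp_ms": 200},
    {"id": "t1", "session_id": "s1", "timestamp_ms": 100},
    {"id": "t3", "session_id": "s2", "timestamp_ms": 150},
]
groups = group_traces_by_session(traces)
first = get_first_trace_in_session(groups["s1"])
```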

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
This commit addresses several TODO items to improve the multi-turn evaluation
implementation:

- Remove leading underscores from exported utility functions
  - Renamed _classify_scorers -> classify_scorers
  - Renamed _group_traces_by_session -> group_traces_by_session
  - Renamed _get_first_trace_in_session -> get_first_trace_in_session

- Optimize trace retrieval by avoiding redundant get_trace call
  - Find matching eval_result from existing list instead of fetching trace

- Replace hardcoded "session_id" string with TraceMetadataKey.TRACE_SESSION constant
  - Improves maintainability and consistency with other metadata keys

- Rename validation function for clarity
  - Renamed _validate_multi_turn_input -> _validate_session_level_input
  - Updated terminology from "multi_turn" to "session_level" for consistency

- Remove unused data parameter from validation function
  - Simplified function signature by removing parameter that was never used

All tests pass successfully after these changes.
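The scorer classification mentioned above (splitting scorers into single-turn and session-level groups before evaluation) can be sketched like this. The `is_session_level` attribute and `FakeScorer` class are hypothetical; the real `classify_scorers` inspects MLflow scorer objects.

```python
# Hedged sketch of classify_scorers: a boolean attribute stands in for
# however MLflow actually detects session-level scorers.

def classify_scorers(scorers):
    """Split scorers into (single_turn, session_level) lists."""
    single_turn, session_level = [], []
    for s in scorers:
        if getattr(s, "is_session_level", False):
            session_level.append(s)
        else:
            single_turn.append(s)
    return single_turn, session_level

class FakeScorer:
    def __init__(self, name, is_session_level=False):
        self.name = name
        self.is_session_level = is_session_level

single, session = classify_scorers(
    [FakeScorer("relevance"), FakeScorer("coherence", is_session_level=True)]
)
```

Mixed scorer lists are then evaluated in two passes: single-turn scorers per trace, session-level scorers per session group.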

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
@github-actions bot added the area/evaluation (MLflow Evaluation) and rn/none (List under Small Changes in Changelogs) labels Nov 25, 2025
Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>

github-actions bot commented Nov 25, 2025

Documentation preview for 526985e is available at:

More info
  • Ignore this comment if this PR does not change the documentation.
  • The preview is updated when a new commit is pushed to this PR.
  • This comment was created by this workflow run.
  • The documentation was built by this workflow run.

AveshCSingh and others added 3 commits November 25, 2025 23:40
…nai-eval_dataset

Signed-off-by: Avesh Singh <aveshcsingh@gmail.com>
Signed-off-by: Samraj Moorjani <samraj.moorjani@databricks.com>
@smoorjani changed the title from "[wip] Support mlflow.genai.evaluate for multi-turn scorers on Datasets" to "Support mlflow.genai.evaluate for multi-turn scorers on Datasets" Nov 26, 2025
.
Signed-off-by: Samraj Moorjani <samraj.moorjani@databricks.com>

@serena-ruan left a comment


LGTM!

@smoorjani smoorjani added this pull request to the merge queue Nov 26, 2025
Merged via the queue into mlflow:master with commit 45461fc Nov 26, 2025
50 of 54 checks passed

Labels

area/evaluation (MLflow Evaluation), rn/none (List under Small Changes in Changelogs), v3.7.0


3 participants