
Bug fix - stringify error messages for session-level judges when constructing assessments #19140

Merged
dbczumar merged 6 commits into mlflow:master from dbczumar:sess_fix
Dec 3, 2025

Conversation

@dbczumar
Collaborator

@dbczumar dbczumar commented Dec 1, 2025

🛠 DevTools 🛠

Open in GitHub Codespaces

Install mlflow from this PR

# mlflow
pip install git+https://github.com/mlflow/mlflow.git@refs/pull/19140/merge
# mlflow-skinny
pip install git+https://github.com/mlflow/mlflow.git@refs/pull/19140/merge#subdirectory=libs/skinny

For Databricks, use the following command:

%sh curl -LsSf https://raw.githubusercontent.com/mlflow/mlflow/HEAD/dev/install-skinny.sh | sh -s pull/19140/merge

Related Issues/PRs

#xxx

What changes are proposed in this pull request?

Bug fix: stringify error messages for session-level judges when constructing assessments. Previously, when a session-level scorer raised an exception, the raw exception object was stored as AssessmentError.error_message, so serializing the assessment via to_proto() failed with TypeError: bad argument type for built-in operation (see the failing test in the review thread below).
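
For context, here is a minimal sketch of the pattern the fix applies. The AssessmentError import path comes from the traceback in the review thread below; the helper build_error_assessment is hypothetical, not the actual MLflow call site.

import traceback

from mlflow.entities.assessment_error import AssessmentError

def build_error_assessment(exc: Exception) -> AssessmentError:
    # Hypothetical helper, intended to be called from an except block.
    # The fix: convert the raised exception to a string before constructing
    # the assessment error. Storing the raw exception object made to_proto()
    # fail, because protobuf string fields only accept str values.
    return AssessmentError(
        error_code="SCORER_ERROR",
        error_message=str(exc),
        stack_trace=traceback.format_exc(),
    )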

How is this PR tested?

  • Existing unit/integration tests
  • New unit/integration tests
  • Manual tests

Does this PR require documentation update?

  • No. You can skip the rest of this section.
  • Yes. I've updated:
    • Examples
    • API references
    • Instructions

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/tracking: Tracking Service, tracking client APIs, autologging
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
  • area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
  • area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
  • area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
  • area/projects: MLproject format, project running backends
  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/build: Build and test infrastructure for MLflow
  • area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:

  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?

"Yes" should be selected for bug fixes, documentation updates, and other small changes. "No" should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

  • Yes (this PR will be cherry-picked and included in the next patch release)
  • No (this PR will be included in the next minor release)

What is a minor/patch release?
  • Minor release: a release that increments the second part of the version number (e.g., 1.2.0 -> 1.3.0). Bug fixes, doc updates and new features usually go into minor releases.
  • Patch release: a release that increments the third part of the version number (e.g., 1.2.0 -> 1.2.1). Bug fixes and doc updates usually go into patch releases.

Signed-off-by: dbczumar <corey.zumar@databricks.com>
@dbczumar dbczumar requested a review from AveshCSingh December 1, 2025 18:32
@dbczumar dbczumar added the v3.7.0 label Dec 1, 2025
@dbczumar dbczumar requested a review from xsh310 December 1, 2025 18:32
@github-actions github-actions bot added the area/evaluation (MLflow Evaluation) and rn/none (List under Small Changes in Changelogs) labels Dec 1, 2025
Signed-off-by: dbczumar <corey.zumar@databricks.com>
Signed-off-by: dbczumar <corey.zumar@databricks.com>
assert "Scorer failed!" in str(feedback.error.error_message)
assert feedback.error.stack_trace is not None

assert feedback.error.to_proto().error_message == "Scorer failed!"
Collaborator Author

Fails on master with:

============================================================================ FAILURES ============================================================================
____________________________________________________ test_evaluate_session_level_scorers_handles_scorer_error ____________________________________________________

    def test_evaluate_session_level_scorers_handles_scorer_error():
        mock_scorer = Mock(spec=mlflow.genai.Scorer)
        mock_scorer.name = "failing_scorer"
        mock_scorer.run.side_effect = ValueError("Scorer failed!")

        session_groups = {
            "session1": [_create_eval_item("trace1", 100)],
        }

        result = evaluate_session_level_scorers([mock_scorer], session_groups)

        # Verify error feedback was created
        assert "trace1" in result
        assert len(result["trace1"]) == 1
        feedback = result["trace1"][0]
        assert feedback.name == "failing_scorer"
        assert feedback.error is not None
        assert feedback.error.error_code == "SCORER_ERROR"
        assert feedback.error.stack_trace is not None

>       assert feedback.error.to_proto().error_message == "Scorer failed!"

feedback   = Feedback(name='failing_scorer', source=AssessmentSource(source_type='CODE', source_id='failing_scorer'), trace_id=None...k.py", line 1173, in _execute_mock_call\n    raise effect\nValueError: Scorer failed!\n')), overrides=None, valid=True)
mock_scorer = <Mock spec='Scorer' id='5520832160'>
result     = defaultdict(<class 'list'>, {'trace1': [Feedback(name='failing_scorer', source=AssessmentSource(source_type='CODE', so...y", line 1173, in _execute_mock_call\n    raise effect\nValueError: Scorer failed!\n')), overrides=None, valid=True)]})
session_groups = {'session1': [EvalItem(request_id='trace1', inputs={}, outputs={}, expectations={}, tags=None, trace=Trace(trace_id=trace1), error_message=None, source=None)]}

test_session_utils.py:429:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = AssessmentError(error_code='SCORER_ERROR', error_message=ValueError('Scorer failed!'), stack_trace='Traceback (most re...ow/lib/python3.10/unittest/mock.py", line 1173, in _execute_mock_call\n    raise effect\nValueError: Scorer failed!\n')

    def to_proto(self):
        error = ProtoAssessmentError()
        error.error_code = self.error_code
        if self.error_message:
>           error.error_message = self.error_message
E           TypeError: bad argument type for built-in operation

error      = error_code: "SCORER_ERROR"

self       = AssessmentError(error_code='SCORER_ERROR', error_message=ValueError('Scorer failed!'), stack_trace='Traceback (most re...ow/lib/python3.10/unittest/mock.py", line 1173, in _execute_mock_call\n    raise effect\nValueError: Scorer failed!\n')

../../../mlflow/entities/assessment_error.py:51: TypeError
-------------------------------------------------------------------- Captured stderr teardown --------------------------------------------------------------------
2025/12/01 10:33:46 INFO mlflow.tracking.fluent: Active model is cleared
====================================================================== slowest 10 durations ======================================================================
0.40s setup    tests/genai/evaluate/test_session_utils.py::test_classify_scorers_all_single_turn
0.01s call     tests/genai/evaluate/test_session_utils.py::test_evaluate_session_level_scorers_handles_scorer_error

(8 durations < 0.005s hidden.  Use -vv to show these durations.)
================================================================== command to run failed tests ===================================================================
pytest 'tests/genai/evaluate/test_session_utils.py::test_evaluate_session_level_scorers_handles_scorer_error'

==================================================================== short test summary info =====================================================================
FAILED | MEM 21.3/64.0 GB | DISK 10.5/926.4 GB test_session_utils.py::test_evaluate_session_level_scorers_handles_scorer_error - TypeError: bad argument type for built-in operation
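
After the fix, the same round trip succeeds. A minimal sketch, assuming AssessmentError's constructor accepts the keyword arguments its repr in the traceback above shows:

from mlflow.entities.assessment_error import AssessmentError

err = AssessmentError(
    error_code="SCORER_ERROR",
    error_message=str(ValueError("Scorer failed!")),  # stringified up front
)
# With a str error_message, the protobuf string-field assignment inside
# to_proto() no longer raises TypeError.
assert err.to_proto().error_message == "Scorer failed!"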

@xsh310
Collaborator

@xsh310 xsh310 left a comment

LGTM!

@github-actions
Contributor

github-actions bot commented Dec 1, 2025

Documentation preview for a1e0e60 is available at:

More info
  • Ignore this comment if this PR does not change the documentation.
  • The preview is updated when a new commit is pushed to this PR.
  • This comment was created by this workflow run.
  • The documentation was built by this workflow run.

Signed-off-by: dbczumar <corey.zumar@databricks.com>
Collaborator

@AveshCSingh AveshCSingh left a comment

Thanks for fixing this, Corey!

dbczumar and others added 2 commits December 3, 2025 13:09
Signed-off-by: dbczumar <corey.zumar@databricks.com>
@dbczumar dbczumar merged commit 78a7086 into mlflow:master Dec 3, 2025
44 of 46 checks passed
BenWilson2 pushed a commit to BenWilson2/mlflow that referenced this pull request Dec 4, 2025
…tructing assessments (mlflow#19140)

Signed-off-by: dbczumar <corey.zumar@databricks.com>
BenWilson2 pushed a commit that referenced this pull request Dec 4, 2025
…tructing assessments (#19140)

Signed-off-by: dbczumar <corey.zumar@databricks.com>

Labels

area/evaluation (MLflow Evaluation), rn/none (List under Small Changes in Changelogs), v3.7.0
