Bug fix - stringify error messages for session-level judges when constructing assessments #19140
Merged
dbczumar merged 6 commits into mlflow:master on Dec 3, 2025
Conversation
dbczumar commented on Dec 1, 2025
| assert "Scorer failed!" in str(feedback.error.error_message) | ||
| assert feedback.error.stack_trace is not None | ||
|
|
||
| assert feedback.error.to_proto().error_message == "Scorer failed!" |
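The assertions wrap `error_message` in `str(...)` because, before the fix, the scorer's raw exception object could end up stored as the message. A quick illustration in plain Python (independent of mlflow) of why stringification matters:

```python
# An exception instance is not a string, but str() recovers its message.
exc = ValueError("Scorer failed!")

print(isinstance(exc, str))  # False: a proto string field rejects this value
print(str(exc))              # Scorer failed!  (safe to store and compare)
```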
Collaborator
Author
Fails on master with:
============================================================================ FAILURES ============================================================================
____________________________________________________ test_evaluate_session_level_scorers_handles_scorer_error ____________________________________________________
def test_evaluate_session_level_scorers_handles_scorer_error():
mock_scorer = Mock(spec=mlflow.genai.Scorer)
mock_scorer.name = "failing_scorer"
mock_scorer.run.side_effect = ValueError("Scorer failed!")
session_groups = {
"session1": [_create_eval_item("trace1", 100)],
}
result = evaluate_session_level_scorers([mock_scorer], session_groups)
# Verify error feedback was created
assert "trace1" in result
assert len(result["trace1"]) == 1
feedback = result["trace1"][0]
assert feedback.name == "failing_scorer"
assert feedback.error is not None
assert feedback.error.error_code == "SCORER_ERROR"
assert feedback.error.stack_trace is not None
> assert feedback.error.to_proto().error_message == "Scorer failed!"
feedback = Feedback(name='failing_scorer', source=AssessmentSource(source_type='CODE', source_id='failing_scorer'), trace_id=None...k.py", line 1173, in _execute_mock_call\n raise effect\nValueError: Scorer failed!\n')), overrides=None, valid=True)
mock_scorer = <Mock spec='Scorer' id='5520832160'>
result = defaultdict(<class 'list'>, {'trace1': [Feedback(name='failing_scorer', source=AssessmentSource(source_type='CODE', so...y", line 1173, in _execute_mock_call\n raise effect\nValueError: Scorer failed!\n')), overrides=None, valid=True)]})
session_groups = {'session1': [EvalItem(request_id='trace1', inputs={}, outputs={}, expectations={}, tags=None, trace=Trace(trace_id=trace1), error_message=None, source=None)]}
test_session_utils.py:429:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = AssessmentError(error_code='SCORER_ERROR', error_message=ValueError('Scorer failed!'), stack_trace='Traceback (most re...ow/lib/python3.10/unittest/mock.py", line 1173, in _execute_mock_call\n raise effect\nValueError: Scorer failed!\n')
def to_proto(self):
error = ProtoAssessmentError()
error.error_code = self.error_code
if self.error_message:
> error.error_message = self.error_message
E TypeError: bad argument type for built-in operation
error = error_code: "SCORER_ERROR"
self = AssessmentError(error_code='SCORER_ERROR', error_message=ValueError('Scorer failed!'), stack_trace='Traceback (most re...ow/lib/python3.10/unittest/mock.py", line 1173, in _execute_mock_call\n raise effect\nValueError: Scorer failed!\n')
../../../mlflow/entities/assessment_error.py:51: TypeError
-------------------------------------------------------------------- Captured stderr teardown --------------------------------------------------------------------
2025/12/01 10:33:46 INFO mlflow.tracking.fluent: Active model is cleared
====================================================================== slowest 10 durations ======================================================================
0.40s setup tests/genai/evaluate/test_session_utils.py::test_classify_scorers_all_single_turn
0.01s call tests/genai/evaluate/test_session_utils.py::test_evaluate_session_level_scorers_handles_scorer_error
(8 durations < 0.005s hidden. Use -vv to show these durations.)
================================================================== command to run failed tests ===================================================================
pytest 'tests/genai/evaluate/test_session_utils.py::test_evaluate_session_level_scorers_handles_scorer_error'
==================================================================== short test summary info =====================================================================
FAILED | MEM 21.3/64.0 GB | DISK 10.5/926.4 GB test_session_utils.py::test_evaluate_session_level_scorers_handles_scorer_error - TypeError: bad argument type for built-in operation
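The traceback shows `to_proto()` assigning a `ValueError` instance to the proto's string-typed `error_message` field, which protobuf rejects with `TypeError: bad argument type for built-in operation`. A minimal sketch of the stringify fix, using hypothetical stand-in classes rather than the actual mlflow entities or generated protobuf code:

```python
import traceback
from dataclasses import dataclass


@dataclass
class FakeProtoError:
    # Stand-in for ProtoAssessmentError; real protobuf enforces str here.
    error_code: str = ""
    error_message: str = ""


class AssessmentError:
    def __init__(self, error_code, error_message=None, stack_trace=None):
        self.error_code = error_code
        self.error_message = error_message  # may arrive as an Exception instance
        self.stack_trace = stack_trace

    def to_proto(self):
        proto = FakeProtoError()
        proto.error_code = self.error_code
        if self.error_message:
            # The fix: coerce to str so Exception instances serialize cleanly.
            proto.error_message = str(self.error_message)
        return proto


try:
    raise ValueError("Scorer failed!")
except ValueError as e:
    err = AssessmentError("SCORER_ERROR", e, traceback.format_exc())

print(err.to_proto().error_message)  # -> Scorer failed!
```

Coercing at serialization time keeps the original exception object available on the entity (e.g. for `str(feedback.error.error_message)` checks) while guaranteeing the proto field always receives a string.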
Documentation preview for a1e0e60 is available.
AveshCSingh (Collaborator) approved these changes on Dec 2, 2025:
Thanks for fixing this, Corey!
BenWilson2 pushed a commit to BenWilson2/mlflow that referenced this pull request on Dec 4, 2025:
…tructing assessments (mlflow#19140) Signed-off-by: dbczumar <corey.zumar@databricks.com>
BenWilson2 pushed a commit that referenced this pull request on Dec 4, 2025:
…tructing assessments (#19140) Signed-off-by: dbczumar <corey.zumar@databricks.com>
Related Issues/PRs
#xxx
What changes are proposed in this pull request?
Bug fix - stringify error messages for session-level judges when constructing assessments
How is this PR tested?
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
area/tracking: Tracking Service, tracking client APIs, autologging
area/models: MLmodel format, model serialization/deserialization, flavors
area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
area/scoring: MLflow Model server, model deployment tools, Spark UDFs
area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
area/projects: MLproject format, project running backends
area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
area/build: Build and test infrastructure for MLflow
area/docs: MLflow documentation pages
How should the PR be classified in the release notes? Choose one:
rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
rn/feature - A new user-facing feature worth mentioning in the release notes
rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
rn/documentation - A user-facing documentation change worth mentioning in the release notes
Should this PR be included in the next patch release?
Yes should be selected for bug fixes, documentation updates, and other small changes. No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.
What is a minor/patch release?
Bug fixes, doc updates and new features usually go into minor releases.
Bug fixes and doc updates usually go into patch releases.