Support SparkDF trace handling in eval (#20207)
Conversation
Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com>
Pull request overview
This PR adds support for handling traces in SparkDF format within the evaluation pipeline. When a Spark DataFrame containing trace columns (stored as StructType) is converted to pandas via .toPandas(), the trace column becomes a dict instead of remaining as a Trace object or JSON string. This PR extends the deserialization logic to handle this dict format.
Changes:
- Extended the `_deserialize_trace_column_if_needed` function to handle dict-type trace values by calling `Trace.from_dict()`
- Updated the docstring to document the new dict handling behavior
- Added comprehensive tests covering string, dict, and Trace object formats
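The dispatch described above can be sketched in isolation. This is a minimal, self-contained illustration: `FakeTrace` is a hypothetical stand-in for `mlflow.entities.Trace` (the real class's `from_json`/`from_dict` classmethods appear in the diff below; everything else here is assumed for demonstration).

```python
import json


class FakeTrace:
    """Hypothetical stand-in for mlflow.entities.Trace, for illustration only."""

    def __init__(self, data):
        self.data = data

    @classmethod
    def from_dict(cls, d):
        return cls(d)

    @classmethod
    def from_json(cls, s):
        return cls(json.loads(s))


def deserialize_trace(t):
    if isinstance(t, FakeTrace):
        return t                       # already a trace object
    if isinstance(t, str):
        return FakeTrace.from_json(t)  # JSON-serialized trace column
    if isinstance(t, dict):
        return FakeTrace.from_dict(t)  # dict produced by SparkDF .toPandas()
    return t                           # anything else passes through


payload = {"info": {"trace_id": "t-1", "state": "OK"}, "data": {"spans": []}}
results = [deserialize_trace(v) for v in (payload, json.dumps(payload), FakeTrace(payload))]
print(all(isinstance(r, FakeTrace) for r in results))
```

All three input shapes (dict, JSON string, trace object) normalize to a trace object, which is the invariant the new tests cover.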
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| mlflow/genai/evaluation/utils.py | Extended trace deserialization logic to handle dict format from Spark DataFrame conversion |
| tests/genai/evaluate/test_utils.py | Added parameterized test covering all three trace formats (dict, string, Trace object) |
Documentation preview for df5eec9 is available at: More info
mlflow/genai/evaluation/utils.py
Outdated
```python
if isinstance(t, Trace):
    return t
```

is this necessary?
good call - final match state should just pass through. Will update
tests/genai/evaluate/test_utils.py
Outdated
```python
@pytest.fixture
def sample_trace_dict():
    return {
        "info": {
            "trace_id": "test-trace-id",
            "trace_location": {
                "type": "MLFLOW_EXPERIMENT",
                "mlflow_experiment": {"experiment_id": "0"},
            },
            "request_time": "2024-01-21T12:00:00Z",
            "state": "OK",
            "trace_metadata": {},
            "tags": {},
            "assessments": [],
        },
        "data": {"spans": []},
    }


@pytest.mark.parametrize("trace_format", ["dict", "string", "trace_object"])
```
nit: Consider using pytest.param with explicit IDs for cleaner test output:
Suggested change:

```python
def _trace_test_cases():
    data = {
        "info": {
            "trace_id": "test-trace-id",
            "trace_location": {
                "type": "MLFLOW_EXPERIMENT",
                "mlflow_experiment": {"experiment_id": "0"},
            },
            "request_time": "2024-01-21T12:00:00Z",
            "state": "OK",
            "trace_metadata": {},
            "tags": {},
            "assessments": [],
        },
        "data": {"spans": []},
    }
    return [
        pytest.param(data, dict, id="dict"),
        pytest.param(json.dumps(data), str, id="string"),
        pytest.param(Trace.from_dict(data), Trace, id="trace_object"),
    ]


@pytest.mark.parametrize(("trace_value", "expected_input_type"), _trace_test_cases())
```
🤖 Generated with Claude
mlflow/genai/evaluation/utils.py
Outdated
```python
if "trace" not in df.columns:
    return df


def deserialize_trace(t):
```
Can we move this outside `_deserialize_trace_column_if_needed`?
yep, will move it out for better readability
mlflow/genai/evaluation/utils.py
Outdated
```python
def deserialize_trace(t):
    if isinstance(t, Trace):
        return t
    if isinstance(t, str):
        return Trace.from_json(t)
    if isinstance(t, dict):
        return Trace.from_dict(t)
    return t
```
nit: Consider using a match statement for cleaner type dispatch:
Suggested change:

```python
def deserialize_trace(t):
    match t:
        case Trace():
            return t
        case str():
            return Trace.from_json(t)
        case dict():
            return Trace.from_dict(t)
        case _:
            return t
```
🤖 Generated with Claude
Related Issues/PRs
#xxx

What changes are proposed in this pull request?
If traces are stored in a SparkDF, converting it to a PandasDF casts the StructType encodings directly to dict type. This PR adds parsing logic to ensure that such traces are deserialized properly.
How is this PR tested?
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/tracking: Tracking Service, tracking client APIs, autologging
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
- area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
- area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
- area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
- area/projects: MLproject format, project running backends
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?
Yes should be selected for bug fixes, documentation updates, and other small changes. No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide. What is a minor/patch release?
Bug fixes, doc updates and new features usually go into minor releases.
Bug fixes and doc updates usually go into patch releases.