[Eval #2] Support evaluating traces and linking to run in OSS #18415
B-Step62 merged 2 commits into mlflow:master
Conversation
Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
```python
if isinstance(data, dict):
    return data
```
We don't need this anymore?
Right, we don't need this if branch.
```python
elif eval_item.trace is not None:
    if _should_clone_trace(eval_item.trace, run_id):
        try:
            trace_id = copy_trace_to_experiment(eval_item.trace.to_dict())
```
For my understanding, do we clone the trace so that we can add assessments on it? I sometimes find it confusing to see duplicate traces.
We only clone when the traces come from a different experiment. It is rare, but something like this:

```python
mlflow.set_experiment(experiment_id="123")
traces = mlflow.search_traces(experiment_ids=["456"])
mlflow.genai.evaluate(data=traces, ...)
```
In this case, we cannot show the original traces in the result UI because they live in a separate experiment, so we copy them to the current active experiment where the eval run will be logged.
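The clone decision described above boils down to comparing the trace's home experiment with the currently active one. A minimal sketch of that check (the helper name and signature here are hypothetical simplifications; the real `_should_clone_trace` in MLflow may consider additional conditions):

```python
def should_clone_trace(trace_experiment_id: str, active_experiment_id: str) -> bool:
    """Clone only when the trace lives in a different experiment than the one
    the eval run will be logged to (hypothetical simplification of the check).
    """
    return trace_experiment_id != active_experiment_id


# With experiment "123" active, traces searched from experiment "456"
# would be copied, while traces already in "123" would be reused as-is.
print(should_clone_trace("456", "123"))  # True: different experiment, clone
print(should_clone_trace("123", "123"))  # False: same experiment, no clone
```

This keeps duplicate traces limited to the cross-experiment case the reviewer asked about.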
I see, that makes sense!
🥞 Stacked PR
Use this link to review incremental changes.
What changes are proposed in this pull request?
The OSS eval harness does not handle traces input today. This PR fixes that to reach parity with the DBX harness, in preparation for the migration.
How is this PR tested?
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/tracking: Tracking Service, tracking client APIs, autologging
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
- area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
- area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
- area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
- area/projects: MLproject format, project running backends
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes

Fix existing trace handling in the MLflow GenAI Evaluation.
Should this PR be included in the next patch release?
- Yes should be selected for bug fixes, documentation updates, and other small changes.
- No should be selected for new features and larger changes.

If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
Bug fixes, doc updates and new features usually go into minor releases.
Bug fixes and doc updates usually go into patch releases.