Support evaluating list of traces (#18695)
Conversation
| f"Expected 6 assessments, got {len(trace.info.assessments)}" | ||
| f"Assessments: {[a.name for a in trace.info.assessments]}" | ||
| ) # 2 expectations + 4 feedbacks |
It's for convenience, because we cannot see the actual assessment names from the logged variable (the trace repr is minimal). Any concern with having this?
mlflow/genai/evaluation/harness.py
Outdated
```python
new_expectations = []
for exp in eval_item.get_expectation_assessments():
    if exp.name not in existing_expectations:
        new_expectations.append(exp)
return new_expectations
```
Can we use a list comprehension?

```python
return [
    exp
    for exp in eval_item.get_expectation_assessments()
    if exp.name not in existing_expectations
]
```
harupy
left a comment
Left a couple comments, otherwise LGTM!
mlflow/genai/evaluation/utils.py
Outdated
```python
from mlflow.entities.evaluation_dataset import EvaluationDataset as EntityEvaluationDataset
from mlflow.genai.datasets.evaluation_dataset import EvaluationDataset


if isinstance(data, (EvaluationDataset, EntityEvaluationDataset)):
```
I think this is not necessary, since it's handled inside `_convert_eval_set_to_df`.
mlflow/genai/evaluation/utils.py
Outdated
```python
if isinstance(data, (EvaluationDataset, EntityEvaluationDataset)):
    return data.to_df()


if isinstance(data, list) and all(isinstance(item, Trace) for item in data):
```
Similarly, can we move this logic to `_convert_eval_set_to_df`? It's better to consolidate the data conversion in that method so that we can reuse the logic.
Ah, nice catch. Sure, it should be handled in the `..._to_df` func.
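For illustration, the consolidated method could look roughly like this. This is a sketch only: the trace-to-row mapping and the fallthrough handling are assumptions, not the PR's actual code.

```python
import pandas as pd

from mlflow.entities import Trace
from mlflow.entities.evaluation_dataset import EvaluationDataset as EntityEvaluationDataset
from mlflow.exceptions import MlflowException
from mlflow.genai.datasets.evaluation_dataset import EvaluationDataset


def _convert_eval_set_to_df(data) -> pd.DataFrame:
    # Consolidate every supported input format in one place so callers
    # never need their own isinstance checks.
    if isinstance(data, (EvaluationDataset, EntityEvaluationDataset)):
        return data.to_df()
    if isinstance(data, list) and all(isinstance(item, Trace) for item in data):
        # Hypothetical mapping: one row per trace, keyed by a "trace" column.
        return pd.DataFrame({"trace": data})
    if isinstance(data, pd.DataFrame):
        return data
    raise MlflowException.invalid_parameter_value(f"Unsupported data type: {type(data)}")
```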
```python
    _convert_to_eval_set(df)


def test_convert_to_eval_set_evaluation_dataset():
```
Shouldn't we add a new fixture for `EvaluationDataset` to `_ALL_DATA_FIXTURES` if we remove this test case?
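Something like the following, perhaps. This is only a sketch: the fixture body, `sample_df`, and the other fixture names are guesses about the test module, not code from this PR.

```python
import pytest

from mlflow.genai.datasets.evaluation_dataset import EvaluationDataset


@pytest.fixture
def evaluation_dataset(sample_df):
    # Hypothetical construction; the real suite may build the dataset
    # through a helper or a backend call instead.
    return EvaluationDataset(sample_df)


# Hypothetical: register the new fixture name alongside the existing ones
# so every parametrized conversion test also runs against it.
_ALL_DATA_FIXTURES = ["pandas_df", "list_of_dicts", "evaluation_dataset"]
```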
TomeHirata
left a comment
LGTM, can we fix the tests?
```python
    Takes in a dataset in any of the formats that mlflow.genai.evaluate() expects and
    converts it into a standardized pandas DataFrame.
    """
    column_mapping = {
```
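The dict is truncated in this hunk; presumably it normalizes accepted column aliases to the harness's canonical names, roughly along these lines. The keys and values here are illustrative, not copied from the PR.

```python
# Illustrative only: map user-facing column aliases to canonical names.
column_mapping = {
    "inputs": "request",
    "outputs": "response",
    "expectations": "expected_response",
}
```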
What changes are proposed in this pull request?
Support passing a list of `Trace` objects to evaluation. This is requested by a CUJ, and it is also useful when we implement a UI trigger for running evaluation on traces (we will need a way to run evaluation on a set of trace IDs); see the sketch below.

Btw, we could also add something like `mlflow.get_traces(trace_id=[...])` to make it even easier. However, it is not super trivial given that we now have v3 and v4 backends, so I consider it YAGNI for now.
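A minimal usage sketch of what this enables (the trace IDs and the scorer choice are illustrative, not taken from this PR):

```python
import mlflow
from mlflow.genai.scorers import Safety

# Hypothetical trace IDs; in practice these might come from a UI selection.
trace_ids = ["tr-123", "tr-456"]
traces = [mlflow.get_trace(t) for t in trace_ids]

# With this PR, a list of Trace objects can be passed directly as `data`.
results = mlflow.genai.evaluate(
    data=traces,
    scorers=[Safety()],
)
```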
How is this PR tested?
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- `area/tracking`: Tracking Service, tracking client APIs, autologging
- `area/models`: MLmodel format, model serialization/deserialization, flavors
- `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs
- `area/evaluation`: MLflow model evaluation features, evaluation metrics, and evaluation workflows
- `area/gateway`: MLflow AI Gateway client APIs, server, and third-party integrations
- `area/prompts`: MLflow prompt engineering features, prompt templates, and prompt management
- `area/tracing`: MLflow Tracing features, tracing APIs, and LLM tracing functionality
- `area/projects`: MLproject format, project running backends
- `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- `area/build`: Build and test infrastructure for MLflow
- `area/docs`: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:
- `rn/none`: No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- `rn/breaking-change`: The PR will be mentioned in the "Breaking Changes" section
- `rn/feature`: A new user-facing feature worth mentioning in the release notes
- `rn/bug-fix`: A user-facing bug fix worth mentioning in the release notes
- `rn/documentation`: A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?
- `Yes` should be selected for bug fixes, documentation updates, and other small changes.
- `No` should be selected for new features and larger changes.

If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
Bug fixes, doc updates and new features usually go into minor releases.
Bug fixes and doc updates usually go into patch releases.