[ML-59918] Add input data fields to genai_evaluate telemetry event#19230
Conversation
```diff
         logger.debug("Failed to display summary and usage instructions", exc_info=True)

-    return result
+    return result, telemetry_data
```
@serena-ruan After considering a few options, I think passing the telemetry_data to the output and adding it via parse_result is the approach with the least side effects while still logging the fields to the same genai_evaluate event.
Let me know your thoughts on this approach. Thanks!
I think it's a bit ugly to return telemetry_data in result just for parsing 😅 Also, this change now happens within evaluate, so it's blocking (although it should be fast); we should add a try/except to avoid any potential issues.
Force-pushed from 61a8272 to dcc8164
Documentation preview for d9a62bc is available.
Force-pushed from dcc8164 to 5a38a68
mlflow/genai/evaluation/utils.py (Outdated)
```python
elif isinstance(data, pd.DataFrame):
    return {"eval_data_type": "pd.DataFrame"}
elif isinstance(data, EntityEvaluationDataset):
    return {"eval_data_type": "EntityEvaluationDataset"}
elif isinstance(data, ManagedEvaluationDataset):
    return {"eval_data_type": "EvaluationDataset"}
else:
    try:
        from mlflow.utils.spark_utils import get_spark_dataframe_type

        if isinstance(data, get_spark_dataframe_type()):
            return {"eval_data_type": "pyspark.sql.DataFrame"}
    except ImportError:
        pass
```
Can we just use type(data)?
I think with `type(data).__name__`, both EntityEvaluationDataset and ManagedEvaluationDataset would return `EvaluationDataset`, and both pandas and Spark DataFrames would return `DataFrame`.
You can simply use `type(data)`, or if that is too ugly we can construct `f"{type(df).__module__}.{type(df).__qualname__}"` instead. Importing Spark modules just to get their type is not worth it imo.
@serena-ruan I've updated the PR to not import Spark here. I'm still keeping the hardcoded short strings instead of directly using the longer `f"{type(df).__module__}.{type(df).__qualname__}"`, though, because the shorter strings are easier to query directly.
Let me know if you still prefer using the longer type strings.
mlflow/genai/evaluation/base.py (Outdated)
```diff
 @record_usage_event(GenAIEvaluateEvent)
-def _run_harness(data, scorers, predict_fn, model_id):
+def _run_harness(data, scorers, predict_fn, model_id, telemetry_data):
```
Could we avoid passing `telemetry_data` across the code?
Can we move `df = _convert_to_eval_set(data); data = data if is_managed_dataset else df` into `_run_harness` so we have a unified place to process telemetry data?
Force-pushed from 5a38a68 to 1a507ad
Synced with @serena-ruan offline to move all the validation checks in
mlflow/telemetry/events.py (Outdated)
```python
except Exception:
    _log_error("Failed to get evaluation data type for GenAIEvaluateEvent")
```
We don't need the try/except here since this whole parsing logic is already inside a try/except in `mlflow/telemetry/track.py` (line 47 in 4cdc650).
Overall LGTM once #19230 (comment) is addressed!
Force-pushed from 1a507ad to 5dd5f5d
AveshCSingh left a comment
Left 2 small requests, otherwise LGTM
mlflow/genai/evaluation/base.py (Outdated)
```python
@record_usage_event(GenAIEvaluateEvent)
def _run_harness(data, scorers, predict_fn, model_id):
```
Can you add an output typehint, and an explanation of the 2nd returned value?
mlflow/genai/evaluation/utils.py (Outdated)
```python
from mlflow.entities.evaluation_dataset import EvaluationDataset as EntityEvaluationDataset
from mlflow.genai.datasets import EvaluationDataset as ManagedEvaluationDataset
```
nit: Do these imports need to be inline?
Good catch, it seems like these can be at the top level.
Signed-off-by: Xiang Shen <xshen.shc@gmail.com>
Force-pushed from 5dd5f5d to d9a62bc
Updated the PR to address @AveshCSingh's comments.
🥞 Stacked PR
Related Issues/PRs
Previous discussions: #19040
What changes are proposed in this pull request?
Adding 3 new fields to the `genai_evaluate` event:

- `eval_data_size`: int
- `eval_data_type`: str, one of "list[Trace]" | "list[dict]" | "pd.DataFrame" | "EntityEvaluationDataset" | "EvaluationDataset" | "pyspark.sql.DataFrame" | "unknown"
- `eval_data_provided_field`: set of str, each one of "inputs" | "outputs" | "trace" | "expectations"

These fields will help us understand the size of evaluation runs and how users pass in input data for evaluation runs.
How is this PR tested?
Manual Test
Running the following code snippet:
Before
After
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- `area/tracking`: Tracking Service, tracking client APIs, autologging
- `area/models`: MLmodel format, model serialization/deserialization, flavors
- `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs
- `area/evaluation`: MLflow model evaluation features, evaluation metrics, and evaluation workflows
- `area/gateway`: MLflow AI Gateway client APIs, server, and third-party integrations
- `area/prompts`: MLflow prompt engineering features, prompt templates, and prompt management
- `area/tracing`: MLflow Tracing features, tracing APIs, and LLM tracing functionality
- `area/projects`: MLproject format, project running backends
- `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- `area/build`: Build and test infrastructure for MLflow
- `area/docs`: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:
- `rn/none`: No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- `rn/breaking-change`: The PR will be mentioned in the "Breaking Changes" section
- `rn/feature`: A new user-facing feature worth mentioning in the release notes
- `rn/bug-fix`: A user-facing bug fix worth mentioning in the release notes
- `rn/documentation`: A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?
- `Yes` should be selected for bug fixes, documentation updates, and other small changes.
- `No` should be selected for new features and larger changes.

If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
Bug fixes, doc updates and new features usually go into minor releases.
Bug fixes and doc updates usually go into patch releases.