[ML-59816][2/n] Add new fields to the genai_evaluation event telemetry #19040
xsh310 wants to merge 4 commits into mlflow:master
Conversation
xsh310 force-pushed from 59ba0d5 to 63a5a08
xsh310 force-pushed from 63a5a08 to 6f9390c
```python
    This function is not thread-safe. Please do not use it in multi-threaded
    environments.
    """
    is_managed_dataset = isinstance(data, (EvaluationDataset, EntityEvaluationDataset))
```
It's not efficient to convert `data` to a DataFrame twice (line 246 and again inside `_run_harness`). Can you refactor?
@serena-ruan Do you think it's possible for me to move all the logic in the `evaluate` function that runs before `_run_harness` into `_run_harness`? I want to make sure that `data = data if is_managed_dataset else df` happens inside `_run_harness`, so that we can preserve the original input type at event logging time. A sketch of what I mean is below.
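For illustration only, a minimal sketch of that refactor (the classes below are toy stand-ins; the real `EvaluationDataset`/`EntityEvaluationDataset` are the ones in the diff above):

```python
import pandas as pd

class EvaluationDataset:
    """Stand-in for the managed dataset class shown in the diff above."""
    def __init__(self, records):
        self._records = records

    def to_df(self):
        return pd.DataFrame(self._records)

class EntityEvaluationDataset(EvaluationDataset):
    """Stand-in; the real class lives elsewhere in MLflow."""

def _run_harness(data, **kwargs):
    # The managed-dataset check now lives inside _run_harness, so the
    # original input type is still visible when the event is logged.
    is_managed_dataset = isinstance(data, (EvaluationDataset, EntityEvaluationDataset))
    # Convert to a DataFrame exactly once, addressing the double-conversion
    # concern raised above.
    df = data.to_df() if is_managed_dataset else pd.DataFrame(data)
    # Preserve the original input type: managed datasets stay as-is,
    # raw inputs are replaced by the DataFrame.
    data = data if is_managed_dataset else df
    return data, len(df)
```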
> I move all the logic in the evaluate function before the _run_harness into _run_harness?

Yes, you can do that; I don't think there's any difference.
@serena-ruan I think one potential downside of moving the logic into `_run_harness` is a sudden increase in genai_evaluate event volume, since some invalid inputs that currently trigger an early return would now log the genai_evaluate event.
```python
# Check if records are loaded to avoid triggering expensive load
if data.has_records():
    df = data.to_df()
    eval_data_size = len(df)
```
Computing the dataset size by converting the data to a DataFrame is too heavy. Reading the code, the data is converted to a pandas DataFrame during evaluation anyway; can we calculate the dataset size after that conversion?
@serena-ruan I'm not sure whether my understanding is correct, but I don't think we will have access to the pandas DataFrame, since the dataset is converted in the middle of `_run_harness` and the event logging only has access to the input parameters of `_run_harness`.

I think one workaround is to add the df as another return value of `_run_harness` and use `parse_result` to populate these fields from the df, roughly as sketched below.
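A rough sketch of that workaround (stub helpers; the extra return value and the `parse_result` hook are assumptions about the telemetry plumbing, not the final design):

```python
import pandas as pd

def _evaluate(df):
    # Placeholder for the real harness work.
    return {"metrics": {}}

def _run_harness(data):
    # The data is converted to pandas mid-harness anyway; reuse that df
    # instead of converting a second time just to measure its size.
    df = data if isinstance(data, pd.DataFrame) else pd.DataFrame(data)
    result = _evaluate(df)
    return result, df  # surface the df as an extra return value

def parse_result(result_and_df):
    result, df = result_and_df
    # Attach the size-related telemetry fields from the already-converted df.
    return {"eval_data_size": len(df)}
```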
You don't have to use `@record_usage_event` on the function; you can use `_record_event` to send a record with parameters directly :) The decorator is just a simplified approach for general purposes.

We can collect all the needed parameters within `_run_harness`, avoiding the duplicate data-size computation, and then send the record once. `_record_event` doesn't handle function exceptions the way `record_usage_event` does, so feel free to update it with try/finally if necessary.
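A sketch of what that could look like, assuming `_record_event(name, params)` as the helper's signature (the real internal API may differ):

```python
import pandas as pd

def _record_event(name, params):
    """Stand-in for the internal telemetry helper."""
    print("event:", name, params)

def _run_harness(data):
    params = {"input_type": type(data).__name__, "success": False}
    try:
        df = data if isinstance(data, pd.DataFrame) else pd.DataFrame(data)
        params["eval_data_size"] = len(df)  # computed once, from the df we already have
        result = {"metrics": {}}  # placeholder for the real harness work
        params["success"] = True
        return result
    finally:
        # try/finally mirrors what @record_usage_event does on exceptions:
        # the event is still sent exactly once, tagged with the outcome.
        _record_event("genai_evaluate", params)
```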
@serena-ruan Thanks for the explanation! A quick question though: since the logging spec's goal is to log the new fields into genai_evaluate, if we add another `_record_event` call in the middle of `_run_harness`, will it create a duplicate genai_evaluate event, or is there a way to merge the params with the event created by `@record_usage_event`?
You should use either the decorator or `_record_event`, but not both; otherwise two events will be created :)
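To make the double-logging pitfall concrete, a toy illustration with stand-ins for both helpers (names assumed, per the discussion above):

```python
import functools

def _record_event(name, params):
    print("event:", name, params)  # stand-in for the internal helper

def record_usage_event(name):
    """Toy stand-in for the decorator: sends its own event on return."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            _record_event(name, {})
            return result
        return inner
    return wrap

# Anti-pattern: decorator AND manual call -> two genai_evaluate events.
@record_usage_event("genai_evaluate")
def run_with_both():
    _record_event("genai_evaluate", {"eval_data_size": 3})

run_with_both()  # emits the event twice
```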
🥞 Stacked PR
Use this link to review incremental changes.
Related Issues/PRs
#xxx

What changes are proposed in this pull request?
How is this PR tested?
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/tracking: Tracking Service, tracking client APIs, autologging
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
- area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
- area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
- area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
- area/projects: MLproject format, project running backends
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?
- Yes should be selected for bug fixes, documentation updates, and other small changes.
- No should be selected for new features and larger changes.

If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
Bug fixes, doc updates and new features usually go into minor releases.
Bug fixes and doc updates usually go into patch releases.