[ML-59918] Add input data fields to genai_evaluate telemetry event #19230

Merged

xsh310 merged 1 commit into mlflow:master from xsh310:stack/ML-59918-add-oss-telemetry-input-data-fields on Dec 11, 2025

Conversation

xsh310 (Collaborator) commented Dec 5, 2025

🥞 Stacked PR

Use this link to review incremental changes.


Related Issues/PRs

Previous discussions: #19040

What changes are proposed in this pull request?

This PR adds 3 new fields to the genai_evaluate event:

  • eval_data_size: int
  • eval_data_type: str, one of "list[Trace]" | "list[dict]" | "pd.DataFrame" | "EntityEvaluationDataset" | "EvaluationDataset" | "pyspark.sql.DataFrame" | "unknown"
  • eval_data_provided_fields: set[str], values from "inputs" | "outputs" | "trace" | "expectations"

These fields will help us understand the size of evaluation runs and how users pass input data to them.
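
For context, here is a minimal runnable sketch of how these fields could be derived for list[dict] and pd.DataFrame inputs. The helper name and structure are hypothetical, not the PR's actual implementation:

# Hypothetical sketch; infer_eval_data_telemetry and _TRACKED_FIELDS are
# illustrative names, not MLflow internals.
import pandas as pd

_TRACKED_FIELDS = {"inputs", "outputs", "trace", "expectations"}

def infer_eval_data_telemetry(data) -> dict:
    telemetry = {"eval_data_type": "unknown"}
    if isinstance(data, list):
        telemetry["eval_data_size"] = len(data)
        if data and all(isinstance(item, dict) for item in data):
            telemetry["eval_data_type"] = "list[dict]"
            # Union of top-level keys across rows, restricted to tracked names.
            keys = set().union(*(item.keys() for item in data))
            telemetry["eval_data_provided_fields"] = sorted(keys & _TRACKED_FIELDS)
    elif isinstance(data, pd.DataFrame):
        telemetry["eval_data_type"] = "pd.DataFrame"
        telemetry["eval_data_size"] = len(data)
        telemetry["eval_data_provided_fields"] = sorted(
            set(data.columns) & _TRACKED_FIELDS
        )
    return telemetry

print(infer_eval_data_telemetry([{"inputs": {"question": "hi"}}]))
# {'eval_data_type': 'list[dict]', 'eval_data_size': 1, 'eval_data_provided_fields': ['inputs']}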

How is this PR tested?

  • Existing unit/integration tests
  • New unit/integration tests
  • Manual tests

Manual Test

Running the following code snippet:

import mlflow
from mlflow.genai.scorers import Guidelines, RelevanceToQuery, Safety

_EVALUATION_TEST_CASES = [
    {
        "inputs": {
            "question": "What are the top 10 trending anime this season?",
        },
    },
    ...
]

@mlflow.genai.scorer(name="custom_scorer_2")
def custom_scorer_2():
    return 2.0

result = mlflow.genai.evaluate(
    data=_EVALUATION_TEST_CASES,
    predict_fn=predict_fn,  # predict_fn is defined elsewhere in the test setup
    scorers=[
        Safety(name="my_secret_app's_safety_scorer"),
        RelevanceToQuery(),
        Guidelines(
            name="test_guidelines",
            guidelines=["The response must be helpful and accurate"],
        ),
        custom_scorer_2,
    ],
)

Before

{
  "predict_fn_provided": true,
  "scorer_info": [
    {
      "class": "Safety",
      "kind": "builtin",
      "scope": "response"
    },
    {
      "class": "RelevanceToQuery",
      "kind": "builtin",
      "scope": "response"
    },
    {
      "class": "Guidelines",
      "kind": "guidelines",
      "scope": "response"
    },
    {
      "class": "UserDefinedScorer",
      "kind": "decorator",
      "scope": "response"
    }
  ]
}

After

{
  "predict_fn_provided": true,
  "scorer_info": [
    {
      "class": "Safety",
      "kind": "builtin",
      "scope": "response"
    },
    {
      "class": "RelevanceToQuery",
      "kind": "builtin",
      "scope": "response"
    },
    {
      "class": "Guidelines",
      "kind": "guidelines",
      "scope": "response"
    },
    {
      "class": "UserDefinedScorer",
      "kind": "decorator",
      "scope": "response"
    }
  ],
  "eval_data_type": "list[dict]",
  "eval_data_size": 10,
  "eval_data_provided_fields": [
    "inputs"
  ]
}

Does this PR require a documentation update?

  • No. You can skip the rest of this section.
  • Yes. I've updated:
    • Examples
    • API references
    • Instructions

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/tracking: Tracking Service, tracking client APIs, autologging
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
  • area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
  • area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
  • area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
  • area/projects: MLproject format, project running backends
  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/build: Build and test infrastructure for MLflow
  • area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:

  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?

Yes should be selected for bug fixes, documentation updates, and other small changes. No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
  • Minor release: a release that increments the second part of the version number (e.g., 1.2.0 -> 1.3.0).
    Bug fixes, doc updates and new features usually go into minor releases.
  • Patch release: a release that increments the third part of the version number (e.g., 1.2.0 -> 1.2.1).
    Bug fixes and doc updates usually go into patch releases.
  • Yes (this PR will be cherry-picked and included in the next patch release)
  • No (this PR will be included in the next minor release)

  logger.debug("Failed to display summary and usage instructions", exc_info=True)

- return result
+ return result, telemetry_data

Collaborator Author

@serena-ruan After considering a few options, I think passing the telemetry_data to the output and adding it via parse_result is the approach with the fewest side effects while still logging the fields to the same genai_evaluate event.

Let me know your thoughts on this approach. Thanks!

Collaborator

I think it's a bit ugly to return telemetry_data in the result just for parsing 😅 Also, this change now happens within evaluate, so it's blocking (although it should be fast); we should add a try/except to avoid any potential issues.

xsh310 force-pushed the stack/ML-59918-add-oss-telemetry-input-data-fields branch from 61a8272 to dcc8164 on December 5, 2025 02:09
xsh310 marked this pull request as ready for review December 5, 2025 02:10
github-actions bot added the area/evaluation, area/tracking, and rn/none labels Dec 5, 2025
github-actions bot commented Dec 5, 2025

Documentation preview for d9a62bc is available at:

More info
  • Ignore this comment if this PR does not change the documentation.
  • The preview is updated when a new commit is pushed to this PR.
  • This comment was created by this workflow run.
  • The documentation was built by this workflow run.

xsh310 force-pushed the stack/ML-59918-add-oss-telemetry-input-data-fields branch from dcc8164 to 5a38a68 on December 5, 2025 06:01
Comment on lines +66 to +79
elif isinstance(data, pd.DataFrame):
    return {"eval_data_type": "pd.DataFrame"}
elif isinstance(data, EntityEvaluationDataset):
    return {"eval_data_type": "EntityEvaluationDataset"}
elif isinstance(data, ManagedEvaluationDataset):
    return {"eval_data_type": "EvaluationDataset"}
else:
    try:
        from mlflow.utils.spark_utils import get_spark_dataframe_type

        if isinstance(data, get_spark_dataframe_type()):
            return {"eval_data_type": "pyspark.sql.DataFrame"}
    except ImportError:
        pass

Collaborator

Can we just use type(data)?

Collaborator Author

I think with type(data).__name__, both EntityEvaluationDataset and ManagedEvaluationDataset will return EvaluationDataset, and both the pandas and Spark DataFrames will return DataFrame.

Collaborator

You can simply use type(data), or if that's too ugly we can construct f"{type(df).__module__}.{type(df).__qualname__}" instead. Importing Spark modules just to get the type is not worth it imo.

xsh310 (Collaborator Author) commented Dec 9, 2025

@serena-ruan I've updated the PR to not import Spark here. I'm still keeping the hardcoded short strings instead of the longer f"{type(df).__module__}.{type(df).__qualname__}", though, because the shorter strings are easier to query directly.

Let me know if you still prefer the longer type strings.
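
For illustration, the naming trade-off discussed in this thread (a standalone snippet, not code from the PR):

import pandas as pd

df = pd.DataFrame({"inputs": [1, 2, 3]})

# Bare class name: ambiguous between pandas and Spark DataFrames, and between
# the two EvaluationDataset classes.
print(type(df).__name__)  # DataFrame

# Fully qualified name: unambiguous, but longer and harder to query.
print(f"{type(df).__module__}.{type(df).__qualname__}")  # pandas.core.frame.DataFrame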


  @record_usage_event(GenAIEvaluateEvent)
- def _run_harness(data, scorers, predict_fn, model_id):
+ def _run_harness(data, scorers, predict_fn, model_id, telemetry_data):

Collaborator

Could we avoid passing telemetry_data across the code?
Can we move df = _convert_to_eval_set(data); data = data if is_managed_dataset else df into _run_harness so we have a unified place to process telemetry data?
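
As a runnable sketch of that suggestion (all helper bodies below are stand-ins, not MLflow internals): conversion and telemetry derivation both live inside _run_harness, so telemetry_data never crosses function boundaries.

import pandas as pd

def _convert_to_eval_set(data):
    # Stand-in for MLflow's converter: normalize a list[dict] to a DataFrame.
    return pd.DataFrame(data)

def _run_harness(data, scorers, predict_fn, model_id):
    df = _convert_to_eval_set(data)
    # Telemetry is derived in one place, right where the data is normalized.
    telemetry_data = {
        "eval_data_type": "list[dict]" if isinstance(data, list) else "unknown",
        "eval_data_size": len(df),
    }
    result = {"rows_evaluated": len(df)}  # placeholder for the real harness
    return result, telemetry_data

result, telemetry = _run_harness(
    data=[{"inputs": {"question": "hi"}}], scorers=[], predict_fn=None, model_id=None
)
print(telemetry)  # {'eval_data_type': 'list[dict]', 'eval_data_size': 1}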

xsh310 force-pushed the stack/ML-59918-add-oss-telemetry-input-data-fields branch from 5a38a68 to 1a507ad on December 8, 2025 04:48
xsh310 commented Dec 8, 2025

Synced with @serena-ruan offline to move all the validation checks in evaluate into _run_harness, so that telemetry_data doesn't need to be passed across functions.

Comment on lines +95 to +96
except Exception:
    _log_error("Failed to get evaluation data type for GenAIEvaluateEvent")

Collaborator

We don't need the try/except here since this whole parsing logic is already inside the try/except of _add_telemetry_record.
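
To make the pattern concrete, here is a minimal sketch of a usage-event decorator that wraps telemetry parsing in a single try/except, so inner helpers don't need their own (all names are illustrative, not MLflow's implementation):

import functools
import logging

logger = logging.getLogger(__name__)

class GenAIEvaluateEventStub:
    @staticmethod
    def parse(result):
        # Stand-in for telemetry parsing; any exception raised here is
        # swallowed by the decorator below.
        _ = result["telemetry"]

def record_usage_event(event_cls):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            try:
                event_cls.parse(result)
            except Exception:
                # One catch-all: telemetry failures never reach the user.
                logger.debug("Failed to record telemetry event", exc_info=True)
            return result
        return wrapper
    return decorator

@record_usage_event(GenAIEvaluateEventStub)
def run_eval():
    return {"telemetry": {"eval_data_size": 10}}

run_eval()  # parse() runs inside the decorator's single try/except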

serena-ruan (Collaborator) left a comment

Overall LGTM once #19230 (comment) is addressed!

xsh310 force-pushed the stack/ML-59918-add-oss-telemetry-input-data-fields branch from 1a507ad to 5dd5f5d on December 9, 2025 05:35

AveshCSingh (Collaborator) left a comment

Left 2 small requests, otherwise LGTM



@record_usage_event(GenAIEvaluateEvent)
def _run_harness(data, scorers, predict_fn, model_id):

Collaborator

Can you add a return type hint, and an explanation of the second returned value?
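
One way the requested annotation could look (the return type names are illustrative):

def _run_harness(
    data, scorers, predict_fn, model_id
) -> "tuple[EvaluationResult, dict[str, Any]]":
    """Run the evaluation harness.

    Returns:
        A tuple of (evaluation result, telemetry fields to merge into the
        genai_evaluate event).
    """
    ...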

Comment on lines +59 to +60
from mlflow.entities.evaluation_dataset import EvaluationDataset as EntityEvaluationDataset
from mlflow.genai.datasets import EvaluationDataset as ManagedEvaluationDataset

Collaborator

nit: Do these imports need to be inline?

Collaborator Author

Good catch, it seems like these can be moved to the top level.

Signed-off-by: Xiang Shen <xshen.shc@gmail.com>
xsh310 force-pushed the stack/ML-59918-add-oss-telemetry-input-data-fields branch from 5dd5f5d to d9a62bc on December 10, 2025 23:23
xsh310 commented Dec 10, 2025

Updated the PR to address @AveshCSingh's comments.

xsh310 added this pull request to the merge queue Dec 11, 2025
Merged via the queue into mlflow:master with commit 9908f08 Dec 11, 2025
47 checks passed
xsh310 deleted the stack/ML-59918-add-oss-telemetry-input-data-fields branch December 11, 2025 05:34

Labels

area/evaluation: MLflow Evaluation
area/tracking: Tracking service, tracking client APIs, autologging
rn/none: List under Small Changes in Changelogs
