
Automatically wrap async functions when passed to predict_fn#19249

Merged
BenWilson2 merged 3 commits into mlflow:master from smoorjani:gwt-mlflow-wrap-async
Dec 8, 2025
Conversation


@smoorjani smoorjani commented Dec 5, 2025

🛠 DevTools 🛠


Install mlflow from this PR

# mlflow
pip install git+https://github.com/mlflow/mlflow.git@refs/pull/19249/merge
# mlflow-skinny
pip install git+https://github.com/mlflow/mlflow.git@refs/pull/19249/merge#subdirectory=libs/skinny

For Databricks, use the following command:

%sh curl -LsSf https://raw.githubusercontent.com/mlflow/mlflow/HEAD/dev/install-skinny.sh | sh -s pull/19249/merge

Related Issues/PRs

#xxx

What changes are proposed in this pull request?

Automatically wrap async functions when they are passed to predict_fn in mlflow.genai.evaluate

How is this PR tested?

  • Existing unit/integration tests
  • New unit/integration tests
  • Manual tests
import asyncio

import mlflow
import pandas as pd
from mlflow.genai.scorers import scorer


async def async_predict_fn(question: str) -> str:
    await asyncio.sleep(0.1)
    return f"Answer to: {question}"


@scorer
def length_scorer(inputs, outputs):
    return len(outputs)


data = pd.DataFrame([
    {"inputs": {"question": "What is MLflow?"}},
    {"inputs": {"question": "What is Python?"}},
])

result = mlflow.genai.evaluate(
    data=data,
    predict_fn=async_predict_fn,
    scorers=[length_scorer],
)

print(f"Metrics: {result.metrics}")

traces = mlflow.search_traces(run_id=result.run_id, return_type="list")
print("\nFirst trace assessments:")
for assessment in traces[0].info.assessments:
    print(f"  {assessment.name}: {assessment.feedback.value}")

outputs:

2025/12/05 10:09:14 INFO mlflow.genai.utils.data_validation: Testing model prediction with the first sample in the dataset. To disable this check, set the MLFLOW_GENAI_EVAL_SKIP_TRACE_VALIDATION environment variable to True.
2025/12/05 10:09:15 INFO mlflow.store.db.utils: Creating initial MLflow database tables...
2025/12/05 10:09:15 INFO mlflow.store.db.utils: Updating database tables
2025/12/05 10:09:15 INFO alembic.runtime.migration: Context impl SQLiteImpl.
2025/12/05 10:09:15 INFO alembic.runtime.migration: Will assume non-transactional DDL.
2025/12/05 10:09:15 INFO alembic.runtime.migration: Context impl SQLiteImpl.
2025/12/05 10:09:15 INFO alembic.runtime.migration: Will assume non-transactional DDL.

✨ Evaluation completed.

Metrics and evaluation results are logged to the MLflow run:
  Run name: sneaky-sow-347
  Run ID: 8b770a04ae284df9b6e343f3a9cc5aaa

To view the detailed evaluation results with sample-wise scores,
open the Traces tab in the Run page in the MLflow UI.

Metrics: {'length_scorer/mean': np.float64(26.0)}

First trace assessments:
  length_scorer: 26
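The reported mean of 26.0 checks out: length_scorer returns len(outputs), and both generated answers in the manual test happen to be 26 characters long.

```python
# Both predict_fn outputs from the manual test have length 26,
# so length_scorer/mean == 26.0.
answers = ["Answer to: What is MLflow?", "Answer to: What is Python?"]
print([len(a) for a in answers])         # lengths per answer
print(sum(len(a) for a in answers) / 2)  # mean reported by the scorer
```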

Does this PR require documentation update?

  • No. You can skip the rest of this section.
  • Yes. I've updated:
    • Examples
    • API references
    • Instructions

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

Automatically wrap async functions when they are passed to predict_fn in mlflow.genai.evaluate

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/tracking: Tracking Service, tracking client APIs, autologging
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
  • area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
  • area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
  • area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
  • area/projects: MLproject format, project running backends
  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/build: Build and test infrastructure for MLflow
  • area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:

  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?

Yes should be selected for bug fixes, documentation updates, and other small changes. No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
  • Minor release: a release that increments the second part of the version number (e.g., 1.2.0 -> 1.3.0).
    Bug fixes, doc updates and new features usually go into minor releases.
  • Patch release: a release that increments the third part of the version number (e.g., 1.2.0 -> 1.2.1).
    Bug fixes and doc updates usually go into patch releases.
  • Yes (this PR will be cherry-picked and included in the next patch release)
  • No (this PR will be included in the next minor release)

Signed-off-by: Samraj Moorjani <samraj.moorjani@databricks.com>
@github-actions github-actions bot added the v3.7.1, area/evaluation (MLflow Evaluation), and rn/feature (Mention under Features in Changelogs) labels Dec 5, 2025

github-actions bot commented Dec 5, 2025

Documentation preview for dd94b40 is available.

Changed Pages (2)

@smoorjani smoorjani requested a review from AveshCSingh December 5, 2025 20:22
@AveshCSingh AveshCSingh left a comment

The code changes LGTM! Could you also update https://mlflow.org/docs/latest/genai/eval-monitor/running-evaluation/agents/ to include documentation?

Signed-off-by: Samraj Moorjani <samraj.moorjani@databricks.com>
@smoorjani smoorjani requested a review from AveshCSingh December 5, 2025 23:23
@AveshCSingh AveshCSingh left a comment
Thanks for adding documentation! Looks great

@smoorjani smoorjani requested a review from BenWilson2 December 6, 2025 00:10

@functools.wraps(async_fn)
def sync_wrapper(*args, **kwargs):
    return asyncio.run(asyncio.wait_for(async_fn(*args, **kwargs), timeout=timeout))
Member
Has this been tested in a Jupyter / Databricks notebook context (just a cautionary confirmation here to ensure that the global event loop handles this properly) ?

Collaborator Author
Good call! Added support for this and tested it here.
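The notebook concern above is real: asyncio.run raises RuntimeError when called from a thread that already has a running event loop, as in Jupyter or Databricks notebooks. One common workaround, sketched here with a hypothetical helper name (`make_sync` is illustrative, not the PR's actual code), is to detect the running loop and fall back to executing the coroutine on a fresh loop in a worker thread:

```python
import asyncio
import concurrent.futures
import functools


def make_sync(async_fn, timeout=None):
    # Hypothetical sketch: call async_fn synchronously even when an event
    # loop is already running (e.g. in a Jupyter/Databricks notebook).
    @functools.wraps(async_fn)
    def sync_wrapper(*args, **kwargs):
        async def runner():
            return await asyncio.wait_for(async_fn(*args, **kwargs), timeout=timeout)

        try:
            asyncio.get_running_loop()
        except RuntimeError:
            # No loop in this thread: asyncio.run is safe to use directly.
            return asyncio.run(runner())
        # A loop is already running here, so run the coroutine on its own
        # loop in a worker thread to avoid "asyncio.run() cannot be called
        # from a running event loop".
        with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
            return pool.submit(asyncio.run, runner()).result()

    return sync_wrapper


async def echo(x: str) -> str:
    await asyncio.sleep(0)
    return x


print(make_sync(echo)("hello"))  # works with or without a running loop
```

Note the worker-thread branch blocks the caller's loop while it waits, which is acceptable for batch evaluation but would be a problem inside a latency-sensitive async server.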

Member
Thanks! Looks great!

Signed-off-by: Samraj Moorjani <samraj.moorjani@databricks.com>
@smoorjani smoorjani requested a review from BenWilson2 December 7, 2025 23:39
@BenWilson2 BenWilson2 enabled auto-merge December 8, 2025 01:02
@BenWilson2 BenWilson2 self-assigned this Dec 8, 2025
@BenWilson2 BenWilson2 disabled auto-merge December 8, 2025 01:13
@BenWilson2 BenWilson2 merged commit c480c5f into mlflow:master Dec 8, 2025
49 of 55 checks passed

Labels

area/evaluation MLflow Evaluation rn/feature Mention under Features in Changelogs. v3.7.1
