
[Prompt Optimization Backend PR #1] Wrap prompt optimize in mlflow job #20001

Merged
chenmoneygithub merged 7 commits into mlflow:master from chenmoneygithub:mlflow-po-backend-pr-1
Jan 16, 2026

Conversation

@chenmoneygithub
Contributor

@chenmoneygithub chenmoneygithub commented Jan 15, 2026

Related Issues/PRs

#xxx

What changes are proposed in this pull request?

Wrap mlflow.genai.optimize_prompts() in an MLflow job, which is a required step for building the MLflow prompt optimization backend.

This is part of #19926, and I am just splitting the PR for easy review.

Note: this PR is safe to ship in either the 3.9 or the 3.10 release, since it is not directly user-facing but only a backbone for the optimization backend.

How is this PR tested?

  • Existing unit/integration tests
  • New unit/integration tests
  • Manual tests

Does this PR require documentation update?

  • No. You can skip the rest of this section.
  • Yes. I've updated:
    • Examples
    • API references
    • Instructions

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/tracking: Tracking Service, tracking client APIs, autologging
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
  • area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
  • area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
  • area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
  • area/projects: MLproject format, project running backends
  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/build: Build and test infrastructure for MLflow
  • area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:

  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?

Yes should be selected for bug fixes, documentation updates, and other small changes. No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
  • Minor release: a release that increments the second part of the version number (e.g., 1.2.0 -> 1.3.0).
    Bug fixes, doc updates and new features usually go into minor releases.
  • Patch release: a release that increments the third part of the version number (e.g., 1.2.0 -> 1.2.1).
    Bug fixes and doc updates usually go into patch releases.
  • Yes (this PR will be cherry-picked and included in the next patch release)
  • No (this PR will be included in the next minor release)

Copilot AI review requested due to automatic review settings January 15, 2026 01:39
@github-actions
Contributor

@chenmoneygithub Thank you for the contribution! Could you fix the following issue(s)?

⚠ DCO check

The DCO check failed. Please sign off your commit(s) by following the instructions here. See https://github.com/mlflow/mlflow/blob/master/CONTRIBUTING.md#sign-your-work for more details.

@github-actions github-actions bot added the area/prompts MLflow Prompt Registry and Optimization label Jan 15, 2026
@github-actions github-actions bot added the rn/feature Mention under Features in Changelogs. label Jan 15, 2026
Contributor

Copilot AI left a comment

Pull request overview

This PR wraps the mlflow.genai.optimize_prompts() function in an MLflow job to enable asynchronous prompt optimization execution. This is a foundational component for building the MLflow prompt optimization backend.

Changes:

  • Added optimize_prompts_job function as a background job wrapper
  • Implemented helper functions for optimizer creation, scorer loading, and predict function building
  • Added telemetry event tracking for job execution
  • Registered the new job in the MLflow server's job registry
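
The registration step described above can be sketched with a minimal job registry. Everything here (`JOB_REGISTRY`, `register_job`, the stub job body) is an illustrative stand-in, not MLflow's actual internals:

```python
from typing import Any, Callable

# Toy allowed-job registry; stands in for MLflow's server-side job list.
JOB_REGISTRY: dict[str, Callable[..., dict[str, Any]]] = {}

def register_job(fn: Callable[..., dict[str, Any]]) -> Callable[..., dict[str, Any]]:
    """Register a function in the allowed-job list under its own name."""
    JOB_REGISTRY[fn.__name__] = fn
    return fn

@register_job
def optimize_prompts_job(prompt_uri: str, dataset_id: str, optimizer_type: str) -> dict[str, Any]:
    # A real implementation would load the prompt and dataset and call
    # mlflow.genai.optimize_prompts(); this stub just echoes its inputs.
    return {"prompt_uri": prompt_uri, "dataset_id": dataset_id, "optimizer_type": optimizer_type}
```

The point of the registry is that the server only ever dispatches job names that appear in the allowed list, so arbitrary callables cannot be scheduled.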

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 4 comments.

Show a summary per file
File Description
mlflow/genai/optimize/job.py New module implementing the prompt optimization job wrapper with helper functions for optimizer creation, scorer loading, and predict function building
mlflow/telemetry/events.py Added OptimizePromptsJobEvent for tracking job execution telemetry
mlflow/server/jobs/__init__.py Registered the optimize_prompts_job in the supported job function list and allowed job name list
tests/genai/optimize/test_job.py Comprehensive unit tests for the job wrapper and helper functions
tests/telemetry/test_events.py Unit tests for the OptimizePromptsJobEvent telemetry event


@github-actions
Copy link
Contributor

github-actions bot commented Jan 15, 2026

Documentation preview for 13ca846 is available at:


Dict containing optimization results and metadata.
"""
# Record telemetry event for job execution
_record_event(
Collaborator

can we use the decorator style for adding telemetry for this api?

Contributor Author

done!
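
The decorator style agreed on above might look roughly like this. `with_telemetry`, `record_event`, and the event name are hypothetical stand-ins for MLflow's internal telemetry helpers, used only to show the shape of the pattern:

```python
import functools

# Toy event sink standing in for MLflow's telemetry backend.
EVENTS: list[str] = []

def record_event(name: str) -> None:
    EVENTS.append(name)

def with_telemetry(event_name: str):
    """Decorator that records a telemetry event each time the wrapped job runs."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            record_event(event_name)  # fire before running the job body
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@with_telemetry("OptimizePromptsJobEvent")
def optimize_prompts_job() -> str:
    return "ok"
```

Compared with calling `_record_event(...)` inline, the decorator keeps the telemetry concern out of the job body and makes it reusable across job functions.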

prompt_uri: str,
dataset_id: str,
optimizer_type: str,
optimizer_config_json: str | None,
Collaborator

Does it cause any issue if we accept optimizer_config as dict[str, Any] instead of str in this method?

Contributor Author

oops, this is supposed to be a dict, and handler code should deserialize the string to dict. updated
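
The split agreed on here can be sketched as: the server handler deserializes the JSON payload once, and the job function only ever sees a plain dict. The names (`handler`, `run_job`) are illustrative, not the actual handler code:

```python
import json
from typing import Any

def run_job(optimizer_config: dict[str, Any]) -> dict[str, Any]:
    # The job receives a dict, never a raw JSON string.
    return {"num_config_keys": len(optimizer_config)}

def handler(request_body: str) -> dict[str, Any]:
    """Deserialize the request once at the boundary, then pass typed values down."""
    payload = json.loads(request_body)
    config: dict[str, Any] = payload.get("optimizer_config") or {}
    return run_job(optimizer_config=config)
```

Keeping (de)serialization at the boundary means type errors in the config surface immediately in the handler rather than deep inside the job.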

dataset_id: str,
optimizer_type: str,
optimizer_config_json: str | None,
scorers: list[str],
Collaborator

nit: scorer_names?

Collaborator

Q: What's the format for custom scorers?

Contributor Author

sg!

dataset_id: The ID of the EvaluationDataset containing training data.
optimizer_type: The optimizer type string (e.g., "gepa", "metaprompt").
optimizer_config_json: JSON string of optimizer-specific configuration.
scorers: List of scorer names. Can be built-in scorer class names
Collaborator

What would be the workflow if users want to use a custom llm judge? Is it like create a judge using make_judge -> register it using .register() -> call this api with its judge name?

Contributor Author

yes exactly
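
The workflow confirmed above (create a judge, register it, then refer to it by name) can be sketched with a toy scorer registry. These `make_judge`/`register` stubs are simplified stand-ins for the real `mlflow.genai` APIs, kept self-contained for illustration:

```python
# Toy registry standing in for the experiment's scorer registry.
SCORER_REGISTRY: dict[str, object] = {}

def make_judge(name: str, instructions: str):
    """Stand-in for mlflow.genai.judges.make_judge; returns a callable scorer."""
    def judge(outputs: str) -> bool:
        # Toy scoring rule; a real judge would apply `instructions` via an LLM.
        return len(outputs) > 0
    judge.name = name
    return judge

def register(judge) -> None:
    """Stand-in for the judge's .register() method."""
    SCORER_REGISTRY[judge.name] = judge

def load_scorers(scorer_names: list[str]) -> list:
    """What the job does: resolve registered scorers by name."""
    return [SCORER_REGISTRY[n] for n in scorer_names]

quality_judge = make_judge("quality", "Is the answer helpful?")
register(quality_judge)
scorers = load_scorers(["quality"])
```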

source_prompt = load_prompt(prompt_uri)

# Resume the given run ID. Params have already been logged by the handler
with start_run(run_id=run_id):
Collaborator

Does it mean a MLflow run is created and ended before this method is called?

Contributor Author

yes, it's written this way because we need to associate the optimization job with an MLflow run when the job starts; if we don't create the run before kicking off the job, we lose that lineage until the job finishes.
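
The lineage pattern being described is: the handler creates the run up front and returns its ID, and the job later resumes that run. A minimal sketch with a toy in-memory run store standing in for `mlflow.start_run(run_id=...)`:

```python
import uuid

# Toy run store; stands in for the MLflow tracking backend.
RUNS: dict[str, dict] = {}

def handler_create_run(params: dict) -> str:
    """Handler side: create the run (and log params) before scheduling the job."""
    run_id = uuid.uuid4().hex
    RUNS[run_id] = {"params": params, "metrics": {}}
    return run_id  # returned to the client immediately, so lineage exists from t=0

def job(run_id: str) -> None:
    """Job side: resume the pre-created run and attach results to it."""
    run = RUNS[run_id]
    run["metrics"]["final_score"] = 0.9  # illustrative metric
```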

Contributor Author

Code in handler should be more straightforward

config = json.loads(optimizer_config_json) if optimizer_config_json else {}
optimizer_type = optimizer_type.lower() if optimizer_type else ""

if optimizer_type == "gepa":
Collaborator

Can we define an enum for optimizer type?

Contributor Author

sg, done!
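
An enum for the dispatch above might look like the following. The member values mirror the strings in the diff ("gepa", "metaprompt"), but the class and parser names are illustrative:

```python
from enum import Enum

class OptimizerType(str, Enum):
    GEPA = "gepa"
    METAPROMPT = "metaprompt"

def parse_optimizer_type(raw: str) -> OptimizerType:
    """Normalize and validate the optimizer type, failing with a clear message."""
    try:
        return OptimizerType(raw.lower())
    except ValueError:
        valid = ", ".join(m.value for m in OptimizerType)
        raise ValueError(f"Unknown optimizer type {raw!r}; expected one of: {valid}")
```

Dispatching on `OptimizerType.GEPA` instead of the bare string `"gepa"` lets typos fail loudly at parse time rather than silently falling through an if/elif chain.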

Returns:
A callable that takes inputs dict and returns the LLM response.
"""
import litellm
Collaborator

can we display a kind message if litellm is not installed?

Contributor Author

done!
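
The friendly import guard requested above is a common pattern; a generic sketch (the helper name and message wording are illustrative):

```python
import importlib

def import_optional(module_name: str, hint: str):
    """Import a module, replacing the bare ImportError with a friendlier hint."""
    try:
        return importlib.import_module(module_name)
    except ImportError as e:
        raise ImportError(hint) from e

# Usage mirroring the review request: a kind message when litellm is absent.
# litellm = import_optional(
#     "litellm",
#     "litellm is required for prompt optimization jobs; install it with `pip install litellm`.",
# )
```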

set_experiment(experiment_id=experiment_id)

dataset = get_dataset(dataset_id=dataset_id)
train_data = dataset.to_df()
Collaborator

nit: I guess optimize_prompts accepts EvaluationDataset

Contributor Author

done!

the experiment's scorer registry.

Returns:
Dict containing optimization results and metadata.
Collaborator

Reminder: need to guarantee that the dict is JSON serializable.

Contributor Author

sg!
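
One way to honor that reminder is to round-trip the result through `json` before returning it, so any non-serializable value fails fast inside the job instead of at the server boundary (a sketch; the helper name is illustrative):

```python
import json
from typing import Any

def ensure_json_serializable(result: dict[str, Any]) -> dict[str, Any]:
    """Round-trip through json; raises TypeError early for bad values."""
    return json.loads(json.dumps(result))
```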

# Resume the given run ID. Params have already been logged by the handler
with start_run(run_id=run_id):
# Link source prompt to run for lineage
client = MlflowClient()

This comment was marked as outdated.

Contributor Author

discussed offline, this should be fine since we expect the job to be kicked off by the server handler

enable_tracking=True,
)

return {
Collaborator

nit: shall we define a class for the response?

Contributor Author

good call, done!

Collaborator

@TomeHirata TomeHirata left a comment

LGTM

@chenmoneygithub chenmoneygithub added this pull request to the merge queue Jan 16, 2026
Merged via the queue into mlflow:master with commit 6b2e131 Jan 16, 2026
47 of 48 checks passed
@chenmoneygithub chenmoneygithub deleted the mlflow-po-backend-pr-1 branch January 16, 2026 20:48

Labels

area/prompts MLflow Prompt Registry and Optimization rn/feature Mention under Features in Changelogs.

4 participants