[Prompt Optimization Backend PR #1] Wrap prompt optimize in mlflow job#20001
Conversation
@chenmoneygithub Thank you for the contribution! Could you fix the following issue(s)?

⚠ DCO check: The DCO check failed. Please sign off your commit(s) by following the instructions here. See https://github.com/mlflow/mlflow/blob/master/CONTRIBUTING.md#sign-your-work for more details.
Pull request overview
This PR wraps the mlflow.genai.optimize_prompts() function in an MLflow job to enable asynchronous prompt optimization execution. This is a foundational component for building the MLflow prompt optimization backend.
Changes:
- Added `optimize_prompts_job` function as a background job wrapper
- Implemented helper functions for optimizer creation, scorer loading, and predict function building
- Added telemetry event tracking for job execution
- Registered the new job in the MLflow server's job registry
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| mlflow/genai/optimize/job.py | New module implementing the prompt optimization job wrapper with helper functions for optimizer creation, scorer loading, and predict function building |
| mlflow/telemetry/events.py | Added OptimizePromptsJobEvent for tracking job execution telemetry |
| mlflow/server/jobs/init.py | Registered the optimize_prompts_job in the supported job function list and allowed job name list |
| tests/genai/optimize/test_job.py | Comprehensive unit tests for the job wrapper and helper functions |
| tests/telemetry/test_events.py | Unit tests for the OptimizePromptsJobEvent telemetry event |
Documentation preview for 13ca846 is available at: More info
mlflow/genai/optimize/job.py
Outdated
```python
        Dict containing optimization results and metadata.
    """
    # Record telemetry event for job execution
    _record_event(
```
Can we use the decorator style for adding telemetry to this API?
mlflow/genai/optimize/job.py
Outdated
```python
    prompt_uri: str,
    dataset_id: str,
    optimizer_type: str,
    optimizer_config_json: str | None,
```
Does it cause any issue if we accept optimizer_config as dict[str, Any] instead of str in this method?
Oops, this is supposed to be a dict; the handler code should deserialize the string into a dict. Updated.
mlflow/genai/optimize/job.py
Outdated
```python
    dataset_id: str,
    optimizer_type: str,
    optimizer_config_json: str | None,
    scorers: list[str],
```
Q: What's the format for custom scorers?
mlflow/genai/optimize/job.py
Outdated
```python
        dataset_id: The ID of the EvaluationDataset containing training data.
        optimizer_type: The optimizer type string (e.g., "gepa", "metaprompt").
        optimizer_config_json: JSON string of optimizer-specific configuration.
        scorers: List of scorer names. Can be built-in scorer class names
```
What would be the workflow if users want to use a custom LLM judge? Is it: create a judge using make_judge -> register it using .register() -> call this API with its judge name?
yes exactly
```python
    source_prompt = load_prompt(prompt_uri)

    # Resume the given run ID. Params have already been logged by the handler
    with start_run(run_id=run_id):
```
Does it mean an MLflow run is created and ended before this method is called?
Yes, it's written this way because we need to associate the optimization job with an MLflow run when the job starts. If we don't create the run before kicking off the job, we lose this lineage until the job finishes.
Code in handler should be more straightforward
mlflow/genai/optimize/job.py
Outdated
```python
    config = json.loads(optimizer_config_json) if optimizer_config_json else {}
    optimizer_type = optimizer_type.lower() if optimizer_type else ""

    if optimizer_type == "gepa":
```
Can we define an enum for optimizer type?
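A minimal sketch of such an enum; `OptimizerType` and its members are illustrative, not MLflow's actual definitions. Mixing in `str` keeps the values usable wherever the raw strings were used before, and a `parse` helper centralizes the case-insensitive lookup and the error message.

```python
from enum import Enum

class OptimizerType(str, Enum):
    """Hypothetical enum replacing raw optimizer-type strings."""
    GEPA = "gepa"
    METAPROMPT = "metaprompt"

    @classmethod
    def parse(cls, value: str) -> "OptimizerType":
        # Case-insensitive lookup with an actionable error on bad input.
        try:
            return cls(value.lower())
        except ValueError:
            supported = ", ".join(m.value for m in cls)
            raise ValueError(
                f"Unknown optimizer type {value!r}; supported types: {supported}"
            )

optimizer = OptimizerType.parse("GEPA")
```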
mlflow/genai/optimize/job.py
Outdated
```python
    Returns:
        A callable that takes inputs dict and returns the LLM response.
    """
    import litellm
```
Can we display a friendly error message if litellm is not installed?
mlflow/genai/optimize/job.py
Outdated
```python
    set_experiment(experiment_id=experiment_id)

    dataset = get_dataset(dataset_id=dataset_id)
    train_data = dataset.to_df()
```
nit: I guess optimize_prompts accepts EvaluationDataset
mlflow/genai/optimize/job.py
Outdated
```python
        the experiment's scorer registry.

    Returns:
        Dict containing optimization results and metadata.
```
Reminder: need to guarantee that the dict is JSON serializable.
```python
    # Resume the given run ID. Params have already been logged by the handler
    with start_run(run_id=run_id):
        # Link source prompt to run for lineage
        client = MlflowClient()
```
discussed offline, this should be fine since we expect the job to be kicked off by the server handler
mlflow/genai/optimize/job.py
Outdated
```python
        enable_tracking=True,
    )

    return {
```
nit: shall we define a class for the response?
good call, done!
Related Issues/PRs
#xxx

What changes are proposed in this pull request?
Wrap `mlflow.genai.optimize_prompts()` in an MLflow job, which is a required step for building the MLflow prompt optimization backend. This is part of #19926, and I am just splitting the PR for easy review.
Note: This PR is safe to go with either 3.9 or 3.10 release, since it's not directly user-facing, but just a backbone for the optimization backend.
How is this PR tested?
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/tracking: Tracking Service, tracking client APIs, autologging
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
- area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
- area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
- area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
- area/projects: MLproject format, project running backends
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?
- Yes should be selected for bug fixes, documentation updates, and other small changes.
- No should be selected for new features and larger changes.

If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
Bug fixes, doc updates and new features usually go into minor releases.
Bug fixes and doc updates usually go into patch releases.