Fix scorers issue in metaprompting #20173
Conversation
🛠 DevTools 🛠
Install mlflow from this PR. For Databricks, use the following command:
@chenmoneygithub Thank you for the contribution! Could you fix the following issue(s)?

⚠ DCO check: The DCO check failed. Please sign off your commit(s) by following the instructions here. See https://github.com/mlflow/mlflow/blob/master/CONTRIBUTING.md#sign-your-work for more details.
Pull request overview
This PR enhances the prompt optimization API to support zero-shot mode by making the scorers parameter optional and adding validation to ensure train_data and scorers are set together (both provided or both None/empty).
Changes:
- Made
scorersparameter optional (defaults to None) inoptimize_prompts() - Added validation to ensure
train_dataandscorersare mutually required - Updated
validate_train_data()to handle None scorers for zero-shot mode
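The mutual-requirement check described above can be sketched as follows. This is a minimal sketch based on the snippets quoted later in this thread; the function name is illustrative and a plain `ValueError` stands in for whatever exception type MLflow actually raises:

```python
def validate_train_data_and_scorers(train_data, scorers):
    """Reject calls where only one of train_data / scorers is provided."""
    has_train_data = train_data is not None and len(train_data) > 0
    has_scorers = scorers is not None and len(scorers) > 0
    if has_train_data != has_scorers:
        raise ValueError(
            "`train_data` and `scorers` must be set together: pass both "
            "for few-shot optimization, or neither for zero-shot mode."
        )
```

With this check, `train_data=None, scorers=None` (zero-shot) and both-provided (few-shot) pass, while mixed calls fail fast instead of breaking deep inside an optimizer.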
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| mlflow/genai/optimize/optimize.py | Made scorers optional, added mutual validation, set eval_fn to None in zero-shot mode |
| mlflow/genai/optimize/util.py | Updated validate_train_data to accept None scorers |
| tests/genai/optimize/test_optimize.py | Updated MockPromptOptimizer to handle None eval_fn, added validation tests |
Documentation preview for 2aad211 is available at: More info
mlflow/genai/optimize/optimize.py (Outdated)

```
has_train_data = train_data is not None and len(train_data) > 0
has_scorers = scorers is not None and len(scorers) > 0

if has_train_data and not has_scorers:
```
Isn't it possible to run few-shot metaprompting if tracing data exists and scorers is None?
Technically yes, but it did not work well in my earlier experiments, potentially because no new information gets generated.
However, for the model-switching use case, where the inference model is different from the model that generated the traces, this setup (train_data + no scorers) does work. Since we use the same API to cover both scenarios, let me remove this validation.
```
if train_data is None or len(train_data) == 0:
# Validate that train_data and scorers are set together
has_train_data = train_data is not None and len(train_data) > 0
```
nit: do we allow users to pass train_data=None? The type hint does not support None.
EvaluationDatasetTypes could be None:
```python
EvaluationDatasetTypes = (
    pd.DataFrame
    | pyspark.sql.dataframe.DataFrame
    | list[dict]
    | list[Trace]
    | ManagedEvaluationDataset
    | EntityEvaluationDataset
    | ConversationSimulator
    | None
)
```
I went with this approach because `"EvaluationDatasetTypes" | None` is invalid: a string forward reference does not support the `|` operator at runtime.
mlflow/genai/optimize/optimize.py (Outdated)

```
metric_fn = create_metric_from_scorers(scorers, aggregation)
eval_fn = _build_eval_fn(predict_fn, metric_fn)
# Create metric function only if scorers are provided (few-shot mode)
if has_scorers:
```
What happens if users don't pass a dataset and scorers, and use GEPA? Maybe we should add a validation in each optimizer, as the required fields may vary across optimizers?
I realized the old code was a bit broken. I refactored it to make the validation work better and to ensure that metaprompting with train_data but without scorers works well.
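The gating in the diff above can be sketched as follows. The stand-in implementations here are hypothetical (the real helpers in the PR are `create_metric_from_scorers` and `_build_eval_fn`, whose internals are not shown in this thread); the point is only that zero-shot mode yields `eval_fn=None`:

```python
def create_metric_from_scorers(scorers, aggregation=None):
    # Stand-in: average the individual scorer outputs for one prediction.
    def metric_fn(inputs, outputs):
        scores = [scorer(inputs, outputs) for scorer in scorers]
        return sum(scores) / len(scores)
    return metric_fn

def build_eval_fn(predict_fn, metric_fn):
    # Stand-in: run the prediction, then score it.
    def eval_fn(inputs):
        return metric_fn(inputs, predict_fn(inputs))
    return eval_fn

def resolve_eval_fn(predict_fn, scorers, aggregation=None):
    # Zero-shot mode: without scorers there is nothing to evaluate,
    # so the optimizer receives eval_fn=None.
    if not scorers:
        return None
    metric_fn = create_metric_from_scorers(scorers, aggregation)
    return build_eval_fn(predict_fn, metric_fn)
```

Each optimizer can then decide for itself whether `eval_fn=None` is acceptable, which matches the reviewer's suggestion of per-optimizer validation.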
Related Issues/PRs
#xxx

What changes are proposed in this pull request?
We need to raise an explicit exception when only one of `train_data` and `scorers` is set in `mlflow.genai.optimize_prompts()`. Additionally, we allow `scorers=None` for a better developer experience. When the metaprompting optimizer receives a dataset but no scorers, metaprompting still works, and an example looks like below:
How is this PR tested?
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/tracking: Tracking Service, tracking client APIs, autologging
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
- area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
- area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
- area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
- area/projects: MLproject format, project running backends
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?
- Yes should be selected for bug fixes, documentation updates, and other small changes.
- No should be selected for new features and larger changes.

If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
Bug fixes, doc updates and new features usually go into minor releases.
Bug fixes and doc updates usually go into patch releases.