
Track intermediate candidates and evaluation scores in gepa optimizer#20198

Merged
chenmoneygithub merged 8 commits into mlflow:master from chenmoneygithub:track-gepa-intermediate-prompts
Jan 27, 2026

Conversation

@chenmoneygithub
Contributor

@chenmoneygithub chenmoneygithub commented Jan 21, 2026

Related Issues/PRs

#xxx

What changes are proposed in this pull request?

Track intermediate candidates and evaluation scores in the GEPA optimizer, which makes the GEPA optimization process easier to follow. See the screenshots below for a sample:

(sample screenshots omitted)

We track three things for each candidate that reaches full validation:

  • The overall score (aggregated across all data records)
  • The eval table with per-record results
  • The candidate prompt. When multiple prompts are involved, each prompt is written to a separate file for better readability.

Note that we don't track candidates that don't reach full validation, since they are not added to the Pareto frontier of the GEPA optimizer.
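
For illustration, a minimal sketch of what this per-candidate logging could look like; the helper name, metric key, and file layout are assumptions based on the review discussion below, not the exact implementation:

import mlflow

def log_validation_candidate(iteration, candidate, aggregate_score, eval_results_table):
    # Sketch only: names and paths are assumptions based on the review below.
    iteration_dir = f"prompt_candidates/iteration_{iteration}"

    # 1. Overall score aggregated across all validation records.
    mlflow.log_metric("gepa_validation_score", aggregate_score, step=iteration)

    # 2. Per-record evaluation results as a table artifact.
    mlflow.log_table(eval_results_table, artifact_file=f"{iteration_dir}/eval_results.json")

    # 3. One text artifact per prompt for readability.
    for prompt_name, prompt_text in candidate.items():
        mlflow.log_text(prompt_text, f"{iteration_dir}/{prompt_name}.txt")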

How is this PR tested?

  • Existing unit/integration tests
  • New unit/integration tests
  • Manual tests

Does this PR require documentation update?

  • No. You can skip the rest of this section.
  • Yes. I've updated:
    • Examples
    • API references
    • Instructions

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/tracking: Tracking Service, tracking client APIs, autologging
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
  • area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
  • area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
  • area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
  • area/projects: MLproject format, project running backends
  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/build: Build and test infrastructure for MLflow
  • area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:

  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?

Yes should be selected for bug fixes, documentation updates, and other small changes. No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
  • Minor release: a release that increments the second part of the version number (e.g., 1.2.0 -> 1.3.0).
    Bug fixes, doc updates and new features usually go into minor releases.
  • Patch release: a release that increments the third part of the version number (e.g., 1.2.0 -> 1.2.1).
    Bug fixes and doc updates usually go into patch releases.
  • Yes (this PR will be cherry-picked and included in the next patch release)
  • No (this PR will be included in the next minor release)

Copilot AI review requested due to automatic review settings January 21, 2026 21:59
@github-actions
Contributor

🛠 DevTools 🛠

Install mlflow from this PR

# mlflow
pip install git+https://github.com/mlflow/mlflow.git@refs/pull/20198/merge
# mlflow-skinny
pip install git+https://github.com/mlflow/mlflow.git@refs/pull/20198/merge#subdirectory=libs/skinny

For Databricks, use the following command:

%sh curl -LsSf https://raw.githubusercontent.com/mlflow/mlflow/HEAD/dev/install-skinny.sh | sh -s pull/20198/merge

@github-actions
Contributor

@chenmoneygithub Thank you for the contribution! Could you fix the following issue(s)?

⚠ DCO check

The DCO check failed. Please sign off your commit(s) by following the instructions here. See https://github.com/mlflow/mlflow/blob/master/CONTRIBUTING.md#sign-your-work for more details.

@github-actions github-actions bot added area/prompts MLflow Prompt Registry and Optimization rn/feature Mention under Features in Changelogs. and removed rn/feature Mention under Features in Changelogs. area/prompts MLflow Prompt Registry and Optimization labels Jan 21, 2026
Contributor

Copilot AI left a comment

Pull request overview

This PR adds tracking capabilities to the GEPA optimizer to monitor intermediate candidates and their evaluation scores during the optimization process. This enhancement provides visibility into the optimization workflow by logging validation candidates, aggregate scores, per-record scores, and individual scorer results.

Changes:

  • Extended EvaluationResultRecord to include individual scorer results
  • Modified evaluation metric functions to return individual scores alongside aggregate scores
  • Implemented artifact logging for validation candidates in the GEPA optimizer
  • Added comprehensive test coverage for the new tracking functionality

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 4 comments.

Summary per file:

  • mlflow/genai/optimize/types.py: Added individual_scores field to EvaluationResultRecord with default factory
  • mlflow/genai/optimize/util.py: Updated create_metric_from_scorers to return a tuple with individual scores
  • mlflow/genai/optimize/optimize.py: Modified _build_eval_fn to unpack and pass individual scores from the metric function
  • mlflow/genai/optimize/optimizers/gepa_optimizer.py: Added tracking logic in MlflowGEPAAdapter with a _log_validation_candidate method for artifact logging
  • tests/genai/optimize/optimizers/test_gepa_optimizer.py: Added comprehensive test for prompt candidate logging functionality
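
Based on these file summaries, the shape of the change is roughly the following sketch (field and return types are inferred from the descriptions, not copied from the actual code):

from dataclasses import dataclass, field

@dataclass
class EvaluationResultRecord:
    # Existing fields omitted; the PR adds a per-scorer score mapping.
    score: float | None = None
    individual_scores: dict[str, float] = field(default_factory=dict)

def metric(record: EvaluationResultRecord) -> tuple[float | None, dict[str, float]]:
    # Per the summary, the metric built by create_metric_from_scorers now
    # returns both the aggregate score and the per-scorer scores.
    return record.score, record.individual_scores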


@github-actions
Contributor

github-actions bot commented Jan 21, 2026

Documentation preview for 9f524f0 is available at:

More info
  • Ignore this comment if this PR does not change the documentation.
  • The preview is updated when a new commit is pushed to this PR.
  • This comment was created by this workflow run.
  • The documentation was built by this workflow run.

@github-actions github-actions bot added area/prompts MLflow Prompt Registry and Optimization rn/feature Mention under Features in Changelogs. labels Jan 23, 2026
# Log per-scorer metrics (for API response parsing)
if output.initial_eval_score_per_scorer:
    for scorer_name, score in output.initial_eval_score_per_scorer.items():
        mlflow.log_metric(f"initial_eval_score.{scorer_name}", score)
Collaborator

Let's use log_metrics for efficient logging

Contributor Author

sg, done!
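
For reference, a sketch of the batched form suggested here (attribute names follow the snippet above; the wrapper function is illustrative):

import mlflow

def log_per_scorer_metrics(initial_eval_score_per_scorer: dict[str, float]) -> None:
    # One log_metrics call instead of one log_metric call per scorer.
    if initial_eval_score_per_scorer:
        mlflow.log_metrics(
            {
                f"initial_eval_score.{name}": score
                for name, score in initial_eval_score_per_scorer.items()
            }
        )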

batch: List of data instances to evaluate
candidate: Proposed text components (prompts)
capture_traces: Whether to capture execution traces
capture_traces: Whether to capture execution traces.
Collaborator

@TomeHirata TomeHirata Jan 26, 2026

Can we not add this period or add periods to all args for consistency?

Contributor Author

done!

outputs = [result.outputs for result in eval_results]
scores = [result.score for result in eval_results]
trajectories = eval_results if capture_traces else None
objective_scores = [result.individual_scores for result in eval_results]
Collaborator

q: is this parameter present for all supported versions?

Contributor Author

I will bump the GEPA requirement to >=0.0.26; a critical bug was fixed in gepa-ai/gepa#171 and released with 0.0.26. So yes, the parameter is available for all supported gepa versions.

candidate: The candidate prompts being validated
eval_results: Evaluation results containing scores
"""
import mlflow
Collaborator

nit: Can we move this to the module level?

Contributor Author

done!

"""
import mlflow

active_run = mlflow.active_run()
Collaborator

any reason not to use self.tracking_enabled?

Contributor Author

good call, changed
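
A rough sketch of the guard after this change; how tracking_enabled is initialized is an assumption, since only its use is discussed here:

import mlflow

class MlflowGEPAAdapter:
    def __init__(self):
        # Assumption: tracking is enabled only when an MLflow run is active.
        self.tracking_enabled = mlflow.active_run() is not None

    def _log_validation_candidate(self, iteration, candidate, eval_results):
        if not self.tracking_enabled:
            return  # Skip artifact/metric logging when tracking is disabled.
        # ... score, table, and prompt-file logging goes here ...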

# Compute per-scorer average scores
scorer_names = set()
for result in eval_results:
    scorer_names.update(result.individual_scores.keys())
Collaborator

nit: we can use scorer_names |= result.individual_scores.keys()

Contributor Author

gotcha, changed
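
Putting this suggestion together with the averaging shown further down, the per-scorer aggregation reads roughly as follows (a sketch that assumes numeric scores; see the later discussion about non-numeric scorer outputs):

def average_per_scorer(eval_results) -> dict[str, float]:
    # Union of all scorer names across records, using the |= idiom.
    scorer_names: set[str] = set()
    for result in eval_results:
        scorer_names |= result.individual_scores.keys()

    per_scorer_scores = {}
    for name in scorer_names:
        scores = [
            r.individual_scores[name] for r in eval_results if name in r.individual_scores
        ]
        if scores:
            per_scorer_scores[name] = sum(scores) / len(scores)
    return per_scorer_scores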


# Build the evaluation results table and log to MLflow as a table artifact
eval_results_table = {
    "inputs": [json.dumps(r.inputs) for r in eval_results],
Collaborator

Do we need json.dumps here? isn't this handled by mlflow.log_table?

Contributor Author

good call, I wasn't aware of that part, removed!
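
A minimal sketch of the simplified table logging without the manual json.dumps; column names other than inputs are taken from the other snippets in this PR and should be treated as illustrative:

import mlflow

def log_eval_table(eval_results, iteration_dir: str) -> None:
    # mlflow.log_table serializes the values itself, so raw objects can be passed.
    eval_results_table = {
        "inputs": [r.inputs for r in eval_results],
        "outputs": [r.outputs for r in eval_results],
        "score": [r.score for r in eval_results],
    }
    mlflow.log_table(eval_results_table, artifact_file=f"{iteration_dir}/eval_results.json")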

for result in eval_results:
    scorer_names.update(result.individual_scores.keys())

per_scorer_scores = {}
Collaborator

Can we move this logic below so that the variable generation logic is closer to where it's used?

Contributor Author

done!

prompt_path = tmp_path / f"{prompt_name}.txt"
with open(prompt_path, "w") as f:
    f.write(prompt_text)
mlflow.log_artifact(str(prompt_path), artifact_path=iteration_dir)
Collaborator

Can't we pass Path to mlflow.log_artifact?

Contributor Author

good call, that was redundant AI-generated code, changed!
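
The simplified version could look like this sketch (Path.write_text is used for brevity; the Path is passed directly to log_artifact, as agreed above):

from pathlib import Path
import mlflow

def log_prompt_file(tmp_path: Path, prompt_name: str, prompt_text: str, iteration_dir: str) -> None:
    prompt_path = tmp_path / f"{prompt_name}.txt"
    prompt_path.write_text(prompt_text)
    # No str() conversion needed; the Path object is accepted as-is.
    mlflow.log_artifact(prompt_path, artifact_path=iteration_dir)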

r.individual_scores.get(scorer_name) for r in eval_results
]

iteration_dir = f"prompt_candidates/iteration_{iteration}"
Collaborator

I guess these path names will be used when returning the optimization result from the REST API, so shall we extract these path/file names as constants?

Contributor Author

good call, the new commit defines a few constants for reuse.
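
The constant names below are hypothetical, since the follow-up commit isn't shown in this thread; the point is only that the artifact paths are defined once and reused:

# Hypothetical names; the actual constants added in the commit are not shown here.
PROMPT_CANDIDATES_DIR = "prompt_candidates"
ITERATION_DIR_FORMAT = PROMPT_CANDIDATES_DIR + "/iteration_{iteration}"
EVAL_RESULTS_FILE = "eval_results.json"

iteration_dir = ITERATION_DIR_FORMAT.format(iteration=0)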

score: float | None
trace: Trace
rationales: dict[str, str]
individual_scores: dict[str, float] = field(default_factory=dict)
Collaborator

Per-scorer scores are not necessarily numeric. Please refer to Scorer.__call__.

Contributor Author

Yes that's a good callout, I am now planning to only allow numeric scorers: #20197 (comment)

let's discuss!

if scorer_name in r.individual_scores
]
if scores:
    per_scorer_scores[scorer_name] = sum(scores) / len(scores)
Collaborator

I think individual scorers cannot always be aggregated using sum. Please see my comment below.

Contributor Author

similar to #20198 (comment).

# Log scores summary as JSON artifact
scores_data = {
    "aggregate": aggregate_score,
    "per_scorer": per_scorer_scores,
Collaborator

I wonder if we should log metrics for each scorer name where possible. This enables users to see the time progression of each scorer result.

Contributor Author

yes good idea, done!
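
A sketch of logging each scorer's average as a metric keyed by iteration, so the UI shows its progression over candidates (the metric-name prefix is an assumption):

import mlflow

def log_per_scorer_progress(per_scorer_scores: dict[str, float], iteration: int) -> None:
    if per_scorer_scores:
        mlflow.log_metrics(
            {f"validation_score.{name}": score for name, score in per_scorer_scores.items()},
            step=iteration,
        )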

Collaborator

@TomeHirata TomeHirata left a comment

LGTM if we support string output in the follow-up PR

@chenmoneygithub chenmoneygithub added this pull request to the merge queue Jan 27, 2026
Merged via the queue into mlflow:master with commit dfdc877 Jan 27, 2026
47 of 50 checks passed
@chenmoneygithub chenmoneygithub deleted the track-gepa-intermediate-prompts branch January 27, 2026 06:13

Labels

area/prompts (MLflow Prompt Registry and Optimization), rn/feature (Mention under Features in Changelogs)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants