
Enhance Multi-Objective Constrained Parallel Optimization in GPSampler#6481

Merged
y0z merged 15 commits into optuna:master from sawa3030:add-constant-liar-in-gpsampler-multiobjective on Mar 19, 2026

Conversation

@sawa3030 (Collaborator) commented on Feb 26, 2026:

This PR improves parallel suggestions in GPSampler. While #6430 introduced the Constant Liar strategy for single-objective unconstrained optimization, this PR extends parallel support to multi-objective and constrained optimization by using the Kriging Believer strategy.

Motivation

Fixes #6392.

Description of the changes

  • Make LogEHVI, LogPI, ConstrainedLogEI, and ConstrainedLogEHVI aware of RUNNING trials by accepting running_trials (pending points).
  • Refactor the implementation to simplify the codepath, since running trials are now handled uniformly across all settings, regardless of whether the problem is single- or multi-objective, constrained or unconstrained.
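The Kriging Believer strategy mentioned above can be illustrated with a minimal NumPy sketch (a schematic illustration, not Optuna's actual implementation; `rbf_kernel` and `kriging_believer` are hypothetical names): the GP posterior mean at each RUNNING trial's parameters is appended to the training data as a provisional "belief", so subsequent suggestions are steered away from points already being evaluated.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    # Squared-exponential kernel between two sets of points.
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / length_scale**2)

def kriging_believer(x_obs, y_obs, x_pending, noise=1e-6):
    # Augment the training set with the GP posterior mean ("belief")
    # at each pending (RUNNING) point; the acquisition function then
    # treats those points as if they were already observed.
    gram = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    k_star = rbf_kernel(x_pending, x_obs)
    y_belief = k_star @ np.linalg.solve(gram, y_obs)  # zero-mean GP posterior mean
    return np.vstack([x_obs, x_pending]), np.concatenate([y_obs, y_belief])
```

Unlike Constant Liar, which plugs in a fixed value for pending points, Kriging Believer uses the model's own prediction, which generalizes more naturally to multi-objective and constrained settings.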

@sawa3030 (Collaborator, Author) commented on Feb 26, 2026:

Benchmark of Multi-objective Unconstrained Optimization

Benchmark setup

(This benchmark follows the evaluation procedure used in #6430)
I compared the current master implementation against the Kriging Believer strategy introduced in this PR.

Tested settings

  • Benchmark problems: five WFG problems (WFG1 to WFG5)
  • n_trials: [25, 50, 75, 100]
  • batch_size: [5, 10, 50]
  • n_objective: [2, 4]
  • Repeats: 10 independent runs per (problem, n_trials, batch_size, strategy), using different random seeds

Evaluation

For each (benchmark problem, n_trials, batch_size) combination, I compared the hypervolume achieved by the master branch and by this PR using a one-sided Mann–Whitney U test (α = 0.10).

If strategy i is significantly better than strategy j (j != i), I count it as a “win” for i. Total wins are then aggregated over all (problem, n_trials) combinations.
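The win-counting procedure above can be sketched with SciPy (illustrative numbers only; `hv_master` and `hv_pr` below are made-up hypervolume samples, not the actual benchmark data):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical hypervolume values from 10 seeds per strategy.
hv_master = np.array([0.58, 0.60, 0.59, 0.61, 0.60, 0.62, 0.57, 0.59, 0.60, 0.61])
hv_pr = hv_master + 0.06  # pretend the PR consistently improves HV

# One-sided test: is hv_pr stochastically greater than hv_master?
result = mannwhitneyu(hv_pr, hv_master, alternative="greater")
pr_wins = bool(result.pvalue < 0.10)  # alpha = 0.10, as in the benchmark
```

A "win" is counted for the PR whenever `pr_wins` is true for a given (problem, n_trials, batch_size) cell, and symmetrically for master with the samples swapped.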

NOTE: The benchmark script is available here

Results (total wins)

| batch_size | wins: master | wins: PR #6481 |
|------------|--------------|----------------|
| 5          | 0/40         | 22/40          |
| 10         | 0/40         | 20/40          |
| 50         | 0/40         | 3/40           |

Overall, these results suggest that this PR improves HV compared to master under the tested settings.
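For reference, the hypervolume metric compared above can be computed for a 2-objective front with a short sketch (assuming minimization and an input front that is already nondominated; this is illustrative, not the benchmark's actual HV code):

```python
def hypervolume_2d(front, ref):
    # Area dominated by a nondominated 2-objective (minimization) front,
    # bounded by the reference point `ref`: a sum of disjoint rectangles.
    hv, prev_f2 = 0.0, ref[1]
    for f1_val, f2_val in sorted(front):  # ascending in f1 => descending in f2
        hv += (ref[0] - f1_val) * (prev_f2 - f2_val)
        prev_f2 = f2_val
    return hv
```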

@sawa3030 (Collaborator, Author) commented:
I'll open this PR once #6430 is merged.

@sawa3030 changed the title from "Enhance Multi-Objective Parallel Optimization in GPSampler" to "Enhance Multi-Objective Constrained Parallel Optimization in GPSampler" on Mar 4, 2026
@sawa3030 (Collaborator, Author) commented on Mar 4, 2026:

Benchmark of Single-objective Constrained Optimization

Benchmark setup

(This benchmark follows the evaluation procedure used in #6430)
I compared the current master implementation against the Kriging Believer strategy introduced in this PR.

Tested settings

  • Benchmark problems: two simulation problems from "Bayesian Optimization with Inequality Constraints" (Gardner, Jacob R., et al., ICML 2014):
    • Objective: cos(2x) cos(y) + sin(x)
      Constraint: cos(x) cos(y) - sin(x) sin(y) <= 0.5
    • Objective: sin(x) + y
      Constraint: sin(x) sin(y) <= -0.95
  • n_trials: [25, 50, 75, 100]
  • batch_size: [5, 10, 50]
  • Repeats: 100 independent runs per (problem, n_trials, batch_size, strategy), using different random seeds
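The two benchmark problems above can be transcribed directly into plain Python (a transcription of the stated formulas; minimization is assumed, following the original paper):

```python
import math

# Problem 1: minimize f1(x, y) subject to c1(x, y) <= 0.5.
def f1(x, y):
    return math.cos(2 * x) * math.cos(y) + math.sin(x)

def c1(x, y):
    # Note: cos(x)cos(y) - sin(x)sin(y) == cos(x + y).
    return math.cos(x) * math.cos(y) - math.sin(x) * math.sin(y)

# Problem 2: minimize f2(x, y) subject to c2(x, y) <= -0.95.
def f2(x, y):
    return math.sin(x) + y

def c2(x, y):
    return math.sin(x) * math.sin(y)
```

The second constraint is tight (feasible only where sin(x)·sin(y) ≤ −0.95), which makes it a useful stress test for constrained acquisition functions.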

Evaluation

For each (benchmark problem, n_trials, batch_size) combination, I compared the best objective value (best_value) achieved by the master branch and by this PR using a one-sided Mann–Whitney U test (α = 0.10).

If strategy i is significantly better than strategy j (j != i), I count it as a “win” for i. Total wins are then aggregated over all (problem, n_trials) combinations.

NOTE: The benchmark script is available here

Results (total wins)

| batch_size | wins: master | wins: PR #6481 |
|------------|--------------|----------------|
| 5          | 0/8          | 6/8            |
| 10         | 0/8          | 4/8            |
| 50         | 0/8          | 4/8            |

Overall, these results suggest that this PR improves the performance compared to master under the tested settings.

@sawa3030 (Collaborator, Author) commented on Mar 4, 2026:

Benchmark of Multi-objective Constrained Optimization

Benchmark setup

(This benchmark follows the evaluation procedure used in #6430)
I compared the current master implementation against the Kriging Believer strategy introduced in this PR.

Tested settings

  • Benchmark problems: C2-DTLZ2
  • n_trials: [25, 50, 75, 100]
  • batch_size: [5, 10, 50]
  • Repeats: 10 independent runs per (problem, n_trials, batch_size, strategy), using different random seeds

Evaluation

For each (benchmark problem, n_trials, batch_size) combination, I compared the hypervolume achieved by the master branch and by this PR using a one-sided Mann–Whitney U test (α = 0.10).

If strategy i is significantly better than strategy j (j != i), I count it as a “win” for i. Total wins are then aggregated over all (problem, n_trials) combinations.

NOTE: The benchmark script is available here

Results (total wins)

| batch_size | wins: master | wins: PR #6481 |
|------------|--------------|----------------|
| 5          | 0/4          | 3/4            |
| 10         | 0/4          | 1/4            |
| 50         | 0/4          | 0/4            |

Overall, these results suggest that this PR improves HV compared to master under the tested settings.

@sawa3030 sawa3030 marked this pull request as ready for review March 4, 2026 09:17
@not522 assigned not522 and y0z on Mar 5, 2026
@not522 (Member) commented on Mar 5, 2026:

@y0z Could you review this PR?

@y0z (Member) left a comment:
Do you have any literature on the constant liar/Kriging believer strategy? Providing references in the documentation may be helpful.

Comment on lines 315 to 316:

> # At present, running trials are taken into account only in single-objective
> # unconstrained optimization.

(Member) These lines can be removed.

@not522 added the "enhancement" label (Change that does not break compatibility and does not affect public interfaces, but improves performance.) on Mar 10, 2026
@not522 (Member) commented on Mar 10, 2026:

Could you check for mypy errors? https://github.com/not522/optuna/actions/runs/22881585369/job/66385359005
The rest of the PR appears to be a well-organized implementation. I'll check benchmarks.

@sawa3030 (Collaborator, Author) replied:

> Could you check for mypy errors? https://github.com/not522/optuna/actions/runs/22881585369/job/66385359005 The rest of the PR appears to be a well-organized implementation. I'll check benchmarks.

Thank you for checking. The mypy errors should be fixed now.
https://github.com/sawa3030/optuna/actions/runs/23037280833

@sawa3030 (Collaborator, Author) commented:
Thank you for the review. I've made the requested updates. PTAL.

@not522 (Member) left a comment:

LGTM!

@not522 removed their assignment on Mar 16, 2026
@y0z (Member) left a comment:
LGTM

@y0z merged commit e608a78 into optuna:master on Mar 19, 2026
13 checks passed
@y0z added this to the v4.9.0 milestone on Mar 19, 2026

Development

Successfully merging this pull request may close issue #6392 ("Constant Liar strategy for GPSampler").

3 participants