Enhance Multi-Objective Constrained Parallel Optimization in GPSampler#6481
Conversation
### Benchmark of Multi-objective Unconstrained Optimization

#### Benchmark setup

(This benchmark follows the evaluation procedure used in #6430.)

Tested settings
#### Evaluation

For each (benchmark problem, n_trials, batch_size) combination, I compared hypervolume between the master branch and this PR using a one-sided Mann–Whitney U test (α = 0.10). If strategy i is significantly better than strategy j (j != i), I count it as a "win" for i. Total wins are then aggregated over all (n_trials, problem) combinations.

NOTE: The benchmark script is available here.

#### Results (total wins)
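As a concrete illustration of this evaluation step (the hypervolume numbers below are made up for the example, not the benchmark's actual data):

```python
from scipy.stats import mannwhitneyu

# Hypothetical final hypervolumes over independent seeds (illustrative only).
hv_pr = [0.91, 0.88, 0.93, 0.90]
hv_master = [0.85, 0.89, 0.84, 0.86]

# One-sided test: is the PR's hypervolume stochastically greater than master's?
stat, p = mannwhitneyu(hv_pr, hv_master, alternative="greater")

# Count a "win" for the PR at alpha = 0.10.
pr_wins = p < 0.10
```

In the actual benchmark, this comparison is repeated for every (problem, n_trials, batch_size) combination and the wins are summed.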
Overall, these results suggest that this PR improves HV compared to the master branch.
Let me open this PR when #6430 is merged.
### Benchmark of Single-objective Constrained Optimization

#### Benchmark setup

(This benchmark follows the evaluation procedure used in #6430.)

Tested settings
#### Evaluation

For each (benchmark problem, n_trials, batch_size) combination, I compared performance between the master branch and this PR using a one-sided Mann–Whitney U test (α = 0.10). If strategy i is significantly better than strategy j (j != i), I count it as a "win" for i. Total wins are then aggregated over all (n_trials, problem) combinations.

NOTE: The benchmark script is available here.

#### Results (total wins)
Overall, these results suggest that this PR improves performance compared to the master branch.
### Benchmark of Multi-objective Constrained Optimization

#### Benchmark setup

(This benchmark follows the evaluation procedure used in #6430.)

Tested settings
#### Evaluation

For each (benchmark problem, n_trials, batch_size) combination, I compared hypervolume between the master branch and this PR using a one-sided Mann–Whitney U test (α = 0.10). If strategy i is significantly better than strategy j (j != i), I count it as a "win" for i. Total wins are then aggregated over all (n_trials, problem) combinations.

NOTE: The benchmark script is available here.

#### Results (total wins)
Overall, these results suggest that this PR improves HV compared to the master branch.
@y0z Could you review this PR?
**y0z** left a comment:
Do you have any literature on the constant liar/Kriging believer strategy? Providing references in the documentation may be helpful.
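For intuition, the Kriging Believer strategy can be sketched as follows. This is a toy illustration with a nearest-neighbour predictor standing in for the GP posterior mean and a simple distance-based exploration bonus as the acquisition function; all names here are hypothetical and unrelated to Optuna internals.

```python
# Kriging Believer: "believe" the surrogate's posterior mean at each pending
# point as if it were an observed outcome, refit, and pick the next point.
# This spreads a batch of suggestions out instead of piling them up.

def posterior_mean(X, y, x):
    # Stand-in for a GP posterior mean: value at the nearest observed point.
    return min(zip(X, y), key=lambda p: abs(p[0] - x))[1]

def acquisition(X, y, x):
    # Lower is better: believed value minus an exploration bonus for being
    # far from all (real or believed) observations.
    return posterior_mean(X, y, x) - min(abs(x - xi) for xi in X)

def kriging_believer_batch(X, y, candidates, batch_size):
    X, y = list(X), list(y)
    batch = []
    for _ in range(batch_size):
        x_next = min(candidates, key=lambda x: acquisition(X, y, x))
        batch.append(x_next)
        # Believe the model: pretend the posterior mean was actually observed,
        # so later picks in the batch avoid clustering at the same point.
        mu = posterior_mean(X, y, x_next)
        X.append(x_next)
        y.append(mu)
    return batch

# Example: two observations, pick a batch of 2 from a candidate grid.
batch = kriging_believer_batch(
    [0.0, 1.0], [1.0, 0.0], [0.0, 0.25, 0.5, 0.75, 1.0], 2
)
# → [0.75, 0.5]: the second pick moves away from the first because the
# believed observation at 0.75 removes its exploration bonus.
```

The Constant Liar variant works the same way but substitutes a fixed value (e.g. the best or worst observed outcome) for the posterior mean.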
In `optuna/samplers/_gp/sampler.py` (outdated):

```python
# At present, running trials are taken into account only in single-objective
# unconstrained optimization.
```
Could you check for mypy errors? https://github.com/not522/optuna/actions/runs/22881585369/job/66385359005
Thank you for checking. The mypy errors should be fixed now.
Thank you for the review. I've made the requested updates. PTAL.
This series improves parallel suggestions in `GPSampler`. While #6430 introduced Constant Liar for single-objective unconstrained optimization, this PR extends parallel support to multi-objective constrained optimization by using the Kriging Believer strategy.

### Motivation
Fixes #6392.
Description of the changes
- Make `LogEHVI`, `LogPI`, `ConstrainedLogEI`, and `ConstrainedLogEHVI` aware of RUNNING trials by accepting `running_trials` (pending points).