
Speed up WFG by skipping is_pareto_front and using simple Python loops#6223

Merged
gen740 merged 6 commits into optuna:master from nabenabe0928:enhance/speed-up-wfg
Aug 8, 2025
Conversation

Contributor

@nabenabe0928 nabenabe0928 commented Jul 30, 2025

Motivation

WFG becomes very slow on many-objective problems, so this PR speeds up the routine.

This PR: 3.66 seconds
Master: 5.86 seconds
Benchmarking Code
import time

import optuna

import numpy as np


rng = np.random.RandomState(42)
runtime = 0.0
n_objectives = 5
for _ in range(10):
    X = np.unique(rng.normal(size=(1000, n_objectives)), axis=0)
    pareto_sols = X[optuna.study._multi_objective._is_pareto_front(X, False)]
    start = time.time()
    hv = optuna._hypervolume.compute_hypervolume(pareto_sols, np.full(n_objectives, 10.0), True)
    runtime += time.time() - start
    print(hv)

print(runtime)

Description of the changes

  • Use Python loops instead of NumPy for short iterations
  • Avoid using is_pareto_front for small arrays
  • Avoid filtering loss_values to reduce copying
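The first two points can be sketched as follows. `pareto_front_small` is a hypothetical helper, not the actual Optuna implementation; it illustrates why plain Python loops can beat a vectorized routine such as is_pareto_front on arrays with only a handful of rows, where per-call NumPy overhead dominates the arithmetic.

```python
import numpy as np


def _is_dominated(point, others):
    # A point is dominated (minimization) if some other point is no worse
    # in every objective and strictly better in at least one.
    return any(
        all(o <= p for o, p in zip(other, point))
        and any(o < p for o, p in zip(other, point))
        for other in others
    )


def pareto_front_small(loss_values):
    # Hypothetical helper: for a small number of rows, pure-Python loops
    # avoid the per-call overhead of vectorized NumPy operations.
    rows = loss_values.tolist()
    mask = [
        not _is_dominated(row, rows[:i] + rows[i + 1:])
        for i, row in enumerate(rows)
    ]
    return loss_values[np.array(mask)]


print(pareto_front_small(np.array([[1.0, 2.0], [2.0, 1.0], [2.0, 2.0]])))
# Keeps [1.0, 2.0] and [2.0, 1.0]; [2.0, 2.0] is dominated by [1.0, 2.0].
```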

Note

Since this PR changes the order of summation and subtraction, the WFG results will no longer be bit-identical to those on the master branch.
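The discrepancy is ordinary floating-point non-associativity; reordering additions perturbs only the last few bits of the result:

```python
# Floating-point addition is not associative, so changing the order of
# summation changes results in the last few bits.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)  # 0.6000000000000001
print(a + (b + c))  # 0.6
```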

@nabenabe0928 nabenabe0928 added the enhancement Change that does not break compatibility and not affect public interfaces, but improves performance. label Jul 30, 2025
Member

c-bata commented Aug 1, 2025

@sawa3030 @y0z Could you review this PR?

Member

c-bata commented Aug 1, 2025

Let me reassign the reviewer from @y0z to @gen740.

@c-bata c-bata assigned gen740 and unassigned y0z Aug 1, 2025
Collaborator

@sawa3030 sawa3030 left a comment


LGTM

Collaborator

sawa3030 commented Aug 8, 2025

I have verified that this change does not significantly affect the results of _compute_hv using the following code. Minor differences are observed, which are expected given the note in the PR description.

from optuna._hypervolume.wfg import _compute_hv
import numpy as np
np.random.seed(42)
n = [1, 2, 3, 4, 10, 100]
n_objectives = [2, 3, 4]
for d in n_objectives:
    for n_ in n:
        loss_vals = np.random.rand(n_, d)
        sorted_loss_vals = loss_vals[np.argsort(loss_vals[:, 0])]
        reference_point = np.ones(d)
        print(f"n={n_}, d={d}, result:", _compute_hv(sorted_loss_vals, reference_point))

master

n=1, d=2, result: 0.030826224055379145
n=2, d=2, result: 0.7123248922006852
n=3, d=2, result: 0.19024437523459392
n=4, d=2, result: 0.6681180419292005
n=10, d=2, result: 0.7080448206394967
n=100, d=2, result: 0.9541825860722869
n=1, d=3, result: 0.00023354040743926932
n=2, d=3, result: 0.04445009547074698
n=3, d=3, result: 0.09219375842565125
n=4, d=3, result: 0.21089876176349012
n=10, d=3, result: 0.50124734700838
n=100, d=3, result: 0.9003823891866158
n=1, d=4, result: 0.4496466674679807
n=2, d=4, result: 0.24439785096478786
n=3, d=4, result: 0.2140587801731504
n=4, d=4, result: 0.1963756222967425
n=10, d=4, result: 0.5351288057135548
n=100, d=4, result: 0.6367971801190232

PR

n=1, d=2, result: 0.030826224055379145
n=2, d=2, result: 0.7123248922006852
n=3, d=2, result: 0.19024437523459392
n=4, d=2, result: 0.6681180419292005
n=10, d=2, result: 0.7080448206394967
n=100, d=2, result: 0.9541825860722873
n=1, d=3, result: 0.00023354040743926932
n=2, d=3, result: 0.04445009547074698
n=3, d=3, result: 0.09219375842565125
n=4, d=3, result: 0.21089876176349012
n=10, d=3, result: 0.50124734700838
n=100, d=3, result: 0.9003823891866161
n=1, d=4, result: 0.4496466674679807
n=2, d=4, result: 0.24439785096478786
n=3, d=4, result: 0.2140587801731504
n=4, d=4, result: 0.1963756222967425
n=10, d=4, result: 0.5351288057135548
n=100, d=4, result: 0.6367971801190232
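For the record, the only rows that differ between master and this PR (the n=100 cases) agree to roughly 15 significant digits, consistent with reordered floating-point summation; a quick check using the values from the tables above:

```python
import math

# Values copied from the master and PR result tables above.
pairs = [
    (0.9541825860722869, 0.9541825860722873),  # n=100, d=2
    (0.9003823891866158, 0.9003823891866161),  # n=100, d=3
]
for master_hv, pr_hv in pairs:
    assert math.isclose(master_hv, pr_hv, rel_tol=1e-12)
print("all differences within 1e-12 relative tolerance")
```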

Co-authored-by: Gen <54583542+gen740@users.noreply.github.com>
Member

@gen740 gen740 left a comment


LGTM!

@gen740 gen740 added this to the v4.5.0 milestone Aug 8, 2025
@gen740 gen740 merged commit 4eeeef1 into optuna:master Aug 8, 2025
14 checks passed