
Use lru_cache to skip HSSP #6240

Merged
y0z merged 8 commits into optuna:master from fusawa-yugo:fusawa-yugo/lru_cache_for_hssp
Aug 18, 2025

Conversation

@fusawa-yugo
Contributor

Motivation

The HSSP calculation is unnecessarily repeated even when the results would be identical.
This PR aims to eliminate such redundant computations and improve efficiency.

related PR

Description of the changes

I introduced lru_cache to skip repeated calculations. To make the inputs hashable for caching, np.ndarray objects are converted into tuples, and the cached function converts them back.

@fusawa-yugo fusawa-yugo marked this pull request as draft August 8, 2025 06:58
fusawa-yugo and others added 4 commits August 8, 2025 18:09
Co-authored-by: Shuhei Watanabe <47781922+nabenabe0928@users.noreply.github.com>
Co-authored-by: Yoshihiko Ozaki <30489874+y0z@users.noreply.github.com>
@fusawa-yugo
Contributor Author

fusawa-yugo commented Aug 8, 2025

Time Comparison with Master

I tested a 4-variable objective function for 1000 trials using the following code, with the number of objectives set to [2, 3, 4, 9].

Test Code

import argparse
import json
import os
import time

import optuna


def main():
    parser = argparse.ArgumentParser(description="Optuna performance comparison")
    parser.add_argument("-t", "--n_trials", type=int, default=1000, help="Number of trials")
    parser.add_argument("-b", "--branch_name", type=str, default="master", help="Branch name for output")
    parser.add_argument("-o", "--n_objectives", type=int, default=4, help="Number of objective return values (1-9)")
    parser.add_argument("-m", "--multivariate", action="store_true", help="Multivariate")

    args = parser.parse_args()

    # Validate n_objectives
    if args.n_objectives < 1 or args.n_objectives > 9:
        raise ValueError("n_objectives must be between 1 and 9")

    def objective(trial):
        w = trial.suggest_float("w", -5, 5)
        x = trial.suggest_float("x", -5, 5)
        y = trial.suggest_float("y", -5, 5)
        z = trial.suggest_float("z", -5, 5)

        objectives = [
            w ** 2 + x ** 2 + y ** 2 + z ** 2,
            w + x + y + z,
            (x - 2) ** 2 + (y - 2) ** 2 + (z - 2) ** 2,
            (w - 1) ** 2 + (x - 1) ** 2 + (y - 1) ** 2 + (z - 1) ** 2,
            w + x + y,
            x + y + z,
            w + y + z,
            w + x + z,
            w * x * y,
        ][:args.n_objectives]

        return tuple(objectives)

    sampler = optuna.samplers.TPESampler(
        multivariate=args.multivariate,
        seed=42,
    )
    times = {}

    study = optuna.create_study(
        directions=["minimize"] * args.n_objectives,
        sampler=sampler,
    )

    start_time = time.time()

    for i in range(args.n_trials):
        trial = study.ask()
        values = objective(trial)
        study.tell(trial, values)
        print(f"Trial#{trial.number}: {values=} with params={trial.params}")
        diff = time.time() - start_time
        times[i] = diff

    dir_name = f"{args.branch_name}_{'multivariate' if args.multivariate else 'independent'}_{args.n_objectives}"
    os.makedirs(dir_name, exist_ok=True)

    with open(f"./{dir_name}/time.json", "w") as f:
        json.dump(times, f, indent=4)

    with open(f"./{dir_name}/best_trials.json", "w") as f:
        best_trials = [trial.values for trial in study.best_trials]
        json.dump(best_trials, f, indent=4)


if __name__ == "__main__":
    main()

Results

(Figures: time comparison plots compare_time_2, compare_time_3, compare_time_4, and compare_time_9.)

@fusawa-yugo
Contributor Author

fusawa-yugo commented Aug 8, 2025

(Approximate) Hit Rate

| objective | hit count | miss count | hit rate (%) |
| --- | --- | --- | --- |
| 2_objectives_multivariate | 685 | 303 | 69.332 |
| 2_objectives_independent | 3578 | 367 | 90.6971 |
| 3_objectives_multivariate | 502 | 486 | 50.8097 |
| 3_objectives_independent | 3398 | 551 | 86.0471 |
| 4_objectives_multivariate | 535 | 455 | 54.0404 |
| 4_objectives_independent | 3381 | 572 | 85.53 |
| 9_objectives_multivariate | 112 | 878 | 11.3131 |
| 9_objectives_independent | 3029 | 928 | 76.5479 |
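Hit and miss counts like these can be read directly from the cache via `functools.lru_cache`'s `cache_info()`. A minimal sketch (illustrative, not the PR's actual instrumentation; `expensive` is a hypothetical stand-in):

```python
# Sketch: measuring an lru_cache hit rate with cache_info().
from functools import lru_cache


@lru_cache(maxsize=None)
def expensive(key: tuple) -> float:
    # stand-in for an expensive computation keyed on a hashable tuple
    return sum(v * v for v in key)


# 4 calls: (1,2) miss, (1,2) hit, (3,) miss, (1,2) hit -> 2 misses, 2 hits
for key in [(1.0, 2.0), (1.0, 2.0), (3.0,), (1.0, 2.0)]:
    expensive(key)

info = expensive.cache_info()
hit_rate = 100 * info.hits / (info.hits + info.misses)
print(f"hits={info.hits} misses={info.misses} hit_rate={hit_rate:.1f}%")
```

`cache_info()` returns a named tuple with `hits`, `misses`, `maxsize`, and `currsize` fields, so no external counters are needed.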

@nabenabe0928
Contributor

nabenabe0928 commented Aug 8, 2025

I confirmed with @y0z that the cache hit rates here are reproducible.

@nabenabe0928 nabenabe0928 marked this pull request as ready for review August 8, 2025 10:58
@nabenabe0928
Contributor

Benchmarking results with my funcs:
mv=True, n_objectives=2: 1.2s -> 1.2s (75.8%)
mv=False, n_objectives=2: 1.8s -> 1.7s (83.88%)
mv=True, n_objectives=3: 3.3s -> 2.1s (60.94%)
mv=False, n_objectives=3: 6.1s -> 3.1s (80.49%)
mv=True, n_objectives=4: 23.3s -> 18.7s (24.34%)
mv=False, n_objectives=4: 58.3s -> 32.9s (60.06%)

from __future__ import annotations

import optuna


def multi_objective(trial: optuna.Trial) -> tuple[float, ...]:
    x = trial.suggest_float("x", -5, 5)
    y = trial.suggest_float("y", -5, 5)
    return x**2 + y**2, (x - 2)**2 + (y - 2)**2, (x + 2)**2 + (y + 2)**2, (x + 2)**2 + (y - 2)**2


def objective(trial: optuna.Trial, n_objectives: int) -> tuple[float, ...]:
    return multi_objective(trial)[:n_objectives]


if __name__ == "__main__":
    n_objectives = 4
    sampler = optuna.samplers.TPESampler(seed=0, multivariate=False)
    study = optuna.create_study(sampler=sampler, directions=["minimize"]*n_objectives)
    study.optimize(lambda t: objective(t, n_objectives), n_trials=1000)
    print((study.trials[-1].datetime_complete - study.trials[0].datetime_start).total_seconds())

Contributor

@nabenabe0928 nabenabe0928 left a comment


LGTM!

@nabenabe0928 nabenabe0928 removed their assignment Aug 13, 2025
@nabenabe0928 nabenabe0928 added the enhancement label (change that does not break compatibility and does not affect public interfaces, but improves performance) Aug 13, 2025
@nabenabe0928 nabenabe0928 added this to the v4.5.0 milestone Aug 13, 2025
Member

@y0z y0z left a comment


LGTM

@y0z y0z merged commit b706a63 into optuna:master Aug 18, 2025
14 checks passed
@y0z y0z removed their assignment Aug 18, 2025