Use cached trials for intersection search space calculation#6291

Merged
nabenabe0928 merged 2 commits into optuna:master from not522:infer-relative-search-space
Oct 10, 2025
Conversation

@not522 not522 (Member) commented Oct 1, 2025

Motivation

The current implementation of TPESampler does not use cached trials when computing the search space. This PR addresses the issue by delaying the call to infer_relative_search_space until it is actually needed and performing it alongside sample_relative, which enables the use of cached trials. Additionally, TPESampler's infer_relative_search_space method now caches its result.

(While the same performance optimizations are possible for other sampler implementations, for PR brevity, I focus on TPESampler here.)
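The delayed-evaluation idea behind this change can be illustrated with a minimal, self-contained Python sketch (the names here are illustrative, not Optuna's actual internals): the expensive computation runs only when its result is first requested, and that result is cached for later calls.

```python
class LazyValue:
    """Minimal sketch of deferred computation with caching.

    The wrapped function (standing in for infer_relative_search_space)
    is not called at construction time; it runs on the first get() and
    the result is reused afterwards.
    """

    def __init__(self, compute):
        self._compute = compute
        self._result = None
        self._evaluated = False

    def get(self):
        if not self._evaluated:
            self._result = self._compute()  # runs exactly once
            self._evaluated = True
        return self._result


calls = []

def expensive_search_space():
    calls.append("computed")  # track how many times we actually run
    return {"x": "FloatDistribution(-100, 100)"}

lazy = LazyValue(expensive_search_space)
first = lazy.get()   # triggers the computation
second = lazy.get()  # served from the cache, no recomputation
```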

Description of the changes

This PR introduces a use_cache argument for IntersectionSearchSpace and _GroupDecomposedSearchSpace and makes TPESampler use cached trials for the intersection search space calculation.
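For context, the intersection search space is the set of parameters that appear, with identical distributions, in every completed trial. A rough sketch of that computation (plain Python, not Optuna's implementation; the tuple-based distribution encoding is invented for illustration):

```python
def intersection_search_space(per_trial_distributions):
    """Keep only parameters present with the same distribution in all trials."""
    search_space = None
    for dists in per_trial_distributions:
        if search_space is None:
            search_space = dict(dists)
        else:
            search_space = {
                name: dist
                for name, dist in search_space.items()
                if dists.get(name) == dist
            }
    return search_space or {}


trials = [
    {"x": ("float", -100, 100), "y": ("int", -100, 100)},
    {"x": ("float", -100, 100), "y": ("int", -100, 100)},
    {"x": ("float", -100, 100)},  # "y" missing here, so it is dropped
]
space = intersection_search_space(trials)
```

Recomputing this from scratch over all trials on every call is the overhead that a cached trial list helps avoid.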

Benchmark

import optuna

def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_int("y", -100, 100)
    return x**2 + y**2

sampler = optuna.samplers.TPESampler(seed=42, multivariate=True, constant_liar=True)
# sampler = optuna.samplers.TPESampler(seed=42, multivariate=True, constant_liar=True, group=True)
study = optuna.create_study(sampler=sampler, storage="sqlite:///tmp.db")
study.optimize(objective, n_trials=1000)
  • group=False: master 11.287s, PR 10.150s
  • group=True: master 11.771s, PR 10.581s

@not522 not522 added the enhancement Change that does not break compatibility and not affect public interfaces, but improves performance. label Oct 1, 2025
@not522 not522 force-pushed the infer-relative-search-space branch from 5885e37 to 2e0e3ce on October 1, 2025 at 11:39
@y0z y0z self-assigned this Oct 2, 2025
@y0z y0z (Member) commented Oct 2, 2025

@nabenabe0928, could you review this PR?

@y0z y0z assigned gen740 and nabenabe0928 and unassigned y0z and gen740 Oct 3, 2025
@gen740 gen740 (Member) left a comment

LGTM!

I confirmed that this PR is actually faster.

Master

Benchmark 1: python3 pr6291.py
  Time (mean ± σ):     11.597 s ±  0.535 s    [User: 9.963 s, System: 1.400 s]
  Range (min … max):   10.644 s … 12.332 s    10 runs

This PR

Benchmark 1: python3 pr6291.py
  Time (mean ± σ):     10.290 s ±  0.187 s    [User: 8.685 s, System: 1.381 s]
  Range (min … max):   10.041 s … 10.711 s    10 runs

@gen740 gen740 removed their assignment Oct 3, 2025
@nabenabe0928 nabenabe0928 (Contributor) left a comment
I checked everything except trial.py!

Comment on lines +69 to +71:

    self.relative_search_space = self.study.sampler.infer_relative_search_space(
        study, self._cached_frozen_trial
    )
@nabenabe0928 (Contributor) commented:
Could you explain why we need to move the call here?

@not522 (author) replied:
_get_trials(..., use_cache=use_cache) creates the cache during the first call in that trial. Therefore, this call should be delayed until it's actually needed. For example, in the following objective function, calling infer_relative_search_space in __init__ would be premature:

def objective(trial):
    # Very time-consuming initialization process.
    ...
    # Then perform the suggestion.
    trial.suggest_xxx(...)
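The cache-on-first-call behavior described here can be sketched as follows (the function and variable names are hypothetical, not Optuna's actual storage API):

```python
# Hypothetical sketch: the first call with use_cache=True snapshots the
# trials, and later calls in the same trial reuse that snapshot instead
# of querying storage again.
_cache = {}
storage_reads = 0

def _get_trials(study_id, use_cache):
    global storage_reads
    if use_cache and study_id in _cache:
        return _cache[study_id]
    storage_reads += 1  # stands in for an expensive storage query
    trials = [{"number": i} for i in range(3)]
    if use_cache:
        _cache[study_id] = trials
    return trials

a = _get_trials(0, use_cache=True)  # populates the cache
b = _get_trials(0, use_cache=True)  # served from the cache
```

Calling this too early (e.g. in __init__) would freeze the snapshot before the trial actually needs it, which is why the call is deferred to the first suggestion.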

@nabenabe0928 replied:
Thanks, this makes a lot of sense.

@nabenabe0928 nabenabe0928 (Contributor) left a comment
Almost LGTM!
I left a minor comment; please address it!

(Commit co-authored-by: Shuhei Watanabe <47781922+nabenabe0928@users.noreply.github.com>)
@nabenabe0928 nabenabe0928 (Contributor) left a comment
Thank you for the change, LGTM!

@nabenabe0928 nabenabe0928 merged commit 5466fdd into optuna:master Oct 10, 2025
14 checks passed
@nabenabe0928 nabenabe0928 added this to the v4.6.0 milestone Oct 10, 2025
@not522 not522 deleted the infer-relative-search-space branch October 10, 2025 02:05

Labels

enhancement Change that does not break compatibility and not affect public interfaces, but improves performance.
