Performance optimization for test_study.py by removing redundancy #6120

contramundum53 merged 15 commits into optuna:master

Conversation
```python
with ProcessPoolExecutor() as pool:
    # Test normal behaviour.
    pool.submit(_process_tell, study=study, trial=trial0, values=1.2).result()
    assert len(study.trials) == 1
    assert study.best_trial.state == TrialState.COMPLETE
    assert study.best_value == 1.2

    # Test study.tell using trial number.
    trial = study.ask()
    pool.submit(_process_tell, study=study, trial=trial.number, values=1.5).result()
    assert len(study.trials) == 2
    assert study.best_trial.state == TrialState.COMPLETE
    assert study.best_value == 1.2

    # Should fail because trial0 is already finished.
    with pytest.raises(ValueError):
        fut = pool.submit(_process_tell, study=study, trial=trial0, values=1.2)
        fut.result()
```
Using ProcessPoolExecutor and submit is consistently faster than multiprocessing.Pool and starmap.
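To illustrate the call-style difference the comment refers to, here is a minimal sketch (not the PR's actual test code; the `tell` helper below is a hypothetical stand-in for `_process_tell`): `ProcessPoolExecutor.submit` accepts keyword arguments and returns a `Future` per call, while `multiprocessing.Pool.starmap` takes positional argument tuples.

```python
from concurrent.futures import ProcessPoolExecutor
from multiprocessing import Pool


def tell(label, value):
    # Hypothetical stand-in for the test's _process_tell helper.
    return f"{label}={value}"


if __name__ == "__main__":
    # ProcessPoolExecutor: keyword arguments, one Future per call;
    # .result() blocks until the child process finishes.
    with ProcessPoolExecutor(max_workers=1) as executor:
        result = executor.submit(tell, label="best_value", value=1.2).result()
    assert result == "best_value=1.2"

    # multiprocessing.Pool: positional argument tuples only.
    with Pool(processes=1) as pool:
        results = pool.starmap(tell, [("best_value", 1.2)])
    assert results == ["best_value=1.2"]
```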
Before

```console
> git checkout abb781787
HEAD is now at abb781787 chore: introduce consts for minimal values of n_trials and timeout
> uv run python -m pytest tests/study_tests/test_study.py -k test_tell_from_another_process
=========================== test session starts ===========================
platform darwin -- Python 3.12.0, pytest-8.3.5, pluggy-1.6.0
rootdir: /Users/himkt/work/github.com/himkt/optuna
configfile: pyproject.toml
plugins: profiling-1.8.1
collected 600 items / 599 deselected / 1 selected

tests/study_tests/test_study.py .                                   [100%]

==================== 1 passed, 599 deselected in 4.43s ====================
```

After
```console
> git checkout 2c0bb80c4
Previous HEAD position was abb781787 chore: introduce consts for minimal values of n_trials and timeout
HEAD is now at 2c0bb80c4 chore: use ProcessPoolExecutor instead of multiprocessing.Pool
> uv run python -m pytest tests/study_tests/test_study.py -k test_tell_from_another_process
=========================== test session starts ===========================
platform darwin -- Python 3.12.0, pytest-8.3.5, pluggy-1.6.0
rootdir: /Users/himkt/work/github.com/himkt/optuna
configfile: pyproject.toml
plugins: profiling-1.8.1
collected 600 items / 599 deselected / 1 selected

tests/study_tests/test_study.py .                                   [100%]

==================== 1 passed, 599 deselected in 2.26s ====================
```
Pull Request Overview
This PR reduces the number of trials and the timeout values for trivial objective functions to speed up test execution, and switches from multiprocessing.Pool to ProcessPoolExecutor. The changes update several tests to use defined constants and adjust assertions accordingly.
- Reduced n_trials from 10 to a constant value for faster tests.
- Updated timeout values for short-running objectives.
- Switched from multiprocessing.Pool to ProcessPoolExecutor for cross-process communication.
Comments suppressed due to low confidence (1)
tests/study_tests/test_study.py:50
- The constant 'NUM_MINMAL_TRIALS' appears to have a typo. Consider renaming it to 'NUM_MINIMAL_TRIALS' for clarity.
NUM_MINMAL_TRIALS = 2
Pull Request Overview
This PR reduces test execution time by introducing minimal trial/time constants, switches to ProcessPoolExecutor for cross-process testing, and centralizes progress bar validation.
- Define `NUM_MINMAL_TRIALS` and `MINIMUM_TIMEOUT_SEC` and replace hard-coded values
- Add `ProcessPoolExecutor` for testing `tell()` across processes
- Introduce a `check_progressbar` helper and refactor progress bar assertions
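The PR's actual `check_progressbar` helper is not shown in this thread, so the following is only a hypothetical sketch of what a centralized progress-bar assertion could look like (the signature and the `expect_bar` parameter are assumptions; tqdm-style bars write counters like `2/2` to captured stderr):

```python
def check_progressbar(err: str, total: int, expect_bar: bool = True) -> None:
    # Hypothetical sketch: assert the "k/n" counter appears (or not)
    # in the captured stderr, depending on whether the bar is enabled.
    shown = f"{total}/{total}" in err
    assert shown == expect_bar


# Bar enabled: tqdm-style output contains the final counter.
check_progressbar("100%|##########| 2/2 [00:00<00:00]", total=2)
# Bar disabled: captured stderr has no counter.
check_progressbar("", total=2, expect_bar=False)
```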
Comments suppressed due to low confidence (2)
tests/study_tests/test_study.py:50
- There’s a typo in the constant name `NUM_MINMAL_TRIALS`; consider renaming it to `NUM_MINIMAL_TRIALS` for clarity.
NUM_MINMAL_TRIALS = 2
tests/study_tests/test_study.py:8
- The `multiprocessing` import is no longer used after switching to `ProcessPoolExecutor`; it can be removed to clean up unused imports.
import multiprocessing
tests/study_tests/test_study.py
Outdated
```python
_, err = capsys.readouterr()

assert "Best trial: 0" in err
assert "Best trial: 0"
```

This assertion always passes since it doesn’t check the error output. It should be `assert "Best trial: 0" in err`.

```diff
- assert "Best trial: 0"
+ assert "Best trial: 0" in err
```
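The reason the bare assertion can never fail is Python truthiness: any non-empty string is truthy, so `assert "Best trial: 0"` succeeds regardless of what was captured. A small demonstration (the `err` value here is made up for illustration):

```python
# Captured stderr that does NOT contain the expected phrase.
err = "A new study created in memory"

# Bug: a non-empty string literal is always truthy, so this always passes.
assert "Best trial: 0"

# Fix: membership against the captured output is a real check;
# here it correctly detects that the phrase is missing.
found = "Best trial: 0" in err
assert found is False
```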
tests/study_tests/test_study.py
Outdated
```python
CallbackFuncType = TypingCallable[[Study, FrozenTrial], None]


NUM_MINMAL_TRIALS = 2
```

```diff
- NUM_MINMAL_TRIALS = 2
+ NUM_MINIMAL_TRIALS = 2
```
Copilot is actually right and there's a typo.
```python
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures import ThreadPoolExecutor
import copy
import multiprocessing
```

```diff
- import multiprocessing
```
`multiprocessing` is no longer needed (again, by Copilot).
Actually, multiprocessing is used in another test... 😄
optuna/tests/study_tests/test_study.py, line 219 in daeef68
The PR is now ready for review again.
```diff
  sampler = study.sampler
  with patch.object(sampler, "reseed_rng", wraps=sampler.reseed_rng) as mock_object:
-     study.optimize(f, n_trials=1, n_jobs=2)
+     study.optimize(f, n_trials=1, n_jobs=n_jobs)
```
I'm not confident, but I think `test_optimize_with_reseeding` unexpectedly runs with the fixed configuration (`n_jobs=2`) even with parametrization.
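The bug pattern the reviewer describes can be shown without pytest: when a call site hard-codes a value instead of using the parametrized one, every parametrized case exercises the same configuration. The `fake_optimize` helper below is a made-up stand-in for `study.optimize`:

```python
def fake_optimize(n_jobs):
    # Hypothetical stand-in for study.optimize(...); it just records
    # which n_jobs value it actually received.
    return n_jobs


params = (1, 2, 4)  # values pytest.mark.parametrize would supply

# Bug pattern: the literal 2 ignores the parameter, so all cases are identical.
buggy = [fake_optimize(2) for n_jobs in params]

# Fixed pattern: pass the parametrized value through.
fixed = [fake_optimize(n_jobs) for n_jobs in params]

assert buggy == [2, 2, 2]
assert fixed == [1, 2, 4]
```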
Could you check the CI failure?
Sorry, I thought I had fixed all failures... resolved in 79b9807.
Motivation

- `timeout` is unnecessarily long for the very simple objective functions in `test_study.py`, which just return a constant value and finish very quickly (abb7817)
- `n_trials` is unnecessarily large for scenarios where it just needs multiple trials (abb7817)

Description of the changes

- Use `concurrent.futures.ProcessPoolExecutor` instead of `multiprocessing.Pool` (2c0bb80)

Before

```
=========== 595 passed, 5 skipped, 91 warnings in 131.25s (0:02:11) ============
```

After

```
============ 595 passed, 5 skipped, 91 warnings in 75.25s (0:01:15) ============
```