Fix checks integration about pytorch lightning #4322
Conversation
Codecov Report
@@ Coverage Diff @@
```
##           master    #4322   +/-   ##
=======================================
  Coverage   90.43%   90.43%
=======================================
  Files         172      172
  Lines       13660    13660
=======================================
+ Hits        12353    12354      +1
+ Misses       1307     1306      -1
```
…tning-alternative
Adding the changes in this PR, I confirmed that all mypy tests pass in
…ion-pytorch-lightning-alternative
I made this PR review-ready. I would appreciate it if you could assign reviewers.
This pull request has not seen any recent activity.
```python
if trainer.is_global_zero:
    self._trial.report(current_score.item(), step=epoch)
    should_stop = self._trial.should_prune()
should_stop = trainer.training_type_plugin.broadcast(should_stop)
```
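The snippet above reports the score and consults the trial only on rank zero, then broadcasts the decision so every DDP worker stops in lockstep. A framework-free sketch of that pattern (all names hypothetical; a plain list stands in for `trainer.training_type_plugin.broadcast` across DDP workers):

```python
# Framework-free sketch of the broadcast pattern above. Only rank 0
# consults the trial; its decision is then replicated ("broadcast")
# to every rank so all workers stop together, avoiding a DDP deadlock.
def broadcast_from_rank0(value, world_size):
    # Stand-in for trainer.training_type_plugin.broadcast(value):
    # replicate the rank-0 value to every rank.
    return [value] * world_size

def decide_should_stop(world_size, rank0_should_prune):
    # Without the broadcast, non-zero ranks would default to False
    # and disagree with rank 0 about whether to stop.
    rank0_decision = rank0_should_prune
    return broadcast_from_rank0(rank0_decision, world_size)

print(decide_should_stop(4, True))  # [True, True, True, True]
```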
I think we should update the version check at L64 from 1.5.0 to 1.6.0 since pytorch-lightning==1.5.0 does not have Trainer.strategy.
| pytorch-lightning version | optuna-examples/pytorch/pytorch_lightning_simple.py |
|---|---|
| 1.5.0 | NG |
| 1.6.0 | OK |
| 1.7.0 | OK |
| 1.8.0 | OK |
The error message with `pytorch-lightning==1.5.0` was as follows:
`AttributeError: 'Trainer' object has no attribute 'strategy'`
Thank you for your comment. I updated the version constraint.
HideakiImamura left a comment:
Basically, LGTM. Let me ask one question. on_fit_start includes the logic when the backend is DDP, but we do not test it. Do we have a plan to support the DDP (maybe, as a follow-up of this PR)?
Thanks for your comment. I am working on supporting DDP here and will create a follow-up PR based on this one.
…tning-alternative
I added a follow-up PR: #4384
I'm sorry for the delayed response. `pytorch_lightning_simple.py` in the optuna-examples repository worked with the latest pytorch-lightning as expected. We may update the lower bound of pytorch-lightning in the example from 1.5.0 to 1.6.0; that is a follow-up task.
LGTM!
Motivation
Resolves #3418 and #4116
Description of the changes
Refactor deprecated features

- `trainer.training_type_plugin` is deleted since v1.8 (PR#11239). The attribute `training_type_plugin` is just renamed to `strategy`, so it is refactored as suggested.
- An optional argument of `Trainer`, `accelerator`, stopped accepting `ddp_cpu`. Instead, we can pass `ddp` to `strategy` and `cpu` to `accelerator`. Also, `num_processes` will be removed, so we give the number of processes to `devices` instead.
- `AcceleratorConnector.distributed_backend` is deleted, but there is now `AcceleratorConnector.is_distributed` instead, so it is refactored as suggested.
- `callback.on_init_start()` is deleted since v1.8 (Issue#10894, PR#10940, PR#14867). Although no exactly equivalent alternative is provided, it is possible to move this confirmation elsewhere. `strategy.setup_environment` seems like the right place to implement this check, but implementing it as a method of `Strategy` would affect users' code. Therefore it is reasonable to implement it as `callback.setup()` or `callback.on_fit_start()`.

Stop supporting DDP temporarily

When you use DDP and `optuna.TrialPruned()` is raised from a child process, PyTorch Lightning tries to resynchronize to fix the "error" and finally treats it as a `DeadlockDetectedException`, which terminates the whole process. For more details, see `reconciliate_processes`. Fixing this problem would require changing an environment variable and a private variable.

It might be possible to solve this by manually raising `optuna.TrialPruned()` from the objective function, as in `CatBoostPruningCallback` (see this comment). If that works, I am going to apply the change in another PR. For now, the DDP tests `test_pytorch_lightning_pruning_callback_ddp_monitor` and `test_pytorch_lightning_pruning_callback_ddp_unsupported_storage` are skipped.
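The `accelerator`/`strategy`/`devices` migration in the list above can be sketched without importing Lightning. Plain dictionaries stand in for `pytorch_lightning.Trainer(**kwargs)` here; the mapping follows the PR description, and the variable names are illustrative:

```python
# Trainer arguments before and after the refactoring described above.
# Dicts stand in for pytorch_lightning.Trainer(**kwargs).
old_kwargs = {
    "accelerator": "ddp_cpu",  # rejected by pytorch-lightning >= 1.6
    "num_processes": 2,        # scheduled for removal
}
new_kwargs = {
    "strategy": "ddp",       # distribution strategy moved to `strategy`
    "accelerator": "cpu",    # `accelerator` now names the device type
    "devices": 2,            # `devices` replaces `num_processes`
}
print(sorted(new_kwargs))  # ['accelerator', 'devices', 'strategy']
```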