
[tune](deps): Bump pytorch-lightning from 1.0.3 to 1.2.4 in /python/requirements #27

Closed

dependabot[bot] wants to merge 1 commit into master from dependabot/pip/python/requirements/pytorch-lightning-1.2.4

Conversation


dependabot[bot] commented on behalf of GitHub on Mar 20, 2021

Bumps pytorch-lightning from 1.0.3 to 1.2.4.

Release notes

Sourced from pytorch-lightning's releases.

Standard weekly patch release

[1.2.4] - 2021-03-16

Changed

  • Changed the default of find_unused_parameters back to True in DDP and DDP Spawn (#6438)
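
The restored default affects how DistributedDataParallel is constructed under the hood and can cost some throughput. If your model is known to have no unused parameters, the setting can be pinned explicitly through the DDP plugin; a minimal sketch, assuming the DDPPlugin API of the 1.2.x line:

```python
import pytorch_lightning as pl
from pytorch_lightning.plugins import DDPPlugin

# Pin find_unused_parameters yourself rather than relying on the default,
# which 1.2.4 changes back to True for DDP and DDP Spawn.
trainer = pl.Trainer(
    gpus=2,
    accelerator="ddp",
    plugins=[DDPPlugin(find_unused_parameters=False)],
)
```

Setting the flag to False keeps the earlier 1.2.x behaviour and avoids the extra graph traversal DDP performs when it is True.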

Fixed

  • Expose DeepSpeed loss parameters to allow users to fix loss instability (#6115)
  • Fixed DP reduction with collection (#6324)
  • Fixed an issue where the tuner would not tune the learning rate if also tuning the batch size (#4688)
  • Fixed broadcast to use PyTorch broadcast_object_list and add reduce_decision (#6410)
  • Fixed logger creating directory structure too early in DDP (#6380)
  • Fixed DeepSpeed additional memory use on rank 0 when default device not set early enough (#6460)
  • Fixed DummyLogger.log_hyperparams raising a TypeError when running with fast_dev_run=True (#6398)
  • Fixed an issue with Tuner.scale_batch_size not finding the batch size attribute in the datamodule (#5968)
  • Fixed an exception in the layer summary when the model contains torch.jit scripted submodules (#6511)
  • Fixed when Train loop config was run during Trainer.predict (#6541)
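
Two of the fixes above concern the tuner: #4688 (the learning rate is now tuned even when the batch size is also being tuned) and #5968 (Tuner.scale_batch_size now finds the batch_size attribute on a datamodule). A minimal sketch of the pattern they apply to, using the standard auto_scale_batch_size / auto_lr_find flags; ToyDataModule and ToyModel are illustrative placeholders, not part of the release:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class ToyDataModule(pl.LightningDataModule):
    """Exposes `batch_size` so auto_scale_batch_size can find and tune it (#5968)."""

    def __init__(self, batch_size=32):
        super().__init__()
        self.batch_size = batch_size

    def train_dataloader(self):
        data = TensorDataset(torch.randn(256, 16), torch.randn(256, 1))
        return DataLoader(data, batch_size=self.batch_size)


class ToyModel(pl.LightningModule):
    def __init__(self, lr=1e-3):
        super().__init__()
        self.lr = lr
        self.layer = torch.nn.Linear(16, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)


trainer = pl.Trainer(
    max_epochs=1,
    auto_scale_batch_size=True,  # tunes ToyDataModule.batch_size (#5968)
    auto_lr_find=True,           # the learning rate is now also tuned (#4688)
)
trainer.tune(ToyModel(), datamodule=ToyDataModule())
```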

Contributors

@awaelchli, @kaushikb11, @Palzer, @SeanNaren, @tchaton

If we forgot someone due to not matching commit email with GitHub account, let us know :]

Standard weekly patch release

[1.2.3] - 2021-03-09

Added

Changed

Fixed

  • Fixed ModelPruning(make_pruning_permanent=True) pruning buffers getting removed when saved during training (#6073)
  • Fixed when _stable_1d_sort to work when n >= N (#6177)
  • Fixed AttributeError when logger=None on TPU (#6221)
  • Fixed PyTorch Profiler with emit_nvtx (#6260)
  • Fixed trainer.test from best_path hangs after calling trainer.fit (#6272)
  • Fixed SingleTPU calling all_gather (#6296)
  • Ensure we check deepspeed/sharded in multinode DDP (#6297)
  • Check LightningOptimizer doesn't delete optimizer hooks (#6305)
  • Resolve memory leak for evaluation (#6326)
  • Ensure that clip gradients is only called if the value is greater than 0 (#6330)
  • Fixed Trainer not resetting lightning_optimizers when calling Trainer.fit() multiple times (#6372)
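
Two of these fixes touch very common call patterns: gradient clipping is now only invoked when gradient_clip_val is greater than 0 (#6330), and trainer.test() no longer hangs when it loads the best checkpoint after trainer.fit() (#6272). A minimal sketch of that flow, assuming model and datamodule are defined elsewhere (for example the toy classes shown further below, extended with a test_dataloader):

```python
import pytorch_lightning as pl

# gradient_clip_val=0.0 disables clipping entirely; any positive value enables it,
# and 1.2.3 ensures the clipping call is only made when the value is > 0 (#6330).
trainer = pl.Trainer(max_epochs=1, gradient_clip_val=0.5)

trainer.fit(model, datamodule=datamodule)

# Called with no checkpoint path, test() falls back to the best checkpoint
# recorded during fit; this is the code path that previously could hang (#6272).
trainer.test()
```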

Contributors

@awaelchli, @carmocca, @Chizuchizu, @frankier, @SeanNaren, @tchaton

... (truncated)

Changelog

Sourced from pytorch-lightning's changelog.

[1.2.4] - 2021-03-16

Changed

  • Changed the default of find_unused_parameters back to True in DDP and DDP Spawn (#6438)

Fixed

  • Expose DeepSpeed loss parameters to allow users to fix loss instability (#6115)

  • Fixed DP reduction with collection (#6324)

  • Fixed an issue where the tuner would not tune the learning rate if also tuning the batch size (#4688)

  • Fixed broadcast to use PyTorch broadcast_object_list and add reduce_decision (#6410)

  • Fixed logger creating directory structure too early in DDP (#6380)

  • Fixed DeepSpeed additional memory use on rank 0 when default device not set early enough (#6460)

  • Fixed DummyLogger.log_hyperparams raising a TypeError when running with fast_dev_run=True (#6398)

  • Fixed an issue with Tuner.scale_batch_size not finding the batch size attribute in the datamodule (#5968)

  • Fixed an exception in the layer summary when the model contains torch.jit scripted submodules (#6511)

  • Fixed when Train loop config was run during Trainer.predict (#6541)

  • Fixed a bug where all_gather would not work correctly with tpu_cores=8 (#6587)

  • Update Gradient Clipping for the TPU Accelerator (#6576)

[1.2.3] - 2021-03-09

Fixed

  • Fixed ModelPruning(make_pruning_permanent=True) pruning buffers getting removed when saved during training (#6073)
  • Fixed when _stable_1d_sort to work when n >= N (#6177)
  • Fixed AttributeError when logger=None on TPU (#6221)
  • Fixed PyTorch Profiler with emit_nvtx (#6260)
  • Fixed trainer.test from best_path hangs after calling trainer.fit (#6272)
  • Fixed SingleTPU calling all_gather (#6296)
  • Ensure we check deepspeed/sharded in multinode DDP (#6297)
  • Check LightningOptimizer doesn't delete optimizer hooks (#6305)
  • Resolve memory leak for evaluation (#6326)
  • Ensure that clip gradients is only called if the value is greater than 0 (#6330)
  • Fixed Trainer not resetting lightning_optimizers when calling Trainer.fit() multiple times (#6372)

[1.2.2] - 2021-03-02

Added

  • Added checkpoint parameter to callback's on_save_checkpoint hook (#6072)
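
The new checkpoint argument gives a callback direct access to the checkpoint dictionary as it is written. A minimal sketch, assuming the hook signature introduced in this release line; CheckpointInspector is an illustrative name:

```python
import pytorch_lightning as pl


class CheckpointInspector(pl.Callback):
    # As of 1.2.2 the hook also receives the checkpoint dict being saved (#6072).
    def on_save_checkpoint(self, trainer, pl_module, checkpoint):
        print("checkpoint keys:", sorted(checkpoint.keys()))
        # Returning a dict persists callback state inside the checkpoint.
        return {"last_seen_epoch": trainer.current_epoch}


trainer = pl.Trainer(callbacks=[CheckpointInspector()])
```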

Changed

... (truncated)

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

dependabot[bot] added the dependencies (Pull requests that update a dependency file) label on Mar 20, 2021

dependabot[bot] commented on behalf of GitHub on Mar 24, 2021

Superseded by #29.

dependabot[bot] closed this on Mar 24, 2021
dependabot[bot] deleted the dependabot/pip/python/requirements/pytorch-lightning-1.2.4 branch on March 24, 2021 at 20:20