Attach data refactor and tuner bugs [4/n]#7258
Conversation
Hello @carmocca! Thanks for updating this PR. There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻 Comment last updated at 2021-04-30 13:34:56 UTC
Codecov Report
```diff
@@           Coverage Diff           @@
##           master   #7258   +/-   ##
=======================================
- Coverage      91%     91%     -0%
=======================================
  Files         199     199
  Lines       12808   12807      -1
=======================================
- Hits        11688   11686      -2
- Misses       1120    1121      +1
```
Why is this one PR doing so many things?
All of those things are tied together. I just wrote the list of changes so it's easier for reviewers to follow. edit: clarified the change list at the top and opened #7262 with parts of this PR |
Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
```python
# links data to the trainer
self.data_connector.attach_data(model, val_dataloaders=val_dataloaders, datamodule=datamodule)
```
Side question: in `trainer.fit` we allow `trainer.fit(datamodule)`, but not for `test` and `validate`. Do you know why? Did we decide explicit naming by keyword is better?
Good eye. I actually plan to open a PR adding this.
I'm not doing it yet, as it would create conflicts with this one.
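For readers following along, here is a minimal runnable sketch of that asymmetry, assuming the Lightning API from around the time of this PR (~1.3); `TinyModel` and `TinyDataModule` are made-up names for illustration:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        self.log("val_loss", torch.nn.functional.mse_loss(self.layer(x), y))

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

class TinyDataModule(pl.LightningDataModule):
    def _loader(self):
        data = TensorDataset(torch.randn(64, 4), torch.randn(64, 1))
        return DataLoader(data, batch_size=8)

    def train_dataloader(self):
        return self._loader()

    def val_dataloader(self):
        return self._loader()

model, dm = TinyModel(), TinyDataModule()
trainer = pl.Trainer(fast_dev_run=True)

# `fit` detects a LightningDataModule passed positionally...
trainer.fit(model, dm)
# ...while `validate`/`test` only take it through the keyword:
trainer.validate(model, datamodule=dm)
```

Presumably the follow-up PR mentioned above would add the same positional detection to `validate`/`test`.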
```python
max_lr: maximum learning rate to investigate
num_training: number of learning rates to test
```
Not related to this PR, but the `num_training` param doesn't seem intuitive.
cc: @SkafteNicki
I'm just moving the docstring around here.
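To make the thread concrete, here is a hedged sketch of how these parameters fit into an `lr_find` call, reusing `TinyModel`/`TinyDataModule` from the earlier sketch (the specific values are illustrative, not defaults I'm asserting):

```python
model, dm = TinyModel(), TinyDataModule()
trainer = pl.Trainer()

# lr_find runs a short training sweep: the learning rate is increased from
# min_lr towards max_lr over num_training steps, recording the loss at each.
lr_finder = trainer.tuner.lr_find(
    model,
    datamodule=dm,
    min_lr=1e-6,        # lower bound of the sweep
    max_lr=1.0,         # "maximum learning rate to investigate"
    num_training=100,   # "number of learning rates to test"
)
print(lr_finder.suggestion())  # lr suggested from the recorded loss curve
```

Seen that way, `num_training` is the number of sweep steps (and hence candidate learning rates), which may be why the name reads oddly; something like `num_candidates` might have been clearer.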
Co-authored-by: Kaushik B <45285388+kaushikb11@users.noreply.github.com>
What does this PR do?
Attach data refactor:

- The trainer entry points now use `attach_data` to register their `DataLoaders`/`DataModule`.
- Attaching the `DataLoaders` is moved to `trainer._launch()`.

Tuner bugs:

- `trainer.tuner.lr_find()` and `trainer.tuner.scale_batch_size()` now call `trainer.tune()` internally. This solves a bug where the trainer state was not set properly, as we set it in `trainer.tune()`.
- `trainer.tune()` now has arguments for the different tuning functions.
- `trainer.tune()` now returns the tuning result as a `{'scale_batch_size': Optional[int], 'lr_find': Optional[_LRFinder]}` dict; see the sketch after this list.

The tuning changes are tied to the `attach_data` changes because the tuner relied on the `setup_fit` function, which has been removed in this PR.
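A hedged usage sketch of the resulting API, building on the toy classes from the first sketch above. The `auto_scale_batch_size`/`auto_lr_find` Trainer flags and the `batch_size`/`learning_rate` attribute requirements reflect the 1.3-era tuner as I understand it, so treat the details as assumptions:

```python
class TunableModel(TinyModel):
    # auto_lr_find overwrites an `lr`/`learning_rate` attribute on the model
    def __init__(self):
        super().__init__()
        self.learning_rate = 0.1

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=self.learning_rate)

class ScalableDataModule(TinyDataModule):
    # scale_batch_size tunes an attribute literally named `batch_size`
    def __init__(self):
        super().__init__()
        self.batch_size = 8

    def _loader(self):
        data = TensorDataset(torch.randn(64, 4), torch.randn(64, 1))
        return DataLoader(data, batch_size=self.batch_size)

model, dm = TunableModel(), ScalableDataModule()
trainer = pl.Trainer(auto_scale_batch_size=True, auto_lr_find=True)

result = trainer.tune(model, datamodule=dm)
# result == {'scale_batch_size': Optional[int], 'lr_find': Optional[_LRFinder]}
# an entry is None when the corresponding auto_* flag is disabled
print(result["scale_batch_size"])
if result["lr_find"] is not None:
    print(result["lr_find"].suggestion())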
Before submitting
PR review