Forecasters re-factoring: BaseGridSearch, ForecastingGridSearchCV, ForecastingRandomizedSearchCV#1034
Conversation
I think the test fails because the forecaster does not raise a "not fitted" error when you call `update_predict`. Now, it would appear this is missing in the base forecaster already. Can you add it there? That should fix this issue.
Also, please make sure to update from main.
To be more precise: I think you should try to add the line `self.check_is_fitted()`.
this looks done - if you think it's done, can you please convert the draft to a regular PR and request reviews?

Thank you @fkiraly, I've updated and opened it for review.
fkiraly
left a comment
@GuzalBulatova, thanks!
I just noticed a problem: please return the public methods that were not moved "one layer deep" to their original state. This would be `get_fitted_params`, `inverse_transform`, `score`, and `evaluate_candidates`.
@fkiraly thanks, I've updated the PR. 👍
The mentioned functions should remain entirely unchanged, since we did not change the base classes there. |
I think there is still a problem with

@fkiraly thank you, I undid the changes and I think it should all be ready now.

@fkiraly a question about
Hm, looking at this: i.e., it should be in both places in the end. In the description of the PR, please make a note of where you changed the
Hi @fkiraly! I need your advice. After adding the fit check in `update_predict`,
Hm, looks like a number of forecasters check the fitted state themselves. Other opinions, @mloning, @aiwalter? I don't think we need to solve the issue in this PR, so let's finish the refactoring without further changes to `update_predict`.
fkiraly
left a comment
looks good, with the exception that `_update_predict` is called.
Currently, this method does not exist in the template, so it should be reverted to `update_predict`.
I left it that way due to the issues and discussions around #982.
mloning
left a comment
I think this makes sense now as it is, let's see if the tests pass.
wait, why? Why do we need to override almost everything? This doesn't look right.
fkiraly
left a comment
There was a problem hiding this comment.
ok, I've looked at the code.
Indeed it makes sense, since we need to check tags etc in the delegates.
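To illustrate the point about tags in the delegates, here is a minimal hypothetical sketch (not sktime's actual classes): a tuning wrapper cannot simply inherit the base class's public methods, because it has to inspect the wrapped forecaster's tags before delegating.

```python
# Hypothetical sketch: a tuning wrapper consults the delegate's tags
# before dispatching, so it must override the public method rather than
# inherit it unchanged. All names here are illustrative stand-ins.

class ToyForecaster:
    _tags = {"requires-fh-in-fit": True}

    def get_tag(self, name, default=None):
        return self._tags.get(name, default)

    def fit(self, y, fh=None):
        self._last = y[-1]
        return self


class ToyGridSearch:
    def __init__(self, forecaster):
        self.forecaster = forecaster

    def fit(self, y, fh=None):
        # the wrapper must check the delegate's tags here, which a plain
        # inherited fit from a base class could not know to do
        if self.forecaster.get_tag("requires-fh-in-fit") and fh is None:
            raise ValueError("this forecaster requires fh in fit")
        self.forecaster.fit(y, fh=fh)
        return self
```

This is why the overrides in the PR are not redundant: each delegating method carries delegate-specific checks.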
merged, great! Last refactor item done, thanks @GuzalBulatova!

I was just about to have a look at this, but couldn't find it in the list of PRs. Good stuff for getting this sorted! 😄
Reference Issues/PRs
References #955
We need to slowly re-factor all forecasters to be compliant with the new interface (see PR #912), i.e., no overrides of `fit` etc., with descendant classes only implementing core logic while the `BaseForecaster` implements the checking and other boilerplate logic.
Currently this runs thanks to various backwards-compatibility workarounds, which should be removed once all forecasters are ready (in the "no-override" state).
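The target "no-override" pattern can be sketched roughly as follows. This is a minimal illustration only, not sktime's actual implementation; `LastValueForecaster` is a toy stand-in for a real descendant class.

```python
# Minimal sketch of the "no-override" pattern: the base class owns input
# checks and state bookkeeping, and descendants implement only the core
# logic in _fit. Illustrative only -- not sktime's actual code.

class BaseForecaster:
    def __init__(self):
        self._is_fitted = False

    def fit(self, y, fh=None):
        # validation and state handling live here, once, for all forecasters
        if y is None or len(y) == 0:
            raise ValueError("y must be a non-empty series")
        self._fit(y, fh=fh)   # descendants never override fit itself
        self._is_fitted = True
        return self

    def _fit(self, y, fh=None):
        raise NotImplementedError("descendants must implement _fit")


class LastValueForecaster(BaseForecaster):
    # implements core logic only; no override of the public fit
    def _fit(self, y, fh=None):
        self._last_value = y[-1]
```

The payoff is that checks are written once in the base class and every descendant gets them for free.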
What does this implement/fix? Explain your changes.
- added `self.check_is_fitted()` to `BaseForecaster`'s `update_predict` fn
- changed `fit` to private `_fit`
- made the tests in `\forecasting\model_selection\tests\test_tune.py` pass
- `_tags`: used the (conservative) values from the extension template

Does your contribution introduce a new dependency? If yes, which one?
No
What should a reviewer concentrate their feedback on?
`_check_is_fitted` (line 170) - is it necessary?
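For context, the fitted-state guard discussed in this PR behaves roughly as follows. This is a hedged sketch with simplified stand-ins for sktime's actual classes; only the guard pattern itself is taken from the discussion above.

```python
# Sketch of the check_is_fitted guard: public prediction methods raise a
# "not fitted" error if called before fit. Simplified stand-ins for
# sktime's actual classes, for illustration only.

class NotFittedError(ValueError):
    pass


class BaseForecaster:
    def __init__(self):
        self._is_fitted = False

    def check_is_fitted(self):
        if not self._is_fitted:
            raise NotFittedError(
                f"{type(self).__name__} is not fitted yet; call fit first."
            )

    def update_predict(self, y_new):
        self.check_is_fitted()  # the guard this PR adds
        # ... core update/predict logic would follow here ...
```

Placing the guard in the base class means no descendant has to remember to check its own fitted state.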