MAINT DOC HGBT leave updated if loss is not smooth #26254
OmarManzoor merged 2 commits into scikit-learn:main from
Conversation
Don't know if that's what you're after, but maybe.
@glevv Do you want to give this PR a review?
@lorentzenchr Sorry, I don't think I will be able to.
@glevv No problem. I thought I'd just ask since you seemed interested in the PR :smirk:
OmarManzoor left a comment
LGTM. Thanks @lorentzenchr
Reference Issues/PRs
Popped up while working on #25964.
What does this implement/fix? Explain your changes.
HGBT leaf value updates now rely on loss.differentiable, and the reasons for them, as well as the differences from the standard gradient boosting algorithm, are explained.
Any other comments?
It is hard to find a reference for gradient boosting with a second-order loss approximation (using Hessians) and non-smooth losses.
Edit: https://arxiv.org/abs/1808.03064 explicitly considers the different boosting schemes and mentions the problem of non-smooth loss functions with Newton boosting.
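To make the problem concrete, here is a minimal NumPy sketch (a toy illustration, not scikit-learn's actual implementation) for the absolute error loss: its hessian vanishes almost everywhere, so the second-order (Newton) leaf value -sum(g)/sum(h) is ill-defined, and the leaf value has to be recomputed as the actual loss minimizer, the median of the residuals. The variable names and the leaf-update step below are assumptions chosen for illustration.

import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=3.0, scale=1.0, size=1000)
raw_pred = np.zeros_like(y)  # current raw predictions of the ensemble

# Absolute error loss L(y, p) = |y - p|:
# gradient w.r.t. p is sign(p - y), hessian is 0 almost everywhere.
g = np.sign(raw_pred - y)
h = np.zeros_like(y)

# The Newton leaf value -sum(g) / sum(h) divides by zero here,
# so the second-order scheme cannot produce a sensible leaf value.
with np.errstate(divide="ignore", invalid="ignore"):
    newton_value = -g.sum() / h.sum()
print(newton_value)  # inf or nan: the Newton step is undefined

# Remedy for non-smooth losses: after growing the tree, overwrite each
# leaf with the minimizer of the loss over its samples. For absolute
# error, that minimizer is the median of the residuals.
leaf_value = np.median(y - raw_pred)
print(leaf_value)  # ~3.0, a sensible update toward the target location

Per the PR description, scikit-learn's HGBT now keys this corrective leaf update on the loss's differentiable attribute rather than on a hard-coded list of losses.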