Hi,
I've had good results with HistGradientBoostingClassifier (an estimator that deserves to be more widely known), and I rely a lot on the decision threshold.
LightGBM allows a custom loss function, as demonstrated here, and this macro soft-F1 loss function could let me stop worrying about the decision threshold and improve results.
Full disclosure before asking: I'm a newbie ;-).
Is this achievable and relevant for HistGradientBoosting?
Since HGB is inspired by LightGBM, I would guess yes, but I'd like your thoughts on that.
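For context, here is a minimal sketch of what such a custom objective could look like in the LightGBM style (a function returning gradient and hessian with respect to the raw scores). This is my own numpy-only sketch of a binary soft-F1 objective, not code from either library; the macro version would apply it per class and average, and the hessian here is a crude positive surrogate, a common hack for non-decomposable losses:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def macro_soft_f1_objective(y_true, raw_score):
    """Gradient/hessian of L = 1 - soft-F1 w.r.t. raw scores (binary case).

    Soft-F1 uses probabilities instead of hard 0/1 predictions:
        soft-F1 = 2 * sum(p * y) / (sum(p) + sum(y))
    (the denominator equals 2*TP + FP + FN when written with soft counts).
    """
    p = sigmoid(raw_score)
    tp = np.sum(p * y_true)
    denom = np.sum(p) + np.sum(y_true)
    # dL/dp_i for L = 1 - 2*tp/denom, with d(tp)/dp_i = y_i, d(denom)/dp_i = 1
    dL_dp = -2.0 * (y_true * denom - tp) / denom**2
    grad = dL_dp * p * (1.0 - p)        # chain rule through the sigmoid
    hess = 2.0 * p * (1.0 - p) / denom  # positive surrogate, not the true 2nd derivative
    return grad, hess
```

In LightGBM this kind of function would be passed as the `objective`; whether HistGradientBoosting exposes an equivalent hook is exactly my question.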
Thanks!