Closed
🚀 Feature request
The trainer (pt, tf) is an easy access point for users who would rather not spend too much time building their own trainer class but prefer an out-of-the-box solution. Even though transformers was never meant to be a fully fledged training library, it might please users to add an additional feature: early stopping.
Motivation
Early stopping ensures that the trainer does not needlessly keep training when the loss no longer improves. This saves time, money, and, let's not forget, the trees. 😉 Performance-wise this should not lead to different results.
Your contribution
At the moment I cannot work on this, but here are my thoughts:
- A training argument should be added (pt, tf). This would only work when `evaluate_during_training` is enabled.
- For PyTorch: at every evaluation step, an early stopper (can be a separate class even) checks if the loss has improved in the last n steps, potentially with a minimal threshold by which the loss should have improved. If not, the trainer should stop.
- For TensorFlow: I don't have experience with TF myself, but I assume one could use `tf.keras.callbacks.EarlyStopping`.
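
The PyTorch-side check described above could be sketched as a small standalone helper. This is only a minimal illustration of the patience-plus-threshold idea; the class name `EarlyStopper`, its arguments, and `should_stop` are hypothetical, not part of the existing transformers API:

```python
class EarlyStopper:
    """Patience-based early stopping on the evaluation loss (hypothetical helper)."""

    def __init__(self, patience: int = 3, min_delta: float = 0.0):
        self.patience = patience        # evaluation steps to wait without improvement
        self.min_delta = min_delta      # minimal decrease that counts as improvement
        self.best_loss = float("inf")
        self.evals_without_improvement = 0

    def should_stop(self, eval_loss: float) -> bool:
        """Call once per evaluation step; returns True when training should stop."""
        if eval_loss < self.best_loss - self.min_delta:
            # Loss improved by at least min_delta: reset the counter.
            self.best_loss = eval_loss
            self.evals_without_improvement = 0
        else:
            self.evals_without_improvement += 1
        return self.evals_without_improvement >= self.patience
```

The trainer's evaluation loop would then just call `stopper.should_stop(eval_loss)` after each evaluation and break out of training when it returns True.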