A simple comparison of stochastic optimizers on classification/regression tasks.
This repository was originally created as supplemental material for
my lecture on stochastic optimization (slide).
Its aim is to test the basic capability of recent stochastic optimizers with a linear model,
rather than a neural network, since we only want to observe their convergence properties.
The implementation follows the scikit-learn convention:
each algorithm exposes `fit` and `predict` methods.
A sample use case can be found in the notebook,
and Figure 1 shows the learning curve it produces.
Fig 1: Learning curves on MNIST
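To illustrate the fit/predict convention mentioned above, here is a minimal sketch of what such an estimator could look like. The class name `SGDLinearRegression` and its hyperparameters are hypothetical, not the repository's actual API; it trains a linear model with plain SGD on squared loss.

```python
import numpy as np

class SGDLinearRegression:
    """Hypothetical sketch of a scikit-learn-style optimizer wrapper:
    a linear model trained with plain SGD, exposing fit/predict."""

    def __init__(self, lr=0.01, epochs=100, seed=0):
        self.lr = lr          # step size
        self.epochs = epochs  # passes over the data
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n, d = X.shape
        self.w = np.zeros(d)
        for _ in range(self.epochs):
            # visit samples in a fresh random order each epoch
            for i in self.rng.permutation(n):
                grad = (X[i] @ self.w - y[i]) * X[i]  # squared-loss gradient
                self.w -= self.lr * grad
        return self

    def predict(self, X):
        return X @ self.w

# toy data: y = 2x, so the learned weight should approach 2
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = 2.0 * X[:, 0]
model = SGDLinearRegression().fit(X, y)
```

Each concrete optimizer would only change the update rule inside `fit`, which keeps the comparison across algorithms uniform.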
```shell
git clone https://github.com/asahi417/StochasticOptimizers
cd StochasticOptimizers
pip install .
```
Implemented algorithms:
- Gradient Descents
- Dual Averaging
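As a sketch of the dual-averaging family listed above, the snippet below implements Nesterov-style simple dual averaging on an unconstrained quadratic: it keeps the sum of all past gradients and maps it to a primal point with an assumed step-size schedule beta_t = gamma * sqrt(t). The function name and schedule are illustrative assumptions, not the repository's implementation.

```python
import numpy as np

def dual_averaging(grad_fn, dim, steps=5000, gamma=1.0):
    """Simple dual averaging: average all past gradients, then take
    w_{t+1} = argmin_w <g_bar, w> + (gamma / (2 * sqrt(t))) * ||w||^2,
    which in closed form is w_{t+1} = -g_sum / (gamma * sqrt(t)).
    (Illustrative sketch; beta_t = gamma * sqrt(t) is an assumption.)"""
    w = np.zeros(dim)
    g_sum = np.zeros(dim)
    for t in range(1, steps + 1):
        g_sum += grad_fn(w)                 # accumulate gradient history
        w = -g_sum / (gamma * np.sqrt(t))   # closed-form primal update
    return w

# quadratic f(w) = 0.5 * ||w - target||^2 has gradient w - target,
# so the iterates should drift toward `target`
target = np.array([1.0, -2.0])
w_star = dual_averaging(lambda w: w - target, dim=2)
```

Unlike plain gradient descent, each iterate depends on the whole gradient history rather than only the last gradient, which is why dual averaging is often discussed separately from the gradient-descent family.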