In GitLab by @valentjn on Jul 28, 2018, 18:22
At SGA'18, Lorenzo Tamellini wanted to change the inner optimization method in the AugmentedLagrangian class (optimization module). The reason was that the default AdaptiveGradientDescent didn't converge well, as the optimum was near the boundary. He wanted to use NelderMead instead. This is currently not possible, so he had to work around it by creating a new class.
It'd be nice if
- it was possible to change the inner optimization method and
- it was possible to access the inner history of $\mu$ parameters.
One could look at MultiStart to see a possible solution.
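To illustrate what the requested interface could look like, here is a minimal, self-contained C++ sketch. All class and method names below are hypothetical and do not correspond to the actual SG++ signatures: the inner unconstrained optimizer is injected through the constructor, similar in spirit to how MultiStart delegates to another optimizer, and the $\mu$ value of every outer iteration is recorded so that it can be queried afterwards.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Hypothetical minimal interface for the inner solver (illustrative only,
// not the actual SG++ UnconstrainedOptimizer API).
class InnerOptimizer {
 public:
  virtual ~InnerOptimizer() = default;
  // Minimize f starting from x0 and return the (approximate) optimum.
  virtual std::vector<double> optimize(
      const std::function<double(const std::vector<double>&)>& f,
      const std::vector<double>& x0) = 0;
};

// Sketch of an AugmentedLagrangian-like wrapper that (a) takes the inner
// unconstrained optimizer as a constructor argument, so e.g. a Nelder-Mead
// implementation could be passed instead of gradient descent, and (b) records
// the history of the penalty parameter mu after every outer iteration.
class AugmentedLagrangianSketch {
 public:
  explicit AugmentedLagrangianSketch(InnerOptimizer& innerOptimizer,
                                     double mu0 = 1.0, double muGrowth = 2.0)
      : innerOptimizer_(innerOptimizer), mu_(mu0), muGrowth_(muGrowth) {}

  std::vector<double> optimize(
      const std::function<double(const std::vector<double>&)>& objective,
      const std::function<double(const std::vector<double>&)>& constraint,
      std::vector<double> x, std::size_t outerIterations) {
    muHistory_.clear();
    for (std::size_t k = 0; k < outerIterations; ++k) {
      const double mu = mu_;
      // Augmented objective with a simple quadratic penalty (placeholder for
      // the full augmented-Lagrangian term).
      auto augmented = [&](const std::vector<double>& y) {
        const double c = constraint(y);
        return objective(y) + 0.5 * mu * c * c;
      };
      x = innerOptimizer_.optimize(augmented, x);  // inner solve
      muHistory_.push_back(mu_);                   // expose mu history
      mu_ *= muGrowth_;                            // increase penalty
    }
    return x;
  }

  const std::vector<double>& getMuHistory() const { return muHistory_; }

 private:
  InnerOptimizer& innerOptimizer_;
  double mu_;
  double muGrowth_;
  std::vector<double> muHistory_;
};
```

Injecting the inner optimizer by reference would keep the interface change small and make the subclassing workaround unnecessary.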
Additionally, Lorenzo said that one default parameter value (he didn't know which) was off, so that the parameter was never increased. However, I think I already checked this some time ago and it should be correct (the parameters have been adapted from Toussaint's optimization script).
Thanks to Lorenzo for the input.