Choose inner optimization method in constrained optimization #63

@valentjn

Description


At SGA'18, Lorenzo Tamellini wanted to change the inner optimization method in the AugmentedLagrangian class (optimization module). The reason was that the default AdaptiveGradientDescent didn't converge well, as the optimum was near the boundary. He wanted to use NelderMead instead. This is currently not possible, so he had to hack it, i.e., create a new class.

It'd be nice if

  • it was possible to change the inner optimization method and
  • it was possible to access the inner history of $\mu$ parameters.

One could look at MultiStart to see a possible solution.
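The two requested features, a pluggable inner optimizer and access to the $\mu$ history, can be sketched generically. SG++'s actual AugmentedLagrangian is a C++ class, so the Python below is purely illustrative: all names, the equality-constrained setup, and the gradient-descent stand-in are hypothetical and not the library's API. The point is the design, in which the inner method is passed in as a callable and the $\mu$ values are recorded on each outer iteration.

```python
def numerical_grad(f, x, h=1e-6):
    """Central-difference gradient approximation."""
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grad.append((f(xp) - f(xm)) / (2.0 * h))
    return grad

def gradient_descent(f, x0, steps=200, lr=0.1):
    """Stand-in inner optimizer: plain descent with backtracking line search."""
    x = list(x0)
    for _ in range(steps):
        g = numerical_grad(f, x)
        fx, step = f(x), lr
        while step > 1e-12:
            xn = [xi - step * gi for xi, gi in zip(x, g)]
            if f(xn) < fx:
                x = xn
                break
            step /= 2.0
        else:
            break  # no descent step found; assume converged
    return x

def augmented_lagrangian(f, g, x0, inner_optimizer,
                         mu0=1.0, mu_factor=5.0, outer_iters=8):
    """Outer loop for min f(x) s.t. g(x) = 0.

    `inner_optimizer` is any callable (func, x0) -> x, so a different
    method (e.g. a Nelder-Mead implementation) can be plugged in.
    Returns the solution and the history of mu values.
    """
    x, lam, mu = list(x0), 0.0, mu0
    mu_history = [mu]
    for _ in range(outer_iters):
        def L(z, lam=lam, mu=mu):
            c = g(z)
            return f(z) + lam * c + 0.5 * mu * c * c
        x = inner_optimizer(L, x)   # inner method is interchangeable
        lam += mu * g(x)            # first-order multiplier update
        mu *= mu_factor             # simple monotone penalty increase
        mu_history.append(mu)
    return x, mu_history

# Example: min x^2 + y^2 s.t. x + y = 1; the optimum (0.5, 0.5) lies
# exactly on the constraint, the situation described above.
f = lambda z: z[0] ** 2 + z[1] ** 2
g = lambda z: z[0] + z[1] - 1.0
x, mus = augmented_lagrangian(f, g, [0.0, 0.0], gradient_descent)
```

Swapping the inner method then only requires passing a different callable, and the $\mu$ history is available to the caller without subclassing, which is essentially what MultiStart already allows for its inner optimizer.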

Additionally, Lorenzo said that one default parameter value (he didn't remember which) was off, so that the parameter was never increased. However, I think I checked this some time ago and it should be correct (the parameters were adapted from Toussaint's optimization script).

Thanks to Lorenzo for the input.

Metadata

Assignees

No one assigned

Labels

feature request (Desirable, nice-to-have feature), fixed (Issues that have been fixed)
