Reference


RegularizedProblems.RegularizedNLPModel (Type)
rmodel = RegularizedNLPModel(model, regularizer)
rmodel = RegularizedNLSModel(model, regularizer)

An aggregate type to represent a regularized optimization model, i.e., of the form

minimize f(x) + h(x),

where f is smooth (and is usually assumed to have Lipschitz-continuous gradient), and h is lower semi-continuous (and may have to be prox-bounded).

The regularized model is made of

  • model <: AbstractNLPModel: the smooth part of the model, for example a FirstOrderModel
  • h: the nonsmooth part of the model; typically a regularizer defined in ProximalOperators.jl
  • selected: the subset of variables to which the regularizer h should be applied (default: all).

This aggregate type can be used to call solvers with a single object representing the model, but is especially useful for use with SolverBenchmark.jl, which expects problems to be defined by a single object.
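As a minimal sketch (assuming RegularizedProblems and ProximalOperators are installed, and using the BPDN instance defined further down this page as the smooth part), a regularized model can be assembled as follows:

```julia
# Sketch: wrap a smooth model and an ℓ₁ regularizer into a single object.
using RegularizedProblems, ProximalOperators

model, nls_model, sol = bpdn_model()   # smooth part f: a basis-pursuit denoise instance
h = NormL1(1.0)                        # nonsmooth part h(x) = ‖x‖₁, prox from ProximalOperators.jl
rmodel = RegularizedNLPModel(model, h)       # f + h, regularizer applied to all variables
rnls   = RegularizedNLSModel(nls_model, h)   # same problem in least-squares form
```

Either object can then be passed as a single argument to a solver or to SolverBenchmark.jl.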

RegularizedProblems.MIT_matrix_completion_model (Method)
model, nls_model, sol = MIT_matrix_completion_model()

A special case of matrix completion problem in which the exact image is a noisy MIT logo.

See the documentation of random_matrix_completion_model() for more information.

RegularizedProblems.bpdn_model (Method)
model, nls_model, sol = bpdn_model(args...; kwargs...)
model, nls_model, sol = bpdn_model(compound = 1, args...; kwargs...)

Return an instance of an NLPModel and an instance of an NLSModel representing the same basis-pursuit denoise problem, i.e., the under-determined linear least-squares objective

½ ‖Ax - b‖₂²,

where A has orthonormal rows and b = A * x̄ + ϵ, x̄ is sparse and ϵ is a noise vector following a normal distribution with mean zero and standard deviation σ.

Arguments

  • m :: Int: the number of rows of A
  • n :: Int: the number of columns of A (with n ≥ m)
  • k :: Int: the number of nonzero elements in x̄
  • noise :: Float64: noise standard deviation σ (default: 0.01).

The second form calls the first form with arguments

m = 200 * compound
n = 512 * compound
k =  10 * compound

Keyword arguments

  • bounds :: Bool: whether or not to include nonnegativity bounds in the model (default: false).

Return Value

An instance of an NLPModel and of an NLSModel that represent the same basis-pursuit denoise problem, and the exact solution x̄.

If bounds == true, the positive part of x̄ is returned.
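For illustration, a sketch (assuming the package is installed) that builds the default instance and checks it against the dimensions stated above:

```julia
using RegularizedProblems

# compound = 1 gives m = 200, n = 512, k = 10 per the defaults above
model, nls_model, sol = bpdn_model()
length(sol)           # 512: the exact solution x̄ has n entries
count(!iszero, sol)   # 10: x̄ has k nonzero entries
```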

RegularizedProblems.group_lasso_model (Method)
model, nls_model, sol = group_lasso_model(; kwargs...)

Return an instance of an NLPModel and NLSModel representing the group-lasso problem, i.e., the under-determined linear least-squares objective

½ ‖Ax - b‖₂²,

where A has orthonormal rows and b = A * x̄ + ϵ, x̄ is sparse and ϵ is a noise vector following a normal distribution with mean zero and standard deviation σ. Note that in this format, all groups have the same number of elements and the number of groups divides evenly into the total number of elements.

Keyword Arguments

  • m :: Int: the number of rows of A (default: 200)
  • n :: Int: the number of columns of A, with n ≥ m (default: 512)
  • g :: Int: the number of groups (default: 16)
  • ag :: Int: the number of active groups (default: 5)
  • noise :: Float64: noise amount (default: 0.01)
  • compound :: Int: multiplier for m, n, g, and ag (default: 1).

Return Value

An instance of an NLPModel and an instance of an NLSModel that represent the same group-lasso problem. Also returns the exact solution x̄, the number of groups g, a vector of group indices indicating which groups are active, and a matrix whose rows are the group indices of x̄.
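A usage sketch (relying only on the three leading return values shown in the signature; the keyword values match the defaults above):

```julia
using RegularizedProblems

# 16 groups of 32 elements each, 5 of them active
model, nls_model, sol = group_lasso_model(m = 200, n = 512, g = 16, ag = 5)
length(sol)  # 512: the exact group-sparse solution x̄
```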

RegularizedProblems.nnmf_model (Function)
model, nls_model, Av, selected = nnmf_model(m = 100, n = 50, k = 10, T = Float64)

Return an instance of an NLPModel and an NLSModel representing the non-negative matrix factorization objective

f(W, H) = ½ ‖A - WH‖₂²,

where A ∈ Rᵐˣⁿ has non-negative entries and can be separated into k clusters, and Av = A[:]. The vector of indices selected = k*m+1 : k*(m+n) indicates the components of W ∈ Rᵐˣᵏ and H ∈ Rᵏˣⁿ to which the regularizer should be applied (so that the regularizer only applies to the entries of H).

Arguments

  • m :: Int: the number of rows of A
  • n :: Int: the number of columns of A (with n ≥ m)
  • k :: Int: the number of clusters
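A sketch combining this problem with a regularizer restricted to H; it assumes that `RegularizedNLPModel` accepts `selected` as a third argument, per the field description at the top of this page:

```julia
using RegularizedProblems, ProximalOperators

model, nls_model, Av, selected = nnmf_model(100, 50, 10)
# With m = 100, n = 50, k = 10: selected = 1001:1500, i.e., the entries of H
h = NormL0(1.0)   # ℓ₀ penalty on the entries of H only
rmodel = RegularizedNLPModel(model, h, selected)
```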
RegularizedProblems.random_matrix_completion_model (Method)
model, nls_model, sol = random_matrix_completion_model(; kwargs...)

Return an instance of an NLPModel and an instance of an NLSModel representing the same matrix completion problem, i.e., the square linear least-squares objective

½ ‖P(X - A)‖²

in the Frobenius norm, where X is the unknown image represented as an m x n matrix, A is a fixed image, and the operator P only retains a certain subset of pixels of X and A.

Keyword Arguments

  • m :: Int: the number of rows of X and A (default: 100)
  • n :: Int: the number of columns of X and A (default: 100)
  • r :: Int: the desired rank of A (default: 5)
  • sr :: AbstractFloat: a threshold between 0 and 1 used to determine the set of pixels retained by the operator P (default: 0.8)

  • va :: AbstractFloat: the variance of a first Gaussian perturbation to be applied to A (default: 1.0e-4)
  • vb :: AbstractFloat: the variance of a second Gaussian perturbation to be applied to A (default: 1.0e-2)
  • c :: AbstractFloat: the coefficient of the convex combination of the two Gaussian perturbations (default: 0.2).

Return Value

An instance of an NLPModel and of an NLSModel that represent the same matrix completion problem, and the exact solution.
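A usage sketch with the default dimensions (the comment on the total number of entries follows from m = n = 100):

```julia
using RegularizedProblems

# defaults: m = n = 100, r = 5, sr = 0.8
model, nls_model, sol = random_matrix_completion_model()
length(sol)  # 10_000 entries in the exact m × n solution
```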
