Tag Archives: optimization

Generalized Directional Derivatives: Some Examples

Earlier we defined the directional derivative of a function f: \mathbb{R}^n \to \overline{\mathbb{R}} as

\begin{aligned}f'(x; h) = \lim_{t \downarrow 0} \frac{f(x+th)-f(x)}{t},\end{aligned}

provided that the limit exists. It turns out that all convex functions are directionally differentiable on the interior (actually, the core) of their domains and f'(x; \cdot) is sublinear. However, the sublinearity property may fail when working with nonconvex functions. This motivates the definition of generalized directional derivatives, which will hopefully be accompanied by some good calculus rules.
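
As a quick numerical aside (my own, not part of the post), the definition above is easy to sanity-check with one-sided finite differences; the function f(x) = \|x\|_1 and the step size below are arbitrary choices.

import numpy as np

def directional_derivative(f, x, h, t=1e-7):
    # One-sided finite-difference estimate of f'(x; h) = lim_{t -> 0+} (f(x + t h) - f(x)) / t.
    return (f(x + t * h) - f(x)) / t

# A convex but nonsmooth function: f(x) = ||x||_1.
f = lambda x: np.abs(x).sum()

x = np.array([1.0, 0.0])                    # f has a kink in the second coordinate here
h = np.array([0.0, 1.0])
print(directional_derivative(f, x, h))      # ~ 1.0
print(directional_derivative(f, x, -h))     # ~ 1.0 (the two one-sided derivatives need not be opposite)
print(directional_derivative(f, x, 2 * h))  # ~ 2.0 (positive homogeneity of f'(x; .))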

Continue reading →

Pointwise maximum function differentiability

These are some notes on differentiability properties of the pointwise maximum of finitely many functions, based mainly on results from the book of Borwein and Lewis and from Rockafellar and Wets’s “Variational Analysis”.
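
Presumably the notes cover, among other things, the standard active-set formula for smooth pieces, (\max_i f_i)'(x; h) = \max_{i \in I(x)} f_i'(x) h, where I(x) is the set of indices attaining the maximum at x. Here is a small one-dimensional sanity check of that formula; the two functions are arbitrary choices of mine, not taken from the notes.

f1 = lambda x: x**2
d1 = lambda x: 2.0 * x      # f1'
f2 = lambda x: -x + 2.0
d2 = lambda x: -1.0         # f2'
g  = lambda x: max(f1(x), f2(x))

def dd(fun, x, h, t=1e-7):
    # one-sided finite-difference estimate of the directional derivative
    return (fun(x + t * h) - fun(x)) / t

x = 1.0                     # both pieces are active: f1(1) = f2(1) = 1
for h in (1.0, -1.0):
    formula = max(d1(x) * h, d2(x) * h)   # max over the active indices
    print(h, dd(g, x, h), formula)        # the two values should agree (about 2 and 1)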

Continue reading →

Notes on the Rayleigh Quotient

So here I am after a short period of absence. This will be a short blog post on the Rayleigh quotient of a symmetric matrix, A, which is defined as R_A(x) = x^\top A x / \|x\|^2, for x \in\mathbb{R}^n, with x \neq 0.
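
As a quick refresher (my own toy check, not taken from the post), the Rayleigh quotient is sandwiched between the extreme eigenvalues of A and attains them at the corresponding eigenvectors.

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                     # a symmetric matrix

def rayleigh(A, x):
    # R_A(x) = x^T A x / ||x||^2
    return (x @ A @ x) / (x @ x)

evals, evecs = np.linalg.eigh(A)      # eigenvalues in ascending order

x = rng.standard_normal(5)
print(evals[0] <= rayleigh(A, x) <= evals[-1])             # True for any nonzero x
print(np.isclose(rayleigh(A, evecs[:, 0]),  evals[0]))     # attained at the bottom eigenvector
print(np.isclose(rayleigh(A, evecs[:, -1]), evals[-1]))    # attained at the top eigenvector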

Continue reading →

Convergence of dynamic programming iterates

We are interested in the following infinite-horizon optimal control problem

\begin{aligned}\mathbb{P}_\infty(x): \mathrm{Minimise}_{\{u_t\}_{t=0}^{\infty}, \{x_t\}_{t=0}^{\infty}} & \sum_{t=0}^{\infty}\ell(x_t, u_t)\\ \text{s.t.}\, & x_{t+1} = f(x_t, u_t), \forall t\in\mathbb{N},\\ & x_t \in X, u_t \in U, \forall t\in\mathbb{N},\\ & x_{0} = x.\end{aligned}

We ask under what conditions the dynamic programming value iterates converge. We will state and prove a useful convergence theorem, but first we need to state some useful definitions.
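
To fix ideas, here is a minimal sketch of the value iteration V_{k+1}(x) = \min_u \{\ell(x, u) + V_k(f(x, u))\} on a tiny finite state/action space; the dynamics, stage cost and stopping rule below are placeholder choices of mine, not the post's example.

import numpy as np

# States 0..4, actions -1/0/+1, dynamics f(x,u) = clip(x+u, 0, 4),
# stage cost l(x,u) = x^2 + |u|, so staying at x = 0 is free.
X = np.arange(5)
U = np.array([-1, 0, 1])
f = lambda x, u: int(np.clip(x + u, 0, 4))
l = lambda x, u: float(x**2 + abs(u))

V = np.zeros(len(X))                       # V_0 = 0
for k in range(50):
    V_new = np.array([min(l(x, u) + V[f(x, u)] for u in U) for x in X])
    if np.max(np.abs(V_new - V)) < 1e-12:  # value iterates have converged
        break
    V = V_new

print(k, V)   # V[0] = 0 and V[x] is the optimal cost of steering x to the origin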

Continue reading →

Strict and strong convexity of maximum of two functions

This is a brief note on the strict and strong convexity properties of the maximum of two functions. The question is whether the maximum of two strictly/strongly convex functions is strictly/strongly convex. The proofs are very simple.
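
As a quick illustration (random spot checks of my own, certainly not a proof), one can test the defining inequality g(\lambda x + (1-\lambda)y) \leq \lambda g(x) + (1-\lambda) g(y) - \tfrac{\sigma}{2}\lambda(1-\lambda)\|x-y\|^2 for g = \max(f_1, f_2), where both f_i are 2-strongly convex quadratics.

import numpy as np

rng = np.random.default_rng(1)

# Two 2-strongly convex functions (arbitrary choices): f_i(x) = ||x - a_i||^2 + b_i
a1, a2 = rng.standard_normal(3), rng.standard_normal(3)
f1 = lambda x: np.sum((x - a1)**2)
f2 = lambda x: np.sum((x - a2)**2) + 0.5
g  = lambda x: max(f1(x), f2(x))
sigma = 2.0

ok = True
for _ in range(10000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    lam = rng.uniform()
    lhs = g(lam * x + (1 - lam) * y)
    rhs = lam * g(x) + (1 - lam) * g(y) - 0.5 * sigma * lam * (1 - lam) * np.sum((x - y)**2)
    ok &= bool(lhs <= rhs + 1e-10)
print(ok)   # True: no violation of the strong-convexity inequality was found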

Continue reading →

Interchangeability of infimum in risk measures

In this post we discuss the interchangeability of the infimum with (monotone) risk measures in finite probability spaces. In particular, we show that under the common monotonicity assumption (which is satisfied by all well-behaved risk measures), for a risk measure \rho:\mathbb{R}^n\to\mathbb{R} and a mapping f:\mathbb{R}^m\to\mathbb{R}^n, we have

\begin{aligned} \rho\left(\inf_x f(x)\right) = \inf_x \rho(f(x)) \end{aligned}

and \mathbf{argmin}_x f(x) \subseteq \mathbf{argmin}_x \rho(f(x)), while, under additional conditions (which are typically met in finite-dimensional spaces), we have \mathbf{argmin}_x f(x) = \mathbf{argmin}_x \rho(f(x)). Continue reading →
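
Here is a toy check of the displayed identity (my own construction, not from the post): a finite probability space with three outcomes, in the easy case where a single x minimizes every component of f(x); the two risk measures used, a weighted expectation and the worst case, are both monotone.

import numpy as np

# f maps a scalar decision x to a random cost vector with three outcomes,
# chosen so that x = 1 minimizes every component simultaneously.
f = lambda x: np.array([(x - 1)**2, (x - 1)**2 + 2.0, 3.0 * (x - 1)**2 + 1.0])

p = np.array([0.5, 0.3, 0.2])
rho_mean = lambda z: p @ z          # expectation: a monotone risk measure
rho_max  = lambda z: np.max(z)      # worst case:  also monotone

xs = np.linspace(-3, 3, 601)        # crude grid search standing in for the infimum
F  = np.array([f(x) for x in xs])   # shape (len(xs), 3)

for rho in (rho_mean, rho_max):
    rho_of_inf = rho(F.min(axis=0))             # rho(inf_x f(x)), componentwise infimum
    inf_of_rho = min(rho(f(x)) for x in xs)     # inf_x rho(f(x))
    print(rho_of_inf, inf_of_rho)               # ~ (0.8, 0.8) and (2.0, 2.0)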

Cone programs and self-dual embeddings

This post aims at providing some intuition into cone programs from different perspectives; in particular (a toy numerical check of item 3 follows the list):

  1. Equivalence of different formulations of cone programs
  2. Fenchel duality
  3. Primal-dual optimality conditions (OC)
  4. OCs as variational inequalities
  5. Homogeneous self-dual embeddings (HSDEs)
  6. OCs for HSDEs
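
As a taste of item 3 (a toy example of my own, not from the post, with sign conventions that may differ), here is a numerical check of the primal-dual optimality conditions for a cone program written in the standard form minimize c^\top x subject to Ax + s = b, s \in K, with K the nonnegative orthant.

import numpy as np

A = np.eye(2)
b = np.array([1.0, 1.0])
c = np.array([-1.0, -1.0])

# Hand-computed primal-dual solution of this tiny problem: x* = (1, 1), s* = 0, y* = (1, 1).
x = np.array([1.0, 1.0])
s = b - A @ x
y = np.array([1.0, 1.0])

print(np.allclose(A @ x + s, b))      # primal feasibility: Ax + s = b
print(np.all(s >= 0))                 # s in K
print(np.allclose(A.T @ y + c, 0))    # dual feasibility:  A^T y + c = 0
print(np.all(y >= 0))                 # y in K* (the nonnegative orthant is self-dual)
print(np.isclose(s @ y, 0))           # complementary slackness
print(np.isclose(c @ x + b @ y, 0))   # zero duality gap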

Continue reading →

Projection on epigraph via a proximal operator

A while ago I posted this article on how to project on the epigraph of a convex function, where I derived the optimality conditions and the KKT conditions. This post comes as an addendum providing a third way to project on an epigraph. Do read the previous article first because I use the same notation here. Continue reading →

Lagrange vs Fenchel Duality

In this post we discuss the correspondence between the Lagrangian and Fenchel duality frameworks, and we trace their common origin to the concept of convex conjugate functions and perturbed optimization problems. Continue reading →
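
For a concrete instance of that correspondence (a standard textbook computation, with sign conventions that may differ from the post's), perturb the right-hand side of the equality-constrained problem \min_x \{f(x) : Ax = b\}:

\begin{aligned} \varphi(u) = \inf_x \{ f(x) : Ax = b + u \}, \qquad \varphi^*(y) = \sup_x \left\{ y^\top (Ax - b) - f(x) \right\} = f^*(A^\top y) - b^\top y, \end{aligned}

so the conjugate-based (Fenchel-type) dual \sup_y \{-\varphi^*(-y)\} = \sup_y \{-f^*(-A^\top y) - b^\top y\} is exactly the Lagrangian dual \sup_y \inf_x \{f(x) + y^\top (Ax - b)\}.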

Projection on the epigraph of the squared Euclidean norm

As a follow-up on the previous post titled Projection on an epigraph, we here discuss how we can project on the epigraph of the squared norm function. Continue reading →
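
A small numerical sketch of one way to do this (a KKT-based derivation of my own, not necessarily the approach taken in the post): to project (z, s) onto \{(x, t) : \|x\|^2 \leq t\} when \|z\|^2 > s, one can take x = z/(1+\lambda) and t = s + \lambda/2, where \lambda \geq 0 solves \|z\|^2/(1+\lambda)^2 = s + \lambda/2.

import numpy as np

def proj_epi_sq_norm(z, s, tol=1e-12):
    # Project (z, s) onto the epigraph {(x, t) : ||x||^2 <= t}.
    nz2 = np.dot(z, z)
    if nz2 <= s:                  # already in the epigraph
        return z.copy(), s
    phi = lambda lam: nz2 / (1 + lam)**2 - (s + lam / 2)   # strictly decreasing in lam
    lo, hi = 0.0, 1.0
    while phi(hi) > 0:            # find an upper bracket for the root
        hi *= 2
    while hi - lo > tol:          # bisection on the multiplier
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if phi(mid) > 0 else (lo, mid)
    lam = (lo + hi) / 2
    return z / (1 + lam), s + lam / 2

rng = np.random.default_rng(2)
z, s = rng.standard_normal(3), -1.0
x, t = proj_epi_sq_norm(z, s)
print(np.isclose(np.dot(x, x), t))    # the projection lands on the boundary
d = np.dot(x - z, x - z) + (t - s)**2
for _ in range(1000):                 # no random feasible point should be closer to (z, s)
    w = rng.standard_normal(3)
    u = np.dot(w, w) + abs(rng.standard_normal())
    assert np.dot(w - z, w - z) + (u - s)**2 >= d - 1e-9
print("ok")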

Let us learn Russian

Let's learn the Russian language

Almost Originality

a mathematical journal

Annoying Precision

"A good stock of examples, as large as possible, is indispensable for a thorough understanding of any concept, and when I want to learn something new, I make it my first job to build one." - Paul Halmos

Alex Sisto

Alessandro Sisto's math blog

Journey into Randomness

random stuffs, mainly related to randomness

Society Of Mathematics

Make Mathematics Great Again

What's new

Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao

Normal Deviate

Thoughts on Statistics and Machine Learning

Research and Lecture notes

by Fabrice Baudoin

mathbabe

Exploring and venting about quantitative issues

Look at the corners!

The math blog of Dmitry Ostrovsky

The Unapologetic Mathematician

Mathematics for the interested outsider

Almost Sure

A random mathematical blog

Mathematix

Mathematix is the mathematician of Erquy - the village of Asterix
