Trust Region Methods

Saddle points

Saddle points pose a particular challenge in non-linear optimisation, particularly in higher dimensions. The plots below show two examples of saddle points in two dimensions. Like local minima and maxima, these are stationary points where the gradient of the function is zero, $\nabla f = 0$, but the value of the function rises along certain directions and falls along others (left plot). An alternative type of saddle point arises when the Hessian is singular; these are characterised by a plateau around the stationary point, like the monkey saddle depicted in the plot to the right.

Figure: two examples of saddle points (a standard saddle on the left, a monkey saddle on the right).
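The monkey saddle can be checked symbolically. The following sketch uses SymPy with $f(x_1, x_2) = x_1^3 - 3 x_1 x_2^2$ (the standard monkey saddle, assumed here as an illustration) to verify that the gradient vanishes and the Hessian is singular at the origin:

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = x1**3 - 3*x1*x2**2  # the monkey saddle
grad = [sp.diff(f, v) for v in (x1, x2)]
H = sp.hessian(f, (x1, x2))

# at the origin the gradient is zero (a stationary point) ...
print([g.subs({x1: 0, x2: 0}) for g in grad])  # [0, 0]
# ... and the Hessian is singular (zero determinant), giving a plateau
print(H.subs({x1: 0, x2: 0}).det())  # 0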

Gradient descent performs poorly, with very slow convergence near saddle points, and Newton's method tends to be attracted to and trapped in saddle points, due to the presence of negative eigenvalues in the Hessian matrix. This can be avoided by approximating the Hessian with a positive definite matrix $B_k \approx \nabla^2 f$, as some quasi-Newton methods do (e.g. BFGS). Alternatively, the class of trust region methods restates the optimisation problem as a sequence of minimisations of a second-order approximation to $f$ in a local trust region surrounding the current point $a_k$. The exact solution to each of these subproblems can be shown to be $-(\nabla^2 f(a_k) + \lambda I)^{-1} \nabla f(a_k)$, where $\lambda$ is chosen to be large enough to make $(\nabla^2 f(a_k) + \lambda I)$ positive definite. Therefore, by design, trust region methods avoid this problem of negative eigenvalues.
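As a quick numerical illustration of this regularisation (a minimal sketch; the matrix and the shift margin below are arbitrary choices for demonstration):

import numpy as np

# an indefinite Hessian at a saddle point: one negative eigenvalue, so the
# raw Newton step can point towards the saddle rather than away from it
H = np.array([[-2.0, 0.0],
              [0.0, 1.0]])
g = np.array([1.0, 1.0])

# shift by lambda*I, with lambda just larger than the magnitude of the most
# negative eigenvalue (the 0.1 margin is an arbitrary choice)
lam = -np.linalg.eigvalsh(H).min() + 0.1
H_reg = H + lam * np.eye(2)
assert np.all(np.linalg.eigvalsh(H_reg) > 0)  # now positive definite

# the regularised step -(H + lambda*I)^{-1} g is a descent direction: g.p < 0
p = -np.linalg.solve(H_reg, g)
print(p, g.dot(p) < 0)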

Trust region methods

Like many line search methods, trust region methods also use the second-order Taylor expansion of $f$ around $a_k$,

$$ f(a_k + p) \approx f(a_k) + g_k^T p + \frac{1}{2} p^T B_k p = m_k(p), $$

where $g_k = \nabla f(a_k)$ and $B_k$ is either an approximation to the Hessian matrix, $B_k \approx \nabla^2 f(a_k)$, or the Hessian itself, $B_k = \nabla^2 f(a_k)$. Trust region methods aim to find the $p$ that minimises $m_k$ in a local trust region $\|p\| \le \Delta_k$ around the current point $a_k$, where $\Delta_k$ is the trust region radius.

Solving the minimisation above is normally done approximately, and different trust region methods vary in how the approximation is achieved. Choosing the trust-region radius is fundamental to this class of methods, and is done by comparing the actual to the predicted reduction in the function value:

$$ \rho_k = \frac{f(a_k) - f(a_k + p_k)}{m_k(0) - m_k(p_k)}. $$

Since $m_k(0) - m_k(p_k)$ is always positive, a negative $\rho_k$ means that the actual function value has increased; the step is rejected and the trust region radius $\Delta_k$ is decreased in order to improve the approximate model $m_k$. If $\rho_k$ is positive but much smaller than one, we do not alter $\Delta_k$. If $\rho_k$ is close to or greater than one we can be confident in our model and thus increase $\Delta_k$. The general algorithm for a trust region method (reproduced from the text by Nocedal and Wright cited below) is:

Given $a_0$, $\hat{\Delta} > 0$, $\Delta_0 \in (0, \hat{\Delta})$, and $\nu \in [0, \frac{1}{4})$:
for $k = 0, 1, 2, \ldots$
   obtain $p_k$ by (approximately) minimising $m_k(p)$ subject to $\|p\| \le \Delta_k$
   $\rho_k := \frac{f(a_k) - f(a_k + p_k)}{m_k(0) - m_k(p_k)}$
   if $\rho_k < \frac{1}{4}$
      $\Delta_{k+1} := \frac{1}{4} \Delta_k$
   else if $\rho_k > \frac{3}{4}$ and $\|p_k\| = \Delta_k$
      $\Delta_{k+1} := \min(2 \Delta_k, \hat{\Delta})$
   else
      $\Delta_{k+1} := \Delta_k$
   if $\rho_k > \nu$
      $a_{k+1} := a_k + p_k$
   else
      $a_{k+1} := a_k$
end for
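Translated directly into Python, this loop might look like the following sketch, where solve_subproblem is a placeholder for any routine that (approximately) minimises $m_k$ within the trust region; the Cauchy point and dogleg methods described next are two such routines.

import numpy as np

def trust_region(f, grad_f, hessian_f, solve_subproblem, a0,
                 delta_hat=1.0, nu=0.2, max_iter=100):
    # a direct transcription of the pseudocode above
    a = np.copy(a0)
    delta = 0.5 * delta_hat
    for k in range(max_iter):
        g, B = grad_f(a), hessian_f(a)
        # solve_subproblem(delta, g, B) returns an (approximate) minimiser
        # of m_k(p) subject to ||p|| <= delta
        p = solve_subproblem(delta, g, B)
        predicted = -g.dot(p) - 0.5 * p.dot(B @ p)  # m_k(0) - m_k(p)
        rho = (f(a) - f(a + p)) / predicted
        if rho < 0.25:
            delta *= 0.25
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta = min(2 * delta, delta_hat)
        if rho > nu:
            a = a + p
    return a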

We will describe two algorithms for minimising $m_k(p)$: the Cauchy point and the dogleg methods. The Cauchy point method first solves a linear version of $m_k$, defined as

$$ p^s_k = \arg\min_{p \in \mathbb{R}^n} f(a_k) + g_k^T p \quad \text{for } \|p\| \le \Delta_k. $$

Subsequently, $p^s_k$ is used to find the scalar $\tau_k > 0$ such that

$$ \tau_k = \arg\min_{\tau \ge 0} m_k(\tau p_k^s) \quad \text{for } \|\tau p_k^s\| \le \Delta_k. $$

Finally, the Cauchy point is given as $p_k^C = \tau_k p_k^s$.

The solution to this problem can be shown to be

$$ p_k^C = -\tau_k \frac{\Delta_k}{\| g_k \|} g_k, $$

where

$$ \tau_k = \begin{cases} 1 & \text{if } g_k^T B_k g_k \le 0, \\ \min\left( \|g_k\|^3 / (\Delta_k\, g_k^T B_k g_k),\, 1 \right) & \text{otherwise}. \end{cases} $$
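Because the Cauchy point has this closed form it needs no iterative solver. A minimal NumPy sketch, following the formula above and taking the same $(\Delta_k, g_k, B_k)$ arguments as the solve_subproblem placeholder in the earlier sketch:

import numpy as np

def cauchy_point(delta, g, B):
    # curvature of the model along the steepest descent direction
    gBg = g.dot(B @ g)
    norm_g = np.linalg.norm(g)
    if gBg <= 0:
        # non-positive curvature along g: step all the way to the border
        tau = 1.0
    else:
        tau = min(norm_g**3 / (delta * gBg), 1.0)
    return -tau * (delta / norm_g) * g

This function can be passed directly as solve_subproblem in the trust region loop sketched above.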

The second method we describe is the dogleg method, which is applicable when $B_k$ is a positive definite matrix. If the original Hessian is positive definite then this method is directly applicable; otherwise, one of the quasi-Newton positive definite approximations to the Hessian can be used. The dogleg method is derived by considering the path traced by the minimiser of $m_k(p)$ as $\Delta_k$ increases, which forms a curved path in parameter space. The method approximates this path with two straight line segments. The first segment follows the steepest descent direction, ending at the unconstrained minimiser of $m_k$ along that direction,

$$ p_k^U = -\frac{g_k^T g_k}{g_k^T B_k g_k} g_k. $$

The second segment runs from $p_k^U$ to the full Newton step $p^B_k = -B_k^{-1} g_k$. If $p_k^B$ lies inside the trust region, $\|p_k^B\| \le \Delta_k$, then $p_k^B$ can be used without modification. Otherwise the point where the path crosses the trust-region boundary must be calculated, which can be done by solving the following quadratic equation in $\tau$:

$$ \|p_k^U + (\tau - 1)(p_k^B - p_k^U)\|^2 = \Delta_k^2, $$

with the full dogleg path being defined by

$$ \tilde{p}_k(\tau) = \begin{cases} \tau p_k^U, & 0 \le \tau \le 1, \\ p_k^U + (\tau - 1)(p_k^B - p_k^U), & 1 \le \tau \le 2. \end{cases} $$
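Expanding the norm and taking the positive root of the resulting quadratic gives the crossing point explicitly: writing $d_k = p_k^B - p_k^U$,

$$ \tau = 1 + \frac{-(p_k^U)^T d_k + \sqrt{\left((p_k^U)^T d_k\right)^2 - \|d_k\|^2 \left(\|p_k^U\|^2 - \Delta_k^2\right)}}{\|d_k\|^2}, $$

which is the root computed numerically by the line_sphere_intersect helper in the dogleg solution below.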

Problems

Question
  1. Let $f(x) = 10 \left( x_2 - x^2_1 \right)^2 + (1 - x_1)^2$. At $x = (0, -1)$ draw the contour lines of the quadratic model

$$ m_k(p) = f(a_k) + g_k^T p + \frac{1}{2} p^T B_k p, $$

assuming that $B_k$ is the Hessian of $f$. Draw the family of solutions of $\min_{p \in \mathbb{R}^n} m_k(p)$ subject to $\|p\| \le \Delta_k$ as the trust region radius varies from $\Delta_k = 0$ to $\Delta_k = 2$. Repeat this at $x = (0, 0.5)$.

Solution
import numpy as np
import matplotlib.pyplot as plt

def f(x):
    return 10*(x[1] - x[0]**2)**2 + (1-x[0])**2

def grad_f(x):
    return np.array([
        40*x[0] * (x[0]**2 - x[1]) + 2*x[0] - 2,
        -20*x[0]**2 + 20*x[1]
    ])

def hessian_f(x):
    return np.array([
        [120 * x[0]**2 - 40*x[1] + 2, -40*x[0]],
        [-40*x[0], 20],
    ])

# plot contours
x = np.linspace(-0.5, 2.0, 100)
y = np.linspace(-1.5, 2.0, 100)
X, Y = np.meshgrid(x, y)
Z = f([X, Y])
plt.contour(X, Y, Z, levels=30)
min_x = np.array([1., 1.])

for x0 in [np.array([0., -1.]), np.array([0., 0.5])]:
    g, B, f0 = grad_f(x0), hessian_f(x0), f(x0)

    def m(p):
        # quadratic model of f around x0
        return f0 + g.dot(p) + 0.5 * p.dot(B @ p)

    # contour lines of the quadratic model around x0
    P, Q = X - x0[0], Y - x0[1]
    M = (f0 + g[0]*P + g[1]*Q
         + 0.5*(B[0, 0]*P**2 + 2*B[0, 1]*P*Q + B[1, 1]*Q**2))
    plt.contour(X, Y, M, levels=30, linestyles='dashed')

    N = 20
    x = np.empty((N, 2), dtype=float)
    plt.plot(x0[0], x0[1], '.')
    plt.plot(min_x[0], min_x[1], '+')
    # the Newton step minimises the model only if B is positive definite
    p_newton = -np.linalg.solve(B, g) if np.all(np.linalg.eigvalsh(B) > 0) else None
    for i, delta in enumerate(np.linspace(1e-3, 2, N)):
        # if the unconstrained minimum of the model is within the radius it
        # solves the subproblem, otherwise the solution lies somewhere on the
        # border of the trust region
        if p_newton is not None and np.dot(p_newton, p_newton) < delta**2:
            x[i, :] = x0 + p_newton
        else:
            min_m = np.inf
            for omega in np.linspace(0, 2*np.pi, 200):
                p = delta * np.array([np.cos(omega), np.sin(omega)])
                if m(p) < min_m:
                    x[i, :] = x0 + p
                    min_m = m(p)
    plt.plot(x[:, 0], x[:, 1], '-.', linewidth=2)

plt.show()
Question
  1. Write a program that implements the dogleg method. Choose $B_k$ to be the exact Hessian. Apply it to minimise the function in (1) from the same two starting points. If you wish, experiment with the update rule for the trust region by changing the constants in the trust region algorithm given above, or by designing your own rules.
Solution
def line_sphere_intersect(o, u, r2):
    """
    algorithm from https://en.wikipedia.org/wiki/Line%E2%80%93sphere_intersection

    intersects the line o + t*u (u a unit vector) with the sphere of radius
    sqrt(r2) centred at the origin. Here we assume that the line intersects
    the sphere, and we are only interested in the most positive solution t
    """
    u_dot_o = u.dot(o)
    det = u_dot_o**2 - (np.dot(o, o) - r2)
    return -u_dot_o + np.sqrt(det)

def dogleg(delta, g, B):
    """
    implements the dogleg method of minimising m(p), assuming B is positive
    definite

    returns min p and a boolean that is True if min p lies on the trust region border
    """
    delta2 = delta**2
    pB = - np.linalg.solve(B, g)

    # if the full Newton step pB is within the trust region then use that
    if np.dot(pB, pB) < delta2:
        return pB, False

    # unconstrained minimiser of m along the steepest descent direction
    pU = - np.dot(g, g) / np.dot(g, B @ g) * g

    # find where the path 0 -> pU -> pB crosses the trust region border
    if np.dot(pU, pU) >= delta2:
        # the first segment already leaves the trust region: scale the
        # steepest descent step back to the border
        return (delta / np.linalg.norm(pU)) * pU, True
    else:
        # intersect the second segment (pU -> pB) with the border
        u = (pB - pU) / np.linalg.norm(pB - pU)
        intersect = line_sphere_intersect(pU, u, delta2)
        return pU + intersect * u, True


def minimise(f, grad_f, hessian_f, x0, tol=1e-5, max_iter=1000):
    nu = 0.2
    delta_hat = 1.0
    delta = delta_hat
    a = np.copy(x0)
    a_store = np.empty((max_iter+1, len(x0)), dtype=float)
    a_store[0, :] = a
    j = 1
    for i in range(max_iter):
        g = grad_f(a)
        B = hessian_f(a)
        f_a = f(a)
        p, p_equal_delta = dogleg(delta, g, B)
        # actual reduction in f over predicted reduction m(0) - m(p)
        rho = (f_a - f(a + p)) / (-g.dot(p) - 0.5*p.dot(B @ p))
        if rho < 0.25:
            delta *= 0.25
        else:
            if rho > 0.75 and p_equal_delta:
                delta = min(2*delta, delta_hat)
        if rho > nu:
            a += p
            a_store[j, :] = a
            j += 1
            if np.linalg.norm(p) < tol:
                break
    return a_store[:j, :]

# plot contours
x = np.linspace(-0.5, 2.0, 100)
y = np.linspace(-1.5, 2.0, 100)
X, Y = np.meshgrid(x, y)
Z = f([X, Y])
plt.contour(X, Y, Z, levels=30)
min_x = np.array([1., 1.])

# do the minimisation for both points and plot all iterations
for x0 in [np.array([0., -1.]), np.array([0., 0.5])]:
    plt.plot(x0[0], x0[1], '.')
    plt.plot(min_x[0], min_x[1], '+')
    a_store = minimise(f, grad_f, hessian_f, x0)
    print('done in', a_store.shape[0], 'iterations')
    plt.plot(a_store[:, 0], a_store[:, 1])

plt.show()