Nick Gould (Rutherford Appleton Laboratory)

Quadratic programming: theory and methods
Wednesday 28 June 2000 at 14.00-15.00 and 15.30-16.30, JCMB 6324

Abstract

In this talk, we shall consider numerical methods for solving general (that is, not necessarily convex) quadratic programming problems. Such problems arise in a number of real applications (such as portfolio selection), but perhaps more importantly as subproblems within more general nonlinear programming algorithms.
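For reference, a general quadratic program of the kind considered here may be written in a standard form (the symbols H, g, A and b below are our notation, not taken from the talk):

\[
\min_{x \in \mathbb{R}^n} \; q(x) = \tfrac12 x^T H x + g^T x
\quad \text{subject to} \quad A x \geq b,
\]

where the symmetric Hessian H need not be positive semidefinite, which is what makes the problem potentially nonconvex.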

We shall start by discussing (strong and weak) necessary and sufficient optimality conditions. We shall illustrate the difficulty of using the strongest necessary conditions, but also discuss how best to verify the weaker ones. We shall then consider, in turn, the two major classes of methods for solving quadratic programs, namely active-set and interior-point methods.
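As a pointer to what the weaker (first-order) conditions look like, the Karush-Kuhn-Tucker conditions for the standard form above ask for a multiplier vector y such that

\[
H x + g = A^T y, \qquad A x \geq b, \qquad y \geq 0, \qquad y^T (A x - b) = 0 ;
\]

the strong and weak conditions referred to here differ in the additional second-order (curvature) requirements placed on H over suitable cones or subspaces of feasible directions.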

Active-set methods aim to predict which of the constraints are active (that is, satisfied as equalities) at the solution. For a given active set, solving the resulting equality-constrained subproblem is straightforward in theory, but may require considerable ingenuity in practice. If an incorrect prediction is made, the prediction must be improved; this typically involves adding or removing one or more constraints from the active set. Efficiency in practice comes from the fact that consecutive equality-constrained subproblems have almost identical active sets, which allows for savings in the dominant linear-algebraic costs. To date, no active-set method is known to guarantee a polynomial bound on the number of active-set changes, even in the simple convex case. In practice, however, such methods remain competitive, at least on small- to medium-sized problems, or on problems where some advance knowledge of the optimal active set is provided.
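As a concrete illustration of one equality-constrained subproblem solve inside an active-set iteration, the following small Python sketch solves the associated KKT system directly. The data H, g, A_W and b_W are invented for illustration, and the dense solve is purely for simplicity; practical codes instead update factorizations as the working set changes.

    import numpy as np

    # Data for  min 0.5*x'Hx + g'x  subject to  A_W x = b_W  (working set W);
    # these particular numbers are illustrative only.
    H = np.array([[2.0, 0.0],
                  [0.0, 2.0]])
    g = np.array([-2.0, -5.0])
    A_W = np.array([[1.0, -2.0]])
    b_W = np.array([-2.0])

    n, m = H.shape[0], A_W.shape[0]

    # KKT system  [ H   A_W' ] [  x ]   [ -g  ]
    #             [ A_W   0  ] [ -y ] = [ b_W ]
    K = np.block([[H, A_W.T],
                  [A_W, np.zeros((m, m))]])
    rhs = np.concatenate([-g, b_W])
    sol = np.linalg.solve(K, rhs)
    x, y = sol[:n], -sol[n:]

    print("subproblem minimiser:", x)   # [1.4, 1.7]
    print("multipliers:", y)            # [0.8]

In an active-set method for inequality constraints, the signs of the multipliers y then indicate whether constraints should be kept in, or dropped from, the working set.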

Interior-point methods, on the other hand, can provide polynomial bounds on the number of steps taken, at least in the convex case. Practical interior-point methods do not necessarily retain these polynomial bounds, but appear to be the most effective in practice. Unlike active-set methods, interior-point methods aim for optimality by requiring only that the complementary slackness condition (i.e., that each constraint is either active or has a zero Lagrange multiplier) is achieved in the limit rather than at every iteration. This flexibility gives such methods the potential to avoid difficulties caused by degeneracy. A second notable distinction is that the linear-algebraic cost per iteration is normally far higher for interior-point methods, but this is more than offset by a significant reduction in the number of iterations.
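To make the relaxed complementarity idea concrete, a typical primal-dual interior-point method for the standard form above introduces slacks c = A x - b and replaces the complementarity condition by a perturbed version,

\[
H x + g = A^T y, \qquad A x - c = b, \qquad c_i y_i = \mu \;\; (i = 1, \dots, m), \qquad c > 0, \; y > 0,
\]

and drives the parameter \mu towards zero, usually by taking Newton-like steps on this system; the linear algebra of each such step is the dominant per-iteration cost mentioned above.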

We shall aim to emphasise the additional difficulties that nonconvexity brings to both classes of methods, and we shall illustrate the performance of both on a set of test examples.
