Interior point methods (IPM) with direct solution of the underlying linear systems of equations have been used successfully to solve very large scale linear programming (LP) and quadratic programming (QP) problems. However, the limitations of direct methods for some classes of problems have led to iterative techniques being considered. The matrix-free method is one such approach, so named because the iterative solution procedure requires only the results of the operations Ax and A^T y, where A is the matrix of constraint coefficients. Thus, in principle, it may be applied to problems where A is not known explicitly and only an oracle is available for computing Ax and A^T y. Since the computational cost of these operations may well dominate the total solution time for the problem, it is important that the techniques used to perform them are efficient.
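As an illustration of such an oracle, the two products can be sketched over a compressed sparse row (CSR) representation of A; the function and variable names here are illustrative, not drawn from any particular implementation.

```python
# Minimal sketch of a matrix-free "oracle" for an m-by-n constraint matrix A
# held in CSR form (indptr, indices, data). A solver using this oracle never
# touches A directly; it only calls matvec and rmatvec.

def matvec(indptr, indices, data, x, m):
    """Compute y = A @ x: one sparse dot product per row."""
    y = [0.0] * m
    for i in range(m):
        s = 0.0
        for k in range(indptr[i], indptr[i + 1]):
            s += data[k] * x[indices[k]]
        y[i] = s
    return y

def rmatvec(indptr, indices, data, y, n):
    """Compute z = A^T @ y: scatter each row's entries, scaled by y[i]."""
    z = [0.0] * n
    for i in range(len(indptr) - 1):
        for k in range(indptr[i], indptr[i + 1]):
            z[indices[k]] += data[k] * y[i]
    return z

# Example: A = [[1, 2, 0], [0, 3, 4]] in CSR form.
indptr, indices, data = [0, 2, 4], [0, 1, 1, 2], [1.0, 2.0, 3.0, 4.0]
print(matvec(indptr, indices, data, [1.0, 1.0, 1.0], 2))   # A @ [1,1,1]
print(rmatvec(indptr, indices, data, [1.0, 1.0], 3))       # A^T @ [1,1]
```

The row-wise loop in `matvec` runs in time proportional to the number of nonzeros, which is why these products, rather than any factorization, set the per-iteration cost of the matrix-free approach.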
This talk will outline the matrix-free interior point method and, for several classes of LP problems, demonstrate its overwhelmingly superior performance relative to the simplex method and to an IPM in which the linear systems are solved directly. The dominant cost of the operations Ax and A^T y will then be used to motivate their implementation on a GPU to yield further performance gains. Different computational schemes for these sparse matrix-vector products are discussed. A comparison of the speed-up achieved by a many-core GPU implementation with that of a multi-core CPU implementation indicates that the former has the greater potential.
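To give a concrete sense of what "different computational schemes" can mean, one alternative to the row-oriented CSR scheme is to store A by columns (CSC): Ax then becomes a scatter of scaled columns, while A^T y becomes an independent dot product per column, a pattern that maps naturally to one thread per column on a many-core device. This is a hedged sketch of that scheme under illustrative names, not the specific kernels evaluated in the talk.

```python
# Column-oriented (CSC) scheme for the two products, with A held as
# (colptr, rowind, data). Contrast with the row-oriented CSR scheme:
# here A @ x scatters, and A^T @ y gathers.

def matvec_csc(colptr, rowind, data, x, m):
    """y = A @ x: accumulate column j scaled by x[j] (a scatter)."""
    y = [0.0] * m
    for j in range(len(colptr) - 1):
        xj = x[j]
        if xj == 0.0:
            continue  # skip structurally irrelevant columns
        for k in range(colptr[j], colptr[j + 1]):
            y[rowind[k]] += data[k] * xj
    return y

def rmatvec_csc(colptr, rowind, data, y, n):
    """z = A^T @ y: one independent dot product per column (a gather)."""
    z = [0.0] * n
    for j in range(n):
        s = 0.0
        for k in range(colptr[j], colptr[j + 1]):
            s += data[k] * y[rowind[k]]
        z[j] = s
    return z

# Example: the same A = [[1, 2, 0], [0, 3, 4]], now in CSC form.
colptr, rowind, data = [0, 1, 3, 4], [0, 0, 1, 1], [1.0, 2.0, 3.0, 4.0]
print(matvec_csc(colptr, rowind, data, [1.0, 1.0, 1.0], 2))
print(rmatvec_csc(colptr, rowind, data, [1.0, 1.0], 3))
```

On a GPU the gather-style loop is the easier one to parallelize, since each output entry is written by exactly one thread; the scatter-style loop needs atomic updates or a different decomposition, which is one reason the choice of storage scheme matters for each of the two products.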