# Parallel matrix inversion for the revised simplex method - A study

### Contributed talk at the Sparse Days Meeting 2006 at CERFACS: 15th-16th June 2006

### J. A. J. Hall

#### Abstract

The revised simplex method for solving linear programming (LP) problems
requires the solution of $B\hat{\mathbf{a}}_q = \mathbf{a}_q$ and
$B^T\boldsymbol{\pi} = \mathbf{e}_p$, where $B$ is the basis matrix,
$\mathbf{a}_q$ is column $q$ of the (sparse) constraint matrix and
$\mathbf{e}_p$ is column $p$ of the identity matrix. Although the
invertible representation of $B$ is usually obtained by an updating
procedure, periodically it is necessary to form it from scratch. As a
crucial step towards the goal of a practical parallel implementation of
the revised simplex method, this talk considers the scope for exploiting
parallelism during basis matrix inversion.
For LP problems where the revised simplex method outperforms barrier
methods, the solutions of $B\hat{\mathbf{a}}_q = \mathbf{a}_q$ and
$B^T\boldsymbol{\pi} = \mathbf{e}_p$ are generally sparse. This property
of LP problems is referred to as hyper-sparsity. It will be shown that
the basis matrices of hyper-sparse LP problems are generally highly
reducible, and that the dominant cost of matrix inversion is that of
identifying the irreducible component via a triangularisation procedure.
This talk will describe the triangularisation procedure and discuss the
scope for parallelising it.
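
A classical triangularisation of this kind repeatedly plucks singleton rows and columns off the basis matrix, permuting them to the front and back of a triangular ordering; whatever remains is the irreducible "bump" that a general factorisation must handle. The sketch below is a minimal illustration of that idea, not the implementation discussed in the talk; it assumes the sparsity pattern of $B$ is supplied as a list of column row-index sets (a hypothetical representation chosen for brevity).

```python
def triangularise(cols, n):
    """Permute an n-by-n sparse pattern toward lower-triangular form.

    cols[j] is the set of row indices of nonzeros in column j.
    Returns the front pivots (from row singletons), the back pivots
    (from column singletons) and the remaining irreducible bump as a
    pair (active_rows, active_cols).
    """
    # Build the row-wise view of the pattern.
    rows = [set() for _ in range(n)]
    for j, col in enumerate(cols):
        for i in col:
            rows[i].add(j)

    active_r, active_c = set(range(n)), set(range(n))
    front, back = [], []
    changed = True
    while changed:
        changed = False
        # A row with a single active nonzero can be pivoted at the front.
        for i in list(active_r):
            live = rows[i] & active_c
            if len(live) == 1:
                j = live.pop()
                front.append((i, j))
                active_r.discard(i)
                active_c.discard(j)
                changed = True
        # A column with a single active nonzero can be pivoted at the back.
        for j in list(active_c):
            live = cols[j] & active_r
            if len(live) == 1:
                i = live.pop()
                back.append((i, j))
                active_r.discard(i)
                active_c.discard(j)
                changed = True
    return front, back, (active_r, active_c)


# Example: rows 0 and 3 are eliminated as singletons; rows/columns
# {1, 2} form a 2-by-2 irreducible bump.
front, back, bump = triangularise([{0}, {1, 2}, {1, 2}, {1, 2, 3}], 4)
```

For a highly reducible basis, the bump returned here is small, which is precisely why the triangular passes, rather than the factorisation of the bump, dominate the inversion cost.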

**Slides:**