The revised simplex method for solving linear programming (LP) problems requires the solution of B â_q = a_q and B^T π = e_p, where B is the basis matrix, a_q is column q of the (sparse) constraint matrix and e_p is column p of the identity matrix. Although an invertible representation of B is usually maintained by an updating procedure, periodically it must be formed from scratch. As a crucial step towards the goal of a practical parallel implementation of the revised simplex method, this talk considers the scope for exploiting parallelism during basis matrix inversion.
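As a rough illustration (not the talk's implementation), the two solves can be sketched for a small dense basis matrix; in practice B is sparse and the systems are solved via its factored invertible representation. The matrix and vectors below are made-up toy data.

```python
import numpy as np

# Toy basis matrix B (in practice: sparse, factored, and updated).
B = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

a_q = np.array([1.0, 0.0, 0.0])    # column q of the constraint matrix
p = 2
e_p = np.zeros(3)
e_p[p] = 1.0                       # column p of the identity matrix

# The two revised-simplex systems:
a_hat_q = np.linalg.solve(B, a_q)      # B a_hat_q = a_q  (pivotal column)
pi = np.linalg.solve(B.T, e_p)         # B^T pi = e_p     (pivotal row)
```

In a hyper-sparse setting both right-hand sides and both solutions are mostly zero, which is what makes exploiting sparsity, and parallelism, worthwhile.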
For LP problems where the revised simplex method outperforms barrier methods, the solutions of B â_q = a_q and B^T π = e_p are generally sparse. This property of LP problems is referred to as hyper-sparsity. It will be shown that the basis matrices of hyper-sparse LP problems are generally highly reducible, and that the dominant cost of matrix inversion is that of identifying the irreducible component via a triangularisation procedure. This talk will describe the triangularisation procedure and discuss the scope for parallelising it.
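A minimal sketch of one common form of triangularisation (not necessarily the procedure described in the talk): repeatedly eliminate row singletons, whose pivots form a triangular part of B; the rows and columns left over constitute the irreducible component that requires general factorisation. Column singletons would be treated symmetrically; they are omitted here for brevity.

```python
import numpy as np

def triangularise(B):
    """Peel off row singletons from the nonzero pattern of B.

    Returns the pivot order forming the triangular part and the
    remaining (row, column) index sets of the irreducible component.
    """
    n = B.shape[0]
    pattern = (B != 0)
    active_rows = set(range(n))
    active_cols = set(range(n))
    order = []                      # (row, col) pivots in elimination order
    changed = True
    while changed:
        changed = False
        for i in list(active_rows):
            cols = [j for j in active_cols if pattern[i, j]]
            if len(cols) == 1:      # row singleton: its one entry is a pivot
                order.append((i, cols[0]))
                active_rows.remove(i)
                active_cols.remove(cols[0])
                changed = True
    return order, active_rows, active_cols

# A lower-triangular example reduces completely: no irreducible part.
order, rows, cols = triangularise(np.array([[1, 0, 0],
                                            [2, 3, 0],
                                            [4, 5, 6]]))
```

For a highly reducible basis matrix almost everything is eliminated this way, which is why the singleton search itself dominates the cost of inversion and becomes the natural target for parallelisation.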