Parallelising the dual revised simplex method

Invited talk at the Convex Optimization and Beyond Workshop, ICMS, 27th June 2014.

J. A. J. Hall

Abstract

Relaxations of MIP problems and many classes of continuous LP problems are generally solved using the dual revised simplex method. Despite its potential value, there has been relatively little research into developing implementations of the revised simplex method that exploit parallel computing environments. Hitherto these efforts have met with very limited success, in particular for general large sparse LP problems. This talk will discuss three recent approaches. The first two exploit data and task parallelism when solving general LP problems and achieve meaningful performance improvements relative to Clp, the leading open-source serial simplex solver. The third approach exploits the inherent data parallelism resulting from the structure of stochastic LP problems. Results obtained on a large cluster and supercomputer using a distributed-memory implementation are presented for stochastic LP problems with up to 10^8 variables and constraints. These achieve parallel efficiency when using up to 128 cores and run up to 100 times faster than Clp.
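For context, the structure underlying the third approach is the dual block-angular form of the deterministic equivalent of a two-stage stochastic LP, in which the scenario blocks are coupled only through the first-stage variables and can therefore be processed in parallel. A standard sketch of this form (the notation A, T_i, W_i is generic, not taken from the talk) is:

\min_{x_0, x_1, \dots, x_N} \; c_0^T x_0 + \sum_{i=1}^{N} c_i^T x_i
\quad \text{subject to} \quad
\begin{aligned}
A x_0 &= b_0, \\
T_i x_0 + W_i x_i &= b_i, \quad i = 1, \dots, N, \\
x_0, x_i &\ge 0.
\end{aligned}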


Slides:
PDF: COB14.pdf

Paper:

Parallel distributed-memory simplex for large-scale stochastic LP problems
M. Lubin, J. A. J. Hall, C. G. Petra and M. Anitescu
Technical Report ERGO-12-005.
Computational Optimization and Applications 55(3), 571-596, 2013. DOI: 10.1007/s10589-013-9542-y