When applying an algorithm to a given problem, the performance of its implementation depends on exploiting both the nature of the problem and the computing resources available. This talk considers four issues influencing high-performance implementations of the simplex method for linear programming (LP). The first is the exploitation of hyper-sparsity, which has been the major contribution to the improvement in performance of serial implementations of the revised simplex method over the past ten years. The second is the efficient use of parallel computers, something yet to be achieved by simplex solvers for general LP problems. The third is the exploitation of LP problem structure in the context of the parallel solution of block-angular LP problems. Finally, the nature of the LP problems occurring in sparse reconstruction can be exploited within the dual revised simplex method to provide a fast and reliable solver.
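For readers unfamiliar with the method the talk builds on, the following is a minimal dense tableau sketch of the primal simplex method for a toy LP. It is an illustration only: the revised simplex method discussed in the talk does not maintain a full tableau but instead works with a factorized basis matrix, which is what makes exploiting sparsity and hyper-sparsity possible. All names here are illustrative, not from the talk.

```python
def simplex(c, A, b):
    """Maximize c^T x subject to A x <= b, x >= 0, via a dense tableau.
    Assumes b >= 0 so the all-slack basis is feasible (toy sketch only;
    real revised simplex solvers factorize the basis matrix instead)."""
    m, n = len(A), len(c)
    # Tableau rows: [A | I | b] for the constraints, slacks as identity.
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    # Objective row: negated costs, zero for slacks, zero objective value.
    T.append([-ci for ci in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))  # start from the slack basis
    while True:
        # Entering variable: most negative reduced cost (Dantzig rule).
        piv_col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][piv_col] >= -1e-9:
            break  # all reduced costs non-negative: optimal
        # Ratio test chooses the leaving row.
        ratios = [(T[i][-1] / T[i][piv_col], i)
                  for i in range(m) if T[i][piv_col] > 1e-9]
        if not ratios:
            raise ValueError("LP is unbounded")
        _, piv_row = min(ratios)
        basis[piv_row] = piv_col
        # Gaussian elimination on the pivot column.
        p = T[piv_row][piv_col]
        T[piv_row] = [v / p for v in T[piv_row]]
        for i in range(m + 1):
            if i != piv_row and abs(T[i][piv_col]) > 1e-12:
                f = T[i][piv_col]
                T[i] = [a - f * r for a, r in zip(T[i], T[piv_row])]
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[-1][-1]

# Example: maximize x1 + 2*x2 s.t. x1 + x2 <= 4, x1 + 3*x2 <= 6.
x, obj = simplex([1.0, 2.0], [[1.0, 1.0], [1.0, 3.0]], [4.0, 6.0])
# Optimal vertex is approximately x = (3, 1) with objective value 5.
```

The dense elimination step above touches every tableau entry per iteration; the revised method instead solves two linear systems with the basis factorization, and hyper-sparsity refers to those solutions themselves being sparse.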
Slides: PDF (with pauses): N72_Pauses.pdf
Slides: PDF (without pauses): N72_NoPauses.pdf