The group has strong experience in the research and development of computational techniques, reflected in the production and implementation of computational software.
24am is a suite of 24 parallel sparse PCA codes based on alternating maximization, by Peter Richtárik and Martin Takáč.
More information about 24am
AC/DC implements accelerated coordinate descent methods for minimizing composite functions, by Peter Richtárik and Martin Takáč.
More information about AC/DC
BASICLU implements a sparse LU factorization and an update method that maintains the factorization after column changes to the matrix. It is intended for use in simplex-type algorithms and has been tailored to hypersparse linear programming problems.
The algorithm is described in Technical Report ERGO 17-002 and the source code is available from the ERGO github page.
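The idea of maintaining a factorization across column changes can be illustrated with a toy product-form-of-the-inverse update: keep a factorization of the initial basis and append one elementary "eta" transformation per column replacement. This is only a sketch of the problem BASICLU addresses, not its algorithm (BASICLU updates the sparse LU factors themselves):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

class ProductFormBasis:
    """Toy product-form update for a simplex basis: B_k = B_0 E_1 ... E_k,
    where each E_j is an elementary (eta) matrix. Illustrative only."""

    def __init__(self, B):
        self.lu = lu_factor(B)   # factorize the initial basis once
        self.etas = []           # list of (pivot_index, eta_column)

    def solve(self, b):
        """Return B_k^{-1} b by applying B_0^{-1}, then each E_j^{-1}."""
        x = lu_solve(self.lu, b)
        for p, d in self.etas:
            xp = x[p] / d[p]     # assumes d[p] != 0 (nonsingular basis)
            x = x - d * xp
            x[p] = xp
        return x

    def replace_column(self, p, a):
        """Replace column p of the current basis by the vector a."""
        d = self.solve(a)        # eta column d = B_k^{-1} a
        self.etas.append((p, d))
```

In practice the eta file grows with every update, which is why codes such as BASICLU instead modify the LU factors directly and refactorize periodically.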
HiGHS is a high-performance serial and parallel solver for large-scale sparse linear programming problems.
HiGHS is based on research that won best paper prizes in 2005, 2015 and 2018. The source code is available from the ERGO github page.
LURank is an implementation of a rank-revealing LU factorization. It applies the maximum-volume concept when choosing pivots during Gaussian elimination.
More information about LURank
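A simple stand-in for this idea is Gaussian elimination with complete pivoting, a greedy strategy that locally maximizes the pivot (and hence the volume of the leading submatrix). The sketch below estimates rank this way; it is illustrative, not the LURank code:

```python
import numpy as np

def lu_rank(A, tol=1e-10):
    """Rank estimate via Gaussian elimination with complete pivoting
    (a greedy local-volume pivoting rule; not LURank's algorithm).
    tol is an absolute pivot threshold; a relative one is better in practice."""
    U = np.array(A, dtype=float)
    m, n = U.shape
    rank = 0
    for k in range(min(m, n)):
        # pick the largest entry of the trailing submatrix as pivot
        sub = np.abs(U[k:, k:])
        i, j = np.unravel_index(np.argmax(sub), sub.shape)
        if sub[i, j] <= tol:
            break                      # remaining block is numerically zero
        U[[k, k + i], :] = U[[k + i, k], :]   # row swap
        U[:, [k, k + j]] = U[:, [k + j, k]]   # column swap
        rank += 1
        # eliminate entries below the pivot
        U[k + 1:, k:] -= np.outer(U[k + 1:, k] / U[k, k], U[k, k:])
    return rank
```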
HOPDM (Higher Order Primal-Dual Method) is Jacek Gondzio's implementation of an infeasible primal-dual path-following interior point method for linear, convex quadratic and convex nonlinear programming problems.
HOPDM solves large-scale linear, convex quadratic and convex nonlinear programming problems. The algorithm uses multiple centrality correctors, with their number chosen for each problem so as to reduce the overall solution time. HOPDM automatically chooses the more efficient factorization approach for a given problem (either normal equations or the augmented system). The code compares favourably with commercial LP, QP and NLP packages.
More information about HOPDM
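The normal-equations versus augmented-system choice can be illustrated on a generic interior point Newton step. With constraint matrix A and diagonal scaling Θ, the same step can be obtained from the large indefinite augmented system or from the smaller positive definite normal equations. This dense sketch (hypothetical helper names, not HOPDM code) shows the two equivalent routes:

```python
import numpy as np

def newton_step_augmented(A, theta, r, s):
    """Solve the indefinite augmented system
    [-Theta^{-1}  A^T] [dx]   [r]
    [ A           0  ] [dy] = [s]."""
    m, n = A.shape
    K = np.block([[-np.diag(1.0 / theta), A.T],
                  [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([r, s]))
    return sol[:n], sol[n:]

def newton_step_normal(A, theta, r, s):
    """Same step via the normal equations:
    A Theta A^T dy = s + A Theta r,  then dx = Theta (A^T dy - r)."""
    M = (A * theta) @ A.T            # A Theta A^T (theta scales columns)
    dy = np.linalg.solve(M, s + A @ (theta * r))
    dx = theta * (A.T @ dy - r)
    return dx, dy
```

The normal equations are smaller and definite but can be much denser than the augmented system, which is why a code may pick between them per problem.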
IPX implements a linear programming interior point solver based on iterative linear algebra. It provides a crossover method for recovering a basic solution. The algorithm is described in Technical Report ERGO 18-014 and the source code is available from the ERGO github page.
MFIPMCS (Matrix-free Interior Point Method for Compressed Sensing) is an interior point method implemented in MATLAB for the solution of real valued compressed sensing problems.
"Matrix-free" means that the solver accesses the problem only through matrix-vector product operations, so no matrix needs to be stored explicitly. The solver employs an efficient preconditioning technique together with a Krylov subspace method (conjugate gradients) for the fast solution of the linear system arising at every iteration.
More information about MFIPMCS
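A minimal sketch of the matrix-free setting: preconditioned conjugate gradients where the operator is supplied only as a matvec callable. This is the generic PCG algorithm, not the solver's particular preconditioner:

```python
import numpy as np

def pcg(matvec, b, precond=None, tol=1e-8, maxiter=500):
    """Preconditioned conjugate gradients for an SPD operator given only
    as a matrix-vector product ('matrix-free'). precond, if given, should
    apply an approximation of the inverse operator."""
    x = np.zeros_like(b)
    r = b.copy()                       # residual b - A x with x = 0
    z = precond(r) if precond else r
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = precond(r) if precond else r
        rz_new = r @ z
        p = z + (rz_new / rz) * p      # new search direction
        rz = rz_new
    return x
```

A typical call passes `lambda v: A @ v` as `matvec` and a Jacobi preconditioner `lambda v: v / np.diag(A)`.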
OOPS (Object-Oriented Parallel Solver) is a parallel interior point code that exploits any special structure in the Hessian and Jacobian matrices, developed by Jacek Gondzio, Andreas Grothey and Robert Sarkissian.
The solver is an implementation of the primal-dual interior point method with multiple centrality correctors. The solver is implemented using object-oriented programming techniques. It solves linear (LP), quadratic (QP) and nonlinear (NLP) problems. The sequential code compares favourably with commercial packages and the parallel code shows perfect speed-ups.
More information about OOPS
pdNCG (primal-dual Newton Conjugate Gradients) is a MATLAB implementation for the solution of L1-regularized strongly convex problems. The solver requires only matrix-vector product operations and stores no explicit matrices, which makes it appropriate for large-scale instances.
More information about pdNCG
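One common way to make such problems amenable to Newton-type methods is to smooth the L1 term with the pseudo-Huber function and minimize with Newton-CG using only matvecs. The sketch below (illustrative Python via SciPy, not the authors' MATLAB code) applies this idea to L1-regularized least squares:

```python
import numpy as np
from scipy.optimize import minimize

def solve_l1_ls(A, b, tau=0.1, mu=1e-3):
    """Minimize tau * psi_mu(x) + 0.5 * ||A x - b||^2, where psi_mu is the
    pseudo-Huber smoothing of ||x||_1:
        psi_mu(x) = sum_i mu * (sqrt(1 + x_i^2 / mu^2) - 1).
    Solved with Newton-CG; the Hessian enters only through matvecs."""
    n = A.shape[1]

    def obj(x):
        s = np.sqrt(1.0 + (x / mu) ** 2)
        res = A @ x - b
        return tau * mu * np.sum(s - 1.0) + 0.5 * res @ res

    def grad(x):
        s = np.sqrt(1.0 + (x / mu) ** 2)
        return tau * x / (mu * s) + A.T @ (A @ x - b)

    def hessp(x, p):
        # Hessian-vector product: diagonal smoothing term plus A^T A p
        s = np.sqrt(1.0 + (x / mu) ** 2)
        return tau * p / (mu * s ** 3) + A.T @ (A @ p)

    return minimize(obj, np.zeros(n), jac=grad, hessp=hessp,
                    method="Newton-CG").x
```

Smaller `mu` tracks the L1 norm more closely but makes the smoothed problem more ill-conditioned, which is where good preconditioning of the inner CG solves matters.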
S2GD is an efficient implementation of Semi-Stochastic Gradient Descent for logistic regression by Jakub Konečný.
More information about S2GD
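The semi-stochastic idea: each outer epoch computes one full gradient at a snapshot, then runs cheap inner steps whose stochastic gradients are variance-corrected by that snapshot gradient. A minimal sketch for l2-regularized logistic regression (illustrative Python, not the S2GD package):

```python
import numpy as np

def s2gd_logistic(X, y, lam=0.1, h=0.1, epochs=10, m=None, seed=0):
    """Semi-stochastic gradient descent sketch for minimizing
    (1/n) sum_i log(1 + exp(-y_i x_i^T w)) + (lam/2) ||w||^2,
    with labels y_i in {-1, +1}."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    m = m or n
    w = np.zeros(d)

    def grad_i(w, i):
        # gradient of the i-th loss term plus regularization
        sig = 1.0 / (1.0 + np.exp(-y[i] * (X[i] @ w)))
        return (sig - 1.0) * y[i] * X[i] + lam * w

    for _ in range(epochs):
        snapshot = w.copy()
        sig = 1.0 / (1.0 + np.exp(-y * (X @ snapshot)))
        full_grad = ((sig - 1.0) * y) @ X / n + lam * snapshot
        # random inner loop length (S2GD draws this from a tailored
        # distribution; uniform here for simplicity)
        for _ in range(rng.integers(1, m + 1)):
            i = rng.integers(n)
            w -= h * (grad_i(w, i) - grad_i(snapshot, i) + full_grad)
    return w
```

Each inner step costs two single-sample gradients yet is an unbiased estimate of the full gradient, which is what gives the method its fast convergence per unit of work.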
SML (Structured Modelling Language) is an implementation of a structure-conveying extension to the AMPL modelling language.
SML extends AMPL with object-oriented features that allow models to be constructed from sub-models. Unlike traditional modelling languages, this approach does not scramble the block structure of the problem, enabling that structure to be passed on to the solver. Its design also allows the problem generation phase to be parallelised.
More information about SML
SVM-OOPS, developed by Kristian Woodsend and Jacek Gondzio, trains standard linear support vector machines. It uses the OOPS solver and incorporates SVM reformulations designed for interior point methods.
The algorithm is best suited for large-scale classification problems, where the number of samples greatly exceeds the number of features. It is highly efficient on multicore processors, solving problems involving tens or hundreds of thousands of support vectors. Larger problems can be solved on computing clusters using a combination of OpenMP and MPI. Our approach has the additional benefit of providing a good early approximation to the separating hyperplane.
More information about SVM-OOPS
trillion is a set of instance generators for l1-regularized over- and underdetermined least squares problems, written by Kimon Fountoulakis and Jacek Gondzio.
More information about trillion
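A generic generator of this kind draws a random sensing matrix and a sparse ground truth, then forms a noisy right-hand side. The sketch below is in the spirit of such generators but is not the authors' code:

```python
import numpy as np

def l1_ls_instance(m, n, k, noise=0.01, seed=0):
    """Random instance of min_x 0.5 ||A x - b||^2 + tau ||x||_1 with a
    k-sparse ground truth. Over- or underdetermined depending on m vs n.
    (Generic generator; not the trillion package.)"""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # scaled Gaussian matrix
    x_true = np.zeros(n)
    support = rng.choice(n, size=k, replace=False) # random sparse support
    x_true[support] = rng.standard_normal(k)
    b = A @ x_true + noise * rng.standard_normal(m)
    return A, b, x_true
```

Returning the ground-truth signal alongside the data lets one measure how well a solver recovers the true support.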