General information on HOPDM code


This is a short note on the HOPDM library of routines for solving large-scale linear programs with interior point methods. HOPDM stands for Higher Order Primal-Dual Method.

ALGORITHM

The code implements a primal-dual logarithmic barrier (interior point) method with multiple centrality correctors, i.e. the algorithm proposed in the following paper of Jacek Gondzio: "Multiple centrality corrections in a primal-dual method for linear programming", Computational Optimization and Applications 6 (1996), pp. 137-156. Additionally, the code offers several useful options, e.g. a sophisticated presolve routine and a fast sparsity-exploiting Cholesky decomposition, all of which are carefully documented in the references.

USE OF HOPDM

The simplest use of the library is as a stand-alone LP solver that reads the LP data (in the widely accepted MPS format), solves the problem and prints the output (in an MPS-like format). This, in fact, reflects the structure of the main routine (HMAIN2.F), which calls the following subroutines of the library:

RDSPEC reads the problem specifications. (Several example SPECS files are included, e.g. for the smallest Netlib problems: AFIRO, ADLITTLE, ...)

RDMPS1 reads the MPS-formatted LP data. This routine only reads the problem; it does nothing else to it. You can replace it with any other (presumably faster) routine that reads a different disc file or generates the LP data structures directly in the internal format of HOPDM. This internal format is simplified (at this point) to a collection of sparse columns of the matrix A, explicit LOWER and UPPER bounds, and explicit indication of the OBJECTIVE row and the RHS vector.

RDMPS2 transforms the LP data from the above format into the internal data structures used by the HOPDM library. (In contrast to the simplex method, interior point codes require convenient access to both the rows and the columns of A.) Here slack, surplus and artificial variables are added, FREE variables are split, etc.

PRESOL performs an advanced presolve analysis of the problem. In particular, it cleans the LP matrix:
 - determines (and later tightens) bounds on shadow prices,
 - eliminates dominated (and weakly dominated) variables,
 - eliminates singleton rows,
 - eliminates singleton columns (implied FREE variables),
 - finds identical columns and aggregates them,
 - finds hidden split FREE variables,
 - eliminates redundant (dominated or forcing) constraints,
 - tightens bounds on variables;
makes the LP matrix sparser:
 - pivots out some nonzero entries of A;
and makes the LP matrix better suited for the Cholesky factorization:
 - splits dense columns into shorter pieces.
You may comment out the call to the PRESOL routine, but doing so usually costs 10-30% of HOPDM's efficiency.

PREPRO performs the preprocessing for the Cholesky decomposition. In particular, it:
 - builds the adjacency structure of the A*Atranspose matrix,
 - finds an ordering that minimizes the number of nonzero entries in the Cholesky factor,
 - reorders the rows of A (and all data associated with rows, such as RANGES, the RHS, etc.) according to the permutation resulting from the minimum degree ordering,
 - prepares the data structures for the sparse Cholesky decomposition (i.e. it performs the symbolic factorization).
You MUST call this routine prior to calling the PCPDM solver.

SCALEA scales the LP constraint matrix. Simple geometric scaling is repeated twice on the matrix A. The RSCALE and CSCALE vectors hold the resulting row and column scaling factors, respectively. You DO NOT HAVE TO call this routine; what is necessary, however, is to initialize the scaling factors (to ones). In about 95% of cases scaling improves the numerical properties of the problem to be solved, so calling it is justified.
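To make the geometric scaling idea concrete, below is a minimal, self-contained Fortran sketch of one such sweep over a matrix stored as a collection of sparse columns (the internal format mentioned for RDMPS1 above). The column-pointer and row-index array names, the toy data and the exact rule (each factor is one over the square root of the product of the largest and smallest nonzero magnitude) are assumptions made for this illustration only; RSCALE and CSCALE merely echo the vectors named above, and the real SCALEA source, which repeats the sweep twice, may differ in detail.

      PROGRAM SCLDEM
C     Illustrative sketch only, NOT the actual SCALEA code: one sweep
C     of simple geometric scaling (factor = 1/SQRT(max*min) of the
C     nonzero magnitudes) on a matrix held as sparse columns.  It
C     assumes every row and column has at least one nonzero entry.
      INTEGER M, N, NZ
      PARAMETER (M = 2, N = 3, NZ = 4)
      INTEGER CLPNTS(N+1), ROWIDX(NZ), I, J, K
      DOUBLE PRECISION A(NZ), RSCALE(M), CSCALE(N)
      DOUBLE PRECISION RMAX(M), RMIN(M), BIG, SML, T
C     The 2 x 3 example matrix:   | 100.0    .     4.0 |
C                                 |   .     0.2    8.0 |
      DATA CLPNTS /1, 2, 3, 5/
      DATA ROWIDX /1, 2, 1, 2/
      DATA A      /100.0D0, 0.2D0, 4.0D0, 8.0D0/
C     Column pass: scale each column by 1/SQRT(max*min).
      DO 20 J = 1, N
         BIG = 0.0D0
         SML = 1.0D30
         DO 10 K = CLPNTS(J), CLPNTS(J+1) - 1
            T = ABS(A(K))
            IF (T .GT. BIG) BIG = T
            IF (T .LT. SML) SML = T
   10    CONTINUE
         CSCALE(J) = 1.0D0/SQRT(BIG*SML)
         DO 15 K = CLPNTS(J), CLPNTS(J+1) - 1
            A(K) = A(K)*CSCALE(J)
   15    CONTINUE
   20 CONTINUE
C     Row pass: gather per-row extremes of the column-scaled entries,
C     then apply the same rule row by row.
      DO 30 I = 1, M
         RMAX(I) = 0.0D0
         RMIN(I) = 1.0D30
   30 CONTINUE
      DO 40 J = 1, N
         DO 35 K = CLPNTS(J), CLPNTS(J+1) - 1
            I = ROWIDX(K)
            T = ABS(A(K))
            IF (T .GT. RMAX(I)) RMAX(I) = T
            IF (T .LT. RMIN(I)) RMIN(I) = T
   35    CONTINUE
   40 CONTINUE
      DO 50 I = 1, M
         RSCALE(I) = 1.0D0/SQRT(RMAX(I)*RMIN(I))
   50 CONTINUE
      DO 60 J = 1, N
         DO 55 K = CLPNTS(J), CLPNTS(J+1) - 1
            A(K) = A(K)*RSCALE(ROWIDX(K))
   55    CONTINUE
   60 CONTINUE
      WRITE(*,*) 'Column factors:', (CSCALE(J), J = 1, N)
      WRITE(*,*) 'Row factors:   ', (RSCALE(I), I = 1, M)
      WRITE(*,*) 'Scaled values: ', (A(K), K = 1, NZ)
      END

Run on the small matrix above, this prints factors that bring every scaled entry close to one in magnitude; SCALEA plays the same role for the full constraint matrix before PCPDM is called.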
PCPDM (Higher Order Primal-Dual Method) solves the LP problem. This routine implements a primal-dual algorithm with multiple centrality corrections.

UNSCLA unscales the LP problem. You HAVE TO call this routine even if you disabled scaling earlier.

POSTSL performs the post-solve analysis, i.e. it undoes all the problem modifications made by the PRESOL routine. You HAVE TO call this routine if you called PRESOL earlier. (Otherwise the solution is printed in a modified form, e.g. with bounds pushed, and may not really be readable.)

WRTSOL writes an MPS-like formatted solution.

Generally speaking, all routines of the HOPDM library are well-documented source files. To understand some of them in depth, however, you may find it necessary to consult the appropriate publications (a list is almost always supplied in the source code of each routine).

COMPILING THE PROGRAM

To compile and link the program, place all the source routines (*.f files) in the current directory and run the make command. An example 'makefile' for SUN SPARC workstations is included; you may have to modify it for your computer, in particular the name of the FORTRAN compiler, etc. In principle, HOPDM should be easy to port to any platform. You may, however, have problems with the routines that measure the elapsed time. If this is the case, please change the call in the MYTIME routine to one that is appropriate for your system (or add the two C routines written by David Gay and supplied in the FTIME.c source file).

RUNNING THE PROGRAM

To run the program you need at least two files in the current directory:
 - an MPS-formatted file with the LP problem data (default name 'mps'),
 - a SPECS file with basic information on the problem to be solved (default name 'spc').
You are supplied with the two smallest LP problems from the Netlib collection, AFIRO and ADLITTLE, and the SPECS files for them. The program ALWAYS reads specifications from the 'spc' file. To run it on the AFIRO problem you thus have to
 - copy afiro.spc onto spc,
 - start the hopdm solver.
The minimum 'spc' file has to contain at least the following lines:

begin
   mps file   afiro.mps
   log file   afiro.log
   solut file afiro.res
end

It may additionally contain limits on the numbers of rows, columns and nonzero entries; the direction of optimization (minimize or maximize); the names of sections in the MPS file, if there is more than one section of the same type; the required precision of the optimal solution; and the desired level of presolve analysis. Our SPECS file for AFIRO looks like:

begin
   rows 30
   cols 60
   elements 120
   MPS FILE afiro.mps
   log file afiro.log
   SOLUT FILE afiro.res
   rhs name B
   objective COST
   opt tol 1.0d-8
   presolve 6
   minimize
end

although it would suffice to leave only:

begin
   mps file   afiro.mps
   log file   afiro.log
   solut file afiro.res
end

since minimization (exact to 8 digits) and level 6 presolve are the default settings of HOPDM.

FILES CREATED BY HOPDM

HOPDM creates two files associated with a given problem. In the above example these files will be:
 - 'afiro.log', which contains all the log information on the process of solving the problem,
 - 'afiro.res', the solution of the problem in an MPS-like format.
You may change their names with the appropriate specifications in the 'spc' file. The current version of the code writes a relatively large amount of output to the '*.log' file, as this output usually helps to understand any difficulties that occur in the optimization process. You can skip some of it if you find it superfluous.

HANDLING LARGE SCALE PROBLEMS

There are no particular limits on the size of problems that can be solved with HOPDM. The user is only expected to adjust the parameters to the amount of virtual memory available on his or her computer. A few PARAMETER lines at the beginning of the HMAIN2.F routine offer hints on how this could be done for 32-, 64- and 128-MB computers. (A rough, purely illustrative memory estimate is sketched at the end of this note.)

The distributed version of the library uses INTEGER*2 arrays to store the row/column indices. This limits the size of problems that can be solved with the program: they cannot exceed 32767 rows/columns (the largest value a 16-bit signed integer can hold). If you want to solve larger problems, run the 'Two2four' script to replace all INTEGER*2 declarations with INTEGER*4 ones, change the INTSZ parameter in the HMAIN2.F routine accordingly (it should read INTSZ=4), increase the appropriate PARAMETERs, and recompile the whole program.

Finally, note that the MSPLIT array (in COMMON) is used to pass some information that is essential for the stability of the LDL^T decomposition. COMMON blocks require arrays to have a fixed size, hence MSPLIT is everywhere declared as INTEGER*2 MSPLIT(100000). This is an implicit bound on the size of the solvable problems: they cannot exceed 100,000 rows. If you want to go further, you have to carefully change the declaration of MSPLIT in all the places where it appears.

ACKNOWLEDGMENTS

The code benefits from the presence of the following two routines:
 - The MMD routine is Joseph Liu's implementation of the Multiple Minimum Degree ordering. See: "The evolution of the minimum degree ordering algorithm", SIAM Review 31 (1989), 1, pp. 1-19. NOTE: This routine can be used EXCLUSIVELY for research purposes.
 - The GENQMD routine is an implementation of the Quotient tree Minimum Degree ordering, available from SPARSPAK (via Netlib). This routine is based on the book "Computer Solution of Large Sparse Positive Definite Systems" by George and Liu, Prentice Hall, 1981.
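As referenced in the section on large scale problems above, the following is a rough, purely illustrative sketch of the memory arithmetic behind such PARAMETER settings. The parameter names (MAXM, MAXN, MAXNZ, IDXLEN), their values and the byte counts are assumptions made for this note only; the real parameters and data structures are those declared at the top of HMAIN2.F and will differ.

      PROGRAM MEMEST
C     Purely illustrative, NOT HOPDM code: a back-of-the-envelope
C     estimate of the storage implied by hypothetical problem-size
C     PARAMETERs, of the kind one adjusts for 32-, 64- or 128-MB
C     machines.
      INTEGER MAXM, MAXN, MAXNZ, IDXLEN
      PARAMETER (MAXM = 30000, MAXN = 60000, MAXNZ = 500000)
C     IDXLEN = 2 for INTEGER*2 indices, 4 after running Two2four.
      PARAMETER (IDXLEN = 2)
      DOUBLE PRECISION BYTES
C     8 bytes per DOUBLE PRECISION nonzero, IDXLEN bytes per index,
C     plus a few dense work vectors of length MAXM and MAXN.
      BYTES = 8.0D0*MAXNZ + DBLE(IDXLEN)*MAXNZ
     &      + 8.0D0*(MAXM + MAXN)
      WRITE(*,*) 'Rough storage estimate (MB):', BYTES/1.0D6
      END

Actual consumption also depends on the fill-in of the Cholesky factor, which is only known after the symbolic factorization in PREPRO, so leave a generous margin.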