A new approach to nonlinear programming (NLP) is described, which differs from traditional SQP and Interior Point algorithms. At each iteration, the method solves for the Newton step of an equality-constrained QP that linearises the equation constraints but omits the bounds and inequalities. In contrast, SQP methods normally solve a full QP to determine the active set. In the new method, inequalities must be expressed as bounds, with slack variables introduced where necessary. A procedure we call "Variable Chopping", which we have applied successfully to nonlinear algebraic equation (NLAE) systems, is used to limit the step in individual variables so that the bounds are satisfied.
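The component-wise step limiting could be sketched as follows (an illustrative sketch only; the function name `chop_step` and the NumPy formulation are ours, not the authors'):

```python
import numpy as np

def chop_step(x, s, lower, upper):
    """Truncate each component of the Newton step s so that
    x + s_chopped stays within [lower, upper] component-wise.
    Unlike a single line-search scale factor applied to the whole
    step, each variable is chopped individually, so the step
    direction itself may change."""
    s_chopped = s.copy()
    over = x + s > upper    # components that would exceed upper bounds
    under = x + s < lower   # components that would violate lower bounds
    s_chopped[over] = upper[over] - x[over]
    s_chopped[under] = lower[under] - x[under]
    return s_chopped
```

As noted in the abstract, chopping only the offending components (rather than scaling the whole step) preserves progress in the free variables at the cost of changing the search direction.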
To imitate the effect of rigidly constraining variables at active bounds, a parameter p_j is added to the Hessian diagonal H_jj when the bound on x_j becomes active. This parameter is ultimately set to a very large value so as to generate tiny, infeasible Newton steps in the bounded variables, which are then chopped. However, p_j is set to a modest value for newly activated bounds (typically 1 when a variable first approaches a bound) and is gradually increased if the bound remains active. This greatly reduces the extent to which the QP matrix must be repivoted by the sparse linear equation solver we have developed. The linear solver also allows partial solutions of singular systems; this is necessary because, if dependent bounds are persistently active with large p_j values, an equation relating the bounded variables becomes unpivotable. Dependent bounds are removed only when a converged stationary point is reached. The multiplier for an active bound on variable x_j is simply (depending on the sign convention used in defining the Lagrangian)

    µ_j = p_j s_N,j

where s_N is the (infeasible) Newton step before chopping.
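The penalty-on-diagonal device and the resulting multiplier estimate can be illustrated in a few lines (a minimal sketch: the geometric growth factor and the cap on p_j are our assumptions; only the initial value of 1 comes from the text):

```python
import numpy as np

def apply_bound_penalties(H, active, p):
    """Add the penalty p_j to the Hessian diagonal H_jj for each
    variable j whose bound is active, so that the Newton step in
    x_j becomes tiny (and is then chopped)."""
    H = H.copy()
    H[np.diag_indices_from(H)] += np.where(active, p, 0.0)
    return H

def update_penalties(p, active, growth=10.0, p_init=1.0, p_max=1e8):
    """Start p_j modestly when a bound first activates, then grow it
    while the bound remains active; reset it when the bound frees.
    The growth schedule here is assumed, not taken from the paper."""
    return np.where(active,
                    np.where(p > 0, np.minimum(p * growth, p_max), p_init),
                    0.0)

def bound_multipliers(p, s_newton):
    """Multiplier estimate mu_j = p_j * s_N,j from the infeasible
    Newton step before chopping (sign per the Lagrangian convention)."""
    return p * s_newton
```

Because p_j enters only the diagonal, growing it in place avoids the wholesale repivoting that dropping rows and columns for fixed variables would force on the sparse factorisation.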
Downhill progress in the objective is promoted by further regularisation of the whole Hessian when negative curvature of the Lagrangian is detected along the Newton step. Regularisation is also required when the Hessian is singular, which often occurs at an initial guess, and it is relaxed when curvature estimates permit. This regularisation also allows us to solve LPs. The code allows steps to be taken away from non-minimum stationary points, along a null-space direction of negative curvature. Future work will include more accurate analysis of the negative eigenvalues of the reduced Hessian at such points.
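One standard way to realise such regularisation is to shift the Hessian by a multiple of the identity until a Cholesky factorisation succeeds (a sketch of the general technique; the trial values and growth factor are our assumptions, and the authors' actual relaxation rule is not reproduced here):

```python
import numpy as np

def regularise_hessian(H, delta0=1e-4, growth=10.0, max_tries=30):
    """Return H + delta*I with the smallest delta in a geometric
    trial sequence that makes the matrix positive definite, as
    verified by a successful Cholesky factorisation. This both
    repairs singular Hessians (common at an initial guess) and
    removes directions of negative curvature."""
    n = H.shape[0]
    try:
        np.linalg.cholesky(H)
        return H, 0.0           # already positive definite
    except np.linalg.LinAlgError:
        pass
    delta = delta0
    for _ in range(max_tries):
        try:
            H_reg = H + delta * np.eye(n)
            np.linalg.cholesky(H_reg)
            return H_reg, delta
        except np.linalg.LinAlgError:
            delta *= growth     # shift was too small; increase it
    raise RuntimeError("could not regularise Hessian")
```

For an LP the Lagrangian Hessian is identically zero, so the shift delta*I is what makes the Newton system solvable at all, consistent with the remark that regularisation lets the code handle LPs.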
Variable chopping generally alters the step direction, so there is no guarantee of residual-norm reduction even at small fractions of the Newton step. A tolerant policy towards the residual norm is thus adopted. Termination is ensured by a watchdog approach: if a new best value of the norm is not obtained within a set number of iterations, the solution returns to the best point found so far and proceeds with a reduced step size, controlled by a trust region. Progressively reducing the watch-iterations parameter, eventually to 1, at each such "backtrack" ensures that the solution either converges to a stationary point or stalls (with a converged step but unconverged residuals). The code contains options for various types of restart after a stall, under user guidance.
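The watchdog/backtrack logic described above might look like this in outline (a schematic sketch: the step computation is passed in as a callback, and the halving factor, stall threshold, and initial watch count are placeholders of ours):

```python
import numpy as np

def watchdog_solve(x0, residual_norm, take_step, watch_init=5,
                   max_iter=200, tol=1e-8, radius0=1.0):
    """Watchdog globalisation: accept steps tolerantly, but if no new
    best residual norm appears within `watch` iterations, return to
    the best point, shrink the trust region, and reduce `watch`
    (eventually to 1). Stops at convergence or when the trust
    region collapses (a stall)."""
    x = np.asarray(x0, dtype=float).copy()
    best_x, best_norm = x.copy(), residual_norm(x)
    watch, radius, since_best = watch_init, radius0, 0
    for _ in range(max_iter):
        if best_norm < tol:
            return best_x, "converged"
        x = take_step(x, radius)      # chopped Newton step, trust-limited
        norm = residual_norm(x)
        if norm < best_norm:          # new best: reset the watchdog
            best_x, best_norm, since_best = x.copy(), norm, 0
        else:
            since_best += 1
            if since_best >= watch:   # backtrack to the best point
                x = best_x.copy()
                radius *= 0.5
                watch = max(1, watch - 1)
                since_best = 0
                if radius < 1e-14:
                    return best_x, "stalled"
    return best_x, "max_iter"
```

With `watch` driven down to 1, every non-improving step forces an immediate backtrack and trust-region shrink, which is what guarantees that the iteration terminates either at a stationary point or at a declared stall.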
Development and testing have so far focused on two problems: a small but nonconvex calculus problem, and a heat exchanger (HX) network containing up to 12 exchangers. Interesting solutions include those in which some exchangers are absent, which requires a highly nonconvex approximation of each exchanger's cost function as we attempt to solve what is really an MINLP. The 12-HX problem has 188 variables and up to 9 degrees of freedom. These examples will be presented and performance discussed.