Dr John Pearson (University of Kent, UK)
Talk title: Some perspectives on preconditioning for PDE-constrained optimization
Abstract: PDE-constrained optimization problems have a wide range of applications across numerical mathematics and applied science, so it is important to develop fast and feasible methods to solve such problems. We employ preconditioned iterative methods to tackle the large and sparse matrix systems that arise from their discretization, and consider a range of issues related to these solvers. In particular, we discuss applications to nonlinear problems in mathematical biology and fluid dynamics, approaches for problems involving fractional derivatives, and deferred correction methods for time-dependent problems.
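The solvers referred to above follow a common pattern: a Krylov method applied to the discretized system, accelerated by a preconditioner. As a generic illustration (a sketch, not code from the talk), here is a minimal preconditioned conjugate gradient iteration in pure Python, applied to a small 1D Laplacian with a Jacobi (diagonal) preconditioner; the matrix, right-hand side, and preconditioner are all illustrative choices.

```python
def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for a dense SPD matrix A.
    M_inv_diag holds the inverse diagonal of the Jacobi preconditioner."""
    n = len(b)
    x = [0.0] * n
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    r = [b[i] - Av_i for i, Av_i in enumerate(matvec(x))]
    z = [M_inv_diag[i] * r[i] for i in range(n)]     # preconditioned residual
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [M_inv_diag[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# 1D Laplacian (tridiagonal, SPD) with a right-hand side of ones.
n = 10
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0 for j in range(n)]
     for i in range(n)]
b = [1.0] * n
x = pcg(A, b, M_inv_diag=[1.0 / A[i][i] for i in range(n)])

# Verify the computed solution: residual should be tiny.
res = max(abs(sum(A[i][j] * x[j] for j in range(n)) - b[i]) for i in range(n))
print(res < 1e-8)  # True
```

For the PDE-constrained problems in the talk the matrices are far larger and block-structured, and the preconditioner is correspondingly more elaborate, but the outer Krylov loop has this shape.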
Dr Jennifer Pestana (University of Strathclyde, UK)
Talk title: Null space preconditioners for saddle point problems
Abstract: Linear systems with saddle point structure arise throughout constrained optimisation. When the system is large and sparse it is typically solved by an iterative method. However, these methods are only efficient if they find a good approximation to the solution of the linear system in a few iterations. In many saddle point problems, fast convergence occurs only after preconditioning, i.e. after transforming the linear system to an equivalent one with better properties. Here, we present a family of nullspace preconditioners for saddle point problems and analyse their effectiveness for constrained optimisation problems.
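To make the setting concrete, here is a toy saddle point system and the null-space idea in pure Python: for an equality-constrained quadratic programme, the constraint Bx = g is eliminated by writing x = x_p + Zv, where the columns of Z span the null space of B, leaving a smaller reduced system in v. The numbers below are illustrative; null-space *preconditioners* use (approximations of) this reduction to precondition an iterative method on the full saddle point system rather than solving the reduced system exactly.

```python
# Toy equality-constrained QP: minimise 0.5 x^T A x - b^T x  subject to  B x = g.
# Its KKT (saddle point) system is [[A, B^T], [B, 0]] [x; lam] = [b; g].
A = [[2.0, 0.0],
     [0.0, 3.0]]
b = [1.0, 1.0]
B = [1.0, 1.0]   # single constraint: x1 + x2 = g
g = 1.0

# Null-space method: x = x_p + Z v with B x_p = g and B Z = 0.
x_p = [0.5, 0.5]   # any particular solution of B x = g
Z = [1.0, -1.0]    # basis for the (here one-dimensional) null space of B

# Reduced system: (Z^T A Z) v = Z^T (b - A x_p).
Ax_p = [sum(A[i][j] * x_p[j] for j in range(2)) for i in range(2)]
rhs = sum(Z[i] * (b[i] - Ax_p[i]) for i in range(2))
ZtAZ = sum(Z[i] * A[i][j] * Z[j] for i in range(2) for j in range(2))
v = rhs / ZtAZ
x = [x_p[i] + Z[i] * v for i in range(2)]

print(x)  # close to [0.6, 0.4], and B x = g holds by construction
```

Checking against the KKT conditions by hand (A x - b + B^T lam = 0 with lam = 1 - 2 x_1) confirms x = (0.6, 0.4) is the constrained minimiser.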
Dr Margherita Porcelli (University of Florence, Italy)
Talk title: Preconditioning semismooth Newton methods for optimal control problems with L^1-sparsity and control constraints
Abstract: PDE-constrained optimization aims at finding optimal setups for partial differential equations so that relevant quantities are minimized. Including sparsity-promoting terms in the formulation of such problems results in more practically relevant computed controls, but adds further challenges to the numerical solution of these problems. The required L^1-terms, as well as the additional inclusion of box control constraints, call for semismooth Newton methods. We propose robust preconditioners for different formulations of the Newton equation. With the inclusion of a line-search strategy and an inexact approach for the solution of the linear systems, the resulting semismooth Newton method is reliable for practical problems. We present results on the theoretical analysis of the preconditioned matrix and numerical experiments that illustrate the robustness of the proposed scheme.
This is joint work with Valeria Simoncini and Martin Stoll.
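The role of the semismooth reformulation can be seen on a small example. The following pure-Python sketch (with illustrative data, not taken from the talk) applies a semismooth Newton iteration, in the form of the primal-dual active set method, to a tiny bound-constrained quadratic programme: the complementarity condition is rewritten with a max function as mu = max(0, mu + c(u - ub)), and each Newton step fixes the predicted active variables at their bound and solves a reduced linear system for the rest.

```python
def solve(M, r):
    """Tiny dense linear solve (Gaussian elimination with partial pivoting)."""
    n = len(r)
    M = [row[:] for row in M]
    r = r[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p], r[k], r[p] = M[p], M[k], r[p], r[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
            r[i] -= f * r[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (r[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def primal_dual_active_set(A, b, ub, c=1.0, max_iter=20):
    """Semismooth Newton for:  min 0.5 u^T A u - b^T u  subject to  u <= ub.
    KKT conditions: A u - b + mu = 0 and mu = max(0, mu + c*(u - ub))."""
    n = len(b)
    u = [0.0] * n
    mu = [0.0] * n
    for _ in range(max_iter):
        active = [i for i in range(n) if mu[i] + c * (u[i] - ub[i]) > 0]
        inactive = [i for i in range(n) if i not in active]
        # On the active set u is fixed at the bound and mu is unknown;
        # elsewhere mu = 0 and u solves the reduced equations.
        u_new = [0.0] * n
        for i in active:
            u_new[i] = ub[i]
        if inactive:
            M = [[A[i][j] for j in inactive] for i in inactive]
            r = [b[i] - sum(A[i][j] * ub[j] for j in active) for i in inactive]
            for i, val in zip(inactive, solve(M, r)):
                u_new[i] = val
        mu = [b[i] - sum(A[i][j] * u_new[j] for j in range(n)) if i in active
              else 0.0 for i in range(n)]
        if u_new == u:   # active set has stabilised
            break
        u = u_new
    return u, mu

# Illustrative data: second bound inactive, first bound active at the solution.
u, mu = primal_dual_active_set(A=[[2.0, 0.5], [0.5, 2.0]],
                               b=[3.0, 1.0], ub=[1.0, 1.0])
print(u, mu)  # u = [1.0, 0.25], mu = [0.875, 0.0]
```

For discretized optimal control problems the reduced linear systems are large, and that inner solve is where the preconditioners proposed in the abstract come in.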
Dr Tyrone Rees (STFC Rutherford Appleton Laboratory, UK)
Talk title: Approximate solves in the interior point method for linear programming
Abstract: The most expensive step at each iteration of an interior point algorithm is a linear system solve. The more variables we wish to optimize over, the bigger this linear system, and at some point it becomes infeasible to use a direct method. But how accurate must this solve be to guarantee convergence of the interior point method at a reasonable rate? The answers found in the literature are too tight, in the sense that convergence at a similar rate is observed even when a looser tolerance is applied. Here I will present conditions for convergence to occur at the same rate as with a direct solver. This requires that the error (measured in a certain norm) is proportional to the square root of the duality measure -- a critical parameter in interior point methods; existing analyses require solving to an accuracy proportional to this parameter. Further, I will show that the norm in which we require the error to be small is strongly related to the norm in which a number of traditional Krylov subspace methods (with certain preconditioners) converge. I will present numerical results showing that the framework developed describes the convergence behaviour of the interior point method in the presence of inexactness.
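To make the quantities in the abstract concrete, here is a minimal primal-dual path-following interior point method for a toy LP in pure Python, tracking the duality measure mu = x^T s / n. The problem data, centering parameter, and step rule are all illustrative assumptions; the Newton system is solved directly here, and the talk's question is precisely how accurately that solve must be done (e.g. by a preconditioned Krylov method, with error of order sqrt(mu) in a suitable norm) for the outer iteration to keep this convergence rate.

```python
def solve(M, r):
    """Dense Gaussian elimination with partial pivoting (fine at toy scale)."""
    n = len(r)
    M = [row[:] for row in M]
    r = r[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p], r[k], r[p] = M[p], M[k], r[p], r[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
            r[i] -= f * r[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (r[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Toy LP: min c^T x  subject to  Ax = b, x >= 0.  Optimum at x = (1, 0).
c = [1.0, 2.0]
A = [[1.0, 1.0]]
b = [1.0]
n, m = 2, 1

x, lam, s = [0.5, 0.5], [0.0], [1.0, 1.0]   # strictly feasible interior start
sigma = 0.1                                  # centering parameter
for it in range(60):
    mu = sum(x[i] * s[i] for i in range(n)) / n   # duality measure
    if mu < 1e-9:
        break
    # Assemble the full Newton system in (dx, dlam, ds).
    N = n + m + n
    J = [[0.0] * N for _ in range(N)]
    rhs = [0.0] * N
    for i in range(n):            # dual feasibility: A^T dlam + ds = -r_d
        for k in range(m):
            J[i][n + k] = A[k][i]
        J[i][n + m + i] = 1.0
        rhs[i] = -(sum(A[k][i] * lam[k] for k in range(m)) + s[i] - c[i])
    for k in range(m):            # primal feasibility: A dx = -r_p
        for i in range(n):
            J[n + k][i] = A[k][i]
        rhs[n + k] = -(sum(A[k][i] * x[i] for i in range(n)) - b[k])
    for i in range(n):            # complementarity: S dx + X ds = -(XSe - sigma mu e)
        J[n + m + i][i] = s[i]
        J[n + m + i][n + m + i] = x[i]
        rhs[n + m + i] = -(x[i] * s[i] - sigma * mu)
    # The expensive step: in the inexact setting this direct solve is replaced
    # by an iterative method run to a tolerance tied to mu.
    d = solve(J, rhs)
    dx, dlam, ds = d[:n], d[n:n + m], d[n + m:]
    # Fraction-to-the-boundary step to keep x and s strictly positive.
    alpha = 1.0
    for i in range(n):
        if dx[i] < 0:
            alpha = min(alpha, -0.99 * x[i] / dx[i])
        if ds[i] < 0:
            alpha = min(alpha, -0.99 * s[i] / ds[i])
    x = [x[i] + alpha * dx[i] for i in range(n)]
    lam = [lam[k] + alpha * dlam[k] for k in range(m)]
    s = [s[i] + alpha * ds[i] for i in range(n)]

print(x, mu)
```

On this toy problem the iteration drives mu below the 1e-9 threshold in well under 60 steps, with x approaching (1, 0); the analysis in the talk concerns how much solve error per step can be tolerated without slowing this decrease of mu.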