## Abstracts

### Stephen Boyd (Stanford University)

**Talk title:** Operator Splitting for Conic Optimization via Homogeneous Self-Dual Embedding

**Abstract:** We introduce a first-order method for solving very large cone programs to modest accuracy. The method applies an operator splitting technique, the alternating direction method of multipliers, to the homogeneous self-dual embedding, an equivalent feasibility problem of finding a nonzero point in the intersection of a subspace and a cone. This approach has several favorable properties. Compared to interior-point methods, first-order methods scale to very large problems, at the cost of lower accuracy. Compared to other first-order methods for cone programs, our approach finds both primal and dual solutions when available, and certificates of infeasibility or unboundedness otherwise; it does not rely on any explicit algorithm parameters; and its per-iteration cost is the same as applying the splitting method to the primal or dual alone. We discuss efficient implementation of the method in detail, including direct and indirect methods for computing the projection onto the subspace, scaling the original problem data, and stopping criteria. We provide a reference implementation called SCS, which can solve large problems to modest accuracy quickly and is parallelizable across multiple processors. We conclude with numerical examples illustrating the efficacy of the technique; in particular, we demonstrate speedups of several orders of magnitude over state-of-the-art interior-point solvers.
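The geometric subproblem at the heart of the abstract, finding a point in the intersection of a subspace and a cone, can be sketched in a few lines. The toy below is not the SCS algorithm itself (which embeds the two projections inside an ADMM iteration on the self-dual embedding); it simply alternates the two building blocks, projection onto a subspace {x : Ax = 0} and projection onto the nonnegative cone, on invented data:

```python
import numpy as np

# Illustrative sketch only, not SCS: alternate projection onto a
# subspace {x : Ax = 0} with projection onto the cone R^n_+.
A = np.array([[1.0, -1.0, 0.0]])                    # subspace: x1 = x2
P = np.eye(3) - A.T @ np.linalg.solve(A @ A.T, A)   # orthogonal projector

x = np.array([3.0, 1.0, 2.0])                       # nonzero starting point
for _ in range(100):
    x = P @ x                 # project onto the subspace
    x = np.maximum(x, 0.0)    # project onto the nonnegative cone

# x now lies (to numerical accuracy) in the intersection and is nonzero
```

The direct and indirect methods mentioned in the abstract concern exactly the subspace projection step: it can be done by factoring once up front or, matrix-free, by an iterative solve.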

### Jacek Gondzio (University of Edinburgh)

**Talk title:** Inexact Search Directions and Matrix-Free Second-Order Methods for Optimization

**Abstract:** Many large-scale optimization problems cannot be solved with methods that rely on exact search directions obtained by factoring matrices. The optimization community seems to favour first-order methods for such problems because of their very low per-iteration cost. However, first-order methods (i) struggle to deliver accurate solutions and (ii) fail to converge when applied to nontrivial problems. I will argue that second-order methods (including interior point algorithms) which use inexact directions computed by iterative techniques and run in a matrix-free regime offer an attractive alternative. I will address the theoretical issue of how much inexactness is allowed in the directions, and support the findings with computational experience of solving some very large optimization problems.
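The two ingredients of the abstract, matrix-free operation and inexact directions, can be illustrated with a truncated conjugate gradient solve in which the Hessian is never formed, only applied to vectors. The model Hessian and gradient below are invented for the example; this is a sketch of the idea, not the author's code:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Matrix-free: the Hessian exists only as a matrix-vector product.
n = 200
d = np.linspace(1.0, 10.0, n)          # spectrum of a model Hessian

def hess_vec(v):
    return d * v                        # H @ v without ever storing H

H = LinearOperator((n, n), matvec=hess_vec)
g = np.ones(n)                          # model gradient

# Inexact direction: stop CG after a few iterations instead of
# solving H p = -g to full accuracy.
p, info = cg(H, -g, maxiter=10)

residual = np.linalg.norm(hess_vec(p) + g)   # how inexact the direction is
```

The theoretical question raised in the abstract is precisely how large this residual may be, relative to the gradient, while the outer second-order method still converges.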

### Julian Hall (University of Edinburgh)

**Talk title:** Parallelising the Dual Revised Simplex Method

**Abstract:** Relaxations of MIP problems and many classes of continuous LP problems are generally solved using the dual revised simplex method. Despite its potential value, there has been relatively little research into implementations of the revised simplex method that exploit parallel computing environments, and hitherto these efforts have met with very limited success, in particular for general large sparse LP problems. This talk will discuss three recent approaches. The first two exploit data and task parallelism when solving general LP problems and achieve meaningful performance improvements relative to Clp, the leading open-source serial simplex solver. The third approach exploits the inherent data parallelism resulting from the structure of stochastic LP problems. Results obtained on a large cluster and a supercomputer using a distributed-memory implementation are presented for stochastic LP problems with up to 10^8 variables and constraints. These achieve parallel efficiency when using up to 128 cores and run up to 100 times faster than Clp.
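For readers who want to try the dual revised simplex method on a small instance, SciPy exposes the open-source HiGHS solver (associated with this line of research) and `method="highs-ds"` selects its dual simplex; the toy LP below is invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP: minimize -x - 2y  subject to  x + y <= 4,  x <= 3,  x, y >= 0.
# method="highs-ds" requests HiGHS's dual revised simplex implementation.
res = linprog(c=[-1, -2],
              A_ub=[[1, 1], [1, 0]],
              b_ub=[4, 3],
              bounds=[(0, None), (0, None)],
              method="highs-ds")

# Optimal solution puts everything into y: (x, y) = (0, 4), objective -8.
```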

### Raphael Hauser (University of Oxford)

**Talk title:** The Role of Convex Optimization in Optimal Alignments of Random Sequences

**Abstract:** In genetics, natural language processing, financial data cleaning and various other applications one faces the problem of distinguishing homologous pairs of sequences (with symbols from a given finite alphabet) from non-homologous pairs. The standard approach to quantifying homology is to formulate an optimization problem that can be solved via dynamic programming. The optimal alignment score then has to compare favourably against a null model for a sequence pair to count as homologous with high confidence. In this talk we focus on such null models and the problem of characterising the asymptotic stochastic properties of optimal alignments of random sequences. We will see that with the help of large-deviations theory, many of these stochastic problems can be reduced to problems of convex geometry, and that the latter can be answered with high confidence by probabilistic algorithms that combine simulation and convex optimization.
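The dynamic program behind the optimal alignment score is the classical Needleman-Wunsch recursion; a minimal sketch with illustrative match/mismatch/gap scores (the particular scoring values are assumptions for the example, not taken from the talk):

```python
def alignment_score(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score of strings a and b by dynamic programming."""
    m, n = len(a), len(b)
    S = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        S[i][0] = i * gap               # align a prefix of a against gaps
    for j in range(1, n + 1):
        S[0][j] = j * gap               # align a prefix of b against gaps
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            S[i][j] = max(S[i - 1][j - 1] + sub,   # substitute/match
                          S[i - 1][j] + gap,       # gap in b
                          S[i][j - 1] + gap)       # gap in a
    return S[m][n]
```

For a null model, one draws both sequences uniformly at random and studies the distribution of this score, which is where the large-deviations and convex-geometry machinery of the talk enters.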

### Daniel Kuhn (École Polytechnique Fédérale de Lausanne)

**Talk title:** Generalized Gauss Inequalities via Semidefinite Programming

**Abstract:** A sharp upper bound on the probability of a random vector falling outside a polytope, based solely on the first and second moments of its distribution, can be computed efficiently using semidefinite programming. However, this Chebyshev-type bound tends to be overly conservative since it is determined by a discrete worst-case distribution. In this talk we obtain a less pessimistic Gauss-type bound by imposing the additional requirement that the random vector's distribution must be unimodal. We prove that this generalized Gauss bound still admits an exact and tractable semidefinite representation. Moreover, we demonstrate that both the Chebyshev and Gauss bounds can be obtained within a unified framework using a generalized notion of unimodality. We also offer new perspectives on the computational solution of generalized moment problems, since we use concepts from Choquet theory instead of traditional duality arguments to derive semidefinite representations for worst-case probability bounds.
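The gap the abstract exploits is already visible in one dimension, where no SDP is needed: the classical Chebyshev bound 1/k^2 drops to the Gauss bound 4/(9k^2) once unimodality is assumed (valid for k >= 2/sqrt(3)). The talk's contribution is the multivariate generalization via semidefinite programming; the sketch below only compares the scalar classics:

```python
import math

def chebyshev_bound(k):
    """P(|X - mu| >= k*sigma) <= 1/k^2, any distribution with finite variance."""
    return 1.0 / k**2

def gauss_bound(k):
    """P(|X - m| >= k*tau) <= 4/(9 k^2) for unimodal X with mode m,
    tau^2 = E[(X - m)^2]; valid for k >= 2/sqrt(3)."""
    assert k >= 2.0 / math.sqrt(3.0)
    return 4.0 / (9.0 * k**2)

# Unimodality cuts the bound by the constant factor 4/9 at every valid k.
```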

### Daniel Ralph (University of Cambridge)

**Talk title:** Capacity Decisions in Electricity Production under Risk Aversion and Risk Trading

**Abstract:** Risk-neutral optimization models are standard for exploring how changes in production capacity are made over time in a stochastic economic equilibrium setting. However, investors in electricity (and other) manufacturing plants are risk averse, given long payback periods and risks (e.g., regulation penalising or pricing CO2 emissions) that cannot be hedged in financial markets. We use a two-stage model (invest in plants and financial products today, produce electricity in a stochastic market tomorrow) to illustrate how risk aversion and risk markets can be incorporated into economic equilibrium models for long-term planning. The polar opposite cases of complete risk markets and no risk trading are used to give a "corridor" of outcomes. We conclude with a multistage example of this approach.
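The two-stage structure, and the "corridor" between the risk-neutral and fully risk-averse planners, can be sketched with a toy single-plant model; all numbers below (scenario prices, probabilities, costs) are invented for illustration and are not from the talk:

```python
import numpy as np

# Stage 1: choose capacity today.  Stage 2: produce under a random price.
prices = np.array([20.0, 50.0, 80.0])   # scenario electricity prices
probs = np.array([0.3, 0.4, 0.3])       # scenario probabilities
inv_cost, prod_cost = 30.0, 10.0        # per-unit capacity / production cost

def scenario_profits(cap):
    # Produce at full capacity whenever the price covers marginal cost.
    run = prices > prod_cost
    return run * cap * (prices - prod_cost) - inv_cost * cap

caps = np.linspace(0.0, 10.0, 101)
neutral = caps[np.argmax([probs @ scenario_profits(c) for c in caps])]
averse = caps[np.argmax([scenario_profits(c).min() for c in caps])]

# Risk-neutral investment (expected profit) exceeds the worst-case-optimal
# (fully risk-averse) investment; real outcomes lie in between.
```

Complete risk markets would let the investor trade away scenario exposure and recover the risk-neutral decision, which is why the two polar cases bracket the corridor mentioned in the abstract.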