Conferences

MAC-MIGS afternoon: Mathematical Research Topics in Machine Learning

Organizers: Benedict Leimkuhler and Tiffany Vlaar
Speakers: Arnulf Jentzen (University of Münster), Gabriel Stoltz (CERMICS, Ecole des Ponts, Paris), Eric Vanden-Eijnden (New York University), and Amos Storkey (University of Edinburgh).

On 3 June 2021 we hosted an afternoon on mathematical research topics in machine learning with talks by Arnulf Jentzen, Gabriel Stoltz, Eric Vanden-Eijnden, and Amos Storkey.

The event was organised under the auspices of MAC-MIGS, an EPSRC Centre for Doctoral Training based in Edinburgh, Scotland (https://www.mac-migs.ac.uk).

The recordings of the talks can be found here:

* Arnulf Jentzen
* Gabriel Stoltz
* Amos Storkey
* Eric Vanden-Eijnden

The titles and abstracts of the talks are shown below.

We would like to thank everyone who attended for their active engagement in the discussion sessions.

## Titles and Abstracts

### Arnulf Jentzen
Overcoming the Curse of Dimensionality: From Nonlinear Monte Carlo to Deep Learning
Partial differential equations (PDEs) are among the most universal tools used in modelling problems in nature and man-made complex systems. For example, stochastic PDEs are a fundamental ingredient in models for nonlinear filtering problems in chemical engineering and weather forecasting, deterministic Schroedinger PDEs describe the wave function in a quantum physical system, deterministic Hamilton-Jacobi-Bellman PDEs are employed in operations research to describe optimal control problems in which companies aim to minimise their costs, and deterministic Black-Scholes-type PDEs are widely employed in portfolio optimization models as well as in state-of-the-art pricing and hedging models for financial derivatives. The PDEs appearing in such models are often high-dimensional, as the number of dimensions roughly corresponds to the number of interacting substances, particles, resources, agents, or assets in the model. For instance, in the financial engineering models mentioned above, the dimensionality of the PDE often corresponds to the number of financial assets in the hedging portfolio.

Such PDEs can typically not be solved explicitly, and it is one of the most challenging tasks in applied mathematics to develop approximation algorithms that can approximately compute solutions of high-dimensional PDEs. Nearly all approximation algorithms for PDEs in the literature suffer from the so-called “curse of dimensionality”, in the sense that the number of computational operations the algorithm requires to achieve a given approximation accuracy grows exponentially in the dimension of the considered PDE. With such algorithms it is impossible to approximately compute solutions of high-dimensional PDEs even when the fastest currently available computers are used.

In the case of linear parabolic PDEs and approximations at a fixed space-time point, the curse of dimensionality can be overcome by means of Monte Carlo approximation algorithms and the Feynman-Kac formula. In this talk we prove that suitable deep neural network approximations do indeed overcome the curse of dimensionality in the case of a general class of semilinear parabolic PDEs, and we thereby prove, for the first time, that a general semilinear parabolic PDE can be solved approximately without the curse of dimensionality.
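To make the Monte Carlo route concrete, here is a minimal sketch (illustrative only, not from the talk: the heat-equation setting, the initial condition `g`, and the helper name `mc_heat_solution` are assumptions) of estimating the solution of a d-dimensional linear parabolic PDE at a single space-time point via the Feynman-Kac representation, where the cost grows only linearly in the dimension.

```python
# Illustrative sketch, assuming the d-dimensional heat equation
#   du/dt = (1/2) * Laplacian(u),  u(0, x) = g(x),
# as a simple instance of a linear parabolic PDE. The Feynman-Kac formula
# gives u(t, x) = E[g(x + W_t)] for a Brownian motion W, so u(t, x) can be
# estimated at a single point by plain averaging; the cost scales linearly
# in d rather than exponentially.
import numpy as np

def mc_heat_solution(g, x, t, n_samples=100_000, seed=None):
    """Monte Carlo estimate of u(t, x) for the d-dimensional heat equation."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Endpoints x + W_t of Brownian paths started at x, with W_t ~ N(0, t*I).
    endpoints = x + np.sqrt(t) * rng.standard_normal((n_samples, d))
    return g(endpoints).mean()

# Example in d = 100 with an arbitrary initial condition.
d = 100
g = lambda y: np.exp(-np.sum(y**2, axis=1) / (2 * d))
print(mc_heat_solution(g, np.zeros(d), t=1.0, seed=0))
```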

### Gabriel Stoltz
Machine Learning for Coarse-graining Molecular Systems
A coarse-grained description of atomistic systems in molecular dynamics is provided by reaction coordinates. These nonlinear functions of the atomic positions are a basic ingredient for computing average properties of the system of interest, such as free energy profiles, more efficiently. However, reaction coordinates are often based on an intuitive understanding of the system, and one would like to complement this intuition, or even replace it, with automated tools. One appealing tool is the autoencoder, whose bottleneck layer provides a low-dimensional representation of high-dimensional atomistic systems. I will discuss some mathematical foundations of this method and present illustrative applications, including alanine dipeptide. Some ongoing extensions to more demanding systems, namely HSP90, will also be hinted at.
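To make the bottleneck idea concrete, here is a minimal PyTorch sketch (illustrative assumptions throughout: the layer sizes, the use of flattened Cartesian coordinates as inputs, and the random stand-in data are not from the talk) of an autoencoder whose two-dimensional bottleneck plays the role of a learned reaction coordinate.

```python
# Illustrative autoencoder sketch: the encoder compresses atomic
# configurations to a low-dimensional bottleneck, which can serve as a
# candidate reaction coordinate. Sizes and data are placeholder assumptions.
import torch
import torch.nn as nn

n_atoms, bottleneck_dim = 22, 2   # alanine dipeptide has 22 atoms
input_dim = 3 * n_atoms           # flattened Cartesian coordinates

encoder = nn.Sequential(
    nn.Linear(input_dim, 64), nn.Tanh(),
    nn.Linear(64, bottleneck_dim),   # bottleneck = learned coarse variables
)
decoder = nn.Sequential(
    nn.Linear(bottleneck_dim, 64), nn.Tanh(),
    nn.Linear(64, input_dim),
)
autoencoder = nn.Sequential(encoder, decoder)

# Train to reconstruct configurations; random data stands in for samples
# from a molecular dynamics trajectory.
configs = torch.randn(1024, input_dim)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(autoencoder(configs), configs)
    loss.backward()
    opt.step()

# After training, encoder(configs) gives the low-dimensional representation.
```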

### Eric Vanden-Eijnden
Promises and Challenges of Machine Learning in Scientific Computing
The recent success of machine learning suggests that neural networks may be capable of approximating high-dimensional functions with controllably small errors. As a result, they could outperform the standard function interpolation methods that have been the workhorses of current numerical methods. This feat offers exciting prospects for scientific computing, as it may allow us to solve problems in high dimension once thought intractable. At the same time, looking at the tools of machine learning through the lens of applied mathematics and numerical analysis can give new insights as to why and when neural networks can beat the curse of dimensionality. I will briefly discuss these issues and present some applications related to solving PDEs in high dimension and sampling high-dimensional probability distributions.
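As a toy illustration of the contrast between grid-based interpolation and neural network approximation (the target function, network size, and training setup below are arbitrary assumptions, not from the talk): a tensor-product grid with 10 nodes per axis in dimension d = 10 would already require 10^10 points, whereas a small network can be trained from randomly sampled points, with a parameter count that need not grow exponentially in d.

```python
# Toy sketch: fit a small network to a 10-dimensional function from random
# samples; no grid of points is ever built. Target and sizes are arbitrary.
import torch
import torch.nn as nn

d = 10
f = lambda x: torch.sin(x.sum(dim=1, keepdim=True))  # arbitrary target

net = nn.Sequential(nn.Linear(d, 128), nn.Tanh(), nn.Linear(128, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):
    x = torch.rand(256, d)        # Monte Carlo training points
    loss = nn.functional.mse_loss(net(x), f(x))
    opt.zero_grad()
    loss.backward()
    opt.step()

x_test = torch.rand(1000, d)
print("test MSE:", nn.functional.mse_loss(net(x_test), f(x_test)).item())
```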

### Amos Storkey
Hamiltonian Dynamic Models for Decomposing Content and Motion from Image Sequences
No abstract