**Turing/LMS Workshop**

**Inverse Problems and
Data Science**

**8-10 May 2017**

**Venue:** Informatics Forum, 10 Crichton St, Edinburgh EH8 9AB, UK

**Organisers**: Natalia Bochkina (University
of Edinburgh), Carola Schoenlieb (University of
Cambridge), Marta Betcke (UCL), Sean Holman
(University of Manchester)

The aim of the workshop is to bring together researchers working on inverse problems in different areas of mathematics, statistics and machine learning, as well as researchers from the applied disciplines where inverse problems arise, such as astronomy, biology, computer vision, geoscience and medicine. The topics of the workshop include nonlinear inverse problems, algorithms, inverse problems in machine learning, theoretical properties of statistical estimators in inverse problems, Bayesian inverse problems, and applications in science and medicine.

**Financial support**: The Alan Turing Institute and London
Mathematical Society

**Sponsorship:** if you would like to sponsor the event, please get in touch with one of the organisers. We are grateful to **Schlumberger** for sponsorship.

**Times:** the workshop will start at 10am on Monday 8 May and conclude at 4:30pm on Wednesday 10 May.

**Contact:** events@turing.ac.uk

**Registration:** the deadline is 30 April and the registration fee is £60. Please register here.

**Poster abstract submission deadline:** Friday 7 April

To submit a poster, please send your name, institution, title and abstract (indicating whether you are open to giving a talk) to edinburgh.workshop.stats@gmail.com.

We have a small number of travel bursaries available for UK-based PhD students presenting a poster – please indicate if you wish to apply when submitting your poster abstract.

__There will also be a training course on Bayesian inverse problems on 11 May 2017 (link) – registration will be available soon.__

__Confirmed speakers:__

**Marta Betcke** (Computer Science, UCL, UK) “Dynamic high-resolution photoacoustic tomography with optical flow constraint”

**Michal Branicki** (Mathematics, University of Edinburgh, UK)

**Mike Christie** (Petroleum Institute, Heriot-Watt University, UK) “Bayesian Hierarchical Models for Measurement Error”

**Christian Clason** (Mathematics, Duisburg-Essen University, Germany) “Functional error estimators for the adaptive discretization of inverse problems”

**Andrew Curtis** (GeoSciences, University of Edinburgh, UK)

**Sean Holman** (Mathematics, University of Manchester, UK) “On the stability of the geodesic ray transform in the presence of caustics”

**Michael Gutmann** (Informatics, University of Edinburgh, UK) “Bayesian Inference by Density Ratio Estimation”

**Kyong Jin** (EPFL, Switzerland) “Deep Convolutional Neural Network for Inverse Problems in Imaging”

**Axel Munk** (Goettingen University, Germany) “Nanostatistics – Statistics for Nanoscopy”

**Gabriel Paternain** (University of Cambridge, UK) “Effective inversion of the attenuated X-ray transform associated with a connection”

**Marcelo Pereyra** (Mathematics, Heriot-Watt University, UK) “Bayesian inference by convex optimisation: theory, methods, and algorithms”

**Markus Reiss** (Humboldt University, Berlin, Germany) “Optimal adaptation for early stopping in statistical inverse problems”

**Carola Schoenlieb** (University of Cambridge) “Model-based learning in imaging”

**Anna Simoni** (CREST, CNRS, Paris, France) “Nonparametric Estimation in case of Endogenous Selection”

**Botond Szabo** (Leiden University, Netherlands) “Confidence in Bayesian uncertainty quantification in inverse problems”

**Aretha Teckentrup** (Mathematics, University of Edinburgh, UK) “Gaussian process regression in Bayesian inverse problems”

**Nicholas Zabaras** (Computational Science and Engineering, University of Notre Dame, USA) “Inverse Problems with an Unknown Scale of Estimation”

__Abstracts__

**Christian Clason** (Mathematics, Duisburg-Essen University, Germany) “Functional error estimators for the adaptive discretization of inverse problems”

This talk discusses the application of functional error estimators to the adaptive discretization of inverse problems for partial differential equations. The error estimators can be written in terms of residuals in the optimality system, which can then be estimated by conventional techniques, leading to explicit estimators.

This approach is particularly well suited to inverse problems in Banach spaces, which involve non-smooth penalties such as sparsity enhancement or pointwise penalties.

**Mike Christie** (Petroleum Institute, Heriot-Watt University, UK) “Bayesian Hierarchical Models for Measurement Error”

The detailed geological description of oil reservoirs is always uncertain because of their large size and the relatively small number of wells from which hard data can be obtained. To handle this uncertainty, reservoir models are calibrated, or ‘history matched’, to production data (oil rates, pressures etc.). The quality of any reservoir forecast depends not only on the quality of the match, but also on how well understood the measurement errors are (or indeed the split between measurement and modelling errors).

This talk will look at hierarchical models for estimating measurement and modelling errors in reservoir model calibration, and will compare maximum likelihood estimates of measurement errors with marginalisation over unknown errors.

**Michael Gutmann** (Informatics, University of Edinburgh, UK) “Bayesian Inference by Density Ratio Estimation”

This talk is about Bayesian inference when the likelihood function cannot be computed but data can be generated from the model. The model's data generating process is allowed to be arbitrarily complex, so exact solutions are not possible. By re-formulating the original problem as one of estimating the ratio between two probability density functions, I show how e.g. logistic regression can be used to obtain approximate solutions. The proposed inference framework is illustrated on stochastic nonlinear dynamical models.

Reference: https://arxiv.org/abs/1611.10242
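As a minimal illustration of the density-ratio idea (our own toy sketch, not the implementation from the talk or the referenced paper), the following trains a plain logistic regression to discriminate samples from two distributions; with equal sample sizes, the classifier's log-odds approximate the log density ratio. The Gaussian toy distributions and all function names are our assumptions.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Plain gradient-descent logistic regression with an intercept term."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))      # predicted class probabilities
        w -= lr * Xb.T @ (p - y) / len(y)      # average logistic-loss gradient
    return w

def log_ratio(x, w):
    """Classifier log-odds = estimated log of the density ratio p/q."""
    return np.append(x, 1.0) @ w

rng = np.random.default_rng(0)
# Toy problem: p = N(1, 1), q = N(0, 1); with equal sample sizes the
# true log ratio is log p(x) - log q(x) = x - 0.5.
xp = rng.normal(1.0, 1.0, size=(2000, 1))  # samples from p (label 1)
xq = rng.normal(0.0, 1.0, size=(2000, 1))  # samples from q (label 0)
X = np.vstack([xp, xq])
y = np.concatenate([np.ones(2000), np.zeros(2000)])
w = fit_logistic(X, y)
print(log_ratio(np.array([1.0]), w))  # close to the true value 1.0 - 0.5 = 0.5
```

In the likelihood-free setting of the talk, the "samples from p" would come from the simulator and the ratio would feed into a posterior approximation; here both densities are known only so the estimate can be checked against the truth.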

**Kyong Jin** (EPFL, Switzerland) “Deep Convolutional Neural Network for Inverse Problems in Imaging”

This talk discusses a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems over the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of our work is the observation that unrolled iterative methods have the form of a CNN (filtering followed by point-wise non-linearity) when the normal operator (H*H, the adjoint of H times H) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill-posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. The performance of the proposed network will be demonstrated in sparse-view reconstruction on parallel beam X-ray computed tomography and accelerated MR imaging reconstruction on parallel MRI.

**Axel Munk** (Department of Mathematics and Computer Science, and Max-Planck Institute for Biophysical Chemistry, Goettingen University, Germany) “Nanostatistics – Statistics for Nanoscopy”

Conventional light microscopes have been used for centuries to study small length scales down to approximately 250 nm. Images from such a microscope are typically blurred and noisy, and the measurement error in such images can often be well approximated by Gaussian or Poisson noise. In the past, this approximation has been the focus of a multitude of deconvolution techniques in imaging. However, conventional microscopes have an intrinsic physical limit of resolution. Although this limit remained unchallenged for a century, it was broken for the first time in the 1990s with the advent of modern superresolution fluorescence microscopy techniques. Since then, superresolution fluorescence microscopy has become an indispensable tool for studying the structure and dynamics of living organisms, recently acknowledged with the Nobel Prize in Chemistry in 2014. Current experimental advances go to the physical limits of imaging, where discrete quantum effects are predominant. Consequently, the data is inherently of a non-Gaussian statistical nature, and we argue that recent technological progress also challenges the long-standing Poisson assumption. Thus, analysis and exploitation of the discrete physical mechanisms of fluorescent molecules and light, as well as their distributions in time and space, have become necessary to achieve the highest resolution possible and to extract biologically relevant information.

In this talk we survey some modern fluorescence microscopy techniques from a statistical modeling and analysis perspective. In the first part we address spatially adaptive multiscale deconvolution estimation and testing methods for scanning-type microscopy. We illustrate that such methods benefit from recent advances in large-scale computing, mainly from convex optimization. In the second part of the talk we address challenges of quantitative biology which require more detailed models that delve into sub-Poisson statistics. To this end we suggest a prototypical model for fluorophore dynamics and use it to quantify the number of proteins in a spot.

**Marcelo Pereyra** (Mathematics, Heriot-Watt University, UK) “Bayesian inference by convex optimisation: theory, methods, and algorithms”

Convex optimisation has become the main Bayesian computation methodology in many areas of data science, such as mathematical imaging and machine learning, where high dimensionality is often addressed by using models that are log-concave and where maximum-a-posteriori (MAP) estimation can be performed efficiently by optimisation. The first part of this talk presents a new decision-theoretic derivation of MAP estimation and shows that, contrary to common belief, under log-concavity MAP estimators are proper Bayesian estimators. A main novelty is that the derivation is based on differential geometry. Following on from this, we establish universal theoretical guarantees for the estimation error involved and show estimation stability in high dimensions. The second part of the talk describes a new general methodology for approximating Bayesian high-posterior-density regions in log-concave models. The approximations are derived by using recent concentration of measure results related to information theory, and can be computed very efficiently, even in large-scale problems, by using convex optimisation techniques. The approximations also have favourable theoretical properties: they outer-bound the true high-posterior-density credibility regions, and they are stable with respect to model dimension. The proposed methodology is finally illustrated on two high-dimensional imaging inverse problems related to tomographic reconstruction and sparse deconvolution, where the approximations are used to explore the uncertainty about the solutions, and where convex-optimisation-empowered proximal Markov chain Monte Carlo algorithms are used as a benchmark to compute exact credible regions and measure the approximation error.

Related pre-prints:

https://arxiv.org/abs/1612.06149

https://arxiv.org/pdf/1602.08590.pdf
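As a generic illustration of MAP estimation by convex optimisation in a log-concave model (a sketch of the standard ISTA proximal-gradient scheme for a sparse deconvolution problem, not the specific methods or credible-region approximations of the talk; the blur operator and all parameter values below are our own toy choices):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def map_ista(A, y, lam, iters=500):
    """MAP estimate under a Laplace prior: argmin 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)           # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)  # proximal-gradient step
    return x

rng = np.random.default_rng(1)
n = 100
# Toy convolution-like forward operator: a banded blur matrix.
A = np.zeros((n, n))
for i in range(n):
    for j in range(max(0, i - 2), min(n, i + 3)):
        A[i, j] = 1.0 / (1 + abs(i - j))
x_true = np.zeros(n)
x_true[[10, 40, 75]] = [2.0, -1.5, 3.0]    # sparse ground truth
y = A @ x_true + 0.01 * rng.normal(size=n)  # blurred, lightly noisy data
x_map = map_ista(A, y, lam=0.05)
print(np.flatnonzero(np.abs(x_map) > 0.5))  # should pick out the spike locations
```

The log-concavity of the posterior (Gaussian likelihood, Laplace prior) is exactly what makes this MAP computation a convex problem that proximal methods solve reliably.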

**Markus Reiss** (Humboldt University, Berlin, Germany) “Optimal adaptation for early stopping in statistical inverse problems”

For linear inverse problems $Y=\mathsf{A}\mu+\xi$, it is classical to recover the unknown function $\mu$ by an iterative scheme $(\widehat\mu^{(m)}, m=0,1,\ldots)$ and to provide $\widehat\mu^{(\tau)}$ as a result, where $\tau$ is some stopping rule. Stopping should be decided adaptively, that is, in a data-driven way independently of the true function $\mu$. For deterministic noise $\xi$ the discrepancy principle is usually applied to determine $\tau$. In the context of stochastic noise $\xi$, we study oracle adaptation (that is, performance compared to the best possible stopping iteration). For a stopping rule based on the residual process, oracle adaptation bounds within a certain domain are established. For Sobolev balls, the domain of adaptivity matches a corresponding lower bound. The proofs use bias and variance transfer techniques from weak prediction error to strong $L^2$-error, as well as convexity arguments and concentration bounds for the stochastic part. The performance of our stopping rule for Landweber and spectral cutoff methods is illustrated numerically. (Joint work with Gilles Blanchard, Potsdam, and Marc Hoffmann, Paris.)

**Anna Simoni** (CREST, CNRS, Paris, France) “Nonparametric Estimation in case of Endogenous Selection”

This paper addresses the problem of estimating a nonparametric regression function from selectively observed data when selection is endogenous. Our approach relies on independence between covariates and selection conditionally on potential outcomes. Endogeneity of regressors is also allowed for. In both the exogenous and endogenous cases, consistent two-step estimation procedures are proposed and their rates of convergence are derived, taking into account the degree of ill-posedness. In the first stage we have to solve an ill-posed inverse problem to recover the inverse selection probability function nonparametrically. Moreover, when the covariates are endogenous, an additional inverse problem has to be solved in the second step to recover the instrumental regression function. The pointwise asymptotic distribution of the estimators is established. In addition, bootstrap uniform confidence bands are derived. Finite sample properties are illustrated in a Monte Carlo simulation study and an empirical illustration. Joint work with Christoph Breunig (Humboldt University, Berlin) and Enno Mammen (Heidelberg University).

**Botond Szabo** (Leiden University, Netherlands) “Confidence in Bayesian uncertainty quantification in inverse problems”

In our work we investigate the frequentist coverage of Bayesian credible sets in the inverse Gaussian sequence model. We consider a scale of priors of varying regularity and choose the regularity by an empirical or a hierarchical Bayes method. Next we consider a central set of prescribed posterior probability in the posterior distribution of the chosen regularity. We show that such an adaptive Bayes credible set gives correct uncertainty quantification of “polished tail” parameters, in the sense of high probability of coverage of such parameters. On the negative side, we show by theory and example that adaptation of the prior necessarily leads to gross and haphazard uncertainty quantification for some true parameters that are still within the hyperrectangle regularity scale. The preceding results are based on semi-explicit computations on an optimised statistical model. At the end of the talk I will briefly discuss possible extensions of our coverage results to more general, abstract settings.

The talk is based on papers written together with Judith Rousseau, Aad van der Vaart and Harry van Zanten.

**Aretha Teckentrup** (Mathematics, University of Edinburgh, UK) “Gaussian process regression in Bayesian inverse problems”

A major challenge in the application of sampling methods in Bayesian inverse problems is the typically large computational cost associated with solving the forward problem. To overcome this issue, we consider using a Gaussian process surrogate model to approximate the forward map. This results in an approximation to the solution of the Bayesian inverse problem, and more precisely in an approximate posterior distribution.

In this talk, we analyse the error in the approximate posterior distribution, and show that the approximate posterior distribution tends to the true posterior as the accuracy of the Gaussian process surrogate model increases.

**Nicholas Zabaras** (Computational Science and Engineering, University of Notre Dame, USA) “Inverse Problems with an Unknown Scale of Estimation”

The presentation will focus on the Bayesian estimation of spatially varying parameters of multiresolution/multiscale nature. In particular, the characteristic length scale(s) of the unknown property are not known a priori and need to be evaluated based on the fidelity of the given data across the domain. Our approach is based on representing the spatial field with a wavelet expansion. The intra-scale correlations between wavelet coefficients form a quadtree, and this structure is exploited to identify additional basis functions to refine the model. Bayesian inference is performed using a sequential Monte Carlo (SMC) sampler with an MCMC transition kernel. The SMC sampler is used to move between posterior densities defined on different scales, thereby providing for adaptive refinement of the wavelet representation. The marginal likelihoods provide a termination criterion for the scale determination algorithm, thus allowing model comparison and selection. The approach is demonstrated with permeability estimation for groundwater flow using pressure measurements.

https://www.zabaras.com/

Link to the LMS Inverse Problems Research Group