Turing/LMS Workshop
Inverse Problems and
Data Science
8-10 May 2017
Venue: Informatics Forum, 10 Crichton St, Edinburgh EH8 9AB, UK
Organisers: Natalia Bochkina (University of Edinburgh), Carola Schoenlieb (University of Cambridge), Marta Betcke (UCL), Sean Holman (University of Manchester)
The aim of the workshop is to bring together researchers on inverse problems working in different areas of mathematics, statistics and machine learning as well as from the applied disciplines where inverse problems arise, such as astronomy, biology, computer vision, geoscience and medicine. The topics of the workshop include nonlinear inverse problems, algorithms, inverse problems in machine learning, theoretical properties of statistical estimators in inverse problems, Bayesian inverse problems, applications in science and medicine.
Financial support: The Alan Turing Institute and London Mathematical Society
Sponsoring: if you would like to sponsor the event, please get in touch with one of the organisers. We are grateful to Schlumberger for sponsorship.
Times: the workshop will start at 10am on Monday 8 May and conclude at 4:30pm on Wednesday 10 May
Contact: events@turing.ac.uk
Registration: the deadline is 1 May and the registration fee is £60. Please register here.
There will also be a training course on Bayesian inverse problems on 11 May 2017 – registration is available here (deadline 4 May).
Programme
Monday
9:30 – 10:00 registration and coffee
10:00 – 10:45 Sean Holman (University of Manchester, UK) “On the stability of the geodesic ray transform in the presence of caustics”
10:45 – 11:30 Gabriel Paternain (University of Cambridge, UK) “Effective inversion of the attenuated X-ray transform associated with a connection”
11:30 – 11:45 discussion: interdisciplinary challenges
11:45 – 12:45 lunch
12:45 – 13:30 Andrew Curtis (University of Edinburgh, UK) “Nonlinear Travel-Time and Electrical Resistivity Tomography”
13:30 – 14:15 Nicholas Zabaras (University of Notre Dame, USA) “Inverse Problems with an Unknown Scale of Estimation”
14:15 – 15:00 Eduard Kontar (University of Glasgow, UK) TBC
15:00 – 15:15 discussion: interdisciplinary challenges
15:15 – 15:30 coffee break
15:30 – 16:15 Mike Christie (Heriot-Watt University, UK) “Bayesian Hierarchical Models for Measurement Error”
16:15 – 17:00 Botond Szabo (Leiden University, Netherlands) “Confidence in Bayesian uncertainty quantification in inverse problems”
17:00 – 17:45 Anna Simoni (CREST, CNRS, Paris, France) “Nonparametric Estimation in case of Endogenous Selection”
17:45 – 18:00 discussion: interdisciplinary challenges
18:00 – 20:00 poster session and reception
Tuesday
9:15 – 10:00 Michael Gutmann (University of Edinburgh, UK) “Bayesian Inference by Density Ratio Estimation”
10:00 – 10:30 Pol Moreno (University of Edinburgh) “Overcoming Occlusion with Inverse Graphics”
10:30 – 10:50 coffee break
10:50 – 11:35 Kyong Jin (EPFL, Switzerland) “Deep Convolutional Neural Network for Inverse Problems in Imaging”
11:35 – 12:05 Jonas Adler (KTH Royal Institute of Technology and Elekta) “Solving ill-posed inverse problems using learned iterative schemes”
12:05 – 12:20 discussion: interdisciplinary challenges
12:20 – 13:00 lunch
13:00 – 13:45 Christian Clason (Duisburg-Essen University, Germany) “A primal-dual extragradient method for nonlinear inverse problems for PDEs”
13:45 – 14:30 Carola Schoenlieb (University of Cambridge, UK) “Model-based learning in imaging”
14:30 – 14:45 discussion: interdisciplinary challenges
14:45 – 15:15 coffee break
15:15 – 16:00 Markus Reiss (Humboldt University, Berlin, Germany) “Optimal adaptation for early stopping in statistical inverse problems”
16:00 – 16:45 Axel Munk (Goettingen University, Germany) “Nanostatistics – Statistics for Nanoscopy”
16:45 – 17:15 Merle Behr (Goettingen University, Germany) “Multiscale Blind Source Separation”
17:15 – 18:00 discussion: interdisciplinary challenges
Tuesday: conference dinner
Wednesday
9:30 – 10:15 Aretha Teckentrup (University of Edinburgh, UK) “Gaussian process regression in Bayesian inverse problems”
10:15 – 11:00 Marcelo Pereyra (Heriot-Watt University, UK) “Bayesian inference by convex optimisation: theory, methods, and algorithms”
11:00 – 11:30 coffee break
11:30 – 12:15 Michal Branicki (University of Edinburgh, UK) “Information-based measures of skill and optimality in Bayesian filtering”
12:15 – 12:45 Nagoor Kani (Heriot-Watt University, UK) “Model reduction using physics driven deep residual recurrent neural networks”
12:45 – 13:00 discussion: interdisciplinary challenges
13:00 – 14:00 lunch
14:00 – 14:30 Christopher Wallis (UCL) “Sparse image reconstruction on the sphere: Analysis and Synthesis”
14:30 – 15:00 Xiaohao Cai (UCL) “High-dimensional uncertainty estimation with sparse priors for radio interferometric imaging”
15:00 – 15:45 Marta Betcke (UCL) “Dynamic high-resolution photoacoustic tomography with optical flow constraint”
15:45 – 16:00 discussion: interdisciplinary challenges
16:00 closing remarks
Poster presentations
Shing Chan (Heriot-Watt University) A machine learning approach for efficient uncertainty quantification using multiscale methods
Mohammad Golbabaee (University of Edinburgh) Inexact iterative projected gradient for fast compressed quantitative MRI
Hardial S Kalsi (King's College London) Glottal Inverse Filtering using the Acoustical Klein-Gordon Equation
Dimitris Kamilis and Nick Polydorides (University of Edinburgh) A computational framework for uncertainty quantification for low-frequency, time-harmonic Maxwell equations with stochastic conductivity models
J. Nagoor Kani and Ahmed H. Elsheikh (Heriot-Watt University) Model reduction using physics driven deep residual recurrent neural networks
Jon Cockayne, Chris Oates, Tim Sullivan and Mark Girolami Bayesian Probabilistic Numerical Methods
Alessandro Perelli (University of Edinburgh) Multi Denoising Approximate Message Passing for computational complexity reduction
Jenovah Rodrigues (University of Edinburgh) Bayesian Inverse Problems with Heterogeneous Noise
Ferdia Sherry (University of Cambridge) Learning a sampling pattern for MRI
Invited speaker abstracts
Christian Clason (Mathematics, Duisburg-Essen University, Germany) “A primal-dual extragradient method for nonlinear inverse problems for PDEs”
This talk is concerned
with the extension of the Chambolle--Pock primal-dual algorithm to nonsmooth
optimization problems involving nonlinear operators between function spaces.
The proof of local convergence rests on verifying the Aubin property of the
inverse of a monotone operator at the minimizer, which is difficult as it
involves infinite-dimensional set-valued analysis. However, for nonsmooth
functionals that are defined pointwise -- such as $L^1$ or $L^\infty$ norms --
it is possible to apply simpler tools from the finite-dimensional theory, which
allows deriving explicit conditions for convergence. This is illustrated for imaging problems with $L^1$- and $L^\infty$-fitting terms.
Mike Christie (Petroleum Institute, Heriot-Watt University, UK) “Bayesian Hierarchical Models for Measurement Error”
The detailed
geological description of oil reservoirs is always uncertain because of the
large size and relatively small number of wells from which hard data can be
obtained. To handle this uncertainty,
reservoir models are calibrated or ‘history matched’ to production data (oil
rates, pressures etc). The quality of any reservoir forecast depends not only on the quality of the match, but also on how well understood the measurement errors are (or indeed the split between measurement and modelling errors).
This talk will look at
hierarchical models for estimating measurement and modelling errors in
reservoir model calibration, and compare maximum likelihood estimates of
measurement errors with marginalisation over unknown errors.
Andrew Curtis (University of Edinburgh, UK) “Nonlinear Travel-Time and Electrical Resistivity Tomography”
We solve the fully
nonlinear travel-time tomography problem using reversible-jump Markov chain
Monte Carlo methods. The results motivate a conjecture that the uncertainty in
general, non-linearised tomography problems may consist of loop-like topologies
that can be interpreted similarly to linearised measures of spatial resolution.
This is confirmed in a second example by applying the same inversion algorithm
to estimate the electrical resistivity structure of a medium (the Earth) from
dipole-dipole electrical resistivity measurements, a problem governed by quite
different physics: uncertainty loops appear similarly. If time allows, I will
then discuss a simple decomposition of tomography problems that can be shown to
be unimodal in some important cases, leading to a different method of solution.
Michael Gutmann (Informatics, University of Edinburgh, UK) “Bayesian Inference by Density Ratio Estimation”
This talk is about
Bayesian inference when the likelihood function cannot be computed but data can
be generated from the model. The model's data generating process is allowed to
be arbitrarily complex. Exact solutions are then not possible. But by
re-formulating the original problem as a problem of estimating the ratio
between two probability density functions, I show how e.g. logistic regression
can be used to obtain approximate solutions. The proposed inference framework
is illustrated on stochastic nonlinear dynamical models.
Reference:
https://arxiv.org/abs/1611.10242
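As a toy illustration of the density-ratio trick (a sketch with illustrative settings, not code from the talk): a logistic regression trained to discriminate samples from two densities learns a logit that approximates the log density ratio. Here p = N(1, 1) and q = N(0, 1), so the true log ratio is x − 1/2 and the fitted weights should approach (−0.5, 1).

```python
import numpy as np

rng = np.random.default_rng(0)

# Samples from a "model" p = N(1, 1) and a reference q = N(0, 1).
xp = rng.normal(1.0, 1.0, 20000)
xq = rng.normal(0.0, 1.0, 20000)

# Logistic regression on features (1, x): with equal class sizes, the learned
# logit w[0] + w[1]*x approximates log p(x)/q(x) = x - 1/2.
X = np.concatenate([xp, xq])
y = np.concatenate([np.ones_like(xp), np.zeros_like(xq)])
Phi = np.stack([np.ones_like(X), X], axis=1)

w = np.zeros(2)
for _ in range(2000):  # plain gradient ascent on the average log-likelihood
    p = 1.0 / (1.0 + np.exp(-Phi @ w))
    w += 0.5 * Phi.T @ (y - p) / len(y)
```

The same classification trick applies when p can only be sampled from, which is the likelihood-free setting of the talk.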
Kyong Jin (EPFL, Switzerland) “Deep Convolutional Neural Network for Inverse Problems in Imaging”
This talk discusses a
novel deep convolutional neural network (CNN)-based algorithm for solving
ill-posed inverse problems. Regularized iterative algorithms have emerged as
the standard approach to ill-posed inverse problems in the past few decades.
These methods produce excellent results, but can be challenging to deploy in
practice due to factors including the high computational cost of the forward and
adjoint operators and the difficulty of hyperparameter selection. The starting
point of our work is the observation that unrolled iterative methods have the
form of a CNN (filtering followed by point-wise non-linearity) when the normal
operator (H*H, the adjoint of H times H) of the forward model is a convolution.
Based on this observation, we propose using direct inversion followed by a CNN
to solve normal-convolutional inverse problems. The direct inversion
encapsulates the physical model of the system, but leads to artifacts when the
problem is ill-posed; the CNN combines multiresolution decomposition and
residual learning in order to learn to remove these artifacts while preserving
image structure. The performance of the proposed network will be demonstrated
in sparse-view reconstruction on parallel beam X-ray computed tomography and
accelerated MR imaging reconstruction on parallel MRI.
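The starting observation above — that the normal operator of a convolutional forward model is itself a convolution — can be checked directly in a toy 1-D setting (a sketch with a made-up blur kernel and circular boundary conditions, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
h = np.array([0.25, 0.5, 0.25])  # illustrative blur kernel
Fh = np.fft.fft(h, n)            # its transfer function (circular boundary)

def H(x):   # forward blur
    return np.real(np.fft.ifft(np.fft.fft(x) * Fh))

def Ht(x):  # adjoint blur
    return np.real(np.fft.ifft(np.fft.fft(x) * np.conj(Fh)))

# The normal operator H*H acts as a single convolution whose kernel g is the
# autocorrelation of h (transfer function |Fh|^2).
x = rng.normal(size=n)
normal_op = Ht(H(x))
g = np.real(np.fft.ifft(np.abs(Fh) ** 2))
one_conv = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(g)))
```

Since each unrolled gradient step applies H*H plus a pointwise nonlinearity, it has exactly the filtering-plus-nonlinearity shape of a CNN layer.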
Axel Munk (Department of Mathematics and Computer Science, and Max-Planck Institute for Biophysical Chemistry, Goettingen University, Germany) “Nanostatistics – Statistics for Nanoscopy”
Conventional light
microscopes have been used for centuries for the study of small length scales
down to approximately 250 nm. Images from such a microscope are typically
blurred and noisy, and the measurement error in such images can often be well
approximated by Gaussian or Poisson noise. In the past, this approximation has
been the focus of a multitude of deconvolution techniques in imaging. However,
conventional microscopes have an intrinsic physical limit of resolution.
Although this limit remained unchallenged for a century, it was broken for the
first time in the 1990s with the advent of modern superresolution fluorescence
microscopy techniques. Since then, superresolution fluorescence microscopy has
become an indispensable tool for studying the structure and dynamics of living
organisms, recently acknowledged with the Nobel Prize in Chemistry in 2014.
Current experimental advances go to the physical limits of imaging, where discrete
quantum effects are predominant. Consequently, the data is inherently of a
non-Gaussian statistical nature, and we argue that recent technological
progress also challenges the long-standing Poisson assumption. Thus, analysis
and exploitation of the discrete physical mechanisms of fluorescent molecules
and light, as well as their distributions in time and space, have become
necessary to achieve the highest resolution possible and to extract
biologically relevant information.
In this talk we survey
some modern fluorescence microscopy techniques from a statistical modeling and
analysis perspective. In the first part we address spatially adaptive
multiscale deconvolution estimation and testing methods for scanning type
microscopy. We illustrate that such methods benefit from recent advances in
large-scale computing, mainly from convex optimization. In the second part of
the talk we address challenges of quantitative biology which require more
detailed models that delve into sub-Poisson statistics. To this end we suggest
a prototypical model for fluorophore dynamics and use it to quantify the number
of proteins in a spot.
Marcelo Pereyra (Mathematics, Heriot-Watt University, UK) “Bayesian inference by convex optimisation: theory, methods, and algorithms”
Convex optimisation
has become the main Bayesian computation methodology in many areas of data
science such as mathematical imaging and machine learning, where high
dimensionality is often addressed by using models that are log-concave and
where maximum-a-posteriori (MAP) estimation can be performed efficiently by
optimisation. The first part of this talk presents a new decision-theoretic
derivation of MAP estimation and shows that, contrary to common belief, under
log-concavity MAP estimators are proper Bayesian estimators. A main novelty is
that the derivation is based on differential geometry. Following on from this,
we establish universal theoretical guarantees for the estimation error involved
and show estimation stability in high dimensions. Moreover, the second part of
the talk describes a new general methodology for approximating Bayesian
high-posterior-density regions in log-concave models. The approximations are derived by using
recent concentration of measure results related to information theory, and can
be computed very efficiently, even in large-scale problems, by using convex
optimisation techniques. The approximations also have favourable theoretical
properties, namely they outer-bound the true high-posterior-density credibility
regions, and they are stable with respect to model dimension. The proposed
methodology is finally illustrated on two high-dimensional imaging inverse
problems related to tomographic reconstruction and sparse deconvolution, where
they are used to explore the uncertainty about the solutions, and where
convex-optimisation-empowered proximal Markov chain Monte Carlo algorithms are
used as benchmark to compute exact credible regions and measure the
approximation error.
Related pre-prints:
https://arxiv.org/abs/1612.06149
https://arxiv.org/pdf/1602.08590.pdf
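A minimal, self-contained caricature of MAP-by-optimisation (illustrative numbers, not the methodology of the talk): for a Gaussian likelihood with an $L^1$ (Laplace) prior the posterior is log-concave, the MAP problem separates across coordinates, and its minimiser is given exactly by soft-thresholding.

```python
import numpy as np

# Denoising model y = x + noise with Gaussian likelihood (std sigma) and an
# L1 (Laplace) prior: the MAP estimate minimises, per coordinate,
#   0.5 * (y_i - x_i)^2 / sigma^2 + lam * |x_i|,
# whose closed-form solution is soft-thresholding at lam * sigma^2.
sigma, lam = 0.5, 2.0
y = np.array([1.3, -0.2, 0.6, -2.0])
x_map = np.sign(y) * np.maximum(np.abs(y) - lam * sigma**2, 0.0)
# x_map is approximately [0.8, 0, 0.1, -1.5]
```

In higher dimensions with a non-trivial forward operator there is no closed form, but the same log-concave objective is minimised efficiently by proximal convex optimisation, which is the regime the talk addresses.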
Markus Reiss (Humboldt University, Berlin, Germany) “Optimal adaptation for early stopping in statistical inverse problems”
For linear inverse
problems $Y=\mathsf{A}\mu+\xi$, it is classical to recover the unknown function
$\mu$ by an iterative scheme $(\widehat \mu^{(m)}, m=0,1,\ldots)$ and to
provide $\widehat\mu^{(\tau)}$ as a result, where $\tau$ is some stopping rule.
Stopping should be decided adaptively, that is in a data-driven way
independently of the true function $\mu$.
For deterministic noise $\xi$ the
discrepancy principle is usually applied to determine $\tau$. In the context
of stochastic noise $\xi$, we study
oracle adaptation (that is, compared to the best possible stopping iteration).
For a stopping rule based on the residual process, oracle adaptation bounds
within a certain domain are established.
For Sobolev balls, the domain of adaptivity matches a corresponding
lower bound. The proofs use bias and variance transfer techniques from weak
prediction error to strong $L^2$-error, as well as convexity arguments and
concentration bounds for the stochastic part. The performance of our stopping
rule for Landweber and spectral cutoff methods is illustrated numerically. (Joint work with Gilles Blanchard, Potsdam, and Marc Hoffmann, Paris.)
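The setting can be sketched numerically in a diagonal sequence-space toy model (a discrepancy-type stopping rule with illustrative constants, not the oracle-adaptive rule analysed in the talk):

```python
import numpy as np

rng = np.random.default_rng(1)

# Diagonal toy model Y = A mu + xi with singular values 1/k (mildly ill-posed).
n = 60
k = np.arange(1, n + 1)
A = np.diag(1.0 / k)
mu = np.sin(np.linspace(0.0, np.pi, n))
sigma = 1e-3
Y = A @ mu + sigma * rng.normal(size=n)

# Landweber iteration mu^(m+1) = mu^(m) + omega A^T (Y - A mu^(m)), stopped
# once the residual drops to the noise level (discrepancy-type rule).
omega = 1.0   # valid step size since ||A||_2 = 1
tau = 1.2     # safety factor
mu_hat = np.zeros(n)
stop = None
for m in range(50000):
    r = Y - A @ mu_hat
    if np.linalg.norm(r) <= tau * sigma * np.sqrt(n):
        stop = m
        break
    mu_hat += omega * A.T @ r
```

Stopping early regularises: high-frequency components (small singular values) have barely moved when the rule fires, so the noise they carry is never fully amplified.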
Anna Simoni (CREST, CNRS, Paris, France) “Nonparametric Estimation in case of Endogenous Selection”
This paper addresses
the problem of estimation of a nonparametric regression function from
selectively observed data when selection is endogenous. Our approach relies on
independence between covariates and selection conditionally on potential
outcomes. Endogeneity of regressors is also allowed for. In both the exogenous and endogenous cases, consistent two-step estimation procedures are proposed and their rates of convergence are derived, taking into account the degree of ill-posedness. In the first stage we have to solve an ill-posed inverse problem
to recover nonparametrically the inverse selection probability function.
Moreover, when the covariates are endogenous an additional inverse problem has
to be solved in the second step to recover the instrumental regression
function. Pointwise asymptotic distribution of the estimators is established.
In addition, bootstrap uniform confidence bands are derived. Finite sample
properties are illustrated in a Monte Carlo simulation study and an empirical
illustration. Joint work with Christoph Breunig (Humboldt University, Berlin)
and Enno Mammen (Heidelberg University).
Botond Szabo (Leiden University, Netherlands) “Confidence in Bayesian uncertainty quantification in inverse problems”
In our work we
investigate the frequentist coverage of Bayesian credible sets in the inverse
Gaussian sequence model. We consider a scale of priors of varying regularity
and choose the regularity by an empirical or a hierarchical Bayes method. Next
we consider a central set of prescribed posterior probability in the posterior
distribution of the chosen regularity. We show that such an adaptive Bayes
credible set gives correct uncertainty quantification of “polished tail”
parameters, in the sense of high probability of coverage of such parameters. On
the negative side, we show by theory and example that adaptation of the prior
necessarily leads to gross and haphazard uncertainty quantification for some
true parameters that are still within the hyperrectangle regularity scale. The
preceding results are based on semi-explicit computations on an optimised
statistical model. At the end of the talk I will briefly discuss possible extensions of our coverage results to more general, abstract settings.
The talk is based on
the papers written together with Judith Rousseau, Aad van der Vaart and Harry
van Zanten.
Aretha Teckentrup (Mathematics, University of Edinburgh, UK) “Gaussian process regression in Bayesian inverse problems”
A major challenge in
the application of sampling methods in Bayesian inverse problems is the
typically large computational cost associated with solving the forward problem.
To overcome this issue, we consider using a Gaussian process surrogate model to
approximate the forward map. This results in an approximation to the solution
of the Bayesian inverse problem, and more precisely in an approximate posterior
distribution.
In this talk, we
analyse the error in the approximate posterior distribution, and show that the
approximate posterior distribution tends to the true posterior as the accuracy
of the Gaussian process surrogate model increases.
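A minimal 1-D sketch of the construction (the forward map, design, and kernel below are all illustrative, not from the talk): fit a GP to a few evaluations of the forward map and substitute its posterior mean into the unnormalised Bayesian posterior.

```python
import numpy as np

def G(theta):
    """Stand-in for an expensive forward map (e.g. a PDE solve)."""
    return np.sin(3.0 * theta) + theta

def kern(a, b, ell=0.3):
    """Squared-exponential kernel."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

# GP surrogate fitted to 15 forward-model evaluations.
theta_train = np.linspace(-1.0, 1.0, 15)
K = kern(theta_train, theta_train) + 1e-8 * np.eye(15)
alpha = np.linalg.solve(K, G(theta_train))

def G_surrogate(theta):
    return kern(theta, theta_train) @ alpha  # GP posterior mean

# Unnormalised posteriors for data y = G(theta*) + noise, flat prior on [-1, 1].
theta_star, sigma = 0.4, 0.1
y = G(np.array([theta_star]))[0]
grid = np.linspace(-1.0, 1.0, 201)
post_true = np.exp(-0.5 * (y - G(grid)) ** 2 / sigma**2)
post_gp = np.exp(-0.5 * (y - G_surrogate(grid)) ** 2 / sigma**2)
```

Every posterior evaluation now costs a kernel product instead of a forward solve; the talk's analysis quantifies how the discrepancy between the two posteriors shrinks as the surrogate improves.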
Nicholas Zabaras (Computational Science and Engineering, University of Notre Dame, USA) “Inverse Problems with an Unknown Scale of Estimation”
The presentation will
focus on the Bayesian estimation of spatially varying parameters of
multiresolution/multiscale nature. In particular, the characteristic length
scale(s) of the unknown property are not known a priori and need to be
evaluated based on the fidelity of the given data across the domain. Our
approach is based on representing the spatial field with a wavelet expansion.
The intra-scale correlations between wavelet coefficients form a quadtree, and
this structure is exploited to identify additional basis functions to refine
the model. Bayesian inference is performed using a sequential Monte Carlo
sampler with an MCMC transition kernel. The SMC sampler is used to move between
posterior densities defined on different scales, thereby providing for adaptive
refinement of the wavelet representation. The marginal likelihoods provide a
termination criterion for the scale determination algorithm thus allowing model
comparison and selection. The approach is demonstrated with permeability
estimation for groundwater flow using pressure measurements.
Contributed talk abstracts.
Jonas Adler, KTH Royal Institute of Technology and Elekta
Solving ill-posed inverse problems using learned iterative schemes
We present partially learned iterative schemes for the solution of ill-posed inverse problems with not necessarily linear forward operators. The method builds on ideas from classical regularization theory and recent advances in deep learning to perform learning while making use of prior information about the inverse problem encoded in the forward operator, the noise model and a regularizing functional.
We also present results for tomographic reconstruction of human head phantoms and discuss several possible future research areas.
Related pre-print: https://arxiv.org/abs/1704.04058
Merle Behr, Goettingen University, Germany
Multiscale Blind Source Separation
We discuss a new methodology for statistical recovery of single linear mixtures of piecewise constant signals (sources) with unknown mixing weights and change points in a multiscale fashion. Exact recovery within an $\epsilon$-neighborhood of the mixture is obtained when the sources take only values in a known finite alphabet. Based on this we provide estimators for the mixing weights and sources for Gaussian error. We obtain uniform confidence sets and optimal rates (up to log-factors) for all quantities.
This blind source separation problem is motivated by different applications in digital communication, but also in cancer genetics, where one aims to assign copy-number variations from genetic sequencing data to different tumor clones and their corresponding proportions in the tumor. We analyze such data using the proposed method in order to estimate the clone proportions and the corresponding copy-number variations.
This is joint work with Chris Holmes (University of Oxford, UK) and Axel Munk (University of Goettingen, Germany).
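A stripped-down version of the demixing idea (assuming known weights and a known {0, 1} alphabet, with no change-point or weight estimation, so far simpler than the multiscale method above): with distinct mixing weights, each mixture value identifies the pair of source values.

```python
import numpy as np

rng = np.random.default_rng(5)

# Mixture f = w1*g1 + w2*g2 of two piecewise constant {0, 1}-valued sources.
w1, w2 = 0.3, 0.7
g1 = np.repeat([0, 1, 1, 0], 25)          # change points at 25, 50, 75
g2 = np.repeat([1, 1, 0, 0, 1], 20)       # change points at 40, 80
f = w1 * g1 + w2 * g2 + 0.01 * rng.normal(size=100)

# Known weights and alphabet make the mixture values separable: each f_t lies
# near one of 0, w1, w2, w1 + w2, which identifies (g1_t, g2_t).
combos = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
values = combos @ np.array([w1, w2])      # 0.0, 0.3, 0.7, 1.0
idx = np.argmin(np.abs(f[:, None] - values[None, :]), axis=1)
g1_hat, g2_hat = combos[idx, 0], combos[idx, 1]
```

The hard part, which the paper addresses, is that the weights and change points are unknown and the noise is not negligible, so the separation must be established simultaneously at multiple scales.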
Xiaohao Cai, MSSL, University College London
High-dimensional uncertainty estimation with sparse priors for radio interferometric imaging
In many fields high-dimensional inverse imaging problems are encountered. For
example, imaging the raw data acquired by radio interferometric telescopes
involves solving an ill-posed inverse problem to recover an image of the sky
from noisy and incomplete Fourier measurements. Future telescopes, such as the
Square Kilometre Array (SKA), will usher in a new big-data era for radio
interferometry, with data rates comparable to world-wide internet traffic
today. Sparse regularisation techniques are a powerful approach for
solving these problems, typically yielding excellent reconstruction fidelity
(e.g. Pratley et al. 2016). Moreover, by leveraging recent developments in
convex optimisation, these techniques can be scaled to extremely large
data-sets (e.g. Onose et al. 2016). However, such approaches typically recover
point estimators only and uncertainty information is not quantified. Standard
Markov Chain Monte Carlo (MCMC) techniques that scale to high-dimensional
settings cannot support the sparse (non-differentiable) priors that
have been shown to be highly effective in practice. We present work adapting
the proximal Metropolis adjusted Langevin algorithm (P-MALA), developed
recently by Pereyra (2016a), for radio interferometric imaging with sparse
priors (Cai, Pereyra & McEwen 2017a), leveraging proximity operators from
convex optimisation in an MCMC framework to recover the full posterior
distribution of the sky image. While such an approach provides critical
uncertainty information, scaling to extremely large data-sets, such as those
anticipated from the SKA, is challenging. To address this issue we develop a
technique to compute approximate local Bayesian credible intervals by
post-processing the point (maximum a-posteriori) estimator recovered by solving
the associated sparse regularisation problem (Cai, Pereyra & McEwen 2017b),
leveraging recent results by Pereyra (2016b). This approach inherits the
computational scalability of sparse regularisation techniques, while also
providing critical uncertainty information. We demonstrate these
techniques on simulated observations made by radio interferometric telescopes.
Joint work with Marcelo Pereyra (from Heriot-Watt University) and Jason D.
McEwen (from MSSL, University College London).
J. Nagoor Kani and Ahmed H. Elsheikh, Heriot-Watt University
Model reduction using physics driven deep residual recurrent neural networks
We introduce a deep residual recurrent neural network (DR-RNN) to emulate the dynamics of physical phenomena. The developed DR-RNN is inspired by the iterative steps of line search methods in finding the residual minimiser of numerically discretised differential equations. We formulate this iterative scheme as a stacked recurrent neural network (RNN) embedded with the dynamical structure of the emulated differential equations. We provide empirical evidence showing that these residual-driven deep RNNs can effectively emulate the physical system with a significantly lower number of parameters than standard RNN architectures. We also show significant gains in accuracy from increasing the depth of the RNN, similar to other recent applications of deep learning. The applicability of the developed DR-RNN is demonstrated on uncertainty quantification tasks where a large number of forward simulations are required.
Pol Moreno, University of Edinburgh, UK
Overcoming Occlusion with Inverse Graphics
Scene understanding tasks such as the
prediction of object pose, shape, appearance and illumination
are hampered by the occlusions often found in images. We propose a
vision-as-inverse-graphics approach to handle these occlusions by making use of
a graphics renderer in combination with a robust generative model (GM). Since
searching over scene factors to obtain the best match for an image is very
inefficient, we make use of a recognition model (RM) trained on synthetic data
to initialize the search. This paper addresses two issues: (i) We study how the
inferences are affected by the degree of occlusion of the foreground object,
and show that a robust GM which includes an outlier model to account for
occlusions works significantly better than a non-robust model. (ii) We
characterize the performance of the RM and the gains that can be made by refining
the search using the GM, using a new dataset that includes background clutter
and occlusions. We find that pose and shape are predicted very well by the RM,
but appearance and especially illumination less so. However, accuracy on these
latter two factors can be clearly improved with the generative
model.
Christopher Wallis, University College London
Sparse image reconstruction on the sphere: Analysis and Synthesis
We develop techniques to solve a number of ill-posed inverse problems on the
sphere by sparse regularisation, exploiting sparsity in directional wavelet
space. Through numerical experiments we evaluate the effectiveness of the
technique in solving inpainting, denoising and deconvolution problems. We
consider solving the problems in both the analysis and synthesis settings, with
a number of different sampling schemes, and show that the sampling scheme has a
large impact on the quality of the reconstruction. This is due to more
efficient sampling schemes constraining the solution space and improving
sparsity in wavelet space. We adapt and apply the technique to the Planck 353 GHz total intensity map, improving the ability to extract the structure of
galactic dust emission.
Poster abstracts.
Shing Chan, Heriot-Watt University, UK
A machine learning approach for efficient uncertainty quantification using multiscale methods
Several multiscale methods account for sub-grid
scale features using coarse scale basis functions. For example, in the
Multiscale Finite Volume method the coarse scale basis functions are obtained
by solving a set of local problems over dual-grid cells. We introduce a
data-driven approach for the estimation of these coarse scale basis functions.
Specifically, we employ a neural network predictor fitted using a set of
solution samples from which it learns to generate subsequent basis functions at
a lower computational cost than solving the local problems. The computational
advantage of this approach is realized for uncertainty quantification tasks
where a large number of realizations have to be evaluated. We attribute the
ability to learn these basis functions to the modularity of the local problems
and the redundancy of the permeability patches between samples. The proposed
method is evaluated on elliptic problems yielding very promising results.
Mohammad Golbabaee, University of Edinburgh
Inexact iterative projected gradient for fast compressed quantitative MRI
We will present a
compressed sensing perspective of a novel form of MR imaging called Magnetic
Resonance Fingerprinting (MRF). This enables direct estimation of the T1, T2
and proton density parameter maps for a patient through undersampled k-space acquisition and BLIP, a gradient projection algorithm that enforces the MR Bloch
dynamics. One of the key bottlenecks in MRF is the projection onto the
constraint set. We will present both theoretical and numerical results showing
that significant computational savings are possible through the use of inexact
projections and a fast approximate nearest neighbor search.
Hardial S Kalsi, King's College London
Glottal Inverse Filtering using the Acoustical Klein-Gordon Equation
Inversion of the glottal-pulse waveform from a speech signal remains an active field of research, although it dates back over half a century. Despite multiple approaches
to solve this important inverse problem, it cannot be said today that the field
is in a satisfactory state. In the main, approaches use classical “inverse
filtering” frequency-domain methods to estimate both the vocal-tract and
glottal-pulse waveform. In this poster, we illustrate a new approach which
takes advantage of two recent developments: firstly, the description of the
speech wave by means of an analogue of the Klein-Gordon wave equation of
relativistic quantum mechanics and, secondly, the solution of this equation to
find its Green's function. This approach allows accurate parameterisation of
the vocal tract which greatly simplifies the inversion.
Dimitris Kamilis and Nick Polydorides, University of Edinburgh
A computational framework for uncertainty quantification for low-frequency, time-harmonic Maxwell equations with stochastic conductivity models
We present a computational framework for uncertainty quantification (UQ) for the quasi-magnetostatic Maxwell equations using lognormal random field conductivity models. Our methodology combines sparse quadrature (SQ) for the efficient calculation of the high-dimensional UQ integrals with model reduction methods for expediting the model evaluations. Our analysis and numerical results show that, subject to some mild assumptions on the smoothness of the random conductivity fields, sparse quadrature outperforms the convergence of the conventional Monte-Carlo method, while model reduction further reduces the computational cost. Numerical results to illustrate the method are presented from three-dimensional simulations that are representative of models appearing in the geophysical prospecting Controlled-Source Electromagnetic Method (CSEM).
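A 1-D caricature of why quadrature can outperform Monte-Carlo for smooth random-coefficient functionals (the actual SQ method uses high-dimensional Smolyak-type rules; everything here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Smooth lognormal functional: E[exp(Z)] with Z ~ N(0, 1); exact value exp(1/2).
exact = np.exp(0.5)

# Gauss-Hermite quadrature (probabilists' weight): 8 nodes are already
# near-exact for this analytic integrand.
nodes, weights = np.polynomial.hermite_e.hermegauss(8)
quad = np.sum(weights * np.exp(nodes)) / np.sqrt(2.0 * np.pi)

# Plain Monte Carlo converges only at rate N^{-1/2}: 10000 samples for ~1%.
mc = np.exp(rng.normal(size=10000)).mean()
```

The smoothness assumption on the conductivity field plays the same role as the analyticity of the integrand here: it is what lets a deterministic rule with few model evaluations beat the Monte-Carlo rate.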
J. Nagoor Kani and Ahmed H. Elsheikh, Heriot-Watt University
Model reduction using physics-driven deep residual recurrent neural networks
We introduce a deep residual recurrent neural network (DR-RNN) to emulate the dynamics of physical phenomena. The developed DR-RNN is inspired by the iterative steps of line search methods in finding the residual minimiser of numerically discretised differential equations. We formulate this iterative scheme as a stacked recurrent neural network (RNN) embedded with the dynamical structure of the emulated differential equations. We provide empirical evidence showing that these residual-driven deep RNNs can effectively emulate the physical system with a significantly smaller number of parameters than standard RNN architectures. We also show significant gains in accuracy from increasing the depth of the RNN, similar to other recent applications of deep learning. The applicability of the developed DR-RNN is demonstrated on uncertainty quantification tasks where a large number of forward simulations are required.
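The core idea — unrolled residual-minimising updates as network layers — can be sketched on a toy problem (hedged: this is not the authors' trained architecture; the step size eta here is a fixed illustrative constant where DR-RNN would use learned weights):

```python
import numpy as np

# Each "layer" applies a damped update for the implicit-Euler residual
# r(u) = u - u_prev + dt * u**3 of the ODE du/dt = -u**3.
dt, eta, n_layers = 0.1, 0.5, 30
u_prev = 1.0
for _ in range(50):                    # time steps
    u = u_prev                         # initial guess for the implicit solve
    for _ in range(n_layers):          # unrolled "RNN layers" = residual updates
        r = u - u_prev + dt * u**3
        u -= eta * r
    final_residual = abs(u - u_prev + dt * u**3)
    u_prev = u
```

After the unrolled layers the residual of each implicit step is driven to near zero, and the trajectory decays as the true solution of du/dt = -u^3 does.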
Jon Cockayne, Chris Oates, Tim Sullivan, Mark Girolami
"Bayesian Probabilistic Numerical Methods"
The emergent field of probabilistic numerics has thus far lacked rigorous statistical principles. This work establishes Bayesian probabilistic numerical methods as those which can be cast as solutions to certain Bayesian inverse problems, albeit problems that are non-standard. This allows us to establish general conditions under which Bayesian probabilistic numerical methods are well-defined, encompassing both non-linear and non-Gaussian models. For general computation, a numerical approximation scheme is developed and its asymptotic convergence is established. The theoretical development is then extended to pipelines of computation, wherein probabilistic numerical methods are composed to solve more challenging numerical tasks. The contribution highlights an important research frontier at the interface of numerical analysis and uncertainty quantification, with some illustrative applications presented.
Link: https://arxiv.org/abs/1702.03673
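A familiar concrete instance of a Bayesian probabilistic numerical method is Bayesian quadrature (a hedged sketch of the general idea, not the paper's construction): a Gaussian process prior on the integrand is conditioned on function evaluations, and the posterior mean of the integral is a weighted sum of those evaluations. The lengthscale and node placement below are illustrative choices.

```python
import numpy as np
from math import erf

# Bayesian quadrature for I = int_0^1 f(x) dx with an RBF-kernel GP prior.
ell = 0.25
f = lambda x: x**2                      # test integrand, exact integral 1/3
X = np.linspace(0.0, 1.0, 12)

K = np.exp(-(X[:, None] - X[None, :])**2 / (2 * ell**2))
# Kernel mean embedding: z_i = int_0^1 exp(-(x - X_i)^2 / (2 ell^2)) dx
z = np.array([ell * np.sqrt(np.pi / 2)
              * (erf((1 - xi) / (np.sqrt(2) * ell)) + erf(xi / (np.sqrt(2) * ell)))
              for xi in X])
# Posterior mean of the integral given noise-free evaluations f(X)
weights = np.linalg.solve(K + 1e-10 * np.eye(len(X)), f(X))
estimate = z @ weights
```

The same conditioning also yields a posterior variance over the integral, which is what makes the method "probabilistic" rather than a point estimate alone.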
Alessandro Perelli (University of Edinburgh) Multi Denoising Approximate Message Passing for computational complexity reduction
Denoising-AMP (D-AMP) [1] can be viewed as an iterative algorithm in which, at each iteration, a non-linear denoising function is applied to the signal estimate. The D-AMP algorithm has been analysed in terms of inferential accuracy without considering computational complexity. This is an important missing aspect, since the denoising is often the computational bottleneck in the D-AMP reconstruction.
The approach proposed in this work is different: we aim to design a mechanism for leveraging a hierarchy of denoising models (MultiD-AMP) in order to minimise the overall complexity for a given expected risk, i.e. the estimation error. The intuition comes from the observation that at earlier iterations, when the estimate is still far from the true signal, the algorithm does not need a complicated denoiser, since little of the signal's structure has emerged; faster denoisers suffice. This leads to the idea of defining a family/hierarchy of denoisers of increasing complexity. The main challenge is to define a switching scheme, which is based on the empirical finding that in MultiD-AMP we can predict exactly, in the large system limit, the evolution of the mean square error. We can exploit this state evolution, evaluated on a set of training images, to find a proper switching strategy. The proposed framework has been tested on i.i.d. random Gaussian measurements with Gaussian noise and on a deconvolution problem. The results show the effectiveness of the proposed reconstruction algorithm.
[1] Metzler, C. A., Maleki, A., Baraniuk, R. G. From denoising to compressed sensing. IEEE Transactions on Information Theory, 62(9), 5117-5144, 2016.
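The switching idea can be illustrated with a toy analogue (hedged: this uses plain iterative soft-thresholding rather than full AMP with its Onsager correction, and all thresholds are illustrative). Two soft-threshold "denoisers" of different strength stand in for a hierarchy of denoisers of increasing complexity; the algorithm switches from the coarse one to the finer one once the residual norm, a crude proxy for the state-evolution-predicted MSE, drops below a level.

```python
import numpy as np

# Sparse recovery from i.i.d. Gaussian measurements with Gaussian noise.
rng = np.random.default_rng(1)
n, m, k = 200, 120, 10
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x0 + 0.01 * rng.standard_normal(m)

L = np.linalg.norm(A, 2)**2            # Lipschitz constant of the gradient
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
x = np.zeros(n)
for _ in range(200):
    r = y - A @ x
    coarse = np.linalg.norm(r) > 0.3   # switch criterion (illustrative level)
    lam = 0.05 if coarse else 0.005    # coarse vs fine denoiser strength
    x = soft(x + A.T @ r / L, lam / L)
err = np.linalg.norm(x - x0) / np.linalg.norm(x0)
```

Early iterations use the cheap, heavily-regularised denoiser; once the residual is close to the noise floor, the finer denoiser refines the estimate.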
Jenovah Rodrigues, University of Edinburgh.
'Bayesian Inverse Problems with Heterogeneous Noise.'
We study linear, ill-posed inverse problems in separable Hilbert spaces with noisy observations. A Bayesian solution with Gaussian regularising priors is studied, the aim being to select the prior distribution in such a way that the solution achieves the optimal rate of convergence when the unknown function belongs to a Sobolev space. Consequently, we focus on obtaining the rate of contraction of the whole posterior distribution to the aforementioned unknown function. We consider a Gaussian noise error model with heterogeneous variance, which is investigated using the spectral decomposition of the operator defining the inverse problem. This is joint work with Natalia Bochkina (University of Edinburgh).
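After spectral decomposition, such problems reduce to a sequence-space model with a conjugate Gaussian computation. The sketch below shows that reduction (hedged: the decay rates and noise levels are illustrative choices, not the paper's analysis): for Y_k = b_k*theta_k + sigma_k*xi_k with priors theta_k ~ N(0, tau_k), the posterior is Gaussian coordinate-wise.

```python
import numpy as np

# Diagonalised linear inverse problem with heterogeneous noise:
#   Y_k = b_k * theta_k + sigma_k * xi_k,  theta_k ~ N(0, tau_k) a priori.
# Conjugacy gives the coordinate-wise posterior
#   mean_k = tau_k * b_k * Y_k / (b_k**2 * tau_k + sigma_k**2)
#   var_k  = tau_k * sigma_k**2 / (b_k**2 * tau_k + sigma_k**2)
rng = np.random.default_rng(0)
K = 50
kk = np.arange(1, K + 1)
b = kk**-1.0                 # polynomially ill-posed operator (illustrative)
tau = kk**-2.0               # prior variances encoding Sobolev-type smoothness
sigma = 0.01 * kk**0.25      # heterogeneous noise levels (illustrative)
theta = rng.standard_normal(K) * np.sqrt(tau)
Y = b * theta + sigma * rng.standard_normal(K)

post_mean = tau * b * Y / (b**2 * tau + sigma**2)
post_var = tau * sigma**2 / (b**2 * tau + sigma**2)
```

The posterior mean shrinks the naive inversion Y_k/b_k towards zero, with more shrinkage in high frequencies where b_k is small and sigma_k is large; the rate questions in the abstract concern how this shrinkage trades off bias and variance.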
Ferdia Sherry, University of Cambridge
Learning a sampling pattern for MRI
Taking measurements in MRI is a time-consuming procedure, so ideally one would take few samples and still recover a usable image. It is crucial that these samples are positioned in frequency space in a way that allows as much information as possible to be extracted from them. We consider the problem of determining a suitable sampling pattern for a class of images that are in some sense similar. The problem of learning a sampling pattern can be formulated as a bilevel optimisation problem, in which the upper-level problem measures the reconstruction quality and penalises the lack of sparsity of the sampling pattern, and the lower-level problem is the total variation MRI reconstruction problem. We study the use of stochastic optimisation methods (taking a random pair of ground truth and noisy measurements at each iteration) to solve the bilevel problem.
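A minimal 1-D sketch of the bilevel structure (hedged: Tikhonov regularisation replaces the total variation reconstruction of the abstract so that the lower level has a closed form, and all constants are illustrative). A soft mask w in [0,1] weights Fourier measurements; the upper level penalises reconstruction error plus the l1 norm of w and is optimised by projected gradient descent with an analytic gradient.

```python
import numpy as np

# Lower level: Tikhonov reconstruction of Fourier coefficients, closed form
#   zhat_k = w_k * y_k / (w_k**2 + alpha)  for measurements y = w * (F x + noise).
rng = np.random.default_rng(0)
n, alpha, beta, step = 64, 1e-2, 1e-3, 0.5
t = np.linspace(0, 1, n, endpoint=False)
x_true = np.sin(2 * np.pi * t) + 0.5 * np.sin(6 * np.pi * t)
f = np.fft.fft(x_true) / n
noise = 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

def upper_loss(w):
    y = w * (f + noise)
    zhat = w * y / (w**2 + alpha)
    return np.sum(np.abs(zhat - f)**2) + beta * np.sum(w)

w = 0.5 * np.ones(n)
loss0 = upper_loss(w)
for _ in range(300):
    y = w * (f + noise)
    zhat = w * y / (w**2 + alpha)              # lower-level solution
    dz_dw = (f + noise) * 2 * w * alpha / (w**2 + alpha)**2
    grad = 2 * np.real(np.conj(zhat - f) * dz_dw) + beta
    w = np.clip(w - step * grad, 0.0, 1.0)     # projected gradient step
```

The learned mask concentrates its weight on the frequencies that actually carry signal, while the l1 penalty drains weight from noise-only frequencies — the same trade-off the bilevel formulation encodes for real MRI data.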
Link to the LMS Inverse Problems Research Group