MAD Workshop "Estimating functions from data"
After a long break, the Center for Mathematics and Algorithms for Data (MAD) at the University of Bath is organising its second workshop day. On 7th May we will (virtually) welcome six speakers, who will present different methods and applications of data-driven function estimation in machine learning.
The event is supported by the Departments of Mathematical Sciences and Computer Science.
Venue: Online via Zoom
Date: Friday 7th May 2021
The Zoom link will be emailed around later. Attendance is free, but please do sign up via the Google Form if you are tuning in. This allows us to estimate numbers for the gather.town breaks, and to share the links by email only, to keep out possible Zoom bombers.
Programme: (all times are British Summer Time (BST))
8:50am | Opening, connecting in
9:00am | Stephen Marsland (Wellington): “Bioacoustics: Mathematics and sound recognition”
10:00am | “Coffee” break on gather.town
10:20am | Tim Dodwell (Exeter): “Adaptive Multilevel Delayed Acceptance”
11:20am | Long (“lunch”) break (gather.town)
12:30pm | Aldo Faisal (Imperial): “The mathematics of building an AI Clinician”
1:30pm | Jordan Taylor (Bath): “Flight Path Planning with Reinforcement Learning”
2:00pm | Break
2:20pm | Ullrich Köthe (Heidelberg): “Invertible Neural Networks and their Applications in the Sciences”
3:20pm | Aretha Teckentrup (Edinburgh): “Convergence, Robustness and Flexibility of Gaussian Process Regression”
4:20pm | Close
During the breaks there will be an opportunity to chat informally on gather.town (a link to our dedicated meeting will be posted later).
Abstracts of the talks
Prof Stephen Marsland
Bioacoustics: Mathematics and sound recognition
Animal vocalisations can be used to estimate the abundance of different species in an environment. While the technology to record the soundscape has made great strides, accurate processing of the recordings - which comprise long periods of nothing, with occasional calls at varying distances from the detector, sometimes overlapping with each other, and all corrupted by natural and anthropogenic noise - remains a challenge. I will introduce the field, summarise the approaches used, focussing on the questions of mathematical interest, and highlight future challenges.
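The detection problem the abstract describes - long stretches of near-silence with occasional calls buried in noise - can be illustrated with a minimal short-time-energy detector. Everything here (the synthetic recording, frame length, and threshold factor) is invented for illustration and is not the method from the talk; it is only a sketch of the simplest baseline approach.

```python
import numpy as np

def detect_calls(signal, frame_len, threshold_factor=5.0):
    """Flag frames whose short-time energy exceeds a noise-based threshold."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).mean(axis=1)               # short-time energy per frame
    threshold = threshold_factor * np.median(energy)  # median = robust noise floor
    return energy > threshold

# Synthetic "recording": 2 s of low-level noise with one louder call.
rng = np.random.default_rng(0)
fs = 8000
t = np.arange(2 * fs) / fs
recording = 0.01 * rng.standard_normal(len(t))
call = slice(fs // 2, fs)                              # call occupies 0.5 s - 1.0 s
recording[call] += 0.5 * np.sin(2 * np.pi * 2000 * t[call])

flags = detect_calls(recording, frame_len=256)
```

Real bioacoustic pipelines work on spectrograms rather than raw energy, precisely because energy thresholds fail for distant, overlapping, or noise-corrupted calls - which is where the mathematical interest mentioned above begins.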
Prof Tim Dodwell
Adaptive Multilevel Delayed Acceptance
Uncertainty Quantification through Markov Chain Monte Carlo (MCMC) can be prohibitively expensive for target probability densities with expensive likelihood functions, for instance when evaluating the likelihood involves solving a Partial Differential Equation (PDE), as is the case in a wide range of engineering applications. Multilevel Delayed Acceptance (MLDA) with an Adaptive Error Model (AEM) is a novel approach which alleviates this problem by exploiting a hierarchy of models of increasing complexity and cost, and correcting the inexpensive models on the fly. The method has been integrated into the open-source probabilistic programming package PyMC3 and is available in the latest development version.
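The core delayed-acceptance idea (without the multilevel hierarchy or the adaptive error model) can be sketched in a few lines: each proposal is first screened against a cheap coarse posterior, and the expensive fine posterior is evaluated only for proposals that survive the screening, with a second acceptance ratio correcting for the screening so the chain still targets the fine posterior exactly. The toy densities and step size below are invented for illustration; for the full MLDA sampler, see the PyMC3 development version mentioned in the abstract.

```python
import numpy as np

def delayed_acceptance(log_post_fine, log_post_coarse, x0, n_steps, step=0.5, seed=0):
    """Two-stage (delayed acceptance) random-walk Metropolis in 1D."""
    rng = np.random.default_rng(seed)
    x = x0
    lp_fine, lp_coarse = log_post_fine(x), log_post_coarse(x)
    chain, fine_evals = [x], 1
    for _ in range(n_steps):
        y = x + step * rng.standard_normal()
        lq_coarse = log_post_coarse(y)
        # Stage 1: accept/reject against the cheap coarse model only.
        if np.log(rng.random()) < lq_coarse - lp_coarse:
            lq_fine = log_post_fine(y)   # expensive evaluation, only if screened in
            fine_evals += 1
            # Stage 2: the coarse ratio cancels the screening bias, so the
            # chain remains exact for the fine target (cf. Christen & Fox, 2005).
            if np.log(rng.random()) < (lq_fine - lp_fine) - (lq_coarse - lp_coarse):
                x, lp_fine, lp_coarse = y, lq_fine, lq_coarse
        chain.append(x)
    return np.array(chain), fine_evals

# Toy example: the "fine" target is N(0, 1); the "coarse" model is a cheap,
# slightly biased approximation N(0.1, 1.2), standing in for a coarse PDE solve.
fine = lambda x: -0.5 * x**2
coarse = lambda x: -0.5 * (x - 0.1)**2 / 1.2
chain, n_fine = delayed_acceptance(fine, coarse, x0=0.0, n_steps=5000)
```

The saving is visible in `n_fine`: proposals rejected at stage 1 never trigger a fine-model evaluation, which is exactly what makes the approach attractive when the fine model is a PDE solve.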
Dr Aretha Teckentrup
Convergence, Robustness and Flexibility of Gaussian Process Regression
We are interested in the task of estimating an unknown function from a set of point evaluations. In this context, Gaussian process regression is often used as a Bayesian inference procedure. However, hyper-parameters appearing in the mean and covariance structure of the Gaussian process prior, such as smoothness of the function and typical length scales, are often unknown and learnt from the data, along with the posterior mean and covariance.
In the first half of the talk, we will study the robustness of Gaussian process regression with respect to mis-specification of the hyper-parameters. We work in the framework of empirical Bayes, where a point estimate of the hyper-parameters is computed from the data and then used within the standard Gaussian process prior-to-posterior update. Using results from scattered data approximation, we provide a convergence analysis of the method applied to a fixed, unknown function of interest.
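The empirical Bayes workflow described above - estimate hyper-parameters from the data, then run the standard prior-to-posterior update with the estimated values - can be sketched from scratch for a one-dimensional problem. The kernel, length-scale grid, noise level, and test function below are all invented for illustration, not taken from the talk or the paper.

```python
import numpy as np

def sq_exp_kernel(a, b, ell):
    """Squared-exponential covariance k(x, x') = exp(-(x - x')^2 / (2 ell^2))."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def log_marginal_likelihood(x, y, ell, noise=1e-2):
    """Log evidence of the data under a zero-mean GP with length scale ell."""
    K = sq_exp_kernel(x, x, ell) + noise * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * len(x) * np.log(2 * np.pi))

# Point evaluations of an "unknown" function (here sin, for illustration).
rng = np.random.default_rng(1)
x_train = np.sort(rng.uniform(0, 2 * np.pi, 25))
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(25)

# Empirical Bayes: point estimate of the length scale by maximising
# the marginal likelihood over a grid.
grid = np.linspace(0.1, 3.0, 30)
ell_hat = grid[np.argmax([log_marginal_likelihood(x_train, y_train, l) for l in grid])]

# Standard prior-to-posterior update with the estimated hyper-parameter.
K = sq_exp_kernel(x_train, x_train, ell_hat) + 1e-2 * np.eye(25)
x_test = np.linspace(0, 2 * np.pi, 50)
post_mean = sq_exp_kernel(x_test, x_train, ell_hat) @ np.linalg.solve(K, y_train)
```

The robustness question the talk addresses is what happens to the posterior when `ell_hat` (and other hyper-parameters) are mis-specified or merely estimated, rather than known.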
In the second half of the talk, we discuss deep Gaussian processes as a class of flexible non-stationary prior distributions.
[1] A.L. Teckentrup. Convergence of Gaussian process regression with estimated hyper-parameters and applications in Bayesian inverse problems. SIAM/ASA Journal on Uncertainty Quantification, 8(4), pp. 1310-1337, 2020.
[2] M.M. Dunlop, M.A. Girolami, A.M. Stuart, A.L. Teckentrup. How deep are deep Gaussian processes? Journal of Machine Learning Research, 19(54), pp. 1-46, 2018.