The high-dimensional and complex nature of real-world data makes it difficult to visualise, understand, predict and, in general, work with. Similarly, consolidating multiple distinct but related data sources, for example from related biological experiments, is a non-trivial task. This talk will present a family of Gaussian process based models which attempt to solve these problems by encapsulating the notion of a latent space, i.e. an assumed simpler representation of the data which is hidden from us due to observation noise. Enforcing different structures on this latent space results in different model variants; for example, we can obtain dynamical models for time series or powerful deep models. The talk will first cover the basics of Gaussian processes and then the latent variable models built on them. Although these models are generic and not tied to a particular application, the talk will include illustrative examples from the domains of vision, robotics and bioinformatics. For those interested, the 45-minute main talk will be followed by a presentation giving more detail on the methodological aspects of the most recently developed models.
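To give a flavour of the idea before the talk, the sketch below (not the speaker's code; a minimal illustration using only NumPy) shows the generative view the abstract describes: a simple one-dimensional latent input, a smooth function drawn from a Gaussian process prior with an RBF covariance, and observation noise producing the data we would actually see.

```python
import numpy as np

def rbf_kernel(x1, x2, variance=1.0, lengthscale=1.0):
    """Squared-exponential (RBF) covariance between two 1-D input sets."""
    sqdist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale ** 2)

rng = np.random.default_rng(0)

# Hidden 1-D latent inputs: the assumed "simpler representation" of the data.
t = np.linspace(0.0, 1.0, 50)

# Draw a smooth latent function from the GP prior (zero mean, RBF covariance);
# the small jitter on the diagonal keeps the Cholesky factorisation stable.
K = rbf_kernel(t, t, lengthscale=0.2)
f = rng.multivariate_normal(np.zeros(len(t)), K + 1e-8 * np.eye(len(t)))

# Observation noise hides the latent function from us, giving the data y.
y = f + 0.1 * rng.normal(size=len(t))
```

In the latent variable models discussed in the talk, the inputs `t` are themselves unobserved and are inferred jointly with the mapping; the sketch fixes them only to keep the example short.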

Andreas Damianou is starting a postdoc in Neil Lawrence's machine learning group, based at the Sheffield Institute for Translational Neuroscience, Univ. of Sheffield, while finishing his PhD in the same group. Prior to this, he received an MSc and a BSc from the Univ. of Edinburgh and the Univ. of Athens, respectively.