The ongoing data explosion across science poses significant challenges from a statistical analysis viewpoint. Traditional inference techniques are often computationally intractable in a high-dimensional setup. However, the available information often lies in a lower-dimensional subspace that can be efficiently exploited. In this talk I will give two general examples in the context of spatiotemporal data, with applications to neural data analysis.

In the first part I will describe efficient methods for filtering and forward-backward smoothing in high-dimensional state space models where the available observations are either limited per timestep or spatially localized. These algorithms, based on a series of low-dimensional updates, can be executed with complexity that scales linearly with the spatial dimension, as opposed to standard Kalman filtering, which scales at least quadratically. Applications include non-stationary receptive field estimation from generalized linear model observations, and smoothing of calcium measurements in dendritic trees.
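To illustrate the flavor of such low-dimensional updates (this is a minimal sketch, not the specific algorithms from the talk), suppose the prior covariance is kept in the factored form P = sigma2*I + L L^T with L an n-by-r matrix, and only k observations arrive per timestep. The Kalman measurement update then only ever inverts a k-by-k innovation matrix, so the cost per step is linear in the state dimension n rather than quadratic:

```python
import numpy as np

def lowrank_kalman_update(mu, sigma2, L, H, R, y):
    """One Kalman measurement update with the prior covariance kept in
    the factored form P = sigma2*I + L @ L.T (L is n-by-r).  With k
    observations per timestep (H is k-by-n), every operation below costs
    O(n*k*(k+r)) -- linear in n -- since only the k-by-k innovation
    covariance is inverted and P is never formed explicitly."""
    PHt = sigma2 * H.T + L @ (L.T @ H.T)   # P @ H.T, computed in factored form (n x k)
    S = H @ PHt + R                        # innovation covariance (k x k)
    K = PHt @ np.linalg.inv(S)             # Kalman gain (n x k)
    mu_post = mu + K @ (y - H @ mu)        # posterior mean
    # The posterior covariance P - K S K^T remains "sigma2*I + low-rank":
    # it equals sigma2*I + L @ L.T - W @ W.T, a rank-(r+k) correction.
    C = np.linalg.cholesky(np.linalg.inv(S))
    W = PHt @ C
    return mu_post, W
```

The factor names and the sigma2*I-plus-low-rank parameterization here are illustrative assumptions; the point is simply that the per-step algebra touches only n-by-k and k-by-k quantities.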

In the second part I will discuss robust convex optimization approaches for learning low-dimensional structure from high-dimensional observations, based on nuclear norm penalization. These methods provide a structured way to deal with the problem of dimensionality reduction in a non-Gaussian setup. Applications will be discussed in the context of two specific and important problems in statistical neuroscience: large-scale calcium imaging denoising and subspace identification in large neuronal ensembles.
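A basic building block behind nuclear norm penalization (a generic sketch, not the talk's specific estimators) is singular-value thresholding, the proximal operator of the nuclear norm. It shrinks each singular value of the input toward zero, so minimizing a squared-error fit plus a nuclear norm penalty automatically produces a low-rank estimate:

```python
import numpy as np

def svt(Y, lam):
    """Singular-value thresholding: the proximal operator of the nuclear
    norm.  Solves argmin_X 0.5*||X - Y||_F^2 + lam*||X||_* by shrinking
    each singular value of Y by lam and discarding those below lam."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)
    return (U * s_shrunk) @ Vt

# Toy denoising of a low-rank matrix observed in noise.  The dimensions,
# rank, and penalty level here are illustrative choices.
rng = np.random.default_rng(0)
X_true = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 30))  # rank 2
Y = X_true + 0.5 * rng.standard_normal((40, 30))                      # noisy observation
X_hat = svt(Y, lam=6.0)                                               # low-rank estimate
```

In practice svt is applied repeatedly inside a proximal-gradient or ADMM loop when the data-fit term is not a plain squared error, which is how non-Gaussian observation models are accommodated.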