Latent variable models are ubiquitous in the exploratory analysis of neural population recordings, where they allow researchers to summarize the activity of large populations of neurons in lower-dimensional ‘latent’ spaces. Increasingly, these models build prior information into their latent trajectories, such as smoothness over time or generation by a latent dynamical system. Incorporating such prior knowledge denoises and constrains the latent variables compared to simpler methods such as PCA. In this tutorial, we provide a brief overview of probabilistic latent variable models in neuroscience, with a particular focus on methods that use Gaussian processes to constrain the latent states. To illustrate the use of such methods, we analyze a continuous primate reaching dataset using Bayesian GPFA, which automatically learns the dimensionality of the latent space. We also introduce GPFADS, a method that bridges approaches based on ‘smoothness’ priors and those based on ‘dynamical systems’ priors by incorporating non-reversibility into its Gaussian process prior.
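To make the flavor of these models concrete, here is a minimal sketch of the standard GPFA generative model (the notation below is ours, not taken from the tutorial materials): each of $D$ latent dimensions receives an independent Gaussian process prior encoding smoothness over time, and the recorded activity of $N$ neurons is modeled as a noisy linear readout of the latent state,

    $x_d(\cdot) \sim \mathcal{GP}\left(0,\, k_d(t, t')\right), \qquad d = 1, \dots, D$
    $y(t) = C\, x(t) + b + \varepsilon(t), \qquad \varepsilon(t) \sim \mathcal{N}(0, R)$

where $k_d$ is a smooth covariance function (commonly squared-exponential), $C \in \mathbb{R}^{N \times D}$ is the loading matrix, $b$ is a baseline, and $R$ is a diagonal noise covariance. Roughly speaking, Bayesian GPFA augments this setup with automatic relevance determination priors that shrink the scale of unneeded latent dimensions, which is how the latent dimensionality is learned automatically rather than fixed in advance.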
Zoom link: https://mit.zoom.us/j/96894545356?pwd=OXV1MXNJNGJkQjJRRXhCbHJNRDZmQT09
Password: 827934