
VVTNS Sixth Season Opening Lecture

What is So Interesting About Reinforcement Learning?


Andrew Barto 

University of Massachusetts Amherst

October 29, 2025

This talk aims to answer this question along four dimensions. First is history: RL was a basis of AI long before the term AI was coined in 1956, and the first machine learning (ML) systems were based on RL even before digital computers existed. Despite these notable early successes, RL essentially disappeared from ML until relatively recently. A second reason for the renewed interest in RL is the clarification of some misunderstandings that have been prevalent in the ML community. A third, and most important, reason for this resurgence is that new, or rediscovered, algorithms and connections to well-developed mathematical and engineering methods have been worked out. Finally, a fourth reason for the renewed interest in RL is its strong links to animal reward systems, in particular to the role that dopamine plays in motivation and learning.
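As a purely illustrative companion to the last point (a minimal sketch with made-up values, not material from the talk), the snippet below runs tabular TD(0) value learning on a toy Markov chain; the TD error delta is the reward-prediction-error signal that phasic dopamine responses are thought to resemble.

```python
# Illustrative sketch only (not from the talk): tabular TD(0) on a toy 5-state chain.
# The TD error "delta" is the reward prediction error linked to phasic dopamine.
import numpy as np

n_states = 5
V = np.zeros(n_states)      # value estimates, one per state
alpha, gamma = 0.1, 0.9     # learning rate and discount factor (toy values)

for episode in range(500):
    s = 0
    while s < n_states - 1:
        s_next = s + 1                                # deterministic chain 0 -> 1 -> ... -> 4
        r = 1.0 if s_next == n_states - 1 else 0.0    # reward only on reaching the last state
        delta = r + gamma * V[s_next] - V[s]          # TD error (reward prediction error)
        V[s] += alpha * delta                         # move the estimate toward the TD target
        s = s_next

print(np.round(V, 3))   # values decay geometrically with distance from the reward
```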


Georges Debrégeas

CNRS, Paris

November 5, 2025


Latent-aligned generative models uncover shared structure in spontaneous whole-brain dynamics

Assessing how brain activity generalizes across individuals is a central challenge in experimental neuroscience. Traditional task- or stimulus-driven approaches align data through trial averaging and anatomical registration, but these methods fail for spontaneous activity, where no shared temporal reference exists. In this talk, I will introduce a statistical framework, called latent-aligned Restricted Boltzmann Machines, to build a common representational space from whole-brain recordings of spontaneous activity in multiple zebrafish larvae. This shared latent space, composed of spatially localized co-activation motifs or cell assemblies, allows bidirectional mapping of brain states: activity patterns recorded in one fish can be encoded into the shared space and decoded into another fish. The translated activity patterns retain their original spatial structure and show high plausibility within the recipient brain. We further use this shared space to segment spontaneous activity into discrete brain states and to quantify their Markovian transition statistics. Remarkably, these state-to-state dynamics are stereotyped across individuals, suggesting that spontaneous activity reflects intrinsic computational priors of neural processing. Together, these results demonstrate how probabilistic generative modeling can bridge individual variability and reveal conserved organizational principles of vertebrate brains.
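As a rough illustration of the encode/decode idea (a minimal sketch with assumed toy sizes and random weights, not the authors' trained model), two Bernoulli RBMs that share a hidden layer can translate a binary activity pattern from one fish's neuron space into another's:

```python
# Minimal sketch (not the authors' implementation): two Bernoulli RBMs sharing a
# common hidden (latent) layer, used to "translate" binary activity patterns from
# one animal's neuron space into another's. All sizes and weights are toy values.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_hidden = 8                          # shared latent units ("cell assemblies")
n_neurons_a, n_neurons_b = 50, 60     # hypothetical neuron counts for fish A and B

# Hypothetical weights; in practice these would be fit to recordings with the
# hidden layer aligned across fish.
W_a = rng.normal(0, 0.1, (n_neurons_a, n_hidden))
W_b = rng.normal(0, 0.1, (n_neurons_b, n_hidden))
b_h = np.zeros(n_hidden)
b_b = np.zeros(n_neurons_b)

def encode(v, W):
    """Posterior probability of hidden units given a binary activity pattern v."""
    return sigmoid(v @ W + b_h)

def decode(h, W, b_v):
    """Expected visible activity given hidden probabilities h."""
    return sigmoid(h @ W.T + b_v)

# Translate a spontaneous activity pattern from fish A into fish B's neuron space.
v_a = (rng.random(n_neurons_a) < 0.2).astype(float)   # toy binarized frame
h = encode(v_a, W_a)              # shared latent representation
v_b_pred = decode(h, W_b, b_b)    # predicted activity pattern in fish B
print(v_b_pred.round(2))
```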

Katharina Anna Wilmes

Institute of Neuroinformatics, Zurich

November 12, 2025


Uncertainty-aware predictive processing 

Minimising cortical prediction errors is thought to be a key computation underlying perception, action, and learning. Yet, how the cortex represents and uses uncertainty in this process remains unclear. In the first part of this talk, I will present a normative framework showing how uncertainty can modulate prediction error activity to yield uncertainty-modulated prediction errors (UPEs), hypothesised to be represented by layer 2/3 pyramidal neurons. We propose that these UPEs are computed through inhibitory mechanisms involving SST and PV interneurons. A circuit model demonstrates how cortical cell types can locally compute means, variances, and UPEs, leading to adaptive learning rates. In the second part, I will discuss how uncertainty modulation could be controlled by higher-level representations. We formally derived neural dynamics that minimise prediction errors under the assumption that cortical areas must not only predict the activity in other areas and sensory streams but also jointly project their inverse expected uncertainty about their predictions, which we call “confidence”. This yields a confidence-weighted integration of bottom-up and top-down signals, consistent with Bayesian principles, and predicts the existence of second-order errors that compare confidence with performance. We predict that these second-order errors propagate alongside classical prediction errors through the cortical hierarchy, and simulations demonstrate that this mechanism enables nonlinear classification within a single cortical area.
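As a toy illustration of the first part (a sketch with assumed parameters, not the circuit model presented in the talk), dividing the prediction error by an estimated variance yields an uncertainty-modulated error and, equivalently, an adaptive learning rate:

```python
# Minimal sketch (illustrative assumptions only): an online estimator in which the
# prediction error is scaled by the inverse of the estimated variance, so that
# updates are down-weighted when the input is known to be noisy (the UPE idea).
import numpy as np

rng = np.random.default_rng(1)

mu, var = 0.0, 1.0            # current prediction and its estimated variance
eta_mu, eta_var = 0.1, 0.05   # base learning rates (made-up values)

for t in range(2000):
    s = rng.normal(3.0, 2.0)          # sensory sample from a hypothetical stimulus
    err = s - mu                      # raw prediction error
    upe = err / var                   # uncertainty-modulated prediction error (UPE)
    mu += eta_mu * upe                # high uncertainty -> smaller effective learning rate
    var += eta_var * (err**2 - var)   # running estimate of the variance

print(f"learned mean ~ {mu:.2f}, learned variance ~ {var:.2f}")
```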


November 19, 2025

SfN Meeting in San Diego

No Seminar

Flexible analog computation in low-rank balanced spiking networks

Alfonso Renart

Champalimaud Centre for the Unknown, Lisbon

November 26, 2025


Recurrent networks with balanced excitation-inhibition explain a wide range of neurophysiological observations, but can only implement a limited set of transformations on their input. On the other hand, networks of firing-rate units with low-rank connectivity have universal computational capabilities, but do not work with spikes or generate noise self-consistently. Although empirical approaches to merging these two computational frameworks have been constructed, there is no established theory describing their unification. Here we develop such a theory. We study analytically and numerically networks whose connectivity comprises a random “strong” component and a low-rank “weak” component. When the low-rank connectivity is slow, a well-defined notion of instantaneous firing rate emerges, which implies universal computation as previously shown. However, the fact that such time-varying rates are the result of E-I balance has important implications. We show that internally or externally generated fluctuations along particular latent modes tend to break the E-I balance. Balance is maintained through the emergence of a spontaneous coupling between the mean and variance of the membrane potential and the norm of the latent state driving these modes. This leads to several predictions, the most counterintuitive of which is that coherent global fluctuations in subthreshold membrane potential (Vm) should coexist with desynchronized activity at constant firing rates when the dynamics of these modes are excited. To test our theory, we show that the coupling between the average Vm and the latent state adds new non-linear dimensions to the low-dimensional manifold of the network, which lead to a frequency doubling when the input to the network is periodic, a prediction that is borne out in population recordings from mouse V1. Our results unify two prevalent frameworks for cortical computation and clarify the relationship between computation, dynamics, and geometry in circuits of spiking neurons.
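As a minimal numerical illustration of the frequency-doubling prediction (a toy signal-level sketch with made-up parameters, not the spiking-network model itself), coupling the mean Vm to the squared latent state converts a periodic input at frequency f into a Vm component at 2f:

```python
# Illustrative toy only: if the mean membrane potential couples to the squared norm
# of a latent state kappa(t), a sinusoidal drive at f produces a Vm component at 2f.
import numpy as np

fs = 1000.0                      # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)     # 10 s of simulated "recording"
f_in = 2.0                       # frequency of the periodic input (Hz)

kappa = np.sin(2 * np.pi * f_in * t)   # latent state driven by the periodic input
vm_mean = -0.5 * kappa**2              # mean Vm coupled to the squared latent norm

spectrum = np.abs(np.fft.rfft(vm_mean - vm_mean.mean()))
freqs = np.fft.rfftfreq(len(vm_mean), 1 / fs)
print(f"dominant Vm frequency: {freqs[spectrum.argmax()]:.1f} Hz (input at {f_in:.1f} Hz)")
```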
