VVTNS Sixth Season Opening Lecture
What is So Interesting About Reinforcement Learning?

Andrew Barto
University of Massachusetts Amherst
October 29, 2025
This talk aims to answer this question along four dimensions. The first is history: RL was the basis of AI long before the term AI was introduced in 1956, and the first machine learning (ML) systems were based on RL even before digital computers existed. Yet despite notable early successes, RL essentially disappeared from ML until relatively recently. A second reason for the renewed interest in RL is the clarification of some misunderstandings that have been prevalent in the ML community. A third, and most important, reason is that new, or rediscovered, algorithms and connections to well-developed mathematical and engineering methods have been worked out. Finally, a fourth reason is RL's strong links to animal reward systems, in particular to the role that dopamine plays in motivation and learning.

Latent-aligned generative models uncover shared structure in spontaneous whole-brain dynamics
Georges Debrégeas
CNRS, Paris
November 5, 2025
Assessing how brain activity generalizes across individuals is a central challenge in experimental neuroscience. Traditional task- or stimulus-driven approaches align data through trial averaging and anatomical registration, but these methods fail for spontaneous activity, where no shared temporal reference exists. In this talk, I will introduce a statistical framework, called latent-aligned Restricted Boltzmann Machines, to build a common representational space from whole-brain recordings of spontaneous activity in multiple zebrafish larvae. This shared latent space, composed of spatially localized co-activation motifs or cell assemblies, allows bidirectional mapping of brain states: activity patterns recorded in one fish can be encoded into the shared space and decoded into another. The translated activity patterns retain their original spatial structure and show high plausibility within the recipient brain. We further use this shared space to segment spontaneous activity into discrete brain states and quantify their Markovian transition statistics. Remarkably, these state-to-state dynamics are stereotyped across individuals, suggesting that spontaneous activity reflects intrinsic computational priors of neural processing. Together, these results demonstrate how probabilistic generative modeling can bridge individual variability and reveal conserved organizational principles of vertebrate brains.
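
As a toy illustration of the translation step, and not the authors' trained model, the following Python sketch passes one binarized activity frame from a hypothetical fish A through a shared hidden layer and renders it in fish B's neuron space. The weight matrices, layer sizes, and the encode/decode helpers are illustrative assumptions; in the actual framework the weights are learned so that hidden units correspond to matched cell assemblies across individuals.

import numpy as np

rng = np.random.default_rng(0)
n_vis_a, n_vis_b, n_hid = 500, 600, 50   # neurons in fish A and B; shared assemblies

# Random placeholder weights; in the real framework these are learned so that
# each hidden unit matches a cell assembly across fish. Biases omitted for brevity.
W_a = rng.normal(0.0, 0.1, (n_vis_a, n_hid))
W_b = rng.normal(0.0, 0.1, (n_vis_b, n_hid))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode(v, W):
    # binarized activity frame -> hidden (assembly) probabilities
    return sigmoid(v @ W)

def decode(h, W):
    # hidden probabilities -> per-neuron activation probabilities
    return sigmoid(h @ W.T)

v_a = (rng.random(n_vis_a) < 0.1).astype(float)  # one frame from fish A
h = encode(v_a, W_a)                             # shared latent state
v_b = decode(h, W_b)                             # same state rendered in fish B
print(v_b.shape)                                 # (600,)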
Uncertainty-aware predictive processing
Katharina Anna Wilmes
Institute of Neuroinformatics, Zurich
November 12, 2025
Minimising cortical prediction errors is thought to be a key computation underlying perception, action, and learning. Yet, how the cortex represents and uses uncertainty in this process remains unclear. In the first part of this talk, I will present a normative framework showing how uncertainty can modulate prediction error activity to yield uncertainty-modulated prediction errors (UPEs), hypothesised to be represented by layer 2/3 pyramidal neurons. We propose that these UPEs are computed through inhibitory mechanisms involving SST and PV interneurons. A circuit model demonstrates how cortical cell types can locally compute means, variances, and UPEs, leading to adaptive learning rates. In the second part, I will discuss how uncertainty modulation could be controlled by higher-level representations. We formally derived neural dynamics that minimise prediction errors under the assumption that cortical areas must not only predict the activity in other areas and sensory streams but also jointly project their inverse expected uncertainty about their predictions, which we call “confidence”. This yields a confidence-weighted integration of bottom-up and top-down signals, consistent with Bayesian principles, and predicts the existence of second-order errors that compare confidence with performance. We predict that these second-order errors propagate alongside classical prediction errors through the cortical hierarchy, and simulations demonstrate that this mechanism enables nonlinear classification within a single cortical area.
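
To make the first part concrete, here is a minimal numerical sketch, not the circuit model from the talk: dividing a prediction error by a learned variance yields an uncertainty-modulated error and hence an adaptive learning rate, with separate rectified terms standing in for the positive and negative UPE populations. All names and parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(1)
mu, var = 0.0, 1.0   # current estimates of the stimulus mean and variance
eta = 0.1            # base learning rate

for _ in range(2000):
    s = rng.normal(5.0, 2.0)         # noisy sensory input (true mean 5, sd 2)
    err = s - mu
    upe_pos = max(err, 0.0) / var    # rectified positive UPE "population"
    upe_neg = max(-err, 0.0) / var   # rectified negative UPE "population"
    mu += eta * (upe_pos - upe_neg)  # uncertainty-weighted (adaptive) update
    var += eta * (err ** 2 - var)    # running estimate of the variance

print(round(mu, 2), round(var, 2))   # roughly 5.0 and 4.0

Because the error is divided by the variance, updates shrink automatically when the input is expected to be noisy, which is the adaptive-learning-rate behaviour described above.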
November 19, 2025
SFN in San Diego
No Seminar
Flexible analog computation in low-rank balanced spiking networks
Alfonso Renart
Champalimaud Centre for the Unknown, Lisbon
November 26, 2025
Recurrent networks with balanced excitation-inhibition explain a wide range of neurophysiological observations, but can only implement a limited set of transformations on their input. On the other hand, networks of firing-rate units with low-rank connectivity have universal computational capabilities, but do not work with spikes or generate noise self-consistently. Although empirical approaches to merge these two computational frameworks have been constructed, there is no established theory describing their unification. Here we develop such a theory. We study analytically and numerically networks with connectivity comprising random “strong” and low-rank “weak” components. When the low-rank connectivity is slow, a well-defined notion of instantaneous firing rate emerges, which implies universal computation as previously shown. However, the fact that such time-varying rates are the result of E-I balance has important implications. We show that internally or externally generated fluctuations along particular latent modes tend to break the E-I balance. Its maintenance is obtained through the emergence of a spontaneous coupling between the mean and the variance of the membrane potential and the norm of the latent state driving these modes. This leads to several predictions, the most counterintuitive of which is that coherent global fluctuations in subthreshold membrane potential (Vm) should coexist with desynchronized activity at constant firing rates when the dynamics of these modes is excited. To test our theory, we show that the coupling between the average Vm and the latent state adds new non-linear dimensions to the low-dimensional manifold of the network, which lead to a frequency doubling when the input to the network is periodic, a prediction that is borne out in population recordings from mouse V1. Our results unify two prevalent frameworks for cortical computation and clarify the relationship between computation, dynamics and geometry in circuits of spiking neurons.
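
The connectivity at the heart of this picture can be written compactly. The sketch below, which uses rate units rather than the balanced spiking networks analyzed in the talk, builds a random “strong” component scaled as 1/sqrt(N) atop a rank-one “weak” component scaled as 1/N and reads out the latent variable carried by the low-rank mode; all parameters are illustrative.

import numpy as np

rng = np.random.default_rng(2)
N, g = 1000, 1.2

chi = rng.normal(0, 1, (N, N))            # random "strong" connectivity
m = rng.normal(0, 1, N)                   # output direction of the low-rank part
n = rng.normal(0, 1, N)                   # input-selection direction
J = g * chi / np.sqrt(N) + np.outer(m, n) / N

# Euler-integrated rate dynamics: tau * dx/dt = -x + J @ phi(x)
phi = np.tanh
x = rng.normal(0, 1, N)
dt, tau = 0.1, 1.0
for _ in range(500):
    x += (dt / tau) * (-x + J @ phi(x))

kappa = n @ phi(x) / N   # latent state driving the rank-one mode
print(round(float(kappa), 3))

When the low-rank part is slow relative to the random part, this latent variable plays the role of the well-defined instantaneous firing rate discussed in the abstract.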
Learning representations of specifics and generalities over time

Anna Schapiro
University of Pennsylvania
December 3, 2025
There is a fundamental tension between storing discrete traces of individual experiences, which allows recall of particular moments in our past without interference, and extracting regularities across these experiences, which supports generalization and prediction in similar situations in the future. One influential proposal for how the brain resolves this tension is that it separates the processes anatomically into Complementary Learning Systems, with the hippocampus rapidly encoding individual episodes and the neocortex slowly extracting regularities over days, months, and years. But this does not explain our ability to learn and generalize from new regularities in our environment quickly, often within minutes. We have put forward a neural network model of the hippocampus that suggests that the hippocampus itself may contain complementary learning systems, with one pathway specializing in the rapid learning of regularities and a separate pathway handling the region’s classic episodic memory functions. This proposal has broad implications for how we rapidly learn novel information of specific and generalized types, which we test across statistical learning, inference, and category learning paradigms. We also explore how this system interacts with slower-learning neocortical memory systems, with empirical and modeling investigations into how hippocampal replay shapes neocortical representations during sleep. Together, the work helps us understand how structured information in our environment is initially encoded and how it then transforms over time.
Building on cortical models

Markus Diesmann
Jülich Research Centre
December 10, 2025
Over the past decade our community has made substantial progress in the construction of anatomically detailed network models of the cortical tissue. Thanks to advances in computer hardware and simulation technology, researchers can now routinely work with these models at the natural density of neurons and synapses. Moreover, the availability of cloud services means that such investigations can be carried out without having to install either the model or the simulation software. A recent workshop analyzed the impact of a specific model of the cortical microcircuit, published ten years ago. The model has been reused in multiple contexts: for reproduction studies, validation of mean-field approaches, exploration of methods of model sharing, and as a building block for larger models. Although the model was less successful in inspiring further neuroscientific studies than the authors of the original work had hoped, it became a de facto benchmark for neuromorphic computing systems. It sparked a constructive race for ever shorter simulation times and lower energy consumption. The quantitative comparison of different platforms reveals qualitative differences between conventional and neuromorphic hardware and the limits of speed-up.
The structure of the model is based on light microscopy, as these were the data available at the time. Guided by simulation results and physiological evidence, the original publication hypothesized a preference of excitatory neurons for inhibitory targets. Modern electron microscopy data of cortical volumes, combined with AI-based reconstruction techniques, can resolve individual synaptic connections. This advances the concept of digital twins of the cortical network to a new level of precision, and has already enabled us to confirm the assumption of target-type specificity underlying earlier models. With the progress sketched here, our community may be at a transition point where it becomes easier to work cooperatively and incrementally on models with a larger explanatory scope.
Learning mechanistic models that link cells, circuits, and computations
Jakob Macke
Tübingen University
December 17, 2025
Modern experimental techniques now reveal the structure and function of neural circuits at unprecedented scale and resolution. How can we use this wealth of data to understand how cells and circuits implement computations underlying behaviour? Achieving this goal requires models that are consistent with biophysical mechanisms and circuit dynamics, yet flexible enough to capture behaviourally relevant computations. We develop simulation-based machine learning methods that address this challenge. I will show how these approaches—in combination with connectomic measurements—make it possible to build large-scale mechanistic models of the fruit fly visual system. Our methods generalize across systems and scales, defining a new way to study biological systems by algorithmically learning interpretable models that reveal how structure and dynamics give rise to behaviour.
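
As a caricature of the simulation-based idea, the sketch below infers a toy gain parameter of an invented Poisson spiking simulator by keeping prior draws whose simulated summary statistic lands near the observed one. The actual methods train neural density estimators rather than using rejection; the simulator, prior, and tolerance here are assumptions made purely for illustration.

import numpy as np

rng = np.random.default_rng(3)

def simulate(gain, n=200):
    # toy "neuron": mean spike count with rate proportional to gain
    return rng.poisson(gain * 10.0, size=n).mean()

x_obs = simulate(2.0)                      # pretend this is the recorded data

prior_draws = rng.uniform(0.5, 4.0, 5000)  # prior over the gain parameter
sims = np.array([simulate(th) for th in prior_draws])
accepted = prior_draws[np.abs(sims - x_obs) < 0.5]  # crude rejection step

print(round(accepted.mean(), 2))           # posterior mean, near the true gain 2.0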
December 24, 2025
Christmas Vacation
No Seminar
December 31, 2025
New Year Vacation
No Seminar
VVTNS 2026 Opening Lecture
Towards using large-scale, cross-brain neuronal recordings to identify the brain’s internal signals
Carlos Brody
Princeton Neuroscience Institute
January 7, 2026

Neural activity is often analyzed with respect to external referents, such as the onset of a sensory stimulus or an overt motor action. Simultaneous recordings allow referencing neurons’ activity to each other and thus detecting signals that are internal to the organism. Further, multi-region simultaneous recordings allow observing how these internal signals are coordinated across the brain. Following this logic in rats performing a perceptual decision-making task, we recorded simultaneously from thousands of neurons across up to 20 brain regions at once. Here we report two internal signals that we found to profoundly shape decision-related neural dynamics and brain states. First, we decoded the continuously evolving decision state separately from each region and found surprisingly large-magnitude co-fluctuations in these measures. Dimensionality analysis showed these to be dominated by a single state variable, suggesting that only a single decision-making computation, not multiple parallel computations, is being carried out during the analyzed period. Second, we found that the precise time the subject commits to a decision – a covert event that we decoded from large-scale neural activity in primary motor cortex – was accompanied by a coordinated change, across the brain, from a decision-formation to a post-commitment state. The two states differ substantially in their choice-predictive neural dynamics and in their inter-region correlations. Therefore, knowing the time of this state change on single trials is needed to correctly parse fundamentally different phases of decision-making. Overall, our data suggest that internally referenced signals and state changes, not time-locked to external events but detectable through simultaneous recordings, are major features of neural activity during cognition.
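
The logic of referencing decoded decision states to each other can be illustrated on synthetic data. Below, a single latent evidence variable drives two artificial "regions"; fitting a separate linear decoder to each and correlating the decoded traces recovers the kind of inter-region co-fluctuation described above. The generative model and decoder are illustrative assumptions, not the analysis pipeline used in the talk.

import numpy as np

rng = np.random.default_rng(4)
T, n1, n2 = 2000, 80, 80

d = np.cumsum(rng.normal(0, 0.1, T))    # latent, drifting evidence variable
R1 = np.outer(d, rng.normal(0, 1, n1)) + rng.normal(0, 1.0, (T, n1))  # region 1
R2 = np.outer(d, rng.normal(0, 1, n2)) + rng.normal(0, 1.0, (T, n2))  # region 2

def fit_decode(R, target):
    # least-squares linear decoder of the target from population activity
    w, *_ = np.linalg.lstsq(R, target, rcond=None)
    return R @ w

d1, d2 = fit_decode(R1, d), fit_decode(R2, d)
print(round(float(np.corrcoef(d1, d2)[0, 1]), 2))  # high inter-region correlation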
Computation Through Neuronal-Synaptic Dynamics
David Clark
Kempner Institute at Harvard University
January 14, 2026

Computations in neural circuits are often construed as being implemented through the coordinated dynamics of neurons. In this picture, the role of synaptic connectivity is to sculpt neuronal dynamics to implement computations of interest. Of course, synapses are not static but change on a variety of timescales, including fast timescales comparable to those of neurons. Thus, a more accurate view of computation in neural circuits may involve the coupled dynamics of neurons and synapses. This form of computation is closer to what is implemented by Transformers, via an equivalence between ongoing synaptic plasticity and self-attention. I will first describe a nonlinear recurrent neural-network model with ongoing Hebbian dynamics of “fast” synapses atop unstructured “slow” synapses. I will then describe two computations implemented through neuronal-synaptic dynamics, which can be studied in this model using techniques including dynamical mean-field theory and random-matrix theory. First, there exists a novel phase termed “freezable chaos” in which a stable fixed point of neuronal dynamics is continuously destabilized by synaptic dynamics. Halting synaptic plasticity therefore creates a stable fixed point at any neuronal state visited by the network. Second, I will describe an effect termed “persistent oscillations” in which, following stimulation by a periodic signal, a plastic network continues to autonomously reproduce a similar signal for a duration exceeding any intrinsic timescale in the system. Thus, ongoing Hebbian plasticity can provide a dynamic form of working memory, complementing the static form provided by freezable chaos. Ongoing experimental work suggests that this effect is realized in cortical organoids. Overall, this line of work suggests that synapses should be promoted to first-class dynamical degrees of freedom in our conceptual understanding of neural-circuit function.
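
A minimal caricature of this model class can be simulated in a few lines: rate units coupled through fixed, unstructured “slow” synapses plus Hebbian “fast” synapses that decay and are continuously driven by coactivity. Parameters and the Euler integration are illustrative choices, not those of the talk.

import numpy as np

rng = np.random.default_rng(5)
N = 500
g, eta, lam = 1.5, 0.5, 0.1     # slow-synapse gain, plasticity rate, fast decay

J_slow = g * rng.normal(0, 1.0 / np.sqrt(N), (N, N))  # unstructured, fixed
A = np.zeros((N, N))                                  # fast Hebbian synapses
x = rng.normal(0, 1, N)
dt = 0.1

for _ in range(1000):
    r = np.tanh(x)
    x += dt * (-x + (J_slow + A) @ r)                  # neuronal dynamics
    A += dt * (-lam * A + (eta / N) * np.outer(r, r))  # Hebbian fast dynamics

print(round(float(np.abs(r).mean()), 3))

Freezing A at any moment (setting its update to zero) would pin the network near its current state, which is the intuition behind freezable chaos.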
