Maneesh Sahani

UCL, London

July 7, 2021

Perceptual Inference, Uncertainty and Representation

To act effectively and flexibly in an imperfectly predictable environment with only incomplete and unreliable sensory information, animals must learn to form and compute with internal representations that reflect their necessarily uncertain beliefs about the state of the world.  The optimal approach to handling uncertainty is rooted in Bayesian probability, and indeed humans and other animals often approach Bayes optimality with a degree of robustness and flexibility that continues to evade artificial systems.  However, the question of how neural circuits organise to achieve this performance remains one of the fundamental mysteries of neuroscience.

I will discuss a series of models built around the idea that distributional information is naturally encoded in a distributed fashion by neural population firing rates that converge on the mean values of non-linear functions of state. We will see that such representations emerge naturally in task-optimised systems, and also provide a simple and effective substrate for unsupervised learning. Finally, I will sketch ongoing work that links the emergence of such representations to the architecture of recurrent neural circuits.
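The encoding idea can be caricatured in a few lines. The sketch below is my own toy construction (the Gaussian basis functions, their widths, and the sampling scheme are all assumptions, not the speaker's model): a belief distribution p(x) is represented by population rates equal to the means of nonlinear functions of the state, r_i = E_p[phi_i(x)], so the same rate vector carries both the content and the uncertainty of the belief.

```python
import numpy as np

rng = np.random.default_rng(0)

def ddc_encode(samples, centers, width=0.5):
    """Population rates r_i = E_p[phi_i(x)], estimated over belief samples."""
    phi = np.exp(-(samples[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))
    return phi.mean(axis=0)

centers = np.linspace(-3, 3, 25)
narrow = rng.normal(0.0, 0.2, size=5000)   # a confident belief about x
wide = rng.normal(0.0, 1.0, size=5000)     # an uncertain belief about x

r_narrow = ddc_encode(narrow, centers)
r_wide = ddc_encode(wide, centers)

# Uncertainty is visible in the code: a confident belief concentrates
# activity on a few basis functions; an uncertain one spreads it out.
peak_narrow = r_narrow.max() / r_narrow.sum()
peak_wide = r_wide.max() / r_wide.sum()
assert peak_narrow > peak_wide
```

Nothing here commits to a particular readout; the point is only that expectations of fixed nonlinear functions form a distributed code for the full distribution.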

  • YouTube

Stephen Coombes

The University of Nottingham

June 30, 2021

  • YouTube

Pattern formation in biological neural networks with rebound currents

Waves and patterns in the brain are well known to subserve natural computation. Much attention in the theoretical neuroscience community has been devoted to analysing networks of relatively simple spiking neurons (IF type) or firing rate models (Wilson-Cowan type), and to great effect!  Indeed, the understanding of how spatio-temporal patterns of neural activity may arise in the cortex has advanced significantly with the development and analysis of such models. To replicate this success for sub-cortical tissues requires an extension to include the relevant ionic currents that can further shape the firing response. Here I will advocate for two complementary approaches: i) augmenting the IF-network approach with piecewise linear caricatures of gating dynamics for nonlinear ionic currents, and ii) firing rate reductions for systems in which the nonlinear ionic currents are slow.  By way of illustration, I will show how to construct spatially periodic waves and patterns in i) a simple spiking tissue model of medial entorhinal cortex (with an I_h current), and ii) a firing rate model of thalamus (with an I_T current). The biological commonality between these two models is that both express local 'rebound' currents that can usefully shape the global tissue response.  The mathematical commonality is the use of tools from non-smooth dynamical systems theory to make analytical progress in determining patterns and their stability.
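To illustrate the piecewise-linear caricature idea in the simplest possible setting (my own toy example; the gating curve and all parameter values are assumptions, not taken from the talk): the smooth sigmoidal steady-state activation of a gating variable can be replaced by a clamped linear ramp matched in midpoint and slope, which makes the resulting dynamics piecewise linear and hence amenable to non-smooth analysis.

```python
import numpy as np

v = np.linspace(-100.0, 0.0, 1001)  # membrane voltage (mV)
v_half, k = -75.0, 5.5              # assumed half-activation and slope factor

def sigmoid_gate(v):
    """Smooth steady-state curve of an h-type (deactivating) gating variable."""
    return 1.0 / (1.0 + np.exp((v - v_half) / k))

def pwl_gate(v):
    """Piecewise linear caricature: a ramp with the same value (1/2) and
    slope (-1/4k) at v_half, clamped to [0, 1]."""
    return np.clip(0.5 - (v - v_half) / (4.0 * k), 0.0, 1.0)

# The non-smooth caricature tracks the smooth curve to within ~0.12
err = np.max(np.abs(sigmoid_gate(v) - pwl_gate(v)))
assert err < 0.15
```

The payoff of such caricatures is that between the clamp points the vector field is linear, so trajectories, patterns, and their stability can be pieced together analytically.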


Carina Curto
The Pennsylvania State University
June 23, 2021

Ten theorems about threshold-linear networks

Threshold-linear networks (TLNs) are popular firing rate models of recurrent networks. They have been used to model associative memory, decision-making, and position coding in cortical and hippocampal networks. Unlike rate models with other choices of nonlinearity, TLNs are piecewise linear, making them more amenable to mathematical analysis. In this talk I will present ten theorems about TLNs from the past five years. Many of these theorems connect the fixed points of a network to the structure of an underlying connectivity graph. These results have enabled us to develop graph rules to predict both static and dynamic attractors from network motifs. The theorems will be complemented with examples that illustrate how the mathematical results can be used to analyze and design recurrent networks that support a rich variety of computations and dynamics. Examples include internally-generated sequences, neural integrators, and central pattern generator circuits.
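The model class is simple enough to state in a few lines. A hedged sketch (the weights and inputs below are my own toy choices, not examples from the talk): a TLN obeys dx/dt = -x + [Wx + b]_+, and even a two-neuron mutually inhibitory graph already shows how fixed points follow from connectivity, here as winner-take-all attractors.

```python
import numpy as np

def simulate_tln(W, b, x0, dt=0.01, steps=5000):
    """Euler-integrate the threshold-linear dynamics dx/dt = -x + [W x + b]_+."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * (-x + np.maximum(0.0, W @ x + b))
    return x

# A 2-neuron mutually inhibitory TLN: two stable fixed points (1,0) and (0,1)
W = np.array([[0.0, -1.5],
              [-1.5, 0.0]])
b = np.array([1.0, 1.0])

x = simulate_tln(W, b, x0=[0.9, 0.1])
# The initially favoured neuron wins; the other is pushed below threshold.
assert x[0] > 0.9 and x[1] < 1e-3
```

Because the dynamics are linear within each chamber of the positive orthant, fixed points like these can be enumerated combinatorially from the graph, which is the kind of structure the theorems in the talk exploit.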

  • YouTube

Vincent Hakim

CNRS, Paris

June 16, 2021

What is the mechanistic basis of traveling waves in the motor cortex?

Oscillatory activity with different characteristic frequencies is recorded in different neural areas. Beta (13-30 Hz) oscillations are prominent in the motor cortex during movement preparation. Moreover, in several experiments, this oscillatory activity has been reported to organize into a variety of traveling wave types. I will discuss how these waves could arise in local excitatory-inhibitory modules coupled by long-range excitation. First, I will describe the synchronization properties of such a system, which we recently reinvestigated following several previous works. I will then compare the model in detail to electrophysiological datasets recorded in the primary motor cortices of macaque monkeys during an instructed delayed reach-to-grasp task. Close agreement between the model and the experimental data is obtained in the presence of stochastic local inputs that vary on a long timescale (~200 ms) and mimic inputs to the motor cortex from other neural areas. The results suggest that both time-varying external inputs and intrinsic network architecture shape the dynamics of the motor cortex.


Ken Miller
Columbia University
June 9, 2021

(Two or) three easy pieces

(1) We (with Grace Lindsay) used convolutional neural nets to model attention, scaling the input/output function of neurons in an ImageNet-trained network according to their selectivity for the feature or object category being attended. While this was effective in improving performance on difficult tasks, it was far less effective in earlier than in later layers, indicating that neurons selective for a feature in earlier layers did not necessarily drive neurons selective for that feature in later layers. In contrast, applying attention according to the gradient for improving task performance worked well in early as well as late layers. This raises the question of whether biological attentional modulation might reflect task requirements and not only the features of the stimuli to be attended. We suggest a simple experiment to answer this question, which we hope to convince an appropriate lab to carry out.

(2) In E/I networks, a "paradoxical" response to stimulation has been shown: if the excitatory neurons would be unstable by themselves but are stabilized by feedback inhibition (an "inhibition-stabilized network", or ISN), then, in response to added excitatory input to the inhibitory neurons, their steady-state firing rates paradoxically decrease. In circuits with multiple inhibitory cell types, this has been generalized: in an ISN, if a stimulus is added only to inhibitory cells, there will be a paradoxical change in the net inhibition received by excitatory cells -- e.g., if excitatory firing rates increase, so too will the net inhibition they receive. This does not imply that the firing rates of any particular inhibitory cell type will change paradoxically. Here we (Agostina Palmigiano, along with Francesco Fumarola and experimental work of Dan Mossing in the Adesnik lab) generalize the conditions for a paradoxical firing rate response, including responses to partial as well as full perturbation of the neurons of a given cell type. We work in the context of the circuit with three inhibitory cell types (PV, SOM, VIP) in mouse V1. We show that, if a given cell type shows a paradoxical response to its own full stimulation, then the circuit without that cell type is unstable. This, together with experimental results to date and our models fitted to data, suggests that PV but not SOM interneurons stabilize the circuit of layer 2/3 of mouse V1, at least for smaller visual stimulus sizes. For partial perturbations of a fraction f of a cell type that responds paradoxically to a full perturbation, there is a "fractional paradoxical effect": the proportion of all the cells of that type, stimulated and unstimulated, that respond opposite to the stimulation (i.e., negatively to excitation) changes non-monotonically, approaching 1 as f->0, decreasing with increasing f, and then increasing again to approach 1 as f->1. I'll explain the origins of this behavior.

(3) We (Mario Dipoppa, in collaboration with the experimental work of Andy Keller and Morgane Roth from the Scanziani lab) have studied the E-PV-SOM-VIP circuit underlying contextual modulation in layer 2/3 of mouse V1. Experiments showed that E, PV, and VIP cells are suppressed by a surround stimulus that has the same orientation as the center stimulus, but not by one orthogonal to it. SOM neurons show the opposite behavior, being suppressed by an orthogonal but much less by a parallel surround. A combination of theory and optogenetic experiments shows that the disinhibitory circuit -- VIP inhibits SOM, which inhibits E -- modulates responses between the two conditions. However, it does so, as part of the recurrent circuit, primarily by changing the recurrent excitation E cells receive rather than by directly changing the inhibition they receive, in a manner reminiscent of the paradoxical response.
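The "paradoxical" ISN response described above can be made concrete with a minimal two-population rate model (the weights and inputs below are my own illustrative choices, not the fitted V1 circuit): the E-to-E weight exceeds 1, so excitation alone is unstable, feedback inhibition stabilizes the circuit, and adding excitatory drive to the I population lowers its steady-state rate.

```python
import numpy as np

# tau dr/dt = -r + [W r + h]_+ ; in the positive-rate regime the steady
# state solves (I - W) r* = h.
W = np.array([[1.5, -1.0],    # E->E weight > 1: E alone is unstable
              [2.0, -0.5]])   # feedback inhibition stabilizes it (an ISN)

def steady_state(h):
    return np.linalg.solve(np.eye(2) - W, h)

r0 = steady_state(np.array([1.0, 0.0]))   # baseline drive to E only
r1 = steady_state(np.array([1.0, 0.5]))   # add excitatory drive to I

assert np.all(r0 > 0) and np.all(r1 > 0)  # both states are in the linear regime
# Paradox: extra excitation to I *lowers* the I steady-state rate.
assert r1[1] < r0[1]
```

The sign of the effect is read off from the II entry of (I - W)^{-1}, which is negative precisely because of the inhibition-stabilized regime.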

  • YouTube

Lai-Sang Young
Courant Institute
June 2, 2021

A dynamical model of the visual cortex

In the past several years, I have been involved in building a biologically realistic model of the monkey visual cortex. Work on one of the input layers (4Ca) of the primary visual cortex (V1) is now nearly complete, and I would like to share some of what I have learned with the community. After a brief overview of the model and its capabilities, I will focus on three sets of results that represent three different aspects of the modeling: (i) emergent E-I dynamics in local circuits; (ii) how visual cortical neurons acquire their ability to detect edges and directions of motion; and (iii) a view across the cortical surface: nonequilibrium steady states (in analogy with statistical mechanics) and beyond.

  • YouTube


Ran Darshan
Janelia Research Campus
May 26, 2021

Manifold attractors without symmetry

Encoding by manifold attractors is one of the dominant paradigms for understanding neural computations involving continuous variables, such as parametric and spatial working memory or path integration. In this framework, a persistent neuronal representation of a continuous variable is often attributed to a symmetry principle, both in the representation itself and in the underlying synaptic connectivity. It is thus unclear whether the concept of manifold attractors applies to real biological systems, in which imperfections are inevitable and perfect symmetry is implausible. Here, we developed a theory for computations based on manifold attractors in trained neural networks and show how these manifolds can cope with diverse neuronal responses, imperfections in the geometry of the manifold, and a high level of synaptic heterogeneity. We show that a continuous neuronal representation of the feature emerges from a small set of stimuli used for training. Furthermore, we find that the network's response to external inputs depends on the geometry of the representation and on the level of synaptic heterogeneity in an analytically tractable and interpretable way. Finally, we show that an overly complex geometry of the neuronal representation leads to destabilization of the manifold. Our framework shows that continuous features can be represented in the recurrent dynamics of heterogeneous networks without unrealistic symmetry assumptions. It suggests a general principle for how the static internal representation of continuous features predicts the dynamics in putative manifold attractors in the brain.
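For contrast, the symmetric textbook construction that the abstract argues is implausible in real circuits can be sketched explicitly (all parameter values here are illustrative, not from the paper): a ring attractor whose rotation-invariant cosine connectivity stores a continuum of bump states.

```python
import numpy as np

N = 100
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
# Perfectly rotation-symmetric (circulant) cosine connectivity
W = (3.0 / N) * np.cos(theta[:, None] - theta[None, :])
h = 0.1  # uniform external drive

def relax(x, steps=3000, dt=0.05):
    """Threshold-linear rate dynamics dx/dt = -x + [W x + h]_+."""
    for _ in range(steps):
        x = x + dt * (-x + np.maximum(0.0, W @ x + h))
    return x

# Bumps initialized at different angles persist at those angles: a ring
# (manifold) of attracting states rather than isolated fixed points.
bump_a = relax(np.maximum(0.0, np.cos(theta - 0.5)))
bump_b = relax(np.maximum(0.0, np.cos(theta - 2.5)))
assert abs(theta[np.argmax(bump_a)] - 0.5) < 0.2
assert abs(theta[np.argmax(bump_b)] - 2.5) < 0.2
assert bump_a.min() < 1e-6 < 0.3 < bump_a.max()  # a genuine bump, not uniform
```

The persistence of the bump at any angle here rests entirely on the exact symmetry of W; the abstract's question is what survives when that symmetry is broken by heterogeneity.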


Neuronal variability and spatiotemporal dynamics in cortical network models

Neuronal variability is a reflection of recurrent circuitry and cellular physiology. The modulation of neuronal variability is a reliable signature of cognitive and processing state. A pervasive yet puzzling feature of cortical circuits is that despite their complex wiring, population-wide shared spiking variability is low dimensional with all neurons fluctuating en masse. We show that the spatiotemporal dynamics in a spatially structured network produce large population-wide shared variability.  When the spatial and temporal scales of inhibitory coupling match known physiology, model spiking neurons naturally generate low dimensional shared variability that captures in vivo population recordings along the visual pathway. Further, we show that firing rate models with spatial coupling can also generate chaotic and low-dimensional rate dynamics. The chaotic parameter region expands when the network is driven by correlated noisy inputs, while being insensitive to the intensity of independent noise. 

Chengcheng Huang
University of Pittsburgh
May, 19, 2021

  • YouTube

Computational frameworks for integrating large-scale neural dynamics, connectivity and behaviour

Modern neurotechnologies generate high-resolution maps of brain-wide neural activity and anatomical connectivity. However, we lack theoretical frameworks that explain how global activity arises from connectivity to drive animal behavior. I will present our recent work developing computational frameworks for modeling global neural dynamics, which utilize anatomical connectivity and predict rich behavioral outputs. First, we took advantage of recently available large-scale datasets of neural activity and connectivity to construct a model of mesoscopic functional dynamics across the mouse cortex. We found that global activity is restricted to a low-dimensional subspace spanned by a few cortical areas and explores different parts of this subspace in different behavioral contexts. Our framework provides an interpretable dimensionality reduction of cortex-wide neural activity grounded on the connectome, which generalizes across animals and behaviors. Second, we developed a circuit reduction method for inferring interpretable low-dimensional circuit mechanisms of cognitive computations from high-dimensional neural activity data. Our method infers the structural connectivity of an equivalent low-dimensional circuit that fits projections of high-dimensional neural activity data and implements the behavioral task. Our computational frameworks make quantitative predictions for perturbation experiments.

Tatyana Engel
Cold Spring Harbor Lab
May 12, 2021



Design principles of adaptable neural codes

Behavior relies on the ability of sensory systems to infer changing properties of the environment from incoming sensory stimuli. However, the demands that detecting and adjusting to changes in the environment place on a sensory system often differ from the demands associated with performing a specific behavioral task. This necessitates neural coding strategies that can dynamically balance these conflicting needs. I will discuss our ongoing theoretical work to understand how this balance can best be achieved. We connect ideas from efficient coding and Bayesian inference to ask how sensory systems should dynamically allocate limited resources when the goal is to optimally infer changing latent states of the environment, rather than reconstruct incoming stimuli. We use these ideas to explore dynamic tradeoffs between the efficiency and speed of sensory adaptation schemes, and the downstream computations that these schemes might support. Finally, we derive families of codes that balance these competing objectives, and we demonstrate their close match to experimentally-observed neural dynamics during sensory adaptation. These results provide a unifying perspective on adaptive neural dynamics across a range of sensory systems, environments, and sensory tasks.
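The inference problem at the heart of this abstract can be made concrete with a standard toy model (entirely my own construction, not the speaker's framework): a recursive Bayesian filter that tracks a binary environmental state switching with a known hazard rate from noisy stimuli, i.e. inferring a changing latent state rather than reconstructing the stimuli themselves.

```python
import numpy as np

rng = np.random.default_rng(1)
hazard = 0.02   # per-step probability that the latent binary state flips
noise = 1.0     # observation noise (std)
T = 2000

# Generate a switching environment and the noisy stimuli it produces
state = np.zeros(T, dtype=int)
for t in range(1, T):
    state[t] = 1 - state[t - 1] if rng.random() < hazard else state[t - 1]
mu = np.array([-1.0, 1.0])           # mean stimulus in each state
obs = mu[state] + noise * rng.normal(size=T)

# Recursive Bayes: predict (allow a flip), then update (Gaussian likelihood);
# p is the posterior probability that the current state is 1.
p = 0.5
correct = 0
for t in range(T):
    p = p * (1 - hazard) + (1 - p) * hazard          # predict step
    like1 = np.exp(-0.5 * ((obs[t] - mu[1]) / noise) ** 2)
    like0 = np.exp(-0.5 * ((obs[t] - mu[0]) / noise) ** 2)
    p = p * like1 / (p * like1 + (1 - p) * like0)    # update step
    correct += int((p > 0.5) == bool(state[t]))

assert correct / T > 0.8  # temporal integration beats single-sample readout
```

A resource-limited sensory system would have to approximate something like this update with finite gain and adaptation speed, which is where the efficiency-versus-speed tradeoffs in the abstract enter.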

Ann Hermundstad
Janelia Research Campus
May 5, 2021

  • YouTube


Specialized and spatially organized dopamine signals

I will describe our work showing surprising heterogeneity at the single-cell level in the dopamine system, contradicting a classic view of a homogenous reinforcement learning signal. Next, I will discuss new work attempting to reconcile this observed heterogeneity with classic models regarding the neural instantiation of reinforcement learning. Finally, I will discuss future directions aiming to extend these findings of within-subject dopamine variability to the question of cross-subject variability, with an eye to understanding potential consequences for individual differences in learned behavior.

Ilana Witten 
Princeton University
April 28, 2021


A neuronal model for learning to keep a rhythmic beat

When listening to music, we typically lock onto and move to a beat (1-6 Hz). Behavioral studies of such synchronization abound (Repp 2005), yet the neural mechanisms remain poorly understood. Some models hypothesize an array of self-sustaining, entrainable neural oscillators that resonate when forced with rhythmic stimuli (Large et al. 2010). In contrast, our formulation focuses on event time estimation and plasticity: a neuronal beat generator that adapts its intrinsic frequency and phase to match the external rhythm. The model quickly learns new rhythms, within a few cycles, as found in human behavior. When the stimulus is removed, the beat generator continues to produce the learned rhythm, in accordance with synchronization-continuation tasks.
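The error-correction idea can be sketched in a few lines of algorithm, far simpler than the neuronal model in the talk (the phase/period-correction rule and its gains are my own assumptions): on each stimulus event the generator nudges its internal period toward the observed inter-onset interval and shifts its next beat toward the event; when the stimulus stops, it keeps ticking at the learned tempo.

```python
def beat_generator(stimulus_times, n_continuation=4, period=0.8,
                   alpha=0.5, beta=0.5):
    """Phase/period error correction: alpha is the phase-correction gain,
    beta the period-correction gain; period starts at an internal prior."""
    beats = [stimulus_times[0]]
    for prev, cur in zip(stimulus_times, stimulus_times[1:]):
        ioi = cur - prev                         # observed inter-onset interval
        period += beta * (ioi - period)          # period correction
        predicted = beats[-1] + period
        beats.append(predicted + alpha * (cur - predicted))  # phase correction
    for _ in range(n_continuation):              # stimulus removed: continue
        beats.append(beats[-1] + period)
    return beats, period

# A 2 Hz metronome (0.5 s interval), starting from a 0.8 s internal prior:
stims = [i * 0.5 for i in range(8)]
beats, learned = beat_generator(stims)
assert abs(learned - 0.5) < 0.01                 # locks on within a few cycles
assert abs(beats[-1] - beats[-2] - 0.5) < 0.01   # continuation at learned tempo
```

With gain 0.5 the period error shrinks by half per event, so a new tempo is acquired within a handful of cycles, qualitatively matching the fast learning the abstract describes.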

John Rinzel
New York University
April 21, 2021

  • YouTube


Coordinated hippocampal-thalamic-cortical communication crucial for engram dynamics underneath systems consolidation

Systems consolidation refers to the reorganization of memory over time across brain regions. Despite recent advancements in unravelling engrams and circuits essential for this process, the exact mechanisms behind engram cell dynamics and the role of associated pathways remain poorly understood. Here, we propose a computational model to address this knowledge gap that consists of a multi-region spiking recurrent neural network subject to biologically-plausible synaptic plasticity mechanisms. By coordinating the timescales of synaptic plasticity throughout the network and incorporating a hippocampus-thalamus-cortex circuit, our model is able to couple engram reactivations across these brain regions and thereby reproduce key dynamics of cortical and hippocampal engram cells along with their interdependencies. Decoupling hippocampal-thalamic-cortical activity disrupts engram dynamics and systems consolidation. Our modeling work also yields several testable predictions: engram cells in mediodorsal thalamus are activated in response to partial cues in recent and remote recall and are crucial for systems consolidation; hippocampal and thalamic engram cells are essential for coupling engram reactivations between subcortical and cortical regions; inhibitory engram cells have region-specific dynamics with coupled reactivations; inhibitory input to mediodorsal thalamus is critical for systems consolidation; and thalamocortical synaptic coupling is predictive of cortical engram dynamics and the retrograde amnesia pattern induced by hippocampal damage. Overall, our results suggest that systems consolidation emerges from concerted interactions among engram cells in distributed brain regions enabled by coordinated synaptic plasticity timescales in multisynaptic subcortical-cortical circuits.

Claudia Clopath
Imperial College London
April 14, 2021


Yonatan Loewenstein
The Hebrew University 
April 7, 2021

Choice engineering and the modeling of operant learning

Organisms modify their behavior in response to its consequences, a phenomenon referred to as operant learning. Contemporary modeling of this learning behavior is based on reinforcement learning algorithms. I will discuss some of the challenges that these models face and propose a new approach to model selection that is based on testing the models' ability to engineer behavior. Finally, I will present the results of the Choice Engineering Competition, an academic competition that compared the efficacies of qualitative and quantitative models of operant learning in shaping behavior.
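The standard modeling approach the abstract refers to can be sketched concretely (task, parameters, and update rule are my own generic illustration, not the competition's setup): a softmax Q-learning agent on a two-armed bandit modifies its choices in response to reward and comes to prefer the richer arm.

```python
import math
import random

random.seed(0)

def run_agent(p_reward=(0.2, 0.8), trials=2000, lr=0.1, beta=5.0):
    """Softmax Q-learning on a two-armed bandit: operant learning as RL."""
    q = [0.0, 0.0]
    choices = []
    for _ in range(trials):
        # Softmax choice between the two arms (logistic in the value gap)
        p1 = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))
        a = 1 if random.random() < p1 else 0
        r = 1.0 if random.random() < p_reward[a] else 0.0
        q[a] += lr * (r - q[a])        # delta-rule value update
        choices.append(a)
    return choices

choices = run_agent()
frac_rich_late = sum(choices[-500:]) / 500
assert frac_rich_late > 0.75  # the agent comes to prefer the richer arm
```

"Choice engineering" inverts this picture: rather than fitting such a model to behavior, one schedules the rewards so as to steer a learner's choices, and compares models by how well they do it.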

  • YouTube

Adrienne Fairhall
University of Washington
March 31, 2021

  • YouTube

Variability, maintenance and learning in birdsong

The songbird zebra finch is an exemplary model system in which to study trial-and-error learning, as the bird learns its single song gradually through the production of many noisy renditions. It is also a good system in which to study the maintenance of motor skills, as the adult bird actively maintains its song and retains some residual plasticity. Motor learning occurs through the association of timing within the song, represented by sparse firing in nucleus HVC, with motor output, driven by nucleus RA. Here we show through modeling that the small level of observed variability in HVC can result in a network which is more easily able to adapt to change, and is most robust to cell damage or death, than an unperturbed network. In collaboration with Carlos Lois’ lab, we also consider the effect of directly perturbing HVC through viral injection of toxins that affect the firing of projection neurons. Following these perturbations, the song is profoundly affected but is able to almost perfectly recover. We characterize the changes in song acoustics and syntax,  and propose models for HVC architecture and plasticity that can account for some of the observed effects. Finally, we suggest a potential role for inputs from nucleus Uva in helping to control timing precision in HVC.