Tatiana Engel

Cold Spring Harbor Lab

May 12, 2021

Computational frameworks for integrating large-scale neural dynamics, connectivity, and behavior

Modern neurotechnologies generate high-resolution maps of brain-wide neural activity and anatomical connectivity. However, we lack theoretical frameworks that explain how global activity arises from connectivity to drive animal behavior. I will present our recent work developing computational frameworks for modeling global neural dynamics, which utilize anatomical connectivity and predict rich behavioral outputs. First, we took advantage of recently available large-scale datasets of neural activity and connectivity to construct a model of mesoscopic functional dynamics across the mouse cortex. We found that global activity is restricted to a low-dimensional subspace spanned by a few cortical areas and explores different parts of this subspace in different behavioral contexts. Our framework provides an interpretable dimensionality reduction of cortex-wide neural activity grounded in the connectome, which generalizes across animals and behaviors. Second, we developed a circuit reduction method for inferring interpretable low-dimensional circuit mechanisms of cognitive computations from high-dimensional neural activity data. Our method infers the structural connectivity of an equivalent low-dimensional circuit that fits projections of high-dimensional neural activity data and implements the behavioral task. Our computational frameworks make quantitative predictions for perturbation experiments.


Ann Hermundstad

Janelia Research Campus

May 5, 2021


Design principles of adaptable neural codes

Behavior relies on the ability of sensory systems to infer changing properties of the environment from incoming sensory stimuli. However, the demands that detecting and adjusting to changes in the environment place on a sensory system often differ from the demands associated with performing a specific behavioral task. This necessitates neural coding strategies that can dynamically balance these conflicting needs. I will discuss our ongoing theoretical work to understand how this balance can best be achieved. We connect ideas from efficient coding and Bayesian inference to ask how sensory systems should dynamically allocate limited resources when the goal is to optimally infer changing latent states of the environment, rather than reconstruct incoming stimuli. We use these ideas to explore dynamic tradeoffs between the efficiency and speed of sensory adaptation schemes, and the downstream computations that these schemes might support. Finally, we derive families of codes that balance these competing objectives, and we demonstrate their close match to experimentally observed neural dynamics during sensory adaptation. These results provide a unifying perspective on adaptive neural dynamics across a range of sensory systems, environments, and sensory tasks.


Ilana Witten

Princeton University

April 28, 2021

Specialized and spatially organized dopamine signals

I will describe our work showing surprising heterogeneity at the single-cell level in the dopamine system, contradicting a classic view of a homogeneous reinforcement learning signal. Next, I will discuss new work attempting to reconcile this observed heterogeneity with classic models regarding the neural instantiation of reinforcement learning. Finally, I will discuss future directions aiming to extend these findings of within-subject dopamine variability to the question of cross-subject variability, with an eye to understanding potential consequences for individual differences in learned behavior.


John Rinzel

New York University

April 21, 2021

A neuronal model for learning to keep a rhythmic beat

When listening to music, we typically lock onto and move to a beat (1-6 Hz). Behavioral studies on such synchronization (Repp 2005) abound, yet the neural mechanisms remain poorly understood. Some models hypothesize an array of self-sustaining entrainable neural oscillators that resonate when forced with rhythmic stimuli (Large et al. 2010). In contrast, our formulation focuses on event time estimation and plasticity: a neuronal beat generator that adapts its intrinsic frequency and phase to match the external rhythm. The model quickly learns new rhythms, within a few cycles, as found in human behavior. When the stimulus is removed, the beat generator continues to produce the learned rhythm, in accordance with a synchronization-continuation task.
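
As a toy illustration of this kind of event-based adaptation (not the neuronal model from the talk), the sketch below applies an assumed linear phase- and period-correction rule on each stimulus event; the parameters alpha and beta and the update rule itself are illustrative assumptions.

```python
import numpy as np

# Hypothetical linear phase/period correction rule (illustrative only,
# not the conductance-based beat-generator model from the talk).
def learn_beat(stim_period=0.5, n_events=20, alpha=0.3, beta=0.5):
    T = 0.8            # initial internal period (s), deliberately mis-tuned
    t_beat = 0.0       # time of the next internally generated beat
    stim_times = np.arange(n_events) * stim_period
    errors = []
    for t_stim in stim_times:
        e = t_beat - t_stim            # asynchrony between beat and stimulus
        T += -alpha * e                # period correction
        t_beat += T - beta * e         # schedule the next beat with a phase correction
        errors.append(e)
    return T, errors

T_final, errors = learn_beat()
print("learned period:", round(T_final, 3))                       # approaches the 0.5 s stimulus period
print("asynchronies:", [round(e, 3) for e in errors[:8]])         # shrink over a few cycles
```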


Claudia Clopath

Imperial College London

April 14, 2021

Coordinated hippocampal-thalamic-cortical communication crucial for engram dynamics underneath systems consolidation

Systems consolidation refers to the reorganization of memory over time across brain regions. Despite recent advancements in unravelling engrams and circuits essential for this process, the exact mechanisms behind engram cell dynamics and the role of associated pathways remain poorly understood. Here, we propose a computational model that addresses this knowledge gap: a multi-region spiking recurrent neural network subject to biologically plausible synaptic plasticity mechanisms. By coordinating the timescales of synaptic plasticity throughout the network and incorporating a hippocampus-thalamus-cortex circuit, our model is able to couple engram reactivations across these brain regions and thereby reproduce key dynamics of cortical and hippocampal engram cells along with their interdependencies. Decoupling hippocampal-thalamic-cortical activity disrupts engram dynamics and systems consolidation. Our modeling work also yields several testable predictions: engram cells in the mediodorsal thalamus are activated in response to partial cues in recent and remote recall and are crucial for systems consolidation; hippocampal and thalamic engram cells are essential for coupling engram reactivations between subcortical and cortical regions; inhibitory engram cells have region-specific dynamics with coupled reactivations; inhibitory input to the mediodorsal thalamus is critical for systems consolidation; and thalamocortical synaptic coupling is predictive of cortical engram dynamics and the retrograde amnesia pattern induced by hippocampal damage. Overall, our results suggest that systems consolidation emerges from concerted interactions among engram cells in distributed brain regions enabled by coordinated synaptic plasticity timescales in multisynaptic subcortical-cortical circuits.


Yonatan Loewenstein

The Hebrew University 

April 7, 2021

Choice engineering and the modeling of operant learning

Organisms modify their behavior in response to its consequences, a phenomenon referred to as operant learning. Contemporary modeling of this learning behavior is based on reinforcement learning algorithms. I will discuss some of the challenges that these models face, and propose a new approach to model selection that is based on testing their ability to engineer behavior. Finally, I will present the results of The Choice Engineering Competition – an academic competition that compared the efficacies of qualitative and quantitative models of operant learning in shaping behavior.


Adrienne Fairhall

University of Washington

March 31, 2021


Variability, maintenance and learning in birdsong

The songbird zebra finch is an exemplary model system in which to study trial-and-error learning, as the bird learns its single song gradually through the production of many noisy renditions. It is also a good system in which to study the maintenance of motor skills, as the adult bird actively maintains its song and retains some residual plasticity. Motor learning occurs through the association of timing within the song, represented by sparse firing in nucleus HVC, with motor output, driven by nucleus RA. Here we show through modeling that the small level of observed variability in HVC can result in a network that adapts to change more easily, and is more robust to cell damage or death, than an unperturbed network. In collaboration with Carlos Lois’ lab, we also consider the effect of directly perturbing HVC through viral injection of toxins that affect the firing of projection neurons. Following these perturbations, the song is profoundly affected but is able to almost perfectly recover. We characterize the changes in song acoustics and syntax, and propose models for HVC architecture and plasticity that can account for some of the observed effects. Finally, we suggest a potential role for inputs from nucleus Uva in helping to control timing precision in HVC.


Sukbin Lim

NYU Shanghai

March 24, 2021


Hebbian learning, its inference, and brain oscillation

Despite the recent success of deep learning in artificial intelligence, the lack of biological plausibility and of labeled data in natural learning still poses a challenge in understanding biological learning. At the other extreme lies Hebbian learning, the simplest local and unsupervised rule, yet considered to be computationally less efficient. In this talk, I will introduce a novel method to infer the form of Hebbian learning from in vivo data. Applying the method to data obtained from the monkey inferior temporal cortex during a recognition task indicates how Hebbian learning changes the dynamic properties of the circuits and may promote brain oscillations. Notably, recent electrophysiological data observed in rodent V1 showed that the effect of visual experience on direction selectivity was similar to that observed in monkey data and provided strong validation of the asymmetric changes of feedforward and recurrent synaptic strengths inferred from the monkey data. This may suggest a general learning principle underlying the same computation, such as familiarity detection, across different features represented in different brain regions.


Sara Solla

Northwestern University

March 17, 2021

Low Dimensional Manifolds for Neural Dynamics

The ability to simultaneously record the activity of tens to tens of thousands of neurons has allowed us to analyze the computational role of population activity as opposed to single-neuron activity. Recent work on a variety of cortical areas suggests that neural function may be built on the activation of population-wide activity patterns, the neural modes, rather than on the independent modulation of individual neural activity. These neural modes, the dominant covariation patterns within the neural population, define a low-dimensional neural manifold that captures most of the variance in the recorded neural activity. We refer to the time-dependent activation of the neural modes as their latent dynamics, and argue that latent cortical dynamics within the manifold are the fundamental and stable building blocks of neural population activity.
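
Neural modes and their latent dynamics are commonly estimated with linear dimensionality reduction; below is a minimal PCA sketch on synthetic population activity. The synthetic data and the choice of plain PCA are assumptions for illustration, not the speaker's analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_time, n_modes = 100, 1000, 3

# Synthetic data: population activity generated from a few latent signals plus noise.
latents = np.cumsum(rng.standard_normal((n_time, n_modes)), axis=0)   # smooth latent dynamics
mixing = rng.standard_normal((n_modes, n_neurons))                    # how latents map onto neurons
activity = latents @ mixing + 0.5 * rng.standard_normal((n_time, n_neurons))

# PCA: neural modes are the dominant covariation patterns of the population.
activity -= activity.mean(axis=0)
cov = activity.T @ activity / n_time
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
modes = eigvecs[:, order[:n_modes]]          # neural modes (one pattern per column)
latent_dynamics = activity @ modes           # their time-dependent activation

explained = eigvals[order[:n_modes]].sum() / eigvals.sum()
print(f"variance captured by a {n_modes}-D manifold: {explained:.2f}")
```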


Maoz Shamir

Ben Gurion University

March 10, 2021


STDP and the transfer of rhythmic signals in the brain

Rhythmic activity in the brain has been reported in relation to a wide range of cognitive processes. Changes in rhythmic activity have been related to pathological states. These observations raise the question of the origin of these rhythms: can the mechanisms that generate these rhythms and allow the rhythmic signal to propagate be acquired via a process of learning? In my talk I will focus on spike-timing-dependent plasticity (STDP) and examine under what conditions this unsupervised learning rule can facilitate the propagation of rhythmic activity downstream in the central nervous system. Next, I will apply the theory of STDP to the whisker system and demonstrate how STDP can shape the distribution of preferred phases of firing in a downstream population. Interestingly, in both these cases the STDP dynamics does not relax to a fixed-point solution; rather, the synaptic weights remain dynamic. Nevertheless, STDP allows the system to retain its functionality in the face of continuous remodeling of the entire synaptic population.
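
To make the role of firing phase concrete, here is a small numerical sketch of the mean weight drift per cycle for a pre/post pair firing periodically at a fixed phase offset, using an assumed additive exponential STDP kernel; none of the parameter values come from the talk.

```python
import numpy as np

def stdp_kernel(dt, A_plus=0.01, A_minus=0.012, tau=0.02):
    """Assumed exponential STDP kernel: dt = t_post - t_pre (seconds)."""
    return np.where(dt > 0, A_plus * np.exp(-dt / tau), -A_minus * np.exp(dt / tau))

def drift_per_cycle(phase, freq=10.0, n_cycles=50):
    """Mean weight change per cycle for periodic pre/post trains offset by `phase` (radians)."""
    T = 1.0 / freq
    pre = np.arange(n_cycles) * T
    post = pre + (phase / (2 * np.pi)) * T
    dts = post[:, None] - pre[None, :]            # all pairwise post-minus-pre intervals
    return stdp_kernel(dts).sum() / n_cycles

for phase in (0.25, 0.5, 1.0, np.pi, 5.0):
    print(f"phase {phase:4.2f} rad -> mean drift per cycle {drift_per_cycle(phase):+.4f}")
```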


Tatyana Sharpee

Salk Institute

March 3, 2021

Reading out responses of large neural populations with minimal information loss

Classic studies show that in many species – from leech and cricket to primate – responses of neural populations can be quite successfully read out using a measure of neural population activity termed the population vector. However, despite its successes, detailed analyses have shown that the standard population vector discards substantial amounts of information contained in the responses of a neural population, and so is unlikely to accurately describe how signals are communicated between parts of the nervous system. I will describe recent theoretical results showing how to modify the population vector expression in order to read out neural responses, ideally, without information loss. These results make it possible to quantify the contribution of weakly tuned neurons to perception. I will also discuss numerical methods that can be used to minimize information loss when reading out responses of large neural populations.
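
For reference, the classic population vector readout that the talk takes as its starting point can be written in a few lines; the cosine tuning curves and noise model below are generic textbook assumptions, not the speaker's data or derivation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 64
preferred = rng.uniform(0, 2 * np.pi, n_neurons)     # preferred directions
gains = rng.uniform(0.2, 1.0, n_neurons)              # weakly vs. strongly tuned neurons

def responses(theta):
    """Noisy cosine-tuned population response to a stimulus direction theta."""
    return gains * np.cos(theta - preferred) + 0.1 * rng.standard_normal(n_neurons)

def population_vector(r):
    """Standard readout: responses weight each neuron's preferred-direction unit vector."""
    x = np.sum(r * np.cos(preferred))
    y = np.sum(r * np.sin(preferred))
    return np.arctan2(y, x)

theta_true = 1.0
estimates = np.array([population_vector(responses(theta_true)) for _ in range(200)])
circular_mean = np.angle(np.mean(np.exp(1j * estimates)))
print("true direction:", theta_true, " population-vector estimate:", round(float(circular_mean), 3))
```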


Glassy phase in dynamically balanced networks


Gianluigi Mongillo

CNRS

February 17, 2021

We study the dynamics of (inhibitory) balanced networks while varying (i) the level of symmetry in the synaptic connectivity and (ii) the variance of the synaptic efficacies (the synaptic gain). We find three regimes of activity. For suitably low synaptic gain, regardless of the level of symmetry, there exists a unique stable fixed point. Using a cavity-like approach, we develop a quantitative theory that describes the statistics of the activity in this unique fixed point, and the conditions for its stability. As the synaptic gain increases, the unique fixed point destabilizes, and the network exhibits chaotic activity for zero or negative levels of symmetry (i.e., random or antisymmetric connectivity). Instead, for positive levels of symmetry, there is multi-stability among a large number of marginally stable fixed points. In this regime, ergodicity is broken and the network exhibits non-exponential relaxational dynamics. We discuss the potential relevance of such a “glassy” phase to explain some features of cortical activity.
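
A cartoon of the two axes being varied (assumed smooth rate dynamics rather than the talk's inhibitory balanced networks): random couplings with an adjustable symmetry level and gain, with a perturbed-copy simulation as a crude probe of whether the dynamics settle to a fixed point or remain chaotic.

```python
import numpy as np

def perturbation_growth(gain, symmetry, n=400, t_steps=2000, dt=0.05, seed=0):
    """Assumed rate dynamics x' = -x + J*phi(x); `symmetry` is the correlation
    between J_ij and J_ji (1 = symmetric, 0 = random, -1 = antisymmetric)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    S = (A + A.T) / np.sqrt(2.0)                  # symmetric part
    AS = (A - A.T) / np.sqrt(2.0)                 # antisymmetric part
    J = gain / np.sqrt(n) * (np.sqrt((1 + symmetry) / 2) * S
                             + np.sqrt((1 - symmetry) / 2) * AS)
    np.fill_diagonal(J, 0.0)

    x = rng.standard_normal(n)
    y = x + 1e-6 * rng.standard_normal(n)         # slightly perturbed copy
    for _ in range(t_steps):
        x += dt * (-x + J @ np.tanh(x))
        y += dt * (-y + J @ np.tanh(y))
    return np.linalg.norm(x - y)                  # grows if the dynamics are chaotic

for gain, sym in [(0.5, 0.0), (2.0, 0.0), (2.0, 0.9)]:
    print(f"gain={gain}, symmetry={sym}: final separation {perturbation_growth(gain, sym):.2e}")
```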


Remi Monasson

CNRS, Paris

February 10, 2021


Emergence of long time scales in data-driven network models of zebrafish activity

How can neural networks exhibit persistent activity on time scales much larger than allowed by cellular properties? We address this question in the context of larval zebrafish, a model vertebrate that is accessible to brain-scale neuronal recording and high-throughput behavioral studies. We study in particular the dynamics of a bilaterally distributed circuit, the so-called ARTR, comprising hundreds of neurons. The ARTR exhibits slow antiphasic alternations between its left and right subpopulations, which can be modulated by the water temperature and drive the coordinated orientation of swim bouts, thus organizing the fish's spatial exploration. To elucidate the mechanism leading to the slow self-oscillation, we train a network graphical model (Ising) on neural recordings. Sampling the inferred model allows us to generate synthetic oscillatory activity, whose features correctly capture the observed dynamics. A mean-field analysis of the inferred model reveals the existence of several phases; activated crossing of the barriers between those phases controls the long time scales present in the network oscillations. We show in particular how the barrier heights and the nature of the phases vary with the water temperature.
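
The sampling step of such a data-driven Ising model can be sketched with a few lines of Metropolis dynamics; the couplings and fields below are random placeholders rather than parameters inferred from zebrafish recordings, and the inference step itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                        # number of (binarized) neurons
J = rng.standard_normal((n, n)) / np.sqrt(n)  # placeholder couplings (inferred from data in the talk)
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
h = -0.5 * np.ones(n)                         # placeholder biases

def metropolis_sample(J, h, n_sweeps=2000, beta=1.0):
    """Sample configurations s in {-1,+1}^n from P(s) ~ exp(beta*(s'Js/2 + h's))."""
    s = rng.choice([-1, 1], size=n)
    samples = []
    for sweep in range(n_sweeps):
        for i in rng.permutation(n):
            dE = 2 * s[i] * (J[i] @ s + h[i])          # energy change from flipping spin i
            if dE < 0 or rng.random() < np.exp(-beta * dE):
                s[i] = -s[i]
        if sweep % 10 == 0:
            samples.append(s.copy())
    return np.array(samples)

samples = metropolis_sample(J, h)
print("mean activity per unit:", round(float(samples.mean()), 3))
```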


James Fitzgerald 

Janelia Research Campus

February 3, 2021


A geometric framework to predict structure from function in neural networks

The structural connectivity matrix of synaptic weights between neurons is a critical determinant of overall network function. However, quantitative links between neural network structure and function are complex and subtle. For example, many networks can give rise to similar functional responses, and the same network can function differently depending on context. Whether certain patterns of synaptic connectivity are required to generate specific network-level computations is largely unknown. Here we introduce a geometric framework for identifying synaptic connections required by steady-state responses in recurrent networks of rectified-linear neurons. Assuming that the number of specified response patterns does not exceed the number of input synapses, we analytically calculate all feedforward and recurrent connectivity matrices that can generate the specified responses from the network inputs. We then use this analytical characterization to rigorously analyze the solution space geometry and derive certainty conditions guaranteeing a non-zero synapse between neurons.
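
To make the steady-state condition explicit: a specified response pattern r to input x must satisfy r = [W r + F x]_+, which is linear in the incoming weights for active neurons and an inequality for silent ones. The toy check below verifies one simple candidate connectivity against specified responses (with made-up matrices and zero recurrence); it is not the paper's analytical characterization of the full solution space.

```python
import numpy as np

def steady_state_ok(W, F, X, R, tol=1e-8):
    """Check that each specified response R[:, k] is a fixed point of r = relu(W r + F x)."""
    drive = W @ R + F @ X
    return np.allclose(np.maximum(drive, 0.0), R, atol=tol)

rng = np.random.default_rng(0)
n_neurons, n_inputs, n_patterns = 4, 3, 2
R = np.abs(rng.standard_normal((n_neurons, n_patterns)))      # specified (nonnegative) responses
X = rng.standard_normal((n_inputs, n_patterns))

# One simple candidate: no recurrence, feedforward weights solved by least squares
# (possible here because n_patterns <= n_inputs, matching the abstract's assumption).
W = np.zeros((n_neurons, n_neurons))
F = R @ np.linalg.pinv(X)
print("candidate satisfies the steady-state equations:", steady_state_ok(W, F, X, R))
```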


Brent Doiron

University of Chicago

January 27, 2021


Cellular mechanisms behind stimulus-evoked quenching of variability

A wealth of experimental studies show that the trial-to-trial variability of neuronal activity is quenched during stimulus-evoked responses. This fact has helped ground a popular view that the variability of spiking activity can be decomposed into two components. The first is due to irregular spike timing conditioned on the firing rate of a neuron (i.e. a Poisson process), and the second is the trial-to-trial variability of the firing rate itself. Quenching of the variability of the overall response is assumed to be a reflection of a suppression of firing rate variability. Network models have explained this phenomenon through a variety of circuit mechanisms. However, in all cases, from the vantage of a neuron embedded within the network, quenching of its response variability is inherited from its synaptic input. We analyze in vivo whole-cell recordings from principal cells in layer (L) 2/3 of mouse visual cortex. While the variability of the membrane potential is quenched upon stimulation, the variability of the excitatory and inhibitory currents afferent to the neuron is amplified. This discord complicates the simple inheritance assumption that underpins network models of neuronal variability. We propose and validate an alternative (yet not mutually exclusive) mechanism for the quenching of neuronal variability. We show how an increase in synaptic conductance in the evoked state shunts the transfer of current to the membrane potential, formally decoupling changes in their trial-to-trial variability. The ubiquity of conductance-based neuronal transfer, combined with the simplicity of our model, provides an appealing framework. In particular, it shows how the dependence of cellular properties upon neuronal state is a critical, yet often ignored, factor. Further, our mechanism does not require a decomposition of variability into spiking and firing rate components, thereby challenging a long-held view of neuronal activity.
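
The shunting argument can be illustrated with the quasi-static membrane equation V = (g_L E_L + g_E E_E + g_I E_I) / (g_L + g_E + g_I): synaptic current fluctuations are divided by the total conductance, so raising the conductance in the evoked state can reduce voltage variability even as current variability grows. A toy numerical check with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
E_L, E_E, E_I, g_L = -70.0, 0.0, -80.0, 10.0     # reversal potentials (mV) and leak conductance (nS)

def trial_stats(gE_mean, gI_mean, gE_sd, gI_sd, n_trials=5000):
    """Trial-to-trial variance of synaptic current vs. quasi-static membrane potential."""
    g_E = np.maximum(gE_mean + gE_sd * rng.standard_normal(n_trials), 0.0)
    g_I = np.maximum(gI_mean + gI_sd * rng.standard_normal(n_trials), 0.0)
    g_tot = g_L + g_E + g_I
    V = (g_L * E_L + g_E * E_E + g_I * E_I) / g_tot           # quasi-static membrane potential
    I_syn = g_E * (E_E - V.mean()) + g_I * (E_I - V.mean())   # synaptic current near the mean voltage
    return np.var(I_syn), np.var(V)

for label, params in [("spontaneous", (3.0, 6.0, 2.0, 3.0)),
                      ("evoked (high conductance)", (20.0, 40.0, 3.0, 5.0))]:
    var_I, var_V = trial_stats(*params)
    print(f"{label:>26}: current variance {var_I:9.1f}   voltage variance {var_V:6.2f}")
```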


Hervé Rouault

CNRS, Marseille

January 20, 2021

Brain representations of the sense of direction

Spatial navigation constitutes an essential behavior that requires internal representations of environments and online memory processing to guide decisions. The precise integration of orientation and directions along trajectories critically determines the ability of animals to explore their surroundings efficiently. First, I will present recent results obtained in the fruit fly, Drosophila melanogaster. These results show how insects use an internal neural compass to store and compute the direction of cues present in their environments. Then, I will present the structure of the involved neural networks and the mechanisms at play during the processing of directional information. The results obtained in the fly mainly involve navigation in two dimensions, and thus the processing of a single angular variable. However, a recent study in bats uncovered the existence of cells representing the orientation of bats in 3D. I will show possible mechanisms to extend the neural computation of directions to 3D rotations, a problem that presents much stronger theoretical challenges. I will propose a neural network model that displays activity patterns that continuously map to the set of all 3D rotations. Moreover, the general theory can account for psychophysics observations of “mental rotations.”


Julijana Gjorgjieva

MPI, Frankfurt

January 13, 2021


A theory for Hebbian Learning in recurrent E-I networks

The Stabilized Supralinear Network is a model of recurrently connected excitatory (E) and inhibitory (I) neurons that can explain many cortical phenomena such as response normalization and inhibitory stabilization. However, the network’s connectivity is designed by hand, based on experimental measurements. How the connectivity can be learned from the sensory input statistics in a biologically plausible way is unknown. Here we present a recurrent E-I network model where all synaptic connections are simultaneously plastic. We employ local Hebbian plasticity rules and develop a theoretical framework that explains how neurons’ receptive fields decorrelate and become self-stabilized by recruiting co-tuned inhibition. As in the Stabilized Supralinear Network, the circuit’s response is normalized -- the response to a combined stimulus is equal to a weighted sum of the individual stimulus responses. In summary, we present a biologically plausible theoretical framework to model plasticity in fully plastic recurrent E-I networks. While the connectivity is derived from the sensory input statistics, the circuit performs meaningful computations. Our work provides a mathematical framework of plasticity in recurrent networks, which has previously only been studied numerically and can serve as the basis for a new generation of brain-inspired unsupervised machine learning algorithms.


Omri Barak 

Technion, Haifa

January 6, 2021

Learning from learning in recurrent neural networks

Learning a new skill requires assimilating into our brain the regularities of the external world and how our body interacts with them as we engage in this skill. Trained Recurrent Neural Networks (TRNNs) are increasingly used as models of neural circuits of animals that were trained in laboratory setups, but the learning process itself has received less attention. Furthermore, most use of TRNNs is of a heuristic, rather than theory-based, nature, leaving many open questions: Which tasks yield to this approach and why? How do initial network architecture and learning rules bias the resultant network? In this talk, I will argue that studying the learning process of TRNNs can both advance our understanding of TRNNs and set up possible comparisons to the biological process of learning. 


Theory and modeling of whisking rhythm generation in the brainstem


David Golomb

Ben Gurion University

December 30, 2020

The vIRt nucleus in the medulla, composed of mainly inhibitory neurons, is necessary for whisking rhythm generation. It innervates motoneurons in the facial nucleus (FN) that project to intrinsic vibrissa muscles. The nearby pre-Bötzinger complex (pBötC), which generates inhalation, sends inhibitory inputs to the vIRt nucleus which contribute to the synchronization of vIRt neurons. Lower-amplitude periodic whisking, however, can occur after decay of the pBötC signal. To explain how the vIRt network generates these “intervening” whisks by bursting in synchrony, and how pBötC input induces strong whisks, we construct and analyze a conductance-based (CB) model of the vIRt circuit composed of two hypothetical groups, vIRtr and vIRtp, of bursting inhibitory neurons with spike-frequency adaptation currents and constant external inputs. The CB model is reduced to a rate model to enable analytical treatment. We find, analytically and computationally, that without pBötC input, periodic bursting states occur within certain ranges of network connectivities. Whisk amplitudes increase with the level of constant external input to the vIRt. With pBötC inhibition intact, the amplitude of the first whisk in a breathing cycle is larger than that of the intervening whisks for large pBötC input and small inhibitory coupling between the vIRt sub-populations. The pBötC input advances the next whisk and reduces its amplitude if it arrives at the beginning of the whisking cycle generated by the vIRt, and delays the next whisk if it arrives at the end of that cycle. Our theory provides a mechanism for whisking generation and reveals how whisking frequency and amplitude are controlled.
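
A minimal rate-model caricature of the central idea (two mutually inhibitory populations with slow spike-frequency adaptation and constant drive, producing anti-phase rhythmic bursts whose amplitude grows with the drive); the equations and parameters below are illustrative stand-ins, not the reduced model from the talk.

```python
import numpy as np

def simulate(I_ext=1.5, w_inh=1.5, beta=1.0, tau_a=0.5, dt=0.001, t_max=10.0):
    """Two populations with reciprocal inhibition and slow adaptation (illustrative only)."""
    f = lambda x: np.maximum(x, 0.0)              # threshold-linear rate function
    r = np.array([0.8, 0.1])                      # asymmetric start to break symmetry
    a = np.zeros(2)
    trace = []
    for _ in range(int(t_max / dt)):
        inh = w_inh * r[::-1]                     # each population inhibits the other
        r += dt / 0.01 * (-r + f(I_ext - inh - a))    # fast rate dynamics (tau_r = 10 ms)
        a += dt / tau_a * (-a + beta * r)             # slow spike-frequency adaptation
        trace.append(r[0])
    trace = np.array(trace)
    n_bursts = int(np.sum(np.diff((trace > 0.5).astype(int)) == 1))
    return n_bursts, float(trace.max())

for drive in (1.5, 2.5):
    bursts, peak = simulate(I_ext=drive)
    print(f"drive {drive}: bursts of population 1 in 10 s = {bursts}, peak rate = {peak:.2f}")
```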


Stefano Recanatesi

University of Washington

December 23, 2020


Linking dimensionality to computation in neural networks

The link between behavior, learning and the underlying connectome is a fundamental open problem in neuroscience. In my talk I will show how it is possible to develop a theory that bridges across these three levels (animal behavior, learning and network connectivity) based on the geometrical properties of neural activity. The central tool in my approach is the dimensionality of neural activity. I will link animal complex behavior to the geometry of neural representations, specifically their dimensionality; I will then show how learning shapes changes in such geometrical properties and how local connectivity properties can further regulate them. As a result, I will explain how the complexity of neural representations emerges from both behavioral demands (top-down approach) and learning or connectivity features (bottom-up approach). I will build these results regarding neural dynamics and representations starting from the analysis of neural recordings, by means of theoretical and computational tools that blend dynamical systems, artificial intelligence and statistical physics approaches.
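
One standard way to quantify the dimensionality of neural activity is the participation ratio of the covariance eigenvalues; a short sketch on synthetic data (the data and the specific measure are assumptions for illustration):

```python
import numpy as np

def participation_ratio(activity):
    """Dimensionality measure: PR = (sum_i lambda_i)^2 / sum_i lambda_i^2,
    where lambda_i are eigenvalues of the neural covariance matrix."""
    centered = activity - activity.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centered.T))
    return eigvals.sum() ** 2 / np.sum(eigvals ** 2)

rng = np.random.default_rng(0)
n_time, n_neurons, latent_dim = 2000, 100, 5

# Low-dimensional activity: 5 latent signals mixed into 100 neurons, plus weak noise.
low_d = rng.standard_normal((n_time, latent_dim)) @ rng.standard_normal((latent_dim, n_neurons))
high_d = rng.standard_normal((n_time, n_neurons))

print("PR of low-dimensional activity:", round(participation_ratio(low_d + 0.1 * high_d), 1))
print("PR of unstructured activity   :", round(participation_ratio(high_d), 1))
```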


Moritz Helias

Juelich Research Center

December 16, 2020


Transient chaotic dimensionality expansion by recurrent networks

Cortical neurons communicate with spikes, which are discrete events in time and value. They often show optimal computational performance close to a transition to rate chaos; chaos that is driven by local and smooth averages of the discrete activity. Here we analyze microscopic and rate chaos in discretely coupled networks of binary neurons by a model-independent field theory. We find a strongly network-size-dependent transition to microscopic chaos and a chaotic submanifold that spans only a finite fraction of the entire activity space. Rate chaos is shown to be impossible in these networks. Applying stimuli to a strongly microscopically chaotic binary network that acts as a reservoir, one observes a transient expansion of the dimensionality of the representing neuronal space. Crucially, the number of dimensions corrupted by noise lags behind the informative dimensions. This translates to a transient peak in the network's classification performance even deep in the chaotic regime, extending the view that computational performance is always optimal near the edge of chaos. Classification performance peaks rapidly within one activation per neuron, demonstrating fast event-based computation. The generality of this mechanism is underlined by simulations of spiking networks of leaky integrate-and-fire neurons.
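
A small sketch of the microscopic-chaos diagnostic in a binary network, under assumed deterministic asynchronous sign updates and Gaussian couplings (the talk's field-theoretic analysis is not reproduced here): flip one unit in a copy of the network and track the Hamming distance between the two copies.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
J = rng.standard_normal((n, n)) / np.sqrt(n)    # random couplings, no self-coupling
np.fill_diagonal(J, 0.0)

def sweep(s, order):
    """One asynchronous sweep of deterministic threshold (sign) updates."""
    for i in order:
        s[i] = 1 if J[i] @ s > 0 else -1
    return s

s1 = rng.choice([-1, 1], size=n)
s2 = s1.copy()
s2[0] = -s2[0]                                   # microscopic perturbation: flip a single unit

for t in range(8):
    order = rng.permutation(n)                   # identical update order for both copies
    sweep(s1, order)
    sweep(s2, order)
    print(f"sweep {t + 1}: Hamming distance {int(np.sum(s1 != s2))}")
```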

Yoram Burak

Hebrew University

December 9, 2020

Linking neural representations of space by multiple attractor networks in the entorhinal cortex and the hippocampus

In the past decade, evidence has accumulated in favor of the hypothesis that multiple sub-networks in the medial entorhinal cortex (MEC) are characterized by low-dimensional, continuous attractor dynamics. Much has been learned about the joint activity of grid cells within a module (a module consists of grid cells that share a common grid spacing), but little is known about the interactions between modules. Under typical conditions of spatial exploration, in which sensory cues are abundant, all grid cells in the MEC represent the animal’s position in space and their joint activity lies on a two-dimensional manifold. However, if the grid cells in each module mechanistically constitute an independent attractor network, then under conditions in which salient sensory cues are absent, errors could accumulate in the different modules in an uncoordinated manner. Such uncoordinated errors would give rise to catastrophic readout errors when attempting to decode position from the joint grid-cell activity. I will discuss recent theoretical works from our group, in which we explored different mechanisms that could impose coordination on the different modules. One of these mechanisms involves coordination with the hippocampus and must be set up such that it operates across multiple spatial maps that represent different environments. The other mechanism is internal to the entorhinal cortex and independent of the hippocampus.
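
The catastrophic-readout argument can be illustrated with a toy decoder over two grid modules: coordinated errors shift the decoded position only slightly, while small uncoordinated phase errors can move it by a large distance. The spacings, noise levels, and decoder below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
spacings = np.array([0.4, 0.52])       # spacings of two grid modules (m), illustrative values
x_true = 1.5                           # true position on a short linear track
positions = np.linspace(0.0, 5.0, 10001)

def decode(phases):
    """Position that best agrees with the two modules' phases (von Mises-style score)."""
    score = np.zeros_like(positions)
    for lam, phi in zip(spacings, phases):
        score += np.cos(2 * np.pi * (positions / lam - phi))
    return positions[np.argmax(score)]

true_phases = (x_true / spacings) % 1.0

# coordinated errors: both modules reflect the same small displacement of the represented position
for _ in range(3):
    phases = ((x_true + 0.05 * rng.standard_normal()) / spacings) % 1.0
    print(f"coordinated error   -> decoded {decode(phases):.2f} m")

# uncoordinated errors: independent phase drift in each module can yield catastrophic readout errors
for _ in range(5):
    phases = (true_phases + 0.08 * rng.standard_normal(2)) % 1.0
    print(f"uncoordinated error -> decoded {decode(phases):.2f} m")
```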


Cengiz Pehlevan

Harvard University

December 2, 2020

A function approximation perspective on neural representations

Activity patterns of neural populations in natural and artificial neural networks constitute representations of data. The nature of these representations and how they are learned are key questions in neuroscience and deep learning. In this talk, I will describe my group's efforts in building a theory of representations as feature maps leading to sample-efficient function approximation. Kernel methods are at the heart of these developments. I will present applications to deep learning and neuronal data.


Luca Mazzucato

U. of Oregon

November 25, 2020


The emergence and modulation of time in neural circuits and behavior

Spontaneous behavior in animals and humans shows a striking amount of variability both in the spatial domain (which actions to choose) and temporal domain (when to act). Concatenating actions into sequences and behavioral plans reveals the existence of a hierarchy of timescales ranging from hundreds of milliseconds to minutes. How do multiple timescales emerge from neural circuit dynamics? How do circuits modulate temporal responses to flexibly adapt to changing demands? In this talk, we will present recent results from experiments and theory suggesting a new computational mechanism generating the temporal variability underlying naturalistic behavior. We will show how neural activity from premotor areas unfolds through temporal sequences of attractors, which predict the intention to act. These sequences naturally emerge from recurrent cortical networks, where correlated neural variability plays a crucial role in explaining the observed variability in action timing. We will then discuss how reaction times in these recurrent circuits can be accelerated or slowed down via gain modulation, induced by neuromodulation or perturbations. Finally, we will present a general mechanism producing a reservoir of multiple timescales in recurrent networks.


Kanaka Rajan 

Icahn School of Medicine at Mount Sinai

November 18, 2020


Inferring brain-wide current flow using data-constrained neural network models

The Rajan lab designs neural network models constrained by experimental data and reverse-engineers them to figure out how brain circuits function in health and disease. Recently, we have been developing a powerful new theory-based framework for “in-vivo tract tracing” from multi-regional neural activity collected experimentally. We call this framework CURrent-Based Decomposition (CURBD). CURBD employs recurrent neural networks (RNNs) directly constrained, from the outset, by time-series measurements acquired experimentally, such as Ca2+ imaging or electrophysiological data. Once trained, these data-constrained RNNs let us infer matrices quantifying the interactions between all pairs of modeled units. Such model-derived “directed interaction matrices” can then be used to separately compute the excitatory and inhibitory input currents that drive a given neuron from all other neurons. Therefore different current sources can be de-mixed – either within the same region or from other regions, potentially brain-wide – which collectively give rise to the population dynamics observed experimentally. Source de-mixed currents obtained through CURBD allow an unprecedented view into multi-region mechanisms inaccessible from measurements alone. We have applied this method successfully to several types of neural data from our experimental collaborators, e.g., zebrafish (Deisseroth lab, Stanford), mice (Harvey lab, Harvard), monkeys (Rudebeck lab, Sinai), and humans (Rutishauser lab, Cedars Sinai), where we have discovered both brain-wide directed interactions and inter-area currents during different types of behaviors. With this framework based on data-constrained multi-region RNNs and CURBD, we can ask if there are conserved multi-region mechanisms across different species, as well as identify key divergences.
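
The bookkeeping at the heart of the current decomposition can be sketched in a few lines: once a multi-region RNN with interaction matrix J has been fit, the current into region A originating from region B is the corresponding block of J applied to region B's activity. The example below uses a random, untrained J purely to show the indexing; it is not the CURBD fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_per_region, regions = 20, ["A", "B", "C"]
n = n_per_region * len(regions)
idx = {r: slice(i * n_per_region, (i + 1) * n_per_region) for i, r in enumerate(regions)}

J = rng.standard_normal((n, n)) / np.sqrt(n)     # stand-in for the trained interaction matrix
rates = rng.standard_normal((n, 500))            # stand-in for model unit activity over time

# de-mix the total recurrent current into region A by source region
currents_into_A = {src: J[idx["A"], idx[src]] @ rates[idx[src]] for src in regions}
total = sum(currents_into_A.values())
assert np.allclose(total, J[idx["A"], :] @ rates)   # source currents add up to the full current
print({src: round(float(np.abs(c).mean()), 3) for src, c in currents_into_A.items()})
```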

A robust neural integrator based on the interactions of three time scales

Bard Ermentrout

University of Pittsburgh

November 11, 2020

Neural integrators are circuits that are able to code analog information such as spatial location or amplitude. Storing amplitude requires the network to have a large number of attractors. In classic models with recurrent excitation, such networks require very careful tuning to behave as integrators and are not robust to small mistuning of the recurrent weights. In this talk, I introduce a circuit with recurrent connectivity that is subjected to a slow subthreshold oscillation (such as the theta rhythm in the hippocampus). I show that such a network can robustly maintain many discrete attracting states. Furthermore, the firing rates of the neurons in these attracting states are much closer to those seen in recordings from animals. I show that the mechanism for this can be explained by the instability regions of the Mathieu equation. I then extend the model in various ways and, for example, show that in a spatially distributed network, it is possible to code location and amplitude simultaneously. I show that the resulting mean-field equations are equivalent to a certain discontinuous differential equation.
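
For reference, the Mathieu equation invoked here is, in standard form,

```latex
\ddot{x} + \left[ \delta + \varepsilon \cos t \right] x = 0
```

Depending on (δ, ε), solutions either remain bounded or grow exponentially, and the parameter plane is partitioned into stability regions and instability tongues; in this picture, the slow subthreshold oscillation plays the role of the periodic modulation.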


Larry Abbott

Columbia University

November 4, 2020

Vector addition in the navigational circuits of the fly

In a cross wind, the direction a fly moves through the air may differ from its heading direction, the direction defined by its body axis. I will present a model based on experimental results that reveals how a heading direction “compass” signal is combined with optic flow to compute and represent the direction that a fly is traveling. This provides a general framework for understanding how flies perform vector computations.
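
The underlying vector computation can be illustrated with the standard phasor identity: a population with a sinusoidal activity profile encodes a two-dimensional vector in its amplitude and phase, and summing two such profiles adds the encoded vectors. A small sketch with made-up amplitudes and angles (not the fly data or the specific circuit model):

```python
import numpy as np

angles = np.linspace(0, 2 * np.pi, 16, endpoint=False)   # preferred directions of a bump population

def profile(length, direction):
    """Sinusoidal activity profile encoding a vector with given length and direction."""
    return length * np.cos(angles - direction)

heading = profile(1.0, 0.3)                 # component along the body axis (illustrative numbers)
drift = profile(0.6, 0.3 + np.pi / 2)       # sideways drift component, e.g. signaled by optic flow

travel = heading + drift                    # summing activity profiles adds the encoded vectors

# read out the encoded travel vector from the summed profile
z = np.sum(travel * np.exp(1j * angles)) / (len(angles) / 2)
expected = 1.0 * np.exp(0.3j) + 0.6 * np.exp(1j * (0.3 + np.pi / 2))
print("travel direction from summed profile (rad):", round(float(np.angle(z)), 3))
print("travel direction from explicit vector sum :", round(float(np.angle(expected)), 3))
```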
