Glassy phase in dynamically balanced networks

 

Gianluigi Mongillo

CNRS

February, 17, 2021

We study the dynamics of (inhibitory) balanced networks while varying (i) the level of symmetry in the synaptic connectivity; and (ii) the variance of the synaptic efficacies (synaptic gain). We find three regimes of activity. For suitably low synaptic gain, regardless of the level of symmetry, there exists a unique stable fixed point. Using a cavity-like approach, we develop a quantitative theory that describes the statistics of the activity at this unique fixed point, and the conditions for its stability. As the synaptic gain increases, the unique fixed point destabilizes, and the network exhibits chaotic activity for zero or negative levels of symmetry (i.e., random or antisymmetric connectivity). For positive levels of symmetry, instead, there is multi-stability among a large number of marginally stable fixed points. In this regime, ergodicity is broken and the network exhibits non-exponential relaxational dynamics. We discuss the potential relevance of such a “glassy” phase to explaining some features of cortical activity.

  • YouTube
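As a minimal illustration of the setup described in the abstract (all parameter values invented here, not taken from the talk), the following sketch builds a random coupling matrix with a tunable symmetry level eta and a gain g, then integrates simple rate dynamics into the unique stable fixed point expected at low gain:

```python
import numpy as np

# Illustrative sketch, not the speaker's model: J_ij and J_ji are drawn jointly
# Gaussian with correlation eta (eta = 1 symmetric, -1 antisymmetric, 0 random)
# and variance g^2/n. At low gain g the dynamics settle into a unique fixed point.
rng = np.random.default_rng(0)

def coupling_matrix(n, g, eta, rng):
    x = rng.standard_normal((n, n))
    y = rng.standard_normal((n, n))
    upper = np.triu(np.ones((n, n), dtype=bool), 1)
    j = np.zeros((n, n))
    j[upper] = x[upper]
    j.T[upper] = eta * x[upper] + np.sqrt(1.0 - eta**2) * y[upper]  # corr(J_ij, J_ji) = eta
    return g * j / np.sqrt(n)

n, g, eta = 200, 0.5, 0.6              # low gain: unique stable fixed point expected
J = coupling_matrix(n, g, eta, rng)
upper = np.triu(np.ones((n, n), dtype=bool), 1)
symmetry = np.corrcoef(J[upper], J.T[upper])[0, 1]   # empirical symmetry level

b = np.ones(n)                          # constant external drive (invented)
x = np.zeros(n)
for _ in range(3000):                   # Euler integration of dx/dt = -x + J tanh(x) + b
    x += 0.1 * (-x + J @ np.tanh(x) + b)
residual = np.max(np.abs(-x + J @ np.tanh(x) + b))   # ~0 at a fixed point
```

Raising g past the instability point in this sketch would be the entry point to the chaotic or glassy regimes discussed in the talk.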

Remi Monasson

CNRS, Paris

February, 10, 2021

  • YouTube

Emergence of long time scales in data-driven network models of zebrafish activity

How can neural networks exhibit persistent activity on time scales much larger than allowed by cellular properties? We address this question in the context of larval zebrafish, a model vertebrate that is accessible to brain-scale neuronal recording and high-throughput behavioral studies. We study in particular the dynamics of a bilaterally distributed circuit, the so-called ARTR, comprising hundreds of neurons. The ARTR exhibits slow antiphasic alternations between its left and right subpopulations, which can be modulated by the water temperature, and drive the coordinated orientation of swim bouts, thus organizing the fish's spatial exploration. To elucidate the mechanism leading to this slow self-oscillation, we train a network graphical model (Ising) on neural recordings. Sampling the inferred model allows us to generate synthetic oscillatory activity, whose features correctly capture the observed dynamics. A mean-field analysis of the inferred model reveals the existence of several phases; activated crossing of the barriers between those phases controls the long time scales present in the network oscillations. We show in particular how the barrier heights and the nature of the phases vary with the water temperature.
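A toy version of the sampling idea can be sketched as follows. The couplings here are invented (two ferromagnetic blocks with antiferromagnetic cross-coupling), standing in for the Ising model inferred from data; Glauber sampling then produces antiphase left/right states, with an inverse temperature beta playing the role of the barrier-controlling parameter:

```python
import numpy as np

# Toy two-population Ising model (couplings invented, not the inferred ARTR model).
# Glauber sampling settles into states where the "left" and "right" blocks have
# opposite magnetization, the caricature of the antiphasic alternation above.
rng = np.random.default_rng(1)
nb = 10                                    # neurons per block
N = 2 * nb
J = np.zeros((N, N))
J[:nb, :nb] = 1.0 / nb                     # within-block: ferromagnetic
J[nb:, nb:] = 1.0 / nb
J[:nb, nb:] = J[nb:, :nb] = -0.5 / nb      # across blocks: antiferromagnetic
np.fill_diagonal(J, 0.0)
beta = 1.5                                 # effective inverse temperature

s = np.concatenate([np.ones(nb), -np.ones(nb)])   # start in a left-active state
m_left, m_right = [], []
for sweep in range(400):
    for i in rng.permutation(N):           # one Glauber update per neuron per sweep
        h = J[i] @ s
        s[i] = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * beta * h)) else -1.0
    m_left.append(s[:nb].mean())
    m_right.append(s[nb:].mean())
antiphase = np.mean(np.array(m_left) * np.array(m_right))   # negative if antiphasic
```

In the data-driven model, lowering the barrier (e.g., by changing the effective temperature) speeds up the activated transitions between the two phases, which is the proposed origin of the slow ARTR time scale.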

 

James Fitzgerald 

Janelia Research Campus

February, 3, 2021

  • YouTube

A geometric framework to predict structure from function in neural networks

The structural connectivity matrix of synaptic weights between neurons is a critical determinant of overall network function. However, quantitative links between neural network structure and function are complex and subtle. For example, many networks can give rise to similar functional responses, and the same network can function differently depending on context. Whether certain patterns of synaptic connectivity are required to generate specific network-level computations is largely unknown. Here we introduce a geometric framework for identifying synaptic connections required by steady-state responses in recurrent networks of rectified-linear neurons. Assuming that the number of specified response patterns does not exceed the number of input synapses, we analytically calculate all feedforward and recurrent connectivity matrices that can generate the specified responses from the network inputs. We then use this analytical characterization to rigorously analyze the solution space geometry and derive certainty conditions guaranteeing a non-zero synapse between neurons.
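The degeneracy emphasized above ("many networks can give rise to similar functional responses") can be made concrete with a toy example. All sizes and data below are invented: given p specified response patterns with p not exceeding the m input synapses, any choice of recurrent matrix W admits a feedforward matrix F that reproduces the responses as steady states of r = relu(W r + F x):

```python
import numpy as np

# Toy sketch of the solution-space degeneracy (invented data, tiny sizes).
# R: specified steady-state responses (n neurons x p patterns, all positive);
# X: corresponding inputs (m input synapses x p patterns, p <= m, full rank).
rng = np.random.default_rng(2)
n, m, p = 5, 4, 3
R = rng.uniform(0.5, 1.5, (n, p))
X = rng.standard_normal((m, p))

def feedforward_for(W):
    """Solve F X = R - W R; exact whenever the p input patterns are independent."""
    return (R - W @ R) @ np.linalg.pinv(X)

W1 = 0.1 * rng.standard_normal((n, n))     # two different recurrent matrices ...
W2 = 0.1 * rng.standard_normal((n, n))
F1, F2 = feedforward_for(W1), feedforward_for(W2)

ok1 = np.allclose(np.maximum(W1 @ R + F1 @ X, 0.0), R)   # ... both reproduce R
ok2 = np.allclose(np.maximum(W2 @ R + F2 @ X, 0.0), R)   # as rectified fixed points
```

The geometric framework in the talk characterizes this whole solution space and asks when it nonetheless forces particular synapses to be non-zero.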

 

Brent Doiron

University of Chicago

January, 27, 2021

  • YouTube

Cellular mechanisms behind stimulus evoked quenching of variability  

A wealth of experimental studies show that the trial-to-trial variability of neuronal activity is quenched during stimulus-evoked responses. This fact has helped ground a popular view that the variability of spiking activity can be decomposed into two components. The first is due to irregular spike timing conditioned on the firing rate of a neuron (i.e., a Poisson process), and the second is the trial-to-trial variability of the firing rate itself. Quenching of the variability of the overall response is assumed to reflect a suppression of firing-rate variability. Network models have explained this phenomenon through a variety of circuit mechanisms. However, in all cases, from the vantage of a neuron embedded within the network, quenching of its response variability is inherited from its synaptic input. We analyze in vivo whole-cell recordings from principal cells in layer (L) 2/3 of mouse visual cortex. While the variability of the membrane potential is quenched upon stimulation, the variability of the excitatory and inhibitory currents afferent to the neuron is amplified. This discord complicates the simple inheritance assumption that underpins network models of neuronal variability. We propose and validate an alternative (yet not mutually exclusive) mechanism for the quenching of neuronal variability. We show how an increase in synaptic conductance in the evoked state shunts the transfer of current to the membrane potential, formally decoupling changes in their trial-to-trial variability. The ubiquity of conductance-based neuronal transfer, combined with the simplicity of our model, provides an appealing framework. In particular, it shows how the dependence of cellular properties upon neuronal state is a critical, yet often ignored, factor. Further, our mechanism does not require a decomposition of variability into spiking and firing-rate components, thereby challenging a long-held view of neuronal activity.
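The shunting argument can be sketched with a passive membrane in arbitrary units (parameters invented): current fluctuations of fixed variance drive the voltage, and raising the total conductance in the "evoked" state divides the stationary voltage variance, since Var[V] = sigma^2 / (2 C g_tot) for this linear model:

```python
import numpy as np

# Toy shunting sketch (invented parameters, dimensionless units): an
# Ornstein-Uhlenbeck membrane C dV/dt = -g_tot V + sigma * noise. Same input
# variance in both conditions; only the total conductance g_tot changes.
rng = np.random.default_rng(3)

def voltage_variance(g_tot, sigma=1.0, C=1.0, dt=0.01, steps=200_000):
    tau = C / g_tot                        # membrane time constant shrinks with g_tot
    v, trace = 0.0, np.empty(steps)
    for t in range(steps):
        v += dt * (-v / tau) + (sigma / C) * np.sqrt(dt) * rng.standard_normal()
        trace[t] = v
    return trace[steps // 10:].var()       # discard an initial transient

var_spont = voltage_variance(g_tot=1.0)    # "spontaneous": low conductance
var_evoked = voltage_variance(g_tot=3.0)   # "evoked": conductance tripled
ratio = var_evoked / var_spont             # theory predicts 1/3
```

The point of the sketch is that membrane-potential variability is quenched without any reduction in input-current variability, matching the decoupling described in the abstract.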

 

Hervé Rouault

CNRS, Marseille

January, 20, 2021

Brain representations of the sense of direction

Spatial navigation constitutes an essential behavior that requires internal representations of environments and online memory processing to guide decisions. The precise integration of orientations and directions along trajectories critically determines the ability of animals to explore their surroundings efficiently. First, I will present recent results obtained in the fruit fly, Drosophila melanogaster. These results show how insects use an internal neural compass to store and compute the direction of cues present in their environments. Then, I will present the structure of the neural networks involved and the mechanisms at play during the processing of directional information. The results obtained in the fly mainly involve navigation in two dimensions, and thus the processing of a single angular variable. However, a recent study in bats uncovered the existence of cells representing the orientation of bats in 3D. I will show possible mechanisms to extend the neural computation of directions to 3D rotations, a problem that presents much stronger theoretical challenges. I will propose a neural network model that displays activity patterns that continuously map onto the set of all 3D rotations. Moreover, the general theory can account for psychophysical observations of “mental rotations.”

Julijana Gjorgjieva

MPI, Frankfurt

January, 13, 2021

  • YouTube

A theory for Hebbian Learning in recurrent E-I networks

The Stabilized Supralinear Network is a model of recurrently connected excitatory (E) and inhibitory (I) neurons that can explain many cortical phenomena such as response normalization and inhibitory stabilization. However, the network’s connectivity is designed by hand, based on experimental measurements. How the connectivity can be learned from the sensory input statistics in a biologically plausible way is unknown. Here we present a recurrent E-I network model where all synaptic connections are simultaneously plastic. We employ local Hebbian plasticity rules and develop a theoretical framework that explains how neurons’ receptive fields decorrelate and become self-stabilized by recruiting co-tuned inhibition. As in the Stabilized Supralinear Network, the circuit’s response is normalized -- the response to a combined stimulus is equal to a weighted sum of the individual stimulus responses. In summary, we present a biologically plausible theoretical framework to model plasticity in fully plastic recurrent E-I networks. While the connectivity is derived from the sensory input statistics, the circuit performs meaningful computations. Our work provides a mathematical framework of plasticity in recurrent networks, which has previously only been studied numerically and can serve as the basis for a new generation of brain-inspired unsupervised machine learning algorithms.

Omri Barak 

Technion, Haifa

January, 6, 2021

Learning from learning in recurrent neural networks

Learning a new skill requires assimilating into our brain the regularities of the external world and how our body interacts with them as we engage in this skill. Trained Recurrent Neural Networks (TRNNs) are increasingly used as models of neural circuits of animals that were trained in laboratory setups, but the learning process itself has received less attention. Furthermore, most use of TRNNs is of a heuristic, rather than theory-based, nature, leaving many open questions: Which tasks yield to this approach and why? How do initial network architecture and learning rules bias the resultant network? In this talk, I will argue that studying the learning process of TRNNs can both advance our understanding of TRNNs and set up possible comparisons to the biological process of learning. 

  • YouTube

David Golomb

Ben Gurion University

December, 30, 2020

Theory and modeling of whisking rhythm generation in the brainstem

The vIRt nucleus in the medulla, composed mainly of inhibitory neurons, is necessary for whisking rhythm generation. It innervates motoneurons in the facial nucleus (FN) that project to intrinsic vibrissa muscles. The nearby pre-Bötzinger complex (pBötC), which generates inhalation, sends inhibitory inputs to the vIRt nucleus that contribute to the synchronization of vIRt neurons. Lower-amplitude periodic whisking, however, can occur after decay of the pBötC signal. To explain how the vIRt network generates these “intervening” whisks by bursting in synchrony, and how pBötC input induces strong whisks, we construct and analyze a conductance-based (CB) model of the vIRt circuit composed of two hypothetical groups, vIRtr and vIRtp, of bursting inhibitory neurons with spike-frequency adaptation currents and constant external inputs. The CB model is reduced to a rate model to enable analytical treatment. We find, analytically and computationally, that without pBötC input, periodic bursting states occur within certain ranges of network connectivity. Whisk amplitudes increase with the level of constant external input to the vIRt. With pBötC inhibition intact, the amplitude of the first whisk in a breathing cycle is larger than that of the intervening whisks for large pBötC input and small inhibitory coupling between the vIRt sub-populations. The pBötC input advances the next whisk and reduces its amplitude if it arrives at the beginning of the whisking cycle generated by the vIRt, and delays the next whisk if it arrives at the end of that cycle. Our theory provides a mechanism for whisking generation and reveals how whisking frequency and amplitude are controlled.

  • YouTube
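A minimal rate-model caricature of the mechanism (two mutually inhibitory populations with slow spike-frequency adaptation; all parameter values invented, pBötC input omitted) already produces antiphase bursting of the kind the reduced vIRt model relies on:

```python
import numpy as np

# Half-center oscillator sketch (invented parameters, not the paper's model):
# two populations inhibit each other; slow adaptation destabilizes the symmetric
# state, so the populations burst in antiphase.
relu = lambda x: np.maximum(x, 0.0)
tau_r, tau_a = 1.0, 10.0            # fast rate, slow adaptation
w, g_a, I_ext = 2.0, 2.0, 1.0       # mutual inhibition, adaptation strength, drive
dt, steps = 0.01, 20_000

r = np.array([0.5, 0.1])            # slightly asymmetric initial condition
a = np.zeros(2)
trace = np.empty((steps, 2))
for t in range(steps):
    inputs = I_ext - w * r[::-1] - g_a * a    # each population inhibited by the other
    r += dt * (-r + relu(inputs)) / tau_r
    a += dt * (-a + r) / tau_a                # adaptation slowly tracks the rate
    trace[t] = r

late = trace[steps // 2:]                     # discard the transient
anti_corr = np.corrcoef(late[:, 0], late[:, 1])[0, 1]   # negative in antiphase
amplitude = late[:, 0].std()                             # non-zero if oscillating
```

In this sketch the oscillation period is set by the slow adaptation time constant, and increasing I_ext raises the burst amplitude, loosely mirroring the amplitude control discussed in the abstract.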

Stefano Recanatesi

University of Washington

December, 23, 2020

  • YouTube

Linking dimensionality to computation in neural networks

The link between behavior, learning and the underlying connectome is a fundamental open problem in neuroscience. In my talk I will show how it is possible to develop a theory that bridges across these three levels (animal behavior, learning and network connectivity) based on the geometrical properties of neural activity. The central tool in my approach is the dimensionality of neural activity. I will link animal complex behavior to the geometry of neural representations, specifically their dimensionality; I will then show how learning shapes changes in such geometrical properties and how local connectivity properties can further regulate them. As a result, I will explain how the complexity of neural representations emerges from both behavioral demands (top-down approach) and learning or connectivity features (bottom-up approach). I will build these results regarding neural dynamics and representations starting from the analysis of neural recordings, by means of theoretical and computational tools that blend dynamical systems, artificial intelligence and statistical physics approaches.
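One standard measure behind statements like those above is the participation ratio of the activity covariance spectrum, PR = (sum_i lambda_i)^2 / sum_i lambda_i^2. The sketch below uses synthetic data (invented here) to show the two extremes: activity confined to a 3-dimensional subspace versus unstructured activity filling nearly all dimensions:

```python
import numpy as np

# Participation-ratio sketch on synthetic data (not the recordings from the talk).
rng = np.random.default_rng(4)

def participation_ratio(X):
    """X: (samples, neurons). PR of the covariance eigenvalue spectrum."""
    Xc = X - X.mean(axis=0)
    lam = np.linalg.svd(Xc, compute_uv=False) ** 2   # proportional to cov eigenvalues
    return lam.sum() ** 2 / (lam ** 2).sum()

T, N, d = 5000, 50, 3
basis = np.linalg.qr(rng.standard_normal((N, d)))[0]   # orthonormal d-dim subspace
low_dim = rng.standard_normal((T, d)) @ basis.T        # activity on the subspace
isotropic = rng.standard_normal((T, N))                # unstructured activity

pr_low = participation_ratio(low_dim)        # close to 3
pr_iso = participation_ratio(isotropic)      # close to N (finite-sampling bias aside)
```

In the talk's framework, behavioral demands push this quantity from above while learning and local connectivity regulate it from below.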

Moritz Helias

Juelich Research Center

December, 16, 2020

  • YouTube

Transient chaotic dimensionality expansion by recurrent networks

Cortical neurons communicate with spikes, which are discrete events in time and value. They often show optimal computational performance close to a transition to rate chaos; chaos that is driven by local and smooth averages of the discrete activity. Here we analyze microscopic and rate chaos in discretely coupled networks of binary neurons by a model-independent field theory. We find a strongly network-size-dependent transition to microscopic chaos and a chaotic submanifold that spans only a finite fraction of the entire activity space. Rate chaos is shown to be impossible in these networks. Applying stimuli to a strongly microscopically chaotic binary network that acts as a reservoir, one observes a transient expansion of the dimensionality of the representing neuronal space. Crucially, the number of dimensions corrupted by noise lags behind the informative dimensions. This translates to a transient peak in the network's classification performance even deep in the chaotic regime, challenging the view that computational performance is always optimal near the edge of chaos. Classification performance peaks rapidly within one activation per neuron, demonstrating fast event-based computation. The generality of this mechanism is underlined by simulations of spiking networks of leaky integrate-and-fire neurons.
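Microscopic chaos in binary networks can be illustrated by a classic damage-spreading experiment (a toy version of the setting above; the synchronous sign update and all sizes are our choices): two replicas of the same deterministic network start one spin-flip apart, and their Hamming distance grows until the states decorrelate:

```python
import numpy as np

# Damage-spreading sketch (invented sizes, not the paper's field-theoretic analysis).
# Two replicas of one deterministic binary network, initially differing in a single
# spin; in the chaotic regime the Hamming distance saturates near N/2.
rng = np.random.default_rng(5)
N = 200
J = rng.standard_normal((N, N)) / np.sqrt(N)
np.fill_diagonal(J, 0.0)
step = lambda s: np.where(J @ s >= 0.0, 1.0, -1.0)   # synchronous binary update

s1 = np.where(rng.random(N) < 0.5, 1.0, -1.0)
s2 = s1.copy()
s2[0] *= -1.0                                        # minimal perturbation: one flip
distance = []
for t in range(30):
    s1, s2 = step(s1), step(s2)
    distance.append(int(np.sum(s1 != s2)))           # Hamming distance per step
```

The saturation of the distance below N reflects, in caricature, the finite chaotic submanifold discussed in the abstract.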

Yoram Burak

Hebrew University

December, 9, 2020

Linking neural representations of space by multiple attractor networks in the entorhinal cortex and the hippocampus

In the past decade, evidence has accumulated in favor of the hypothesis that multiple sub-networks in the medial entorhinal cortex (MEC) are characterized by low-dimensional, continuous attractor dynamics. Much has been learned about the joint activity of grid cells within a module (a module consists of grid cells that share a common grid spacing), but little is known about the interactions between modules. Under typical conditions of spatial exploration in which sensory cues are abundant, all grid cells in the MEC represent the animal's position in space and their joint activity lies on a two-dimensional manifold. However, if the modules mechanistically constitute independent attractor networks, then under conditions in which salient sensory cues are absent, errors could accumulate in the different modules in an uncoordinated manner. Such uncoordinated errors would give rise to catastrophic readout errors when attempting to decode position from the joint grid-cell activity. I will discuss recent theoretical work from our group, in which we explored different mechanisms that could impose coordination across the different modules. One of these mechanisms involves coordination with the hippocampus and must be set up so that it operates across multiple spatial maps that represent different environments. The other mechanism is internal to the entorhinal cortex and independent of the hippocampus.

  • YouTube
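Why uncoordinated module errors are catastrophic while coordinated ones are benign can be seen in a deterministic two-module toy example (spacings invented; real grid modules have incommensurate spacings and many more cells). Position is read out from its phase within each module's period:

```python
import numpy as np

# Toy grid-code decoding sketch (invented spacings, two modules only).
spacings = np.array([2.0, 3.0])          # grid periods; unambiguous range lcm = 6
grid = np.arange(0.0, 6.0, 0.001)        # candidate positions for the decoder

def decode(phases):
    """Position minimizing the summed circular distance to the observed phases."""
    cost = np.zeros_like(grid)
    for lam, phi in zip(spacings, phases):
        d = np.abs(grid % lam - phi)
        cost += np.minimum(d, lam - d)
    return grid[np.argmin(cost)]

x_true = 1.0                                       # true phases: (1.0, 1.0)
coordinated = decode((1.3 % 2.0, 1.3 % 3.0))       # both modules drift together to 1.3
uncoordinated = decode(((1.0 + 1.0) % 2.0, 1.0))   # module 0 alone drifts half a period

err_coord = abs(coordinated - 1.3)                 # small: decoder tracks the drift
err_uncoord = abs(uncoordinated - x_true)          # large: decoder jumps to x = 4
```

A coordinated drift just shifts the decoded position slightly; an uncoordinated drift of a single module makes the phase vector consistent with a distant position, which is the catastrophic readout error the coordination mechanisms are meant to prevent.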

Cengiz Pehlevan

Harvard University

December, 2, 2020

A function approximation perspective on neural representations

Activity patterns of neural populations in natural and artificial neural networks constitute representations of data. The nature of these representations and how they are learned are key questions in neuroscience and deep learning. In this talk, I will describe my group's efforts in building a theory of representations as feature maps leading to sample-efficient function approximation. Kernel methods are at the heart of these developments. I will present applications to deep learning and neuronal data.

  • YouTube
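The function-approximation viewpoint can be illustrated with a minimal kernel-regression sketch (target function, kernel, and all parameters invented for illustration): a representation induces a kernel, and the kernel determines how well a target function is learned from few samples:

```python
import numpy as np

# Kernel ridge regression sketch (toy example, not the talk's theory).
def rbf_kernel(A, B, width=1.0):
    sq = (A[:, None] - B[None, :]) ** 2
    return np.exp(-sq / (2.0 * width ** 2))

x_train = np.linspace(0.0, 2.0 * np.pi, 30)        # 30 samples of the target
y_train = np.sin(x_train)
x_test = x_train[:-1] + np.diff(x_train) / 2.0     # held-out midpoints

K = rbf_kernel(x_train, x_train)
alpha = np.linalg.solve(K + 1e-4 * np.eye(30), y_train)   # ridge-regularized fit
y_pred = rbf_kernel(x_test, x_train) @ alpha

test_error = np.max(np.abs(y_pred - np.sin(x_test)))      # small: sample-efficient
```

A target poorly aligned with the kernel's spectrum would need many more samples for the same error, which is the sample-efficiency question the talk's theory makes precise.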

Luca Mazzucato

U. of Oregon

November, 25, 2020

  • YouTube

The emergence and modulation of time in neural circuits and behavior

Spontaneous behavior in animals and humans shows a striking amount of variability both in the spatial domain (which actions to choose) and temporal domain (when to act). Concatenating actions into sequences and behavioral plans reveals the existence of a hierarchy of timescales ranging from hundreds of milliseconds to minutes. How do multiple timescales emerge from neural circuit dynamics? How do circuits modulate temporal responses to flexibly adapt to changing demands? In this talk, we will present recent results from experiments and theory suggesting a new computational mechanism generating the temporal variability underlying naturalistic behavior. We will show how neural activity from premotor areas unfolds through temporal sequences of attractors, which predict the intention to act. These sequences naturally emerge from recurrent cortical networks, where correlated neural variability plays a crucial role in explaining the observed variability in action timing. We will then discuss how reaction times in these recurrent circuits can be accelerated or slowed down via gain modulation, induced by neuromodulation or perturbations. Finally, we will present a general mechanism producing a reservoir of multiple timescales in recurrent networks.

Kanaka Rajan 

Icahn School of Medicine at Mount Sinai

November, 18, 2020

  • YouTube

Inferring brain-wide current flow using data-constrained neural network models

The Rajan lab designs neural network models constrained by experimental data, and reverse engineers them to figure out how brain circuits function in health and disease. Recently, we have been developing a powerful new theory-based framework for “in-vivo tract tracing” from multi-regional neural activity collected experimentally. We call this framework CURrent-Based Decomposition (CURBD). CURBD employs recurrent neural networks (RNNs) directly constrained, from the outset, by time-series measurements acquired experimentally, such as Ca2+ imaging or electrophysiological data. Once trained, these data-constrained RNNs let us infer matrices quantifying the interactions between all pairs of modeled units. Such model-derived “directed interaction matrices” can then be used to separately compute the excitatory and inhibitory input currents that drive a given neuron from all other neurons. Therefore, different current sources can be de-mixed – either within the same region or from other regions, potentially brain-wide – which collectively give rise to the population dynamics observed experimentally. Source-demixed currents obtained through CURBD allow an unprecedented view into multi-region mechanisms inaccessible from measurements alone. We have applied this method successfully to several types of neural data from our experimental collaborators, e.g., zebrafish (Deisseroth lab, Stanford), mice (Harvey lab, Harvard), monkeys (Rudebeck lab, Sinai), and humans (Rutishauser lab, Cedars Sinai), where we have discovered both brain-wide directed interactions and inter-area currents during different types of behaviors. With this framework, we can ask whether there are conserved multi-region mechanisms across different species, as well as identify key divergences.
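The decomposition step at the heart of CURBD can be sketched on a toy two-region rate network (the interaction matrix here is random, standing in for one inferred from data): the recurrent input to each unit splits exactly into source currents from each region via the corresponding blocks of the interaction matrix:

```python
import numpy as np

# Current-decomposition sketch (toy random network, not a data-constrained RNN).
rng = np.random.default_rng(6)
n = 40                                    # units per region; region A = [:n], B = [n:]
N = 2 * n
J = 1.2 * rng.standard_normal((N, N)) / np.sqrt(N)   # stand-in interaction matrix

x = rng.standard_normal(N)
for t in range(200):                      # simple tanh rate dynamics
    x += 0.1 * (-x + J @ np.tanh(x))
r = np.tanh(x)

total_into_A = J[:n, :] @ r               # total recurrent current into region A
from_A = J[:n, :n] @ r[:n]                # source-demixed: current from A to A
from_B = J[:n, n:] @ r[n:]                # current from B to A
decomposition_exact = np.allclose(total_into_A, from_A + from_B)
```

In CURBD the same block-wise split, applied at every time step of a trained RNN, yields the time-resolved inter-area current flows described above.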

Bard Ermentrout

University of Pittsburgh

November, 11, 2020

A robust neural integrator based on the interactions of three time scales

Neural integrators are circuits that are able to code analog information such as spatial location or amplitude. Storing amplitude requires the network to have a large number of attractors. In classic models with recurrent excitation, such networks require very careful tuning to behave as integrators and are not robust to small mistuning of the recurrent weights. In this talk, I introduce a circuit with recurrent connectivity that is subjected to a slow subthreshold oscillation (such as the theta rhythm in the hippocampus). I show that such a network can robustly maintain many discrete attracting states. Furthermore, the firing rates of the neurons in these attracting states are much closer to those seen in recordings from animals. I show that this mechanism can be explained by the instability regions of the Mathieu equation. I then extend the model in various ways and, for example, show that in a spatially distributed network it is possible to code location and amplitude simultaneously. I show that the resulting mean-field equations are equivalent to a certain discontinuous differential equation.

  • YouTube
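The Mathieu-equation stability regions invoked above can be computed directly by Floquet analysis: for x'' + (a - 2q cos 2t) x = 0, a parameter pair (a, q) lies in an instability tongue exactly when the monodromy matrix over one period pi has |trace| > 2. The parameter values below are standard illustrative choices, not taken from the talk:

```python
import numpy as np

# Floquet/monodromy sketch for the Mathieu equation x'' + (a - 2q cos 2t) x = 0.
# |trace| > 2 of the period-map matrix means parametric instability.
def monodromy_trace(a, q, n_steps=4000):
    dt = np.pi / n_steps
    def deriv(t, y):
        return np.array([y[1], -(a - 2.0 * q * np.cos(2.0 * t)) * y[0]])
    M = np.zeros((2, 2))
    for col, y0 in enumerate([(1.0, 0.0), (0.0, 1.0)]):   # two independent solutions
        y, t = np.array(y0), 0.0
        for _ in range(n_steps):                          # classical RK4 integration
            k1 = deriv(t, y)
            k2 = deriv(t + dt / 2, y + dt / 2 * k1)
            k3 = deriv(t + dt / 2, y + dt / 2 * k2)
            k4 = deriv(t + dt, y + dt * k3)
            y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            t += dt
        M[:, col] = y
    return np.trace(M)

tr_unstable = monodromy_trace(a=1.0, q=0.5)   # inside the first instability tongue
tr_stable = monodromy_trace(a=3.0, q=0.5)     # between tongues: stable
```

In the talk's circuit, it is these instability tongues, entered and exited as the slow oscillation sweeps the effective parameters, that pin the network into many discrete attracting states.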

 

Larry Abbott

Columbia University

November, 4, 2020

Vector addition in the navigational circuits of the fly

In a crosswind, the direction a fly moves through the air may differ from its heading direction, the direction defined by its body axis. I will present a model, based on experimental results, that reveals how a heading-direction “compass” signal is combined with optic flow to compute and represent the direction in which a fly is traveling. This provides a general framework for understanding how flies perform vector computations.

  • YouTube
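A generic phasor scheme (our illustration, not the specific fly circuit) shows how a population can add vectors: a direction theta with magnitude a is encoded as a cosine bump a*cos(phi_i - theta) over preferred directions phi_i, summing two bumps adds the corresponding vectors, and the population phase reads out the travel direction:

```python
import numpy as np

# Phasor sketch of neural vector addition (invented encoding, toy angles).
N = 32
phi = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)   # preferred directions

def bump(theta, a):
    return a * np.cos(phi - theta)

heading = bump(0.5, 1.0)          # airspeed vector along the heading direction
drift = bump(2.0, 0.6)            # sideways drift signalled by optic flow
population = heading + drift      # neural summation of the two bumps

z = population @ np.exp(1j * phi)            # Fourier readout of the summed bump
travel_decoded = np.angle(z)                 # phase of the summed population
travel_analytic = np.angle(1.0 * np.exp(0.5j) + 0.6 * np.exp(2.0j))  # Cartesian sum
```

Because cosine bumps add like the vectors they encode, the summed population's phase equals the angle of the Cartesian vector sum, which is the computation the compass-plus-optic-flow circuit needs to perform.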