
Yann Le Cun
Meta-FAIR & Meta AI
October 19, 2022
From Machine Learning to Autonomous Intelligence
How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons? I will propose a possible path towards autonomous intelligent agents, based on a new modular cognitive architecture and a somewhat new self-supervised training paradigm. The centerpiece of the proposed architecture is a configurable predictive world model that allows the agent to plan. Behavior and learning are driven by a set of differentiable intrinsic cost functions. The world model uses a new type of energy-based model architecture called H-JEPA (Hierarchical Joint Embedding Predictive Architecture). H-JEPA learns hierarchical abstract representations of the world that are simultaneously maximally informative and maximally predictable.
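As a rough illustration of the joint-embedding-predictive idea (one level only, with random linear maps standing in for learned encoders and predictor; this is not H-JEPA itself), the energy of a pair of observations is the distance between the predicted and the actual embedding of the second observation:

```python
# Minimal joint-embedding predictive sketch (illustrative stand-in, not H-JEPA).
import numpy as np

rng = np.random.default_rng(0)
D_OBS, D_EMB, D_LAT = 64, 16, 4

# Random linear "encoders" and "predictor" as placeholders for learned networks.
W_enc = rng.normal(size=(D_EMB, D_OBS)) / np.sqrt(D_OBS)
W_pred = rng.normal(size=(D_EMB, D_EMB + D_LAT)) / np.sqrt(D_EMB + D_LAT)

def encode(obs):
    return np.tanh(W_enc @ obs)

def predict(s_x, z):
    return np.tanh(W_pred @ np.concatenate([s_x, z]))

def energy(x, y, z):
    """Energy = squared distance between predicted and actual embedding of y."""
    return np.sum((predict(encode(x), z) - encode(y)) ** 2)

x, y = rng.normal(size=D_OBS), rng.normal(size=D_OBS)
# Planning / inference would search over latents z (and actions) to minimize the energy.
z_candidates = rng.normal(size=(100, D_LAT))
best = min(z_candidates, key=lambda z: energy(x, y, z))
print("lowest energy found:", energy(x, y, best))
```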

Julia Steinberg
Princeton University
October 26, 2022
Associative memory of structured knowledge
A long-standing challenge in biological and artificial intelligence is to understand how new knowledge can be constructed from known building blocks in a way that is amenable to computation by neuronal circuits. Here we focus on the task of storage and recall of structured knowledge in long-term memory. Specifically, we ask how recurrent neuronal networks can store and retrieve multiple knowledge structures. We model each structure as a set of binary relations between events and attributes (attributes may represent, e.g., temporal order, spatial location, or role in a semantic structure), and map each structure to a distributed neuronal activity pattern using a vector symbolic architecture (VSA) scheme. We then use associative memory plasticity rules to store the binarized patterns as fixed points in a recurrent network. By a combination of signal-to-noise analysis and numerical simulations, we demonstrate that our model allows for efficient storage of these knowledge structures, such that the memorized structures as well as their individual building blocks (e.g., events and attributes) can be subsequently retrieved from partial retrieval cues. We show that long-term memory of structured knowledge relies on a new principle of computation beyond the memory basins. Finally, we show that our model can be extended to store sequences of memories as single attractors.
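A toy sketch of the general recipe described here (a VSA role-filler encoding stored in a Hopfield-style outer-product memory; dimensions, the specific binding/bundling operations and the example structure are illustrative assumptions, not the paper's exact construction):

```python
# Toy sketch: store a VSA-encoded knowledge structure in a Hopfield-style network.
import numpy as np

rng = np.random.default_rng(1)
N = 2000  # number of neurons / hypervector dimension

def rand_vec():
    return rng.choice([-1, 1], size=N)

# Atomic hypervectors for attributes (roles) and events (fillers).
roles = {r: rand_vec() for r in ["agent", "action", "patient"]}
fillers = {f: rand_vec() for f in ["cat", "chases", "mouse"]}

# Encode one structure: bundle (sum + sign) of role-filler bindings (elementwise product).
def encode(pairs):
    return np.sign(sum(roles[r] * fillers[f] for r, f in pairs))

patterns = [encode([("agent", "cat"), ("action", "chases"), ("patient", "mouse")])]

# Hebbian outer-product storage: stored patterns become fixed points of the recurrent net.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

# Retrieve from a partial cue (half of the entries zeroed out).
cue = patterns[0].copy()
cue[: N // 2] = 0
x = cue
for _ in range(10):
    x = np.sign(W @ x + 1e-9)  # small tie-breaker
print("overlap with stored pattern:", float(x @ patterns[0]) / N)

# Unbinding recovers a building block: x * roles["agent"] is close to fillers["cat"].
decoded = np.sign(x * roles["agent"])
print("similarity to 'cat':", float(decoded @ fillers["cat"]) / N)
```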

Alexis Dubreuil
CNRS, Bordeaux
November 2, 2022
The role of population structure in computations through neural dynamics
Neural computations are currently investigated using two separate approaches: sorting neurons into functional subpopulations or examining the low-dimensional dynamics of collective activity. Whether and how these two aspects interact to shape computations is currently unclear. Using a novel approach to extract computational mechanisms from networks trained on neuroscience tasks, here we show that the dimensionality of the dynamics and subpopulation structure play fundamentally complementary roles. Although various tasks can be implemented by increasing the dimensionality in networks with fully random population structure, flexible input–output mappings instead require a non-random population structure that can be described in terms of multiple subpopulations. Our analyses revealed that such a subpopulation structure enables flexible computations through a mechanism based on gain-controlled modulations that flexibly shape the collective dynamics. Our results lead to task-specific predictions for the structure of neural selectivity, for inactivation experiments and for the involvement of different neurons in multi-tasking.
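A minimal sketch of the model class this kind of analysis operates on, as I read the abstract: a low-rank recurrent network whose connectivity and input loadings are drawn with subpopulation-dependent statistics. All numbers are arbitrary, and this is not one of the trained networks from the talk:

```python
# Sketch: rank-one RNN whose connectivity loadings differ across two subpopulations.
import numpy as np

rng = np.random.default_rng(2)
N, dt, T = 1000, 0.05, 400
pop = rng.integers(0, 2, size=N)                  # subpopulation label of each neuron

# Loading vectors m, n and input weights I drawn with population-dependent statistics.
stats = {0: dict(m=1.0, n=0.5, I=1.0), 1: dict(m=-1.0, n=1.5, I=0.2)}
m = np.array([rng.normal(stats[p]["m"], 1.0) for p in pop])
n = np.array([rng.normal(stats[p]["n"], 1.0) for p in pop])
I = np.array([rng.normal(stats[p]["I"], 1.0) for p in pop])
J = np.outer(m, n) / N                            # rank-one recurrent connectivity

x = np.zeros(N)
kappa = []                                        # latent variable along m
for t in range(T):
    u = 1.0 if t < T // 2 else 0.0                # step input
    x += dt * (-x + J @ np.tanh(x) + I * u)
    kappa.append(m @ np.tanh(x) / N)
print("latent kappa at input offset and at the end:", kappa[T // 2 - 1], kappa[-1])
```

Modulating the gain or input of one of the two subpopulations in such a model reshapes the effective latent dynamics, which gives a feel for the gain-controlled mechanism referred to above.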

Yonatan Aljadeff
UCSD
November 9, 2022
Shallow networks run deep: How peripheral preprocessing facilitates odor classification
Drosophila olfactory sensory hairs ("sensilla") typically house two olfactory receptor neurons (ORNs) which can laterally inhibit each other via electrical ("ephaptic") coupling. ORN pairing is highly stereotyped and genetically determined. Thus, olfactory signals arriving in the Antennal Lobe (AL) have been pre-processed by a fixed and shallow network at the periphery. To uncover the functional significance of this organization, we developed a nonlinear phenomenological model of asymmetrically coupled ORNs responding to odor mixture stimuli. We derived an analytical solution to the ORNs’ dynamics, which shows that the peripheral network can extract the valence of specific odor mixtures via transient amplification. Our model predicts that for efficient read-out of the amplified valence signal there must exist specific patterns of downstream connectivity that reflect the organization at the periphery. Analysis of AL→Lateral Horn (LH) fly connectomic data reveals evidence directly supporting this prediction. We further studied the effect of ephaptic coupling on olfactory processing in the AL→Mushroom Body (MB) pathway. We show that stereotyped ephaptic interactions between ORNs lead to a clustered odor representation of glomerular responses. Such clustering in the AL is an essential assumption of theoretical studies on odor recognition in the MB. Together our work shows that preprocessing of olfactory stimuli by a fixed and shallow network increases sensitivity to specific odor mixtures, and aids in the learning of novel olfactory stimuli.
Work led by Palka Puri, in collaboration with Chih-Ying Su and Shiuan-Tze Wu
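As a rough illustration of how asymmetric lateral inhibition between the two ORNs of a sensillum can transiently amplify a mixture-difference signal, here is a toy two-unit rate model. The time constants, weights and rate-model form are assumptions for illustration, not the authors' biophysical model:

```python
# Toy sketch: two ORNs sharing a sensillum, with asymmetric mutual ("ephaptic-like") inhibition.
import numpy as np

def relu(x):
    return max(x, 0.0)

dt, n_steps = 0.001, 2000
tau_a, tau_b = 0.05, 0.20          # ORN A responds faster than ORN B (assumption)
w_a2b, w_b2a = 0.8, 0.3            # A inhibits B more strongly than B inhibits A

r_a = r_b = 0.0
diff = []
for t in range(n_steps):
    stim = 1.0 if 200 <= t < 1800 else 0.0        # odor-mixture pulse driving both ORNs
    r_a += dt / tau_a * (-r_a + relu(stim - w_b2a * r_b))
    r_b += dt / tau_b * (-r_b + relu(stim - w_a2b * r_a))
    diff.append(r_a - r_b)

# The A-minus-B signal overshoots at stimulus onset before settling to a smaller steady value.
print("peak difference:", round(max(diff), 3), " steady-state difference:", round(diff[1750], 3))
```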
November 16, 2022
Society for Neuroscience Meeting

Barbara Webb
University of Edinburgh
November 23, 2022
Neural circuits for vector processing in the insect brain
Several species of insects have been observed to perform accurate path integration, constantly updating a vector memory of their location relative to a starting position, which they can use to take a direct return path. Foraging insects such as bees and ants are also able to store and recall the vectors to return to food locations, and to take novel shortcuts between these locations. Other insects, such as dung beetles, are observed to integrate multimodal directional cues in a manner well described by vector addition. All these processes appear to be functions of the Central Complex, a highly conserved and strongly structured circuit in the insect brain. Modelling this circuit, at the single neuron level, suggests it has general capabilities for vector encoding, vector memory, vector addition and vector rotation that can support a wide range of directed and navigational behaviours.
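The vector operations this refers to can be written down in a few lines. The sketch below is a didactic caricature (arbitrary trajectories, no neurons), not the Central Complex circuit model discussed in the talk:

```python
# Sketch: path integration and vector memory as simple 2-D vector operations.
import numpy as np

rng = np.random.default_rng(3)

def integrate_path(headings, speeds, dt=1.0):
    """Accumulate displacement from heading (radians) and speed samples."""
    steps = np.stack([speeds * np.cos(headings), speeds * np.sin(headings)], axis=1) * dt
    return steps.sum(axis=0)

# Outbound trip from the nest: the accumulated vector is the current location.
headings = rng.uniform(0, 2 * np.pi, size=200)
speeds = rng.uniform(0.5, 1.5, size=200)
food_vector = integrate_path(headings, speeds)    # nest -> food, stored as a vector memory

home_vector = -food_vector                        # direct return path
print("homing direction (deg):", np.degrees(np.arctan2(*home_vector[::-1])))

# A novel shortcut between two remembered food sites is a vector subtraction.
food_vector_2 = integrate_path(rng.uniform(0, 2 * np.pi, 150), rng.uniform(0.5, 1.5, 150))
shortcut = food_vector_2 - food_vector
print("shortcut length:", float(np.linalg.norm(shortcut)))
```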

Thibaud Taillefumier
The University of Texas at Austin
November 30, 2022
Neural networks in the replica-mean field limits
In this talk, we propose to decipher the activity of neural networks via a “multiply and conquer” approach. This approach considers limit networks made of infinitely many replicas with the same basic neural structure. The key point is that these so-called replica-mean-field networks are in fact simplified, tractable versions of neural networks that retain important features of the finite network structure of interest. The finite size of neuronal populations and synaptic interactions is a core determinant of neural dynamics, being responsible for non-zero correlation in the spiking activity and for finite transition rates between metastable neural states. Theoretically, we develop our replica framework by expanding on ideas from the theory of communication networks rather than from statistical physics to establish Poissonian mean-field limits for spiking networks. Computationally, we leverage our original replica approach to characterize the stationary spiking activity of various network models via reduction to tractable functional equations. We conclude by discussing perspectives about how to use our replica framework to probe nontrivial regimes of spiking correlations and transition rates between metastable neural states.
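A rough numerical caricature of the replica idea as I understand it from the abstract: R copies of the same network, with each emitted spike delivered to target neurons in independently chosen replicas, so that inputs become increasingly Poisson-like as R grows. The neuron model and all parameters below are placeholders, not the models analysed in the talk:

```python
# Rough caricature of the replica construction for a small stochastic spiking network.
import numpy as np

rng = np.random.default_rng(4)
N, R, T = 20, 50, 2000               # neurons per replica, number of replicas, time steps
J = rng.normal(0, 1.0 / np.sqrt(N), size=(N, N))   # one set of synaptic weights shared by all replicas
np.fill_diagonal(J, 0)

v = np.zeros((R, N))                 # membrane-like state of every neuron in every replica
rate_bias, leak = 0.02, 0.9
spike_counts = np.zeros((R, N))

for t in range(T):
    p_spike = np.clip(rate_bias + v, 0.0, 1.0)      # simple intensity-based spiking (placeholder model)
    spikes = rng.random((R, N)) < p_spike
    spike_counts += spikes
    v *= leak
    v[spikes] = 0.0                                  # reset after a spike
    # Replica routing: each spike is delivered to its target neurons in independently
    # chosen replicas, which decorrelates interactions as R grows.
    for r, i in zip(*np.nonzero(spikes)):
        target_replicas = rng.integers(0, R, size=N)
        v[target_replicas, np.arange(N)] += J[:, i]

print("mean spikes per neuron per step:", spike_counts.mean() / T)
```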

Grace Lindsay
NYU
December 7, 2022
Connecting performance benefits on visual tasks to neural mechanisms using convolutional neural networks
Behavioral studies have demonstrated that certain task features reliably enhance classification performance for challenging visual stimuli. These include extended image presentation time and the valid cueing of attention. Here, I will show how convolutional neural networks can be used as a model of the visual system that connects neural activity changes with such performance changes. Specifically, I will discuss how different anatomical forms of recurrence can account for better classification of noisy and degraded images with extended processing time. I will then show how experimentally-observed neural activity changes associated with feature attention lead to observed performance changes on detection tasks. I will also discuss the implications these results have for how we identify the neural mechanisms and architectures important for behavior.
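One way to picture an "anatomical form of recurrence" with extended processing time is a convolutional layer with lateral recurrent connections, unrolled over time steps with a readout at every step. The PyTorch sketch below is an illustrative stand-in (layer sizes and structure are my assumptions), not one of the models from the talk:

```python
# Sketch: a convolutional layer with lateral recurrence, unrolled over processing-time steps.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentConvNet(nn.Module):
    def __init__(self, n_classes=10, channels=16, timesteps=4):
        super().__init__()
        self.timesteps = timesteps
        self.feedforward = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.lateral = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.readout = nn.Linear(channels, n_classes)

    def forward(self, x):
        drive = self.feedforward(x)
        h = torch.zeros_like(drive)
        logits_per_step = []
        for _ in range(self.timesteps):
            h = F.relu(drive + self.lateral(h))   # same feedforward drive, evolving lateral state
            pooled = h.mean(dim=(2, 3))           # global average pooling
            logits_per_step.append(self.readout(pooled))
        return logits_per_step                     # accuracy can be compared across time steps

model = RecurrentConvNet()
noisy_images = torch.randn(8, 1, 28, 28)           # stand-in for degraded stimuli
print([logits.shape for logits in model(noisy_images)])
```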

Vladimir Itskov
The Pennsylvania State University
December 14, 2022
Convex neural codes in recurrent networks and sensory systems
Neural activity in many sensory systems is organized on low-dimensional manifolds by means of convex receptive fields. Neural codes in these areas are constrained by this organization, as not every neural code is compatible with convex receptive fields. The same codes are also constrained by the structure of the underlying neural network. In my talk I will attempt to provide answers to the following natural questions:
(i) How do recurrent circuits generate codes that are compatible with the convexity of receptive fields? (ii) How can we utilize the constraints imposed by convex receptive fields to understand the underlying stimulus space?
To answer question (i), we describe the combinatorics of the steady states and fixed points of recurrent networks that satisfy Dale's law. It turns out that the combinatorics of the fixed points is completely determined by two distinct conditions: (a) the connectivity graph of the network and (b) a spectral condition on the synaptic matrix. We give a characterization of exactly which features of connectivity determine the combinatorics of the fixed points. We also find that a generic recurrent network that satisfies Dale's law outputs convex combinatorial codes. To address question (ii), I will describe methods based on ideas from topology and geometry that take advantage of the convex receptive field properties to infer the dimension of (non-linear) neural representations. I will illustrate the first method by inferring basic features of the neural representations in the mouse olfactory bulb.
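A small numerical illustration of what a combinatorial code of fixed points means in practice, using a toy threshold-linear network obeying Dale's law with arbitrary parameters (not the networks analysed in the talk): simulate from many initial conditions and record the supports (sets of active neurons) of the fixed points reached.

```python
# Toy illustration: the combinatorial code of fixed points of a threshold-linear network.
import numpy as np

rng = np.random.default_rng(5)
N = 6
sign = np.where(rng.random(N) < 0.5, 1.0, -1.0)        # each presynaptic neuron is E or I (Dale's law)
W = np.abs(rng.normal(0, 0.3, size=(N, N))) * sign[None, :]
np.fill_diagonal(W, 0)
b = rng.uniform(0.1, 1.0, size=N)

def run(x0, dt=0.05, steps=4000):
    x = x0.copy()
    for _ in range(steps):
        x += dt * (-x + np.maximum(W @ x + b, 0.0))
        x = np.minimum(x, 1e6)                         # guard against divergent runs
    return x

supports = set()
for _ in range(200):
    x_star = run(rng.uniform(0, 2, size=N))
    residual = np.abs(-x_star + np.maximum(W @ x_star + b, 0.0))
    if np.all(residual < 1e-6):                        # keep only states that converged to a fixed point
        supports.add(tuple(np.flatnonzero(x_star > 1e-6)))

print("combinatorial code (supports of fixed points reached):", supports)
```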
December 21, 2022
Hanukkah and Christmas Break
December 28, 2022
Happy New Year
Haim Sompolinsky
The Hebrew University and Harvard
January 4, 2023
Geometry of concept learning
Understanding the human ability to learn novel concepts from just a few sensory experiences is a fundamental problem in cognitive neuroscience. I will describe recent work with Ben Sorscher and Surya Ganguli (PNAS, October 2022) in which we propose a simple, biologically plausible, and mathematically tractable neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts that can be learned from few examples are defined by tightly circumscribed manifolds in the neural firing-rate space of higher-order sensory areas. Discrimination between novel concepts is performed by downstream neurons implementing a 'prototype' decision rule, in which a test example is classified according to the nearest prototype constructed from the few training examples.
We show that prototype few-shot learning achieves high few-shot learning accuracy on natural visual concepts using both macaque inferotemporal cortex representations and deep neural network (DNN) models of these representations.
We develop a mathematical theory that links few-shot learning to the geometric properties of the neural concept manifolds and demonstrate its agreement with our numerical simulations across different DNNs as well as different layers. Intriguingly, we observe striking mismatches between the geometry of manifolds in intermediate stages of the primate visual pathway and in trained DNNs.
Finally, we show that linguistic descriptors of visual concepts can be used to discriminate images belonging to novel concepts, without any prior visual experience of these concepts (a task known as 'zero-shot' learning), indicating a remarkable alignment of manifold representations of concepts in visual and language modalities.
I will discuss ongoing efforts to extend this work to other high-level cognitive tasks.
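The prototype rule itself is simple enough to state in code. In the sketch below the embedding is a random projection standing in for IT-cortex or DNN features, and the "concepts" are synthetic clusters, so only the decision rule, not the representation, reflects the work described above:

```python
# Sketch of the nearest-prototype rule for 5-shot discrimination between two novel concepts.
import numpy as np

rng = np.random.default_rng(6)
D_RAW, D_EMB = 100, 512

W = rng.normal(size=(D_EMB, D_RAW)) / np.sqrt(D_RAW)
def embed(x):
    return np.maximum(W @ x, 0.0)                        # placeholder embedding

def sample_concept(center, n):
    return center + 0.3 * rng.normal(size=(n, D_RAW))    # examples scattered around a concept manifold

center_a, center_b = rng.normal(size=D_RAW), rng.normal(size=D_RAW)
train_a, train_b = sample_concept(center_a, 5), sample_concept(center_b, 5)   # five training examples each
test_a = sample_concept(center_a, 200)                                         # held-out examples of concept A

proto_a = np.mean([embed(x) for x in train_a], axis=0)   # prototype = mean of the few training embeddings
proto_b = np.mean([embed(x) for x in train_b], axis=0)

correct = sum(np.sum((embed(x) - proto_a) ** 2) < np.sum((embed(x) - proto_b) ** 2) for x in test_a)
print("5-shot accuracy on held-out examples of concept A:", correct / len(test_a))
```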

Tim Vogels
IST, Klosterneuburg, Austria
January 18, 2023
Meta-learning functional plasticity rules in neural networks
Synaptic plasticity is known to be a key player in the brain’s life-long learning abilities. However, due to experimental limitations, the nature of the local changes at individual synapses and their link with emerging network-level computations remain unclear. I will present a numerical, meta-learning approach to deduce plasticity rules from neuronal activity data and/or prior knowledge about the network's computation. I will first show how to recover known rules, given a human-designed loss function in rate networks, or directly from data, using an adversarial approach. Then I will present how to scale up this approach to recurrent spiking networks using simulation-based inference.
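A bare-bones caricature of the meta-learning loop: a parameterized Hebbian-family rule is applied in an inner loop, and an outer loop adjusts the rule's parameters against a human-designed loss (here, alignment of the final weights with the inputs' first principal component; the family contains Oja's rule at theta = [1, -1, 0, 0]). The finite-difference outer loop and all parameters are stand-ins, not the gradient-based, adversarial or simulation-based-inference methods of the talk:

```python
# Caricature of meta-learning a plasticity rule: an inner loop applies a parameterized
# Hebbian-family rule online; an outer loop tunes the rule parameters against a loss.
import numpy as np

rng = np.random.default_rng(7)
D, T_inner, eta = 10, 400, 0.01
C = np.diag([3.0] + [0.5] * (D - 1))                        # input covariance, PC1 along axis 0
X = rng.multivariate_normal(np.zeros(D), C, size=T_inner)   # fixed inner-loop "experience"
w0 = rng.normal(size=D) * 0.1
pc1 = np.eye(D)[0]

def inner_loss(theta):
    w = w0.copy()
    for x in X:
        y = w @ x                                            # postsynaptic activity
        dw = theta[0] * y * x + theta[1] * y * y * w + theta[2] * x + theta[3] * y
        w = np.clip(w + eta * dw, -5.0, 5.0)                 # keep the toy numerically bounded
    w_hat = w / (np.linalg.norm(w) + 1e-12)
    return 1.0 - abs(w_hat @ pc1)                            # misalignment with the first PC

theta = np.zeros(4)
for step in range(100):                                      # outer loop: finite-difference descent
    grad = np.array([(inner_loss(theta + 0.05 * e) - inner_loss(theta - 0.05 * e)) / 0.1
                     for e in np.eye(4)])
    theta -= 0.2 * grad
print("meta-learned rule parameters:", np.round(theta, 2), " final loss:", round(inner_loss(theta), 3))
```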

Alessandro Sanzeni
Università Bocconi, Milano
January 25, 2023
Dynamics of cortical circuits: underlying mechanisms and computational implications
A signature feature of cortical circuits is the irregularity of neuronal firing, which manifests itself in the high temporal variability of spiking and the broad distribution of rates. Theoretical works have shown that this feature emerges dynamically in network models if coupling between cells is strong, i.e. if the mean number of synapses per neuron K is large and synaptic efficacy is of order 1/√K. However, the degree to which these models capture the mechanisms underlying neuronal firing in cortical circuits is not fully understood. Results have been derived using neuron models with current-based synapses, i.e. neglecting the dependence of synaptic current on the membrane potential, and an understanding of how irregular firing emerges in models with conductance-based synapses is still lacking. Moreover, at odds with the nonlinear responses to multiple stimuli observed in cortex, network models with strongly coupled cells respond linearly to inputs. In this talk, I will discuss the emergence of irregular firing and nonlinear response in networks of leaky integrate-and-fire neurons. First, I will show that, when synapses are conductance-based, irregular firing emerges if synaptic efficacy is of order 1/log(K) and, unlike in current-based models, persists even under the large heterogeneity of connections which has been reported experimentally. I will then describe an analysis of neural responses as a function of coupling strength and show that, while a linear input-output relation is ubiquitous at strong coupling, nonlinear responses are prominent at moderate coupling. I will conclude by discussing experimental evidence of moderate coupling and loose balance in the mouse cortex.
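For orientation, the textbook scaling argument for current-based synapses can be written out (this is standard balanced-network reasoning, not the conductance-based derivation of the talk). For a neuron receiving K excitatory and K inhibitory Poisson inputs with rates r_E, r_I, efficacies J_E, J_I and membrane time constant τ, the input mean and variance are approximately

```latex
\mu \;\approx\; \tau K \left( J_E r_E - J_I r_I \right), \qquad
\sigma^2 \;\approx\; \tau K \left( J_E^2 r_E + J_I^2 r_I \right).
```

With J_{E,I} ∝ 1/√K the fluctuations remain O(1) while each mean term grows as √K, so irregular firing at large K requires excitation and inhibition to cancel dynamically to O(1) (the balanced state). The result quoted above is that with conductance-based synapses the analogous scaling of the efficacy becomes 1/log(K).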

Soledad Gonzalo Cogno
NTNU, Trondheim
February 1, 2023
Minute-scale periodic sequences in medial entorhinal cortex
The medial entorhinal cortex (MEC) hosts many of the brain’s circuit elements for spatial navigation and episodic memory, operations that require neural activity to be organized across long durations of experience. While location is known to be encoded by a plethora of spatially tuned cell types in this brain region, little is known about how the activity of entorhinal cells is tied together over time. Among the brain’s most powerful mechanisms for neural coordination are network oscillations, which dynamically synchronize neural activity across circuit elements. In MEC, theta and gamma oscillations provide temporal structure to the neural population activity at subsecond time scales. It remains an open question, however, whether similar coordination occurs in MEC at behavioural time scales, in the second-to-minute regime. In this talk I will show that MEC activity can be organized into a minute-scale oscillation that entrains nearly the entire cell population, with periods ranging from 10 to 100 seconds. Throughout this ultraslow oscillation, neural activity progresses in periodic and stereotyped sequences. The oscillation sometimes advances uninterruptedly for tens of minutes, transcending epochs of locomotion and immobility. Similar oscillatory sequences were not observed in the neighboring parasubiculum or in visual cortex. The ultraslow periodic sequences in MEC may have the potential to couple its neurons and circuits across extended time scales and to serve as a scaffold for processes that unfold at behavioural time scales.

Lenka Zdeborová
EPFL, Lausanne
February 8, 2023
Understanding Machine Learning via Exactly Solvable Statistical Physics Models
The affinity between statistical physics and machine learning has a long history. I will describe the main lines of this long-lasting friendship in the context of current theoretical challenges and open questions about deep learning. Theoretical physics often proceeds in terms of solvable synthetic models; I will describe the related line of work on solvable models of simple feed-forward neural networks. I will highlight a path forward to capture the subtle interplay between the structure of the data, the architecture of the network, and the optimization algorithms commonly used for learning.

German Mato
CONICET, Bariloche
February 15, 2023
Orientation selectivity in rodent V1: theory vs experiments
Neurons in the primary visual cortex (V1) of rodents are selective to the orientation of the stimulus, as in other mammals such as cats and monkeys. However, in contrast with those species, their neurons display a very different type of spatial organization. Instead of orientation maps, they are organized in a “salt and pepper” pattern, where adjacent neurons have completely different preferred orientations. This structure has motivated both experimental and theoretical research with the objective of determining which aspects of the connectivity patterns and intrinsic neuronal responses can explain the observed behavior. These analyses also have to take into account that the neurons of the thalamus that send their outputs to the cortex have more complex responses in rodents than in higher mammals, displaying, for instance, a significant degree of orientation selectivity. In this talk we present work showing that a random feed-forward connectivity pattern, in which the probability of having a connection between a cortical neuron and a thalamic neuron depends only on the relative distance between them, is enough to explain several aspects of the complex phenomenology found in these systems. Moreover, this approach allows us to evaluate analytically the statistical structure of the thalamic input to the cortex. We find that V1 neurons are orientation selective but the preferred orientation depends on the spatial frequency of the stimulus. We disentangle the effect of the non-circular thalamic receptive fields, finding that they control the selectivity of the time-averaged thalamic input, but not the selectivity of the time-locked component. We also compare with experiments that use reverse correlation techniques, showing that ON and OFF components of the aggregate thalamic input are spatially segregated in the cortex.
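A toy version of the model's first ingredient (random feed-forward connectivity whose probability depends only on distance) can already show how an aggregate thalamic input acquires an orientation bias. Receptive-field shapes and all parameters below are arbitrary assumptions, not the analytical model of the talk:

```python
# Toy sketch: orientation bias of the aggregate thalamic input to one cortical cell
# when feed-forward connections are random with a distance-dependent probability.
import numpy as np

rng = np.random.default_rng(8)
n_thal, sigma_conn, sigma_center, sigma_surround = 400, 0.3, 0.1, 0.2

thal_pos = rng.uniform(-1, 1, size=(n_thal, 2))                  # retinotopic positions
p_connect = np.exp(-np.sum(thal_pos ** 2, axis=1) / (2 * sigma_conn ** 2))
connected = rng.random(n_thal) < p_connect                       # cortical cell sits at the origin

grid = np.linspace(-1, 1, 101)
X, Y = np.meshgrid(grid, grid)

def dog_rf(cx, cy):              # center-surround (difference-of-Gaussians) thalamic receptive field
    d2 = (X - cx) ** 2 + (Y - cy) ** 2
    return np.exp(-d2 / (2 * sigma_center ** 2)) - 0.5 * np.exp(-d2 / (2 * sigma_surround ** 2))

aggregate_rf = sum(dog_rf(cx, cy) for cx, cy in thal_pos[connected])

def response(orientation, sf=2.0):   # best response over spatial phase to a static grating
    k = 2 * np.pi * sf * np.array([np.cos(orientation), np.sin(orientation)])
    phase = k[0] * X + k[1] * Y
    return max(abs(np.sum(aggregate_rf * np.cos(phase + p))) for p in np.linspace(0, np.pi, 8))

oris = np.linspace(0, np.pi, 12, endpoint=False)
tuning = np.array([response(o) for o in oris])
osi = (tuning.max() - tuning.min()) / (tuning.max() + tuning.min())
print("orientation selectivity index of the aggregate input:", round(float(osi), 3))
```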
Sophie Denève
CNRS, Paris
February 22, 2023
CANCELLED

Richard Naud
University of Ottawa
March 1, 2023
Silences, Spikes and Bursts: Three-Part Knot of the Neural Code
When a neuron breaks silence, it can emit action potentials in a number of patterns. Some responses are so sudden and intense that electrophysiologists felt the need to single them out, labeling action potentials emitted at a particularly high frequency with a metonym – bursts. Is there more to bursts than a figure of speech? After all, sudden bouts of high-frequency firing are expected to occur whenever inputs surge. In this talk, I will discuss the implications of seeing the neural code as having three syllables: silences, spikes and bursts. In particular, I will describe recent theoretical and experimental results that implicate bursting in the implementation of top-down attention and the coordination of learning.
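Operationally, the three "syllables" can be read off a spike train with a simple inter-spike-interval criterion; the threshold and the synthetic spike train below are arbitrary choices for illustration, not definitions advocated in the talk:

```python
# Sketch: splitting a spike train into isolated spikes and bursts with an ISI criterion.
import numpy as np

rng = np.random.default_rng(9)
spike_times = np.sort(rng.uniform(0, 10.0, size=80))      # seconds; stand-in spike train
isi_threshold = 0.016                                      # spikes closer than this are grouped into a burst

events, current = [], [spike_times[0]]
for t_prev, t in zip(spike_times[:-1], spike_times[1:]):
    if t - t_prev <= isi_threshold:
        current.append(t)          # continue the current event (burst in the making)
    else:
        events.append(current)     # close the event; the gap is a "silence"
        current = [t]
events.append(current)

bursts = [e for e in events if len(e) >= 2]
singles = [e for e in events if len(e) == 1]
print(f"{len(singles)} isolated spikes, {len(bursts)} bursts "
      f"(fraction of spikes in bursts: {sum(map(len, bursts)) / len(spike_times):.2f})")
```
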
March 8 & March 15, 2023
Eve of and day after Cosyne 2023
No Seminar

Stefano Fusi
Columbia University
March 22, 2023
Are place cells just memory cells? Probably yes
Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual ``place cells'' fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding which produces place cells as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level in dynamic opposition to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations are represented more similarly following training, while locations with intermediate similarity become increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and 2-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation. These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories on the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, Attila Losonczy.