
David Kleinfeld

UCSD

October 11, 2023

Exceptionally, the talk will start at 11:15 am ET.

Vasomotor dynamics: Measuring, modeling, and understanding the other network in the brain

Much as Santiago Ramón y Cajal is the godfather of neuronal computation, which occurs among neurons that communicate predominantly via threshold logic, Camillo Golgi is the inadvertent godfather of neurovascular signaling, in which the endothelial cells that form the lumen of blood vessels communicate via electrodiffusion as well as threshold logic. I will address questions that define spatiotemporal patterns of constriction and dilation that develop across the network of cortical vasculature. First, is there a common topology and geometry of brain vasculature (our work)? Second, what mechanisms govern neuron-to-vessel and vessel-to-vessel signaling (work of Mark Nelson at U Vermont)? Last, what is the nature of competition among arteriole smooth muscle oscillators and the underlying neuronal drive (our work)? The answers to these questions bear on fundamental aspects of brain science as well as practical issues, including the relation of fMRI signals to neuronal activity and the impact of vascular dysfunction on cognition. Challenges and opportunities for experimentalists and theorists alike will be discussed.
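
The third question concerns interacting arteriole oscillators. As a generic point of reference (not the vasomotor model of the talk), a minimal sketch of competing phase oscillators, with arbitrary frequencies and coupling strength:

```python
# Minimal sketch of competing phase oscillators (a Kuramoto model).
# Illustrative toy only -- not the vasomotor model of the talk; the
# frequencies and coupling strength K are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
n, K, dt, steps = 10, 0.8, 0.01, 5000
omega = rng.normal(0.5, 0.1, n)         # intrinsic frequencies (rad/s)
theta = rng.uniform(0, 2 * np.pi, n)    # initial phases

for _ in range(steps):
    # each oscillator is pulled toward the phases of the others
    coupling = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
    theta += dt * (omega + K * coupling)

# order parameter r: 1 = full synchrony, 0 = incoherence
r = np.abs(np.exp(1j * theta).mean())
print(f"phase coherence r = {r:.2f}")
```

Competition between oscillators with different intrinsic frequencies, and the degree to which a common drive can entrain them, is the kind of question the order parameter summarizes.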

  • YouTube

Alaa Ahmed

University of Colorado, Boulder

October 18, 2023


A unifying framework for movement control and decision making

To understand the subjective evaluation of an option, various disciplines have quantified the interaction between reward and effort during decision making, producing an estimate of economic utility, namely the subjective ‘goodness’ of an option. However, the same variables that affect the utility of an option also influence the vigor (speed) of movements made to acquire it. To better understand this, we have developed a mathematical framework demonstrating how utility can influence not only the choice of what to do, but also the speed of the movement that follows. I will present results demonstrating that expectation of reward increases the speed of saccadic eye and reaching movements, whereas expectation of effort expenditure decreases this speed. Intriguingly, when deliberating between two visual options, saccade vigor to each option increases differentially, encoding their relative value. These results and others imply that vigor may serve as a new, real-time metric with which to quantify subjective utility, and that the control of movements may be an implicit reflection of the brain’s economic evaluation of the expected outcome.
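
To make the framework concrete, here is a minimal sketch of the core idea that utility selects vigor: choose the movement duration that maximizes discounted reward net of effort. The hyperbolic discount and the 1/T effort cost are illustrative assumptions, not the exact functional forms of the talk.

```python
# Pick the movement duration T that maximizes a utility combining reward,
# effort, and time. Functional forms and parameters are illustrative.
import numpy as np

def utility(T, reward, effort, gamma=1.0):
    # hyperbolically discounted reward, minus an effort cost that grows
    # as movements get faster (i.e., as T shrinks)
    return (reward - effort / T) / (1.0 + gamma * T)

T = np.linspace(0.05, 5.0, 1000)          # candidate durations (s)
for R in (1.0, 2.0, 4.0):
    T_opt = T[np.argmax(utility(T, R, effort=1.0))]
    print(f"reward={R}: optimal duration {T_opt:.2f} s")
# higher expected reward -> shorter optimal duration (greater vigor);
# raising the effort cost has the opposite effect
```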

  • YouTube

Dan Goodman

Imperial College London

October 25, 2023

  • YouTube

Multimodal units fuse-then-accumulate evidence across channels

We continuously detect sensory data, like sights and sounds, and use this information to guide our behaviour. However, rather than relying on single sensory channels, which are noisy and can be ambiguous alone, we merge information across our senses and leverage this combined signal. In biological networks, this process (multisensory integration) is implemented by multimodal neurons which are often thought to receive the information accumulated by unimodal areas, and to fuse this across channels; an algorithm we term accumulate-then-fuse. However, it remains an open question how well this theory generalises beyond the classical tasks used to test multimodal integration. Here, we explore this by developing novel multimodal tasks and deploying probabilistic, artificial and spiking neural network models. Using these models we demonstrate that multimodal units are not necessary for accuracy or balancing speed/accuracy in classical multimodal tasks, but are critical in a novel set of tasks in which we comodulate signals across channels. We show that these comodulation tasks require multimodal units to implement an alternative fuse-then-accumulate algorithm, which excels in naturalistic settings and is optimal for a wide class of multimodal problems. Finally, we link our findings to experimental results at multiple levels; from single neurons to behaviour. Ultimately, our work suggests that multimodal neurons may fuse-then-accumulate evidence across channels, and provides novel tasks and models for exploring this in biological systems.
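
A toy contrast between the two algorithms on a comodulation-style task, where the informative signal is shared across channels at each time step. The Gaussian statistics and decision rules are illustrative assumptions, not the probabilistic or spiking models of the paper:

```python
# Under H1 the two channels share a common fluctuating signal; under H0
# their signals are independent. Fusing per time step preserves the
# comodulation; accumulating each channel first destroys it.
import numpy as np

rng = np.random.default_rng(1)
trials, T = 2000, 50

def trial(comodulated):
    if comodulated:
        s1 = s2 = rng.normal(0, 1, T)            # shared signal component
    else:
        s1, s2 = rng.normal(0, 1, T), rng.normal(0, 1, T)
    return s1 + rng.normal(0, 1, T), s2 + rng.normal(0, 1, T)

def accuracy(stat):
    h1 = np.array([stat(*trial(True)) for _ in range(trials)])
    h0 = np.array([stat(*trial(False)) for _ in range(trials)])
    thr = np.median(np.concatenate([h0, h1]))    # simple decision threshold
    return 0.5 * ((h1 > thr).mean() + (h0 <= thr).mean())

atf = lambda x1, x2: x1.sum() + x2.sum()         # accumulate-then-fuse
fta = lambda x1, x2: (x1 * x2).sum()             # fuse-then-accumulate

print(f"accumulate-then-fuse: {accuracy(atf):.2f}")  # near chance
print(f"fuse-then-accumulate: {accuracy(fta):.2f}")  # well above chance
```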


Kimberly Stachenfeld

Google DeepMind

November 1, 2023

  • YouTube

Prediction Models for Brains and Machines

Humans and animals learn and plan with flexibility and efficiency well beyond that of modern Machine Learning methods. This is hypothesized to owe in part to the ability of animals to build structured representations of their environments, and modulate these representations to rapidly adapt to new settings. In the first part of this talk, I will discuss theoretical work describing how learned representations in hippocampus enable rapid adaptation to new goals by learning predictive representations. I will also cover work extending this account, in which we show how the predictive model can be adapted to the probabilistic setting to describe a broader array of generalization results in humans and animals, and how entorhinal representations can be modulated to support sample generation optimized for different behavioral states. I will also talk about work applying this perspective to the deep RL setting, where we can study the effect of predictive learning on representations that form in a deep neural network and how these results compare to neural data. In the second part of the talk, I will overview some of the ways in which we have combined many of the same mathematical concepts with state-of-the-art deep learning methods to improve efficiency and performance in machine learning applications like physical simulation, relational reasoning, and design.
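
For the predictive representations in the first part, a minimal sketch of a successor-representation-style computation: discounted expected future state occupancy under a transition model, from which values for a new goal follow by a single matrix-vector product. The random-walk environment and discount factor are arbitrary choices:

```python
# Successor-representation sketch: M = (I - gamma * P)^(-1) gives discounted
# expected future occupancies under transition matrix P. Toy 1-D track.
import numpy as np

n, gamma = 5, 0.9
P = np.zeros((n, n))
for s in range(n):                        # random walk on a line
    P[s, max(s - 1, 0)] += 0.5
    P[s, min(s + 1, n - 1)] += 0.5

M = np.linalg.inv(np.eye(n) - gamma * P)  # predictive (successor) map
print(np.round(M, 2))

# a new goal is just a new reward vector; values follow immediately,
# which is how a predictive map supports rapid adaptation to new goals
r = np.zeros(n); r[-1] = 1.0
print("values:", np.round(M @ r, 2))
```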


Viktor Jirsa

CNRS

November 8, 2023

Postponed to December 6

No seminar

November 15, 2023

Society for Neuroscience Meeting

No Seminar


Matthias Kaschube

Goethe University, Frankfurt am Main

November 22, 2023

The Emergence of Cortical Representations

The internal and external world is thought to be represented by distributed patterns of cortical activity. The emergence of these cortical representations over the course of development remains an unresolved question. In this talk, I share results from a series of recent studies combining theory and experiments in the cortex of the ferret, a species with a well-defined columnar organization and modular network of orientation-selective responses in visual cortex. I show that prior to the onset of structured sensory experience, endogenous mechanisms set up a highly organized cortical network structure that is evident in modular patterns of spontaneous activity characterized by strong, clustered local and long-range correlations. This correlation structure is remarkably consistent across both sensory and association areas in the early neocortex, suggesting that diverse cortical representations initially develop according to similar principles. Next, I explore a classical candidate mechanism for producing modular activity – local excitation and lateral inhibition. I present the first empirical test of this mechanism through direct optogenetic cortical activation and discuss a plausible circuit implementation. Then, focusing on the visual cortex, I demonstrate that these endogenously structured networks enable orientation-selective responses immediately after eye opening. However, these initial responses are highly variable, lacking the reliability and low-dimensional structure observed in the mature cortex. Reliable responses are achieved after an experience-dependent co-reorganization of stimulus-evoked and spontaneous activity following eye opening. Based on these observations, I propose the hypothesis that the alignment between feedforward inputs and the recurrent network plays a crucial role in transforming the initially variable responses into mature and reliable representations.
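
A minimal sketch of the local-excitation / lateral-inhibition candidate mechanism: repeatedly filtering random activity with a difference-of-Gaussians interaction amplifies a preferred wavelength and produces modular, patchy patterns. Grid size and kernel widths are illustrative, not fitted to ferret cortex:

```python
# Local excitation + lateral inhibition as a difference-of-Gaussians filter
# applied in the Fourier domain; iterating with rectification turns noise
# into modular activity with a characteristic wavelength.
import numpy as np

rng = np.random.default_rng(0)
n = 128
fx = np.fft.fftfreq(n)[:, None]
fy = np.fft.fftfreq(n)[None, :]

def gauss_ft(sigma):                       # Fourier transform of a Gaussian
    return np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))

K = gauss_ft(2.0) - gauss_ft(6.0)          # excite locally, inhibit laterally
a = rng.normal(size=(n, n))                # random initial activity

for _ in range(30):
    a = np.fft.ifft2(np.fft.fft2(a) * K).real   # lateral interactions
    a = np.clip(a, 0, None)                     # rectification
    a -= a.mean(); a /= a.std()                 # keep activity normalized

# 'a' now shows patchy, modular structure with a preferred wavelength
print(a.shape, round(float(a.std()), 2))
```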

Blake Bordelon

Harvard University

November 29, 2023

  • YouTube

Mean Field Approaches to Learning Dynamics in Deep Networks

The learning dynamics of deep neural networks are complex, with large numbers of learnable weights and many sources of disorder. In this talk, I will discuss mean field approaches to analyze the learning dynamics of neural networks in large system size limits when starting from random initial conditions. The result of this analysis is a dynamical mean field theory (DMFT) where all neurons obey independent stochastic single site dynamics. Correlation functions (kernels) and response functions for the features and gradients at each layer can be computed self-consistently from these stochastic processes. Depending on the choice of scaling of the network output, the network can operate in a kernel regime or a feature learning regime in the infinite width limit. I will discuss how this theory can be used to analyze various learning rules for deep architectures (backpropagation, feedback-alignment-based rules, Hebbian learning, etc.), where the weight updates do not necessarily correspond to gradient descent on an energy function. I will then present recent extensions of this theory to residual networks at infinite depth and discuss the utility of deriving scaling limits to obtain consistent optimal hyperparameters (such as learning rate) across widths and depths. Feature learning in other types of architectures will be discussed if time permits. Lastly, I will discuss open problems and challenges associated with this theoretical approach to neural network learning dynamics.
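
A toy numerical illustration of the scaling point: the readout scale (N^-0.5 versus N^-1) controls whether hidden features barely move (kernel regime) or move at order one (feature learning) at large width. The regression task, width, and step sizes are arbitrary assumptions:

```python
# Two-layer network trained under two output scalings; we measure how far
# the first-layer weights travel from initialization. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
N, steps = 4096, 100                       # width, gradient steps
X = rng.normal(size=(32, 8))               # toy inputs
y = rng.normal(size=32)                    # toy targets

def feature_movement(scale_pow):
    gamma = float(N) ** scale_pow          # readout scale: N^-0.5 or N^-1
    W = rng.normal(size=(8, N))            # input-to-hidden weights
    a = rng.normal(size=N)                 # readout weights
    W0 = W.copy()
    lr = 0.05 / (N * gamma**2)             # keeps the function change O(1)
    for _ in range(steps):
        h = np.tanh(X @ W)                 # hidden features
        err = gamma * h @ a - y
        a -= lr * gamma * h.T @ err / len(y)
        W -= lr * gamma * X.T @ ((err[:, None] * a[None, :]) * (1 - h**2)) / len(y)
    return np.linalg.norm(W - W0) / np.linalg.norm(W0)

print("feature movement, N^-0.5 readout:", round(feature_movement(-0.5), 4))
print("feature movement, N^-1   readout:", round(feature_movement(-1.0), 4))
```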


Viktor Jirsa

CNRS, Marseille

December 6, 2023

Digital Twins in Brain Medicine

Over the past decade we have demonstrated that the fusion of subject-specific structural information of the human brain with mathematical dynamic models allows building biologically realistic brain network models, which have a predictive value beyond the explanatory power of each approach independently. The network nodes hold neural population models, which are derived using mean field techniques from statistical physics expressing ensemble activity via collective variables. Our hybrid approach fuses data-driven with forward-modeling-based techniques and has been successfully applied to explain healthy brain function and clinical translation including aging, stroke and epilepsy. Here we illustrate the workflow with the example of epilepsy: we reconstruct personalized connectivity matrices of human epileptic patients using Diffusion Tensor Imaging (DTI). Subsets of brain regions generating seizures in patients with refractory partial epilepsy are referred to as the epileptogenic zone (EZ). During a seizure, paroxysmal activity is not restricted to the EZ, but may recruit other healthy brain regions and propagate activity through large brain networks. The identification of the EZ is crucial for the success of neurosurgery and presents one of the historically difficult questions in clinical neuroscience. The application of the latest techniques in Bayesian inference and model inversion, in particular Hamiltonian Monte Carlo, allows the estimation of the EZ, including estimates of confidence and diagnostics of performance of the inference. The example of epilepsy nicely underscores the predictive value of personalized large-scale brain network models. The workflow of end-to-end modeling is an integral part of the European neuroinformatics platform EBRAINS and enables neuroscientists worldwide to build and estimate personalized virtual brains.
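
A cartoon of the workflow's core object, for orientation only: network nodes with regional dynamics coupled through subject-specific connectivity, with one region given higher excitability to play the role of the EZ. The bistable node below is a generic stand-in, not the Epileptor model used in The Virtual Brain, and the connectivity is random rather than DTI-derived:

```python
# A brain-network-model cartoon: bistable regional nodes, diffusive coupling
# through a connectome-like matrix, and one hyperexcitable region (the "EZ").
import numpy as np

rng = np.random.default_rng(0)
n = 8
C = rng.uniform(0, 1, (n, n))                # stand-in for DTI-derived weights
np.fill_diagonal(C, 0)
x = -np.ones(n)                              # all regions start at rest
excitability = np.full(n, -0.6)              # healthy regions: rest is stable
excitability[2] = 0.4                        # region 2 plays the EZ

dt, G = 0.01, 0.05
for _ in range(5000):
    coupling = G * (C @ x - C.sum(axis=1) * x)      # diffusive coupling
    x += dt * (x - x**3 + excitability + coupling)  # bistable node dynamics

print("regions settled in the high ('seizing') state:", np.where(x > 0)[0])
```

In the real workflow, the excitability map is the unknown; Bayesian model inversion (e.g., Hamiltonian Monte Carlo) estimates it, with uncertainty, from patient recordings.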

  • YouTube

Laura Dugué

CNRS, Paris

December 13, 2023

Oscillatory Traveling Waves as a mechanism for perception and attention

Brain oscillations have long been a topic of extensive research and debate regarding their potential functional role. Our research, and that of others, has shown that oscillations modulate perceptual and attentional performance periodically in time. Oscillations create periodic windows of excitability, with more or less favorable periods recurring at particular phases of the oscillations. However, perception and attention emerge from systems operating not only in time, but also in space. In our current research we ask: how does the spatio-temporal organization of brain oscillations impact perception and attention? In this presentation, I will discuss our theoretical and experimental work in humans. We test the hypothesis that oscillations propagate over the cortical surface, as so-called oscillatory Traveling Waves, allowing perceptual and attentional facilitation to emerge both in space and time.
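
A toy picture of the hypothesis: an oscillation whose phase advances linearly across the cortical surface, so that the window of high excitability sweeps across space and recurs periodically in time. Frequency, patch size, and wave speed are illustrative values only:

```python
# Traveling-wave toy: phase(x, t) = 2*pi*f*(t - x/speed), so the same
# excitability peak arrives later at more distant cortical locations.
import numpy as np

f, speed = 10.0, 0.3             # oscillation frequency (Hz), wave speed (m/s)
x = np.linspace(0, 0.03, 7)      # cortical positions (m), ~3 cm patch
t = np.linspace(0, 0.2, 5)       # time points (s)

phase = 2 * np.pi * f * (t[:, None] - x[None, :] / speed)
excitability = np.cos(phase)     # favorable windows at phase ~ 0
peak_pos = x[np.argmax(excitability, axis=1)]
print("location of the excitability peak at each time:", np.round(peak_pos, 3))
```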

December 20 & 27, 2023

CHRISTMAS & NEW YEAR VACATION


Marc Mézard

Bocconi University, Milano

January 3, 2024

Matrix Factorization with Neural Networks

The factorization of a large matrix into the product of two matrices is an important mathematical problem encountered in many tasks, ranging from dictionary learning to machine learning. Statistical physics can provide, on the one hand, theoretical limits on the possibility of factorizing matrices in the limit of infinite size and, on the other, practical algorithms. While this program has been successful in the case of finite-rank matrices, the regime of extensive rank (scaling linearly with the dimension of the matrix) turns out to be much harder. This talk will describe a new approach to matrix factorization that maps it onto neural network models of associative memory: each pattern found in the associative memory corresponds to one factor of the matrix decomposition. A detailed theoretical analysis of this new approach shows that matrix factorization in the extensive-rank regime is possible when the rank is below a certain threshold.
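
As a point of reference for the mapping, a toy in which the columns of a factor are stored as patterns of a Hopfield associative memory and one is recovered from a noisy cue. The sizes, the small rank-to-size ratio, and the retrieval dynamics are illustrative; the talk's analysis concerns the harder extensive-rank regime:

```python
# Hopfield toy: Hebbian couplings store the columns of a factor F as
# memory patterns; zero-temperature dynamics retrieve one from a noisy cue.
import numpy as np

rng = np.random.default_rng(0)
N, P = 500, 25                            # N neurons, P patterns (P/N small)
F = rng.choice([-1, 1], size=(N, P))      # +/-1 factor columns as patterns
J = (F @ F.T) / N                         # Hebbian couplings
np.fill_diagonal(J, 0)

cue = F[:, 0].copy()
flip = rng.choice(N, N // 5, replace=False)
cue[flip] *= -1                           # corrupt 20% of the entries

for _ in range(20):                       # zero-temperature dynamics
    cue = np.sign(J @ cue)

print("overlap with the stored factor:", (cue @ F[:, 0]) / N)  # ~ 1.0
```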

  • YouTube

January 10, 2024


No seminar


Ann Kennedy

Northwestern University

January 17, 2024

Neural computations underlying the regulation of motivated behavior

As we interact with the world around us, we experience a constant stream of sensory inputs, and must generate a constant stream of behavioral actions. What makes brains more than simple input-output machines is their capacity to integrate sensory inputs with an animal’s own internal motivational state to produce behavior that is flexible and adaptive. Working with neural recordings from subcortical structures involved in regulation of survival behaviors, we show how the dynamical properties of neural populations give rise to motivational states that change animal behavior on a timescale of minutes. We also show how neuromodulation can alter these dynamics to change behavior on timescales of hours to days.
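
One minimal dynamical picture consistent with this description (an illustrative sketch, not the recorded circuits): a leaky integrator with a minutes-long time constant, whose state accumulates brief encounters, outlasts them, and can gate subsequent behavior:

```python
# A "motivational state" as slow population dynamics: a leaky integrator
# with a minutes-long time constant. Time constant and drive are arbitrary.
import numpy as np

dt, tau = 1.0, 120.0                  # seconds; tau sets the slow timescale
t = np.arange(0, 600, dt)
stimulus = ((t > 60) & (t < 70)) | ((t > 200) & (t < 210))  # brief encounters

state = np.zeros_like(t)
for i in range(1, len(t)):
    # dstate/dt = (-state + input) / tau : the state outlasts each input
    state[i] = state[i - 1] + dt * (-state[i - 1] + 5.0 * stimulus[i]) / tau

print("state still elevated 60 s after the stimulus:", state[t == 130][0] > 0.1)
```

Neuromodulation, in this picture, would correspond to changing tau or the drive gain, shifting behavior on even slower timescales.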

  • YouTube

N Alex Cayco Gajic

École normale supérieure, Paris

January 24, 2024

  • YouTube

Discovering learning-induced changes in neural representations from large-scale neural data tensors

Learning induces changes in neural activity over slow timescales. These changes can be summarized by restructuring neural population data into a three-dimensional array, or tensor, of size neurons by time points by trials. Classic dimensionality reduction methods often assume that neural representations are constrained to a fixed low-dimensional latent subspace. Consequently, this view does not capture how the latent subspace could evolve over learning, nor how high-dimensional neural activity could emerge over learning. Furthermore, the link between these empirically observed changes in neural activity as a result of learning and circuit-level changes in recurrent dynamics is unclear. In this talk I will discuss our recent efforts towards developing dimensionality reduction and data-driven modeling methods based on tensors in order to identify how neural representations change over learning. First, we introduce a new tensor decomposition, sliceTCA, which is able to disentangle latent variables of multiple covariability classes that are often mixed in neural population data. We demonstrate in three datasets that sliceTCA is able to capture more behaviorally relevant information in neural data than previous methods. Second, to probe how circuit-level changes in neural dynamics implement the observed changes in neural activity, we develop a data-driven RNN-based framework in which the recurrent connectivity is constrained to be low tensor rank. We demonstrate that such low tensor rank RNNs (ltrRNNs) are able to capture changes in neural geometry and dynamics in motor cortical data from a motor adaptation task. Together, both sliceTCA and ltrRNN demonstrate the utility of interpretable, tensor-based methods for discovering learning-induced changes in neural representations directly from data.
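
To fix ideas, a synthetic demo of the kind of structure such decompositions target: approximating a neurons-by-time-by-trials tensor with a term that is a neuron-loading vector times a shared time-by-trial slice, fit here by simple alternating least squares. This is a sketch of the structure, not the sliceTCA algorithm or package:

```python
# Fit one "slice-rank-1" component (neuron loadings x time-by-trial slice)
# to a synthetic data tensor by alternating least squares.
import numpy as np

rng = np.random.default_rng(0)
N, T, K = 30, 40, 20                        # neurons, time points, trials
u_true = rng.random(N) + 0.5                # neuron loadings
S_true = rng.normal(size=(T, K))            # shared time-by-trial slice
X = u_true[:, None, None] * S_true[None] + 0.05 * rng.normal(size=(N, T, K))

u = rng.random(N)                           # initialize and alternate
S = rng.normal(size=(T, K))
for _ in range(20):
    u = np.einsum('ntk,tk->n', X, S) / (S * S).sum()
    S = np.einsum('ntk,n->tk', X, u) / (u * u).sum()

est = np.outer(u, S.ravel()).ravel()
true = np.outer(u_true, S_true.ravel()).ravel()
print(f"recovered component correlation: {np.corrcoef(est, true)[0, 1]:.3f}")
```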


Jonathan Kadmon

The Hebrew University

January 31, 2024

Neural mechanisms of adaptive behavior

Animals and humans rapidly adapt their behavior to dynamic environmental changes, such as predator threats or fluctuating food resources, often without immediate rewards. Existing literature posits that animals rely on internal representations of the environment, termed “beliefs”, for their decision policy. However, previous work ties belief updates to external reward signals, which does not explain adaptation in scenarios where trial-and-error approaches are inefficient or potentially perilous. In this work, we propose that the brain utilizes dynamic representations that continuously infer the state of the environment, allowing it to update behavior rapidly. I will present a Bayesian theory for state inference in a partially observed Markov Decision Process with multiple interacting latent variables. Optimal behavior requires knowledge of the hidden interactions between latent states. I will show that recurrent neural networks trained through reinforcement solve the task by learning the hidden interactions between latent states, and that their activity encodes the dynamics of the optimal Bayesian estimators. The behavior of rodents trained on an identical task aligns with our theoretical model and neural network simulations, suggesting that the brain utilizes dynamic internal state representations and inference.
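
A minimal version of the reward-free belief dynamics in this setting: a Bayes filter tracking a hidden two-state environment from noisy observations, updating on every observation rather than on reward. The transition and observation probabilities are illustrative:

```python
# Bayes filter for a hidden two-state environment: predict with the
# transition model, then reweight by the observation likelihood.
import numpy as np

rng = np.random.default_rng(0)
Tmat = np.array([[0.95, 0.05],            # latent state transitions
                 [0.05, 0.95]])
obs_lik = np.array([[0.8, 0.2],           # P(observation | state)
                    [0.2, 0.8]])

state, belief = 0, np.array([0.5, 0.5])
for _ in range(200):
    state = rng.choice(2, p=Tmat[state])          # environment evolves
    obs = rng.choice(2, p=obs_lik[state])         # noisy observation
    belief = obs_lik[:, obs] * (Tmat.T @ belief)  # predict, then update
    belief /= belief.sum()                        # normalize: no reward used

print("true state:", state, "| belief:", np.round(belief, 2))
```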

Nicholas Priebe

The University of Texas at Austin

February 7, 2024


The origins of variable responses in neocortical neurons

I will discuss a collaborative project studying the origins of variable responses in neocortical neurons. The spiking responses of neocortical neurons are remarkably variable: distinct patterns are observed when the same stimulus is presented in sensory areas or when the same action is executed in motor areas. This variability is quantified across trials by measuring the Fano factor (FF) of the neuronal spike counts, which is generally near 1, consistent with spike times following a noisy Poisson process. The two candidate sources of noise are the synaptic drive that converges on individual neurons and intrinsic transduction processes within neurons. To parse the relative contributions of these noise sources, we made whole-cell intracellular recordings from cortical slices in the dynamic clamp configuration, injecting excitatory and inhibitory conductances previously recorded in vivo from visual cortical neurons (Tan et al. 2011). By controlling the conductance directly, we can test whether intrinsic processes contribute to Poisson firing. We found that repeated injections of the same excitatory and inhibitory conductance evoked stereotypical spike trains, resulting in an FF near 0.2. Varying the amplitude of both excitatory and inhibitory conductances changed the firing rate of recorded neurons but not the Fano factor. These records indicate that intrinsic processes do not contribute substantially to the Poisson spiking of cortical cells. Next, to test whether differences in network input are responsible for Poisson spike patterns, we examined spike trains evoked by injecting excitatory and inhibitory conductances recorded during different presentations of the same visual stimulus. These records exhibited different behaviors depending on whether the injected conductances were from visually-driven or spontaneous epochs: during visually-driven epochs, spiking responses were Poisson (FF near 1); during spontaneous epochs, spiking responses were super-Poisson (FF above 1). Both of these observations are consistent with the quenching of variability by sensory stimulation or motor behavior (Churchland et al. 2010). We also found that excitatory conductances, in the absence of inhibition, are sufficient to generate spike trains with Poisson statistics. Our results indicate that Poisson spiking emerges not from intrinsic sources but from differences in the synaptic drive across trials, that the nature of this synaptic drive can alter the character of the variability, and that excitatory input alone is sufficient to generate Poisson spiking.
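
For reference, the variability measure at the heart of the study, computed on synthetic spike counts standing in for the recorded responses (a sketch, not the lab's analysis code):

```python
# Fano factor = variance / mean of spike counts across repeated trials.
# Synthetic counts: Poisson-like (FF ~ 1, as in vivo) vs stereotyped
# responses to identical injected conductances (FF ~ 0.2).
import numpy as np

rng = np.random.default_rng(0)

def fano(counts):
    return counts.var(ddof=1) / counts.mean()

rate, trials = 10.0, 500
poisson_counts = rng.poisson(rate, trials)
reliable_counts = rng.normal(rate, np.sqrt(0.2 * rate), trials).round()

print(f"Poisson-like FF: {fano(poisson_counts):.2f}")
print(f"stereotyped FF:  {fano(reliable_counts):.2f}")
```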

Nader Nikbakht

MIT

February 14, 2024


Thalamocortical dynamics in a complex learned behavior

Performing learned behaviors requires animals to produce precisely timed motor sequences. The underlying neuronal circuits must convert incoming spike trains into precisely timed firing to indicate the onset of crucial sensory cues or to carry out well-coordinated muscle movements. Birdsong is a remarkable example of a complex, learned and precisely timed natural behavior, which is controlled by a brainstem-thalamocortical feedback loop. Projection neurons within the zebra finch cortical nucleus HVC (used as a proper name) produce precisely timed, highly reliable and ultra-sparse neural sequences that are thought to underlie song dynamics. However, the origin of the short-timescale dynamics of the song is debated. One model posits that these dynamics reside in HVC and are mediated through a synaptic chain mechanism. Alternatively, the upstream motor thalamic nucleus uvaeformis (Uva) could drive HVC bursts as part of a distributed brainstem-thalamocortical network. Using focal temperature manipulation, we found that the song dynamics reside chiefly in HVC. We then characterized the activity of the thalamic nucleus Uva, which provides input to HVC. We developed a lightweight (~1 g) microdrive for juxtacellular recordings and with it performed the first extracellular single-unit recordings in Uva during song. The recordings revealed that HVC-projecting Uva neurons carry timing information during the song but, compared to HVC neurons, fire densely in time and are much less reliable. Computational models of Uva-driven HVC neurons estimated that a high degree of synaptic convergence from Uva to HVC is needed to overcome the inconsistency of Uva firing patterns. However, axon terminals of single Uva neurons exhibit low convergence within HVC, such that each HVC neuron receives input from 2-7 Uva neurons. These results suggest that the thalamus maintains sequential cortical activity during song but does not provide unambiguous timing information. Our observations are consistent with a model in which the brainstem-thalamocortical feedback loop acts at the syllable timescale (~100 ms), and do not support a model in which it acts at a fast timescale (~10 ms) to generate sequences within cortex.
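
A back-of-envelope version of the convergence argument: if each Uva input carries timing jitter sigma, the timing of the summed drive improves only as sigma divided by the square root of the number of convergent inputs K, so the measured convergence of 2-7 inputs buys only a modest gain. The jitter value is an illustrative assumption:

```python
# Effective timing jitter of a drive built from K convergent inputs,
# each jittered by sigma: shrinks only as sigma / sqrt(K).
import numpy as np

rng = np.random.default_rng(0)
sigma = 10.0                                   # per-input timing jitter (ms)
for K in (2, 7, 50):
    # event time estimated as the mean arrival time of K jittered inputs
    est = rng.normal(0, sigma, size=(100000, K)).mean(axis=1)
    print(f"K={K:2d}: effective jitter {est.std():.1f} ms")
# with K = 2-7 the jitter shrinks only ~2-3x, far from millisecond
# precision, consistent with the talk's conclusion about Uva's role
```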
