October 11, 2023
Exceptionally, the talk will start at 11:15 am ET
Vasomotor dynamics: Measuring, modeling, and understanding the other network in the brain
Much as Santiago Ramón y Cajal is the godfather of neuronal computation, which occurs among neurons that communicate predominantly via threshold logic, Camillo Golgi is the inadvertent godfather of neurovascular signaling, in which the endothelial cells that form the lumen of blood vessels communicate via electrodiffusion as well as threshold logic. I will address questions that define the spatiotemporal patterns of constriction and dilation that develop across the network of cortical vasculature. First: is there a common topology and geometry of brain vasculature (our work)? Second: what mechanisms govern neuron-to-vessel and vessel-to-vessel signaling (work of Mark Nelson at U Vermont)? Last: what is the nature of competition among arteriole smooth muscle oscillators and the underlying neuronal drive (our work)? The answers to these questions bear on fundamental aspects of brain science as well as practical issues, including the relation of fMRI signals to neuronal activity and the impact of vascular dysfunction on cognition. Challenges and opportunities for experimentalists and theorists alike will be discussed.
University of Colorado, Boulder
October 18, 2023
A unifying framework for movement control and decision making
To understand subjective evaluation of an option, various disciplines have quantified the interaction between reward and effort during decision making, producing an estimate of economic utility, namely the subjective ‘goodness’ of an option. However, the same variables that affect the utility of an option also influence the vigor (speed) of movements to acquire it. To better understand this, we have developed a mathematical framework demonstrating how utility can influence not only the choice of what to do, but also the speed of the movement that follows. I will present results demonstrating that expectation of reward increases the speed of saccadic eye and reaching movements, whereas expectation of effort expenditure decreases this speed. Intriguingly, when deliberating between two visual options, saccade vigor to each option increases differentially, encoding their relative value. These results and others imply that vigor may serve as a new, real-time metric with which to quantify subjective utility, and that the control of movements may be an implicit reflection of the brain’s economic evaluation of the expected outcome.
October 25, 2023
Multimodal units fuse-then-accumulate evidence across channels
We continuously detect sensory data, like sights and sounds, and use this information to guide our behaviour. However, rather than relying on single sensory channels, which are noisy and can be ambiguous alone, we merge information across our senses and leverage this combined signal. In biological networks, this process (multisensory integration) is implemented by multimodal neurons which are often thought to receive the information accumulated by unimodal areas, and to fuse this across channels; an algorithm we term accumulate-then-fuse. However, it remains an open question how well this theory generalises beyond the classical tasks used to test multimodal integration. Here, we explore this by developing novel multimodal tasks and deploying probabilistic, artificial and spiking neural network models. Using these models we demonstrate that multimodal units are not necessary for accuracy or balancing speed/accuracy in classical multimodal tasks, but are critical in a novel set of tasks in which we comodulate signals across channels. We show that these comodulation tasks require multimodal units to implement an alternative fuse-then-accumulate algorithm, which excels in naturalistic settings and is optimal for a wide class of multimodal problems. Finally, we link our findings to experimental results at multiple levels; from single neurons to behaviour. Ultimately, our work suggests that multimodal neurons may fuse-then-accumulate evidence across channels, and provides novel tasks and models for exploring this in biological systems.
Google DeepMind
November 1, 2023
Prediction Models for Brains and Machines
Humans and animals learn and plan with flexibility and efficiency well beyond that of modern machine learning methods. This is hypothesized to be due in part to the ability of animals to build structured representations of their environments and to modulate these representations to rapidly adapt to new settings. In the first part of this talk, I will discuss theoretical work describing how learned representations in the hippocampus enable rapid adaptation to new goals through predictive representations. I will also cover work extending this account, in which we show how the predictive model can be adapted to the probabilistic setting to describe a broader array of generalization results in humans and animals, and how entorhinal representations can be modulated to support sample generation optimized for different behavioral states. I will then discuss work applying this perspective to the deep RL setting, where we can study the effect of predictive learning on the representations that form in a deep neural network and how these results compare to neural data. In the second part of the talk, I will overview some of the ways in which we have combined many of the same mathematical concepts with state-of-the-art deep learning methods to improve efficiency and performance in machine learning applications such as physical simulation, relational reasoning, and design.
November 8, 2023
Postponed to December 6
November 15, 2023
Society for Neuroscience Meeting
Goethe-University, Frankfurt am Main
November 22, 2023
The Emergence of Cortical Representations
The internal and external world is thought to be represented by distributed patterns of cortical activity. The emergence of these cortical representations over the course of development remains an unresolved question. In this talk, I share results from a series of recent studies combining theory and experiments in the cortex of the ferret, a species with a well-defined columnar organization and modular network of orientation-selective responses in visual cortex. I show that prior to the onset of structured sensory experience, endogenous mechanisms set up a highly organized cortical network structure that is evident in modular patterns of spontaneous activity characterized by strong, clustered local and long-range correlations. This correlation structure is remarkably consistent across both sensory and association areas in the early neocortex, suggesting that diverse cortical representations initially develop according to similar principles. Next, I explore a classical candidate mechanism for producing modular activity – local excitation and lateral inhibition. I present the first empirical test of this mechanism through direct optogenetic cortical activation and discuss a plausible circuit implementation. Then, focusing on the visual cortex, I demonstrate that these endogenously structured networks enable orientation-selective responses immediately after eye opening. However, these initial responses are highly variable, lacking the reliability and low-dimensional structure observed in the mature cortex. Reliable responses are achieved after an experience-dependent co-reorganization of stimulus-evoked and spontaneous activity following eye opening. Based on these observations, I propose the hypothesis that the alignment between feedforward inputs and the recurrent network plays a crucial role in transforming the initially variable responses into mature and reliable representations.