Karel Svoboda

Allen Institute

October 30, 2024

VVTNS Fifth Season Opening Lecture

Illuminating synaptic learning

How do synapses in the middle of the brain know how to adjust their weights to advance a behavioral goal (i.e., learning)? This is referred to as the synaptic ‘credit assignment problem’. A large variety of synaptic learning rules have been proposed, mainly in the context of artificial neural networks. The most powerful learning rules (e.g., back-propagation of error) are thought to be biologically implausible, whereas the widely studied biological learning rules (e.g., Hebbian plasticity) are insufficient for goal-directed learning. I will describe ongoing work focused on understanding synaptic learning rules in the cortex during a brain-computer interface task.
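
For intuition about the gap the talk addresses, here is a minimal sketch (illustrative only, not the speaker's model; all names and constants are assumptions) contrasting a purely Hebbian update, which has no access to any behavioral goal, with a three-factor, reward-modulated variant that gates the same correlational term with a global error signal:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 20, 1
w = rng.normal(0.0, 0.1, (n_out, n_in))  # synaptic weights

def hebbian_update(w, x, y, lr=1e-3):
    # Pure Hebbian rule: strengthen co-active input/output pairs.
    # Nothing here references the behavioral goal, so the rule
    # cannot assign credit for task performance.
    return w + lr * np.outer(y, x)

def reward_modulated_update(w, x, y, reward, baseline, lr=1e-3):
    # Three-factor rule: the same correlational term is gated by a
    # global scalar (reward minus its expectation), providing a
    # crude but biologically plausible credit signal.
    return w + lr * (reward - baseline) * np.outer(y, x)
```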

Hannah Choi

Georgia Tech

November 6, 2024

Unraveling information processing through functional networks

While anatomical connectivity changes slowly through synaptic learning, the functional connectivity of neurons changes rapidly with their ongoing activity and interactions. Functional networks of neurons and neural populations reflect how these interactions change with behavior, stimulus type, and internal state. Information propagation across a network can therefore be analyzed through the varying topological properties of functional networks. Our study investigates the functional networks of the visual cortex at both the single-cell and population levels. Our analyses of single-neuron functional connectivity, constructed from spiking activity in neural populations of the visual cortex, reveal local and global network structures shaped by stimulus complexity. In addition, we propose a new method for inferring functional interactions between neural populations that preserves biologically constrained anatomical connectivity and connection signs. Applying this method to two-photon data from the mouse visual cortex, we uncover functional interactions between cell types and cortical layers, suggesting distinct pathways for processing expected and unexpected visual information.
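
As a point of reference for what a functional network is (a toy stand-in, not the population-level inference method proposed in the talk), an edge structure is often seeded from pairwise correlations of binned spike trains:

```python
import numpy as np

def functional_connectivity(spikes, bin_size=10):
    """Estimate a simple functional network from spiking activity.

    spikes: (n_neurons, n_timesteps) array of spike counts.
    Returns an (n_neurons, n_neurons) correlation matrix whose
    thresholded entries can serve as functional-network edges.
    """
    n_neurons, n_timesteps = spikes.shape
    n_bins = n_timesteps // bin_size
    binned = (
        spikes[:, : n_bins * bin_size]
        .reshape(n_neurons, n_bins, bin_size)
        .sum(axis=2)
    )
    fc = np.corrcoef(binned)
    np.fill_diagonal(fc, 0.0)  # discard trivial self-correlations
    return fc
```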

Friedemann Zenke

University of Basel

November 13, 2024

Learning invariant representations through prediction

Discriminating distinct objects and concepts from sensory stimuli is essential for survival. Our brains perform this processing in deep sensory networks shaped through plasticity. However, our understanding of the underlying plasticity mechanisms remains rudimentary. I will introduce Latent Predictive Learning (LPL), a plasticity model prescribing a local learning rule that combines Hebbian elements with predictive plasticity. I will show that deep neural networks equipped with LPL develop disentangled object representations without supervision. The same rule accurately captures neuronal selectivity changes observed in the primate inferotemporal cortex in response to altered visual experience. Finally, our model generalizes to spiking neural networks and naturally accounts for several experimentally observed properties of synaptic plasticity, including metaplasticity and spike-timing-dependent plasticity (STDP). LPL thus constitutes a plausible normative theory of representation learning in the brain while making concrete testable predictions.
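
As a rough intuition pump, here is a heavily simplified objective in the spirit of LPL (an assumption-laden toy: it keeps only a temporal-prediction term and a variance term, and omits the decorrelation machinery and the local, layer-wise formulation of the actual model):

```python
import numpy as np

def lpl_like_loss(z_t, z_prev, lam_pred=1.0, lam_var=1.0, eps=1e-4):
    """z_t, z_prev: (batch, n_units) representations of temporally
    adjacent views of the same input.

    The predictive term pulls consecutive representations together;
    the variance term is a Hebbian-like pressure that keeps units
    active and prevents collapse onto a constant representation.
    """
    predictive = np.mean((z_t - z_prev) ** 2)
    variance = -np.mean(np.log(z_t.var(axis=0) + eps))
    return lam_pred * predictive + lam_var * variance
```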

Chris Eliasmith

University of Waterloo

November 20, 2024

The algebra of cognition

In recent years, my lab and others have demonstrated the value of vector symbolic algebras (VSAs) for capturing a wide variety of neural and behavioural results. In this talk, I will discuss the surprising and compelling variety of tasks and styles of reasoning that are well suited to description by a specific VSA. These tasks include path integration, navigation, Bayesian reasoning, sampling, memorization, and logical inference. The resulting spiking neural network models capture various hippocampal cell types (grid, place, border, etc.), behavioural errors, and a variety of observed neural dynamics.
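
To ground the "algebra" in a concrete operation: in holographic reduced representations (the VSA family underlying Eliasmith's semantic pointers), symbols are high-dimensional vectors, binding is circular convolution, and unbinding is circular correlation. A minimal sketch (dimensionality and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
d = 1024  # vector dimensionality

def symbol():
    # A random unit vector serves as an atomic symbol.
    v = rng.normal(0.0, 1.0 / np.sqrt(d), d)
    return v / np.linalg.norm(v)

def bind(a, b):
    # Circular convolution composes two symbols into a vector
    # dissimilar to both; computed in the frequency domain.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    # Circular correlation approximately inverts binding.
    return np.real(np.fft.ifft(np.fft.fft(c) * np.conj(np.fft.fft(a))))

role, filler = symbol(), symbol()
trace = bind(role, filler)
recovered = unbind(trace, role)
print(np.dot(recovered, filler))  # well above chance: filler is recoverable
```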

Memming Park

Champalimaud Foundation

November 27, 2024

Back to the Continuous Attractor

Continuous attractors offer a unique class of solutions for storing continuous-valued variables in the states of a recurrent system over indefinitely long time intervals. Unfortunately, continuous attractors generally suffer from severe structural instability: they are destroyed by most infinitesimal changes to the dynamical law that defines them. This fragility limits their utility, especially in biological systems, whose recurrent dynamics are subject to constant perturbation. We observe that the bifurcations from continuous attractors in theoretical neuroscience models take various structurally stable forms. Although their asymptotic memory-maintaining behaviors are categorically distinct, their finite-time behaviors are similar. We build on persistent manifold theory to explain the commonalities between bifurcations from, and approximations of, continuous attractors. A fast-slow decomposition analysis uncovers a persistent slow manifold that survives the seemingly destructive bifurcation, relating the flow within the manifold to the size of the perturbation. Moreover, this allows us to bound the memory error of these approximations of continuous attractors. Finally, we train recurrent neural networks on analog memory tasks to confirm that such systems appear as solutions and to probe their generalization capabilities. We conclude that continuous attractors are functionally robust and remain useful as a universal analogy for understanding analog memory.
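
A toy illustration of the fragility-versus-robustness point (purely illustrative: a scalar memory with a hand-picked sinusoidal perturbation, not the analysis from the talk): an unperturbed line attractor holds any value forever, while a perturbed one drifts along the surviving slow manifold at a rate set by the perturbation size, so memories degrade gracefully on finite timescales.

```python
import numpy as np

def stored_value(x0, eps, dt=0.01, T=50.0):
    """Integrate dx/dt = -eps * sin(x) for time T, starting at x0.

    eps = 0 is a perfect line attractor: every x is a fixed point and
    x0 is held indefinitely. Any eps > 0 destroys the continuum of
    fixed points, yet the residual drift along the slow manifold
    scales with eps, so the stored value decays slowly, not abruptly.
    """
    x = x0
    for _ in range(int(T / dt)):
        x += dt * (-eps * np.sin(x))
    return x

for eps in (0.0, 0.01, 0.1):
    print(f"eps={eps}: x(T)={stored_value(1.0, eps):.3f}")
```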
