Ariel Furstenberg

The Hebrew University

November 24, 2021

Change of mind in rapid free-choice picking scenarios

In a famous philosophical paradox, Buridan's ass perishes because it is equally hungry and thirsty and cannot decide whether to drink or to eat first. We are faced daily with the need to pick between alternatives that are equally attractive (or unattractive) to us. What processes allow us to avoid paralysis and to rapidly select between such equal options when there are no preferences or rational reasons to rely on? One proposed solution is that although there is symmetry between the alternatives on a higher cognitive level, on the neuronal level the symmetry does not hold. What is the nature of this asymmetry at the neuronal level? In this talk I will present experiments addressing this phenomenon using measures of human behavior, EEG, EMG, and large-scale neural network modeling, and discuss mechanisms involved in the process of intention formation and execution in the face of alternatives to choose from. Specifically, I will show results revealing the temporal dynamics of rapid intention formation and, moreover, ‘change of intention’ in a free-choice picking scenario in which the alternatives are on a par for the participant. The results suggest that even in arbitrary choices, endogenous or exogenous biases present in the neural system for selecting one option over another may be implicitly overruled, creating an implicit and non-conscious ‘change of mind’. Finally, the question is raised: in what way do such rapid implicit ‘changes of mind’ help retain one's self-control and free-will behavior?
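
As a rough illustration only (not the speaker's model), a minimal two-accumulator race sketch shows how an endogenous bias toward one option can be overruled by neural noise before threshold, producing an implicit ‘change of intention’; all parameters here are arbitrary choices for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

def race_trial(bias=0.2, noise=1.5, threshold=1.0, dt=1e-3, t_max=1.0):
    """Two noisy leaky accumulators race to threshold; `bias` gives option A
    an endogenous head start. Returns the winner and whether the early
    leader was overruled (an implicit 'change of intention')."""
    xa, xb = bias, 0.0
    leader, t = None, 0.0
    while t < t_max:
        xa += dt * (-xa) + np.sqrt(dt) * noise * rng.normal()
        xb += dt * (-xb) + np.sqrt(dt) * noise * rng.normal()
        t += dt
        if leader is None and t >= 0.1:          # leader ~100 ms after onset
            leader = 'A' if xa > xb else 'B'
        if max(xa, xb) >= threshold:
            winner = 'A' if xa >= xb else 'B'
            return winner, winner != leader
    return None, False                           # no decision within t_max

results = [race_trial() for _ in range(500)]
decided = [(w, c) for w, c in results if w is not None]
print(f"decided: {len(decided)}/500, implicit change-of-mind rate: "
      f"{np.mean([c for _, c in decided]):.2f}")
```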

Farzad Farkhooi

Humboldt University of Berlin

November 17, 2021

Noise-induced properties of active dendrites

Neuronal dendritic trees display a wide range of nonlinear input integration due to their voltage-dependent active calcium channels. We reveal that in vivo-like fluctuating input substantially enhances nonlinearity in a single dendritic compartment and shifts the input-output relation toward nonmonotonic or bistable dynamics. In particular, with the slow activation of calcium dynamics, we analyze noise-induced bistability and its timescales. We show that bistability induces long-timescale fluctuations that can account for dendritic plateau potentials observed under in vivo conditions. In a multicompartmental model neuron with realistic synaptic input, we show that noise-induced bistability persists over a wide range of parameters. Using Fredholm theory to calculate the spiking rate of multivariable neurons, we discuss how dendritic bistability shifts the spiking dynamics of single neurons and its implications for network phenomena in the processing of in vivo-like fluctuating input.
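
A minimal sketch of the noise-induced bistability idea, using a generic double-well caricature rather than the talk's calcium-channel model; the potential and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# dv/dt = v - v**3 + sigma*xi(t): a double well with stable states near -1
# ('rest') and +1 ('plateau-like'), and an intrinsic relaxation time of ~1.
dt, T, sigma = 1e-3, 1000.0, 0.5
n = int(T / dt)
v = np.empty(n)
v[0] = -1.0
noise = np.sqrt(dt) * sigma * rng.normal(size=n)
for i in range(1, n):
    v[i] = v[i-1] + dt * (v[i-1] - v[i-1]**3) + noise[i]

# Dwell times between well-to-well switches expose the slow, noise-induced
# timescale (Kramers escape), far longer than the relaxation time.
switches = np.flatnonzero(np.diff((v > 0).astype(int)))
dwells = np.diff(switches) * dt
print(f"{len(dwells)} switches; mean dwell time {dwells.mean():.0f} "
      f"vs. relaxation time ~1")
```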

Tatjana Tchumatchenko

University of Bonn

November 10, 2021

Synaptic plasticity controls the emergence of population-wide invariant representations in balanced network models

The intensity and features of sensory stimuli are encoded in the activity of neurons in the cortex. In the visual and piriform cortices, the stimulus intensity re-scales the activity of the population without changing its selectivity for the stimulus features. The cortical representation of the stimulus is therefore intensity-invariant. The emergence of such invariant network representations appears robust to local changes in synaptic strength induced by synaptic plasticity, even though: i) synaptic plasticity can potentiate or depress connections between neurons in a feature-dependent manner, and ii) in networks with balanced excitation and inhibition, synaptic plasticity determines the non-linear network behavior. In this study, we investigate the consistency of invariant representations with a variety of synaptic states in balanced networks. Using mean-field models and spiking network simulations, we show how the synaptic state controls the emergence of intensity-invariant or intensity-dependent selectivity by inducing changes in the network response to intensity. In particular, we demonstrate how facilitating synaptic states can sharpen the network selectivity while depressing states broaden it. We also show how power-law-type synapses permit the emergence of invariant network selectivity and how this plasticity can be generated by a mix of different plasticity rules. Our results explain how the physiology of individual synapses is linked to the emergence of invariant representations of sensory stimuli at the network level.
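
A toy illustration of why power-law transfer permits intensity-invariant selectivity (a caricature, not the study's balanced-network model): if rates follow (intensity x drive)^a, the intensity factors out of the normalized tuning curve, while the exponent a sharpens or broadens it:

```python
import numpy as np

theta = np.linspace(-np.pi, np.pi, 181)
drive = np.exp(np.cos(theta) - 1.0)        # feature tuning of feedforward input

def response(intensity, a):
    """Toy transfer: rate = (intensity * drive)**a. The power law factorizes,
    so the *shape* of the population tuning curve is intensity-invariant."""
    return (intensity * drive) ** a

for a, regime in [(2.0, "supralinear -> sharpened"), (0.5, "sublinear -> broadened")]:
    shapes = [response(c, a) / response(c, a).max() for c in (0.5, 1.0, 2.0)]
    invariant = all(np.allclose(s, shapes[0]) for s in shapes)
    half_width = (shapes[0] > 0.5).mean()  # fraction of angles above half-height
    print(f"a={a}: shape invariant across intensities: {invariant}; "
          f"half-height width fraction: {half_width:.2f} ({regime})")
```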

Vijay Balasubramanian

University of Pennsylvania

November 3, 2021

Becoming what you smell: adaptive sensing in the olfactory system

I will argue that the circuit architecture of the early olfactory system provides an adaptive, efficient mechanism for compressing the vast space of odor mixtures into the responses of a small number of sensors. In this view, the olfactory sensory repertoire employs a disordered code to compress a high-dimensional olfactory space into a low-dimensional receptor response space while preserving distance relations between odors. The resulting representation is dynamically adapted to efficiently encode the changing environment of volatile molecules. I will show that this adaptive combinatorial code can be efficiently decoded by systematically eliminating candidate odorants that bind to silent receptors. The resulting algorithm for "estimation by elimination" can be implemented by a neural network that is remarkably similar to the early olfactory pathway in the brain. Finally, I will discuss how diffuse feedback from the central brain to the bulb, followed by unstructured projections back to the cortex, can produce the convergence and divergence of the cortical representation of odors presented in shared or different contexts. Our theory predicts a relation between the diversity of olfactory receptors and the sparsity of their responses that matches animals from flies to humans. It also predicts specific deficits in olfactory behavior that should result from optogenetic manipulation of the olfactory bulb and cortex, and in some disease states.
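
A minimal sketch of the "estimation by elimination" idea with a random binary binding matrix; the matrix, sizes, and binding probability are illustrative assumptions, not measured receptor data:

```python
import numpy as np

rng = np.random.default_rng(2)

n_odorants, n_receptors, k = 200, 40, 3       # sparse mixture: 3 of 200 present
bind = rng.random((n_receptors, n_odorants)) < 0.1   # binary binding matrix

mixture = np.zeros(n_odorants, dtype=bool)
mixture[rng.choice(n_odorants, size=k, replace=False)] = True
# A receptor is active if any present odorant binds it.
active = (bind.astype(int) @ mixture.astype(int)) > 0

# Elimination: an odorant that binds any *silent* receptor cannot be present.
candidates = ~bind[~active].any(axis=0)
print(f"recovered {int(mixture[candidates].sum())}/{k} true odorants; "
      f"{int((candidates & ~mixture).sum())} false positives remain")
```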

Carsen Stringer

HHMI Janelia Research Campus

October 27, 2021

Rastermap: Extracting structure from high dimensional neural data

Large-scale neural recordings contain high-dimensional structure that cannot be easily captured by existing data visualization methods. We therefore developed an embedding algorithm called Rastermap, which captures highly nonlinear relationships between neurons and provides useful visualizations by assigning each neuron to a location in the embedding space. Compared to standard algorithms such as t-SNE and UMAP, Rastermap finds finer and higher-dimensional patterns of neural variability, as measured by quantitative benchmarks. We applied Rastermap to a variety of datasets, including spontaneous neural activity, neural activity during a virtual reality task, widefield neural imaging data during a 2AFC task, artificial neural activity from an agent playing Atari games, and neural responses to visual textures. Within these datasets we found unique subpopulations of neurons encoding abstract properties of the environment.
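
For intuition, here is a crude stand-in for what a neuron embedding achieves, using classical spectral seriation rather than the actual Rastermap algorithm (which optimizes a more powerful nonlinear objective; code is released at github.com/MouseLand/rastermap): neurons tiling a latent 1D variable are re-ordered so that correlated neurons end up adjacent:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic recording: 100 neurons whose bump-like activations tile a latent
# 1D variable, stored in shuffled neuron order.
t = np.linspace(0, 1, 500)
prefs = rng.permutation(np.linspace(0, 1, 100))
X = np.exp(-((t[None, :] - prefs[:, None]) ** 2) / 0.01)
X += 0.1 * rng.normal(size=X.shape)

# Crude 1D seriation: order neurons by the Fiedler vector of the graph
# Laplacian built from pairwise correlations (classical spectral ordering).
W = np.clip(np.corrcoef(X), 0, None)      # nonnegative neuron-neuron similarity
L = np.diag(W.sum(axis=1)) - W
_, evecs = np.linalg.eigh(L)
order = np.argsort(evecs[:, 1])           # 2nd eigenvector gives the 1D order

r = np.corrcoef(prefs[order], np.arange(100))[0, 1]
print(f"|corr| between recovered position and true latent preference: {abs(r):.2f}")
```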

October 20, 2021 - 10 am to 12:15 pm (EDT)

In Memoriam: Naftali Tishby (1952-2021)

Amir Globerson

Tel Aviv University

On the implicit bias of SGD in deep learning

Tali's work emphasized the tradeoff between compression and information preservation. In this talk I will explore this theme in the context of deep learning. Artificial neural networks have recently revolutionized the field of machine learning. However, we still do not have a sufficient theoretical understanding of how such models can be successfully learned. Two specific questions in this context are: how can neural nets be learned despite the non-convexity of the learning problem, and how can they generalize well despite often having more parameters than training data? I will describe our recent work showing that gradient-descent optimization indeed leads to "simpler" models, where simplicity is captured by lower weight norm and, in some cases, clustering of weight vectors. We demonstrate this for several teacher and student architectures, including learning linear teachers with ReLU networks, learning Boolean functions, and learning convolutional pattern-detection architectures.
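
A toy numpy sketch of the phenomenon (not the paper's exact setting or proof regime): full-batch gradient descent trains an over-parameterized ReLU student on a linear teacher, after which one can inspect whether the hidden weight vectors cluster along the teacher direction; sizes, learning rate, and initialization are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)

d, k, n = 20, 10, 1000
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)
X = rng.normal(size=(n, d))
y = X @ w_star                                  # linear teacher

# Over-parameterized ReLU student: f(x) = sum_j a_j * relu(w_j . x)
W = 0.5 * rng.normal(size=(k, d)) / np.sqrt(d)
a = 0.5 * rng.normal(size=k)

lr = 0.05
for _ in range(5000):
    pre = X @ W.T                               # pre-activations, shape (n, k)
    h = np.maximum(pre, 0.0)
    err = h @ a - y
    grad_a = h.T @ err / n
    grad_W = ((err[:, None] * (pre > 0)) * a[None, :]).T @ X / n
    a -= lr * grad_a
    W -= lr * grad_W

cos = (W @ w_star) / np.linalg.norm(W, axis=1)
print("cosine of each hidden weight with the teacher direction (sorted):")
print(np.round(np.sort(cos), 2))    # values near +/-1 indicate clustering
```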

Eli Nelken

The Hebrew University of Jerusalem

Through the bottleneck: my adventures with the 'Tishby program'

One of Tali's cherished goals was to transform biology into physics. In his view, biologists were far too enamored of the details of the specific models they studied, losing sight of the big principles that may govern the behavior of these models. One such big principle that he suggested was the information bottleneck (IB) principle, an information-theoretic approach for extracting the relevant information that one random variable carries about another. Tali applied the IB principle to numerous problems in biology, gaining important insights in the process. Here I will describe two applications of the IB principle to neurobiological data. The first is a formalization of the notion of surprise that allowed us to rigorously estimate the memory duration and content of neuronal responses in auditory cortex; the second is an application to behavior, allowing us to estimate 'optimal policies under information constraints' that shed interesting light on rat behavior.
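
For readers unfamiliar with the IB principle, here is a compact sketch of its self-consistent updates on a random discrete joint distribution (the distributions and beta are arbitrary; this is the canonical iteration, not the talk's neural application):

```python
import numpy as np

rng = np.random.default_rng(5)

# Information bottleneck (Tishby, Pereira & Bialek): find p(t|x) minimizing
# I(X;T) - beta * I(T;Y) via the self-consistent update
#   p(t|x)  ~  p(t) * exp(-beta * KL[p(y|x) || p(y|t)])
nx, ny, nt, beta, eps = 8, 4, 3, 5.0, 1e-12
p_x = np.full(nx, 1 / nx)
p_y_x = rng.dirichlet(np.ones(ny), size=nx)     # rows: p(y|x)
p_t_x = rng.dirichlet(np.ones(nt), size=nx)     # rows: p(t|x), random init

for _ in range(200):
    p_t = np.maximum(p_x @ p_t_x, eps)                       # p(t)
    p_y_t = np.maximum((p_t_x * p_x[:, None]).T @ p_y_x, eps) / p_t[:, None]
    kl = np.sum(p_y_x[:, None, :]
                * np.log(p_y_x[:, None, :] / p_y_t[None, :, :]), axis=2)
    logits = np.log(p_t)[None, :] - beta * kl
    p_t_x = np.exp(logits - logits.max(axis=1, keepdims=True))
    p_t_x /= p_t_x.sum(axis=1, keepdims=True)

p_t = p_x @ p_t_x
i_xt = np.sum(p_t_x * p_x[:, None]
              * np.log(np.maximum(p_t_x, eps) / p_t[None, :]))
print(f"compression I(X;T) = {i_xt:.3f} nats at beta = {beta}")
```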

Ila Fiete

MIT

October 13, 2021

Pre-structured scaffolds for memory: an architecture for robust high-capacity associative memory that can trade off pattern number and richness

André Longtin

University of Ottawa

October 6, 2021

Adaptation-driven sensory detection and sequence memory

Spike-driven adaptation involves intracellular mechanisms that are initiated by spiking and lead to a subsequent reduction in spiking rate. One of its consequences is the temporal patterning of spike trains, as it imparts serial correlations between interspike intervals in baseline activity. Surprisingly, the hidden adaptation states that lead to these correlations are themselves quasi-independent. This talk will first discuss recent findings on the role of such adaptation in suppressing noise and extending sensory detection to weak stimuli that leave the firing rate unchanged. Further, matching the post-synaptic responses to the pre-synaptic adaptation timescale recovers the quasi-independence property and can explain observed correlations between post-synaptic EPSPs and behavioural detection thresholds. We then consider the involvement of spike-driven adaptation in the representation of intervals between sensory events, and discuss the possible link of this time-stamping mechanism to the conversion of egocentric to allocentric coordinates. The heterogeneity of the population parameters enables the representation and Bayesian decoding of time sequences of events, which may be put to good use in path integration and in hilus neuron function in the hippocampus.
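
A minimal sketch of how spike-driven adaptation imparts negative serial correlations between interspike intervals, using a generic adapting integrate-and-fire model with arbitrary parameters (not the talk's specific system):

```python
import numpy as np

rng = np.random.default_rng(6)

# Perfect integrate-and-fire with a spike-driven adaptation current `a`:
# v' = mu - a + noise; each spike resets v and kicks a, which decays (tau_a).
dt, mu, tau_a, delta_a, sigma = 5e-4, 20.0, 0.1, 10.0, 2.0
v, a, t = 0.0, 0.0, 0.0
spikes = []
while len(spikes) < 2000:
    v += dt * (mu - a) + np.sqrt(dt) * sigma * rng.normal()
    a -= dt * a / tau_a
    t += dt
    if v >= 1.0:                  # threshold crossing
        spikes.append(t)
        v = 0.0
        a += delta_a              # adaptation builds up with each spike

# A short ISI leaves `a` elevated, lengthening the next ISI, and vice versa.
isi = np.diff(spikes)
rho1 = np.corrcoef(isi[:-1], isi[1:])[0, 1]
print(f"lag-1 ISI serial correlation: {rho1:.2f} (adaptation makes it negative)")
```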