
Itamar Landau

Stanford University

February 21, 2024


Random Matrix Theory and the Statistical Constraints of Inferring Population Geometry from Large-Scale Neural Recordings

Contemporary neuroscience has witnessed an impressive expansion in the number of neurons whose activity can be recorded simultaneously, from mere hundreds a decade ago to tens and even hundreds of thousands in recent years. With these advances, characterizing the geometry of population activity from large-scale neural recordings has taken center stage. In classical statistics, the number of repeated measurements is generally assumed to far exceed the number of free variables to be estimated. In our work, we ask a fundamental statistical question: as the number of recorded neurons grows, how are estimates of the geometry of population activity, for example its dimensionality, constrained by the number of repeated experimental trials? Many neuroscience experiments report that neural activity is low-dimensional, with the dimensionality bounded as more neurons are recorded. We therefore begin by modeling neural data as a low-rank neurons-by-trials matrix with additive noise, and employ random matrix theory to show that under this hypothesis iso-contours of constant estimated dimensionality form hyperbolas in the space of neurons and trials -- estimated dimensionality increases as the product of neurons and trials. Interestingly, for a fixed number of trials, increasing the number of neurons improves the estimate of the high-dimensional embedding structure in neural space, despite the fact that this estimation grows more difficult, by definition, with each added neuron. While many neuroscience datasets report low-rank neural activity, a number of recent larger recordings have reported neural activity with "unbounded" dimensionality. With that motivation, we present new random matrix theory results on the distortion of singular vectors of high-rank signals due to additive noise, along with formulas for optimal denoising of such high-rank signals. Perhaps the most natural way to model neural data with unbounded dimensionality is with a power-law covariance spectrum. We examine the inferred dimensionality measured as the estimated power-law exponent, and surprisingly, we find that here too, under subsampling, the iso-contours of constant estimated dimensionality form approximate hyperbolas in the space of neurons and trials -- indicating a non-intuitive but very real compensation between neurons and trials, two very different experimental resources. We test these observations and verify numerical predictions on a number of experimental datasets, showing that our theory can provide a concrete prescription for the numbers of neurons and trials necessary to infer the geometry of population activity. Our work lays a theoretical foundation for experimental design in contemporary neuroscience.
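The low-rank-plus-noise model in the abstract can be simulated in a few lines. Below is a minimal sketch, assuming the participation ratio of the covariance spectrum as the dimensionality measure; the function names, the rank-5 signal, and the unit-variance noise are illustrative choices, not the speaker's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def participation_ratio(X):
    """Participation ratio of the covariance spectrum of X (neurons x trials)."""
    X = X - X.mean(axis=1, keepdims=True)          # center each neuron
    eig = np.linalg.eigvalsh(X @ X.T / X.shape[1])  # covariance eigenvalues
    eig = np.clip(eig, 0.0, None)                   # guard tiny negatives
    return eig.sum() ** 2 / (eig ** 2).sum()

def low_rank_plus_noise(n_neurons, n_trials, rank=5, noise=1.0):
    """Rank-`rank` signal matrix with additive Gaussian noise (hypothetical model)."""
    U = rng.standard_normal((n_neurons, rank))
    V = rng.standard_normal((rank, n_trials))
    return U @ V / np.sqrt(rank) + noise * rng.standard_normal((n_neurons, n_trials))

# How the estimated dimensionality behaves as neurons and trials scale together:
for n, t in [(100, 100), (200, 200), (400, 400)]:
    X = low_rank_plus_noise(n, t)
    print(f"neurons={n}, trials={t}, estimated dim={participation_ratio(X):.1f}")
```

Sweeping neuron and trial counts on a grid and contouring the resulting estimates would reproduce the hyperbola-shaped iso-contours described in the talk.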

February 28 & March 6, 2024

Cosyne 2024

No Seminar


Paul Cisek

University of Montreal

March 13, 2024

Rethinking behavior in the light of evolution

In theoretical neuroscience, the brain is usually described as an information processing system that encodes and manipulates representations of knowledge to produce plans of action. This view leads to a decomposition of brain functions into putative processes such as object recognition, working memory, decision-making, action planning, etc., inspiring the search for the neural correlates of these processes. However, neurophysiological data do not support many of the predictions of these classic subdivisions. Instead, there is divergence and broad distribution of functions that should be unified, mixed representations combining functions that should be distinct, and a general incompatibility with the conceptual subdivisions posited by theories of information processing. In this talk, I will explore the possibility of resynthesizing a different set of functional subdivisions, guided by the growing body of data on the evolutionary process that produced the human brain. I will summarize, in chronological order, a proposed sequence of innovations that appeared in nervous systems along the lineage that leads from the earliest multicellular animals to humans. Along the way, functional subdivisions and elaborations will be introduced in parallel with the neural specializations that made them possible, gradually building up an alternative conceptual taxonomy of brain functions. These functions emphasize mechanisms for real-time interaction with the world, rather than for building explicit knowledge of the world, and the relevant representations emphasize pragmatic outcomes rather than decoding accuracy, mixing variables in the way seen in real neural data. I suggest that this alternative taxonomy may better delineate the real functional pieces into which the brain is organized, and can offer a more natural mapping between behavior and neural mechanisms.

Merav Stern

Rockefeller University

March 20, 2024


TBA


Lorenzo Fontolan

Aix-Marseille Université

March 27, 2024

TBA

TBA

April 3, 2024

TBA


Michael Buice

Allen Institute

April 10, 2024

TBA

TBA

April 17, 2024

TBA

April 24, 2024

No seminar

TBA

May 1, 2024

TBA

TBA

May 8, 2024

TBA

Agostina Palmigiano

Gatsby Unit, London

May 15, 2024


TBA

TBA

May 22, 2024

TBA

TBA

May 29, 2024

TBA

TBA

June 5, 2024

TBA

June 12, 2024

No seminar

TBA

June 19, 2024

TBA

TBA

June 26, 2024

VVTNS Fourth Season Closing Lecture

TBA
