Riccardo Zecchina

Bocconi University, Milano

June 18, 2025

Local Deep Learning without Gradients in Asymmetric Recurrent Networks

We introduce a statistical physics framework for learning in neural architectures composed of single or interconnected asymmetric attractor networks. These systems can exhibit a manifold of global fixed points capable of implementing sophisticated input-output mappings, which we characterize analytically. Learning from extensive datasets is achieved through the stabilization of fixed points via a fully distributed and local learning process, implemented at the single-neuron level. This simple mechanism yields performance comparable to that of conventional feedforward deep neural networks trained using gradient-based methods. The effectiveness of the model stems from the dense and accessible manifolds of stable fixed points, which encode the internal representations of data. Unlike other approaches to deep learning without backpropagation, our method does not attempt to estimate gradients.

Sebastian Seung

Princeton Neuroscience Institute

June 25, 2025

VVTNS Fifth Season Closing Lecture

Insights into vision from interpreting a neuronal wiring diagram

In 2023, the FlyWire Consortium released the neuronal wiring diagram of an adult fly brain. As a corollary, this includes the first complete wiring diagram of a visual system, which has been used to identify all 200+ cell types intrinsic to the Drosophila optic lobe. About half of these cell types were previously unknown, and fewer than 20% have ever been recorded by a physiologist. I will argue that plausible functions for many cell types can be guessed by interpreting the wiring diagram.