Neural Systems III
FOCUS · K02 · ID: 46189
Presentations
-
Associative Memory of Knowledge Structures and Sequences
ORAL · Invited
–
Publication: Julia Steinberg and Haim Sompolinsky. Associative memory of structured knowledge. In preparation.
Presenters
-
Julia A Steinberg
Princeton University
Authors
-
Julia A Steinberg
Princeton University
-
Haim I Sompolinsky
The Hebrew University of Jerusalem and Center for Brain Science, Harvard University
-
-
Error-driven Input Modulation: Solving the Credit Assignment Problem without a Backward Pass
ORAL · Invited
–
Publication: G. Dellaferrera and G. Kreiman. Error-driven Input Modulation: Solving the Credit Assignment Problem without a Backward Pass. Manuscript in preparation.
Presenters
-
Giorgia Dellaferrera
Harvard Medical School and Boston Children's Hospital
Authors
-
Giorgia Dellaferrera
Harvard Medical School and Boston Children's Hospital
-
Gabriel Kreiman
Harvard Medical School and Boston Children's Hospital
-
-
Robust sequential retrieval of memories in interaction-modulated neural networks
ORAL
–
Publication: In preparation.
Presenters
-
Lukas Herron
University of Maryland, College Park
Authors
-
Lukas Herron
University of Maryland, College Park
-
BingKan Xue
University of Florida
-
Pablo Sartori
Gulbenkian Institute
-
-
Working memory via combinatorial persistent states atop chaos in a random multivariate network
ORAL
–
Publication: Pang, Rich. "Working memory via combinatorial persistent states atop chaos in a random multivariate network." In progress.
Presenters
-
Rich Pang
Princeton University
Authors
-
Rich Pang
Princeton University
-
-
Dynamical phases and computation in nonlinear networks with correlated couplings
ORAL
–
Publication: D. Wennberg, S. Ganguli, and H. Mabuchi. Spectra of matrices with partially symmetric randomness. Forthcoming.
D. Wennberg, A. Yamamura, S. Ganguli, and H. Mabuchi. Forthcoming.
Presenters
-
Daniel Wennberg
Stanford University
Authors
-
Daniel Wennberg
Stanford University
-
Atsushi Yamamura
Stanford University
-
Surya Ganguli
Stanford University
-
Hideo Mabuchi
Stanford University
-
-
Structured Neural Codes Enable Sample Efficient Learning Through Code-Task Alignment
ORAL
–
Publication: https://www.biorxiv.org/content/10.1101/2021.03.30.437743v1
Presenters
-
Blake Bordelon
Harvard University
Authors
-
Blake Bordelon
Harvard University
-
Cengiz Pehlevan
Harvard University
-
-
Capacity of Group-invariant Linear Readouts from Equivariant Representations: How Many Objects can be Linearly Classified Under All Possible Views?
ORAL
–
Publication: arXiv (https://arxiv.org/abs/2110.07472). Submitted to ICLR 2022 (https://iclr.cc).
Presenters
-
Matthew S Farrell
Harvard University
Authors
-
Matthew S Farrell
Harvard University
-
Blake Bordelon
Harvard University
-
Shubhendu Trivedi
Massachusetts Institute of Technology
-
Cengiz Pehlevan
Harvard University
-
-
Signal representation and learning in random feedback neural networks
ORAL
–
Publication: Susman, L., Mastrogiuseppe, F., Brenner, N., & Barak, O. (2021). Quality of internal representation shapes learning performance in feedback neural networks. Physical Review Research, 3(1), 013176.
Presenters
-
Lee Susman
Princeton University
Authors
-
Lee Susman
Princeton University
-
Francesca Mastrogiuseppe
University College London
-
Naama Brenner
Technion - Israel Institute of Technology
-
Omri Barak
Technion - Israel Institute of Technology
-
-
Understanding multi-pass stochastic gradient descent via dynamical mean-field theory
ORAL
–
Publication:
- The effective noise of stochastic gradient descent and how local knowledge of partial information drives complex systems, Francesca Mignacco, Pierfrancesco Urbani. Article in preparation.
- Stochasticity helps to navigate rough landscapes: comparing gradient-descent-based algorithms in the phase retrieval problem, Francesca Mignacco, Pierfrancesco Urbani, Lenka Zdeborova. Machine Learning: Science and Technology, 2021.
- Dynamical mean-field theory for stochastic gradient descent in Gaussian mixture classification, Francesca Mignacco, Florent Krzakala, Pierfrancesco Urbani and Lenka Zdeborova. Advances in Neural Information Processing Systems, 2020, vol. 33. To appear in the "Machine Learning 2021" Special Issue, JSTAT.
Presenters
-
Francesca Mignacco
Institute of Theoretical Physics, CEA Saclay
Authors
-
Francesca Mignacco
Institute of Theoretical Physics, CEA Saclay
-
-
Nested canalizing functions minimize sensitivity and simultaneously promote criticality
ORAL
–
Publication: arXiv:2109.01117
Presenters
-
Hamza Coban
Koç University
Authors
-
Alkan Kabakcioglu
Koç University
-
Hamza Coban
Koç University
-