
When are Neural Networks Kernel Learners?

ORAL

Abstract

Certain limits of neural networks have been shown to be equivalent to kernel machines with a kernel that stays constant during training, known as the neural tangent kernel (NTK). These limits generally do not exhibit feature learning, to which much of the success of deep learning is attributed. Can neural networks that learn features still be described by kernel machines with a data-dependent, learned kernel? We demonstrate that this can indeed happen, due to a phenomenon we term silent alignment, which requires that the NTK of a network evolve in eigenstructure while remaining small in overall scale. We show that this effect takes place in homogeneous neural networks with small initialization trained on approximately whitened data, and we provide an analytical treatment of it in the linear network case. In general, we find that the kernel develops a low-rank contribution in the early phase of training and then grows in overall scale, yielding a function equivalent to a kernel regression solution with the final network's NTK. The early spectral learning of the kernel depends on both the depth of the network and the relative learning rates of each layer. We also demonstrate that non-whitened data can weaken the silent alignment effect.
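The equivalence the abstract describes, that the trained network's output matches a kernel regression solution using the final network's NTK, can be sketched numerically for a two-layer linear network. The architecture, sizes, and initialization scale below are illustrative assumptions for this sketch, not the paper's exact experimental setup; the code only shows how the empirical NTK of such a network is formed and used as a kernel regression kernel.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, n = 3, 8, 3   # input dim, hidden width, sample count (illustrative choices)
sigma = 1e-2        # small initialization scale, as the silent-alignment setting requires
W1 = sigma * rng.standard_normal((h, d))
w2 = sigma * rng.standard_normal(h)

# For f(x) = w2 @ W1 @ x, the parameter gradients are
#   df/dW1 = outer(w2, x)  and  df/dw2 = W1 @ x,
# so the empirical NTK is
#   K(x, x') = (w2 . w2) (x . x') + (W1 x) . (W1 x').
def ntk(A, B):
    """Empirical NTK Gram matrix between row-stacked inputs A and B."""
    return (w2 @ w2) * (A @ B.T) + (A @ W1.T) @ (B @ W1.T).T

X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

K = ntk(X, X)                    # NTK Gram matrix on the training inputs
alpha = np.linalg.solve(K, y)    # kernel regression coefficients K^{-1} y
x_test = rng.standard_normal((1, d))
pred = ntk(x_test, X) @ alpha    # kernel predictor built from this network's NTK
```

The Gram matrix here uses the network's weights at a fixed instant; the silent alignment result concerns the case where the weights used are those at the end of training, after the kernel's eigenstructure has aligned early while its scale was still small.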

Publication: "Neural Networks as Kernel Learners: The Silent Alignment Effect" - in submission. To be on arXiv by late October.

Presenters

  • Alexander B Atanasov

    Harvard University

Authors

  • Alexander B Atanasov

    Harvard University

  • Blake Bordelon

    Harvard University

  • Cengiz Pehlevan

    Harvard University