Evidence for Griffiths Phase Criticality in Residual Neural Networks

ORAL

Abstract

Research on synthetic and biological neural networks shows that they operate robustly near phase transitions. Griffiths phases, extended regions of configuration space that exhibit critical behavior and arise in heterogeneously structured networks, are one proposed explanation for this robustness. Meanwhile, by adding identity "skip" connections to feedforward neural networks, deep learning researchers have developed Residual Neural Networks (ResNets) that can be reliably trained to unprecedented depths, a property that in conventional networks occurs only precisely at criticality. Here, by measuring output scaling and examining the structure of random networks, we show that ResNets configured with sufficient normalization/weight variance display the same topological signatures and extended region of relaxed scaling observed in Griffiths phase criticality, while conventional feedforward networks do not. These results shed light on why ResNets operate so robustly and on the role of the normalization layers now common in deep learning architectures, and they suggest how future models, including more recent ResNet-based architectures, could be better understood or designed.
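
The abstract refers to measuring output scaling in random networks with and without skip connections. The following is a minimal sketch, not the authors' code: it assumes i.i.d. Gaussian weights, a tanh nonlinearity, and a single weight-gain parameter g standing in for the normalization/weight-variance setting mentioned above, and it compares how output magnitude scales with depth for a plain feedforward stack versus one with identity skip connections.

```python
import numpy as np

def output_norm(depth, width=256, g=1.0, residual=False, seed=0):
    """Propagate a random input through a random tanh network and
    return the L2 norm of the output, as a proxy for output scaling."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(width) / np.sqrt(width)
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * g / np.sqrt(width)
        h = np.tanh(W @ x)
        x = x + h if residual else h  # identity "skip" connection for the ResNet case
    return np.linalg.norm(x)

# Compare output magnitude versus depth at a weight gain g below the
# conventional critical point g = 1: the plain feedforward output decays
# rapidly with depth, while the residual network does not.
for depth in (4, 16, 64, 256):
    plain = output_norm(depth, g=0.9, residual=False)
    resnet = output_norm(depth, g=0.9, residual=True)
    print(f"depth={depth:4d}  feedforward={plain:.3e}  resnet={resnet:.3e}")
```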

The authors thank NTT (Nippon Telegraph and Telephone Corporation) Research for their financial and technical support.

Presenters

  • Maxwell Anderson

    Cornell University

Authors

  • Logan G Wright

    Cornell University & NTT Research

  • Maxwell Anderson

    Cornell University

  • Peter L McMahon

    Cornell University & Stanford University