Feature learning and overfitting in neural networks
ORAL
Abstract
It is widely believed that the success of deep networks lies in their ability to learn a meaningful representation of the features of the data. Yet, understanding when and how this feature learning improves performance remains a challenge: for example, for a fixed task such as classifying images, feature learning is beneficial in modern architectures but detrimental in standard fully-connected feed-forward networks. Here we propose an explanation for this puzzle, by showing that feature learning can result in poor generalization performance as it leads to a 'sparse' neural representation, where only a fraction of the connections in the original network are active. Although sparsity is known to be essential for learning anisotropies in the data, it is detrimental when the target function is constant or smooth along certain directions of input space. We illustrate this phenomenon in two settings: (i) regression of Gaussian random functions on the d-dimensional unit sphere and (ii) classification of benchmark image datasets. For (i), we compute the scaling of the generalization error with the number of training points analytically, thus showing quantitatively how methods that do not learn features generalize better if the target function is sufficiently smooth. For (ii), we show empirically that learning features can indeed lead to sparse and thus less smooth representations. Since an image classifier must be highly smooth with respect to small deformations of the image, this is a likely cause of the poor performance.
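The lazy-versus-feature-learning contrast at the heart of the abstract can be illustrated in a few lines. Below is a minimal sketch (not the authors' code): a one-hidden-layer ReLU network is trained on a smooth target on the unit sphere, once in the lazy regime and once in the feature-learning regime, interpolated via the output-scaling trick of Chizat et al. (2019); the fraction of hidden units whose incoming weights move appreciably serves as a crude proxy for the sparsity of the learned representation. The toy target, width, learning rate, and the 10% movement threshold are illustrative assumptions, not values from the paper.

```python
import torch

torch.manual_seed(0)
d, width, n = 5, 512, 256

# Inputs on the unit sphere in d dimensions, with a smooth scalar target.
x = torch.nn.functional.normalize(torch.randn(n, d), dim=1)
y = torch.cos(x @ torch.randn(d)).unsqueeze(1)

def forward(x, W, a):
    # NTK parameterization keeps the network output O(1) at any width.
    return torch.relu(x @ W.t()) @ a / width**0.5

def train(alpha, steps=3000, lr=0.2):
    """Large alpha => lazy (kernel) regime; small alpha => feature learning."""
    W = torch.randn(width, d, requires_grad=True)
    a = torch.randn(width, 1, requires_grad=True)
    W0, a0 = W.detach().clone(), a.detach().clone()
    opt = torch.optim.SGD([W, a], lr=lr / alpha**2)
    for _ in range(steps):
        # Centred, alpha-scaled output: f = alpha * (net(w) - net(w0)).
        f = alpha * (forward(x, W, a) - forward(x, W0, a0))
        loss = ((f - y) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Sparsity proxy: fraction of hidden units whose incoming weight
    # vector moved by more than 10% of its initial norm.
    moved = (W - W0).norm(dim=1) / W0.norm(dim=1)
    return loss.item(), (moved > 0.1).float().mean().item()

for alpha in (100.0, 0.1):  # lazy vs feature-learning regime
    loss, frac = train(alpha)
    print(f"alpha={alpha:g}: train MSE {loss:.2e}, "
          f"fraction of units that moved {frac:.2f}")
```

Under these assumptions, the lazy run fits the data while leaving almost all hidden units near initialization, whereas the feature-learning run concentrates the change in a subset of units, which is the sparsity mechanism the abstract argues can hurt smoothness.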
Publication: arXiv:2206.12314
Accepted to NeurIPS 2022
Presenters
- Francesco Cagnetta, École Polytechnique Fédérale de Lausanne
Authors
- Francesco Cagnetta, École Polytechnique Fédérale de Lausanne