Random Feature Amplification: Feature Learning and Generalization in Neural Networks
Spencer Frei, Niladri S. Chatterji, Peter L. Bartlett; 24(303):1-49, 2023.
Abstract
In this work, we provide a characterization of the feature-learning process in two-layer ReLU networks trained by gradient descent on the logistic loss following random initialization. We consider data with binary labels that are generated by an XOR-like function of the input features. We permit a constant fraction of the training labels to be corrupted by an adversary. We show that, although linear classifiers are no better than random guessing for the distribution we consider, two-layer ReLU networks trained by gradient descent achieve generalization error close to the label noise rate. We develop a novel proof technique that shows that at initialization, the vast majority of neurons function as random features that are only weakly correlated with useful features, and the gradient descent dynamics 'amplify' these weak, random features to strong, useful features.
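As a concrete illustration of the setup the abstract describes, the sketch below builds XOR-like clustered data, corrupts a fraction of the labels (random flips here as a stand-in for the paper's adversarial corruption), and trains a randomly initialized two-layer ReLU network by gradient descent on the logistic loss. All hyperparameters (dimension, width, step size, noise rate) and the fixed random second-layer weights are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR-like data: the clean label is the sign of the product of two coordinates,
# with cluster means at (+/-1, +/-1). Dimensions and noise rate are illustrative.
def make_data(n, d=20, noise_rate=0.1):
    centers = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
    idx = rng.integers(0, 4, size=n)
    x = np.zeros((n, d))
    x[:, :2] = centers[idx] + 0.1 * rng.standard_normal((n, 2))
    y = np.sign(x[:, 0] * x[:, 1])                # XOR-like clean label
    flip = rng.random(n) < noise_rate             # random flips stand in for the
    y[flip] *= -1                                 # paper's adversarial corruption
    return x, y

# Two-layer ReLU network: random first-layer init; second layer fixed at random
# +/-1 signs (a common simplification; the paper's parameterization may differ).
n, d, m, lr, steps = 500, 20, 200, 0.5, 500
X, y = make_data(n, d)
W = rng.standard_normal((m, d)) / np.sqrt(d)      # random first-layer weights
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)  # fixed random output weights

def forward(X, W):
    # f(x) = sum_j a_j * relu(w_j . x)
    return np.maximum(X @ W.T, 0.0) @ a

for t in range(steps):
    margins = y * forward(X, W)
    g = -y / (1.0 + np.exp(margins))              # derivative of logistic loss in f
    act = (X @ W.T > 0).astype(float)             # ReLU gate per example, per neuron
    grad_W = ((g[:, None] * act) * a).T @ X / n   # chain rule through the ReLU
    W -= lr * grad_W                              # gradient descent step

acc = np.mean(np.sign(forward(X, W)) == y)
print(f"train accuracy against noisy labels: {acc:.3f}")
```

No linear classifier can separate these four clusters, since the label is an XOR of the first two coordinates; the feature-learning behavior the paper analyzes is what lets the ReLU network succeed despite that.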