Scaled Conjugate Gradient Method for Nonconvex Optimization in Deep Neural Networks
Naoki Sato, Koshiro Izumi, Hideaki Iiduka; 25(395):1−37, 2024.
Abstract
A scaled conjugate gradient method that accelerates existing adaptive methods utilizing stochastic gradients is proposed for solving nonconvex optimization problems with deep neural networks. It is shown theoretically that the proposed method can obtain a stationary point of the problem with either a constant or a diminishing learning rate. Additionally, its rate of convergence with a diminishing learning rate is shown to be superior to that of the conjugate gradient method. In practical applications to image and text classification, the proposed method minimizes training loss functions faster than the existing adaptive methods. Furthermore, in the training of generative adversarial networks, one variant of the proposed method achieved the lowest Fréchet inception distance score among the adaptive methods compared.
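The abstract does not state the update rule itself. As a rough illustration only, the sketch below combines a conjugate-gradient-style search direction (here a Fletcher-Reeves-type coefficient with a descent restart, an assumed choice) with an Adam-like elementwise scaling of stochastic gradients. The function name, hyperparameters, and the exact way the direction and the scaling are combined are illustrative assumptions and may differ from the method analyzed in the paper.

```python
# Minimal illustrative sketch, NOT the paper's exact algorithm: a conjugate-
# gradient-style search direction (Fletcher-Reeves-type coefficient with a
# restart) combined with an Adam-like elementwise scaling of the step.
# All hyperparameters (lr, gamma, eps) are placeholder assumptions.
import numpy as np

def scaled_cg_step(w, grad, state, lr=1e-2, gamma=0.999, eps=1e-8):
    d_prev, g_prev, v_prev = state
    # Fletcher-Reeves-type conjugate coefficient (one common choice).
    beta = float(grad @ grad) / (float(g_prev @ g_prev) + eps)
    d = -grad + beta * d_prev
    # Restart with steepest descent if d is not a descent direction.
    if float(d @ grad) >= 0.0:
        d = -grad
    # Adam-like second-moment estimate used to scale the step elementwise.
    v = gamma * v_prev + (1.0 - gamma) * grad ** 2
    w_new = w + lr * d / (np.sqrt(v) + eps)
    return w_new, (d, grad, v)

# Toy usage: minimize f(w) = ||w||^2 from noisy (stochastic) gradients.
rng = np.random.default_rng(0)
w = rng.normal(size=5)
state = (np.zeros_like(w), np.zeros_like(w), np.zeros_like(w))
print("initial f(w) =", float(w @ w))
for _ in range(500):
    grad = 2.0 * w + 0.01 * rng.normal(size=5)  # stochastic gradient
    w, state = scaled_cg_step(w, grad, state)
print("final   f(w) =", float(w @ w))
```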
[pdf] [bib] [code] © JMLR 2024.