On the Inductive Bias of Dropout
David P. Helmbold, Philip M. Long; 16(105):3403−3454, 2015.
Abstract
Dropout is a simple but effective technique for learning in neural networks and other settings. A sound theoretical understanding of dropout is needed to determine when dropout should be applied and how to use it most effectively. In this paper, we continue the exploration of dropout as a regularizer pioneered by Wager et al. We focus on linear classification where a convex proxy for the misclassification loss (i.e., the logistic loss used in logistic regression) is minimized. We show:
- when the dropout-regularized criterion has a unique minimizer,
- when the dropout regularization penalty goes to infinity with the weights, and when it remains bounded,
- that the dropout regularization can be non-monotonic as individual weights increase from 0, and
- that the dropout regularization penalty may not be convex.
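For concreteness, one standard way to formalize the dropout criterion in this setting, following the feature-noising view of Wager et al., is sketched below; the convention assumed here (drop probability $q$, surviving features rescaled by $1/(1-q)$ so the perturbed input is unbiased) is illustrative and may not match the paper's notation exactly:
$$
J(\mathbf{w}) \;=\; \mathbb{E}_{(\mathbf{x},y)}\,\mathbb{E}_{\boldsymbol{\nu}}\Bigl[\ln\bigl(1 + \exp\bigl(-y\,\mathbf{w}\cdot(\boldsymbol{\nu}\odot\mathbf{x})\bigr)\bigr)\Bigr],
\qquad
\nu_i = \begin{cases} 0 & \text{with probability } q,\\[2pt] \tfrac{1}{1-q} & \text{with probability } 1-q, \end{cases}
$$
with the $\nu_i$ drawn i.i.d. Under this view, the dropout regularization penalty referred to above is the gap between $J(\mathbf{w})$ and the expected logistic loss on the unperturbed inputs, $\mathbb{E}_{(\mathbf{x},y)}\bigl[\ln\bigl(1+\exp(-y\,\mathbf{w}\cdot\mathbf{x})\bigr)\bigr]$.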