Low-Rank Kernel Learning with Bregman Matrix Divergences
Brian Kulis, Mátyás A. Sustik, Inderjit S. Dhillon; 10(13):341−376, 2009.
Abstract
In this paper, we study low-rank matrix nearness problems, with a focus on learning low-rank positive semidefinite (kernel) matrices for machine learning applications. We propose efficient algorithms that scale linearly in the number of data points and quadratically in the rank of the input matrix. Existing algorithms for learning kernel matrices often scale poorly, with running times that are cubic in the number of data points. We employ Bregman matrix divergences as the measures of nearness; these divergences are natural for learning low-rank kernels since they preserve rank as well as positive semidefiniteness. Special cases of our framework yield faster algorithms for various existing learning problems, and experimental results demonstrate that our algorithms can effectively learn both low-rank and full-rank kernel matrices.
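For reference, a standard definition of the Bregman matrix divergence assumed here (not restated in the abstract): given a strictly convex, differentiable function $\phi$ over symmetric matrices,

\[
D_{\phi}(X, Y) \;=\; \phi(X) - \phi(Y) - \operatorname{tr}\!\bigl(\nabla\phi(Y)^{\top}(X - Y)\bigr).
\]

For example, $\phi(X) = \operatorname{tr}(X \log X - X)$ yields the von Neumann divergence $\operatorname{tr}(X \log X - X \log Y - X + Y)$, and $\phi(X) = -\log\det X$ yields the LogDet (Burg) divergence $\operatorname{tr}(XY^{-1}) - \log\det(XY^{-1}) - n$; divergences of this form are the nearness measures referred to above.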
© JMLR 2009.