Guarantees of Riemannian Optimization for Low Rank Matrix Recovery

Department of Mathematics, UC Davis

Published Web Location: https://arxiv.org/pdf/1511.01562.pdf
No data is associated with this publication.
Abstract

We establish theoretical recovery guarantees for a family of Riemannian optimization algorithms for low rank matrix recovery, the problem of recovering an $m\times n$ matrix of rank $r$ from $p < mn$ linear measurements. The algorithms are first interpreted as iterative hard thresholding algorithms with subspace projections. Based on this connection, we show that, provided the restricted isometry constant $R_{3r}$ of the sensing operator is less than $C_\kappa/\sqrt{r}$, the Riemannian gradient descent algorithm and a restarted variant of the Riemannian conjugate gradient algorithm are guaranteed to converge linearly to the underlying rank $r$ matrix when initialized by one step of hard thresholding. Empirical evaluation shows that the algorithms can recover a low rank matrix from nearly the minimum number of measurements necessary.
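For context, $R_{3r}$ is the standard rank-restricted isometry constant: the smallest $\delta$ such that $(1-\delta)\|Z\|_F^2 \le \|\mathcal{A}(Z)\|_2^2 \le (1+\delta)\|Z\|_F^2$ for every matrix $Z$ of rank at most $3r$. As a concrete illustration of the iterative-hard-thresholding view described in the abstract, the NumPy sketch below implements Riemannian gradient descent initialized by one step of hard thresholding: the rank-$r$ projection $H_r$ is a truncated SVD, the search direction is the Euclidean gradient projected onto the tangent space of the rank-$r$ manifold, and the stepsize is an exact line search for the least-squares objective. The sensing operator is modeled as a Gaussian matrix acting on vectorized matrices; the function names, stepsize rule, and problem sizes are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def hard_threshold(Z, r):
    # Rank-r projection H_r: keep the top-r terms of the SVD.
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def riemannian_gradient_descent(A, y, m, n, r, iters=300):
    # A: (p, m*n) sensing matrix acting on vectorized m x n matrices, y = A @ vec(X_true).
    # Initialization by one step of hard thresholding: X_0 = H_r(A^*(y)).
    X = hard_threshold((A.T @ y).reshape(m, n), r)
    for _ in range(iters):
        # Euclidean gradient of 0.5 * ||A(X) - y||^2.
        G = (A.T @ (A @ X.ravel() - y)).reshape(m, n)
        # Project G onto the tangent space of the rank-r manifold at X = U S V^T:
        # P_T(G) = U U^T G + G V V^T - U U^T G V V^T.
        U, _, Vt = np.linalg.svd(X, full_matrices=False)
        U, V = U[:, :r], Vt[:r, :].T
        PG = U @ (U.T @ G) + (G @ V) @ V.T - U @ ((U.T @ G) @ V) @ V.T
        # Exact line search along -PG for the quadratic objective.
        APG = A @ PG.ravel()
        alpha = (PG.ravel() @ PG.ravel()) / max(APG @ APG, 1e-15)
        # Retract back to the rank-r manifold by hard thresholding.
        X = hard_threshold(X - alpha * PG, r)
    return X

# Example (hypothetical sizes): recover a random rank-2 matrix from p Gaussian measurements.
rng = np.random.default_rng(0)
m, n, r, p = 20, 20, 2, 300
X_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
A = rng.standard_normal((p, m * n)) / np.sqrt(p)
y = A @ X_true.ravel()
X_hat = riemannian_gradient_descent(A, y, m, n, r)
print(np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))  # relative recovery error
```

Note that $p = 300$ here is close to the information-theoretic floor of $r(m+n-r) = 76$ degrees of freedom relative to the ambient dimension $mn = 400$, which is the regime the paper's empirical evaluation targets.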
