Fundamental Limits of Low-Rank Matrix Estimation: Information-Theoretic and Computational Perspectives
Abstract: Many statistical estimation problems can be reduced to reconstructing a low-rank n×d matrix observed through a noisy channel. While a wealth of positive results has been established, relatively few works focus on understanding the fundamental limitations of the proposed models and algorithms. Understanding such limitations not only provides practitioners with guidance on algorithm selection, but also spurs the development of new methodologies. In this talk, I will present recent progress in this direction from two perspectives in the context of low-rank matrix estimation. From an information-theoretic perspective, I will give an exact characterization of the limiting minimum estimation error. Our results apply to the high-dimensional regime n,d→∞ with d/n→∞ (or d/n→0), and generalize earlier works that focus on the proportional asymptotics n,d→∞, d/n→δ∈(0,∞). From an algorithmic perspective, large-dimensional matrices are often processed by iterative algorithms such as power iteration and gradient descent, motivating the study of the fundamental limits of these approaches. We introduce a class of general first-order methods (GFOM), which is broad enough to include the aforementioned algorithms and many others. I will describe the asymptotic behavior of any GFOM, and provide a sharp characterization of the optimal error achievable by the GFOM class.
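As a concrete illustration of the setting, the sketch below runs power iteration, one member of the first-order class mentioned above, on a noisy rank-one observation. The spiked-matrix model, the signal-to-noise value `lam`, and all parameter choices are assumptions made for illustration here, not the exact model analyzed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative spiked-matrix observation (an assumption, not the talk's model):
# Y = (lam/n) * u u^T + W, with W a symmetric Gaussian noise matrix.
n = 500
lam = 3.0  # signal-to-noise ratio, chosen well above the spectral threshold
u = rng.choice([-1.0, 1.0], size=n)          # planted rank-one signal
W = rng.normal(size=(n, n)) / np.sqrt(n)
W = (W + W.T) / np.sqrt(2)                   # symmetrize the noise
Y = (lam / n) * np.outer(u, u) + W

# Power iteration: each step uses only a matrix-vector product with the data,
# which is why it fits inside a general first-order method (GFOM) class.
x = rng.normal(size=n)
for _ in range(200):
    x = Y @ x
    x = x / np.linalg.norm(x)

# Quality of the estimate: normalized overlap with the planted signal
# (absolute value, since the sign of the eigenvector is arbitrary).
overlap = abs(x @ u) / (np.linalg.norm(x) * np.linalg.norm(u))
```

At this signal-to-noise level the iterate aligns strongly with the planted vector; below the spectral threshold, by contrast, no amount of power iteration recovers a nontrivial overlap, which is the kind of algorithmic limit the GFOM analysis makes precise.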
This is based on joint works with Michael Celentano and Andrea Montanari.
This seminar was held virtually over Zoom and a recording is not available.