This thesis develops nonconvex analysis for low-rank matrix problems through the lens of Riemannian optimization. In Chapter 2, we develop tools that provide theoretical guarantees for the asymptotic escape from strict saddle points and strict saddle sets under Riemannian gradient descent. In Chapter 3, we establish fast and near-optimal convergence theory for a class of low-rank matrix recovery problems solved by Riemannian gradient descent with random initialization. In Chapter 4, we show how to shape the landscape of a nonconvex loss function by choosing a suitable activation function. This improves concentration and brings the sampling requirement down to the optimal order, while preserving the benign landscape property that all local minima are global minima and all saddle points are strict.
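The Riemannian gradient descent mentioned above can be illustrated with a minimal sketch (not the thesis's actual algorithm or guarantees): gradient descent over the set of rank-r matrices, using SVD truncation as the retraction, applied to the toy problem of best rank-r approximation. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def truncated_svd(M, r):
    # Rank-r truncation: serves as the retraction back onto the
    # manifold of rank-r matrices.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

def riemannian_gd(A, r, steps=200, lr=0.5, rng=None):
    # Minimize f(X) = 0.5 * ||X - A||_F^2 over rank-r matrices:
    # take a gradient step in the ambient space, then retract.
    rng = np.random.default_rng(0) if rng is None else rng
    m, n = A.shape
    X = truncated_svd(rng.standard_normal((m, n)), r)  # random init
    for _ in range(steps):
        grad = X - A                          # Euclidean gradient of f
        X = truncated_svd(X - lr * grad, r)   # step + retraction
    return X
```

For this toy objective the global minimizer is the rank-r truncated SVD of A (Eckart–Young), and with random initialization the iterates generically avoid saddle configurations and converge to it, which mirrors the kind of escape guarantee the abstract refers to.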
| Date of Award | 2020 |
|---|---|
| Original language | English |
| Awarding Institution | The Hong Kong University of Science and Technology |
Nonconvex optimization for low-rank matrix related problems
LI, Z. (Author). 2020
Student thesis: Doctoral thesis