Optimization with low-rank regularization

  • Quanming YAO

Student thesis: Doctoral thesis

Abstract

Low-rank modeling is important in machine learning and has been widely used in many areas, such as recommender systems and social network analysis in data mining, and image and video processing in computer vision. Traditionally, two approaches, the factorization approach and nuclear norm regularization, have been popular for low-rank matrix learning. In this thesis, we first propose an efficient algorithm based on proximal gradient descent for the nuclear norm regularized problem, and a greedy algorithm for the factorization approach that can handle convex but possibly nonsmooth loss functions. However, since larger singular values are more informative and should therefore be penalized less, low-rank matrix learning with adaptive nonconvex regularization has recently been proposed and shows better empirical performance. As existing algorithms can only solve small-scale problems, we develop a new algorithm, further accelerated and parallelized, that scales the learning problem to large matrices. Finally, inspired by recent developments in proximal gradient descent for nonconvex optimization, we propose a general and powerful problem transformation method that transforms a large family of nonconvex regularization problems back into convex ones. Such a transformation allows existing algorithms, originally designed for convex regularizers, to be reused on nonconvex ones. As a result, the new algorithms run much faster than the state of the art on the original problems.
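
To make the first contribution concrete, the sketch below illustrates the standard proximal gradient iteration for nuclear norm regularized matrix completion: each step takes a gradient step on the smooth loss, then applies singular value thresholding, the proximal operator of the nuclear norm. This is a minimal NumPy illustration of the generic technique, not the accelerated algorithm developed in the thesis; the observed-entry squared loss, step size, and iteration count are assumptions made for the example. The adaptive nonconvex regularizers the thesis studies replace the uniform threshold here with one that shrinks larger singular values less.

    import numpy as np

    def svt(Z, tau):
        # Singular value thresholding: the proximal operator of tau * ||.||_*.
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def prox_grad_completion(M, mask, lam=1.0, step=1.0, iters=200):
        # Proximal gradient descent for
        #   min_X  0.5 * ||mask * (X - M)||_F^2 + lam * ||X||_*,
        # where mask is 1 on observed entries and 0 elsewhere.
        # step = 1.0 is valid since the gradient of the smooth part is
        # 1-Lipschitz for a 0/1 mask.
        X = np.zeros_like(M)
        for _ in range(iters):
            grad = mask * (X - M)                  # gradient of the smooth loss
            X = svt(X - step * grad, step * lam)   # shrinkage (proximal) step
        return X

    # Example: recover a rank-5 matrix from 40% of its entries.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 100))
    mask = (rng.random(A.shape) < 0.4).astype(float)
    X_hat = prox_grad_completion(mask * A, mask, lam=2.0)

Each iteration costs one SVD of the current iterate, which is exactly the bottleneck the thesis's accelerated and parallelized algorithms are designed to reduce for large matrices.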
Date of Award: 2018
Original language: English
Awarding Institution
  • The Hong Kong University of Science and Technology
