Effective Optimization Algorithms for Multi-objective Learning with Conflicting Objectives

  • Hansi YANG

Student thesis: Doctoral thesis

Abstract

Real-world machine learning applications often involve multiple conflicting objectives, which can lead to longer training times and degraded model performance. This thesis addresses these challenges by developing optimization algorithms that reconcile such conflicts to enhance learning efficacy. We propose a cohesive framework encompassing three strategies: tackling high gradient variance, navigating complex objective curvatures, and optimizing sample ordering.

Firstly, we introduce momentum-based variance reduction methods for few-shot learning, where limited data can result in substantial gradient variance across tasks. Our methods yield accurate gradient estimates that promote faster convergence and improved performance.
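
For concreteness, a minimal Python sketch of one such momentum-based variance-reduced estimator is given below. It follows a STORM-style recursive-momentum update on a toy quadratic task distribution; the toy objective, function names, and hyper-parameters are illustrative assumptions and not the algorithm developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task(dim=5):
    """Draw a random toy task; in few-shot learning each task has very few
    samples, so per-task gradients are noisy."""
    A = 0.3 * rng.standard_normal((dim, dim))
    A = A @ A.T + np.eye(dim)              # positive-definite quadratic
    b = rng.standard_normal(dim)
    return A, b

def task_gradient(w, task):
    """Stochastic gradient of the toy quadratic objective for one sampled task."""
    A, b = task
    return A @ w - b

# STORM-style recursive momentum (one common momentum-based variance reduction
# scheme):  d_t = g_t(w_t) + (1 - a) * (d_{t-1} - g_t(w_{t-1}))
dim, lr, a = 5, 0.05, 0.2
w = np.zeros(dim)
w_prev = w.copy()
d = task_gradient(w, sample_task(dim))     # initial gradient estimate from one task

for t in range(200):
    task = sample_task(dim)
    g_curr = task_gradient(w, task)
    g_prev = task_gradient(w_prev, task)   # same task, evaluated at the previous iterate
    d = g_curr + (1.0 - a) * (d - g_prev)  # correction term cancels much of the noise
    w_prev = w.copy()
    w = w - lr * d                         # descent step with the reduced-variance estimate
```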

Secondly, we explore bi-level learning with cubic regularization to overcome conflicts arising from noisy training labels. Our approach leverages the bi-level formulation for flexible control over the model training process and uses cubic regularization to navigate the intricate curvature of the resulting objectives. This strategy enhances training stability and increases robustness in the presence of mislabeled data.
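
A minimal sketch of the cubic-regularized step that serves as a building block of this approach follows; the bi-level formulation and hypergradient computation are omitted, and the subproblem solver, constants, and toy usage below are assumptions for illustration rather than the thesis's method.

```python
import numpy as np

def cubic_regularized_step(grad, hess, M=10.0, iters=200, lr=0.01):
    """Approximately solve  min_s  g^T s + 0.5 s^T H s + (M/6) ||s||^3
    with plain gradient descent on the subproblem (illustrative solver only).
    The cubic term keeps the step well-behaved even when H is indefinite,
    which is why cubic regularization helps with the intricate curvature of
    bi-level objectives."""
    s = np.zeros_like(grad)
    for _ in range(iters):
        # gradient of the cubic model: g + H s + (M/2)||s|| s
        sub_grad = grad + hess @ s + 0.5 * M * np.linalg.norm(s) * s
        s -= lr * sub_grad
    return s

# Toy usage: one outer step on a quadratic with an indefinite Hessian,
# standing in for an upper-level (e.g. label-weighting) objective.
g = np.array([1.0, -2.0])
H = np.array([[2.0, 0.0], [0.0, -1.0]])    # indefinite: a plain Newton step would be unreliable
upper_vars = np.zeros(2) + cubic_regularized_step(g, H)
```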

Finally, we present a gradient balancing framework tailored for multi-task learning, where conflicts between tasks and samples are prevalent. Our method dynamically adjusts the sample order during the model training process to ensure equitable representation of all samples, thereby facilitating effective learning across diverse tasks.
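
One plausible form of such a sample-ordering rule is sketched below, assuming conflict is measured by the cosine between a sample's gradient and the current aggregate gradient; the function name and heuristic are illustrative and not the thesis's exact balancing rule.

```python
import numpy as np

def conflict_aware_order(per_sample_grads, aggregate_grad, eps=1e-12):
    """Return an ordering of samples in which those whose gradients conflict
    most with the current aggregate direction come first, so under-served
    tasks and samples are not drowned out within an epoch."""
    cosines = np.array([
        g @ aggregate_grad / (np.linalg.norm(g) * np.linalg.norm(aggregate_grad) + eps)
        for g in per_sample_grads
    ])
    return np.argsort(cosines)   # lowest cosine (most conflicting) first

# Toy usage with random per-sample gradients
rng = np.random.default_rng(0)
grads = [rng.standard_normal(4) for _ in range(8)]
order = conflict_aware_order(grads, np.mean(grads, axis=0))
```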

Additionally, we investigate applications of these techniques to a real-world scientific challenge: predicting molecular properties in the presence of conflicting molecules known as activity cliffs. We reformulate this problem as node classification and introduce both node-level and edge-level tasks with curriculum learning to enhance learning efficacy on these challenging molecules.
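
A minimal sketch of how such a graph-based reformulation might be set up is given below, assuming binary fingerprints, Tanimoto similarity for structural closeness, and an activity-gap threshold for labelling cliff edges; the curriculum-learning component is omitted, and all names and thresholds are illustrative assumptions rather than the thesis's settings.

```python
import numpy as np

def build_cliff_graph(fingerprints, activities, sim_thresh=0.9, act_gap=1.0):
    """Cast molecular property prediction with activity cliffs as learning on a
    graph: nodes are molecules (node-level task: predict the property), and an
    edge connects two structurally similar molecules (edge-level task: does the
    pair form an activity cliff?)."""
    edges, edge_labels = [], []
    n = len(fingerprints)
    for i in range(n):
        for j in range(i + 1, n):
            inter = np.logical_and(fingerprints[i], fingerprints[j]).sum()
            union = np.logical_or(fingerprints[i], fingerprints[j]).sum()
            sim = inter / union if union else 0.0                   # Tanimoto similarity
            if sim >= sim_thresh:                                   # structurally similar pair
                is_cliff = abs(activities[i] - activities[j]) >= act_gap
                edges.append((i, j))
                edge_labels.append(is_cliff)                        # large activity gap -> cliff
    return edges, edge_labels

# Toy usage with random binary fingerprints and activities
rng = np.random.default_rng(0)
fps = [rng.integers(0, 2, size=64).astype(bool) for _ in range(5)]
acts = rng.normal(size=5)
edges, labels = build_cliff_graph(fps, acts, sim_thresh=0.4)
```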

Collectively, these strategies confront the optimization challenges that arise in machine learning with conflicting objectives. By effectively reducing gradient variance in few-shot learning, bolstering robustness against label inaccuracies, and ensuring balanced learning across multiple tasks, this work establishes a strong foundation for the advancement of effective optimization algorithms in multi-task learning.

Date of Award: 2025
Original language: English
Awarding Institution
  • The Hong Kong University of Science and Technology
Supervisor: James Tin Yau KWOK
