Generalizable and Efficient Transfer Learning for Graph and Language Models

  • Jiashun CHENG

Student thesis: Doctoral thesis

Abstract

Transfer learning, a key paradigm in modern machine learning, has rapidly advanced the scalability and effectiveness of model deployment by enabling knowledge reuse across tasks, thereby driving intelligent business innovation. This thesis identifies three principal challenges arising from the evolving demands of real-world industrial deployment: (1) generalizable graph pre-training, (2) effective adaptation for graphs, and (3) efficient adaptation for large language models (LLMs), and presents three approaches, one for each, to tackle them.

Our first framework, WGDN, revisits long-overlooked generative graph pre-training by proposing an augmentation-adaptive graph Wiener filter that better captures the topological nature of graphs within the reconstruction paradigm, thereby improving the generalizability of graph representation learning. APF then addresses limited supervision and local homophily disparity in graph anomaly detection through a unified framework that combines anomaly-aware pre-training with granularity-adaptive fine-tuning, enabling task-aware and effective adaptation. Finally, SeLoRA targets parameter redundancy in Low-Rank Adaptation (LoRA) fine-tuning for LLMs, introducing a sparse spectral re-parameterization module that achieves highly efficient adaptation while preserving expressive capacity.
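
For context on the LoRA setting that SeLoRA operates in, the following is a minimal sketch of the standard LoRA re-parameterization, W' = W + (alpha / r) * B A, in which only the low-rank factors A and B are trained. This is the well-known baseline, not the thesis's method; the class name LoRALinear and the hyperparameter values are illustrative, and SeLoRA's sparse spectral re-parameterization module is not reproduced here.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """A frozen linear layer plus a trainable low-rank update (standard LoRA)."""

        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # freeze the pre-trained weights
            # A is Gaussian-initialized and B is zero-initialized, so the
            # low-rank update starts at zero and fine-tuning begins exactly
            # from the pre-trained model.
            self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
            self.scaling = alpha / r

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Base output plus the scaled low-rank correction (x A^T) B^T.
            return self.base(x) + self.scaling * (x @ self.lora_A.T) @ self.lora_B.T

    # Usage: wrap a projection layer and count trainable parameters.
    layer = LoRALinear(nn.Linear(768, 768), r=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(f"trainable params: {trainable}")  # 2 * 8 * 768 = 12,288 vs. ~590K frozen

The parameter count illustrates the redundancy argument in the abstract: even plain LoRA trains a small fraction of the layer's weights, and SeLoRA aims to prune that budget further while preserving expressive capacity.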

Extensive experiments across diverse benchmarks and downstream tasks in both graph and language domains validate the effectiveness and efficiency of these approaches. Collectively, they aim to advance the frontier of transfer learning toward greater generalizability, adaptability, and resource efficiency.

Date of Award: 2025
Original language: English
Awarding Institution
  • The Hong Kong University of Science and Technology
Supervisors: Fugee TSUNG & Jia LI
