Abstract
Low-Rank Adaptation (LoRA) is a widely used method for efficiently fine-tuning large models by introducing low-rank matrices into weight updates. However, existing LoRA techniques fail to account for activation information, such as outliers, which significantly impacts model performance. This omission leads to suboptimal adaptation and slower convergence. To address this limitation, we present Activation-Informed Low-Rank Adaptation (AIRA), a novel approach that integrates activation information into initialization, training, and rank assignment to enhance model performance. Specifically, AIRA introduces: (1) outlier-weighted SVD decomposition to reduce approximation errors in low-rank weight initialization, (2) outlier-driven dynamic rank assignment using offline optimization for better layer-wise adaptation, and (3) activation-informed training to amplify updates on significant weights. This cascaded activation-informed paradigm enables faster convergence and fewer fine-tuned parameters while maintaining high performance. Extensive experiments on multiple large models demonstrate that AIRA outperforms state-of-the-art LoRA variants, achieving superior performance-efficiency trade-offs in vision-language instruction tuning, few-shot learning, and image generation. Code is available at https://github.com/lliai/LoRA-Zoo.
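To make the first ingredient concrete, below is a minimal sketch of what an outlier-weighted SVD initialization could look like. It assumes a simple scheme (not taken from the paper): input channels are scaled by their activation magnitudes before the SVD, so the rank-r factors preferentially preserve channels hit by activation outliers, and the scaling is undone afterward. The function name, the square-root scaling, and `act_norms` are all illustrative assumptions.

```python
import numpy as np

def outlier_weighted_svd_init(W, act_norms, rank):
    """Sketch of an outlier-weighted SVD low-rank initialization.

    W: (out, in) weight matrix.
    act_norms: per-input-channel activation magnitudes, shape (in,).
    Channels with large activations (outliers) are up-weighted so the
    rank-r approximation keeps them more accurately (hypothetical scheme).
    """
    # Scale input channels by activation magnitude before decomposing.
    s = np.sqrt(act_norms + 1e-8)
    U, S, Vt = np.linalg.svd(W * s[None, :], full_matrices=False)
    # Keep the top-r components, then undo the scaling on the V factor.
    A = U[:, :rank] * S[:rank]          # (out, r)
    B = Vt[:rank, :] / s[None, :]       # (r, in)
    return A, B                         # W is approximated by A @ B
```

Since the approximation error is minimized in the activation-scaled space, channels dominated by outlier activations contribute more to the objective, which is the intuition the abstract attributes to the outlier-weighted decomposition.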
| Original language | English |
|---|---|
| Title of host publication | Proceedings of IEEE International Conference on Computer Vision (ICCV) |
| Publisher | IEEE |
| Pages | 1729-1739 |
| Number of pages | 11 |
| Publication status | Accepted/In press - 2026 |
| Event | International Conference on Computer Vision (ICCV 2025), Honolulu, United States, 19 Oct 2025 → 23 Oct 2025 (https://iccv.thecvf.com) |
Conference
| Conference | International Conference on Computer Vision (ICCV 2025) |
|---|---|
| Abbreviated title | ICCV 2025 |
| Country/Territory | United States |
| City | Honolulu |
| Period | 19/10/25 → 23/10/25 |
| Internet address | https://iccv.thecvf.com |