[Reading Notes] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition
Published: December 28, 2023
I. Paper Information
1 Paper Title
LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition
2 Venue
NeurIPS 2023 Workshop
3 Authors
Sea AI Lab, Singapore
4 Keywords
LLMs, LoRA
II. Paper Structure
1 Introduction
1.1 Research Motivation
Investigation into the inherent modularity and composability of LoRA modules: is it feasible to compose existing LoRA modules to generalize efficiently to unseen tasks?
1.2 Background
Intro-P1: LLMs -> issues -> LoRA -> efficiency -> inherent modularity and composability
Intro-P2: generalization of LoRA -> automatic assembly without human design -> few-shot -> auto-orchestration -> LoraHub, LoraHub Learning
As the paper puts it, "traditional LoRA methods primarily concentrate on training and testing within the same tasks, rather than venturing into few-shot cross-task generalization."
2 Proposed Method
LoraHub Learning proceeds in two stages:
Compose Stage: the existing LoRA modules are integrated into one unified module via a weighted combination, employing a set of weights, denoted as $w$, as the coefficients (weighted merging).
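As a sketch of this weighted merge in my own notation (assuming, per my reading of the paper, that each LoRA module's low-rank update factors into matrices $A_i$ and $B_i$, and that the factors are combined separately rather than by summing the full products):

$$\hat{m} = \Big(\sum_{i=1}^{N} w_i A_i\Big)\Big(\sum_{i=1}^{N} w_i B_i\Big)$$

where $N$ is the number of candidate LoRA modules and the $w_i$ are the coefficients to be learned in the next stage.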
Adapt Stage: the merged LoRA module is evaluated on a few examples from the unseen task. A gradient-free algorithm is then applied to refine $w$; after K iterations, a well-adapted LoRA module is obtained, which can be plugged into the LLM to perform the intended task.
In this work, the algorithm is used to shape the search space of $w$, and the best weights are ultimately selected based on their performance on the few-shot examples from the unseen task.
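A minimal sketch of the Adapt Stage using the nevergrad black-box optimization library: the paper uses a gradient-free optimizer, but the specific optimizer, the weight bounds, and the L1 coefficient below are my assumptions, and merge_loras / few_shot_loss are hypothetical helpers standing in for the actual merge and evaluation code.

```python
import numpy as np
import nevergrad as ng

def adapt(lora_modules, few_shot_examples, budget_k=40):
    """Search for merge weights w with a gradient-free optimizer."""
    n = len(lora_modules)

    def objective(w: np.ndarray) -> float:
        merged = merge_loras(lora_modules, w)            # hypothetical helper: weighted merge of the A/B factors
        loss = few_shot_loss(merged, few_shot_examples)  # hypothetical helper: loss on the few-shot examples
        return loss + 0.05 * np.abs(w).sum()             # L1 penalty encourages sparse weights (assumed coefficient)

    # Weights may be negative; these bounds are an assumption for the sketch.
    param = ng.p.Array(shape=(n,)).set_bounds(-1.5, 1.5)
    optimizer = ng.optimizers.NGOpt(parametrization=param, budget=budget_k)
    recommendation = optimizer.minimize(objective)
    return recommendation.value                          # best weights found within the iteration budget
```

Since $w$ has only as many dimensions as there are candidate modules, this kind of black-box search stays cheap: each iteration only requires merging the modules and running a forward pass over the few-shot examples, with no backpropagation through the LLM.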