From Instance Training to Instruction Learning: Task Adapters Generation from Instructions

1 The Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; 2 School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; 3 Unisound, Beijing, China; 4 Platform and Content Group, Tencent, Beijing, China

Abstract

Large language models (LLMs) have acquired the ability to solve general tasks through instruction finetuning (IFT). However, IFT still relies heavily on instance-level training over extensive task data, which greatly limits the adaptability of LLMs to real-world scenarios where labeled task instances are scarce and broader task generalization becomes paramount. Unlike LLMs, humans acquire skills and complete tasks not merely through repeated practice but also by understanding and following instructional guidelines. This paper simulates that human learning process to address the shortcomings of instance training, focusing on instruction learning to enhance cross-task generalization. Within this context, we introduce Task Adapters Generation from Instructions (TAGI), which automatically constructs the task-specific model via parameter generation from the given task instructions, without retraining for unseen tasks. Specifically, we utilize knowledge distillation to enhance the consistency between TAGI, developed through Learning with Instruction, and task-specific models, developed through Training with Instance, by aligning the labels, output logits, and adapter parameters between them. TAGI is endowed with cross-task generalization capabilities through a two-stage training process comprising hypernetwork pretraining and finetuning. We evaluate TAGI on the Super-Natural Instructions and P3 datasets. The experimental results demonstrate that TAGI can match or even outperform traditional meta-trained models and other hypernetwork models, while significantly reducing computational requirements.
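The abstract describes aligning three signals between the instruction-learned model and the instance-trained teacher: labels, output logits, and adapter parameters. Below is a minimal PyTorch sketch of what such a combined distillation objective could look like; the loss weights, the temperature, and the flattened-adapter representation are illustrative assumptions, not the paper's reported settings.

import torch.nn.functional as F

def tagi_distillation_loss(student_logits, teacher_logits,
                           student_adapter, teacher_adapter,
                           labels, temperature=2.0,
                           alpha=1.0, beta=1.0, gamma=1.0):
    # Label alignment: cross-entropy against the gold labels.
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1), ignore_index=-100)
    # Logit alignment: temperature-scaled KL to the teacher's distribution.
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean") * temperature ** 2
    # Parameter alignment: MSE between the generated adapter and the
    # task-trained adapter (both assumed flattened into single tensors).
    mse = F.mse_loss(student_adapter, teacher_adapter)
    return alpha * ce + beta * kl + gamma * mse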

TAGI Overview

Comparison of the typical Training with Instance and the proposed Learning with Instruction: the former trains the model at the instance level through parameter updates, while the latter generates a task-specific adapter at the task level through parameter generation. A code sketch contrasting the two paradigms follows below.
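As a rough illustration of this distinction, the sketch below contrasts per-instance gradient updates with a single task-level forward pass of a hypernetwork; load_adapter_state is a hypothetical helper, and the interfaces are ours rather than the paper's.

import torch

def training_with_instance(model, task_loader, optimizer):
    # Instance level: repeated parameter *updates* on labeled examples.
    model.train()
    for batch in task_loader:
        loss = model(**batch).loss     # assumes an HF-style forward returning .loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    return model

def learning_with_instruction(hypernetwork, model, instruction_ids):
    # Task level: one forward pass *generates* adapter parameters;
    # the base LLM itself is never updated.
    with torch.no_grad():
        adapter_params = hypernetwork(instruction_ids)
    model.load_adapter_state(adapter_params)    # hypothetical helper
    return model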

TAGI Model

TAGI consists of two core components: a hypernetwork, which receives task instructions and generates parameter-efficient adapters, and a task-specific model, which combines the vanilla LLM with the adapters generated by the hypernetwork.
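A minimal sketch of the hypernetwork component, assuming LoRA-style low-rank adapters and a pre-computed instruction embedding; all dimensions (layer count, model width, rank) are illustrative assumptions rather than TAGI's actual configuration.

import torch
import torch.nn as nn

class AdapterHypernetwork(nn.Module):
    # Maps an instruction embedding to per-layer LoRA-style adapter weights.
    def __init__(self, instr_dim=768, hidden=512, n_layers=24, d_model=1024, rank=8):
        super().__init__()
        self.d_model, self.rank = d_model, rank
        self.trunk = nn.Sequential(nn.Linear(instr_dim, hidden), nn.ReLU())
        # One head per transformer layer, emitting flattened A and B matrices.
        self.heads = nn.ModuleList(
            nn.Linear(hidden, 2 * d_model * rank) for _ in range(n_layers))

    def forward(self, instr_emb):                 # (batch, instr_dim)
        h = self.trunk(instr_emb)
        adapters = []
        for head in self.heads:
            a, b = head(h).chunk(2, dim=-1)
            adapters.append((a.view(-1, self.d_model, self.rank),    # down-projection A
                             b.view(-1, self.rank, self.d_model)))   # up-projection B
        return adapters

# e.g. adapters = AdapterHypernetwork()(torch.randn(1, 768))

Each adapted layer of the frozen LLM then adds h @ A @ B on top of its original output, so serving a new task costs one hypernetwork forward pass rather than a training run.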

TAGI Comparison

We compare the characteristics of the proposed TAGI against those of all baseline methods.

TAGI Evaluation

Results of our main experiment on the Super-Natural Instructions (SNI) dataset.

Results of our main experiment on the P3 dataset.

BibTeX

@article{liao2024instance,
  title={From Instance Training to Instruction Learning: Task Adapters Generation from Instructions},
  author={Liao, Huanxuan and Xu, Yao and He, Shizhu and Zhang, Yuanzhe and Hao, Yanchao and Liu, Shengping and Liu, Kang and Zhao, Jun},
  journal={arXiv preprint arXiv:2406.12382},
  year={2024}
}