LLaMA-Adapter: An Instruction-Following Model Fine-Tuned in Under One Hour
This repo proposes LLaMA-Adapter 🔥, a lightweight and simple adapter for fine-tuning LLaMA into instruction-following models, using the 52K instruction data provided by Stanford Alpaca.

Overview

By inserting adapters into LLaMA's transformer, our method introduces only 1~8M learnable parameters and turns LLaMA into an instruction-following model within 25~50 minutes. LLaMA-Adapter is plug-and-play due to a proposed Zero…
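The core idea above, learnable adaption prompts inserted into the transformer's attention with a zero-initialized gate so the pretrained weights are undisturbed at the start of fine-tuning, can be sketched as follows. This is a minimal, simplified PyTorch illustration, not the repo's actual code: the class name `ZeroInitAdapterAttention`, the separate-softmax formulation, and all shapes are assumptions made for clarity.

```python
import torch
import torch.nn as nn

class ZeroInitAdapterAttention(nn.Module):
    """Illustrative sketch of a zero-init gated adaption prompt (not the repo's API)."""

    def __init__(self, dim: int, n_heads: int, prompt_len: int):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = dim // n_heads
        # Learnable adaption prompt, shared across the batch.
        self.prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        # Per-head gate initialized to zero: the adapter starts as a no-op,
        # so the frozen pretrained model's behavior is preserved early on.
        self.gate = nn.Parameter(torch.zeros(n_heads))
        self.wq = nn.Linear(dim, dim, bias=False)
        self.wk = nn.Linear(dim, dim, bias=False)
        self.wv = nn.Linear(dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        # Project input tokens to multi-head queries, keys, values.
        q = self.wq(x).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        k = self.wk(x).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        v = self.wv(x).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        # Keys/values derived from the learnable prompt.
        p = self.prompt.unsqueeze(0).expand(b, -1, -1)
        pk = self.wk(p).view(b, -1, self.n_heads, self.head_dim).transpose(1, 2)
        pv = self.wv(p).view(b, -1, self.n_heads, self.head_dim).transpose(1, 2)
        scale = self.head_dim ** -0.5
        # Standard attention over the original tokens.
        attn = torch.softmax(q @ k.transpose(-2, -1) * scale, dim=-1) @ v
        # Prompt attention, modulated by the zero-init gate (tanh keeps it bounded).
        p_attn = torch.softmax(q @ pk.transpose(-2, -1) * scale, dim=-1) @ pv
        out = attn + torch.tanh(self.gate).view(1, -1, 1, 1) * p_attn
        return out.transpose(1, 2).reshape(b, t, d)
```

Because the gate starts at zero, only the prompt and gate parameters need to learn during fine-tuning, which is consistent with the small (1~8M) trainable-parameter budget described above.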