[CaCL] Paper for Next Week 11/3

Lewis, Ash lewis.2799 at buckeyemail.osu.edu
Thu Oct 27 14:21:34 EDT 2022


Hi all,

Next week I’ll be leading discussion on the following paper, linked here: <https://aclanthology.org/2021.acl-long.353.pdf>

Prefix-Tuning: Optimizing Continuous Prompts for Generation
Li and Liang, 2021

Abstract:
Fine-tuning is the de facto way of leveraging large pretrained language models for downstream tasks. However, fine-tuning modifies all the language model parameters and therefore necessitates storing a full copy for each task. In this paper, we propose prefix-tuning, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen and instead optimizes a sequence of continuous task-specific vectors, which we call the prefix. Prefix-tuning draws inspiration from prompting for language models, allowing subsequent tokens to attend to this prefix as if it were “virtual tokens”. We apply prefix-tuning to GPT-2 for table-to-text generation and to BART for summarization. We show that by modifying only 0.1% of the parameters, prefix-tuning obtains comparable performance in the full data setting, outperforms fine-tuning in low-data settings, and extrapolates better to examples with topics that are unseen during training.

*****
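If a concrete picture helps before we meet, below is a rough sketch of the core mechanism in PyTorch with the Hugging Face transformers library. It's only illustrative: the prefix length, learning rate, and toy input are made up, I've left out the reparameterization MLP the paper uses to stabilize training, and the legacy past_key_values tuple format shown here may vary across transformers versions.

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    model = GPT2LMHeadModel.from_pretrained("gpt2")
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    for p in model.parameters():
        p.requires_grad = False          # the LM itself stays frozen

    cfg = model.config
    prefix_len = 10                      # number of "virtual tokens" (illustrative)
    head_dim = cfg.n_embd // cfg.n_head

    # One trainable key and value vector per layer, head, and prefix position.
    prefix = torch.nn.Parameter(
        0.02 * torch.randn(cfg.n_layer, 2, cfg.n_head, prefix_len, head_dim))

    def prefix_past(batch_size):
        # Expand the shared prefix across the batch and split it into the
        # per-layer (key, value) pairs GPT-2 accepts as past_key_values.
        p = prefix.unsqueeze(1).expand(-1, batch_size, -1, -1, -1, -1)
        return tuple((layer[:, 0], layer[:, 1]) for layer in p)

    optimizer = torch.optim.AdamW([prefix], lr=5e-5)   # only the prefix trains

    batch = tokenizer(["The table shows"], return_tensors="pt")
    bsz = batch["input_ids"].shape[0]
    # The attention mask has to cover the prefix positions as well.
    mask = torch.cat(
        [torch.ones(bsz, prefix_len, dtype=torch.long),
         batch["attention_mask"]], dim=1)

    out = model(input_ids=batch["input_ids"],
                attention_mask=mask,
                past_key_values=prefix_past(bsz),
                labels=batch["input_ids"])
    out.loss.backward()                  # gradients reach only the prefix
    optimizer.step()

The thing to notice is that the optimizer only ever touches the prefix tensor: every real token attends to those prepended key/value activations at each layer, so per task you would save just that small tensor (roughly 0.1% of the parameters, per the abstract) instead of a full fine-tuned copy of the model.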

See you all on Thursday!

Ash

Ash Lewis (she/her/hers)
PhD Student, Department of Linguistics
The Ohio State University