[CaCL] Reading for 3/28: Contrastive Decoding: Open-ended Text Generation as Optimization

Lin, Yi Chien lin.4434 at buckeyemail.osu.edu
Sun Mar 24 16:41:05 EDT 2024


Hi all,

This is a friendly reminder that the paper we will discuss this week (3/28) is “Contrastive Decoding: Open-ended Text Generation as Optimization” (Li et al., 2023). The abstract and the link to the paper are below.

Best,
Yi-Chien

From: CaCL <cacl-bounces+lin.4434=osu.edu at lists.osu.edu> on behalf of Lin, Yi Chien via CaCL <cacl at lists.osu.edu>
Date: Thursday, March 7, 2024, 2:33 PM
To: cacl at lists.osu.edu <cacl at lists.osu.edu>
Subject: [CaCL] Reading for 3/28: Contrastive Decoding: Open-ended Text Generation as Optimization
Hi All,

CaCL will not meet next week (3/14) and the week after (3/21). Our next meeting will be on 3/28 – we will be discussing “Contrastive Decoding: Open-ended Text Generation as Optimization” (Li et al., 2023).

Paper:
https://aclanthology.org/2023.acl-long.687/

Abstract:
Given a language model (LM), maximum probability is a poor decoding objective for open-ended generation, because it produces short and repetitive text. On the other hand, sampling can often produce incoherent text that drifts from the original topics. We propose contrastive decoding (CD), a reliable decoding approach that optimizes a contrastive objective subject to a plausibility constraint. The contrastive objective returns the difference between the likelihood under a large LM (called the expert, e.g., OPT-13B) and a small LM (called the amateur, e.g., OPT-125M), and the constraint ensures that the outputs are plausible. CD is inspired by the fact that the failures of larger LMs (e.g., repetition, incoherence) are even more prevalent in smaller LMs, and that this difference signals which texts should be preferred. CD requires zero additional training, and produces higher quality text than decoding from the larger LM alone. It also works across model scales (OPT-13B and GPT2-1.5B) and significantly outperforms four strong decoding algorithms (e.g., nucleus, top-k) in automatic and human evaluations across Wikipedia, news and story domains.
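For anyone who wants to try the method before the meeting, here is a minimal sketch of the decoding loop the abstract describes, written against the Hugging Face transformers API. The GPT-2 expert/amateur pairing, the alpha value, and the greedy search over the CD score are illustrative assumptions, not the authors' exact setup (the paper uses beam search over the same objective and an amateur temperature); only the objective itself, log p_expert - log p_amateur restricted to expert-plausible tokens, follows the abstract.

# Sketch of contrastive decoding (Li et al., 2023).
# Model names and alpha are illustrative; the paper pairs e.g.
# OPT-13B (expert) with OPT-125M (amateur).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
expert = AutoModelForCausalLM.from_pretrained("gpt2-xl")  # large "expert" LM
amateur = AutoModelForCausalLM.from_pretrained("gpt2")    # small "amateur" LM
expert.eval()
amateur.eval()

@torch.no_grad()
def contrastive_decode(prompt, max_new_tokens=50, alpha=0.1):
    ids = tok(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        log_p_exp = expert(ids).logits[0, -1].log_softmax(-1)
        log_p_ama = amateur(ids).logits[0, -1].log_softmax(-1)
        # Plausibility constraint: keep only tokens whose expert
        # probability is within a factor alpha of the expert's best token.
        cutoff = log_p_exp.max() + torch.log(torch.tensor(alpha))
        # Contrastive objective: expert log-likelihood minus amateur
        # log-likelihood; implausible tokens are masked out.
        cd_score = log_p_exp - log_p_ama
        cd_score[log_p_exp < cutoff] = float("-inf")
        next_id = cd_score.argmax()  # greedy step over the CD objective
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)
        if next_id.item() == tok.eos_token_id:
            break
    return tok.decode(ids[0], skip_special_tokens=True)

print(contrastive_decode("The history of the Ohio State University"))

The masking step is what keeps the objective from rewarding tokens the amateur merely dislikes: without the plausibility constraint, maximizing the likelihood difference alone can favor implausible continuations.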

Best,
Yi-Chien