From cheung.179 at buckeyemail.osu.edu Thu Mar 3 16:17:51 2022
From: cheung.179 at buckeyemail.osu.edu (Cheung, Willy)
Date: Thu, 3 Mar 2022 21:17:51 +0000
Subject: [CaCL] Reading for 3/10
Message-ID:

Hi CaCLers,

Next Thursday we will discuss the following paper:

Discourse structure interacts with reference but not syntax in neural language models, by Davis and van Schijndel (2020).
Paper link: https://arxiv.org/pdf/2010.04887.pdf

Zoom link: https://osu.zoom.us/j/95536921111?pwd=TG9YdVZ0Wk45R2hCdHhTYk5ubkhIQT09

Abstract: Language models (LMs) trained on large quantities of text have been claimed to acquire abstract linguistic representations. Our work tests the robustness of these abstractions by focusing on the ability of LMs to learn interactions between different linguistic representations. In particular, we utilized stimuli from psycholinguistic studies showing that humans can condition reference (i.e., coreference resolution) and syntactic processing on the same discourse structure (implicit causality). We compared both transformer and long short-term memory LMs and found that, contrary to humans, implicit causality influences LM behavior only for reference, not syntax, despite model representations that encode the necessary discourse information. Our results further suggest that LM behavior can contradict not only learned representations of discourse but also syntactic agreement, pointing to shortcomings of standard language modeling.

From oh.531 at buckeyemail.osu.edu Fri Mar 18 10:06:52 2022
From: oh.531 at buckeyemail.osu.edu (Oh, Byung-Doh)
Date: Fri, 18 Mar 2022 14:06:52 +0000
Subject: [CaCL] 3/24: The 35th Annual Conference on Human Sentence Processing
Message-ID:

Hello everyone,

Next week, CaCL will not meet. Instead, CaCL members are encouraged to attend The 35th Annual Conference on Human Sentence Processing (HSP2022). Registration is free at: https://hsp2022.ucsc.edu

Best,
Byung-Doh

=================
Byung-Doh Oh (he/him/his)
Ph.D. Student
Department of Linguistics
The Ohio State University

From oh.531 at buckeyemail.osu.edu Sun Mar 27 10:45:06 2022
From: oh.531 at buckeyemail.osu.edu (Oh, Byung-Doh)
Date: Sun, 27 Mar 2022 14:45:06 +0000
Subject: [CaCL] 3/31: Syntax-Enhanced Pre-trained Model
Message-ID:

Hello everyone,

Next week, we'll be discussing the following paper:

Syntax-Enhanced Pre-trained Model (Xu et al., 2021)
https://aclanthology.org/2021.acl-long.420.pdf

Abstract: We study the problem of leveraging the syntactic structure of text to enhance pre-trained models such as BERT and RoBERTa. Existing methods utilize the syntax of text either in the pre-training stage or in the fine-tuning stage, and therefore suffer from a discrepancy between the two stages. This also creates the need for human-annotated syntactic information, which limits the application of existing methods to broader scenarios. To address this, we present a model that utilizes the syntax of text in both the pre-training and fine-tuning stages. Our model is based on the Transformer, with a syntax-aware attention layer that considers the dependency tree of the text.
We further introduce a new pre-training task of predicting the syntactic distance among tokens in the dependency tree. We evaluate the model on three downstream tasks: relation classification, entity typing, and question answering. Results show that our model achieves state-of-the-art performance on six public benchmark datasets. We have two major findings. First, we demonstrate that infusing automatically produced syntactic information improves pre-trained models. Second, global syntactic distances among tokens bring larger performance gains than local head relations between contiguous tokens.

Best,
Byung-Doh

=================
Byung-Doh Oh (he/him/his)
Ph.D. Student
Department of Linguistics
The Ohio State University
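In case a concrete example helps before the discussion: the new pre-training task described above asks the model to predict syntactic (tree) distances between token pairs. Below is a minimal sketch, not taken from the paper, of how such distances can be computed from a dependency parse; the function name, the example sentence, and its head annotations are my own toy illustration rather than the authors' setup.

from collections import deque

def tree_distances(heads):
    """All-pairs distances between tokens in a dependency tree.
    heads[i] is the index of token i's head, or -1 for the root;
    the distance is the number of dependency arcs on the path."""
    n = len(heads)
    adj = [[] for _ in range(n)]          # undirected adjacency list over arcs
    for i, h in enumerate(heads):
        if h >= 0:
            adj[i].append(h)
            adj[h].append(i)
    dist = [[0] * n for _ in range(n)]
    for src in range(n):                  # BFS from every token
        seen = {src}
        queue = deque([(src, 0)])
        while queue:
            node, d = queue.popleft()
            dist[src][node] = d
            for nxt in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, d + 1))
    return dist

if __name__ == "__main__":
    # Toy parse (illustration only): "She bought fresh bread"
    #   bought = root, She -> bought, bread -> bought, fresh -> bread
    tokens = ["She", "bought", "fresh", "bread"]
    heads = [1, -1, 3, 1]
    for tok, row in zip(tokens, tree_distances(heads)):
        print(f"{tok:>6}: {row}")

Note that directly attached head-dependent pairs come out at distance 1, while the distance matrix also covers every non-adjacent token pair; that global signal is what the abstract's second finding contrasts with purely local head relations.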