From oh.531 at buckeyemail.osu.edu  Fri Mar 3 09:47:49 2023
From: oh.531 at buckeyemail.osu.edu (Oh, Byung-Doh)
Date: Fri, 3 Mar 2023 14:47:49 +0000
Subject: [CaCL] CaCL 3/9: No meeting (HSP conference)
Message-ID:

Dear CaCL members,

Next week (3/9), we will not meet, as many of us will be attending HSP 2023. The following week (3/16), we will not meet, as it is spring break. Three weeks from now (3/23), I will lead the discussion of a paper (TBD); I will email members before spring break to gauge their interest in potential candidates.

Best,
Byung-Doh

=================
Byung-Doh Oh (he/him/his)
Ph.D. Student
Department of Linguistics
The Ohio State University

From oh.531 at buckeyemail.osu.edu  Fri Mar 17 16:23:26 2023
From: oh.531 at buckeyemail.osu.edu (Oh, Byung-Doh)
Date: Fri, 17 Mar 2023 20:23:26 +0000
Subject: [CaCL] CaCL 3/23: A noisy-channel approach to depth-charge illusions (in preparation for Ted Gibson visit)
Message-ID:

Hi everyone,

Next Thursday, we'll discuss the following Cognition article in preparation for Ted Gibson's visit on Friday. It models how depth-charge illusions arise under the noisy-channel Bayesian inference framework and is directly relevant to his talk. This work is mostly experimental in nature, so we hope other department members who are interested can also join us for a more fruitful discussion (please include this line in the weekly digest).

A noisy-channel approach to depth-charge illusions (Zhang et al. 2023)
https://www.sciencedirect.com/science/article/abs/pii/S0010027722003353
(paywalled, please access through the OSU Library)

The "depth-charge" sentence, No head injury is too trivial to be ignored, is often interpreted as "no matter how trivial head injuries are, we should not ignore them," while the literal meaning is the opposite: "we should ignore them." Four decades of research have failed to resolve the source of this entrenched semantic illusion. Here we adopt the noisy-channel framework for language comprehension to provide a potential explanation. We hypothesize that depth-charge sentences result from inferences whereby comprehenders derive the interpretation by weighing the plausibility of possible readings of the depth-charge sentences against the likelihood of plausible sentences being produced with errors. In four experiments, we find that (1) the more plausible the intended meaning of the depth-charge sentence is, the more likely the sentence is to be misinterpreted; and (2) the higher the likelihood of our hypothesized noise operations, the more likely depth-charge sentences are to be misinterpreted. These results suggest that misinterpretation is affected by both world knowledge and the distance between the depth-charge sentence and a plausible alternative, which is consistent with the noisy-channel framework.

Best,
Byung-Doh

=================
Byung-Doh Oh (he/him/his)
Ph.D. Student
Department of Linguistics
The Ohio State University
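For readers less familiar with the noisy-channel framework that the abstract assumes, the inference it describes can be sketched as follows (a simplified schematic, not necessarily the authors' exact formulation):

    P(s_intended | s_perceived) ∝ P(s_intended) × P(s_perceived | s_intended)

Here the first factor on the right is the prior plausibility of a candidate intended sentence, and the second is the probability that noise operations (for example, inserting or deleting a negation) turned that sentence into the one actually read. On this view, the depth-charge misreading arises when the literal parse is implausible and a plausible alternative is only a small noise edit away.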
From oh.531 at buckeyemail.osu.edu  Tue Mar 21 08:41:03 2023
From: oh.531 at buckeyemail.osu.edu (Oh, Byung-Doh)
Date: Tue, 21 Mar 2023 12:41:03 +0000
Subject: [CaCL] [FINAL] 3/23 Dependency locality as an explanatory principle for word order
Message-ID:

Hello everyone,

Sorry about the spam; this is just in case some people are on this list but not on the lingosu list. Let's discuss the following paper in preparation for the Gibson talk on Friday.

Dependency locality as an explanatory principle for word order
http://tedlab.mit.edu/tedlab_website/researchpapers/Futrell_Levy_Gibson_2020.pdf

This work focuses on explaining both grammatical universals of word order and quantitative word-order preferences in usage by means of a simple efficiency principle: dependency locality. In its simplest form, dependency locality holds that words linked in a syntactic dependency (any head-dependent relationship) should be close in linear order. We give large-scale corpus evidence that dependency locality predicts word order in both grammar and usage, beyond what would be expected from independently motivated principles, and demonstrate a means for dissociating grammar and usage in corpus studies. Finally, we discuss previously undocumented variation in dependency length and how it correlates with other linguistic features such as head direction, providing a rich set of explananda for future linguistic theories.

Best,
Byung-Doh

=================
Byung-Doh Oh (he/him/his)
Ph.D. Student
Department of Linguistics
The Ohio State University

From clark.3664 at buckeyemail.osu.edu  Thu Mar 23 14:46:05 2023
From: clark.3664 at buckeyemail.osu.edu (Clark, Christian)
Date: Thu, 23 Mar 2023 18:46:05 +0000
Subject: [CaCL] CaCL reading for 3/30
Message-ID:

Hi CaCL members,

Our reading for next Thursday will be McCoy et al. 2023.

Title: How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech

Link: https://arxiv.org/abs/2301.11462

Abstract: When acquiring syntax, children consistently choose hierarchical rules over competing non-hierarchical possibilities. Is this preference due to a learning bias for hierarchical structure, or due to more general biases that interact with hierarchical cues in children's linguistic input? We explore these possibilities by training LSTMs and Transformers - two types of neural networks without a hierarchical bias - on data similar in quantity and content to children's linguistic input: text from the CHILDES corpus. We then evaluate what these models have learned about English yes/no questions, a phenomenon for which hierarchical structure is crucial. We find that, though they perform well at capturing the surface statistics of child-directed speech (as measured by perplexity), both model types generalize in a way more consistent with an incorrect linear rule than the correct hierarchical rule. These results suggest that human-like generalization from text alone requires stronger biases than the general sequence-processing biases of standard neural network architectures.

----
Christian Clark
Ph.D. Student
Department of Linguistics
The Ohio State University
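As a concrete illustration of the contrast discussed in the McCoy et al. abstract above, the two competing generalizations for English yes/no question formation can be sketched as below. The example sentence and the toy auxiliary list are illustrative only; they are not taken from the paper, which evaluates trained LSTMs and Transformers rather than hand-written rules.

    # Sketch of the two competing rules for English yes/no question formation.
    # Illustrative only; McCoy et al. test whether trained networks generalize
    # like the linear rule or the hierarchical rule, not these hand-coded rules.

    AUXILIARIES = {"is", "are", "was", "were", "can", "will", "does"}

    def linear_rule(words):
        """Front the linearly first auxiliary (the incorrect linear generalization)."""
        for i, w in enumerate(words):
            if w in AUXILIARIES:
                return [w] + words[:i] + words[i + 1:]
        return words

    def hierarchical_rule(words, main_aux_index):
        """Front the main-clause auxiliary (the correct hierarchical generalization).
        The main-clause auxiliary's position is supplied by hand here, standing in
        for the structural analysis a hierarchical grammar would provide."""
        aux = words[main_aux_index]
        return [aux] + words[:main_aux_index] + words[main_aux_index + 1:]

    declarative = "the boy who is smiling is happy".split()

    print(" ".join(linear_rule(declarative)) + "?")
    # -> "is the boy who smiling is happy?"   (ungrammatical)

    print(" ".join(hierarchical_rule(declarative, main_aux_index=5)) + "?")
    # -> "is the boy who is smiling happy?"   (grammatical)

The abstract's finding is that models trained on CHILDES-scale data generalize in a way more consistent with the first, linear pattern than with the second, hierarchical one.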