<p class="MsoNormal"><span style="font-size:12.0pt;color:black">Hi everyone,<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;color:black"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;color:black">Here’s the paper we’ll be discussing this week:
<a href="https://www.aclweb.org/anthology/P19-1452/">https://www.aclweb.org/anthology/P19-1452/</a>. The abstract is below.
<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;color:black"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;color:black;background:white">Here’s the Zoom link for Thursday (9/10)
<a href="https://osu.zoom.us/j/94712017994?pwd=cVdNUGRSMkMyU2R0dTZaZ2xnSmxjUT09">
https://osu.zoom.us/j/94712017994?pwd=cVdNUGRSMkMyU2R0dTZaZ2xnSmxjUT09</a><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:14.0pt;color:black"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt">Abstract:<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt">Pre-trained text encoders have rapidly advanced the state of the art on many NLP tasks. We focus on one such model, BERT, and aim to quantify where linguistic information is captured within the network. We
find that the model represents the steps of the traditional NLP pipeline in an interpretable and localizable way, and that the regions responsible for each step appear in the expected sequence: POS tagging, parsing, NER, semantic roles, then coreference. Qualitative
analysis reveals that the model can and often does adjust this pipeline dynamically, revising lower-level decisions on the basis of disambiguating information from higher-level representations.<o:p></o:p></span></p>
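If you want to poke at the model before we meet: the paper’s layer-wise analyses all start from BERT’s per-layer activations. Here’s a minimal sketch of pulling those out with the HuggingFace transformers library (my own illustration, assuming the bert-base-uncased checkpoint; this is not the authors’ edge-probing code, which trains classifiers on top of representations like these):

    # Minimal sketch: dump BERT's per-layer hidden states for one sentence.
    # Assumes the `transformers` and `torch` packages; illustrative only,
    # not the paper's edge-probing setup.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
    model.eval()

    inputs = tokenizer("The lawyer questioned the witness.", return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # hidden_states is a tuple of 13 tensors: the embedding output plus one
    # per transformer layer, each shaped (batch, seq_len, hidden_size=768).
    # The paper asks which of these layers best supports each pipeline step.
    for layer_idx, layer in enumerate(outputs.hidden_states):
        print(f"layer {layer_idx}: {tuple(layer.shape)}")
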
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">See you then!<o:p></o:p></p>
<p class="MsoNormal">Ash<o:p></o:p></p>