How language understanding unfolds in minds and machines

October 22, 2020, 4:00 pm to 5:00 pm
Roger Levy

Language allows us to package our thoughts into symbolic forms and transmit some approximation of them into each other's minds. We do this hundreds of times a day as listeners, speakers, readers, and writers. How we are able to achieve this is one of the great scientific questions in the study of mind and brain. In this talk I describe several of our research group's recent advances on this question. First, we offer new results from a theory of how memory constrains human understanding: namely, that context representations are "lossy" in an information-theoretic sense. This theory provides a novel link between memory representations for grammatical structures and the statistics of the natural language environment, explaining recent findings that the same grammatical configuration can differ in difficulty for native speakers of different languages. The theory also predicts new generalizations about word order, which we confirm empirically in a broad sample of languages. Second, we evaluate and calibrate contemporary deep-learning models for human-like language processing using numerous controlled experimental benchmarks and human behavioral datasets. Our results bear on classic questions of the learnability of syntactic structures from linguistic input, and they also highlight the continued importance of model architecture for human-like linguistic generalization. Third, we closely investigate the preferences guiding how we package our thoughts into forms, taking advantage of the English pronoun system to study how native speakers talk about and infer the gender of participants in expected events. Our case studies include the first controlled time-series psycholinguistic experiment on language processing related to an ongoing world event (the 2016 U.S. Presidential election campaign), and they reveal new biases in how we prefer to talk about expected events.

https://mit.zoom.us/j/93157121082

Zoom Webinar