Zoom meeting URL: https://mit.zoom.us/j/95187074245
How do adults understand children's speech? Children's productions in early language development often bear little resemblance to typical adult pronunciations, yet caregivers nonetheless reliably recover meaning from them. In this talk, I test a suite of Bayesian models of spoken word recognition to understand how adults overcome the noisiness of child language, and I show that communicative success between children and adults relies heavily on adult inferential processes. By evaluating competing language models (e.g., fine-tuned BERT models) on phonetically annotated child corpora, I show that adults' recovered meanings are best predicted by prior expectations fitted specifically to the child language environment, rather than to typical adult-adult language. After quantifying the contribution of this "child-directed listening" over developmental time, I discuss the consequences of this finding for how we formulate the problem of first language learning.
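
For readers unfamiliar with the noisy-channel framing, a rough sketch of the kind of Bayesian word recognition model the abstract describes (the notation here is mine, not necessarily the speaker's): given a child's phonetic production y heard in linguistic context c, the listener's posterior belief about the intended word w combines a prior over words from the context (e.g., a language model such as fine-tuned BERT) with a likelihood capturing how the intended word might surface in a child's pronunciation:

\[
P(w \mid y, c) \;\propto\; P(y \mid w)\, P(w \mid c)
\]

On this reading, "child-directed listening" corresponds to using a prior P(w | c) (and a noise model P(y | w)) fitted to the child language environment rather than to typical adult-adult speech.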