Developing a scalable theory of alternatives; Flexibly understanding actions in different sentences and different worlds: Contextually adaptive semantics for physical actions and goals

October 13, 2020, 12:00 pm to 1:00 pm
Jennifer Hu (Levy Lab), Cathy Wong (Tenenbaum Lab)

Zoom Webinar

Jennifer Hu

Developing a scalable theory of alternatives

Humans consider counterfactual observations in order to perform a variety of reasoning tasks, such as making causal and moral judgments. These alternative possibilities also enable us to draw pragmatic inferences in language understanding. For example, if you hear “some students passed the exam,” you likely infer that not all students passed, because the speaker could have used the more informative alternative “all students passed the exam” if that had been the case. In this talk, I will discuss existing theories of linguistic alternatives, as well as a growing body of empirical evidence that motivates a more flexible theory of how alternatives are learned, generated, and deployed in pragmatic inference.

Cathy Wong

Flexibly understanding actions in different sentences and different worlds: Contextually adaptive semantics for physical actions and goals

Distributional word embeddings and large predictive text models have driven remarkable recent progress in many natural language processing tasks, especially ones that relate text to text. When humans use language, however, we often do so in the context of the world, and with incredible flexibility: a single word can be reused across an enormous range of linguistic and grounded contexts, while still permitting subtle distinctions in how we imagine, plan around, execute, and assess the linguistic acceptability of words depending on the specifics of a particular sentence in the context of a particular world. We propose a roadmap for a research program driven directly by this contextual flexibility: we focus on achieving human-like understanding of the language of physically grounded actions and goals, across different levels of spatiotemporal abstraction, varying physical world dynamics, and a diverse set of linguistically specified goals. We discuss a representational framework, a dataset of grounded linguistic planning queries in a diverse range of video game environments, and methods for inferring and learning contextually adaptive semantics across a battery of planning and instruction-following tasks.