Will I Be There for You?: risks of helping in ongoing non-anonymous relationships; Using Bayesian Theory of Mind to Capture Third-Person Emotion Attributions

On December 8, 2020, from 12:00 pm to 1:00 pm
Era Wu (Saxe Lab), Brandon Davis (Saxe Lab)

Zoom Webinar URL: https://mit.zoom.us/j/99970679713

Era Wu

Will I Be There for You?: risks of helping in ongoing non-anonymous relationships

Compared to helping a stranger, helping a familiar other offers many potential incentives. Helping a friend means helping someone whose well-being we care about, who is more likely to help us in the future, and who may contribute to enhancing our reputation as a helper because of shared social connections. However, in this talk, I will focus on times when we are discouraged from helping someone we know. First, I will argue that helping a familiar other incurs greater risks than helping a stranger: e.g., offering help could make a friend feel indebted or patronized, could set a precedent for helping again in the future, or could establish in the social network an expectation that we will always play the role of helper for the sake of fairness. Second, I will describe hypotheses in progress about these sometimes overlooked risks of helping. Third, I will propose study designs to investigate whether people consider these potential risks when deciding whether, whom, and how to help.

Brandon Davis

Using Bayesian Theory of Mind to Capture Third-Person Emotion Attributions

At an evolutionary level, humans are wired to interact with other humans. As a result, we are able to make rich inferences about the mental states of others. In particular, humans are adept at inferring others' emotional states from sparse information. How is this possible, and what affords us this cognitive flexibility? In this talk, I propose that, en route to inferring an agent's emotion, the observer must first infer the agent's latent desires and preferences. I will present behavioral data from MTurk and formalize a computational model that performs Bayesian inference over the observed actions of an agent modeled as a Partially Observable Markov Decision Process (POMDP) and generates a probability distribution over the agent's unobserved beliefs and preferences. If the behavioral data match the model's predictions, this model has the potential to contribute to a formal computational account of human emotional intelligence.
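As a rough illustration of the inference pattern the abstract describes (inverting an observed choice to recover latent preferences via Bayes' rule), here is a minimal sketch. It uses a toy softmax-rational choice model over a handful of hypothetical options rather than the full POMDP-based agent model from the talk; all option names and parameter values are assumptions for illustration only.

```python
# Toy Bayesian "inverse planning" sketch: infer an agent's latent preference
# from observed choices. This is NOT the talk's POMDP model, just the general
# Bayesian inference pattern over a hypothetical set of options.

import numpy as np

candidate_preferences = ["apple", "bagel", "cookie"]  # hypothetical latent preferences
actions = ["apple", "bagel", "cookie"]                # observable choices

def likelihood(action, preference, beta=2.0):
    """Softmax (Boltzmann-rational) choice model: the agent is more likely
    to pick the option it prefers; beta is an assumed inverse temperature."""
    utilities = np.array([1.0 if a == preference else 0.0 for a in actions])
    probs = np.exp(beta * utilities)
    probs /= probs.sum()
    return probs[actions.index(action)]

def posterior_over_preferences(observed_actions, prior=None):
    """Bayes' rule: P(preference | actions) ∝ P(actions | preference) * P(preference)."""
    if prior is None:
        prior = np.ones(len(candidate_preferences)) / len(candidate_preferences)
    post = prior.copy()
    for a in observed_actions:
        post *= np.array([likelihood(a, p) for p in candidate_preferences])
    return post / post.sum()

# Example: after observing the agent choose "cookie" twice, the posterior
# concentrates on "cookie" as the latent preference.
print(dict(zip(candidate_preferences,
               posterior_over_preferences(["cookie", "cookie"]))))
```

In the model described in the talk, the same posterior-over-latent-states idea would be computed over the beliefs and preferences of a POMDP agent given its observed action trajectory, which this toy example only gestures at.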
