Adversarial examples and human-ML alignment

July 23, 2020, 1:00 pm to 2:30 pm
Shibani Santurkar

Machine learning models today achieve impressive performance on challenging benchmark tasks. Yet, these models remain remarkably brittle—small perturbations of natural inputs, known as adversarial examples, can severely degrade their behavior.

Why is this the case?

In this tutorial, we take a closer look at this question and demonstrate that the observed brittleness can be largely attributed to the fact that our models tend to solve classification tasks quite differently from humans. Specifically, viewing neural networks as feature extractors, we study how the features extracted by neural networks can diverge from those used by humans, and how adversarially robust models can help bridge this gap.

Zoom meeting: https://mit.zoom.us/j/98984209390?pwd=M1Z4Mk02N0NwWEZ5Z3ZYTEo5TWlJUT09
Password: 107671


Additional tutorial info: 
The tutorial will include demos—we will use Colab notebooks, so please bring laptops along. In these demos, we will explore the brittleness of standard ML models by crafting adversarial perturbations, and use these perturbations as a lens to inspect the features the models rely on.
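To give a rough flavor of what the demos involve, below is a minimal sketch of crafting an adversarial perturbation with the fast gradient sign method (FGSM) from the first suggested reading. The pretrained ResNet-18, the eps value, and the random example input are illustrative assumptions, not the tutorial's actual notebook code.

```python
# Minimal FGSM sketch (Goodfellow et al., 2015): nudge an input in the
# direction of the sign of the loss gradient to degrade a classifier's
# prediction. Model choice, eps, and the dummy input are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(pretrained=True).eval()

def fgsm_perturb(model, x, y, eps=0.03):
    # x_adv = x + eps * sign( grad_x CrossEntropy(model(x), y) )
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Example usage: a single 224x224 image with an arbitrary label.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([0])
x_adv = fgsm_perturb(model, x, y)
print(model(x).argmax(1), model(x_adv).argmax(1))  # predictions may differ
```

Even with a small eps, the perturbed image typically looks unchanged to a human while the model's prediction shifts, which is exactly the gap between model features and human perception that the tutorial examines.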

Suggested reading (in order of importance): Adversarial examples [https://arxiv.org/abs/1412.6572]; Training robust models [https://arxiv.org/abs/1706.06083]; ML models rely on imperceptible features [https://arxiv.org/abs/1905.02175]; Robustness as a feature prior [https://arxiv.org/abs/1805.12152; https://arxiv.org/abs/1906.00945].
