Toward Building Machines that Learn Generalizable, Interpretable Knowledge

February 4, 2020, from 12:00 pm to 1:00 pm
Kevin Ellis (Tenenbaum Lab and Solar-Lezama Lab)

Humans can learn to solve a seemingly endless range of problems: building, design, coding, and using language, to name a few. Humans can learn to do all this from a relatively modest amount of experience, can creatively compose what they learn to extrapolate beyond their direct experience, and can communicate their knowledge in ways that other humans can comprehend and contribute to. Machines that are intelligent along all these dimensions are surely very far off. Here, however, I will argue that an AI technique, program induction, will play a role in building these more human-like AIs. Across case studies in vision, phonology, and learning-to-learn, this talk will present program induction systems that take a small step toward machines that can acquire new knowledge from modest amounts of experience, generalize that knowledge to extrapolate beyond their training, represent their knowledge in an interpretable format, and apply to a broad span of problems, from picture drawing to equation discovery. These systems integrate symbolic, probabilistic, and neural network approaches, alongside techniques drawn from the program synthesis community.

McGovern Seminar Room, 46-3189