Machine Understanding (MU)

If we want users to trust the cars they are in and feel comfortable relinquishing control, autonomous subsystems will need to explain why they took certain actions and demonstrate that they can be held accountable for errors. These explanations will have to be simple enough for users to understand, even when they are subject to cognitive distractions.

Our machine understanding work explores techniques for enabling autonomous systems to explain themselves by generating coherent symbolic representations of the relevant antecedents of significant events in the course of driving.
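As a rough illustration of this idea, the sketch below shows one way a symbolic event record might be turned into a short natural-language explanation. It is a minimal example only; the names (Antecedent, DrivingEvent, explain) and the structure are illustrative assumptions, not part of the project's actual system.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical symbolic record of a relevant antecedent: a predicate the
# subsystem asserted, plus a human-readable grounding of that predicate.
@dataclass
class Antecedent:
    predicate: str   # e.g. "pedestrian-detected"
    detail: str      # e.g. "a pedestrian entered the crosswalk 12 m ahead"

# Hypothetical record of a significant driving event and its antecedents.
@dataclass
class DrivingEvent:
    action: str                                   # e.g. "hard-brake"
    antecedents: List[Antecedent] = field(default_factory=list)

def explain(event: DrivingEvent) -> str:
    """Render the symbolic antecedent trace as a simple explanation."""
    if not event.antecedents:
        return f"The vehicle performed '{event.action}' with no recorded antecedents."
    reasons = "; ".join(a.detail for a in event.antecedents)
    return f"The vehicle performed '{event.action}' because {reasons}."

if __name__ == "__main__":
    event = DrivingEvent(
        action="hard-brake",
        antecedents=[
            Antecedent("pedestrian-detected",
                       "a pedestrian entered the crosswalk 12 m ahead"),
            Antecedent("speed-above-threshold",
                       "the vehicle was travelling at 40 km/h"),
        ],
    )
    print(explain(event))
```

The point of the sketch is that the explanation is generated from explicit symbolic antecedents rather than from opaque model internals, so the same trace can also support auditing and accountability after the fact.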


The Machine Understanding Team

Leilani Gilpin, Ph.D. Candidate
Gerry Sussman, Panasonic Professor of Electrical Engineering
Ben Yuan, Ph.D. Candidate