AI Explainability and Accountability

If we want users to trust the cars they ride in and be willing to relinquish control, autonomous subsystems will have to explain why they took certain actions and show that they can be held accountable for errors. These explanations must be simple enough for users to understand, even when users are cognitively distracted.

Our machine understanding work explores techniques for enabling autonomous systems to explain themselves by generating coherent symbolic representations of the relevant antecedents of significant events in the course of driving.
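As a minimal sketch of this idea (the event names and data structure here are hypothetical illustrations, not the project's actual representation), an explanation for a significant driving event can be generated by recording each event together with its symbolic antecedents and tracing that chain backward:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """A symbolic record of something the vehicle observed or did."""
    name: str                                        # e.g. "brake_applied"
    antecedents: list = field(default_factory=list)  # events that caused this one

def explain(event, depth=0):
    """Produce an indented, human-readable trace of an event's antecedents."""
    lines = ["  " * depth + event.name]
    for cause in event.antecedents:
        lines.extend(explain(cause, depth + 1))
    return lines

# Hypothetical scenario: the car brakes because a pedestrian
# was detected in the planned path.
pedestrian = Event("pedestrian_detected_in_crosswalk")
path_conflict = Event("planned_path_obstructed", [pedestrian])
brake = Event("brake_applied", [path_conflict])

print("\n".join(explain(brake)))
```

Running the sketch prints the braking event followed by its indented causes, reading top-down as "the brake was applied because the planned path was obstructed because a pedestrian was detected."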

New Cross-Disciplinary Group Contemplates AI Ethics

Published on 2018-10-23

MIT Students Address Ethical Issues in the Areas of Artificial Intelligence (AI) and Autonomous Machines…

Student Reflections (China 2017) – Matías Aránguiz V. (SJTU)

Published on 2018-03-27

By: Matías Aránguiz V., PhD student in Law, Shanghai Jiaotong University. During the summer, a team…

Student Reflections (China 2017) – Leilani Gilpin (MIT)

Published on 2018-03-27

By: Leilani Gilpin. I didn't know what to expect when…

The AI Explainability and Accountability Team

Leilani Gilpin, Ph.D. Candidate
Gerry Sussman, Panasonic Professor of Electrical Engineering
Ben Yuan, Ph.D. Candidate