AI Explainability and Accountability
If users are to trust the cars they ride in, feel comfortable, and be willing to relinquish control, autonomous subsystems will have to be able to explain why they took certain actions and show that they can be held accountable for errors. Explanations will have to be simple enough for users to understand, even when they are subject to cognitive distractions.
Our machine understanding work explores techniques for enabling autonomous systems to explain themselves by generating coherent symbolic representations of the relevant antecedents of significant events in the course of driving.
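One way to picture this idea is a minimal sketch in which a driving subsystem logs symbolic percept events and, when asked about an action, returns only the prior events that a rule names as relevant antecedents. The event labels, the rule table, and the `ActionExplainer` class below are all hypothetical illustrations, not the project's actual representation.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    time: float
    label: str  # symbolic description of a percept, e.g. "pedestrian_detected"

@dataclass
class ActionExplainer:
    """Records percept events and, for a given action, emits the symbolic
    antecedents that justify it (hypothetical rule table)."""
    log: list = field(default_factory=list)
    # Hypothetical mapping from actions to the percepts that warrant them.
    rules: dict = field(default_factory=lambda: {
        "emergency_brake": {"pedestrian_detected", "obstacle_ahead"},
        "lane_change": {"slow_vehicle_ahead", "adjacent_lane_clear"},
    })

    def observe(self, t: float, label: str) -> None:
        self.log.append(Event(t, label))

    def explain(self, action: str, t: float) -> list:
        """Return logged events that are relevant antecedents of `action`:
        percepts named by the action's rule that occurred at or before t."""
        relevant = self.rules.get(action, set())
        return [e for e in self.log if e.time <= t and e.label in relevant]

ex = ActionExplainer()
ex.observe(0.1, "lane_marking_seen")
ex.observe(0.4, "pedestrian_detected")
ex.observe(0.5, "obstacle_ahead")
antecedents = ex.explain("emergency_brake", t=0.6)
print([e.label for e in antecedents])  # irrelevant percepts are filtered out
```

Keeping the explanation symbolic (labels and rules rather than raw sensor values) is what would let a distracted passenger read it at a glance: the braking event reduces to "a pedestrian was detected and an obstacle was ahead."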