The AI Policy group focuses on increasing the trustworthiness of artificial intelligence (AI) and machine learning (ML) systems by enhancing their explainability and accountability. Current research topics include the role of AI in financial decision making, working with stakeholders on AI principles, and policy to increase access to new training data sets.
Autonomous vehicles are one example of why this work matters. If users are to trust the cars they ride in and feel comfortable relinquishing control, autonomous subsystems must be able to explain why they took certain actions and show that they can be held accountable for errors. Explanations will also have to be simple enough for users to understand, even when those users are cognitively distracted.
In response to this, our machine understanding work explores techniques for enabling autonomous systems to explain themselves by generating coherent symbolic representations of the relevant antecedents of significant events in the course of driving.
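As a rough illustration of the idea, the sketch below shows one way a driving subsystem could turn a log of timestamped events into a symbolic explanation by collecting the antecedents that immediately preceded a significant action. The `Event` class, the `explain` function, and the time-window heuristic are all hypothetical simplifications for this example, not the group's actual system.

```python
# Illustrative sketch only: a toy "explain yourself" routine for a driving log.
# All names and the 2-second antecedent window are assumptions for this example.
from dataclasses import dataclass

@dataclass
class Event:
    time: float   # seconds since start of trip
    kind: str     # e.g. "pedestrian_detected", "hard_brake"
    detail: str

def explain(events, significant, window=2.0):
    """Return a symbolic explanation: the events recorded within `window`
    seconds before the significant event, treated as its antecedents."""
    antecedents = [e for e in events
                   if significant.time - window <= e.time < significant.time]
    causes = ", ".join(f"{e.kind} ({e.detail})" for e in antecedents)
    return f"{significant.kind} because of: {causes or 'no recorded antecedents'}"

log = [
    Event(10.0, "pedestrian_detected", "crosswalk ahead"),
    Event(10.4, "speed_check", "35 mph in 25 zone"),
    Event(11.2, "hard_brake", "decelerated 0.6 g"),
]
print(explain(log, log[-1]))
# → hard_brake because of: pedestrian_detected (crosswalk ahead), speed_check (35 mph in 25 zone)
```

A real system would of course need far richer event representations and causal filtering, but the output format suggests the target: a short, human-readable account of why the vehicle acted as it did.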
Read AI Policy News and Blog Posts
Since its introduction in 1968, the Fair Housing Act has shielded protected classes from discrimination…Read More
Luis Videgaray, former foreign minister of Mexico and MIT alumnus, has become a distinguished fellow at…Read More
In a recent New York Times article, Steve Lohr discusses U.S. and global policy regarding…Read More
Written by Grace Abuhamad. The AI Policy Congress benefitted from a unique international perspective as…Read More
Written by Grace Abuhamad. The Current State of Autonomous Vehicles “Level 5 autonomy or flying…Read More
AI Tool Builders and Their Users: What Should We Expect From the Tools and Who Is Responsible When They Fail?
Written by: Daniel J. Weitzner February 19, 2019 In the midst of a heated global…Read More