The AI Policy group focuses on increasing the trustworthiness of artificial intelligence (AI) and machine learning (ML) systems by enhancing their explainability and accountability. Current research topics include the role of AI in financial decision making, working with stakeholders on AI principles, and policy to increase access to new training data sets.
Autonomous vehicles offer one example of why this work matters. If users are to trust the cars they ride in and willingly relinquish control, autonomous subsystems will have to explain why they took certain actions and show that they can be held accountable for errors. Those explanations must be simple enough for users to understand, even under cognitive distraction.
In response, our machine understanding work explores techniques that enable autonomous systems to explain themselves by generating coherent symbolic representations of the relevant antecedents of significant driving events.
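As a purely illustrative sketch of the idea (not the group's actual system; the event labels, the causal annotations, and the `explain` function are all hypothetical), one can imagine an event log in which each driving event carries symbolic labels of the events it helped cause, and an explanation is the set of such antecedents that precede a significant event within a short time window:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    time: float                      # seconds since the start of the drive
    label: str                       # symbolic event label, e.g. "pedestrian_detected"
    caused: list = field(default_factory=list)  # labels of events this one helped cause

def explain(events, target, window=5.0):
    """Return symbolic antecedents of `target` within `window` seconds before it."""
    target_event = next(e for e in events if e.label == target)
    return [
        e.label
        for e in events
        if e.label != target
        and target_event.time - window <= e.time < target_event.time
        and target in e.caused
    ]

# Hypothetical log: a pedestrian detection causes a hard brake shortly after.
log = [
    Event(10.0, "pedestrian_detected", caused=["hard_brake"]),
    Event(10.2, "wet_road_detected"),
    Event(10.5, "hard_brake"),
]
print(explain(log, "hard_brake"))  # ['pedestrian_detected']
```

The point of such a representation is that the answer is a short list of human-readable symbols ("I braked because a pedestrian was detected"), rather than raw sensor data, which is what makes the explanation simple enough for a distracted user.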
Read AI Policy News and Blog Posts
On January 27, 2021, The CSAIL Computing and Society CoR and the MIT Internet Policy…
Artificial intelligence (AI) technologies can be found in a wide variety of areas nowadays. AI…
Since its introduction in 1968, the Fair Housing Act has shielded protected classes from discrimination…
Luis Videgaray, former foreign minister of Mexico and MIT alumnus, has become a distinguished fellow at…
In a recent New York Times article, Steve Lohr discusses U.S. and global policy regarding…
Written by Grace Abuhamad. The AI Policy Congress benefitted from a unique international perspective as…
The AI Policy Team
AI Policy Events