The AI Policy group focuses on increasing the trustworthiness of artificial intelligence (AI) and machine learning (ML) systems by enhancing their explainability and accountability. Current research topics include the role of AI in financial decision making, working with stakeholders on AI principles, and policy to increase access to new training data sets.
Autonomous vehicles illustrate why this work matters. If users are to trust the cars they ride in and feel comfortable relinquishing control, autonomous subsystems must be able to explain why they took certain actions and demonstrate accountability for errors. These explanations must be simple enough for users to understand, even when the users are subject to cognitive distractions.
To this end, our machine understanding work explores techniques that enable autonomous systems to explain themselves by generating coherent symbolic representations of the relevant antecedents of significant events in the course of driving.
Read AI Policy News and Blog Posts
- "The Application of AI in Criminal Justice," by Nicolas Rothbacher: How the application of…
- "Demystifying Machine Learning for Regulators," by Natalie Lao: On Wednesday, January 16, Professor Hal…
- In a post for the New York Times, Steve Lohr discussed the MIT AI Policy…
- MIT Students Address Ethical Issues in the Areas of Artificial Intelligence (AI) and Autonomous Machines…
- By Matías Aránguiz V., PhD student in Law, Shanghai Jiaotong University: During the summer, a team…
The AI Policy Team
AI Policy Events