MIT Students Address Ethical Issues in the Areas of Artificial Intelligence (AI) and Autonomous Machines
Imagine that you’ve just submitted your resume for an exciting new job opportunity. Only a little while ago, you could safely assume that a human would be reviewing your application. But today, your resume may face the discerning eye of a machine, which will help decide whether or not you move forward in the hiring process.
This is only one example of the modern trend of using machines to make decisions that were previously made solely by humans. Autonomous machines, medical devices, and many other systems now rely on algorithms to make key decisions that can profoundly affect people's lives.
The ability of AI algorithms and autonomous machines to make good decisions depends, of course, on how they are created — and this is where potential issues pop up.
A relatively small number of people create AI algorithms and autonomous machines and, as with any small group, their shared backgrounds and perspectives can introduce bias. As a result, the use of this technology has raised a number of concerns, including:
- AI programs may rely on stereotypes, such as those used to predict sexual preferences
- AI recruiting tools may be biased against women
- Risk assessment tools used in the criminal justice system show a racial bias
Introducing the MIT AI Ethics Reading Group
To help better understand and address this issue, PhD students Leilani Gilpin, Harini Suresh, and Irene Chen from the MIT Department of Electrical Engineering and Computer Science (EECS) have created a campus-wide AI and ethics reading group.
The goal of the MIT AI Ethics Reading Group is to address the ethical and moral questions affecting AI and autonomous machines, as well as the need for explainable decisions. To achieve this goal, the group aims to draw on the knowledge of students from a variety of disciplines, ranging from political science and philosophy to computer science and engineering. In doing so, the group hopes to minimize bias by extending the conversation beyond the computer science community and involving people with a wide variety of ethical perspectives.
When discussing the new group, Gilpin notes that participants will gain two main benefits from joining. First, group members will become well-versed in AI ethics issues and empowered to speak knowledgeably about these topics. Second, members will learn how to use their voices to contribute to this important and evolving field.
To reach these goals, the AI Ethics Reading Group will meet every two weeks to discuss preselected pieces from the growing body of foundational and recent literature on AI and ethics. The meeting format will involve:
- Introductions and beginning exercises
- Small group conversations about the reading
- Documenting group discussions via a blog post
The MIT AI Ethics Reading Group hopes to address important issues facing society today.
Does this group sound interesting to you? Find out more about the MIT AI Ethics Reading Group and join the mailing list here.