AI Policy Congress – Part 4: Criminal Justice & Fairness

2019-02-11 - 6 minute read

Written by Nicolas Rothbacher.

The Application of AI in Criminal Justice

How the application of AI to the criminal justice system is affecting that system is a question that looms large in the current AI policy discourse. To examine the issue, the MIT Internet Policy Research Institute (IPRI) and Quest for Intelligence (QI) convened a panel at their AI Policy Congress. Moderator Daniel J. Weitzner, director of IPRI, introduced the panel by saying that, “of all the uses of the new analytic and predictive power of AI, applications in criminal justice have attracted a disproportionate share of attention and concern. This is because of the potential impact on citizens’ liberty and (at least in the U.S.) ongoing concern regarding bias and unfairness in the exercise of police power.”

In response, Carol Rose, Executive Director of the American Civil Liberties Union (ACLU) of Massachusetts, concurred, adding that she prefers to refer to the system as a “criminal legal system,” since it has consistently discriminated against minorities and low-income communities. It is in this context, Weitzner said, that a rapidly growing number of AI applications are entering the criminal justice system. From facial recognition and other investigative tools to behavior prediction and risk assessment, these tools have been adopted quickly to help law enforcement address pressing needs.

Addressing the proliferation of AI tools, Rose highlighted the need to keep this context in mind when designing AI systems, making sure they align with our priorities for justice and fairness. Taking a broad view, Rose argued for a focus on a less-incarcerated, healthier society and for involving more stakeholders in the justice system. This design must also be backed by strong science that uses good data and reaches sound conclusions about effectiveness, she added, resulting in algorithms with the right outcomes rather than just good intentions.

Criminal Justice & Fairness panelists James A. Baker, Carol Rose, and Daniel Weitzner. Photo by Caty Fairclough.

Designing Fair and Functional AI Systems

In his commentary, panelist James A. Baker, Lecturer on Law at Harvard Law School and former General Counsel to the Federal Bureau of Investigation (FBI), agreed that we must design carefully and create useful products. Baker sees a lot of potential for AI to create positive change in the justice system, making things more “efficient, effective and ultimately fair.” For example, Baker sees AI helping in all phases of the investigative and prosecutorial process, ensuring that detectives and officers focus on the right people and the right priorities.

But as the process continues, from investigation to arrest and finally prosecution and potentially punishment, Baker says, “the risks of injustice generated by AI systems that are not ethical, that are not explainable, increase” and the need for care and accountability becomes more pressing due to concerns about the rights of citizens. The problem with current technology, as Baker sees it, is that many companies are selling bad tools to law enforcement that make their jobs harder, not easier. His request: “To companies, please don’t sell us junk. To researchers, help us know what is junk.”

This discussion was the main thread of the panel: should we be designing tools to reform the system and create a different paradigm of criminal justice, or is it better to create tools that make the current system more effective? Nowhere was this clearer than in the discussion of risk assessment technology and its role in sentencing and bail decisions. Rose’s position was firm: these algorithms should never be the determining factor in revoking a person’s liberty, because using a tool does not make the decision just. Baker was more hopeful: perhaps if technologists and lawyers can communicate the intricacies of the technology to judges, and the algorithms are made publicly accountable, they might create fairer outcomes for everyone.

Two visions of AI governance were on display here: one positioning AI as an important means to improve a good but dysfunctional society, the other arguing that society itself is what must change, and that AI can be either a means toward that change or another way to perpetuate the status quo. Whichever we choose, the choice will be a result of philosophical, ethical, and political judgment, not technological innovation.

Learn More About the MIT AI Policy Congress

This post is part of a series on the first MIT AI Policy Congress, edited by Grace Abuhamad. Read the rest of the series on the IPRI Blog.
