Stop blaming the algorithm! Some paths forward on the question of algorithmic accountability
When ProPublica reporter Julia Angwin presents her recent work on algorithmic recommendation systems for criminal sentencing, it is with more outrage than surprise that she concludes that the very algorithm she examines is an agent of systemic injustice. Angwin spent months investigating the system implemented in Broward County, FL to help judges assess potential recidivism when determining appropriate sentences, parole, or bail. According to her analysis, when controlling for prior crimes, future recidivism, age, and gender, black defendants were 45 percent more likely to be assigned higher risk scores, and therefore more likely to receive harsher sentences, than white defendants. The statistical robustness of this claim, and competing interpretations of what it means for the criminal justice system, are part of an ongoing debate. Angwin’s visit to the Internet Policy Research Initiative (IPRI) also helped frame the growing and multifaceted research agenda under the rubric of “algorithmic accountability.” Moving beyond a narrative of “blaming the algorithm,” the discussion surfaced five axes of inquiry.
Interpretability: IPRI’s Gerry Sussman and others have argued that autonomous systems should be able to explain themselves. Algorithms and machines, in other words, should be interpretable by design. (This rationale drives Sussman, Gilpin, and Yuan’s work on “self-driving” cars that can articulate, in natural language, the reasoning undergirding actions taken on the road.) Angwin, on the other hand, has argued that research should center instead on outcomes. In her view, the societal outcomes of an algorithmically-derived process, not the inner workings of the algorithmic process itself, should be subject to scrutiny.
Fixity: Algorithmic processes fix fluid socio-legal categories. The “black boxing” of automation makes judgments of recidivism, even when based only on tangentially related re-arrest data, more impervious to scrutiny. STS scholar Stephanie Dick drew attention to the hardening of a legal concept like “violent crime” through this algorithmic process. Historically a mutable notion, violent crime can include or exclude domestic abuse, police hostilities, and animal cruelty. The necessary flattening of these categories into mathematical variables not only stabilizes and entrenches a fixed notion of violent crime, but also hides the complexity inherent in the legal term.
Discretion: Despite efforts to outlaw these so-called recidivism scores, such systems are currently protected under the rationale that they aid but do not limit judicial discretion. At the same time, the algorithmic evaluation fundamentally blocks the empathic individuation on which American due process relies. As IPRI founding director Danny Weitzner argued, it is when stepping before the judge that the system must recognize the individual as an individual. If critical assessments about the defendant are made by an autonomous system and accepted by the court, defendants’ access to individualized treatment and a measure of mercy is lost.
Authority: The debates over algorithmic sentencing reflect a larger potential realignment of authority by asking who is empowered to judge technologically-complex policymaking. In particular, computer scientist David Karger highlighted the tension among different journalistic and academic standards, raising the question of whether the peer-review process can respond adequately in a timely manner and whether non-peer-reviewed analysis can appropriately guide policy.
Algorithm as Object of Study: All of these issues raise the question: what does it mean to take algorithms as objects of study? In her concluding remarks, Angwin declared that, in key ways, the issues at hand have nothing to do with algorithms. Discussions of linear algebra can serve to redirect attention from more meaningful efforts to reform our approach to criminal justice. However, Angwin argued that it is by introducing the mathematical analysis that a broader conversation about democratic values is provoked.
Julia Angwin’s entire presentation, along with responses from the panel (David Karger, Stephanie Dick, and Danny Weitzner), is available for viewing. This event was hosted by the Internet Policy Research Initiative’s Surveillance and Security Speaker Series.