AI Tool Builders and Their Users: What Should We Expect From the Tools and Who Is Responsible When They Fail?

2019-02-19 - 20 minute read

Written by: Daniel J. Weitzner


In the midst of a heated global debate about how to manage the rapid rise of new artificial intelligence (AI) technologies, Google offers this thoughtful piece on “Perspectives on Issues in AI Governance.” What’s important about the essay is not so much that it has specific answers to AI policy questions such as the future of work, security, privacy, or fairness; it doesn’t offer much detail on those questions. Rather, it offers a distinctive approach to the kind of high-level strategies we ought to use in addressing AI policy questions. Teasing out these strategies can help us understand the roles that government, industry, and civil society can play, individually and collaboratively, in moving forward on AI policy. First and foremost, Google urges us to rely on the laws, institutions, and broad values that we have as a society, rather than trying to reinvent the wheel. The challenge, according to Google, will be in the implementation of those broad values. We are urged not to get distracted by trying to write new principles for AI regulation from scratch. This is sensible as far as it goes, but we can learn a lot from how Google positions AI within this framework of existing rules and values.

On the positive side, Google seems prepared to offer technical expertise in addressing the challenges of deploying AI with policy goals in mind. Yet it is not clear how much responsibility Google is prepared to take when things go wrong with the AI services that Google offers to the world. Lurking in the background of the Google paper and the entire AI governance debate is the question of how to allocate responsibility and liability between those who make AI tools and services, and those who use the tools. These are critical questions because, as we saw in our first MIT AI Policy Congress (landing page and discussion summaries, coverage), governments around the world are trying to figure out how to ensure that AI deployments across many parts of society are trustworthy.

Google’s opening approach to AI governance leans heavily on self-regulatory models and the hope that the existing regulatory frameworks will address key problems:

“There are already many sectoral regulations and legal codes that are broad enough to apply to AI, and established judicial processes for resolving disputes. For instance, AI applications relating to healthcare fall within the remit of medical and health regulators, and are bound by existing rules associated with medical devices, research ethics, and the like. When integrated into physical products or services, AI systems are covered by existing rules associated with product liability and negligence. Human rights laws, such as those relating to privacy and equality, can serve as a starting point in addressing disputes. And of course there are a myriad of other general laws relating to copyright, telecommunications, and so on that are technology-neutral in their framing and thus apply to AI applications.” (p.4)

In many ways this reads like a call to repeat the Internet regulatory model that the United States and other OECD countries adopted in the 1990s.

“To date, self- and co-regulatory approaches informed by current laws and perspectives from companies, academia, and associated technical bodies have been largely successful at curbing inopportune AI use. We believe in the vast majority of instances such approaches will continue to suffice, within the constraints provided by existing governance mechanisms (e.g., sector-specific regulatory bodies).” (p.2)

There were many benefits from the mix of regulations that led to the Internet revolution (Weitzner, D.J. (2018). Promoting Economic Prosperity in Cyberspace. Ethics & International Affairs, 32(4), 425-439). However, we now see that there were also unaccounted-for social costs in the areas of privacy and cybersecurity. Already there is evidence that some of those same externalities are reappearing in the deployment of AI technologies, so it is too early to claim that existing structures will answer the mail on all AI/data analytics challenges. We can already see examples of AI technologies being deployed without adequate public oversight.

So before we declare that existing policy structures are “curbing inopportune AI use” (p.2), consider these two well-analyzed and widely discussed examples of the failure of existing law to control harm from AI systems:

  1. COMPAS criminal recidivism prediction system: Thanks to outstanding data-driven, tech-savvy journalism by Julia Angwin and her team at ProPublica (now at The Markup), it became clear that a system widely used by courts to predict criminal recidivism in parole and sentencing decisions has significant racial bias built into its model (Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, “Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks.” ProPublica, May 23, 2016). While there is dispute about just what would be fair in these cases (Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016). Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807), there is no question that serious fairness and discrimination issues remain unresolved. Despite the evidence of harmful discrimination, many courts continue to use this and other similar systems. None of the legal structures currently in place are doing much to curb this particular risk from automated decision-making. It is true that this particular system isn’t even sophisticated enough to have a machine learning model; it’s just a set of hard-coded rules. But if the law can’t control these simpler systems in this area, it’s hard to imagine how it would tackle more complex AI. Some policymakers have called for a halt in using these systems (Guest opinion, Rep. Greg Chaney, Idaho Press: Idaho must eliminate computerized discrimination in its criminal justice system, Feb 6, 2019), but it appears that they are still in widespread use around the country. (Issie Lapowsky, Crime-predicting Algorithms May Not Fare Much Better Than Untrained Humans, Wired.)
  2. Face Recognition: Studies by MIT and Stanford computer science students have revealed dramatic racial and gender bias in several of the most widely used and technically sophisticated face recognition systems. (Buolamwini, J., & Gebru, T. (2018, January) Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency (pp. 77-91).) We now know that these systems, some of which are still on the market, are far less accurate for women and people with darker skin. To the extent that we rely on face recognition for identification purposes associated with any important professional or personal opportunity, the biases shown in these systems do real harm to women and people of color. Recent news reports suggest that the buck has been passed between users (police) and the toolmaker (Amazon). (Bryan Menegus, Defense of Amazon’s Face Recognition Tool Undermined by Its Only Known Police Client, Gizmodo, 1/13/19). Some developers, such as Microsoft and IBM, deserve credit for acting to correct these errors quickly. Google has also taken services offline quickly when similar errors were pointed out. But other companies, including Amazon, have instead blamed these harmful mistakes on user error, calling into question whether we should rely on market pressure as a corrective. (Cyrus Farivar, Amazon: Cops should set confidence level on facial recognition to 99%, Ars Technica, 7/30/2018).

These cases challenge Google’s assertion that existing regulatory structures are sufficient in their current form to address hard AI policy questions. However, Google is still correct that we should not try to reinvent the wheel with brand new policy categories, or worse yet, attempt to create a whole new stand-alone field of regulation specially designed to address AI technology.

Bringing these existing regulatory and institutional structures to the point that they can handle the new governance challenges of AI requires:

  1. More technical expertise in the existing enforcement bodies
  2. Better tools to assess the safety, robustness, and fairness of AI systems
  3. Adequate legal authority to compel responsible behavior by those who design or use AI without adequate attention to society’s priorities
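Item 2 above asks for better assessment tools. The kind of disparity ProPublica documented in the COMPAS case can be surfaced by a very simple group-wise error-rate audit. The sketch below is purely illustrative: the function names are my own, and the data is synthetic, not drawn from any real risk-scoring system.

```python
# Illustrative fairness audit: compare false positive rates across groups.
# A false positive here means someone flagged as high risk (prediction = 1)
# who did not in fact reoffend (outcome = 0) -- the disparity ProPublica
# measured in the COMPAS data. All data below is synthetic.

def false_positive_rate(predictions, outcomes):
    """Share of actual negatives (outcome == 0) that were flagged positive."""
    flagged = [p for p, y in zip(predictions, outcomes) if y == 0]
    if not flagged:
        return 0.0
    return sum(flagged) / len(flagged)

def audit_by_group(records):
    """records: iterable of (group, prediction, outcome) triples.
    Returns a dict mapping each group to its false positive rate."""
    groups = {}
    for group, pred, outcome in records:
        preds, outs = groups.setdefault(group, ([], []))
        preds.append(pred)
        outs.append(outcome)
    return {g: false_positive_rate(p, y) for g, (p, y) in groups.items()}

# Synthetic example: among people who did not reoffend, group "B" is
# flagged as high risk twice as often as group "A".
data = [
    ("A", 1, 0), ("A", 0, 0), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 0), ("B", 1, 0), ("B", 0, 0), ("B", 1, 1),
]
print(audit_by_group(data))  # group A: 1/3, group B: 2/3
```

An audit of this shape requires nothing more than access to predictions and ground-truth outcomes per group, which is exactly why legal authority to compel that access (item 3) matters as much as the tooling itself.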

Google is actually doing a commendable amount of research and development on technical tools that contribute to the overall need to make AI more trustworthy, including work on explainability, interpretability, and fairness assessment. (For example, Doshi-Velez, Finale, and Been Kim. “Towards a rigorous science of interpretable machine learning.” arXiv preprint arXiv:1702.08608 (2017).) Yet its governance framework is largely silent on the broader question of how we can be sure these tools are available across the range of AI application areas and how governments and others will learn to use them. In fact, the paper strikes a somewhat pessimistic note on the current capability to assess the trustworthiness of AI systems. While it is widely accepted that neither the public nor regulators should trust AI systems that are not explainable and subject to interrogation by human users, Google is forthright about the fact that, at least with today’s state of the art:

“…there are technical limits as to what is currently feasible for complex AI systems. With enough time and expertise, it is usually possible to get an indication of how complex systems function, but in practice doing so will seldom be economically viable at scale, and unreasonable requirements may inadvertently block the adoption of life-saving AI systems.” (p.8)

As long as these technical limitations remain, there will be real gaps in our ability to govern AI deployments in a way that earns public trust. While explanation, interpretation, and methods for assessing robustness are lacking, it will be hard to account for externalities associated with the use of these systems.

Despite the trustworthiness and reliability gaps evident in our current governance framework, Google seems to want to reduce its own legal responsibility for any harms that may flow from AI-related faults. Per the last section of the paper:

“Google recommends a cautious approach for governments with respect to liability in AI systems, since the wrong frameworks might place unfair blame, stifle innovation, or even reduce safety. Any changes to the general liability framework should come only after thorough research establishing the failure of the existing contract, tort, and other laws.” (28)

With this, Google seems to want us to treat AI technology as if it were a wholly new species of technology, as if the software and data that make up neural nets and other AI systems have nothing to do with the software, hardware, and networks that surround us today. Google’s case for limited regulation and liability limits could apply to more or less any new technology. On the contrary, the AI systems that are the subject of these governance discussions have many similarities, in technical, business, and social terms, to the Internet, software, and service infrastructure we have today: they are tools. While innovative in many respects, AI will be offered as software, as platforms, or as part of vertically integrated services. Like today’s Internet-based software, platforms, and services, users of AI tools will depend on them for a wide range of business, personal, and public sector functions. But, also like today’s Internet tools, users will have relatively little knowledge of or ability to shape the fundamental features of the tools. And like today, the platform providers — Google, Amazon, Facebook, and others — will be in a position to shape the way that new AI tools are delivered.

By contrast, the liability limits imposed by the US Congress (Section 230), the European Commission (E-Commerce Directive), and other legislatures to protect Internet platforms in the 1990s were based on some very particular technical limitations of those platforms at the time and on the nascent nature of the Internet marketplace. What’s more, there was a real risk to online free speech: if platforms were held responsible for the speech of their third-party users, those platforms would have strong incentives to restrict that speech. The corresponding risk is that AI tool providers might discourage or prevent uses that are not verifiably safe and trustworthy. Perhaps that would actually be a good thing.

Before we preemptively declare that AI toolmakers need protection from liability, we should consider experience from today’s Internet environment:

  • There is clear evidence that early liability limits in software licensing led to radical underinvestment in security during the PC era. It was not until major, systemic vulnerabilities were identified in platforms such as Microsoft Windows and Google’s services that those companies launched major security engineering efforts. Those efforts produced dramatic improvements in the security of those products and services. Yet today many other software and platform offerings suffer from major security weaknesses, and as the industry as a whole struggles to adopt effective security practices, societies around the world suffer substantial monetary and non-monetary losses from those weaknesses.
  • Following a number of serious privacy breaches, including but not limited to Facebook and Cambridge Analytica, policymakers around the world recognize that we have been late in taking privacy risks seriously in national laws.

Careful, light-touch regulation of early Internet platforms produced innovation and great economic and social growth, but also real unaddressed externalities. So where does that leave us as to the responsibilities of AI tool makers and users? The careful approach outlined in much of the Google paper, relying on existing sectoral regulation rather than new AI-specific laws, is a good start toward trustworthy and innovative deployment of AI tools. But it must include taking a clear-eyed view of the risks and externalities that will come along with some AI applications, and making sure that the rules, whether existing or new, will address them.

Going forward, let’s avoid the mistakes we made in the software and Internet marketplaces, where we emphasized rapid deployment to the exclusion of incentives for responsible treatment of the social risks inevitable in any complex new system. To avoid those mistakes, we should start by assuring that tool builders have the incentive to build mechanisms that provide clear, empirical measures of the trustworthiness, safety, and reliability of the tools and systems they provide. These technical capabilities are an essential prerequisite to the thoughtful application of the existing safety and liability rules that Google cites. Accountability and transparent operation of AI tools are a must for any widespread deployment of AI systems in which there is risk of harm.

Google and many, but not all, other AI tool builders have shown early commitment to research in the areas of interpretability and explainability. The path forward to broad adoption and acceptance of AI tools is to deepen that research and figure out how these tools can be deployed in the service of making risk more transparent. This will give industry, civil society, and governments a clear view of what risks may arise, enabling thoughtful decisions about where responsibility for harm lies. Getting those decisions right, and having them enforced in law, are essential to building public trust in AI applications.