Avoiding a One-Size-Fits-All Approach to Regulating Artificial Intelligence

2020-01-13 - 3 minutes read

Artificial intelligence (AI) technologies can be found in a wide variety of areas nowadays. AI is used in the criminal justice system and in autonomous vehicles, and it will soon be used in the film industry to help decide which movies get commissioned. And these are just a few examples of the multitude of uses for AI technology. So, with such a wide variety of applications, shouldn't the regulation of this tool also be nuanced and vary depending on the use?

In a recent Washington Post op-ed, R. David Edelman states that “[i]f we’re going to govern AI, we need to recognize it for what it is: a tool, with innumerable uses. And that means we need to govern it for the ways people actually use it, and not as a phenomenon in and of itself.”


The need to regulate AI systems is obvious. We know that AI systems can be biased or otherwise flawed, causing harm when used. For instance, biased AI could cause discrimination in the housing industry or in the criminal justice system, where "untrustworthy AI might lead to wrongful arrests based on bad facial recognition." But the question of how to regulate AI is less clear.

One option would be taking an omnibus approach, but this could be risky. In his op-ed, Edelman mentions the "right to explanation" created by European data privacy laws. This provision requires that AI systems provide human-readable justifications for their decisions. While this can be reasonable in some situations, it is not appropriate in every case. Here, Edelman brings up the example of a system that can diagnose cancer using complex patterns in medical records, but may not be able to explain those diagnoses. "If it were effective and safe, however, shouldn't it be lawful, even if it isn't interpretable?" Edelman asks.

To help address this issue, Edelman states that "[a]s we move ahead, the federal government urgently needs to work on crafting substantive, tailored AI policies that look at the ways these technologies are used in public contexts as well as private ones." The result should be context-specific analyses of AI instead of a "one-size-fits-all approach".

Read the full article in the Washington Post.
