Written by Grace Abuhamad.
The Current State of Autonomous Vehicles
“Level 5 autonomy or flying cars — which comes first?” This was the question that MIT Professor and CSAIL Director Daniela Rus, herself an expert in unmanned aerial systems, posed to her panel, featuring aviation expert John-Paul Clarke, VP at United Technologies, and MIT Course 16 Professor Brian Williams, whose work focuses on autonomous systems and their applications in space. The panel was split. Williams suggested that, given the process of technology adoption, by the time we adopt AI, “people will have moved on to something else.”
Since there is often a lack of consensus about “how advanced” autonomous systems are, despite near-daily press coverage, the panel started with a status update. Rus stated that Level 4 autonomy is possible today, meaning a vehicle can perform all driving functions under certain conditions (for a description of automation levels, see below). Panelists agreed, but noted that there are trade-offs in how well these systems can understand their environment. Clarke likened these trade-offs to a parent’s thought process in deciding whether to let their children play outside: how good are they at knowing if they are in trouble (self-estimation), and how good are they at getting out of trouble (option space)? Even with these considerations in mind, Clarke estimated that autonomous vehicles (AVs) would be deployable in urban areas within 5-10 years.
Addressing the Safety of Autonomous Vehicles
Much of the safe deployment of autonomous systems depends on testing. Williams noted that the rules differ for air and sea, where testing processes are longer than on land. With this, the panel pivoted to a discussion of public trust and the regulation of safety. While the general public might have started to embrace AVs, with multiple cities authorizing pilot projects and testing, a fatal crash in Tempe, AZ last March reinforced public safety concerns. How should we think about the right thresholds of safety? How safe is “safe enough” for testing and ultimate introduction to the public? Why are there different thresholds of “acceptable” safety for aviation than for cars, for example?
Part of the issue is that drivers’ perception of their ability exceeds their actual ability. Clarke noted that adoption may be easier in aviation because pilots have trained with semi-autonomous cockpits and generally have less of a gap (than drivers) in estimating their piloting abilities. The challenge in Clarke’s view is that “we need to deal with issues around machines being in control.” Humans are used to being in a supervisory role when working with machines, but there may come a time when that relationship is reversed because systems will be coordinating among themselves. In the interim, Williams suggested that the AV experience be more like driving with a partner who helps you along the way and could, for example, reschedule a meeting if you were running late.
In response to public safety concerns, Rus suggested that perhaps autonomous systems should be subject to a kind of driver’s road test. Clarke encouraged policymakers and engineers to focus more on estimating reliability and confidence than on demonstrating capability. For example, since voice commands in aviation have not always been reliable, there is a set of procedures and structures to fill gaps in the underlying technology (in this case, commands need to be repeated until they are understood). These procedures are essential for public trust because, unlike a human co-pilot, Clarke said, “you can’t look into the eyes of your AI co-pilot and judge their confidence.”
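The repeat-until-understood procedure Clarke describes can be sketched as a simple confirmation loop. This is purely illustrative: the function names, the confidence score, and the 0.9 threshold are assumptions invented for the sketch, not part of any real avionics or speech-recognition system.

```python
def execute_voice_command(recognize, ask_to_repeat, max_attempts=3):
    """Keep requesting a spoken command until it is understood.

    `recognize` is a hypothetical speech-recognition call that returns a
    (command, confidence) pair; `ask_to_repeat` prompts the speaker to say
    the command again. Both are stand-ins for a real pipeline.
    """
    for _ in range(max_attempts):
        command, confidence = recognize()
        if confidence >= 0.9:  # illustrative threshold, not a real standard
            return command
        ask_to_repeat()  # the gap-filling procedure: repeat until understood
    return None  # after repeated failures, fall back to manual procedures


# Example: the first utterance is garbled, the second is understood.
attempts = iter([("climb to FL200", 0.4), ("climb to FL200", 0.95)])
result = execute_voice_command(lambda: next(attempts), lambda: None)
print(result)
```

The point of the structure is that the procedure, not the recognizer’s raw accuracy, is what makes the system dependable: a bounded retry loop with a manual fallback lets an imperfect component be used safely.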
Learn More About the MIT AI Policy Congress
- AI Policy Congress – Part 1 Governance Challenges
- AI Policy Congress – Part 2 Democratizing AI through Transparency and Education
- AI Policy Congress – Part 3 Healthcare
- AI Policy Congress – Part 4 Criminal Justice & Fairness
- AI Policy Congress – Part 6 Manufacturing & Labor
- AI Policy Congress – Part 7 An International Perspective