Student Reflections (China 2017) – Leilani Gilpin (MIT)

2018-03-27 - 8 minute read


By: Leilani Gilpin

I didn’t know what to expect when I arrived in China for the MIT-SJTU joint course. I had never been to Asia, and although I had taken the equivalent semester-long courses at MIT, I was excited to learn something new. In particular, I was curious to hear and experience the Chinese perspective on Internet policy. I was especially excited to work with and learn from the Chinese students, who had diverse academic backgrounds (law, engineering, business), much like the MIT students. While I am trained as an engineer, with a background in mathematics and computer science, I have recently become interested in policy, both to broaden my research reach and to increase its impact. My specific research topic is how to explain the behavior of autonomous machines, especially autonomous vehicles. But I have also been interested in asking broader questions: within my own research area of artificial intelligence (AI), how do we regulate it? And how would that regulation apply in other countries? For months, I had been searching for a law professional or student with similar interests to serve as the legal expert in this area of work. This class encouraged me to ask big-impact questions and launched a future collaboration in law and AI.

A good example of a plausible set of AI-and-law questions arose in the Thursday moot court. The case was complicated, with several defendants: Didi, the Chinese ride-sharing application; the Amazon Echo; and the Nest thermostat. I was on the Nest team, which had a tough set of facts with which to defend itself. In summary, a budding entrepreneur, Jeannell, was getting ready for a funding pitch. She requested a ride using Didi before jumping into the shower, but the application did not work. When she got out of the shower, she realized that the application had not located a car and that she would be late for her meeting. After investigating the other connected devices in her home, she discovered that her Didi app depended on a proximity-detection system built into her Amazon Echo. Her Echo had shut down because the attic temperature had exceeded its 92°F operating threshold; the attic had overheated because the Nest had shut down due to a known security vulnerability. Tough times for the Nest team.

We spent two days preparing and put together a strong argument. We ended up sharing the fault for the case, although we felt strongly about our defense, in which we tried to assign blame to the user for using a system with a known security vulnerability. Although law and liability are handled quite differently in China, the Chinese students picked up on American legal practice quickly. We presented facts and evidence and attempted to persuade a set of “judges” (faculty and students) that the Nest’s security vulnerability was not responsible for a ride-sharing failure. The connection may appear distant, but with connected devices, assigning fault can be ambiguous. With so many connected parts, algorithms that think for themselves, and security vulnerabilities, how do we even begin to sift through the information to assign fault?

While we were preparing for the moot court, I explained to my teammates that this is the question that drives my research: how do we make intelligent autonomous machines and artificial intelligence accountable for their mistakes? And how do we begin to think about liability? Say an autonomous vehicle wrongly classifies a neighboring car as fog that can be driven through and gets into an accident. Who is at fault? The artificial intelligence algorithm that misclassified the image? The video camera, for producing a “blurry” image? The software developers who worked on the algorithm? Or the car manufacturer?

After I returned home to MIT, I continued this conversation with M., a law Ph.D. student at SJTU who was part of my team for the Thursday moot court. Combining my existing artificial intelligence research with his legal knowledge, we started with a large literature search spanning artificial intelligence, robots, law, and ethics. We now have a shared folder of hundreds of papers, and we have looked for conferences in technology and ethics around the world. The only difficulty has been choosing what to focus on. We have considered topics like explanatory artificial intelligence for liability, which is a direct application of my work. We have also considered studying the effects of robot traders on the stock market and the economy, something M. knows well but which is completely new to me. But the main idea we are pursuing is what happens locally and abroad (the global impact) if robots learn to lie, specifically as applied to the stock market. At first, we thought it would be interesting to look at what would happen if medical robots could lie to keep patient morale high. However, that is a controversial subject, and we think a more timely and appropriate application is the stock market, where bluffing is all but essential to success.

Having been back at MIT for some time now, I miss Shanghai, but I am happy that the research conversation continues. I’m learning a lot: not just about the differences between technology policy in China and in the US, but also about economics, law, and how collaboration is fostered. It’s exciting to see that my small research project could expand and have a much bigger impact. It has been one of my most fruitful and productive collaborations yet, and I’m excited to see the initial outcome, in the form of a white paper, in a few months.

For other Student Reflections, or to see the Course Syllabus and Photos from the trip, see Foundations of Internet Policy: A Comparative Perspective.
