By: Matías Aránguiz V.
PhD student in Law, Shanghai Jiaotong University
During the summer, a team of professors and students from the MIT Internet Policy Research Initiative (IPRI) came to Shanghai Jiaotong University to give a one-week course. The classes were a mix of engineering, law, and policy. Students from the engineering and law schools sat together and debated the future of technology, Artificial Intelligence, and the internet. After the course concluded, students from both universities continued asking questions: what will the world be like in the future, with robots in the equation? How will humans and robots interact?
This reminds me of my childhood, when my father encouraged me to read Isaac Asimov. I remember when my father would pick me up from school, and the two of us would drive for hours while discussing Asimov’s future worlds where robots would interact with humans. The first books I read were the Foundation Trilogy and then the Robot series. At that time, all these stories sounded so attractive and distant. The most interesting part for me was the idea of a robot having dreams. How is it possible that a robot can lose control of its internal system and undergo a process that cannot be explained? Yet this idea came up in the MIT class as a realistic scenario.
One of the most interesting activities in the course was the moot court. In that exercise, we had the opportunity to debate and think through similar scenarios involving artificial intelligence and internet-connected devices. We split into small groups to discuss the ethics and accountability of A.I. and self-driving cars. In the activity, my group represented a company that had failed to provide proper, standard security updates. This vulnerability produced a cascade of malfunctioning devices that exposed the company to possible civil liability. During the preparation of that case, I had the chance to discuss many topics in detail with my group. We talked about the ramifications of artificial intelligence, both locally and abroad, in autonomous vehicles, robots, and high-performance computing, and how liability can be standardized in areas of constant change. We arrived at several solutions, possible only through the interaction of different disciplines such as policy, engineering, law, and even ethics.
This is an ongoing topic of conversation. Even after the activity finished, I stuck around with some of my classmates, who continued talking about these issues, in particular how we can regulate the future interaction between robots and humans. As humans, we love to control new phenomena, especially those we do not understand. This gives us a sense of security, or mental tranquility. Regulation gives us satisfaction and a pretense of power, and this is sorely needed. Public opinion is full of insecurity toward Artificial Intelligence, based on the idea that we face a new “beast” that is faster and smarter than us. Regulation seems like the first and only tool we have before the “beast” starts to walk among us.
And this is a glorified, public topic: Elon Musk is concerned. He has called for regulation of what he sees as a “fundamental risk to human civilization”. But regulating A.I. is difficult. We do not know how to predict the trajectory of robots in the coming years. Innovation is dispersed: different labs around the world are developing new technologies, and this dispersion in the innovative process makes it impossible to know what form robots will take in a couple of years; the variation is enormous. It is a hard task to regulate what we do not yet understand. There is also an imperative need for all countries to coordinate and adopt the same regulation. If robots are a risk to civilization, we need global cooperation; otherwise the effort is meaningless.
With the help of AI, big data, and machine learning, our decisions will be clearer and more correct: less controlled by dogma and prejudice, and more technical. We will be able to consider more consequences, not only the easily foreseen ones but also those that are not immediately evident. There is much work underway to make robots explain themselves, and part of that work is for machines to explain humanity. The issue here is not a competition between two forces over control or the right way to create a society; the issue is collaboration. The fear we have about our past decisions in the world can be addressed by using robots to help us reduce our mistakes. Our understanding of robots must be a tool for humanity.
This is not a new concern. Bertrand Russell, campaigning in 1945 against the civilizational threat of his time, the atomic bomb, said “[m]ankind are faced with a clear-cut alternative: either we shall all perish, or we shall have to acquire some slight degree of common sense.” Russell’s words can be applied today to the threat of robots; the logic is the same. In the near future, we will not make decisions entirely by ourselves but will share the process with robots. The question is how we can increase our common sense and how we can teach it to our future co-decision-makers. I hope the future collaboration between SJTU and MIT can start to answer these questions.
More information on the course and other student reflections are available here.