Through many sci-fi movies, we’ve seen a number of fantasies surrounding artificial intelligence, or AI; Wall-E and Ava from Ex Machina are just a few among thousands of depictions. While machines like these haven’t yet been created, AI has improved significantly and is expanding rapidly as technology improves. Ray Kurzweil’s Law of Accelerating Returns states that technological growth is exponential: because each advance supplies the tools for the next, society achieves comparable advances at ever faster rates. Because growth is exponential, Kurzweil writes, “we won't experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today's rate).” While some may dispute the Law of Accelerating Returns, AI is clearly making significant gains. Amazon’s Alexa is becoming increasingly popular, as are chatbots that generate predictive responses to human language inputs. IBM’s Deep Blue defeated reigning world chess champion Garry Kasparov, IBM’s Watson defeated Jeopardy! champions, and Google’s DeepMind won 46 Atari games. But what laws govern these new AI systems?
In his fiction, Isaac Asimov formulated the “Three Laws of Robotics.” The First Law states that a robot may not injure a human being or, through inaction, allow a human being to come to harm. The Second Law requires that a robot obey orders given to it by human beings, except where such orders would conflict with the First Law. The Third Law states that a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. While these laws are theoretical, they remain reasonable when considered in practice, and they raise a number of questions about the guidelines surrounding ethical AI. How is our current society incorporating these theoretical guidelines? With increasingly automated manufacturing technology, who would be liable if an accident occurred – the developer or the user?
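The Three Laws form a strict priority ordering, which can be illustrated as a small rule-evaluation sketch. Everything here is hypothetical and purely illustrative: the `Action` fields are invented stand-ins for judgments a real system could not easily make.

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical fields describing a proposed robot action.
    harms_human: bool       # would directly injure a human
    allows_harm: bool       # inaction that lets a human come to harm
    ordered_by_human: bool  # was this action commanded by a human?
    endangers_self: bool    # would damage or destroy the robot

def permitted(action: Action) -> bool:
    """Evaluate an action against Asimov's Three Laws in priority order."""
    # First Law: no injuring a human, and no inaction that allows harm.
    if action.harms_human or action.allows_harm:
        return False
    # Second Law: obey human orders (any conflict with the First Law
    # was already ruled out by the check above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.endangers_self

# A harmless human order is permitted even at the robot's own expense:
print(permitted(Action(False, False, True, True)))  # True
```

The sketch makes the hierarchy explicit: each lower law is evaluated only after the higher laws have been satisfied, which is exactly the structure Asimov's "except where such orders would conflict" clauses describe.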
Jones v. W+M Automation, Inc., a 2007 New York Supreme Court case, held that developers are liable only when they were negligent or could foresee harm and failed to act. As long as developers follow regulations, they will not be found liable if accidents occur. What would happen if AI kept proliferating with no responsibility attached? Much modern AI is built using reinforcement learning, in which a machine learns by trial and error, favoring actions that produced good outcomes in the past. If machines were not corrected when accidents occurred, then through reinforcement learning they could continue to make similar errors without any liability attaching. Developers would certainly adjust a machine to avoid repeating the same accident, if only to avoid negative publicity, but they would not be found liable for future accidents that were similar but not identical. This court ruling seems to contradict Asimov’s First Law.
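The reinforcement-learning point can be made concrete with a minimal sketch: a two-action learning loop with invented reward values. An agent that is never penalized for an "accident" keeps preferring the action that caused it; only when the accident is reflected as negative reward does the behavior change.

```python
import random

def train(rewards, episodes=1000, epsilon=0.1, alpha=0.1, seed=0):
    """Minimal reinforcement-learning loop: estimate the value of each of
    two actions and mostly repeat whichever has worked best so far."""
    rng = random.Random(seed)
    q = [0.0, 0.0]  # estimated value of actions 0 and 1
    for _ in range(episodes):
        # Epsilon-greedy: occasionally explore, otherwise exploit.
        if rng.random() < epsilon:
            a = rng.randrange(2)
        else:
            a = max(range(2), key=q.__getitem__)
        # Update the estimate toward the observed reward.
        q[a] += alpha * (rewards[a] - q[a])
    return q

# Hypothetical setup: action 1 is faster but occasionally causes an
# "accident". If the accident carries no penalty, the agent still ends
# up preferring action 1:
q_no_penalty = train(rewards=[1.0, 1.5])
# Only when the accident shows up as negative reward does the agent
# learn to avoid action 1:
q_with_penalty = train(rewards=[1.0, -5.0])
```

The sketch shows why "correction" matters in the essay's sense: nothing in the learning rule distinguishes an accident from a success unless the reward signal encodes it.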
It’s not likely that AI itself will be held liable for injury. Developers don’t intend to cause harm with AI (if they did, that would be a separate crime to consider), and they likely can’t fully foresee its harms. They’ll do their best to consider worst-case scenarios, but it is unlikely that they can account for every negative one. The most likely legal standards AI will be held to are ethical guidelines mandated through national or international regulation. These standards don’t yet exist in depth, both because AI is still developing rapidly and because many developers focus more on advancing the technology than on creating laws to protect society from potential harm. Because developers are pushing for faster, better AI, conforming to ethical and legal standards may slow their progress, so any standards that are created will likely not be very restrictive, for the sake of technological progress. A universal ethical code will probably be developed eventually, but such a code is not imminent. Quality standards will likely also become common, since AI must be trained before its reinforcement learning can be relied on. These standards will likely require a certain level of training, or certain tests that a bot must pass, before it can be placed on the market.
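In its simplest form, a quality standard of that kind might look like a pre-deployment gate: run the trained bot through a suite of test scenarios and release it only if it meets a required pass rate. The function name, scenario labels, and 95% threshold below are all invented for illustration.

```python
def passes_certification(bot, scenarios, required_pass_rate=0.95):
    """Hypothetical pre-deployment check: the bot must handle a required
    fraction of test scenarios without an unsafe outcome."""
    passed = sum(1 for scenario in scenarios if bot(scenario) == "safe")
    return passed / len(scenarios) >= required_pass_rate

# A toy bot that handles everything except one known failure case:
scenarios = [f"scenario-{i}" for i in range(100)]
toy_bot = lambda s: "unsafe" if s == "scenario-13" else "safe"
print(passes_certification(toy_bot, scenarios))  # 99/100 = 0.99, so True
```

Real certification regimes would of course be far richer than a single pass rate, but the gate structure (measure, compare against a mandated threshold, then release) is the shape such standards tend to take.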
Governments are beginning to recognize the need for these AI standards; after all, standards must be mandated and enforced in order to be applied in the technology industry during development. Britain offers one example of a government working on this issue. The British Parliament, conscious of the need for regulation, has issued a formal statement to that end: the House of Commons Science and Technology Committee states that, “while it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begin now.” While there is no action yet, this is one step closer to mandated standards and regulations in the spirit of Isaac Asimov’s “Three Laws of Robotics.”