By Daniel Healow
While the words “robot” and “battery” commonly appear in the same sentence, they usually refer to electricity, not assault. Unfortunately, the latter sense is becoming more frequent due to an uptick in malicious human actions against intelligent robots undergoing real-world testing. As the number of independently-operating robots in humans’ daily lives has multiplied, so have instances of violence against them.
This proliferation of independently-operating robots has spurred rapid growth in literature extrapolating from Asimov’s “Three Laws of Robotics,” which stress preventing robots from harming humans. The study of human crimes committed against robots, however, has received significantly less attention. Despite this discrepancy, it is important that the law governing human and robot interactions evolve quickly to keep pace with human reliance on these machines.
Recent public deployments of autonomous robots have offered us an initial glimpse into the benefits, as well as the hazards, of our future lives among service robots. In Mountain View, CA, an intoxicated 41-year-old man was arrested after tipping over a 300-pound Knightscope K5 security robot patrolling its headquarters’ parking lot. Fortunately, the robot quickly alerted the Knightscope employees inside the nearby building, who tracked down the suspect and detained him until police arrived. The robot is now back on patrol after suffering only a few scratches.
Unfortunately, the Knightscope robot is hardly the first to receive brutal treatment from humans. In 2015, a drunk 60-year-old man was arrested for kicking one of Softbank Co.’s “Pepper” robots at a store in Japan. Just weeks prior, “Hitchbot,” the hitchhiking robot, was destroyed only two weeks into an experimental journey around the United States and Canada.
Since robots lack “personhood,” they do not yet (and may never) have a cause of action for a legal “battery” claim. Additionally, current laws aimed at preventing this kind of malicious damage are ill-equipped to provide adequate protection, given robots’ growing importance in our lives. As in lawsuits concerning the destruction of human-controlled drones, existing arrests for harming robots generally lead to charges of “criminal mischief” or “wanton endangerment.”
UW Law’s Professor Ryan Calo recognizes intelligent robots as a seemingly new type of legal subject “halfway” between a person and an object, justifying legal treatment that shifts with the circumstances. Recent research also suggests humans tend to “anthropomorphize” robots in certain contexts, further supporting the case for new laws protecting robots that match our emotional and intuitive expectations.
Though many locals who were interviewed in Mountain View found the robot-tipping to be comical, one does not need to look very far into the future to see the ramifications these human-on-robot crimes could have on society, especially given the potential uses of robots in essential sectors such as healthcare. “Companion robots” are a new technology segment seeking to capitalize on an aging population’s need for in-home care amidst a predicted human care worker shortage. Autonomous drones are also already being used to deliver medical supplies and blood tests.
As robots continue to take on these types of life-critical responsibilities, a “criminal mischief” charge does not appear to reflect the severity that irresponsible human actions could have on the robot itself, as well as other humans being serviced by the machine.
In an email exchange with the Mountain View Police Department, I learned that despite its semi-autonomous capabilities, the Knightscope robot is still likely to fall closer to the “property” end of the spectrum than the “person” end, based on its current capabilities. The robot itself cannot yet report a crime or interact with police; it is limited to contacting either the manager of the equipment or Knightscope headquarters, and a human must then report the crime for law enforcement to be notified.
In another autonomous robot application, Starship Technologies has partnered with food delivery app DoorDash to begin delivering food to customers within a two-mile radius of test restaurants. Similar to the Knightscope robot, the Starship robot is equipped with cameras and alarms to discourage “robo-nappers” or other humans attempting to inflict harm. Additionally, Starship’s bots have been accompanied by human handlers in the early deployment phase to discourage any human harassment.
Thus, while initial robot deployments are still fairly dependent on humans, current reactions to existing implementations demonstrate the necessity of new policy to further discourage destruction of robots.
However, determining the criminal sanctions to be imposed on humans is only one of the many reasons why a legal categorization of robots is important. Their categorization would have real implications for crime prevention as well. For instance, Mountain View, like many municipalities, requires local residents to pay to register their alarm systems and charges fines based on false alarms. So, if a robot like Knightscope reports a crime in progress, is that an alarm or a “free” report of a crime?
My response from the police department indicates this is a hurdle they have yet to cross, since Knightscope relies on a human supervisor and has no direct communication capabilities with law enforcement. However, as the autonomy of service robots increases, the person-versus-property policy discussion will certainly affect underlying business and deployment costs, as well as the prioritization of calls for help made by robots. Municipalities would be well-served to begin considering how service robots will impact their own operations and how community values should extend to autonomous robot deployments.