A New York City Councilmember has proposed the first bill that would prohibit weaponizing police robots

By: Zoe Wood

In February of this year, I wrote about how lethal autonomous weapons systems—or Killer Robots, depending upon who you ask—are under-regulated both nationally and internationally. In March, New York City councilmember Ben Kallos proposed what is likely the nation’s first law regulating law enforcement use of robots armed with weapons.

Proposed Int. No. 2240-A is just fifteen lines long, and it bans two types of conduct. First, the New York Police Department (NYPD) “shall not authorize the use, attempted use or threatened use of a robot armed with any weapon.” Second, the NYPD “shall not authorize the use, attempted use or threatened use of a robot in any manner that is substantially likely to cause death or serious physical injury, regardless of whether or not the robot is armed with a weapon.” A weapon is “a device designed to inflict death or serious physical injury,” and a robot is “an artificial object or system that senses, processes and acts, to at least some degree, and is operated either autonomously by computers or by an individual remotely.”

Councilmember Kallos proposed Int. 2240 in response to a late-February 2021 incident during which the NYPD brought an (unarmed) robot to an active crime scene in the Bronx. Kallos reportedly watched footage of this event “in horror.” “Robots can save police lives, and that’s a good thing,” says Kallos, “[b]ut we also need to be careful it doesn’t make a police force more violent.”

The robot in question strongly resembles a dog despite a complete lack of fur, ears, or anything resembling a face, and is generically called “Spot” by its manufacturer, Boston Dynamics. Spot is approximately two-and-three-quarters feet tall and three-and-a-half feet long. It weighs about 71.5 pounds and can last up to 90 minutes on one battery charge. Spot can also travel at about three miles per hour. Why exactly would the Spot robot be useful at an active crime scene? The robot’s primary utility is surveillance. According to Boston Dynamics, “Spot is an agile mobile robot that navigates terrain with unprecedented mobility, allowing you to automate routine inspection tasks and data capture safely, accurately, and frequently.” At an active crime scene like the one in late February in the Bronx, Spot was likely brought in by the NYPD to surveil the residence with cameras before police officers entered the premises. Although Spot can operate autonomously, the NYPD has reportedly only used its Spot robot with a remote control.

A prominent robotics company owned by Hyundai Motor Group, Boston Dynamics is perhaps best known for its dancing robots. However, the advertising on its website focuses on solutions for construction, industrial inspection, and warehouse work. Notably, policing is not actively advertised as an application for Spot. In fact, policing is mentioned just twice, embedded within two FAQs. The same goes for military use of Spot, which is not advertised and appears just once in an FAQ.

Indeed, Boston Dynamics is opposed to weaponizing its robots. CEO Robert Playter has explained that “[a]ll of our buyers, without exception, must agree that Spot will not be used as a weapon or configured to hold a weapon.” This language is included in the Software License section of the Spot robot’s Terms and Conditions of Sale, which reads in relevant part: “The License will automatically and immediately terminate, and we may disable some or all Equipment functionality, upon (a) intentional use of the Equipment to harm or intimidate any person or animal, as a weapon or to enable any weapon,” or “(b) use or attempted use of the Equipment for any illegal or ultra-hazardous purpose.” Playter explains: “[a]s an industry, we think that robots will achieve long-term commercial viability only if people see robots as helpful, beneficial tools without worrying if they’re going to cause harm.”

There are several glaring reasons why this paragraph in Spot’s Terms and Conditions of Sale is insufficient to ensure that Spot is not weaponized. First and foremost, while Terms and Conditions of Sale are binding, they lack the staying power of a law or regulation and can be changed from contract to contract. Second, there are many different kinds of robots currently in use by law enforcement, and they should all be regulated uniformly. Non-uniform regulation could produce a complex legal landscape that is difficult to enforce, riddled with loopholes, and favors certain technologies over others. Third, and most specifically, the Boston Dynamics Terms and Conditions of Sale do not define key terms such as “intimidate” and “ultra-hazardous.” Finally, the commercial viability of robots, as cited by Boston Dynamics, should not be the primary motivation behind regulations that ban killer robots. What makes a product commercially viable is apt to change, and in any case, human-centric policy that values people over property should form the basis of regulation banning killer robots.

This is where Councilmember Kallos’s bill excels: it would have staying power, would apply to all police robots uniformly, and appears to be motivated by human-centric policy considerations. And although the bill may seem premature, there have been several instances of law enforcement robots using force against people, at least once resulting in death. Most famously, in 2016, the Dallas Police Department used a military robot, a Remotec Andros, against a sniper who had killed two police officers and gravely injured three more. Faced with an hours-long standoff, Dallas police officers placed a pound of C-4 explosive on the robot’s extension arm and maneuvered it by remote control into the building where the sniper hid. When the C-4 exploded, it killed the sniper and ended the standoff. Less famously, police in Dixmont, Maine, used the same type of robot with the same type of explosive when they were called to the home of a man having a mental health crisis. When officers could not get the man to come out of the house, and after the man began shooting out of his window at the armored police vehicle, police maneuvered a robot armed with an explosive device toward the man’s home. When the device detonated, it caused the man’s house to collapse but, remarkably, killed neither the man nor the dog and kitten living with him.

Still, Councilmember Kallos’s bill is vulnerable to some criticism. First, Int. No. 2240 does not appear to take into account the years of organizing work done by the Campaign to Stop Killer Robots. The Campaign advocates for maintaining meaningful human control over the use of force, and Int. No. 2240 stops just short of this. While it prohibits “use of a robot in any manner that is substantially likely to cause death or serious physical injury,” it does not ban outright the use of force by remote-controlled or autonomous robots. The Campaign has published policy explaining why such a ban is necessary, and Int. No. 2240 would benefit from being informed by it.

Finally, Int. No. 2240 does not contemplate concerns about police robots beyond weaponization, lethal or otherwise. Although a bill that bans weaponization need not necessarily address every such concern, discussion about regulating police robots should keep in mind their myriad uses, including their capacity for surveillance.

The Other Type of Robot Battery

By Daniel Healow

While the words “robot” and “battery” are commonly used in the same sentence, the pairing usually refers to electricity, not assault. Unfortunately, the latter sense is cropping up more often due to an uptick in malicious human actions against intelligent robots undergoing real-world testing. As independently operating robots have multiplied in humans’ daily lives, so have instances of violence against them. Continue reading

Man or Machine? EU Considering “Rights for Robots”

By Grady Hepworth

Isaac Asimov’s 1942 short story “Runaround” is credited with creating the famous “Three Laws of Robotics.” Asimov’s Laws, although fictional (and most recently featured in the 2004 motion picture I, Robot), require robots to i) not hurt humans, ii) obey humans, and iii) protect themselves only when doing so wouldn’t conflict with the first two rules. However, the European Union (“EU”) made headlines this month when it took steps toward making Asimov’s Laws a reality.
Continue reading

Sorry, that isn’t actually Scarlett Johansson.


By Beth St. Clair

What would you do if someone built a robot version of you?

It happened to Scarlett Johansson. A graphic designer from Hong Kong spent over a year, and $50,000, building a robot in her likeness. While the robot’s abilities are limited, it can respond to compliments and questions, laugh, bow, and blink its eyes. Most notable, however, is that the designer used 3D-printing technology and silicone to make the robot look exactly like Johansson.

For some, the coquettish machine represents an objectification of women, “an utterly disappointing reflection of the way women are portrayed in society.” For others, it is an extreme example of fandom.

But the programming and machinery needed to make very advanced robots are now so widely available that a person can create one in her own house, and so we will see more celeb-bots in the future. Those robots, especially female celebrity-inspired robots equipped with realistic features and the ability to mimic life-like movement, will continue to be controversial.

Continue reading

Securing Dr. Robot

By Brooks Lindsay

Medical device robots present a number of cybersecurity, privacy, and safety challenges that regulation and industry standards must address in order to safely and rapidly advance innovation in the field.

The University of Washington’s Computer Science Department recently highlighted the problem. Computer science researchers hacked a teleoperated surgical robot called the Raven II during a mock surgery in which the robot moved pegs on a pegboard. The researchers maliciously controlled a wide range of the Raven II’s functions, overrode command inputs from the surgeon, and launched a denial-of-service attack that stopped the robot and made remote operation impossible. They designed the test to show how easily a malicious attack could hijack the operations of a medical device robot, and they concluded that established and readily available security mechanisms, like encryption and authentication, could have prevented some of these attacks. Continue reading
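
To make “encryption and authentication” concrete, here is a minimal sketch, in Python, of how a teleoperation link might authenticate each command with a shared-key HMAC and a sequence number, so that tampered or replayed commands are rejected before the robot acts on them. Everything here is illustrative: the field names, framing, and key handling are hypothetical and are not the Raven II’s actual protocol.

```python
import hashlib
import hmac
import json
import struct

# Hypothetical pre-shared key; a real deployment would provision it out of
# band and keep it in hardware or an OS keystore, never in source code.
SECRET_KEY = b"example key -- not a real deployment secret"

TAG_LEN = 32  # HMAC-SHA256 digest size in bytes


def sign_command(seq: int, payload: dict) -> bytes:
    """Serialize a teleoperation command and append an HMAC-SHA256 tag."""
    body = struct.pack(">Q", seq) + json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, body, hashlib.sha256).digest()
    return body + tag


def verify_command(packet: bytes, last_seq: int) -> tuple[int, dict]:
    """Reject packets with a bad tag (tampering) or a stale sequence (replay)."""
    body, tag = packet[:-TAG_LEN], packet[-TAG_LEN:]
    expected = hmac.new(SECRET_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("authentication failed: command dropped")
    seq = struct.unpack(">Q", body[:8])[0]
    if seq <= last_seq:
        raise ValueError("replayed or out-of-order command dropped")
    return seq, json.loads(body[8:].decode())


# Usage: the surgeon's console signs; the robot verifies before actuating.
packet = sign_command(seq=42, payload={"joint": 3, "delta_mm": 0.5})
seq, command = verify_command(packet, last_seq=41)
print(seq, command)
```

The HMAC stops an attacker from forging or altering commands, and the sequence number stops them from re-sending an old, validly signed packet; neither hides the commands from eavesdroppers, which is why an encrypted transport (for example, TLS) would still be needed for confidentiality.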