COVID-19 Vaccine Passports: A One-Way Ticket to a Normal Life or a Threat to Individual Privacy and Equality?

By: Kelsey Cloud

As the percentage of fully vaccinated individuals continues to increase globally, countries have begun to consider whether to require vaccine passports—digital passes confirming that the holder has been fully vaccinated against COVID-19—in order to attend sporting events, concerts, and other pre-pandemic activities. A simple scan of a QR code on a smartphone or printed paper would allow international travel to resume and give consumers access to certain businesses, events, or locations within their home countries. China, New Zealand, Israel, and the United Kingdom have already launched versions of vaccine passports, with widely varying policies and methods of implementation; the European Union’s Digital Green Certificate, for example, collects an individual’s name, birthdate, date of issuance, and vaccine information. Moreover, a multitude of international organizations, including the World Health Organization and the International Air Transport Association, have begun coordinating efforts around vaccine passport implementation.
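To make the data involved concrete, the sketch below models a certificate record with the fields the article attributes to the Digital Green Certificate (name, birthdate, date of issuance, and vaccine information) and flattens it into a string that could be packed into a QR code. The field names, the `VaccineCertificate` class, and the pipe-delimited format are illustrative assumptions, not the EU’s actual schema, which additionally signs the payload so verifiers can detect tampering.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VaccineCertificate:
    """Illustrative record; field names are hypothetical, not the EU schema."""
    full_name: str
    date_of_birth: date
    date_of_issuance: date
    vaccine_info: str  # e.g., product name and doses administered

def to_qr_payload(cert: VaccineCertificate) -> str:
    """Flatten the record into a compact string for QR encoding.
    A real scheme would also attach an issuer signature so that a
    venue or border agent could verify the pass was not forged."""
    return "|".join([
        cert.full_name,
        cert.date_of_birth.isoformat(),
        cert.date_of_issuance.isoformat(),
        cert.vaccine_info,
    ])

# Example: a verifier scanning the QR code recovers exactly these fields.
print(to_qr_payload(VaccineCertificate(
    "Jane Doe", date(1980, 5, 1), date(2021, 4, 20), "Example-Vax, 2 of 2 doses")))
```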

In the United States, President Biden issued an Executive Order instructing the State Department to collaborate with global health organizations to establish international travel guidelines. However, the administration explicitly stated that the federal government will not issue vaccine passports, nor collect or store personal vaccine data. As such, without a federal mandate, vaccine passport initiatives in the U.S. remain in the private sector, driven by companies such as Microsoft, Salesforce, IBM, and MasterCard. All applications are still in the development stage, and several governors have already issued executive orders banning the use of vaccine passports in their states. The absence of a federally issued vaccine passport, coupled with the lack of uniform digital standards at the international level, creates significant implementation issues and stirs complicated political and ethical debates surrounding privacy, inequality, discrimination, and fraud.

Private-Sector Digital Vaccine Passports Raise Privacy Concerns

The lack of a federal privacy law regulating the collection and use of personal information raises concerns about who will oversee and control that information. While companies developing passport applications seek to preserve as much individual privacy as possible, a federal application would likely need to include enough personal and medical information to confirm that someone has been vaccinated, such as name, contact information, and medical records from health care providers. The protections of the Health Insurance Portability and Accountability Act (HIPAA) would not be implicated in most situations, since passport applications could be developed without transmitting information to HIPAA-covered entities such as hospitals.

While some companies claim that their applications protect privacy by encrypting confidential user data, the lack of legal remedies for privacy violations still leaves digital vaccine passports ripe for abuse and vulnerable to breaches. Allowing private companies direct access to medical records raises questions about whether and how third-party companies will store and use that data. Without oversight from the federal government, those private companies could aggregate personal health information in ways that create a significant target for hackers.

Current Vaccine Distribution Policies Reinforce Systems of Inequality    

Additionally, restrictive vaccine distribution policies favor high-income countries and worsen inequalities both domestically and internationally. Globally, most low- and middle-income countries still lack access to COVID-19 vaccines, and within high-income countries, African American and Hispanic individuals continue to be vaccinated at lower rates than White individuals. As of April 15th, only 0.1% of the 841 million administered vaccine doses had gone to individuals in low-income countries. Joia Mukherjee, Chief Medical Officer of Partners in Health, warns that the world is “creating another superstructure or colonial hierarchy of people from wealthier countries having access and poorer countries not having access.” In the U.S., vaccination rates remain lower among high-poverty and uninsured populations, as well as non-citizen immigrants. Moreover, vaccine passports have the potential to discriminate against those who cannot receive vaccinations for medical or religious reasons. While many vaccine passports, such as the State of New York’s Excelsior Pass, allow a negative COVID-19 test in place of proof of vaccination, other countries, such as China, admit only vaccinated individuals, increasing the possibility of discrimination.

In addition, those without smartphones, mobile devices, or a reliable mobile data plan would face technological discrimination. Vaccine passport applications would also leave behind people in marginalized communities, such as formerly incarcerated or undocumented people, who often have heightened fears of government surveillance of their private health information. Requiring digital passports for travel, both domestic and international, could only exacerbate these inequalities.

Towards a Unified Global Approach

Vaccine certifications for international travel are not new—many countries already require proof of yellow fever vaccination, for example. Mass vaccination initiatives have sprouted throughout history, both in the U.S. and around the world, and vaccination requirements for work, school, and international travel existed long before COVID-19; the vast majority of the world complies with them. To ensure the effectiveness and reliability of COVID-19 vaccine passports, the private companies, international organizations, and other entities developing them must safeguard the privacy of medical information, prevent fraudulent vaccination data, and implement anti-discriminatory policies that lessen global inequalities.

A New York City Councilmember has proposed the first bill that would prohibit weaponizing police robots

By: Zoe Wood

In February of this year, I wrote about how lethal autonomous weapons systems—or Killer Robots, depending on whom you ask—are under-regulated both nationally and internationally. In March, New York City Councilmember Ben Kallos proposed what is likely the nation’s first law regulating law enforcement’s use of robots armed with weapons.

Proposed Int. No. 2240-A is just fifteen lines long, and it bans two types of conduct. First, the New York Police Department (NYPD) “shall not authorize the use, attempted use or threatened use of a robot armed with any weapon.” Second, the NYPD “shall not authorize the use, attempted use or threatened use of a robot in any manner that is substantially likely to cause death or serious physical injury, regardless of whether or not the robot is armed with a weapon.” A weapon is “a device designed to inflict death or serious physical injury,” and a robot is “an artificial object or system that senses, processes and acts, to at least some degree, and is operated either autonomously by computers or by an individual remotely.”

Councilmember Kallos proposed Int. 2240 in response to a late-February 2021 incident during which the NYPD brought an (unarmed) robot to an active crime scene in the Bronx. Kallos reportedly watched footage of this event “in horror.” “Robots can save police lives, and that’s a good thing,” says Kallos, “[b]ut we also need to be careful it doesn’t make a police force more violent.”

The robot in question strongly resembles a dog, despite a complete lack of fur, ears, or anything resembling a face, and is generically called “Spot” by its manufacturer, Boston Dynamics. Spot stands approximately two and three-quarters feet tall, measures three and a half feet long, weighs roughly 70 pounds, and can run for up to 90 minutes on one battery charge. It can also travel at about three miles per hour. Why exactly would the Spot robot be useful at an active crime scene? Its primary utility is surveillance. According to Boston Dynamics, “Spot is an agile mobile robot that navigates terrain with unprecedented mobility, allowing you to automate routine inspection tasks and data capture safely, accurately, and frequently.” At an active scene like the one in late February in the Bronx, Spot was likely brought in by the NYPD to surveil the residence with cameras before police officers entered the premises. Although Spot can operate autonomously, the NYPD has reportedly operated its Spot robot only by remote control.

A prominent robotics company owned by Hyundai Motor Group, Boston Dynamics is perhaps best known for its dancing robots. However, the advertising on its website focuses on solutions for construction, industrial inspection, and warehouse work. Notably, policing is not actively advertised as a use for Spot; in fact, policing is mentioned just twice, embedded within two FAQs. The same goes for military use of Spot, which is not advertised and appears just once in an FAQ.

In fact, Boston Dynamics is opposed to weaponizing its robots. CEO Robert Playter has explained that “[a]ll of our buyers, without exception, must agree that Spot will not be used as a weapon or configured to hold a weapon.” This language is included in the Software License section of the Spot robot’s Terms and Conditions of Sale, which reads in relevant part: “The License will automatically and immediately terminate, and we may disable some or all Equipment functionality, upon (a) intentional use of the Equipment to harm or intimidate any person or animal, as a weapon or to enable any weapon,” or “(b) use or attempted use of the Equipment for any illegal or ultra-hazardous purpose.” Playter explains, “[a]s an industry, we think that robots will achieve long-term commercial viability only if people see robots as helpful, beneficial tools without worrying if they’re going to cause harm.”

There are several glaring reasons why this paragraph in Spot’s Terms and Conditions of Sale is insufficient to ensure that Spot is not weaponized. First and foremost, while Terms and Conditions of Sale are binding, they lack the staying power of a law or regulation and can be changed from contract to contract. Second, there are many different kinds of robots currently in use by law enforcement, and they should all be regulated uniformly; regulation lacking uniformity could lead to a complex legal landscape that is difficult to enforce, riddled with loopholes, and favors certain technologies over others. Third, and most specifically, the Boston Dynamics Terms and Conditions of Sale do not define key terms such as “intimidate” and “ultra-hazardous.” Finally, the commercial viability of robots, as cited by Boston Dynamics, should not be the primary motivation behind regulations that ban killer robots. What makes a product commercially viable is apt to change, and in any event, human-centric policy that values people over property should form the basis for regulation that bans killer robots.

This is where Councilmember Kallos’s bill excels: it would have staying power, would apply to all police robots uniformly, and appears to be motivated by human-centric policy considerations. And although the bill may seem premature, there have been several instances of law enforcement robots using force against people, at least once resulting in death. Most famously, in 2016, the Dallas Police Department used a military robot—a Remotec Andros—to end a standoff with a sniper who had killed five police officers and injured several others. Faced with an hours-long standoff, Dallas police officers placed a pound of C-4 explosive on the robot’s extension arm and maneuvered it by remote control into the building where the sniper hid. When the C-4 exploded, it killed the sniper and ended the standoff. Less famously, police in Dixmont, Maine, used the same type of robot with the same type of explosive when they were called to the home of a man having a mental health crisis. When officers could not get the man to come out of the house, and after the man began shooting out of his window at the armored police vehicle, police maneuvered a robot armed with an explosive device toward the man’s home. When the device detonated, it caused the man’s house to collapse but miraculously killed neither the man nor the dog and kitten living with him.

Still, Councilmember Kallos’s bill is vulnerable to some criticism. First, Int. No. 2240 does not appear to take into account the years of organizing work done by the Campaign to Stop Killer Robots. The Campaign advocates for maintaining meaningful human control over the use of force, and Int. No. 2240 stops just short of this. While it prohibits “use of a robot in any manner that is substantially likely to cause death or serious physical injury,” it does not ban outright the use of force by remote controlled or autonomous robots. The Campaign has written policy that explains why such a ban is necessary, and Int. No. 2240 would benefit from being informed by such policy.

Finally, weaponization, lethal or otherwise, is not the only concerning aspect of police robots, and Int. No. 2240 does not contemplate the others. Although a bill that bans weaponization need not necessarily address them, discussion about regulating police robots should keep in mind their myriad uses, including their capacity for surveillance.