The debate about Lethal Autonomous Weapons Systems has reached a fever pitch, but the military’s artificially intelligent weapons remain under-regulated and under-defined

By: Zoe Wood

Recently in autonomous weapon news

“Without effective AI, military risks losing next war” reads the title of a November 2019 press release by the Department of Defense. Artificial intelligence, the press release explained, is the Department of Defense’s top priority for tech modernization.

The American military uses artificial intelligence in many ways, perhaps most controversially as a component of lethal autonomous weapon systems, or LAWS. LAWS have long been debated, but 2020 saw a frenzy of high-stakes discussion about their use and development. This discussion starts with the military’s recently professed goal of advancing its arsenal of LAWS, namely by making them more autonomous.

For example, the general who oversees defense against missile threats and air-based attacks has professed his desire to automate missile detection systems in response to ever faster and more powerful weapons. To that end, he wants to “move humans further out in the decision-making loop.” What does this mean, exactly? The rest of this post will explain, but briefly, it means taking decisions out of human hands and leaving them—including decisions to use deadly force—to artificially intelligent systems.

By way of response, Human Rights Watch, an international non-governmental organization, released a report calling on nations to develop an international treaty that requires the use of force to remain under the strict control of human decision making. The report advocates for national laws and policies that commit nations to retaining “meaningful human control” over weapons and that ban the development, production, and use of fully autonomous weapons.

What makes a weapon autonomous?

In fact, the answer is not entirely clear. Weapons systems come with varying degrees of autonomy. At the lowest level of autonomy are “human-in-the-loop” weapons systems. These are only semi-autonomous, which means that they can only engage targets or groups of targets that have been specifically selected by the person operating the weapon. One step up, “human-on-the-loop” systems can select targets by themselves and make the decision to engage—e.g., fire upon—those targets. However, “human-on-the-loop” weapons are not considered fully autonomous because they are designed to give human operators the time and opportunity to intervene and end an engagement. In other words, they are designed to be fairly closely monitored by people. Finally, “human-out-of-the-loop” systems are classified by the Department of Defense as fully autonomous. This means that, once these types of weapons are activated, they can identify, select, and engage targets without intervention by a person.

These three classifications provide a useful framework, but not all weapons systems fall squarely within one of the three categories. For example, Israel’s Harpy weapon hovers between the upper tiers of autonomy. It is commonly activated with specific and finite objectives already programmed in, but it can also “loiter” for up to two-and-a-half hours after deployment, which gives it a degree of indeterminacy and autonomy. The Harpy does not need to be launched with a specific target and location already programmed in; once launched, it can search for enemy radars across a range of up to 500 kilometers. These capabilities allow the Harpy to find and engage targets of which its human operator was not even aware.

By contrast, America’s ATLAS—the Advanced Targeting and Lethality Automated System—cannot initiate force because it simply does not have a physical connection to a trigger mechanism. ATLAS is therefore part of a human-in-the-loop system: it provides information, acquired by artificial intelligence, to a human operator, and that information may lead the operator to initiate force. However, Army acquisition chief Bruce Jette has said that the Army may explore converting ATLAS to a human-on-the-loop system. ATLAS’s increased autonomy would look like this: a human officer reviews surveillance data and subsequently clears a platoon of robots to open fire on a group of targets.

That the three classifications of autonomous weapons fail to accurately categorize two of the world’s most prominent autonomous weapons suggests that a new system of definitions is necessary. It seems misleading—and will lead to ineffective regulation—to classify a weapons system like the Harpy as only semi-autonomous when it has the ability to independently select and engage targets. Crucially, the definition of a fully autonomous weapon should err on the side of over-inclusivity so that weapons like the Harpy do not escape strict regulation. Generally speaking, it is essential to come up with a clear and accurate system of classification for levels of autonomy that can operate both nationally and internationally. Such a system of definitions is the foundation of an adequate regulatory framework.

How are autonomous weapons currently governed?

Today, as LAWS actively push the outer boundary of semi-autonomy, very little governs their use and development. While International Humanitarian Law (IHL) bans weapons that are indiscriminate or that cause unnecessary suffering, it does not explicitly ban autonomous weapons, and there is no guarantee that autonomous weapons fall into either of these two banned categories. Moreover, there is no treaty or principle of customary international law that explicitly bans autonomous weapons, nor is there any indication that such a treaty is close on the horizon. As of 2019, most major military powers, including the US, UK, Australia, Israel, and Russia, oppose new international regulations on the development or use of autonomous weapons. They argue that existing IHL is sufficient to regulate weapons systems with increasing levels of autonomy, despite the fact that IHL makes no specific mention of LAWS. A UK Ministry of Defence spokesperson even suggested that LAWS defy regulation because there is “still no international agreement on the characteristics of lethal autonomous weapons systems.” This only underscores the point that a shared system of definitions for levels of autonomy is key, and devising one need not be as complicated as the spokesperson suggests.

In the U.S., Department of Defense Directive 3000.09 governs autonomous and semi-autonomous weapons. The directive dictates that “autonomous and semi-autonomous weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” However, the policy does not define “appropriate levels of human judgment.” In addition, Section 4.c(2) of the policy limits autonomous weapons to defensive purposes and explicitly bans them from selecting human targets. However, Section 4.d of the policy allows Section 4.c(2) to be overridden if senior defense officials, including the under secretaries of defense responsible for policy and for acquisition and technology, approve the use.

Most recently, on February 25, 2020, the Department of Defense adopted five Principles of Artificial Intelligence Ethics which apply not specifically to LAWS but to the use of artificial intelligence “in both combat and noncombat situations.” These principles require that artificially intelligent systems be (1) responsible, (2) equitable, (3) traceable, (4) reliable, and (5) governable.

While these principles are on the right track, they are not contained within a statute or directive and are therefore not binding. They are also extremely vague. For example, the Department of Defense has defined “responsible” as “exercis[ing] appropriate levels of judgment and care while remaining responsible for the development, deployment and use of AI capabilities.” Similarly, “governable” means that “[t]he department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.”

Interestingly, these principles, particularly “governable,” can be seen as an acknowledgement on the part of the US that LAWS should be governed by more than existing IHL. But as written the principles are essentially meaningless, and there is no indication that the US plans to engage in meaningful regulation of LAWS. This is unacceptable. Even if the US stops short of banning any development or use of autonomous weapons, as Human Rights Watch proposes, it must at the very least enact binding legislation that clearly defines key concepts such as autonomy and “appropriate levels of human judgment,” and that bans outright, with no exceptions, the use of lethal force on a human by a completely autonomous weapon.
