The debate about Lethal Autonomous Weapons Systems has reached a fever pitch, but the military’s artificially intelligent weapons remain under-regulated and under-defined

By: Zoe Wood

Recently in autonomous weapon news

“Without effective AI, military risks losing next war” reads the title of a November 2019 press release by the Department of Defense. Artificial intelligence, the press release explained, is the Department of Defense’s top priority for tech modernization.

The American military uses artificial intelligence in many ways, perhaps most controversially as a component of lethal autonomous weapon systems, or LAWS. LAWS have been debated for years, but 2020 saw a frenzy of high-stakes discussion about their use and development. That discussion starts with the military’s recently professed goal of advancing its arsenal of LAWS, namely by making them more autonomous.

For example, the general who oversees defense against missile threats and air-based attacks has professed his desire to automate missile detection systems in response to ever faster and more powerful weapons. To that end, he wants to “move humans further out in the decision-making loop.” What does this mean, exactly? The rest of this post will explain, but briefly, it means taking decisions out of the hands of people and leaving these decisions—including decisions to use deadly force—to artificially intelligent systems.

By way of response, Human Rights Watch, an international non-governmental organization, released a report calling on nations to develop an international treaty that requires the use of force to remain under the strict control of human decision making. The report advocates for national laws and policies that commit nations to retaining “meaningful human control” over weapons and that ban the development, production, and use of fully autonomous weapons.

What makes a weapon autonomous?

In fact, the answer is not entirely clear. Weapons systems come with varying degrees of autonomy. At the lowest level of autonomy are “human-in-the-loop” weapons systems. These are only semi-autonomous, which means that they can only engage targets or groups of targets that have been specifically selected by the person operating the weapon. One step up, “human-on-the-loop” systems can select targets by themselves and make the decision to engage—e.g., fire upon—those targets. However, “human-on-the-loop” weapons are not considered fully autonomous because they are designed to give human operators the time and opportunity to intervene and end an engagement. In other words, they are designed to be fairly closely monitored by people. Finally, “human-out-of-the-loop” systems are classified by the Department of Defense as fully autonomous. This means that, once these types of weapons are activated, they can identify, select, and engage targets without intervention by a person.
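
For readers who want the taxonomy pinned down precisely, the three tiers can be modeled as a simple classification. The sketch below is purely illustrative; the names and helper functions are mine, not the Department of Defense’s.

```python
# A minimal sketch (not an official DoD taxonomy or data model) of the three
# autonomy tiers described above. All names are hypothetical illustrations.
from enum import Enum

class AutonomyLevel(Enum):
    HUMAN_IN_THE_LOOP = 1      # semi-autonomous: engages only operator-selected targets
    HUMAN_ON_THE_LOOP = 2      # selects and engages targets, but humans can intervene
    HUMAN_OUT_OF_THE_LOOP = 3  # fully autonomous: no human intervention after activation

def requires_human_before_engagement(level: AutonomyLevel) -> bool:
    """True if a person must select the target before the system may engage."""
    return level is AutonomyLevel.HUMAN_IN_THE_LOOP

def allows_human_intervention(level: AutonomyLevel) -> bool:
    """True if a person retains a window to halt an engagement in progress."""
    return level in (AutonomyLevel.HUMAN_IN_THE_LOOP, AutonomyLevel.HUMAN_ON_THE_LOOP)
```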

These three classifications provide a useful framework, but not all weapons systems fall squarely within one of the three categories. For example, Israel’s Harpy weapon hovers between the upper tiers of autonomy. While it is commonly activated with specific and finite objectives already programmed in, the Harpy can “loiter” for up to two and a half hours after deployment, which gives it a degree of indeterminacy and autonomy. Thus, the Harpy does not need to be launched with a specific target and location already programmed in. Rather, once launched, it can search for enemy radars across a range of up to 500 kilometers. These capabilities allow the Harpy to find and engage targets of which its human operator was not even aware.

By contrast, America’s ATLAS (the Advanced Targeting and Lethality Automated System) cannot initiate force because it simply has no physical connection to a trigger mechanism. ATLAS is therefore part of a human-in-the-loop system: it provides information, acquired by artificial intelligence, to a human who may then decide to initiate force. However, Army acquisition chief Bruce Jette has said that the Army may explore converting ATLAS to a human-on-the-loop system. ATLAS’s increased autonomy would look like this: a human officer reviews surveillance data and subsequently clears a platoon of robots to open fire on a group of targets.

That the three classifications of autonomous weapons fail to accurately categorize two of the world’s most prominent autonomous weapons suggests that a new definition system is necessary. It seems misleading—and will lead to ineffective regulation—to classify a weapons system like the Harpy as only semi-autonomous when it has the ability to independently select and engage targets. Crucially, the definition of a fully autonomous weapon should err on the side of over-inclusivity so that weapons like the Harpy do not escape strict regulation. Generally speaking, it is essential to come up with a clear and accurate system of classification for levels of autonomy that can operate both nationally and internationally. Such a system of definitions is essential for an adequate regulatory framework.

How are autonomous weapons currently governed?

Today, as LAWS actively push the outer boundary of semi-autonomy, very little governs their use and development. While International Humanitarian Law (IHL) bans weapons that are indiscriminate or that cause unnecessary suffering, it does not explicitly ban autonomous weapons, and there is no guarantee that autonomous weapons fall into either of those banned categories. Moreover, no treaty or principle of customary international law explicitly bans autonomous weapons, and there is no indication that such a treaty is on the horizon. As of 2019, most major military powers, including the US, UK, Australia, Israel, and Russia, oppose new international regulations on the development or use of autonomous weapons. They argue that existing IHL is sufficient to regulate weapons systems with increasing levels of autonomy, despite the fact that IHL makes no specific mention of LAWS. A UK Ministry of Defence spokesperson even suggested that LAWS defy regulation because there is “still no international agreement on the characteristics of lethal autonomous weapons systems.” That objection only underscores the assertion that a definition system for levels of autonomy is key, and it need not be as complicated as the spokesperson suggests.

In the U.S., Department of Defense Directive 3000.09 governs autonomous and semi-autonomous weapons. The directive dictates that “autonomous and semi-autonomous weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” However, the policy does not define “appropriate levels of human judgment.” In addition, Section 4.c(2) of the policy limits autonomous weapons to defense purposes and explicitly bans autonomous weapons from selecting human targets. Yet Section 4.d of the policy allows Section 4.c(2) to be overridden if two deputy secretaries, of policy and technology, approve the use.
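
The directive’s rule-plus-override structure is easier to see when sketched as code. What follows is a simplified, hypothetical “rules as code” reading of that structure; the function and parameter names are my own shorthand, not language from the directive.

```python
# A simplified, hypothetical model of Directive 3000.09's structure as the
# post describes it: Section 4.c(2) restricts autonomous weapons, and Section
# 4.d allows that restriction to be overridden with senior-level approval.
# Names and mechanics here are illustrative assumptions, not the directive's text.

def use_permitted(selects_human_targets: bool,
                  defensive_use: bool,
                  senior_approvals: set[str]) -> bool:
    """Does a proposed autonomous use fit this simplified model of the rule?"""
    # Section 4.d override: both officials named in the policy must approve.
    if {"policy", "technology"} <= senior_approvals:
        return True
    # Section 4.c(2) baseline: defensive purposes only, and no human targets.
    return defensive_use and not selects_human_targets

# Baseline: a defensive system that cannot target humans passes.
print(use_permitted(False, True, set()))                     # True
# With both approvals, even an offensive, human-targeting use passes.
print(use_permitted(True, False, {"policy", "technology"}))  # True
```

Spelled out this way, the worry becomes concrete: the Section 4.d override can swallow the Section 4.c(2) restrictions entirely.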

Most recently, on February 25, 2020, the Department of Defense adopted five Principles of Artificial Intelligence Ethics which apply not specifically to LAWS but to the use of artificial intelligence “in both combat and noncombat situations.” These principles require that artificially intelligent systems be (1) responsible, (2) equitable, (3) traceable, (4) reliable, and (5) governable.

While these principles are on the right track, they are not contained within a statute or directive and are therefore not binding. They are also extremely vague. For example, the Department of Defense has defined “responsible” as “exercis[ing] appropriate levels of judgment and care while remaining responsible for the development, deployment and use of AI capabilities.” Similarly, “governable” means that “[t]he department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.”

Interestingly, these principles, particularly “governable,” can be seen as an acknowledgment by the US that LAWS should be governed by more than existing IHL. But the principles are essentially meaningless as written, and there is no indication that the US plans to engage in meaningful regulation of LAWS. This is unacceptable. Even if the US stops short of banning the development or use of autonomous weapons outright, as Human Rights Watch proposes, it must at the very least enact binding legislation that clearly defines key concepts such as autonomy and “appropriate levels of human judgment,” and that bans, with no exceptions, the use of lethal force on a human by a fully autonomous weapon.

Oh Deere: Precision Agriculture and the Push for Rural Farmers to Adopt New Technologies

By: Savannah McKinnon

Roughly 29% of farmers in the United States have no internet access. Older farmers and ranchers, especially, rely on experience to determine the amount of fertilizer or water necessary to sustain the year’s harvest. As a result, precision farming, a data-driven approach in which farmers gather historical field data and use it to make management decisions, has seen slow adoption. Compounding the slow uptake, the FBI released a memo in 2016 warning that farmers who adopt precision agriculture tools risk having their digitized data held for ransom by “hacktivists” targeting GMOs. While precision agriculture could enable a more resilient food system, the industry must reshape its platforms to win over farmers who are slow to adopt new technology.

History of Precision Agriculture

Precision agriculture started in the 1960s, when farmers would collect and log their data, then make decisions based on it. By 1990, GPS technology was available for farmers to record information such as planting position in the field. Precision farming data allows farmers to make informed management decisions that shape farm marketing, production, and growth. With farm data storage, however, come data privacy concerns.

On top of “hacktivists” attempting to hack John Deere or Monsanto precision agriculture systems, third-party issues can arise when farmers enter into contracts. Confidentiality agreements may privatize data, but most contracts offer no guarantee against John Deere or Monsanto sharing that data with third parties. The Personal Information Protection and Electronic Documents Act of 2000 was meant to address data privacy pertaining to farmers by preventing the exposure of private data in commercial activity. The Act sets out ten privacy principles, but larger agribusinesses bypass it by drafting complicated contracts. These agribusinesses have an incentive to bypass privacy laws so they can use farmer data to develop new technologies and perpetuate market manipulation tactics. Monsanto production contracts with farmers, for example, include terms allowing Monsanto to keep farmer data even after the contract ends. Farmers essentially pay for precision agriculture services while receiving no monetary benefit from agribusinesses’ use of their data.

Usefulness of Precision Agriculture

Nevertheless, precision agriculture data is vital to farmers’ efficiency and profitability. Precision farming systems use yield mapping to help maximize harvest yields; geographic information systems and global positioning systems, available in all newer John Deere tractors, collect geo-spatial information to record data in fields; and variable rate technology allows farmers to record and apply different rates of fertilizer at various locations on a property. This information gives farmers the data to maximize their farms’ efficiency, sustainability, and profitability. Precision agriculture data is also collected through drones, smart irrigation systems, robotics, and artificial intelligence. Financial technologies, meanwhile, are being used to democratize the agricultural market, and farming could become more accessible if precision agriculture were widespread. These technologies, when implemented properly, have numerous benefits, including producing healthier foods at a lower cost, making cheap produce more accessible for low-income households, and decreasing topsoil erosion.
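
To make variable rate technology concrete, here is a minimal sketch of its core idea: prescribe a different input rate for each management zone based on that zone’s recorded data. The zone names, figures, and rate formula are invented for illustration and do not reflect any actual agronomic model or vendor’s software.

```python
# A minimal sketch of variable rate technology's core idea: prescribe a
# different input rate for each management zone based on that zone's data.
# Zone names, figures, and the rate formula are invented for illustration.

zones = {
    "north_field": {"avg_yield_bu_per_acre": 150, "soil_nitrogen_ppm": 12},
    "south_field": {"avg_yield_bu_per_acre": 120, "soil_nitrogen_ppm": 20},
}

def nitrogen_rate(zone: dict, target_yield: float = 200.0) -> float:
    """Crude prescription: apply more nitrogen where yield lags and soil N is low."""
    yield_gap = max(target_yield - zone["avg_yield_bu_per_acre"], 0)
    soil_credit = zone["soil_nitrogen_ppm"] * 2   # credit for nitrogen already in the soil
    return max(yield_gap * 1.2 - soil_credit, 0)  # pounds of N per acre

for name, zone in zones.items():
    print(f"{name}: apply {nitrogen_rate(zone):.0f} lbs N/acre")
# north_field: apply 36 lbs N/acre
# south_field: apply 56 lbs N/acre
```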

Conclusion

Today, over 70% of North American farmers use precision agriculture, but fewer than half use software capable of analyzing the data it generates. This is partly because farmers fear losing autonomy over their data, and partly because farmers are generally slow to adopt new technology.

To resolve data autonomy challenges, advocates for farmer data privacy have asked Congress to consider data privacy legislation similar to the Health Insurance Portability and Accountability Act (HIPAA). Such a law would provide federal protections for precision agriculture by imposing field data safeguards on business associates and others with access to farmer data. Any policy protecting precision agriculture should also designate an individual to oversee a corporation’s compliance with data privacy principles. While this sort of approach has been floated in Congress, no formal solution has been prioritized.

Congress did, however, prioritize the Precision Agriculture Connectivity Act of 2018, which aimed to increase internet access among farms and thereby improve access to precision agriculture technologies. But the task force created by the Act has so far been merely performative, signaling that further action is necessary to protect farmers’ data.

Precision agriculture is a necessary investment for farmers seeking to conserve soil and water. The lack of strong policy and oversight protecting their data, however, needs to be addressed by Congress. With a guarantee of data protection, farmers might open up to the idea of adopting this new technology, one that would solve a whole host of problems on farms by increasing efficiency and promoting multi-crop farming.

Parler is Not an Enigma: Section 230 Applies to Antitrust Claims

By: Tallman Trask

Parler’s antitrust lawsuit against Amazon has been widely derided. Professor and noted antitrust scholar Herbert Hovenkamp commented that the suit was not “going to fly” because “there really aren’t any facts” in the complaint to support the kind of conspiracy Parler is alleging. TechDirt called it “laughably bad.” Reuters described what it called the suit’s “hollow core,” quoting experts who saw no antitrust problem at all in the facts Parler alleges. Finally, Judge Barbara Rothstein pointed to the lack of evidence when she denied the company’s motion for a preliminary injunction, suggesting that Parler’s evidence did not meet the Twombly standard, which requires that an antitrust complaint allege a conspiracy that is not merely conceivable but plausible, and that it include “enough factual matter to suggest an agreement.” Reporters following the case described the judge as “not impressed.”

But no matter the merits of the suit itself, one aspect of Parler’s filings sits at the intersection of two of the trendiest topics in Big Tech and the law: Section 230 of the Communications Decency Act and the Sherman Act. Section 230 allows interactive computer service providers to escape liability for removing content they find “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” The Sherman Act, as it applies here, prohibits every “contract, combination . . ., or conspiracy, in restraint of trade or commerce among the several States.” In a supplemental filing, Parler claims that Amazon cannot be “immune under Section 230 of the Communications Decency Act” (as Amazon has claimed it is) because Parler’s “federal and state claims all are based on allegations of anticompetitive conduct.” That is, Parler says Section 230 immunity does not extend to cover antitrust claims, at least not in the Ninth Circuit.

Parler is wrong.

There is No Blanket Antitrust Exception to Section 230

In making its claim, Parler relies on Enigma Software Group USA, LLC v. Malwarebytes, Inc., a 2019 case in which the Ninth Circuit looked at the overlap of Section 230 and antitrust law. As applicable to Parler’s claims, the facts in Enigma are simple: Malwarebytes, a provider of security software, changed its system and began to flag a competitor’s products as security risks. It then encouraged users, through pop-up warnings and other means, to neither download nor install the competitor’s software. The competitor, which did not similarly flag Malwarebytes’ products, sued, claiming that its products were not security risks and that Malwarebytes was acting not out of concern for the security of its customers, but out of “anticompetitive animus.” Malwarebytes, in turn, claimed that Section 230’s allowance for removal of “otherwise objectionable” content provided it with immunity from the claims. The Ninth Circuit disagreed, holding that “immunity under § 230 . . . does not extend to anticompetitive conduct.”

Parler’s filing interprets the holding from Enigma as prohibiting any claim of immunity under Section 230 whenever there is an allegation of anticompetitive behavior. That is not, however, what the Ninth Circuit held, and there are clear differences between Enigma and Parler’s claims. First, while the Ninth Circuit has “held that ‘immunity under [§ 230] does not extend to anticompetitive conduct,’” the holding is limited. It merely clarifies that where “a provider’s basis for objecting to and seeking to block materials is because those materials benefit a competitor,” the provider is not entitled to immunity under Section 230. In other words, the Ninth Circuit held that Section 230 immunity does not extend to cover moderation driven by anticompetitive desires. That is not the equivalent of holding, as Parler claims, that Section 230 cannot cover any conduct whenever there is a claim that the conduct was potentially anticompetitive. At most, the Ninth Circuit has held that there is some conduct so purely anticompetitive, so clearly outside the bounds of the intent of the “otherwise objectionable” exception in Section 230, that it cannot possibly fit within Section 230 immunity. The court has not, however, ruled that Section 230 immunity disappears where a provider responds to content clearly within the “otherwise objectionable” category (as the hate speech, violent threats, and other content on Parler’s site was) simply because moderating or removing that content, or the user’s access to its services, may have some potential anticompetitive effect.

While the Ninth Circuit’s analysis of the interaction between Section 230 and the Sherman Act is more extensive than that undertaken by other circuits, other courts have broadly agreed with the Ninth Circuit. For example, the D.C. Circuit, considering a slightly different claim made under both Section 1 and Section 2 of the Sherman Act, concluded that Section 230 immunity was warranted. Writing for the court, then-Chief Judge Merrick Garland concluded that the “complaint [was] barred by § 230 of the Communications Decency Act,” while noting “that immunity is not limitless” and that in some cases Section 230 may not apply. Further, a reading of Enigma under which Section 230 applies but is not unlimited meshes with earlier Ninth Circuit interpretations of the applicable law.

While past decisions clearly suggest that Section 230 immunity can apply in at least some antitrust contexts (and should apply in the context of Parler’s suit), Parler’s suit also differs from Enigma in at least one other important way. Enigma was a dispute between direct competitors, while Parler’s dispute with Amazon is between a service provider and a company that purchases the service, a distinction that made Enigma different from earlier decisions but did not disturb the earlier interpretation that Section 230 applies, within limits. Moreover, there was a genuine dispute in Enigma over whether the competitor’s software was actually “objectionable,” while there is no question that content on Parler’s site was objectionable, a contention supported by dozens and dozens of screenshots Amazon filed with the court, which show vile content from Parler that Parler has not countered.

While Enigma does address the space where Section 230 overlaps with antitrust law, it does not hold that immunity ends where anticompetitive effects potentially begin. Rather, the Ninth Circuit has been more limited in its conclusions. Parler’s claims that Amazon cannot enjoy Section 230 immunity do not fit within the bounds of the law, and they do not fit within the Ninth Circuit’s understanding of the limits of Section 230.

The First Amendment Needs an Update

By: Katherine Czubakowski

Recent news about Twitter and Facebook banning former President Trump has many wondering about the legality of such action. Although the consensus is that Twitter’s action does not violate the First Amendment, it raises questions about whether the First Amendment should apply to at least some forums on the internet. The Supreme Court’s precedent has focused largely on free speech rights in physical spaces, but with more communication happening online, particularly in today’s quarantined and socially distanced world, it is time for the Court to address when and how speech on the internet can be regulated.

Lower courts have recently begun questioning the assumption that speech on social media cannot be protected by the First Amendment. Under current law, social media sites are allowed to ban whomever they want. The Constitution, and by extension the Bill of Rights, applies only to government action, so Twitter, Facebook, and other private social media platforms may regulate speech however they choose through their Terms of Service. This view has been challenged numerous times, with varying success, by those wishing to comment on public officials’ Facebook pages, those banned from Twitter for their own speech, and those blocked by other Twitter users. Although some courts have held that social media platforms are strictly private companies hosting private speech, others have concluded that some aspects of social media platforms fall under First Amendment protection, such as when an account is run by a government officer speaking in their official capacity, because the interactive features open to the public change the nature of the account from private to public.

One of these cases, Trump v. Knight First Amendment Institute, is currently pending on a petition for certiorari at the Supreme Court, giving the Court an ideal opportunity to announce how the First Amendment should apply to internet forums. The plaintiffs argue that by blocking them from his personal Twitter account, former President Trump violated the First Amendment by preventing them from speaking in a public forum based on the viewpoints they would have expressed. Ensuring that citizens can disagree with politicians and engage in public debate, if only in the comments section, is one of the most important ways to counter the effects of confirmation bias, which is prevalent on social media. As politicians and public officials increasingly leverage social media to promote their own policies, citizens’ rights should be expanded to ensure a fair and open public debate.

Now is the time for the Court to extend First Amendment protections to any website which encourages public discussion.  While expanding First Amendment protections to internet forums would likely allow former President Trump back on Twitter, it would also keep him from banning those with whom he disagreed from replying and challenging his opinions.  The Founding Fathers thought a fair, open, and representative public debate was the best way to protect our democracy – we just need to move their ideas into the digital age.

Catfish Bait: Too Few State Laws Protect the “Faces” of Catfishers

By: Cameron Cantrell

Most people on the Internet have heard of “catfishing,” the act of deceiving someone by creating a false personal profile online, named for the 2010 documentary and its subsequent MTV show. While the movie and show mostly document standard catfishing, where the perpetrator poses as a fictional person for romantic gain rather than to further a scam or fraud, cases of malicious catfishing get more media attention. The difference between standard and malicious catfishing is not a legal term of art, but a reflection of the perpetrator’s intent. Malicious catfishers deceive their victims to further an immoral end, often monetary exploitation or other long-con “romance” scams (in season 5 of the MTV show, one catfisher said the charade was a “game” to him, after his victim sent him over $500). The direct victims of malicious catfishing can usually find some legal ground to sue, but what about the unknowing third party whose pictures or likeness the catfisher relied on? Unfortunately, states almost universally leave the “catfish bait,” the person whose likeness a catfisher appropriates, without legal remedy.

Existing laws rarely protect these third-party victims of malicious catfishing

The regular lineup of privacy torts, such as false light, defamation, and right of publicity, does not protect the catfish bait if the catfisher’s persona is fictitious. For catfish bait to take advantage of these remedies, the catfisher would have to use the catfish bait’s face and name, or some other combination of identifiers making it clear they are pretending to be that specific person. The same is true in the twelve or so states that criminalize impersonating someone online, because those statutes require that the catfisher impersonate an “actual person.” Noting this statutory void, some legal articles have tried sewing together a patchwork of civil and criminal offenses (intentional infliction of emotional distress and cyberbullying, for example) to provide some relief. Even these hopeful academics, however, acknowledge that the mosaic of existing laws comes up short because current laws cannot be read to allow catfish bait to sue catfishers for using their likeness in “completely fictitious profile[s].” This is frustrating for prospective plaintiffs given that most situations people would describe as “catfishing” involve fictitious personas, not outright impersonation.

Say that you are the catfish bait, and a catfisher uses your picture as their own on social media. Some websites, like Twitter and Tinder, let you report the profile on those grounds, but a federal law known as Section 230 means removal is never guaranteed. Even if the platform does remove the profile, a clever catfisher can move to another website or remake the account and resume using your likeness. In the 49 states without a legal framework for this type of “fictitious” catfishing, there is no meaningful incentive to keep the catfisher from habitually using your likeness again, and again, and again.

Only one state protects “faces” of fictitious personas

Oklahoma’s Catfishing Liability Act of 2016 creates a private right of action for “catfish bait” (those whose names, images, or voices are used to “create a false identity” online). It is the only state law of its kind. Through it, plaintiffs in Oklahoma can obtain preliminary injunctions against an alleged catfisher, forcing the catfisher to stop using their likeness. If they win, plaintiffs can recover actual damages as well as punitive damages of at least $500, plus court costs. This law is perfectly tailored to the harms a catfisher can cause to “catfish bait” because it recognizes that some catfishing, and some catfishers, wreak more havoc than others (for example, the person who uses another’s likeness to talk to 400 people may cause more harm than the person catfishing an acquaintance out of insecurity or ignorant boredom). Accordingly, it awards punitive damages as a general deterrent for all catfishing and actual damages as an additional deterrent for especially undesirable catfishing, such as that done by high-profile or prolific catfishers.
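
As a back-of-the-envelope illustration of how the Act’s damages structure, as described above, might add up (the figures and the exact mechanics of the $500 floor are assumptions for illustration only, not the statute’s text):

```python
# A toy illustration of the damages structure described above: actual damages
# plus punitive damages of at least $500, plus court costs. The figures and
# the floor's exact mechanics are assumptions, not the statute's language.

def total_award(actual_damages: float, punitive: float, court_costs: float) -> float:
    return actual_damages + max(punitive, 500.0) + court_costs

# e.g., a prolific catfisher who caused $2,000 in provable harm:
print(total_award(actual_damages=2000, punitive=750, court_costs=300))  # 3050.0
```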

More states should follow Oklahoma’s lead

While many population-dense, tort-focused states like California, Texas, and New York make it illegal for a catfisher to use another’s picture to impersonate that specific person, Oklahoma is the only state that outlaws using another’s picture to become someone new. Curiously, even with catfishing’s notoriety in popular culture, there seems to be no public opposition to laws like Oklahoma’s, nor any demonstrated political support. Yet catfishing has grown increasingly common over the last decade and is likely to become even more frequent in years to come. With plenty of evidence that catfishing presents a growing problem, do legislatures really view redress for catfish bait as that low a priority?

It seems the only possible answer is yes; to date, only one other state has introduced substantially similar legislation, and it did not receive lively discussion. That state, Wisconsin, got its comparable bill out of committee easily in 2017 but never scheduled it for floor debate. Reintroduced the next legislative session, the bill again died from neglect, this time in committee.

The question of “why not” remains, and the argument for legislating is buttressed by airtight reasoning: the problem is growing, the harms are undue, the legislation is simple, the statute requires no funding, the policy it embodies is reasonable, and the penalties it carries are moderate at most. What else could other states’ legislatures possibly be waiting for?