‘Brexit’ and its Effects on International Trademark Law: Attack of the Clones

Michael W. Meredith

The United Kingdom formally left the European Union on January 31, 2020. Since that time, the country’s legal system has been in an eleven-month ‘transition’ state that delayed the effects of the country’s departure. However, that transition period came to a close on December 31, 2020, and the New Year will now force both the public and any businesses operating in either the European Union or the United Kingdom to reckon with the full brunt of ‘Brexit.’

The U.K., the E.U., and the International Trademark Register

One of the many legal challenges presented by ‘Brexit’ has to do with a peculiarity in the trademark registration system of the UK vis-à-vis the European Union and the International Register. The International Register is a global trademark protection system administered by the World Intellectual Property Organization pursuant to the Madrid Agreement Concerning the International Registration of Marks and its related Protocol (the “Madrid System”)—a multilateral framework that permits its members to efficiently and cost-effectively apply for, register, and manage their trademarks in multiple jurisdictions across the globe using a single, centralized system. One of the primary benefits of that system is that it allows trademark owners to pursue a single ‘international’ trademark application that, if approved, provides the applicant with trademark rights in any of the Madrid System’s member jurisdictions designated in the application.

Both the European Union and the United Kingdom are members of the Madrid System, but because an EU-level trademark registration protects the use of a mark in all European Union member states, international applicants will often choose to designate the European Union as a whole, instead of the United Kingdom or any other individual EU member state, in their international applications.

The same is true of trademark owners who have not elected to pursue international trademark registrations but still wish to do business throughout Europe. Often, they will seek trademark protection from the European Union Intellectual Property Office instead of pursuing multiple duplicative applications in each EU member state. These otherwise prudent and cost-effective trademark protection strategies, however, are now being challenged by ‘Brexit,’ as owners of international or EU-level marks are left to question whether their rights will be protected in the newly independent UK.

The Effect of the U.K.’s Withdrawal Agreement

The answer to many of their concerns is included in the Withdrawal Agreement between the EU and the UK, which entered into force on February 1, 2020. The Agreement provides that, for any EU-level trademark registration or international registration designating the EU, the UK will create a duplicate or ‘cloned’ registration in the UK’s national registry, affording registrants the same trademark rights they would have had in the UK pre-‘Brexit.’ But this ‘cloning’ process will not come without certain costs to the trademark owner. The newly created UK registration cannot be managed or maintained through the EU Intellectual Property Office or the International Register. Indeed, UK trademark attorneys are not even permitted, post-‘Brexit,’ to serve as representatives at the European Union Intellectual Property Office. Instead, ‘cloned’ registrations must be managed directly through the United Kingdom’s national office and will require the retention and appointment of a UK-approved trademark attorney or representative.

A ‘cloned’ entry will also not be provided in the UK registry for any international or EU-level applications that were ‘pending’ as of December 31, 2020. As such, should the owner of a pending international application designating the EU hope to receive trademark protection in the UK, they will need to appoint a UK-approved trademark attorney or representative to prepare and file a parallel application with the UK Intellectual Property Office before September 30, 2021, or they will be unable to assert the ‘priority date’—the date that their trademark rights are deemed to begin—provided for in their existing EU or international application.

What’s Next?

An open question is what effect the UK’s exit from the European Union will have on applications or registrations that are subject to a challenge by third parties. EU trademark applications and registrations may be challenged by any third party if: (1) that third party has made relevant use of a similar trademark in the EU prior to the date of a potentially infringing trademark’s application or registration; and/or (2) the registrant has failed to make use of its mark in the EU for a period of five years. The UK’s Withdrawal Agreement provides that even EU trademark registrations that are subject to a third-party challenge will be ‘cloned’ in the UK trademark registry and that UK applications duplicating pending EU applications that have been challenged may be filed before September 30, 2021. But the Agreement is silent with respect to the duplication of pending opposition or cancellation proceedings, suggesting that duplicate challenges will need to be filed by third parties against the UK ‘clones.’

The UK has indicated that should a ‘cloned’ EU trademark registration be successfully challenged by a third party and removed from the EU registry, the UK ‘clone’ will also be removed. But it is unclear whether that is true or legally permissible when the challenge to the EU trademark is based upon the use or non-use of a trademark only in EU member states, but not the UK, or when multiple grounds for the challenge have been alleged. Prior to ‘Brexit,’ such a challenge would result in the cancellation of an EU-registered trademark but not a UK registration. As such, the effect of a similar challenge to a UK ‘clone’ is not at all clear, and the UK Intellectual Property Office has left the majority of these disputes to be resolved at its own discretion, noting that: “[i]f a third party sends . . . a cancellation notice [regarding an EU registration] . . . [t]he IPO Tribunal will then decide whether or not the trade mark should be cancelled in the UK.”

Given the ongoing value of trademark rights to modern businesses and the fact that the UK serves as an important trade hub for a number of international businesses operating in the EU, the UK should issue clear guidance regarding the grounds upon which a third-party challenge to a trademark application or registration will be accepted in the UK. Otherwise, businesses will be forced to inefficiently adjust their practices based upon the case-by-case rulings of the UK Intellectual Property Office, generating unnecessary instability for trademark owners in an already tumultuous time. 

Autonomous Vehicles May Never Become “Self-Aware,” But That Doesn’t Mean They’re Not Coming for Your Job

By: Mason Hudon

“Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 AM, Eastern time, August 29th. In a panic, they try to pull the plug.” – Terminator 2: Judgment Day

The Insidious Issue: Worker Displacement

True driverless cars will inevitably begin to find their way onto American roadways within the next decade, and the touted benefits are substantial. From saving time to reducing traffic fatalities, companies developing these technologies, including Tesla, Waymo, Ford, and Nissan, have rested their case for driverless cars on reveries of a futuristic tomorrow in which one can press a button and be whisked off to one’s destination without a second thought. Sounds nice, doesn’t it?

It’s important to note that personal autonomous vehicles are likely not the first experience that the average American will have with this technology. Companies like Peloton Technology (no, not the exercise bike) and Locomation are making sure of that. These corporations are developing and refining a revolutionary process called “platooning” whereby two or more semi-trucks move together in a tight line while being piloted autonomously, with only the truck at the very front of the platoon piloted by a person. Platooning is poised to upend the freight industry and may lead to increased efficiency and decreased costs for major players in the space. According to Caleb Weaver, Uber’s Director of Public Affairs for the West Coast, the freight industry will undoubtedly be the first to feel a true and deep impact from AV innovation.

With every bit of news about the perils of autonomous vehicle technology causing accidental deaths, allowing source code to be hacked, or increasing traffic congestion in the short-to-medium term comes reassurance from the industry that software will get better, cybersecurity will become more robust, and in the long run, a driverless vehicle world will benefit society. Most of the time, manufacturers of autonomous vehicles respond to criticism with great aplomb. But, crucially, the autonomous vehicle world struggles to find answers to a serious criticism of the industry as a whole: widespread worker displacement in the transportation industry.

As the AV industry pushes ever forward, it falls to American lawmakers to act on this issue soon. The consequences of waiting much longer might be dire.

Despite a recent shortage of truck drivers in the United States, approximately 3.5 million people continue to drive freight, making trucking one of the largest sources of employment in the nation. Meanwhile, Uber, America’s most popular rideshare company, is said to contract with upwards of 1 million gig workers per year. Additionally, there are about 207,000 taxi drivers in the United States. All told, this non-exhaustive list of drivers in the United States includes about 4.7 million people, many of whom may not have higher education or experience outside of their work as drivers. If these people lose their jobs to autonomous vehicles, lawmakers are going to be faced with a very tricky unemployment problem.
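The rough total is easy to verify from the figures cited above; a trivial back-of-the-envelope sketch:

```python
# Back-of-the-envelope check of the driver totals cited above.
freight_drivers = 3_500_000    # truck drivers
rideshare_drivers = 1_000_000  # Uber gig workers (upper estimate)
taxi_drivers = 207_000         # taxi drivers

total = freight_drivers + rideshare_drivers + taxi_drivers
print(f"{total:,}")  # 4,707,000 -- roughly the 4.7 million cited
```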

Potential Solutions

It’s doubtful that American lawmakers or industries will act as drastically as Indian Transport Minister Nitin Gadkari, who in 2017 vowed to prohibit driverless cars on Indian roads. Still, slowing the expansion of AV technologies may be part of the approach. In 2017, Microsoft founder Bill Gates proposed a tax on robots that take jobs away from humans, and a similar measure was proposed in the European Union around the same time, although it ultimately failed to become law. In practice, a “robot tax” would incentivize companies to retain human workers for longer, and its proceeds could be used to fund other parts of the transition process, notably job retraining or universal basic income.

Job retraining is a particularly complex issue because it raises questions for different entities at every level of development and rollout. For example, should job retraining programs be funded by government entities or by the private corporations causing the worker displacement in the first place? Should these programs give displaced workers total freedom of choice in deciding their “new careers,” or should they focus on workers who intend to stay in the same industry? The latter question is especially pertinent considering that “[w]hile new jobs will result from the new industry, they’re unlikely to be a direct match for the commercial driving ones that are going away. Those engineering and managing the technology, for example, are not the same folks that are driving buses.” In all likelihood, job retraining for potentially millions of displaced workers will require both private and public investment on a largely unprecedented scale, and the legal framework that facilitates this process will have to be both sophisticated and developed in a timely manner to prevent an economic crisis for the displaced.

Universal basic income (UBI), like job retraining, presents unique challenges, but it places significant economic burdens on governmental entities rather than on private corporations, or worse yet, on private individuals. Indeed, UBI could significantly alleviate some of the growing pains associated with widespread worker displacement by autonomous vehicles. Federally, such a program is unlikely to ever come into existence. State governments, however, might be able to adopt a UBI system, provided they can figure out a viable way to fund it.

The No Solution Contingency

If the tech companies developing these technologies are to be believed, a solution (outside of simple job retraining) might not be necessary. According to Waymo, the elimination of driving jobs signals the creation of jobs for technicians, dispatchers, customer service representatives, and fleet response teams, which will employ roughly as many people as driving does today. These roles, which will likely open up at most companies transitioning to autonomous vehicle fleets, can then be filled by people who are already company employees.

It sure sounds fantastic, but skepticism should remain high. Hoping that this situation will simply work itself out (as Waymo suggests it will) is not a safe road for American lawmakers to take, and they should remain wary of the complexity of the issues at hand. Waymo’s approach downplays the costs and pitfalls of widespread job retraining, and many other autonomous vehicle companies seem confident this issue will not be their cross to bear. Before actual job losses from the AV industry materialize, American lawmakers should begin preparing for the inevitable through proactive legislation that addresses these issues directly.

The debate about Lethal Autonomous Weapons Systems has reached a fever pitch, but the military’s artificially intelligent weapons remain under-regulated and under-defined

By: Zoe Wood

Recently in autonomous weapon news

“Without effective AI, military risks losing next war” reads the title of a November 2019 press release by the Department of Defense. Artificial intelligence, the press release explained, is the Department of Defense’s top priority for tech modernization.

The American military uses artificial intelligence in many ways, perhaps most controversially as a component of lethal autonomous weapons systems, or LAWS. LAWS have long been debated, but 2020 saw a frenzy of high-stakes discussion about their use and development. This discussion starts with the military’s recently professed goal of advancing its arsenal of LAWS, namely by making them more autonomous.

For example, the general who oversees defense against missile threats and air-based attacks has professed his desire to automate missile detection systems in response to ever faster and more powerful weapons. To that end, he wants to “move humans further out in the decision-making loop.” What does this mean, exactly? The rest of this post will explain, but briefly, it means taking decisions out of the hands of people and leaving these decisions—including decisions to use deadly force—to artificially intelligent systems.

By way of response, Human Rights Watch, an international non-governmental organization, released a report calling on nations to develop an international treaty that keeps the use of force under the strict control of human decision making. The report advocates for national laws and policies that commit nations to retaining “meaningful human control” over weapons and that establish bans on developing, producing, and using fully autonomous weapons.

What makes a weapon autonomous?

The answer is not entirely clear. Weapons systems come with varying degrees of autonomy. At the lowest level of autonomy are “human-in-the-loop” weapons systems. These are only semi-autonomous, which means that they can only engage targets or groups of targets that have been specifically selected by the person operating the weapon. One step up, “human-on-the-loop” systems can select targets by themselves and make the decision to engage—e.g., fire upon—those targets. However, “human-on-the-loop” weapons are not considered fully autonomous because they are designed to give human operators the time and opportunity to intervene and end an engagement. In other words, they are designed to be fairly closely monitored by people. Finally, “human-out-of-the-loop” systems are classified by the Department of Defense as fully autonomous. This means that, once these types of weapons are activated, they can identify, select, and engage targets without intervention by a person.
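To make the framework concrete, it can be reduced to two questions: does a human select each target, and can a human intervene once the system is engaged? The sketch below is a purely illustrative toy model of that reduction, not any official Department of Defense taxonomy or tool.

```python
# Illustrative only: a toy model of the three "loop" categories described above.
from enum import Enum

class AutonomyLevel(Enum):
    HUMAN_IN_THE_LOOP = "semi-autonomous: a human selects each target"
    HUMAN_ON_THE_LOOP = "supervised: system selects targets, a human can intervene"
    HUMAN_OUT_OF_THE_LOOP = "fully autonomous: no human intervention after activation"

def classify(human_selects_targets: bool, human_can_intervene: bool) -> AutonomyLevel:
    """Apply the two-question reduction of the framework described above."""
    if human_selects_targets:
        return AutonomyLevel.HUMAN_IN_THE_LOOP
    if human_can_intervene:
        return AutonomyLevel.HUMAN_ON_THE_LOOP
    return AutonomyLevel.HUMAN_OUT_OF_THE_LOOP

# A loitering munition launched without a preselected target, but with some
# nominal chance of operator intervention, lands "on the loop" -- even though
# its behavior may be far more autonomous in practice.
print(classify(human_selects_targets=False, human_can_intervene=True))
```

As the next paragraphs show, real systems strain even this simple two-question model.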

These three classifications provide a useful framework, but not all weapons systems fall squarely within one of the three categories. For example, Israel’s Harpy weapon hovers between the upper tiers of autonomy. While it is commonly activated with specific and finite objectives already programmed in, the Harpy has the ability to “loiter” for up to two-and-a-half hours after deployment, which gives it a degree of indeterminacy and autonomy: it does not need to be launched with a specific target and location. Rather, once launched, it can search for enemy radars across a range of up to 500 kilometers. These capabilities allow the Harpy to find and engage targets of which its human operator was not even aware.

By contrast, America’s ATLAS—the Advanced Targeting and Lethality Automated System—cannot initiate force because it simply does not have a physical connection to a trigger mechanism. ATLAS is therefore part of a human-in-the-loop system because it provides information, acquired by artificial intelligence, to a human who may then decide to initiate force. However, army acquisition chief Bruce Jette has said that the army may explore converting ATLAS to a human-on-the-loop system. ATLAS’s increased autonomy would look like this: a human officer reviews surveillance data and subsequently clears a platoon of robots to open fire on a group of targets.

That the three classifications of autonomous weapons fail to accurately categorize two of the world’s most prominent autonomous weapons suggests that a new definition system is necessary. It is misleading—and will lead to ineffective regulation—to classify a weapons system like the Harpy as only semi-autonomous when it has the ability to independently select and engage targets. Crucially, the definition of a fully autonomous weapon should err on the side of over-inclusivity so that weapons like the Harpy do not escape strict regulation. Generally speaking, it is essential to develop a clear and accurate system for classifying levels of autonomy that can operate both nationally and internationally; such a system of definitions is the foundation of an adequate regulatory framework.

How are autonomous weapons currently governed?

Today, as LAWS actively push the outer boundary of semi-autonomy, very little governs their use and development. While International Humanitarian Law (IHL) bans weapons that are indiscriminate or that cause unnecessary suffering, it does not explicitly ban autonomous weapons, and there is no guarantee that autonomous weapons fall into either of these two banned categories. Moreover, no treaty or principle of customary international law explicitly bans autonomous weapons, and there is no indication that such a treaty is on the horizon. As of 2019, most major military powers, including the US, UK, Australia, Israel, and Russia, oppose new international regulations on the development or use of autonomous weapons. They argue that existing IHL is sufficient to regulate weapons systems with increasing levels of autonomy, despite the fact that IHL makes no specific mention of LAWS. A UK Ministry of Defense spokesperson even suggested that LAWS defy regulation because there is “still no international agreement on the characteristics of lethal autonomous weapons systems.” That statement only underscores that a definition system for levels of autonomy is key, and it need not be as complicated as the spokesperson suggests.

In the U.S., Department of Defense Directive 3000.09 governs autonomous and semi-autonomous weapons. This directive dictates that “autonomous and semi-autonomous weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” However, the policy does not define “appropriate levels of human judgment.” In addition, Section 4.c(2) of the policy limits autonomous weapons to defense purposes and explicitly bans autonomous weapons from selecting human targets. However, Section 4.d of the policy allows Section 4.c(2) to be overridden if two deputy secretaries, of policy and technology, approve the use.

Most recently, on February 25, 2020, the Department of Defense adopted five Principles of Artificial Intelligence Ethics which apply not specifically to LAWS but to the use of artificial intelligence “in both combat and noncombat situations.” These principles require that artificially intelligent systems be (1) responsible, (2) equitable, (3) traceable, (4) reliable, and (5) governable.

While these principles are on the right track, they are not contained within a statute or directive and are therefore not binding. They are also extremely vague. For example, the Department of Defense has defined “responsible” as “exercis[ing] appropriate levels of judgment and care while remaining responsible for the development, deployment and use of AI capabilities.” Similarly, “governable” means that “[t]he department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.”

Interestingly, these principles, particularly “governable,” can be read as an acknowledgement on the part of the US that LAWS should be governed by more than existing IHL. But the principles are essentially meaningless, and there is no indication that the US plans to engage in meaningful regulation of LAWS. This is unacceptable. Even if the US stops short of banning any development or use of autonomous weapons, as proposed by Human Rights Watch, it must at the very least enact binding legislation that clearly defines key concepts such as autonomy and “appropriate levels of human judgment,” and that bans outright, with no exceptions, the use of lethal force on a human by a fully autonomous weapon.

Oh Deere: Precision Agriculture and the Push for Rural Farmers to Adopt New Technologies


By: Savannah McKinnon

Roughly 29% of farmers in the United States have no internet access. Older farmers and ranchers, especially, rely on experience to determine the amount of fertilizer or water necessary to sustain the year’s harvest. As a result, precision farming, a data-driven method in which farmers gather historical field data and use it to make management decisions, has seen slow adoption. Compounding those slow rates, the FBI released a memo in 2016 warning that farmers adopting precision agriculture tools were at risk of having their digitized data held for ransom by “hacktivists” coming after GMOs. While precision agriculture could enable a more resilient food system, its platforms must be reshaped to reassure farmers slow to adopt new technology.

History of Precision Agriculture

Precision agriculture started in the 1960s, when farmers would collect and log their data, then make decisions based on that data. By 1990, GPS technology was available for farmers to store information such as planting field position. Precision farming data allows farmers to make informed management decisions that shape farm marketing, production, and growth. However, with farm data storage come data privacy concerns.

On top of “hacktivists” attempting to hack John Deere or Monsanto precision agriculture systems, third-party issues may arise when farmers sign contracts. Confidentiality agreements may privatize data, but most contracts offer no guarantee that John Deere or Monsanto will not share this data with third parties. The Personal Information Protection and Electronic Documents Act of 2000 was meant to address data privacy pertaining to farmers by preventing the exposure of private data in commercial activity. The Act also sets out ten privacy principles, but larger agribusinesses are bypassing it through complicated contracts. These agribusinesses have an incentive to bypass privacy laws so they can use farmer data to develop new technologies and perpetuate market manipulation tactics. Monsanto production contracts with farmers, for example, include terms allowing Monsanto to keep farmer data even after the contract ends. Farmers essentially pay for precision agriculture services while receiving no monetary benefit from agribusinesses’ use of their data.

Usefulness of Precision Agriculture

Nevertheless, precision agriculture data is vital to farmers’ efficiency and profitability. Precision farming systems use yield mapping to help maximize harvest yields; a geographic information system and global positioning system, available in all newer John Deere tractors, collect geospatial information to record data in fields; and variable rate technology allows farmers to record and apply different rates of fertilizer at various locations on their property. This valuable information supplies farmers with the data to maximize their farms’ efficiency, sustainability, and profitability. Precision agriculture data is also collected through drones, smart irrigation systems, robotics, and artificial intelligence. Financial technologies, in turn, are now being used to democratize the agricultural market, and widespread precision agriculture would make farming more accessible. These technologies, when implemented properly, have numerous benefits, including producing healthier foods at a lower cost, making cheap produce more accessible for low-income households, and decreasing topsoil erosion.
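As a purely hypothetical illustration of how variable rate technology works in principle, the sketch below turns a yield map into a per-cell fertilizer prescription. The formula, rates, and field values are invented for illustration; real prescriptions are generated from agronomic models, soil sampling, and the geospatial data described above.

```python
# Hypothetical variable rate technology (VRT) sketch: apply more nitrogen
# where last season's yield fell short of the target. All numbers invented.

yield_map = {  # (row, col) grid cell -> last season's yield, bushels/acre
    (0, 0): 180, (0, 1): 150,
    (1, 0): 120, (1, 1): 165,
}

TARGET_YIELD = 180   # bushels/acre the farmer is aiming for
N_PER_BUSHEL = 1.1   # hypothetical lbs of nitrogen per bushel of shortfall
BASE_RATE = 100.0    # hypothetical baseline nitrogen rate, lbs/acre

def prescription(last_yield: float) -> float:
    """Per-cell nitrogen rate: baseline plus a top-up proportional to shortfall."""
    shortfall = max(0.0, TARGET_YIELD - last_yield)
    return BASE_RATE + N_PER_BUSHEL * shortfall

for cell, last_yield in sorted(yield_map.items()):
    print(cell, f"{prescription(last_yield):.0f} lbs N/acre")
```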

Conclusion

Today, over 70% of North American farmers use precision agriculture, but fewer than half use the software necessary to analyze the data they collect. This is partly because farmers fear a lack of autonomy over their data, and partly because farmers are generally slow to adopt new technology.

To resolve data autonomy challenges, advocates for farmer data privacy have asked Congress to consider data privacy legislation similar to the Health Insurance Portability and Accountability Act (HIPAA). Such legislation would provide federal protections for precision agriculture by imposing field data safeguards on business associates and others with access to farmer data. It is essential that any policy protecting precision agriculture designate an individual to oversee a corporation’s compliance with data privacy principles. While this sort of data privacy approach has floated around Congress, no formal solution has been prioritized.

Congress did, however, prioritize the Precision Agriculture Connectivity Act of 2018, which aims to increase internet access among farms and thereby improve access to precision agriculture technologies. But the task force created by the Act is merely performative at this time, signaling that further action is necessary to protect farmers’ data.

Precision agriculture is a necessary investment for farmers seeking to conserve soil and water. However, Congress needs to address the lack of strong policy and oversight protecting that data. With a guarantee of data protection, farmers could open up to the idea of adopting this new technology, one that would solve a whole host of problems on farms by increasing efficiency and promoting multi-crop farming.

Parler is Not an Enigma: Section 230 Applies to Antitrust Claims

By: Tallman Trask

Parler’s antitrust lawsuit against Amazon has been widely derided. Professor and noted antitrust scholar Herbert Hovenkamp commented that the suit was not “going to fly” because “there really aren’t any facts” in the complaint to support the kind of conspiracy Parler is alleging. TechDirt called it “laughably bad.” Reuters described what it called the suit’s “hollow core,” quoting experts who suggested a complete lack of any antitrust problem in the facts Parler alleges. Finally, Judge Barbara Rothstein pointed to the lack of evidence presented when she denied the company’s motion for a preliminary injunction, suggesting that the evidence Parler had presented did not meet the Twombly standard, which requires that an antitrust complaint allege a conspiracy that is not merely conceivable but plausible, and include “enough factual matter to suggest an agreement” (reporters following the case described the Judge as “not impressed”).

But no matter the merits of the suit itself, there is one aspect of Parler’s filings that sits at the intersection of several popular and trendy topics regarding Big Tech and the law: Section 230 of the Communications Decency Act and the Sherman Act. Section 230 allows interactive computer service providers to escape liability for removing content they find “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” And the Sherman Act, as it applies here, prohibits every “contract, combination . . ., or conspiracy, in restraint of trade or commerce among the several States.” In a supplemental filing, Parler claims that Amazon cannot be “immune under Section 230 of the Communications Decency Act” (as Amazon has claimed they are) because Parler’s “federal and state claims all are based on allegations of anticompetitive conduct.” That is, Parler says Section 230 immunity does not extend to cover antitrust claims, at least not in the Ninth Circuit.

Parler is wrong.

There is No Blanket Antitrust Exception to Section 230

In making their claim, Parler relies on Enigma Software Group USA, LLC v. Malwarebytes, Inc., a 2019 case in which the Ninth Circuit looked at the overlap of Section 230 and antitrust law. As applicable to Parler’s claims, the facts in Enigma are simple: Malwarebytes, a provider of security software, changed its system and began to flag a competitor’s products as security risks. It then encouraged users, through pop-up warnings and other means, to neither download nor install the competitor’s software. The competitor, which did not similarly flag Malwarebytes’ products, sued, claiming that its products were not security risks and that Malwarebytes was acting not out of concern for the security of its customers, but out of “anticompetitive animus.” Malwarebytes, in turn, claimed that the allowance for removal of “otherwise objectionable” content in Section 230 provided it with immunity from the claims. The Ninth Circuit disagreed, holding that “immunity under § 230 . . . does not extend to anticompetitive conduct.”

Parler’s filing interprets the holding from Enigma as prohibiting any claim of immunity under Section 230 whenever there is an allegation of anticompetitive behavior. That is not, however, what the Ninth Circuit held, and there are clear differences between Enigma and Parler’s claims. First, while the Ninth Circuit has “held that ‘immunity under [§ 230] does not extend to anticompetitive conduct,’” the holding is limited. It merely clarifies that where “a provider’s basis for objecting to and seeking to block materials is because those materials benefit a competitor,” the provider is not entitled to immunity under Section 230. In other words, the Ninth Circuit held that Section 230 immunity does not extend to cover moderation driven by anticompetitive desires. That is not, however, the equivalent of holding, as Parler claims, that Section 230 cannot and does not cover any conduct wherever there is a claim that said conduct was potentially anticompetitive. At most, the Ninth Circuit has held that there is some conduct which is so purely anticompetitive, so clearly outside the bounds of the intent of the “otherwise objectionable” exception in Section 230, that it cannot possibly fit within Section 230 immunity. The court has not, however, ruled that Section 230 immunity evaporates where a provider responds to content clearly within the “otherwise objectionable” category (as the hate speech, violent threats, and other content on Parler’s site was) simply because moderating or removing that content, or cutting off user access to its services, may have some potential anticompetitive effect.

While the Ninth Circuit’s analysis of the interaction between Section 230 and the Sherman Act is more extensive than that which other circuits have undertaken, other courts have broadly agreed with the Ninth Circuit. For example, the D.C. Circuit, considering a slightly different claim made under both Section 1 and Section 2 of the Sherman Act, concluded that Section 230 immunity was warranted. Writing for the court, then Chief Judge Merrick Garland concluded that the “complaint [was] barred by § 230 of the Communications Decency Act,” while noting “that immunity is not limitless” and in some cases Section 230 may not apply. Further, a view of Enigma which holds that Section 230 applies, but is not unlimited, meshes with earlier Ninth Circuit interpretations of the applicable law.

While past decisions clearly suggest that Section 230 immunity can apply in at least some antitrust contexts (and should apply in the context of Parler’s suit), Parler’s suit also differs from Enigma in at least one other important way. While Enigma was a dispute between direct competitors, Parler’s dispute with Amazon is between a service provider and a company that purchases the service, a distinction that made Enigma different from earlier decisions but did not displace the earlier interpretation that Section 230 applies, within limits, to antitrust claims. Moreover, there was a genuine dispute in Enigma over whether the competitor’s software was actually “objectionable,” while there is no question that content on Parler’s site was objectionable, a contention supported by dozens of screenshots filed with the court by Amazon, clearly showing vile content that Parler has not countered.

While Enigma does address the space where Section 230 overlaps with antitrust law, it does not hold that immunity ends where anticompetitive effects potentially begin. Rather, the Ninth Circuit has been more limited in its conclusions. Parler’s claims that Amazon cannot enjoy Section 230 immunity do not fit within the bounds of the law, and they do not fit within the Ninth Circuit’s understanding of the limits of Section 230.