Game Over: Courts Crack Down on Video Game Cheating

By: Quinn Weber

Although cheating in a game might seem too innocuous a matter for the federal courts, the legal system has long shown an interest in preserving fairness between players in a variety of contests. In Nevada, N.R.S. 465.083 makes cheating at “any gambling game” in a casino a crime punishable by a prison sentence. Similarly, 18 U.S.C. § 224 makes it a federal crime to seek to influence the outcome of any “sporting contest” by bribery. Now, courts are extending the law’s interest in competitive integrity into the digital realm by siding with video game companies in their lawsuits against businesses that create and sell cheating software.

Background

Cheating in video games has been going on for decades. But how does one cheat at a video game, and why do companies care enough to get the courts involved? Though cheating can take many forms, players of popular online games such as Fortnite or Call of Duty often download and install third-party software. Once installed, this software modifies the game’s code, giving players abilities the developer never intended. So what? If a player uses that software against others in a competitive online multiplayer mode, they gain an unfair advantage. Different programs provide different advantages. So-called “aimbots” fully or partially automate the process of aiming and shooting in shooter games such as Call of Duty, while “ESP hacks” let players see information about their competitors that the game deliberately hides.
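To make the mechanics concrete, here is a minimal, purely illustrative sketch of the geometry an aimbot automates. Everything in it is invented for illustration: a real cheat would read the coordinates below out of the running game’s memory rather than from hard-coded values, and that unauthorized access is precisely what developers object to.

```python
import math

def aim_angles(shooter, target):
    """Return the yaw and pitch (in degrees) that point a shooter at a target.

    This trivial geometry is the heart of what an "aimbot" automates:
    instead of the player steering the crosshair, the cheat computes the
    exact view angles and snaps the camera to them.
    """
    dx, dy, dz = (t - s for s, t in zip(shooter, target))
    yaw = math.degrees(math.atan2(dy, dx))                    # horizontal rotation
    pitch = math.degrees(math.atan2(dz, math.hypot(dx, dy)))  # vertical rotation
    return yaw, pitch

# Hypothetical, hard-coded coordinates (x, y, z). A real cheat would read
# these out of the game's memory, which is the unauthorized modification
# and circumvention that developers sue over.
shooter_pos = (0.0, 0.0, 1.8)
target_pos = (25.0, 10.0, 1.8)
print(aim_angles(shooter_pos, target_pos))
```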

Though some players may enjoy using these cheats, it is certainly not all fun and games for the developers who create and maintain online multiplayer games. Developers worry that cheaters will degrade the experience of ordinary players, driving them to play less or quit altogether. That could be costly: developers often make as much money from players’ continual in-game purchases as from sales of the game itself. Developers have long banned individual players they catch cheating, but in recent years they have gone a step further.

The Lawsuits

Over the last few years, several major video game developers, such as Bungie (Halo, Destiny), Ubisoft (Assassin’s Creed), and Epic Games (Fortnite), have done more than just ban players. They have also aggressively pursued litigation against the creators of cheat software, seeking to put them out of business. These lawsuits have, by and large, been remarkably successful. Between 2021 and 2023, Bungie, which has been particularly aggressive about suing cheat makers, triumphed in two separate lawsuits against groups that created cheating software for the popular video game Destiny. Bungie received damages totaling over sixteen million dollars, and the court enjoined the groups from distributing any more cheating programs. In February 2023, Call of Duty developer Activision won a similar suit against a cheat developer in federal court in California, resulting in a million-dollar judgment and an injunction.

These victories are the result of aggressive litigation tactics that combine several causes of action. One of Bungie’s recent lawsuits, against cheat maker Elite Boss Tech Incorporated, included eight distinct claims ranging from copyright infringement and tortious interference with contractual relations to civil conspiracy. This is typical of such suits: because no statutory framework is explicitly aimed at preventing and punishing video game cheating, companies and their legal departments pursue a variety of theories and hope that some stick. Although Bungie’s suits have succeeded overall, courts have not been persuaded by every claim. In the Elite Boss Tech case, the Western District of Washington dismissed Bungie’s claim that the defendant had violated the Washington Consumer Protection Act, and in another anti-cheating lawsuit Bungie’s claim of common law copyright infringement failed. The most potent and reliable legal tool available to video game developers is the Digital Millennium Copyright Act (“DMCA”). The DMCA, a sweeping federal law governing intellectual property, prohibits circumventing technological measures that control access to a copyrighted work. This prohibition has been extremely useful for developers trying to prevent cheating, because cheat makers must design their software to circumvent the security protocols that developers implement to maintain fairness.

Looking Ahead

Although developers such as Bungie have won some recent high-profile victories in court, their battle to keep cheating out of their games will remain difficult. Cheat software is typically sold and distributed over the internet, so jurisdiction can be an obstacle for developers looking to bring cheat makers into court. In Bungie, Inc. v. Thorpe, for example, Bungie’s motion for early discovery was denied for lack of personal jurisdiction: the defendants were not residents of the forum state, and the court found that their activities were not aimed at any forum in particular. It can be difficult for potential plaintiffs even to identify cheat makers, as cheating programs are often created by small teams of skilled software engineers operating their illicit online businesses under pseudonyms. Ultimately, the global gaming industry is worth over 200 billion dollars and is projected to keep growing in the coming years. With so many people worldwide playing video games, the market for cheats will persist, and the battle to eliminate them won’t end anytime soon.

Alice in Algorithm-land: Legal recourse for victims of content-recommendation rabbit holes

By: Cameron Eldridge

There was a time, early in the social media era, when all anyone could tell about you from your feed was who you followed: friends, family, preferred news networks, favorite TV shows, or bands. But content-recommendation algorithms, once used only for advertising, are now the backbone of social media platforms, determining what users see and when they see it.

The content-recommendation algorithms used by Facebook, Instagram, Twitter, and TikTok have one goal: maximizing user engagement, which means showing users whatever will keep them looking. This can benefit users: like one video of an adorable baby animal and the algorithm feeds you more. But it can also be dangerous, because a single interaction with content about mental illness or a terrorist organization can send users spiraling down a rabbit hole, slowly distorting how they view themselves and how they interact with the world. Unfortunately, because of Section 230, when users or their loved ones fall victim to these rabbit holes, they are often left with no one to legally blame.
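Under the hood, engagement-maximizing recommendation is, at its core, ranking by a predicted-engagement score, and the rabbit-hole effect falls out of the feedback loop between that score and the user’s history. The sketch below is a deliberate oversimplification, with an invented catalog, topics, and scoring rule; real platforms use learned models over millions of items, but the loop works the same way.

```python
from collections import Counter

# A toy catalog of (video_id, topic) pairs. IDs and topics are invented;
# a real platform scores millions of items with learned models.
CATALOG = [
    ("v1", "baby_animals"), ("v2", "baby_animals"),
    ("v3", "cooking"), ("v4", "self_harm"), ("v5", "self_harm"),
]

def recommend(history, k=3):
    """Rank the catalog by affinity to topics the user has engaged with."""
    affinity = Counter(topic for _, topic in history)  # engagement signal
    ranked = sorted(CATALOG, key=lambda item: affinity[item[1]], reverse=True)
    return [video_id for video_id, _ in ranked[:k]]

# A single interaction with a sensitive topic tilts every later feed:
print(recommend([]))                     # no history: arbitrary catalog order
print(recommend([("v4", "self_harm")]))  # one click: that topic now dominates
```

The point of the sketch is that nothing in the objective distinguishes a baby-animal rabbit hole from a self-harm one; the ranking rewards whatever was engaged with last.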

Shattering the Section 230 shield

Section 230(c)(1) of the Communications Decency Act immunizes “interactive computer services” like social media platforms from liability for publishing content created by another party. Historically, Section 230 has served as a shield protecting social media platforms from any and all liability for harmful videos, comments, and posts made on their platforms. So when a Louisiana teen’s family sues Meta because she killed herself after being fed content about suicide and self-harm, or when the family of a ten-year-old who choked themselves to death while participating in a TikTok challenge sues TikTok, the companies can avoid any consequences. If victims of the algorithm want any chance at holding social media platforms accountable, they will need a more creative legal strategy than content-based attacks.

A flaw in the design

A recent products liability claim against Meta, brought by the Social Media Victims Law Center on behalf of plaintiff Alexis Spence, attempts to hold Instagram accountable by arguing that Instagram’s feed and explore features are defective by design. Spence, who was eleven years old when she first started using Instagram and who, now twenty, suffers from severe mental illness, claims that these design features are the but-for cause of her injuries. While it is too early to tell how Spence’s case will pan out, there is supporting precedent in another recent case, Lemmon v. Snap, Inc., in which the court held that Section 230 did not shield Snapchat from a claim that its ‘speed filter,’ another allegedly defective design, caused foreseeable injuries.

Another promising strategy currently being tested is an attack on the recommendation algorithm itself. Next month, in Gonzalez v. Google, University of Washington Law Professor Eric Schnapper will ask the Supreme Court whether Section 230 protects platforms when they make targeted recommendations of information, or only when they engage in traditional editorial functions like publishing or withdrawing content.

Gonzalez is brought on behalf of Nohemi Gonzalez, a 23-year-old U.S. citizen who was studying in Paris in November 2015 when she was murdered in one of a series of violent ISIS attacks that killed more than a hundred people. The complaint alleges that YouTube not only unknowingly published hundreds of ISIS recruitment videos but also affirmatively recommended those videos to users, and that these recommendations go beyond the traditional editorial functions of a publisher that Section 230 textually protects.

Many in the tech world fear that alterations to Section 230 protections of the kind Gonzalez seeks would make social media platforms legally impossible to operate. How would an app like TikTok, built almost entirely on its content-recommendation algorithm, continue to function if it could be held liable for that algorithm’s every consequence? A ruling against Google would certainly change social media platforms as we know them, but it might also force them to take more responsibility for the rabbit holes they send users down. While this would pose a financial and logistical burden, it is one that tech companies like Meta and Google probably can, and should, bear.

Regulating Technology: Can Hermès Secure the Bag?

By: Trent M.C. McBride

It is no surprise that the legal system trails behind technological advancements. Lawyers, judges, and policymakers must wait for technology to mature before attempting to regulate it. Because of this maturation period, the law is estimated to run consistently around five years behind technological developments, leaving new technologies largely unregulated for several years before the legal system ever gets involved.

An example of this lag in the law is the technology of Non-Fungible Tokens, or NFTs. In very general terms, NFTs, in conjunction with blockchain technology, allow digital or real-world assets to be given unique digital identifiers that certify ownership of the asset. These certified assets can then be bought and sold on the open market.
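For a rough intuition of what “a unique digital identifier that certifies ownership” means in practice, here is a deliberately simplified sketch. The ledger class, URI, and party names are hypothetical; real NFTs typically follow on-chain standards such as ERC-721, where a smart contract on a blockchain, not a Python dictionary, maintains the token-to-owner mapping.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class ToyNFTLedger:
    """A stand-in for the token-to-owner mapping an NFT contract maintains."""
    owners: dict = field(default_factory=dict)

    def mint(self, asset_uri: str, owner: str) -> str:
        """Derive a unique token ID from the asset and record its first owner."""
        token_id = hashlib.sha256(asset_uri.encode()).hexdigest()[:16]
        self.owners[token_id] = owner  # the "certificate" of ownership
        return token_id

    def transfer(self, token_id: str, seller: str, buyer: str) -> None:
        """Only the recorded owner may pass the token to a buyer."""
        if self.owners.get(token_id) != seller:
            raise PermissionError("only the current owner can transfer this token")
        self.owners[token_id] = buyer

# Hypothetical URI and party names, purely for illustration.
ledger = ToyNFTLedger()
token = ledger.mint("ipfs://example-metadata.json", "artist")
ledger.transfer(token, "artist", "collector")
print(token, "->", ledger.owners[token])  # the ledger now certifies the buyer
```

Note what the sketch does not do: nothing in the mint step checks whether the underlying asset infringes someone else’s trademark, which is exactly the gap Hermès v. Rothschild tests.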

While NFTs have been around since 2014 and gained popularity around 2018, they have operated with little to no regulatory oversight. The law is finally catching up to this technology, however, and the case Hermès International v. Mason Rothschild (Hermès v. Rothschild) is gaining significant attention.

Hermès v. Rothschild 

Hermès v. Rothschild, currently in active litigation in the Southern District of New York, centers on the legality of online creators and artists replicating tangible, legally protected, real-world assets and marketing them, without permission, as unique digital assets in the form of NFTs.

The defendant, Mason Rothschild, is an entrepreneur who has found success creating digital replicas of the popular Birkin Bag from Hermès. In May 2021, Rothschild created his first Birkin-inspired NFT, the “Baby Birkin.” The NFT went on the market and within eight months sold for $47,000.

On the heels of this success, Rothschild created the project now in dispute: “MetaBirkins.” The project consists of 100 unique MetaBirkins covered in a wide range of colored faux fur. These NFTs sell at varying prices with a floor price of 2.5 ETH, roughly $3,750, and the project has so far generated between $450,000 and $800,000.

Once Hermès caught wind of the project, it took legal action to protect its brand. In its complaint, Hermès alleged seven causes of action: misappropriation and unfair competition under New York common law; common law trademark infringement; injury to business reputation and dilution; cybersquatting; federal trademark dilution; false descriptions and representations; and trademark infringement. The central theme running through each cause of action is that the MetaBirkins brand profits, without permission, from the well-known and highly successful Hermès brand, and that MetaBirkins’ use harms Hermès’s reputation.

Rothschild immediately moved to dismiss these claims, arguing that his actions were protected by the First Amendment. The court was not convinced by this line of reasoning and denied the motion, holding that Hermès’s complaint pleaded sufficient factual allegations to state a claim.

This does not mean that Hermès has won the case; it simply means the litigation will continue. As the case plays out, we may finally gain some clarity on the legality of creating and distributing NFTs based on existing real-world assets.

Are there other examples?

The Hermès litigation is just one example of technology outpacing the legal system. In the grand scheme of things, this dispute will have a relatively small impact on the broader public; NFT law remains a niche area that will likely go unnoticed by most people. But if this lag in legal intervention extends to technologies with broader reach, the consequences could be serious.

Take a moment to consider the implications for medical care of virtual doctor’s visits or mental health apps that supplement treatment. We will all need medical care at some point in our lives, and these technologies will undoubtedly shape that interaction.

Or consider financial technology (FinTech), where companies are changing the way we all use and handle currency. Money is the foundation of the modern world, and leaving this sector open to unregulated manipulation could have global consequences.

Technologies in a wide range of areas are advancing rapidly and changing the world in drastic ways. It is important that we closely monitor these advancements, prevent extended lags in the law, and be diligent in our regulation.