NFT or Bust? – Impact on The Video Game World

By: Joanna Mirsch

The idea of spending real, hard-earned cash in the video game world is not a new concept. Gamers have been making in-game purchases for quite some time now: unlocking new weapons, characters, levels/maps, and more. These purchases have usually been seen as fun perks that allow gamers to tailor their gameplay experience through the content they purchase. However, the growing presence of non-fungible tokens (NFTs) within the video game realm may be an entirely different phenomenon.

What are NFTs?

There are many ways to understand what an NFT is. It is helpful to first look at what the two words mean separately. At its core, a non-fungible item is something that cannot be exchanged for another item of equal value; it is one of a kind. The token refers to a unit recorded on the blockchain, the technology on which cryptocurrencies are bought and sold. An NFT, like bitcoin, ethereum, and dogecoin, is a digital asset recorded on a blockchain, although unlike those cryptocurrencies it does not function as interchangeable money. One of the best perks of blockchain-based assets is that they are nearly impossible to counterfeit. A blockchain is a decentralized ledger of all transactions across a peer-to-peer network. Since every transaction is recorded across this large network, attackers would find it difficult to tamper with the ledger: they would need to control large portions of the network to do any damage.
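That tamper resistance comes from each block recording the hash of the block before it, so rewriting any earlier entry breaks every later link. The following Python sketch is a toy ledger (the transaction strings and genesis value are invented for illustration; no real blockchain works exactly this way):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents, including the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# A toy ledger: each block records a transaction and its predecessor's hash.
chain = []
prev = "0" * 64  # placeholder hash for the first ("genesis") block
for tx in ["alice->bob: 1 token", "bob->carol: 1 token"]:
    block = {"tx": tx, "prev_hash": prev}
    prev = block_hash(block)
    chain.append(block)

def is_valid(chain: list) -> bool:
    """Recompute every link; tampering with any block breaks later links."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

print(is_valid(chain))                   # True
chain[0]["tx"] = "alice->eve: 1 token"   # attacker rewrites history
print(is_valid(chain))                   # False
```

In a real network, many independent peers each hold a copy of the chain, which is why an attacker would need to control a large share of the network to make a tampered history stick.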

What distinguishes NFTs from other digital assets is that their "underlying technology certifies and guarantees the authenticity of a tethered item, raising its value." Moreover, they can be thought of as "unique, digital version[s] of a certificate of authenticity, publicly rubber-stamped by the blockchain." As of February 2021, mere months after coming into the public eye, NFTs had become a booming market whose sales reached $500 million. NFTs are essentially unique proof of ownership over items people cannot tangibly hold in their hands, such as digital works of art, coupons, video clips, etc. NFTs are one-of-a-kind pieces of code that are stored and protected on a shared public exchange. Fordham Law School professor Donna Redel, who teaches about crypto-digital assets, has explained NFTs as the purchase of a code that manifests as images. Notable NFT purchases include artwork, clips of LeBron James dunking, free pies for life from a Los Angeles pizza shop, digital homes, and much more.

Legal issues surrounding NFTs

It is potentially dangerous to allow the sale of these unique items without both the creators and users of NFTs truly understanding the rights granted to token holders. While the purchase of an NFT gives the buyer a unique, one-of-a-kind piece of digital artwork or another product, the buyer does not usually receive the copyright or trademark to the item. Furthermore, purchasing a specific NFT does not mean others cannot purchase endless other versions of it elsewhere online. Therefore, from a copyright perspective, NFTs are digital receipts showing that the owner owns a version of the work but holds none of the exclusive rights, such as reproducing the work or preparing derivative works, that §106 of the U.S. Copyright Act grants to copyright owners. This lack of transparency and awareness behind NFT purchases could pose serious infringement issues. Many individuals purchasing NFTs are not familiar with the legal restrictions relating to copyrighted works. NFTs do not authenticate IP rights. At most, purchasing an NFT gives the purchaser the token itself and the right to use the copyrighted work for personal use.

Due to the immature market and the lack of fleshed-out NFT regulation, it is possible that NFTs will allow infringers to steal intellectual property from its rightful owners. The potential for copyright owners to lose control over their works is a legitimate fear. Numerous artists have already reported discovering that their work was being stolen and sold as NFTs without their permission. A pertinent legal question that remains is whether the first sale doctrine applies to purchases made by NFT owners. This doctrine gives individuals who purchase copies of copyrighted works the right to sell, display, or otherwise dispose of that particular copy. If the doctrine applies, NFT owners would be able to resell their NFTs after purchase without the artist's permission. But the doctrine is likely inapplicable, since NFTs are not the tangible copies contemplated by copyright law. The lack of clear rules surrounding NFTs is likely to allow problems to arise as they grow in popularity.

What do NFTs mean for the gaming world?

One video game company, SEGA, recently announced plans to sell NFTs based on its intellectual property, including its classic and current IPs and upcoming projects, in the summer of 2021. SEGA could sell a digital piece of one of its classic games, such as Sonic the Hedgehog art, to a buyer at an extremely high price. This is one of the reasons that NFTs could become a problem for gamers. Because the sale of NFTs has such a high potential for profit, collectible pieces of classic or limited-edition games, such as the original production sketches of Sonic or a game's original soundtrack, which might otherwise be bundled with games or sold as physical objects, will likely be held back and sold as more profitable NFTs instead. An even greater problem, however, involves possible, and likely, IP infringement through the sale of unlicensed NFTs. DC Comics recently warned creative teams and freelancers employed by DC against unlicensed uses of NFTs after an artist made $1.85 million by selling NFTs of characters he used to draw for DC. The same issue could occur in the video game world. While there are steps that could be taken to push back against unlicensed uses and sales of NFTs, the real question is whether the video game industry truly benefits from involving NFTs in its games.

While the video game industry has persuaded gamers to buy intangible, digital goods for a long time, what is the benefit of merging NFTs with games? Do gamers need this kind of authenticity to play games? Currently, a handful of popular NFT games allow gamers to tokenize their game assets and use them in-game or trade them as crypto-collectibles. Some of these games, such as CryptoKitties, record up to $30,000 worth of daily transactions and more than 8,000 new users weekly. These games are meant to mix thrill with potential profitability. It is possible that NFT-enabled games could provide a boon for the multibillion-dollar video game industry. Games allowing players to buy digital deeds for real estate, in the form of an NFT, have already generated millions of dollars. However, given the growing trend of microtransactions in games, it is a valid concern that NFTs could simply create another pay-to-win structure that incentivizes users to pay large amounts of money to acquire these "authentic" and "unique" digital items. Moreover, what happens if gamers begin selling or otherwise distributing the NFTs they purchase in games? Where do the boundaries lie when it comes to the purchase of NFT content? Perhaps the video game industry is better off not engaging in this new, but potentially problematic, realm of digital assets.

Is Code Killing Copyright?

By: Katherine Czubakowski

Early last month, the Supreme Court released its long-awaited decision in Google LLC v. Oracle America Inc.  The Court found that Google’s unauthorized copying of 11,500 lines of code from Oracle’s Java SE API was fair use because Google took only as much code as it needed to create a new and transformative program. While some argue that this outcome protects fundamental aspects of how code is created and the technology industry, others see this decision as a significant blow to copyright protections. This disagreement comes down to a fundamental question the Supreme Court seems to have side-stepped in this case: whether code should be protected under copyright at all.

An API, or Application Programming Interface, is a list of actions one can take regarding specific software and how one would take those actions. For example, if gardening were software, you could choose the action you want to perform (dig, for example) and how you want to perform that action (with a shovel, a hoe, a pickaxe, your hands, etc.). The Java API in question contains a basic list of common actions (sorting a list, for example) and how those actions are accomplished (alphabetically, numerically, etc.). When Google began developing the Android software used in its smartphones, it wrote its own code to tell the program what to do and how to do it, but copied the declaring code, the part of the program which matches the name assigned to each task with the program necessary to perform the task, from 37 of Java's API packages. By doing so, the programmers working on the Android software could continue using the commands with which they were familiar, such as println() (which tells the program to print the specified text on the user's screen) and LocalDate.now() (which gives the program the user's current date), in their own code, but these commands relied on Google's newly written code to perform the task.
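The distinction described above, between declaring code (the stable names and signatures programmers memorize) and implementing code (the body behind each name, which can be rewritten), can be sketched briefly. The case concerned Java; this Python analogue, with function names invented purely for illustration, only shows the general idea:

```python
# "Declaring code" is the stable name and signature a programmer memorizes;
# "implementing code" is the body behind it, which can be rewritten
# (as Google rewrote the implementations for Android) without changing
# how callers invoke the name.

def sort_numerically(items):
    # One possible implementation; a reimplementation could replace this
    # body while keeping the declared name and signature intact.
    return sorted(items, key=float)

def sort_alphabetically(items):
    return sorted(items, key=str.lower)

# Callers depend only on the declared names, not on the bodies:
print(sort_numerically(["10", "2", "1"]))        # ['1', '2', '10']
print(sort_alphabetically(["banana", "Apple"]))  # ['Apple', 'banana']
```

Swapping in a new body for either function leaves every caller unchanged, which is why Android programmers could keep using familiar commands even though the underlying code was newly written.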

In determining that Google was legally allowed to copy this code, the Court relied on the doctrine of fair use. Although copyright owners generally hold exclusive rights to create derivative works, which are new works based on their own pre-existing work, fair use is a legal exemption that allows someone to use copyright-protected work without the author's permission in certain circumstances. Courts consider fair use on a case-by-case basis and analyze four different aspects of the otherwise-infringing use: its purpose and character, the nature of the copyrighted work, the amount and substantiality of the portion used, and the effect on the potential market for the copyrighted work. In its recent fair use cases, the Court has created a sub-factor that it considers under the purpose and character of the use: transformativeness. A work is considered transformative when it uses the original copyrighted work in an unexpected way or in a way that alters the original meaning or message. Transformativeness weighs strongly in favor of fair use because it encourages creativity and the furthering of the arts. This sub-factor frequently affects all four factors in the fair use analysis and can sometimes even outweigh the other three factors. It can often be difficult to tell whether a transformative use creates a derivative work or falls under the fair use exception.

In Google LLC v. Oracle America Inc., the Court's decision hinged on its finding that Google's use was transformative. The Court first analyzed the nature of the work and found that APIs were fundamentally different from other types of code. Because the declaring code fuses together the uncopyrightable idea of how the code is organized with the copyrightable code which tells the computer how to perform a function, the Court saw the copied code as valuable only as a result of programmers' investment in learning it. Since the copied code did not hold independent value, the Court felt that applying fair use in this circumstance would not undermine general copyright protection for other programs. The Court then turned to the purpose and character of the use, which is where it discussed the work's transformative nature. It found that Google's purpose in using the copied code was "to create a different task-related system for a different computing environment" than the creators of Java had originally intended. Google's use of the code was part of the "creative progress" which the Court saw as copyright law's objective, so it found that the use was transformative. The Court further found that, although Google copied "virtually all of the declaring code needed to call up hundreds of different tasks," it copied a relatively small amount of the total API in question. Because this relatively small portion of the API was tied to a valid and transformative purpose, the Court felt that the third factor weighed in favor of fair use as well. Finally, the Court found that Android was not a market substitute for Java SE because the two products were substantially different. Weighing all these factors together, the Court found that because Google took only as much as was necessary to allow its programmers to use "accrued talents to work in a new and transformative program," Google's "reimplementation of a user interface" was protected by the fair use doctrine.

The Court's analysis and reliance on transformation in this case present a danger to those seeking to copyright their code. Code is fundamentally different from many other works protected by copyright; it combines functionality with creative expression. Unlike traditionally copyrightable works, programs are usually created in a way that relies on previously created code to function. When writing new code, very few programmers actually write code that interacts directly with the computer. Instead, they use one of a number of programs that translate a more readable language, such as Java, into code the computer can understand. Without being able to copy some fundamental aspects of the language, programmers would have to create a new language any time they wanted to write new code. In practice, this means that many different programs with different purposes all rely on the same underlying program(s) to translate their code into a form the computer can understand.

Although the Court likely reached the correct outcome in this case, the repercussions of its decision in other fields damage traditional copyright holders' rights. The Court's transformativeness analysis fails when applied in the context of programming because a program's reliance on other code is a necessary aspect of its creation. Thousands of substantially different programs rely on the same underlying code in order to function. Purely creative expression, however, does not have this same reliance on preexisting works, as evidenced by Congress's grant of derivative works rights to copyright holders. By trying to fit both pure creative expression and functional creative expression under the same body of law, the Court has blurred the line between what is transformative and what is derivative, and has put at risk the exclusive rights guaranteed to copyright owners of traditionally copyrightable works.

Peeved with Your Pre-Order? Part Two: Cyberpunk 2077, Federal Securities Laws, and the Lanham Act

By: Moses Merakov

In addition to Section 5 of the FTC Act and parallel state-level false advertising statutes (as discussed in part one), gamers can potentially pursue false advertising litigation under the Lanham Act and through federal securities laws.

The Lanham Act

The Lanham Act, also known as the Trademark Act of 1946, is the federal statute that governs trademark infringement and dilution, false advertising, and related unfair competition. To win a false advertising claim under Section 43(a) of the Lanham Act, a plaintiff must prove that the defendant made (1) a false or misleading statement of fact; that was (2) used in a commercial advertisement or promotion; that (3) deceives or is likely to deceive in a material way; (4) in interstate commerce; and (5) has caused or is likely to cause competitive or commercial injury to the plaintiff. However, this method of attack is generally unavailable to the general consumer. Only commercial competitors of the defendant, not typical consumers of the defendant's product, can "allege an injury to a commercial interest in reputation or sales," which is necessary to secure standing to sue. See Lexmark v. Static Control. Nevertheless, consumers can typically still pursue a false advertising lawsuit under comparable state laws. As mentioned in part one, Washington State's Consumer Protection Act encompasses false advertising claims and functions similarly, in effect, to the Lanham Act.

Federal Securities Laws

In 2020, Cyberpunk 2077 instantaneously turned from one of the most anticipated games of the year into one of the year's biggest flops. The game arrived on store shelves with such an intense pandemonium of game-breaking bugs that both Sony and Microsoft offered refunds to distraught purchasers of the game, and Sony removed the game from its online digital store. Almost immediately, two different law firms, the LA-based Schall Law Firm and the NYC-based Rosen Law Firm, filed class-action lawsuits against CD Projekt Red, the game's developer, alleging that the company misled its investors. According to both firms, there were violations of §§ 10(b) and 20(a) of the Securities Exchange Act of 1934 and Rule 10b-5 promulgated thereunder by the U.S. Securities and Exchange Commission.

The Securities Exchange Act is complex, but essentially it is a composite of regulations that prevent “manipulative and deceptive” practices in securities trading.  Section 10(b) of the Securities Exchange Act of 1934 [15 USC § 78j(b)] provides that:

“It shall be unlawful for any person, directly or indirectly, by the use of any means or instrumentality of interstate commerce or of the mails, or of any facility of any national securities exchange… [to] use or employ, in connection with the purchase or sale of any security registered on a national securities exchange or any security not so registered, or any securities-based swap agreement… any manipulative or deceptive device or contrivance in contravention of such rules and regulations as the Commission may prescribe as necessary or appropriate in the public interest or for the protection of investors.”

To recover damages in a private securities-fraud action under § 10(b), a plaintiff must prove “(1) a material misrepresentation or omission by the defendant; (2) scienter (a mental state embracing intent to deceive, manipulate, or defraud); (3) a connection between the misrepresentation or omission and the purchase or sale of a security; (4) reliance upon the misrepresentation or omission; (5) economic loss; and (6) loss causation.” See Matrixx Initiatives, Inc. v. Siracusano.

In short, Rosen Law Firm and Schall Law Firm bear a heavy burden in establishing that CD Projekt Red defrauded investors. The firms allege that CD Projekt Red lied to investors about the state of the game and failed to disclose that the game's launch would be a financial catastrophe. To win, the firms would have to prove that CD Projekt Red made false or misleading statements, that it knew the statements were false or misleading and intended to mislead investors at the time the statements were made, and that those misleading statements caused the company to be overvalued. Only litigation and proper discovery will truly tell whether there are enough facts for the firms to succeed, but some legal analysts say there is likely no case.

Conclusion

            Unless you are a “competitor” of the videogame developer or an investor, the Lanham Act and federal securities laws are likely not your best avenue for recovering for that falsely advertised video game you bought. 

Electric Soothsayers: The Ethics of Brain-Machine Interfaces

By: Mason Hudon

“Over one’s mind and over one’s body the individual is sovereign.” – John Stuart Mill

            In mid-April of this year, a company called Neuralink released a video of a male macaque monkey playing a version of the Atari classic game, “Pong”. At first glance, the video appears to be nothing more than a cute gimmick… that is until the viewer realizes that the joystick the monkey is using isn’t even plugged in—the program is being controlled by the creature’s brain by way of a complex, proprietary microchip.

Neuralink, the brainchild of billionaire tech tycoon Elon Musk (better known as the CEO of Tesla, Inc. and founder of SpaceX), develops "breakthrough technology for the brain" known as brain-machine (or brain-computer) interfaces (BCIs). Essentially, Neuralink and other companies like it are seeking to blur the line between human and machine, introducing computer hardware into human brains to do anything from making the world more accessible for disabled communities, to enhancing the video game experience, to "achiev[ing] a symbiosis with artificial intelligence," as Mr. Musk puts it. According to Limor Shmerlin Magazanik, Director of the Israel Tech Policy Institute, "a BCI decodes direct brain signals—colloquially known as the firing of neurons—into commands a machine can understand. Using either an invasive method—a chip implanted directly in the brain—or non-invasive neuroimaging tools, letting the machine pull raw data from the brain and translate it to action in the outside world." While the technological singularity (the merging of human and machine into an inseparable existence) may still be quite far off for the human race, the introduction of BCI technologies that implicate the human brain raises very serious legal and ethical concerns regarding personal autonomy, privacy, and the rights and identities of humans as we currently perceive them.
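To make the quoted description concrete, here is a deliberately oversimplified Python sketch of what "decoding the firing of neurons into commands" might look like. Real BCIs train statistical decoders on many recording channels; the channel names, firing rates, and threshold below are entirely hypothetical:

```python
# Hypothetical decoder: map per-channel firing rates (spikes/second)
# to a cursor command. The channel names and the 5.0 threshold are
# invented for illustration; real systems learn decoders from data.

def decode(firing_rates: dict) -> str:
    up = firing_rates.get("motor_ch_1", 0.0)
    down = firing_rates.get("motor_ch_2", 0.0)
    if abs(up - down) < 5.0:  # no clear intent detected: hold position
        return "hold"
    return "move_up" if up > down else "move_down"

print(decode({"motor_ch_1": 42.0, "motor_ch_2": 11.0}))  # move_up
print(decode({"motor_ch_1": 12.0, "motor_ch_2": 13.0}))  # hold
```

Even this toy version hints at the privacy concern: the raw firing rates fed into the decoder carry far more information than the single command that comes out.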

It's true, "[b]asic neurotechnologies have been around for a while—including technologies like cochlear implants and deep brain stimulation and more complicated brain-computer interfaces," but technologies of the kind that Neuralink and other companies involved in advanced BCI development are seeking to introduce are wholly unprecedented. In fact, Maja Larson, general counsel for the Seattle-based Allen Institute, has expressed that this "commercialization" of formerly purely medical applications for BCIs has never been seen before and risks turning benign research "politicized." When profit margins and the "bottom line" are introduced into an equation that previously sought to solve relatively narrow issues (typically divorced from the idea of revenue generation and solidly situated within the clinical environment), all bets might be off.

Legal regimes and regulations have not been crafted to deal with many of the dilemmas that these technologies will pose, for example: how college admissions should be handled for students that have brain implants that aid them in their school work or allow them to access the internet, how brain data should be protected when a BCI is communicating with a public WiFi network, how advertising will be implicated if companies can detect your needs (like hunger), or, even more complexly, how the regime of intellectual property will be impacted as a whole. Additionally, Scientific American writes “[o]ne tricky aspect is that most of the neurodata generated by the nervous systems is unconscious. It means it is very possible to unknowingly or unintentionally provide neurotech with information that one otherwise wouldn’t. So, in some applications of neurotech, the presumption of privacy within one’s own mind may simply no longer be a certainty.” The legal community will ultimately be tasked with addressing these deep concerns, and efforts should begin sooner, rather than later to develop new laws that preemptively protect against abuses of this technology before it is too late.

Robert Gomulkiewicz, Charles I. Stone Professor of Law at the University of Washington School of Law, discusses in his Legal Protections of Software class that intellectual property protections for software don't always work well because lawmakers in the mid-20th century chose to conform existing IP regimes like copyright, patent, and trademark to novel technologies far different from the items and ideas those regimes had protected before the advent of the computer. Instead of creating sui generis laws that might account for all of the nuances and complexities of software, lawmakers opted for the "easy option" by retrofitting copyright, patent, and trademark laws to fit contemporary needs. Such an "easy option" may work adequately for protecting software when financial concerns are the only issues implicated, but when it comes to the human mind and the privacy of one's own thoughts and emotions, a retrofitted system leaves much to be desired because the stakes are so high. Sui generis laws are thus both a legal and a moral imperative for lawmakers seeking to tackle BCI technologies moving forward. While new statutory regimes can and should draw important aspects from intellectual property and existing privacy regimes into their language, it remains clear that crafting brand-new policy cannot and should not be avoided.

            Given the complexity of the issues inherent in BCI technologies, it will be critical to involve stakeholders from different backgrounds and paradigms including lawyers, engineers, bioethicists, doctors, and perhaps even philosophers in coalescing competing ideologies of the role of BCIs into workable legal doctrine. Particular focus should be directed towards ensuring privacy, equity, autonomy, and safety for those wishing to partake in BCI technologies. Specifically, discussions should concern: (1) securing the protection of the fundamental autonomy of the human mind, (2) securing the protection of the fundamental autonomy of the human body, (3) allowing for the ability of BCI users to control third-party access to their data, (4) ensuring accuracy in the interpretive methods used by software that attempts to translate the data from people’s brains, (5) ensuring disclosure of the use of performance enhancing BCIs in academic and competitive settings, (6) mitigating the effects of hacking and malware on BCIs, (7) elucidating the role and risks of allowing artificial intelligence a role in BCIs as Elon Musk has discussed, and (8) ensuring that people remain psychologically sound after installation. This list is not exhaustive, but these should cover some of the central issues that will underpin the legal framework for BCIs in the future. As with many technical innovations, things move pretty fast, and this means that legal entities need to act now to protect the qualities of human existence that we currently hold dear.

All Bark, No Bite: Washington’s 2021 Facial Recognition Regulation Lacks Enforcement Mechanism

By: Alex Coplan

Taking effect on July 1, 2021, Washington's new facial recognition (FR) law will regulate state and local government use of FR technologies. But will the new law be effective enough to protect your identity? The law serves as a middle ground between privacy advocates and government officials in favor of using FR programs. However, due to vague wording and a lack of oversight, the bill may not always produce its intended results.

On its face, SB 6280 provides proper safeguards to prevent government use and abuse of FR technology. The law includes significant provisions requiring accountability and limitations on use, and signals the Washington legislature’s belief that FR use creates serious policy issues.

First, SB 6280 will apply to all state and local government agencies. This means that agencies operating in Washington must comply with the new law and be subject to its oversight. Some exceptions, however, include the Department of Licensing (DOL) and the Transportation Security Administration (TSA). While these agencies are not subject to SB 6280, they are required to disclose the use of FR technology if located in Washington.

Second, the law requires agencies to provide a notice of intent and produce an accountability report. In other words, government agencies must file notice to obtain and implement FR services. If approved, those agencies must provide accountability reports every two years. These reports disclose the capabilities of the FR system, the data types the system collects, the training procedures and security for protecting data, and how the system can benefit the community.

Third, and perhaps most importantly, SB 6280 requires “meaningful human review” when FR programs create “legal effects concerning individuals or similarly significant effects concerning individuals.” Meaningful human review requires review or oversight by one or more individuals who are trained in accordance with the act. Training includes coverage of the capabilities and limitations of the FR program, and how to interpret the program’s output. The bill considers “legal” or “similarly significant effects” to be decisions that result in the provision or denial of criminal justice, financial services, housing, education, employment opportunities, and other basic civil rights. Accordingly, if an individual faces a significant outcome following government use of FR, they have the right to meaningful human review.

Fourth, agencies must obtain a warrant in order to use FR programs for real-time surveillance. This means government actors may not run FR programs on a live video feed without a warrant. Specifically, SB 6280 prohibits the use of FR programs on body-worn cameras used by law enforcement, meaning police may not use FR programs in the field without judicial authorization.

Companies like Microsoft, which create FR technology, favored and lobbied for the passage of SB 6280. Why, in a bill intended to limit FR use, would Microsoft approve of this legislation? Arguably, SB 6280 may be all bark but no bite. While the law sounds effective, it lacks enforcement procedures.

For example, the accountability reports may not provide much accountability. At this point, the reports are not required to be approved by any regulatory or legislative body. As a result, there is no enforcement mechanism for this provision of the bill. If an agency chooses not to follow the procedure, who will stop them?

Further, the "meaningful human review" provision lacks substantial definition. The subsection defining this phrase is a single sentence, which fails to provide any direction for decision making. Moreover, the provision does not require review from any third party, allowing agencies to review any potential misconduct themselves, with their own employees.

Additionally, SB 6280’s warrant requirement only covers real-time identification, so agencies may freely use FR technology on previously shot footage without obtaining a warrant first. By allowing agencies to engage in this practice without a warrant, SB 6280 subjects the public to unchecked surveillance by law enforcement.

Washington enacted SB 6280 to provide compromise and regulation to an emerging field. However, the law lacks sufficient procedures for enforcing violations. Without those procedures, or a moratorium on FR use, government agencies can abuse these technologies to the detriment of every Washingtonian.