Tick-Tock, TikTok: Could the Party Actually Stop?

By: Enny Olaleye

Over the past couple of years, the United States government has expressed several concerns about the potential security risks associated with TikTok, a popular social media application that has amassed over one billion monthly active users. Some officials have suggested that TikTok could be used by the Chinese government to collect data on American citizens or to spread propaganda.

According to TikTok, the social media app serves as a platform for users to “come together to learn, be entertained, and grow their business, as they continue to create, discover and connect with a broader global community.” TikTok is now one of the largest social networks worldwide, coming in third after Facebook and Instagram. Its international popularity is striking: approximately 150 million Americans alone are active users of the app, nearly half of the U.S. population.

As a result of the app’s widespread popularity in the United States, government officials have become wary about the potential risks the social media app poses. In August 2020, former President Donald Trump issued an executive order that would have effectively banned TikTok in the US unless it was sold to an American company within 45 days. This order cited concerns about national security and the app’s handling of user data, as the US government was worried that user data could be accessed by the Chinese government. Additionally, the US government accused TikTok of censoring content that would be unfavorable to the Chinese Communist Party and allowing misinformation to spread on its platform. 

However, the ban was temporarily blocked by a federal judge and the Biden administration later put the order on hold to conduct a broader review of security risks posed by Chinese technology. In June 2021, President Biden signed an executive order that expanded the scope of previous orders related to Chinese-owned apps and other technology. The order aimed to protect US citizens’ sensitive data and prevent the Chinese government from gaining access to that data through apps like TikTok. In September 2021, the Biden administration announced that it would not pursue a ban on TikTok, but it would continue to monitor the app’s potential security risks.

In response, TikTok has repeatedly denied these allegations, saying that it stores U.S. user data on servers located in the U.S. and that it has never provided data to the Chinese government. The company has also taken steps to distance itself from its Chinese parent company, ByteDance, by appointing a U.S. CEO and creating a U.S.-based data center.

Unfortunately, TikTok’s actions have done little to appease the concerns raised by federal officials and have only led legislators to double down on taking action against the platform. As of March 2023, the Biden administration has called on TikTok’s parent company, ByteDance, to either sell the app or face a possible ban in the United States because of concerns about data privacy and national security. The White House has also signaled its support for draft legislation in the House of Representatives that would allow the federal government to regulate or ban technology produced by some foreign countries, including TikTok. The federal government and multiple states have also already banned TikTok on government devices.

Thus, the question arises: “Can the federal government actually do that?” 

Well—not necessarily. First and foremost, the U.S. government does not have the authority to ban speech, as free speech is a right guaranteed to citizens under the First Amendment of the U.S. Constitution. Posts on TikTok are protected by the First Amendment since they are a form of speech. While the Biden Administration has replaced the Trump order with one that provides a more solid legal ground for potential action, it still cannot override the First Amendment protections afforded to speech on TikTok.

Further, if this issue reaches the courts, proponents of a TikTok ban will most likely claim the national security risks posed by the app are self-evident. In their view, TikTok’s ties to China, combined with Chinese law’s requirement that companies cooperate with all requests from Beijing’s security and intelligence services, create obvious security problems. However, that line of reasoning is unlikely to find support with federal judges, who will weigh the potential security risks against the imposition of real-world restrictions on the rights of 150 million Americans to post and exercise free speech on an extremely popular platform.

Even if judges were to rule that a TikTok ban is neutral when it comes to content and viewpoint, the government would still have to prove that the remedy is narrowly tailored to serve, at a minimum, a “significant government interest,” in order to justify a ban and the corresponding restriction on speech. To ensure narrow tailoring, the Supreme Court developed the standard of strict scrutiny when reviewing free speech cases. To satisfy strict scrutiny, the government must show that the law meets a compelling government interest and that the regulation is being implemented using the least restrictive means. However, narrow tailoring is not confined to strict scrutiny cases, as seen in McCullen v. Coakley, where the Court determined that a Massachusetts law regulating protests outside abortion clinics was not content based and, thus, not subject to strict scrutiny.

As of April 2023, there has been no federal legislation passed that would permit an outright ban of TikTok in the United States. While the First Amendment likely limits the government’s ability to ban the app outright, it could still target TikTok’s ability to conduct U.S.-based financial transactions. That includes potential restrictions on its relationship with Apple and Google’s mobile app stores, which would severely hamper TikTok’s growth. By targeting conduct instead of speech, such a restriction would be outside of the First Amendment’s protections.
Regardless, amid all this government action, one thing has become apparent. As the federal government escalates its efforts against TikTok, it is coming up against a stark reality: even a politically united Washington may not have the regulatory and legal powers to wipe TikTok off American phones.

Liability, Authorship, & Symmetrical Causation in AI-Generated Outputs

By: Jacob Alhadeff

Copyright has insufficiently analyzed causation for both authorship and liability because, until now, causation was relatively obvious. If someone creates a painting, then they caused the work and receive authorial rights. If it turned out that the painting was of Mickey Mouse, then that painter may be liable for an infringing reproduction. However, recent technological advances have challenged the element of causation in both authorship and infringement. In response, recent law and scholarship have begun to address these issues. However, because they have addressed causation in isolation, current analysis has provided logically or ethically insufficient answers. In other words, authorial causation has ignored potential implications for an entity’s infringement liability, and vice-versa. Regardless of how the law responds, generative AI will require copyright to explore and enumerate the previously assumed causation analyses for both infringement and authorship. This blog explores how generative AI exposes the logical inconsistencies that result from analyzing authorial causation without analyzing causation for infringing reproductions.

Generative AI largely requires the following process: (1) an original artist creates works, (2) a developer trains an AI model on these works, and (3) an end-user prompts the AI to generate an output, such as “a mouse in the style of Walt Disney.” This generative AI process presents a novel challenge for copyright in determining who or what caused the output because generative AI challenges conventional notions of creation.

Causing Infringement

Andersen et al. recently filed a complaint against Stability AI, one of the most popular text-to-art foundation models. This class action alleges that Stability AI is directly liable for infringing outputs that result from end-user prompted generations. However, in a recent decision more closely analyzing causation and volition in infringement, the Ninth Circuit found that “direct liability must be premised on conduct that can reasonably be described as the direct cause of infringement.” Stability AI should not be found directly liable for infringing these artists’ copyrights, in part because Stability AI cannot reasonably be said to be the direct cause of infringement. Such a finding would be similar to holding Google liable for reproducing images of Mickey Mouse on people’s computer screens when they search for “Mickey Mouse.”

This lawsuit is particularly relevant since end-users have prompted thousands of generations that include the phrase “Mickey Mouse,” and many appear substantially similar to Disney’s Mickey. If thousands of end-users have intentionally prompted the AI to generate Mickey Mouse, then what volitional conduct can most reasonably be described as the direct cause of infringement? It is clearly the end-user’s. However, what if the end-user simply prompted “a cartoon mouse” and the AI generated an infringing image of Mickey? Here, the end-user may not have intended to generate Mickey, and reasonable notions of fairness may not find the end-user the most direct cause of infringement. However, copyright is a strict liability tort, meaning that liability attaches regardless of a reproducer’s intent. Therefore, unless copyright applies an intentional or a negligence theory for direct liability, which it should not, whoever or whatever is liable for infringing outputs will be liable for both: the output prompted by “Mickey Mouse” and the one prompted by “a cartoon mouse.” Such an outcome not only feels deeply unfair; it is also unreasonable to say that the end-user is the direct cause of infringement when prompting “a cartoon mouse,” and vice versa.

Courts called to answer similar questions have grappled with these same issues of volition and causation. Generally, courts have been hesitant to find companies liable for actions that cannot reasonably be deemed volitional conduct causing infringement. The court in Cartoon Network, for example, found that “volition is an important element of direct liability.” In the LoopNet case, the court found that “the Copyright Act… requires conduct by a person who causes in some meaningful way an infringement.” In this way, the law has so far mirrored our prior intuitions of fairness. Legal scholarship has noted that when copyright law has grappled with novel technology, it has found that causation in infringement requires volition that “can never be satisfied by machines.” This reasoning, as applied to generative AI, may mean that an AI company should not normally be directly liable for outputs that infringe the reproduction right.

Causing Authorship

This causation analysis has also begun for authorship rights. One copyright scholar, Shyamkrishna Balganesh, compellingly argues that copyright law should explicitly enumerate a causal analysis for granting authorship rights. Such an analysis would follow tort law’s two-step causation analysis: (1) creation in fact and (2) legal creation. Aviv Gaon surveys authorial options in The Future of Copyright in the Age of AI, noting that some favor assigning authorship to the end-user prompter, some to the AI developer, some would treat outputs as joint works, and some would even attribute authorship to the AI itself. The simplest legal option would be to treat AI like a tool and grant authorship to the end-user. This is exactly how the law responded when photography challenged conventional notions of creativity and authorship. Opponents of finding photographers to be authors argued that photography was “merely mechanical, with no place for… originality.” The Supreme Court in Burrow-Giles instead found that the photographer “gives effect to the idea” and is the work’s “mastermind” deserving of copyright.

However, treating AI like a conventional tool is an inconsistent oversimplification in the current context. Not only is it often less apt to call an end-user prompter the ‘mastermind’ of the output, but AI presents a more attenuated causation analysis that should not result in a copyright for all AI-generations. As an extreme example, recent AI systems are employing other AIs as agents. In these circumstances, a single prompt could catalyze one AI to automatically employ other AI agents to generate numerous potentially creative or infringing outputs. Here, the most closely linked human input would be a prompt that could not be said to have masterminded or caused the many resultant expressive outputs. Under Balganesh’s framework, no human could reasonably be found to be the factual or legal cause of the output. Such use-cases will further challenge the law’s notions of foreseeability as reasonable causation becomes increasingly attenuated.

Importantly, in the face of this ongoing debate and scholarship, the Copyright Office recently made its determination on authorship for AI-generated works. In February 2023, the US Copyright Office amended its decision regarding Kristina Kashtanova’s comic book, Zarya of the Dawn, stating that the exclusively AI-generated content is not copyrightable. Ms. Kashtanova created her comic book using Midjourney, a text-to-art AI, to generate much of the visual art involved. The Copyright Office stated that her “selection, coordination, and arrangement” of AI-generated images are copyrightable, but not the images themselves. The Office’s decision means that all exclusively AI-generated content, like natural phenomena, is not the type of content copyright protects and is freely accessible to all. The decision was based on the Office’s interpretation that “it was Midjourney—not Kashtanova—that originated the ‘traditional elements of authorship.’” The Office’s decision is appropriate policy, but when analyzed in conjunction with the current law on causation in infringement, it is inconsistent and may result in an asymmetrical allocation of the rights and duties that attend creation. Relevantly, how can a machine that is incapable of volition originate art? This is one of many ontological paradoxes that AI will present to the law.

Symmetrically Analyzing Causation

Two things are apparent. First, there is a beautiful symmetry in AI-generations being uncopyrightable while the machines originating such works symmetrically lack sufficient volition to infringe. If such a system persists, then copyright law may not play a major role in generative AI, though this is doubtful. Second, such inconsistencies inevitably result from causation analyses for mechanically analogous actions that analyze only one of infringement or authorship. Instead, I propose that copyright law symmetrically analyze mechanically analogous causation for both authorship and infringement of the reproduction right. Since copyright law has only recently begun analyzing causation, it is reasonable, and potentially desirable, that the law does not yet require this symmetrical causation. After all, the elements of authorship and infringement are usefully different. However, what has been consistent throughout copyright is that when an author creates, they risk an infringing reproduction and stand to gain the benefits of authorship rights. In other words, by painting, a painter may create a valuable copyrightable work, but they also may paint an infringing reproduction of Mickey Mouse. Asymmetrical causation for AI art could be analogized to the painter receiving authorship rights while the company that made the paintbrush is held liable for the painter’s infringing reproductions. Such a result would not incentivize a painter to avoid infringement, improperly balancing the risks and benefits of creation. Ultimately, if the law decides either the end-user or the AI company is the author, then the other entity should not be asymmetrically liable for infringing reproductions. Otherwise, the result will be ethically and logically inconsistent. After all, as Antony Honoré wrote in Responsibility and Fault, in our outcome-based society and legal system, we receive potential benefit from, and are responsible for the harms reasonably connected to, our actions.

Dancing Around the Issue: Washington Lawmakers Grapple with State Regulation of Adult Entertainment

By: Matt Williamson

When people think of Washington, many things quickly come to mind: apples, planes, rain, grunge, Twilight; all understandable. Restrictive alcohol laws, though? Not so much.

Despite this, Washington maintains a near-total prohibition on any alcohol service in adult entertainment clubs–making it one of only a few states to do so.

This year, a group of exotic entertainment advocates, working with state lawmakers, aimed to change this. The group helped introduce, and has lobbied for the passage of, SB 5614: a bill designed to reverse restrictions on alcohol service and allow strip clubs to apply for liquor licenses.

While this might seem like a fairly humble goal, the policy change would represent a massive shift in the landscape of adult entertainment in Washington. Alongside the reversal of the alcohol restrictions, SB 5614 also contains a series of potentially hugely impactful provisions aimed at providing a safer, fairer, and more stable working environment for exotic dancers across the state. 

Why Alcohol Matters

To understand why the seemingly small change could mean so much to Washington-based dancers, one first has to understand the secondary effects this restriction creates.

Washington’s restriction on alcohol service in strip clubs stems not from statute, but from administrative rules. The restrictions, enshrined in WAC 314-11-050, were established by the Washington State Liquor and Cannabis Board, and prohibit the sale of alcohol in any establishment where certain types of activity take place. Because the restricted activities include stripping, a fairly essential element of strip clubs, the rule establishes a de facto exclusion on alcohol sales in strip clubs.

What this creates is a major economic problem that club owners and management must combat. Without alcohol sales, these clubs are cut off from a huge source of revenue, and must turn to other means to extract money from patrons and staff alike.

This is the landscape that has produced one of the most hated aspects of exotic dancing in Washington: dancer fees. Rather than paying dancers, many clubs actually charge them a fee to perform, arguing that dancers will earn money through tips and that the fees are required to maintain the club’s business viability. Naturally, this contributes to the financial instability of the profession, as dancers often encounter shifts where they make little to no money and are nonetheless forced to pay for the opportunity.

It’s not just the economics either. Many dancers argue that the lack of alcohol sales in Washington clubs robs these establishments of the ability to create a social or entertaining environment and restricts them to a customer base exclusively seeking a sexual experience. Dancers have noted that Washington clubs have a distinctly sexually-focused vibe, as opposed to Oregon clubs, where alcohol is served and the environment tends to be more akin to a bar. 

Moreover, the added revenue from alcohol sales presents advocates with an opportunity to invest in protections for exotic dancers that have long been missing from the industry. SB 5614 includes provisions requiring better security in clubs, mandatory training for dancers, including on financial security planning, and prohibitions on predatory club fees and penalties.

Legislative Struggles

Despite significant support from groups like Strippers Are Workers, which has championed the bill, it sadly seems as though SB 5614 is unlikely to pass the state legislature this year.

However, advocates can take at least some solace in the nature of its demise: SB 5614 did not fail for lack of votes at any of the crucial steps in the legislative process, but instead ran afoul of the greatest obstacle any piece of Washington legislation ever faces–the absurdly compacted legislative schedule.

Washington has a part-time legislature, which means that legislators in the House and Senate meet for only three to four months a year. When considered in light of the thousands of bills that are introduced every year, and the numerous procedural steps each must traverse, the massive scale of the scheduling problem quickly comes into focus.

When SB 5614 passed the Senate in early March, it seemed to have serious momentum, receiving significant bipartisan support in that chamber and quickly being placed on the agenda of the House Committee on Labor & Workplace Standards.

But sadly, things quickly seemed to fizzle as some notes of opposition arose in the House, and the crush of bills began to overwhelm policy committees. Now, as the cutoff for bills advancing out of House policy committees has passed, and the bill remains with the Regulated Substances & Gaming Committee, it seems all but doomed.

Conclusion

This seems an unworthy end for a bill that seeks to strike at the heart of a serious issue for thousands of working Washingtonians. Exotic dancers deserve so much better than the often predatory working environments they encounter in Washington clubs, and it is clear that repealing our state’s misguided alcohol restrictions could go a long way towards addressing the underlying causes of these conditions and providing dancers with the support and protection they need. Hopefully, advocates and their allies will get another shot at passing this legislation soon, and next time legislators will find the time to seriously consider and pass it.

Regulating Emerging Technology: How Can Regulators Get a Grasp on AI?

By: Chisup Kim

Uses of Artificial Intelligence (“AI”), such as ChatGPT, are fascinating experiments that have the potential to morph their users’ parameters, requests, and questions into answers. However, as malleable as these AIs are to user requests, governments and regulators have not had the same flexibility in governing this new technology. Countries have taken drastically different approaches to AI regulation. For example, on April 11, 2023, China announced that AI products developed in China must undergo a security assessment to ensure that content upholds “Chinese socialist values and do[es] not generate content that suggests regime subversion, violence or pornography, or disrupt[ion to] economic or social order.” Italy took an even more cautionary stance, outright banning ChatGPT. Yet domestically, in stark contrast to the decisive action taken by other countries, the Biden Administration has only begun vaguely examining whether there should be rules for AI tools.

In the United States, prospective AI regulators seem to be more focused on the application of AI tools to specific industries. For example, the Equal Employment Opportunity Commission (“EEOC”) has begun an initiative to examine whether the use of AI in employment decisions complies with federal civil rights laws. On autonomous vehicles, while the National Highway Traffic Safety Administration (“NHTSA”) has not yet given autonomous vehicles the green light of an exemption from occupant safety standards, it does maintain a web page open to a future with automated vehicles. Simultaneously, while regulators are still trying to grasp this technology, AI is entering every industry and field in some capacity. TechCrunch chronicled the various AI applications from Y Combinator’s Winter Demo Day. TechCrunch’s partial list included the following: an AI document editor, SEC-compliance robo-advisors, a generative AI photographer for e-commerce, automated sales emails, an AI receptionist to answer missed calls for small companies, and many more. While the EEOC and NHTSA have taken proactive steps in their own respective fields, we may need a more proactive and overarching approach for the widespread applications of AI.

Much as it did with its proactive GDPR privacy regulations, the EU has proposed a regulatory framework on AI. The framework sets out a list of high-risk applications for AI, creating more strenuous obligations for those high-risk applications and tempered regulations for limited- and no-risk applications of AI. Applications identified as high-risk include the use of AI in critical infrastructure, education or vocational training, law enforcement, and the administration of justice. High-risk applications would require adequate risk assessment and mitigation, logging of data with traceability, and clear notice and information provided to the user. Chatbots are considered limited risk but require that users receive adequate notice that they are interacting with a machine. Lastly, the vast majority of AI applications are likely to fall under the “no risk” bucket for harmless applications, such as video games or spam filters.

If U.S. regulators fail to create a comprehensive regulatory framework for AI, they will likely fall behind on this issue, much as they have fallen behind on privacy issues. With privacy, for example, the vacuum of guidance and self-regulating bodies forced many states and foreign countries to begin adopting GDPR-like regulations. The current initiatives by the EEOC and NHTSA are laudable, but these agencies seem to be waiting for actual harm to occur before taking proactive steps to regulate the industry. For example, last year, NHTSA found that the Tesla autopilot system, among other driver-assist systems, was linked to nearly 400 crashes in the United States, including six fatal accidents. Waiting for the technology to come to us did not work for privacy regulation; we should not wait for AI technology to arrive either.

Legend of Zelda Mod Drives Nintendo IP Lawyers Wild

By: Nick Neathamer

Has video game fandom gone too far? Despite developing some of the biggest games on the market, Nintendo seems to think it has (at least in a legal sense). The company has recently claimed copyright infringement on multiple YouTube videos that show the use of fan-made modifications (“mods”) for the game Legend of Zelda: Breath of the Wild. 

Breath of the Wild is one of the most popular open-world video games in recent memory. Created by Nintendo, the game was deemed Game of the Year in 2017 at The Game Awards. However, one notable element the game is lacking is any multiplayer capability. YouTuber Eric Morino, better known by his channel name PointCrow, aspired to change that. In November 2021, he tweeted out a request for anyone to create a multiplayer mod for the game, offering up $10,000 to whoever could send a functional version. Two members of the modding community were able to create a mod that runs on a Wii U emulator (software which enables Wii U console games to be played on a PC), allowing multiple players to travel throughout the game’s fantastical setting of Hyrule together. On April 4, 2023, PointCrow released the mod to the public through his Discord (however, it has since been removed). 

After the release, Nintendo claimed copyright infringement on PointCrow’s videos that feature any use of the mod, prompting YouTube to take down those videos. Due to Nintendo’s reputation for being a highly litigious company, the copyright claims against PointCrow’s videos are not a huge surprise. However, PointCrow has argued and appealed the copyright strikes, saying that he has “significantly transform[ed]” Nintendo’s work and that his videos constitute fair use. 

Copyright ownership grants the holder several exclusive rights in regard to their copyrighted work, as laid out in §106 of the Copyright Act of 1976. One of these rights is the right to create subsequent works derived from the original copyrighted work. If someone other than the copyright owner creates such a derivative work, they infringe the copyright in the original work. Unfortunately for the Breath of the Wild modders, present-day mods have been considered derivative works since the Ninth Circuit’s ruling in Micro Star v. FormGen. While many game developers seldom pursue legal recourse against the majority of modders, and some have even started to embrace the modding community, this derivative work status bars modders from having any copyright of their own in the mods they create. Additionally, if Nintendo does choose to sue for copyright infringement in relation to the multiplayer mod itself, PointCrow and the other creators are likely to be held liable.

Next comes the question of whether PointCrow’s videos about the mod qualify as fair use. Fair use analysis involves considering four factors in a balancing test. Set out in §107 of the Copyright Act, these factors are (1) the purpose and character of the use, including whether the use is commercial; (2) the nature of the copyrighted work; (3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and (4) the effect of the use upon the potential market for or value of the copyrighted work. While courts must consider all four factors, the first and fourth factors are typically considered the most important in deciding whether an allegedly infringing work is a fair use. The first factor is more likely to weigh against fair use when the allegedly infringing work is commercial. However, commerciality may be overcome and the first factor may weigh towards fair use when the work in question has transformed the original, providing it with a new expression, purpose, or meaning. Here, PointCrow intends to monetize his videos on YouTube, making his use commercial. PointCrow’s claim that his videos have “significantly transform[ed]” Breath of the Wild indicates his belief that the videos are sufficiently transformative to warrant the first factor weighing in favor of fair use, despite their commerciality. One could certainly argue that by providing commentary and reactions to the gameplay, PointCrow has transformed Breath of the Wild by granting it a new expression. However, the entertaining purposes of both PointCrow’s videos and the game itself are very similar, despite the difference in watching a game versus playing it. For these reasons, it is difficult to predict whether a court would find this factor to weigh for or against fair use.

The second factor most likely weighs against fair use. A use is less likely to be fair use when the original work is unpublished, because authors of unpublished works are expected to be able to decide how their work is originally used, or whether it may be released to the public at all. On the other hand, copying of a published work is more likely to be considered fair use. Even more relevant to the nature of the work is whether the original work is creative, which tends to weigh against fair use, in contrast to when the original work is primarily factual. Here, the second factor most likely weighs against fair use because the original game is a creative work, despite the game’s published status. Meanwhile, the third factor likely weighs in favor of fair use. PointCrow’s videos include actual gameplay, and therefore show large portions of the original game. However, displaying this large amount of the game is necessary to accomplish PointCrow’s intended purposes. Disregarding the legality of the mod itself, PointCrow needs to show gameplay in order to demonstrate differences between the original game and the modded version, as well as to show his unique experiences with Breath of the Wild that viewers want to see. Because of the need to use this large amount of gameplay for his intended purpose, a court is likely to find that the third factor weighs in favor of fair use.

The fourth factor, effect of the use upon a potential market of the copyrighted work, weighs against fair use when an allegedly infringing work provides a substitute for the original. With this in mind, it is not entirely clear what role PointCrow’s videos play in the video game entertainment market. PointCrow would likely argue that his videos are essentially free advertising for Breath of the Wild and Nintendo, while Nintendo may argue that watching someone play the game essentially provides a substitute for playing the game itself and therefore has a negative effect on the market for the game. A court may also be persuaded by the argument that by promoting the multiplayer mod, which runs on an emulator instead of an actual Nintendo console, PointCrow’s videos are indirectly causing a substitution loss to Nintendo in console sales. This makes it more likely, although not certain, that the fourth factor would weigh against fair use. 

Despite their best intentions and love for the game, it appears that PointCrow and other fans of Legend of Zelda: Breath of the Wild are infringing Nintendo’s copyright by creating a multiplayer mod. Less clear is whether videos that promote the mod are infringing. A lack of existing litigation surrounding gaming videos only exacerbates this uncertainty. With the upcoming release of Legend of Zelda: Tears of the Kingdom, a direct sequel to Breath of the Wild, content creators are likely unsure how to make gameplay videos while complying with copyright laws. That said, Nintendo’s history of litigation has not stopped fans from making their passion projects thus far, and it certainly seems like fans will continue to create mods and videos going forward. But perhaps the takedown of PointCrow’s videos will finally send the message that despite Nintendo’s success at making games, the company is not playing around when it comes to their intellectual property.