Monster Energy vs. Everyone: Why is a drink company challenging video games for using the word “monster”?

By: Perry Maybrown

Have you ever been playing Monster Hunter and thought, "Huh, this must be related to Monster Energy"? While I personally have never faced this conundrum, it's a scenario that Monster Energy has been very worried about. So much so that, for the past few years, the company has been targeting a wide range of industries to protect its trademark of the word "monster." Two notable attacks that have been covered by the media were against Pokémon for "Pocket Monsters" and Capcom for its video game franchise Monster Hunter. Both complaints were filed in Japan and promptly dismissed.

This has not deterred the energy drink company in the slightest, however, as it recently sent a cease and desist letter to independent development studio Glowstick Entertainment over its game Dark Deception: Monsters & Mortals. In the letter, Monster Energy demands that Glowstick never again attempt to trademark anything containing the word "monster," or hold any trademark that could at all resemble Monster's own. It goes on to request that the game's logo be modified and sent to Monster for approval. Monster also asked Glowstick to refrain from using the colors green, white, and black (a task that is especially daunting considering the Glowstick logo is green on a black background). Furthermore, Monster demanded that Glowstick not emphasize the word "monster" more than any other word in the title of the game.

Furious with these demands, the studio's founder and CEO, Vincent Livings, took to Twitter to air his complaints and share the cease and desist letter. This has led to extreme reactions from the Twittersphere and from the news outlets that have shared the story.

Are these reactions warranted? Or is Monster simply protecting its rightful trademark? 

A trademark can be composed of a variety of elements: words, images, sounds, even colors can be used to denote a specific brand or product. What a trademark is not is complete ownership of a single word, symbol, or color in all situations. Rather, a trademark only protects the use of your mark in connection with similar goods or services. Boiled down to the most basic level, infringement occurs when a consumer may become confused between two marks. This is referred to as the likelihood of confusion.

The courts have developed a list of factors that they weigh when deciding infringement. On the west coast (9th Circuit), these are referred to as the Sleekcraft factors. The factors are as follows:

(1) Strength or Weakness of the Plaintiff’s Mark. 

(2) Defendant’s Use of the Mark.

(3) Similarity of the Plaintiff’s and Defendant’s Marks.

(4) Actual Confusion.

(5) Defendant’s Intent.  

(6) Marketing/Advertising Channels. 

(7) Consumer’s Degree of Care.  

(8) Product Line Expansion. 

(9) Other Factors.

On the east coast (2nd Circuit), they are the Polaroid factors, which are similar to Sleekcraft:

(1) the strength of the plaintiff’s mark; 

(2) the degree of similarity between the two marks; 

(3) the proximity of the products; 

(4) the likelihood that the owner will bridge the gap; 

(5) evidence of actual confusion;

(6) defendant’s good faith in adopting the mark;

(7) the quality of defendant’s product; and 

(8) the sophistication of the consumers.

While a court may review all of these factors to determine the likelihood of confusion, several are most pivotal given the scant facts we know about Monster Energy's claims. To start, product line expansion refers to whether the goods and services are related and how likely one company is to expand into the other's business. This again helps figure out the "likelihood of confusion" on the part of consumers. For example, it's easy to become confused between two purses both made by a company called Gucci; however, consumers are far less likely to relate the two if it's a sink maker that goes by that name.

In the case of Monster, it is critical to ask: how likely is a beverage manufacturer to enter the video game market? Not only are the two products completely unrelated, but the video game industry is also difficult to break into on a good day.

Furthermore, the strength of the mark is evaluated on a sliding scale, with the weakest being what is called a "generic mark." Generic words do not receive trademark protection because everyone needs to use them to describe their business. For example, if I created a coffee company called Coffee Company, that name would be generic. Imagine if I could then prevent all other coffee companies from using the word "coffee." That would be wild!

Next up is determining how descriptive the trademark is. Basically, is your trademark just describing the thing you are selling? These types of trademarks usually do not receive protection, but can in certain instances. 

The strongest marks are fanciful, arbitrary, or suggestive. A fanciful mark is the best one you can get from a legal standpoint, because it's a word you just made up (think Pepsi, Kodak, etc.). A suggestive mark is one that hints at a quality of the product (like Netflix). And finally there is arbitrary: a word that has meaning elsewhere but isn't directly descriptive of the product itself (Apple Computers is a great example of this). The Monster Energy mark would likely be considered either arbitrary or suggestive. Suggestive, perhaps, because the name implies that you get monstrous, or huge, amounts of energy from the drink. Arbitrary, perhaps, because a monster seems unrelated to a beverage.

Here is the issue for Monster: while the word is arbitrary for its product in particular, it is descriptive when it comes to the video games the company is challenging. Take for example Monster Hunter, which is a game series where you… hunt monsters. Or Pocket Monsters, where you collect monsters you can fit in your pocket. At worst, the word may even be considered generic, as it is used so ubiquitously throughout the industry to describe games and their contents.

This is important because courts are unwilling to impose trademark protections for generic marks, given how damaging doing so could be to the market. And while descriptive marks may receive some protection, securing it requires a large amount of work from the company that wishes to claim the trademark. For these reasons, courts would be even more unwilling to find in Monster's favor and force video game companies to stop using such a descriptive term.

Conclusion

As the saying goes, "with great power comes great responsibility." While trademark owners do have rights to their marks, that power comes with responsibility: they must defend their marks or risk losing them. In that sense, it seems logical that Monster is so zealously fighting to keep the word "monster" out of other companies' mouths. On the other hand, this overkill approach of attacking any use of such a common word isn't a great look for the company.

At the end of the day, the decision of whether infringement exists is for the courts to decide. And it’s Monster’s choice to spend the money getting to that point, win or lose. We don’t have all the facts for either of these cases, and only know one side of the story, so it’s difficult to say if there are more factors that could play into this issue. But for now, it seems to be David vs Goliath. And the public is on David’s side.

Tick-Tock, TikTok: Could the Party Actually Stop?

By: Enny Olaleye

Over the past couple of years, the United States government has expressed several concerns about the potential security risks associated with TikTok, a popular social media application that has amassed over 1 billion active users every month. Some officials have suggested that TikTok could be used by the Chinese government to collect data on American citizens or to spread propaganda.

According to TikTok, the social media app serves as a platform for users to “come together to learn, be entertained, and grow their business, as they continue to create, discover and connect with a broader global community.” TikTok is now one of the largest social networks worldwide, coming in third after Facebook and Instagram. Its popularity in the United States is striking: approximately 150 million Americans are active users of the app, nearly half of the U.S. population.

As a result of the app’s widespread popularity in the United States, government officials have become wary about the potential risks the social media app poses. In August 2020, former President Donald Trump issued an executive order that would have effectively banned TikTok in the US unless it was sold to an American company within 45 days. This order cited concerns about national security and the app’s handling of user data, as the US government was worried that user data could be accessed by the Chinese government. Additionally, the US government accused TikTok of censoring content that would be unfavorable to the Chinese Communist Party and allowing misinformation to spread on its platform. 

However, the ban was temporarily blocked by a federal judge and the Biden administration later put the order on hold to conduct a broader review of security risks posed by Chinese technology. In June 2021, President Biden signed an executive order that expanded the scope of previous orders related to Chinese-owned apps and other technology. The order aimed to protect US citizens’ sensitive data and prevent the Chinese government from gaining access to that data through apps like TikTok. In September 2021, the Biden administration announced that it would not pursue a ban on TikTok, but it would continue to monitor the app’s potential security risks.

In response, TikTok has repeatedly denied these allegations, saying that it stores U.S. user data on servers located in the U.S. and that it has never provided data to the Chinese government. The company has also taken steps to distance itself from its Chinese parent company, ByteDance, by appointing a U.S. CEO and creating a U.S.-based data center.

Unfortunately, TikTok’s actions have done little to appease the concerns raised by federal officials and have only led legislators to double down on taking action against the platform. As of March 2023, the Biden administration has called on TikTok’s parent company, ByteDance, to either sell the app or face a possible ban in the United States because of concerns about data privacy and national security. The White House has also signaled its support for draft legislation in the House of Representatives that would allow the federal government to regulate or ban technology produced by some foreign countries, including TikTok. The federal government and multiple states have also already banned TikTok on government devices.

Thus, the question arises: “Can the federal government actually do that?” 

Well—not necessarily. First and foremost, the U.S. government does not have the authority to ban speech, as free speech is a right guaranteed to citizens under the First Amendment of the U.S. Constitution. Posts on TikTok are protected by the First Amendment since they are a form of speech. While the Biden Administration has replaced the Trump order with one that provides a more solid legal ground for potential action, it still cannot override the First Amendment protections afforded to speech on TikTok.

Further, if this issue is brought to the courts, proponents of a TikTok ban will most likely claim the national security risks posed by the app are self-evident. Proponents of a ban believe that TikTok’s relationship to China and how Chinese law requires companies to cooperate with all requests from Beijing’s security and intelligence services creates obvious security problems. However, that line of reasoning is unlikely to find support with federal judges, who will be weighing the potential security risks against the imposition of real-world restrictions on the rights of 150 million Americans to post and exercise free speech on an extremely popular platform. 

Even if judges were to rule that a TikTok ban is neutral when it comes to content and viewpoint, the government would still have to prove that the remedy is narrowly tailored to serve, at a minimum, a “significant government interest,” in order to justify a ban and the corresponding restriction on speech. To ensure narrow tailoring, the Supreme Court developed the standard of strict scrutiny when reviewing free speech cases. To satisfy strict scrutiny, the government must show that the law meets a compelling government interest and that the regulation is being implemented using the least restrictive means. However, narrow tailoring is not confined to strict scrutiny cases, as seen in McCullen v. Coakley, where the Court determined that a Massachusetts law regulating protests outside abortion clinics was not content based and, thus, not subject to strict scrutiny.

As of April 2023, there has been no federal legislation passed that would permit an outright ban of TikTok in the United States. While the First Amendment likely limits the government’s ability to ban the app outright, it could still target TikTok’s ability to conduct U.S.-based financial transactions. That includes potential restrictions on its relationship with Apple and Google’s mobile app stores, which would severely hamper TikTok’s growth. By targeting conduct instead of speech, such a restriction would be outside of the First Amendment’s protections.

Regardless, amid all this government action, there is one thing that has made itself apparent. As the federal government escalates its efforts against TikTok, it’s coming up against a stark reality: even a politically united Washington may not have the regulatory and legal powers to wipe TikTok off American phones.

Liability, Authorship, & Symmetrical Causation in AI-Generated Outputs

By: Jacob Alhadeff

Copyright has insufficiently analyzed causation for both authorship and liability because, until now, causation was relatively obvious. If someone creates a painting, then they caused the work and receive authorial rights. If it turned out that the painting was of Mickey Mouse, then that painter may be liable for an infringing reproduction. However, recent technological advances have challenged the element of causation in both authorship and infringement. In response, recent law and scholarship have begun to address these issues. However, because they have addressed causation in isolation, current analysis has provided logically or ethically insufficient answers. In other words, authorial causation has ignored potential implications for an entity’s infringement liability, and vice-versa. Regardless of how the law responds, generative AI will require copyright to explore and enumerate the previously assumed causation analyses for both infringement and authorship. This blog explores how generative AI exposes the logical inconsistencies that result from analyzing authorial causation without analyzing causation for infringing reproductions.

Generative AI largely requires the following process: (1) an original artist creates works, (2) a developer trains an AI model on these works, and (3) an end-user prompts the AI to generate an output, such as “a mouse in the style of Walt Disney.” This generative AI process presents a novel challenge for copyright in determining who or what caused the output because generative AI challenges conventional notions of creation.

Causing Infringement

Andersen et al. recently filed a complaint against Stability AI, developer of one of the most popular text-to-art foundation models. This class action alleges that Stability AI is directly liable for infringements that result from end-user prompted generations. However, in a recent decision more closely analyzing causation and volition in infringement, the Ninth Circuit found that “direct liability must be premised on conduct that can reasonably be described as the direct cause of infringement.” Stability AI should not be found directly liable for infringing these artists’ copyrights, in part because Stability AI cannot reasonably be said to be the direct cause of infringement. Such a finding would be similar to holding Google liable for reproducing images of Mickey Mouse on people’s computer screens when they search for “Mickey Mouse.”

This lawsuit is particularly relevant since end-users have prompted thousands of generations that include the phrase “Mickey Mouse,” and many appear substantially similar to Disney’s Mickey. If thousands of end-users have intentionally prompted the AI to generate Mickey Mouse, then what volitional conduct can most reasonably be described as the direct cause of infringement? It is clearly the end-user’s. However, what if the end-user simply prompted “a cartoon mouse” and the AI generated an infringing image of Mickey? Here, the end-user may not have intended to generate Mickey, and reasonable notions of fairness may not cast the end-user as the most direct cause of infringement. However, copyright is a strict liability tort, meaning that liability attaches regardless of a reproducer’s intent. Therefore, unless copyright applies an intent or negligence theory for direct liability, which it should not, whoever or whatever is liable for infringing outputs will be liable for both of the infringing outputs: “Mickey Mouse” and “a cartoon mouse.” Such an outcome not only feels deeply unfair, but it is also unreasonable to say that the end-user is the direct cause of infringement when prompting “a cartoon mouse,” and vice versa.

Courts called to answer similar questions have recently grappled with these same issues of volition and causation. Generally, courts have been hesitant to find companies liable for actions that cannot reasonably be deemed volitional conduct causing infringement. The court in Cartoon Network, for example, found that “volition is an important element of direct liability.” In the LoopNet case, the court found that “the Copyright Act… requires conduct by a person who causes in some meaningful way an infringement.” In this way, the law has so far mirrored our prior intuitions of fairness. Legal scholarship has noted that when copyright law has grappled with novel technology, it has found that causation in infringement requires volition that “can never be satisfied by machines.” This reasoning, as applied to generative AI, may mean that an AI company should not normally be directly liable for outputs that infringe the reproduction right.

Causing Authorship

This causation analysis has also begun for authorship rights. One copyright scholar compellingly argues that copyright law should explicitly enumerate a causal analysis for granting authorship rights. Such an analysis would follow tort law’s two-step causation analysis: (1) creation in fact and (2) legal creation. Aviv Gaon surveys authorial options in The Future of Copyright in the Age of AI, writing that there are those who favor assigning authorship to the end-user prompter or the AI developer, treating outputs as joint works, or even attributing authorship to the AI itself. The simplest legal option would be to treat AI like a tool and grant authorship to the end-user. This is exactly how the law responded when photography challenged conventional notions of creativity and authorship. Opponents of finding photographers to be authors argued that photography was “merely mechanical, with no place for… originality.” The Supreme Court in Burrow-Giles instead found that the photographer “gives effect to the idea” and is the work’s “mastermind,” deserving of copyright.

However, treating AI like a conventional tool is an inconsistent oversimplification in the current context. Not only is it often less apt to call an end-user prompter the ‘mastermind’ of the output, but AI also presents a more attenuated causation analysis that should not result in a copyright for all AI generations. As an extreme example, recent AIs are employing other AIs as replicable agents. In these circumstances, a single prompt could catalyze one AI to automatically employ other AI agents to generate numerous potentially creative or infringing outputs. Here, the most closely linked human input would be a prompt that could not be said to have masterminded or caused the many resultant expressive outputs. Under Balganesh’s framework, no human could reasonably be found to be the factual or legal cause of the output. Such use cases will further challenge the law’s notions of foreseeability as reasonable causation becomes increasingly attenuated.

Importantly, in the face of this ongoing debate and scholarship, the Copyright Office recently made its determination on authorship for AI-generated works. In February 2023, the U.S. Copyright Office amended its decision regarding Kristina Kashtanova’s comic book, Zarya of the Dawn, stating that the exclusively AI-generated content is not copyrightable. Ms. Kashtanova created her comic book using Midjourney, a text-to-art AI, to generate much of the visual art involved. The Copyright Office stated that her “selection, coordination, and arrangement” of AI-generated images are copyrightable, but not the images themselves. The Office’s decision means that all exclusively AI-generated content, like natural phenomena, is not the type of content copyright protects and is freely accessible to all. The decision was based on the Office’s interpretation that “it was Midjourney—not Kashtanova—that originated the ‘traditional elements of authorship.’” The Office’s decision is appropriate policy, but when analyzed in conjunction with the current law on causation in infringement, it is inconsistent and may result in an asymmetrical allocation of the rights and duties that attend creation. Relevantly, how can a machine that is incapable of volition originate art? This is one of many ontological paradoxes that AI will present to the law.

Symmetrically Analyzing Causation

Two things are apparent. First, there is a beautiful symmetry in AI generations being uncopyrightable while the machines originating such works lack sufficient volition to infringe. If such a system persists, then copyright law may not play a major role in generative AI, though this is doubtful. Second, such inconsistencies inevitably result from causation analyses that address mechanically analogous actions but consider only one of infringement or authorship. Instead, I propose that copyright law symmetrically analyze mechanically analogous causation for both authorship and infringement of the reproduction right. Since copyright law has only recently begun analyzing causation, it is reasonable, and potentially desirable, that the law does not yet require this symmetrical causation. After all, the elements of authorship and infringement are usefully different. However, what has been consistent throughout copyright is that when authors create, they risk an infringing reproduction and stand to gain the benefits of authorship rights. In other words, by painting, a painter may create a valuable copyrightable work, but they also may paint an infringing reproduction of Mickey Mouse. Asymmetrical causation for AI art could be analogized to the painter receiving authorship rights while the company that made the paintbrush is held liable for the painter’s infringing reproductions. Such a result would not incentivize the painter to avoid infringement, and would thereby improperly balance the risks and benefits of creation. Ultimately, if the law decides that either the end-user or the AI company is the author, then the other entity should not be asymmetrically liable for infringing reproductions. Otherwise, the result will be ethically and logically inconsistent. After all, as Antony Honoré wrote in Responsibility and Fault, in our outcome-based society and legal system, we receive potential benefit from, and are responsible for the harms reasonably connected to, our actions.

Dancing Around the Issue: Washington Lawmakers Grapple with State Regulation of Adult Entertainment

By: Matt Williamson

When people think of Washington, many things quickly come to mind: apples, planes, rain, grunge, Twilight; all understandable. Restrictive alcohol laws, though? Not so much.

Despite this, Washington maintains a near-total prohibition on alcohol service in adult entertainment clubs, making it one of only a few states to do so.

This year, a group of exotic entertainment advocates, working with state lawmakers, aimed to change this. The group helped introduce, and has lobbied for the passage of, SB 5614: a bill designed to reverse the restrictions on alcohol service and allow strip clubs to apply for liquor licenses.

While this might seem like a fairly humble goal, the policy change would represent a massive shift in the landscape of adult entertainment in Washington. Alongside the reversal of the alcohol restrictions, SB 5614 also contains a series of potentially far-reaching provisions aimed at providing a safer, fairer, and more stable working environment for exotic dancers across the state.

Why Alcohol Matters

To understand why the seemingly small change could mean so much to Washington-based dancers, one first has to understand the secondary effects this restriction creates.

Washington’s restriction on alcohol service in strip clubs stems not from statute but from administrative rules. The restrictions, enshrined in WAC 314-11-050, were established by the Washington State Liquor and Cannabis Board and prohibit the sale of alcohol in any establishment where certain types of activity take place. Because the restricted activities include stripping, a fairly essential element of strip clubs, the rule establishes a de facto exclusion of alcohol sales from strip clubs.

What this creates is a major economic problem that club owners and management must combat. Without alcohol sales, these clubs are cut off from a huge source of revenue and must turn to other means to extract money from patrons and staff alike.

This is the landscape that has produced one of the most hated aspects of exotic dancing in Washington: dancer fees. Rather than paying dancers, many clubs actually charge them a fee to perform, arguing that dancers will earn money through tips and that the fees are required to maintain the club’s business viability. Naturally, this contributes to the financial instability of the profession, as dancers often encounter shifts where they make little to no money and are nonetheless forced to pay for the opportunity.

It’s not just the economics either. Many dancers argue that the lack of alcohol sales in Washington clubs robs these establishments of the ability to create a social or entertaining environment and restricts them to a customer base exclusively seeking a sexual experience. Dancers have noted that Washington clubs have a distinctly sexually-focused vibe, as opposed to Oregon clubs, where alcohol is served and the environment tends to be more akin to a bar. 

Moreover, the added revenue from alcohol sales presents advocates with an opportunity to invest in protections for exotic dancers that have long been missing from the industry. SB 5614 includes provisions requiring better security in clubs, mandatory training for dancers (including on financial security planning), and prohibitions on predatory club fees and penalties.

Legislative Struggles

Despite significant support from groups like Strippers Are Workers, which has championed the bill, it sadly seems as though SB 5614 is unlikely to pass the state legislature this year.

However, advocates can take at least some solace in the nature of its demise: SB 5614 has not failed to receive enough votes at any of the crucial steps in the legislative process, but instead ran afoul of the greatest obstacle any piece of Washington legislation ever faces–the absurdly compacted legislative schedule.

Washington has a part-time legislature, which means that legislators in the House and Senate only meet for three to four months a year. Considered in light of the thousands of bills introduced every year, and the numerous procedural steps each must traverse, the massive scale of the scheduling problem quickly comes into focus.

When SB 5614 passed the Senate in early March, it seemed to have serious momentum, receiving significant bipartisan support in that chamber and quickly being placed on the agenda of the House Committee on Labor & Workplace Standards.

But sadly, things quickly fizzled as notes of opposition arose in the House and the crush of bills began to overwhelm policy committees. Now that the cutoff for bills advancing out of House policy committees has passed and the bill remains with the Regulated Substances & Gaming Committee, it seems all but doomed.

Conclusion

This seems an unworthy end for a bill that seeks to strike at the heart of a serious issue for thousands of working Washingtonians. Exotic dancers deserve so much better than the often predatory working environments they encounter in Washington clubs, and it is clear that repealing our state’s misguided alcohol restrictions could go a long way towards addressing the underlying causes of these conditions and providing dancers with the support and protection they need. Hopefully, advocates and their allies will get another shot at passing this legislation soon, and next time legislators will find the time to seriously consider and pass it.

Regulating Emerging Technology: How Can Regulators Get a Grasp on AI?

By: Chisup Kim

Uses of Artificial Intelligence (“AI”), such as ChatGPT, are fascinating experiments that have the potential to morph their users’ parameters, requests, and questions into answers. However, as malleable as these AIs are to user requests, governments and regulators have not had the same flexibility in governing this new technology. Countries have taken drastically different approaches to AI regulation. For example, on April 11, 2023, China announced that AI products developed in China must undergo a security assessment to ensure that content upholds “Chinese socialist values and do[es] not generate content that suggests regime subversion, violence or pornography, or disrupt[ions] to economic or social order.” Italy took an even more cautionary stance, outright banning ChatGPT. Yet domestically, in stark contrast to the decisive action taken by other countries, the Biden Administration has only begun vaguely examining whether there should be rules for AI tools.

In the United States, prospective AI regulators seem more focused on the application of AI tools to specific industries. For example, the Equal Employment Opportunity Commission (“EEOC”) has begun an initiative to examine whether the use of AI in employment decisions complies with federal civil rights laws. On autonomous vehicles, while the National Highway Traffic Safety Administration (“NHTSA”) has not yet given autonomous vehicles a green-light exemption from occupant safety standards, it does maintain a web page open to a future with automated vehicles. Meanwhile, as regulators are still trying to grasp this technology, AI is entering every industry and field in some capacity. TechCrunch chronicled the various AI applications from Y Combinator’s Winter Demo Day; its partial list included an AI document editor, SEC-compliance robo-advisors, a generative AI photographer for e-commerce, automated sales emails, an AI receptionist to answer missed calls for small companies, and many more. While the EEOC and NHTSA have taken proactive steps in their own respective fields, we may need a more proactive and overarching approach for the widespread applications of AI.

Much as it did with its proactive GDPR privacy regulations, the EU has proposed a regulatory framework for AI. The framework identifies a list of high-risk applications of AI, creating more stringent obligations for those high-risk applications and tempered regulations for limited-risk and no-risk applications. Applications identified as high-risk include the use of AI in critical infrastructure, education or vocational training, law enforcement, and the administration of justice. High-risk applications would require adequate risk assessment and mitigation, logging of data with traceability, and clear notice and information provided to the user. Chatbots are considered limited risk but require that the user have adequate notice that they are interacting with a machine. Lastly, the vast majority of AI applications are likely to fall under the “no risk” bucket for harmless applications, such as video games or spam filters.

If U.S. regulators fail to create a comprehensive regulatory framework for AI, they will likely fall behind on this issue, much as they have fallen behind on privacy. With privacy, for example, the vacuum of guidance and self-regulating bodies forced many states and foreign countries to begin adopting GDPR-like regulations. The current initiatives by the EEOC and NHTSA are laudable, but these agencies seem to be waiting for actual harm to occur before taking proactive steps to regulate the industry. Last year, for example, NHTSA found that the Tesla Autopilot system, among other driver-assist systems, was linked to nearly 400 crashes in the United States, including six fatal accidents. Waiting for the technology to come to us did not work for privacy regulation; we should not wait for AI technology to arrive either.