Hargis v. Pacifica: The Case with Potential to Shape AI’s Legal Future 

By: Miranda Glisson

The internet has made it incredibly easy for people to find, copy, and paste others’ photography. But what legal protections are available to photographers? How likely is it that artists, well-known or novice, can find every unlawful use of their copyrighted work? In a groundbreaking case, photographer Scott Hargis made history with a record-setting damages award for the unauthorized use of his photographs.

Introduction 

Hargis is an architecture and interiors photographer based in the San Francisco Bay Area with a worldwide clientele. He was hired by Atria Management Company to photograph several senior living facilities. Another company, Pacifica Senior Living Management, later acquired the senior living properties from Atria and used 42 of Hargis’s photos depicting those properties on its website without obtaining Hargis’s permission. Hargis’s agent informed Pacifica that the photo licenses were not transferable from Atria to Pacifica, and Hargis’s representatives asked Pacifica to remove his images from its website. Pacifica refused on multiple occasions, and Hargis brought suit against Pacifica for copyright infringement.

Willful Copyright Infringement 

Statutory damages are damages awarded by a judge or jury to a copyright owner in a copyright infringement suit. The amount of statutory damages awarded when infringement is found depends on whether the infringement is considered innocent or willful. A court may find innocent infringement when the defendant, or infringer, can demonstrate they were “not aware and had no reason to believe that the activity constituted an infringement.” However, innocent infringement cannot be found when the work carried a proper copyright notice, as in Hargis v. Pacifica Senior Living Management.

Willful infringement does not require that the defendant have actual knowledge of their infringing actions. Rather, it requires only a showing by a preponderance of the evidence that the infringer “acted with reckless disregard for, or willful blindness to, the copyright holder’s rights.” For infringement that is not willful, the statutory maximum is $30,000 per copyrighted work infringed. The statutory maximum for willful infringement is much larger: $150,000 per copyrighted work. Even when willful infringement is found, however, the fact-finder must still determine how much to award the plaintiff, anywhere between the statutory minimum of $750 and the willful maximum of $150,000 per work.

Hargis v. Pacifica Senior Living Management: $6.3 Million Jury Verdict

Legitimate copyright infringement cases often end in settlement rather than going to trial. In Hargis v. Pacifica Senior Living Management, however, Pacifica refused to settle, leading to the largest jury verdict to date for copyright infringement of photographs. A jury in the United States District Court for the Central District of California found that Pacifica infringed all 42 of Hargis’s photographs. The evidence supported a finding of willful infringement because Pacifica ignored Hargis’s requests for payment and refused to take the photos off its website for a year and a half after the suit was filed. The jury found each infringement willful and awarded the maximum statutory amount of $150,000 for each of the 42 photographs, producing a $6.3 million verdict.
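The arithmetic behind the verdict is simple; a minimal sketch of the calculation, using only the statutory figures discussed above:

```python
# Statutory damages under 17 U.S.C. § 504(c): $750 minimum per work,
# up to $30,000 for ordinary infringement, up to $150,000 if willful.
WILLFUL_MAX_PER_WORK = 150_000

photos_infringed = 42  # photographs the jury found Pacifica willfully infringed
verdict = photos_infringed * WILLFUL_MAX_PER_WORK

print(f"${verdict:,}")  # $6,300,000
```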

Protection of Photographers’ Works in the Growing World of AI

In 2019, Copytrack, a global company that enforces image rights, investigated how many photographers’ images are stolen on the internet. It estimated that more than 2.5 billion images are stolen daily. Hargis v. Pacifica Senior Living Management demonstrates how seriously U.S. courts view the infringement of photographs and the financial consequences that unlawful uses of copyrighted works can carry. Now, as the world of AI keeps growing, more lawsuits are appearing with claims that AI companies are infringing copyrights by using owners’ images to train AI models.

Given the favorable result for Hargis and his images, willful use of copyrighted images has the potential to cost AI companies millions, maybe billions, as AI models reportedly need to see between 200 and 600 images of a particular concept before they can replicate it. Further, training a model from scratch, or fine-tuning one, can still require thousands of data points. With so many data points and works used to train AI models, developers of these models could be on the hook for massive damages awards depending on whether their use of the copyrighted works is found willful. How courts and companies will approach this problem in the future is unknown; however, it has the potential to cause ripple effects in AI development.
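To illustrate the scale of that potential exposure, here is a rough back-of-the-envelope sketch; the figure of 10,000 works is purely hypothetical, and any real award would depend on registration, fair use, and what a fact-finder actually assigns per work:

```python
# Per-work statutory damages ranges under 17 U.S.C. § 504(c).
MINIMUM = 750            # statutory minimum per infringed work
NON_WILLFUL_MAX = 30_000
WILLFUL_MAX = 150_000

works = 10_000  # hypothetical count of registered works used in training

print(f"Statutory minimum:   ${works * MINIMUM:,}")          # $7,500,000
print(f"Non-willful maximum: ${works * NON_WILLFUL_MAX:,}")  # $300,000,000
print(f"Willful maximum:     ${works * WILLFUL_MAX:,}")      # $1,500,000,000
```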

Conclusion 

Hargis v. Pacifica Senior Living Management sets a powerful precedent for protecting photographers’ rights in the growing digital era and illustrates the severe financial consequences of infringement, especially willful infringement. Photographs, like other copyrighted works, are exposed to misuse, and as courts begin to evaluate AI’s use of copyrighted material, the lessons from Hargis v. Pacifica Senior Living Management may play an instrumental role in decision-making and serve as a warning to infringers.

From Prompt to Picture: AI Art and the Ability to Copyright It

By: Alex Okun

As a general concept, Artificial Intelligence (“AI”) is not new: “chatbots” have been available for decades, and virtual assistants like Apple’s “Siri” first appeared 15 years ago. However, the latest iteration of AI – “Generative AI” – takes the concept one step further. Generative AI platforms can produce entirely new text or images based on prompts as short as a sentence. A new world of “AI art” has emerged online, and now many users are hoping to monetize their creations. However, consumers will not purchase a work from the creator if others can freely distribute copies of it. Effective commercial use requires the right to prevent third parties from doing the same, and to do that, one must first obtain a valid copyright.

Copyright Law’s “Authorship” and “Originality” Requirements

For a work to be copyrighted, it must be an “original work of authorship fixed in any tangible medium of expression.” A “work of authorship” requires an author, and the courts have consistently held that an “author” must be human. “Originality” requires that an author contribute “a modicum of creativity” to their work. However, courts have acknowledged that machines can be used to create a work without jeopardizing copyrightability. In the landmark case Burrow-Giles v. Sarony (1884), the Supreme Court held that a photograph could be “original” (and thus copyrightable) so long as it represents the photographer’s “intellectual conceptions.” While it is relatively clear when a camera manifests the user’s artistic choices, ambiguity arises when the machine also plays a role in creative decision-making. The originality of AI-generated art is rarely in question; for copyright purposes, the question is where that originality came from.

Even if a work has sufficient originality, copyright will only protect the parts of it that manifest the author’s creativity. In Urantia Foundation v. Maaherra (1997), the Ninth Circuit Court of Appeals held that “divine messages” in a book could not be copyrighted because they originated from a deity rather than from a human being. Similarly, the United States Copyright Office (“USCO”) in 2023 approved the copyright of an author’s comic book but denied protection to an AI-generated image depicted in it.

One route to copyrighting AI art is to include it in a compilation of works. A compilation can be copyrighted if the author selects or arranges works in a way that requires creative discretion (like selecting the “best poems of the year” or arranging art pieces thematically). The USCO acknowledges that compilations of AI art can have sufficient originality, but each work included cannot obtain a copyright absent sufficient human authorship. Thus, copyright authorities must determine how much creativity a user must contribute to an AI-generated image to be copyrightable.

On the Docket

Several lawsuits have been brought against the USCO for denying copyright claims by users of Generative AI applications. In 2022, Dr. Stephen Thaler sued the USCO over its determination that he could not copyright an image produced by his Generative AI application, “Creativity Machine.” Thaler did not claim to be an “author”; instead, he listed Creativity Machine as his employee, who had created the piece at his direction. The District Court upheld the USCO’s decision in 2023, finding that the AI application could not be the “author” because it is not human. Thaler appealed the ruling in 2024 to the DC Circuit Court of Appeals, but hearings have not yet been scheduled.

Whereas Thaler was focused primarily on the viability of non-human “authors,” a case filed in 2024 illustrates the legal issues arising from the “originality” requirement for AI users. In September 2024, Jason Allen sued the USCO for denying copyright protection for an award-winning image he created using the popular AI application Midjourney. He argues that the art was only partially generated using AI and that his contributions to the work justified a finding that it was sufficiently “original” to be copyrighted. According to filings, Allen inputted “at least 624 text prompts” into the application before the image matched what he envisioned. Initial hearings took place in December 2024, but the court has not yet reached a decision.

Policy Changes

Distinguishing “machine-assisted” artistic works from “machine-generated” works has been a persistent issue for the USCO in the past several years. In 2023, the USCO issued ambiguous guidance stating that copyright protection in AI art depends “on the nature of human involvement in the creative process.” On January 29, 2025, the USCO issued clarifying guidance to resolve the confusion. It states unequivocally that “prompts alone do not provide sufficient human control to make users of an AI system the authors of the output.” To justify this policy, the USCO pointed out that entering identical prompts multiple times can produce different results. It also rejected the prospect of “revising prompts” (when a user enters subsequent requests to alter the initial image produced), likening it to “re-rolling the dice.”

The 2025 USCO guidance also distinguished mere prompts from “expressive inputs,” in which the user uploads media to the AI application and then asks it to modify the material in some specific way. Expressive inputs can merit greater protection because the user exercises more control by giving the AI model a “starting point” rather than generating images from basic text. However, the USCO reaffirmed its view that the AI’s alterations must be severed from the author’s work, protecting only those aspects of the user’s original work that remain “perceptible” in the new image. Of course, this categorically excludes AI-generated content based on media not created by the user.

In contrast, the United Kingdom’s (“UK”) copyright law specifically allows copyrighting “computer-generated” artwork and defines the author as the person who makes the “arrangements necessary” for its creation. This phrase leaves legal experts unsure whether this would mean the AI application’s programmers or its users would be deemed “authors” of AI art. However, the answer to this question may be of little consequence: many of the top Generative AI companies (including OpenAI, Midjourney, and Adobe) expressly grant their users full ownership of what they create. If US lawmakers chose to grant AI companies copyright protection in AI art, users might simply select the applications that promise to transfer ownership to them.

Conclusion

As US media companies increasingly rely on Generative AI, uncertainty over the ability to claim ownership in AI-generated work is a growing risk to business productivity. Resolving this issue is particularly important to content creators because production studios may need to continue relying on artists if they cannot copyright AI-generated content. Despite the greater specificity in the USCO’s new guidelines, the efficacy of these policies remains in question. The 2025 guidance is the second installment of a three-part report initiated in 2023, and it is unclear whether Congress or the Trump administration will attempt to modify these policies. Moreover, federal courts have the final say on these issues because the requirements of “authorship” and “originality” are constitutional questions. So long as this legal ambiguity persists, the “AI revolution” in the art industry will likely have to wait.

Technology, Law, and the Future: How Loper Bright v. Raimondo Could Impact Artificial Intelligence Governance

By: Joseph Valcazar

The world was a very different place in 1984. Prince debuted his critically acclaimed Purple Rain album; The Terminator, Gremlins, and the Indiana Jones sequel dominated the box office; Tetris, soon to become one of the most popular video games ever, was released; and, of course, the Supreme Court issued its landmark Chevron v. Natural Resources Defense Council (Chevron) opinion. That case established Chevron deference, a legal doctrine instrumental to the evolution of administrative law for over forty years and cited in more than 18,000 federal opinions.

That was until 2024 when the current Supreme Court issued its opinion in Loper Bright Enterprises v. Raimondo (Loper Bright), effectively overruling Chevron. In an instant, the federal administrative state was turned on its head, leading to many questions about what the future holds for key administrative issues. And currently, there are few greater hot-button topics than artificial intelligence (AI).

What was Chevron Deference?

Chevron deference refers to a legal doctrine under which courts afforded federal agencies, like the Food and Drug Administration or the Environmental Protection Agency (EPA), deference when interpreting ambiguous federal statutes. As long as an agency’s interpretation was deemed reasonable, courts would defer to it, even when the courts might have preferred an alternative interpretation.

For example, the dispute in the original Chevron case revolved around whether the term “source” in the Clean Air Act applied to individual pieces of equipment that emitted air pollution—such as smokestacks or boilers—or only to industrial plants as a whole. The EPA interpreted “source” to cover the latter, allowing industrial plants to modify individual pieces of equipment without a permit so long as the total emissions of the plant did not increase. In a unanimous decision, the Supreme Court held the EPA’s interpretation to be reasonable, deferring to the agency and to future agency interpretations and thus creating Chevron deference.

This doctrine guided administrative action for forty years, influencing how Congress drafted its legislation. As Justice Kagan pointed out in her Loper Bright dissent, Congress would intentionally leave vague or ambiguous terms for agencies to resolve, such as directing the Federal Aviation Administration to restore the “natural quiet” of Grand Canyon National Park.

Then Loper Bright happened. In one fell swoop, the Supreme Court overruled this long-standing precedent, or as Justice Gorsuch squarely put it, “[t]oday, the Court places a tombstone on Chevron no one can miss.” As a result, administrative law has entered a state of limbo. With deference removed, it is now up to courts’ independent judgment to decide when an agency has acted within its proper authority. There is no longer a barrier restricting courts from interjecting their own, potentially conflicting, interpretations of administrative statutes. Critics of Loper Bright worry that judges, who lack subject-matter expertise on many complex matters, will create inconsistent rulings across jurisdictions, leading to more confusion and uncertainty surrounding agencies’ authority.

If true, these concerns have significant implications for an agency’s ability to react to novel technologies such as AI.

What’s the 101 on AI? 

In simple terms, AI is a form of technology that can perform advanced tasks and reach conclusions as a human would. The technology has experienced rapid growth in recent years, and AI will seemingly touch every area of our lives. Whether it’s within your own home refining your Google search results, in healthcare as a tool to diagnose illness, or in business to automate key processes, AI is being widely adopted to reshape every aspect of our lives. This is not to say every use of AI is popular, or without its share of controversy. Examples such as the use of AI in insurance claim denials are just some of the many reasons why some believe the ability to regulate AI is essential. Without proper governance, privacy risks, system biases, and transparency concerns will persist, and what could be a net good could just as quickly become a net negative that abuses the public’s information.

How Can Agencies Respond to Loper Bright?

With the complexity of AI, questions arise on how federal agencies should approach regulating such a novel technology. The answer is unclear in the wake of Loper Bright. Agencies may still interpret broad or ambiguous statutes; Loper Bright did not eliminate this power. However, actions related to AI and other hot-button issues will likely receive higher scrutiny from potential plaintiffs, leading to more litigation. Agencies may consider this fact when planning to issue new regulations. This could cause them to act more cautiously or strategically and thus respond less effectively to rapidly emerging issues.

Agencies may lean on issuing more guidance documents and statements that explain new regulations or clarify existing policy. However, these are neither legally binding nor enforceable. One advantage of this is that not every guidance document is currently subject to judicial review. Guidance documents could therefore be used strategically to advocate for specific policy positions without facing the scrutiny that a typical regulation would face.

One pitfall of this strategy is that guidance documents are relatively limited in scope. In Appalachian Power Co. v. Environmental Protection Agency (EPA) (2000), the D.C. Circuit held that the EPA had improperly issued a guidance document because the guidance had the effect of a binding rule on private and state actors. The case highlights how courts rarely look kindly on attempts to evade judicial review. If agencies rely more heavily on guidance documents going forward, a likely outcome is that courts will exercise greater scrutiny over these documents to close any apparent workaround of Loper Bright.

Conclusion

It’s unclear right now how agency actions will evolve in a post-Chevron world. The only thing that appears certain is that litigation will follow. The power paradigm between the judicial and executive branches has rapidly and significantly shifted. At a time when the private sector has just announced a $500 billion investment in AI, there are no signs that this emerging technology will slow down. The next few years of governance will be critical in determining Loper Bright’s long-term effect on AI regulation.

While this blog has focused primarily on the administrative state and its ability (or now lack thereof) to regulate this novel technology, agencies are not the only mechanism of governance that exists. As always, the legislature can draft and pass legislation regulating AI and its implementation. However, given Congress’s recent and current inefficiency, the prospect of meaningful AI legislation seems slim.

(A.I.) Drake, The Weeknd, and the Future of Music

By: Melissa Torres

A new song titled “Heart on My Sleeve” went viral this month before being taken down by streaming services. The song racked up 600,000 Spotify streams, 275,000 YouTube views, and 15 million TikTok views in the two weeks it was available. 

Created by an anonymous TikTok user, @ghostwriter977, the song uses generative AI to mimic the voices of Drake and The Weeknd. The song also featured a signature tagline from music producer Metro Boomin. 

Generative AI is a technology that is gaining popularity because of its ability to generate realistic images, audio and text. However, concerns have been raised about its potential negative implications, particularly in the music industry, because of its impact on artists. 

Universal Music Group (UMG) caught wind of the song and had the original version removed from platforms due to copyright infringement. 

UMG, the label representing these artists, claims that the Metro Boomin producer tag at the beginning of the song is an unauthorized sample. YouTube spokesperson Jack Malon says, “We removed the video after receiving a valid copyright notification for a sample included in the video. Whether or not the video was generated using artificial intelligence does not impact our legal responsibility to provide a pathway for rights holders to remove content that allegedly infringes their copyrighted expression.”

While UMG was able to remove the song based on an unauthorized sample of the producer tagline, it still leaves the legal question surrounding the use of voices generated by AI unanswered. 

In “Heart on My Sleeve”, it is unclear exactly which elements of the song were created by the TikTok user. While the lyrics, instrumental beat, and melody may have been created by the individual, the vocals were created by AI. This creates a legal issue as the vocals sound like they’re from Drake and The Weeknd, but are not actually a direct copy of anything. 

These issues may be addressed by the courts for the first time, as initial lawsuits involving these technologies have been filed. In January, Andersen et al. filed a class-action lawsuit raising copyright infringement claims. In the complaint, they assert that the defendants directly infringed the plaintiffs’ copyrights by using the plaintiffs’ works to train the models and by creating unauthorized derivative works and reproductions of the plaintiffs’ work in connection with the images generated using these tools.

While music labels argue that a license is required because the AI’s output is based on preexisting musical works, proponents of AI maintain that using such data falls under the fair use exception in copyright law. Under the four factors of fair use, advocates for AI claim the resulting works are transformative, do not create substantially similar works, and have no impact on the market for the original musical work.

As of now, there are no regulations regarding what training data AI can and cannot use. Last March, the US Copyright Office released new guidance on how to register literary, musical, and artistic works made with AI. The new guidance states that copyright will be determined on a case-by-case basis, depending on how the AI tool operates and how it was used to create the final work.

In further attempts to protect artists, UMG urged all streaming services to block access from AI services that might be using the music on their platforms to train their algorithms. UMG claims that “the training of generative AI using our artists’ music…represents both a breach of our agreements and a violation of copyright law… as well as the availability of infringing content created with generative AI on DSPs…” 

Moreover, the Entertainment Industry Coalition announced the Human Artistry Campaign in hopes of ensuring that AI technologies are developed and used in ways that support, rather than replace, human culture and artistry. Along with the campaign, the group outlined principles advocating AI best practices, emphasizing respect for artists, their work, and their personas; transparency; and adherence to existing law, including copyright and intellectual property.

Regardless, numerous AI-generated covers have gone viral on social media, including Beyoncé’s “Cuff It” featuring Rihanna’s vocals and the Plain White T’s’ “Hey There Delilah” featuring Kanye West’s vocals. More recently, the musician Grimes shared her support for AI-generated music, tweeting that she would split royalties 50% on any successful AI-generated song that uses her voice. “Feel free to use my voice without penalty,” she tweeted, “I think it’s cool to be fused [with] a machine and I like the idea of open sourcing all art and killing copyright.”

As UMG states, it “begs the question as to which side of history all stakeholders in the music ecosystem want to be on: the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation.”

While the music industry and lawyers scramble to address concerns presented by generative AI, it is clear that “this is just the beginning” as @ghostwriter977 ominously noted under the original TikTok posting of the song. 

Is AI Good in Moderation?

By: Chisup Kim

In 2016, Microsoft released Tay, an artificial intelligence chatbot on Twitter that became smarter as users interacted with it. Unfortunately, the experiment did not last long: some Twitter users coordinated a barrage of inappropriate tweets at Tay to force the chatbot to parrot racist and sexist content. Within hours of going online, Tay tweeted racial slurs, support for Gamergate, and incredibly offensive positions. Last week, Microsoft returned to the AI space by launching a new AI-powered Bing search engine in partnership with OpenAI, the developers of ChatGPT. Unlike Tay, the Bing search AI is designed as a highly powered assistant that summarizes relevant articles or suggests related products (e.g., recommending an umbrella for sale alongside a rain forecast). While many news outlets and platforms are focused on reporting whether the Bing AI chatbot is sentient, the humanization of an AI-powered assistant creates new questions about the liability its recommendations could create.

Content moderation is not an easy technical task. While search engines provide suggestions based on statistics, search engine engineers also run parallel algorithms to “detect adult or offensive content.” However, these rules may not cover more nefariously implicit searches. For example, a search engine would likely limit or ban explicit searches for child pornography, but a user may type, say, “children in swimsuits” to get around those parameters, while simultaneously influencing the overall algorithm. While the influence may not be as direct or extensive as with Tay on Twitter, AI machine learning algorithms incorporate user behavior into their future outputs, tainting the search experience for the originally intended audience. In this example, search results tainted by a few perverted users could affect the results shown to a parent looking to buy an actual swimsuit for their child, surfacing photos depicting inappropriate poses. Around five years ago, Bing was criticized for suggesting racist and provocative images of children that were likely influenced by the searches of a few nefarious users. Content moderation is not an issue that lives just with the algorithm or just with its users; it is a complex relationship between both that online platforms and their engineers must consider.
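The gap described above is easy to reproduce even in a toy filter. The sketch below is purely illustrative (the blocklist terms are placeholders, not how Bing or any real engine works): an explicit-term blocklist catches overt queries but passes implicit ones, and an engine that then learns from engagement on those queries folds the tainted behavior back into future results.

```python
# Toy query filter: blocks queries containing explicitly banned terms,
# but lets implicit queries with the same intent pass straight through.
BANNED_TERMS = {"banned_term_a", "banned_term_b"}  # placeholder blocklist

def is_blocked(query: str) -> bool:
    """Return True if any banned term appears in the query."""
    tokens = set(query.lower().split())
    return bool(tokens & BANNED_TERMS)

print(is_blocked("banned_term_a photos"))   # True  -> filtered out
print(is_blocked("children in swimsuits"))  # False -> passes the filter;
# if click behavior on this query then feeds the ranking model, the taint
# propagates to future users searching the same innocuous words.
```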

Furthermore, the humanization of a recommendation service that alters how third-party content is presented may lead to further liability for the online platform. The University of Washington’s own Professor Eric Schnapper is involved in Gonzalez v. Google, which examines whether Section 230(c)(1) of the Communications Decency Act immunizes interactive computer services when they make algorithmically targeted recommendations of a third-party content provider. Section 230 currently immunizes most online platforms that qualify as an “interactive computer service” from being treated as a “publisher or speaker” of third-party information or content. The Gonzalez plaintiffs are challenging Google on the grounds that YouTube’s algorithmic recommendation system led some users to be recruited into ISIS, ultimately contributing to the death of Nohemi Gonzalez in the 2015 terrorist attacks in Paris. After the first day of arguments, the Supreme Court Justices seemed concerned about “creating a world of lawsuits” by attaching liability to recommendation-based services. Whatever the result of the lawsuit, the interactive nature of search-engine-based assistants creates more of a relationship between the user and the search engine. Scrutiny of how content is provided has appeared in other administrative and legislative contexts, such as the SEC researching the gamification of stock trading in 2021 and California restricting the types of content designs on websites intended for children. If Google’s AI LaMDA could pass the famous Turing Test and appear sentient (even if it technically is not), would the corresponding tech company bear more responsibility for the results of a seemingly sentient service, or would more responsibility fall on the user’s responses?

From my perspective, it depends on the role that search engines give their AI-powered assistants. As long as these assistants are merely answering questions and providing pertinent, related recommendations, without taking demonstrative steps to guide the conversation, search engines’ suggestions may still be protected as harmless recommendations. However, engineers need to remain vigilant about how user interaction in the broader environment may influence AI and its underlying algorithm, as seen with Microsoft’s Twitter chatbot Tay or with some of Bing’s controversial suggestions. Queries sent with covert nefariousness should be closely monitored so they do not influence the experience of the general user. AI can be an incredible tool, but online search platforms should be cognizant of the rising questions of how to properly moderate content and how to display it to their users.