
By: Esha Kher
In recent weeks, AI-generated images mimicking the iconic look of Studio Ghibli have gone viral across platforms like X and Instagram, sparking controversy. Selfies, family portraits, and memes have been transformed into soft, pastel-hued depictions that echo the dreamlike aesthetic of the legendary Japanese animation studio. Founded in 1985 by Hayao Miyazaki, Isao Takahata, and Toshio Suzuki, Studio Ghibli is renowned for its rich storytelling and distinctive, heartwarming visual style—now replicated widely through AI.
The recent trend of users generating Studio Ghibli-style images with the latest version of OpenAI’s GPT-4o has gained immense traction on social media, even causing server overloads. It has also sparked considerable debate, dividing public opinion into two camps: AI enthusiasts and staunch critics of AI-generated art. Supporters view the phenomenon as a tribute to Studio Ghibli’s influence and a democratization of creative tools. Critics, however, see the trend as a hollow, inauthentic imitation of Hayao Miyazaki’s distinct style, devoid of creative soul or artistic merit. An old clip of Miyazaki himself resurfaced during the controversy, in which he vehemently denounced AI-generated imagery as “an insult to life itself.”
Regardless of one’s stance on the debate, the trend raises important legal questions. Do AI models rely on copyrighted material to replicate distinct visual styles? And when these outputs resemble a studio’s recognizable aesthetic, like that of Studio Ghibli, do they risk infringing on copyright or trademark protections?
How ChatGPT Generates “Stylized” Images
GPT-4o generates images autoregressively, representing each image as a sequence of visual “tokens” that function like words in a sentence. Just as ChatGPT predicts the most likely next word in a sentence, the image model predicts and assembles these visual tokens to form a coherent image. Through training on large datasets of images and text, the model learns to associate certain patterns, like color palettes or brushstrokes, with specific words, encoding them as abstract “styles” within its neural network. So when a user references “Studio Ghibli,” the model doesn’t retrieve frames from actual films but instead draws on a learned mathematical representation of the studio’s aesthetic (otherwise known as “Ghibli-ness”). This ability to isolate and apply visual features across new images is known as a “style engine.”
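To make the mechanics concrete, the toy Python sketch below mimics this token-by-token process: a prompt is mapped to a “style” vector, and visual tokens are then sampled one at a time, each conditioned on that vector and on the tokens already generated. Everything in it (the codebook size, the toy_style_embedding and next_token_logits functions, the random “weights”) is a hypothetical, simplified stand-in for the trained neural networks a real system uses; it illustrates the idea of a style engine, not OpenAI’s actual implementation.

```python
# Toy illustration of autoregressive image generation (NOT OpenAI's implementation).
# All names and numbers are hypothetical stand-ins for trained model components.
import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE = 16   # toy codebook of visual "tokens" (real models use thousands)
GRID = 4          # the "image" here is just a 4x4 grid of token ids

# Stand-in for trained transformer weights: scores each candidate token
# from a 16-dimensional context vector (8 dims of style + 8 dims of history).
W = rng.standard_normal((VOCAB_SIZE, 16))

def toy_style_embedding(prompt: str) -> np.ndarray:
    """Map a text prompt to a fixed-length vector (stand-in for a learned text encoder)."""
    vec = np.zeros(8)
    for i, ch in enumerate(prompt.lower()):
        vec[i % 8] += ord(ch) % 13
    return vec / (np.linalg.norm(vec) + 1e-9)

def next_token_logits(style: np.ndarray, generated: list[int]) -> np.ndarray:
    """Score every candidate token given the style vector and the tokens generated so far."""
    history = np.zeros(8)
    for t in generated[-4:]:          # condition on the most recent tokens
        history[t % 8] += 1.0
    return W @ np.concatenate([style, history])

def generate(prompt: str) -> np.ndarray:
    """Sample visual tokens one at a time, left to right, conditioned on the prompt."""
    style = toy_style_embedding(prompt)
    tokens: list[int] = []
    for _ in range(GRID * GRID):
        logits = next_token_logits(style, tokens)
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        tokens.append(int(rng.choice(VOCAB_SIZE, p=probs)))
    # A real model would pass these tokens to an image decoder to produce pixels.
    return np.array(tokens).reshape(GRID, GRID)

print(generate("a cottage in the style of Studio Ghibli"))
```

The key point for the legal discussion is visible in the sketch: nothing in the generation loop looks up or copies an existing image; the prompt only steers which tokens are statistically likely, based on associations learned during training.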
Copyright Law Implications
The use of style engines has raised entirely new questions about copyright law and creative ownership. U.S. copyright law does not protect artistic styles as such; it protects only the particular ways those styles are expressed, that is, original works of authorship.
However, legal experts caution that this distinction may not be sufficient. While “style” in the abstract is not copyrightable, what people casually refer to as “style” may include recognizable, discrete elements of a work of art. Therefore, the blanket statement that an artistic style isn’t protectable under copyright law may not be absolute. Courts may still find infringement if generated images include elements that are original, expressive, and substantially similar to the copyrighted works.
This legal ambiguity is at the heart of Andersen v. Stability AI, a landmark lawsuit filed in 2023 by three visual artists against AI companies Stability AI, Midjourney, and DeviantArt. The artists allege that these companies used their copyrighted artworks without consent to train AI models like Stable Diffusion, which can generate images that imitate their distinctive styles. The plaintiffs argue that such outputs constitute derivative works and that, even if the results aren’t direct copies, the unauthorized use of their works in training data alone may amount to copyright infringement. Similar concerns have surfaced in other lawsuits, including Huckabee v. Meta and Millette v. Nvidia, where creators claim that their content was scraped and repurposed by generative AI platforms, raising serious questions about how copyright applies in the context of machine learning.
Further, there is growing concern that OpenAI may have used Studio Ghibli’s films and artwork to train its generative image model without prior consent from the animation studio. This could constitute copyright infringement if the works were repurposed in a way that exceeds the scope of the fair use doctrine. That doctrine permits limited use of copyrighted material for purposes such as criticism, commentary, teaching, scholarship, and research, and it favors uses that are transformative rather than substitutes for the original work. OpenAI maintains that training its models qualifies as fair use under copyright law, but this defense remains largely untested in court.
Trademark Law Implications
Beyond copyright law, Studio Ghibli could assert that OpenAI’s generation of “Ghibli-style” images infringes upon its trademark rights under the Lanham Act. While an animation style—such as Studio Ghibli’s distinctive visual aesthetic—is not independently protected by trademark law and does not trigger the traditional “likelihood of confusion” test used in trademark infringement claims, other aspects of trademark law may still apply.
When there is no registered trademark involved, Section 43(a) of the Lanham Act (15 U.S.C. § 1125(a)) provides broader protection by prohibiting false endorsement, sponsorship, or affiliation. Under this provision, a claim can arise when: (1) a defendant uses elements closely associated with a person or brand, such as names, visual likenesses, or identifying characteristics, and (2) this use creates a false impression in the minds of consumers that there is a connection, endorsement, or affiliation with the original brand.
In this context, while Studio Ghibli may not have a registered trademark on its animation “style” per se, OpenAI’s promotion of “Ghiblification” experiments—along with OpenAI employees sharing Ghibli-style portraits of themselves on social media—could potentially give rise to a false endorsement claim under § 43(a). This is especially true if such references imply to the public that Studio Ghibli has authorized or collaborated with OpenAI in developing these tools. Even allowing prompts such as “in the style of Studio Ghibli” could lead consumers to mistakenly believe that Studio Ghibli has endorsed or is affiliated with the image generation process. While this may not amount to traditional trademark infringement, it opens the door to a false association claim under the broader protections of the Lanham Act.
Conclusion
The rise of AI-generated art in the style of Studio Ghibli underscores the growing legal uncertainty surrounding copyright and trademark protections. As of now, Studio Ghibli has not initiated any legal action against OpenAI over the AI-generated images mimicking its distinctive animation style. Nonetheless, there may be grounds to pursue action under U.S. law. While U.S. copyright law does not formally protect artistic styles, the line becomes blurry when AI outputs closely resemble original works in expression and substance. At the same time, Studio Ghibli may have a claim under trademark law, particularly if the AI-generated images create consumer confusion or falsely suggest an endorsement by the studio. By capitalizing on Ghibli’s recognizable aesthetic, OpenAI risks infringing on the studio’s brand and artistic reputation. These unresolved questions highlight the need for updated legal frameworks that account for how AI systems produce and distribute creative content without prior consent from creators.



