ChatGPT, OpenAI's AI chatbot, has ignited a social media frenzy with its ability to generate "Ghibli-style" images. Users have been creating stunning AI artworks that emulate the dreamy aesthetics of Studio Ghibli, flooding social platforms with these viral visuals. However, beyond the excitement, concerns over copyright and AI-generated content have started to surface.
Studio Ghibli's animations are celebrated worldwide for their unique artistry and heartfelt storytelling, embodying the dedication and craftsmanship of its animators. But when AI generates similar images, does it constitute copyright infringement? How can artists safeguard their work? And can current copyright law keep pace with AI's rapid advancement? To explore these pressing questions, The Nexus invited Evan Brown, a Chicago-based intellectual property attorney, to offer legal insight into the controversy. Brown is known for handling complex matters involving AI, copyright, trademarks, domain names, and other issues at the intersection of law and technology.
The Nexus: From a legal standpoint, does OpenAI's generation of Ghibli-style images infringe on Studio Ghibli's copyright? How is copyright infringement defined in the United States?
Evan Brown: Copyright infringement in the United States occurs when someone uses material protected by copyright law without permission in a way that violates the copyright owner's exclusive rights. For OpenAI's Ghibli-style images, the legal analysis hinges on the crucial distinction between style and specific expression.
U.S. copyright law explicitly states that protection does not "extend to any idea, procedure, process, system, method of operation, concept, principle, or discovery." This means Studio Ghibli's general artistic style — characterized by dreamlike natural environments and particular animation techniques — likely falls outside copyright protection.
However, if OpenAI generates images that reproduce specific protected elements — like copying the exact design of Howl's moving castle with its distinctive mechanical-organic features and unique decorative elements — this could potentially constitute infringement. The key test would be whether there is "substantial similarity" between the AI-generated images and the protected expressive elements of Ghibli's works, not just a similar aesthetic approach.
For infringement to be established, courts would examine whether OpenAI had access to Ghibli's works (generally assumed for widely available films) and whether the similarities arose from actual copying rather than coincidence or independent creation.
The Nexus: Could Studio Ghibli take legal action against OpenAI? On what legal basis could it sue?
Evan Brown: The success of such claims would depend heavily on whether the AI-generated images copy protected expressive elements rather than just emulating an artistic style. OpenAI would likely defend itself by arguing that any similarities relate to unprotectable stylistic elements or by invoking the fair use doctrine, which permits limited use of copyrighted material without permission for purposes such as criticism, comment, news reporting, teaching, scholarship, or research.
The Nexus: How can creators legally defend their rights against AI infringement, and how difficult is it to do so in practice?
Evan Brown: Creators face both legal and practical challenges when defending against AI-related infringement:
Legal strategies include:
Registering works with the U.S. Copyright Office, which establishes a public record of ownership and enables statutory damages in infringement cases
Documenting unique creative elements to help establish what aspects deserve protection
Developing licensing terms that specifically address AI training and generation
Pursuing strategic enforcement against particularly clear cases of infringement
Advocating for legislative updates that address AI-specific challenges
However, practical difficulties make this challenging:
Identifying when AI systems have been trained on specific works is technically difficult
Proving that an AI system was trained on particular works presents significant evidentiary hurdles
Legal action requires substantial financial resources many creators lack
The "black box" nature of AI systems makes establishing direct links between training data and outputs extremely complex
The global nature of AI development creates jurisdictional complications
In practice, these challenges mean that individual creators often find it nearly impossible to enforce their rights effectively against large AI companies without significant resources or collective action.
The Nexus: What do you make of the controversy over AI companies using copyrighted material to train their models? Is existing copyright law adequate to address the challenges posed by AI-generated content?
Evan Brown: The controversy around AI training on copyrighted materials highlights a fundamental tension between technological innovation and creators' rights — one that current copyright law wasn't designed to address.
The argument that AI training constitutes "transformative" use under fair use doctrine has merit — these systems don't reproduce exact copies but rather learn patterns to create new works. However, content creators raise legitimate concerns about their materials being used without authorization or compensation to develop commercial products that could potentially undermine the market for their original works.
Existing copyright law appears inadequate for several reasons:
It wasn't designed with machine learning in mind and lacks frameworks for distinguishing between learning from works and copying protected elements
It assumes human creators making conscious decisions about copying, while AI systems operate through statistical pattern recognition
It doesn't provide clear guidance on balancing creator protection with the societal benefits of AI advancement
Its territorial nature creates jurisdictional problems for globally deployed AI systems
A more effective approach might include specialized licensing frameworks for AI training data, compensation systems that acknowledge creators' contributions, clearer standards for derivative works in the AI context, and transparency requirements about training sources.
As this field continues to evolve rapidly, we need thoughtful adaptation of copyright principles that preserves the incentives for human creativity while allowing technological progress to continue.
Interview questions reviewed by Ron Frederick; article edited by Alex Li.
"Questions on AI" is a new interview series by The Nexus, focusing on the wide-ranging impact of AI technologies. We’ll be speaking with experts, scholars, and practitioners from various fields to explore how AI is reshaping our world.