Is AI-generated content protectable under U.S. copyright law?

The collision of AI-generated content and copyright is the subject of ongoing legal and ethical debate. Historically, technology has been a tool that artists use to further their craft, from the invention of the printing press to digital art software. With AI, however, technology is no longer just a tool; it becomes a creator in its own right.


AI’s ever-evolving capacity to generate human-like content in a fraction of the time it takes a human has the potential to completely disrupt the way we value human creativity, and it threatens the livelihood of writers, artists, and even lawyers.


This has left creatives and intellectual property rights holders grappling with existential questions: in the age of AI, what does it mean to create original content, and what is that content now worth? More pressingly, does the law protect AI-generated work? The battle to answer these questions is being waged on multiple fronts, from the courtroom, to the legislature, and even to the picket lines of the 2023 WGA and SAG strikes.

A Developing Legal Framework

The crux of the legal protection debate hinges on copyright law, which is designed to protect original works of authorship fixed in a tangible medium of expression. Traditionally, this has covered everything from screenplays, books, and songs to paintings, photographs, and sculptures.


But if an AI produces a screenplay or image, who owns the copyright? The developer of the AI? The user who ran the specific command? Or perhaps no one at all?

Human “Authorship” is Required for Copyright Protection

For starters, we know that some human involvement is required for a work to be copyrightable. The question is: how much? In a federal court decision (Thaler v. Perlmutter, et al.) issued August 18, 2023, the court ruled that the U.S. Copyright Office was correct in denying copyright registration for a work created by an AI without any human involvement, emphasizing that human authorship is a fundamental requirement for copyright protection.


The work at issue in Thaler was an image titled “A Recent Entrance to Paradise”, which was created by an AI system called the “Creativity Machine”. Thaler sought to claim the copyright in this computer-generated work himself “as a work-for-hire to the owner of the Creativity Machine.” Thaler theorized that since he owned the Creativity Machine, the arrangement was comparable to an employee producing works during the course of their employment, no different from, say, a movie studio hiring a writer to write a screenplay. The Copyright Office rejected this idea and reaffirmed that only humans can create copyrightable works of authorship.


Practically, Thaler tells us that you cannot generate a work from an AI and expect to own the copyright to that work. But, if AI-generated content is incorporated into a larger, predominantly human-created work, can it be likened to any other tool or instrument that artists have historically used to augment their creations? There isn’t a direct answer yet to this question, but I suspect the answer is yes. However, there are underlying legal issues that complicate this framework.

Who Taught AI how to Generate Content in the First Place?

The Thaler Court highlighted a critical unresolved question: How do we assess the originality of AI-generated works when the AI systems might have been trained using unknown pre-existing works?


This question is at the forefront of a copyright infringement lawsuit filed on August 18, 2023, by a group of 17 authors that includes George R. R. Martin, John Grisham, and Michael Connelly against OpenAI and its AI tool ChatGPT.


The lawsuit alleges that OpenAI illegally copied the copyrighted works of Martin, Grisham, and many others to train ChatGPT, resulting in apparent instances of ChatGPT being used to impersonate specific writers and generate low-quality ebooks. As one example, the lawsuit cites author Jane Friedman discovering half a dozen books being sold on Amazon under her name that she did not write or publish! The lawsuit seeks to enjoin OpenAI from using the Plaintiffs’ works to train ChatGPT and also seeks monetary damages. Sarah Silverman, Christopher Golden, and Richard Kadrey filed a similar case on July 7, 2023.


These lawsuits pose an existential challenge: if a machine can echo in seconds an artist’s unique voice that would otherwise take a human hours, days, or even years to craft, where does one draw the line between human originality and machine-generated content? How do we evaluate the worth of human effort? And what happens when the voice and nuance of the world’s most established artists are suddenly (and perhaps unknowingly) incorporated into larger, predominantly human-created works?

Broader Implications

We know AI can generate content, but can it truly replicate the emotions and experiences that shape human creativity? Do audiences even care? We know the U.S. Senate does, to some degree at least, based on its October 27, 2022 letter to the Copyright Office and USPTO, urging the joint establishment of a national commission on AI to assess the role of AI across the innovation economy and consider what changes, if any, should be made to existing law. The Senate letter contemplates the possibility that protection of AI-generated work does not neatly fall into any of the currently existing buckets of IP (e.g. Copyright, Patent, Trademark, etc.), and that new forms of protection might be necessary. The Senate placed a deadline of October 17, 2023 for the commission to be established with the hope of a report being issued to Congress by December 31, 2024.


The reality for many people is that the very essence of what it means to be a creator is under attack. We saw this unfold in the recently concluded WGA Strike, where, among other things, screenwriters sought protection against the threat of AI devaluing their talent and work, and potentially rendering them obsolete, particularly in a world where AI can be trained to impersonate their voice. The WGA gained significant ground when major protections were agreed upon to limit the use of AI on WGA-covered projects, including the WGA reserving the right (not unlike Martin and his co-plaintiffs) to assert that exploitation of writers’ material to train AI is prohibited by the collective bargaining agreement or other law.


While many argue that AI, irrespective of its sophistication, will never truly capture the human essence, others see it as a means to augment human creativity rather than replace it (including, apparently, the WGA, which agreed that a writer can choose to use AI when performing writing services if the studio or production company consents).


These events underscore the pressing need for a robust legal framework that can navigate the complex interplay between AI and human creativity. It’s clear that as technology and creativity collide, there’s an urgent need to rethink our legal paradigms to preserve the trajectory and value of artistic human expression.


About David Yaffe

David Yaffe is an intellectual property and business lawyer with an extensive background in litigation, transactions, and employment matters. Yaffe is the founder of the Miami, Florida law firm Yaffe Law, which caters to the unique needs of creatives, businesses, and employees. The firm's office can be reached at 305-699-2315.

Yaffe Law

October 4, 2023
