By Gordon Hull
This one has been percolating a while… Stephen Thaler’s AI created a picture (below the fold), and Thaler has been using it to push for the copyrightability of AI-generated material. That endeavor has been getting nowhere, and the D.C. District Court just ruled on the question of “whether a work generated autonomously by a computer falls under the protection of copyright law upon its creation,” in the same way as a work generated by a person. Copyright attaches to human work very generously – this blog post is copyrighted automatically when I write it, and so are the doodles you make on napkins. You get lots of extra protections and litigation benefits if you register, but registration is not a requirement for copyright itself. Per 17 U.S.C. Sec. 102, copyright subsists in “original works of authorship fixed in any tangible medium of expression, now known or later developed, from which they can be perceived, reproduced, or otherwise communicated, either directly or with the aid of a machine or device.” Given this, it’s not hard to see why someone would want to know whether an AI could be an “author” in the relevant sense.
The Court ruled that “United States copyright law protects only works of human creation.” This is not a surprise. The central argument is that “Copyright is designed to adapt with the times. Underlying that adaptability, however, has been a consistent understanding that human creativity is the sine qua non at the core of copyrightability, even as that human creativity is channeled through new tools or into new media.” Indeed, “human authorship is a bedrock requirement of copyright.” The Court both cites historical precedent and grounds the requirement in the constitutional purpose of copyright, which is to incentivize the creation of new works:
“At the founding, both copyright and patent were conceived of as forms of property that the government was established to protect, and it was understood that recognizing exclusive rights in that property would further the public good by incentivizing individuals to create and invent. The act of human creation—and how to best encourage human individuals to engage in that creation, and thereby promote science and the useful arts—was thus central to American copyright from its very inception. Non-human actors need no incentivization with the promise of exclusive rights under United States law, and copyright was therefore not designed to reach them.”
The Court cites a string of cases in which copyright was claimed for works originating with various non-human entities, ranging from deities to monkeys, and denied (the most relevant is probably the monkey-selfie case, in which a macaque took a photo of itself and PETA sued on the monkey’s behalf). In sum, “Plaintiff can point to no case in which a court has recognized copyright in a work originating with a non-human.”
Lacking statutory, constitutional, theoretical, or judicial support, the Court concludes that AI cannot be an author. Here’s the picture, and then some thoughts:
Some notes:
First, Thaler was trying to argue that the AI created the work and that he should get the rights because the resulting image was a “work for hire.” Work-for-hire doctrine standardly applies when, for example, you code for Microsoft and that’s your job. The product is copyrightable, but because it’s your ordinary employment and you code what they tell you to (there are a number of factors; I’m simplifying), they automatically own the work and are considered the author. So the “author” is the corporation, not the individual employee of the corporation who did the coding. The Court repeatedly declined to go there, on the grounds that the work simply wasn’t copyrightable. But given that the narrow question of whether the AI generated the work entirely on its own is a bit artificial – at the very least it was prompted – we’re likely to see more nuanced attempts to negotiate the sense of autonomy behind AI creation.
Second, there is a contingent of folks that tries to say that current AI is sentient. They’re wrong, and the Court correctly treats the question of whether non-human sentient beings can be authors as a matter of academic conjecture, citing a paper by Justin Hughes for the point that “[t]he day sentient refugees from some intergalactic war arrive on Earth and are granted asylum in Iceland, copyright law will be the least of our problems.”
Finally, this was only an initial ruling. Expect at least an appeal and an appellate decision. The issues here are weird, and it’s not clear what the right path forward is (though imho the Court’s legal analysis is sound; I’m talking about what the right kind of policy looks like, however we get there). For example, if the AI-generated work is copyrightable, let’s say as a work for hire (I tell ChatGPT to “write a story about two star-crossed lovers”), then I can become the proud owner of indefinitely many such works, literally every day. That intuitively seems weird. At a minimum, it inflates my role as an author in a way reminiscent of the worst versions of the Romantic-author thesis. There’s a complicated set of questions underneath this – arguably, existing copyright underplays the extent to which creativity is a group project – but Thaler’s work-for-hire argument does seem to give too much to whoever prods the AI into motion.
One of the worries about AI is that tons and tons of AI-generated material is going to sort of take over the Internet. There has been a similar problem in photography with the rise of sites like Getty – the Internet is awash in good-enough photography, to the point that professional photographers are really getting squeezed. Jessica Silbey’s new book treats this as a question of how we can make intellectual property work in the context of the values we want it to promote, and I think a number of related conversations can and should take place in the case of AI authorship. One way to frame the problem might be this: if we view copyright as incentivizing creation, then you need a lot less incentive to motivate AI creation, because it involves much less work on the part of the “author.” So the author should get less protection, since less is needed to incentivize their creation. However, there’s also TONS of hidden work by the people who make the AI – and a lot of those people aren’t the well-paid engineers, but do poorly-remunerated and difficult work on content curation or training the models. We need to talk about those folks, to say nothing of the people whose photos are being used to train the model. Calling it all “public domain” doesn’t work either; more than ever, property needs to be understood as a form of cultural stewardship.