By Gordon Hull
In the previous two posts (here and here) I’ve developed a political account of authorship (according to which whether we should treat an AI as an author for journal articles and the like is a political question, not one about what the AI is, or whether its output resembles human output), and argued that AIs can’t be properly held accountable. Here I want to argue that AI authorship raises social justice concerns.
That is, there are social justice reasons to expand human authorship that do not apply to AI. As I mentioned in the original post, researchers like Liboiron are trying to make sure that the humans whose effort makes papers possible get credit for them. In a comment to that post, Michael Muller underlines that authorship interacts with precarity in complex ways. For example, “some academic papers have been written by collectives. Some academic papers have been written by anonymous authors, who fear retribution for what they have said.” Many authors have precarious employment or political circumstances, and sometimes works are sufficiently communal that entire communities are listed as authors. There are thus very good reasons to use authorship strategically when minoritized or precarious people are involved. My reference to Liboiron is meant only to indicate the sort of issue at stake in the strategic use of authorship to protect such individuals, and to gesture at the more complex versions of the problem that Muller points to. The claim I want to make here is that, as a general matter, AI authorship isn’t going to help those minoritized people, and might well make matters worse.
If anything, there’s a plausible case that elevating an AI to author status will make social justice issues worse. There are at least two ways to get to that result, one specific to AI and one more generally applicable to cognitive labor.