By Gordon Hull
I have argued in various contexts that when we think about AI and authorship, we need to resist the urge to say that AI is the author of something. Authorship should be reserved for humans, because authorship is a way of assigning responsibility, and we want humans to be responsible for the language they bring into the world.
I still think that’s basically right, but I want to acknowledge here that the argument does not generalize and that responsibility in sociotechnical systems involving AI needs to be treated very carefully. Consider the case of so-called autonomous vehicles (AVs), the subject of a really interesting paper by Maya Indira Ganesh. Ganesh argues that the notion of an autonomous vehicle obscures a much more complicated picture of agency in at least two ways. First, automation doesn’t actually occur. What really happens is that human labor is distributed differently across a sociotechnical system:
“automation does not replace the human but displaces her to take on different tasks … humans are distributed across the internet as paid and unpaid micro-workers routinely supporting computer vision systems; and as drivers who must oversee the AV in auto-pilot” (2).
Automation is really “heteromation.” This part seems absolutely correct; it is also the subject of Matteo Pasquinelli’s genealogy of AI. Pasquinelli shows in detail how the automation of labor – in particular, labor that can be divided into discrete tasks – has been a key factor in the development of computing and other systems from the start; Babbage’s analytical engine is as much about the division of labor as anything else. Pasquinelli’s last major chapter is about the development of pattern recognition and the models on which current AI systems are based. Here, in the case of AVs (and as I and others have argued in the case of language models), the system performs as well as it does not only because it scrapes a lot of data from the internet and other sources, but also because humans are intimately involved in training the machines, whether through reinforcement learning from human feedback (RLHF), toxicity removal, or the identification of images in vision systems. Vision systems are key to AVs, and Ganesh emphasizes that the distributed labor of Mechanical Turk workers and other annotators is essential to the operation of the vehicles. The fragility of these image recognition systems is therefore central to the failure of AVs.