By Gordon Hull
I have argued in various contexts that when we think about AI and authorship, we need to resist the urge to say that AI is the author of something. Authorship should be reserved for humans, because authorship is a way of assigning responsibility, and we want humans to be responsible for the language they bring into the world.
I still think that’s basically right, but I want to acknowledge here that the argument does not generalize and that responsibility in sociotechnical systems involving AI needs to be treated very carefully. Consider the case of so-called autonomous vehicles (AVs), the subject of a really interesting paper by Maya Indira Ganesh. Ganesh basically argues that the notion of an autonomous vehicle obscures a much more complicated picture of agency in a couple of ways. First, automation doesn’t actually occur. What really happens is that human labor is distributed differently across a sociotechnical system:
“automation does not replace the human but displaces her to take on different tasks … humans are distributed across the internet as paid and unpaid micro-workers routinely supporting computer vision systems; and as drivers who must oversee the AV in auto-pilot” (2).
Automation is really “heteromation.” This part seems absolutely correct; it is also the subject of Matteo Pasquinelli’s genealogy of AI. Pasquinelli shows in detail how the automation of labor – and in particular, labor that can be divided into discrete tasks – has been a key factor in the development of computing and other systems from the start; Babbage’s analytical engine is as much about the division of labor as anything else. Pasquinelli’s last major chapter is about the development of pattern recognition and the models on which current AI systems are based. Here, in the case of AVs (and as I and others have discussed in the case of language models), the system performs as well as it does not only because it scrapes a lot of data from the internet and other sources, but also because humans are intimately involved in training the machines, whether in the form of RLHF, toxicity removal, or the identification of images in vision systems. Vision systems are key to AVs, and Ganesh emphasizes that the distributed labor of Mechanical Turk workers and other annotators is essential to the operation of the vehicles. The fragility of these image recognition systems is therefore central to the failure of AVs.
Ganesh’s second point is that this puts humans in an impossible subject position: they are tasked with supervising an entity that they have no actual access to. As Ganesh puts it:
“The discursive construction of the AV rests on the transition from human to robot driving; precisely because the AV is not just a car or a robot but is also a distributed data infrastructure running AI technologies, there are subject positions the human may find herself in that she cannot necessarily predict or control given what big data infrastructures are” (2).
In other words, the very term “autonomous” vehicle is a complicated rhetorical strategy that tends to overstate the extent to which the vehicle is autonomous or the extent to which the driver is able to control it. The human confronts a “scale of computational, automated decision-making that is near impossible to intervene in from the outside” (2). She reports riding with a test driver who found it nearly impossible to “let” an AV drive because its default responses differed so much from her own. Humans are put in charge of entities that behave in strange ways, and – this strikes me as the essential point – tend to fail both catastrophically and unpredictably (I am reminded here of work on strategies to fool image recognition systems by altering a few pixels: the machine misclassifies the image on the basis of an anomaly that a human wouldn’t even notice. That’s the technical equivalent of wearing a goofy mask).
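For readers who want to see what that fragility looks like in practice: the best-known member of this family of attacks is the fast gradient sign method, which nudges every pixel by an imperceptibly small amount in whatever direction most increases the classifier’s loss (one-pixel variants change even less). What follows is a minimal sketch, assuming a PyTorch image classifier; the model, image, label, and epsilon value are placeholders for illustration, not anything drawn from Ganesh’s paper.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Fast gradient sign method (illustrative sketch only).

    Shifts every pixel by at most `epsilon` in the direction that
    increases the classifier's loss. The change is invisible to a
    human viewer but can be enough to flip the model's prediction.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # how wrong is the model now?
    loss.backward()                               # gradient of loss w.r.t. pixels
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()         # keep pixels in a valid range
```

The point of the sketch is only that the “anomaly” exploited here lives entirely in the gradient structure of the model, not in anything a human driver or annotator would perceive as a change to the scene.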
To think of an AV as autonomous (and somehow ‘intelligent’) is to miss the point. As Pasquinelli says of AI in general:
“It appears that the project of AI has never been truly biomorphic (aiming to imitate natural intelligence) … but implicitly sociomorphic – aiming to encode the forms of social cooperation and collective intelligence in order to control them. The destiny of the automation of intelligence cannot be seen as separate from the political drive to autonomy: it was ultimately the self-organization of the social mind that gave form and momentum to the project of AI” (160).
From this point of view, the gambit of the autonomous vehicle is essentially that a disciplined operation of collective intelligence, in which the recognition of objects around a vehicle is a process that can be mastered by a sufficiently robust division of labor and the complex aggregation of discrete, straightforward tasks, is a better way to navigate a car than the biological system that it seeks to replace. That is, as Ganesh points out:
“Computer vision in AVs is not advanced enough for driving and has emerged as a weak link in all fatal crashes so far. It is not that the AV, fitted with multiple sensors, cameras, Lidar and radar to document the environment, cannot visually sense, but that it cannot make sense of what it senses. Humans must annotate images so that computer vision algorithms can learn to distinguish one object from another, and then apply this when encountering new and unfamiliar images” (6).
The annotation, the training, and the algorithm together are the sociomorphic replacement of the biological driver. Auto executives fantasize about Pakistani workers connected through 5G networks correcting machine perceptions in real time (no, really, Ganesh quotes one!) and chastise pedestrians for not being predictable enough. And they install lots of systems for monitoring the biological humans tasked with operating these assemblages.
All of this matters because of the way that the autonomy language, and the idea that humans are supervisors, is utilized to assign blame to humans when something goes wrong. Citing Karen Levy’s work on the trucking industry (you really ought to read her book), Ganesh notes that “despite the limited human control in such systems, accountability and liability still fall on the human operator, coupled with surveillance and monitoring systems to discipline the human to remain alert and vigilant in their role as driver-overseer” (2). Thus system crashes and failures:
“are evidence of the human operator/driver in the difficult role of having to be simultaneously vigilant and relaxed so as to take over at a moment’s notice; and particularly in the context of the auto-pilot, the technology that makes autonomous driving appear ‘real’ in the sense of the car being self-driving. Thus, surveillance and monitoring of human drivers has become a part of the AV-driving experience. AV testing requires that a driver-facing camera be fitted to record and monitor driver behaviour, physiological states, and affect” (7).
Affect recognition is of course completely unreliable pseudo-science. Also, humans who are relaxed can’t suddenly take over a machine. Levy’s takedown of this “baton passing” model is compelling. As she notes, “the time scale in which the baton is passed is miniscule: because of the nature of driving, a human is likely to have an extremely short window – perhaps only a fraction of a second – in which to understand the machine’s request to intervene, assess the environmental situation, and take control of the vehicle” (Data Driven, 133-4). Apparently a 2015 NHTSA study found that it could take humans as long as seventeen seconds to regain control. As Levy notes, a robust psychological literature supports the thought that “it’s cognitively unrealistic to expect humans to remain alert to the environment in case of emergencies” because of “passive fatigue” and “vigilance decrement”: even someone who tries to pay attention to the road is going to start missing things, because they are monitoring the driving rather than engaged in it (this is why TSA screeners rotate in and out of monitoring screens frequently). In short, “so long as humans have some duty to monitor the driving environment … humans will almost inevitably do a poor job at accepting the baton from the machine” (136).
Next time I’ll loop back to moral language.