By Gordon Hull
Last time, I followed a paper by Maya Indira Ganesh that thinks about the construction of autonomy in autonomous vehicles. Ganesh points out that what happens isn’t so much the arrival of an autonomous vehicle but the displacement of human labor into a sociotechnical system and the production of the driver as a supervisor of this system. The result is a system of (in Karen Levy’s terms) “baton passing” where the machine gets the human to help it navigate tricky situations.
So why expect this arrangement to work, or act as though it will? Because it lets people get blamed when they fail to do the impossible; as Ganesh puts it, “no doubt this surveillance data will protect car and ride-sharing companies against future liability if drivers are found to be distracted. This monitoring is literal bodily control because it is used to make determinations about people” (7). In other words, the imaginary of an autonomous vehicle that passes control to a driver when needed allows everyone to forget what we already know: the visual recognition systems in AVs aren’t reliable. Instead of a failure on the part of whoever made the AV, the failure belongs to the driver, who should have known that the vehicle would need assistance and who failed to act promptly when it did. As I indicated last time, Levy cites quite a bit of research to the effect that humans cannot plausibly perform this function. But that doesn’t mean there won’t be efforts to stick it on us; Levy outlines how various forms of biosurveillance are coming for truckers, and we can imagine similar efforts coming for drivers of AVs.
Levy argues that one effect of all of this is to deflect our attention away from actual social problems. Her book is structured around the adoption of electronic logging devices (ELDs) to monitor truck drivers. The core social problem is that truckers are only compensated for the time they are actually moving their vehicle – not the time they spend waiting to unload it, finding a place to sleep, eating meals on the road, etc. Here is Levy:
“By using digital surveillance to enforce rules, we focus our attention on an apparent order that allows us to ignore the real problems in the industry, as well as the deeper economic, social, and political causes. Under the apparent order envisioned by the ELD, the fundamental problem in trucking is that truckers cannot be trusted to reliably report how much they work, and the solution to that problem is to make it more difficult for them to fudge the numbers. But under the actual order, the problem in trucking is that drivers are incentivized to work themselves well beyond healthy limits – sometimes to death. The ELD doesn’t solve this problem or even attempt to do so. It doesn’t change the fundamentals of the industry – its pay structure, its uncompensated time, its danger, its lack of worker protections …. More meaningful reform in trucking would require a ground-up rethinking of how the industry is structured economically, in order to make trucking a decent job once more. So long as trucking is treated as a job that churns through workers, these problems won’t be solved …. If we paid truckers for their work, rather than only for the miles they drive – for the actual number of hours they work, including time they are waiting to be loaded and unloaded at terminals, time they are inspecting their trucks and freight, time they are taking the rest breaks that their bodies need to drive safely – drivers would be far less incentivized to drive unsafely or when fatigued” (153-4, emphases original).
But of course paying truckers for all that work would be bad for the profit margins of trucking companies and might well mean slightly more expensive consumer goods. Most of the unpaid work is also work that cannot be automated or quantified the way miles can. Paying truckers only for the miles they drive is convenient both for capitalism and for quantification.
Ganesh mentions the story of Rafaela Vasquez, an Uber test driver who failed to stop her AV from striking and killing a pedestrian who was walking a bicycle across the road. Ganesh’s article is from 2020; in July 2023, Vasquez pleaded guilty to a reduced charge of endangerment and will serve four years of supervised probation. Vasquez had apparently been looking at her phone up until the last second before the crash. But recall Levy: Vasquez might have been unable to stop the vehicle even if she’d been staring at the road with both hands on the wheel, because the failure of the image recognition system to recognize a pedestrian crossing outside a crosswalk surely counts as unpredictable, and because she’d likely have been suffering from passive fatigue and vigilance decrement. Worse, lots of research tells us an ugly truth about the criminal justice system: most cases don’t go to trial, and accused people take plea bargains because the risk of going to trial on a charge like negligent homicide – the charge Vasquez originally faced – is too great. This is especially true for defendants who don’t have lots of money and therefore can’t afford really good representation. So the criminal record, the probation, and the guilty plea may very well be the result of a maximin calculation on Vasquez’s part, not any sense of guilt in the way most of us use the term.
In any case, Vasquez just took all the blame for a cascade of failures:
“The investigation found that the probable cause of the crash was Vasquez’s failure to monitor the self-driving car’s environment while she was distracted by her phone. But it also accused Uber of contributing to the crash by operating with an ‘inadequate safety culture’ and failing to properly oversee its safety drivers. Uber’s automated driving system failed to classify Herzberg as a pedestrian because she was crossing in an area without a crosswalk, according to the NTSB. Uber’s modifications to the Volvo also gutted some of the vehicle’s safety features, including an automatic emergency braking feature that might have been able to save Herzberg’s life, investigators wrote.”
That’s right: Uber somehow failed to train its system on the idea that a pedestrian might cross the street somewhere other than at a crosswalk, and it also disabled safety features on the car! And the NTSB cited Uber’s conduct as contributing to the crash! And yet only Vasquez faced criminal liability; “prosecutors declined to criminally charge Uber in the crash in 2019 but recommended that investigators further examine Vasquez’s actions as the safety driver.” Uber got away with paying a settlement to the victim’s family.
The decision to prosecute Vasquez and not Uber resulted from a one-sided application of a “but-for” causality standard:
“Vasquez's eyes were focused on the phone screen instead of the road for approximately 32 percent of the 22-minute period, the report said. Tempe investigators later determined the crash would not have occurred if Vasquez had been ‘monitoring the vehicle and roadway conditions and was not distracted.’ But [County Attorney Sheila Sullivan] Polk said that wasn't enough to prosecute Uber. ‘After a very thorough review of all the evidence presented, this Office has determined that there is no basis for criminal liability for the Uber corporation arising from this matter,’ Polk wrote.”
We need to separate the question of how much Vasquez could or should have done from the disappearance of liability for Uber. Their conflation, and the urge to focus only on Vasquez, is the work done by the baton-passing view.
AV advocates will invariably lead by telling you that most vehicle accidents are caused by people. This somehow then legitimates the idea that we should adopt their product because it isn’t a person. That move ignores the extent to which AVs are systems that displace human labor and reconfigure the biological agent into a distributed, sociomorphic system. Because this system isn’t very reliable, they call upon a human to supervise it. But that obscures the fact that the “baton passing” model they are promoting is provably very dangerous, and that it makes humans even less reliable. It also hides that the image recognition systems can and will fail in predictable and catastrophic ways. Beyond that, it hides the massive cost to a country where cars are the only viable mode of transportation for the vast majority of people, and perpetuates a system in which other kinds of transit are unimaginable. And it buries all of that with a rhetoric of responsibility and autonomy that underscores that hapless humans – see, we told you! – are bad drivers (and bad pedestrians). In this regard, the moral language around humans supervising (and failing to adequately supervise) AVs is analogous to the use of moral language to blame poor mothers for their condition in order to justify intrusive surveillance of them, rather than thinking about the structural features that make so many mothers poor. If only they would work harder! If only drivers would work harder! As Ganesh says, “’Autonomous’ driving is less about the promise of freedom from the car, and is perhaps for the car” (5). And about the people who get rich selling you on the imaginary of the car.