By Gordon Hull
UPDATE 6/14: Here's a nice takedown ("Nonsense on Stilts") of the idea that AI can be sentient.
I don’t remember where I read about an early text-based chatbot named JULIA, but it was likely about 20 years ago. JULIA played a flirt, and managed to keep a college student in Florida flirting back for something like three days. The comment in whatever I read was that it wasn’t clear if JULIA had passed a Turing test, or if the student had failed one. I suppose this was inevitable, but it appears now that Google engineer Blake Lemoine is failing a Turing test, having convinced himself that the natural language processing (NLP) system LaMDA is “sentient.”
The WaPo article linked above includes discussion with Emily Bender and Margaret Mitchell, which is exactly the right call, as they’re two of the lead authors (along with Timnit Gebru) on a paper (recall here) that reminds everyone that NLP is basically a string prediction task: it scrapes a ton of text from the Internet and whatever other sources are readily available, and gets good at predicting what is likely to come next, given a particular input text. This is why there’s such concern about bias being built into NLP systems: if you get your text from Reddit, then for any given bit of text, what’s likely to come next is racist or sexist (or both). The system may sound real, but it’s basically a stochastic parrot, as Bender, Gebru and Mitchell put it.
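To make the "string prediction" point concrete, here is a minimal sketch (nothing like LaMDA's actual architecture, and using an invented toy corpus) of what next-word prediction amounts to: tally which word tends to follow which, then continue a prompt with whatever is statistically most likely.

```python
# A toy illustration of next-word prediction (not LaMDA's architecture):
# count how often each word follows each other word in a corpus, then
# extend a prompt by repeatedly emitting the most likely next word.
from collections import Counter, defaultdict

# Invented miniature "corpus" purely for illustration.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Tally bigram frequencies: which words follow which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def continue_text(prompt, n_words=5):
    """Greedily extend the prompt with the most frequent next word."""
    words = prompt.split()
    for _ in range(n_words):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no observed continuation for this word
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the cat"))  # parrots back a plausible-sounding continuation
```

A real large language model replaces these bigram counts with a neural network trained on billions of such statistical regularities, but the parrot-like character of the task is the same: the output is a likely continuation of the prompt, not a report of inner experience.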
So point one: LaMDA is not sentient, any more than ELIZA and JULIA were sentient, but chatbots are getting pretty good at convincing people they are. Still, it’s disturbing that the belief is spreading to people like Lemoine who really, really ought to know better.
Second point: yeah, Google put Lemoine on leave, but their track record on this is checkered at best. The publication of the Stochastic Parrots paper seems to have been the reason the company dumped Gebru (and later Mitchell); its stated reasons for Gebru’s departure in particular never quite added up unless the content of the paper was a factor. One suspects a PR move here.
More broadly, any analysis of this state of affairs needs to start with the point that companies like Amazon and Google are desperate for us to communicate with devices like Alexa as though those devices had mental states or were otherwise sentient. Indeed, as Dylan Wittkower emphasizes, Alexa bypasses the effort to get us to anthropomorphize and think that it has a mind; rather, to get it to do anything at all we have to adopt an intentional stance and treat it as if “she” has a mind. The performance of mindedness is key to interfacing with the system.
This performance is of a piece with the logic behind Lemoine’s decision that LaMDA is sentient: if we act like these systems are sentient for long enough, then we start to think they are, and come to accept a functionalist argument about sentience. As Yarden Katz points out, this serves a number of interests at once, above all the profits and worldviews of the big tech companies that are invested in AI. As Katz chronicles, this is also a battle that has been fought before: critics like Hubert Dreyfus were pushing back against overclaims about the imminent sentience of AI back when the algorithms were top-down and fully structured.
The performance also obscures that the data behind these systems comes from somewhere: a carefully crafted biopolitical public domain, a legal and regulatory regime pushed heavily by big tech that both cajoles and nudges people into sharing their data and makes everything out there on the Internet freely available for appropriation by those companies. Worse than in the case of the old algorithmic systems of the 1960s, current AI also relies heavily on exploited labor, often from people in lower-income countries. As Kate Crawford explains, labor is exploited throughout the system, from the extraction of the rare earth minerals inside the increasingly large servers needed to train NLP systems, through to the process of curating and labeling data (often via Mechanical Turk). As Shakir Mohamed, Marie-Therese Png and William Isaac note, these “ghost workers”
“do this work in remote settings, distributed across the world using online annotation platforms or within dedicated annotation companies. In extreme cases, the labeling is done by prisoners and the economically vulnerable” (668)
All of this disappears in a narrative that says that sentience has popped up like a mushroom inside of LaMDA, but it is absolutely in capital’s interest that the ghost that we see in the machine is “sentience” and not the traces of labor exploitation.
There’s also a fundamental power grab here: the tech bros are telling us that their AI is sentient, but they also get to tell us what the functional characteristics of sentience are. Here’s Katz, noting the fundamental congruence of this narrative with neoliberal orthodoxy:
“This perspective on AI inherits neoliberal doctrine’s primary contradiction. The contradiction arises when a centralized elite sets the conditions for a magical computational process (whether the market or a computing system) and decides when it works or needs fixing, but also claims that this process is beyond human control. The corporations building AI’s celebrated systems likewise espouse decentralized democracy over hierarchical control, but corporate elites dictate what counts as data and how it is used; the mythical flat, democratic market doesn’t exist.” (121)
At the end of the day, Lemoine’s confusion is silly and ought to be uninteresting. But it comes the same week as developments in another AI problem-area: intellectual property. Last Monday, the Federal Circuit heard oral arguments in Thaler’s effort to enable an AI to be an “inventor.” This argument has gone nowhere so far (Thaler is appealing a rejection by the PTO), and it should go nowhere further, but Thaler won’t be the last to make it. Also last week, Thaler filed suit in federal district court to block the Copyright Office’s ruling that an AI can’t be an author.
I’ll have more to say about Thaler as the cases (and the summer) proceed, but it’s fairly clear that a lot of forces are aligned in trying to convince us that machines have minds. The initial takeaway should be that this is not a philosophy of mind problem except indirectly; it’s a political problem.