By Gordon Hull
There’s an emerging literature on Large Language Models (LLMs, like ChatGPT) that basically argues that they undermine a bunch of our existing assumptions about how language works. As I argued in a paper a year and a half ago, there’s an underlying Cartesianism in a lot of our reflections on AI, which relies on a mind/body distinction (people have minds, other things don’t), and then takes language use as sufficient evidence that one’s interlocutor has a mind. As I argued there, part of what makes LLMs so alarming is that they clearly do not possess a mind, but they do use language. They are thus the first examples we have of artifacts that can use language; language-use is no longer sufficient to indicate mindedness. In that paper, I drew the implication that we need to abandon our Cartesianism about AI (caring whether it “has a mind”) and become more Hobbesian (thinking about the sociopolitical and regulatory implications of language-producing artifacts). Treating LLMs as the origin points of speech carries real risks, including making the human labor that produces them invisible, and making it harder to impose liability, since machines can’t meet a standard scienter requirement for assigning tort liability.
Here I want to take up a somewhat different thread, one that I started exploring a while ago under the general topic of iterability in language models. This thread takes the literature on language models seriously; where I want to go with it is to talk about an under-discussed latent Platonism in how we tend to approach language (and language models). I’ll start with the literature, which divides into two strands, a Wittgensteinian one and a Derridean one.
1. The Wittgensteinian Rejection of Cartesian AI
Lydia Liu makes the case for a direct Wittgensteinian influence on the development of ML, via the Cambridge researcher Margaret Masterman. I only ran into this work recently, so on the somewhat hubristic assumption that other folks in philosophy also don’t know it, I’ll offer a basic summary here (in my defense: Liu says that “the news of AI researchers’ longtime engagement with Wittgenstein has been slow to arrive.” She then adds that “the truth is that Wittgenstein’s philosophy of language is so closely bound up with the semantic networks of the computer from the mid-1950s down to the present that we can no longer turn a blind eye to its embodiment in the AI machine” (Witt., 427)).