Recall that a couple of months ago ChatGPT did a total face plant on distinguishing Kierkegaard's knight of faith from the knight of infinite resignation. Well, with the fullness of time and an upgrade, it's a lot better now (screen grabs below the fold):
The previous version was notoriously bad at finding things inside books. Given the improvement in this answer, it looks like they've expanded the scope of the algorithm's training data to include more text from books and/or journal articles. Or maybe just a lot more text.
On the other hand, you still shouldn't use it for your bibliography!
Well, um, the first article doesn't exist. I mean, that's a real journal, and Daniel Zamora writes on Foucault - but Zamora has published neither in that journal nor an article of that title. Mitchell Dean is really well known (24,000 cites!) but wrote no such paper. Neither did Colin Koopman (though I wish he had). That Oxford Handbook doesn't exist either, and neither does the McWhorter paper, as far as I can tell. And as an admirer of Lisa Guenther's work on incarceration, which is theoretically located at the intersection of phenomenology and Levinas, I'd be... surprised if she wanted to talk about ethics of care. I can't find that Revel paper either. Thomas Flynn did write a paper called "Philosophy as a Way of Life: Foucault and Hadot" in 2005, so as long as the bar is the one set by nuclear warfare and horseshoes - close - I guess that counts? And so it goes (I quit checking before I did all 10).
What strikes me as interesting is that all of these are plausible authors for papers on Foucault. Most of them have in fact written on Foucault, and Zamora and Dean have published a few papers together. All of them are also plausible paper topics, and they appear in plausible places. This offers anecdotal confirmation of a fundamental point about large language models (LLMs) like ChatGPT: they work through statistical prediction. After ingesting lots and lots (and lots and lots) of stuff on the Internet, they generate text by predicting what is likely to come next. It's like a really fancy version of autofill. So if I say "I'm writing a paper on Foucault and the microphysics of," the algorithm will almost certainly come up with "power," because that's pretty much how that sentence has to end. If I start a sentence with "Brian Leiter is complaining about," the algorithm will be able to come up with "SPEP" or "woke." The Internet as a whole makes these papers and authors and phrases plausible. But the model is not designed to go back and check its answers to see whether anyone has actually said these things. The contrast between the bibliography task, at which it repeatedly fails, and its increasing prowess in understanding Kierkegaard tells us a lot about how it works: it generates plausible answers to prompts. But that doesn't mean they're correct.
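To make the "fancy autofill" point concrete, here is a toy sketch in Python. This is emphatically not how ChatGPT actually works (it's a huge transformer network, not a word-count table), and the little corpus is invented for illustration. But it shows the underlying logic: count which word tends to follow which, then "predict" by picking the statistically most likely continuation.

```python
from collections import Counter, defaultdict

# Invented toy corpus standing in for "the Internet" -- every phrase here
# is made up for illustration, echoing Foucault-adjacent academic prose.
corpus = (
    "foucault and the microphysics of power . "
    "the microphysics of power in discipline and punish . "
    "a genealogy of the microphysics of power . "
    "the microphysics of pleasure . "
).split()

# Tally which word follows each word (a simple bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

# "power" follows "of" in 3 of its 5 occurrences, so it wins.
print(predict_next("of"))  # -> power
```

The model happily completes "microphysics of" with "power" because that's what the statistics favor, and it would do so even if no such sentence had ever been written. Scale the corpus up to a large slice of the Internet and the counting up to billions of learned parameters, and you get exactly the behavior above: fluent, plausible, unverified.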
I asked it about myself too:
0 for 5. The closest it got was the first one, since "The Biopolitics of Intellectual Property" is the title of a book I wrote a couple of years ago (though the subtitle is wrong, in addition to the journal and date). My first published paper was in fact in Philosophy and Social Criticism, though it was about Marx and Derrida and appeared in 1997. The others are totally wrong, though, again, a casual observer might find them plausible in the sense that you could imagine I'd written them.