By Gordon Hull
In a couple of previous posts (first, second), I looked at what I called the implicit normativity in Large Language Models (LLMs) and how that interacted with Reinforcement Learning from Human Feedback (RLHF). Here I want to start to say something more general, and it seems to me that Derrida is a good place to start. According to Derrida, any given piece of writing must be “iterable,” by which he means repeatable outside its initial context. Here are two passages from the opening “Signature, Event, Context” essay in Limited, Inc.
First, writing cannot function as writing without the possible absence of the author and the consequent absence of a discernible authorial “intention:”
“For a writing to be a writing it must continue to ‘act’ and to be readable even when what is called the author of the writing no longer answers for what he has written, for what he seems to have signed, be it because of a temporary absence, because he is dead or, more generally, because he has not employed his absolutely actual and present intention or attention, the plenitude of his desire to say what he means, in order to sustain what seems to be written ‘in his name.’ …. This essential drift bearing on writing as an iterative structure, cut off from all absolute responsibility, from consciousness as the ultimate authority, orphaned and separated at birth from the assistance of its father, is precisely what Plato condemns in the Phaedrus” (8).
Second, iterability places a limit on the use of “context:”
“Every sign, linguistic or nonlinguistic, spoken or written (in the current sense of this opposition), in a small or large unit, can be cited, put between quotation marks; in so doing it can break with every given context, engendering an infinity of new contexts in a manner which is absolutely illimitable. This does not mean that the mark is valid outside of a context, but on the contrary that there are only contexts without any center or absolute anchorage” (12).
It seems to me that Derrida’s remarks on iterability are relevant here because LLMs are radically dependent on iterability. This is true in at least three ways, each of which points to an important source of their implicit normativity.