Today’s New APPS interview is with Elizabeth Wilson, Professor of Women’s, Gender and Sexuality Studies at Emory University. The subject of the interview is her new book, Affect and Artificial Intelligence (University of Washington Press, 2010).
-----
Thank you for doing this interview with us, Elizabeth. You don’t shy away from a critical engagement with the disembodied nature of classical AI, but you don’t do it from an exteriorizing and distancing relation. Instead you seek a more positive, empathic rapport with the history of the subject and its major figures. Can you tell us more about this decision?
Much of this emerged from the archival nature of the project. I spent a lot of time in archives looking at correspondence and marginalia and early drafts of papers written by AI pioneers like Alan Turing, Walter Pitts, and Warren McCulloch. I also looked at logbooks and correspondence and reports written in the early computer labs (in the US and the UK).
While I certainly saw material that sponsored a disembodied notion of AI, I also saw a lot of concerns (both quotidian and intellectual) about the nature of the body in relation to mind, especially emotion in relation to mind. I was struck that there was more “feeling” in these materials than a conventional critique of the disembodied nature of AI usually acknowledges.
That was one of the key realizations of the project: interest in emotion in AI and computer science didn’t just drop from the sky in the mid 1990s. Rather, there is a history of engagement with emotion in AI from the very beginning. This is a minor history of random notes, not to be found in every place, but nonetheless it is a history that significantly refigures the claims that the current interest in emotion and computation is new.
And from this a number of important political and intellectual questions follow: What interests are being served when someone argues that emotion has only recently arrived in AI? If this interest is not new, then what presumptions and expectations are being imported, silently, from older work? Is all AI affective? If so, then what affects have counted the most, and to what ends?
You also categorize your book in terms of an introjective rather than projective approach. Can you explain what’s at stake in this move?
I am trying to deepen the kinds of psychological models available for us to talk about AI.
The idea that we project our own emotional states onto machines is a very common one, both in academic literatures and more broadly. And it is usually deployed in a derisive way: “Oh, that’s simply projection.” However, this doesn’t seem quite right to me: projection is a defense mechanism, which means its primary function is to take some unpalatable part of oneself and put it into another person or an object (here, a computational device) in order to be rid of it. This doesn’t seem to fully explain the kinds of relations people had, in the early days, with the computational devices they were building.
Rather, it seemed to me (on the basis of the archival materials) that these early pioneers were “introjecting” their machines; that is, bringing the machines inside, psychologically, in order to generate an intimacy with them and in order to expand their own psychological competency. Introjection is a taking in of the world/objects in the pursuit of growth. Projection is a more aggressive gesture to protect oneself against an internal threat.
In terms of the archival material I was looking at, the introjective mode seemed much more frequent than the latter: these awkward, geeky, oftentimes reclusive men had complex and emotionally strong relations to their machines that can’t be uniformly described as projective.
Likewise I would wager that most contemporary users of computational devices are more introjectively than projectively attached to them (here Sherry Turkle’s work on children and artificial devices has been immensely perceptive).
I loved what you did with the Garry Kasparov vs. Deep Blue chess match. Can you explain your reading of that match?
In 1997 Deep Blue (a purpose-built chess computer from IBM) beat Garry Kasparov (then the world champion) in a six-game match. The commonest explanation for Kasparov’s defeat was that IBM had finally built a machine with enough computational power to simply out-calculate a player as brilliant as Kasparov: Deep Blue could analyze 200 million chess positions per second; Kasparov, around three.
Once I started reading about these games, however, it became evident that a crucial part of the encounter was the emotionality and volatility of Kasparov, on the one hand, and the absence of emotion in the IBM machine, on the other. Kasparov was known to be a particularly frightening player--he intimidated his opponents, often forcing them into early, fatal mistakes. He had no such power of intimidation over the IBM machine and the large team of engineers on hand to run it.
I argue that it was this unfamiliar, weirdly stagnant affective encounter with the machine--as much as the cognitive power of the machine itself--that foiled Kasparov. It wasn’t simply that he was out-maneuvered cognitively, it was that he was starved of the emotionally complex relation between himself and his opponent that he needed in order to play chess at the very highest levels. So chess--which has been taken to exemplify the pinnacle of human cognitive capacities--turns out to be drenched in emotionality, and necessarily so.
So it’s not only that he couldn’t psych his opponent out, it’s also that he couldn’t psych himself up?
Yes, that’s it exactly.
Imagination, curiosity, surprise, even childishness: these are the “positive affects” of Alan Turing in your book. Can you put them into connection with your reading of the move in AI toward the solitary adult chess player rather than the affective and connected infant as models of intelligence?
In his famous 1950 article that outlines the Turing test, Turing notes that there are (at least) two ways to approach the building of artificial intelligence: it could be modeled on the solitary, adult chess player, or it could be modeled on the developing infant. That is, artificial intelligence could be built fully functioning, or it could be built as a more rudimentary function that, like an infant, is able to learn. Most classical AI followed the first of these paths. It wasn’t really until the 1990s that mainstream AI started taking the ideas of learning/infantile machines more seriously; this coincided with the interest in emotion in AI.
We see this shift to emotion in the Rodney Brooks / MIT work, yes? But you’re saying that’s more of a re-discovery than an invention out of whole cloth.
Yes, a close examination of Turing’s work shows that from the very beginning he didn’t strongly distinguish between the chess model and the child model for thinking about digital machines: these latter-day concerns with learning, child development, and affect are widely available in his writing. That is, imagination, curiosity, surprise, childishness are already in the foundations of AI, so (as you say) the recent “discovery” of affect, development, mutuality in AI and HCI is actually a return to some of the earliest concerns in computational theory.
Silvan Tomkins plays a big role in your book. He clearly deserves a higher profile than he now enjoys, at least in the philosophy circles I inhabit. For the benefit of those like me who have a lot to catch up on here, can you sketch a little how you came upon his work, and how it helps you tell your story? Obviously I can’t ask you to reconstruct your book, but if you could sketch out a few lines of inquiry, that would, I’m sure, be of great interest to many readers.
Like many people in the humanities I was introduced to Silvan Tomkins by Eve Kosofsky Sedgwick and Adam Frank’s 1995 reader of his work (Shame and Its Sisters). The introduction to that reader (“Shame in the Cybernetic Fold”) has been a very influential piece of writing for me: it brings together concerns about biology, scientific inquiry, and post-structuralist thinking in ways that are immensely generative.
Tomkins’ affect theory draws on a psychological tradition that starts with Darwin and goes through William James in the early twentieth century. His theory draws from, yet significantly critiques, the dominant psychological schools of twentieth century US-based psychology (behaviorism, psychoanalysis, cognitivism), and provides an extraordinarily rich phenomenology of affective experience: from a set of 9 basic affects and a few basic axioms (e.g., positive affect should be maximized; affect inhibition should be minimized) he is able to build probably the most intensive and valuable theory of emotion of the last 100 years.
That compositional approach (the building up of complex emotions from simple elements and rules) reminds me of Spinoza.
Yes, there are a number of important similarities between Spinoza and Tomkins. Unfortunately it has become commonplace in contemporary critical and philosophical work to proceed as if Spinozist/Deleuzian theories of affect are in opposition to (or a critique of) the psychological theories of affect/emotion that we find in Darwin, James, and Tomkins. I don’t see the field as antagonistic in that way: the differences and similarities between these canonical authors are too complex to be reduced to a simple choice between them, to a naïve alliance with one of them over the others.
How does Tomkins play a role in your book?
With regard to the book, Tomkins’s affect theory fit the archival data particularly well; he is important here for two reasons.
First, there is a clear intellectual genealogy from his affect theory in the 1960s to the contemporary work in affective computing. Many of the well-known emotion theorists in psychology, whose work was foundational for affective computing in the 1990s, people like Paul Ekman, were students of Tomkins; and Tomkins’s theory of “basic affects” [fear, anger, excitement, enjoyment, interest, shame, contempt, distress] is the framework for most of the contemporary computational work on emotion. In terms of understanding the intellectual debts of contemporary AI, this genealogy strikes me as very important.
Second, Tomkins’s affect theory is indebted to post-war cybernetics. He found an affinity between his affect theory and cybernetic concerns with feedback and amplification. This helps us to understand that there is no necessary antagonism between theories of computation and theories of emotion, between the artificial and the affective.
Your chapter on “Artificial Psychotherapy” had some moments of real comedy, particularly when the two programs ELIZA and PARRY were set up on a blind date, as it were. You conclude that some of the “shambles” that resulted came from forgetting that “All AI is HCI (Human-Computer Interaction).” This might sum up your entire book, but can you focus here on how that formula explains the relative success of ELIZA and the relative failure of PARRY?
ELIZA is a program that simulates a psychotherapist. It was written in the 1960s by Joseph Weizenbaum and was initially available on the MIT time-sharing computer network. It was immediately a great success with users: people ‘talked’ with ELIZA in fluid, natural, engaged ways. What was remarkable was that people took to ELIZA in the early days as if it were an interested psychological interlocutor, even though ELIZA is basically a chatterbot (it simply searches for keywords and provides standardized responses in order to keep a conversation going). ELIZA felt like (and therefore functioned as) a credibly intelligent agent, despite its rudimentary programming.
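To make that mechanism concrete, here is a minimal Python sketch of the keyword-and-template logic described above. The particular rules, responses, and pronoun swaps are invented for illustration; this is a toy approximation of the idea, not Weizenbaum’s actual DOCTOR script.

```python
import random
import re

# A minimal sketch of ELIZA-style keyword matching (illustrative only;
# these patterns and templates are invented, not Weizenbaum's script).
RULES = [
    (r"\bi am (.*)", ["How long have you been {0}?",
                      "Why do you say you are {0}?"]),
    (r"\bmy (.*)",   ["Tell me more about your {0}.",
                      "Why does your {0} concern you?"]),
    (r"\bbecause\b", ["Is that the real reason?"]),
]
FALLBACKS = ["Please go on.", "How does that make you feel?"]

# Swap first- and second-person words so echoed fragments read naturally.
REFLECT = {"i": "you", "am": "are", "my": "your", "me": "you", "mine": "yours"}

def reflect(fragment):
    return " ".join(REFLECT.get(word, word) for word in fragment.split())

def reply(utterance):
    text = utterance.lower()
    for pattern, templates in RULES:
        match = re.search(pattern, text)
        if match:
            # Fill the template with the (reflected) text after the keyword.
            return random.choice(templates).format(*(reflect(g) for g in match.groups()))
    # No keyword found: a stock response keeps the conversation going.
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(reply("I am unhappy in my work"))
```

Even this toy version suggests why the conversations felt fluid: the program never needs to understand anything; it only needs to keep its turn in the exchange.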
In the book I argue that the way in which the program was written and shared generated a kind of networked intelligence that made ELIZA a viable and intelligent agent to its early users: its success was due in large part to the effective interface of human with computer in the MIT milieu and then later on ARPANET.
PARRY, on the other hand, was a simulation of a paranoid patient, programmed (by Kenneth Mark Colby) in a very similar way to ELIZA and at about the same time, but it was instantiated in a more singular (less networked, unshared) fashion. Sequestered inside a single machine and deployed as a testing device, PARRY struck people as irritating and implausible. Divorced from the traffic of human-computer, flesh-and-wire networks that had given rise to ELIZA, PARRY failed to prosper, even though it is more or less the same program.
I use the differences between the receptions of ELIZA and PARRY to argue that a human interlocutor is always needed to keep an artificial device functioning. There are no autonomous AI agents: they are, every one of them, part of a network of human-computer interaction that is not peripheral (or simply contextual) to their function but at the very heart of their intelligence.
You are somewhat critical of the use of adaptationist / evolutionary psychology arguments put forth by, among others, Sherry Turkle. Yet you’re in broad agreement with the Andy Clark (e.g., Natural Born Cyborgs) line of thought, which also relies on evolutionary arguments. So much so that you say “We fell for AI early,” a wonderful phrase. Can you tell us how you approach these issues?
Yes, I find evolutionary psychology arguments largely implausible, irrespective of what they are deployed to explain. They are too amenable to the maintenance of the status quo in relation to gender, race, sexuality, and--worse--they promote the idea that evolutionary processes have to make good sense.
Evolutionary theory, as I understand it, is an account of differentiation, systemic affinities and disequilibria, and gorgeous, excessive productions of function and form (here I have found Richard Doyle’s work on sexual selection to be galvanizing [see Darwin's Pharmacy: Sex, Plants, and the Evolution of the Noösphere]). The idea that all this complexity can be contained by the rigors of twenty-first century rationality is ludicrous--twenty-first century rationality is another of the weird functionalities that the evolutionary system generated, it is not the mistress of that system.
Some commentators want to argue that when we engage with artificial devices it is a kind of misunderstanding--thanks to our evolutionary history, they argue, we mistake these devices for social agents: they “push Darwinian buttons” as Turkle argues. HCI is a kind of ruse, by this logic.
I argue against this kind of position (and, I think, in broad alliance with Clark) that artificiality (the machinic, the inorganic) is part of us from the beginning--we are naturally artificial, if you like. It is this originary miscegenation of artifice and human nature that evolutionary theory brings to our attention.
One thing I really admire about your book is that not only do you thematize affect, your book performs it as well, if I can put it like that. Thus the chapter on Walter Pitts is tragic in perhaps even a deeper way than the chapter on Alan Turing. Given Turing’s terrible fate, that’s saying something. Can you tell us how you treat Pitts and the homosocial vs. homosexual distinction?
Yes, poor Walter Pitts. He was the author of an early and influential paper on what would come to be known as neural networks. He was something of an eccentric, in ways not unlike Alan Turing; and he died in tragic circumstances at about the same age as Turing. However, one of the contrasts I make between Turing and Pitts in the book is that affects of enjoyment and interest are threaded all through Turing’s work, whereas affectivity seems to have been inhibited in Pitts and in his intellectual milieu.
This affective inhibition is especially true in relation to sexuality. It’s not clear to me, on the basis of the biographical data we have, that Pitts can be said to fit within the dominant economy of twentieth century sexuality (i.e., that one’s sexuality would be ordered primarily by the gender of the object choice). The rubrics of heterosexuality, bisexuality, asexuality, homosexuality seem to have little purchase when thinking about what is most interesting about Pitts’ life and work: he lived and worked in a homosocial environment (in which affective bonds between men were intense and highly valued professionally) but homosexuality itself was foreclosed in a way that we do not see in Turing’s milieu.
Eve Kosofsky Sedgwick has argued that while it is typical of heteronormative environments to strictly delineate between homosociality (male-male bonding) and homosexuality (men loving men), these two affective trajectories are in fact always entangled. In the chapter on Pitts I offer a reading of a small editing correction in some archival materials that clearly delineates the difference between the homophilia of Turing’s world and the affectively and sexually inhibited context in which Pitts struggled to think and live. I claim that a context where a line between homosociality and homosexuality is drawn in this way is much more likely to generate not just unhappy lives but also models of artificial intelligence that are affectless, delibidinized, dull.
Thank you so much, Elizabeth, for a tour of some of the themes of your book. I hope it is widely reviewed and discussed, for you put forth some remarkable, and remarkably important, analyses.