Next Saturday, the University of Leuven is hosting an outreach event called Philosophy Festival ("Feest van de Filosofie"). This year's theme is people & technology ("mens & techniek"). I was asked to join a panel discussion on the technological singularity. The introduction will be given by a computer engineer (Philip Dutré, Leuven). There will be a philosopher of technology (Peter-Paul Verbeek, Twente) and a philosopher of probability (me, Groningen); and the moderator is a philosopher, too (Filip Mattens, Leuven). So far, I have not worked on this topic, although it does combine a number of my interests: materials science, philosophy of science, and science fiction.

The idea of a technological singularity (often associated with Ray Kurzweil) originates from the observation that the rate of technological innovations seems to be speeding up. Extrapolating these past and current trends suggests that there may be a point in the future at which systems that have been built by humans (software, robots, …) will become more intelligent than humans. This is called the technological singularity. Moreover, once there are systems that are able to develop systems that are more intelligent than systems of the previous generation, there may be an intelligence explosion. The possibilities of later generations of such systems are inconceivable to humans. (This theme has been explored in many science fiction stories, including the robot stories by Isaac Asimov (1950's and later), the television series "Battlestar Galactica" (2004-2009), and the movie "Her" (2013).)

[Image: Skynet.]

Even this brief introduction gives us plenty of opportunity for reflection on concepts (What is intelligence?) and consequences (What will happen to humans in a post-singularity world?). I am planning to analyze a very basic assumption, by raising the following question: When are we justified in picking a particular trend that has been observed in the past (e.g., Moore's observation of an exponential increase in the number of transistors on commercial chips) and extrapolating it into the future? Viewed in this way, the current topic is an example of the general problem of induction.
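To make concrete what such an extrapolation involves, here is a minimal sketch in Python (my own illustration; the transistor counts are rough, hypothetical values, and the doubling time is simply whatever the fit yields): fit a straight line to the logarithm of the counts and project it forward. The induction question is about whether that last, projecting step is ever justified.

```python
# Minimal sketch of a Moore's-law-style extrapolation (hypothetical data).
import numpy as np

# Rough, illustrative (year, transistor count) pairs -- not authoritative data.
years = np.array([1971, 1980, 1990, 2000, 2010])
counts = np.array([2.3e3, 3.0e4, 1.2e6, 4.2e7, 2.3e9])

# Exponential growth is a straight line in log2(count) vs. year.
slope, intercept = np.polyfit(years, np.log2(counts), 1)
doubling_time = 1.0 / slope  # years per doubling, as implied by the fit

def extrapolate(year):
    """Predicted count if the fitted trend simply continues to hold."""
    return 2.0 ** (slope * year + intercept)

print(f"Implied doubling time: {doubling_time:.1f} years")
print(f"Extrapolated count for 2030: {extrapolate(2030):.2e}")
```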

The hypothesis "The observed trend will continue to hold" is only one among many. Let me offer two alternative hypotheses:

Alternative hypothesis (1): Crash

The increasing rate of change will cause a breakdown of human society. Recently, I was next in line to buy a parking ticket and noticed that the elderly person in front of me was having trouble operating the brand-new and fancy ticket machine. (I tried to assist him, but this only added to his embarrassment.) This small encounter made me worry about my own future: will I be able to keep up with the rest of society if things keep changing at this speed (or even faster)? So, there might just as well be a technological burnout instead of an intelligence explosion.

Alternative hypothesis (2): Stagnation

Throughout science, there are lots of examples of processes that start with an exponential increase (of some variable) followed by saturation. (You might think of an exponential onset of growth followed by delayed growth due to limitations of space, supplies, etc.) The result is an S-shaped curve, rather than an exponential one. This reminds me of the ideas of Ivan Illich, who developed a theory of two turning points. Illich also surveyed past technological advances, but he came to a conclusion very different from Kurzweil's. According to Illich, the first turning point is marked by a steep increase in efficiency, but at the second turning point, it is the system's disadvantages that start to build up (counterproductivity): just think of cars vs. traffic jams, or hospitals vs. hospital-acquired infections.
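As a rough illustration of that contrast (my own sketch, with arbitrary parameters; it is not Illich's or Kurzweil's model), an exponential and a logistic, S-shaped curve can agree almost perfectly early on and only diverge later:

```python
# Exponential vs. logistic (S-shaped) growth; parameters are arbitrary.
import numpy as np

t = np.linspace(0, 10, 101)
r, K, x0 = 1.0, 100.0, 1.0   # growth rate, carrying capacity, initial value

exponential = x0 * np.exp(r * t)                      # unbounded growth
logistic = K / (1 + (K / x0 - 1) * np.exp(-r * t))    # saturates at K

for ti, e, s in zip(t[::20], exponential[::20], logistic[::20]):
    print(f"t={ti:4.1f}  exponential={e:12.1f}  logistic={s:8.1f}")
```

On early data alone (small t), the two curves are practically indistinguishable, so the observations by themselves do not settle whether we should expect an explosion or a plateau.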

At this point, we are faced with three options: (0) continued exponential increase (at least long enough to reach the singularity), (1) crash, and (2) stagnation. Which one correctly predicts our future?

Although I am planning to leave that as an open question, I might add two remarks. First, to answer this question, we would need to look into the underlying processes that may sustain each of these projected trends and then determine how likely we consider each of these processes to be. Second, there may yet be many other hypotheses besides these three.

Now, I would not be surprised to learn that none of this is very original and that all of it has been said by others in more eloquent ways. Unfortunately, I simply do not have the time to check the literature. Instead, I just read an introduction to the topic and let my mind wander. If you are willing to share your own thoughts or references to short texts (so that I can digest them in the limited time I have), that would be greatly appreciated!


4 responses to “Thinking about the technological singularity”

  1. Jon Cogburn

    Wow! This is interesting.
    In my Tristan Garcia class today we went over his chapter on Humanity, where he actually discusses singularity. Garcia has a weird analysis of what’s going on. He thinks that with the advance of evolutionary theory we became terrified that there was nothing particularly distinctive about humans and that as a result of this we’ve tried to become gods. It’s a very strange view, but it is interesting that we started trying to teach human language to apes at the same time we started trying to teach intelligence to machines. In ancient myths gods often give to humans that which separates us from other animals, and Garcia sees AI and some animal ethology in the same light.
    Anyhow, I think we might already be in the stagnation phase as far as computational approaches that might be relevant to science fiction writers who write about singularity. There is so little funding for traditional rule-based AI (which, contra Dreyfus, I think is awesome), and I’m very skeptical that just increasing computational power is going to result in corpus-based approaches delivering very much. The amazon dot com algorithm for suggesting purchases might have been the high point of that approach (given the way the security state is going at corpus-based approaches, this might be for the good).
    All things considered I hope I’m wrong about the skepticism. It would be so cool to be able to create intelligence that could withstand interstellar travel.
    One place where I think singularity might happen is with “artificial intelligence” as the term is used by video game designers. The “game engines” keep evolving in fascinating and surprising ways. The normative constraint to create more and more entertaining games forces a tremendous amount of creativity, and this is one area where private money is possibly doing a better job than current public financing (contrast with new pharmaceuticals, nearly all of which are designed by university professors).


  2. David Chalmers

    cool topic. i think the intelligence explosion thesis is the crucial thesis here and is largely independent of the exponential growth thesis (even if we get to human+-level AI through slow growth, the recursive explosion to superintelligence may then plausibly kick in). since you ask for references: http://consc.net/papers/singularity.pdf. the following page is also useful, both for substantive points and references: http://intelligence.org/ie-faq/


  3. Sylvia Wenmackers

    Jon, thanks for sharing the Garcia view, which I wasn’t familiar with. Given that we live in a world in which so many things happen, I tend to be skeptical of reading too much into synchronicities: they are bound to happen, although they may evoke creative thinking. (BTW, maybe you should suggest to your students that they make a Wikipedia page on Tristan Garcia in English?)
    Regarding stagnation: there seems to be a lower rate of increase in clock speed in recent years, but it’s hard to tell whether that’s a temporary lag or the onset of stagnation. Of course, there are also worries about limitations on raw materials.
    Although the gaming industry isn’t catering to the needs of computer science, the investments do allow game developers to experiment and push the boundaries. And it certainly wouldn’t be the first time that hardware or software elements initially developed for gaming purposes were put to good use by scientists as well. So, who knows: all the developers of games about creating intelligence and interstellar travel may actually be contributing to making similar things happen in reality.


  4. Sylvia Wenmackers

    David, I just finished reading a related course chapter prepared by the main speaker: the only philosophical paper in the (short) reference list is yours. So, I had just put it on my reading list, when I saw you posted a link to it. Thanks!
    I understand your observation that the intelligence explosion can be reached without exponential growth. At the same time, it taps into an issue I am still unsure about: speed plays a role at various places in this discussion, but it is not always mentioned explicitly. The understanding (or at least the measurement) of intelligence itself is closely related to speed. Intelligent behaviour is connected to being able to respond in a timely manner (which depends on the context).
    But some differences in intelligence cannot be compensated for by increased speed alone: greater intelligence will allow someone (or something) to finish certain tasks faster, yet some tasks cannot be done at all – no matter how slowly – before a certain level of intelligence has been achieved. (Maybe for some tasks you just need a large working memory – or some resource other than time – and without it there is no way to complete them.) And some things can be done, but not in any relevant time span (e.g., before the Sun explodes).
    Hm, I’m just not fast enough to think through all of this by Saturday.

