Remembering Vernor Vinge: A 2008 Interview on the Technological Singularity

Vernor Vinge passed away recently. He was a visionary science fiction writer and thinker who greatly influenced our understanding of the potential future of technology. He presented the concept of the technological singularity in a 1993 paper delivered at a NASA-sponsored symposium. I want to share an interview I conducted with Vernor on March 31, 2008, in which we discussed his groundbreaking ideas about the technological singularity.

In our conversation, Vernor explained that the singularity revolves around the creation of entities smarter than humans, which could lead to a future that becomes difficult for us to comprehend. Such rapid, unintelligible progress had already been envisioned by the mathematician John von Neumann in the 1940s.

Vernor acknowledged the challenges in depicting post-singularity scenarios in his writing, as the “event horizon” of the singularity suggests an inability to see beyond it. However, he also noted that if the transition to the singularity takes years, humanity’s actions could play a significant role in shaping the outcome. He proposed that during this time, humans might benefit from intelligence amplification or biomedical enhancements, potentially enabling us to better understand and participate in the transition.

We also touched on the concept of a “hard take-off” singularity, where the emergence of superintelligence happens rapidly, perhaps over the course of a mere hundred hours. While he found this idea unsettling and recognized it as a minority view among his peers, Vernor argued that it was not entirely implausible. He drew an analogy to the rise of humans within the animal kingdom, suggesting that to other animals, humans may have seemed inexplicably adaptable and capable of imposing rapid change.

Vernor and I agreed that it is crucial for technologists to effectively communicate the potential impacts of artificial intelligence to policymakers and the public. Failing to do so could lead to misguided attempts at suppressing research, which may only drive development into less controllable domains. Vernor maintained that the most prudent approach is to involve the collective wisdom of humanity in navigating the challenges posed by the singularity. This challenge is still open as I write 16 years later, and more urgent than ever.

Vernor’s legacy serves as a reminder of the importance of thoughtful, inclusive discourse about the trajectory of transformative technologies. Spurred by his memory, we must strive to create a future that benefits all of humanity, as we explore new territories of intelligence and capability.

Here is an edited transcript of our conversation.

David: I wanted to remind our listeners about your seminal article, presented at a NASA symposium 15 years ago, which was one of the first to reintroduce the concept of the technological singularity. The idea goes back to von Neumann and Ulam in the 1940s, but you were certainly the one to bring it to a wider audience and formulate it more precisely. Tell us a little bit about what has happened in the last 15 years. How do you feel about the concept having been explored by specialists?

Vernor: It seems to me that progress in the direction of the singularity has gone along without too many great surprises. One difference, although it also applies to the von Neumann case, is the question of what is making it happen. To me, the singularity comes down to making or becoming things that are smarter than humans. That is the fundamental crux of it. If that doesn’t happen, then von Neumann’s vision of progress going so fast that people can’t understand it probably isn’t going to happen.

David: There has been some progress among specialists who study the concept deeply, like you, in trying to depict post-singular scenarios. The event horizon interpretation of the singularity says you cannot look beyond it, but your job as a science fiction writer is to falsify that vision and still try to see humanity living in a post-singular world.

Vernor: I am probably more strongly on the side of the event horizon and not being able to see beyond than most people. The analogy is between us and the rest of the animal kingdom. A goldfish could not understand what we are doing here this afternoon with this interview. On the other hand, especially if the transition takes years, it’s clear that what humanity is doing is important in affecting the outcome, hopefully in beneficial ways. If it takes a long time to happen, the humans of the era will probably benefit from technology that comes along. Their intelligence will be enhanced, either by intelligence amplification or biomedical advancements. 

So, although I may be right about the claims of unintelligibility and the event horizon, about how things smarter than us are beyond our understanding, that may be a moot point if, as we go there, we become smarter ourselves. In 1982, I was at an AI conference at Carnegie Mellon and used the term “singularity” on a panel. I was talking to Hans Moravec, whom I first met at that time. He already knew all this, but he said, “I agree with everything you’re saying, except I wouldn’t use the term ‘singularity’ for it.” He explained that if you are one of the participants, this won’t be an unintelligible transition; it will be a nice, smooth one. He said, “I intend to ride that curve, and it will be quite clear to me what’s going on.”

David: The speed of change and its acceleration, the first and second derivatives of that curve, are important not only for technologists but for society at large. The adaptability of humans is finite, so the faster the change, the more difficult it is for us to keep riding the curve, as Hans said. I wonder whether you have an opinion regarding the uptake of the concept of the singularity, or of accelerating change, by society at large, by non-technologists, educators, and politicians, and whether they have come to grips with it at all.

Vernor: I don’t think it has consciously impinged much. Whether such conscious acknowledgment becomes important also depends on how fast it happens. The faster it happens, the more uncomfortable I am about it. You could imagine situations where immense changes are absorbed by some segment of society that then kind of moves away from the rest. That’s very unsettling, though not necessarily a disaster, since the people who are getting smarter, along with their machines getting smarter, would probably be more benign than most elite factions in the past. Still, the experience is unsettling to me.

The extreme version, what I call a hard take-off, where it all happens in about a hundred hours or so, sounds very scary to me. Yet, reasoning by analogy, I do not regard it as being as implausible as some people do. Most of my friends who talk about the singularity regard it as terribly unlikely, finding it unbelievable that it could happen so fast. I’m certainly not advocating for it to happen so fast. I find that particular possibility very scary.

What gives it a certain amount of plausibility for me is the analogy with the most recent event comparable to the singularity, which was the rise of humans within the animal kingdom. From the standpoint of the local animals, I think that humans looked preternaturally adaptable and preternaturally fast in how quickly they could bring changes into the environment. I’m not talking about 20th or 21st century humans, but rather Paleolithic humans.

David: Since the invention of agriculture, the time that has passed is, from a biological standpoint, just an eye blink. The animals had no chance to adapt.

Vernor: Even before the invention of agriculture, just the way that humans would learn to hunt showed extreme adaptability. Nature can adapt critters to hunt other critters, but it takes much longer because it relies on natural selection and evolution. 

Reasoning by analogy, one might speculate that something smarter than us would have an equivalent speed-up. There is a small amount of support for that when you look at how fast changes can propagate across communication networks. Do you remember the studies of the Slammer worm?

I think they concluded that it had gone from initiation to saturation of the accessible targets across the world within about 20 minutes.

So, one can imagine that if there were a network substrate to what was going on, once you got superhuman intelligence, you would get a rapid bootstrap. It wouldn’t be a slow, decades-long evolution, but something that would happen in an afternoon. I’m not saying that’s the way it’s going to be. I’m just saying a hard take-off is imaginable, and it looks like it would not be attractive. It looks like it would be one of the more dangerous ways for this sort of thing to happen.

David: Earlier I mentioned educators and politicians. The reason I think it is the technologists’ responsibility to frame and phrase the challenges in a way that politicians can understand and act upon in a desirable manner is that we have seen in the recent past what happens when this doesn’t go that way. Genetically modified foods in Europe and stem cell research in the US have shown how harmful a relatively clumsy political intervention can become. It doesn’t stop progress; it just diverts progress to less controllable environments, and it still arrives at your doorstep even if you supposedly don’t want it.

AIs and AGIs are the fundamental subjects of the technological singularity, but I don’t think we are doing a good enough job explaining their potential nature and the ways they could impact the world. Do you expect that there could be a backlash against AI research if it showed progress? Given the examples I gave, would that make it more dangerous? Would it maximize negative outcomes instead of maximizing positive outcomes?

Vernor: I think backlashes against any technology are always possible. Successful suppression-type backlashes are almost always impossible. The result of suppression, I think, is usually of the sort that you described. It means that the technology is confined to certain categories of elites or the military. It also means that the planning done with regard to the progress is confined to self-appointed experts who are often self-selected for incompetence.

One thing that makes me hesitant to talk about planning is that we don’t know what the most dangerous things are. We all know there are things that could be extremely dangerous, but we are not in a position to visualize the web of cause and effect that determines what is dangerous when. This could reduce some people to quivering inaction, while making others turn into fanatical suppressionists.

I think the most healthy and correct approach is to be as inclusive as possible in bringing in the expertise and wisdom of all of humanity in questions like this. What is happening very naturally with the Internet does that. If you look at the world as a whole, we have six billion people, with hundreds of millions connected. Of those, almost all are persons of goodwill. Many of them are highly intelligent and well-educated. As an ensemble, they are a greater mass of wisdom and intelligence than any elite, even elites that are really elite because they’re smart. What’s out there with the web surpasses that in breadth and intelligence.

I think that is what we should depend on. The web and the internet are providing that for us. Adam Smith and free markets also play into that, because they are a way for people to vote with their own resources about what they think is right. The world is a very dangerous place, but the dangers are not just technological; they are human. The universe as a whole is a very dangerous place. This doesn’t change any of that, but I think that is the way we can have the best chance to get through it.