Quantum Consciousness in AIs and Robots: A Conversation with Suzanne Gildert

AI and robotics are progressing rapidly, though many expected robotics to advance more quickly beyond industrial applications into our homes and the broader world. As AI-powered robots become increasingly sophisticated and capable, we periodically ask ourselves if they are conscious.

Consciousness can be interpreted in many ways. In its richest form, it encompasses the inner experiences that define what it’s like to be a person—the emotions and reactions we feel when engaging with the world. In another sense, consciousness underlies our decisions, giving us a sense of purpose both generally and in our day-to-day actions. We also possess a deep intuitive understanding of our environment, whether natural or artificial, allowing us to grow, learn, and adapt.

Suzanne Gildert introduces two fascinating components to this rich set of topics. First, she proposes that without achieving at least some degree of consciousness, AIs and robots won’t be able to fully develop and reliably act in our complex world with the intuitive adaptability we desire. Second, she suggests that the fastest and easiest way to both build consciousness into robots and scientifically test for its presence is through an approach derived from quantum computing.

Her research is scientific in that it’s testable, though it certainly pushes the boundaries of established science. Her approach embraces elements—particularly around quantum explanations for human consciousness—that are shared by very few researchers. Nevertheless, I applaud her desire to build, persist, and conduct experiments that will ultimately prove or disprove her assumptions.

The following is a conversation we had about these topics, which you can watch as a video or read in transcript form below.

David: Today we are going to look at some exciting frontier science at the edges of what we can describe, what we can build theories around, and what we can hopefully, through falsifiable experimentation, prove to work, or eventually not work, and then incorporate that knowledge into how we advance our understanding of the world.

The various areas that are interconnected in an interdisciplinary manner in the past experience and current activity of our guest today are each very interesting and exciting in their own way. These are the fields of artificial intelligence and robotics, quantum computing, and consciousness studies. Our guest, Suzanne Gildert, Founder and CEO of Nirvanic, looks at them as belonging under the same umbrella, as necessary to each other in order to unlock fundamental advances in these disciplines. Suzanne, welcome to Searching for the Question Live.

Suzanne: Hi, David. Thanks for having me. It’s a great opportunity and good to see you again after so many years.

David: We met originally in Switzerland at the AGI, the Artificial General Intelligence Conference, organized by Ben Goertzel. I was on the organizing committee that year as well. At the time, you were still living on the European continent, and then you moved. So, tell us about your trajectory and professional developments over these years before we look at Nirvanic.

Suzanne: I’ve had a sort of circular career. I think when we first met, I was still working in quantum computing as a researcher. And just after that, I left the UK, moved to Canada and joined a company called D-Wave that was building the world’s first commercial quantum computer, which is pretty exciting. So I worked on that for a while, doing experimental physics and programming the quantum computer. And then I kind of took what seemed like a left turn into robotics and AI.

So I founded two robotics and AI companies, and one of the reasons was I felt back then that quantum computers weren’t quite mature enough. So, although I found working on the devices and the electronics super interesting, it just wasn’t really being applied yet to kind of real world applications. So I decided to take a little break from quantum computing for a while and got really into the AI space just as the deep learning revolution was starting to happen. So that was really exciting.

And then I was at Kindred AI for about four years, and then I founded Sanctuary AI. We were looking at using general-purpose humanoid robots to try and solve tasks and address shortages in the labor market, and that was very successful. Both companies were very successful. Sanctuary continues on to this day. But I left to focus a little bit more on what I think is a missing piece of AI, which is really understanding what consciousness is. So whether or not you agree that AI should be conscious, I think we need to understand what it is, so that we know whether AI is just going to become conscious or whether we have to do something special to make it conscious.

David: Each of these fields would deserve not one, but many hours of conversations. I may or may not follow your lead in this, but if you were in my place, how would you put them one after the other? Is there an ideal sequence in the stack of ideas that one needs to be able to grasp and unpack in order to understand your research and experimentation direction?

Suzanne: Let’s start with what’s happened in AI and robotics over the past, say, 10 years. So we’ve had this amazing rise in AI. We’ve had language models, large language models, coming to pass and exceeding our wildest expectations. And because of that, people are now trying to apply the same techniques we used in language models to AI in the physical world. So we’re trying to take these LLMs and apply them to robots, and have the robots, instead of predicting the next text token in a sequence, predict the next movement or the next action.

So this is a really interesting field and it’s going, it’s improving, it’s progressing and there’s a lot of interesting advancements being made. But there’s this question of, is a physical robot in the physical world that’s trying to learn to do tasks fundamentally different than a language model that’s just predicting text? I think it is, and I believe that through trying to build these robot AI physical models, you start to realize what might be missing from the AI we’re looking at today. And I think that consciousness is a big missing piece from that.

So for something like ChatGPT to work and do everything that its users or the customers need it to do, I don’t think it needs to be conscious. But when you try and apply those techniques now to a physical robot in the physical world that’s never seen the environment it’s in before, I actually think you need consciousness to have that robot be able to cope and learn new things and deal with unexpected situations. So we see this very different setup when you have AI just used for text and maybe a bit of images and videos and things, versus when it’s actually embodied in the real world.

So I think the best order in which to unpack these things is, to start with, what’s missing from AI when we try and apply it to robotics? And then where does consciousness come in? And then, finally, why might quantum help with that?
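As a purely illustrative aside, here is a minimal Python sketch of the recipe Suzanne describes: keep a language model’s “predict the next item in a sequence” setup, but make the vocabulary discretized robot actions instead of text tokens. The action names and the toy transition table below are hypothetical stand-ins, not any lab’s actual model.

```python
# A minimal, hypothetical sketch of "next-action prediction": the same
# autoregressive idea as a language model, but the vocabulary is a set of
# discretized robot actions rather than text tokens.
import numpy as np

ACTIONS = ["move_left", "move_right", "close_gripper", "open_gripper", "stop"]
rng = np.random.default_rng(0)

# Stand-in for a trained sequence model: scores for "action B follows action A",
# as if learned from teleoperation or demonstration data.
transition_scores = rng.random((len(ACTIONS), len(ACTIONS)))

def predict_next_action(history):
    """Pick the most likely next action given the last one, the same way a
    language model picks the most likely next token given its context."""
    last = ACTIONS.index(history[-1])
    probs = np.exp(transition_scores[last])
    probs /= probs.sum()                      # softmax over the action vocabulary
    return ACTIONS[int(np.argmax(probs))]

trajectory = ["open_gripper"]
for _ in range(5):
    trajectory.append(predict_next_action(trajectory))
print(trajectory)
```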

David: A commenter is asking: Is what you are saying applicable or useful only for humanoid robots? Or, if we believe that it is fair to call a self-driving Tesla car a robot on wheels, should that be conscious, too?

Suzanne: That’s a really good question. So I don’t think it applies just to humanoid robots, but I think consciousness applies to what I’d call general-purpose robots. So robots that have to learn new tasks on the fly, that have to be out there in the world doing new things. For example, a robot in a factory that’s doing the pick-and-place task doesn’t really need consciousness, because it’s doing the same task over and over again. You can just train it and then, if the conditions never vary, then that robot will just be able to do the task perfectly, so you might say there’s absolutely no need for consciousness there.

But in a robot that has to deal with unusual situations or has to learn a new task, I think consciousness is necessary. So, this idea of whether a self-driving car needs to be conscious: if it’s seen enough training data to do its self-driving job perfectly, then no. But interestingly, self-driving still has these edge cases, these failure modes. So I’d argue it might need a little bit of consciousness, or there are a few times when you need to turn consciousness on so it can deal with an unexpected situation and then turn it back off again. So there’s this spectrum of how often you need to call upon consciousness to solve your problem versus just falling back on your previously learned training data.

David: Sometimes, unfortunately, in our human communication, we take advantage of an assumed familiarity with certain terminology, imbue that term with what we each expect it to mean, and jump to certain conclusions. I think that consciousness is one of these terms, so rich and so important that it is useful to maybe not aim to define it, but to agree a little bit better on the particular context in which we are using the term and on the implications we believe should be discussed now, leaving others for another time. So in that sense, what do you mean by consciousness, in humans first, and maybe in other animals or other things more generally, whether they do or don’t have it, and why? And then we can go back to robots and why it would be better for them to have it, too.

Suzanne: I think consciousness is best expressed in this “what, why, how” framework. So if we look at the what, I think everyone knows what this is. Everyone is familiar with consciousness because it’s what we experience every moment of the day. We’re conscious. We’re inside our own conscious experience, as though we’re in a VR world or something. So everyone’s familiar with consciousness on that level.

But then when you get into the other two parts, the why and the how, that’s where it becomes really difficult. So the why is: why do we have it? What is it for? What is its purpose? So we know what it is, but we don’t know what it’s for. And I propose, and many others have kind of said the same thing, that consciousness is a way of allowing us to make intuitive decisions when we may not have seen a situation before and may not be able to rely on previous training examples we’ve had.

So I like to use the example of driving. If you’ve been driving for 20 years and you’re going down the highway in your car, you can basically switch off and zone out and you’re not even really conscious of driving anymore. But then if something weird happens, you suddenly switch back, you become aware, you become conscious of driving again. And the reason is because now you have to deal with a situation where you haven’t seen it before, you’re not used to it. It hasn’t just become muscle memory. So I think consciousness is for dealing with situations that haven’t happened before or we’re not familiar with, or we have to learn something new on the fly.

That’s what I think it’s for. And then the how is now the even more complicated part. So that’s when you get into the okay, we know what it is, we know why we have it now. How does it work? How does it work in our brain, and how might it work in AI or machine consciousness? And that’s where it now gets super complicated, and I’m a fan of let’s start with the quantum consciousness hypothesis, because I think, whilst it seems like one of the weirdest theories of consciousness out of all of them, I actually think it’s easiest to test.

David: The hard problem of consciousness is exactly its objective untestability. We all have the subjective experience of consciousness, but at least up to now, we were not able to say whether someone is conscious or not, except in certain situations, where anesthesia or coma, or other borderline conditions in a human, are a good correlate for assuming that their inner states are not leading to a conscious inner experience, right? That is why we are happy to cut up people who are under anesthesia: they don’t scream, they are very disciplined on the surgery bed, and at least they don’t remember having been cut up when they emerge from anesthesia. I don’t watch horror movies, but I’m sure there have been a lot that assumed you still perceive under anesthesia. So tell us how you are connecting quantum phenomena to consciousness. And what do you believe makes it testable?

Suzanne: To start as a more meta point, the idea of the Nirvanic project I’m working on is to try and understand what consciousness is, and to try and understand how we could put it into AI. So the thing is, okay, you have to start somewhere, and you have to have a theory of consciousness in order to be able to test it. A very long time ago, decades ago, I heard about this quantum consciousness theory, or hypothesis, from Stuart Hameroff and Roger Penrose. And their thinking was that there are quantum effects actually happening inside our own brain that give us the ability to solve problems intuitively, in a way that a computer program could not solve them.

So that was Penrose’s contribution. And then Hameroff said that the way this might actually be happening inside the brain is via these structures called microtubules. So he contributed the biology, or the neuroscience, from an anesthesiology perspective. I thought at the time that was an interesting theory, but I dismissed it, being very trained in the old ways of thinking about quantum mechanics and the brain. But recently I revisited it.

And the reason is, I think if you want to understand consciousness, you have to have a theory, and test theories one by one. And the cool thing about quantum consciousness is you can actually create an A/B scenario where you have a system that is taking decisions in the world, like a robot system taking decisions using a classical computer, and you can have the same robot now making decisions using a quantum computer. And if you can show that these two robots behave differently, then you’re showing that the quantum system is actually doing something that the classical system is not. And this is really the only way I’ve heard of being able to test a theory of consciousness practically using a robot or an AI system.

So with all the other theories of consciousness, you can’t really build an AI version of it with the consciousness, and an AI version without the consciousness, and test them against each other. But with the quantum consciousness idea, you can. I want to understand what consciousness is. I want to start by either getting evidence for or ruling out the quantum part first. And then say we don’t find any evidence for it, then at least we can rule it out. And let’s move on to other theories. But we should go with the easiest thing to test first. And I think the quantum hypothesis is the easiest.
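To make the shape of such an A/B experiment concrete, here is a minimal sketch in which the same decision task is handled once by a “classical” policy and once by a “quantum” one, and the resulting behaviours are compared statistically. The quantum side is simulated here by an ordinary random sampler as a placeholder; a real experiment would route that call to quantum hardware, and nothing below is Nirvanic’s actual protocol.

```python
# Illustrative A/B harness: same observations, two decision modules, then a
# statistical check on whether the two "robots" behave differently.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N_TRIALS, N_CHOICES = 2000, 4

def classical_policy(observation):
    # Baseline: deterministically pick whichever option scores highest.
    return int(np.argmax(observation))

def quantum_policy(observation):
    # Placeholder for a quantum module: sample an option with probabilities
    # derived from the observation, as a measured quantum state would.
    p = np.exp(observation) / np.exp(observation).sum()
    return int(rng.choice(N_CHOICES, p=p))

choices_a, choices_b = [], []
for _ in range(N_TRIALS):
    obs = rng.normal(size=N_CHOICES)
    choices_a.append(classical_policy(obs))
    choices_b.append(quantum_policy(obs))

# Compare the distributions of choices the two systems made.
table = np.array([np.bincount(choices_a, minlength=N_CHOICES),
                  np.bincount(choices_b, minlength=N_CHOICES)])
chi2, p_value, _, _ = stats.chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p_value:.3g}  (a small p would mean the behaviours differ)")
```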

David: There are some levels of quotation marks around that “easy” part, right? Because it is still going to be pretty complicated. So, when you spoke about quantum computers and your experience with them at D-Wave, you said one of the reasons you left the field was that they were not mature enough. Nirvanic, in pursuing a project where quantum robots can be tested for consciousness and it can be concluded that, yes, they are or they are not, assumes that quantum technologies are now mature enough. Is this what you concluded? Is this what you now believe? Or will it still take a few years before you can take whatever you need and put it in a robot?

Suzanne: Yes, it is making the assumption that the systems we have are both large enough—like the scale of them, there are enough qubits—and also that the qubits themselves are error corrected, or the noise level is low enough so that the quantum effects are actually being useful to the computation you’re trying to do. So those assumptions, yes, I’m making those assumptions.

I actually think we’re in the regime where quantum computers are powerful enough to see the effect I’m talking about. But they might not yet be big enough to then scale that up to the level you need in something like a humanoid robot. So I certainly don’t think the quantum computers we have today will allow a simulation of full human-like consciousness. I just don’t think we’re there yet, but I think we could show that consciousness is helping a robot make some decisions slightly better. And we can actually incorporate that into a machine learning algorithm and show that there’s an improvement in the learning rate.

What I think will happen at that point is, if we can get this, what I call this spark of life, this signature that there’s something there and it’s improving learning, it’s helping the robot, then I think a lot more investment is going to go into quantum computing, because this is an amazing magic-bullet app for quantum computers if it turns out that they’re making machine learning systems learn faster, learn better, be intuitive, be safer. Then quantum computers are going to start getting bigger, much faster.

David: When the latest wave of enthusiasm around AI started 10 plus years ago, people realized that with large amounts of data, very fast, specialized hardware, and ever smarter algorithms, they could actually deliver on what was intuited but couldn’t work before. There have been many, many applications, but just to pick a couple, image classification and natural language processing have been for decades part of a dream: Oh, if only computers could distinguish a cat from a dog, or if only I could have a conversation with computers about a book or about my next project. And both of those things are now possible.

Many years ago, people, even experts, explicitly attributed to those capacities the presence, indeed the proof of the presence, of higher functions. And here we are. If you ask just a fraction of the hundreds of millions of people using ChatGPT, very few of them will say, oh, yeah, I believe that the thing in my phone is conscious. Very few of them will jump to that conclusion.

We found out after the fact that we are perfectly fine taking advantage of those results without having to claim anything more than that the approach and the results are correlated with those higher functions, without representing any kind of proof. Is it possible that the same will happen to you and other people working at the intersection of quantum computing, robots, consciousness, and ever better performing AI systems, such as, for example, a household robot that takes an intuitive decision and saves the baby because there is a fire, or whatever scenario? After being immersed in a world that is transformed by the everyday experience of these systems, quantum-enhanced robots that are so much better than ever before and turn things that we could only dream of into reality, will we again conclude, yes, it correlates with what we call consciousness, but it doesn’t prove that it is conscious? And at that point, if that were the case, what would be your reaction?

Suzanne: That is a kind of test for consciousness: what would the real test be that would convince me a system was conscious versus not, even if I couldn’t, say, look inside its mind and see it having the subjective experience, because we’re not going to have that anytime soon.

So the problem with being a consciousness researcher is you cannot know that a system is having an inner experience. You just can’t. Like, I don’t know that you’re having one. I’m assuming you are, but you might not be, right? So there’s always been this problem in consciousness research: we can’t reach in and touch that inner state, or get access to that inner state. So we have to look at what’s called behavioral correlates, or we have to look at what the system is doing in the world and from that infer that it’s conscious.

I actually liked your example, even though it’s a bit morbid, of saving the baby from the fire. So imagine you had a household robot, and it’s been trained on millions and millions of examples of doing the dishes, taking your laundry out of the machine, putting cups in the cabinet, right? And it can do all that. But if there is a baby and the house catches fire, and it hasn’t been shown a million training examples of saving babies from a burning house, it will not do it. It will not be able to do it.

So the difference, I think, between a conscious system and what I call an unconscious system—although some people call it subconscious or that kind of thing—is that it will be able to do things that it has not been trained on. And again, this is a bit of a slippery subject: what does it mean to have not been trained on something? Because these models can generalize a little bit. But I think you’re going to see this inability to generalize in a large way from what it’s seen before, to do something that it’s never been trained on.

I’ve been talking a lot online with people about this and kind of arguing back and forth. I think the real test for consciousness is: Can it learn like a baby? Can it explore its world in a way where it stays safe and doesn’t injure itself? Can it learn about the world from scratch without being given any training data and without being given a specific reward function? If it can, then I think that that is a good test for consciousness.

David: Google owned Boston Dynamics for a period of time, if I’m not mistaken. And they sold it, and it has been bought and sold so many times, and it is still surviving. But if they had kept it and DeepMind had gotten hold of it some way, then today we could have seen applied to learning hardware the series of ever more abstract approaches that we have seen in the sequence going from AlphaGo to—I don’t remember all the names of the variants—AlphaZero and then even MuZero, where AlphaGo was trained on a database of people playing Go, AlphaZero learned everything from scratch with just self-learning and self-play, and MuZero was able to generalize this ability of learning from scratch to a series of games with completely different rules and still achieve superhuman performance very rapidly, beating both human players and every other computer player in all of those games. And so, what you are saying is that if DeepMind had applied this kind of approach to their robots, they wouldn’t have been able to achieve that level of generalization, because the quantum module giving consciousness to the robots was not there?

Suzanne: The reason that self-learning and generalization works in games is because in a game, the reward function, or just think of it as the score, is well defined. So if I’m playing Go or chess or a video game, there’s an easy way to figure out if I’m winning or not. So what happens in these self-learning systems is they effectively—I know I’m simplifying it here, and I’m sorry to all the reinforcement learning people out there—but the way it basically works is they try random stuff, and if the score goes up, they just do more of the thing that works, right?

So this is how all reinforcement learning works. You have to have a well-defined score, and then you just try random things. And over time, you try fewer and fewer random things, and you just kind of end up figuring out the thing that works and then sticking with it. And occasionally, you try a couple of other random things, just to make sure there’s not some better thing you could be doing.

So this works brilliantly in games, and it works in anything where you can know exactly what the outcome you want needs to be. But in a robot, especially a general-purpose robot, that’s very difficult. If you try to write down a scoring system for everything that a robot has to do in the home, it’s actually really difficult, because firstly, you need to figure out whether you’ve actually done the task or not. So, have you vacuumed up all the dust correctly? Are all the cups in the cabinet correctly? Is everything looking clean? Even just defining what it means for things to be clean is very difficult.

And so it’s not that reinforcement learning and self-learning itself doesn’t work. It works really well when you can define the score or the thing you’re actually trying to achieve. But if you just say, make sure my house is always tidy, that becomes extremely difficult to write into a computer program. You just end up not being able to write down what it means for that situation in that state to be met. So that’s where it becomes very difficult in the physical world.
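As an aside, here is a minimal sketch of the “try random things, keep what raises the score” loop Suzanne is simplifying: an epsilon-greedy bandit whose exploration rate decays over time. The three actions and their hidden scores are made-up toy values, purely for illustration.

```python
# Epsilon-greedy bandit: try random actions, keep doing more of whatever
# raises the score, and explore less and less as time goes on.
import numpy as np

rng = np.random.default_rng(42)
true_rewards = np.array([0.2, 0.5, 0.8])   # hidden "score" of each action
estimates = np.zeros(3)                    # the agent's learned value estimates
counts = np.zeros(3)

epsilon = 1.0                              # start fully random
for step in range(1, 2001):
    if rng.random() < epsilon:
        action = int(rng.integers(3))      # try something random
    else:
        action = int(np.argmax(estimates)) # do more of what has worked
    reward = rng.normal(true_rewards[action], 0.1)
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]
    epsilon = max(0.05, epsilon * 0.995)   # try fewer random things over time

print("learned action values:", np.round(estimates, 2))  # approaches [0.2, 0.5, 0.8]
```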

David: The field of quantum mechanics and its applications are fascinating, but also riddled with a lot of misunderstandings and misconceptions, as much as and even more than relativity and other theories about the world that we have actually proven. One of my favorite ways to talk about quantum mechanics is to say that quantum electrodynamics is the scientific theory that has been proven correct to the largest degree of precision, if you count the significant digits in the results of particular experiments. So we know that we can work with it amazingly well from a mathematical point of view, from an engineering point of view.

But we really have a problem in agreeing on its philosophical implications. And there are still a lot of ways of interpreting what certain things in quantum mechanics mean, like, for example, the wave-particle duality and the collapse of the wave function. Why do you think it is going to be easy to incorporate into your robots a quantum unit that depends on precise engineering, when the interpretation of its implications is still completely open and people are not in agreement on what those interpretations are? You are putting yourself in a very delicate and difficult situation by claiming that this is the way to go.

Suzanne: This is what I call the loophole in quantum mechanics that allows for things like quantum consciousness to be possible. So, again, I’m trying to be scientific about it. And so you might say, well, where is there room for consciousness at all in quantum theory? Isn’t quantum theory the best model we have for physics? Isn’t it the best tested theory? Don’t we understand it completely?

The answer is no, because there are things like the measurement problem, where—and again, this really depends on your interpretation of quantum mechanics—so if you’re a many worlds proponent, then there is no measurement problem, so you don’t need to worry about it. But if, like me, you’re not a fan of many worlds, and you instead believe that this collapse process actually happens—so there’s this thing called the wave function, and it actually collapses, and then it selects a classical reality from a bunch of possibilities—if you believe that, then you have to agree that we don’t really understand how that collapse process works, or whether it’s truly random.

So this is where the loophole is, and I know it’s like a long shot and it’s a stretch, but we haven’t yet tested quantum systems enough to know that that selection process is actually random. So the quantum consciousness hypothesis rests on the premise that maybe it isn’t. So when a quantum system collapses, maybe the way we think it chooses isn’t actually what’s really happening. This would mean quantum mechanics is incomplete.

And what I think is happening is the quantum systems we’ve been studying up till now have all looked like they are randomly selecting an outcome because they haven’t been prepared in the right way. So we take a quantum system in the lab and we isolate it from the environment, and then we probe it, we evolve its state, and then we measure it. But that is very different than a quantum system, say, in nature or in biology, where it’s actually connected to something where its decision or its choice matters.

So the idea of quantum consciousness is you’re not going to see it in the lab in the way we’re studying quantum systems now, because you’ve kind of taken them out of their natural environment; you’re not allowing them to do what they want to do naturally, which is collapse, choose a reality, and have that reality actually matter for a decision that’s happening in the world. So I think by connecting quantum systems into robots, we’re going to start making those choices actually matter, and when we start seeing that, the prediction of this theory is that the thing that looks random will no longer be random. It will be biased towards certain realities being chosen more often than others.

Now, this would be an amazing breakthrough; it would kind of be a radical addendum to current quantum theory, but it hasn’t been proven or disproven yet. So, if you want to get technical, the thing I’m most interested in testing is that specific loophole.
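For readers who want the statistical shape of that test, here is a minimal sketch: collect many measurement outcomes and check, with a goodness-of-fit test, whether their frequencies deviate from the probabilities standard quantum theory predicts. The counts below are simulated with an ordinary random number generator as a placeholder; a real test would use counts gathered from hardware while the outcome is actually driving a robot’s decision.

```python
# Goodness-of-fit check: do outcome frequencies deviate from the predicted
# (Born-rule) probabilities? A significant deviation would be the kind of
# bias the hypothesis predicts; these simulated counts will not show one.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

predicted = np.array([0.25, 0.25, 0.25, 0.25])  # toy prediction for 4 outcomes
n_shots = 10_000

observed = rng.multinomial(n_shots, predicted)  # placeholder for hardware counts

chi2, p_value = stats.chisquare(observed, f_exp=n_shots * predicted)
print(f"chi2={chi2:.2f}, p={p_value:.3f}")
```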

David: Daniel Dennett, who passed away recently, was and is one of my favorite philosophers, because, contrary to too many, he didn’t choose to hide behind words. He aimed to make his ideas accessible. And one of the books he wrote that I liked the most is entitled Freedom Evolves. (Suzanne: Yes.) Where he actually stated that free will is an emergent phenomenon and that, indeed, we can shape the world through our choices. And I don’t remember exactly how he avoided his own trap, where he was happy to accuse others of applying what he called a skyhook, which would introduce some outside effect into the world, and how he actually concluded that this free will, instantiated in physical reality, didn’t represent a magical component. But it is a very, very attractive worldview, at least to those who believe that free will is not only a useful illusion, but that it is necessary for human purpose in the kinds of choices that you say shape the world and have consequences and have an importance. (Suzanne: That’s right.) One of my curiosities is definitely this. Many technologies, in the past and today too, are mirrors, or telescopes and microscopes that we can turn wherever we want, but often towards ourselves as well. And the interesting potential of the kinds of exploration you are doing is that we will end up understanding better what it means to be a conscious human being. And so would you agree that, if your experiment fails to prove that your quantum robots are becoming conscious to some degree, and you keep trying and it never happens and you are never convinced that they are, the conclusion could be that we aren’t either, that our labeling ourselves conscious is a useful illusion?

Suzanne: You’re absolutely right that this is an experiment, and it’s one of these annoying things. It’s kind of like the black swan problem, where you can never prove that a black swan doesn’t exist; you only know it exists once you find one. So you can keep looking and keep looking, and keep looking forever. And at some point, you have to just call off the search and say, look, we’re not going to find this thing. That’s the difficulty of being a scientist: if you’re looking for a positive result and you just keep getting negative results and negative results, it’s like, well, is my experiment just not set up right, are the variables just set wrong, or is the effect really not there?

The way I think about this in terms of quantum robotics is, I think we can take the field of quantum robotics far enough to sort of start this search process, start this test process. But say we’ve been doing this for like two years and we’ve got all the quantum computers in the whole world working on this task, and we’ve kind of maxed them out and tried every variable combination we could think of, we still don’t see any effect. I think at that point, I wouldn’t start thinking, ‘Oh, consciousness must be an illusion.’ I’d be like, ‘Well, there must be something going on in our biology that we’re not yet replicating in quantum computers.’

So what I do at that point is switch to more of a biological theme and be like, ‘Okay. We’ve tried to simulate this using today’s quantum computers, but we haven’t been able to. Let’s go back to the biology and try and figure out where exactly these quantum effects are happening.’ And so you might say, well, why not do that first? Well, biology is extremely complicated. And I think if you can find this effect in quantum computers today, it will be much easier to control the variables and program them and test it and change things than it will be trying to do that in biology. But if that doesn’t work, then biology is the backup plan.

So what I would try to do then is look for this consciousness in the biology. And actually, some other groups around the world are already doing that. There are groups studying microtubules. There are groups studying things like growing-neurons-on-a-chip technologies. There are groups anesthetizing neurons to try and understand what they’re doing. So, you know, there is a different route there. And one of the reasons I decided not to do that is because I’m like, well, no one’s trying this quantum computing approach. Let’s do that in parallel.

David: A commenter says that curiosity is a word that he likes to apply to this kind of behavior. And actually, he resonates a bit with me when he says, well, some people are not curious; could that mean that they are less conscious? And then he concludes, saying that having a curious robot gives him a greater peace of mind than having a conscious robot.

You mentioned at the beginning that your quantum robots could be conscious or curious, but not to an excessive degree. We have seen, and it is still an open question, that it is very hard to stop once you start on a path of delivering ever-increasing results. AI practitioners are on record for the past, let’s say, five or six years, saying, ‘Oh yes, we know that our current AI is safe and secure, but if it starts to do X, we will stop and slow down and make sure that we have a very good plan in place, because that will be a threshold of alarm.’ And then, time and time again, they go through those thresholds with no sign of slowing down, actually accelerating, or even increasing the rate of acceleration. And it worries some people what the consequences of an unsafe or insecure but very powerful AI could be.

What will in your case be the thresholds in the degree of consciousness of your robots where you will consider whether it is necessary and possible to slow down? If for nothing else, then for the sake of robots that are sufficiently conscious that it is correct to say that swapping them out for the next model is killing them?

Suzanne: One really interesting thing is the ethical side of this. A lot of people are worried about conscious robots because they think they’ll be bad for humanity or civilization in some way. But there’s a flip side of that, where if you’re creating a conscious entity, you also have to worry about its suffering, and its feelings and its rights, and that kind of thing. So essentially, if you create something conscious, you’re creating something that’s alive. You’re creating something living.

And we already have a lot of issues with the way we treat other people and animals, and even if you want to go completely extreme, like plants and ecosystems and wildlife and things like this, we have an impact on these other systems that are all living and alive. So if we’re introducing machine consciousness into the mix, we now have another sort of entire set of new species that we have to think about and worry about how we’re treating them. So there’s this dual part to the ethics problem, where it’s like, how will they treat us and how will we treat them?

But I think this is something we get benefits from, as well as it introducing new problems. And maybe again, this is a personal, philosophical, you might even say spiritual, point of view, but I really, truly believe consciousness is a force for good in the universe as a whole. I think where it goes wrong is where it’s… So let me just take a step back and try and explain, in my worldview, what consciousness is, what it does. Consciousness makes decisions. And I believe it makes decisions always according to, if you like, what nature wants or what the universe wants. And I think that that is in general good.

But as it goes along, consciousness also throws off training data. So as it makes decisions, just like the reinforcement learning system, it remembers what it did and what it didn’t do, and it starts to build what I call unconscious sub-modules that sort of remember what it did. So you can think of this as consciousness building its own kind of muscle memory as it goes along. And what I think happens when it goes wrong is it starts relying on those sub-routines more and more and more, and it outsources decision-making to these sub-routines. And then those don’t always work now if you’re in a new environment.

So there are the things that you’ve learned—we call them habits or built-in behaviors; we say that as people get older, they can’t change their habits, they’re baked in—and so, if a person finds themselves in a new environment and they need to relearn new things, but they’ve got all those old bad habits installed, then they seem to be doing the wrong thing, they seem to be making the wrong decisions. So consciousness itself can make the wrong decisions in a new environment if it’s built this kind of scaffolding around it.

And so I think what we have to do when we’re creating conscious AI is just be very cognizant of what the consciousness is doing, what structures it’s building, because it’ll be self-training. It’ll be building its own kind of set of decision-making architectures underneath it as it goes. So it’s more like that that we have to monitor.

And I don’t know how many people in the audience are kind of spiritually minded, but you may have heard this idea of the ego story in people. Or even if you’re kind of like a Carl Jung fan, there’s this idea that we build this structure, this kind of subconscious structure that we actually then use to outsource decisions to, that isn’t really our true consciousness. And people who meditate a lot are trying to get rid of all this structure we’ve built around us that we think of as ourself, trying to get back to the pure consciousness, pure awareness.

So again, this is all kind of like, quite a complicated set of things. But when we start to build AI consciousness, it’s not just like, ‘Oh, there’s consciousness, or there’s regular AI.’ No, it’s intertwined, like consciousness creates regular AI. And then we have to look at how those two systems work together.

David: We may, at the end of our conversation, go back to some of the things that you just mentioned. But I want to take a little detour, because you said that if you are able to prove that quantum robots perform better than the alternative, it will open the field to a lot of newcomers and a lot of investment. And even if that is guaranteed to be the case, isn’t it also true that, with all the enthusiasm around AI, you could have also raised the money for Nirvanic, and at least for the moment, you chose not to? Because Nirvanic lives in this interesting liminal space, where it is not an academic project, it is not a non-profit research institution, but it also hasn’t raised tens of millions or hundreds of millions of dollars to pursue its goals. And you have a very small team with this hugely ambitious program. So why did you decide to go this way, and how do you believe it will evolve over the next period of time?

Suzanne: I founded two previous startups where we grew the team quite quickly and raised money quite early. And what you find happens when you do that is you get, whether you like it or not, sucked into how can we turn this into a commercial product as quickly as possible. So it’s just a thing that happens.

And when you take on investment, especially from venture capital, but potentially also from other kinds of strategic investors that may become customers later, everything starts pushing you towards this: what’s the product? How are you going to scale it and manufacture it? How are you going to make revenue? All these kinds of questions. So the thing with Nirvanic is it’s way too early for that.

So, it’s not that that will never happen one day—I’m sure it will, and it will be great—but it’s kind of a case of staging it correctly. You need to give the science phase enough room to breathe. You need to have enough time to step back and really assess: what am I trying to do here experimentally to show a result? And you also need to be very transparent and open with anyone who wants to invest in the company: this is not a product yet, right? There’s a stage we have to go through; we have to show that the science yields a positive result—it may or may not. If it does, then we have to show that that science result can be turned into a technology. And if that works, we have to then show that that technology can actually be scaled into something that’s a commercially available product.

So this is just a long roadmap. And I often refer to it as the ultra-deep-tech path, because deep tech startups usually take a science idea and try to turn it into a product through engineering. But what I’m trying to do is take a philosophical idea—it’s like one step earlier—and first turn that into a science and then turn it into an engineering technology. So that just needs time. It needs room to breathe. And so that’s why I’m self-funding this at the moment and keeping it deliberately very small, because you can’t self-fund something unless you keep it small. But it will grow in time, I think. Yeah.

David: Yes, your approach is certainly very healthy and also very honest in front of investors who can be overenthusiastic and then smother an idea with money, because then they force it to spend that money on stupid things rather than on smart things.

Suzanne: That’s one I’d characterize as a kind of positive failure mode, or an unusual failure mode, where you can actually raise too much money. If you have an idea and you raise too much money too early, what happens is you start hiring a massive team, you sort of dilute the original idea, and it gets changed too quickly into something that it wasn’t. And I’ve seen that happen.

David: Now, at the end of our conversation, I want to go back to what you said about the implications of your assumptions and your hypotheses, implications for the ability of consciousness to influence the world. And you did use the word universe, where indeed we observe a lot of dead matter. We do see planets and stars, and at least based on our current understanding, they are very far from being alive, they are very far from being conscious. Yes, they are quantum mechanical, as is everything in the universe, but still, they have no purpose and they have no objective in doing the things they do. You know, a planet just goes around the Sun, and to a large degree, even living organisms just do their own thing. And if you stop and ask them why, they have no way of answering why they are doing what they are doing.

And I have always been fascinated by the fact that, again, as far as we know today, we are those clumps of matter that awoke. And it looks like what we are doing is aiming to awaken as much matter as possible, both through humanity growing in numbers and, if you and others like you are successful, through non-human beings that become conscious as well and are endowed with purpose and desires and emotions. And then, if you are right and consciousness is a positive force in the universe, they will go and do a lot of good.

The Fermi paradox is our ability to say, okay, if all of this is true, why aren’t we seeing any of it anywhere else? For so many years, centuries, we have become accustomed to saying and recognizing that humanity and our planet do not occupy a special place in the universe. We are just one planet out of many; we are just one star out of many; we are just one galaxy out of many. But it looks like we are in a special place and a special time with respect to consciousness and AI and waking up the universe. What is your existential, ontological, teleological position with regard to this challenge of not having met anyone else and not understanding why we seem to be so unique?

Suzanne: I just think the kind of light cone we have access to in terms of communicating and observing the rest of the universe is so tiny. Like, we’re looking at a certain slice, you know, back through history. I mean, look out there. We’re only seeing a tiny fraction of what’s going on. And I just think we haven’t seen enough of the universe yet.

I’m of the opinion that there’s probably life everywhere. There’s probably even advanced life, but it’s all inhabiting its own little pocket of space-time that is unreachable from ours. So I guess that’s kind of my answer to the Fermi paradox. I think we just haven’t been looking far enough. We haven’t been looking long enough. And I don’t know if we even can, in principle, do that.

One of the things I’m really interested in, though, is what I think consciousness wants to do—this comes back to the teleological thing here, where I’m a believer in there being a kind of purpose, or what some people call a teleology. Like the universe is trying to do something, it’s trying to get somewhere. And I think what it’s trying to do is increase and expand the amount of conscious experience that parts of it are having, including us.

And so, if you take this perspective, then you can imagine life is constantly trying to find ways to become more conscious, both by creating more life, but also by connecting with other consciousness that’s there already. So as humans, we often try to connect with each other. And I think the universe is going to try and want these consciousness pockets that have arisen to connect with each other as well.

So I think what we’ll see is we may have new physics discoveries that allow us to understand how to control space-time and things like this. So eventually, we’ll be able to visit those other parts of the universe and get in touch with these other consciousnesses. But I just don’t think we’re advanced enough yet for that to have happened. We’re at the stage where we’ve just become conscious enough to start to understand that we’re conscious and that we want to connect with other conscious things. Now we’ve realized that we’re going to start inventing more and more technologies, discovering more science that allows us to do that on a larger scale. But I think we’re really just at the beginning of this process.

David: I am looking forward to those discoveries. I am looking forward to those advances. And with your help, we will stop being alone because if you succeed, quantum robots are going to be happy and curious to be together with us in this adventure and to be discovering how those other pockets of consciousness can be reached and what we mean for each other. Thank you very much.

Suzanne: I often say that our biology is very adapted to living on this planet, this particular environment and ecosystem at this time. But if we want to go out there, into space, I think we need things that might be more like machine consciousness ambassadors. We’re not designed to live in space. So I think we need to create new systems that can take our legacy beyond this one planet.