Humanoid Robots and the Future of Labor

In the latest episode of Searching for the Question Live, I had the pleasure of hosting Adam Dorr from RethinkX, for a conversation about the impending revolution of humanoid robots and AI.

Adam painted a picture of a future where robots could outnumber humans 10 to 1, performing tasks we can’t even imagine yet, and being produced at a pace of one billion per year within 10 to 15 years.

What happens to human labor? How do we ensure a fair distribution of the incredible productivity these robots will bring? Are we bound to create sentient machines? Our social contract needs an overhaul. The idea that a person’s value equals their economic output is going to look as barbaric as slavery does to us now.

But Adam reminded us that optimism isn’t enough. We need to make the right choices, ask the right questions, and above all, focus on protecting people as we hurtle into this new world.

Watch the video of the conversation and read the edited transcript.

David: Welcome to episode number 101 of Searching for the Question Live. My name is David Orban and I’m very excited to discuss with our guest how AI and humanoid robots will bring rapid and profound disruption to human societies worldwide, redefining our relationship with fundamental concepts like labor and human purpose. Our guest is Adam Dorr of RethinkX, the think tank founded by Tony Seba. Tony was a previous guest on this show, where we spoke about the unstoppable revolution of renewable, sustainable energy. Welcome Adam to Searching for the Question Live.

Adam: Thanks so much, David, for having me.

David: Let’s start with a question I like to ask my guests. Tell us about yourself and how you’ve gotten to where you are today.

Adam: I was studying technology and its implications for environmental policymaking and planning at UCLA for my PhD when I received an extraordinary phone call. It was from James Arbib, who told me he was co-founding a new think tank with Tony Seba to explore disruptive technologies and their implications for society, the environment, geopolitics, health, energy, transportation, and much else. Tony was like a rock star in my world – I was already showing his lectures to my students at UCLA. This was like when your favorite band calls and asks you to play guitar for them on tour. I couldn’t believe it at first.

I joined Tony and Jamie at RethinkX in 2017, and we began an absolutely thrilling research program exploring the implications of disruptive technologies. In the past seven years, we’ve explored the disruption of energy, transportation, food, and now most recently, the disruption of labor by humanoid robots and AI. It’s been an extraordinary professional experience that I never could have imagined.

David: Did you ask them what made them notice you at the time?

Adam: I believe I had caught the attention of John Elkington, a very famous thinker in sustainability. I think he had seen one of my academic publications about shortcomings in technology forecasting and recommended it to Jamie and Tony. When they saw my work, they saw someone who thought like they do about technology, disruption, and the future. 

It’s funny – before publishing that paper, I was worried about spending so much time on it as it was a distraction from my PhD studies. But I was so passionate about the topic that I set aside the time to write it. I’m glad I did, as it changed my whole trajectory.

David: Tell me more about RethinkX itself. For those unfamiliar with its business model, it may be surprising that such detailed, well-illustrated, and sourced documents are available for free. You even translate the most important ones into languages other than English. Do you have an endowment that allows you to do this, or do you charge for consulting with companies or governments?

Adam: This is really due to the extraordinary vision and integrity that our founders, Tony Seba and James Arbib, have built into RethinkX’s DNA. We’re an independent research organization, and the overwhelming majority of our funding is philanthropic. We don’t produce research reports about disruptions for paying clients. We do consult and engage broadly with industry, investors, policymakers, and civic leaders to share information and provide actionable insights. But our mission is to disseminate and educate with our insights and discoveries.

This would be an extraordinarily difficult challenge without the vision and support of our philanthropic funding. It’s an extraordinary position to be in, both as an organization and for me personally. As a research scientist, I have extraordinary freedom in a research sense. Beyond that, I have intellectual freedom. Our whole team is free to explore whatever topics we feel are most important to understand at the moment. We’re not stuck focusing on a particular topic. We’re free to change our minds and shift our focus. I simply cannot give enough credit or thanks to Tony Seba and James Arbib for making all of that possible and integral to our mission.

David: Even if it’s not part of your job to do fundraising, it’s worth reminding people that they can donate to RethinkX, which is a nonprofit, and that if they are U.S. residents, their donation will be tax-deductible. You said something worth repeating – you don’t develop reports for paying clients because the price you would pay by receiving their money is that the report would be secret and exclusive to them, rather than being able to spread it as widely as possible for everyone to benefit.

Adam: Yeah, this has been not only central to our mission and strategy but also our success. The reports we’ve produced and the research and insights we’ve generated have gone further and, I hope, have made more impact because they’ve been open, shareable, and freely accessible. I believe that’s allowed them to have a greater impact than if they had been for only a small private audience.

David: I hope we’ll have time to go back and talk about environmentalism as well. It used to be a relatively niche topic in the 70s and 80s, but in Germany, it turned into a political movement that found a role in parliament and influenced policy. This led to the subsidies that Germany provided for solar, which was instrumental in accelerating solar adoption beyond what would have happened on its own.

Also, I have a few provocations we may want to cover. Bjorn Lomborg, for example, is a very controversial figure. In his resource-allocation exercise with Nobel laureates, the billions of dollars were to be given not to the canonical CO2-reduction programs but to poor populations, to increase their resilience, adaptability, and individual or community wealth.

The last provocation, if we have time, is how tainted I feel the German Greens potentially are today. It was discovered that they falsified the report that led to shutting down nuclear plants and firing up coal plants. The experts had not concluded that extending the plants’ useful life would be dangerous; instead, the experts were censored and an ideological position was presented as if it were scientific truth.

Now, obviously, humans are complex and fallible. But I think there’s a fundamental truth that everyone is starting to realize: unsustainability is unsustainable. It doesn’t matter if we do it or if it’s done to us by Gaia. The whole system will not let us move it out of balance forever. It will return to balance by itself, and either we will be there or we won’t, but it will be in balance.

Let’s move on to your analysis of the impact of humanoid robots. When did you decide that this was what you wanted to study, and how long did it take until you got where you are now with this report?

Adam: We had been publishing material about robotics for almost a decade. Around 2015 and 2016, before I joined RethinkX, I wrote pieces arguing that the same technology being developed for self-driving vehicles would enable robotics in general. A car that can drive itself is a robot on wheels. My thinking at that time was that it’s a logical choice to try to solve artificial intelligence for robotics with self-driving vehicles because the potential market is enormous.

It seemed to me that tech giants, particularly Google (now Alphabet), were justifying billions of dollars of R&D spending on self-driving vehicle technology as a path to a general, embodied artificial intelligence for robotics. I argued that self-driving technology would be the pebble that starts the robotics avalanche, the automation avalanche.

This was interesting because when we saw other major tech companies enter that space, it surprised many people. For example, when Tesla announced their Optimus robot program, many people thought it was a joke. But it’s exactly what I expected. They’re a technology company, and the same technology that will enable self-driving vehicles can open the door to robotics of all kinds.

We began researching this seriously in the immediate aftermath of GPT-3’s launch, and then of GPT-3.5, which enabled ChatGPT. The large language model revolution within artificial intelligence began making its way into public consciousness. Policymakers, investors, and industry leaders began to see that this technology is not science fiction anymore. It’s real, it’s coming, it’s accelerating exponentially, and it’s going to be hugely impactful.

We gathered our thoughts in this recent piece on robotics. We were probably mostly complete about six months ago. The temptation was to take time to put together a full report, do more quantitative analysis and modeling. But we decided that the situation was more urgent and required us to put our thoughts out immediately. We feel this particular disruption is likely to proceed extremely quickly, perhaps even faster than the disruption of energy or transportation. So we felt there was no time to lose.

David: There are many approaches and reactions to artificial intelligence and artificial general intelligence. Some believe that the data available on the Internet is just not enough, both quantitatively and qualitatively, to train an algorithm that can be generally intelligent, especially about common sense for the physical world. There’s a school of thought called embodied intelligence that explicitly says unless you have a body and your intelligence is in that body, you won’t be able to interact and have opinions and act on the world the way humans can.

In this sense, it’s logical that companies wanting to drive AI would want to attack the ability to acquire data from the physical world. It’s a bit surprising that Google actually sold their robotics unit. OpenAI had a robotics division doing interesting experiments transferring digital knowledge into physical knowledge, like a robotic hand learning to solve a Rubik’s cube. 

Do you have any intuition why their efforts were abandoned? What made the first group stop and the second group go ahead?

Adam: That’s a great question. I can’t pretend to have answers, and I wouldn’t speak for those particular tech giant companies. It’s a bit of a mystery. The reasons may be sound, but it is strange given the timing and positioning.

One factor that’s probably involved is that manufacturing hardware at scale, especially large complex machines, is extremely challenging and represents a fundamentally new domain. A lot of the technological focus and expertise in Silicon Valley is in software and computation hardware. Manufacturing large devices like automobiles is a different kind of challenge. It could be that this out-of-domain, unfamiliar challenge was financially or logistically suboptimal or intimidating in some way.

It’s also possible that it was just a mistake. These things do happen. Lots of mistakes and regretful missteps have happened in the past, not just in Silicon Valley but in major industries of all kinds. This fits with the pattern we’ve seen throughout history, where sometimes the incumbents who are best positioned are unable to leverage what would ostensibly be a major competitive advantage.

A classic example is Kodak, the giant photography company. They not only were dominant in photography in general, but they owned key technology and patents around digital cameras. They were perfectly positioned to lead in the digital disruption of traditional film photography. Yet, as we all know, they failed to lead. They suffered from the innovator’s dilemma. They didn’t disrupt themselves, waited too long, and outsiders instead won the race to the digital disruption of photography.

We may be seeing something similar. It could just be that the incumbent technology firms are suffering from that same sort of innovator’s dilemma and are unable to self-disrupt. As strange and unintuitive as that may seem, we’ve seen that pattern many times throughout history.

David: The Innovator’s Dilemma is a wonderful book by Clay Christensen that illustrates, with well-researched examples, how hard it is for a dominant player to endure the hard work of making a solution workable when, at the beginning, it is inferior to what they currently thrive on. Some have the foresight and succeed in the transition. The best example is Netflix, which started by shipping DVDs and has since stopped entirely, telling its DVD subscribers either to switch to streaming or go unserved. This was despite everyone telling them they would never be able to serve high-quality video to millions of subscribers over the internet, which is exactly what they’re doing now.

It’s worth showing a couple of fundamental concepts to our viewers. The first is the power of exponentials. In your reports, you show a lot of trends where one is passing away and the other is taking over. The most well-known exponential is Moore’s Law, which for over 50 years predicted how transistors would become more numerous and less expensive, leading to ever more powerful computing devices around us in a self-fulfilling prophecy.

What a lot of people don’t realize is that we’re not observing just a single exponential, where the knee of the curve will be followed by an opposite trend as a given technology or market saturates. We’re seeing many generations of technologies that together define the exponential we’re looking at. The learning curve that allowed cars to go from being a luxury item to ubiquitous is the same learning curve that’s going to make humanoid robots not only productive and disruptive, but potentially affordable to everyone.
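The learning-curve point can be sketched numerically. The following is a minimal illustration of Wright’s law, under which cost falls by a fixed fraction with each doubling of cumulative production; the 20% learning rate and the $200,000 starting price are invented for illustration, not figures from RethinkX:

```python
import math

def wrights_law_cost(initial_cost, cumulative_units, learning_rate=0.20):
    """Cost of the Nth unit, assuming cost drops by `learning_rate`
    (e.g. 20%) with each doubling of cumulative production."""
    doublings = math.log2(cumulative_units)
    return initial_cost * (1 - learning_rate) ** doublings

# Hypothetical: the first humanoid robot costs $200,000.
for units in (1, 1_000, 1_000_000, 1_000_000_000):
    cost = wrights_law_cost(200_000, units)
    print(f"{units:>13,} units produced -> ~${cost:,.0f} per unit")
```

Under these assumed parameters, cost falls by orders of magnitude over the doublings between the first unit and the billionth, which is the dynamic David describes for cars and expects for robots.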

What’s the important difference between the humanoid robots we’re talking about and the industrial robots that have been installed in various places in the hundreds of thousands? One of our viewers, Emiliano, comments that they’ve been working beside us since the 80s. What’s the difference with the new generation of humanoid robots?

Adam: The clearest difference is that the automation we have today, the robots working in automated production facilities and factories, are not intelligent in the expansive sense that we now see when we say AI. It’s true we don’t yet have artificial general intelligence. We don’t have AI that is totally, generally intelligent. Nevertheless, what we’re anticipating with humanoid robots is a much more generalized robotics solution.

In the same way that human beings can perform much more flexible and general work inside a factory than a traditional factory robot can, imagine a single robotic arm: huge, three or four meters tall, weighing several tons, helping to assemble something in a factory. That’s what we mean today by automation, by industrial robots. They’ve been tremendously effective and influential, in much the same way that other heavy machinery has been influential and beneficial to human productivity.

For example, bulldozers are a form of mechanization. You can think of a bulldozer as a machine that amplifies human productive capacity because it’s able to do so much more of the sort of work that we need than a human being can do. A bulldozer can move more earth in an hour than a hundred people with shovels can move in a month. An industrial robot is far stronger, can be much faster, can have far greater precision or endurance, and so forth. So it can exceed human beings on many parameters. That’s why we utilize them in our production today.

But what they don’t have is the flexibility and adaptability to perform novel tasks, or truly complicated tasks that innately require a large amount of what you might think of as error correction. If something doesn’t go perfectly with a traditional industrial robot of the last several decades, that robot is not flexible enough to adjust, make corrections, stay on task, and keep being productive. Typically, it has to stop and be evaluated, corrected, maintained, or fixed by a human operator.

When we put together the AI being developed first for autonomous driving applications and the new artificial intelligence behind large language models and generative AI, we don’t get a general intelligence, but we do get much more flexible, adaptive, and error-correcting potential. What that means is that humanoid robots enabled by this combination will be able to do productive, useful work in applications that previously only human beings could handle.

These are tasks that in the past only a human being could do. Surprisingly, they may seem simple because almost any human being could do them. Almost any human being can fold laundry or sort objects by color between two baskets. These things don’t seem challenging for the typical human being. But they require a deceptively large amount of intelligence. We’ve now reached the point where that’s possible with artificial intelligence. So these humanoid robots are going to find enormous numbers of tasks that they can be productive in.

The other thing to mention is that the unit of analysis that’s appropriate for humanoid robots is the task. It’s not correct to think of humanoid robots as having jobs in the same way that a human being has a job. Human beings enter into employment contracts, which we then call having a job, and that entails legal obligations. It can involve many different tasks that you have to perform. It can involve the performance of tasks that you don’t anticipate, that you have to create in order to spontaneously or creatively solve a problem if it comes up. Our jobs can be connected to our identity. They can be part of our culture.

Humanoid robots are not going to perform labor that has any of those aspects to it necessarily. Maybe one day when the technology advances to the point where robotics and sentience converge. But up until then, robots that are not sentient, not generally intelligent, not agentic, will still be able to be generalized enough and flexible enough to perform a very large variety of tasks. There are an enormous number of tasks, great and small, that go into the manufacture and production and distribution of goods and the delivery of services of all kinds throughout the economy, literally millions upon millions.

On top of that, there are many tasks that are not performed today because no human being is either willing or able to enter into an employment contract to perform them. In other words, there is a latent, unfulfilled demand for labor: either the tasks are too dangerous for a human being to perform, or they are simply too difficult or otherwise undesirable, and human beings are not willing to perform them for a wage that employers are willing to pay. There is no way to make a transaction, a trade, for that labor.

But humanoid robots are likely to find and perform very large numbers of tasks that are too dangerous, too repetitive, or too dull, and do them at a cost that employers and producers can pay. Our analysis, and others’, has shown that the cost of robotic labor per hour could be competitive with human labor from the start. Fairly straightforward quantitative analysis shows that a humanoid robot costing as much as $100,000 or $200,000 could still, over its lifetime, perform labor at a per-hour cost competitive with human workers. And from there, it’s easy to see that much less expensive robots could perform labor for a small fraction of what human beings must be paid.

David: I think it’s important to clarify a fundamental point that shifts the conversation away from what could easily be framed politically as a conflict, a competition in which humans lose and robots win, toward a deeper understanding of how dramatically what we mean by the economy is going to change. A useful observation for achieving this shift of perspective is that the economy is, by definition, not a zero-sum game: there would be no economic transaction if both parties didn’t believe they gained from it. Nor is the economy a closed system. It’s an open system, in which what we did in the past under certain conditions may not even be comparable to what we will do under the new system.

In your report, for example, you talk about LED lighting and how LEDs are so different that talking about how many watts they consume or what their power is, is not meaningful. Now people talk about their color. And very importantly, we also have crazy shapes like LED strips. We have all kinds of new illumination. We haven’t even fully started redesigning our homes, taking LEDs for granted, which are so inexpensive both to install and to use that they completely change the equation.

I’d like to launch a challenge. Unfortunately, I’m not a good designer, so I cannot bring you a solution. But your charts hide this fact. When we talk about, for example, insulin from animal sources going down close to zero because synthetic insulin, indistinguishable from human natural insulin, takes over completely, the 100% on the left side is, I don’t know, a thousand times smaller than the 100% on the right side. They are just not the same thing. Potentially it’s the same with every one of these charts where the fact that they are made compatible by being on a percentage scale hides the fact that they created an immense amount of value that otherwise wouldn’t be available. I have no idea how to graphically represent this, but I believe it’s an important detail.

Adam: That’s a very astute observation, David. Some of the charts we’ve presented do show this, with a different scale on the Y axis instead of zero to 100% of market share, revenues, or whatever else is being measured. If you show the absolute magnitude, you can see that markets grow over time, production grows over time, the absolute quantities you measure grow over time. And then what you see is that the old technology is just a blip, a bump at the bottom, as the new one crosses over and then launches upward, often exponentially in its first portion, before saturating any given domain. And yes, this is a more challenging thing to visualize.
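A toy calculation makes the scaling point concrete: on a 0-100% share axis the new technology simply saturates, while the absolute market it created keeps growing. The numbers below are invented purely for illustration:

```python
# Units produced per year: invented figures for an old and a new technology.
old_tech = [100, 80, 40, 10, 1]
new_tech = [0, 50, 300, 2_000, 20_000]

# Market share of the new technology, year by year.
shares = [100 * new / (old + new) for old, new in zip(old_tech, new_tech)]

for year, share in enumerate(shares):
    total = old_tech[year] + new_tech[year]
    print(f"year {year}: new-tech share {share:5.1f}%, total market {total:,} units")
```

The share curve flattens near 100% by year 3, while the total market has grown roughly 200-fold over the whole series; a percentage chart alone would hide that growth, which is exactly David’s objection.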

David: I have to jump in, because not only do your projections foresee, within an astonishingly short time of 10 or 15 years, yearly production rates of billions of humanoid robots, but I believe that when we cross the humanoid robot revolution with the space industry revolution, we unlock not only the potential but the need for tens, hundreds, or thousands of billions of robots to inhabit the solar system and build so that billions of humans can also live there, forever dwarfed in number by the robots, but in a manner that otherwise wouldn’t be possible. I think that’s what’s going to happen on Mars. We will have Mars missions landing soon, and they will deliver Optimus robots by the hundreds and then the thousands, together with other kinds of systems that will enable the building of habitats. And once everything is nice and cozy, the humans will arrive.

Adam: I think there’s undeniable logic to recognizing that up until this point in human history, labor has been the fundamental limiting factor of production, because scarcity of anything else is ultimately reducible to labor, and in particular to skilled labor: labor combined with knowledge, labor combined with intelligence. Any quantity you’re interested in increasing, whether it’s energy, goods and services, available materials, or the transformation of energy and materials of any kind from raw materials into finished goods, all of that activity, whether productive activity in the general sense or economic activity in the narrower sense, is fundamentally constrained by the amount of intelligent labor that can be deployed.

Up until now in human history, the only available source of intelligent labor was human beings, and not just how many of us there were, but how many educated, skilled human beings of working age were available for that kind of labor. One reason we’ve become more productive as a civilization over the centuries is the brute fact of population growth. There are more of us now. But even with billions of human beings, this has remained profoundly limiting.

What artificial intelligence and robotics offer is a way to blow that limitation open, for it to cease being limiting in the way it has been. The potential to manufacture productive capacity itself, in other words to manufacture the capacity for labor, is fundamentally new and totally game-changing. We will be able to expand the available amount of labor vastly faster than we ever could at any prior time in human history.

There are two things to think about here. Number one, we can now expand labor as fast as we can manufacture humanoid robots, and this is one of the things we emphasize in the piece we wrote. That in itself justifies an enormous, a staggering amount of investment. Investment in these programs at a national scale is now absolutely justified on many bases: on the basis of productivity, on the basis of security, and so forth.

But even more than that, humanoid robots, because they are able to do useful labor, can close the loop on themselves and can accelerate their own production, their own manufacturing. We call this auto-catalysis, self-acceleration, or self-enabling.

David: And in two ways. One way, as in this AI-generated image, is that if you want to add a new human worker on the left side, you need 20 years. That human needs to be born, raised, and educated, and then, 20 years later, hopefully they can be a productive worker. If you want to add a new robot on the right side, however long it takes to build the robot, that’s it. Maybe an hour, two hours, whatever it is.

Adam: And to build on that: yes, there’s the manufacturing, but then educating this robot is just a download. It’s not even an overnight process; it can happen over the air in a few seconds.

David: The horizontal transfer of knowledge that we achieved with the invention of writing allowed human civilization to improve greatly. However, the speed of that transfer was limited by the physical movement of the books or scrolls, and by the ability of the recipient to read and act on the knowledge they contained. The horizontal transfer of knowledge and skills, of expert-level performance, among robots is unimpeded: it travels at literally the speed of light as new insights spread through their population.

Which brings up an interesting question. If robots will change our world, why aren’t huge corporations like Coca-Cola producing robots instead of sugar and water? One reason could be that they are unable to learn, or unable to unlearn. If their goal is to provide increasing value to their shareholders, and to a large degree it is, then maybe they should diversify and decide that producing a product or service different from today’s core product is the better path. But to do that, they would have to unlearn and then learn again, which today they can only do very, very slowly.

So whatever new corporations are going to be designed around the knowledge of advanced AI and humanoid robots, it is very likely that they will be extremely nimble, not set in their own ways and able to adapt to needs unanticipated in a manner that no traditional company can.

Adam: I think we can look to ecology and biology for examples of what to expect in human economies. The dynamics of these systems are not identical, of course, but there are enough similarities to draw instructive lessons. In any ecosystem there are many niches, many different ways for a species to find a niche and succeed, to make a living, as you might say, to survive and to thrive. And so the world of ecology, the living biosphere around us, is full of tens of millions of different organisms surviving and thriving in different ways.

It isn’t convergence. This is one of the great surprises of evolution: there is no single optimal way to be alive, to make a living, to be productive. And we see this in economies too. For any firm or company, there is a benefit to specialization, to focusing on a niche, to creating value in one particular way. And once you are in that niche, it can be, as you said, David, difficult and dangerous, risky, uncertain, and expensive to step away from it and expand into another, especially a novel, unexplored one. That requires an agility, a flexibility, that carries high costs along many dimensions, not just purely financial ones.

And this is another reason why, throughout history, we tend not to see the successful incumbents, small or large, being the innovators. There are exceptions, of course, but overall we don’t see much of it. What we see is innovation coming from flexible, agile outsiders rather than the established, successful insiders of an industry. That’s a pattern we see again and again, and we can look to natural systems, where similar dynamics play out, for lessons.

So no, I would not expect Coca-Cola to announce a humanoid robot and become a leader or innovator in that space. Having said that, maybe we will be surprised. If a car company like Tesla can go into robotics (and I say “car company” loosely, because Tesla is clearly not just a car company), and if a search company can go into self-driving and, temporarily and unfortunately, into humanoid robots (again, Google is more than just a search company), then maybe some of these other large organizations could do the same, and we could see some surprises.

One other lesson from the history of technological progress over the centuries, for incremental and disruptive technologies alike, is that there are lots of surprises. There is no single hard-and-fast rule that never gets broken; there are many, many exceptions. What we see are general probabilities, general patterns, but often there are unexpected and strange situations. So one thing we should anticipate is that there will be interesting twists, turns, and surprises in the humanoid robotics and artificial intelligence space over the next 10 to 15 years. We don’t know what those will be. We can’t know. But we can absolutely expect that something will surprise us.

David: Innovation necessarily proceeds through trial and error, with competing teams aiming for the same outcome through different means. That is what made the electronics industry so incredibly successful over the past 50 years. And the acceleration of innovation brought by AI is going to amplify this uncertainty further.

Stanford University publishes the annual AI Index, and in their 2019 report they included a chart of AI compute, once again on a logarithmic axis – the y-axis is in orders of magnitude. They interpolated it with two lines, basically saying there was a disruption in the underlying mechanisms: once deep learning and then transformer architectures were introduced, we entered a different era. And I think they are wrong. They are interpreting the data through a limited lens, similar to how an exponential can be approximated by a series of linear trends: this part was linear, then a disruption came, then another linear trend, then another disruption.

What we are actually seeing is, I believe, one of the most important insights everyone should grasp as they think about today’s AI age: the doubling times of AI infrastructure are shrinking. During the era of non-AI compute, the power of our computers doubled every two years, and that held for a long time. Now we are talking about an order-of-magnitude – a tenfold – increase in little more than a year. And this is not going to stay constant. Year after year, we will see a tenfold increase happening in shorter and shorter amounts of time.

Someone who understands this very well is Jensen Huang, who presents these charts in every keynote, and I have not seen other people look at them in this light and highlight how mind-blowing they are. If NVIDIA had followed the exponential curve of Moore’s Law over the past 10 years, they would have increased their performance less than a thousandfold. Instead, they increased it 10 million times. That is what I call the paradigm of jolting technologies: an era in which shrinking doubling times characterize the capabilities of our systems.
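The back-of-the-envelope arithmetic behind this comparison can be sketched in a few lines. The two-year Moore’s Law doubling time and the 10-million-fold figure come from the discussion above; the helper function names are illustrative, not from any particular library:

```python
import math

def growth_factor(years, doubling_time_years):
    """Total capability multiple after `years`, doubling every `doubling_time_years`."""
    return 2 ** (years / doubling_time_years)

def implied_doubling_time(years, total_factor):
    """Doubling time (in years) implied by a `total_factor` gain over `years`."""
    return years / math.log2(total_factor)

# Moore's Law pace (doubling roughly every 2 years) over a decade:
# 2**5 = 32x, well under a thousandfold.
moore_decade = growth_factor(10, 2.0)

# A 10-million-fold gain over the same decade implies doubling roughly
# every 5 months instead of every 2 years.
jolt_doubling = implied_doubling_time(10, 1e7)

print(round(moore_decade), round(jolt_doubling * 12, 1))  # 32 5.2
```

The point of the sketch is that a fixed doubling time gives an ordinary exponential, while the “jolting” claim is that the doubling time itself keeps shrinking, so the implied growth rate accelerates over time.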

Now, another question for you. In your report you make a key assumption – and given how fast things are moving, assumptions may need to be re-evaluated: so long as humanoid robots are not sentient, they will not have jobs. During our conversation today, you also repeated that they will not be agentic, will not set their own goals, will not have their own aspirations, will not seek to find out what is going on in the universe unless we tell them to. It may be a necessary assumption for your report to remain believable to your readers, since it already asks a lot of people to suspend their disbelief in order to read to the end. But what if you are wrong in assuming that this will hold true for a long time?

There are a lot of people who believe that whatever we ascribe to ChatGPT or Midjourney or others shouldn’t be called creativity – that only humans truly exhibit creativity. Those people will certainly say it is a despicable joke to even think about robots or AI ever becoming self-aware, sentient, or goal-seeking. But what if they are wrong? And if they are wrong, is that going to happen in a thousand years, a hundred years, or maybe in ten?

Adam: Well, my personal position on this, for a very long time, has been that we ought to expect reasoning and general intelligence to be accompanied by the features of sentience we are familiar with: consciousness, self-awareness, metacognition (the ability to think about your own thinking), theory of mind, and the complex interchange between short- and long-term memory for planning and identity – things that are essential for being an agent.

I’ve argued for a long time in my own work, and certainly thought for a very long time, that biology offers an existence proof that this is not fundamentally a huge leap from narrowly intelligent systems. Here’s what I mean. The difference between animal brains is not one of extraordinarily complex architectural differences or extraordinarily subtle algorithmic differences. The primary difference we see when we compare animal brains is scale. That’s the main thing we see.

So the main difference between the brain of a mouse and a cat and a dog and a chimpanzee and a human being is scale. It’s not that there are uniquely complex structures in the human brain that are not present in other mammals. Another very intriguing aspect of animal brains is that subjective consciousness came before reasoning and symbolic, abstract, generally intelligent capability.

In other words, the fact that natural selection discovered and then protected, preserved, and perpetuated consciousness before it discovered and perpetuated intelligence – general intelligence in the way we think of it, as human-level intelligence – is very revealing. And so for these and other reasons, my suspicion is that we are not all that far away from instantiating general intelligence in artificial systems.

I suspect it is not a project with deep mysteries that will take many decades to unravel and crack the code on. I suspect the key is the combination of scale – in other words, compute resources and data available to train on – and especially, David, as you mentioned, real-world, physical, sensory data. Chimpanzees are really quite intelligent; even our dogs and cats are extraordinarily capable. They are not reasoning the way human beings do, but they have an extraordinary intelligence. And none of it is acquired by training on language. To the extent that it is acquired after birth at all, it is acquired through training on sensory data – you could call it video and audio data – by sensing the three-dimensional environment around us, not by training on language tokens.

So my strong suspicion is that a relatively small number of ingredients have to come together, combined with scale, and that we will, as a civilization, be surprised in retrospect by how quickly, and perhaps even how easily, artificial general intelligence emerged. I think we will look back in 15 or 20 years and think: huh, that came faster and was easier than we expected. That has been my mindset about artificial general intelligence and agentic intelligence for at least a decade, perhaps longer.

Now, there’s another point you mentioned which I think is worth touching on: what about all of these humanoid robots that are going to be disruptive, economically and otherwise, because they will perform tasks that until now only human beings have been able to do? There, I think, is a subtlety we can examine. I think it is possible for humanoid robots, like self-driving vehicles, to become intelligent enough to be very useful and extremely productive without necessarily being sentient, sapient, or agentic. It is no longer possible to say that artificial intelligence can’t do anything useful without being sentient. Five years ago, you might have been able to argue that. Now you can’t.

So I think we’re on fairly firm ground claiming that humanoid robots will be able to be very productive without being fully sentient. That means we could choose to manufacture and deploy narrowly intelligent robots that would be very useful and very productive, but that would not need to be sentient and agentic. That is a choice – a social choice that we could make.

What I think deserves very, very careful consideration in advance is this: once we are capable of creating agentic, sentient artificial general intelligence, where and when do we instantiate it, and under what circumstances? If it were possible to create a robot that was not self-aware and not agentic – one that, one might argue, does not need rights and obligations protected through legal and ethical institutions and ideas like personhood – then I think it could be a terrible crime for humanity not to employ only those narrowly intelligent robots, and to instead create and even attempt to deploy robots that are sentient and that do have a legitimate moral and ethical claim to the rights of personhood.

So there are really two different questions here. One: what will be possible? Two: what will we choose to do with those possibilities? I certainly imagine that we as humanity will ultimately make the wise and responsible decision not to enslave a new kind of sentient being. That would be a terrible, terrible mistake. It took us a very, very long time as a civilization, through our own very difficult history, to realize that slavery is a moral and ethical abomination. I hope we have learned that dreadful lesson well enough that we will never make that mistake with artificially sentient persons.

But having said that, I suspect we will be able to choose to make humanoid robots that are not self-aware, are not sentient, don’t have agency, and therefore do not need the rights and privileges and protections of personhood. And that we will be able to manufacture billions upon billions of those machines, and that they will be able to do a great deal of productive work and that we will be able to glean the benefits that we describe in our piece from that incredible explosion of productivity built upon machine labor.

But I can’t stress this strongly enough: we have to be very, very, very careful and absolutely sure that we are not creating beings that have any agency, consciousness, or sentience – absolutely sure that they cannot suffer. This, I believe, will be one of the great moral obligations and tests of our time: to act wisely as a civilization as we bring an entirely new kind of intelligent being into the universe. It will be one of the most important things we ever do, and so we must do it very, very carefully.

David: The people who look at the disruption cars brought to horse-drawn carriages, and conclude after the fact that it wasn’t a big problem and that every person whose livelihood depended on horse-drawn carriages surely found another way of living, make several mistakes – naively and, I’m sure, in good faith. One mistake is that after the fact it is very easy to discount the suffering, the chaos, and the anxiety such a transformation implied. The other – and I love your title – is that we shouldn’t look at what happened to the humans riding in the horse-drawn carriage; we should look at what happened to the horses. If the transformation had been such that the horses ended up okay, we would now have cars with a horse in the driver’s seat. But no, there is absolutely no horse in the driver’s seat. We ate the horses, literally. It may be horrifying to our vegan or vegetarian viewers, and most Americans today don’t eat horses, but I guarantee that in New York, in the first one or two decades of the twentieth century, a lot of people ate horse meat.

We also tend to believe that the social contract is a kind of natural law – unchangeable, God-given – and that is a laughable assumption, because the social contract is an agreement that can, and in this case very likely must, be renegotiated. There is already a percentage of people who feel uncomfortable using smartphones, or who use maybe 10% of their potential, or even less – only answering and making phone calls, 1% of a smartphone’s capabilities – such that they are digitally illiterate in practice. And just as somebody who cannot read or write is unemployable today, someone who is completely digitally illiterate is very close to being unemployable.

The percentage of people who will not be able to adapt to this new world is going to be very, very large – not because they won’t try, but because it lies beyond their adaptability, which has very clear cognitive and biological limits for all of us. So I’m an optimist as well. Optimism is my starting point, and then I justify it with what I discover through that biased, exploratory lens. However, I also want to understand the dangers, and the foundations of potential conflict, both within and between societies.

By the way, we originally scheduled this for an hour and we are already over. Thank you for your patience, and please let me know anytime you feel bored and want to end it. The secret sign is this – just do like this and I will know it’s time. So what is your feeling with respect to adaptability? Let’s pick three macro societies: the US, Europe, and China. Are they doing the right things so that by the time the disruption is clearly visible, they are not just taking their first steps but already have as much as possible in place to take care of whatever percentage of people will simply throw in the towel and say: listen, I did everything I could. Sorry. I have been driving my truck for 30 years. I’m 50 years old. If you tell me the solution is to become a web designer, I will just punch you in the face. That is my answer to your proposal. So how do you feel about these three areas of the world getting ready for the humanoid robot revolution?

Adam: Well, there’s a lot here to unpack. Let me start by saying that in the piece we wrote, we opened and closed by emphasizing that in the face of disruptions of this magnitude, we ought to focus on protecting people, not the incumbent industries, interests, or the status quo. That is the overarching imperative we all share. It’s not just the responsibility of governments, industries, policymakers, or decision makers. We are all in this together, and we all share the responsibility to navigate this tumultuous time as best we can with that key principle in mind: we need to protect people first, and everything else must be subsidiary to that overarching goal.

The second thing I think is very important is that nobody knows what the right steps are. Nobody knows the right answers – what the right thing to do is, what the first step should be, what the right moves are – for any disruption, but especially for one of this magnitude, where so much is at stake and it will be so impactful. It’s what RethinkX calls a phase-change disruption, which means the systems built around this sector – in this case labor, and everything connected to labor – will be affected. And the crazy thing is that labor affects absolutely everything. So this is going to be transformative for human civilization as a whole.

So in the face of a challenge like this, nobody has the right answers yet. Nobody has the right answers at the beginning. I love, David, the title of this series of yours, right? Searching for the question. That is the correct way to think. It’s not that we have to find all the right answers at the start, it’s much more important to begin by asking the right questions. That is what we must do. And part of what we wanted to emphasize in the piece that we wrote is to frame the challenge here so that humanity can begin asking the right questions and exploring possibilities.

Another guiding principle we always recommend is to embrace experimentation and de-risk failure – so that we are not punishing failure. We must gain useful knowledge and experience as quickly as possible, and we cannot do that without experiments, nor if we heavily punish failure. This is where a culture of entrepreneurialism, of the kind that makes places like Silicon Valley so famous, is very useful. In circumstances of large uncertainty, rapid change, and very high stakes, it is very valuable to experiment quickly and learn from one’s own mistakes and successes, and from those of others – in this case, around the world. So, very quickly, societies all over the world need to begin experimenting, without punishing failure when things don’t work out, to discover which steps and choices are good and which are less optimal in how we adapt to this challenge.

The last thing I’ll say, very quickly, is why I am optimistic that it is possible to navigate this transformation in a beneficial way. You mentioned the United States, Europe, and China. Here’s a basis for my optimism. If you look at the United States and Europe – two of the three you mentioned – and increasingly also China, a far larger fraction of the population does not work today than, say, 300 years ago. Back then, people worked from the age of five or six until the day they died. Almost everybody worked all the time, with one day of rest, and on most days you worked 12 to 16 hours.

We did an enormous amount of toil as a species; it took the Industrial Revolution, and then truly the modern era – electricity, electric motors, steam engines, appliances, machines – to alleviate some of that burden. Now think about the United States and Europe. Young people don’t work at all until they are 15, 16, 18 years old – or 22 or 23 if they go to university. Huge numbers of young people don’t work. And we have retirement now, an idea that only emerged within the last 200 years or so; before that, almost nobody had the privilege of retiring at all. A very large number of people in the United States and Europe are retired. They are elderly; they don’t work. They are still members of society, still contributing, but they no longer directly engage in the social contract of exchanging labor for financial remuneration.

And then on top of that, some fraction of all adults – it varies, and you can measure it in different ways – are not employed. Of the working-age adults who can work, not all do in the United States and Europe. So a fairly large fraction of society is not currently engaged in the production of goods and services. And yet the material standard – the productivity – of those societies at large is higher than ever, with a smaller fraction of the population working than ever.

This is not just extraordinary; it gives me hope. It makes me optimistic that we can continue that trend all the way to zero, where no humans are working – where everybody is effectively still in their childhood or, if you prefer the other end of the timeline, in their retirement. I prefer to think of it as childhood. And what do we do in childhood, in the best circumstances? We learn, we play, we focus on becoming valuable members of our families and communities. We build friendships. We learn to love people, to care about things, to take on responsibilities and obligations. That is what childhood is about. And childhood, thank goodness, in the modern era is not about toiling in a sweatshop, whether the work is skilled or unskilled. We have come to see a childhood as stolen if a child has to do that.

I think, ideally, we will come to see all of human life in that same way: that it is morally wrong for human beings to have to engage in the social contract of exchanging their labor for remuneration, in order to take that money into the marketplace and acquire some claim against the goods and services produced by their society. We need a new mechanism for giving everybody a claim to a share of that productivity.

One final point: just as we have become more and more productive throughout human history despite a shrinking fraction of our populations working, productivity will not decrease in the future. There is still a widespread misconception among the public that this is zero-sum – I think you used that phrase earlier, David. The misconception is that if a robot takes a job, that job is gone for a human being and somehow we won’t be as productive. But of course this is mistaken. All the same goods and services – in fact, many more, probably orders of magnitude more – will be produced. More material prosperity will be available when robots are doing all of that production than today, when virtually all of it is done by humans.

And so we have this combination: a very strong precedent, certainly in the United States and Europe, for a substantial fraction of the population no longer working while society remains prosperous, materially and otherwise; and, looking into the future, the prospect of becoming much more materially prosperous as the fraction of human beings who need to work diminishes to zero. The real challenge, in my mind, is how we renegotiate that contract so that the claims to the productive base – to that prosperity – are distributed equitably. If you are a citizen, a member of your society, what gives you a claim to some fraction of its enormous future production? That, I think, is a challenge for us to figure out. But I think we can.

David: A couple of analogies. I am an avid user of AI tools. As a matter of fact, I am in the process of writing a book called AI Powered Knowledge, which is itself being written together with the AIs. And in my daily intensive use of these tools, I literally feel like the conductor of an orchestra: the different tools have different flavors and contribute to the symphony that is the particular output we are creating. Even though the individual sounds are not produced by my hands, no one would claim that the conductor of an orchestra is not a crucial component, without whom the piece of music could not be executed.

A similar, probably slightly adapted, metaphor could apply to robots as well. We will rapidly see a one-to-one ratio of humans to robots on the planet, but production of robots will not stop. We will get to 10 robots per human. And then it will be easy to feel that we will just look at what needs to be done and rather than having to do it ourselves, we will instruct our little robotic teams or large robotic teams to go and execute those tasks.

And the other metaphor I want to offer – or rather, an analogy – is that of breathing. When a child is born, you don’t start a tally of how many breaths she takes so that when she turns 18 you can present her with a loan to be repaid to society for those millions of breaths. Not only because air is abundant and it is not even worth measuring how many molecules of oxygen were used in the process of breathing, but maybe even more importantly, because she is part of an ecosystem, and she participates in that ecosystem by the act of existing.

So the act of existing as a human is its own reward, philosophically: as children we don’t ask ourselves whether we should exist; we are happy to exist and to explore what existing means. And when someone does ask that question, concludes that they shouldn’t, and commits suicide, it is universally seen as something bad. In the same way, in a future society the current equation – that your value equals your economic output, so that when you lose your job and your output goes to zero, your value to society goes to zero – will be seen as barbaric, cruel, and inhumane.

And the various categories of people you mentioned who don’t work include so many valuable people. My best example is grandparents. Grandparents are often retired, but would anyone sane conclude that they are worthless? That they serve no purpose? Maybe not economically – though sometimes they do: if you can leave your child with a grandparent rather than paying a babysitter, you save money. So even economically. But from the point of view of a well-rounded, well-balanced, dignified human society, grandparents have a huge role to play. And whatever that role is going to be in the future for every category of humans, it is really up to us to search for it.

Now, you said that experimentation and learning from mistakes is crucial. Doesn’t that mean it is a significant mistake for America to isolate itself from China rather than intensify the exchange of learning – scientifically, economically, entrepreneurially? Positioning itself in an antagonistic, win-lose manner with China, especially at this crucial moment, is, in my opinion, a very, very big mistake.

Adam: I mean, I hesitate to step out of my domain of expertise and wade into geopolitics and foreign affairs. And I think the only thing I will say about the U.S. and China relationship is that for all of its complexity…

David: Let me help you. I hope no one comes and says, can you please not translate the next report in Mandarin?

Adam: Exactly, exactly. And that’s a great example, right? The sharing of knowledge and the facilitation of collaboration where it’s healthy. Limited competition can also be very healthy – we know that; it has been a core part of successful market economies for a very long time. So competition has its role, but within bounds, within limits, well-regulated and well-governed, where everyone knows the rules of the game. Competition doesn’t work well when it’s a free-for-all, a war with no rules. It works when there is a referee on the field, everybody knows the rules, and the playing field is level.

But in general, my overarching view is that the opportunities to create mutual prosperity – through direct collaboration, through healthy competition and trade, and through careful, judicious experimentation, open-mindedness, and a willingness to learn – are very likely to provide good, successful, beneficial paths forward through this very challenging time. But I don’t think there’s anything particularly insightful or new in that thinking. These are pretty obvious, common-sense notions – time-honored classic wisdom based on simple principles. Maybe it’s worth reminding ourselves of their value, but I’m not offering anything new here.

I do think there are major mistakes we could avoid. An arms race toward very dangerous technologies without guidelines in place could be a very bad situation. So I am concerned, for example, about the race to artificial superintelligence – to general intelligence first, and then quickly, if not immediately, on to superintelligence. I worry about that becoming an arms race in which governments and large corporations are all actors. Some of those dynamics are certainly concerning from a game-theoretic standpoint.

But I remain optimistic – I remain hopeful – that we can navigate our way through this. And again, I would point to past successes. It’s very easy to get caught up in this moment and fail to recognize the triumphs we’ve already achieved, the challenges we’ve already managed. Yes, there have been horrors throughout human history: dreadful wars, terrible conflicts, enormous disasters and catastrophes. All true. Nevertheless, we have so far managed not to end human civilization with nuclear annihilation – and nuclear weapons were an incredibly dangerous technology for humanity to get its hands on.

I’m not saying we are safe or can be complacent about that old technology today, by any means. But our success there gives me hope and consolation; in a Bayesian sense, it weights my priors toward thinking we can do this again, that we can replicate that same kind of success. So I remain fundamentally optimistic. I certainly wouldn’t have written my book “Brighter,” about the future of the environment, without a sort of fundamental optimism. But that optimism, I feel, is grounded in the data of history – the data around past technology, and increasingly the evidence around the technology transforming our world today. The potential is enormous.

Now, I have argued that there are some very bad moves we could make – a number of foolish mistakes. I’ll give you one example, and then, David, maybe we can close with it and pick up this conversation again sometime soon. I’m running short on time, but I’d love to continue, because there’s a bunch of things we didn’t get to. One mistake that I believe would be an enormous error would be to commit to the policy recommendation of degrowth.

That recommendation comes out of my own scientific discipline and my domain of activism and advocacy: environmentalism. I am an environmental scientist. I am an environmentalist. I’ve been an environmental advocate and activist my entire adult life.

David: Were you ever in favor of degrowth and then you changed your mind?

Adam: There was a brief time at the beginning of my graduate school training, more than 15 years ago now, when degrowth was absolutely the orthodoxy – the doctrine we were being taught. I was never fully comfortable with it. But at that time I was the student, and these eminent teachers – my professors and instructors, whom I respected, and research scientists – were saying it almost unanimously, with very few exceptions. And I thought: well, they must know something I don’t. But I was never fully convinced.

And then quite quickly, as I moved through my graduate program and began tackling this, I became very firm in my beliefs. As it happens, I’ve had the training: I’m very familiar with the scientific literature that informs the degrowth orthodoxy and its ideology – the ecological economics literature, the environmental justice literature, and so forth. I know the community, the thinking, the reasoning. And because of that, I understand where its errors lie and what false assumptions it makes.

Now, why degrowth would be a mistake is a much longer discussion than we have time for here. I have published on this, and I’ve even put out some videos for viewers who might be interested in looking into it further. But for this conversation, let me just say this: one of the most fundamental enabling conditions for solving any problem, for meeting any challenge, is prosperity. No matter what problem you’re facing, no matter what challenge you’re struggling with, if you have prosperity — material, social, economic, financial, and personal prosperity, your health and wellbeing, mental and physical — then every problem and challenge you face will be easier to solve than if you don’t have that prosperity.

The flip side of that fact is that problems which are solvable with a given amount of prosperity can become impossible to solve if you lose it. If you become physically unwell, mentally unwell, or financially insecure in your personal life, small problems that you were solving, or could solve, can suddenly become overwhelming. We all understand this from our personal lives. The same is true at the level of our entire society and our entire global civilization.

And what that means is that it would be a disastrous mistake to volunteer to reduce our prosperity, which is what degrowth does. Don’t let anybody fool you: we cannot have less economic activity and productivity and more material prosperity at the same time. That doesn’t work on any level, conceptually or physically. And so we are entering an incredibly challenging, tumultuous period of human history in which so much is changing. We talked about humanoid robotics, but we could have similar conversations about the transformations in energy, food, and transportation, and all of them combine. How we adapt and how we respond is an enormous challenge.

And at the same time, we have huge actual problems. Climate change is a huge problem we have to face. Geopolitical conflict is a huge problem we have to resolve. We have many, many problems to deal with, on top of the challenge of adapting to these coming technological transformations. What do we need to be in the best position to handle all of that? As much prosperity as possible, obviously.

So if you want to grab a lever that gives you some control, some influence over the system, you want to pull that lever in the direction of more prosperity. A big mistake would be to push it in the wrong direction, toward less prosperity. That is what degrowth would do, and it would make everything much, much harder: solving big problems like climate change and the other environmental problems I discuss in my book, as well as social, economic, and geopolitical problems, and the challenge of simply adapting to the dramatic changes coming in AI and robotics. The more prosperity we have, the better we will be able to navigate those changes and find good solutions and a positive path forward.

And what does that mean? We need to build more energy capacity, more transportation capacity, more food capacity, and more labor capacity in the form of humanoid robots and AI. Those capacities translate directly into prosperity, economic and otherwise. That, I believe, is how we best position ourselves to tackle the enormous challenges we face: not by pulling the lever the other way, not by turning around, retreating, or running away from these challenges, which is what degrowth would do.

So as optimistic and hopeful as I am, based on current information and precedent from human history, there are mistakes we could make, terrible ones. Choosing to steer our societies toward degrowth would be a big one. There are others we can talk about in future conversations. I’m very optimistic, but we have to make good choices. We can’t just kick back, put our feet up, relax, and say everything’s going to be fine. We have to take responsibility; we can’t be complacent. But there are good choices, good options, and solutions to be found. I’m very confident and hopeful of that, and we just need to discover them. And how do we do that? Going back to the title of your series, David: we have to ask the correct questions. We have to search for and find the right questions to ask. That is how we start on this amazing journey.

David: Adam, thank you very much. This was really amazing, and I’m glad we have so much more to explore. I will definitely invite you back. In the meantime, I invite people to check out your current book, “Brighter”. Your website is adamDorr.com, generated by a single ChatGPT prompt, as you proudly note there. They can also follow you on X at adam_Dorr.