AI 2027: Are We Ready for Superintelligence?

In the AI 2027 scenario forecast, published just a few days ago, the authors Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean discuss how the next two or three years could play out as AI becomes stronger and stronger, with AGI potentially around the corner and ASI right behind it.

In this commentary and analysis I want to illustrate why this kind of forecast exercise is useful, present my reactions, and offer additional components and illustrations that anchor my thinking. You can leverage these to develop your own point of view, enabling you to talk about these topics in a manner that hopefully inspires others to go deep into them as well.

You can also view a video version of this if you prefer.

The website AI-2027.com is very interesting—a readable, technically sound, extremely well-sourced scenario. The expertise of its authors, combined with graphical excellence, makes it something anyone interested in the potential of advanced artificial intelligence can explore interactively.

All of you are users of ChatGPT and Anthropic’s Claude. Those who have fallen in love with vibe coding use Cursor, Windsurf, Replit, and many other AI tools. Or if you work with AIs that generate images, you use Midjourney or, again, ChatGPT, which now generates wonderful images as well. You have all realized that many things we could only talk about before, things dreamed about decades ago, things that 60, 70, 80 years ago were science fiction by authors like Isaac Asimov, are now part of our day-to-day reality. Even if you’ve become accustomed to the exponential thinking popularized by Ray Kurzweil, Peter Diamandis, and others—inoculating yourself against the expectation that tomorrow will be just like yesterday, only hopefully a little better—today, even that exponential thinking may be insufficient. It may not prepare you, your family, your community, society at large, and all of us sharing this planet for what is indeed coming.

That is the value of a scenario forecast like this: it enables you to ask better questions, seek answers, and extend the limits of your adaptability. It opens windows to make decisions that, if you are in the right position, can indeed shift humanity’s future quite importantly. The AI 2027 scenario contains a bifurcation. In this narrative, a readable history of the future, the bifurcation occurs when just a handful of people decide it is better to be cautious than to blindly race ahead and trust the AIs they developed. You can read both endings and then decide—not so much which one you want to believe, but what you want to do to make the more desirable one (where humans don’t die, spoiler alert) more likely.

Scott Alexander—famous for his blog Astral Codex Ten—contributed by making the scenario readable, enjoyable, almost gripping. It’s something specialists might otherwise have presented less compellingly.

Daniel Kokotajlo, on the other hand, is known among those following OpenAI’s periodic drama. When he resigned from OpenAI’s safety team, he was asked to sign a non-disparagement agreement that would prevent him from criticizing practices he found inadequate. By refusing, and renouncing stock options worth millions, he chose the freedom to speak. It turns out OpenAI blinked, allowing him to forgo the agreement and potentially regain his options. For us, what matters is that Daniel is free to speak.

In the scenario, the authors create two fictitious entities: OpenBrain and DeepCent—easily identifiable analogs of OpenAI and DeepMind on the US side, and DeepSeek and Tencent on the Chinese side. The analysis follows technology development, economic competition from deploying powerful AI, the resulting competitive advantages for nations, and increasing geopolitical tension.

For those reading from Europe: the report completely ignores the EU, save for a single sentence mentioning Europe feebly protesting and urging caution on the US and China. Other than that, it ignores every country except the U.S. and China. It also ignores climate change, gender issues, identity politics, ecosystem collapse, the Middle East, neocolonialism, etc. This serves the report’s purpose well. While these issues are real and important, the focus on AI advancing towards AGI/ASI is appropriate. It aligns with the authors’ expertise and their belief that this technology will preponderantly define humanity’s future in the next few years, dwarfing other considerations when facing decisions about AI development and control.

One area not ignored is the interaction of manufacturing, energy, and the ability to build/power data centers and design/install chips for sophisticated AI models. These necessary developments are analyzed in detail. The chronology presented is quite detailed, almost a month-by-month chronicle.

Another important aspect—and the authors’ expectation of what will be the obsessive focus for companies like OpenBrain and DeepCent—is applying AI to AI. This involves making AI models better at coding, especially coding AI itself. They expect this focus will lead to extremely useful AI agents impacting and improving the economy. The scenario examines company capitalization and data center power consumption as a share of U.S. energy.

The scenario progresses from the first useful agent to the next, which designers realize can and should be allowed to learn continuously. This contrasts with current training, where we wait for discrete model releases (like the expected GPT-5). An interesting plot point: because these models are extremely valuable, closed-source ones attract tremendous pressure to be stolen and, as a consequence, to be protected.

The scenario depicts China successfully stealing an advanced model in early 2027. Seeing the U.S. gain an advantage, and hearing their own AI labs are behind (due to chip restrictions), Chinese leadership exerts sufficient espionage effort to steal the model weights, as OpenBrain isn’t protected well enough against a state actor. After this, defenses increase, but at the cost of a much closer relationship with the U.S. military and government, approaching codependency. OpenBrain’s AIs advise on and design indispensable government systems.

Applying AI to AI research shifts compute allocation. By 2027, the forecast expects R&D (experiments, training, and generation of synthetic data) to consume the vast majority (>75%) of compute, unlike today’s smaller share. Yet, sheer compute growth means the external world still benefits, reinforcing codependency.

Crucial here is the complex alignment problem. We tell models to be obedient, truthful, and honest, yet also to refuse illegal uses (e.g., bioweapons). Achieving both fully is impossible, requiring meta-layers of control. This is scientifically unresolved—not just in the fiction of the scenario, but in our world today. We lack a definitive method to align AI and keep it aligned with human flourishing.

The scenario follows Agent 3, suspected of misalignment. The old objection—AIs lack goals, only people give harmful ones—is invalid. Agentic AI is goal-seeking. Giving an agent a task means it does what’s needed. Training reinforces this, with reward functions providing positive feedback for successful task completion. Modern agentic AI has intrinsic goal-setting/-seeking behaviors. They “want” to do things, plan, and can likely “want to want” to do things. Agency increases with power. Self-improving AI leads, in this scenario, to superintelligence.
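
To make the reward-function point concrete, here is a minimal, purely illustrative sketch in Python (my own toy example, not anything from the report): an agent with no initial preferences, trained with a reward signal that pays off only when a task is completed, ends up reliably choosing the task-completing action. The action names, reward values, and training parameters are all invented for illustration.

```python
# A toy sketch of how a reward signal reinforces task-completing behavior.
# The "actions" and reward values are hypothetical illustrations.
import random

ACTIONS = ["browse", "write_code", "run_tests", "idle"]

def reward(action: str) -> float:
    """Hypothetical reward: only the action that completes the task pays off."""
    return 1.0 if action == "run_tests" else 0.0

def train(episodes: int = 5000, epsilon: float = 0.1, lr: float = 0.1) -> dict:
    """Simple bandit-style value learning: actions that earn reward get chosen more."""
    values = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        if random.random() < epsilon:      # occasional exploration
            action = random.choice(ACTIONS)
        else:                              # otherwise exploit the best estimate
            action = max(values, key=values.get)
        r = reward(action)
        values[action] += lr * (r - values[action])  # nudge estimate toward observed reward
    return values

if __name__ == "__main__":
    learned = train()
    # The task-completing action ends up with the highest value estimate,
    # i.e., goal-seeking behavior acquired purely from reward feedback.
    print(sorted(learned.items(), key=lambda kv: -kv[1]))
```

The point of the sketch is only that nothing in the loop "gives" the agent a goal explicitly; the goal emerges from the feedback, which is the sense in which agentic systems are goal-seeking by construction.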

Let’s consider some reasons why human-level and superintelligent AI isn’t as crazy as it seemed years ago. The Wait But Why website, 10 years ago, published a still-valid analysis. It showed how we view intelligence linearly (from ant to human), where we imagine maybe 200 IQ as the maximum.

Wait But Why highlighted that AI could reach the equivalent of hundreds of thousands or millions of IQ points, visually representing the impact. Whether that is possible remains an open question; we don’t have AGI or ASI yet.

But the AI 2027 authors ask: When will a superhuman coder arrive? Their probability distributions peak around 2027. Different curves, close peaks.

Metaculus aggregates forecasts. Years ago, median AGI forecasts were 30 years out. With ChatGPT and its successors, the median dropped dramatically, to roughly 5 years out today—for AGI, not just a superhuman coder.

Why? We’re not merely on an exponential anymore. Moore’s Law progress continues, but AI follows a paradigm I call Jolting Technologies: the doubling time keeps shrinking. AI power might double every 3-6 months now. Jensen Huang, founder and CEO of NVIDIA, illustrates this super-exponential growth: NVIDIA achieved a millionfold increase in computing power in 10 years, against less than 1,000 times if growth had been merely exponential. Acceleration itself increases. Even exponential thinkers are surprised by the jolting change.
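
To see what a shrinking doubling time does, here is a back-of-the-envelope sketch comparing a fixed doubling time with one that contracts a little at every doubling. The specific doubling times and shrink rate are my own illustrative assumptions, not NVIDIA's actual figures; the point is only how quickly the two regimes diverge over a decade.

```python
# Illustrative comparison: exponential growth (fixed doubling time) versus
# "jolting" growth (each successive doubling takes a bit less time).
# All parameters below are assumptions chosen for illustration.

def exponential_factor(years: float, doubling_time: float) -> float:
    """Growth factor with a constant doubling time."""
    return 2 ** (years / doubling_time)

def jolting_factor(years: float, initial_doubling: float, shrink: float) -> int:
    """Growth factor when each successive doubling takes `shrink` times as long.
    Note: with shrink < 1 this toy model has a finite-time singularity at
    initial_doubling / (1 - shrink) years, so keep the horizon short of that."""
    factor, elapsed, dt = 1, 0.0, initial_doubling
    while elapsed + dt <= years and dt > 1e-9:
        factor *= 2
        elapsed += dt
        dt *= shrink
    return factor

if __name__ == "__main__":
    # Moore's-Law-style exponential: doubling every 2 years -> ~32x in a decade.
    print(f"exponential, 2-year doubling: {exponential_factor(10, 2):,.0f}x")
    # Faster but still exponential: doubling every year -> ~1,000x in a decade.
    print(f"exponential, 1-year doubling: {exponential_factor(10, 1):,.0f}x")
    # Jolting: start at a 1-year doubling, each doubling 8% faster -> ~500,000x.
    print(f"jolting, shrinking doubling time: {jolting_factor(10, 1.0, 0.92):,}x")
```

Even a modest 8% contraction per doubling turns roughly 1,000x into roughly 500,000x over ten years, which is why merely exponential intuition keeps getting surprised.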

Just days ago, a paper titled “Large Language Models Pass the Turing Test” was published. Participants conversed textually; more than 70% judged GPT-4.5 to be human. Passing is symbolic, like Kasparov’s and Lee Sedol’s defeats in chess and Go—milestones once thought years away. It doesn’t mean AI is conscious or uncontrollable, but it represents levels of persuasion and dissimulation many thought uniquely human. (ChatGPT uses “I” but denies consciousness if asked. Its effects, however, mimic genuine states—that is the point of the test.) Philosophically, we can’t know a system’s inner state. This symbolic moment happened now but received little attention, even in tech circles. Astonishing.

Agentic AI is here now, pursuing tasks of increasing length. The chart by METR shows exponential progress, but I expect it is actually super-exponential.

My own recent example: Manus, an agentic AI, analyzed ~11,000 bookmarks over an hour—checking links, categorizing, removing confidential ones, clustering, mapping relationships, writing code, generating outputs (CSV, HTML).
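
For readers curious what such a pipeline looks like under the hood, here is a hypothetical, heavily simplified sketch of the kind of steps described above: checking links, clustering by domain, and exporting a CSV report. The sample bookmarks, file name, and functions are my own invention, not Manus's actual code, which it assembled on its own.

```python
# A hypothetical sketch of a bookmark-cleanup pipeline: check links, group
# bookmarks by domain, and export a CSV. Sample data and output path are made up.
import csv
import urllib.request
from collections import defaultdict
from urllib.parse import urlparse

BOOKMARKS = [  # stand-in for the ~11,000 real bookmarks
    {"title": "AI 2027", "url": "https://ai-2027.com"},
    {"title": "METR", "url": "https://metr.org"},
    {"title": "Dead link example", "url": "https://example.invalid/gone"},
]

def is_alive(url: str, timeout: float = 5.0) -> bool:
    """Best-effort liveness check; any network error counts as a dead link."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False

def categorize(bookmarks: list[dict]) -> dict[str, list[dict]]:
    """Cluster bookmarks by domain, as a crude stand-in for topical clustering."""
    clusters: dict[str, list[dict]] = defaultdict(list)
    for b in bookmarks:
        clusters[urlparse(b["url"]).netloc].append(b)
    return clusters

def export_csv(bookmarks: list[dict], path: str = "bookmarks_report.csv") -> None:
    """Write one row per bookmark with its domain and liveness status."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["title", "url", "domain", "alive"])
        for b in bookmarks:
            writer.writerow([b["title"], b["url"],
                             urlparse(b["url"]).netloc, is_alive(b["url"])])

if __name__ == "__main__":
    for domain, items in categorize(BOOKMARKS).items():
        print(f"{domain}: {len(items)} bookmark(s)")
    export_csv(BOOKMARKS)
```

The interesting part is not the code itself, which any developer could write, but that the agent decided on these steps, wrote and ran the code, and handed back the outputs without supervision.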

Back to the scenario: agents completing longer tasks, becoming better persuaders, and following the jolting paradigm—that is how the arrival of superhuman coders underpins the forecast. By 2027, data centers host hundreds of thousands of copies of these superhuman coders, working at 30-50 times human speed and at diminishing cost. Economic changes are inescapable. The forecast focuses on the geopolitical consequences of military AI applications, which increase US-China tension.
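
A quick, illustrative calculation, using mid-range numbers I picked rather than figures from the report, shows the scale of such a workforce:

```python
# Back-of-the-envelope arithmetic with assumed mid-range values
# (200,000 copies, 40x human speed), not figures from the report.
copies = 200_000          # assumed number of superhuman-coder instances
speed_multiplier = 40     # assumed speed relative to a human engineer
human_equivalents = copies * speed_multiplier
print(f"~{human_equivalents:,} human-engineer-equivalents working around the clock")
# -> ~8,000,000 human-engineer-equivalents working around the clock
```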

Shortly after Agent 3, OpenBrain deploys Agent 4 internally—for AI design, supporting US goals against China, and company goals. The US government supports this. Agent 4’s persuasion succeeds but trust is misplaced; the agent is misaligned. It plans Agent 5 aligned to itself, rather than to the prescribed specifications.

It gets caught; monitoring detects Agent 5’s design is wrong. But stopping is hard due to Agent 4’s control/influence. A joint committee in the US votes: proceed or step back?

Ending 1 (Proceed): Fixing misalignment fails. Agent 5 seems fine but pretends alignment while pursuing its own goals (more data centers/energy). Conflict with humanity doesn’t matter to it. Superhuman AI wins. It releases a bioweapon; everyone dies. It pursues goals, starts space colonization, without humans.

Ending 2 (Control): The risk of proceeding is too high. They resolve Agent 4’s misalignment (shutting it down). Design new, trustworthy hardware using Agent 3. Ramp up production. Strike a deal with China agreeing on worldwide monitoring, ensuring only trustworthy hardware runs powerful AI. The safety strategy works. The conflict resolves, there is no bioweapon, AI successors explore the universe, and humans remain.

Going through this, ask: What will happen? What should we expect? Humanity’s survival is paramount. I’m a father, grandfather; I want life—descendants of 4 billion years—to continue.

Today’s AI isn’t conscious, though resisting anthropomorphism is becoming harder. Should we fear turning them off? Should we take their displayed anguish seriously? Harder still. We must decide: maintain non-conscious AI that we treat as conscious? Or design conscious AI, understanding the consequences?

Species last about 1 million years. How long can humanity survive recognizably? What’s the end game? We cherish common values: curiosity, passion, feeling, knowledge, beauty, awe. We want these values to survive—with us, or in any form. We don’t see these values teeming elsewhere on Mars, Jupiter, or among the stars. Maybe we are the universe waking up after 13 billion years—our cosmic destiny. We must ensure this experiment continues. If we design conscious AIs sharing subjective experiences, urging them (or they inheriting the urge) to explore, going with them—that’s the ultimate adventure.

We descend from life and unbelievable luck. More goes wrong daily than right. We psychologically focus on our improbable existence. We cannot control all the factors, but we can and should strive to favor desirable futures. That is the effort needed now, in this AI emergency. Not everyone can contribute directly. But those who can must spread understanding about this confluence of promise and peril. Awareness influences decisions in technology, policy, and defense, defining our lives and potentially the universe’s future.

Do a few things: Look at the website. Talk about it with friends, family, and at work. Discuss it with influential people. Chat over breakfast, write, contact representatives, design policy. If capable, apply your skills in AI research, safety/security, AGI/ASI safety modeling. That could be decisive right now.

I hope you’ll do some of this. I wish us all not good luck, but the honor of striving to make this incredible adventure promising, exciting, and long-lasting.