Ten Years Later: Releasing “Something New” to the Commons

In 2015, I finished writing a book about artificial intelligence that almost no one asked for.

At the time, AI was still primarily a research story. Deep learning was advancing, but foundation models did not exist. There were no systems embedded in everyday workflows, no AI‑generated text saturating the internet, no sustained policy debates about alignment or compute governance. Most conversations defaulted to robots, automation, and a vaguely distant future.

I wrote Something New: AIs and Us because I believed that framing was inadequate. The book’s central argument was not about timelines or machines outperforming humans at specific tasks. It was about scale.

Artificial intelligence, I argued, should not be understood at the level of an individual mind, but at the level of civilization. Technology does not merely support humanity. It shapes what humanity is. If AI crossed certain thresholds, it would not just automate tasks; it would reconfigure social coordination, knowledge production, and agency itself.

That framing has aged better than I expected, not because any particular prediction came true, but because the underlying question turned out to be the right one.

What Changed

Over the past decade, AI moved from hypothetical to infrastructural. Models are no longer curiosities or lab‑bound demonstrations. They are embedded in economic workflows, creative practices, governance processes, and epistemic pipelines.

As a result, the debate shifted. The central question is no longer “Can we build this?” but “What does this do to power, incentives, legitimacy, and trust?”

The book anticipated the direction of that change, though not its speed or texture. What it could not anticipate was what it feels like to live through an intelligence transition that arrives not as a single rupture, but as a rolling transformation, unevenly distributed across institutions, regions, and social strata.

Releasing to the Commons

In light of that shift, I have reacquired the rights to the book from the original publisher and released it under a Creative Commons Attribution 4.0 license (CC BY 4.0).

The complete text is now freely available online in four languages: English, Italian, Spanish, and Hungarian. Anyone can read it, share it, translate it further, or build on it, with attribution.

Read it free: somethingnew.davidorban.com

For readers who prefer physical or e‑reader formats, paperback and Kindle editions remain available on Amazon, with links provided on the site. The distinction is deliberate. The ideas are open. The formats are optional. You pay for convenience.

What the Book Does Not Cover

Looking back, there are important things Something New does not address. These omissions are not accidental, and they are worth naming explicitly.

Alignment as an operational problem. The book assumes that sufficiently advanced intelligences would recognize the value of cooperation, pluralism, and shared goals. A decade of observing misaligned incentives in human institutions, amplified by algorithmic systems, makes it clear that this assumption requires far more rigorous treatment. Alignment is not a philosophical preference. It is an engineering, economic, and institutional problem.

Political economy and power. The book largely brackets capital concentration, platform dynamics, and geopolitical competition. Today, these are central to any serious discussion of AI, not because the technology changed direction, but because it scaled fast enough to collide with real institutions and entrenched interests.

Empirical grounding. In 2015, scaling laws, emergent capabilities, and deployment‑driven feedback loops were speculative. Today, they are measurable. That shift changes the nature of responsibility, governance, and urgency in ways that were difficult to justify rigorously at the time.

I do not see these gaps as failures. I see them as markers of an intellectual phase transition. Something New belongs to a first wave of synthesis, before AI became ambient. Its value today is as a time‑stamped baseline: a record of what could be reasoned from first principles before the fog lifted and the terrain became visible.

What Comes Next

If I were writing this book today, it would not be a revised edition. It would be a companion work, explicitly contrasting early intuitions with post‑2022 realities.

Such a work would treat alignment as institutional design rather than a property of models alone. It would examine power as an emergent consequence of deployment and incentives, not intent. And it would take seriously the fact that intelligence is now being scaled and distributed through organizations long before it is unified or fully understood.

For now, releasing the original book to the commons felt like the right move. Knowledge compounds best when it is not fenced in. Ten years on, the conversation this book was trying to start is no longer marginal. It is unavoidable.

The question is not whether something new is happening. It is whether we are building the institutions, frameworks, and clarity of thought required to meet it.

I would rather explore that question in the open, together.