I had a conversation with James Hughes, a sociologist and executive director of the Institute for Ethics and Emerging Technologies (IEET), about the impact of advanced technologies such as AI, robotics, and human enhancement on society, politics, and the future of work.
We looked into the lessons we could learn from 20th-century worker struggles and how they might be applied to the 21st century, emphasizing the importance of adapting to the ever-changing technological landscape. We also discussed the potential consequences of AI and automation for employment, the role of unions, and the need for new forms of worker organization and empowerment.
James also touched upon the geopolitical implications of AI, the need for stronger transnational institutions to address existential risks, and the potential for rapid political change in countries like Russia and China. He discussed the collapse of utopian imagination and the necessity of envisioning positive futures that balance technological progress with freedom and equality.
We covered a wide range of topics, including the fragmentation of the transhumanist movement, the precautionary principle in technology regulation, the use of persuasive AI tools, demographic collapse, and the potential role of artificial wombs in the future.
Here is the edited transcript of our conversation.
David: If we fought battles, even bloody ones, in the 20th century to win rights for workers that we now take for granted, like weekends, paid holidays, and healthcare – what can we do to preserve or enhance those rights in the 21st century? To discuss this and many other exciting topics, I invited my friend James Hughes.
James: Delighted to be here, David.
David: I always like to start by asking my guests about their background and how they got to where they are today.
James: I was born in 1961 in Columbus, Ohio, coming of age in the 70s in the aftermath of the hippies and student left. Much of my intellectual life has been a dialogue between my interests in spirituality (Buddhism in particular), progressive politics, and futurism. When I first learned to read in third grade, I started reading science fiction and haven’t stopped since.
In college, I went to Sri Lanka to do development work and ordained as a Buddhist monk for a while. I studied Buddhism again in Japan and decided to do sociology for my doctoral work, focusing on bioethics questions inspired by Buddhist ideas around personhood. In the 90s, I met the transhumanists and eventually became executive director of the World Transhumanist Association. I wrote a book called Citizen Cyborg, reflecting on how personhood may change in the future and what we should do politically.
About half of us in the WTA executive then formed the IEET, initially focused on techno-progressivism – bringing a techno-optimistic view to ideas ranging from European liberalism to hardcore “replace the market with the state”. We believe many on the left are overly critical of technology while many futurists have libertarian ideas we want to engage with. The IEET has worked on existential risks, human enhancement, the longevity dividend, and with science fiction creators on more realistic future visions. Most recently we ran a two-year postdoc on the future of work, producing white papers and talks on the topic.
David: Today’s episode is necessarily political since we’re talking about how consensus can be built in society, hopefully non-conflictually, but also how tensions build and can be resolved, sometimes in tense confrontations that should still be non-violent. Do you feel the traditional left-right axis is still a valuable way to analyze attitudes towards progress, technology and human development? Or are people aggregating around different topics and policies in ways that left-right analysis misinterprets?
James: That’s an excellent question and one I’ve studied for 35 years as a sociologist. When I got involved in transhumanist ideas, I realized there was a long techno-optimistic tradition on the left before World War II – Lenin said communism was Soviet power plus the electrification of the whole country. But that switched after the war due to environmentalism, neoliberalism and other factors.
We’ve argued for this Enlightenment tradition along two dimensions – pre- vs. post-Enlightenment views on liberal individualism, women’s rights, secularism, nationalism, and racism; and the economic dimension of how much market vs. state, going back to the Scottish vs. French Enlightenments. I’ve asked whether technology issues are a third dimension or part of these two.
Most evidence now points to college-educated, culturally liberal people being more tech-optimistic while blue-collar and poor people are less so. I think that can be turned around with the right policies and a more proactive technological innovation regime. We need to consider the impact of divisions between those who will and won’t benefit from technology and how to address that.
David: I agree, and it reminds me of that scene in Monty Python’s Life of Brian where various Jewish factions keep splintering over minor disagreements until they’re in groups of one. The transhumanist movement has seemed prone to that at times – intensely debating things that should unite rather than divide us. As former chair of Humanity+, I’ve seen firsthand how we argue over what should bring us together. Why do you think that is, and does it hobble our ability to make an impact?
James: Well, I don’t know that transhumanism will ever be as influential as feminism or environmentalism, but if you look at their histories, they fractured immediately too – there are 50 flavors of each. When transhumanists began thinking more seriously about electoral politics in the 2010s, I counseled that transhumanism as an idea is too thin for a political party or even a think tank. Our experience in Humanity+ and the WTA shows you can support the right to use human enhancement tech and be anything from a monarchist to a Putinist. It’s compatible with too wide a range of views.
That’s partly why we started the IEET – we agreed there should be democratic states doing things for people, which takes many arguments off the table. But I don’t think transhumanism can be the basis for a major political movement. It’s a subculture of ideas. We can see its influence peripherally through effective altruists and AI safety advocates impacting Congress, mobilizing lobbying money and scholars to influence AI regulation. Having well-educated people talking to ignorant Congress members about AI risks is overall a good thing, even if controversial. Compared to other movements, transhumanism hasn’t had anything comparable, but dramatic future technologies like brain-computer interfaces, radical life extension, and human genetic modification will raise key regulatory issues in the next 10-20 years.
David: When you talk about European-style regulation, it implies starting from the precautionary principle – viewing new technologies with skepticism until proven safe, putting the onus on producers to demonstrate minimal harm vastly outweighed by benefits.
James: The precautionary principle has definitely been part of EU tech policy discussions on the GDPR, DSA, DMA, and AI Act. But you can also see the influence of both foreign big tech and European tech innovators arguing that some proposed AI Act regulations would further discourage European tech competitiveness. I think they’re right that we need to be careful about preserving innovation.
I’m concerned about the geopolitical AI struggle and believe it benefits liberal democracy for the US, Europe, Australia and other democracies to maintain a lead. The AI Act creates a risk hierarchy, saying some systems need prior approval based on potential societal impacts. The US has been more laissez-faire but Biden’s AI proposals also suggest different controls above certain compute levels.
I think that’s the right basic approach. The AI Act addresses things like firms telling workers when AI is used, auditing AI impacts on hiring to prevent bias, etc. I don’t see that as harmful to innovation, but there are cultural and economic factors beyond regulation explaining Europe’s AI sector challenges.
David: Yes, and as we stream this, the European elections saw gains by right-wing parties who may push for “sovereign” national or European AIs. That will be interesting as it means incentives for European or national AI champions along with the vertical integration of software, data centers, chips and energy that’s absent today. Since I favor a bottom-up variety of approaches to find what works through experimentation rather than top-down monoliths, I think this fragmentation of efforts could be good.
James: Yeah, research now tests the moral reasoning of LLMs, finding they generally reflect the utilitarian, secular values of young educated Silicon Valley people. Meanwhile China released a chatbot to guide study of Xi Jinping Thought while imposing strong restrictions. I’m sure we’ll see Islamist, CCP, far-right LLMs and sophisticated arguments for terrible positions. I don’t believe intelligence approximates moral truth.
David: Let’s bookmark that point as it has implications for your views on AI existential risk. But to go back – those 20th century battles for workers matter because their outcomes weren’t a given. Fascism lost, we thought, but whenever we felt democracy’s foundations were solid and we could relax, we were naive and wrong. The battles matter, outcomes matter, and they aren’t guaranteed. Apart from not being naive, what lessons from 100 years ago are applicable today?
James: Well, this is another question that’s been much on my mind since Trump’s election, when the rise of fascism really sent me into a funk. The global phenomenon of growing far-right strongman politics within democracies and increasingly authoritarian regimes reflects alienation and anxiety among many working-class people about whether they’ll benefit from the technological future.
It’s mixed up with a global decline of traditional gender and family norms, some of the most fundamental aspects of life for two billion years. I believe we’re entering a post-gender period where gender won’t determine life chances and can be changed. That’s why transgender issues are so central to the far-right backlash, as they threaten gender essentialism. It’s an under-discussed part of the global strongman politics phenomenon.
At the same time, the globalization of information is both terrifying and optimistic. I’ve always believed our solutions rely on international cooperation and transnational organizations. So the development of transnational identity is something I’ve closely monitored. You see it in the exciting European project, predicated on extending liberal democracy. I want more such projects and hope transnational communication supports them. But it also seems to support a troubling new far-right internationalization.
David: There are reasons to think the blunt tools of violent conflict that created progress through upheaval in the past shouldn’t be used today. Nuclear weapons mean local conflicts risk global annihilation. Resorting to them would also negate our expectation that we can now analyze and resolve complex challenges in better ways.
Europe has been an interesting experiment – still an experiment in the Popperian sense that we can only falsify, not verify, that it’s working. It’s healthy to say we must try harder to keep it working. Europe has leveraged crises at various points to strengthen itself, like when it adopted the name European Union. Some now want to bring the union closer together, especially fiscally, to form a United States of Europe. Fragmentation into small markets is a key reason Europe hasn’t built leading tech players. So what other differences should the 21st-century battles to promote individual and community rights have compared to the past?
James: Well, let me just say about war, nuclear weapons made many thinkers realize the world probably needs stronger transnational institutions to control WMD risks. We’ve had the IAEA, anti-nuclear and bioweapon treaties, but they founder on global geopolitical divisions and mistrust. More progress strengthening those institutions and conflict resolution is necessary and may require significant geopolitical changes.
I’m a staunch defender of Ukraine and hope a Russian defeat there causes regime change. China is its own democratization challenge, but there may be prospects in Russia that could eventually lead to it joining the EU and other powers in global WMD and AI agreements. China has some farsighted people on AI policy, but with so much of its development under PLA control, its trustworthiness in negotiations is questionable.
At any rate, progress on the geopolitical front is something people should pay much more attention to. I’ve always promoted giving the Universal Declaration of Human Rights more legal force, strengthening the ICJ, and finding ways of establishing global rule of law. We could start with the easy cases of sending blue helmets to suppress genocides, which we failed at in places like Sudan. I think there are 25 blue-helmet missions now, so strengthening transnational military force is good.
I actually support Macron’s idea of a European army as the US is a declining unaffordable hegemon that’s a worse values example than it used to be. Poland is pulling ahead, spending 3-4% of GDP on defense. But we need to think seriously about WMD and existential risks. I and folks from effective altruism, longtermism and AI safety all agree bioweapons and WMDs should be a very high political priority.
Whether that means not pushing things that might cause conflict with nuclear and bioweapon states, like ceding Ukraine to Russia to address their nuclear threats – that’s where we get into hard politics and futurist disagreements. I just wrote a piece on pronatalism arguing that we futurists should take the uncertainty of the future more seriously in our thinking. The focus on existential and WMD risks is treated as a “strange attractor” – if you compare futures with and without people, obviously you want to prevent the latter. Except you’re not obliged to try if you have no idea what to do. I think we’re in that situation.
As a sociologist I’ve been interested in Asimov’s psychohistorical proposal in Foundation that, with enough knowledge of society and minds, and the right AI, we could predict the future and decide accordingly. Or the Dune version of Paul Atreides discerning paths in the multiverse. I just don’t think we know enough – AI itself was supposed to be an opaque singularity, a barrier to predicting the future. Consequently, I think we should focus on this century and things we know will likely happen – existential risks will be bad this century, without having to consider a billion years out to see that.
David: You mentioned the crucial word persuasion. I’m a big supporter of developing the science of memetics. A journal of memetics was founded decades ago, with Daniel Dennett among its champions, but it didn’t succeed, potentially because we lacked the social networks and data to really study how memes and ideas spread and take hold of minds. You mentioned Asimov’s Foundation series, which is fascinating because of the spoiler that you first need planetary-scale populations to model the future, but then a mutant, the Mule, appears who can change behavior and must be opposed by Second Foundation operators leveraging his own emotions against him to get the project back on track.
James: I strongly identify with the Foundation trying to chart a better path as the empire declines, then suddenly all these fascists and strongmen appear that weren’t in the plans! But anyway, go on.
David: So the advanced tools of AI are what Asimov represented in the Second Foundation’s evolved mathematical speech and high-bandwidth communication. I’ll be an early adopter of Neuralink if it enables that! But even without enhancement, we’ll use AI as a persuasive force multiplier, as I’m doing now – writing and publishing more and better, illustrating posts with the kind of images social media rewards, etc. Are the good guys going to win? The bad guys lost in the past and couldn’t complain, so how will we leverage these tools for desirable futures?
James: Well, we need to think more about applying tech to citizen engagement, democratic accountability, and new ways of deciding things. The failures of representative democracy are making people tired of it. The administrative state needs serious AI-enabled reform, like automating records so the VA doesn’t take two years to process people.
I do think we can make progress there. But I’m very concerned about AI strengthening undemocratic forces and empowering threatening individuals and small groups with tools like bioweapons. A key tradeoff many are pondering is whether mitigating these emerging tech risks requires a more invasive surveillance state.
George Church argues DNA printers should have built-in monitoring of who uses them and what they print, reporting to central authorities. Of course that means an insecure black market will appear. But we need to think seriously about the surveillance risks. Ingmar Persson and Julian Savulescu even argued for mandatory moral enhancement to prevent psychopaths!
David: Isn’t universal education a mandatory moral enhancement?
James: It is, and a key point I made about automating teaching is that there has been a moral and civic component to education since ancient China – teaching people to be responsible citizens. I’d want that preserved, as homeschooling risks enabling little Christian Taliban madrassas. We saw them storm the Capitol and try to establish a fascist dictatorship. I’d rather they went through public education and learned to influence policy through lobbying and voting.
So yeah, education is important. As to your broader question of persuading people that the future can be positive and steering them the right way – after WWII and the USSR’s fall there was a collapse of utopian imagination, and it’s still quite limited. Fascism stepped into that gap – Modi pushed a Hindu nationalist utopia for 20 years that delivered nothing.
The utopian visions I think we need are things like a liberal, fully automated luxury society. Not everyone thinking and doing the same – that’s been the bane of utopian projects – but convincing people a better future is possible with the right policies on AI and automation. Space exploration has always been largely performative science, something I want us to do even without strong public benefits. But the perspective of expanding into space as a civilization is a utopian vision I share.
Radical life extension, cyborg modification, neural control – I think these could all be parts of utopian visions for people who share our concern for freedom and equality.
David: There are many interestingly intertwined threads here. For example, we talked about declining American hegemony and empire as the US can’t afford to police global trade through military power due to ballooning deficits. But the potential productivity increases from AI and robotics could actually make that deficit sustainable if they arrive fast enough.
James: Whether AI will lead to dramatic economic growth is an open question. Proving automation’s productivity advantages has been hard over the years, I think due to offsetting changes in how we work. It’s possible, but people don’t spend every minute in productive labor and are often distracted by new online things.
If there is dramatic growth, the question is whether any of that money finds its way into public coffers and what we do with it. We discussed taxing robots or automation, which I’m not necessarily for. But the tremendous wealth inequality created by the new tech economy needs more progressive taxation to redirect some wealth to public purposes.
Then the question is whether spending it on our global empire, like defending free trade with Taiwan, makes sense. Taiwan and North Korea are hotspots – if the US gets pulled into a potentially nuclear confrontation with China or North Korea, we’re in a whole new situation. I don’t want the US to just withdraw from everywhere either. There has to be a vision of a post-US military hegemony global regime, maybe an expanded less US-dominated NATO. But theoretically, if the US government could tax American firms that primarily benefited from a global AI-driven economic boom, then yes, it could potentially sustain our military spending.
David: There are dampeners on the success of those policies in the form of skilled labor shortages. When the US offered incentives to build chip fabs, TSMC said they’d love to but can’t find the people.
James: Which is another argument for the state’s role in innovation. Even if you think we need venture capital and entrepreneurial freedom to create new things, you also need educated workers. In China, half of college grads are engineers vs only 5% in the US.
David: And too many lawyers! However, the longer demographic trend, contrary to what many on the left and environmentalists believe, is one of demographic collapse. Japan, South Korea, and Italy lead this grim trend, but outside Nigeria and some other African nations, the whole world is on track to not sustain its population. Then it’s not about educating workers – there won’t be any. So do you think accelerating artificial womb tech and deploying it could help address this? Ectogenesis is the term for bringing a fetus to term outside a human body. Today it’s science fiction, like advanced AI and humanoid robots are – for a few more days. Will it play a role?
James: I don’t want to be a conspiracy theorist, but last year a minor Chinese official was quoted in their press about the CCP’s lack of success getting people to have more kids – the party ordered members to have three and they’re not even at one. He said they may have to “industrialize the process.” I was like, wait, what? Are we at Brave New World “babies in bottles” already because women don’t want to birth them?
But yes, I’ve written about artificial wombs for 20 years and believe they’re inevitable. I don’t think many societies are ready to consider state-sponsored artificial wombs though. Places like Denmark already offer broad fertility assistance – IVF access for single women and lesbians, up to age 50 – and they’ve expanded it to boost birth rates. Egg freezing is picking up too, for women worried about not having kids by 35 and wanting to focus on careers first.
Many trends point to artificial wombs as another future fertility technology in a world of declining fertility. But they would still require IVF, which is uncertain, unpleasant and often expensive. So I don’t know that artificial wombs will have a big societal impact even if proven safe. It’s similar to human cloning – not many people actually want to clone or be cloned.
However, in “lifeboat” scenarios like a Battlestar Galactica situation where most humans have been killed and they debate requiring births, you can imagine artificial wombs being used if societies get really extreme, especially where women’s rights aren’t secure like Afghanistan.
David: One kind of organization we haven’t mentioned is unions. In the past, collective bargaining was key to defining and maintaining workers’ rights. But it doesn’t seem unions realize what’s happening or about to happen.
James: Well, my son is a labor organizer for nurses, my daughter heads her graduate student union, and I’m a passionate labor supporter. US unions have gotten a huge boost the last 5-6 years from young people waking up and realizing it’s the right idea – Starbucks organizing, Amazon, etc.
At the same time, overall US union membership continues to decline. Going back to Marx, putting guys on an assembly line built solidarity. The gigification of labor, precarity, weakening commitment between firms and workers over time, the decline of big factories, the rise of varied white-collar jobs – all of these have made the labor union form work less well going forward. Only 11% of US workers are in unions now. Getting to Swedish levels of 80% will be hard for most of the world.
So I firmly believe we need new forms of worker and citizen coordination and empowerment that leverage electronic communication more than traditional unions or parties have. Parties are on their way out too. There have been liquid democracy experiments in Europe – the Five Star Movement in Italy, Podemos in Spain. Pirate parties have tried things. But none have worked great – Five Star came up with an incomprehensible populist mishmash.
I don’t know that we’ve cracked the nut of electronic democracy or collective empowerment yet. The flash mobs of the Arab Spring proved chimerical as change mechanisms. Being an active union worker or political party citizen takes a lot of work most don’t want to put in, understandably. So we need to make collective organizing as easy as possible.
David: You mentioned Italy. Italian unions had a big win forcing gig platforms like food delivery to hire riders as employees, not contractors. Italian labor law allows almost no flexibility on hours, so these are now all full-timers. Taxi unions also forced Uber drivers to return to a garage between rides, so you have to wait 30 minutes and pay for them to reach you – Uber basically doesn’t exist there now.
If you’re a rider they actually hired or a cab driver, these look like victories. But medium-term, both are getting automated away. I use this as an example of unions fighting yesterday’s battles rather than mapping the future’s contours.
James: Oh, I absolutely agree. We’ve had similar debates in the US and California moved towards providing gig workers more traditional rights. I have mixed feelings. The general policy goal should be people being able to do gigs and still have health insurance, education, money for food.
The surveillance built into gig work is a concern, as the model is predicated on intensely monitoring workers – like McDonald’s calling in young workers for surges with no guaranteed hours, an old practice now turbocharged by AI tracking laptop clicks while people work from home.
For gigs, I think we need to regulate worker compensation within platforms. I support democratic experiments with worker-owned gig platforms. But I’m concerned legacy industries regulating new competitors is the wrong approach. I agree with your point on taxis and Uber garaging.
It’s easy to imagine that part of our path to technological unemployment will be further gigification of labor, as we’ve seen with Amazon Mechanical Turk. So we need to figure this out. Without the right protections it can be very dispiriting and impoverishing.
David: So, to start wrapping up: the US system is resilient, swinging between extremes. People forget the social ruptures of the 60s and 70s that didn’t end up fully tearing the country apart. Today, many feel an extreme reaction is likely whether Biden or Trump wins in 2024 – Trump supporters may not accept a Biden win, while a Trump victory would mean extreme policies exacerbating tensions. How do you see things playing out by March 2025?
James: I’ve been reading more fantasy literature because I just can’t stand to think about it! But politically, in college I joined DSA, the Democratic Socialists of America, and I was a DSA chapter chair in Connecticut for a while. They blew up in 2016 with Bernie Sanders, whom I strongly supported. But I discovered I don’t have a lot in common with the millennial left under 30. So I’m a bit politically homeless these days, not sure what to do personally.
For 2024, I still think Biden can squeak it out. One problem is that in these extremes, especially with charismatic leaders, people see the logic of extrajudicial action – if elections aren’t working, maybe I have to start shooting people. We could tragically see that in the US.
The right outguns the left 20 to 1, so the left would never win that confrontation on a community level. Our best hope is to hold the judiciary as much as we can – the Supreme Court is largely lost already – and ensure we can still get elected to legislatures.
But controlling Trump last time was hard enough. This time they’re coming in with a vetted cohort of 50,000 federal appointees who believe the last election was stolen and that Trump should be dictator. My family is already looking into Italian citizenship in case he’s elected – of course, you had to go and elect a post-fascist government yourself! But at least Meloni is no Trump. So I may be your neighbor someday.
David: Okay, interesting. There are de-dollarization trends that could further precipitate things.
James: Well, there’s no evidence of successful de-dollarization yet. The BRICS concept was a chimera. I believe cracking the resistance of Russia, North Korea, Iran, the Saudis to international law and a rules-based order is central to progress. But I’m not so concerned they’re all on the same page – Russia, China, India, Iran all have quite different agendas.
David: One thing I liked about Trump’s first term is that he didn’t go to war with Iran as I expected.
James: Yes, neither US party was going to war with Iran. It would be really hard if we had actual boots on the ground. We’d probably just bomb them to death.
David: So when you say the US must crack it, what do you mean?
James: I mean we need to clearly articulate what a reformed rules-based international order looks like, beyond window dressing. I think Israel has been a huge setback here. After the West’s response to Ukraine, we were on the way to coordinated Western advocacy for liberal democracy and defense of the democratic order. But the events of October 7th in Israel made it very hard to make that case in the developing world.
We have to get back to arguing that stronger transnational institutions and common principles are the way forward. For Iran and North Korea that means bending to nuclear nonproliferation and bioweapon agreements. With Russia, I don’t know what to do except get rid of Putin eventually – I’m open to suggestions.
But China could potentially see rapid discontinuous political change. The Soviet Union’s collapse surprised almost everyone. I think the CCP could change quickly in the next decade. But if they try to invade Taiwan sparking a war with the West, who knows what happens.
So yeah, the world is very uncertain, and AI, bioweapons, and genetic engineering will only increase that. I think that’s the biggest lesson – technology could be the unpredictable agent of political change going forward. Western AI dominance arising from our current lead could let us dominate world affairs in ways we can’t yet imagine.