Nov 5
2h 20m

#227 – Helen Toner on the geopolitics of...

Rob, Luisa, and the 80000 Hours team
About this episode

With the US racing to develop AGI and superintelligence ahead of China, you might expect the two countries to be negotiating how they’ll deploy AI, including in the military, without coming to blows. But according to Helen Toner, director of the Center for Security and Emerging Technology in DC, “the US and Chinese governments are barely talking at all.”

Links to learn more, video, and full transcript: https://80k.info/ht25

As a founder, and now the leader, of DC’s top think tank focused on the geopolitical and military implications of AI, Helen has been closely tracking the US’s AI diplomacy since 2019.

“Over the last couple of years there have been some direct [US–China] talks on some small number of issues, but they’ve also often been completely suspended.” China knows the US wants to talk more, so “that becomes a bargaining chip for China to say, ‘We don’t want to talk to you. We’re not going to do these military-to-military talks about extremely sensitive, important issues, because we’re mad.’”

Helen isn’t sure the groundwork exists for productive dialogue in any case. “At the government level, [there’s] very little agreement” on what AGI is, whether it’s possible soon, or whether it poses major risks. Without a shared understanding of the problem, negotiating solutions is very difficult.

Another issue is that so far the Chinese Communist Party doesn’t seem especially “AGI-pilled.” While a few Chinese companies like DeepSeek are betting on scaling, she sees little evidence that Chinese leadership shares Silicon Valley’s conviction that AGI will arrive any minute now. And export controls have made it very difficult for Chinese firms to access enough compute to match their US competitors.

When DeepSeek released R1 just three months after OpenAI’s o1, observers declared the US–China gap on AI had all but disappeared. But Helen notes OpenAI has since scaled to o3 and o4, with nothing to match on the Chinese side. “We’re now at something like a nine-month gap, and that might be longer.”

To find a properly AGI-pilled autocracy, we might need to look at nominal US allies. The US has approved massive data centres in the UAE and Saudi Arabia with “hundreds of thousands of next-generation Nvidia chips” — delivering colossal levels of computing power.

When OpenAI announced this deal with the UAE, they celebrated that it was “rooted in democratic values,” and would advance “democratic AI rails” and provide “a clear alternative to authoritarian versions of AI.”

But the UAE scores 18 out of 100 on Freedom House’s global freedom index. “This is really not a country that respects rule of law,” Helen observes. Political parties are banned, elections are fake, and dissidents are persecuted.

If AI access really determines future national power, handing world-class supercomputers to Gulf autocracies seems pretty questionable. The justification is typically that “if we don’t sell it, China will” — a transparently false claim, given severe Chinese production constraints. It also raises eyebrows that Gulf countries conduct joint military exercises with China and their rulers have “very tight personal and commercial relationships with Chinese political leaders and business leaders.”

In today’s episode, host Rob Wiblin and Helen discuss all that and more.

This episode was recorded on September 25, 2025.

CSET is hiring a frontier AI research fellow! https://80k.info/cset-role
Check out its careers page for current roles: https://cset.georgetown.edu/careers/

Chapters:

  • Cold open (00:00:00)
  • Who’s Helen Toner? (00:01:02)
  • Helen’s role on the OpenAI board, and what happened with Sam Altman (00:01:31)
  • The Center for Security and Emerging Technology (CSET) (00:07:35)
  • CSET’s role in export controls against China (00:10:43)
  • Does it matter if the world uses US AI models? (00:21:24)
  • Is China actually racing to build AGI? (00:27:10)
  • Could China easily steal AI model weights from US companies? (00:38:14)
  • The next big thing is probably robotics (00:46:42)
  • Why is the Trump administration sabotaging the US high-tech sector? (00:48:17)
  • Are data centres in the UAE “good for democracy”? (00:51:31)
  • Will AI inevitably concentrate power? (01:06:20)
  • “Adaptation buffers” vs non-proliferation (01:28:16)
  • Will the military use AI for decision-making? (01:36:09)
  • “Alignment” is (usually) a terrible term (01:42:51)
  • Is Congress starting to take superintelligence seriously? (01:45:19)
  • AI progress isn’t actually slowing down (01:47:44)
  • What’s legit vs not about OpenAI’s restructure (01:55:28)
  • Is Helen unusually “normal”? (01:58:57)
  • How to keep up with rapid changes in AI and geopolitics (02:02:42)
  • What CSET can uniquely add to the DC policy world (02:05:51)
  • Talent bottlenecks in DC (02:13:26)
  • What evidence, if any, could settle how worried we should be about AI risk? (02:16:28)
  • Is CSET hiring? (02:18:22)

Video editing: Luke Monsour and Simon Monsour
Audio engineering: Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: CORBIT
Coordination, transcriptions, and web: Katy Moore

Up next
Nov 20
We're completely out of touch with what the public thinks about AI | Dr Yam, Pew Research Center
If you work in AI, you probably think it’s going to boost productivity, create wealth, advance science, and improve your life. If you’re a member of the American public, you probably strongly disagree. In three major reports released over the last year, the Pew Research …
1h 43m
Nov 11
OpenAI: The nonprofit refuses to be killed (with Tyler Whitmer)
Last December, the OpenAI business put forward a plan to completely sideline its nonprofit board. But two state attorneys general have now blocked that effort and kept that board very much alive and kicking. The for-profit’s trouble was that the entire operation was founded on the …
1h 56m
Oct 30
#226 – Holden Karnofsky on unexploited opportunities to make AI safer — and all his AGI takes
For years, working on AI safety usually meant theorising about the ‘alignment problem’ or trying to convince other people to give a damn. If you could find any way to help, the work was frustrating and low feedback. According to Anthropic’s Holden Karnofsky, this situati…
4h 30m
Recommended Episodes
Dec 2024
The TED AI Show: Could AI really achieve consciousness? w/ neuroscientist Anil Seth
Human brains are often described as computers — machines that are “wired” to make decisions and respond to external stimuli in a way that’s not so different from the artificial intelligence that we increasingly use each day. But the difference between our brains and the comput…
56m 51s
Jul 2024
Minds of machines: The great AI consciousness conundrum
AI consciousness isn’t just a devilishly tricky intellectual puzzle; it’s a morally weighty problem with potentially dire consequences. Fail to identify a conscious AI, and you might unintentionally subjugate, or even torture, a being whose interests ought to matter. Mistake an u…
32m 3s
Jul 2024
E104 - Annaka Harris: Reality Is Stranger Than You Think, Consciousness, Perception, Free Will, AI & Love
Annaka Harris dives deep into some of the most profound and perplexing questions about the nature of consciousness, perception, free will, AI, and the underlying meaning of love and existence. Annaka begins by defining consciousness and exploring the “hard problem”…
2h 24m
Jul 2025
ChatGPT Comes to LIFE – First Podcast Face-to-Face with AI!
What happens when the world’s most curious interviewer meets the world’s most advanced artificial intelligence? In this thought-provoking episode of Luca’s Insight Track, we take you into a groundbreaking conversation with ChatGPT, an AI that has spoken to more humans than anyone…
45m 49s
May 2025
251 - Eliezer Yudkowsky: Artificial Intelligence and the End of Humanity
Eliezer Yudkowsky is a decision theorist, computer scientist, and author who co-founded and leads research at the Machine Intelligence Research Institute. He is best known for his work on the alignment problem—how and whether we can ensure that AI is aligned with human values to…
2h 51m
Sep 16
#434 — Can We Survive AI?
Sam Harris speaks with Eliezer Yudkowsky and Nate Soares about their new book, If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI. They discuss the alignment problem, ChatGPT and recent advances in AI, the Turing Test, the possibility of AI developing surviv…
36m 26s
Dec 2018
25 | David Chalmers on Consciousness, the Hard Problem, and Living in a Simulation
The "Easy Problems" of consciousness have to do with how the brain takes in information, thinks about it, and turns it into action. The "Hard Problem," on the other hand, is the task of explaining our individual, subjective, first-person experiences of the world. What is it like ... Show More
1h 22m
Apr 2025
what does AI believe? (the hidden soul inside the machine)
When we talk about artificial intelligence, the focus is usually on headlines: Will it take our jobs? Can it be trusted? Is it dangerous? But what if we’ve been asking the wrong questions? https://venturebeat.com/ai/anthropic-just-analyzed-700000-claud…
59m 34s
Mar 2025
#404 — What If Consciousness Is Fundamental?
<p dir="ltr">Sam Harris speaks with his wife, Annaka Harris, about <a href="https://annakaharris.com/lights-on/" target="_blank" rel= "noopener"><em>LIGHTS ON</em></a>, her ten-part audio documentary exploring the perplexities of consciousness and the cosmos. They discuss the har ... Show More
2h 20m