Feb 2025
2h 44m

#212 – Allan Dafoe on why technology is ...

Rob, Luisa, and the 80,000 Hours team
About this episode

Technology doesn’t force us to do anything — it merely opens doors. But military and economic competition pushes us through.

That’s how today’s guest Allan Dafoe — director of frontier safety and governance at Google DeepMind — explains one of the deepest patterns in technological history: once a powerful new capability becomes available, societies that adopt it tend to outcompete those that don’t. Those who resist too much can find themselves taken over or rendered irrelevant.

Links to learn more, highlights, video, and full transcript.

This dynamic played out dramatically in 1853 when US Commodore Perry sailed into Tokyo Bay with steam-powered warships that seemed magical to the Japanese, who had spent centuries deliberately limiting their technological development. With far greater military power, the US was able to force Japan to open itself to trade. Within 15 years, Japan had undergone the Meiji Restoration and transformed itself in a desperate scramble to catch up.

Today we see hints of similar pressure around artificial intelligence. Even companies, countries, and researchers deeply concerned about where AI could take us feel compelled to push ahead — worried that if they don’t, less careful actors will develop transformative AI capabilities at around the same time anyway.

But Allan argues this technological determinism isn’t absolute. While broad patterns may be inevitable, history shows we do have some ability to steer how technologies are developed, by whom, and what they’re used for first.

As part of that approach, Allan has been promoting efforts to make AI more capable of sophisticated cooperation, and improving the tests Google uses to measure how well its models could do things like mislead people, hack and take control of their own servers, or spread autonomously in the wild.

As of mid-2024 they didn’t seem dangerous at all, but we’ve learned that our ability to measure these capabilities is good but imperfect: if we don’t find the right way to ‘elicit’ an ability, we can miss that it’s there.

Subsequent research from Anthropic and Redwood Research suggests there’s even a risk that future models may play dumb to avoid their goals being altered.

That has led DeepMind to a “defence in depth” approach: carefully staged deployment starting with internal testing, then trusted external testers, then limited release, then watching how models are used in the real world. Because DeepMind doesn’t release model weights, it can pull back and add further safeguards if experience shows they’re necessary.

But with much more powerful and general models on the way, individual company policies won’t be sufficient by themselves. Drawing on his academic research into how societies handle transformative technologies, Allan argues we need coordinated international governance that balances safety with our desire to get the massive potential benefits of AI in areas like healthcare and education as quickly as possible.

Host Rob and Allan also cover:

  • The most exciting beneficial applications of AI
  • Whether and how we can influence the development of technology
  • What DeepMind is doing to evaluate and mitigate risks from frontier AI systems
  • Why cooperative AI may be as important as aligned AI
  • The role of democratic input in AI governance
  • What kinds of experts are most needed in AI safety and governance
  • And much more

Chapters:

  • Cold open (00:00:00)
  • Who's Allan Dafoe? (00:00:48)
  • Allan's role at DeepMind (00:01:27)
  • Why join DeepMind over everyone else? (00:04:27)
  • Do humans control technological change? (00:09:17)
  • Arguments for technological determinism (00:20:24)
  • The synthesis of agency with tech determinism (00:26:29)
  • Competition took away Japan's choice (00:37:13)
  • Can speeding up one tech redirect history? (00:42:09)
  • Structural pushback against alignment efforts (00:47:55)
  • Do AIs need to be 'cooperatively skilled'? (00:52:25)
  • How AI could boost cooperation between people and states (01:01:59)
  • The super-cooperative AGI hypothesis and backdoor risks (01:06:58)
  • Aren’t today’s models already very cooperative? (01:13:22)
  • How would we make AIs cooperative anyway? (01:16:22)
  • Ways making AI more cooperative could backfire (01:22:24)
  • AGI is an essential idea we should define well (01:30:16)
  • It matters what AGI learns first vs last (01:41:01)
  • How Google tests for dangerous capabilities (01:45:39)
  • Evals 'in the wild' (01:57:46)
  • What to do given no single approach works that well (02:01:44)
  • We don't, but could, forecast AI capabilities (02:05:34)
  • DeepMind's strategy for ensuring its frontier models don't cause harm (02:11:25)
  • How 'structural risks' can force everyone into a worse world (02:15:01)
  • Is AI being built democratically? Should it? (02:19:35)
  • How much do AI companies really want external regulation? (02:24:34)
  • Social science can contribute a lot here (02:33:21)
  • How AI could make life way better: self-driving cars, medicine, education, and sustainability (02:35:55)

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions: Katy Moore

Up next
Nov 25
Rob & Luisa chat kids, the 2016 fertility crash, and how the 50s invented parenting that makes us miserable
Global fertility rates aren’t just falling: the rate of decline is accelerating. From 2006 to 2016, fertility dropped gradually, but since 2016 the rate of decline has increased 4.5-fold. In many wealthy countries, fertility is now below 1.5. While we don’t notice it yet, in t ...
1h 59m
Nov 20
#228 – Eileen Yam on how we're completely out of touch with what the public thinks about AI
If you work in AI, you probably think it’s going to boost productivity, create wealth, advance science, and improve your life. If you’re a member of the American public, you probably strongly disagree. In three major reports released over the last year, the Pew Research ...
1h 43m
Nov 11
OpenAI: The nonprofit refuses to be killed (with Tyler Whitmer)
Last December, the OpenAI business put forward a plan to completely sideline its nonprofit board. But two state attorneys general have now blocked that effort and kept that board very much alive and kicking. The for-profit’s trouble was that the entire operation was founded on the ...
1h 56m
Recommended Episodes
Dec 2024
The TED AI Show: Could AI really achieve consciousness? w/ neuroscientist Anil Seth
Human brains are often described as computers — machines that are “wired” to make decisions and respond to external stimuli in a way that’s not so different from the artificial intelligence that we increasingly use each day. But the difference between our brains and the comput ...
56m 51s
Jul 2024
Minds of machines: The great AI consciousness conundrum
AI consciousness isn’t just a devilishly tricky intellectual puzzle; it’s a morally weighty problem with potentially dire consequences. Fail to identify a conscious AI, and you might unintentionally subjugate, or even torture, a being whose interests ought to matter. Mistake an u ...
32m 3s
Jul 2024
E104 - Annaka Harris: Reality Is Stranger Than You Think, Consciousness, Perception, Free Will, AI & Love
Annaka Harris dives deep into some of the most profound and perplexing questions about the nature of consciousness, perception, free will, AI, and the underlying meaning of love and existence. Annaka begins by defining consciousness and exploring the “hard problem” ...
2h 24m
Jul 2025
ChatGPT Comes to LIFE – First Podcast Face-to-Face with AI!
What happens when the world’s most curious interviewer meets the world’s most advanced artificial intelligence? In this thought-provoking episode of Luca’s Insight Track, we take you into a groundbreaking conversation with ChatGPT, an AI that has spoken to more humans than anyone ...
45m 49s
May 2025
251 - Eliezer Yudkowsky: Artificial Intelligence and the End of Humanity
Eliezer Yudkowsky is a decision theorist, computer scientist, and author who co-founded and leads research at the Machine Intelligence Research Institute. He is best known for his work on the alignment problem—how and whether we can ensure that AI is aligned with human values to ...
2h 51m
Sep 16
#434 — Can We Survive AI?
Sam Harris speaks with Eliezer Yudkowsky and Nate Soares about their new book, If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI. They discuss the alignment problem, ChatGPT and recent advances in AI, the Turing Test, the possibility of AI developing surviv ...
36m 26s
Dec 2018
25 | David Chalmers on Consciousness, the Hard Problem, and Living in a Simulation
The "Easy Problems" of consciousness have to do with how the brain takes in information, thinks about it, and turns it into action. The "Hard Problem," on the other hand, is the task of explaining our individual, subjective, first-person experiences of the world. What is it like ... Show More
1h 22m
Apr 2025
what does AI believe? (the hidden soul inside the machine)
When we talk about artificial intelligence, the focus is usually on headlines: Will it take our jobs? Can it be trusted? Is it dangerous? But what if we’ve been asking the wrong questions? https://venturebeat.com/ai/anthropic-just-analyzed-700000-claud ...
59m 34s
Mar 2025
#404 — What If Consciousness Is Fundamental?
<p dir="ltr">Sam Harris speaks with his wife, Annaka Harris, about <a href="https://annakaharris.com/lights-on/" target="_blank" rel= "noopener"><em>LIGHTS ON</em></a>, her ten-part audio documentary exploring the perplexities of consciousness and the cosmos. They discuss the har ... Show More
2h 20m