Feb 2025
2h 44m

#212 – Allan Dafoe on why technology is ...

Rob, Luisa, and the 80,000 Hours team
About this episode

Technology doesn’t force us to do anything — it merely opens doors. But military and economic competition pushes us through.

That’s how today’s guest Allan Dafoe — director of frontier safety and governance at Google DeepMind — explains one of the deepest patterns in technological history: once a powerful new capability becomes available, societies that adopt it tend to outcompete those that don’t. Those who resist too much can find themselves taken over or rendered irrelevant.

Links to learn more, highlights, video, and full transcript.

This dynamic played out dramatically in 1853 when US Commodore Perry sailed into Tokyo Bay with steam-powered warships that seemed magical to the Japanese, who had spent centuries deliberately limiting their technological development. With far greater military power, the US was able to force Japan to open itself to trade. Within 15 years, Japan had undergone the Meiji Restoration and transformed itself in a desperate scramble to catch up.

Today we see hints of similar pressure around artificial intelligence. Even companies, countries, and researchers deeply concerned about where AI could take us feel compelled to push ahead — worried that if they don’t, less careful actors will develop transformative AI capabilities at around the same time anyway.

But Allan argues this technological determinism isn’t absolute. While broad patterns may be inevitable, history shows we do have some ability to steer how technologies are developed, by whom, and what they’re used for first.

As part of that approach, Allan has been promoting efforts to make AI more capable of sophisticated cooperation, and improving the tests Google uses to measure how well its models could do things like mislead people, hack and take control of their own servers, or spread autonomously in the wild.

As of mid-2024 they didn’t seem dangerous at all, but we’ve since learned that our ability to measure these capabilities, while good, is imperfect. If we don’t find the right way to ‘elicit’ an ability, we can miss that it’s there.

Subsequent research from Anthropic and Redwood Research suggests there’s even a risk that future models may play dumb to avoid their goals being altered.

That has led DeepMind to a “defence in depth” approach: carefully staged deployment starting with internal testing, then trusted external testers, then limited release, then watching how models are used in the real world. By not releasing model weights, DeepMind is able to back up and add additional safeguards if experience shows they’re necessary.

But with much more powerful and general models on the way, individual company policies won’t be sufficient by themselves. Drawing on his academic research into how societies handle transformative technologies, Allan argues we need coordinated international governance that balances safety with our desire to get the massive potential benefits of AI in areas like healthcare and education as quickly as possible.

Host Rob and Allan also cover:

  • The most exciting beneficial applications of AI
  • Whether and how we can influence the development of technology
  • What DeepMind is doing to evaluate and mitigate risks from frontier AI systems
  • Why cooperative AI may be as important as aligned AI
  • The role of democratic input in AI governance
  • What kinds of experts are most needed in AI safety and governance
  • And much more

Chapters:

  • Cold open (00:00:00)
  • Who's Allan Dafoe? (00:00:48)
  • Allan's role at DeepMind (00:01:27)
  • Why join DeepMind over everyone else? (00:04:27)
  • Do humans control technological change? (00:09:17)
  • Arguments for technological determinism (00:20:24)
  • The synthesis of agency with tech determinism (00:26:29)
  • Competition took away Japan's choice (00:37:13)
  • Can speeding up one tech redirect history? (00:42:09)
  • Structural pushback against alignment efforts (00:47:55)
  • Do AIs need to be 'cooperatively skilled'? (00:52:25)
  • How AI could boost cooperation between people and states (01:01:59)
  • The super-cooperative AGI hypothesis and backdoor risks (01:06:58)
  • Aren’t today’s models already very cooperative? (01:13:22)
  • How would we make AIs cooperative anyway? (01:16:22)
  • Ways making AI more cooperative could backfire (01:22:24)
  • AGI is an essential idea we should define well (01:30:16)
  • It matters what AGI learns first vs last (01:41:01)
  • How Google tests for dangerous capabilities (01:45:39)
  • Evals 'in the wild' (01:57:46)
  • What to do given no single approach works that well (02:01:44)
  • We don't, but could, forecast AI capabilities (02:05:34)
  • DeepMind's strategy for ensuring its frontier models don't cause harm (02:11:25)
  • How 'structural risks' can force everyone into a worse world (02:15:01)
  • Is AI being built democratically? Should it? (02:19:35)
  • How much do AI companies really want external regulation? (02:24:34)
  • Social science can contribute a lot here (02:33:21)
  • How AI could make life way better: self-driving cars, medicine, education, and sustainability (02:35:55)

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions: Katy Moore

Up next
Jul 15
Rebuilding after apocalypse: What 13 experts say about bouncing back
What happens when civilisation faces its greatest tests? This compilation brings together insights from researchers, defence experts, philosophers, and policymakers on humanity’s ability to survive and recover from catastrophic events. From nuclear winter and electromagnetic pulse ...
4h 26m
Jul 8
#220 – Ryan Greenblatt on the 4 most likely ways for AI to take over, and the case for and against AGI in <8 years
Ryan Greenblatt — lead author on the explosive paper “Alignment faking in large language models” and chief scientist at Redwood Research — thinks there’s a 25% chance that within four years, AI will be able to do everything needed to run an AI company, from writing code to design ...
2h 50m
Jun 24
#219 – Toby Ord on graphs AI companies would prefer you didn't (fully) understand
The era of making AI smarter just by making it bigger is ending. But that doesn’t mean progress is slowing down — far from it. AI models continue to get much more powerful, just using very different methods, and those underlying technical changes force a big rethink of what comin ...
2h 48m