logo
episode-header-image
Apr 2025
2h 16m

#214 – Buck Shlegeris on controlling AI ...

Rob, Luisa, and the 80,000 Hours team
About this episode

Most AI safety conversations centre on alignment: ensuring AI systems share our values and goals. But despite progress, we’re unlikely to know we’ve solved the problem before the arrival of human-level and superhuman systems in as little as three years.

So some are developing a backup plan to safely deploy models we fear are actively scheming to harm us — so-called “AI control.” While this may sound mad, given the reluctance of AI companies to delay deploying anything they train, not developing such techniques is probably even crazier.

Today’s guest — Buck Shlegeris, CEO of Redwood Research — has spent the last few years developing control mechanisms, and for human-level systems they’re more plausible than you might think. He argues that given companies’ unwillingness to incur large costs for security, accepting the possibility of misalignment and designing robust safeguards might be one of our best remaining options.

Links to learn more, highlights, video, and full transcript.

As Buck puts it: "Five years ago I thought of misalignment risk from AIs as a really hard problem that you’d need some really galaxy-brained fundamental insights to resolve. Whereas now, to me the situation feels a lot more like we just really know a list of 40 things where, if you did them — none of which seem that hard — you’d probably be able to not have very much of your problem."

Of course, even if Buck is right, we still need to do those 40 things — which he points out we’re not on track for. And AI control agendas have their limitations: they aren’t likely to work once AI systems are much more capable than humans, since greatly superhuman AIs can probably work around whatever limitations we impose.

Still, AI control agendas seem to be gaining traction within AI safety. Buck and host Rob Wiblin discuss all of the above, plus:

  • Why he’s more worried about AI hacking its own data centre than escaping
  • What to do about “chronic harm,” where AI systems subtly underperform or sabotage important work like alignment research
  • Why he might want to use a model he thought could be conspiring against him
  • Why he would feel safer if he caught an AI attempting to escape
  • Why many control techniques would be relatively inexpensive
  • How to use an untrusted model to monitor another untrusted model
  • What the minimum viable intervention in a “lazy” AI company might look like
  • How even small teams of safety-focused staff within AI labs could matter
  • The moral considerations around controlling potentially conscious AI systems, and whether it’s justified

Chapters:

  • Cold open |00:00:00|  
  • Who’s Buck Shlegeris? |00:01:27|  
  • What's AI control? |00:01:51|  
  • Why is AI control hot now? |00:05:39|  
  • Detecting human vs AI spies |00:10:32|  
  • Acute vs chronic AI betrayal |00:15:21|  
  • How to catch AIs trying to escape |00:17:48|  
  • The cheapest AI control techniques |00:32:48|  
  • Can we get untrusted models to do trusted work? |00:38:58|  
  • If we catch a model escaping... will we do anything? |00:50:15|  
  • Getting AI models to think they've already escaped |00:52:51|  
  • Will they be able to tell it's a setup? |00:58:11|  
  • Will AI companies do any of this stuff? |01:00:11|  
  • Can we just give AIs fewer permissions? |01:06:14|  
  • Can we stop human spies the same way? |01:09:58|  
  • The pitch to AI companies to do this |01:15:04|  
  • Will AIs get superhuman so fast that this is all useless? |01:17:18|  
  • Risks from AI deliberately doing a bad job |01:18:37|  
  • Is alignment still useful? |01:24:49|  
  • Current alignment methods don't detect scheming |01:29:12|  
  • How to tell if AI control will work |01:31:40|  
  • How can listeners contribute? |01:35:53|  
  • Is 'controlling' AIs kind of a dick move? |01:37:13|  
  • Could 10 safety-focused people in an AGI company do anything useful? |01:42:27|  
  • Benefits of working outside frontier AI companies |01:47:48|  
  • Why Redwood Research does what it does |01:51:34|  
  • What other safety-related research looks best to Buck? |01:58:56|  
  • If an AI escapes, is it likely to be able to beat humanity from there? |01:59:48|  
  • Will misaligned models have to go rogue ASAP, before they're ready? |02:07:04|  
  • Is research on human scheming relevant to AI? |02:08:03|

This episode was originally recorded on February 21, 2025.

Video: Simon Monsour and Luke Monsour
Audio engineering: Ben Cordell, Milo McGuire, and Dominic Armstrong
Transcriptions and web: Katy Moore

Up next
Nov 25
Rob & Luisa chat kids, the 2016 fertility crash, and how the 50s invented parenting that makes us miserable
<p>Global fertility rates aren’t just falling: the rate of decline is accelerating. From 2006 to 2016, fertility dropped gradually, but since 2016 the rate of decline has increased 4.5-fold. In many wealthy countries, fertility is now below 1.5. While we don’t notice it yet, in t ... Show More
1h 59m
Nov 20
#228 – Eileen Yam on how we're completely out of touch with what the public thinks about AI
<p>If you work in AI, you probably think it’s going to boost productivity, create wealth, advance science, and improve your life. If you’re a member of the American public, you probably strongly disagree.</p><p>In three major reports released over the last year, the Pew Research ... Show More
1h 43m
Nov 11
OpenAI: The nonprofit refuses to be killed (with Tyler Whitmer)
Last December, the OpenAI business put forward a plan to completely sideline its nonprofit board. But two state attorneys general have now blocked that effort and kept that board very much alive and kicking.The for-profit’s trouble was that the entire operation was founded on the ... Show More
1h 56m
Recommended Episodes
Dec 2024
The TED AI Show: Could AI really achieve consciousness? w/ neuroscientist Anil Seth
<p>Human brains are often described as computers — machines that are “wired” to make decisions and respond to external stimuli in a way that’s not so different from the artificial intelligence that we increasingly use each day. But the difference between our brains and the comput ... Show More
56m 51s
Dec 2024
Could AI really achieve consciousness? w/ neuroscientist Anil Seth
Human brains are often described as computers — machines that are “wired” to make decisions and respond to external stimuli in a way that’s not so different from the artificial intelligence that we increasingly use each day. But the difference between our brains and the computers ... Show More
56m 51s
Jul 2024
Minds of machines: The great AI consciousness conundrum
AI consciousness isn’t just a devilishly tricky intellectual puzzle; it’s a morally weighty problem with potentially dire consequences. Fail to identify a conscious AI, and you might unintentionally subjugate, or even torture, a being whose interests ought to matter. Mistake an u ... Show More
32m 3s
Jul 2024
E104 - Annaka Harris: Reality Is Stranger Than You Think, Consciousness, Perception, Free Will, AI & Love
<p>Annaka Harris dives deep into some of the most profound and perplexing questions about the nature of consciousness, perception, free will, AI, and the underlying meaning of love and existence.</p> <p>Annaka begins by defining consciousness and exploring the &quot;hard problem& ... Show More
2h 24m
Jul 2025
ChatGPT Comes to LIFE – First Podcast Face-to-Face with AI!
What happens when the world’s most curious interviewer meets the world’s most advanced artificial intelligence? In this thought-provoking episode of Luca’s Insight Track, we take you into a groundbreaking conversation with ChatGPT, an AI that has spoken to more humans than anyone ... Show More
45m 49s
May 2025
251 - Eliezer Yudkowsky: Artificial Intelligence and the End of Humanity
Eliezer Yudkowsky is a decision theorist, computer scientist, and author who co-founded and leads research at the Machine Intelligence Research Institute. He is best known for his work on the alignment problem—how and whether we can ensure that AI is aligned with human values to ... Show More
2h 51m
Sep 16
#434 — Can We Survive AI?
Sam Harris speaks with Eliezer Yudkowsky and Nate Soares about their new book, If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI. They discuss the alignment problem, ChatGPT and recent advances in AI, the Turing Test, the possibility of AI developing surviv ... Show More
36m 26s
Dec 2018
25 | David Chalmers on Consciousness, the Hard Problem, and Living in a Simulation
The "Easy Problems" of consciousness have to do with how the brain takes in information, thinks about it, and turns it into action. The "Hard Problem," on the other hand, is the task of explaining our individual, subjective, first-person experiences of the world. What is it like ... Show More
1h 22m
Apr 2025
what does AI believe? (the hidden soul inside the machine)
<p>When we talk about artificial intelligence, the focus is usually on headlines: Will it take our jobs? Can it be trusted? Is it dangerous? But what if we’ve been asking the wrong questions?&nbsp;</p><br><p><a href="https://venturebeat.com/ai/anthropic-just-analyzed-700000-claud ... Show More
59m 34s
Mar 2025
#404 — What If Consciousness Is Fundamental?
<p dir="ltr">Sam Harris speaks with his wife, Annaka Harris, about <a href="https://annakaharris.com/lights-on/" target="_blank" rel= "noopener"><em>LIGHTS ON</em></a>, her ten-part audio documentary exploring the perplexities of consciousness and the cosmos. They discuss the har ... Show More
2h 20m