Apr 2025
2h 16m

#214 – Buck Shlegeris on controlling AI ...

Rob, Luisa, and the 80,000 Hours team
About this episode

Most AI safety conversations centre on alignment: ensuring AI systems share our values and goals. But despite progress, we’re unlikely to know we’ve solved the problem before the arrival of human-level and superhuman systems in as little as three years.

So some are developing a backup plan to safely deploy models we fear are actively scheming to harm us — so-called “AI control.” While this may sound mad, given the reluctance of AI companies to delay deploying anything they train, not developing such techniques is probably even crazier.

Today’s guest — Buck Shlegeris, CEO of Redwood Research — has spent the last few years developing control mechanisms, and for human-level systems they’re more plausible than you might think. He argues that given companies’ unwillingness to incur large costs for security, accepting the possibility of misalignment and designing robust safeguards might be one of our best remaining options.

Links to learn more, highlights, video, and full transcript.

As Buck puts it: "Five years ago I thought of misalignment risk from AIs as a really hard problem that you’d need some really galaxy-brained fundamental insights to resolve. Whereas now, to me the situation feels a lot more like we just really know a list of 40 things where, if you did them — none of which seem that hard — you’d probably be able to not have very much of your problem."

Of course, even if Buck is right, we still need to do those 40 things — which he points out we’re not on track for. And AI control agendas have their limitations: they aren’t likely to work once AI systems are much more capable than humans, since greatly superhuman AIs can probably work around whatever limitations we impose.

Still, AI control agendas seem to be gaining traction within AI safety. Buck and host Rob Wiblin discuss all of the above, plus:

  • Why he’s more worried about AI hacking its own data centre than escaping
  • What to do about “chronic harm,” where AI systems subtly underperform or sabotage important work like alignment research
  • Why he might want to use a model he thought could be conspiring against him
  • Why he would feel safer if he caught an AI attempting to escape
  • Why many control techniques would be relatively inexpensive
  • How to use an untrusted model to monitor another untrusted model
  • What the minimum viable intervention in a “lazy” AI company might look like
  • How even small teams of safety-focused staff within AI labs could matter
  • The moral considerations around controlling potentially conscious AI systems, and whether doing so is justified

Chapters:

  • Cold open |00:00:00|  
  • Who’s Buck Shlegeris? |00:01:27|  
  • What's AI control? |00:01:51|  
  • Why is AI control hot now? |00:05:39|  
  • Detecting human vs AI spies |00:10:32|  
  • Acute vs chronic AI betrayal |00:15:21|  
  • How to catch AIs trying to escape |00:17:48|  
  • The cheapest AI control techniques |00:32:48|  
  • Can we get untrusted models to do trusted work? |00:38:58|  
  • If we catch a model escaping... will we do anything? |00:50:15|  
  • Getting AI models to think they've already escaped |00:52:51|  
  • Will they be able to tell it's a setup? |00:58:11|  
  • Will AI companies do any of this stuff? |01:00:11|  
  • Can we just give AIs fewer permissions? |01:06:14|  
  • Can we stop human spies the same way? |01:09:58|  
  • The pitch to AI companies to do this |01:15:04|  
  • Will AIs get superhuman so fast that this is all useless? |01:17:18|  
  • Risks from AI deliberately doing a bad job |01:18:37|  
  • Is alignment still useful? |01:24:49|  
  • Current alignment methods don't detect scheming |01:29:12|  
  • How to tell if AI control will work |01:31:40|  
  • How can listeners contribute? |01:35:53|  
  • Is 'controlling' AIs kind of a dick move? |01:37:13|  
  • Could 10 safety-focused people in an AGI company do anything useful? |01:42:27|  
  • Benefits of working outside frontier AI companies |01:47:48|  
  • Why Redwood Research does what it does |01:51:34|  
  • What other safety-related research looks best to Buck? |01:58:56|  
  • If an AI escapes, is it likely to be able to beat humanity from there? |01:59:48|  
  • Will misaligned models have to go rogue ASAP, before they're ready? |02:07:04|  
  • Is research on human scheming relevant to AI? |02:08:03|

This episode was originally recorded on February 21, 2025.

Video: Simon Monsour and Luke Monsour
Audio engineering: Ben Cordell, Milo McGuire, and Dominic Armstrong
Transcriptions and web: Katy Moore

Up next
Jul 15
Rebuilding after apocalypse: What 13 experts say about bouncing back
What happens when civilisation faces its greatest tests? This compilation brings together insights from researchers, defence experts, philosophers, and policymakers on humanity’s ability to survive and recover from catastrophic events. From nuclear winter and electromagnetic pulse …
4h 26m
Jul 8
#220 – Ryan Greenblatt on the 4 most likely ways for AI to take over, and the case for and against AGI in <8 years
Ryan Greenblatt — lead author on the explosive paper “Alignment faking in large language models” and chief scientist at Redwood Research — thinks there’s a 25% chance that within four years, AI will be able to do everything needed to run an AI company, from writing code to design …
2h 50m
Jun 24
#219 – Toby Ord on graphs AI companies would prefer you didn't (fully) understand
The era of making AI smarter just by making it bigger is ending. But that doesn’t mean progress is slowing down — far from it. AI models continue to get much more powerful, just using very different methods, and those underlying technical changes force a big rethink of what coming …
2h 48m