Jul 8
2h 50m

#220 – Ryan Greenblatt on the 4 most lik...

Rob, Luisa, and the 80,000 Hours team
About this episode

Ryan Greenblatt — lead author on the explosive paper “Alignment faking in large language models” and chief scientist at Redwood Research — thinks there’s a 25% chance that within four years, AI will be able to do everything needed to run an AI company, from writing code to designing experiments to making strategic and business decisions.

As Ryan lays out, AI models are “marching through the human regime”: systems that could handle five-minute tasks two years ago now tackle 90-minute projects. Double that a few more times and we may be automating full jobs rather than just parts of them.

Will setting AI to improve itself lead to an explosive positive feedback loop? Maybe, but maybe not.

The explosive scenario: Once you’ve automated your AI company, you could have the equivalent of 20,000 top researchers, each working 50 times faster than humans with total focus. “You have your AIs, they do a bunch of algorithmic research, they train a new AI, that new AI is smarter and better and more efficient… that new AI does even faster algorithmic research.” In this world, we could see years of AI progress compressed into months or even weeks.

With AIs now doing all of the work of programming their successors and blowing past the human level, Ryan thinks it would be fairly straightforward for them to take over and disempower humanity, if they thought doing so would better achieve their goals. In the interview he lays out the four most likely approaches for them to take.

The linear progress scenario: You automate your company but progress barely accelerates. Why? Multiple reasons, but the most likely is “it could just be that AI R&D research bottlenecks extremely hard on compute.” You’ve got brilliant AI researchers, but they’re all waiting for experiments to run on the same limited set of chips, so can only make modest progress.

Ryan’s median guess splits the difference: perhaps a 20x acceleration that lasts for a few months or years. Transformative, but less extreme than some at the AI companies imagine.

And his 25th percentile case? Progress “just barely faster” than before. All that automation, and all you’ve been able to do is keep pace.

Unfortunately the data we can observe today is so limited that it leaves us with vast error bars. “We’re extrapolating from a regime that we don’t even understand to a wildly different regime,” Ryan believes, “so no one knows.”

But that huge uncertainty means the explosive growth scenario is a plausible one — and the companies building these systems are spending tens of billions to try to make it happen.

In this extensive interview, Ryan elaborates on the above and the policy and technical response necessary to insure us against the possibility that they succeed — a scenario society has barely begun to prepare for.

Summary, video, and full transcript: https://80k.info/rg25

Recorded February 21, 2025.

Chapters:

  • Cold open (00:00:00)
  • Who’s Ryan Greenblatt? (00:01:10)
  • How close are we to automating AI R&D? (00:01:27)
  • Really, though: how capable are today's models? (00:05:08)
  • Why AI companies get automated earlier than others (00:12:35)
  • Most likely ways for AGI to take over (00:17:37)
  • Would AGI go rogue early or bide its time? (00:29:19)
  • The “pause at human level” approach (00:34:02)
  • AI control over AI alignment (00:45:38)
  • Do we have to hope to catch AIs red-handed? (00:51:23)
  • How would a slow AGI takeoff look? (00:55:33)
  • Why might an intelligence explosion not happen for 8+ years? (01:03:32)
  • Key challenges in forecasting AI progress (01:15:07)
  • The bear case on AGI (01:23:01)
  • The change to “compute at inference” (01:28:46)
  • How much has pretraining petered out? (01:34:22)
  • Could we get an intelligence explosion within a year? (01:46:36)
  • Reasons AIs might struggle to replace humans (01:50:33)
  • Things could go insanely fast when we automate AI R&D. Or not. (01:57:25)
  • How fast would the intelligence explosion slow down? (02:11:48)
  • Bottom line for mortals (02:24:33)
  • Six orders of magnitude of progress... what does that even look like? (02:30:34)
  • Neglected and important technical work people should be doing (02:40:32)
  • What's the most promising work in governance? (02:44:32)
  • Ryan's current research priorities (02:47:48)

Tell us what you thought! https://forms.gle/hCjfcXGeLKxm5pLaA

Video editing: Luke Monsour, Simon Monsour, and Dominic Armstrong
Audio engineering: Ben Cordell, Milo McGuire, and Dominic Armstrong
Music: Ben Cordell
Transcriptions and web: Katy Moore
