Dec 2024
2h 25m

#32 - Scott Aaronson - The Race to AGI a...

Liv Boeree
About this episode

How fast is the AI race really going? What is the current state of Quantum Computing? What actually *is* the P vs NP problem? Former OpenAI researcher and theoretical computer scientist Scott Aaronson joins Liv and Igor to discuss everything quantum, AI and consciousness. We hear about his experience working on OpenAI's "superalignment" team, whether quantum computers might break Bitcoin, the state of University Admissions, and even a proposal for a new religion! Strap in for a fascinating conversation that bridges deep theory with pressing real-world concerns about our technological future.


Chapters:

1:30 - Working at OpenAI
4:23 - His Approaches to AI Alignment
6:23 - Watermarking & Detection of AI content
19:15 - P vs. NP
27:11 - The Current State of AI Safety
37:38 - Bad "Just-a-ism" Arguments around LLMs
48:25 - What Sets Human Creativity Apart from AI
55:30 - A Religion for AGI?
1:00:49 - More Moral Philosophy
1:05:24 - The AI Arms Race
1:11:08 - The Government Intervention Dilemma
1:23:28 - The Current State of Quantum Computing
1:36:25 - Will QC destroy Cryptography?
1:48:55 - Politics on College Campuses
2:03:11 - Scott's Childhood & Relationship with Competition
2:23:25 - Rapid-fire Predictions


Links:

♾️ Scott’s Blog: https://scottaaronson.blog/

♾️ Scott’s Book: https://www.amazon.com/Quantum-Computing-since-Democritus-Aaronson/dp/0521199565

♾️ QIC at UT Austin: https://www.cs.utexas.edu/~qic/


Credits:

♾️  Hosted by Liv Boeree and Igor Kurganov

♾️  Produced by Liv Boeree

♾️  Post-Production by Ryan Kessler


The Win-Win Podcast:

Poker champion Liv Boeree takes to the interview chair to tease apart the complexities of one of the most fundamental parts of human nature: competition. Liv is joined by top philosophers, gamers, artists, technologists, CEOs, scientists, athletes and more to understand how competition manifests in their world, and how to change seemingly win-lose games into Win-Wins.


#WinWinPodcast #QuantumComputing #AISafety #LLM
