Sep 16
36m 26s

#434 — Can We Survive AI?

SAM HARRIS
About this episode

Sam Harris speaks with Eliezer Yudkowsky and Nate Soares about their new book, If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI. They discuss the alignment problem, ChatGPT and recent advances in AI, the Turing Test, the possibility of AI developing survival instincts, hallucinations and deception in LLMs, why many prominent voices in tech remain skeptical of the dangers of superintelligent AI, the timeline for superintelligence, real-world consequences of current AI systems, the imaginary line between the internet and reality, why Eliezer and Nate believe superintelligent AI would necessarily end humanity, how we might avoid an AI-driven catastrophe, the Fermi paradox, and other topics.

If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.

Up next
Oct 6
#437 — Two Years Since 10/7
Sam Harris speaks with Dan Senor about the state of the world two years after the October 7th attacks. They discuss the rise of global antisemitism, immigration and the failure of Western nations to contend with the spread of Islam, the dramatic reshaping of the Middle East, the…
20m 8s
Oct 3
#436 — A Crisis of Trust
Sam Harris speaks with Michael Osterholm about his new book, The Big One: How We Must Prepare for Future Deadly Pandemics. They discuss the lessons learned from the COVID-19 pandemic, the major mistakes made in the public health response—including lockdowns, school closures, and…
25m 21s
Oct 2
#435 — The Last Invention
Sam Harris introduces the first episode of The Last Invention, a new podcast series on the hype and fear about the AI revolution, reported by Gregory Warner and Andy Mills. Gregory Warner was a foreign correspondent in Russia and Afghanistan, and the East Africa bureau chief for…
37m 10s
Recommended Episodes
May 2025
251 - Eliezer Yudkowsky: Artificial Intelligence and the End of Humanity
Eliezer Yudkowsky is a decision theorist, computer scientist, and author who co-founded and leads research at the Machine Intelligence Research Institute. He is best known for his work on the alignment problem—how and whether we can ensure that AI is aligned with human values to…
2h 51m
May 2025
OpenAI whistleblower Daniel Kokotajlo on superintelligence and existential risk of AI
How much could our relationship with technology change by 2027? In the last few years, new artificial intelligence tools like ChatGPT and DeepSeek have transformed how we think about work, creativity, even intelligence itself. But tech experts are ringing alarm bells that powerfu…
38m 16s
May 2025
An Interview With the Herald of the Apocalypse
Is artificial intelligence about to take your job? According to Daniel Kokotajlo, the executive director of the A.I. Futures Project, that should be the least of your worries. Kokotajlo was once a researcher for OpenAI, but left after losing confidence in the company’s commitment…
58m 55s
Sep 2024
Yuval Noah Harari: This Election Will Tear The Country Apart! AI Will Control You By 2034! The Dark Truth Behind Meta & X!
Can humanity handle AI or will it be our downfall? Yuval Noah Harari looks back at history to guide us through this uncertain journey ahead. Yuval Noah Harari is a best-selling author, public intellectual and Professor of History at the Hebrew University of Jerusalem. He is the a…
1h 54m
Jan 2022
Ep210 - Mo Gawdat | Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World
In this episode we speak with author & entrepreneur Mo Gawdat about his book "Scary Smart." Artificial intelligence is smarter than humans. It can process information at lightning speed and remain focused on specific tasks without distraction. AI can see into the future, predicting…
1h 4m
Jul 2022
Human justice and machine intelligence | Joanna Bryson
Should we be scared of AI? Looking for a link we mentioned? It's here: https://linktr.ee/philosophyforourtimes Joanna Bryson discusses how she became interested in the ways different species use intelligence, how the typical tropes in science fiction misunderstand AI and the proble…
18m 46s
Jun 2025
Godfather of AI: I Tried to Warn Them, But We’ve Already Lost Control! Geoffrey Hinton
He pioneered AI, now he’s warning the world. Godfather of AI Geoffrey Hinton breaks his silence on the deadly dangers of AI no one is prepared for. Geoffrey Hinton is a leading computer scientist and cognitive psychologist, widely recognised as the ‘Godfather of AI’ for his pione…
1h 30m
Jun 2025
The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future
The race to develop ever-more-powerful AI is creating an unstable dynamic. It could lead us toward either dystopian centralized control or uncontrollable chaos. But there's a third option: a narrow path where technological power is matched with responsibility at every step. Sam Ha…
47m 55s
Feb 2025
#64 Ex-Google Exec Reveals The Shocking Truth About AI with Mo Gawdat
Mo Gawdat is the former Chief Business Officer at Google X, an AI expert, and a best-selling author. He has been recognized for his early whistleblowing on AI's unregulated development and has become one of the most globally consulted experts on the topic. With years of experienc…
2h 9m
Sep 4
Roman Yampolskiy: These Are The Only 5 Jobs That Will Remain In 2030 & Proof We're Living In a Simulation!
WARNING: AI could end humanity, and we’re completely unprepared. Dr. Roman Yampolskiy reveals how AI will take 99% of jobs, why Sam Altman is ignoring safety, and how we’re heading toward global collapse…or even World War III. Dr. Roman Yampolskiy is a leading voice in AI safety…
1h 29m