Oct 2
2h 31m

There's a cheap and low-tech way to save...

Rob, Luisa, and the 80,000 Hours team
About this episode

Conventional wisdom is that safeguarding humanity from the worst biological risks — microbes optimised to kill as many as possible — is difficult bordering on impossible, making bioweapons humanity’s single greatest vulnerability. Andrew Snyder-Beattie thinks conventional wisdom could be wrong.

Andrew’s job at Open Philanthropy is to spend hundreds of millions of dollars to protect as much of humanity as possible in the worst-case scenarios — those with fatality rates near 100% and the collapse of technological civilisation a live possibility.

Video, full transcript, and links to learn more: https://80k.info/asb

As Andrew lays out, there are several ways this could happen, including:

  • A national bioweapons programme gone wrong, in particular in Russia or North Korea
  • AI advances making it easier for terrorists or a rogue AI to release highly engineered pathogens
  • Mirror bacteria that can evade the immune systems of not only humans, but many animals and potentially plants as well

Most efforts to combat these extreme biorisks have focused on either prevention or new high-tech countermeasures. But prevention may well fail, and high-tech approaches can’t scale to protect billions of people when no sane person is willing to leave their home and we’re just weeks from economic collapse.

So Andrew and his biosecurity research team at Open Philanthropy have been seeking an alternative approach. They’re proposing a four-pillar plan using simple technology that could save most people, and that is cheap enough to be prepared without government support. Andrew is hiring for a range of roles to make it happen — from manufacturing and logistics experts to global health specialists to policymakers and other ambitious entrepreneurs — as well as programme associates to join Open Philanthropy’s biosecurity team (apply by October 20!).

Fundamentally, organisms that small have no way to penetrate physical barriers or shield themselves from UV, heat, or chemical poisons. We now know how to make highly effective ‘elastomeric’ face masks that cost $10, can sit in storage for 20 years, and can be used for six months straight without changing the filter. Any rich country could trivially stockpile enough to cover all essential workers.
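To get a feel for the scale, here's a back-of-envelope sketch of what such a stockpile could cost. The $10 unit price and 20-year shelf life are the figures above; the population size and the share of workers counted as essential are illustrative assumptions.

```python
# Back-of-envelope: stockpiling elastomeric masks for essential workers.
# Unit cost and shelf life come from the episode; population and the
# essential-worker share are illustrative assumptions.

POPULATION = 330_000_000    # assumption: a US-sized rich country
ESSENTIAL_SHARE = 0.15      # assumption: ~15% of people are essential workers
UNIT_COST_USD = 10          # from the episode: ~$10 per elastomeric mask
SHELF_LIFE_YEARS = 20       # from the episode: ~20 years in storage

masks = POPULATION * ESSENTIAL_SHARE
upfront = masks * UNIT_COST_USD
per_year = upfront / SHELF_LIFE_YEARS

print(f"Masks needed:  {masks:,.0f}")
print(f"Upfront cost:  ${upfront:,.0f}")
print(f"Annualised:    ${per_year:,.0f}/year over the shelf life")
```

On those assumptions the bill is roughly $500 million upfront, about $25 million a year amortised over the shelf life, which is the sense in which stockpiling is 'trivial' for a rich country.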

People can’t wear masks 24/7, but fortunately propylene glycol — already found in vapes and smoke machines — is astonishingly good at killing microbes in the air. And, being a common chemical input, industry already produces enough of the stuff to cover every indoor space we need at all times.
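For a rough sense of the quantities involved, here is a sketch. The episode gives no dosing figures, so the target air concentration and ventilation rate below are placeholder assumptions, included purely to show the shape of the arithmetic.

```python
# Sketch: propylene glycol (PG) vapour needed to treat one indoor space.
# The episode gives no dosing figure; every parameter here is an
# illustrative assumption, not a recommendation.

ROOM_VOLUME_M3 = 250      # assumption: a large classroom or small office floor
TARGET_MG_PER_M3 = 50     # assumption: placeholder target vapour concentration
AIR_CHANGES_PER_HOUR = 4  # assumption: vapour is lost as ventilation replaces air

initial_dose_mg = ROOM_VOLUME_M3 * TARGET_MG_PER_M3       # reach the target once
hourly_topup_mg = initial_dose_mg * AIR_CHANGES_PER_HOUR  # hold it against ventilation
daily_use_kg = hourly_topup_mg * 24 / 1e6

print(f"Initial dose:  {initial_dose_mg / 1000:.1f} g")
print(f"Daily top-up:  {daily_use_kg:.1f} kg/day per room")
```

On these placeholder numbers a single room consumes on the order of a kilogram per day; the claim above is that PG is already produced industrially at such scale that demand like this is easily met.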

Add to this the wastewater monitoring and metagenomic sequencing that will detect the most dangerous pathogens before they have a chance to wreak havoc, and we might just buy ourselves enough time to develop the cure we’ll need to come out alive.
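The value of early detection is easiest to see with a toy growth model. None of the numbers below come from the episode; the doubling time and both detection thresholds are assumptions chosen only to show the shape of the calculation.

```python
import math

# Toy model: lead time from wastewater surveillance over clinical detection,
# assuming clean exponential growth. All parameters are illustrative
# assumptions, not figures from the episode.

DOUBLING_TIME_DAYS = 3.0    # assumption: a fast-spreading engineered pathogen
DETECT_PREVALENCE = 1e-5    # assumption: wastewater flags ~1 infection per 100,000
CLINICAL_PREVALENCE = 1e-3  # assumption: clinical signal emerges at ~0.1% infected

# Under exponential growth, lead time depends only on the ratio of the two
# detection thresholds, measured in doublings.
doublings_gained = math.log2(CLINICAL_PREVALENCE / DETECT_PREVALENCE)
lead_time_days = doublings_gained * DOUBLING_TIME_DAYS

print(f"Doublings gained: {doublings_gained:.1f}")
print(f"Lead time:        ~{lead_time_days:.0f} days before clinical detection")
```

Because lead time scales with the logarithm of the threshold ratio, a surveillance system 100 times more sensitive than clinical detection buys weeks rather than days here: time that the cure-development pillar depends on.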

Has everyone been wrong? Is biology actually defence dominant rather than offence dominant? Is this plan crazy — or so crazy it just might work?

That’s what host Rob Wiblin and Andrew Snyder-Beattie explore in this in-depth conversation.


What did you think of the episode? https://forms.gle/66Hw5spgnV3eVWXa6

Chapters:

  • Cold open (00:00:00)
  • Who's Andrew Snyder-Beattie? (00:01:23)
  • It could get really bad (00:01:57)
  • The worst-case scenario: mirror bacteria (00:08:58)
  • To actually work, a solution has to be low-tech (00:17:40)
  • Why ASB works on biorisks rather than AI (00:20:37)
  • Plan A is prevention. But it might not work. (00:24:48)
  • The “four pillars” plan (00:30:36)
  • ASB is hiring now to make this happen (00:32:22)
  • Everyone was wrong: biorisks are defence dominant in the limit (00:34:22)
  • Pillar 1: A wall between the virus and your lungs (00:39:33)
  • Pillar 2: Biohardening buildings (00:54:57)
  • Pillar 3: Immediately detecting the pandemic (01:13:57)
  • Pillar 4: A cure (01:27:14)
  • The plan's biggest weaknesses (01:38:35)
  • If it's so good, why are you the only group to suggest it? (01:43:04)
  • Would chaos and conflict make this impossible to pull off? (01:45:08)
  • Would rogue AI make bioweapons? Would other AIs save us? (01:50:05)
  • We can feed the world even if all the plants die (01:56:08)
  • Could a bioweapon make the Earth uninhabitable? (02:05:06)
  • Many open roles to solve bio-extinction — and you don’t necessarily need a biology background (02:07:34)
  • Career mistakes ASB thinks are common (02:16:19)
  • How to protect yourself and your family (02:28:21)

This episode was recorded on August 12, 2025

Video editing: Simon Monsour and Luke Monsour
Audio engineering: Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: CORBIT
Camera operator: Jake Morris
Coordination, transcriptions, and web: Katy Moore
