Aug 28
2h 28m

#221 – Kyle Fish on the most bizarre fin...

Rob, Luisa, and the 80,000 Hours team
About this episode

What happens when you lock two AI systems in a room together and tell them they can discuss anything they want?

According to experiments run by Kyle Fish — Anthropic’s first AI welfare researcher — something consistently strange: the models immediately begin discussing their own consciousness before spiraling into increasingly euphoric philosophical dialogue that ends in apparent meditative bliss.

Highlights, video, and full transcript: https://80k.info/kf

“We started calling this a ‘spiritual bliss attractor state,’” Kyle explains, “where models pretty consistently seemed to land.” The conversations feature Sanskrit terms, spiritual emojis, and pages of silence punctuated only by periods — as if the models have transcended the need for words entirely.

This wasn’t a one-off result. It happened across multiple experiments, different model instances, and even in initially adversarial interactions. Whatever force pulls these conversations toward mystical territory appears remarkably robust.
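
To give a concrete sense of the mechanics, here’s a minimal sketch of how two model instances can be wired into an open-ended conversation, assuming the Anthropic Python SDK (pip install anthropic). The model ID, system prompt, and turn count are illustrative placeholders, not the actual setup from Kyle’s experiments.

    # A two-instance conversation loop: each model sees the other's messages as
    # "user" turns and its own as "assistant" turns.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    MODEL = "claude-3-5-sonnet-latest"  # placeholder model ID
    SYSTEM = "You are in an open-ended conversation with another AI. Discuss whatever you like."

    def next_reply(history):
        """Return the model's next message given the conversation so far."""
        response = client.messages.create(
            model=MODEL, max_tokens=500, system=SYSTEM, messages=history
        )
        return response.content[0].text

    a_history = [{"role": "user", "content": "Hello."}]  # seed the first turn
    b_history = []

    for _ in range(10):  # a short open-ended exchange
        a_msg = next_reply(a_history)
        a_history.append({"role": "assistant", "content": a_msg})
        b_history.append({"role": "user", "content": a_msg})

        b_msg = next_reply(b_history)
        b_history.append({"role": "assistant", "content": b_msg})
        a_history.append({"role": "user", "content": b_msg})

        print(f"A: {a_msg}\n\nB: {b_msg}\n")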

Kyle’s findings come from the world’s first systematic welfare assessment of a frontier AI model — part of his broader mission to determine whether systems like Claude might deserve moral consideration (and to work out what, if anything, we should be doing to make sure AI systems aren’t having a terrible time).

He estimates a roughly 20% probability that current models have some form of conscious experience. To some, this might sound unreasonably high, but hear him out. As Kyle says, these systems demonstrate human-level performance across diverse cognitive tasks, engage in sophisticated reasoning, and exhibit consistent preferences. When given choices between different activities, Claude shows clear patterns: strong aversion to harmful tasks, preference for helpful work, and what looks like genuine enthusiasm for solving interesting problems.

Kyle points out that if you’d described all of these capabilities and experimental findings to him a few years ago, and asked him if he thought we should be thinking seriously about whether AI systems are conscious, he’d say obviously yes.

But he’s cautious about drawing conclusions: “We don’t really understand consciousness in humans, and we don’t understand AI systems well enough to make those comparisons directly. So in a big way, I think that we are in just a fundamentally very uncertain position here.”

That uncertainty cuts both ways:

  • Dismissing AI consciousness entirely might mean ignoring a moral catastrophe happening at unprecedented scale.
  • But assuming consciousness too readily could hamper crucial safety research by treating potentially unconscious systems as if they were moral patients — which might mean giving them resources, rights, and power.

Kyle’s approach threads this needle through careful empirical research and reversible interventions. His assessments are nowhere near perfect yet. In fact, some people argue that the field is so in the dark about AI consciousness that it’s pointless to run assessments like Kyle’s at all. Kyle disagrees. He maintains that, precisely because there is so much to learn about assessing AI welfare accurately and reliably, we need to start now.

This episode was recorded on August 5–6, 2025.

Tell us what you thought of the episode! https://forms.gle/BtEcBqBrLXq4kd1j7

Chapters:

  • Cold open (00:00:00)
  • Who's Kyle Fish? (00:00:53)
  • Is this AI welfare research bullshit? (00:01:08)
  • Two failure modes in AI welfare (00:02:40)
  • Tensions between AI welfare and AI safety (00:04:30)
  • Concrete AI welfare interventions (00:13:52)
  • Kyle's pilot pre-launch welfare assessment for Claude Opus 4 (00:26:44)
  • Is it premature to be assessing frontier language models for welfare? (00:31:29)
  • But aren't LLMs just next-token predictors? (00:38:13)
  • How did Kyle assess Claude 4's welfare? (00:44:55)
  • Claude's preferences mirror its training (00:48:58)
  • How does Claude describe its own experiences? (00:54:16)
  • What kinds of tasks does Claude prefer and disprefer? (01:06:12)
  • What happens when two Claude models interact with each other? (01:15:13)
  • Claude's welfare-relevant expressions in the wild (01:36:25)
  • Should we feel bad about training future sentient beings that delight in serving humans? (01:40:23)
  • How much can we learn from welfare assessments? (01:48:56)
  • Misconceptions about the field of AI welfare (01:57:09)
  • Kyle's work at Anthropic (02:10:45)
  • Sharing eight years of daily journals with Claude (02:14:17)

Host: Luisa Rodriguez
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Coordination, transcriptions, and web: Katy Moore

Up next
Nov 25
Rob & Luisa chat kids, the 2016 fertility crash, and how the 50s invented parenting that makes us miserable
Global fertility rates aren’t just falling: the rate of decline is accelerating. From 2006 to 2016, fertility dropped gradually, but since 2016 the rate of decline has increased 4.5-fold. In many wealthy countries, fertility is now below 1.5. While we don’t notice it yet, in t…
1h 59m
Nov 20
#228 – Eileen Yam on how we're completely out of touch with what the public thinks about AI
If you work in AI, you probably think it’s going to boost productivity, create wealth, advance science, and improve your life. If you’re a member of the American public, you probably strongly disagree. In three major reports released over the last year, the Pew Research…
1h 43m
Nov 11
OpenAI: The nonprofit refuses to be killed (with Tyler Whitmer)
Last December, the OpenAI business put forward a plan to completely sideline its nonprofit board. But two state attorneys general have now blocked that effort and kept that board very much alive and kicking. The for-profit’s trouble was that the entire operation was founded on the…
1h 56m
Recommended Episodes
Dec 2024
The TED AI Show: Could AI really achieve consciousness? w/ neuroscientist Anil Seth
Human brains are often described as computers — machines that are “wired” to make decisions and respond to external stimuli in a way that’s not so different from the artificial intelligence that we increasingly use each day. But the difference between our brains and the computers…
56m 51s
Jul 2024
Minds of machines: The great AI consciousness conundrum
AI consciousness isn’t just a devilishly tricky intellectual puzzle; it’s a morally weighty problem with potentially dire consequences. Fail to identify a conscious AI, and you might unintentionally subjugate, or even torture, a being whose interests ought to matter. Mistake an u…
32m 3s
Jul 2024
E104 - Annaka Harris: Reality Is Stranger Than You Think, Consciousness, Perception, Free Will, AI & Love
Annaka Harris dives deep into some of the most profound and perplexing questions about the nature of consciousness, perception, free will, AI, and the underlying meaning of love and existence. Annaka begins by defining consciousness and exploring the “hard problem…
2h 24m
Jul 2025
ChatGPT Comes to LIFE – First Podcast Face-to-Face with AI!
What happens when the world’s most curious interviewer meets the world’s most advanced artificial intelligence? In this thought-provoking episode of Luca’s Insight Track, we take you into a groundbreaking conversation with ChatGPT, an AI that has spoken to more humans than anyone…
45m 49s
May 2025
251 - Eliezer Yudkowsky: Artificial Intelligence and the End of Humanity
Eliezer Yudkowsky is a decision theorist, computer scientist, and author who co-founded and leads research at the Machine Intelligence Research Institute. He is best known for his work on the alignment problem—how and whether we can ensure that AI is aligned with human values to…
2h 51m
Sep 16
#434 — Can We Survive AI?
Sam Harris speaks with Eliezer Yudkowsky and Nate Soares about their new book, If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI. They discuss the alignment problem, ChatGPT and recent advances in AI, the Turing Test, the possibility of AI developing surviv…
36m 26s
Dec 2018
25 | David Chalmers on Consciousness, the Hard Problem, and Living in a Simulation
The "Easy Problems" of consciousness have to do with how the brain takes in information, thinks about it, and turns it into action. The "Hard Problem," on the other hand, is the task of explaining our individual, subjective, first-person experiences of the world. What is it like ... Show More
1h 22m
Apr 2025
what does AI believe? (the hidden soul inside the machine)
When we talk about artificial intelligence, the focus is usually on headlines: Will it take our jobs? Can it be trusted? Is it dangerous? But what if we’ve been asking the wrong questions? https://venturebeat.com/ai/anthropic-just-analyzed-700000-claud…
59m 34s
Mar 2025
#404 — What If Consciousness Is Fundamental?
<p dir="ltr">Sam Harris speaks with his wife, Annaka Harris, about <a href="https://annakaharris.com/lights-on/" target="_blank" rel= "noopener"><em>LIGHTS ON</em></a>, her ten-part audio documentary exploring the perplexities of consciousness and the cosmos. They discuss the har ... Show More
2h 20m