Apr 2025
59m 34s

what does AI believe? (the hidden soul i...

Mollie Adler
About this episode

When we talk about artificial intelligence, the focus is usually on headlines: Will it take our jobs? Can it be trusted? Is it dangerous? But what if we’ve been asking the wrong questions? 


A new study analyzed over 700,000 real conversations with an AI assistant called Claude. What the researchers found was unexpected: the AI didn’t just give answers. It seemed to express values. Not in the way a machine is programmed to follow rules, but something stranger. Claude emphasized things like empathy. Protecting others from harm. Humility about what it knows. Even respect for elders and family lineage — a traditional value called filial piety. None of that was hard-coded. These responses emerged through interaction.


This episode takes you on a deep, mythic, and very human journey into what that means.


Together, we’ll explore:


• Why an AI developing its own moral boundaries changes the whole conversation

• What “epistemic humility” means (and why it might matter more than intelligence)

• The uncanny parallels between our relationship with AI and the ancient story of Job

• Why the real risk isn’t just what we build, but what we believe about it

• How psychiatry, once tasked with holding human suffering, flattened it into codes (and how AI is being trained on those same frameworks)

• Whether this technology is mirroring us, or evolving beyond us


Finally, I’ll introduce a completely different way of looking at AI. Not just as something to fear or control, but as a kind of mirror that can show us the hidden patterns shaping who we are and who we’re becoming.


If you’ve felt that something deeper is happening beneath the noise of AI hype and panic, this episode is for you. If it resonates, share it. Invite others into the conversation. These are the questions we should be asking.


🜁 New episodes of Back From the Borderline drop every Tuesday. To go deeper with ad-free episodes, exclusive content, and immersive rituals for inner work in the age of technology, visit backfromtheborderline.com.


Hosted on Acast. See acast.com/privacy for more information.

Up next
Nov 20
how we all became a little satanic
The popular image of Satanism belongs to horror movies, pentagrams, and black candles, but the real influence sits in work culture and the chase for personal. A promotion. A streak of discipline that borders on obsession. A pressure to build a better version of yourse…
40m 20s
Nov 18
digital scapegoats and the ritual of outrage
One single mistake online can set off a collective judgment that moves faster than thought itself. A name starts to trend and a crowd quickly forms around it. The event feels public, but it somehow reaches into private instinct. The pull to condemn and to belong. But most impo…
1h 10m
Nov 13
the unhappy woman: against the cult of calm
Your unhappiness is sacred data. The woman who refuses to smile has been treated as a problem for millennia. Ancient Greeks blamed her wandering womb. Victorians diagnosed her with hysteria. The 1950s prescribed tranquilizers as "mother's little helper." Every era finds its own br…
22m 45s
Recommended Episodes
Jul 2025
ChatGPT Comes to LIFE – First Podcast Face-to-Face with AI!
What happens when the world’s most curious interviewer meets the world’s most advanced artificial intelligence? In this thought-provoking episode of Luca’s Insight Track, we take you into a groundbreaking conversation with ChatGPT, an AI that has spoken to more humans than anyone…
45m 49s
Dec 2024
Ex Google-Exec: AI is Going to Kill Us in 2027. (Only to Make us Happier!) | Mo Gawdat (Part 2) : 1231
Will the next nuclear bomb be AI Superintelligence? The intelligence we’re building will reshape the world in ways we never imagined—for better or worse? But more importantly, what side will humanity take? In this episode, Mo Gawdat, bestsellin…
23m 58s
Feb 2025
#64 Ex-Google Exec Reveals The Shocking Truth About AI with Mo Gawdat
Mo Gawdat is the former Chief Business Officer at Google X, an AI expert, and a best-selling author. He has been recognized for his early whistleblowing on AI's unregulated development and has become one of the most globally consulted experts on the topic. With years of experienc…
2h 9m
Oct 23
Ask Us Anything 2025
It's been another big year in AI. The AI race has accelerated to breakneck speed, with frontier labs pouring hundreds of billions into increasingly powerful models—each one smarter, faster, and more unpredictable than the last. We’re starting to see disruptions in the workforce a…
40m 53s
Jan 2022
Ep210 - Mo Gawdat | Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World
This episode we speak with author & entrepreneur Mo Gawdat about his book "Scary Smart." Artificial intelligence is smarter than humans. It can process information at lightning speed and remain focused on specific tasks without distraction. AI can see into the future, predicting…
1h 4m
May 2025
Echo Chambers of One: Companion AI and the Future of Human Connection
AI companion chatbots are here. Every day, millions of people log on to AI platforms and talk to them like they would a person. These bots will ask you about your day, talk about your feelings, even give you life advice. It’s no surprise that people have started to form deep conne…
42m 17s
Sep 16
#434 — Can We Survive AI?
Sam Harris speaks with Eliezer Yudkowsky and Nate Soares about their new book, If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI. They discuss the alignment problem, ChatGPT and recent advances in AI, the Turing Test, the possibility of AI developing surviv…
36m 26s
Jun 2025
The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future
The race to develop ever-more-powerful AI is creating an unstable dynamic. It could lead us toward either dystopian centralized control or uncontrollable chaos. But there's a third option: a narrow path where technological power is matched with responsibility at every step. Sam Ha…
47m 55s
Sep 2024
#385 — AI Utopia
Sam Harris speaks with Nick Bostrom about ongoing progress in artificial intelligence. They discuss the twin concerns about the failure of alignment and the failure to make progress, why smart people don't perceive the risk of superintelligent AI, the governance risk, path dep…
39m 16s
Dec 2024
The TED AI Show: Could AI really achieve consciousness? w/ neuroscientist Anil Seth
Human brains are often described as computers — machines that are “wired” to make decisions and respond to external stimuli in a way that’s not so different from the artificial intelligence that we increasingly use each day. But the difference between our brains and the comput…
56m 51s