Feb 26
AI Trends 2026: OpenClaw Agents, Reasoning LLMs, and More with Sebastian Raschka - #762
In this episode, Sebastian Raschka, independent LLM researcher and author, joins us to break down how the LLM landscape has changed over the past year and what is likely to matter most in 2026. We discuss the shift from raw model scaling to reasoning-focused post-training, infere…
1h 18m
Jan 29
The Evolution of Reasoning in Small Language Models with Yejin Choi - #761
Today, we're joined by Yejin Choi, professor and senior fellow at Stanford University in the Computer Science Department and the Institute for Human-Centered AI (HAI). In this conversation, we explore Yejin's recent work on making small language models reason more effectively. We…
1h 6m
Apr 2023
The Power of Graph Neural Networks: Understanding the Future of AI - Part 2/2 (Ep.224)
In this episode of our podcast, we dive deep into the fascinating world of Graph Neural Networks. First, we explore Hierarchical Networks, which allow for the efficient representation and analysis of complex graph structures by breaking them down into smaller, more mana…
35m 32s
Jun 2024
Cameron J. Buckner, "From Deep Learning to Rational Machines" (Oxford UP, 2023)
Artificial intelligence started with programmed computers, where programmers would manually encode human expert knowledge into the systems. In sharp contrast, today's artificial neural networks – deep learning – are able to learn from experience and perform at human-like levels…
1h 11m
Mar 2021
The Theory of a Thousand Brains
In this episode, we talk with Jeff Hawkins—an entrepreneur and scientist, known for inventing some of the earliest handheld computers, the Palm and the Treo, who then turned his career to neuroscience and founded the Redwood Center for Theoretical Neuroscience in 2002 and Nume…
39m 36s
Today we're joined by Sophia Sanborn, a postdoctoral scholar at the University of California, Santa Barbara. In our conversation with Sophia, we explore the concept of universality between neural representations and deep neural networks, and how these principles of efficiency provide an ability to find consistent features across networks and tasks. We also d…
My guest this week is Anh Nguyen, a PhD student at the University of Wyoming working in the…