Aug 2023
45m 15s

Why Deep Networks and Brains Learn Simil...

Sam Charrington
About this episode
Up next
Feb 26
AI Trends 2026: OpenClaw Agents, Reasoning LLMs, and More with Sebastian Raschka - #762
In this episode, Sebastian Raschka, independent LLM researcher and author, joins us to break down how the LLM landscape has changed over the past year and what is likely to matter most in 2026. We discuss the shift from raw model scaling to reasoning-focused post-training, infere ...
1h 18m
Jan 29
The Evolution of Reasoning in Small Language Models with Yejin Choi - #761
Today, we're joined by Yejin Choi, professor and senior fellow at Stanford University in the Computer Science Department and the Institute for Human-Centered AI (HAI). In this conversation, we explore Yejin’s recent work on making small language models reason more effectively. We ...
1h 6m
Jan 8
Intelligent Robots in 2026: Are We There Yet? with Nikita Rudin - #760
Today, we're joined by Nikita Rudin, co-founder and CEO of Flexion Robotics, to discuss the gap between current robotic capabilities and what’s required to deploy fully autonomous robots in the real world. Nikita explains how reinforcement learning and simulation have driven rapid ...
1h 6m
Recommended Episodes
Apr 2023
The Power of Graph Neural Networks: Understanding the Future of AI - Part 1/2 (Ep.223)
In this episode, I explore the cutting-edge technology of graph neural networks (GNNs) and how they are revolutionizing the field of artificial intelligence. I break down the complex concepts behind GNNs and explain how they work by modeling the relationships between data poin ...
27m 40s
Apr 2023
The Power of Graph Neural Networks: Understanding the Future of AI - Part 2/2 (Ep.224)
In this episode of our podcast, we dive deep into the fascinating world of Graph Neural Networks. First, we explore Hierarchical Networks, which allow for the efficient representation and analysis of complex graph structures by breaking them down into smaller, more mana ...
35m 32s
Jan 2015
Easily Fooling Deep Neural Networks
28m 25s
Oct 2017
The Complexity of Learning Neural Networks
Over the past several years, we have seen many success stories in machine learning brought about by deep learning techniques. While the practical success of deep learning has been phenomenal, the formal guarantees have been lacking. Our current theoretical understanding of the ma ...
38m 51s
Aug 2017
[MINI] Recurrent Neural Networks
RNNs are a class of deep learning models designed to capture sequential behavior. An RNN trains a set of weights which depend not just on new input but also on the previous state of the neural network ...
17m 6s
May 2020
Understanding Neural Networks
34m 43s
Jun 2024
Cameron J. Buckner, "From Deep Learning to Rational Machines" (Oxford UP, 2023)
Artificial intelligence started with programmed computers, where programmers would manually program human expert knowledge into the systems. In sharp contrast, today's artificial neural networks – deep learning – are able to learn from experience, and perform at human-like levels ...
1h 11m
Feb 2020
Qu’est-ce que le Deep Learning ?
Deep learning ("apprentissage profond") is a type of artificial intelligence derived from machine learning ("apprentissage automatique"). Here, the machine is capable of learning on its own, unlike traditional programming, where it merely executes ...
5m 4s
Mar 2021
The Theory of a Thousand Brains
In this episode, we talk with Jeff Hawkins, an entrepreneur and scientist known for inventing some of the earliest handheld computers, the Palm and the Treo, who then turned his career to neuroscience and founded the Redwood Center for Theoretical Neuroscience in 2002 and Nume ...
39m 36s
Apr 2024
Physics-Informed Neural Networks (PINNs) - Conor Daly | Podcast #120
Physics-Informed Neural Networks (PINNs) integrate known physical laws into neural network learning, particularly for solving differential equations. They embed these laws into the network's loss function, guiding the l ...
1h 5m