Aug 2023
45m 15s

Why Deep Networks and Brains Learn Simil...

Sam Charrington
About this episode

Today we’re joined by Sophia Sanborn, a postdoctoral scholar at the University of California, Santa Barbara. In our conversation with Sophia, we explore universality between representations in brains and deep neural networks, and how shared principles of efficiency lead to consistent features across networks and tasks. We also discuss her recent paper on Bispectral Neural Networks, which connects the Fourier transform to group theory and uses the bispectrum to achieve invariance in deep neural networks; how geometric deep learning extends the ideas behind CNNs to other domains; and the structural similarities between artificial and biological neural networks, where applying similar constraints leads their solutions to converge.


The complete show notes for this episode can be found at twimlai.com/go/644.
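The bispectrum mentioned above is a Fourier-domain statistic that discards the phase introduced by translation (and, more generally, by a group action) while retaining the signal's content. A minimal NumPy sketch, not taken from the episode or the paper, showing that the bispectrum of a 1-D signal is unchanged under circular shift:

```python
import numpy as np

def bispectrum(x):
    """Bispectrum B[k1, k2] = X[k1] * X[k2] * conj(X[k1 + k2]) of a 1-D signal."""
    X = np.fft.fft(x)
    n = len(x)
    k = np.arange(n)
    # Index (k1 + k2) wraps modulo n, matching the DFT's periodicity.
    return X[:, None] * X[None, :] * np.conj(X[(k[:, None] + k[None, :]) % n])

rng = np.random.default_rng(0)
x = rng.standard_normal(16)

B1 = bispectrum(x)
B2 = bispectrum(np.roll(x, 5))  # circularly shifted copy of the same signal

# The shift's phase factors cancel, so the two bispectra agree.
assert np.allclose(B1, B2)
```

The cancellation follows from the DFT shift theorem: a shift multiplies X[k] by a phase exp(-2πi·k·t/n), and the product X[k1]·X[k2]·conj(X[k1+k2]) picks up phases that sum to zero.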

Up next
Aug 19
Genie 3: A New Frontier for World Models with Jack Parker-Holder and Shlomi Fruchter - #743
Today, we're joined by Jack Parker-Holder and Shlomi Fruchter, researchers at Google DeepMind, to discuss the recent release of Genie 3, a model capable of generating “playable” virtual worlds. We dig into the evolution of the Genie project and review the current model’s scaled-u…
1h 1m
Aug 12
Closing the Loop Between AI Training and Inference with Lin Qiao - #742
In this episode, we're joined by Lin Qiao, CEO and co-founder of Fireworks AI. Drawing on key lessons from her time building PyTorch, Lin shares her perspective on the modern generative AI development lifecycle. She explains why aligning training and inference systems is essentia…
1h 1m
Jul 29
Context Engineering for Productive AI Agents with Filip Kozera - #741
In this episode, Filip Kozera, founder and CEO of Wordware, explains his approach to building agentic workflows where natural language serves as the new programming interface. Filip breaks down the architecture of these "background agents," explaining how they use a reflection lo…
46m 1s
Recommended Episodes
Apr 2023
The Power of Graph Neural Networks: Understanding the Future of AI - Part 1/2 (Ep.223)
In this episode, I explore the cutting-edge technology of graph neural networks (GNNs) and how they are revolutionizing the field of artificial intelligence. I break down the complex concepts behind GNNs and explain how they work by modeling the relationships between data points…
27m 40s
Apr 2023
The Power of Graph Neural Networks: Understanding the Future of AI - Part 2/2 (Ep.224)
In this episode of our podcast, we dive deep into the fascinating world of Graph Neural Networks. First, we explore Hierarchical Networks, which allow for the efficient representation and analysis of complex graph structures by breaking them down into smaller, more manageable com…
35m 32s
Jan 2015
Easily Fooling Deep Neural Networks
My guest this week is Anh Nguyen, a PhD student at the University of Wyoming working in the Evolving AI lab. The episode discusses the paper Deep Neural Networks are Easily Fooled by Anh Nguyen, Jason Yosinski, and Jeff Clune. It describes a process for creating images that…
28m 25s
Oct 2017
The Complexity of Learning Neural Networks
Over the past several years, we have seen many success stories in machine learning brought about by deep learning techniques. While the practical success of deep learning has been phenomenal, the formal guarantees have been lacking. Our current theoretical understanding of the ma…
38m 51s
Aug 2017
[MINI] Recurrent Neural Networks
RNNs are a class of deep learning models designed to capture sequential behavior. An RNN trains a set of weights which depend not just on new input but also on the previous state of the neural network. This directed cycle allows the training phase to find solutions which rely o…
17m 6s
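The recurrence described in the RNN episode above, where the hidden state depends on both the new input and the previous state, can be sketched in a few lines of NumPy. The layer sizes and random initialization here are illustrative assumptions, not from the episode:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5

# Weights applied to the new input, the previous hidden state, and a bias.
W_xh = 0.1 * rng.standard_normal((n_hidden, n_in))
W_hh = 0.1 * rng.standard_normal((n_hidden, n_hidden))
b = np.zeros(n_hidden)

def rnn_forward(xs):
    """Run the recurrence h_t = tanh(W_xh @ x_t + W_hh @ h_{t-1} + b)."""
    h = np.zeros(n_hidden)
    states = []
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h + b)  # directed cycle: h feeds back in
        states.append(h)
    return np.stack(states)

seq = rng.standard_normal((4, n_in))  # a sequence of 4 input vectors
H = rnn_forward(seq)                  # hidden state at each time step, shape (4, 5)
```

The `W_hh @ h` term is the "directed cycle" the description referses to: it is what lets the network carry information forward across time steps.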
May 2020
Understanding Neural Networks
What does it mean to understand a neural network? That’s the question posed in this arXiv paper. Kyle speaks with Tim Lillicrap about this and several other big questions.
34m 43s
Jun 2024
Cameron J. Buckner, "From Deep Learning to Rational Machines" (Oxford UP, 2023)
Artificial intelligence started with programmed computers, where programmers would manually program human expert knowledge into the systems. In sharp contrast, today's artificial neural networks – deep learning – are able to learn from experience, and perform at human-like levels…
1h 11m
Feb 2020
Qu’est-ce que le Deep Learning ?
Deep learning (“apprentissage profond”) is a type of artificial intelligence derived from machine learning (“apprentissage automatique”). Here, the machine is capable of learning on its own, unlike in conventional programming, where it merely executes…
5m 4s
Mar 2021
The Theory of a Thousand Brains
In this episode, we talk with Jeff Hawkins—an entrepreneur and scientist, known for inventing some of the earliest handheld computers, the Palm and the Treo, who then turned his career to neuroscience and founded the Redwood Center for Theoretical Neuroscience in 2002 and Numenta ... Show More
39m 36s
Apr 2024
Physics-Informed Neural Networks (PINNs) - Conor Daly | Podcast #120
Physics-Informed Neural Networks (PINNs) integrate known physical laws into neural network learning, particularly for solving differential equations. They embed these laws into the network's loss function, guiding the l…
1h 5m
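The loss construction described in the PINN episode above, a data term plus a physics residual enforced at collocation points, can be sketched as follows. The ODE u′ = −u, the central-difference derivative, and the untrained toy network are illustrative assumptions, not from the tutorial:

```python
import numpy as np

# Tiny untrained MLP u_theta(x): 1 -> 8 -> 1 with tanh activation.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 1)), np.zeros(8)
W2, b2 = rng.standard_normal((1, 8)), np.zeros(1)

def u(x):
    """Evaluate the network at a 1-D array of points x."""
    h = np.tanh(W1 @ x[None, :] + b1[:, None])
    return (W2 @ h + b2[:, None]).ravel()

# Collocation points where the physical law u'(x) = -u(x) is enforced.
xs = np.linspace(0.0, 1.0, 50)
eps = 1e-4
du = (u(xs + eps) - u(xs - eps)) / (2 * eps)  # central-difference u'(x)
physics_loss = np.mean((du + u(xs)) ** 2)     # residual of u' + u = 0

# Boundary condition u(0) = 1 enters as an ordinary data term.
data_loss = (u(np.array([0.0]))[0] - 1.0) ** 2

# The total loss a PINN would minimize over the network weights.
loss = data_loss + physics_loss
```

Real PINN implementations compute u′ with automatic differentiation rather than finite differences; the point here is only how the governing equation is embedded in the loss alongside the data term.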