Aug 2023
45m 15s

Why Deep Networks and Brains Learn Similar Features with Sophia Sanborn - #644

Sam Charrington
About this episode

Today we’re joined by Sophia Sanborn, a postdoctoral scholar at the University of California, Santa Barbara. In our conversation with Sophia, we explore the concept of universality between neural representations and deep neural networks, and how principles of efficiency make it possible to find consistent features across networks and tasks. We also discuss her recent paper on Bispectral Neural Networks, which builds on the Fourier transform and its relation to group theory; how the bispectrum can be used to achieve invariance in deep neural networks; how geometric deep learning extends the concept of CNNs to other domains; and the similarities in the fundamental structure of artificial and biological neural networks, where applying similar constraints leads to the convergence of their solutions.
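For a feel for the math behind the invariance discussion, here is a minimal NumPy sketch (our illustration, not Sanborn et al.’s implementation; the function and variable names are ours). For the group of circular shifts, the bispectrum B(w1, w2) = F(w1) F(w2) conj(F(w1 + w2)) of a signal’s Fourier transform F is unchanged when the signal is shifted, because the shift-induced phase factors cancel.

    # Minimal sketch: the bispectrum is invariant to circular shifts.
    import numpy as np

    def bispectrum(x):
        """Third-order spectral invariant of a 1-D signal under circular shift."""
        F = np.fft.fft(x)
        n = len(x)
        w1, w2 = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        # F(w1) F(w2) conj(F(w1+w2 mod n)): the shift phases cancel in this product.
        return F[w1] * F[w2] * np.conj(F[(w1 + w2) % n])

    x = np.random.randn(16)
    shifted = np.roll(x, 5)  # the group action: a circular shift
    print(np.allclose(bispectrum(x), bispectrum(shifted)))  # True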


The complete show notes for this episode can be found at twimlai.com/go/644.

Up next
Oct 7
Recurrence and Attention for Long-Context Transformers with Jacob Buckman - #750
Today, we're joined by Jacob Buckman, co-founder and CEO of Manifest AI, to discuss achieving long context in transformers. We discuss the bottlenecks of scaling context length and recent techniques to overcome them, including windowed attention, grouped query attention, and latent ...
57m 23s
Sep 30
The Decentralized Future of Private AI with Illia Polosukhin - #749
In this episode, Illia Polosukhin, a co-author of the seminal "Attention Is All You Need" paper and co-founder of Near AI, joins us to discuss his vision for building private, decentralized, and user-owned AI. Illia shares his unique journey from developing the Transformer architecture ...
1h 5m
Sep 23
Inside Nano Banana 🍌 and the Future of Vision-Language Models with Oliver Wang - #748
Today, we’re joined by Oliver Wang, principal scientist at Google DeepMind and tech lead for Gemini 2.5 Flash Image, better known by its code name, “Nano Banana.” We dive into the development and capabilities of this newly released frontier vision-language model, beginning with the ...
1h 3m
Recommended Episodes
Apr 2023
The Power of Graph Neural Networks: Understanding the Future of AI - Part 1/2 (Ep.223)
In this episode, I explore the cutting-edge technology of graph neural networks (GNNs) and how they are revolutionizing the field of artificial intelligence. I break down the complex concepts behind GNNs and explain how they work by modeling the relationships between data points ...
27m 40s
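To make the “modeling relationships between data points” idea concrete, here is a minimal NumPy sketch of one message-passing layer (our illustration, not code from the episode): each node averages its neighbors’ feature vectors via the adjacency matrix, then applies a learned transform.

    # Minimal sketch of one graph-convolution (message-passing) step.
    import numpy as np

    def gnn_layer(A, H, W):
        """Aggregate neighbor features, then transform.

        A: (n, n) adjacency matrix, H: (n, d) node features, W: (d, d_out) weights.
        """
        A_hat = A + np.eye(len(A))              # add self-loops
        deg = A_hat.sum(axis=1, keepdims=True)  # node degrees for normalization
        H_agg = (A_hat @ H) / deg               # mean over each node's neighborhood
        return np.maximum(H_agg @ W, 0.0)       # linear transform + ReLU

    A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # toy 3-node graph
    H = np.random.randn(3, 4)
    W = np.random.randn(4, 4)
    print(gnn_layer(A, H, W).shape)             # (3, 4)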
Apr 2023
The Power of Graph Neural Networks: Understanding the Future of AI - Part 2/2 (Ep.224)
In this episode of our podcast, we dive deep into the fascinating world of Graph Neural Networks. First, we explore Hierarchical Networks, which allow for the efficient representation and analysis of complex graph structures by breaking them down into smaller, more manageable components ...
35m 32s
Jan 2015
Easily Fooling Deep Neural Networks
My guest this week is Anh Nguyen, a PhD student at the University of Wyoming working in the Evolving AI lab. The episode discusses the paper "Deep Neural Networks Are Easily Fooled" by Anh Nguyen, Jason Yosinski, and Jeff Clune. It describes a process for creating images that ...
28m 25s
Oct 2017
The Complexity of Learning Neural Networks
Over the past several years, we have seen many success stories in machine learning brought about by deep learning techniques. While the practical success of deep learning has been phenomenal, the formal guarantees have been lacking. Our current theoretical understanding of the ma ...
38m 51s
Aug 2017
[MINI] Recurrent Neural Networks
RNNs are a class of deep learning models designed to capture sequential behavior. An RNN trains a set of weights which depend not just on new input but also on the previous state of the neural network. This directed cycle allows the training phase to find solutions which rely on ...
17m 6s
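As a rough illustration of that recurrence (a NumPy sketch with illustrative names, not code from the episode): the hidden state h_t is computed from both the current input x_t and the previous state h_{t-1}, using the same weights at every time step.

    # Minimal sketch of a vanilla RNN step: h_t = tanh(W_x x_t + W_h h_{t-1} + b).
    import numpy as np

    def rnn_step(x_t, h_prev, W_x, W_h, b):
        """One recurrent update: new state depends on input AND previous state."""
        return np.tanh(W_x @ x_t + W_h @ h_prev + b)

    d_in, d_h = 3, 5
    W_x = np.random.randn(d_h, d_in) * 0.1  # input-to-hidden weights
    W_h = np.random.randn(d_h, d_h) * 0.1   # hidden-to-hidden weights (the directed cycle)
    b = np.zeros(d_h)

    h = np.zeros(d_h)                       # initial state
    for x_t in np.random.randn(10, d_in):   # unroll over a length-10 sequence
        h = rnn_step(x_t, h, W_x, W_h, b)
    print(h.shape)                          # (5,)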
May 2020
Understanding Neural Networks
What does it mean to understand a neural network? That’s the question posed in this arXiv paper. Kyle speaks with Tim Lillicrap about this and several other big questions.
34m 43s
Jun 2024
Cameron J. Buckner, "From Deep Learning to Rational Machines" (Oxford UP, 2023)
Artificial intelligence started with programmed computers, where programmers would manually program human expert knowledge into the systems. In sharp contrast, today's artificial neural networks – deep learning – are able to learn from experience and perform at human-like levels ...
1h 11m
Feb 2020
What Is Deep Learning?
Deep learning ("apprentissage profond" in French) is a type of artificial intelligence derived from machine learning ("apprentissage automatique"). Here, the machine is able to learn by itself, unlike in conventional programming, where it merely executes instructions to the letter ...
5m 4s
Mar 2021
The Theory of a Thousand Brains
In this episode, we talk with Jeff Hawkins, an entrepreneur and scientist known for inventing some of the earliest handheld computers, the Palm and the Treo, who then turned his career to neuroscience, founding the Redwood Center for Theoretical Neuroscience in 2002 and Numenta ...
39m 36s
Apr 2024
Physics-Informed Neural Networks (PINNs) - Conor Daly | Podcast #120
Physics-Informed Neural Networks (PINNs) integrate known physical laws into neural network learning, particularly for solving differential equations. They embed these laws into the network's loss function, guiding the learning ...
1h 5m
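As a concrete illustration of embedding a physical law in the loss (an assumed toy setup in PyTorch, not the episode's tutorial code): for the ODE du/dt = -u with u(0) = 1, the loss penalizes the equation's residual at sampled collocation points plus the initial-condition error, so the network is trained to satisfy the physics rather than to fit data.

    # Minimal PINN sketch for du/dt = -u, u(0) = 1 (exact solution: u = exp(-t)).
    import torch

    net = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(2000):
        t = torch.rand(64, 1, requires_grad=True)       # collocation points in [0, 1]
        u = net(t)
        du_dt = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
        residual = du_dt + u                            # du/dt = -u  =>  residual -> 0
        u0 = net(torch.zeros(1, 1))
        loss = (residual ** 2).mean() + ((u0 - 1.0) ** 2).mean()  # physics + IC terms
        opt.zero_grad()
        loss.backward()
        opt.step()

    print(net(torch.tensor([[1.0]])).item())  # should approach exp(-1) ~ 0.368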