Aug 2023
45m 15s

Why Deep Networks and Brains Learn Similar Features with Sophia Sanborn - #644

Sam Charrington
About this episode

Today we’re joined by Sophia Sanborn, a postdoctoral scholar at the University of California, Santa Barbara. In our conversation with Sophia, we explore the concept of universality between neural representations and deep neural networks, and how shared principles of efficiency make it possible to find consistent features across networks and tasks. We also discuss her recent paper on Bispectral Neural Networks, which draws on the Fourier transform and its relation to group theory; how the bispectrum is used to achieve invariance in deep neural networks; how geometric deep learning extends the ideas behind CNNs to other domains; and the similarities in the fundamental structure of artificial and biological neural networks, where applying similar constraints leads both to converge on similar solutions. A toy sketch of the bispectrum’s shift invariance appears after the show notes link below.


The complete show notes for this episode can be found at twimlai.com/go/644.
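To make the invariance idea concrete: the classical bispectrum multiplies pairs of Fourier coefficients against the conjugate coefficient at the summed frequency, so the phase factors introduced by a translation cancel out. Below is a minimal NumPy sketch of the one-dimensional, cyclic-shift case; it illustrates the classical construction, not code from Sanborn's paper, and the function name is ours.

```python
import numpy as np

def bispectrum(x):
    """Classical bispectrum: B[f1, f2] = X[f1] * X[f2] * conj(X[f1 + f2]).

    A cyclic shift of x multiplies X[f] by a phase exp(-2j*pi*f*t/n);
    in the triple product those phases cancel, so B is shift-invariant
    while (unlike the power spectrum) still retaining phase structure.
    """
    X = np.fft.fft(x)
    n = len(X)
    f1, f2 = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return X[f1] * X[f2] * np.conj(X[(f1 + f2) % n])

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
# Invariance check: a cyclic translation leaves the bispectrum unchanged.
assert np.allclose(bispectrum(x), bispectrum(np.roll(x, 5)))
```

Bispectral Neural Networks, as discussed in the episode, generalize this construction by learning the group action rather than fixing it to cyclic translation.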

Up next
Yesterday
Distilling Transformers and Diffusion Models for Robust Edge Use Cases with Fatih Porikli - #738
Today, we're joined by Fatih Porikli, senior director of technology at Qualcomm AI Research, for an in-depth look at several of Qualcomm's accepted papers and demos featured at this year’s CVPR conference. We start with “DiMA: Distilling Multi-modal Large Language Models for Auton ...
1h
Jun 24
Building the Internet of Agents with Vijoy Pandey - #737
Today, we're joined by Vijoy Pandey, SVP and general manager at Outshift by Cisco, to discuss a foundational challenge for the enterprise: how do we make specialized agents from different vendors collaborate effectively? As companies like Salesforce, Workday, and Microsoft all dev ...
56m 13s
Jun 17
LLMs for Equities Feature Forecasting at Two Sigma with Ben Wellington - #736
Today, we're joined by Ben Wellington, deputy head of feature forecasting at Two Sigma. We dig into the team’s end-to-end approach to leveraging AI in equities feature forecasting, covering how they identify and create features, collect and quantify historical data, and build pre ...
59m 31s
Recommended Episodes
Apr 2023
The Power of Graph Neural Networks: Understanding the Future of AI - Part 1/2 (Ep.223)
In this episode, I explore the cutting-edge technology of graph neural networks (GNNs) and how they are revolutionizing the field of artificial intelligence. I break down the complex concepts behind GNNs and explain how they work by modeling the relationships between data points ... A toy message-passing step is sketched below.
27m 40s
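As a concrete companion to that description, here is a minimal message-passing step in NumPy. It is a generic mean-aggregation layer of our own construction, not code from the episode; the variable names and the toy graph are illustrative.

```python
import numpy as np

def message_passing_step(A, H, W):
    """One graph message-passing step: each node averages its neighbors'
    features (plus its own) and applies a shared linear map.

    A: (n, n) adjacency matrix; H: (n, d) node features; W: (d, d_out) weights.
    """
    A_hat = A + np.eye(len(A))              # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # node degrees, for averaging
    return np.tanh((A_hat / deg) @ H @ W)   # aggregate, transform, nonlinearity

# Toy graph: three nodes in a path 0-1-2, with 2-dim features.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
W = np.eye(2)
print(message_passing_step(A, H, W))
```

Stacking several such steps lets information propagate along multi-hop paths, which is how the relationships between data points end up encoded in the node features.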
Apr 2023
The Power of Graph Neural Networks: Understanding the Future of AI - Part 2/2 (Ep.224)
In this episode of our podcast, we dive deep into the fascinating world of Graph Neural Networks. First, we explore Hierarchical Networks, which allow for the efficient representation and analysis of complex graph structures by breaking them down into smaller, more manageable com ...
35m 32s
Jan 2015
Easily Fooling Deep Neural Networks
My guest this week is Anh Nguyen, a PhD student at the University of Wyoming working in the Evolving AI lab. The episode discusses the paper Deep Neural Networks are Easily Fooled by Anh Nguyen, Jason Yosinski, and Jeff Clune. It describes a process for creating images that ...
28m 25s
Oct 2017
The Complexity of Learning Neural Networks
Over the past several years, we have seen many success stories in machine learning brought about by deep learning techniques. While the practical success of deep learning has been phenomenal, the formal guarantees have been lacking. Our current theoretical understanding of the ma ...
38m 51s
Aug 2017
[MINI] Recurrent Neural Networks
RNNs are a class of deep learning models designed to capture sequential behavior. An RNN trains a set of weights which depend not just on new input but also on the previous state of the neural network. This directed cycle allows the training phase to find solutions which rely ... A minimal sketch of this recurrence follows below.
17m 6s
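A minimal sketch of that recurrence, in NumPy rather than any particular framework; the dimensions and names are illustrative, not from the episode.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One step of a vanilla RNN: the new hidden state depends on the
    current input *and* the previous state, i.e. the directed cycle
    described in the episode."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

# Unroll over a toy sequence.
rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 3, 4, 5
W_xh = rng.standard_normal((input_dim, hidden_dim)) * 0.1
W_hh = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1
b_h = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)
for x_t in rng.standard_normal((seq_len, input_dim)):
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)  # state carries sequential context
```

Because h is fed back in at every step, the same weights see the whole sequence, which is what lets training find solutions that depend on earlier inputs.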
May 2020
Understanding Neural Networks
What does it mean to understand a neural network? That’s the question posed in this arXiv paper. Kyle speaks with Tim Lillicrap about this and several other big questions.
34m 43s
Jun 2024
Cameron J. Buckner, "From Deep Learning to Rational Machines" (Oxford UP, 2023)
Artificial intelligence started with programmed computers, where programmers would manually program human expert knowledge into the systems. In sharp contrast, today's artificial neural networks – deep learning – are able to learn from experience, and perform at human-like levels ...
1h 11m
Feb 2020
What is Deep Learning? (Qu’est-ce que le Deep Learning ?)
Deep learning (“apprentissage profond”) is a type of artificial intelligence derived from machine learning (“apprentissage automatique”). Here, the machine is able to learn by itself, unlike in programming, where it merely executes ...
5m 4s
Mar 2021
The Theory of a Thousand Brains
In this episode, we talk with Jeff Hawkins, an entrepreneur and scientist known for inventing some of the earliest handheld computers, the Palm and the Treo, who then turned his career to neuroscience, founding the Redwood Center for Theoretical Neuroscience in 2002 and Numenta ...
39m 36s
Apr 2024
Physics-Informed Neural Networks (PINNs) - Conor Daly | Podcast #120
Physics-Informed Neural Networks (PINNs) integrate known physical laws into neural network learning, particularly for solving differential equations. They embed these laws into the network's loss function, guiding the ... A toy version of such a loss is sketched below.
1h 5m
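To illustrate the loss-function idea, here is a small, self-contained PyTorch sketch, our own toy example rather than code from the tutorial: it trains a network to satisfy the ODE u′(t) = −u(t) with u(0) = 1 by penalizing the equation's residual at random collocation points.

```python
import torch

# Tiny network approximating u(t); the architecture is an illustrative choice.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    t = torch.rand(64, 1, requires_grad=True)  # collocation points in [0, 1]
    u = net(t)
    # du/dt via autograd, so the ODE u' = -u can enter the loss directly.
    du_dt = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    physics_loss = ((du_dt + u) ** 2).mean()           # residual of u' + u = 0
    boundary_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # u(0) = 1
    loss = physics_loss + boundary_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained net should approximate the exact solution u(t) = exp(-t).
print(net(torch.tensor([[0.5]])).item())  # roughly exp(-0.5) ≈ 0.607
```

Note that the differential equation enters only through the loss; no solution data is ever provided, which is the distinguishing feature of the PINN approach.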