Oct 2017
38m 51s

The Complexity of Learning Neural Networks

Kyle Polich
About this episode

Over the past several years, deep learning techniques have produced many success stories in machine learning. Yet while deep learning's practical success has been phenomenal, formal guarantees have been lacking: our theoretical understanding of the techniques central to the ongoing big-data revolution remains, at best, far from sufficient for rigorous analysis. In this episode of Data Skeptic, host Kyle Polich welcomes guest John Wilmes, a mathematics postdoctoral researcher at Georgia Tech, to discuss the efficiency of neural network learning through the lens of complexity theory.

Up next
Feb 27
Collective Altruism in Recommender Systems
Ekaterina (Kat) Fedorova from MIT EECS joins us to discuss strategic learning in recommender systems—what happens when users collectively coordinate to game recommendation algorithms. Kat's research reveals surprising findings: algorithmic "protest movements" can paradoxically he …
54m 35s
Feb 18
Niche vs Mainstream
Anas Buhayh discusses multi-stakeholder fairness in recommender systems and the S'mores framework—a simulation allowing users to choose between mainstream and niche algorithms. His research shows specialized recommenders improve utility for niche users while raising questions abo …
34m 10s
Feb 2
Healthy Friction in Job Recommender Systems
In this episode, host Kyle Polich speaks with Roan Schellingerhout, a fourth-year PhD student at Maastricht University, about explainable multi-stakeholder recommender systems for job recruitment. Roan discusses his research on creating AI-powered job matching systems that balanc …
26m 37s
Recommended Episodes
Aug 2023
Why Deep Networks and Brains Learn Similar Features with Sophia Sanborn - #644
Today we’re joined by Sophia Sanborn, a postdoctoral scholar at the University of California, Santa Barbara. In our conversation with Sophia, we explore the concept of universality between neural representations and deep neural networks, and how these principles of efficiency pro …
45m 15s
Aug 2021
Adaptivity in Machine Learning with Samory Kpotufe - #512
Today we’re joined by Samory Kpotufe, an associate professor at Columbia University and program chair of the 2021 Conference on Learning Theory (COLT). In our conversation with Samory, we explore his research at the intersection of machine learning, statistics, and learning the …
49m 58s
Apr 2023
The Power of Graph Neural Networks: Understanding the Future of AI - Part 1/2 (Ep.223)
In this episode, I explore the cutting-edge technology of graph neural networks (GNNs) and how they are revolutionizing the field of artificial intelligence. I break down the complex concepts behind GNNs and explain how they work by modeling the relationships between data poin …
27m 40s
May 2018
Practical Deep Learning with Rachel Thomas - TWiML Talk #138
In this episode, I'm joined by Rachel Thomas, founder and researcher at Fast AI. If you’re not familiar with Fast AI, the company offers a series of courses including Practical Deep Learning for Coders, Cutting Edge Deep Learning for Coders and Rachel’s Computational Linear Algeb …
44m 19s
Dec 2023
SE Radio 594: Sean Moriarity on Deep Learning with Elixir and Axon
Sean Moriarity, creator of the Axon deep learning framework, co-creator of the Nx library, and author of Machine Learning in Elixir and Genetic Algorithms in Elixir, published by the Pragmatic Bookshelf, speaks with SE Radio host …
57m 43s
Jul 2023
Moore’s law in peril and the future of computing
Gordon Moore, the co-founder of Intel who died earlier this year, is famous for forecasting a continuous rise in the density of transistors that we can pack onto semiconductor chips. His eponymous “Moore’s law” still holds true after almost six decades, but further progress is be …
1h 1m
Apr 2023
The Power of Graph Neural Networks: Understanding the Future of AI - Part 2/2 (Ep.224)
In this episode of our podcast, we dive deep into the fascinating world of Graph Neural Networks. First, we explore Hierarchical Networks, which allow for the efficient representation and analysis of complex graph structures by breaking them down into smaller, more mana …
35m 32s
Feb 2023
Terry Sejnowski: NeurIPS and the Future of AI
In this episode, Terry Sejnowski, an AI pioneer, chairman of the NeurIPS Foundation, and co-creator of Boltzmann Machines, delves into the latest developments in deep learning and their potential impact on our understanding …
37m 5s