Oct 2017
38m 51s

The Complexity of Learning Neural Networks

Kyle Polich
About this episode

Over the past several years, we have seen many success stories in machine learning brought about by deep learning techniques. While the practical success of deep learning has been phenomenal, formal guarantees have been lacking: our theoretical understanding of the techniques central to the ongoing big-data revolution is, at best, far from sufficient for rigorous analysis. In this episode of Data Skeptic, host Kyle Polich welcomes John Wilmes, a postdoctoral researcher in mathematics at Georgia Tech, to discuss the efficiency of neural network learning through the lens of complexity theory.

Up next
Dec 26
Video Recommendations in Industry
In this episode, Kyle Polich sits down with Cory Zechmann, a content curator working in streaming television with 16 years of experience running the music blog "Silence Nogood." They explore the intersection of human curation and machine learning in content discovery, discussing ...
38m 16s
Dec 18
Eye Tracking in Recommender Systems
In this episode, Santiago de Leon takes us deep into the world of eye tracking and its revolutionary applications in recommender systems. As a researcher at the Kempelen Institute and Brno University, Santiago explains the mechanics of eye tracking technology—how it captures gaze ...
52m 8s
Dec 8
Cracking the Cold Start Problem
In this episode of Data Skeptic, we dive deep into the technical foundations of building modern recommender systems. Unlike traditional machine learning classification problems where you can simply apply XGBoost to tabular data, recommender systems require sophisticated hybrid ap ...
39m 57s
Recommended Episodes
Aug 2023
Why Deep Networks and Brains Learn Similar Features with Sophia Sanborn - #644
Today we’re joined by Sophia Sanborn, a postdoctoral scholar at the University of California, Santa Barbara. In our conversation with Sophia, we explore the concept of universality between neural representations and deep neural networks, and how these principles of efficiency pro ...
45m 15s
Aug 2021
Adaptivity in Machine Learning with Samory Kpotufe - #512
Today we’re joined by Samory Kpotufe, an associate professor at Columbia University and program chair of the 2021 Conference on Learning Theory (COLT). In our conversation with Samory, we explore his research at the intersection of machine learning, statistics, and learning the ...
49m 58s
Apr 2023
The Power of Graph Neural Networks: Understanding the Future of AI - Part 1/2 (Ep.223)
In this episode, I explore the cutting-edge technology of graph neural networks (GNNs) and how they are revolutionizing the field of artificial intelligence. I break down the complex concepts behind GNNs and explain how they work by modeling the relationships between data poin ...
27m 40s
May 2018
Practical Deep Learning with Rachel Thomas - TWiML Talk #138
In this episode, I'm joined by Rachel Thomas, founder and researcher at Fast AI. If you’re not familiar with Fast AI, the company offers a series of courses including Practical Deep Learning for Coders, Cutting Edge Deep Learning for Coders and Rachel’s Computational Linear Algeb ...
44m 19s
Dec 2023
SE Radio 594: Sean Moriarity on Deep Learning with Elixir and Axon
Sean Moriarity, creator of the Axon deep learning framework, co-creator of the Nx library, and author of Machine Learning in Elixir and Genetic Algorithms in Elixir, published by the Pragmatic Bookshelf, speaks with SE Radio host ...
57m 43s
Jul 2023
Moore’s law in peril and the future of computing
Gordon Moore, the co-founder of Intel who died earlier this year, is famous for forecasting a continuous rise in the density of transistors that we can pack onto semiconductor chips. His eponymous “Moore’s law” still holds true after almost six decades, but further progress is be ...
1h 1m
Apr 2023
The Power of Graph Neural Networks: Understanding the Future of AI - Part 2/2 (Ep.224)
In this episode of our podcast, we dive deep into the fascinating world of Graph Neural Networks. First, we explore Hierarchical Networks, which allow for the efficient representation and analysis of complex graph structures by breaking them down into smaller, more mana ...
35m 32s
Feb 2023
Terry Sejnowski: NeurIPS and the Future of AI
In this episode, Terry Sejnowski, an AI pioneer, chairman of the NeurIPS Foundation, and co-creator of Boltzmann Machines, delves into the latest developments in deep learning and their potential impact on our understanding ...
37m 5s