Jan 2015
28m 25s

Easily Fooling Deep Neural Networks

Kyle Polich
About this episode

My guest this week is Anh Nguyen, a PhD student at the University of Wyoming working in the Evolving AI Lab. The episode discusses the paper Deep Neural Networks are Easily Fooled [pdf] by Anh Nguyen, Jason Yosinski, and Jeff Clune, which describes a process for creating images that a trained deep neural network will misclassify. Given a network trained to recognize certain types of objects in images, these "fooling" images can be constructed so that the network misclassifies them, even though, to a human observer, they often bear no resemblance whatsoever to the assigned label. Previous work had shown that images which look to us like unrecognizable white noise can fool a deep neural network. This paper extends that result, showing that abstract images of shapes and colors, many of which have a definite form (just not the one the network reports), can also trick the network.
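To make the idea concrete, below is a minimal sketch of one of the paper's approaches: gradient ascent on a target class score, starting from random noise. This is not the authors' code; the pretrained torchvision ResNet, target class index, learning rate, and step count are all illustrative assumptions (the paper's experiments used networks such as LeNet and AlexNet, and its headline results came from evolutionary algorithms rather than gradient ascent), and input normalization is omitted for brevity.

# A minimal sketch, not the authors' code: gradient ascent on a target
# class score, assuming PyTorch and torchvision >= 0.13.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

target_class = 285  # hypothetical ImageNet class index, chosen for illustration

# Start from random noise and ascend the network's pre-softmax score for
# the target class (input normalization omitted for brevity).
image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    loss = -model(image)[0, target_class]  # negate: minimizing maximizes the logit
    loss.backward()
    optimizer.step()

confidence = torch.softmax(model(image), dim=1)[0, target_class]
print(f"confidence in target class: {confidence.item():.3f}")

The optimized image typically looks like structured high-frequency noise to a human; the paper's evolutionary approach, which repeatedly mutates images and keeps those the network scores most confidently, also produces the abstract patterns of shapes and colors discussed in the episode.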

Up next
Aug 17
Networks and Recommender Systems
Kyle reveals that the next season's topic will be "Recommender Systems". Asaf shares insights on how network science contributes to the recommender systems field.
17m 45s
Jul 21
Network of Past Guests Collaborations
Kyle and Asaf discuss a project that links former guests of the podcast based on their co-authorship of academic papers.
34m 10s
Jul 6
The Network Diversion Problem
In this episode, Professor Pål Grønås Drange from the University of Bergen introduces the field of Parameterized Complexity, a powerful framework for tackling hard computational problems by focusing on specific structural aspects of the input. This framework allows researchers ...
46m 14s
Recommended Episodes
Aug 2023
Why Deep Networks and Brains Learn Similar Features with Sophia Sanborn - #644
Today we’re joined by Sophia Sanborn, a postdoctoral scholar at the University of California, Santa Barbara. In our conversation with Sophia, we explore the concept of universality between neural representations and deep neural networks, and how these principles of efficiency pro ...
45m 15s
Apr 2023
The Power of Graph Neural Networks: Understanding the Future of AI - Part 2/2 (Ep.224)
In this episode of our podcast, we dive deep into the fascinating world of Graph Neural Networks. First, we explore Hierarchical Networks, which allow for the efficient representation and analysis of complex graph structures by breaking them down into smaller, more manageable com ...
35m 32s
Apr 2023
The Power of Graph Neural Networks: Understanding the Future of AI - Part 1/2 (Ep.223)
In this episode, I explore the cutting-edge technology of graph neural networks (GNNs) and how they are revolutionizing the field of artificial intelligence. I break down the complex concepts behind GNNs and explain how they work by modeling the relationships between data points ...
27m 40s
Feb 2020
Qu’est-ce que le Deep Learning ? (What Is Deep Learning?)
Deep learning ("apprentissage profond" in French) is a type of artificial intelligence derived from machine learning ("apprentissage automatique"). Here, the machine is able to learn by itself, unlike in programming, where it merely executes ...
5m 4s
Apr 2024
Physics-Informed Neural Networks (PINNs) - Conor Daly | Podcast #120
Physics-Informed Neural Networks (PINNs) integrate known physical laws into neural network learning, particularly for solving differential equations. They embed these laws into the network’s loss function, guiding the l ... A minimal sketch of this loss construction appears after this entry.
1h 5m
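To make the loss-function idea from the blurb above concrete, here is a minimal PINN sketch. It is an illustrative assumption (not code from the tutorial), with an arbitrary network architecture, learning rate, and set of collocation points, solving the toy ODE du/dx = -u with u(0) = 1.

# A minimal PINN sketch in PyTorch, illustrative only (not from the tutorial):
# fit u(x) to the ODE du/dx = -u with boundary condition u(0) = 1.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

# Collocation points where the physics residual is enforced.
x = torch.linspace(0.0, 2.0, 100).reshape(-1, 1).requires_grad_(True)

for step in range(2000):
    optimizer.zero_grad()
    u = net(x)
    # du/dx via autograd: the known physics enters through this residual term.
    du_dx = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    physics_loss = torch.mean((du_dx + u) ** 2)                   # du/dx = -u
    boundary_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # u(0) = 1
    loss = physics_loss + boundary_loss
    loss.backward()
    optimizer.step()

# The trained net approximates u(x) = exp(-x) without any labeled data:
# the differential equation itself supplies the training signal.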
Apr 2023
Olaf Sporns on Network Neuroscience
The intersection between cutting-edge neuroscience and the emerging field of network science has been growing tremendously over the past decade. Olaf Sporns, editor of Network Neuroscience, and Distinguished Professor, Provost Professor of Department of Psychological and Brain Sc ...
13m 5s
Dec 2023
SE Radio 594: Sean Moriarity on Deep Learning with Elixir and Axon
Sean Moriarity, creator of the Axon deep learning framework, co-creator of the Nx library, and author of Machine Learning in Elixir and Genetic Algorithms in Elixir, published by the Pragmatic Bookshelf, speaks with SE Radio host Gavin Henry about what deep learning (neural netwo ...
57m 43s
Sep 2023
Computers are learning to read our minds
Gašper’s work combines machine learning, statistical modeling, neuroimaging, and behavioral experiments “to better understand how neural networks learn internal representations in speech and how humans learn to speak.” One thing that surprised him about generative adversarial netw ...
30m 6s
Mar 2021
The Theory of a Thousand Brains
In this episode, we talk with Jeff Hawkins, an entrepreneur and scientist known for inventing some of the earliest handheld computers, the Palm and the Treo, who then turned his career to neuroscience and founded the Redwood Center for Theoretical Neuroscience in 2002 and Numenta ...
39m 36s