Mar 2017
51m 28s

MLG 009 Deep Learning

OCDevel
About this episode

Try a walking desk to stay healthy while you study or work!

Full notes at ocdevel.com/mlg/9 

Key Concepts:

  • Deep Learning vs. Shallow Learning: The field breaks down hierarchically: AI contains machine learning, which in turn contains subfields like supervised and unsupervised learning. Deep learning is a specialized area within supervised learning, distinct from shallow-learning algorithms such as linear regression.
  • Neural Networks: Central to deep learning, artificial neural networks include models like multilayer perceptrons (MLPs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs). Neural networks are composed of interconnected units or "neurons," mathematical abstractions inspired by biological neurons (a minimal code sketch follows below).
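To make the "neuron" idea concrete, here is a minimal sketch (not from the episode) of a single artificial neuron in Python with NumPy: a weighted sum of inputs plus a bias, passed through a logistic sigmoid activation. The input values and weights are arbitrary, for illustration only.

```python
import numpy as np

def sigmoid(z):
    # Squash any real number into (0, 1); a classic neuron activation.
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b):
    # One artificial neuron: weighted sum of inputs plus a bias, then an activation.
    return sigmoid(np.dot(w, x) + b)

x = np.array([0.5, -1.2, 3.0])   # three input features (made-up values)
w = np.array([0.8, 0.1, -0.4])   # weights, set by hand here; learned from data in practice
print(neuron(x, w, b=0.2))       # a single number in (0, 1)
```

A network is many of these units wired together in layers, with the weights learned from data rather than set by hand.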

Unique Features of Neural Networks:

  • Feature Learning: Neural networks learn how to combine input features on their own, enabling them to tackle complex non-linear problems where shallow algorithms fall short.
  • Hierarchical Representation: Data is processed through multiple layers, each breaking the input into simpler components that later layers reassemble to solve complex tasks (see the sketch after this list).
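Both points can be sketched with nothing beyond NumPy: two layers of neurons, where the first layer recombines raw inputs into intermediate features and the second combines those features into a prediction. The layer sizes and random weights are placeholders; training would normally set them.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # Rectified linear unit: keeps positive values, zeroes out negatives.
    return np.maximum(0.0, z)

# 4 raw input features -> 3 learned intermediate features -> 1 output
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)

def forward(x):
    hidden = relu(W1 @ x + b1)  # layer 1: recombine raw inputs into new features
    return W2 @ hidden + b2     # layer 2: combine those features into a prediction

print(forward(np.array([1.0, 0.0, 2.5, -1.0])))
```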

Applications:

  • Medical Cost Estimation: Neural networks can capture non-linear effects such as feature interactions, e.g., how age, smoking, and obesity combine to drive medical costs (a toy comparison follows this list).
  • Image Recognition: Neural networks leverage hierarchical data processing to discern patterns such as lines and edges, building up to recognizing complex structures like human faces.
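For the medical-cost example, here is an illustrative comparison (not from the episode) of a plain linear regression against a small multilayer perceptron on synthetic data where smoking and obesity interact. The scikit-learn setup and all numbers are assumptions chosen to make the point: a hidden layer can pick up an interaction that a purely linear model has no term for.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(20, 80, n)
smoker = rng.integers(0, 2, n)
obese = rng.integers(0, 2, n)

# Synthetic yearly costs (in $1000s); smoking and obesity together cost far more
# than either alone, which is exactly the kind of interaction a line can't express.
cost = 0.1 * age + 5 * smoker + 4 * obese + 20 * (smoker * obese) + rng.normal(0, 1, n)
X = np.column_stack([age, smoker, obese])

linear = LinearRegression().fit(X, cost)
mlp = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
).fit(X, cost)

print("linear R^2:", linear.score(X, cost))  # no way to express the smoker*obese term
print("mlp R^2:   ", mlp.score(X, cost))     # the hidden layer can learn it
```

On data like this, the linear model's fit should plateau below the MLP's, since it has no term for the combined smoker-and-obese effect.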

Computational Considerations:

  • Cost of Deep Learning: Deep learning is computationally expensive and resource-intensive compared to shallow learning algorithms. It pays off on complex tasks that shallow methods can't handle, but it's overkill for simpler, roughly linear problems.

Architectures & Optimization:

  • Different Architectures for Different Tasks: Specialized networks suit different problems: CNNs for image tasks, RNNs for sequence data, and deep Q-networks (DQNs) for planning and reinforcement-learning tasks.
  • Neuron Types: Individual neurons are characterized by their activation functions (e.g., logistic sigmoid, ReLU), and the choice differs by task and architecture (a short example follows below).
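In most frameworks the activation function is simply a configuration choice. A hedged example using scikit-learn (a library choice of this write-up, not the episode's): the same small classifier trained twice on a toy non-linear dataset, once with logistic-sigmoid hidden units and once with ReLU.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A small, deliberately non-linear toy dataset
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for activation in ("logistic", "relu"):
    clf = MLPClassifier(hidden_layer_sizes=(16,), activation=activation,
                        max_iter=2000, random_state=0)
    clf.fit(X_train, y_train)
    print(activation, "test accuracy:", clf.score(X_test, y_test))
```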
Up next
Mar 2017
MLG 010 Languages & Frameworks
Try a walking desk to stay healthy while you study or work! Full notes at ocdevel.com/mlg/10
44m 36s
Mar 2017
MLG 012 Shallow Algos 1
Try a walking desk to stay healthy while you study or work! Full notes at ocdevel.com/mlg/12
53m 36s
Apr 2017
MLG 013 Shallow Algos 2
Try a walking desk to stay healthy while you study or work! Full notes at ocdevel.com/mlg/13
55m 36s
Recommended Episodes
Apr 2017
Feature Processing for Text Analytics
It seems like every day there are more and more machine learning problems that involve learning on text data, but text itself makes for fairly lousy inputs to machine learning algorithms. That's why there are text vectorization algorithms, which re-format text data so it's ready f…
17m 28s
Dec 2016
Ep. 2: Where Deep Learning Goes Next - Bryan Catanzaro, NVIDIA Applied Deep Learning Research
Bryan Catanzaro, vice president for applied deep learning research at NVIDIA, talks about how we know an AI technology is working, the potential for AI-powered speech, and where we’ll see the next deep learning breakthroughs. 
32m 52s
Jun 2020
Rust and machine learning #4: practical tools (Ep. 110)
In this episode I make a non-exhaustive list of machine learning tools and frameworks, written in Rust. Not all of them are mature enough for production environments. I believe that community effort can change this very quickly. To make a comparison with the Python ecos…
24m 18s
Jul 2017
14: Artificial Thought (Neural Networks)
Go to www.brilliant.org/breakingmathpodcast to learn neural networks, everyday physics, computer science fundamentals, the joy of problem solving, and many related topics in science, technology, engineering, and math. Mathematics takes inspiration from all forms with which life…
1h 5m
Apr 2021
464: A.I. vs Machine Learning vs Deep Learning
In this episode, I tackle three often conflated terms - AI, machine learning, and deep learning - to shine some light on what exactly they are. Additional materials: www.superdatascience.com/464 
7m 14s
May 2023
TinyML: Bringing machine learning to the edge
When we think about machine learning today we often think in terms of immense scale — large language models that require huge amounts of computational power, for example. But one of the most interesting innovations in machine learning right now is actually happening on a really s…
45m 45s
Jul 2023
AI Today Podcast: AI Glossary Series – Automated Machine Learning (AutoML)
In this episode of the AI Today podcast hosts Kathleen Walch and Ron Schmelzer define the term Automated Machine Learning (AutoML), explain how this term relates to AI and why it's important to know about them. Show Notes: FREE Intro to CPMAI mini course CPMAI Training and Certifi…
9m 11s
Aug 2024
AI that connects the digital and physical worlds | Anima Anandkumar
"While language models may help generate new ideas, they cannot attack the hard part of science, which is simulating the necessary physics," says AI professor Anima Anandkumar. She explains how her team developed neural operators — AI trained on the finest details of the real w…
12m 14s
Nov 2024
SE Radio 641: Catherine Nelson on Machine Learning in Data Science
Catherine Nelson, author of the new O'Reilly book, Software Engineering for Data Scientists, discusses the collaboration between data scientists and software engineers -- an increasingly common pairing on machine learning and…
48m 19s
Jan 2025
How a worm could save humanity from bad AI | Ramin Hasani
What if AI could think and adapt like a real brain? TED Fellow and AI scientist Ramin Hasani shares how liquid neural networks — a new, more flexible AI technology inspired by physics and living brains — could transform how we solve complex problems.
6m 21s