Jan 2024
1h 5m

AI Trends 2024: Machine Learning & Deep ...

Sam Charrington
About this episode

Today we continue our AI Trends 2024 series with a conversation with Thomas Dietterich, distinguished professor emeritus at Oregon State University. As you might expect, Large Language Models figured prominently in our conversation, and we covered a vast array of papers and use cases exploring current research into topics such as monolithic vs. modular architectures, hallucinations, the application of uncertainty quantification (UQ), and using RAG as a sort of memory module for LLMs. Lastly, don’t miss Tom’s predictions on what he foresees happening this year as well as his words of encouragement for those new to the field.


The complete show notes for this episode can be found at twimlai.com/go/666.

Up next
Jul 9
Distilling Transformers and Diffusion Models for Robust Edge Use Cases with Fatih Porikli - #738
Today, we're joined by Fatih Porikli, senior director of technology at Qualcomm AI Research, for an in-depth look at several of Qualcomm's accepted papers and demos featured at this year’s CVPR conference. We start with “DiMA: Distilling Multi-modal Large Language Models for Auton ...
1h
Jun 24
Building the Internet of Agents with Vijoy Pandey - #737
Today, we're joined by Vijoy Pandey, SVP and general manager at Outshift by Cisco, to discuss a foundational challenge for the enterprise: how do we make specialized agents from different vendors collaborate effectively? As companies like Salesforce, Workday, and Microsoft all dev ...
56m 13s
Jun 17
LLMs for Equities Feature Forecasting at Two Sigma with Ben Wellington - #736
Today, we're joined by Ben Wellington, deputy head of feature forecasting at Two Sigma. We dig into the team’s end-to-end approach to leveraging AI in equities feature forecasting, covering how they identify and create features, collect and quantify historical data, and build pre ...
59m 31s
Recommended Episodes
Oct 2023
Txt
In this episode of High Theory, Matthew Kirschenbaum talks about txt, or text. Not texting, or textbooks, but text as a form of data that is feeding large language models. Will the world end in fire, flood, or text? In the full interview, Matthew recommended Tim Maughan’s novel In ...
19m 54s
Jul 2023
Moore’s law in peril and the future of computing
Gordon Moore, the co-founder of Intel who died earlier this year, is famous for forecasting a continuous rise in the density of transistors that we can pack onto semiconductor chips. His eponymous “Moore’s law” still holds true after almost six decades, but further progress is be ...
1h 1m
Apr 2024
Modern Education & Large Language Models (LLMs) - Luis Serrano | Podcast #113
📚 Grokking Machine Learning by Luis: https://bit.ly/3TfQKtS 🌍 Cohere LLM University: https://docs.cohere.com/docs/llmu Dr. Luis Serrano’s interest in mathematics started in high school when he participated in mathematical olympiads, representing his native Colombia. Later, he co ...
44m 34s
Dec 2023
SE Radio 594: Sean Moriarity on Deep Learning with Elixir and Axon
Sean Moriarity, creator of the Axon deep learning framework, co-creator of the Nx library, and author of Machine Learning in Elixir and Genetic Algorithms in Elixir, published by the Pragmatic Bookshelf, speaks with SE Radio host Gavin Henry about what deep learning (neural netwo ...
57m 43s
Jul 2023
#130 Mathew Lodge: The Future of Large Language Models in AI
Welcome to episode #130 of Eye on AI with Mathew Lodge. In this episode, we explore the world of reinforcement learning and code generation. Mathew Lodge, the CEO of Diffblue, shares insights into how reinforcement learning fuels generative AI. As we explore the intricacies of re ...
49m 44s
Feb 2024
#179 Why ML Projects Fail, and How to Ensure Success with Eric Siegel, Founder of Machine Learning Week, Former Columbia Professor, and Bestselling Author
We are in a Generative AI hype cycle. Every executive looking at the potential of generative AI today is probably thinking about how they can allocate their department's budget to building some AI use cases. However, many of these use cases won't make it into production. In a similar ...
48m
Jul 2018
175: Insights from the Founder of KDnuggets
In this episode of the SuperDataScience Podcast, I chat with the President and Editor at KDnuggets, Gregory Piatetsky-Shapiro. You will hear about the recent advancements in Data Science, learn how Reinforcement Learning can greatly improve AI capabilities, and learn about the no ...
53m 22s