Mar 10
Agent Swarms and Knowledge Graphs for Autonomous Software Development with Siddhant Pardeshi - #763
In this episode, Sid Pardeshi, co-founder and CTO of Blitzy, joins us to discuss building autonomous development systems able to deliver production-ready software at enterprise scale. Sid contrasts AI-assisted coding with end-to-end autonomy, arguing that “code is a commodity” an ...
1h 16m
Feb 26
AI Trends 2026: OpenClaw Agents, Reasoning LLMs, and More with Sebastian Raschka - #762
In this episode, Sebastian Raschka, independent LLM researcher and author, joins us to break down how the LLM landscape has changed over the past year and what is likely to matter most in 2026. We discuss the shift from raw model scaling to reasoning-focused post-training, infere ...
1h 18m
Jan 29
The Evolution of Reasoning in Small Language Models with Yejin Choi - #761
Today, we're joined by Yejin Choi, professor and senior fellow at Stanford University in the Computer Science Department and the Institute for Human-Centered AI (HAI). In this conversation, we explore Yejin’s recent work on making small language models reason more effectively. We ...
1h 6m
Aug 2025
Amperity Reimagines Data and Developer Workflows with AI - Ep. 271
Derek Slager, co-founder and CTO of Amperity, explores how agentic AI and vibe coding are reshaping enterprise data management and the developer experience on the NVIDIA AI Podcast. Hear how Amperity’s platform unifies customer data, powers advanced analytics, and brings conversa ...
36m 40s
Aug 2024
AI in Action: From Machine Learning Interpretability to Cybersecurity with Serg Masís and Nirmal Budhathoki
In this DSS Podcast, Anna Anisin welcomes Serg Masís, Climate and Agronomic Data Scientist at Syngenta. Serg, an expert in machine learning interpretability and responsible AI, shares his diverse background and journey into data science. He discusses the challenges of building fa ...
25m 37s
Oct 2025
Evals, error analysis, and better prompts: A systematic approach to improving your AI products | Hamel Husain (ML engineer)
Hamel Husain, an AI consultant and educator, shares his systematic approach to improving AI product quality through error analysis, evaluation frameworks, and prompt engineering. In this episode, he demonstrates how product teams can move beyond “vibe checking” their AI systems t ...
54m 48s
Jul 2025
903: LLM Benchmarks Are Lying to You (And What to Do Instead), with Sinan Ozdemir
Has AI benchmarking reached its limits, and what can fill the gap? Sinan Ozdemir speaks to Jon Krohn about the lack of transparency in training data and the necessity of human-led quality assurance to detect AI hallucinations, when and why to be skeptical of AI benchmar ...
1h 28m
Nov 2024
SE Radio 641: Catherine Nelson on Machine Learning in Data Science
Catherine Nelson, author of the new O'Reilly book Software Engineering for Data Scientists, discusses the collaboration between data scientists and software engineers -- an increasingly common pairing on machine learning and ...
48m 19s
Oct 2024
Gerald Sim, "Screening Big Data: Films That Shape Our Algorithmic Literacy" (Routledge, 2024)
Screening Big Data: Films that Shape Our Algorithmic Literacy (Routledge, 2024) examines the influence of key films on public understanding of big data and the algorithmic systems that structure our digitally mediated lives.
From star-powered blockbusters to civic-minded document ...
1h 6m
Today, we're joined by Shreya Shankar, a PhD student at UC Berkeley, to discuss DocETL, a declarative system for building and optimizing LLM-powered data processing pipelines for large-scale and complex document analysis tasks. We explore how DocETL's optimizer architecture works, the intricacies of building agentic systems for data processing, the current la ...