Jul 2024
1h 20m

Building Real-World LLM Products with Fine-Tuning and More with Hamel Husain - #694

Sam Charrington
About this episode

Today, we're joined by Hamel Husain, founder of Parlance Labs, to discuss the ins and outs of building real-world products using large language models (LLMs). We kick things off discussing novel applications of LLMs and how to think about modern AI user experiences. We then dig into the key challenge faced by LLM developers: how to iterate from a snazzy demo or proof of concept to a working LLM-based application. We discuss the pros, cons, and role of fine-tuning LLMs, and when to use this technique. We cover the fine-tuning process, common pitfalls in evaluation (such as relying too heavily on generic tools and missing the nuances of specific use cases), open-source LLM fine-tuning tools like Axolotl, the use of LoRA adapters, and more. Hamel also shares insights on model optimization and inference frameworks and how developers should approach these tools. Finally, we dig into how to use systematic evaluation techniques to guide the improvement of your LLM application, the importance of data generation and curation, and the parallels to traditional software engineering practices.
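As a companion to the fine-tuning topics in the episode description, here is a minimal sketch of attaching LoRA adapters to an open model with the Hugging Face peft library; tools like Axolotl, mentioned above, wrap a similar workflow behind a config-driven interface. The base model, target modules, and hyperparameters below are illustrative assumptions, not recommendations from the episode.

# Minimal LoRA fine-tuning sketch (Python, transformers + peft).
# Model name and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "mistralai/Mistral-7B-v0.1"  # hypothetical base model choice
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA freezes the base weights and trains small low-rank adapter matrices
# injected into selected attention projections.
lora_config = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # which layers receive adapters
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically a small fraction of the base model's weights

# From here, training proceeds with a standard supervised fine-tuning loop on your
# curated dataset, and the trained adapter can be saved separately from the base model.

Axolotl expresses roughly the same choices (base model, adapter rank, target modules, dataset) declaratively in a YAML config rather than in Python code.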


The complete show notes for this episode can be found at https://twimlai.com/go/694.

Up next
Yesterday
Distilling Transformers and Diffusion Models for Robust Edge Use Cases with Fatih Porikli - #738
Today, we're joined by Fatih Porikli, senior director of technology at Qualcomm AI Research, for an in-depth look at several of Qualcomm's accepted papers and demos featured at this year’s CVPR conference. We start with “DiMA: Distilling Multi-modal Large Language Models for Auton ...
1h
Jun 24
Building the Internet of Agents with Vijoy Pandey - #737
Today, we're joined by Vijoy Pandey, SVP and general manager at Outshift by Cisco, to discuss a foundational challenge for the enterprise: how do we make specialized agents from different vendors collaborate effectively? As companies like Salesforce, Workday, and Microsoft all dev ...
56m 13s
Jun 17
LLMs for Equities Feature Forecasting at Two Sigma with Ben Wellington - #736
Today, we're joined by Ben Wellington, deputy head of feature forecasting at Two Sigma. We dig into the team’s end-to-end approach to leveraging AI in equities feature forecasting, covering how they identify and create features, collect and quantify historical data, and build pre ...
59m 31s
Recommended Episodes
Aug 2024
Episode 201 - Introduction to KitOps for MLOps
Join Allen and Mark in this episode of Two Voice Devs as they dive into the world of MLOps and explore KitOps, an open-source tool for packaging and versioning machine learning models and related artifacts. Learn how KitOps leverages the Open Container Initiative (OCI) standard t ...
33m 59s
Jan 2025
Erik Bernhardsson on Creating Tools That Make AI Feel Effortless
Today on No Priors, Elad chats with Erik Bernhardsson, founder and CEO of Modal Labs, a platform simplifying ML workflows by providing a serverless infrastructure designed to streamline deployment, scaling, and development for AI engineers. Erik talks about his early work on Spot ...
23m 36s
Apr 1
SE Radio 662: Vlad Khononov on Balancing Coupling in Software Design
Software architect and author Vlad Khononov joins host Jeff Doolittle for a discussion on balancing coupling in software design. They start by examining coupling and its relationship to complexity and modularity. Vlad explains the historical models for assessing coupling and intr ...
56m 19s
Apr 12
Simplifying Data Pipelines with Durable Execution
In this episode of the Data Engineering Podcast, Jeremy Edberg, CEO of DBOS, talks about durable execution and its impact on designing and implementing business logic for data systems. Jeremy explains how DBOS's serverless platform and orchestrator provide local resilience and r ...
39m 49s
May 8
MLG 035 Large Language Models 2
At inference, large language models use in-context learning with zero-, one-, or few-shot examples to perform new tasks without weight updates, and can be grounded with Retrieval Augmented Generation (RAG) by embedding documents into vector databases for real-time factual lookup ... (a minimal retrieval sketch follows this entry)
45m 25s
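As a rough illustration of the RAG grounding described in the entry above, here is a minimal retrieval sketch that embeds a handful of documents and answers a query by cosine similarity. A brute-force search stands in for a real vector database, and the embedding model and documents are placeholder assumptions.

# Minimal retrieval sketch for RAG-style grounding (Python).
# Embedding model and documents are placeholder assumptions; a brute-force
# similarity search stands in for a vector database.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "LoRA adapters add small low-rank matrices to a frozen base model.",
    "RAG retrieves relevant documents at inference time to ground the model's answer.",
    "Few-shot prompting places worked examples directly in the context window.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_vecs = embedder.encode(docs, normalize_embeddings=True)  # unit vectors, so dot product = cosine

def retrieve(query, k=1):
    # Embed the query and return the k most similar documents.
    q_vec = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec
    top = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in top]

# The retrieved passages would then be prepended to the LLM prompt as grounding context.
print(retrieve("How does retrieval augmented generation work?"))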
Nov 2024
Model Plateaus and Enterprise AI Adoption with Cohere's Aidan Gomez
In this episode of No Priors, Sarah is joined by Aidan Gomez, cofounder and CEO of Cohere. Aidan reflects on his journey to co-authoring the groundbreaking 2017 paper, “Attention is All You Need,” during his internship, and shares his motivations for building Cohere, which delive ...
44m 15s
Mar 2024
Open sourcing AI app development with Harrison Chase from LangChain
Companies are employing AI agents and co-pilots to help their teams increase efficiency and accuracy, but developing apps that are trained properly can require a skill set many enterprise teams don’t have. This week on No Priors, Sarah and Elad are joined by Harrison Chase, the C ...
27m 32s
Aug 2023
Cuttlefish Model Tuning
Hongyi Wang, a Senior Researcher in the Machine Learning Department at Carnegie Mellon University, joins us. His research is at the intersection of systems and machine learning. He discusses his research paper, Cuttlefish: Low-Rank Model Training without All the Tuning, on today’ ...
27m 8s
Oct 2024
Which LLM Should You Use For Your Business? (Pros & Cons of Each)
Episode 26: Which Large Language Model (LLM) Should Your Business Use? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) dive deep into the pros and cons of various LLMs in this jam-packed episode. This episode explores the capabilities of AI tools li ...
35m 39s