May 2025
50m 48s

MLG 034 Large Language Models 1

OCDevel
About this episode

Explains advancements in large language models (LLMs): scaling laws - the relationships among model size, data size, and compute - and how emergent abilities such as in-context learning, multi-step reasoning, and instruction following arise once certain scaling thresholds are crossed. Covers the evolution of the transformer architecture with Mixture of Experts (MoE), the three-phase training process culminating in Reinforcement Learning from Human Feedback (RLHF) for model alignment, and advanced reasoning techniques such as chain-of-thought prompting, which significantly improve performance on complex tasks.


Transformer Foundations and Scaling Laws

  • Transformers: Introduced by the 2017 "Attention is All You Need" paper, transformers use self-attention to process whole sequences in parallel during training, in contrast to the inherently sequential, token-by-token processing of RNNs.
  • Scaling Laws:
    • Empirical research revealed that LLM performance improves predictably as model size (parameters), data size (training tokens), and compute are increased together, with diminishing returns if only one variable is scaled disproportionately.
    • The "Chinchilla scaling law" (DeepMind, 2022) established the optimal model/data/compute ratio for efficient model performance: earlier large models like GPT-3 were undertrained relative to their size, whereas right-sized models with more training data (e.g., Chinchilla, LLaMA series) proved more compute and inference efficient.

Emergent Abilities in LLMs

  • Emergence: When trained beyond a certain scale, LLMs display abilities not present in smaller models, including:
    • In-Context Learning (ICL): Performing new tasks based solely on prompt examples at inference time (an example prompt follows this list).
    • Instruction Following: Executing natural language tasks not seen during training.
    • Multi-Step Reasoning & Chain of Thought (CoT): Solving arithmetic, logic, or symbolic reasoning by generating intermediate reasoning steps.
  • Discontinuity & Debate: These abilities appear abruptly in larger models, though recent research suggests that this could result from non-linearities in evaluation metrics rather than innate model properties.
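
As a concrete illustration of in-context learning, here is a hypothetical few-shot prompt (the reviews and labels are invented for this example): the task is specified entirely by examples in the context, and no weights are updated.

  # Hypothetical few-shot prompt for in-context learning: the model infers the task
  # (sentiment labeling) purely from the worked examples in the context window.
  few_shot_prompt = (
      'Review: "The plot dragged and the acting was wooden."\n'
      "Sentiment: negative\n\n"
      'Review: "A warm, funny film with a terrific cast."\n'
      "Sentiment: positive\n\n"
      'Review: "I checked my watch three times before the halfway mark."\n'
      "Sentiment:"
  )

  # Sent to any completion or chat endpoint, a sufficiently large model typically
  # continues with "negative" -- the task was never an explicit training label.
  print(few_shot_prompt)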

Architectural Evolutions: Mixture of Experts (MoE)

  • MoE Layers: Modern LLMs often replace standard feed-forward layers with MoE structures.
    • Composed of many independent "expert" networks specializing in different subdomains or latent structures.
    • A gating network routes each token to the most relevant experts, activating only a subset of parameters per input ("sparse activation"); a minimal sketch follows this list.
    • Enables much larger overall models without proportional increases in compute per inference, but requires the entire model in memory and introduces new challenges like load balancing and communication overhead.
  • Specialization & Efficiency: Experts learn different data/knowledge types, boosting model specialization and throughput, though care is needed to avoid overfitting and underutilization of specialists.
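
A minimal NumPy sketch of the gating-plus-sparse-activation idea (illustrative only; real MoE layers use small MLP experts learned end to end, plus load-balancing losses not shown here):

  import numpy as np

  rng = np.random.default_rng(0)
  d_model, n_experts, top_k = 8, 4, 2

  # Each "expert" is a single linear map here purely for brevity; real experts are small MLPs.
  experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
  gate_w = rng.standard_normal((d_model, n_experts)) * 0.1  # gating network weights

  def moe_layer(x: np.ndarray) -> np.ndarray:
      """x: (n_tokens, d_model) -> (n_tokens, d_model), running only top_k experts per token."""
      logits = x @ gate_w                                # (n_tokens, n_experts)
      probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
      probs /= probs.sum(axis=-1, keepdims=True)         # softmax over experts
      out = np.zeros_like(x)
      for t in range(x.shape[0]):
          chosen = np.argsort(probs[t])[-top_k:]         # indices of the top-k experts
          weights = probs[t, chosen] / probs[t, chosen].sum()
          for w, e in zip(weights, chosen):
              out[t] += w * (x[t] @ experts[e])          # only k of n_experts do any work
      return out

  print(moe_layer(rng.standard_normal((3, d_model))).shape)  # (3, 8)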

The Three-Phase Training Process

  • 1. Unsupervised Pre-Training: Next-token prediction on massive datasets—builds a foundation model capturing general language patterns.
  • 2. Supervised Fine Tuning (SFT): Training on labeled prompt-response pairs to teach the model how to perform specific tasks (e.g., question answering, summarization, code generation). Overfitting and "catastrophic forgetting" are risks if not carefully managed.
  • 3. Reinforcement Learning from Human Feedback (RLHF):
    • Collects human preference data by generating multiple responses to prompts and then having annotators rank them.
    • Builds a reward model from these rankings, then updates the LLM with reinforcement learning (commonly PPO) to maximize alignment with human preferences (helpfulness, harmlessness, truthfulness); a minimal sketch of the ranking loss follows this list.
    • Introduces complexity and risk of reward hacking (specification gaming), where the model may exploit the reward system in unanticipated ways.
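
A minimal sketch of the pairwise ranking loss typically used to train the reward model from those human comparisons (the scores below are placeholders standing in for reward-model outputs, not real data):

  import math

  # For each comparison, the reward model should score the human-preferred ("chosen")
  # response above the rejected one: loss = -log(sigmoid(r_chosen - r_rejected)).
  def pairwise_reward_loss(r_chosen: float, r_rejected: float) -> float:
      return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

  comparisons = [(1.8, 0.3), (0.2, 0.9), (2.5, 2.4)]  # (chosen_score, rejected_score) placeholders
  print(sum(pairwise_reward_loss(c, r) for c, r in comparisons) / len(comparisons))

Once trained, this reward model scores the LLM's sampled responses, and an RL algorithm such as PPO updates the LLM to increase those scores while staying close to the supervised fine-tuned model.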

Advanced Reasoning Techniques

  • Prompt Engineering: The art/science of crafting prompts that elicit better model responses, shown to dramatically affect model output quality.
  • Chain of Thought (CoT) Prompting: Guides models to elaborate step-by-step reasoning before arriving at final answers—demonstrably improves results on complex tasks.
    • Variants include zero-shot CoT ("let's think step by step"), few-shot CoT with worked examples, self-consistency (voting among multiple reasoning chains), and Tree of Thought (exploring multiple reasoning branches in parallel); a small self-consistency sketch follows this list.
  • Automated Reasoning Optimization: Frontier models selectively apply these advanced reasoning techniques, balancing compute costs with gains in accuracy and transparency.
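
A hypothetical sketch of zero-shot CoT combined with self-consistency voting; call_llm is a placeholder for whatever chat/completion client is actually used, and the "Answer:" convention is an assumption of this example:

  from collections import Counter

  def call_llm(prompt: str, temperature: float = 0.8) -> str:
      raise NotImplementedError("placeholder -- plug in a real model client here")

  def extract_final_answer(completion: str) -> str:
      # Assumes the model was instructed to end with a line like "Answer: 42".
      return completion.rsplit("Answer:", 1)[-1].strip()

  def self_consistent_answer(question: str, n_samples: int = 5) -> str:
      prompt = f"{question}\nLet's think step by step, then finish with 'Answer: <value>'."
      answers = [extract_final_answer(call_llm(prompt)) for _ in range(n_samples)]
      return Counter(answers).most_common(1)[0][0]  # majority vote across reasoning chains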

Optimization for Training and Inference

  • Tradeoffs: The optimal balance between model size, data, and compute must account not only for pretraining cost but also for inference efficiency, since lifetime inference costs can exceed the initial training cost (a rough comparison is sketched below).
  • Current Trends: Efficient scaling, model specialization (MoE), careful fine-tuning, RLHF alignment, and automated reasoning techniques define state-of-the-art LLM development.
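
A rough, purely illustrative comparison of training versus lifetime inference compute, using the common approximations of ~6 FLOPs per parameter per training token and ~2 FLOPs per parameter per generated token (the serving volumes are invented assumptions, not figures from the episode):

  n_params = 70e9               # 70B-parameter model
  train_tokens = 1.4e12         # ~20 tokens per parameter (Chinchilla-style)
  served_tokens_per_day = 50e9  # assumed serving volume
  days_in_service = 365

  train_flops = 6 * n_params * train_tokens
  inference_flops = 2 * n_params * served_tokens_per_day * days_in_service

  print(f"training:  {train_flops:.2e} FLOPs")
  print(f"inference: {inference_flops:.2e} FLOPs over one year")
  print(f"ratio:     {inference_flops / train_flops:.1f}x")

With these assumptions, a year of serving costs several times the original training run, which is why inference efficiency now drives many model-sizing decisions.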