Jan 2022
1h 8m

MLA 020 Kubeflow and ML Pipeline Orchestration

OCDevel
About this episode

Machine learning pipeline orchestration tools, such as SageMaker and Kubeflow, streamline the end-to-end process of data ingestion, model training, deployment, and monitoring, with Kubeflow providing an open-source, cross-cloud platform built atop Kubernetes. Organizations typically choose between cloud-native managed services and open-source solutions based on required flexibility, scalability, integration with existing cloud environments, and vendor lock-in considerations.

Links

Dirk-Jan Verdoorn - Data Scientist at Dept Agency

Managed vs. Open-Source ML Pipeline Orchestration

  • Cloud providers such as AWS, Google Cloud, and Azure offer managed machine learning orchestration solutions, including SageMaker (AWS) and Vertex AI (GCP).
  • Managed services provide integrated environments that are easier to set up and operate but often result in vendor lock-in, limiting portability across cloud platforms.
  • Open-source tools like Kubeflow extend Kubernetes to support end-to-end machine learning pipelines, enabling portability across AWS, GCP, Azure, or on-premises environments.

Introduction to Kubeflow

  • Kubeflow is an open-source project aimed at making machine learning workflow deployment on Kubernetes simple, portable, and scalable.
  • Kubeflow enables data scientists and ML engineers to build, orchestrate, and monitor pipelines using popular frameworks such as TensorFlow, scikit-learn, and PyTorch (a minimal pipeline sketch follows this list).
  • Kubeflow can integrate with TensorFlow Extended (TFX) for complete end-to-end ML pipelines, covering data ingestion, preprocessing, model training, evaluation, and deployment.
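
For orientation, here is a minimal sketch of what defining and submitting a pipeline with the Kubeflow Pipelines SDK (kfp, v1-style API) can look like; the component functions, container images, and endpoint are illustrative rather than taken from the episode.

```python
# Minimal Kubeflow Pipelines sketch (kfp v1-style SDK); names, images, and
# the endpoint below are illustrative placeholders.
import kfp
from kfp import dsl
from kfp.components import create_component_from_func


def ingest(output_path: str) -> str:
    """Pull raw data from a warehouse or object store and write it out."""
    # ... data-loading logic would go here ...
    return output_path


def train(data_path: str) -> str:
    """Train a model on the ingested data and return a model location."""
    # ... model fitting (scikit-learn / TensorFlow / PyTorch) ...
    return "gs://my-bucket/models/latest"  # hypothetical location


# Wrap plain Python functions as containerized pipeline components.
ingest_op = create_component_from_func(ingest, base_image="python:3.9")
train_op = create_component_from_func(train, base_image="python:3.9")


@dsl.pipeline(name="demo-training-pipeline",
              description="Ingest data, then train a model.")
def training_pipeline(output_path: str = "/tmp/data.csv"):
    ingest_task = ingest_op(output_path)
    train_op(ingest_task.output)  # runs only after ingestion completes


if __name__ == "__main__":
    # Assumes a reachable Kubeflow Pipelines endpoint (e.g. via port-forward).
    client = kfp.Client(host="http://localhost:8080")
    client.create_run_from_pipeline_func(training_pipeline, arguments={})
```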

Machine Learning Pipelines: Concepts and Motivation

  • Production machine learning systems involve not just model training but also complex pipelines for data ingestion, feature engineering, validation, retraining, and monitoring.
  • Pipelines automate retraining based on model performance drift or updated data, supporting continuous improvement and adaptation to changing data patterns (a small drift-check sketch follows this list).
  • Scalable, orchestrated pipelines reduce manual overhead, improve reproducibility, and ensure that models remain accurate as underlying business conditions evolve.
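
To make the retraining trigger concrete, here is a small hypothetical sketch (plain Python, orchestrator-agnostic) of the kind of scheduled check that can kick off a new training run when live performance drifts too far from the accuracy recorded at training time; all names and numbers are placeholders.

```python
# Hypothetical drift check; thresholds, metric values, and the retrain hook
# are illustrative, not from the episode.
def should_retrain(baseline_accuracy: float,
                   recent_accuracy: float,
                   max_relative_drop: float = 0.05) -> bool:
    """Return True when live accuracy has dropped too far relative to the
    accuracy measured when the model was last trained."""
    relative_drop = (baseline_accuracy - recent_accuracy) / baseline_accuracy
    return relative_drop > max_relative_drop


def scheduled_check(trigger_pipeline) -> None:
    """Run on a schedule (cron, Argo, Airflow, ...): compare metrics and
    launch the training pipeline if the model has degraded."""
    baseline = 0.92   # stored at training time (placeholder value)
    recent = 0.85     # computed from recently labeled traffic (placeholder)
    if should_retrain(baseline, recent):
        trigger_pipeline()  # e.g. submit a new Kubeflow pipeline run
```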

Pipeline Orchestration Analogies and Advantages

  • ML pipeline orchestration tools fulfill a role similar to continuous integration and continuous deployment (CI/CD) in traditional software engineering.
  • Pipelines enable automated retraining, modularization of pipeline steps (such as ingestion, feature transformation, and deployment), and robust monitoring.
  • Adopting pipeline orchestrators, rather than maintaining standalone models, helps organizations handle multiple models and varied business use cases efficiently.

Choosing Between Managed and Open-Source Solutions

  • Managed services (e.g., SageMaker, Vertex AI) offer streamlined user experiences and seamless integration but restrict cross-cloud flexibility.
  • Kubeflow, as an open-source platform on Kubernetes, enables cross-platform deployment, integration with multiple ML frameworks, and minimizes dependency on a single cloud provider.
  • The complexity of Kubernetes and Kubeflow setup is offset by significant flexibility and community-driven improvements.

Cross-Cloud and Local Development

  • Kubeflow runs on any Kubernetes environment, including AWS EKS, GCP GKE, and Azure AKS, as well as on-premises or local clusters.
  • Kubeflow makes local and cross-cloud development straightforward, while managed services such as SageMaker and Vertex AI are better suited to cloud-native workflows (see the endpoint sketch after this list).
  • Debugging and development workflows can be challenging in highly secured cloud environments; Kubeflow's local deployment flexibility addresses these hurdles.
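
As a sketch of the "same pipeline, different cluster" idea: the same pipeline definition can be submitted to a locally port-forwarded Kubeflow installation or to one running on a managed cluster simply by pointing the kfp client at a different endpoint. Both hostnames below are placeholders.

```python
# The same pipeline function can be submitted to different Kubeflow
# deployments by pointing the client at a different endpoint.
# Hostnames below are placeholders.
import kfp

# Local cluster, e.g. after:
#   kubectl port-forward -n kubeflow svc/ml-pipeline-ui 8080:80
local_client = kfp.Client(host="http://localhost:8080")

# Kubeflow running on a managed cluster (EKS / GKE / AKS); URL is hypothetical.
cloud_client = kfp.Client(host="https://kubeflow.example.com/pipeline")


def submit(client: kfp.Client, pipeline_func):
    """Submit the same pipeline definition to whichever cluster is in use."""
    return client.create_run_from_pipeline_func(pipeline_func, arguments={})
```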

Relationship to TensorFlow Extended (TFX) and Machine Learning Frameworks

  • TensorFlow Extended (TFX) is an end-to-end platform for creating production ML pipelines, tightly integrated with Kubeflow for deployment and execution.
  • While Kubeflow originally focused on TensorFlow, it has grown to support PyTorch, scikit-learn, and other major ML frameworks, offering wider applicability.
  • TFX provides modular pipeline components (data ingestion, transformation, validation, model training, evaluation, and deployment) that execute within Kubeflow's orchestration platform; a condensed example follows this list.
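
Below is a condensed sketch of how TFX components can be wired into a pipeline and handed to Kubeflow for execution; the paths and trainer module file are placeholders, and exact parameter names vary across TFX releases.

```python
# Condensed TFX pipeline sketch; paths and the module file are placeholders,
# and parameter names may differ between TFX versions.
from tfx.components import CsvExampleGen, StatisticsGen, SchemaGen, Trainer, Pusher
from tfx.orchestration import pipeline
from tfx.orchestration.kubeflow import kubeflow_dag_runner
from tfx.proto import trainer_pb2, pusher_pb2

example_gen = CsvExampleGen(input_base="gs://my-bucket/data")              # ingestion
statistics_gen = StatisticsGen(examples=example_gen.outputs["examples"])   # data statistics
schema_gen = SchemaGen(statistics=statistics_gen.outputs["statistics"])    # schema inference
trainer = Trainer(
    module_file="trainer_module.py",                                       # user training code
    examples=example_gen.outputs["examples"],
    schema=schema_gen.outputs["schema"],
    train_args=trainer_pb2.TrainArgs(num_steps=1000),
    eval_args=trainer_pb2.EvalArgs(num_steps=100),
)
pusher = Pusher(
    model=trainer.outputs["model"],
    push_destination=pusher_pb2.PushDestination(
        filesystem=pusher_pb2.PushDestination.Filesystem(
            base_directory="gs://my-bucket/serving")),
)

tfx_pipeline = pipeline.Pipeline(
    pipeline_name="tfx-on-kubeflow",
    pipeline_root="gs://my-bucket/pipeline-root",
    components=[example_gen, statistics_gen, schema_gen, trainer, pusher],
)

# Compile and run the TFX pipeline on a Kubeflow Pipelines installation.
kubeflow_dag_runner.KubeflowDagRunner().run(tfx_pipeline)
```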

Alternative Pipeline Orchestration Tools

  • Airflow is a general-purpose workflow orchestrator built around DAGs, well suited to data engineering and automation but less suited to running resource-intensive ML training inside the pipeline itself (a minimal DAG sketch follows this list).
    • Airflow often submits jobs to external compute resources (e.g., AI Platform) for resource-intensive workloads.
    • In organizations using both Kubeflow and Airflow, Airflow may handle data workflows, while Kubeflow is reserved for ML pipelines.
  • MLflow and other solutions also exist, each with unique integrations and strengths; their adoption depends on use case requirements.
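
For contrast with Kubeflow, here is a minimal Airflow sketch in which the DAG itself only sequences lightweight tasks and the resource-intensive training step is handed off to external compute; the callables are placeholders.

```python
# Minimal Airflow DAG sketch; the external-training submission is a
# placeholder rather than a real API call.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    """Lightweight data movement that can run on the Airflow workers."""
    ...


def submit_training_job():
    """Hand resource-intensive training off to external compute
    (e.g. a managed training service), rather than running it in Airflow."""
    ...


with DAG(
    dag_id="daily_ml_data_and_training",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    etl = PythonOperator(task_id="extract_and_load",
                         python_callable=extract_and_load)
    train = PythonOperator(task_id="submit_training_job",
                           python_callable=submit_training_job)
    etl >> train  # training only starts after the data step succeeds
```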

Selecting a Cloud Platform and Orchestration Approach

  • The optimal choice of cloud platform and orchestration tool is typically guided by client needs, existing integrations (e.g., organizational use of Google or Microsoft solutions), and team expertise.
  • Agencies with diverse client portfolios often benefit from open-source, cross-cloud tools like Kubeflow to maximize flexibility and knowledge sharing across projects.
  • Users entrenched in a single cloud provider may prefer managed offerings for ease of use and integration, while those prioritizing portability and flexibility often choose open-source solutions.

Cost Optimization in Model Training

  • Both AWS and GCP offer cost-saving compute options for training, such as spot instances (AWS) and preemptible instances (GCP), which suit non-production, batch training jobs (an example follows this list).
  • Production workloads that require high uptime and reliability do not typically utilize cost-saving transient compute resources, as these can be interrupted.
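
As one concrete (assumed, not from the episode) way to use spot capacity on AWS, the SageMaker Python SDK supports managed spot training on its estimators; the image URI, IAM role, and S3 paths below are placeholders.

```python
# Managed spot training with the SageMaker Python SDK; the image URI, IAM
# role, and S3 paths are placeholders.
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri="<training-image-uri>",          # placeholder container image
    role="<execution-role-arn>",               # placeholder IAM role
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/output",       # placeholder bucket
    use_spot_instances=True,                   # request interruptible spot capacity
    max_run=3600,                              # cap on training seconds
    max_wait=7200,                             # cap on training + spot waiting
)

estimator.fit({"train": "s3://my-bucket/train"})  # placeholder data channel
```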

Machine Learning Project Lifecycle Overview

  • Project initiation begins with data discovery and validation of the client's requirements against available data.
  • Cloud environment selection is influenced by client infrastructure, business applications, and platform integrations rather than solely by technical features.
  • Data cleaning, exploratory analysis, model prototyping, advanced model refinement, and deployment are handled collaboratively with data engineering and machine learning teams.
  • The pipeline is gradually constructed in modular steps, facilitating scalable, automated retraining and integration with business applications.

Educational Pathways for Data Science and Machine Learning Careers

  • Advanced mathematics or statistics education provides a strong foundation for work in data science and machine learning.
  • Master's degrees in data science add the most value for candidates from non-technical undergraduate backgrounds; those with backgrounds in statistics, mathematics, or computer science may benefit more from self-study or targeted upskilling.
  • When evaluating online or accelerated degree programs, candidates should scrutinize the curriculum, instructor engagement, and peer interaction to ensure comprehensive learning.