May 2018
17m 46s

MLA 003 Storage: HDF, Pickle, Postgres

OCDevel
About this episode

This episode covers the practical workflow of loading, cleaning, and storing large datasets for machine learning: from ingesting raw CSV or JSON files with pandas to saving processed datasets and neural network weights in HDF5 for efficient numerical storage. It distinguishes among storage options, explaining when to use HDF5, pickle files, or SQL databases, and highlights how libraries like pandas, TensorFlow, and Keras interact with these formats and why these choices matter for production pipelines.

Data Ingestion and Preprocessing

  • Data Sources and Formats:

    • Datasets commonly originate as CSV (comma-separated values), TSV (tab-separated values), fixed-width files (FWF), JSON from APIs, or directly from databases.
    • Typical applications include structured data (e.g., real estate features) or unstructured data (e.g., natural language corpora for sentiment analysis).
  • Pandas as the Core Ingestion Tool:

    • Pandas provides versatile functions such as read_csv, read_json, and others to load various file formats, with robust options for handling edge cases (e.g., file encodings, missing values).
    • After loading, data cleaning is performed using pandas: dropping or imputing missing values, converting booleans and categorical columns to numeric form.
  • Data Encoding for Machine Learning:

    • All features must be numerical before being supplied to machine learning models like TensorFlow or Keras.
    • Categorical data is one-hot encoded using pandas.get_dummies, converting strings to binary indicator columns.
    • The underlying NumPy array of a DataFrame is accessed via df.values for direct integration with modeling libraries (a short pandas sketch follows this list).
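
A minimal sketch of this load-clean-encode flow, assuming a hypothetical housing CSV; the file name and column names are illustrative placeholders:

```python
import pandas as pd

# Hypothetical file and columns for illustration.
df = pd.read_csv("housing.csv", encoding="utf-8")

# Clean: drop rows missing the target, impute a numeric feature.
df = df.dropna(subset=["price"])
df["sqft"] = df["sqft"].fillna(df["sqft"].median())

# Encode: booleans to 0/1, categoricals to one-hot indicator columns.
df["has_garage"] = df["has_garage"].astype(int)
df = pd.get_dummies(df, columns=["neighborhood"])

# The raw NumPy arrays, ready to hand to TensorFlow or Keras.
X = df.drop(columns=["price"]).values
y = df["price"].values
```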

Numerical Data Storage Options

  • HDF5 for Storing Processed Arrays:

    • HDF5 (Hierarchical Data Format version 5) enables efficient storage of large multidimensional NumPy arrays.
    • Libraries like h5py and built-in pandas functions (to_hdf) allow seamless saving and retrieval of arrays or DataFrames.
    • TensorFlow and Keras use HDF5 by default to store neural network weights as multi-dimensional arrays for model checkpointing and early stopping, enabling robust recovery and rollback.
  • Pickle for Python Objects:

    • Python's pickle protocol serializes arbitrary objects, including machine learning models and arrays, into files for later retrieval.
    • While convenient for quick iterations or heterogeneous data, pickle is less efficient than HDF5 for NumPy ndarrays, lacks meaningful compression, and poses security risks when loading files from untrusted sources.
  • SQL Databases and Spreadsheets:

    • For mixed or heterogeneous data, or when producing results for sharing and collaboration, relational databases like PostgreSQL or spreadsheet-friendly files such as CSV are used.
    • Databases serve as the endpoint for production systems, where model outputs—such as generated recommendations or reports—are published for downstream use (a combined storage sketch follows this list).
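
A minimal sketch contrasting the three storage options; file names, the table name, and the connection string are placeholders, and the pandas HDF5 calls assume the PyTables package is installed:

```python
import pickle

import h5py
import numpy as np
import pandas as pd
from sqlalchemy import create_engine

# Stand-ins: X is a processed feature array, df a results table.
X = np.random.rand(1000, 20).astype(np.float32)
df = pd.DataFrame({"user_id": [1, 2], "score": [0.9, 0.4]})

# HDF5 via h5py: compact, compressed storage for large numerical arrays.
with h5py.File("features.h5", "w") as f:
    f.create_dataset("X_train", data=X, compression="gzip")
with h5py.File("features.h5", "r") as f:
    X_restored = f["X_train"][:]

# pandas can also round-trip DataFrames through HDF5 (via PyTables).
df.to_hdf("results.h5", key="results", mode="w")
restored = pd.read_hdf("results.h5", key="results")

# Pickle: handles arbitrary Python objects, but is slower for large
# arrays and unsafe to load from untrusted sources.
with open("snapshot.pkl", "wb") as f:
    pickle.dump({"X": X, "df": df}, f)

# PostgreSQL: publish final outputs for downstream consumers.
engine = create_engine("postgresql://user:password@localhost/mydb")
df.to_sql("recommendations", engine, if_exists="replace", index=False)
```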

Storage Workflow in Machine Learning Pipelines

  • Typical Process:

    • Data is initially loaded and processed with pandas, then converted to numerical arrays suitable for model training.
    • Intermediate states and model weights are saved to HDF5 during model development and training, ensuring recovery from interruptions and facilitating early stopping (see the Keras sketch after this list).
    • Final outputs, especially those requiring sharing or production use, are published to SQL databases or shared as spreadsheet files.
  • Best Practices and Progression:

    • Quick project starts may rely on pickle as a convenient storage option during early experimentation.
    • For large-scale, high-performance applications, migration to HDF5 for numerical data and SQL for production-grade results is recommended.
    • Alternative options like Feather and PyTables (an interface on top of HDF5) exist for specialized needs.
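
A sketch of HDF5-backed checkpointing and early stopping in Keras; the model, data shapes, and file name are illustrative placeholders, not the episode's exact setup:

```python
import numpy as np
from tensorflow import keras

# Toy data; real shapes depend on your dataset.
X = np.random.rand(500, 10).astype(np.float32)
y = np.random.rand(500).astype(np.float32)

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

callbacks = [
    # Write the best weights so far to an HDF5 file after each epoch.
    keras.callbacks.ModelCheckpoint("weights.h5", save_best_only=True),
    # Halt training once validation loss stops improving.
    keras.callbacks.EarlyStopping(patience=3, restore_best_weights=True),
]
model.fit(X, y, validation_split=0.2, epochs=50, callbacks=callbacks)

# Roll back to the checkpointed weights after an interruption.
model.load_weights("weights.h5")
```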

Summary

  • HDF5 is optimal for numerical array storage due to its efficiency, built-in compression, and integration with major machine learning frameworks.
  • Pickle accommodates arbitrary Python objects but is suboptimal for numerical data persistence or security.
  • SQL databases and spreadsheets are used for disseminating results, especially when human consumption or application integration is required.
  • The selection of a storage format is determined by data type, pipeline stage, and end-use requirements within machine learning workflows.