About this episode
Exploratory data analysis (EDA) sits at the critical pre-modeling stage of the data science pipeline. It focuses on uncovering missing values, detecting outliers, and understanding feature distributions through statistical summaries (Pandas' info() and describe()) and visualizations (histograms and box plots). Tools like Matplotlib, together with processes such as imputation and feature correlation analysis, help practitioners decide how best to prepare, clean, or transform data before it enters a machine learning model.
EDA in the Data Science Pipeline
- Position in Pipeline: EDA is an essential pre-processing step in the business intelligence (BI) or data science pipeline, occurring after data acquisition but before model training.
- Purpose: The goal of EDA is to understand the data by identifying:
- Missing values (nulls)
- Outliers
- Feature distributions
- Relationships or correlations between variables
Data Acquisition and Initial Inspection
- Data Sources: Data may arrive from various streams (e.g., Twitter, sensors) and is typically stored in structured formats such as databases or spreadsheets.
- Loading Data: In Python, data is often loaded into a Pandas DataFrame using commands like pd.read_csv('filename.csv').
- Initial Review:
- df.info(): Displays data types and counts of non-null entries by column, quickly highlighting missing values.
- df.describe(): Provides summary statistics for each column, including count, mean, standard deviation, min/max, and quartiles.
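The loading and inspection steps above can be sketched as follows; a small in-code DataFrame with illustrative columns stands in for a real pd.read_csv('filename.csv') call:

```python
import numpy as np
import pandas as pd

# Toy data standing in for a CSV loaded with pd.read_csv()
df = pd.DataFrame({
    "age": [29, 41, np.nan, 35, 62],
    "income": [48000, 61000, 52000, np.nan, 75000],
})

df.info()             # dtypes and non-null counts per column; nulls stand out
print(df.describe())  # count, mean, std, min/max, quartiles per numeric column
```

The non-null counts from info() and the per-column count row in describe() are usually the fastest way to spot which columns need imputation.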
Handling Missing Data and Outliers
- Imputation:
- Missing values must often be filled (imputed), as most machine learning algorithms cannot handle nulls.
- Common strategies: impute with mean, median, or another context-appropriate value.
- For example, missing ages can be filled with the column's average rather than zero, to avoid introducing skew.
- Outlier Strategy:
- Outliers can be removed, replaced (e.g., by nulls and subsequently imputed), or left as-is if legitimate.
- Treatment depends on whether outliers represent true data points or data errors.
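The imputation and outlier strategies above can be sketched like this; the ages and the 140 cutoff are illustrative values, not from the episode:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [29, 41, np.nan, 35, 140]})  # 140 is a likely data error

# Impute the missing age with the column mean rather than zero, to avoid skew
df["age"] = df["age"].fillna(df["age"].mean())

# Treat implausible values as errors: replace with null, then impute with the median
df.loc[df["age"] > 120, "age"] = np.nan
df["age"] = df["age"].fillna(df["age"].median())
```

Note the order matters: replacing the outlier with null first and imputing second keeps the bad value out of the statistic used to fill it.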
Visualization Techniques
- Purpose: Visualizations help reveal data distributions, outliers, and relationships that may not be apparent from raw statistics.
- Common Visualization Tools:
- Matplotlib: The primary Python library for static data visualizations.
- Visualization Methods:
- Histogram: Ideal for visualizing the distribution of a single variable (e.g., age), making outliers visible as isolated bars.
- Box Plot: Summarizes quartiles, median, and spread; whiskers mark the range of typical values (commonly 1.5× the interquartile range), with points beyond them flagged as outliers.
- Line Chart: Used for time-series data, highlighting trends and anomalies (e.g., sudden spikes in stock price).
- Correlation Matrix: Visual grid (often of scatterplots) comparing each feature against every other, helping to detect strong or weak linear relationships between features.
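A minimal Matplotlib sketch of the histogram and box plot described above; the synthetic ages (with two planted outliers) are an assumption for illustration:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; drop this line for interactive use
import matplotlib.pyplot as plt
import numpy as np

# Synthetic ages with two planted outliers (95 and 3)
ages = np.append(np.random.default_rng(0).normal(40, 10, 200), [95, 3])

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.hist(ages, bins=20)        # distribution; outliers appear as isolated bars
ax1.set_title("Histogram of age")
ax2.boxplot(ages)              # quartiles, median, whiskers; outliers as points
ax2.set_title("Box plot of age")
fig.tight_layout()
fig.savefig("age_eda.png")
```

The same two plots side by side make it easy to confirm that what the box plot flags as outlier points matches the isolated bars in the histogram.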
Feature Correlation and Dimensionality
- Correlation Plot:
- Generated with df.corr() in Pandas to assess linear relationships between features.
- High correlation between features may suggest redundancy (e.g., number of bedrooms and square footage) and can inform feature selection or removal.
- Limitations:
- While correlation plots provide intuition, automated approaches like Principal Component Analysis (PCA) or autoencoders are typically superior for feature reduction and target prediction tasks.
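The df.corr() check can be sketched as follows; the housing-style columns are synthetic, with bedrooms deliberately constructed to track square footage:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
sqft = rng.uniform(500, 3500, 300)
df = pd.DataFrame({
    "sqft": sqft,
    "bedrooms": np.round(sqft / 700 + rng.normal(0, 0.3, 300)),  # tracks sqft
    "noise": rng.normal(0, 1, 300),                              # unrelated feature
})

corr = df.corr()  # pairwise Pearson correlations between numeric columns
print(corr)
```

A high absolute correlation between sqft and bedrooms signals redundancy, while noise shows near-zero correlation with both; that contrast is what a correlation matrix surfaces at a glance.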
Data Transformation Prior to Modeling
- Scaling:
- Machine learning models, especially neural networks, often require input features to be scaled (normalized or standardized).
- StandardScaler (from scikit-learn): Standardizes features to zero mean and unit variance, but is sensitive to outliers, which inflate the standard deviation.
- RobustScaler: Centers on the median and scales by the interquartile range, so outliers have far less influence on the bulk of the data, simplifying preprocessing.
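The contrast between the two scalers can be seen on a toy column with one outlier (the ages below are illustrative):

```python
import numpy as np
from sklearn.preprocessing import RobustScaler, StandardScaler

ages = np.array([[25.0], [32.0], [38.0], [45.0], [120.0]])  # 120 is an outlier

std = StandardScaler().fit_transform(ages)  # mean/std; the outlier inflates std
rob = RobustScaler().fit_transform(ages)    # median/IQR; outlier barely matters
```

Under StandardScaler the outlier drags the mean up and compresses the ordinary values toward each other; under RobustScaler the bulk of the data keeps its spread and the outlier simply lands far from zero.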
Summary of EDA Workflow
- Initial Steps:
- Load data into a DataFrame.
- Examine data types and missing values with df.info().
- Review summary statistics with df.describe().
- Visualization:
- Use histograms and box plots to explore feature distributions and detect anomalies.
- Leverage correlation matrices to identify related features.
- Data Preparation:
- Impute missing values thoughtfully (e.g., with means or medians).
- Decide on treatment for outliers: removal, imputation, or scaling with tools like RobustScaler.
- Outcome:
- Proper EDA ensures that data is cleaned, features are well-understood, and inputs are suitable for effective machine learning model training.
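The workflow summarized above compresses to a few lines; the toy DataFrame stands in for a loaded CSV, and median imputation plus RobustScaler are one reasonable set of choices, not the only ones:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import RobustScaler

# Toy frame standing in for pd.read_csv('filename.csv')
df = pd.DataFrame({
    "age": [29, np.nan, 35, 41, 62],
    "income": [48000, 61000, np.nan, 52000, 75000],
})

df.info()                                      # 1. inspect dtypes and nulls
df = df.fillna(df.median(numeric_only=True))   # 2. impute with medians
X = RobustScaler().fit_transform(df)           # 3. scale, robust to outliers
```

After these steps X contains no nulls and is on a comparable scale across features, which is the state most models expect.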