Jan 2022
22m 54s

P12: O My God (Big O Notation)

Gabriel Hesch and Autumn Phaneuf
About this episode

There are times in mathematics when we want to generalize the behavior of many different, but similar, entities. Big O notation is one tool for doing exactly that: its use cases include describing the long-term behavior of functions and quantifying how accurate numerical calculations are. On this problem episode, we discuss Big O notation and how to use it.
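As a quick, hedged illustration of those two use cases (a Python sketch, not material from the episode itself; the helper count_pairs and the sample values are hypothetical):

import math

# Use 1: algorithm growth. The nested loops below run about n*(n-1)/2 times
# on a list of length n, so the running time is O(n^2): bounded above by
# a constant times n^2 for all sufficiently large n.
def count_pairs(items):
    count = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            count += 1
    return count

# Use 2: approximation error. Near x = 0, sin(x) = x + O(x^3), meaning
# |sin(x) - x| <= C * |x|^3 for some constant C as x -> 0. The printed
# ratio stays near 1/6, the leading constant for this approximation.
for x in [0.1, 0.01, 0.001]:
    error = abs(math.sin(x) - x)
    print(x, error, error / x**3)

In both cases the notation hides constant factors and lower-order terms and keeps only the dominant growth rate, which is what makes it handy for comparing both algorithms and truncation errors.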


This episode is licensed by Sofía Baca under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. For more information, visit CreativeCommons.org.

[Featuring: Sofía Baca]


Up next
Jul 8
Random Shuffle Isn't Random At All
In this episode, we explore the intricate mathematics behind Sp0tify's (ok... and others') shuffle feature, revealing how it is designed to feel random while actually being carefully curated. We discuss the psychological implications of randomness, the Fisher-Yates shuffle algori ...
8m 20s
Jul 1
Algorithms & AI Simplified - The Not So Mathy Version
This conversation explores the intricate relationship between mathematics and artificial intelligence (AI) for people who don't want to get too math-heavy and want things simplified as much as possible. It delves into how algorithms, machine learning, and various mathematical too ...
9m 19s
Jun 24
What is Cryptography?
This conversation delves into the intersection of quantum computing and cryptography, focusing on the implications of quantum computers for current encryption methods and the necessity for post-quantum cryptography. Dr. Dustin Moody from NIST discusses the threats posed by quantu ...
40m 56s
Recommended Episodes
Jun 14
Networks and Complexity
In this episode, Kyle does an overview of the intersection of graph theory and computational complexity theory. In complexity theory, we care about the runtime of an algorithm based on its input size. For many graph problems, the interesting questions we want to ask take longer an ...
17m 49s
Nov 2024
Supermassive numbers
Russia has fined Google more than two undecillion rubles, which is more than 20 decillion dollars. How much, you ask? 20 decillion is a 20 with 33 zeros behind it, more money than there is in the entire world! This unpayable fine inspired us to look at extremely large numbers, fro ...
49m 29s
Jun 11
788: Estimation Sensitivities During Calculations (Case Interview & Management Consulting classics)
For this episode, let's revisit a Case Interview & Management Consulting classic where we look at how to make estimations when calculating smaller values or working with enclosed spaces like restaurants, the importance of sensitivity analyses and a new limitation of demand-driven ...
5m 4s
Jun 16
Ep. 357: What Worries the Internet’s Favorite Philosopher?
Few philosophers in recent memory have enjoyed as much attention as Byung-Chul Han. His mix of profundity and pithiness in tackling some of the big issues of the modern technical environment has made him “the internet’s new favorite philosopher” (to quote The New Yorker). But is ...
1h 12m
May 2018
Episode 19 - Emily Riehl
Kevin Knudson: Welcome to My Favorite Theorem, a podcast about mathematics and everyone’s favorite theorem. I’m your host Kevin Knudson, professor of mathematics at the University of Florida. This is your other host. Evelyn Lamb: Hi, I’m Evelyn Lamb, a freelance math and science ...
32m 5s
Jan 2025
745: Estimation Sensitivities During Calculations (Case Interview & Management Consulting classics)
For this episode, let's revisit a Case Interview & Management Consulting classic where we look at how to make estimations when calculating smaller values or working with enclosed spaces like restaurants, the importance of sensitivity analyses and a new limitation of demand-driven ...
5m 4s
Jul 2021
LM101-086: Ch8: How to Learn the Probability of Infinitely Many Outcomes
This 86th episode of Learning Machines 101 discusses the problem of assigning probabilities to a possibly infinite set of outcomes in a space-time continuum which characterizes our physical world. Such a set is called an “environmental event”. The machine learning algorithm uses ...
35m 29s
Aug 2024
813: Solving Business Problems Optimally with Data, with Jerry Yurchisin
Jerry Yurchisin from Gurobi joins Jon Krohn to break down mathematical optimization, showing why it often outshines machine learning for real-world challenges. Find out how innovations like NVIDIA’s latest CPUs are speeding up solutions to problems like the Traveling Salesman in ...
1h 43m
Oct 2022
#203: Is Analytics Addicted to Complexity? with Frederik Werner
Do analysts make things more complicated than they need to be, or does the data simply reflect a complex world, so that this is just the nature of the beast? Or is it both? Stakeholders yearn for simple answers to simple questions, but the road to delivering meaningful results seems paved ...
1h
Jun 2019
53. An Overview of Lazy Brain Biases
We are getting near the end of our eight-week series on all the biases. There is just one more to go after this one, which is about how our brains are biased toward novelty and stories. The first six episodes in the series, which are linked in the show notes, were on personal bia ...
23m 57s