Mar 2024
2h 18m

Venkatesh Rao: Protocols, Intelligence, ...

Daniel Bashir
About this episode

“There is this move from generality in a relative sense of ‘we are not as specialized as insects’ to generality in the sense of omnipotent, omniscient, godlike capabilities. And I think there's something very dangerous that happens there, which is you start thinking of the word ‘general’ in completely unhinged ways.”

In episode 114 of The Gradient Podcast, Daniel Bashir speaks to Venkatesh Rao.

Venkatesh is a writer and consultant. He has been writing the widely read Ribbonfarm blog since 2007, and more recently, the popular Ribbonfarm Studio Substack newsletter. He is the author of Tempo, a book on timing and decision-making, and is currently working on his second book, on the foundations of temporality. He has been an independent consultant since 2011, supporting senior executives in the technology industry. His work in recent years has focused on the AI, semiconductor, sustainability, and protocol technology sectors. He holds a PhD in control theory (2003) from the University of Michigan. He is currently based in the Seattle area and enjoys dabbling in robotics in his spare time. You can learn more about his work at venkateshrao.com.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (01:38) Origins of Ribbonfarm and Venkat’s academic background

* (04:23) Voice and recurring themes in Venkat’s work

* (11:45) Patch models and multi-agent systems: integrating philosophy of language, balancing realism with tractability

* (21:00) More on abstractions vs. tractability in Venkat’s work

* (29:07) Scaling of industrial value systems, characterizing AI as a discipline

* (39:25) Emergent science, intelligence and abstractions, presuppositions in science, generality and universality, cameras and engines

* (55:05) Psychometric terms

* (1:09:07) Inductive biases (yes I mentioned the No Free Lunch Theorem and then just talked about the definition of inductive bias and not the actual theorem 🤡)

* (1:18:13) LLM training and efficiency, comparing LLMs to humans

* (1:23:35) Experiential age, analogies for knowledge transfer

* (1:30:50) More clarification on the analogy

* (1:37:20) Massed Muddler Intelligence and protocols

* (1:38:40) Introducing protocols and the Summer of Protocols

* (1:49:15) Evolution of protocols, hardness

* (1:54:20) LLMs, protocols, time, future visions, and progress

* (2:01:33) Protocols, drifting from value systems, friction, compiling explicit knowledge

* (2:14:23) Directions for ML people in protocols research

* (2:18:05) Outro

Links:

* Venkat’s Twitter and homepage

* Mediocre Computing

* Summer of Protocols

* Essays discussed

* Patch models and their applications to multivehicle command and control

* From Mediocre Computing

* Text is All You Need

* Magic, Mundanity, and Deep Protocolization

* A Camera, Not an Engine

* Massed Muddler Intelligence

* On protocols

* The Unreasonable Sufficiency of Protocols

* Protocols Don’t Build Pyramids

* Protocols in (Emergency) Time

* Atoms, Institutions, Blockchains



Get full access to The Gradient at thegradientpub.substack.com/subscribe