Feb 2025
13m 28s

AI won't plateau — if we give it time to...

TED
About this episode
To get smarter, traditional AI models rely on exponential increases in the scale of data and computing power. Noam Brown, a leading research scientist at OpenAI, presents a potentially transformative shift in this paradigm. He reveals his work on OpenAI's new o1 model, which focuses on slower, more deliberate reasoning — much like how humans think — in order to solve complex problems.


Up next
Yesterday
"Marigolds," a poem about wonder | Safiya Sinclair
Poet Safiya Sinclair performs "Marigolds: A Letter to Wonder," an original poem she created for TED that explores memory, beauty and the fragility of life. After the poem, she talks with TED's Helen Walters about her writing process — and what it feels like when the creative muse ...
9m 2s
Jul 8
The miraculous device that saved my farm — and changed my life | Josephine Waweru
Exhausted from carrying water up a hill to keep her small farm in Kenya thriving, Josephine Waweru received an unexpected call that offered a nearly unbelievable solution. She shares how one simple device allowed her crops (and her dreams) to flourish — and offers a glimmer of ho ...
10m 18s
Jul 7
The inside story of Notre-Dame’s incredible reconstruction | Philippe Villeneuve
In a moment that stunned the world in 2019, the famed Notre-Dame in Paris went up in flames, threatening the future of the centuries-old Gothic treasure. Philippe Villeneuve, the chief architect of the cathedral’s restoration, recounts the collective effort to bring the building ...
13m 38s
Recommended Episodes
Jan 2025
The AI revolution is running out of data. What can researchers do?
The explosive improvement in artificial intelligence (AI) technology has largely been driven by making neural networks bigger and training them on more data. But experts suggest that the developers of these systems may soon run out of data to train their models. As a result, team ...
16m 31s
Apr 3
2027 Intelligence Explosion: Month-by-Month Model — Scott Alexander & Daniel Kokotajlo
Scott and Daniel break down every month from now until the 2027 intelligence explosion. Scott Alexander is author of the highly influential blogs Slate Star Codex and Astral Codex Ten. Daniel Kokotajlo resigned from OpenAI in 2024, rejecting a non-disparagement clause and risking ...
3h 4m
Mar 2025
Apple's Siri-ous Problem + How Starlink Took Over the World + Is AI Making Us Dumb?
This week, as the long-promised new Siri faces increasing delays, we explore why Apple seems to be falling even further behind in artificial intelligence. Then, the New York Times reporter Adam Satariano joins us to explain how Elon Musk’s satellite internet provider Starlink too ...
1h 2m
May 2023
Azeem on AI: What Can the Copernican Revolution Teach Us about the Future of AI?
In his brief commentary, Azeem Azhar discusses the increasing complexity and capabilities of large language models (LLMs) and the transformative potential they hold. Just as the Copernican Revolution forced us to reassess our understanding of the universe and led to numerous soci ...
10m 8s
Mar 2025
#217 Josh Wolfe: Human Advantage in the World of AI
While Silicon Valley chases unicorns, Josh Wolfe hunts for something far more elusive: scientific breakthroughs that could change civilization. As co-founder and managing partner of Lux Capital, he's looking for the kind of science that turns impossible into inevitable. Josh does ...
2h 2m
Sep 2024
20VC: OpenAI's Newest Board Member, Zico Colter on The Biggest Bottlenecks to the Performance of Foundation Models | The Biggest Questions and Concerns in AI Safety | How to Regulate an AI-Centric Wor
Zico Colter is a Professor and the Director of the Machine Learning Department at Carnegie Mellon University. His research spans several topics in AI and machine learning, including work in AI safety and robustness, LLM security, the impact of data on models, implicit models, an ...
1h
Dec 2024
#32 - Scott Aaronson - The Race to AGI and Quantum Supremacy
How fast is the AI race really going? What is the current state of Quantum Computing? What actually *is* the P vs NP problem? - former OpenAI researcher and theoretical computer scientist Scott Aaronson joins Liv and Igor to discuss everything quantum, AI and consciousness. We he ...
2h 25m
Jul 2022
Human justice and machine intelligence | Joanna Bryson
Should we be scared of AI? Looking for a link we mentioned? It's here: https://linktr.ee/philosophyforourtimes Joanna Bryson discusses how she became interested in the ways different species use intelligence, how the typical tropes in science fiction misunderstand AI and the proble ...
18m 46s
Sep 2024
The Case for AI Optimism: Neil Lawrence on Humanity’s Edge Over Machines
If you're wrestling with the fast-paced rise of AI and what it means for our future, you are not alone. In this episode, Jessi sits down with Neil Lawrence, DeepMind Professor of Machine Learning at the University of Cambridge and author of The Atomic Human: What Makes Us Unique ...
39m 51s
Aug 2024
Chaos and Complexity Economics (with J. Doyne Farmer)
Physicist J. Doyne Farmer wants a new kind of economics that takes account of what we've learned from chaos theory and that builds more accurate models of how humans actually behave. Listen as he makes the case for complexity economics with EconTalk's Russ Roberts. Farmer argues ...
1h 5m