Apr 2023
22m 17s

Spotlight: The Three Rules of Humane Technology

TRISTAN HARRIS AND AZA RASKIN, THE CENTER FOR HUMANE TECHNOLOGY
About this episode

In our previous episode, we shared a presentation Tristan and Aza recently delivered to a group of influential technologists about the race happening in AI. In that talk, they introduced the Three Rules of Humane Technology. In this Spotlight episode, we’re taking a moment to explore these three rules more deeply in order to clarify what it means to be a responsible technologist in the age of AI.

Correction: Aza mentions infinite scroll being in the pockets of 5 billion people, implying that there are 5 billion smartphone users worldwide. The number of smartphone users worldwide is actually 6.8 billion now.

 

RECOMMENDED MEDIA 

We Think in 3D. Social Media Should, Too
Tristan Harris writes about a simple visual experiment that demonstrates the power of one’s point of view

Let’s Think About Slowing Down AI

Katja Grace’s piece about how to avert doom by not building the doom machine

If We Don’t Master AI, It Will Master Us

Yuval Harari, Tristan Harris, and Aza Raskin call upon world leaders to respond to this moment at the level of the challenge it presents in this New York Times opinion piece

 

RECOMMENDED YUA EPISODES 

The AI Dilemma

Synthetic humanity: AI & What’s At Stake

 

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

Up next
Aug 14
“Rogue AI” Used to be a Science Fiction Trope. Not Anymore.
Everyone knows the science fiction tropes of AI systems that go rogue, disobey orders, or even try to escape their digital environment. These are supposed to be warning signs and morality tales, not things that we would ever actually create in real life, given the obvious danger. …
42m 11s
Jul 31
AI is the Next Free Speech Battleground
Imagine a future where the most persuasive voices in our society aren't human. Where AI generated speech fills our newsfeeds, talks to our children, and influences our elections. Where digital systems with no consciousness can hold bank accounts and property. Where AI companies h…
49m 11s
Jul 17
Daniel Kokotajlo Forecasts the End of Human Dominance
In 2023, researcher Daniel Kokotajlo left OpenAI—and risked millions in stock options—to warn the world about the dangerous direction of AI development. Now he’s out with AI 2027, a forecast of where that direction might take us in the very near future. AI 2027 predicts a world w…
38m 19s
Recommended Episodes
Jun 2023
AI Won’t Wipe Out Humanity (Yet)
The idea that machine intelligence will one day take over the world has long been a staple of science fiction. But given the rapid advances in consumer-level artificial intelligence tools, the fear has felt closer to reality these past few months than it ever has before. The gene…
29m 51s
Apr 2021
The race to build AI that benefits humanity | Sam Altman
In this new season of the TED Interview, conversations with people who make a case for...optimism. Not some blind, hopeful feeling but the conviction that somewhere out there are solutions that, given the right attention and resources, can guide us out of the dark place we’re in. …
1h 9m
Jun 2023
Big Tech Wants You to Think AI Will Kill Us All
Did you know that AI is set to automate as many as a third of your tasks? In the future we’re all going to be saving a lot of time. That’s as long as no one invents artificial general intelligence that fires all the nukes or turns us all into paperclips. Which, some experts seem…
38m 5s
May 2024
The TED AI Show: Is AI destroying our sense of reality? with Sam Gregory
Could you spot a deepfake? We’re entering a new world where generative AI is challenging our sense of what’s real and what’s fiction. In our first episode, Bilawal and Sam Gregory, a human rights activist and technologist, discuss how to protect our sense of reality. This is an ep…
27m 25s
Jan 2022
The race to build AI that benefits humanity | Sam Altman
Will innovation in artificial intelligence drastically improve our lives, or destroy humanity as we know it? From the unintended consequences we've suffered from platforms like Facebook and YouTube to the danger of creating technology we can't control, it's easy to see why people…
1h 8m
Nov 2023
Superintelligent AI: The Doomers
In the first episode of a new, five-part series of Tech Tonic, FT journalists Madhumita Murgia and John Thornhill ask how close we are to building human-level artificial intelligence and whether ‘superintelligent’ AI poses an existential risk to humanity. John and Madhu speak to…
28m 47s
Nov 2023
Superintelligent AI: The Utopians
If even AI companies are fretting about the existential threat that human-level AI poses, why are they building these machines in the first place? And as they press ahead, a debate is raging about how we regulate this emergent sector to keep it under control. In the second episod…
24m 48s
Jul 2023
The AI Dilemma | Your Undivided Attention
This is an episode from Your Undivided Attention, another podcast from the TED Audio Collective. At Center for Humane Technology, we want to close the gap between what the world hears publicly about AI from splashy CEO presentations and what the people who are closest to the risks…
44m 12s