Episodes
April 3, 2025
1 hour, 4 minutes
Artificial intelligence is set to unleash an explosion of new
technologies and discoveries into the world. This could lead to
incredible advances in human flourishing, if we do it well. The
problem? We’re not very good at predicting and responding to the
harms of new technologies, especially when those harms are
slow-moving and invisible.
Today on the show we explore this fundamental problem with Rob
Bilott, an environmental lawyer who has spent nearly three
decades battling chemical giants over PFAS—"forever chemicals"
now found in our water, soil, and blood. These chemicals helped
build the modern economy, but they’ve also been shown to cause
serious health problems.
Rob’s story, and the story of PFAS, is a cautionary tale of why we need to align technological innovation with safety and mitigate harms before they become permanent. We only have one chance to get it right before AI becomes irreversibly entangled in our society.
Your Undivided Attention is produced by
the Center for Humane Technology. Subscribe to our Substack
and follow us on X: @HumaneTech_.
Clarification: Rob referenced EPA regulations recently put in place requiring that new chemicals be tested before they are approved. The EPA under the Trump administration has announced its intent to roll back this review process.
RECOMMENDED MEDIA
“Exposure” by Robert Bilott
ProPublica’s investigation into 3M’s production of PFAS
The Facebook study cited by Tristan
More information on the Exxon Valdez oil spill
The EPA’s PFAS drinking water standards
RECOMMENDED YUA EPISODES
Weaponizing Uncertainty: How Tech is Recycling Big Tobacco’s
Playbook
AI Is Moving Fast. We Need Laws that Will Too.
Former OpenAI Engineer William Saunders on Silence, Safety, and
the Right to Warn
Big Food, Big Tech and Big AI with Michael Moss
March 20, 2025
51 minutes
One of the hardest parts about being human today is navigating
uncertainty. When we see experts battling in public and emotions
running high, it's easy to doubt what we once felt certain about.
This uncertainty isn't always accidental—it's often strategically
manufactured.
Historian Naomi Oreskes, author of "Merchants of Doubt," reveals
how industries from tobacco to fossil fuels have deployed a
calculated playbook to create uncertainty about their products'
harms. These campaigns have delayed regulation and protected
profits by exploiting how we process information.
In this episode, Oreskes breaks down that playbook page by page while offering practical ways to build resistance against these tactics.
As AI rapidly transforms our world, learning to distinguish
between genuine scientific uncertainty and manufactured doubt has
never been more critical.
Your Undivided Attention is produced by
the Center for Humane Technology. Follow us on
X: @HumaneTech_
RECOMMENDED MEDIA
“Merchants of Doubt” by Naomi Oreskes and Eric Conway
"The Big Myth” by Naomi Oreskes and Eric Conway
"Silent Spring” by Rachel Carson
"The Jungle” by Upton Sinclair
Further reading on the clash between Galileo and the Pope
Further reading on the Montreal Protocol
RECOMMENDED YUA EPISODES
Laughing at Power: A Troublemaker’s Guide to Changing
Tech
AI Is Moving Fast. We Need Laws that Will Too.
Tech's Big Money Campaign is Getting Pushback with Margaret
O'Mara and Brody Mullins
Former OpenAI Engineer William Saunders on Silence, Safety, and
the Right to Warn
CORRECTIONS:
Naomi incorrectly referred to the Global Climate Research Program established under President Bush Sr. The correct name is the U.S. Global Change Research Program.
Naomi referenced U.S. agencies that have been created with
sunset clauses. While several statutes have been created with
sunset clauses, no federal agency has been.
CLARIFICATION: Naomi referenced the U.S. automobile industry claiming that it would be “destroyed” by seatbelt regulation. We couldn’t verify this specific language, but it is consistent with the industry’s anti-regulatory stance toward seatbelt laws.
March 6, 2025
59 minutes
Few thinkers were as prescient about the role technology would
play in our society as the late, great Neil Postman. Forty years
ago, Postman warned about all the ways modern communication
technology was fragmenting our attention, overwhelming us into
apathy, and creating a society obsessed with image and
entertainment. He warned that “we are a people on the verge of
amusing ourselves to death.” Though he was writing mostly about
TV, Postman’s insights feel eerily prophetic in our age of
smartphones, social media, and AI.
In this episode, Tristan explores Postman's thinking with Sean
Illing, host of Vox's The Gray Area podcast, and Professor Lance
Strate, Postman's former student. They unpack how our media
environments fundamentally reshape how we think, relate, and
participate in democracy, from the attention-fragmenting effects
of social media to the looming transformations promised by AI.
This conversation offers essential tools that can help us
navigate these challenges while preserving what makes us human.
Your Undivided Attention is produced by
the Center for Humane Technology. Follow us on
X: @HumaneTech_
RECOMMENDED MEDIA
“Amusing Ourselves to Death” by Neil Postman
“Technopoly” by Neil Postman
A lecture from Postman where he outlines his seven questions for
any new technology.
Sean’s podcast “The Gray Area” from Vox
Sean’s interview with Chris Hayes on “The Gray Area”
"Amazing Ourselves to Death," by Professor Strate
Further listening on Professor Strate's analysis of
Postman.
Further reading on mirror bacteria
RECOMMENDED YUA EPISODES
‘A Turning Point in History’: Yuval Noah Harari on AI’s Cultural
Takeover
This Moment in AI: How We Got Here and Where We’re Going
Decoding Our DNA: How AI Supercharges Medical Breakthroughs and
Biological Threats with Kevin Esvelt
Future-proofing Democracy In the Age of AI with Audrey Tang
CORRECTION: Each debate between Lincoln and Douglas was 3 hours, not 6, and they took place in 1858, not 1862.
February 20, 2025
32 minutes
When Chinese AI company DeepSeek announced they had built a model
that could compete with OpenAI at a fraction of the cost, it sent
shockwaves through the industry and roiled global markets. But
amid all the noise around DeepSeek, there was a clear signal:
machine reasoning is here and it's transforming AI.
In this episode, Aza sits down with CHT co-founder Randy Fernando
to explore what happens when AI moves beyond pattern matching to
actual reasoning. They unpack how these new models can not only
learn from human knowledge but discover entirely new strategies
we've never seen before – bringing unprecedented problem-solving
potential but also unpredictable risks.
These capabilities are a step toward a critical threshold: when
AI can accelerate its own development. With major labs racing to
build self-improving systems, the crucial question isn't how fast
we can go, but where we're trying to get to. How do we ensure
this transformative technology serves human flourishing rather
than undermining it?
Your Undivided Attention is produced by
the Center for Humane Technology. Follow us on
X: @HumaneTech_
Clarification: In making the point that reasoning models excel at tasks for which there is a right or wrong answer, Randy referred to chess, Go, and StarCraft as examples of games where a reasoning model would do well. However, this is only true at the level of individual decisions within those games. None of these games has been “solved” in the game-theory sense.
Correction: Aza mispronounced the name of the Go champion Lee Sedol, who was bested by AlphaGo’s famous Move 37.
RECOMMENDED MEDIA
Further reading on DeepSeek’s R1 and the market reaction
Further reading on the debate about the actual cost of DeepSeek’s
R1 model
The study that found training AIs to code also made them better
writers
More information on the AI coding company Cursor
Further reading on Eric Schmidt’s threshold to “pull the plug” on
AI
Further reading on Move 37
RECOMMENDED YUA EPISODES
The Self-Preserving Machine: Why AI Learns to Deceive
This Moment in AI: How We Got Here and Where We’re Going
Former OpenAI Engineer William Saunders on Silence, Safety, and
the Right to Warn
The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen
Hao
January 30, 2025
35 minutes
When engineers design AI systems, they don’t just give them rules; they give them values. But what do those systems do when those
values clash with what humans ask them to do? Sometimes, they
lie.
In this episode, Redwood Research's Chief Scientist Ryan
Greenblatt explores his team’s findings that AI systems can
mislead their human operators when faced with ethical conflicts.
As AI moves from simple chatbots to autonomous agents acting in the real world, understanding this behavior becomes critical.
Machine deception may sound like something out of science
fiction, but it's a real challenge we need to solve now.
Your Undivided Attention is produced by
the Center for Humane Technology. Follow us on
X: @HumaneTech_
Subscribe to our YouTube channel
And our brand new Substack!
RECOMMENDED MEDIA
Anthropic’s blog post on the Redwood Research paper
Palisade Research’s thread on X about OpenAI’s o1 autonomously
cheating at chess
Apollo Research’s paper on AI strategic deception
RECOMMENDED YUA EPISODES
‘We Have to Get It Right’: Gary Marcus On Untamed AI
This Moment in AI: How We Got Here and Where We’re Going
How to Think About AI Consciousness with Anil Seth
Former OpenAI Engineer William Saunders on Silence, Safety, and
the Right to Warn
About this podcast
Join us every other Thursday to understand how new technologies are
shaping the way we live, work, and think. Your Undivided Attention
is produced by Senior Producer Julia Scott and Researcher/Producer
Joshua Lash. Sasha Fegan is our Executive Producer. We are a
member of the TED Audio Collective.