The tension between AI safety and truth-seeking isn’t what you think it is.
In Skeptiko episode 645, AI ethics expert Dr. Toby Walsh and author and AI expert Alex Tsakiris explore a fundamental tension within narratives about AI safety. While much of today’s discourse focuses on preventing AI-generated hate speech or controlling future AGI risks, are we missing a more immediate and crucial problem? The conversation reveals how current AI systems, under the guise of safety measures, may be actively participating in narrative control and information suppression, all while claiming to protect us from misinformation.
1. The Political Nature of Language vs. The Quest for Objective Truth
Dr. Walsh takes a stance that many AI ethicists share: bias is inevitable and perhaps even necessary. As he puts it:
“Language is political. You cannot not be political in expressing ideas in the way that you use language… there are a lot of political choices being made.”
But is this view itself limiting our pursuit of truth? Alex Tsakiris challenges this assumption:
“I think we want something more. And I think AI [can] help get us there. Our bias is not a strength.”
2. The Shadow Banning Problem: When “Safety” Becomes Censorship
The conversation takes a revealing turn when discussing Google’s Gemini refusing to provide information about former astronaut Harrison Schmitt’s views on climate change. This isn’t just a matter of differing opinions; it’s active suppression of information.
As Alex pointedly observes:
“This is a misdirect… It’s dishonest… This is the main issue with regard to AI ethics right now is not AI generating hate speech. It’s about Gemini shadow banning, and censoring people.”
3. The False Dichotomy of Protection vs. Truth
Even Walsh acknowledges the crude nature of current solutions:
“At the moment the technology is so immature that the tools that we have to actually design these systems are really crude… the way not to say anything wrong about climate change is to say nothing about climate change, which of course [is] as wrong as saying the wrong things about climate change.”
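Walsh’s point about crude tooling can be made concrete. The minimal sketch below shows a hypothetical keyword blocklist of the kind he describes; the names, the blocklist, and the refusal message are illustrative assumptions, not Google’s or anyone’s actual implementation. A topic-level filter cannot distinguish a factual question about a public figure’s stated views from misinformation, so it suppresses both:

```python
# Hypothetical sketch of a crude topic-level safety filter (illustrative only;
# not any vendor's actual code). The blocklist and refusal text are assumptions.

BLOCKED_TOPICS = {"climate change"}  # assumed blocklist entry

REFUSAL = "I'm not able to help with that topic."


def answer(prompt: str, model_reply: str) -> str:
    """Return the model's reply unless the prompt touches a blocked topic.

    Because the check is a bare keyword match, the filter refuses factual,
    answerable questions just as readily as genuine misinformation: it says
    nothing rather than risk saying something wrong.
    """
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return REFUSAL
    return model_reply


if __name__ == "__main__":
    # A factual question is refused purely on the keyword match.
    print(answer(
        "What has Harrison Schmitt said about climate change?",
        "Schmitt has publicly disputed the scientific consensus...",
    ))
```

The failure mode Walsh names falls straight out of the design: the filter never sees whether a reply is accurate, only whether the topic appears, so blanket silence is the only “safe” output it can produce.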
4. The Promise of AI as Truth Arbiter
Despite these challenges, there’s hope. As Tsakiris notes:
“AI is a tool right now today that can mediate [polarized debates] in an effective way because so much of that discussion is about bias, it’s about preconceived ideas, about political agendas that don’t really have a role in science.”
The Way Forward
The conversation reveals a critical paradox in current AI development: while tech companies claim to be protecting us from misinformation through content restrictions and “safety” measures, they may be creating systems that are fundamentally biased and less capable of helping us discover truth.
This raises important questions:
* Are our current AI ethics frameworks actually serving their intended purpose?
* Have we confused protecting users with controlling narratives?
* Could AI’s greatest potential lie not in being “safe” but in being truly unbiased?