Why does it so often feel like we’re part of a mass AI experiment? What is the responsible way to test new technologies? Bridget Todd explores what it means to live with unproven AI systems that impact millions of people as they roll out across public life.
In this episode: a visit to San Francisco, a major hub for automated vehicle testing; an exposé of a flawed welfare fraud prediction algorithm in a Dutch city; a look at how companies comply with regulations in practice; and how to inspire alternative values for tomorrow’s AI.
Julia Friedlander is senior manager for automated driving policy at the San Francisco Municipal Transportation Agency, who wants to see AVs regulated based on safety performance data.
Justin-Casimir Braun is a data journalist at Lighthouse Reports who investigates suspect algorithms used to predict welfare fraud across Europe.
Navrina Singh is the founder and CEO of Credo AI, a platform that guides enterprises on how to ‘govern’ their AI responsibly in practice.
Suresh Venkatasubramanian is the director of the Center for Technological Responsibility, Reimagination, and Redesign at Brown University, where he brings joy to computer science.
IRL is an original podcast from Mozilla, the non-profit behind Firefox. In Season 7, host Bridget Todd shares stories about prioritizing people over profit in the context of AI.