From educational institutions to healthcare professionals, from employers to governing bodies, artificial intelligence technologies and algorithms are increasingly used to assess and decide upon various aspects of our lives. However, the question arises: are these systems truly impartial and just in their judgments when they read humans and their behaviour? Our answer is that they are not. Despite their purported aim of enhancing objectivity and efficiency, these technologies paradoxically harbor systemic biases and inaccuracies, particularly in the realm of human profiling.

The Human Error Project has investigated how journalists, civil society organizations and tech entrepreneurs in Europe make sense of AI errors, and how they are negotiating and coexisting with the human rights implications of AI. With the aim of fostering debate between academia and the public, the “Machines That Fail Us” podcast series hosts the voices of some of the most engaged individuals involved in the fight for a better future with artificial intelligence.

“Machines That Fail Us” is made possible thanks to a grant provided by the Swiss National Science Foundation (SNSF)’s “Agora” scheme. The podcast is produced by The Human Error Project team. Dr. Philip Di Salvo, the main host, works as a researcher and lecturer at the HSG’s Institute for Media and Communications Management. https://mcm.unisg.ch/ https://www.unisg.ch/
Date | Title & Description | Contributors |
---|---|---|
2024-07-05 | Machines That Fail Us #5: The AI we have built so far comes with many shortcomings and concerns. At the same time, the AI tools we have today are the product of specific technological cultures and business decisions. Could we just do AI differently? For the final epi... |
2024-06-14 | Machines That Fail Us #4: We don’t necessarily have to build artificial intelligence the way we’re doing it today. To make AI truly inclusive, we must look beyond Western techno-cultures and beyond our understanding of technology as either utopian or dystopian. How could our... |
2024-05-17 | Machines That Fail Us #3: Errors and biases: tales of algorithmic discrimination. The record of biases, discriminatory outcomes, and errors, as well as the societal impacts of artificial intelligence systems, is now widely documented. However, the question remains: how is the struggle for algorithmic justice evolving? We asked Angela... |
2024-04-19 | Machines That Fail Us #2: Following the AI beat – algorithms making the news. What’s the role of journalism in making sense of AI and its errors? With Melissa Heikkilä, senior reporter at the MIT Technology Review. Host: Dr. Philip Di Salvo. | Melissa Heikkilä (guest), Dr. Philip Di Salvo (host)
2024-03-21 | Machines That Fail Us #1: Making sense of the human error of AI. What are the errors that artificial intelligence systems can make, and what’s their impact on humans? The Human Error Project team discusses the results of their own research into AI errors and algorithmic profiling. | The Human Error Project team