Increasing Trust in AI Systems through Explainability – Intel Chip Chat – Episode 573

Description

In this Intel Chip Chat audio podcast with Allyson Klein: Dr. Casimir Wierzynski, Senior Director for the Office of the CTO in the AI Products Group (AIPG) at Intel, joins us to discuss explainable AI. A key topic at NIPS 2017, explainable AI systems are those in which the AI algorithm’s inner workings are revealed [...]

Subtitle
In this Intel Chip Chat audio podcast with Allyson Klein: Dr. Casimir Wierzynski, Senior Director for the Office of the CTO in the AI Products Group (AIPG) at Intel, joins us to discuss explainable AI. A key topic at NIPS 2017, explainable AI systems are [...]
Duration
Publishing date
2018-03-02 07:01
Link
http://feedproxy.google.com/~r/ConnectedSocialMedia/~3/SMqxEpnmrbA/
Contributors
  Connected Social Media (author)
Enclosures
http://feedproxy.google.com/~r/ConnectedSocialMedia/~5/gTR7LBpxK88/Increasing_Trust_AI_Systems_through_Explainability_Intel_Chip_Chat_573.mp3
audio/mpeg

Shownotes

In this Intel Chip Chat audio podcast with Allyson Klein: Dr. Casimir Wierzynski, Senior Director for the Office of the CTO in the AI Products Group (AIPG) at Intel, joins us to discuss explainable AI. A key topic at NIPS 2017, explainable AI refers to systems whose inner workings are revealed transparently and can be easily understood by humans. Dr. Wierzynski contrasts this with neural networks, whose component parts are more challenging to analyze. In this interview, Dr. Wierzynski talks about why explainable AI is of particular interest when developing artificial neural networks, how the new, Intel-supported Partnership on AI is driving cross-industry collaboration in explainable AI, and how explainable AI provides opportunities to increase trust in AI systems.
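
As an aside (not part of the episode itself), the contrast described above can be made concrete with a small, illustrative scikit-learn sketch: a shallow decision tree whose learned rules can be printed and read directly, next to a small neural network whose weights do not translate into human-readable rules. The dataset and model choices here are assumptions for illustration only, not anything drawn from the podcast.

    # Illustrative sketch only: an inherently explainable model vs. a harder-to-inspect one.
    # Assumes scikit-learn and its built-in iris dataset.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.neural_network import MLPClassifier

    data = load_iris()
    X, y = data.data, data.target

    # Explainable model: the learned decision rules can be printed and read by a human.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=list(data.feature_names)))

    # Neural network: comparable accuracy, but its weights are not human-readable rules,
    # which is where post-hoc explanation techniques become relevant.
    mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
    print("MLP training accuracy:", mlp.score(X, y))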

For more information, please read Dr. Wierzynski’s blog post, “The Challenges and Opportunities of Explainable AI”.

Read further about Dr. Wierzynski’s work at:
ai.intel.com