Evaluating Model Explainability Methods with Sara Hooker - TWiML Talk #189
From The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Length:
64 minutes
Released:
Oct 10, 2018
Format:
Podcast episode
Description
In this, the first episode of the Deep Learning Indaba series, we’re joined by Sara Hooker, AI Resident at Google Brain. I had the pleasure of speaking with Sara in the run-up to the Indaba about her work on interpretability in deep neural networks. We discuss what interpretability means and when it’s important, and explore some nuances, like the distinction between interpreting model decisions vs. model function. We also dig into her paper Evaluating Feature Importance Estimates and look at how this work relates to interpretability approaches like LIME. We then talk a bit about Google, in particular the relationship between Brain and the rest of the Google AI landscape, and the significance of the recently announced Google AI Lab in Accra, Ghana, led by friend of the show Moustapha Cisse. And, of course, we chat a bit about the Indaba as well. For the complete show notes for this episode, visit twimlai.com/talk/189. For more information on the Deep Learning Indaba series, visit twimlai.com/indaba2018.