Not all algorithms are explainable. So does that mean it's okay not to provide any explanation of how your AI system reached its decision if you're using one of those "black box" algorithms? The answer should obviously be no. So what do you do, then, when creating Ethical and Responsible AI systems, to address this issue of explainable and interpretable AI?
