Interpreting complicated models is a hot topic. How can we trust and manage AI models that we can’t explain? In this episode, Janis Klaise, a data scientist with Seldon, joins us to talk about model interpretation and Seldon’s new open source project called Alibi. Janis also gives some of his thoughts on production ML/AI and how Seldon addresses related problems.
Sponsors:
- DigitalOcean – Check out DigitalOcean's Droplets with dedicated vCPU threads. Get started for free with a $50 credit. Learn more at do.co/changelog.
- DataEngPodcast – A podcast about data engineering and modern data infrastructure.
- Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com.
Featuring:
- Janis Klaise – GitHub, LinkedIn, X
- Chris Benson – Website, GitHub, LinkedIn, X
- Daniel Whitenack – Website, GitHub, X
Show Notes:
Books
Upcoming Events:
- Register for upcoming webinars here!
