As generative AI moves into production, traditional guardrails and input/output filters can prove too slow, too expensive, or too limited. In this episode, Alizishaan Khatri of Wrynx joins Daniel and Chris to explore a fundamentally different approach to AI safety and interpretability. They unpack the limits of today’s black-box defenses, the role of interpretability, and how model-native, runtime signals can enable safer AI systems.
Featuring:
- Alizishaan Khatri – LinkedIn
- Chris Benson – Website, LinkedIn, Bluesky, GitHub, X
- Daniel Whitenack – Website, GitHub, X
Upcoming Events:
- Register for upcoming webinars here!
