"Understanding what's going on in a model is important to fine-tune it for specific tasks and to build trust."
Bhavna Gopal is a PhD candidate at Duke and a research intern at Slingshot, with experience at Apple, Amazon, and Vellum.
We discuss:
- How adversarial robustness research impacts the field of AI explainability.
- How to evaluate a model's ability to generalize.
- Which adversarial attacks we should be concerned about with LLMs.
