Artificial Intelligence (AI) is on every business leader’s agenda. How do you ensure the AI systems you deploy are harmless and trustworthy? This month, Azeem picks some of his favorite conversations with leading AI safety experts to help you break through the noise.
Today’s pick is Azeem’s conversation with Dr. Rumman Chowdhury, a pioneer in the field of applied algorithmic ethics. She runs Parity Consulting, the Parity Responsible Innovation Fund, and she’s a Responsible AI Fellow at the Berkman Klein Center for Internet & Society at Harvard University.
They discuss:
- How you can assess and diagnose bias in unexplainable “black box” algorithms.
- Why responsible AI demands top-down organizational change, new metrics, and systems of redress.
- The emerging field of "responsible machine learning operations."
Further resources:
- TIME100 AI: Rumman Chowdhury
- Building an AI We Can Trust with Anthropic’s Dario Amodei (Exponential View, 2023)
- “‘I do not think ethical surveillance can exist’: Rumman Chowdhury on accountability in AI” (The Guardian, May 2023)
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
