An artificial intelligence capable of improving itself risks growing more intelligent than any human and slipping beyond our control. Josh explains why a superintelligent AI that we haven’t planned for would be extremely bad for humankind. (Original score by Point Lobo.)
Interviewees: Nick Bostrom, Oxford University philosopher and founder of the Future of Humanity Institute; David Pearce, philosopher and co-founder of the World Transhumanist Association (Humanity+); Sebastian Farquhar, Oxford University philosopher.
Learn more about your ad-choices at https://www.iheartpodcastnetwork.com
See omnystudio.com/listener for privacy information.
