Prof. Nick Bostrom is widely regarded as the leading academic thinker on strong artificial intelligence, transhumanism, and existential risk. His talks, books, and articles span all of these topics, and his work is devoted to bringing attention and critical thought to these pressing human questions.
He is the founder and director of the Future of Humanity Institute at Oxford, and the author of “Superintelligence: Paths, Dangers, Strategies,” a book that Bill Gates and Elon Musk have both referenced in interviews about the risks of artificial intelligence.
In our exclusive interview with Dr. Bostrom (below), we explore how to identify “existential” risks (those that could wipe out humanity forever), and how individuals and groups might mitigate these risks on a grand scale to better secure human flourishing in the coming decades and centuries.
How can we determine which risks are most likely? Where does AI stand as an existential risk today? How can society pool its efforts to prevent large-scale catastrophes?
You can listen to the interview below, or find it on iTunes and subscribe for more interviews with AI luminaries and machine learning researchers from around the globe.
[Did you enjoy this episode? Subscribe on iTunes and leave us a review with your thoughts]
Image credit: http://img.gfx.no/