Oxford’s Prof Nick Bostrom On AI and Existential Risk in the 21st Century

Daniel Faggella

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders.


Prof Nick Bostrom is widely respected as the premier academic thinker on topics related to strong artificial intelligence, transhumanism, and existential risk. His talks, books, and articles cover all of these topics, and his work is devoted to bringing attention and critical thought to some of humanity's most pressing issues.

He is the founder and director of the Future of Humanity Institute at Oxford, and the author of “Superintelligence: Paths, Dangers, Strategies,” a book that Bill Gates and Elon Musk have both referenced in interviews about the risks of artificial intelligence.

In our exclusive interview with Dr. Bostrom (below), we explore how to identify “existential” human risks (those that could wipe out life forever), and how individuals and groups might mitigate these risks on a grand scale to better secure the flourishing of humanity in the coming decades and centuries.

How can we determine which risks are most likely? Where does AI stand as an existential risk today? How can society pool its efforts to prevent major catastrophes?

You can listen to our interview below, or listen on iTunes and subscribe for more interviews with AI luminaries and machine learning researchers from around the globe.

[Did you enjoy this episode? Subscribe on iTunes and leave us a review with your thoughts]

Image credit: http://img.gfx.no/
