Risks of AI – What Researchers Think is Worth Worrying About

Daniel Faggella

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders.


The year 2015 might be seen as the year that “artificial intelligence risk” or “artificial intelligence danger” went mainstream (or close to it). With the founding of Elon Musk’s OpenAI and the Leverhulme Centre for the Future of Intelligence; the increased attention on the Future of Life Institute and Oxford’s Future of Humanity Institute; and a flurry of attention surrounding celebrity comments on AI dangers (including the now well-known statements of Bill Gates and Elon Musk), it’s safe to say that the risks of AI have embedded themselves as a topic of pop-culture discourse, even if not yet a very serious one among the general public.

Recently, we interviewed and reached out to more than 30 artificial intelligence researchers (all but one holding a PhD) and asked them which risks of AI they believe to be the most pressing over the next 20 years and over the next 100 years. Below is a full list of our respondents; clicking on a respondent brings up their answer to the 20-year risk question.

Note: The data for this graphic are no longer publicly available and are accessible only to Emerj research members. This change was enacted in November 2018.

Surveying the Trends — Risks of AI

More to fear from human beings than from AI?

Automation and economic impact topped researchers’ risk list with 36 percent of responses, mirroring the massive amount of media attention on autonomous vehicles and improved robotic manufacturing, among other industries. “General mismanagement” and “autonomous weapons” also ranked as relatively popular responses, at 15 percent and 12 percent respectively.

Several researchers spoke of the risk of AI exacerbating or accelerating present-day flaws in societal structures and pervasive issues. Dr. Joscha Bach made this point clear when he stated, “The risks brought about by near-term AI may turn out to be the same risks that are already inherent in our society. Automation through AI will increase productivity, but won’t improve our living conditions if we don’t move away from a labor/wage-based economy. It may also speed up pollution and resource exhaustion if we don’t manage to install meaningful regulations.”

The financial industry was singled out as a particularly risky space, one in which AI intersects with that most precarious of human motivations: greed. “Financial algorithms. These are, by the way, already super-intelligences without human-centric goals (or maybe I’m too picky for not considering ‘making your trading house even richer’ sufficiently human-centric),” was a particularly pointed response from Dr. Andras Kornai. Dr. Daniel Berleant remarked in a similar vein: “Catastrophic distortion of the economy by artificial intelligences designed to make money for their owners.”

Just as interesting were the 18 percent of respondents who didn’t name, or didn’t see, any real risk inherent in AI in the next 20 years. Dr. Eduardo Torres Jara sees the real ambiguity in how much power or autonomy we give to AI; he doesn’t, however, see AI itself as a threat:

“It is hard to believe that AI will be an actual risk. Any advanced technology has its own risks; for example, the flight control of the space shuttle can fail and cause an accident. However, the technology used to control the space shuttle is not itself dangerous. In the case of robots, we might not want weaponized autonomous robots, because ‘autonomy’ is not reliable enough even in robots with less fatal consequences in case of failure. The current fear of AI is that machines will become independent, will have free will, and will take over. I do not see that happening in the near future.” – Dr. Eduardo Torres Jara

Dr. Danko Nikolic also sees AI as a tool that is not, in and of itself, a danger; however, he recognizes its potential to deepen economic divisions if society does nothing to plan for the resulting gaps. “I do not see any of these dangers. I think this is a kind of science fiction. AI will certainly bring economic advantages and this may increase the rich-poor economic divide… no AI will be able to solve that for us. We will have to find a solution ourselves,” Nikolic posited.

Then there were the responses that stood, more or less, apart from the crowd. In Dr. Stephen Thaler’s opinion, the greatest risk that human beings face is “the revelation that human minds may not be as wonderful as we all thought, leading to the inevitable humiliation and denial that accompanies significant technological breakthroughs.”

Will humanity have to face its inadequacy compared to AI’s superhuman processing powers, or will human beings realize unique strengths as a species that cannot be replicated? Will the onslaught of AI technologies inspire an overcoming of human disparities as societies come together to address underlying faults, or catalyze growing rifts that eventually escape our control? On behalf of all of humanity, let’s hope we make collective decisions that increase our chances of the former.

Concluding Thoughts

While it’s important to bear in mind that the “categorization” was done after the survey (it could be argued that other categories could have been used to group these responses), and that 33 researchers hardly constitute a consensus, the resulting trends in the thinking of PhDs, most of whom have spent their careers in various segments of AI, are interesting and worth considering.

 

