This Might Be the Real Tragedy About Elon Musk’s Fears of Artificial Intelligence

Daniel Faggella

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders.

It was nearly a year ago that Elon Musk likened artificial intelligence to “summoning the demon.” The complete, oft-quoted passage from his 2014 talk at MIT is as follows:

“With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”

If you’ve been on TechCrunch in the last 12 months, you’ve probably read it thrice. Four months later, before the topic of “Musk on AI” could go stale, Musk famously donated $10M to the Future of Life Institute to fund research on AI safety (you can see all of the 2015 winners of FLI’s grants here, including a project by one of our past guests, Wendell Wallach of Yale).

Fears around the future of artificial intelligence certainly existed well before Musk’s comments became popular, but his celebrity influence allowed such statements to become “media-worthy” rather than merely relegated to the various niche corners of the net (such as Less Wrong, the IEET, Nick Bostrom’s blog, etc.).

In the same month as Musk’s donation, Bill Gates famously expressed his own lingering fears about artificial intelligence (in a similarly brief quote) during a Reddit AMA session. Legendary physicist Dr. Stephen Hawking had expressed his own serious concerns about the existential risk of AI just months prior to Gates’ AMA… and January 2015 also marked the month that Musk, Apple co-founder Steve Wozniak, and other AI experts signed their Open Letter on Artificial Intelligence, calling for a larger global shift in focus toward AI safety.

Not everyone shares Musk’s and Gates’ opinion that AI is worth fearing – at least not anything in the coming century – and there are myriad articles disagreeing with their basic premises, or with the apparent lack of legitimate premises.

Much of the disagreement has been around timescales, or about the viability of AI ever getting “out of control” in the first place (this article in Computerworld includes quite reasonable counter-arguments from well-informed, well-respected researchers). Stanford PhD and AI researcher Roger Schank disagreed wholeheartedly with Musk’s fears, and he’s not alone in the “smart-guy” camp of people who see no reason to fear AI right now.

Some of the arguments are made by folks who’ve done a bit of homework and believe there might be something more to add to the conversation, though they themselves are not experts (e.g. this article in GigaOm, among countless others). Others, like Slate’s Adam Elkus, believe that Musk’s references to the supernatural – i.e. demon summoning – aren’t a productive light to shed on the future of technology. Many of the points raised seem quite valid, and worth adding to the conversation about AI, risk, and society.

Disagreement, however, is no tragedy. It’s difficult to argue against the idea that a healthy amount of clashing ideas and ideals is necessary to flesh out a fruitful path forward for society, and to understand important, multi-dimensional forces from many different sides. If there are core nuggets of truth to be found, genuine matters of concern to unearth, they are likely to tumble out when a lot of smart people contribute to finding them.

What I believe is tragic is when a debate no longer serves the end of finding truth, or of unearthing concerns and opportunities.

One class of fruitless conversation comes in the form of misinformed and close-minded dismissal of an idea. Advanced technologies are far from the only domain in which such “disagreement” squashes the possibility of progress and assessment, but I believe that until now there hasn’t been enough media-worthy attention on AI to bring this unfortunate facet of human nature out of hiding and into open debate. A few minutes of Googling will unearth a good deal of articles and videos from those who do and those who do not consider AI to be a significant near-term threat. More often than I’d hope, the perspective given amounts to “clearly, those who disagree aren’t even sane.”

A second class of fruitless conversation seems about as hard to avoid as the first: namely, the protection of our beliefs and the swaying of our opinions to further our own outcomes. Some of the most ardent backlash to Elon Musk’s comments about AI came from those who are most heavily invested in developing AGI, or those who are “rooting for the Singularity” in one way or another. Of course, there’s positively nothing wrong with work on AGI, or with enthusiasm and interest in the Singularity… but someone invested in those domains is more likely to want to spit on concerns about technological progress.

One Google+ futurist channel announced a blog post with these lines:

“AI fear “COMPLETELY” dismantled. Woo hoo… take that Musk, Hawking, Tegmark, Bostrom etc (go on, sign your silly petition for AI “safety”)!”

While I think that disagreeing with Tegmark and Bostrom is a healthy part of the fruitful debate around technologies and risk, it seems egoic to make them the “enemy,” making the debate less about truth and more about slaying the “outsiders.”

I doubt that Musk (or certainly Bostrom) is thinking up evil plots about how to slow down technological progress and bring pain and suffering to the human race. It’s probably safe to say that they’re reasonable, relatively well-intended folks who happen to have come to a different conclusion about AI and safety.

Our past guest Dr. Ben Goertzel is among the most influential proponents of artificial general intelligence – and his response to Musk was rather aggressive. He likened Musk to the Taliban (albeit in a tongue-in-cheek fashion), and seemed to frame Musk, MIRI, and other “AI alarmists” as enemies of what is right. It wasn’t “scathing” or flamboyant or hateful, but the analogies might have been a bit extreme, and the “enemizing” quite direct. Later, Goertzel replaced the article with this toned-down version, addressing some of the bite-back from the Humanity+ readership.

In fact, a broad swath of articles on Humanity+ seems to serve the overt purpose of kicking the ideas of Musk and Hawking in the teeth, rather than considering them fairly or openly.

No matter how smart and well-informed the author, with a title like “AI Doomsaying is the Self Loathing of Jerks,” it seems safe to say that the article has more to do with shooting down the enemy than with rationally assessing the evidence.

It’s certainly not only the Singularitarians and techno-optimists who might skew the AI risk conversation into one of politicking – we should similarly suspect the techno-conservative groups of having plenty of their own beliefs to defend, and mud to sling.

As far as agendas and incentives go, we might have reason to suspect that we won’t be hearing genuine concerns about AI threats from powerful people whose livelihoods, influence, and power rely on artificial intelligence. Zuckerberg and Eric Schmidt don’t seem to have fears about AI – but they’ve got a lot to lose when it comes to regulation and AI fears, and their best interests are probably served by quelling those fears. That doesn’t inherently make them bad people, or entirely disingenuous; it just means that we need more opinions in the assessment, more varied perspectives, and less likelihood of individual bias.

I respect the opinion of essentially everyone I’ve mentioned here (that’s what Emerj is about, frankly). When Goertzel writes, I take his ideas seriously (here’s one of his many detailed papers on AI friendliness that’s worth digging into). When Bostrom writes, I do the same, even his poetry stuff that nobody ever talks about… and I – like everyone else – likely defend my own beliefs and positions on instinct, without the consideration that might be due to the matter at hand. If you’re a human being, you probably do, too.

My fear is that Musk’s fears about AI – and their current ripple effects – are not going to be catalysts for fruitful conversation about the development and governance of emerging technologies, but will just be more fodder to bicker over, more positions to defend, more “enemy making” to engage in (ref. Sam Keen). It’ll be the Democrats pooh-poohing those stupid Republicans and the Republicans scoffing at those idiot Democrats, all over again.

I’d consider that tragic, because I believe we have a lot at stake.

Hugo de Garis has supposed that in the coming decades, a great war might ensue between those who want to see machines surpass humans in intelligence, and those who do not want such a development to occur (see the “Artilect War” clip below).

For all I know it’ll be 300 years before any of this becomes a serious consideration. It seems reasonable to suppose, however, that these will be concerns that we’ll face within the coming 20 to 40 years. I’d consider it tragic if these issues were to tear us apart… as it seems ideal for nations to unite and take seriously all manner of evidence around technological developments that would inexorably alter the course of life as we know it. Had we done so earlier with respect to the environment, we might have become better stewards of our planet.

I’ve been fortunate enough to interview dozens and dozens of computer science PhDs and AI researchers from Stanford to Oxford and beyond, and what seems most clear to me about AI risk is that there is no apparent consensus around the legitimacy of the concern. Oddly enough, though some researchers and scientists seem quite open to assessing all facets of the debate, many of the AI-threat-concerned and non-AI-threat-concerned experts seem to feel comfortably correct in their conclusions, and sometimes express complete disregard for the perspectives of fellow researchers with differing opinions.

The entire dynamic reminds me of one of Montaigne’s essays, where he quotes Caesar:

“‘Tis the common vice of nature, that we at once repose most confidence, and receive the greatest apprehensions, from things unseen, concealed, and unknown.” –De Bello Civili, ii. 4.

Unlike issues around global warming, domestic violence, or nuclear proliferation, artificial intelligence is almost entirely speculative – at least in its imagined and massively powerful future form.

Will the unseen and unknown nature of the future of artificial intelligence make us more or less likely to pursue the truth and validity of the concern, or will it make us more likely to polarize our views and righteously aim to defeat our enemies with opposing beliefs? My own fear is that we as human beings will be more concerned with preserving our own beliefs or fighting for our individual aims than with assessing and managing the potential risks of artificial intelligence openly.

The potential tragedy of Musk’s comments and their reverberations is that they may be just a small ripple in an eventually frothy sea of argument that overlooks AI safety entirely, rife with all the same fruitless mud-slinging and politicking that’s held back progress on countless important issues before. I’m crossing my fingers that this is not the case.
