Is Elon Musk an AI Watchdog or Oligarch?

Dyllan Furness

Dyllan explores technology and the human condition for Tech Emergence. His interests include but are not limited to whiskey, kimchi, and Catahoulas.

In business, the value of an idea is initially measured by the investment it attracts. Keen investors expect financial returns on their economic commitments; returns that fatten their wallets but don’t always coincide with the betterment of society. Meanwhile, a $1 billion joint investment by some of the biggest names in business and technology has shed that expectation of return and, in doing so, validated artificial intelligence as one of today’s most important topics.

The cumulative funds – pledged by VIPs like Tesla’s Elon Musk, Y Combinator’s Sam Altman, LinkedIn’s Reid Hoffman, and PayPal’s Peter Thiel – have been piped into a non-profit research initiative known as OpenAI. With the aim of developing friendly artificial intelligence, the investors tout the value of funding research that’s free from the pressures that tend to arise when financial backers want their money back. In their own words, OpenAI seeks to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” To that end, they’ve promised to make their technology open source.

If you regularly follow tech media, you’re well aware that Musk’s attitude toward artificial intelligence is less than open. In fact, he’s been one of AI’s most outspoken opponents in recent years. In an interview at the MIT AeroAstro Centennial Symposium, Musk warned the audience that AI is our “biggest existential threat” and poetically suggested that creating AI is tantamount to “summoning the demon.”

Nonetheless, Musk isn’t absolutely anti-AI. His $10 million donation to the Future of Life Institute this January is coupled with his venture investments in AI firms DeepMind and Vicarious, though he claims both investments were driven more by interest and concern in the issue than by hope of financial return. But the SpaceX founder remains deeply cautious about the technology’s possibilities, once calling for international regulatory oversight and now funding a means to develop AI without concern for financial returns.

At first glance, OpenAI is a welcome buffer between our present technological reality and an Asimovian apocalypse in which humans and AI battle for control. And indeed it’s received plenty of praise from scientists, business folk, and laymen alike. But responses to Musk’s announcement haven’t been all positive. In particular, some of the investors’ comments in a recent interview with Steven Levy of Medium should raise questions and eyebrows about how open the AI research initiative’s practices will be.

In the first half of the interview Levy asks the OpenAI investors whether they will maintain oversight over what “comes out of” the organization. 

“We do want to build out an oversight for it over time,” Altman responds. “It’ll start just with Elon and me.”

Musk adds that he intends to attend OpenAI office meetings once a week to gain “a much deeper understanding of where things are in AI and whether we are close to something dangerous or not.” Later, when asked about his prior investment in DeepMind, Musk justifies that investment similarly, as a means to “keep an eye on [AI].”

This all raises the question: who appointed Musk, Altman, et al. to the AI development oversight committee?

In pledging funds to an NGO like OpenAI, these investors made a generous and valuable commitment to mankind, championed efforts to democratize AI research, and set a standard for dissociating potentially threatening technological advancements from capitalism’s pervasive want for profit. But, in doing so, these individuals also positioned themselves as the watchdogs of our technological future.

Musk and Altman have voiced their concerns about AI, but it’s doubtful they’ll really sanitize the technology in the way some analysts suggest. Nonetheless, it defeats the organization’s purported aims (e.g. to democratize AI research and free it from financial constraints) if the individuals with final say are some of the nation’s wealthiest men. And these investors aren’t the only people concerned about the reckless development of AI. Stephen Hawking and Bill Gates have also warned about the destructive potential of careless AI development. Artificial intelligence ethics is even a burgeoning field; Cambridge University recently announced a degree in it. So why wouldn’t Musk and Altman step back and appoint trained ethicists as OpenAI’s watchdogs?

The fact is, Musk and his fellow investors are not ethicists. They’re brilliant businessmen, tech-savvy entrepreneurs, and trained engineers. And though everyone is entitled to their well-thought-out ethical opinions, many philosophers have devoted their careers to investigating the slippery but important questions informing our judgment of right and wrong. If OpenAI wants to be democratic and free from the shackles of its investors, then Musk and Altman should seek an oversight committee beyond themselves.

Credits: Nathaniel Wood for Wired
