(Alternative Montaigne-like title for this essay: "That the Meek Must Feign Virtue")
When I first became interested in the military and existential concerns of AI in 2012, only a small handful of publications and organizations were focused on the ethics of AI. MIRI, the Future of Humanity Institute, the Institute for Ethics and Emerging Technologies, and the personal blogs of Ben Goertzel and Nick Bostrom made up most of my reading at the time.
These few sources focused mostly on the consequences of artificial general intelligence (i.e. post-human intelligence), not on day-to-day concerns like privacy, algorithmic transparency, and the governance of big tech firms.
By 2014, artificial intelligence had made its way firmly onto the radar of almost everyone in the tech world. By 2015, new startups were ubiquitously including “machine learning” in their pitch decks, and three- or four-year-old startups were rebranding themselves around the value proposition of “AI.”
Not until late 2016 did the AI ethics wave make it into the mainstream beyond the level of Elon Musk’s tweets.
By 2017, some business conferences began holding breakout sessions on AI ethics - mostly the practical, day-to-day concerns (privacy, security, transparency). In 2017 and 2018, entire conferences and initiatives sprang up around the moral implications of AI, including the ITU’s “AI for Good” event, among others. The AAAI’s “AI, Ethics, and Society” event started in 2016 but picked up significant steam in the following years.
So why the swell in popularity of AI ethics and AI governance?
Why didn’t this happen back in 2012?
The most obvious answer is that in 2012 the technology didn’t show clear promise for disrupting business and life. People in Silicon Valley, never mind elsewhere, didn’t have AI squarely on their radar. Today, AI and machine learning are widely recognized as disruptive forces that will likely change the human experience, and will certainly change the nature of human work.
Now that AI is recognized as a massively disruptive force, people are interested in ensuring that its impacts on society and individuals are good. Certainly, much of the origin of “AI for good” initiatives stems from a genuine desire to do good.
It would be childishly naive, however, to believe that AI ethics isn’t also about power. Individuals, organizations, and nations are now realizing just how serious their disadvantage will be without AI innovation. For groups that cannot lead in innovation, securing their interests in the future - securing power - requires some path other than innovating, and regulation is the next best thing.
In this essay, I’ll explore the landscape of AI power and the clashing incentives of AI innovators and AI ethics organizations.