The general premise of this article is different from most of my previous AI Power articles.
While most articles in this series have dealt with near-term struggles for power between organizations and governments over regulation, data, and international policy, this article focuses on the long-term trajectory of AI and technology, and on what that trajectory means for the most powerful nations and organizations.
Alternative Montaigne-like Article Title: "That the Meek Will Stand United for Only as Long as it Behooves Their Aims"
Today, the world of AI ethics is a harmonious ecosystem of organizations with uncontroversial, reasonable, and respectable aims.
In 2016 and 2017 I spoke with dozens of venture capitalists, many of whom have a specific and overt focus on artificial intelligence technologies. I wanted to know what made an AI company worth investing in, and what business models were generally the most appealing for investment.
Deepfakes have made their way onto the radar of much of the First World.
As with many technology phenomena, deepfakes have their origins in pornography editing (the Reddit page that originally popularized deepfakes was banned in early 2018).
In April of this year, I was asked by UNICRI (the crime and justice wing of the UN) to present the risks and opportunities of deepfakes and programmatically generated content at United Nations headquarters for a convening titled: Artificial Intelligence and Robotics: Reshaping the Future of Crime, Terrorism, and Security.
Instead of speaking about the topic, we decided it would be better to showcase the technology to the UN, IGO, and law enforcement leaders attending the event. So we took a video of UNICRI Director Ms. Bettina Tucci Bartsiotas, and created a deepfake, altering her words and statements by using a model of her face on another person.
The project involved a tight schedule, very little budget, and only one minute of data. Programmatically generating video and voice with open-source technology isn't easy under these constraints, but the video came out reasonably well, all things considered. Following the initial demo is a breakdown of the broader concerns around programmatically generated content, which will be the focus of the bulk of this essay:
The great power nations that master the use of artificial intelligence are likely to gain tremendous military and economic benefits from the technology.
The United States benefitted greatly from a relatively fast adoption of the internet, and many of its most powerful companies today are the global giants of the internet age.
When it comes to its technological and economic future, the US generally believes:
(Alternative Montaigne-like title for this essay: "That the Meek Must Feign Virtue")
When I first became focused on the military and existential concerns of AI in 2012, there was only a small handful of publications and organizations devoted to the ethical concerns of AI. MIRI, the Future of Humanity Institute, the Institute for Ethics and Emerging Technologies, and the personal blogs of Ben Goertzel and Nick Bostrom made up most of my reading at the time.
These limited sources focused mostly on the consequences of artificial general intelligence (i.e. post-human intelligence), and not on day-to-day concerns about privacy, algorithmic transparency, and governing big tech firms.
By 2014, artificial intelligence had made its way firmly onto the radar of almost everyone in the tech world. By 2015, new startups were ubiquitously including “machine learning” in their pitch decks, and 3-4-year-old startups were re-branding themselves around the value proposition of “AI.”
Not until late 2016 did the AI ethics wave make it into the mainstream beyond the level of Elon Musk’s tweets.
By 2017, some business conferences began having breakout sessions around AI ethics - mostly the practical day-to-day concerns (privacy, security, transparency). In 2017 and 2018, entire conferences and initiatives sprung up around the moral implications of AI, including the ITU’s “AI for Good” event, among others. The AAAI’s “AI, Ethics, and Society” event started in 2016, but picked up significant steam in the following years.
So why the swell in popularity of AI ethics and AI governance?
Why didn’t this happen back in 2012?
The most obvious answer is that in 2012 the technology didn’t show obvious promise for disrupting business and life. People in Silicon Valley, never mind elsewhere, didn’t have AI squarely on their radar. Today, AI and machine learning are recognized as disruptive forces that will likely change the human experience, and certainly the nature of human work.
Now that AI is recognized as a massively disruptive force, people are interested in ensuring that its impacts on society and individuals are good. Certainly, much of the origin of “AI for Good” initiatives stems from a genuine desire to do good.
It would be childishly naive, however, to believe that AI ethics isn’t also about power. Individuals, organizations, and nations are now realizing just how serious their disadvantage will be without AI innovation. For groups that cannot win through innovation, securing one’s interests in the future (securing power) implies another path, and regulation is the next best thing.
In this essay I’ll explore the landscape of AI power, and the clashing incentives of AI innovators and AI ethics organizations.