In 2016 and 2017 I spoke with dozens of venture capitalists, many of whom have a specific and overt focus on artificial intelligence technologies. I wanted to know what made an AI company worth investing in, and which business models were most appealing for investment.
Episode Summary: At Emerj, we like to look around the corner at where AI is impacting industries and how people can make better business decisions based on that information. AI and data-driven software for enterprise is an emerging topic of interest, and in this episode we get a venture capitalist's perspective on where AI will play a vital role, with real results, in software and industry.
[This story has been revised and updated.]
Big data has turned out to be a key ingredient in turning machine learning from an abstract technology into a potentially invaluable tool of insight and foresight for businesses across industries. The burgeoning cognitive technologies of predictive analytics and data visualization are opening new windows of opportunity for companies trying to solve complex problems with multiple moving parts. From finding ways to retain new customers to monitoring multiple performance metrics more efficiently and easing performance volatility, more companies are gravitating towards machine learning-based data analysis tools in an effort to optimize operations and uncover innovative solutions and opportunities that were once too obscure for the human eye alone.
Companies applying AI are looking for a competitive advantage in their industry, something that will give them an edge in the market and help them grow. However, not every AI application can give a company a competitive advantage. Many AI applications are simply going to become the new normal.
Over the last four years, interviewing hundreds of AI researchers and AI enterprise leaders, we've heard the same frustrations about AI adoption voiced time and time again.
"Culture is hard to change."
"Leadership doesn't know what they're trying to accomplish."
"Nobody knows what to do with these data scientists we've hired."
In our one-to-one work with enterprise clients, we've taken the most prevalent, recurring challenges to AI deployment and put them together into a framework of "prerequisites" to AI deployment.
When it comes to process automation, digital transformation leaders are now navigating the artificial intelligence hype. Although AI can yield some impressive results when it comes to digitizing paper-based processes and reducing the time customer service agents spend searching for customer information, leaders are perhaps too eager to jump into AI without understanding the fundamentals of what it entails.
(Alternative Montaigne-like title for this essay: "That the Meek Must Feign Virtue")
When I first became focused on the military and existential concerns of AI in 2012, there was only a small handful of publications and organizations focused on the ethical concerns of AI. MIRI, the Future of Humanity Institute, the Institute for Ethics and Emerging Technologies, and the personal blogs of Ben Goertzel and Nick Bostrom made up most of my reading at the time.
These limited sources focused mostly on the consequences of artificial general intelligence (i.e. post-human intelligence), and not on day-to-day concerns about privacy, algorithmic transparency, and governing big tech firms.
By 2014, artificial intelligence made its way firmly onto the radar of almost everyone in the tech world. New startups began (by 2015) ubiquitously including “machine learning” in their pitch decks, and 3-4-year-old startups were re-branding themselves around the value proposition of “AI.”
Not until late 2016 did the AI ethics wave make it into the mainstream beyond the level of Elon Musk’s tweets.
By 2017, some business conferences began having breakout sessions around AI ethics - mostly the practical day-to-day concerns (privacy, security, transparency). In 2017 and 2018, entire conferences and initiatives sprung up around the moral implications of AI, including the ITU’s “AI for Good” event, among others. The AAAI’s “AI, Ethics, and Society” event started in 2016, but picked up significant steam in the following years.
So why the swell in popularity of AI ethics and AI governance?
Why didn’t this happen back in 2012?
The most obvious answer is that the technology didn’t show obvious promise for disrupting business and life back in 2012. People in Silicon Valley, never mind elsewhere, didn’t have AI squarely on their radar. Today, AI and machine learning are recognized as disruptive forces that will likely change the human experience, and certainly the nature of human work.
Now that AI is recognized as a massively disruptive force, people are interested in ensuring that its impacts on society and individuals are positive. Certainly, much of the origin of “AI for good” initiatives stems from a genuine desire to do good.
It would be childishly naive, however, to believe that AI ethics isn’t also about power. Individuals, organizations, and nations are now realizing just how serious their disadvantage will be without AI innovation. For these groups, securing their interests in the future (securing power) requires a path other than innovation, and regulation is the next best thing.
In this essay I’ll explore the landscape of AI power, and the clashing incentives of AI innovators and AI ethics organizations.
We’ve made it to article seven of seven in this “AI Zeitgeist” series. It has been a long build-up, and I’ve saved the competitive dynamics of AI for this seventh article because, to me, everything else builds up to it.