Here at Emerj we’re dedicated to cutting through the AI hype that’s permeating the current zeitgeist in the business world. Although we’re skeptical about many of the claims that AI vendors make on their websites about what their AI software can do, it seems unlikely that the AI hype is going to disappoint venture capitalists and governments enough to usher in a third AI winter.
The first AI winter occurred in the 1970s, and the second in the 1990s. Researchers differ on whether or not a third AI winter is on the horizon, but they generally agree that right now we’re in what they refer to as an AI spring. With the advent of deep learning around 2012, AI began garnering considerable interest from VCs and industry leaders, and that interest hasn’t seemed to let up since. Even so, at least at face value, the conditions for the dawn of another AI winter appear to be stirring.
In this article, we’ll talk about how the past two AI winters came about, differing perspectives on whether or not we’re headed for another AI winter right now, and the reasons why we’re probably not.
The Conditions for an AI Winter
The First AI Winter
We spoke with Dr. Nils Nilsson from Stanford a few years ago for an episode of our podcast. He’s one of the founding academics in the field of AI, and he’s been researching the area since before the first AI winter. “Work was pretty rampant at first,” he said. In the 1960s, Nilsson worked with other researchers trying to get computers to mimic human activities, such as solving algebra problems and playing strategic games like chess and checkers.
In the late 1950s, Frank Rosenblatt invented the Perceptron, an electronic device based on the earliest concept of neural networks that purportedly had the ability to learn. However, roughly ten years later, Marvin Minsky, co-founder of MIT’s AI lab, and Seymour Papert published their book Perceptrons, which discussed the limitations of neural networks. Funding for neural networks dried up.
In addition, the National Research Council arm of the US government had pulled funding for a Russian-English translation machine a few years earlier after it failed to live up to expectations. Several years later, DARPA followed suit for AI in general, cutting funding unless researchers could prove the military application of the technology they were looking to research.
These events, in tandem with several other failures, saw funding for the general AI field evaporate. Debate over the usefulness of certain kinds of AI and its overall failure to live up to expectations ushered in the first AI winter in the early 1970s.
The Second AI Winter
During the 1980s, expert systems dominated AI research. Minsky predicted that, as a result of advances in expert systems, machines that could do what humans could do were only 10 years away. Expert systems worked well for making structured decisions with predictable and repeatable steps.
Essentially, expert systems are webs of if-then statements. Although still in use today in some sectors, such as manufacturing, they are far outclassed by machine learning because they don’t have the ability to “learn,” so to speak. When presented with one set of circumstances, they will present a pre-programmed response every time.
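To picture this concretely, here’s a minimal, hypothetical sketch of such a rule base in Python (the fault-diagnosis rules are invented for illustration, not drawn from any real system):

```python
# A minimal, hypothetical sketch of an expert-system rule base:
# a fixed web of if-then statements with no ability to learn.

def diagnose_fault(symptom: str) -> str:
    """Return the pre-programmed response for a known symptom."""
    if symptom == "overheating":
        return "Check coolant level"
    elif symptom == "vibration":
        return "Inspect bearing alignment"
    elif symptom == "power loss":
        return "Test supply voltage"
    else:
        # Anything outside the hard-coded rules falls through,
        # no matter how similar it is to a known case.
        return "No rule available for this symptom"

print(diagnose_fault("overheating"))       # Check coolant level
print(diagnose_fault("slight vibration"))  # No rule available for this symptom
```

The same input always produces the same hard-coded output, and any input outside the rule set, however close to a known case, falls through to nothing useful.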
They served their purpose, but in the late 1980s it became increasingly clear that expert systems were unable to deal with the variability of the real world. If they were presented with circumstances merely similar to, but not exactly matching, those they were pre-programmed to respond to, they would fail to respond accurately. For every “if,” there is only one possible “then.” Although one could theoretically update the system to account for differing circumstances, this proved impractical and too expensive.
For example, in a scenario where the system has to catch a ball thrown by a human, an infinite number of factors come into play. The system has to take into consideration the trajectory, speed, and direction of the ball in a variety of wind conditions. It also has to account for the height, strength, and throwing ability of the human throwing the ball. There is an infinite number of subtleties that go into catching a ball.
Expert systems were never going to achieve that level of gritty reality. It would be impractical to try to hard-code every subtlety and circumstance and update the system every time one realized one, ten, or a hundred had been missed. The second AI winter occurred mainly because expert systems hit a dead end and did not live up to expectations.
The State of AI Today
Machine learning replaced expert systems, gaining prominence in the 2000s and, with the invention of deep learning, ushering in the so-called AI spring of the current decade. ML systems are not webs of if-then statements. Instead, they run on data as a kind of large-scale statistical model. Due to the fluid nature of the data and its structure, machine learning systems can often do nuanced things in a nuanced world. When presented with a set of new circumstances, machine learning systems can often eventually learn how to respond to them accurately.
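To make the contrast concrete, here is a minimal sketch of a model learning a decision boundary from examples rather than following hand-written rules (the toy data and the scikit-learn classifier are chosen purely for illustration):

```python
# A minimal sketch of the contrast with expert systems: the "rules"
# are inferred statistically from data, not hard-coded by hand.
# Toy data is invented for illustration; assumes scikit-learn is installed.
from sklearn.linear_model import LogisticRegression

# Each row: [hours_of_use, operating_temperature]; label 1 = fault likely
X = [[100, 60], [150, 65], [120, 90], [180, 95], [90, 55], [200, 98]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)  # learn a decision boundary from the examples

# An input the model has never seen still gets a reasoned prediction.
print(model.predict([[130, 92]]))  # likely [1]: fault predicted
```

Unlike a fixed if-then web, retraining the model on new data updates its behavior without anyone rewriting rules by hand.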
As a result, funding has returned to AI research, notably from the private sector, where VCs are looking to capitalize on automation technology they see as more scalable than processes that run on people. There are a plethora of machine learning vendors claiming to offer the technology to banks and insurance firms, for example.
What machine learning systems do worse than expert systems, however, is provide even an inkling of explainability. This limitation may not matter much in some circumstances, such as product recommendations. If Netflix recommends a show to a customer that they really don’t want to watch, it may at most inconvenience the customer.
This is not the case in sectors such as healthcare. If machine learning software recommends that a doctor diagnose a patient with a certain disease, the doctor needs to be able to explain to the patient how the machine arrived at that diagnosis; they can’t return to the patient expecting them to simply trust that the software is correct, because medical diagnostics can be a matter of life and death.
In addition, machine learning systems don’t work with any accuracy without large volumes of data. At present, this makes them difficult to use in scenarios where digital data isn’t readily available.
Will There Be a Third AI Winter?
Expert Opinion Varies
Indeed, the circumstances surrounding machine learning echo those surrounding expert systems 35 years ago. Business leaders are becoming familiar with the possibilities that AI could offer, and they’re starting to wonder if AI is right for processes at their companies. It seems that we’ve reached a cultural moment for AI where everyone is talking about it and its “revolutionary” or “disruptive” capabilities. Prominent figures like Elon Musk and the late Stephen Hawking have discussed AI as an “existential threat” in reference to the eventual capabilities of machine learning or a similar AI technology. World governments are also starting to take notice of AI in a way reminiscent of the Cold War state of affairs during the first two AI winters.
In our interview with Nilsson, he noted that leaders in the 70s and 80s were concerned that “AI wasn’t really good enough; it wasn’t achieving its promises.” “Now,” he says, “sometimes people are saying AI is achieving its promises, it’s too good,” seeming to lend credence to concerns from Musk and Hawking.
At the same time, little headway has been made on “explainable AI,” attempts at solving the “black box” problem of machine learning so that it might be more useful to certain sectors. The limitations of machine learning with regard to its applicability in important sectors prompted Geoffrey Hinton, known as the Godfather of Deep Learning, to say we should “throw it all away and start again.”
Fei-Fei Li, Co-Director of the Stanford Human-Centered AI Institute, commented on a Twitter post of hers advocating continued funding for academic AI research, saying “the only way to avoid a ‘winter’ is to continue intense basic science research in AI and ML.” In her post, she refers to the 1980s and 2010s not as AI springs, but as AI summers: periods in which the media and industry directed their attention to initiatives born out of thorough scientific research and debate in academia ten years prior.
An incongruent relationship between leaders outside academia and AI researchers seems to be one factor behind AI winters. When industry and government leaders run away with the AI hype, their expectations have historically failed to be met by computer scientists. Yann LeCun, Chief AI Scientist at Facebook and a former student of Geoffrey Hinton’s, told Bloomberg that an AI winter might occur if it “takes longer than the people funding [AI] research expect” to create machine learning systems with more general applications.
He added, however, that any AI winter that comes from that incongruence won’t be as severe as those in the past because machine learning is actually rather good at generating ROI on specific tasks.
Richard Socher, Chief Scientist at Salesforce, who earned his PhD in Computer Science from Stanford, echoed this sentiment when he told Bloomberg, “I can’t imagine an AI winter in the future that could be as cold as previous ones.”
Andrew Ng seems more optimistic overall. He argues:
The earlier periods of hype emerged without much actual value added, but today, [AI] is creating a flood of value. I’ve seen it with my own eyes. I’ve built some of those systems. I’ve seen the revenue being generated.
He goes so far as to refer to the current state of AI as the “eternal spring.” He adds that although there is considerable hype surrounding machine learning at the moment, he “think[s] there’s such a strong underlying driver of real value that it won’t crash like it did in previous years.”
Why There Might Not Be Another AI Winter
Taken in isolation, these warning signs make another AI winter seem hypothetically possible. However, the state of AI now is very different from the state of AI in its early stages. Much of the excitement stirred up in the 1960s was for concepts that would later be explored in movies like The Matrix and I, Robot.
That is not the case today. Dominant companies like Facebook and Google are predicated on machine learning, and this was never the case in the years leading to the last two AI winters. These companies also offer products that have become ubiquitous in everyday life; the public is used to interacting with AI technology in many cases without realizing that it’s AI. Businesses and government agencies have come to rely on AI at least to some degree in real and tangible ways.
This state of affairs stands in stark contrast to the conditions of the 1960s and 1980s, when AI was either stuck in academia or barely scratching the surface of industry. It’s extremely unlikely that companies like Google, Facebook, and Amazon are going away anytime soon, and that means AI is going to remain profitable for at least some industries for the foreseeable future.
The deep involvement of industry and government in AI makes it unlikely that funding will dry up to the same degree as in the previous AI winters. While there is some debate as to whether machine learning systems are going to evolve into artificial general intelligence, in their present form they are already often more viable than humans for narrow business use cases. As a result, machine learning or some iteration of it is likely here to stay for the coming years until some other AI technology takes its place. AI as a field will likely have its “eternal spring.”
That said, machine learning’s challenges with data volume and interpretability will likely trigger a slowdown in VC interest in AI in some sectors, and perhaps in some countries as well. Oscillations in that respect should be expected in the next few years.
Header Image Credit: World of Warships Forum