Putting the Horse Before the Cart May Lead the Way to Artificial General Intelligence

Daniel Faggella

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders.


Episode Summary: A lot of AI applications are not really “smart,” at least not in the sense of the word most humans might envision for a true artificial intelligence. If you know how Deep Blue beat Garry Kasparov, for example, then you may not believe that Watson is a legitimate thinking machine. Our guest this week, Dr. Pei Wang, believes that building an Artificial “General” Intelligence (AGI), what researchers define as an entity with human-like cognition, is a separate question from figuring out AI applications in the narrower sense. In this episode, Dr. Wang lays out three factors that differentiate AGI from AI in general, and also talks about three varied and active approaches being taken to accomplish AGI.

Guest: Dr. Pei Wang

Expertise: Artificial Intelligence

Recognition in Brief: Dr. Pei Wang received his PhD from Indiana University (Bloomington) and went on to research and teach Computer Science, AI, and AGI in the Department of Computer and Information Sciences at Temple University. Dr. Wang has given many demonstrations and published extensively in the areas of AGI and AI (among others), including on the Non-Axiomatic Reasoning System (NARS), a design for a general-purpose intelligent system. He has also authored two books, “Rigid Flexibility: The Logic of Intelligence” and “Non-Axiomatic Logic: A Model of Intelligent Reasoning.”

Current Affiliations: Temple University

AI, AGI – What’s the Difference?

Artificial Intelligence, or AI, is practically a buzzword, but Artificial General Intelligence, or AGI, may be unfamiliar to much of the public. Just what is AGI, and how do you distinguish between the two?

In general, AGI is defined by human-level intelligence, or perceived consciousness, a feat that many AI researchers believe is possible. Dr. Pei Wang has for years used the term AI to encompass both realms, and still does at times. “At the beginning, I didn’t like the phrase AGI; to me, AI should be general by definition, and it seems redundant.” Pei later realized that most people in the field no longer believe this to be the case, with most researchers working on AI geared toward tackling a specific problem, or what has come to be known as “narrow AI.”

Since this has become the new norm, Pei Wang differentiates AGI from AI on three major points (for those interested in reading more extensively, Dr. Wang has written a “gentle introduction” to AGI on his website). To start, Pei believes AGI should be general, not limited to any one skill such as chess (even at a champion level). Another difference is the treatment of AGI as a holistic concept, rather than as a collection of functions, which Wang says is the current treatment of AI. “You see this in AI textbooks, with a chapter on learning, processing, robotics…if you talk with researchers, they typically work in one field their whole career, but AGI encompasses all of these fields,” says Pei.

In other words, the blueprint for AGI should cover the whole. Granted (as so many of you were thinking), this is not easy to do. Wang notes that this holistic AI is exactly what people were working on in the field’s more nascent years, but as the field progressed, there was a gradual increase in “dividing and conquering.” What many people don’t realize, says Wang, is that AGI won’t work if you cut it into pieces and miss the big picture. “Almost everyone in the AGI field believes that it can be done in our lifetime,” says Pei, though the majority of AI researchers believe it’s either too early to talk about real AGI or that all the parts need to be constructed before a whole human-like intelligence can be assembled.

As can be inferred, Pei believes AGI should begin with a “seed” of a grander idea that is then expanded upon, the opposite end of the spectrum from the “build the parts and AI will come” camp.

To Build an AGI – Start at the End or the Beginning?

When it comes to constructing AGI, Wang sums up three common approaches in use in the field today. The hybrid approach is the one taken by those in mainstream AI who still have an interest in AGI. This group, in general, believes intelligence is too complicated to figure out all at once, so it’s better to work on the structure piece by piece, then eventually put the pieces together to make it work.

Pei believes a problem with the hybrid approach is that parts built separately, and sometimes on different foundations, are 9.99 times out of 10 not going to work together. “This has been tried many times in the history of AI; the pieces do something reasonable, but together it doesn’t work,” says Wang, though he’s careful to point out that this approach has not been proven impossible.

A second category of work is what Pei calls the integrated approach. Researchers in this group are attempting to build one generally intelligent system, but may use more than one technology to achieve this intelligence, believing that each part has its strengths and weaknesses and that, when combined, the whole system will work better (NextBigFuture describes Google’s DeepMind project, which may be one example of this approach, with analysis of progress by Ben Goertzel, whom many consider the “father of AGI”). This is different from the hybrid approach because scientists start with one single overall design, and can then talk about its parts. This is an important distinction, says Pei, because if you have no idea of your whole, then you cannot define your parts correctly.

The third approach within AGI is the “unified approach,” the one Wang identifies with his own work. This is not the most common approach; Wang counts himself among the few who still believe there is a first principle of intelligence from which all work should stem, and who are moving forward in developing a single core AI technology. “You can still use others (technologies) but they will be secondary…the system at its core will be unified…even though [we] agree all technologies have strengths and weaknesses, consistency is key,” says Wang.
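Wang’s own NARS, mentioned above, gives a concrete flavor of what a single core mechanism can look like. The snippet below is a minimal illustrative sketch (in Python, and not Dr. Wang’s actual NARS code) of the kind of two-premise deduction step that Non-Axiomatic Logic builds on, where every belief carries a frequency and a confidence derived from evidence rather than an axiomatic true/false value. The truth function follows the deduction rule published in Wang’s Non-Axiomatic Logic; the class and variable names are our own.

```python
# Illustrative sketch of a NARS-style deduction step (not the actual NARS code).
# In Non-Axiomatic Logic, a statement's truth value is a pair: a frequency
# (the ratio of positive evidence) and a confidence (how much evidence backs
# that frequency), rather than a binary true/false.

from dataclasses import dataclass

@dataclass
class Truth:
    frequency: float   # in [0, 1]
    confidence: float  # in [0, 1)

def deduction(t1: Truth, t2: Truth) -> Truth:
    """NAL deduction truth function: from 'A -> B' (t1) and 'B -> C' (t2),
    derive 'A -> C'. Both frequency and confidence can only shrink, so
    chained conclusions are held with less certainty than their premises."""
    f = t1.frequency * t2.frequency
    c = t1.frequency * t2.frequency * t1.confidence * t2.confidence
    return Truth(f, c)

# Example: "robins are birds" and "birds fly", well supported but not axiomatic.
robin_bird = Truth(frequency=1.0, confidence=0.9)
bird_flyer = Truth(frequency=0.9, confidence=0.9)
print(deduction(robin_bird, bird_flyer))  # Truth(frequency=0.9, confidence=0.729)
```

The unified part is the point: one evidence-based truth calculus underlies every inference rule in such a system, instead of different modules carrying different semantics that must be reconciled after the fact.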

While Pei and some of his peers view the unified approach as simpler, basing everything on the same core technology, others in the field believe that working on the whole or end product while building each of the parts is too difficult. Needless to say, each approach competes with the others, in the healthiest sense of the word. “This is normal for a field of this nature…it’s too early to tell who is winning, but it’s important to understand why people are taking different approaches…people have different ways to prioritize their principles and beliefs,” says Wang.

Suffice it to say that if nobody ever tried the unified approach, we’d never know whether it was possible. How long it might take to build an actual AGI is harder to say. Other distinctions of more near-term consequence are also being discussed in the field, such as AI versus Intelligence Amplification (IA), as discussed in Wired by Anant Jhingran. Pei believes achieving an AGI can be done within most of our lifetimes, but only time will tell when and how that achievement comes to fruition.

Image credit: Temple University

 
