Episode Summary:
Most of us can admire AI such as Siri, Watson, and other agents shaping the fabric of future AI-powered entities, but it’s also possible to dismiss them as a “dead end”. Dr. Alexei V. Samsonovich is one researcher who believes that we won’t come close to perceiving AI as ‘conscious’ machines until we can grant them the necessary emotional intelligence. Though a lot of progress has been made in the field of intelligent agents in the last 10 years, many researchers in the same camp as Samsonovich are now on a mission to develop human-like intelligence, cognitive abilities, emotional and social intelligence, and common-sense reasoning.
Guest: Alexei Samsonovich
Expertise: Computation, Neuroscience, and Cognitive Architectures
Recognition in Brief: Dr. Alexei Samsonovich received his PhD in Applied Mathematics and Philosophy from the University of Arizona. Samsonovich has published extensively in a range of academic journals, and his work has been cited 965 times (and counting) according to the Web of Science. He has received several research grant awards from organizations such as DARPA, NINDS, NIMH, and many others. He is currently involved in several research projects, including semantic cognitive mapping of natural language and the development of a cognitive microcircuit model called NeuroNavigator.
Current Affiliations: Research Assistant Professor at the Krasnow Institute for Advanced Study, George Mason University
The Call for Emotionally-Intelligent AI
When it comes to developing advanced AI, Dr. Alexei Samsonovich is of the belief that “emotional intelligence in particular is key and the easiest first step on the road map…it’s relatively easy to implement, it sounds like (something) only a human can possess, but no one can prove that an agent does or does not have emotions”.
Samsonovich is interested in believable agents, ones that we can believe are alive and can understand us, and he thinks this is something that we can implement today. Despite the lack of progress in what Alexei deems to be a key area, there has been a lot of progress in natural language and machine learning, with agents like Siri, Watson, or the Google Car.
Yet Samsonovich still cites these agents as negative examples. “They don’t really create exactly what we need. How many users actually talk to Siri every day? I hate using it and I never did; I tried maybe a couple of times, just to understand that this is not different from another cheap tool, an artifact that is not alive, not capable of understanding me.” Siri, like so many of our cutting-edge technologies, resulted from a DARPA program.
Siri’s AI is a mixture of a variety of different approaches to achieve one goal. Alexei describes it as a “solution (that is) not elegant, not something that can be used by us to create something else”. He thinks Siri is a dead end, and so is Watson for that matter. Both of these agents focus on a particular task – Siri is an “intelligent assistant” that points us in the right direction, and Watson is the ultimate data analyst (thus far).
Samsonovich believes that we, as humans, need something else; maybe an agent that is not so smart, but one that we believe can understand us.
Alexei describes the concept as a shift from an autonomous, unfeeling agent to a social, virtual ‘actor’, an entity that can be our partner, and ideally our friend. “When I talk to a human, I don’t expect it to have buttons, but to infer the actions I want to take…this is exactly what people need from a virtual actor,” says Samsonovich.
Parting the Waters and Moving Ahead
When asked what the necessary steps are to arrive at such an entity, Alexei starts with what is “not critical” – giving it a photorealistic appearance, or the ability to express itself with gestures or facial expressions. These aspects would not be difficult to implement, explains Samsonovich, and they are not features we need to fund at the very beginning.
But even if we develop an agent with relatively simplified language, we can still tell an automaton from a human. Is it possible to achieve mutual understanding through a command-line interface rather than the “illusion” of fancy graphics and body language?
Alexei then delivers the critical aspects needed to build an authentically emotionally-intelligent AI: an agent that has the ability to understand our minds. Such an AI would be able to maintain narrative intelligence, exhibit social and emotional intelligence (which are closely aligned), and learn like a human being, i.e., through active learning. These are the areas where breakthroughs in science are most needed, says Alexei.
At this point, emotional intelligence might still be a bit fuzzy in meaning. By it, Alexei means “emotionally-based cognition, capable of interpreting emotions and using them in decision-making”. For some reason, remarks Samsonovich, most of the research in AI has focused on describing the emotional state of an agent as a whole. The key point is not the agent’s overall emotional state, but that agent’s appraisal of every element of each possible action and each object, and then using those appraisals to generate the resulting goals, actions, and behavior.
“My view is that you can use emotional appraisals of every object, every agent, and every action to create action and motivate behavior.” Once science has generated algorithms able to achieve this cycle, we can then fine-tune them. “Every action has an emotional flavor,” muses Alexei, even when doing simple activities like constructing a basic block building in groups. He ran this experiment with undergraduates, studying how each person’s appraisal of another’s actions influences their own actions, which then feed into the next appraisal.
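The appraisal–action–feedback cycle Samsonovich describes can be caricatured in a few lines of code. The sketch below is purely illustrative (it is not his actual model, and all names and numbers are invented for this example): an agent holds a valence score for each candidate action, picks the most positively appraised one, and then revises that appraisal based on a partner’s reaction.

```python
# Illustrative sketch only: appraisal-driven action selection, with
# appraisals updated from a social partner's reaction. All action
# names and valence values are hypothetical.

def choose_action(appraisals):
    """Pick the action with the highest emotional appraisal (valence)."""
    return max(appraisals, key=appraisals.get)

def update_appraisal(appraisals, action, partner_reaction, rate=0.5):
    """Nudge the appraisal of `action` toward the partner's reaction.

    `partner_reaction` is a valence in [-1, 1]: a positive reaction
    makes the action more attractive next time, a negative one less so.
    """
    old = appraisals[action]
    appraisals[action] = old + rate * (partner_reaction - old)
    return appraisals

# Two hypothetical block-building actions with initial valences.
appraisals = {"place_block": 0.2, "knock_over": 0.6}

action = choose_action(appraisals)       # "knock_over" is appraised higher
appraisals = update_appraisal(appraisals, action, partner_reaction=-1.0)
next_action = choose_action(appraisals)  # partner disapproved, so the
                                         # agent switches to "place_block"
```

The point of the toy example is the loop itself: appraisals drive the choice of action, the partner’s appraisal of that action feeds back, and the agent’s next choice changes accordingly, mirroring the group block-building dynamic described above.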
The building blocks of emotional intelligence are all around us, and Samsonovich is confident we can figure out how to integrate these elements into AI, with enough investment of capital and intellectual resources in the right places.