Episode Summary: It’s common knowledge that scientists study the brain to understand how to replicate intelligence in machines; it’s less commonly known that scientists also use machine models to understand how the mind works. In this episode, we talk with Dr. Ashok Goel, a researcher in the field of cognitive systems who sheds light on this idea. Dr. Goel also speaks about his perspective on where machines are becoming more creative, and what the future might look like if machines begin to reflect on their “identities” as humans do.
Guest: Ashok Goel
Expertise: Computer Science and Cognitive Science
Recognition in Brief: Dr. Ashok Goel is Director of Georgia Institute of Technology’s PhD program in Human-Centered Computing; Coordinator of the faculty consortium on Creativity, Learning & Cognition; and Co-Coordinator of the faculty consortium on Interactive Intelligence. He also serves as Director of the school’s Design & Intelligence Laboratory and Co-Director of Georgia Tech’s Center for Biologically Inspired Design. His research focuses on human-centered computing, artificial intelligence, and cognitive science, with an emphasis on computational design and creativity. He has authored and co-authored numerous academic papers and given related presentations in India and throughout the United States.
Current Affiliations: Georgia Institute of Technology (GVU Center, Institute for People and Technology, Institute for Robotics and Intelligent Machines, Center for 21st Century Universities, Health Systems Institute)
Defining Human-Centered AI
In the complex world of cognitive systems, is it possible to use machines or AI to gain insight into human thought? Dr. Ashok Goel devotes his study to the field, and notes that three related terms in the AI school of thought should be distinguished: human-centered, human-level, and human-like AI. Each of these concepts carries a different shade of meaning. Human-centered refers to machines that interact in some way with humans; human-level suggests machines whose capabilities, like thought and learning, match our own; and human-like describes AI that mimics humans in some way.
When AI got its start, researchers pursued two goals that were really two sides of the same coin, says Ashok: use human cognition to inspire the construction of AI, and use AI to inform our understanding of human cognition. While the latter may surprise some, AI is one of many scientific disciplines striving to better understand human cognition. Psychologists study behavior and its inputs and outputs; biologists focus on neural connections; and AI takes the approach of constructing machines and developing hypotheses about how the mind might be working. Each of these endeavors helps inform the questions we can ask about our minds.
The Building and Shaping of Reality
Scientific research is similar to building an infinite mansion, with one discovery built on top of another. Goel goes back in time to around 1920 and Niels Bohr’s model of the atom, in which electrons revolve around the nucleus. The model was revolutionary in and of itself, but it rested on an analogy transferred from our concept of the solar system: electrons orbit the nucleus much as planets orbit the sun.
How do these parallel ideas occur? “We don’t know”, says Ashok, “but when we started building machines, we started to understand the phenomenal processes…the deep insight was that maybe it’s not features of objects that matter as much as relationships between objects”. As we build machines and programs, we can begin to recognize similar structures elsewhere, like in our minds. Making analogies to achieve leaps of insight is an oft-cited ability of visionaries across fields.
The field of biomimicry, for example, encourages designers to look to nature for design ideas. “There are so many buildings constructed with the same problem, how to get water hundreds of feet high,” muses Goel. Most buildings use electromechanical systems to pump water up, but few designers look at how redwood trees do it. “If only we could build similar systems of transpiration, we could build new kinds of buildings,” he says. Like researchers in biomimicry and other scientific fields, AI researchers often write programs in the lab that make analogies of a similar kind, transferring ideas from nature to build better technology.
The Frontier of Cognitive Systems
Two of the most recent groundbreaking technologies in cognitive systems are familiar to most, if not all, listeners: Apple’s Siri and IBM’s Watson. “Siri is an excellent example of a cognitive system”, states Ashok, “and it’s all things: centered, leveled (in terms of language), like; not all aspects, but some”. Siri has a relatively pleasant female voice, for example, far less androgynous than the computer voices of yesteryear. Though many humans harbor a love-hate relationship with Siri, it’s still a powerful program that can, in near real time, find answers to a human’s spontaneous questions.
Another example is IBM’s Watson, which has access to multiple data sources: ask it almost any question and it will give you an answer. Once again, Watson embodies some elements of each of these human references. One particularly important trait the two systems share is that both use not one method of solving a problem, but multiple approaches. “This is what cognitive systems researchers have been saying for a long time…intelligence does not come from one magical network, but instead emerges out of the interaction of many capabilities that combine to make sense of situations,” explains Goel.
It might seem reasonable enough to build a machine that thinks about things in one particular way, but there are so many different ways that we as humans think. Cognitive systems will need that degree of complexity if we ever want them to capture the full human experience.
As for the future? As usual, prediction is a difficult venture (risky, but necessary), but at a broad level Ashok sees three lines of research in cognitive systems that he believes will be productive in the coming decade: meta-thinking; visual thinking; and the development and integration of emotions, ethics, and intelligence.
It seems likely that by making strides in these areas of AI, we’ll build machines that help us to better understand these connections in human beings.