Episode Summary: Studying the mind has influenced, and will continue to influence, the development of artificial intelligence. In a largely digital world, Bruce turns a clarifying light on the topic of digital versus analog computing, and articulates how the latter may be making a slow comeback in the wake of discoveries in neural information processing.
Guest: Dr. Bruce MacLennan
Expertise: Bio-Inspired Computation; Self-organization; Algorithmic Nano-Assembly; Artificial Morphogenesis
Recognition in Brief: Dr. MacLennan has published more than 50 peer-reviewed journal articles and book chapters, and has authored Functional Programming: Practice and Theory and Principles of Programming Languages: Design, Evaluation, and Implementation. He has given more than 60 invited or refereed presentations, including recent appearances in Cambridge, UK and Himeji, Japan.
Current Affiliations: Associate Professor in the Department of Electrical Engineering and Computer Science at the University of Tennessee, Knoxville
The Rise of Analog-Driven AI?
“There was a time prior to the 1980s when scientists didn’t believe that we needed to understand the brain”, remarks Dr. Bruce MacLennan, “but this idea has evolved greatly”. Dr. MacLennan’s opening comments in a recent interview may be putting the idea lightly. Understanding the brain has become a high-profile pursuit in the world of neuroscience, from President Obama’s BRAIN Initiative to the Human Connectome Project spearheaded by Harvard and Massachusetts General Hospital (MGH).
Due to extensive study of the brain over the past decade or so, there has been a “rebirth” of interest in analog computing, which functions more like the brain than digital computers do. Carver Mead, a professor at the California Institute of Technology, laid the foundation for VLSI digital circuitry; his studies of brain processing indicated that an analog information processor was more efficient than a digital one.
As a society, we are “enchanted by the flexibility and speed of digital technology”, but the tradeoffs are different, notes Dr. MacLennan. “The brain uses components that are orders of magnitude slower than transistors used in digital technology”. This kind of low-precision analog computing might seem inferior, but that inferiority is an illusion that holds only at the level of the individual neuron.
The magic occurs when you have billions of these low-precision processors operating on a massively parallel scale. Instead of a value being represented by a single neuron (which we might be inclined to compare to a binary digit, where a 0 or 1 represents a piece of information), information is expressed across a whole population of neurons. Each individual neuron is, frankly, rather sloppy, but put a bunch of them together and you get a far more accurate representation of reality.
There has lately been increasing recognition and realization of potential value in analog electronics, with organizations like the Department of Defense and the semiconductor industry reinvigorating their investigation of analog electronics.
The Brain as a Model
Neurons have been estimated to operate with about one digit of precision, making them very inaccurate computing devices, but put enough of them together in parallel and you can get the precise behavior exhibited by living organisms. This is the ultimate attraction of using the brain as a model in developing new artificial intelligences.
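The averaging intuition behind this claim can be sketched numerically. The snippet below is a minimal illustration (not from the episode): `noisy_unit` and its noise level are invented stand-ins for a low-precision “neuron”, and the sketch simply shows that pooling many such units yields an estimate far closer to the true value than any single unit provides.

```python
import random

def noisy_unit(x, noise=0.05):
    """One low-precision 'neuron': returns x corrupted by bounded noise."""
    return x + random.uniform(-noise, noise)

def population_estimate(x, n):
    """Average the responses of n noisy units to recover x more precisely."""
    return sum(noisy_unit(x) for _ in range(n)) / n

random.seed(0)
true_value = 0.7
single_error = abs(noisy_unit(true_value) - true_value)
pooled_error = abs(population_estimate(true_value, 10_000) - true_value)
print(f"single-unit error: {single_error:.4f}")
print(f"population error:  {pooled_error:.4f}")
```

Because independent errors average out roughly as one over the square root of the population size, ten thousand sloppy units behave collectively like a far more precise device, which is the point Dr. MacLennan makes about neural populations.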
Dr. MacLennan believes that there is “good reason to think we’ll need systems with large numbers of artificial neurons…there will be lots of manufacturing imperfections…we (need to develop) an understanding of how to put together lots of imperfect parts and still get competent behavior”. He gives the image of a dog leaping up into the air and catching a frisbee. “That’s what we want robots to do,” he adds.
There is so much we still do not know about neural information processing and how the mind works. As Bruce, who routinely keeps up with the literature, notes, there are new discoveries every week. Researchers understand some of the basic principles, and breakthrough work has been done in the areas of sensory and executive systems, but huge gaps remain.
“I can give you two scenarios for the future”, he reflects. “We will make slow progress on all of these things and AI will go along (with that slow progress), or we could have a breakthrough tomorrow in understanding the brain and discern basic principles that operate brain-wide”.
In order to steer toward the latter path, inter-disciplinary studies are crucial, a topic made notable by E.O. Wilson’s Consilience in the late 20th century. To many people’s surprise, Bruce notes that some of the initial impetus for studying the brain came from philosophers. For example, recent progress in psychology and philosophy has established the importance of “embodiment”, which makes the argument that you cannot have dynamic AI without the presence of a rich body that is able to alter its environment using sensory information.
Bruce believes there is a need to more fully digest some of these philosophical insights and investigate their application to AI. From a practical standpoint, this area of study is critical in robotics development. There are ethical questions of course, and potentially dangerous implications. But from a purely scientific standpoint, there are lots of applications for robots of various sizes. To be able to implement this technology, there is a pressing “need to figure out how to put a lot of intelligence into very small packages”, i.e. the size of rats and insects. He describes a rat’s cortex as being the size of a postage stamp, yet they exhibit a relatively great amount of intelligence, resourcefulness, and energy-efficiency.
Interdisciplinary cooperation among the fields of psychology, philosophy, cognitive science, and robotics could help us move forward down the path of developing a greater understanding in fields like developmental robotics, which “tries to understand how development takes place in humans and other animals and then applies those ideas in robots”.
In a parallel realm of thought, we may be coming face to face with the limitations of digital technology, the end of Moore’s law, so to speak. We continually seek ways to leverage more computation out of each electrical component, and analog computation seems to be one way to accomplish that feat. Non-electronic computation is another option, and another area of research for Bruce, who believes that we “should not assume that computation has to be done electronically”.
Click here to listen on Libsyn