Support or Supplant? Daniel Lindenberger on the Future of A.I.

David Moyer

David Moyer covers emerging technology and ethics. A freelance journalist, he has worked for several clients in a writing and consulting capacity. He graduated from the University of North Texas with degrees in Political Science and Religious Studies. In his spare time he enjoys reading, coffee, and passionate debate.

What is intelligence? Is it possible to create intelligence, and if so, what will our role be as the stewards of what we create? Will our creation eventually come to dominate us? The ramifications of creating and shaping consciousness itself are impossible to overstate, and, it can be argued, impossible to fully understand. It is an ethical question potentially vaster by orders of magnitude than any other question humanity has ever faced.

Daniel Lindenberger, founder of the Glia Project (a combination of gameful design, intelligence augmentation, and a crowd-sourced knowledge base), studies the ethical and philosophical questions that the creation of artificial intelligence raises, and, perhaps more importantly, the steps we can take in the immediate future to ensure that artificial intelligence progresses along ethical paths.

Says Lindenberger, “Artificial intelligence has gotten to a place where it’s pretty ubiquitous, and can do some pretty powerful stuff. We live in an age where we are barraged with so much information that we have to synthesize, and A.I. is really good with helping us do that.” However, according to Lindenberger, this tends to cause “representational drift.” To put that in simpler terms, we’re in danger of “outsourcing” our thinking processes to artificial intelligences to too great a degree, making our decisions on increasingly abstracted data.

As an example, GPS technology has revolutionized the way we get from point A to point B. However, we trust our GPS, sometimes even at the expense of our own thinking. Says Lindenberger, “you’ve got this complicated system in your car and it knows where it’s going … and you drive yourself into a lake or get yourself into a dead end because it told you that was the way to go.” This approach to A.I. concerns Lindenberger, and rightly so.

Letting machines make decisions for us is not without benefit. Certainly doing a Google Search for information is quicker than poring through volumes of books in a library. GPS, despite its faults, has made driving easier, and frequently safer. But the argument can be made that we’re losing something of ourselves in handing our decisions over to machines. The more we let machines make our choices for us, the greater the risk that we remove our self-determination and let critical thinking skills atrophy.

At what point is our future dictated by the devices we surround ourselves with, rather than our own consciousness? The consequences of trusting the GPS AI too much are limited in scope, but as AI systems increase in complexity, real questions are raised about the consequences of governments, the military or the general public trusting AI systems too much.

The argument can be made that human error already exists, and that AI error would be no worse. “It’s a good argument,” says Lindenberger. However, Lindenberger’s work is on the creation of AI that functions as a symbiotic “helper” augmenting human decision-making capabilities, supplementing, rather than supplanting, our decision-making processes. He envisions AI that provides us with a greater array of tools for making the right decisions, rather than making those decisions for us, choosing to focus on the augmentation side of AI rather than the intelligence side.

In this way, artificial intelligence is perhaps one of the greatest misnomers in emerging technology. While AI seeks to create intelligent machines, everything about the process and theory has its basis in organic intelligence, in understanding how living creatures process information.

Both technologists and science fiction writers have speculated about where AI will eventually take us. Some envision a future where human consciousness is augmented and expanded through symbiotic AI systems. Others have a darker view, envisioning massive computer systems that, due to their superior computational powers, have taken self-determination away from humans. According to Lindenberger, “I think both images are … like looking at people in the 1930s projecting what it was going to be like in the 2000s, it’s not really doable.”

Lindenberger takes a different approach, referencing chess Grandmaster Mikhail Tal, who, when asked how many moves ahead he could see, quipped, “I only see one, but it’s the right one.”

“I think that’s the approach we need to take,” says Lindenberger. While it’s useful to speculate about where AI will eventually lead, Lindenberger is concerned with using AI to augment and empower humanity in the here and now. Just as in a corporate setting, a long-term vision won’t mean anything unless you can focus on the immediate steps needed to move toward that vision.

The symbiotic relationship between assistive and mainstream artificial intelligence holds the key to advancing augmentative AI. Lindenberger believes the two fields advance each other: mainstream technologies are adapted into assistive technologies, and likewise, advances in assistive technologies like BrainGate have mainstream applications. What’s important in the process is that we don’t let AI replace our critical thinking and decision-making skills, but use it to augment our capabilities.

For more information on Project Glia, visit http://projectglia.com
