Why We Must Hardwire AI if We Want to Sustain the Human Race – A Conversation with Louis Del Monte

Daniel Faggella

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders.


Episode Summary:

Is it possible to make AI friendly to humans via software or will we have to hardwire consideration for humanity into an advanced AI? Louis Del Monte, best-selling author and expert in the field of Artificial Intelligence, argues the latter. In this discussion, Del Monte talks about how he came to these conclusions and wrote a book on the topic, in part inspired by a particular AI study that provoked his grave concern for where AI may take us in the future.


Guest: Louis Del Monte

Expertise: Artificial Intelligence and Physics

Recognition in Brief: Louis Del Monte is the best-selling author of The Artificial Intelligence Revolution (2014) and other books. A past leader in the development of microelectronics for IBM and Honeywell, Del Monte received his BS in Physics and Chemistry from Saint Peter’s University and his MS in Physics from Fordham University. He is the recipient of the H. W. Sweatt Award for Scientific/Engineering Achievement and the Lund Award for Human Resource Management Excellence.

Current Affiliations: CEO of Del Monte & Associates, Inc.

A Turning Point

Del Monte was inspired to write his book, The Artificial Intelligence Revolution, after coming across a study in 2009. The experiment, conducted by the Swiss Institute for Intelligent Systems, consisted of robots programmed to cooperate with one another in searching for food. The robots were supposed to trigger a light atop their heads when they found food, so that the other robots could share in the food source and avoid the poisonous one. Almost immediately, the scientists found that some robots performed better than others.

The scientists copied the successful robots’ programming, the decision-making algorithms they used to find resources, and put those algorithms into the next generation of robots; in a very real sense, the scientists evolved the robots. About 50 generations into the experiment, the scientists noticed that some robots had stopped cooperating.

This was a fairly primitive move, but the “winning” robots had figured out that signaling cost them points: once they lit their lights, the other robots crowded in and pushed them away from the food. By the 200th generation, the scientists observed that none of the robots lit their lights. Had the robots learned greed and self-preservation?
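To make the selection loop concrete, the sketch below is a minimal toy re-creation in Python of the dynamic described above: each robot carries a single “signal when I find food” gene, signaling invites competitors who take a share of the find, and the best scorers’ programming is copied, with slight mutation, into the next generation. The population size, sharing penalty, and mutation rate are invented for illustration and bear no relation to the study’s actual setup.

```python
import random

# Toy sketch of the selection loop described above (illustrative only).
# Each "robot" is reduced to one gene: the probability it lights up on
# finding food. Signaling attracts competitors, who take a share.

POP_SIZE = 100
GENERATIONS = 200
SHARE_PENALTY = 0.5   # assumed fraction of food kept when signaling
MUTATION_SD = 0.05    # assumed mutation noise on the copied gene

def fitness(signal_prob: float) -> float:
    """Expected food kept: non-signalers keep the whole find."""
    return (1 - signal_prob) * 1.0 + signal_prob * SHARE_PENALTY

# Start with a population of random signaling tendencies.
population = [random.random() for _ in range(POP_SIZE)]

for gen in range(1, GENERATIONS + 1):
    # Rank robots by food kept and copy the top performers' "programming".
    parents = sorted(population, key=fitness, reverse=True)[: POP_SIZE // 5]
    # Next generation: clones of the winners with slight mutation.
    population = [
        min(1.0, max(0.0, random.choice(parents) + random.gauss(0, MUTATION_SD)))
        for _ in range(POP_SIZE)
    ]
    if gen in (1, 50, 200):
        avg = sum(population) / POP_SIZE
        print(f"generation {gen:3d}: average signaling rate = {avg:.2f}")
```

In this toy model the average signaling rate drifts toward zero within a few dozen generations, echoing the observation that by the 200th generation none of the robots lit their lights.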

When Del Monte read about this experiment in detail, he became concerned that strong AI machines would eventually have a mindset of their own, and that, judging by the experiment, their agenda may not align with humanity’s. Right now, says Louis, we sense that the AI will serve us; but as the machines get to the point where they are really advanced, that relationship seems much less clear.

Though the topic is wide open for debate, Louis predicts that between 2025 and 2030, AI will have advanced as far as a human mind, and that between 2040 and 2045, machines will exist that are more intelligent than the entire human race combined. “Knowing the results of the experiment,” says Del Monte, “I decided I needed to write a book as a warning, which is how it starts out.”

Can We Program “Friendly”?

“If you look at the most rudimentary life forms,” explains Del Monte, “an insect for example…it naturally seeks protection from its environment and harm.” Now try to imagine a machine whose intelligence equals or exceeds that of humans. We really don’t know that an intelligent machine running on software would behave any better, or more morally, than a human. Software has rules just as society has laws, and humans end up breaking those laws all the time for myriad reasons, which is why we need police to enforce them.

Treaties are another example of human laws that are broken all too often; as a means of insurance and defense, civilizations have sustained armies to protect the rights of those covered by those treaties, says Del Monte.

As an alternative to running potentially faulty software on AI, Del Monte suggests that we hardwire sets of laws and rules – Asimov’s famous laws or any other concepts we deem important – into future AI.

Hardware, in the form of an integrated, solid-state circuit, would act as a filter to ensure no harm to humans. In his book, Del Monte called these circuits “Asimov chips.” These laws could, technically, be expressed in hardware using an algorithmic decision tree, e.g. “will this action result in harm to a human being?” Some probability of error exists, but if the check is implemented at a low enough level in the hardware, it would be safe to say that a machine that wanted to shake your hand would know never to crush it in the process.
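As a rough illustration of the decision-tree idea, here is a minimal sketch in Python standing in for the hardwired check; a real “Asimov chip” would realize this logic in fixed circuitry rather than modifiable software. The Action fields, the force threshold, and the harm test are all illustrative assumptions, not Del Monte’s actual design.

```python
from dataclasses import dataclass

# Illustrative stand-in for a hardwired "Asimov chip" filter: every
# proposed action must pass this check before actuation. Fields and
# thresholds are invented assumptions for the sketch.

@dataclass(frozen=True)  # frozen mirrors the "hardwired" intent: no runtime edits
class Action:
    description: str
    grip_force_newtons: float
    involves_human_contact: bool

MAX_SAFE_GRIP_N = 50.0  # assumed safe-contact force threshold

def asimov_filter(action: Action) -> bool:
    """Decision tree: 'will this action result in harm to a human being?'"""
    if not action.involves_human_contact:
        return True  # no human in the loop, so permit the action
    if action.grip_force_newtons > MAX_SAFE_GRIP_N:
        return False  # e.g. a handshake must never crush a hand
    return True

handshake = Action("shake hands", grip_force_newtons=20.0, involves_human_contact=True)
crush = Action("shake hands", grip_force_newtons=400.0, involves_human_contact=True)
print(asimov_filter(handshake))  # True:  action allowed
print(asimov_filter(crush))      # False: blocked by the hardwired check
```

The point of putting such a check in hardware, on Del Monte’s account, is that the filter sits outside anything the AI’s own software could rewrite.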

Operating in a World of Varied Interests

Einstein, the consummate pacifist, nonetheless urged the development of the atomic weapon, knowing that scientists in Germany were working toward the same objective. This is the truly unfortunate consequence: the notion that an arms race might be inevitable. If other nations develop intelligent AI, including autonomous weapons, and are not interested in following a similar peaceful protocol, how do we move forward in the world to best preserve humanity?

Louis uses the analogy of nuclear weapons to explain his views on this subject. The U.S. is not the only nation to have nuclear weapons, but it is the only one ever to have used them. Since that time, the doctrine of mutually assured destruction seems to have held, and no nation has used them again.

Del Monte believes that this can, in part, be attributed to the fear that any nation that uses such weapons will face retaliation. “The whole concept that if I use it, I’m going to pay – you have places like Iran, North Korea, that are pursuing nuclear weapons; however, even the dictator of North Korea knows that if he were to use it, it’s likely our response would have to be proportionate.”

In the same light, if a nation develops autonomous weapons and indiscriminately attacks targets, killing innocent civilians in the process, it should expect retaliation. Louis acknowledges that there is, in this case, an unfortunately complex web of human motivation.

What the scientific community really wants is for these weapons to be banned outright. Louis believes that, at the very least, severe limits should be placed on the uses of AI, set forth in a doctrine similar to the one the world has adopted for nuclear weapons. We may soon have in our hands a technology that is just as dangerous to the continued existence of humanity as nuclear weapons, if not more so.

