Robots Could Get their Own Sense of Morality

Corinna Underwood

Corinna Underwood has been a published author for more than a decade. Her non-fiction has been published in many outlets including Fox News, CrimeDesk24, Life Extension, Chronogram, After Dark and Alive.

In his 1942 short story “Runaround,” science fiction writer Isaac Asimov introduced “the three laws of robotics,” a set of simple moral rules designed to guide advanced artificial intelligence. In reality, it has been argued that we are nowhere close to creating robots that could even comprehend such rules, but this may be about to change.

A research team from Tufts University, Brown University, and Rensselaer Polytechnic Institute is collaborating with the U.S. Navy to explore the possibility of developing robots with a sense of morality. If the researchers realize their goal, they will be able to design a robot that can analyze complex situations and autonomously make ethical decisions.

The team, led by Professor Matthias Scheutz of Tufts University, has a tough task ahead. To create a robot with a sense of morality, they must first break human morality down into basic concepts, then convert that framework into an algorithm that can be implemented within an artificial intelligence. In theory, the algorithm would enable the robot to weigh new evidence at hand, override its pre-programmed instructions when warranted, and justify its decision to the humans controlling it.

One example of such autonomous moral decisions would be a situation in which a robot has been instructed to deliver important supplies to a military installation. Should the robot encounter an injured soldier along the way, it would have the ability to assess the situation and decide if it should postpone the original mission and assist the soldier.

Scheutz and his team are addressing the task with a two-step process. Initially, all of the robot’s possible decisions would be processed through an ethical checking system, similar to those used by advanced question-answering AIs such as IBM’s Watson. If this check is insufficient for the robot to reach a decision, it would fall back on the system created by Scheutz and his team, one that models the framework of human moral codes. If successful, this advanced technology may be used to help soldiers on the battlefield.
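The two-step decision flow described above can be sketched in code. The following is a minimal illustrative sketch, not the team’s actual system: the rule book, fact names, and harm scores are all hypothetical stand-ins, with a fast rule lookup playing the role of the first-stage ethical check and a toy harm comparison standing in for the deeper moral-reasoning model.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Situation:
    """A snapshot of what the robot currently knows (hypothetical facts)."""
    description: str
    facts: frozenset


# Stage 1: a fast rule-based ethical check, loosely analogous to a
# question-answering lookup. All rule names here are invented examples.
RULE_BOOK = {
    frozenset({"injured_human_nearby", "mission_not_time_critical"}): "assist_human",
    frozenset({"mission_time_critical"}): "continue_mission",
}


def rule_based_check(situation: Situation) -> Optional[str]:
    """Return an action if a rule's conditions are fully met, else None."""
    for conditions, action in RULE_BOOK.items():
        if conditions <= situation.facts:
            return action
    return None  # rules insufficient -> escalate to stage 2


# Stage 2: a fallback moral-reasoning model. Here it is a toy comparison
# of assumed harm scores, standing in for a real model of moral codes.
def moral_reasoning(situation: Situation) -> str:
    harm_if_continue = 10 if "injured_human_nearby" in situation.facts else 0
    harm_if_divert = 5 if "mission_time_critical" in situation.facts else 1
    return "assist_human" if harm_if_continue > harm_if_divert else "continue_mission"


def decide(situation: Situation) -> str:
    """Try the fast ethical check first; fall back to deeper reasoning."""
    return rule_based_check(situation) or moral_reasoning(situation)
```

In the supply-delivery scenario, a situation containing only the fact `injured_human_nearby` would fail every rule in the first stage and fall through to the second, where the assumed harm of ignoring the soldier outweighs a minor delay, so the sketch returns `assist_human`.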

Image credit: Futuris Tech
