Do Unto Your Smartphone as You Would Do Unto Others

Daniel Faggella

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders.

Episode Summary: When should we care about robots? How quickly should and will that change? These are just some of the questions addressed by Professor David Gunkel, whose work on the moral valuation of AI is among the first of its kind. In this interview, we consider the extent to which our “moral weighing” of other entities is arbitrary, and ask what a biased process might imply when we create other aware entities.

Guest: David Gunkel

Expertise: Communication Technology and Ethics

Recognition in Brief: Dr. David Gunkel is a recognized educator and author who received his PhD from DePaul University in 1996. He currently holds the rank of Presidential Teaching Professor of Communication at Northern Illinois University, NIU’s highest honor for excellence in teaching. He is the author of over 40 scholarly articles and five published books. His most recent book, The Machine Question: Critical Perspectives on AI, Robots, and Ethics, was published by MIT Press in 2012. He has lectured on the philosophical aspects of the Internet, computer technology, and critical theory at institutions and organizations in the US and abroad.

Current Affiliations: Professor of Communication at Northern Illinois University; Managing Editor and Co-Founder of the International Journal of Žižek Studies; Co-Editor of the Indiana University Press series in Digital Game Studies

The Matter of AI Policy

Policy around the treatment of AI matters. It matters, says Dr. David Gunkel, if for no other reason than the inevitable effect that the treatment of AI will one day have on us as a human species. If we take into consideration the impending reality of sentient-level AI (and scholars continue to debate whether we should), then thinking about these issues now is the only way to promote the best possible set of outcomes.

Promoting this topic of discussion has been one of Gunkel’s preoccupations for the past decade. But before we can set policy, we need to ask and debate the fundamental question of how we establish moral standing. In other words, how do we – humans – decide whether something has moral weight? We live in a world where some organic and inorganic forms are granted more moral credence than others.

David gives the obvious but useful example that we place the highest level of moral value on our children, while the rock we kick down the road, or the iPhone in our pocket, carries almost no moral weight in terms of life value. Other contexts are impossibly grey. “The question becomes where and how do we draw the line which decides who’s inside and who’s outside the moral community?” asks Dr. Gunkel.

Shades of Moral Philosophy

Traditionally, humans have based moral evaluations on the “properties approach”. The evolution of moral philosophy has in part helped to determine which properties qualify an entity for moral standing – rationality, sentience, language, and so on – and we in turn use these criteria, both consciously and subconsciously, to judge whether an entity possesses them. More often than not, these qualities hinge on matters of degree.

A primary problem with this approach is that over time these properties have changed, proving more dynamic than static. In the Greco-Roman period, for instance, a land-owning male could exclude his wife and children from certain rights that would be considered fundamentally human by today’s western standards. Because women and children were viewed as property, as something less than full human beings, they were granted only a restricted and limited moral standing.

David notes that this has changed again very recently, thanks in part to the work of Peter Singer and Tom Regan in animal rights and ethics. We have lowered the bar once again, attributing moral weight to creatures that have no recognized language but that can suffer and feel pain and pleasure.

Weighing the Conceptual Problem

According to Dr. Gunkel, at the end of this 2,000-year stretch there are two main problems with the properties approach: an ontological one and an epistemological one.

Ontologically, how do we know which qualities qualify? Will we know if we’ve raised the bar too high or too low? Who gets to decide the answers to these questions anyhow? Humans do, and unfortunately those decisions have historically yielded bad outcomes for humanity (doubtless generations of women and minorities would agree). We seem to be – as a whole – choosing a more enlightened path in terms of defining qualities, but there are still ways in which the moving bar is a problem.

Take the sentience of animals, for example – where does it begin and where does it end? Compare a lobster to a dog – is it fair to claim that only mammals are sentient? Deciding on the basis of such shifting speculation seems a rather blunt instrument, and it yields a moral and philosophical struggle within our own species and with other species.

On an epistemological level, once we settle on a set of properties (generally internal states), we find that we are unable – at least for now – to observe those properties directly; we must rely on external evidence, primarily behavior. This leads to another conundrum: how do I know that another entity is a thinking, feeling thing like myself? Dr. Gunkel argues that we really cannot know for certain.

An annual version of the Turing test, used by AI researchers to interrogate artificial entities and determine whether they can pass as intelligent, was recently hailed in the media as having been passed for the first time by a chatbot. While many dispute the results, the concept is troubling on another level. If you create a machine that is able to simulate pain, David asks, do you assume that the robot is really in pain? How do you assign a cause to the behavior?

Gunkel points to Daniel Dennett’s seminal essay Why You Can’t Make a Computer that Feels Pain, and attributes the basic reason to the fact that we simply don’t know what pain is; we can’t compute it. We have assumptions, but the thing itself is a conjecture based on external behaviors. The same argument could be made for intelligence.

Shifting Our Mental Frames

One way to think about these perplexing notions is to shift the question. Maybe morality is not a matter of properties; instead, maybe moral standing is better understood through a social-constructivist lens and a relational approach. We live in a world with other entities and decide, based on our interactions, who gets to be inside the moral community and who stays outside. Once humans invent a robot that more closely resembles a human in physicality and behavior, our inclination might be to start considering the implications of its treatment – but not until then.

“My response is it’s real now, and if we don’t start asking and answering these types of questions immediately, we will be behind when that time comes. Sentience may be a red herring, a way we excuse thinking about the problem,” says David. “I don’t think it’s a question of level of awareness of a machine – it’s not about the machine – what will matter is how we (humans) relate to AI.”
