Calling Siri Names? You’re Not Alone – A Closer Look at Misuse of AI Agents

Daniel Faggella

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders.

Episode Summary: After receiving her PhD in Computer Science from the University of New York in 2002, Dr. Sheryl Brahnam steered her research toward human abuse and misuse of computers, specifically conversational agents such as Siri, phone-based automated agent systems, and even chat support. Her research raises questions in relatively new territory: Are AI agents prone to misuse? Why do people mistreat these agents in ways they would never treat a human? And what types of regulations will we need as AI improves and becomes more intelligent?


Guest: Sheryl Brahnam

Expertise: Artificial Intelligence and Computer Science

Recognition in Brief: Sheryl Brahnam has authored many academic publications in the areas of bioinformatics, biometrics, computer science, and human-computer interaction. She has co-edited multiple books, the latest being Medical Technologies of Inclusive Well-Being: Serious Games, Alternative Realities, and Play Therapy (in press), and has organized several workshops and conferences on human well-being and virtual interactions. She and her colleagues received the HCII 2015 Best Paper Award for HCI and the NEDSI 2013 Best Contribution to Theory Award, and Brahnam held the Daisy Portenier Loukes Research Fellowship from 2013 to 2015, among other recognitions.

Current Affiliations: Assistant Professor at Missouri State University; Assistant Editor for the Intelligent Decision Technologies (IDT) journal

Face-to-Face with Computers

“Human-computer interaction is usually based on a metaphor, and two of the biggest are that human-computer interaction (HCI) is communication and that human-computer interaction is manipulation.” In the first case, says Brahnam, we use language to communicate. As we’ve evolved, we’ve moved into the second realm as well, with the desktop as the visible metaphor.

These days, instead of deleting a file by typing a cryptic command, we use a mouse to drag and drop an unwanted item into a trash can. There are other metaphors, says Sheryl, but those are the two big ones and the ones she and her team are most interested in studying.

Both metaphors, HCI as communication and HCI as manipulation, have evolved over the last 50 years. We’re now entering an era in which HCI as communication means interacting through natural language. “We call these interfaces by many names, but the name I often use is ‘conversational agent’, and if I can see them – an embodied conversational agent (ECA)”, explains Sheryl.

She points out that HCI doesn’t have to be this way, but that humans have engineered it this way. To us, this approach to computer interaction makes sense and aligns with how we perceive interactions with other entities.

Perhaps reflecting our desire to connect, those interactions are increasingly user-friendly. We’re now moving away from the screen toward virtual and augmented reality, even manipulating virtual objects in 3D space. We continually lower the barrier of what it takes to interact with these extensions of ourselves.

“The ease of understanding what is going on (is important), but it also enables people to cooperate more because you’re dealing with (something) like a person, and it also makes for more engaging interfaces and activates a large number of unconscious social interactions as well, so there’s a lot of benefits in making a computer interface resemble or behave more like a human being,” Sheryl remarks.

The Inevitable Evolution of the ‘Diss’ in HCI?

We’ve all had the experience of being too rough with our car or our toaster. What happens when we treat our conversational agents in a similar manner?

If you’ve ever called your phone company, the first “person” you likely hear on the other end is a conversational agent. The agent greets and guides you, asking you to press buttons that align with your request. If things aren’t moving fast enough, or the agent doesn’t understand your request, chances are you’ve spoken in gruff tones or even yelled profanities at a conversational agent that didn’t get you in touch with the right person or department.

Brahnam and her team have made it their mission to look at the nitty-gritty details of these less-than-stellar human-agent interactions. “They’ll call them names based on their gender or what they perceive as the social attributes of the agent”, she says.

Brahnam and her team look at interaction logs, online and in other settings, and examine what people do during their CA or ECA encounters. Often, people don’t just do or say what they’re expected to. Students interacting with a pedagogical agent, for example, don’t just ask questions about the subject. They might instead talk about drugs or sex, insult the agent about its race, its embodiment, or its age, and play out other unintended scenarios with these agents.

“We’ve been fascinated by this, while others ignored it as noise; we were wondering ‘why are people doing this?’ and ‘how much of this is going on?’” According to Brahnam, roughly 10 to 50 percent of interactions are abusive, which her team defines in two ways: 1) literally, when a person misuses the interface in some way (i.e. if the agent is supposed to be a teacher, the student treats it as something other than a traditional teacher); and 2) figuratively, when the words or messages directed from person to agent would not be condoned as appropriate if exchanged from person to person.

The disconnect might seem obvious, but why do humans choose to interact with objects in subversive ways? Brahnam posits that humans have never before dealt with talking things, at least not outside the realm of fairy tales. “Here for the first time in human history, we actually do have speaking things…we don’t know how to react to it; we’ve been taught that if you start talking to things, there is something wrong with you.”

Sheryl describes this as a form of anthropomorphism, the overlaying of human attributes onto objects or animals that are not human. An example is kids talking to their dolls or stuffed animals, a habit we’re told to give up as we grow up. We are consciously and subconsciously taught to shed these anthropomorphic tendencies.

Suddenly, humans are in a situation where they are supposed to interact and talk with computers. “So people go in and out of believing that what they’re talking to is real – sometimes they’re testing the agent to see how they react,” explains Sheryl. We can imagine our ancestors doing something similar when they threw rocks at strange animals to see how they would react.

But people often take it beyond mere observation. “These agents are designed to imitate human beings, and in certain domains – Siri or Cortana, for example – if they’re talking to you, and you ask if they have a boyfriend, an answer like ‘yes, his name is Steve’ invites all kinds of abuse,” remarks Brahnam. Computers clearly can’t have a boyfriend, have sexual preferences, or be fans of the Red Sox.

Even more troubling is the thought that these conversational agents often learn how to interact based on experience. Sheryl mentions Jabberwock, a chatbot developed in the late 1990s. “It learned from other people what to say…it might say ‘I’m God’ because that’s what users say. People get upset when agents act too human; they want to put them in their place.”

Brahnam points out that a lot of people behave very nicely and are very polite toward CAs. The misuse or abuse likely stems from disinhibition; when we’re communicating online, we don’t have all the social cues available, and in turn we don’t have normal social boundaries.

Humans do this with email as well, where we’ve seen too many cases of bullying, flaming, or failing to cooperate. There are both benign and negative types of interaction, and Brahnam and her team continue to study the negative – the misuses and abuses – to better understand what makes us tick when we interact with something not quite, but almost, human.
