Making Robots More Humane at Brown University

Dyllan Furness

Dyllan explores technology and the human condition for Tech Emergence. His interests include but are not limited to whiskey, kimchi, and Catahoulas.

Before we welcome a new technology into our lives, it’s wise to consider what effect it will have on us as human beings. What might a technologically disruptive app do to our innate empathy or self-esteem? When that technology is sophisticated enough to actually resemble human beings, this forethought is that much more important. Robots and artificial intelligence will disrupt the very fabric of society and dramatically change the way we relate to technology and to each other. (In fact, they already are.) So preparing for this change is perhaps as important as the development of the technology itself, if not more so.

In this vein, Brown University recently put its support behind the Humanity-Centered Robotics Initiative (HCRI), a faculty-led effort to explore, uncover, and report on the many facets of integrating robotics into our everyday lives. As anyone who’s read Isaac Asimov or Arthur C. Clarke can attest, even if we’re very cautious, this integration has the potential to augment or to destroy. HCRI hopes to anticipate this disruption and help engineers, researchers, and social scientists steer robotics in the right direction.

“We want to leverage the atmosphere, interests, and talent at Brown University with the goal of creating robotic systems that work with people for the benefit of people,” computer science professor Michael Littman told Emerj. “And we’re dedicated to understanding what the actual problems are – not just to create fancy technology, but actually to try to understand where the difficulties and shortcomings are and to focus on those.”

HCRI’s work will be split into six collaborative research areas: robots for scientific research; motion systems science; design and making; perception and decision-making; robots for independent living; and ethics, policy, and security. Combining elements of design and making with ethics, policy, and security, one DARPA-funded project plans to explore ways of engineering robots that have some awareness of social norms.

Littman co-founded HCRI with Professor Bertrand Malle three years ago, with the intent of bringing a range of academic perspectives to bear on robotics and fostering collaboration in the process. Brown’s recent support now allows Littman and Malle to bring an associate director and a postdoctoral researcher on board, as well as offer seed funds for new robotics research and symposia. Already, two HCRI-sponsored symposia have brought together more than 60 Brown faculty members from 20 teams in the interest of a better robotic future.

Provost Richard M. Locke calls the initiative “distinctively Brown,” a nod to the way it draws together an array of faculty from across the university. “This is about addressing big problems in the world by bringing together people with diverse perspectives. That’s something Brown does particularly well, and it’s what will set HCRI apart from robotics programs elsewhere.”

Among its many approaches, the initiative plans to examine how human-robot engagement differs from human-human engagement, or from human engagement with other technology. “Results show that people can often be more frank with a robot, because they don’t typically feel judged,” Littman told us. “They may be more compliant with a robot than with an app. The same information delivered by an app versus a robot seems to have very different effects in terms of people listening to them.”

As an example of this thinking in practice, Littman outlines a startup project in which robots help recovering alcoholics refrain from taking another drink. The robot would essentially perform the role of a counselor. Though inherently less dynamic than a professional human counselor, the robot could engage the subject in similar ways, such as by asking pointed questions like “How long have you been sober?” The hope is that these questions would grant the subject another perspective and encourage them to reflect on their actions. Littman says, “By having the robot play the role of counselor or therapist, we will make the insights and capabilities of highly-trained counselors more available to a wider set of people.”

Despite their enthusiasm, neither Littman nor Malle is naive about the burden of the task ahead of them. Rather, they both recognize that many unforeseen obstacles lie ahead. Issues like privacy and security stand out as some of the most complicated. “A robot in someone’s house that can just call an authority when it thinks they’re in trouble is a really complicated issue,” Littman says. “We’re not even at the point of being able to put down concrete recommendations of what the guidelines ought to be in those scenarios.”

“There are many values we want to uphold for ourselves and for other people,” Malle says. “Privacy and autonomy are sometimes in conflict with safety and health. So I can leave it to you to be stupid and get drunk and drive. Or I can try to limit the harm by taking away your autonomy and ripping the keys out of your hand. Or by telling somebody that you’re going to do it. The same goes for robots. If they’re trying to benefit health, they might at the same time take away autonomy.” Malle points out that our autonomy is already under threat from AI and robotics, for better or worse. Consider self-driving cars, text autocorrect, or spam filters.

Another big hurdle Malle raises is individuals’ wants and expectations. You may want a robot that speaks straightforwardly. I may want a robot that speaks in subtleties and pleasantries. “We have to decide how to balance these values,” he says. “There’s no fixed robot for everybody.”

Littman and Malle stress the importance of a collaborative effort with the people who’ll eventually engage with robots. And that can mean almost anyone, from infants to the elderly. It’s vital that we as individuals, and society as a whole, are comfortable and secure with our newfound partners. “We want to make sure we’re building machines that empower people and make their lives better,” Littman says. “Not ones that become Big Brother or make their lives scary.”

Image credit: Martin Dee, Brown University
