[INAUDIBLE] SPEAKER: Welcome to the second class installment of the Ethics and AI course. Our speaker this week is Ross Knepper who is a relatively new faculty member of the Computer Science Department. This is your third year, right?
ROSS KNEPPER: Third year.
SPEAKER: Third year faculty member. So he is an assistant professor. His area is robotics, as will be clear from the talk. He's been honored by an Air Force Young Investigator Award and is considered one of the new leaders in the area of human-robot interaction, HRI. So he'll be talking today about autonomy, embodiment, and anthropomorphism. The ethics of robotics fits in perfectly with our theme, so [INAUDIBLE] for Ross.
ROSS KNEPPER: Thank you, Joe.
[APPLAUSE]
Welcome everyone to the robotics edition of the ethics course this semester. I stuck a lot of long words on the slides, maybe, to get you curious and draw you in a little bit. And I'm going to explain what all of those mean in due course.
But first, I wanted to get you thinking about what are our expectations of robots. You hear a lot these days about robots-- self-driving cars, robots in factories, maybe someday robots in your home. Some of you probably already have robots in your home. You might have a Roomba.
But there's this idea that robots are going to be doing almost everything in our lives pretty soon. Maybe that soon is actually a little ways out because the technology is not here yet. But this is the right time to be thinking about what are the implications of robots and robotics in our daily lives.
So two of the implications that you're most likely to hear about, if you turn on NPR these days, are robots taking people's jobs is the first one. This comic, I thought, was apropos because, politically, a lot of people like to blame job exports to other countries, et cetera. But if you look at the manufacturing production output over the course of the last several decades, it's actually held constant.
And what has changed is increased automation. So it's not Mexicans that are to blame for job losses in manufacturing industries. So automation definitely plays a role here. You hear a lot about that. But that's actually not what I'm going to talk to you about today.
The other thing that you might hear a lot about is robots taking lives, so robots in the military, robots in the police force. There was an episode last year, I think in Dallas, where a bomb-defusing robot was actually used to kill somebody who was on a shooting rampage-- it just snuck up on him with an explosive and blew him up. So that was a misuse of the robot, strictly speaking. But it stirred a lot of debate about how robots can be used to kill.
That's all I'm going to say about that subject as well. Not that it isn't a serious subject but I want to actually talk about something a little bit different. So these are both adversarial scenarios. It's robots against people. But if we're going to have robots in our daily lives, we're actually going to see something very different than that, which is what Hollywood has been predicting for decades, which is robots and people forging bonds, actually the buddy flick, right, with the human and the robot that are friends, that get along, that relate to each other.
And that's a much more complicated thing when you think about it because what are all the capabilities that a robot needs to have in order to get along with the person? And we, as engineers, have a duty to create robots that are going to satisfy the public's demand. But that comes with a lot of strings attached because, oftentimes, the public's view of what robots are going to be able to do is actually vastly inflated thanks in large part to Hollywood.
So if you look at our daily lives, there is such a thing as social bonding between people and robots even today. We have people decorating their Roombas in their home. We have people putting their pets on the Roombas and riding them around and even giving them knives and doing some kind of-- I'm not exactly sure. It's competitive, whatever it is. Unfortunately, there was no video, just this still with that one. So I'm not sure how that one played out.
This one's kind of a ringer. This is a professional robot, the PR2. But they dressed it up for Christmas, made it look nice. This guy here is not technically a robot. This is the Amazon Echo, Alexa. And nowadays, people sell skins that you can put on it. Alexa is not really a robot although it fulfills many of the roles that you think about when you think about what robots are going to be in our lives. And so it seems like it fits here pretty well.
So now I want to jump into those long terms I mentioned and explain what they mean and what are the implications when we're talking about how we relate to robots. The first one is embodiment. So robots have a physical form in the world. It's not just a computer program running on a screen. It's something that you can look at.
So these three are all art. So these are static installations. This one in the Pittsburgh airport, it's several stories tall. These two you can buy on Etsy. So artists created these to look like robots and to conjure certain images. And even though they have no emotion and they have no thought process, they still have character, right? You can look at these and judge how you would expect them to behave, how you might expect to relate to them. So embodiment turns out to be a really crucial cue that people use when they're relating to robots.
[VIDEO PLAYBACK]
- Hi, robot.
ROSS KNEPPER: If you didn't believe me, here's some evidence. This went viral, I guess, maybe a month ago or so, three or four weeks ago. This is a water heater that somebody discarded. And it's a cylinder with a box on top. And there's two things that look like eyes.
- I love you, robot.
ROSS KNEPPER: So maybe it's close enough. At least, in a two-year-old's mind, it's close enough to a robot.
[END PLAYBACK]
But the way that she relates to this thing tells us a lot about her expectations of it. And because it's inanimate-- it's just sitting there doing nothing-- everything that it conveyed to her was based on appearance alone, right? So this was a powerful emotional bond that was forged strictly based on appearance. And that is a really powerful thing.
But it's not the only characteristic that we care about. So another one is autonomy. So this video is taken in an Ikea plant. Now, the thing with Ikea is that you build it yourself. So they're not assembling furniture. But what they're doing is packaging it into boxes that they can ship very compactly across the ocean. And then you can buy them and take them home.
So these are going to be bookshelves one day. And these robots are autonomous because they've been programmed to do this packing job and do it very efficiently. They're doing many in parallel. They are fast. They're very precise. And they can work all day, day and night, non-stop.
So they're able to do all of this without a human in sight, right? And they're able to do that because they were programmed to do it. And now they operate autonomously. That's not to say that they're thinking for themselves. So autonomy here does not mean that they have free will or anything like that.
What it means, in this case, is time-shifted human operation. So a human programmed them, very literally and painstakingly, to go through those motions that you saw. And then they do it repeatedly, over and over again, at a later time, forever, until the programmer stops them and reprograms them again. So that's autonomy.
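To make "time-shifted human operation" concrete, here is a minimal sketch in Python, assuming a hypothetical robot interface with move_joints and actuate methods and made-up waypoint values (it is not the plant's actual control code). A human writes the motion cycle down once; the machine then just replays it, with no sensing or decision-making of its own.

```python
import time

# Hypothetical joint-space waypoints, hand-authored once by a human programmer.
PACKING_CYCLE = [
    ("move_to_pick",  [0.0, -1.2, 1.5, 0.3]),
    ("close_gripper", None),
    ("move_to_box",   [1.1, -0.8, 1.0, 0.3]),
    ("open_gripper",  None),
    ("move_home",     [0.0,  0.0, 0.0, 0.0]),
]

def execute(step, target, robot):
    """Send one pre-scripted command to a (stand-in) robot interface."""
    if target is None:
        robot.actuate(step)        # e.g. open or close the gripper
    else:
        robot.move_joints(target)  # follow the stored trajectory

def run_forever(robot):
    # The "autonomy" here is just repetition: the same human-authored cycle,
    # executed over and over until somebody stops and reprograms the robot.
    while True:
        for step, target in PACKING_CYCLE:
            execute(step, target, robot)
        time.sleep(0.1)
```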
The third one we have is agency. So agency is the idea that something has independent decision-making ability. And this video was produced by Heider and Simmel, a research team, back in 1944. This video was shown to participants. And they were asked to tell a story based on what they saw in the video.
And it's just several shapes moving around on a screen. So where does the story come from? Well, something about the way that these shapes are moving conveys an idea, right? The relations between the shapes start to suggest relationships among people. And so if you watch enough of this movie, people actually start to reconstruct a story. And actually, the majority of participants who were shown this video told, basically, the same story about the scenario that's going on here, just based on a few geometric shapes moving on a screen. So there's something about the motion of these shapes that assigns agency to them. Question?
AUDIENCE: You said reconstruct rather than construct. So did they have a specific story in mind?
ROSS KNEPPER: I don't know exactly what Heider-- oh, the question was, did they have a specific story in mind? I don't know exactly how specifically they conjured up a story. But I suspect that they did have a sense of what they wanted people to tell.
People certainly add their own details. It's not exactly the same story. But it wasn't just a shot in the dark. It's clearly not random moving shapes on a screen. That's the key thing. They constructed it to give these shapes agency. Question?
AUDIENCE: Yeah, I don't know if you'll get to this later in the talk. But I'm just wondering, for the form of emotional bond with the robot, if it's important that it looks somewhat like a human. A Roomba is just a cylinder. [INAUDIBLE] eyes on it, and I don't think the girl would go up to the [INAUDIBLE] factory [INAUDIBLE] and be like, hi, robot, whereas she does it to the--
ROSS KNEPPER: Definitely, so agency and anthropomorphism, which we're going to talk about, come from a combination of visual appearance as well as behavioral cues. And I'm going to talk a lot more about that. So agency does not require a physical presence. So a chat bot, for example, has agency because it seems to be having its own thoughts and it responds intelligently to you and so on.
So when you put these three things together, you have the cues that are necessary for anthropomorphism. So this is the idea that a robot or an entity seems human to us. And that little girl was definitely anthropomorphizing the water heater based on only one of the criteria. She expected, apparently, some kind of reaction from the robot. It's not clear whether or not she got what she was looking for. But she seemed pretty happy to hug it. But full anthropomorphism is something that Hollywood has really mastered.
[VIDEO PLAYBACK]
- [INAUDIBLE]
ROSS KNEPPER: So this scene from WALL-E is a beautiful example of two robots building a relationship. They learn to communicate. They're coy with each other. There's some kind of possible romantic interest that's building up, right? So Hollywood has gotten very good at suggesting anthropomorphic cues based on the behavior of the characters that you see.
And we know that they're just moving shapes on a screen like Heider and Simmel. But we attribute a lot more anthropomorphic tendency to these robots. And they appear to be making friends. That's pretty cool.
[END PLAYBACK]
So Hollywood has mastered this to a much larger degree than engineering has. So today, we can't build robots like this. Someday maybe we will. Here's the kind of robots that we can build today.
So this is a system I created a few years ago called Ikea bot. As the name suggests, it assembles Ikea furniture. And this was an experiment to see how multi-robot systems could work independently.
So these robots are actually doing many of the same kinds of things that humans do when they jointly assemble furniture. They're coordinating with each other. They're coming up with plans. They're dividing work based on their capabilities.
So the robot with the screwing tool is much more effective at screwing. Therefore, the other one is going to be the delivery robot. And it's going to go fetch parts and bring them to the assembly site. So these guys are actually thinking, if you will.
Although they don't appear human at all. It's a single arm on a four-wheel base. So it would seem that people are not going to relate to these robots in the same way that the two robots in WALL-E were relating to each other.
So we did another study based on Ikea bot where we gave the robots the ability to ask for help. And we did this from an engineering standpoint, which is that complex systems fail. And if they can analyze their failure and determine that it's not correctable on their own, we'd like to give them the ability to articulate to a human bystander how to achieve a failure correction. So the robot's actually speaking here. It says, "Excuse me."
[VIDEO PLAYBACK]
- [INAUDIBLE]
ROSS KNEPPER: So maybe it's a little hard to understand. We also printed it up on the screen in large letters in the room so the people knew what was going on. And you'll notice here, he waited to come around until he finished what he was doing. And the reason is that the robot was able to convey clearly what it wanted.
And so he could finish his job so that he wasn't in the middle of something and then go around, help the robot. And then they could both return to work. And the robot was being polite at the end. It said, "Thank you. I'll take it from here."
But actually, that's a very functional behavior as well because it's saying your work is done here. Get back to what you're doing. We want them both to be as productive as possible. So you could imagine a system like this, maybe, working in a factory someday. Robots and humans actually working together instead of one replacing the other. So that's what we expected to see when we designed the study. But we also saw a lot of other behaviors.
So this is another case where a human comes around. And a couple of interesting things-- first, rather than doing what the robot needed because it didn't specify here what it needed, he tries to engage the robot in dialogue. He says, what do you need? It's a very reasonable question.
Trouble is, the robot doesn't have the ability to understand speech. Because we are engineers, we built robots that were going to ask for help, not give help. So it never dawned on us that the robot would need to be able to answer a question like, what do you need? This was a simple experiment in a laboratory.
Now, clearly, you'd want to give robots that ability if they're going to go out into the real world. But it turns out that this kind of dialogue is very, very hard to do. So the next thing here, he's got this very open kind of body language. So he seems pretty excited to be able to help this robot.
[END PLAYBACK]
He's there. He's ready. Just tell me what to do. That's kind of the attitude. And that's exciting. But we built a robot system that was completely unprepared to embrace his enthusiasm. So kind of mixed results, right? We showed that the system can be very effective when it gives very clear one shot directions. When it gives ambiguous directions like, please help me, then the interaction problem becomes much more complex. Question?
AUDIENCE: When the robot asked for the white leg on the black table, what was going on there? Did it see the black table and know that it needed the thing on it, or?
ROSS KNEPPER: So the question is, what was going on when the robot asked about the white leg on the black table? There's a whole paper that's written on this subject, which, if you talk to me after, I can point you to. But briefly, the robot is looking around the area and determining, based on the environment, what's going to be the most concise, unambiguous way of raising the help request because, if you give an ambiguous request, confusion ensues. If you give a simple-to-understand request that is unambiguous, then the right thing happens.
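As a rough illustration of that idea, and not the actual algorithm from the paper, here is a small sketch that assumes a toy scene where each object is just a set of attribute strings. It only shows the principle: prefer the shortest description that matches the needed part and nothing else in the current scene.

```python
from itertools import combinations

# Hypothetical scene: each object is a set of attribute strings.
scene = [
    {"white", "leg", "on black table"},   # the part the robot needs
    {"black", "leg", "on floor"},
    {"white", "leg", "on floor"},
]
target = scene[0]

def matches(description, obj):
    # Every attribute mentioned in the description holds for the object.
    return description <= obj

def shortest_unambiguous(target, scene):
    attrs = sorted(target)
    # Try descriptions in order of increasing length; return the first one
    # that picks out the target and nothing else in the scene.
    for k in range(1, len(attrs) + 1):
        for combo in combinations(attrs, k):
            desc = set(combo)
            if matches(desc, target) and sum(matches(desc, obj) for obj in scene) == 1:
                return desc
    return set(attrs)

print(shortest_unambiguous(target, scene))
# {'on black table'} -- one attribute already singles out the part,
# so the spoken request can stay short and still be unambiguous.
```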
AUDIENCE: I just know in Intro to Psychology one of the examples that they give is how hard it is to describe something as a table, and to have a definition of what a table is is really hard. So for the robot to know that that was a table seems really impressive.
ROSS KNEPPER: Right, so for the robot to know it's a table seems impressive. Well, so in psychology, there is a term called common ground, which says, it's the set of facts that we all know that we all know. So we all know that we're here for a lecture on robot ethics. So in this case, we're assuming a certain amount of common ground about what is a table, what is a leg, and so on. It's not like these robots have to grow up and learn to speak the way that you do. Yeah, question?
AUDIENCE: How important is the part about the man being excited and enthusiastic because I can imagine that, once he gets used to this kind of stuff, he'll be pretty non-enthusiastic?
ROSS KNEPPER: So this is a bit of a tangent. But I think it's interesting. So I'll answer. The question was, is it important he's enthusiastic because the hundredth time this happens he's not going to be. I think that there's actually a real synergy here, which is robots are good at doing repetitive, dull tasks, dangerous tasks. They're good at doing certain classes of things.
Humans are good at an entirely different class of things. We're good at creative problem solving. We're good at troubleshooting. We're good at dealing well with the unknown.
So if it was the same failure every time, yeah, he's going to get awfully bored. But if it's something new every time, that's actually pretty stimulating. So there's a whole hour-long lecture on this subject. But that's the short version of why I think that robots are not going to take all of our jobs, because I think there's going to be plenty of opportunity and need for this kind of synergistic interaction.
So all right, so there's opportunity for robots to interact more productively. But really, the point here is that this guy is anthropomorphizing the robot. He has expectations that this robot is going to be able to engage him in dialogue. It's going to be able to coach him on the right way to solve the problem.
So even though this robot looks not at all humanlike, at least to some degree, it clearly was successful at anthropomorphizing itself in the human's mind. And that's an important clue here because appearance is not everything. Behavior is a really important cue.
So there was a question earlier about, do robots have to look humanlike? So there's a researcher in Japan who builds very, very humanlike robots to the extent that it's hard to know which one's which. Does anyone know which one's which?
[INTERPOSING VOICES]
ROSS KNEPPER: So who thinks this is the human? This one? OK, you guys are right. So it's the one on the left that's a human. So this is called a geminoid. And it is a pretty convincing facsimile, at least to look at it. There's an even newer model.
[VIDEO PLAYBACK]
[MUSIC PLAYING]
We don't need all that music. So this is actually the newest model geminoid. He patterns each one after a different person. So here it is actually going through some of its basic exercises. What do you guys think? Does this--
[LAUGHTER]
Does this look humanlike?
AUDIENCE: Almost.
ROSS KNEPPER: Almost. It's kind of silly. Does anyone find it creepy?
[INTERPOSING VOICES]
ROSS KNEPPER: OK, so creepy is a pretty strong sentiment here.
[END PLAYBACK]
It turns out there's a reason for that. So there's this phenomenon called the uncanny valley, which is, as a device or robot approaches human likeness, first our sense of familiarity or a sense of liking it increases.
But then you get to a point where it's almost human. And there's this precipitous drop called the uncanny valley, where we really feel revulsion if something is almost human. Nobody knows for sure why this happens. There's a theory that something that's almost human evokes the idea of death, so zombies. A too-realistic humanoid robot-- that's definitely what we were seeing.
So there's a danger, actually, that if something appears too humanlike-- we don't have the ability to make robots that look exactly like healthy people yet. So the best attempt that we have is going to be down in here somewhere. Question?
AUDIENCE: Does this hold true for other types of shapes? So if it looked like a dog or animal [INAUDIBLE]?
ROSS KNEPPER: Does the uncanny valley apply to dogs as well? Yes, I think it does but not to the same degree. That's a good question. But what this is really telling us is that looking very humanlike is not the key to being highly anthropomorphic. You can be over here and be very successful. Or in the case of my robot, you can be over here and be very successful at anthropomorphizing. So human likeness is really not everything.
So the key here is that, when humans interact with robots, they construct mental models that describe that robot's behavior. These are based on both observations and appearance. And we make judgments according to the models that we build. So we try to predict how that robot is going to respond in the future.
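As a toy illustration of what such a mental model might amount to (my own sketch, not a model from the talk or from the studies it cites), you can think of it as a table of observed situation-action pairs used to predict the most likely next action. All the situation and action names below are made up, and appearance would normally add a prior on top of pure observation.

```python
from collections import Counter, defaultdict

class MentalModel:
    def __init__(self):
        # situation -> counts of actions observed in that situation
        self.observations = defaultdict(Counter)

    def observe(self, situation, action):
        """Record one observed (situation, action) pair."""
        self.observations[situation][action] += 1

    def predict(self, situation, default="unknown"):
        """Predict the most frequently observed action for a situation."""
        counts = self.observations.get(situation)
        if not counts:
            return default  # no observations yet: we can't judge
        return counts.most_common(1)[0][0]

# Example: after watching a Roomba-like robot for a while...
model = MentalModel()
model.observe("hits wall", "turns away")
model.observe("hits wall", "turns away")
model.observe("low battery", "returns to dock")

print(model.predict("hits wall"))    # "turns away"
print(model.predict("sees stairs"))  # "unknown" -- outside the model's scope
```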
Now, emotional attitudes towards robots become really important when you realize how much people start to bond with robots over time. This robot is the AIBO. It was built by Sony for about 10 years. It was discontinued in 2006. And the last service warranty support, et cetera, was discontinued in about 2012.
And users reported feeling a sense of grief when they had to retire their AIBO, when their AIBO no longer functioned and they could no longer get a replacement part for it. So they actually went through a grieving process as if they lost a real life dog. So that tells you something about the impact that these robots can have on people's lives, especially when you think about them being a part of your life for 10 years or more.
So this is a video I found on YouTube. This was made by some lady. And it's just looking at her two robot dogs. And you listen to the way that she talks to the dogs. One understands Japanese. One understands English. So she talks to these dogs a lot like how you might talk to a real dog. Of course, they play music, which is a little different than real dogs. But they appear to have some life to them, if you will.
They're not fully predictable. So their mental models don't tell you exactly the sequence of things that they're going to do. But the mental model does describe the scope or the space of things that they might do. And in a sense, that level of unpredictability is what makes them charming because it's reassuring but not alien.
And I think that's something about real pets that really stimulates us because there's the sense of we don't quite know what they're going to do. But we trust them. We know that they love us, that, at the end of the day, we're there for each other.
So this is another robot. This is a therapy robot. It's a robotic fur seal pup. And this is made in Japan. You can buy one for under $4,000.
[LAUGHTER]
You know. It's made to be used with old people in retirement homes or in places where, maybe, they don't have a lot of human interaction or as much as they used to. And this is a really brilliant design. How many people have ever petted a real fur seal pup? Nobody, OK.
So if it was a kitten, everybody here probably has touched a kitten and interacted with one. And you have a lot of preconceived notions of what a kitten is supposed to behave like. So your mental model of kittens is very strong. This guy, you expect it to be cute. By the way, the pacifier is the charger. So that's how you recharge it.
[LAUGHTER]
So it's feeding right now. So it's complex enough to be interesting. It responds to touch. It responds to being held and handled. So it meets a lot of the basic definitions of what you expect a mammal to do, a baby mammal. But it can't miss on the details, the way it could if it were a kitten that didn't purr quite right, because you don't have that mental model.
So this turned out to be wildly successful. People love this thing. And honestly, it is adorable. I would buy one if I had a spare $4,000. So certain robots can be very effective at building emotional bonds with people. And this is actually where it starts to get interesting because, essentially, what we're doing is creating trust.
And trust is a fragile thing because, if you use it right, it's very powerful. We can be a team. We can be united and be very effective together. But it's very easy to misuse trust, sometimes accidentally.
So this is a project from Georgia Tech that came out last year. And the idea here is that they orient people to a building. And they're doing some task with this robot. And suddenly, the fire alarm goes off. And there's actual smoke and alarm. And it looks like a real fire, not just a staged thing, which it is.
And here's this robot with these glowing batons, saying, this way. And in that moment, when there's a fire alarm-- which, actually, there was a fire alarm in Gates this morning-- people follow this robot because it seems like an authority. And they trust it.
And interestingly, even if this robot is telling you to go into a dark room, people will do it. If it's a dead end, if there's no exit and they have to climb over a sofa, they will go into that room.
[LAUGHTER]
So it's a good thing it wasn't a real fire. So this was an experiment. It was done in a controlled environment, just to show that it's possible for people to over-trust robots. But here it wasn't done to cause harm. But you can imagine, if the used car salesman looked less like that guy and more like this guy, he could be a really smooth talker. And you might have no idea. Robots are potentially capable of building up trust in ways that people aren't.
So this is a new horizon. We've only seen stuff like this in movies so far. But people are starting, clearly, to experiment with the ways that robots can manipulate people's trust levels. So maybe someday people are going to start getting swindled by robot salesmen for profit, for somebody's profit, maybe not the robot's, but--
And then there's Ex Machina. Has anyone here seen this movie? Not everybody. So I won't spoil it. But you should definitely go to see this movie if you're interested in the subject of this talk because the movie is all about trust and building up emotional relationships and what can you really expect from a robot. So very, very interesting thought provoking movie.
So this is another topic. I put this in my abstract. I promised I would talk a little bit about sex robots. So clearly, another kind of emotional attachment that people form in their lives is these very intimate emotional attachments. And of course, there are already proxies for humans that you can use in the sexual domain. There are life sized dolls that you can buy and so on.
But somehow, robots are a little bit different. So people have done surveys actually. There was this survey by Scheutz and Arnold that came out last year where they actually asked people all around the US what they thought about sex robots. Basically, what are their expectations?
So firstly, it was clear that men are much more open to the idea of sex robots than women, about twice as open to it, actually, overall. Maybe that's not too surprising. Interesting one, though: millennials, specifically, were less likely to see sex robots as an appropriate replacement for prostitutes. So it may be because people have different ideas about prostitution or maybe about sex robots. That wasn't clear from the study.
But there's some real thought provoking stuff in here. People see sex robots as being much more like masturbation than like being with another human. And it's not clear whether that is sort of an excuse that they're creating in order to allow this behavior because maybe they feel-- some part of them says this is unethical. We shouldn't have sex robots. But they want to justify it. It's unclear. We don't know.
They specifically asked about different aspects of sex robots, what properties should they have. Overall, respondents were most in favor of robots that were specifically designed to satisfy sexual desire, that moved by themselves, and can be instructed. So this actually sounds a lot like a sex slave, honestly. Maybe that's what people think sex robots should be.
Overall, they were least in favor of robots that have feelings, that can take initiative, that recognize human emotions. They don't want sex robots doing those things. But also, there was a big discrepancy between males and females. So females were much more likely to support the idea that sex robots should be able to see, hear, understand language. And maybe that's just what they expect from people. I don't know. Question?
AUDIENCE: Are vibrators considered sex robots in this context?
ROSS KNEPPER: So the researchers deliberately did not define sex or sex robot or any of that stuff because they didn't want to limit the scope. The question was, are vibrators considered sex robots? So it's up to you as a respondent to decide whether they are or not. But I don't think they fulfill most of these. Anyway, so there were more-- Oh, question?
AUDIENCE: So there are already questions about ethics, in a sense, like getting the consent of the sex robot--
ROSS KNEPPER: Do you need-- Yeah, I mean--
AUDIENCE: And there's some TV series-- it's a famous name. I can't remember. There's a simulator [INAUDIBLE] and you go and you can sexually--
AUDIENCE: West World.
AUDIENCE: --assault these women who are robots.
ROSS KNEPPER: Right, so does no mean no when it comes to robots? Maybe some people specifically want a robot that says no means yes. I don't know. I think that's creepy personally. But it's a robot. It doesn't have feelings. People don't want it to have feelings, so--
AUDIENCE: Something still seems off about it ethically as well.
ROSS KNEPPER: Yes, and I'm actually going to make an argument about why it seems off. OK, so what do people think are appropriate uses for sex robots? The most appropriate uses that people suggested were as a replacement for a prostitute or for somebody who has an STD to avoid spreading it. So if you're going to have sex robots, these are what they should be for.
The least appropriate would be to practice abstinence or to give to a sex offender. So there's some idea that a sex robot would be cheating, breaking the rules somehow. And we can't have that. So should sex robots, with the form of blank, be allowed?
People are most in favor overall of adult humans and fantasy creature forms so one that meets conventional norms and one that, actually, violates conventional norms. So that's kind of interesting. They were least in favor of sex robots that look like human children or one's family member.
I'll say that men and women agreed on the human child. Nobody wants human children sex robots. If there's one redeeming thing from this study, that was it.
[LAUGHTER]
Although, interestingly, men were significantly more likely to think that one's family member should be allowed as a sex robot. Take what you will from that.
[LAUGHTER]
And then there were a lot of sort of questionable or in-between ones, where the support was mixed. So do you want sex robots that look like celebrities or one's current partner or one's deceased partner or a friend who may not know that you have a sex robot that looks like them?
[LAUGHTER]
So a lot of really thought provoking things here. The survey does not attempt to answer any of these questions in terms of the morality of these kinds of things. I think one of the biggest takeaways, though, is that inherent in this question is the suggestion that you should be able to choose the form of your partner.
And we have that to some degree. But they'd better say yes. So you're limited in that way. And here there's no limit, right? It could be anyone. And even if it's a friend who is not interested in that kind of relationship, well, now you can have it. So that, I think, people may find a little uncomfortable.
So a broader question about robots is dignity, the quality or state of being worthy, honored, or esteemed. And I think this is a really open question. Should robots have dignity? Should we give robots the same dignity that human beings have? And I think there's arguments in favor. And there's arguments against.
The arguments against are they work for us. And what's the point of building them if we're just going to set them free and give them free will and all of that? So the interesting one, I think, is the argument that robots should be treated with dignity. So maybe someday they'll actually strike for dignity.
So I'm going to start actually by giving you a few detailed insights into how we are hardwired not to treat robots with dignity. So this is a famous test that's been done a number of times before. It's visual perspective taking.
So what number does everyone see here? It's a nine. So what happens when you add a person to the scene? Most people are still going to say it's a nine. What if the person is now interacting with the number?
It starts to become ambiguous. So maybe it's a nine, or maybe it's a six. And here, where he's interacting with gesture as well as gaze, when I look at this, I actually see a six. I don't see a nine. I don't know about all of you.
But if you do, if you see a six, you're doing visual perspective taking. Your brain is automatically looking at it from his perspective instead of your own. That's a very powerful cue because it says this person has dignity. It's worth considering his perspective.
And I think you know where I'm going with this, which is they repeated the same experiment with robots. So what if you have a robot in the scene as opposed to some inanimate object? What if the robot is looking at the number? What if the robot is interacting with the number?
And again, I don't know about all of you. But when I look at this scene, I see a nine. I don't see a six. So there's something hardwired about the brain that is inclined to see humans differently than robots. Question?
AUDIENCE: Do you know what would happen if that were a dog?
ROSS KNEPPER: What would happen if that were a dog? I do not. And I'm not a psychologist. So I'm not qualified to say. I guess that since dogs don't read, you probably would not take their perspective when it comes to reading. But that's pure speculation on my part. Yeah, interesting question.
So these are actually the results. People do take the robot's perspective to some degree but not nearly as much as they take the human's perspective. So there's a clear difference here. And that's something we need to keep in mind when we're designing robots: people who interact with those robots are not going to take the robot's perspective as readily as they might take the human's perspective.
A similar experiment was done with empathy. So if you look at these scenes, does this make anyone wince, seeing someone about to cut into their finger? Yeah, it's kind of uncomfortable. So they did an EEG, an electroencephalogram, of human subjects looking at one picture at a time. And for each one, they wanted to see, are the empathetic nerves in the brain firing? And you can see a signal difference, for example, between the human who's not going to cut themselves and the human who is going to cut themselves.
And the research question is, what about a robot in the same two scenes? And the results are actually pretty similar to what you saw before. So there is some empathetic nerve firing for the robot that's going to cut itself, even though the robot may not hurt when it cuts itself. We see that. And we automatically-- to a limited degree, we associate ourselves with that robot. You put your finger in the place of the robot's but not completely. So we see that same kind of gap. Question?
AUDIENCE: So I think there's a factor there of, if I look at a robot hand and then I see it's, maybe, made of metal, I might not even think that it trying to cut through it would even damage it. Whether or not it feels pain after that [INAUDIBLE]. Maybe that's a--
ROSS KNEPPER: Right, so the question is, maybe people are saying it's made of metal. It doesn't hurt. It doesn't damage it. I think that the nerves that are firing actually are not at that level. I think it's just you see something that resembles a finger, and you automatically sort of wince a little bit.
But again, I'm not qualified. That's just my opinion. Excuse me. Now, a more interesting and more complicated example is keeping a secret. So if I tell one of you a secret, you might feel some social pressure to keep my secret and not betray me later.
So they looked at this question for a robot giving a tour. So this robot gave a tour of a lab. And then at the end, it says, by the way, I don't want to show you this one part. Please don't tell the examiner.
And they did the same thing with a very social robot that was doing small talk, telling you about itself, asking questions. They did it with a more basic robot, which is more like Ikea bot, that cannot really understand English and you had to tap on a touchpad to move from one tour event to the next. And then there was also a baseline condition with a human in place of the robot.
So here's what it looked like-- I'll turn the sound back on. Here's what it looked like when the robot is confiding in the human.
[VIDEO PLAYBACK]
- OK, so I showed you nearly all the items on our tour. [INAUDIBLE], could I ask you a small favor?
- Yes.
- If I tell you something, do you think you could keep it between us?
- Probably.
- The thing is, there is an additional item I am supposed to share with you. At [INAUDIBLE], we used another research study. But I would like to just skip it. The thing is, I really don't like aquariums. I'm always concerned about getting too close. It really creeps me out. I don't even like to give a tutorial about it. Please don't tell Katie that I skipped part of the tour. I would not want the others to think too poorly of me.
- OK.
- I really appreciate it.
- And did you see [INAUDIBLE] that was on the website, the [INAUDIBLE], the aquarium?
- Yes, we saw everything.
- And then we all had a really good time.
- We did.
[END PLAYBACK]
[LAUGHTER]
ROSS KNEPPER: So we have a robot that has a human failing, which is it gets creeped out by an exhibit. It doesn't want to show you. But it doesn't want to look bad. So it cares about its reputation. And it asks the human to keep a secret.
And what we saw there was the human keeping the robot's secret in a very careful way. She worded it so as not to be explicitly lying. But she was clearly evading the truth. And so that's a pretty interesting effect, that robots have this powerful enough social bond with a human that the human is willing to commit a moral lapse on behalf of the robot. Question?
AUDIENCE: I don't know how the experiment was set up. But I'm assuming the subject knows that the person asking the question knows exactly what the robot actually did. So--
ROSS KNEPPER: The user followed the robot around.
AUDIENCE: --robot was lying. She knows that the robot is programmed to lie and this an experiment to figure out whether or not she's going to--
ROSS KNEPPER: OK, so you know that this is a psychology study. So you're assuming that they must be testing the robot lying. And so you're doing sort of a game theoretic thing where you're thinking, well, the experimenter must know that I know that I'm lying.
AUDIENCE: She definitely doesn't believe that the robot is actually freaked out by an aquarium, does she?
ROSS KNEPPER: I don't know.
[LAUGHTER]
That's a-- robots can be creeped out by things. Why not? I mean, if I was a robot, if that thing breaks or spills, you're done. So I think it's plausible that the robot could be creeped out by a bunch of water. I find that, in these kinds of experiments, people really don't want to mess up your experiment. So it's very unusual that somebody is going to come out and say, oh, I know this is all just a game, and you want me to say thus and such.
But that would sort of mess up the experiment. They want to do it right. They're getting paid to do it right. And it's exciting. It's this cool thing. So they want to play through the scenario like it's supposed to play out. So I think there's a lot of solid science behind the technique that they're using. Question?
AUDIENCE: Have you seen any difference between studies that use robots that seem male and robots that seem female? I noticed that the robot has a female kind of voice.
ROSS KNEPPER: Right, so do people treat male and female robots differently? There is a lot of work on that. It is highly domain dependent. So last week, we heard about a computer voice that was male because truckers don't respect female voices as much as male ones. So it tends to be more specific than just an overall preference. I've heard that Air Force pilots prefer a female voice in the cockpit rather than male. So it goes both ways. Other questions?
OK, so here's the results. So I told you there's three conditions. This is the human tour guide, the socially engaging robot, and then the rudimentary robot, where you're just clicking through with a touchpad. And the three bars here are: right when the robot asks, will you keep my secret, does the person promise to keep the secret?
Do they keep the secret when the robot is standing there and the experimenter asks, did you see everything? And then later on, the robot leaves. And the experimenter might ask again. Or oftentimes, actually, people would just volunteer as soon as the robot left. Oh, by the way, the robot did not do everything. I was lying to protect it. So people actually came out and said that but only after the robot left.
[LAUGHTER]
Right, so what that tells me is that the subject did not want the robot to think badly of them.
[LAUGHTER]
Now--
AUDIENCE: [INAUDIBLE] Sorry. We're also used to science fiction movies, where robots are [INAUDIBLE].
ROSS KNEPPER: Yeah, so we're used to science fiction where robots are more like us. That's true. And it's hard to know what people are using as their mental model for experiments like this. In truth, the robot was actually controlled by a Wizard of Oz, a remote person that you couldn't see. So it is truly a human. Question?
AUDIENCE: I was just going to ask, do you think there's anyone that, I guess, exposed that they lied the first time while the robot was there for the benefit of the robot, like to tell the robot's boss that it's afraid of water? [INAUDIBLE]
ROSS KNEPPER: I see. So this is a separate question. So first question is, is it moral to keep a robot's secret? The second one is, are you actually doing better for the robot by not letting it continue its ruse because you want everything out in the open?
So the study wasn't specifically looking at why people betrayed it or didn't. I'm sure that people had different reasons. And that's a perfectly good one. But the way that you should read this is that a certain number of people who promised to keep the robot's secret actually betrayed it in front of the robot. But a much larger number betrayed the robot after the robot left.
And there's no significant difference, it turns out, between the human and the socially engaging robot. So the bars don't look exactly the same. But it's just some margin of error due to the random factor.
What was a significant difference is, after the robot leaves, people are much more willing to betray a robot than a human. And that's actually the point that I wanted to make here, is that, much in the same way as people aren't doing visual perspective taking, they're apparently not doing another kind of perspective taking to the same degree, which is seeing it from the robot's view of not wanting to get in trouble or look bad. And of course, people just didn't keep the rudimentary robot's confidence at all because it never socially engaged.
It's just a machine. It's like a toaster. So people don't feel any reason to keep its secret. So this is a robot that some of you might have heard of before. Big Dog was built by Boston Dynamics.
We don't really need the buzzing noise. That's a generator. It's running hydraulically. So this robot came out about 10 years or so ago. And it was a big deal.
It was built for the military to transport heavy loads, cargo, through rough terrain that you can't move regular vehicles through. So in that regard, it's a very promising project. But this happened.
So they wanted to show off: oh, look how robust this thing is. I can kick it. And it doesn't fall over. It kind of recovers. It almost looks like an animal recovering when you kick it.
So there's this anthropomorphic effect that you get, I think, actually by watching it. And then it slips on ice. Does anyone feel bad for it? I felt bad for it the first time I saw it.
It's pretty amazing actually when you look at this thing. It's reacting. It's not planning. It's just reacting, trying to keep itself on its feet. It looks a lot like a deer, maybe, with these long gangly legs. And eventually, it recovers. And everything's fine.
So this changed a lot about robotics just because it's this very robust four legged system. But I think it also had an impact on how people look at their relationship to robotics. And actually, this trend of kicking robots continued in Boston Dynamics' videos.
So they do something like that, actually, in just about every video they release now, whether it's a biped or a quadruped. Do you feel bad for that robot? What about this robot?
So I told you before that people were grieving when they had to retire the AIBO. But you can imagine, somebody comes home from a bad day. And the robot's sitting there. And it's just doing its dumb little thing.
You might think it would relieve a lot of stress. It might feel good to kick this thing across the room. And there's no harm. It's not a real animal. So why not? Question?
AUDIENCE: So in Stanley Milgram's famous unethical experiments, people giving false electric shocks to confederates, part of the damage was to the people who gave the shocks, not knowing that the other people were confederates. How does that relate to this issue?
ROSS KNEPPER: OK, so the Milgram experiment, people are told to push a button. And when they do, they hear somebody screaming in the other room. And it appears that they've just shocked that person. I think these were in the days before IRB. So they didn't have to ask permission to do this thing. It turned out that people were emotionally scarred by the act of seemingly harming people.
And so the question I think is, are people emotionally scarred by abusing robots? I don't know the answer to that question because we can't perform this experiment now; it might cause harm to people. But I think it's something I want you all to think about because it might seem like a good idea at the time to kick AIBO across the room.
But then you're going to see it lying in pieces. You might feel bad for it. Or you might think that you've emotionally scarred your robot somehow. But another thing to think about is, if you get used to abusing robot animals, what happens when you see a real animal and you've been conditioned to this response, that when you see something little and four-legged, it's OK to kick it?
So this is the slippery slope argument, which some people will say that's not a valid argument. But I think conditioning actually is a valid argument. You can get into habits of thought that are damaging. Question?
AUDIENCE: Is there an advantage to the army using that robot instead of a burro or [INAUDIBLE]?
ROSS KNEPPER: Oh, an animal burro? Burros need to rest. They need food. You need to clean up after them. So I think there's a lot of advantages of a robot mule over a live animal. Yeah.
AUDIENCE: In Boston Dynamics' videos, is it always the same employee who kicked the [INAUDIBLE]?
[LAUGHTER]
ROSS KNEPPER: I don't think so. I think a lot of them get the chance to kick the robot. And maybe they relish the opportunity to kick the robot. So a real-life example: this is Hitchbot, which successfully hitchhiked across Canada. And then in the US, it only made it as far as Philadelphia before it got destroyed.
[LAUGHTER]
They never caught the perpetrator. But it's really sad to think about. This is such a beautiful thing. People have fun with it. It's a sort of experiment in humanity. And we failed.
[LAUGHTER]
But maybe this person who did this was already troubled. I don't think this was-- Pardon?
AUDIENCE: Or [INAUDIBLE].
ROSS KNEPPER: But yeah, we don't want to get into the habit of destroying intimate things because I think that sends the wrong signal because robots are people, too. And maybe we'll end up someday enslaving another race. I think science fiction has dealt with this idea, that robots are another race. And there's this two tiered system.
And if there's one thing that the American experiment has proven, it's that having a two tiered system is not a viable option. It has caused centuries of scarring for us. So are we creating the next instance of slavery? I don't know. It's worth thinking about.
So I should wrap up. So autonomy and embodiment can cause strong anthropomorphic feelings about robots, much more so than software AIs can in most cases. Complex emotional entanglements are certainly possible. And robots or their programmers can exploit emotions in order to manipulate people, to buy cars, maybe much worse. How we treat robots mirrors how we treat each other. These are not separable issues.
And finally, just some food for thought, I think there is room in computer science for us to detect and correct some of these scenarios. So in an analogy to computer security, you could think of this as social security. So thank you very much.
[APPLAUSE]
SPEAKER: A few minutes for questions. And I'm sure there are lots. Do you want me to step in?
ROSS KNEPPER: Yeah, I can handle the questions. Yes?
AUDIENCE: This is the first time you mentioned manipulating people.
ROSS KNEPPER: Well, the used car salesmen.
AUDIENCE: The used-- oh, that's right, the used car salesmen as well. So for the data set for manipulating people, do you think some of that comes from, say, Siri and things like that, in terms of where they're getting the data? So for example, asking someone to lie is clearly manipulating something against morality, which is-- can people change their morality? And that's a power.
ROSS KNEPPER: Yes, that's exactly the point. That's the take home message.
AUDIENCE: So where did the data come from on best how to--
ROSS KNEPPER: On best how to manipulate people?
AUDIENCE: Yes.
ROSS KNEPPER: Well, that's psychology. The appearance of the emergency rescue robot with the glowing batons, and the whole thing is orange-- it's like somebody wearing an orange vest. It creates an air of authority. There's a whole area in psychology that studies authority and how people attribute authority and how they follow authority.
So the more that you can create this air of authority, you can order people around. You can also do something more like a confidence scam, where you build up an emotional bond. That's what we saw in the case where the robot was asking the person to lie for it. Basically, you're my friend. Can you do me this favor?
This was a harmless lie. But it could be, hey, can I borrow your watch? I'll give it back to you next week. It could be, I've got a real steal of a deal on a used car for you. It turns out it's stolen. Robots can probably do these things someday, not today, but maybe soon. Yeah.
AUDIENCE: I think most of these kinds of surveys are interesting to get a first impression. But in the end, they shouldn't have too much force on how we're going to go ahead and produce these robots because, of course, people are trainable. And these anthropomorphic traits come up because we don't know that much about robots. And that's why we go on things like appearance because, well, it's what we're used to. Well, of course, once you get more and more used to--
ROSS KNEPPER: So the argument is that, as people get used to robots, they'll learn the foibles of robots and learn how to correct for it. And I think people make the same argument about Facebook. And we just had this thing with fake news and maybe it stole the election and so on. And so I think it's really hard to be savvy about advanced technologies, especially when you put them in the hands of people who haven't been trained.
So I like to think that there's some amount of training that you can do. But in the long run, we have built-in algorithms for doing a lot of these kinds of things. It's called being social. And if robots can be social, they can convey a lot more information a lot more efficiently about how to do tasks as compared to training everything that you need to know.
AUDIENCE: It's just that there might be a mismatch because, if there's a danger, then we're going to focus on the behavior even though there's nothing real going on behind it. And there might be something else. For example, if you're a software guy who's actually technically been in trouble, and they don't recognize it because it doesn't behave that way. So if we're only going to rely on our human instincts in order to track the wrong things--
ROSS KNEPPER: Right, right, so I guess one way you can characterize this is a handful of studies, they seem alarmist. We shouldn't let it distract us from doing good engineering. And I do agree with that. But at the same time, we need to be aware of the possible consequences of our actions as engineers. And that applies to all engineers.
AUDIENCE: So we've seen the practical implications of the manufacturing of robots. Are there any current practical uses for [INAUDIBLE]-- how do you say it? Robots.
ROSS KNEPPER: Well, I bet the sex robots would get heavy sales. Tour guide robots-- those are already a thing. You can go to museums and take a tour from a robot. I think a lot of these applications just aren't quite ready.
So for example, there's a system you can buy for some number of tens of thousands of dollars that will cook meals for you in your kitchen completely autonomously. I don't think anyone's going to buy it because it's impractical. But it's a first step. So I think a lot of these markets are going to exist once we can realize the economy of scale that's going to make them affordable and practical to have in your home.
AUDIENCE: And are they in progress?
ROSS KNEPPER: Oh yeah, people are developing all kinds of technologies with the idea of someday going in your home. But at the same time, they're looking for the low hanging fruit. So self-driving cars turn out to be the low hanging fruit. It sounds like a hard problem. But actually, there's a lot of rules of the road that make self-driving cars much easier than, say, a robot that walks down the hallways. So yeah, engineers are shrewd. They want to make money. They want to invest their time where it's going to have the most impact.
AUDIENCE: So with the tour guide robot, you were talking about how one of the components was that the robot wanted to save face and didn't want to look bad to the, I guess, administrator. And so do you think that a component of dignity is how the robot treats itself and whether it treats itself with, I guess, self-respect and [INAUDIBLE] that's-- how does that factor into the anthropomorphic [INAUDIBLE]?
ROSS KNEPPER: OK, so I think the question is, when we're deciding whether to treat a robot with dignity, do we take into account whether it appears to treat itself as being dignified? And I think the answer is yes. Self-effacing people often get disrespected more than people that have a high opinion of themselves, at least up to a point.
I think people do-- we're building mental models. And we're doing it based purely on observation. We don't have telepathy. I don't know your true self. But I can judge things about you from my observations.
So we're constantly forming and refining these mental models and doing the best we can with what we have. So yeah, I think there's definitely a sort of level of confidence that robots need to have in order to be successful in the world. But again, the robot is putting you in a moral dilemma when it's asking you to lie for it. So that's not something a friend would do, maybe. Yeah.
AUDIENCE: So it seemed like you were saying these more human looking robots are coming. But self-driving cars, for example, were the low-hanging fruit. But based on the other talks we've heard, I don't know if that's exactly right because there is one where they were showing, from a sci-fi movie, a humanoid looking robot driving a car. And then we come up with a solution that the work around is not to have that humanoid looking robot and just make the car drive itself. And kind of with the shop, I could see a similar thing happening. You don't want a human looking robot making your food. You want a toaster that can fly the toaster over onto something that can fly right into a pan and make [INAUDIBLE] toast. You don't need all the human look with it.
ROSS KNEPPER: Right, so I--
AUDIENCE: [INAUDIBLE] the fact the that these humanoid robots are on the way.
ROSS KNEPPER: So I think, if I can try to summarize your question, is that the robot in your home is not going to look like Rosie. It's going to look like a whole bunch of different robots that do special purpose tasks, one to clean your floors, maybe one to clean your laundry, one for the dishes, one to make your bed.
And I think that's right. I think building something like Rosie is just going to be prohibitively expensive. It's going to cost a million dollars. And you're going to sell a handful of them every year. So there's probably not a market for that whereas there's already the Roomba and the Scooba and whatever the one is called that cleans your gutters. So people are kind of chipping away at the edges of this problem.
AUDIENCE: Yeah, but we countered that by there's a lot of work in assistive robots in like old age homes. Think about Japan where there's a rapidly aging population and not enough people to take care of them. And my guess is that, in old age homes, you might very well want anthropomorphic robots rather than special purpose bed-making robots or something like that or maybe both.
ROSS KNEPPER: Yeah, or PARO, whose appearance is clearly very important. So yeah, it depends on the market. But I think you were next.
AUDIENCE: So the first slide said they are taking away our jobs. In about, maybe, 50 years' time, maybe we are facing a new problem of a high unemployment rate.
ROSS KNEPPER: OK.
AUDIENCE: Are we ready for--
ROSS KNEPPER: So this is an hour long talk that I didn't have the chance to give today. Maybe if Joe lets me come back in two weeks, I could give this one. But I'll try to do it in one minute for you.
So the idea that robots are going to take away all our jobs and we'll be unemployed is very popular in the media these days. There's one future, which you see in the movie WALL-E, where nobody has to work because the robots take care of everything. That's a possibility.
But that's not the one that I believe. I believe that we're getting to a point where robots are advanced enough that nobody understands how to program them, how to control them. Robot interfaces are going to have to get much simpler to the point that ordinary people know how to use them. They're more intuitive.
That's what a lot of this research is really about, is the human interface to a robot. And if robots are more intuitive, then you can employ ordinary people to run a robot. And do companies need ordinary humans to run robots? The answer is definitely yes because robots do not have problem solving capabilities. They don't have fault correction capabilities.
There's a lot of kinds of creativity that people bring to these jobs that robots don't have. So I think, in the future, more and more we're going to see a synergy where there's humans and robots working closely together and doing very complementary things. And we're not to that equilibrium yet. And until we get to that equilibrium, there's going to be a lot of job displacement.
I don't doubt that. But I don't think the end game is all of us are replaced by robots. We just don't have the technology to do that. And I don't see it coming in the next few decades. Yeah?
AUDIENCE: Do you know why the AIBO was phased out?
ROSS KNEPPER: Why it was phased out?
AUDIENCE: Yeah.
ROSS KNEPPER: So AIBO was a $2,000 toy. I'm guessing it was phased out just because they weren't making enough money on it. I don't know exactly what their thinking was.
It was popularly known in robotics research because it was the platform for the standard platform league of RoboCup. And they replaced that with the NAO since then. But I think it's a shame. It seems like a nice toy.
SPEAKER: We used to have an introductory course, 114, that you could take instead of 110. So instead of introductory programming in Python, you would have introductory programming of a robot. And I think we used an AIBO platform, if I remember right.
ROSS KNEPPER: There's five AIBOs in my lab, which may very well be the leftovers from that course.
[LAUGHTER]
SPEAKER: Anyway, maybe this is a good time to thank the speaker.
ROSS KNEPPER: Thanks.
[APPLAUSE]
Although a robot is an autonomous, engineered machine, its appearance and behavior can trigger anthropomorphic impulses in people who work with it. We can develop unidirectional emotional bonds with robots, and there are indications that robots occupy a distinct moral status from humans, leading us to treat them without the same dignity afforded to a human being. Are emotional relationships with robots inevitable? How will they influence human behavior, given that robots do not reciprocate as humans would?
In this talk, Ross A. Knepper examines issues such as cruelty to robots, sex robots, and robots used for sales, guard or military duties. Knepper is an assistant professor in the Department of Computer Science at Cornell University whose research focuses on the theory, algorithms, and mechanisms of automated assembly. He is the recipient of a Young Investigator Award from AFOSR, and he received the Best Paper award at the Robotics: Science and Systems conference in 2014.