Excerpt from I Robot: Robots and Emotions (13:08 min)
Nora Young (NY): You can kind of see where it’s going. The clerk at the grocery store is just an image on the computer screen that remembers that you like green and you don’t like to be called by your first name. And you actually have that robot to clean parts of your house, just like the Jetsons always promised. Or you have a virtual life coach to help you reach your goals. These things are already almost here, and it sounds kind of cool. But that’s nothing compared to what science fiction promises. Remember that movie AI:
Prof. Hobby: I propose that we build a robot child who can love, a robot child who will genuinely love the parent or parents it imprints on, with a love that will never end.
Male Colleague 1: A child-substitute mecha.
Prof. Hobby: But a mecha with a mind, with neuronal feedback. You see, what I’m suggesting is that love will be the key by which they acquire a kind of subconscious, never before achieved, an inner world of metaphor, of intuition, of self-motivated reasoning, of dreams.
Male Colleague 2: How exactly do we pull this off?
Female Colleague: You know, it occurs to me, with all this animus existing against mechas today, it isn’t simply a question of creating a robot who can love. If a robot could genuinely love a person, what responsibility does that person hold toward that mecha in return? It’s a moral question, isn’t it?
Prof. Hobby: The oldest one of all. But in the beginning, didn’t God create Adam to love him?
[music]
NY: Robots may do more than just lift heavy stuff and crunch numbers: your relationship with them may be more intimate than you ever imagined. Here’s Alan Mackworth again, our robotics prof from UBC.
Alan Mackworth (AM): One of the most important areas, I think, will be in the area of robots that are your friend. Consider a senior who lives alone in the house, who wants to stay in the family home, but the kids may be off in other parts of North America, and you could imagine a robot that could interact with such a person in various ways. I mean, on a functional level, could help them get food from the fridge and vacuum the floor and so on – these things exist already. But also, at an emotional level, could understand the emotional state of the senior and could help the senior. If we had that kind of emotional understanding in robots, then I think we’d have much more useful companions. Our rational life is not all that we are. Emotional intelligence is key and if we are to have true companions, they’ll have to understand us and we’ll have to understand them in ways that we haven’t done yet. I think that’s, that’s where the true potential lies.
NY: Is it possible that an artificial intelligence system could have emotions?
AM: Clearly, a lot of what’s going on now is faking it. That’s what people usually mean when they say simulated emotions – it’s being faked. And I think, you know, the Kismet kind of work is really faking it. But I think we will, as we go forward, understand these connections between, say, rational, reflective behaviour and emotional behaviour. You know, there are emotions like happiness and joy and so on that guide us, and without them, we’d be without ultimate purpose, in a way. I mean, those are ways of knowing that we’re fulfilling our meaning in our lives. And so I think they enrich us enormously.
NY: Of course, academics debate all this stuff. I mean, can you encode a machine with all those nuanced responses we have from being born, from really being in the physical world, from having a soul? But, in terms of a rough approximation, Alan’s kind of right. I mean, you can see potential benefits from virtual agents and robots, especially if the alternative is loneliness. But maybe the answer to an alienated society isn’t creating a simulation of genuine human interaction. Maybe it’s just creating more actual human interaction. Are we really that lonely? Are we in danger of giving over too much of our humanity, or of using robots inappropriately?
Gigolo Joe: They hate us, you know. The humans. They’ll stop at nothing.
David: My mommy doesn’t hate me because I’m special and unique. Because there’s never been anyone like me before, ever. Mommy loves Martin [her biological son] because he is real and I am real. Mommy’s going to read to me and come to me in my bed and sing to me and listen to what I say, and she will cuddle with me and tell me every day, a hundred times a day, that she loves me.
Gigolo Joe: She loves what you do for her. But she does not love you, David. She cannot love you. You are neither flesh nor blood. You are not a dog or a cat or a canary. You were designed and built specific, like the rest of us. And you are alone now only because they tired of you, or replaced you with a younger model, or were displeased with something you said, or broke. They made us too smart, too quick and too many. We are suffering for the mistakes they made because, when the end comes, all that will be left is us. That’s why they hate us, and that’s why you must stay here, with me.
AM: Marvin Minsky used to say he hoped that robots would keep us around, if only as pets. So, I think, yeah, we’d better breed some ethical responsibilities into them. I started in this area partly because I read Isaac Asimov’s I, Robot, with the Three Laws of Robotics, which essentially are ethical and moral laws, saying that robots should not harm humans, they should obey humans, and they should protect themselves, but arranged in a priority order, so not harming humans was the highest priority. And I think that, as designers of these systems, we do have a very strong responsibility to, first of all, create ways in which we could express ethical behaviour, so in the architectures for these systems we have to design in those kinds of ethical safeguards, so that they can be appropriately enforced. And I think, you know, just as we have done with all our technologies, we put limits on them. You know, for cars, we put in stoplights, we have traffic rules and all the rest of it. These are social conventions that are absolutely essential to prevent carnage. And I think we are going to have to evolve, fairly quickly, social conventions and codes for the designers and owners of robots, and enforce them with laws, if necessary.
NY: Ever since Frankenstein first electrified his stitched-together creation, we’ve had that dream – to create an artificial being. Of course, we don’t leave them to run across the moors by themselves anymore. We’re more responsible now. We think about how to use them wisely and safely. But we don’t think about how their very existence changes us. I know a guy who thinks about these things. Mark Morley lectures at the Centre for Society, Technology and Values at the University of Waterloo. But I met him at a church downtown.
Mark Morley (MM): I think of how, say, in the service industry in a store 30 years from now, I might encounter a humanoid robot there at the cashier. And if it’s programmed well, it’ll recognize me when I come back and it’ll ask me questions based on my interests and previous purchases, in much the same way that salespeople do now. I think the difference will be that, in the world as it is today, there’s at least the potential to move from the imitation or the simulation of rapport to, at some point, actually developing a relationship or friendship. So, at some point, we might be able to step out of that store and pursue our own interests as friends. That’s not going to happen with the robot.
NY: Yeah, that’s the problem, I guess: most of the time, we act like robots and we might as well be interacting with AI, because, you know, we follow scripts and we obey accepted conventions. But I guess I’m wondering, too, how the presence of this sociable AI affects what we think of as distinctively human. And is there something distinctively human that’ll still be there in the future?
MM: It would seem not. It would seem that, going right back to the Industrial Revolution, we’ve had machines replacing labourers, then artificial intelligence, machines replacing our intellect and knowledge, so to speak, and now we’ve got computers, machines, simulating our emotions – you might think, well, what’s left? Well, my sense is that we as human beings are unique, I believe, in our ability or our capacity to be still. To hear silence. I think that, looking at a machine, a machine is always waiting for input, and for a machine there will only be information and noise. As human beings, if we come to understand ourselves as different from those machines, we can come to a point where something in us, maybe you call it the spiritual – something in us opens up to stillness, opens up to silence, in a way that I don’t think any machine would be capable of embodying or situating itself in such a way as to experience.
NY: What robots present us with is the challenge to be human and if science fiction, like AI, is anything to go by, we’re not up to it.
Lord Johnson-Johnson: Do not be fooled by the artistry of this creation. No doubt there was talent in the crafting of this simulator. Yet with the very first strike, you’ll see the feeble lie come apart before your very eyes.
David: Don’t burn me. Don’t burn me. I’m not Pinocchio. Don’t make me die. I’m David. I’m David. I’m David.
Voices in the crowd: Mechas don’t plead for their lives. Who is that? He looks like a boy.
Lord Johnson-Johnson: Built like a boy to disarm us. See how they try to imitate our emotions now.
David: I’m David.
Lord Johnson-Johnson: Whatever performance this sim puts on, remember: we are only demolishing artificiality.
MM: I think we will commonly see these as devices, as objects, as property. But I think where this becomes a little more interesting, and a little more challenging ethically, is when we would expect to treat these machines differently than we treat other human beings. And what concerns me about that is not so much how acknowledging that this robot is a slave of mine, in a sense, gives me the rights and privileges of a slave owner to do whatever I want with it. I think it challenges us to look at the other side. When we think of slavery, we may sympathize with the emancipation of slaves because we think of those slaves as humans. But here’s where we might start thinking about what it means to be in a master-slave relationship. We might think it’s better to be the master and have all these robots as our slaves, but from an ethical position we have to start asking, well, is that the way I want to relate to the world, even though these are machines? Do we want to be masters? And I would say no, I don’t want to have machines as my slaves.
NY: I know this might sound kind of silly, but it reminds me of my feelings about my cat, about caring for something more because it can choose to care for you. Or not. I’d like to think that AI offers us the potential to resuscitate what’s distinctively human, but I’m not as optimistic as Mark. What do you think about AI and emotions? Possible? Practical, as long as it’s controlled? Scares the boots off you? Let us know, at NEXT at CBC dot ca.
NEXT was made by Allison Moss, Joe Mahoney and me, Nora Young.
Original broadcast July 9th, 2004 on CBC Radio One
AI images courtesy of DreamWorks Pictures and Warner Bros. Pictures
I Robot images courtesy of Twentieth Century Fox