About Me

Australian philosopher, literary critic, legal scholar, and professional writer. Based in Newcastle, NSW. My latest books are THE TYRANNY OF OPINION: CONFORMITY AND THE FUTURE OF LIBERALISM (2019); AT THE DAWN OF A GREAT TRANSITION: THE QUESTION OF RADICAL ENHANCEMENT (2021); and HOW WE BECAME POST-LIBERAL: THE RISE AND FALL OF TOLERATION (2024).

Monday, January 17, 2011

Do robots love and suffer if they say they do?

In Love and Sex with Robots, David Levy is very sanguine about the idea that we may, in the near future, not only use lifelike robots as sex toys but actually fall in love with them - quite literally. Now I suppose it's possible that we could create robots that are just as much conscious, intelligent beings as we are, and if so I don't see any obvious reason why they couldn't be appropriate objects of love and partners in sex. I don't think, for example, that all sex should be procreative. There may or may not be other reasons to avoid creating conscious artificial beings - hey, we've all seen the Terminator movies (some of us ... ahem ... have even written material for the Terminator franchise) - but the inappropriateness of falling in love with conscious robots isn't one of them. Well, at least it's not high on my list.

Moreover, I'm disinclined to see carbon as a magic element. Assuming, as I do, that consciousness supervenes causally on the functioning of certain complex material things, such as human brains (and, I don't doubt, the brains of many other animals), there's still a question as to exactly what is required for consciousness. I'm betting that it will have to do with structure and functioning rather than the actual material involved, so, in principle, there could be conscious things made of other sorts of stuff than carbon compounds. If so, it seems possible in principle, however difficult it may be in practice, to make robots that are conscious, even though they will be made of different materials from our brains. (Once again, forget for the moment whether it's a good idea all things considered.)

The problem is, Levy isn't talking about robots of such intricacy as to match our brains and to possess whatever sort of structure and functioning we eventually decide (though how?) is necessary for consciousness. The robots he's talking about will pass the Turing Test for practical purposes - they will be programmed to profess feelings, including feelings of love, with such plausibility that it will be natural for ordinary people to believe them. Fine, I don't necessarily care if people are willingly deceiving themselves ... though it can become a complicated argument. But I'm aghast at the idea that we should accept that the feelings are real as long as they seem so.

Again, it might turn out that we can't create a robot that can pass the Turing Test, administered by someone truly expert, without that robot actually having the consciousness it professes to have (it will need to talk about its feelings and experiences if it's probed by even a moderately sophisticated interlocutor). That might turn out to be correct, I suppose, because a robot that can maintain a probing conversation with an expert interlocutor will have to be very sophisticated and complex indeed. It will need to use language in highly novel and complex ways, and to discuss deep emotional responses to things that it can't know about in advance.

But Levy isn't talking about a machine that has some kind of structural equivalent to our tens of billions of neurons, connected intricately. From his description, he could be talking about something much simpler. Programmers will be able to use other methods to fool us, taking advantage of our tendencies to anthropomorphise. In that case, we could end up creating robots that are not conscious at all but give a very good impression of actually having the feelings they report. They might not stand up to interrogation by an expert, but they might sound sincere to the rest of us. After all, think how uncommunicative real people can often be. Surely uber-skilled programmers of love-and-sex robots could do better than that.

The mere ability of future programmers to fool us, in relevant situations, that we're dealing with something that has real feelings does not entail that the feelings are really present, and I'm somewhat surprised at the suggestion that we will and should blithely act as if it does. If I'm right on this, a lot of Levy's book is far more problematic than he acknowledges or apparently realises. He's suggesting we fall in love with things that give a plausible appearance of loving us back but have no actual feelings for us or anything else. And when it's put that way, I doubt that it's what many of us want.

18 comments:

Brian said...

"He's suggesting we fall in love with things that give a plausible appearance of loving us back but have no actual feelings for us or anything else. And when it's put that way, I doubt that it's what many of us want."

Some people have pet cats. I'm not sure why.

Others keep animals that maintain a much more plausible appearance of loving, such as goldfish and tarantulas.

Jonathan Meddings said...

Sounds like every relationship I have ever been in, though I am afraid to say I think I was the robot with no real concept of love.

Greywizard said...

Ya know, I don't think it'd be very difficult to create toys that we would treat as if they were conscious. I find it hard to disappoint my GPS by going another way, and when "she" says, "Make a U-turn", to simply ignore her and keep on going straight ahead. A moment or two later "she's" madly calculating an alternate route, and even sounds disappointed when I don't follow "her" directions! So, it wouldn't be at all hard to fool me, at any rate.

Russell Blackford said...

I have the same problem with my GPS - at some level I really don't want to disappoint "Karen" and make "her" have to start "recalculating", even when I know "she's" wrong. I think this is probably pretty common.

March Hare said...

Not to be overly solipsistic, but isn't this what we do with other people?

I disagree with you about the programmers: I think it will be trivially easy to create a machine that can fool a human, compared to creating one that actually has emotions. It will simply have access to a large database of human responses to questions and pick the one that appears the best fit (isn't that what chess-playing computers do?). Surely there can be no question that hasn't been posed, hence as long as the machine has some form of consistency in its answers we'd be fooled into thinking the parroting of human responses was an actual emotional response from the machine.

PS. Cherry 2000

SarahPH said...

So if I understand correctly, and we're talking about hypothetical robots that would give emotional responses to various stimuli in a way that's basically indistinguishable from what would be expected of a human... Presumably their behavior algorithm is doing something similar to what our brains do. Maybe it's doing it in a more straightforward way, and any sufficiently skilled programmer who looks at the source code could understand why it works the way it does. But I'm not sure I understand why that makes the emotion less authentic, unless the robot is knowingly deceiving you or something.

Svlad Cjelli said...

GPS? I've treated completely passive objects, if not like people, at least like pets.

I wonder how expert an interlocutor would have to be before a successful "trickery" would necessarily be genuine. I.e. what's the highest expertise that can still be "fooled" by an actual human, I suppose.

Svlad Cjelli said...

*Or "on what basis do we treat eachother as genuine?"

ANTIcarrot said...

Very flawed premise. Small children, the mentally disabled, and animals do not communicate in complex and deeply meaningful ways. Does this mean their feelings and emotions are somehow fake?

A loving, affectionate sex toy doesn't necessarily need complex language skills. You might argue that a dumb or mute sex toy makes it pedophilia or bestiality, but that's an emotive argument (with a tinge of No True Scotsman), not a rational one.

Russell Blackford said...

To be fair, I suppose he's saying that being able to "communicate" "feelings" plausibly is sufficient for having the feelings or inner experience, rather than necessary. But yes, the "communication" and the inner experience do seem to come apart easily.

Russell Blackford said...

Or perhaps I misunderstand your point. I'm not sure who you're criticising.

Tom Clark said...

Seems like there will always be indeterminate, undecidable cases of when feelings are present, since if a system is somewhat simpler than we are, how can we know whether or not it is a conscious subject? Presumably its avowals of being such a subject will sound as sincere to us as those produced by a system that we're sure is conscious. The only safe policy is to be liberal and assume the system *is* conscious.

What we'd like is a clear cut-off point of complexity below which we know for sure consciousness isn't present, but that may not be forthcoming. In which case we can think of people who attribute conscious states to fairly simple systems as simply playing it very safe. Is there any harm in this? Perhaps our gullibility about sentience in the presence of convincing behavior ends up being good policy.

Theo Bromine said...

What is the qualitative difference between an artificial being's mechanistic programmed response (based on the robot's hardware, initial software, and "on-the-job learning") and my own mechanistic response determined by my current and past physiology and experiences?

Brian said...

Most people can't distinguish between a dog treating them as the pack leader or as a resource they guard, and interpret both as love.

Mer Almagro said...

Most people can't distinguish between a person treating them as the pack leader or as a resource they guard, and interpret both as love.

Why do we think we are so special? Even though I know I am not special, I still feel I am.
We may be more complicated than other mammals (or...), and very good at fooling ourselves, but the basic mechanism is exactly the same.

Svlad Cjelli said...

Oh, I knew this vaguely reminded me of something. Here's a comic strip for your enjoyment: http://xkcd.com/810/

Jambe said...

"He's suggesting we fall in love with things that give a plausible appearance of loving us back but have no actual feelings for us or anything else. And when it's put that way, I doubt that it's what many of us want."

Is he suggesting that we should or that we will? I'm curious as I haven't read the book, and it's a pretty important distinction.

ANTIcarrot said...

"Or perhaps I misunderstand your point. I'm not sure who you're criticising."

I'm criticising the point of view, not anyone in particular. The 'what' rather than the 'who'. :)

The requirement that non-humans must prove that their feelings are real before we believe them is in stark contrast with the notion that it is highly immoral to merely suggest that for a human. It strikes me as deeply immoral, in large part because we all know why it is so wrong when humans are involved.

I'm also questioning the use of a test for a complex activity (Turing) for an activity which is much simpler (synthetic emotions). That's like demanding a computer beat you at chess before you'll believe it can do arithmetic. Rats and mice have emotions (or at least behave as if they do, just like humans) and yet their brains are 2-3 orders of magnitude smaller than our own.

But specifically, you questioned whether people will fall in love with a box of carefully crafted tricks. Putting aside the notion that many people are already in love with several highly evolved books-of-tricks (the Bible et al.), isn't the endocrine system (IIRC, the mechanism behind our emotions) simply a very mysterious box of tricks? Would a complex mathematical system really be different from a complex chemical one? Apart from taking up a lot less space?