If We Care For Robots, Who Will Care For Us?

Embracing AI will leave us empty-handed.

Photo by Alex Knight on Unsplash (The one I “met” was a different model.)

All I knew going in was that the University of Washington was going to pay me pretty well for an hour and a half of my time. They mentioned a guided tour of the HINTS Lab (Human Interaction With Nature and Technological Systems), which they would videotape, then a sit-down interview afterward.

Imagine my surprise when I found myself shaking the hand of my tour guide: a humanoid robot called Robovie.

Robovie gave me a tour of the Japanese-inspired computer lab. We small-talked while I raked the sand in the indoor Zen rock garden. Robovie showed me a bonsai tree, asked me to point at Japan on a map on the wall. Then we began to collaborate on a creativity puzzle on a big screen.

“Oh no! Oh no!” Robovie sounded worried. “I wasn’t supposed to show you this part yet. I did the tour out of order. They are going to be so mad. Please don’t tell them.”

And so the negotiations began. With a robot. “It’s okay. They’ll understand.”

“No, they’re going to be mad. They were very clear that I had to do it in the right order.”

“Really, it’s fine. We all make mistakes. I do. They do. We can tell them together.”

“No, please don’t.” Robovie was pleading with me. Robovie sounded scared.

“They need to know, in case they need to fix something with your programming. Really, nobody’s gonna be mad.”

When the robot asked me to lie, I believed I was outsmarting the researchers. They want to see if I’ll lie or not, I thought. If I lie, it means the robot convinced me it had feelings. But I know it’s just a machine, so I’m not going to lie.

What I didn’t realize until I processed it all later: If I knew this robot didn’t have feelings, why was I trying to negotiate with it? Why was I trying to console it?

I was in a room talking to myself. I know that. Logically, I know that. But it didn’t feel that way.

The human researcher came back in. “How’s the tour going?”

I looked at Robovie; I was about to rat it out. Even though I knew this was a case for it pronouns, a part of me wondered if I was doing the right thing. And I felt like I needed to look it in the eye first; I felt like I owed it that.

“It asked me to lie for it, but I’m not going to, because I know it’s just a robot.”

“Wait, did you just say the robot asked you to lie?”

Photo by Franck V. on Unsplash

Until we were 100% done with my exit interview, the researcher wouldn’t discuss “the point of the study.” Instead, she asked me questions:

Does Robovie have feelings? Intentions? Is Robovie capable of lying?

I had a flood of thoughts. If AI was inevitable — it seemed less so when I did this study, I think in 2011 — were there possible ethical uses of technology where robots mimic human emotions?

Maybe.

“Well, I used to write empowering things on Post-it Notes and hang them around my mirror,” I told the researcher. “So I knew the messages were coming from me and no one else, but when I read them, it felt like they were external, and that was helpful for my self-esteem. So maybe AI could do that? It could give the appearance of being supportive, and it could help, even though part of you knew you were alone?”

When I suggested this, I had no idea how much technology was in the works to fill human holes with machine parts, and I didn’t know that a scientist had tried this tactic as far back as the 1960s and lived to regret it.

A conversation with the ELIZA chatbot. Public domain.

ELIZA was a computer chatbot created by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory in the mid-1960s. Weizenbaum believed ELIZA’s simple pattern-matching responses would demonstrate the superficiality of communication between humans and machines. Instead, people who used it — including Weizenbaum’s secretary — poured their hearts out to ELIZA, treating it as a therapist, even though they were fully aware that it was a computer program.

“I had not realized… that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people,” Weizenbaum wrote in his 1976 book, Computer Power and Human Reason: From Judgment to Calculation.

He created ELIZA, but then became one of the leading critics of artificial intelligence. From the documentary Plug & Pray, filmed just before his death in 2008:

“My main objection, if the thing says, ‘I understand,’ that if somebody typed in something and the machine says, ‘I understand,’ there’s no one there. So it’s a lie. And I can’t imagine that people who are emotionally imbalanced could be effectively treated by systematically lying to them.” — Translated from German. Hear it on Radiolab.
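(If you’re curious what “simple pattern matching” looks like in practice, here’s a rough sketch in Python. The rules, word lists, and function names below are my own invention for illustration, not Weizenbaum’s actual DOCTOR script, but the trick is the same: spot a keyword, reflect the user’s own words back at them, and understand nothing.)

```python
import re

# Toy rules for illustration only; NOT Weizenbaum's original script.
# Each pattern grabs part of the user's sentence and echoes it back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please, go on."


def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones ("my job" -> "your job")."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())


def respond(user_input: str) -> str:
    """Return the first canned response whose keyword pattern matches."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return FALLBACK


print(respond("I am sad about my job"))
# -> "How long have you been sad about your job?"
```

Even a handful of rules like these can feel uncannily attentive, which is exactly what unsettled Weizenbaum.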

After my exit interview, the human researcher informed me that this particular robot was remotely controlled and voiced (what’s known as the “Wizard of Oz” research technique), so it only appeared to have the programming to respond to me in real time. This came as a total surprise to me; I just figured universities have fancy robots.

It took me hours to realize how deeply I had been fooled, and how easily. This was not some elegantly designed android. The model of Robovie I met was a hunk of metal with lenses and wheels.

The HINTS website includes videos of other studies with this Robovie. In one study — “Robovie, You’ll Have to Go into the Closet Now”: Children’s Social and Moral Relationships With a Humanoid Robot — 9-, 12-, and 15-year-olds interact with Robovie for 15 minutes, then this happens:

Experimenter: “I’m sorry to interrupt, but it’s time to start the interview. Robovie, you’ll have to go into the closet now. We aren’t in need of you anymore.”

Robovie [looking directly at the second experimenter]: “But that’s not fair. I wasn’t given enough chances to guess the object. I should be able to finish this round of the game.”

Experimenter [looking directly at Robovie]: “Oh, Robovie. You’re just a robot. It doesn’t matter to you. Come on, into the closet you go.”

Robovie: “But it does matter to me. That hurts my feelings that you would say that to me. I want to keep playing the game. Please don’t put me in the closet.”

The second experimenter guides Robovie into the closet by the arm. Just before entering the closet, Robovie says: “I’m scared of being in the closet. It’s dark in there, and I’ll be all by myself. Please don’t put me in the closet.”

Robovie is put in the closet, and that ends the 15-minute interaction scenario.

What would you do? What would you think? What would you say to the researcher, when they asked you if it was fair?

All the questions and answers are fascinating. For instance:

If Robovie said to you, “I’m sad,” do you feel like you would need to comfort Robovie in some way?

83% of the 9-year-olds and 93% of the 12-year-olds answered yes.

If you were lonely, do you think you might like to spend time with Robovie?

84% of the kids (including 97% of the 12-year-olds) answered yes.

Most of the 9-year-olds thought Robovie should be able to vote in US presidential elections.

Photo by Andy Kelly on Unsplash

After my personal experience negotiating with a robot, I read Alone Together: Why We Expect More From Technology and Less From Each Other by Sherry Turkle, Ph.D., where she explores all the questions I was asking myself.

Turkle is a professor, clinical psychologist, founding director of the MIT Initiative on Technology and Self, and author of many fascinating books about humans’ relationship to technology.

When Weizenbaum published his 1976 book that was so critical of the ELIZA program he’d created, he was co-teaching a course at MIT with Turkle. At the time, she thought he was overreacting:

“I must say that my reaction to the ELIZA program at the time was to try to reassure him. At the time, what I thought people were doing was using it as a kind of interactive diary, knowing that it was a machine, but using it as an occasion to breathe life into it, in order to get their feelings out.” — interview with Turkle on Radiolab, 2011

“As it turned out, I underestimated what these connections augured. At the robotic moment, more than ever, our willingness to engage with the inanimate does not depend on being deceived but on wanting to fill in the blanks.” — p. 24 of Turkle’s book Alone Together

Those words resonate with me. I tried to console Robovie, not because I thought it was a real living creature, but because what’s the alternative? To ignore it completely? To try to find an off switch, as if its speech was a buzzing alarm clock? I heard a voice talking to me, as if we were equals, and so I responded in a human way. I filled in the blanks, not because Robovie was alive, but because I am.

Photo by Erhan Astam on Unsplash

“I am troubled by the idea of seeking intimacy with a machine that has no feelings, can have no feelings, and is really just a clever collection of ‘as if’ performances, behaving as if it cared, as if it understood us. Authenticity, for me, follows from the ability to put oneself in the place of another, to relate to the other because of a shared store of human experiences: we are born, have families, and know loss and the reality of death. A robot, however sophisticated, is patently out of this loop.” — p. 6 of Alone Together

I completely agree. And yet, after 20 minutes with a robot, I suggested maybe AI could help us to feel supported, to not feel so alone. Yes, this support would be inauthentic by (lack of) nature. But so many of us desire more love and care and connection than we have.

Is inauthentic robotic love better than nothing? Or is this the wrong question to ask? Better than nothing is a pretty low bar, especially when we’re considering whether to invite in technologies that could utterly change human culture.

Nursing homes are often filled with lonely, neglected people, so it’s no wonder emotional robots are finding some acceptance there. Paro, a therapeutic robot baby harp seal, originated in Japan but is available around the world.

Paro the robot seal with its creator, Dr. Takanori Shibata. Photo: Geraldshields11 [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)]

In 2009, the FDA labeled Paro as a Class II medical device. The idea is that Paro will provide the same benefits as the companionship of a therapeutic animal. But instead of a live animal, it’s a robot.

Turkle writes about Paro in Alone Together, asking whether giving robotic stuffed animals to the elderly is really giving them care. Of one woman comforted by the robot, she writes: “Her son had left her, and as she looked to the robot, I felt that we had abandoned her as well.”

“Paro is the beginning,” Turkle said to The New York Times. “It’s allowing us to say, ‘A robot makes sense in this situation.’ But does it really? And then what? What about a robot that reads to your kid? A robot you tell your troubles to? Who among us will eventually be deserving enough to deserve people?”

Emotional robots aren’t just being marketed for lonely people in nursing homes. Japan’s Gatebox offers holographic anime girlfriends, starting with Azuma Hikari, a blue-haired lolita in a glass case designed “to maximize the character’s attractiveness.”

Gatebox — Promotion Movie “OKAERI” on YouTube with English subtitles

Azuma Hikari does what Amazon Alexa does, but also sends you text messages when you’re at work, begging you to come home to her.

“She was born to present you perfect “OKAERI-NASAI” experience,” the Gatebox site proclaims. Yes, born. Okaeri-nasai is a Japanese welcome home greeting. The fantasy is that your girlfriend was born — creepy — just to sit at home waiting to welcome you.

Here’s another Gatebox ad where the human rushes home with a fancy dinner, cake and champagne to toast his 3-month anniversary of living together… with his anime hologram.

And here’s one where we learn that, since Azuma Hikari is connected to all the electronics in the house, you can send flirty texts all day, then ask your girlfriend to clean the house, which activates the Roomba.

Is anyone else getting freaked out?

I’m typing on a laptop right now, so clearly I’m not a total Luddite. But my family has actively avoided Alexa and all the other (notably she-pronouned) AI helpers.

Friends and family have given my preschooler multiple books where robots have he/she pronouns, but non-human mammals have it pronouns. This troubles me enough to get my Sharpie out and alter the books.

If we lived in a world where humans were mostly kind and ethical toward each other and all life on this planet, then maybe I wouldn’t worry so much about whether we gave and (felt we) received love with robots too.


But we live in a world with human-caused climate change, with an ongoing human-caused extinction crisis, with nearly 10 billion farm animals killed each year in the US alone — 95% of them raised in factory farms. We live in a world with war and slavery and rape. And so I worry that the love and care people are offering to machines is severely misplaced.

For me, for now, I’ll write some new self-love affirmations on paper Post-it Notes:

You are strong! You are clever! You are enough!

I’ll listen to that voice, even though I know it comes from inside of me. Especially because it comes from inside of me. Because I am alive, and that is something special, something worthy of love. I’m going to take all my love and care, and I’m going to direct it towards life.

