
Kate Darling on Robot Ethics

Kate Darling (@grok_) offers a talk to the Berkman Center so popular that it has to be moved from Berkman to Wasserstein Hall, where almost a hundred people come for lunch and her ideas on robot ethics. Kate is a legal scholar completing her PhD during a Berkman fellowship (and a residency in my lab at the Media Lab), but she tells us that these ideas are pretty disconnected from the doctoral dissertation she’s about to defend, which is on copyright. She’s often asked why she’s chosen to work on these issues – the simple answer is “Nobody else is”. There’s a small handful of “experts” working on robots and ethics, and she feels an obligation to step up to the plate and become genuinely knowledgeable about them.

Robots are moving beyond manufacturing, where they have been for years, into transportation, education, elder care and medicine. Kate is concerned that our law may not yet have room for the issues raised by the spread of robots, and she hopes we can participate in building a field of robotics law, following on the healthy and creative field of cyberlaw.

She begins with a general overview of robot ethics. One key area is safety and liability – who is responsible for malfunction and damage in these complex systems, where there’s a long chain from the coder to the system’s execution? It sounds fanciful, but people are now trying to figure out how to program ethics into these systems, particularly around autonomous weapons like drones.

Privacy is an area that creates visceral responses in the robotics space – Kate suggests that talking about robots and privacy may be a way to open some of the discussions about the hard issues raised by NSA surveillance. But Kate’s current focus is on social robots, and specifically on the tendency to project human qualities onto robots. She references Sherry Turkle’s observation that people bond with objects in a surprisingly strong way. There are perhaps three reasons for this: physicality (we bond more strongly with the real world than with the screen), perceived autonomous action (we see the Roomba moving around on its own, and we tend to name it and feel bad when it gets stuck in the curtains), and anthropomorphism (robots designed to mimic expressions we associate with states of mind and feelings).

Humans bond with robots in surprising ways – soldiers honor robots with medals, insist that damaged robots be repaired rather than replaced, and hold funerals for them when they are destroyed. She tells us about a mine-defusing robot that looked like a stick insect and lost one of its six legs each time it detonated a mine. The colonel in charge of the exercise called it off on the grounds that a robot reduced to two or three legs was “inhumane”.

Kate shows her Pleo dinosaur, named for Yochai Benkler. The robot comes from an experiment she ran at a workshop with legal scholars, where she encouraged participants to bond with these robots and then asked them to destroy one. Participants were horrified, and it took the threat of destroying all the robots to get the group to sacrifice one of the six. She observes that we respond to social cues from lifelike machines, even when we know they are not real.


Kate encourages workshop participants to kill a robot. Murderer.

So why does this matter? People are going to keep creating these sorts of robots, if only because toy companies like to make money. And if we have a deep tendency to bond with these robots, we may need to discuss the idea of instituting protections for social robots. We protect animals, Kate explains. We argue that it’s because they feel pain and have rights. But it’s also because we bond with them, and we see an attack on an animal as an attack on the people who have bonded with and value that animal.

Kate notes that we have complicated social rules for how we treat animals. We eat cows, but not horses, because they’re horses. But Europeans (though not the British) are happy to eat horses. Perhaps the uncertainty about rights for robots suggests a similar cultural challenge: are there cultures that care for robots and cultures that don’t? This may change, Kate argues, as we have more lifelike robots in our lives. Parts of society – children, the elderly – may have difficulty distinguishing between live and lifelike. In cases where people have bonded with lifelike robots, are we comfortable with people abusing those robots? Is abusing a robot that someone cares about, and may not be able to distinguish from a living creature, a form of abuse if it hurts that person emotionally?

She notes that Kant offered a reason to be concerned about animal abuse: “We can judge the heart of a man by his treatment of animals, for he who is cruel to animals becomes hard also in his dealings with men.” Some states open child abuse investigations when there has been a report of animal abuse in a household, because they worry that the two issues are correlated. Is robot abuse something we should consider as evidence of more serious underlying social or psychological issues?

Kate closes by suggesting that we need more experimental work on how human/robot bonding takes place. She suggests that this work is almost necessarily interdisciplinary, bringing together legal scholars, ethicists and roboticists. And she hopes that Cambridge, which brings these fields together in one physical place, could be where these conversations take place.

Jessa Lingel of MSR asks whether an argument for protecting robots might extend to labor protections for robots. “I’m not sure I buy your arguments, but if so, perhaps we should also unionize robots?” Kate argues that we should grant rights according to needs and that there’s no evidence that robots mind working long hours. Jessa suggests that the argument for labor rights might parallel the Kantian argument – if we want people to treat laborers well, maybe we need to treat our laboring robots well.

There’s a long thread on intellectual property and robots. One question asks whether we should demand open-source robots as a way to ensure local rather than centralized control. Another asks about the implications of self-driving cars and the ability to review their algorithms to assign responsibility in the case of an accident. I ask a pointed question: if the Pentagon begins advertising ethical drones that check whether there’s a child nearby before bombing a suspected terrorist, will we be able to review the ethics code? Kate notes that a lot of her answers to these questions are, “Yes, that’s a good question – someone should be working on this!”

Andy Sellars of the Digital Media Law Project asks Kate to confront her roboexceptionalism. He admits that he can’t make the leap from the Pleo to his dog, and can’t see any technology on the horizon that would really blur that line for him. He suggests her Pleo experiment could be replicated with stuffed animals – would we worry as much about people torturing stuffed animals? Kate cites Sherry Turkle, who has found evidence that children do distinguish between robots and stuffed animals. More personally, she tells a story about a woman who told her, “I wouldn’t have any problem torturing a robot – does that make me a bad person?” Kate’s answer, for better or for worse, is yes.

Tim Davies of the Berkman Center offers the idea that Kate’s argument for robot ethics is a form of virtue ethics: ethics is about the character we have as people. Law generally operates in the space of consequentialist ethics: something is illegal because of the consequences of the behavior, not because of what it reflects about your character. He wonders whether we can move from the language of anthropomorphism around robots to the language of simulation. There are legal cases where simulation of harm is something we consider to be problematic, for instance simulated images of child abuse.

Boris Anthony of Nokia and Ivan Sigal of Global Voices (okay, let’s be honest – they’re both from Global Voices) both ask about cultural conceptions of robots through science fiction – Boris references Japanese anime and suggests that Japanese notions of privacy may be very different from American notions; Ivan references Philip K. Dick. Kate notes that, in scifi, lots of questions focus on the inherent qualities of robots. “Almost Human”, a near-future show that posits robots that have near-human emotions, is interesting, but not very practical – we’re not going to have those robots any time soon. Issues of projection are going to happen far sooner. In the story that becomes Blade Runner, the hero falls in love with a robot who can’t love him back, and he loves her despite that reality – that’s a narrative that had to be blurred out in the Hollywood version because it’s a very complex question for a mainstream movie.

Chris Peterson opens his remarks by noting that he spent most of his teenage years blowing up Furbies in the woods. “Was I a sociopath, a teenager in New Hampshire, or are the two indistinguishable?” Kate, whose Center for Civic Media portrait features her holding a flayed Furby shell, absolves Chris: “Furbies are fucking annoying.” Chris’s actual question focuses on the historical example of European courts putting inanimate objects on trial, citing a case where a Brazilian colonial court put a termite colony on trial for destroying a church (and the judge awarded wood to the termites, who had been wronged in the construction). Should emergent, autonomous actors with potentials not intended by their designers have legal responsibilities? “Should the high-frequency trading algorithm that causes harm be put to death? Do we distinguish between authors and their systems in the legal system?” Kate suggests that we may have a social contract that allows the vengeance of destroying a robot we think has wronged people, but notes that we also try to protect very young people from legal consequences.

10 thoughts on “Kate Darling on Robot Ethics”

  1. Thanks for sharing this amazing article, Ethan. It takes articles and studies like this to draw our attention to the human-machine bonds that often form subconsciously and in very subtle ways, especially now that the world is opening up to many more robots and inanimate objects we are capable of getting attached to.

  2. Pingback: Is it OK to torture or murder a robot? « Engineering Evil

  3. Pingback: Is it OK to torture or murder a robot? | Learn How to be Prepared

  4. Pingback: Weil’s sonst niemand macht | tautoko

  5. Pingback: Should Robots Have Rights? | Smart News

  6. Pingback: Weekend reading recommendations « Martin's thoughts on the web. And life.

  7. Pingback: Could you kill a robot? | Lion in a sidecar

  8. Pardon me for an autobiographical note here, but way back in the mid-1970s I wrote a short skit for a radio program titled “The Marriage and Murder of a Robot”, about a robot working at Jacqueline Kennedy’s home that gradually falls in ‘love’ with her and wants to ‘marry’ her. I felt a lot of pain ending the short play with the robot being murdered to save Mrs Kennedy from its intentions, and one of my friends changed the script to just cutting off the power supply (some kind of euthanasia!).

    [PS: I am not an engineer or a science graduate, but I had a fascination for robots even then. Allow for my ignorance and naiveté as I was just over 20 then! :P]

  9. Pingback: ArtLung : TabSweep.txt PART 2 ~ 16 Dec 2014

  10. Pingback: ¿Pueden tener derechos los robots? | Replicante Legal

Comments are closed.