The article recounts an experiment run by Kate Darling, a researcher at MIT, who gave people cute-looking dinosaur robots that behaved like "helpless newborn pupp[ies]." After allowing participants to interact with the robots for a time, Darling asked them to destroy their robots. The request was met with widespread resistance, and Darling ultimately had to issue an ultimatum: one robot needed to be sacrificed to protect the rest -- a demand that was reluctantly met.
The article recounts Darling's reaction to this result, and its broader implications:
Darling, however, believes that we could go further than a few ethical guidelines. We may need to protect “robot rights” in our legal systems, she says.
If this sounds absurd, Darling points out that there are precedents from animal cruelty laws. Why exactly do we have legal protection for animals? Is it simply because they can suffer? If that’s true, then Darling questions why we have strong laws to protect some animals, but not others. Many people are happy to eat animals kept in awful conditions on industrial farms or to crush an insect under their foot, yet would be aghast at mistreatment of their next-door neighbour’s cat, or seeing a whale harvested for meat.
The reason, says Darling, could be that we create laws when we recognise their suffering as similar to our own. Perhaps the main reason we created many of these laws is because we don’t like to see the act of cruelty. It’s less about the animal’s experience and more about our own emotional pain. So, even though robots are machines, Darling argues that there may be a point beyond which the performance of cruelty – rather than its consequences – is too uncomfortable to tolerate.
Fisher goes on to note other laws that ban behavior viewed as cruel, or that normalizes cruel behavior. He notes that bans on e-cigarettes in public places persist despite the lack of health consequences for bystanders, because the behavior normalizes public smoking. And while simulating illegal activities such as rape or pedophilia with robots is not (yet) illegal, the thought of such activity raises an instinctive negative reaction.
I think that Fisher raises questions that are important now and that will become far more important as technology continues to develop. Although simulating certain illegal activities with robots is not itself illegal today, as robots become widely and cheaply available, the law may indeed develop to outlaw certain behaviors toward them.
And when technology does reach this point, will the law develop to make cruel treatment of robots illegal? Will there be a rational basis for these laws? How "human" will a robot need to appear in order for these laws to apply? Would laws be correct to treat unsophisticated robots that look and act like humans more sympathetically than robots that do not appear human, but that have much more complex programming?
All of these questions are forward-looking, and, as stated, they don't have many current implications. But the intuitions behind them seem to inform some of our current laws and regulatory trends. Asking and answering these questions may help us unearth and examine the intuitions that underlie our current policies.