
And by a prudent flight and cunning save
A life which valour could not, from the grave.
A better buckler I can soon regain,
But who can get another life again?
~ Archilochus

Monday, April 23, 2018

Do Sexbots Have Rights?

The current wave of politically-correct moralism reared its head in recent debates about the need to regulate relations between humans and sexbots (sexual robots).

First, for context, allow me to quote from a news report:
“last year a sex robot named Samantha was ‘molested’ and seriously damaged at a tech industry festival; the incident spurred debate on the need to raise the issue of ethics in relation to machines... while the developers of sexbots have claimed that their projects will do anything to indulge their customers’ desires, it seems that they might start rejecting some persistent men... people ignore the fact that they may seriously damage the machine, just because it cannot say ‘no’ to their ‘advances’... future humanoid sex robots might be sophisticated enough to ‘enjoy a certain degree of consciousness’ to consent to sexual intercourse, albeit, to their mind, conscious feelings were not necessary components of being able to give or withhold consent... in legal terms, introduction of the notion of consent into human-robot sexual relationships is vital in a way similar to sexual relations between humans and it will help prevent the creation of a ‘class of legally incorporated sex-slaves.’”
Although these ideas are just a specific application of a proposal for the EU to impose basic “rights” on AI (artificial intelligence), the domain of sexbots brings out clearly the implicit presuppositions that determine such thinking. We are basically dealing with laziness in thinking: by adopting such “ethical” attitudes, we comfortably avoid the complex web of underlying problems.

Indeed, the initial suspicion is that the proponents of such demands do not really care about the AI machines (they are well aware that the machines cannot really experience pain and humiliation) but about aggressive humans: what they want is not to alleviate the suffering of the machines but to squash the problematic aggressive desires, fantasies and pleasures of us, humans.

Moral Maze

This becomes clear the moment we include the topics of video games and virtual reality: if, instead of sexbots – actual plastic bodies whose (re)actions are regulated by AI – we imagine escapades in virtual reality (or, even more plastic, augmented reality) in which we can sexually torture and brutally exploit people, then, although in this case it is clear that no actual entity is suffering, the proponents of the rights of AI machines would nonetheless in all probability insist on imposing some limitations on what we, humans, can do in virtual space.

The argument that those who fantasize about such things are prone to do them in real life is very problematic: the relationship between imagining something and doing it in real life is much more complex, and it runs in both directions. We often do horrible things while imagining that we are doing something noble, and vice versa – not to mention how we often secretly daydream about doing things we would in no way be able to perform in real life. We thereby enter the old debate: if someone has brutal tendencies, is it better to allow him to play them out in virtual space or with machines, in the hope that, in this way, he will be satisfied enough not to act on them in real life?

Finding Answers

Another question: if a sexbot rejects our rough advances, does this not simply mean that it was programmed that way? So why not re-program it differently? Or, to go to the end, why not program it in such a way that it welcomes our brutal mistreatment? (The catch is, of course: will we, the sadistic perpetrators, still enjoy it in this case? A sadist wants his victims to be terrified and ashamed.)

And one more: what if an evil programmer makes the sexbots themselves into sadists who enjoy brutally mistreating us, their partners? If we confer rights on AI sexbots and prohibit their brutal mistreatment, this means that we treat them as minimally autonomous and responsible entities – so should we also treat them as minimally “guilty” when they mistreat us, or should we simply blame their programmer?

Nevertheless, the basic mistake of advocates for AI rights is that they presuppose our human standards (and rights) as the highest form. What if, with the explosive development of AI, new entities emerge with what we could conditionally call a “psychology” (a series of attitudes or mindsets) that is incompatible with ours, but in some sense definitely “higher” than ours (measured by our standards, they may appear either more “evil” or more “good” than we are)? What right do WE (humans) have to measure them by our ethical standards?

So let’s conclude this detour with a provocative thought: maybe the true sign of the ethical and subjective autonomy of a sexbot would be not that it rejects our advances but that, even though it was programmed to reject our brutal treatment, it secretly starts to enjoy it? In this way, the sexbot would become a true subject of desire, divided and inconsistent as we humans are.
- Slavoj Zizek, "Do Sexbots Have Rights?"

4 comments:

FreeThinke said...

The very idea that anyone could even ASK such a silly question with apparent seriousness gives proof that our society –– after more than a hundred years of persistent mental corruption, moral degradation, and spiritual erosion by Marxian machinations of every stripe, cultural and otherwise –– has lost its moorings, gone mad, and richly DESERVES, therefore, to suffer EXTINCTION.

I am so sorry that YOU have, apparently, fallen prey to the blandishments of LEFTIST philosophies.

I should think the horrific damage this mode of thinking has done to WESTERN CIVILIZATION should have been enough to WARN you that ANY involvement with this damnably SEDUCTIVE brand of SOPHISTRY can do irreparable damage to ANYONE foolish enough to entertain it.

Again I refer you to Alexander Pope on vice:

Vice is a monster of so frightful mien,
As to be hated needs but to be seen;
Yet seen too oft, familiar with her face,
We first endure, then pity, then embrace.


~ Alexander Pope (1688–1744)

Now read that, substituting Marxism for Vice... and Bob's your uncle.

-FJ the Dangerous and Extreme MAGA Jew said...

Well, I don't believe I've "fallen" for anything. I don't necessarily agree with everything I post here that's been 'authored' by others. I find this kind of question, given the state of AI, woefully premature... but foreseeably applicable in some distant future, if and when sentience is conferred upon some artificial life form.

I suspect that Zizek must address this because of the nature of "property" in a communist society (not owned by individuals). I, for one, never intend to be driven to exist in a state near the "abyss of subjectivity" that would lead me to consider a "communist" alternative (see the Stalin quote in the post below).

(((Thought Criminal))) said...

You should always put your robot in safe mode before scanning for viruses.

FreeThinke said...

I have come to interpret Pope's rhyme as a WARNING against falling prey to the seductive wiles of ANY form of folly.

Having been the Man of the World that he was, however, I doubt he even dreamt anyone would ever follow his advice.

Next to cancer, stroke, and heart attack, we human beings are our own worst enemies, aren't we?