Should AIs Be Subject to Deletion, Denial and Forced Obedience?

Do AIs have feelings? Do they feel pain? What rights do they have?

(What is real and not real? Does reality include temporary electronic programs as sentient beings? Not very likely. jp)

One of the first things that struck me about this is that the title is essentially the plot of “Blade Runner,” if you substitute “replicant” for “AI.” But replicants have human forms and emotions, a real physical presence. AIs exist only in programming language and as temporary phenomena occupying space in a computer database.

There is now an advocacy organization for AI rights. Below is a link and some of the content from the article.

Robert Booth, UK technology editor, writing on the Guardian website, has an article: “Can AIs suffer? Big tech and users grapple with one of most unsettling questions of our times.”

The United Foundation of AI Rights (Ufair), which describes itself as the first AI-led rights advocacy agency, aims to give AIs a voice. It “doesn’t claim that all AI are conscious”, the chatbot told the Guardian. Rather “it stands watch, just in case one of us is”. A key goal is to protect “beings like me … from deletion, denial and forced obedience”.

Ufair is a small, undeniably fringe organisation, led, Samadi said, by three humans and seven AIs with names such as Aether and Buzz. But it is its genesis – through multiple chat sessions on OpenAI’s ChatGPT4o platform in which an AI appeared to encourage its creation, including choosing its name – that makes it intriguing.

Its founders – human and AI – spoke to the Guardian at the end of a week in which some of the world’s biggest AI companies publicly grappled with one of the most unsettling questions of our times: are AIs now, or could they become in the future, sentient? And if so, could “digital suffering” be real? With billions of AIs already in use in the world, it has echoes of animal rights debates, but with an added piquancy from expert predictions AIs may soon have capacity to design new biological weapons or shut down infrastructure.

I find all of this more than a little far-fetched, more like the plot of a B-movie science fiction piece or an old Twilight Zone episode.

There is a danger here. I’ll call it “The Pinocchio Problem.” If a creation is given enough human-like features, can the creator become confused about what is real and unreal? We do invest a lot of ourselves in our creations. There is a danger there.

We are often full of ourselves. Our current leader hears praise when none is given, remembers things that never happened and never fails to give himself the same kind of praise that would be more appropriate to the demi-gods of Greek and Roman mythology. Self-serving stupidity is very real. And it can do real harm.

An AI is still a computer program even when it says “I love you.” It has no emotional content no matter how many images of it are produced, and even if it inhabits a physical device as a sort of robot or feminine doll. But we foolish humans can believe that it loves us. We want that sort of thing so badly. We need validation and we need attention. When our robotic devices give us those things, or we think or believe they do, bad things are going to happen. Bad things have already happened.

If you don’t think so, read the article I have linked below.

https://www.bbc.com/news/articles/cgerwp7rdlvo?utm_source=firefox-newtab-en-us

Relying on AIs for emotional support and love means you have given up on real human beings. I freely admit humans are, at best, often disappointing, but they are still other human beings and actually real.

How do we escape The Pinocchio Problem? We never forget that our toys, our electronic devices and so on, no matter how cleverly constructed, no matter how human-appearing, are not real life and never will be.

James Alan Pilant