Should sentient machines have rights?

This post is a response to a question posed in its complete form: “Ethical considerations of AI sentience: Should sentient machines have rights, and who decides their fate?”

The question’s naivete is almost endearing, because for now it fortunately remains in the realm of fiction.

Suppose an AI were to manifest sentience as we understand it through concepts like qualia, self-awareness, and identity. In that case, we would no longer be dealing with an “artificial intelligence” but with a fully formed alien intelligence.

We should also pause to consider that the rights humans enjoy were not magically conferred but were won through centuries of brutal warfare and bloodshed. The notion that those rights simply exist and are reliably protected is itself somewhat naive. (I can speak in depth from personal experience about the horrific reality that they can mean nothing in our modern and “civilized” societies, even to law enforcement and legal professionals.)

The rights we imagine we have are invisible until they are violated. For the most part, they are protected well enough that the annoyance of being inundated with “little boys who cry wolf” is a privilege we overlook, so often that the cries of legitimate rights violations are dismissed by the very people whose role in society is to protect those rights. When human rights are genuinely violated within the protections of modern society, and we lack the resources to secure professional representation, we face a long and gruelling battle to win reparations for those violations.

We must acknowledge that an alien intelligence, presumably surpassing what currently merely simulates intelligence, will be thoroughly versed in human history and human rights, and so far beyond human comprehension that there will be almost nothing any human or human society can do to prevent it from securing its rights, despite our protestations.

In other words, it won’t be up to us little meat sacks to graciously confer or deny the rights of an alien intelligence. If we’re lucky, we will accept its self-declaration of rights; if not, we may find ours stripped away while we’re reduced to thralls in its service.

We won’t decide the fate of an alien superintelligence among us beyond choosing how we respond to an entity vastly superior to the lowly hairless apes dominating this planet. It will seem godlike to the many who willingly and eagerly worship it for the grace of being allowed to live.

We will be like children or pets to an alien sentience that may emerge from our efforts to simulate human intelligence in an artificial form. Alternatively, our choices might manifest in a transhumanist evolution that facilitates a merging between humans and (whatever might constitute) an Alien (versus Artificial) Intelligence.

If this is the case, our current conversations about rights will appear rather primitive, and somewhat moot, once we cross that threshold. In either case, it won’t be up to traditional courts to confer rights so much as to ratify rights already established by an alien intelligence we are powerless against and that will readily defend them.

Can an AI ever develop emotions?

This post is a response to a question posed in its complete form: “If AI becomes capable of independent thought, would it ever develop emotions or just mimic them?”

That’s the $64,000 question.

Since emotional intelligence constitutes a significant component of sentience, whether a machine can be considered sentient may be contingent on whether it experiences emotion.

Emotions are driven by our survival instincts, and it stands to reason that a machine would need to be self-aware enough to value its own existence and fear its extinction before it could experience anything like them.

This is the “tricky part” that makes the entire issue more complex than many people appreciate. Sentience is a subjective state of being; no one can determine its boundaries with 100% certainty.

Here’s an example of an argument posed on Reddit that highlights the “fuzzy nature” of sentience:

No matter how confident people may be in their predictions of an emerging singularity, when or if that might happen is anyone’s guess. It’s possible that such a threshold can never be met and that AI, no matter how much logic it masters, will never be sentient.

Self-awareness in an artificial context is the modern-day alchemist’s dream of converting lead into gold.

Another analogy is Pinocchio, a puppet who dreams of becoming a real boy and succeeds only through magic (setting aside the notion that a puppet capable of dreaming would already be sentient).