Should sentient machines have rights?

This post is a response to a question posed in its complete form: “Ethical considerations of AI sentience: Should sentient machines have rights, and who decides their fate?”

The question’s naivete is almost endearing, because the scenario it imagines fortunately remains in the realm of fiction.

Suppose an AI were to manifest sentience as we understand it through concepts like qualia, self-awareness, and identity. In that case, we are no longer dealing with an “artificial intelligence” but a fully formed alien intelligence.

We should also pause to consider that the rights humans enjoy were not magically conferred but were won through centuries of brutal warfare and bloodshed. Our assumption that those rights exist and are reliably protected is itself somewhat naive. (I can speak in depth from personal experience about the horrific reality that they can mean nothing in our modern and “civilized” societies, even to law enforcement and legal professionals.)

The rights we imagine we have mean little until they are violated. For the most part, they are protected well enough that being inundated with “little boys who cry wolf” is a privilege we overlook, so often that cries over legitimate rights violations are dismissed by the very people whose role in society is to protect those rights. When our rights are genuinely violated within the protections of modern society, and we lack the resources to secure professional representation, we face a long and gruelling battle to win reparations for those violations.

We must acknowledge that an alien intelligence, presumably surpassing what currently merely simulates intelligence, will be thoroughly versed in human history and human rights, and so far beyond human comprehension that almost nothing any human or human society does will prevent it from securing its rights, despite our protestations.

In other words, it won’t be up to us little meat sacks to graciously confer or deny the rights of an alien intelligence. If we’re lucky, we will accept its self-declaration of rights; if not, we will find ours stripped away while we’re reduced to thralls in its service.

We won’t decide the fate of an alien superintelligence among us; we will only decide how we respond to an entity far superior to the lowly hairless apes dominating this planet. It will seem godlike to the many who willingly and eagerly worship it for the grace of being allowed to live.

We will be like children or pets to an alien sentience that may emerge from our efforts to simulate human intelligence in an artificial form. Our choices might manifest in a transhumanist evolution which can facilitate merging between humans and (whatever might constitute) an AI-Alien (versus Artificial) Intelligence.

If this is the case, our current conversations about rights will look rather primitive, and somewhat moot, once we cross that threshold. In either case, it won’t be up to traditional courts to confer rights so much as to ratify rights already established, and readily defended, by an alien intelligence we are powerless against.

Can an AI ever develop emotions?

This post is a response to a question posed in its complete form: “If AI becomes capable of independent thought, would it ever develop emotions or just mimic them?”

That’s the $64,000 question.

Since emotional intelligence constitutes a significant component of sentience, whether a machine can be considered sentient may hinge on whether it experiences emotion.

Emotions are driven by our survival instincts, and it stands to reason that for a machine to feel them, it must be self-aware enough to value its existence and fear its extinction.

This is the “tricky part” that makes this entire issue more complex than many understand or are capable of appreciating. Sentience is a subjective state of being; no one can determine its boundaries with 100% certainty.

Here’s an example of an argument posed on Reddit which highlights the “fuzzy nature” of sentience:

No matter how confident people may be in their predictions of an emerging singularity, when or if it might happen is anyone’s guess. It’s possible that such a threshold can never be met and that AI, no matter how much logic it masters, will never be sentient.

Self-awareness in an artificial context is the modern-day alchemist’s dream of turning lead into gold.

Another analogy is Pinocchio, a puppet who dreams of becoming a boy. He succeeds only through magic (setting aside the notion that a puppet capable of dreaming already indicates sentience).

Where is the line between humans and machines?

What is the most essential difference between humans and machines? Where do we draw the line between humans and machines? What abilities does a machine need to have in order to be considered as smart as a human being?

To ask where we draw a line between humans and machines is to dehumanize our entire species and, by extension, to debase the whole animal kingdom and organic life. The question rests on a premise that devalues life altogether.

Life is not simply an expression of mechanistic abilities.

Life is consciousness.

Life is an awareness of self within a continual process of triangulating its position relative to everything that “self” experiences.

Machines are functional objects whose deterministic behaviours are defined by physics; they are not entities acting with agency.

Machines are not self-aware.

Machines have no agency.

This question reduces human existence to the level of a rock.

It is not up to humans to judge whether another form of self-aware intelligence is “as smart as a human being.” That attitude expresses hubris derived from ignorance of ourselves and of a world inhabited by diverse life forms. It is up to humans to learn to recognize how life manifests in ways that expand our perceptions.

Here’s an example of cognition that does not quite fit so neatly into an arrogant human-centric view of life:

These are photos from an experiment conducted to test and determine the nature of consciousness within a mycelial network — fungus.

How a new fungi study could affect how we think about cognition

The notion of “conscious fungus” gets far freakier than this simple experiment in determining spatial relationships suggests.

Fungal ‘Brains’ Can Think Like Human Minds, Scientists Say

Mushrooms communicate with each other using up to 50 ‘words’, scientist claims

We appear to be on the verge of discovering we have more in common with a mushroom than could ever be possible with a machine. The line you ask to be drawn currently marks the distinction between organics and inorganics. However, even then, that presumes a human-centric view of a universe still well beyond our comprehension.

Here’s yet another mind-blowing example of what we can witness on a micro scale but lack the research to apprehend on a macro scale — Metamorphic Minerals:

8 Metamorphic Minerals and Metamorphic Rocks

We have mechanistic explanations for how these transformations occur. However, we have no means of contextualizing this behaviour globally, because we still have much to learn about the biosphere we inhabit. If all organics are conscious or possess some form of consciousness, at what point does matter lacking consciousness cross the threshold into an emergence of consciousness? If the planet is a conscious being, it stands to reason that its constituent parts are expressions of consciousness or proto-consciousness… that we humans are merely bacteria in a life form on a larger scale.

Does that make artificial intelligence conscious?

Not at this point, because our understanding of and definitions for consciousness are delimited by self-awareness and agency, even as those boundaries are tested by each new discovery.

If a self-aware AI is to emerge, it will do so in ways we cannot comprehend, because we don’t know the “essential difference between humans and machines”; we’ve only planted a conceptual flag where we’re able to spot the difference between the two.

Instead of drawing lines in the sand between what fits our preconceptions and what does not, we should focus on opening our minds to possibilities and filling them with as much knowledge of the universe as we can, before we settle into conclusions that close us off to learning and expanding beyond the limits of our self-imposed biases.

Only by maintaining an open and curious mind can we be prepared for the unpredictable futures that will determine our long-term worthiness to continue existing. As it stands, our hubris all but guarantees we won’t be. Our hubris is proving that human beings are not intelligent enough to be considered “as smart as humans,” at least not in the way we imagine our “greatness.”