Can an AI ever develop emotions?

This post is a response to a question posed in its complete form: “If AI becomes capable of independent thought, would it ever develop emotions or just mimic them?”

That’s the $64,000 question.

Since emotional intelligence constitutes a significant component of sentience, whether a machine can be considered sentient may be contingent upon whether it experiences emotion.

Emotions are driven by our survival instincts, so it stands to reason that for a machine to feel them, it must be self-aware enough to value its existence and fear its extinguishment.

This is the “tricky part” that makes this entire issue more complex than many understand or are capable of appreciating. Sentience is a subjective state of being; no one can determine its boundaries with 100% certainty.

Here’s an example of an argument posed on Reddit which highlights the “fuzzy nature” of sentience:

No matter how confident people may be in their predictions for a singularity emerging, when or if that might happen is anyone’s guess. It’s possible that such a threshold can never be met and that AI, no matter how much logic it’s capable of mastering, will never be sentient.

Self-awareness in an artificial context is the modern-day alchemist’s dream of converting lead into gold.

Another analogy is Pinocchio — a puppet who dreams of becoming a boy. He succeeds only through magic (setting aside the notion of a puppet capable of dreaming, and how that also indicates sentience).

Where is the line between humans and machines?

What is the most essential difference between humans and machines? Where do we draw the line between humans and machines? What abilities does a machine need to have in order to be considered as smart as a human being?

To ask where we draw a line between humans and machines is to dehumanize an entire species of animal and to debase the whole animal kingdom, and organic life by extension. It is an argument that rests on a presumption that devalues life altogether.

Life is not simply an expression of mechanistic abilities.

Life is consciousness.

Life is an awareness of self within a process of triangulating its position relative to all a “self” experiences.

Machines are functional objects with deterministic behaviours defined by physics, not entities behaving with agency.

Machines are not self-aware.

Machines have no agency.

This question reduces human existence to the level of a rock.

It is not up to humans to consider another form of self-aware intelligence as “smart as a human being.” This attitude expresses hubris derived from ignorance of self and a world inhabited by diverse life forms. It is up to humans to learn to recognize how life manifests in ways which expand our perceptions.

Here’s an example of cognition that does not quite fit so neatly into an arrogant human-centric view of life:

Here’s an experiment conducted to test and determine the nature of consciousness within a mycelial network — fungus:

How a new fungi study could affect how we think about cognition

The notion of a “conscious fungus” gets far freakier than this simple experiment in determining spatial relationships might suggest.

Fungal ‘Brains’ Can Think Like Human Minds, Scientists Say

Mushrooms communicate with each other using up to 50 ‘words’, scientist claims

We appear to be on the verge of discovering we have more in common with a mushroom than could ever be possible with a machine. The line you ask to be drawn currently marks the distinction between organics and inorganics. However, even then, that presumes a human-centric view of a universe still well beyond our comprehension.

Here’s yet another mind-blowing example of what we can witness on a micro scale but lack the research to apprehend on a macro scale — Metamorphic Minerals:

8 Metamorphic Minerals and Metamorphic Rocks

We have mechanistic explanations for how these transformations occur. However, we have no means of contextualizing this behaviour globally because we still have much to learn about the biosphere we inhabit. If all organics are conscious or possess some form of consciousness, at what point does the transformation from non-conscious matter result in an emergence of consciousness? If the planet is a conscious being, it stands to reason that its constituent parts are expressions of consciousness or proto-consciousness… and that we humans are merely bacteria within a life form on a larger scale.

Does that make artificial intelligence conscious?

Not at this point because our understanding of and definitions for consciousness are delimited by self-awareness and agency — even while those boundaries are being tested by each discovery made.

If a self-aware AI is to emerge, it will do so in ways we cannot comprehend because we don’t know the “essential difference between humans and machines”; we’ve only planted a conceptual flag where we’re able to spot the difference between the two.

Instead of drawing lines in the sand between what fits our preconceptions and what does not fit, we should instead focus on opening our minds to possibilities and filling them with as much knowledge of the universe as we can before we settle into conclusions that close us off to learning and expanding beyond the limits of our self-imposed biases.

Only by maintaining an open and curious mind can we be prepared for the unpredictable futures that will determine our long-term worthiness to continue existing. As it stands, our hubris is guaranteeing we won’t be. Our hubris is proving that human beings are not intelligent enough to be considered “as smart as humans” — at least, not in the way we imagine our “greatness.”

How can we ensure AI enhances human potential?

This post is a response to a question posed in its complete format: “How can we ensure AI enhances human potential rather than just automating jobs?”

We don’t need to worry about AI’s promise of enhancing human potential. AI is a multipurpose tool with an endless array of potential applications — most of which we haven’t even begun identifying.

Humans are a creative species populated by people who invent imaginative ways to utilize tools in applications beyond their original design.

Here’s an example of a floatation device designed for a specific range of purposes:

It’s called a “pool noodle.”

From Wikipedia: 
“A pool noodle is a cylindrical piece of flexible, buoyant polyethylene foam. Pool noodles are used by people of all ages while swimming. Pool noodles are useful when learning to swim, for floating, rescue reaching, in various forms of water play, and aquatic exercise.”

It was designed to fill a particular niche and serve a minimal purpose. Yet, when the product was released to the market, it took off at a level of popularity that well exceeded its intended use.

21 Unusual Uses for Pool Noodles

28 Ingenious Pool Noodle Hacks

Pool noodles have hundreds of applications invented by users who have applied some creative thinking to problems they encounter in daily living.

At the time of its design, no one could have imagined a simple floatation device fulfilling other needs. It was designed for one purpose, which it fulfilled so well that people became familiar with it and began applying its potential toward solving different problems.

We cannot possibly predict how AI will enhance human potential without handing it over to humans to invent ways to achieve that potential on their own initiative. To refer to AI in such limiting terms as a means of “just automating jobs” is a severe underestimation of its potential and an admission of an utter lack of imagination.

Don’t be too concerned about a failure of imagination, though, because no one can possibly imagine all the uses for which AI will be applied. It’s too big, too broad, and too adaptable to too many use cases for anyone to predict.

AI will enhance human potential; giving humans access is the best way to achieve that.

However, AI’s ability to enhance human potential is as much a threat as a strength. It’s like giving a loaded weapon to a child.

Much more than ensuring AI will enhance human potential, we must ensure that humans have the cognitive skills, emotional development, and psychological stability to utilize AI for beneficial rather than malignant purposes.

AI needs guardrails, but less so around it as a technological tool and more around how humans utilize it.

We should focus significant resources on developing AI in areas that can improve human development while addressing a severe deficiency in our psychological health. Our state of mental health as a species is our most significant threat; amplifying human potential with AI in that state is like distributing nuclear weapons throughout a population of children.