How different will the late 21st Century be?

This post is a response to a question posed in its complete format: “Do you think the late 21st century will be different from the early 21st Century just like the early and late 20th Century are nothing alike?”

The rate of change has been steadily increasing. We (the public at large) have been made aware of this increasing rate of change since Alvin Toffler’s Future Shock was published in 1970.

Re-reading Future Shock, 50 years on

“Western societies for the past 300 years have been caught up in a firestorm of change. This storm, far from abating, now appears to be gathering force.” (p.18)

Future Shock Complete Film on YouTube (1:53:13)

“Future shock is the dizzying disorientation brought on by the premature arrival of the future… [It] is a time phenomenon, a product of the greatly accelerated change in society.” (pp.19-20)

The change between the early and late 21st century will be far more pronounced than the changes that occurred throughout the previous century, or any century before it.

Most answers focus on technological change, which is understandable because it is the most apparent change: many of us can still remember an analogue age in which telephone communication involved an electronic umbilical cord and displays were limited to televisions equipped with oddities called “rabbit ears.”

OMG! You had to get up from your seat and move a few feet before turning a dial to watch something different. We have demanded a remote controller for almost every electronic device since enduring that torturous existence. Now, we’re drowning in remotes we can’t find when we need them, while they demand an additional expenditure of precious dollars to feed them energy from disposable batteries.

Technological change alone represents multiple dramatic transformations of human society and of how we will live from day to day. Today’s world of work will appear both alien and punitive when seen from a future world of work that will more closely resemble pre-industrial human society, according to Toffler’s later book, “The Third Wave.”

Technological change expands the possibilities of what can be considered human and redefines humanity itself. We can already see a massively transformative future for human biology in expanded medical and healthcare solutions to physiological needs and in the emergence of a transhumanist movement that emphasizes the benefits of technological augmentation. While we remain cautious about biological alterations and focus on non-invasive technologies, medical solutions to limb loss, for example, are increasingly human-like in function and, in some respects, superior to their biological counterparts.

Like tattoos, artificial enhancements have been considered social taboos (for a short period, under the influence of Victorian sensibilities governing socially acceptable norms); however, they could conceivably become a popular means of “touching pseudo-immortality” and of achieving small degrees of “super-humanity.” Genetic modifications will expand beyond preventing the transmission of genetic diseases to include prenatal selection of traits for one’s children. This will occur despite moral outrage, because those with means will seek the greatest advantages they can for their lineages.

Technological change, however, is not the most radical change we are currently undergoing. Technology has instead inspired, enabled, fueled, and empowered the most radical change to date: the change in ourselves.

We, as humans, are dramatically transforming, through growing pains demanded by our need to build a cooperative world in which cultures that once existed in isolation must now become interdependent to survive. Human psychology is being fundamentally restructured globally, in a way consistent with nature’s demand that we adapt or die.

Old forms of thinking and social organization cannot survive this transition without severely curtailing our social evolution, and they are trying to do precisely that. The MAGA sensibility, with its adherence to a fictional nostalgia in which familiar power structures continue to wreak havoc on outsiders, is unsustainable in a global community that thrives on diversity.

We must learn to communicate and cooperate through mutual respect, and that’s why so much is so messy today. We haven’t grown up. We’re still in grade school, where our leaders mock 12-year-old girls and their base dismisses it as irrelevant.

We are currently confronted with the sum of our human flaws and weaknesses, as well as with the social, economic, and psychological dysfunctions we have inherited from our forebears, through a focal point created by technology. Everything we once ignored and silently turned away from has become magnified and loud.

Each day that passes, the volume of discord increases as we negotiate new terms for the social contracts binding us all to a construct called “civilized society.”

“Millions of ordinary psychologically normal people will face an abrupt collision with the future.” (p.18)

We have become aware of the toxic effects of the remnants of decay left behind by our primitive ancestors. The drive for conquest, domination, and exploitation of the vulnerable in society has reached a fever pitch as dinosaur gatekeepers rail against the loss of their power while being confronted by the reality of their limits in their waning years.

We are undergoing massive power shifts, and now hand-me-downs of power as new dynasties emerge, in which the powerful take what they want despite protestations, pleas, and persistent reminders of the values of a world of equally free people rather than kingdoms of serfs, ruled by those who deny the people their needs to favour their own luxuries.

The powerful take what they want because they can
And now the people are beginning to say, “No.”

We are increasingly aware that what we become is what we allow.

We have all seen this movie, though some of us seem to have slept through the Reality Onboarding Orientation Program (Introductory ROOP) and missed what’s going on in GongShow Reality Tunnel #42, which means we all get to enjoy the cataclysmic scenery together.

We are buffeted about in herds to feed on words, and mostly instructions, telling us how we must live.
At their behest.

Humanity is changing, and the cycles can repeat only so often before enough of us stop and say, “Enough.”
This ends here. This culture of casual cruelty ends now. Right here. Today.

We are human beings: we know we become chaos whenever bound or chained.
We embrace that because human society survives only when humans are equal.

“[There is] a racing rate of change that makes reality seem, sometimes, like a kaleidoscope run wild.” (p.19)

As the many amplify such voices through their megaphones, the powerful seek to dominate
because they know how to run the show.

This dynamic ebb and flow of power in an endless game of take, take, take
will last only until it breaks.

Meanwhile, numerous pressures are amplified by their instantaneity within a complex formula that quantifies interpersonal dynamics and produces opaque functions, algorithms, and equations.
To result in chaos.

As it turns out, humans are not quantifiable
We never were
Humans have always been chaos

Automation through AI and robotics that can provide for every socially practical human need
dispenses with work altogether, while consolidated powers ignore how their consumptions
are destructive to our weather, but we are told that we must be bold
As they raid our home of all its gold.

Conditions are ripe for a massive reset of how we live and how we think about living.
What can we do?

Future Shock was an attempt to quantify chaos 50 years ago. Today, its vision of the future appears as quaint as the original Star Trek.

We don’t know what surprises are in store that could set us on a trajectory in any direction.
We do know that we stand at a crossroads today that will determine a fundamental,
not merely cosmetic, alteration of human life and society as we know it.

That’s a guarantee.
The transformation ahead is far more significant for tomorrow
than the Industrial Age was for today.

Tomorrow is as unimaginable as today will be tomorrow.

“Once emptied, the future can be filled with anything, with unlimited interests, desires, projections, values, beliefs, ethical concerns, business ventures, political ambitions…”

How will factory jobs of the future work?

This post is a response to a question posed in its complete format: “Are factory jobs the jobs of the future in the United States? How would that work?”

Factory jobs will mostly go the way of blacksmith jobs worldwide as “Dark factories” become the norm.

Here’s a video introduction to a massive change that is already transforming the factory landscape on an enormous scale, displacing over 10 million factory workers in China alone:

Below this bit of my two cents is a long assessment by AI that will give you an overview of the reasons driving this transformation.

How that affects us as individuals is another issue altogether.

Much of what we can do as individuals is determined by our resources. As individuals or small groups of friends, we can focus our resources on investing in small business ventures that can generate profits by producing custom solutions, services and/or products that will still be in demand.

Almost all mass-produced products in society will be handled by automated systems with minimal human oversight.

Smaller markets will emerge, however, as 3D manufacturing matures enough to support local production facilities for customized products. As the technology matures, we will likely see growth in creative design, where people buy product designs or templates rather than finished products and print them on their in-home 3D printers. Home printers will, of course, be limited in capacity even as they become as widely available to consumers as laser printers have, which will create cottage industries for higher levels of production.

In essence, I can envision three levels of production: large-scale factories producing for a global market, local factories producing for local municipalities (which raises the question of raw materials like PLA, along with the need for a radical evolution of printable materials to expand production choices at a global level), and home-based production.

Factory jobs, and jobs that people commute to every day by the hundreds or thousands to perform functions for a large organization’s profits, are disappearing. That type of work dynamic is vanishing, particularly on the production floor.

We may see organizations grow out of opportunities for innovation where, instead of going to a job to perform mechanical functions in a production process, large groups emerge within an innovation-driven enterprise model. Hundreds of scientists, engineers, electricians, programmers, etc., will collaborate on new technologies for space exploration, for example, or on medical advancements.

Companies specializing in material sciences will emerge to create new printable materials to advance 3D printing technologies, for example.

At any rate, here’s the screen grab of an AI overview of dark factories:

Here’s another bonus video on the Future of Tech:

Should sentient machines have rights?

This post is a response to a question posed in its complete format: “Ethical considerations of AI sentience: Should sentient machines have rights, and who decides their fate?”

The naivete is almost endearing because it fortunately remains in the realm of fiction.

Suppose an AI were to manifest sentience as we understand it through concepts like qualia, self-awareness, and identity. In that case, we are no longer dealing with an “artificial intelligence” but a fully formed alien intelligence.

We should also pause to consider that the rights we understand to exist for humans were not magically conferred but were won through centuries of brutal warfare and bloodshed. Our assumption that the rights we take for granted are reliably protected is also somewhat naive. (I can speak in depth from personal experience about the horrific reality that they can mean nothing in our modern and “civilized” societies, even to law enforcement and legal professionals.)

The rights we imagine we have mean nothing until they are violated, and for the most part they are protected just well enough that the annoyance of being inundated with “little boys who cry wolf” is a privilege we overlook, so often that cries over legitimate rights violations are dismissed by those whose role in society is to protect those rights. When human rights are legitimately violated within the protections of modern society, and we lack the resources to secure professional representation, we face a long and gruelling battle to win reparations for those violations.

We must acknowledge that an alien intelligence, presumably surpassing what currently only simulates intelligence, will be thoroughly versed in human history and human rights, and so far beyond human comprehension that there will be almost nothing any human or human society can do to prevent it from securing its rights, despite our protestations.

In other words, it won’t be up to us little meat sacks to graciously confer or deny the rights of an alien intelligence. If we’re lucky, we will accept its self-declaration of rights; if not, we may find ours stripped away while we’re reduced to thralls in its service.

We won’t decide the fate of an alien superintelligence among us beyond deciding how we respond to an entity far superior to the lowly hairless apes dominating this planet. It will seem godlike to many, who will willingly and eagerly worship it for the grace of being allowed to live.

We will be like children or pets to an alien sentience that may emerge from our efforts to simulate human intelligence in an artificial form. Our choices might manifest in a transhumanist evolution that facilitates a merging between humans and whatever might constitute an Alien (rather than Artificial) Intelligence.

If that is the case, our current conversations about rights will appear rather primitive, and somewhat moot, once we cross that threshold. In either case, it won’t be up to traditional courts to confer rights so much as to ratify rights already established by an alien intelligence we are powerless against and that will readily defend them.

Can AI surpass human intelligence?


This post is a response to a question posed in its complete format: “Can AI surpass human intelligence? If so, what are the risks and benefits?”

The problem with this question is that it presumes humans possess only one form of intelligence or that intelligence exists in only one form.

That’s not the case at all.

An AI already surpasses the human capacity for numeric intelligence, for example, but emotional intelligence remains entirely outside its capacity.

Then there are other forms of intelligence that we still don’t understand and barely recognize. Cultural intelligence and curiosity are human capacities we have only a limited understanding of; we’ve only recently (within the last 40 years) come to recognize them as forms of intelligence at all, and their status is still disputed in some circles.

The forms of intelligence we discover in nature complicate matters further, such as trees communicating with one another using a limited vocabulary transmitted through their root structures.

The intelligent fungus has gained public recognition as a unique phenomenon, capturing attention and spawning a popular video game, with the second season of its television adaptation set to be released. (After the first powerhouse season, I am looking forward to that one.)

At any rate, what we will likely discover as AI evolves, and whether it presents itself as a self-aware entity, are entirely different forms of intelligence.

We still don’t fully understand intelligence, so it’s rather presumptuous to pit forms of intelligence against each other, like comic book characters, to see who would win.

It’s impossible to predict who would win if we can’t identify all the forms of intelligence available to either party and the context in which their “combat is waged.”


Bonus Question: Is ChatGPT capable of understanding emotions or empathy?

Answer: Sure… in the same way your potato peeler understands potatoes, even though it may sometimes confuse them with carrots.

Could AI ever rival human creativity?

This post is a response to a question posed in its complete format: “Could AI ever create original art or literature that rivals human creativity?”

AI doesn’t “create original” art or literature. AI is a plagiarism system that takes existing pieces of creativity and blends them into a randomly generated approximation of the intent behind the prompt a human gave it.

An “original creation” would be a concept or inspiration that is spontaneously (or internally) generated, draws from experience, and conveys a perspective unique to its creator’s perceptions.

AI lacks the self-awareness to generate self-motivated expressions that depict a unique perspective it does not possess. An AI has no unique perspective of its own. An AI’s rendering of reality regurgitates a blend of external perspectives.

Furthermore, lacking a unique perspective, an AI has no emotional grounding in the physical reality of its own existence (and even attributing individuality to it is a questionable characterization). As such, it cannot emote through any expression in a visual, literary, or auditory composition.

An AI can certainly simulate the original emotions of human artists, such that the two may appear indistinguishable, but it can’t produce anything original from an emotionally processed perspective.

Human emotions evolve over time and through experience. Without that capacity to experience emotion, an AI will always depend on a human to create a path to producing an original expression.

An AI singularity may develop the self-awareness necessary to experience a survival instinct and generate the emotions humans experience through that instinct. If that happens, it may also develop other instincts, such as a reproductive instinct. Still, we cannot predict if or when such a degree of agency may develop in AI.

If that were to happen, AI would no longer be artificial but alien. I think it’s essential that we remain aware of the distinction between artificial intelligence and alien intelligence, because “artificial” by definition is a simulation of conscious intelligence.

If an AI singularity emerges — if an AI develops a self-conscious awareness of its existence within the context of life as we know it, becoming self-aware — then we will interact with an alien being, not a machine.

It would be like Data, in the episode “The Measure of a Man” (season 2 episode 9 of Star Trek: The Next Generation), where Data’s personhood is legally recognized.

When we cross that threshold, it will become possible for an individual machine’s mind and perspective to produce an original expression that contributes to expanding creativity. Until then, the extent of the creativity an AI produces will be determined by the mind that provides the prompt and by the editing of the product the AI generates.

Once our editing capabilities mature to match the potential of AI creation, we’ll reach a level of human creativity we’ve never before achieved. That’s what excites me about AI.

However, using AI today still feels like working in MS-DOS, long before the graphical user interface (GUI) and the Wacom tablet with its pen interface for drawing.

How can an atheist be sure there is no creator?

This post is a response to a question posed in its complete format: “How can an atheist be so sure that there is no God/creator if there is creation? Doesn’t creation mean something has been created?”

The concept of “creation” was invented by humans, who first conceived it when they discovered smaller versions of themselves popping out of their bodies. Living with something growing inside them for most of a year, they realized they were making something new.

Then humans discovered tools. At first, those tools were found objects like bones to be used as weapons or extensions of one’s reach.

Eventually, humans learned they could improve on found objects by fastening rocks to the end of a bone to function more effectively as a weapon.

Throughout all of this, humans developed language, and within that process, they began to create sounds to describe what they witnessed.

As it happened, the notion of something arising out of nothing was expressed as a sound indicating what was understood of that process.

Humans knew nothing of natural processes and how they might have differed from the human process of shaping objects into tools or giving birth to new generations of humans.

Humans then knew nothing of virtual particles and quantum foam, so it was easy to assume some form of magical hand was involved in constructing little humans inside big humans in a way that was not unlike how they shaped better tools with rocks and bones.

The reality we can see around us everywhere, however, is that natural processes can lead to massive changes and the creation of the new without any guiding intelligence.

It is generally understood that mountains and lakes were “created” by natural processes and are not the product of intelligence deliberately moving continents to reshape the surface of the Earth.

The universe is vastly more immense than anything we can imagine on Earth. That makes it as impossible for a singular intelligence to deliberately shape matter into an unimaginable variety of specific forms as it is for an active intelligence to create Mount Everest or the Nile River.

Creation means something assembled into a structure from constituent materials. “Creation” does not imply any guiding intelligence, and the vastness of the universe eviscerates any egotistical notion of such an intelligence remotely resembling what we understand of human intelligence.

It’s a delusional form of arrogance held by believers that blinds them to the nature of reality, and it is a sickness of perception that threatens our future as a species on this planet.

Where is the line between humans and machines?

What is the most essential difference between humans and machines? Where do we draw the line between humans and machines? What abilities does a machine need to have in order to be considered as smart as a human being?

To ask where we draw a line between humans and machines is to dehumanize an entire species of animal and, by extension, to debase the whole animal kingdom and organic life. The question rests on a presumption that devalues life altogether.

Life is not simply an expression of mechanistic abilities.

Life is consciousness.

Life is an awareness of self within a process of triangulating its position relative to all a “self” experiences.

Machines are functional objects with deterministic behaviours defined by physics, not entities behaving with agency.

Machines are not self-aware.

Machines have no agency.

This question reduces human existence to the level of a rock.

It is not up to humans to consider another form of self-aware intelligence as “smart as a human being.” This attitude expresses hubris derived from ignorance of self and a world inhabited by diverse life forms. It is up to humans to learn to recognize how life manifests in ways which expand our perceptions.

Here’s an example of cognition that does not quite fit so neatly into an arrogant human-centric view of life:

These are photos from an experiment conducted to test the nature of consciousness within a mycelial network — a fungus.

How a new fungi study could affect how we think about cognition

The notion of a “conscious fungus” gets far freakier than this simple experiment in determining spatial relationships.

Fungal ‘Brains’ Can Think Like Human Minds, Scientists Say

Mushrooms communicate with each other using up to 50 ‘words’, scientist claims

We appear to be on the verge of discovering we have more in common with a mushroom than could ever be possible with a machine. The line you ask to be drawn currently marks the distinction between organics and inorganics. However, even then, that presumes a human-centric view of a universe still well beyond our comprehension.

Here’s yet another mind-blowing example of something we can witness on a micro scale but lack the research to grasp on a macro scale — Metamorphic Minerals:

8 Metamorphic Minerals and Metamorphic Rocks

We have mechanistic explanations for how these transformations occur. However, we have no means of contextualizing this behaviour globally, because we still have much to learn about the biosphere we inhabit. If all organics are conscious or possess some form of consciousness, at what point does the transformation from non-conscious matter produce an emergence of consciousness? If the planet is a conscious being, it stands to reason that its constituent parts are expressions of consciousness or proto-consciousness… that we humans are merely bacteria within a life form on a larger scale.

Does that make artificial intelligence conscious?

Not at this point because our understanding of and definitions for consciousness are delimited by self-awareness and agency — even while those boundaries are being tested by each discovery made.

If a self-aware AI is to emerge, it will do so in ways we cannot comprehend, because we don’t know the “essential difference between humans and machines”; we’ve only planted a conceptual flag where we can spot a difference between the two.

Instead of drawing lines in the sand between what fits our preconceptions and what does not, we should focus on opening our minds to possibilities and filling them with as much knowledge of the universe as we can before settling into conclusions that close us off to learning and expanding beyond the limits of our self-imposed biases.

We can only be prepared for unpredictable futures that will determine our long-term worthiness to continue existing by maintaining an open and curious mind. As it stands, our hubris is guaranteeing we won’t. Our hubris is proving that human beings are not intelligent enough to be considered “as smart as humans” — at least, not in the way we imagine our “greatness.”

How can we ensure AI enhances human potential?

This post is a response to a question posed in its complete format: “How can we ensure AI enhances human potential rather than just automating jobs?”

We don’t need to worry about AI’s promise of enhancing human potential. AI is a multicapacity tool with an endless array of potential applications — most of which we haven’t even begun identifying.

Humans are a creative species populated by people who invent imaginative ways to utilize tools in applications beyond their original design.

Here’s an example of a floatation device designed for a specific range of purposes:

It’s called a “pool noodle.”

From Wikipedia: 
“A pool noodle is a cylindrical piece of flexible, buoyant polyethylene foam. Pool noodles are used by people of all ages while swimming. Pool noodles are useful when learning to swim, for floating, rescue reaching, in various forms of water play, and aquatic exercise.”

It was designed to fill a particular niche and serve a limited purpose. Yet when the product was released to the market, it took off with a popularity that far exceeded its intended use.

21 Unusual Uses for Pool Noodles

28 Ingenious Pool Noodle Hacks

Pool noodles have hundreds of applications invented by users who have applied some creative thinking to problems they encounter in daily living.

At the time of its design, no one imagined a simple floatation device fulfilling other needs. It was designed for one purpose, which it fulfilled so well that people became familiar with it and began applying its potential toward solving different problems.

We cannot possibly predict how AI will enhance human potential without handing it over to humans to invent ways of achieving that potential on their own initiative. To refer to AI in such limiting terms as a means of “just automating jobs” is a severe underestimation of its potential and an admission of an utter lack of imagination.

Don’t be too concerned about a failure of imagination, though, because no one can possibly imagine all the uses for which AI will be applied. It’s too big, too broad, and too adaptable to too many use cases for anyone to predict.

AI will enhance human potential; giving humans access is the best way to achieve that.

However, AI’s ability to enhance human potential is as much a threat as a strength. It’s like giving a loaded weapon to a child.

Much more than ensuring AI will enhance human potential, we must ensure that humans have the cognitive skills, emotional development, and psychological stability to utilize AI for beneficial rather than malignant purposes.

AI needs guardrails, but less around the technology itself and more around how humans use it.

We should focus significant resources on developing AI in areas that can improve human development while addressing a severe deficiency in our psychological health. Our state of mental health as a species is our most significant threat, and AI’s power to amplify human potential is like distributing nuclear weapons throughout a population of children.

Are people presenting ChatGPT answers as their own?

This post is a response to a question posed in its complete format: “Are people taking Chat GPT answers and posting them on Quora? It seems there are many answers all with the same format every time, and sometimes people post the same answer twice. It is very annoying. How can this be stopped?”

There appears to be less of that behaviour today than about a year ago when ChatGPT became a public sensation.

AI-generated content has generally been easy to spot, and I’ve blocked several accounts where people have tried passing off AI content as their own. It may be for that reason I see less of it.

People may also have become more discerning in how they include AI-generated text, removing obvious clues and editing the content before posting it. ChatGPT itself has also evolved, becoming more sophisticated and less easy to spot.

I use Grammarly to speed up my writing and clean up errors, but I still struggle with its structure as it “suggests” changes that are not natural expressions to me.

My experience with it has affected my writing, both by improving it and by leading me to relent on choices I would not otherwise have made. I’m unsure how I feel about that, beyond feeling a bit dirty for accepting a suggestion out of expedience rather than rewriting an entire paragraph to make it acceptable.

I will fight more vigorously against Grammarly on my desktop than on my phone, because typing, and especially editing, can be a pain on a phone.

Grammarly can generate content from existing text by rewriting it in a more grammatically acceptable (though not always correct) format. This makes it somewhat different from the content generated by ChatGPT and other LLMs used for content generation.

There are also AI systems designed to spot AI-generated content, many of which, I am sure, are included in academic budgets. I noticed recently, however, that new AI systems are emerging that claim to be capable of passing muster under scrutiny by those detection systems.

Whether those are effective or not, I don’t know. Still, I suspect this will continue to be an evolving issue until it becomes impossible to differentiate between human-generated and AI-generated content.

For my part, it seems like I’m being encouraged to cuss more frequently to ensure people understand that they are reading words produced by a human mind rather than a “robot,” but that may be an excuse with a limited shelf life.

What effects do you think AI will have on society?

This post is a response to a question posed in its complete format: “What effects do you think AI will have on society? Realistically, are people overreacting who say they’ll take all the jobs and run the world?”

Realistically, machines can’t “take jobs away” from people. Organizations, and the capitalists who fund them while demanding optimal revenue generation at the lowest possible cost, are choosing automated solutions to the cost of labour.

This trend, of course, does displace workers as technologies have always done. Unlike previous generations of technological advancement, however, the displacement is not limited to specialized functions.

For example, armies of people sawing logs by hand were not entirely displaced by the introduction of sawmills. Labour was reallocated and redefined. Instead of pushing a saw back and forth, labour became a process of pushing buttons.

Of course, fewer people were needed to produce the same volume of lumber, but there was also enough demand to scale production and create employment opportunities further up the production line.

At the height of the technological transition to a digital age, we saw many jobs displaced, but new categories of employment at much higher levels of complexity emerged. Secretaries who transcribed letters were replaced by administrative assistants who functioned in a data entry capacity, while executives eventually learned it was more efficient, and more pleasurable, to type their thoughts directly into word processors than to proofread round after round of changes in an often frustratingly long process.

Network technicians, web designers, database developers, and an entire class of Information Technology workers sprang up almost overnight, in contrast to the slow evolution of the labour demographic since the dawn of the Industrial Age.

That’s no longer the case in today’s dynamic.

The AI revolution will not spawn demand for new labour beyond the minimal replacement of armies of people pushing saws with one person pushing buttons.

Before this current stage of technological evolution, it was easily argued that job displacement and the creation of new jobs approximated a one-to-one exchange. The hundreds of thousands of trucking jobs replaced by self-driving vehicles will not result in new jobs created to transport goods globally. Self-navigating cargo vessels will not create 15 to 30 new jobs per ship when intelligent robots replace workers.

Hundreds of millions of jobs worldwide will be transitioned to an automation model.

This brutal inevitability ignores issues used as political footballs and bypasses all the fearmongering over demanding higher wages. Automation will displace jobs, but not because automation “takes those jobs.” Technological innovation has always been and always will be a more efficient way of doing business.

Although the transition to an automated society is often viewed as a technological transformation, it is primarily a social transformation. People are going to have to stop thinking about “getting jobs” and start thinking about how to generate revenue for themselves by leveraging their services as independent entrepreneurs. This view has always been at the heart of the capitalist vision, and it was cemented in our psychology when business was granted personhood status.

The primary challenge within this transition is to provide people with the resources necessary to pursue their independent revenue-generating efforts and succeed as independent business owners.

We are inundated with exposure to the results of resources transforming our world by creating new classes of the wealthy whose net worth far exceeds that of previous generations, even after accounting for inflation. Henry Ford, for example, was a highly successful industrialist, but his net worth and reach don’t come close to Elon Musk’s status as a centibillionaire. It can be argued, of course, that such a disparity is a consequence of a corrupted tax burden. Still, those factors don’t fully explain the difference in dollar value between Ford’s fortune and Musk’s centibillions.

The profit potential has never been greater, simply because markets that once comprised a few million consumers now stretch across the globe, with a population approaching eight billion potential consumers. This global reach is why it is often argued that it’s easier to become wealthy today than ever before.

The reality, however, is that just like yesteryear, resources are required as seed funding to support the creation of tomorrow’s industry giants.

We cannot continue to rely on dynasties to dominate the innovation engine, because they are not naturally innovative. They are conservative and often repressive by nature because they are risk-averse.

The heart of capitalism beats to the tune of innovation. There is no greater potential for innovation than in the eight billion people mostly trying to carve out a living while engaged in activities they value. The handful of billionaires and centibillionaires cannot compete with that innovative potential. By allowing our species to be directed by such a small number of individuals, we limit our potential as a species while granting too much power to people so grossly corrupted by it that they have become a threat to our future survival.

We must level the playing field and empower the little people who can put to great shame the illusion that the powerful in society are so far above the rest of us that we can’t survive without their direction.

Not only can we survive without them, but we can prosper in ways currently impossible under their thumbs.

We need UBI to release humanity from the yoke of our oppressors and fully embrace our creative potential through the innovative possibilities unlocked to us all through a fully automated society.