The Lie That Makes AI Dangerous

The greatest threat posed by artificial intelligence is not the technology itself. It is the widespread failure to question the direction in which society is being taken.

Inaction – or passive acceptance – is still a choice. And right now, society is choosing to welcome every new form of AI, and every narrative that accompanies it, without scrutiny, challenge, or the responsible oversight that should exist on behalf of the public.

Because of this, only the part of the story that sounds good is visible. The rest is ignored, dismissed, or assumed to be impossible.

Yet inevitability is not a fact. It is a story – one that has been repeated so often that it feels like truth.

Before exploring a healthier, human‑centred approach to AI, it is important to acknowledge something uncomfortable: the warnings and “scare stories” are not entirely wrong. They could become real if the current trajectory continues. Job losses, surveillance capitalism, social credit systems, and the ability of authorities or corporations to monitor, restrict, or condition everyday life are no longer distant possibilities. They are emerging realities.

If left unchecked, society could drift into a world where freedoms – physical, economic, and even cognitive – are constrained by digital systems that ordinary people do not control. A world where what individuals buy, read, watch, eat, or even think is shaped or limited by algorithms designed to serve interests that are not their own.

At its most extreme, AI could resemble the dystopias portrayed in films like Terminator. Not because AI is inherently malicious, but because the motives driving its development today are rooted in profit, power, and control – the same flawed incentives that already distort economic and political systems.

AI is not evil. But the forces shaping it often act without the moral responsibility, empathy, or long‑term thinking that such powerful tools demand.

They are supported by institutions that have drifted far from the responsibilities they are meant to uphold.

As a result, the guardrails, safeguards, and “dead man’s switches” that should have been built into these technologies from the beginning simply aren’t there.

Under a different approach, none of this would be inevitable.

Jobs would not be threatened.

People would be prioritised above profit.

Technology would remain under local, human control – never remotely overridden, never given ultimate authority over human decisions.

But to understand why this alternative feels unrealistic, a deeper truth must be confronted – one that goes far beyond AI.

Why Change Feels Impossible

The system society lives within today depends on people believing that alternatives are impossible. It depends on the assumption that the digital world is the “future”, that physical experience is outdated, that human sovereignty is negotiable, and that the only meaningful actions are those that fit neatly inside the structures already in place.

This conditioning is not new.

It is the same mindset that discourages questioning the role of money, the pursuit of endless growth, or the economic assumptions inherited without consent.

It is the same mindset that responds to new ideas with “that wouldn’t work”, not because the ideas are flawed, but because the system has taught people to believe that nothing outside its boundaries is realistic.

Now, this same psychological trap is being applied to AI.

The public is encouraged to see AI as unstoppable, unquestionable, and unquestionably beneficial. Resistance is framed as futile. Questioning is framed as naïve. The only “sensible” response, it is implied, is to adapt human life to the technology rather than shaping the technology to serve human life.

But this is not inevitability.

This is learned helplessness – a belief engineered by a system that benefits from centralisation, dependency, and digital control.

The Human Future Is Physical, Not Digital

Human beings are not digital creatures.

They are physical, relational, meaning‑seeking beings. Wellbeing, identity, and purpose come from the physical world – from relationships, contribution, community, and agency.

Digital tools can support these things, but they cannot replace them.

When AI is used to replace physical experience, human judgement, or human contribution, it becomes harmful.

When it is used to support these things, it becomes a force for good.

The difference is not technological. It is philosophical. It is about who the technology serves – and who it is allowed to control.

Human Sovereignty Must Remain a Foundational Principle, Not a Variable to Be Optimised

A human‑centred approach to AI begins with a simple principle:

Human sovereignty must remain a foundational principle of any responsible approach to AI.

In practice, this means recognising that AI exists to support human judgement, human values, and human experience – not to replace or override them.

When systems are allowed to condition behaviour, arbitrate choices, or quietly close off alternatives, human agency is diminished, even if efficiency appears to increase.

This is not simply a technical concern. It is a constitutional one. Societies that delegate meaningful decisions to systems they cannot collectively understand, contest, or control risk allowing tools to become authorities, and optimisation to quietly replace consent.

For AI to serve human flourishing, it must remain subordinate to human decision‑making and grounded in the physical and social realities of human life.

Any system that becomes a gatekeeper of freedom, access, or participation ceases to be a neutral tool and instead reshapes the conditions of sovereignty itself.

Actions Speak Louder Than Digital Words

Change will not come from thinking alone.

It will not come from digital debate, symbolic gestures, or waiting for someone else to act.

Change happens when people behave differently – physically, locally, and together. When they stop acting as if the system is unchangeable. When they stop accepting inevitability as truth. When they stop believing that alternatives are impossible.

The future of AI will not be determined by technology.

It will be determined by whether people rediscover their own agency.

Stepping Back Requires Stepping Out of the Story

A different relationship with AI is possible.

A different direction is possible.

A different future is possible.

But only if society first recognises that the barriers to change are psychological, not technological. The limits that feel immovable are not real limits. They are inherited assumptions – and assumptions can be questioned, challenged, and replaced.

A human‑centred future is unlikely to emerge from the system as it currently exists – not without deliberate rupture, refusal, or redesign. It will emerge from people rediscovering their own sovereignty, their own capacity for contribution, and their own ability to shape the world around them – physically, not digitally.

AI is not the threat.

The belief that society cannot choose differently is.

The Human Future Is Built on Physical Experience – Not a Digital One

Everything meaningful about the human journey is built on the foundation of physical experience. Not digital. Not remote. Not externalised. When our locus of attention, qualification, and authority is moved outside ourselves, we become willing to accept any tool or process that makes that external dependency feel easier. This is why our surrender to digitisation appears logical and natural, even though it is actively eroding the last remnants of freedom and sovereignty we still hold within ourselves.

Digitisation is not the beginning of this problem. It is the final step in a long process of externalising personal power – a process in which individuals gradually handed over their sovereignty to third parties that were once local, then regional, then national, and now global. As these centres of power moved further away, our agency, control, and ability to determine anything of real meaning in our own lives diminished. Into that vacuum flowed the tool that now shapes every motive, every thought, and every decision: money.

Progress vs. Direction

Some will argue that this chronology is simply “progress”. But progress in the sense of advancement is not the same as progress in the sense of direction. Humanity can advance technologically while moving in entirely the wrong direction. And that is exactly what has happened.

The dynamics of inequality have existed long enough that many now believe hierarchy – even the patriarchal structures that underpin it – is natural. But at the root of all hierarchy lies something far simpler: an imbalance in the basic human relationship. Some take more than they should because they believe it is acceptable to do so. Others give more than they should for the same reason. Fear – fear of lack, fear of isolation, fear of consequence – drives both sides of this imbalance.

Over generations, this dynamic hardens into a chain of hierarchy. Those at the top come to believe it is their right to control everything beneath them. Those at the bottom come to believe this is the natural order of things. But it is not natural. It is simply the next generation of victims inheriting a system built on the myth that domination is normal.

Distance: The Medium of Disempowerment

Distance is the mechanism through which personal sovereignty is removed. When power is located somewhere else – in a distant institution, a remote authority, or an unseen system – people begin to assume that their own power no longer exists. The lived experience of this reality becomes compelling enough to create a cultural belief system that reinforces the very conditions that disempower us.

This is how centralisation becomes self‑perpetuating.

The Digital and AI Revolution: A False Promise

Many questions already surround the so‑called “technological revolution” and the rise of AI.

What would it mean for humanity if technology took over everything? And more importantly: how could such a transformation even be financially sustained?

Because whether intended or not, the direction of travel is clear: the complete submission of humanity to an external locus of power. Every element that makes human life valuable is being placed under the control of third parties.

Thought, creativity, decision‑making, even the right to act – all increasingly require permission from someone or something else.

This is the antithesis of human freedom. It is the extreme opposite of what life is meant to be.

Human experience is built on the freedom to choose – even if that choice leads to difficulty, even if it leads to suffering, even if it leads to mistakes. Choice is the mechanism through which we learn. Without choice, there is no growth, no meaning, no journey.

Free Will Requires the Absence of Undue Influence

Free will can only exist when no external factor exerts undue influence over the life of the individual. This is why few can remember anything before their current lifetime, and why doubt about what comes next is necessary.

If we remembered everything – past lives, consequences, outcomes – our choices would be influenced.

If we knew with certainty what comes after death, our decisions would be predetermined.

For free will to be genuine, the field must be clear.

But the patterns established by our earliest ancestors – patterns of fear, domination, control, and imbalance – echo down through generations. These inherited distortions shape the world we now inhabit, creating the very mess humanity must confront today.

AI Is Not the Enemy – The System Using It Is

AI itself is not inherently bad. It is extraordinary technology with the potential to support and enhance human experience.

But under the current money‑centric system, AI has become a tool for profit, centralisation, and control.

Instead of enabling humanity to flourish, it is being used to replace human value, not elevate it.

And the truth is this:

The current AI model is financially unsustainable.

The infrastructure, energy, hardware, and investment required to maintain and expand AI systems exceed anything humanity has ever attempted – far beyond roads, railways, or industrial revolutions. The global economic system, already overburdened and extractive, cannot sustain the demands of the tech industry. The imbalance is too great.

Like any ecosystem pushed beyond its limits, collapse becomes inevitable.

The Coming Correction

The collapse ahead is not a single event. It is a necessary correction – the unavoidable consequence of a world that has moved too far out of balance.

AI may be the catalyst, but it could just as easily be:

  • financial market failure
  • global supply chain breakdown
  • geopolitical conflict
  • civil unrest
  • or all of these combined

The signs are already visible. The system is cracking under the weight of its own contradictions.

Those who control the technology will not achieve what they intended. The masses, whose lives have been upended by a chapter in history defined by selfishness and self‑interest, will eventually recognise that the system is too broken to repair.

At that point, humanity will have no choice but to begin again.

Universal Law Will Not Allow Otherwise

Natural or universal law is not mystical. It is the simple truth that life cannot be built on domination, coercion, or imbalance. Human existence is not meant to revolve around material gain, control, or the belief that some lives are worth more than others.

Civilisations that violate this law collapse.

Atlantis – whether literal or symbolic – stands as a warning. Cultures that believe they can override the basic principles of existence eventually destroy themselves. The parallels with today are striking: a belief that anything can be reshaped to our will, that limits do not exist, that consequences can be ignored.

But life is not meant to be conquered. It is meant to be lived, learned from, and respected.

The Return to Local Sovereignty

People are meant to be sovereign. And sovereignty is meant to be local – rooted in physical presence, real relationships, and face‑to‑face human experience.

Our relationship with the outside world was intended to be a learning tool: people, community, environment, and the processes that sustain life.

Personal sovereignty is the freedom to know oneself, to be self‑aware, and to navigate life from a place of internal authority rather than external dependency.

Contrary to what the powerful claim, this ability is not reserved for the educated or privileged. Real intelligence and real love are embedded in the way we choose to live together.

Care and contribution are among life’s greatest lessons. The greatest fulfilment does not come from competition or superiority, but from the experience of facing each moment with the freedom to do the right thing without fear.

The True Meaning of Freedom

To be fully present – to work with both past and future in healthy ways – is to experience a peace that no material wealth, power, or control can buy.

This is the freedom that digitisation, centralisation, and the money‑centric system have taken from us.

And this is the freedom humanity must reclaim.