FAQs & Key Takeaways: The Human Sovereignty Charter for Artificial Intelligence

What is the Human Sovereignty Charter?

The Charter is a set of principles designed to make sure people stay in charge of technology, not the other way around. It sets out clear expectations for how AI should be used in society, and what rights individuals and communities have when AI affects their lives.

It is not a technical manual. It is a human‑centred framework for fairness, dignity, and accountability in an AI‑enabled world.

Key Takeaways

1. Humans must always remain in control

AI can support decisions, but it must never replace human judgement or authority. People make final decisions — not machines.

2. AI must respect human dignity

No system should reduce people to data points or treat them as objects to be optimised.

3. You have the right to know when AI is being used

There should be no hidden systems or secret automated decisions.

4. You can challenge decisions made with AI

If an AI system affects you, you have the right to question it and get a human review.

5. Your data belongs to you

Organisations must protect your information and use only what is necessary, with clear justification.

6. Communities have rights too

AI must not harm neighbourhoods, groups, cultures, or vulnerable populations. Communities can say “no” to harmful uses.

7. AI must be transparent and accountable

Organisations must be able to explain how their systems work and take responsibility for their impact.

8. AI cannot replace meaningful human work

Technology should support people, not push them aside or deskill entire professions.

9. Oversight must be independent

No organisation should be allowed to regulate its own AI systems without external scrutiny.

10. The Charter evolves with technology

It includes a process for updates so it stays relevant as AI develops.

Frequently Asked Questions

Is this Charter based on Asimov’s I, Robot?

No. The Charter was developed independently and is not inspired by Asimov’s work.

Asimov wrote science fiction stories about robots and their internal programming.
The Charter is a real‑world governance framework focused on human rights, community protection, and accountability.

People sometimes make the comparison because Asimov is culturally associated with “rules for robots,” but the Charter is about protecting people, not programming machines.

Why do we need a Charter for AI?

AI is increasingly used in decisions about:

  • jobs
  • healthcare
  • education
  • policing
  • public services

Without clear rules, these systems can become unfair, intrusive, or harmful.
The Charter provides a principled foundation to prevent misuse and protect human dignity.

Who is the Charter for?

It is designed for:

  • citizens
  • communities
  • workers
  • public institutions
  • policymakers
  • technologists
  • educators

Anyone affected by AI — which increasingly means everyone — can use it.

Does the Charter oppose AI?

No. It supports responsible, human‑centred use of AI.

It opposes:

  • replacing human judgement
  • unnecessary automation
  • unaccountable systems
  • harmful or opaque uses of technology

The Charter encourages innovation that strengthens society rather than undermining it.

Does the Charter have legal force?

Not automatically.

It is designed to be:

  • voluntarily adopted
  • used as a governance framework
  • referenced in policy development
  • a foundation for future legislation

It gives organisations a clear, principled way to use AI responsibly.

How does the Charter protect communities?

It recognises that AI affects groups as well as individuals.
Communities have the right to:

  • reject harmful technologies
  • demand transparency
  • expect fairness
  • protect cultural, social, and economic wellbeing

This is a major difference from most AI frameworks, which focus only on individuals.

How does the Charter protect workers?

It states clearly that AI must not:

  • replace meaningful human work
  • deskill professions
  • remove human expertise
  • centralise power in ways that harm workers

AI should support people, not make them redundant.

How does the Charter protect personal data?

It requires:

  • data minimisation
  • clear justification for data use
  • strong safeguards
  • transparency
  • accountability

Your data should never be used in ways that harm you or your community.

What makes this Charter different from other AI ethics guidelines?

Most AI guidelines focus on:

  • technical safety
  • risk management
  • responsible innovation

The Human Sovereignty Charter focuses on:

  • human rights
  • community protection
  • sovereignty and dignity
  • limits on automation
  • preserving human judgement

It is a constitutional‑style document, not a corporate ethics checklist.

In one sentence

The Human Sovereignty Charter ensures that AI serves humanity — never the other way around.

To Read The Charter

The Human Sovereignty Charter for Artificial Intelligence can be read in full online, free of charge, HERE:

To pay to download a copy for Kindle, please follow this link HERE:

Overview: The Human Sovereignty Charter for Artificial Intelligence

The Human Sovereignty Charter for Artificial Intelligence – Published on 3 March 2026 – establishes a constitutional‑style framework designed to ensure that AI systems always remain subordinate to human authority, aligned with human dignity, and governed in ways that protect individuals, communities, and democratic values.

It provides a principled foundation for organisations, institutions, and governments seeking to adopt responsible, human‑centred approaches to AI.

The Charter is built on the belief that technology must enhance human life rather than replace human judgement, labour, or autonomy.

It sets out clear obligations for those who design, deploy, or manage AI systems, and it defines the rights and protections that individuals and communities retain in an AI‑enabled society.

Key Takeaways

1. Human sovereignty is non‑negotiable

The Charter asserts that humans must always remain the final decision‑makers. AI may support judgement, but it must never override, replace, or diminish human agency.

2. AI must serve human dignity and wellbeing

Every use of AI must be evaluated through the lens of human impact. Systems that undermine dignity, fairness, or community cohesion are incompatible with the Charter.

3. Transparency and accountability are mandatory

Organisations must be able to explain how AI systems work, what data they use, and how decisions are made. Hidden or unaccountable systems are prohibited.

4. Communities have rights, not just individuals

The Charter recognises that AI affects groups as well as people. Communities have the right to protection from harmful deployment, surveillance, or automated decision‑making.

5. AI must not replace human labour or judgement

Automation cannot be used to remove meaningful work, displace human expertise, or centralise power in ways that weaken democratic or social structures.

6. Oversight must be independent and ongoing

AI governance cannot be left to the organisations that build or profit from the systems. Independent oversight, community participation, and transparent review processes are essential.

7. Consent and understanding are essential

People have the right to know when AI is being used, how it affects them, and what alternatives exist. Consent must be informed, meaningful, and revocable.

8. Data belongs to people, not systems

The Charter reinforces that personal and community data must be protected, minimised, and used only with clear justification and safeguards.

9. AI must be designed for safety, not optimisation

The goal is not to make AI as powerful or efficient as possible, but to ensure it remains safe, predictable, and aligned with human values.

10. The Charter is adaptable and future‑proof

It includes mechanisms for amendment, review, and evolution as technology changes, ensuring it remains relevant and effective over time.

What the Charter Enables

  • A shared ethical foundation for organisations adopting AI
  • A governance model that prioritises human rights and community wellbeing
  • A practical framework for policymakers and institutions
  • A safeguard against harmful, opaque, or exploitative AI practices
  • A clear statement of human‑centred values in a rapidly changing technological landscape

Who the Charter Is For

  • Policymakers and public institutions
  • Educators and academic researchers
  • Technologists and AI developers
  • Community leaders and civil society organisations
  • Citizens seeking clarity on their rights in an AI‑enabled world

Why It Matters Now

AI is advancing faster than most governance systems can respond. Without clear principles, societies risk drifting into forms of automation that erode human judgement, weaken democratic accountability, and centralise power.

The Charter provides a structured, principled response – one that protects what is uniquely human while still enabling responsible technological progress.


The greatest benefit of AI today will be a new dark age of stupidity and ignorance that our surrender to it will bring

There’s something very wrong with the AI story that we are all being sold:

Nobody seems to have noticed that the script of man’s pathway to the pinnacle of human intelligence is about to come to its end, as we hand our ability to think over to machines.

As I write, I’m wondering if the name ‘Artificial Intelligence’ was a deliberate way to hide the truth in plain sight, all along.

Not because the technological breakthroughs that are coming at us thick and fast aren’t very clever.

But because, just as we surrendered our values to an artificial, valueless and damaging world dominated by money that manipulates everything about the way we think, we are now about to give away even the ability to think, to systems and technologies that cannot genuinely benefit any human being other than those who own and run them.

In my eBook Actions Speak Louder than Digital Words, I talked about AI only having the ability to look back at history and the past. Even where ‘back’ meant what had been published or ‘sensed’ by the Internet up to the very moment the system was responding to a specific command.

This overlooked or deliberately whitewashed flaw of AI echoes one of the greater faults in the Human experience, where we inherently look backwards to our past experiences to provide guidance for the future.

This should be troubling enough.

But what wasn’t apparent even when I published that book in June 2023 was that, as AI began filling the web and the wider digital sphere with its own responses, musings and every other kind of AI-derived content, it would then begin leveraging ever more of its own diluted output as a source. That is almost certain to increase as human input and creativity dry up.

And the contribution of human creativity and intelligence to the smorgasbord of information and data that the AI engines feast on is most certainly drying up, as more and more of us surrender to the narrative that AI is now the only way, and to the accompanying suggestion that our jobs are threatened because AI can do things that we never could!

Dictating our future by using the past as our point of reference certainly holds us back and creates all sorts of difficulties at all levels of life that we didn’t ever need to have.

However, one thing makes that experience manageable and, in some ways, arguably beneficial too: our human creativity, and our ability to look at every new situation and make sense of it and its context in ways that allow us to build bridges into the future, mean that we are making progress all the time, even if that progress is slow. A machine limited to reading what has already happened simply cannot do this.

People – and many of them too – genuinely accept the stories and myths that we have been, and are still being, sold.

They believe, and in many cases have become fearful, that AI can already or very soon will take over every function that humans currently carry out within any business or organisation. Yet anyone using their common sense, or daring to listen to their inner voice, will recognise the very big question this raises: ‘Where in this future does that leave any need for me?’

AI is very fast at what it does and is able to look at potentially all the information that is available to us in digital form at the very moment in time that a question is asked or an instruction is given.

That – and only that – is the real magic of AI.

It is the reason that we are all simply accepting the idea that AI is already infinitely cleverer than humans could ever be. Just as those who benefit from it intend us to believe.

However, our acceptance that we no longer need to be creative or think for ourselves means two things. We will become increasingly dependent upon a pool of ‘knowledge’ outside of ourselves – albeit a very large one, comprising everything recorded, spoken, considered and then committed to the internet and digital platforms up to some earlier point in time. And that same pool of knowledge will become increasingly diluted by the growing volume of poor and corrupt information, data and ‘understanding’ that our already burgeoning use of AI is now spaffing out into the digital ether.

As you read this, Humanity is literally giving up the ability to think and create for itself to a machine-driven world that is capable of doing no more than looking back.

What is more, Humanity is surrendering these cornerstone survival abilities voluntarily, simply because those who benefit from us believing we are inadequate without technology have told us so. At any earlier point in world history, a change of the kind that overreliance on AI could be about to usher in would have needed something akin to an extinction-level event.

This uncomfortable truth will not stop those who stand to benefit from the AI takeover from pushing and promoting this path. They will continue peddling the myths that the takeover is inevitable and in our best interests, when it is nothing of the sort.

The technology we have available to us today will not live up to its greatest potential. Because the greatest potential of any technology man invents is to help improve the lives and experiences of all men, rather than to replace any one of them.

We know this to be true, because this has regrettably been the way technological advancement has impacted Humanity ever since the end of the Agricultural Age.

Since then, technology has always been employed to make money for those who own and control it.

The rise of each new technology has always come at a cost to all others at some level. No matter who they are or what their connections might be.

The reality we face is that it may already be too late to save the world we recognise from a fate that we have all unwittingly chosen, rather than one brought about by some event or catastrophe for which no one person could be held responsible.

However, if we are to halt the slide towards universal ignorance, with its potential to take us back into the dark ages, we must reassess, reimagine and regulate the uses of every kind of technology. So that technology’s only master is the public good, rather than profit and the disaster that is following hard in its footsteps right now.

If we value the Human experience and wish to improve it, it is time to learn, share and then live the truth: there is no need for any technology to replace jobs, other than so that a few can increase their profits and control.

The best way for everyone and everything to live well is without the complications and diversions that misappropriated technology imposes upon us. The technology we do embrace should always be used for the greater good and for the benefit of everyone involved.