Overview: The Human Sovereignty Charter for Artificial Intelligence

The Human Sovereignty Charter for Artificial Intelligence, published on 3 March 2026, establishes a constitutional‑style framework designed to ensure that AI systems always remain subordinate to human authority, aligned with human dignity, and governed in ways that protect individuals, communities, and democratic values.

It provides a principled foundation for organisations, institutions, and governments seeking to adopt responsible, human‑centred approaches to AI.

The Charter is built on the belief that technology must enhance human life rather than replace human judgement, labour, or autonomy.

It sets out clear obligations for those who design, deploy, or manage AI systems, and it defines the rights and protections that individuals and communities retain in an AI‑enabled society.

Key Takeaways

1. Human sovereignty is non‑negotiable

The Charter asserts that humans must always remain the final decision‑makers. AI may support judgement, but it must never override, replace, or diminish human agency.

2. AI must serve human dignity and wellbeing

Every use of AI must be evaluated through the lens of human impact. Systems that undermine dignity, fairness, or community cohesion are incompatible with the Charter.

3. Transparency and accountability are mandatory

Organisations must be able to explain how AI systems work, what data they use, and how decisions are made. Hidden or unaccountable systems are prohibited.

4. Communities have rights, not just individuals

The Charter recognises that AI affects groups as well as people. Communities have the right to protection from harmful deployment, surveillance, or automated decision‑making.

5. AI must not replace human labour or judgement

Automation cannot be used to remove meaningful work, displace human expertise, or centralise power in ways that weaken democratic or social structures.

6. Oversight must be independent and ongoing

AI governance cannot be left to the organisations that build or profit from the systems. Independent oversight, community participation, and transparent review processes are essential.

7. Consent and understanding are essential

People have the right to know when AI is being used, how it affects them, and what alternatives exist. Consent must be informed, meaningful, and revocable.

8. Data belongs to people, not systems

The Charter reinforces that personal and community data must be protected, minimised, and used only with clear justification and safeguards.

9. AI must be designed for safety, not optimisation

The goal is not to make AI as powerful or efficient as possible, but to ensure it remains safe, predictable, and aligned with human values.

10. The Charter is adaptable and future‑proof

It includes mechanisms for amendment, review, and evolution as technology changes, ensuring it remains relevant and effective over time.

What the Charter Enables

  • A shared ethical foundation for organisations adopting AI
  • A governance model that prioritises human rights and community wellbeing
  • A practical framework for policymakers and institutions
  • A safeguard against harmful, opaque, or exploitative AI practices
  • A clear statement of human‑centred values in a rapidly changing technological landscape

Who the Charter Is For

  • Policymakers and public institutions
  • Educators and academic researchers
  • Technologists and AI developers
  • Community leaders and civil society organisations
  • Citizens seeking clarity on their rights in an AI‑enabled world

Why It Matters Now

AI is advancing faster than most governance systems can respond. Without clear principles, societies risk drifting into forms of automation that erode human judgement, weaken democratic accountability, and centralise power.

The Charter provides a structured, principled response – one that protects what is uniquely human while still enabling responsible technological progress.
