Selective morality in business and government is still self‑interest – and AI exposes that truth.
Selective morality in business and government is still self‑interest: either you act ethically in every instance, or you are not acting ethically at all.
Amid the fear, excitement, and confusion surrounding the rapid rise of AI, remarkably little attention is paid to the words and behaviour of the people driving it. Tech leaders tend to appear only when unveiling the next breakthrough, not when answering for the consequences of the last one.
Much of the public debate focuses on whether AI will destroy more jobs than it creates, and whether ideas like universal basic income could soften the blow.
Industry figures often speak as if a post‑work utopia is inevitable – a world where everything is paid for and nobody needs to labour. But this narrative conveniently ignores the obvious question: who funds such a system when millions, perhaps billions, are stripped of agency, purpose, and the ability to contribute?
We may be heading toward a future in which vast numbers of people have nothing to do, no way to regain independence, and no meaningful choices left.
The myth that AI will “improve life for everyone” is easy to sell while the technology still feels novel and addictive. But nobody has invested billions into AI for altruistic reasons. The motivation is profit, power, and control – and the benefits will not be evenly shared.
Some of those leading the charge may genuinely believe they are building a utopia. But intelligence is not morality, and we routinely mistake technical brilliance for ethical authority.
We make the same mistake in politics when we assume legality and morality are interchangeable.
Recent events have made this clearer. A major AI company publicly pushed back against the US government’s desire to use its systems for military purposes. Whatever one thinks about AI on the battlefield, the episode revealed something crucial: the industry can say “no” when it wants to. The idea that AI’s advance is unstoppable or outside human control is a convenient fiction. The people building these systems can halt or redirect progress – they simply choose not to when the consequences fall on everyone else.
I’m not opposed to technological progress. I’ve written about AI for years, and I believe it can improve human life in extraordinary ways. But the greatest danger is not sentience or runaway autonomy. It is the fact that AI is being built and steered by people whose incentives are profit and dominance, not human flourishing.
AI should exist to elevate human life, not to replace human purpose.
Yet those controlling its development are already choosing which impacts they want and which they don’t. Their occasional flashes of “morality” appear only when their own interests are threatened.
If genuine morality had guided AI’s development, we would already see clear safeguards, transparent policies, and protections against the harms we are now scrambling to address.
Instead, we see selective ethics deployed only when convenient.
Policymakers and tech companies share responsibility for what AI becomes. But morality applied only at moments of their choosing is not morality at all. It is strategy – and we should treat it as such.
Further Reading: Context, Consequences, and Control
The essays below expand on the central claim of this piece: that AI is not a neutral force, and that selective ethics – applied only when convenient – undermine both human dignity and democratic control.
Together, they form a coherent critique of technological inevitability, post‑work mythology, and the moral shortcuts taken by those shaping the AI future.
I. First Principles: Work, Human Worth, and Moral Limits
These pieces establish the ethical baseline: why work matters beyond income, and why technological capability does not equal moral justification.
People Need Jobs More Than AI – and the Tech Revolution
https://adamtugwell.blog/2025/09/01/people-need-jobs-more-than-ai-and-the-tech-revolution/
This essay argues that work is not merely an economic function but a cornerstone of identity, agency, and social stability. It challenges the assumption that replacing human labour is an unqualified good, framing job displacement as a moral issue rather than a technical inevitability. It provides essential grounding for the claim that AI should serve human life, not hollow it out.
Just Because AI and Tech Can Make Roles Redundant Doesn’t Mean That We Should
https://adamtugwell.blog/2024/02/01/just-because-ai-and-tech-can-make-roles-redundant-doesnt-mean-that-we-should-make-them-so/
Building on the above, this piece confronts the “can therefore should” logic that dominates technology discourse. It draws a clear distinction between capability and responsibility, reinforcing the argument that ethical restraint is a choice – one that is currently being avoided rather than exercised.
Technology and Artificial Intelligence Should Only Fill Jobs When No Humans Are Available
https://adamtugwell.blog/2025/11/13/technology-and-artificial-intelligence-should-only-fill-jobs-when-no-humans-are-available/
This essay proposes a human‑first principle for automation: AI should supplement human effort, not pre‑empt it. It directly supports the central thesis that AI replacing human purpose is a failure of governance and values, not progress.
II. The Economic Myth: UBI, Abundance, and the Illusion of Care
These essays dismantle the comforting narrative that mass automation will be offset by generosity, redistribution, or effortless abundance.
As AI Ends Work: Waking Up to the Illusion of UBI and the Need for a New System
https://adamtugwell.blog/2026/01/20/as-ai-ends-work-waking-up-to-the-illusion-of-ubi-and-the-need-for-a-new-system/
This piece directly interrogates the promise of universal basic income as a solution to large‑scale job loss. It exposes UBI as a political placeholder rather than a structural answer, asking who truly benefits from a system where agency is removed and compensation replaces participation.
AI Won’t Make Life Cheaper for Those Who Cannot Work – and the Mega‑Rich Would Be Helping Now If They Planned To Later
https://adamtugwell.blog/2025/01/15/ai-wont-make-life-cheaper-for-those-who-cannot-work-and-the-mega-rich-would-be-using-their-money-to-help-others-right-now-if-they-were-going-to-do-it-for-everyone-in-the-future/
This essay challenges the faith placed in future benevolence from those currently accumulating unprecedented wealth through automation. It reinforces the argument that selective morality is strategic, not principled – and that promises of future fairness ring hollow when present injustice is ignored.
III. Power, Control, and the Fiction of Inevitability
These works expose how narratives of inevitability mask human decision‑making, profit incentives, and political convenience.
Do You Believe That AI Is About Progress? Think Profit, Think Greed – Then Think Again
https://adamtugwell.blog/2024/08/26/do-you-believe-that-ai-is-about-progress-think-profit-think-greed-then-think-again/
This essay strips away the rhetoric of progress to reveal the economic motivations driving AI adoption. It aligns closely with the claim that AI is not being developed altruistically, and that public benefit is often an afterthought rather than a design goal.
Just Like AI, the Tools, Actions, Rules, and Infrastructure of Tomorrow Will Be Good or Bad Depending Upon Who – and What – Is in Control
https://adamtugwell.blog/2024/09/24/just-like-ai-the-tools-actions-rules-and-infrastructure-of-tomorrow-will-be-good-or-bad-for-us-depending-upon-who-and-what-is-in-control/
This piece broadens the lens from AI alone to systems of governance and infrastructure. It reinforces the idea that outcomes are shaped by power structures, not technology itself – supporting the argument that “unstoppable AI” is a narrative used to avoid accountability.
IV. Actions vs. Words: When Ethics Become Strategy
This final piece directly confronts performative morality and selective restraint.
Actions Speak Louder Than Digital Words (Full Text)
https://adamtugwell.blog/2025/03/20/actions-speak-louder-than-digital-words-full-text/
Serving as a thematic bridge to the present essay, this work critiques public ethical posturing unaccompanied by meaningful change. It underlines the central warning of If AI Replaces Us, It No Longer Serves Us: morality applied only when convenient is not morality – it is strategy.