What if the greatest threat to your freedom isn’t a dictator, a law, or a weapon — but a suggestion box that’s almost always right?
Welcome, dear reader. We’re glad you’re here. This piece was written specifically for you by FreeAstroScience.com, where we translate hard ideas into plain language and refuse to let science, philosophy, or ethics stay locked inside academic walls. Our mission stays simple: never let you switch off your mind, because the sleep of reason breeds monsters.
Today we’re wrestling with a question that touches your phone, your job, your medical file, and maybe your soul. Stay with us to the end — the payoff isn’t a slogan, it’s a sharper way of seeing the world you already live in.
📌 Table of Contents
- What’s really changing when AI enters our lives?
- Métron vs. métrion: two ways of measuring the world
- How does quantification squeeze our rights?
- Are we trading truth for performance?
- Why does the Grand Inquisitor matter today?
- What can’t an algorithm ever do?
- Where does bioethics fit in all this?
- Final reflections
AI, Human Freedom, and the Bioethics of Rights in the 21st Century
Artificial intelligence isn’t just another tool we’ve invented. It’s changing the shape of how we think, choose, and judge. That’s a much bigger story — and it deserves more than a product review.
This article builds on a talk delivered by philosopher Palma Sgreccia at the Round Table “Artificial Intelligence, Human Enhancement and the Bioethics of Rights: Challenges of the 21st Century”, held on 27 April 2026 at the State University of Moldova in Chișinău. We’ll walk you through its core arguments in everyday English.

*Measure and human conscience at the heart of 21st-century bioethics.*
What’s really changing when AI enters our lives?
AI isn’t only a technical leap. It reshapes the very idea of what counts as knowing, deciding, and judging.
Talking about AI only in terms of efficiency or safety misses the real story. This technology touches freedom, autonomy, responsibility, truth, and conscience — the heart of moral and political philosophy.
Three philosophical keys help us see clearly: Plato’s distinction between métron and métrion, Dostoevsky’s Grand Inquisitor, and Augustine’s reflection on interiority. Each one lights up a different corner of the same problem.
Métron vs. métrion: two ways of measuring the world
In Plato’s vocabulary, these two Greek words carry very different weights.
| Aspect | Métron (μέτρον) | Métrion (μέτριον) |
|---|---|---|
| Type of measure | Quantitative, objective, calculable | Qualitative, situated, prudential |
| Logic | Standardization, optimization | Discernment, context-awareness |
| Strength | Handles huge data, spots patterns | Weighs vulnerability, exceptions, relationships |
| Risk | Treats people as interchangeable cases | Can’t be fully translated into code without loss |
Today’s AI lives mostly inside the métron zone. It analyses data, finds patterns, builds recommendations, ranks priorities. That’s useful — often astonishingly so.
The trouble starts when métron stops being one tool among many and becomes the only acceptable way to reason. At that point, whatever can be measured starts looking like whatever matters. Whatever can be optimized starts looking like whatever is right.
How does quantification squeeze our rights?
Quantitative measure doesn’t ban freedom. It doesn’t need to. Its effect is subtler — and that’s exactly what makes it hard to spot.
When you’re translated into a profile, a score, a risk category, a statistical bucket, something shifts. You become a case to be handled. Three layers of damage follow.
1. Autonomy gets hollowed out
Nobody forbids you to choose. Your choices simply arrive pre-filtered, pre-ranked, pre-optimized. Freedom isn’t crushed — it’s politely relieved of its burden.
2. Justice gets flattened
Standardization treats unequal situations as equivalent. Worse, data carry the fingerprints of past social inequalities. The calculation can replicate old unfairness while wearing a neutral mask.
3. Explainability fades
If a decision about your job, your loan, or your treatment is opaque and hard to challenge, your right remains on paper but weakens in real life. Rights need intelligibility, contestability, and public justification to actually work.
“Rights aren’t threatened by numbers as such, but by the moment when quantification claims to decide alone about concrete people, replacing judgement with pure optimization.”
Are we trading truth for performance?
Western philosophy tied knowing to giving reasons. To know meant to understand, to justify, to make sense of something.
AI plays a different game. It favours whatever is effective, predictive, plausible, operationally solid. The question drifts from “what is true?” to “what works?”. Explanation slides into correlation. Understanding slides into prediction. Justification slides into output.
Here’s the catch. When functionality takes over, critical thinking shrinks. Why keep questioning a process that already delivers? Why slow down for discernment when the algorithm is faster?
That’s how métron quietly stops being a tool and becomes the boss of reason itself.
Why does the Grand Inquisitor matter today?
In Dostoevsky’s The Brothers Karamazov, the Grand Inquisitor scolds Christ for loading humans with something too heavy: freedom. People don’t really want freedom, he argues. They want bread, safety, authority, reassurance.
Here’s the detail that should make us sit up. The Inquisitor isn’t a snarling tyrant. He presents himself as a caring figure — someone who wants to relieve us of the anxiety of deciding. His power isn’t repressive. It’s paternalistic. It rules in the name of comfort.
Sound familiar?
The algorithm isn’t the Grand Inquisitor literally. But it can carry a similar temptation: lightening the weight of judgement, uncertainty, and responsibility — not through force, but through convenience. Not by banning dissent, but by making it feel unnecessary.
This is why the real risk isn’t using AI. The risk is absolutizing it. When the algorithm becomes the new obvious, freedom doesn’t die in a dramatic scene. It empties out from within. Every human choice starts looking slower, more fragile, less efficient.
Call it freedom lost in the form of organized relief. One of the subtlest faces of modern power.
What can’t an algorithm ever do?
To see what’s at stake, we need a third idea: interiority.
Augustine’s famous phrase interior intimo meo — “more inward than my innermost self” — points to something in us that can’t be fully turned into data: memory, moral judgement, conscience, the lived feel of time, the ability to give meaning to what happens.
You don’t need to be religious to see the force of this. A human being isn’t just an information-processing system. You remember. You interpret. You doubt. You carry responsibility. You live the tension between options that pull you in opposite directions.
AI operates on a different plane. It can calculate, simulate, predict. It doesn’t live doubt. It doesn’t feel the weight of a choice. It doesn’t experience time the way you do.
That’s why interiority protects freedom. Not as a cozy private hideout, but as the place where you don’t collapse into your measurable profile. Take interiority away and métrion empties too — because qualitative judgement needs reflection, memory, and the capacity to own a decision as your own.
Where does bioethics fit in all this?
Bioethics deals with situations marked by vulnerability, care, dependence, selection, access to basic goods, power asymmetries — all moments when decisions land on a person’s life.
In those zones, reducing a person to a data point isn’t just a technical shortcut. It’s a normative act. The ethical question arises when practical reasoning can’t be collapsed into “which output is best?” We still have to ask: best for whom? At what cost? With which exclusions? Under what conditions of justification?
This is why the tension between métron and métrion is a bioethical matter, not just a philosophical one. Bioethics defends the concrete person against reduction to an abstract case. It protects singularity when standardization risks going morally blind. And it guards the institutional conditions that let rights actually function: comprehension, participation, contestability, accountability.
Read this way, the bioethics of rights isn’t about slapping external limits onto innovation. It’s about guarding a human space that can’t be fully delegated.
💡 Key Takeaways
- AI changes how we reason, not just what we do.
- Quantity (métron) is useful — until it replaces judgement (métrion).
- Freedom can be lost through comfort, not only through force.
- Interiority is the non-delegable core of a moral subject.
- Rights need explainability and contestability to stay real.
Final reflections
The challenge of the algorithmic age isn’t only governing more powerful systems. It’s keeping the quantitative métron from colonizing the qualitative métrion — the slow, situated, prudential measure that only humans can carry.
The Grand Inquisitor reminds us that people can be tempted to swap freedom for safety and simplification. AI makes that swap sweeter, smoother, almost invisible. Interiority pushes back, reminding us that within each of us lives a place — conscience, memory, lived time, responsibility — that can’t be fully outsourced.
Rights don’t survive on declarations alone. They survive when we protect the concrete conditions of freedom: the chance to understand, to judge, to challenge, to own a decision as yours.
Between métron and métrion, what’s really at stake is whether the human will keep being more than what can be measured — whether you’ll keep being someone who can still answer for yourself.
At FreeAstroScience.com, we wrote this for you because we believe you deserve the tools to think clearly about the technology reshaping your life. Don’t let your mind fall asleep. Come back often — we’re here to keep it awake with you.
📚 References & Further Reading
- Sgreccia, P. (2026). IA: sollievo organizzato o esonero dalla libertà? MagIA – Magazine Intelligenza Artificiale. (Primary source for this article.)
- Sgreccia, P. (2026). IAetica su palafitte. Roma: Armando Editore.
- Dostoevsky, F. M. The Brothers Karamazov (Chapter “The Grand Inquisitor”).
- Augustine. Confessions; De libero arbitrio.
- Plato. Statesman (Politico), in Tutti gli scritti, ed. G. Reale, Milano: Bompiani, 2006.
- Jonas, H. (1990). Il principio responsabilità. Torino: Einaudi.
- Crawford, K. (2021). Atlas of AI. Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press.
- Harari, Y. N. (2017). Homo Deus. Breve storia del futuro. Milano: Bompiani.
- Ricœur, P. (1993). Sé come un altro. Milano: Jaca Book.
- Searle, J. R. (1980). “Minds, Brains, and Programs”. Behavioral and Brain Sciences, 3(3), 417–457.
- Wachter, S., Mittelstadt, B., Russell, C. (2018). “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR”. Harvard Journal of Law & Technology, 31(2), 841–887.
✍️ Written for you by Gerd Dani — President of Free AstroScience. Keep your mind on. Always.
