
Does Predictive Policing Silence Marginalized Voices?


What happens when a machine decides your neighborhood is dangerous — not because it is, but because it was watched more closely than others? What happens when no one believes you when you say the algorithm is wrong?

Welcome to FreeAstroScience.com, where we break down complex ideas into language everyone can grasp. We’re Gerd Dani and the Free AstroScience team, and today we’re stepping outside our usual orbit of stars and physics to explore something just as universal: how knowledge, power, and technology collide. Because science isn’t only about telescopes and equations. It’s about how we understand the world — and who gets to shape that understanding.

Group questioning predictive policing's impact.

If you’ve ever felt that automated systems hold too much power over human lives, this article is for you. Stick with us to the end. What you’ll find here might change the way you think about fairness, data, and the quiet violence of being ignored.

When the Algorithm Is Always Right: Epistemic Injustice, Bias, and the Hidden Cost of Silence

We tend to trust machines. That’s not irrational — machines measure, calculate, and process data faster than any human brain. But what happens when that trust becomes blind? When an algorithm’s output carries more weight than a living person’s testimony?

This is the story of predictive policing, discriminatory feedback loops, and a type of injustice most people have never heard of: **epistemic injustice**. It’s the kind of injustice that doesn’t take your freedom. It takes your voice.

1. The PredPol Story: How a “Neutral” Tool Went Wrong

In 2012, the Los Angeles Police Department adopted a predictive policing system called **PredPol**. The idea sounded clean and efficient. The algorithm analyzed historical crime data — type of crime, location, time — and generated heat maps. Those maps told patrol officers where to concentrate their presence.

Simple. Scientific. Objective. Or so it seemed.

Years later, communities of color in the surveilled zones reported a paradox. More officers appeared on their streets. More stops happened. More arrests followed. And then? The algorithm took that fresh data — data it had *generated* through increased policing — and pointed right back to the same neighborhoods.

Think of it as a dog chasing its own tail, except the tail belongs to someone else’s life. The circle closes. It tightens. And the people inside it can’t break free.

This wasn’t a bug in the code. As Alessandro Spada argues in a 2026 analysis for *MagIA* (the Magazine of Artificial Intelligence at the University of Turin), this is a form of structural epistemic injustice — a concept that analytic philosophy has been developing for over twenty years.

2. What Is Credibility Excess — and Why Should You Care?

Let’s back up for a moment. The term **epistemic injustice** was developed systematically by philosopher Miranda Fricker in her landmark 2007 book, *Epistemic Injustice: Power and the Ethics of Knowing*. At its heart, it means this: some people are unfairly denied the status of being reliable knowers. Their words carry less weight — not because they’re wrong, but because of who they are.

Now, most discussions focus on *credibility deficit* — when someone isn’t believed enough. But philosopher Jennifer Lackey identified an equally dangerous flip side: **credibility excess**.

What Does Credibility Excess Look Like?

Lackey studied false confessions in the American criminal justice system. She found that confessions receive wildly disproportionate credibility compared to later retractions — even when those retractions come with supporting evidence. The system doesn’t doubt the confession. It doubts the person who takes it back.

Now apply that same logic to an algorithm. When a machine learning model flags a neighborhood as high-risk, that assessment is treated as a fact — not a point of view, not one perspective among many, not a debatable interpretation. The algorithm’s output doesn’t just *inform* the decision. It practically *is* the decision.

That’s credibility excess. And it’s dangerous because it operates invisibly. No one votes on it. No one questions it. The number on the screen just… wins.
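To see how quietly that winning happens, here is a minimal sketch in Python. It is not any real system; the function name, the weight, and the threshold are all invented for illustration. The point is simply that the risk score is converted straight into the decision, while the testimony gathered alongside it is given a weight of zero.

```python
# Toy illustration of credibility excess (not any real system; names,
# the weight, and the threshold are made up). The model's score is
# thresholded straight into a decision, while resident testimony
# carries zero weight.

def patrol_decision(risk_score: float, reports_saying_area_is_safe: list[str]) -> str:
    TESTIMONY_WEIGHT = 0.0  # each report *could* lower the score, but never does
    effective_score = risk_score - TESTIMONY_WEIGHT * len(reports_saying_area_is_safe)
    return "increase patrols" if effective_score > 0.5 else "no change"

# A resident's objection is received, filed, and ignored:
print(patrol_decision(0.8, ["This block isn't as dangerous as your system says."]))
# -> increase patrols
```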

3. The Self-Feeding Loop: When Data Confirms Its Own Bias

Here’s where the mechanics get truly troubling. Philosopher Renée Jorgensen described a phenomenon she calls **data redundancy** — a feedback loop where the data doesn’t reflect reality, but reflects its own history of surveillance.

How the Bias Feedback Loop Works

Let’s walk through it, step by step:

  1. The algorithm **scans historical crime records** for a given area.
  2. It flags that area as high-risk and recommends more police presence.
  3. More officers on the ground means more stops, more checks, more arrests.
  4. Those new arrests become new data — fed right back into the system.
  5. The algorithm “confirms” its own prediction. The area stays flagged.

The area isn’t classified as dangerous because it *is* more dangerous. It’s classified that way because it was *treated* as dangerous. The system learns from itself. It’s like an echo chamber made of numbers.

And the effects don’t land equally. The communities caught in this loop are almost always the same: poor, racialized, historically underrepresented in institutions. Data redundancy, in Jorgensen’s terms, is the algorithmic encoding of prejudices that already existed. The machine doesn’t create bias from scratch. It amplifies it, institutionalizes it, and makes it nearly impossible to challenge.

🔄 The Predictive Policing Feedback Loop
Step 1: Algorithm scans historical crime data
Step 2: Area flagged as “high risk” → more police deployed
Step 3: More officers → more stops, more arrests
Step 4: New arrests fed back into the algorithm as “evidence”
Step 5: Algorithm “confirms” the original prediction → cycle restarts

The loop reinforces itself indefinitely, encoding existing bias as objective data.
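If you want to see the loop in numbers, here is a minimal simulation sketch in Python. Everything in it is an assumption made up for illustration: two districts with the *same* underlying crime rate, an invented patrol budget, and a simple rule that sends patrols wherever past arrests cluster.

```python
# Minimal toy simulation of the feedback loop above. All numbers, the
# district names, and the allocation rule are invented for illustration.
import random

random.seed(42)

TRUE_CRIME_RATE = 0.05    # identical underlying crime rate in both districts
STOPS_PER_PATROL = 20     # stops made by each patrol unit per cycle
PATROL_BUDGET = 10        # total patrol units available per cycle

# Historical bias: District A starts out with more *recorded* arrests
recorded = {"District A": 120, "District B": 60}

for cycle in range(1, 6):
    total = sum(recorded.values())
    # Steps 1-2: patrols are allocated wherever the recorded data points
    patrols = {d: round(PATROL_BUDGET * n / total) for d, n in recorded.items()}
    for district, units in patrols.items():
        # Steps 3-4: more patrols -> more stops -> more recorded arrests,
        # even though the real crime rate is the same everywhere
        stops = units * STOPS_PER_PATROL
        recorded[district] += sum(random.random() < TRUE_CRIME_RATE for _ in range(stops))
    # Step 5: the next cycle's "prediction" is built on data the patrols produced
    print(f"Cycle {cycle}: patrols={patrols}, recorded arrests={recorded}")
```

Run it and the gap in recorded arrests never closes, even though the underlying rates are identical: an echo chamber made of numbers.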

4. How Are Marginalized Voices Silenced by Algorithms?

This is where the human cost becomes painfully clear. Philosopher José Medina, in his 2021 work on agential epistemic injustice in the criminal justice system, identified four forms through which the voices of marginalized people are systematically neutralized. Two of them are especially relevant here.

Illocutionary Silencing: Heard but Not Counted

Imagine a resident of an over-policed neighborhood telling an officer: “This block isn’t as dangerous as your system says.” The complaint is received. Maybe it’s even written down. But it carries zero weight in the next algorithmic cycle. The system doesn’t argue against the resident’s testimony — it simply doesn’t register it.

That’s illocutionary silencing. The speech act happens, but its force is stripped away. You speak, yet nothing changes.
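One way to picture why the complaint “doesn’t register” is that the system’s update step has no input channel for it at all. The sketch below is a made-up schema in Python, not any real pipeline; the class and function names are invented. The retraining step accepts arrest records only, so resident testimony can sit on file forever without being able to move the risk score.

```python
# Made-up schema illustrating illocutionary silencing: the update step only
# accepts arrest records, so a complaint has no path by which to affect scores.
from dataclasses import dataclass

@dataclass
class ArrestRecord:
    district: str
    offense: str

@dataclass
class ResidentComplaint:
    district: str
    text: str

def update_risk(scores: dict[str, int], new_arrests: list[ArrestRecord]) -> dict[str, int]:
    # Note the signature: complaints are not even a parameter of the update.
    for arrest in new_arrests:
        scores[arrest.district] = scores.get(arrest.district, 0) + 1
    return scores

scores = {"District A": 12}
complaint = ResidentComplaint("District A", "This block isn't as dangerous as your system says.")
scores = update_risk(scores, [ArrestRecord("District A", "loitering")])
print(scores)  # {'District A': 13}: the complaint was heard, filed, and changed nothing
```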

Perlocutionary Silencing: When People Stop Talking

The second form runs even deeper. Over time, people living under heavy surveillance learn something bitter: speaking up is pointless — or worse, risky. You might be misunderstood. You might attract attention to yourself. So you stop reporting minor crimes. You stop engaging with law enforcement. You stop existing as a witness.

That’s perlocutionary silencing. The silence isn’t chosen freely. It’s imposed by experience.

🔇 Two Forms of Algorithmic Silencing (Medina, 2021)

| Illocutionary Silencing | Perlocutionary Silencing |
| --- | --- |
| You speak, but your words have no effect on the system. | You stop speaking entirely because experience taught you it’s useless or dangerous. |
| Example: A complaint is filed but doesn’t change the algorithm’s risk score. | Example: Residents avoid reporting crimes for fear of becoming targets themselves. |
| The voice exists, but its force is erased. | The voice itself disappears — silence becomes survival. |

In both scenarios, the algorithm isn’t the root cause of the injustice. But it’s the multiplier. The excess credibility it enjoys transforms a correctable bias into a self-confirming structure. A wall that gets thicker every time you push against it.

5. Can Epistemic Activism Break the Cycle?

If the problem is that too much credibility pools inside a closed system, then the answer has to involve redistributing that credibility. That’s exactly what José Medina proposes with his concept of epistemic activism.

The idea is straightforward yet powerful: when the injustice isn’t just about rights but about being *recognized as someone who knows something*, the resistance must target knowledge itself. It’s not enough to demand access to justice. People must also demand the right to *shape the knowledge* that feeds that justice system.

What Would That Look Like in Practice?

Medina and others suggest several concrete steps:

  • **Community oversight boards** that include residents of surveilled areas — not just data scientists and police officials.
  • **Independent expert review panels** that can evaluate algorithmic outputs and challenge their accuracy.
  • **Transparency mechanisms** designed not for specialists, but for ordinary people. If you can’t understand how the algorithm works, you can’t contest its results.
  • **Platforms for affected communities** to actively participate in the evaluation and critique of the systems that govern their lives.

Will this be easy? Absolutely not. The institutional resistance is real. Resources are often scarce. And the very power imbalances that epistemic activism tries to correct are the same ones that make it hard to organize.

But the logic holds. A closed system that talks only to itself will always confirm what it already “believes.” Opening it up — letting different voices in, giving them real weight — is the only way to break the loop.

6. Key Thinkers and Concepts at a Glance

We’ve covered a lot of ground. Here’s a compact reference table so you can keep these ideas straight.

📚 Scholars, Concepts & Key Works

| Scholar | Key Concept | Publication |
| --- | --- | --- |
| Miranda Fricker | Epistemic Injustice | *Epistemic Injustice: Power and the Ethics of Knowing* (2007) |
| Jennifer Lackey | Credibility Excess | “False Confessions and Testimonial Injustice” (2020) |
| Renée Jorgensen | Data Redundancy | “Algorithms and the Individual in Criminal Law” (2022) |
| José Medina | Epistemic Activism & Silencing | “Agential Epistemic Injustice and Collective Epistemic Resistance” (2021) |
| Patrick Perrot | AI in Criminal Intelligence | “What about AI in criminal intelligence?” (2017) |

Each of these thinkers contributes a different piece to the puzzle. Together, they show us something that no single data point can: the human cost of algorithmic authority.

7. What Can We Actually Do About It?

Predictive policing isn’t an edge case. It’s a visible example of a much wider trend: delegating high-impact decisions to algorithmic systems and granting those systems a credibility that no human voice can easily contest.

So where do we go from here?

**As individuals**, we can start by questioning the outputs of automated systems — whether they’re crime heat maps, credit scores, or hiring filters. An algorithm is a tool. Tools can be built badly. Tools can carry the fingerprints of whoever designed them.

**As communities**, we can push for oversight that includes the people most affected. Real transparency means more than publishing a technical whitepaper. It means designing systems that can be understood and challenged by everyone — not just engineers.

**As a society**, we can recognize a simple truth: *when the algorithm is always right, it’s not because it is right — it’s because we stopped asking who might be wrong*.

That sentence, from Spada’s original analysis, deserves to be read twice. Maybe three times.

Conclusion: The Sleep of Reason Breeds Monsters

We’ve traveled a long way in this article — from the streets of Los Angeles in 2012 to the philosophical depths of epistemic injustice, from Jennifer Lackey’s work on false confessions to José Medina’s call for collective resistance. The thread connecting all of it is a warning: when we stop questioning who holds credibility and why, we hand power to systems that confirm their own assumptions.

Predictive policing algorithms don’t hate anyone. They don’t intend to discriminate. But intention isn’t the point. The point is impact — and the impact falls heaviest on those who were already carrying the most weight.

At FreeAstroScience.com, we believe that explaining complex ideas in clear language is itself a form of activism. Whether we’re talking about black holes or algorithmic bias, the mission stays the same: *never turn off your mind.* Keep it awake. Keep it sharp. Because — as Goya once painted and history keeps proving — the sleep of reason breeds monsters.

Come back to FreeAstroScience anytime you want to sharpen your understanding of the world. We’ll be here, making the difficult simple and the invisible visible.

📖 References & Sources

  1. Spada, Alessandro. “Quando l’algoritmo ha sempre ragione. Credibilità epistemica, polizia predittiva e silenziamento delle voci marginali” [“When the Algorithm Is Always Right: Epistemic Credibility, Predictive Policing, and the Silencing of Marginal Voices”]. MagIA – Magazine Intelligenza Artificiale, Università di Torino, 19 April 2026. magia.news
  2. Fricker, Miranda. Epistemic Injustice: Power and the Ethics of Knowing. Oxford University Press, 2007.
  3. Lackey, Jennifer. “False Confessions and Testimonial Injustice.” Journal of Criminal Law and Criminology 110, no. 1 (2020): 43–82.
  4. Jorgensen, Renée. “Algorithms and the Individual in Criminal Law.” Canadian Journal of Philosophy 52, no. 1 (2022): 61–77.
  5. Medina, José. “Agential Epistemic Injustice and Collective Epistemic Resistance in the Criminal Justice System.” Social Epistemology 35, no. 2 (2021): 185–96.
  6. Perrot, Patrick. “What about AI in criminal intelligence? From predictive policing to AI perspectives.” European Police Science and Research Bulletin 16 (2017): 65–76.

*Written for you by FreeAstroScience.com — where complex scientific and philosophical principles are explained in terms everyone can understand.*
