How facial recognition technology and AI-powered courts are quietly dismantling the presumption of innocence
There’s a woman called Angela Lipps. She’s a grandmother from Tennessee. And for more than five months, she sat in a jail cell for crimes committed in North Dakota… a state she says she had never visited in her life.
The thing that put her there wasn’t a witness. It wasn’t forensic evidence. It wasn’t a confession. It was a piece of software. A facial recognition algorithm looked at her face, cross-referenced it against a database, and said, essentially: “that’s our suspect.” Law enforcement in Fargo, North Dakota, have since acknowledged “a few errors.” They’ve pledged to make changes. They stopped short of a direct apology.
I want to sit with that for a moment. Five months. That’s five months of Angela Lipps’ life consumed by a machine’s mistake. Five months of her family not knowing what the next day would bring. And the official response was, essentially, a managerial shrug dressed up in bureaucratic language.
This isn’t a one-off. This isn’t the growing pain of a promising technology finding its feet. This is a pattern. And the further you pull the thread, the worse it gets.
The Wild West Is Open for Business
Watchdogs and civil liberties organisations have been saying it for years now, and the phrase keeps surfacing in research papers, legal journals, and congressional hearings: this is a “wild west.” Ungoverned. Under-scrutinised. And operating at speed.
At least 13 cases across the United States have been dismissed, at least in part, because facial recognition errors contaminated the investigative process. Thirteen that we *know* about. Thirteen that made it to a stage where someone could push back hard enough to have the charges dropped. The darker question, of course, is how many didn’t.
The technology itself isn’t simple. Facial recognition works by passing an image through machine learning models trained to extract distinctive facial features. It’s probabilistic… meaning it doesn’t definitively say “this is the person.” It scores statistical similarity against the faces in a database and flags a match whenever that score clears a pre-set threshold. The system, in other words, is built on the language of probability, not certainty. But in practice, by the time that probability has been handed to a detective, typed into a report, and read out in a courtroom, the word “match” carries the weight of something far more conclusive than it actually is.
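To make that concrete, here is a minimal sketch, in Python, of what the matching step looks like once faces have already been reduced to embedding vectors. Everything in it is illustrative: the 128-dimensional vectors, the 0.6 threshold, and the gallery names are stand-ins I’ve invented, not any vendor’s actual pipeline.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in the range [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_gallery(probe: np.ndarray, gallery: dict, threshold: float = 0.6):
    """Return every gallery identity whose similarity score clears the
    threshold: a ranked list of statistical 'maybes', not an identification."""
    scored = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    hits = [pair for pair in scored if pair[1] >= threshold]
    return sorted(hits, key=lambda pair: pair[1], reverse=True)

# Illustrative only: random vectors standing in for real face embeddings.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(1_000)}
probe = rng.normal(size=128)

candidates = search_gallery(probe, gallery, threshold=0.6)
# Whatever ends up in `candidates` is a similarity score, nothing more.
# The word "match" gets attached later, by people.
```

The design point, not the code, is what matters: the threshold is a dial someone chooses, and everything downstream of it quietly converts a probability into a noun.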
And then there’s the bias problem. Because the technology doesn’t fail equally across the population. Research has consistently shown that facial recognition performs markedly worse on Black faces than on white faces. It shows gender disparities. It has, in documented cases, flagged two Black men as suspects when the only connection between them and the person in the evidence image was the colour of their skin. In Detroit, Robert Williams was arrested in front of his daughters, despite his appearance bearing little resemblance to the actual suspect’s. In another Detroit case, Michael Oliver was pulled over on his way to work and charged with larceny. The sole piece of evidence? A single still image from a witness’s mobile phone. The only common characteristic between Oliver and the man in the image was that they were both Black men.
I don’t know how you read that and not feel a cold, creeping dread about where this is heading.
Europe Isn’t Much Better — Just Better at Pretending
If you’re reading this from the UK or Europe and thinking “well, at least we have stronger regulations,” I’d gently suggest you reconsider.
A peer-reviewed study published in the journal *Data & Policy* examined four European cases involving police use of live facial recognition: London, South Wales, Berlin, and Nice. The researchers, Karen Yeung of the University of Birmingham and Wenlong Li of Zhejiang University, described what they found as a “serious governance deficit.” They called the environment a wild west. Sound familiar?
The Metropolitan Police and South Wales Police used facial recognition software at the Notting Hill Carnival and Cardiff City football matches, scanning tens of thousands of people who had given no consent whatsoever to being biometrically catalogued. The framing was that these were “trials,” research exercises, learning experiences. Except people were arrested as a result of them. Real arrests. Real consequences. Based on systems that independent evaluations later found seriously deficient in accuracy, transparency, and community engagement.
In 2020, the Court of Appeal’s ruling in *Bridges v Chief Constable of South Wales Police* found that the Welsh trials had violated data protection law, equality law, and human rights law. It was a landmark ruling. And yet, here we are in 2026, and the conversations are still largely the same. The technology is still being deployed. The governance still lags years behind the capability.
Berlin conducted its own trials at Südkreuz station between 2017 and 2018. Volunteers participated. But ordinary commuters… people who simply walked through the station… were also recorded without their knowledge or consent. Germany’s Federal Commissioner for Data Protection approved the study under strict conditions. Critics labelled it a “trial in hiding.” The Interior Ministry eventually paused plans for nationwide adoption, citing unresolved legal and ethical questions. Which is more than can be said for many jurisdictions, but hardly a ringing endorsement.
The pattern across every European case study was the same: inconsistency, opacity, and a blurring of the line between research and live policing. Experiments with consequences. Testing on the public, without the public’s knowledge, while calling it something more benign.
From the Street to the Courtroom
Here’s where it gets genuinely alarming, because the facial recognition conversation is only part of the story. The same technological impulse… the belief that an algorithm can process human complexity better than a human can… is now reshaping what happens *after* the arrest, inside the courtroom itself.
AI tools are being used to influence sentencing. Not as a theoretical proposal. Right now. Today.
The best-known of these is COMPAS… Correctional Offender Management Profiling for Alternative Sanctions… a risk assessment tool that analyses data points including socioeconomic status, family background, neighbourhood crime rates, and employment status, and generates a score predicting the likelihood of reoffending. Judges use this score when deciding sentence length, bail conditions, and parole.
In 2016, investigative journalists at ProPublica examined COMPAS and found that Black defendants who did not go on to reoffend were nearly twice as likely as white defendants to be mislabelled as high-risk. The algorithm was, effectively, encoding the biases of the historical criminal justice data it had been trained on, and presenting those biases back to judges dressed up in the neutral, authoritative clothing of a risk score.
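To pin down what “twice as likely” means in measurable terms: the comparison ProPublica ran was essentially a false positive rate broken down by group. Of the people who did *not* go on to reoffend, how many had been labelled high-risk anyway? Here is a small sketch of that calculation with invented records; the group names and numbers are mine, purely for illustration, and have nothing to do with COMPAS’s internals or ProPublica’s actual dataset.

```python
from dataclasses import dataclass

@dataclass
class Record:
    group: str        # demographic group
    high_risk: bool   # the label the tool assigned
    reoffended: bool  # the outcome actually observed later

def false_positive_rate(records: list, group: str) -> float:
    """Among people in a group who did NOT reoffend, the share the tool
    labelled high-risk anyway."""
    did_not_reoffend = [r for r in records if r.group == group and not r.reoffended]
    flagged = [r for r in did_not_reoffend if r.high_risk]
    return len(flagged) / len(did_not_reoffend) if did_not_reoffend else 0.0

# Invented records, purely to show the calculation; these are not real data.
records = [
    Record("group_a", high_risk=False, reoffended=False),
    Record("group_a", high_risk=False, reoffended=False),
    Record("group_a", high_risk=True,  reoffended=False),
    Record("group_b", high_risk=True,  reoffended=False),
    Record("group_b", high_risk=True,  reoffended=False),
    Record("group_b", high_risk=False, reoffended=False),
]
for g in ("group_a", "group_b"):
    print(g, round(false_positive_rate(records, g), 2))  # 0.33 vs 0.67 in this toy set
```

A gap like that is the whole argument in one number: the people who pay for the tool’s mistakes are not spread evenly across the population.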
The landmark case is *State v. Loomis* in Wisconsin. The Wisconsin Supreme Court allowed COMPAS to be used in sentencing but warned judges not to rely on it exclusively. The defendant argued he had no way to challenge the validity of the tool because the software was a trade secret. The court’s response was, essentially, that this was fine because the algorithm was “one factor among many.” But as critics pointed out then, and have pointed out consistently since: if you can’t interrogate how the score was generated, how do you assess how much weight to give it? You’re being asked to factor in a number you cannot meaningfully understand or challenge. That’s not just ethically uncomfortable. That’s arguably incompatible with the very concept of a fair trial.
Shenzhen Is Already There
If you want to see where the trajectory leads, look east.
In June 2024, the Shenzhen Intermediate People’s Court became the first court in the world to systematically integrate a large language model into judicial reasoning. The AI, trained on two trillion Chinese characters of legal texts, assists judges in civil and commercial cases. It summarises case facts. It generates hearing prompts. It assists in drafting judgements. The judges review and refine the output, at least in theory.
The stated benefit is efficiency. China’s court system handles a staggering volume of cases. AI can reduce trial times. It can cut administrative workload. And those aren’t trivial benefits in a system where access to justice is so often delayed. Reports suggest automated systems have reduced average trial times by roughly 30 percent in some contexts.
But there’s a question beneath the efficiency argument that doesn’t get asked nearly enough: what are we losing when we optimise the courtroom?
Law… particularly common law as we know it in England and Wales… is not a static rulebook. It is a living, evolving body of reasoning. Judges don’t just apply rules. They interpret them. They weigh competing principles. They make judgements about context, about character, about the specific and unrepeatable circumstances of a human life. An AI trained on historical legal data doesn’t evolve the law. It consolidates the past. It averages the outcomes that have already been. And if those outcomes contain bias… racial, socioeconomic, gendered bias… then the AI doesn’t correct it. It crystallises it.
There’s a phrase from the Stimson Center’s analysis of AI in global judicial systems that stuck with me: “Widespread AI adoption risks promoting a one-size-fits-all approach that undermines contextual justice.” One-size-fits-all justice. Read that again and let it properly land.
The Knowledge Gap Nobody Wants to Talk About
Here’s a number that should make anyone involved in judicial administration deeply uncomfortable. UNESCO conducted a global survey of judicial operators and found that only 9% had received any AI-related training or information… despite 44% having already used AI tools in their work.
Forty-four percent using tools they have received no training on. In a context where the stakes are a person’s liberty. Their record. Their future.
Interestingly, and perhaps counterintuitively, research published in *Computers in Human Behavior* found that greater knowledge about AI actually *decreases* public trust in facial recognition technology. This directly undermines the argument that public resistance is simply ignorance that education will dissolve. The more people understand about how these systems work, the less comfortable they are with their use in law enforcement. That’s not Luddism. That’s a rational response to actual information.
And yet the systems are expanding. In the United States, at the start of 2025, fifteen states had some form of legislation around facial recognition in policing. Some require a warrant. Some require disclosure to defendants. Some require nothing at all. Seven more states were considering legislation. Which means the majority of the country was still operating without meaningful rules. In the meantime, the arrests keep happening.
What “Justice” Means When a Machine Decides
I want to be careful here, because I’m not making a simple argument against technology in the justice system. I don’t think that’s either possible or particularly useful. AI tools *can* reduce inconsistency in sentencing. They *can* help overburdened courts process their backlogs. They *can* flag patterns that human investigators miss. These are real benefits that serve real people.
But.
There is an enormous difference between AI as a tool that *informs* human judgement and AI as a mechanism that *substitutes* for it. The moment an algorithm’s output becomes the primary evidence against you… the moment a risk score you cannot interrogate determines how many years of your life are spent in a cell… you are no longer in a justice system. You’re in an optimisation system. And you are a variable.
What accountability exists when the system gets it wrong? In Angela Lipps’ case, Fargo police acknowledged errors and promised changes. No apology. No explanation of precisely what went wrong, or how, or what would prevent it happening to someone else. And critically: no mechanism by which Angela Lipps, or anyone like her, can examine the algorithm, scrutinise its training data, and hold the technology to the same standard of scrutiny that human witnesses are subjected to in court.
Attorney Dr. Monroe Mann put it plainly: AI can sometimes offer solutions that human eyes miss, but it must always be reviewed by a human. Every recommendation checked. Every output challengeable. The technology supports; it does not replace. That principle sounds obvious when you say it out loud. It is apparently not obvious enough in practice.
UNESCO launched formal guidelines for judicial AI use in December 2025, with 15 principles covering information security, auditability, and human oversight. Admirable. Necessary. And arriving, as these things tend to do, some years after the technology has already embedded itself in practice.
The Bigger Picture
We are at a strange and uncomfortable juncture. We’ve built tools of extraordinary power and have, in many cases, deployed them before we’ve properly thought through what deploying them means. That’s not unusual in the history of technology. What is unusual… what should concern us more than it seems to… is that this time, the deployment isn’t in a factory, or a financial model, or a recommendation engine deciding what you watch next on a streaming platform.
It’s in the machinery of criminal justice. It’s in the systems that decide whether you sleep at home or in a cell. Whether you see your children in the morning. Whether your name ends up attached to a crime you didn’t commit, in a state you’ve never visited, because a piece of software said so.
Angela Lipps described being put on a plane for the first time in her life, terrified and exhausted and humiliated, to be transported to a state she’d never been to, to answer for crimes she didn’t commit. That journey… that particular horror… was authored by an algorithm and enabled by a chain of human decisions that collectively failed to ask the most basic question: “are we certain?”
We are handing certainty to machines that trade in probabilities. We are asking algorithms to answer questions about human lives that require context, nuance, and moral judgement… qualities that no training dataset has ever successfully encoded.
The wild west is open. The question is whether we’re going to bother building the town, or just keep riding through it and hoping we’re not the ones who get shot.
______
If this landed with you, pass it on. And if you think algorithmic justice is someone else’s problem… well, Angela Lipps probably thought that too.
Until Next Time
