
In October, the judiciary published updated guidance on the use of AI. A Tax Tribunal judge recently disclosed he’d used AI to draft a decision on a disclosure application. He wasn’t obliged to tell anyone – he chose to. The guidance doesn’t require disclosure.
So, how many other decisions have been AI-assisted without anyone knowing?
Compounding the jury trial problem.
We’re moving from 12 diverse people deciding guilt to a single judge who may be using AI trained on historical case law reflecting decades of systemic bias. And you might never know.
The guidance says judges must check AI outputs for accuracy. But who checks whether they actually did? What happens when incorrect AI-generated information makes it into a judgment unchecked?
Juries question each other, challenge assumptions, and bring different perspectives. That’s built-in quality control. A judge using AI alone doesn’t have that.
The transparency gap.
If a judge uses AI to research precedents, summarise evidence or draft sections of a judgment, that is relevant information for the parties. Different AI tools have different training data, biases and error rates.
A disclosure application may seem routine enough. But what about complex health and safety prosecutions? Corporate manslaughter trials? Cases where liberty hangs in the balance?
Why is this alarming?
The Ministry of Justice has an entire “AI Action Plan” – AI assistants for court staff, semantic search for probation officers, and chatbots for the public. It’s about efficiency.
But criminal trials aren’t supposed to be efficient. They’re supposed to be fair.
We’ve defended organisations where conviction versus acquittal came down to context – understanding business pressures, risk management realities and split-second decision making. That requires human judgment informed by real-world experience.
Can an algorithm trained on case law understand that? Should it?
Our take:
We’re layering opacity on opacity. Fewer juries mean less diverse decision making. AI assistance without disclosure means we don’t know what influenced the decision.
We are not anti-technology by any means. AI has genuine benefits for research, document management, administration and so much more. But when it touches criminal trial decision making, we need ironclad safeguards.
At a minimum: mandatory disclosure when AI is used in any judicial decision; a clear explanation of which tool was used and for what purpose; and a right for the parties to challenge AI-assisted findings.
The government is also making these changes piecemeal. Scrap juries to clear backlogs. Use AI for efficiency. Each seems pragmatic in isolation.
But step back: we’re fundamentally redesigning how justice works, with less human oversight, less transparency and less democratic participation.
And we’re doing it without proper debate about whether this is the justice system we actually want.
The backlog needs fixing. AI has potential. But not like this.