Dealing with the courts can be daunting, and the idea of a free AI tool like ChatGPT drafting your Particulars of Claim or finding “the perfect case law” sounds like a dream.
However, in the last year, the UK courts have seen a surge in cases where AI has turned a simple claim into a legal nightmare. If you are representing yourself or your business in the Small Claims Track (claims up to £10,000), you need to know the risks before you hit “generate.”
1. The “Hallucination” Headache
The biggest risk is what tech experts call “hallucination.” AI doesn’t search for law; it predicts the next likely word in a sentence. This has led to litigants, and even some reckless lawyers, submitting non-existent cases to the court.
Real-World Warning: In the recent case of Ayinde v London Borough of Haringey [2025], a legal representative was found negligent after citing five fictitious cases. The court warned that misusing AI has “serious implications for the administration of justice.”
If you submit a fake case, the Judge will likely find out. That destroys your credibility, can lead to your claim being struck out, and can even put you in contempt of court.
2. Confusing UK Law with US Law
Most popular AI models are trained heavily on American data. In a Small Claims case, using US terminology isn’t just a “small error”; it can make your claim legally invalid.
Using the wrong terminology shows the court you haven’t followed the Civil Procedure Rules (CPR), the rule book for the courts of England and Wales, and following that rule book is essential. Even jurisdictions as close as Scotland have different rules and processes.
3. The “Statement of Truth” Risk
Every document you send to the court (like your N1 Claim Form) must be signed with a Statement of Truth. By signing, you are confirming that the facts and the law you’ve presented are accurate to the best of your knowledge.
It never makes a good impression on a Judge when you are asked to explain something you have drafted and you can’t, because you didn’t prepare it in your own words or had little input into drafting it.
- If an AI makes up a fact or a legal principle and you sign off on it, you are the one responsible.
- In extreme cases, submitting false information to a court can be seen as contempt of court.
4. Data Privacy: Your Case is No Longer Private
When you type the details of your dispute into a public AI tool, that information is often used to train the model further. You could inadvertently be uploading sensitive business secrets or personal data to a public server. Once it’s in the AI’s “memory,” you can’t take it back.
How to Use AI Safely
We aren’t saying you should never use technology. AI can be a great tool for summarising long emails or improving your spelling and grammar. But it should never be your legal researcher.
The Golden Rules:
- Never trust a citation: If an AI gives you a case name (e.g., Smith v Jones [2022]), verify it on a reliable site like BAILII or The National Archives’ Find Case Law service before relying on it.
- Verify the Rules: Ensure any procedural advice matches the current Civil Procedure Rules.
- Human Oversight: Always read drafts to check that the content is accurate, truthful, and makes sense.
Need a hand?
Don’t let a “hallucinating” chatbot ruin your chance of getting paid. At Small Claims Court Genie, we provide the human expertise and proven templates you need to navigate the courts of England and Wales safely.