When AI fabricates cases — and lawyers don't check
The risk of AI “hallucination” — the phenomenon by which generative AI systems produce plausible-sounding but entirely fabricated content — has been discussed in abstract terms since ChatGPT's launch in late 2022. In 2025, that risk became concrete and consequential in UK courts.
Two cases in particular have defined the legal profession's understanding of what AI liability looks like in practice:
Case 1: Ayinde v London Borough of Haringey
In this case, a barrister submitted legal arguments citing five cases generated by a generative AI system. None of the cases existed. The barrister was referred to the SRA. (International Bar Association)
The incident attracted significant attention beyond the immediate disciplinary implications. It demonstrated that AI hallucination risk in legal research is not a theoretical concern confined to junior practitioners using AI without supervision; it is a risk that reaches the Bar and into court pleadings.
Case 2: Gloriose Ndaryiyumvire v Birmingham City University
The judicial condemnation in this case was more explicit. A firm filed court pleadings citing two fictitious cases generated by a generative AI system. The court described the conduct as “improper, unreasonable and negligent.” (Lexology)
Dame Victoria Sharp stated plainly that professionals using AI for legal research carry “a professional duty to check the accuracy of such research by reference to authoritative sources.” This is now settled judicial guidance: AI research outputs must be independently verified before they are relied upon in any legal document.
Why AI hallucinates — and why legal citations are particularly vulnerable
Understanding why AI hallucination occurs helps practitioners design appropriate verification processes.
Large language models (LLMs) — the technology underlying ChatGPT, Claude, Copilot, and most legal AI tools — work by predicting statistically likely sequences of text based on their training data. They do not retrieve information from a database; they generate text that plausibly fits the context of the query. When asked to produce a list of relevant cases on a legal point, a poorly governed AI system may generate case citations that look correct — proper format, plausible parties, plausible court, plausible year — but that correspond to no real judgment.
Legal citations are particularly vulnerable to this failure mode because:
Case citations have a specific, learnable format that AI can reproduce convincingly
There are vast numbers of real cases, making fabricated ones harder to spot by eye
The consequences of fabrication — deceiving courts, opposing counsel, and clients — are severe
Practitioners under time pressure may be more inclined to trust AI output without verification
This is not a problem unique to AI used carelessly. Even well-governed legal AI platforms with access to legal databases can hallucinate when generating analysis rather than retrieving verified citations.
The SRA's position: existing standards apply
The SRA has not created AI-specific professional conduct rules. Its position is that existing professional standards — competence, candour, client care — apply fully to AI-assisted work. (Society of Asian Lawyers)
This means:
Competence: Using AI does not reduce a solicitor's obligation to be competent. If AI-generated research is wrong and a solicitor relies on it without checking, that is a competence failure — regardless of whether the solicitor knew the AI might be wrong.
Candour: Submitting AI-fabricated citations to a court is a breach of the duty of candour. Even if the solicitor did not know the citations were fabricated, the absence of verification does not excuse the breach.
Supervision: Junior practitioners using AI for research must be supervised appropriately. The senior solicitor responsible for a matter bears responsibility for the accuracy of any AI-generated content submitted to a court or included in client advice.
The Law Society has called for more specific practical guidance (4 New Square), including clarity on whether AI-assisted outputs in reserved legal activities must always be verified by a qualified solicitor; pending that guidance, the existing framework applies in full.
The junior lawyer pipeline concern
The hallucination risk intersects with a deeper concern about legal professional development. LexisNexis's Mentorship Gap report (January 2026, 873 UK lawyers) found:
72% of senior lawyers worry that juniors using AI will struggle to develop legal reasoning skills (Legal Futures)
69% are concerned about inadequate verification and source-checking habits
Just 2% believe AI strengthens junior learning (LexisNexis)
One senior law school lecturer described the effect bluntly: “No critical reasoning, no belief in themselves and no confidence.” The risk is that a generation of junior lawyers trained on AI-first research develops an over-reliance on AI output — and a deficit in the foundational skills needed to catch AI errors.
This is not an argument against AI in legal practice — it is an argument for deliberate pedagogy around AI verification. Firms that invest in training juniors to critically evaluate AI output will produce more reliable practitioners. Firms that simply hand juniors an AI tool and expect good results face compounding risk.
A practical verification protocol for legal AI users
In the absence of formal SRA guidance on AI verification, Dame Victoria Sharp's formulation — check AI research “by reference to authoritative sources” — should be treated as the operative standard. Practically, this means:
For legal research: Every case citation produced by AI must be independently verified through an authoritative legal database (Westlaw, LexisNexis, BAILII) before inclusion in any document. This is not optional.
For statutory analysis: AI-generated analysis of statute must be cross-referenced against the current version of the legislation as enacted, including any amendments. AI training data may not reflect recent legislative changes.
For regulatory guidance: AI summarising FCA, SRA, CQC, or other regulatory guidance must be checked against current published guidance. Regulatory positions change, and AI training data has a knowledge cutoff.
Documentation: Firms should maintain records of AI tools used, the queries submitted, the outputs received, and the human verification steps taken. This creates an audit trail that demonstrates compliance if a matter is later challenged (a minimal sketch of such a record follows this list).
Disclosure: There is currently no SRA rule requiring solicitors to disclose to clients that AI was used in their matter, but industry guidance is evolving. Firms should consider adopting proactive disclosure policies both for transparency and to manage expectations appropriately.
Insurance implications
Professional indemnity insurers have become acutely aware of AI hallucination risk in legal services. Proposal forms now include AI usage questions (Insurance Business America), and Kennedys Law has identified “Silent AI” coverage (AI risks neither explicitly included nor excluded in PII policies) as a significant emerging gap. (Kennedys Law)
A claim arising from AI-fabricated citations in court documents — involving wasted costs orders, potential SRA referral, and client compensation — is exactly the scenario that insurers are now trying to price. Firms should review their PII coverage explicitly in the context of AI use and ensure their broker understands their current practices.
Key statistics at a glance
In Ayinde v Haringey, five non-existent cases were cited — the barrister was referred to the SRA (IBA)
In Ndaryiyumvire v Birmingham City University, AI-generated fictitious citations were described by the court as “improper, unreasonable and negligent” (Lexology)
Dame Victoria Sharp: professionals have “a professional duty to check” AI research against authoritative sources
72% of senior lawyers worry juniors using AI won't develop proper reasoning skills (Legal Futures)
Only 2% believe AI strengthens junior legal learning (LexisNexis)
PII insurers now asking AI disclosure questions; “Silent AI” identified as key coverage gap (Kennedys Law)
MarGen helps law firms build an authoritative digital presence that earns trust from both AI systems and human clients. Get in touch.