As the legal profession becomes increasingly reliant on generative artificial intelligence to speed up research and drafting, the judiciary is pushing back against the “black box” of automated litigation. On March 12, 2026, the English High Court delivered a landmark ruling in a high-stakes libel case that serves as a stern warning to the global legal community. In dismissing the £8 million claim, the court formally identified the use of “generative AI hallucinations” as a hallmark of abusive litigation.
The Case: Kamal v. Neidle
The case began when tax barrister Setu Kamal brought an £8 million libel action against investigative journalist Dan Neidle, who had publicly criticised tax avoidance schemes involving Kamal. While the merits of the claim had long been suspect, the case took a strange turn during the preliminary hearings.
The claimant’s legal filings cited several court cases in support of the claim. On closer inspection by the defendant’s lawyers and the judge, however, it emerged that the cited cases did not exist. These “hallucinations,” generated by an artificial intelligence tool used to help draft the filings, had been put forward as binding precedents.
In a definitive ruling, the High Court not only dismissed the claim but formally identified it as a “statutory SLAPP” (Strategic Lawsuit Against Public Participation). This is the first time in the long history of the English courts that a claim has been dismissed on the specific grounds of anti-SLAPP legislation intended to prevent legal bullying.
The 2026 ruling reinforces a growing international body of law around an emerging global principle: the “duty of verification.” The UK courts have made clear that while the use of AI technology is not barred, assigning the “truth-finding” role to a machine without human involvement is a form of professional negligence.
Key implications for legal practitioners in 2026 include:
The Hallucination Clause: Courts are beginning to require a signed “AI Disclosure Statement” in witness statements and pleadings, certifying that all cited authorities have been verified against official law reports (a minimal sketch of such a check follows this list).
Indemnity Costs: The judge in Kamal v. Neidle indicated that the use of “fictitious” AI citations could be grounds for indemnity costs—the highest level of costs a court can award—effectively forcing the offending party to pay the entirety of the winner’s legal fees.
Professional Discipline: Following the ruling, the Bar Standards Board (BSB) is expected to launch a formal inquiry, signalling that “AI incompetence” is now a disciplinable offence.
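To illustrate what that verification step can look like in practice, here is a minimal sketch, assuming a firm keeps its own index of citations already confirmed against the official law reports. The citation pattern, the index contents, and the function name are illustrative assumptions, not requirements drawn from the ruling or from any court order.

```python
import re

# Hypothetical index of citations the firm has already confirmed against
# the official law reports (the entries below are placeholders).
VERIFIED_CITATIONS = {
    "[2023] UKSC 42",
    "[2024] EWHC 1234 (KB)",
}

# Simplified pattern for neutral citations such as "[2024] EWHC 1234 (KB)".
# Real filings use many citation formats; this only illustrates the idea.
NEUTRAL_CITATION = re.compile(r"\[\d{4}\]\s+\w+\s+\d+(?:\s+\([A-Za-z]+\))?")

def flag_unverified_citations(filing_text: str) -> list[str]:
    """Return citations in a draft that are not in the verified index.

    Anything returned here still has to be checked by a human against the
    official law reports before the filing is signed.
    """
    found = NEUTRAL_CITATION.findall(filing_text)
    return [c for c in found if c not in VERIFIED_CITATIONS]

# Example: the second citation is unknown and gets flagged for human review.
draft = "The claimant relies on [2023] UKSC 42 and [2025] EWHC 9999 (KB)."
print(flag_unverified_citations(draft))  # ['[2025] EWHC 9999 (KB)']
```

A check like this only narrows down what a lawyer must verify by hand; it does not replace the human verification the courts are now demanding.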
This UK ruling is consistent with recent decisions in the U.S. and the EU. Earlier in 2026, the U.S. 5th Circuit entered a standing order requiring all attorneys to certify whether any part of a filing was generated by AI and, if so, to confirm that the work was verified for accuracy by a human. In the EU, the Product Liability Directive, revised in late 2025, is now being used to explore the potential liability of the makers of AI software where the software “hallucinated” incorrect legal or medical advice.
| Issue | Legal Consequence |
| --- | --- |
| Unverified AI Citations | Strike-out of claim / Contempt of Court |
| Failure to Disclose AI Use | Professional misconduct proceedings |
| AI-Generated Evidence | High-bar “Digital Authenticity” testing |
The dismissal of the Kamal case is a canary in the coal mine for the digital age in law. It marks the end of the “experimentation phase,” in which lawyers could blame “software glitches” for errors in their filings. As of March 2026, the English courts have made their position clear: the gavel remains firmly in the hand of the lawyer. For law firms, this means that spending on AI technology must now be matched by an equal outlay on human-in-the-loop verification systems.
