Cite and Wrong: when lawyers stop thinking for themselves
Public shaming doesn’t seem to sting as much as it used to. Why else would two Ontario lawyers file court documents citing fictitious cases and irrelevant judgments, never bothering to check the accuracy of material hallucinated by ChatGPT?
It boggles the mind to see this just 18 months after the first known, and highly publicized, Canadian instance of a lawyer submitting fake AI-generated case law to a court in British Columbia.
The two Ontario cases arose in May 2025.
One involved a request to set aside a divorce order at the Ontario Superior Court. In his review of the documents, Judge Fred Myers found four serious problems with lawyer Jisuh Lee’s case citations and oral arguments: a link that led to a 404 error page; two citations that directed him to rulings entirely unrelated to the case before him; and a link to a decision that contradicted the very arguments Ms. Lee had made in court.
Judge Myers called out the misuse of generative AI for case law research. “It is the lawyer’s duty to use technology, conduct legal research, and prepare court documents competently,” he wrote, before ordering Ms. Lee to show cause why she should not be cited for contempt.
The matter concluded on May 20th, when Judge Myers described Ms. Lee as having thrown herself “on the mercy of the court” with a full apology and a promise to change practices at her firm. She also committed to specific training on how to use generative AI.
A few days later, at the Ontario Court of Justice, Judge Joseph Kenkel confronted the same problems in a criminal case. Defence lawyer Arvin Ross had filed submissions that cited unrelated civil cases and at least one entirely hallucinated case. “The court was unable to find any case at that citation. There was no case by that name with that content at any other citation,” Judge Kenkel wrote.
He ordered Mr. Ross to prepare a fresh set of submissions, going so far as to remind him to number his pages and paragraphs.
Ontario’s courts are notoriously backlogged. Judges should not be spending their precious time on special motions, hearings and orders for lawyers who fail to demonstrate professional competence.
The law is a self-regulated profession, and lawyers swear an oath to observe and uphold ethical standards. In April 2024, the Law Society of Ontario produced a special white paper to guide its licensees on the use of generative AI. It begins by saying, “the professional conduct rules apply to the delivery of legal services empowered by Generative AI.” Beyond that, the Ontario Superior Court’s own Rules of Civil Procedure require lawyers to certify that every authority cited in their documents is authentic.
Even ChatGPT knows its own failings. When I prompted it to explain the risks of using ChatGPT for case law research, it listed six major risks. Among them: it may invent legal cases, citations, or judicial quotes that sound plausible but don’t exist; it might mix up laws or rulings across jurisdictions (e.g., applying U.S. law in a Canadian context); and relying on unverified AI output could breach professional obligations, leading to disciplinary action or court sanctions.
When asked whether the Law Society of Ontario would investigate the two cases above, its spokesperson replied, “I’m unable to share any information concerning specific complaints made to the Law Society of Ontario or investigations by the Law Society of Ontario, as they remain confidential, until or unless they result in public regulatory action.”
For those of you wondering what happened in the B.C. case, the Law Society told me, “The investigation has concluded and we believe the concerns raised have been addressed.”
While I’m no lawyer, I am a well-informed citizen who works regularly with judges and legal professionals across Canada, and I’ve learned to care a great deal about public confidence in the justice system. When lawyers let generative AI do their research without verifying the results, they don’t just risk their own reputations. They also risk eroding Canadians’ trust in our courts, the law and the justice system.