Recent BC Case Highlights Dangers of Generative AI
The recent Supreme Court of British Columbia decision, Zhang v. Chen, 2024 BCSC 285, highlights the danger of relying on generative AI tools, such as ChatGPT, for legal research. The lawyer in this case relied upon two non-existent cases, which were subsequently discovered to have been invented by ChatGPT. The lawyer later admitted to making a serious mistake by referring to two cases suggested by ChatGPT without verifying the source of the information.
While the Court found that the circumstances did not justify a special costs award against the lawyer, it did recognize the extra steps required of opposing counsel as a result of the insertion of the fake cases and the delay in remedying the confusion they created. The additional effort and expense incurred because of the fake cases is to be borne personally by the lawyer. The lawyer was also ordered to review all of their files before the court to determine whether any case citations or case summaries were obtained from ChatGPT or another generative AI tool. The Court further noted that it would be prudent for the lawyer to advise the Court and opposing counsel whenever materials submitted to the Court include content generated by AI tools.
At para. 38, the Court cited a recent study to highlight the risks of using ChatGPT and similar tools for legal purposes. “Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models” by Matthew Dahl et al. found that:
… legal hallucinations are alarmingly prevalent, occurring between 69% of the time with ChatGPT 3.5 and 88% with Llama 2. It further found that large language models (“LLMs”) often fail to correct a user’s incorrect legal assumptions in a contrafactual question setup, and that LLMs cannot always predict, or do not always know, when they are producing legal hallucinations. The study states that “[t]aken together, these findings caution against the rapid and unsupervised integration of popular LLMs into legal tasks.”
Be aware of the dangers and see the guidance recently prepared by several law societies. Our post, Generative AI Guidance, provides a compilation of these resources to help lawyers better understand how to incorporate AI into their practice and professional responsibility considerations.