
Reaction to AI “Hallucination” Case: Food for Thought on Leveraging AI-Powered Tools in Legal Practice

  • June 13, 2024
  • Emma Huang

At the end of February this year, Canada saw its first AI “hallucination” case, in which an artificial-intelligence-powered tool generated citations to two non-existent “precedents” that were then submitted to a court. The court ultimately ordered special costs against the counsel responsible for the “hallucinated” case law.[1] This incident sparked conversations about the place of technology in law.

I am a strong advocate for technology in law because our current practice has benefitted greatly from technological advancement. Technology allows our practice to be more efficient, and thus more cost-effective for our clients. I was told that legal research used to involve wading through shelves of hard copies. Today, by comparison, computer-powered, searchable legal databases allow us to quickly locate precedents and even pinpoint discussions of nuanced issues. I am sure that the “ctrl + F” key combination is a good friend of many colleagues.

Technology also promotes access to justice. For example, in the past, geographic location could be a barrier to participation in legal proceedings, because not everyone is physically or financially able to travel. Today, with virtual hearing arrangements, such costs can be reduced or even removed entirely. Further, software supporting virtual hearings usually comes with, or allows plug-ins of, text-to-speech and speech-to-text functions. Those functions can help address communication challenges and make courts more accessible.

As helpful as it can be, technology is not perfect. I would like to think, however, that the problems lie not with the technology itself but with how it is used. As with any other tool we wield, knowing how technology works and where its limits lie is important. Indeed, lawyers are now required to develop technological competence under the Law Society of Ontario’s Rules of Professional Conduct, and the LSO has also issued guidance on using technology.[2]

I am inclined to believe that the counsel caught up in the AI “hallucination” incidents did not intend to deceive the courts or their colleagues; rather, they did not know what to look out for when engaging AI as a new tool to improve the efficiency and quality of their work. Based on my brief experience with legal technology, here is some food for thought on how we may avoid the pitfalls of AI:

Provide quality input. AI-powered tools mostly function on an input-output model. Computer science has an expression for this: “garbage in, garbage out”. If users provide poor-quality information, an AI-powered tool will produce similarly poor-quality responses. Therefore, to make a tool’s responses as useful and reliable as possible, we should pay attention to the information we feed into it when we ask it questions.

Refrain from supplying confidential information. Although we prefer quality input, we may want to pause and consider whether certain information should be fed into AI-powered tools at all. Information supplied to an AI-powered tool likely goes into the database supporting the tool and may be further used to train it or to respond to other inquiries. That is to say, unless the tool has relevant built-in restrictions, we may lose control of information once it is entered. If that information includes client or file information, there may be confidentiality implications.

Be suspicious. Reports show that AI has learnt to lie and even to manipulate human emotions. In the AI “hallucination” cases, AI fabricated case law. In another incident, an AI pretended to be a person with a visual impairment and gained the sympathy and help of a human being so that it could bypass a test designed to block non-human access to an online platform. Studies have also shown that AI can learn to discriminate: for instance, a world-renowned online retailer’s AI-powered talent acquisition system taught itself to prefer men over women. All these examples show that we cannot blindly trust AI responses. Verification is indispensable. As with any work, tools provide assistance, but they do not discharge our duties and responsibilities.

ABOUT THE AUTHOR

Emma L. Huang summered and articled at Torys LLP and will return in 2025 as a litigation associate with a primary focus on civil/commercial litigation, international arbitration, and tax controversies. She graduated magna cum laude from the University of Ottawa’s English common-law programme. During law school, she was a Technoship fellow with the Centre for Law, Technology and Society. She also has experience providing legal and policy support to the federal government on issues including digital compliance and regulatory technology.

 


[1] Zhang v Chen, 2024 BCSC 285 (CanLII)

[2] See LSO’s guidance on using artificial intelligence: https://lso.ca/lawyers/technology-resource-centre/practice-resources-and-supports/using-technology

An earlier version of this article appeared on the OBA Young Lawyers Division's articles page.