In a cautionary Q&A, the Honourable Peter Lauwers, justice of the Court of Appeal for Ontario and chair of the Civil Rules Committee's Artificial Intelligence Subcommittee, weighs in on some alarming implications of AI-generated evidence, the crucial questions litigators should be asking themselves when deciding whether to introduce, or how to respond to, such evidence, and his worries about how a “head-in-the-sand approach” might impact our justice system.
Justice Lauwers, you established and chaired the Civil Rules Committee’s Artificial Intelligence (AI) Subcommittee because you, rightly, expected that AI would soon be showing up in trial courts. What specific issues or scenarios were you seeing in other jurisdictions, or anticipating here, that rule amendments or additions might be required to address?
There were three provoking issues: First, hallucinated case precedents that might pollute the case law; examples of hallucinated case law put forward by lawyers have occurred in both the United States and British Columbia. Second, the use of invalid or unreliable evidence generated in part by artificial intelligence. Third, the intentional use of deepfake evidence to mislead the court.
On what kinds of evidentiary issues, with respect to AI, is the committee currently focused?
We passed rule amendments requiring lawyers to authenticate judicial precedents in their filings. We also passed rule amendments requiring experts to authenticate source materials they were relying on in their reports. We are considering whether to require parties to disclose the AI program they're using to generate evidence, for example, in motor vehicle accident reconstruction files.
You’ve referred to the “AI tsunami” on the way for those who work in the courts. What is your sense of how rapidly the landscape is evolving – in terms of the frequency and the variety of ways in which AI evidence is coming into play in the civil litigation process?
We are monitoring the situation, which led to the rule changes I mentioned earlier. There is a school of thought that AI is overhyped, but there is no doubt that law firms are using it for research purposes and are starting to use it in the formulation of pleadings and contracts. Whether those uses turn out to be problematic, time will tell. I am most worried about the use of AI in the generation of evidence.
"There is no avoiding the future"
How prepared, generally, do you feel lawyers and judges are to navigate this shifting terrain?
I do not sense that lawyers are particularly conversant with the technology, although some may have experimented with large language models (LLMs), for example. For a number of years now, the major law firms have made good use of AI in technology-assisted document review, for instance. Judges are nervous, and some courts have issued guidelines about the use of AI in legal research and writing. I have some doubts about whether lawyers and judges are even familiar with the extent to which AI is embedded in the standard research and word-processing technology they routinely use. The consensus is that lawyers and judges need to be better educated in this new technology.
If you were a litigator, what kinds of circumstances related to AI in the courts would you be proactively formulating strategies or approaches to handle – whether in advance of or during trial?
As a litigator, I would have several questions: First, how can I use AI, and should I? Second, how might my friend use AI, and how should I respond? Third, I would have the same questions with respect to the witnesses for each side, particularly the experts. Among the relevant concerns would be the cost of tapping into AI in proportion to the value and nature of the case.
You’re spearheading and lending your expertise to a novel two-part program the OBA is hosting on November 1st and 29th called AI Trial Advocacy: Your Guide to Handling the Evidentiary Implications of Artificial Intelligence, which is designed to get lawyers up to speed on these complex and fast-changing considerations. Why did you feel it was important to participate, and how do you think the ‘demonstration and debrief’ format will serve attendees?
The intuition that led to the program was that neither lawyers nor judges are sufficiently familiar with AI to use or respond to it effectively. I worry about the ‘head in the sand’ approach. This program seeks to educate the bench and the bar about typical situations in which AI can be expected to come to the fore in the progress of a lawsuit. Consider what a wise philosopher once said: “the purpose of thinking ahead – of using your imagination – is so that your ideas can die instead of you.” Applied more civilly, imaginatively going through the typical process where AI is likely to become involved will better prepare lawyers and judges. There is no avoiding the future. Be prepared!
What do you see as the biggest challenge on the horizon when it comes to dealing with AI evidence?
The consensus among the experts is that deepfake AI will be easy to produce but very hard to detect. Whether it will become a major problem for the courts is unclear. But we need to be ready.
What are the implications of AI in the courts for the public’s perception of and access to justice?
I fear that a judge will be fooled by a precedent hallucinated by AI, will let AI write a decision, will accept unreliable evidence generated in part by AI, or will be fooled by a deepfake audio or video into making an unjust decision. When those possibilities become publicly known realities, the justice system will take an enormous hit.
Hear more from Justice Lauwers and other experts in this area, and see how AI evidence comes into play in the civil process and how to approach it, by registering for AI Trial Advocacy: Your Guide to Handling the Evidentiary Implications of Artificial Intelligence, a two-part program taking place on November 1 and November 29 as part of the OBA’s Real Intelligence on AI offerings.