
OBA AI Week Day Two


DEMYSTIFYING: What puts the intelligence in Artificial Intelligence

It is well documented that AI tools often make mistakes and even produce entirely fictional responses. Errors can stem from a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train it. When an AI tool makes something up entirely and presents it as fact, the result is referred to as a "Hallucination".


RISK: What AI is learning from you, it may be sharing with others

In a recent Notice to the Profession, the Federal Court raised concerns with "the potential fabrication of legal authorities through AI." If a lawyer relies implicitly on the answers and data provided through an AI system, such as ChatGPT, they run the risk of founding an argument on inaccurate information or providing the courts with false evidence or misstated law. The more novel the topic, the greater that risk becomes.


Mitigation

In keeping with your professional obligations - and in addition to ensuring you understand the nature of the product you're using, as per yesterday's tip - any time you use AI tools:

  • Remain responsible for your final product, and apply the same rigour to anything you intend to submit to the court or a client that you would if it had been produced by an untested or inexperienced source, human or otherwise;
  • Keep in mind that you can ask the AI tool to use a specific source or group of sources (e.g., CanLII);
  • Ask the AI tool to "provide sources with links," and then verify the output against a trusted source. AI is far from infallible, and the buck stops with you.

That said, while AI tools make errors, they are designed to learn and improve from them. When you come across an error or an answer that doesn't seem right, continue the conversation with the AI tool, ideally until it produces the correct response. It will likely be to your benefit the next time you use it.

Policy Checklist for Your Organization

  • Direct that AI be used only as a secondary source, requiring the due diligence of locating the primary source to verify the information.
  • Even when the AI tool cites a credible source for its information, the next step should be to check that source directly.

did you know…?

The best answers come from good questions. AI is not like a Google search with a 'one-and-done' query. For best results, you need to engage in a conversation. The act of asking an AI tool a question is referred to as "Prompting", and the more you prompt an AI tool, the more you will get out of it. To perfect your AI prompting, ask clear questions but avoid being overly specific (excessive detail can confuse AI tools) and steer clear of figurative language (AI tools often interpret prompts literally). Then, follow up with clarifications and more questions - it will remember and build on your previous conversation.

Have you tried this for fun…?

Using AI to make a stunning slideshow in minutes? Many systems have a presentation-maker component that, in response to a few prompts from you, can generate a captivating script and a seamless deck - even finding fitting footage, graphics and a soundtrack.