In keeping with your professional obligations - and in addition to ensuring you understand the nature of the product you're using, as per yesterday's tip - any time you use AI tools:
Remain responsible for your final work product, and apply the same rigour to anything you intend to submit to the court or a client as you would to work produced by any untested or inexperienced source, human or otherwise;
Keep in mind you can ask the AI tool to use a specific source or group of sources (e.g., CanLII);
Ask the AI tool to "provide sources with links," and then verify the output against a trusted source. AI is far from infallible, and the buck stops with you.
That said, while AI tools make errors, many are designed to improve from user feedback. When you come across an error or an answer that doesn't seem right, continue the conversation with the AI tool, ideally until it produces a correct response. Depending on the tool, that feedback may also improve the results you get the next time you use it.