Disclaimer: The contributors to this session/article are clear in warning users not to use GenAI tools for client matters where the tool may store or be trained on client data, and they stress that lawyers are ultimately responsible for reviewing all outputs. The prompting tips that follow presume the use of a data-secure LLM.
* * * * * * * *
Would you assign a task to a generative AI tool in the same way you would to an articling student? The answer, if you go by the enlightening discussion on “Prompt Perfection” that the OBA hosted as part of its ChatGPT for Lawyers Series, is yes … and no.
In offering tactics for effective prompting, speakers Monica Goyal, Carvel Law LLP, and Al Hounsell, Norton Rose Fulbright, emphasized that generative AI performs optimally with specific, step-by-step instruction – as any new member of your team would – but that task components need to be broken down even further when using tools that lack the education an articling student would have on, for instance, what a legal memo entails (though, importantly, whether the output comes from human or machine, you are equally responsible for its careful review!). Generative AI tools – whether broadly used ones like ChatGPT, Gemini, Claude or Copilot, or legal-specific ones like Harvey, CoCounsel, Lexis+ AI or Vincent – are most useful when they’re given clear context. Skilled prompters are diligent directors. Below are just five of several strategies Goyal and Hounsell shared for setting the stage to best effect.
1. Assign the Persona
Step one toward prompt perfection is defining the role of the AI agent – essentially telling the tool who it is to reason and respond as. This is known as “assigning a persona” – personae that might be relevant to lawyers, depending on the type of task with which you require assistance, include law clerk, legal assistant, expert, opposing counsel, idea generator, teacher or critic. Hounsell notes that for the legal-specific AI tools – trained on legal data and trained to do legal tasks – some of these prompts or understanding of the role may be “baked in” to the background. In any case, the more exact you can be the better, because the way you assign the persona “really influences every aspect of the way that the model is going to interpret the prompt, as well as the output from the prompt.” By way of example, he says, you might situate the AI tool this way: “You are a paralegal in the province of Ontario with meticulous attention to detail” – which offers both a sense of jurisdictional context and the rigour the task demands.
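For lawyers (or their technical staff) who work with these models through code rather than a chat window, assigning a persona maps onto the “system” message of a chat API. Below is a minimal sketch using the OpenAI Python SDK; the model name and prompt wording are illustrative, not from the presentation.

```python
# A minimal sketch of "assigning a persona" via a system message,
# using the OpenAI Python SDK. Model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model
    messages=[
        # The system message defines who the model reasons and responds as.
        {"role": "system",
         "content": "You are a paralegal in the province of Ontario "
                    "with meticulous attention to detail."},
        {"role": "user",
         "content": "Summarize the limitation periods relevant to this claim."},
    ],
)
print(response.choices[0].message.content)
```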
2. Be Mindful of Limitations at the Outset
Both speakers emphasize that it’s useful, generally, to tailor the task to the tool (different LLMs – large language models – have different capabilities) and to understand the limitations of the model you’re using, so that the direction you offer in your prompt is easily actionable. The context window – the amount of text or tokens the model can accept as input – is one significant limitation to consider. As Goyal notes, if you’re looking for AI to generate a long-form piece – a 20,000-word essay, for instance – “you may need to structure your prompts to write that essay in sections.” Or, in the unlikely situation that you wanted the tool to analyze millions of documents, you might need to generate document summaries to fit the context window. While Goyal stresses that “less is not better” when it comes to directing the behaviour of the tool, better results do often come with breaking down prompts into smaller pieces: “I found that if you give it too much information (at once), sometimes it loses the thread.”
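To make the context-window point concrete, here is a rough sketch of how one might count tokens and split an oversized document into chunks for per-chunk summaries, using the tiktoken tokenizer. The file name and token limits are assumptions for illustration; actual limits vary by model and provider.

```python
# A rough sketch of checking whether a document fits a model's context
# window and, if not, splitting it into chunks for per-chunk summaries.
# The 128,000-token limit and file name are illustrative assumptions.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer for GPT-4-class models
CONTEXT_LIMIT = 128_000  # varies by model; check your provider's docs

def chunk_for_context(text: str, max_tokens: int = 4_000) -> list[str]:
    """Split text into chunks of at most max_tokens tokens each."""
    tokens = enc.encode(text)
    return [enc.decode(tokens[i:i + max_tokens])
            for i in range(0, len(tokens), max_tokens)]

document = open("long_memo.txt").read()
if len(enc.encode(document)) > CONTEXT_LIMIT:
    # Summarize each chunk separately, then combine the summaries
    # in a final prompt that does fit the context window.
    chunks = chunk_for_context(document)
```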
Other limitations the speakers encouraged users to keep in mind: How recently has the tool been trained (i.e., how up-to-date and accurate is its information on current events)? Can it access the internet in real time? Can it do math? What kinds of formats can it produce – presentations, charts, videos, etc.? Can you upload a document to generate more relevant results? Answers to these questions will affect your prompt, not to mention the purpose for which you use the tool at hand.
3. Structure the Task Effectively
Clear, detailed, sequential instruction is key when directing AI models, panelists agreed. Breaking the task down into all its smaller components, with a “first do this,” “then do this,” “then repeat that step for this other item” approach, yields optimal results. Each of these steps should begin with an action word describing what you want the model to do (e.g., clarify, create, find, generate, write/rewrite, translate, identify, explain, draft, etc.). By way of example, Goyal offered that you might direct the tool to “Highlight the major weaknesses of our case and outline the arguments we could use to mitigate against them,” or “Draft a 50-word LinkedIn post for me to promote my bulletin.” With the latter assignment, she noted, she might not use what the tool comes up with, but it’s a starting point she can work from. Hounsell added that a more chain-of-thought approach is also possible: for tasks whose components you’re unsure of, you can, after assigning a persona, ask the model to outline the steps for you – and then have it execute them.
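For those scripting this workflow, the “outline the steps, then execute them” approach can be expressed as a two-turn exchange. A sketch using the OpenAI Python SDK, with an illustrative model name and prompts:

```python
# A sketch of "have the model outline the steps, then execute them,"
# expressed as two turns of one conversation. Prompts are illustrative.
from openai import OpenAI

client = OpenAI()
history = [
    {"role": "system", "content": "You are an experienced legal assistant."},
    {"role": "user",
     "content": "Outline, step by step, how you would prepare a closing "
                "checklist for a residential real estate transaction."},
]

# Turn 1: the model lays out the steps.
outline = client.chat.completions.create(model="gpt-4o", messages=history)
history.append({"role": "assistant",
                "content": outline.choices[0].message.content})

# Turn 2: have it execute the plan it just produced.
history.append({"role": "user", "content": "Now carry out each of those steps."})
result = client.chat.completions.create(model="gpt-4o", messages=history)
print(result.choices[0].message.content)
```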
4. Provide Appropriate Context
Context is crucial, says Hounsell. You can help prevent AI misinterpretation by grounding it in specific information: you can upload relevant documents; you can utilize case law and statute excerpts to guide AI’s legal analysis; and you should, as noted, provide jurisdictional context by incorporating wording in your prompt like “pursuant to the laws of Ontario,” or “this person is in Ontario.” These details in your input will, says Hounsell, “drastically reduce the possibility of hallucinations, because you’ve filled the context window of the tool with real data.” One critical piece to keep in mind – something that the presenters underscored at the outset: You should never use LLM/GenAI tools for client matters where the tool is storing or being trained on client data. LLM tools where data is kept securely in your own environment are the ones to use for client work – never free tools.
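Grounding can be as simple as placing the source material directly into the prompt, so the model works from supplied text rather than its own recall. A brief sketch, in which the file name and wording are hypothetical:

```python
# A sketch of grounding a prompt in real source material: a statute
# excerpt and jurisdictional wording are embedded in the prompt itself.
# The file name and prompt text are illustrative.
excerpt = open("limitations_act_excerpt.txt").read()

prompt = (
    "Pursuant to the laws of Ontario, and relying only on the excerpt "
    "below, explain when the basic limitation period begins to run.\n\n"
    f"--- EXCERPT ---\n{excerpt}\n--- END EXCERPT ---"
)
```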
5. Set Precise Constraints for Output
When it comes to the output you seek, you can set the constraints in ways that might not have occurred to you. In addition to format (table, email, memo, executive summary, etc.) and media type (text versus image), you can also specify a style and tone for the output (for instance, directing it to fit the style of formal legal communication). When you tell it what to include in its output, remember to tell the tool if there are things you’d like it to exclude, as well (e.g., “don’t consider these specific years or decisions in your analysis”). Most of the tips the panelists provided amount to what is called “zero-shot prompting” – providing the persona, task, context, constraints and description of the output – but Hounsell notes that, in some instances, you might want to try “few-shot prompting,” in which you guide the tool by giving it a few examples of the desired input and output pairings. This, says Hounsell, can “assist the model in understanding the output.”
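In an API setting, few-shot prompting is typically done by supplying the example input/output pairings as prior conversation turns. A sketch with invented examples, again using the OpenAI Python SDK:

```python
# A sketch of few-shot prompting: example input/output pairs are sent
# as prior turns so the model can infer the desired style and format.
# The examples and model name are illustrative.
from openai import OpenAI

client = OpenAI()
messages = [
    {"role": "system",
     "content": "Rewrite clause summaries in formal legal style. "
                "Respond in one sentence; do not add commentary."},
    # Example pair 1
    {"role": "user", "content": "tenant pays rent on the 1st"},
    {"role": "assistant",
     "content": "The Tenant shall pay rent on the first day of each month."},
    # Example pair 2
    {"role": "user", "content": "landlord fixes big repairs"},
    {"role": "assistant",
     "content": "The Landlord shall be responsible for all major repairs."},
    # The new input the model should handle in the same style
    {"role": "user", "content": "no pets without written ok"},
]
reply = client.chat.completions.create(model="gpt-4o", messages=messages)
print(reply.choices[0].message.content)
```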
The Power of an Iterative Process
Key to “prompt perfection” is knowing that you don’t have to sit with the output if it doesn’t suit your needs – you can keep refining. “The real power of the tools is the iterative approach to this,” says Goyal. After looking at your output, you can “think about how you would change the prompt, (then) change the prompt and get it to redraft or regenerate the output.” What works in one context may not work in another, so it pays to “be a little bit flexible in your approach,” says Goyal. When you find a prompt that works well, make sure to save it in a Word doc if the tool isn’t saving your prompts for you. Goyal also encourages making a daily habit of generative AI use – she employs AI tools even for tasks she could complete with other programs, just to “build up that skill.” The LLMs are learning – but so are you! “Don’t get discouraged if it doesn’t work right away,” advises Goyal. “Stick with it, keep trying, keep using the tools, you’re going to get there.”
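Scripted workflows can iterate the same way a chat session does, by carrying the conversation history forward with each revision instruction. A sketch with illustrative prompts:

```python
# A sketch of iterative refinement: keep the conversation history and
# refine the output with follow-up instructions. Prompts are illustrative.
from openai import OpenAI

client = OpenAI()
history = [{"role": "user",
            "content": "Draft a 50-word LinkedIn post promoting my bulletin "
                       "on recent changes to Ontario employment law."}]

for revision in ["Make the tone more conversational.",
                 "Shorten it and end with a question to invite comments."]:
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    history.append({"role": "assistant",
                    "content": reply.choices[0].message.content})
    history.append({"role": "user", "content": revision})

final = client.chat.completions.create(model="gpt-4o", messages=history)
print(final.choices[0].message.content)
```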
You can view the full “Prompting Perfection” program, chaired by Fabian Suárez-Amaya, Osler, Hoskin & Harcourt LLP, on demand here.