The Canadian Judicial Council (“CJC”), the federal body created under the Judges Act, R.S.C., 1985, c. J-1, to oversee Canada’s federal judges and to maintain and improve the quality of judicial services in Canada’s superior courts, recently released the first edition of its Guidelines for the Use of Artificial Intelligence in Canadian Courts (the “Guidelines”).
Overview of the Guidelines
The CJC published the Guidelines in September 2024. The Guidelines recognize that judges hold exclusive responsibility for their judicial decisions and may not delegate decision-making authority, whether to a law clerk, an administrative assistant, or an artificial intelligence (“AI”) program, however capable. That being said, the Guidelines also recognize that some judges have already embraced AI tools to improve their efficiency and accuracy, while others may be relying on AI tools without realizing it. The purpose of the Guidelines is therefore to raise awareness of the risks of using any form of AI in court administration and judicial decision-making, and to prevent the delegation of decision-making authority, while encouraging the safe, effective and appropriate use of AI by the judiciary.
The CJC provides seven specific guidelines, which can be summarized as follows:
- Protect Judicial Independence: Many of the generative AI applications proposed for courts, including case management systems and alternative dispute resolution tools, have the potential to erode judicial agency and independence. Placing too much reliance on any proprietary AI (whether commercial or publicly funded) could compromise judicial independence. The role of judges transcends the resolution of individual disputes: it encompasses the crucial tasks of interpreting the law, developing the law, and maintaining a stable, separate third branch of government. Even as governments move forward with legislation governing the use of AI, judicial independence must be preserved.
- Use AI Consistently with Core Values and Ethical Rules: Any consideration of the use of assistive AI by judges should always be consistent with the core values of independence, integrity and respect, diligence and competence, equality and impartiality, fairness, transparency, accessibility, timeliness and certainty.
- Have Regard to the Legal Aspect of AI Use: The integration of AI into any court process must consistently adhere to applicable laws, including those governing privacy, intellectual property, and criminal activities. Courts should be particularly attentive to the nature of the source material used to train proposed AI systems, aiming to strike an optimal balance between safety and accuracy.
- AI Tools Must Be Subject to Stringent Information Security Standards (and Output Safeguards): A robust information and cybersecurity program must be put in place, with special attention to AI-specific threats such as AI algorithms inadvertently exposing sensitive training data (e.g. sealed court files) or unauthorized tampering with AI algorithms or training data to influence outcomes or inject malicious behaviour.
- Any AI Tool Used in Court Applications Must Be Able to Provide Understandable Explanations for Its Decision-Making Output: Explainability refers to the need for AI tools to provide clear, understandable explanations for their output, making it easier for users (and those affected) to interpret, trust, contest, or accept that output in critical workflows. Explainability is akin to the legal requirement that judges provide reasoned explanations for their decisions.
- Courts Must Regularly Track the Impact of AI Deployments: Before introducing AI into any court, administrators must perform a comprehensive, formal, and impartial assessment of its impact on judicial independence, workload, backlog reduction, privacy, security, access to justice, and the court’s reputation (e.g. through pilot projects or controlled testing environments before full-scale deployment). Impact assessments should continue even after deployment.
- Develop a Program of Education and Provide User Support: Training judges and providing technical support for AI integration in court administration are indispensable. AI should not be employed until users have undergone a comprehensive educational process and understand best practices for interacting with the technology, whether it is offered as a standalone service or as an integral component of court software.
Takeaways
The CJC’s Guidelines provide a valuable framework for the use of AI, not only for courts, but also for administrative decision-makers such as professional regulators. The Guidelines highlight the risks of using any form of AI in decision-making and administration, while providing a roadmap for mitigating those risks so that any implementation of AI tools follows stringent ethical and safety standards.