Introduction
The Ontario Bar Association (“OBA”) appreciates the opportunity to provide feedback on the Law Commission of Ontario’s (“LCO”) AI in Criminal Justice Project. The OBA commends the LCO on this comprehensive and collaborative initiative, and recognizes the significant dedication and effort undertaken to produce the Project Materials. For the purposes of this public consultation, our submission offers targeted advice in response to a question raised in Paper 3: AI and the Assessment of Risk in Bail, Sentencing and Recidivism. We hope this input will support the LCO’s efforts to evaluate the benefits and risks of AI technologies in criminal sentencing. We would be pleased to provide further feedback on specific questions that may arise from our submission.
Established in 1907, the OBA is the largest and most diverse volunteer lawyer association in Ontario, with close to 16,000 members practicing in every area of law in every region of the province. Each year, through the work of our 40 practice sections, the OBA provides advice to assist legislators and other key decision-makers in the interests of both the profession and the public, and we deliver over 325 in-person and online professional development programs to an audience of over 20,000 lawyers, judges, students, and professors.
This submission was prepared by members of the OBA’s Constitutional, Civil Liberties and Human Rights section. Members of this section include barristers and solicitors across Ontario who represent clients from a wide range of backgrounds, whose rights and interests are engaged in the issues addressed in this consultation.
As previously noted, the following comments respond to a question posed on page thirteen of Paper 3: AI and the Assessment of Risk in Bail, Sentencing and Recidivism, wherein the LCO asked: “Can algorithmic risk assessments, producing outputs on the basis of a discriminatory history, have a place in sentencing Indigenous and Black offenders in light of current sentencing caselaw like Ipeelee and Morris?”
We are of the view that such algorithmic risk assessments are not compatible with the sentencing principles applicable to Indigenous and Black offenders, as articulated in current caselaw. More specifically, these tools risk:
- Contradicting the tailored practice of sentencing under section 718.1 of the Criminal Code.
- Perpetuating systemic discrimination through “apparent neutrality.”
- Being built on biased data, thereby leading to biased results.
- Removing the human factor essential to understanding individual offenders.

Each of these concerns is addressed in greater detail below.
Algorithmic Risk Assessments are Incompatible with Canada’s Sentencing System
Pursuant to section 718.1 of the Criminal Code, “A sentence must be proportionate to the gravity of the offence and the degree of responsibility of the offender.”1 As interpreted by the court in R v Jackson, sentencing in Canada is a highly personalized process. According to the Court, “[t]he more a sentencing judge truly knows about the offender, the more exact and proportionate the sentence can be.”2 This tailored practice of sentencing is at odds with AI risk assessment tools that use “group-based analytics in sentencing decisions for individualized offenders.”3 Such tools are therefore incompatible with the individualized sentencing process required by Canadian law, as interpreted by the courts.
The Dangers of Apparent Neutrality
An additional concern with the use of these tools is that, while often described as neutral or objective, they can produce discriminatory outcomes, with effects that are exacerbated for equity-deserving groups.
In R v Natomagan, the Alberta Court of Appeal cautioned that although such tools “are described as objective, scientific, and accurate … their methods and results must be carefully examined and understood rather than accepted without inquiry.”4 The Court further explained that the use of “apparently neutral socio-economic factors, such as stable employment and family supports” generates systemic discrimination when it is Indigenous peoples who are “disproportionately rendered unemployed, transient or poorly educated.”5 The unanimous Court also stressed that discriminatory inputs into actuarial tools that create “racial disparity in the data” will produce “racial disparity in prediction too.”6
As such, the consequences of these tools do not end at sentencing. Rather, such ratings influence penitentiary placement and delineate an individual’s residual liberties, including access to correctional programming, conditions of confinement, and ultimately the timing of release and parole applications.
Biased Data May Lead to Biased Results
The issue of bias extends beyond the concern of “apparent neutrality” discussed above. When AI assessment tools rely on flawed or biased input data, they risk reinforcing the inequities that currently exist in our justice system. Much of the data used to train these tools is based on historical information collected from an “era rife with biased policing as well as biased bail and sentencing regimes.”7 A direct consequence of relying on such data is the risk of replicating those biases in sentencing outcomes.
It is therefore important that AI tools be designed and trained to mitigate such biases. To that end, transparency is crucial. The public must be made aware of the data being used, the sources of those datasets, and how the AI algorithms weight or prioritize that data. Likewise, it must be clear which software programs and applications are being used.
There are also accountability issues when risk assessment tools are developed by private commercial actors. For example, in the California case of People v Chubbs, an offender was sentenced to death but was denied access to the forensic software used to convict him.8 While the trial judge ordered disclosure of the proprietary information, the developer successfully appealed the order, such that the formulae remained protected under a statutory trade secret privilege. This precedent-setting decision is now used “to justify withholding proprietary information from defendants in criminal proceedings.”9
Ultimately, when AI tools are trained on data shaped by biased policing and sentencing regimes, they replicate those patterns of discrimination. Without full transparency and accountability regarding what data is used, how it is weighed, and who controls it, these tools risk perpetuating systemic mistreatment and over-punishment of Indigenous and Black offenders.
The Risk of Removing the Human Factor
The technology and software programs at issue report only on hard data. However, correctional and reformatory systems work with individuals who each present unique challenges and backgrounds. Consequently, these tools may not reflect the necessary complexities of individual offenders, complexities that are better appreciated through human analysis.
Indeed, when human expertise is removed in favour of such tools, the results can have significant consequences. As demonstrated in R v B.H.D.,10 the actuarial tool used by the Crown greatly overstated the likelihood that a high-risk offender would re-offend as compared to a human expert’s assessment, producing estimates of 70% versus 52%, respectively. The Court ultimately concluded that the “relevance and reliability of risk assessment tools was not proven at law,” and assigned no weight to the findings generated by the actuarial tool.11
Accordingly, future policy addressing the use of these tools must guard against overreliance on AI and big data. Human input enables the criminal justice system to recognize the individualized circumstances of offenders and to support their unique path to rehabilitation and reintegration into society.
Correctional and reformatory institutions should therefore consider implementing mandatory requirements for personnel or management to provide administrative oversight wherever AI tools are used. In all files involving AI, there should be a balanced approach that combines technological tools with human oversight to ensure that sensitive issues are addressed. For example, if an offender has a substance abuse problem, how can that be best addressed for the individual? What are their specific needs that may not be fully captured by an AI program?
Paths Forward
There must be a means of ensuring accountability if AI tools are to be utilized in sentencing. Before their implementation, policies or guidelines should be developed that consider the following:
- How can sufficient oversight be ensured so as to maintain the important qualities of human expertise and the individualized nature of sentencing?
- If AI tools are provided by third parties, how can the criminal justice system ensure accountability and transparency on the part of those providers?
- How can we ensure the data sets being used by AI tools are free of biases?
- How can problems and issues be addressed, including technical issues, in a timely fashion?
- How do we ensure that the public interest is fully considered?
Moreover, mandatory annual reporting to the Privacy Commissioner or other relevant agencies should be considered. Correctional facilities (prisons and jails) should disclose what AI risk assessment tools are being considered or used, and in what capacity, as part of their annual reporting to the Solicitor General for provincial matters and the Attorney General for federal matters.
1 Criminal Code, RSC 1985, c C-46, s 718.1.
2 R v Jackson, 2018 ONSC 2527 at para 103.
3 Gideon Christian, "Legal Framework for the Use of Artificial Intelligence (AI) Technology in the Canadian Criminal Justice System" (2024) 21:2 CJLT 109 at 120.
4 R v Natomagan, 2022 ABCA 48 at para 99.
6 Ibid at para 113, citing Sandra G Mayson, “Bias in, Bias Out” (2019) 128:8 Yale LJ 2218 at 2251.
7 Christian, supra note 3 at 113.
8 People v. Chubbs, No. B258569 (Cal. Ct. App., January 9, 2015).
10 Notably, in this case, the risk assessment tool was not proven at law. Risk assessment tools do continue to be considered, particularly in dangerous offender and long-term offender cases; the question is how they are applied and weighed.
11 R v B.H.D., 2006 SKPC 32 at para 75. Importantly, the Ontario version of this tool (the LSI-OR) was considered in R v Capay, 2019 ONSC 535, where the Court heard evidence that the LSI-OR failed to recognize and contextualize Gladue factors (see paras 166-169).