Artificial intelligence and automated decision-making are no longer theoretical. These systems are already embedded in how Immigration, Refugees and Citizenship Canada (IRCC) triages applications, flags risk, and issues some approvals without an officer ever opening the file. At the OBA's recent two-hour CPD program, a panel of leading experts examined the state of AI in immigration decision-making: the Honourable Justice G. Richard Mosley, Professor Paul Daly of the University of Ottawa, Anna Lillicrap of the Department of Justice, Emmanuelle Deault-Bonin, a senior policy director at IRCC, and Mario Bellissimo, a senior immigration lawyer. Their message was consistent: artificial intelligence is expanding in immigration decision-making, the legal framework is growing but incomplete, and counsel must sharpen their understanding of these tools to protect both their clients and the integrity of the process.
IRCC's use of advanced analytics began in 2013, initially to manage Temporary Resident Visa backlogs. By 2018, the department was relying on automated models to triage files, moving low-risk applications through streamlined processes. Today, positive decisions in certain categories may be generated without an officer's review, while negative decisions continue to require full officer input. The legislative authority for these systems sits in Part 4.1 of the Immigration and Refugee Protection Act, which permits the use of electronic systems in administering the Act. That statutory authority is reinforced by internal governance structures, algorithmic impact assessments, and operational policies. A department-wide artificial intelligence strategy, aligned with the federal Digital Charter, is expected to be released soon.
What this means for legal practitioners is that many applications are now filtered through digital models before an officer sees anything. Counsel need to understand how these systems prioritize files and assess risk, particularly where procedural fairness may hinge on data inputs, risk flags, or missing documents. Submitting a complete and clearly structured application is more important than ever, not only to satisfy an officer but to ensure that the case passes cleanly through machine-led filters.
The use of artificial intelligence does not displace the fundamental rights and privacy protections that apply to government decision-making. Department of Justice counsel Anna Lillicrap explained that the Privacy Act continues to govern data collection and use: any collection must relate directly to an operating program, and use or disclosure must fall within clearly defined statutory rules. The proposed Strong Borders Act (Bill C-2) would create a more specific legislative framework for sharing biometric and identity data with foreign and domestic partners, subject to narrow and defined exceptions.
Professor Daly raised deeper questions about transparency and review. Because positive decisions may be generated without reasons and negative decisions still reflect only the officer’s rationale, systemic problems in the underlying models may never reach judicial review. He questioned whether courts are well-equipped to oversee automated systems and suggested that Parliament consider mechanisms such as mandatory public disclosures or independent audits. In the absence of clear legal doctrine, counsel must remain vigilant in pressing for transparency and due process, particularly when the operation of the model is not disclosed in reasons for refusal.
The session underscored that immigration lawyers must now treat technological competence as part of their professional obligations. Bellissimo emphasized the importance of understanding how artificial intelligence functions at a basic level and identifying potential sources of bias in both human and machine systems. Lawyers should also be prepared to question the evidence behind automated decisions and demand plain-language explanations when outcomes appear to lack coherence. Self-represented applicants may increasingly rely on AI-generated documents and legal submissions, raising new challenges for the courts and for counsel responding to those filings.
Justice Mosley concluded with a reminder that judges themselves are beginning to use artificial intelligence as a tool but remain cautious. Generative models can produce fabricated citations and arguments that erode credibility. Counsel who rely too heavily on these tools without verification risk losing not only their analytical edge, but their standing before the court. The duty to verify, assess, and present reasoned argument cannot be automated.
Artificial intelligence is no longer a background issue in immigration practice. It shapes who gets approved quickly, who is flagged for review, and what evidence matters most. It also raises real concerns about fairness, bias, and transparency. The legal framework is evolving, but the responsibility to monitor, understand, and respond falls on us.