
Digital Borders and Racial Codes in AI Migration Control

July 2, 2025 | Yoann Emian

Across the globe, states are embracing artificial intelligence and surveillance technologies to manage borders, process migration applications, and monitor mobile populations. These tools are often marketed as solutions: efficient, objective, and immune to human bias. In Canada, at the UNHCR, and in other institutional contexts, AI promises to deliver smarter migration governance with faster decisions, better data, and fewer errors.

But behind the promise of innovation lies a more troubling reality. The digitization of migration governance is not neutral. AI and algorithmic technologies are being deployed within systems already shaped by structural exclusion. The result is a new frontier of control where black-box systems replace legal reasoning, and where migrants are datafied, risk-assessed, and filtered before they are ever seen as rights-holders.

Built on datasets that reflect past discrimination, coded by actors who lack racial literacy, and deployed in legal systems with limited transparency or appeal mechanisms, AI in migration replicates the very biases it claims to solve. In the absence of strong safeguards, it has become a tool of algorithmic border control that reinforces racialized exclusions under the guise of efficiency.
