Artificial intelligence algorithms deployed by governments for decision-making can be the difference between intelligence and imbecility (yes, the irony), equity and discrimination, competence and ineptness, and participation and alienation for an entire population. Singularity and sentience may realistically remain beyond AI systems for now, but systemic bias certainly does not. If a government can remedy bias in the AI it uses to serve clients, there is much to gain.
The Government of Canada’s Directive on Automated Decision-Making, which took effect on April 1, 2019 and requires compliance within a year, sets standards for the federal government’s procurement and use of AI in providing services to external clients. The Directive applies to any system, tool, or statistical model used by the government to recommend or make an administrative decision about a client. Automated decision-making (ADM) systems, as the Directive defines them, include “technology that either assists or replaces the judgement of human decision-makers”.
One of the Directive’s requirements is the completion of an Algorithmic Impact Assessment (AIA) before an ADM system is put into production. The assessment evaluates the impact the system will have on the rights, health, well-being, or economic interests of individuals or communities, and on the ongoing sustainability of an ecosystem. Impacts are categorized from Level I (little to no impact) to Level IV (very high impact). The Directive also provides for, among other things, releasing government-owned custom source code, quality assurance testing for unintended data biases, validating data quality, and training employees.
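To give a flavour of what a quality assurance test for unintended data bias might look like in practice, here is a minimal sketch in Python. It compares approval rates between two groups using the common “four-fifths rule” heuristic; the group data, function names, and 0.8 threshold are illustrative assumptions on my part and are not prescribed by the Directive.

```python
# Illustrative sketch only: a simple disparate-impact check between two groups.
# The data and the 0.8 (four-fifths rule) threshold are assumptions, not
# requirements drawn from the Directive on Automated Decision-Making.

def approval_rate(decisions: list) -> float:
    """Share of positive (approved) decisions for one group."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def disparate_impact_ratio(group_a: list, group_b: list) -> float:
    """Ratio of the lower approval rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    high, low = max(rate_a, rate_b), min(rate_a, rate_b)
    return low / high if high > 0 else 1.0

if __name__ == "__main__":
    # Fabricated outcomes from a hypothetical automated benefits decision.
    group_a = [True, True, True, False, True, True, False, True]    # 75% approved
    group_b = [True, False, False, True, False, False, True, False]  # 37.5% approved

    ratio = disparate_impact_ratio(group_a, group_b)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # common heuristic threshold; an assumption here
        print("Flag for review: possible unintended bias between groups.")
```

A check like this would only be one small piece of the quality assurance the Directive contemplates, alongside data validation, documentation, and human review.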