Prepare for AI Regulation Ahead When Advising Employer Clients

  • September 20, 2024
  • Maciej Lipinski

To paraphrase a common adage: technology gets halfway around the world before regulation puts its pants on.

Generative artificial intelligence (“AI”) technologies are developing at a rapid pace and are increasingly a presence in many workplaces, whether employers know it or not. The 2024 Work Trend Index on the state of AI at work, released earlier this year by LinkedIn and Microsoft, revealed that 78% of surveyed generative AI users are already bringing their own AI tools to work (dubbed “bring-your-own-AI” or “BYOAI”), and that a majority of those users are reluctant to admit they use AI for their most important work tasks.

While many employers are already considering or taking steps to address the growing adoption and use of generative AI among their employees, important regulatory frameworks are also in development at both the Provincial and Federal levels.

For lawyers advising employer clients grappling with generative AI in the workplace, it is increasingly important to keep an eye on the regulatory developments ahead.

Federal Legislation and a Voluntary Code Establish Requirements for Private Businesses

In Canada, the Federal Artificial Intelligence and Data Act (the “AIDA”) is making its way through the House of Commons. Like the European Union’s Artificial Intelligence Act, Canada’s AIDA particularly targets the regulation of high-risk or high-impact AI systems. In the employment context, this would likely include AI systems used in hiring and employee discipline, which may introduce new risks such as an increased potential for bias and discrimination in those processes.

Under the AIDA’s framework, obligations on businesses that develop and use high-impact AI systems would be guided by six key principles:

  1. Human Oversight and Monitoring: Systems must be developed and designed to ensure meaningful monitoring and measurement can be carried out by humans assigned to oversee the technology in operation.
  2. Transparency: The public must have sufficient access to information on how the system is used, including its capabilities and limitations. 
  3. Fairness and Equity: Systems must be designed to address and mitigate the risks of systemic bias and discrimination.
  4. Safety: Proactive measures must be taken to assess and mitigate foreseeable misuses of the technology and risks of harm.
  5. Accountability: Organizations must establish policies and processes to ensure regulatory compliance requirements surrounding generative AI are met.
  6. Validity and Robustness: Systems must work as intended over time and across circumstances.

As of this publication, 30 major Canadian companies have already signed on to the Government of Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, which encompasses these principles.

Notably, if passed, the Federal legislation will apply to private sector entities but will not apply to Ontario’s public sector.

Ontario’s Beta Principles for the Ethical Use of AI Establish Key Guardrails for the Public Sector

Ontario is in the process of developing a Trustworthy Artificial Intelligence Framework that will govern AI adoption, usage and risk management by public sector entities. For the time being, Ontario has released a set of Principles for Ethical Use of AI [Beta] outlining guardrails for public sector AI usage that will likely inform future regulation. Developed by the Ontario Government in collaboration with an AI Expert Working Group, these Principles are modelled on laws in jurisdictions such as New Zealand, the United States, and the European Union, and are consistent with the research of the Organisation for Economic Co-operation and Development (the OECD).

Similar to the Federal AIDA’s governing principles, Ontario’s proposed Principles for Ethical Use of AI are as follows:

  1. Transparent and explainable: AI systems must be understandable, and their decisions open for review and discussion.
  2. Good and fair: AI systems must respect the rule of law and human rights, civil liberties and democratic values, all as broadly defined.
  3. Safe: Every AI application must function securely and safely, with risks continuously assessed and mitigated.
  4. Accountable and responsible: Clear lines of responsibility must be maintained within organizations, ensuring that decisions made by or with AI are justifiable and equitable.
  5. Human centric: AI should be developed with a focus on public benefit, involving input from those who use and are impacted by these systems.
  6. Sensible and appropriate: Consideration must be given to the sector-specific and the broader societal impact of AI technologies.

The Government of Ontario has stated that future steps will aim to bring the Principles “to life”. What that looks like remains to be seen, but may include significant measures such as:

  • New laws and legislated rules on AI use;
  • New regulators overseeing AI usage in the public sector; and
  • New powers and expanded mandates for existing regulators such as the Office of the Information and Privacy Commissioner of Ontario.

Looking Ahead

As regulatory requirements governing organizations’ generative AI usage continue to take shape in Ontario, in Canada writ large, and abroad, employers are well advised to begin updating their policy frameworks to address the technologies of today while staying on course to meet the regulatory requirements ahead.

On November 4, 2024, the author will co-chair a CPD program for the OBA entitled “How to Advise Your Clients in the Artificial Intelligence Age”.

ABOUT THE AUTHOR

Maciej Lipinski, JD, PhD, is a Toronto area employment and labour lawyer, coach, and workplace dispute resolver. He is the principal of The Process Legal. Maciej is a member-at-large of the Ontario Bar Association’s Labour & Employment Law section executive.

Any article or other information or content expressed or made available in this Section is that of the respective author(s) and not of the OBA.