“How to Advise Your Labour and Employment Clients in the Artificial Intelligence Age”: A Program Summary

  • November 15, 2024
  • Tyler Sparrow-Mungal

On November 4, 2024, the OBA Labour & Employment Law Section presented a CPD program entitled How to Advise Your Labour and Employment Clients in the Artificial Intelligence Age. The program was co-chaired by Section Chair Rob Richler of Bernardi Human Resource Law and Member-at-Large Maciej Lipinski of The Process Legal. Speaking at the program were Lisa Stam (Spring Law), Saneliso Moyo (Goldblatt Partners LLP), Danielle Rawlinson (Borden Ladner Gervais LLP) and Professor Stephanie Kelley (St. Mary’s University).

I would like to thank the OBA for the opportunity to attend the program, and to prepare and share the following summary of its key takeaways.

The Growing Gap in Artificial Intelligence Regulation

There is a growing gap between how technology is being utilized and how it is being regulated. Canada does not yet have overarching artificial intelligence (AI) legislation, which has led to concerns regarding its use in labour and employment contexts.

Artificial Intelligence and Data Act (AIDA)

The federal AIDA, once passed, will serve as Canada’s overarching, comprehensive AI law. It will regulate uses of AI in Canada, outline permissible uses of AI, and require organizations to uphold certain ethical standards. AIDA is risk-based legislation: only “high-risk” or “high-impact” AI systems will be subject to regulation. In the labour and employment context, high-impact systems will include AI tools that affect hiring and employment decisions.

Working for Workers Four Act

In March 2024, Ontario’s Working for Workers Four Act introduced rules on the use of artificial intelligence in hiring. Section 8.4(1) requires every employer who advertises a publicly advertised job posting, and who uses artificial intelligence to screen, assess or select applicants for the position, to include in the posting a statement disclosing that use of artificial intelligence.

Common Applications of AI in Hiring

AI tools are now widely used to streamline application reviews during the hiring process. Keyword-matching programs narrow the candidate pool by selecting applicants whose application materials contain certain keywords, and these tools can then rank the selected candidates by reference to the employer’s past hiring decisions. In theory, AI tools can make hiring decisions free of human bias or ego; in practice, however, they carry significant limitations and risks.
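The screening described above reduces to two steps: filter applicants by keyword matches, then rank the survivors. A minimal, purely illustrative sketch follows; the keywords, threshold, and applicant data are invented for this example and do not reflect any vendor’s actual product.

```python
# Hypothetical keyword-based applicant screening, for illustration only.
# Keywords, threshold, and applications are assumptions, not real data.
REQUIRED_KEYWORDS = {"payroll", "hris", "employment standards"}

def keyword_score(application_text: str) -> int:
    """Count how many required keywords appear in an application."""
    text = application_text.lower()
    return sum(1 for kw in REQUIRED_KEYWORDS if kw in text)

def screen_and_rank(applications: dict[str, str], threshold: int = 2) -> list[str]:
    """Keep applicants meeting the keyword threshold, ranked by score."""
    scored = {name: keyword_score(text) for name, text in applications.items()}
    shortlisted = {name: s for name, s in scored.items() if s >= threshold}
    return sorted(shortlisted, key=shortlisted.get, reverse=True)

applications = {
    "A": "Experience with payroll and HRIS administration.",
    "B": "Background in marketing.",
    "C": "Payroll, HRIS, and employment standards compliance.",
}
print(screen_and_rank(applications))  # → ['C', 'A']; B is screened out entirely
```

Even this toy version illustrates the risks discussed below: an applicant who phrases relevant experience without the expected keywords is silently excluded, and any bias embedded in the chosen keywords propagates to every decision.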

Limitations and Risks of AI in Hiring

  1. A major risk associated with AI tools is automation bias: humans tend to place excessive confidence in the conclusions reached by automated decision-making systems.
  2. AI systems may be trained on data that is biased or not predictive of job performance. As a result, these systems may make poor hiring decisions or inadvertently perpetuate those biases in their outputs.

Labour Unions and Technological Change

The extensive adoption of AI systems has prompted an ongoing conversation about addressing technological change in collective agreements. This conversation is not entirely novel: collective agreements adapted to the emergence of computers and the internet in the 1980s and 1990s. Today, unions are focused on modifying the technological-change provisions in existing agreements to address AI-specific concerns, a common example being the use of AI to monitor employees in remote work environments. Thus far, unions have succeeded in expanding the technological definitions in these agreements to address worker concerns.

AI in Termination Decisions: A Developing Area

While AI is already being used in hiring, its role in termination decisions is not yet established. To date, AI has appeared in termination matters primarily through post-termination litigation. For example, AI-powered software such as Blue J Legal predicts common law reasonable notice and has incidentally created a common language in negotiations, as legal professionals are guided toward the same frequently cited cases. Outputs from this software are also beginning to appear in demand letters to substantiate termination package requests.

Strategic Transparency and Knowing Your AI Tools

Transparency in an employer’s use of AI tools is essential for strategic reasons: it not only fulfills a moral obligation but also strengthens the employer’s position in potential litigation. For example, an employer that is transparent about its use of AI worker-surveillance tools can rely on the resulting information in a dispute arising from a termination, whereas concealing the use of such tools only weakens the employer’s position.

Additionally, it is crucial for employers to understand thoroughly how the tools they deploy actually function. Problems commonly arise when employers unknowingly collect excessive or inappropriate data, typically because they have not conducted comprehensive impact assessments or do not sufficiently understand the AI systems they use. Without that understanding, employers may fail to recognize both the need for strategic transparency and their obligation to comply with the statutory requirements discussed above.

Conclusion

Overall, as recent statutory developments reflect, the regulation of AI is gradually catching up to its rapid adoption in the labour and employment sector. Employers must prioritize transparency in deploying AI tools and dedicate the resources needed to understand the systems they use. While these tools offer significant efficiency gains, they should not be expected to be perfect.

About the Author

Tyler Sparrow-Mungal is a third-year law student at the Lincoln Alexander School of Law with a keen interest in pursuing a career in labour and employment law. He can be reached at tyler.sparrow@torontomu.ca.

Any article or other information or content expressed or made available in this Section is that of the respective author(s) and not of the OBA.