Tackling the Problem of AI “Revenge Porn” in Canada: existing law and upcoming legislative reform

  • February 15, 2024
  • Mavra Choudhry, Shalom Cumbo-Steinmetz (Torys LLP)

The popularity and availability of generative AI applications have skyrocketed in the past few years—users can easily ask AI programs to write entire essays and generate photorealistic images of anything imaginable simply by entering a few prompts. This has naturally led to an explosion of unique legal issues and potential harms that the law has been called upon to address.

One of the most severe of these harms is the use of deepfake technology or AI image generation to create fake images depicting a real, identifiable person in an intimate, explicit, and/or pornographic manner (commonly referred to as “AI revenge porn”). This is not just a theoretical legal puzzle—it has already become all too common a practice, with a disproportionately harmful impact on women.[1] While it is difficult to quantify the exact scope and extent of this activity, a 2019 study estimated that approximately 96% of all deepfake content on the Internet was pornographic in nature, and that virtually all of this pornographic deepfake content depicted women.[2]

In this article, we examine how both existing law and upcoming legal reforms can be applied to address AI revenge porn. So far, the law has been slow to respond, though federal AI legislation on the horizon aims to regulate organizations that develop AI systems and make them available for use. Canadian privacy regulators have also indicated that existing privacy laws governing private and public sector organizations can and should be applied to address privacy issues that arise from generative AI, including the non-consensual distribution of intimate images (“NCDII”).

However, the problem lies not only with developers and deployers of AI systems, but also with end users who misuse AI image generation programs to create revenge porn content. Provincial statutory protections and common law privacy torts related to NCDII are, and will be for the foreseeable future, the only non-criminal avenues for holding end users legally accountable for the harmful content they create.