
Tackling the Problem of AI “Revenge Porn” in Canada: Existing Law and Upcoming Legislative Reform

  • June 13, 2024
  • Mavra Choudhry and Shalom Cumbo-Steinmetz, Torys LLP

The popularity and availability of generative AI applications have skyrocketed in the past few years: users can easily ask AI programs to write entire essays or generate photorealistic images of anything imaginable simply by entering a prompt. This has naturally led to an explosion of novel legal issues and potential harms that the law has been called upon to address.

One of the most severe of these harms is the use of deepfake technology or AI image generation to create fake images depicting a real, identifiable person in an intimate, explicit, or pornographic manner (commonly referred to as “AI revenge porn”). This is not just a theoretical legal puzzle; it has already become an all-too-common practice with a disproportionately harmful impact on women.[1] While it is difficult to quantify the exact scope and extent of this activity, a 2019 study estimated that approximately 96 per cent of all deepfake content on the Internet was pornographic in nature, and that virtually all of this pornographic deepfake content depicted women.[2]

In this article, we examine how both existing law and upcoming legal reforms can be applied to address AI revenge porn. So far, the law has been slow to respond, though federal AI legislation on the horizon aims to regulate organizations that develop AI systems and make them available for use. Canadian privacy regulators have also indicated that existing privacy laws governing private and public sector organizations can and should be applied to address privacy issues that arise from generative AI, including the non-consensual distribution of intimate images (“NCDII”).

However, the problem lies not only with developers and deployers of AI systems, but also with end users who misuse AI image generation programs to create revenge porn content. Provincial statutory protections and common law privacy torts related to NCDII are, and will for the foreseeable future remain, the only non-criminal avenues for holding end users legally accountable for the harmful content they create.

END USER MISUSE: APPLYING CURRENT NCDII STATUTORY AND COMMON LAW REMEDIES TO THE AI CONTEXT

Several provinces have enacted legislation addressing the non-consensual distribution of intimate images, including Alberta, British Columbia, Manitoba, Nova Scotia, Newfoundland and Labrador, and Saskatchewan. These statutes provide a civil right of action to those who have had intimate images distributed without their consent. In other provinces, including Ontario, NCDII victims rely on common law privacy and harassment torts for civil redress, including the tort of “public disclosure of private facts”, established by the Ontario Superior Court of Justice in a 2016 decision specifically to provide a remedy for NCDII.[3]

Other relevant tort claims can include intrusion upon seclusion, publicity placing a person in a false light, defamation, and intentional infliction of emotional distress. Neither common law nor statutory privacy torts require a plaintiff to prove economic harm. The underlying principle is that individuals have a reasonable expectation of privacy in intimate images of themselves, and that violating this expectation can cause harm warranting civil redress.

Most notably, British Columbia’s Intimate Images Protection Act came into force on January 29, 2024. Passed in March 2023, the Act is the first of its kind to directly address digital image alteration and generation. The text of the Act defines intimate images to include images that have been “altered”,[4] and accompanying guidance explicitly includes “digitally altered images, digitally altered videos (deep-fakes), and AI generated material”.[5]

The British Columbia precedent may support interpreting the other provincial statutes to include digitally altered intimate images, even if those statutes are not themselves amended to include similar language. In a similar vein, newly proposed federal legislation aimed at promoting online safety (Bill C-63) currently includes deepfake images in its definition of “intimate content communicated without consent”, a form of harmful online content that Bill C-63 seeks to address by placing content moderation obligations on social media services.[6]

On the common law side, provincial courts have not yet addressed whether individuals have a reasonable expectation of privacy in digitally altered images or AI-generated facsimiles of themselves, leaving open the extent to which common law privacy torts apply in this context.

In Ontario, courts have acknowledged that the rapid development of technology can be addressed through privacy-related civil remedies. When establishing the need for a tort of public disclosure of private facts — several years before the AI and deepfake uptick of the early 2020s — Justice Stinson observed: “In the electronic and Internet age in which we all now function, private information, private facts and private activities may be more and more rare, but they are no less worthy of protection.”[7]

DEVELOPERS AND DEPLOYERS OF AI SYSTEMS: UPCOMING AI REGULATION AND EXISTING PRIVACY LAWS

UPCOMING AI LEGISLATION

While NCDII laws and privacy torts may be able to address end-user misuse of AI applications, upcoming AI legislation focuses on regulating organizations that develop AI systems and make them available for use.

The proposed federal Artificial Intelligence and Data Act (“AIDA”), currently in committee following its second reading in the House of Commons as part of Bill C-27, is the federal government’s answer to the problem of AI under-regulation. The AIDA creates Canada-wide obligations and prohibitions pertaining to the design, development, and use of artificial intelligence systems in the course of international or interprovincial trade and commerce. Among these obligations, organizations that develop AI systems or make them available for use are required to identify, assess, and mitigate the risk of harm caused by the system. The Act also creates a criminal offence of making an AI system available for use, knowing that it is likely to cause serious harm, where its use actually causes that harm.

It is unclear how this will apply in practice to the misuse of image generation programs, given how widely they are already used for innocuous purposes. The obligation to implement risk mitigation measures could, for instance, require developers to build in safeguards that prevent users from generating explicit or pornographic images altogether.

The structure of the AIDA is principles-based, meaning that the more specific substantive content is expected to come from regulations. Encouragingly, the federal government is alive to the harm of AI deepfakes in general, noting in its companion document to the AIDA that “AI systems have been used to create ‘deepfake’ images, audio, and video that can cause harm to individuals.”[8] Notably, even after Bill C-27 is passed, the AIDA is not likely to come into effect for two more years.

This state of affairs is replicated to some degree in the EU’s landmark AI Act,[9] which was formally approved by the EU Council in May 2024 and is expected to enter into force in June or July 2024. The Act creates transparency obligations for AI systems that generate deepfakes but does not address the revenge porn issue head-on. Moreover, most of the Act, including these transparency obligations, will not apply until two years after it enters into force.

APPLICATION OF EXISTING PRIVACY LAW

In December 2023, the federal Office of the Privacy Commissioner of Canada (OPC), jointly with all Canadian provincial and territorial privacy regulators, released guidance interpreting the existing privacy legislation that governs public and private sector organizations in the context of developing, providing access to, and using generative AI systems.[10]

Generally, the guidance makes clear that organizations collecting or using personal information to train, develop, or deploy generative AI systems must have legal authority to do so; in most cases, this means obtaining individuals’ consent to the use of their personal information to train AI systems. Reinforcing the consent requirement may make organizations developing and deploying AI systems think twice about scraping personal information and images for use as training data. This is significant because generating an AI image of an identifiable person typically requires that the AI system have images of that person in its training data.

In a promising look ahead, these principles anticipate that using AI to “generate intimate images of an identifiable person without their consent” will be a legal “no-go zone” in the privacy context, even though firm rulings on the legality of AI-related practices have yet to be made.

CONCLUSION

While existing law and upcoming legislative reform in combination cover some ground to regulate and sanction the creation and distribution of AI revenge porn, gaps and uncertainties remain.

One common thread running through existing laws, from civil NCDII remedies to privacy obligations for organizations, is that someone who has been affected or harmed must bring a complaint, whether to a court, tribunal, or privacy regulator. For AI revenge porn, it can be difficult for victims even to know that such images have been generated or distributed. If they do find out, they must then expend time and resources navigating this complex and ever-evolving web of legalities for a chance at some redress.

As well, with AI legislation not set to come into force for several more years (in Canada and globally), and the use of generative AI becoming seemingly more ubiquitous with each passing day, there is a real risk that this problem is, by now, a runaway train that will not be easily brought under control.

About the authors

Mavra Choudhry is a lawyer specializing in privacy and data protection. She advises clients on privacy compliance, data governance, and data breach response, as well as privacy law matters in relation to M&A transactions and financings.

Shalom Cumbo-Steinmetz is a litigator with an active data security and data privacy practice, representing some of Canada’s largest institutions. He is a member of the OBA Privacy and Access to Information Law Executive and an amateur sailor of small boats.


An earlier version of this article appeared on the OBA Privacy and Access to Information Law Section’s articles page.

[2] Tatum Hunter, “AI porn is easy to make now. For women, that’s a nightmare”, The Washington Post (13 February 2023).

[3] Doe 464533 v N.D., 2016 ONSC 541.

[4] Intimate Images Protection Act, SBC 2023, c 11, ss. 1, 2(a).

[5] Government of British Columbia, “Intimate Images and Consent” (29 January 2024).

[6] Bill C-63, An Act to enact the Online Harms Act, 1st Sess, 44th Parl, 2024, s. 2(1).

[7] Doe 464533 v N.D., 2016 ONSC 541 at para 44.

[10] Office of the Privacy Commissioner of Canada, Principles for responsible, trustworthy and privacy-protective generative AI technologies (7 December 2023).