Snapshot:
- The Framework is a voluntary, flexible guide designed to assist organizations in implementing trustworthy and responsible AI systems.
- Part I helps organizations assess AI-related impacts and risks; Part II outlines functions that will allow AI actors to address those risks in practice.
- Part I sets out the characteristics of trustworthy AI systems: they are valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair (with harmful bias managed).
- Part II sets out the four “functions” organizations need in place to manage the risks posed by an AI system: Govern, Map, Measure, and Manage.
- Concepts from the Framework will provide a roadmap for jurisdictions seeking to regulate the AI space – as is currently underway in Canada. Organizations should consider the Framework when developing or acquiring AI systems, and should consider establishing AI governance programs that align with its concepts.
Background
The U.S. National Institute of Standards and Technology (“NIST”) recently released version 1.0 of its Artificial Intelligence Risk Management Framework (“AI RMF” or “Framework”), a practical, flexible, and adaptable set of guidelines for AI actors to use as they design, develop, deploy, or use AI systems across the AI lifecycle. The goal of the AI RMF is to provide a voluntary, rights-preserving, sector- and use-case-agnostic guide that AI actors can implement to promote trustworthy and responsible AI systems.
This is a notable development in an area that has, to date, seen few voluntary or obligatory requirements imposed on such actors, and it is particularly relevant in the Canadian context as Bill C-27, which includes the proposed Artificial Intelligence and Data Act (“AIDA”), makes its way through second reading in the federal Parliament.