
6 minute read
A Governance Program: The Optimal Path to AI Success
Futurist and author Ray Kurzweil predicts AI will reach human-level intelligence by 2029, and that by 2045 we will attain “singularity.” 1 The singularity is predicted to include the capability for our brains to interface (i.e., merge) with the cloud and expand intelligence beyond our wildest dreams. Meanwhile, lawyers may already feel overwhelmed by the pace of technological development and its legal and ethical challenges.
Our use of AI as lawyers, as well as our clients’ use of AI in their businesses, is predicted to yield enormous benefits if done right, and catastrophic risks if not. High-risk, high-reward activities and choices are not new. Fortunately, our society has developed a disciplined, reliable way to manage high-risk endeavors with our eyes on the prize.
In life we can handle risks along a spectrum from “No Guts, No Glory” to “Better Safe than Sorry.” Either end of that spectrum requires little thought or effort. We close our eyes and just take the leap, or we hide in a corner. It is the effort to stay somewhere in the middle—allowing for the opportunity while working to minimize the worst collateral damage—that requires thought and vigilance.
This effort to stay somewhere between the extremes of recklessness and avoidance is the stuff of Governance. Governance is simply risk management that begins with the company’s governing board. It does not guarantee success, but it can improve our chances.
Risk management is a well-established science that requires some artful judgment, and we as lawyers are trained to do it. The classic risk management steps are to (1) identify the risks, (2) evaluate and rank them by severity and likelihood of occurrence, (3) choose tools for managing each top-level risk, and (4) implement and monitor those tools. These are also the basic steps of an AI Governance Program. Of course, the devil is in the details: the risks of AI are numerous, and many are quite technical. Available risk management tools are not all perfect or reliable. And everything costs time and money.
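For readers who like to see a process expressed as code, a minimal sketch of these four steps as a simple risk register in Python might look like the following. The risk names, 1-to-5 scales, and tool choices are illustrative assumptions only, not drawn from any law or framework.

```python
# An illustrative sketch of the four classic risk management steps applied
# to an AI use case. All risk names, scales, and tool choices here are
# hypothetical examples, not taken from any official framework.

from dataclasses import dataclass, field

@dataclass
class Risk:
    name: str
    severity: int        # 1 (low) .. 5 (high)
    likelihood: int      # 1 (rare) .. 5 (near-certain)
    tools: list[str] = field(default_factory=list)  # chosen management tools

# Step 1: identify the risks for this model and use case.
risks = [
    Risk("Hallucinated output reaches a client", severity=5, likelihood=3),
    Risk("Training data embeds bias", severity=4, likelihood=3),
    Risk("Confidential data leaks via prompts", severity=5, likelihood=2),
]

# Step 2: evaluate and rank by severity x likelihood.
risks.sort(key=lambda r: r.severity * r.likelihood, reverse=True)

# Step 3: choose management tools for the top-level risk.
risks[0].tools = ["human-in-the-loop review", "usage policy", "training"]

# Step 4: implement and monitor (here, simply report the current register).
for r in risks:
    print(f"{r.name}: score={r.severity * r.likelihood}, tools={r.tools}")
```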
Nevertheless, we basically have the steps already laid out. The laws—and perhaps our survival—require that we use what we have and begin immediately.
Identifying AI risks may seem overwhelming at first. MIT has compiled an AI Risk Repository of more than 700 risks,2 spanning legal, operational, social, and ethical categories. The type of AI model, its intended business uses, the data on which it was trained or tested, its architecture, and any tests performed to ensure quality, accuracy, and non-discrimination are among the information needed to narrow down the relevant risks.
Using this information, a company can select the most important risks for a given model and use case. With a cross-functional team, the priority risks can typically be narrowed to fewer than 25 for any model and use case, and from those, a business can usually target a handful for initial management. AI governance is a dynamic work in progress; it can be implemented in steps and scaled to a company’s resources.
Evaluating the risks and ranking them as High, Medium, or Low (or gradations in between) establishes the order in which a business addresses its most important risks. Risk assessment matrices and guiding principles (e.g., any use of sensitive personal data merits a higher ranking) can structure this evaluation.
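To make the ranking step concrete, here is one hedged sketch of how a risk assessment matrix and an override principle might be encoded. The numeric thresholds and the sensitive-data rule are assumptions chosen for illustration, not prescribed by any framework.

```python
# An illustrative risk assessment matrix: map severity x likelihood scores
# to High / Medium / Low rankings, with an example override principle
# (any use of sensitive personal data merits a higher ranking). The
# thresholds and the override rule are assumptions for illustration only.

def rank_risk(severity: int, likelihood: int, uses_sensitive_data: bool = False) -> str:
    score = severity * likelihood          # both on a 1..5 scale
    if score >= 15:
        ranking = "High"
    elif score >= 8:
        ranking = "Medium"
    else:
        ranking = "Low"
    # Principle-based override: sensitive personal data bumps the ranking up one level.
    if uses_sensitive_data and ranking != "High":
        ranking = "High" if ranking == "Medium" else "Medium"
    return ranking

print(rank_risk(severity=4, likelihood=4))                             # High
print(rank_risk(severity=2, likelihood=2, uses_sensitive_data=True))   # Medium
```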
The next step is selecting the tools to manage each priority risk. Avoidance is still a risk management choice, as is wholesale acceptance of the risk, but for most risks we select from a variety of other tools. These include insurance, policies and procedures for use of the AI model, training, notices and disclaimers, human-in-the-loop monitoring and testing, and contractual assurances or disclaimers.
The scale and scope of AI have driven ongoing development of new risk management approaches, including quantitative governance, AI tools for testing and monitoring, and privacy-enhancing technologies such as synthetic data. This collision of technical and legal considerations is what makes AI such a challenging puzzle, and it is why governance must be a team effort with all stakeholders participating.
Existing and developing AI laws, frameworks, and guidance nearly all include governance or risk management as a legal requirement. The Organisation for Economic Co-operation and Development reports that, as of May 2023, governments had announced more than 930 policy initiatives for trustworthy AI across 71 jurisdictions.3 The U.S. thus far has taken a sectoral approach, and every federal agency and many states have issued guidance, passed or proposed laws, and/or used existing laws to enforce AI governance standards.
For example, many Federal Trade Commission enforcement actions under the FTC Act involving AI have resulted in consent orders imposed precisely because the business had no AI risk management program.4 The Colorado Artificial Intelligence Act, which targets algorithmic discrimination, requires risk management programs for high-risk AI systems.5 The Department of Justice’s recently updated guidance on evaluating corporate compliance programs makes clear that one priority for prosecutors is how a business navigates risks related to new technologies like AI.6 These are only a few examples of why a robust AI governance program is essential to minimizing risk and liability.
Resources for AI governance frameworks are available, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework7 and the International Organization for Standardization (ISO) Standards 23894 8 and 42001. 9 These frameworks are helpful, but they are not enough by themselves. Legal and compliance participation, along with input from other stakeholders, is needed to appropriately identify and manage AI legal risks.
Creating and implementing an AI governance program will promote AI success and minimize AI risk. Does this description of AI governance simplify a very complicated endeavor? Of course. But it also allows a business, a law firm, or a legal office to act to protect itself and move forward with some control within available resources. And as applicable legal requirements and uncertain consequences rapidly accumulate, a governance program will provide grounding, a framework for managing that increasing risk, a defense to liability, and a pathway for continuous evolution.

Footnotes
1 Ray Kurzweil, The Singularity Is Nearer: When We Merge with AI (2024).
2 The AI Risk Repository, Massachusetts Institute of Technology.
3 How Countries Are Implementing the OECD Principles for Trustworthy AI, OECD.AI.
4 See, e.g., FTC v. Rite Aid Corporation, Federal Trade Commission.
5 Colorado SB 24-205 (2024).
6 Evaluation of Corporate Compliance Programs, U.S. Department of Justice Criminal Division
7 Artificial Intelligence Risk Management Framework (AI RMF 1.0), National Institute of Standards and Technology
8 ISO/IEC 23894:2023, Artificial intelligence - Guidance on risk management.
9 ISO/IEC 42001:2023, Artificial intelligence - Management system.