As more companies adopt artificial intelligence and algorithmic decision-making becomes integral to many core business functions, directors on corporate boards are considering their oversight obligations in this area.
The promise of AI is evident from recent corporate spending. Stanford University's 2023 AI Index Report found that private investment in AI in 2022 was approximately $91.9 billion — 18 times what it was in 2013.
Balanced against AI's promise are significant business risks. For example, real estate company Zillow made headlines in 2021 when it shut down its “Zillow Offers” business and laid off roughly 25% of its workforce, in part because its house-buying algorithm failed to price homes accurately. The shutdown prompted a number of shareholder derivative suits and securities class actions against Zillow, its executives and its board, alleging, among other things, materially misleading statements about “Zillow Offers.” Public scrutiny of facial recognition, credit algorithms, hiring tools and other AI systems, as well as of algorithmic bias, continues to create substantial regulatory and reputational risk for companies.
The Rapidly Evolving Regulatory Landscape
In recent years, regulators across the globe have begun passing legislation or issuing regulatory guidance on AI. The European Union is widely viewed as leading these efforts through its attempt to pass a comprehensive, cross-sectoral AI regulation. Regulators in Hong Kong, the Netherlands, Singapore, the United Arab Emirates, the United Kingdom and the United States — among others — have also been outspoken on the need for appropriate corporate governance to address AI-related risks, including risks relating to bias, model drift, privacy, cybersecurity, transparency and operational failures.
One notable commonality among these regulatory pronouncements, particularly in the financial sector, is the express focus on board-level oversight of AI risks. For example:
- The Hong Kong Monetary Authority has issued principles stating that the board and senior management remain accountable for AI-driven decisions and should ensure the implementation of appropriate AI governance, oversight, accountability frameworks and risk-mitigating controls.
- The Netherlands Authority for the Financial Markets similarly emphasized the need to assign final accountability for AI applications at the board level and called for such accountability to extend explicitly to externally developed AI applications.
- The Monetary Authority of Singapore has suggested that firms should set the approval authority for highly material AI-driven decisions at the CEO or board level and should periodically update the board on the use of AI within the company so that the board maintains a central view of all material AI-driven decisions.
- The United Arab Emirates financial sector supervisory authorities' recent guidelines likewise call for the governing body and senior management to be held accountable for the outcomes and decisions arising from AI applications, including the appropriate delegation of key AI development and implementation responsibilities to personnel with the requisite skill sets.
- The United Kingdom Financial Conduct Authority and Bank of England both recently underscored that the ultimate responsibility for AI risks resides with boards and senior management.
- In the United States, the National Association of Insurance Commissioners' Draft Model Rule on the Use of Artificial Intelligence by Insurers would require that insurers' boards or appropriate committees adopt their AI programs and that their senior management be held responsible for AI development and oversight.
These converging regulatory expectations relating to board-level responsibility for overseeing AI risks mirror a trend we are already seeing in the cybersecurity space, including the SEC's newly adopted Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure rules.
AI Oversight and Caremark
Specific regulatory oversight obligations aside, board-level oversight of AI risks may be important for companies in light of potential Caremark claims. In order to prevail on a Caremark claim, a plaintiff must establish either that there was an “utter failure to attempt to assure a reasonable information and reporting system exists” or, if such a system exists, that the board consciously failed to monitor or oversee its operations. Although Caremark claims have historically been hard to prove, recent Caremark litigation underscores the continuing need for boards to exercise and document oversight of mission-critical risks, including, potentially, AI.
For example, in Marchand v. Barnhill, the Delaware Supreme Court allowed Caremark claims to proceed against the defendant directors of an ice cream company, finding that the complaint sufficiently pled facts regarding the board's failure “to put in place a reasonable board-level system of monitoring and reporting” of food safety and sanitation risks (such as a board committee charged with monitoring food safety compliance), despite the clear importance of food safety to the company's operations.
Though the Delaware Court of Chancery has so far dismissed Caremark claims against directors brought in the aftermath of cybersecurity incidents based on the plaintiffs' failure to show bad faith, the court has acknowledged that poor oversight in the cybersecurity context, such as conduct “involving liability for bad faith actions of directors” and “a deliberate failure to act,” may give rise to Caremark claims.
Overlaps with ESG
Another reason for board-level focus is AI's relationship to ESG issues:
- Environmental. As AI models grow in size and complexity, so does the computing power needed to train and run them, which can carry a substantial carbon footprint. At the same time, AI systems can also be used to optimize energy consumption.
- Social. Companies that deploy AI for hiring, lending, housing or insurance decisions need to consider ways to assess and, if necessary, remediate potential bias or discrimination associated with those initiatives. Some AI applications have been criticized for exacerbating income inequality, displacing large numbers of jobs, facilitating human rights abuses and manipulating individuals' behavior.
- Governance. For AI programs to meet burgeoning regulatory requirements, as well as emerging ethical standards, AI-related risks must be identified and mitigated through appropriate corporate governance, including policies, procedures, training and oversight.
Key Considerations for Boards on AI
For companies in which AI has become (or is likely to become) a mission-critical regulatory compliance risk, directors may wish to consider several issues:
- Board responsibility. Consider including AI as a periodic board agenda item. AI oversight can reside with the full board, an existing committee (e.g., audit, technology, or cybersecurity, where one exists) or a newly formed AI-specific committee. The board should consider whether it has the necessary expertise to oversee AI opportunities and risks and whether board-level AI training would be warranted.
- Awareness of critical AI uses and risks. Consider ensuring that the board is made aware of the company's most critical AI systems (and the data used for those systems), the risks those systems pose to the company and the steps taken to mitigate those risks.
- Understanding resource allocation. Consider requiring periodic review and assessment of resources devoted to AI development, operations, regulatory compliance and risk mitigation.
- Senior management responsibility. Consider assigning to a member of management or a management committee the responsibility for AI risk and regulatory compliance (including any necessary AI regulatory risk disclosures).
- Compliance structures. Consider ensuring that management-level AI compliance and reporting structures are in place to facilitate board oversight, which may include periodic AI risk assessments and monitoring of high-risk AI systems, written AI policies and procedures, and training. Such policies and procedures may include those that address material AI-related incidents, whistleblower complaints and oversight of third-party providers of critical AI-related resources.
- Board briefings on material AI incidents. Consider ensuring that the board is appropriately briefed on the company's response to serious AI incidents and related impacts, the status of any material investigations and the effectiveness of response efforts.
- Board minutes and materials. Consider ensuring that the board's AI oversight activities and management's compliance efforts are well documented in board minutes and supporting materials.
Some directors may be uncomfortable with AI risk oversight because they lack specific expertise in the area. As the SEC has made clear regarding cybersecurity, however, boards must find a way to fulfill their oversight obligations, even in technical areas, if those areas present enterprise risks. This does not mean that directors must become AI experts or that they should be involved in the day-to-day management of AI operations and risk management. It does mean, however, that directors at companies with significant AI programs should consider how they will ensure effective board-level oversight in light of the growing opportunities and risks presented by AI.