Takeaways

  • As artificial intelligence (AI) expands into virtually every industry, companies should consider AI’s potential impacts on corporate governance and internal controls.
  • Companies should integrate AI thoughtfully to ensure they meet good governance standards by minimizing AI’s risks while leveraging its benefits.

We recently waved goodbye to 2023, and we remember many things from last year (besides Taylor Swift), including two important letters—A.I. These two letters arguably received more attention than any others, as companies developed and implemented breakthrough AI technology, government regulators expressed caution and high school students became best friends with ChatGPT. As AI expands into virtually every industry—whether cutting-edge technology and financial companies or “old school” industries, such as construction and transportation—another letter merits our attention: the letter G.

G, as in “Governance,” specifically within the context of Environmental, Social, and Governance (ESG). While attention, understandably, has been focused on AI’s glitz and luster and its ability to transform businesses, it is important for companies—particularly those regulated by the SEC—to focus equally on the more mundane blocking and tackling as they sprinkle AI throughout their ecosystems. The SEC repeatedly has made clear that ESG-related disclosures and internal controls are a priority, and SEC Chair Gary Gensler specifically warned that the agency is watching for “AI washing.”

Accordingly, companies should integrate AI in a thoughtful way to ensure they meet good governance standards while leveraging AI’s benefits. As if directors and officers don’t already have enough on their plates, we can add AI policies to the growing list of good governance demands (alongside cybersecurity and climate disclosure) placed firmly on the shoulders of today’s corporate leaders.

The Industry-Spanning Ubiquity of AI 
Broadly speaking, AI is a branch of computer science that enables machines to perform tasks that ordinarily require human intelligence. This definition captures many examples of AI, such as “generative AI” (systems, like ChatGPT, that generate text or other media based on the data on which they were trained), “machine learning” (systems that learn to perform specific tasks from data rather than from explicit programming) and robotics.

AI means different things to different companies and industries—and it can be deployed in myriad ways within an organization. For some, it may mean implementing AI-driven technology via the robotic arms on a production line. For others, it could mean an algorithm that creates predictive logistics for retail and shipping, answers customers’ questions via chatbot or recommends what TV show to watch next. Financial firms might use AI for fraud detection, financial advisory services or automated trading. And others might use AI for marketing purposes, such as to track data to enable personalized advertisements. The possibilities are endless, limited in many respects only by our imagination—but with endless possibilities comes the responsibility to ensure that AI does not disrupt an otherwise healthy environment of internal controls and good governance.

The Implications of AI on Governance
Good governance (i.e., the “G” in ESG) is about making sure company leadership adheres to guidelines and standards that promote the overall well-being of the company. Good governance ensures that a company follows ethical business practices, treats stakeholders fairly, manages risks and crises and maintains transparency, all while maximizing profitability and, increasingly, avoiding the public perception that the company is harming the local or global community.

In short, governance deals with optimizing profit, transparency and integrity, and AI can enhance each of these. It can boost a company’s profitability and efficiency by automating processes, making output consistent and developing cutting-edge technology. But it also can alter business practices themselves. For example: What happens if a key function of a company is performed by AI, if a key element of a company’s profit derives from its ability to perform an AI-driven function, or if AI allows the company to lay off a large number of employees?

Companies must implement AI carefully in order to maximize its benefits while minimizing its risks.

Properly Implementing AI: The Role of Management and the Board
Management and boards of directors each play important roles with respect to integrating AI into a company. While the devil is in the details, it behooves the board to discharge its oversight duties by asking the right questions and pressure-testing management’s plans to implement AI within the organization. In many ways, AI is no different from any other technological tool, but in other ways it is unlike anything that came before—in part because of the perception (which may in fact be a reality under some circumstances) that AI has a life of its own and does not report to a human being within the company.

As we discussed in connection with the SEC’s recently finalized cybersecurity rules, companies must make periodic disclosures regarding the board of directors’ oversight of risks from cybersecurity threats—which makes it critical that directors have an appropriate understanding of cyber risks. Similarly, the SEC’s climate change rule proposal would require additional disclosure relating to board governance, including the identity of the board members or board committee responsible for the oversight of climate-related risks and whether any board member has expertise in such risks. The SEC consistently has tried to raise the bar with respect to board governance and, in time, we expect it to take a similar approach with respect to AI.

At bottom, boards of directors should be aware of the SEC’s focus, updated appropriately regarding their company’s use of AI, and equipped to ask questions regarding any disclosures made about such use.

Disclosure Obligations: What to Say, and When to Say It
Public companies and other regulated entities face a growing variety of disclosure requirements, including under the Securities Exchange Act of 1934 and the Investment Advisers Act of 1940. Put simply: if a company chooses to speak on a subject, its disclosure must be complete and accurate; if it chooses not to speak, the omission must not be material.

It is important that companies have a firm grasp on how to disclose their use of AI given the SEC’s increased focus on ESG-related disclosures. For example, in 2022, the SEC proposed a rule that would require investment advisers to disclose information regarding their ESG investment practices. While the SEC’s rule proposal focuses on the use of ESG factors in investment strategies (i.e., it was not designed with AI in mind), the SEC’s increased focus on ESG as a general matter further underscores the need to ensure good governance (i.e., the “G”) when implementing AI-enabled technologies—especially if such implementation has or could have material impacts on a company’s functions or potential profitability.

The SEC further demonstrated its interest in regulating AI-associated risks by proposing a rule that would require broker-dealers and investment advisers to take steps to address conflicts of interest associated with the use of predictive data analytics and similar technologies to interact with investors. Put simply, predictive data analytics implements AI to predict future outcomes based on existing data sets. The proposal seeks to ensure the use of such technology does not result in firms placing their interests ahead of investors’.
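
For readers unfamiliar with the mechanics, the sketch below (written in Python using the scikit-learn library, with feature names and data that are purely hypothetical) illustrates the basic pattern behind predictive data analytics: a model is fit to an existing data set and then used to predict a future outcome.

    # A minimal, hypothetical sketch of predictive data analytics:
    # fit a model to existing data, then predict a future outcome.
    from sklearn.linear_model import LogisticRegression

    # Hypothetical historical data: [years_as_client, trades_per_month]
    past_behavior = [[1, 2], [5, 30], [2, 4], [8, 45], [3, 6], [7, 38]]
    # Whether each investor accepted a recommended product (1 = yes)
    accepted_offer = [0, 1, 0, 1, 0, 1]

    # Fit a simple model to the existing data set
    model = LogisticRegression().fit(past_behavior, accepted_offer)

    # Predict the likely outcome for a new investor
    print(model.predict([[4, 25]]))  # e.g., [1] -> likely to accept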

The SEC undoubtedly will focus on risks associated with AI-related issues in 2024. As recently as December 5, 2023, SEC Chair Gary Gensler warned companies against “AI washing,” or overstating their AI capabilities—a practice similar to “greenwashing.” The SEC is already reviewing instances where companies and investment advisers claim a product uses AI when it does not. As Chair Gensler stated, “Fraud is fraud … [a] human that is using a model that is defrauding the public, depending on the facts, is likely going to hear from us.”

The key is for companies to determine, among other things:

  • whether the use of AI affects the company’s financial performance;
  • whether the use of AI is a main driver of the company’s revenue;
  • whether the company uses AI to make financial or investment decisions;
  • whether AI has access to sensitive customer data; and
  • how reliant the company is on AI overall.

Implementing Controls: General Recommendations for Companies Using AI
When implementing and utilizing AI, companies should consider the following recommendations to promote good corporate governance and adherence to core ESG principles:

  • Ensure internal controls are strong. Any house needs a strong foundation, and internal controls are essential to a well-functioning company. Integrating AI into those controls is fundamental, including by creating or supplementing policies governing the use of AI. Companies would benefit from revisiting their governance standards to incorporate AI.
  • Promote healthy communication within the organization. As with any relationship, communication is key. With respect to the implementation of AI within a company, communication should be frequent and clear. Ensure that the individuals with their hands on the disclosure wheel (i.e., those who decide whether and how to disclose a company’s use of AI) are sufficiently educated on how AI is, in fact, used at the company. This often requires a 360-degree approach to communication. Understanding how AI works on a technical level can help prevent misrepresenting a company’s use of AI (the “AI washing” flagged by SEC Chair Gensler). In so doing, be mindful of preserving attorney-client privilege when utilizing outside technical consultants or counsel to implement AI.
  • Consider the other ways AI is used in achieving ESG goals. AI may help a company work towards achieving certain ESG benchmarks. For example, to the extent AI is used from a human resources perspective to accomplish diversity goals in furtherance of the “S” aspect of “ESG,” companies should similarly consider how good governance can ensure thoughtful implementation.
  • Adhere to other ongoing regulatory requirements. Companies may be subject to myriad regulatory regimes on top of their disclosure obligations. For example, companies in the health care industry may be subject to regulations ensuring the protection of highly sensitive customer information. Companies in the digital sector may be subject to data protection regulations. Given the implications of AI for cybersecurity and data privacy, it is advisable to work with in-house or outside counsel to ensure the company is meeting these various requirements.
  • Be prepared for an adverse event. Companies should take a proactive approach to adverse events caused by AI, such as a cyber breach or a software malfunction. The SEC recently finalized cyber-related rules that address some, but not all, of these scenarios. Ensure that those who detect any AI malfunctions or irregularities are trained to escalate the issue up the chain to the appropriate personnel.