
Director Duties and Augmenting the Corporate Brain

Artificial intelligence and machine learning (for ease of readability, this article uses “AI” to refer to the broad category of machine learning and artificial intelligence) are poised to affect all aspects of society and business. Everything from personal assistants and chatbots to programs designed to assist venture funds with portfolio selection is coming to market, while researchers (at both companies and academic institutions) continue to pursue the development of general, or strong, AI. Much has been written about the replacement or augmentation of employees through AI and the impact that may have on society in the long run. The effects those same technologies may have on the decision-making processes and operations of corporate boards of directors, however, have not yet come under the mainstream spotlight, even though AI is poised to be equally consequential there. The implementation of AI in the boardroom will not be a binary decision; it will instead present itself as the opportunity to take advantage of increasingly powerful tools, ranging from currently available AI-enabled products (which use targeted approaches to augment human decision making) to general AI with human-like (or super-human, depending on whom you ask) capabilities.

While technologists see excitement and promise in AI, experienced directors see a new wave of unpredictability and uncertainty approaching their companies. Well-counseled directors would do well to remember the fundamentals: directors must fulfill their fiduciary duties, and courts apply the relatively lenient business judgment rule as the default standard of review for board decisions. The duties directors owe to the companies on whose boards they serve, and to those companies’ stockholders, are the duties of care and loyalty. When directors satisfy those duties and their oversight function within the company by acting in the best interests of the company, in good faith, and on a fully informed basis, the director-favorable business judgment rule will apply to any challenge to the action in question.

The duty of loyalty (not of particular import to the question of how to approach implementing AI) primarily mandates that directors place the interests of the company and its stockholders ahead of their own. The duty of care, on the other hand, should be at the forefront of directors’ minds when considering the implementation of AI in the boardroom.

The duty of care mandates that directors inform themselves of all reasonably available information before making business decisions. Directors are ultimately required to make decisions based on the information available to them, but they are not required to do so without the benefit of outside advisors. In fact, Delaware law expressly provides that directors are fully protected in relying on the information, opinions, reports, or statements provided to them by advisors where the board reasonably believes the matters in question fall within the advisor’s professional or expert competence. Courts and legislatures will be the ultimate arbiters of whether new standards must be adopted to address the coming age of AI, but the issues AI presents are too novel to have been directly addressed yet. In the interim, helpful analogies exist in the better-understood contexts of boards engaging professional advisors such as accountants and lawyers and relying on the reports presented to them.

Determining whether an advisor is competent in a particular area does not require directors to be CPAs or to hold J.D.s or M.B.A.s (to use some of the more common examples). Similarly, directors should not be expected to be experts in data science or AI in order to engage new advisors built on those technologies. Directors do, however, need to give due consideration to the relevant materials and engage in appropriate deliberation. Directors must ask questions, and have them answered, before they can consider their duty of care satisfied. Until an AI is deployed that can answer directors’ questions or clearly describe its decision-making process, relying on an AI’s reports and advice may mean having representatives of the AI’s developer on hand to help the directors interpret the program’s output. Directors should test the assumptions and inputs of their advisors, whether human or AI, and continue that process until the board is satisfied that a rational basis exists for relying on the advisor in each situation and as applied to the specific issue or issues before the board.

Given the current technological environment and the applicable law on the subject, directors should continue to seek out the best information available to them in order to become fully informed. That day may not yet have arrived, but there may well come a time when a board’s failure to implement AI or other next-generation technologies results in litigation claiming that the board did not take advantage of the best resources available to it. Boards that are proactive and systematic in their approach to implementing potentially transformative technologies, and that act reasonably and in good faith in fulfilling their fiduciary duties, will continue to place themselves, together with the corporations and stockholders they serve, in good stead.