As the development and use of AI accelerates at a phenomenal pace, it gives us cause to wonder how law makers, regulators and courts might deal with AI in the context of directors’ duties in the not-too-distant future.

We all know that law makers, regulators and courts understandably lag behind practice when it comes to giving directors and officers guidance on the acceptable use of technology in business (e.g. virtual AGMs in recent times). However, AI is different – it fundamentally shapes both what decisions are made and how they are made.

In this short post, we explore and speculate how these bodies could potentially approach AI. Comments welcome!
One approach may be to objectively question and analyse the use of AI by an organisation with the aid of a ‘reasonable person’ test and independent experts, including asking:
- What was the business decision made with the help of AI;
- Was the business decision in question one that could be made with the assistance of AI*;
- What AI system was used to generate the information (and was it open or closed source, fit-for-purpose – the right system to use in the circumstances, well explained by the vendor, well understood by directors and/or officers in terms of its strengths and weaknesses, commonly used in the industry in which the organisation operates, and generally well-regarded for the quality and reliability of its output – i.e. not ‘dodgy’);
- Was the AI system deployed within the organisation’s firewalls, using the organisation’s own data (on top of the public data acquired as part of the closed source system), over which the organisation has complete control, or was it an open AI system (using data from sources outside the organisation, over which it has little or no control);
- Whether an open source system provides better information and therefore (potentially) facilitates better decisions than a closed source system (which does not provide access to information from other sources, including competitors);
- If the AI system was closed source, how well was it fine-tuned;
- Were chatbots and/or (digital) AI agents used in making the business decision (noting that whilst chatbots simply simulate conversations from existing content, (digital) AI agents apply reasoning in creating content; in essence, chatbots regurgitate predefined information, while AI agents learn and handle complex tasks);
- Was the system working properly at the time (although that could be extremely difficult to establish after the fact);
- Where and how widely is AI used within the organisation (e.g. mainly at higher levels or throughout the organisation, which can influence how much information and business activity is driven by AI);
- Does the organisation have an AI policy (describing and explaining how AI can be used within the organisation, including informing key stakeholders);
- Is the AI policy publicly available / accessible;
- How management and the board assess the risks associated with using AI;
- What prompts (including goals, return formats, warnings and context dumps) were used by those responsible to generate the information produced by AI and relied upon by the board and/or management in making the business decision;
- Did the directors and/or officers analyse (test and challenge) the prompts and output with appropriate care and diligence (having regard to the application of their skills and experience in the area relevant to the subject matter – i.e. they cannot blindly rely on the information or output produced by AI; they must properly engage with it);
- Were any directors and/or officers on notice that the information produced by AI (and relied upon to make a decision) was flawed or deficient in any way;
- How much of the AI generated information was actually used in making a decision (and was the decision based entirely or only partially on the AI generated information); and
- What paper trails exist to evidence the proper exercise of duties and responsibilities.

Not surprisingly, discovery is likely to be a critical piece in legal proceedings, although query how that would play out in practice (e.g. aside from the output, should directors be provided with copies of the prompts in board papers, and what does this mean for minutes and the organisation’s retention and disposal of records policy?).
* Directors and officers will need to be trained in the use of AI (including its risks and potential benefits).
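To illustrate the kind of paper trail contemplated above – capturing the prompt (goals, return format, warnings, context) alongside the AI output that the board or management relied upon – an organisation might keep a simple append-only log. The sketch below is purely illustrative: the field names and file format are our assumptions, not any prescribed legal or regulatory standard.

```python
import json
from datetime import datetime, timezone

def log_ai_decision_record(prompt, output, decision_ref, model_name,
                           path="ai_decision_log.jsonl"):
    """Append one audit record linking a prompt, the AI output and the
    business decision that relied on it (field names are illustrative)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision_ref": decision_ref,  # e.g. board paper / minute reference
        "model": model_name,           # which AI system produced the output
        "prompt": prompt,              # goals, return format, warnings, context
        "output": output,              # the information actually relied upon
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Hypothetical example: recording an AI-assisted market analysis
rec = log_ai_decision_record(
    prompt={"goal": "Summarise risks of entering market X",
            "return_format": "bullet points",
            "warnings": "flag low-confidence claims",
            "context": "FY24 strategy board paper"},
    output="1. Regulatory risk ... 2. FX exposure ...",
    decision_ref="Board minute, item 6",
    model_name="vendor-model-v1",
)
```

A structured log of this kind would also bear directly on the retention and disposal questions raised above, since the records would fall within the scope of discovery.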
These are, of course, preliminary and highly speculative thoughts, and it remains to be seen how law makers, regulators and courts will tackle AI in the coming months and years. However, one thing is for sure – they will need to come up with a commercially sensible approach … and fast!
Note: Data privacy, cybersecurity and intellectual property (among many other things) are also likely to attract and require the attention of law makers, regulators and courts.