Should we ever feel guilty about using - or saying we have used - AI in board reports?
David Cantrick-Brooks | 04/05/2026

Introduction

Artificial intelligence (AI) is rapidly becoming embedded in professional workflows across corporate Australia. From drafting and summarising to data analysis and scenario modelling, its potential to improve efficiency and clarity is significant.

Yet in governance circles — particularly in relation to board and committee reporting — a recurring question arises:

Should we feel uncomfortable about using AI, or even disclosing that we have used it?

The short answer is no.

However, like most governance questions, the more useful answer is nuanced. The issue is not whether AI is used, but how it is used, how its outputs are validated, and how its use is governed and disclosed.

AI Is a Tool — Not a Substitute for Judgement

AI should be understood as the latest in a long line of professional tools. It sits alongside spreadsheets, financial models, board portals and document management systems.

Used appropriately, AI can:

- draft and summarise reports and source materials;
- assist with data analysis and scenario modelling;
- improve the structure, clarity and consistency of board papers.

However, AI does not — and cannot — replace human judgement.

Directors rely on management to provide information that is:

- accurate and reliable;
- complete and relevant; and
- presented with appropriate organisational context.

The use of AI does not change that expectation. Nor does it dilute the accountability of those preparing and approving board materials.

The Core Governance Principle: Human Accountability Remains Intact

From a governance perspective, the key principle is straightforward:

If you put your name to a board paper, you are accountable for its contents — regardless of how it was produced.

This reflects well-established expectations under Australian corporate law and governance practice. Directors’ duties — including the duty of care and diligence — are unaffected by whether AI has been used in preparing information provided to the board.

In practical terms, this means:

- AI-generated content must be reviewed and verified before it reaches the board;
- the executives who prepare and approve board papers remain accountable for their contents; and
- the use of AI does not transfer or dilute that accountability.

Where this does not occur, the risk is not the use of AI itself — it is the failure of governance discipline.

Key Risks in AI-Assisted Board Reporting

While AI presents clear benefits, its use in board reporting introduces a number of specific risks that require careful management:

1. Accuracy and Reliability

AI outputs can be plausible but incorrect. Without proper verification, this can lead to misinformation being presented to the board.

2. Data Integrity and Confidentiality

The use of external or open AI tools may expose sensitive or confidential information if not properly controlled.

3. Over-Reliance

There is a risk that users may accept AI-generated outputs without sufficient challenge, particularly where the output is well-structured and persuasive.

4. Loss of Context

AI may not fully capture organisational nuance, risk appetite, or strategic context — all of which are critical in board reporting.

5. Auditability and Traceability

Without appropriate record-keeping, it may be difficult to reconstruct how a document was prepared if questions arise later.

These risks are manageable — but only if they are explicitly recognised and addressed.

Transparency and Disclosure: A Matter of Good Governance

One of the more contentious issues is whether, and to what extent, organisations should disclose the use of AI in board papers.

There is no universal legal requirement to do so. However, transparency is increasingly regarded as good governance practice, particularly where AI has materially influenced the content of a document.

In practical terms, this does not require excessive or burdensome disclosure. Rather, a simple, proportionate statement is often sufficient.

For example:

“This paper was prepared with the assistance of AI. AI was used to summarise source materials and assist with structure and drafting. The responsible executive has reviewed and verified the final paper and remains accountable for its content.”

Such a statement:

- is transparent without being burdensome;
- confirms that the content has been reviewed and verified; and
- makes clear that human accountability remains intact.

Importantly, disclosure should be calibrated to materiality. Routine or immaterial use of AI (e.g. minor drafting assistance) may not warrant formal disclosure.

Embedding AI into Governance Frameworks

To support consistent and responsible use of AI, organisations should consider embedding it within their governance frameworks. This may include:

- AI policy settings
- Board paper templates
- Record-keeping practices
- Training and awareness

These measures should be proportionate to the size, complexity and risk profile of the organisation.

A Shift in Mindset

As AI becomes more embedded in professional practice, the framing of the question is likely to change.

Today, the question is often:

“Should we disclose that AI was used?”

In the near future, boards may increasingly ask:

“If AI could have improved the quality, clarity or efficiency of this paper, why wasn’t it used?”

This reflects a broader shift: from viewing AI as a potential risk to recognising it as a capability that, when used responsibly, can enhance governance outcomes.

Conclusion

There is no reason to feel guilty about using AI in board reporting.

However, its use must be:

- governed by clear policy settings;
- subject to human review and verification; and
- disclosed where its use is material.

Ultimately, good governance in this context is not about whether AI is used — it is about maintaining the integrity, reliability and accountability of information provided to the board.

AI can support that objective.

It cannot replace it.

About Governance in Action Pty Ltd

Governance in Action Pty Ltd supports boards and company secretariat functions in designing and implementing practical, fit-for-purpose governance frameworks — including the safe and responsible use of AI in governance and board processes.

IMPORTANT: This article has been produced with the aid of AI.

