Five Questions Every Board Should Ask About AI in 2026
Edition #2
Artificial intelligence adoption is accelerating across almost every industry. It offers a powerful opportunity to build competitive advantage through increased productivity, augmented value chains, and even reimagined value propositions. With this opportunity comes risk, complex ethical dilemmas, and the prospect of heightened regulatory scrutiny. For directors, the question is no longer “should we care about AI?” but “how do we govern and lead responsibly in the age of AI?”
Too many boards treat AI as a technical project to be delegated to IT. That is a mistake. The consequences of AI, whether strategic, financial, legal, or reputational, fall squarely within the remit of directors. Oversight cannot be abdicated; the issue is too important to ignore.
To provide effective stewardship, every board should begin with five core questions.
1. How does AI align with our business strategy and enhance our competitive advantage?
AI is not a bolt-on. It has the potential to automate daily tasks, augment your value chain, or reshape entire business models. Boards must ask whether management is using AI to reinforce the organisation’s unique sources of competitive advantage, or merely chasing shiny tools without strategic coherence. Useful prompts include:
What capabilities do we have that AI can amplify (e.g. data assets, customer relationships, operational scale)?
Are we investing in AI in ways that truly differentiate us from competitors, or simply keeping up?
How will AI shift our industry’s economics in the next 3–5 years, and are we preparing for that shift?
Boards should expect management to articulate how AI initiatives link explicitly to the business strategy, not just as experiments, but as drivers of growth, productivity, and resilience.
2. Do we have the right governance and oversight in place?
AI introduces new risks, including data privacy breaches, bias, opacity, and accountability gaps, that traditional governance structures were not built to handle. Boards need clarity on how these risks are being identified, managed, and escalated.
Who within management is accountable for AI governance?
Do we have the right board-level committees (Audit, Risk, Ethics) actively monitoring AI?
Are our policies aligned with evolving regulation, including the EU AI Act and UK AI guidelines?
Do we have the expertise on the board, or through external advisors, to scrutinise management effectively?
Without proper oversight, AI can erode trust quickly. Directors should press for clear accountability and transparent reporting.
3. What risks, whether ethical, legal, regulatory, or reputational, do we face?
AI failures make headlines. From discriminatory recruitment algorithms to deepfake fraud, the downside risks are real and rising. Boards must treat AI risks as part of enterprise risk management, not a separate silo.
What guardrails are in place to ensure AI systems are fair, explainable, and secure?
How exposed are we to regulatory non-compliance in the jurisdictions in which we operate?
What reputational damage could result from misuse of AI, and how prepared are we to respond?
An organisation’s licence to operate increasingly depends on responsible AI. Boards should view this not only as a compliance obligation, but as a matter of trust and legitimacy.
4. How prepared is our organisation for transformation?
AI transformation is not only about technology. It requires data readiness, cultural change, new skills, and re-engineered processes. Boards must probe management’s readiness honestly.
Do we have the quality data, infrastructure, and cybersecurity foundation required?
Are we investing in up-skilling our workforce to work alongside AI?
How are we engaging employees, trade unions, and stakeholders in this transformation?
Are we addressing ethical use internally, not just in products, but in how we deploy AI with staff?
Transformation readiness is where many organisations stumble. Boards should demand a realistic assessment of organisational capabilities, not glossy roadmaps.
5. What opportunities are we missing by moving too slowly?
Caution is prudent, but excessive caution is dangerous. The pace of AI adoption means laggards risk losing competitive ground rapidly, and boards must balance risk management with strategic boldness. As a veteran of dozens of digital and agile transformations over two decades, I have witnessed first-hand the cost of an over-abundance of caution.
Where are competitors already using AI to gain cost or innovation advantages?
Are there adjacent opportunities we could seize by moving faster?
What partnerships or acquisitions could accelerate our AI journey?
Boards must encourage management to explore opportunity as rigorously as risk. Standing still is rarely a safe strategy.
Summary
AI is no longer an emerging technology; it is an enterprise reality. Boards that fail to engage meaningfully risk strategic drift, regulatory exposure, and reputational harm.
Asking these five questions is a starting point, not an endpoint. They help boards move from curiosity about AI to competence in overseeing it.
Directors do not need to be data scientists. But they do need to ensure AI is governed, aligned to strategy, and embedded responsibly in transformation. That is a board duty no different from finance, risk, or sustainability.
Next Steps for Boards
Download my AI Governance Board Checklist (a one-page tool to structure oversight).
Let’s have a conversation about AI strategy, governance, and transformation tailored to your organisation.