Breakdown of the IoD’s 'AI Governance in the Boardroom'
Edition #3 - Why every director needs to pay attention to the IoD’s new 12 principles for AI governance
As we have seen in previous posts, AI is no longer an IT side project. It is strategy. It is risk. It is reputation. It is also regulation - arriving faster than many boards think. The new business paper from the Institute of Directors (IoD), AI Governance in the Boardroom, gives directors a practical framework: 12 principles to govern AI use sensibly and safely. Treat it like a board pack, not a blog. Read it, adopt it, and assign owners.
This article breaks down the paper, showing its connection to current regulations and providing board-level actions. It also provides credible case examples, enabling you to brief your colleagues with confidence. Where possible, I’ve linked to primary sources.
What’s new - and why it matters
The IoD paper updates the 2023 guidance, incorporating the reality of today’s AI: wider use, increased risk, and tighter expectations from regulators, investors, and customers. It also includes results from an IoD Policy Voice survey of ~700 directors and business leaders: two-thirds use AI personally; half say their organisations use AI; a quarter lack any AI policy or governance. That gap is where fines, headlines, and value erosion live.
The EU AI Act entered into force on 1 August 2024, with staged obligations through 2025–2027. Boards operating in or selling into the EU will need to understand the risk-based model, new obligations for deployers, and the timeline for compliance.
The UK is taking a regulator-led approach, so there is no single AI Act. Expect sector regulators (FCA, ICO, CMA, Ofcom, MHRA) to tighten guidance and enforcement using existing powers, backed by the Department for Science, Innovation and Technology (DSIT), the AI Security Institute, and the Digital Regulation Cooperation Forum (DRCF). Boards should assume that expectations will continue to rise, even without a single statute.
Bottom line: the governance bar is moving up. Your board can either guide this, or get guided by events.
The IoD’s 12 principles - what they mean in practice
The paper’s strength is practicality. It provides a simple, board-owned approach to guiding AI across strategy, risk, compliance, and culture. Here’s what to take into your next meeting.
1) Monitor the evolving regulatory and (geo)political environment
Map your exposure to the EU AI Act, UK regulators, and any other jurisdictions where you operate or sell. Decide who updates the board and how often. Use a simple dashboard: obligations, deadlines, status, red flags.
Board question: Which rules will apply to us over the next 12 months, and are we prepared?
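If it helps to make that dashboard tangible for your technology or risk team, here is a minimal sketch of an obligations tracker as structured data. The fields, entries, and dates are illustrative assumptions on my part, not taken from the IoD paper.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative only: the fields, entries, and dates below are assumptions,
# not taken from the IoD paper. Adapt them to your own obligations.
@dataclass
class Obligation:
    jurisdiction: str   # e.g. "EU (AI Act)", "UK (ICO)"
    requirement: str    # plain-language description of what is required
    deadline: date      # the date the board is tracking
    owner: str          # named executive accountable for readiness
    status: str         # "on track", "at risk", or "not started"
    red_flag: bool      # escalate to the board if True

register = [
    Obligation("EU (AI Act)", "Classify our systems against the Act's risk tiers",
               date(2026, 8, 2), "General Counsel", "at risk", True),
    Obligation("UK (ICO)", "Map AI use against ICO guidance on AI and data protection",
               date(2025, 12, 31), "Data Protection Officer", "on track", False),
]

# The board pack only needs the exceptions.
for o in register:
    if o.red_flag or o.status != "on track":
        print(f"{o.jurisdiction}: {o.requirement} | owner: {o.owner} | due: {o.deadline}")
```

The format matters far less than the discipline: one named owner per obligation, and exceptions surfaced to the board.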
2) Continually audit and measure
Create one source of truth for all AI systems in use: internal, vendor, embedded in platforms, and “shadow” tools your people already use. Tie each system to an owner, purpose, data sources, risks, and controls. Align to standards where it helps. ISO/IEC 42001 is now the AI management system benchmark.
Board question: Do we know where AI shows up across our technology stack and supply chain?
3) Undertake impact and risk assessments
Don’t stop at model metrics. Assess impact on employees, customers, suppliers, and communities. Determine your organisation’s risk appetite. Decide where humans must stay in the loop. In higher-risk settings, seek independent assurance.
Board question: Who might be harmed, and how would we know?
4) Establish Board accountability and management responsibilities
Name a board committee with oversight. Name an executive owner. Put AI into risk and audit cycles. Keep veto rights with the board for high-risk deployments. Communicate this to staff and investors.
Board question: Who signs off on AI risks today? Is that formal?
5) Set high-level strategic goals
No “AI because everyone else is doing it.” Nor should AI be a shiny solution searching for a problem. Define a small set of plain-language goals: augment people, improve quality, speed decisions, protect customers, cut waste. Make success measurable. Keep it tied to values and ESG commitments.
Board question: What “AI-shaped” problems are we solving, and how will we measure success?
6) Empower a cross-functional, operational, independent review committee
Cross-functional. Trained. Resourced. With the authority to pause or reframe projects. The role is to surface issues early, not to block innovation. Build a clear Terms of Reference and define the reporting line to the board.
Board question: Can this committee actually stop a launch if needed?
7) Validate, document and secure data sources, and assess data assets
Provenance, quality, bias controls, logging, and retention. Be explicit about synthetic data. Treat decision logic like any other controlled asset: understandable and auditable. Build tripwires to detect model drift or suspicious behaviour.
Board question: Where does this data come from, and who has checked it?
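For the technically minded, a “tripwire” can be as simple as an automated statistical check comparing live inputs against the data the model was validated on. Below is a minimal sketch using a population stability index, a common drift measure; the synthetic data, the threshold, and the escalation step are illustrative assumptions, not part of the IoD guidance.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare the live distribution of a feature against its baseline.
    A common rule of thumb treats PSI above ~0.2 as material drift, but
    the threshold used below is an illustrative assumption, not a standard."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_counts, _ = np.histogram(expected, bins=edges)
    actual_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions, clipping to avoid division by zero.
    expected_pct = np.clip(expected_counts / expected_counts.sum(), 1e-6, None)
    actual_pct = np.clip(actual_counts / actual_counts.sum(), 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Baseline: the feature distribution the model was validated on (synthetic here).
baseline = np.random.default_rng(0).normal(0.0, 1.0, 5_000)
# Live inputs: drifted upwards, as might happen after a market or policy change.
live = np.random.default_rng(1).normal(0.5, 1.0, 5_000)

psi = population_stability_index(baseline, live)
if psi > 0.2:  # illustrative threshold
    print(f"Drift tripwire fired (PSI = {psi:.2f}) - escalate to the review committee")
```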
8) Train and upskill people
AI literacy is not optional. Tailor training for the board, the exec team, frontline users, and engineers. Teach people to challenge outputs and escalate concerns. Make this part of the induction. Reward good practice.
Board question: Do our people know when to question the machine?
9) Comply with privacy requirements
Minimise personal data. Honour rights. Use Data Protection Impact Assessments (DPIAs) where needed. Follow your regulator’s guidance on AI and data protection.
Board question: Can we evidence privacy-by-design for each AI system?
10) Comply with security-by-design requirements
Adopt a secure software development lifecycle (SSDLC) for AI, penetration test, red-team, monitor suppliers, and log incidents. Consider standards like ISO/IEC 27001 and the UK AI Cyber Security Code of Practice.
Board question: Have we stress-tested our AI like we stress-test our finances?
11) Test and evaluate systems and remove from use
Pre-deployment testing is table stakes. What matters more is a straightforward process to pause or retire systems that drift, degrade, or cause harm. Contract for this with vendors. Keep the board’s veto alive.
Board question: If this system fails, who is responsible for shutting it down, and how quickly?
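One way to keep that veto meaningful is a serving-time gate: before any AI output is used, the system checks its own status in your register, so a decision to pause takes effect immediately. The sketch below is illustrative; the register lookup, statuses, and fallback path are assumptions, not a mechanism the IoD prescribes.

```python
# Illustrative sketch: a serving-time gate that honours a pause or retire decision.
# The register, statuses, and fallback path are assumptions for this example only.
ALLOWED_STATUSES = {"approved"}

ai_register = {
    "cv-screening-v2": {"status": "paused", "owner": "Head of Talent"},
    "stroke-triage": {"status": "approved", "owner": "Clinical Safety Officer"},
}

def serve(system_id: str, run_model, fallback):
    """Only call the model if the system is currently approved for use."""
    entry = ai_register.get(system_id, {"status": "retired"})
    if entry["status"] not in ALLOWED_STATUSES:
        # Fail safe: route to the documented manual process instead.
        return fallback()
    return run_model()

result = serve(
    "cv-screening-v2",
    run_model=lambda: "model decision",
    fallback=lambda: "manual review queue",
)
print(result)  # "manual review queue", because the system is paused in the register
```

The design point is that pausing a system should be an operational action with a named owner, not a contract renegotiation.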
12) Review systems, policies and governance practices regularly
AI governance is not “set and forget.” Plan reviews. Track KPIs. Invite independent assurance. Keep a human-in-the-loop where the stakes are high.
Board question: When did we last review our AI inventory, risks, and controls?
Real-world signals directors should know
Boards often ask for proof that AI governance matters. Here are some real-life examples to use in the boardroom.
Recruitment bias: Amazon scrapped its internal AI screening tool
Amazon ended an experimental CV-screening system after discovering it penalised CVs that included indicators associated with women. The model had been trained on historical, male-dominated hiring data, so it learned the bias baked into that data. The project was scrapped. This is not about one company; it is about data and governance.
Hiring platforms under scrutiny: the Workday lawsuit
In the U.S., a class-action suit alleges discriminatory screening by Workday’s tools. The court allowed key claims to proceed in July 2024, and the EEOC has argued that anti-bias laws can cover Workday as an “employment agency.” Workday disputes the allegations. Regardless of the outcome, this shows where regulators are heading and why boards must ask tough questions of vendors.
NHS diagnostics: AI to speed stroke treatment
NHS England has deployed AI in stroke pathways. Brainomix e-Stroke has been credited in government reporting with significantly cutting “door-in-door-out” times, enabling faster treatment. NHS England has also been piloting a central AI Deployment Platform, routing radiology images to approved AI tools for decision support. The key lesson is that governance, vendor assurance, and clinical oversight are essential.
Cancer pathways: AI in chest X-ray and imaging networks
Regional NHS alliances report using AI to accelerate chest X-ray review in suspected cancer, with published research on procurement and early deployment across 66 Trusts. This is a valuable case when discussing explainability, clinical validation, and change management at scale.
These are not “tech” stories. They are governance stories: data, bias, assurance, safety, and accountability.
How directors can use this IoD paper
The IoD’s guidance is not abstract. It is a direct response to cases like these and to the regulatory shift now underway. The paper offers checklists and “what boards should consider” prompts for each principle. If you only do one thing this month, put those prompts on your next board agenda.
Three practical moves I recommend:
Stand up a one-page AI register and review it quarterly
Maintain a comprehensive inventory that includes: system, purpose, owner, training data, decision logic, risks, controls, KPIs, vendor status, and review date (a sketch of such a register follows these three moves). Tie this to risk and audit cycles. Reference ISO/IEC 42001 where it helps you bring order and cadence.
Adopt a plain-English AI policy with real guardrails
Cover acceptable use, privacy-by-design, security-by-design, human-in-the-loop, red lines, escalation, and incident reporting. Make the policy visible to staff and vendors. Bake it into onboarding and procurement.
Form (or empower) an independent review committee
Cross-functional, empowered to pause launches, and trained to read assurance reports. Give it a clear Terms of Reference and a line to the board. Use it to normalise complex trade-off discussions before systems go live.
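As promised above, here is a sketch of what that one-page register could look like as structured data, using the fields listed in the first move. The example entry, its values, and the quarterly-review rule are illustrative assumptions, not a format the IoD prescribes.

```python
from datetime import date

# Illustrative sketch of a one-page AI register as structured data.
# The field names follow the list above; the entry, values, and the
# quarterly-review rule are assumptions, not an IoD-prescribed format.
ai_register = [
    {
        "system": "Invoice-matching copilot",
        "purpose": "Flag anomalous supplier invoices for finance review",
        "owner": "CFO",
        "training_data": "Three years of internal ERP invoice history",
        "decision_logic": "Gradient-boosted classifier; a human approves every hold",
        "risks": ["false positives delaying payment", "supplier data quality"],
        "controls": ["human-in-the-loop approval", "monthly accuracy report"],
        "kpis": {"precision": 0.92, "avg_review_minutes": 4},
        "vendor_status": "Third-party SaaS, security questionnaire completed",
        "review_date": date(2025, 3, 31),
    },
]

# Quarterly review: surface anything whose review date has passed.
overdue = [entry for entry in ai_register if entry["review_date"] < date.today()]
for entry in overdue:
    print(f"Overdue review: {entry['system']} (owner: {entry['owner']})")
```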
And four frequently asked questions to be ready for:
“Do we really need to care if we are UK-only?”
Yes. UK regulators are active. And your vendors may be operating under EU rules, which push obligations onto deployers through contracts and product requirements.
“Can’t we just rely on the vendor’s assurance?”
No. Vendor assurance is necessary, not sufficient. You must validate fit-for-purpose in your context: your data, your processes, your risks, your people. Keep testing, even post-deployment.
“What does ‘human-in-the-loop’ actually mean?”
A trained person who can understand the system’s role, challenge the output, and override it when needed. That requires process, time, and authority, not just a box tick.
“Where do we start if we have nothing?”
Begin by establishing an inventory and a basic policy. Pick one pilot system. Run a lightweight impact assessment. Prove the muscle, then scale.
What to do before your next board meeting
Download the IoD’s paper and my one-pager summary (see below). Bring it to your next board meeting.
Map AI use in your organisation. Don’t rely on assumptions—include third-party tools.
Agree on accountability. Name a director and a committee to oversee AI.
Set clear goals. Align AI projects with strategy, values, and measurable outcomes.
Plan regular reviews. AI governance is a cycle, not a tick-box.
Final thought
Boards don’t need to be AI experts. But they must be AI-literate governors. The IoD has given directors a framework to start. The next move is yours.
👉🏽 Download the IoD’s AI Governance in the Boardroom Business Paper.
👉🏽 Download my IoD - AI Governance in the Boardroom - Principles One-Pager.
And if you want regular insights on strategy, governance, and AI at the board level, subscribe to AI in the Boardroom.
Note: The IoD principles are based on the original work, Anekanta Responsible AI Governance Framework for Boards by Anekanta Ltd, licensed under CC BY-NC-SA 4.0.