<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[AI in the Boardroom]]></title><description><![CDATA[Strategy, governance, and transformation in the age of AI.]]></description><link>https://www.aiintheboardroom.com</link><image><url>https://substackcdn.com/image/fetch/$s_!UKYR!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02d9ab57-9991-4f2c-9576-a32f83a52798_1280x1280.png</url><title>AI in the Boardroom</title><link>https://www.aiintheboardroom.com</link></image><generator>Substack</generator><lastBuildDate>Wed, 06 May 2026 11:38:50 GMT</lastBuildDate><atom:link href="https://www.aiintheboardroom.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Karim Harbott]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[karimharbott@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[karimharbott@substack.com]]></itunes:email><itunes:name><![CDATA[Karim Harbott]]></itunes:name></itunes:owner><itunes:author><![CDATA[Karim Harbott]]></itunes:author><googleplay:owner><![CDATA[karimharbott@substack.com]]></googleplay:owner><googleplay:email><![CDATA[karimharbott@substack.com]]></googleplay:email><googleplay:author><![CDATA[Karim Harbott]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Most Organisations Are Investing in AI Without Deciding Where It Creates Advantage]]></title><description><![CDATA[Edition 12 &#8212; Why boards must decide where AI creates advantage before they fund it.]]></description><link>https://www.aiintheboardroom.com/p/most-organisations-are-investing</link><guid 
isPermaLink="false">https://www.aiintheboardroom.com/p/most-organisations-are-investing</guid><dc:creator><![CDATA[Karim Harbott]]></dc:creator><pubDate>Fri, 17 Apr 2026 07:01:55 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!8nWg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4d42f9c-eec8-4135-bcaf-9a4fd8eef138_1536x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8nWg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4d42f9c-eec8-4135-bcaf-9a4fd8eef138_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8nWg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4d42f9c-eec8-4135-bcaf-9a4fd8eef138_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!8nWg!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4d42f9c-eec8-4135-bcaf-9a4fd8eef138_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!8nWg!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4d42f9c-eec8-4135-bcaf-9a4fd8eef138_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!8nWg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4d42f9c-eec8-4135-bcaf-9a4fd8eef138_1536x1024.png 1456w" sizes="100vw"><img
src="https://substackcdn.com/image/fetch/$s_!8nWg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4d42f9c-eec8-4135-bcaf-9a4fd8eef138_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c4d42f9c-eec8-4135-bcaf-9a4fd8eef138_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2271613,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aiintheboardroom.com/i/194422713?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4d42f9c-eec8-4135-bcaf-9a4fd8eef138_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!8nWg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4d42f9c-eec8-4135-bcaf-9a4fd8eef138_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!8nWg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4d42f9c-eec8-4135-bcaf-9a4fd8eef138_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!8nWg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4d42f9c-eec8-4135-bcaf-9a4fd8eef138_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!8nWg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4d42f9c-eec8-4135-bcaf-9a4fd8eef138_1536x1024.png 
1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h3><strong>The Illusion of Progress: Why AI Investment Feels Strategic</strong></h3><p>AI investment now creates a powerful illusion of strategic progress. Boards see budgets approved, pilots launched, and new capabilities emerging across functions. Management teams can point to copilots, automation, summarisation tools, and workflow improvements as evidence that the organisation is moving. Yet much of this activity is better understood as capability accumulation than as strategic choice.
McKinsey&#8217;s 2025 global survey found that 88 per cent of organisations were using AI in at least one business function, but nearly two-thirds had not yet begun scaling AI across the enterprise, and only 39 per cent reported any EBIT impact at the enterprise level. That gap between adoption and economic effect should be read as a strategic signal. It suggests that many organisations are investing before they have decided where AI is supposed to improve their competitive position.</p><p>This is the first governance issue boards need to recognise. In most organisations, AI investment is being driven by external pressure as much as internal logic. Competitors are moving, vendors are persuasive, consultants are presenting large value estimates, and investors increasingly expect visible progress. In that environment, investment becomes a signal of competence rather than a consequence of strategy. The &#8220;AI race&#8221; narrative accelerates adoption, but only a minority of firms translate that activity into meaningful enterprise value. Movement becomes a substitute for direction, and capability becomes a proxy for strategy.</p><p>A useful way to frame this problem is to distinguish between what I refer to as <strong>AI parity investment</strong> and <strong>AI advantage investment</strong>. Most organisations are currently doing the former while assuming they are achieving the latter. Until that distinction is made explicit, investment will continue to outpace strategic clarity.</p><h3><strong>Threshold Capabilities vs Distinctive Capabilities: The Critical Distinction Most Organisations Miss</strong></h3><p>The most useful distinction in this context is between threshold capabilities and distinctive capabilities. Threshold capabilities are those that an organisation must possess to remain credible in its market. Distinctive capabilities are those that shape why customers choose the firm, why margins persist, and why competitors struggle to respond.</p><p>Most current AI investment sits firmly in the first category. Rolling out generative AI assistants, automating internal workflows, improving customer-service triage, and accelerating knowledge work are rational responses to technological change. They improve efficiency and, in some cases, quality. The problem is that competitors are doing exactly the same. Over time, these capabilities will become expected.</p><p>The challenge is that these benefits are inherently replicable. Microsoft Research&#8217;s study of M365 Copilot users demonstrates measurable productivity gains, but those gains are available to any organisation deploying the same tools. As adoption spreads, the effect is not differentiation but convergence.
The industry becomes more efficient, but relative positions remain largely unchanged.</p><p>This is where the distinction between AI parity and AI advantage becomes operationally important. Threshold capabilities should be treated as necessary investments to maintain competitiveness, with a clear focus on cost efficiency and execution discipline. Distinctive capabilities, by contrast, require concentrated investment and are justified only when they can alter pricing power, cost position, or customer preferences in ways competitors cannot easily replicate.</p><h3><strong>The Resource-Based View Applied to AI: Why Access to Models Does Not Create Advantage</strong></h3><p>The resource-based view of the firm provides a disciplined way to assess whether AI can create advantage. For a capability to generate sustained advantage, it must be valuable, rare, difficult to imitate, and organised to capture value (VRIO). Most AI technologies fail this test at the level of access.</p><p>Foundation models, cloud infrastructure, orchestration tools, and development frameworks are now widely available. The barriers to entry are falling, and the expertise required to deploy them is increasingly commonplace. The technology is valuable, but it is neither rare nor difficult to imitate. Under a VRIO lens, it is unlikely to create sustained advantage on its own.</p><p>This becomes clearer when compared with established sources of distinctive capability. Amazon&#8217;s logistics network is not valuable because warehouses exist; it is valuable because of its scale, integration, and continuous optimisation, which together support superior service levels and cost economics.
Zara&#8217;s supply chain is not distinctive because supply chains are uncommon; it is distinctive because of the speed and coherence with which it links demand signals to production and distribution, enabling faster inventory turns and reduced markdown risk.</p><p>In both cases, advantage arises from an integrated system of resources and capabilities that shapes the firm&#8217;s economic model. The same principle applies to AI. The question is not whether the organisation has access to the technology, but whether it can embed that technology within a system of resources and processes that competitors cannot replicate quickly or cheaply.</p><h3><strong>Where AI Can Actually Become a Distinctive Capability</strong></h3><p>AI can contribute to distinctive capability, but only when it is embedded within resources and systems that meet the VRIO criteria. There are three credible routes through which this occurs.</p><p>The first is <strong>proprietary data</strong> that improves with use and is difficult for competitors to assemble. Organisations that control unique operational or behavioural datasets can build models that outperform those trained on generic data. The advantage compounds as data accumulates and feedback loops strengthen, creating a widening performance gap over time.</p><p>The second is <strong>deep integration into core workflows and decisions</strong>. When AI becomes part of how pricing decisions are made, how risk is assessed, how inventory is allocated, or how products are designed, it becomes embedded in the operating model. Replication then requires not just access to the tool, but replication of the underlying processes, data flows, and organisational alignment that support those decisions. This is far more challenging for others to imitate, as demonstrated for decades by the Toyota Production System.</p><p>The third is <strong>organisational capability</strong>.
Firms that can redesign workflows, combine technical and domain expertise, and continuously improve their systems create a form of advantage that extends beyond any single application. This often manifests in faster decision cycles, better resource allocation, and more consistent execution.</p><p>The distinction is critical. Competitors can access the same tools. They cannot easily replicate the same data histories, embedded processes, and organisational learning curves. That is where <strong>AI moves from being a productivity lever to a source of sustained competitive advantage</strong>.</p><h3><strong>The Strategic Evasion: Why Boards Avoid Defining Where AI Will Win</strong></h3><p>Why, then, do so many organisations still lack strategic clarity? The answer lies in the nature of strategic choice. Defining where AI will create an advantage forces difficult trade-offs. It requires deciding where to concentrate investment, where to accept parity, and which opportunities to <em>deprioritise</em>. These decisions reduce flexibility and create accountability. They also expose differences in perspective across leadership teams.</p><p>Many organisations avoid this discipline. Instead, they pursue multiple initiatives across functions, creating the appearance of progress without committing to a clear direction. Roger Martin&#8217;s and Richard Rumelt&#8217;s work on strategy is relevant here. Both argue that an effective strategy is not a list of initiatives, but a coherent set of choices about where to focus and how to compete. That coherence is precisely what many AI portfolios lack. The result is fragmentation. Investment is spread across use cases that deliver incremental benefits but do not combine to shift competitive position.
The organisation becomes busier, but not necessarily better positioned, and capital is allocated across initiatives that reinforce parity rather than advantage.</p><h3><strong>Capability Without Positioning: The Hidden Risk in Current AI Strategies</strong></h3><p>This leads to a specific strategic risk. Organisations that build AI capability without defining strategic intent can become more efficient without becoming more competitive. Costs increase through investment in infrastructure, licences, data engineering, experimentation, governance, and talent. These costs are often justified individually but rarely assessed collectively against a clear source of advantage. At the same time, competitors adopt similar capabilities, neutralising much of the gain. Efficiency improvements are competed away, and the organisation is left with a higher cost base but little change in relative position. The net effect is convergence rather than separation.</p><p>This dynamic is already visible. Many organisations report measurable improvements at the use-case level, but struggle to demonstrate how those improvements translate into sustained financial performance. Boards should interpret this as a warning. Capability without positioning consumes capital, increases organisational complexity, and creates governance challenges without strengthening competitive position.</p><h3><strong>Reframing the Question: From &#8220;Where Can We Use AI?&#8221; to &#8220;Where Must We Win With AI?&#8221;</strong></h3><p>The starting point for many organisations has been the question: &#8220;Where can we use AI?&#8221; It is a natural question, but it encourages breadth rather than focus. A more effective question is: &#8220;Where must we win, and how can AI strengthen that position?&#8221; This reframing aligns AI with core strategic choices. 
It forces organisations to consider how AI supports their market positioning, rather than treating it as a separate domain of innovation.</p><p>This aligns with Roger Martin&#8217;s &#8220;where to play&#8221; and &#8220;how to win&#8221; framework. AI should not sit alongside strategy; it should be integrated into it. As I have written previously, AI must be tied to strategic intent, not deployed opportunistically. The implication is that AI investment should be evaluated in the context of competitive positioning and economic impact, rather than solely on technical feasibility.</p><h3><strong>A Board-Level Discipline: Defining AI Advantage Explicitly</strong></h3><p>For boards, this implies a more explicit discipline than most organisations currently apply. They should require management to clearly distinguish between threshold and distinctive uses of AI and to justify that distinction in economic as well as technical terms. A practical way to enforce this discipline is through a structured decision lens. Before approving material AI investment, boards should require management to answer four questions.</p><p>First, is this investment building parity or creating a plausible source of advantage? Second, what makes this difficult for competitors to replicate within a defined time horizon? Third, where does the economic value accrue&#8212;in cost reduction, revenue growth, pricing power, or risk mitigation&#8212;and is that value likely to persist? Fourth, what changes to the operating model, governance, and organisation are required to realise that value?</p><p>These questions force clarity. They distinguish between investments that improve efficiency and those that may reshape competitive position. They also provide a basis for governance. AI initiatives that are central to competitive advantage should be subject to different oversight, risk management, and assurance than those that primarily improve internal efficiency. 
Regulatory developments reinforce this need for discipline. The EU AI Act is being phased in progressively, with increasing obligations around governance, risk management, and accountability. Boards, therefore, need to understand not only where AI is being used, but also how critical it is to the business model and what risks it introduces.</p><h3><strong>Closing Argument: AI Will Not Create Advantage Without Strategic Choice</strong></h3><p>AI does not create an advantage by default, certainly not a lasting one. It amplifies the quality of the strategic choices an organisation makes. Where those choices are weak or undefined, AI produces activity without meaningful differentiation. Where they are clear and disciplined, AI can deepen capabilities that competitors struggle to match.</p><p><strong>The distinction between AI parity investment and AI advantage investment should therefore become explicit at board level</strong>. Without it, organisations will continue to invest broadly, improve incrementally, and converge with their peers. Boards need to separate two questions that are currently conflated. The first is how to adopt AI effectively. The second is where AI will create an advantage. The first is necessary. The second is decisive.</p><p>Organisations that make explicit choices about where AI matters will concentrate resources, build distinctive capabilities, and shape their competitive position. Those that do not will continue to invest, improve, and remain broadly indistinguishable.</p><p>If you want more board-level insight on AI strategy, governance, and transformation, subscribe to AI in the Boardroom. 
Each edition is designed to help directors and senior leaders turn AI activity into sustained competitive advantage.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aiintheboardroom.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aiintheboardroom.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Most Organisations Are Managing AI Risk Incorrectly]]></title><description><![CDATA[Edition 11 - Why Boards Need to Stop Treating AI as a Technology Risk &#8212; and Start Treating It as a Decision-Making Risk]]></description><link>https://www.aiintheboardroom.com/p/most-organisations-are-managing-ai</link><guid isPermaLink="false">https://www.aiintheboardroom.com/p/most-organisations-are-managing-ai</guid><dc:creator><![CDATA[Karim Harbott]]></dc:creator><pubDate>Tue, 31 Mar 2026 07:30:43 GMT</pubDate><enclosure
url="https://substackcdn.com/image/fetch/$s_!05NO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1faf0fc-f479-4c52-9f4c-37f516048101_1536x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!05NO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1faf0fc-f479-4c52-9f4c-37f516048101_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!05NO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1faf0fc-f479-4c52-9f4c-37f516048101_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!05NO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1faf0fc-f479-4c52-9f4c-37f516048101_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!05NO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1faf0fc-f479-4c52-9f4c-37f516048101_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!05NO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1faf0fc-f479-4c52-9f4c-37f516048101_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!05NO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1faf0fc-f479-4c52-9f4c-37f516048101_1536x1024.png" width="1456" height="971"
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b1faf0fc-f479-4c52-9f4c-37f516048101_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2298693,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aiintheboardroom.com/i/192502953?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1faf0fc-f479-4c52-9f4c-37f516048101_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!05NO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1faf0fc-f479-4c52-9f4c-37f516048101_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!05NO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1faf0fc-f479-4c52-9f4c-37f516048101_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!05NO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1faf0fc-f479-4c52-9f4c-37f516048101_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!05NO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1faf0fc-f479-4c52-9f4c-37f516048101_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Walk into most boardrooms today and ask how AI risk is being managed. You will hear confident answers. It sits with IT. Data governance is in place. Legal is reviewing regulatory exposure. Internal audit has added AI to the risk register. On paper, this appears structured and comprehensive. It is also misdirected.</p><p>Boards are not ignoring AI risk. They are approaching it through the wrong lens. AI is being treated as a technology risk&#8212;something to secure, validate, and control within existing structures. That framing misses the shift already underway. AI is altering how decisions are formed, shaped, and executed across the organisation. 
Governance that remains focused on systems will struggle to keep pace with that change.</p><h3><strong>Boards Think They Are Managing AI Risk. They Are Not.</strong></h3><p>Most boards take comfort from the fact that AI risk has been allocated. IT manages infrastructure and security. Data teams oversee models. Legal and compliance monitor regulatory exposure. These structures work for systems that behave predictably, where outputs can be traced and logic audited.</p><p>AI systems behave differently. They generate outputs based on statistical patterns rather than defined rules. The same input can produce different results, and in many cases, the reasoning behind those outputs cannot be fully explained. This creates a persistent gap between perceived control and actual control. Risk appears contained because it has been assigned to familiar functions, yet the organisation remains exposed in ways those functions are not designed to detect or manage.</p><h3><strong>The Category Error: AI Is Not Just Another Technology Layer</strong></h3><p>Treating AI as another form of software leads to a fundamental misunderstanding. Traditional systems execute instructions defined by humans.
When those instructions are wrong, failures tend to be predictable and traceable. Testing and remediation follow a clear path.</p><p>AI operates on a different basis. It identifies patterns in data and uses those patterns to generate predictions, recommendations, or content. The system does not &#8220;know&#8221; whether it is correct. It produces outputs that are statistically plausible given what it has seen before. When conditions change, those outputs can become unreliable without any obvious signal.</p><p>This introduces uncertainty directly into the organisation&#8217;s decision processes. What was once a question of system accuracy becomes a question of how much uncertainty is being embedded into decisions, and whether that uncertainty is understood or governed.</p><h3><strong>AI Systems Do Not Make Decisions. They Shape Them.</strong></h3><p>The distinction between AI making decisions and supporting them is often overstated. In practice, AI systems shape decisions by influencing the information on which people rely. That influence is rarely neutral and often invisible.</p><p>Consider how this plays out across functions. A customer service team uses AI to draft responses and gradually relies on those drafts with minimal review. A credit team uses a model to prioritise applications, shaping which cases receive attention. A strategy team uses AI-generated analysis as a starting point, with embedded assumptions left largely unchallenged.</p><p>In each case, the final decision remains with a human. What changes is the quality and direction of the inputs that inform that decision. When those inputs are accepted without scrutiny, governance frameworks are bypassed in practice, even if they remain intact on paper.</p><h3><strong>The Five Failure Modes of AI-Driven Decision-Making</strong></h3><p>AI risk rarely presents as a visible system failure. It tends to surface as a decline in decision quality, often gradually and without a clear trigger. 
These failures follow patterns that are already emerging across industries.</p><p><strong>Misleading outputs</strong> are treated as reliable because they are presented clearly and confidently. This has been seen in legal contexts, where AI-generated case references were accepted without verification and later found to be incorrect. The system produced plausible outputs; the failure lay in how those outputs were interpreted.</p><p><strong>Bias</strong> embedded in training data is reproduced at scale. Amazon&#8217;s experimental hiring tool illustrates this clearly. It penalised CVs containing indicators associated with women because it was trained on historical hiring data that reflected existing bias. The model amplified a pattern that was already present.</p><p><strong>Fragility under changing conditions</strong> emerges when the world drifts away from the data a model was trained on. During the early stages of COVID-19, many predictive models became unreliable as consumer behaviour shifted in ways not reflected in historical data. Models that performed well under stable conditions proved fragile under disruption.</p><p><strong>Accountability</strong> becomes blurred when AI influences decisions. Responsibility is distributed across teams, and no single role owns the outcome. When something goes wrong, organisations often struggle to determine who is accountable for the decision itself.</p><p><strong>Reputational exposure</strong> increases when AI-driven decisions affect customers at scale. Errors in automated responses or moderation systems can spread quickly, damaging trust. These incidents are rarely the result of a single failure. They reflect a lack of clarity over how decisions are governed and reviewed.</p><h3><strong>Why Existing Risk Frameworks Fall Short</strong></h3><p>Most enterprise risk frameworks assume that systems are observable and controllable. They are built on the premise that behaviour can be tested, monitored, and corrected when deviations occur.
These assumptions hold for traditional systems.</p><p>AI challenges each of these assumptions. Many models operate with limited explainability, and their behaviour can evolve as new data is introduced. Validation processes tend to focus on performance at a point in time, rather than how outputs will be used in practice or how they will behave under changing conditions. Compliance processes assess whether systems meet regulatory requirements, but they do not capture how decisions are influenced once those systems are deployed.</p><p>The gap is structural. Risk frameworks remain focused on systems, while risk is emerging through decisions. Until governance reflects that reality, organisations will continue to manage the visible risks while missing the more consequential ones.</p><h3><strong>The Strategic Blind Spot: AI Risk Without AI Ambition</strong></h3><p>Risk cannot be defined without a clear sense of ambition. Yet many boards are discussing AI risk without establishing where AI is expected to create value. Without that clarity, defining an appropriate risk appetite becomes difficult.</p><p>The risk profile associated with using AI to improve efficiency differs significantly from one where AI is embedded in core products, pricing, or customer interactions. Without an explicit ambition, governance tends to become reactive. Some organisations take on risk without recognising it, deploying AI broadly with limited oversight. Others become overly cautious, slowing down initiatives because governance cannot support them.</p><p>The underlying issue is not governance in isolation. It is the absence of a clear link between strategy and risk. Decisions about where to deploy AI and how much risk to accept need to be made together.</p><h3><strong>The Governance Gap: No One Owns AI-Driven Decisions</strong></h3><p>AI spans multiple functions, but governance structures remain siloed. 
IT manages infrastructure, data teams manage models, legal oversees compliance, and business units apply outputs. This reflects organisational design, not the reality of decision-making.</p><p>When AI influences a decision, no single role owns the outcome. A model may perform as intended, yet be applied in a context that introduces risk. A system may meet regulatory requirements, yet still lead to poor decisions in practice. These gaps are not anomalies; they are a consequence of how responsibility is distributed.</p><p>Governance has not adapted to the way decisions are now being shaped. Until accountability is aligned with decision-making, these gaps will continue to emerge.</p><h3><strong>What Boards Should Govern Instead: Decisions, Not Systems</strong></h3><p>Boards need to shift their focus from systems to decisions. The critical question is where AI is influencing decisions that matter to the organisation, and how those decisions are governed.</p><p>This begins with visibility. Boards need to understand where AI is shaping decisions, whether directly or indirectly. It requires clarity on where human judgement is required and where automation is appropriate. It also requires explicit accountability, ensuring that responsibility for outcomes is clearly defined.</p><p>This does not demand complex frameworks. It demands alignment between governance and how decisions are actually made. Organisations that achieve this tend to simplify governance, focusing on clarity and accountability rather than expanding controls.</p><h3><strong>The Implication: Competitive Advantage Will Come from Better Governance</strong></h3><p>There is a tendency to equate advantage with speed of adoption. Organisations that move quickly are expected to gain an edge. Speed plays a role, but it does not determine who captures the most value.</p><p>Advantage will come from the ability to govern decision-making effectively. 
Organisations that understand where AI influences decisions, and how those decisions are managed, can deploy AI in areas that matter with confidence. They can scale without undermining trust or performance.</p><p>Others will find progress more difficult. Some will remain in pilot mode, constrained by governance concerns. Others will move quickly and encounter issues that erode trust. The difference will not be technical capability. It will be governance discipline.</p><h3><strong>AI Risk Is a Leadership Issue</strong></h3><p>AI is reshaping how decisions are made across organisations. That shift sits at the centre of strategy and governance. It cannot be addressed solely through technical or compliance functions.</p><p>Boards need to define where AI will be used, what level of risk is acceptable, and how decisions will be governed. Without that clarity, decisions will still be influenced by AI, but without clear oversight or accountability.</p><p>AI risk sits at the core of leadership responsibility. Treating it as a peripheral technical issue delays the work that matters.</p><h3><strong>Final Thought</strong></h3><p>Most organisations are asking how to manage AI risk. 
A more useful question is how AI is changing decision-making, and whether that change is being governed deliberately.</p><p>Until that question is addressed, AI risk will continue to be managed in the wrong place.</p><h3><strong>Subscribe for more</strong></h3><p>If you want practical guidance on how boards can govern AI effectively and turn it into a strategic advantage, subscribe to <strong>AI in the Boardroom</strong>.</p><p>Each issue explores how boards and executive teams can turn AI disruption into a strategic advantage, covering strategy, governance, and transformation in the age of artificial intelligence.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aiintheboardroom.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aiintheboardroom.com/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aiintheboardroom.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share&quot;,&quot;text&quot;:&quot;Share AI in the Boardroom&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aiintheboardroom.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share"><span>Share AI in the Boardroom</span></a></p>]]></content:encoded></item><item><title><![CDATA[Why AI Needs Committee-Level Attention]]></title><description><![CDATA[Edition 10 - How boards can turn AI governance from policy on paper into real oversight.]]></description><link>https://www.aiintheboardroom.com/p/why-ai-needs-committee-level-attention</link><guid isPermaLink="false">https://www.aiintheboardroom.com/p/why-ai-needs-committee-level-attention</guid><dc:creator><![CDATA[Karim Harbott]]></dc:creator><pubDate>Tue, 17 Mar 2026 
07:45:40 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Qk2V!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79d13d4d-6427-4a47-a8d5-33463b5b5801_4550x3275.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Qk2V!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79d13d4d-6427-4a47-a8d5-33463b5b5801_4550x3275.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Qk2V!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79d13d4d-6427-4a47-a8d5-33463b5b5801_4550x3275.png 424w, https://substackcdn.com/image/fetch/$s_!Qk2V!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79d13d4d-6427-4a47-a8d5-33463b5b5801_4550x3275.png 848w, https://substackcdn.com/image/fetch/$s_!Qk2V!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79d13d4d-6427-4a47-a8d5-33463b5b5801_4550x3275.png 1272w, https://substackcdn.com/image/fetch/$s_!Qk2V!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79d13d4d-6427-4a47-a8d5-33463b5b5801_4550x3275.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Qk2V!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79d13d4d-6427-4a47-a8d5-33463b5b5801_4550x3275.png" width="1456" height="1048" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/79d13d4d-6427-4a47-a8d5-33463b5b5801_4550x3275.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1048,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:12162741,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aiintheboardroom.com/i/191125935?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79d13d4d-6427-4a47-a8d5-33463b5b5801_4550x3275.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Qk2V!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79d13d4d-6427-4a47-a8d5-33463b5b5801_4550x3275.png 424w, https://substackcdn.com/image/fetch/$s_!Qk2V!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79d13d4d-6427-4a47-a8d5-33463b5b5801_4550x3275.png 848w, https://substackcdn.com/image/fetch/$s_!Qk2V!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79d13d4d-6427-4a47-a8d5-33463b5b5801_4550x3275.png 1272w, https://substackcdn.com/image/fetch/$s_!Qk2V!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79d13d4d-6427-4a47-a8d5-33463b5b5801_4550x3275.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h3><strong>AI Is Now a Boardroom Issue</strong></h3><p>Artificial intelligence is now a regular topic in boardrooms. Almost every board is discussing AI. Far fewer are governing it. Across the world, it appears on the agenda every quarter. Directors ask management about the latest initiatives. Executives describe pilots involving generative AI tools, predictive analytics, customer service automation, or internal productivity systems built on large language models. The tone is often optimistic. AI is framed as a powerful opportunity to increase productivity, create new products, and sharpen competitive advantage.</p><p>Yet when the conversation moves from opportunity to governance, clarity often disappears. Ask most boards a simple question - who actually governs AI in this organisation?
The answers are often vague. Related questions quickly follow: Who oversees AI risks? Which AI systems exist across the organisation? Who decides whether an AI system should be deployed?</p><p>In many companies, the answers are uncertain. AI governance is often documented in a policy written by legal or risk teams and circulated internally. Meanwhile, real-world experimentation continues across business units, innovation teams, and technology groups, sometimes officially sanctioned, often not. The gap between policy and practice is growing.</p><p>As AI becomes embedded in decision-making, products, and operations, this gap becomes increasingly dangerous. Boards cannot govern AI effectively through occasional agenda items or general policy statements. AI requires structured oversight, and in many organisations that means committee-level attention.</p><h3><strong>AI Is Not Just a Technology Topic</strong></h3><p>Many boards still treat AI as an extension of the technology agenda. It often appears alongside updates on cybersecurity, cloud migration, or digital infrastructure.
While understandable, this framing is incomplete.</p><p>Artificial intelligence is what economists describe as a <strong>general-purpose technology</strong>. Like electricity or the internet before it, AI has the potential to reshape how organisations operate across almost every domain. It can influence how strategic decisions are analysed and made, how customers are served, how employees work, how products are designed, how risks are assessed, and how organisations compete.</p><p>Consider a common example. A retail bank introduces an AI-driven credit scoring model designed to assess loan applicants more accurately than traditional methods. The system analyses thousands of data points and generates predictions about default risk. From a business perspective, the benefits are clear: faster approvals, better pricing, and potentially improved portfolio performance.</p><p>But the implications extend far beyond efficiency. Directors and executives must also ask whether the model unintentionally disadvantages certain customer groups, whether regulators will expect explainability in decisions made by algorithms, how the model will be monitored over time, and who is accountable if the system behaves incorrectly. These questions sit at the intersection of strategy, risk, ethics, regulation, and reputation.</p><p>Boards routinely rely on committees to examine complex matters in depth. Audit committees examine financial integrity. Risk committees scrutinise enterprise risk frameworks. Remuneration committees oversee executive incentives. AI increasingly demands the same level of attention.</p><h3><strong>The Problem With AI Governance on Paper</strong></h3><p>Many organisations believe they already have AI governance in place. Typically, this governance takes the form of a written policy outlining principles such as fairness, transparency, and accountability. 
These policies often align with international guidance from organisations such as the OECD or national regulators.</p><p>The challenge is not the existence of these policies. It is their lack of operational impact.</p><p>Across industries, a familiar pattern is emerging. AI adoption spreads quickly across departments. Marketing teams experiment with generative AI tools, data science teams build predictive models, and product teams embed machine learning into digital services. Oversight remains fragmented. Legal teams review compliance risks, IT teams review security implications, and HR teams consider workforce impacts. Yet no single forum evaluates the full picture.</p><p>The consequences are predictable. Organisations discover AI risks only after systems have already been deployed.</p><p>One large consumer company experienced this firsthand when a marketing team implemented an AI system designed to generate personalised promotional messages. The system performed well initially, increasing engagement and reducing campaign costs. Several months later, however, a customer complaint triggered an internal investigation. Analysts discovered that the model had been trained on historical marketing data that reflected past biases. As a result, certain customer groups received significantly fewer promotional offers. Situations like this are becoming increasingly common as AI adoption accelerates across organisations.</p><p>The problem was not malicious intent. The system had simply replicated patterns in historical data. The deeper issue was the absence of structured oversight. No committee had reviewed the model&#8217;s training data, fairness implications, or monitoring processes before deployment.</p><p>AI governance had existed on paper, but it was not being actively practised.</p><h3><strong>Why Committee-Level Oversight Matters</strong></h3><p>Boards cannot examine every complex issue in depth during full board meetings. 
AI governance requires sustained attention for several reasons.</p><p>First, AI systems evolve after deployment. Traditional software behaves predictably once implemented, but machine learning models can drift over time as real-world data changes. Generative AI systems can produce unexpected outputs. Autonomous systems may behave in ways designers did not anticipate. Oversight, therefore, cannot stop at the moment a system goes live. It must continue as the system operates.</p><p>Second, AI decisions increasingly carry ethical and societal implications. Algorithms are already influencing decisions related to hiring, insurance pricing, credit approval, healthcare diagnostics, and content moderation. Errors or bias in these systems can lead to legal exposure and reputational damage. Committee-level review creates a structured environment where such implications can be examined from multiple perspectives.</p><p>Third, AI strategy and AI risk are inseparable. Organisations that adopt AI too loosely expose themselves to regulatory and reputational risks. Organisations that govern AI too cautiously may fall behind competitors that use it more effectively. Oversight must therefore balance innovation with accountability, and a committee provides a forum for evaluating this balance.</p><h3><strong>What an AI Oversight Committee Should Do</strong></h3><p>The phrase &#8220;AI committee&#8221; can create confusion. Some boards assume it requires a new permanent board committee. In practice, oversight can take several forms depending on the organisation.</p><p>Some boards embed AI oversight within an existing <strong>risk or technology committee</strong>. Others establish a <strong>digital, data, or AI oversight committee</strong> reporting to the board. Regardless of the structure, the responsibilities tend to be similar. 
The committee&#8217;s purpose is to ensure AI governance moves from principle to practice.</p><p>One core responsibility is reviewing AI proposals and deployments at defined stages. A useful question for directors to consider is simple: where does this scrutiny currently happen in your organisation? Before significant AI systems are implemented, the committee should ensure adequate consideration of the system&#8217;s purpose, the data used for training, the expected business value, the potential risks, and the monitoring mechanisms to be used once the system is operational. The objective is not to slow innovation but to ensure meaningful scrutiny before systems affect customers, employees, or markets.</p><p>Committees should also request risk and impact assessments. AI systems introduce risks such as bias, privacy violations, cybersecurity vulnerabilities, lack of explainability, and operational dependency on automated decisions. Impact assessments should examine transparency, fairness, security, privacy, and, where relevant, broader societal implications.</p><p>Another key role is ensuring alignment between AI initiatives and organisational values. Many companies publish statements on responsible AI or the ethical use of technology. A committee can ensure these commitments are reflected in actual deployment decisions rather than remaining aspirational language.</p><p>Committees can also provide a channel for stakeholder input. Employees, customers, regulators, and civil society groups often identify risks that technical teams overlook. Structured oversight enables these concerns to be raised and evaluated in a controlled, constructive manner.</p><p>Critically, the committee must have the authority to raise red flags when necessary. It should be able to pause, recommend revision, or veto AI projects that fail to meet agreed ethical, legal, or operational standards. 
Without this authority, the committee risks becoming symbolic rather than effective.</p><h3><strong>Designing an Effective Oversight Committee</strong></h3><p>For committee-level oversight to be credible, the structure must be thoughtfully designed.</p><p>First, the committee should be <strong>diverse in expertise and perspective</strong>. AI governance cannot be managed solely by technologists. Effective oversight requires input from legal and compliance specialists, HR leaders, cybersecurity professionals, strategy and operations executives, and technical experts such as data scientists. Diversity should also extend beyond professional disciplines. Representation across gender, ethnicity, and organisational perspectives improves the committee&#8217;s ability to identify unintended consequences.</p><p>Second, the committee must be <strong>embedded within the organisation&#8217;s formal governance framework</strong>. This means having a clearly defined Terms of Reference, reporting lines to the board, scheduled review cycles, and documented decisions. The board should receive regular reports summarising AI deployments, emerging risks, and key governance decisions.</p><p>Third, the committee must be <strong>properly resourced and supported</strong>. Members should receive training on core AI concepts, model risks, and regulatory developments. Where necessary, organisations should provide access to independent experts who can assist with complex technical or ethical assessments.</p><h3><strong>Proportionate Governance for Smaller Organisations</strong></h3><p>Not every organisation can establish a formal AI oversight committee with dedicated staff. 
Smaller organisations often lack the internal expertise or resources to build such structures.</p><p>Yet the governance need still exists.</p><p>For micro-organisations and SMEs, proportionate approaches may include fractional advisory panels composed of external experts, periodic independent reviews of AI deployments, or outsourced governance support from specialist firms. The objective is not bureaucracy. It is ensuring that important AI decisions receive thoughtful evaluation before they shape products or services.</p><h3><strong>The Committee Must Not Become a Rubber Stamp</strong></h3><p>The committee&#8217;s culture matters as much as its structure. If the committee exists only to approve management proposals, it will fail.</p><p>A credible oversight body must function as a deliberative forum. Members should challenge assumptions, request additional evidence, and examine long-term implications. At the same time, the committee should support responsible innovation. The objective is not to stop AI development but to guide it.</p><p>A well-functioning committee often accelerates innovation rather than slowing it. By identifying risks early, organisations avoid costly regulatory problems, reputational damage, and public controversy.</p><h3><strong>The Strategic Advantage of Responsible AI Governance</strong></h3><p>Some boards worry that strong AI governance will slow innovation. Experience across industries suggests the opposite.</p><p>Organisations with credible governance frameworks often move faster because employees understand the boundaries within which they can experiment. Teams know what standards AI systems must meet. Management gains a clear process for evaluating proposals and prioritising initiatives.</p><p>Responsible governance also builds trust. Employees feel more confident experimenting with AI when guardrails are clear. 
Customers and regulators are more comfortable engaging with organisations that demonstrate thoughtful oversight.</p><p>Over time, governance becomes a strategic asset rather than a constraint.</p><h3><strong>What Boards Should Do Now</strong></h3><p>For directors and executives, three practical steps can help move AI governance from discussion to action.</p><p>First, map where AI is currently used across the organisation. Many boards are surprised by how widespread experimentation has already become.</p><p>Second, identify who currently oversees AI risk and deployment decisions. If accountability is unclear, governance gaps likely exist.</p><p>Third, establish a structured oversight mechanism. This may involve one or more of the following:</p><ul><li><p>empowering an existing committee;</p></li><li><p>establishing an AI or digital oversight committee with the authority and expertise required to review AI initiatives effectively;</p></li><li><p>introducing formal review processes for AI initiatives.</p></li></ul><p>These steps move AI governance from aspiration to action.</p><h3><strong>Final Thoughts</strong></h3><p>Artificial intelligence is moving rapidly from experimentation to infrastructure. As AI becomes embedded in products, services, and decision-making processes, the governance challenge will only intensify.</p><p>Boards cannot rely solely on occasional discussions or policy statements. <strong>Effective AI governance requires active practice</strong>. Committee-level oversight is one of the most practical ways to keep that practice consistent and thoughtful.</p><p>Done well, this structure does not slow innovation. It enables organisations to pursue AI opportunities with discipline and confidence while protecting customers, employees, and shareholders from unintended harm. In the long run, the organisations that succeed with AI will not be those that experiment most aggressively.
They will be those that combine ambition with responsible governance.</p><div><hr></div><p>If you found this useful, consider subscribing to <strong>AI in the Boardroom</strong>.</p><p>Each issue explores how boards and executive teams can turn AI disruption into a strategic advantage, covering strategy, governance, and transformation in the age of artificial intelligence.</p>]]></content:encoded></item><item><title><![CDATA[You Can’t Outsource Your Way to AI Advantage]]></title><description><![CDATA[Edition 9 - How core competencies, not vendors, will decide who wins in the age of AI.]]></description><link>https://www.aiintheboardroom.com/p/you-cant-outsource-your-way-to-ai</link><guid isPermaLink="false">https://www.aiintheboardroom.com/p/you-cant-outsource-your-way-to-ai</guid><dc:creator><![CDATA[Karim Harbott]]></dc:creator><pubDate>Mon, 19 Jan 2026 13:29:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!l8YF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ca4c347-ad56-4e0d-9ae2-b9e220e53d47_4550x3275.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a
class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!l8YF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ca4c347-ad56-4e0d-9ae2-b9e220e53d47_4550x3275.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!l8YF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ca4c347-ad56-4e0d-9ae2-b9e220e53d47_4550x3275.png 424w, https://substackcdn.com/image/fetch/$s_!l8YF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ca4c347-ad56-4e0d-9ae2-b9e220e53d47_4550x3275.png 848w, https://substackcdn.com/image/fetch/$s_!l8YF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ca4c347-ad56-4e0d-9ae2-b9e220e53d47_4550x3275.png 1272w, https://substackcdn.com/image/fetch/$s_!l8YF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ca4c347-ad56-4e0d-9ae2-b9e220e53d47_4550x3275.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!l8YF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ca4c347-ad56-4e0d-9ae2-b9e220e53d47_4550x3275.png" width="1456" height="1048" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7ca4c347-ad56-4e0d-9ae2-b9e220e53d47_4550x3275.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1048,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:10290032,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aiintheboardroom.com/i/185059234?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ca4c347-ad56-4e0d-9ae2-b9e220e53d47_4550x3275.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!l8YF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ca4c347-ad56-4e0d-9ae2-b9e220e53d47_4550x3275.png 424w, https://substackcdn.com/image/fetch/$s_!l8YF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ca4c347-ad56-4e0d-9ae2-b9e220e53d47_4550x3275.png 848w, https://substackcdn.com/image/fetch/$s_!l8YF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ca4c347-ad56-4e0d-9ae2-b9e220e53d47_4550x3275.png 1272w, https://substackcdn.com/image/fetch/$s_!l8YF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ca4c347-ad56-4e0d-9ae2-b9e220e53d47_4550x3275.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
></svg></button></div></div></div></a></figure></div><h3><strong>Why boards must treat AI as a core competence, not a vendor service</strong></h3><p>In 2000, Toys &#8220;R&#8221; Us made a decision that looked sensible at the time. It partnered with Amazon to run its online sales. The logic was simple: Amazon was good at e-commerce, Toys &#8220;R&#8221; Us was good at toys. Let the specialist handle the technology.</p><p>A few years later, the channel became the market. The firm that had outsourced its digital capability no longer controlled the most important part of its business model. When the partnership unravelled, Toys &#8220;R&#8221; Us found itself without the internal muscle to compete in a world it had helped create.</p><p>This pattern repeats. 
Boeing outsourced large parts of its software engineering and system integration, hollowing out the very capability needed to assure safety in complex aircraft. Ford allowed vehicle software to fragment across hundreds of supplier-built modules, then discovered that speed, integration, and innovation had become impossible to orchestrate. In each case, what was once seen as &#8220;support&#8221; quietly became strategic. By the time leadership realised, the organisation no longer owned the competence required to respond.</p><p>AI now sits at the same inflexion point.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aiintheboardroom.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI in the Boardroom! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Many boards still treat AI as a technology to be procured, piloted, and rolled out. In reality, AI is becoming a general-purpose production and decision-making system that spans the entire enterprise. It shapes how products are designed, how prices are set, how risks are assessed, how customers are served, and how capital is allocated. Once a capability reaches that position, it is no longer an IT concern. It becomes part of the firm&#8217;s competitive identity.</p><p>Strategy, at its core, is not a plan. 
It is the set of decisions, assets, and capabilities that allow a company to do things rivals cannot, at a speed and quality they cannot match. You can outsource tasks. You cannot outsource the capability on which your advantage depends.</p><h3><strong>AI and the logic of core competencies</strong></h3><p>The idea of a &#8220;core competence&#8221; is simple. Some capabilities do more than support operations. They underpin multiple products, shape how the firm competes, and are difficult for others to copy. These capabilities deserve long-term investment, executive attention, and board oversight because they define what the organisation is.</p><p>AI increasingly meets all three tests.</p><p>It is not confined to a single product line. It cuts across marketing, operations, finance, risk, and R&amp;D. It is not a marginal efficiency tool. It influences cost curves, service levels, speed of innovation, and the quality of decision-making. And when built on proprietary data, domain knowledge, and organisational learning, it becomes hard to replicate.</p><p>This is where the resource-based view of the firm becomes useful. Sustainable advantage comes from resources and capabilities that are valuable, rare, hard to imitate, and embedded in organisational systems. Generic AI services accessed through an API fail this test. They are widely available, easily copied, and outside the firm&#8217;s control. In-house AI capability built on proprietary data, tailored models, and disciplined operating routines can meet it.</p><p>Michael Porter&#8217;s value chain offers another lens. Competitive advantage is realised through activities: how the firm designs, builds, sells, and supports its products and services. AI now sits at the heart of many of these primary activities. It shapes demand forecasting, pricing, production planning, customer interaction, fraud detection, credit assessment, and compliance monitoring. 
When a capability permeates the value chain, it cannot be treated as a peripheral service. It becomes part of the way the firm creates value.</p><p>Once a capability reaches that point, the strategic question is no longer &#8220;Which vendor should we use?&#8221; It becomes &#8220;Do we own the competence to design, govern, and evolve this capability ourselves?&#8221;</p><h3><strong>Three levels of AI adoption, three strategic positions</strong></h3><p>Most organisations move through distinct stages in their use of AI. Each stage carries different implications for control, learning, and advantage.</p><h4><strong>Level 1: Using general models</strong></h4><p>At this stage, firms consume foundation models through standard interfaces. They build chatbots, automate document processing, and support knowledge work. Time to value is short. Costs are variable. Differentiation is minimal. Learning is limited because the underlying models, data pipelines, and evaluation methods are owned by someone else.</p><p>This is a sensible place to start. It is not a place to stay.</p><h4><strong>Level 2: Customising and fine-tuning</strong></h4><p>Here, organisations adapt general models using their own data. Performance improves. Use cases become more specific. Some intellectual property is created. Yet architectural control, safety mechanisms, and core learning loops remain external. Dependence on vendors persists. Switching costs rise. Strategic freedom remains constrained.</p><h4><strong>Level 3: Building and operating in-house capability</strong></h4><p>At this level, the firm owns the data architecture, model lifecycle, evaluation standards, and deployment pipelines. It may still use external cloud platforms and open models, but it controls how intelligence is trained, tested, monitored, and integrated into business processes. It develops its own talent, tools, and governance routines. 
Learning compounds over time.</p><p>This is the point at which AI becomes a core competence rather than a purchased feature.</p><p>The strategic distinction between these levels mirrors the distinction between renting capacity and owning capability. The former enables experimentation. The latter enables sustained advantage and effective governance.</p><h3><strong>Toys &#8220;R&#8221; Us: when a channel becomes the business</strong></h3><p>The lesson from Toys &#8220;R&#8221; Us is not about e-commerce. It is about misclassifying a strategic capability as a support function. Many organisations, and sadly many consultancies, still make this mistake.</p><p>By outsourcing its online presence, the company also outsourced customer data, experimentation, and the learning cycles that would later define retail competition. When digital became central, it no longer possessed the skills, systems, or culture needed to adapt. The cost was not just lost revenue. It was lost strategic freedom.</p><p>AI now plays a similar role. For many firms, it is moving from a productivity tool to the primary interface between the organisation and its customers, employees, and partners. If that interface is built, trained, and controlled elsewhere, the firm may retain operational access but lose strategic agency.</p><h3><strong>Boeing: outsourcing engineering judgment</strong></h3><p>Boeing&#8217;s difficulties reveal a different dimension of the same issue. Complex systems demand system-level understanding. When software development and integration are fragmented across suppliers, knowledge becomes dispersed, accountability blurs, and management loses the ability to interrogate risk with confidence. Technical assurance becomes contractual rather than intrinsic.</p><p>AI systems exhibit similar characteristics. They are probabilistic, adaptive, and deeply dependent on data quality and operational context. 
If the organisation cannot inspect model behaviour, test failure modes, and understand interactions with other systems, governance becomes symbolic. Boards remain accountable for outcomes they cannot truly oversee.</p><p>Owning AI capability is not only about competition. It is about control.</p><h3><strong>Ford, Tesla, and the Chinese EV manufacturers: software as the product</strong></h3><p>Ford&#8217;s public acknowledgement of its software challenge marked a significant shift. Over time, vehicle software had grown into a patchwork of supplier-built modules, each optimised locally but poorly integrated. Innovation slowed. Updates became complex. Learning cycles stretched.</p><p>Tesla and several Chinese electric vehicle manufacturers took a different path. They built vertically integrated software and data platforms. They treated software and, increasingly, AI as central to product identity and performance. This allowed rapid iteration, tight integration between hardware and intelligence, and continuous improvement based on real-world data.</p><p>The strategic difference is not simply technical. It is organisational. One model treats software and AI as components to be sourced. The other treats them as capabilities to be cultivated.</p><p>As AI becomes embedded in products and services across industries, the same divergence will appear. Firms that own the full learning loop will adapt faster than those that manage ecosystems of suppliers.</p><h3><strong>AI as a system of resources and routines</strong></h3><p>Competitive advantage does not arise from a single model. 
It arises from a system of resources and organisational routines, for example:</p><ul><li><p>Proprietary data that reflect unique customer behaviour, processes, and risks.</p></li><li><p>Talent that understands both the domain and the methods.</p></li><li><p>Platforms that support training, testing, deployment, and monitoring.</p></li><li><p>Governance mechanisms that define acceptable risk, ensure compliance, and enable intervention.</p></li><li><p>Cultural norms that encourage experimentation and disciplined review.</p></li></ul><p>Together, these form a dynamic capability: the ability to sense opportunities, test responses, and scale what works while containing what does not.</p><p>This is the true core competence in the age of AI. It cannot be bought off the shelf.</p><h3><strong>What to buy and what to own</strong></h3><p>Boards need clarity on the boundary between sourcing and stewardship.</p><p>Infrastructure, commodity tooling, and generic applications can be procured. Strategic architecture, data governance, model evaluation, and the integration of AI into critical decisions must be owned.</p><p>In practice, this means:</p><ul><li><p>Retaining internal accountability for AI strategy and prioritisation.</p></li><li><p>Owning the data pipelines that feed learning systems.</p></li><li><p>Defining and enforcing standards for model performance, bias, robustness, and security.</p></li><li><p>Building internal capability to challenge vendors and to operate independently if required.</p></li><li><p>Ensuring that knowledge generated by AI use remains within the organisation.</p></li></ul><h3><strong>Questions for boards</strong></h3><p>A small set of questions can reveal whether AI is being treated as a core competence or a utility:</p><ol><li><p>Which parts of our value chain will be shaped most by AI within three years?</p></li><li><p>Where does AI influence differentiation, cost, or risk in ways competitors cannot easily match?</p></li><li><p>Which of these 
capabilities do we currently rent rather than own?</p></li><li><p>Who is accountable for the end-to-end AI lifecycle, from data to decision?</p></li><li><p>How do we test, explain, and override the models that affect critical outcomes?</p></li><li><p>What learning loops allow us to improve faster than our peers?</p></li><li><p>What is our exit strategy from any vendor whose technology has become mission-critical?</p></li></ol><p>These are strategic questions, not technical ones. They belong in the boardroom.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.aiintheboardroom.com/p/you-cant-outsource-your-way-to-ai?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading AI in the Boardroom! This post is public, so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aiintheboardroom.com/p/you-cant-outsource-your-way-to-ai?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aiintheboardroom.com/p/you-cant-outsource-your-way-to-ai?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h3><strong>The strategic choice</strong></h3><p>Toys &#8220;R&#8221; Us lost a channel. Boeing lost system-level engineering coherence. Ford lost software speed and integration. Each case reflects the same underlying error: mistaking a future-defining capability for a support service.</p><p>AI is now crossing that threshold in many sectors. It is becoming part of how firms compete, not just how they operate. 
The decision facing boards is whether to treat AI as something to be purchased or as something to be built, governed, and renewed as a core competence.</p><p>You can outsource delivery. You cannot outsource learning. You can buy tools. You cannot buy the organisational capability that turns those tools into sustained advantage and effective control.</p><p>In the age of AI, strategy is inseparable from the capabilities you own.</p><p>If you want more practical, board-level insight on how to govern AI, link it to strategy, and build the capabilities that matter, subscribe to <strong>AI in the Boardroom</strong>. This newsletter is written for directors and senior leaders who want to turn AI from a source of anxiety into a source of advantage.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aiintheboardroom.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI in the Boardroom! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[The Three Levels of AI Automation Every Board Must Understand]]></title><description><![CDATA[Edition 8 - Three levels of AI. 
Three very different board responsibilities.]]></description><link>https://www.aiintheboardroom.com/p/the-three-levels-of-ai-automation</link><guid isPermaLink="false">https://www.aiintheboardroom.com/p/the-three-levels-of-ai-automation</guid><dc:creator><![CDATA[Karim Harbott]]></dc:creator><pubDate>Mon, 05 Jan 2026 16:35:15 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!h2e-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05cc2262-5b6d-48e5-9644-9039572a6ec3_4550x3275.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!h2e-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05cc2262-5b6d-48e5-9644-9039572a6ec3_4550x3275.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!h2e-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05cc2262-5b6d-48e5-9644-9039572a6ec3_4550x3275.png 424w, https://substackcdn.com/image/fetch/$s_!h2e-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05cc2262-5b6d-48e5-9644-9039572a6ec3_4550x3275.png 848w, https://substackcdn.com/image/fetch/$s_!h2e-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05cc2262-5b6d-48e5-9644-9039572a6ec3_4550x3275.png 1272w, https://substackcdn.com/image/fetch/$s_!h2e-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05cc2262-5b6d-48e5-9644-9039572a6ec3_4550x3275.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!h2e-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05cc2262-5b6d-48e5-9644-9039572a6ec3_4550x3275.png" width="1456" height="1048" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/05cc2262-5b6d-48e5-9644-9039572a6ec3_4550x3275.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1048,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:12817937,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aiintheboardroom.com/i/183564680?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05cc2262-5b6d-48e5-9644-9039572a6ec3_4550x3275.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!h2e-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05cc2262-5b6d-48e5-9644-9039572a6ec3_4550x3275.png 424w, https://substackcdn.com/image/fetch/$s_!h2e-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05cc2262-5b6d-48e5-9644-9039572a6ec3_4550x3275.png 848w, https://substackcdn.com/image/fetch/$s_!h2e-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05cc2262-5b6d-48e5-9644-9039572a6ec3_4550x3275.png 1272w, https://substackcdn.com/image/fetch/$s_!h2e-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05cc2262-5b6d-48e5-9644-9039572a6ec3_4550x3275.png 
1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Boards are increasingly asking for an AI strategy. Most organisations respond with enthusiasm, pilots, and slide decks. A year later, little has changed. Some teams use AI daily. Others block it. Risk sits on the sidelines, and value is unclear. This is not a failure of ambition or technology. It is a failure of clarity.</p><p>Most boards talk about &#8220;AI&#8221; as if it were a single thing. It is not. AI enables different levels of automation, and each level changes the game in material ways. 
It affects how value is created, where risk sits, who is accountable, and what the board must oversee. When boards miss this, two predictable patterns emerge.</p><p>Some overreach. They talk about autonomy and transformation before basic controls are in place. That leads to stalled programmes, regulatory anxiety, and quiet reversals.</p><p>Others underreach. They treat AI as a productivity tool for analysts and marketers. They gain small efficiency wins and miss the strategic upside that competitors are already capturing.</p><p>Both outcomes stem from the same root problem: a lack of shared language. In my work with boards and executive teams, I use a simple, practical model to fix this. It breaks AI down into three levels of automation. Each level is real and already in use. Each carries a very different value and risk profile.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aiintheboardroom.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI in the Boardroom! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>This model does not require technical depth. It gives boards a way to ask better questions, set clearer boundaries, and govern AI without slowing the organisation down. If you want AI to drive advantage rather than confusion, this distinction matters. 
The rest of this article explains the three levels, what they enable, what can go wrong, and what boards should do at each stage.</p><h3><strong>Automation Level 1 - AI to Augment Tasks Performed by Humans</strong></h3><p>At the augmentation level, AI supports humans in their work; it does not replace them. The AI acts as a partner or co-pilot: it gathers data, surfaces insights, suggests options&#8212;and the human retains final decision-making authority. The human remains in the loop, guiding strategy, interpreting nuance, and bearing accountability. This means the organisation uses AI to enhance human judgement and to speed up and scale tasks, rather than handing over control. For boards and executives, this is often the safest entry point: lower risk, faster value, less governance burden, but still substantial potential upside.</p><p>Consider a large professional services firm&#8217;s audit department. It deploys an AI system that reads contracts, extracts clauses, flags anomalies, and offers the audit manager a ranked list of risks to review. The manager then chooses which flagged items to pursue, drafts questions, and leads the human conversations. The AI augments human judgement; it does not replace it. Similarly, in HR, AI might sift thousands of employee survey responses, detect underlying sentiment patterns, and surface the top themes for the HR Director to address&#8212;human insight remains central.</p><p>Think of Level 1 as a competent analyst. It produces output. You still edit and sign off on it.</p><p>Augmentation is a pragmatic first step: you retain human control, deploy AI models to improve your people&#8217;s decision-making, reduce mundane load, improve speed and insight, and thereby increase productivity. From a board governance perspective, the required oversight is manageable. 
Use this level to build AI literacy, governance frameworks, and internal trust before moving to higher autonomy.</p><h4><strong>The three types of AI you must not confuse</strong></h4><p>At Level 1, boards often lump all AI into one category. That creates sloppy oversight. You need to distinguish three common model types because their risks and controls differ.</p><p><strong>1) Classification: &#8220;Which bucket does this belong in?&#8221;</strong></p><p>Classification sorts items into categories.</p><p>Typical uses:</p><ul><li><p>Fraud detection: suspicious vs everyday transactions</p></li></ul><ul><li><p>Spam filters: spam vs not spam</p></li></ul><ul><li><p>Customer support: route to the right team</p></li></ul><ul><li><p>HR: identify CVs that match baseline criteria (with care)</p></li></ul><p>Classification is powerful because it improves consistency and speed. It is also easier to test because the outputs are discrete classes.</p><p><strong>Board lens</strong>: classification tends to fail in predictable ways. You can measure error rates and track drift.</p><p><strong>2) Regression: &#8220;What number should we predict?&#8221;</strong></p><p>Regression predicts a number or a probability.</p><p>Typical uses:</p><ul><li><p>Forecasting demand</p></li></ul><ul><li><p>Predicting churn risk</p></li></ul><ul><li><p>Estimating delivery times</p></li></ul><ul><li><p>Credit risk scoring (within a governed framework)</p></li></ul><p>Regression is often the hidden engine in planning and forecasting. It can be very valuable, but only if the underlying data is clean and stable.</p><p><strong>Board lens</strong>: regression failures often show up as bad forecasts that look &#8220;reasonable&#8221;. The harm is business-value erosion rather than scandal, until it touches regulated decisions.</p><p><strong>3) Generative AI: &#8220;Create text, images, code, or summaries&#8221;</strong></p><p>Generative AI produces new content. 
Large language models, such as those behind ChatGPT and Gemini, sit here.</p><p>Typical uses:</p><ul><li><p>Drafting documents and emails</p></li></ul><ul><li><p>Summarising reports and meetings</p></li></ul><ul><li><p>Searching internal knowledge bases via natural language</p></li></ul><ul><li><p>Drafting code, test cases, and documentation</p></li></ul><ul><li><p>Creating first drafts of policies, training content, or comms</p></li></ul><p>Generative AI is the most visible form of AI right now, and the most misunderstood. It is not a truth machine. It produces plausible output based on patterns. It can be wrong in a confident tone. It can also leak sensitive data if used carelessly.</p><p><strong>Board lens</strong>: generative AI risk is less about &#8220;accuracy in a test set&#8221; and more about real-world use: confidentiality, IP, bias, audit trail, and who relies on output without checking.</p><h4><strong>Real Level 1 examples that boards will recognise</strong></h4><p>Many organisations are already integrating these use cases into their everyday work, boards included. Examples include:</p><ul><li><p><strong>Board pack summarisation</strong>: AI creates an executive summary, a draft risk heatmap, and a list of questions to ask. The company secretary and the exec team verify.</p></li><li><p><strong>Contract and policy drafting support</strong>: AI drafts clauses or compares versions. Legal reviews and owns the final document.</p></li></ul><ul><li><p><strong>Finance analysis</strong>: AI flags anomalies, drafts commentary, and suggests drivers. Finance signs off on numbers and narrative.</p></li></ul><ul><li><p><strong>Risk and incident support</strong>: AI clusters incidents, suggests root causes, and drafts post-incident reports. The incident manager validates.</p></li></ul><p>Level 1 is not &#8220;small&#8221;. It can change cycle times, reduce rework, and lift decision quality across the organisation. This can have a tangible impact on bottom lines. 
However, some things can go wrong at this level. Some common failure modes include:</p><ul><li><p><strong>Over-trust</strong>. People treat AI output as correct because it reads well.</p></li></ul><ul><li><p><strong>Data leakage</strong>. Staff paste confidential content into public tools.</p></li></ul><ul><li><p><strong>Poor prompts</strong>, poor results. Output quality varies wildly by user skill.</p></li></ul><ul><li><p><strong>Shadow AI</strong>. Teams use tools outside policy because it is easy.</p></li></ul><p>Level 1 failures are usually containable if you set rules early. The key here is to keep governance simple and enforceable by doing the following:</p><ul><li><p><strong>Define where AI is allowed.</strong></p><ul><li><p>Internal-only drafts? Fine.</p></li><li><p>Customer-facing comms? Tight controls.</p></li><li><p>Regulated decisions? Treat as Level 2 governance even if &#8220;advisory&#8221;.</p></li></ul></li></ul><ul><li><p><strong>Set &#8220;human-in-the-loop&#8221; standards.</strong></p><ul><li><p>What must be checked every time?</p></li><li><p>What can be sampled?</p></li><li><p>Who signs off?</p></li></ul></li></ul><ul><li><p><strong>Lock down data handling.</strong></p><ul><li><p>Approved tools and approved accounts.</p></li><li><p>Clear rules on confidential and personal data.</p></li><li><p>Central logging, where feasible.</p></li></ul></li></ul><ul><li><p><strong>Train staff on safe use.</strong></p><ul><li><p>Not a vague e-learning. A short, sharp playbook:</p></li><li><p>What to use AI for,</p></li><li><p>What not to use it for,</p></li><li><p>How to verify outputs.</p></li></ul></li></ul><h4><strong>Questions directors should ask at Level 1</strong></h4><ul><li><p>Where is AI being used today, formally and informally?</p></li><li><p>What data is going into prompts, and where does it go?</p></li><li><p>What outputs are customer-facing or regulator-facing?</p></li><li><p>Where is human review required, and is it happening?</p></li><li><p>How do we capture incidents and near-misses?</p></li></ul><p>If you cannot answer these, you do not have Level 1 under control. <strong>Do not move to Level 2</strong>.</p><h3><strong>Automation Level 2 - AI to automate well-defined tasks previously done by humans</strong></h3><p>At this level, the AI takes over well-defined tasks (or workflows) that humans previously did, with minimal human intervention. The task is sufficiently structured and repeatable that the AI can automate it reliably. Humans still monitor, manage exceptions and escalations, but the daily execution is machine-led. AI for automation means replacing specific tasks with AI models. This works best with tasks that are repeatable, bounded, high-volume, and have clear success criteria. This is where measurable cost savings and speed gains arrive. 
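<p>The pattern just described, automate the bounded routine cases and escalate the rest, can be made concrete in a few lines. This is an illustrative sketch only; the claim rules, the threshold, and the model version are invented, not a real system:</p>

```python
# Illustrative sketch of the Level 2 pattern described above: automate the
# bounded, routine cases, escalate everything else, and log every decision.
# The rules, threshold, and model version are invented for illustration.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Decision:
    case_id: str
    outcome: str          # "auto_approved" or "escalated"
    reason: str
    model_version: str
    timestamp: str

AUTOMATION_ENABLED = True  # the kill switch: one flag pauses all automation

def triage_claim(case_id: str, amount: float, complete: bool,
                 audit_log: list, model_version: str = "claims-v1.3") -> Decision:
    """Auto-handle only routine claims within bounds; the rest go to a human."""
    if not AUTOMATION_ENABLED:
        outcome, reason = "escalated", "automation paused"
    elif not complete:
        outcome, reason = "escalated", "missing documents"
    elif amount > 1_000:
        outcome, reason = "escalated", "above auto-approval threshold"
    else:
        outcome, reason = "auto_approved", "routine claim within bounds"
    decision = Decision(case_id, outcome, reason, model_version,
                        datetime.now(timezone.utc).isoformat())
    audit_log.append(decision)  # every decision stays replayable
    return decision

log: list = []
print(triage_claim("C-001", amount=240.0, complete=True, audit_log=log).outcome)    # auto_approved
print(triage_claim("C-002", amount=5_400.0, complete=True, audit_log=log).outcome)  # escalated
```

<p>None of this is the real system, but the shape is the point: clear boundaries, a pause switch, and a log a board can ask to see. Level 2 is where these controls start to pay for themselves.</p>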
It is also where risk rises sharply because errors scale.</p><p>Typical Level 2 use cases include:</p><ul><li><p><strong>Claims handling for simple cases</strong> (with escalation for edge cases)</p></li></ul><ul><li><p><strong>Invoice processing</strong> and reconciliation</p></li><li><p><strong>Customer service resolution</strong> for routine issues</p></li><li><p><strong>Document processing</strong> and extraction at scale</p></li><li><p><strong>Automated KYC checks</strong> in defined scenarios</p></li><li><p><strong>Scheduling and routing</strong> in operations</p></li></ul><p>At Level 2, the risks shift from &#8220;bad output&#8221; to &#8220;bad outcomes&#8221;. Key risks include:</p><ul><li><p><strong>Scale of harm</strong>. A model error hits thousands of cases quickly.</p></li><li><p><strong>Fairness and bias</strong>. If it affects customers or staff, scrutiny increases.</p></li><li><p><strong>Explainability and audit</strong>. You need to show what happened and why.</p></li><li><p><strong>Control failures</strong>. No clear kill switch or escalation path.</p></li><li><p><strong>Drift</strong>. Performance degrades quietly as data changes.</p></li></ul><p>The board does not need to understand the complex mathematics of these models, but it does require confidence that controls match the impact.</p><h4><strong>Board-level controls that matter at Level 2</strong></h4><ul><li><p><strong>Explicit decision rights</strong></p><ul><li><p>Who approved automation?</p></li><li><p>Who owns outcomes?</p></li><li><p>Who can pause it?</p></li></ul></li><li><p><strong>Clear boundaries and exception handling</strong></p><ul><li><p>What cases are automated?</p></li><li><p>What cases must escalate?</p></li><li><p>What is the fallback process?</p></li></ul></li><li><p><strong>Monitoring of outcomes, not activity. 
Track:</strong></p><ul><li><p>error rates,</p></li><li><p>rework rates,</p></li><li><p>complaint rates,</p></li><li><p>time to resolve,</p></li><li><p>fairness indicators where relevant,</p></li><li><p>financial leakage.</p></li></ul></li><li><p><strong>Independent testing before deployment</strong></p><ul><li><p>Test on historic data.</p></li><li><p>Test on edge cases.</p></li><li><p>Red-team the process: how could it fail in practice?</p></li></ul></li><li><p><strong>Audit trail</strong></p><ul><li><p>What data was used?</p></li><li><p>What version of the model ran?</p></li><li><p>What decision was made?</p></li><li><p>Who overrode it?</p></li></ul></li></ul><p>Automation is the next step beyond augmentation: you&#8217;re handing over execution of tasks to AI. The human oversight role remains, but the operational burden shifts. For board/executive teams: this is where you must raise governance, risk monitoring, exception management and change management sharply, because the scale is bigger. Value is real &#8212; cost down, throughput up &#8212; but risk moves up too.</p><h3><strong>Automation Level 3 - AI to behave as an autonomous agent to plan &amp; execute actions to achieve goals</strong></h3><p>Agentic AI refers to AI systems that behave as autonomous agents: they set or are given goals, plan the tasks to achieve those goals, decide which tools to use, take actions, and adjust course, often without human prompting. The human might be in the loop, but AI is doing far more than executing rules; it is orchestrating workflows, making decisions, and continually adapting. According to Oracle: &#8220;Agentic AI refers to an AI system that&#8217;s capable of making autonomous decisions &#8230; then executing on its decisions.&#8221;</p><p>From the board&#8217;s perspective, this is the highest autonomy tier, offering significant value potential, but also the highest risk. 
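<p>One way to picture the human-in-the-loop point above is an approval gate sitting in front of the agent's proposed actions. The sketch below is illustrative only; the action categories and function names are invented, not any vendor's agent framework:</p>

```python
# Illustrative sketch (all names invented): a human approval gate in front of
# an agent's proposed actions. Defined categories never execute without
# sign-off, and every proposal is logged so the run can be replayed later.

APPROVAL_REQUIRED = {"payment", "external_email", "contract_change"}

def gate(action_type: str, payload: dict, log: list) -> str:
    """Decide what happens to an action the agent proposes."""
    status = "needs_human_approval" if action_type in APPROVAL_REQUIRED else "executed"
    log.append({"action": action_type, "payload": payload, "status": status})
    return status

audit: list = []
print(gate("status_update", {"ticket": "IT-42"}, audit))  # executed
print(gate("payment", {"amount": 12_000}, audit))         # needs_human_approval
```

<p>Gates of this shape are what make the autonomy tradeable against control.</p>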
Value arises from scalability, speed, adaptive workflows, and &#8220;digital workers&#8221; replacing or extending humans significantly.</p><p>Some carefully bounded examples of what Level 3 can do in real organisations include:</p><ul><li><p><strong>IT operations agents</strong> that diagnose incidents and run approved fixes</p></li><li><p><strong>Security response agents</strong> that triage alerts and trigger defined actions</p></li><li><p><strong>Procurement agents</strong> that run sourcing workflows within thresholds</p></li><li><p><strong>Sales enablement agents</strong> that build account plans and schedule outreach (with controls)</p></li></ul><ul><li><p><strong>Finance agents</strong> that chase approvals, reconcile variances, and prepare close packs</p></li></ul><p>What boards should demand before Level 3 scales:</p><ul><li><p><strong>A clear agent charter</strong></p><ul><li><p>What goals is it allowed to pursue?</p></li><li><p>What goals are prohibited?</p></li><li><p>What constraints are non-negotiable?</p></li></ul></li><li><p><strong>Permission design</strong></p><ul><li><p>Least privilege access.</p></li><li><p>Separate environments.</p></li><li><p>No shared credentials.</p></li><li><p>Strong identity controls.</p></li></ul></li><li><p><strong>Human approval gates</strong></p><ul><li><p>external communications,</p></li><li><p>payments,</p></li><li><p>contract changes,</p></li><li><p>customer decisions,</p></li><li><p>production deployments.</p></li></ul></li><li><p><strong>Continuous logging and replay</strong></p><ul><li><p>prompts, actions, tool calls, outputs, timestamps.</p></li></ul></li><li><p><strong>Testing for failure behaviours</strong></p><ul><li><p>edge cases,</p></li><li><p>adversarial inputs,</p></li><li><p>conflicting objectives,</p></li><li><p>unexpected tool responses.</p></li></ul></li><li><p><strong>A shut-down path that works</strong></p><ul><li><p>immediate revoke of permissions,</p></li><li><p>stop job 
queues,</p></li><li><p>fall back to manual process.</p></li></ul></li></ul><p>Agentic AI is the frontier of autonomy: significant potential gains, but also big governance stakes. Boards and executives must not treat it as just another automation tool. They need to think about goals, boundaries, oversight, accountability, and learning loops. If the governance is weak, the risk is high. If it&#8217;s done well, value can scale.</p><h3><strong>How to Use the Three Levels as a Board</strong></h3><p>Below is a simple, board-friendly way to operationalise this model.</p><p><strong>1) Create an &#8220;AI Automation Register&#8221;</strong></p><p>This is a one-pager. Updated quarterly and presented to the board or a board committee.</p><p>For each AI system:</p><ul><li><p>owner,</p></li><li><p>business area,</p></li><li><p>automation level (1/2/3),</p></li><li><p>what it does,</p></li><li><p>what data it uses,</p></li><li><p>what outcomes it affects,</p></li><li><p>key controls and monitoring.</p></li></ul><p>If you cannot list it, you cannot govern it.</p><p><strong>2) Match governance intensity to automation level</strong></p><ul><li><p><strong>Level 1</strong>: policy, training, approved tools, human review rules, and incident logging.</p></li><li><p><strong>Level 2</strong>: formal risk assessment, monitoring of outcomes, audit trail, kill switch tests.</p></li><li><p><strong>Level 3</strong>: agent charter, permission controls, approval gates, deep logging, tighter board scrutiny.</p></li></ul><p>Do not over-govern Level 1. It kills adoption and drives shadow use.</p><p>Do not under-govern Level 2 and 3. It creates systemic risk.</p><p><strong>3) Link AI to strategy in plain terms</strong></p><p>Boards get stuck because &#8220;AI strategy&#8221; sounds abstract.</p><p>Use a simple framing:</p><ul><li><p>Which strategic outcomes do we need? (growth, cost, risk, service, speed)</p></li><li><p>Which business constraints block them? 
(capacity, cycle time, decision quality, cost to serve)</p></li><li><p>Which automation level addresses each constraint safely?</p></li></ul><p>AI becomes a portfolio of interventions, not a hype programme.</p><p><strong>4) Set a risk appetite for autonomy</strong></p><p>The board should make an explicit call:</p><ul><li><p>Where are we comfortable with augmentation only?</p></li><li><p>Where are we comfortable with automation?</p></li><li><p>Where do we allow agents, and under what constraints?</p></li></ul><p>Make it explicit. If you do not, the organisation will decide by default, project by project, with uneven controls.</p><h3><strong>Summary: What to Remember</strong></h3><p><strong>Level 1 augments people</strong>. It improves speed and quality. Keep humans accountable. Know the difference between classification, regression, and generative AI.</p><p><strong>Level 2 automates tasks</strong>. This is where savings scale. Risks scale too. Govern outcomes, not tools.</p><p><strong>Level 3 uses agents</strong>. Autonomy makes goal-setting, permissions, and logging the main governance issues. Treat this like delegation, not software.</p><p><strong>The board&#8217;s job is not to become technical. It is to ensure the organisation is making explicit choices about autonomy, value, and risk</strong>.</p><p>If you found this useful, subscribe to AI in the Boardroom. 
I write for directors and senior leaders who want clear thinking, practical controls, and real-world examples of what works and what fails.</p>]]></content:encoded></item><item><title><![CDATA[The Four Levels of AI Adoption: A Practical Guide for Boards and Executives]]></title><description><![CDATA[Edition 7 - Why most companies stall, how leaders should respond, and what good looks like at each stage.]]></description><link>https://www.aiintheboardroom.com/p/the-four-levels-of-ai-adoption-a</link><guid isPermaLink="false">https://www.aiintheboardroom.com/p/the-four-levels-of-ai-adoption-a</guid><dc:creator><![CDATA[Karim Harbott]]></dc:creator><pubDate>Sat, 06 Dec 2025 16:52:08 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!MCJH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F926504d4-4a29-4e1e-b200-15f910fcb9aa_4550x3275.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MCJH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F926504d4-4a29-4e1e-b200-15f910fcb9aa_4550x3275.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MCJH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F926504d4-4a29-4e1e-b200-15f910fcb9aa_4550x3275.png 424w, https://substackcdn.com/image/fetch/$s_!MCJH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F926504d4-4a29-4e1e-b200-15f910fcb9aa_4550x3275.png 848w, 
https://substackcdn.com/image/fetch/$s_!MCJH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F926504d4-4a29-4e1e-b200-15f910fcb9aa_4550x3275.png 1272w, https://substackcdn.com/image/fetch/$s_!MCJH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F926504d4-4a29-4e1e-b200-15f910fcb9aa_4550x3275.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MCJH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F926504d4-4a29-4e1e-b200-15f910fcb9aa_4550x3275.png" width="1456" height="1048" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/926504d4-4a29-4e1e-b200-15f910fcb9aa_4550x3275.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1048,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:10327702,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aiintheboardroom.com/i/180889745?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F926504d4-4a29-4e1e-b200-15f910fcb9aa_4550x3275.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MCJH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F926504d4-4a29-4e1e-b200-15f910fcb9aa_4550x3275.png 424w, 
https://substackcdn.com/image/fetch/$s_!MCJH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F926504d4-4a29-4e1e-b200-15f910fcb9aa_4550x3275.png 848w, https://substackcdn.com/image/fetch/$s_!MCJH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F926504d4-4a29-4e1e-b200-15f910fcb9aa_4550x3275.png 1272w, https://substackcdn.com/image/fetch/$s_!MCJH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F926504d4-4a29-4e1e-b200-15f910fcb9aa_4550x3275.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>In my work advising boards around strategy, AI and transformation, one thing has become abundantly clear in recent years: AI is no longer a technical project sitting three levels down in the organisation. It is now a strategic force that shapes cost, pace, innovation, risk, trust, and competitive position.</p><p>Yet most leadership teams cannot answer a fundamental question: <strong>What level of AI adoption are we aiming for?</strong></p><p>This sounds simple. It isn&#8217;t. Many organisations jump between isolated use cases without a clear direction. Some chase hype. Others avoid action due to fear of risk. The result is the same: confusion, slow progress, and little value delivered.</p><p>To help organisations understand the scope of their AI ambitions, I use a simple framework with my clients: <strong>the Four Levels of AI Automation</strong>. It shows how AI can reshape personal work, operations, products, and whole business models. 
Each level brings new expectations for leadership involvement, data requirements, in-house capabilities, risk, and governance.</p><p>This article explains the four levels in clear terms. It includes real examples and straightforward guidance for board oversight. No buzzwords. No over-the-top promises. Just a clear roadmap that shows where you are, what comes next, and what you need to do.</p><h3><strong>Level 1: AI for Personal Productivity &amp; Decision-Making - The Co-pilot Phase</strong></h3><p>At this level, AI tools are adopted by individuals to serve as intelligent collaborative partners, helping them work faster and make better decisions. The primary objective is to enhance the productivity, creativity, and effectiveness of knowledge workers by augmenting human tasks or providing rapid, data-informed decision support. Think of it as a smart aide. It drafts, summarises, plans, and checks. You stay in charge. The tool speeds you up and reduces cognitive load. It feels small, but it isn&#8217;t. If every knowledge worker speeds up by 20&#8211;40%, entire workflows change.</p><p>A CFO uses AI to draft board finance papers. She still checks everything, but she starts from a smarter first draft. Her team follow her lead. Cycle times drop by days, and her analysts now spend more time on judgment and less on formatting.</p><p>A sales director uses AI inside his CRM to qualify the pipeline. It reviews inputs, flags missing data, and predicts which accounts might be at risk of slipping. The team catch risks far earlier.</p><p>A legal head uses AI to summarise case files and produce early contract outlines. It reduces time spent on admin and frees capacity for real thinking.</p><h4><strong>Leadership, accountability, and focus</strong></h4><p>At this level, leaders set the tone. If the CEO and executive team use AI, others will follow. If they avoid it, adoption dies. Leaders should model use cases in public. 
They should share how they use AI to read papers or prepare meetings. These signals change culture far more than a policy memo.</p><p>Accountability sits with functional heads. Their job is to redesign the daily workflow. AI only helps when people change how they work, not when they bolt a tool onto existing habits.</p><h4><strong>Organisational data readiness</strong></h4><p>The data requirement is low. AI tools can create value from basic inputs, but teams soon hit a wall when internal data is unclear, missing, or scattered. Level 1 exposes gaps that must be fixed later for Levels 2 and 3.</p><h4><strong>Proprietary model development</strong></h4><p>None needed. Off-the-shelf tools deliver strong returns, but guardrails matter. Staff must know when they can and cannot share sensitive content.</p><h4><strong>AI literacy</strong></h4><p>This is the highest return investment you can make at this stage. Teach people how to write prompts, check output, and make better decisions with AI. Most people use only 5&#8211;10% of the capabilities of the tools they have.</p><h4><strong>Governance and compliance</strong></h4><p>At this level, organisations must ensure clear rules for:</p><ul><li><p>What data can be shared</p></li><li><p>When humans must check AI output</p></li><li><p>How quality should be reviewed</p></li><li><p>How prompts and outputs are monitored</p></li></ul><p>Mistakes often happen through carelessness, not malice. Good governance prevents these problems before they become headlines.</p><p>This level represents the lowest-risk entry point for AI. Organisations should start here. It is fast to deploy, cheap to test, and easy to govern. Roll it out to knowledge workers first. Start with clear tasks: drafting emails, notes, briefs, slides, code, and simple analysis. 
Add guardrails, measure gains, prove value, build literacy, then expand the scope.</p><h3><strong>Level 2: AI for Efficient Operations - The Value Chain Optimisation Phase</strong></h3><p>This level focuses on applying AI across specific, established functional departments or points in the organisational value chain to drive measurable efficiency, cost reduction, and process improvement. AI systems are integrated into existing workflows to automate high-volume, repetitive tasks and optimise resource allocation. This is where things get interesting. The focus shifts from improving individual productivity to influencing core processes. You start redesigning how work flows across teams.</p><p>AI models run and improve core processes. They forecast, schedule, route, classify, and detect. They reduce waste and errors across Ops, Supply Chain, Finance, HR, and IT. Humans remain in control of the exceptions and own the outcomes. Pick high-volume, repeatable flows with clear targets. This is where material P&amp;L gains show up.</p><p>Banks use AI to detect fraud, assess credit risk, and flag suspicious activity. These systems run thousands of checks per second and spot patterns that humans would miss.</p><p>Retailers use machine learning to predict demand, reduce overstock, adjust prices, and optimise delivery routes. The effect on the margin is real.</p><p>Manufacturers use predictive maintenance systems that tell engineers when a critical part will fail. Unplanned downtime drops. Safety improves.</p><p>A large insurer I worked with now uses AI to classify claims documents. Staff no longer sift through long reports. The system reads, tags, and routes each item. Case handlers focus on decisions rather than scanning PDFs.</p><h4><strong>Leadership, accountability, and focus</strong></h4><p>Ownership shifts to senior operational leaders. 
AI now touches throughput, cost, and risk.</p><p>KPIs must be clear:</p><ul><li><p>Cycle-time</p></li><li><p>Accuracy</p></li><li><p>Cost-to-serve</p></li><li><p>Failure rates</p></li></ul><p>The COO becomes a key sponsor. Cross-functional collaboration becomes essential because processes rarely fall neatly within a single department.</p><h4><strong>Organisational data readiness</strong></h4><p>Now data quality matters. Organisations need clean, structured, reliable datasets that connect across systems. If your ERP and CRM don&#8217;t speak to each other, AI will struggle. Leaders must treat data as infrastructure, not as a nice-to-have. Without this level of data readiness, Level 2 becomes slow and painful.</p><h4><strong>Proprietary model development</strong></h4><p>Many firms fine-tune models on their own data. You rarely need to build models from scratch.</p><p>Your advantage usually comes from strong pipelines, clear labels, and frequent updates.</p><h4><strong>AI literacy</strong></h4><p>Teams in operations need to understand:</p><ul><li><p>How models behave</p></li><li><p>When to override predictions</p></li><li><p>How to spot drift</p></li><li><p>How to manage exceptions</p></li></ul><p>These skills must sit across technology, risk, and business units.</p><h4><strong>Governance and compliance</strong></h4><p>As opportunity rises, risk rises with it. You now automate decisions that affect money, safety, and trust. This demands:</p><ul><li><p>Model testing</p></li><li><p>Fairness checks</p></li><li><p>Documentation</p></li><li><p>Audit trails</p></li><li><p>Regulatory alignment</p></li><li><p>Clear human oversight</p></li></ul><p>Boards should ask for evidence, not reassurance.</p><h3><strong>Level 3: AI for Innovative Value Propositions - The Product &amp; Service Enhancement Phase</strong></h3><p>Level 3 marks a shift from internal efficiency to external value delivery. 
AI is used as a core component to fundamentally enhance the quality, personalisation, or capability of the organisation&#8217;s products, services, and customer interactions, creating tangible competitive advantages.</p><p>AI shapes the product and the experience. It personalises content, pricing, offers, and journeys. It powers service across channels. It becomes part of the product&#8217;s value, not just the back office. Use it where choice is vast, context shifts often, and speed matters. Media, retail, travel, telco, and banking are prime ground. Personalisation increases use, retention, and spend.</p><p>Streaming platforms personalise every user&#8217;s experience.</p><p>Retail banks offer AI-driven insights on spending, saving, and money habits.</p><p>Learning platforms supply AI tutors that adapt to each student.</p><p>Car manufacturers provide real-time alerts, predictive servicing, and assisted driving features.</p><p>I advised a firm that used AI to personalise every step of its customer onboarding. The system predicted the best sequence of actions for each client. This reduced drop-off by a third. The board had expected a modest uplift. Instead, they saw a dramatic shift in customers&#8217; perception of the product.</p><p>This level turns AI into part of the product promise. Done well, it drives growth and loyalty. It becomes hard to copy because it relies on your data and your learning. It also raises the bar on ethics and accountability.</p><h4><strong>Leadership, accountability, and focus</strong></h4><p>The focus moves to product strategy. AI capability becomes a source of differentiation.</p><p>Ownership sits with the CPO and CTO, but the board must stay close due to customer impact, brand risk, and regulatory exposure.</p><p>The organisation must work cross-functionally. Product, engineering, data, marketing, legal, and risk teams must align. AI-driven features create new kinds of failure. 
Slow coordination kills momentum.</p><h4><strong>Organisational data readiness</strong></h4><p>Customer-level data must be accurate, linked, and accessible. Organisations need real-time flows, strong tagging, and consistent standards. Data privacy becomes a design constraint rather than an afterthought. Customers must have complete trust in how their data is used.</p><h4><strong>Proprietary model development</strong></h4><p>This is where proprietary models start to matter. The value lies in training on your own datasets: behaviour patterns, usage signals, support cases, equipment telemetry, or user feedback loops. Competitors cannot copy this without your data. This is the first point where <a href="https://www.aiintheboardroom.com/p/leveraging-ai-for-strategy-and-competitive">AI creates a lasting strategic edge</a>.</p><h4><strong>AI literacy</strong></h4><p>Product managers must understand how to build with AI. Designers must learn how users interact with conversational agents. Engineers need deeper model skills. This is not optional. 
If leaders do not understand AI at this level, they cannot govern it.</p><h4><strong>Governance and compliance</strong></h4><p>Customer-facing AI creates new classes of risk:</p><ul><li><p>Inaccurate recommendations</p></li><li><p>Unfair outcomes</p></li><li><p>Unclear decisions</p></li><li><p>Hallucinations</p></li><li><p>Harmful advice</p></li></ul><ul><li><p>Breach of trust</p></li></ul><p>Boards should require:</p><ul><li><p>Impact assessments</p></li><li><p>Testing environments</p></li><li><p>Safety reviews</p></li><li><p>Monitoring dashboards</p></li><li><p>Red-teaming</p></li><li><p>Clear escalation paths</p></li></ul><p>Level 3 offers high reward but also high exposure.</p><h3><strong>Level 4: AI for Disruptive Business Models - The Industry Reimagination Phase</strong></h3><p>This is the highest level of transformation, where AI is leveraged to create entirely new operating models, redefine industry structures, or capture new markets that were previously inaccessible. The focus moves beyond incremental improvement to leveraging proprietary AI capabilities as the foundational architecture of the business.</p><p>Here, AI is no longer a tool inside the business. It is the business. It enables new models, breaks cost curves, and reshapes value pools. Incumbents feel it in margins and share. New entrants scale fast. AI changes the unit economics or the speed of discovery. It creates experiences that were previously not feasible.</p><p>Consider autonomous vehicles. The entire economic model of transport changes when driving becomes software. Costs fall. Safety improves. Asset use rises.</p><p>Or think about factories run by autonomous systems. You no longer need large teams on the floor. Sensors, robots, and predictive systems handle most tasks. 
The cost structure collapses.</p><p>The typical pattern is clear: <strong>AI enables work to be done in ways that were previously impossible</strong>.</p><h4><strong>Leadership, accountability, and focus</strong></h4><p>This becomes a CEO and board-led agenda. These decisions affect the organisation&#8217;s entire strategy. They involve long-term bets, heavy investment, and new forms of oversight.</p><p>Leaders must have clear answers to:</p><ul><li><p>What markets are we entering?</p></li><li><p>What risks do we accept?</p></li><li><p>How do we protect customers?</p></li><li><p>How do we manage liability?</p></li><li><p>What skills do we need?</p></li><li><p>What systems must we build?</p></li></ul><p>Strategy becomes technology-led. Execution involves partners, regulators, and new ecosystems.</p><h4><strong>Organisational data readiness</strong></h4><p>Data becomes mission-critical. You need high-quality, real-time input streams, strong data governance, and monitoring that detects failure early. If the data is weak, the model collapses. If the model collapses, the business model collapses.</p><h4><strong>Proprietary model development</strong></h4><p>At this stage, many firms build advanced models or large R&amp;D teams. The advantage comes from unique data, strong telemetry, and deep domain knowledge. These models often require new infrastructure and new talent.</p><h4><strong>AI literacy</strong></h4><p>Everyone must improve their skills. Executives need enough understanding to govern complex systems. The technical teams need world-class expertise. Product teams need to understand limits and failure modes. Risk teams need new tooling and new methods.</p><h4><strong>Governance and compliance</strong></h4><p>This is the highest-risk level. Failures become system-wide. 
A single model error can cause significant harm.</p><p>Boards should expect:</p><ul><li><p>advanced safety frameworks</p></li><li><p>continuous audits</p></li><li><p>fail-safe design</p></li><li><p>independent review</p></li><li><p>crisis playbooks</p></li><li><p>regulatory engagement</p></li></ul><p>Level 4 is where trust becomes a strategic asset.</p><p>This level is not a tech pilot. It is a strategic bet. It can redefine cost curves, cycle times, and customer norms. Treat it like a new business and build models with strong governance in place.</p><h3><strong>A Simple Way to Use This Framework</strong></h3><p>When I present this model to boards, I ask three questions:</p><p><strong>1. Which level are we operating at today?</strong></p><ul><li><p>Most firms sit between Level 1 and Level 2.</p></li><li><p>A few reach Level 3.</p></li><li><p>Very few operate at Level 4.</p></li></ul><p><strong>2. What level are we trying to reach?</strong></p><ul><li><p>This forces clarity on ambition, risk appetite, and investment.</p></li></ul><p><strong>3. What must change in leadership, data, skills, and governance to get there?</strong></p><ul><li><p>This brings discipline and order to AI discussions that often spin in circles.</p></li></ul><p>Boards find this grounding. Executives find it helpful because it cuts through noise. It gives both sides a shared ambition and language.</p><h3><strong>Summary</strong></h3><p>AI adoption is not one thing; it is a ladder. 
Each level changes what is possible and what is required.</p><p><strong>Level 1: Personal Productivity</strong></p><ul><li><p>AI boosts individual output and judgment.</p></li><li><p>Leaders must model usage.</p></li><li><p>The focus is on literacy and safe practice.</p></li></ul><p><strong>Level 2: Efficient Operations</strong></p><ul><li><p>AI supports the value chain.</p></li><li><p>Operational leaders take control.</p></li><li><p>Data and process redesign matter.</p></li></ul><p><strong>Level 3: Innovative Value Propositions</strong></p><ul><li><p>AI shapes products and services.</p></li><li><p>It creates new customer value and a competitive edge.</p></li><li><p>Risks become public and must be governed carefully.</p></li></ul><p><strong>Level 4: Disruptive Business Models</strong></p><ul><li><p>AI rewrites industry rules.</p></li><li><p>Boards and CEOs must lead from the front.</p></li><li><p>The risks are high, but so are the rewards.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!DcLu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0afa6ee-90eb-4f10-9aab-6dc37bff1a6d_1904x1162.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!DcLu!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0afa6ee-90eb-4f10-9aab-6dc37bff1a6d_1904x1162.png 424w, https://substackcdn.com/image/fetch/$s_!DcLu!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0afa6ee-90eb-4f10-9aab-6dc37bff1a6d_1904x1162.png 848w,
https://substackcdn.com/image/fetch/$s_!DcLu!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0afa6ee-90eb-4f10-9aab-6dc37bff1a6d_1904x1162.png 1272w, https://substackcdn.com/image/fetch/$s_!DcLu!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0afa6ee-90eb-4f10-9aab-6dc37bff1a6d_1904x1162.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!DcLu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0afa6ee-90eb-4f10-9aab-6dc37bff1a6d_1904x1162.png" width="1456" height="889" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f0afa6ee-90eb-4f10-9aab-6dc37bff1a6d_1904x1162.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:889,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1351611,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.aiintheboardroom.com/i/180889745?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0afa6ee-90eb-4f10-9aab-6dc37bff1a6d_1904x1162.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!DcLu!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0afa6ee-90eb-4f10-9aab-6dc37bff1a6d_1904x1162.png 424w, 
https://substackcdn.com/image/fetch/$s_!DcLu!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0afa6ee-90eb-4f10-9aab-6dc37bff1a6d_1904x1162.png 848w, https://substackcdn.com/image/fetch/$s_!DcLu!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0afa6ee-90eb-4f10-9aab-6dc37bff1a6d_1904x1162.png 1272w, https://substackcdn.com/image/fetch/$s_!DcLu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0afa6ee-90eb-4f10-9aab-6dc37bff1a6d_1904x1162.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>(download a PDF of the visual <a href="https://drive.google.com/file/d/1_rkgQRP4yPNmYRzdOjjhxZDu2raSoAEG/view?usp=drive_link">here</a>)</p><p>The organisations that win treat AI as a leadership discipline, not a technical project. They build clarity, maturity, and strong governance at each step.</p><p><strong>If you found this helpful&#8230;</strong></p><p>&#128073;&#127997; Subscribe to AI in the Boardroom to get weekly insights, tools, and guidance on how directors and executives can use AI to strengthen strategy, improve oversight, and build stronger organisations.</p>]]></content:encoded></item><item><title><![CDATA[Leveraging AI for Strategy & Competitive Advantage]]></title><description><![CDATA[Edition 6 - How boards and executives can use artificial intelligence to sharpen strategy, make better decisions, and build lasting competitive advantage.]]></description><link>https://www.aiintheboardroom.com/p/leveraging-ai-for-strategy-and-competitive</link><guid isPermaLink="false">https://www.aiintheboardroom.com/p/leveraging-ai-for-strategy-and-competitive</guid><dc:creator><![CDATA[Karim Harbott]]></dc:creator><pubDate>Mon, 17 Nov 2025 08:50:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!XL1M!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fedd18568-5ca0-47d1-abe0-462e3eac87cf_4550x3275.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!XL1M!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fedd18568-5ca0-47d1-abe0-462e3eac87cf_4550x3275.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp"
srcset="https://substackcdn.com/image/fetch/$s_!XL1M!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fedd18568-5ca0-47d1-abe0-462e3eac87cf_4550x3275.png 424w, https://substackcdn.com/image/fetch/$s_!XL1M!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fedd18568-5ca0-47d1-abe0-462e3eac87cf_4550x3275.png 848w, https://substackcdn.com/image/fetch/$s_!XL1M!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fedd18568-5ca0-47d1-abe0-462e3eac87cf_4550x3275.png 1272w, https://substackcdn.com/image/fetch/$s_!XL1M!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fedd18568-5ca0-47d1-abe0-462e3eac87cf_4550x3275.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!XL1M!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fedd18568-5ca0-47d1-abe0-462e3eac87cf_4550x3275.png" width="1456" height="1048" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/edd18568-5ca0-47d1-abe0-462e3eac87cf_4550x3275.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1048,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:9658920,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aiintheboardroom.com/i/179059107?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fedd18568-5ca0-47d1-abe0-462e3eac87cf_4550x3275.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!XL1M!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fedd18568-5ca0-47d1-abe0-462e3eac87cf_4550x3275.png 424w, https://substackcdn.com/image/fetch/$s_!XL1M!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fedd18568-5ca0-47d1-abe0-462e3eac87cf_4550x3275.png 848w, https://substackcdn.com/image/fetch/$s_!XL1M!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fedd18568-5ca0-47d1-abe0-462e3eac87cf_4550x3275.png 1272w, https://substackcdn.com/image/fetch/$s_!XL1M!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fedd18568-5ca0-47d1-abe0-462e3eac87cf_4550x3275.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>For many decades, boards and executives have worked together to define an organisation&#8217;s strategy. This undertaking has always required a clear vision, relevant data, and tough choices. The pace of change, the volume of available data, and the complexity of ecosystems are all increasing. According to McKinsey &amp; Company, AI and generative AI are now &#8220;a new inflexion point in strategy design&#8221; that magnifies the capacity for insight and choice.</p><p>For boards and executives, this raises a dual challenge:</p><ul><li><p>How do we embed AI into the strategic process so it strengthens rather than fragments our efforts?</p></li><li><p>How do we turn AI into a competitive advantage, not just cost savings or pilot inertia?</p></li></ul><p>If you fail to address these challenges, you risk falling behind organisations that treat AI as foundational to strategy, rather than an add-on. There are many opportunities in the strategy process to embed AI tools. These include:</p><ul><li><p>vision, purpose, mission, &amp; values,</p></li><li><p>external analysis - macro environment,</p></li><li><p>external analysis - industry dynamics,</p></li><li><p>internal analysis - core capabilities,</p></li><li><p>strategy formulation - competitive advantage,</p></li><li><p>strategy implementation &amp; execution,</p></li><li><p>risk, governance &amp; ethical dimension.</p></li></ul><p>We will explore each of these in turn.</p><h3><strong>Vision, Purpose, Mission &amp; Values</strong></h3><p>A good strategy provides both a direction of travel and a high-level positioning. Before zooming in on tools and frameworks, you must start with direction. A strategy without an anchor will likely fail. A clear Vision, Purpose, Mission &amp; Values (VPMV) sets the north star. Without them, you may apply AI tactically&#8212;but miss where to play and how to win.
The board must ask: Does our VPMV still reflect AI-era realities?</p><h4><strong>Practical steps for boards</strong></h4><ul><li><p>Conduct a workshop with senior leadership: examine the Vision statement through an AI lens. Ask: if AI doubles our data speed or automates major processes, does our Vision still reflect what we seek to become?</p></li><li><p>Use AI to test assumptions. For example, take transcripts of stakeholder interviews and run natural language processing (NLP) to identify emerging values or contradictions (we&#8217;ll return to this).</p></li><li><p>Define one or two &#8216;North-Star&#8217; metrics tied to your AI ambition (see next section).</p></li><li><p>Once VPMV is refreshed, communicate widely: the board sets the tone, leadership drives.</p></li></ul><p>Map your existing Mission (&#8220;We deliver customer-centric financial services&#8221;) and ask: if we embed AI, what shifts? Perhaps you become &#8220;We deliver real-time anticipatory financial services.&#8221; Then you test stakeholder sentiment&#8212;via AI-driven sentiment analysis of customer and employee feedback&#8212;to validate whether the shift resonates.</p><h4><strong>How AI can help</strong></h4><ul><li><p>Use AI to digest interviews, customer feedback, and leadership round-tables to identify stakeholder values and tensions.</p></li><li><p>Run rapid A/B testing of different phrasings of Vision and Mission statements: show them to random internal/external audiences, measure sentiment, clarity and resonance.</p></li><li><p>Use generative AI to draft alternative Mission/Value statements, then review and refine with the board.</p></li></ul><p>That embeds AI into the core of strategic grounding &#8212; not as an afterthought.</p><h3><strong>External Analysis &#8211; Macro Environment</strong></h3><p>Strategy must start outward. What is happening in the world that could affect you? What is the likelihood and impact of each identified threat or opportunity? 
What will you do to mitigate or capitalise on each of them?</p><h4><strong>Common tools</strong></h4><ul><li><p>PESTLE (Political, Economic, Social, Technological, Legal, Environmental) analysis</p></li><li><p>Impact/Certainty matrix (which trends are high-impact &amp; high-certainty)</p></li><li><p>Horizon-scanning</p></li><li><p>Scenario planning (extremes plus central cases)</p></li><li><p>Country attractiveness when operating internationally</p></li></ul><h4><strong>How AI could add value</strong></h4><ul><li><p>Use AI tools to automate continuous horizon scanning: monitor news, regulations, geopolitical signals, and industry commentary to flag signs of change. Deep-research tools can be excellent for this, doing in a few minutes what would take a consultant weeks, at a fraction of the cost.</p></li><li><p>Detect strategic inflexion points: e.g., a regulatory shift in AI governance, or a new entrant disrupting with a new business model.</p></li><li><p>Use simulation or generative modelling to test different macro-scenarios and their impact on your business.</p></li><li><p>Quick &#8220;what-if&#8221; models: if interest rates rise and AI adoption doubles in X segment, what&#8217;s the effect on your cost base or competitor speed?</p></li></ul><h4><strong>Example anecdote</strong></h4><p>A major infrastructure services company used an AI-driven platform to scan regulatory filings, patent filings, and global news feeds. It flagged a shift in Chinese infrastructure financing 12 months ahead of its internal PESTLE review. The board then scheduled a scenario planning day and reallocated strategy time to that region.</p><h4><strong>Why boards should care</strong></h4><p>Often, boards receive a static macro environment slide deck once a year. AI enables living insight.
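</p><p>The horizon-scanning loop described above can start remarkably simply. The sketch below is a hypothetical illustration (the themes, keywords, and headlines are invented, not drawn from any real monitoring platform); a production system would use language models rather than keyword lists, but the flagging logic is the same.</p>

```python
# Hypothetical horizon-scanning sketch: flag headlines that touch
# strategic "signal" themes a board has asked to monitor.
SIGNALS = {
    "regulation": ["ai act", "regulator", "compliance deadline"],
    "new entrant": ["startup raises", "launches platform"],
    "geopolitics": ["export controls", "sanctions", "trade dispute"],
}

def scan(headlines):
    """Group headlines under the signal themes they mention."""
    hits = {theme: [] for theme in SIGNALS}
    for headline in headlines:
        text = headline.lower()
        for theme, keywords in SIGNALS.items():
            if any(k in text for k in keywords):
                hits[theme].append(headline)
    # Keep only themes that actually fired, for a concise board digest.
    return {theme: found for theme, found in hits.items() if found}

flags = scan([
    "EU AI Act compliance deadline brought forward",
    "Fintech startup raises $200m to automate underwriting",
    "Quarterly earnings in line with expectations",
])
```

<p>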
For non-executive directors, the questions become:</p><ul><li><p>Are we asking the right questions of management around these signals?</p></li><li><p>Do we require updates on horizon-scanning and scenario shifts at each board meeting?</p></li><li><p>Are we prepared for both opportunity and risk at the macro level?</p></li></ul><h3><strong>External Analysis &#8211; Industry Dynamics</strong></h3><p>Zoom in: what&#8217;s happening in your specific industry? Who are the main competitors, suppliers, and customers? What changes are coming?</p><h4><strong>Common tools</strong></h4><ul><li><p>Porter&#8217;s Five Forces analysis</p></li><li><p>Technology S-Curve (where is the adoption tipping point?)</p></li><li><p>Ecosystem mapping (partners, competitors, new entrants, regulators)</p></li></ul><h4><strong>How AI could add value</strong></h4><ul><li><p>AI can speed up competitor and industry analysis by scraping public filings, news feeds, and patent databases, and by building competitor profiles.</p></li><li><p>Predict future industry attractiveness: AI can model adoption curves, supply-chain shifts, and customer behaviour trends.</p></li><li><p>Analyse market/customer behaviour: apply AI to internal and external customer data to spot shifts earlier.</p></li><li><p>Strategic &#8220;game-theory&#8221; modelling: what likely moves will our competitors make if they embed AI?</p></li></ul><h4><strong>Example</strong></h4><p>An insurance board asked: &#8220;What happens if a fintech start-up uses AI to undercut us on risk assessment and pricing?&#8221; Using AI competitor mapping and scenario modelling, they estimated a 15-per-cent erosion of premium income within 36 months if they did nothing. 
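</p><p>The arithmetic behind an estimate like that can be sketched in a few lines. This is a hypothetical illustration with invented figures, not the insurer&#8217;s actual model: it simply compounds an assumed annual loss of renewals to an AI-priced rival.</p>

```python
# Hypothetical scenario arithmetic: premium income remaining if a rival's
# AI-driven pricing wins an assumed share of renewals each year.
def eroded_premium(base, annual_loss_rate, years):
    """Premium left after `years` of compounding share loss."""
    return base * (1 - annual_loss_rate) ** years

base = 100.0  # premium income, indexed to 100
remaining = eroded_premium(base, annual_loss_rate=0.053, years=3)
erosion_pct = 100.0 - remaining  # roughly 15% over 36 months
```

<p>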
They then accelerated their AI-embedded product launch accordingly.</p><h4><strong>Board considerations</strong></h4><ul><li><p>Are we adjusting our Five Forces analysis to include AI as a force (e.g., AI lowering switching costs and creating new-entrant threats)?</p></li><li><p>Do we have metrics to monitor our position on the S-Curve of AI adoption in our industry?</p></li><li><p>Does our ecosystem map include new nodes enabled by AI (data partners, ecosystem incumbents, regulatory technology)?</p></li></ul><h3><strong>Internal Analysis &#8211; Core Capabilities</strong></h3><p>Having looked outside, we now look inside at ourselves. What do we have and what do we need? What are our key resources and capabilities that give us our unique competitive advantage?</p><h4><strong>Common tools</strong></h4><ul><li><p>Value Chain analysis</p></li><li><p>Core-competency identification</p></li><li><p>VRIO/VRIN framework (Value, Rarity, Imitability, Organisation)</p></li><li><p>SWOT (Strengths, Weaknesses, Opportunities, Threats)</p></li></ul><h4><strong>How AI could add value</strong></h4><ul><li><p>Automate or augment processes in your value chain: e.g., AI in procurement, logistics, marketing, and service.</p></li><li><p>Use AI models and proprietary data as strategic resources that competitors cannot easily replicate.</p></li><li><p>Use AI to analyse your internal capability gaps: for example, talent, data architecture, governance, culture.</p></li><li><p>Real-time dashboards using AI to monitor internal performance, identify bottlenecks and run early-warning indicators.</p></li></ul><h4><strong>Example</strong></h4><p>A retail business used AI to map its end-to-end value chain, overlaying internal data flows and external customer touchpoints. It was discovered that its data capture in the last-mile delivery segment was weak, making targeted AI-enabled personalisation impossible. 
The board mandated investment in data capture, which enabled a differentiator in customer loyalty.</p><h4><strong>Board questions</strong></h4><ul><li><p>What parts of our value chain are subject to automation or augmentation via AI, and what does that mean for our business model?</p></li><li><p>Do we treat our data, AI models, and workflows as strategic assets (i.e., VRIO)?</p></li><li><p>How well are we organised for AI: talent, infrastructure, governance, culture?</p></li><li><p>Are we monitoring internal capability gaps and addressing them proactively?</p></li></ul><h3><strong>Strategy Formulation &amp; Competitive Advantage</strong></h3><p>This is where it all comes together. Having analysed vision, the external and internal environment, you can now choose where to play and how to win&#8212;and build competitive advantage.</p><h4><strong>Common tools</strong></h4><ul><li><p>Generic strategies (cost leadership, differentiation, focus)</p></li><li><p>Where to play / How to win frameworks</p></li><li><p>Portfolio analysis / BCG Matrix</p></li><li><p>Ansoff Matrix / McKinsey 3&#8209;Horizons Framework</p></li><li><p>Blue Ocean Strategy</p></li><li><p>Game-theory modelling</p></li></ul><h4><strong>How AI could add value</strong></h4><ul><li><p>You can build AI features into products and services: making the offering smarter, more adaptive, more valuable.</p></li><li><p>Personalise customer interactions at scale: using AI to tailor experiences, offers, and service.</p></li><li><p>Use AI to simulate &#8220;what-if&#8221; strategies: test different business models, pricing, customer segments, and service levels rapidly.</p></li><li><p>Automate insights from customer behaviour and generate novel strategic options: AI can surface patterns humans miss.</p></li></ul><h4><strong>Example</strong></h4><p>A healthcare company asked: &#8220;Should we enter diagnostics as a service using AI?&#8221; They used scenario modelling with AI to evaluate market sizes, reimbursement 
risks, and ecosystem partners. They then decided to adopt a &#8220;focus-plus-differentiation&#8221; strategy: target a niche segment (chronic disease diagnostics), use AI for predictive detection, and partner with digital health platforms. The board reviewed the models, agreed on the strategy, and allocated resources accordingly.</p><h4><strong>Key for boards</strong></h4><ul><li><p>Make the hard choices: where will we play, how will we win? AI supports, but does not replace, judgment.</p></li><li><p>Ensure resource allocation aligns: AI investments must map to strategic choices and the business model.</p></li><li><p>Monitor strategic options as dynamic: AI changes can shift &#8220;how to win&#8221; faster than in past eras.</p></li></ul><h3><strong>Strategy Implementation &amp; Execution</strong></h3><p>A strategy is only as good as its implementation. Boards must ensure execution, not just ambition.</p><h4><strong>Common tools</strong></h4><ul><li><p>OKRs (Objectives &amp; Key Results)</p></li><li><p>Hoshin Kanri / Traction / Cascaded goals</p></li><li><p>McKinsey 7&#8209;S Framework</p></li><li><p>Balanced Scorecard</p></li></ul><h4><strong>How AI could add value</strong></h4><ul><li><p>AI can support the creation of objectives and goals by using historical data and scenario modelling to set realistic yet stretching OKRs.</p></li><li><p>AI can automate risk and compliance audits: freeing leadership to focus on value, while governance remains robust.</p></li><li><p>Real-time monitoring and dashboards: AI-driven tools alert boards or executives when strategy drift occurs.</p></li><li><p>Continuous feedback loops: AI models learn from execution data and refine strategic metrics or goals.</p></li></ul><h4><strong>Example</strong></h4><p>A manufacturing firm used AI to monitor the execution of a cost-reduction strategy. The board set OKRs focused on efficiency, customer lead time, and service level.
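</p><p>Execution monitoring of this kind often reduces to a rolling-baseline alert. The following is a minimal hypothetical sketch with invented cycle-time figures, not the firm&#8217;s actual platform:</p>

```python
# Hypothetical sketch: flag readings that stray beyond a tolerance band
# around the rolling mean of the previous few readings.
def deviation_alerts(readings, window=5, tolerance=0.10):
    """Return (index, value, baseline) for out-of-band readings."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = sum(readings[i - window:i]) / window
        if abs(readings[i] - baseline) > tolerance * baseline:
            alerts.append((i, readings[i], baseline))
    return alerts

# Invented line cycle times (minutes): stable, then a sudden slowdown.
cycle_times = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 12.5, 10.0]
alerts = deviation_alerts(cycle_times)
```

<p>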
The AI platform tracked thousands of sensor data points and flagged when a production line was deviating. This allowed leadership to intervene early rather than wait for the quarterly report.</p><h4><strong>Board checklist</strong></h4><ul><li><p>Are strategy implementation metrics aligned with AI-capable monitoring?</p></li><li><p>Does our governance oversight include dashboards that use AI-driven signals for early warning?</p></li><li><p>Do we have a clear cascading of goals (board &#8594; executive &#8594; function) with AI-aware measures?</p></li><li><p>Is our organisation structured (7-S) to embed AI: shared values, skills, systems, style, staff, structure, strategy?</p></li></ul><h3><strong>Risk, Governance &amp; Ethical Dimensions</strong></h3><p>This thread runs across all the stages above. Boards cannot ignore it. Research shows that firms with strong governance around AI build greater trust and can convert it into a competitive advantage. For example, according to EY, companies that embed responsible AI practices gain a premium in stakeholder trust and long-term value.</p><p>IBM asserts that responsible AI is a differentiator in competitive markets: it strengthens brand, attracts talent, and retains customers.</p><h4><strong>Practical board mechanisms</strong></h4><ul><li><p>Establish an AI oversight committee (or designate a board sub-committee) linking strategy, risk, ethics, and technology.</p></li><li><p>Set AI literacy expectations: directors need basic knowledge of AI capabilities, risks, and architecture&#8212;a prerequisite for governing AI.</p></li><li><p>Embed ethics, fairness, and transparency metrics into the balanced scorecard.</p></li><li><p>Require scenario testing of AI failures/risk events (e.g., data bias, model drift, regulatory shock).</p></li><li><p>Link AI strategy to enterprise risk management (ERM): AI is not separate; it&#8217;s at the heart of many risks (operational, reputational, strategic).</p></li></ul><div
class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Sajz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6d92b88-c5e0-4167-bce2-49875d3bc7f0_1806x1130.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Sajz!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6d92b88-c5e0-4167-bce2-49875d3bc7f0_1806x1130.png 424w, https://substackcdn.com/image/fetch/$s_!Sajz!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6d92b88-c5e0-4167-bce2-49875d3bc7f0_1806x1130.png 848w, https://substackcdn.com/image/fetch/$s_!Sajz!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6d92b88-c5e0-4167-bce2-49875d3bc7f0_1806x1130.png 1272w, https://substackcdn.com/image/fetch/$s_!Sajz!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6d92b88-c5e0-4167-bce2-49875d3bc7f0_1806x1130.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Sajz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6d92b88-c5e0-4167-bce2-49875d3bc7f0_1806x1130.png" width="1456" height="911" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f6d92b88-c5e0-4167-bce2-49875d3bc7f0_1806x1130.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:911,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:530533,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.aiintheboardroom.com/i/179059107?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6d92b88-c5e0-4167-bce2-49875d3bc7f0_1806x1130.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Sajz!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6d92b88-c5e0-4167-bce2-49875d3bc7f0_1806x1130.png 424w, https://substackcdn.com/image/fetch/$s_!Sajz!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6d92b88-c5e0-4167-bce2-49875d3bc7f0_1806x1130.png 848w, https://substackcdn.com/image/fetch/$s_!Sajz!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6d92b88-c5e0-4167-bce2-49875d3bc7f0_1806x1130.png 1272w, https://substackcdn.com/image/fetch/$s_!Sajz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6d92b88-c5e0-4167-bce2-49875d3bc7f0_1806x1130.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h3><strong>Actionable Tips for Boards &amp; Executives</strong></h3><ul><li><p><strong>Start with a one-pager</strong>. At your next board meeting, ask: &#8220;How does AI affect each element of our Vision &amp; Values?&#8221; Use this as a catalyst for deeper reflection.</p></li><li><p><strong>Require an AI-strategy briefing</strong>. Ensure management presents how AI will support strategy formulation and execution&#8212;not just &#8220;pilot projects&#8221;.</p></li><li><p><strong>Ask for a capability heat-map</strong>. Request from leadership a map of AI readiness across people, data, models, infrastructure, and culture.</p></li><li><p><strong>Set a board dashboard</strong>.
Insist on a simple, high-level AI-strategy dashboard: adoption metrics, value delivered, risk indicators, model drift alerts.</p></li><li><p><strong>Embed scenario planning</strong>. Make AI-driven scenario modelling part of your regular strategic review cycle. Choose at least one &#8220;wild card&#8221; scenario (e.g., regulatory shock in AI, competitor uses AI to halve cost).</p></li><li><p><strong>Focus on value, not hype</strong>. Don&#8217;t let AI become a distraction from core strategy. Ask: &#8220;What business value will this deliver? How will we monetise it or build differentiation?&#8221;</p></li><li><p><strong>Monitor sustainability</strong>. Competitive advantage from AI can be fleeting unless you embed systems, data, culture, and governance. Ask: &#8220;Can our competitors copy this within 18-24 months?&#8221; If yes, you need a new edge.</p></li></ul><h3><strong>Summary</strong></h3><p>Boards and senior leaders must treat AI as a strategic instrument, not an optional tool.</p><p>You begin with Vision, Purpose, Mission &amp; Values, then map the macro and industry context, assess internal capabilities, formulate strategy, and execute it&#8212;all with AI in mind.</p><p>From horizon scanning to internal dashboards, from scenario modelling to value chain automation, AI offers both opportunities and risks.</p><p>Your job as a director or executive: ask the right questions, insist on clarity of value, allocate resources, monitor execution and governance, and ensure that AI becomes a sustainable competitive advantage, not just noise.</p><p><strong>&#128073;&#127997; Download a handy <a href="https://drive.google.com/file/d/1QF16BHoGcoax8TGXbeZ8wlTy-X9L_MZr/view?usp=drive_link">AI for Strategy &amp; Competitive Advantage One-Pager.</a></strong></p><p><strong>&#128073;&#127997; If you found this useful, subscribe to AI in the Boardroom. 
You&#8217;ll receive future deep-dives packed with actionable insights on AI strategy, governance, transformation and board oversight. Let&#8217;s make AI a source of strength&#8212;not risk&#8212;for your organisation.</strong></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aiintheboardroom.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI in the Boardroom! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The Ultimate AI Glossary for Directors & Leaders]]></title><description><![CDATA[Edition 5 - 200+ AI terms every director and executive should understand &#8212; explained simply, without the hype.]]></description><link>https://www.aiintheboardroom.com/p/the-ultimate-ai-glossary-for-directors</link><guid isPermaLink="false">https://www.aiintheboardroom.com/p/the-ultimate-ai-glossary-for-directors</guid><dc:creator><![CDATA[Karim Harbott]]></dc:creator><pubDate>Fri, 07 Nov 2025 09:15:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!7HSQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F358940b4-2639-4013-9c03-2f08ac340874_4550x3275.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!7HSQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F358940b4-2639-4013-9c03-2f08ac340874_4550x3275.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7HSQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F358940b4-2639-4013-9c03-2f08ac340874_4550x3275.png 424w, https://substackcdn.com/image/fetch/$s_!7HSQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F358940b4-2639-4013-9c03-2f08ac340874_4550x3275.png 848w, https://substackcdn.com/image/fetch/$s_!7HSQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F358940b4-2639-4013-9c03-2f08ac340874_4550x3275.png 1272w, https://substackcdn.com/image/fetch/$s_!7HSQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F358940b4-2639-4013-9c03-2f08ac340874_4550x3275.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7HSQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F358940b4-2639-4013-9c03-2f08ac340874_4550x3275.png" width="1456" height="1048" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/358940b4-2639-4013-9c03-2f08ac340874_4550x3275.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1048,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:13477142,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aiintheboardroom.com/i/178254095?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F358940b4-2639-4013-9c03-2f08ac340874_4550x3275.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7HSQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F358940b4-2639-4013-9c03-2f08ac340874_4550x3275.png 424w, https://substackcdn.com/image/fetch/$s_!7HSQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F358940b4-2639-4013-9c03-2f08ac340874_4550x3275.png 848w, https://substackcdn.com/image/fetch/$s_!7HSQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F358940b4-2639-4013-9c03-2f08ac340874_4550x3275.png 1272w, https://substackcdn.com/image/fetch/$s_!7HSQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F358940b4-2639-4013-9c03-2f08ac340874_4550x3275.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Whether we like it or not, directors need to face a simple fact: <strong>artificial intelligence, particularly its governance, is now firmly a board issue</strong>. Many directors know it, but few feel confident talking about it, let alone exercising leadership.</p><p>I see it in my advisory work every week &#8212; people nodding along to terms like foundation models, RAG pipelines, or AI governance frameworks, while quietly wondering what they really mean. It&#8217;s understandable given the speed of change.
The language of AI is evolving faster than most organisations and directors can adapt.</p><p>That&#8217;s why I created <strong><a href="https://www.aiintheboardroom.com/p/ai-glossary">The Ultimate AI Glossary for Directors &amp; Leaders</a></strong> &#8212; a single reference for leaders who need clarity, not hype or jargon.</p><h3><strong>Why it matters</strong></h3><p>Boards are being asked to sign off AI strategies, risk frameworks, and compliance plans. But without a shared vocabulary, conversations stall, misunderstandings creep in, and risk oversight weakens.</p><p>This glossary aims to fix that. It covers over 200 key terms &#8212; from algorithmic bias to zero-shot learning &#8212; all explained in plain English, written for directors, not data scientists. Every definition has been checked for accuracy and aligned with current UK, EU, and OECD guidance.</p><h3><strong>How to use it</strong></h3><p>Use it before your next strategy session, risk review, or AI presentation. Keep it open when reviewing proposals or supplier claims. It will help you ask sharper questions, challenge assumptions, and make better decisions.</p><p>AI governance isn&#8217;t about knowing how to code.
It&#8217;s about knowing what to ask, what to approve, and what to challenge.</p><h3><strong>Where to get it</strong></h3><p>To access <strong>The Ultimate AI Glossary for Directors &amp; Leaders</strong>, you may either <a href="https://www.aiintheboardroom.com/p/ai-glossary">view an online version</a> or <a href="https://drive.google.com/file/d/1eUQeUoDJUSrBks-x4Enjd-voqlcUJS4A/view?usp=sharing">download a handy PDF version</a> to view offline.</p><p>And if you want regular, board-level insights on AI strategy, governance, and transformation, subscribe to my newsletter AI in the Boardroom.</p><p>Every week, I share practical tools and insights to help boards turn AI disruption into strategic advantage &#8212; safely, ethically, and profitably.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aiintheboardroom.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aiintheboardroom.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Breakdown of the OECD’s 'Principles for Trustworthy AI']]></title><description><![CDATA[Edition 4 - Why every board should know these five principles: and act on them]]></description><link>https://www.aiintheboardroom.com/p/breakdown-of-the-oecds-principles</link><guid isPermaLink="false">https://www.aiintheboardroom.com/p/breakdown-of-the-oecds-principles</guid><dc:creator><![CDATA[Karim Harbott]]></dc:creator><pubDate>Tue, 28 Oct 2025 12:30:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!WdmH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fd38bc-9a72-4343-aad1-e1a3f98c4f24_4550x3275.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 
is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!WdmH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fd38bc-9a72-4343-aad1-e1a3f98c4f24_4550x3275.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!WdmH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fd38bc-9a72-4343-aad1-e1a3f98c4f24_4550x3275.png 424w, https://substackcdn.com/image/fetch/$s_!WdmH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fd38bc-9a72-4343-aad1-e1a3f98c4f24_4550x3275.png 848w, https://substackcdn.com/image/fetch/$s_!WdmH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fd38bc-9a72-4343-aad1-e1a3f98c4f24_4550x3275.png 1272w, https://substackcdn.com/image/fetch/$s_!WdmH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fd38bc-9a72-4343-aad1-e1a3f98c4f24_4550x3275.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!WdmH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fd38bc-9a72-4343-aad1-e1a3f98c4f24_4550x3275.png" width="1456" height="1048" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/24fd38bc-9a72-4343-aad1-e1a3f98c4f24_4550x3275.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1048,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:11450656,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aiintheboardroom.com/i/177362591?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fd38bc-9a72-4343-aad1-e1a3f98c4f24_4550x3275.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!WdmH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fd38bc-9a72-4343-aad1-e1a3f98c4f24_4550x3275.png 424w, https://substackcdn.com/image/fetch/$s_!WdmH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fd38bc-9a72-4343-aad1-e1a3f98c4f24_4550x3275.png 848w, https://substackcdn.com/image/fetch/$s_!WdmH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fd38bc-9a72-4343-aad1-e1a3f98c4f24_4550x3275.png 1272w, https://substackcdn.com/image/fetch/$s_!WdmH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24fd38bc-9a72-4343-aad1-e1a3f98c4f24_4550x3275.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>AI now consistently appears in board packs, strategy sessions, and risk registers. Directors are being asked to approve AI investments, manage regulatory obligations, and protect their organisations from reputational harm. Yet many still don&#8217;t know what &#8220;responsible AI&#8221; means in practice.</p><p>One place to start is with the <strong><a href="https://oecd.ai/en/ai-principles">OECD&#8217;s Principles on Artificial Intelligence</a></strong>. These were the world&#8217;s first intergovernmental AI standards, adopted in 2019 by 46 countries, including the UK, the US, and EU members.
They <strong>underpin the EU AI Act</strong> and are shaping regulations worldwide.</p><p>At their core are the <strong>five principles for responsible stewardship of trustworthy AI</strong>. They set out what &#8220;good AI&#8221; looks like&#8212;not for engineers, but for leaders. Boards that ignore them are taking unnecessary risks.</p><p>Let&#8217;s break them down.</p><h3><strong>1. Inclusive growth, sustainable development, and well-being</strong></h3><p>This principle is about making sure AI drives prosperity for people and the planet&#8212;not just profit. Trustworthy AI can support inclusive growth, sustainable development, and the <strong><a href="https://sdgs.un.org/goals">UN Sustainable Development Goals</a></strong> across areas such as education, health, transport, agriculture, and the environment.</p><p>Boards must also recognise the risks. AI can widen inequalities if access is uneven or if systems reinforce existing biases. Vulnerable groups&#8212;minorities, women, children, older people, and low-skilled workers&#8212;are especially exposed.
These risks are even greater in low- and middle-income countries.</p><p>Responsible stewardship means guiding AI to reduce, not amplify, these divides. It requires clear safeguards, cross-sector collaboration, and open public dialogue. The aim is to use AI to empower all members of society, build trust, and create outcomes that benefit everyone.</p><p><strong>Example</strong>: When UPS introduced AI-driven route optimisation (ORION), it cut fuel costs and carbon emissions. Shareholders benefited, but so did drivers and the environment. Compare that with Amazon&#8217;s failed AI recruitment tool, which downgraded CVs from women. It created reputational harm and regulatory exposure.</p><p><strong>For directors</strong>:</p><ul><li><p>Ask if your AI projects create value beyond short-term profit.</p></li><li><p>Challenge management to explain the human impact.</p></li><li><p>Consider sustainability and workforce implications as part of every AI business case.</p></li><li><p>Consider the implications of increased energy use on ESG policies.</p></li></ul><h3><strong>2. Respect for the rule of law, human rights and democratic values, including fairness and privacy</strong></h3><p>This principle states that AI must be built on human-centred values: freedom, fairness, equality, the rule of law, privacy, and consumer rights.</p><p>Poorly designed systems risk infringing human rights, whether by accident or intent. Boards should ensure AI is &#8220;values-aligned&#8221;, with safeguards, human oversight, and the ability to intervene when needed. Doing so keeps AI behaviour consistent with democratic values, reduces discrimination, and builds public trust.</p><p>Tools like human rights impact assessments, ethical codes, and quality certifications can embed fairness and accountability into AI development and use.</p><p><strong>Example</strong>: In the US, a global bank had to scrap its AI CV screening system after it amplified gender bias. Regulators and the media took notice. 
In healthcare, several NHS trusts are piloting AI diagnostic tools. These raise a critical question: are they equally accurate for all patient groups?</p><p><strong>For directors</strong>:</p><ul><li><p>Insist on bias audits before systems are deployed.</p></li><li><p>Demand clear evidence that AI decisions can be explained in plain English.</p></li><li><p>Hold management to account for fairness metrics, not just financial ones.</p></li></ul><h3><strong>3. Transparency and explainability</strong></h3><p>Transparency means people should know when they&#8217;re dealing with AI&#8212;whether it&#8217;s a chatbot, a recommendation, or a decision. It also means providing meaningful information about how a system was built, trained, and deployed so that users can make informed choices. Transparency does not mean handing over source code or proprietary data, which is often unnecessary or impractical.</p><p>Explainability goes further. People affected by AI decisions should be able to understand the primary factors and logic behind an outcome, and challenge it if needed. The level of detail depends on the context. High-stakes decisions demand clarity; low-risk interactions may need less.</p><p>Boards should note the trade-offs: explainability can reduce accuracy and add costs, but without it, trust and accountability collapse. The goal is clear communication: simple, accessible explanations that respect privacy, protect IP, and still allow scrutiny.</p><p><strong>Example</strong>: Credit scoring algorithms are notorious for opacity. The EU has already fined firms for failing to explain automated decisions to consumers.
Boards that sign off on opaque systems are exposing themselves to legal and reputational risk.</p><p><strong>For directors</strong>:</p><ul><li><p>Require management to present AI systems in language that non-technical directors can understand.</p></li><li><p>Test explainability yourself&#8212;if you can&#8217;t explain a decision to a regulator, you shouldn&#8217;t approve the system.</p></li><li><p>Ensure your risk committee tracks AI transparency as a standing item.</p></li></ul><h3><strong>4. Robustness, security, and safety</strong></h3><p>AI systems must not pose unreasonable safety risks in everyday use, foreseeable misuse, or over their whole lifecycle. Existing consumer protection laws already define many of these risks, and governments are deciding how they apply to AI.</p><p>Two tools stand out:</p><ul><li><p><strong>Traceability</strong>: keeping records of data sources, cleaning, and processes so outcomes can be analysed, mistakes corrected, and accountability strengthened.</p></li><li><p><strong>Risk management</strong>: applying structured methods to identify, assess, and mitigate risks, from bias and privacy breaches to digital security threats, at every stage of the AI lifecycle. Different uses demand different levels of protection.</p></li></ul><p>For boards, the message is clear: without robust systems, secure design, and continuous risk management, AI becomes a liability rather than an asset.</p><p><strong>Example</strong>: Self-driving cars highlight the stakes. A software glitch or adversarial attack can cost lives. 
In an enterprise, a single faulty algorithm can wipe millions from market value in minutes, as Knight Capital&#8217;s 2012 trading debacle proved (it was not AI, but the lesson is clear).</p><p><strong>For directors</strong>:</p><ul><li><p>Ask if AI models have been stress-tested.</p></li><li><p>Confirm cyber teams are included in AI governance from day one.</p></li><li><p>Demand incident response plans for AI failures&#8212;just as you would for data breaches.</p></li></ul><h3><strong>5. Accountability</strong></h3><p>While responsibility, liability, and accountability overlap, accountability is the most relevant for AI. It means organisations and individuals must ensure AI systems function properly throughout their lifecycle and can show how and why decisions were made.</p><p>It is not just about blame after something goes wrong. It is about taking ownership, documenting key decisions, enabling audits, and showing regulators, customers, and stakeholders that governance is in place.</p><p>For boards, the message is simple: you cannot outsource accountability to an algorithm or a vendor. The buck stops with leadership.</p><p><strong>Example</strong>: The Dutch childcare benefits scandal starkly illustrates this. An AI system falsely accused thousands of families of fraud. The fallout forced the government to resign. The lesson: accountability is non-negotiable.</p><p><strong>For directors</strong>:</p><ul><li><p>Ensure clear ownership of AI risks across the organisation.</p></li><li><p>Mandate board-level oversight&#8212;whether through the audit, risk, or ethics committee.</p></li><li><p>Include accountability in vendor and partner contracts.</p></li></ul><h3><strong>Why this matters for boards</strong></h3><p>These five principles are not abstract ideals. They are becoming the baseline for regulation.
The OECD framework influenced the EU AI Act, the US AI Bill of Rights, and the UK&#8217;s AI White Paper.</p><p>For boards, adopting them now means:</p><ul><li><p><strong>Strategic clarity</strong>: AI projects link to sustainable value creation.</p></li><li><p><strong>Risk protection</strong>: You stay ahead of regulatory and reputational risks.</p></li><li><p><strong>Trust building</strong>: Customers, employees, and investors see AI as responsible rather than reckless.</p></li></ul><p>The question is not whether you adopt these principles. It is whether you do so before regulators or the public force your hand.</p><h3><strong>Takeaway for directors</strong></h3><p>When AI is on your board agenda, test proposals against these five principles. If they don&#8217;t measure up, push back.</p><ul><li><p>Does it benefit more than just shareholders?</p></li><li><p>Does it respect fairness and human rights?</p></li><li><p>Can you explain it to a regulator or journalist?</p></li><li><p>Has it been stress-tested and secured?</p></li><li><p>Is there clear accountability?</p></li></ul><p>If you can&#8217;t answer &#8220;yes&#8221; with confidence, you have work to do.</p><h3><strong>Final thought</strong></h3><p>The OECD AI Principles give boards a practical framework for responsible stewardship. They are not a compliance burden. They are a guide to turning AI into a strategic advantage, while avoiding the pitfalls that have already cost others dearly.</p><p>If your board is serious about AI, these five principles should be on your agenda.</p><p><strong>&#128073;&#127997; If you found this helpful, subscribe to </strong><em><strong>AI in the Boardroom</strong></em><strong>. 
Each week, I share practical insights to help directors and executives turn AI disruption into opportunity&#8212;while staying safe, ethical, and compliant.</strong></p>]]></content:encoded></item><item><title><![CDATA[Breakdown of the IoD&#8217;s 'AI Governance in the Boardroom']]></title><description><![CDATA[Edition #3 - Why every director needs to pay attention to the IoD&#8217;s new 12 principles for AI governance]]></description><link>https://www.aiintheboardroom.com/p/breakdown-of-the-iods-ai-governance</link><guid isPermaLink="false">https://www.aiintheboardroom.com/p/breakdown-of-the-iods-ai-governance</guid><dc:creator><![CDATA[Karim Harbott]]></dc:creator><pubDate>Wed, 22 Oct 2025 07:45:41 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!2j1-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab392bd-12dc-476f-be91-c07d8c3a4f65_4550x3275.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank"
href="https://substackcdn.com/image/fetch/$s_!2j1-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab392bd-12dc-476f-be91-c07d8c3a4f65_4550x3275.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2j1-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab392bd-12dc-476f-be91-c07d8c3a4f65_4550x3275.png 424w, https://substackcdn.com/image/fetch/$s_!2j1-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab392bd-12dc-476f-be91-c07d8c3a4f65_4550x3275.png 848w, https://substackcdn.com/image/fetch/$s_!2j1-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab392bd-12dc-476f-be91-c07d8c3a4f65_4550x3275.png 1272w, https://substackcdn.com/image/fetch/$s_!2j1-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab392bd-12dc-476f-be91-c07d8c3a4f65_4550x3275.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2j1-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab392bd-12dc-476f-be91-c07d8c3a4f65_4550x3275.png" width="1456" height="1048" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6ab392bd-12dc-476f-be91-c07d8c3a4f65_4550x3275.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1048,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:11666082,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aiintheboardroom.com/i/176778684?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab392bd-12dc-476f-be91-c07d8c3a4f65_4550x3275.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!2j1-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab392bd-12dc-476f-be91-c07d8c3a4f65_4550x3275.png 424w, https://substackcdn.com/image/fetch/$s_!2j1-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab392bd-12dc-476f-be91-c07d8c3a4f65_4550x3275.png 848w, https://substackcdn.com/image/fetch/$s_!2j1-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab392bd-12dc-476f-be91-c07d8c3a4f65_4550x3275.png 1272w, https://substackcdn.com/image/fetch/$s_!2j1-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab392bd-12dc-476f-be91-c07d8c3a4f65_4550x3275.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>As we have seen in <a href="https://www.aiintheboardroom.com/p/five-questions-every-board-should">previous posts</a>, AI is no longer an IT side project. It is strategy. It is risk. It is reputation. It is also regulation - arriving faster than many boards think. The new business paper from the Institute of Directors (IoD), AI Governance in the Boardroom, gives directors a practical framework: 12 principles to govern AI use sensibly and safely. Treat it like a board pack, not a blog. Read it, adopt it, and assign owners.</p><p>This article breaks down the paper, showing its connection to current regulations and providing board-level actions. It also provides credible case examples, enabling you to brief your colleagues with confidence. 
Where possible, I&#8217;ve linked to primary sources.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aiintheboardroom.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading <em><strong>AI in the Boardroom</strong></em> &#128591;&#127997; Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><h3>What&#8217;s new - and why it matters</h3><p>The IoD paper updates the 2023 guidance, incorporating the reality of today&#8217;s AI: wider use, increased risk, and tighter expectations from regulators, investors, and customers. It also includes results from an IoD Policy Voice survey of ~700 directors and business leaders: two-thirds use AI personally; half say their organisations use AI; a quarter lack any AI policy or governance. That gap is where fines, headlines, and value erosion live.</p><p>The EU AI Act entered into force on 1 August 2024, with staged obligations through 2025&#8211;2027. Boards operating in or selling into the EU will need to understand the risk-based model, new obligations for deployers, and the timeline for compliance.</p><p>The UK is taking a regulator-led approach. As such, there is no single AI Act. 
Expect sector regulators (FCA, ICO, CMA, Ofcom, MHRA) to tighten guidance and enforcement using existing powers, backed by the Department for Science, Innovation and Technology (DSIT), the AI Security Institute, and the Digital Regulation Cooperation Forum (DRCF). Boards should assume that expectations will continue to rise, even without a single statute.</p><p><strong>Bottom line</strong>: the governance bar is moving up. Your board can either guide this or get guided by events.</p><div><hr></div><h3>The IoD&#8217;s 12 principles - what they mean in practice</h3><p>The paper&#8217;s strength is practicality. It provides a simple, board-owned approach to guiding AI across strategy, risk, compliance, and culture. Here&#8217;s what to take into your next meeting.</p><h4><strong>1) Monitor the evolving regulatory and (geo)political environment</strong></h4><p>Map your exposure to the EU AI Act, UK regulators, and any other jurisdictions where you operate or sell. Decide who updates the board and how often. Use a simple dashboard: obligations, deadlines, status, red flags.</p><p><strong>Board question</strong>: Which rules will apply to us over the next 12 months, and are we prepared?</p><h4><strong>2) Continually audit and measure</strong></h4><p>Create one source of truth for all AI systems in use: internal, vendor, embedded in platforms, and &#8220;shadow&#8221; tools your people already use. Tie each system to an owner, purpose, data sources, risks, and controls. Align to standards where it helps. ISO/IEC 42001 is now the AI management system benchmark.</p><p><strong>Board question</strong>: Do we know where AI shows up across our technology stack and supply chain?</p><h4><strong>3) Undertake impact and risk assessments</strong></h4><p>Don&#8217;t stop at model metrics. Assess impact on employees, customers, suppliers, and communities. Determine your organisation&#8217;s risk appetite. Decide where humans must stay in the loop.
In higher-risk settings, seek independent assurance.</p><p><strong>Board question</strong>: Who might be harmed, and how would we know?</p><h4><strong>4) Establish Board accountability and management responsibilities</strong></h4><p>Name a board committee with oversight. Name an executive owner. Put AI into risk and audit cycles. Keep veto rights with the board for high-risk deployments. Communicate this to staff and investors.</p><p><strong>Board question</strong>: Who signs off on AI risks today? Is that formal?</p><h4><strong>5) Set high-level strategic goals</strong></h4><p>No &#8220;AI because everyone else is doing it.&#8221; Nor should AI be a shiny solution searching for a problem. Define a small set of plain-language goals: augment people, improve quality, speed decisions, protect customers, cut waste. Make success measurable. Keep it tied to values and ESG commitments.</p><p><strong>Board question</strong>: What &#8220;AI-shaped&#8221; problems are we solving, and how will we measure success?</p><h4><strong>6) Empower a cross-functional, operational, independent review committee</strong></h4><p>Cross-functional. Trained. Resourced. With the authority to pause or reframe projects. The role is to surface issues early, not to block innovation. Build a clear Terms of Reference and define the reporting line to the board.</p><p><strong>Board question</strong>: Can this committee actually stop a launch if needed?</p><h4><strong>7) Validate, document and secure data sources, and assess data assets</strong></h4><p>Provenance, quality, bias controls, logging, and retention. Be explicit about synthetic data. Treat decision logic like any other controlled asset: understandable and auditable. Build tripwires to detect model drift or suspicious behaviour.</p><p><strong>Board question</strong>: Where does this data come from, and who has checked it?</p><h4><strong>8) Train and upskill people</strong></h4><p>AI literacy is not optional. 
Tailor training for the board, the exec team, frontline users, and engineers. Teach people to challenge outputs and escalate concerns. Make this part of the induction. Reward good practice.</p><p><strong>Board question</strong>: Do our people know when to question the machine?</p><h4><strong>9) Comply with privacy requirements</strong></h4><p>Minimise personal data. Honour rights. Use Data Protection Impact Assessments (DPIAs) where needed. Follow your regulator&#8217;s guidance on AI and data protection.</p><p><strong>Board question</strong>: Can we evidence privacy-by-design for each AI system?</p><h4><strong>10) Comply with security-by-design requirements</strong></h4><p>Adopt a secure software development lifecycle (SSDLC) for AI, penetration test, red-team, monitor suppliers, and log incidents. Consider standards like ISO/IEC 27001 and the UK AI Cyber Security Code of Practice.</p><p><strong>Board question</strong>: Have we stress-tested our AI like we stress-test our finances?</p><h4><strong>11) Test and evaluate systems and remove from use</strong></h4><p>Pre-deployment testing is table stakes. More important is a straightforward process to pause or retire systems that drift, degrade, or cause harm. Contract for this with vendors. Keep the board&#8217;s veto alive.</p><p><strong>Board question</strong>: If this system fails, who is responsible for shutting it down, and how quickly?</p><h4><strong>12) Review systems, policies and governance practices regularly</strong></h4><p>AI governance is not &#8220;set and forget.&#8221; Plan reviews. Track KPIs. Invite independent assurance. Keep a human-in-the-loop where the stakes are high.</p><p><strong>Board question</strong>: When did we last review our AI inventory, risks, and controls?</p><div><hr></div><h3>Real-world signals directors should know</h3><p>Boards often ask for proof that AI governance matters.
Here are some real-life examples to use in the boardroom.</p><p><strong>Recruitment bias: Amazon scrapped its internal AI screening tool</strong><br>Amazon ended an experimental CV-screening system after discovering it penalised CVs that included indicators associated with women. The model had been trained on historical, male-dominated hiring data, and the project was scrapped - a lesson for everyone. This is not about one company; it is about data and governance.</p><p><strong>Hiring platforms under scrutiny: Mobley v. Workday</strong><br>In the U.S., a class-action suit alleges discriminatory screening by Workday&#8217;s tools. The court allowed key claims to proceed in July 2024, and the EEOC has argued that anti-bias laws can cover Workday as an &#8220;employment agency.&#8221; Workday disputes the allegations. Regardless of the outcome, this shows where regulators are heading and why boards must ask tough questions of vendors.</p><p><strong>NHS diagnostics: AI to speed stroke treatment</strong><br>NHS England has deployed AI in stroke pathways. Brainomix e-Stroke has been credited in government reporting with significantly cutting &#8220;door-in-door-out&#8221; times, enabling faster treatment. NHS England has also been piloting a central AI Deployment Platform, routing radiology images to approved AI tools for decision support. The key lesson is that governance, vendor assurance, and clinical oversight are essential.</p><p><strong>Cancer pathways: AI in chest X-ray and imaging networks</strong></p><p>Regional NHS alliances report using AI to accelerate chest X-ray review in suspected cancer, with published research on procurement and early deployment across 66 Trusts. This is a valuable case when discussing explainability, clinical validation, and change management at scale.</p><p>These are not &#8220;tech&#8221; stories.
They are governance stories: data, bias, assurance, safety, and accountability.</p><div><hr></div><h3>How directors can use this IoD paper</h3><p>The IoD&#8217;s guidance is not abstract. It is a direct response to cases like these and to the regulatory shift now underway. The paper offers checklists and &#8220;what boards should consider&#8221; prompts for each principle. If you only do one thing this month, fold those prompts into your next board agenda.</p><p>Three practical moves I recommend:</p><ol><li><p><strong>Stand up a one-page AI register and review it quarterly</strong><br>Maintain a comprehensive inventory that includes system, purpose, owner, training data, decision logic, risks, controls, KPIs, vendor status, and review date. Tie this to risk and audit cycles. Reference ISO/IEC 42001 where it helps you bring order and cadence.</p></li><li><p><strong>Adopt a plain-English AI policy with real guardrails</strong><br>Cover acceptable use, privacy-by-design, security-by-design, human-in-the-loop, red lines, escalation, and incident reporting. Make the policy visible to staff and vendors. Bake it into onboarding and procurement.</p></li><li><p><strong>Form (or empower) an independent review committee</strong><br>Cross-functional, empowered to pause launches, and trained to read assurance reports. Give it a clear Terms of Reference and a line to the board. Use it to normalise complex trade-off discussions before systems go live.</p></li></ol><p>And four frequently asked questions to be ready for:</p><ol><li><p><strong>&#8220;Do we really need to care if we are UK-only?&#8221;</strong><br>Yes. UK regulators are active. And your vendors may be operating under EU rules, which push obligations onto deployers through contracts and product requirements.</p></li><li><p><strong>&#8220;Can&#8217;t we just rely on the vendor&#8217;s assurance?&#8221;</strong><br>No. Vendor assurance is necessary, not sufficient.
You must validate fit-for-purpose in your context: your data, your processes, your risks, your people. Keep testing, even post-deployment.</p></li><li><p><strong>&#8220;What does &#8216;human-in-the-loop&#8217; actually mean?&#8221;</strong><br>A trained person who can understand the system&#8217;s role, challenge the output, and override it when needed. That requires process, time, and authority&#8212;not just a box tick.</p></li><li><p><strong>&#8220;Where do we start if we have nothing?&#8221;</strong><br>Begin by establishing an inventory and a basic policy. Pick one pilot system. Run a lightweight impact assessment. Prove the muscle, then scale.</p></li></ol><div><hr></div><h3>What to do before your next board meeting</h3><ul><li><p><strong>Download the IoD&#8217;s paper and my one-pager summary (see below)</strong>. Bring it to your next board meeting.</p></li><li><p><strong>Map AI use in your organisation</strong>. Don&#8217;t rely on assumptions&#8212;include third-party tools.</p></li><li><p><strong>Agree on accountability</strong>. Name a director and a committee to oversee AI.</p></li><li><p><strong>Set clear goals</strong>. Align AI projects with strategy, values, and measurable outcomes.</p></li><li><p><strong>Plan regular reviews</strong>. AI governance is a cycle, not a tick-box.</p></li></ul><div><hr></div><h3>Final thought</h3><p>Boards don&#8217;t need to be AI experts. But they must be <strong>AI-literate governors</strong>. The IoD has given directors a framework to start. 
The next move is yours.</p><p>&#128073;&#127997; Download the IoD&#8217;s <em><strong><a href="https://www.iod.com/resources/business-advice/ai-governance-in-the-boardroom/">AI Governance in the Boardroom Business Paper</a></strong></em>.</p><p>&#128073;&#127997; Download my <strong><a href="https://drive.google.com/file/d/1HY0sx9uOTka1OZ8iuMih3NQklAuSPdOf/view?usp=sharing">IoD - AI Governance in the Boardroom - Principles One-Pager</a></strong>.</p><p>And if you want regular insights on strategy, governance, and AI at the board level, subscribe to AI in the Boardroom.</p><p><strong>Note</strong>: The IoD principles are based on the original work, <em><a href="https://anekanta.co.uk/ai-governance-and-compliance/anekanta-responsible-ai-governance-framework-for-boards/">Anekanta Responsible AI Governance Framework for Boards</a></em> by Anekanta Ltd, licensed under CC BY-NC-SA 4.0.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aiintheboardroom.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI in the Boardroom!
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Five Questions Every Board Should Ask About AI in 2026]]></title><description><![CDATA[Edition #2]]></description><link>https://www.aiintheboardroom.com/p/five-questions-every-board-should</link><guid isPermaLink="false">https://www.aiintheboardroom.com/p/five-questions-every-board-should</guid><dc:creator><![CDATA[Karim Harbott]]></dc:creator><pubDate>Sat, 18 Oct 2025 13:02:30 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!i8Z7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a75986c-973b-47c6-857f-2519adf3d5b2_4550x3275.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!i8Z7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a75986c-973b-47c6-857f-2519adf3d5b2_4550x3275.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!i8Z7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a75986c-973b-47c6-857f-2519adf3d5b2_4550x3275.png 424w, https://substackcdn.com/image/fetch/$s_!i8Z7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a75986c-973b-47c6-857f-2519adf3d5b2_4550x3275.png 
848w, https://substackcdn.com/image/fetch/$s_!i8Z7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a75986c-973b-47c6-857f-2519adf3d5b2_4550x3275.png 1272w, https://substackcdn.com/image/fetch/$s_!i8Z7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a75986c-973b-47c6-857f-2519adf3d5b2_4550x3275.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!i8Z7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a75986c-973b-47c6-857f-2519adf3d5b2_4550x3275.png" width="1456" height="1048" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4a75986c-973b-47c6-857f-2519adf3d5b2_4550x3275.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1048,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:11640718,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aiintheboardroom.com/i/176452150?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a75986c-973b-47c6-857f-2519adf3d5b2_4550x3275.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!i8Z7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a75986c-973b-47c6-857f-2519adf3d5b2_4550x3275.png 424w, 
https://substackcdn.com/image/fetch/$s_!i8Z7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a75986c-973b-47c6-857f-2519adf3d5b2_4550x3275.png 848w, https://substackcdn.com/image/fetch/$s_!i8Z7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a75986c-973b-47c6-857f-2519adf3d5b2_4550x3275.png 1272w, https://substackcdn.com/image/fetch/$s_!i8Z7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a75986c-973b-47c6-857f-2519adf3d5b2_4550x3275.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Artificial intelligence adoption is accelerating across almost every industry. There is a powerful opportunity to achieve competitive advantage through increased productivity, augmented value chains, and even new value propositions. This opportunity brings risk, complex ethical dilemmas, and increased attention from regulators. For directors, the question is no longer &#8220;should we care about AI?&#8221; but &#8220;how do we govern and lead responsibly in the age of AI?&#8221;</p><p>Too many boards treat AI as a technical project to be delegated to IT. That is a mistake. The consequences of AI, be they strategic, financial, legal, or reputational, fall squarely within the remit of directors. Oversight cannot be abdicated. It is too important an issue to ignore.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aiintheboardroom.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI in the Boardroom! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>To provide effective stewardship, every board should begin with five core questions.</p><div><hr></div><h3>1. How does AI align with our business strategy and enhance our competitive advantage?</h3><p>AI is not a bolt-on.
It has the potential to automate daily tasks, augment your value chain, or reshape entire business models. Boards must ask whether management is using AI to reinforce the organisation&#8217;s unique sources of competitive advantage, or merely chasing shiny tools without strategic coherence.</p><ul><li><p>What capabilities do we have that AI can amplify (e.g. data assets, customer relationships, operational scale)?</p></li></ul><ul><li><p>Are we investing in AI in ways that truly differentiate us from competitors, or simply keeping up?</p></li></ul><ul><li><p>How will AI shift our industry&#8217;s economics in the next 3&#8211;5 years, and are we preparing for that shift?</p></li></ul><p>Boards should expect management to articulate how AI initiatives link explicitly to the business strategy, not just as experiments, but as drivers of growth, productivity, and resilience.</p><div><hr></div><h3>2. Do we have the right governance and oversight in place?</h3><p>AI introduces new risks - data privacy, bias, opacity, accountability gaps - that traditional governance structures are not built to handle. Boards need clarity on how these risks are being identified, managed, and escalated.</p><ul><li><p>Who within management is accountable for AI governance?</p></li></ul><ul><li><p>Do we have the right board-level committees (Audit, Risk, Ethics) actively monitoring AI?</p></li></ul><ul><li><p>Are our policies aligned with evolving regulation, including the EU AI Act and UK AI guidelines?</p></li></ul><ul><li><p>Do we have the expertise on the board, or through external advisors, to scrutinise management effectively?</p></li></ul><p>Without proper oversight, AI can erode trust quickly. Directors should press for clear accountability and transparent reporting.</p><div><hr></div><h3>3. What risks - ethical, legal, regulatory, and reputational - do we face?</h3><p>AI failures make headlines. 
From discriminatory recruitment algorithms to deepfake fraud, the downside risks are real and rising. Boards must treat AI risks as part of enterprise risk management, not a separate silo.</p><ul><li><p>What guardrails are in place to ensure AI systems are fair, explainable, and secure?</p></li></ul><ul><li><p>How exposed are we to regulatory non-compliance in the jurisdictions in which we operate?</p></li></ul><ul><li><p>What reputational damage could result from misuse of AI, and how prepared are we to respond?</p></li></ul><p>An organisation&#8217;s licence to operate increasingly depends on responsible AI. Boards should view this not only as a compliance obligation, but as a matter of trust and legitimacy.</p><div><hr></div><h3>4. How prepared is our organisation for transformation?</h3><p>AI transformation is not only about technology. It requires data readiness, cultural change, new skills, and re-engineered processes. Boards must probe management&#8217;s readiness honestly.</p><ul><li><p>Do we have the quality data, infrastructure, and cybersecurity foundation required?</p></li><li><p>Are we investing in up-skilling our workforce to work alongside AI?</p></li><li><p>How are we engaging employees, trade unions, and stakeholders in this transformation?</p></li><li><p>Are we addressing ethical use internally, not just in products, but in how we deploy AI with staff?</p></li></ul><p>Transformation readiness is where many organisations stumble. Boards should demand a realistic assessment of organisational capabilities, not glossy roadmaps.</p><div><hr></div><h3>5. What opportunities are we missing by moving too slowly?</h3><p>Caution is prudent, but excessive caution is dangerous. The pace of AI adoption means laggards risk losing competitive ground rapidly. Boards must balance risk management with strategic boldness. 
As a veteran of dozens of digital and agile transformations over two decades, I have witnessed first-hand the risks of an abundance of caution.</p><ul><li><p>Where are competitors already using AI to gain cost or innovation advantages?</p></li></ul><ul><li><p>Are there adjacent opportunities we could seize by moving faster?</p></li></ul><ul><li><p>What partnerships or acquisitions could accelerate our AI journey?</p></li></ul><p>Boards must encourage management to explore opportunity as rigorously as risk. Standing still is rarely a safe strategy.</p><div><hr></div><h3>Summary</h3><p>AI is no longer an emerging technology; it is an enterprise reality. Boards that fail to engage meaningfully risk strategic drift, regulatory exposure, and reputational harm.</p><p>Asking these five questions is a starting point, not an endpoint. They help boards move from curiosity about AI to competence in overseeing it.</p><p>Directors do not need to be data scientists. But they do need to ensure AI is governed, aligned to strategy, and embedded responsibly in transformation. That is a board duty no different from finance, risk, or sustainability.</p><div><hr></div><h3>Next Steps for Boards</h3><p>Download my <a href="https://drive.google.com/file/d/1raPaRQbUMggaJXtI1LpPAv3Rww8ZT_IV/view?usp=drive_link">AI Governance Board Checklist</a> (a one-page tool to structure oversight).</p><p>Let&#8217;s have a conversation about AI strategy, governance, and transformation tailored to your organisation.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aiintheboardroom.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI in the Boardroom!
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Introducing AI in the Boardroom]]></title><description><![CDATA[Edition #1 - Everything you need to know about this newsletter]]></description><link>https://www.aiintheboardroom.com/p/introducing-ai-in-the-boardroom</link><guid isPermaLink="false">https://www.aiintheboardroom.com/p/introducing-ai-in-the-boardroom</guid><dc:creator><![CDATA[Karim Harbott]]></dc:creator><pubDate>Fri, 10 Oct 2025 22:47:57 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!QX3d!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51fdb01a-fca8-41bf-b26b-58fd8f4c6597_4550x3275.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!QX3d!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51fdb01a-fca8-41bf-b26b-58fd8f4c6597_4550x3275.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!QX3d!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51fdb01a-fca8-41bf-b26b-58fd8f4c6597_4550x3275.png 424w, 
https://substackcdn.com/image/fetch/$s_!QX3d!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51fdb01a-fca8-41bf-b26b-58fd8f4c6597_4550x3275.png 848w, https://substackcdn.com/image/fetch/$s_!QX3d!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51fdb01a-fca8-41bf-b26b-58fd8f4c6597_4550x3275.png 1272w, https://substackcdn.com/image/fetch/$s_!QX3d!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51fdb01a-fca8-41bf-b26b-58fd8f4c6597_4550x3275.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!QX3d!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51fdb01a-fca8-41bf-b26b-58fd8f4c6597_4550x3275.png" width="1456" height="1048" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/51fdb01a-fca8-41bf-b26b-58fd8f4c6597_4550x3275.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1048,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:11662357,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aiintheboardroom.com/i/175841040?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51fdb01a-fca8-41bf-b26b-58fd8f4c6597_4550x3275.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!QX3d!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51fdb01a-fca8-41bf-b26b-58fd8f4c6597_4550x3275.png 424w, https://substackcdn.com/image/fetch/$s_!QX3d!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51fdb01a-fca8-41bf-b26b-58fd8f4c6597_4550x3275.png 848w, https://substackcdn.com/image/fetch/$s_!QX3d!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51fdb01a-fca8-41bf-b26b-58fd8f4c6597_4550x3275.png 1272w, https://substackcdn.com/image/fetch/$s_!QX3d!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51fdb01a-fca8-41bf-b26b-58fd8f4c6597_4550x3275.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h1>Welcome to <em>AI in the Boardroom</em></h1><p>Hi, and welcome to <em><strong>AI in the Boardroom</strong></em>, a newsletter for board directors, executives, and senior advisors who need to <strong>understand and govern artificial intelligence responsibly, strategically, and with confidence</strong>. As AI transforms industries at pace, leadership teams must ensure they are equipped to manage the ethical, legal, and strategic dimensions of this shift. This publication exists to bridge that gap.</p><p>This is a new space dedicated to one thing: <strong>helping boards and senior leaders make sense of AI.</strong> It will deliver clear, board-level insights on <strong>strategy, governance, and transformation in the age of AI.</strong></p><p>AI is no longer just a technology topic; it&#8217;s a boardroom issue. 
It touches strategy, risk, regulation, culture, and transformation. The stakes are high: <strong>get it right, and AI can reshape competitive advantage; get it wrong, and you face reputational, regulatory, and operational risk.</strong></p><p>Boards can&#8217;t afford to ignore it, nor can they delegate it to IT teams.</p><p>That&#8217;s where <em><strong>AI in the Boardroom</strong></em> comes in.</p><div><hr></div><h2>Who this newsletter is for</h2><p><em>AI in the Boardroom</em> is written for:</p><p><strong>&#128077;&#127997; Non-Executive Directors or Chairs</strong> overseeing risk and strategy<br><strong>&#128077;&#127997; C-suite executives</strong> accountable for transformation and compliance<br><strong>&#128077;&#127997; PE-backed leaders</strong> navigating disruption and governance expectations<br><strong>&#128077;&#127997; Board committee members</strong> (Audit, Risk, Governance, etc.) seeking clarity on AI<br><strong>&#128077;&#127997; Consultants</strong> advising on AI strategy and transformation<br><strong>&#128077;&#127997; Senior policymakers</strong> or regulators shaping corporate oversight</p><p>If you sit at the top table, this is for you.</p><div><hr></div><h2>What this newsletter delivers</h2><p><em><strong>AI in the Boardroom</strong></em> focuses on what directors need to know: no hype, no jargon, no technical rabbit holes.</p><p>Subscribers will receive:</p><p>&#9989; <strong>Concise board briefings twice a month</strong> on AI governance, strategy, and risk<br>&#9989; <strong>Exclusive practical tools</strong> like frameworks, checklists, and cheatsheets<br>&#9989; <strong>Regulatory intelligence</strong> &#8212; what EU and UK policy shifts mean for your board<br>&#9989; <strong>Case studies and governance lessons</strong> from real transformations<br>&#9989; <strong>Invitations to private director events</strong> including briefings, masterclasses, and roundtables</p><p>This isn&#8217;t about hype or technical deep dives. 
It&#8217;s about <strong>giving directors the right lens to ask the right questions and make better decisions in the boardroom</strong>.</p><div><hr></div><h3>Who I am</h3><p>I&#8217;m <a href="https://www.karimharbott.com/">Karim Harbott</a>, a <strong>Chartered Director, board advisor, and executive educator</strong>.</p><p>&#10145;&#65039; I teach <strong>strategy</strong> for the <strong>Institute of Directors</strong>.<br>&#10145;&#65039; I serve on boards, governance committees, and Finance, Audit &amp; Risk Committees.<br>&#10145;&#65039; I&#8217;ve worked internationally in <strong>strategy, digital transformation, and AI governance</strong> across financial services, government, retail, technology, and sport.<br>&#10145;&#65039; I&#8217;m the co-founder of a global strategy, AI, and transformation consultancy.<br>&#10145;&#65039; I&#8217;m the author of <em><a href="https://www.6enablers.com/">The 6 Enablers of Business Agility</a></em>.</p><p>My focus is always at the top table: helping boards and executives oversee AI responsibly, govern effectively, and drive transformation with confidence.</p><div><hr></div><h3>My purpose</h3><p>In my board and advisory work, I see a growing gap:</p><p><strong>&#10071;&#65039;Directors know AI is critical, but don&#8217;t know what questions to ask.<br>&#10071;&#65039;Executives are under pressure to &#8220;do AI,&#8221; but boards struggle to oversee it responsibly.<br>&#10071;&#65039;Regulators are moving fast, yet many boards are behind.</strong></p><p>It can be overwhelming trying to keep up. My goal is to close that gap, giving leaders the clarity, tools, and frameworks to navigate disruption without the noise, and without spending hours following every latest development.</p><p>This newsletter is my way of sharing those insights more widely.</p><div><hr></div><p>If you&#8217;re still reading &#8212; thank you. 
I want this to be useful, practical, and directly relevant to your role as a leader.</p><p>Now, I&#8217;d love to hear from you:<br>&#128073;&#127997; <strong>What&#8217;s the biggest AI-related challenge or question facing your board right now?</strong><br>&#128073;&#127997; <strong>What would you like to see more of in this newsletter?</strong></p><p>Reply to this email or drop a comment below.</p><p>Welcome aboard. I&#8217;m glad you&#8217;re here.</p><p><strong>Karim</strong></p>]]></content:encoded></item></channel></rss>