Most Organisations Are Investing in AI Without Deciding Where It Creates Advantage
Edition 12 — Why boards must decide where AI creates advantage before they fund it.
The Illusion of Progress: Why AI Investment Feels Strategic
AI investment now creates a powerful illusion of strategic progress. Boards see budgets approved, pilots launched, and new capabilities emerging across functions. Management teams can point to copilots, automation, summarisation tools, and workflow improvements as evidence that the organisation is moving. Yet much of this activity is better understood as capability accumulation than as strategic choice. McKinsey’s 2025 global survey found that 88 per cent of organisations were using AI in at least one business function, but nearly two-thirds had not yet begun scaling AI across the enterprise, and only 39 per cent reported any EBIT impact at the enterprise level. That gap between adoption and economic effect should be read as a strategic signal. It suggests that many organisations are investing before they have decided where AI is supposed to improve their competitive position.
This is the first governance issue boards need to recognise. In most organisations, AI investment is being driven by external pressure as much as internal logic. Competitors are moving, vendors are persuasive, consultants are presenting large value estimates, and investors increasingly expect visible progress. In that environment, investment becomes a signal of competence rather than a consequence of strategy. The “AI race” narrative accelerates adoption, but only a minority of firms translate that activity into meaningful enterprise value. Movement becomes a substitute for direction, and capability becomes a proxy for strategy.
A useful way to frame this problem is to distinguish between what I refer to as AI parity investment and AI advantage investment. Most organisations are currently doing the former while assuming they are achieving the latter. Until that distinction is made explicit, investment will continue to outpace strategic clarity.
Threshold Capabilities vs Distinctive Capabilities: The Critical Distinction Most Organisations Miss
The most useful distinction in this context is between threshold capabilities and distinctive capabilities. Threshold capabilities are those that an organisation must possess to remain credible in its market. Distinctive capabilities are those that shape why customers choose the firm, why margins persist, and why competitors struggle to respond.
Most current AI investment sits firmly in the first category. Rolling out generative AI assistants, automating internal workflows, improving customer-service triage, and accelerating knowledge work are rational responses to technological change. They improve efficiency and, in some cases, quality. The problem is that competitors are doing exactly the same. Over time, these capabilities will become expected rather than distinctive.
The challenge is that these benefits are inherently replicable. Microsoft Research’s study of M365 Copilot users demonstrates measurable productivity gains, but those gains are available to any organisation deploying the same tools. As adoption spreads, the effect is not differentiation but convergence. The industry becomes more efficient, but relative positions remain largely unchanged.
This is where the distinction between AI parity and AI advantage becomes operationally important. Threshold capabilities should be treated as necessary investments to maintain competitiveness, with a clear focus on cost efficiency and execution discipline. Distinctive capabilities, by contrast, require concentrated investment and are justified only when they can alter pricing power, cost position, or customer preferences in ways competitors cannot easily replicate.
The Resource-Based View Applied to AI: Why Access to Models Does Not Create Advantage
The resource-based view of the firm provides a disciplined way to assess whether AI can create advantage. For a capability to generate sustained advantage, it must be valuable, rare, difficult to imitate, and organised to capture value (VRIO). Most AI technologies fail this test at the level of access.
Foundation models, cloud infrastructure, orchestration tools, and development frameworks are now widely available. Barriers to entry are falling, and the expertise required to deploy these tools is rapidly becoming a commodity. The technology is valuable, but it is neither rare nor difficult to imitate. Under a VRIO lens, it is unlikely to create sustained advantage on its own.
This becomes clearer when compared with established sources of distinctive capability. Amazon’s logistics network is not valuable because warehouses exist; it is valuable because of its scale, integration, and continuous optimisation, which together support superior service levels and cost economics. Zara’s supply chain is not distinctive because supply chains are uncommon; it is distinctive because of the speed and coherence with which it links demand signals to production and distribution, enabling faster inventory turns and reduced markdown risk.
In both cases, advantage arises from an integrated system of resources and capabilities that shapes the firm’s economic model. The same principle applies to AI. The question is not whether the organisation has access to the technology, but whether it can embed that technology within a system of resources and processes that competitors cannot replicate quickly or cheaply.
Where AI Can Actually Become a Distinctive Capability
AI can contribute to distinctive capability, but only when it is embedded within resources and systems that meet the VRIO criteria. There are three credible routes through which this occurs.
The first is proprietary data that improves with use and is difficult for competitors to assemble. Organisations that control unique operational or behavioural datasets can build models that outperform those trained on generic data. The advantage compounds as data accumulates and feedback loops strengthen, creating a widening performance gap over time.
The second is deep integration into core workflows and decisions. When AI becomes part of how pricing decisions are made, how risk is assessed, how inventory is allocated, or how products are designed, it becomes embedded in the operating model. Replication then requires not just access to the tool, but replication of the underlying processes, data flows, and organisational alignment that support those decisions. This is far more challenging for others to imitate, as the Toyota Production System has demonstrated for decades.
The third is organisational capability. Firms that can redesign workflows, combine technical and domain expertise, and continuously improve their systems create a form of advantage that extends beyond any single application. This often manifests in faster decision cycles, better resource allocation, and more consistent execution.
The distinction is critical. Competitors can access the same tools. They cannot easily replicate the same data histories, embedded processes, and organisational learning curves. That is where AI moves from being a productivity lever to a source of sustained competitive advantage.
The Strategic Evasion: Why Boards Avoid Defining Where AI Will Win
Why, then, do so many organisations still lack strategic clarity? The answer lies in the nature of strategic choice. Defining where AI will create an advantage forces difficult trade-offs. It requires deciding where to concentrate investment, where to accept parity, and which opportunities to deprioritise. These decisions reduce flexibility and create accountability. They also expose differences in perspective across leadership teams.
Many organisations avoid this discipline. Instead, they pursue multiple initiatives across functions, creating the appearance of progress without committing to a clear direction. Roger Martin’s and Richard Rumelt’s work on strategy is relevant here. Both argue that an effective strategy is not a list of initiatives, but a coherent set of choices about where to focus and how to compete. That coherence is precisely what many AI portfolios lack. The result is fragmentation. Investment is spread across use cases that deliver incremental benefits but do not combine to shift competitive position. The organisation becomes busier, but not necessarily better positioned, and capital is allocated across initiatives that reinforce parity rather than advantage.
Capability Without Positioning: The Hidden Risk in Current AI Strategies
This leads to a specific strategic risk. Organisations that build AI capability without defining strategic intent can become more efficient without becoming more competitive. Costs increase through investment in infrastructure, licences, data engineering, experimentation, governance, and talent. These costs are often justified individually but rarely assessed collectively against a clear source of advantage. At the same time, competitors adopt similar capabilities, neutralising much of the gain. Efficiency improvements are competed away, and the organisation is left with a higher cost base but little change in relative position. The net effect is convergence rather than separation.
This dynamic is already visible. Many organisations report measurable improvements at the use-case level, but struggle to demonstrate how those improvements translate into sustained financial performance. Boards should interpret this as a warning. Capability without positioning consumes capital, increases organisational complexity, and creates governance challenges without strengthening competitive position.
Reframing the Question: From “Where Can We Use AI?” to “Where Must We Win With AI?”
The starting point for many organisations has been the question: “Where can we use AI?” It is a natural question, but it encourages breadth rather than focus. A more effective question is: “Where must we win, and how can AI strengthen that position?” This reframing aligns AI with core strategic choices. It forces organisations to consider how AI supports their market positioning, rather than treating it as a separate domain of innovation.
This aligns with Roger Martin’s “where to play” and “how to win” framework. AI should not sit alongside strategy; it should be integrated into it. As I have written previously, AI must be tied to strategic intent, not deployed opportunistically. The implication is that AI investment should be evaluated in the context of competitive positioning and economic impact, rather than solely on technical feasibility.
A Board-Level Discipline: Defining AI Advantage Explicitly
For boards, this implies a more explicit discipline than most organisations currently apply. They should require management to clearly distinguish between threshold and distinctive uses of AI and to justify that distinction in economic as well as technical terms. A practical way to enforce this discipline is through a structured decision lens. Before approving material AI investment, boards should require management to answer four questions.
First, is this investment building parity or creating a plausible source of advantage? Second, what makes this difficult for competitors to replicate within a defined time horizon? Third, where does the economic value accrue—in cost reduction, revenue growth, pricing power, or risk mitigation—and is that value likely to persist? Fourth, what changes to the operating model, governance, and organisation are required to realise that value?
These questions force clarity. They distinguish between investments that improve efficiency and those that may reshape competitive position. They also provide a basis for governance. AI initiatives that are central to competitive advantage should be subject to different oversight, risk management, and assurance than those that primarily improve internal efficiency. Regulatory developments reinforce this need for discipline. The EU AI Act is being phased in progressively, with increasing obligations around governance, risk management, and accountability. Boards, therefore, need to understand not only where AI is being used, but also how critical it is to the business model and what risks it introduces.
Closing Argument: AI Will Not Create Advantage Without Strategic Choice
AI does not create an advantage by default, certainly not a lasting one. It amplifies the quality of the strategic choices an organisation makes. Where those choices are weak or undefined, AI produces activity without meaningful differentiation. Where they are clear and disciplined, AI can deepen capabilities that competitors struggle to match.
The distinction between AI parity investment and AI advantage investment should therefore become explicit at board level. Without it, organisations will continue to invest broadly, improve incrementally, and converge with their peers. Boards need to separate two questions that are currently conflated. The first is how to adopt AI effectively. The second is where AI will create an advantage. The first is necessary. The second is decisive.
Organisations that make explicit choices about where AI matters will concentrate resources, build distinctive capabilities, and shape their competitive position. Those that do not will continue to invest, improve, and remain broadly indistinguishable.
If you want more board-level insight on AI strategy, governance, and transformation, subscribe to AI in the Boardroom. Each edition is designed to help directors and senior leaders turn AI activity into sustained competitive advantage.