Breakdown of the OECD’s 'Principles for Trustworthy AI'
Edition 4 - Why every board should know these five principles, and act on them
AI now consistently appears in board packs, strategy sessions, and risk registers. Directors are being asked to approve AI investments, manage regulatory obligations, and protect their organisations from reputational harm. Yet many still don’t know what “responsible AI” means in practice.
One place to start is with the OECD’s Principles on Artificial Intelligence. These were the world’s first intergovernmental AI standards, adopted in 2019 and now endorsed by more than 40 countries, including the UK, the US, and EU members. They underpin the EU AI Act and are shaping regulations worldwide.
At their core are the five principles for responsible stewardship of trustworthy AI. They set out what “good AI” looks like—not for engineers, but for leaders. Boards that ignore them are taking unnecessary risks.
Let’s break them down.
1. Inclusive growth, sustainable development, and well-being
This principle is about making sure AI drives prosperity for people and the planet—not just profit. Trustworthy AI can support inclusive growth, sustainable development, and the UN Sustainable Development Goals across areas such as education, health, transport, agriculture, and the environment.
Boards must also recognise the risks. AI can widen inequalities if access is uneven or if systems reinforce existing biases. Vulnerable groups—minorities, women, children, older people, and low-skilled workers—are especially exposed. These risks are even greater in low- and middle-income countries.
Responsible stewardship means guiding AI to reduce, not amplify, these divides. It requires clear safeguards, cross-sector collaboration, and open public dialogue. The aim is to use AI to empower all members of society, build trust, and create outcomes that benefit everyone.
Example: When UPS introduced AI-driven route optimisation (ORION), it cut fuel costs and carbon emissions. Shareholders benefited, but so did drivers and the environment. Compare that with Amazon’s failed AI recruitment tool, which downgraded CVs from women. It created reputational harm and regulatory exposure.
For directors:
Ask if your AI projects create value beyond short-term profit.
Challenge management to explain the human impact.
Consider sustainability and workforce implications as part of every AI business case.
Consider the implications of increased energy use for your ESG policies.
2. Respect for the rule of law, human rights and democratic values, including fairness and privacy
This principle states that AI must be built on human-centred values: freedom, fairness, equality, the rule of law, privacy, and consumer rights.
Poorly designed systems risk infringing human rights, whether by accident or intent. Boards should ensure AI is “values-aligned”, with safeguards, human oversight, and the ability to intervene when needed. Doing so keeps AI behaviour consistent with democratic values, reduces discrimination, and builds public trust.
Tools like human rights impact assessments, ethical codes, and quality certifications can embed fairness and accountability into AI development and use.
Example: In the US, a global bank had to scrap its AI CV screening system after it amplified gender bias. Regulators and the media took notice. In healthcare, several NHS trusts are piloting AI diagnostic tools, which raise a critical question: are they equally accurate for all patient groups?
For directors:
Insist on bias audits before systems are deployed.
Demand clear evidence that AI decisions can be explained in plain English.
Hold management to account for fairness metrics, not just financial ones (a minimal sketch of one such check follows this list).
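To make that last point concrete: a bias audit might start with something as simple as comparing selection rates across groups, often called a demographic parity check. Below is a minimal sketch in Python; the groups, outcomes, and any review threshold are hypothetical, and a real audit would cover several metrics and protected characteristics.

```python
# A minimal, illustrative check of the kind a bias audit might include:
# compare selection rates across groups (a demographic parity check).
# The groups, outcomes, and review threshold below are hypothetical.

def selection_rate(decisions):
    """Share of positive outcomes (e.g. CVs shortlisted) in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest gap in selection rate between any two groups."""
    rates = {group: selection_rate(d) for group, d in decisions_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical screening outcomes (1 = shortlisted, 0 = rejected).
outcomes = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],
    "group_b": [0, 1, 0, 0, 1, 0, 0, 0],
}

gap, rates = demographic_parity_gap(outcomes)
print(f"Selection rates by group: {rates}")
print(f"Demographic parity gap: {gap:.2f}")  # flag for review if above an agreed tolerance
```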
3. Transparency and explainability
Transparency means people should know when they’re dealing with AI—whether it’s a chatbot, a recommendation, or a decision. It also means providing meaningful information about how a system was built, trained, and deployed so that users can make informed choices. Transparency does not mean handing over source code or proprietary data, which is often unnecessary or impractical.
Explainability goes further. People affected by AI decisions should be able to understand the primary factors and logic behind an outcome, and challenge it if needed. The level of detail depends on the context. High-stakes decisions demand clarity; low-risk interactions may need less.
Boards should note the trade-offs: explainability can reduce accuracy and add costs, but without it, trust and accountability collapse. The goal is clear communication: simple, accessible explanations that respect privacy, protect IP, and still allow scrutiny.
Example: Credit scoring algorithms are notorious for opacity. European regulators have already fined firms for failing to explain automated decisions to the people affected. Boards that sign off on opaque systems are exposing themselves to legal and reputational risk.
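For illustration only, here is a minimal sketch of what a “reasons for decision” summary could look like for a simple additive scoring model. The weights and applicant values are hypothetical, and real systems built on more complex models would need dedicated explanation techniques and careful validation.

```python
# A minimal, illustrative "reasons for decision" summary for a simple
# additive scoring model. The weights and applicant values are hypothetical;
# real systems with complex models need dedicated explanation techniques.

WEIGHTS = {                 # hypothetical credit-scoring weights
    "income": 0.4,
    "existing_debt": -0.6,
    "missed_payments": -0.9,
    "years_at_address": 0.2,
}

def explain_decision(applicant, weights=WEIGHTS):
    """Return the score plus each factor's contribution, largest impact first."""
    contributions = {factor: weights[factor] * applicant[factor] for factor in weights}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda item: abs(item[1]), reverse=True)
    return score, ranked

applicant = {"income": 3.2, "existing_debt": 4.1, "missed_payments": 2.0, "years_at_address": 6.0}
score, reasons = explain_decision(applicant)
print(f"Score: {score:.2f}")
for factor, impact in reasons:
    print(f"  {factor}: {impact:+.2f}")  # plain-language labels would replace these in a real report
```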
For directors:
Require management to present AI systems in language that non-technical directors can understand.
Test explainability yourself—if you can’t explain a decision to a regulator, you shouldn’t approve the system.
Ensure your risk committee tracks AI transparency as a standing item.
4. Robustness, security, and safety
AI systems must not pose unreasonable safety risks, whether in normal use, under foreseeable misuse, or at any point in their lifecycle. Existing consumer protection laws already define many of these risks, and governments are deciding how they apply to AI.
Two tools stand out:
Traceability: keeping records of data sources, cleaning, and processes so outcomes can be analysed, mistakes corrected, and accountability strengthened (a simple sketch of such a record follows this list).
Risk management: applying structured methods to identify, assess, and mitigate risks, from bias and privacy breaches to digital security threats, at every stage of the AI lifecycle. Different uses demand different levels of protection.
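As a rough illustration of the first tool, a traceability record can be as simple as a structured log entry stored alongside each model release. The fields and values in this sketch are hypothetical; in practice most organisations would rely on dedicated ML-ops or data-lineage tooling rather than a hand-rolled log.

```python
# A minimal, illustrative traceability record stored alongside a model release.
# Field names and values are hypothetical; most organisations would use
# dedicated ML-ops or data-lineage tooling rather than a hand-rolled log.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ModelLineageRecord:
    model_name: str
    model_version: str
    data_sources: list          # e.g. dataset names or feed identifiers
    preprocessing_steps: list   # e.g. "imputed missing income with median"
    approved_by: str            # the accountable owner for this release
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = ModelLineageRecord(
    model_name="credit_risk_scorer",
    model_version="2.3.1",
    data_sources=["applications_2023.csv", "bureau_feed_q4"],
    preprocessing_steps=["removed duplicate applications", "imputed missing income with median"],
    approved_by="Head of Model Risk",
)

# Persisting records like this lets auditors trace a decision back to its inputs.
print(json.dumps(asdict(record), indent=2))
```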
For boards, the message is clear: without robust systems, secure design, and continuous risk management, AI becomes a liability rather than an asset.
Example: Self-driving cars highlight the stakes. A software glitch or adversarial attack can cost lives. In an enterprise, a single faulty algorithm can wipe millions from market value in minutes, as Knight Capital’s trading debacle proved in 2012 (not an AI failure, but the lesson holds).
For directors:
Ask if AI models have been stress-tested (a simple sketch of one such test follows this list).
Confirm cyber teams are included in AI governance from day one.
Demand incident response plans for AI failures—just as you would for data breaches.
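To show the kind of question a stress test answers, here is a minimal, hypothetical sketch: perturb an input slightly and measure how often the model’s decision flips. The toy model, noise level, and trial count are assumptions for illustration, not a substitute for a full testing programme covering adversarial inputs, edge cases, data drift, and load.

```python
# A minimal, illustrative robustness check: perturb an input slightly and
# measure how often the decision flips. The toy model, noise level, and trial
# count are hypothetical; real stress tests also cover adversarial inputs,
# edge cases, data drift, and load.

import random

def toy_model(features):
    """Hypothetical stand-in for a deployed scoring model."""
    return 1 if 0.5 * features["income"] - 0.8 * features["debt"] > 0 else 0

def stress_test(model, features, noise=0.05, trials=1000, seed=42):
    """Share of small random perturbations that change the model's decision."""
    rng = random.Random(seed)
    baseline = model(features)
    flips = 0
    for _ in range(trials):
        perturbed = {name: value * (1 + rng.uniform(-noise, noise))
                     for name, value in features.items()}
        if model(perturbed) != baseline:
            flips += 1
    return flips / trials

applicant = {"income": 3.0, "debt": 1.8}
flip_rate = stress_test(toy_model, applicant)
print(f"Decision flip rate under ±5% input noise: {flip_rate:.1%}")
```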
5. Accountability
While responsibility, liability, and accountability overlap, accountability is the most relevant for AI. It means organisations and individuals must ensure AI systems function properly throughout their lifecycle and can show how and why decisions were made.
It is not just about blame after something goes wrong. It is about taking ownership, documenting key decisions, enabling audits, and showing regulators, customers, and stakeholders that governance is in place.
For boards, the message is simple: you cannot outsource accountability to an algorithm or a vendor. The buck stops with leadership.
Example: The Dutch childcare benefits scandal starkly illustrates this. An AI system falsely accused thousands of families of fraud. The fallout forced the government to resign. The lesson: accountability is non-negotiable.
For directors:
Ensure clear ownership of AI risks across the organisation.
Mandate board-level oversight—whether through the audit, risk, or ethics committee.
Include accountability in vendor and partner contracts.
Why this matters for boards
These five principles are not abstract ideals. They are becoming the baseline for regulation. The OECD framework influenced the EU AI Act, the US Blueprint for an AI Bill of Rights, and the UK’s AI White Paper.
For boards, adopting them now means:
Strategic clarity: AI projects link to sustainable value creation.
Risk protection: You stay ahead of regulatory and reputational risks.
Trust building: Customers, employees, and investors see AI as responsible rather than reckless.
The question is not whether you adopt these principles. It is whether you do so before regulators or the public force your hand.
Takeaway for directors
When AI is on your board agenda, test proposals against these five principles. If they don’t measure up, push back.
Does it benefit more than just shareholders?
Does it respect fairness and human rights?
Can you explain it to a regulator or journalist?
Has it been stress-tested and secured?
Is there clear accountability?
If you can’t answer “yes” with confidence, you have work to do.
Final thought
The OECD AI Principles give boards a practical framework for responsible stewardship. They are not a compliance burden. They are a guide to turning AI into a strategic advantage, while avoiding the pitfalls that have already cost others dearly.
If your board is serious about AI, these five principles should be on your agenda.
👉🏽 If you found this helpful, subscribe to AI in the Boardroom. Each week, I share practical insights to help directors and executives turn AI disruption into opportunity—while staying safe, ethical, and compliant.



