AI Glossary



A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z




A

Active Learning - A training approach where the model requests labels for the most informative data points, reducing labelling effort while improving performance on scarce or costly data.

Adaptive System - A system that updates behaviour based on new data or feedback, typically under defined controls to preserve safety, compliance, and auditability.

Adoption (AI Adoption) - The process of embedding AI into business operations, shaped by technology, organisation, and regulatory context, with impacts on processes, culture, and firm-wide performance.

Agent / AI Agent - Software that autonomously plans and executes tasks using tools or data sources, bounded by policies, guardrails, and human oversight.

Algorithm - A structured set of instructions that computers follow to process inputs and generate outputs; the foundation of all AI systems.

Algorithmic Affect Management (AAM) - Technologies that use biometrics and AI to monitor and influence workers’ emotions or stress levels, raising ethical concerns around privacy, autonomy, and workplace fairness. An emerging, often controversial field that intersects AI ethics, privacy, and employee monitoring.

Artificial General Intelligence (AGI) - A theoretical form of AI capable of human-level reasoning, learning, and adaptation across diverse tasks, unlike today’s narrow AI systems.

Artificial Intelligence (AI) - Computer systems performing tasks usually requiring human intelligence (e.g., perception, language understanding, decision-making) via algorithms and statistical learning.

Artificial Neural Network (ANN) - Machine learning models inspired by the brain, consisting of interconnected “neurons” that learn patterns in data; the building blocks of deep learning.

Assurance (AI Assurance) - Independent activities (testing, evaluation, certification) providing evidence that an AI system meets stated requirements for safety, performance, and compliance.

Audit Trail - Tamper-evident records of model versions, data lineage, prompts, outputs, and interventions, enabling traceability, accountability, and regulatory compliance.

Augmentation - The use of AI to enhance human abilities - supporting better judgement, decision-making, and performance - rather than replacing people outright.

Automation - Using technology to execute tasks without human intervention, typically improving efficiency, consistency, and cost-effectiveness.

Automation Bias - The human tendency to over-trust AI or automated outputs, accepting them without sufficient scrutiny. Automation bias can lead to poor or unsafe decisions.

Autonomy - A system’s capacity to operate and make decisions independently, within defined boundaries, without direct human control.

Autoregressive Generation - Method where an AI model generates text one token at a time, using previous outputs as inputs, producing fluent sequences such as paragraphs or dialogue.
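
A minimal sketch of this loop, using an invented hand-made bigram table in place of a learned model (a real LLM samples each next token from a learned probability distribution):

```python
# Toy autoregressive loop: each step conditions on the token generated so far.
# The bigram table below is invented for illustration; it stands in for the
# model's learned next-token distribution.
bigram_next = {
    "the": "board",
    "board": "approved",
    "approved": "the",
}

def generate(prompt_token, max_tokens=5):
    tokens = [prompt_token]
    for _ in range(max_tokens):
        nxt = bigram_next.get(tokens[-1])
        if nxt is None:
            break
        tokens.append(nxt)  # the previous output becomes the next input
    return " ".join(tokens)

print(generate("the"))  # the board approved the board approved
```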

Back to top

B

Benchmark - A standardised task or dataset used to evaluate AI model performance consistently across systems, tracking progress and informing deployment decisions.

Bias (Statistical/Algorithmic) - Systematic error producing unfair outcomes for individuals or groups, arising from data, features, or modelling choices; must be detected, mitigated, and monitored.

Black-Box Model - A system or model whose internal logic is opaque, making its decision-making process difficult or impossible for humans to interpret directly and, as a result, hard to govern.

Bounded Autonomy - Design principle granting AI limited decision rights within explicit constraints, escalation paths, and kill-switches.

Back to top

C

Chatbot - AI-powered conversational agent that automates interactions, providing scalable support and improving customer experience across service, sales, and internal functions.

Classification - A machine learning task where inputs are assigned into predefined categories (e.g., emails labelled as “spam” or “not spam”).

Cloud Infrastructure - The underlying physical and software components that deliver cloud computing, including servers, networking, storage, and virtualisation layers.

Cloud-Based Platforms - Technology environments that deliver computing resources - storage, processing, and software - via the internet. Providers such as AWS, Azure, and GCP enable scalability, cost-efficiency, and access to advanced AI tools.

Computational Power - The processing capacity of a computer system to execute tasks and analyse data; critical for training and running AI models at scale.

Computational Time - The duration required for a computer system to complete a program or task; shorter times enable faster AI training and inference.

Confidence / Calibration - How well a model’s predicted probabilities match reality; poor calibration misleads risk judgements and downstream decisions.

Connected AI - Integrated AI systems combining data, technology, and human input to support dynamic decision-making, collaboration, and enterprise transformation.

Constitutional AI - AI trained and governed using explicit ethical or legal principles (“constitutions”) to guide behaviour, aiming for safer, values-aligned outputs.

Consumption (AI Consumption) - The use of pre-built AI applications or services to enhance business processes, without creating or managing models or infrastructure internally.

Content Moderation - Processes and tools that detect, filter, or flag harmful, illegal, or policy-violating content—including hate speech, extremism, and explicit material.

Copyright and AI - Legal considerations around training data, outputs, and derivative works; organisations need policy, licensing strategies, and risk controls.

Counterfactual Explanation - “Minimal change” description showing how input alterations would change an outcome, improving transparency and recourse.

Creator Bias - Bias introduced by developers through design choices, training data selection, or model objectives, which can affect fairness and outcomes.

Creators (AI Creators) - Individuals or teams that design, build, and deploy AI systems, algorithms, or applications.

Back to top

D

Data Bias - The tendency for AI systems to inherit and reinforce societal biases present in the training data, creating risks of unfair or discriminatory outcomes.

Data-Driven Risk Assessment Methodology for Ethical AI (DRESS-eAI) - A framework for systematically identifying, assessing, and mitigating ethical risks in AI systems using structured, data-centric evaluation techniques.

Data Governance - Policies, roles, processes, and controls ensuring data quality, security, lineage, and lawful use across the AI lifecycle.

Data Labelling - The process of tagging raw data (e.g., images, text, audio) with meaningful annotations, making it usable for training machine learning models.

Data Poisoning - Malicious manipulation of training or retrieval data to degrade performance or induce harmful behaviour.

Data Privacy and Security - Safeguards to protect sensitive personal data, particularly in applications like biometrics, ensuring lawful use, confidentiality, and trust.

Data Readiness - The extent to which data is collected, cleaned, and structured to be reliable, compliant, and usable for analytics or machine learning.

Deep Learning - A machine learning approach using multi-layer neural networks to automatically learn representations from data, enabling breakthroughs in vision, speech, and language.

Deep Neural Networks (DNN) - Artificial neural networks with many layers of neurons that learn complex patterns from large datasets, powering advanced AI tasks such as vision or language.

Deepfake - Synthetic media generated by deep learning to convincingly depict events or people that never existed, raising concerns over misinformation and fraud.

Differential Privacy (DP) - A mathematical framework that adds noise so insights can be learned about populations without revealing information about individuals.
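
A minimal sketch of the classic Laplace mechanism applied to a counting query (the count and epsilon values are invented for illustration):

```python
import math
import random

# Laplace mechanism sketch: adding noise calibrated to a query's sensitivity
# and a privacy budget (epsilon) lets an aggregate be released while limiting
# what it reveals about any single individual.
def laplace_noise(scale):
    # Inverse-CDF sampling of a Laplace(0, scale) variable; the max() guards
    # against log(0) on the vanishingly rare draw of exactly 0.0.
    u = max(random.random(), 1e-12) - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(true_count, epsilon):
    # A counting query changes by at most 1 when one person joins or leaves
    # the dataset, so its sensitivity is 1 and the noise scale is 1/epsilon.
    return true_count + laplace_noise(1.0 / epsilon)

random.seed(42)
print(round(private_count(100, epsilon=1.0), 2))  # noisy release of a true count of 100
```

Smaller epsilon means stronger privacy but noisier answers; over many releases the noise averages out for populations while still masking individuals.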

Digital Footprint - The trace of data generated by an individual’s online activity - searches, clicks, posts - that can be collected, analysed, or exploited by AI systems.

Drift (Data/Concept) - Changes in input data or real-world relationships that degrade model performance; requires monitoring and timely remediation.

Back to top

E

Ecosystem (AI Ecosystem) - The interconnected network of developers, platforms, organisations, and industries that build, adopt, and evolve AI technologies globally.

Embedding - Numeric representation of text, images, or other data that captures semantic meaning for search, clustering, or retrieval.
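
A minimal sketch using invented 3-dimensional vectors (real embeddings have hundreds or thousands of dimensions) with cosine similarity as the closeness measure:

```python
import math

# Embeddings place semantically similar items near each other in vector
# space; cosine similarity is the usual measure of closeness. The vectors
# below are invented toy values, not real model output.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

king  = [0.9, 0.1, 0.4]
queen = [0.8, 0.2, 0.5]
pizza = [0.1, 0.9, 0.2]

# Related concepts score higher than unrelated ones.
print(cosine(king, queen) > cosine(king, pizza))  # True
```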

Ethical AI - The responsible development and use of AI systems that are fair, transparent, and aligned with human values and societal norms.

Ethical AI Framework - A structured approach embedding ethical principles—such as fairness, accountability, and transparency—into the design, deployment, and governance of AI systems.

EU AI Act - The European Union’s regulatory framework for AI, classifying systems by risk and setting obligations for safety, transparency, and accountability.

Evaluation (Model Evals) - Structured tests assessing accuracy, safety, robustness, bias, and alignment against defined criteria and use-case requirements.

Explainability - The degree to which the internal workings and decision-making processes of an AI system can be understood and articulated by humans. High explainability enables transparency, trust, accountability, and effective governance—particularly in regulated or high-stakes contexts.

Explainable AI (XAI) - Methods and tools that make AI decision-making understandable by providing human-readable explanations of how and why outputs are produced.

Back to top

F

FAIR Principles - Guidelines ensuring data is Findable, Accessible, Interoperable, and Reusable—promoting efficient data sharing, collaboration, and innovation across organisations and sectors.

Federated Learning - Training models across distributed devices or organisations without centralising raw data, improving privacy and sovereignty.

Fine-Tuned Models - Pre-trained models that have been adapted on domain-specific data to improve task performance or tone; see Fine-Tuning.

Fine-Tuning - Adapting a pre-trained model on domain-specific data to improve task performance or tone while controlling compute and cost.

Foundation Model - Large, general-purpose model trained on broad data, adaptable to many downstream tasks via prompting or fine-tuning.

Back to top

G

General Data Protection Regulation (GDPR) - A comprehensive EU data protection law, retained in UK law as the UK GDPR, governing how organisations collect, process, and protect personal data—ensuring transparency, accountability, and individual rights in data use. Highly relevant to the governance and deployment of AI systems.

General Purpose Technology (GPT) - A transformative class of innovations—such as electricity, the internet, or artificial intelligence—that fundamentally reshape economies and industries by enabling widespread productivity gains, new business models, and continuous waves of complementary innovation.

Generative AI (GenAI) - AI models that create new content—text, images, audio, code—by learning patterns in data and sampling plausible outputs.

Generative Pre-trained Transformer (GPT) - A class of large language models initially developed by OpenAI that uses the Transformer architecture. GPT models are pre-trained on vast text datasets and fine-tuned to generate coherent, context-aware language for a wide range of tasks, including writing, analysis, and reasoning.

Giants (AI Giants) - Large technology firms (e.g., Amazon, Microsoft, Google, Meta, Alibaba) that dominate AI infrastructure, cloud, and model development, shaping global standards and market dynamics.

Governance (AI Governance) - The framework of roles, policies, and controls ensuring AI is safe, ethical, legal, compliant, and aligned with organisational strategy.

Guardrails - Technical and policy controls that prevent or detect unsafe or non-compliant model behaviour at design and runtime.

Back to top

H

Hallucination - Model output that is fluent but factually incorrect or fabricated, often presented with unwarranted confidence or apparent authority.

Human and Machine Decision-Making - The process through which decisions are made by humans, AI systems, or both in combination—balancing intuition, ethics, and computational analysis.

Human-in-the-Loop (HITL) - Design where humans review, approve, or override AI outputs at critical points to manage risk and uphold accountability.

Human–Machine Frontier - The shifting boundary between tasks performed by humans and those handled by AI, reflecting changes in roles, skills, and organisational design.

Back to top

I

Impact Assessment (AI/Algorithmic) - Structured evaluation of an AI system’s risks, harms, and mitigations for individuals, groups, and society.

Imputation - A data preparation technique used to fill in missing or incomplete values within a dataset, improving data quality and ensuring more reliable analytics and machine learning outcomes.
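
A minimal sketch of one common baseline, mean imputation (the ages list is invented):

```python
# Mean imputation sketch: replace missing values (None) in a numeric column
# with the column's mean, a simple baseline before more sophisticated methods.
def impute_mean(values):
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

ages = [34, None, 28, 40, None]
print(impute_mean(ages))  # [34, 34.0, 28, 40, 34.0]
```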

Inference - The process of using a trained machine learning model to analyse new, unseen data and generate real-time predictions or decisions.

Instruction Tuning - Training that teaches a model to follow natural-language instructions, improving usefulness and controllability.

Integration (AI Integration) - The process of embedding AI into systems, processes, or products to add functionality, automate tasks, or improve decision-making.

Interpretability - The degree to which humans can understand how an AI model works, enabling users to detect bias, validate reasoning, and trust its results.

IoT (Internet of Things) - A network of connected physical devices—such as sensors, vehicles, and machinery—that collect and exchange data to enable monitoring, automation, and smarter decision-making.

ISO/IEC 42001 - Management system standard for AI. A set of requirements for establishing, implementing, maintaining, and improving an AI management system.

Back to top

J

Jailbreak - Attack that coerces a model into bypassing safeguards, producing policy-violating or unsafe outputs; countered with layered defences.

Job Demands–Resources (JD–R) Model - A framework for analysing employee well-being and performance by distinguishing between job demands—factors that cause strain—and job resources, which buffer stress and drive motivation. The balance between the two shapes engagement, burnout, and productivity.

JSON (JavaScript Object Notation) - A lightweight, human-readable data format used to structure and exchange information between systems—commonly employed in APIs and web applications for reliable, efficient data transfer.
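
A minimal round-trip example using Python's standard json module (the record fields are invented):

```python
import json

# Round-tripping a record through JSON, as an API might when exchanging data
# between systems.
record = {"model": "risk-scorer", "version": 2, "approved": True}

payload = json.dumps(record)    # Python dict -> JSON text
restored = json.loads(payload)  # JSON text -> Python dict

print(payload)              # {"model": "risk-scorer", "version": 2, "approved": true}
print(restored == record)   # True
```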

Back to top

K

Kill-Switch - Immediate mechanism to suspend or disable an AI system when risk thresholds are breached or anomalies arise.

Knowledge Base - Curated repository of authoritative content used to ground AI outputs and improve factual accuracy.

Back to top

L

Label Encoding - A data preprocessing method that converts categorical values into numeric codes for machine processing. Effective for ordered categories (e.g., small–medium–large) but unsuitable for non-ordinal data where no hierarchy exists.
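
A minimal sketch, using the small–medium–large ordering from the definition:

```python
# Label encoding sketch: map an ordered categorical feature to integers that
# preserve its natural ranking.
sizes = ["small", "large", "medium", "small"]
order = {"small": 0, "medium": 1, "large": 2}

encoded = [order[s] for s in sizes]
print(encoded)  # [0, 2, 1, 0]
```

For unordered categories (e.g. colours), these integer codes would imply a spurious hierarchy; one-hot encoding is the usual alternative there.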

Large Language Model (LLM) - An advanced AI system trained on vast text datasets to understand, generate, and manipulate human language, enabling applications like chatbots, summarisation, and code generation.

Latency - The time it takes for an AI system to respond; critical to user experience and certain operational decisions.

Learning Algorithms - Computational methods that enable machines to detect patterns and improve performance through data exposure, forming the foundation of machine learning and AI systems.

Learning-Based Systems - Computer systems that adapt and enhance performance by learning from data and experience, rather than relying solely on static, rule-based logic.

Least Privilege (for AI) - A security principle granting only the minimum access needed for a model or agent to perform its function.

Lifecycle (AI Lifecycle) - End-to-end phases—strategy, design, data, development, testing, deployment, monitoring, retirement—with controls at each stage.

LLMOps - Operational practices and tooling to deploy, monitor, secure, and continuously improve LLM-based applications.

Local Interpretable Model-Agnostic Explanations (LIME) - A technique that explains complex model predictions by generating simple, local approximations for individual cases, improving transparency across diverse AI models. See also SHapley Additive exPlanations (SHAP).

Back to top

M

Machine Learning - A branch of artificial intelligence that enables computers to identify patterns in data and improve their performance over time without explicit programming.

Model - A mathematical or computational construct trained on data to identify patterns, make predictions, or support decision-making; the output of an algorithm trained on a particular dataset.

Model Card - Standardised documentation outlining a model’s purpose, data, performance, limitations, and ethical considerations.

Model Context Protocol (MCP) - An open standard that allows AI applications to communicate with external data and tools by providing a standardised, two-way connection. It acts like a universal translator, enabling large language models (LLMs) to access live information, perform actions, and use specialised features beyond their initial training data.

Model Owner / Product Owner - Accountable leader for an AI system’s performance, risk, and compliance outcomes across its lifecycle.

Model Risk Management (MRM) - Policies and controls governing model development, validation, deployment, and monitoring to manage operational and regulatory risk.

Multi-Modal Model - A model that processes or generates more than one data type (e.g., text and images) for richer tasks.

Back to top

N

Narrow AI - AI designed for specific tasks (e.g., triage, summarisation), rather than general human-level intelligence.

Natural Language Processing (NLP) - A field of artificial intelligence focused on enabling computers to understand, interpret, and generate human language in text or speech form.

Network Capacity - The capability of a neural network to model and represent complex relationships in data, determined by its size, depth, and number of parameters.

Neural Network (NN) - Often used as a shorthand for artificial neural network (ANN); machine learning models inspired by the brain, consisting of interconnected “neurons” that learn patterns in data; the building blocks of deep learning.

NIST AI Risk Management Framework (AI RMF) - US framework offering functions and practices to manage AI risks across governance, mapping, measurement, and management.

Back to top

O

Observability - End-to-end visibility—metrics, logs, traces, prompts, outputs—supporting performance, drift, security, and safety monitoring.

One-Hot Encoding - A data transformation technique that converts categorical variables into binary vectors, allowing algorithms to process non-numeric data while preserving the uniqueness of each category.
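
A minimal sketch (the colour categories are invented):

```python
# One-hot encoding sketch: each category becomes a binary vector with a
# single 1, so no spurious ordering between categories is implied.
def one_hot(value, categories):
    return [1 if value == c else 0 for c in categories]

colours = ["red", "green", "blue"]
print(one_hot("green", colours))  # [0, 1, 0]
```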

Open-Source Model - A model whose weights and/or code are publicly accessible under a licence, enabling inspection, customisation, and self-hosting.

Operationalisation - Turning pilots into durable production services with SLAs, controls, support, and measurable business outcomes.

Overfitting - A modelling error where an AI system learns patterns and noise specific to its training data, resulting in poor generalisation and weaker performance on new, unseen data.

Back to top

P

Parameter - A learned weight inside a model that determines how inputs transform into outputs; more parameters often imply higher capacity.

Personally Identifiable Information (PII) - Data that can identify an individual; subject to strict handling, minimisation, and privacy controls.

Post-Training Safeguards - Techniques applied after pre-training—such as reinforcement learning from human feedback (RLHF), safety tuning, and content filters—to refine model behaviour.

Pre-Trained Models - AI models initially trained on large, general datasets and later fine-tuned for specific domains or tasks, improving efficiency and performance while reducing training costs.

Precision / Recall - Metrics used to evaluate classification models, especially on imbalanced datasets. Precision measures the accuracy of positive predictions: of all the items the model predicted as positive, how many were actually correct? Recall measures coverage of actual positives: of all the items that were actually positive, how many did the model correctly identify?
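
A minimal sketch computing both metrics from invented labels and predictions (1 marks the positive class):

```python
# Precision = TP / (TP + FP): how many predicted positives were correct.
# Recall    = TP / (TP + FN): how many actual positives were found.
def precision_recall(actual, predicted):
    tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
    fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))
    fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))
    return tp / (tp + fp), tp / (tp + fn)

actual    = [1, 1, 1, 0, 0, 0]
predicted = [1, 1, 0, 1, 0, 0]

p, r = precision_recall(actual, predicted)
print(round(p, 2), round(r, 2))  # 0.67 0.67
```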

Predictive Analytics - The use of statistical methods, machine learning, and AI to analyse historical and real-time data to forecast future outcomes and trends.

Predictive Optimisation - An algorithmic approach that uses machine learning to forecast individual outcomes and make data-driven decisions, often applied to personalise offers, allocate resources, or manage risk.

Production (AI Production) - The stage in the AI value chain where models are trained, tested, and optimised into deployable systems that deliver business-ready applications.

Prompt - The input text (and optional structured instructions) provided to an LLM to elicit a desired output.

Prompt Injection - An attack where crafted inputs hijack model behaviour or tool use, overriding instructions or exfiltrating secrets.

Provenance Watermarking - Signals embedded in content or metadata to indicate origin or tool usage, aiding authenticity verification.

Back to top

Q

Quality Assurance (QA) for AI - Planned activities verifying that AI meets defined requirements for accuracy, robustness, safety, and compliance before release.

Quantisation - Compressing model weights or activations to lower precision to reduce cost and latency with minimal performance loss.
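
A minimal sketch of symmetric int8 quantisation (the weight values are invented):

```python
# Symmetric int8 quantisation sketch: map floats onto integers in
# [-127, 127] using a single scale factor, then dequantise to recover an
# approximation of the originals.
def quantise(weights):
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantise(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33]
q, scale = quantise(weights)
approx = dequantise(q, scale)

print(q)  # [30, -127, 84]
```

Each weight now needs one byte instead of four or eight, at the cost of a small rounding error visible when comparing `approx` with `weights`.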

Back to top

R

Random Forests - A machine learning technique that builds multiple decision trees and aggregates their results to enhance accuracy, reduce variance, and prevent overfitting.

Readiness (AI Readiness) - The extent to which an organisation is equipped—with the right data, talent, governance, and culture—to effectively adopt, integrate, and scale AI technologies.

Recurrent Neural Networks (RNN) - A class of deep learning models designed for sequential data, where each output depends on previous inputs—useful for tasks like language modelling or time-series forecasting.

Red Teaming (AI) - Structured adversarial testing to uncover safety, security, and misuse risks before and after deployment.

Regression - A machine learning task that predicts continuous outcomes—such as prices, demand, or temperature—based on patterns in input data.

Regulatory Landscape - The evolving set of laws, policies, and governance frameworks that define how AI technologies must be developed, deployed, and managed responsibly.

Reinforcement Learning - A machine learning approach where an agent interacts with an environment and learns optimal actions by maximising cumulative rewards over time.

Reinforcement Learning from Human Feedback (RLHF) - A training method that refines AI behaviour through human-provided feedback, aligning model outputs with human values, intent, and ethical expectations.

Responsible AI - The practice of designing, developing, and deploying artificial intelligence systems in a manner that is ethical, transparent, fair, and accountable. Responsible AI ensures that technology serves human interests, complies with legal and regulatory standards, and manages risks related to bias, safety, and societal impact.

Retrieval-Augmented Generation (RAG) - A technique that enhances AI responses by retrieving relevant, factual information from trusted sources before generating outputs, improving accuracy and grounding.

Risk Heat Maps - Visual tools that display the likelihood and impact of identified risks on a colour-coded grid, supporting prioritisation and targeted risk mitigation.

Risk Register (AI) - A living record of identified AI risks, associated controls, owners, and residual exposures, integrated with enterprise risk management.

Rule-Based Systems - Computer systems that make decisions based on predefined human-created rules, using explicit logic rather than learned patterns from data.

Back to top

S

Safety Case - Documented, evidence-based argument that an AI system is acceptably safe for its intended context and use.

Semi-Structured Data - Information that does not fit neatly into traditional databases but contains some organisational elements, such as tags or metadata. Examples include JSON files, XML documents, and system logs—offering flexibility while remaining machine-readable.

Shadow Mode - Operating an AI alongside current processes without acting on outputs, to measure impact and validate safety before go-live.

SHapley Additive exPlanations (SHAP) - An explainability technique that attributes a model’s prediction to each feature using game theory principles, providing transparent, fair insights into model behaviour. See also Local Interpretable Model-Agnostic Explanations (LIME).

Stochastic - Describes processes or models that incorporate randomness or probability, resulting in outputs that may vary even under similar conditions.

Strategic Coherence - The alignment of initiatives and decisions across business units to ensure all actions—including AI projects—reinforce the organisation’s overall strategic direction.

Strategy (AI Strategy) - A structured organisational plan for using artificial intelligence to achieve strategic goals, drive innovation, and build sustainable competitive advantage.

Strong AI - A hypothetical form of artificial intelligence with human-level reasoning, understanding, and consciousness—capable of performing any intellectual task a human can.

Structured Data - Information organised into a defined schema, such as rows and columns in databases, making it easily searchable, analysable, and compatible with most data tools.

Supervised Learning - A machine learning approach where models are trained on labelled datasets to predict specific outcomes, commonly used for forecasting, classification, and risk modelling.

Surveillance Capitalism - An economic model that monetises personal data by collecting, analysing, and using behavioural information to predict and influence consumer actions.

Sustainable AI - The design and deployment of AI systems in ways that minimise environmental impact while upholding ethical, social, and governance standards.

Symbolic AI - An early AI approach that represents knowledge using human-defined rules, symbols, and logic rather than data-driven learning—useful for reasoning and rule-based decision systems.

Synthetic Data - Artificially generated data created to supplement or replace real datasets for training and testing AI models. It helps protect privacy, address data imbalance, and expand sample diversity. Synthetic data must be rigorously validated for accuracy, fidelity, and representativeness to ensure reliable model performance.

Back to top

T

Temperature (LLM) - A parameter that controls the creativity and randomness of an AI model’s output. Higher temperatures generate more varied and imaginative responses, while lower settings produce more precise and consistent results.
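
A minimal sketch showing how temperature reshapes a softmax distribution over invented logits (a real model samples the next token from this distribution):

```python
import math

# Temperature scaling sketch: logits are divided by T before softmax. Low T
# sharpens the distribution (more deterministic output); high T flattens it
# (more varied output). The logits below are invented toy values.
def softmax(logits, temperature=1.0):
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax(logits, temperature=0.2)
hot = softmax(logits, temperature=2.0)

print(max(cold) > max(hot))  # True: low temperature concentrates probability
```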

Tokenisation - The process of breaking text into smaller units—called tokens—such as words, subwords, or symbols, allowing algorithms to analyse and process language efficiently.
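
A minimal sketch splitting text into word and punctuation tokens; production models typically use subword schemes such as byte-pair encoding instead:

```python
import re

# Naive tokeniser: lowercase the text, then emit runs of word characters and
# individual punctuation marks as separate tokens.
def tokenise(text):
    return re.findall(r"\w+|[^\w\s]", text.lower())

print(tokenise("AI governance, done well."))
# ['ai', 'governance', ',', 'done', 'well', '.']
```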

Traceability - Ability to trace outputs back to data, prompts, models, and decisions; key for audits and investigations.

Training - The phase in which an AI model learns from data by identifying patterns and relationships, enabling it to make accurate predictions or decisions in real-world applications.

Transformer Model - A deep learning architecture designed to process sequential data by examining relationships between all elements simultaneously, underpinning modern language models such as GPT and BERT.

Transparency - Clarity about how AI systems work, the data used, and limitations; communicated via documentation, notices, and dashboards.

Tuning (Instruction/Fine/Preference) - Techniques to adapt models to tasks, tone, or policies, using curated data and constrained processes.

Back to top

U

UK AI Guidelines - A set of national principles promoting the safe, fair, transparent, and accountable development and deployment of AI, balancing innovation with ethical and regulatory safeguards.

Ultimate Control - A minimal standard of human oversight in AI systems, referring to the ability to override or deactivate a system when necessary—preserving human authority but not guaranteeing timely intervention.

Unstructured Data - Information without a predefined format—such as emails, documents, videos, or social media posts—that requires advanced tools and AI techniques to analyse and extract insights.

Unsupervised Learning - A machine learning approach where models find hidden patterns or groupings in unlabelled data, commonly used for customer segmentation, anomaly detection, and exploratory analysis.

Use-Case Catalogue - Governed list of approved AI use cases with owners, controls, metrics, and lifecycle status.

User-Centricity - A design philosophy that places user needs, preferences, and experiences at the core of product or system development, ensuring usability, adoption, and satisfaction.

Back to top

V

Validation (Model Validation) - Independent testing and review confirming that a model is fit for purpose, robust, and compliant with policies and regulations.

Vendor Risk (AI) - Risks from external model/API providers—security, IP, resilience, data use—managed via contracts, testing, and contingency plans.

Back to top

W

Weak AI - Artificial intelligence built to perform specific, well-defined tasks—such as translation, image recognition, or recommendation—without genuine understanding or consciousness; also known as narrow AI.

Whitelisting / Allow-Listing - Restricting models, tools, and data sources to pre-approved items to reduce attack surface and compliance risk.

Winters (AI Winters) - Periods when AI research stagnated due to reduced funding, limited progress, and diminished public and investor interest.

Back to top

X

Back to top

Y

Yield (Business Value Yield) - Measured value realised from AI—e.g., cost reduction, revenue uplift, risk reduction—relative to investment, enabling disciplined prioritisation.

Back to top

Z

Zero-Shot / Few-Shot - Performing tasks with no or minimal examples by relying on pre-trained knowledge; often enhanced by well-crafted prompts.

Zero Trust (for AI Systems) - Security model assuming no implicit trust; continuously verifies identities, devices, and requests across AI components and data flows.

Back to top