AI’s Supply Chain of Trust Faces Its First Major Test

Artificial intelligence is becoming a defining feature of business and society across the UK and Europe. From financial services and healthcare to government and education, AI is being integrated into critical operations at extraordinary speed.

Yet the risks that accompany this adoption look strikingly familiar. Supply chains remain vulnerable, trust is being eroded at multiple points, and malicious actors are already exploiting weaknesses at scale. The lessons of past technological booms suggest that unless these challenges are addressed early, they will become entrenched in the very systems we are now racing to build.

Old Lessons, New Stakes

The traditional software supply chain is already highly exposed. Developers rely heavily on free and open-source components, which can represent as much as 70 to 90 per cent of modern applications. This reliance on shared code brings efficiency but also systemic fragility. A single weak link can compromise entire industries.

The XZ Utils backdoor of 2024 showed how this plays out in practice. A sophisticated backdoor was implanted in the widely used XZ compression library, which is bundled with many Linux distributions. The attackers infiltrated the project through social engineering, posing as contributors and gradually earning the maintainers' trust. The backdoor was designed to bypass SSH authentication, potentially compromising millions of servers globally.
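
For defenders, the first response was mundane but essential: check whether the compromised releases were present. Below is a minimal sketch of that check in Python, assuming a host where the xz binary is on the PATH; the backdoored releases were 5.6.0 and 5.6.1, tracked as CVE-2024-3094.

```python
import re
import subprocess

# Versions of XZ Utils known to contain the 2024 backdoor (CVE-2024-3094).
COMPROMISED_VERSIONS = {"5.6.0", "5.6.1"}

def installed_xz_version() -> str | None:
    """Return the installed xz version string, or None if xz is absent."""
    try:
        out = subprocess.run(
            ["xz", "--version"], capture_output=True, text=True, check=True
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None
    match = re.search(r"xz \(XZ Utils\) (\d+\.\d+\.\d+)", out)
    return match.group(1) if match else None

version = installed_xz_version()
if version in COMPROMISED_VERSIONS:
    print(f"WARNING: xz {version} is a backdoored release; patch immediately.")
else:
    print(f"xz version: {version or 'not installed'}")
```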

Beyond software components, adversaries are exploiting the infrastructure that connects them. The domain name system (DNS), which quietly directs traffic across the internet, has become a favoured target. DNS hijacking and poisoning allow attackers to redirect traffic, capture sensitive data, and silently compromise updates. For UK and EMEA businesses that increasingly depend on cloud-based services, DNS represents an overlooked but fundamental layer of trust.
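
Detecting DNS tampering is hard, but simple cross-checks can raise early flags. The sketch below, using the dnspython library, queries the same hostname through two independent public resolvers and reports any disagreement. The hostname is a hypothetical placeholder, and divergent answers are only a signal rather than proof of hijacking, since CDNs legitimately vary their responses.

```python
import dns.resolver  # pip install dnspython

def resolve_with(nameserver: str, hostname: str) -> set[str]:
    """Query one specific resolver and return the set of A records."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    answer = resolver.resolve(hostname, "A")
    return {rr.address for rr in answer}

hostname = "updates.example.com"  # hypothetical software-update host
answers = {ns: resolve_with(ns, hostname) for ns in ("8.8.8.8", "1.1.1.1")}

if len({frozenset(a) for a in answers.values()}) > 1:
    # Disagreement warrants a closer look, especially for an update server.
    print(f"Resolvers disagree for {hostname}: {answers}")
else:
    print(f"Consistent answers for {hostname}: {answers}")
```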

AI Inherits and Amplifies the Risk

AI supply chains inherit all these vulnerabilities and then multiply them through scale and complexity. Every AI system is built on multiple layers of code, frameworks and models, each of which may contain weaknesses. Already, attackers are targeting open-source repositories for AI models, seeking to insert malicious elements where they are least expected.

One study by researchers at the University of Notre Dame and the University of Hawaii examined Hugging Face, a leading open-source platform for AI models. They found that most models used insecure methods of serialisation and that many could be exploited through their APIs. In some cases, entire models were discovered to be deliberately malicious. For European enterprises seeking to integrate AI into sensitive domains like critical national infrastructure, such findings underscore the scale of the trust challenge. Digital signing and verification also give cybersecurity teams an effective way to steer developers towards approved AI models, rather than leaving the choice entirely open.
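
One practical control along those lines is an allow-list of approved model artefacts, pinned by cryptographic hash. The sketch below is illustrative rather than a production design: the file name and digest are hypothetical placeholders, and a real deployment would pair the check with a safe serialisation format such as safetensors rather than pickle.

```python
import hashlib
from pathlib import Path

# Hypothetical allow-list maintained by the security team: file name -> SHA-256
# digest recorded when the model was reviewed and approved.
APPROVED_MODELS = {
    "sentiment-classifier.safetensors": "<sha256 hex digest recorded at approval>",
}

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large weight files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_if_approved(path: Path) -> bytes:
    """Refuse to load any model whose digest is not on the allow-list."""
    expected = APPROVED_MODELS.get(path.name)
    if expected is None or sha256_of(path) != expected:
        raise PermissionError(f"{path.name} is not an approved model")
    return path.read_bytes()  # hand off to a safe loader such as safetensors
```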

Cyber criminals have also begun targeting AI developers directly. A group known as NullBulge was recently caught using public repositories to lure developers into importing corrupted libraries. By embedding themselves in the very building blocks of AI systems, attackers are exploiting the enthusiasm and speed of AI adoption.

Training Data and the Battle for Integrity

Beyond software, the integrity of training data is becoming one of the most pressing risks in the AI supply chain. AI systems depend on vast datasets to function effectively. If that data is incomplete or biased, the resulting models perform poorly. If the data is deliberately poisoned, they can become actively dangerous.

This is not a hypothetical problem. The Open Worldwide Application Security Project (OWASP) already lists data poisoning as a top threat to generative AI. Military strategists are even exploring it as a weapon of cyber conflict. For Europe, the issue is particularly acute. With strict data protection laws and high expectations around ethical AI, a model compromised by poisoned training data would not only present security risks but also undermine regulatory compliance and public trust.
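
Poisoning at source is difficult to rule out, but tampering after ingestion can at least be made detectable. One minimal approach, sketched below, is to record a per-record hash manifest for a JSONL training file when the data is approved, then verify the file against that manifest before each training run; the manifest itself would need to be signed or stored separately to be trustworthy.

```python
import hashlib
from pathlib import Path

def build_manifest(dataset: Path) -> dict[int, str]:
    """Hash each line of a JSONL training file so later tampering is detectable."""
    with dataset.open("rb") as f:
        return {i: hashlib.sha256(line).hexdigest() for i, line in enumerate(f)}

def verify(dataset: Path, manifest: dict[int, str]) -> list[int]:
    """Return the line numbers whose content no longer matches the manifest."""
    with dataset.open("rb") as f:
        return [
            i for i, line in enumerate(f)
            if manifest.get(i) != hashlib.sha256(line).hexdigest()
        ]

# manifest = build_manifest(Path("train.jsonl"))  # store and sign at ingest time
# print(verify(Path("train.jsonl"), manifest))    # re-check before each training run
```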

Adding to the challenge, recent forecasts suggest that the supply of high-quality training data may be exhausted as early as 2026. As availability shrinks, organisations may be forced to rely on lower quality sources, making systems more vulnerable to manipulation.

When AI Becomes the Supply Chain

Generative AI is not only a product of complex supply chains; it is also becoming a creator of them. Developers are increasingly using AI to generate code, automate testing, and streamline development processes. While these tools boost productivity, they also introduce new weaknesses. Code produced by AI often contains subtle flaws or vulnerabilities. If trusted without scrutiny, it can embed risks that resurface later in the supply chain.
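
Automated screening cannot replace human review, but it can catch the most obvious hazards before generated code is merged. The sketch below uses Python's ast module to flag calls that deserve a closer look; the list of suspect calls is illustrative, and a real pipeline would lean on a full static-analysis tool.

```python
import ast

# Call names that deserve human review before AI-generated code is merged.
SUSPECT_CALLS = {"eval", "exec", "compile", "system", "popen"}

def flag_risky_calls(source: str) -> list[tuple[int, str]]:
    """Return (line, call name) pairs for suspicious calls in generated code."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            name = getattr(func, "id", None) or getattr(func, "attr", None)
            if name in SUSPECT_CALLS:
                findings.append((node.lineno, name))
    return findings

generated = "import os\nos.system(user_input)\n"  # e.g. produced by a code assistant
print(flag_risky_calls(generated))  # [(2, 'system')]
```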

This dynamic means that AI can simultaneously be a consumer of supply chains, a contributor to them, and a generator of new ones. Each role carries risk, and each requires new approaches to trust and verification.

Rebuilding Trust Across Every Link

The path forward must be grounded in trust. At a technical level, this means greater adoption of Software Bills of Materials (SBOMs), which provide visibility into the components of a system and allow for independent audit. For regulators, it means enforcing transparency. The EU's AI Act, now in force and being applied in phases, requires providers of high-risk AI systems to demonstrate technical safeguards, cybersecurity measures and documentation of their models.
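
SBOMs only deliver value if they are actually queried. As a simple illustration, the sketch below reads a CycloneDX-format JSON SBOM and checks its components against known-bad releases, such as the backdoored xz versions discussed earlier; the file name and the deny-list entries are hypothetical, since component naming varies by ecosystem.

```python
import json
from pathlib import Path

def list_components(sbom_path: Path) -> list[str]:
    """Extract 'name@version' for each component in a CycloneDX JSON SBOM."""
    sbom = json.loads(sbom_path.read_text())
    return [
        f"{c.get('name')}@{c.get('version', '?')}"
        for c in sbom.get("components", [])
    ]

# Illustrative deny-list of known-bad releases, e.g. the backdoored xz builds.
DENY_LIST = {"xz-utils@5.6.0", "xz-utils@5.6.1"}

components = list_components(Path("app.cdx.json"))  # hypothetical SBOM file
print([c for c in components if c in DENY_LIST])
```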

But trust cannot stop with code. The documents that govern the AI lifecycle, from compliance attestations to technical specifications, are also being targeted. Ensuring document trust through cryptographic assurance and tamper-evident audit trails will be essential to proving provenance and preventing manipulation. Content authenticity matters too: with deepfakes sweeping the internet and social media, organisations increasingly need technology that can help prove what is real and expose what is fake.
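
Cryptographic assurance here can be as simple as a digital signature over each governing document. The sketch below uses the Ed25519 implementation in the widely used Python cryptography library; in practice the private key would live in an HSM or key-management service rather than being generated inline, and the document content shown is purely illustrative.

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

document = b"Compliance attestation for model release 2024-Q3"  # illustrative

# Generated inline for the sketch; a real key belongs in an HSM or KMS.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

signature = private_key.sign(document)

# Verification fails loudly if even one byte of the document changes.
try:
    public_key.verify(signature, document)
    print("document is authentic and unmodified")
except InvalidSignature:
    print("document has been tampered with or the signature is invalid")
```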

DNS integrity must also be prioritised. Without securing the naming infrastructure of the internet, even the most carefully verified systems risk compromise during transmission. As AI adoption accelerates, trust must be built into the very fabric of connectivity as well as the artefacts that move through it.

A European Opportunity

The risks facing AI supply chains are significant, but they also present an opportunity. Europe has the chance to lead in setting standards for trustworthy AI. By embedding principles of transparency, provenance and security into every stage of the supply chain, the UK and EMEA can build a competitive advantage rooted not just in innovation but in resilience.

Technology booms are rarely slowed, but their risks can be shaped. AI is still in its formative years. If trust becomes the central principle of its supply chains, across code, data, DNS, AI models, media content and documents, the systems built in this region may stand as models for the world. If not, Europe risks inheriting vulnerabilities that will prove far harder to unwind in the years ahead.