Modern defence capability no longer sits mainly in hardware. It sits in software that is updated, retrained and reconfigured long after deployment. The question is no longer just whether a platform works, but whether the code running it can be trusted at any given moment.
That trust is fragile. It depends on who wrote the code, how it was built, how it is updated, and who can access it. Each of those points can fail.
As more countries move towards autonomous systems, this becomes harder to manage. Capability now relies on software, data pipelines, cloud infrastructure and continuous updates. The supply chain is no longer a list of parts. It is a web of code, dependencies and access paths.
For the Ministry of Defence, this changes what “assurance” needs to look like. Many suppliers still meet baseline requirements such as Cyber Essentials, but that no longer proves much on its own. What matters is whether controls are actually working. Can a supplier prove their build pipeline is secure? Are updates signed and verified? Is access tightly controlled and monitored? If not, the risk sits inside the system, whatever the paperwork says.
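What “verified” means in practice can be made concrete. The sketch below, with hypothetical names and a digest check standing in for a full asymmetric-signature scheme, shows the minimum: an update artefact is compared against the digest recorded in a separately authenticated release manifest before it is accepted.

```python
import hashlib
import hmac

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Check an update artefact against the digest recorded in a
    (separately authenticated) release manifest. In a real pipeline
    the manifest itself would carry an asymmetric signature."""
    actual = hashlib.sha256(data).hexdigest()
    # Constant-time comparison avoids leaking digest prefixes via timing.
    return hmac.compare_digest(actual, expected_sha256)

# Hypothetical payload and manifest entry, for illustration only.
payload = b"mission-system-update-v2.1"
manifest_digest = hashlib.sha256(payload).hexdigest()

assert verify_artifact(payload, manifest_digest)             # intact update
assert not verify_artifact(payload + b"X", manifest_digest)  # tampered update
```

The point is not the cryptography, which is standard, but where the check sits: it must run on every update, automatically, before anything reaches an operational system.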
The SolarWinds breach, disclosed in 2020, showed how easily trust can be misplaced. Attackers did not break into every target. They compromised one supplier and used its update mechanism to distribute malicious code into government systems.
The 2023 Capita breach brought the same issue closer to UK defence. When a supplier handling sensitive data is compromised, the impact does not stop at that organisation. It spreads across every system and contract connected to it.
This is what digital supply chain risk looks like in practice. It includes software, firmware, build pipelines, AI models, cloud platforms and managed services. Even hardware carries risk through embedded software and remote access.
In autonomous systems, these are not supporting components. They are part of the capability. If they fail, the system fails.
Defence programmes are no longer delivering static platforms. They are delivering systems that evolve. Software updates, model retraining and configuration changes continue throughout the lifecycle.
That creates three immediate pressures.
First, update speed matters. Patches and model updates deliver capability, but they also introduce risk. If integrity checks are weak, compromised updates can move straight into operational systems.
Second, trust is shared. A single weak supplier, whether a small SME or an open-source dependency, can compromise a much larger system.
Third, the attack surface expands. Integrators, cloud providers, subcontractors and maintenance teams all sit inside the operational boundary.
These risks play out differently depending on the domain.
On land, edge AI and sensor fusion create openings for spoofing and model tampering. Field maintenance adds further risk if software authenticity is not tightly controlled.
At sea, systems operate for long periods with limited connectivity. Updates may be delayed, and multiple contractors may touch the same platform over time. Integrity has to hold under those conditions.
In the air, tempo is the pressure point. Frequent updates to mission data and control systems mean a single weak link can quickly affect safety and mission success.
Much of defence supplier assurance still relies on snapshots. Questionnaires, audits and compliance reports show what a supplier claims at a point in time. They do not show what is actually happening day to day.
A supplier can pass an audit and still introduce a vulnerable dependency the next week. A subcontractor can weaken controls without the prime contractor seeing it. A compromised update can pass through a trusted pipeline if no one is checking integrity in real time.
This model worked when supply chains moved slowly. It does not work when software changes constantly.
What is needed is continuous assurance. That means evidence that can be checked, not statements that can be filed. It also means visibility beyond Tier 1 suppliers, where many of the real risks sit.
AI is useful here, but only in a specific role.
The problem is not a lack of data. Defence programmes already generate large volumes of it: audit reports, SBOMs, vulnerability feeds, build logs and supplier submissions. The issue is that this information is fragmented and hard to compare.
AI can process that volume. It can flag unusual dependencies, highlight gaps in controls, and track whether expected behaviours such as patching or access controls are actually happening. It can also help prioritise risk by combining signals that would otherwise sit in separate systems.
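The comparison step does not need to be sophisticated to be useful. A minimal sketch, with illustrative package names and a fixed date for determinism, shows the kind of signal combination described above: a supplier's current SBOM is diffed against an approved baseline, and components whose patch cadence has lapsed are flagged alongside unexpected dependencies.

```python
from datetime import date

def flag_dependency_drift(baseline: set[str], current: set[str],
                          last_patched: dict[str, date],
                          max_age_days: int = 30) -> dict:
    """Diff a supplier's current SBOM against an approved baseline
    and flag components whose patching has lapsed."""
    today = date(2024, 1, 31)  # fixed here for a deterministic example
    stale = [pkg for pkg, patched in last_patched.items()
             if (today - patched).days > max_age_days]
    return {
        "new_dependencies": sorted(current - baseline),
        "removed_dependencies": sorted(baseline - current),
        "stale_patching": sorted(stale),
    }

# Hypothetical supplier data, for illustration only.
report = flag_dependency_drift(
    baseline={"libfoo", "libbar"},
    current={"libfoo", "libbar", "libnew"},
    last_patched={"libfoo": date(2024, 1, 20),
                  "libbar": date(2023, 11, 1)},
)
# report flags "libnew" as an unexpected dependency and
# "libbar" as overdue for patching.
```

A model layered on top of signals like these can rank which flags deserve attention; the flags themselves come from ordinary, auditable comparisons.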
Used well, this gives teams a live view of supplier risk instead of relying on periodic reviews.
However, AI does not solve the trust problem on its own. If the input data is weak, the output will be misleading. If the model cannot explain its decisions, it will not stand up in a defence context. And if attackers manipulate the inputs, they can distort the picture.
AI should support decisions, not make them. It can surface issues, assemble evidence and highlight priorities. The decision to trust a supplier or release still needs human accountability.
Before AI is scaled, the basics need to be in place.
Supplier access must be tightly controlled and visible, including below Tier 1. Evidence needs to be standardised so it can be compared across suppliers. High-risk programmes should face stricter assurance requirements. And any release that fails integrity checks should be stopped automatically.
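The last of those requirements, stopping a failed release automatically, is a fail-closed gate. A minimal sketch, with check names that are illustrative rather than drawn from any real pipeline:

```python
def release_gate(checks: dict[str, bool]) -> tuple[bool, list[str]]:
    """Fail closed: a release proceeds only if every integrity check
    passed; any single failure blocks it automatically."""
    failures = [name for name, ok in checks.items() if not ok]
    return (len(failures) == 0, failures)

# Hypothetical check results from a build pipeline.
allowed, failures = release_gate({
    "artefact_signature_valid": True,
    "sbom_matches_build": False,   # e.g. an undeclared dependency found
    "provenance_attested": True,
})
assert not allowed
assert failures == ["sbom_matches_build"]
```

The design choice that matters is the default: the gate blocks unless every check passes, rather than passing unless someone objects. That inversion is what turns integrity checking from a report into a control.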
Adoption will be gradual.
First, standardise evidence and expectations.
Then, strengthen software supply chain controls such as signing, provenance and SBOMs.
Then, introduce continuous monitoring.
Only after that does deeper AI integration make sense.
The UK defence ecosystem has started this journey, but maturity is uneven. Some programmes are advanced. Others still rely heavily on documentation and trust.
Compromise is not hypothetical. Systems will be breached. The question is how well they handle it.
That comes down to three things: resilience, recovery and accountability.
Systems need to continue operating safely under attack. They need to be restored quickly to a trusted state. And it must be clear who is responsible when something goes wrong.
Accountability is often the weakest part.
If contracts reward speed and cost alone, suppliers will treat security as secondary. If compliance is measured through policies rather than outcomes, failing controls will go unnoticed. If responsibilities are unclear, incidents will stall while organisations argue over ownership.
This can be fixed, but it requires changes to incentives and contracts. Suppliers should be rewarded for meeting measurable security outcomes. Controls should be tested and evidenced, not assumed. Responsibilities for update integrity, data integrity and access must be explicit and enforceable.
Without that, “accountability” remains a word in a document.
There is no shortage of frameworks to guide this work, from NIST and ISO through to emerging software supply chain standards. The challenge is not choosing one. It is applying them in a way that can be tested in real programmes.
Defence does not need more policy. It needs proof that controls hold under pressure, across every supplier that touches the system.
That is where trust in code is either earned or lost.