When “sovereignty” means everything, it means nothing
One word features more than any other in startup pitch decks, at VC AGMs and in government tech comms at the moment: “sovereignty”. It has come to define a heady mix of manufacturing reshoring, national security capability, billion-dollar investments, anti-Big Tech sentiment, and frontier technologies.
Ironically for a word that is neither novel nor British (thank the Romans and the French), it’s the big new thing in tech and economic policy. The Chancellor used it six times in her flagship Mais Lecture in March, we’ve had parliamentary debates about its meaning, and it’s the moniker of a new government AI unit and investment fund. Only yesterday Keir Starmer claimed that “steel is the ultimate sovereign capability” in his attempt to re-launch his faltering government.
But what exactly does it mean? With sovereignty quickly becoming the organising principle for UK tech policy, we risk wasting vast amounts of time and money unless we define it more precisely.
The Chair of the UK Science and Technology Committee, Chi Onwurah MP, articulated this concern perfectly in the FT a few weeks ago: “As technology sovereignty rockets up the geopolitical agenda, each minister who has come before my committee has defined it differently. My favourite definition of sovereignty, however, is that it means exactly what you want it to mean.”
Most “sovereign” concerns boil down to autonomy, prosperity, and security
“Sovereignty” captures a moment of acute British political, economic and technological insecurity. It is invoked with respect to a vast litany of overlapping, but distinct, concerns:
Preventing physical and digital harm; maintaining territorial integrity; creating “homegrown champions”; leading the world in frontier tech; securing foreign direct investment into the UK; reducing the UK’s dependence on foreign supply chains and Big Tech; creating manufacturing jobs; removing foreign influence from critical infrastructure; energy self-sufficiency; boosting exports; controlling sensitive data in the UK; the ability to carry out our regulatory and policy preferences in the tech domain (e.g. enforce rules on global companies).
When you scratch beneath the surface, these broadly fall into three clusters, all of which apply with particular force to AI.
Autonomy - The capacity to make informed, independent decisions, and then to act upon them, free from external control. In the tech context, this often centres on the UK’s ability to make policy decisions and see them enacted. We value the ability to make up our own minds as a means to prosperity and security, but also (as Brexit clearly demonstrated…) as an end in itself.
As applied to AI: How do we want the AI revolution to play out? To what degree are we able to influence how this revolution plays out? Where we want to regulate AI, can we enforce those regulations?
Prosperity - This discussion tends to focus on GDP growth and raising living standards, but at a more granular level it involves jobs (both quantity and quality), productivity, and the tax receipts these generate. FDI, domestic business investment, “high-value” R&D, the existence of British unicorns - these are proxies for, and means to, a broader prosperity.
As applied to AI: Will individuals be able to sustainably sell their labour for reasonable pay in the future, and if so, how? Is any sector or part of the value chain safe? What education and upskilling needs to take place to support the new world? How and where can the UK participate in the huge value-creation happening in AI (be “AI makers, not AI takers”, in government language)?
Security - Protecting the population from harm. This has a kinetic, physical dimension which has come rapidly to the fore following the conflicts in Ukraine and now Iran, but also a cyber dimension which was escalating even before Mythos put it into the stratosphere.
As applied to AI: What parts of the stack do we really need to on-shore for security reasons, versus relying on close allies; versus even relying on the open market? How do we ensure resilience in AI supply chains, against either companies or nation states? Can we increase our own leverage at the same time as minimising the leverage of third party countries?
These concerns aren’t novel. We deal with them in many other strategically significant sectors. Take financial services, where so much policy and regulation is aimed at: security (e.g. capital adequacy, operational resilience), ensuring continuity of our domestic financial services provision is not compromised by deliberate or accidental action by a third party; prosperity - FS is one of the eight industrial strategy priority sectors, a strategically significant source of both jobs and services; and autonomy - parts of the Brexit debate were focused on control of the financial services regulatory rule-book.
But these concerns are particularly acute with AI. The speed at which the revolution is happening, the potential breadth of the upheaval (every single sector), and the number of unanswered questions about how it will play out across our framework mean two things: we should be spending serious money on tracking and answering these questions, and we should be absolutely clear which specific concern we are talking about.
We’ll destroy value if we’re unclear on the trade-offs
Autonomy, prosperity and security naturally overlap, but they also sometimes conflict. What is best for future prosperity may not be what is best for future security; and what is best for future security is often not best for prosperity. What may be narrowly best for autonomy is probably best for neither security nor prosperity.
There’s no doubt that having national access to pretty much the whole tech stack would be advantageous for security reasons, but using state or private-sector funding to achieve that would likely be a vast waste of investment and sub-optimal for future jobs. The AI stack is fundamentally suited to a division of labour across a large, international ecosystem, given the extreme levels of both capex and talent needed. Resisting this will inevitably leave certain parts of our domestic ecosystem internationally uncompetitive. We need to identify where dependence on others might pose a security risk, but also be honest about the cost to prosperity of battling specialisation and international integration.
When Keir Starmer invoked “sovereignty” as the justification for the nationalisation of British Steel, he was presumably making an argument about one or more of autonomy, prosperity and security. But when the government has already spent hundreds of millions on successive attempts to prevent the failure of the industry, and is likely to face bills of billions more, it is important that he specifies, well beyond the simple term “sovereignty”, exactly what his objectives are.
Similarly, investments in AI by the new Sovereign AI fund, as well as the British Business Bank and other arms of the UK government, need a more refined, consistent and measurable definition of “sovereignty” if they are to fulfil their potential and withstand the criticism that these initiatives invariably face over time.
Across the tech and economic policy landscape, as “sovereignty” becomes the most-used justification for intervention, it must cease “meaning whatever you want it to mean”.