Choose integration and stay ahead of the innovation gap

Cloud sovereignty and AI acceleration: the collision nobody is naming

Blog | 09/03/26
Ragnar van der Valk

Partner Technology, PwC Netherlands

Gerwin Naber

Partner, PwC Netherlands

The Dutch CIO has two additional tasks, and neither can wait. The first is scaling AI: moving from experiments to business-critical applications. The second is securing digital sovereignty: maintaining control over data, infrastructure and software in a world where geopolitical relationships are shifting and European legislation is tightening.

Both tasks are urgent, unavoidable and explicitly interconnected. Yet most organisations treat them as parallel workstreams: the AI team accelerates on infrastructure whilst the security and compliance team develops a sovereignty strategy. Two tracks, two steering groups, and two timelines. Until they collide.

Our view: those who do not treat cloud sovereignty and AI acceleration as an integrated strategic issue are building an innovation gap they can no longer close.

Leading and exposed: the Dutch paradox

The Netherlands is the most cloud-mature market in Europe, the Middle East and Africa. The figures from PwC's EMEA Cloud Business Survey 2025 speak for themselves:

  • 39 per cent of Dutch organisations report high cloud maturity, compared to 25 per cent in the rest of EMEA.
  • 67 per cent apply FinOps (financial operations) to AI processes, nearly double the European average of 39 per cent.
  • 48 per cent modernise data architectures for cloud-native analyses, versus 35 per cent elsewhere.

Impressive figures, but they tell an uncomfortable story as soon as you place the sovereignty layer alongside them. Because every step forward in cloud maturity is simultaneously a step deeper into dependency. Every AI process built on a non-European AI model and running on a non-European hyperscaler deepens the operational and software entanglement. The further you are on the cloud journey, the more you must untangle if the geopolitical context requires it.

Most organisations do not experience this as a problem, precisely because they are so well integrated. But integration is not the same as control. It is often the opposite.

Three layers of sovereignty and the blind spot

When executives discuss cloud sovereignty, the conversation almost always concerns data location. 'Are our data in Frankfurt? In Dublin? Then we're fine.' They forget that this is only the first of three layers of sovereignty.

Layer 1: data location

Many organisations stop here. But if an American party manages the servers on which those data reside, the data can still be subject to legal claims under the US CLOUD Act, regardless of physical location. Data location is necessary, but far from sufficient.

Layer 2: operational control

This is the layer where the real vulnerability begins. Can you independently manage the IT infrastructure, security policy and operational processes? Are recovery, access control and continuity of business-critical applications guaranteed without dependence on foreign operational decisions? If a hyperscaler is hit by sanctions tomorrow or unilaterally changes its service terms, who holds the key?

Layer 3: software sovereignty

This is the deepest and most complicated layer. Can you run, modify and migrate your systems and applications without depending on one specific vendor? Or are you locked into APIs controlled by others, platform-bound tooling and licensing models that can change at any moment? True software sovereignty relies on open standards, vendor-neutral architecture and decentralised technology.

Most organisations address the first layer, struggle with layer two and would rather not think about layer three. Precisely there, at layers two and three, the sovereignty question hits AI acceleration the hardest.

How AI acceleration systematically widens the sovereignty gap

AI at scale requires three things: enormous computing power, enormous data volumes and a rapid iterative development process. The tech giants deliver these like no other. Their platforms are optimised for precisely this combination – that is their business model. But every time you train, deploy and scale an AI model on hyperscaler infrastructure, the dependency grows across all three layers:

  • Computing power becomes operationally irreplaceable. Your AI does not simply run elsewhere. Integration with GPU clusters, specific APIs and platform services makes migration not only costly but, in many cases, operationally unthinkable in the short term.
  • Data architectures become platform-bound. Feature stores, ML pipelines, monitoring tooling and model registries become increasingly interwoven with the specific cloud environment. Each iteration strengthens the coupling.
  • Speed itself increases dependency. The faster you iterate – and that is precisely the promise of AI – the more proprietary services you consume. The way back becomes more difficult not with each month, but with each sprint.

Imagine the following: a Dutch financial institution trains an AI model for fraud detection on customer data of European citizens. The model runs on the infrastructure of an American hyperscaler. The data are in Europe (layer 1 addressed). But the computing power, the operational environment and the underlying software are managed from the US. One morning the geopolitical climate changes: new sanctions, adjusted service terms, a legal conflict over data access. The data are European. The control is not.

The way out: segment radically

The solution is not a wholesale switch to European alternatives. The European market has serious initiatives, but in practice such a switch is easier said than done.

Nor is the solution to ignore sovereignty and blindly accelerate on AI. That is a strategic risk that grows with each geopolitical escalation. The way out is to segment more radically: for each process, each data layer and each AI application, make one explicit decision about the balance between sovereignty and speed.

Examine each process across three layers, not one. Stop the reflex of only checking data location. Map out for each AI application, each data pipeline, each operational system:

  • Where do the data reside (layer 1)?
  • Who manages the infrastructure (layer 2)?
  • How dependent is the application on one vendor (layer 3)?

An AI model trained on customer data of European citizens has a fundamentally different sovereignty profile from an internal prediction model on anonymised operational data. So treat them differently.
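To illustrate what such a per-process mapping could look like in practice, the sketch below records the three layers for each AI application and flags those that need board attention. All names, scores and the decision rule are hypothetical illustrations, not part of any PwC methodology.

```python
from dataclasses import dataclass

# Hypothetical risk scores per layer: 0 = fully sovereign, 2 = fully dependent.
@dataclass
class SovereigntyProfile:
    process: str
    data_location: int        # layer 1: where do the data reside?
    operational_control: int  # layer 2: who manages the infrastructure?
    software_lock_in: int     # layer 3: how dependent on one vendor?

    def needs_board_decision(self) -> bool:
        # Illustrative rule: any deep dependency at layers 2 or 3
        # is a governance decision, not a team-level default.
        return self.operational_control == 2 or self.software_lock_in == 2

portfolio = [
    SovereigntyProfile("fraud detection on EU customer data", 0, 2, 2),
    SovereigntyProfile("internal prediction on anonymised ops data", 0, 1, 1),
]

for p in portfolio:
    level = "board decision" if p.needs_board_decision() else "team level"
    print(f"{p.process}: {level}")
```

The point of the exercise is not the scoring scheme itself but that every process ends up with an explicit, recorded profile across all three layers.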

Make the trade-off visible and a matter for governance. For some AI applications, dependence on a tech giant is acceptable because innovation speed justifies it and the sovereignty risk is manageable. For other applications, you deliberately bring processes under European or hybrid control, even if that costs more and proceeds more slowly. The crucial point: this must be a documented, board-level governance decision, not a creeping default that emerges because the AI team chooses whatever works fastest.

Design for portability – now, not later. Invest in containerisation, open standards and vendor-neutral architectures. Not as technical hygiene, but as strategic optionality. The costs of migration in two years are determined today by the architectural choices you make this month.
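One concrete form of that optionality is to hide vendor-specific services behind your own interface, so that a provider switch means writing one new adapter rather than rewriting every pipeline. A minimal sketch with hypothetical class names, assuming an object-storage use case:

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Vendor-neutral interface the rest of the codebase depends on."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in backend; a real adapter would wrap a provider's SDK."""
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

def archive_model(store: ObjectStore, name: str, weights: bytes) -> None:
    # Application code sees only the interface; swapping providers
    # means adding a new adapter, not touching this function.
    store.put(f"models/{name}", weights)

store = InMemoryStore()
archive_model(store, "fraud-v1", b"\x00\x01")
```

The same pattern applies to queues, feature stores and model registries: the narrower the surface area of proprietary APIs in your own code, the cheaper the exit.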

Build the internal capabilities that a segmented environment requires. Managing multicloud and hybrid architectures is fundamentally different from single cloud. It requires cloud architects who master multiple platforms, data governance teams who assess sovereignty requirements per process, and a security organisation that operates across environments. Without these skills, segmentation is a paper exercise.  


Four questions for the next board meeting

The strategic choices about sovereignty and AI acceleration do not belong in a technical meeting. They belong in the boardroom. Four questions that could be on the table there tomorrow:

  1. Which of our AI processes are currently creating invisible dependencies at layers two and three? Not where our data are, but who manages our systems and whether we can run our applications independently. If nobody in the organisation can answer this question with precision, that is the first problem to solve.
  2. Where do we consciously accept sovereignty risk for speed, and where do we do so unconsciously? The difference between a calculated risk and a creeping dependency is documentation and board approval. Make it visible and discussable.
  3. What are our actual migration costs if we are forced to switch provider tomorrow? Not the theoretical architecture diagram, but the real costs in time, money and operational disruption. This figure determines your negotiating position and your vulnerability.
  4. Do we treat sovereignty and AI acceleration as one integrated issue, or as two separate workstreams? If the answer is 'two workstreams', the collision is not a question of if, but when.
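Question three can be made concrete with even a rough back-of-envelope model. The inputs below are entirely illustrative, but forcing them into a single figure is what turns 'theoretical portability' into a negotiating position:

```python
# Entirely illustrative inputs; real figures come from your own estimates.
rebuild_days = 120          # engineering days to re-platform pipelines
day_rate_eur = 1_200        # blended daily rate
egress_tb = 40              # data volume to move out
egress_eur_per_tb = 80      # assumed egress pricing
disruption_eur = 250_000    # estimated cost of operational disruption

migration_cost = (
    rebuild_days * day_rate_eur
    + egress_tb * egress_eur_per_tb
    + disruption_eur
)
print(f"Estimated forced-migration cost: EUR {migration_cost:,}")
```

Even a crude number like this exposes which dependencies are calculated risks and which are simply unpriced.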

Building compliance and resilience

82 per cent of organisations in Europe are currently reconsidering their cloud strategy because of geopolitical pressure and new regulation. That this reconsideration is taking place is good. The question is whether it goes far enough. The organisations that are now working through the three-layer model, making explicit per-process choices between sovereignty and speed, and designing AI acceleration and digital autonomy as two sides of the same strategic coin, are building a position that is not only compliant but also resilient. A position that not only performs today but also remains standing when the geopolitical reality shifts again.

Would you like to stay up to date with the latest developments in cloud and AI?

Then subscribe to our newsletter.

About the authors

Ragnar van der Valk

Partner Technology, PwC Netherlands

Ragnar leads the technology practice at PwC Netherlands. He specialises in digital transformations. With a background in organisational change and strategic management, combined with years of experience in technical (IT) environments, he bridges the gap between strategy and sustainable, smart solutions.
Gerwin Naber

Partner, PwC Netherlands

Gerwin is a partner at PwC Netherlands and specialises in cyber, forensic investigation, and artificial intelligence (AI). He helps organisations navigate the complexities of AI and cybersecurity, particularly in preventing or responding to crisis situations.