Sovereign AI Has Emerged as an Extension of Cloud Sovereignty
NEWS
Jensen Huang, NVIDIA's Chief Executive Officer (CEO), recently called for countries to extend beyond the sovereign cloud and focus on developing "sovereign AI" infrastructure. He argues that this framework will enable enterprises to take advantage of Artificial Intelligence (AI) opportunities, while mitigating some of the inherent risks to data sovereignty and culture. Although NVIDIA's announcements should be viewed as a "market-making tour" that offers the company significant upside, sovereign AI still has validity, and ABI Research believes it will be increasingly adopted over the next few years. All countries need access to more cloud compute resources to sustain ambitious AI strategies, and building these resources locally will help maintain control over data usage.
Sovereign AI, as advocated by Huang, would focus on the development of a national AI infrastructure. The most important element would be the rollout of AI factories (i.e., data centers) to ensure that enterprises have access to compute resources for model training and inference; but he also advocated for the development of Large Language Models (LLMs) based on local languages. The goal is to support national strategic development and protect national sovereignty, while ensuring international competitiveness.
Is Sovereign AI the Right Approach?
IMPACT
Sovereign AI is gaining plenty of attention, with NVIDIA taking the lead. Singtel (a leading communication service provider) chose NVIDIA's AI platform to support the delivery of generative AI to regional customers. Meanwhile, Scaleway (a cloud service provider and subsidiary of French telco Iliad) and Swisscom have also partnered with NVIDIA to deploy AI factories in Europe. But as sovereign AI scales, competitors like Intel and AMD should play a role, especially if supply chain constraints hinder NVIDIA's ability to keep up with global demand.
ABI Research expects sovereign AI deployments to grow steadily in the short term, with rapid acceleration expected in 2 to 3 years, in line with our enterprise AI deployment expectations. Demand from enterprises for low-cost compute capacity will be a major growth driver, but several other benefits will also contribute. One is that sovereign AI will support the central development of national data embassies/repositories, giving enterprises affordable access to reliable, regulatory-compliant datasets for cost-effective application development. These repositories will also foster national Research and Development (R&D) collaboration around common standards, tools, and compliant datasets, and the model zoos that emerge from this collaboration will lower barriers to compliant enterprise deployment. Sovereign AI will also limit exposure to (mostly U.S.-based) hyperscalers, reducing geopolitical risk and cross-border regulatory exposure.
Sovereign AI will certainly bring opportunities for national R&D and enterprise deployment, but there are significant challenges that stakeholders must be aware of:
- “Echo Chamber Effect”: Developing AI models that leverage shared regional or vertical databases and pre-trained models increases the likelihood of bias and lowers accuracy, as effective training relies on exposure to a variety of ideas. Sovereign AI strategies cannot be built in global isolation; they must connect across borders through international AI forums to ensure exposure to diverse ideas and data.
- Geopolitical Fragmentation: Sovereign AI could funnel innovation toward nationally focused projects and away from international cooperation. This could deepen technological inequality globally, but also drive geopolitical division and conflict, given the national security implications of asymmetric AI innovation. As with the echo chamber effect, international cooperation remains imperative as a complement to any sovereign AI strategy.
- Exploding Energy Consumption: AI factories are highly energy-intensive, so managing power consumption will be essential to ensure sustainable AI deployment.
- AI Talent Shortage: Talent is concentrated in innovation hubs like the United States and China. Sovereign AI strategies should mesh with ground-up policies targeting human capital development.
What Does NVIDIA's Call for "Sovereign AI" Mean for Innovators and Implementers?
RECOMMENDATIONS
Sovereign AI fits closely with the maturing concept of cloud sovereignty, leading ABI Research to expect that it will garner plenty of attention from vertical implementers, especially those outside the United States working in highly regulated industries (e.g., healthcare) or markets (e.g., Europe). Highly regulated, as well as highly distributed (e.g., logistics and supply chain), implementers have been slow to adopt AI, as they struggle to scale efficiently from Proofs of Concept (PoCs) to secure, compliant, and commercially viable deployments. However, a sovereign AI framework may help overcome some of the infrastructure, regulatory, compliance, and data challenges that continue to hinder widespread adoption.
For market innovators, sovereign AI also offers opportunities, especially for infrastructure players, which have so far been overlooked in the AI market. ABI Research expects that sovereign AI strategies will be built through alignment between infrastructure providers, AI hardware vendors (e.g., NVIDIA), and governments, with the latter providing subsidies, grants, and tax exemptions to encourage development. The following key infrastructure players are in a strong position to play leading roles:
- Regional Cloud Service Providers: Strong compute and infrastructure capabilities, with existing cloud platforms and AI expertise. Best positioned to focus on building AI factories (data centers).
- Neutral Host Providers: Strong disaggregated infrastructure, but limited AI capabilities. Best positioned to rent infrastructure to third-party AI specialists (e.g., regional cloud service providers) to deploy sovereign AI infrastructure at the edge.
- Telco Operators: Natural aggregation point with strong infrastructure, but limited compute and AI expertise. Opportunity exists to monetize highly distributed networks, especially with sovereign edge AI deployments at scale.
- Interconnection Providers: Strong networking capabilities with connections to other cloud providers enabling direct connection between enterprise, cloud, and other infrastructure. Positioned to sit between public, private, and sovereign AI cloud deployments. Some players like Equinix are already building out their AI platforms with NVIDIA DGX, which could be transformed into regional sovereign AI platforms.
One recommendation for infrastructure players is not just to assemble equipment and build AI factories, but also to aggregate the software services and tools that enable enterprises to build AI within the platform. AI factories should be “one-stop shops” where enterprises can develop and run AI models. Companies like VMware, with its existing Sovereign Cloud platform, will certainly play a role here.