Can Arm help NVIDIA become even more dominant?
NVIDIA exited 2024 with an 84.9% share of accelerators deployed in the cloud, according to Liftr Insights data. Given this dominant market position, what else can NVIDIA do to add customer value and drive more growth from cloud deployments?

What does the GPU competition look like in the cloud?
Of course, AI remains a high-growth use case, and NVIDIA can be expected to retain the majority of AI/ML share in the near future. But given the growth of AI, we are starting to see some competition to NVIDIA emerge.
Who are these other vendors, and what portion of the remaining 15.1% market share does each hold? Google and AWS are two prominent players, with a mix of their own custom accelerated computing technologies (such as Google's TPUs and AWS's Trainium and Inferentia chips). Additionally, AMD, with its Instinct GPUs, holds a 5.5% share distributed across the Azure, Oracle, and Vultr clouds. So, while there is some competition, NVIDIA may need to look for growth in other computing technologies.

NVIDIA is looking beyond the GPU
NVIDIA is more than just an accelerator provider; it also manufactures motherboards and complete servers.
By the end of 2024, the most prevalent accelerated computing configuration in the cloud consisted of an NVIDIA GPU paired with x86 processors. In fact, 86.3% of the time, NVIDIA GPUs were paired with either Intel or AMD CPUs. However, NVIDIA is more than just an accelerator provider; it also manufactures motherboards and complete servers. These motherboards and server products feature NVIDIA’s own Grace processor.

And that Grace processor is built on Arm, specifically Arm's Neoverse V2 cores. Although the processor was primarily aimed at on-premises customers, Liftr Insights noted that the first GH200 Grace Hopper CPU/accelerator combinations appeared within the smaller cloud providers near the start of 2024. The first previews in the major cloud providers only began in December 2024.
While the number of Grace Hopper solutions in the cloud remains limited, the numbers are expected to increase, particularly with cloud providers pledging to support not only Grace Hopper but also Grace Blackwell solutions in 2025.
Why Grace matters to enterprises
Better performance—The NVLink-C2C interconnect enables higher performance by unifying the CPU and GPU. It also provides a coherent shared memory model that speeds up data sharing between the two.
Lower energy consumption—Both the CPU and GPU are built on modern 4nm and 5nm process nodes, which reduces energy consumption and, in some cases, can also reduce cooling requirements.
Reduced complexity—Putting all components on a single board reduces the system's complexity and provides a better means for NVIDIA to support the solution.
Ability to better leverage NVIDIA solutions—Grace solutions are tightly integrated with NVIDIA software such as AI Enterprise and Omniverse, a benefit for enterprises already using those products.
Is there a cost advantage to Grace in the cloud?
Liftr Insights customers will know the price differences as soon as the cloud providers make additional prices available.
Despite these advantages, cost remains a major factor in cloud decision-making. There is not enough data yet to determine whether there will be a substantial price difference between an H200-with-x86 instance and a GH200 instance. As of January 2025, the only provider with both types of instances online is Lambda, but pricing for either was not yet publicly available. Liftr Insights customers will know the price differences as soon as the cloud providers make additional prices available.
It’s worth noting that the price differences between x86 and Arm have been significant, as shown in the article about Microsoft Cobalt.

Conclusion
There may be even more potential for Arm in the future.
The purpose of this series of articles, Arm Rising, was to illustrate how the data show that Arm has become a substantial player in the cloud. By leveraging multiple types of partnerships, Arm has reduced the cost of cloud computing and given cloud providers more control over their roadmaps.
However, what may be most interesting is how Arm is beginning to be adopted into more advanced workloads, such as AI. As evidenced by NVIDIA’s embedding of Arm into its solutions, there may be even more potential for Arm in the future.
At Liftr Insights, we have observed the increase in GPUs handling AI and ML workloads in the cloud over the past six years. In addition to our deep understanding of accelerated cloud infrastructure, we have also been gathering data on the scale and use of AI and ML models. With our series about AI models, we guarantee you will learn more about AI from our data-driven content than your competitors will from traditional sources.