05 Aug 2021 | IN-6253
The development of non-CPU processing units is focused on distributing data center workloads more evenly and processing them more efficiently.
Multiple CPU-Offload Technology Releases Target Operational Workloads, Confusing Messages for Consumers |
NEWS |
Over the last few months, there have been several announcements giving details of processing units that augment the work of the Central Processing Unit (CPU). Intel announced its Infrastructure Processing Unit (IPU) and Marvell released a Data Processing Unit (DPU), as did NVIDIA. This has resulted in some confusion amongst consumers and a lack of understanding of how these products are relevant to them.
Marvell claims to have released the industry’s first DPU back in 2005, the multicore OCTEON, although the term DPU was not in use back then; Marvell described these versatile units as “infrastructure processors”. In June this year, Marvell announced the 7th generation OCTEON processor, the OCTEON 10 DPU. There are four models in the family, all featuring 2.5 GHz, 64-bit Arm Neoverse N2 cores built on a 5 nm process. They also feature integrated 1 Terabit switching, inline artificial intelligence and machine learning (AI/ML) hardware acceleration, and vector packet processing.
Where Marvell has dropped the infrastructure processor nomenclature, Intel has picked it straight back up and applied it to its newly released acceleration technology, the Intel Infrastructure Processing Unit (IPU). Intel’s IPU is a programmable network device designed to allow cloud and communications service providers to reduce overhead on the CPU. It allows service providers to move to a fully virtualized storage and network architecture in which infrastructure functions are accelerated with dedicated protocol accelerators. It also allows for flexible workload placement, which drives up utilization and improves efficiency for cloud and communications service consumers. The IPU has been designed for deployment at scale in cloud and communications service provider data centers. This means that the enterprise is unlikely to be deploying the technology initially, but it will be using it, and the benefits will be tangible: the service operators will be able to optimize performance and make 100% of the CPU available to customers.
NVIDIA settled on the term DPU for its BlueField-branded CPU augmentation technology. NVIDIA inherited the BlueField technology and brand as a result of the Mellanox acquisition, which completed in April 2020. The BlueField-2 is available now, and the BlueField-3 is expected in Q1 2022. NVIDIA claims that “One BlueField-3 DPU delivers the equivalent data center services of up to 300 CPU cores”, which is a bold claim. If true, it means that this technology will certainly feature in the inventory of hardware employed to process next-generation workloads in enterprise data centers. It has been built with AI and accelerated computing in mind and, by isolating infrastructure operational data, it allows business application data to be treated in a more secure manner. NVIDIA has also released the DOCA SDK, which exposes a programmable API that lets developers program the acceleration of data processing and offload data movement from servers, virtual machines, and containers.
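To make the offload model more concrete, the sketch below shows the general shape of DPU-style programming that SDKs such as DOCA expose: the host enqueues data-processing or data-movement jobs to an offload engine and waits for completions instead of performing the work itself. This is a minimal Python illustration of the pattern only; OffloadJob, OffloadEngine, and submit are hypothetical names, and the “engine” is simulated with a worker thread, not the DOCA API.

# Minimal sketch of the DPU offload pattern: the host enqueues jobs to an
# offload engine and waits for completions rather than doing the work itself.
# All names here are hypothetical; the "device" is simulated with a thread.
import queue
import threading
import zlib
from dataclasses import dataclass, field


@dataclass
class OffloadJob:                      # hypothetical job descriptor
    op: str                            # e.g., "compress" or "dma_copy"
    payload: bytes
    result: bytes = b""
    done: threading.Event = field(default_factory=threading.Event)


class OffloadEngine:                   # hypothetical stand-in for a DPU work queue
    def __init__(self):
        self._q = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def submit(self, job):
        self._q.put(job)               # host side: enqueue and return immediately

    def _run(self):                    # "device" side: drain the queue and do the work
        while True:
            job = self._q.get()
            if job.op == "compress":
                job.result = zlib.compress(job.payload)
            job.done.set()             # signal completion back to the host


if __name__ == "__main__":
    engine = OffloadEngine()
    job = OffloadJob(op="compress", payload=b"telemetry record " * 1000)
    engine.submit(job)                 # the CPU is now free for application work
    job.done.wait()                    # later, wait for (or poll) the completion
    print(f"compressed {len(job.payload)} -> {len(job.result)} bytes off the main thread")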
How Will the Non-Central Processing Unit Change Data Center Dynamics? |
IMPACT |
The migration of the enterprise away from multiple mid-size servers and towards the Software-Defined Data Center (SDDC) is dramatically changing the data center dynamic. Not only is hardware being consolidated onto fewer hyperconverged appliances, but every component in a technology solution is also being replaced by a software service. These services need to run on a processor, which means they compete directly with the operating system and application software for CPU cycles. In the SDDC, the CPU is burdened with these service workloads, diverting it from the core job it was designed and optimized to do. These acceleration technologies aim to take such tasks away from the CPU. The benefits are twofold: first, the CPU is free to concentrate on high-value computation; second, the offloaded tasks can be directed to a more appropriate processing unit, one optimized for the particular task. The offloaded tasks are generally data-related tasks that ensure the data is available in the correct format when and where it is needed; examples are video processing, encryption, compression, and the movement or tracking of data. Much of this work is performed far more efficiently on devices other than the CPU.
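A minimal sketch of that routing decision is shown below: a data-plane task such as compression is sent to a specialized unit when one is present and falls back to the CPU otherwise. The Accelerator interface and the find_accelerator probe are hypothetical stand-ins for vendor discovery and runtime APIs; only the CPU fallback uses a real library.

# Sketch of routing a data-plane task to the most appropriate processing unit.
# Accelerator and find_accelerator() are hypothetical; the CPU path is real.
import zlib
from typing import Optional, Protocol


class Accelerator(Protocol):           # hypothetical offload device interface
    def compress(self, data: bytes) -> bytes: ...


def find_accelerator() -> Optional[Accelerator]:
    # A real deployment would probe for a DPU/IPU runtime here;
    # we assume none is present, so the CPU path is exercised.
    return None


def compress(data: bytes, accel: Optional[Accelerator]) -> bytes:
    if accel is not None:
        return accel.compress(data)    # offload: CPU stays free for application work
    return zlib.compress(data)         # fallback: spend CPU cycles on the task


if __name__ == "__main__":
    accel = find_accelerator()
    blob = b"log line\n" * 10_000
    out = compress(blob, accel)
    path = "accelerator" if accel else "CPU"
    print(f"compressed {len(blob)} -> {len(out)} bytes on the {path} path")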
When applied to a traditional data center that is not software defined, or to a High-Performance Compute (HPC) solution, these acceleration technologies still yield benefits for most workloads. This means that over time they will occupy an increasingly important role in the enterprise data center. ABI Research believes that this technology will continue to evolve and become more specialized, and that its place in the data center will become essential for modern processing workloads.
Which Processor Augmentation Technology Do I Need? |
RECOMMENDATIONS |
As systems integrators and hardware resellers become more familiar with these technologies, we will start to see them being deployed in enterprise data centers. Marvell’s DPU has been born out of a networking infrastructure-focused product line; its strengths lie in networking and communications. Intel’s IPU has emerged from the hyperscaler environment; its strengths will likely come to the fore when deployed at scale. With the DOCA SDK released alongside it, NVIDIA’s DPU has the potential to be the most versatile of the three. NVIDIA’s track record of developing a powerful software ecosystem to help customers extract the maximum benefit from its technology means that it is likely to be popular with customers that are happy to invest in the optimization of resources. As enterprise workloads modernize and evolve to require acceleration technologies to stay relevant, there will certainly be a processor on the market to enable that acceleration. Understanding your workloads as they evolve and benchmarking the available technology will determine which non-Central Processing Unit will accelerate enterprise workloads in the most cost-efficient manner.
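One simple way to begin that benchmarking is to measure the throughput of a representative offloadable task on each available path and normalize it by the hardware cost of providing it. The sketch below does this for the CPU path only, using an assumed price as a placeholder; a real comparison would plug each vendor’s offload runtime in behind the same interface.

# Sketch of a workload benchmark: measure throughput of a representative
# offloadable task (compression of a synthetic buffer) and normalize by an
# assumed hardware cost to compare cost-efficiency across options.
# The price and the single CPU path are placeholders, not vendor figures.
import time
import zlib

BUFFER = bytes(range(256)) * 4096          # ~1 MiB synthetic payload
ROUNDS = 50


def bench(label: str, fn, cost_usd: float) -> None:
    start = time.perf_counter()
    for _ in range(ROUNDS):
        fn(BUFFER)
    elapsed = time.perf_counter() - start
    gbps = len(BUFFER) * ROUNDS * 8 / elapsed / 1e9
    print(f"{label:>10}: {gbps:6.2f} Gbit/s  "
          f"({gbps / cost_usd * 1000:.2f} Gbit/s per $1k, assumed cost ${cost_usd})")


if __name__ == "__main__":
    # CPU baseline; an offload candidate would be benchmarked the same way.
    bench("CPU zlib", lambda d: zlib.compress(d, level=6), cost_usd=2500.0)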