Why Do We Need Layer-1 Acceleration for OPEN RAN?
NEWS
Open Radio Access Network (Open RAN) and Open Virtualized RAN (Open vRAN) are gaining momentum. However, incumbent vendors, network operators, and many other players doubt Open RAN's capability to run latency-sensitive and highly complex layer-1 RAN processing functions on standardized Commercial Off-the-Shelf (COTS) servers. Meanwhile, it is commonly believed that executing full baseband processing on COTS Central Processing Units (CPUs) is neither cost- nor energy-efficient, especially for accommodating high data traffic and enabling broad-scale network connectivity. This consensus motivates the front runners of Open RAN to promote acceleration hardware that boosts layer-1 RAN processing capability while reducing both energy and deployment costs.
In September 2020, Xilinx announced the T1, the first of what is expected to be many telco layer-1 RAN acceleration cards, marking a significant milestone in the evolution of the Open RAN ecosystem. ABI Research expects several more similar announcements over the next few months, signaling that dedicated hardware is still essential in Open RAN. Xilinx's acceleration card targets offloading layer-1 RAN and fronthaul processing functions, addressing the Distributed Unit (DU) portion of the Open RAN market. This design simplifies network scalability and accelerates the implementation of the Open RAN approach. NVIDIA, in collaboration with Ericsson, is also pursuing Graphics Processing Unit (GPU)-based acceleration card development, and by acquiring eASIC, Intel has positioned itself around a Field-Programmable Gate Array (FPGA)-based acceleration solution.
Different Methods for Layer-1 Acceleration
IMPACT
Layer-1 RAN acceleration has the potential to create a critical mass of supply chain change; therefore, truly understanding the technology and its business value will help accelerate the evaluation of the Open RAN approach. In general, existing layer-1 acceleration solutions fall into three categories:
- Application-Specific Integrated Circuit (ASIC) Card: This type of acceleration solution uses the same chipset type as traditional RAN equipment, but is built in the industry-standard Peripheral Component Interconnect Express (PCIe) card format. The customized chipset runs layer-1 RAN functions, targeting improved energy efficiency and reduced cost per unit. A typical example is the Radio Frequency System-on-Chip (RFSoC) module embedded in the Xilinx T1 acceleration card.
- GPU Card: In collaboration with specific chipset suppliers, certain infrastructure vendors are developing GPU-based acceleration solutions for layer-1 RAN processing functions. Such an implementation takes a massively multi-threaded/parallel approach to increase layer-1 compute capability and efficiency. A typical example is the NVIDIA GPU card.
- FPGA/eASIC Card: The FPGA-based solution gives full flexibility to adapt to various processing changes. Alternatively, eASIC is a chipset type that sits between ASIC and FPGA: it offers near-ASIC performance and can be configured at design time, but cannot be reprogrammed in the field as an FPGA can. Typical examples are the Intel eASIC/FPGA card and the Xilinx FPGA card.
By understanding the unique features of each card, a network deployer and its vendor partners can accurately estimate the market positioning and offer best-of-breed solutions. This strategy is consistent with Open RAN's goal of promoting multi-vendor interoperability. Specifically, an ASIC-based acceleration card is exceptionally well suited to established brownfield public network deployments with high data traffic, large shipment volumes, and low energy consumption requirements, e.g., Mobile Network Operators' (MNOs) macro base stations equipped with distributed baseband units. For network operators or industrial enterprise users with dynamic network configuration and deployment flexibility demands, an FPGA/eASIC-based solution seems to be the preferred option; for example, running layer-1 RAN processing functions at the edge or in the cloud to provide customized services. Finally, the GPU-based acceleration card is well positioned for network deployers and vendor partners with strategic plans to integrate Artificial Intelligence (AI) technology into layer-1 RAN processing functions to improve network spectrum efficiency and communication reliability. However, ABI Research does not expect this type of acceleration solution to be commercialized soon, due to cost-efficiency concerns and unclear application scenarios; once the actual value of AI in layer-1 RAN processing has been verified, GPU-based acceleration can become a viable option. In addition, the "plug and play" nature of all these acceleration cards simplifies scalable radio resource management and enables multi-vendor interoperability to expand supply diversity.
What Will Be the Best Acceleration Option?
RECOMMENDATIONS
Unlike previous generations of wireless technology, 5G will be a real driver of network flexibility for industrial verticals with diverse connectivity needs. Many Tier One network operators across the world have already launched their 5G services, starting with the consumer market. However, the full potential of 5G is still being explored, especially for enterprise users aiming to access its transformative benefits. In this context, the blind pursuit of highly configurable hardware for running communication functions, including layer-1, does little to advance 5G evolution. Although cloud-native and virtualized network architecture has proven its benefits on the core side, the full advantage of network flexibility on the RAN side has not been clarified, especially for layer-1 RAN processing functions. For example, the 3rd Generation Partnership Project (3GPP) has standardized Low-Density Parity-Check (LDPC) codes for 5G user data, and this layer-1 channel coding method is unlikely to change for the entire network evolution.
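The argument for hardwiring channel coding can be made concrete: the core of LDPC decoding is a parity (syndrome) check against a fixed matrix H, and because the standard freezes H, the operation maps naturally onto fixed-function silicon. The sketch below uses a tiny toy matrix, not the actual 3GPP 5G NR LDPC code:

```python
# Minimal LDPC-style syndrome check (illustrative toy example; the real
# 3GPP 5G NR LDPC code uses much larger, structured parity-check matrices).
# A received word r is a valid codeword iff H . r == 0 (mod 2) for the
# fixed parity-check matrix H.

H = [  # toy 3x6 parity-check matrix
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def syndrome(parity_matrix, received):
    """Compute H . r mod 2; an all-zero syndrome means no detected errors."""
    return [sum(h * b for h, b in zip(row, received)) % 2
            for row in parity_matrix]

codeword = [1, 1, 0, 0, 1, 1]            # satisfies all three parity equations
assert syndrome(H, codeword) == [0, 0, 0]

corrupted = codeword[:]
corrupted[2] ^= 1                        # flip one bit
assert any(syndrome(H, corrupted))       # non-zero syndrome flags the error
```

Since H never changes once the standard is set, there is little to gain from keeping this operation on reprogrammable hardware, which is why ASIC implementations remain attractive for it.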
On the other hand, Multiple-Input Multiple-Output (MIMO) beamforming technology does require functional flexibility, and FPGA/eASIC and even GPU-based acceleration solutions are well positioned to provide it. However, cost-effective FPGA/eASIC/GPU acceleration for MIMO beamforming is still underdeveloped for both traditional RAN and Open RAN approaches. Certain network operators may assume that highly configurable hardware can be used to adapt to future changes, but they also need to take equipment replacement life cycles into account before starting a network deployment. Given current virtualized network deployments, ABI Research believes a hybrid layer-1 RAN acceleration solution is necessary to balance deployment cost and functional flexibility. Xilinx's T1 solution sets a good example: fronthaul processing functions, which may face frequent requirement changes, are offloaded to FPGA, while channel-coding-type functions stay on an ASIC chipset. Depending on site and location requirements, as well as traffic demands, different layer-1 acceleration models should be used to leverage their processing capabilities for specific network function needs.
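The hybrid partitioning described above amounts to a routing decision per layer-1 function: stable, standardized functions go to fixed-function silicon, while functions expected to change go to reconfigurable logic. The sketch below illustrates that idea with a simple dispatch table; the function names and backend labels are hypothetical placeholders, not a real Xilinx or O-RAN API:

```python
# Hypothetical sketch of a hybrid layer-1 offload map (all names are
# illustrative, not an actual vRAN interface). The mapping encodes the
# partitioning rationale: frozen functions -> ASIC, volatile -> FPGA.

OFFLOAD_MAP = {
    "ldpc_encode":         "asic",  # channel coding is frozen by 3GPP, so hardwire it
    "ldpc_decode":         "asic",
    "fronthaul_framing":   "fpga",  # fronthaul requirements change often
    "beamforming_weights": "fpga",  # MIMO beamforming needs functional flexibility
}

def dispatch(function_name: str) -> str:
    """Return the accelerator backend for a layer-1 function, defaulting to the CPU."""
    return OFFLOAD_MAP.get(function_name, "cpu")

assert dispatch("ldpc_decode") == "asic"
assert dispatch("fronthaul_framing") == "fpga"
assert dispatch("uplink_scheduling") == "cpu"  # anything unmapped stays on the COTS CPU
```

In a real deployment the table would be driven by site, traffic, and life-cycle considerations rather than hardcoded, but the design choice is the same: route each function to the cheapest hardware that still meets its flexibility requirement.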