While 5G specifications are under development and mobile core network virtualization is progressing, the industry is working on Radio Access Network (RAN) infrastructure enhancements for significantly better modularity and flexibility. At this point, it’s pretty clear across the industry that 5G infrastructure will adopt SDN and NFV technologies. But what additional enhancements on top of a virtualized and software-defined RAN infrastructure are essential for enabling the diverse set of new services and capabilities that 5G promises?
Disaggregating the RAN
Traditional and current RAN equipment implementations are monolithic system designs; take, for example, an eNodeB system. Innovation and optimization happen at the system level, spanning both software and hardware.
NFV, of course, drives one level of disaggregation by separating the application software, i.e. the vBBU (virtualized baseband unit), from the hardware. Here, dedicated hardware is replaced by virtual machines provisioned from racks of commercial off-the-shelf (COTS) servers. In reality, though, RAN processing has proven too complex for general purpose servers and platforms to perform with reasonable efficiency.
For example, the LTE protocol definition imposes strict latency requirements: RAN processing must complete within a one millisecond time interval. This hard, real-time requirement is already highly challenging for a software-only vBBU implementation running on a general purpose platform. And it will only get harder. For 5G, this interval shrinks to just 200 microseconds, a fivefold reduction in latency needed to support new services like the tactile internet. From a RAN processing perspective, the same work must be completed in one fifth of the time, multiplying the processing challenge roughly fivefold.
Additionally, LTE baseband processing involves air-interface cryptographic operations using LTE-specific cryptographic algorithms such as SNOW 3G and ZUC. These algorithms are typically not accelerated by general purpose processors. As such, effective RAN processing requires COTS servers based on workload-optimized processors that integrate the relevant accelerators while supporting a standard software platform and a standard acceleration API.
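To make the idea of a standard acceleration API concrete, here is a minimal Python sketch, assuming a hypothetical platform interface; the class and method names are illustrative and not taken from any real SDK. The vBBU code targets a single cipher interface, and the platform supplies either a hardware-offloaded implementation or a slower software fallback.

# Hypothetical acceleration-API pattern: the vBBU is written against one
# interface; the platform binds it to hardware offload when available.
class AirInterfaceCipher:
    def encrypt(self, key: bytes, count: int, bearer: int, data: bytes) -> bytes:
        raise NotImplementedError

class HwSnow3G(AirInterfaceCipher):
    """Dispatches to an on-SoC security engine (assumed to exist on a
    workload-optimized processor)."""
    def encrypt(self, key, count, bearer, data):
        ...  # hand the buffer to the accelerator driver

class SwSnow3G(AirInterfaceCipher):
    """Pure-software fallback on a general purpose core; functionally
    identical but far slower for line-rate baseband traffic."""
    def encrypt(self, key, count, bearer, data):
        ...  # bit-oriented keystream generation in software

def select_cipher(has_security_engine: bool) -> AirInterfaceCipher:
    # The selection is transparent to the rest of the vBBU software.
    return HwSnow3G() if has_security_engine else SwSnow3G()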
As with NFV, SDN enables another level of disaggregation by separating control plane and user plane processing. This enables dynamic scaling of processing resources at a finer granularity. For example, massive IoT and machine-to-machine (M2M) connectivity involve significant control processing to manage a large number of connections, while the actual data throughput, and the corresponding user plane processing resources required, may be low. On the other hand, e-surgery sessions call for reserved bandwidth and high user plane throughput to communicate high resolution video and data at very low latency and high reliability. A virtualized and disaggregated mobile infrastructure can scale control plane and user plane processing resources up or down independently, cost-effectively and dynamically optimizing for the specific services being deployed.
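As a rough illustration of that independent scaling, the sketch below dimensions the control plane and user plane separately from the connection count and the throughput demand; the per-instance capacities are invented purely for illustration.

# Illustrative dimensioning of control plane vs. user plane VNF instances.
# The per-instance capacities below are assumptions, not vendor figures.
CONN_PER_CP_INSTANCE = 50_000   # connections one control-plane VNF can manage
GBPS_PER_UP_INSTANCE = 10       # throughput one user-plane VNF can carry

def dimension(connections: int, throughput_gbps: int):
    cp = -(-connections // CONN_PER_CP_INSTANCE)            # ceiling division
    up = max(1, -(-throughput_gbps // GBPS_PER_UP_INSTANCE))
    return cp, up

# Massive IoT: millions of connections, little data -> control-plane heavy.
print(dimension(connections=2_000_000, throughput_gbps=5))   # (40, 1)
# e-surgery video: few connections, guaranteed throughput -> user-plane heavy.
print(dimension(connections=10, throughput_gbps=40))         # (1, 4)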
Splitting the RAN with an intelligent front haul
Monolithic eNodeBs are virtualized by migrating most or all of the baseband processing to a centralized datacenter infrastructure in the form of vBBU VNFs. The high level concept is that only the antennas and radios, as remote radio units (RRUs), would be required at cell sites. The switch fabric connecting the RRUs and the centralized vBBUs is the front haul. This eNodeB virtualization approach is usually referred to as C-RAN, where the ‘C’ stands for “centralized” or “cloud”. C-RAN provides the typical virtualization benefits: dynamically provisioning processing resources to meet elastic usage demand and optimizing resource utilization.
In addition to virtualization benefits, C-RAN improves network performance by enabling efficient implementation of CoMP (Coordinated Multipoint). CoMP is an LTE-Advanced Release 11 capability for improving network performance, especially near cell edges. With CoMP, multiple eNodeBs, which may be in different sectors and geographical locations, work together to boost network performance using a variety of techniques.
For example, by coordinating these eNodeBs properly, inter-cell interference can be avoided. Multiple eNodeBs can communicate with the same user equipment to boost throughput through joint scheduling, beamforming, and transmission in a coordinated fashion. With discrete eNodeBs, CoMP is challenging because it requires low latency communication among physically and geographically separated units. With C-RAN, the same vBBU can work directly with geographically separated RRUs to scale up CoMP benefits far more efficiently and effectively than individual eNodeBs can.
On the other hand, C-RAN poses front haul challenges in terms of bandwidth and latency requirements. For example, the front haul bandwidth required to connect a 20 MHz 2×2 MIMO sector RRU with a centralized vBBU is about 2.4 Gbps. A 100 MHz sector with eight antennas may require 200 Gbps of front haul bandwidth. Such high bandwidth requirements pose a serious scalability challenge for the C-RAN front haul and limit deployment opportunities.
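A back-of-the-envelope Python sketch shows where a figure of roughly 2.4 Gbps comes from, assuming CPRI-style I/Q transport with 15-bit samples, the usual 16/15 control-word overhead, and 8b/10b line coding; real numbers vary with quantization and framing choices, and wider carriers and larger antenna counts scale the rate multiplicatively.

# Rough CPRI-style front haul rate estimate (assumptions: 15-bit I/Q samples,
# 16/15 control-word overhead, 8b/10b line coding).
def fronthaul_rate_gbps(sample_rate_msps: float, antennas: int,
                        bits_per_sample: int = 15) -> float:
    iq_bps = sample_rate_msps * 1e6 * 2 * bits_per_sample * antennas  # I + Q
    return iq_bps * (16 / 15) * (10 / 8) / 1e9                        # overheads

# 20 MHz LTE carrier (30.72 Msps), 2x2 MIMO -> ~2.46 Gbps.
print(round(fronthaul_rate_gbps(30.72, antennas=2), 2))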
The most promising solution is to reduce the required front haul bandwidth by performing at least part of the L1 PHY processing in the RRUs. In other words, the processing of the LTE stack, which a monolithic eNodeB performs in its entirety, is split between the RRU and the centralized vBBU. The industry has identified multiple points where the LTE stack may be split, along with the corresponding trade-offs in front haul bandwidth requirements and latency tolerance.
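The trade-off can be summarized in a short sketch; the split names follow common industry usage, while the qualitative ratings are illustrative rather than figures from any specification.

# Qualitative comparison of where the LTE stack can be split between the RRU
# and the centralized vBBU. Ratings are illustrative, not normative.
from dataclasses import dataclass

@dataclass
class SplitOption:
    name: str               # where the stack is cut
    rru_processing: str     # what stays at the cell site
    fronthaul_load: str     # bandwidth needed on the front haul
    latency_tolerance: str  # how forgiving the front haul link can be

SPLITS = [
    SplitOption("RF/PHY (full I/Q)", "radio only",
                "highest, traffic-independent", "tightest"),
    SplitOption("Intra-PHY (low PHY at RRU)", "FFT/iFFT, resource mapping",
                "reduced", "tight"),
    SplitOption("PHY/MAC", "entire L1 PHY",
                "scales with user traffic", "more relaxed"),
]

for s in SPLITS:
    print(f"{s.name}: front haul {s.fronthaul_load}; latency {s.latency_tolerance}")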
To facilitate interoperability, standardization specifying the split options and defining the software APIs is essential. In addition, merchant silicon SoC (system-on-chip) offerings for implementing intelligent RRUs that can perform L1 PHY processing, control processing, and additional functions can expedite C-RAN deployment across the industry.
Slicing the RAN
To support the diverse network requirements imposed by the various ground-breaking 5G services, the end-to-end mobile infrastructure needs to be orchestrated dynamically into logical slices. Each logical slice is individually optimized to support the subscribed services that are active at the moment. For example, one logical slice of the network can be optimized for highly reliable connections with reserved bandwidth for each connection. Another can be optimized for maintaining a large number of active IoT connections with best-effort throughput while keeping operating costs below target.
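As an illustration, a slice might be described to an orchestrator along these lines; the field names and values below are hypothetical rather than taken from any 3GPP or ETSI template.

# Hypothetical slice templates an orchestrator might consume; all fields and
# values are illustrative assumptions.
SLICE_TEMPLATES = {
    "ultra-reliable-video": {            # e.g. an e-surgery session
        "reserved_bandwidth_mbps": 500,
        "reliability_target": 0.99999,
        "max_latency_ms": 1,
        "user_plane_scaling": "aggressive",     # throughput-dominated workload
        "control_plane_scaling": "minimal",
    },
    "massive-iot": {                     # huge connection count, little data
        "reserved_bandwidth_mbps": 0,    # best-effort throughput
        "reliability_target": 0.99,
        "max_latency_ms": 100,
        "user_plane_scaling": "minimal",
        "control_plane_scaling": "aggressive",  # connection management dominates
    },
}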
As of today, mobile network virtualization has been realized primarily in the mobile core. C-RAN is much more complex than EPC virtualization, and RAN slicing complicates C-RAN implementation and deployment further. This highlights the importance of workload-optimized processor solutions for COTS servers that provide high performance compute and the relevant integrated hardware acceleration while supporting standard software platforms and APIs for interoperability.
Wrapping it up
5G calls for significant enhancements to the mobile infrastructure, well beyond applying SDN and NFV and adopting new radio technologies. Disaggregation improves operational efficiency and resource utilization. Intelligent front haul split options maximize deployment opportunities by relieving the very challenging front haul bandwidth and latency requirements. Network slicing enables individually optimized logical network slices that cater to the characteristics of the active services and the needs of the individual subscribers served by each slice.
Kin-Yip Liu is Senior Director of Solutions Architecture at Cavium, a San Jose, Calif.-based fabless semiconductor company specializing in ARM-based and MIPS-based network, video, and security processors, and SoCs.