The Evolution of Fronthaul Networks

June 14, 2017

The continuous evolution of wireless networks promises to enhance user experience and introduce completely new applications and revenue streams for operators. The network evolution involves multiple changes that enable support of applications with a diverse set of bandwidth and latency requirements. The diversity ultimately fuels the concept of network slicing: a network of networks that can support multiple types of service-level agreements (SLAs) on the same infrastructure and that is software defined and, where applicable, virtualized.

Planning and deploying this highly programmable and scalable network requires thorough analysis of performance requirements and understanding of SLAs among the participants of the ecosystem (Figure 1), which can include wireless providers, transport/network operators, and increasingly, network function virtualization (NFV) infrastructure providers.

Figure 1. Wireless transport networks.

Planning fronthaul

Fronthaul networks represent the part of the radio access network between a baseband unit (BBU) and a remote radio head (RRH). The prominent fronthaul protocol is the Common Public Radio Interface (CPRI), which defines a simple synchronous protocol to map user data that ultimately will be modulated and transmitted from the radio unit. This simplicity enables cost-effective, small form factor designs, and it permits coordination of cells with features such as Coordinated Multipoint (CoMP) and Inter-Cell Interference Coordination (ICIC). However, the simplicity also comes at a cost: The bandwidth requirements for CPRI are high, and as operators ready deployment of higher-order multiple-input/multiple-output (MIMO) and wider-bandwidth (100-MHz) services, CPRI line rates (currently limited to 24 Gbps) will not be able to scale up. Bandwidth is not the only restrictive aspect; CPRI also demands tight delay and delay variation budgets that limit the distances and transport technologies that can be used for aggregation and switching.
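To see why CPRI runs out of headroom, a rough back-of-the-envelope calculation helps. The sketch below is illustrative only: the 15-bit sample width, 16/15 control-word overhead, and 8B/10B line coding are common CPRI assumptions, and the 122.88-Msps sample rate for a 100-MHz carrier is an assumed scaling of the LTE 20-MHz rate, not a figure from the article.

```python
def cpri_rate_gbps(sample_rate_msps, bits_per_sample, antennas,
                   control_overhead=16/15, line_coding=10/8):
    """Approximate CPRI line rate in Gbps.

    IQ payload = sample rate x 2 (I and Q) x bits per sample x antennas,
    inflated by the control-word overhead and 8B/10B line coding.
    """
    payload_bps = sample_rate_msps * 1e6 * 2 * bits_per_sample * antennas
    return payload_bps * control_overhead * line_coding / 1e9

# 20-MHz LTE carrier, 2 antennas: lands on the classic 2.4576 Gbps CPRI rate
lte_20mhz = cpri_rate_gbps(30.72, 15, 2)

# Assumed 100-MHz carrier with 8 antennas: far beyond the ~24 Gbps ceiling
wideband_8x = cpri_rate_gbps(122.88, 15, 8)
```

Even this conservative sketch lands near 40 Gbps for the wideband case, which is why the line-rate ceiling becomes a hard wall.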

These limitations have driven a new discussion about the split of functions between the BBU and RRH (Figure 2). The new functional splits are also referred to as the Next-Generation Fronthaul Interface (NGFI) and have drawn an increasing number of contributions from the 3GPP, IEEE, and CPRI/eCPRI standards bodies. The current CPRI specification places almost the entire functionality in the BBU and leaves the RRH with the basic RF function. As more functions are located inside the RRH (moving from option 7 toward option 1), the bandwidth requirements decrease and latency budgets relax. However, this displacement also eliminates some of the advantages associated with advanced LTE functions such as CoMP.

Figure 2. Options for functional split.

There is no one-size-fits-all solution. Consequently, ecosystem participants are in the process of agreeing on a small set of functional split options that can be used for various sets of applications. One recent recommendation, driven by 3GPP, relates to option 2. This option seems uniquely advantageous for fixed wireless applications, which naturally don't rely on cell site coordination and can use a fronthaul interface with a relaxed latency budget and low bandwidth requirements. Latency requirements could be in the range of several milliseconds, in contrast to CPRI's latency constraint of 200 µs. The bandwidth can be reduced by a factor of 10, and even more in some use cases.
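As a rough illustration of where that order-of-magnitude reduction comes from: CPRI streams IQ samples continuously regardless of user load, while an option 2 split carries user-plane packets whose volume tracks actual cell throughput. The throughput figures below are assumed for illustration, not taken from any specification.

```python
# Illustrative comparison for a 20-MHz LTE carrier with 2 antennas.

# CPRI IQ stream for this configuration is a constant 2.4576 Gbps,
# whether or not any user is transmitting.
cpri_iq_gbps = 2.4576

# An option 2 split carries user-plane packets instead; ~150 Mbps peak
# cell throughput (plus headers) is an assumed figure for this sketch.
option2_peak_gbps = 0.150

reduction_factor = cpri_iq_gbps / option2_peak_gbps  # on the order of 15x
```

The gap widens further at low load, since the option 2 fronthaul scales down with traffic while CPRI does not.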

Beyond option 2, there is continuing discussion on selecting one (or two) other option(s) for advanced mobility applications that demand cell coordination. By simply adding some level of PHY functionality to RRH, one can expect to gain an acceptable level of bandwidth efficiency and still support advanced mobility functions. The latency and jitter requirements, however, will still need to be carefully analyzed.

Packet switched fronthaul

The stringent latency requirements have kept the fronthaul interface away from packet switched technologies. However, with the exploding rise in bandwidth requirements in 5G networks, packet technologies can no longer be avoided. The economies of scale and the statistical multiplexing gain of Ethernet are essential for the new fronthaul. In cooperation with CPRI, IEEE 802.1 took on the task of defining the new fronthaul within the IEEE 802.1CM project. The project, titled Time-Sensitive Networking (TSN) for Fronthaul, defines profiles for bridged Ethernet networks that will carry fronthaul streams in response to requirements contributed by the CPRI organization.

The requirements can be divided into three categories:

  1. Class 1: In-phase-Quadrature (IQ) and Control and Management (C&M) data
  2. Synchronization
  3. Class 2: eCPRI, to be added in the near future.

IQ and C&M flows can be transported independently; the roundtrip delay for IQ is limited to 200 µs, and the maximum frame loss ratio is set at 10⁻⁷. C&M has more relaxed budgets.
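A simple way to reason about the 200 µs roundtrip budget is to split the one-way half of it between fiber propagation and per-switch delay. The ~5 µs/km fiber figure and the per-hop delays below are typical planning assumptions, not values from the 802.1CM profiles.

```python
FIBER_DELAY_US_PER_KM = 5.0  # ~5 us per km in standard single-mode fiber


def one_way_latency_us(distance_km, hops, per_hop_delay_us):
    """Fiber propagation plus accumulated switch delay, one way."""
    return distance_km * FIBER_DELAY_US_PER_KM + hops * per_hop_delay_us


def meets_class1_budget(distance_km, hops, per_hop_delay_us,
                        roundtrip_budget_us=200.0):
    # Class 1 IQ traffic: 200 us roundtrip, i.e. 100 us each way.
    return one_way_latency_us(distance_km, hops,
                              per_hop_delay_us) <= roundtrip_budget_us / 2
```

For instance, 15 km of fiber through two switches at 5 µs each (85 µs one way) fits, while 20 km through three such switches (115 µs) does not; every extra hop eats directly into the reach of the fronthaul link.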

Synchronization signals represent an interesting aspect, with a wide range of SLAs driven by wireless standards such as 3GPP. Four classes have been defined:

  1. Class A+: The strictest class, with a time error budget of 12.5 ns (one way) for applications such as MIMO and Tx diversity
  2. Class A: Time error budget up to 45 ns for applications including contiguous intra-cell carrier aggregation (CA)
  3. Class B: Budgets up to 110 ns for non-contiguous intra-cell CA
  4. Class C: The least strict class, with a budget up to 1.5 µs from the primary reference telecom clock (PRTC) to the end application clock recovery output, for LTE applications.
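The four classes above can be captured as a small lookup table. The helper `strictest_class_met` is hypothetical, meant only to show how a measured one-way time error maps onto these budgets.

```python
# One-way time error budgets for the four fronthaul sync classes (ns).
SYNC_CLASS_BUDGET_NS = {
    "A+": 12.5,   # e.g. MIMO, Tx diversity
    "A": 45.0,    # contiguous intra-cell carrier aggregation
    "B": 110.0,   # non-contiguous intra-cell carrier aggregation
    "C": 1500.0,  # PRTC-to-end-application budget for LTE
}


def strictest_class_met(measured_te_ns):
    """Return the strictest class whose budget the measured |TE| satisfies,
    or None if even Class C is exceeded."""
    for cls in ("A+", "A", "B", "C"):
        if measured_te_ns <= SYNC_CLASS_BUDGET_NS[cls]:
            return cls
    return None
```

So a link measured at 40 ns of time error would qualify for Class A (and therefore Classes B and C), but not for Class A+.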

Meeting these ever-increasing and demanding synchronization requirements poses new challenges for network designers. Traditional backhaul networks mostly rely on GPS receivers at cell sites. This is the simplest solution from the perspective of backhaul network design, but GPS systems have their own vulnerabilities and are not available in certain locations (e.g., indoors) or geographies. Therefore, operators around the world have increasingly begun to deploy Precision Time Protocol (PTP)/IEEE 1588v2 technologies as a backup mechanism, and in some cases as the primary synchronization source in the absence of a viable GPS solution.

In parallel with the developments for a new fronthaul, standards bodies such as the ITU-T have continued to refine and enhance the architectures and metrics for packet-based synchronization networks. The ITU-T G.826x and G.827x series provide a rich set of documents that define the architectures, profiles, and network limits for frequency and time/phase synchronization services, respectively.

Phase synchronization presents an especially interesting challenge for synchronization experts, as the fronthaul synchronization requirements above indicate. PTP is designed to synchronize the time and phase of end applications to a primary reference: the protocol continuously measures and attempts to eliminate any offset between the phase of the end application and the primary reference. However, in conventional Ethernet networks, packet delay variation has posed a major challenge to transferring clock quality acceptable for wireless applications. Ethernet switch manufacturers responded to this challenge by delivering new classes of PTP-aware nodes such as boundary clocks and transparent clocks.
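The offset measurement PTP performs can be sketched with the standard two-way timestamp exchange. Note the built-in assumption of a symmetric path delay, which is exactly what network asymmetry breaks.

```python
def ptp_offset_ns(t1, t2, t3, t4):
    """Slave clock offset from the master, assuming symmetric path delay.

    t1: Sync message sent (master clock)
    t2: Sync message received (slave clock)
    t3: Delay_Req sent (slave clock)
    t4: Delay_Req received (master clock)
    """
    return ((t2 - t1) - (t4 - t3)) / 2
```

For example, a slave running 50 ns ahead of the master across a 100 ns path yields `ptp_offset_ns(0, 150, 1000, 1050) == 50`, which the slave then removes from its clock.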

PTP-aware nodes are increasingly being deployed in wireless access networks around the world. While packet delay variation is not a major concern for these deployments, time error analysis remains a major point of focus. "Time error" is the difference between the time of a clock at one point in the network and the time of a reference clock, such as one delivered by a GPS source, at another point. It can result from network asymmetries as well as node configuration and performance issues.
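For example, asymmetry between forward and reverse path delays translates directly into time error, since PTP halves the roundtrip to estimate the one-way delay. The sketch below also shows the max|TE| figure typically reported in time error analysis; the function names are illustrative.

```python
def asymmetry_time_error_ns(forward_delay_ns, reverse_delay_ns):
    """PTP assumes symmetric delay, so half of any forward/reverse
    asymmetry appears as a constant time error at the recovered clock."""
    return (forward_delay_ns - reverse_delay_ns) / 2


def max_abs_time_error_ns(device_ts_ns, reference_ts_ns):
    """max|TE| over paired samples of the device clock and a reference
    (e.g. GPS-derived) clock captured at the same instants."""
    return max(abs(d - r) for d, r in zip(device_ts_ns, reference_ts_ns))
```

A mere 100 ns of uncorrected path asymmetry thus contributes 50 ns of time error, already exceeding the Class A budget above.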

The evolution of networks is a two-sided coin. While advances promise to enhance user experience and bring completely new applications and revenue streams for operators, they also carry risks. The new time-sensitive fronthaul interface will require thorough analysis with a stable timing reference and an accurate packet-processing engine. This application demands a new class of synchronization measurement instruments that are essential for lab verification and field deployment.

Reza Vaez-Ghaemi, Ph.D., is senior manager of product line management at Viavi Solutions.
