As 5G networks begin to take hold, the rising expectations of users are only adding to the pressure that service providers already feel in delivering an ever-increasing array of services to millions of subscribers. From 4K video – soon to be 8K, with roughly four times the pixels of 4K – to massively multiplayer online gaming, augmented and virtual reality offerings, real-time IoT connectivity, and artificial intelligence and machine learning applications, providers have a lot to contend with now, and we’re only getting started.
Clearly, traditional access architectures are not going to be able to accommodate these connectivity demands. Even when bandwidth is routinely measured in gigabits, raw speed alone cannot overcome the challenges that providers face. Network operators recognize the need to upgrade to increasingly intelligent networks that reduce the operational cost of delivering advanced services to users.
Let’s start with the central office
Foundational to this new network is a dynamic and scalable approach to network provisioning, especially in a 5G world where applications will demand high bandwidth and low latency. The transition from a traditional hardware-centric network to one that is software-defined requires highly scalable resources that can be deployed and managed on demand and rapidly adjusted.
A cloud-based edge computing approach has been guiding us toward the future of networking for several years, but only now are providers really focusing on distributing compute resources to improve service quality and security while adding agility and scalability.
The central office is the initial focus of this transition. By that, we mean the final node in the network controlled by the carrier: for wireline, the traditional central office (CO); for 5G, the remote edge where baseband unit pools are located; and for cable companies, the headend or hub. It makes sense to focus there, although it does present challenges. For instance, when all traffic is aggregated at the CO, the resulting bottleneck can affect throughput and latency, negating the gains from technologies such as fiber-to-the-home (FTTH).
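To see why aggregation bites, consider a back-of-the-envelope calculation. The sketch below is illustrative only; the subscriber count, per-subscriber access rate, and uplink capacity are assumed figures, not drawn from any specific deployment:

```c
/* Back-of-the-envelope CO aggregation math (assumed figures):
 * if every FTTH subscriber can burst at 1 Gbps but the CO uplink
 * tops out at 100 Gbps, the oversubscription ratio shows how quickly
 * the CO becomes the bottleneck as concurrent demand rises. */
#include <stdio.h>

int main(void) {
    const double subscribers = 10000.0;  /* homes served by the CO (assumed) */
    const double access_gbps = 1.0;      /* per-subscriber FTTH rate (assumed) */
    const double uplink_gbps = 100.0;    /* CO aggregation capacity (assumed) */

    double peak_demand = subscribers * access_gbps;
    double oversub     = peak_demand / uplink_gbps;

    printf("Peak access demand : %.0f Gbps\n", peak_demand);
    printf("Oversubscription   : %.0f:1\n", oversub);
    /* At 100:1, only about 1% of subscribers can burst at full rate
     * simultaneously; the FTTH speed advantage is negated upstream. */
    return 0;
}
```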
The solution? A virtualized, distributed network at the edge, blending the economics of a data center with the agility of a software-defined network (SDN). Efforts such as the ETSI Multi-access Edge Computing (MEC) initiative, OPNFV, and the Central Office Re-Architected as a Datacenter (CORD) project have begun applying cloud design principles to the CO.
Part of this transition involves a migration from proprietary edge hardware, such as edge routers, to commercial off-the-shelf (COTS) server arrays in the CO. This evolution enables software to be ported onto any server, bypassing the vendor lock-in of proprietary systems in favor of open, programmable edge computing environments.
The advantages of virtual COs
With their proximity to end users, virtual COs offer a significant latency advantage over centralized data centers. By placing COTS servers at the network edge, providing general-purpose computing resources able to run any function, providers gain the network and service agility needed to introduce new low-latency services.
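The latency gain is easy to quantify from first principles. The sketch below assumes only that light in fiber propagates at roughly two-thirds of c (about 200 km per millisecond); the 10 km and 500 km distances are illustrative stand-ins for an edge CO and a regional data center:

```c
/* Rough propagation-delay comparison (illustrative distances only):
 * round-trip time scales directly with how far the serving compute
 * sits from the user, before any queuing or processing delay. */
#include <stdio.h>

static double fiber_rtt_ms(double distance_km) {
    const double fiber_km_per_ms = 200.0;       /* ~2/3 of c in glass */
    return 2.0 * distance_km / fiber_km_per_ms; /* out and back */
}

int main(void) {
    printf("Virtual CO at   10 km: %.2f ms RTT\n", fiber_rtt_ms(10.0));
    printf("Regional DC at 500 km: %.2f ms RTT\n", fiber_rtt_ms(500.0));
    /* Propagation alone puts the centralized data center milliseconds
     * behind the edge CO for every request. */
    return 0;
}
```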
Virtual COs leverage the physical assets that providers already have at the network edge, enabling them to better compete with cloud companies. Value-added services such as streaming video and augmented and virtual reality applications become less challenging to introduce when virtualization and cloud design are applied to the CO. An example is content distribution network (CDN) video streaming, in which a geographically dispersed network of bare-metal servers temporarily caches and delivers live, on-demand, high-quality videos to web-connected devices based on a user’s location.
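As a rough illustration of the location-based selection such a CDN performs, here is a minimal C sketch. The node names, RTT figures, and the choice of measured RTT as the proximity metric are all assumptions for illustration, not any particular CDN's algorithm:

```c
/* Minimal sketch of location-based CDN cache selection: direct the
 * client to the edge cache with the lowest measured RTT. */
#include <stdio.h>

struct cache_node {
    const char *site;   /* CO or hub hosting the cache (hypothetical) */
    double rtt_ms;      /* measured RTT from the client (hypothetical) */
};

static const struct cache_node *nearest(const struct cache_node *nodes, int n) {
    const struct cache_node *best = &nodes[0];
    for (int i = 1; i < n; i++)
        if (nodes[i].rtt_ms < best->rtt_ms)
            best = &nodes[i];
    return best;
}

int main(void) {
    const struct cache_node nodes[] = {
        { "co-downtown",  0.8 },
        { "hub-suburban", 2.3 },
        { "regional-dc", 12.0 },
    };
    const struct cache_node *pick = nearest(nodes, 3);
    printf("Serving video from %s (%.1f ms)\n", pick->site, pick->rtt_ms);
    return 0;
}
```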
Virtualized CO challenges and solutions
CO virtualization addresses the critical requirements of disaggregating traffic, overcoming vendor lock-in, enabling value-added low-latency services, and controlling costs – but there are challenges. For instance, the disaggregated network remains a work in progress: while most providers are moving in the direction of CO virtualization, a great deal of work remains before those objectives are met.
And then there are the issues of space and power. Remote locations such as COs weren’t designed to be data centers, so they are not necessarily well suited to accommodating the arrays of servers needed for a software-based virtualized approach or to supplying sufficient power for these platforms.
One solution is programmable hardware acceleration. This technology processes data more efficiently while retaining software-like flexibility, and it adds functionality while delivering significant savings on traditional operating expenses.
Rather than add servers to boost performance – which increases costs and requires additional space and power – providers can accelerate existing hardware while adding flexibility, enhancing scalability, and reducing costs. Mounting FPGAs on network interface cards to create programmable SmartNICs delivers hardware acceleration as a software-defined solution in a compact silicon footprint. In this scenario, the virtual network function (VNF) data plane is offloaded from the CPU, and the network and security functions are ported onto the FPGA. As a result, fewer CPU cores are consumed, and the CPUs are reserved for the computing functions and user applications to which they are best suited.
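Conceptually, the offload splits traffic into a hardware fast path and a software slow path. The sketch below illustrates that pattern only; every type and function name in it is hypothetical rather than any vendor's actual SmartNIC API:

```c
/* Conceptual fast-path/slow-path split on an FPGA SmartNIC.
 * All functions here are hypothetical stand-ins: the NIC's flow table
 * handles established flows in hardware, and only table misses are
 * punted to CPU software. */
#include <stdbool.h>
#include <stdio.h>

struct packet { unsigned flow_hash; };

static bool nic_flow_table_lookup(unsigned flow_hash) {
    return flow_hash % 8 != 0;  /* pretend most flows are offloaded */
}
static void nic_process_in_hardware(struct packet *p) {
    /* Forwarding, ACL, and crypto applied at line rate on the FPGA;
     * the CPU never sees this packet. */
    (void)p;
}
static void cpu_slow_path(struct packet *p) {
    printf("flow %u: miss -> classify on CPU, install offload rule\n",
           p->flow_hash);
}

static void on_packet(struct packet *p) {
    if (nic_flow_table_lookup(p->flow_hash))
        nic_process_in_hardware(p);   /* fast path: zero CPU cycles */
    else
        cpu_slow_path(p);             /* slow path: first packet of a flow */
}

int main(void) {
    for (unsigned h = 0; h < 4; h++) {
        struct packet p = { h };
        on_packet(&p);
    }
    return 0;
}
```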
FPGA acceleration can be combined with the Data Plane Development Kit (DPDK) for standard acceleration APIs and simplified integration. Multiple virtual network functions can share a single FPGA SmartNIC for better cost efficiency and further reductions in physical space and power. For example, a typical 24-core server can consume more than 500 W to run a 20-Gbps virtual router, while running the same functions on a 100-Gbps FPGA SmartNIC uses less than 50 W – roughly a fiftyfold improvement in throughput per watt.
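To make the integration point concrete, below is a minimal receive-loop skeleton modeled on DPDK's basicfwd sample. The port number, ring and pool sizes, and the assumption that the SmartNIC exposes a standard DPDK poll-mode driver are ours; error handling is pared down to the essentials:

```c
/* Minimal DPDK poll-mode skeleton (after DPDK's basicfwd sample):
 * the VNF polls a port for packet bursts through the standard ethdev
 * API, so the same code runs whether the device underneath is a plain
 * NIC or an FPGA SmartNIC with a DPDK poll-mode driver. */
#include <stdint.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define RX_RING_SIZE 1024
#define NUM_MBUFS    8191
#define MBUF_CACHE   250
#define BURST_SIZE   32

int main(int argc, char **argv) {
    uint16_t port = 0;                     /* first ethdev port (assumed) */
    struct rte_eth_conf port_conf = {0};   /* default port configuration */

    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    struct rte_mempool *pool = rte_pktmbuf_pool_create("MBUF_POOL",
        NUM_MBUFS, MBUF_CACHE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (pool == NULL)
        rte_exit(EXIT_FAILURE, "mbuf pool creation failed\n");

    /* One RX queue, no TX queues, default queue settings. */
    if (rte_eth_dev_configure(port, 1, 0, &port_conf) != 0 ||
        rte_eth_rx_queue_setup(port, 0, RX_RING_SIZE,
                               rte_eth_dev_socket_id(port), NULL, pool) != 0 ||
        rte_eth_dev_start(port) != 0)
        rte_exit(EXIT_FAILURE, "port setup failed\n");

    for (;;) {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
        for (uint16_t i = 0; i < nb; i++) {
            /* VNF packet processing would go here; with SmartNIC
             * offload, most traffic never reaches this loop at all. */
            rte_pktmbuf_free(bufs[i]);
        }
    }
    return 0;
}
```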
In addition to savings in server cores, physical space, and power, this approach yields superior scalability even at high bandwidths by plugging additional FPGA SmartNICs into the servers. It also offers enhanced security since providers can choose to bypass the CPU entirely for encryption and decryption tasks.
As we continue to build a more comprehensive picture of the next-generation CO, it is clear that providers will rely heavily on edge virtualization and disaggregation of network traffic and functions. This is how they can effectively relieve bottlenecks and bring innovative value-added services right to the network edge.
To address the space, power, and scalability challenges of a next-generation virtual CO, an SDN solution combined with a programmable FPGA-based acceleration architecture will optimize the network edge and deliver the efficiency that providers will need in a 5G-centric connected world. Such a strategy, with its advantages of reducing costs and avoiding vendor lock-in, will give providers their best weapons for combating over-the-top (OTT) competition.
Barak Perlman is chief technology officer of Ethernity Networks, provider of programmable hardware-based acceleration for networking and security on FPGAs for virtual COs and cloud edge deployments.