What’s New in Data Center Networks? A 2019 Forecast

March 26, 2019
As the technology industry looks back with fondness on all we accomplished in 2018, we’re all excited for the endless possibilities of what 2019 will bring. The high-speed data center interconnect (DCI) market is no different. Following are three things we predict will happen this year.

1. Data center geographic disaggregation will become more common

Data centers consume large amounts of physical space and require extensive supporting infrastructure, such as power and cooling. Data center geographic disaggregation will become even more commonplace, as it is increasingly difficult to build a single, large, contiguous mega data center. Disaggregation will also be more critical in metropolitan areas, where land is at a premium, and for disaster recovery, where geographically diverse locations are needed. Large-bandwidth interconnects are essential to connect these data centers.

Figure 1. Three data center interconnect application categories.

Figure 1 shows the reach of the commonly used DCI categories:

  • DCI-Campus: These connect data centers located close together, such as in a campus environment. The distances are typically limited to between 2 km and 5 km. There’s also an overlap of CWDM and DWDM links over these distances, depending on fiber availability.
  • DCI-Edge: The reaches for this category range from 2 km to 120 km. These links are generally latency limited and used to connect regional, distributed data centers. DCI optical technology options include direct detection and coherent, both implemented as DWDM transmission in the C-band (192 THz to 196 THz window) of the optical fiber. Direct-detection modulation formats are amplitude modulated, have simpler detection schemes, consume less power, cost less, and in most cases need external dispersion compensation. For 100 Gbps, a 4-level pulse amplitude modulation (PAM4), direct-detection format is a cost-effective approach for DCI-Edge applications. The PAM4 modulation format carries twice the capacity of the traditional non-return-to-zero (NRZ) modulation format. For next-generation 400-Gbps (per wavelength) DCI systems, a 60-Gbaud, 16-QAM coherent format is the leading contender (a rough bit-rate calculation follows this list).
  • DCI-Metro/Long Haul: This category covers fiber distances beyond DCI-Edge, up to 3,000 km for terrestrial links and longer for subsea. Coherent modulation is used for this category, and the specific modulation type may differ across these diverse distances. Coherent formats are both amplitude- and phase-modulated, need a local oscillator laser for detection, require sophisticated digital signal processing, consume more power, reach farther, and are more expensive than direct-detection or NRZ approaches.
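
As a rough illustration of the rates named above, the sketch below works out per-wavelength bit rates for NRZ, PAM4, and a 60-Gbaud, dual-polarization 16-QAM signal. The 25-Gbaud rate for the direct-detect lanes and the ~17% FEC/framing overhead are assumptions for this back-of-the-envelope calculation, not figures from any particular product or standard.

```python
# Back-of-the-envelope line rates for the modulation formats discussed above.
# The symbol rates and FEC overhead are illustrative assumptions.

def raw_bit_rate_gbps(baud_gbd, bits_per_symbol, polarizations=1):
    """Raw (pre-FEC) bit rate in Gbps."""
    return baud_gbd * bits_per_symbol * polarizations

# NRZ: 1 bit/symbol, single polarization
nrz = raw_bit_rate_gbps(25, 1)            # 25 Gbps per 25-Gbaud lane

# PAM4: 2 bits/symbol, twice the capacity of NRZ at the same baud rate
pam4 = raw_bit_rate_gbps(25, 2)           # 50 Gbps per 25-Gbaud lane

# Dual-polarization 16-QAM: 4 bits/symbol on each of 2 polarizations
dp_16qam_raw = raw_bit_rate_gbps(60, 4, polarizations=2)   # 480 Gbps raw

# Assume roughly 17% of the raw rate is spent on FEC and framing,
# leaving approximately a 400-Gbps net payload per wavelength.
fec_overhead = 0.17
dp_16qam_net = dp_16qam_raw * (1 - fec_overhead)

print(f"NRZ lane:        {nrz} Gbps")
print(f"PAM4 lane:       {pam4} Gbps")
print(f"DP-16QAM raw:    {dp_16qam_raw} Gbps, net about {dp_16qam_net:.0f} Gbps")
```

The point of the arithmetic is simply that PAM4 doubles the bits per symbol relative to NRZ, while the coherent 16-QAM format multiplies them again by encoding four bits per symbol on two polarizations.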

Although they’re not part of the DCI infrastructure, wireless networks are also becoming integrated with the data center network.

2. Data centers will continue to evolve

Because large-bandwidth interconnects are essential to connecting these facilities, the DCI-Campus, DCI-Edge, and DCI-Metro/Long Haul segments will continue to evolve.

The DCI space has become an increasing focus for traditional DWDM system suppliers over the last few years. The growing bandwidth demands of cloud service providers (CSPs) offering software-as-a-service (SaaS), platform-as-a-service (PaaS), and infrastructure-as-a-service (IaaS) capabilities have driven demand for optical systems to connect switches and routers at the different tiers of the CSP’s data center network. Today, this requires operation at 100 Gbps, which, inside the data center, can be met with direct-attach copper (DAC) cabling, active optical cables (AOCs), or 100G “gray” optics. For links connecting data center facilities (campus or edge/metro applications), the only choice available until recently was full-featured, coherent transponder-based approaches, which are sub-optimal for this application.
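
As a loose illustration of how reach drives the choice among these interconnect options, the sketch below picks a 100G technology from an assumed set of ballpark reach thresholds. The thresholds are illustrative assumptions for this sketch, not limits defined by any standard or product.

```python
# Illustrative mapping from link reach to 100G interconnect technology.
# Reach thresholds are assumed ballpark figures, not normative limits.

def pick_100g_interconnect(reach_m: float) -> str:
    if reach_m <= 3:            # within a rack
        return "direct-attach copper (DAC)"
    if reach_m <= 100:          # across rows inside the data center
        return "active optical cable (AOC)"
    if reach_m <= 10_000:       # between halls or campus buildings
        return "gray optics over duplex single-mode fiber"
    if reach_m <= 120_000:      # DCI-Edge spans
        return "DWDM: direct detect (PAM4) or coherent"
    return "coherent DWDM (metro/long haul)"

for reach in (2, 50, 2_000, 80_000, 1_000_000):
    print(f"{reach/1000:>7.1f} km -> {pick_100g_interconnect(reach)}")
```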

Along with the transition to the 100G ecosystem, there’s been a shift in data center network architectures away from more traditional data center models where all the data center facilities reside in a single, large “mega data center” campus. Most CSPs have converged on distributed, regional architectures to achieve the required scale and provide cloud services with high availability.

Data center regions are typically located in close proximity to large metropolitan areas with high population densities to provide the best possible service (with regard to latency and availability) to end customers nearest those regions. Regional architectures vary slightly among CSPs, but consist of redundant, regional “gateways” or “hubs” that connect with the CSP’s wide area network (WAN) backbone (and possibly to edge sites for peering, local content delivery, or subsea transport). Each regional gateway connects to each of the region’s data centers, where the compute/storage servers and supporting fabrics reside. As the region needs to scale, it becomes a simple matter of procuring additional facilities and connecting them to the regional gateways. This enables rapid scaling and growth of a region, compared to the relatively high expense and long construction times of building new mega data centers, and has the side benefit of introducing the concept of diverse availability zones (AZs) within a given region.
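
To make the scaling argument concrete, the sketch below counts interconnect links in a hypothetical region with two redundant gateways: adding a data center requires only one link per gateway, so the link count grows linearly, whereas a full mesh among data centers grows quadratically. The gateway and data center counts are made-up illustrative numbers, not taken from any CSP.

```python
# Hypothetical regional topology: each data center connects to every
# regional gateway rather than to every other data center.

def gateway_links(num_gateways: int, num_data_centers: int) -> int:
    """Interconnect links in the hub-and-spoke regional design."""
    return num_gateways * num_data_centers

def full_mesh_links(num_data_centers: int) -> int:
    """Links if every data center were connected to every other one."""
    return num_data_centers * (num_data_centers - 1) // 2

for dcs in (8, 16, 32):
    print(f"{dcs:>2} data centers: {gateway_links(2, dcs):>3} gateway links "
          f"vs {full_mesh_links(dcs):>4} full-mesh links")
```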

The transition from mega data center to regional architectures introduces additional constraints that must be considered when choosing gateway and data center facility locations. For example, to ensure the same customer experience (from a latency perspective), the maximum distance between any two data centers (via the common gateway) must be bounded. Another consideration is that gray optics, which carry a single channel per fiber and have limited reach, are too inefficient for interconnecting physically disparate data center buildings within the same geographic region. Given these considerations, today’s coherent platforms are not an ideal fit for DCI applications either.
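
One way to see the latency bound: light in standard single-mode fiber propagates at roughly 200,000 km/s, or about 5 microseconds per kilometer one way, so a round-trip latency budget between two data centers translates directly into a maximum fiber path length. The sketch below uses an assumed 1-ms round-trip budget purely as an example; it is not a figure quoted by any CSP.

```python
# Translate a round-trip latency budget into a maximum fiber path length.
# Assumes about 5 microseconds of propagation delay per km of fiber
# and ignores switching and queuing delay for simplicity.

PROPAGATION_US_PER_KM = 5.0

def max_fiber_km(round_trip_budget_ms: float) -> float:
    one_way_us = round_trip_budget_ms * 1000 / 2
    return one_way_us / PROPAGATION_US_PER_KM

# Example: a hypothetical 1-ms round-trip budget between two data centers
# caps the fiber path, via the common gateway, at about 100 km.
print(f"Max fiber path: {max_fiber_km(1.0):.0f} km")
```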

In response, low-power, small-footprint, direct-detect options employing the PAM4 modulation format have been developed. Using silicon photonics, a dual-carrier transceiver featuring a PAM4 application-specific integrated circuit (ASIC) with integrated digital signal processor (DSP) and forward error correction (FEC) was developed and packaged in a QSFP28 form factor. The resulting switch-pluggable module enables DWDM transmission over typical DCI links at 4 Tbps per fiber pair, with an electrical power consumption of 4.5 W per 100G.
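
Putting those figures together, the arithmetic below simply restates the numbers in the paragraph: 100G per dual-carrier module, 4.5 W per 100G, and 4 Tbps per fiber pair. The 40-module count and the resulting carrier and power totals are implied by those figures rather than quoted directly.

```python
# Arithmetic implied by the figures above: 100G per QSFP28 module,
# two DWDM carriers per module, 4.5 W per 100G, 4 Tbps per fiber pair.

MODULE_CAPACITY_GBPS = 100
CARRIERS_PER_MODULE = 2
POWER_PER_100G_W = 4.5
FIBER_PAIR_CAPACITY_GBPS = 4000

modules_per_fiber = FIBER_PAIR_CAPACITY_GBPS // MODULE_CAPACITY_GBPS   # 40
wavelengths = modules_per_fiber * CARRIERS_PER_MODULE                  # 80
optics_power_w = modules_per_fiber * POWER_PER_100G_W                  # 180 W

print(f"{modules_per_fiber} modules, {wavelengths} DWDM carriers, "
      f"{optics_power_w:.0f} W of module power per fully loaded fiber pair")
```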

3. Silicon photonics and CMOS will be central in the optical module evolution

The combination of silicon photonics for highly integrated optical components and high-speed silicon complementary metal-oxide semiconductor (CMOS) technology for signal processing will play an even larger role in the evolution toward low-cost, low-power, switch-pluggable optical modules, enabling massive interconnection among today’s vast, live regional data center deployments.

The highly integrated silicon photonics chip is at the heart of the pluggable module. Compared to indium phosphide, the silicon CMOS platform enables foundry-level access to optical components at much larger 200-mm and 300-mm wafer sizes. The photodetectors for the 1300-nm and 1500-nm wavelengths are built by adding germanium epitaxy to the standard silicon CMOS platform. Further, silica- and silicon nitride-based components may be integrated to fabricate low-index-contrast and temperature-insensitive optical components.

In Figure 2, the output optical path of the silicon photonics chip contains a pair of traveling-wave Mach-Zehnder modulators (MZMs), one for each wavelength. The two wavelength outputs are then combined on-chip using an integrated 2:1 interleaver that functions as the DWDM multiplexer. The same silicon MZM may be used for NRZ and PAM4 modulation formats, with different drive signals.

Figure 2. Silicon photonics enables compact, high-speed optical modules.
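
As a toy illustration of that last point, the sketch below maps bit patterns to normalized modulator drive levels: the same modulator sees two levels for NRZ and four for PAM4. The Gray-coded, equally spaced level mapping is an assumption chosen for illustration, not the actual encoder used in the module.

```python
# Toy mapping of bits to normalized modulator drive levels.
# NRZ: 1 bit/symbol, 2 levels. PAM4: 2 bits/symbol, 4 levels.
# The Gray-coded PAM4 mapping below is assumed for illustration.

NRZ_LEVELS = {(0,): 0.0, (1,): 1.0}

PAM4_LEVELS = {           # Gray-coded two-bit symbols
    (0, 0): 0.0,
    (0, 1): 1 / 3,
    (1, 1): 2 / 3,
    (1, 0): 1.0,
}

def to_symbols(bits, bits_per_symbol):
    """Group a bit sequence into tuples of bits_per_symbol bits."""
    return [tuple(bits[i:i + bits_per_symbol])
            for i in range(0, len(bits), bits_per_symbol)]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
print("NRZ drive: ", [NRZ_LEVELS[s] for s in to_symbols(bits, 1)])
print("PAM4 drive:", [round(PAM4_LEVELS[s], 2) for s in to_symbols(bits, 2)])
```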

As bandwidth demands in data center networks continue to grow, Moore’s Law dictates that advances in switching silicon will enable switch and router platforms to maintain switch chip radix parity while increasing capacity per port. The next generation of switch chips all target per-port capacities of 400G. Accordingly, work has begun to ensure optical ecosystem timelines coincide with the availability of next-generation switches and routers.
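
To illustrate radix parity: if a switch chip generation quadruples its capacity while its port count stays constant, the per-port rate must also quadruple, which is exactly the jump from 100G to 400G. The 32-port radix and the chip capacities below are illustrative round numbers assumed for this sketch.

```python
# Illustrative radix-parity arithmetic: constant port count, growing chip capacity.
SWITCH_RADIX = 32  # assumed constant port count across generations

for chip_capacity_tbps in (3.2, 12.8):
    port_speed_gbps = chip_capacity_tbps * 1000 / SWITCH_RADIX
    print(f"{chip_capacity_tbps:>5} Tbps chip -> {port_speed_gbps:.0f} Gbps per port")
```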

Toward this end, a project has been initiated in the Optical Internetworking Forum (OIF), termed 400ZR, to standardize next-generation optical DCI modules and create a vendor-diverse optical ecosystem. The concept is similar to WDM PAM4, but scaled up to support 400-Gbps requirements.

2019 is definitely an exciting time for DCI. As we finish the first quarter of 2019, we’re all excited to see what the rest of the year will bring – and look forward to these predictions becoming reality.

Dr. Radha Nagarajan has served as Inphi’s chief technology officer, optical interconnect, since June 2013. He has more than 20 years of experience in high-speed optical interconnects. Prior to joining Inphi, he was at Infinera as a Fellow – working on the design, development, and commercialization of large-scale photonic integrated circuits.
