Cabling Advances for Data Center Interconnect
The data center interconnect (DCI) application was a hot topic at the recent Optical Fiber Communication Conference (OFC) in San Diego. Having emerged as an important and fast-growing segment of the network landscape, the space has been the focus of several exciting advances in fiber-optic cabling. This article will explore some of the reasons for the growth of this segment and focus on several of the new cabling technologies aimed at making this space more installer-friendly.
Best Practices for Designing and Deploying Extreme-Density Data Center Interconnects
A quick internet search of large hyperscale or multi-tenant data center spending announcements returns several expansion plans, totaling into the billions of dollars. What does this kind of investment get you? Often, that answer is a data center campus consisting of several data halls in separate buildings that are often bigger than a football field and that typically have over 100 Tbps of data flowing between them (Figure 1).
Without diving too far into the details about why these data centers are growing so large, we can simplify the explanation to two trends. The first is the exponential growth in east-west traffic created by machine-to-machine communication. The second is the adoption of flatter network architectures, such as spine-and-leaf or Clos networks. The goal is to build one large network fabric across the campus, which drives the need for 100 Tbps or more of data flowing between the buildings.
As you can imagine, building on this scale introduces several unique challenges across the network, from power and cooling down to the connectivity used to network the equipment together. On this last point, multiple approaches have been evaluated to deliver aggregate transmission rates of 100 Tbps (and eventually higher), but the prevalent model is to transmit at lower rates over many single-mode fibers. It is important to note that these interconnect links are often 2-3 km or shorter. Modeling shows that lower data rates over more fibers will remain the most cost-effective approach for at least the next few years, which explains why the industry is investing so much money in developing high-fiber-count cables and associated hardware.
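To put the resulting fiber counts in perspective, here is a minimal back-of-the-envelope sketch. The per-link data rates are illustrative assumptions (the article does not specify them), and each link is assumed to use duplex transmission over a fiber pair.

```python
# Rough fiber-count estimate for an aggregate 100 Tbps campus interconnect.
# The per-link rates below are illustrative assumptions; each link is
# assumed to use duplex transmission (one fiber per direction).

TARGET_TBPS = 100

for link_gbps in (100, 200, 400):
    links_needed = TARGET_TBPS * 1000 // link_gbps   # duplex links required
    fibers_needed = links_needed * 2                  # two fibers per duplex link
    print(f"{link_gbps} Gbps/link -> {links_needed} links, {fibers_needed} fibers")
```

Even at 400 Gbps per link, the fiber pairs quickly number in the hundreds, and at lower, more cost-effective rates they run into the thousands, which is what drives the high-fiber-count cables discussed next.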
Now that we understand the need for high-fiber-count cables, we can turn our attention to the alternatives on the market for data center interconnect. The industry has agreed that ribbon cables are the only feasible solution for this application space. Traditional loose tube cables and single-fiber splicing would take far too long to install and would result in splice hardware too large to be practical. For example, a 3456-fiber cable using a loose tube design would require more than 200 hours to terminate, assuming 4 minutes per splice. With a ribbon cable configuration, splicing time drops to less than 40 hours. In addition to these time savings, ribbon splice enclosures typically provide four to five times the splice capacity of single-fiber splice enclosures in the same hardware footprint.
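As a quick check of those figures, the sketch below reproduces the arithmetic. The 4-minutes-per-splice figure comes from the article; the roughly 8 minutes per 12-fiber mass-fusion ribbon splice is an assumption chosen only to illustrate how the sub-40-hour estimate can arise.

```python
# Termination-time comparison for a 3456-fiber cable. The 4-minute
# single-fiber splice time comes from the article; the ~8 minutes per
# 12-fiber mass-fusion ribbon splice is an illustrative assumption.

FIBER_COUNT = 3456
RIBBON_SIZE = 12
MIN_PER_SINGLE_SPLICE = 4
MIN_PER_RIBBON_SPLICE = 8   # assumption, not stated in the article

single_fiber_hours = FIBER_COUNT * MIN_PER_SINGLE_SPLICE / 60
ribbon_hours = (FIBER_COUNT // RIBBON_SIZE) * MIN_PER_RIBBON_SPLICE / 60

print(f"Single-fiber splicing: {single_fiber_hours:.0f} hours")  # ~230 hours
print(f"Ribbon splicing:       {ribbon_hours:.0f} hours")        # ~38 hours
```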
Once the industry determined that ribbon cables were the best option, it was quick to realize that traditional ribbon cable designs were not able to achieve the required fiber density in existing conduit. The industry therefore set out to essentially double the fiber density in traditional ribbon cables.
Two design approaches emerged. The first uses standard matrix ribbons packaged in more closely packable subunits; the second uses central-tube or slotted-core cable designs with loosely bonded, net-style ribbons that can fold on one another (see Figure 2).
Another time-consuming task is identifying ribbons and keeping them in the correct order to ensure correct splicing. Ribbons need to be clearly labeled so they can be sorted after the cable jacket is removed; a 3456-fiber cable contains 288 twelve-fiber ribbons. Standard matrix ribbons can be inkjet-printed with identifying print statements, while many net designs rely on dashes of varying lengths and counts to identify ribbons. This step is critical because of the sheer number of fibers that must be identified and routed. Ribbon labeling also becomes critical for network repair, when cables are damaged or cut after the initial installation.
Forward-Looking Trends
Cables with 3456 fibers look to be just a starting point, as cables with more than 5000 fibers have been discussed. Since conduits are not getting any bigger, the other emerging trend is to use fibers whose coating diameter has been reduced from the industry-standard 250 microns to 200 microns. Fiber core and cladding sizes remain unchanged, so optical performance is unaffected. This reduction in coating size can allow hundreds or thousands of additional fibers in the same size conduits as before.
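A simplified geometric estimate (my own illustration, not a figure from the article) shows where that gain comes from: if fiber count scales roughly inversely with the cross-sectional area each coated fiber occupies, shrinking the coating from 250 to 200 microns frees room for about 56% more fibers in the same footprint.

```python
import math

# Approximate packing gain from reducing the coated fiber diameter from
# 250 microns to 200 microns, assuming fiber count scales inversely with
# the cross-sectional area each coated fiber occupies (a simplification
# that ignores cable construction details).

def coated_area(diameter_um: float) -> float:
    return math.pi * (diameter_um / 2) ** 2

gain = coated_area(250) / coated_area(200)
print(f"Packing gain: ~{gain:.2f}x")  # ~1.56x
print(f"A 3456-fiber footprint could hold ~{int(3456 * gain)} 200-micron fibers")
```

Under these assumptions, a footprint that holds 3456 standard-coated fibers could hold on the order of 5400 reduced-coating fibers, consistent with the 5000-plus-fiber cables being discussed.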
The other trend will be rising customer demand for tip-to-tip solutions. Packing thousands of fibers into a cable solved the problem of conduit density, but it created many challenges in terms of risk and network deployment speed. Innovative solutions that help eliminate these risks and increase deployment velocity will continue to mature and evolve.
Demands for extreme-density cabling seem to be accelerating. Machine learning, 5G, and ever-larger data center campuses are all trending in ways that drive demand for these DCI links. These deployments will continue to challenge the industry to develop tip-to-tip solutions that scale effectively, enable maximum duct utilization, and do not become increasingly cumbersome to deploy.
David Hessong is a manager of global data center market development at Corning Inc.