Small-diameter cable provides big benefits for data center applications

April 1, 2010
Planning infrastructure for 40- and 100-Gigabit Ethernet means accommodating parallel optics. But parallel optics doesn’t necessarily equate to ribbon cabling.

By Michael Connaughton, RCDD

Overview

In spite of the current stagnant global economy, data center growth continues. Data centers are typically defined by their high bandwidth needs, reliability requirements, and connectivity density. The structured cabling plant must accommodate these demands while providing the flexibility to allow for future growth.

However, with the current development of the 40/100-Gigabit Ethernet (GbE) standard (IEEE 802.3ba), data center managers are faced with determining how to best equip their facilities to handle current and future requirements, while staying within fixed budgets. Planning for an efficient data center today includes planning smart for tomorrow’s growth. This includes understanding evolving standards and associated cable technologies and types.

The effects of IEEE 802.3ba

Currently working its way through the standards development process, IEEE 802.3ba is being specifically developed for data centers and high-performance computing (HPC) applications. While it is still Ethernet and provides a number of the protocol’s inherent benefits, the standard departs from its predecessors in a number of important ways.

  • Reduced link lengths: Concerns about the cost of current 10GbE transceivers led to an agreement that the link length would be reduced from the 300 m typical of LAN backbones. A length of 100 m was promoted as covering most data center sizes. There is also activity that may result in extended-length options of 150 m or more. The reduction in link length allowed a relaxation of the transceiver specification.
  • Dual data rates: This is also the first time the data rate has not simply grown by a factor of 10; two data rates are being developed simultaneously. Ultimately, it was determined that 40 Gbps was the more appropriate speed for data centers, allowing for near-term growth. The lower-cost implementation of 40GbE for data center servers was a significant factor in this decision. However, 100 Gbps is needed at interexchange carrier (IXC) locations and in HPC environments.
  • Parallel transmission: For short-reach applications, the signal will be transmitted over multiple fibers in parallel. This technology goes by several names, including parallel optics, space-division multiplexing, and multilane distribution. With this transmission method comes a requirement to manage skew performance.
  • Fiber selection for longevity: When properly executed, the cable plant should have a useful service life well in excess of 10 years. In support of this transmission-speed migration path, OM3 multimode fiber, with an effective modal bandwidth (EMB) of 2,000 MHz•km at 850 nm, is specified as a minimum. OM4, the newest fiber grade, with an EMB of 4,700 MHz•km, may be used to extend link lengths to 150 m.

Impact of parallel optics

As mentioned previously, along with 40/100GbE comes the need for parallel optics in multimode installations. Within most data centers, 40GbE will be the first of these speeds to be implemented. However, these same issues also apply to 100GbE.

While it is possible to transmit 40 Gbps serially or multiplexed across a single fiber, the cost benefits favor parallel transmission. To take advantage of existing technology, four lanes of 10GbE are multiplexed within a transceiver to provide the 40-Gbps signal. This requires a total of eight fibers (transmit plus receive). In most instances, this will likely be a 12-fiber cable (Figure 1).
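The lane arithmetic above can be sketched in a few lines. This is an illustration only (the function name and structure are not from the article); it simply restates that each 10-Gbps lane occupies one fiber per direction:

```python
def fibers_per_port(total_gbps: int, lane_gbps: int = 10) -> int:
    """Fibers needed for one duplex port built from parallel 10G lanes.

    Each lane occupies one fiber per direction, so a duplex link
    needs twice the lane count.
    """
    lanes = total_gbps // lane_gbps
    return 2 * lanes

print(fibers_per_port(40))   # 8 fibers: 4 lanes transmit + 4 lanes receive
print(fibers_per_port(100))  # 20 fibers: 10 lanes each way
```

The eight fibers for 40GbE fit naturally within a standard 12-fiber cable, which is why that construction is the likely choice.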

Since each path becomes its own channel, it is critical that the receiver can reassemble the four paths into the original signal. The relative time delay between the arrival of the fastest and slowest data is referred to as “skew.” Much of the skew can be addressed within the electronics by placing a timing signal on each path (Figure 2).

However, the path itself can contribute to skew. The current draft of IEEE 802.3ba allocates 79 ns of skew to the fiber-optic cable. Five properties, shown in Table 1, have been identified within fiber-optic cables that contribute to skew.
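As a rough illustration of what the 79-ns budget implies for strand-length differences alone, the sketch below converts between length mismatch and skew. The group index of 1.48 is an assumed typical value for silica fiber, not a figure from the article:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s
N_GROUP = 1.48     # assumed group index of silica multimode fiber

def skew_ns(delta_length_m: float) -> float:
    """Skew contributed by a strand-length difference, in nanoseconds."""
    return delta_length_m * N_GROUP / C * 1e9

def max_delta_length_m(skew_budget_ns: float = 79.0) -> float:
    """Length mismatch that alone would consume the full skew budget."""
    return skew_budget_ns * 1e-9 * C / N_GROUP

print(f"{skew_ns(1.0):.2f} ns of skew per metre of length difference")
print(f"{max_delta_length_m():.1f} m of mismatch consumes the 79-ns budget")
```

Under these assumptions, one metre of length mismatch contributes roughly 5 ns of skew, so the full budget corresponds to about 16 m of length difference per link, which helps explain why practical cable constructions can meet the specification with ample margin.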

In this table, a distinction is made between static and dynamic skew. Static skew is fixed into the cable construction and does not change over time. Dynamic skew is affected by mechanical and environmental changes to which the cable is exposed. Among these contributors, numerical aperture (NA) and fiber strand length difference are static and the largest contributors to fiber skew. Cabling stress effects, differential mode delay, and the effect of wavelength fluctuations on group delay are dynamic components.

Since strand length is one of the important contributors, the perception exists that a ribbon cable is required to meet the skew requirements. To evaluate the validity of this perception, the Nexans Data Communications Competence Center (DCCC) studied the relative skew performance of various cable types. The resulting data firmly shows that a ribbon cable construction is not required to achieve low optical skew derived from strand length.

The study further shows that:

  • The properties of the optical fibers (NA, differential mode delay, chromatic dispersion, etc.) within a cable are among the most significant contributors to skew. These properties are independent of the cable construction.
  • The stress effects on the optical fibers due to the cabling process have the most significant impact on the dynamic skew. (Prior studies have shown that ribbon constructions might actually have worse performance than other cable types within this parameter. See “Low Skew Optical-Fiber Cord Cable for InfiniBand,” Syami, et al, 55th IWCS Proceedings.)
  • Loose-tube cable designs are shown to exceed the draft requirements of the IEEE standard by an order of magnitude.

Major data center concerns

One of the biggest concerns of data center managers is efficient space utilization. With bandwidth demands increasing within a fixed square footage, the cabling infrastructure needs to be as small as possible while maintaining sufficient capacity to address expected growth. The most immediate impact of the 40/100GbE standard will be the dramatic increase in the number of multimode fibers needed. Instead of two fibers per port, a minimum of eight fibers (for 40GbE) or 20 fibers (for 100GbE) will now be needed for each port. This highlights the need for compact designs with high fiber counts.
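To put that growth in concrete terms, the short sketch below totals the fibers for a fully populated switch line card at each generation. The 48-port count is a hypothetical example, not a figure from the article:

```python
# Fibers per duplex port, as described in the text.
FIBERS_PER_PORT = {"10GbE": 2, "40GbE": 8, "100GbE": 20}

ports = 48  # hypothetical line-card port count
for speed, fibers in FIBERS_PER_PORT.items():
    print(f"{speed}: {ports * fibers} fibers for {ports} ports")
```

A card that needed 96 fibers at 10GbE needs 960 at 100GbE, a tenfold jump through the same pathways, which is the space problem small-diameter, high-count cables address.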

Another significant concern is the cost of power. Significant attention has been placed on the data center’s ability to efficiently use the power supplied to it. Link aggregation can be used to achieve 40 or 100 Gbps, but it is power-inefficient. The use of 40/100GbE switches will reduce the number of watts per bit. It can also reduce the number of switches required, which lowers the amount of heat generated, which, in turn, lowers the amount of cooling needed. This cascading power effect results in significant operational cost savings.

Reliability of operation is also at or near the top of the list of data center concerns. The cost of downtime can be measured in percentage points of availability or in lost bits, but in today’s market the damage to a company’s reputation can be the most significant impact. Use of proven technologies and dependable vendors is the best way to achieve the necessary reliability targets.

Cable options in the data center: ribbon or loose tube

Many parallel optical systems have used ribbon fiber. Ribbon cables used with low-skew applications are typically constructed with individual 12-fiber ribbons. However, most data center links require significantly more fibers, which results in multiple ribbons that are stacked within the cable.

The biggest impairment of a ribbon cable design stems from the fibers being locked into fixed positions, even when the cable is flexed. The inability of the fibers to move freely results in strain, which increases attenuation. The techniques commonly used to mitigate these mechanical deficiencies result in larger, less flexible cable constructions. These constructions limit the bending forces on the fiber, but also restrict ease of installation.

Loose-tube cables come in many varieties. The defining attribute is the use of multiple 250-µm coated fibers loosely floating within a single tube. This construction enables virtually strain-free fibers. It also allows for varied relative fiber lengths. Strand length variation is a primary contributor to skew; however, as mentioned earlier, the skew specification limits of 40/100GbE are well above the worst case values measured by the DCCC in loose-tube cables. In addition, the loose-tube design provides the lowest stress environment for the fibers—and fiber stress is the primary contributor to dynamic skew.

Since no compensation is needed for the ribbon mechanics, loose-tube cables are much more compact than ribbon cables. The smaller diameters and improved flexibility enable better pathway use and routing options within cabinets and racks.

As an example, Table 2 compares ribbon cable with a reduced loose-tube design, designated as “MDP.” The reduced tube size leads to a significant size reduction in the overall cable, as the illustration next to the table shows. The greatly reduced diameter can enable a reduction in pathway costs due to the smaller size and weight requirements. This cable design provides the opportunity to have “ribbon-like” length control with the low-stress environment of a loose tube cable. The cables can be easily routed within the cable management and patch panels.

The MDP cable is best suited for pre-terminated assembly constructions. Pre-terminated assemblies provide the best assurance of quality terminations combined with labor savings during installation.

Choosing the best cabling option

As data rates increase, the number of fibers required per link will also increase, and total fiber counts can grow rapidly. When looking at the needs of the data center, loose-tube cables, specifically small-diameter cables, can provide the best option for multimode installations through these benefits:

  • Standards compliance: The cables can be installed today and used in 1- or 10GbE systems, while ensuring support for 40/100GbE systems.
  • Mechanical advantage: The cable can be easily installed and routed through various cable management systems, minimizing the preparation and labor associated with the installation.
  • Lower total cost: Size and lower cost of labor and materials add up to a lower total cost of ownership, resulting in a better return on investment for the end user.

Michael Connaughton, RCDD, is fiber optic product business manager at Berk-Tek, a Nexans Company.

Links to more information

Lightwave: 40- and 100GbE Transmission over Multimode Fiber
Lightwave webcast: Fiber Handling Essentials for Next-Generation Networks
Lightwave: Certifying MMF for 100G Ethernet Transmission
