40 Gbits/sec makes sense now

June 11, 2002

[Ed. Note: The following feature was written in response to an article that appeared in the December 2001 edition of Lightwave Magazine, "Implement 10 Gbits/sec now, 40 Gbits/sec when it makes sense," written by Tom Mock of Ciena Corp.]

By Ed Harstead
Lucent Technologies

There is a wide consensus that operators should not deploy 40-Gbit/sec technologies solely for bragging rights, but only when they become cost-effective compared to today's technologies; that is, when the ratio of the cost of 40-Gbits/sec to 10-Gbits/sec falls below four. However, consensus evaporates around the question of when this will happen. For example, it is claimed that the 40-Gbit/sec to 10-Gbit/sec cost ratio is currently 10 [1]. In this note, we explain how the barriers to 40-Gbit/sec transmission have been either exaggerated or misunderstood, and how the cost ratio of less than four has already been attained with commercially available 40-Gbit/sec products.

Fundamentally, the optical signal-to-noise ratio (OSNR) requirement of a 40-Gbit/sec line signal is 6 dB higher than for a 10-Gbit/sec signal, which translates into longer unregenerated reach for 10 Gbits/sec. For applications exceeding the shorter reach of 40-Gbits/sec, 10-Gbit/sec transmission is economically superior just from the elimination of 40-Gbit/sec regeneration. What reaches are 10-Gbit/sec and 40-Gbit/sec systems capable of? On commercially available DWDM products that can support both 10- and 40-Gbit/sec transmission with the same infrastructure (that is, the same repeaters, dispersion compensating fibers, etc.), 10-Gbit/sec reach is about 4,000 km, while 40-Gbit/sec reach is about 1,000 km. This 1,000 km--which exceeds that of current 2nd generation 10-Gbit/sec DWDM systems--represents a very sizeable potential application space for 40 Gbits/sec. The 40-Gbit/sec optical interface, with one-quarter the number of electrical and optical components, has a chance to prove-in. But does it?
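The 6-dB figure follows from the linear scaling of receiver noise bandwidth with bit rate: quadrupling the rate admits four times the noise, so the required OSNR rises by 10·log10(4) ≈ 6 dB. Since OSNR in an amplified line falls roughly 3 dB per doubling of span count, a 6-dB deficit costs about a factor of four in unregenerated reach. A back-of-envelope sketch (the reach figures echoed in the comments are the article's approximate numbers, not a link-design calculation):

```python
import math

def osnr_penalty_db(rate_high, rate_low):
    """Required-OSNR increase when the bit rate rises from rate_low
    to rate_high (receiver noise bandwidth scales linearly with
    bit rate)."""
    return 10 * math.log10(rate_high / rate_low)

penalty = osnr_penalty_db(40e9, 10e9)
print(f"{penalty:.1f} dB")  # 6.0 dB

# Each doubling of span count costs ~3 dB of delivered OSNR, so a
# 6-dB deficit translates to roughly a factor of 4 in unregenerated
# reach -- consistent with ~4,000 km at 10 Gbits/sec vs. ~1,000 km
# at 40 Gbits/sec.
reach_ratio = 10 ** (penalty / 10)
print(f"reach ratio ~ {reach_ratio:.1f}")  # 4.0
```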

Despite the claims in [1], most transmission penalties do not scale with the square of the bit rate. One penalty that does scale quadratically is chromatic dispersion, but with currently available per-channel tunable dispersion compensators, it can be cost-effectively mitigated. Most other penalties scale linearly with bit rate, not at all, or even inversely.
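The bit-rate-squared scaling of chromatic dispersion is easy to see numerically. A commonly quoted rule of thumb (illustrative only; the exact limit depends on modulation format and penalty allowance) puts the NRZ dispersion tolerance near 1,000 ps/nm at 10 Gbits/sec; scaling by (10/40)² leaves only about 60 ps/nm at 40 Gbits/sec, which is why per-channel tunable compensation matters at the higher rate:

```python
# Chromatic dispersion tolerance scales as 1/B^2: raising the bit
# rate shortens the pulse period AND widens the signal spectrum.
# The 10-Gbit/sec tolerance below is a rule-of-thumb assumption,
# not a figure from the article.
TOLERANCE_10G_PS_NM = 1000.0   # assumed NRZ tolerance at 10 Gbits/sec
D_SMF = 17.0                   # ps/nm/km, standard singlemode fiber

def dispersion_tolerance(bit_rate_gbps):
    """Dispersion tolerance in ps/nm, scaled from the 10G value."""
    return TOLERANCE_10G_PS_NM * (10.0 / bit_rate_gbps) ** 2

for rate in (10, 40):
    tol = dispersion_tolerance(rate)
    print(f"{rate} Gbits/sec: ~{tol:.0f} ps/nm "
          f"(~{tol / D_SMF:.0f} km of uncompensated SMF)")
```

At 40 Gbits/sec the uncompensated budget shrinks to a few kilometers of standard singlemode fiber, so essentially every 40-Gbit/sec route needs compensation tuned per channel.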

Tolerance to first-order polarization mode dispersion (PMD), often cited as a barrier to 40-Gbit/sec transmission, scales inversely with bit rate; that is, 40-Gbit signals are four times less tolerant to PMD than 10-Gbit signals. Some have therefore argued that 40-Gbits/sec is incompatible with installed fibers without expensive PMD compensation (PMDC). In actuality, the typical PMD of deployed fibers manufactured since 1993 has been very low, around 0.04 ps/sqrt-km, much lower than the worst-case specifications quoted at the time of installation (see e.g. data on 1993 G.652 standard singlemode fiber in [2], post-1993 G.655 fiber in [3], and various fiber types and vintages in [4]). With RZ modulation, the PMD limit of 40 Gbits/sec easily exceeds the OSNR reach of 1,000 km. For older pre-1993 fibers with higher PMD, the effective reach will be reduced, but this reduction can be minimized by trading off the unneeded OSNR margin in shorter reach applications for additional PMD margin. Alternatively, 10-Gbit/sec transmission can be used on routes with older fiber--one of the many benefits of a transmission platform that supports both 10 and 40 Gbits/sec. In the near future, reasonably priced PMDC will be an additional option for 40 Gbits/sec.
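The inverse scaling with bit rate can be turned into a back-of-envelope reach limit. Assuming a tolerable mean differential group delay of about one-tenth of the bit period (a common engineering allowance; RZ formats tolerate somewhat more), and mean DGD growing with the square root of distance, low-PMD post-1993 fiber leaves 40 Gbits/sec nowhere near PMD-limited at 1,000 km:

```python
def pmd_limited_reach_km(bit_rate_gbps, pmd_coeff_ps_sqrt_km,
                         dgd_fraction=0.1):
    """Distance at which mean DGD reaches dgd_fraction of the bit
    period. The 10% allowance is a rule-of-thumb assumption; mean
    DGD = coefficient * sqrt(length)."""
    bit_period_ps = 1000.0 / bit_rate_gbps
    max_dgd_ps = dgd_fraction * bit_period_ps
    return (max_dgd_ps / pmd_coeff_ps_sqrt_km) ** 2

# Post-1993 fiber at 0.04 ps/sqrt-km: PMD is a non-issue at 40G.
print(f"{pmd_limited_reach_km(40, 0.04):.0f} km")   # ~3,900 km >> 1,000 km

# Hypothetical older fiber at 0.5 ps/sqrt-km: PMD bites hard at
# 40G, while 10G still reaches useful distances.
print(f"{pmd_limited_reach_km(40, 0.5):.0f} km")
print(f"{pmd_limited_reach_km(10, 0.5):.0f} km")
```

The last two lines illustrate the article's fallback: on high-PMD legacy routes, drop to 10 Gbits/sec on the same platform rather than pay for PMDC.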

So how does the reach of a 10-Gbit/sec system compare to that of a 40-Gbit/sec system of equal capacity? Reference [1] compares two systems with equal spectral efficiency (0.8 bit/sec/Hz)--50-GHz channel-spaced 40 Gbits/sec vs. 12.5-GHz channel-spaced 10 Gbits/sec. While these spacings may or may not be practical in the near future, the issue here is that inter-channel nonlinear effects scale at least inversely with channel spacing. They therefore impact the 10-Gbit/sec system at least four times as much as the 40-Gbit/sec system, probably limiting 10-Gbit/sec reach to less than 1,000 km. So while 40-Gbit/sec reach will continue to increase as the technology matures [5], closely spaced 10-Gbit/sec channels will likely be a technological dead end for long haul.

How do we compare the cost of 10-Gbit/sec and 40-Gbit/sec transmission? Consider again a new generation system whose infrastructure supports both 10 and 40 Gbits/sec. Let's also assume that 10-Gbit/sec services are being transported up to 1,000 km, so that the 40-Gbit/sec line rate is invisible to the rest of the network. Is the price of four 10-Gbit/sec add/drop transponders transmitting on four different wavelengths more or less than the price of a single transponder that transparently multiplexes four 10-Gbit/sec client signals onto a single 40-Gbit/sec line signal? How do we answer this?
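The break-even question above reduces to a one-line inequality: on a route short enough that 40 Gbits/sec needs no regeneration, 40 Gbits/sec proves in whenever the per-transponder price ratio is below the 4:1 multiplexing ratio. A toy model (all prices hypothetical placeholders, not product figures):

```python
def forty_gig_proves_in(price_40g, price_10g, regen_sites_40g=0):
    """True when one 40G transponder, plus any 40G regenerators the
    route requires, undercuts four 10G add/drop transponders.
    Prices are illustrative units, not real quotes."""
    cost_40g = price_40g * (1 + regen_sites_40g)
    cost_10g = 4 * price_10g
    return cost_40g < cost_10g

# Ratio 3.5:1 on a route within the 1,000-km 40G reach
# (no regeneration): 40G proves in.
print(forty_gig_proves_in(price_40g=3.5, price_10g=1.0))  # True

# Same ratio, but a longer route forcing one 40G regeneration
# site: the economics flip back to 10G.
print(forty_gig_proves_in(price_40g=3.5, price_10g=1.0,
                          regen_sites_40g=1))             # False
```

This is why the reach comparison and the cost-ratio comparison cannot be separated: the sub-4 ratio only pays off inside the 40-Gbit/sec unregenerated reach.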

Lucent Technologies has launched the LambdaXtreme Transport, a 40-Gbit/sec DWDM product, which supports both 10- and 40-Gbit line rates on the same infrastructure. Operators and industry analysts who have been briefed on the pricing of these transponders know the 40-Gbit/10-Gbit cost ratio is less than four. Furthermore, and again contrary to claims made in [1], the 40-Gbit transponder is smaller than the four 10-Gbit transponders it replaces, and it dissipates less power. So 40-Gbit/sec transmission lowers not only capex but opex as well. Finally, LambdaXtreme Transport has been successfully transmitting at 40 Gbits/sec in an R&D field trial in Deutsche Telekom's network since May 2002, on 734 km of fiber with an average PMD of 0.12 ps/sqrt-km, demonstrating that this is not a paper product. So the 10-Gbit/40-Gbit cost question is ultimately resolved not by academic discourse, but by an existence proof.

In conclusion, 40 Gbits/sec proves in now for applications up to 1,000 km, and will for longer applications in the near future. And while the application space for 10 Gbits/sec has begun to shrink, it is still needed for longer reaches and for transmission over some older fibers. Operators today need a DWDM platform that can support both.

Ed Harstead is senior product manager, Optical Long Haul Solutions, at Lucent Technologies. He can be reached via the company's Web site at www.lucent.com.

References
[1] Mock, T., "Implement 10 Gbits/sec now, 40 Gbits/sec when it makes sense," Lightwave, Dec. 2001.
[2] Judy, A. F., et al., "PMD characterization of production cables for evolving lightwave systems," OFC 1993.
[3] Jackson, K. W., et al., "Polarization mode dispersion in high fiber count outside plant cable," NFOEC 2001.
[4] Noutsios, P., Poirier, S., "PMD assessment of installed fiber plant for 40 Gbit/sec transmission," NFOEC 2001.
[5] Gnauck, A. H., et al., "2.5 Tb/s (64x42.7 Gb/s) transmission over 40x100 km NZDSF using RZ-DPSK format and all-Raman-amplified spans," OFC 2002 postdeadline paper.
