Is there room in 1.6T markets for 100G/lane?

July 6, 2022
The implementation of 100G/lane 1.6T transceivers has an Achilles’ heel, this blogger asserts.

At OFC 2022 in San Diego, one of the hottest topics of conversation was "when will 200G/lane be ready?" 200G per lane is widely seen as an essential building block for the next speed steps in communications, such as 1.6- and 3.2-Tbps Ethernet.

Of course, there are two flavors to this discussion, electrical and optical. The general consensus as of this writing is that the market need might arrive before the electrical bus technology is viable (though general consensus has been wrong before, as hindsight sometimes shows). For a 1.6T link, this leads to a 16x100G electrical bus driving an 8x200G optical media interface. The situation is analogous to what happened at 400G, where an 8x50G electrical bus drove a 4x100G optical media interface, because 100G-per-lane electrical lagged optical capabilities at the time of 400G development.
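The lane arithmetic behind these gearbox-style pairings can be sketched as follows (rates are nominal line rates in Gb/s; real signaling rates differ slightly due to FEC and encoding overhead):

```python
def lane_rate(total_gbps, lanes):
    """Nominal per-lane rate when an aggregate link is split across N lanes."""
    return total_gbps // lanes

# 1.6T: a 16x100G electrical bus driving an 8x200G optical media interface
assert lane_rate(1600, 16) == 100   # electrical side
assert lane_rate(1600, 8) == 200    # optical side -> a 2:1 gearbox

# The 400G analogy: 8x50G electrical driving 4x100G optical
assert lane_rate(400, 8) == 50
assert lane_rate(400, 4) == 100
```

In both generations the optical side runs at twice the electrical per-lane rate, which is what makes the gearbox arrangement natural.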

So, if everybody is on the same page with 16x100G electrical driving 8x200G optical, what is there to discuss? Namely, the alternative: 16x100G electrical driving a 16x100G optical media interface.

Technologically speaking, 200G per lane optical does indeed seem to be right around the corner, and in time for 1.6T needs. The better question is whether the market will be ready. While arguably the market will be ready for 1.6T aggregate links, one cannot ignore the issue of bandwidth granularity. In other words, will the 1.6T market be ready for 200G per lane granularity; or to flip it around, is there a reasonable 1.6T market window for 100G granularity?

Increasingly, a large proportion of higher bandwidth links are breakout connections. For example, 100G switch port sales jumped dramatically when servers moved from 10G to 25G ports, as a single 100G switch port could be broken out and cross-cabled to four server ports. One large driver of 200G is 2x100G breakout from top-of-rack switches cross-cabled to two 100G servers. One of the largest drivers of 400G adoption is the ability to break out 4x100G and hence increase the radix of the leaf in a Clos switch matrix by 4X. One large driver of 800G is the ability to offer 2x400G-FR4 for the spine of a data center, allowing cross-cabling from data switches to IP routers. Hence, granularity of connectivity can be just as important a factor in market acceptance and adoption as total link throughput.

A 1.6T link implemented as a 16x100G optical media interface would have the advantage of immediate market applicability. A 1.6T DR16 variant could either feed sixteen 100G servers or increase the radix of the leaf of a Clos network 4X over existing 400G solutions. A 4x400G-FR4 variant could either double the spine capacity over 800G or allow 4:1 cross-cabling. It is easy to envision all the ways 1.6T with 100G granularity could be useful today.
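The breakout options above follow directly from the lane count. A rough sketch, assuming a 1.6T port built from 16 lanes of 100G (lane counts and rates are nominal):

```python
def breakout(lanes, lane_gbps, lanes_per_subport):
    """Number of sub-ports, and the rate of each, when a port is broken out."""
    count = lanes // lanes_per_subport
    return count, lanes_per_subport * lane_gbps

# DR16-style breakout: sixteen individual 100G links (e.g., to 100G servers)
assert breakout(16, 100, 1) == (16, 100)
# 4x400G-FR4-style breakout: four lanes (colors) per sub-port
assert breakout(16, 100, 4) == (4, 400)
# 2x800G breakout: eight lanes per sub-port
assert breakout(16, 100, 8) == (2, 800)
```

At 200G granularity the finest breakout from the same 1.6T port would be 8x200G, which is the crux of the granularity question.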

The Achilles’ heel of a 100G/lane 1.6T implementation is that, despite the large number of optical transceiver form factors available today, there is a dearth of options that support 16-wide interfaces. OSFP-XD, COBO, and CPO all have the potential to fulfill the need, but development and specification work are still ongoing.

So, the real question becomes, by the time a physical device definition is available, will the market window for 100G granularity have closed? If servers move to 200G ports, if the individual ports in the leaf move to 200G, if the individual colors of spine connections move to 200G per lane, there may be no home for 100G per lane at 1.6T.

Jim Theodoras is vice president, R&D, at HG Genuine USA.
