Optical transport for cell backhaul: SONET, Carrier Ethernet, and cell-site fiber challenges
With the advent of LTE/4G technology, the communications industry frequently hears that copper T1 service to cell tower sites is quickly becoming inadequate. Wireless carriers continue to add new cell sites and to upgrade existing 3G sites to 4G/LTE. These carriers increasingly request a minimum of 50-Mbps Ethernet initial service at new tower sites and Ethernet upgrades to existing T1 services -- often with follow-on commitments to add 150-Mbps Ethernet service on just three or four months' notice.
This surge in cell backhaul bandwidth demand threatens to overwhelm facilities sooner rather than later. For example, historically the typical cell site might have been served with eight T1s. More recently, SONET ring networks were extended to include cell sites to facilitate dropping off a mixture of T1, DS3, and Ethernet type circuits. Now, not only is the number of new cell tower sites growing, but these sites are being designed to accommodate as many as six wireless carriers each. The result of this explosion in bandwidth is the need for fiber-fed connectivity based on Carrier Ethernet.
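The scale of the jump is easy to quantify. A quick back-of-the-envelope calculation, using the standard T1 line rate of 1.544 Mbps and the site figures cited above, shows why copper facilities cannot keep pace:

```python
# Rough bandwidth comparison: a legacy 8-T1 cell site vs. the Ethernet
# service tiers described in the text. The T1 line rate (1.544 Mbps) is
# standard; the site counts and Ethernet tiers are the article's figures.

T1_MBPS = 1.544

legacy_site = 8 * T1_MBPS           # historical eight-T1 cell site
print(f"8 x T1: {legacy_site:.3f} Mbps")

initial_ethernet = 50               # Mbps, typical initial Ethernet request
upgrade_ethernet = 150              # Mbps, follow-on commitment
carriers_per_site = 6               # new towers designed for up to 6 carriers

print(f"Initial Ethernet vs. 8 T1s: {initial_ethernet / legacy_site:.1f}x")
print(f"Fully built site (6 x 150 Mbps): {carriers_per_site * upgrade_ethernet} Mbps")
```

Even the initial 50-Mbps request is roughly four times an eight-T1 site, and a fully built six-carrier tower at the 150-Mbps tier approaches a gigabit of aggregate demand.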
The emergence of Carrier Ethernet transport
The SONET unidirectional path-switched ring (UPSR) architecture continues to be very popular for mobile backhaul given its huge embedded base, variety of interfaces, and scalable bandwidths across OC-3/12/48/192 backbone rates. In addition, SONET offers proven reliability with less than 50-ms ring switching time.
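The scalability of those SONET backbone rates follows directly from the hierarchy: every OC-n rate is an integer multiple of the 51.84-Mbps OC-1 base rate. A quick calculation confirms the rates behind the OC-3/12/48/192 designations:

```python
# SONET bandwidth scales in multiples of the OC-1 base rate (51.84 Mbps).
# Computing the backbone rates named above:

OC1_MBPS = 51.84
for n in (3, 12, 48, 192):
    print(f"OC-{n}: {n * OC1_MBPS:.2f} Mbps")
# OC-3: 155.52, OC-12: 622.08, OC-48: 2488.32, OC-192: 9953.28
```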
Yet there is a new technology trend in network architectures to support cell-site tower locations -- Carrier Ethernet transport. Typically these active platforms support Gigabit Ethernet to 10-Gigabit Ethernet backbone optics and are very scalable.
The drivers behind this new service provider model are quite logical: the ubiquity of the Ethernet interface (whether copper RJ-45 or optical 10/100/1000 Mbps), the maturation of the ITU-T G.8031 (linear) and G.8032 (ring) protection-switching standards -- also sub-50 ms -- and five-9s reliability. These technological advancements will enable Carrier Ethernet to become the predominant technology for serving the ever-growing demand for cell backhaul. In addition, the sheer volume of Ethernet chip sets across the application landscape has facilitated lower silicon component costs, greater availability, and reliability improvements.
At the heart of the acceptance of this network topology is Ethernet Ring Protection Switching (ERPS). ERPS was defined by the ITU-T, is widely accepted, and Carrier Ethernet continues to evolve into more complex network architectures with the recent announcement of Carrier Ethernet 2.0 by the Metro Ethernet Forum (MEF). ERPS began at the ITU-T as part of the G.8032 Recommendation to provide sub-50-ms protection and recovery switching for Ethernet traffic in a ring topology while ensuring that no loops form at the Ethernet layer. G.8032v1 supported a single-ring topology; G.8032v2 adds support for multiple-ring/ladder topologies.
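The core loop-prevention idea in G.8032 can be illustrated with a toy model: in the idle state, one designated span -- the Ring Protection Link (RPL) -- is blocked so the physical ring behaves as a loop-free chain; when a span fails, the failed span is blocked and the RPL is unblocked, restoring full connectivity. The sketch below is a conceptual illustration only (no R-APS messaging, hold-off timers, or sub-50-ms guarantees), with hypothetical node names:

```python
# Toy model of G.8032v1 single-ring loop prevention. Exactly one ring span
# is blocked at any time: the RPL when the ring is healthy, or the failed
# span after a failure (with the RPL unblocked). Node names are illustrative.

class Ring:
    def __init__(self, nodes, rpl):
        self.nodes = nodes
        # Ring spans: each node connects to the next, last wraps to first.
        self.links = {frozenset((nodes[i], nodes[(i + 1) % len(nodes)]))
                      for i in range(len(nodes))}
        self.rpl = frozenset(rpl)       # span blocked in the idle state
        self.blocked = {self.rpl}

    def fail_link(self, a, b):
        """On failure: block the failed span, unblock the RPL."""
        self.blocked = {frozenset((a, b))}

    def active_links(self):
        return self.links - self.blocked

    def connected(self):
        """All nodes reachable over unblocked spans. No Layer-2 loop is
        possible because one ring span is always blocked."""
        seen, stack = set(), [self.nodes[0]]
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            for link in self.active_links():
                if n in link:
                    stack.extend(link - {n})
        return seen == set(self.nodes)

ring = Ring(["A", "B", "C", "D"], rpl=("A", "B"))
print(ring.connected())   # True: idle state, RPL blocked, ring is loop-free
ring.fail_link("C", "D")
print(ring.connected())   # True: RPL unblocked, traffic re-routed around failure
```

The real protocol coordinates these block/unblock actions between the RPL owner and the nodes adjacent to the failure via R-APS messages, which is what makes the sub-50-ms recovery target achievable.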
Additional Carrier Ethernet service definitions are expected as new standards-based features are created, implemented in silicon, and deployed in active systems.
All of this bodes well for the continued use of Carrier Ethernet to meet 4G/LTE requirements for bandwidth increases and to expedite the push for fiber ring deployments to cell sites. In some sense cell backhaul is fast becoming the FTTx of the “Mobile Device Generation.”
Don’t neglect the Physical Layer
Yet with all of the technological advances on the active-platform side of the equation, service providers often neglect the challenges related to the Physical Layer until the installation and service due dates are almost upon them. With up to six cell carriers per new cell tower site and thousands of new tower sites popping up across the nation, service providers are understandably seeking deployment improvements, efficiencies, and best methods for delivering fiber handoffs to multiple carriers.
The cell-site location often represents a harsh environment in which to land these small- to medium-count fiber cables, and it involves special requirements: separate (non-shared) fiber facilities, separate demarcation points, lockable access fiber cabinets, outdoor NEMA 4 rated fiber cabinets, and hut-based fiber cross-connect panel/frame equipment. Existing fiber cable routes near cell sites may be small count, limiting bandwidth without expensive fiber cable overbuilds. Wireless carriers may also require multimode connectors/terminations at the fiber cross-connect panel to accommodate the lower-cost Gigabit Ethernet SFPs in their active platforms.
The fiber distribution portfolio used in these environments must address the basic challenges inherent in the central office/hub/outside plant/cell-site environments to gracefully improve fiber deliverability and fiber management for cell backhaul. The point of every component within a fiber management system -- from the cladding on the fiber to the cable jacketing, the optical component packaging, and the route paths within them -- is to protect the fiber and reduce the risk of damage. Period.
Fiber distribution and management equipment that does not accomplish this in an easy and intuitive way is over-thought and costs you money. Fiber management should be approached with three simple goals in mind:
- The first and most important objective is to minimize your fiber risk in the cable plant.
- The second goal is to attempt to eliminate deployment and maintenance headaches.
- Third is to reduce the cost of broadband deployment via careful attention to not only lower capital equipment prices, but also lower operational costs.
Perhaps the biggest key to achieving these three objectives is to reduce risk by eliminating as much interaction with fiber jumpers and the fiber tail as possible. Fiber management equipment that integrates fiber distribution and slack storage within a small footprint enables the service provider to quickly and conveniently deploy the fiber as well as access it at a later time if necessary.
In addition, the costs of delivering the fiber to the site should not be overlooked. A simple in-ground drop cable would be convenient, but often is not possible. New developments in ruggedized microduct that enable service providers to push the fiber through existing conduit -- even environments previously considered exhausted -- are being brought to market and should be investigated.
Bringing together the whole package
While consumer demands for bandwidth will drive the use of Carrier Ethernet, careful system engineering and plant design will enable the service provider to create a backhaul network that economically meets the needs of the wireless carrier. Careful consideration of not only the active electronics but the physical layer as well will ensure Carrier Ethernet is an economical business driver for the entire industry.
Jon Anderson is an applications engineer at Clearfield Inc. He joined Clearfield’s Market Segment Application Engineering team with nearly 40 years of experience in the telecommunications and data networking industry with companies such as Fujitsu, Teltrend, Lynch Communications, and Alcatel. He has direct engineering and application experience with a variety of optical platforms and technologies such as SONET (TDM), DWDM, Ethernet, and FTTx. Jon studied at Seattle Pacific University and Clover Park Technical College where he earned his degree in telecommunications.