A look ahead at 2012

Jan. 12, 2012
The Lightwave annual technology forecast offers a peek at what we believe will be the top trends over the coming 12 months in networking, FTTx, test and measurement, cable MSO networks, and equipment design.

By Stephen Hardy

2011 may have seen ongoing macroeconomic distress but not a comparable downturn in technology development. While many of the areas of focus we saw in 2011 were the same as the year before (um…have you heard about 100 Gbps?), we also saw advances in packet-optical transport, label-switched routing, gridless reconfigurable optical add/drop multiplexers (ROADMs), photonic integration, and "superchannels," to name just a few areas of technological adventure.

The market reacted well to these trends. Despite the aforementioned issues with the economy, sales of optical networking gear climbed year over year in 2011. And while this boost didn't necessarily trickle down to component and subsystem suppliers, who had to suffer through yet another inventory correction, new technology development continued at a rapid pace at all levels of the optical communications ecosystem.

We can expect similar levels of innovation in 2012. Like every year, the next 12 months will see a continuation of several technology development trends launched previously (no, seriously - have you heard about 100 Gbps?). However, we'll likely be surprised by some new efforts that will bear fruit as well.

The likelihood of the unexpected will not deter the Lightwave staff from taking our annual shot at predicting the hot trends in optical technology for the coming year. We'll organize our discussion into five areas:

  1. Networking
  2. Fiber to the home
  3. Test and measurement
  4. Equipment design
  5. Cable-operator applications

As always, our thoughts derive from a combination of research and a moistened finger stuck in the air to determine which way the wind is blowing.

We'll admit that in some cases, we've extended our discussion not only into what we think will happen, but what we think ought to happen. But that's all right - wishes sometimes come true, don't they?

Optical networking to speed up - but how quickly?

The 100-Gbps technology reaching the market can be roughly divided into two waves. The first came from systems developers with the necessary expertise in-house to create such platforms themselves. The second wave, which began to appear late in 2011 and should continue into 2012, includes offerings from companies that relied on outside partners to fill gaps in their own capabilities.

Companies in both camps should benefit from the 100-Gbps coherent transponders expected to finally reach availability in 2012. The big question will then become: How badly do carriers really need 100 Gbps?

The fact that 100-Gbps line cards aren't flying off the shelves shouldn't be much of a surprise. High costs were expected for the first generation of technology; meanwhile, many carriers are still ramping up to 10-Gbps links, never mind 40 or 100 Gbps.

The coming year should see companies pursue two paths toward 100-Gbps cost reduction. The first will involve optimizing designs based on dual-polarization quadrature phase-shift keying (DP-QPSK) with coherent detection; the aforementioned transponders should help, as will more integrated optical components and more efficient electronics. The second path, on which some technology providers embarked in 2011, will move away from coherent-enabled DP-QPSK and toward direct-detect multichannel approaches such as 4x25 Gbps.
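To put rough numbers on those two paths, here's a quick back-of-the-envelope sketch in Python. The 12% overhead allowance and the ceiling-division lane count are illustrative assumptions on our part, not vendor specifications; actual framing and FEC overheads vary by implementation.

```python
# Illustrative arithmetic for the two 100-Gbps cost-reduction paths.
# The 12% overhead is an assumed allowance for framing plus FEC, not a spec.

def coherent_baud_rate(bit_rate_gbps, bits_per_symbol, overhead=0.12):
    """Approximate symbol rate for a single-carrier coherent signal.

    DP-QPSK carries 2 bits per symbol on each of 2 polarizations,
    i.e., 4 bits per symbol overall.
    """
    return bit_rate_gbps * (1 + overhead) / bits_per_symbol


def direct_detect_lanes(bit_rate_gbps, lane_rate_gbps):
    """Number of parallel lanes in a multichannel direct-detect design."""
    return -(-bit_rate_gbps // lane_rate_gbps)  # ceiling division


# Path 1: single-carrier DP-QPSK with coherent detection
print(f"DP-QPSK at 100 Gbps: ~{coherent_baud_rate(100, 4):.0f} Gbaud per carrier")

# Path 2: multichannel direct detection, e.g., 4x25 Gbps
print(f"Direct detect at 25 Gbps/lane: {direct_detect_lanes(100, 25)} lanes")
```

The sketch lands on the familiar figures: roughly 28 Gbaud for a single DP-QPSK carrier versus four 25-Gbps lanes for the direct-detect alternative.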

Meanwhile, we'll also see delivery of new technology to increase network flexibility. That includes not only more tunable XFPs, but also more discussion (and perhaps initial shipments) of tunable SFP+ devices. Carriers also can expect progress toward more colorless and directionless ROADMs, with some systems also carrying gridless capabilities. The jury will remain out regarding the need for (as well as the best way to design) contentionless platforms.

Headwinds coming for FTTH?

Fiber to the home (FTTH) deployments should continue their impressive run worldwide. However, obstacles could delay new technology deployment.

First, the good news

Several initiatives promise to make FTTH more economical to deploy and better positioned to meet even the most aggressive bandwidth demand forecasts.

On the economic end, new module technologies, some based on photonic-integrated-circuit (PIC) principles, promise to reduce the capital cost of equipment to new lows. Customer premises equipment will become more integrated, flexible, and multifunctional as well.

With bandwidth increases in mind, 10-Gbps PON systems should become readily available by the end of 2012. Meanwhile, the WDM-PON standardization process should accelerate in 2012, with a final ITU-T standard following as soon as 2013.

And now the bad

However, several factors could slow deployment of these and other technologies in 2012:

  • Fiber shortage. Japanese fiber manufacturing facilities damaged by the 2011 earthquake and tsunami are back online. But their interruption was only one catalyst for a fiber shortage that should continue into 2012.
  • Delays in U.S. broadband stimulus programs. The Rural Utilities Service announced a mechanism through which Broadband Initiatives Program (BIP) project deadlines could be extended; those with funds from the complementary Broadband Technology Opportunities Program (BTOP) won't be so lucky.
  • Improvements in DSL. The near-term availability of vectoring and bonding will enable carriers to delay a switch from copper to fiber and still meet customer and government-fueled broadband deployment objectives in some cases.

Test instruments must mind the gap

Most of the talk about the comparative deployment prospects for 40- and 100-Gbps technology has focused on issues such as cost. But one aspect that has received little discussion could have a significant dampening effect: No one has come up with a simple, cost-effective way to test coherent technology in the field.

It's not that test equipment developers aren't thinking about it. As highlighted in a recent Lightwave webcast, the main problems relate to how to measure optical signal-to-noise ratio and diagnose problems in the field. In direct-detection systems, the laser being on signals a logical "1." In DP-QPSK, however, each symbol on each polarization could represent any one of "00," "01," "11," or "10." The industry has yet to develop a field-friendly optical spectrum analyzer compatible with polarization multiplexing - and a field technician can't be expected to lug an oscilloscope (or more than one), an optical-network analyzer, and associated gear from site to site. Not that your average carrier could afford to buy more than one or two sets of coherent-friendly test equipment anyway.
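For readers less familiar with the format, the short sketch below illustrates the point: every QPSK symbol has the same optical power and differs only in phase, so a power-only measurement can't recover the bits. The Gray mapping shown is one common convention chosen purely for illustration, not a mandate of any standard.

```python
import cmath

# One common Gray-coded QPSK constellation, used here purely for illustration.
# A DP-QPSK signal carries a symbol like this on each of two polarizations,
# for 4 bits per symbol overall.
QPSK_MAP = {
    "00": (1 + 1j) / abs(1 + 1j),
    "01": (-1 + 1j) / abs(-1 + 1j),
    "11": (-1 - 1j) / abs(-1 - 1j),
    "10": (1 - 1j) / abs(1 - 1j),
}

for bits, symbol in QPSK_MAP.items():
    power = abs(symbol) ** 2
    phase_deg = cmath.phase(symbol) * 180 / cmath.pi
    print(f"bits {bits}: power {power:.2f}, phase {phase_deg:+7.1f} deg")

# All four symbols print the same power (1.00); only the phase changes,
# which is precisely what a simple direct-detect power measurement cannot see.
```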

In the webcast, a test equipment representative suggested that he and his counterparts needed to address this problem in the next couple of years. Carriers (and those selling 100-Gbps systems) would be happy to see such a capability in 2012.

Meanwhile, the other area of major development for field test equipment is broadband access. Here the emphasis is on portability, low cost, and ease of use, particularly for field technicians unfamiliar with optical networking basics. For example, 2011 saw the availability of optical time domain reflectometers (OTDRs) that don't produce typical OTDR traces and that automate parameter settings. Carriers should expect further innovations for FTTH connection testing as well as the appearance of similar features for testing other parts of the network.

Meanwhile, back in the lab

With coherent 100 Gbps now reaching the field, forward-thinking design engineers have started tinkering with ways to achieve 400-Gbps and 1-Tbps transmission. The current generation of optical-network analyzers and oscilloscopes can, in theory, address modulation formats such as 16-QAM, but engineers would appreciate real-time oscilloscopes with even more bandwidth than is currently available to enable them to work with higher baud rates. Meanwhile, test-instrument vendors who base their systems on optical sampling say they're already capable of meeting such high-baud-rate requirements.
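As a rough illustration of why those higher rates push real-time instrument bandwidth upward, the sketch below estimates the symbol rates involved. The 20% overhead allowance and the single-carrier assumption are ours, for illustration only; actual designs (including multicarrier superchannels) will differ.

```python
# Rough symbol-rate estimates for single-carrier, dual-polarization signals.
# The 20% overhead allowance for framing/FEC is an assumption for illustration.

def symbol_rate_gbaud(bit_rate_gbps, bits_per_symbol_per_pol,
                      polarizations=2, overhead=0.20):
    """Approximate symbol rate needed to carry the given bit rate."""
    return bit_rate_gbps * (1 + overhead) / (bits_per_symbol_per_pol * polarizations)


formats = (("DP-QPSK", 2), ("DP-16QAM", 4))  # bits per symbol per polarization
for bit_rate in (100, 400, 1000):
    for name, bits in formats:
        rate = symbol_rate_gbaud(bit_rate, bits)
        print(f"{bit_rate:>4} Gbps {name:<9}: ~{rate:.0f} Gbaud")
```

Even with 16-QAM's denser mapping, a single-carrier 1-Tbps signal lands in baud-rate territory well beyond today's 100-Gbps designs, which is where the optical-sampling vendors claim their advantage.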

The ability to measure and evaluate other modulation-format alternatives, such as optical orthogonal frequency-division multiplexing, will also prove important - not just for transmission rates greater than 100 Gbps but for next-generation optical access technologies as well.

The increasing use of parallel optics for applications where a serial approach to high data rates is too expensive also creates a need for test instruments that can evaluate transmit and receive arrays efficiently and easily, both in the lab and on the production floor. Along these lines, companies like Avago Technologies are working on taking optical technology into uncharted areas such as PCI Express. Should these forays uncover previously untapped demand for fiber optics, test equipment manufacturers will have to follow suit for both vendor and end-user environments.

Equipment design: New solutions to old problems

Here's a fearless prognostication: Equipment design efforts will focus on smaller footprints, lower power consumption, and lower costs. And now that we've dispensed with the obvious, let's talk about how these goals might be reached.

Functional and service integration will remain an overarching target at both the systems (e.g., packet-optical transport systems and multiservice access platforms) and component/subsystem levels. In the latter sphere, expect to hear any number of multifunction devices called "photonic integrated circuits." But also expect progress on new materials, such as polymers, that will enable closer integration of disparate elements.

Meanwhile, systems houses will continue to rely more on in-house expertise to develop new technology. The increasing importance of electronics will invite systems houses with strong ASIC staffs to leverage them to achieve competitive differentiation, particularly as attention turns to 400 Gbps and 1 Tbps.

In a related move, we're seeing increased interest in replacing lower-speed optical-transceiver modules with elements that system houses can place directly on the board. This trend appears strongest in the FTTH space for optical-network units, but could migrate to other applications as module components become more integrated.

For the data center, directional signposts are shifting. Developing a technology roadmap used to be fairly straightforward; the IEEE would develop PMDs for a specific application, and Cisco would determine which ones it favored. However, large end users such as Google have emerged who not only aren't Cisco, but don't kowtow to the IEEE. If standards don't meet their needs, they'll just create their own multisource agreement. For component suppliers, deciding who's driving the bus - or, perhaps, whose bus to climb aboard - will be an important decision in 2012.

Fiber creeps closer to the user in cable-operator networks

While a fair number of cable operators sent staff to last September's FTTH Conference, the odds of them fully embracing optical access technology won't be any greater in 2012 than in 2011 or prior years. But that doesn't mean cable operators won't have interest in fiber.

New specifications favor fiber

With MSOs increasingly interested in business services support, we may see deployment of DOCSIS Provisioning of EPON (DPoE) equipment, at least on a trial basis, in 2012. CableLabs held several DPoE plugfests in 2011 that demonstrated the readiness of DPoE technology for the market as well as an emerging ecosystem of component and subsystem suppliers. For the near term, though, we'll see more Carrier Ethernet systems than DPoE platforms find their way into cable networks.

While FTTH won't replace hybrid fiber/coax anytime soon, we will see fiber creep closer to residential subscribers. DOCSIS 3.0 and node segmentation for capacity expansion will provide one driver. Operators' interest in prepping for an all-IP, multiscreen world via the roadmap CableLabs has set forth in its Converged Cable Access Platform (CCAP) initiative should provide another driver. CableLabs released a technical report this past June that describes a new CCAP device that will integrate the functions of broadcast and narrowcast quadrature amplitude modulators (QAMs) and DOCSIS 3.0 downstream and upstream interfaces. It also will support PON termination equipment as an option. Establishing the foundation for CCAP likely will require cable operators to review their current optical infrastructures, as described in the article "Paving the Path to IP with CCAP" in our July/August issue.
