More stringent federal legislation, new international banking agreements, and the need for increased security have all contributed to the overall growth of data centers and SANs. The data center is an enterprise’s specialized central data-processing facility; located within it is the company’s SAN. The driving force behind most data-center designs is cost, but that was not the primary concern of software-development company Enovia (Charlotte, NC). When the company began planning a new 300-server facility, it made flexibility the key to its design.
“Our structured cabling and the overall infrastructure of the server room was designed to be flexible in terms of our ability to distribute power, distribute air, and move cable around without a great deal of grief,” explains Brian Hankins, network engineer at Enovia. “That’s in contrast to a more production-type environment like a bank, where you might have 57 mainframes that maybe move once every 15 years.”
Unlike many other enterprises, Enovia must continually “sunset” servers, bringing them online and offline quickly. The company develops product lifecycle management (PLM) software that provides a three-dimensional, virtual-reality simulation of an industrial process’s entire product lifecycle, from concept to commercialization. “In our situation, machinery is always shuffled around,” reports Enovia information systems manager Gregg Swentor. “We have to put up and constantly take down test environments. Our [data-center] design gives us the ability to move equipment around, upgrade equipment, sunset specific equipment, and plan for future growth in other buildings.”
Flexibility is “a big benefit of doing a storage-area network in conjunction with your local-area network,” adds Gary Cawley, product-line manager at TT Electronics/Optek Technologies (Carrollton, TX) and vice chair of the Fiber Optics LAN Section (FOLS) of the Telecommunications Industry Association (TIA-Arlington, VA). “You gain that flexibility. When you add a SAN, what you’re doing is making sure you can add more storage on the fly without interrupting the productivity of your employees on the local-area network. That’s a primary consideration for most SAN customers.”
According to Hankins, Enovia had “hit the wall” in its old facility. The company had used up all of its physical space, and both its power and air-conditioning systems were taxed. “We were in a situation where we were going to have to rebuild the data center one way or the other, so the new facility gave us an opportunity to build the infrastructure we needed from the ground up,” he says.
The company’s old facility originally employed underfloor power and structured cabling. Several years ago, Hankins put in cable trays and moved the structured cabling off the floor, but the company still faced significant cooling issues related to the power distribution beneath the floor. “The power cables that were running underneath the floor were essentially blocking the airflow,” he admits. “It was even worse when we had Category 5 cable under the floor.”

Cooling is a very real concern in data centers with dense copper cabling, says Swentor, which is one reason why fiber is gaining popularity in these applications. “If you have hundreds of copper cables running back to your switch, you would actually be impeding airflow,” he says. “If you are dealing with relatively smaller, thinner fiber cable, you don’t set up as much of a block of your airflow, especially when you are dealing with overhead, like we are.”
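Swentor’s airflow point is largely a matter of geometry: the area a bundle of cables presents to moving air scales with the square of each cable’s diameter. The back-of-envelope sketch below illustrates the effect, assuming illustrative outside diameters of roughly 6 mm for a copper UTP cable and 3 mm for a duplex fiber cable; the figures and cable count are assumptions, not measurements from Enovia’s plant.

```python
# Back-of-envelope comparison of the cross-sectional area occupied by a bundle
# of copper patch cables versus fiber cables in a pathway.
# The outside diameters and cable count below are illustrative assumptions.
import math

CABLE_COUNT = 200      # hypothetical number of runs back to a switch
COPPER_OD_MM = 6.0     # assumed outside diameter of a copper UTP cable
FIBER_OD_MM = 3.0      # assumed outside diameter of a duplex fiber cable

def bundle_area_mm2(count: int, od_mm: float) -> float:
    """Total cross-sectional area of `count` round cables of diameter `od_mm`."""
    return count * math.pi * (od_mm / 2) ** 2

copper_area = bundle_area_mm2(CABLE_COUNT, COPPER_OD_MM)
fiber_area = bundle_area_mm2(CABLE_COUNT, FIBER_OD_MM)

print(f"Copper bundle: {copper_area / 100:.0f} cm^2")
print(f"Fiber bundle:  {fiber_area / 100:.0f} cm^2")
print(f"Copper occupies {copper_area / fiber_area:.1f}x the area of fiber")
```

With these assumed diameters, the copper bundle blocks about four times the area of the equivalent fiber bundle, which is the intuition behind moving away from dense copper in constrained pathways.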
Enovia’s new facility features strictly overhead distribution for power, air, and structured cabling. The old facility employed Cat 5 cabling; the new building features Cat 6 Gigabit Ethernet (GbE) cabling as well as both multimode and singlemode fiber. The company is currently running GbE over a pair of gigabit links, both for redundancy and to increase bandwidth, reports Hankins. The singlemode fiber Enovia has deployed will remain unlit until the company upgrades to 10 Gbits/sec, although there is currently no timeline for that transition.

The new building is connected via fiber to the telecommunications provider. The facility sits on two independent SONET rings with dual entry into the building. The fiber is then pulled from the building entry to the server room and from the server room to the riser closet. The company does not have any applications that would warrant pulling fiber beyond the risers, says Hankins, who reports that most of the company’s desktops are still running 10/100-Mbit/sec Ethernet. Enovia has also deployed an IP telephony system in its new facility, and that system has driven Cat 6 GbE cabling to the desktop.
It is not unusual for an enterprise to deploy both copper and fiber in its data center, notes Herb Congdon, global product manager of fiber-optic products at AMP Netconnect, a division of Tyco Electronics. Congdon also serves as the FOLS standards chair. “Fibre Channel is used inside the data center because it tends to be more efficient [than Ethernet] at moving huge amounts of data; there’s less overhead,” he explains. “So Fibre Channel is pretty much the de facto application in data centers, and that drives the desire to go to fiber. However, the whole data center doesn’t run on Fibre Channel necessarily, so there’s still copper and there’s still room for copper. Once you start getting into where the data center interfaces with the distributed network or the commercial building network, that’s where you see a lot of the copper start to come into play.”
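Congdon’s point about overhead can be put in rough numbers. The sketch below is a back-of-envelope comparison only, assuming a full-size Fibre Channel frame (2,112-byte data field plus roughly 36 bytes of framing) against a standard Ethernet frame carrying TCP/IP (about 38 bytes of Ethernet overhead on the wire plus 40 bytes of TCP/IP headers); actual efficiency depends on frame sizes, jumbo frames, and protocol options.

```python
# Rough protocol-efficiency comparison behind the "less overhead" remark.
# Figures are back-of-envelope; real-world efficiency depends on frame sizes,
# jumbo frames, and upper-layer protocol options.

def efficiency(payload_bytes: int, overhead_bytes: int) -> float:
    """Fraction of bytes on the wire that carry application data."""
    return payload_bytes / (payload_bytes + overhead_bytes)

# Fibre Channel: full-size frame with a 2,112-byte data field;
# SOF, 24-byte header, CRC, and EOF add roughly 36 bytes.
fc = efficiency(payload_bytes=2112, overhead_bytes=36)

# Ethernet carrying TCP/IP: 1,500-byte payload minus 40 bytes of TCP/IP headers,
# plus ~38 bytes of preamble, Ethernet header, FCS, and interframe gap.
eth = efficiency(payload_bytes=1500 - 40, overhead_bytes=38 + 40)

print(f"Fibre Channel efficiency: {fc:.1%}")    # ~98%
print(f"Ethernet/TCP-IP efficiency: {eth:.1%}") # ~95%
```

The gap looks small per frame, but it compounds when a SAN is streaming terabytes of block storage traffic, which is why Fibre Channel has held its ground inside the data center.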
Hankins credits the use of high-density MPO connectors for enabling a smooth installation process. The popular MT-RJ connector features two fibers in a single ferrule, but the MPO connector goes a step further: it is essentially a 1×12 array of fibers. “You can easily make some high-density trunk cables that you can pull in relatively quickly that have high fiber counts from point A to point B,” offers Congdon. Using a cassette or some other kind of transition cable assembly, the data-center designer can easily break out those 12 fibers from the MPO connector into simplex or duplex connectivity.
“The use of MPO really worked to our advantage,” says Hankins, “because the fiber was pre-tested, pre-terminated, and essentially plug-and-play. The error that can be induced by pulling individual strands across multiple floors, plus the polishing and termination: those issues didn’t come up.”

While ease of use is critical, the scalability that MPO connectors enable makes them ideal for data-center applications, says Congdon. Enterprises “can drop in a new server, and they can drop in a trunk cable with the MPO connectivity and very quickly be up and running,” he adds. “It allows them to better manage their cable investment. They don’t have to install all of their cabling before they install their equipment and spend a lot of capital up-front. They can build their network cabling as they deploy their network electronics.”
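The density arithmetic behind that scalability is straightforward: each 12-fiber MPO breaks out to six duplex channels, so trunk counts fall out directly from the number of duplex links required. The short sketch below is illustrative only; the function names and rounding rule are assumptions, not any vendor’s planning tool.

```python
# Illustrative arithmetic for planning 12-fiber MPO trunks and their breakout
# into duplex channels. Function names are hypothetical.
import math

FIBERS_PER_MPO = 12         # each MPO connector terminates a 1x12 fiber array
FIBERS_PER_DUPLEX_LINK = 2  # a duplex channel uses one transmit and one receive fiber

def mpo_trunks_needed(duplex_links: int) -> int:
    """How many 12-fiber MPO trunk legs cover the requested duplex links."""
    fibers_needed = duplex_links * FIBERS_PER_DUPLEX_LINK
    return math.ceil(fibers_needed / FIBERS_PER_MPO)

def duplex_capacity(mpo_count: int) -> int:
    """Duplex channels available once each MPO is broken out in a cassette."""
    return (mpo_count * FIBERS_PER_MPO) // FIBERS_PER_DUPLEX_LINK

print(mpo_trunks_needed(40))  # 40 duplex links -> 7 MPO trunk legs
print(duplex_capacity(7))     # 7 MPOs -> 42 duplex channels
```

Because capacity arrives in 12-fiber increments, a data center can pull one more pre-terminated trunk as each new bank of servers lands, rather than cabling the whole room on day one.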
This scalability will help Enovia as it moves ahead with plans to expand the new facility into a campus-like environment. “Everything has been set up to allow us to cookie-cutter the same solution in building number two in about 18 months, if we move forward that fast,” contends Swentor.
In addition to its overhead cooling, the data center also features a hot-aisle/cool-aisle configuration. In the typical data center, rows of servers tend to face the same direction, with the back of each server exhausting hot air directly into the front of the server behind it. Enovia designed its facility so that rows face each other in pairs: the front of each row faces the front of another row, while the back of each row exhausts into the back of the next, creating alternating aisles of hot air and cold air.
The hot-aisle/cool-aisle configuration improves the efficiency of the data center’s heating and air conditioning systems, explains Congdon. “Setting up the hot aisle and cool aisles improves efficiency because you are not loading one end of your building with cold air while all the hot air goes to the other end of the building. Instead of the heat being additive across the building, [the hot-aisle/cool-aisle configuration] sets up areas where cold air comes in and hot air gets exhausted, and it does that in several locations throughout the facility.”
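The alternating arrangement Congdon describes can be sketched in a few lines. The snippet below is purely a layout illustration (not a thermal model), labeling the aisle between each pair of rows as cool where intakes face each other and hot where exhausts face each other.

```python
# Illustrative sketch of a hot-aisle/cool-aisle layout: rows of racks face each
# other in pairs, so the aisles alternate cool (intakes face in) and hot
# (exhausts face in). Purely a layout illustration, not a thermal model.

def aisle_layout(row_count: int) -> list[str]:
    """Return the sequence of rows (with facing direction) and the aisles between them."""
    layout = []
    for row in range(row_count):
        # Even-numbered rows face "right", odd-numbered rows face "left", so
        # consecutive rows either face each other (cool aisle between them)
        # or back onto each other (hot aisle between them).
        facing = "faces right ->" if row % 2 == 0 else "<- faces left"
        layout.append(f"Row {row + 1} ({facing})")
        if row < row_count - 1:
            aisle = "COOL aisle (fronts face in)" if row % 2 == 0 else "HOT aisle (backs face in)"
            layout.append(f"  {aisle}")
    return layout

for line in aisle_layout(4):
    print(line)
```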
Recommendations for implementing a hot-aisle/cool-aisle configuration like Enovia’s are included in the TIA’s soon-to-be-released “Telecommunications Infrastructure Standard for Data Centers” (TIA-942). Every data center is different; some are relatively small while others fill up entire warehouses. For this reason, the data-center standard does not provide detailed instructions but instead offers guidelines and recommendations based on certain scenarios. For example, if an enterprise decides to employ an overhead cabling system, how much overhead room is necessary? If an enterprise decides to run cable under the floor, what is the best way to route the data and power cables? If an enterprise opts for a raised floor, which cabling is best for use with a raised-floor architecture?

“It was difficult to come up with a document that had a lot of hard and fast requirements for those reasons,” admits Congdon. “But the standard as it is currently drafted has a lot of general guidelines that will point people in the right direction.”
In development for about two and a half years, the standard went out for default ballot in October; at this point, only certain sections of the standard are still subject to debate. If the default ballot comes back without any “no” votes, the standard could be ratified as early as this month. However, Congdon expects a “no” vote or two, which could push final ratification to June.
Though Enovia did not have the new standard at its disposal during its data-center construction, Congdon believes it will become a critical resource for enterprises in the future. “Once the standard is published, I think it will have a great effect on what happens in data centers,” he reasons. “The standard will allow market forces to work better for the end users because it establishes some baseline criteria so that more people can provide applications, electronics, and other components that will support the standard network architecture.”