
21st Century Data Centers



Contents
Introduction
Data Center Considerations are Many
Cabling Infrastructure: Copper or Fiber?
Logistics Can Be the Key to Success
Conclusion



Introduction
What is a Data Center?
A Data Center is an information technology resource dedicated to providing uninterrupted service to mission-critical data processing operations. Data Centers centralize and consolidate information technology resources, enabling organizations to conduct business around the clock and around the world. Among their many features are:
- 7 x 24 x 365 availability
- Fail-safe reliability and continuous monitoring
- Power and network communications redundancy and path diversity
- Physical and network access security and surveillance
- Zoned environmental control
- Fire suppression and early warning smoke detection systems

Why the Recent Surge in Data Center Activity?


The emergence of the Internet as a universal network, the Internet Protocol (IP) as a common computer "language" and the continued advancement and maturity of Web technology have served as catalysts for a number of common business initiatives, including:
- Server consolidation and centralization of processing capability
- Database, content and storage management
- "Webification" of business applications
- Information distribution via intranets and extranets
- E-business and Electronic Data Interchange (EDI)
- Supply Chain Management (SCM) and Enterprise Resource Planning (ERP)
- Customer Relationship Management (CRM)
- Sales Force Automation (SFA)
- Wireless applications and connectivity

Another factor contributing to the surge of Data Center construction is that the number of Internet-connected devices per business or household is expanding well beyond the number of actual users. Business people are adopting wireless PDAs (personal digital assistants) in addition to, or within, their cell phones. Residential customers are experimenting with thin-client appliances, e-mail machines and home-networked PCs. Consequently, a "many-to-one" device-to-user ratio is driving the need for additional network connectivity and Data Center expansion.



Additionally, bandwidth capacity and availability are increasing while monthly access charges are decreasing for wide area, metropolitan and residential services. Web resources must also grow to meet the market demand for higher performance and availability. Data Centers are expanding and modernizing to meet the growing demands of mobile professionals, as well as to support rapid new customer acquisition and enhanced service initiatives. Data Centers are also sprouting up around the world to capture share in the on-line business and consumer service market.

Common Attributes of Data Centers


There are functions common to any Data Center today. For the most part, all Data Centers provide:
- Internet access
- Wide-area communications
- Application hosting
- Content distribution
- File storage and backup
- Database management
- Failsafe power
- Adequate HVAC and fire suppression
- High-performance cabling infrastructure
- Voice switching
- Security

Data Center Considerations are Many


Professional Engineering
With so many electrical, mechanical and communications variables involved, successful Data Center design and construction begins with professional engineering. Data Centers are unique environments, so developers can benefit from the architect, engineering and consulting (AEC) community, along with construction firms experienced in designing and building Data Centers. Some of the benefits provided by professional engineering include:
- Familiarity with the trades involved in a project (HVAC, electrical and mechanical)
- Coordination of the many trades involved in the building process
- Telecom and datacom expertise
- Unbiased written specifications based on performance, not general hype
- Understanding clients' network demands
- Meeting state licensing requirements
- Assuming professional liability for design and operational problems

Data Centers, once they're up and running, have zero tolerance for downtime and other problems caused by poor design or flawed installations. Power requirements aren't always known at the outset, and capacity must be sized with density in mind. Packing as many servers as possible into a rack or cabinet footprint means better asset utilization, yet demands more power. Additional redundancy and route diversity may also be required. Feeds from multiple power grids, piped to devices along different physical paths, have become standard design criteria. Adequate grounding and equipotential bonding are also key to personnel safety and noise-resistant electronic environments.

Power requirements are increasing, and more often than not, redundant (A and B) and diverse power sources exist in each rack or cabinet. Data Centers today often specify 100 W per square foot, and many are provisioning for twice that demand. Servers are supplied with dual power supplies, each having its own power cord, so racks and cabinets must be designed to provide plentiful power strips and cable routing. Environmental monitoring (temperature, humidity, smoke and vibration), operational monitoring (fan status, incoming voltage and UPS) and access control can provide additional control and management.

Adequate cooling becomes more of a challenge when servers are packed closely together and secured by heat-trapping enclosures. This invokes lively discussion among operators, engineering firms, contractors and manufacturers as to how cooling is delivered to devices via access flooring, specialized cabinets and other ducting methods. Some Data Centers have 44 or more 1U (one rack space) dual-processor servers installed in a single cabinet, only adding to the cooling problem. Device stacking restricts airflow and can further restrict cooling in multi-compartment cabinets. The result is overheating in racks or cabinets even at a room temperature of 72 degrees Fahrenheit, and the situation only threatens to get worse, since cooling capacity is rarely increased as electronics are added. Some estimates blame up to 60 percent of downtime on heat-related issues, and conditions are projected to worsen, with heat loads forecast to double in less than 10 years. Thankfully, enclosure and flooring manufacturers are tuned into this and are working with electronics suppliers, Data Center developers and operators on new design solutions. One example is the hot aisle/cold aisle approach, in which the equipment fans of a racking neighborhood are all directed into a common aisle so hot air can be evacuated from the back and cool air channeled in through the front. Of course, these considerations are decided up front in the overall layout of the Data Center.

Efficient allocation of space is a major feature of good Data Center design. Floor space for networking equipment can vary anywhere from 20 to 70 percent of gross square footage, with the remaining space consumed by support equipment. A good design must juggle all the variables to deliver high density and high availability with adequate cooling and cabling infrastructure flexibility. For instance, decisions to use ladder rack or cable tray beneath the floor can be influenced by factors such as security, maintaining sufficient airflow to devices and accessibility of Data Center utilities. The amount of space to allocate for aisles between cabinet rows or rack lines must be considered for maintenance purposes. Placement of cabinets and cable trays must also be anticipated, as it can be highly critical to floor panel access and future growth.
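To put these power figures in perspective, here is a minimal back-of-the-envelope sketch in Python. The pod size, design density and provisioning factor are illustrative assumptions built from the figures above, not measurements from any particular facility; the watt-to-BTU and cooling-ton conversions are standard constants.

```python
# Rough power and cooling estimate for one Data Center build-out phase.
# Assumed figures (illustrative only): pod size and design density.
POD_AREA_SQFT = 25_000           # one build-out phase ("pod"), assumed
DESIGN_DENSITY_W_PER_SQFT = 100  # common design point cited above
PROVISION_FACTOR = 2             # many sites provision for twice the density

it_load_w = POD_AREA_SQFT * DESIGN_DENSITY_W_PER_SQFT * PROVISION_FACTOR

# Nearly every watt delivered to IT gear becomes heat that must be removed.
# 1 W = 3.412 BTU/hr; 1 ton of cooling = 12,000 BTU/hr.
btu_per_hr = it_load_w * 3.412
cooling_tons = btu_per_hr / 12_000

print(f"IT load:      {it_load_w / 1000:,.0f} kW")
print(f"Heat load:    {btu_per_hr:,.0f} BTU/hr")
print(f"Cooling need: {cooling_tons:,.0f} tons (before any redundancy)")
```

Run against the 100 W per square foot design point, a single 25,000 sq. ft. pod already implies megawatts of power and over a thousand tons of cooling, which is why cooling capacity, not floor space, is often the first constraint a growing Data Center hits.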


Secure Remote Access


Since Data Centers often depend on a myriad of devices (a mixture of new and legacy equipment, entry-level to enterprise-class servers, various operating systems, firewalls, gateways, switches and routers), proactive management is required to maintain network equipment. Secure access and control, from anywhere, at any time, is crucial. To meet this need, IT professionals today rely on remote management technology. The tools they use are console managers and KVM (keyboard, video and mouse) devices. These products provide access to servers and IT equipment, whether it is down the hall or across the globe. Users can manage nearly anything in the Data Center at any time, from anywhere, even when the network is down. Via serial ports, console managers provide remote, consolidated access and control over a variety of Linux, Unix or Windows equipment, including servers, routers, switches, telecom equipment and building-access devices. Over the Internet, remote KVMs enable users to control the GUIs of an entire rack or room full of servers with a single keyboard, video display and mouse.
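As a rough illustration of the out-of-band idea, the sketch below probes the TCP ports a console server might expose for its serial lines. The host name is hypothetical and the port range simply follows a common reverse-telnet convention; an actual product's port mapping may differ.

```python
import socket

# Hypothetical console server fronting the serial ports of rack devices.
CONSOLE_SERVER = "console1.example.net"  # assumed name, not a real host
SERIAL_PORTS = range(3001, 3009)         # common reverse-telnet convention

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Because the console server sits on a separate management path,
# these checks can succeed even when the production network is down.
for port in SERIAL_PORTS:
    state = "up" if port_reachable(CONSOLE_SERVER, port) else "unreachable"
    print(f"{CONSOLE_SERVER}:{port} serial console {state}")
```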

Access Floors
One of the key pre-design considerations that affects almost every aspect of success within a Data Center environment is the access floor, or raised floor as it is often called. This infrastructure is every bit as important to cooling, equipment support, grounding, and electrical and communications connectivity as the building structure supporting it. When an access floor is chosen as a means to distribute services to the Data Center, there are many criteria to consider regarding these utilities, including:
- Seismic and vibration considerations
- Need for equipment to be bolted to and stabilized by the flooring structure
- Positioning of equipment to provide easy access to removable tiles and raceways beneath
- Spacing of access, cooling and maintenance aisles
- Panel strength and airflow requirements
- Electrical bonding and anti-static conductive needs
- Rolling, ultimate and impact load capacities
- Minimum height requirements for easy access

These environmental conditions will dictate the choice of stringer grids, panel construction, surface laminates and overall design. Because the access floor grid is often one of the first structures in place, before any cabling or equipment is located, it is often the victim of plan changes and disruption, sometimes harmful. As the cable tray, power or communications infrastructure is laid, the grid can be disturbed and its "squaring" or stability compromised, causing numerous undesirable issues. As always, time invested in planning, design, coordination of trades, knowledge of support equipment specifications and careful selection of access flooring materials is the best way to ensure overall success and minimal delay.
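To make the load-capacity criteria above concrete, the following sketch estimates per-foot and rolling loads for a fully populated cabinet. Every figure here (cabinet weight, panel rating) is an illustrative assumption for the sake of the arithmetic, not a vendor specification; real designs work from the flooring manufacturer's published ratings.

```python
# Quick check of access-floor loading under a populated cabinet.
# All figures are illustrative assumptions, not vendor ratings.
CABINET_WEIGHT_LB = 2000       # frame plus ~40 loaded 1U servers, assumed
LEVELING_FEET = 4              # static load lands on four leveling feet
PANEL_CONC_RATING_LB = 1250    # assumed concentrated-load rating per panel

point_load = CABINET_WEIGHT_LB / LEVELING_FEET
verdict = "OK" if point_load <= PANEL_CONC_RATING_LB else "EXCEEDS"
print(f"Load per foot: {point_load:.0f} lb "
      f"({verdict} vs {PANEL_CONC_RATING_LB} lb panel rating)")

# Rolling loads during installation are often the worst case: nearly the
# full cabinet weight can ride on two casters as it is wheeled into place.
rolling_load = CABINET_WEIGHT_LB / 2
print(f"Rolling load per caster: {rolling_load:.0f} lb")
```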

Racks, Cabinets and Support Infrastructure


Data Centers employ a wide variety of racks, enclosures and "pathway" products like cable tray and ladder racking. The variables are as numerous as the applications they support. Individually and together, these products must support four key areas of need:
- Climate control, namely cooling and humidity
- Power management
- Cable management
- Security and monitoring

In a populated Data Center, a typical enclosure might house 12 to 24 servers, a switch and a monitor, generating a heat load in excess of 4500 watts. It is easy to see how cooling problems can arise in such a scenario. Even with computer room cooling and a fan at the top of the cabinet, there can be a wide disparity in temperature between the top and the bottom of the enclosure. Racks and cabinets must often meet seismic (Zone 4) requirements as well as load-bearing and server depth needs. These support structures must also provide effective cable management. There are many unique designs and innovative approaches that can help ensure neat, manageable bundling and routing of cables with mechanical protection, stability and flexibility.

Data Centers also vary widely in their approach to intercabinet or interrack cable distribution. Many prefer cable tray below an access floor, others have adopted an overhead ladder rack approach, and still others see unique merit in each and use both. Cable distribution is a major consideration in the planning stages and overall design of the Data Center.

Data Center operators must strive to meet or exceed the legendary "five nines" (99.999 percent uptime) of the public telephone network. "Five nines" reliability equates to slightly more than five minutes of downtime annually for 24-hour service levels. It is a challenge in the data world to achieve that level of reliability, yet that is the customer expectation. N+1 and 2(N+1) component redundancy is required to meet these objectives. This desired level of stability creates a cascading effect on capital investment, usable networking equipment floor space versus support equipment space, dollars invested per square foot and so on.
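The "five nines" arithmetic is easy to verify. The short calculation below assumes nothing beyond a 24x7 service year and reproduces the "slightly more than five minutes" figure cited above:

```python
# Annual downtime budget implied by an availability target.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes of 24x7 service

for availability in (0.999, 0.9999, 0.99999):
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} uptime -> {downtime_min:,.1f} minutes/year")

# 99.999% ("five nines") leaves about 5.3 minutes of downtime per year,
# which is why N+1 or 2(N+1) redundancy is needed to stay within budget.
```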

Security
Data Centers are the lifeblood of the information organism. Company and customer data should be treated like money in a bank vault. Data Centers must have very definite measures in place to limit access to authorized personnel and to ensure use of proper fire prevention and life-safety systems, while minimizing the potential for equipment damage. Video surveillance (CCTV) and card access control may be sufficient, but additional security may be required. Beyond perimeter security, compartment-level security via locked cabinets may be needed, and additional provisions for cable raceway and raised floor access may become necessary to achieve the desired comfort level. In addition, real-time personnel and asset tracking may be desired.



Storage

Storage in Data Centers may migrate to the Storage Area Network (SAN) model over time as the volume of stored data escalates and the management of content becomes more challenging. Additional or complementary connectivity concerns must be addressed in the Data Center design to accommodate flexibility and the most efficient and effective use of space. The use of Fibre Channel technology and 50-micron glass may prompt a reevaluation of the overall distribution design. As other data link level transport methods (such as Gigabit Ethernet) are evaluated and/or standardized for use in SANs, there may be an advantage to using the same fiber type to interconnect storage systems and servers throughout the Data Center.

Flexible and adequate connectivity is key to bringing users on-line quickly and efficiently. The choice of media in the Data Center may be more critical than in other wired areas, just as equipment reliability and redundancy are more critical in a hospital operating room than in the admissions offices. The right combination of performance, flexibility, headroom, patching and error-resistance are all variables in the same crucial design formula.

Cabling Infrastructure: 10 Gigabit Copper or Fiber Optic Cable


Choices between fiber or copper, or the selective use of both, depend on a variety of criteria:
- Bandwidth and performance required per Data Center area
- Immunity to electromagnetic interference (EMI)
- Need for density and space-saving connectivity
- Flexibility and speed of reconfiguration
- Device media interface considerations
- Standardization
- Future vision and tolerance for recabling

The Case for Fiber


Fiber can provide several advantages over copper in a Data Center environment:
- Fiber systems offer greater bandwidth and error-free transmission over longer distances, allowing network designers to take advantage of new Data Center architectures
- The cost of fiber optic solutions is comparable with extended-performance copper cabling
- Optical Data Center solutions are designed for simple and easy handling and installation
- Fiber systems are easier to test
- Optical fiber is immune to EMI/RFI
- Faster installations (up to 75 percent faster) offer time and cost savings to Data Center developers, operators and contractors


- High-density fiber optic systems maximize valuable space. Fiber's small size and weight requires less space in cable trays, raised floors and equipment racks. As a result, smaller optical cabling provides better under-floor cooling and gives precious real estate back to the Data Center
- Fiber supports higher data rates, taking advantage of existing applications and emerging high-speed network interfaces and protocols. Multimode fiber optics support 10/100 Mbps, 1 Gbps and 10 Gbps Ethernet as well as 100/200/400 MB/s Fibre Channel
- Fiber provides "future vision" in network infrastructure
- 50-micron fiber is generally recommended by Storage Area Network (SAN) manufacturers because of its higher bandwidth capabilities
- Single-mode (SM) fiber capability goes beyond 10 Gbps

Fiber Use in the Data Center


Data Center Wide Area Network (WAN) connections vary, but most are typically fed with at least two redundant and diversely routed 10 Gbps fiber pairs. Bandwidth distributed to servers and other devices may range from 1 Mbps to 10 Gbps or more, depending on applications and Data Center models. In some Data Centers, the cabling infrastructure mimics the commercial building distribution model, with fiber used in the backbone (like a vertical riser) and copper to connect the servers (similar to horizontal distribution). However, there is an increasing trend to take fiber as close to the devices as possible, for several reasons. Fiber can provide increased bandwidth over copper, with multimode supporting up to 10 Gbps Ethernet. Fiber can also provide up to 60 percent space savings over copper cabling. This can be an important factor as equipment density and heat dissipation needs increase: space below access floors is getting crowded and can seriously restrict the airflow needed for cooling. Interlocked armor jacketing can provide additional protection if necessary and further reduce the need for underfloor raceway.

Fiber termination and patching equipment now allows up to 96 fibers to be terminated in one rack space (1U). This makes it attractive in some designs to terminate fiber close to server cabinets or racking rows (or even within them). While many servers now come in 1U configurations with dual (A and B) Ethernet interfaces, some have started to appear with optical connections as well. This trend toward optical networking would benefit from just such a flexible "fiber or copper last meter" patching point within a server cabinet row. Such a design, while providing maximum flexibility and growth insurance, has many variations and depends entirely on the Data Center operator's business objectives.

Interestingly enough, the choice of fiber over copper is no longer as much a cost consideration as it has been in the past. In the Data Center environment, most agree that Gigabit Ethernet will be common, so if copper media is to be used, only the highest grade of cabling will provide the required error-free transmission, headroom and performance. Today, the cost of a high-quality copper "channel" is economically equivalent to a fiber solution, while the fiber solution provides additional advantages such as extended distance (beyond 100 meters) and EMI/RFI protection.


Installation cost also favors fiber, particularly when a modularized "Plug & Play" solution is adopted, often yielding 75 percent savings over a field-terminated approach. For example, pre-terminated fiber for a large (100,000+ sq. ft.) Data Center can take as little as 21 days to purchase and install, while also offering easy and effective moves, adds and changes.

50- or 62.5-Micron Fiber?


In terms of physical properties, the difference between these two fiber types is the diameter of the core, the light-carrying region of the fiber. In 62.5/125 fiber, the core has a diameter of 62.5 microns and the cladding diameter is 125 microns. In 50/125 fiber, the core has a diameter of 50 microns with the same cladding diameter. The core diameter has an inverse relationship to the effective bandwidth and distance limitations of the fiber: as the core size decreases, the bandwidth capability increases.

Bandwidth, or information-carrying capacity, is specified as a bandwidth-distance product with units of MHz·km (megahertz times kilometers). The bandwidth needed to support an application depends on the data rate of transmission. As the data rate (MHz) goes up, the distance that rate can be transmitted (km) goes down. So, a higher fiber bandwidth enables you to transmit at higher data rates or over longer distances.

While 62.5-micron fiber is the most common multimode optical cable used in local area network applications, 50-micron fiber is the standard for Storage Area Networks and their Fibre Channel data link connectivity. 50-micron fiber was chosen because it provides greater link lengths for Fibre Channel (such as 150 m at 400 MB/s at 850 nm) than 62.5-micron fiber. The same holds for Gigabit Ethernet (600 m). The increased information-carrying ability and the adoption of 50-micron fiber for SAN/Fibre Channel use within Data Centers have led some designers to recommend migrating to 50-micron fiber throughout the Data Center. One such Internet Data Center (IDC) design envisions Ethernet and Fibre Channel distribution switches sitting between server cabinet rows and SAN equipment, providing any-to-any configuration capabilities and a migration path to all-optical networking within the Data Center.
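Because bandwidth is specified as a distance product, the supportable link length at a given signaling rate falls out of a one-line division. In this sketch, the modal bandwidth ratings are typical published values for 850 nm operation, used here as assumptions (a specific cable's datasheet governs), and the 1000 MHz demand is a rough stand-in for a gigabit-class link:

```python
# Bandwidth-distance product: modal bandwidth (MHz*km) divided by the
# signaling rate (MHz) gives an approximate maximum link length in km.
FIBERS = {
    "62.5/125 (160 MHz*km @ 850 nm)": 160,  # typical legacy rating
    "50/125   (500 MHz*km @ 850 nm)": 500,  # typical 2004-era rating
}

SIGNAL_MHZ = 1000  # rough bandwidth demand of a Gbps-class link

for fiber, mhz_km in FIBERS.items():
    max_km = mhz_km / SIGNAL_MHZ
    print(f"{fiber}: ~{max_km * 1000:.0f} m at {SIGNAL_MHZ} MHz")

# First-order estimate only: published standards define exact reaches
# (launch conditions matter, hence figures like the 600 m cited above),
# but the 50-micron advantage at high data rates is clear either way.
```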

Does Copper Media Have a Home in the 21st Century Data Center?
Today, there are high-performance copper cabling solutions capable of supporting Gigabit Ethernet reliably. Only the best expanded-performance copper media with full "channel" performance should be used; in other words, copper distribution systems with components designed to work together and provide error-free transmission from transmitter to receiver. This means performance throughout the entire path, not just through the cables but through all the interconnection devices such as patch panels, jacks and patch cords.

It is important to know the strengths of each cabling system and to install products with characteristics that match the equipment or service demands. One such example involves using high-performance patch cords as transition points between fiber-connected Ethernet switches and copper-interfaced servers within server cabinet rows or "racking neighborhoods." At short distances, high-end cables will provide more than enough insurance against dropped packets. At a later date, if desired, the switch cards and server ports may be upgraded to optical and the copper cords replaced with fiber equivalents.

Logistics Can Be the Key to Success


Getting materials to Data Center sites on time, within budget and by the in-service date is a crucial consideration for businesses. Many Data Center developers have become painfully aware of the importance of an integrated supply chain to overall success. After all, for most businesses, success is defined by incoming revenue, and that can only occur after a new application or system has been made available. Speed-to-market is essential, yet many companies underestimate the complexity and time requirements of material procurement, transportation, handling and spec control once funding is obtained and the green light is given to a Data Center expansion or construction project. Proper logistics planning can play a key role in bringing projects in on time, on scope and within budget.

Many Data Centers feature a finished production floor capacity in excess of 100,000 square feet. Developers wisely plan buildouts based on a phased approach. They will build out sections of the Data Center (for example, 25,000 sq. ft. at a time) in order to optimize capital outlay and capitalize on revenue streams from new customer contracts. As these areas (sometimes called "Pods") reach a predetermined "fill percentage," work on the second phase commences. This not only preserves capital, but also allows for more effective coordination of trades (electrical, mechanical, communications, etc.).

To use this model effectively, especially where there are multiple projects underway in multiple geographies, spec control becomes an important part of the equation. These sites are often mirror images of each other in more ways than one, so sharing and enforcing rigid construction specifications across geographies is a smart approach to maintenance, operations and service stability.

Many companies are good at designing, building and operating Data Centers, but logistics is often not their core competency and is best outsourced to a capable logistics partner. Some of the key requirements for successful Data Center logistics are:
- Dedicated resources
- Inventory management system
- Warehousing services
- Material procurement expertise
- Distribution expertise
- Logistical project management experience
- Global or national network

For many Data Center developers, operators and contractors, these logistics capabilities can be the difference between success and failure, between meeting in-service dates or falling behind the competition in customer acquisition and new service introduction.


Conclusion
Data Centers reflect how business is done today, from electronic commerce to the on-line consumer. As Internet-connected devices continue to outnumber network users by at least a two-to-one ratio, more and more users are becoming dependent on instant information access and on-line service offerings. Data processing capabilities will continue to grow at Moore's Law rates (doubling every 18 months), and the need for storage will increase by a factor of 10 over the next few years. Businesses will need to cope with that demand, whether they choose to expand existing Data Centers or outsource some of those functions to service companies. As bandwidth becomes more plentiful, available and economical, and as security technology matures, Data Centers will provide a plethora of new and unique communications services.

When designing and building a Data Center, planners, implementers and operators must provide:
- 7 x 24 x 365 availability
- Fail-safe reliability and continuous monitoring
- Power and network communications redundancy and diversity
- Physical and network access security and surveillance
- Zoned environmental control
- Fire suppression and early warning smoke detection systems

They must consider:
- Professional engineering
- Power requirements
- Adequate cooling
- Efficient allocation of space
- Proper racking, enclosures, pathways and access flooring
- Redundancy and path diversity
- Security
- Storage
- Flexible and adequate connectivity
- Copper or fiber cabling and management
- Integrated supply and logistics

With so many variables and unknowns to consider, today's IT and business leaders are cutting new trails through uncharted territory. The next few years will unveil the developments, risks and rewards of the new economy as entrepreneurial innovation and the digital age take root, powered by business technology and the Internet and brought to you through 21st Century Data Centers around the world.


Sources
To assist in the implementation of Data Centers, Anixter Inc. offers materials, processes and expert advice in areas such as cabling infrastructure, power distribution, logistics services and physical security products and services. For more information, visit www.anixter.com.

Fiber optics opinions were based on testing by Corning Cable Systems, as well as on the following white papers:
- The Origins of the Anixter Fiber Testing Program
- 62.5- or 50-Micron Multimode Fiber

The ideas and concepts in this paper reflect Anixter's perspective on the Data Center market. However, Anixter would also like to acknowledge the following companies for their contributions (either directly or indirectly) to this paper: Corning Cable Systems, EMC Corp., Environmental Systems Design, H.F. Lenz Company, Hewlett-Packard Company, IBM, Intel Corp., McClier, Sachs Electric Company, Rittal, Sun Microsystems, Inc. and Tate Access Floors, Inc.
