
40 Ways to Make Your Data Center More Efficient

An Internet.com Server eBook
© 2011, Internet.com, a division of QuinStreet, Inc.

Contents
40 Ways to Make Your Data Center More Efficient
This content was adapted from ServerWatch.com. Contributor: Kenneth Hess.


10 Data Center Management Mistakes You Might Be Making

10 System Administrator Tasks Ripe for Automation
10 Free Server Tools Your Organization Needs
Uncover Your 10 Most Painful Performance Bottlenecks

10 Data Center Management Mistakes You Might Be Making


By Kenneth Hess
For those who think (falsely) that they have the perfect data center, read on for some enlightenment. Those who work in the data centers of their dreams might beg to differ with your fantasy. Though you may not achieve desired perfection affordably, you can come close by changing the way you handle certain aspects of your data center management. Managing a collection of computer systems is no easy task. But, through better management and proper planning, that task might involve popping fewer pain pills. Here are the 10 major data center mistakes to avoid.

1. Inadequate Virtualization
If you operate a data center and haven't caught on that virtualization saves money, you're way behind the curve. Virtualization saves valuable rack space. It saves additional money on cooling, power and service contracts for those non-existent systems.

2. Untapped Cloud Computing
Similar to virtualization, cloud computing requires that you obtain a clue about its capability for your company or your customers. Amazon.com offers flexible and scalable plans that fit into an on-demand capacity scenario. Using Canonical's Ubuntu Linux Server Edition, for example, you can create your own private cloud or leverage Amazon.com's Elastic Compute Cloud (EC2) dynamically.

3. Design Flaws
Design flaws of a standing data center are difficult to overcome, but a redesign is less expensive than a fresh build. A 20-year-old data center still looks good, but it doesn't perform up to today's greener standards. You'll also have to retrofit your electrical apparatus to handle blade systems. You'll probably need to toss that old cooling system as well, since contemporary servers run cooler and more efficiently than their predecessors did.

4. Limited Expandability
"640K of RAM ought to be enough for anybody." How many times have you heard that quote that's attributed to Bill Gates, circa 1981? Whether he said it is of little importance now. The lesson to learn is that when you build anything, pretend you're converting a Celsius temperature to Fahrenheit: Double the amount you think you need and add 32. Using the Celsius-to-Fahrenheit equation will allow for some expandability in your data center. Two thousand square feet of floor space isn't enough? Try 4,032 square feet instead. Poor planning is no reason to run out of floor space or any other capacity. (A short sketch of this rule of thumb follows item 5.)

5. Relaxed Security
Enter any data center and you'll see card readers, retina scanners, circle locks, weight scales or other high-technology security systems in place. But next to those extreme security measures, you'll see a key-entry access door for security bypass. Physical security requires no bypass. If there's a bypass in place, consider your security compromised.
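The "double it and add 32" heuristic in item 4 is trivial to capture in code. Here is a minimal Python sketch with illustrative numbers only; it implements the author's tongue-in-cheek rule of thumb, not any formal capacity-planning method.

```python
def plan_capacity(estimated_need):
    """The tongue-in-cheek rule from item 4: double what you think
    you need, then add 32 (like converting Celsius to Fahrenheit)."""
    return estimated_need * 2 + 32

# The example from the text: 2,000 sq ft of floor space becomes 4,032 sq ft.
print(plan_capacity(2000))   # -> 4032

# The same padding could be applied to rack count or power capacity (kW).
print(plan_capacity(200))    # 200 kW planned -> provision 432 kW
```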



6. Haphazard Server Management
To manage your server systems, do you need physical access or can you manage them remotely? Every contemporary server system comes with a maintenance connection with which to manage that system remotely. Use it. Enable it. For each person who enters a data center, you can expect some amount of system failure. Incorrectly labeled systems, incorrect locations, a misread system name: the list goes on. Do yourself a favor: Enable those remote access consoles when you provision your physical systems.

7. Ill-fated Consolidation Efforts
One order of data center management business is to minimize the number of systems on the floor or in the racks. Server consolidation is the method by which this effort is carried out. Consider a consolidation ratio of 2-to-1 or 3-to-1 unacceptable. Physical systems that operate in the 5 percent to 20 percent utilized range can easily consolidate onto a system with five, six or more of their peers. Underutilized systems waste rack space, power and money in the form of service contracts.

8. Overcooled/Undercooled Space
What temperature is your data center? You should find out. If your data center operates below 70 degrees Fahrenheit, you're wasting money. Servers need airflow more than they need arctic temperatures. Take a stroll through your data center. If it's comfortable for you, it's comfortable for your servers. There's no need to freeze your data center employees or make them sweat.

9. Underpowered Facility
How many times have you heard that a particular data center has floor space but no more power? You hear it more than you should, if you hear it at all. An underpowered facility is a victim of poor planning. (See No. 4 above.) Virtualization can help give you back some power. Server consolidation can also assist. But those are short-term fixes for the greater problem of an underpowered facility.

10. Rack Overcrowding
If you've ever attempted to work in a fully populated rack, you probably wished you had miniature hands or extra-long fingers. It might seem inefficient to leave a bit of space between systems, but those who have the job of plugging and unplugging components for those systems will thank you. Poor planning leads to rack overcrowding, and it's unnecessary. Virtualization, consolidation and a more efficient arrangement will ease the problem. Experiencing an outage because of accidentally unplugging a server might convince you to leave a bit of space between systems.


10 System Administrator Tasks Ripe for Automation


By Kenneth Hess
A system administrator who does everything manually wastes not only her time but yours as well. Tasks that a sys admin performs repeatedly should be automated. Automation through scripting, specialized software and system scheduling frees her time, saves you money and prevents mistakes due to human error. These 10 sys admin tasks are prime targets for automation and will help streamline your daily operations.

1. Patching
The only time manual patching is called for is when that stubborn minority of systems will not take patches by automated means. Linux and Windows include tools to perform automated updates, but if you'd like more control of which patches your systems receive and when they receive them, investigate HP's Data Center Automation Center (HPDCAC) software (formerly Opsware). Much more than just an automated patching application, HPDCAC moves managing a complex infrastructure into a single, simple interface. If you're looking for a patch-only solution, check out Ecora's Patch Manager for agentless patch management. (A minimal cron-driven patching sketch for a Linux host follows item 2.)

2. User and Group Maintenance
You've probably used Active Directory, LDAP, NIS+ or other user and group account management software, but have you ever used one that really made you happy? The reason you haven't is that there's not a lot of automation built into them. Sure, you can create a user account, remove a user account, and create and manage groups, but when it comes down to real management, you probably haven't found the right tool. The one you want might have to be the one you create yourself via scripts. In UNIX, it's simple to create scripts to prompt you for the accounts you wish to remove, have the system copy the user's files to a new location, change the permissions, search all systems for any files owned by that user, change permissions on those files or move them, and complete the process by removing the user account from the directory service. (A rough offboarding sketch appears below, after the patching example.) Check out some of the add-on modules for your user management tool of choice. Microsoft, for example, offers its Active Directory Resource Kit Book and CD that includes utilities for automation scripting.
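As one hedged illustration of item 1, here is a minimal unattended-patching sketch for a Debian or Ubuntu host, meant to be run from cron as root during a maintenance window. The log path is an assumption, and this stands in for, rather than replicates, tools such as HPDCAC or Ecora's Patch Manager.

```python
#!/usr/bin/env python3
"""Unattended patch run for a Debian/Ubuntu host (run from cron as root)."""
import logging
import subprocess

logging.basicConfig(filename="/var/log/auto-patch.log",   # assumed log location
                    format="%(asctime)s %(levelname)s %(message)s",
                    level=logging.INFO)

def run(cmd):
    """Run a command, log its output, and raise if it fails."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    logging.info("%s -> %s", " ".join(cmd), result.stdout.strip())
    result.check_returncode()

try:
    run(["apt-get", "update"])
    run(["apt-get", "-y", "upgrade"])   # use dist-upgrade if kernel updates are wanted
except subprocess.CalledProcessError as exc:
    logging.error("Patch run failed: %s", exc)
    raise
```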

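Item 2's UNIX offboarding script might look roughly like the following. It is a sketch only: the archive directory is an assumption, it runs as root on a single host, and a real version would also handle mail spools, cron jobs and remote systems.

```python
#!/usr/bin/env python3
"""Rough UNIX user offboarding: archive the home directory, find stray files,
then remove the account. Run as root; paths here are illustrative."""
import pwd
import shutil
import subprocess
import sys

ARCHIVE_DIR = "/srv/offboarded"          # assumed to exist

def offboard(username):
    user = pwd.getpwnam(username)        # raises KeyError if the account is unknown

    # 1. Move the home directory somewhere safe and lock down permissions.
    dest = f"{ARCHIVE_DIR}/{username}"
    shutil.move(user.pw_dir, dest)
    subprocess.run(["chmod", "-R", "o-rwx", dest], check=True)

    # 2. List files on the root filesystem still owned by the user
    #    (repeat per mounted filesystem as needed).
    strays = subprocess.run(
        ["find", "/", "-xdev", "-user", username, "-print"],
        capture_output=True, text=True)
    print("Files still owned by", username, ":\n", strays.stdout)

    # 3. Finally, remove the account itself.
    subprocess.run(["userdel", username], check=True)

if __name__ == "__main__":
    offboard(sys.argv[1])
```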
3. Security Sweeps
You should perform regular, automated security sweeps on your entire network to expose and fix any wire-borne vulnerabilities. The frequency and intensity of the scans depend on the complexity of your network. Through scripting magic, you can set up scheduled scans, send the output to a database, extract a post-scan report from the database, and email it to yourself or create an HTML version of the report suitable for online viewing. One such tool, available for every modern operating system, is Nmap, a free network security scanner designed to rapidly scan large networks and report vulnerabilities.

4. Disk Usage Scans
There is a constant turf war raging between users and sys admins, and it is one that the sys admin must ultimately win. To that end, the sys admin has some tools to employ: disk space quotas, disk partitions and disk space scans. Scans are regular audits of disk space usage by user. Offenders usually receive a warning or two before personal contact from a sys admin. Typical remedies for disk space gluttons are temporary account suspension, removal of files, moving the files to a new location or an extension of the user's space quota. These automated scans, when performed regularly (about once per week), prevent harsh actions by the sys admin and keep users apprised of their disk use. (A minimal version of such a scan is sketched below.)
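A minimal version of the weekly scan described in item 4, using only the Python standard library. The /home scan root and the 10 GB soft limit are illustrative assumptions, and a production version would mail the report rather than print it.

```python
#!/usr/bin/env python3
"""Weekly disk-usage audit: sum file sizes per owner under /home and flag
anyone over a soft threshold. Schedule from cron; thresholds are examples."""
import os
import pwd
from collections import defaultdict

SCAN_ROOT = "/home"
THRESHOLD_BYTES = 10 * 1024**3          # 10 GB soft limit (illustrative)

usage = defaultdict(int)
for dirpath, dirnames, filenames in os.walk(SCAN_ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            st = os.lstat(path)          # lstat so symlinks aren't followed
        except OSError:
            continue                     # file vanished or unreadable
        usage[st.st_uid] += st.st_size

for uid, total in sorted(usage.items(), key=lambda kv: kv[1], reverse=True):
    try:
        owner = pwd.getpwuid(uid).pw_name
    except KeyError:
        owner = str(uid)
    flag = "OVER QUOTA" if total > THRESHOLD_BYTES else ""
    print(f"{owner:<16} {total / 1024**2:10.1f} MB  {flag}")
```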



5. Performance Monitoring
Taking an occasional performance snapshot is a good method for a single point-in-time glance at system performance, but that singular peek is only a pixel in the entire performance picture. You need something with more depth and breadth that will provide you with performance trends and predictive peaks and valleys. Setting up such a system is easy with Orca. Orca compiles performance data from disparate sources (UNIX, Windows, Linux) and creates easy-to-read performance graphs. Gathering the data, calculations, graph generation and display are all part of the automated system.

6. File Transfers
Using command-line scripting power (Windows, UNIX and Linux), you can perform automated file transfers between hosts. There's no need to do them interactively. If you're clever in your timing, you can set up elaborate automated schemes that not only transfer your files but also unzip, change permissions, move, copy and insert information into a database. Use the secure versions of your file transfer utilities (e.g., SSH, SFTP, SCP) to ensure that anyone snooping doesn't grab an important password from your network stream. (See the first sketch after this list.)

7. Code Promotion
How you promote code from test to staging and into production can have a profound effect on marketing campaigns and other time-specific events. Moving the code from one environment to another manually is cumbersome, error-prone and requires coordination between developers and sys admins. Enable your developers to promote code from one environment to another using an automated code deployment system. Some sys admins use rsync for automated code deployment, and it's safe to use if coupled with SSH keys to secure the transfers between hosts. (See the second sketch after this list.)

8. High-Level Administration
You can perform those housekeeping duties, service restarts and maintenance notices through automation. Set up your scripts to fire during low-use hours for clearing temporary file dumps, restarting your favorite services and sending out any maintenance or downtime notices via email. You'll find that automating these tasks takes some of the pressure off of you to remember which day it is and which list of things you need to do. There's no reason to keep a calendar of these; let the system handle them.

9. Reboots
Yes, you can automate system restarts. Sitting around waiting for systems to bounce back to life is a waste of time. Automate the process during low-use hours. Don't worry: your automated monitoring system will notify you if a system doesn't come back online within a reasonable amount of time.

10. Malware Scans
You can scan for spyware, malware, viruses and other nasties using automated processes. Using scripts, you can map or mount drives, scan your filesystems, disconnect when finished with the scan, scrape the scan log for positive hits, and send the results to a database or in an email. You don't need to perform these scans manually when your system is perfectly happy and suited to do so on its own.
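Item 6 in practice: a nightly push built on the stock OpenSSH client tools. This is a minimal sketch assuming key-based authentication is already configured; the host, user and paths are placeholders.

```python
#!/usr/bin/env python3
"""Nightly secure file transfer: push a report with scp, then tidy up on the
remote side over ssh. Assumes SSH keys are in place; names are illustrative."""
import subprocess
from datetime import date

REMOTE = "deploy@reports.example.com"            # hypothetical host
LOCAL_FILE = f"/var/reports/usage-{date.today()}.csv"
REMOTE_DIR = "/data/incoming"

# Copy the file over an encrypted channel (never plain FTP).
subprocess.run(["scp", "-q", LOCAL_FILE, f"{REMOTE}:{REMOTE_DIR}/"], check=True)

# Post-transfer housekeeping on the remote host: fix permissions.
subprocess.run(["ssh", REMOTE, f"chmod 640 {REMOTE_DIR}/*.csv"], check=True)
```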

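And for item 7, a stripped-down rsync-over-SSH promotion step. The staging path, production host and exclude list are placeholders, and a real deployment pipeline would add release tagging, approvals and rollback.

```python
#!/usr/bin/env python3
"""Promote a build from staging to a production host with rsync over SSH.
Paths, host and excludes are placeholders for illustration only."""
import subprocess

STAGING_DIR = "/srv/staging/webapp/"             # trailing slash: sync contents
PROD_HOST = "www1.example.com"                   # hypothetical production host
PROD_DIR = "/srv/www/webapp/"

cmd = [
    "rsync", "-az", "--delete",                  # archive, compress, mirror deletions
    "--exclude", ".git",
    "-e", "ssh",                                 # transfers ride over SSH keys
    STAGING_DIR, f"deploy@{PROD_HOST}:{PROD_DIR}",
]
subprocess.run(cmd, check=True)
print("Promotion to", PROD_HOST, "complete")
```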

10 Free Server Tools Your Organization Needs


By Kenneth Hess
This list of 10 free, essential tools is an amalgam of tools for all sizes of companies and networks. The tools covered here are generally cross-platform (i.e., they run on multiple OSes), but all are extremely useful to the system administrator, network administrator and first-level support personnel. While all of these tools are free to download and use in your network without payment of any kind to their developers or maintainers, not all are open source. The 10 essential tools listed here, in no particular order, are from various sources and represent the very best in tools currently used in large and small enterprises alike.

1. PSTools
PSTools is a suite of useful command-line Windows tools that IT professionals consider essential to survival in a Windows-infested network. It provides automation tools that have no rival. There is no greater free toolset for Windows available anywhere. Microsoft provides this suite free of charge. If it's not part of your Windows diagnostic and automation arsenal, stop reading and download it now. Be sure to come back and finish the list. (You can multitask, can't you?)

2. ShareEnum
ShareEnum is an obscure but very useful tool. ShareEnum shows you all file shares on your network. Even better, it shows you their associated security information. This very small (94K) tool might become one of the most valuable and useful security tools that you possess. It is another free tool from Microsoft.

3. Nagios
Nagios is an enterprise infrastructure monitoring suite. It's free, mature and commercially supported. It has grown from a niche software project to a major force in contemporary network management. It's used by such high-profile companies as Citrix, ADP, Domino's Pizza, Wells Fargo, Ericsson and the U.S. Army.

4. Wireshark
If you run a network of any size or topology, Wireshark is a must-have application. It is a network packet capture and analysis program that assists you with your ongoing quest for a trouble-free network. Wireshark won't prevent network problems, but it does allow you to analyze those problems in real time and possibly avoid failure.

5. Apache
The Apache project isn't just a web server. The project, officially known as the Apache Software Foundation (ASF), consists of almost 100 different projects under the Apache umbrella. Yes, the famous and wildly popular HTTP server, Apache, is the project's namesake and mainstay, but it isn't the only nymph in the forest.

6. IP Plan
IP Plan is a little-known project that has potential in any size environment. It's not a DNS service, but it is a Web-based IP tracking application. The reasoning behind a tool like IP Plan is that DNS tracks systems that are in use. But to whom do you go when an IP address conflict arises, and how do you know which IP addresses are free to use? You won't know unless you have a tool like IP Plan. It's easy to use and free. What more could you want?



7. Eclipse
Eclipse is an integrated development environment (IDE) that you can use to create applications in almost any computer programming language. Eclipse has wide language support, but it is historically viewed as a Java development tool. You can develop Windows applications in this very complete IDE, as well as applications for every current operating system.

8. KVM
Kernel Virtual Machine (KVM), now owned and supported by Red Hat, is a free, full virtualization solution. Full virtualization means the hardware is abstracted, so you can run almost any OS in a virtual machine. Each virtual machine has its own display, network, disk and BIOS, and it functions like a physical system. You install an OS into a virtual machine just as you would onto a physical system. Yes, even Windows.

9. OpenOffice.org
OpenOffice.org (OO.o) is the free equivalent of Microsoft's popular office suite. OO.o sports a word processor, spreadsheet, presentation program, database and more. It is compatible with Microsoft Office and can open or export almost every imaginable file format. OpenOffice.org is not only easy on the wallet (free), but it's also the darling of IBM, which has created its own derivative: Lotus Symphony.

10. Webmin
Webmin, for the uninitiated, is the ultimate lazy system administrator tool. It's a Web-based interface to your UNIX or Linux system that covers almost every configurable aspect of the system and any add-on program you can ponder. You can't rely on it for 100 percent of your system administration tasks, but you can probably use it for 99 percent of them.


Uncover Your 10 Most Painful Performance Bottlenecks


By Kenneth Hess
When you hear the words "performance bottleneck," the typical hot spots that come to mind are CPU, memory, disk and network. Those are good places to start looking for bottlenecks, but they aren't the only places performance problems can hide. This list covers those usual suspects plus six other potential leads for your investigation into the elusive performance breakdown. Sometimes just knowing where to look might prevent your own personal breakdown. Note that the listed items are in no particular order.

1. CPU
The CPU is the brain of the computer, where calculations and instruction operations occur. CPUs can handle millions of calculations and instructions, but performance suffers when the number of these operations exceeds capacity. CPUs that sustain greater than 75 percent busy will slow the entire system. CPUs need some room for activity bursts, where loads can reach 100 percent for short periods of time. CPU load is a common source of performance bottlenecks. (A minimal check against that 75 percent guideline is sketched below.)
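The 75 percent figure is easy to watch for. Here is a minimal sketch using the third-party psutil library (an assumption; any monitoring agent exposes the same counters), sampling for roughly a minute:

```python
#!/usr/bin/env python3
"""Flag sustained high CPU load, per the 75 percent guideline above.
Requires the third-party psutil package (pip install psutil)."""
import psutil

BUSY_THRESHOLD = 75.0   # percent busy, from the guideline above
SAMPLES = 12            # 12 samples x 5 seconds = one minute of observation

readings = [psutil.cpu_percent(interval=5) for _ in range(SAMPLES)]
average = sum(readings) / len(readings)

if average > BUSY_THRESHOLD:
    print(f"WARNING: CPU averaged {average:.1f}% busy over the last minute")
else:
    print(f"CPU averaged {average:.1f}% busy; within the comfort zone")
```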

2. Memory
The rule of thumb on memory is "add more." When performance problems point to memory, the consensus solution is to add more. This practice is effective only in the short term, however. Performance bottlenecks that point to memory are often the result of poorly designed software (memory leaks) or other system flaws that manifest themselves as memory issues. The key to solving memory performance problems is to find the root cause of the symptom before adding more RAM.

3. Storage
Disk speed, RAID type, storage type and controller technology all combine to produce what's known as disk I/O. Disk I/O is a common source of performance angst for system administrators and users alike. There are practical and physical limits to performance even when using the best contemporary disk technology. Use best practices when combining and separating workloads on disks. As attractive as leveraged storage is, local disks are still faster than the fastest SAN. (A crude throughput check is sketched after item 5.)

4. Network
The network is a commonly blamed source of performance bottlenecks, but it is rarely found to be so. Unless there is a network component hardware failure, such as a damaged switch port, bad cable, jabbering network card or router configuration problem, you should look elsewhere for your continued performance bottleneck. A perceived slowness on the network usually points to one of the list's other nine entries.

5. Applications
Although no application developer wants to hear it, poorly coded applications masquerade as hardware problems. The fickle finger of guilt points to applications when an otherwise quiescent system suffers greatly when the application is running and shows no signs of difficulty when the application is off. It's an ongoing battle between system administrators and developers when performance issues occur: each wants to allege the other's guilt. A word to the wise after many hundreds of hours of chasing hardware performance bottlenecks: It's the application.
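Item 3's point about practical I/O limits can be sanity-checked with a crude sequential-write timing test such as the one below. The mount point and file size are assumptions, and a real benchmark (fio, for example) is the right tool for serious comparisons.

```python
#!/usr/bin/env python3
"""Crude sequential-write timing test for comparing storage paths
(e.g., a local disk vs. a SAN mount). Path and size are illustrative;
use a real benchmark such as fio for serious measurements."""
import os
import time

TEST_PATH = "/mnt/san01/iotest.tmp"      # hypothetical mount point to test
BLOCK = b"\0" * (1024 * 1024)            # 1 MiB writes
BLOCKS = 512                             # 512 MiB total

start = time.monotonic()
with open(TEST_PATH, "wb") as fh:
    for _ in range(BLOCKS):
        fh.write(BLOCK)
    fh.flush()
    os.fsync(fh.fileno())                # force data to the device, not the cache
elapsed = time.monotonic() - start

os.remove(TEST_PATH)
print(f"Wrote {BLOCKS} MiB in {elapsed:.1f} s "
      f"({BLOCKS / elapsed:.1f} MiB/s sequential write)")
```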



6. Malware
Viruses, Trojan horses and spyware account for a large percentage of perceived performance bottlenecks. Users notoriously complain about the network, the application or their computer when nasties raise their ugly heads. Those performance killers can reside on one or more server systems, the user's workstation, or a combination of the two. Malware infections are so common that you must employ multiple defenses against them. Antivirus, antispyware, local firewalls, network firewalls and a regular patching regimen will help protect systems and prevent the resulting bottlenecks.

7. Workload
Smart workload management can help prevent performance problems associated with poorly balanced workloads or ill-conceived load balancing schemes. Adding another system to a suffering cluster relieves the pressure, but this is easier to do in a virtual environment than in a physical one. The best advice here is to measure capacity and performance on all systems and heed the numbers reported to you. Move workloads, add systems and keep a watchful eye on performance.

8. Failing or Outdated Hardware
The older the hardware, the more likely it is to fail. Some hardware components fail with a single final breath, while others linger on with random complaints and untraceable glitches. Hardware that causes system reboots, disappearing data or performance bottlenecks frustrates system administrators because of its unpredictable nature. The best way to prevent such tragedies is to keep hardware fresh, use redundant hardware and monitor your systems carefully.

9. Filesystem
Did you know that your filesystem choice can have a profound impact on performance? It can. Some filesystems, JFS for example, use very little CPU. XFS offers very high scalability and high performance. NTFS is a recoverable filesystem with high performance. The new ext4 filesystem supports very large files efficiently. Each filesystem has a purpose, and using the incorrect one for an application can have disastrous results. Consider your filesystem choices wisely and select the best one for the job. There is no one-size-fits-all filesystem.

10. Technology
The technology you select for your infrastructure plays an important role in performance. For example, if you dedicate your services to a virtual infrastructure technology, you might have performance problems not experienced on equivalent physical systems. Alternatively, there are some workloads that thrive on virtual technology. LAMP (Linux, Apache, MySQL, PHP) workloads, for example, perform at or greater than native speeds on KVM. However, container-type virtualization (OpenVZ, Parallels, Solaris Zones) boasts native performance ratings for any workload.
