Quality Management in a Data Center: A Critical Perspective

Conference Paper · December 2018



QM in a Data Center

DECISION SCIENCES INSTITUTE


Quality Management in a Data Center: A Critical Perspective

Jyoti Sawhney
University of Maryland University College
Email: jsawhney@umd.edu

Mahesh S. Raisinghani
Texas Woman’s University
Email: mraisinghani@twu.edu

Efosa C. Idemudia
Arkansas Tech University
Email: eidemudia@atu.edu

ABSTRACT

Amazon Web Services shifted the industry's focus from infrastructure, hardware, and software ownership to subscription and capacity on demand based on the colocation model. The debate remains whether to host locally or in the cloud. This paper evaluates the various aspects of each option to support an optimal decision.

KEYWORDS: Data centers, Colocation model, Cloud computing, Demilitarized zone topology, Virtual security, Green cooling

INTRODUCTION

The success of a project is measured by whether it was completed on time, within budget, and within scope while meeting the quality requirements of the product or service (Project Management Institute, 2013, p. 35). If the quality of a project or process is poor, it detracts from the value of the product or outcomes, since the results may not be acceptable to the user. However, the definition of quality is nebulous at best and can vary depending on the individual providing it; the same individual's definition may also change over time. Quality management in the project management domain ensures that product or service quality is planned, assured, and controlled over the lifecycle of the project (Project Management Institute, 2013, p. 227). In order to plan quality management for a data center, one has to know all the criteria and measures that are specific to this deliverable (Project Management Institute, 2013, p. 227). This paper captures these values so that one may evaluate their implementation and measure the quality.

LITERATURE REVIEW

A data center is a physical infrastructure that houses the computers, servers, and networking systems that support a company's Information Technology (IT) needs. Data centers hold vast amounts of storage and can process and serve large amounts of mission-critical data to clients (Stroud, 2017). Because of this mission-critical nature, a data center often has redundant or backup power supplies, redundant data communications, and tight security.

A data center includes computers and network equipment (Colorado Springs Utilities, 2015). The computers act as servers on which processes run using the client-server model. Servers have become smaller in size, but the needs of the dependent applications have grown so much that the size of most data centers remains the same. The servers in a data center can provide various services, such as database, application, and web services, and can run a variety of operating systems.

Data Center Design and Construction


The servers have infrastructure requirements such as electricity, cooling, cabling, fire suppression, and physical security systems. Electricity is often supplied from dual feeds in order to provide backup power, and data centers also have backup generators to provide power during a failure. Infrastructure design also depends on the tier of the data center.

Tiers

Data centers are classified on a four-level Tier system. These tiers are defined by the Uptime Institute, which also certifies companies that meet these standards. The tiers form the requirements that the data center must be modeled on to meet its availability requirements (Colocation America Inc., 2018):

Tier 1 - 99.671% (28.8 hours of downtime per year)
Tier 2 - 99.741% (22 hours of downtime per year)
Tier 3 - 99.982% (1.6 hours of downtime per year)
Tier 4 - 99.995% (0.4 hours of downtime per year)

Tier 4 is the least tolerant of downtime, and the architecture, security, mechanical, and telecom services must all match in order to achieve this standard.
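As a quick check on these figures, the downtime values follow directly from the availability percentages. The sketch below is an illustrative calculation, not part of the Uptime Institute's published materials:

```python
# Convert a Tier availability percentage into hours of downtime per year.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours in a non-leap year

def annual_downtime_hours(availability_pct: float) -> float:
    """Hours per year a system at the given availability may be down."""
    return (1.0 - availability_pct / 100.0) * HOURS_PER_YEAR

for tier, pct in [(1, 99.671), (2, 99.741), (3, 99.982), (4, 99.995)]:
    print(f"Tier {tier}: {pct}% -> {annual_downtime_hours(pct):.1f} h/year")
# Tier 1 works out to 28.8 h/year; Tier 4 to about 0.4 h/year.
```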

Power

Data centers are often fed from two independent grid sectors by the utility company (Michelson, 2004). This ensures no loss of power if one grid fails. Data centers are also backed up by diesel generators, which can run for extended periods as long as fuel is available; the power supply is switched to these generators during scheduled or unscheduled outages. Many data centers also use uninterruptible power supplies (UPSs), which normalize voltage and frequency fluctuations and keep the computer equipment safe. A UPS also provides 30 minutes to an hour of power while supply is switched between the utility company and generator power. Servers themselves have multiple power supply units: if one unit fails, work can be performed on the server without a total power-down of the unit.

Cooling

Cooling is paramount to the safe and optimal performance of a data center (Heslin, n.d.). Computer processors generate heat which, if not dissipated, can reduce the efficiency of the processor. Servers are installed in racks that face each other, and data centers are kept at cooler temperatures by blowing cool air through holes in the floor.

Security

A key requirement of a data center is physical security. Apart from the expensive hardware, mission-critical data is often stored on the servers, so physical security is necessary in order to prevent any theft of hardware or data. Access to a data center is often controlled by key-card access. This method allows one to enable or disable access when a person's role in the organization changes, and it also allows one to audit when a person entered or exited the location. Freight access is kept separate in order to allow easy transportation of large equipment. Maintenance staff must be authorized for limited-time use, and local security personnel will often accompany them and oversee their operations.

Virtual security in a data center is just as important as physical security. The data in a data center resides on the servers and is accessible over the network. This data must be protected from unauthorized access, malware, and viruses. Threats can come in the form of Denial of Service (DoS) attacks, unauthorized access, spyware, malware, phishing, scams, or any other sort of network abuse.

A Demilitarized Zone (DMZ) is a method to safeguard data from such threats. Figure 1 illustrates a DMZ topology designed by Cisco (Cisco, 2009).

Figure 1: DMZ Topology

The DMZ is the middle stage between the internet and the organization's private network and prevents direct access to the internal servers. In this network topology, mail and web servers are placed outside the firewall and are accessible to the world, while database and application servers are placed behind the firewall. Network holes are punched in the firewall to allow authorized servers outside to communicate with the secured servers.
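The access rules implied by this topology can be sketched as a simple zone policy. The zone and server names below are hypothetical illustrations for exposition, not Cisco's actual design:

```python
# Hypothetical DMZ policy: which source zones may reach which servers.
# Public-facing servers sit in the DMZ; data and application servers
# are behind the inner firewall and never accept traffic from the internet.
DMZ_POLICY = {
    "web_server":         {"internet", "internal"},  # in the DMZ
    "mail_server":        {"internet", "internal"},  # in the DMZ
    "application_server": {"dmz", "internal"},       # behind the firewall
    "database_server":    {"dmz", "internal"},       # behind the firewall
}

def is_allowed(source_zone: str, server: str) -> bool:
    """Return True if traffic from source_zone may reach the server."""
    return source_zone in DMZ_POLICY.get(server, set())
```

Under this policy, `is_allowed("internet", "database_server")` is False while `is_allowed("dmz", "database_server")` is True: the latter is the "hole punched in the firewall" that lets an authorized DMZ server talk to the secured servers.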

DISCUSSION

Data centers can reside locally on premises (on-prem), be hosted, or reside in the cloud (Gillis, 2015). An on-prem solution requires an organization to build its own physical and network infrastructure to support the needs of a data center: servers and other hardware must be acquired, and a building with enough power, cooling, and security must be designed. Because of the immense cost involved, usually only large organizations with established server needs can justify an on-prem data center. Hosted solutions allow one to contract a Service Provider (SP) to offer a data center. The SP is responsible for the electricity, cooling, security, and network functions of the data center. Even though the hardware and material resources belong to the organization, the tasks associated with the day-to-day running of the data center belong to the SP. This is a hands-off approach.

A cloud solution is a pay-for-what-you-use service that can provide Infrastructure as a Service (IaaS), which compares to running a data center. In this scenario, virtual machines are hosted in the cloud and made available to the customer. The customer can manage everything from the operating system up and maintains complete control over the environment. Needless to say, the organization does not have to devote material or financial resources to establishing a data center. This solution also provides the most elasticity, allowing one to expand or contract the available machine pool depending on current needs.

Table 1: Comparison of On-Premises, Hosted and Cloud solutions

                          On-Premises   Hosted                   Cloud
Data Center               Self          Self/Service Provider    Cloud Solution Provider
Hardware Provisioning     Self          Self/Service Provider    Cloud Solution Provider
Hardware Maintenance      Self          Self/Service Provider    Cloud Solution Provider
Hardware Upgrades         Self          Self/Service Provider    Cloud Solution Provider

Quality Management in a Data Center

There are technical and non-technical measurements of quality in a data center. Technical measurements concern the tangible qualities that can be measured and maintained; non-technical measurements concern the intangible qualities that define the quality of service. Even in a highly technical service such as a data center, there is room for soft qualities such as communication with customers during an incident. Quality may be measured by how well the data center meets the specifications set by the provider versus whether it is fit for use by the customers (Lutzker, 2002). Data center manager job advertisements often emphasize communication and business competence to manage IT systems in the on-demand environment.

Now that we have explored the requirements for a data center, we can explore the quality control measures for each of these specifications. When a threat to a data center occurs, whether physical or virtual, it needs to be addressed with speed, and the investigation must happen with precision in order to quickly identify the source of the attack. This helps improve the quality of service for the data center and its prevention strategies. The following are some standards and prevention strategies that can help implement quality control measures in a data center.

Uptime Institute’s Tier Certification

Uptime Institute is the only organization that certifies data center designs, facilities, and operations to the Tier Classification System (I-IV) and Operational Sustainability criteria (Uptime Institute, 2017). It has awarded 864 certifications in 83 countries, across industries, governments, and universities. Uptime Institute is a global authority on data centers and helps with designing data centers that optimize performance, reliability, and efficiency. This certification lends the data center credibility that it will meet a certain sustained uptime. Every setting and part of the data center is examined and analyzed in the certification process to ensure that industry measures are met and maintained.

The Tier system also provides a method to benchmark the data center. There is never a one-size-fits-all solution for a data center, and this method of certification allows one to customize the solution to meet the requirements of the project. Tiers are accepted and respected worldwide and do not conflict with local codes or regulations. Tier certifications expire two years after the award date, which forces the organization to continually improve its methods to adapt to the changing face of technology. Uptime Institute also provides documentation and education for the owners and operators of data centers in order to develop expertise in the field. This supports accurate day-to-day operation of the data centers.

Security of a Data Center

The following ISO certifications ensure that a data center meets the security requirements.
ISO 27001 - This standard specifies the requirements for establishing, implementing,
maintaining as well as continually improving an information security management system. It is a
risk-based approach which covers confidentiality, integrity and availability aspects of information
that need to be managed. This provides a framework for an Information Security Management
System (ISMS) (ISO, 2017).

ISO 22301 is a standard in the field of business continuity management (BCM) to ensure
continued operation in case of critical situations. This standard sets the requirements for a
business continuity management system to protect against business disruptions and ensure the
organization is able to recover in the event of a disruption.

Virtual Security

Data centers now support a wide range of applications and services, and as these applications multiply, so does the security required to protect every level. Virtual security is among the most crucial aspects of a data center: hackers can exploit any vulnerability in the system with wide impact. Through event monitoring, analysis, and correlation, one can identify security threats, which helps mitigate and control security breaches. Network telemetry data can be collected over time on each appliance and used to build an accurate view of the network layer, quickly identify threats, and confirm compromises. One can then determine the severity of an incident and take appropriate action.

The Cloud Security Alliance (CSA) is a coalition of industry leaders, global associations, and security experts. CSA publishes best-practice guidance on the use of cloud computing; its mission is to provide security assurance and education about cloud computing (Cloud Security Alliance, 2017). Machines are segregated according to their function, so one can detect anomalous activity by knowing each machine's function, what it is running, who has access to the operating system layer, and what other systems it communicates with. Virtual Private Network (VPN) connections provide paths for enterprise data, and by limiting access to the VPN, one can manage network access policies. Applications hosted on the servers are also scanned, since any application must be evaluated for vulnerabilities that a hacker may exploit (Yasin, 2009).

This multi-layer security allows one to be vigilant for any malicious activity. This can help to
detect, contain, minimize and possibly eliminate vulnerabilities.

Green Cooling

Although cooling does not have a direct impact on the quality of service in a data center, smart and green solutions are crucial for lowering investment, operational cost, and environmental impact (Carroll, 2012). Microprocessors generate heat, and a data center with high-density microprocessors requires highly efficient and reliable cooling systems. Efficiency can come from smart design of the air flow in the data center as well as from free-cooling technologies; indeed, a data center's location may be chosen for its local climate, since cooler climates allow the use of free air and water cooling. Some companies have designed innovative cooling systems in which individual servers are submerged in a coolant, keeping each machine cool without cooling the entire infrastructure (Miller, 2010).

Scalability

To be cost effective, one must be careful with capacity planning. The data center must support the organization's current requirements as well as its future capacity needs. Over-building wastes resources that could be repurposed for other parts of the organization; with the pace of technological advance, idle capacity can become obsolete before it is ever put to use. Under-building restricts growth, and the organization may face a large capital expense in order to expand its infrastructure. Data center colocation mitigates these challenges with a 'pay-as-you-grow' model in which one can expand as well as shrink capacity. This elasticity improves budgeting and resource allocation to match the needs of the organization (Alberding, 2015).
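The over-building penalty can be illustrated with a toy comparison between provisioning for peak demand up front and a 'pay-as-you-grow' model. All prices and demand figures below are hypothetical:

```python
# Toy comparison of fixed (over-built) capacity cost vs. pay-as-you-grow.
def fixed_cost(capacity_units: int, unit_cost: float, months: int) -> float:
    """Provision peak capacity up front and pay for it every month."""
    return capacity_units * unit_cost * months

def elastic_cost(monthly_demand: list[int], unit_cost: float) -> float:
    """Pay only for the capacity actually used in each month."""
    return sum(units * unit_cost for units in monthly_demand)

demand = [10, 12, 15, 20, 25, 30]   # server-units needed each month
peak = max(demand)                   # a fixed build must cover the peak

print(fixed_cost(peak, 100.0, len(demand)))  # 18000.0
print(elastic_cost(demand, 100.0))           # 11200.0
```

In this sketch the elastic model costs about 38% less over the period, and the gap widens the spikier the demand profile is.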

Availability

Availability is the degree to which a system is operational and accessible when it is required for use. The National Archives and Records Administration in Washington, D.C. has recorded the sobering statistic that 93% of businesses that lost availability in their data center for 10 days or more filed for bankruptcy within one year (Hartman, 2012). The availability of a data center is expressed as a percentage of uptime in a given year and is measured in 9's (Piedad & Hawkins, 2001). Table 2 below captures how availability translates into the amount of time a system may be unavailable (Wikipedia, 2017).

Table 2: Availability and downtime of a data center

Availability              Downtime per year
99% ('two nines')         3.65 days
99.9% ('three nines')     8.77 hours
99.99% ('four nines')     52.6 minutes
99.999% ('five nines')    5.26 minutes

Availability can be affected by scheduled as well as unscheduled outages. Scheduled outages occur due to planned maintenance and can be timed to have the least impact on the organization. Unscheduled outages may result from force majeure, technical failures, crime, or carelessness. One can mitigate these by locating the data center away from areas prone to natural disasters, maintaining geographically mirrored servers, carefully monitoring the server equipment to ensure its optimal performance, and ensuring that cooling is optimal. One can reduce human error by hiring and training administrators with proven expertise on the system, providing oversight of the process, and following change management procedures. The data center is made more reliable by incorporating built-in protections against common failures. The more critical a system, the less downtime it can afford; mission-critical systems are therefore housed in data centers with high availability. Automatic failover from one data center to another allows for continuous availability of the service.
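The value of geographically mirrored sites with automatic failover can be quantified. If two independent sites each have availability a, the pair is down only when both are down, so the combined availability is 1 - (1 - a)^2. A small sketch under the idealized assumptions of independent failures and instantaneous failover:

```python
# Combined availability of n independent, mirrored data centers.
# Idealized: failures are independent and failover is instantaneous.
def combined_availability(site_availability: float, n_sites: int) -> float:
    return 1.0 - (1.0 - site_availability) ** n_sites

single = 0.9998                       # one site at 99.98% availability
paired = combined_availability(single, 2)
print(f"{paired:.8f}")                # two mirrored sites: ~99.999996%
```

In practice, correlated failures (shared software bugs, regional disasters) keep real systems below this ideal figure, which is why the independence assumption matters.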

IMPLICATIONS FOR MANAGEMENT

Interoperability: Portability and Interconnectability

As more and more data centers reside in the cloud, their services may have to interact with other, non-cloud systems. The Open Data Center Alliance (ODCA) works to ensure application and service interoperability (Open Data Center Alliance, 2012). Interoperability allows a system to move from one cloud environment to another (portability) as well as allowing two environments to coexist, communicate, and interact (interconnectability). ODCA works to ensure that Virtual Machines (VMs) operate seamlessly and with high performance in order to provide a unified data center and cloud infrastructure.

Cost of running a Data Center vs. Colocation

The cost of operating a data center includes the cost of the equipment as well as of managing the service. One must also consider utilization: a local data center is never fully utilized, and industry measurements cite ten to twenty-five percent utilization for on-premises data centers, which can lead to a loss of capital. With cloud technology having matured and cloud providers such as Amazon Web Services (AWS) offering pay-as-you-go pricing, it is becoming increasingly common to choose colocation over running a data center (Gillis, 2015). The intangible advantage of colocation is agility: the business can grow with demand, without delays or cost worries about expansion. This agility to support new services can give the organization an edge over its competitors.
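The utilization argument can be made concrete: if only 10-25% of on-premises capacity is ever used, each utilized unit effectively carries the cost of four to ten provisioned units. A sketch with hypothetical figures:

```python
# Effective cost per *utilized* unit of capacity at a given utilization level.
# The $100 unit cost is a hypothetical figure, not a quoted price.
def effective_unit_cost(cost_per_unit: float, utilization: float) -> float:
    """Spread the full provisioned cost over only the units actually used."""
    return cost_per_unit / utilization

print(effective_unit_cost(100.0, 0.25))  # 400.0  (25% utilization)
print(effective_unit_cost(100.0, 0.10))  # 1000.0 (10% utilization)
```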

CONCLUSION

The cloud is not a magical location where organizations can store their data and be assured of quality. One still has to do one's homework and examine the service provider's Service Level Agreement in order to ensure Quality of Service. Transparency, security, interoperability, portability, and availability are all considerations when choosing between brick-and-mortar data centers on premises, a hosted solution, and a cloud solution. Outsourcing the risk is important, but one must feel comfortable and confident in the security of one's data. In 2010, the majority of Fortune 1000 companies would not adopt public cloud storage because of concerns about reliability, security, availability, and control over their own data (Wang, n.d.). This trend has changed with the advancement of technology, and many organizations are now adopting cloud storage and computing. Concerns over the quality of service remain, as maintaining the integrity of the application and its security becomes critical.

REFERENCES

Alberding, C. (2015). The importance of scalability and cost of data center solutions.
Retrieved from http://www.datacenterknowledge.com/archives/2015/08/05/importance-
scalability-cost-data-center-solutions/

Anca, A., Florina, P., Geanina, U., George, S., & Gyorgy, T. (2014). New classes of applications in the cloud: evaluating advantages and disadvantages of cloud computing for telemetry applications. Database Systems Journal, 5(1), 3-14.

Aviles, M. E. (2015). The impact of cloud computing in supply chain collaborative relationships, collaborative advantage and relational outcomes. Electronic Theses & Dissertations, 1244. Retrieved from http://digitalcommons.georgiasouthern.edu/etd/1244

Carroll, A. (2012). Smart and green cooling techniques for data centers. Retrieved from
https://lifelinedatacenters.com/data-center/smart-and-green-cooling-techniques-for-data-
centers/

Cisco (2009). Enterprise internet edge design guide. Retrieved from http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Security/IE_DG.html

Cloud Security Alliance (2017). Retrieved from https://cloudsecurityalliance.org/about/

Colocation America, Inc. (2018). Data center standards (tiers I-IV). Retrieved from
https://www.colocationamerica.com/data-center/tier-standards-overview.htm

Colorado Springs Utilities (2015). Energy efficiency in computer data centers. Retrieved from https://www.csu.org/CSUDocuments/datacenters.pdf

Gillis, T. (2015). Cost wars: data center vs. public cloud. Retrieved from
https://www.forbes.com/sites/tomgillis/2015/09/02/cost-wars-data-center-vs-public-
cloud/#3bed3b93923f

Hartman, S. (2012). Understanding data center reliability, availability and the cost of
downtime. Retrieved from http://blog.schneider-
electric.com/datacenter/2012/10/12/understanding-data-center-reliability-availability-and-
the-cost-of-downtime/

Heslin, K. (n.d.). A look at data center cooling technologies. Retrieved from https://journal.uptimeinstitute.com/a-look-at-data-center-cooling-technologies/

I.S.O. (2012). International Organization for Standardization. Retrieved from https://www.iso.org/standard/50038.html

I.S.O. (2017). International Organization for Standardization. Retrieved from https://www.iso.org/isoiec-27001-information-security.html

Lutzker, D. (2002). Practical quality management for project managers. Paper presented
at Project Management Institute Annual Seminars & Symposium, San Antonio, TX.
Newtown Square, PA: Project Management Institute.

Michelson, M. (2004). Data center design recommendations. Retrieved from https://www.ecmag.com/section/systems/data-center-design-recommendations

Miller, R. (2010). Submerged servers: green revolution cooling. Retrieved from http://www.datacenterknowledge.com/archives/2010/03/17/submerged-servers-green-revolution-cooling/

Nanavati, M., Colp, P., Aiello, B., & Warfield, A. (2014). Cloud security: a gathering storm. Communications of the ACM, 57(5), 70-79. doi:10.1145/2593686

Open Data Center Alliance (2012). Open Data Center Alliance usage model: guide to interoperability across clouds. Retrieved from https://opendatacenteralliance.org/docs/ODCA_Interop_Across_Clouds_Guide_Rev1.0.pdf

Piedad, F. & Hawkins, M. (2001). High availability: design, techniques and processes.
Prentice Hall Professional.

Project Management Institute (2013). A guide to the project management body of knowledge (PMBOK Guide) (5th ed.). Newtown Square, PA: Project Management Institute, Inc.

SAP (n.d.). Cloud: to do it yourself or not. Retrieved from http://www.sapdatacenter.com/article/cloud_fundamentals/

Stroud, F. (2017). Data center. Retrieved from https://www.webopedia.com/TERM/D/data-center.html

T5 Data Centers (2017). T5 Facilities management – data center rounds using your
senses. Retrieved from http://t5datacenters.com/t5-facilities-management-data-center-
rounds-using-your-senses/

Uptime Institute (2017). Uptime Institute. Retrieved from https://uptimeinstitute.com/


Wang, S. (n.d.). Are enterprises really ready to move into the cloud? Retrieved from https://cloudsecurityalliance.org/wp-content/uploads/2012/02/Areenterprisesreallyreadytomoveintothecloud.pdf

Wikipedia (2017). High availability. Retrieved from https://en.wikipedia.org/wiki/High_availability#Percentage_calculation

Woods, J. (2014). The evolution of the data center: timeline from the mainframe to the
cloud. Retrieved from http://siliconangle.com/blog/2014/03/05/the-evolution-of-the-data-
center-timeline-from-the-mainframe-to-the-cloud-tc0114/

Yasin, R. (2009). 5 steps to secure your data center. Retrieved from https://gcn.com/Articles/2009/11/30/5-steps-to-a-secure-data-center.aspx?Page=1
