
VMware vCloud Implementation Example

Private Enterprise vCloud


TECHNICAL WHITE PAPER


Table of Contents

1. Purpose and Overview
   1.1 Executive Summary
   1.2 Business Requirements
   1.3 Use Cases
   1.4 Document Purpose and Assumptions
2. VMware vCloud Architecture Design Overview
   2.1 vCloud Definition
   2.2 vCloud Component Design Overview
3. vSphere Architecture Design Overview
   3.1 High Level Architecture
   3.2 Site Considerations
   3.3 Design Specifications
4. vSphere Architecture Design - Management Cluster
   4.1 Compute Logical Design
       4.1.1 Datacenters
       4.1.2 vSphere Clusters
       4.1.3 Host Logical Design
   4.2 Network Logical Design
   4.3 Shared Storage Logical Design
   4.4 Management Components
   4.5 Management Component Resiliency Considerations
5. vSphere Architecture Design - Resource Groups
   5.1 Compute Logical Design
       5.1.1 Datacenters
       5.1.2 vSphere Clusters
       5.1.3 Host Logical Design
   5.2 Network Logical Design
   5.3 Shared Storage Logical Design
   5.4 Resource Group Datastore Considerations
       5.4.1 Datastore Sizing Estimation
6. vCloud Provider Design
   6.1 Abstractions and VMware vCloud Director Constructs
   6.2 Provider vDCs
   6.3 Organizations
   6.4 Networks
       6.4.1 External Networks
       6.4.2 Network Pools
       6.4.3 Networking Use Cases
   6.5 Catalogs
7. vCloud Security
   7.1 vSphere Security
       7.1.1 Host Security
       7.1.2 Network Security
       7.1.3 vCenter Security
   7.2 VMware vCloud Director Security
8. vCloud Management
   8.1 vSphere Host Setup Standardization
   8.2 VMware vCloud Director Logging
   8.3 vSphere Host Logging
   8.4 VMware vCloud Director Monitoring
Appendix A - Bill of Materials


1. Purpose and Overview


1.1 Executive Summary
ACME Enterprise will be implementing an internal, next-generation private cloud datacenter built on VMware technologies. This document defines the vCloud architecture and provides detailed descriptions and specifications of the architectural components and their relationships for the initial implementation. The design is based on a combination of VMware best practices and ACME's specific business requirements and goals.

1.2 Business Requirements


The vCloud for ACME Enterprise has the following characteristics and provides:
• Compute capacity to support 300 virtual machines running predefined workloads.
• Secure multi-tenancy, permitting more than one organization to share compute resources. In a private cloud, organizations typically represent different departments, and each department may have several environments, such as development or production.
• A self-service portal where Infrastructure as a Service (IaaS) can be consumed from a catalog of predefined applications (vApp templates).
• A chargeback mechanism, so resource consumption can be metered and the associated cost charged back to the appropriate organization or business unit.
Refer to the corresponding Service Definition for further details.

1.3 Use Cases


The target use case for the vCloud includes the following workloads:
• Development and test
• Pre-production
• Demos
• Training
• Tier 2 and Tier 3 applications

1.4 Document Purpose and Assumptions


This vCloud Architecture Design document is intended to serve as a reference for ACME Enterprise architects, and assumes they have familiarity with VMware products, including VMware vSphere, vCenter, and VMware vCloud Director. The vCloud architecture detailed in this document is organized into these sections:
vCloud Definition: Inventory of components that comprise the cloud solution.
vSphere Management: vSphere and vCenter components that support running workloads.
vSphere Resources: Resources for cloud consumption, organized by compute, networking, and shared storage, and detailed through logical and physical design specifications and considerations.
Management and Security: Considerations as they apply to vSphere and VMware vCloud Director management components.
vCloud Logical Design: VMware vCloud Director objects and configuration, and the relationship of VMware vCloud Director to vSphere objects.

This document is not intended as a substitute for detailed product documentation. Refer to the installation and administration guides for the appropriate product as necessary for further information.

2. VMware vCloud Architecture Design Overview


2.1 vCloud Definition
The VMware vCloud comprises the following components:
VMware vCloud Director: Abstracts and coordinates the underlying resources. Includes the VMware vCloud Director Server (one or more instances, each installed on a Linux VM and referred to as a cell) and the VMware vCloud Director Database (one instance per clustered set of VMware vCloud Director cells).

VMware vSphere: vSphere compute, network, and storage resources; the foundation of the underlying cloud resources. Includes VMware ESXi hosts (three or more instances for the Management cluster and three or more instances for the Resource cluster, also referred to as the Compute cluster), vCenter Server (one instance managing the management cluster of hosts, and one or more instances managing one or more resource groups of hosts reserved for vCloud consumption; in a proof-of-concept installation, a single vCenter Server instance managing both the management cluster and a single resource group is allowable), and the vCenter Server Database (one instance per vCenter Server).

VMware vShield: Provides network security services, including NAT and firewall. Includes vShield Edge (deployed automatically as virtual appliances on hosts by VMware vCloud Director) and vShield Manager (one instance per vCenter Server in the cloud resource groups).

VMware vCenter Chargeback: Provides resource metering and chargeback models. Includes the vCenter Chargeback Server (one instance), Chargeback Data Collector (one instance), vCloud Data Collector (one instance), and VSM Data Collector (one instance).

2.2 vCloud Component Design Overview


The components comprising the vCloud are detailed in this document in the following sections:
vSphere Architecture - Management Cluster: vCenter Server and vCenter Database; vCenter cluster and ESXi hosts; vCenter Chargeback Server and Database; vCenter Chargeback Collectors; vShield Manager and vShield Edge(s); VMware vCloud Director Cell(s) and Database (Oracle).

vSphere Architecture - Resource Group: vCenter Server(s) and vCenter Database(s); vCenter cluster(s) and ESXi hosts.

3. vSphere Architecture Design Overview


3.1 High Level Architecture
vSphere resources are organized and separated into:
• A management cluster containing all core components and services needed to run the cloud.
• One or more resource groups (compute clusters) that represent dedicated resources for cloud consumption. Each resource group is a cluster of ESXi hosts managed by a vCenter Server and is under the control of VMware vCloud Director. Multiple resource groups can be managed by the same VMware vCloud Director.

The reasons for organizing and separating vSphere resources along these lines are:
• Quicker troubleshooting and problem resolution. Management components are strictly contained in a relatively small and manageable management cluster; if they ran across a large set of host clusters, tracking down and managing those workloads would be time consuming.
• Management components are separate from the resources they are managing.
• Resources allocated for cloud use have little overhead reserved. For example, cloud resource groups do not host vCenter VMs.
• Resource groups can be consistently and transparently managed, carved up, and scaled horizontally.

The high-level logical architecture is depicted in Figure 1.
Figure 1 - vCloud Logical Architecture Overview. The diagram shows a vSphere 4.1 management cluster with shared SAN storage running the management VMs (VCD cells, vShield Manager, the management cluster vCenter, the resource group vCenter, MSSQL, AD/DNS, Chargeback, Oracle 11g, the vCenter database, and optional logging/monitoring), alongside vSphere 4.1 resource groups with their own compute resources and shared SAN storage backing Org vDC #1 and a future Org vDC #2.

The following diagram depicts the physical design corresponding to the logical architecture previously described.
Figure 2 - vCloud Physical Design Overview. The diagram maps the logical design to the physical layer: redundant 10Gbps switch fabrics connect the server infrastructure to the FC SAN storage; vCenter01 Cluster01 (hosts C1 through C6, HA = N+1, CPU/MEM TBD) provides the resource pools, datastores, and port groups for Provider vDC clusters A and B, while vCenter01 Cluster02 (hosts M1 through M3, HA = N+1) forms the management and database cluster.


3.2 Site Considerations


The management cluster and the resource group (compute cluster) reside within a single physical datacenter. Servers in both clusters are striped across the server chassis so that, with vSphere HA, each cluster remains available should one chassis go down. Neither secondary nor DR sites are in scope for this project.

3.3 Design Specifications


The architecture is described by a logical design that is independent of hardware-specific details. The focus is on components, their relationships, and quantity. Additional details are found in Appendix A.

4. vSphere Architecture Design - Management Cluster


4.1 Compute Logical Design
The compute design encompasses the ESXi hosts contained in the management cluster. In this section the scope is limited to the infrastructure supporting the management component workloads.

4.1.1. Datacenters
The management cluster is contained within a single vCenter datacenter.

4.1.2. vSphere Clusters
The management cluster will consist of the following vSphere cluster:
Number of ESXi Hosts: 3
VMware DRS Configuration: Fully automated
VMware DRS Migration Threshold: 3 stars
VMware HA Enable Host Monitoring: Yes
VMware HA Admission Control Policy: Cluster tolerates 1 host failure (percentage based)
VMware HA Percentage: 67%
VMware HA Admission Control Response: Prevent VMs from being powered on if they violate availability constraints
VMware HA Default VM Restart Priority: N/A
VMware HA Host Isolation Response: Leave VM Powered On
VMware HA Enable VM Monitoring: Yes
VMware HA VM Monitoring Sensitivity: Medium

Table 1 - vSphere Clusters (Management Cluster)
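The HA percentage above, and the 83 percent figure used for the resource group cluster in Section 5.1.2, both follow from setting aside one host's worth of capacity (N+1). A short Python sketch of that arithmetic, for reference; depending on how the admission control policy is expressed in a given vSphere version, either this value or its complement (the reserved failover share) is the figure actually configured.

    def usable_capacity_pct(hosts, host_failures_tolerated=1):
        """Share of cluster capacity left for workloads after setting aside
        the equivalent of N failed hosts for HA failover."""
        return round(100 * (hosts - host_failures_tolerated) / hosts)

    print(usable_capacity_pct(3))   # management cluster, 3 hosts      -> 67
    print(usable_capacity_pct(6))   # resource group cluster, 6 hosts  -> 83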


4.1.3. Host Logical Design
Each ESXi host in the management cluster will have the following specifications:
Host Type and Version: VMware ESXi Installable
Processors: x86 compatible
Storage: Local for ESX binaries; SAN LUN for virtual machines
Networking: Connectivity to all needed VLANs
Memory: Sized to support estimated workloads

Table 2 - Host Logical Design Specifications (Management Cluster)

4.2 Network Logical Design


The network design section defines how the vSphere virtual networking will be configured. Following best practices, the network architecture will meet these requirements:
• Separate networks for vSphere management, VM connectivity, and vMotion traffic
• Redundant vSwitches with at least 2 active physical (or vNIC) adapter ports each
• Redundancy across different physical adapters to protect against NIC or PCI slot failure
• Redundancy at the physical switch level
Switch Name: vSwitch0
Switch Type: Standard
Function: Management Console, vMotion, Production VMs
Number of Physical NIC Ports: 2 (vmnic0, vmnic1)

Table 3 - Virtual Switch Configuration (Management Cluster)

The physical NIC ports will be connected to redundant physical switches. The following diagram depicts the virtual network infrastructure design:

Figure 3 - vSphere Logical Network Design (Management Cluster). Each host's vSwitch0 carries the Management, vMotion, and Production virtual machine networks (native VLAN plus VLANs 443, 442, and 440) over uplinks vmnic0 and vmnic1 to redundant physical switches.


Load Balancing: Route based on NIC load
Failover Detection: Link status
Notify Switches: Enabled
Failover Order: All active except for the Management Network; Management Console: Active/Standby; vMotion: Standby/Active

Table 4 - Virtual Switch Configuration Settings (Management Cluster)
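To make the vSwitch0 layout in Table 3 and the failover order in Table 4 concrete, the following Python (pyVmomi) sketch configures an equivalent standard vSwitch and its port groups on one host. It is an illustration only: the hostname, credentials, and VLAN-to-port-group mapping are placeholders (the design trunks a native VLAN plus VLANs 443, 442, and 440 but does not spell out which VLAN backs which port group), and the load balancing policy from Table 4 is left at its default here.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder host and credentials.
    si = SmartConnect(host="esxi-mgmt-01.acme.local", user="root", pwd="secret",
                      sslContext=ssl._create_unverified_context())
    host = si.content.searchIndex.FindByDnsName(dnsName="esxi-mgmt-01.acme.local",
                                                vmSearch=False)
    netsys = host.configManager.networkSystem

    # vSwitch0 with the two redundant uplinks shown in Figure 3.
    vss_spec = vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic0", "vmnic1"]))
    netsys.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=vss_spec)

    # Port groups per Table 3; VLAN IDs are placeholders. Failover order per
    # Table 4: Management active on one uplink, vMotion active on the other.
    port_groups = [
        ("Management Network", 0,   ["vmnic0"], ["vmnic1"]),
        ("vMotion",            442, ["vmnic1"], ["vmnic0"]),
        ("Production",         440, ["vmnic0", "vmnic1"], []),
    ]
    for name, vlan, active, standby in port_groups:
        nic_order = vim.host.NetworkPolicy.NicOrderPolicy(activeNic=active,
                                                          standbyNic=standby)
        teaming = vim.host.NetworkPolicy.NicTeamingPolicy(nicOrder=nic_order,
                                                          notifySwitches=True)
        policy = vim.host.NetworkPolicy(nicTeaming=teaming)
        spec = vim.host.PortGroup.Specification(name=name, vlanId=vlan,
                                                vswitchName="vSwitch0", policy=policy)
        netsys.AddPortGroup(portgrp=spec)

    Disconnect(si)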

4.3 Shared Storage Logical Design


The shared storage design section defines how the vSphere datastores will be configured. The same storage will be used for both the Management cluster and the VMware vCloud Director resource groups. Following best practices, the shared storage architecture will meet these requirements:
• Storage paths will be redundant at the host (connector), switch, and storage array levels.
• All hosts in a cluster will have access to the same datastores.
Number of Initial LUNs: 1 dedicated, 1 interchange (shared with the Compute cluster)
LUN Size: 539 GB
Zoning: Single initiator, single target
VMFS Datastores per LUN: 1
VMs per LUN: 10 (distribute redundant VMs)

Table 5 - Shared Storage Logical Design Specifications (Management Cluster)

4.4 Management Components


The following components will run as VMs on the management cluster hosts:
• vCenter Servers
• vCenter Database
• vCenter Update Manager Database
• vCloud Director Cells
• vCloud Director Database
• vCenter Chargeback Server
• vCenter Chargeback Database
• vShield Manager

VMware vCloud Director cells are stateless in operation, with all information stored in the database. Some caching happens at the cell level, such as SSL session data, but all refreshes and updates are written to the database. As such, the database is critical to the operation of VMware vCloud Director. In a production environment, VMware recommends that the database be housed in a managed cluster configuration, or at the very least have a hot standby available.


Figure 4 - vCenter Chargeback Logical Diagram. The vCenter Chargeback server and its data collectors (Chargeback, vCloud, and VSM) sit behind a load balancer with the vCenter Chargeback UI; they connect via JDBC to the Chargeback, vCenter, and vCD databases, via the VIM API to vCenter Server (which manages the ESXi hosts), and via HTTPS to vShield Manager.

4.5 Management Component Resiliency Considerations


The following management components will rely on HA and FT for redundancy.
MANAGEMENT COMPONENT         HA ENABLED?
vCenter Server               Yes
VMware vCloud Director       Yes
vCenter Chargeback Server    Yes
vShield Manager              Yes

Table 6 - Management Component Resiliency


5. vSphere Architecture Design - Resource Groups


5.1 Compute Logical Design
The compute design encompasses the ESXi host clusters. In this section the scope is further limited to the infrastructure dedicated to the cloud workloads.

5.1.1. Datacenters
Resource groups can map to different datacenters and are managed by a single vCenter Server.

5.1.2. vSphere Clusters
All vSphere clusters will be configured similarly, with the following specifications.
VMware DRS Configuration: Fully automated
VMware DRS Migration Threshold: 3 stars
VMware HA Enable Host Monitoring: Yes
VMware HA Admission Control Policy: Cluster tolerates 1 host failure (percentage based)
VMware HA Percentage: 83%
VMware HA Admission Control Response: Prevent VMs from being powered on if they violate availability constraints
VMware HA Default VM Restart Priority: N/A
VMware HA Host Isolation Response: Leave VM Powered On

Table 7 - vSphere Cluster Configuration (Resource Groups)

The resource groups will have the following vSphere cluster.


Cluster Name: VCDCompute01
vCenter Server Name: ACMEmgmtVC01.vcd.acme.com
Number of Hosts: 6
HA Percentage: 83%

Table 8 - vSphere Clusters (Resource Groups)


5.1.3. Host Logical Design
Each ESXi host in the resource groups will have the following specifications.
Host Type and Version: VMware ESXi Installable
Processors: x86 compatible
Storage: Local for ESX binaries; shared for virtual machines
Networking: Connectivity to all needed VLANs
Memory: Enough to run estimated workloads

Table 9 - Host Logical Design Specifications (Resource Groups)

5.2 Network Logical Design

The network design section defines how the vSphere virtual networking will be configured. Following best practices, the network architecture will meet these requirements:
• Separate networks for vSphere management, VM connectivity, and vMotion traffic
• Redundant vSwitches with at least 2 active physical adapter ports
• Redundancy across different physical adapters to protect against NIC or PCI slot failure
• Redundancy at the physical switch level
vSwitch0 (Standard): Management Console, vMotion; 2 x 10 GigE vNIC ports
vDSwitch (Distributed): External Networks, Network Pools; 2 x 10 GigE vNIC ports

Table 10 - Virtual Switch Configuration (Resource Groups)

When using the distributed virtual switch, the number of dvUplink ports equals the number of physical NIC ports on each host. The physical NIC ports will be connected to redundant physical switches.


The following diagram depicts the virtual network infrastructure design.


Figure 5 - vSphere Logical Network Design (Resource Groups). The upper portion repeats the management cluster networking (vSwitch0 with Management, vMotion, and Production VM port groups on vmnic0 and vmnic1, using the native VLAN plus VLANs 443, 442, and 440); the lower portion shows the resource group hosts' vNetwork Distributed Switch (vDS, uplinks vmnic2 and vmnic3) carrying the External Networks (Production, VLAN 440) and Network Pools to redundant physical switches.

Load Balancing: Route based on NIC load (for vDS)
Failover Detection: Link status
Notify Switches: Enabled
Failover Order: All active except for the Management Network; Management Console: Active/Standby; vMotion: Standby/Active

Table 11 - Virtual Switch Configuration Settings (Resource Groups)

5.3 Shared Storage Logical Design


The shared storage design section defines how the vSphere datastores will be configured. Following best practices, the shared storage architecture will meet these requirements:
• Storage paths will be redundant at the host (HBA), switch, and storage array levels.
• All hosts in a cluster will have access to the same datastores.


Number of Initial LUNs: 6 dedicated, 1 interchange (shared with the Management cluster)
LUN Size: 539 GB
Zoning: Single initiator, single target
VMFS Datastores per LUN: 1
VMs per LUN: 12

Table 12 - Shared Storage Logical Design Specifications (Resource Groups)

5.4 Resource Group Datastore Considerations


The most common question in LUN/datastore sizing is what limit to place on the number of VMs per datastore. Limiting this number minimizes the potential for SCSI locking and spreads I/O across as many storage processors as possible. Most mainstream storage vendors provide VMware-specific guidelines for this limit, and VMware recommends an upper limit of 15 VMs per VMFS datastore, regardless of storage platform. It is often forgotten that the number of VMs per LUN is also influenced by the size and I/O requirements of the VMs and, perhaps more importantly, by the selected storage solution and even the disk types. When VMware vCloud Director provisions VMs, it automatically places them on datastores based on the free disk space of each datastore associated with an Org vDC. Because of this mechanism, the size of the LUNs and the number of VMs per LUN need to be kept relatively low to avoid possible I/O contention.

When considering the number of VMs to place on a single datastore, the following factors should be weighed against any recommended VMs-per-LUN ratio (a simple I/O-based estimate is sketched at the end of this section):
• Average VM workload/profile (in particular, the amount of I/O)
• Typical VM size (including configuration files, logs, swap files, and snapshot files)
• VMFS metadata
• Maximum IOPS and throughput required per LUN, which depends on the storage array and its design
• Maximum RTO if a LUN is lost, that is, the backup and restore design

If approached from an average I/O profile, it would be tempting to create all LUNs the same (say, RAID 5) and let the law of averages take care of I/O distribution across the LUNs and the VMs on them. Another approach is to create LUNs with different RAID profiles based on the anticipated workloads within an Organization. This would dictate creating Provider virtual datacenters (vDCs) that take into account the allocation models as well as the storage profile in use, resulting in Provider vDC types such as:
• Allocated_High_Performance
• Allocated_Generic

As a starting point, VMware recommends RAID 5 storage profiles, and creating storage tier-specific Provider vDCs only as one-offs to address specific organization or business unit requirements. The VMware Scalable Storage Performance study provides additional information regarding vSphere storage design.
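As a rough companion to the capacity-based estimate in Section 5.4.1, the following Python sketch estimates a VMs-per-LUN ceiling from an I/O budget. The IOPS figures are illustrative assumptions, not measurements from the ACME environment.

    def max_vms_per_lun(lun_iops_budget, avg_vm_iops, recommended_cap=15):
        """VMs a LUN can host before its I/O budget is exhausted, capped at the
        VMware-recommended 15 VMs per VMFS datastore."""
        io_bound = lun_iops_budget // avg_vm_iops
        return min(io_bound, recommended_cap)

    # Illustrative inputs: a LUN good for roughly 1,500 IOPS and VMs averaging 120 IOPS.
    print(max_vms_per_lun(1500, 120))   # -> 12, in line with the 12 VMs per LUN in Table 12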


5.4.1. Datastore Sizing Estimation
An estimate of the typical datastore size can be approximated by considering the following factors.
Maximum Number of VMs per Datastore: 12
Average Size of Virtual Disk(s) per VM: 60 GB
Average Memory Size per VM: 2 GB
Safety Margin: 10%

Table 13 - Datastore Size Estimation Factors

For example:
((12 x 60 GB) + (15 x 2 GB)) + 10% = (720 GB + 30 GB) x 1.1 = 825 GB
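The same arithmetic as a small Python function, useful when the per-VM assumptions change. Note that the worked example above sizes the memory (swap) term for 15 VMs, the per-datastore ceiling cited in Section 5.4, rather than the 12-VM maximum in Table 13; the function takes both counts as parameters so either convention can be applied.

    def datastore_size_gb(vms_for_disk, avg_disk_gb, vms_for_swap, avg_mem_gb,
                          safety_margin=0.10):
        """Datastore capacity estimate: virtual disks + VM swap files + safety margin."""
        raw = vms_for_disk * avg_disk_gb + vms_for_swap * avg_mem_gb
        return round(raw * (1 + safety_margin), 1)

    print(datastore_size_gb(12, 60, 15, 2))   # -> 825.0, as in the example above
    print(datastore_size_gb(12, 60, 12, 2))   # -> 818.4 when 12 VMs are used for both terms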

6. vCloud Provider Design


6.1 Abstractions and VMware vCloud Director Constructs
A key tenet of the cloud architecture is resource pooling and abstraction. VMware vCloud Director further abstracts the virtualized resources presented by vSphere by providing logical constructs that map to vSphere logical resources:
• Organization: the organizational unit to which resources (vDCs) are allocated.
• Virtual Datacenter (vDC): a deployment environment, scoped to an organization, in which virtual machines run.
• Provider Virtual Datacenter: a vSphere resource grouping that powers vDCs, further segmented into organization vDCs.
• Organization Virtual Datacenter (Org vDC): an organization's allocated portion of a Provider vDC.

Figure 6 - VMware vCloud Director Abstraction Layer Diagram. vCloud Director constructs (Org Network, External Network, Network Pool, Organization vDC, Provider vDC) map to vSphere objects ((d)VS port group, vDS, resource pool, compute cluster, datastore), which in turn map to the physical layer (VLAN, physical network, physical host, storage array).


6.2 Provider vDCs


The following diagram shows how the Provider vDCs map back to vSphere resources:

Figure 7 - Provider vDCs in Resource Groups. The GIS Provider vDC maps to the VCDCompute01 cluster (resource pools Vcdcomputecluster1-1 through 1-4) and its VMFS datastores vcd_compute_01, vcd_compute_02, through vcd_compute_0X (539 GB each); future Provider vDCs (Vcdcomputeclusterx-1 through x-4) follow the same pattern.

All ESXi hosts will belong to a vSphere cluster, and each cluster will be associated with one and only one ACME Enterprise Provider vDC. A vSphere cluster will scale to 25 hosts, allowing for up to 14 clusters per vCenter Server (bound by the maximum number of hosts possible per datacenter) and an upper limit of 10,000 VMs per resource group (a vCenter limit). The recommendation is to start with 8 hosts in a cluster and add hosts to the cluster as dictated by customer consumption; for the initial implementation, however, the Provider vDC will start with 6 hosts. When utilization of the resources reaches 60 percent, VMware recommends deploying a new Provider vDC/cluster. This provides room for growth within the existing Provider vDCs for the existing organizations and business units without requiring their migration as utilization approaches the limit of a cluster's resources.

As an example, a fully loaded resource group will contain 14 Provider vDCs and up to 350 ESXi hosts, giving an average VM consolidation ratio of 26:1 assuming a 5:1 vCPU:pCPU ratio. To increase this ratio, ACME Enterprise would need to increase the vCPU:pCPU ratio it is willing to support. The risk of increased CPU overcommitment is mainly degraded overall performance, which can show up as higher-than-acceptable vCPU ready times. The vCPU:pCPU ratio reflects the amount of CPU overcommitment, for the available cores, that ACME is comfortable with. For VMs that are not busy, this ratio can be increased without any undesirable effect on VM performance. Monitoring vCPU ready times helps identify whether the ratio should be increased or decreased on a per-cluster basis. A 5:1 ratio is a good starting point for a multi-core system.


A Provider vDC can map to only one vSphere cluster, but can map to multiple datastores and networks. Multiple Provider vDCs are used to map to different types or tiers of resources:
• Compute: a function of the mapped vSphere clusters and the resources that back them
• Storage: a function of the underlying storage types of the mapped datastores
• Networking: a function of the mapped vSphere networking in terms of speed and connectivity
Multiple Provider vDCs are created for the following reasons:
• The cloud requires more compute capacity than a single vSphere cluster can provide (a vSphere resource pool cannot span vSphere clusters)
• Tiered storage is required; each Provider vDC maps to datastores on storage with different characteristics
• Workloads are required to run on physically separate infrastructure
Number of Provider vDCs: 1
Number of Default External Networks: 1 (Production)

Table 14 - Provider vDC Specifications

Provider vDC: GIS
Cluster: VCDCompute01
Datastores: vcd_compute-01, vcd_compute-02, vcd_compute-03, vcd_compute-04, vcd_compute-05
vSphere Networks: Production

Table 15 - Provider vDC to vSphere Mapping

VMware recommends assessing workloads to assist in sizing. Following is a standard sizing table that can be used as a reference for future design activities.
VM SIZE               DISTRIBUTION    NUMBER OF VMS
1 vCPU / 1 GB RAM     65%
2 vCPU / 2 GB RAM     29%
4 vCPU / 4 GB RAM     5%
8 vCPU / 8 GB RAM     1%
Total                 100%

Table 16 - Virtual Machine Sizing and Distribution
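For planning purposes, the distribution can be turned into concrete VM counts and weighted averages. The following Python sketch applies the percentages to the 300-VM capacity target from Section 1.2; the resulting counts and averages are derived figures, not values stated elsewhere in this design.

    # Sizing mix from Table 16 applied to the 300-VM target from Section 1.2.
    size_mix = [
        # (vCPUs, RAM in GB, share of VMs)
        (1, 1, 0.65),
        (2, 2, 0.29),
        (4, 4, 0.05),
        (8, 8, 0.01),
    ]
    total_vms = 300

    for vcpu, ram, share in size_mix:
        print(f"{vcpu} vCPU / {ram} GB RAM: {round(total_vms * share)} VMs")

    avg_vcpu = sum(v * s for v, _, s in size_mix)
    avg_ram = sum(r * s for _, r, s in size_mix)
    print(f"Weighted average per VM: {avg_vcpu:.2f} vCPUs, {avg_ram:.2f} GB RAM")
    # -> 195 / 87 / 15 / 3 VMs, and roughly 1.5 vCPUs and 1.5 GB RAM per VM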


6.3 Organizations
Organization Name: AIS
Description: ACME Information Systems

Table 17 - Organizations

6.4 Networks
Number of Default External Networks: 1
Number of Default vApp Networks: End-user controlled
Number of Default Organization Networks: 2
Default Network Pool Types Used: vCloud Director Network Isolation (vCD-NI)
Pool of Public Routable IP Addresses Available: Yes, for access to Production, but only a certain range is given to each Organization

Table 18 - Network Specifications

6.4.1. External Networks
ACME Enterprise will provide the following External Network for the initial implementation:
• Production (VLAN 440)
Part of the provisioning for an organization can involve creating an external network for that Organization, such as Internet access and, if desired, a VPN network, and associating it with the required Org networks.

6.4.2. Network Pools
ACME will provide the following network pools based on need:
• VMware vCloud Director Network Isolation-backed (vCD-NI)
• VLAN-backed (optional)
For the vCD-NI-backed pool, VMware recommends that the transport VLAN (VLAN ID 1254) be a VLAN that is not otherwise in use within the ACME infrastructure, for increased security and isolation. For this initial implementation that option is not available, so Production VLAN 440 will be used.

6.4.3. Networking Use Cases
ACME will implement the following two use cases for the initial implementation, both to demonstrate VMware vCloud Director capabilities and to support deployment of its production vApps:
1. Users should be able to completely isolate vApps for their Development and/or Test users.
2. Users should be able to connect to the Organization networks either directly or via fencing; the Organization networks will not have access to the public Internet.


Figure 8 - vApp Isolated Network. vApp01 (DB x.10, Web x.11, App x.12) attaches to vAppNetwork1, which is backed by a network pool (vCD-NI-backed, VLAN-backed, or port group-backed) and has no connection to any organization or external network.

Figure 9 - vApp Networks Direct-Attached to an Org Network. vApp01 (DB x.10, Web x.11, App x.12) and vApp02 (DB x.13, Web x.14, App x.15) attach their vApp networks directly to an isolated Org network backed by a network pool (vCD-NI-backed, VLAN-backed, or port group-backed).

This is an example of a Dev/Test environment where developers use different IPs in their vApps, so the VMs in one vApp can communicate with the VMs in another vApp without any conflicts.


Figure 10 - vApp Networks Fenced to an Org Network. vApp01 and vApp02 use identical IP addresses (DB x.10, Web x.11, App x.12); their vApp networks are fenced to an isolated Org network backed by a network pool (vCD-NI-backed, VLAN-backed, or port group-backed).

This is an example of a Dev/Test environment where developers have duplicate IPs in their vApps, so the vApp networks are fenced to the Org network to avoid address conflicts.

Figure 11 - vApp Networks Bridged or Fenced to an Org Network Direct-Attached to an External Network. vApp01 (x.10 through x.12) and vApp02 (x.13 through x.15) connect their vApp networks (direct or fenced) to an Org network, backed by a network pool, that attaches directly to the External network and the physical backbone.


Figure 12 - vApp Networks Connected to a Fenced Org Network. vApp01 (1.10 through 1.12) and vApp02 (1.13 through 1.15) connect their vApp networks (direct or fenced) to an Org network, backed by a network pool, that is itself fenced to the External network and the physical backbone.

This is one way to connect to the External network while preserving VLANs, by sharing the same VLAN for Internet access among multiple Organizations. A vShield Edge is needed to provide NAT and firewall services for the different Organizations. Once the External networks have been created, a VMware vCloud Director administrator can create the Organization networks as shown above. The vShield Edge (VSE) device performs address translation between the different networks, and can be configured to provide port address translation to jump hosts located inside the networks or to give direct access to individual hosts. VMware recommends separating External and Organization networks by using two separate vDS switches. For ACME's initial implementation this is not an option, as only one network (Production VLAN 440) is available to route vCD-NI traffic between ESXi hosts.

6.5 Catalogs
The catalog contains ACME-specific templates that are made available to all organizations / business units. ACME will make a set of catalog entries available to cover the classes of virtual machines, templates, and media as specified in the corresponding Service Definition. For the initial implementation, a single cost model will be created using the following fixed cost pricing and chargeback model:


VM CONFIGURATION           PRICE
1 vCPU and 512 MB RAM      $248.00
1 vCPU and 1 GB RAM        $272.00
1 vCPU and 2 GB RAM        $289.00
2 vCPUs and 2 GB RAM       $308.00
1 vCPU and 3 GB RAM        $315.00
2 vCPUs and 3 GB RAM       $331.00
1 vCPU and 4 GB RAM        $341.00
2 vCPUs and 4 GB RAM       $354.00
4 vCPUs and 4 GB RAM       $386.00
1 vCPU and 8 GB RAM        $461.00
2 vCPUs and 8 GB RAM       $477.00
4 vCPUs and 8 GB RAM       $509.00

Table 19 - ACME Fixed-Cost Cost Model
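To show how this fixed-cost model is applied, the following Python sketch totals a monthly charge for an organization from a hypothetical VM inventory. Only the prices come from Table 19; the inventory counts are invented for the example.

    # Monthly price per VM configuration from Table 19, keyed by (vCPUs, RAM in GB).
    price_table = {
        (1, 0.5): 248.00, (1, 1): 272.00, (1, 2): 289.00, (2, 2): 308.00,
        (1, 3): 315.00,   (2, 3): 331.00, (1, 4): 341.00, (2, 4): 354.00,
        (4, 4): 386.00,   (1, 8): 461.00, (2, 8): 477.00, (4, 8): 509.00,
    }

    # Hypothetical inventory for one organization: configuration -> number of VMs.
    inventory = {(1, 1): 10, (2, 2): 4, (4, 4): 2}

    monthly_charge = sum(price_table[cfg] * count for cfg, count in inventory.items())
    print(f"Monthly charge: ${monthly_charge:,.2f}")   # -> $4,724.00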

7. vCloud Security
7.1 vSphere Security
7.1.1. Host Security
ESXi was chosen in part for its limited management console functionality. ACME will configure each host with a strong root password stored according to corporate password procedures. ESXi lockdown mode will also be enabled to prevent root access to the hosts over the network, and appropriate security policies and procedures will be created and enforced to govern the systems. With lockdown mode enabled, the hosts cannot be administered directly over the network, so sophisticated host-based firewall configurations are not required.

7.1.2. Network Security
Virtual switch security settings will be set as follows:
FUNCTION               MANAGEMENT CLUSTER    RESOURCE GROUP
Promiscuous Mode       Reject                Reject
MAC Address Changes    Reject                Reject
Forged Transmits       Reject                Reject

Table 20 - Virtual Switch Security Settings


7.1.3. vCenter Security
vCenter Server is installed using a local administrator account. When vCenter Server is joined to a domain, any domain administrator gains administrative privileges to vCenter. VMware recommends that ACME remove this potential security risk by creating a new vCenter Administrators group in Active Directory and assigning it the vCenter Server Administrator role, which makes it possible to remove the local Administrators group from that role.

7.2 VMware vCloud Director Security


Standard Linux hardening guidelines should be applied to the VMware vCloud Director VM. There is no need for local users, and the root password is only needed during installation and upgrades of the VMware vCloud Director binaries. Additionally, certain network ports must be open for vCloud Director use. Refer to the vCloud Director Administrator's Guide for further information.

8. vCloud Management
8.1 vSphere Host Setup Standardization
Host Profiles can be used to automatically configure networking, storage, security, and other features. This capability, together with automated installation of ESXi hosts, is used to standardize all host configurations.

VM Monitoring is enabled at the cluster level within HA and uses the VMware Tools heartbeat to verify that a virtual machine is alive. When a virtual machine fails and its VMware Tools heartbeat stops updating, VM Monitoring checks whether any storage or networking I/O has occurred over the last 120 seconds; if not, the virtual machine is restarted. VMware recommends enabling both VMware HA and VM Monitoring on the management cluster and the resource group clusters.

8.2 VMware vCloud Director Logging


Each VMware vCloud Director cell logs audit messages to the database, where they are retained for 90 days by default. If log retention longer than 90 days and/or centralized logging is required, an external syslog server can be configured as a duplicate destination for the logged events.

8.3 vSphere Host Logging


Remote logging to a central host greatly increases administration capabilities. Gathering log files on a central server makes it possible to monitor all hosts with a single tool, perform aggregate analysis, and search for evidence of coordinated attacks on multiple hosts. This applies to the following logs:
• messages (host log)
• hostd (host agent log)
• vpxa (vCenter agent log)
Within each ESXi host, syslog behavior is controlled by the syslog advanced settings, which determine the central logging host that receives the syslog messages. The hostname must be resolvable using DNS. For this initial implementation, none of the ESXi hosts at ACME will be configured to send log files to a central syslog server residing in the management cluster.
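Should centralized host logging be enabled later, the relevant advanced settings can be pushed remotely. The following Python (pyVmomi) sketch assumes the Syslog.Remote.Hostname option name used by ESXi 4.x (later releases use Syslog.global.logHost); the vCenter address, credentials, and log host name are placeholders.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder vCenter address and credentials.
    si = SmartConnect(host="vcenter-mgmt.acme.local", user="administrator", pwd="secret",
                      sslContext=ssl._create_unverified_context())

    # Point every host's syslog at a central log host (the name must resolve in DNS).
    hosts = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    for host in hosts.view:
        host.configManager.advancedOption.UpdateOptions(changedValue=[
            vim.option.OptionValue(key="Syslog.Remote.Hostname",
                                   value="loghost.acme.local")])

    Disconnect(si)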


8.4 VMware vCloud Director Monitoring


The following items should be monitored through VMware vCloud Director. As of VMware vCloud Director 1.0, this requires custom queries against the VMware vCloud Director Admin API to retrieve consumption data for the different components. Some components can also be monitored by aggregating the syslog-generated logs from the different VMware vCloud Director cells on the centralized log server. An example of such an API query follows Table 21.
SCOPE                      ITEMS
System                     Leases, Quotas, Limits
vSphere Resources          CPU, Memory, Network IP address pool, Storage free space
Virtual Machines/vApps     Not in scope

Table 21 - VMware vCloud Director Monitoring Items
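A minimal Python sketch of such a query, assuming the vCloud API 1.0 conventions (Basic authentication of a system administrator against /api/v1.0/login, the x-vcloud-authorization session header, and an administrative root document listing organization references). The endpoint paths, media types, and element names are assumptions and should be verified against the vCloud API reference for the deployed version.

    import requests
    import xml.etree.ElementTree as ET

    VCD = "https://vcd.acme.local"                     # placeholder cell address
    HEADERS = {"Accept": "application/*+xml;version=1.0"}

    # Log in as a system administrator and capture the session token.
    login = requests.post(f"{VCD}/api/v1.0/login", headers=HEADERS, verify=False,
                          auth=("administrator@System", "secret"))
    login.raise_for_status()
    HEADERS["x-vcloud-authorization"] = login.headers["x-vcloud-authorization"]

    # Walk the admin root document and print the organization references; each
    # org href can then be followed for lease, quota, and allocation data.
    admin = requests.get(f"{VCD}/api/v1.0/admin", headers=HEADERS, verify=False)
    for el in ET.fromstring(admin.content).iter():
        if "organization" in el.get("type", "") and el.get("name"):
            print(el.get("name"), el.get("href"))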

Appendix A - Bill of Materials


The inventory and specifications of components comprising the vCloud are provided.
ESXi Host
  Vendor X compute resource. Chassis: 3; blades per chassis: 1.
  Processors: 2-socket Intel Xeon X5670 (6-core, 2.9 GHz, Westmere). Memory: 96 GB.
  Version: vSphere 4.1 (ESXi).

vCenter Server
  Type: VM. Guest OS: Windows 2008 x86_64. 2 vCPU, 4 GB memory, 1 vNIC.
  Minimum free disk space: 10 GB. Version: 4.1.

vCenter and Update Manager Database (quantity: 0)
  N/A.

VMware vCloud Director Cell (quantity: 1)
  Minimum number of VMware vCloud Director cells: 1. Type: VM. Guest OS: RHEL 5 x64.
  4 vCPU, 4 GB memory, 2 vNIC. Version: 1.0.

VMware vCloud Director Database
  Type: VM (unless using an existing, managed database cluster). Guest OS: RHEL.
  Oracle 11g. 4 vCPU, 4 GB memory, 1 vNIC.

vShield Manager
  Type: VM appliance. Version: 4.1. 1 vCPU, 4 GB memory, 1 vNIC.

vCenter Chargeback Server
  Type: VM. Guest OS: Windows 2008 x64. 2 vCPU, 2 GB memory, 1 vNIC. Version: 1.5.

vCenter Chargeback Database
  Type: VM (unless using an existing, managed database cluster).
  Guest OS: Windows 2008 x86_64. MS SQL 2008. 2 vCPU, 4 GB memory, 1 vNIC.

NFS Appliance (quantity: 0)
  N/A.

vShield Edge Appliances (quantity: multiple)
  Type: VM. 1 vCPU, 256 MB RAM, 1 vNIC.

Domain Controllers (AD)
  Isolated AD VM built specifically for the PoC infrastructure, with no access to other DCs.
  Type: VM. MS Windows 2008 Datacenter. 2 vCPU, 4 GB memory, 1 vNIC.

API Servers
  N/A.

Monitoring Server
  N/A.

Logging Server
  N/A.

Storage (quantity: 1)
  FC SAN array. VMFS LUN sizing: 539 GB. RAID level: 5.

Table 22 - Management Cluster Inventory


ESXi Host
  Vendor X compute resource. Chassis: 6; blades per chassis: 1. Blade type: N20-B6625-1.
  Processors: 2-socket Intel Xeon X5670 (6-core, 2.9 GHz, Westmere). Memory: 96 GB.
  Version: vSphere 4.1 (ESXi).

vCenter Server (quantity: 1)
  Same as Management Cluster.

Storage (quantity: 1)
  FC SAN array. VMFS LUN sizing: 539 GB. RAID level: 5.

Table 23 - Resource Groups Inventory

VMware, Inc. 3401 Hillview Avenue, Palo Alto, CA 94304 USA. Tel 877-486-9273, Fax 650-427-5001, www.vmware.com
Copyright 2010 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. Item No: VMW_10Q3_WP_Private_p27_A_R2
