Table of Contents

1. Purpose and Overview
   1.1 Executive Summary
   1.2 Business Requirements
   1.3 Use Cases
   1.4 Document Purpose and Assumptions
2. VMware vCloud Architecture Design Overview
   2.1 vCloud Definition
   2.2 vCloud Component Design Overview
3. vSphere Architecture Design Overview
   3.1 High Level Architecture
   3.2 Site Considerations
   3.3 Design Specifications
4. vSphere Architecture Design - Management Cluster
   4.1 Compute Logical Design
      4.1.1 Datacenters
      4.1.2 vSphere Clusters
      4.1.3 Host Logical Design
   4.2 Network Logical Design
   4.3 Shared Storage Logical Design
   4.4 Management Components
   4.5 Management Component Resiliency Considerations
5. vSphere Architecture Design - Resource Groups
   5.1 Compute Logical Design
      5.1.1 Datacenters
      5.1.2 vSphere Clusters
      5.1.3 Host Logical Design
   5.2 Network Logical Design
   5.3 Shared Storage Logical Design
   5.4 Resource Group Datastore Considerations
      5.4.1 Datastore Sizing Estimation
6. vCloud Provider Design
   6.1 Abstractions and VMware vCloud Director Constructs
   6.2 Provider vDCs
   6.3 Organizations
   6.4 Networks
      6.4.1 External Networks
      6.4.2 Network Pools
      6.4.3 Networking Use Cases
   6.5 Catalogs
7. vCloud Security
   7.1 vSphere Security
      7.1.1 Host Security
      7.1.2 Network Security
      7.1.3 vCenter Security
   7.2 VMware vCloud Director Security
8. vCloud Management
   8.1 vSphere Host Setup Standardization
   8.2 VMware vCloud Director Logging
   8.3 vSphere Host Logging
   8.4 VMware vCloud Director Monitoring
Appendix A: Bill of Materials
SECTION | DESCRIPTION
vSphere Architecture Design - Management Cluster | vSphere and vCenter components that support running workloads. Considerations as they apply to vSphere and VMware vCloud Director management components.
vSphere Architecture Design - Resource Groups | vSphere resources for cloud consumption. Design organized by compute, networking, and shared storage, detailed through logical and physical design specifications and considerations.
vCloud Provider Design | VMware vCloud Director objects and configuration. Relationship of VMware vCloud Director to vSphere objects.
Appendix A: Bill of Materials | Inventory of components that comprise the cloud solution.
This document is not intended as a substitute for detailed product documentation. Refer to the installation and administration guides for the appropriate product as necessary for further information.
VCLOUD COMPONENT | DESCRIPTION
VMware vCloud Director | Abstracts and coordinates underlying resources. Includes: VMware vCloud Director Server (1 or more instances, each installed on a Linux VM and referred to as a cell); VMware vCloud Director Database (1 instance per clustered set of VMware vCloud Director cells); vSphere compute, network, and storage resources.
VMware vSphere | Foundation of underlying cloud resources. Includes: VMware ESXi hosts (3 or more instances for the Management cluster and 3 or more instances for the Resource cluster, also referred to as the Compute cluster); vCenter Server (1 instance managing a management cluster of hosts, and 1 or more instances managing one or more resource groups of hosts reserved for vCloud consumption; in a proof-of-concept installation, 1 vCenter Server instance managing both the management cluster and a single resource group is allowable); vCenter Server Database (1 instance per vCenter Server).
VMware vShield | Provides network security services, including NAT and firewall. Includes: vShield Edge (deployed automatically as virtual appliances on hosts by VMware vCloud Director); vShield Manager (1 instance per vCenter Server in the cloud resource groups).
VMware vCenter Chargeback | Provides resource metering and chargeback models. Includes: vCenter Chargeback Server (1 instance); Chargeback Data Collector (1 instance); vCloud Data Collector (1 instance); VSM Data Collector (1 instance).
The management cluster comprises:
- vCenter Server and vCenter Database
- vCenter cluster and ESXi hosts
- vCenter Chargeback Server and Database
- vCenter Chargeback Collectors
- vShield Manager and vShield Edge(s)
- VMware vCloud Director Cell(s) and Database (Oracle)

The resource groups comprise:
- vCenter Server(s) and vCenter Database(s)
- vCenter Cluster(s) and ESXi hosts
Management components are kept separate from the resources they manage, and resources allocated for cloud use have little overhead reserved; for example, cloud resource groups do not host vCenter VMs. Resource groups can be consistently and transparently managed, carved up, and scaled horizontally. The high-level logical architecture is depicted as follows.
[Figure: High-level logical architecture. A vSphere 4.1 management cluster with shared SAN storage runs the management VMs: VCD (vCloud Director cell), vSM (vShield Manager), vCenter (MC), vCenter (RG), vCenter DB, MSSQL, Oracle 11g, AD/DNS, Chargeback, and optional Log/Mon. Separate vSphere 4.1 resource groups with shared SAN storage provide the compute resources for Org vDC #1 and its virtual machines.]
The following diagram depicts the physical design corresponding to the logical architecture previously described.
[Figure: Physical design. Server infrastructure: vCenter01 - Cluster01 (hosts C1, C2, and C3) backs Provider vDC Clusters A and B in the vCloud resource groups, each with a datastore and a resource pool (HA=N+1, CPU=TBD, MEM=TBD); vCenter01 - Cluster02 (host M1) is the management and DB cluster. Network infrastructure: redundant switches and fabrics with 10Gbps uplinks and port groups. Storage infrastructure: FC SAN with shared datastores.]
Table 1. vSphere Clusters - Management Cluster

ATTRIBUTE | SPECIFICATION
Number of ESXi Hosts | 3
VMware DRS Configuration | Fully automated
VMware DRS Migration Threshold | 3 stars
VMware HA Enable Host Monitoring | Yes
VMware HA Admission Control Policy | Cluster tolerates 1 host failure (percentage based)
VMware HA Percentage | 67%
VMware HA Admission Control Response | Prevent VMs from being powered on if they violate availability constraints
VMware HA Default VM Restart Priority | N/A
VMware HA Host Isolation Response | Leave VM powered on
VMware HA Enable VM Monitoring | Yes
VMware HA VM Monitoring Sensitivity | Medium
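The admission-control percentage follows directly from the host count: reserving the capacity of one host out of n leaves roughly (n-1)/n of the cluster usable. A minimal sketch of that arithmetic in Python (illustrative only, not VMware tooling):

```python
# HA admission control percentage for a cluster that must tolerate
# one host failure: reserve one host's worth of capacity out of n.
def ha_usable_percentage(hosts: int, failures_tolerated: int = 1) -> int:
    return round(100 * (hosts - failures_tolerated) / hosts)

print(ha_usable_percentage(3))  # 67 -> management cluster (Table 1)
print(ha_usable_percentage(6))  # 83 -> resource group cluster (Table 7)
```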
4.1.3. Host Logical Design

Each ESXi host in the management cluster will have the following specifications:
ATTRIBUTE | SPECIFICATION
Host Type and Version | VMware ESXi Installable
Processors | x86 Compatible
Storage | Local for ESXi binaries; SAN LUN for virtual machines
Networking | Connectivity to all needed VLANs
Memory | Sized to support estimated workloads
SWITCH NAME | SWITCH TYPE
vSwitch0 | Standard
The physical NIC ports will be connected to redundant physical switches. The following diagrams depict the virtual network infrastructure designs:
[Figure: Management cluster virtual network design. vmnic0 and vmnic1 connect vSwitch0 to redundant physical switches and fabrics, carrying the native VLAN and VLANs 443, 442, and 440.]
PARAMETER | SETTING
Load Balancing | Route based on NIC load
Failover Detection | Link status
Notify Switches | Enabled
Failover Order | All active, except for the Management Network (Management Console: Active, Standby; vMotion: Standby, Active)
Table 5. Shared Storage Logical Design Specifications - Management Cluster

ATTRIBUTE | SPECIFICATION
Number of Initial LUNs | 1 dedicated, 1 interchange (shared with Compute cluster)
LUN Size | 539 GB
Zoning | Single initiator, single target
VMFS Datastores per LUN | 1
VMs per LUN | 10 (distribute redundant VMs)
[Figure: Management component interactions. vCenter Server connects to the vCenter Database and exposes the VIM API consumed by the ESXi hosts; load-balanced VMware vCloud Director cells reach the vCD Database over JDBC; vCenter Chargeback reaches the Chargeback Database over JDBC and serves the vCenter Chargeback UI over HTTPS; the VSM Data Collector and vCloud Data Collector feed Chargeback, and vShield Manager (VSM) is reached over HTTPS.]
Table 6. Management Component Resiliency covers the following components: vCenter Server, VMware vCloud Director, vCenter Chargeback Server, and vShield Manager.
Table 7. vSphere Cluster Configuration - Resource Group

ATTRIBUTE | SPECIFICATION
VMware DRS Configuration | Fully automated
VMware DRS Migration Threshold | 3 stars
VMware HA Enable Host Monitoring | Yes
VMware HA Admission Control Policy | Cluster tolerates 1 host failure (percentage based)
VMware HA Percentage | 83%
VMware HA Admission Control Response | Prevent VMs from being powered on if they violate availability constraints
VMware HA Default VM Restart Priority | N/A
VMware HA Host Isolation Response | Leave VM powered on
CLUSTER | VCENTER SERVER | HA PERCENTAGE
VCDCompute01 | ACMEmgmtVC01.vcd.acme.com | 83%
5.1.3. Host Logical Design

Each ESXi host in the resource groups will have the following specifications:
ATTRIBUTE | SPECIFICATION
Host Type and Version | VMware ESXi Installable
Processors | x86 Compatible
Storage | Local for ESXi binaries; shared for virtual machines
Networking | Connectivity to all needed VLANs
Memory | Enough to run estimated workloads
5.2 Network Logical Design

The network design section defines how the vSphere virtual networking will be configured. Following best practices, the network architecture will meet these requirements:
- Separate networks for vSphere management, VM connectivity, and vMotion traffic
- Redundant vSwitches with at least 2 active physical adapter ports
- Redundancy across different physical adapters to protect against NIC or PCI slot failure
- Redundancy at the physical switch level
SWITCH NAME | SWITCH TYPE
vSwitch0 | Standard
vDSwitch | Distributed
When using the distributed virtual switch, the number of dvUplink ports equals the number of physical NIC ports on each host. The physical NIC ports will be connected to redundant physical switches.
[Figure: Resource group virtual network design. vmnic2 and vmnic3 connect to redundant physical switches and fabrics, carrying VLAN 440.]
PARAMETER | SETTING
Load Balancing | Route based on NIC load (for vDS)
Failover Detection | Link status
Notify Switches | Enabled
Failover Order | All active, except for the Management Network
Table 12. Shared Storage Logical Design Specifications - Resource Groups

ATTRIBUTE | SPECIFICATION
Number of Initial LUNs | 6 dedicated, 1 interchange (shared with Management cluster)
LUN Size | 539 GB
Zoning | Single initiator, single target
VMFS Datastores per LUN | 1
VMs per LUN | 12
5.4.1. Datastore Sizing Estimation

The typical datastore size can be estimated by considering the following factors.
Table 13. Datastore Size Estimation Factors

VARIABLE | VALUE
Maximum Number of VMs per Datastore | 12
Average Size of Virtual Disk(s) per VM | 60 GB
Average Memory Size per VM | 2 GB
Safety Margin | 10%
For example:

((12 * 60 GB) + (12 * 2 GB)) + 10% = (720 GB + 24 GB) * 1.1 = 818.4 GB
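A minimal sketch of this estimate in Python (the function name is illustrative, not from any VMware tooling):

```python
def estimate_datastore_size_gb(max_vms: int,
                               avg_disk_gb: float,
                               avg_mem_gb: float,
                               safety_margin: float = 0.10) -> float:
    """Per-VM disk plus space for the VM swap file (roughly the
    configured memory), padded by a safety margin for snapshots,
    logs, and other overhead."""
    base = max_vms * (avg_disk_gb + avg_mem_gb)
    return base * (1 + safety_margin)

# Values from Table 13: 12 VMs, 60 GB disk, 2 GB memory, 10% margin.
print(estimate_datastore_size_gb(12, 60, 2))  # ~818.4 GB
```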
[Figure: Mapping from vCloud Director through vSphere to the physical layer. vCD constructs map to vSphere objects ((d)VS port groups on a vDS, and VMFS datastores Vcdcomputecluster1-1 through VcdcomputeclusterX-4), which in turn map to the physical VLAN, network, hosts, and storage array backing datastores vcd_compute_01 through vcd_compute_0X (539 GB each).]
All ESXi hosts will belong to a vSphere cluster, and each cluster will be associated with one and only one ACME Enterprise vDC. A vSphere cluster will scale to 25 hosts, allowing for up to 14 clusters per vCenter Server (bound by the maximum number of hosts per datacenter) and an upper limit of 10,000 VMs per resource group (a vCenter limit). The recommendation is to start with 8 hosts in a cluster and add hosts as dictated by customer consumption; for the initial implementation, however, the provider vDC will start with 6 hosts. When utilization of the resources reaches 60%, VMware recommends deploying a new provider vDC/cluster. This provides room for growth within the existing provider vDCs for the existing organizations/business units without necessitating their migration as utilization nears a cluster's limits.

As an example, a fully loaded resource group will contain 14 Provider vDCs and up to 350 ESXi hosts, giving an average VM consolidation ratio of 26:1 assuming a 5:1 vCPU:pCPU ratio. To increase this consolidation ratio, ACME Enterprise would need to increase the vCPU:pCPU ratio it is willing to support. The risk associated with increased CPU overcommitment is mainly degraded overall performance, which shows up as higher-than-acceptable vCPU ready times. The vCPU:pCPU ratio expresses the amount of CPU overcommitment, across the available cores, that ACME is comfortable with. For VMs that are not busy, this ratio can be increased without any undesirable effect on VM performance; monitoring vCPU ready times helps identify whether the ratio should be raised or lowered on a per-cluster basis. A 5:1 ratio is a good starting point for a multi-core system.
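To make the arithmetic behind the 26:1 figure concrete, here is a small sketch (the host core counts come from the bill of materials; the average vCPUs per VM is an assumption used only for illustration):

```python
# How the consolidation ratio follows from the vCPU:pCPU ratio.
# Hosts are 2-socket, 6-core Xeon X5670 (see Appendix A); the
# 2.3 vCPUs-per-VM average is an illustrative assumption.
SOCKETS_PER_HOST = 2
CORES_PER_SOCKET = 6
VCPU_PER_PCPU = 5        # overcommitment ratio ACME is comfortable with
AVG_VCPUS_PER_VM = 2.3   # assumed average VM size

pcpus = SOCKETS_PER_HOST * CORES_PER_SOCKET   # 12 physical cores per host
vcpus = pcpus * VCPU_PER_PCPU                 # 60 schedulable vCPUs per host
vms_per_host = vcpus / AVG_VCPUS_PER_VM       # ~26 VMs per host

print(f"{vms_per_host:.0f}:1")  # ~26:1
```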
A Provider vDC can map to only one vSphere cluster, but can map to multiple datastores and networks. Multiple Provider vDCs are used to map to different types/tiers of resources:
- Compute: a function of the mapped vSphere clusters and the resources that back them
- Storage: a function of the underlying storage types of the mapped datastores
- Networking: a function of the mapped vSphere networking, in terms of speed and connectivity

Multiple Provider vDCs are created for the following reasons:
- The cloud requires more compute capacity than a single vSphere cluster provides (a vSphere resource pool cannot span vSphere clusters)
- Tiered storage is required; each Provider vDC maps to datastores on storage with different characteristics
- Workloads are required to run on physically separate infrastructure
ATTRIBUTE | SPECIFICATION
Number of Provider vDCs | 1
vSphere Networks | 1 (Production)
PROVIDER VDC | CLUSTER | DATASTORES | VSPHERE NETWORKS
GIS | VCDCompute01 | vcd_compute_01 through vcd_compute_0X | Production
VMware recommends assessing workloads to assist in sizing. Following is a standard sizing table that can be used as a reference for future design activities.
VM SIZE | DISTRIBUTION | NUMBER OF VMS
6.3 Organizations
Table 17. Organizations

ORGANIZATION NAME | DESCRIPTION
AIS |
6.4 Networks
ATTRIBUTE | SPECIFICATION
Number of Default External Networks | 1
Number of Default vApp Networks | End-user controlled
Number of Default Organization Networks | 2
Default Network Pool Types Used | vCloud Director Network Isolation (vCD-NI)
Is a Pool of Public Routable IP Addresses Available? | Yes, for access to Production, but only a certain range is given to each Organization.
6.4.1. External Networks

ACME Enterprise will provide the following External Network for the initial implementation: Production (VLAN 440). Part of provisioning an organization can involve creating an external network for that Organization, such as internet access and, if desired, a VPN network, and associating them with the required Org Networks.

6.4.2. Network Pools

ACME will provide the following Network Pools based on need:
- VMware vCloud Director Network Isolation-backed (vCD-NI)
- VLAN-backed (optional)

For the vCD-NI-backed pool, VMware recommends that the transport VLAN (VLAN ID 1254) be a VLAN not otherwise in use within the ACME infrastructure, for increased security and isolation. This initial implementation does not have that option, so it will use Production VLAN 440.

6.4.3. Networking Use Cases

ACME will implement the following two use cases for the initial implementation, both to demonstrate VMware vCloud Director capabilities and to support deploying their production vApp:
1. Users can completely isolate vApps for their Development and/or Test users.
2. Users can connect to the Organization Networks either directly or via fencing; the Organization Networks will not have access to the public Internet.
[Figure: Two vApps drawing their own networks from a network pool. vApp01 (DB x.10, Web x.11, App x.12) attaches to vAppNetwork1 and vApp02 (DB x.13, Web x.14, App x.15) attaches to vAppNetwork2; both connect Direct.]
This is an example for a Dev/Test environment where developers use different IPs in their vApps, so the VMs in one vApp can communicate with the VMs in another vApp without any conflicts.
[Figure: Two vApps with duplicate IP addresses. vApp01 and vApp02 each contain DB x.10, Web x.11, and App x.12, attached to vAppNetwork1 and vAppNetwork2 from a network pool; both connect Fenced.]
This is an example for Dev/Test where developers will have duplicate IPs in their vApps.
[Figure: vApp01 (DB x.10, Web x.11, App x.12) and vApp02 (DB x.13, Web x.14, App x.15) attach via vAppNetwork1 and vAppNetwork2 (Direct or Fenced, from a network pool) to an Org Network that is Direct-attached to the External Network and physical backbone.]
Figure 11 vApp Network Bridged or Fenced to an Org Network that is Direct attached to External Network
[Figure: vApp01 (DB 1.10, Web 1.11, App 1.12) and vApp02 (DB 1.13, Web 1.14, App 1.15) attach via vAppNetwork1 and vAppNetwork2 (Direct or Fenced, from a vCD-NI-backed, VLAN-backed, or portgroup-backed network pool) to an Org Network that is Fenced to the External Network and physical backbone.]
This is one way to connect the External network while conserving VLANs, by sharing the same Internet VLAN among multiple Organizations. vShield Edge is needed to provide NAT and firewall services for the different Organizations. Once the External Networks have been created, a VMware vCloud Director administrator can create the Organization Networks as shown above. The vShield Edge (VSE) device performs address translation between the different networks, and can be configured to provide port address translation to jump hosts located inside the networks or to give direct access to individual hosts. VMware recommends separating External and Organization networks by using two separate vDS switches. For ACME's initial implementation, creating two vDS switches is not an option, because only one network (Production VLAN 440) is available to route vCD-NI traffic between ESX hosts.
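As an illustration of the port-address-translation pattern described above (the addresses and ports are hypothetical, and this models the rule logic only, not vShield Edge's actual configuration format):

```python
# Hypothetical port-forwarding (PAT) rules of the kind a vShield Edge
# device would apply: one public IP shared by an Organization, with
# distinct external ports forwarded to jump hosts or individual VMs.
# Addresses and ports are illustrative, not from the ACME design.
pat_rules = [
    # (external_ip, external_port) -> (internal_ip, internal_port)
    (("192.0.2.10", 2222), ("10.10.1.10", 22)),   # SSH to jump host
    (("192.0.2.10", 8443), ("10.10.1.11", 443)),  # HTTPS to web VM
]

def translate(ext_ip: str, ext_port: int):
    """Return the internal (ip, port) a packet is forwarded to, or None."""
    for public, internal in pat_rules:
        if public == (ext_ip, ext_port):
            return internal
    return None

print(translate("192.0.2.10", 2222))  # ('10.10.1.10', 22)
```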
6.5 Catalogs
The catalog contains ACME-specific templates that are made available to all organizations / business units. ACME will make a set of catalog entries available to cover the classes of virtual machines, templates, and media as specified in the corresponding Service Definition. For the initial implementation, a single cost model will be created using the following fixed cost pricing and chargeback model:
Table 19. ACME Fixed-Cost Cost Model

VM CONFIGURATION | PRICE
1 vCPU and 512 MB RAM | $248.00
1 vCPU and 1 GB RAM | $272.00
1 vCPU and 2 GB RAM | $289.00
2 vCPUs and 2 GB RAM | $308.00
1 vCPU and 3 GB RAM | $315.00
2 vCPUs and 3 GB RAM | $331.00
1 vCPU and 4 GB RAM | $341.00
2 vCPUs and 4 GB RAM | $354.00
4 vCPUs and 4 GB RAM | $386.00
1 vCPU and 8 GB RAM | $461.00
2 vCPUs and 8 GB RAM | $477.00
4 vCPUs and 8 GB RAM | $509.00
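A minimal sketch of this fixed-cost model as a lookup table (the function name is illustrative; the prices are the Table 19 values, keyed by vCPU count and RAM in GB):

```python
# Fixed-cost pricing from Table 19, keyed by (vCPUs, RAM in GB);
# 512 MB is written as 0.5. Illustrative sketch only.
PRICE_TABLE = {
    (1, 0.5): 248.00, (1, 1): 272.00, (1, 2): 289.00, (2, 2): 308.00,
    (1, 3): 315.00,   (2, 3): 331.00, (1, 4): 341.00, (2, 4): 354.00,
    (4, 4): 386.00,   (1, 8): 461.00, (2, 8): 477.00, (4, 8): 509.00,
}

def fixed_charge(vcpus: int, ram_gb: float) -> float:
    """Look up the fixed charge for a supported VM configuration."""
    try:
        return PRICE_TABLE[(vcpus, ram_gb)]
    except KeyError:
        raise ValueError(f"unsupported configuration: {vcpus} vCPU / {ram_gb} GB")

print(fixed_charge(2, 4))  # 354.0
```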
7. vCloud Security
7.1 vSphere Security
7.1.1. Host Security

Chosen in part for its limited management console functionality, ESXi will be configured by ACME with a strong root password stored following corporate password procedures. ESXi lockdown mode will also be enabled to prevent root access to the hosts over the network, and appropriate security policies and procedures will be created and enforced to govern the systems. Because the hosts cannot then be accessed over the network, sophisticated host-based firewall configurations are not required.

7.1.2. Network Security

Virtual switch security settings will be set as follows:
FUNCTION | SETTING
Promiscuous Mode | Management cluster: Reject; Resource Group: Reject
MAC Address Changes | Management cluster: Reject; Resource Group: Reject
Forged Transmits | Management cluster: Reject; Resource Group: Reject
7.1.3. vCenter Security

vCenter Server is installed using a local administrator account. When vCenter Server is joined to a domain, any domain administrator gains administrative privileges to vCenter. VMware recommends that ACME remove this potential security risk by creating a new vCenter Administrators group in Active Directory and assigning it to the vCenter Server Administrator role, which makes it possible to remove the local Administrators group from that role.
8. vCloud Management
8.1 vSphere Host Setup Standardization
Host Profiles can be used to automatically configure networking, storage, security, and other features. This feature, along with automated installation of ESXi hosts, is used to standardize all host configurations. VM Monitoring is enabled at the cluster level within HA and uses the VMware Tools heartbeat to verify that a virtual machine is alive. When a virtual machine fails and the VMware Tools heartbeat stops updating, VM Monitoring checks whether any storage or networking I/O has occurred over the last 120 seconds; if not, the virtual machine is restarted. VMware therefore recommends enabling both VMware HA and VM Monitoring on the Management cluster and the Resource Group clusters.
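The restart decision can be summarized in a few lines (a sketch of the behavior described above, not VMware's implementation):

```python
# VM Monitoring decision: restart a VM only when the VMware Tools
# heartbeat is lost AND no storage or network I/O has been observed
# within the 120-second grace window.
IO_GRACE_SECONDS = 120

def should_restart(heartbeat_ok: bool, seconds_since_last_io: float) -> bool:
    if heartbeat_ok:
        return False  # VM is alive; nothing to do
    return seconds_since_last_io >= IO_GRACE_SECONDS

print(should_restart(False, 45))   # False: recent I/O suggests the VM is alive
print(should_restart(False, 300))  # True: no heartbeat and no recent I/O
```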
Table 21. VMware vCloud Director Monitoring Items

CATEGORY | ITEMS
System | Leases, Quotas, Limits
vSphere Resources | CPU, Memory, Network IP address pool, Storage free space
Virtual Machines/vApps | Not in scope
Appendix A: Bill of Materials

Management Cluster

ITEM | QUANTITY | NAME/DESCRIPTION
ESXi Host | 3 | Vendor X Compute Resource. Chassis: 3; Blades per Chassis: 1; Processors: 2-socket Intel Xeon X5670 (6-core, 2.9 GHz, Westmere); Memory: 96 GB; Version: vSphere 4.1 (ESXi)
vCenter Server | 1 | Type: VM; Guest OS: Windows 2008 x86_64; 2 vCPU; 4 GB memory; 1 vNIC; minimum free disk space: 10 GB; Version: 4.1
VMware vCloud Director Cell | 1 (minimum) | Type: VM; Guest OS: RHEL 5 x64; 4 vCPU; 4 GB memory; 2 vNIC; Version: 1.0
VMware vCloud Director Database | 1 | Type: VM (unless using an existing, managed DB cluster); Guest OS: RHEL; Oracle 11g; 4 vCPU; 4 GB memory; 1 vNIC
vShield Manager | 1 | Type: VM appliance; Version: 4.1; 1 vCPU; 4 GB memory; 1 vNIC
vCenter Chargeback Server | 1 | Type: VM; Guest OS: Windows 2008 x64; 2 vCPU; 2 GB memory; 1 vNIC; Version: 1.5
Chargeback Database | 1 | Type: VM (unless using an existing, managed DB cluster); Guest OS: Windows 2008 x86_64; MS SQL 2008; 2 vCPU; 4 GB memory; 1 vNIC
vShield Edge | Multiple | Type: VM; 1 vCPU; 256 MB RAM; 1 vNIC (deployed automatically by VMware vCloud Director)
AD/DNS | 1 | Isolated AD VM built specifically for PoC infrastructure, with no access to other DCs. Type: VM; Guest OS: MS Windows 2008 Datacenter; 2 vCPU; 4 GB memory; 1 vNIC

vCloud Resource Groups

ITEM | QUANTITY | NAME/DESCRIPTION
ESXi Host | 6 | Vendor X Compute Resource. Chassis: 6; Blades per Chassis: 1; Blade Type: N20-B6625-1; Processors: 2-socket Intel Xeon X5670 (6-core, 2.9 GHz, Westmere); Memory: 96 GB; Version: vSphere 4.1 (ESXi)
vCenter Server | 1 | Same as Management Cluster
Storage | 1 | FC SAN Array; VMFS LUN sizing: 539 GB; RAID level: 5
VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com

Copyright 2010 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. Item No: VMW_10Q3_WP_Private_p27_A_R2