
Executive Summary
This document provides the physical design and configuration for the Windows Server 2012 with Hyper-V (Hyper-V) and System Center 2012 Virtual Machine Manager (SCVMM) technology streams of the Public Protector South Africa (PPSA) platform upgrade project. The design and configuration of these two (2) components will provide a standard for extending the virtualization capacity based on future requirements as the business grows. The PPSA has already purchased a pre-designed and pre-configured Dell vStart 200, which will be deployed and configured as the first scale-unit in their datacenter. The virtualization capabilities will be made available through the deployment of Windows Server 2012 with Hyper-V on the standardized scale-unit in the PPSA. The management layer will be built using System Center 2012 SP1, and this document includes the design components for System Center 2012 Virtual Machine Manager.

Scale Unit Design


A scale-unit is a set of server, network and storage capacity deployed as a single unit in a datacenter. It is the smallest unit of capacity deployed in the datacenter, and the size of the scale-unit depends on the average new capacity required on a quarterly or yearly basis. Rather than deploying a single server at a time, Public Protector South Africa must deploy a new scale-unit when additional capacity is needed, fulfilling the immediate need while leaving room for growth. The pre-configured scale-unit for the Public Protector South Africa will consist of one (1) Dell vStart 200 that has six (6) Dell R720 hosts that will be configured as a single six-node cluster, four (4) Dell PowerConnect 7048 1GbE switches and three (3) Dell EqualLogic PS6100 SANs with 14.4TB (24 x 600GB drives) each. The scale-unit is also configured with two (2) uninterruptible power supply (UPS) units and one (1) Dell R620 host which is already configured as the scale-unit component management server.

2.1 Host Configuration


The host configuration will be based on the Dell PowerEdge R720 model. Each host will be configured as shown in Figure 1:

Figure 1: Host Configuration — each R720 host runs Windows Server 2012 Datacenter with 128GB RAM, two Intel processors, a C: volume of 2 x 149GB (RAID 1, NTFS), SAN storage for the witness and CSV disks, onboard and expansion 1GbE NICs split between the Windows Server 2012 LBFO team and the iSCSI connections, and an out-of-band management (iLO/DRAC) port.

The primary operating system (OS) installed on the host will be Windows Server 2012 Datacenter Edition with the following roles and features enabled.

2.1.1 Required Roles

The following roles will be required on each of the hosts:

Hyper-V, with the applicable management tools that are automatically selected.

2.1.2 Required Features

The following features will be required on each of the hosts:

Failover Clustering, with the applicable tools that are automatically selected
Multipath I/O
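These roles and features can be enabled with Windows PowerShell. The following is a minimal sketch, assuming it is run locally on each host; add -ComputerName to target the hosts remotely.

# Enable the Hyper-V role plus the Failover Clustering and Multipath I/O
# features, including the associated management tools, and restart the host.
Install-WindowsFeature -Name Hyper-V, Failover-Clustering, Multipath-IO -IncludeManagementTools -Restart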

2.1.3 Host BIOS Configuration

The BIOS of each host needs to be upgraded to the latest release version and the following options need to be enabled:

Processor Settings: Virtualization Technology must be enabled
Processor Settings: Execute Disable must be enabled

2.1.4 BMC Configuration

The baseboard management controller (BMC) needs to be configured to allow for out-of-band management of the hosts and to allow System Center 2012 Virtual Machine Manager (SCVMM) to discover the physical computers. This will be used for bare-metal provisioning of the hosts and for management from SCVMM. The BMC must support any one of the following out-of-band management protocols:

Intelligent Platform Management Interface (IPMI) version 1.5 or 2.0
Data Center Management Interface (DCMI) version 1.0
System Management Architecture for Server Hardware (SMASH) version 1.0 over WS-Management (WS-Man)

DRAC Configuration

The following table provides the detailed DRAC configuration:
Model   Host Name    IP              Subnet          Gateway        VLAN   Enabled Protocol
R620    OHOWVSMAN    10.131.133.47   255.255.255.0   10.131.133.1   1      IPMI
R720    OHOWVSHV01   10.131.133.13   255.255.255.0   10.131.133.1   1      IPMI
R720    OHOWVSHV02   10.131.133.14   255.255.255.0   10.131.133.1   1      IPMI
R720    OHOWVSHV03   10.131.133.15   255.255.255.0   10.131.133.1   1      IPMI
R720    OHOWVSHV04   10.131.133.18   255.255.255.0   10.131.133.1   1      IPMI
R720    OHOWVSHV05   10.131.133.19   255.255.255.0   10.131.133.1   1      IPMI
R720    OHOWVSHV06   10.131.133.20   255.255.255.0   10.131.133.1   1      IPMI

Table 1: Host DRAC Configuration

The following details have been configured to gain access to the DRAC controller for the individual hosts:

Username   root
Password   Can be obtained from the Dell documentation.

Table 2: DRAC Credentials
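As a hedged illustration only, SCVMM can discover a host through its DRAC with the BMC settings above; the Run As account name below is an assumption and the credentials are the ones from Table 2.

# Hypothetical Run As account in SCVMM that holds the DRAC credentials from Table 2.
$bmcAccount = Get-SCRunAsAccount -Name "DRAC-Admin"

# Discover the physical computer for OHOWVSHV01 through its DRAC using IPMI,
# so it can later be targeted for bare-metal provisioning.
Find-SCComputer -BMCAddress "10.131.133.13" -BMCProtocol "IPMI" -BMCRunAsAccount $bmcAccount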

2.1.5 Host and Cluster Network Configuration

The following table provides the detailed Host and Hyper-V cluster network configuration once the LBFO team is established:
Model     Host Name    Host Type              Management Interface                                              VLAN
R620      OHOWVSMAN    Management             IP: 10.131.133.39  Subnet: 255.255.255.0  Gateway: 10.131.133.1   1
R720      OHOWVSHV01   Virtualization Host    IP: 10.131.133.41  Subnet: 255.255.255.0  Gateway: 10.131.133.1   1
R720      OHOWVSHV02   Virtualization Host    IP: 10.131.133.42  Subnet: 255.255.255.0  Gateway: 10.131.133.1   1
R720      OHOWVSHV03   Virtualization Host    IP: 10.131.133.43  Subnet: 255.255.255.0  Gateway: 10.131.133.1   1
R720      OHOWVSHV04   Virtualization Host    IP: 10.131.133.44  Subnet: 255.255.255.0  Gateway: 10.131.133.1   1
R720      OHOWVSHV05   Virtualization Host    IP: 10.131.133.45  Subnet: 255.255.255.0  Gateway: 10.131.133.1   1
R720      OHOWVSHV06   Virtualization Host    IP: 10.131.133.46  Subnet: 255.255.255.0  Gateway: 10.131.133.1   1
Hyper-V   OHOWVSCV01   Hyper-V Cluster Name   IP: 10.131.133.40  Subnet: 255.255.255.0  Gateway: 10.131.133.1   1

Table 3: Host and Hyper-V Cluster Network Configuration

2.1.6 Private Network Configuration

The following table provides the detailed private network configuration for the Cluster and Live Migration Networks that will be created as virtual interfaces once the LBFO team is established. The private network interfaces will be disabled from registering in DNS.
Host         Cluster Network (VLAN 6)                Live Migration Network (VLAN 7)
OHOWVSHV01   IP: 10.10.6.1  Subnet: 255.255.254.0    IP: 10.10.7.1  Subnet: 255.255.252.0
OHOWVSHV02   IP: 10.10.6.2  Subnet: 255.255.254.0    IP: 10.10.7.2  Subnet: 255.255.252.0
OHOWVSHV03   IP: 10.10.6.3  Subnet: 255.255.254.0    IP: 10.10.7.3  Subnet: 255.255.252.0
OHOWVSHV04   IP: 10.10.6.4  Subnet: 255.255.254.0    IP: 10.10.7.4  Subnet: 255.255.252.0
OHOWVSHV05   IP: 10.10.6.5  Subnet: 255.255.254.0    IP: 10.10.7.5  Subnet: 255.255.252.0
OHOWVSHV06   IP: 10.10.6.6  Subnet: 255.255.254.0    IP: 10.10.7.6  Subnet: 255.255.252.0

Table 4: Private Network Configuration
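A minimal sketch of configuring these private interfaces on OHOWVSHV01 once the team and virtual adapters exist; the vEthernet adapter names follow the interface names used in Table 6 and are assumptions.

# Cluster and live migration addresses for OHOWVSHV01 (Table 4).
New-NetIPAddress -InterfaceAlias "vEthernet (Cluster)" -IPAddress 10.10.6.1 -PrefixLength 23
New-NetIPAddress -InterfaceAlias "vEthernet (LM)" -IPAddress 10.10.7.1 -PrefixLength 22

# Prevent the private interfaces from registering in DNS.
Set-DnsClient -InterfaceAlias "vEthernet (Cluster)", "vEthernet (LM)" -RegisterThisConnectionsAddress $false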

2.1.7 Hyper-V Security Design

The following security design principles need to be taken into consideration when designing a virtualization solution built using Hyper-V. The table below provides the details of the decisions taken for the PPSA based on their skills and requirements.
Security Consideration: Reduce the attack footprint of the Windows Server operating system by installing Windows Server Core.
Design Decision: The PPSA does not have the required PowerShell knowledge to manage Windows Server Core.

Security Consideration: Create and apply Hyper-V specific group policies to disable any unnecessary ports and/or features.
Design Decision: The recommended Windows Server 2012 Hyper-V group policy will be extracted from the Microsoft Security Compliance Manager and applied to all the Hyper-V hosts. The group policy will be imported into Active Directory and applied on an organizational unit where all the Hyper-V hosts reside.

Security Consideration: Limit the Hyper-V operators to managing only the virtualization layer, and not the operating system itself, by adding the required users to the Hyper-V Administrators group on the local Hyper-V server.
Design Decision: The following group will be created in Active Directory: GG-HyperV-Admins. The group will be added to the Hyper-V group policy discussed earlier so that it is added to the local Hyper-V Administrators group on each of the Hyper-V hosts. This group will contain only the required Hyper-V administrators in the PPSA.

Security Consideration: Install antivirus on the Hyper-V servers and add exclusions for the locations where the hypervisor stores the virtual machine profiles and virtual hard drives.
Design Decision: System Center 2012 Endpoint Protection (SCEP) will be deployed and managed by System Center 2012 Configuration Manager. When SCEP is installed on a Hyper-V host it automatically configures the exclusions for the virtual machine data locations, as these are inherited from the Hyper-V host.

Security Consideration: Encrypt the volumes where the virtual machine data is stored using BitLocker. This is required for virtualization hosts where physical security is a constraint.
Design Decision: The cluster shared volumes (CSV) where the virtual machine data will reside will not be encrypted.

Table 5: Hyper-V Security Design Decisions

The creation and deployment of the required group policies and organizational units need to go through the standard change process to ensure they are in a managed state and created in the correct location.
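The GG-HyperV-Admins group from Table 5 could be created as sketched below; the organizational unit path is a hypothetical example and must match the OU created through the change process.

# Create the global security group for Hyper-V administrators.
# The OU distinguished name is a placeholder for the PPSA environment.
New-ADGroup -Name "GG-HyperV-Admins" -GroupScope Global -GroupCategory Security `
    -Path "OU=Hyper-V Hosts,DC=ppsa,DC=local" `
    -Description "Added to the local Hyper-V Administrators group on each Hyper-V host"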

2.2 Network
The following section provides the detailed design for the network configuration of the hosts and the switches used in the solution. The design and configuration are aimed at simplifying the management of the networks while providing a load-balanced network for the management OS and virtual machines.

2.2.1 Host Network Configuration

The four (4) available network cards for data traffic per host will be used to create a single network team that is switch independent and configured to use the Hyper-V switch port traffic distribution algorithm. This allows Virtual Machine Queues (VMQs) to be offloaded directly to the NICs and distributes inbound and outbound traffic evenly across the team members, because there will be more VMs than available network adapters on the hosts. Figure 2 provides a logical view of how the network team will be configured and which virtual interfaces the management OS requires to enable the hosts to be configured as a failover cluster. The network team will also be used for the virtual machine traffic, and a virtual network switch will be created to allow communication from the virtual environment to the production environment.
Figure 2: Host Network Configuration — the teamed physical NICs form a single network team bound to the Hyper-V Extensible Switch; the management OS connects through virtual adapters for Management, Cluster and Live Migration traffic, and the virtual machines (VM 1 to VM n) connect through the same switch.

The following classic network architecture will be required for implementing a failover cluster:

Figure 3: Network Architecture for Failover Cluster — each cluster node (Node 1 to Node n) connects to the storage area network holding the VHD file(s) and pass-through disk(s), and to the Host Management, Cluster Heartbeat, Live Migration and Virtual Machine network(s).

Each cluster node has at least four network connections:

The Host Management Network: This network is used for managing the Hyper-V host. This type of configuration is recommended because it allows the Hyper-V services to be managed regardless of the network workload generated by the hosted virtual machines.

The Cluster Heartbeat Network: This network is used by the Failover Cluster service to check that each node of the cluster is available and working correctly. This network can also be used by a cluster node to access its storage through another node if direct connectivity to the SAN is lost (Dynamic I/O Redirection). It is recommended to use dedicated network equipment for the Cluster Heartbeat Network to get the best availability of the failover cluster service.

The Live Migration Network: This network is used for live migration of virtual machines between two nodes. The live migration process is particularly useful for planned maintenance operations on a host because it allows virtual machines to be moved between two cluster nodes with little or no loss of network connectivity. The network bandwidth directly influences the time needed to live migrate a virtual machine. For this reason it is recommended to use the fastest possible connectivity and, as with the Cluster Heartbeat Network, to use dedicated network equipment.

Virtual Machine Networks: These networks are used for virtual machine connectivity. Most of the time, virtual machines require multiple networks. This can be addressed by using several network cards dedicated to virtual machine workloads or by implementing VLAN tagging and VLAN isolation on a high-speed network. Most of the time, the host parent partition is not connected to these networks. This approach avoids using unnecessary TCP/IP addresses and reinforces the isolation of the parent partition from the hosted virtual machines.

The following steps need to be followed to create the LBFO team with the required virtual adapters on each Hyper-V host (a scripted sketch follows Table 6):

1. Create the switch independent, Hyper-V Port LBFO team called Team01 using the Windows Server 2012 NIC Teaming software.
2. Create the Hyper-V switch called vSwitch using Hyper-V Manager and do not allow the management OS to create an additional virtual adapter.
3. Create the virtual adapters for the management OS as illustrated in Figure 2 and assign the required VLANs.
4. Configure the network interfaces with the information described in Table 3.

A minimum network bandwidth will be assigned to each network interface and managed by the QoS Packet Scheduler in Windows. The traffic will be separated by VLANs, which allows for optimal usage of the available network connections. The following table provides the network configuration per host:
Network Interface   Name         Minimum Bandwidth   IP        VLAN
Management          Management   5                   Table 3   1
Cluster             Cluster      5                   Table 4   6
Live Migration      LM           20                  Table 4   7
Virtual Switch      vSwitch      1                   none      Native (1)

Table 6: Network Bandwidth Management
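The following sketch shows steps 1 to 4 and the bandwidth weights from Table 6 for OHOWVSHV01, assuming the four team members are named NIC1 to NIC4; the physical adapter names will differ per host.

# 1. Switch independent team using the Hyper-V Port distribution algorithm.
New-NetLbfoTeam -Name "Team01" -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# 2. Hyper-V switch on the team without an automatic management OS adapter.
New-VMSwitch -Name "vSwitch" -NetAdapterName "Team01" -AllowManagementOS $false -MinimumBandwidthMode Weight
Set-VMSwitch -Name "vSwitch" -DefaultFlowMinimumBandwidthWeight 1

# 3. Management OS virtual adapters with their VLANs and bandwidth weights (Table 6).
foreach ($nic in @(
        @{Name = "Management"; Vlan = 1; Weight = 5},
        @{Name = "Cluster";    Vlan = 6; Weight = 5},
        @{Name = "LM";         Vlan = 7; Weight = 20})) {
    Add-VMNetworkAdapter -ManagementOS -Name $nic.Name -SwitchName "vSwitch"
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName $nic.Name -Access -VlanId $nic.Vlan
    Set-VMNetworkAdapter -ManagementOS -Name $nic.Name -MinimumBandwidthWeight $nic.Weight
}

# 4. Management address for OHOWVSHV01 (Table 3).
New-NetIPAddress -InterfaceAlias "vEthernet (Management)" -IPAddress 10.131.133.41 -PrefixLength 24 -DefaultGateway 10.131.133.1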

The following IP addresses will be assigned to the iSCSI network on each host to allow for communication to the SANs.
Host Name    iSCSI NIC 1   iSCSI NIC 2   iSCSI NIC 3   iSCSI NIC 4   Subnet          VLAN
OHOWVSHV01   10.10.5.51    10.10.5.52    10.10.5.53    10.10.5.54    255.255.255.0   5
OHOWVSHV02   10.10.5.61    10.10.5.62    10.10.5.63    10.10.5.64    255.255.255.0   5
OHOWVSHV03   10.10.5.71    10.10.5.72    10.10.5.73    10.10.5.74    255.255.255.0   5
OHOWVSHV04   10.10.5.81    10.10.5.82    10.10.5.83    10.10.5.84    255.255.255.0   5
OHOWVSHV05   10.10.5.91    10.10.5.92    10.10.5.93    10.10.5.94    255.255.255.0   5
OHOWVSHV06   10.10.5.101   10.10.5.102   10.10.5.103   10.10.5.104   255.255.255.0   5

Table 7: Host iSCSI Configuration

Jumbo Frames will be enabled on the iSCSI network cards and SAN controllers to increase data performance through the network. The frame size will be set to 9014 bytes.
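Jumbo frames could be set from PowerShell as sketched below; the adapter names and the "Jumbo Packet" advanced property name depend on the NIC driver, so both are assumptions to verify per host (the Dell deployment guides remain authoritative).

# Set the jumbo frame size on the four iSCSI adapters of a host.
# The display name of the advanced property varies by driver.
Set-NetAdapterAdvancedProperty -Name "iSCSI 1","iSCSI 2","iSCSI 3","iSCSI 4" -DisplayName "Jumbo Packet" -DisplayValue "9014"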

2.2.2 Switch Configuration

The Dell vStart 200 has four (4) Dell PowerConnect 7048 switches which will be used for management and iSCSI traffic. Two (2) of the switches will be connected to each other and their ports configured as trunk ports with a native VLAN ID of 5, because these switches will be used as the storage network. The other two (2) switches also need to be connected to each other, but their ports need to be configured as trunk ports with 802.1Q (dot1q) encapsulation. The native VLAN needs to be set per port and a VLAN ID range needs to be tagged per port to allow for isolated communication between the required management OS interfaces and the production workloads. The following figure provides the detail of the switch and connection layout:
Figure 4: Network Switch Design — the two storage network switches (BG-iSCSI-01) carry the iSCSI target connections, the iSCSI connectors for all hosts and the iSCSI management ports, while the two production network switches (BG-LAN-01) carry the LBFO team members for all hosts, the DRAC connections and the uplink trunks; the remaining ports on each switch are free.

The connections from each source device are divided between the destination switches. This is why the LBFO team needs to be created as switch independent: the team cannot be created or managed on the switches.

The following table provides the rack mount switch information:


Type/Usage                          Connection Name   IP              Subnet          Gateway
PC7048 iSCSI Stack OoB Management   BG-ISCSI-01       10.131.133.48   255.255.255.0   10.131.133.1
PC7048 LAN Stack OoB Management     BG-LAN-01         10.131.133.49   255.255.255.0   10.131.133.1

Table 8: Switch Stack Configuration

The following network VLANs will be used in the Dell vStart 200 for isolating network traffic:
VLAN ID   Name
5         iSCSI
1         OoB Management
1         Management
6         Cluster
7         Live Migration
1         Virtual Machines

Table 9: Network VLANs

The following network parameters have been identified for the platform upgrade project:
Network Parameter   Primary        Secondary
DNS                 10.131.133.1   10.131.133.2
NTP                 10.131.133.1
SMTP                10.131.133.8

Table 10: Additional Network Parameters

2.3 Storage
The Dell vStart 200 ships with three (3) Dell EqualLogic PS6100 iSCSI SANs with 24 x 600GB spindles each. That provides a total raw capacity of 14.4TB per SAN and a total of 43.2TB of raw storage for the PPSA to use. The recommended RAID configuration for each SAN is a RAID 50 set across the array, which provides a balance between storage capacity and acceptable read/write speed. Each SAN array is connected with four (4) iSCSI connections to the storage network as shown in Figure 4. This allows four (4) iSCSI data paths from the SAN array to the Hyper-V hosts, which helps with connection redundancy and data performance because of the multiple iSCSI paths. Each of the Hyper-V hosts will be connected to the SAN arrays through four (4) network connections that are connected to the storage network as shown in Figure 4. Multipath I/O (MPIO) will be enabled to allow for redundancy and to increase performance with an active/active path across all four (4) iSCSI connections. The Dell Host Integration Tools (HIT) kit will be used to establish and manage MPIO on each host. The following diagram provides a logical view of how the storage will be configured:

Figure 5: Storage Configuration — the RAID sets on each Dell SAN array are carved into LUNs that are presented to the Hyper-V hosts, and the virtual machine VHD files reside on those LUNs.
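The Dell HIT kit normally establishes MPIO and the iSCSI sessions; the sketch below only shows the equivalent in-box Windows steps against the group address from Table 11 and should not be read as the Dell-recommended procedure.

# Claim iSCSI disks for MPIO (requires the Multipath I/O feature).
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Connect to the EqualLogic group address with multipath, persistent sessions.
New-IscsiTargetPortal -TargetPortalAddress "10.10.5.10"
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true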

After applying RAID 50 to the SAN arrays, the PPSA will have 9TB usable per array. Two (2) LUNs of 3TB each will be carved per SAN array. The six (6) LUNs will be presented to each of the six (6) Hyper-V hosts for storing the virtual machine data. The following table provides the configuration details for the three (3) EqualLogic SAN arrays:

EQL Storage Array Name         IP              Subnet          Gateway        VLAN
EQL Group Name: BG-EQL-GRP01
  Group Management             10.10.5.10      255.255.255.0                  5
EQL Array Name: BG-EQL-ARY01
  eth0                         10.10.5.11      255.255.255.0                  5
  eth1                         10.10.5.12      255.255.255.0                  5
  eth2                         10.10.5.13      255.255.255.0                  5
  eth3                         10.10.5.14      255.255.255.0                  5
  Management                   10.131.133.24   255.255.255.0   10.131.133.1   1
EQL Array Name: BG-EQL-ARY02
  eth0                         10.10.5.21      255.255.255.0                  5
  eth1                         10.10.5.22      255.255.255.0                  5
  eth2                         10.10.5.23      255.255.255.0                  5
  eth3                         10.10.5.24      255.255.255.0                  5
  Management                   10.131.133.25   255.255.255.0   10.131.133.1   1
EQL Array Name: BG-EQL-ARY03
  eth0                         10.10.5.31      255.255.255.0                  5
  eth1                         10.10.5.32      255.255.255.0                  5
  eth2                         10.10.5.33      255.255.255.0                  5
  eth3                         10.10.5.34      255.255.255.0                  5
  Management                   10.131.133.27   255.255.255.0   10.131.133.1   1

Table 11: SAN Array Configuration

The SAN access information is as follows:

EqualLogic Access Configuration
CHAP Username   HyperV
Password        Can be obtained from the Dell documentation.

Table 12: SAN Access Information

The storage will be carved and presented to all the hosts in the six (6) node cluster as discussed in Table 3. This will allow the storage to be assigned as Cluster Shared Volumes (CSV) where the virtual hard drives (VHD) and virtual machine profiles will reside. A cluster quorum disk will also be presented to allow the cluster configuration to be stored. The following table provides the storage configuration for the solution:
Disk Name       Name           Storage Array   Size   Raid Set   Preferred Owner
HyperV-Quorum   Witness Disk   BG-EQL-ARY01    1GB    Raid 50    OHOWVSHV06
HyperV-CSV-1    CSV01          BG-EQL-ARY01    3TB    Raid 50    OHOWVSHV01
HyperV-CSV-2    CSV02          BG-EQL-ARY01    3TB    Raid 50    OHOWVSHV02
HyperV-CSV-3    CSV03          BG-EQL-ARY02    3TB    Raid 50    OHOWVSHV03
HyperV-CSV-4    CSV04          BG-EQL-ARY02    3TB    Raid 50    OHOWVSHV04
HyperV-CSV-5    CSV05          BG-EQL-ARY03    3TB    Raid 50    OHOWVSHV05
HyperV-CSV-6    CSV06          BG-EQL-ARY03    3TB    Raid 50    OHOWVSHV06

Table 13: Storage Configuration
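Once the LUNs from Table 13 are visible to the cluster they could be added roughly as follows; the cluster disk names reported by Get-ClusterResource will differ, so the names below are assumptions.

# Add all available disks to the cluster.
Get-ClusterAvailableDisk | Add-ClusterDisk

# Promote the six 3TB data disks to Cluster Shared Volumes.
Add-ClusterSharedVolume -Name "Cluster Disk 2","Cluster Disk 3","Cluster Disk 4","Cluster Disk 5","Cluster Disk 6","Cluster Disk 7"

# Use the 1GB witness disk for the quorum of the six node cluster.
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"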

Combined Virtualization Overview


The following figure provides the overview of the final virtualization solution by combining the Scale-Unit design elements discussed in this document.

Figure 6: Final Solution Overview — the six cluster nodes share the storage holding the virtual hard disks (VHD); five nodes run active workloads while one node is kept passive to receive failed-over virtual machines.

This allows the six (6) nodes to be configured in a Hyper-V failover cluster that is connected to all the shared storage and networks configured above. The failover cluster will be configured with five (5) active nodes and one (1) passive/reserve node for failover of virtual machines and for patch management of the Hyper-V hosts.
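A minimal sketch of validating and creating the cluster with the names and address from Table 3:

$nodes = "OHOWVSHV01","OHOWVSHV02","OHOWVSHV03","OHOWVSHV04","OHOWVSHV05","OHOWVSHV06"

# Validate the configuration, then create the failover cluster.
Test-Cluster -Node $nodes
New-Cluster -Name "OHOWVSCV01" -Node $nodes -StaticAddress 10.131.133.40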

Management Layer Design


System Center 2012 SP1 will be the management infrastructure for the solution and includes the following products to deploy and operate the infrastructure in most cases:

System Center 2012 Virtual Machine Manager
System Center 2012 Operations Manager
The management infrastructure itself will be hosted on the scale units deployed for the solution and a highly available SQL server will be deployed for the System Center databases.

4.1 SQL Server Architecture for Management Layer


The Microsoft System Center 2012 products used in the management solution rely on Microsoft SQL Server databases. Consequently, it is necessary to define the Microsoft SQL Server architecture used by the management solution. The SQL Server databases for the management solution will be deployed on SQL Server 2008 R2 Enterprise Edition with SP1 and CU6. Enterprise Edition allows for the deployment of two (2) virtual machines that will be clustered to provide high availability and redundancy for the management solution databases. Each of the components of the management solution will use a dedicated SQL Server instance with disks presented through iSCSI to optimize performance. To establish the SQL Server cluster, two (2) virtual machines with the following specifications will be required:
Component             Specification
Server Role           SQL Server Node
Physical or Virtual   Virtual
Operating System      Windows Server 2008 R2 SP1 64-bit
Application           SQL Server 2008 R2 SP1 with CU6 Enterprise Edition
CPU Cores             8 Cores
Memory                16 GB RAM
Network               2 x Virtual NICs (Public and Cluster Networks)
Disk 1                80 GB Operating System
Disk 2                1GB Quorum Disk
Disk n                Disks presented to the SQL virtual machines as outlined in section 4.1.1.

Table 14: SQL Server VM Requirements

4.1.1 Management SQL Server Instances

Microsoft System Center 2012 components are database-driven applications. This makes a well-performing database platform critical to the overall management of the environment. The following instances will be required to support the solution:
Management Tool                              Instance Name   Primary Database          Authentication
System Center 2012 Virtual Machine Manager   VMM             VirtualMachineManagerDB   Windows
System Center 2012 Operations Manager        OM_OPS          OperationsManager         Windows
System Center 2012 Operations Manager        OM_DW           OperationsManagerDW       Windows

Table 15: SQL Instance Names

The following disk configuration will be required to support the Management solution:
SQL Instance   LUN     Purpose              Size     Raid Set
VMM            LUN 1   Database Files       50 GB    Raid 50
VMM            LUN 2   Database Log Files   25 GB    Raid 50
VMM            LUN 3   TempDB Files         25 GB    Raid 50
OM_OPS         LUN 4   Database Files       25 GB    Raid 50
OM_OPS         LUN 5   Database Log Files   15 GB    Raid 50
OM_OPS         LUN 6   TempDB Files         15 GB    Raid 50
OM_DW          LUN 7   Database Files       800 GB   Raid 50
OM_DW          LUN 8   Database Log Files   400 GB   Raid 50
OM_DW          LUN 9   TempDB Files         50 GB    Raid 50

Table 16: SQL Instance Disk Configuration

4.1.2 Management SQL Server Service Accounts

Microsoft SQL Server requires service accounts for starting the database and reporting services required by the management solution. The following service accounts will be required to successfully install SQL server:
Service Account Purpose   Username   Password
SQL Server                SQLsvc     The password will be given in a secure document.
Reporting Server          SQLRSsvc   The password will be given in a secure document.

Table 17: SQL Server Service Accounts

4.2 System Center 2012 Virtual Machine Manager


System Center 2012 Virtual Machine Manager (SCVMM) helps enable centralized management of physical and virtual IT infrastructure, increased server usage, and dynamic resource optimization across multiple virtualization platforms. It includes end-to-end capabilities such as planning, deploying, managing, and optimizing the virtual infrastructure.

4.2.1 Scope

System Center 2012 Virtual Machine Manager will be used to manage the Hyper-V hosts and guests in the datacenters. No virtualization infrastructure outside of the solution should be managed by this instance of System Center 2012 Virtual Machine Manager. The System Center 2012 Virtual Machine Manager configuration only considers the scope of this architecture and may therefore suffer performance and health issues if that scope is changed.

4.2.2 Servers

The SCVMM VM specifications are shown in the following table:


Servers   1 x Virtual Machine dedicated to running SCVMM

Specs     Windows Server 2012
          4 vCPU
          8 GB RAM
          1 x vNIC
          Storage: one 80GB operating system VHD
          Additional Storage: 120GB SCSI VHD storage for the Library

Table 18: SCVMM Specification

4.2.3 Roles Required for SCVMM

The following roles are required by SCVMM:

SCVMM Management Server
SCVMM Administrator Console
Command Shell
SCVMM Library
SQL Server 2008 R2 Client Tools

4.2.4 SCVMM Management Server Software Requirements

The following software must be installed prior to installing the SCVMM management server.

Software Requirement                                         Notes
Operating System                                             Windows Server 2012
.NET Framework 4.0                                           Included in Windows Server 2012.
Windows Assessment and Deployment Kit (ADK) for Windows 8    Windows ADK is available at the Microsoft Download Center.

Table 19: SCVMM Management Server Software Requirements

4.2.5 SCVMM Administration Console Software Requirements

The following software must be installed prior to installing the SCVMM console.

Software Requirement                                Notes
A supported operating system for the VMM console   Windows Server 2012 and/or Windows 8
Windows PowerShell 3.0                              Windows PowerShell 3.0 is included in Windows Server 2012.
Microsoft .NET Framework 4                          .NET Framework 4 is included in Windows 8 and Windows Server 2012.

Table 20: SCVMM Administration Console Software Requirements

4.2.6 Virtual Machine Hosts Management

SCVMM supports the following as virtual machine hosts:

Microsoft Hyper-V
VMware ESX
Citrix XenServer


Only the six (6) node Hyper-V Cluster will be managed by the SCVMM solution as described in Figure 6: Final Solution Overview.
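Bringing the existing cluster under SCVMM management could look like the sketch below; the host group and Run As account names are assumptions.

# Hypothetical Run As account with administrative rights on the Hyper-V hosts.
$hostAccount = Get-SCRunAsAccount -Name "HyperV-HostAdmin"
$hostGroup = Get-SCVMHostGroup -Name "All Hosts"

# Add the existing six node Hyper-V cluster to SCVMM.
Add-SCVMHostCluster -Name "OHOWVSCV01" -VMHostGroup $hostGroup -Credential $hostAccount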

Hyper-V Hosts System Requirements

SCVMM supports the following versions of Hyper-V for managing hosts.

Operating System         Edition                                                                     Service Pack                System Architecture
Windows Server 2008 R2   Enterprise and Datacenter (full installation or Server Core installation)   Service Pack 1 or earlier   x64
Hyper-V Server 2008 R2   Not applicable                                                              Not applicable              x64
Windows Server 2012      Not applicable                                                              N/A                         x64

Table 21: Hyper-V Hosts System Requirements

4.2.7 SCVMM Library Placement

Libraries are the repository for VM templates and therefore serve a very important role. The Library share itself will reside on the SCVMM server in the default architecture; however, it should have its own logical partition and corresponding VHD whose underlying disk subsystem is able to deliver the required level of performance to service the provisioning demands. This level of performance depends on:

Number of tenants
Total number of templates and VHDs
Size of VHDs
How many VMs may be provisioned simultaneously
What the SLA is on VM provisioning
Network constraints

4.2.8 Operations Manager Integration

In addition to the built-in roles, SCVMM will be integrated with System Center 2012 Operations Manager. The integration will enable Dynamic Optimization and Power Optimization in SCVMM. SCVMM can perform load balancing within host clusters that support live migration; Dynamic Optimization migrates virtual machines within a cluster according to the configured settings. SCVMM can also help to save power in a virtualized environment by turning off hosts when they are not needed and turning them back on when they are needed. SCVMM supports Dynamic Optimization and Power Optimization on Hyper-V host clusters and on host clusters that support live migration in managed VMware ESX and Citrix XenServer environments. For Power Optimization, the computers must have a baseboard management controller (BMC) that enables out-of-band management. The integration into Operations Manager will be configured with the default thresholds and Dynamic Optimization will be enabled. The Power Optimization schedule will be enabled from 8PM to 5AM.
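The connection could be scripted roughly as follows; the Operations Manager server name is a placeholder and the parameters should be verified against the SCVMM 2012 SP1 cmdlet reference before use.

# Connect SCVMM to the Operations Manager management server (name is hypothetical).
New-SCOpsMgrConnection -OpsMgrServer "OHOWSCOM01" -EnablePRO $true -EnableMaintenanceModeIntegration $true -UseVMMServerServiceAccount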

4.2.9 SCVMM Service Accounts

The following service account will be required for SCVMM, to integrate into Operations Manager and to manage the Hyper-V hosts:

Purpose                 Username   Password
SCVMM Service Account   SCVMMsvc   The password will be given in a secure document.

Table 22: SCVMM Service Account

The service account will also be made a local administrator on each Hyper-V and SCVMM machine to allow for effective management.

4.2.10 Update Management


SCVMM provides the capability to use a Windows Server Update Services (WSUS) server to manage updates for the following computers in the SCVMM environment:

Virtual machine hosts
Library servers
VMM management server
PXE servers
The WSUS server

PPSA can configure update baselines, scan computers for compliance, and perform update remediation. SCVMM will use the WSUS instance that will be deployed with System Center 2012 Configuration Manager. Additional configuration will be required and is discussed in the deployment and configuration guides.
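Registering the WSUS server with SCVMM could look like the following sketch; the WSUS server name, port and Run As account are assumptions.

# Hypothetical Run As account with rights on the WSUS server.
$wsusAccount = Get-SCRunAsAccount -Name "WSUS-Admin"

# Register the WSUS server deployed with Configuration Manager.
Add-SCUpdateServer -ComputerName "OHOWWSUS01" -TCPPort 8530 -Credential $wsusAccount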

4.2.11 Virtual Machine Templates Design Decision


The following base templates will be created for deployment in the PPSA. Each template will have its own assigned hardware and guest operating system profile.
Template            Hardware Profile               Network   Operating System
Template 1 Small    1 vCPU, 2 GB RAM, 80 GB HDD    VLAN 1    Windows Server 2008 R2 SP1 64-bit
Template 2 Medium   2 vCPU, 4 GB RAM, 80 GB HDD    VLAN 1    Windows Server 2008 R2 SP1 64-bit
Template 3 Large    4 vCPU, 8 GB RAM, 80 GB HDD    VLAN 1    Windows Server 2008 R2 SP1 64-bit
Template 4 Small    1 vCPU, 2 GB RAM, 80 GB HDD    VLAN 1    Windows Server 2012
Template 5 Medium   2 vCPU, 4 GB RAM, 80 GB HDD    VLAN 1    Windows Server 2012
Template 6 Large    4 vCPU, 8 GB RAM, 80 GB HDD    VLAN 1    Windows Server 2012

Table 23: Virtual Machine Templates
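One of the templates could be built roughly as sketched below; the library VHD name and profile name are assumptions, a sysprepped base VHD must already exist in the SCVMM library, and a guest OS profile would normally be added for customization.

# Hardware profile for the "Small" templates (1 vCPU, 2 GB RAM).
$hwProfile = New-SCHardwareProfile -Name "HW-Small" -CPUCount 1 -MemoryMB 2048

# Sysprepped base VHD already stored in the SCVMM library (name is hypothetical).
$baseVhd = Get-SCVirtualHardDisk -Name "WS2008R2SP1-Base.vhd"

# Template 1 Small, without guest OS customization in this sketch.
New-SCVMTemplate -Name "Template 1 Small" -HardwareProfile $hwProfile -VirtualHardDisk $baseVhd -NoCustomization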

Appendix A New Host Design to support Guest Clustering


The requirement to provide guest clustering services on the six (6) node Hyper-V cluster built on the vStart 200 will require an additional two (2) network cards per host. These network cards will be presented as dedicated virtual iSCSI networks in the virtual environment. This changes the host design as follows:

Figure 7: Scale Unit Network Extension — the host design from Figure 1 extended with two additional 1GbE network cards that are presented to the virtual environment as dedicated virtual iSCSI networks for guest clustering, alongside the existing LBFO team, iSCSI and iLO/DRAC connections.

If additional network cards cannot be acquired by the PPSA, then the current host design stays valid and all four (4) of the iSCSI network adapters need to be shared with the virtual environment. The jumbo frame size must also be set to 9014 bytes on each of the virtual iSCSI interfaces inside the guest cluster virtual machines to take advantage of the performance benefits. Virtual iSCSI target providers will not be implemented in the solution because of the performance impact on the other guest machines.
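If the extra cards are installed, the dedicated virtual iSCSI networks could be exposed roughly as follows; the physical adapter names and the guest VM name are assumptions.

# External switches bound to the two additional iSCSI network cards.
New-VMSwitch -Name "iSCSI-A" -NetAdapterName "iSCSI Guest 1" -AllowManagementOS $false
New-VMSwitch -Name "iSCSI-B" -NetAdapterName "iSCSI Guest 2" -AllowManagementOS $false

# Give a guest cluster virtual machine one adapter on each iSCSI switch.
Add-VMNetworkAdapter -VMName "GuestClusterVM01" -SwitchName "iSCSI-A"
Add-VMNetworkAdapter -VMName "GuestClusterVM01" -SwitchName "iSCSI-B"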

Appendix B Updated SQL Server Design


The current SQL Server design has a requirement for shared storage to be able to form a SQL cluster, but this cannot be fulfilled by the current environment because of the network adapter constraint described in Appendix A. When implementing a guest cluster in a virtual environment, there must be enough network capacity/adapters to allow the virtual machines to connect directly to the SAN, and/or fibre channel cards to provide virtual fibre channel adapters to the virtual machines for SAN connectivity. This provides the capability to assign shared storage to the virtual machines to build guest clusters. Because of the constraint mentioned above, it is recommended to consider using a SQL Server 2012 AlwaysOn availability group to achieve resiliency if SQL Server cannot be implemented on physical hardware. Deploying SQL Server on the virtual environment already makes the SQL Server hosts redundant, but not the application databases hosted on SQL Server. SQL Server 2012 AlwaysOn Availability Groups is an integrated feature that can provide data redundancy for databases and improve application failover time to increase the availability of mission-critical applications. It also helps to ensure the availability of application databases, enabling zero data loss through log-based data movement for data protection without shared disks. The following illustration shows an availability group that contains the maximum number of availability replicas: one primary replica and four secondary replicas.

Figure 8: SQL Server 2012 AlwaysOn Availability Group Maximum Configuration

The remainder of this section will provide the detail design for the PPSA SQL Server 2012 AlwaysOn Availability group.

6.1 Virtual Machine Configuration


To allow the SQL Server AlwaysOn availability group to be established, PPSA must deploy two (2) virtual machines with the following specification and configuration:

Component             Specification
Server Role           SQL Server Node
Physical or Virtual   Virtual
Operating System      Windows Server 2012 Enterprise or higher
Features              .NET Framework 3.5, Failover Clustering
Application           SQL Server 2012 SP1
CPU Cores             8 Cores
Memory                16 GB RAM
Network               2 x Virtual NICs (Public and Cluster Networks)
Disk 1                80 GB Operating System
Disk n                Disks presented to the SQL virtual machines as outlined in section 6.2.2.

Table 24: SQL Server 2012 VM Requirements

6.1.1 Virtual Machine Clustering

The newly created virtual machines must be clustered to allow SQL Server 2012 to create an AlwaysOn availability group. The following table provides the management and cluster network details.
Name         Type              Management Network                                                 Cluster Network
OHOWSQLS01   Virtual Machine   IP: 10.131.133.xx  Subnet: 255.255.255.0  Gateway: 10.131.133.1    IP: 192.168.0.1  Subnet: 255.255.255.252
OHOWSQLS02   Virtual Machine   IP: 10.131.133.xx  Subnet: 255.255.255.0  Gateway: 10.131.133.1    IP: 192.168.0.2  Subnet: 255.255.255.252
OHOWSQLC01   Cluster Name      IP: 10.131.133.xx  Subnet: 255.255.255.0                           None

Table 25: SQL Server 2012 Virtual Machine Network Configuration

The quorum will be configured as node majority after establishing the Windows Server 2012 cluster, because shared storage isn't available. This is, however, not optimal, and the quorum must be reconfigured to node and file share majority using a file share witness. This allows the Windows cluster to save the cluster configuration and to vote on the cluster health. The file share requires only 1024MB of storage and can be located on the new file services. The following file share can be created: \\fileserver\OHOWSQLC01\Witness Disk. Both the SQL Server virtual machine computer accounts and the Windows cluster name account must have full read/write access to the share.
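Once the share exists, the quorum could be switched to node and file share majority as sketched below.

# Point the guest cluster quorum at the file share witness.
Set-ClusterQuorum -Cluster "OHOWSQLC01" -NodeAndFileShareMajority "\\fileserver\OHOWSQLC01\Witness Disk"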

6.2 SQL Server 2012 Configuration


The following section provides the detailed design and configuration information for the SQL Server 2012 SP1 implementation.

6.2.1 Feature Selection

The following features will be required when installing SQL Server 2012 SP1 to allow for AlwaysOn Availability Groups:

Instance Features:
Database Engine Services
SQL Server Replication
Full-Text and Semantic Extraction for Search

Shared Features:
Client Tools Connectivity
Client Tools Backwards Compatibility
Management Tools Complete

The shared features will be installed on C:\Program Files\Microsoft SQL Server\.

6.2.2 Management SQL Server 2012 Instances

The following instances will be required to support the solution:


Management Tool                              Instance Name   Primary Database          Authentication   Memory Allocation   Primary SQL Host
System Center 2012 Virtual Machine Manager   SCVMM           VirtualMachineManagerDB   Windows          4GB                 SQL01
System Center 2012 Operations Manager        SCOM_OPS        OperationsManager         Windows          4GB                 SQL01
System Center 2012 Operations Manager        SCOM_DW         OperationsManagerDW       Windows          7GB                 SQL01
System Center 2012 Configuration Manager     SCCM            ConfigurationsManagerDB   Windows          4GB                 SQL02
SharePoint 2010 SP1                          SharePoint      SharePointDB              Windows          8GB                 SQL02

Table 26: SQL Server 2012 Instance Names

The instance root directory for all instances will be C:\Program Files\Microsoft SQL Server\.

The following disk configuration will be required and must be presented to both the virtual machines as fixed disks.
SQL Instance   LUN     Purpose                       Size     Raid Set   Drive Letter
SCVMM          LUN 1   Database and Temp Files       50 GB    Raid 50    E
SCVMM          LUN 2   Database and Temp Log Files   25 GB    Raid 50    F
SCOM_OPS       LUN 3   Database and Temp Files       25 GB    Raid 50    G
SCOM_OPS       LUN 4   Database and Temp Log Files   15 GB    Raid 50    H
SCOM_DW        LUN 5   Database and Temp Files       800 GB   Raid 50    I
SCOM_DW        LUN 6   Database and Temp Log Files   400 GB   Raid 50    J
SCCM           LUN 7   Database and Temp Files       700 GB   Raid 50    K
SCCM           LUN 8   Database and Temp Log Files   350 GB   Raid 50    L

Table 27: SQL Server 2012 Instance Disk Configuration

When installing the individual instances, the data root directory will be C:\Program Files\Microsoft SQL Server\ and the individual database, TempDB, database log and TempDB log directories will be installed to the drive letters outlined in Table 27.

6.2.3 SQL Server 2012 Service Accounts and Groups

Microsoft SQL Server requires service accounts for starting the database and reporting services, and a SQL administrators group for allowing management of SQL Server. The following service accounts and group will be required to successfully install SQL Server:
Service Account / Group Purpose   Name         Password
SQL Server                        SQLsvc       The password will be given in a secure document.
Reporting Server                  SQLRSsvc     The password will be given in a secure document.
SQL Admin Group                   SQL Admins   None

Table 28: SQL Server 2012 Server Service Accounts

The SQL Admins group must contain all the required SQL administrators to allow them to manage SQL Server 2012 SP1.

6.2.4 Availability Group Design

The following section provides the design detail for the SQL Server 2012 AlwaysOn availability groups. Before creating the availability groups, PPSA must back up all the existing databases, and all the databases must be set to the full recovery model. The individual SQL Server instances must also be enabled for AlwaysOn Availability Groups in SQL Server Configuration Manager.

The following table provides the availability group configuration. This configuration needs to be done by connecting to the individual SQL instances.
Availability Group Name   Databases in Group        Primary Server   Replica Server   Replica Configuration   Listener
System Center 2012        VirtualMachineManagerDB   SQL01            SQL02            Automatic Failover      Name: VMM_Listener  Port: 1433  IP: 10.131.133.xx
System Center 2012        OperationsManagerDB       SQL01            SQL02            Automatic Failover      Name: SCOMDB_Listener  Port: 1433  IP: 10.131.133.xx
System Center 2012        OperationsManagerDW       SQL01            SQL02            Automatic Failover      Name: SCOMDW_Listener  Port: 1433  IP: 10.131.133.xx
System Center 2012        ConfigurationsManagerDB   SQL02            SQL01            Automatic Failover      Name: SCCM_Listener  Port: 1433  IP: 10.131.133.xx
SharePoint                SharePointDB              SQL02            SQL01            Automatic Failover      Name: SP_Listener  Port: 1433  IP: 10.131.133.xx

Table 29: Availability Group Design

When creating the availability groups there will be a requirement for a file share to perform the initial synchronization of the databases. A temporary share can be established on the file server, for example \\fileshare\SQLSync.
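As a heavily hedged sketch of the VMM availability group from Table 29 (instance host names follow Tables 26 and 29; the endpoint URLs, domain suffix and listener address are assumptions and must be replaced with the assigned values), the SQLPS module could be used along these lines:

Import-Module SQLPS -DisableNameChecking

# Enable AlwaysOn on both instances (this restarts the SQL Server service).
Enable-SqlAlwaysOn -ServerInstance "SQL01\SCVMM" -Force
Enable-SqlAlwaysOn -ServerInstance "SQL02\SCVMM" -Force

# Replica definitions; the endpoint URLs are assumptions.
$primary = New-SqlAvailabilityReplica -Name "SQL01\SCVMM" -EndpointUrl "TCP://SQL01.ppsa.local:5022" -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 11
$secondary = New-SqlAvailabilityReplica -Name "SQL02\SCVMM" -EndpointUrl "TCP://SQL02.ppsa.local:5022" -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 11

# Create the group on the primary (the database must already have a full backup),
# join the secondary and add the listener from Table 29.
New-SqlAvailabilityGroup -Name "VMM-AG" -Path "SQLSERVER:\SQL\SQL01\SCVMM" -AvailabilityReplica @($primary, $secondary) -Database "VirtualMachineManagerDB"
Join-SqlAvailabilityGroup -Path "SQLSERVER:\SQL\SQL02\SCVMM" -Name "VMM-AG"

$listenerIp = "10.131.133.xx/255.255.255.0"   # replace xx with the address assigned in Table 29
New-SqlAvailabilityGroupListener -Name "VMM_Listener" -Port 1433 -StaticIp $listenerIp -Path "SQLSERVER:\SQL\SQL01\SCVMM\AvailabilityGroups\VMM-AG"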
