
Building a Virtual Desktop Infrastructure

A recipe utilizing the Intel Modular Server and VMware View

December 4, 2009

Prepared by: David L. Endicott NeoTech Solutions, Inc. 2816 South Main St. Joplin, MO 64804


Table of Contents
Chapter 1: Introduction .................................................. 1
    The Purpose of This Document ......................................... 1
    Assumptions .......................................................... 1
    The Case for Virtual Desktops ........................................ 1
        Why Consider VDI? ................................................ 2
        Industry Trends .................................................. 3
Chapter 2: System Design and Requirements ................................ 4
    Logical Design ....................................................... 4
        System Operation ................................................. 5
        Hardware and Software Requirements ............................... 6
        Network Considerations ........................................... 7
        Storage Considerations ........................................... 8
    System Overview and Diagram .......................................... 9
        Reference Environment ............................................ 9
        Suggested Production Environment ................................. 11
Chapter 3: Reference Environment Installation and Configuration .......... 14
    Before Beginning the Installation .................................... 14
    Management Module IP Configuration ................................... 14
    Network Configuration ................................................ 15
    Storage Configuration ................................................ 18
        Install the Shared LUN Option .................................... 18
        Enable HDD Write Back Cache ...................................... 18
        Configuration of the Storage Pool ................................ 19
        Configuration of Virtual Drives .................................. 20
    ESX Server Installation .............................................. 21
    ESX Server Configuration ............................................. 28
    Creation of Microsoft Support Servers ................................ 32
    Installation of vCenter Server ....................................... 33

    Configuration of the ESX Cluster ..................................... 36
Chapter 4: View Manager .................................................. 41
    Installation of the View Manager Server .............................. 41
    Configuration of the View Manager Server ............................. 41
    Meet the Workstation Pool Pre-Requisites ............................. 43
    Configure the Workstation Pool ....................................... 45
    Utilizing Active Directory ........................................... 50
Chapter 5: Testing the System ............................................ 52
    Install and Configure the Workstation Client ......................... 52
    Test the Connection to the Virtual Workstation ....................... 52
    Verify the View Portal Is Up and Running ............................. 53
Chapter 6: Troubleshooting ............................................... 55
    Virtual Workstations Not Completing the Provisioning/Customizing Sequence ... 55
    Unable to Connect to Virtual Workstations ............................ 56
    Group Policy Not Applying as Expected ................................ 57
Chapter 7: For More Information .......................................... 58
    About the Author ..................................................... 58
    Acknowledgements ..................................................... 58
    References ........................................................... 59
Appendix A ............................................................... 60
    Reference Environment Worksheet ...................................... 60


Chapter 1: Introduction
The Purpose of This Document
This document was designed to meet the following goals:

- Give the reader a basic understanding of a virtual desktop infrastructure
- Discuss the benefits of virtual desktops
- Document the design and installation of a virtual desktop environment utilizing VMware View and the Intel Modular Server

Assumptions
The installation and configuration of a complete virtual desktop infrastructure encompasses many different technologies and products from many vendors. A step-by-step installation and configuration of all of these technologies is beyond the scope of this document. This document will cover the highlights of the process, especially where the configuration is customized for the Intel Modular Server, and will refer the reader to product documentation where appropriate. In this document, the acronym VDI stands for Virtual Desktop Infrastructure in general and does not refer to any particular product.

It is assumed that the reader has a good working knowledge of the following products and concepts:

- Microsoft server systems and Active Directory - VMware View depends on Microsoft servers to support the View system and on Active Directory for authentication services. A functional Active Directory system is required for completion of the reference environment, and several Microsoft servers will be needed for various components of the system.
- VMware VI3 - The entire VDI system is built upon this framework from VMware. You will need a good understanding of its operation.
- Basic network infrastructure terms and concepts - Basic network administration and configuration tasks are required for the setup of the reference environment.
- Microsoft desktop operating systems - Windows XP Pro is used in the reference environment. You will need to know how to set up an XP Pro workstation and configure it for the network.

The Case for Virtual Desktops


For many years, the standard device for the corporate desktop has been the traditional PC. However, managing and maintaining a large base of networked PCs is a costly venture. Maintaining even a well-designed and highly organized PC environment can put a heavy strain on information technology budgets and resources. Any technology that can actually deliver on the promise of reducing these costs should be investigated.

Chapter 1 : Introduction

Page |1

Why Consider VDI?

Desktop virtualization promises to greatly reduce the time and cost involved in maintaining the desktop environment. Rather than a system comprised of individual desktop PCs, a virtual desktop environment provides all of the features of a fully functional desktop computer to the user on a variety of end-user devices. The actual desktop operating system runs on a virtual computer configured on server hardware. Unlike a traditional thin-client environment, users do not share an operating system. Each user has a separate operating system instance running in its own virtual hardware space. This significantly reduces the likelihood that an error in one user's environment will negatively affect another user. In addition, there are many other benefits to using a virtual desktop environment, including:

Scalability - The system detailed in this document is based on VMware VI3 and runs on the Intel Modular Server. This provides true data-center-class performance and scalability. The virtual desktop computers run on modules in the server chassis on the latest Intel processors, which provide a level of performance far beyond that of desktop hardware. By leveraging the features of VI3, along with the built-in storage system of the modular server, adding capacity to the system is as simple as adding modules and configuring them into the VI3 cluster. In our experience, by configuring the modular server with dual Xeon modules with 24 GB of RAM, excellent performance can be delivered to the end user with 30-50 virtual workstations per module. This means that a single modular server could scale to support up to 300 desktops per chassis.

Flexibility - The ability to change the system on the fly is one of the greatest strengths of VDI. Users can be added and removed easily. Applications can be added to and removed from single users or large groups of users quickly. Software upgrades and basic support can be provided without visiting user desktops. Access to virtual desktops can be configured to utilize a wide variety of devices, including thin-client devices and legacy PCs, which reduces the cost of migration.

Adaptability - The needs of users across an enterprise vary widely. For example, the needs of a call center worker are significantly different from the needs of an accountant. With VDI it is easy to roll out customized environments for groups of users with similar needs. These environments can be quickly modified as needs change, all without visiting the user's desk and regardless of the user's access device.

Fault Tolerance - VDI makes use of the fault tolerance features of both the Intel Modular Server and VMware VI3 to make the desktop environment highly fault tolerant. On the hardware side, the Intel Modular Server utilizes redundant power supplies, network switches, and RAID array technology. On the software side, VI3 utilizes High Availability clustering to monitor virtual desktop machines; in the event a module should fail, it will restart them on a different module.
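The per-chassis scaling claim above follows from simple arithmetic. A quick sketch (the six-module chassis count is an assumption about the modular server form factor, not stated in the text; the per-module range comes from the paragraph above):

```python
# Rough per-chassis capacity estimate from the per-module figures above.
MODULES_PER_CHASSIS = 6      # assumption: fully populated chassis
VMS_PER_MODULE_LOW = 30      # lower end of the range quoted in the text
VMS_PER_MODULE_HIGH = 50     # upper end of the range quoted in the text

low_capacity = MODULES_PER_CHASSIS * VMS_PER_MODULE_LOW
high_capacity = MODULES_PER_CHASSIS * VMS_PER_MODULE_HIGH
print(f"Estimated chassis capacity: {low_capacity}-{high_capacity} virtual desktops")
# → Estimated chassis capacity: 180-300 virtual desktops
```

The upper bound matches the 300-desktops-per-chassis figure quoted above; real capacity depends heavily on workload and should be validated with monitoring.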


Accessibility - Users need to access data and applications from the workplace, home, and while on the road. Since the virtual machine is running in the data center, and can be accessed from nearly any device, the end user's location becomes unimportant. Through the use of the integrated web portal, the virtual machine can be accessed via almost any internet-connected device.

Industry Trends

Server virtualization has become a standard in many companies today. In fact, Forrester Research reported that in a poll of 2,686 technology decision makers in the United States and Europe, the majority of both large enterprises and SMBs were already using server virtualization [1]. As IT departments become comfortable with the use of virtualization technology for servers, it makes sense to implement those same advantages for the desktop. In fact, of those same decision makers polled, 70 percent of large enterprises and 74 percent of SMBs hope to use virtual desktop technologies to reduce the cost of maintaining desktop systems [2]. This research shows that VDI will truly be the desktop of choice for enterprises and SMBs alike. The benefits and potential cost savings speak for themselves.

[1] Virtualization Review, "Forrester: Businesses Adopting Virtualization" by Herb Torrens, 3/3/2009. http://virtualizationreview.com/articles/2009/03/06/forrester--businesses-adopting-virtualization.aspx
[2] Ibid.


Chapter 2: System Design and Requirements


Logical Design
For any VDI project, the design must take into account the desired outcome as well as projected growth during the lifetime of the system. These can vary greatly from project to project and environment to environment; however, there are certain requirements which must be met regardless of the final system design. Figure 2-1 provides a logical view of the various components involved in a virtual desktop system:

[Figure 2-1: Logical diagram of the virtual desktop system, showing the SSL connection, authentication, provisioning, and management relationships among the components]

Figure 2-1

Chapter 2 : System Design and Requirements Page | 4

As you can see in the diagram, there are 6 major components to the VMware VDI system:

1.) The ESX cluster - This is a load-balanced and highly available cluster of servers that actually houses and processes the virtual desktop machines that will be used by users. For the reference environment, this will be the Intel Modular Server.
2.) vCenter server - This is the primary management system for ESX.
3.) View Manager server - This is the server that handles all of the automation for VDI. It is also the primary management interface for the VDI system. View Manager handles brokering connections from the clients to the appropriate desktops and manages the provisioning of new desktops.
4.) Active Directory - The View architecture depends on Active Directory for user authentication and resource entitlement.
5.) View security server - This server acts as a proxy between users outside the firewall and resources inside. This system provides a highly secure method of allowing external access to production virtual desktops.
6.) Services - This comprises the bulk of other services available on the network, including but not limited to email, file storage, printing, etc.

System Operation

When a user signs onto the system utilizing the View client, the system authenticates the user via Active Directory and verifies what desktop resources the user is entitled to use. Those resources are then presented to the user in a menu so that the user can connect to them. Virtual desktops are provisioned on demand based on rules set up during system configuration. User connections are created to the correct resources based on the configuration of the system. Once those virtual workstations are provisioned on the domain, they are subject to all of the configurations in Active Directory, group policy, etc., just like any other workstation.

When a user disconnects or logs off of a virtual workstation, the system will release that virtual desktop for use by another user, destroy the virtual workstation, or permanently assign the virtual workstation to that user and leave it in an available state. All of these options are fully configurable by the administrator. There are many options for configuration of the system, and exploring them all is beyond the scope of this document. However, a few key points to consider are listed below:

1.) How will your users utilize the system? If your users are very standard and all use the exact same workstation configuration (for example, a call center), you may want a new workstation provisioned for each user each day. If the users need customizable machines, you may want to assign each user a workstation when they first log in and allow them to keep the workstation as their own from that point on.
2.) Do you have categories of users? - You can configure multiple pools of virtual desktops and configure them to behave in different ways. For example, you may want a new desktop every day for your call center workers, but an assigned desktop for office workers.
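The per-pool behaviors described above can be thought of as a small policy table. The sketch below is purely illustrative: the pool names, policy keys, and values are hypothetical and do not reflect View Manager's actual configuration schema.

```python
# Hypothetical per-pool desktop policies; names and values are
# illustrative only, not View Manager's real setting names.
POOL_POLICIES = {
    "call_center": {"assignment": "floating", "on_logoff": "delete"},
    "office": {"assignment": "persistent", "on_logoff": "keep"},
}

def desktop_after_logoff(pool_name: str) -> str:
    """Return what happens to a virtual desktop when its user logs off."""
    return POOL_POLICIES[pool_name]["on_logoff"]

print(desktop_after_logoff("call_center"))  # → delete
print(desktop_after_logoff("office"))       # → keep
```

This mirrors the example in point 2: a call-center pool gets a fresh desktop every day, while an office pool keeps each user's assigned desktop.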


3.) How will applications get installed on the desktops? The virtual desktops are created by deploying a new VM from a template that is created and maintained by the administrator. Applications can simply be installed on the template before the desktops are provisioned. In addition, nearly any application publishing mechanism available for physical machines will also work with virtual desktops. For example, application publishing via Active Directory and group policy works normally in a VDI environment. If you purchase VMware View Premier, you also get licenses for VMware ThinApp, which is VMware's application virtualization software. For more information, please see: http://www.VMware.com/files/pdf/thinapp_datasheet.pdf

Hardware and Software Requirements

There are 3 components required for a fully functional VMware virtual desktop infrastructure:

1.) Virtual Machine Hosts (ESX 3.5 - VMware VI3) - These are the servers that actually host the virtual machines used in the system. The compute modules in the Intel Modular Server will run ESX 3.5 to fulfill this function.
2.) Virtual Machine System Management (vCenter Server - VI3) - vCenter is the management system utilized to manage the ESX hosts and the virtual machines. It is an integral part of the VI3 infrastructure and is a requirement for many VMware functions. vCenter runs on a Windows platform and can be installed on either a physical or virtual machine.
3.) VDI Manager Component (VMware View) - VMware View is the virtual desktop management system that controls access to virtual desktops and automates virtual desktop provisioning. VMware View runs on a Windows platform and can be installed on either a physical or virtual machine.

For the hardware and software requirements for these components, please reference the following VMware documentation:

VMware Infrastructure 3 documentation: http://www.VMware.com/support/pubs/vi_pages/vi_pubs_35.html
VMware View documentation: http://www.VMware.com/support/pubs/View_pubs.html

Both vCenter and VMware View can be installed on either physical or virtual machines. The best path to take for your environment depends on many factors and is beyond the scope of this document. However, the following facts can be deciding factors:

1.) Depending on how your View software licenses were purchased, you may not be in license compliance if you run server and workstation workloads on the same ESX host server. There is, however, an exclusion in the license agreement that allows connection brokers, vCenter server, and performance monitoring tools to be run as server

workloads on ESX hosts running VDI licenses. For more information, please see the VMware View (formerly VDI) Pricing and Support FAQ at: http://www.VMware.com/files/pdf/View_pricing_support_faq.pdf

2.) Certain limitations in management processes are introduced if the vCenter server is a VM hosted on an ESX host that is managed by that same vCenter server. See the following documentation for more information: http://www.VMware.com/vmtn/resources/798

3.) vCenter utilizes a database to store all the VMware configuration information. This database can be SQL Express installed directly on the vCenter server, or it can be an existing SQL Server or Oracle server elsewhere on the network. For small environments of up to 5 hosts and 5 virtual machines, the SQL Express database is adequate. For larger production environments, SQL Server is recommended. Please see the VI3 Quick Start Guide at http://www.VMware.com/pdf/vi3_35/esx_3/r35/vi3_35_25_quickstart.pdf for more information.

For the purposes of this document, our test environment will utilize virtual machines for both the vCenter server and the VMware View server. In addition, the vCenter database will be configured to use SQL Express installed on the vCenter server VM.

Network Considerations

The network configuration can be a bottleneck in a VDI implementation. Since all the virtual machines on a VM host share the available network resources, it is possible for busy workstation VMs to consume all available bandwidth. To reduce this possibility, at least 2 gigabit Ethernet connections should be provisioned for each VM host. In the Intel Modular Server, connectivity to the host modules is provided by the switch module installed in the chassis. Each module can be configured with either 2 or 4 internal gigabit Ethernet connections to the switch module. The switch module can then be uplinked to your existing network infrastructure.

To provide the best possible fault tolerance and performance, a link aggregation group of at least 2 gigabit connections should be configured for this uplink. This requires that the switch connected to the chassis support link aggregation. For more information concerning the Link Aggregation Control Protocol (LACP), please see the following documentation: http://en.wikipedia.org/wiki/Link_Aggregation_Control_Protocol

In our experience, 2 connections per module and a 4-connection LACP uplink is adequate to support at least 100 office-worker-type workstations. However, this can vary greatly depending on how the workstations are utilized and what applications are installed on the VMs. Ultimately, monitoring and adjustment of the configuration may be necessary once the system goes into production. For the purposes of this document, the reference environment will utilize two 1 Gb connections per module and a 2 Gb LACP uplink to the network.
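As a sanity check on the sizing guidance above, a 4-connection gigabit LACP uplink shared by 100 workstations works out to a comfortable average bandwidth per user. A quick sketch of the arithmetic:

```python
# Average uplink bandwidth per workstation for the 4 x 1 Gb LACP
# uplink / 100 office workstations case described in the text.
uplink_links = 4
link_speed_mbps = 1000  # gigabit Ethernet
workstations = 100

total_mbps = uplink_links * link_speed_mbps
per_workstation_mbps = total_mbps / workstations
print(f"{per_workstation_mbps:.0f} Mb/s average per workstation")  # → 40 Mb/s
```

40 Mb/s of average headroom is generous for typical office traffic, which is why the text notes this configuration supports at least 100 such workstations; bursty or media-heavy workloads would change the picture.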


Storage Considerations

VMware provides a comprehensive discussion of VMware View storage considerations in the following document: http://www.VMware.com/files/pdf/view3_storage.pdf

However, since we have already chosen an Intel Modular Server, the choice to use the high-performance internal storage of the chassis is a foregone conclusion. This system provides a high level of performance and reliability and is a great choice of storage platform for View. A critical component of the Intel Modular Server required for a View deployment is the Shared LUN Option. This option is a simple software key that activates the ability for more than one server to read and write to a virtual disk configured on the Intel Modular Server at the same time. The load balancing and high availability functions of VMware and VMware View are dependent on this ability. Be sure to include this option in your Intel Modular Server configuration for View. For more information on the Shared LUN Option, please see the Intel Modular Server System's Shared LUN FAQ at http://www.intel.com/support/motherboards/server/sb/CS-030711.htm

The amount of required disk storage capacity for your environment is dependent on how your workstations are configured, what applications will be used, and many other factors. A complete analysis of those factors is beyond the scope of this document. However, the following items should be taken into consideration:

1.) Full clones or linked clones - VMware offers a product called View Composer, which can greatly reduce the amount of storage space required for a VDI implementation through a technology called linked clones. In a nutshell, linked clones technology uses a master desktop VM as a base for the other VMs in the system, then keeps delta files that record only the differences between the master clone and the other VMs. In a traditional full-clone environment, every virtual desktop is a complete clone of the workstation template. If your template is configured with a 10 GB virtual drive, each virtual desktop will take up a full 10 GB of disk space. View Composer has many other features, most notably the ability to separate user data from operating system data and redirect it to a different storage location. This eliminates the need to implement roaming profiles and keeps user data manageable. While we are not using View Composer in our reference system for this document, I encourage you to learn more about View Composer by reading the following datasheet: http://www.VMware.com/files/pdf/View_composer_datasheet.pdf

2.) In order to keep management requirements to a minimum, user data and configuration information will need to be kept separate from the desktop operating system. You can utilize roaming profiles to accomplish this. In a roaming profiles system, user data and configuration information is kept on a server share rather than on the virtual desktop's virtual disk. Not only does this reduce the amount of disk space required to store the VMs, it also allows users to be freely moved from VM to VM without losing data or reconfiguring their work environment.
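The storage savings from linked clones can be illustrated with the 10 GB template example above. In this sketch, the 15% average delta per clone is a hypothetical figure chosen purely for illustration; real deltas depend on how much each desktop diverges from the master:

```python
desktops = 20          # workstation count from the reference environment
template_gb = 10       # template virtual drive size from the text
delta_fraction = 0.15  # hypothetical average delta per linked clone

# Full clones: every desktop is a complete copy of the template.
full_clones_gb = desktops * template_gb
# Linked clones: one master image plus a small delta file per desktop.
linked_clones_gb = template_gb + desktops * template_gb * delta_fraction

print(f"Full clones: {full_clones_gb} GB")          # → Full clones: 200 GB
print(f"Linked clones: {linked_clones_gb:.0f} GB")  # → Linked clones: 40 GB
```

Even with generous deltas, the linked-clone approach uses a fraction of the full-clone footprint, which is why View Composer is attractive for larger deployments.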


System Overview and Diagram


Reference Environment

For any VDI project, the design must take into account the desired outcome as well as projected growth during the lifetime of the system. For this document, the needs will be defined as follows:

- Integration with a Microsoft Active Directory system for user authentication and a Windows file server for user data storage.
- 20 virtual workstations.
- The system will be as self-contained as possible. The Intel Modular Server will be utilized for all required servers and workstations without additional hardware (beyond those that already exist, see above).
- Connectivity to existing network resources (shared storage, published applications, etc.) will be fully supported and accessible by the virtual workstations.

An overview of the system as designed for this document is shown below as Figure 2-2.


[Figure 2-2: Diagram of the reference environment - an Intel Modular Server (2 compute modules, 12 GB RAM per module, 4 x 135 GB drives) hosting the vCenter, View Manager, File Server, and Active Directory virtual machines plus 20 XP Pro virtual desktops, connected through the chassis switch to a router for internet access]

Figure 2-2

Note: Although the diagram shows separate virtual servers for Active Directory and File Server, those functions will be handled by the same virtual server.


Suggested Production Environment

The reference environment will work well as a proof of concept; however, for a production environment, there are some additional considerations. For the suggested production environment, the system requirements will be defined as the following:

- Integration with a previously existing Microsoft Active Directory and file storage system for user authentication and user data storage.
- 100 virtual workstations.
- The system will be as self-contained as possible. The Intel Modular Server will be utilized for all required servers and workstations without additional hardware (beyond those that already exist, see above).
- vCenter will utilize Microsoft SQL Server for its database. SQL will be installed on a virtual server that resides inside the modular server cluster.
- Connectivity to existing network resources (shared storage, published applications, etc.) will be fully supported and accessible by the virtual workstations.
- Users will connect to their virtual desktops from thin clients and PCs while in the office and will connect back into the office via the internet from laptops when traveling or from home.
- The system must be able to survive the complete failure of one compute module in the modular server and be able to function normally until the unit is replaced.
- Group policy will be used to secure client desktops.
- Roaming profiles will be utilized to manage user data and settings.
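The module-failure requirement above can be checked with quick arithmetic: with one of the four compute modules down, the surviving three must still carry all 100 workstations within the 30-50 per-module range quoted in Chapter 1. A sketch:

```python
import math

modules = 4
workstations = 100
max_per_module = 50  # upper end of the per-module range from Chapter 1

# Worst case: one module fails and its load is spread across the rest.
surviving_modules = modules - 1
per_module_after_failure = math.ceil(workstations / surviving_modules)

print(per_module_after_failure)                    # → 34
print(per_module_after_failure <= max_per_module)  # → True
```

At 34 workstations per surviving module, the degraded system stays comfortably inside the performance range, which is what lets the design goal be met with 4 modules rather than more.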


An overview of the suggested production environment is shown below as Figure 2-3.

[Figure 2-3: Diagram of the suggested production environment - an Intel Modular Server (4 compute modules, 24 GB RAM per module, 8 x 300 GB drives) hosting the vCenter, View Manager, and Microsoft SQL virtual machines plus 100 XP Pro virtual workstations; the chassis switch connects to pre-existing Active Directory and File Server systems, and a View security server in a DMZ behind the firewall provides remote access from the internet]


Figure 2-3

As you can see in the diagram, the suggested production environment is similar to the reference environment we are building in this document. However, there are several key differences:

1.) The Modular Server configuration - To make sure that the system has enough available resources to handle the workload, the number of modules increases from 2 to 4 and the amount of RAM on each module increases from 12 to 24 GB. This also allows us to meet the design goal of having the system survive the complete failure of one compute module.

2.) Drive configuration - In a production environment, the 4 drives used in the reference configuration will most likely be inadequate. A greater number of larger-capacity drives will provide additional space as well as greater flexibility in configuration options. For example, a greater number of drives would allow for the setup of multiple drive pools, which allows you to optimize the performance of the storage subsystem. As the system grows, multiple drive pools could be used to isolate disk traffic between the server modules, or could be used to isolate server storage from virtual workstation storage. For more information on storage system configuration, please see the Intel Modular Server System MFSYS25/MFSYS35 User Guide at http://download.intel.com/support/motherboards/server/mfsys25/sb/mfsys25_mfsys35_userguide.pdf

3.) Pre-existing services - Most businesses already have a basic infrastructure in use that may include Active Directory and file storage. If so, you will probably want to use those resources already in place. The suggested environment assumes this to be true.

4.) Microsoft SQL - For an installation of this size, the use of SQL Server is recommended. This could be a database instance on an already existing SQL server (assuming there are adequate resources available) or a new physical or virtual server just for this purpose. In the diagram, it is assumed that this will be a new virtual server hosted in the Intel Modular Server.

5.) View security server - To meet the design goal of allowing users to connect while traveling or from home, the suggested environment includes a View security server set up in a DMZ to proxy secure traffic to the View Manager. In the diagram, this is shown as a physical machine, but assuming your network is configured with the proper VLAN configuration, it is possible for this to be a virtual server as well.
For more information on VMware View in a production environment, please see the following documentation: http://www.VMware.com/resources/wp/View_reference_architecture_register.html


Chapter 3: Reference Environment Installation and Configuration


Before Beginning the Installation
In preparation for the installation of the reference environment, take the time to fill out the Reference Environment Worksheet in Appendix A. This information will be required during the installation and having it already available and at your fingertips will help speed the process along.

Management Module IP Configuration


The first step in the installation process is to configure the Management Module of the Intel Modular Server to the correct settings for your network. Once that is configured, we will use the management interface to configure the Modular Server and storage. Connect the Modular Server management module Ethernet connection to an Ethernet switch. Verify that your management workstation is connected to the same physical network. The Management Module is assigned a default IP address of 192.168.150.150/24, so your workstation will need an address assigned on that same subnet. Verify connectivity by opening a browser and connecting to HTTPS://192.168.150.150. If you have good connectivity, you should receive the certificate warning page. Click continue and you should be redirected to the Modular Server Control login page as seen below:


Enter admin for the Username and admin for the password. Once successfully logged in, you should see the Modular server Control dashboard shown below:

To set the correct IP configuration for your network, click on IP Configuration in the left panel and fill out the form that appears with the correct IP configuration for your network. Keep in mind that this information is for the management interface only and does not affect the virtual machines in any way. We will configure their IP settings later. Once the form is completed, click on save changes. You will then be asked to reboot the system to put the changes you made into effect. Click Update and reboot to continue. When the system begins to reboot, close your browser and change the IP settings of your workstation to fit your network environment. Once the server reboots, you should be able to re-connect to the management module by opening a browser and connecting to HTTPS://IPaddress_you_assigned (for the purposes of this document, I used 192.168.99.10).
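The initial connection only works if the workstation really is on the module's default subnet, which is easy to get wrong while juggling two sets of IP settings. A minimal Python sketch of that sanity check (the workstation addresses shown are hypothetical examples):

```python
import ipaddress

def same_subnet(interface_cidr: str, other_ip: str) -> bool:
    """Return True if other_ip falls inside the network implied by interface_cidr.

    Used here to confirm the management workstation's address is on the
    module's default 192.168.150.150/24 network before attempting to connect.
    """
    net = ipaddress.ip_interface(interface_cidr).network
    return ipaddress.ip_address(other_ip) in net

# A workstation address on the default management subnet passes; one on
# another subnet (such as the post-reconfiguration 192.168.99.x range) fails.
print(same_subnet("192.168.150.150/24", "192.168.150.20"))  # True
print(same_subnet("192.168.150.150/24", "192.168.99.10"))   # False
```

The same check is useful again after the reboot, with the new management address substituted for the default.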

Network Configuration
The configuration of the network connections for the Intel Modular Server is handled in the management module interface. As discussed in Chapter 2: System Design and Requirements, the reference environment is configured with a 2GB LACP uplink for connection to an existing network. For this configuration, the switch connected to the Intel Modular Server will need to support and be configured with a matching 2GB LACP link. If the switch you are using in your environment doesn't support this feature, you can simply connect any available port on the Intel Modular Server switch to your network and skip this section. The 1GB uplink will be adequate for the reference environment. However, your network may need to be upgraded to support the required features before putting the system into production. To configure the 2GB link, begin by logging into the management module interface and clicking on the Switches link in the left hand pane. This will take you to a rear-view of the chassis as seen below:

Click on the picture of the switch to highlight it as shown above and click Advanced Configuration from the Switch1 Actions menu on the right. When prompted, click apply to launch the advanced configuration utility as shown below:

Click the + next to Layer 2 to expand the menu. Then expand Interface and click to highlight LAG Configuration from the menu. Click the Edit button next to LAG1 to bring up the LAG Configuration Settings window as shown below:


The proper configuration of the LAG group is somewhat dependent on the switches used in your environment. However, it is usually adequate to simply enter a description in the proper field and change the admin speed to 1000M. Then click Apply to apply the settings and Close to return to the Advanced Configuration screen. Click the Refresh button to see your additions. To add ports to your newly created LAG, click on LAG Membership in the menu on the left. Then click Edit next to your new LAG. This will launch the LAG Membership Settings utility as shown below:


Simply select the ports you wish to assign to the LAG and click the right arrow (>>) button to move them into the LAG Members window. Then click to select the LACP check box and click Apply to apply your changes, then Close to return to the Advanced Configuration screen. Click Refresh to see the ports you just assigned to the LAG. When you connect these ports to your properly configured switch, the LAG will become active. There is no need to configure anything on the internal switch ports at this time. That configuration will be handled in the ESX server configuration.
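One behavior worth understanding before relying on the 2GB LAG: LACP balances traffic per flow (typically by hashing addresses), not per packet, so any single stream is still limited to one member link's speed. A toy illustration of the idea — real switches use vendor-specific hash inputs, and this simplified version is only for intuition:

```python
def member_link(src_mac: str, dst_mac: str, link_count: int) -> int:
    """Toy LACP-style flow hashing: each (src, dst) pair maps to exactly one
    member link, so a given flow never spreads across links."""
    def mac_bits(mac: str) -> int:
        return int(mac.replace(":", ""), 16)
    return (mac_bits(src_mac) ^ mac_bits(dst_mac)) % link_count

# The same flow always lands on the same link; only different flows can
# use the second link and realize the aggregate 2GB of bandwidth.
a = member_link("00:1b:21:aa:00:01", "00:1b:21:bb:00:02", 2)
b = member_link("00:1b:21:aa:00:01", "00:1b:21:bb:00:02", 2)
print(a == b)  # True
```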

Storage Configuration
Before we can install ESX server, we need to set up and configure the required disk storage. This process involves 4 steps:

1.) Install the shared LUN option
2.) Enable HDD write back cache
3.) Configuration of the storage pool
4.) Configuration of Virtual Drives

Install the Shared LUN Option

As noted in Chapter 2: System Design and Requirements, the ability for multiple servers to share virtual disks is a critical part of the VMware View architecture. To enable this functionality, the Shared LUN option must be activated. The following document provides step-by-step instructions on how to activate the shared LUN feature: http://www.intel.com/support/motherboards/server/sb/CS-029940.htm Follow steps 1 through 5, then return here and continue to Step 2: Enable HDD write back cache.

Enable HDD Write Back Cache

Assuming that your modular server is on a reliable power source such as a UPS, turning on HDD write back cache can increase the write performance of your drive array. Enable it by clicking on Storage under Settings in the left pane. Then click on the dropdown and choose enabled. Click on the save changes button, then OK to verify.


Configuration of the Storage Pool Next, configure the storage pool. Start by clicking on Storage under System at the top of the left pane. This will bring you to the following screen:

Click on Create Storage Pool and the Create Storage Pool screen appears. Select each of the 4 drives and enter a name for the pool. I used Pool1 as the name, but it can be anything you like.

Notice the highlighted RAID levels. These will be the choices presented to us later when we create the Virtual Drives. As you change the number of drives in the drive pool, the supported RAID levels change. In a production environment, the 4 drives used in the reference configuration will most likely be inadequate; you may want to include enough drives to enable other RAID levels like 50 or 60. In addition, a greater number of larger capacity drives will provide additional space as well as greater flexibility in configuration options. For example, a greater number of drives would allow for the setup of multiple drive pools, which lets you optimize the performance of the storage subsystem. For more information on storage system configuration, please see the Intel Modular Server System MFSYS25/MFSYS35 User Guide at http://download.intel.com/support/motherboards/server/mfsys25/sb/mfsys25_mfsys35_userguide.pdf. For the reference environment, click Apply to continue. Then click OK when the action succeeds.

Configuration of Virtual Drives

The final step is to create the Virtual Drives. For the reference environment, we need the following virtual drives:

- 10Gb boot drive for module 1
- 10Gb boot drive for module 2
- 250Gb drive for our Virtual Desktops (shared between Modules 1 and 2)
- All remaining space for Virtual Servers (shared between Modules 1 and 2)
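When planning a production pool, it helps to estimate usable capacity before buying drives. A small sketch using the standard RAID overhead formulas (the drive size shown is a hypothetical example — the reference configuration's drive size is not specified here — and controller metadata overhead is ignored):

```python
def usable_capacity_gb(drive_count: int, drive_size_gb: float, raid_level: int) -> float:
    """Rough usable capacity for common RAID levels.

    Standard formulas: RAID 0 = n*s, RAID 1/10 = n*s/2,
    RAID 5 = (n-1)*s, RAID 6 = (n-2)*s.
    """
    if raid_level == 0:
        return drive_count * drive_size_gb
    if raid_level in (1, 10):
        if drive_count % 2:
            raise ValueError("RAID 1/10 needs an even number of drives")
        return drive_count * drive_size_gb / 2
    if raid_level == 5:
        if drive_count < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (drive_count - 1) * drive_size_gb
    if raid_level == 6:
        if drive_count < 4:
            raise ValueError("RAID 6 needs at least 4 drives")
        return (drive_count - 2) * drive_size_gb
    raise ValueError(f"unsupported RAID level: {raid_level}")

# Four hypothetical 146 GB drives, as a pool of the reference size might use:
print(usable_capacity_gb(4, 146, 5))   # 438
print(usable_capacity_gb(4, 146, 10))  # 292.0
```

Running the numbers for each supported level makes the capacity-versus-redundancy trade-off concrete before the pool is committed.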

Start by clicking Create Virtual Drive in the menu on the right. The following screen appears:

As I did in the screen shot above, enter a name for the drive, choose a RAID level and a size, and since this is to be used as a boot drive for the module, choose to initialize the boot sector. Then choose the server you want to assign this drive to. In this case the drive is to be assigned to Server 1. When your selections are complete, click Create. When the action succeeds, click OK to continue.


Repeat this process for each of the virtual drives, being sure to check assign to multiple servers and choose both Server 1 and Server 2 where appropriate. When all 4 virtual drives have been created, your screen should look similar to the screen below:

You can click on each of the virtual drives to verify the setup is correct. When you are satisfied with the drive layout, you are ready to proceed with the installation of ESX server.

ESX Server Installation


To install ESX we will make use of the remote KVM and CD feature of the management module. Before you start, there are some settings you should make in Internet Explorer to make the process smoother. In IE, click on tools, then Internet Options, then click the Security tab. Click on the green check mark for Trusted Sites and add the address of your Management Module to the trusted sites list. For example: HTTPS://192.168.99.10. Click OK to return to the browser screen and navigate to the Management Module user interface. Once you are logged in as admin, click on Servers under System on the top left. This will bring you to the Front server View of the modular server as seen below:


Click on the server module you wish to start with, and click on Remote KVM & CD. The following screen will appear:


For an ESX installation, change the mouse mode to relative, but leave the rest at the defaults and click Apply. This will launch the KVM applet. You will need to answer a series of security notifications and authorize the applet download. When prompted, choose to open the downloaded file. When the KVM loads, you will see a new window with a blank screen:

For the purposes of this document, we will install ESX from an ISO on the hard drive of the local management workstation. To make the ISO accessible to the server module, click on the Device menu and choose Redirect ISO. Then simply browse to the ISO on your hard drive and click Open as seen below:
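Redirected installs are slow, so a corrupted ISO is painful to discover late. If the download page publishes a checksum, it can be verified before redirecting; a small sketch (the file name is illustrative):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a (potentially multi-GB) ISO through SHA-256 in 1 MB chunks,
    so the whole file never has to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the result against the checksum published on the download page, e.g.:
#   sha256_of("esx-installer.iso") == "<published checksum>"
```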

Before we can start the installation process, there are two BIOS settings we need to verify. You can now power the module on and go to the BIOS configuration. To do so, leave the KVM open and return to the Management Interface screen. Then click on Power on under Server Actions at the top right, and then click Apply in the pop-up window. When the power on sequence begins, you will see the server BIOS screen appear in the KVM window.

Click on the KVM window and press F2 to enter the BIOS configuration. Navigate to Advanced then Processor Configuration. On that screen make sure that Execute Disable Bit and Intel Virtualization Technology are both set to enabled as seen below:

Once you have verified or set the settings correctly, save the configuration and exit by pressing ESC then F10 to save and exit. When the system reboots, it will come up into the ESX installer from the re-directed CD-ROM ISO. The first installer screen is shown below:


Press Enter to begin the installation. When asked about checking the CD media, select skip. After a short time, the ESX install screen will appear:

Click Next to continue. On the next 2 screens choose keyboard and mouse types. Generally you can take the defaults and click Next to continue. At this point you will get the following warning concerning the available disk partitions:


This is informing you that the Virtual Disks you created earlier have not been formatted and will be formatted as part of the installation process. You will receive one warning for each disk the system finds during installation. You can safely click Yes to each warning. When prompted, accept the license agreement and click Next to continue. That will bring you to the disk partitioning screen:


Verify that the disk to be used for the installation of ESX is the correct size and is labeled sda, then click Next. You will next see another warning screen:

This gives you one more chance to abort if the wrong drive is about to be formatted. Again, verify the correct disk is chosen, and click Yes to continue. At the next screen you will be shown the complete partition table that will be created. Take a look at the partition table and click Next when you are ready to proceed. Click Next at the advanced options screen and you will be taken to the network configuration screen:


Fill in the Network Address and Host Name section with the information you entered in the Reference Environment Worksheet that you completed earlier. Since the reference environment doesn't make use of VLANs, leave the VLAN ID blank and click Next to continue. At the Time Zone Selection screen, choose your time zone and click Next to continue. On the Account Creation screen enter a password for the root user. Be sure to document the root password and keep it safe. There is no need to add additional users at this time. When the About to Install screen comes up, verify the information listed and when you are satisfied that the installation is configured correctly, click Next to continue. When the installation completes, remove the CD-ROM re-direction by clicking on Device and Redirect ISO, then click Finish to reboot the system. Repeat the installation procedure for the second server module.
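A common worksheet mistake is a default gateway that isn't actually on the host's subnet, which the installer won't catch for you. A quick pre-check sketch (the addresses are hypothetical examples):

```python
import ipaddress

def validate_host_network(ip: str, netmask: str, gateway: str) -> bool:
    """Check that a host address and its default gateway share a subnet —
    the most common error on a hand-filled network worksheet."""
    network = ipaddress.ip_network(f"{ip}/{netmask}", strict=False)
    return ipaddress.ip_address(gateway) in network

print(validate_host_network("192.168.99.21", "255.255.255.0", "192.168.99.1"))  # True
print(validate_host_network("192.168.99.21", "255.255.255.0", "192.168.98.1"))  # False
```

Running each worksheet row through a check like this takes seconds and avoids a reinstall over a typo.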

ESX Server Configuration


To begin the ESX server configuration, open a browser from your management workstation and browse to HTTPS://ip_address_of_Module1. You will see the ESX server welcome screen seen below:

Click on the link to download the VMware Infrastructure Client and allow it to install to your management workstation. You can choose to install the upgrade manager if you wish, but its use will not be covered in this document.

When the installation is complete, launch the infrastructure client and connect to module1 by entering its IP address in the IP address / Name box, enter root for the User name and enter the root password you chose earlier. Click Login to connect.

When prompted, click ignore to ignore the self-signed certificate warning. You will then see the VMware ESX management screen as seen below:

Verify connectivity to the second module by repeating the login process for that host. If you are presented with the ESX management screen for both hosts, you are ready to continue with the configuration of ESX. The next step is to add the storage you configured in the Modular server as available storage for the ESX host. To do so, return to the ESX management screen of module 1. Select the server in the left pane and click on the Configuration tab in the right pane. Then click on the storage link in the Hardware window. You will be taken to the storage configuration screen as seen below:

Click on the Add Storage link in upper right to launch the add storage wizard. Follow the wizard to add the 2 virtual disks created earlier. You can only add one disk at a time, so you will need to go through the wizard twice. Be sure to give them meaningful names like in the example above, and take the default setting of 1MB block size. When you have completed adding the storage, your screen should look very much like the image above. Since you previously configured the storage as shared, once you add it to one module it will be automatically recognized by the second module, so there is no need to repeat the process.

In an ESX cluster, time synchronization is critical, so you need to configure the time for both servers. Start by clicking on Time Configuration in the Software window. Then click on the Properties link in the upper right. You will then see the time configuration window as seen below:


Check the NTP Client Enabled check box and click the Options button to configure. Click on NTP Settings, remove the loopback address from the NTP servers window and add a valid NTP server by IP address or fully qualified domain name. Click OK twice to get back to the ESX management screen. Repeat this process on the second server and verify that both servers are synchronizing to the correct time. Before you proceed with setup of the virtual machines required for the system, you need to configure the virtual networking of the ESX servers. Start by clicking on Networking in the Software window. You will see the networking window as shown below:

As you can see, the installation recognized the first network adapter and created a virtual switch with two port groups, one for a service console and one for the VMs. This is very close to what we need for the reference configuration. All you need to do is add the second NIC to the switch. Click the Properties link to start the process. Then click the Network Adapters tab and the Add button to start the add adapter wizard. The wizard starts with a list of the unclaimed adapters. Check the box next to the unclaimed adapter you wish to add and click the Next button. Verify that both adapters are listed in the Active Adapters list and click Next. Click Finish to add the adapter. You can safely take the defaults on the rest of the networking settings for the reference environment, so click Close to return to the ESX management screen. You will see that the second adapter has been added to the switch diagram as seen below:

Repeat this process on the second module.

Creation of Microsoft Support Servers


At this point, your modules are configured as ESX servers and are ready to host virtual machines. In order to proceed with the build of the reference environment, you need three Windows servers: a properly configured Active Directory server, a Windows server configured as a member of the domain ready for the installation of vCenter server, and another Windows server configured as a member of the domain ready for the installation of View Manager server. You will also need a virtual machine set up with a Windows desktop operating system to be used as a template for your virtual desktops. The build for these machines is beyond the scope of this document; however, some guidelines are listed below. The reference environment uses virtual servers set up on Module 1 for the Active Directory role, the vCenter server role and the View Manager role. Suggested specs for these servers are listed below, but are suggestions only. Please see Microsoft and VMware documentation for minimum required and recommended specs.

- Active Directory Server (TESTAD1)
  - Windows 2003 or 2008 Server Standard
  - 1Gb RAM
  - 8Gb system volume
  - 20Gb data volume
  - DNS for local domain with forwarders to external public DNS servers
  - DHCP with a range of adequate size set up for the virtual desktops
  - VMware tools
- vCenter server (TESTVC1)
  - Windows 2003 or 2008 Server Standard
  - 3Gb RAM


  - 8Gb system volume
  - 20Gb data volume
  - Member server in the domain
- View Manager server (TESTVIEW1)
  - Windows 2003 or 2008 Server Standard
  - 3Gb RAM
  - 8Gb system volume
  - 20Gb data volume
  - Member server in the domain
  - VMware tools

The required virtual workstation will act as the template for the automated deployment of virtual desktops in the reference environment. In a production environment, you would want all of your production applications installed on that template so that they would be included in the final desktops. For the reference environment, you only need the following components:

- Virtual Desktop Machine (will be converted to a template later) (XPPRO)
  - Windows XP or Vista (the reference environment uses XP Pro)
  - 512Mb RAM
  - 8Gb system volume
  - VMware tools
  - VMware View Agent

Once the virtual Windows machines are configured and tested, you can proceed with the Reference environment build.
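It can be worth tallying the suggested allocations before building, to confirm the module hosting these support VMs has headroom. A small sketch using the figures above (the numbers are the suggested specs, not hard requirements):

```python
# Per-VM allocations taken from the suggested specs for the reference
# environment (system volume + data volume for the servers).
VMS = {
    "TESTAD1":   {"ram_gb": 1,   "disk_gb": 8 + 20},
    "TESTVC1":   {"ram_gb": 3,   "disk_gb": 8 + 20},
    "TESTVIEW1": {"ram_gb": 3,   "disk_gb": 8 + 20},
    "XPPRO":     {"ram_gb": 0.5, "disk_gb": 8},
}

def totals(vms):
    """Sum RAM and disk across the support VMs to sanity-check module capacity."""
    ram = sum(v["ram_gb"] for v in vms.values())
    disk = sum(v["disk_gb"] for v in vms.values())
    return ram, disk

ram_gb, disk_gb = totals(VMS)
print(ram_gb, disk_gb)  # 7.5 92
```

Comparing these totals against Module 1's installed RAM and the shared virtual drives created earlier shows how much is left for the virtual desktop pool.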

Installation of vCenter Server


Download the .ISO file for vCenter 2.5U4 and make it accessible to your TESTVC1 server. If it is a virtual server, you can use the connect CD/DVD button to mount the .ISO file as a CD drive in the VM. The Installation of vCenter should begin on its own. Click Next to start the install. Accept the license agreement and click Next to continue. At the next screen, enter or accept the user name and organization for the installation and click Next. At the installation type screen, make sure the VMware VirtualCenter Server is selected for installation and click Next. At the next screen, you select what type of database your vCenter server will use. For the reference environment, take the default of SQL express and click next. In a production environment, you may want to use a dedicated SQL server instance. See the VI3 documentation at: http://www.VMware.com/support/pubs/vi_pages/vi_pubs_35.html.


At the step 2 screen, take the default and install in evaluation mode. You can set up and configure licensing later for all of the various VMware components. At the Server Authorization screen shown below, verify that the TESTVC1 server's fully qualified domain name is listed in the VC Server IP field.

You will also need to enter a domain administrator's username and password in the provided fields. This account will be used during installation to login so the installation routine can create a permanent account for ongoing operations. It will not be used again after the installation is complete. When you are ready, click Next to continue. If .NET Framework 2.0 is not installed on your TESTVC1 server, it will be installed as a part of the vCenter installation. The entire installation of vCenter server can take several minutes, so be patient while the installation completes. When the installation is complete you will see the following screen:


Click Finish to end the installation and launch the Infrastructure Client. When it comes up, test your vCenter installation by logging into the management interface. To do so, leave localhost in the IP address/Name field, enter the domain administrator's user ID and password, and click Login. If vCenter is functional, you will see the following management screen:

You are now ready to proceed with the configuration of the ESX Cluster.


Configuration of the ESX Cluster


In this section, you will bring your ESX hosts under management by the vCenter server, create a datacenter and a server cluster for High Availability and Distributed Resource Scheduling, and configure VMotion. Explanation of all of the various options available is beyond the scope of this document. Refer to the VI3 Online Library at http://pubs.VMware.com/vi35/wwhelp/wwhimpl/js/html/wwhelp.htm for more information on how to configure an ESX cluster. From your management workstation, log into the vCenter server using the VMware Infrastructure Client. Start the client, enter the name of your vCenter server in the IP address/Name field, enter the domain administrator's user ID and password, and click Login. Click on the link to Create a Datacenter. When prompted, give the datacenter a name. For the reference environment we used testDC for test datacenter, but you can use any name you wish. Click on your newly created datacenter in the left window and then click the link to add a host. The Add Host wizard will appear:

Go through the wizard to add each of your ESX hosts into vCenter. Start by typing the fully qualified domain name of the first host to be added in the appropriate field. Note: It is critical that the hosts be added to vCenter using their fully qualified domain name such as vhost1.test.local. You may need to manually add DNS entries for your ESX hosts into your DNS server before the vCenter server can resolve the names. Then type root for the username and the root password in the password field, click next. In the Host Summary window, you will see the host information and any VMs that are on that server, click next.
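Since a bare short name will cause the Add Host step to fail, a quick shape-check of the worksheet's host names can save a retry. A loose sketch — this validates form only, not whether DNS actually resolves the name:

```python
import re

# A hostname label: 1-63 chars, letters/digits/hyphens, no leading or
# trailing hyphen.
_LABEL = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def is_fqdn(name: str) -> bool:
    """Loosely check that a name is fully qualified: two or more valid labels,
    as vCenter expects (e.g. vhost1.test.local rather than just vhost1)."""
    labels = name.rstrip(".").split(".")
    return len(labels) >= 2 and all(_LABEL.match(label) for label in labels)

print(is_fqdn("vhost1.test.local"))  # True
print(is_fqdn("vhost1"))             # False
```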

On the Virtual Machine Location window, click to highlight the datacenter and click next. Verify the selections you made on the Ready to Complete window and click Finish. After the add process is complete, you will see your host under the datacenter in the vCenter inventory window. Repeat this process to add the other ESX host. Before creating the ESX Cluster, you need to setup VMKernel ports on your virtual network switches. For more information on virtual network switch configuration see the VI3 Online Library at http://pubs.VMware.com/vi35/wwhelp/wwhimpl/js/html/wwhelp.htm. Click on a host to highlight and click on the configuration tab. Click on networking in the hardware window. Click on the Properties link just to the upper right of the switch diagram in the main window, the vSwitch Properties window will appear. Then click the Add button in the lower left. The Add Network wizard will appear as shown below:

Click to select VMkernel and click next. Leave the Network Label and VLAN ID fields at the defaults. Click to select the Use this port group for VMotion option. Set the IP address to an unused IP address on an unused subnet in your network. In the reference environment, we used 192.168.100.1 for vhost1 and 192.168.100.2 for vhost2. Note: This subnet is used for communication between the hosts only for Vmotion traffic and does not need to be routable to the rest of your network, nor does it need access to the internet. Please see the VI3 documentation referenced earlier for more information concerning VMkernel port IP addressing. Enter an appropriate subnet mask (we used 255.255.255.0 in the reference environment) and leave the default gateway blank. Since our ESX hosts are all on the same subnet, we do not need VMkernel traffic to know about a default gateway. Click next to continue.
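The per-host VMkernel addressing above (.1 for vhost1, .2 for vhost2) generalizes if more hosts are added later. A small sketch that hands out sequential addresses from the dedicated VMotion subnet:

```python
import ipaddress

def vmotion_addresses(subnet: str, host_count: int) -> list:
    """Hand out sequential host addresses from a dedicated, non-routable
    VMotion subnet — formalizing "vhost1 gets .1, vhost2 gets .2, ..."."""
    net = ipaddress.ip_network(subnet)
    hosts = list(net.hosts())
    if host_count > len(hosts):
        raise ValueError("subnet too small for the number of hosts")
    return [str(ip) for ip in hosts[:host_count]]

# The reference environment's two hosts on the dedicated 192.168.100.0/24:
print(vmotion_addresses("192.168.100.0/24", 2))  # ['192.168.100.1', '192.168.100.2']
```

Keeping this subnet distinct from every production subnet ensures VMotion traffic never collides with routed addresses.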


Verify the selections you made on the Ready to Complete window and click Finish. When prompted, answer No to "Do you want to configure a default gateway now?". Click Close to close the vSwitch Properties window. You should now see the newly created VMkernel port on your virtual switch diagram as highlighted below:

Repeat this process for the other ESX host. You are now ready to create the ESX cluster. Start by clicking the datacenter to highlight it, then click the Create a cluster link in the main window. The New Cluster Wizard appears. Give the cluster a name by typing it in the appropriate field. For the reference environment, we used testIMS for test Intel Modular Server, but you can use any name you wish. Click to select both VMware HA and VMware DRS and click next. Leave the defaults on the VMware DRS screen and click next. On the VMware HA screen click to select the option for Allow VMs to be powered on even if they violate availability constraints. Leave the rest of the fields at the defaults as shown below:


Click next to continue. Accept the default on the Virtual Machine Swapfile Location window and click next to continue. Verify the selections you made on the Ready to Complete window and click Finish. You should see your new cluster appear in the inventory list as highlighted below:

To add your hosts to your new cluster, simply drag and drop them onto the cluster icon. When you do, the Add Host wizard will appear. Accept the defaults and click next to continue. Verify the selections you made on the Ready to Complete window and click Finish. The system will move the host into the cluster and configure it for high availability. When the processes are complete, add the second host to the cluster by repeating the process. You can now expand your cluster to see the newly added hosts and VMs. You should be able to see your 3 windows server VMs and one XP VM. You are now ready to go forward with the installation of the View manager server.


Chapter 4: View Manager


Installation of the View Manager Server.
From the console of the TESTVIEW1 server, download the VMware View connection server .EXE file from http://www.VMware.com. Run the executable to start the installation process. The Installation wizard will appear:

Click next to continue. Accept the terms of the license agreement and click next. Choose the destination for the installation and click next. For the reference environment, select standard for the installation type and click next. When prompted, accept the terms of the ADAM license agreement and click next. Verify the installation location and click Install to begin the installation. When the install process is complete, click Finish to complete the installation.

Configuration of the View Manager Server.


From your management workstation, open a browser and go to http://testview1.test.local/admin to open the View Manager administration interface. The following login screen should appear:


Login using the domain administrator user ID and password. You will see the administration screen appear. Before you can configure View, you must enter a valid license key. Click on the Configuration tab at the top of the window, then click on Product Licensing and Usage on the left. Click on the link to edit the license and type in your View license key. If you are evaluating View, you can use the evaluation license that appeared on the download screen for the View executable. Once the license is configured, you can click on the Servers link on the left to configure the servers for use in the View infrastructure. For the reference environment, we only need to add a vCenter server. Click the Add link under VirtualCenter Servers. The following screen will appear:

On the VirtualCenter Settings screen type the fully qualified domain name of the vCenter server in the Server address field. Enter the User name and password of a domain administrator in the appropriate fields. Leave the rest of the fields at the defaults and click OK to continue. After a short time, you should be taken back to the View admin screen where you will see the name of your vCenter server in the VirtualCenter Servers list.


Meet the Workstation Pool Pre-Requisites


Before you can set up the workstation pool, there are 3 pre-requisites you need to meet:

1. Create the Active Directory group that will be entitled to access the virtual workstations.
2. Mark your workstation VM as a template so it is ready for deployment.
3. Create the workstation customization specification to be used when new workstations are automatically deployed.

Use the Active Directory Users and Computers console to create the Active Directory group the way you would any other security group. Add all of the users you wish to have a virtual workstation to that group. To mark the XP workstation as a template, go to the VMware Infrastructure Client and connect it to the vCenter server. Once you are logged in, change to the Virtual Machines and Templates view by clicking the down arrow next to the Inventory icon at the upper left and selecting the Virtual Machines and Templates view from the menu. You will see the list of virtual machines under the datacenter in the left window. If the workstation is powered on, right click it and select Shut Down Guest from the menu. After a short time, the workstation will power off. Then right click the workstation and select Convert to Template. After a few seconds, you will see the icon of the workstation change to a template icon. The final pre-requisite is to create the workstation customization specification. From the VMware Infrastructure Client connected to the vCenter server, click on the Edit menu and choose Customization Specifications. Click New to open the Guest Customization wizard as shown below:


On the first screen, choose Windows as the target OS, and give this new specification a name and description. Click Next to continue. At the Registration Information window, enter the name and organization you wish to appear in the workstation VMs when they are provisioned. When finished, click Next to continue. When the Computer Name window appears, select Use the Virtual Machine Name and click Next to continue. The next window is the Windows License screen as shown below:

Enter the Microsoft product ID that will be used for the virtual workstations as they are provisioned. The key should be entered exactly as it appears on the license documentation including the hyphens. Click to de-select the option to Include Server License Information. That option only applies to server operating systems. Click Next to continue. At the Administrator Password window, enter the local administrator password you assigned when you built the workstation template. Enter it into the Password field and the Confirm Password field. Leave the option to automatically log on as the administrator unchecked and click Next to continue. At the Time Zone window, enter your time zone and click Next to continue. You may leave the Run Once window blank and click Next to continue. Assuming you have DHCP properly configured on your DC server as discussed previously, simply select Typical Settings on the Network Interface Settings window and click Next to continue.
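Since a mistyped product key is one of the most common causes of failed customizations (see Chapter 6), it can be worth sanity-checking the key's shape before pasting it into the wizard. The following Python sketch is purely illustrative and not part of any VMware or Microsoft tooling; it only verifies the usual five-groups-of-five hyphenated layout, not whether the key itself is valid.

```python
import re

# Hypothetical helper, not part of VMware View or Windows: checks that a
# product key follows the familiar 5x5 hyphenated layout, e.g.
# XXXXX-XXXXX-XXXXX-XXXXX-XXXXX. It cannot tell you the key is *valid*,
# only that the hyphens and group lengths look right.
KEY_PATTERN = re.compile(r"^[A-Z0-9]{5}(?:-[A-Z0-9]{5}){4}$")

def looks_like_product_key(key: str) -> bool:
    """Return True if the key has the expected hyphenated shape."""
    return bool(KEY_PATTERN.match(key.strip().upper()))

print(looks_like_product_key("ABCDE-12345-FGHIJ-67890-KLMNO"))  # True
print(looks_like_product_key("ABCDE12345FGHIJ67890KLMNO"))      # False (hyphens missing)
```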


The next screen allows you to setup credentials for joining the virtual workstation to the Active Directory domain.

Choose Windows Server Domain and enter the name of the domain in the field. Then enter the user ID and password of a domain administrator in the appropriate fields. The user must have the proper authority to add computers to the domain. When finished, click Next to continue. On the Operating System Options screen, make sure that Generate New Security ID (SID) is selected and click Next to continue. Verify the selections you made on the Ready to Complete window and click Finish. Close the Customization Specification Manager window. You have now met all of the prerequisites required to create your first workstation pool. You can now return to the View administration console to set it up.

Configure the Workstation Pool


From the View Administrator, click the Desktops and Pools tab. Click the link to add a new desktop. The Add Desktop wizard will appear as seen below:


Click to select Automated Desktop Pool and click Next. At the Desktop Persistence screen, select Persistent and click Next. Since there is only one vCenter server in the reference environment, accept the default on the Virtual Center Server screen and click Next to continue. The next screen is the Unique ID screen as seen below:


Follow the instructions to give the pool a unique ID and a common name and description. When finished, click Next to continue. For the reference environment, you can accept the defaults on the Desktop/Pool Settings screen and click Next to continue. The Automated Provisioning Settings screen will appear. Click the Advanced Settings link to expand your options as shown below:

Start by entering a naming pattern for the virtual workstations. A number will be appended to the name as each workstation is provisioned, so you only need to enter the base name. For the reference environment, we used TESTWKS. Click to select Enable Advanced Number of Desktops Settings and enter the following values in the appropriate fields:

Minimum Number of Desktops: 1
Maximum Number of Desktops: 20
Number of Desktops Available: 2

When finished, click Next to continue. At the Template Selection screen, you will see your workstation template listed in the main window. Click to select it and click Next to continue. At the Virtual Machine Folder Location window, click to highlight the datacenter and click Next to continue. At the Hosts and Clusters window, click to highlight the cluster you created earlier and click Next to continue. The next window is the Resource Pool window. Since we have not yet configured any resource pools, simply click the cluster to highlight it and click Next to continue. The Datastores screen shown below is the next screen in the wizard:
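The three values above interact in a simple way: View keeps cloning desktops until the configured number of spares is available, without dropping below the minimum or exceeding the maximum. The sketch below is our own illustration of that arithmetic, not VMware's implementation; the function name and parameters are invented for clarity.

```python
# Hypothetical sketch of the provisioning arithmetic implied by the pool
# settings used above (minimum 1, maximum 20, 2 available). The names and
# logic here are our own illustration, not VMware's code.
def desktops_to_provision(assigned: int, spare: int,
                          minimum: int = 1, maximum: int = 20,
                          available: int = 2) -> int:
    """How many new VMs the pool should clone right now.

    assigned: desktops already handed to users
    spare:    powered-on desktops waiting for a user
    """
    total = assigned + spare
    # Keep `available` spares ready and at least `minimum` desktops overall,
    # but never exceed the pool's `maximum`.
    want = max(minimum, assigned + available)
    return max(0, min(want, maximum) - total)

# Freshly created pool: nothing assigned, nothing spare -> clone 2 spares.
print(desktops_to_provision(assigned=0, spare=0))   # 2
# 19 desktops already assigned: only 1 more fits under the max of 20.
print(desktops_to_provision(assigned=19, spare=0))  # 1
```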


Select the datastore you wish to use for storage of the workstation VMs. In the reference environment, we used the DesktopStorage datastore. When you are ready, click Next to continue. At the Guest Customization window, click Use this Customization Specification and select the workstation customization specification you created earlier. Click Next to continue. On the Ready to Complete window, verify the selections you made and click Finish to create the workstation pool. Click the Desktop Sources tab and you can see that as soon as the pool was created, View began the provisioning process. Two workstations should be in the Provisioning state as shown below:


Click on the Desktops and Pools tab to return to the workstation pool list. Click the name of the pool to open the workstation pool summary shown below:

As you can see from the warning highlighted in red, you need to entitle users to the pool before they can login. To do so, click on the link for Entitlements. When the Entitlements window appears, click on Add. The search window shown below will appear:


Using the criteria fields at the top of the window, locate the user group you created earlier. Highlight it and click OK. Verify that you have selected the correct group and click OK to continue. Go back to the Desktop Sources tab and verify that your test workstations have finished provisioning. If they have, they should both show a status of Ready.

Utilizing Active Directory


One of the ultimate goals of implementing a VDI system is to keep customization of the user VMs to a minimum. Wherever possible, you will want to depend on the workstation template or other network services to provide automatic installation of applications and configurations. That way, in the event that a VM becomes damaged, an administrator can simply remove the affected VM and allow the system to deploy a new one to that user. Folder redirection and roaming profiles are two features of Active Directory that help make that goal a reality. As discussed in Chapter 2 under Storage Considerations, VMware also offers a solution to the separation of user data in their View Composer product; see the View Composer datasheet at http://www.VMware.com/files/pdf/View_composer_datasheet.pdf for more information. For the reference environment, we make use of both roaming profiles and folder redirection of the My Documents folder to provide for separation of user and system data. For more information on these features, please see the following documentation: http://technet.Microsoft.com/en-us/library/bb742549.aspx


Although a complete step-by-step process for configuring roaming profiles and folder redirection is beyond the scope of this document, the following points illustrate the major configuration decisions of the reference environment related to these features:

Profiles are stored on a share created on the AD server. In a proof-of-concept environment like the reference environment, this is not a problem. However, in a production environment, profiles should be stored on a dedicated file server (either virtual or physical) to make sure that performance doesn't suffer as the system scales up. The user objects are set up with the path to the profile directory as shown below:

Group policy is used to set up folder redirection to a different share on the same server. Just as with the profile share, in a production environment re-directed folders should be stored on a dedicated file server (either virtual or physical) to make sure that performance doesn't suffer as the system scales up.

Before users can connect to the virtual workstations, they must have the appropriate rights to do so. For the reference environment, we utilized group policy to set the appropriate user rights for remote desktop. For more information, see: http://technet.Microsoft.com/en-us/library/bb457106.aspx

Group policy is also used to prevent the use of offline files, limit user access to configuration applets, and control many other user settings.
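For illustration, the way Active Directory expands the %USERNAME% placeholder in a profile path can be sketched as a simple substitution. The share name \\TESTAD1\Profiles below is an assumption for the reference environment, not a value the document prescribes; substitute whatever share you created.

```python
# Illustrative sketch only: Active Directory substitutes the logged-on
# user's name for %USERNAME% when it reads the profile path from the
# user object. The share \\TESTAD1\Profiles is an assumed example.
def expand_profile_path(template: str, username: str) -> str:
    """Expand the %USERNAME% placeholder the way AD does."""
    return template.replace("%USERNAME%", username)

# One template serves every user object in the entitled group:
print(expand_profile_path(r"\\TESTAD1\Profiles\%USERNAME%", "jsmith"))
# \\TESTAD1\Profiles\jsmith
```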


Chapter 5: Testing the System


Install and Configure the Workstation Client
Now that you have the system configured, you need to test its functionality. Start by setting up a client to connect to the View server. From a test workstation, download the View client from the VMware website. Run the .EXE file to begin the installation. When the installer window appears, click Next to continue. Accept the license agreement and click Next to continue. At the Custom Setup window, accept the defaults and click Next. The Default Server window appears as shown below:

Enter the fully qualified domain name of the View server in the field. In the reference environment, the server name is TESTVIEW1.TEST.LOCAL. When finished, click Next to continue. On the Configure Shortcuts screen, choose the locations where you would like shortcuts and click Next to continue. Verify the installation location and click Install to begin the installation. When the install process is complete, click Finish. At that point, you are required to reboot the workstation to complete the installation.

Test the Connection to the Virtual Workstation


When the system reboots, start the View client from the shortcut. The View login screen appears:


Log in with a user ID that was one of the users added to the appropriate security group during the system configuration. You will then see the list of desktops to which that user is entitled:

Double-click on the workstation icon to connect. After a few moments, you will be presented with the desktop of one of the virtual workstations.

Verify the View Portal Is Up and Running


On your test workstation, open a browser and browse to http://fully_qualified_name_of_your_View_server. For the reference environment, that would be http://testview1.test.local. You will be presented with the View portal login screen:
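If the browser cannot reach the portal at all, a quick scripted check can distinguish a down web service from a typo in the URL. This Python sketch is an assumption-laden convenience, not a VMware tool: it builds the portal URL from the server's FQDN (the one shown is the reference environment's) and tests whether anything answers on the HTTP port.

```python
import socket
from urllib.parse import urlunsplit

# Hypothetical smoke test for the View portal: build the URL from the
# server's FQDN and confirm something is listening on the HTTP port.
def portal_url(fqdn: str, scheme: str = "http") -> str:
    """Compose the portal URL the way a user would type it."""
    return urlunsplit((scheme, fqdn, "/", "", ""))

def port_open(host: str, port: int = 80, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

url = portal_url("testview1.test.local")
print(url)  # http://testview1.test.local/
# port_open("testview1.test.local") should return True once the portal is up
```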


Log in with the same credentials you used to test the direct client connection and click Login. You will then be presented with the list of workstations available to that user. Simply click a workstation icon to connect, and you will be presented with the desktop of the virtual workstation. Verify that you are given the same virtual machine through the portal that you received when using the client. Congratulations! Your VMware View system is up and running. If you were unsuccessful in getting a connection to one of the virtual desktops, see Chapter 6: Troubleshooting for a list of common mistakes and things to check.


Chapter 6: Troubleshooting
If your View system is not working correctly, there are a few things to check that are common sources of problems. Review the following list and compare it to your environment:

Virtual Workstations not Completing the Provisioning/Customizing Sequence


Connect your VMware Infrastructure Client to the vCenter server and review the tasks and events window for the virtual workstations. The errors listed there may help you track down the issue. The most common problems stem from the customization specification you set up in Chapter 4 under Meet the Workstation Pool Pre-Requisites. If something was mis-configured during that process, your virtual workstations will provision but might fail the customization. Or the customization might finish, but the resulting VM might not be on the domain, or might be licensed incorrectly. We have found that the easiest way to troubleshoot the provisioning process is to manually provision a workstation from the same template that View is configured to use. Simply right-click the template in the VMware Infrastructure Client and choose Deploy Virtual Machine from this Template. This opens the Deploy Template Wizard as shown below:

Type a name for the new VM in the Name field and click the datacenter as the location for the new VM. Then click Next to continue. Choose the cluster as the run location for the VM and click Next to continue. Choose a datastore for the VM files; since this VM is only for testing, you can choose any datastore with adequate free space. Then click Next to continue. At the Select Guest Customization Option screen, select the option to Customize using an existing customization specification. When the selection window appears, select the customization specification that is used by View. Then click Next to continue. At the Ready to Complete screen, review the selections you made, click to select the option to Power on this virtual machine after creation, then click Finish to continue.

Monitor the process by watching the Recent Tasks pane at the bottom of your window and by connecting to the console of the VM. By seeing what happens and what fails in the process, you may be able to track down the issue. The following are some of the most common mistakes:

Windows license key entered incorrectly
Incorrect credentials for the local user of the template machine
Incorrect credentials for joining the domain

Unable to Connect to Virtual Workstations


If you are unable to connect to the virtual workstation with the View client, verify that you can connect to it with a remote desktop client. This simply takes View out of the equation. Look in the View admin console to get the name of an available virtual desktop and use the Windows remote desktop client to try to connect to it by fully qualified domain name. If you are unable to connect, look into common reasons why remote desktop is not working. Some common issues are listed below:

Users not authorized for remote access to the workstation: Users must be authorized to connect via remote desktop for View to function properly. If this is set incorrectly, or if the group policy used to set this right did not properly apply, your users will be unable to connect. Use the VMware Infrastructure Client to connect to the console of the VM and verify who has rights to access the system remotely. If your VDI users do not have access, use the method of your choice to set the appropriate rights.

Network settings mis-configured: Your virtual desktops should be configured to utilize DHCP for network configuration. In the reference environment, DHCP is configured on the TESTAD1 virtual server. Log into the console of the virtual workstation from the VMware Infrastructure Client and verify that the network settings are correct.

DNS resolution: View makes heavy use of DNS. Verify that your workstations and your virtual workstations are capable of resolving the various hostnames throughout your network. If they cannot, you may have a DNS configuration issue.
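The DNS and connectivity checks above can be scripted for quick triage. The helper below is our own hedged sketch, not a VMware utility, and the hostname shown is only an example: it resolves the desktop's name and then probes the RDP port, which mirrors what the Windows remote desktop client needs before View can broker a session.

```python
import socket

# Illustrative troubleshooting helper, not a VMware tool: first try DNS
# resolution, then (only if that works) test whether anything is listening
# on the RDP port the remote desktop client would use.
def check_desktop(fqdn: str, rdp_port: int = 3389, timeout: float = 3.0) -> dict:
    result = {"dns": None, "rdp": False}
    try:
        result["dns"] = socket.gethostbyname(fqdn)  # step 1: name resolution
    except socket.gaierror:
        return result  # DNS failed; no point probing RDP
    try:
        with socket.create_connection((result["dns"], rdp_port), timeout=timeout):
            result["rdp"] = True  # step 2: RDP port reachable
    except OSError:
        pass
    return result

# Example (hostname is hypothetical): check_desktop("testwks1.test.local")
# On a healthy desktop both checks pass and "rdp" comes back True.
```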

Group Policy Not Applying as Expected


It is a common problem for virtual workstations to seem as though they do not receive group policy settings the way you expected. Generally this is related to the use of machine policies rather than user policies. Both types apply to VMs exactly the way they do to physical machines. However, keep in mind that the trigger for the application of machine policies is a reboot, and virtual workstations will be rebooted rarely, if ever. Because of this, it is generally better to make use of user policies where possible and loopback processing where machine policy is your only option. If you reconfigure your group policy this way, you will find that policy is applied the way you expect.


Chapter 7: For More Information


About the Author
David Endicott started his first job in the computer industry in the summer of 1985, at 16 years old. By 1997, he had 12 years of personal computing and networking experience in a variety of industries including manufacturing, healthcare and education. His experiences led him to believe that there was a significant need for a networking services provider in southwest Missouri and across the four-state area. In October of that same year, David left behind his consulting position with Computer Sciences Corporation and founded NeoTech Solutions, Inc. in his home town of Joplin, Missouri. David holds several industry certifications, including Microsoft MCSE, Novell CNE and VMware VCP. Even though he is President of NeoTech, David utilizes his 25 years of computer industry experience daily in his favorite role as an active technician working to solve real-world IT problems and bring the latest information technology to his clients.

David's company, NeoTech Solutions, Inc., is a full-service consulting firm that fulfills the information systems needs of businesses nationwide. NeoTech specializes in the design, installation and support of computer networks in a wide variety of business sectors. They provide services to customers in many industries, with emphasis in the healthcare, education, and government arenas. NeoTech provides services that include pure consulting for IT strategy planning and RFP development, system design and installation, troubleshooting and configuration, fully managed IT services including proactive system monitoring, and full physical and virtual computer hosting in their in-house datacenter. For more information about NeoTech Solutions, Inc., please visit their website at http://www.neotechsolutions.com

Acknowledgements
The author would like to acknowledge the following people who helped make this whitepaper a reality:

Don McBride, CEO, Access Family Care
Rhonda Endicott, Kim Morgan, Billy Dunnic, and Cody Neal, all of NeoTech Solutions, Inc.
Jared Leavitt, Intel Modular Server product line marketing manager, Intel, Inc., for his vision of this project


References
The following references were used by the author during the writing of this whitepaper:

1.) Virtualization Review, Forrester: Businesses Adopting Virtualization, by Herb Torrens, 3/3/2009, http://virtualizationreview.com/articles/2009/03/06/forrester--businesses-adopting-virtualization.aspx
2.) VMware ThinApp Product Datasheet, http://www.VMware.com/files/pdf/thinapp_datasheet.pdf
3.) VMware Infrastructure 3 Documentation, http://www.VMware.com/support/pubs/vi_pages/vi_pubs_35.html
4.) VMware View Documentation, http://www.VMware.com/support/pubs/View_pubs.html
5.) VMware View (formerly VDI) Pricing and Support FAQ, http://www.VMware.com/files/pdf/View_pricing_support_faq.pdf
6.) Running VirtualCenter in a Virtual Machine, 4-3-2007, http://www.VMware.com/vmtn/resources/798
7.) Quick Start Guide ESX Server 3.5 and VirtualCenter 2.5, http://www.VMware.com/pdf/vi3_35/esx_3/r35/vi3_35_25_quickstart.pdf
8.) Wikipedia, Link Aggregation, http://en.wikipedia.org/wiki/Link_Aggregation_Control_Protocol#Link_Aggregation_Control_Protocol
9.) Storage Considerations for VMware View, http://www.VMware.com/files/pdf/view3_storage.pdf
10.) VMware View Composer Product Datasheet, http://www.VMware.com/files/pdf/View_composer_datasheet.pdf
11.) VMware View Reference Architecture, a Guide to Large-Scale Enterprise VMware View Deployments, http://www.VMware.com/resources/wp/View_reference_architecture_register.html
12.) Step-by-Step Guide to User Data and User Settings, Microsoft TechNet, http://technet.Microsoft.com/en-us/library/bb742549.aspx
13.) Configuring Remote Desktop, 11-03-2005, Microsoft TechNet, http://technet.Microsoft.com/en-us/library/bb457106.aspx
14.) Intel Modular Server System MFSYS25/MFSYS35 User Guide, A Guide for Technically Qualified Assemblers of Intel Identified Subassemblies/Products


Appendix A
Reference Environment Worksheet
Common Items
Domain ______________________________________________________________
Primary DNS Server (AD Server) _________________________________________
Secondary DNS Server (AD Server2 or External DNS) _______________________

Management Module
IP address ___________________________________________________________
Subnet Mask _________________________________________________________
Gateway ____________________________________________________________
Hostname ___________________________________________________________
Management Interface User (default: admin) ______________________________
Management Interface Password (default: admin) __________________________

Module 1
IP address ___________________________________________________________
Subnet Mask _________________________________________________________
Gateway ____________________________________________________________
Hostname ___________________________________________________________
ESX root User Password ________________________________________________

Module 2
IP address ___________________________________________________________
Subnet Mask _________________________________________________________
Gateway ____________________________________________________________
Hostname ___________________________________________________________
ESX root User Password ________________________________________________

Active Directory / File Server
IP address ___________________________________________________________
Subnet Mask _________________________________________________________
Gateway ____________________________________________________________
Hostname ___________________________________________________________
Domain Administrator _________________________________________________
Domain Administrator Password _________________________________________


vCenter Server
IP address ___________________________________________________________
Subnet Mask _________________________________________________________
Gateway ____________________________________________________________
Hostname ___________________________________________________________
Local Administrator ___________________________________________________
Local Administrator Password ___________________________________________

View Manager Server
IP address ___________________________________________________________
Subnet Mask _________________________________________________________
Gateway ____________________________________________________________
Hostname ___________________________________________________________
Local Administrator ___________________________________________________
Local Administrator Password ___________________________________________

Virtual Desktop DHCP Info
IP address range: <Start> ____________________ <End> ____________________
Subnet Mask _________________________________________________________
Gateway ____________________________________________________________
Base Hostname _______________________________________________________
Local Administrator ___________________________________________________
Local Administrator Password ___________________________________________

