
Step-by-step Virtualization configuration on an IBM eServer p5 server, part 3: Configure virtual devices

by Nam Keung

IBM eServer Solutions Enablement

May 2005

© Copyright IBM Corporation, 2005. All Rights Reserved.


All trademarks or registered trademarks mentioned herein are the property of their respective
holders.
Table of contents
Abstract
Introduction
Setup scenario
POWER5 machine type and hardware structure
Configure the Virtual devices in the VIO Server
Install the latest VIO Server fix pack
Install operating system on the client partitions
Conclusion
References
About the author
Trademarks
Part 3: Configure virtual devices

Abstract
This paper, the third in a three-part series, offers a step-by-step (screen-by-screen)
explanation of the process for configuring Virtual I/O devices on an IBM® eServer™ p5 model
9117-570. These steps serve as a broader guide for configuring Virtual I/O devices
on all current models of the IBM eServer pSeries® family of servers. The Virtual
devices created in this example will be shared by client partitions via the IBM
Virtual I/O Server (VIO Server), which is driven by the IBM POWER5™
Virtualization Engine™.

Note: A step-by-step illustration of the process for configuring a VIO Server is
explained in part one of this set of three white papers. The second paper in this
series demonstrated how to configure the client partitions.

Introduction
The reader is about to observe the process for configuring Virtual I/O devices on an
eServer p5 model 9117-570. Each of these virtual devices will be used by client
partitions through the administrative and management functionality built into the VIO
Server. Before beginning, you must have already installed the VIO Server on the same
system that will host the client partitions, and you must have already created the client
partitions that will use the Virtual devices.

The Virtual devices will support the needs of the client partitions with the following services
and functions:
• Micro-Partitioning™ support for up to 10 logical partitions to share one processor
• Virtual SCSI (VSCSI) disks that allow partitions to share physical storage adapters
and devices
• Automated CPU and memory reconfiguration
• Real-time partition configuration and load statistics
• Support for dedicated and shared processor logical partitioning (LPAR) groups
• Support for manual provisioning of resources
• Virtual networking

The virtual devices can be accessed from any of the following operating systems (installed
on a client partition):
• IBM AIX 5L™ Version 5.3
• SUSE Linux™ Enterprise Server 9 for POWER™
• Red Hat Enterprise Linux AS for POWER, Version 3

Setup scenario
The sample installation process discussed in this paper involves setting up virtual devices
using the POWER5 Virtualization Engine features that are available on the latest set of
IBM POWER-based pSeries hardware, running under the AIX 5L operating system. The
VIO Server, which you have already set up, will own a physical SCSI and a physical
Ethernet controller. The physical SCSI controller has three physical hard disks attached to
it. In this scenario, you should have already configured each client partition on a separate
physical hard disk.


POWER5 machine type and hardware structure


The eServer p5 model 9117-570 has two physical SCSI controllers, one located in slot
T14 and the other in slot T12. Each SCSI controller has three physical disks
attached to it. Two physical Ethernet controllers are located in slots T6 and C3.
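
If you want to double-check these slot assignments from the VIO Server command line, one hedged option is to display the vital product data for an adapter; the physical location code in the output includes the slot. For example (ent0 is used here only as an illustration):
> lsdev -dev ent0 -vpd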

Configure the Virtual devices in the VIO Server


The following processes configure the Virtual devices in the VIO Server partition:

1. Activate the Run_VIO_Server_profile under vioserv1_linux.


2. Log in as padmin. After login, run the List Device command (shown below) from
the command line:
> lsdev -virtual

This lists the Virtual adapters defined for Run_VIO_Server_profile. In this
scenario, it shows a Virtual Ethernet adapter and three Virtual SCSI adapters, as shown
below:
ent2 Available Virtual I/O Ethernet Adapter (l-lan)
vhost0 Available Virtual SCSI Server Adapter
vhost1 Available Virtual SCSI Server Adapter
vhost2 Available Virtual SCSI Server Adapter

Note: If a device is not active, then there was a problem in defining it. You can use
the Remove Device command (rmdev -dev vhost# -recursive) for each such
device; then restart the Virtual I/O Server if needed. On reboot, the system
configuration manager will detect the hardware and recreate the vhost device.
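
As an illustration only (vhost1 stands in here for whichever device is not active), the recovery might look like the following sketch. The cfgdev command asks the VIO Server to rescan for devices and can often recreate the adapter without a full reboot:
> rmdev -dev vhost1 -recursive
> cfgdev
> lsdev -virtual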

3. Select the appropriate physical Ethernet adapter (ent0), which will be used to
create the shared Ethernet adapter. The lsdev command (below) will show a list of
available physical adapters:
> lsdev -type adapter

The results of this lsdev command are shown below:


ent0 Available 2-Port 10/100/1000 Base-Tx PCI-X Adapter

4. Map the Virtual Ethernet adapter ent2 to the physical Ethernet adapter ent0. Run
the Make Virtual Device (mkvdev) command to map ent2 to ent0:
> mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 3

The defaultid value corresponds to the Port virtual LAN ID (PVID).

If this command is successful, you will see the results shown below:
ent3 Available
en3
et3

This means the shared Ethernet adapter ent3 has been created. To verify this, run
the lsdev command to list shared Ethernet adapter ent3:
> lsdev -type adapter
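
As an additional hedged check (assuming the -net flag of lsmap is supported at your VIO Server level), you can confirm that ent3 is backed by the physical adapter ent0 and the virtual adapter ent2:
> lsmap -all -net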


Set up TCP/IP on the shared Ethernet adapter interface so that the VIO Server can
connect to an external network. For the purposes of this scenario, assume that the
following parameters apply to this TCP/IP connection:
Hostname: vioserv1.austin.ibm.com
IP address: 9.3.41.117
Netmask: 255.255.255.0
Gateway: 9.3.41.1
Name server: 9.0.7.1
Domain: austin.ibm.com

To configure a TCP/IP connection for ent3, use the following command format:

$ mktcpip -hostname Hostname -inetaddr Address -interface Interface -netmask SubnetMask \
  -gateway Gateway -nsrvaddr NameServerAddress -nsrvdomain Domain -start

Where:
o Hostname is the hostname you defined for this partition.
o Address is the IP address to associate with the shared Ethernet adapter.
o Interface is the interface associated with the shared Ethernet adapter device.
In the example set forth in this paper, because the shared Ethernet adapter
device is ent3, the associated interface is en3.
o Subnetmask is the subnet mask address you defined for your subnet.
o Gateway is the gateway address you defined for your subnet.
o NameServerAddress is the address of your domain name server.
o Domain is the domain name of your name server.

In our scenario, you would type in the following:

> mktcpip -hostname vioserv1.austin.ibm.com -inetaddr 9.3.41.117 \
  -interface en3 -netmask 255.255.255.0 -gateway 9.3.41.1 \
  -nsrvaddr 9.0.7.1 -nsrvdomain austin.ibm.com -start

After this command runs successfully, the vioserv1_linux partition can ping an
external host from the command line.
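
For example, a quick connectivity check from the command line, using the gateway and name server addresses from the scenario above, might be:
> ping 9.3.41.1
> ping 9.0.7.1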

5. Map the Virtual SCSI adapters to the physical hard disks. In this scenario, there
are three physical hard disks—each with 70 gigabytes of storage. Each Virtual
SCSI adapter will be mapped to one physical hard disk. Two of the Virtual SCSI
adapters map to the entire hard drives, hdisk0 and hdisk1. The last Virtual SCSI
adapter only maps to a partial segment of hdisk2 (about 50 gigabytes). The
reason for this is that the VIO Server is also installed on this disk. Each hard disk is
made into a different volume group. Here are the steps to map the Virtual SCSI
adapters to the disks:

a. Create a volume group for each hard disk. To begin, check the current state of the physical volumes by running the lspv command:
> lspv

This command lists only hdisk2 as belonging to a volume group (rootvg). The reason
for this is that the VIO Server is installed on this disk, and rootvg is defined
when the server is installed. The other two hard disks are not assigned to any
volume group.


Run the mkvg command to define the volume group. Use rootvg_client0 for
hdisk0 and rootvg_client1 for hdisk1. The command format is shown here:
$ mkvg -f -vg VolumeGroup(Client) PhysicalVolume

In this scenario, the mkvg command would look as shown below (in bold). The
results of the command are also shown:
> mkvg -f -vg rootvg_client0 hdisk0
rootvg_client0

> mkvg -f -vg rootvg_client1 hdisk1
rootvg_client1

Now, if you run the lspv command again, it will list the mappings for the Virtual
SCSI adapters to the physical hard disks, as shown here:
hdisk0 rootvg_client0 active
hdisk1 rootvg_client1 active
hdisk2 rootvg active

Alternatively, you can run the following lsdev command:


> lsdev -type lv
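
A further hedged check, assuming the lsvg command is available in your VIO Server restricted shell, is to list the volume groups directly and confirm that rootvg_client0 and rootvg_client1 now exist:
> lsvg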

b. Now, it is necessary to create a logical volume for each Virtual SCSI adapter
from the volume groups. Because svtlnx1, svtlnx2, and svtlnx3 use hdisk2,
hdisk1, and hdisk0, respectively, it makes sense to name the logical
volumes svtlnx1_lv, svtlnx2_lv, and svtlnx3_lv, respectively.

To do this, run the mklv command to create the logical volumes, using the
command format:
$ mklv -lv LogicalVolume(Client) VolumeGroup LogicalVolumeSize

In this scenario, each of these three mklv commands is entered as shown
below (in bold). Notice that the results of the commands, which are displayed
back on the console, are also shown:
> mklv -lv svtlnx1_lv rootvg 50G
svtlnx1_lv

> mklv -lv svtlnx2_lv rootvg_client1 69888M
svtlnx2_lv

> mklv -lv svtlnx3_lv rootvg_client0 69888M
svtlnx3_lv

To verify that the logical volumes are created, run this lsdev command:
> lsdev -type lv
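
If you also want to see which logical volumes belong to each volume group, a hedged alternative (again assuming lsvg is available at your VIO Server level) is:
> lsvg -lv rootvg
> lsvg -lv rootvg_client0
> lsvg -lv rootvg_client1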

c. Next, you must map each Virtual SCSI adapter to its logical volume. When the
mapping is finished, the target devices vsvtlnx1, vsvtlnx2, and vsvtlnx3 will
correspond to svtlnx1_lv, svtlnx2_lv, and svtlnx3_lv, respectively.


Run the mkvdev command to map the Virtual adapters to the logical volumes,
using the command format shown below:
$ mkvdev -vdev LogicalVolume(Client) -vadapter vhost# -dev v(client_name)

Where:
o logicalVolume(Client) is a logical volume that you created.
o vhost# is your new Virtual SCSI adapter.
o v(client_name) is the name of the new target device which will be available
to the client partition.

In our scenario, you would type in the following mkvdev commands (in bold).
The results of the commands are also shown:
> mkvdev -vdev svtlnx1_lv -vadapter vhost0 -dev vsvtlnx1
vsvtlnx1 Available

> mkvdev -vdev svtlnx2_lv -vadapter vhost1 -dev vsvtlnx2
vsvtlnx2 Available

> mkvdev -vdev svtlnx3_lv -vadapter vhost2 -dev vsvtlnx3
vsvtlnx3 Available

After running these commands, the Virtual disks vsvtlnx1, vsvtlnx2, and
vsvtlnx3 are created for the client partitions.

Run the lsmap command to verify that vhost0, vhost1, and vhost2 use slots 10,
11, and 12, respectively:
> lsmap -vadapter vhost0
> lsmap -vadapter vhost1
> lsmap -vadapter vhost2

The last digits under physloc in the lsmap output represent the slot number of the
Virtual SCSI Server adapter, as displayed in the Virtual adapters window of the VIO
Server partition profile.

To verify the results, run the following lsdev command:
> lsdev -type disk
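
To see all of the virtual SCSI mappings in a single report, you can also run lsmap with the -all flag (a convenience rather than a required step):
> lsmap -all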

Now the VIO Server is ready for the client partitions to install the operating
system, which is discussed in the section "Install operating system on the client
partitions" later in this paper.


Install the latest VIO Server fix pack


As mentioned in part 1 of this white paper series, and as part of a best-practices
installation effort, no installation is complete until you have installed the latest fix packs. In
this case, you can download the VIO Server fix pack from the Virtual I/O Server support
center (see details in part 1). A link to this Web site is found in the References section at
the end of this paper.

If the fix pack is a tar file, save it in a directory on a partition that is running the
AIX 5L operating system. You can then untar the file. Ensure that NFS is
running on your AIX 5L partition. If it is not, start it from smitty, as follows:
Communications Applications and Services > NFS > Network File System (NFS)
> Configure NFS on this System > Start NFS
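
A quick way to confirm that NFS is active on the AIX 5L partition (this uses the standard AIX lssrc command, run on the AIX partition rather than on the VIO Server) is:
> lssrc -g nfs
The nfsd and rpc.mountd subsystems should be reported as active.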

For example, you can put the tar file at directory /images/vio_fixpak_3 in svtaix29.
Then, from the svtaix29 partition’s command line, execute the following commands:
> tar -xvf vio_fixpak_3.tar
> exportfs -i /images

From the VIO Server command line, run the following commands:
> oem_setup_env
> mkdir update update2
> mount svtaix29:/images update
> cp update/* update2

Ensure that there is a minimum of 200 megabytes of free disk space in the root (/)
file system; you can check with df -M. To allocate additional space to the root file
system, type the following:
> chfs -a size=+409600 /
> exit

As padmin, type in the following:


> updateios -dev update2/vio_fixpak_3

This sequence of steps installs the fix pack on the VIO Server partition. However, you
must remember to reboot the VIO Server for the changes to take effect.
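
For example, the reboot and a follow-up check might look like the following sketch, run from the padmin command line. The ioslevel command reports the installed Virtual I/O Server level, so you can compare its output before and after applying the fix pack:
> shutdown -restart
After the VIO Server comes back up, log in as padmin again and run:
> ioslevel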

For more detailed installation instructions related to Virtual I/O Server fix pack downloads,
visit the Web site found in the References section at the end of this paper.

Install operating system on the client partitions


After going through all the steps above, you are ready to install the operating system on
the client partitions. Put the operating system installation CD into the CD/DVD drive and
activate the installation profile for each client partition. Follow the installation steps to
boot the operating system from the CD/DVD.
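
Once an AIX 5L client partition is installed and booted, a hedged way to confirm that its disk is served by the VIO Server is to list the disk devices from that client partition; the virtual disk typically appears as a Virtual SCSI Disk Drive:
> lsdev -Cc disk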


Conclusion
Virtual devices that are configured on a pSeries server are available to be shared by client
partitions that are also configured on that same pSeries server. The IBM Virtual I/O
Server, enabled by POWER5 technology, facilitates this sharing of physical resources
(through IBM Virtualization Engine technologies).

Through a screen-by-screen approach, this paper has provided the reader with a sense of
the relative simplicity and speed with which Virtual devices can be implemented to interact
with client partitions through the VIO Server.

With the sophistication of the mainframe-like partitioning technology that is now available
with POWER5, there is no reason to postpone embarking on the “virtual” path—allowing
your enterprise to enjoy higher and more efficient use of its many server resources.


References

• White paper: Implementing Virtualization on an IBM eServer p5 server (part 1):
Configure VIO Server
ibm.com/servers/enable/site/peducation/abstracts/abs_4376.html

• White paper: Implementing Virtualization on an IBM eServer p5 server (part 2):
Configure client partitions
ibm.com/servers/enable/site/peducation/abstracts/abs_4372.html

• IBM Redbook: Introduction to Advanced POWER Virtualization on IBM p5 Servers,
Introduction and basic configuration (SG24-7940)
ibm.com/redbooks/abstracts/sg247940.html?Open

• IBM Redbook: Server Consolidation on IBM pSeries Systems (SG24-6966)
ibm.com/redbooks/abstracts/sg246966.html?Open

• White paper: IBM eServer POWER5 Processors Virtual SCSI Throughput Analysis, by
Elizabeth Stahl, January 2005
ibm.com/eserver/pseries/hardware/whitepapers/virtual_scsi.pdf

• IBM Redbooks Technote: Server Consolidation: A Comparison of Workload
Management and Partitioning (TIPS0426)
ibm.com/redbooks/abstracts/tips0426.html?Open

• Advanced POWER Virtualization on eServer p5 Web site
ibm.com/servers/eserver/pseries/ondemand/ve/resources.html

• IBM Virtual I/O Server support center
http://techsupport.services.ibm.com/server/virtualization/vios/download


About the author


Nam Keung is a senior technical consultant for IBM in Austin, Texas. He has worked in
the areas of AIX ISDN communications, AIX SOM/DSOM development, AIX multimedia
development, Microsoft® Windows® NT clustering technology, and Java™ performance.
His current assignment involves helping IBM Business Partners and solution providers in
their efforts to port and deploy applications to the pSeries platform. He also consults on
performance tuning and provides education for the pSeries platform.


Trademarks
© IBM Corporation 1994-2005. All rights reserved.
References in this document to IBM products or services do not imply that IBM intends to
make them available in every country.

The following terms are trademarks of International Business Machines Corporation in the
United States, other countries, or both: IBM, eServer, pSeries, AIX, AIX 5L, POWER,
POWER5, Micro-Partitioning, and Virtualization Engine.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft
Corporation in the United States, other countries, or both.

Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc., in the
United States, other countries, or both.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product or service names may be trademarks or service marks of others.

Information is provided "AS IS" without warranty of any kind.
