SAN Implementation Workshop
Exercise Guide
NETAPP UNIVERSITY
SAN Implementation Workshop: Welcome
2008 NetApp. This material is intended for training use only. Not authorized for reproduction purposes.
ATTENTION
The information contained in this guide is intended for training use only. This guide contains information
and activities that, while beneficial for the purposes of training in a closed, non-production environment,
can result in downtime or other severe consequences, and it is therefore not intended as a reference guide.
This guide is not a technical reference and should not, under any circumstances, be used in production
environments. To obtain reference materials, refer to the NetApp product documentation on the NOW site
(now.netapp.com).
COPYRIGHT
2008 NetApp. All rights reserved. Printed in the U.S.A. Specifications subject to change
without notice.
No part of this book covered by copyright may be reproduced in any form or by any means (graphic,
electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval
system) without prior written permission of the copyright owner.
NetApp reserves the right to change any products described herein at any time and without notice.
NetApp assumes no responsibility or liability arising from the use of products or materials described
herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product or
materials does not convey a license under any patent rights, trademark rights, or any other intellectual
property rights of NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign patents,
or pending applications.
TRADEMARK INFORMATION
NetApp, the NetApp logo, and Go further, faster, FAServer, NearStore, NetCache, WAFL, DataFabric,
FilerView, SecureShare, SnapManager, SnapMirror, SnapRestore, SnapVault, Spinnaker Networks,
the Spinnaker Networks logo, SpinAccess, SpinCluster, SpinFS, SpinHA, SpinMove, SpinServer, and
SpinStor are registered trademarks of Network Appliance, Inc. in the United States and other countries.
Network Appliance, Data ONTAP, ApplianceWatch, BareMetal, Center-to-Edge, ContentDirector, gFiler,
MultiStore, SecureAdmin, Smart SAN, SnapCache, SnapDrive, SnapMover, Snapshot, vFiler, Web Filer,
SpinAV, SpinManager, SpinMirror, and SpinShot are trademarks of NetApp, Inc. in the United States and/or
other countries.
Apple is a registered trademark and QuickTime is a trademark of Apple Computer, Inc. in the United States
and/or other countries.
Microsoft is a registered trademark and Windows Media is a trademark of Microsoft Corporation in the
United States and/or other countries.
RealAudio, RealNetworks, RealPlayer, RealSystem, RealText, and RealVideo are registered trademarks
and RealMedia, RealProxy, and SureStream are trademarks of RealNetworks, Inc. in the United States
and/or other countries.
All other brands or products are trademarks or registered trademarks of their respective holders and should
be treated as such.
NetApp is a licensee of the CompactFlash and CF Logo trademarks.
SAN Deployments
Exercise
Module 1: SAN Implementation Workshop
Estimated Time: None
EXERCISE
There is no exercise for this module.
Host Software Stack
Exercise
Module 2: Host Software Stack
Estimated Time: None
EXERCISE
There is no exercise for this module.
SAN Implementation Phases
Exercise
Module 3: SAN Implementation Phases
Estimated Time: None
EXERCISE
There is no exercise for this module.
FC Switching
Exercise
Module 4: FC Switching Concepts
Estimated Time: 40 minutes
In this exercise, you will disable an FC switch and set it up as if it came out of the box. Note
that FC connectivity is disrupted during this process.
TIME ESTIMATE
20 minutes
START OF EXERCISE
STEP    ACTION
1.
2.      Enter the following command to set the switch name. Use the host name of
        the switch in your pod, for example san201_brocade for pod 201.
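On Brocade Fabric OS, the switch name is typically set with the switchName command; a sketch for pod 201 (the prompt reflects the name after the change):

```
switch:admin> switchname "san201_brocade"
```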
3.
san205_brocade:admin> ipaddrset
Log out of the console server and log in by way of a Telnet connection to
confirm IP network connectivity to the FC switch.
4.
Configure...
Fabric parameters (yes, y, no, n): [no] y
Domain: (1..239) [1] 1
BB credit: (1..27) [16]
R_A_TOV: (4000..120000) [10000]
E_D_TOV: (1000..5000) [2000]
WAN_TOV: (0..30000) [0]
MAX_HOPS: (7..19) [7]
Data field size: (256..2112) [2112]
Sequence Level Switching: (0..1) [0]
Disable Device Probing: (0..1) [0]
Suppress Class F Traffic: (0..1) [0]
SYNC IO mode: (0..1) [0]
VC Encoded Address Mode: (0..1) [0]
Switch PID Format: (0..2) [1] 1
Per-frame Route Priority: (0..1) [0]
Long Distance Fabric: (0..1) [0]
(select defaults for remaining prompts)
5.
Enter the following command to set the time zone (offset from UTC):
san201_brocade:admin> tstimezone -5
6.
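Step 6 re-enables the switch after configuration. On Brocade Fabric OS this is typically done with the counterpart of the switchDisable command used later in this guide:

```
san201_brocade:admin> switchenable
```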
7.
Enter the following command to confirm that the switch has been enabled:
san201_brocade:admin> switchshow
switchName:   san201_brocade
switchType:   34.0
switchState:  Online
switchMode:   Native
switchRole:   Principal
switchDomain: 1
switchId:     fffc01
switchWwn:    10:00:00:05:1e:04:e9:96
zoning:       OFF
switchBeacon: OFF
Area Port Media Speed State
==============================
0 0 id N2 Online F-Port 50:0a:09:81:86:b8:24:a1
1 1 id N2 Online F-Port 50:0a:09:82:86:b8:24:a1
2 2 id N2 Online F-Port 50:0a:09:81:96:b8:24:a1
3 3 id N2 Online F-Port 50:0a:09:82:96:b8:24:a1
4 4 id N4 Online F-Port 21:00:00:e0:8b:86:30:3c
5 5 id N4 Online F-Port 21:01:00:e0:8b:a6:30:3c
6 6 id N4 Online F-Port 10:00:00:00:c9:58:29:62
7 7 id N4 Online F-Port 10:00:00:00:c9:58:29:63
8 8 id N4 Online F-Port 10:00:00:00:c9:43:f2:d6
9 9 id N4 Online F-Port 10:00:00:00:c9:43:f2:d7
10 10 id N4 No_Light
11 11 id N4 No_Light
12 12 id N4 No_Light
13 13 id N4 No_Light
14 14 id N4 No_Light
15 15 id N4 No_Light
END OF EXERCISE
In this exercise, you inspect the FC fabric configuration to ensure that the hosts and the
storage system are properly connected into the switch.
TIME ESTIMATE
30 minutes
START OF EXERCISE
STEP    ACTION
1.
Use PuTTY or establish a Telnet connection to log on to the FC switch in your pod
as the admin user. (The Brocade default password for admin is: password.)
Enter the following command at the Brocade FC switch prompt to view the current
nodes connected to the switch as well as other FC switch parameters:
san201_brocade:admin> switchshow
switchName:   san201_brocade
switchType:   34.0
switchState:  Online
switchMode:   Native
switchRole:   Principal
switchDomain: 1
switchId:     fffc01
switchWwn:    10:00:00:05:1e:04:e9:96
zoning:       OFF
switchBeacon: OFF
Area Port Media Speed State
==============================
0 0 id N2 Online F-Port 50:0a:09:81:86:b8:24:a1
1 1 id N2 Online F-Port 50:0a:09:82:86:b8:24:a1
2 2 id N2 Online F-Port 50:0a:09:81:96:b8:24:a1
3 3 id N2 Online F-Port 50:0a:09:82:96:b8:24:a1
4 4 id N4 Online F-Port 21:00:00:e0:8b:86:30:3c
5 5 id N4 Online F-Port 21:01:00:e0:8b:a6:30:3c
6 6 id N4 Online F-Port 10:00:00:00:c9:58:29:62
7 7 id N4 Online F-Port 10:00:00:00:c9:58:29:63
8 8 id N4 Online F-Port 10:00:00:00:c9:43:f2:d6
9 9 id N4 Online F-Port 10:00:00:00:c9:43:f2:d7
10 10 id N4 No_Light
11 11 id N4 No_Light
12 12 id N4 No_Light
13 13 id N4 No_Light
14 14 id N4 No_Light
15 15 id N4 No_Light
Observe that some FC ports have remote FC nodes connected. These nodes
are all connected to F-Ports on the fabric (fabric ports).
Observe also that this FC switch is not zoned (zoning: OFF).
According to the output you get from switchshow, how many FC switches are
there in this FC fabric? ________________________________
You will now identify each block of F-ports.
2.
Use PuTTY or establish a Telnet connection to your Solaris host prompt and enter
the following command to view the FC initiator WWPNs of the FC HBA ports on
your host:
$ fcinfo hba-port
Observe that the WWPNs of your Solaris host start with 21:00 and 21:01 (for
ports 0 and 1). You should be able to locate the WWPNs of your host in the output
of switchshow on the Brocade FC switch. They should be connected to a fabric
port (F-Port) and online.
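For reference, fcinfo hba-port output on Solaris 10 looks roughly like the abridged sketch below. The WWNs match the QLogic entries in the switchshow output above; the device names and other fields are illustrative:

```
$ fcinfo hba-port
HBA Port WWN: 210000e08b86303c
        OS Device Name: /dev/cfg/c1
        Manufacturer: QLogic Corp.
        State: online
HBA Port WWN: 210100e08ba6303c
        OS Device Name: /dev/cfg/c2
        Manufacturer: QLogic Corp.
        State: online
```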
Write down (or copy and paste into a new text file) the WWPNs of the FC initiator
ports of your Solaris host:
Port 0 WWPN: __________________________________________________
Port 1 WWPN: __________________________________________________
To which port on the Brocade FC switch do the Solaris FC HBA ports connect?
Solaris FC HBA Port 0 connects to Brocade port _______________________
Solaris FC HBA Port 1 connects to Brocade port _______________________
3.
While still at the prompt of your Solaris host, enter the following command to
view the target WWPNs assigned to the target FC ports on the storage controller:
rsh <storage_ctlr> fcp show adapter
Repeat the command for both storage controllers in the dual controller storage
system. Observe that the WWPNs of your storage controllers start with 50:0a.
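An abridged sketch of typical fcp show adapter output follows (the exact field set varies by Data ONTAP version; the WWPN shown matches port 0 of the switchshow output above):

```
storage1> fcp show adapter
Slot:            0c
Description:     Fibre Channel Target Adapter 0c
Status:          ONLINE
FC Portname:     50:0a:09:81:86:b8:24:a1
```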
Write down (or copy and paste into a new text file) the WWPNs of the FC target
ports of your storage system:
Storage 1 Port 0c WWPN: __________________________________________
Storage 1 Port 0d WWPN: __________________________________________
Storage 2 Port 0c WWPN: __________________________________________
Storage 2 Port 0d WWPN: __________________________________________
To which ports on the Brocade FC switch do the storage system FC target ports connect?
Storage 1 FC Port 0c connects to Brocade port _______________________
Storage 1 FC Port 0d connects to Brocade port _______________________
Storage 2 FC Port 0c connects to Brocade port _______________________
Storage 2 FC Port 0d connects to Brocade port _______________________
4.
Use PuTTY or establish a Telnet connection to your VMware ESX Server host
prompt and enter the following command to view the FC initiator WWPNs of the
FC HBA ports on your host.
$ esxcfg-info | grep -i "port number"
Observe that the WWPNs of your ESX Server host start with 10:00. However, you
will see only two WWPNs starting with 10:00 on your ESX Server, although there
are four WWPNs starting with 10:00 connected to the Brocade switch. Because both
the VMware ESX Server host and the Linux host in your pod use Emulex FC
HBAs, the WWPNs of both hosts start with 10:00. So you need to look at the other
digits of the WWPN to locate where each port connects on the Brocade switch.
Write down (or copy and paste into a new text file) the WWPNs of the FC initiator
ports of your VMware ESX Server host:
Port 0 WWPN: __________________________________________________
Port 1 WWPN: __________________________________________________
To which port on the Brocade FC switch do the VMware ESX Server FC HBA
ports connect?
ESX FC HBA Port 0 connects to Brocade port _______________________
ESX FC HBA Port 1 connects to Brocade port _______________________
Use PuTTY or establish a Telnet connection to your Linux host prompt and enter
the following command to view the FC initiator WWPNs of the FC HBA ports on
your host. Replace <#> with the port number.
$ cat /sys/class/scsi_host/host<#>/port_name
Write down (or copy and paste into a new text file) the WWPNs of the FC initiator
ports of your Linux host:
Port 0 WWPN: __________________________________________________
Port 1 WWPN: __________________________________________________
To which port on the Brocade FC switch do the Linux host FC HBA ports
connect?
Linux FC HBA Port 0 connects to Brocade port _______________________
Linux FC HBA Port 1 connects to Brocade port _______________________
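On Linux, the port_name attribute is reported as a raw 0x-prefixed hex value, while switchshow prints colon-separated octets. A small sed one-liner can convert between the two formats for easy comparison (the sample value is one of the Emulex WWPNs from the switchshow output above):

```shell
# Convert a raw sysfs WWPN (0x-prefixed hex) into the colon-separated
# form that Brocade switchshow displays.
raw="0x10000000c9582962"
wwpn=$(echo "$raw" | sed -e 's/^0x//' -e 's/../&:/g' -e 's/:$//')
echo "$wwpn"    # 10:00:00:00:c9:58:29:62
```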
END OF EXERCISE
In this exercise, you save the current configuration of the FC switch into a file on a server
host. Next, you zone the FC switch and save the configuration with zoning enabled. Then,
you reload the original configuration from the server host back onto the FC switch.
TIME ESTIMATE
90 minutes
START OF EXERCISE
STEP    ACTION
1.
2.
You will now save the current configuration of the FC switch to a file on your
workstation.
http://san<pod#>_brocade
Click the Admin button. You will be prompted to enter a user name and
password. Use admin and password. The Switch Admin dialog box will
appear. (Be sure that pop-up windows are not blocked for this site in Microsoft
Internet Explorer.)
Click the Configure main tab and select the Upload/Download sub-tab as
shown here.
Type the IP address of the Solaris host in your pod, in the Host IP text box.
Enter root in the User Name text box and passwd in the Password text box.
Type /tmp/san<pod#>FCSwitchConfig in the File Name box.
Click Apply.
Click Yes.
Observe the Upload/Download Progress bar and the message log. You should
see a message reporting ConfigUpload completed successfully as shown here.
Now you will create FC aliases for the WWPNs of the FC initiator ports of your
Solaris, ESX, and Linux hosts and for the FC target ports of your dual storage
controller system.
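The host-side aliases follow the same aliCreate pattern as the storage aliases in this step; for example, for the Solaris initiator ports (the alias name SRV_SOLARIS1_c1 is illustrative, and the WWPNs are the QLogic values from the switchshow output earlier):

```
san201_brocade:admin> aliCreate "SRV_SOLARIS1_c1",
"21:00:00:e0:8b:86:30:3c; 21:01:00:e0:8b:a6:30:3c"
```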
Create two aliases each containing two target ports on the storage system.
san201_brocade:admin> aliCreate "STO_FAS3050_cl1",
"50:0a:09:81:86:b8:24:a1; 50:0a:09:82:86:b8:24:a1"
san201_brocade:admin> aliCreate "STO_FAS3050_cl2",
"50:0a:09:81:96:b8:24:a1; 50:0a:09:82:96:b8:24:a1"
Display aliases:
san201_brocade:admin> aliShow
Observe that you created aliases that contain two WWPNs. When you specify
more than one WWPN, separate the WWPNs with a semicolon (;). You could also
assign just one WWPN to a given alias; in that case, simply put the single WWPN
between the quotation marks by itself.
4.
Create an FC zone containing the Linux host and the storage system:
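The zoneCreate command for this zone follows the same pattern as the ESX zone in this exercise; a sketch, assuming a host alias named SRV_LINUX1_c1 was created for the Linux initiators:

```
san201_brocade:admin> zoneCreate "ZNE_LINUX1",
"SRV_LINUX1_c1; STO_FAS3050_cl1; STO_FAS3050_cl2"
```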
Create an FC zone containing the VMware ESX Server host and the storage
system:
san201_brocade:admin> zoneCreate "ZNE_ESX1",
"SRV_ESX1_c1; STO_FAS3050_cl1; STO_FAS3050_cl2"
Display FC zones:
san201_brocade:admin> zoneShow
Observe that the storage system is part of all FC zones.
5.
transfer option.
san201_brocade:admin> configUpload
Observe that the switch configuration file uploaded to the Solaris host is named
config.txt by default and it is created in the root directory /. You can verify this
by logging onto your Solaris host and running the ls / command.
6.
You will use the Brocade SwitchExplorer GUI to inspect the zoning of the FC
switch.
Browse to the IP address of the FC switch in your pod (http://san<pod#>_brocade)
and click the Zone Admin button in the lower-left part of the GUI as shown below.
You will be prompted to authenticate. Enter admin for user and password for
password.
7.
Observe the Alias, Zone, QuickLoop, Fabric Assist, and Config tabs.
Observe that there are three FC zones available in the Name drop-down list.
These are the FC zones you created previously at the Brocade FC switch prompt.
As you select each zone, observe the Aliases change in the Zone Members list
on the left.
Observe that there is only one configuration available in the Name drop-down
list. This is the configuration you created previously at the Brocade FC switch
prompt. Observe also that this configuration is reported to be the effective
configuration as shown by the Effective Config field in the upper-right portion
of the screen.
You created FC aliases, zones, and a configuration using the Brocade CLI in this
lab exercise. You could also use the Brocade SwitchExplorer GUI to manage the
FC switch, including FC zones and configurations.
8.
Now you will reload the original configuration of the FC switch that you saved
at the beginning of this lab exercise, effectively returning the switch to its initial
state before you created FC zones.
First, use PuTTY or establish a Telnet connection to your FC switch prompt and
enter the following command to disable the switch. The switch must be
disabled while a new configuration is being downloaded into the switch.
san201_brocade:admin> switchDisable
Next, browse back to the IP address of the FC switch in your pod.
http://san<pod#>_brocade
You should be back in the Brocade SwitchExplorer GUI.
Click the Admin button. You will be prompted to enter a user name and
password. Use admin and password. The Switch Admin dialog box will
appear. (Be sure that pop-up windows are not blocked for this site in Internet
Explorer).
Click the Configure main tab and select the Upload/Download sub-tab as
shown here.
Select the Config Download to Switch radio button. This radio button is not
available unless the FC switch is disabled.
Type the IP address of the Solaris host in your pod, in the Host IP text box.
Enter root in the User Name text box and passwd in the Password text box.
Type /tmp/san<pod#>FCSwitchConfig in the File Name box.
Click Apply.
Make sure that you are downloading the configuration file of the switch in
YOUR pod (double-check the pod number in the file name) and confirm the
prompt below.
You should see that the download completed successfully in the status bar as
shown here.
If you wanted to remove the existing FC zones on the FC switch to have a clean-sweep
download instead of a cumulative download, you would need to complete
the following steps BEFORE the download:
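On Brocade Fabric OS, clearing the existing zoning configuration is typically done with the following sequence (a sketch; this removes all zoning objects, so use it only when a clean sweep is intended):

```
san201_brocade:admin> cfgDisable
san201_brocade:admin> cfgClear
san201_brocade:admin> cfgSave
```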
END OF EXERCISE
FC Linux
Exercise
Module 5: FC Linux
Estimated Time: 20 minutes
You will log in to your systems and check the version of your OS, whether any
HBA drivers are currently installed, and the versions of those drivers. If
they are not the correct versions, you will update them either by downloading them
from the Web or by installing them from the <class_files> location. You will also
confirm that the correct multipathing RPMs are installed and, if not, update the files as needed.
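A few typical discovery commands for this kind of check are sketched below (package and module names such as lpfc apply to Emulex HBAs and may differ in your environment):

```shell
# Kernel and OS release
uname -r
cat /etc/redhat-release 2>/dev/null || cat /etc/issue 2>/dev/null || true

# Installed Emulex driver packages and loaded module, if any
command -v rpm >/dev/null 2>&1 && rpm -qa | grep -i lpfc || true
command -v lsmod >/dev/null 2>&1 && lsmod | grep -i lpfc || true
```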
OBJECTIVES
Discover and document the host OS, HBA, HBA driver, and firmware versions on
the host
Read and interpret the compatibility matrix to confirm the correct setup
TIME ESTIMATE
10 minutes
START OF EXERCISE
STEP    ACTION
1.
SSH into your group's host using PuTTY or a similar utility.
2.
4.
___
Does the current support matrix allow SnapDrive for UNIX with your
configuration?
_________________________________________________________
____
END OF EXERCISE
In this exercise, you will install the correct NetApp Host Utilities for your environment.
You will then remove any previously installed HBA drivers and install the supported driver
and utilities for the HBAs that are installed. Once they are installed, you will set the
HBA and driver parameters, which includes unloading the driver and updating the
modprobe.conf file. Finally, you will record the worldwide port names (WWPNs) for
future reference.
OBJECTIVES:
Discover and document the host OS, HBA, HBA driver, and firmware versions on
the host
TIME ESTIMATE:
20 minutes
START OF EXERCISE
STEP    ACTION
1.
2.
The NetApp Host Utilities are available for download at the following location
on the NOW site.
http://now.netapp.com/NOW/download/software/sanhost_linux/Linux/
The NetApp Host Utilities have been provided for you in the <class_files>
location. Replace the <class_files> string with the exact location as specified by
your instructor.
3.
4.
Check for previously installed FC HBA drivers. If previous FC HBA drivers are
not found, move to Step 5.
First verify that the LPFC driver is loaded in the kernel. Run:
modprobe -c | grep lpf
If it is, unload it using:
modprobe -r lpfc
Next, verify that the LPFC driver package is installed. Run:
rpm -qa | grep lpf
NOTE: The drivers may be installed by the OS, but the full utilities may not be
available. For that reason, it is suggested to reinstall the full driver suite.
5.
For Emulex drivers, change to the directory where the driver installer
files are located (see Steps 5 and 6) and run the ./lpfc-install --uninstall
command to remove the Emulex driver.
Decompress and extract the Emulex FC HBA driver compressed archive file.
cp -R <class_files>/Emulex /tmp
cd /tmp/Emulex
gunzip lpfc_2.6_driver_kit-8.0.16.27-1.tar.gz
tar xvf lpfc_2.6_driver_kit-8.0.16.27-1.tar
6.
7.
Move to the driver installer directory (/tmp/Emulex/lpfc_2.6_driver_kit-8.0.16.27-1).
It is always a best practice to have a quick look at the README
file before installing a driver. Next, run the driver setup script:
cd lpfc_2.6_driver_kit-8.0.16.27-1
./lpfc-install
The installer builds the LPFC driver (rebuilding the driver in kernel
space and installing it as a dynamically loadable kernel module) and
updates the ramdisk to load the LPFC driver in the kernel upon
bootup. Observe that the installation program saves the current
ramdisk image using a filename ending with the .elx extension.
Verify that the LPFC driver was successfully built and installed as a kernel
driver module:
ls /lib/modules/<kernel_build_number>/kernel/drivers/scsi/lpfc
<kernel_build_number>: Recall from the Host Configuration Check exercise
that you can find the kernel build number using the uname -a command.
8.
Reboot the Linux host to allow the LPFC driver to be loaded in the kernel upon
reboot:
reboot
9.
Verify that the LPFC FC HBA driver was successfully loaded in the Linux
kernel:
modprobe -c | grep lpfc
This step is informational only. You can read through it, but do not run the
commands shown.
Device mapper multipathing (DM-MP) is used in these lab exercises. No special
LPFC driver settings are required for supported Emulex FC HBAs with
dm-multipath. However, if dm-multipath is not used, the following step would need
to be completed:
Unload the LPFC driver.
modprobe -r lpfc
Edit the /etc/modprobe.conf configuration file and add the following
parameters:
options lpfc lpfc_nodev_tmo=180
Reload the LPFC driver module in the kernel.
modprobe -v lpfc
Update the ramdisk image with the new LPFC parameter.
/usr/src/lpfc/lpfc-install --createramdisk
Reboot the Linux host using the updated ramdisk image.
reboot
11.
You will install the Emulex HBAnyware utility now. First change directory to
the Emulex driver and utilities class files directory and extract the Emulex Linux
Applications tar file.
cd /tmp/Emulex
tar xvf elxlinuxapps-3.0a14-8.0.16.27-1-1.tar
The files will be extracted in the ElxLinuxApps-3.0a14-8.0.16.27-1 directory.
Next, install the Emulex Linux Applications.
cd ElxLinuxApps-3.0a14-8.0.16.27-1
./install
When prompted, select Local Mode for the mode of operation of
HBAnyware.
When prompted, type y (yes) to allow the user to change the operation mode
of HBAnyware using the set_operating_mode script.
12.
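Step 12 records the WWPNs for future reference. One way to read them is sketched below; the helper name print_wwpns is hypothetical, the /sys/class/scsi_host path is the one used in the Host Configuration Check exercise, and on some driver versions the attribute lives under /sys/class/fc_host instead:

```shell
# Print every port_name attribute found under a sysfs class directory.
# WWPNs appear as 0x-prefixed hex values (e.g. 0x10000000c9582962).
print_wwpns() {
    for f in "$1"/host*/port_name; do
        [ -e "$f" ] && cat "$f"
    done
    return 0
}

# On the lab host, try both class directories:
print_wwpns /sys/class/scsi_host
print_wwpns /sys/class/fc_host
```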
END OF EXERCISE
In this exercise, you will complete the HBA setup by configuring your system with
dm-multipathing. Once that is completed, you will create a file system on a LUN and
mount it. Then you will create another LUN, assign it to an igroup, and access it as a raw
device.
TIME ESTIMATE:
45 minutes
START OF EXERCISE
STEP    ACTION
1.
To configure dm-multipathing:
At the end of the list, add the local SCSI devices to be excluded:
devnode "^sd[a]$"
The $ sign prevents multipathing from also excluding paths such as
/dev/sdab when a high number of LUNs is in use.
Edit the device-specific section at the end of the file. You may leave
the current devices section commented out, or remove it altogether.
Copy and paste the section that looks like the one below from the
<class_files>/multipath.devs file into the /etc/multipath.conf file:
devices {
    device {
        vendor                  "NETAPP "
        product                 "LUN"
        path_grouping_policy    group_by_prio
        getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
        prio_callout            "/opt/netapp/santools/mpath_prio_ontap /dev/%n"
        features                "1 queue_if_no_path"
        path_checker            readsector0
        failback                immediate
    }
}
2.
This will display all LUNs that have been discovered by the HBAs.
You may see multiple paths to the same LUN depending on the paths
available.
Note the four /dev/sdX labels for each path to the LUN fslun discovered:
fslun/host1: /dev/sd___  /dev/sd___  /dev/sd___  /dev/sd___
fslun/host2: /dev/sd___  /dev/sd___  /dev/sd___  /dev/sd___
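One way to list the LUNs and their /dev/sdX paths is the sanlun utility from the NetApp Host Utilities, which appears later in this exercise (a sketch; output format varies by Host Utilities version):

```
sanlun lun show all
```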
3.
Example output:
[root@kc105b1 ~]# multipath -v3 -d ll
load path identifiers cache
ux_socket_connect error
#
# all paths in cache :
#
dm-0 blacklisted
dm-1 blacklisted
md0 blacklisted
ram0 blacklisted
ram10 blacklisted
ram11 blacklisted
ram9 blacklisted
sda blacklisted
#
# all paths :
#
Take a look at the device mapper device directory and observe the
DM-MP devices currently available.
ls -l /dev/mapper
Example output:
[root@san102rh ~]# ls -l /dev/mapper
total 0
Take another look at the device mapper device directory. New devices
should appear for the three LUNs you discovered earlier. Observe that
these devices are named mpath#, where # is an indicator of the order
in which these devices were created.
ls /dev/mapper
Example output:
[root@san102rh ~]# ls -l /dev/mapper
total 0
crw------- 1 root root 10, 63 Jul 31 19:34 control
brw-rw---- 1 root disk 253, 2 Jul 31 19:34 mpath0
brw-rw---- 1 root disk 253, 1 Jul 31 19:34 mpath1
brw-rw---- 1 root disk 253, 0 Jul 31 19:34 mpath2
Example output:
[root@kc106b9-e0 ~]# multipath -d -l
mpath0 (360a98000433461504e342d4244645735)
[size=500 MB][features="1 queue_if_no_path"][hwhandler="0"]
\_ round-robin 0 [active]
\_ 1:0:1:0 sdb 8:16 [active][ready]
\_ 1:0:2:0 sdc 8:32 [active][ready]
\_ round-robin 0 [enabled]
\_ 1:0:3:0 sdd 8:48 [active][ready]
\_ 1:0:4:0 sde 8:64 [active][ready]
NOTE: The /dev/mapper devices are persistent across reboots, but the
/dev/sdX devices are not. Now you need to correlate, for each LUN, the
/dev/sdX devices with its single mpath# DM-MP device.
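One way to correlate them is to parse `multipath -ll`-style output, pairing each mpath map header with the sdX path lines beneath it. A hedged sketch against sample text modeled on the example output shown earlier (the WWIDs and device names are placeholders, and field positions on your host may differ):

```shell
# Sample standing in for `multipath -ll` output -- illustrative only.
sample='mpath0 (360a98000433461504e342d4244645735)
\_ round-robin 0 [active]
 \_ 1:0:1:0 sdb 8:16 [active][ready]
 \_ 1:0:2:0 sdc 8:32 [active][ready]
mpath1 (360a98000433461504e342d4244645736)
 \_ 1:0:3:1 sdf 8:80 [active][ready]'

# Remember the current map name on each mpath header line, then print it
# next to every H:C:T:L path line that follows.
pairs=$(printf '%s\n' "$sample" | awk '
  /^mpath/ { map = $1 }
  /[0-9]+:[0-9]+:[0-9]+:[0-9]+/ { print map, $3 }')
printf '%s\n' "$pairs"
```

On the live host, `multipath -ll | awk '...'` with the same awk body gives the mpath#-to-sdX pairing directly.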
4.
Verify that the mountpoint has been created by using the df command
6.
7.
Once the LUNs on the storage controller are mapped to the host
system, and the HBAs are refreshed, the raw device will show up as
another device in the /dev/mapper location.
Use the Linux Logical Volume Manager (LVM) to create a volume group from the
two raw disk devices that are available (rawlun and rawlun2). You will then
create a logical volume with an ext3 file system and prove that it is accessible
by writing directories to it.
Discover the device names assigned to rawlun and rawlun2.
sanlun lun show all | grep rawlun
Match the device labels from the previous command to the device within
the multipath configuration file.
multipath -d -l
For example: mpath0, mpath1
What is the mpath # of rawlun?
____________________________________
What is the mpath # of rawlun2?
___________________________________
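The LVM sequence for this task can be sketched as follows. The mpath0 and mpath1 names below are placeholders for the rawlun/rawlun2 maps you recorded above, and the volume group/logical volume names match the lvmvg-datalv device used later in this exercise; the commands are printed for review rather than executed:

```shell
# Hedged sketch of the LVM steps: initialize the multipath devices as
# physical volumes, group them, carve a logical volume, and put ext3 on it.
# Substitute the mpath# names you recorded for rawlun and rawlun2.
cmds=$(cat <<'EOF'
pvcreate /dev/mapper/mpath0 /dev/mapper/mpath1
vgcreate lvmvg /dev/mapper/mpath0 /dev/mapper/mpath1
lvcreate -l 100%FREE -n datalv lvmvg
mkfs.ext3 /dev/mapper/lvmvg-datalv
EOF
)
printf '%s\n' "$cmds"
```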
List the contents of the device mapper device directory and observe
that the lvmvg-datalv volume you just created now shows up as a DM-MP device.
ls /dev/mapper
Mount the new logical volume by first editing the /etc/fstab file.
Example: device  mount_point  type  options  dump  pass
/dev/mapper/lvmvg-datalv  /mnt/lvmvol  ext3  defaults  0  0
Create a mountpoint.
mkdir /mnt/lvmvol
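Putting the two steps together, a sketch that builds the fstab line from the values above and appends it to a scratch file (not the real /etc/fstab) so you can review it first:

```shell
# Build the fstab entry for the new logical volume from its device and
# mountpoint, and append it to a scratch copy for review.
dev=/dev/mapper/lvmvg-datalv
mnt=/mnt/lvmvol
line="$dev $mnt ext3 defaults 0 0"
fstab=$(mktemp)
printf '%s\n' "$line" >> "$fstab"
cat "$fstab"
```

After copying the same line into the real /etc/fstab, `mount /mnt/lvmvol` (or `mount -a`) mounts the volume.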
END OF EXERCISE
In this exercise, you will run a script that creates a clone of the fslun that was
mounted and written to in the previous labs. You will then change files on the
clone and compare them to the files on the source to verify that the clone is
independent of the source. After verifying the file differences, you will run a
script to destroy the LUNs and then flush the multipath maps on the host to
clean up.
TIME ESTIMATE:
20 minutes
START OF EXERCISE
1.
2.
If the directories or files do not exist in the data directory, create a
directory structure and some files so that you can verify correct clone
creation.
Run the script to create a clone of the fslun on the storage system
<class_files>/basiclunclone.sh <storage_ctrler_ip>
Normally you would confirm that the data has been quiesced and that
the file system has been unmounted to guarantee a consistent snapshot
before running this script.
If you get errors when running the script, you may need to run the
dos2unix basiclunclone.sh command to remove ^M characters at the
end of each line.
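The effect of dos2unix can be reproduced with standard tools if it is not installed. A small self-contained demonstration that detects and strips the ^M (carriage return) characters, using a scratch file with fake CRLF endings rather than the real script:

```shell
# Create a scratch file with DOS (CRLF) line endings to stand in for a
# script copied from Windows.
f=$(mktemp)
printf 'echo hello\r\necho world\r\n' > "$f"

# Count lines containing a carriage return before and after stripping.
crlf_before=$(grep -c "$(printf '\r')" "$f" || true)
tr -d '\r' < "$f" > "$f.fixed"     # equivalent of dos2unix
crlf_after=$(grep -c "$(printf '\r')" "$f.fixed" || true)
echo "$crlf_before $crlf_after"
```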
3.
4.
Rescan the HBAs, verify that the LUN clone was discovered, and view the
paths used by DM-mapper
Rescan the FC HBAs to confirm that all mapped LUNs are discovered
/usr/sbin/lpfc/lun_scan all
Use the sanlun command to verify that the LUN was discovered
sanlun lun show | grep fslun
What are the sdX mappings that are tied to the cloned LUN? (list 2)
_________________________________________________________
Create a mountpoint and mount the new LUN clone. Edit files and verify
localized changes using diff command. There should be no changes that
occur on the source LUN after modifying the file on the cloned LUN.
Create a mountpoint
mkdir /mnt/fslun-clone
Mount the clone and edit a file created in a previous lab (e.g., truth.txt)
mount /dev/mapper/(cloned lun mpath#) /mnt/fslun-clone
cd /mnt/fslun-clone
Use the diff command to verify that the changes occurred only in the
cloned LUN and had no effect on the source LUN
diff /mnt/fslun/truth.txt /mnt/fslun-clone/truth.txt
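The independence check works the same way with plain files. A self-contained sketch in which two scratch directories stand in for the source LUN and its clone: editing the "clone" copy leaves the "source" untouched, and diff reports exactly that difference:

```shell
# Scratch directories standing in for the mounted source LUN and its clone.
src=$(mktemp -d)
clone=$(mktemp -d)

echo "the truth" > "$src/truth.txt"
cp "$src/truth.txt" "$clone/truth.txt"          # simulate the cloned copy
echo "changed on the clone" >> "$clone/truth.txt"

# diff exits nonzero when the files differ; capture its report.
out=$(diff "$src/truth.txt" "$clone/truth.txt" || true)
printf '%s\n' "$out"
```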
Unmount the cloned LUN and flush the mappings within the DM-multipath configuration
multipath -F
Run the script to destroy the cloned LUN on the storage system, then rescan the
HBAs, and verify that the mapping for the cloned LUN has been removed
If you get errors when running the script, you may need to run the
dos2unix basiclunclone.sh command to remove ^M characters
at the end of each line.
Inspect the LUNs currently available on the host and observe that an
<Unknown> LUN shows up. This is fslun-clone. It shows up as
<Unknown> because the LUN does NOT exist anymore on the
storage system. However, the device files are still there on the Linux
host.
sanlun lun show all
Note the four /dev/sdX labels for each path to the <Unknown> LUN
<unknown>/host1
/dev/sd___
<unknown>/host2
/dev/sd___
/dev/sd___
/dev/sd___
/dev/sd___
/dev/sd___
/dev/sd___
/dev/sd___
Clean up the dangling Linux device files on the host. Replace <X> in
the command below with the device identifiers noted above for the
<unknown> LUN. Important: ENSURE that you only REMOVE the
<UNKNOWN> DEVICES.
rm /dev/sd<X>
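Removing the stale device files one by one is error-prone; a loop over the noted names is safer. A sketch using a scratch directory in place of /dev, with sdi through sdl as placeholder names (on the host, substitute only the devices you recorded as <Unknown>):

```shell
# Scratch directory standing in for /dev, populated with fake stale
# device files named like the <Unknown> entries.
devdir=$(mktemp -d)
for d in sdi sdj sdk sdl; do : > "$devdir/$d"; done

# Remove every noted dangling device file in one pass.
for d in sdi sdj sdk sdl; do rm -f "$devdir/$d"; done
ls "$devdir" | wc -l
```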
Use the sanlun command to confirm that the LUN clone (fslun-clone)
is not available on the host any more. The <Unknown> LUN should
be gone.
sanlun lun show all
Use the multipath command to confirm that the devices have been
removed from the mapper configuration
multipath -v3
END OF EXERCISE
FC & IP Sun
Solaris
Exercise
Module 6: FC and IP Solaris
Estimated Time: 3 hours
In this exercise, you will verify that the Solaris host is compatible with the NetApp SAN
Support Matrix Solaris for FCP and iSCSI. The NetApp SAN Support Matrix is available as
a PDF on http://now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config and is also
available as a database application at http://now.netapp.com/matrix/mtx/login.do.
TIME ESTIMATE:
20 minutes
E6-1
SAN Implementation Workshop: FC and IP Solaris
2008 NetApp. This material is intended for training use only. Not authorized for reproduction purposes.
START OF EXERCISE
TASK 1: INTERPRET A PARTICULAR LINE IN THE NETAPP SAN SUPPORT MATRIX SOLARIS
1.
which version
2.
Keep in mind that the NetApp FC SAN Support Matrix is also available
as a researchable database here:
http://now.netapp.com/matrix/mtx/login.do
TASK 2: VERIFY WHETHER OR NOT THE SOLARIS HOST COMPLIES WITH THE
NETAPP SAN SUPPORT MATRIX (FCP) SOLARIS
Your Solaris host is running Solaris 10 Update3. All QLogic FC HBA drivers and Solaris FC
software stack components are included by default with Sol10_Update3. However, the default
driver and firmware may not be the best suited for the particular FC HBA and Solaris host
hardware used. It is always a best practice to verify the firmware and the driver version of the
FC HBA to make sure it is supported by the NetApp support matrix.
This task shows you how to verify that the packages required for the Solaris FC software
stack components are installed on your host.
You will need to complete these steps on the Solaris host.
1.
Consider the line item 140 in the NetApp SAN Support Matrix Solaris (July
2007) available at:
http://now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/Ne
tAppSANSupport_July2007RevB.pdf#page=95
This is a supported FCP configuration. You need to ensure that your Solaris host
complies with this configuration.
3.
5.
9) Host Bus: PCI-Express indicates that the type of the expansion bus is
PCI-e.
10) HBA Model: QLogic QLE2460 and QLE2462 are the FC HBAs that
are supported with this configuration.
You can use the prtdiag Solaris command to find out more about the QLogic
FC HBA, the PCI slot it is installed in, and its status.
# prtdiag | grep qlc
Bustype Mhz Slot/Status
Name/Path
/pci@1e,600000/pci@0/pci@8/SUNW,qlc@0
11) HBA Driver / FW: SAN Foundation Software (SFS) distributed with
Sol10 u3, qlc (SunFC QLogic FCA) v20060630-2.16 / 4.0.22 indicates
that the driver and firmware of the HBA that are supported with this
configuration are the ones distributed with Solaris 10 Update3.
qlc (SunFC QLogic FCA v20060630-2.16: shows that this is the qlc driver
distributed by Sun as Sun/QLogic driver v2.16.
4.0.22: shows that the firmware required is 4.0.22.
You need to verify that the FC HBA model installed complies with the type of
host bus supported. The same HBA models can be available for several host bus
types. For example, in this case, you need to use QLogic QLE2460 (single port)
or QLE2462 (dual port) HBAs. The QLE family of QLogic FC HBAs works
with PCI-e buses. In contrast, the QLA family of QLogic FC HBAs works with
PCI-X buses. Check the QLogic Web site for more details about their families of
FC HBAs. The model of FC HBA should be verified before the installation.
Once the FC HBA is installed, you can use the QLogic SANSurfer FC HBA CLI
utility program (/usr/sbin/scli) to verify the model of the HBA installed on your
host and the status of the FC HBA.
The Sun/QLogic FC HBA driver and utilities are installed by default with Solaris
10 Update3. You can enter the following command to verify that the Sun/QLogic
FC HBA driver and utilities are installed on your host:
# pkginfo | grep SUNWqlc
system SUNWqlc
The QLogic SANSurfer FC CLI utility program is NOT installed by default with
Solaris 10 Update3. You can download the QLogic SANSurfer FC CLI from the
QLogic Web site. The package to download is scli-<version>.SPARC-X86.Solaris.pkg. This package is available in the <class_files>
directory on your Solaris host. Ask your instructor about the exact location of the
<class_files> directory. Run the following commands to install the
QLogic SANSurfer FC CLI utility program on your host:
Copy the installation package into the temporary directory.
# cp <class_files>/QLogic/scli-1.06.16-50.SPARCX86.Solaris.pkg /tmp
# cd /tmp
Install the package from the temporary directory. Choose 1 for SPARC below,
and answer y to the confirmation prompt:
# pkgadd -d scli-1.06.16-50.SPARC-X86.Solaris.pkg
The following packages are available:
1 QLScli
QLogic SANsurfer FC CLI (HBA Configuration
Utility)
(sparc) 1.06.16 Build 50
2 QLSclix
QLogic SANsurfer FC CLI (HBA
Configuration Utility)
(x86) 1.06.16 Build 50
Select package(s) you wish to process (or 'all' to
process
all packages). (default: all) [?,??,q]: 1
Enter the following command to view information about the FC HBA installed
on your host using the QLogic SANSurfer FC CLI utility program. Choose menu
items shown in bold below:
# scli
Scanning QLogic FC HBA(s) and device(s) ...
SANsurfer FC HBA CLI
v1.06.16 Build 50
Main Menu
1: Display System Information
2: Display HBA Settings
3: Display HBA Information
4: Display Device List
5: Display LUN List
6: Configure HBA Settings
7: Boot Device
8: HBA Utilities
9: Flash Beacon
10: Diagnostics
11: Statistics
12: Help
13: Quit
Enter Selection: 3
Host Name
: san102sun
HBA Model
: QLE2462
Port
: 0
OS Instance
: 0
Node Name
: 20-00-00-E0-8B-93-01-8E
Port Name
: 21-00-00-E0-8B-93-01-8E
Port ID
: 01-08-00
Serial Number
: RFC0646M81537
Driver Version
: qla-20070212-2.19
FCode Version
: 1.08
Firmware Version
: 4.00.27
: 1.04
: 1.08
: 1.00
: Point to Point
: 2 Gbps
PortType (Topology)
: FPort
: 4
HBA Status
: Online
Host Name
: solsrv2-0
HBA Model
: QLE2462
Port
: 1
OS Instance
: 1
Node Name
: 20-01-00-E0-8B-B3-01-8E
Port Name
: 21-01-00-E0-8B-B3-01-8E
Port ID
Serial Number
Driver Version
: 01-09-00
: RFC0646M81537
: qla-20070212-2.19
FCode Version
: 1.08
Firmware Version
: 4.00.27
: 1.04
: 1.08
: 1.00
: Point to Point
: 2 Gbps
PortType (Topology)
: FPort
: 4
HBA Status
: Online
Observe the FC HBA model, port number, driver version, firmware version, and
status. At this point, enter 0 followed by 13 to exit back to the Solaris
prompt.
The Solaris SAN Foundation Software (SFS) is installed by default with Solaris
10 Update3. With previous versions of Solaris, such as Solaris 9, it needs to be
installed separately. The SAN Foundation Software (SFS) is also known as Sun
StorEdge SAN Foundation Software. You can enter the following commands to
verify that the components of the Solaris SAN Foundation Software (SFS) are
installed on your host:
# pkginfo | grep SUNWfc
system SUNWfchba
Library
system SUNWfcp
system SUNWfcprt
system SUNWfcsm FCSM driver
system SUNWfctl
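To check the whole SFS package list at once, you can loop a required-package list over the pkginfo output. A hedged sketch driven by captured sample text standing in for running `pkginfo` on the Solaris host; SUNWfcprt is deliberately left out of the sample to show how a missing package is reported:

```shell
# Sample standing in for `pkginfo | grep SUNWfc` output on the host.
sample='system SUNWfchba
system SUNWfcp
system SUNWfcsm
system SUNWfctl'

# Collect any required package not present in the output.
missing=
for p in SUNWfchba SUNWfcp SUNWfcprt SUNWfcsm SUNWfctl; do
  printf '%s\n' "$sample" | grep -qw "$p" || missing="$missing $p"
done
echo "missing:$missing"
```

On the live host you would replace the sample variable with `pkginfo | grep SUNWfc` output.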
12) Volume Manager: Sun ZFS indicates that Sun's ZFS file system is
supported with this configuration. Note that although Sun SVM is not
listed as a supported volume manager on this configuration line (140), it
is listed on configuration lines 79, 83, 90, and 94. We will use Sun SVM
in this workshop. For more information about the new Sun ZFS file
system (and volume manager), see: http://en.wikipedia.org/wiki/ZFS.
The Solaris Volume Manager is installed by default with Solaris 10 Update3.
You can enter the following commands to verify that the Solaris Volume
Manager is installed on your host:
# pkginfo | grep SUNWlv
system SUNWlvma Solaris Volume Management APIs
system SUNWlvmg Solaris Volume Management Application
system SUNWlvmr Solaris Volume Management (root)
# pkginfo | grep SUNWmd
system SUNWlvma Solaris Volume Management APIs
system SUNWmdar Solaris Volume Manager Assistant
(Root)
system SUNWmdau Solaris Volume Manager Assistant (Usr)
system SUNWmddr SVM RCM Module
system SUNWmdr
system SUNWmdu
7.
8.
14) File System: UFS indicates that Sun Unix File System (UFS) is
supported with this configuration.
Enter the following command to view the default file system on your Solaris
host:
# cat /etc/default/fs
LOCAL=ufs
9.
15) Host Cluster: Sun Cluster 3.1 Update 4 and Oracle 9i, 10g RAC
indicates that Sun Clusters and Oracle RAC cluster management
solutions are supported with this configuration. We do not use any of
these cluster management solutions in these lab exercises.
10.
16) Host Virtual: Containers indicates that Solaris virtual servers, also
known as virtual host containers are supported with this configuration.
We do not use virtual host containers in these lab exercises.
11.
17) Data ONTAP: 7.0.5, 7.1.1, 7.2, 7.2.1 shows the versions of Data
ONTAP that are supported with this configuration (replace <storage_ctlr>
in the commands below with the name of your storage controller).
Enter one of the following commands to verify the version of Data ONTAP on
your storage controllers:
# rsh <storage_ctlr> sysconfig
NetApp Release 7.2.1: Sun Dec 10 01:33:06 PST
2006
OR
# rsh <storage_ctlr> version
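If you want just the release number for comparison against the matrix line, it can be extracted from the banner. A sketch using the example output above (the banner text is taken from this exercise; on the host you would pipe the real `rsh <storage_ctlr> version` output):

```shell
# Banner line as shown in the example output above.
banner='NetApp Release 7.2.1: Sun Dec 10 01:33:06 PST 2006'

# Strip everything except the release number between "Release" and ":".
ver=$(printf '%s\n' "$banner" | sed -n 's/^NetApp Release \([^:]*\):.*/\1/p')
echo "$ver"
```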
12.
18) Cfmode: SSI indicates that the single system image cluster failover
mode (CFMODE) is the only CFMODE supported with this
configuration. Enter the following commands to verify that cluster
failover is enabled and to verify which cluster failover mode is being
used on the storage system (replace <storage_ctlr> with the name of your
storage controller):
# rsh <storage_ctlr> cf status
Cluster enabled, filer-p is up.
# rsh <storage_ctlr> fcp show cfmode
fcp show cfmode: single_image
13.
19) NDU: Minor indicates that nondisruptive upgrades are only supported
between minor Data ONTAP releases (for example, from 7.2 to 7.2.1).
14.
20) SAN Boot: Yes indicates that booting from a SAN disk device is
supported with this configuration.
15.
TASK 3: VERIFY WHETHER OR NOT THE SOLARIS HOST COMPLIES WITH THE
NETAPP SAN SUPPORT MATRIX (ISCSI) - SOLARIS
1.
Consider the line item 515 in the NetApp SAN Support Matrix Solaris (July
2007) available at:
http://now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/Ne
tAppSANSupport_July2007RevB.pdf#page=95
This is a supported iSCSI configuration. You need to ensure that your Solaris
host complies with this configuration. Some steps are similar to the ones in Task
2 (FCP). Feel free to skip those steps.
1) No.: This is line item number 515.
2) Protocol: This line item shows a supported iSCSI SAN configuration.
3) Notes: References to matrix footnotes 500, 501 and 503:
500: Software Initiator is supported in a Guest OS on top of VMware ESX
Server 3.0X.
501: MPxIO support only for active-active (round-robin) configurations
503: iSNS is not supported with Solaris 10 Update3 software iSCSI initiator
due to known issues with Solaris iSCSI initiator.
2.
3.
4.
19 installed pathnames
13 shared pathnames
13 directories
2 executables
iscsi
State
LOADED/INSTALLED
If the Sun iSCSI Device Driver module is not LOADED, you can enter the
following command to load it:
# modload /kernel/drv/sparcv9/iscsi
If the Sun iSCSI Device Driver module is not INSTALLED, you can use the
add_drv Solaris operating system command to install the driver, or simply
reinstall the SUNWiscsir (Sun iSCSI Device Driver (root)) package.
The Sun iSCSI Management Utilities are installed by default with Solaris 10
Update3. You can enter the following command to ensure that the Sun iSCSI
Management Utilities are installed and to verify the version and installation
status of the Sun iSCSI Management Utilities:
# pkginfo -l SUNWiscsiu
PKGINST: SUNWiscsiu
NAME: Sun iSCSI Management Utilities (usr)
CATEGORY: system
ARCH: sparc
VERSION: 11.10.0,REV=2005.01.04.14.31
BASEDIR: /
VENDOR: Sun Microsystems, Inc.
DESC: Sun iSCSI Management Utilities
PSTAMP: bogglidite20060421153221
INSTDATE: Feb 13 2000 18:34
HOTLINE: Please contact your local service provider
STATUS: completely installed
FILES:
15 installed pathnames
5 shared pathnames
5 directories
5 executables
9) Host Bus: N/A indicates that the type of the expansion bus is
irrelevant for this configuration.
10) HBA Model: N/A indicates that the HBA model is irrelevant for this
configuration.
11) HBA Driver / FW: N/A indicates that the driver and firmware of the
HBA are irrelevant for this configuration.
No HBA needs to be installed for this configuration because we are using a
software initiator. Thus, the model, driver, firmware and bus type of the HBA
are irrelevant.
6.
12) Volume Manager: N/A indicates that the type and version of volume
manager are irrelevant for this configuration.
7.
8.
14) File System: Sun UFS indicates that the Sun Unix File System (UFS)
is supported with this configuration
Enter the following command to view the default file system on your Solaris
host:
# cat /etc/default/fs
LOCAL=ufs
9.
15) Host Cluster: No indicates that host cluster manager solutions are not
supported with this configuration.
10.
16) Host Virtual: No indicates that Solaris virtual servers are not
supported with this configuration.
11.
17) Data ONTAP: 7.0.5, 7.1.1, 7.2, 7.2.1 shows the versions of Data
ONTAP that are supported with this configuration (replace <storage_ctlr>
in the commands below with the name of your storage controller):
Enter the following command to verify the version of Data ONTAP on your
storage controllers:
# rsh <storage_ctlr> sysconfig
NetApp Release 7.2.1: Sun Dec 10 01:33:06 PST
2006
...
OR
# rsh <storage_ctlr> version
12.
18) Cfmode: N/A indicates that the cluster failover mode (CFMODE)
supported is irrelevant with this configuration. The CFMODE is only
relevant with FCP.
13.
19) NDU: Minor indicates that nondisruptive upgrades are only supported
between minor Data ONTAP releases (for example, from 7.2 to 7.2.1).
14.
20) SAN Boot: No indicates that booting from a SAN disk device is not
supported in this configuration.
15.
END OF EXERCISE
In this exercise, you will install the NetApp SAN Toolkit (iSCSI and FCP) for Solaris for
Native OS. This kit is currently distributed on the NOW site (now.netapp.com) under two
product names:
1) FCP Solaris Host Utilities Kit 4.1 for Native OS
(santoolkit_solaris_sparc_3.3.tar.Z)
2) iSCSI Solaris™ Host Utilities Kit 3.0.1 for Native OS
(santoolkit_solaris_sparc_3.4.tar.Z)
OBJECTIVES:
By the end of this exercise, you should be able to install the latest version of the kit.
TIME ESTIMATE:
20 minutes
START OF EXERCISE
TASK 1: INSTALL THE iSCSI SOLARIS HOST UTILITIES KIT FOR NATIVE OS
1.
NOTE: This step is shown here for documentation purposes only. The iSCSI
Solaris™ Host Utilities Kit 3.0.1 for Native OS package has already been copied
into the <class_files> directory on your Solaris host. Replace the
<class_files> string with the exact directory specified by your instructor.
In this step, you just need to copy the package into the /tmp directory.
The iSCSI Solaris™ Host Utilities Kit 3.0.1 for Native OS can be downloaded
from the NetApp NOW Web site (now.netapp.com). Save the file to the /tmp
directory on your Solaris host. The file to download is either:
Sun SPARC CPU: santoolkit_solaris_sparc_3.4.tar.Z
or
AMD64 (Opteron) CPU: santoolkit_solaris_amd_3.4.tar.Z
3.
4.
5.
6.
7.
Observe the various components of the iSCSI Solaris™ Host Utilities
Kit 3.0.1 for Native OS.
ls /opt/NTAP/SANToolkit/bin
Please keep in mind that some components of the kit are needed for iSCSI,
others are needed for FCP, and some are needed for both iSCSI and FCP.
8.
Add the bin directory of the host utilities kit to the system path.
PATH=/opt/NTAP/SANToolkit/bin:$PATH
export PATH
9.
Enter the following command to verify that the bin directory is in the system
path:
which sanlun
You should get an output similar to:
# which sanlun
/opt/NTAP/SANToolkit/bin/sanlun
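The PATH export above lasts only for the current shell. To make it survive new shells, append the same two lines to a profile file; sketched here against a scratch file (on the Solaris host you would edit root's .profile instead):

```shell
# Scratch file standing in for root's .profile.
profile=$(mktemp)

# Append the same PATH lines used interactively above. Single quotes keep
# $PATH literal in the file so it expands at login, not now.
printf '%s\n' 'PATH=/opt/NTAP/SANToolkit/bin:$PATH' 'export PATH' >> "$profile"
cat "$profile"
```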
1.
Enter the following command to obtain information about your Solaris host:
solaris_info
2.
Enter the following command to change directory to the results output directory
created by the solaris_info utility:
cd /tmp/netapp/ntap_sol_info
3.
Observe the various items explored and documented by the solaris_info utility.
Run:
ls /tmp/netapp/ntap_sol_info
4.
Take a look at the lpfc.pkg file. This file contains the output of the
pkginfo -l lpfc command run by solaris_info. This command
provides installation status and version information for the Emulex FC HBA
driver package.
In this exercise you will provision two new virtual machines using NetApp FlexClone
technology:
First, you will provision a new virtual machine by cloning the VMFS data store
hosting the virtual disk of an existing virtual machine.
Second, you will provision another new virtual machine by cloning the raw device
(RDM storage) of another existing virtual machine.
Provisioning new virtual machines by cloning existing ones using VMware technology can be
time-consuming and generate a great deal of load on your ESX server and storage device,
since data is copied. In this lab you will use FlexClone technology to rapidly provision new
datastores and virtual machines.
OBJECTIVES
When you have completed this exercise, you should be able to do the following:
START OF EXERCISE
In this task you will clone an existing VMware data store using NetApp FlexClone
technology.
1.
Use PuTTY (or another SSH/Telnet client) to connect to the console of the target
storage controller in your pod and use the vol clone create Data ONTAP
command to create a FlexClone clone of your FCP datastore volume:
2.
Use the lun map Data ONTAP command to map the LUN to the
esx_fcp_ig igroup:
Notice that Data ONTAP automatically assigns LUN ID 2 to the LUN in
the cloned volume, since there are already two LUNs mapped with IDs 0 and 1.
3.
4.
Use the lun show and the lun show -m Data ONTAP commands to
verify that the LUN is online and mapped to the esx_fcp_ig initiator
group.
In this task you will discover a virtual machine in a NetApp FlexClone and add that virtual
machine to the ESX server inventory. This virtual machine is a clone of an existing virtual
machine.
1.
If you are already logged on to the VIC GUI, skip to the next step.
Open a Remote Desktop Connection to start up the Virtual Infrastructure
Client (VIC) GUI on the remote Windows RDP host. The VIC GUI
prompts you for a server, user name, and password. At this point you can
log in either as Administrator to the VirtualCenter Server software suite,
which is installed on the Windows RDP host (the localhost), or as root to
the VMware ESX server directly. The VirtualCenter Server software suite
features are not needed for the first part of the class, so we will be logging
on directly to the ESX server to keep it simpler. Log in as root to the
remote VMware ESX server using the host name or IP address supplied by
your instructor.
You get the warning shown below. This warns you that changes made to
the ESX server directly, in this VIC GUI session, may not be visible to
VIC GUI sessions logged in to the VirtualCenter server. This is ok, since
there is no other VIC GUI session logged in to the VirtualCenter server at
this point.
2.
Click the san<pod#>esx (or local host) server in the ESX Inventory tree.
Click the Configuration tab. Select Storage Adapters from the Hardware
menu.
Select the first vmhbaX port on the LP11000 4-Gb Fibre Channel Host
Adapter and click the Rescan... hyperlink in the upper-right corner of the
screen.
Repeat the rescan procedure for the second FC HBA vmhbaX port.
NOTE: You may need to run the process twice for each FC port: once to
find the LUN and a second time to discover the datastore.
3.
To verify that the datastore has been discovered, go to the Storage (SCSI,
SAN, NFS) heading under Hardware.
The datastore will automatically be renamed to something different from
the production datastore (it should be something like snap-00000001-FCVMFS).
If you do not see a cloned datastore with a name like
snap-00000001-FCVMFS in the Storage list, you need to ensure that
LVM.EnableResignature is set to 1.
You can then select the Storage (SCSI, SAN, NFS) heading under
Hardware and click Refresh. You should now see the snap-00000001-FCVMFS
cloned datastore in the list as shown below.
4.
Perform the following steps to add one of the virtual machines hosted by
the cloned VMFS datastore to the ESX server inventory:
Right-click the datastore and rename it to FCVMFSCLONE.
Right-click the FCVMFSCLONE datastore and select Browse Datastore.
Open the W2K3FCVMFS directory, right-click the W2K3FCVMFS.vmx file, and
select Add to inventory.
In the window that opens, you will be asked to name the VM. Name it
W2K3FCVMFSCLONE.
For the virtual machine inventory location, choose your data center or
local host and click Next.
Review the options and click Finish.
Close the Datastore Browser window and notice that the new cloned VM appears in the ESX inventory tree.
You have now created a VM replica that is running on a zero-space cloned
LUN.
5.
Connect via SSH to the prompt of your ESX server and use the ls
command to inspect the cloned VMFS data store:
> ls /vmfs/volumes
You should see a directory for each of the FCVMFS, iSCSIVMFS, NFS, and new cloned FCVMFSCLONE datastores.
Use the ls command again to inspect the contents of the source FCVMFS
datastore and its clone FCVMFSCLONE data store:
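The two listings can be sketched as (datastore paths from this exercise):

```
> ls /vmfs/volumes/FCVMFS
> ls /vmfs/volumes/FCVMFSCLONE
```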
For both datastores, you should see a directory for each virtual machine hosted by that datastore. Notice that the virtual machines are the same, since FCVMFSCLONE is a clone of FCVMFS. Keep in mind that you only added one of the VMs hosted by FCVMFSCLONE to the ESX inventory tree. You could add them all to the ESX inventory, if needed.
In this task you will split the FlexClone clone from its backing Snapshot copy and remove
the backing Snapshot copy.
STEP
ACTION
1.
Use Putty (or another Telnet client) to connect to the prompt of the target storage controller in your pod and use the vol clone split start Data ONTAP command to split the FlexClone clone of your FCP datastore volume from its backing Snapshot copy:
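As a sketch, the split command looks like the following; the clone volume name esx_fcp_vol1_clone is an assumption based on the Snapshot name shown later in this task (your lab screenshots show the exact name):

```
nau-dev1> vol clone split start esx_fcp_vol1_clone
```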
Use the vol clone split status Data ONTAP command to verify the
progress of the split operation:
When you see the "No clone status." message in the output, the clone split operation is complete.
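A sketch of the status check; the clone volume name esx_fcp_vol1_clone is an assumption:

```
nau-dev1> vol clone split status esx_fcp_vol1_clone
No clone status.
```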
3.
Delete the backing Snapshot copy, as it is not needed anymore. The name
of the backing Snapshot copy should be similar to
clone_esx_fcp_vol1_clo.1. You can use the snap list command to
verify the name.
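The verification and deletion can be sketched as follows; the volume name esx_fcp_vol1 is an assumption (the Snapshot name comes from the text above):

```
nau-dev1> snap list esx_fcp_vol1
nau-dev1> snap delete esx_fcp_vol1 clone_esx_fcp_vol1_clo.1
```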
In this task you will clone an existing raw device that provisions a VMware virtual machine
using NetApp FlexClone technology.
STEP
ACTION
1.
Use Putty (or another Telnet client) to connect to the prompt of the target storage controller in your pod and use the vol clone create Data ONTAP command to create a FlexClone clone of the NetApp volume hosting the FCP raw device used by the W2K3FCRDM virtual machine (RDM):
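A hedged sketch of the clone creation; the volume names esx_fcp_rdm_clone and esx_fcp_rdm_vol are assumptions standing in for the names shown in your lab screenshots:

```
nau-dev1> vol clone create esx_fcp_rdm_clone -b esx_fcp_rdm_vol
```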
2.
Use the lun map Data ONTAP command to map the LUN to the
esx_fcp_ig igroup:
Notice that Data ONTAP automatically assigns LUN id=3 to the LUN in
the cloned volume, since there already are three LUNs mapped with id=0,
id=1 and id=2 to the esx_fcp_ig igroup.
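The mapping can be sketched as follows; the LUN path is an assumption (use the path of the LUN inside your cloned volume):

```
nau-dev1> lun map /vol/esx_fcp_rdm_clone/rdm.lun esx_fcp_ig
```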
4.
Use the lun show and the lun show -m Data ONTAP commands to
verify that the LUN is online and mapped to the esx_fcp_ig initiator
group.
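The two verification commands take no required arguments; as a sketch, run on the storage controller (the nau-dev1 prompt stands in for your own controller's name):

```
nau-dev1> lun show
nau-dev1> lun show -m
```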
In this task, you will discover a virtual machine in a raw LUN hosted by a NetApp FlexClone
and add that virtual machine to the ESX server inventory. Since the source volume used for
the NetApp FlexClone clone contained a raw device provisioning an existing virtual machine,
the cloned raw device also contains the data of the existing virtual machine. Thus, we use the
VM data in the cloned raw device to create a new virtual machine provisioned by the cloned
raw device. The new virtual machine is an exact replica of the existing virtual machine. Keep
in mind that as long as the FlexClone clone is not split from its backing Snapshot copy, there is almost no extra space consumed on storage for the FlexClone clone: no new space for the cloned raw device and no new space for the cloned virtual machine.
STEP
ACTION
1.
If you are already logged on to the VIC GUI, skip to the next step.
Open a Remote Desktop Connection to start up the Virtual Infrastructure
Client (VIC) GUI on the remote Windows RDP host. The VIC GUI
prompts you for a server, user name, and password. At this point you can
log in either as Administrator to the VirtualCenter Server software suite,
which is installed on the Windows RDP host (the local host), or as root to
the VMware ESX server directly. The VirtualCenter Server software suite
features are not needed for the first part of the class, so we will be logging
in directly to the ESX server to keep it simpler. Log in as root to the
remote VMware ESX server using the host name or IP address supplied by
your instructor.
You get the warning shown below. It warns you that changes made directly to the ESX server in this VIC GUI session may not be visible to VIC GUI sessions logged in to the VirtualCenter server. This is OK, since no other VIC GUI session is logged in to the VirtualCenter server at this point.
2.
Click the san<pod#>esx (or local host) server in the ESX Inventory tree.
Click the Configuration tab. Select Storage Adapters from the Hardware
menu.
Select the first vmhbaX port on the LP11000 4Gb Fibre Channel Host Adapter, then click the Rescan... hyperlink in the upper-right corner of the screen.
Clear the Scan for New VMFS Volumes check box, since there is no
VMFS datastore on the raw LUN that hosts the files of the cloned VM.
3.
You should still have the SAN<pod#>esx (or local host) branch selected in
the Inventory browsing tree. Click the Summary tab, and then click the
New Virtual Machine link in the Commands section.
Select Custom and click Next.
Name the Virtual Machine W2K3FCRDMCLONE. Select Next.
Select FCVMFS to store the configuration file (.vmx) and the RDMP (raw
device mapping pointer) file. Click Next.
Keep the default selection of Microsoft Windows as the guest operating system and Microsoft Windows Server 2003, Enterprise Edition, as the version, and click Next.
Select Raw Device Mappings on the Select a Disk screen and click Next.
NOTE: Although you clone from physical RDM, the new RDM does not
have to be physical.
Keep the defaults on the Specify Advanced Options screen and click Next.
Review the parameters and click Finish.
EXERCISE SUMMARY
You created two new virtual machines provisioned by storage cloned using
NetApp FlexClone technology:
1. W2K3FCVMFSCLONE
2. W2K3FCRDMCLONE
Clone not split from its backing Snapshot copy (sharing storage: near-zero additional space required for the W2K3FCRDMCLONE VM)
Notice that both new VMs were cloned while the parent VM was shut down. If the
parent VM needs to be up during the clone procedure, particularly while the
NetApp backing Snapshot copy is being created, the parent VM needs to be
quiesced to ensure data consistency. You will learn more about data consistency
and about how to quiesce a VM in the next module.
END OF EXERCISE
In this exercise, you will see how to discover LUNs accessed with FCP on Solaris in a Solaris
MPxIO multipathing environment.
OBJECTIVES:
Observe the underlying paths for the devices using sanlun and luxadm, and learn how to map the host-side paths to storage system HBA ports.
TIME ESTIMATE:
40 minutes
START OF EXERCISE
You will need to complete the following steps on your Solaris host by replacing
<storage_ctrl> with the name (or the IP address) of your storage controller.
STEP
ACTION
1.
Enter the following command to list the WWPNs of the FC initiator HBAs on the Solaris host:
$ sanlun fcp show adapter
qlc0
210000e08b922bf4
qlc1
210100e08bb22bf4
NOTE: The HBA vendor supplied utilities and the fcinfo Solaris
command can also be used to obtain the WWPNs of the HBAs.
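For example, on Solaris 10 the fcinfo utility lists the initiator port WWNs directly; a sketch (output abridged, WWPNs from this exercise):

```
$ fcinfo hba-port | grep 'HBA Port WWN'
HBA Port WWN: 210000e08b922bf4
HBA Port WWN: 210100e08bb22bf4
```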
Now enter the following command on the storage controller to ensure that the FC initiators of the Solaris host can see the target FC ports on the storage controller:

fcp show initiator

Initiators connected on adapter 7a:
Portname                   Group
21:01:00:e0:8b:b2:2b:f4
21:00:00:e0:8b:92:2b:f4
...
21:01:00:e0:8b:ae:fb:7e
21:00:00:e0:8b:8e:fb:7e
21:01:00:e0:8b:a8:ab:76
21:00:00:e0:8b:88:ab:76
Initiators connected on adapter 7b:
Portname                   Group
21:01:00:e0:8b:b2:2b:f4
21:00:00:e0:8b:92:2b:f4
...
21:01:00:e0:8b:ae:fb:7e
21:00:00:e0:8b:8e:fb:7e
21:01:00:e0:8b:a8:ab:76
21:00:00:e0:8b:88:ab:76
TASK 2: ENABLE ALUA ON THE IGROUP TO WHICH THE LUNS ARE MAPPED
You will need to complete the following steps on your Solaris host by replacing
<storage_ctrl> with the name (or IP address) of your storage controller.
STEP
ACTION
1.
Enter the following Data ONTAP commands to enable ALUA on the solaris_fcp_ig igroup and to verify the setting and the LUN mappings:

igroup set solaris_fcp_ig alua yes
igroup show -v solaris_fcp_ig
    solaris_fcp_ig (FCP):
        Member: 21:00:00:e0:8b:92:2b:f4 (logged in on: 7a,7b,vtic)
        Member: 21:01:00:e0:8b:b2:2b:f4 (logged in on: 7a,7b,vtic)
        ALUA: Yes

lun show -m
LUN path                   Mapped to        LUN ID  Protocol
------------------------------------------------------------
/vol/solarisvol1/lunC      solaris_fcp_ig           FCP
/vol/solarisvol1/lunD      solaris_fcp_ig           FCP
You will need to complete the following steps on the Solaris host.
STEP
ACTION
1.
$ cfgadm -l
Ap_Id      Type       Receptacle  Occupant      Condition
c0         scsi-bus   connected   configured    unknown
c1         scsi-bus   connected   unconfigured  unknown
c2         fc-fabric  connected   unconfigured  unknown
c3         fc-fabric  connected   unconfigured  unknown
usb0/1     unknown    empty       unconfigured  ok
usb0/2     unknown    empty       unconfigured  ok
We see that c2 and c3 are the controllers that are connecting to the storage
controller by way of the fc-fabric service.
2.
$ cfgadm -al
Ap_Id                 Type       Receptacle  Occupant      Condition
c0                    scsi-bus   connected   configured    unknown
c0::dsk/c0t0d0        disk       connected   configured    unknown
c0::dsk/c0t1d0        disk       connected   configured    unknown
c1                    scsi-bus   connected   unconfigured  unknown
c2                    fc-fabric  connected   unconfigured  unknown
c2::210100e08bb22bf4  unknown    connected   unconfigured  unknown
...
c2::500a098186a7af35  disk       connected   unconfigured  unknown
c2::500a098196a7af35  disk       connected   unconfigured  unknown
c2::500a098286a7af35  disk       connected   unconfigured  unknown
c2::500a098296a7af35  disk       connected   unconfigured  unknown
c3                    fc-fabric  connected   unconfigured  unknown
The WWPNs beginning with 500a09 are the WWPNs of one of the target storage controllers. Note that the output on your host may contain the WWPNs of other FC initiator and FC target ports if the FC switch is not zoned. For example, if you see any WWPNs starting with 10:00, they are likely FC initiator ports on Emulex FC HBAs on the Linux and ESX Server hosts. You can identify the WWPNs of your storage controllers by executing fcp show adapter on each of the storage controllers. Observe also that the status is shown as unconfigured in the output of cfgadm -al.
Running fcp show adapter on each storage controller produces a listing like the following (abridged):

Slot:          7a
Description:   Fibre Channel Target Adapter 7a
               (Dual-channel, QLogic 2312 (2352) rev. 2)
Adapter Type:  Local
Status:        ONLINE
FC Nodename:   50:0a:09:80:86:a7:af:35 (500a098086a7af35)
FC Portname:   50:0a:09:81:96:a7:af:35 (500a098196a7af35)
Standby:       No

Slot:          7b
Description:   Fibre Channel Target Adapter 7b
               (Dual-channel, QLogic 2312 (2352) rev. 2)
Adapter Type:  Local
Status:        ONLINE
FC Nodename:   50:0a:09:80:86:a7:af:35 (500a098086a7af35)
FC Portname:   50:0a:09:82:96:a7:af:35 (500a098296a7af35)
Standby:       No

Slot:          7a
Description:   Fibre Channel Target Adapter 7a
               (Dual-channel, QLogic 2312 (2352) rev. 2)
Adapter Type:  Local
Status:        ONLINE
FC Nodename:   50:0a:09:80:86:a7:af:35 (500a098086a7af35)
FC Portname:   50:0a:09:81:86:a7:af:35 (500a098186a7af35)
Standby:       No

Slot:          7b
Description:   Fibre Channel Target Adapter 7b
               (Dual-channel, QLogic 2312 (2352) rev. 2)
Adapter Type:  Local
Status:        ONLINE
FC Nodename:   50:0a:09:80:86:a7:af:35 (500a098086a7af35)
FC Portname:   50:0a:09:82:86:a7:af:35 (500a098286a7af35)
Standby:       No
3.
You have just seen that the Solaris host bus controllers that are connected to the fabric are c2 and c3. You can also verify this using the sanlun fcp show adapter -v command on the Solaris host.
adapter name:    qlc0
WWPN:            210000e08b922bf4
WWNN:            200000e08b922bf4
driver name:     qlc
model:           QLE2462
serial number:   Not Available
driver version:  20070212-2.19
port count:      1 of 2
port type:       Fabric
port state:      Operational
supported speed:
OS device name:  /dev/cfg/c2

adapter name:    qlc1
WWPN:            210100e08bb22bf4
WWNN:            200100e08bb22bf4
driver name:     qlc
model:           QLE2462
serial number:   Not Available
driver version:  20070212-2.19
port count:      2 of 2
port type:       Fabric
port state:      Operational
supported speed:
OS device name:  /dev/cfg/c3
4.
$ cfgadm -c configure c2
$ cfgadm -c configure c3
$ cfgadm -al
Ap_Id                 Type       Receptacle  Occupant      Condition
c0                    scsi-bus   connected   configured    unknown
c0::dsk/c0t0d0        disk       connected   configured    unknown
c0::dsk/c0t1d0        disk       connected   configured    unknown
c1                    scsi-bus   connected   unconfigured  unknown
c2                    fc-fabric  connected   configured    unknown
c2::210100e08bb22bf4  unknown    connected   unconfigured  unknown
c2::500a098186a7af35  disk       connected   configured    unknown
c2::500a098196a7af35  disk       connected   configured    unknown
c2::500a098286a7af35  disk       connected   configured    unknown
c2::500a098296a7af35  disk       connected   configured    unknown
c3                    fc-fabric  connected   configured    unknown
c3::210000e08b922bf4  unknown    connected   unconfigured  unknown
c3::500a098186a7af35  disk       connected   configured    unknown
c3::500a098196a7af35  disk       connected   configured    unknown
c3::500a098286a7af35  disk       connected   configured    unknown
c3::500a098296a7af35  disk       connected   configured    unknown
usb0/1                unknown    empty       unconfigured  ok
usb0/2                unknown    empty       unconfigured  ok
5.
Execute the sanlun lun show command to see that the LUNs have been
discovered on the Solaris host. Only LUNs from your storage controller are
discovered because only the initiator groups on your storage controllers contain
the WWPNs of the FC initiator ports on your Solaris host.
$ sanlun lun show
filer:  lun-pathname             device filename                                    adapter  protocol  lun size           lun state
nau-dev1: /vol/solarisvol1/lunC  /dev/rdsk/c4t60A98000433461504E342D4A66586252d0s2  qlc1     FCP        3g (3221225472)   GOOD
nau-dev1: /vol/solarisvol1/lunD  /dev/rdsk/c4t60A98000433461504E342D4A69796C2Dd0s2  qlc1     FCP       10g (10737418240)  GOOD
Observe the consolidated device file names assigned by MPxIO to the NetApp LUNs lunC and lunD.
You can also use the format operating system command to ensure that Solaris has discovered the LUNs:
$ format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SEAGATE-ST336706LC-010A cyl 26123 alt 2 hd 4 sec 686>
   /pci@1c,600000/scsi@2/sd@0,0
1. c0t1d0 <SEAGATE-ST336706LC-010A cyl 26123 alt 2 hd 4 sec 686>
   /pci@1c,600000/scsi@2/sd@1,0
2. c4t60A98000433461504E342D4A69796C2Dd0 <NETAPP-LUN-0.2 cyl 5118 alt 2 hd 16 sec 256>
   /scsi_vhci/ssd@g60a98000433461504e342d4a69796c2d
3. c4t60A98000433461504E342D4A66586252d0 <NETAPP-LUN-0.2 cyl 1534 alt 2 hd 16 sec 256>
   /scsi_vhci/ssd@g60a98000433461504e342d4a66586252
You will need to complete the following steps on the Solaris host.
STEP
ACTION
1.
Enter the following command to view the LUNs and the multiple paths that lead to them:
$ sanlun lun show -p
ONTAP_PATH: nau-dev1:/vol/solarisvol1/lunD
    LUN: 1
    LUN Size: 10g (10737418240)
    Host Device: /dev/rdsk/c4t60A98000433461504E342D4A69796C2Dd0s2
    LUN State: GOOD    Filer_CF_State: Cluster Enabled
Observe that the sanlun command shows which paths are optimized and which are non-optimized.
Observe also that the host supports ALUA, as shown by TARGET PORT GROUP SUPPORT ENABLED.
Notice that, in an MPxIO environment, it is not obvious how to identify the multiple paths to the LUN in the output of sanlun lun show -p. The sanlun command does not show the underlying paths because Sun MPxIO masks them and presents the LUNs as single consolidated MPxIO devices. However, you will see in a few moments that the Target Port IDs can be used to identify the paths to the LUN. Also, Solaris provides some operating system commands that can be used for this purpose.
Question: Why is Target Port Group 0x1001 shown as active/optimized, whereas Target Port Group 0x3002 is shown as active/non-optimized?
Hint: Identify to which storage controller each target port group is attached (refer to Step 3 and Step 4 in Task 5 below).
STEP
ACTION
2.
You can use the format Solaris operating system command to view the devices
that have multiple paths. All these devices start with /scsi_vhci:
# format
...
1. c3t500A098296A7AF35d2 <NETAPP-LUN-0.2 cyl 5118 alt 2 hd 16 sec 384>
   /pci@1d,700000/QLGC,qlc@1,1/fp@0,0/ssd@w500a098296a7af35,2
2. c4t60A98000433461504E342D4A69796C2Dd0 <NETAPP-LUN-0.2 cyl 5118 alt 2 hd 16 sec 256>
   /scsi_vhci/ssd@g60a98000433461504e342d4a69796c2d
TASK 5: OBSERVE THE UNDERLYING DEVICE PATHS USING SANLUN AND LUXADM
AND LEARN HOW TO MAP THE HOST SIDE PATHS TO STORAGE SYSTEM HBA
PORTS
You will need to complete the following steps both on the Solaris host and on the target
storage controller.
STEP
ACTION
1.
Enter the following command to identify the MPxIO device file names:
$ luxadm probe
No Network Array enclosures found in /dev/es

Found Fibre Channel device(s):
  Node WWN:500a098086a7af35  Device Type:Disk device
    Logical Path:/dev/rdsk/c4t60A98000433461504E342D4A69796C2Dd0s2
  Node WWN:500a098086a7af35  Device Type:Disk device
    Logical Path:/dev/rdsk/c4t60A98000433461504E342D4A66586252d0s2
2.
Enter the following command to view the device properties for a particular
MPxIO device:
$ luxadm display
/dev/rdsk/c4t60A98000433461504E342D4A69796C2Dd0s2
DEVICE PROPERTIES for disk:
/dev/rdsk/c4t60A98000433461504E342D4A69796C2Dd0s2
Vendor:               NETAPP
Product ID:           LUN
Revision:             0.2
Serial Num:           C4aPN4-Jiyl-
Unformatted capacity: 10240.000 MBytes
Read Cache:           Enabled
Minimum prefetch:     0x0
Maximum prefetch:     0x0
Device Type:          Disk device
Path(s):
  /dev/rdsk/c4t60A98000433461504E342D4A69796C2Dd0s2
  /devices/scsi_vhci/ssd@g60a98000433461504e342d4a69796c2d:c,raw
Controller                /devices/pci@1d,700000/QLGC,qlc@1,1/fp@0,0
  Device Address          500a098196a7af35,1
  Host controller port WWN 210100e08bb22bf4
  Class                   primary
  State                   ONLINE
Controller                /devices/pci@1d,700000/QLGC,qlc@1,1/fp@0,0
  Device Address          500a098296a7af35,1
  Host controller port WWN 210100e08bb22bf4
  Class                   primary
  State                   ONLINE
Controller                /devices/pci@1d,700000/QLGC,qlc@1,1/fp@0,0
  Device Address          500a098186a7af35,1
  Host controller port WWN 210100e08bb22bf4
  Class                   secondary
  State                   ONLINE
Controller                /devices/pci@1d,700000/QLGC,qlc@1,1/fp@0,0
  Device Address          500a098286a7af35,1
  Host controller port WWN 210100e08bb22bf4
  Class                   secondary
  State                   ONLINE
Controller                /devices/pci@1d,700000/QLGC,qlc@1/fp@0,0
  Device Address          500a098196a7af35,1
  Host controller port WWN 210000e08b922bf4
  Class                   primary
  State                   ONLINE
Controller                /devices/pci@1d,700000/QLGC,qlc@1/fp@0,0
  Device Address          500a098296a7af35,1
  Host controller port WWN 210000e08b922bf4
  Class                   primary
  State                   ONLINE
Controller                /devices/pci@1d,700000/QLGC,qlc@1/fp@0,0
  Device Address          500a098186a7af35,1
  Host controller port WWN 210000e08b922bf4
  Class                   secondary
  State                   ONLINE
Controller                /devices/pci@1d,700000/QLGC,qlc@1/fp@0,0
  Device Address          500a098286a7af35,1
  Host controller port WWN 210000e08b922bf4
  Class                   secondary
  State                   ONLINE
Observe the target WWPN (Device Address) of each path, the class of each path (primary or secondary), and the state of each path (online or offline).
There are eight paths to the LUN.
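Since each path in the luxadm display output is a Class/State pair, a quick way to tally the paths is a small awk filter. This is a hedged sketch run against a saved sample of the output above; the file name /tmp/luxadm_out.txt is an assumption:

```shell
# Save a fragment of the luxadm display path section
# (sample mimics the Class/State lines shown above)
cat > /tmp/luxadm_out.txt <<'EOF'
Class                    primary
State                    ONLINE
Class                    primary
State                    ONLINE
Class                    secondary
State                    ONLINE
Class                    secondary
State                    ONLINE
EOF
# Count paths per class; on the full eight-path output this tallies
# the primary (optimized) and secondary (non-optimized) paths
awk '/^Class/ {count[$2]++} END {for (c in count) print c, count[c]}' /tmp/luxadm_out.txt
```

In practice you would pipe luxadm display output straight into the awk filter instead of saving it first.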
3.
You can also identify the paths from the host to the storage controller manually by comparing the output of the fcp show adapter -v command run on the storage controller to the output of the sanlun lun show -p all command run on the Solaris host.
4.
Enter the following command on the storage controller to identify each Target Port ID with a specific target FC port:

fcp show adapter -v

Slot:                    0d
Description:             Fibre Channel Target Adapter 0d
                         (Dual-channel, QLogic 2312 (2352) rev. 2)
Status:                  ONLINE
Host Port Address:       010100
Firmware Rev:            3.3.19
PCI Bus Width:           64-bit
PCI Clock Speed:         33 MHz
FC Nodename:             50:0a:09:80:86:a7:af:35 (500a098086a7af35)
FC Portname:             50:0a:09:82:96:a7:af:35 (500a098296a7af35)
Cacheline Size:          8
FC Packet Size:          2048
External GBIC:           No
Data Link Rate:          2 GBit
Adapter Type:            Local
Fabric Established:      Yes
Connection Established:  PTP
Mediatype:               auto
Partner Adapter:         None
Standby:                 No
Target Port ID:          0x2
In this example, we found FC target ports 0x1 and 0x2 to belong to nau-dev1, the first storage controller in the dual storage controller system. You can compare these Target Port IDs to the ones shown on the Solaris host by sanlun lun show -p to map each Target Port Group and its paths to a specific storage controller and its target FC ports. In this example, if we ran fcp show adapter -v on nau-dev2, we would have found FC target ports 0x101 and 0x102.
END OF EXERCISE
In this exercise, you will see how to label a LUN as a Solaris disk using the format
operating system command.
TIME ESTIMATE:
10 minutes
START OF EXERCISE
You will need to complete the following steps on your Solaris host to label both NetApp
LUNs as Solaris disks.
STEP
ACTION
1.
$ format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SEAGATE-ST336706LC-010A cyl 26123 alt 2 hd 4 sec 686>
   /pci@1c,600000/scsi@2/sd@0,0
1. c4t60A98000433461504E342D4A69796C2Dd0 <NETAPP-LUN-0.2 cyl 5118 alt 2 hd 16 sec 256>
   /scsi_vhci/ssd@g60a98000433461504e342d4a69796c2d
2. c4t60A98000433461504E342D4A66586252d0 <NETAPP-LUN-0.2 cyl 1534 alt 2 hd 16 sec 256>
   /scsi_vhci/ssd@g60a98000433461504e342d4a66586252
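The labeling itself is an interactive format session; a hedged sketch of the flow (the disk numbers come from your own format listing, and the prompts are abridged):

```
$ format
Specify disk (enter its number): 1
format> label
Ready to label disk, continue? y
format> disk 2
format> label
Ready to label disk, continue? y
```

Repeat the label step for each NetApp LUN shown in your listing.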
Using format, you could rearrange the partitions on the NetApp LUNs if need be. However, for the purposes of this lab exercise, simply exit the format program. Enter the following command at the format> prompt to return to the Solaris operating system prompt.
format> quit
END OF EXERCISE
EXERCISE 13: CREATE A SUN SVM VOLUME (PART 1: USING THE SVM CLI)
OVERVIEW:
In this lab exercise you will see how to create a Sun Solaris Volume Manager (SVM) volume provisioned by a NetApp LUN using the SVM command-line interface.
TIME ESTIMATE:
20 minutes
START OF EXERCISE
You will need to complete the following steps on your Solaris host.
STEP
ACTION
1.
Enter the following command to view the NetApp LUNs available on the Solaris
host and record the host device paths for both FCP LUNs:
sanlun lun show

lunC Host Device:
/dev/rdsk/c____________________________________________
lunD Host Device:
/dev/rdsk/c____________________________________________
You will need to know the consolidated device file name assigned to lunD by
MPxIO in a few moments to create Sun SVM state database replicas on it.
You will need to know the consolidated device file name assigned to lunC by
MPxIO in a few moments in Sun SVM to provision the SVM volume.
NOTE: To ensure that you use the correct MPxIO consolidated device file
names, copy and paste them from sanlun output into a text file so they are
available when you need to look them up.
TASK 2: IDENTIFY SLICES OF LOCAL DISKS THAT CAN BE USED TO STORE SVM STATE DB REPLICAS
You will need to complete either Step 1 or Step 2 on your Solaris host. Step 1 uses format; Step 2 uses prtvtoc.
STEP
1.
ACTION
Enter the format command. Next, select the disk to work with: choose the disk that corresponds to lunD, and then choose partition to access the partition menu. Finally, choose print to look at the available partitions (slices) on the disk corresponding to lunD. You should get output similar to:
partition> p
Current partition table (original):
Total disk cylinders available: 498 + 2 (reserved cylinders)

Part      Tag    Flag    Cylinders    Size         Blocks
  0       root    wm      0 -  31     32.00MB    (32/0/0)     65536
  1       swap    wu     32 -  63     32.00MB    (32/0/0)     65536
  2     backup    wu      0 - 497    498.00MB    (498/0/0)  1019904
  3 unassigned    wm      0            0         (0/0/0)          0
  4 unassigned    wm      0            0         (0/0/0)          0
  5 unassigned    wm      0            0         (0/0/0)          0
  6        usr    wm     64 - 497    434.00MB    (434/0/0)   888832
  7 unassigned    wm      0            0         (0/0/0)          0
Observe slice 6, which contains most of the free space on lunD. This is the slice that you will use to store the Sun SVM state database replicas.
Type quit (or just q) twice to exit the partition menu and return to the Solaris prompt.
2.
You can also use the prtvtoc Solaris operating system command to view the current partition table of a disk. Enter the command below to view the partition table of the disk corresponding to lunD. Make sure to fill in the blank with the MPxIO consolidated device name noted above for lunD, minus the slice number (which you replace here with s2, for slice 2):
prtvtoc /dev/rdsk/c_____________________________________s2
TASK 3: CREATE SUN SVM STATE DATABASE REPLICAS USING THE METADB
COMMAND
You will need to complete the following steps on your Solaris host.
STEP
ACTION
1.
Enter the command below to create three Sun SVM state database replicas on slice 6 of the disk corresponding to lunD. Fill in the blank with the MPxIO device name assigned to lunD.
metadb -a -f -c 3 c____________________________________s6
2.
Enter the following command to view information about the Sun SVM state database replicas:
metadb -i
TASK 4: CREATE A SUN SVM VOLUME PROVISIONED BY A NETAPP LUN USING THE METAINIT COMMAND
You will need to complete the following steps on your Solaris host.
STEP
ACTION
1.
Enter the command below to create a Sun SVM volume named d0. This volume is provisioned by the consolidated MPxIO device corresponding to lunC that you identified in Task 1. That device is in fact a NetApp LUN.
metainit d0 1 1 c______________________________________s6
The 1 1 arguments specify one stripe of one disk slice. This effectively creates a concatenated volume provisioned by one slice, identified by the MPxIO device file name.
Observe that we use slice 6 (s6), which represents the usr slice of the disk and contains most of the space on that disk. You could add any slices of the disk to the SVM volume. You could even create different volumes with other slices of the disk.
2.
Go to the UNIX prompt of your Solaris host and enter the following
command to view the device file name created by the metainit
command in the previous step for the new Sun SVM volume named d0:
$ ls /dev/md/rdsk
d0
$
3.
Enter the following command to display information about the SVM volume you
have just created:
$ metastat
END OF EXERCISE
EXERCISE 13: CREATE A SUN SVM VOLUME (PART 2: USING THE SUN
SMC GUI)
OVERVIEW:
You will need to complete the following steps on your Solaris host or on your Windows workstation.
STEP
ACTION
1.
STEP
ACTION
2.
Enter the commands below to set the current X display on your Solaris server. This effectively sends all UNIX X Window displays to the Exceed X server running on your workstation. Make sure to replace silviu-lxp.hq.netapp.com with the host name or IP address of your workstation.
$ ping silviu-lxp.hq.netapp.com
silviu-lxp.hq.netapp.com is alive
$ export DISPLAY=silviu-lxp.hq.netapp.com:0
$ echo $DISPLAY
silviu-lxp.hq.netapp.com:0
3.
Enter the following command on your Solaris server to start up the Solaris
Management Console GUI on your Solaris host:
$ smc &
4.
The Solaris Management Console 2.1 GUI appears on your Windows workstation. All tasks in the Solaris Management Console 2.1 GUI will be performed on your Windows workstation; however, the commands run in the SMC GUI are really executed on your Solaris host.
STEP
ACTION
5.
Click the navigation keys successively to expand the This Computer tab and then the Storage tab.
STEP
ACTION
6.
Click Disks to view the available disks on your host. You are prompted to log on as root. Type the password of the root user on your Solaris host and click OK.
7. There are two local disks: c0t0d0 and c0t1d0. In the example above, there are three MPxIO consolidated devices (the device file names starting with c3 and c4). Keep in mind that the device file names are likely to be different on your host.
You will need to complete the following steps on the Solaris host.
1. The consolidated device file name assigned to lunC by MPxIO appears in bold text below. This device is one of the disks listed in the SMC GUI in the previous task. This is the device file name that you will need to use in a few moments in Sun SVM to provision the SVM volume.

LUN Size: 3g (3221225472)
Host Device:
/dev/rdsk/c4t60A98000433461504E342D4A66586252d0s2
LUN State: GOOD   Filer_CF_State: Cluster Enabled
Multipath_Policy: Native   Multipath-provider: Sun Microsystems
TPGS flag: 0x10   Filer Status: TARGET PORT GROUP SUPPORT ENABLED
Target Port Group: 0x1001
    Target Port Group State: Active/optimized
    Vendor unique Identifier: 0x10 (2GB FC)
    Target Port Count: 0x2
    Target Port ID: 0x101
    Target Port ID: 0x102
Target Port Group: 0x3002
    Target Port Group State: Active/non-optimized
    Vendor unique Identifier: 0x30 (2GB FC)
    Target Port Count: 0x2
    Target Port ID: 0x1
    Target Port ID: 0x2

IMPORTANT: The device file name is different on your host. Make sure to use the device file name as it shows up on your host. It is recommended to simply select and paste the device file name wherever needed.
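Rather than retyping the long consolidated device file name, you can extract it from saved sanlun output. Below is a minimal sketch, assuming the two-line "Host Device:" layout shown above; the here-document stands in for real output, and the device name will differ on your host.

```shell
# Extract the consolidated MPxIO host device from saved sanlun output.
# The here-document below is sample data from this example; replace it
# with output captured on your own host.
dev=$(awk '
    seen && /^\/dev\/rdsk\// { print; exit }  # line after the header
    /^Host Device:/          { seen = 1 }
' <<'EOF'
LUN Size:     3g (3221225472)
Host Device:
/dev/rdsk/c4t60A98000433461504E342D4A66586252d0s2
LUN State: GOOD Filer_CF_State: Cluster Enabled
EOF
)
echo "$dev"
```

On a real host you would pipe the sanlun output into the awk program instead of using a here-document.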
You will need to complete the following steps on your Windows workstation.
2. Expand the Enhanced Storage tab by clicking the navigation key, and then click State Database Replicas.
Observe that there are no Sun SVM state database replicas currently created on this host. Nothing shows up in the main window, and the Status bar displays 0 Replicas.

Observe also the Information window that provides contextual information in the Sun SMC GUI.
The slices (partitions) of all disks on your host are shown in the Available list. We need to select slices on local disks to store the Sun SVM state database replicas.

4. Select slices 6 and 7 on the first local disk (c0t0d0) and slices 3 and 4 on the second local disk (c0t1d0). It is recommended to spread out the Sun SVM state database replicas across multiple disks and multiple SCSI controllers.
5. Specify the replica length (the number of 512-byte blocks) and the number of replicas on each slice. Keep the default number of blocks (8192) and enter 3 for three replicas on each slice.
Click Next.

6. Observe the Sun SVM CLI commands (metadb commands) that the Sun SMC GUI will run to create the Sun SVM state database replicas on local slices c0t0d0s6, c0t0d0s7, c0t1d0s3, and c0t1d0s4. These commands could be run at the UNIX prompt on the Solaris host instead of using the Sun SMC GUI.
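For reference, the GUI-generated metadb commands can be sketched as follows. This only prints the commands; the exact flags the GUI emits may differ, and the slice names are from this example.

```shell
# Print (rather than run) metadb commands equivalent to this GUI step.
# Assumed flags: -a adds replicas, -f forces creation of the initial
# state database, -c 3 puts three replicas on each slice, -l 8192 sets
# the replica length in disk blocks. Slice names are from this example.
cmds=""
flags="-a -f"
for slice in c0t0d0s6 c0t0d0s7 c0t1d0s3 c0t1d0s4; do
    cmds="$cmds
metadb $flags -c 3 -l 8192 $slice"
    flags="-a"
done
echo "$cmds"
```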
Click Finish.

7. Click the View menu and select Refresh to view the Sun SVM state database replicas that you have just created.
You will need to complete the following steps on the Windows workstation.
1. Observe that there are no Sun SVM volumes currently available on this host. Nothing shows up in the main window, and the Status bar displays 0 Volumes.
2. The slices (partitions) of all disks on your host are shown in the Available list. We need to select the slice that will provision the new Sun SVM volume.
3. Select the slice that corresponds to slice 2 of the consolidated MPxIO device file name assigned to the NetApp LUN lunC that you identified in Task 2 above. You choose slice 2 because this slice represents the whole disk. In a previous step, you labeled lunC as a Solaris disk with format and you put most of the free space on slice 6 of lunC because you were not planning to use the disk with multiple partitions reserved for different usage. Alternatively, instead of putting most of the free space of lunC in a single partition (slice 6), you could have partitioned lunC into several different partitions. Then, you could have added those partitions independently to Sun SVM volumes.

In our example this is slice c4t60A98000433461504E342D4A66586252d0s2. Keep in mind that the device file name, and thus the slice name, is different on your Solaris host. Make sure to choose the device file name as it appeared on your host in Task 2.
Click Next.

5. Observe the Sun SVM CLI command (metainit) that the Sun SMC GUI will run to create the Sun SVM volume named d0, provisioned by slice 2 of lunC (represented below by its consolidated MPxIO device file name). This command could be run at the UNIX prompt on the Solaris host instead of using the Sun SMC GUI.
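The equivalent CLI call can be sketched like this; it builds (rather than runs) the command, the device name is from this example, and the exact form the GUI emits may differ.

```shell
# Build (rather than run) the metainit command for a one-slice concat:
# volume d0 made of 1 stripe containing 1 slice. The MPxIO device name
# is from this example and will differ on your host.
slice="c4t60A98000433461504E342D4A66586252d0s2"
cmd="metainit d0 1 1 $slice"
echo "$cmd"
```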
Click Finish.

6. Click the View menu and select Refresh to view the Sun SVM volume d0 that you have just created.
7. Go to the UNIX prompt of your Solaris host and enter the following command to view the device file name created by the metainit command in the previous step for the new Sun SVM volume named d0:

$ ls /dev/md/rdsk
d0
$
END OF EXERCISE
In this exercise, you will create a UNIX File System (UFS) on a Sun SVM volume that is
provisioned by a NetApp LUN. This LUN was previously discovered on the Solaris host
using FCP.
OBJECTIVES:
Inspect the raw Sun SVM volume provisioned by the NetApp LUN on the Solaris host
Create a UFS on the raw Sun SVM volume provisioned by the NetApp LUN
Mount the UFS onto the active file system on the Solaris host
Add an entry to the Virtual File System Table (/etc/vfstab) to mount the LUN persistently across reboots
TIME ESTIMATE:
15 minutes
START OF EXERCISE
You will need to complete the following steps on the Solaris host.
1. Enter the following command to look at the Sun SVM volumes available on your Solaris host:

ls /dev/md/rdsk
You should get an output similar to:
bash-3.00# ls /dev/md/rdsk
d0
bash-3.00#
Observe (in bold) the d0 Sun SVM volume that we created in the previous lab exercise. Note that we are listing the contents of /dev/md/rdsk; this is the raw device directory for Sun SVM metadevices (md).
2. Enter the following command to install a UFS on the d0 Sun SVM volume that is provisioned by a NetApp LUN:

newfs /dev/md/rdsk/d0
newfs: construct a new file system /dev/md/rdsk/d0: (y/n)? y
/dev/md/rdsk/d0: 6283264 sectors in 1534 cylinders of 16 tracks, 256 sectors
        3068.0MB in 62 cyl groups (25 c/g, 50.00MB/g, 8192 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 102688, 205344, 308000, 410656, 513312, 615968, 718624, 821280, 923936,
 5325856, 5428512, 5531168, 5633824, 5736480, 5839136, 5941792, 6044448,
 6147104, 6249760
QUESTION 1: On which slice (partition) of the Solaris MPxIO disk device did
you create the file system?
HINT: Find out which slices you added to the d0 Sun SVM volume.
3. Enter the following command to create a mountpoint for the UFS created on the d0 Sun SVM volume:

mkdir -p /mnt/lunC

Observe that we name the mountpoint using the name of the NetApp LUN that is provisioning the d0 Sun SVM volume.
4. Enter the following command to mount lunC onto the active Solaris file system:

mount /dev/md/dsk/d0 /mnt/lunC

Observe that we use the dsk path at this point because we now have a file system created on d0.
5. Enter the following command to test writing to Sun SVM volume d0, which is mounted on /mnt/lunC:

touch /mnt/lunC/test_write.txt

6. Enter the following command to verify that the file test_write.txt was successfully created in Sun SVM volume d0:

ls -la /mnt/lunC

drwxr-xr-x  3 root  root  ...
drwxr-xr-x  3 root  sys   ...
drwx------  2 root  root  ...  lost+found
-rw-r--r--  1 root  root  0 Jan 23 13:50  test_write.txt

7. This step is OPTIONAL. If you need to have the d0 Sun SVM volume automatically mounted after a system reboot, you need to add an entry in the Virtual File System Table file, /etc/vfstab.

Add the following line to the /etc/vfstab file to persistently mount d0 across system reboots:

/dev/md/dsk/d0 - /mnt/lunC ufs no yes -

Observe the comments at the beginning of the vfstab file; they explain each field.
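As a quick sanity check before rebooting, you can verify that the new line has the seven whitespace-separated fields vfstab expects. A minimal sketch using the entry from this example:

```shell
# Sanity-check a vfstab entry: it must have exactly seven fields
# (device to mount, device to fsck, mount point, FS type, fsck pass,
# mount at boot, mount options). Entry taken from this example.
entry="/dev/md/dsk/d0 - /mnt/lunC ufs no yes -"
nfields=$(echo "$entry" | awk '{ print NF }')
echo "$nfields"
```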
END OF EXERCISE
In this exercise, you will test access to a LUN during an FC path failure.
OBJECTIVES:
Disable the FC switch port where the first Solaris FC HBA port connects
Re-enable the FC switch port where the first Solaris FC HBA port connects
TIME ESTIMATE:
20 minutes
START OF EXERCISE
You will need to complete the following steps on the Solaris host.
1. Enter the following command on the Solaris host to view the WWPNs of the FC HBAs installed on your host. NOTE: You could also use the QLogic SANSurfer CLI utility (/usr/bin/scli) to perform this step.

sanlun fcp show adapter

You should get an output similar to:

$ sanlun fcp show adapter
qlc0    WWPN:210000e08b922bf4
qlc1    WWPN:210100e08bb22bf4

Observe the digits (in bold) that differ between the WWPNs of each FC HBA port on the Solaris host.
IMPORTANT: The WWPNs are different on your host. Record the WWPNs of your host here, emphasizing the digits that differ between qlc0 and qlc1:

qlc0:____________________________________
qlc1:____________________________________
You will need to complete the following steps on the Brocade FC switch.
1. Enter the following command on the Brocade switch console to view the WWPNs connected to each port. Locate the two F-Ports where your Solaris host connects.

switchshow

For example:

nau-48k:root> switchshow
Index Slot Port Address Media Speed State   Proto
===================================================
...
 68    7    4   014400   id    N4   Online  F-Port
        21:00:00:e0:8b:92:2b:f4
 69    7    5   014500   id    N4   Online  F-Port
        21:01:00:e0:8b:b2:2b:f4
...

In this example we were looking for 210000e08b922bf4 (qlc0) and for 210100e08bb22bf4 (qlc1).

Observe that IN THIS EXAMPLE, the FC initiator port qlc0 on the Solaris host connects to port 4 on the Brocade switch. FC initiator port qlc1 on the Solaris host connects to port 5 on the Brocade switch.
IMPORTANT: Make sure to properly identify the slot and port where YOUR Solaris host connects. If in doubt, ASK YOUR INSTRUCTOR. Record your slot and port number here:

qlc0 connected to slot:____________ port:___________
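Locating the right slot and port can also be scripted against saved switchshow output. Below is a sketch assuming the wrapped two-line format shown above; the here-document is this example's data and will differ on your switch.

```shell
# Look up the slot/port where a given WWPN logs in, from saved
# `switchshow` output. switchshow prints the attached WWPN on the line
# after the port header, so remember the header and match the WWPN.
wwpn="21:00:00:e0:8b:92:2b:f4"
slot_port=$(awk -v w="$wwpn" '
    /F-Port/ { slot = $2; port = $3 }  # remember the port header line
    $0 ~ w   { print slot "/" port; exit }
' <<'EOF'
  68    7    4   014400   id    N4   Online   F-Port
  21:00:00:e0:8b:92:2b:f4
  69    7    5   014500   id    N4   Online   F-Port
  21:01:00:e0:8b:b2:2b:f4
EOF
)
echo "$slot_port"
```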
TASK 3: DISABLE THE FC SWITCH PORT WHERE THE FIRST SOLARIS FC HBA PORT
CONNECTS
You will need to complete the following steps on the Brocade FC switch.
1. IMPORTANT: Make sure to properly identify the slot and port where YOUR Solaris host connects. You do not want to disable the port of someone else's host. If in doubt, ASK YOUR INSTRUCTOR.

Enter the following command to disable port 4 in slot 7 on the Brocade Director switch. Slot 7, port 4 is where the qlc0 FC HBA connects IN THIS EXAMPLE. Make sure to identify the slot and port where YOUR Solaris host's qlc0 FC HBA port connects.

portdisable 7/4

If you are not using a director FC switch, you just need to specify the port number:

portdisable 4
2. Enter the following command on the Brocade switch console to ensure that the slot and port connected to your Solaris qlc0 FC HBA are disabled:

switchshow

For example:

nau-48k:root> switchshow
Index Slot Port Address Media Speed State    Proto
===================================================
...
 68    7    4   014400   id    N4   No_Sync  Disabled
 69    7    5   014500   id    N4   Online   F-Port
        21:01:00:e0:8b:b2:2b:f4
...
You will need to complete the following steps on the Solaris host.
1. Enter the following command to verify that some of the paths to the LUNs are currently unusable:

cfgadm -al

You should see that some of the paths leading to devices of type disk (that is, cX::500 FC targets) are reported as unavailable, unusable, or failed.
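Scanning the cfgadm -al output for bad paths can be automated. Below is a sketch that flags disk paths whose Condition column is neither ok nor unknown; the Ap_Ids in the here-document are made-up sample data, not from a real host.

```shell
# Flag FC disk paths whose cfgadm Condition is neither ok nor unknown.
# Columns assumed: Ap_Id, Type, Receptacle, Occupant, Condition. The
# Ap_Ids below are hypothetical sample data.
bad=$(awk '$2 == "disk" && $5 != "unknown" && $5 != "ok" { print $1 }' <<'EOF'
c3::500a098187395e38  disk  connected  configured  unknown
c4::500a098197395e38  disk  connected  configured  unusable
EOF
)
echo "$bad"
```

On a real host you would pipe `cfgadm -al` into the awk program instead of using a here-document.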
2.
Enter the following command to test writing to Sun SVM volume d0 that is
provisioned by NetApp LUN lunC while some of the FC paths to lunC are
broken due to FC switch port failure. (Actually, you just disabled the FC switch
port.)
touch /mnt/lunC/test_write_during_fail.txt
TASK 5: REENABLE THE FC SWITCH PORT WHERE THE FIRST SOLARIS FC HBA
PORT CONNECTS
You will need to complete the following steps on the Brocade FC switch.
1. IMPORTANT: Make sure to properly identify the slot and port where YOUR Solaris host connects. If in doubt, ASK YOUR INSTRUCTOR.

Enter the following command to enable port 4 in slot 7 on the Brocade Director switch. Slot 7, port 4 is where the qlc0 FC HBA connects IN THIS EXAMPLE. Make sure to identify the slot and port where YOUR Solaris host's qlc0 FC HBA port connects.

portenable 7/4

If you are not using a director FC switch, you just need to specify the port number:

portenable 4
2. Enter the following command on the Brocade switch console to ensure that the slot and port connected to your Solaris qlc0 FC HBA are re-enabled:
switchshow
For example:
nau-48k:root> switchshow
Index Slot Port Address Media Speed State
Proto
===================================================
...
68 7 4 014400 id N4 Online F-Port
21:00:00:e0:8b:92:2b:f4
69 7 5 014500 id N4 Online F-Port
21:01:00:e0:8b:b2:2b:f4
...
3. Enter the following command on the Solaris host to verify that all paths to the LUNs are now usable on the host:

cfgadm -al

You should see that all paths leading to devices of type disk (that is, cX::500 FC targets) are now reported as either ok or unknown. You should NOT see unavailable, unusable, or failed in the Condition column.
END OF EXERCISE
In this exercise, you will run the basic_config script provided by the NetApp Host Utilities
Kit to configure a Solaris host with the recommended values for FC access to LUNs on a
NetApp storage system.
TIME ESTIMATE:
20 minutes
START OF EXERCISE
TASK 1: INSPECT THE CURRENT STATE OF THE ISCSI SERVICE ON THE SOLARIS
HOST.
You will need to complete the following steps on the Solaris host.
1. Enter the command below to obtain the iSCSI node name of your Solaris host. The iSCSI initiator node name would have been needed if you had to create initiator groups on the storage controller.

iscsiadm list initiator-node
3. Enter the following command to list the current iSCSI target discovery parameters on your Solaris host:

iscsiadm list discovery
You will need to complete the following steps on your Solaris host by replacing
<storage_ctlr> with the name of your storage controller.
1. Enter the following command to ensure that the iSCSI protocol is licensed on the storage controller:

$ rsh <storage_ctlr> license
iscsi site IKVAREM
2.
Enter the command below to disable iSCSI traffic on the e0a Ethernet interface.
It is recommended to disable iSCSI traffic on the default e0a management
interface.
$ rsh <storage_ctlr> iscsi interface disable e0a
3.
Enter the following command to enable the iSCSI service on the storage
controller:
$ rsh <storage_ctlr> iscsi start
Tue Jan 16 18:00:16 GMT [Filer1:
iscsi.service.startup:info]: iSCSI service startup
4. Enter the command below to see the iSCSI interfaces currently enabled on the storage controller. Make sure that the e0a interface is disabled for iSCSI traffic.

$ rsh <storage_ctlr> iscsi interface show
Interface e0a disabled
Interface e0b enabled
Interface e0c enabled
Interface e0d enabled

Observe that interface e0a is reserved for general-purpose TCP/IP traffic to the storage controller. Thus, interface e0a is disabled for iSCSI traffic. VLANs can also be used on the switch to isolate iSCSI traffic from general-purpose TCP/IP traffic.

5. Enter the following command to see the iSCSI Target Portal Groups (TPGs) currently available on the storage controller:

$ rsh <storage_ctlr> iscsi tpgroup show
TPGTag  Name         Member Interfaces
1000    e0a_default  e0a
1001    e0b_default  e0b
1002    e0c_default  e0c
1003    e0d_default  e0d

Why are there four different iSCSI TPGs on this storage controller?
6. Run the following command to view the IP address assigned to each Ethernet interface on your storage controller:

$ rsh <storage_ctlr> ifconfig -a
You will need to complete the following steps on the Solaris host.
1. Enter the following command to set the discovery address for iSCSI targets on the target storage controller, where <e0b_ip_address> is the IP address of the e0b Ethernet interface on the target storage controller in your pod:

iscsiadm add discovery-address <e0b_ip_address>:3260
2. Enter the following command to verify that the discovery addresses were properly set up:

iscsiadm list discovery-address

3. Enter the following command to have a quick look at the general syntax and options of the Solaris iscsiadm command:

iscsiadm
4. The console of the storage controller should output a message similar to:

Filer1> Wed Jan 17 16:47:48 GMT [Filer1: iscsi.notice:notice]: ISCSI: New session from initiator iqn.1986-03.com.sun:01:00801784624b.458c0eaf at IP addr 10.61.170.21

This shows that the Solaris iSCSI software initiator has logged in to discover iSCSI targets on the storage controllers.
5.
Enter the following command to verify that the dynamic iSCSI targets discovery
was properly set up:
iscsiadm list discovery
6. If you see the text below in scsi_vhci.conf, you need to complete Step 8. Otherwise, you can just read through Step 8.

# Added by NetApp to enable MPxIO for Data ONTAP LUNs
device-type-scsi-options-list =
    "NETAPP  LUN", "symmetric-option";
symmetric-option = 0x1000000;
7. The iSCSI Solaris Host Utilities 3.0.1 does not support Asymmetric Logical Unit Access (ALUA) with iSCSI. While ALUA is currently supported in the iSCSI Solaris Host Utilities 3.0, if you upgrade to iSCSI Solaris Host Utilities 3.0.1 or Solaris 10 Update 3, ALUA will not be supported by NetApp. Going forward, it is recommended to use the Solaris iSCSI software initiator with MPxIO and without ALUA for iSCSI on Solaris.

Because ALUA must be turned off for iSCSI, you need to disable ALUA on the Solaris initiator groups on the storage controller (including the FC initiator groups) using the following command: igroup set <igroup_name> alua off. If you are provisioning NetApp LUNs from the Solaris host using both FC and iSCSI from the SAME host, because ALUA needs to be disabled for that host, you need to manage the multiple FC paths manually, using the Solaris mpathadm command.

To enable multipathing, you must execute the mpxio_set script provided by the Host Utilities to configure the Sun StorEdge Traffic Manager. You do this by adding the storage system's vendor ID and product ID (VID/PID) to the Sun StorEdge Traffic Manager configuration file.
The format of the entries in this file is very specific. To ensure that the entry is correct, the Host Utilities include the mpxio_set script to automatically add the required storage-vendor-specific configuration variables. This script was placed in the /opt/NTAP/SANToolkit/bin directory when you installed the Host Utilities.

8. To add the NetApp VID/PID (Vendor ID/Product ID) lines to the scsi_vhci.conf file, enter the following command:

/opt/NTAP/SANToolkit/bin/mpxio_set -e

W A R N I N G
This script will modify /kernel/drv/scsi_vhci.conf
to add Vendor ID information for your storage system.
You should only run this script if you are using MPxIO
multipathing AND you have NOT enabled ALUA for this
host's igroup on the filer.

This command warns you that a change will be made to the configuration files. Answer Y to this question to update the configuration files. Then, it will ask you to reboot. Answer N to avoid rebooting with stmsboot, and enter the following command instead to reboot and reconfigure the host for MPxIO:

reboot -- -r
9. Enter the following command to explore the iSCSI targets that Solaris found:

iscsiadm list target -v | more

NOTE: If you do not see the IP address of any iSCSI target in the output of this command, run devfsadm -C to clean up dangling device links and rerun the iscsiadm list target -v command as shown above.

Also, verify that the LUNs are mapped to the correct initiator group and that the initiator groups contain the correct IQN. If the LUNs are already mapped to the correct igroup, but the igroup does not contain the correct IQNs, you may need to unmap and remap the LUNs to update the map with the correct iSCSI IQNs.
10.
Q1: Consider the output of the command run in the previous step. Why are some
iSCSI targets shown as connected to a certain IP address whereas other targets are
shown not connected?
Q2: The Target Portal Group 1000 (TPGT: 1000) is not shown (discovered) at
all on the Solaris host. Why?
Q3: The Target Portal Group 1003 (TPGT: 1003) is not shown (discovered) at
all on the Solaris host. Why?
Hints:
TPGT is the Target Portal Group Tag
Use the iscsi tpgroup show Data ONTAP command in conjunction with the ifconfig -a command on the target and partner storage controllers to find the answers.
END OF EXERCISE
In this exercise, you will learn how to discover a new LUN on a Solaris host using the iSCSI protocol. The Solaris host is using native MPxIO to manage multiple paths to the LUN. You will also learn how to interpret the output of the sanlun lun show -p command in a Solaris native MPxIO environment and how to use the iscsiadm list target Solaris command.
OBJECTIVES:
Observe the multiple paths between the host and the storage controller
TIME ESTIMATE:
40 minutes
START OF EXERCISE
You will need to complete the following steps on your Solaris host by replacing
<storage_ctlr> with the name of your storage controller.
1. Enter the following command to inspect the LUNs available on the target storage controller and the way they are mapped to the initiator groups:

$ rsh <storage_ctlr>.rtp.netapp.com lun show -m
LUN path                 Mapped to           LUN ID  Protocol
-------------------------------------------------------------
/vol/solarisvol1/lunA    solaris_iscsi_ig         0  iSCSI
/vol/solarisvol1/lunB    solaris_iscsi_ig2        1  iSCSI

Observe that the LUN named lunA is mapped with lun_id 0 to an iSCSI igroup named solaris_iscsi_ig. The LUN named lunB is mapped with lun_id 1 to an iSCSI igroup named solaris_iscsi_ig2.
2. Enter the following command to inspect the initiator groups currently available on the target storage controller:

$ rsh <storage_ctlr>.rtp.netapp.com igroup show -v
solaris_iscsi_ig (iSCSI):
    OS Type: solaris
    Member: iqn.1986-03.com.sun:01:san201.00000201
    (logged in on: e0b, e0c)

Observe the Member iSCSI node name shown in bold typeface in this example. This is the iSCSI node name of the iSCSI software initiator on the Solaris host. Ensure that the iSCSI node name shown in the output of the igroup show command is the iSCSI node name that you recorded previously when you ran the iscsiadm list initiator-node command on the Solaris host.

Make sure that ALUA and the MPxIO entries in scsi_vhci.conf are NOT enabled at the same time. If scsi_vhci.conf has the entries, disable ALUA. You can tell whether ALUA is enabled by looking at the igroup show -v output. In this example, ALUA is not enabled.
You will need to complete the following steps on the Solaris host.
1. Enter the following command to discover the new LUNs on the Solaris host using the iSCSI protocol:

devfsadm -i iscsi
2. Use the sanlun command to see whether the LUNs have been discovered; you can also use format or iscsiadm:

sanlun lun show

You should get an output similar to:

filer: lun-pathname                device filename                                     adapter  protocol  lun size           lun state
san201f1: /vol/solarisvol1/lunA    /dev/rdsk/c3t60A9800043346D525A4A47494E58684Ed0s2   iscsi0   iSCSI     500m (524288000)   GOOD
san201f1: /vol/solarisvol1/lunB    /dev/rdsk/c3t60A9800043346D525A4A47494E586B74d0s2   iscsi0   iSCSI     500m (524288000)   GOOD

You can see that the LUNs have been discovered by the host.
3. Enter the following native Solaris command to verify that the LUNs have been discovered by the Solaris host:

iscsiadm list target -S

You should get an output similar to:

Target: iqn.1992-08.com.netapp:sn.101196961
        Alias:
        TPGT: 1002
        ISID: 4000002a0000
        Connections: 1
        LUN: 1
             Vendor:  NETAPP
             Product: LUN
             OS Device Name: /dev/rdsk/c3t60A9800043346D525A4A47494E586B74d0s2
        LUN: 0
             Vendor:  NETAPP
             Product: LUN
             OS Device Name: /dev/rdsk/c3t60A9800043346D525A4A47494E58684Ed0s2
Target: iqn.1992-08.com.netapp:sn.101196961
        Alias:
        TPGT: 1001
        ISID: 4000002a0000
        Connections: 1
        LUN: 1
             Vendor:  NETAPP
             Product: LUN
             OS Device Name: /dev/rdsk/c3t60A9800043346D525A4A47494E586B74d0s2
        LUN: 0
             Vendor:  NETAPP
             Product: LUN
             OS Device Name: /dev/rdsk/c3t60A9800043346D525A4A47494E58684Ed0s2

Observe that the LUNs are discovered through the Target Portal Groups identified by Tag 1001 and Tag 1002 (TPGT) on the iSCSI target node iqn.1992-08.com.netapp:sn.101196961. These Target Portal Groups correspond to the e0b and e0c Ethernet interfaces on the target storage controller. You can verify this by issuing the iscsi portal show or the iscsi tpgroup show command on the storage array.

san201f1> iscsi portal show
Network portals:
IP address      TCP Port  TPGroup  Interface
10.254.135.101  3260      1001     e0b
10.254.135.121  3260      1002     e0c
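Cross-referencing a TPGT with its interface can be scripted against saved iscsi portal show output. Below is a sketch over this example's rows; the column layout (IP address, TCP port, TPGroup, interface) is assumed from the output above.

```shell
# Map a Target Portal Group tag to its Ethernet interface using saved
# `iscsi portal show` output. Sample rows are from this example and
# will differ on your storage controller.
tpgt=1001
iface=$(awk -v t="$tpgt" '$3 == t { print $4 }' <<'EOF'
10.254.135.101 3260 1001 e0b
10.254.135.121 3260 1002 e0c
EOF
)
echo "$iface"
```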
You will need to complete the following steps on your Solaris host by replacing
<storage_ctlr> with the name of your storage controller.
2. You should also see as many LUNs as you mapped to the host. NOTE: LUNs that are offline are not visible, though. In this example, we had two LUNs mapped to the host by way of iSCSI initiator groups. If MPxIO were NOT working, we would see four LUNs, one LUN per path. Enter the following commands to check and confirm:
format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/sd@0,0
1. c3t60A9800043346D525A4A47494E586B74d0 <NETAPP-LUN-0.2 cyl
498 alt 2 hd 16 sec 128>
/scsi_vhci/ssd@g60a9800043346d525a4a47494e586b74
2. c3t60A9800043346D525A4A47494E58726Ed0 <NETAPP-LUN-0.2 cyl
498 alt 2 hd 16 sec 128>
/scsi_vhci/ssd@g60a9800043346d525a4a47494e58726e
Specify disk (enter its number):
Observe the long consolidated MPxIO path names for the virtual disks (NetApp LUNs). This shows that MPxIO is working as intended. If MPxIO were not working, we would see the drives with just a cXtXdX notation, similar to:

0. c0t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
   /pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/sd@0,0

Also, MPxIO devices have physical path names starting with /scsi_vhci, as opposed to /pci@1e, and so on; note the differences in the output above.
3. Observe the device file name and lun state reported for each LUN:

san201f1: /vol/solarisvol1/lunA
/dev/rdsk/c3t60A9800043346D525A4A47494E58684Ed0s2  iscsi0  iSCSI  500m (524288000)  GOOD
san201f1: /vol/solarisvol1/lunB
/dev/rdsk/c3t60A9800043346D525A4A47494E586B74d0s2  iscsi0  iSCSI  500m (524288000)  GOOD
TASK 4: OBSERVE THE MULTIPLE PATHS BETWEEN THE HOST AND THE STORAGE
CONTROLLER
You will need to complete the following steps on your Solaris host by replacing
<storage_ctlr> with the name of your storage controller.
STEP
1.
ACTION
sanlun lun show -v
filer:          lun-pathname
device filename                                    adapter  protocol  lun size          lun state
san201f1: /vol/solarisvol1/lunA
/dev/rdsk/c3t60A9800043346D525A4A47494E58684Ed0s2  iscsi0   iSCSI     500m (524288000)  GOOD
        Filer iSCSI adapter name: ism_sw1
        Filer iSCSI portal group: 1001
        Filer IP address: 10.254.135.101 3260
                          10.254.135.121 3260
        Filer volume name: solarisvol1   FSID: 0x1ad2968
        LUN ID: 0x0
A similar verbose entry is displayed for lunB.
2.
Observe that the LUNs are discovered by way of TPGTs 1001 and 1002, which
correspond to interfaces e0b and e0c.
You can also use native Solaris commands to view the paths.
STEP
1.
ACTION
The mpathadm utility is a useful native Solaris command that can be used to
inspect LUNs, and the paths to them, for both FCP and iSCSI.
Enter the following command to list the LUNs on the host:
mpathadm list lu
/dev/rdsk/c3t60A9800043346D525A4A47494E58684Ed0s2
Total Path Count: 2
Operational Path Count: 2
/dev/rdsk/c3t60A9800043346D525A4A47494E586B74d0s2
Total Path Count: 2
Operational Path Count: 2
Enter the following command to look at the details about a particular LUN on the
host:
mpathadm show lu
/dev/rdsk/c3t60A9800043346D525A4A47494E58684Ed0s2
Logical Unit:
/dev/rdsk/c3t60A9800043346D525A4A47494E58684Ed0s2
mpath-support: libmpscsi_vhci.so
Vendor: NETAPP
Product: LUN
Revision: 0.2
Name Type: unknown type
Name: 60a9800043346d525a4a47494e58684e
Asymmetric: no
Current Load Balance: round-robin
Logical Unit Group ID: NA
Auto Failback: on
Auto Probing: NA
Paths:
Initiator Port Name: iqn.1986-03.com.sun:01:san201.00000201,4000002a00ff
Target Port Name: 4000002a0000,iqn.1992-08.com.netapp:sn.101196961,1002
Override Path: NA
Path State: OK
Disabled: no
Initiator Port Name: iqn.1986-03.com.sun:01:san201.00000201,4000002a00ff
Target Port Name: 4000002a0000,iqn.1992-08.com.netapp:sn.101196961,1001
Override Path: NA
Path State: OK
Disabled: no
Target Ports:
Name: 4000002a0000,iqn.1992-08.com.netapp:sn.101196961,1002
Relative ID: 0
Name: 4000002a0000,iqn.1992-08.com.netapp:sn.101196961,1001
Relative ID: 0
Again, observe the target portal groups being used in the output above.
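The per-LUN path counts printed by mpathadm list lu lend themselves to a scripted health check. The sketch below parses output in that format with awk and flags any LUN whose operational path count falls below its total path count; the sample input mirrors the listing above, with the second operational count deliberately lowered to 1 to show the degraded case:

```shell
#!/bin/sh
# Parse `mpathadm list lu`-style output and report LUNs with failed paths.
# In practice you would pipe `mpathadm list lu` straight into this function.
check_paths() {
awk '
    /^\/dev\/rdsk\//          { lun = $1 }
    /Total Path Count:/       { total = $NF }
    /Operational Path Count:/ {
        if ($NF + 0 < total + 0) print "DEGRADED: " lun
        else                     print "OK: " lun
    }'
}

check_paths <<'EOF'
/dev/rdsk/c3t60A9800043346D525A4A47494E58684Ed0s2
Total Path Count: 2
Operational Path Count: 2
/dev/rdsk/c3t60A9800043346D525A4A47494E586B74d0s2
Total Path Count: 2
Operational Path Count: 1
EOF
```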
You will need to complete the following steps on the Solaris host.
STEP
1.
ACTION
Use the format utility to label one of the LUNs as a Solaris disk.
Make sure to select the disks corresponding to lunA and lunB (use the output
of sanlun lun show; in this example, lunB is disk 2 and lunA is disk 4). Once
you have labeled both lunA and lunB, enter quit (or just q) to exit.
Next, you can run newfs on any slices that you have created to create a
UNIX file system on that slice. You will do this in a subsequent lab
exercise.
END OF EXERCISE
In this exercise, you will create a UNIX File System (UFS) on one of the LUNs discovered
previously on the Solaris host using iSCSI. Here are the main tasks of this lab exercise:
OBJECTIVES:
Inspect the raw disk devices created for NetApp LUNs on the Solaris host
Mount the UFS onto the active file system on the Solaris host
Add an entry in the Virtual File System Table to mount the LUN persistently across reboots
TIME ESTIMATE:
20 minutes
START OF EXERCISE
You will need to complete the following steps on the Solaris host:
STEP
1.
ACTION
Enter the following command to look at NetApp LUNs using the sanlun utility
provided by the NetApp iSCSI Utilities Kit:
sanlun lun show
You should get an output similar to:
bash-3.00# sanlun lun show
filer:          lun-pathname
device filename                                    adapter  protocol  lun size          lun state
Filer1: /vol/solarisvol1/lunA
/dev/rdsk/c1t60A9800043346C4C564A396F472F6B63d0s2  0        iSCSI     500m (524288000)  GOOD
Filer1: /vol/solarisvol1/lunB
/dev/rdsk/c1t60A9800043346C4C564A397163314164d0s2  0        iSCSI     500m (524288000)  GOOD
Observe the consolidated raw disk device file name given by MPxIO to each of
the NetApp LUNs.
IMPORTANT: These device file names are different on your host. Make sure to
use the device file names as they show up on your host in the following steps.
2.
IMPORTANT: In the following steps, make sure to use the device file name as it
appears on your host.
Enter the following command to install a UFS on the MPxIO consolidated device
created for lunA:
newfs
/dev/rdsk/c1t60A9800043346C4C564A396F472F6B63d0s2
newfs: construct a new file system
/dev/rdsk/c1t60A9800043346C4C564A396F472F6B63d0s2:
(y/n)? y
/dev/rdsk/c1t60A9800043346C4C564A396F472F6B63d0s2:
1019904 sectors in 498 cylinders of 16 tracks, 128
sectors
498.0MB in 32 cyl groups (16 c/g, 16.00MB/g,
7680 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 32928, 65824, 98720, 131616, 164512, 197408,
230304, 263200, 296096,
721696, 754592, 787488, 820384, 853280, 886176,
919072, 951968, 984864,
1017760
Observe that we chose to install a UFS on slice 2 (partition 2) of the disk,
the slice that represents the whole disk. Alternatively, we could install a
different file system on each partition of the disk by choosing different
slices of the MPxIO consolidated disk device with the newfs command.
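As a sanity check on the newfs output above, the reported sector count follows directly from the disk label's geometry (498 data cylinders, 16 tracks per cylinder, 128 sectors per track), and the resulting capacity is slightly under the 500 MB LUN size because the label reserves alternate cylinders:

```shell
#!/bin/sh
# Verify the geometry arithmetic from the newfs output above.
cyls=498 tracks=16 sectors=128
total_sectors=$((cyls * tracks * sectors))
echo "sectors:  $total_sectors"                   # 1019904, as newfs reported
echo "capacity: $((total_sectors * 512)) bytes"   # just under the 500 MB (524288000-byte) LUN
```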
Keep in mind that the newfs command creates a file system of the default file
system type on the Solaris host. To view the default file system type on your
host, look at /etc/default/fs.
STEP
3.
ACTION
Enter the following command to create a mountpoint for the UFS created on
lunA:
mkdir -p /mnt/lunA
4.
Enter the following command to mount lunA onto the active Solaris file
system:
mount /dev/dsk/c1t60A9800043346C4C564A396F472F6B63d0s2
/mnt/lunA
Observe that we use the dsk path at this point because we now have a file system
created on lunA.
5.
6.
Enter the following command to view the contents of lunA:
ls -la /mnt/lunA
drwxr-xr-x   3 root  root                       .
drwxr-xr-x   3 root  sys                        ..
drwx------   2 root  root                       lost+found
-rw-r--r--   1 root  root   0 Jan 23 13:50      test_write_into_lunA.txt
7.
This step is OPTIONAL. If you need to have the NetApp LUN automatically
mounted after a system reboot, you need to add an entry in the Virtual File
System Table file, /etc/vfstab.
IMPORTANT: Make sure to use the device file name as it appears on your host.
Add the following line to the /etc/vfstab file to mount lunA persistently
across system reboots:
/dev/dsk/c1t60A9800043346C4C564A396F472F6B63d0s2 /dev/rdsk/c1t60A9800043346C4C564A396F472F6B63d0s2 /mnt/lunA ufs 2 yes -
Observe the comments at the beginning of the vfstab file; they explain each
field.
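A vfstab entry has exactly seven whitespace-separated fields: device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, and mount options. A small sketch to validate an entry before rebooting (the device names are the sample values from this exercise):

```shell
#!/bin/sh
# Check that a vfstab line has the seven fields Solaris expects.
check_vfstab_line() {
    nfields=$(echo "$1" | awk '{print NF}')
    if [ "$nfields" -eq 7 ]; then
        echo valid
    else
        echo "invalid ($nfields fields)"
    fi
}

check_vfstab_line "/dev/dsk/c1t60A9800043346C4C564A396F472F6B63d0s2 /dev/rdsk/c1t60A9800043346C4C564A396F472F6B63d0s2 /mnt/lunA ufs 2 yes -"
```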
END OF EXERCISE
Map a LUN clone to the same initiator group as the original LUN
TIME ESTIMATE:
20 minutes
START OF EXERCISE
You will need to complete the following steps on your Solaris host by replacing
<storage_ctlr> with the name of your storage controller.
STEP
1.
ACTION
Enter the command below to create a Snapshot copy of the NetApp volume where
lunA resides. This Snapshot will be used as the backing Snapshot for the LUN
clone.
$ rsh <storage_ctlr> snap create solarisvol1 snap_lunA_clone
2.
Enter the command below to clone lunA using the snap_lunA_clone Snapshot of
the solarisvol1 volume:
$ rsh <storage_ctlr> lun clone create /vol/solarisvol1/lunA_clone -b /vol/solarisvol1/lunA snap_lunA_clone
3.
Enter the following command to view the Snapshot copies of the solarisvol1
volume:
$ rsh <storage_ctlr> snap list solarisvol1
Observe that the status of the snap_lunA_clone Snapshot is (busy,LUNs). This
is because the Snapshot is in use by the clone of lunA that we created in the
previous step.
STEP
4.
ACTION
Enter the following command to view available LUNs:
$ rsh <storage_ctlr> lun show
/vol/solarisvol1/lunA            500m (524288000)   (r/w, online, mapped)
/vol/solarisvol1/lunA_clone      500m (524288000)   (r/w, online)
/vol/solarisvol1/lunB            500m (524288000)   (r/w, online, mapped)
5.
Enter the following commands to view the LUN mappings and details:
$ rsh <storage_ctlr> lun show -m
LUN path                         Mapped to           LUN ID   Protocol
----------------------------------------------------------------------
/vol/solarisvol1/lunA            solaris_iscsi_ig         0   iSCSI
/vol/solarisvol1/lunB            solaris_iscsi_ig2        1   iSCSI
$ rsh <storage_ctlr> lun show -v
/vol/solarisvol1/lunA            500m (524288000)   (r/w, online, mapped)
        Serial#: C4lLVJ9oG/kc
        Share: none
        Space Reservation: enabled
        Multiprotocol Type: solaris
        Maps: solaris_iscsi_ig=0
/vol/solarisvol1/lunA_clone      500m (524288000)   (r/w, online)
        Serial#: C4lLVJ9vfk5G
        Backed by: /vol/solarisvol1/.snapshot/snap_lunA_clone/lunA
        Share: none
        Space Reservation: enabled
        Multiprotocol Type: solaris
/vol/solarisvol1/lunB            500m (524288000)   (r/w, online, mapped)
        Serial#: C4lLVJ9qc1Ad
        Share: none
        Space Reservation: enabled
        Multiprotocol Type: solaris
        Maps: solaris_iscsi_ig2=1
TASK 3: MAP LUN CLONE TO THE SAME INITIATOR GROUP AS THE ORIGINAL LUN
You will need to complete the following steps on the NetApp1 target storage controller.
STEP
1.
ACTION
Enter the following command to map the LUN clone to the same initiator
group as the original LUN:
$ rsh <storage_ctlr> lun map /vol/solarisvol1/lunA_clone
solaris_iscsi_ig
2.
Enter the following command to verify that the clone is now mapped:
$ rsh <storage_ctlr> lun show
/vol/solarisvol1/lunA            500m (524288000)   (r/w, online, mapped)
/vol/solarisvol1/lunA_clone      500m (524288000)   (r/w, online, mapped)
/vol/solarisvol1/lunB            500m (524288000)   (r/w, online, mapped)
$ rsh <storage_ctlr> lun show -m
LUN path                         Mapped to           Protocol
---------------------------------------------------------------------
/vol/solarisvol1/lunA            solaris_iscsi_ig    iSCSI
/vol/solarisvol1/lunB            solaris_iscsi_ig2   iSCSI
/vol/solarisvol1/lunA_clone      solaris_iscsi_ig    iSCSI
You will need to complete the following steps on the Solaris host.
STEP
1.
ACTION
Enter the following command to discover the new LUN clone on the Solaris host
using the iSCSI protocol:
devfsadm -i iscsi
2.
Enter the following command to view the NetApp LUNs available on the Solaris
host:
sanlun lun show
You should get an output similar to:
bash-3.00# sanlun lun show
filer:          lun-pathname
device filename                                    adapter  protocol  lun size          lun state
Filer1: /vol/solarisvol1/lunA
/dev/rdsk/c1t60A9800043346C4C564A396F472F6B63d0s2  0        iSCSI     500m (524288000)  GOOD
Filer1: /vol/solarisvol1/lunB
/dev/rdsk/c1t60A9800043346C4C564A397163314164d0s2  0        iSCSI     500m (524288000)  GOOD
Filer1: /vol/solarisvol1/lunA_clone
/dev/rdsk/c1t60A9800043346C4C564A3976666B3547d0s2  0        iSCSI     500m (524288000)  GOOD
Observe that lunA_clone is shown like any other LUN on the Solaris host.
3.
Enter the following command to create a mountpoint for the lunA_clone LUN:
mkdir -p /mnt/lunA_clone
4.
Enter the following command to mount the file system on lunA_clone onto the
active file system.
IMPORTANT: Make sure to use the lunA_clone device file name as it appears
on your host.
mount /dev/dsk/c1t60A9800043346C4C564A3976666B3547d0s2
/mnt/lunA_clone
5.
6.
7.
8.
Enter the following commands to view and compare the contents of lunA and
lunA_clone:
ls -la /mnt/lunA
drwxr-xr-x   3 root  root                       .
drwxr-xr-x   5 root  sys                        ..
drwx------   2 root  root                       lost+found
-rw-r--r--   1 root  root   0 Jan 24 14:05      test2_write_into_lunA.txt
-rw-r--r--   1 root  root   0 Jan 23 13:50      test_write_into_lunA.txt
ls -la /mnt/lunA_clone
drwxr-xr-x   3 root  root                       .
drwxr-xr-x   5 root  sys                        ..
drwx------   2 root  root                       lost+found
-rw-r--r--   1 root  root   0 Jan 23 13:50      test_write_into_lunA.txt
-rw-r--r--   1 root  root   0 Jan 24 12:36      test_write_into_lunA_clone.txt
Observe that the contents are now different. Keep in mind, though, that much
of the space occupied by lunA and by lunA_clone in
solarisvol1/.snapshot/snap_lunA_clone is still shared.
You will need to complete the following steps on the NetApp1 target storage controller. Keep
in mind that in most use cases, it is not necessary to split the LUN clone from its backing
Snapshot. Also, the split can happen while the LUN is being used.
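As a sketch of this task, Data ONTAP performs the split with the lun clone split subcommands (shown here with the volume and LUN names used in this exercise):

```
$ rsh <storage_ctlr> lun clone split start /vol/solarisvol1/lunA_clone
$ rsh <storage_ctlr> lun clone split status /vol/solarisvol1/lunA_clone
```

Once the split completes, the snap_lunA_clone Snapshot is no longer reported as (busy,LUNs) and can be deleted.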
STEP
1.
ACTION
2.
If the splitting occurs too quickly for you to get the status, you should see:
3.
1.
ACTION
2.
END OF EXERCISE
FC & IP VMware
Exercise
Module 7: FC and IP VMware
Estimated Time: 6 hours
In this exercise, you will gain hands-on experience with a basic VMware FC
SAN setup: installing the host utilities, installing and configuring the FC
HBA driver, setting up the multipathing policy, and understanding how the
host interacts with the storage system.
OBJECTIVES:
TIME ESTIMATE:
45 minutes
E7-1
SAN Implementation Workshop: FC and IP VMware
2008 NetApp. This material is intended for training use only. Not authorized for reproduction purposes.
START OF EXERCISE
1.
ACTION
SSH into your group's host using PuTTY or a similar utility.
2.
3.
4.
Is the configuration that you have documented so far compatible with the
support matrix?
____________________________________________________________
Does the current support matrix allow SnapDrive for UNIX with your
configuration?
____________________________________________________________
1.
ACTION
2.
The NetApp Host Utilities are available for download at the following location on
the NOW site.
http://now.netapp.com/NOW/download/software/sanhost_esx/ESX/
The NetApp Host Utilities have been provided for you in the <class_files>
location provided by the instructor.
3.
Enter the following command to install the NetApp Host Utilities. Answer yes
to the prompt asking to open TCP ports through the ESX firewall.
cd netapp_fcp_esx_host_utilities_3_0
./install
4.
Ensure that the Emulex LightPulse FC (LPFC) driver is loaded in the ESX Server
kernel.
modprobe -c | grep lpfc
Observe that the FC HBA driver module is named lpfcdd_732 and an alias
named scsi_hostadapter points to it. You can either use the driver module name
or the alias in the following command.
If the driver module is not already loaded, load it using modprobe:
modprobe -v scsi_hostadapter
5.
Verify that the timeout value for the LPFC driver is set to 120.
esxcfg-module -g lpfcdd_732
You should get an output similar to:
lpfcdd_732 enabled = 1 options = 'lpfc_nodev_tmo=120'
If the lpfc_nodev_tmo option is not set to 120, run the following commands
to set it to 120 and update the boot configuration:
esxcfg-module -s lpfc_nodev_tmo=120 lpfcdd_732
esxcfg-boot -b
Then reboot the ESX Server host:
reboot
NOTE: The lpfc_nodev_tmo option is normally set to 120 as part of the
installation of the NetApp FC HUK for VMware ESX Server 3.0. Thus, you do
not have to set it manually if you installed the HUK.
NOTE: If a Windows guest OS is set up to access NetApp storage on this ESX
Server, the disk TimeOutValue
(HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk\TimeOutValue)
will need to be set to 190 in the Windows registry.
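Applied inside the Windows guest, that registry setting can be captured in a .reg file such as this sketch (190 decimal is 0xBE in hexadecimal):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk]
"TimeOutValue"=dword:000000be
```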
6.
Take a quick look at the other information items displayed by the esxcfg-info
VMware command.
7.
Run the following command provided by the NetApp FC HUK for VMware ESX
Server 3.0 to collect information about your ESX Server:
/opt/netapp/santools/esx_info fcp
You should get an output similar to:
Gathering RPM information.........................DONE
Gathering ESX Server information..................DONE
Gathering FCP information.........................DONE
Done gathering information
ESX Server system info is in directory /tmp/netapp/netapp_esx_info
Compressed file is /tmp/netapp/netapp_esx_info.tar.gz
Please send this file for analysis
Take a quick look at the information items dumped into the
/tmp/netapp/netapp_esx_info directory by this command.
1.
ACTION
2.
3.
STEP
4.
ACTION
This step is informational only. You can read through it, but do not run the
commands shown.
You can configure the multipathing policy using the config_mpath command
provided by the NetApp Host Utilities Kit for VMware ESX 3.0. For example,
to configure multipathing by balancing the load among all of the primary
paths, and to make the configuration persistent across ESX Server reboots,
you can run the following command:
/opt/netapp/santools/config_mpath --primary --loadbalance --persistent
END OF EXERCISE
At the end of this exercise, you should be able to understand and interpret the compatibility
matrix to confirm a supported installation.
TIME ESTIMATE:
90 minutes
START OF EXERCISE
TASK 1: CREATE IGROUPS, VOLUMES, AND LUNS FOR FCP
STEP
1.
ACTION
Port 0: vmhba________
Port 1: vmhba________
WWPN Port0: _____________________________________________
WWPN Port1: _____________________________________________
2.
Click Add.
You will get a message indicating that the initiator group was successfully
created.
3.
Next, add a volume from FilerView or the command line. Instructions are
provided for use with FilerView.
Select Volumes and Add. The Volume Wizard appears.
Select Next.
Select Flexible and click Next.
Name the volume esx_fcp_vol1.
Keep Language set to POSIX and select Next.
The containing aggregate should be aggr1. The volume should be 2GB.
Set Space Guarantee to none.
Select Next.
Review the summary and click Commit.
STEP
4.
ACTION
Now you create a LUN in FilerView or with the command line. Instructions are
provided for use with FilerView.
Select LUNs and Add.
The path to the LUN should be /vol/esx_fcp_vol1/LUN
Set the LUN Protocol Type to VMware.
Set the size of the LUN to 1500 MB.
Leave Space reservation checked on.
Click Add.
5.
Add another LUN using FilerView or the command line. Instructions are
provided for use with FilerView.
Select LUNs and Add.
The path to the LUN should be /vol/esx_fcp_vol1/LUN2.
Set the LUN Protocol Type to Windows.
Set the size of the LUN to 50 MB.
Leave Space reservation checked on.
Click Add.
6.
7.
1.
ACTION
Repeat the rescan procedure for the second FC HBA vmhbaX port.
Observe that there are four FC targets discovered on the vmhba0 FC HBA port
and four more FC targets discovered on the vmhba1 FC HBA port. That is eight
targets in total, corresponding to eight paths to each LUN that you
previously created and mapped to the esx_fcp_ig initiator group. Why are
there eight paths to the LUNs?
___________________________________________________________________
___________________________________________________________________
Observe that in this example LUN 0 and LUN 1 are discovered on vmhba0 and
vmhba1. However, on your ESX Server host, the vmhba adapter numbers may be
different. For example, if you have a local SCSI adapter with a local disk
attached to the local SCSI bus, then the local SCSI adapter will likely show
up as vmhba0, and the two Emulex FC HBA ports would then show up as vmhba1
and vmhba2.
Look at the SCSI Target 0 for each LUN and record their vmhba adapter
number below:
LUN 0 (1.5GB): vmhba__:0:0
LUN 1 (50MB): vmhba__:0:1
2.
The canonical path is the path that is first discovered by ESX to a given LUN.
That path also becomes the ESX name of the LUN. Run:
/opt/netapp/santools/sanlun lun show
Observe that the vmkdisk name given to the LUN corresponds to the canonical
path shown in the Virtual Infrastructure Client.
3.
Now inspect the paths to one of the LUNs using the Virtual Infrastructure Client.
Click the first path of the first vmbhaX adapter.
Next, right-click the path and select Manage Paths. The Manage Paths dialog
box is displayed.
Observe that vmhba0:0:0 is currently the preferred active path to LUN 0.
You need to verify that this path goes to the primary storage controller
(that is, the controller hosting the LUN being accessed). To do this, verify
that the target WWPN (the SAN Identifier) of this path is a WWPN on the
primary storage controller.
Using PuTTY or another Telnet client, log on to each target storage
controller and run the fcp show adapter Data ONTAP command. Look at the FC
Portname entry and ensure that one of the WWPNs displayed on the storage
controller owning LUN 0 is the WWPN that shows up as the active preferred
path in the Virtual Infrastructure Client. If that is not the case, locate a
path that targets one of the WWPNs on the primary storage controller, then
click Change in the Manage Paths dialog box and check Preferred.
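That comparison can also be scripted: collect the WWPNs reported by fcp show adapter on the primary controller and check whether the active path's target WWPN is among them. A small sketch (the WWPN values below are hypothetical, for illustration only):

```shell
#!/bin/sh
# Given the target WWPN of the active path followed by the list of WWPNs
# reported by the primary controller, report whether the path is primary.
path_is_primary() {
    active_wwpn=$1
    shift
    for wwpn in "$@"; do
        if [ "$wwpn" = "$active_wwpn" ]; then
            echo primary
            return
        fi
    done
    echo non-primary
}

# Hypothetical WWPNs for illustration only
path_is_primary 50:0a:09:81:86:f7:c4:76  50:0a:09:81:86:f7:c4:76 50:0a:09:82:86:f7:c4:76
```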
END OF TASK 2
1.
ACTION
2.
Click Next.
3.
However, in your case the LUN may be on a different vmhba adapter. Please refer
to the vmhba adapter number you recorded earlier.
Observe that only LUN 0 of the FC HBA appears as an available choice. Why?
Hint: Minimum VMFS datastore size.
Click Next.
4.
5.
STEP
6.
ACTION
Click Next.
STEP
7.
ACTION
The Summary screen appears. Review the proposed disk layout and click Finish.
Notice that the Create VMFS datastore is in progress (in the Recent Tasks section
of the screen).
1.
ACTION
Select the Summary tab and click New Virtual Machine in the Commands pane.
Click Next.
3.
Select Microsoft Windows as the Guest Operating System and select the version
Microsoft Windows Server 2003, Enterprise Edition.
Click Next.
4.
If you had multiple networks, you would use this screen to select a different
network. In this example, the defaults are accepted.
5.
On the Define Virtual Disk Capacity screen, set the Disk Size to 0.5 GB. Click
Next.
STEP
6.
ACTION
STEP
7.
ACTION
After the Create Virtual Machine task is complete, observe the new Win2003 FC
VMFS virtual machine created on your ESX Server.
1.
ACTION
Select the Summary tab and click New Virtual Machine in the Commands pane.
Select Custom and click Next. You need to select Custom here to be able to
provision the new VM using raw device mapping (RDM) instead of using a
typical VMFS datastore.
2.
3.
Select a location where the vmx file and the pointer to the RDM will be located.
Observe that the dialog box asks to select a datastore in which to store the files
for the virtual machine. When using RDM storage, only the vmx VM
configuration file and the pointer to the RDM will be stored in the datastore you
select here.
Select FC VMFS.
Click Next.
4.
Select Microsoft Windows as the Guest Operating System and select the version
Microsoft Windows Server 2003, Standard Edition.
Click Next.
5.
6.
7.
If you had multiple networks, you would use this screen to select a different
network. In this example, the defaults are accepted.
8.
9.
STEP
10.
ACTION
Question: Why do you only have one LUN (LUN1, 50 MB, which is
vmhba0:1:0) showing up in this list when in fact you know that you created two
LUNs accessed through FCP (LUN0=1.5 GB and LUN1=50 MB)?
Hint: Think of the fact that RDM stands for Raw Device Mapping.
11.
12.
13.
Leave all options as default on the Specify Advanced Options screen. Click Next.
STEP
14.
ACTION
END OF EXERCISE
In this exercise, you will establish FC connections between the virtual machine (VM) and the
storage. In addition, you will learn to create VMs.
OBJECTIVES:
TIME ESTIMATE:
50 minutes
START OF EXERCISE
1.
ACTION
STEP
2.
ACTION
Select the Configuration tab. Select Networking from the Hardware list on your
screen.
You should see Virtual Switch vSwitch0 as shown here. Observe that vSwitch0 is
currently used for both the Service Console and for the VM Network.
You need to add vSwitch1 for the Virtual Machine Network. You should still be
in the Configuration tab. In the upper-right corner is an option to add networking.
Select Add Networking.
The Add Network Wizard appears.
Observe that the Create a virtual switch option automatically selects the second
NIC (vmnic1) installed on the ESX Server host.
5.
6.
Observe that we have two virtual machine networks defined now: one on
vSwitch0 and one on vSwitch1.
Observe that the two virtual machines, named Win2003 FC VMFS and
Win2003 FC RDM respectively, which you created earlier, are using the VM
Network on vSwitch0. You need to reassign the network of those two VMs onto
the Virtual Machine Network on vSwitch1.
STEP
7.
Select the Network Adapter entry in the Hardware list and pick Virtual
Machine Network from the Network Connection list as shown here. Click OK to
exit the VM Properties dialog box.
8.
Click the san<pod#>esx server entry in the ESX inventory tree again. Observe
that the two VMs are now using the VM network on vSwitch1 instead of the VM
network on vSwitch0.
Now you need to remove the VM Network entry from vSwitch0. Click the
Properties hyperlink next to vSwitch0.
9.
10.
Select the VM Network entry in the list and click the Remove button.
11.
END OF TASK 1
1.
2.
Select the Configuration tab. Select Networking from the Hardware list on your
screen.
You should see Virtual Switch vSwitch0 as shown here. Observe that vSwitch0 is
currently used for both the Service Console and for the VM Network.
Typically, in production environments vSwitch0 is used for the Service Console,
and a separate vSwitch1 is used for the Virtual Machine Network.
3.
You need to add vSwitch1 for the Virtual Machine Network. You should still be
in the Configuration tab. In the upper-right corner is an option to add networking.
Select Add Networking.
Observe that the Create a virtual switch option automatically selects the second
NIC (vmnic1) installed on the ESX Server host.
5.
6.
Observe that we now have two virtual machine networks defined: one on
vSwitch0 and one on vSwitch1.
Observe that the two virtual machines that you created earlier, named
Win2003 FC VMFS and Win2003 FC RDM, are using the VM Network on
vSwitch0. You need to reassign the network of those two VMs to the
Virtual Machine Network on vSwitch1.
TASK 3: CRE
1.
2.
3.
You should still be in the Configuration tab. From the Software list on the left
select Security Profile.
Select Properties from the upper right-hand corner of the screen. The Firewall
Properties window appears.
Check the box for the Software iSCSI Client entry. Click OK.
Verify that the Software iSCSI Client is listed under Outgoing Connections.
END OF EXERCISE
OVERVIEW:
In this exercise, you will establish connections between the virtual machine (VM) and the
storage. In addition, you will learn how to create VMs.
OBJECTIVES:
TIME ESTIMATE:
90 minutes
START OF EXERCISE
1.
2.
3.
4.
Select the Dynamic Discovery tab in the iSCSI Initiator Properties window.
Click Add.
Enter the target IP address specified by your instructor. Keep in mind that this is
the IP address of the first iSCSI target on the storage controller, not the
management IP address. Click OK.
Once you click OK, an iSCSI session is opened between the VMware host and
the target storage controller. This can be verified on the storage controller using
the iscsi session show Data ONTAP command.
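The verification above can be sketched on the storage controller's console (a sketch assuming Data ONTAP 7G command syntax):

```shell
# On the storage controller, list open iSCSI sessions; a session from the
# ESX software initiator should appear once discovery completes.
iscsi session show

# List the initiators currently connected to the target.
iscsi initiator show
```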
Observe also the list of iSCSI discovery addresses, which are used by the
VMware iSCSI software initiator to discover iSCSI targets dynamically:
Locate the IQN by navigating to the Virtual Infrastructure Client and
selecting Storage Adapters in the Hardware section of the Configuration tab.
Then, click the iSCSI Software Adapter and look at the Details window as
shown below.
Type (or even better, copy and paste) the IQN into the Initiators section of
FilerView.
Click Add.
You will get a message indicating that the initiator group was successfully
created.
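The initiator group can also be created from the Data ONTAP command line (a sketch; the group name esx_iscsi_ig is taken from a later step, and the IQN shown is a placeholder for the one you copied from the Virtual Infrastructure Client):

```shell
# Create an iSCSI (-i) initiator group of OS type vmware and register the
# ESX software initiator's IQN in it (placeholder IQN shown).
igroup create -i -t vmware esx_iscsi_ig iqn.1998-01.com.vmware:esxhost

# Verify the group and its member.
igroup show esx_iscsi_ig
```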
6.
Next, you add a volume from FilerView or the command line. Instructions are
provided for use with FilerView.
Select Volumes and Add. The Volume Wizard appears.
Select Next.
Select Flexible and click Next.
Name the volume esx_iscsi_vol1.
Keep Language set to POSIX and select Next.
The containing aggregate should be aggr1. The volume should be 50 GB.
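The same volume can be created from the Data ONTAP command line instead of FilerView (a sketch of the steps above):

```shell
# Create a 50 GB flexible volume named esx_iscsi_vol1 in aggregate aggr1.
vol create esx_iscsi_vol1 aggr1 50g

# Confirm the new volume is online.
vol status esx_iscsi_vol1
```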
7.
Add another volume using FilerView or the command line. Instructions are
provided for use with FilerView.
Select Volumes and Add. The Volume Wizard appears.
Select Next.
Select Flexible and click Next.
Name the volume esx_iscsi_vol2 and select Next.
Keep Language set to POSIX.
The containing aggregate should be aggr1. The volume should be 9 GB.
Set Space Guarantee to none.
Select Next.
Review the summary and click Commit.
8.
Now create a LUN in FilerView or with the command line. Instructions are
provided for use with FilerView.
Select LUNs and Add.
The path to the LUN should be /vol/esx_iscsi_vol1/LUN.
Set the LUN Protocol Type to VMware.
Set the size of the LUN to 20 GB.
Uncheck the Space Reservation box.
Click Add.
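A command-line sketch of the same LUN creation; -o noreserve corresponds to turning space reservation off, and the mapping to esx_iscsi_ig with LUN ID 0 follows from the later rescan steps:

```shell
# Create a 20 GB LUN of type vmware with space reservation disabled.
lun create -s 20g -t vmware -o noreserve /vol/esx_iscsi_vol1/LUN

# Map the LUN to the ESX initiator group as LUN 0 and list the mappings.
lun map /vol/esx_iscsi_vol1/LUN esx_iscsi_ig 0
lun show -m
```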
9.
Add another LUN using FilerView or the command line. Instructions are
provided for use with FilerView.
Select LUNs and Add.
The path to the LUN should be /vol/esx_iscsi_vol2/LUN.
Set the LUN Protocol Type to Windows.
Set the size of the LUN to 5 GB.
Uncheck the Space Reservation box.
Click Add.
10.
END OF TASK 1
1.
De-select Scan for New VMFS Volumes. You will only scan for new storage
devices here.
Click OK.
Notice that both of your LUNs are now visible in the Details section of the
window.
Note the path for each of your LUNs and write them below.
LUN 0 = ________________________ LUN 1= ________________________
In the example above, the paths are vmhba40:0:0 and vmhba40:0:1.
In vmhba40:0:0, vmhba40 is the name assigned by VMware to the HBA. This is a
virtual HBA in this case: the ESX iSCSI software initiator.
The final digit is the LUN ID that you used when you mapped the LUN to the
esx_iscsi_ig initiator group.
1.
Select Storage (SCSI, SAN, and NFS) from the Hardware menu.
Select Add Storage from the upper right-hand corner of the screen.
2.
Click Next.
3.
Click Next.
4.
5.
6.
Click Next.
7.
The Summary screen appears. Review the proposed disk layout and click Finish.
Notice that the Create VMFS datastore is in progress (in the Recent Tasks
section of the screen).
1.
Create a flexible volume using either FilerView or the command line with the
following characteristics:
Volume Name: esx_nfs_vol1
Size: 2 GB
Containing Aggregate: aggr1
Space Guarantee: none
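A command-line sketch of the same volume creation; in Data ONTAP 7G the -s none flag sets the space guarantee at creation time:

```shell
# Create a 2 GB flexible volume in aggr1 with no space guarantee.
vol create esx_nfs_vol1 -s none aggr1 2g

# Check the guarantee option on the new volume.
vol options esx_nfs_vol1
```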
2.
3.
Establish a Telnet session to the storage system. Type vol status. Notice
that the esx_nfs_vol1 is present in the list of volumes.
If you do not see any output when you type exportfs, run exportfs -a
and re-execute exportfs. Also, make sure NFS is licensed on the storage
controller.
4.
5.
6.
Type exportfs again. Notice that anon=0 is present in the list. This allows
the volume to be mounted by root.
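The export behavior described above can be sketched as follows (the rw,anon=0 options are an assumption consistent with the anon=0 entry you observed):

```shell
# Export the volume read-write with root mapped to UID 0 (anon=0), so
# the ESX host can mount it as root.
exportfs -io rw,anon=0 /vol/esx_nfs_vol1

# Re-export all entries from /etc/exports and list the current exports.
exportfs -a
exportfs
```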
7.
Type the IP address of the storage system (supplied by your instructor). The
folder name should be /vol/esx_nfs_vol1.
8.
A summary screen appears with the parameters you selected. Review the
parameters and click Finish.
Notice that the NFS datastore now appears in your list of storage.
1.
Select the Summary tab and click New Virtual Machine in the Commands
pane.
Select Custom and click Next. You need to select Custom here to be able to
provision the new VM using raw device mapping (RDM) instead of using a
typical VMFS datastore.
2.
3.
Select a location where the vmx file and the pointer to the RDM will be located.
Observe that the dialog box asks to select a datastore in which to store the files
for the virtual machine. When using RDM storage, only the vmx VM
configuration file and the pointer to the RDM will be stored in the datastore you
select here.
Observe also that there are several datastores where the vmx file and pointer to
the RDM could be stored: storage1 is a datastore that corresponds to a local
SCSI disk; NFS VMFS is the NFS datastore that you previously created by
mounting a NetApp NFS volume on ESX; iSCSI VMFS is the VMFS
datastore that you previously created, which is provisioned by a NetApp LUN.
You could store the vmx file and the pointer to the RDM in any of these VMFS
datastores. It is a good idea to keep vmx files and pointers to RDMs in a VMFS
datastore reserved for this purpose and clearly identified as such.
Select Microsoft Windows as the Guest Operating System and select Microsoft
Windows Server 2003, Enterprise Edition.
Click Next.
5.
6.
7.
If you had multiple networks, you would use this screen to select a different
network. In this example, the defaults are accepted.
8.
9.
10.
Question: Why do you only have one LUN (LUN 1) showing up in this list
when in fact you know that you created two LUNs accessed through iSCSI?
The Configuration/Storage Adapters window shown below lists the two iSCSI
LUNs you previously created: vmhba40:0:0 and vmhba40:0:1. The New
Virtual Machine Wizard shows only LUN1 (vmhba40:0:1) as an available
choice. Why?
13.
Click Next.
14.
15.
Right-click the Win2003 iSCSI RDM virtual machine and select Open
Console.
Select the green start arrow within the console window. The machine will start.
If the machine does not start due to licensing problems, ensure that your ESX
Server has a valid license file installed as shown below:
Also make sure that the license is enabled under ESX Server License Type.
Observe that in this case, although a license file is installed, the license is not
enabled yet. To do this, click Edit, which is located next to ESX Server
License Type.
Select ESX Server Standard and click OK. Now your license should show up
enabled as shown below in the ESX Server License Type section:
1.
Click the san<pod#>esx server in the ESX Inventory tree. Select the Summary
tab and click New Virtual Machine in the Commands pane.
2.
Click Next.
3.
Select Microsoft Windows as the Guest Operating System and select Microsoft
Windows Server 2003, Enterprise Edition.
Click Next.
4.
If you had multiple networks, you would use this screen to select a different
network. In this example, the defaults are accepted.
5.
On the Define Virtual Disk Capacity screen, set the Disk Size to 4 GB. Click
Next.
6.
7.
After the Create Virtual Machine task is complete, observe the new
Win2003VMFS virtual machine created on your ESX Server.
There are four virtual machines created on the ESX Server at this point. Two
VMs are accessing their storage through iSCSI and two other VMs are accessing
their storage through FCP:
FCP
1) Win2003 FC RDM
a. Using LUN1 (vmhba1:0:1) as a raw device mapping (RDM)
b. Using the FC VMFS datastore (on LUN0, vmhba1:0:0) to store
the vmx file and pointer to the RDM datastore
2) Win2003 FC VMFS
a. Using FC VMFS datastore (on LUN0, vmhba1:0:0) as VMFS
storage
The FC VMFS datastore (on LUN0, vmhba1:0:0) is used both as VMFS
storage for the Win2003 FC VMFS virtual machine and as vmx file and RDM
pointer repository for the Win2003 FC RDM virtual machine.
iSCSI
3) Win2003 iSCSI RDM
a. Using LUN1 (vmhba40:0:1) as a raw device mapping (RDM)
b. Using the iSCSI VMFS datastore (on LUN0, vmhba40:0:0) to
store the vmx file and pointer to the RDM datastore
4) Win2003 iSCSI VMFS
a. Using iSCSI VMFS datastore (on LUN0, vmhba40:0:0) as VMFS
storage
The iSCSI VMFS datastore (on LUN0, vmhba40:0:0) is used both as
VMFS storage for the Win2003 iSCSI VMFS virtual machine and as vmx
file and RDM pointer repository for the Win2003 iSCSI RDM virtual
machine.
8.
END OF EXERCISE
In this exercise, you will create NetApp and VMware snapshots, and you will
create NetApp FlexClone volumes to provision RDM and VMFS datastores.
OBJECTIVES:
TIME ESTIMATE:
60 minutes
START OF EXERCISE
1.
2.
Now, take a VMware snapshot. From the Virtual Infrastructure Client, right-click
Win2003 iSCSI VMFS and select Snapshot and Take Snapshot.
3.
4.
When the Take Virtual Machine Snapshot window appears, name the snapshot
Snapshot1. Note that the VMware snapshot does not immediately occur. The
virtual machine is placed in a consistent state and then changes are written to a
log file. A NetApp Snapshot occurs much more quickly.
Return to the Telnet session and type ls. You should see several new files in the
output including:
Win2003 iSCSI VMFS-Snapshot1.vmsn
Win2003 iSCSI VMFS-000001.vmdk
Win2003 iSCSI VMFS-000001-delta.vmdk
Viewing this directory will let you know if active snapshots are present.
5.
Take a second snapshot by repeating Step 2 and Step 3. Name this snapshot
Snapshot2.
6.
Return to the Telnet session and type ls. Notice that new files were created for
Snapshot2:
Win2003 iSCSI VMFS-Snapshot2.vmsn
Win2003 iSCSI VMFS-000002.vmdk
Win2003 iSCSI VMFS-000002-delta.vmdk
7.
You can also view active VMware snapshots using the Virtual Machine Snapshot
Manager. Right-click the virtual machine name and select Snapshot and
Snapshot Manager.
1.
Now take a NetApp Snapshot of the Win2003 iSCSI RDM. Establish a Telnet
session to your server. Type the following commands:
vmware-cmd -l
This lists all of the vmx files. Find your Win2003 iSCSI RDM vmx file.
vmware-cmd <full_path_to_your_RDM_vmx_file> \
createsnapshot backup quiesce
Be sure to escape the blanks in <full_path_to_your_RDM_vmx_file> with a
backslash (\), not just a space.
This places the RDM in a quiesced state. The VM is now in hot backup mode.
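The escaping rule can be illustrated with ordinary shell commands (the vmx path below is hypothetical; use the one printed by vmware-cmd -l):

```shell
# A hypothetical vmx path containing blanks.
vmx='/vmfs/volumes/iSCSI VMFS/Win2003 iSCSI RDM/Win2003 iSCSI RDM.vmx'

# Replace every blank with a backslash-escaped blank so the path can be
# pasted after vmware-cmd as a single argument.
escaped=$(printf '%s' "$vmx" | sed 's/ /\\ /g')
echo "$escaped"
```

Alternatively, quoting the whole path ("$vmx") achieves the same effect when you build the command inside a script.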
2.
Open FilerView for your storage system and select Volumes and Manage.
Notice the esx_iscsi_vol2 volume, which is the volume hosting the raw LUN1
(vmhba40:0:1) that you used to provision the datastore of the Win2003 iSCSI
RDM virtual machine.
3.
Select Snapshots and Add. Select the esx_iscsi_vol2 volume and name the
snapshot Quiesced.
4.
Select Manage under Snapshots. Notice that the quiesced snapshot is now
present.
Now you need to take the Win2003 iSCSI RDM VM out of the quiesced state (out
of hot backup mode). Establish a Telnet session to your server. Type the
following command:
vmware-cmd <full_path_to_your_RDM_vmx_file> removesnapshots
NOTE: Be sure to use the same <full_path_to_your_RDM_vmx_file> as above.
1.
Select the Quiesced Snapshot as the Parent Volume Snapshot. Click Next.
Review the summary and select Commit.
When the message appears that the clone was created successfully, select Close.
Select Manage from the FlexClones menu.
Notice that the RDM_FlexClone was created.
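A command-line sketch of the FlexClone creation above (the snapshot name Quiesced is an assumption based on the surrounding steps):

```shell
# Clone esx_iscsi_vol2, backed by its quiesced Snapshot copy.
vol clone create RDM_FlexClone -b esx_iscsi_vol2 Quiesced

# The clone appears as a normal writable volume sharing blocks with
# its parent.
vol status RDM_FlexClone
```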
6.
8.
Return to the Virtual Infrastructure Client, click the SAN<pod#>esx server tree
branch, and select Storage Adapters from the Hardware menu (Configuration
tab).
Right-click the iSCSI Software Adapter (vmhba40) and select Rescan from
the pop-up menu. Do not scan for new VMFS Volumes. Click OK.
Observe that the new LUN 5 (vmhba40:0:5) is available. This is LUN 5,
hosted by the /vol/RDM_FlexClone clone volume.
9.
You should still have the SAN<pod#>esx branch selected in the Inventory
browsing tree. Click the Summary tab, and then click the New Virtual Machine
link in the Commands section.
Select Raw Device Mappings on the Select a Disk screen and click Next.
Select LUN 5 and click Next.
You can watch the progress of the Virtual Machine creation in the Recent Tasks
portion of the screen.
10.
1.
You will FlexClone the volume that provisions the iSCSI VMFS LUN: the
/vol/esx_iscsi_vol1 volume on the NetApp controller.
2.
Click Next.
Select Create new for the Parent Volume Snapshot. Click Next.
Review the summary and select Commit.
When a message appears that the FlexClone volume was successfully created,
click Close.
Click Manage in the FlexClones section to view the new FlexClone volume.
3.
Select Manage from the LUNs menu. Notice that the FlexClone LUN
(/vol/VMFS_FlexClone/LUN) is present.
6.
Select Storage (SCSI, SAN, and NFS) from the Hardware menu.
Notice that snap-00000002-iSCSI VMFS now appears in the Storage list.
datastore, does NOT contain a VMFS file system (it is a raw LUN). Upon
rescan of the iSCSI bus, you discovered a new LUN (LUN 5). However,
unlike LUN 6, LUN 5 does NOT contain a VMFS file system. Thus, no
VMFS entry appears in the Storage (SCSI, SAN, and NFS) list for the FlexClone
of Win2003RDM.
You may now right-click and Browse the Datastore to create new VMs.
8.
Using FlexClone, you will create a flexible clone of the volume that
provisions the iSCSI VMFS LUN: the /vol/esx_iscsi_vol1 volume on the
NetApp controller.
END OF EXERCISE
Appendix A
SAN Implementation Workshop: Appendix A: Answer Key
Answers
Lab 16 - Configure iSCSI Service on the Solaris Host
You will need to complete the following steps on your Solaris host by replacing
<storage_ctlr> with the name of your storage controller.
Step 5.
Enter the following command to see the iSCSI Target Portal Groups (TPG)
currently available on the storage controller.
$ rsh <storage_ctlr> iscsi tpgroup show
TPGTag  Name         Member Interfaces
1000    e0a_default  e0a
1001    e0b_default  e0b
1002    e0c_default  e0c
1003    e0d_default  e0d
Why are there 4 different iSCSI Target Portal Groups on this storage controller?
By default, Data ONTAP assigns each Ethernet interface to its own Target Portal
Group (TPG). You can create new TPGs and assign interfaces to new TPGs. As
interfaces are assigned to new TPGs, they are removed from the default TPG.
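The reassignment described in the answer can be sketched with the corresponding Data ONTAP commands (the group name tpg_new is illustrative):

```shell
# Create a new target portal group on the storage controller...
rsh <storage_ctlr> iscsi tpgroup create tpg_new

# ...and move interfaces into it; they are removed from their default
# TPGs automatically.
rsh <storage_ctlr> iscsi tpgroup add tpg_new e0c e0d

# Confirm the new membership.
rsh <storage_ctlr> iscsi tpgroup show
```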
You will need to complete the following steps on the Solaris host.
Step 10.
Q1: Consider the output of the command run in the previous step. Why are some
iSCSI targets shown as connected to a certain IP address whereas other targets
are shown as not connected?
Some iSCSI targets are shown as connected to a certain IP address whereas
other targets are shown as not connected, since some Ethernet interfaces on the
storage controllers may be disconnected or down. Also, some iSCSI interfaces on
the storage controllers may be disabled.
Q2: The Target Portal Group 1000 (TPGT: 1000) is not shown (discovered) at all
on the Solaris host. Why?
Target Portal Group 1000 (TPGT: 1000) is not shown (discovered) at all on the
Solaris host, since TPG 1000 contains an Ethernet interface (e0a) which is
currently disabled for iSCSI.
Q3: The Target Portal Group 1003 (TPGT: 1003) is not shown (discovered) at all
on the Solaris host. Why?
Target Portal Group 1003 (TPGT: 1003) is not shown (discovered) at all on the
Solaris host since TPG 1003 contains an Ethernet interface (e0d) which is
currently down and disconnected from the network.
Appendix B
SAN Implementation Workshop: Appendix B
[Figure: end-to-end view of LUNs presented to a Solaris host over iSCSI and FCP.
Host side: local disks /dev/rdsk/c0t0d0 and /dev/rdsk/c0t1d0; OS device paths for LUN0-LUN2 (/dev/rdsk/c2t60A980004334616E4A342D4C68344B61d0 and similar) and for LUN3/LUN4 (/dev/rdsk/c1t0d3, /dev/rdsk/c1t1d3, /dev/rdsk/c1t2d3, multipathed by Veritas DMP as /dev/vx/rdmp/c1t0d3 and /dev/vx/rdmp/c1t0d4); logical volumes built on top of them (Solaris Volume Manager metadevices d0 and d1, Veritas VxVM volume /dev/vx/dsk/vxdg1/vxvol1); file systems mounted on the host at /mnt/AppX, /mnt/AppY, and /mnt/testing.
Storage side: aggregate aggr0 holds the root volume /vol/vol0 (/etc, /home) and volume /vol/ProdVol, with LUN0 and LUN1 in qtree /vol/ProdVol/AppXQT (/vol/ProdVol/AppXQT/LUN0.lun, /vol/ProdVol/AppXQT/LUN1.lun) and LUN2 at /vol/ProdVol/LUN2.lun; aggregate aggr1 holds volume /vol/TestVol with /vol/TestVol/LUN3.lun and /vol/TestVol/LUN4.lun.]
[Fig. 1 and Fig. 2: Data ONTAP Default (One TPG for each iSCSI target) - Supported by Solaris. A Solaris host running the iSCSI software initiator opens an iSCSI session and connections to the storage controller's Ethernet interfaces e0a, e0b, e0c, and e0d, which are advertised as iSCSI target portal groups (TPGT: 1001, TPGT: 1002, TPGT: 1003).]
[Figure: LUN clone. Volume /vol/TestVol on aggregate aggr1 contains /vol/TestVol/LUN3.lun, /vol/TestVol/LUN4.lun, and the LUN clone /vol/TestVol/LUN3_clone.lun, which is based on the copy of LUN3.lun in the backing Snapshot copy /vol/TestVol/.snapshot/TestVolSnap. Aggregate aggr0 holds the root volume /vol/vol0 (/etc, /home).]
[Figure: Snapshot copy of a volume containing LUNs. Volume /vol/TestVol on aggregate aggr1 contains /vol/TestVol/LUN3.lun and /vol/TestVol/LUN4.lun; the Snapshot copy /vol/TestVol/.snapshot/TestVolSnap preserves copies of both LUNs. Aggregate aggr0 holds the root volume /vol/vol0 (/etc, /home).]
[Figure: volume clone. Volume /vol/TestVol on aggregate aggr1 contains /vol/TestVol/LUN3.lun and /vol/TestVol/LUN4.lun. The volume clone /vol/TestVolClone (with /vol/TestVolClone/LUN3.lun and /vol/TestVolClone/LUN4.lun) is based on the automatically created base Snapshot copy /vol/TestVol/.snapshot/clone_TestVolClone.1. The LUNs in the volume clone are offline and NOT mapped to initiator groups. Aggregate aggr0 holds the root volume /vol/vol0 (/etc, /home).]