
NetApp Lab On Demand (LOD)

Lab Guide: NetApp Introduction Lab


Clustered Data ONTAP 8.2 v1.2 Rev 2
Tim Dietrich, NetApp
February 2014 | LOD Lab Overview SL10153/SL10279

TABLE OF CONTENTS

1 INTRODUCTION ............................................................................................................................... 4
1.1 Why clustered Data ONTAP? .................................................................................................................... 4
1.2 Lab Objectives .......................................................................................................................................... 5
1.3 Prerequisites ............................................................................................................................................ 6
1.4 How To Use This Lab Guide ..................................................................................................................... 6
1.4.1 The Callout Conventions Used In This Lab Guide ..................................................................................... 6
1.5 Lab Architecture ....................................................................................................................................... 8
1.6 Accessing the Command Line ................................................................................................................... 9

2 Cluster Setup ................................................................................................................................. 11
2.1 Run the Cluster Setup Wizard to Create cluster1 ..................................................................................... 12
2.2 Add a 2nd Node to the Cluster ................................................................................................................. 16
2.3 Connect to the Cluster with OnCommand System Manager ..................................................................... 21
2.4 Rename the Node Root Aggregate on cluster1-01 ................................................................................... 24
2.5 Create a New Aggregate on Each Cluster Node ...................................................................................... 28

3 Create a Storage Virtual Machine for NFS and CIFS .................................................................... 44
3.1 Create a Storage Virtual Machine for NAS ............................................................................................... 45
3.2 Configure CIFS and NFS ........................................................................................................................ 66
3.3 Create a Volume and Map It to the Namespace ....................................................................................... 80
3.4 Connect to the SVM from a client ............................................................................................................ 99
3.5 NFS Exporting Qtrees (Optional) ........................................................................................................... 105

4 Create and Mount a LUN ............................................................................................................. 111
Create a Storage Virtual Machine for iSCSI ................................................................................................... 111
4.1 Create, Map, and Mount a Windows LUN .............................................................................................. 120
4.2 Create, Map, and Mount a Linux LUN .................................................................................................... 170

Appendix 1 Using the clustered Data ONTAP Command Line ..................................................... 194
References ......................................................................................................................................... 196
Version History .................................................................................................................................. 197


LIST OF TABLES
Table 1) Lab Host Credentials.................................................................................................................................. 9
Table 2) Lab Controller Credentials .......................................................................................................................... 9
Table 3) Preinstalled NetApp Software ..................................................................................................................... 9

LIST OF FIGURES
Figure 1) Intro Lab Architecture................................................................................................................................ 8


1 INTRODUCTION
This lab introduces the fundamentals of clustered Data ONTAP. In it we will create a 2-node cluster and
configure Windows 2008R2 and Red Hat Enterprise Linux 6.3 hosts to access storage on the cluster
using CIFS, NFS, and iSCSI.
This lab does include additional storage nodes and hosts beyond those just mentioned. Those additional
components will be described later in this guide and utilized in an upcoming version of this lab.

1.1 Why clustered Data ONTAP?

A helpful way to start understanding the benefits offered by clustered Data ONTAP is to consider server
virtualization. Before server virtualization system administrators frequently deployed applications on
dedicated servers in order to maximize application performance and to avoid the instabilities often
encountered when combining multiple applications on the same operating system instance. While this
design approach was effective it also had the following drawbacks:

It does not scale well: adding new servers for every new application is extremely expensive.

It is inefficient: most servers are significantly underutilized, meaning that businesses are not
extracting the full benefit of their hardware investment.

It is inflexible: re-allocating standalone server resources for other purposes is time consuming,
staff intensive, and highly disruptive.

Server virtualization directly addresses all three of these limitations by decoupling the application instance
from the underlying physical hardware. Multiple virtual servers can share a pool of physical hardware,
meaning that businesses can now consolidate their server workloads to a smaller set of more effectively
utilized physical servers. In addition, the ability to transparently migrate running virtual machines across a
pool of physical servers enables businesses to reduce the impact of downtime due to scheduled
maintenance activities.
Clustered Data ONTAP brings these same benefits and many others to storage systems. As with server
virtualization, clustered Data ONTAP enables you to combine multiple physical storage controllers into a
single logical cluster that can non-disruptively service multiple storage workload needs. With clustered
Data ONTAP you can:

Combine different types and models of NetApp storage controllers (known as nodes) into a
shared physical storage resource pool (referred to as a cluster).

Support multiple data access protocols (CIFS, NFS, Fibre Channel, iSCSI, FCoE) concurrently on
the same storage cluster.

Consolidate various storage workloads to the cluster. Each workload can be assigned its own
Storage Virtual Machine (SVM), which is essentially a dedicated virtual storage controller, and its
own data volumes, LUNs, CIFS shares, and NFS exports.

Support multitenancy with delegated administration of SVMs. Tenants can be different


companies, business units, or even individual application owners, each with their own distinct
administrators whose admin rights are limited to just the assigned SVM.

Use Quality of Service (QoS) capabilities to manage resource utilization between storage
workloads.

Non-disruptively migrate live data volumes and client connections from one cluster node to
another.

Non-disruptively scale the cluster out by adding nodes. Nodes can likewise be non-disruptively
removed from the cluster, meaning that you can non-disruptively scale a cluster up and down
during hardware refresh cycles.


Leverage multiple nodes in the cluster to simultaneously service a given SVM's storage
workloads. This means that businesses can scale out their SVMs beyond the bounds of a single
physical node in response to growing storage and performance requirements, all non-disruptively.

Apply software & firmware updates and configuration changes without cluster, SVM, and volume
downtime.

1.2 Lab Objectives

This lab is designed to explore fundamental concepts of clustered Data ONTAP, and utilizes a modular
design to allow you to zero in on the specific topics that are of interest to you. Section 2 is required for all
invocations of the lab because it is a prerequisite for both Section 3 and Section 4. If you are interested in
NAS functionality then complete Section 3 in which you will provision both NFS and CIFS storage. If you
are interested in SAN functionality then complete Section 4 to create and mount an iSCSI LUN for
Windows, an iSCSI LUN for Linux, or both if you so choose.
Here is a more detailed summary of the tasks that you will perform in this lab.

Section 2 (Required - Estimated Completion Time = 20 minutes)
  o Create a cluster.
  o Create an aggregate.

Section 3 (Optional - Estimated Completion Time = 40 minutes)
  o Create a Storage Virtual Machine.
  o Create a volume on the Storage Virtual Machine.
  o Configure the Storage Virtual Machine for CIFS and NFS access.
  o Mount a CIFS share from the Storage Virtual Machine on a Windows client.
  o Mount a NFS volume from the Storage Virtual Machine on a Linux client.

Section 4 (Optional - Estimated Completion Time including all optional subsections = 90 minutes)
  o Create a Storage Virtual Machine.
  o Create a volume on the Storage Virtual Machine.
  o For Windows (Optional - Estimated Completion Time = 40 minutes)
    - Create a Windows LUN on the volume and map the LUN to an igroup.
    - Configure a Windows client for iSCSI and MPIO and mount the LUN.
  o For Linux (Optional - Estimated Completion Time = 40 minutes)
    - Create a Linux LUN on the volume and map the LUN to an igroup.
    - Configure a Linux client for iSCSI and multipath and mount the LUN.

This lab includes instructions for completing each of these tasks using either System Manager, NetApp's
graphical administration interface, or the Data ONTAP command line. The end state of the lab produced
by either method is exactly the same, so use whichever you are the most comfortable with.
Note that while switching back and forth between the graphical and command line methods from one
section of the lab guide to another is supported, this guide was not designed to support switching back
and forth between these methods within a single section. For the best experience we recommend sticking
with a single method for the duration of a lab section.


1.3 Prerequisites

This lab introduces clustered Data ONTAP and so this guide makes no assumptions that the user has
previous experience with Data ONTAP. The lab does assume some basic familiarity with storage system
related concepts such as RAID, CIFS, NFS, LUNs, and DNS.
This lab includes steps for mapping shares and mounting LUNs on a Windows client. These steps
assume that the lab user has a basic familiarity with Microsoft Windows.
This lab also includes steps for mounting NFS volumes and LUNs on a Linux client. All steps are performed
from the Linux command line and assume a basic working knowledge of the Linux command line. A basic
working knowledge of a text editor such as vi may be useful but is not required.

1.4 How To Use This Lab Guide

This lab uses a combination of screenshots and command line examples to present the configuration
steps for this lab. Where possible both a graphical and command line procedure is shown for each
section's tasks, with the graphical option being presented first and the command line option being
presented second. Each such section is preceded by a gray box with orange lettering to help you identify
the start of the available completion options.

To perform this section's tasks from the GUI:


Graphical configuration steps detailed here

To perform this section's tasks from the command line:


Command line configuration steps detailed here

If a section can only be completed using a single method then the orange wording in the gray box will
reflect that fact.

1.4.1 The Callout Conventions Used In This Lab Guide


Screenshots are documented as in the following examples. The sequence of steps you will need to
complete in a window are called out using orange circles with attached arrows; the number in the circle
specifies the completion order and a corresponding numbered text item underneath the screenshot
elaborates on any amplifying details that you need to be aware of. The callout numbers start fresh at the
number 1 for each successive screenshot.
The orange ovals in the screenshot indicate the fields where you will be entering data, along with the
values you should enter. Before you execute the screenshot's final action (indicated by the highest
numbered callout in the screenshot), the fields in your lab should all match the values encapsulated by the
orange ovals.


1) Expand the More section and then complete the fields as shown in the screenshot. Note that the
Password you specify here is Netapp1!.
2) Click Add to add the cluster to System Manager.
In many instances we use partial screenshots to focus attention on just the part of a window that is of
interest. In these cases torn edges indicate the parts of the window that have been omitted:

Command line instructions are all delimited by a text box. The actual command you will be entering will
be highlighted in blue, with the command output displayed in black. Many of the commands you will be
entering are long and span more than one line in the lab guide; in these cases the entire command is
actually entered as a single line with a space character separating the text from successive lines as
shown in the lab guide.
cluster1::> volume create -vserver svm1 -volume vol1 -aggregate aggr1
-size 1GB -junction-path /vol1
[Job 34] Job is queued: Create vol1.
[Job 34] Job succeeded: Successful
cluster1::>

So, in the preceding example you would actually enter the command as:
volume create -vserver svm1 -volume vol1 -aggregate aggr1 -size 1GB -junction-path /vol1


1.5 Lab Architecture

Figure 1 contains a diagram of the environment for this lab.


Figure 1) Intro Lab Architecture

All of the servers and storage controllers presented in this lab are virtual devices, and the networks that
interconnect them are exclusive to just your lab session. While we encourage you to follow the
demonstration steps outlined in this lab guide, you are free to deviate from this guide and experiment with
other Data ONTAP features that may interest you. The virtual storage controllers (vsims) used in this lab
offer nearly all the same functionality as do physical storage controllers (the main exception right now
being that these vsims don't offer HA support) but at a reduced performance profile, which is why Lab on
Demand labs are not suitable for performance testing. If you need to conduct performance testing we
recommend that you contact NetApp's Customer Proof of Concept (CPOC) team for assistance.
Table 1 provides a listing of the servers and storage controller nodes in the lab along with their IP
addresses and login credentials.


Table 1) Lab Host Credentials

Hostname    Description                         IP Address(es)   Username             Password
JUMPHOST    Windows 2008R2 Remote Access host   192.168.0.5      Demo\Administrator   Netapp1!
OCUM        OnCommand Unified Manager server    192.168.0.71     Administrator        Netapp1!
WFA         OnCommand Workflow Automation       192.168.0.72     admin                Netapp1!
RHEL1       Red Hat 6.3 x64 Linux host          192.168.0.12     root                 Netapp1!
RHEL2       Red Hat 6.2 x64 Linux host          192.168.0.13     root                 Netapp1!
DC          Active Directory Server             192.168.0.253    Demo\Administrator   Netapp1!
unjoined1   Unjoined cluster node vsim          192.168.0.111    admin                Netapp1!
unjoined2   Unjoined cluster node vsim          192.168.0.112    admin                Netapp1!
unjoined3   Unjoined cluster node vsim          192.168.0.121    admin                Netapp1!
unjoined4   Unjoined cluster node vsim          192.168.0.122    admin                Netapp1!

The vsims for this lab are initially delivered unjoined to any cluster, as indicated by the fact that the nodes'
hostnames are all of the form unjoinedN. If you follow the flow outlined in this lab guide the nodes will be
renamed during the course of the lab as shown in Table 2.
Table 2) Lab Controller Credentials

Hostname      Description                    IP Address(es)   Username   Password
cluster1      Cluster address for cluster1   192.168.0.101    admin      Netapp1!
cluster1-01   Previously UNJOINED1           192.168.0.111    admin      Netapp1!
cluster1-02   Previously UNJOINED2           192.168.0.112    admin      Netapp1!
cluster2      Cluster address for cluster2   192.168.0.102    admin      Netapp1!
cluster2-01   Previously UNJOINED3           192.168.0.121    admin      Netapp1!
cluster2-02   Previously UNJOINED4           192.168.0.122    admin      Netapp1!

The NetApp software pre-installed on the various hosts in this lab is listed in Table 3.
Table 3) Preinstalled NetApp Software

Hostname       Description
JUMPHOST       System Manager 3.0RC1, Data ONTAP DSM v4.0 for Windows MPIO, Windows Host Utility Kit v6.0.2
OC-CORE        OnCommand Unified Manager 6.0RC1
RHEL1, RHEL2   Linux Host Utilities Kit v6.1

1.6 Accessing the Command Line

PuTTY is the terminal emulation program used in the lab to log into Linux hosts and storage controllers in
order to run command line commands. The launch icon for the PuTTY application is pinned to the taskbar
on the Windows host jumphost as shown in the following screenshot; just double-click on the icon to
launch it.

Once PuTTY launches you can connect to one of the hosts in the lab by following these steps. In this
example we are connecting to the unconfigured vsim named unjoined1.

1) By default PuTTY should launch into the Basic options for your PuTTY session display as
shown in the screenshot. If you accidentally navigate away from this view just click on the
Session category item to return to this view.
2) Use the scrollbar in the Saved Sessions box to navigate down to the desired host and double-click it to populate the Host Name and Saved Sessions fields for the session you plan to open.
3) Click the Open button to initiate the ssh connection to the selected host. A terminal window will
open and you will be prompted to log into the host. You can find the correct username and
password for the host in the Lab Host and Lab Controller tables in section 1.5.
The clustered Data ONTAP command line supports a number of usability features that make the
command line much easier to use. If you are unfamiliar with those features then you might want to review
Appendix 1 of this lab guide, which contains a brief overview of them.
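If you would like a quick preview of those features before you reach the appendix, the short sketch below illustrates a few of them. It is purely illustrative (the prompt shown assumes you are already logged into a cluster) and is not a required lab step.

cluster1::> net int show
(Data ONTAP accepts any unambiguous abbreviation, so this runs "network interface show")
cluster1::> network interface show ?
(typing "?" at the end of a partially entered command lists the parameters and values it accepts)

Pressing the Tab key similarly completes partially typed command and parameter names.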


2 Cluster Setup
Expected Completion Time: 20 Minutes
A cluster is a group of physical storage controllers, or nodes, that have been joined together for the
purpose of serving data to end users. The nodes in a cluster can pool their resources together and can
distribute their work across the member nodes. Communication and data transfer between member
nodes (such as when a client accesses data on a node other than the one actually hosting the data) takes
place over a 10Gb cluster-interconnect network to which all the nodes are connected, while management
and client data traffic passes over separate management and data networks configured on the member
nodes.
Clusters typically consist of one or more NetApp storage controller High Availability (HA) pairs. Both
controllers in an HA pair actively host and serve data, but they are also capable of taking over their
partner's responsibilities in the event of a service disruption by virtue of their redundant cable paths to
each other's disk storage. Having multiple HA pairs in a cluster allows the cluster to scale out to handle
greater workloads and to support non-disruptive migrations of volumes and client connections to other
nodes in the cluster resource pool, which means that cluster expansion and technology refreshes can
take place while the cluster remains fully online and serving data.
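On a cluster built from physical HA pairs you can confirm that storage failover is configured for each node with the storage failover show command. The output below is only an illustrative sketch using hypothetical node names; because the vsims in this lab are non-HA, running this command against the lab cluster will instead report that takeover is not possible.

cluster1::> storage failover show
                              Takeover
Node           Partner        Possible State Description
-------------- -------------- -------- -------------------------------------
nodeA          nodeB          true     Connected to nodeB
nodeB          nodeA          true     Connected to nodeA
2 entries were displayed.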
Data ONTAP 8.2 clusters that will only be serving NFS and CIFS can scale up to a maximum of 24
nodes, although the node limit may be lower depending on the model of FAS controller in use. Data
ONTAP 8.2 clusters that will also host iSCSI and FC can scale up to a maximum of 8 nodes.
At a high level the procedure for creating a cluster with NetApp physical controllers usually involves steps
similar to the following:
1) Cable up all the components (heads, disks, SAN/Ethernet NICs, power, serial console, etc.),
including redundant cable paths for each controller in an HA pair.
2) Connect a PC to the controller's serial console port using a terminal emulation program.
3) Power on the controller hardware and use the terminal connection to initiate a controller boot.
4) If the disks are not already assigned to the controller head, boot the head into maintenance
mode, assign the disks to the controller, then reboot the head to normal mode. If the disks are
already assigned then the maintenance mode boot is skipped and the controller is instead booted
straight into normal mode.
5) At the end of the boot process the cluster setup wizard automatically launches on the serial
console connection and prompts the administrator for the information necessary to create the
cluster.

The controllers used in this lab are vsims (i.e. virtual NetApp storage controllers), meaning that some of
the physical controller capabilities and creation steps we just listed do not apply for this lab. For example,
vsims do not support Fibre Channel and so we will be using iSCSI to demonstrate block storage
functionality. The vsims provided in this lab also do not support HA so we will not be demonstrating HA
failover in this lab.
Lab on Demand has already handled the vsim equivalents to the physical controller setup steps 1-4 as a
part of provisioning this lab environment. Step 5 is an activity covered by this lab guide, but since the
vsims used in this lab don't provide a user accessible serial console port we instead emulate that console
connection by establishing an ssh connection to the controller node. This workaround required that we
preconfigure the vsims' IP network settings to support ssh connections over the lab network. One
consequence of that preconfiguration is that some of the default values offered during cluster setup
wizard prompting will be different from what you might see when using real hardware. However, the
overall flow of the cluster setup wizard is still the same and so the vsims in this lab still provide a good
example of how the wizard behaves during cluster setup.


2.1 Run the Cluster Setup Wizard to Create cluster1

The cluster setup wizard gathers the data necessary to create a brand new cluster or to add a new node
to a pre-existing cluster. In this exercise we will be creating a brand new cluster named cluster1 using
the vsim named unjoined1. There are two methods available for accomplishing this task in this lab.

Manual (section 2.1.1): Using this method you will manually run the Data ONTAP setup wizard to
create the cluster. The setup wizard is a text driven tool that will prompt you for information such
as the name of the cluster you want to create, your Data ONTAP license keys, the TCP/IP
address information for the cluster and the node, and so on. If you have never run through this
procedure before then we recommend you use this method to complete this lab section. It takes
approximately 10-15 minutes to create a cluster in this manner.

Automatic (section 2.1.2): Using this method you will run a custom script included in this lab that
will automatically run through the setup wizard on your behalf. The script takes 1-2 minutes to
complete.

Both methods produce an identical cluster configuration result.

2.1.1 Create cluster1 Manually


In this section we will manually run the cluster setup wizard to create the cluster cluster1 using the node
unjoined1.

This section's tasks can only be performed from the command line:
Launch PuTTY as described in section 1.6, and connect to the host unjoined1 using the username
admin and the password Netapp1!. Once you are logged in run the cluster setup wizard and supply it
the inputs shown in blue in the following example. In places where the bracketed default value provided
by the prompt contains the actual value we desire we have displayed ACCEPT DEFAULT DO NOT ENTER
THIS TEXT. In places where you see this string, just accept the default value by hitting the Enter key.
As a part of creating a new cluster you are prompted to input the required Data ONTAP license keys. For
your convenience, the license keys shown in the following example command text are the minimum set of
keys needed to complete the scope of this lab guide.
If you want to enter additional keys beyond those listed here, you can find the full set of license keys in
the README.txt file stored on the desktop of the Windows system named jumphost; you can easily copy
& paste these keys one at a time into your PuTTY terminal session when prompted by the cluster setup
wizard. To copy & paste in this manner, open the README.txt file on the desktop using notepad.exe,
highlight a desired license key, enter Ctrl-c to copy the text, then right-click inside the PuTTY window
which will paste the copied text into the wizard.
The cluster creation script found in section 2.1.2 will populate the full set of license keys.


unjoined1::> cluster setup


Welcome to the cluster setup wizard.
You can enter the following commands at any time:
"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the cluster setup wizard.
Any changes you made before quitting will be saved.
You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.
Do you want to create a new cluster or join an existing cluster? {create, join}:
create
Do you intend for this node to be used as a single node cluster? {yes, no} [no]:
ACCEPT DEFAULT DO NOT ENTER THIS TEXT
Will the cluster network be configured to use network switches? [yes]:
ACCEPT DEFAULT DO NOT ENTER THIS TEXT
Existing cluster interface configuration found:

(Note: The Existing cluster interface IP addresses shown here are autogenerated and
may vary in your instance of the lab.)

Port    MTU     IP                 Netmask
e0a     1500    169.254.207.173    255.255.0.0
e0b     1500    169.254.250.79     255.255.0.0

Do you want to use this configuration? {yes, no} [yes]: ACCEPT DEFAULT DO NOT ENTER
THIS TEXT
Step 1 of 5: Create a Cluster
You can type "back", "exit", or "help" at any question.
Enter the cluster name: cluster1
Enter the cluster base license key: UJBGVLVQJHOJKBAAAAAAAAAAAAAA
Creating cluster cluster1
Network set up .........
Starting replication service ..
Creating cluster
System start up ...........
Updating volume location database
Flexcache Management
Starting cluster support services
Cluster cluster1 has been created.
Step 2 of 5: Add Feature License Keys
You can type "back", "exit", or "help" at any question.
Enter an additional license key []: ETMQSIDSYWKZOFAAAAAAAAAAAAAA
CIFS License was added.
Enter an additional license key []: QNKFTIDSYWKZOFAAAAAAAAAAAAAA
iSCSI License was added.
Enter an additional license key []: SYOBSIDSYWKZOFAAAAAAAAAAAAAA
NFS License was added.
Enter an additional license key []: IVSUXIDSYWKZOFAAAAAAAAAAAAAA
SnapManagerSuite License was added.
Enter an additional license key []: AXDYUIDSYWKZOFAAAAAAAAAAAAAA
SnapRestore License was added.
Enter an additional license key []: ACCEPT DEFAULT DO NOT ENTER THIS TEXT


Step 3 of 5: Set Up a Vserver for Cluster Administration


You can type "back", "exit", or "help" at any question.
Enter the cluster management interface port [e0c]: ACCEPT DEFAULT DO NOT ENTER THIS
TEXT
Enter the cluster management interface IP address: 192.168.0.101
Enter the cluster management interface netmask: 255.255.255.0
Enter the cluster management interface default gateway: 192.168.0.1
A cluster management interface on port e0c with IP address 192.168.0.101 has been
created. You can use this address to connect to and manage the cluster.
Enter the DNS domain names: demo.netapp.com
Enter the name server IP addresses: 192.168.0.253
DNS lookup for the admin Vserver will use the demo.netapp.com domain.
Step 4 of 5: Configure Storage Failover (SFO)
You can type "back", "exit", or "help" at any question.
SFO will not be enabled on a non-HA system.
Step 5 of 5: Set Up the Node
You can type "back", "exit", or "help" at any question.
Where is the controller located []: ACCEPT DEFAULT DO NOT ENTER THIS TEXT
Enter the node management interface port [e0c]: ACCEPT DEFAULT DO NOT ENTER THIS
TEXT
Enter the node management interface IP address [192.168.0.111]: ACCEPT DEFAULT DO
NOT ENTER THIS TEXT
Enter the node management interface netmask [255.255.255.0]: ACCEPT DEFAULT DO NOT
ENTER THIS TEXT
Enter the node management interface default gateway [192.168.0.1]: ACCEPT DEFAULT DO
NOT ENTER THIS TEXT
Cluster setup is now complete.
To begin storing and serving data on this cluster, log in to the command-line
interface (for example, ssh admin@192.168.0.101) and complete the following
additional tasks if they have not already been completed:
- Join additional nodes to the cluster by running "cluster setup" on
those nodes.
- For HA configurations, verify that storage failover is enabled by
running the "storage failover show" command.
- Create a Vserver by running the "vserver setup" command.
In addition to using the CLI to perform cluster management tasks, you can manage
your cluster using OnCommand System Manager, which features a graphical user
interface that simplifies many cluster management tasks. This software is
available from the NetApp Support Site.
Exiting the cluster setup wizard.
cluster1::>
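Once the wizard exits you can optionally confirm which license packages were installed, and add any keys you skipped earlier. These commands are shown only as a quick sanity check and are not required by this lab guide; the exact parameter names may vary slightly between Data ONTAP releases.

cluster1::> system license show
cluster1::> system license add -license-code <additional-license-key>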

NetApp offers a graphical tool named System Manager that you can use to configure and manage
clusters and storage controllers once you've completed the initial cluster creation. System Manager can
use SNMP to discover new clusters and controllers, but you must first enable SNMP on the cluster. The
following command will grant the public SNMP community read-only access on our newly created cluster.
cluster1::> system snmp community add -community-name public -type ro
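If you would like to verify the change, the companion show command in the same command family lists the configured communities; this is an optional check, and its output should include the public community with read-only (ro) access.

cluster1::> system snmp community show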

Ordinarily that completes the steps required to create a cluster, but in our case there is an additional step
needed because we pre-configured the vsim to support ssh console access. That pre-configuration
resulted in the node's name being set to unjoined1, and now we need to manually change the node's
name to the value the cluster setup wizard would have otherwise assigned to it; in this case the
otherwise-assigned name would have been cluster1-01.


cluster1::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
unjoined1             true    true

cluster1::> node rename -node unjoined1 -newname cluster1-01
[Job 9] Job is queued: Renaming node unjoined1 to cluster1-01.

cluster1::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
cluster1-01           true    true

cluster1::>

Close your PuTTY connection to the node by entering the command exit at the cluster1::> prompt,
and then proceed directly to section 2.2 to continue the lab.

2.1.2 Create cluster1 using a Script


In this section we will use a custom script to run the cluster setup wizard on our behalf to create the
cluster cluster1 out of the node unjoined1.

This section's tasks can only be performed from the GUI:


On the desktop of the Windows system named jumphost you will find a folder named scripts:

1) Open the folder.


1) Double-click the shortcut named Create cluster1.


A command prompt window named Create cluster1 opens:

1) This window shows the output generated by the lab's custom cluster create script. This script
takes 1-2 minutes to complete under normal circumstances, and you will not be prompted for any
inputs to the script other than to accept its completion. When you see the Press any key to
continue prompt simply hit any key to exit the script.
At this point cluster1 has been created and exists in exactly the same state as if you had manually run
the cluster setup wizard as described in section 2.1.1. Continue on to section 2.2.

2.2 Add a 2nd Node to the Cluster

Clusters almost always contain an even number of controller nodes since clusters are usually created
using HA controller pairs. As mentioned previously, this lab uses non-HA vsims for its storage controllers,
which is a configuration that NetApp does not recommend or support for customers, but this configuration
is acceptable for the purpose of demonstrating the clustered Data ONTAP capabilities that fall within the
scope of this lab.


There is one exception to the rule that a cluster must always contain an even number of nodes and that is
the single node cluster, which is a special cluster configuration intended to support small storage
deployments that only need a single physical controller head. The primary noticeable difference between
single node and standard clusters is that a single node cluster does not have a cluster network. Single
node clusters can later be converted into traditional multi-node clusters and at that point become subject
to all the standard cluster requirements like the need to utilize an even number of nodes consisting of HA
pairs. Since we will not be using a single node cluster in this lab we will not discuss them any further here.
In this section we are going to add a 2nd node to the new cluster we created in section 2.1. As was the
case in that section, there are two methods available in this section for accomplishing this task.

Manual (section 2.2.1): Using this method you will directly run the Data ONTAP setup wizard
to add the node named unjoined2 to the cluster cluster1. Adding a node to an existing cluster
involves much less text entry than does creating a brand new cluster. If you have never run
through this procedure before then we recommend that you use this method to perform this task,
which takes approximately 5 minutes to complete.

Automatic (section 2.2.2): Using this method you will run a custom script included in this lab that
will automatically run through the setup wizard on your behalf. The script takes approximately 2
minutes to complete.

Both methods produce the same resulting cluster configuration.

2.2.1 Add a 2nd Node Manually


In this section we will manually run the cluster setup wizard to add the node unjoined2 to the cluster
cluster1. Note that this is exactly the same procedure you would follow to add even more nodes to the
cluster, the only differences being that you would assign a different IP address and possibly a
different management interface port name.

This section's tasks can only be performed from the command line:

Launch PuTTY as described in section 1.6, and connect to the host unjoined2 using the username
admin and the password Netapp1!. Once you are logged in run the cluster setup wizard and feed it the
input shown in blue. In places where the default value provided by the prompt (in brackets) contains the
value we desire we have instead displayed ACCEPT DEFAULT DO NOT ENTER THIS TEXT. In places
where you see this string just accept the default value by hitting the Enter key.
unjoined2::> cluster setup
Welcome to the cluster setup wizard.
You can enter the following commands at any time:
"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the cluster setup wizard.
Any changes you made before quitting will be saved.
You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.
Do you want to create a new cluster or join an existing cluster? {create, join}: join
Existing cluster interface configuration found:


(Note: The Existing cluster interface IP addresses shown here are autogenerated and
may vary in your instance of the lab.)

Port    MTU     IP                 Netmask
e0a     1500    169.254.254.105    255.255.0.0
e0b     1500    169.254.111.119    255.255.0.0

Do you want to use this configuration? {yes, no} [yes]: ACCEPT DEFAULT DO NOT ENTER
THIS TEXT
Step 1 of 3: Join an Existing Cluster
You can type "back", "exit", or "help" at any question.
Enter the name of the cluster you would like to join [cluster1]: ACCEPT DEFAULT DO
NOT ENTER THIS TEXT
Joining cluster cluster1
Network set up ..........
Node check ...
Joining cluster ...
System start up .....................................
Updating volume location database
Starting cluster support services ..
This node has joined the cluster cluster1.
Step 2 of 3: Configure Storage Failover (SFO)
You can type "back", "exit", or "help" at any question.
SFO will not be enabled on a non-HA system.
Step 3 of 3: Set Up the Node
You can type "back", "exit", or "help" at any question.
Enter the node management interface port [e0c]: ACCEPT DEFAULT DO NOT ENTER THIS
TEXT
Enter the node management interface IP address [192.168.0.112]: ACCEPT DEFAULT DO
NOT ENTER THIS TEXT
Enter the node management interface netmask [255.255.255.0]: ACCEPT DEFAULT DO NOT
ENTER THIS TEXT
Enter the node management interface default gateway [192.168.0.1]: ACCEPT DEFAULT DO
NOT ENTER THIS TEXT
Cluster setup is now complete.
To begin storing and serving data on this cluster, log in to the command-line
interface (for example, ssh admin@192.168.0.101) and complete the following
additional tasks if they have not already been completed:
- Join additional nodes to the cluster by running "cluster setup" on
those nodes.
- For HA configurations, verify that storage failover is enabled by
running the "storage failover show" command.
- Create a Vserver by running the "vserver setup" command.
In addition to using the CLI to perform cluster management tasks, you can manage
your cluster using OnCommand System Manager, which features a graphical user
interface that simplifies many cluster management tasks. This software is
available from the NetApp Support Site.
Exiting the cluster setup wizard.
cluster1::>

Ordinarily that completes the steps required to join a node to a cluster, but in our case there are a couple
of additional steps needed because we pre-configured the vsim for ssh console access. That
pre-configuration resulted in the node being named unjoined2, and now we need to manually change the
node's name to the value the cluster setup wizard would otherwise have assigned to it; in this case that
otherwise-assigned name would have been cluster1-02.
cluster1::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
cluster1-01           true    true
unjoined2             true    true
2 entries were displayed.

cluster1::> node rename -node unjoined2 -newname cluster1-02
[Job 14] Job is queued: Renaming node unjoined2 to cluster1-02.
[Job 14] Job is running.
[Job 14] Job succeeded: Rename of the node "unjoined2" to "cluster1-02" is successful.

cluster1::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
cluster1-01           true    true
cluster1-02           true    true
2 entries were displayed.

cluster1::>

We also need to rename the newly joined node's root aggregate to match the value that Data ONTAP
would otherwise have assigned it. We'll discuss root aggregates in section 2.4, so for now let's just enter
the following commands.
cluster1::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0       7.98GB   381.1MB   95% online       1 cluster1-01      raid_dp,
                                                                   normal
aggr0_unjoined2_0
            7.98GB   382.8MB   95% online       1 cluster1-02      raid_dp,
                                                                   normal
2 entries were displayed.

cluster1::> aggr rename -aggregate aggr0_unjoined2_0 -newname aggr0_cluster1_02_0
[Job 15] Job is queued: Rename aggr0_unjoined2_0 to aggr0_cluster1_02_0.
[Job 15] Job is queued: Rename aggr0_unjoined2_0 to aggr0_cluster1_02_0.
[Job 15] Job succeeded: DONE

cluster1::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0       7.98GB   381.8MB   95% online       1 cluster1-01      raid_dp,
                                                                   normal
aggr0_cluster1_02_0
            7.98GB   382.1MB   95% online       1 cluster1-02      raid_dp,
                                                                   normal
2 entries were displayed.

cluster1::>

Close your PuTTY connection to the node by entering the command exit at the cluster1::> prompt,
and then proceed directly to section 2.3 to continue the lab.

2.2.2 Add a 2nd Node using a Script


In this section we will use a custom script to run the cluster setup wizard on our behalf to add the node
unjoined2 to the cluster cluster1.


This section's tasks can only be performed from the GUI:


If the Scripts folder is not still open on the desktop of the Windows system named jumphost from
when you completed section 2.1.2 then open the folder again:

1) Open the folder.

1) Double-click the shortcut named Add 2nd Node to cluster1.


A command prompt window named Add 2nd Node to cluster1 opens:


1) This window shows the output generated by the lab's custom cluster add node script. You will not
be prompted for any inputs to the script, and under normal circumstances this script takes
approximately 2 minutes to complete. When you see the Press any key to continue prompt
simply hit any key to exit the script.
At this point the new node cluster1-02 has been added to the cluster cluster1, and exists in exactly
the same state as if you had run all the manual configuration steps listed in section 2.2.1.

2.3 Connect to the Cluster with OnCommand System Manager

OnCommand System Manager is NetApp's browser-based management tool for configuring and
managing NetApp storage systems and clusters. Now that we have a working cluster as created in
sections 2.1 and 2.2 we can connect to the cluster using System Manager.

This section's tasks can only be performed from the GUI:

A shortcut for launching System Manager is located on the desktop of JUMPHOST.

1) Double-click to launch System Manager. Be patient; it may not appear to start right away, but it will
open after a delay of 10 seconds or so.


Internet Explorer opens and shows a display similar to the following:

1) Notice that initially no storage systems are shown because we have not yet added any to System
Manager.
2) Click Add to add a controller.
The Add a System window now opens.

1) Enter the hostname as shown in the screenshot, then click Add to add the cluster to System
Manager. This will cause System Manager to discover the cluster using SNMP. (You can click the
More down arrow to see the SNMP credential details)

Our newly created cluster should now be listed in System Manager:

1) Select cluster1 in System Manager.


2) Click Login to connect to cluster1.


The OnCommand System Manager Login window opens.

1) Populate the credentials fields as shown (using Netapp1! as the password) and then click Sign
In.
System Manager is now logged in to cluster1.


The tabs on the left side of the window are used to manage various aspects of the cluster. The Cluster
tab (1) accesses configuration settings that apply to the cluster as a whole. The Storage Virtual
Machines tab (2) is used to manage individual Storage Virtual Machines (SVMs, also known as
Vservers). The Nodes tab (3) contains configuration settings that are specific to individual controller
nodes. Please take a few moments to expand and browse these tabs to familiarize yourself with their
contents.
NOTE: As you use System Manager in this lab you may encounter situations where buttons at the bottom
of a System Manager pane are beyond the viewing size of the window and there is no scroll bar provided
to scroll down to see them. If this happens you have two options: either increase the screen size of the
desktop on jumphost (right-click in the background of jumphost and select Screen Resolution from the
pop-up menu), or else in the System Manager window use the Tab key to cycle through all the various
fields and buttons, which will eventually force the window to scroll down to the non-visible items.

2.4 Rename the Node Root Aggregate on cluster1-01

Disks are the fundamental unit of physical storage in clustered Data ONTAP and are tied to a specific
cluster node by virtue of their physical connectivity (i.e. cabling) to a given controller head.

By default each node has one aggregate known as the root aggregate, which is a group of the node's
local disks that host the node's Data ONTAP operating system. A node's root aggregate is created during
Data ONTAP installation in a minimal RAID-DP configuration, meaning it is initially comprised of 3 disks
(1 data, 2 parity), and is assigned the name aggr0. Aggregate names must be unique within a cluster, so
when the cluster setup wizard joins a node it must rename that node's root aggregate if there is a conflict
with the name of any aggregate that already exists in the cluster. If aggr0 is already in use elsewhere in
the cluster then it renames the new node's aggregate according to the convention aggr0_<nodename>_0.

For the sake of clarity and consistency we will rename the root aggregates of all our nodes in this lab to
follow our own convention of aggr0_<clustername>_<nodenumber>, which in the case of our newly
created cluster means the root aggregate for the node cluster1-01 will be named aggr0_cluster1_01
and the root aggregate for the node cluster1-02 will be named aggr0_cluster1_02.

*** NOTE *** : If you used the scripts in sections 2.1.2 and 2.2.2 to automatically create the cluster
cluster1 and to add the 2nd node to that cluster, then you can skip straight to section 2.5, as those scripts
have automatically renamed each node's root aggregate as described in the preceding paragraph. Only if
you manually created the cluster or manually added the 2nd node to the cluster do you need to complete
the configuration steps in section 2.4 of this lab guide.

To perform this section's tasks from the GUI:

1) Expand the Cluster tab.


2) Navigate to Cluster1->Storage->Aggregates.
3) In the aggregates pane select aggr0.
4) Click the Edit button.


The Edit Aggregate window opens.

1) Populate the Aggregate Name field as shown and then click the Save & Close button.
Back in System Manager repeat the process for the node cluster1-02's root aggregate.

1) In the aggregates pane select aggr0_cluster1_02_0.


2) Click the Edit button.


The Edit Aggregate window opens.

1) Populate the Aggregate Name field as shown and then click the Save & Close button.

To perform this section's tasks from the command line:


If you do not already have a PuTTY session established to cluster1 then launch PuTTY as described in
section 1.6, and connect to the host cluster1 using the username admin and the password Netapp1!.
Rename the root aggregate for the node cluster1-01 to aggr0_cluster1_01:
cluster1::> aggr show -node cluster1-01
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0       7.98GB   381.7MB   95% online       1 cluster1-01      raid_dp,
                                                                   normal

cluster1::> aggr rename -aggregate aggr0 -newname aggr0_cluster1_01
[Job 23] Job succeeded: DONE

cluster1::> aggr show -node cluster1-01
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_01
            7.98GB   381.7MB   95% online       1 cluster1-01      raid_dp,
                                                                   normal

cluster1::>


Rename the root aggregate for the node cluster1-02 to aggr0_cluster1_02:


cluster1::> aggr show -node cluster1-02
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_02_0
            7.98GB   381.9MB   95% online       1 cluster1-02      raid_dp,
                                                                   normal

cluster1::> aggr rename -aggregate aggr0_cluster1_02_0 -newname aggr0_cluster1_02
[Job 24] Job is queued: Rename aggr0_cluster1_02_0 to aggr0_cluster1_02.
[Job 24] Job is queued: Rename aggr0_cluster1_02_0 to aggr0_cluster1_02.
[Job 24] Job succeeded: DONE

cluster1::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_01
            7.98GB   381.7MB   95% online       1 cluster1-01      raid_dp,
                                                                   normal
aggr0_cluster1_02
            7.98GB   381.9MB   95% online       1 cluster1-02      raid_dp,
                                                                   normal
2 entries were displayed.

cluster1::>

2.5 Create a New Aggregate on Each Cluster Node

Data ONTAP manages disks in groups called aggregates. An aggregate defines the RAID properties for a
group of disks that are all physically attached to the same node. A given disk can only be a member of a
single aggregate.
As we discussed in section 2.4, the only aggregate that is automatically created on a cluster node is the
root aggregate, which hosts the Data ONTAP operating system for that node. The root aggregate should
not be used to host user data, so in this section we will be creating a new aggregate on each of the nodes
in cluster1 so they can later host the storage virtual machines, volumes, and LUNs that we will be
creating in this lab.
A node can host multiple aggregates depending on the data sizing, performance, and isolation needs of
the storage workloads that it will be hosting. When you create a Storage Virtual Machine (SVM) you
assign it to use one or more specific aggregates to host the SVM's volumes. Multiple SVMs can be
assigned to use the same aggregate, which offers greater flexibility in managing storage space, whereas
dedicating an aggregate to just a single SVM provides greater workload isolation.
For this lab we will be creating a single user data aggregate on each node in the cluster.
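Before creating an aggregate it can be helpful to confirm how many spare disks each node actually has available, since the disk counts you specify below must be satisfied from that spare pool. A minimal command line check is sketched here; the -container-type filter name is an assumption on our part and may differ between Data ONTAP releases, and the equivalent information is visible in the GUI under the Nodes tab as shown below.

cluster1::> disk show -container-type spare
(lists only the disks whose container type is spare; add -nodelist <nodename> to limit the output to a single node)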

To perform this section's tasks from the GUI:

You can view the list of disks connected to a node by using System Manager and looking under the
Nodes tab:


1) Expand the Nodes tab.


2) Navigate to cluster1->cluster1-01->Storage->Disks.
As you can see there are 17 disks connected to node cluster1-01: 3 that comprise the node's root
aggregate and 14 spares.
Aggregates can be created from the Nodes tab where we just viewed the node's available disks, but they
can also be created from the Cluster tab, which is where we will now go to create the new aggregates we
will be using in this lab.


1) Select the Cluster tab. Double-check to make sure that you've done this to avoid problems later!
2) Go to cluster1->Storage->Aggregates.
3) Click on the Create button to launch the Create Aggregate Wizard.

1) Click Next to continue the wizard. If you can't see the buttons at the bottom of the window try
resizing the whole System Manager window.


1) Specify the Aggregate Name as shown and then click Next.


1) Click the Select Disks button so we can specify how many disks to include in the aggregate.


1) Select the line for cluster1-01, then set the Number of capacity disks to use: to 6 as shown.
2) Click Save and Close.


1) We've finished specifying the configuration for the new aggregate so click Create to create the
aggregate and close the wizard.


1) Click Finish to close the Create Aggregate Wizard.

The newly created aggregate should now be visible in the list of aggregates. Notice aggr1_cluster1_01
in the following screenshot.

Now repeat the same process to create a new aggregate on the node cluster1-02.


1) Click the Create button again.

1) Click Next to continue the wizard.


1) Specify the Aggregate Name as shown and then click Next.


1) Click the Select Disks button so we can specify how many disks to include in the aggregate.


1) Select the line for cluster1-02, then set the Number of capacity disks to use: to 6 as shown.
2) Click Save and Close.


1) We've finished specifying the configuration for the new aggregate so click Create to create the
aggregate and close the wizard.


1) Click Finish to close the Create Aggregate Wizard.

Our complete list of aggregates is now displayed in the System Manager Aggregates pane.


To perform this section's tasks from the command line:


From a PuTTY session logged in to cluster1 as the username admin and password Netapp1!:
Display a list of the disks attached to the node cluster1-01. (Note that you can omit the -nodelist option
to display a list of all the disks in the cluster.) By default the PuTTY window may wrap output lines
because the window is too small; if this is the case for you then simply expand the window by selecting its
edge and dragging it wider, after which any subsequent output will utilize the visible width of the window.
cluster1::> disk show -nodelist cluster1-01
                    Usable           Container
Disk                  Size Shelf Bay Type        Position   Aggregate Owner
----------------- -------- ----- --- ----------- ---------- --------- -----------
cluster1-01:0b.0    8.88GB     -   - aggregate   dparity    aggr0_cluster1_01
                                                                      cluster1-01
cluster1-01:0b.1    8.88GB     -   - aggregate   parity     aggr0_cluster1_01
                                                                      cluster1-01
cluster1-01:0b.2    8.88GB     -   - aggregate   data       aggr0_cluster1_01
                                                                      cluster1-01
cluster1-01:0c.0   28.44GB     -   - spare       present    -         cluster1-01
cluster1-01:0c.1   28.44GB     -   - spare       present    -         cluster1-01
cluster1-01:0c.2   28.44GB     -   - spare       present    -         cluster1-01
cluster1-01:0c.3   28.44GB     -   - spare       present    -         cluster1-01
cluster1-01:0c.4   28.44GB     -   - spare       present    -         cluster1-01
cluster1-01:0c.5   28.44GB     -   - spare       present    -         cluster1-01
cluster1-01:0c.6   28.44GB     -   - spare       present    -         cluster1-01
cluster1-01:0c.8   28.44GB     -   - spare       present    -         cluster1-01
cluster1-01:0c.9   28.44GB     -   - spare       present    -         cluster1-01
cluster1-01:0c.10  28.44GB     -   - spare       present    -         cluster1-01
cluster1-01:0c.11  28.44GB     -   - spare       present    -         cluster1-01
cluster1-01:0c.12  28.44GB     -   - spare       present    -         cluster1-01
cluster1-01:0c.13  28.44GB     -   - spare       present    -         cluster1-01
cluster1-01:0c.14  28.44GB     -   - spare       present    -         cluster1-01
17 entries were displayed.

cluster1::>

Create the aggregate named aggr1_cluster1_01 on the node cluster1-01 and the aggregate named
aggr1_cluster1_02 on the node cluster1-02.


cluster1::> aggr show

Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_01
            7.98GB   381.6MB   95% online       1 cluster1-01      raid_dp,
                                                                   normal
aggr1_cluster1_01
           102.3GB   102.3GB    0% online       0 cluster1-01      raid_dp,
                                                                   normal
2 entries were displayed.
cluster1::> aggr create -aggregate aggr1_cluster1_01 -nodes cluster1-01 -diskcount 6
[Job 25] Job is queued: Create aggr1_cluster1_01.
[Job 25] creating aggregate aggr1_cluster1_01 ...
[Job 25] Job succeeded: DONE
cluster1::> aggr create -aggregate aggr1_cluster1_02 -nodes cluster1-02 -diskcount 6
[Job 26] Job is queued: Create aggr1_cluster1_02.
[Job 26] Job succeeded: DONE
cluster1::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_01
            7.98GB   381.7MB   95% online       1 cluster1-01      raid_dp,
                                                                   normal
aggr0_cluster1_02
            7.98GB   381.9MB   95% online       1 cluster1-02      raid_dp,
                                                                   normal
aggr1_cluster1_01
           102.3GB   102.3GB    0% online       0 cluster1-01      raid_dp,
                                                                   normal
aggr1_cluster1_02
           102.3GB   102.3GB    0% online       0 cluster1-02      raid_dp,
                                                                   normal
4 entries were displayed.
cluster1::>


3 Create a Storage Virtual Machine for NFS and CIFS


Expected Completion Time: 40 Minutes
If you are only interested in SAN protocols then you do not need to complete the lab steps in this section.
However, we do recommend that you review the conceptual information found here and at the beginning
of sections 3.2 and 3.3 before you advance to the SAN material in section 4.
Storage Virtual Machines (SVMs), previously known as Vservers, are the logical storage servers that
operate within a cluster for the purpose of serving data out to storage clients. A single cluster may host
hundreds of SVMs, with each SVM managing its own set of volumes (FlexVols), Logical Network
Interfaces (LIFs), storage access protocols (e.g. NFS/CIFS/iSCSI/FC/FCoE), and for NAS clients its own
namespace.
You explicitly choose and configure which storage protocols you want a given SVM to support at SVM
creation time, and you can later add or remove protocols as desired. A single SVM can host any
combination of the supported protocols.
An SVM's assigned aggregates and LIFs determine which cluster nodes handle processing for that SVM.
As we saw earlier, an aggregate is directly tied to the specific node hosting its disks, which means that an
SVM runs in part on any nodes whose aggregates are hosting volumes for the SVM. An SVM also has a
direct relationship to any nodes that are hosting its LIFs. LIFs are essentially an IP address with a number
of associated characteristics such as an assigned home node, an assigned physical home port, a list of
physical ports it can fail over to, an assigned SVM, a role, a routing group, and so on. A given LIF can
only be assigned to a single SVM, and since LIFs are mapped to physical network ports on cluster nodes
this means that an SVM runs in part on all nodes that are hosting its LIFs.
When an SVM is configured with multiple data LIFs any of those LIFs can potentially be used to access
volumes hosted by the SVM. Which specific LIF IP address a client will use in a given instance, and by
extension which LIF, is a function of name resolution, the mapping of a hostname to an IP address. CIFS
Servers have responsibility under NetBIOS for resolving requests for their hostnames received from
clients, and in so doing can perform some load balancing by responding to different clients with different
LIF addresses, but this distribution is not sophisticated and requires external NetBIOS name servers in
order to deal with clients that are not on the local network. NFS Servers do not handle name resolution on
their own.
DNS provides basic name resolution load balancing by advertising multiple IP addresses for the same
hostname. DNS is supported by both NFS and CIFS clients and works equally well with clients on local
area and wide area networks. Since DNS is an external service that resides outside of Data ONTAP this
architecture creates the potential for service disruptions if the DNS server is advertising IP addresses for
LIFs that are temporarily offline. To compensate for this condition DNS servers can be configured to
delegate the name resolution responsibility for the SVM's hostname records to the SVM itself so that it
can directly respond to name resolution requests involving its LIFs. This allows the SVM to consider LIF
availability and LIF utilization levels when deciding what LIF address to return in response to a DNS name
resolution request.
LIFs that are mapped to physical network ports that reside on the same node as a volume's containing
aggregate offer the most efficient client access path to the volume's data. However, clients can also
access volume data through LIFs bound to physical network ports on other nodes in the cluster; in these
cases clustered Data ONTAP uses the high speed cluster network to bridge communication between the
node hosting the LIF and the node hosting the volume. NetApp best practice is to create at least one NAS
LIF for a given SVM on each cluster node that has an aggregate that is hosting volumes for that SVM. If
additional resiliency is desired then you can also create a NAS LIF on nodes not hosting aggregates for
the SVM as well.
A NAS LIF (a LIF supporting only NFS and/or CIFS) can automatically failover from one cluster node to
another in the event of a component failure; any existing connections to that LIF from NFS and SMB 2.0


and later clients can non-disruptively tolerate the LIF failover event. When a LIF failover happens the NAS
LIF migrates to a different physical NIC, potentially to a NIC on a different node in the cluster, and
continues servicing network requests from that new node/port. Throughout this operation the NAS LIF
maintains its IP address; clients connected to the LIF may notice a brief delay while the failover is in
progress but as soon as it completes the clients resume any in-process NAS operations without any loss
of data.
The number of nodes in the cluster determines the total number of SVMs that can run in the cluster. Each
storage controller node can host a maximum of 125 SVMs, so you can calculate the cluster's effective
SVM limit by multiplying the number of nodes by 125. There is no limit on the number of LIFs that an SVM
can host, but there is a limit on the number of LIFs that can run on a given node. That limit is 256 LIFs per
node, but if the node is part of an HA pair configured for failover then the limit is half that value, 128 LIFs
per node (so that a node can also accommodate its HA partner's LIFs in the event of a failover event).
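As a worked example of those limits, the two-node cluster used in this lab could host at most 2 x 125 = 250 SVMs, and because the two nodes form an HA pair each node is limited to 128 LIFs so that it could also absorb its partner's LIFs during a failover.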
Each SVM has its own NAS namespace, a logical grouping of the SVM's CIFS and NFS volumes into a
single logical filesystem view. Clients can access the entire namespace by mounting a single share or
export at the top of the namespace tree, meaning that SVM administrators can centrally maintain and
present a consistent view of the SVM's data to all clients rather than having to reproduce that view
structure on each individual client. As an administrator maps and unmaps volumes from the namespace
those volumes instantly become visible or disappear from clients that have mounted CIFS and NFS
volumes higher in the SVM's namespace. Administrators can also create NFS exports at individual
junction points within the namespace and can create CIFS shares at any directory path in the
namespace.

3.1 Create a Storage Virtual Machine for NAS

In this section we will create a new SVM named svm1 on our cluster and will configure it to serve out a
volume over NFS and CIFS. We will be configuring two NAS data LIFs on the SVM, one per node in the
cluster.

To perform this section's tasks from the GUI:

In System Manager navigate to the Storage Virtual Machines tab so that we can launch the Storage
Virtual Machine Setup wizard.


1) Open the Storage Virtual Machines tab.


2) Select cluster1.
3) Click the Create button to launch the Storage Virtual Machine Setup wizard.

Proceed to fill out the Storage Virtual Machine details in the setup wizard.


1) Populate the indicated fields as shown, then click Submit & Continue.
Note that the list of available Data Protocols in your lab may differ somewhat from what is shown in the
preceding screenshot; the contents of that list depend on what protocol licenses you entered when
setting up your cluster. If you used the cluster setup script from section 2.1.2 to create your cluster then
all protocols (including FC) will be available, even though this lab does not include Fibre Channel
connectivity.
The next window in the wizard, the Configure CIFS/NFS protocol window, is rather large and you may
not be able to view its whole contents without scrolling so we will present it here as two partial
screenshots:


1) Populate the fields as shown in the screenshot.


2) Scroll down to see the rest of the window.


1) The value for the Administrator Password field is Netapp1!.


2) Expand the NIS Configuration (Optional) portion of the window and notice the pre-populated
values in the Domain Name(s) and IP Address(es) fields. If these fields are populated then the
SVM will be configured for NFS; even though we're not running NIS in this lab we also want to
configure DNS for the SVM and so we need to populate these fields. The pre-populated values
should match those shown in the screenshot; if they don't then adjust them accordingly.
3) Click Submit & Continue to move on to step 3 of the wizard.


1) Specify the password for an SVM specific administrator account for the SVM, which can then be
used to delegate admin access for just this SVM. Enter Netapp1! in the password field, then
click Submit & Continue.
The New Storage Virtual Machine Summary window opens displaying the details of the newly created
SVM.


1) Click OK to exit the wizard.

The new SVM now also shows up in the list of available Storage Virtual Machines.


1) The SVM svm1 is now listed under cluster1 on the Storage Virtual Machines tab.
2) The NFS and CIFS protocols are shown encapsulated in green boxes, which indicates that those
protocols are enabled for the selected SVM svm1.
The Storage Virtual Machine Setup wizard only provisions a single LIF when it creates a new SVM. We
want to have a LIF available on both cluster nodes so that a client can access the SVM's shares through
either node. To do that we will now create a 2nd LIF hosted on the other node in the cluster.


1) Under the Storage Virtual Machines tab navigate to cluster1->svm1->Configuration->Network
Interfaces. Notice that in the main pane of the window there is only a single LIF
named svm1_cifs_nfs_lif1 specified for the SVM svm1.
2) Click on the Create button to launch the Network Interface Create Wizard.


1) Click Next to advance the wizard.


1) Populate the fields as shown. Note that we are setting the Role to Both. The existing LIF was
configured for both when we created the SVM because we did not create a dedicated
management LIF, and we want this new LIF to have a matching configuration. Click Next to
continue.


1) We want to use the new LIF for both CIFS & NFS so accept the default selections and advance
the wizard by clicking Next.

1) In the Network Properties step click on the Browse button to open the port selection window.


1) Expand the Ports/Adapters list entry for cluster1-02 and select port e0c.
2) Click OK to accept the selection and return to the Network Properties step in the wizard.


1) Complete the remainder of the fields in the Network Properties window and click Next to continue
the wizard.


1) Review the summary of the settings to make sure everything is set correctly as shown. This lab
only uses a single subnet so the fact that the new interface will be assigned to the default failover
group is perfectly acceptable. If everything is correct then click Next to continue.


1) Click Finish to complete the Create Network Interface wizard.


1) Notice that our new LIF named svm1_cifs_nfs_lif2 is now displayed in the list of the SVM's
network interfaces.
2) Notice how various properties for the selected LIF are listed in the details pane at the bottom of
the window.
Lastly, we need to configure DNS delegation for the SVM so that Linux and Windows clients can
intelligently utilize all of the svm1 SVM's configured NAS LIFs. To achieve this objective the DNS server
must delegate to the cluster the responsibility for the DNS zone corresponding to the SVM's hostname,
which in our case will be svm1.demo.netapp.com. We have preconfigured the lab's DNS server to
delegate this responsibility, but the cluster must also be configured to accept it. You will be completing
that acceptance task now, but since it cannot be accomplished through System Manager you must
instead use the Data ONTAP command line.
Open a PuTTY connection to cluster1 following the instructions from section 1.6. Log in using the
username admin and the password Netapp1!, then enter the following commands.
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif1 -dns-zone
svm1.demo.netapp.com
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif2 -dns-zone
svm1.demo.netapp.com
cluster1::> network interface show -vserver svm1 -fields dns-zone,address
vserver lif                address       dns-zone
------- ------------------ ------------- --------------------
svm1    svm1_cifs_nfs_lif1 192.168.0.131 svm1.demo.netapp.com
svm1    svm1_cifs_nfs_lif2 192.168.0.132 svm1.demo.netapp.com
2 entries were displayed.
cluster1::>

Validate that delegation is working correctly by opening a command prompt on jumphost (launch a
command prompt by going to Start->All Programs->Accessories->Command Prompt) and use the
nslookup command as shown in the following screenshot. If the nslookup command returns IP addresses
as identified by the yellow highlighted text then delegation is working correctly. If nslookup returns a
Non-existent domain error then delegation is not working correctly and you will need to review the Data
ONTAP commands you just entered as they most likely contained an error. Also notice from the
screenshot that different executions of the nslookup command return different addresses, demonstrating
that DNS load balancing is working correctly.
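If you would like a rough idea of what to expect before you run it, the following is a sketch of typical Windows nslookup output; the server name line depends on your DNS configuration and the returned address will alternate between the SVM's two LIFs (192.168.0.131 and 192.168.0.132):

C:\> nslookup svm1.demo.netapp.com
Server:   <your DNS server name>
Address:  192.168.0.253

Non-authoritative answer:
Name:     svm1.demo.netapp.com
Address:  192.168.0.131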

To perform this section's tasks from the command line:


If you do not already have a PuTTY connection open to cluster1 then open one now following the
directions in section 1.6. The username is admin and the password is Netapp1!.
Create the SVM named svm1. Note that the clustered Data ONTAP command line syntax still refers to
storage virtual machines as vservers.
cluster1::> vserver create -vserver svm1 -rootvolume svm1_root -aggregate
aggr1_cluster1_01 -language C.UTF-8 -rootvolume-security-style ntfs -ns-switch
file,nis -nm-switch file -snapshot-policy default
[Job 28] Job is queued: Create svm1.
[Job 28] Creating root volume
[Job 28] Job succeeded:
Vserver creation complete
cluster1::>

Configure the NIS domain to match how System Manager configured the SVM in the GUI lab workflow.
cluster1::> vserver services nis-domain create -vserver svm1 -domain demo.netapp.com
-active true -servers 192.168.0.253
cluster1::>


Add CIFS and NFS protocol support to the SVM svm1:


cluster1::> vserver modify -vserver svm1 -allowed-protocols nfs,cifs
cluster1::> vserver show
                               Admin      Root                         Name    Name
Vserver     Type    State      Volume     Aggregate                    Service Mapping
----------- ------- ---------- ---------- ---------------------------- ------- -------
cluster1    admin   -          -          -                            -       -
cluster1-01 node    -          -          -                            -       -
cluster1-02 node    -          -          -                            -       -
svm1        data    running    svm1_root  aggr1_cluster1_01            file,   file
                                                                       nis
4 entries were displayed.
cluster1::>

Display a list of the cluster's network interfaces:

cluster1::> network interface show
            Logical      Status     Network            Current       Current Is
Vserver     Interface    Admin/Oper Address/Mask       Node          Port    Home
----------- ------------ ---------- ------------------ ------------- ------- ----
cluster1
            cluster_mgmt up/up      192.168.0.101/24   cluster1-01   e0c     true
cluster1-01
            clus1        up/up      169.254.207.173/16 cluster1-01   e0a     true
            clus2        up/up      169.254.250.79/16  cluster1-01   e0b     true
            mgmt1        up/up      192.168.0.112/24   cluster1-01   e0c     true
cluster1-02
            clus1        up/up      169.254.254.105/16 cluster1-02   e0a     true
            clus2        up/up      169.254.111.119/16 cluster1-02   e0b     true
            mgmt1        up/up      192.168.0.112/24   cluster1-02   e0c     true
7 entries were displayed.
cluster1::>

Notice that there are not yet any LIFs defined for the SVM svm1. Create the svm1_cifs_nfs_lif1 data
LIF for svm1:
cluster1::> network interface create -vserver svm1 -lif svm1_cifs_nfs_lif1 -role data
-data-protocol nfs,cifs -home-node cluster1-01 -home-port e0c -address 192.168.0.131
-netmask 255.255.255.0 -firewall-policy mgmt
Info: Your interface was created successfully; the routing group d192.168.0.0/24 was
created
cluster1::>

Create the svm1_cifs_nfs_lif2 data LIF for the SVM svm1:


cluster1::> network interface create -vserver svm1 -lif svm1_cifs_nfs_lif2 -role data
-data-protocol nfs,cifs -home-node cluster1-02 -home-port e0c -address 192.168.0.132
-netmask 255.255.255.0 -firewall-policy mgmt
cluster1::>


Display all of the LIFs owned by svm1:


cluster1::> network interface show -vserver svm1
            Logical      Status     Network            Current       Current Is
Vserver     Interface    Admin/Oper Address/Mask       Node          Port    Home
----------- ------------ ---------- ------------------ ------------- ------- ----
svm1
            svm1_cifs_nfs_lif1
                         up/up      192.168.0.131/24   cluster1-01   e0c     true
            svm1_cifs_nfs_lif2
                         up/up      192.168.0.132/24   cluster1-02   e0c     true
2 entries were displayed.
cluster1::>

Configure the DNS domain and nameservers for the svm1 SVM:
cluster1::> vserver services dns show
                                                      Name
Vserver         State     Domains                     Servers
--------------- --------- --------------------------- ----------------
cluster1        enabled   demo.netapp.com             192.168.0.253
cluster1::> vserver services dns create -vserver svm1 -name-servers 192.168.0.253
-domains demo.netapp.com
cluster1::> vserver services dns show
                                                      Name
Vserver         State     Domains                     Servers
--------------- --------- --------------------------- ----------------
cluster1        enabled   demo.netapp.com             192.168.0.253
svm1            enabled   demo.netapp.com             192.168.0.253
2 entries were displayed.
cluster1::>

Configure the LIFs to accept DNS delegation responsibility for the svm1.demo.netapp.com zone so that
we can advertise addresses for both of the NAS data LIFs that belong to svm1. We could have done this
as part of the network interface create commands but we opted to do it separately here to show you how
you can modify an existing LIF.
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif1 -dns-zone
svm1.demo.netapp.com
  (network interface modify)
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif2 -dns-zone
svm1.demo.netapp.com
  (network interface modify)
cluster1::> network interface show -vserver svm1 -fields dns-zone,address
vserver lif                address       dns-zone
------- ------------------ ------------- --------------------
svm1    svm1_cifs_nfs_lif1 192.168.0.131 svm1.demo.netapp.com
svm1    svm1_cifs_nfs_lif2 192.168.0.132 svm1.demo.netapp.com
2 entries were displayed.
cluster1::>

Verify that DNS delegation is working correctly by opening a PuTTY connection to the Linux host rhel1
(username root and password Netapp1!) and executing the following commands. If the delegation is
working correctly then you should see IP addresses returned for the host svm1.demo.netapp.com, and if
you run the command several times you will see that the responses alternate between the addresses of
the SVM's two LIFs.


[root@rhel1 ~]# nslookup svm1.demo.netapp.com
Server:         192.168.0.253
Address:        192.168.0.253#53

Non-authoritative answer:
Name:   svm1.demo.netapp.com
Address: 192.168.0.132

[root@rhel1 ~]# nslookup svm1.demo.netapp.com
Server:         192.168.0.253
Address:        192.168.0.253#53

Non-authoritative answer:
Name:   svm1.demo.netapp.com
Address: 192.168.0.131

[root@rhel1 ~]#

This completes the planned LIF configuration for svm1, so now display a detailed configuration report for
the LIF svm1_cifs_nfs_lif1:
cluster1::> network interface show -lif svm1_cifs_nfs_lif1 -instance

                  Vserver Name: svm1
        Logical Interface Name: svm1_cifs_nfs_lif1
                          Role: data
                 Data Protocol: nfs, cifs
                     Home Node: cluster1-01
                     Home Port: e0c
                  Current Node: cluster1-01
                  Current Port: e0c
            Operational Status: up
               Extended Status: -
                       Is Home: true
               Network Address: 192.168.0.131
                       Netmask: 255.255.255.0
           Bits in the Netmask: 24
               IPv4 Link Local: -
            Routing Group Name: d192.168.0.0/24
         Administrative Status: up
               Failover Policy: nextavail
               Firewall Policy: mgmt
                   Auto Revert: false
 Fully Qualified DNS Zone Name: svm1.demo.netapp.com
       DNS Query Listen Enable: true
           Failover Group Name: system-defined
                      FCP WWPN: -
                Address family: ipv4
                       Comment: -
cluster1::>


When we issued the vserver create command to create svm1 we included an option to enable CIFS for
it, but that command did not actually create a CIFS server for the SVM. Now let's create that CIFS server.
cluster1::> vserver cifs create -vserver svm1 -cifs-server svm1 -domain
demo.netapp.com
In order to create an Active Directory machine account for the CIFS server, you must
supply the name and
password of a Windows account with sufficient privileges to add computers to the
"CN=Computers" container within the "DEMO.NETAPP.COM" domain.
Enter the user name: Administrator
Enter the password: Netapp1!
cluster1::> vserver cifs show
            Server          Status    Domain/Workgroup Authentication
Vserver     Name            Admin     Name             Style
----------- --------------- --------- ---------------- --------------
svm1        SVM1            up        DEMO             domain
cluster1::>

As with CIFS we enabled the SVM svm1 to support NFS at SVM creation time but that action did not
actually start up an NFS server for the SVM. We'll do that now.
cluster1::> vserver nfs status -vserver svm1
The NFS server is not running.
cluster1::> vserver nfs create -vserver svm1 -v3 enabled -access true
cluster1::> vserver nfs status -vserver svm1
The NFS server is running.
cluster1::> vserver nfs show
Vserver: svm1
General Access: true
v3: enabled
v4.0: disabled
4.1: disabled
UDP: enabled
TCP: enabled
Default Windows User:
Default Windows Group:
cluster1::>

3.2 Configure CIFS and NFS

Clustered Data ONTAP configures CIFS and NFS on a per SVM basis. When we created the svm1 SVM
in the previous section we set up and enabled CIFS and NFS for it. However, it is important to understand
that clients cannot yet access the SVM using CIFS and NFS. That is partially because we have not yet
created any volumes on the SVM but also because we have not told the SVM what we want to share and
who we want to share it with.
Each SVM has its own namespace. A namespace is a logical grouping of a single SVM's volumes into a
directory hierarchy that is private to just that SVM, with the root of that hierarchy hosted on the SVM's root
volume (svm1_root in the case of the svm1 SVM), and it is through this namespace that the SVM shares
data to CIFS and NFS clients. The SVM's other volumes are junctioned (i.e. mounted) within that root
volume or within other volumes that are already junctioned into the namespace. This hierarchy presents
NAS clients with a unified, centrally maintained view of the storage encompassed by the namespace,
regardless of where the junctioned volumes physically reside in the cluster. CIFS and NFS clients cannot
access a volume that has not been junctioned into the namespace.
CIFS and NFS clients can access the entire namespace by mounting a single NFS export or CIFS share
declared at the top of the namespace. While this is a very powerful capability, there is no requirement to
make the whole namespace accessible. You can create CIFS shares at any directory level in the
namespace, and you can create different NFS export rules at junction boundaries for individual volumes
and for individual qtrees within a junctioned volume.
Clustered Data ONTAP does not utilize an /etc/exports file for exporting NFS volumes; instead it uses
a policy model that dictates the NFS client access rules for the associated volumes. An NFS-enabled
SVM implicitly exports the root of its namespace and automatically associates that export with the SVM's
default export policy, but that default policy is initially empty and until it is populated with access rules no
NFS clients will be able to access the namespace. The SVM's default export policy applies to the root
volume and also to any volumes that an administrator junctions into the namespace, but an administrator
can optionally create additional export policies in order to implement different access rules within the
namespace. You can apply export policies to a volume as a whole and to individual qtrees within a
volume, but a given volume or qtree can only have one associated export policy. While you can't create
NFS exports at any other directory level in the namespace, NFS clients can mount from any level in the
namespace by leveraging the namespace's root export.
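As a minimal sketch of that last point, assuming a volume is already junctioned at /engineering (we create one in section 3.3), an NFS client could mount that level of the namespace directly rather than the root; the mountpoint name here is arbitrary:

[root@rhel1 /]# mkdir /mnt/eng
[root@rhel1 /]# mount -t nfs svm1:/engineering /mnt/eng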
In this section of the lab we are going to configure a default export policy for our SVM so that any
volumes we junction into its namespace will automatically pick up the same NFS export rules. We will
also create a single CIFS share at the top of our namespace so that all the volumes we junction into our
namespace can be accessed via that one share. Finally, since our SVM will be sharing the same data
over NFS and CIFS, we will be setting up name mapping between UNIX and Windows user accounts to
facilitate smooth multiprotocol access to the volumes and files in the namespace.

To perform this section's tasks from the GUI:


When you create an SVM, Data ONTAP automatically creates a root volume to hold that SVM's
namespace. An SVM always has a root volume, whether or not it is configured to support NAS protocols.
Before we configure NFS and CIFS for our newly created SVM, let's take a quick look at the SVM's root
volume:

1) Select the Storage Virtual Machines tab and navigate to cluster1->svm1->Storage->Volumes.


2) Note the existence of the svm1_root volume, which hosts the namespace for the SVM svm1.
The root volume is not large; only 20 MB in this example. Root volumes are small because they
are only intended to house the junctions that organize the SVM's volumes; all of the files hosted on
the SVM should reside inside the volumes that are junctioned into the namespace rather than
directly in the SVM's root volume.
Let's confirm that CIFS and NFS are running for our SVM using System Manager. We'll check CIFS first.

1) Select the Storage Virtual Machines tab and navigate to cluster1->svm1->Configuration->Protocols->CIFS.


2) Select the Configuration tab if it is not already selected.
3) If the Service Status property shows Started as it does in the screenshot then CIFS is running for
this SVM.

If you were dealing with an SVM on which CIFS had not been previously set up then you could use the
Setup button in this window to accomplish that task.
Now check that NFS is enabled for our SVM.


1) Select NFS under the Protocols section.


2) Notice that the NFS Server Status shows as Enabled.
If you were dealing with an SVM on which NFS had not been previously set up then you could use the
Enable button in this window to turn NFS on.
At this point we have confirmed that our SVM has a running CIFS server and a running NFS server.
However, we have not yet configured those two servers to actually serve any data, so we will now start
that process by configuring the default NFS export policy for our SVM.
When you create an SVM with NFS, clustered Data ONTAP automatically creates a default NFS export
policy for the SVM that contains an empty list of access rules. Without any access rules that policy will not
allow clients to access any exports, so we will now add a rule to the default policy so that any volumes we
later create for the SVM will be automatically accessible to NFS clients. If any of this seems a bit
confusing, don't worry; the concept should become clearer as we work through this section and the next
one.


1) In System Manager select the Storage Virtual Machines tab and then go to cluster1->svm1->Policies->Export Policies.
2) In the Export Policies window select the default policy.
3) Click on Add Rule.
The Create Export Rule window opens. Using this dialog you can create any number of rules that provide
fine grained access control for clients and also specify their order of application. For this lab we are going
to create a single rule that grants unfettered access to any host on the lab's private network.


1) Configure the fields as shown in the screenshot. This will create a single access rule that grants
read-write and root access to any node on the network without regard to which NAS protocol they
are using. Click OK to create the rule.
Returning to the Export Policies window in System Manager we now see our newly added rule under the
default policy.


With this updated default export policy in place, NFS clients will now be able to mount the root of the svm1
SVM's namespace and use that mount to access any volumes that we junction into the namespace.
We next need to configure a CIFS share for our SVM. We are going to create a single share named
nsroot at the root of our SVM's namespace.


1) Select the Storage Virtual Machines tab and navigate to cluster1->svm1->Storage->Shares.


2) In the Shares pane select Create Share.

The Create Share dialog box opens.

1) Populate the fields as shown to make the root folder of the namespace available as a CIFS share
named nsroot, then push the Create button.
The new nsroot share now shows up in the System Manager Shares window.


1) Select nsroot from the list of shares.


2) Click the Edit button to edit the share's settings.

The Edit nsroot Settings window opens.


1) Select the Permissions tab. Make sure that the group Everyone is granted the Full Control
permission. You can set more fine grained permissions on the share from this tab but this
configuration is sufficient for the purpose of this lab.


1) Select the Options tab at the top of the window and make sure the settings are as shown in the
screenshot.
2) If any of the settings differ from those shown correct them and hit the Save and Close button. If
everything matches hit the Cancel button instead.
Setup of the \\svm1\nsroot CIFS share is now complete.
For this lab we have created just one share at the root of our namespace which allows users to access
any volume mounted in the namespace via that share. The advantage of this approach is that it reduces
the number of mapped drives that you have to manage on your clients; any changes you make to the
namespace become instantly visible and accessible to your clients. If you prefer to use multiple shares
then clustered Data ONTAP allows you to create additional shares rooted at any directory level within the
namespace.
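For example, if you later wanted an additional share scoped to just the engineering portion of the namespace, a sketch of the command would look like the following; the share name is illustrative, it assumes a volume or folder already exists at /engineering, and it is not part of this lab's workflow:

cluster1::> vserver cifs share create -vserver svm1 -share-name engineering -path /engineering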
Since we have configured our SVM to support both NFS and CIFS, we next want to set up username
mapping so that our UNIX root users and the DEMO\Administrator account will have synonymous
access to each other's files. Setting up such a mapping may not be desirable in all environments, but it
will simplify data sharing for us since these are the two primary accounts we are using in this lab.


1) In System Manager open the Storage Virtual Machines tab and navigate to cluster1->svm1->Configuration->Local Users and Groups->Name Mapping.
2) In the Name Mapping pane click the Add button.
The Add Name Mapping Entry window opens.

1) Create a Windows to UNIX mapping by completing all of the fields as shown (the two
backslashes in the Pattern field are not a typo, and administrator should not be capitalized) and
then click on the Add button.
Repeat the process to create another mapping.


1) Create a UNIX to Windows mapping by completing all of the fields as shown and then click on the
Add button.
You should now see two mappings in the Name Mappings window that together make the root and
DEMO\Administrator accounts equivalent to each other for the purpose of file access within the SVM.


To perform this section's tasks from the command line:


Verify that CIFS is running by default for the SVM svm1:
cluster1::> vserver cifs show
            Server          Status    Domain/Workgroup Authentication
Vserver     Name            Admin     Name             Style
----------- --------------- --------- ---------------- --------------
svm1        SVM1            up        DEMO             domain
cluster1::>

Verify that NFS is running for the SVM svm1:


cluster1::> vserver nfs show
Vserver: svm1
General Access: true
v3: enabled
v4.0: disabled
4.1: disabled
UDP: enabled
TCP: enabled
Default Windows User:
Default Windows Group:
cluster1::>

Create an export policy for the SVM svm1 and configure the policy's rules.
cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default
cluster1::> vserver export-policy rule show
This table is currently empty.
cluster1::> vserver export-policy rule create -vserver svm1 -policyname default
-clientmatch 0.0.0.0/0 -rorule any -rwrule any -superuser any -anon 65534 -ruleindex 1
cluster1::> vserver export-policy rule show
             Policy          Rule    Access   Client                RO
Vserver      Name            Index   Protocol Match                 Rule
------------ --------------- ------- -------- --------------------- ---------
svm1         default         1       any      0.0.0.0/0             any
cluster1::> vserver export-policy rule show -policyname default -instance

                                    Vserver: svm1
                                Policy Name: default
                                 Rule Index: 1
                            Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0
                             RO Access Rule: any
                             RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
                   Superuser Security Types: any
               Honor SetUID Bits in SETATTR: true
                  Allow Creation of Devices: true
cluster1::>

Create a share at the root of the namespace for the SVM svm1:


cluster1::> vserver cifs share show
Vserver        Share         Path     Properties   Comment  ACL
-------------- ------------- -------- ------------ -------- ----------------------
svm1           admin$        /        browsable    -        -
svm1           c$            /        browsable    -        BUILTIN\Administrators
                                      oplocks               / Full Control
svm1           ipc$          /        browsable    -        -
3 entries were displayed.
cluster1::> vserver cifs share create -vserver svm1 -share-name nsroot -path /
cluster1::> vserver cifs share show
Vserver        Share         Path     Properties   Comment  ACL
-------------- ------------- -------- ------------ -------- ----------------------
svm1           admin$        /        browsable    -        -
svm1           c$            /        browsable    -        BUILTIN\Administrators
                                      oplocks               / Full Control
svm1           ipc$          /        browsable    -        -
svm1           nsroot        /        browsable    -        Everyone / Full Control
                                      oplocks
                                      changenotify
4 entries were displayed.
cluster1::>

Set up CIFS <-> NFS user name mapping for the SVM svm1:
cluster1::> vserver name-mapping show
This table is currently empty.
cluster1::> vserver name-mapping create -vserver svm1 -direction win-unix -position 1
-pattern demo\\administrator -replacement root
cluster1::> vserver name-mapping create -vserver svm1 -direction unix-win -position 1
-pattern root -replacement demo\\administrator
cluster1::> vserver name-mapping show
Vserver        Direction Position
-------------- --------- --------
svm1           win-unix  1        Pattern: demo\\administrator
                                  Replacement: root
svm1           unix-win  1        Pattern: root
                                  Replacement: demo\\administrator
2 entries were displayed.
cluster1::>

3.3 Create a Volume and Map It to the Namespace

Volumes, or FlexVols, are the logical containers used to store data. Each volume is hosted in a single
aggregate, but any given aggregate can host multiple volumes. Unlike an aggregate, each volume can be
associated with no more than a single SVM. The maximum size of a volume is dictated by the model of
the storage controller hosting it.
An SVM can host multiple volumes. While there is no specific limit on the number of FlexVols that can be
configured for a given SVM, each storage controller node is limited to hosting no more than 500 or 1000
FlexVols (depending on controller model), which means that there is an effective limit on the total number
of volumes that a cluster can host.
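As a worked example of that effective limit, a two-node cluster built from controllers that support 500 FlexVols per node could host no more than 2 x 500 = 1000 volumes in total, regardless of how many SVMs those volumes are spread across.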
Each storage controller node has a root aggregate (e.g. aggr0_<nodename>) that contains the node's
Data ONTAP operating system. Do not use the node's root aggregate to host any other volumes or user
data; always create additional aggregates and volumes for that purpose.
Clustered Data ONTAP FlexVols support a number of storage efficiency features including thin
provisioning, deduplication, and compression. One specific storage efficiency feature we will be showing
in this section of the lab is thin provisioning, which dictates how space for a FlexVol is allocated in its
containing aggregate. When you create a FlexVol with a volume guarantee of type volume you are
thickly provisioning the volume, pre-allocating all of the space for the volume on the containing aggregate,
which ensures that the volume will never run out of space unless the volume reaches 100% capacity.
When you create a FlexVol with a volume guarantee of none you are thinly provisioning the volume, only
allocating space for it on the containing aggregate at the time and in the quantity that the volume actually
needs it to store the data. This latter configuration allows you to increase your overall space utilization
and even oversubscribe an aggregate by allocating more volumes on it than the aggregate could actually
accommodate if all the subscribed volumes reached their full size. However, if an oversubscribed
aggregate does fill up then all of its volumes will run out of space before they reach their maximum volume
size, so oversubscription deployments generally require a greater degree of administrative vigilance
around space utilization.
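A minimal command-line sketch of the two guarantee types follows; the volume names are hypothetical and are not part of this lab's workflow:

cluster1::> volume create -vserver svm1 -volume thick_example -aggregate aggr1_cluster1_01
-size 1GB -space-guarantee volume
cluster1::> volume create -vserver svm1 -volume thin_example -aggregate aggr1_cluster1_01
-size 1GB -space-guarantee none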
In section 2.5 we created a new aggregate named aggr1_cluster1_01; we will now use that aggregate
to host a new thinly provisioned volume named engineering for the SVM named svm1.

To perform this section's tasks from the GUI:

1) In System Manager open the Storage Virtual Machines tab.


2) Navigate to cluster1->svm1->Storage->Volumes.
3) Click Create to launch the Create Volume wizard.


1) Populate the data fields as shown to specify a new 1 GB thin provisioned volume named
engineering in the aggregate aggr1_cluster1_01. Click the Create button to complete the
volume creation process.
The newly created thin provisioned volume should now display in the Volumes list.


1) If you are not already there then navigate to Storage Virtual Machines->cluster1->svm1->Storage->Volumes.
2) Notice that engineering is now listed as a volume for the SVM.
System Manager has also automatically mapped engineering into the SVM's NAS namespace.


1) Navigate to Storage Virtual Machines->cluster1->svm1->Storage->Namespace.


2) Notice that engineering is mounted in the svm1 SVM's namespace.
Our newly created engineering volume has also inherited the default NFS export policy, and because
we have already configured the access rules for that policy the volume is instantly accessible to NFS
clients. As you can see in the preceding screenshot, the engineering volume was junctioned as
/engineering, meaning that any client that had mapped a share to \\svm1\nsroot or NFS mounted
svm1:/ would now instantly see the directory engineering in the share and NFS mount corresponding to
the newly created volume.
Now lets create a second volume.


1) Navigate to Storage Virtual Machines->cluster1->svm1->Storage->Volumes.


2) Click Create to launch the Create Volume wizard.


1) Populate the data fields as shown to specify a new 1 GB thin provisioned volume named
eng_users in the aggregate aggr1_cluster1_01. Click the Create button to complete the
volume creation process.


1) Notice that eng_users is now listed as a volume for the SVM.

Now look at how System Manager junctioned in the new volume by default:


1) Navigate to Storage Virtual Machines->cluster1->svm1->Storage->Namespace.


2) As was the case with engineering, System Manager has automatically mapped eng_users into
the SVM's NAS namespace, in this case as /eng_users.
You do have more options for junctioning than just placing your volumes into the root of your namespace.
In the case of the eng_users volume, we want to junction that volume underneath the engineering
volume and shorten the name to take advantage of an already intuitive context.


1) Select eng_users in the namespace view.


2) Click Unmount.

1) Click Unmount to confirm the operation.


1) As you can see, eng_users has disappeared from the namespace. Since it is no longer
junctioned in the namespace that means clients can no longer access it or even see it. Click
Mount so we can junction the volume in at a different location.

1) Fill out the name fields as shown, noting that we will be junctioning this volume in as users
rather than as eng_users. Click the Browse button so we can choose where in the namespace
to create the junction.


1) Expand the root of the namespace structure.


2) Double-click engineering.
3) Verify that /engineering is displayed as the Selected Path.
4) Click OK to accept the selection.

1) The fields should now all be populated as shown. Click Mount to mount the volume in the
namespace.


1) The volume eng_users is mounted in the namespace as /engineering/users.


A junction can also be created within user created directories. For example, from a CIFS or NFS client we
could create a folder named projects inside the engineering volume and then create a widgets
volume that junctions in under the projects folder; in that scenario the namespace path to the widgets
volume contents would be /engineering/projects/widgets.
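A rough command-line sketch of that scenario, assuming the projects folder has already been created inside the engineering volume from a client, might look like the following; the widgets volume is hypothetical and is not part of this lab's workflow:

cluster1::> volume create -vserver svm1 -volume widgets -aggregate aggr1_cluster1_01
-size 1GB -space-guarantee none -junction-path /engineering/projects/widgets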
Now we are going to create a couple of qtrees within the eng_users volume, one for each of the
users bob and susan.


1) Navigate to Storage Virtual Machines->cluster1->svm1->Storage->Qtrees.


2) Click Create to launch the Create Qtree wizard.

1) Select the Details tab and then populate the fields as shown in the screenshot.
2) Click on the Quota tab.


1) The Quota tab is where you define the space usage limits you want to apply to the qtree. We will
not be implementing any quota limits in this lab, so click the Create button.
Now create a second qtree for the user account susan.

1) Click the Create button.


1) Select the Details tab and then populate the fields as shown in the screenshot.
2) Click the Create button.

At this point you should see both of our newly created user qtrees in System Manager.


To perform this section's tasks from the command line:


Display basic information about the SVM's current list of volumes:
cluster1::> volume show -vserver svm1
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm1      svm1_root    aggr1_cluster1_01
                                    online     RW         20MB    18.89MB    5%
cluster1::>

Display the junctions in the SVM's namespace:

cluster1::> volume show -vserver svm1 -junction
                                 Junction                    Junction
Vserver   Volume       Language Active   Junction Path      Path Source
--------- ------------ -------- -------- ------------------ -----------
svm1      svm1_root    C.UTF-8           /
cluster1::>

Create the volume engineering, junctioning it into the namespace at /engineering:


cluster1::> volume create -vserver svm1 -volume engineering -aggregate
aggr1_cluster1_01 -size 1GB -percent-snapshot-space 5 -space-guarantee none -policy
default -junction-path /engineering
[Job 34] Job is queued: Create engineering.
[Job 34] Job succeeded: Successful
cluster1::>

Show the volumes for the SVM svm1 and list its junction points:
cluster1::> volume show -vserver svm1
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm1      engineering  aggr1_cluster1_01
                                    online     RW          1GB    972.7MB    5%
svm1      svm1_root    aggr1_cluster1_01
                                    online     RW         20MB    18.88MB    5%
2 entries were displayed.
cluster1::> volume show -vserver svm1 -junction
                                 Junction                    Junction
Vserver   Volume       Language Active   Junction Path      Path Source
--------- ------------ -------- -------- ------------------ -----------
svm1      engineering  C.UTF-8  true     /engineering       RW_volume
svm1      svm1_root    C.UTF-8           /
2 entries were displayed.
cluster1::>


Create the volume eng_users, junctioning it into the namespace at /engineering/users.


cluster1::> volume create -vserver svm1 -volume eng_users -aggregate aggr1_cluster1_01
-size 1GB -percent-snapshot-space 5 -space-guarantee none -policy default
-junction-path /engineering/users
[Job 35] Job is queued: Create eng_users.
[Job 35] Job succeeded: Successful
cluster1::> volume show -vserver svm1 -junction
                                 Junction                    Junction
Vserver   Volume       Language Active   Junction Path      Path Source
--------- ------------ -------- -------- ------------------ -----------
svm1      eng_users    C.UTF-8  true     /engineering/users RW_volume
svm1      engineering  C.UTF-8  true     /engineering       RW_volume
svm1      svm1_root    C.UTF-8           /
3 entries were displayed.
cluster1::>

Display detailed information about the volume engineering. Notice here that the volume is reporting as
thin provisioned (Space Guarantee Style is set to none) and that the Export Policy is set to default.
cluster1::> volume show -vserver svm1 volume engineering -instance
Vserver Name: svm1
Volume Name: engineering
Aggregate Name: aggr1_cluster1_01
Volume Size: 1GB
Volume Data Set ID: 1026
Volume Master Data Set ID: 2147484674
Volume State: online
Volume Type: RW
Volume Style: flex
Is Cluster-Mode Volume: true
Is Constituent Volume: false
Export Policy: default
User ID: -
Group ID: -
Security Style: ntfs
UNIX Permissions: ------------
Junction Path: /engineering
Junction Path Source: RW_volume
Junction Active: true
Junction Parent Volume: svm1_root
Comment:
Available Size: 972.6MB
Filesystem Size: 1GB
Total User-Visible Size: 972.8MB
Used Size: 180KB
Used Percentage: 5%
Volume Nearly Full Threshold Percent: 95%
Volume Full Threshold Percent: 98%
Maximum Autosize (for flexvols only): 1.20GB
Autosize Increment (for flexvols only): 51.20MB
Minimum Autosize: 1GB
Autosize Grow Threshold Percentage: 85%
Autosize Shrink Threshold Percentage: 50%
Autosize Mode: off
Autosize Enabled (for flexvols only): false
Total Files (for user-visible data): 31122
Files Used (for user-visible data): 97
Space Guarantee Style: none
Space Guarantee in Effect: true
Snapshot Directory Access Enabled: true
Space Reserved for Snapshots: 5%


Snapshot Reserve Used: 0%


Snapshot Policy: default
Creation Time: Fri Feb 14 01:41:02 2014
Language: C.UTF-8
Clone Volume: false
Node name: cluster1-01
NVFAIL Option: off
Is File System Size Fixed: false
Extent Option: off
Reserved Space for Overwrites: 0B
Fractional Reserve: 0%
Snapshot Cloning Dependency: off
Primary Space Management Strategy: volume_grow
Read Reallocation Option: off
Inconsistency in the File System: false
Is Volume Quiesced (On-Disk): false
Is Volume Quiesced (In-Memory): false
Volume Contains Shared or Compressed Data: false
Space Saved by Storage Efficiency: 0B
Percentage Saved by Storage Efficiency: 0%
Space Saved by Deduplication: 0B
Percentage Saved by Deduplication: 0%
Space Shared by Deduplication: 0B
Space Saved by Compression: 0B
Percentage Space Saved by Compression: 0%
Block Type: 64-bit
FlexCache Connection Status: -
Is Volume Moving: false
Flash Pool Caching Eligibility: read-write
Flash Pool Write Caching Ineligibility Reason: -
Managed By Storage Service: -
Create Namespace Mirror Constituents For SnapDiff Use: -
Constituent Volume Role: -
QoS Policy Group Name: -
Is Volume Move in Cutover Phase: false
Number of Snapshot Copies in the Volume: 1
cluster1::>

View how much disk space this volume is actually consuming in its containing aggregate; the Total
Footprint value represents the volume's total consumption. The value here is so small because this
volume is thin provisioned and we have not yet added any data to it. If we had thick provisioned the
volume then the footprint here would have been 1 GB, the full size of the volume.
cluster1::> volume show-footprint -volume engineering

      Vserver : svm1
      Volume  : engineering

      Feature                          Used       Used%
      -------------------------------- ---------- -----
      Volume Data Footprint            256KB         0%
      Volume Guarantee                 0B            0%
      Flexible Volume Metadata         5.78MB        0%
      Delayed Frees                    672KB         0%

      Total Footprint                  6.68MB        0%

cluster1::>

Create qtrees in the eng_users volume for the users bob and susan, then generate a list of all the qtrees
that belong to svm1, and finally produce a detailed report of the configuration for the qtree bob.


cluster1::> volume qtree create -vserver svm1 -volume eng_users -qtree bob
cluster1::> volume qtree create -vserver svm1 -volume eng_users -qtree susan
cluster1::> volume qtree show -vserver svm1
Vserver    Volume        Qtree        Style        Oplocks   Status
---------- ------------- ------------ ------------ --------- --------
svm1       eng_users     ""           ntfs         enable    normal
svm1       eng_users     bob          ntfs         enable    normal
svm1       eng_users     susan        ntfs         enable    normal
svm1       engineering   ""           ntfs         enable    normal
svm1       svm1_root     ""           ntfs         enable    normal
5 entries were displayed.
cluster1::> volume qtree show -qtree bob -instance

                      Vserver Name: svm1
                       Volume Name: eng_users
                        Qtree Name: bob
                        Qtree Path: /vol/eng_users/bob
                    Security Style: ntfs
                       Oplock Mode: enable
                  Unix Permissions:
                          Qtree Id: 1
                      Qtree Status: normal
                     Export Policy: default
        Is Export Policy Inherited: true

cluster1::>

3.4 Connect to the SVM from a client

The SVM svm1 is up and running and is configured for NFS and CIFS access, so it's time to validate that
everything is working properly by mounting the NFS export on a Linux host and the CIFS share on a
Windows host. You will want to complete both parts of this section so you can see that both hosts are
able to seamlessly access the volume and its files.

Connect a Windows client from the GUI:

In this part of the lab section we will demonstrate connecting the Windows client jumphost to the CIFS
share \\svm1\nsroot using the Windows GUI.

1) On the Windows host jumphost open Windows Explorer by clicking on the folder icon on the
taskbar.


1) In Windows Explorer click on Computer.


2) Click on Map network drive to launch the Map Network Drive wizard.


1) Set the Drive and Folder fields as shown, then click the Finish button.


A new Windows Explorer window opens.

1) Note that the engineering volume we created in section 3.3 is visible at the top of the nsroot
share, which points to the root of the namespace. If we created another volume on svm1 right now
and mounted it under the root of the namespace then that new volume would instantly become
visible in this share and to jumphost. Double-click on the engineering folder to open it.


1) Notice that engineering contains the users folder we earlier junctioned into our namespace to
represent the volume eng_users.
2) Inside engineering create a text file named cifs.txt.
3) Edit cifs.txt, enter some text (make sure you put a carriage return at the end of the line, or else
when we later view the contents of this file on Linux the command shell prompt will appear on the
same line as the file contents) and save the file to verify that write access is working.

Connect a Linux client from the command line:

In this part of the lab section we will demonstrate connecting a Linux client to the NFS volume svm1:/
using the Linux command line. Follow the instructions in section 1.6 to open PuTTY and connect to the
system rhel1.
Log in as the user root with the password Netapp1!, then issue the following command to see that we
currently have no NFS volumes mounted on this Linux host.
[root@rhel1 /]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root
                      11877388   4635412   6638636  42% /
tmpfs                   510320       112    510208   1% /dev/shm
/dev/sda1               495844     37739    432505   9% /boot
[root@rhel1 /]#

Create a mountpoint and mount the NFS export corresponding to the root of our SVM's namespace on
that mountpoint. When you run the df command again after this you'll see that the NFS export svm1:/ is
mounted on our Linux host as /svm1.


[root@rhel1 /]# mkdir /svm1
[root@rhel1 /]# echo "svm1:/ /svm1 nfs rw,defaults 0 0" >> /etc/fstab
[root@rhel1 /]# grep svm1 /etc/fstab
svm1:/ /svm1 nfs rw,defaults 0 0
[root@rhel1 /]# mount -a
[root@rhel1 /]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root
                      11877388   4639900   6634148  42% /
tmpfs                   510320       100    510220   1% /dev/shm
/dev/sda1               495844     37739    432505   9% /boot
svm1:/                   19456       128     19328   1% /svm1
[root@rhel1 /]#

Navigate into the /svm1 directory and notice that you can see the engineering volume that we previously
junctioned into the SVM's namespace. Navigate into engineering and verify that you can access and
create files.
NOTE: The output shown here assumes that you have already performed the Windows client connection
steps found earlier in this section. When you cat the cifs.txt file, if the shell prompt winds up on the same
line as the file output, it indicates that you forgot to include a newline at the end of the file when you
created it on Windows.
[root@rhel1 /]# cd /svm1
[root@rhel1 svm1]# ls
engineering
[root@rhel1 svm1]# cd engineering
[root@rhel1 engineering]# ls
cifs.txt  users
[root@rhel1 engineering]# cat cifs.txt
write test from jumphost
[root@rhel1 engineering]# echo "write test from rhel1" > nfs.txt
[root@rhel1 engineering]# cat nfs.txt
write test from rhel1
[root@rhel1 engineering]# ll
total 3
-rwxrwxrwx 1 root bin    24 Jul 25 16:20 cifs.txt
-rwxrwxrwx 1 root root   22 Jul 25 16:27 nfs.txt
drwxrwxrwx 1 root root 4096 Jul 25 16:10 users
[root@rhel1 engineering]#

You may be wondering why the cifs.txt file shows a group membership of bin rather than root like the
nfs.txt file. This is the result of a bug in RHEL and/or Data ONTAP. For more information see BURT
723323.


3.5

NFS Exporting Qtrees (Optional)

New in clustered Data ONTAP 8.2.1 is the ability to NFS export qtrees. This optional section explains how
to configure qtree exports and will demonstrate how to set different export rules for a given qtree. For this
exercise we will be working with the qtrees we created in section 3.3.
Qtrees had many capabilities in Data ONTAP 7-mode that have been significantly pared back in cluster
mode. Qtrees still exist in cluster mode, but their purpose is now essentially limited to quota
management, with most other 7-mode qtree features, including NFS exports, the exclusive purview
of volumes. This functionality change created challenges for 7-mode customers with large numbers of
NFS qtree exports who were trying to transition to cluster mode and could not convert those qtrees to
volumes because they would exceed clustered Data ONTAP's maximum volume limit.
The introduction of qtree NFS exports in clustered Data ONTAP 8.2.1 resolves this problem. NetApp
continues to recommend that customers favor volumes over qtrees in cluster mode whenever practical,
but customers requiring large numbers of qtree NFS exports now have a supported solution under
clustered Data ONTAP.
While this section provides both graphical and command line methods for configuring qtree NFS exports,
some configuration steps can only be accomplished via the command line.
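
As a preview, the command line flow you are about to walk through boils down to three commands, each of which is shown in detail below: create the export policy, add a rule to it, and apply the policy to the qtree.

cluster1::> vserver export-policy create -vserver svm1 -policyname rhel1-only
cluster1::> vserver export-policy rule create -vserver svm1 -policyname rhel1-only
-clientmatch 192.168.0.12 -rorule any -rwrule any -superuser any -anon 65534 -ruleindex 1
cluster1::> volume qtree modify -vserver svm1 -volume eng_users -qtree susan
-export-policy rhel1-only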

To perform this section's tasks from the GUI:

We will begin by creating a new export policy configured with rules that allow NFS access from the
Linux host rhel1.

1) In System Manager select the Storage Virtual Machines tab and then go to cluster1->svm1->Policies->Export Policies.
2) Click the Create Policy button.

1) Complete the Policy Name field as shown and click the Add button.

1) Set the Client Specification to 192.168.0.12 and then click the OK button.


1) The fields in the Create Export Policy window should now be populated as in the screenshot.
Click the Create button.


Now we need to apply this new export policy to the qtree. System Manager 3.1 does not support this
capability so we will have to use the clustered Data ONTAP command line. Open a PuTTY connection to
cluster1 following the instructions from section 1.6. Log in using the username admin and the password
Netapp1!, then enter the following commands.
Produce a list of svm1's export policies and then a list of its qtrees:
cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default
svm1             rhel1-only
2 entries were displayed.

cluster1::> volume qtree show
Vserver    Volume        Qtree        Style    Oplocks   Status
---------- ------------- ------------ -------- --------- --------
svm1       eng_users     ""           ntfs     enable    normal
svm1       eng_users     bob          ntfs     enable    normal
svm1       eng_users     susan        ntfs     enable    normal
svm1       engineering   ""           ntfs     enable    normal
svm1       svm1_root     ""           ntfs     enable    normal
5 entries were displayed.

cluster1::>

Apply the rhel1-only export policy to the susan qtree.


cluster1::> volume qtree modify -vserver svm1 -volume eng_users -qtree susan
-export-policy rhel1-only
cluster1::>

Display the configuration of the susan qtree. Notice the Export Policy field shows that this qtree is
using the rhel1-only export policy.
cluster1::> volume qtree show -vserver svm1 -volume eng_users -qtree susan
Vserver Name: svm1
Volume Name: eng_users
Qtree Name: susan
Qtree Path: /vol/eng_users/susan
Security Style: ntfs
Oplock Mode: enable
Unix Permissions: -
Qtree Id: 2
Qtree Status: normal
Export Policy: rhel1-only
Is Export Policy Inherited: false
cluster1::>

Produce a report showing the export policy assignments for all the volumes and qtrees that belong to
svm1.


cluster1::> volume qtree show -vserver svm1 -fields export-policy
vserver volume      qtree export-policy
------- ----------- ----- -------------
svm1    eng_users   ""    default
svm1    eng_users   bob   default
svm1    eng_users   susan rhel1-only
svm1    engineering ""    default
svm1    svm1_root   ""    default
5 entries were displayed.

cluster1::>

Now we need to validate that the more restrictive export policy that we've applied to the qtree susan is
working as expected. If you still have an active PuTTY session open to the Linux host rhel1 then
bring that window up now; otherwise open a new PuTTY session to that host (username = root,
password = Netapp1!). Run the following commands to verify that you can still access the susan qtree
from rhel1.
[root@rhel1 ~]# cd /svm1/engineering/users
[root@rhel1 users]# ls
bob susan
[root@rhel1 users]# cd susan
[root@rhel1 susan]# echo "hello" > rhel1.txt
[root@rhel1 susan]# cat rhel1.txt
hello
[root@rhel1 susan]#

Now open a PuTTY connection to the Linux host rhel2 (again, username = root and password =
Netapp1!). This host should be able to access all the volumes and qtrees in the svm1 namespace
*except* susan, which should give a permission denied error because that qtree's associated export
policy only grants access to the host rhel1.
[root@rhel2 ~]# mkdir /svm1
[root@rhel2 ~]# mount svm1:/ /svm1
[root@rhel2 ~]# cd /svm1/engineering/users
[root@rhel2 users]# ls
bob susan
[root@rhel2 users]# cd susan
bash: cd: susan: Permission denied
[root@rhel2 users]# cd bob
[root@rhel2 bob]#

To perform this section's tasks from the command line:

We first need to create a new export policy and configure it with a rule so that only the Linux host rhel1 will
be granted access to the associated volume and/or qtree. Start by creating the export policy.
cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default

cluster1::> vserver export-policy create -vserver svm1 -policyname rhel1-only

cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default
svm1             rhel1-only
cluster1::>


Next add a rule to the policy so that only the Linux host rhel1 will be granted access.
cluster1::> vserver export-policy rule show -vserver svm1 -policyname rhel1-only
There are no entries matching your query.

cluster1::> vserver export-policy rule create -vserver svm1 -policyname rhel1-only
-clientmatch 192.168.0.12 -rorule any -rwrule any -superuser any -anon 65534
-ruleindex 1

cluster1::> vserver export-policy rule show
             Policy          Rule    Access   Client                RO
Vserver      Name            Index   Protocol Match                 Rule
------------ --------------- ------  -------- --------------------- ---------
svm1         default         1       any      0.0.0.0/0             any
svm1         rhel1-only      1       any      192.168.0.12          any

cluster1::> vserver export-policy rule show -vserver svm1 -policyname rhel1-only
-instance

Vserver: svm1
Policy Name: rhel1-only
Rule Index: 1
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 192.168.0.12
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: any
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
cluster1::>

The remaining steps for applying and testing the rhel1-only export policy against the exported susan
qtree are exactly the same as the command line steps shown under the "To perform this section's tasks
from the GUI" heading found earlier in this section of the lab guide (section 3.5). Please complete those
command line instructions now.


4 Create and Mount a LUN


Expected Completion Time: 50 Minutes
This section of the lab is optional, and includes instructions for mounting a LUN on Windows (section 4.1)
and Linux (section 4.2). If you choose to complete this section you must first work through the Create a
Storage Virtual Machine for iSCSI steps below, and then complete section 4.1 or 4.2 as appropriate for
your platform of interest. The completion time estimate assumes you complete only one of those two
sections. You are welcome to complete both if you choose, but in that case you should plan on needing
about 90 minutes to complete all of section 4.
In section 3 we explored the concept of a Storage Virtual Machine (SVM), known in previous versions of
clustered Data ONTAP as a Vserver, and then created an SVM and configured it to serve data over NFS
and CIFS. If you skipped over that section of the lab guide then you should consider reviewing the
introductory text found at the beginning of sections 3, 3.2, and 3.3 before you proceed further here as this
section builds on concepts described there.
In this section we are going to create another SVM and configure it for SAN protocols, which in the case
of this lab means we are going to configure the SVM for iSCSI since our virtualized lab does not support
FC. The configuration steps for iSCSI and FC are similar so the information provided here is also useful
for FC deployment. In this section we will create a new SVM, configure it for iSCSI, create a LUN for
Windows and/or a LUN for Linux, and then mount the LUN(s) on their respective hosts.
NetApp supports configuring an SVM to serve data over both SAN and NAS protocols, but it is quite
common to see people use separate SVMs for each in order to separate administrative responsibility, or
for architectural and operational clarity. For example, SAN protocols do not support LIF failover, so you
cannot use NAS LIFs to support SAN protocols; you must instead create dedicated LIFs just for SAN.
Implementing separate SVMs for SAN and NAS can in this case simplify the operational complexity of
each SVM's configuration, making each easier to understand and manage, but ultimately whether to mix
or separate is a customer decision and not a NetApp recommendation.
Since SAN LIFs do not support migration to different nodes, an SVM must have dedicated SAN LIFs on
every node that you want to be able to service SAN requests, and you must utilize MPIO and ALUA to
manage the available paths to the LUNs; in the event of a path disruption, MPIO and ALUA compensate
by re-routing the LUN communication over an alternate controller path (i.e., over a different SAN LIF).
NetApp best practice is to configure at least one SAN LIF per storage fabric/network on each node in the
cluster so that all nodes can provide a path to the LUNs. In large clusters where this would result in the
presentation of a large number of paths for a given LUN, NetApp recommends that you use portsets
to limit the LUN to seeing no more than 8 LIFs. In this lab our cluster contains two nodes connected to a
single storage network, but we will still be configuring a total of 4 SAN LIFs simply because it is common
to see real world implementations with 2 paths per node; we'll just pretend that two of the SAN LIFs are
connected to a different storage network.
This section of the lab is designed to allow you to create and mount a LUN for just Windows, just Linux, or
both as you wish. Both the Windows and Linux LUN creation steps require that you first complete the
Create a Storage Virtual Machine for iSCSI steps that follow. If you want to create a Windows LUN you
will then need to complete section 4.1, and if you want to create a Linux LUN you will then need to
complete section 4.2. Both sections 4.1 and 4.2 can be safely completed in the same lab.

Create a Storage Virtual Machine for iSCSI


In this section we will create a new SVM named svmluns on our cluster. We will create the SVM,
configure it for iSCSI, create four data LIFs to support LUN access to the SVM (two on each cluster
node), and then add those four LIFs to a portset, which is a set of LIFs that are authorized to access a
particular LUN.
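
For reference, the portset that these steps build looks like the following when created from the clustered Data ONTAP command line; this exact command appears again in the command line portion of this section, so treat it here only as an illustration of what a portset is:

cluster1::> portset create -vserver svmluns -portset iscsi_pset_1 -protocol iscsi -port-name
cluster1-01_iscsi_lif_1,cluster1-01_iscsi_lif_2,cluster1-02_iscsi_lif_1,cluster1-02_iscsi_lif_2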


To perform this section's tasks from the GUI:

Return to the System Manager window.

1) Open the Storage Virtual Machines tab.


2)

Select cluster1.

3) Click the Create button to launch the Storage Virtual Machine Setup wizard.
Fill out the Storage Virtual Machine details in the setup wizard. Note that the wizard window doesn't
include scrollbars so you may need to expand the System Manager window in order to see all the fields.


1) Complete the fields as shown. Note that we are using the same aggregate here that is hosting
the SVM svm1 that we created in section 3. Multiple SVMs can share the same aggregate.
2) Click Submit & Continue to move to the next step in the wizard.
Note that in your lab the list of available Data Protocols may differ somewhat from what is shown in the
preceding screenshot, depending on what protocol licenses you entered when setting up your cluster. If
you used the cluster setup script from section 2.1.2 to create your cluster then all protocols will be
available.


1) Configure the LIFs Per Node and IP address fields as shown, then click the Review or Modify
LIF configuration (Advanced Settings) checkbox. Note that the checkbox will not be selectable
until after you finish filling in all the IP address related fields as shown.
The bottom half of the wizard window now displays configuration details for the 4 LIFs it plans to
configure (2 LIFs per cluster node).

1) Review the settings for the LIFs for your cluster and make sure that they match those shown in
the screenshot. If any of the settings aren't correct you can double-click on the line in question
and change its settings. There should be a LIF assigned to port e0d and e0e on each node.
2) Click Submit & Continue to advance the wizard.


1) Complete the fields as shown, using Netapp1! as the SVM Administrator password. Then click
the Submit & Continue button.


1) Click OK to finish the Storage Virtual Machine Setup Wizard.

The new SVM svmluns now exists in the cluster.


1) The new svmluns SVM now shows up under the Storage Virtual Machines tab.
2) The green box around the iSCSI protocol indicates that iSCSI is enabled on the SVM.

To perform this section's tasks from the command line:


If you do not already have a PuTTY session open to cluster1, open one now following the instructions in
section 1.6 and enter the following commands.
Display the available aggregates so you can decide which one you want to use to host the root volume for
the SVM we will be creating.
cluster1::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_01
            7.98GB   381.6MB   95% online       1 cluster1-01      raid_dp,
                                                                   normal
aggr0_cluster1_02
            7.98GB   381.6MB   95% online       1 cluster1-02      raid_dp,
                                                                   normal
aggr1_cluster1_01
           102.3GB   101.3GB    1% online       3 cluster1-01      raid_dp,
                                                                   normal
aggr1_cluster1_02
           102.3GB   102.3GB    0% online       0 cluster1-02      raid_dp,
                                                                   normal
4 entries were displayed.
cluster1::>

Create the SVM svmluns on aggregate aggr1_cluster1_01. Note that the clustered Data ONTAP
command line syntax still refers to storage virtual machines as vservers.


cluster1::> vserver create -vserver svmluns -rootvolume svmluns_root -aggregate
aggr1_cluster1_01 -language C.UTF-8 -rootvolume-security-style unix -ns-switch
file,nis -nm-switch file -snapshot-policy default
[Job 50] Job is queued: Create svmluns.
[Job 50] Creating root volume
[Job 50] Job succeeded:
Vserver creation completed
cluster1::>

Add the iSCSI protocol to the SVM svmluns:


cluster1::> vserver iscsi create -vserver svmluns
cluster1::> vserver modify -vserver svmluns -allowed-protocols iscsi
cluster1::> vserver show -vserver svmluns
Vserver: svmluns
Vserver Type: data
Vserver UUID: 3261535b-0304-11e3-83c0-123478563412
Root Volume: svmluns_root
Aggregate: aggr1_cluster1_01
Name Service Switch: file, nis
Name Mapping Switch: file
NIS Domain: -
Root Volume Security Style: unix
LDAP Client: -
Default Volume Language Code: C.UTF-8
Snapshot Policy: default
Comment:
Quota Policy: default
List of Aggregates Assigned: -
Limit on Maximum Number of Volumes allowed: unlimited
Vserver Admin State: running
Allowed Protocols: iscsi
Disallowed Protocols: nfs, cifs, fcp, ndmp
Is Vserver with Infinite Volume: false
QoS Policy Group: -
cluster1::>

Create 4 SAN LIFs for the SVM svmluns, 2 per node. Don't forget you can save some typing here by
using the up arrow to recall previous commands that you can edit and then execute.
cluster1::> network interface create -vserver svmluns -lif cluster1-01_iscsi_lif_1
-role data -data-protocol iscsi -home-node cluster1-01 -home-port e0d -address
192.168.0.133 -netmask 255.255.255.0 -failover-policy disabled -firewall-policy data
Info: Your interface was created successfully; the routing group
d192.168.0.0/24 was created
cluster1::> network interface create -vserver svmluns -lif cluster1-01_iscsi_lif_2
-role data -data-protocol iscsi -home-node cluster1-01 -home-port e0e -address
192.168.0.134 -netmask 255.255.255.0 -failover-policy disabled -firewall-policy data
cluster1::> network interface create -vserver svmluns -lif cluster1-02_iscsi_lif_1
-role data -data-protocol iscsi -home-node cluster1-02 -home-port e0d -address
192.168.0.135 -netmask 255.255.255.0 -failover-policy disabled -firewall-policy data
cluster1::> network interface create -vserver svmluns -lif cluster1-02_iscsi_lif_2
-role data -data-protocol iscsi -home-node cluster1-02 -home-port e0e -address
192.168.0.136 -netmask 255.255.255.0 -failover-policy disabled -firewall-policy data
cluster1::>

Now create a Management Interface LIF for the SVM.


cluster1::> network interface create -vserver svmluns -lif svmluns_admin_lif1
-role data -data-protocol none -home-node cluster1-01 -home-port e0c -address
192.168.0.137 -netmask 255.255.255.0 -failover-policy nextavail -firewall-policy mgmt
cluster1::>

Display a list of the LIFs in the cluster.


cluster1::> network interface show
            Logical      Status     Network            Current       Current Is
Vserver     Interface    Admin/Oper Address/Mask       Node          Port    Home
----------- ------------ ---------- ------------------ ------------- ------- ----
cluster1
            cluster_mgmt up/up      192.168.0.101/24   cluster1-01   e0c     true
cluster1-01
            clus1        up/up      169.254.25.15/16   cluster1-01   e0a     true
            clus2        up/up      169.254.137.89/16  cluster1-01   e0b     true
            mgmt1        up/up      192.168.0.111/24   cluster1-01   e0c     true
cluster1-02
            clus1        up/up      169.254.149.26/16  cluster1-02   e0a     true
            clus2        up/up      169.254.159.43/16  cluster1-02   e0b     true
            mgmt1        up/up      192.168.0.112/24   cluster1-02   e0c     true
svm1
            svm1_cifs_nfs_lif1
                         up/up      192.168.0.131/24   cluster1-01   e0c     true
            svm1_cifs_nfs_lif2
                         up/up      192.168.0.132/24   cluster1-02   e0c     true
svmluns
            cluster1-01_iscsi_lif_1
                         up/up      192.168.0.133/24   cluster1-01   e0d     true
            cluster1-01_iscsi_lif_2
                         up/up      192.168.0.134/24   cluster1-01   e0e     true
            cluster1-02_iscsi_lif_1
                         up/up      192.168.0.135/24   cluster1-02   e0d     true
            cluster1-02_iscsi_lif_2
                         up/up      192.168.0.136/24   cluster1-02   e0e     true
            svmluns_admin_lif1
                         up/up      192.168.0.137/24   cluster1-01   e0c     true
14 entries were displayed.
cluster1::>

Display detailed information for the LIF cluster1-01_iscsi_lif_1.


cluster1::> network interface show -lif cluster1-01_iscsi_lif_1 -instance
Vserver Name: svmluns
Logical Interface Name: cluster1-01_iscsi_lif_1
Role: data
Data Protocol: iscsi
Home Node: cluster1-01
Home Port: e0d
Current Node: cluster1-01
Current Port: e0d
Operational Status: up
Extended Status: -
Is Home: true
Network Address: 192.168.0.133
Netmask: 255.255.255.0
Bits in the Netmask: 24
IPv4 Link Local: -
Routing Group Name: d192.168.0.0/24
Administrative Status: up
Failover Policy: disabled
Firewall Policy: data
Auto Revert: false
Fully Qualified DNS Zone Name: none
DNS Query Listen Enable: false
Failover Group Name: -
FCP WWPN: -
Address family: ipv4
Comment: -
cluster1::>

Create a portset named iscsi_pset_1 for the svmluns SVM and add the newly created SAN LIFs to the
portset. Note that you can save yourself some typing by taking advantage of command line completion
when entering the port-name list.
cluster1::> portset create -vserver svmluns -portset iscsi_pset_1 -protocol iscsi -port-name
cluster1-01_iscsi_lif_1,cluster1-01_iscsi_lif_2,cluster1-02_iscsi_lif_1,cluster1-02_iscsi_lif_2

cluster1::> portset show
Vserver   Portset      Protocol Port Names               Igroups
--------- ------------ -------- ------------------------ ------------
svmluns   iscsi_pset_1 iscsi    cluster1-01_iscsi_lif_1, -
                                cluster1-01_iscsi_lif_2,
                                cluster1-02_iscsi_lif_1,
                                cluster1-02_iscsi_lif_2
cluster1::>

Display a list of all the volumes on the cluster to see the root volume for the svmluns SVM.
cluster1::> volume show
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
cluster1-01
          vol0         aggr0_cluster1_01
                                    online     RW       7.56GB     5.47GB   27%
cluster1-02
          vol0         aggr0_cluster1_02
                                    online     RW       7.56GB     5.50GB   27%
svm1      engineering  aggr1_cluster1_01
                                    online     RW          1GB    972.6MB    5%
svm1      eng_users    aggr1_cluster1_01
                                    online     RW          1GB    972.6MB    5%
svm1      svm1_root    aggr1_cluster1_01
                                    online     RW         20MB    18.88MB    5%
svmluns   svmluns_root aggr1_cluster1_01
                                    online     RW         20MB    18.89MB    5%
6 entries were displayed.
cluster1::>

4.1

Create, Map, and Mount a Windows LUN

In the preceding section we created a new SVM and configured it for iSCSI. In the following sub-sections we will
perform the remaining steps needed to configure and use a LUN under Windows:
1) Gather the iSCSI Initiator Name of the Windows client.
2) Create a thin provisioned Windows volume, create a thin provisioned Windows LUN within that
volume, and map the LUN so it can be accessed by the Windows client.
3) Mount the LUN on a Windows client leveraging multi-pathing.


You must complete all of the subsections of this section in order to use the LUN from the Windows client.

4.1.1 Gather the Windows Client iSCSI Initiator Name


We need to determine the Windows client's iSCSI initiator name so that when we create the LUN we can
set up an appropriate initiator group to control access to the LUN.

This section's tasks must be performed from the GUI:

On the desktop of the Windows client named jumphost (the main Windows host you use in the lab) click
the Start button and navigate to Administrative Tools->iSCSI Initiator to open the iSCSI Initiator
Properties window.


1) Click the Configuration tab and note the value in the Initiator Name box (highlighted in the
screenshot). The value is:
iqn.1991-05.com.microsoft:jumphost.demo.netapp.com
You can highlight the value in your iSCSI Initiator Properties window and use Ctrl-c to copy it for
later use.
2) Click the Cancel button to close the window.
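
If you prefer the command line, the initiator name can usually also be read from an elevated Command Prompt on jumphost by running the iscsicli utility with no arguments; the node name appears in the banner it prints. Treat this as an optional cross-check rather than a lab step, since the exact banner text varies by Windows version:

C:\> iscsicli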


4.1.2 Create and Map a Windows LUN


We will now create a new thin provisioned Windows LUN named windows.lun in the volume winluns on
the SVM svmluns. We will also create an initiator igroup for the LUN and populate it with the Windows
host jumphost. An initiator group, or igroup, defines a list of the Fibre Channel WWPNs or iSCSI node
names of the hosts that are permitted to see and access the associated LUNs.

To perform this section's tasks from the GUI:


Return to the System Manager window.

1) Open the Storage Virtual Machines tab.


2) Navigate to the cluster1->svmluns SVM.
3) Under svmluns navigate to Storage->LUNs.
4) Click Create to launch the Create LUN wizard.


1) Click Next to advance to the next step in the wizard.


1) Set the fields as shown in the screenshot and then click the Next button to advance the wizard.
Note that we are creating a thin provisioned LUN here; if we created the LUN without setting this
Thin Provisioned checkbox then the total size of the LUN would get pre-allocated in the volume.
By setting thin provisioning here the LUN will only allocate space as it actually needs to consume
it.
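
For reference, the command line portion of this section achieves the same thin provisioned result with the -space-reserve disabled option on lun create, as shown later in this guide:

cluster1::> lun create -vserver svmluns -volume winluns -lun windows.lun -size 200MB
-ostype windows_2008 -space-reserve disabled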


1) Choose Create a new flexible volume in, populate the fields as shown, and then click the Next
button.


1) In the Initiators Mapping Window click the Add Initiator Group button.


1) Fill out the fields as shown in the screenshot and then click the Choose button to select the
Portset.


1) Select the portset entry for iscsi_pset_1.


2) Click OK to confirm the selection.


1) At this point all of the fields in the Create Initiator Group window should appear as shown. Click
the Initiators tab.


1) Click the Add button.


1) Enter the iSCSI initiator name for the Windows host jumphost that we gathered in section 4.1.1;
that initiator name was iqn.1991-05.com.microsoft:jumphost.demo.netapp.com. If it is still in
your copy/paste buffer you can paste the value in here by using Ctrl-v. Afterwards click the OK
button.


1) Click the Create button to finish creating the Initiator Group.


1) You should see a message stating that the winigrp initiator group was created successfully.
Click OK to acknowledge the message.


1) Make sure you select the checkbox so that the LUN will be mapped to this igroup.
2) Click the Next button.


1) We are not going to set any Quality of Service properties for this LUN, so just click the Next
button.


1) Review the settings and if everything is correct click the Next button.


1) When the wizard completes click the Finish button to exit.


You should now see the new LUN under the clusters LUN list.


To perform this section's tasks from the command line:


If you do not already have a PuTTY connection open to cluster1 then please open one now following
the instructions from section 1.6.
Create the volume winluns to host the Windows LUN we will be creating in a later step:
cluster1::> volume create -vserver svmluns -volume winluns -aggregate
aggr1_cluster1_01 -size 206.2MB -percent-snapshot-space 0 -snapshot-policy none
-space-guarantee none -autosize-mode grow -nvfail on
[Job 161] Job is queued: Create winluns.
[Job 161] Job succeeded: Successful

cluster1::> volume show
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
cluster1-01
          vol0         aggr0_cluster1_01
                                    online     RW       7.56GB     5.22GB   30%
cluster1-02
          vol0         aggr0_cluster1_02
                                    online     RW       7.56GB     5.25GB   30%
svm1      engineering  aggr1_cluster1_01
                                    online     RW          1GB    972.6MB    5%
svm1      eng_users    aggr1_cluster1_01
                                    online     RW          1GB    972.6MB    5%
svm1      svm1_root    aggr1_cluster1_01
                                    online     RW         20MB    18.88MB    5%
svmluns   svmluns_root aggr1_cluster1_01
                                    online     RW         20MB    18.88MB    5%
svmluns   winluns      aggr1_cluster1_01
                                    online     RW      206.2MB    206.6MB    0%
7 entries were displayed.
cluster1::>

Create the Windows LUN named windows.lun:


cluster1::> lun create -vserver svmluns -volume winluns -lun windows.lun -size 200MB
-ostype windows_2008 -space-reserve disabled
Created a LUN of size 204m (213857280)

cluster1::> lun modify -vserver svmluns -volume winluns -lun windows.lun -comment
"Windows LUN"

cluster1::> lun show
Vserver   Path                            State   Mapped   Type         Size
--------- ------------------------------- ------- -------- ------------ --------
svmluns   /vol/winluns/windows.lun        online  unmapped windows_2008 204.0MB
cluster1::>

Display a list of the defined igroups and portsets, then create a new igroup named winigrp, bound to the
existing portset iscsi_pset_1, that we will use to manage access to the new LUN, adding the Windows
client's initiator name to the igroup as part of its creation.


cluster1::> igroup show
This table is currently empty.

cluster1::> portset show
Vserver   Portset      Protocol Port Names               Igroups
--------- ------------ -------- ------------------------ ------------
svmluns   iscsi_pset_1 iscsi    cluster1-01_iscsi_lif_1, -
                                cluster1-01_iscsi_lif_2,
                                cluster1-02_iscsi_lif_1,
                                cluster1-02_iscsi_lif_2

cluster1::> igroup create -vserver svmluns -igroup winigrp -protocol iscsi -ostype
windows -portset iscsi_pset_1 -initiator
iqn.1991-05.com.microsoft:jumphost.demo.netapp.com

cluster1::> igroup show
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
svmluns   winigrp      iscsi    windows  iqn.1991-05.com.microsoft:jumphost.
                                         demo.netapp.com
cluster1::>

Map the LUN windows.lun to the igroup winigrp, then display a list of all the LUNs, all the mapped
LUNs, and finally a detailed report on the configuration of the LUN windows.lun.
cluster1::> lun map -vserver svmluns -volume winluns -lun windows.lun -igroup winigrp

cluster1::> lun show
Vserver   Path                            State   Mapped   Type         Size
--------- ------------------------------- ------- -------- ------------ --------
svmluns   /vol/winluns/windows.lun        online  mapped   windows_2008 204.0MB

cluster1::> lun mapped show
Vserver    Path                                      Igroup   LUN ID  Protocol
---------- ----------------------------------------- -------- ------- --------
svmluns    /vol/winluns/windows.lun                  winigrp        0 iscsi

cluster1::> lun show -lun windows.lun -instance
Vserver Name: svmluns
LUN Path: /vol/winluns/windows.lun
Volume Name: winluns
Qtree Name: ""
LUN Name: windows.lun
LUN Size: 204.0MB
OS Type: windows_2008
Space Reservation: disabled
Serial Number: BLH0T?DDsJWb
Comment: Windows LUN
Space Reservations Honored: false
Space Allocation: disabled
State: online
LUN UUID: e8a93e14-4730-49e0-bd3f-5c4d7fbabb6a
Mapped: mapped
Block Size: 512
Device Legacy ID: -
Device Binary ID: -
Device Text ID: -
Read Only: false
Inaccessible Due to Restore: false
Used Size: 0
Maximum Resize Size: 502.0GB
Creation Time: 2/18/2014 16:47:49
Class: regular
Clone: false
Clone Autodelete Enabled: false
QoS Policy Group: -
cluster1::>


4.1.3 Mount the LUN on a Windows Client


The final step is to mount the LUN on the Windows client. We will be using MPIO/ALUA to support
multiple paths to the LUN using the SAN LIFs we configured earlier on the svmluns SVM. Data
ONTAP DSM for Windows MPIO is the multi-pathing software we will be using for this lab, and that
software is already installed on jumphost.

This section's tasks must be performed from the GUI:

On the desktop of the Windows client named jumphost, click the Start button and navigate to
Administrative Tools->MPIO to open the MPIO Properties window. We are going to validate that the
Multi-Path I/O (MPIO) software is working properly before we attempt to mount the LUN.

1) Click the Discover Multi-Paths Tab.


1) If the Add support for iSCSI devices checkbox is NOT greyed out then MPIO is NOT configured
properly. If this is the case then check that checkbox, click the Add button, and then click Yes on
the reboot dialog to reboot your Windows host. After the reboot completes return here to verify
that the Add support for iSCSI devices checkbox is now greyed out. Once again, under a
proper MPIO configuration this checkbox should be greyed out.
2) Click OK to close the window.
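
As an optional command line cross-check, on Windows Server 2008 R2 and later the mpclaim utility can list the disks that MPIO has claimed and, for a given disk number, show its paths and load balance policy. This is not part of the lab steps, and the output format varies by Windows version:

C:\> mpclaim -s -d
C:\> mpclaim -s -d 0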
Now we will begin the process of connecting to the LUNs. On the desktop of jumphost click the Start
button again and navigate to Administrative Tools->iSCSI Initiator to open the iSCSI Initiator Properties
window.


1) Note that there are currently no targets listed in the Discovered targets text window as we have
not yet mapped any iSCSI targets to this host. Click the Discovery tab.


1) We are now going to manually add a target portal to jumphost. Click the Discover Portal
button to open the Discover Target Portal dialog window.

1) Set the IP address to 192.168.0.133, the address we assigned to the cluster1-01_iscsi_lif_1
LIF we created earlier for the SVM named svmluns. Leave Port at the default value. Click
the OK button to continue.


1) Note that the Target portals box under the Discovery tab now shows an entry for the IP address
we specified in the preceding step.


1) Select the Targets tab.


2) Under the Discovered targets list select the name of the target. Note that the target has a status
of Inactive. Also note that the Name of the discovered target in your lab will contain a different
value than what you see in this guide; that name string is uniquely generated for each instance of
the lab. (Make a mental note of that string value as it will pop up again and again as we continue
configuring iSCSI for jumphost in later steps of this process.)
3) Click the Connect button to connect to the target.

The Connect to Target dialog opens.


1) Check the Enable multi-path checkbox, then click the Advanced button.

The Advanced Setting dialog box opens.


1) Set the Target portal IP dropdown to the value of the IP Address/Port we specified for the LIF
(should be 192.168.0.133/3260 as shown in the screenshot). Click the OK button to close the
Advanced Settings dialog box.


1) Click OK to close the Connect To Target dialog box.

Back in the iSCSI Initiator Properties window note that the status of the Discovered target has now
changed to Connected.

Thus far we have added a single path to our iSCSI LUN using the cluster1-01_iscsi_lif_1 LIF on
the node cluster1-01. We are now going to add in additional paths using each of the other SAN LIFs
we created for the SVM svmluns.


1) In the iSCSI Initiator Properties window select the target in the Discovered targets list.
2) Click the Properties button to open the Properties dialog.
This starts the sequence for adding a path for the cluster1-01_iscsi_lif_2 LIF.


1) Click the Add session button to open the Connect To Target dialog.

1) Check the Enable multi-path checkbox, then click the Advanced button to open the Advanced
Setting dialog box.


1) Set the Target portal IP dropdown to the value of the IP Address/Port we specified for the 2nd LIF
(should be 192.168.0.134/3260 as shown in the screenshot). Click the OK button to close the
Advanced Settings dialog box.


1) Click OK to close the Connect To Target dialog box.


This starts the sequence for adding a path for the cluster1-02_iscsi_lif_1 LIF.

1) Click the Add session button to open the Connect To Target dialog.


1) Check the Enable multi-path checkbox, then click the Advanced button to open the Advanced
Setting dialog box.


1) Set the Target portal IP dropdown to the value of the IP Address/Port we specified for the 3rd LIF
(should be 192.168.0.135/3260 as shown in the screenshot). Click the OK button to close the
Advanced Settings dialog box.


1) Click OK to close the Connect To Target dialog box.


This starts the sequence for adding a path for the LIF cluster1-02_iscsi_lif_2.

1) Click the Add session button to open the Connect To Target dialog.


1) Check the Enable multi-path checkbox, then click the Advanced button to open the Advanced
Setting dialog box.


1) Set the Target portal IP dropdown to the value of the IP Address/Port we specified for the 4th LIF
(should be 192.168.0.136/3260 as shown in the screenshot). Click the OK button to close the
Advanced Settings dialog box.


1) Click OK to close the Connect To Target dialog box.

At this point we have finished adding all four paths so we can move on with the rest of the
configuration process.
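
If you would like to confirm the four sessions from the storage side as well, the cluster shell can list the iSCSI sessions jumphost has established with the svmluns SVM. This is an optional check, and the output layout may differ slightly between Data ONTAP releases:

cluster1::> vserver iscsi session show -vserver svmluns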


1) The Identifiers list in the Properties dialog box now shows entries for four sessions, one for each
path we just configured. Click OK to close the Properties dialog.


1) Click OK to close the iSCSI Initiator Properties dialog.

The host should now be properly connected to the LUN using multi-pathing, so it is time to format the LUN
and build a filesystem on it. Launch Windows Server Manager by clicking the Start button on the desktop
of jumphost, and then go to Administrative Tools->Server Manager. It may take 10-20 seconds for
Server Manager to open so please be patient.


1) In Server Manager navigate to Server Manager->Storage->Disk Management.

When you launch Disk Management an Initialize Disk dialog will open informing you that you must
initialize a new disk before Logical Disk Manager can access it. If you see more than one disk listed then
MPIO has not correctly recognized that the multiple paths we set up are all for the same LUN, so please
review your steps to find and correct any configuration errors.

1) This screenshot correctly shows only a single disk. Click OK to initialize the disk.

The Disk Management window is now visible and shows a new unallocated disk as shown in the following
screenshot excerpt:


1) Right-click inside the Unallocated disk and select New Simple Volume.

1) The New Simple Volume Wizard opens. Click Next to continue.


1) Accept the defaults by clicking Next to continue.

1) Accept the defaults by clicking Next to continue.


1) Set the Volume Label to WINLUN and click Next to continue.

1) Click Finish to close out the New Simple Volume Wizard.


The new LUN is now ready as shown in the following screenshot. Before we complete this section of the
lab, let's take a look at the MPIO configuration for the new LUN we just mounted.

1) Right-click on the WINLUN volume in Disk Manager and select Properties.

The WINLUN (E:) Properties window opens.


1) In the WINLUN (E:) Properties dialog click the Hardware tab.


2) In the All disk drives list select the NETAPP LUN C-Mode Multi-Path Disk entry.
3) Click the Properties button.
The NETAPP LUN C-Mode Multi-Path Disk Device Properties window opens.


1) Click the MPIO tab.


2) Notice that we are using the Data ONTAP DSM for multi-path access rather than the Microsoft
DSM. NetApp recommends using the Data ONTAP DSM software if possible although the
Microsoft DSM is also supported.
3) The MPIO policy is set to Least Queue Depth. A number of different multi-pathing policies are
available but the configuration shown here sends LUN I/O down the path that has the fewest
outstanding I/O requests. See the More information about MPIO policies link at the bottom of
the dialog window for details about all the available policies.
4) The top two paths show both a Path State and TPG State as Active/Optimized; these paths
are connected to the node cluster1-01 and the Least Queue Depth policy makes active use of
both paths to this node. On the other hand the bottom two paths show a Path State of
Unavailable and a TPG State of Active/Unoptimized; these paths are connected to the node
cluster1-02 and only enter a Path State of Active/Optimized if the node cluster1-01
becomes unavailable or if the volume hosting the LUN migrates over to the node cluster1-02.
5) When you are finished reviewing the information in this dialog click OK to exit. If you have
changed any of the values in this dialog you may want to consider instead using the Cancel
button in order to discard those changes.


1) Back in the WINLUN (E:) Properties dialog click OK to close the window.

Finally, you may close out Server Manager.

4.2

Create, Map, and Mount a Linux LUN

Earlier in section 4 we created a new SVM and configured it for iSCSI. In the following sub-sections we will
perform the remaining steps needed to configure and use a LUN under Linux:
1) Gather the iSCSI Initiator Name of the Linux client.
2) Create a thin provisioned Linux volume, create a thin provisioned Linux LUN named linux.lun
within that volume, and map the LUN to the Linux client.
3) Mount the LUN on the Linux client.
You must complete all of the following subsections in order to use the LUN from the Linux client. Note
that there is no requirement to complete section 4.1 (the Windows LUN section) before starting this
section of the lab guide, but the screenshots and command line output shown here assume that you have;
if you did not complete section 4.1, the differences will not affect your ability to create and mount the
Linux LUN.


4.2.1 Gather the Linux Client iSCSI Initiator Name


We need to determine the Linux client's iSCSI initiator name so that we can set up an appropriate initiator
group to control access to the LUN.

This section's tasks must be performed from the command line:


If you completed section 3.4 you should already have a PuTTY connection open to the Linux host rhel1.
If you don't have a PuTTY connection open to rhel1 then open one now following the instructions found in
section 1.6 of this guide. The username will be root and the password will be Netapp1!.

Run the following command on rhel1 to find the name of its iSCSI initiator.
[root@rhel1 ~]# cd /etc/iscsi
[root@rhel1 iscsi]# ls
initiatorname.iscsi iscsid.conf
[root@rhel1 iscsi]# cat initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:rhel1.demo.netapp.com
[root@rhel1 iscsi]#

The initiator name for rhel1 is iqn.1994-05.com.redhat:rhel1.demo.netapp.com

4.2.2 Create and Map a Linux LUN


We will now create a new thin provisioned Linux LUN on the SVM svmluns under the volume linluns
and also create an initiator igroup for the LUN so that only the Linux host rhel1 can access it. An initiator
group, or igroup, defines a list of the Fibre Channel WWPNs or iSCSI node names for the hosts that are
permitted to see the associated LUNs.

To perform this section's tasks from the GUI:


Switch back to the System Manager window so we can create the LUN.


1) In System Manager open the Storage Virtual Machines tab.


2) Select the svmluns SVM.
3) Under svmluns navigate to Storage->LUNs. You may or may not see a listing presented for the
LUN windows.lun, depending on whether or not you completed the lab sections for creating a
Windows LUN.
4) Click Create to launch the Create LUN wizard.


1) Click Next to advance to the next step in the wizard.


1) Set the fields as shown in the screenshot and then click the Next button to advance the wizard.


2) Choose Create a new flexible volume in, populate the fields as shown, and then click the Next
button.


1) In the Initiators Mapping Window click the Add Initiator Group button.


1) Complete the fields in the Create Initiator Group window as shown. The name of the group
should be linigrp. When selecting the portset you can click the Choose button to select
iscsi_pset_1 from the list of existing portsets. IMPORTANT!!! Do not click the Create button
yet! Instead click the Initiators tab to continue.


2) Click the Add button.


2) Enter the iSCSI initiator name for the rhel1 host that we gathered in section 4.2.1; that initiator
name was iqn.1994-05.com.redhat:rhel1.demo.netapp.com. Then click the OK button.


2) Click the Create button to finish creating the Initiator Group.


1) You should see a message stating that the linigrp initiator group was created successfully. Click
OK to acknowledge the message.


1) Make sure you check the checkbox so that the LUN will be mapped to this igroup.
2) Click Next.


1) We are not going to set any Quality of Service properties for this LUN, so just click the Next
button.


1) Review the settings and if everything is correct click the Next button.


1) When the wizard completes click the Finish button to exit.


You should now see the new LUN under the clusters LUN list.


Our new Linux LUN exists and is mapped so that our rhel1 client can see it, but we still have one more
configuration step remaining for this LUN as follows:
New in Data ONTAP 8.2 is a space reclamation feature that allows Data ONTAP to reclaim space from a
thin provisioned LUN when the client deletes data from it, and also allows Data ONTAP to notify the client
when the LUN cannot accept writes due to lack of space on the volume. This feature is supported by
VMware ESX 5.0 and later, Red Hat Enterprise Linux 6.2 and later, and Microsoft Windows 2012. The
RHEL clients used in this lab are running version 6.3 and so we will enable the space reclamation feature
for our Linux LUN. Space reclamation can only be enabled through the Data ONTAP command line, so if
you do not already have a PuTTY session open to cluster1 then open one now following the directions
shown in section 1.6. The username will be admin and the password will be Netapp1!.
Enable space reclamation for the LUN.
cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields
space-allocation
vserver path                   space-allocation
------- ---------------------- ----------------
svmluns /vol/linluns/linux.lun disabled

cluster1::> lun modify -vserver svmluns -path /vol/linluns/linux.lun -space-allocation
enabled

cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields
space-allocation
vserver path                   space-allocation
------- ---------------------- ----------------
svmluns /vol/linluns/linux.lun enabled
cluster1::>


To perform this section's tasks from the command line:


If you do not currently have a PuTTY session open to cluster1 then open one now following the
instructions from section 1.6. The username will be admin and the password will be Netapp1!.
Create the thin provisioned volume linluns that will host the Linux LUN we will create in a later step:
cluster1::> volume create -vserver svmluns -volume linluns -aggregate
aggr1_cluster1_01 -size 206.2MB -percent-snapshot-space 0 -snapshot-policy none
-space-guarantee none -autosize-mode grow -nvfail on
[Job 59] Job is queued: Create linluns.
[Job 59] Job succeeded: Successful

cluster1::> volume show
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
cluster1-01
          vol0         aggr0_cluster1_01
                                    online     RW       7.56GB     5.08GB   32%
cluster1-02
          vol0         aggr0_cluster1_02
                                    online     RW       7.56GB     5.10GB   32%
svm1      eng_users    aggr1_cluster1_01
                                    online     RW          1GB    972.6MB    5%
svm1      engineering  aggr1_cluster1_01
                                    online     RW          1GB    972.5MB    5%
svm1      svm1_root    aggr1_cluster1_01
                                    online     RW         20MB    18.88MB    5%
svmluns   linluns      aggr1_cluster1_01
                                    online     RW      206.2MB    190.4MB    0%
svmluns   svmluns_root aggr1_cluster1_01
                                    online     RW         20MB    18.88MB    5%
svmluns   winluns      aggr1_cluster1_01
                                    online     RW      206.2MB    190.4MB    0%
8 entries were displayed.
cluster1::>

Create the thin provisioned Linux LUN linux.lun on the volume linluns:
cluster1::> lun show
Vserver   Path                            State   Mapped   Type         Size
--------- ------------------------------- ------- -------- ------------ --------
svmluns   /vol/winluns/windows.lun        online  mapped   windows_2008 204.0MB

cluster1::> lun create -vserver svmluns -volume linluns -lun linux.lun -size 200MB
-ostype linux -space-reserve disabled
Created a LUN of size 200m (209715200)

cluster1::> lun modify -vserver svmluns -volume linluns -lun linux.lun -comment
"Linux LUN"

cluster1::> lun show
Vserver   Path                            State   Mapped   Type         Size
--------- ------------------------------- ------- -------- ------------ --------
svmluns   /vol/linluns/linux.lun          online  unmapped linux          200MB
svmluns   /vol/winluns/windows.lun        online  mapped   windows_2008 204.0MB
2 entries were displayed.
cluster1::>


Display a list of the cluster's igroups and portsets, then create a new igroup named linigrp that we will
use to manage access to the LUN linux.lun, adding the iSCSI initiator name for the Linux host rhel1 to the
new igroup.
cluster1::> igroup show
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
svmluns   winigrp      iscsi    windows  iqn.1991-05.com.microsoft:jumphost.
                                         demo.netapp.com

cluster1::> portset show
Vserver   Portset      Protocol Port Names               Igroups
--------- ------------ -------- ------------------------ ------------
svmluns   iscsi_pset_1 iscsi    cluster1-01_iscsi_lif_1, winigrp
                                cluster1-01_iscsi_lif_2,
                                cluster1-02_iscsi_lif_1,
                                cluster1-02_iscsi_lif_2

cluster1::> igroup create -vserver svmluns -igroup linigrp -protocol iscsi -ostype
linux -portset iscsi_pset_1 -initiator iqn.1994-05.com.redhat:rhel1.demo.netapp.com

cluster1::> igroup show
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
svmluns   linigrp      iscsi    linux    iqn.1994-05.com.redhat:rhel1.demo.
                                         netapp.com
svmluns   winigrp      iscsi    windows  iqn.1991-05.com.microsoft:jumphost.
                                         demo.netapp.com
2 entries were displayed.
cluster1::>

Map the LUN linux.lun to the igroup linigrp:


cluster1::> lun map -vserver svmluns -volume linluns -lun linux.lun -igroup linigrp

cluster1::> lun show
Vserver   Path                            State   Mapped   Type         Size
--------- ------------------------------- ------- -------- ------------ --------
svmluns   /vol/linluns/linux.lun          online  mapped   linux          200MB
svmluns   /vol/winluns/windows.lun        online  mapped   windows_2008 204.0MB
2 entries were displayed.

cluster1::> lun mapped show
Vserver    Path                                      Igroup   LUN ID  Protocol
---------- ----------------------------------------- -------- ------- --------
svmluns    /vol/linluns/linux.lun                    linigrp        0 iscsi
svmluns    /vol/winluns/windows.lun                  winigrp        0 iscsi
2 entries were displayed.

cluster1::> lun show -lun linux.lun
Vserver   Path                            State   Mapped   Type         Size
--------- ------------------------------- ------- -------- ------------ --------
svmluns   /vol/linluns/linux.lun          online  mapped   linux          200MB

cluster1::> lun mapped show -lun linux.lun
Vserver    Path                                      Igroup   LUN ID  Protocol
---------- ----------------------------------------- -------- ------- --------
svmluns    /vol/linluns/linux.lun                    linigrp        0 iscsi

cluster1::> lun show -lun linux.lun -instance
Vserver Name: svmluns
LUN Path: /vol/linluns/linux.lun
Volume Name: linluns
Qtree Name: ""
LUN Name: linux.lun
LUN Size: 200MB
OS Type: linux
Space Reservation: disabled
Serial Number: BLH0T?DDsJWc
Comment: Linux LUN
Space Reservations Honored: false
Space Allocation: disabled
State: online
LUN UUID: b6cd6dc9-b021-4155-af42-ad6b9a1571c7
Mapped: mapped
Block Size: 512
Device Legacy ID: -
Device Binary ID: -
Device Text ID: -
Read Only: false
Inaccessible Due to Restore: false
Used Size: 0
Maximum Resize Size: 64.00GB
Creation Time: 2/18/2014 22:35:21
Class: regular
Clone: false
Clone Autodelete Enabled: false
QoS Policy Group: -
cluster1::>

New in Data ONTAP 8.2 is a space reclamation feature that allows Data ONTAP to reclaim space from a
thin-provisioned LUN when the client deletes data from it, and also allows Data ONTAP to notify the client
when the LUN cannot accept writes due to lack of space on the volume. This feature is supported by
VMware ESX 5.0 and later, Red Hat Enterprise Linux 6.2 and later, and Microsoft Windows 2012. The
RHEL clients used in this lab are running version 6.3, so we will enable the space reclamation feature
for our Linux LUN.
Configure the LUN to support space reclamation:
cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields space-allocation
vserver path                   space-allocation
------- ---------------------- ----------------
svmluns /vol/linluns/linux.lun disabled
cluster1::> lun modify -vserver svmluns -path /vol/linluns/linux.lun -space-allocation enabled
cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields space-allocation
vserver path                   space-allocation
------- ---------------------- ----------------
svmluns /vol/linluns/linux.lun enabled
cluster1::>

4.3.4 Mount the LUN on a Linux Client


In this section we will be using the Linux command line to configure the host rhel1 to connect to the
Linux LUN /vol/linluns/linux.lun we created in the preceding section.

This section's tasks must be performed from the command line:

The steps in this section assume some familiarity with how to use the Linux command line. If you are not
familiar with those concepts then we recommend that you skip this section of the lab.
If you do not currently have a PuTTY session open to rhel1, open one now and log in as user root with
the password Netapp1!.


The NetApp Linux Host Utilities kit has been pre-installed on both Red Hat Linux hosts in this lab, and the
iSCSI initiator name has already been configured for each host. Confirm that is the case:
[root@rhel1 ~]# rpm -qa | grep netapp
netapp_linux_host_utilities-6-1.x86_64
[root@rhel1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:rhel1.demo.netapp.com
[root@rhel1 ~]#

In the /etc/iscsi/iscsid.conf file the node.session.timeo.replacement_timeout value is
set to 5 to better support timely path failover, and the node.startup value is set to automatic so that
the system will automatically log in to the iSCSI node at startup.
[root@rhel1 ~]# grep replacement_timeout /etc/iscsi/iscsid.conf
node.session.timeo.replacement_timeout = 5
[root@rhel1 ~]# grep node.startup /etc/iscsi/iscsid.conf
# node.startup = automatic
node.startup = automatic
[root@rhel1 ~]#
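For reference only, the following sketch shows how you could enforce these two settings yourself if they
were not already present. It is purely illustrative; the lab hosts already have /etc/iscsi/iscsid.conf
configured, so there is no need to run these commands here.
# Illustrative only -- the lab hosts already have these values set.
# Lower the replacement timeout to 5 seconds for faster path failover:
[root@rhel1 ~]# sed -i 's/^node.session.timeo.replacement_timeout = .*/node.session.timeo.replacement_timeout = 5/' /etc/iscsi/iscsid.conf
# Log in to discovered iSCSI nodes automatically at startup:
[root@rhel1 ~]# sed -i 's/^node.startup = .*/node.startup = automatic/' /etc/iscsi/iscsid.conf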

We have also pre-installed the DM-Multipath packages and pre-created /etc/multipath.conf to
support multi-pathing so that the RHEL host can access the LUN using all of the SAN LIFs we created for
the svmluns SVM.
[root@rhel1 ~]# rpm -q device-mapper
device-mapper-1.02.74-10.el6.x86_64
[root@rhel1 ~]# rpm -q device-mapper-multipath
device-mapper-multipath-0.4.9-56.el6.x86_64
[root@rhel1 ~]# cat /etc/multipath.conf
# For a complete list of the default configuration values, see
# /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.defaults
# For a list of configuration options with descriptions, see
# /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.annotated
#
# REMEMBER: After updating multipath.conf, you must run
#
# service multipathd reload
#
# for the changes to take effect in multipathd
# NetApp recommended defaults
defaults {
    flush_on_last_del     yes
    max_fds               max
    queue_without_daemon  no
    user_friendly_names   no
    dev_loss_tmo          infinity
    fast_io_fail_tmo      5
}
blacklist {
    devnode "^sda"
    devnode "^hd[a-z]"
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^ccis.*"
}
devices {
    # NetApp iSCSI LUNs
    device {
        vendor                "NETAPP"
        product               "LUN"
        path_grouping_policy  group_by_prio
        features              "3 queue_if_no_path pg_init_retries 50"
        prio                  "alua"
        path_checker          tur
        failback              immediate
        path_selector         "round-robin 0"
        hardware_handler      "1 alua"
        rr_weight             uniform
        rr_min_io             128
        getuid_callout        "/lib/udev/scsi_id -g -u -d /dev/%n"
    }
}
[root@rhel1 ~]#

We now need to start the iSCSI software service on rhel1 and configure it to start automatically at boot
time. Note that a force-start is only necessary the very first time you start the iscsid service on the host.
[root@rhel1 ~]# service iscsid status
iscsid is stopped
[root@rhel1 ~]# service iscsid force-start
Starting iscsid: OK
[root@rhel1 ~]# service iscsi status
No active sessions
[root@rhel1 ~]# chkconfig iscsi on
[root@rhel1 ~]# chkconfig --list iscsi
iscsi           0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@rhel1 ~]#

Next, discover the available targets using the iscsiadm command. Note that the exact values used for the
node paths may differ in your lab from what is shown in this example, and that after running this
command there will not yet be any active iSCSI sessions because we have not yet created the
necessary device files.
[root@rhel1 ~]# iscsiadm --mode discovery --op update --type sendtargets --portal 192.168.0.133
192.168.0.133:3260,1028 iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4
192.168.0.136:3260,1031 iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4
192.168.0.135:3260,1030 iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4
192.168.0.134:3260,1029 iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4
[root@rhel1 ~]# iscsiadm --mode session
iscsiadm: No active sessions.
[root@rhel1 ~]#
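If you would like to double-check what the discovery step recorded before logging in, the node mode of
iscsiadm lists the node records that discovery created. This optional check is shown here without output,
since the records will simply mirror the portals and target name displayed above in your own lab:
# Optional: list the node records created by the discovery step (output varies per lab)
[root@rhel1 ~]# iscsiadm --mode node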

Create the devices necessary to support the discovered nodes, after which the sessions become active.
[root@rhel1 ~]# iscsiadm --mode node -l all
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4, portal: 192.168.0.133,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4, portal: 192.168.0.135,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4, portal: 192.168.0.136,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4, portal: 192.168.0.134,3260] (multiple)
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4, portal: 192.168.0.133,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4, portal: 192.168.0.135,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4, portal: 192.168.0.136,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4, portal: 192.168.0.134,3260] successful.
[root@rhel1 ~]# iscsiadm --mode session
tcp: [1] 192.168.0.133:3260,1028 iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4
tcp: [2] 192.168.0.135:3260,1030 iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4
tcp: [3] 192.168.0.136:3260,1031 iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4
tcp: [4] 192.168.0.134:3260,1029 iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4
[root@rhel1 ~]#

At this point the Linux client sees the LUN over all four paths but it does not yet understand that all four
paths represent the same LUN.
[root@rhel1 ~]# sanlun lun show
controller(7mode)/                            device     host      lun
vserver(Cmode)     lun-pathname               filename   adapter   protocol   size   mode
------------------------------------------------------------------------------------------
svmluns            /vol/linluns/linux.lun     /dev/sde   host3     iSCSI      200m   C
svmluns            /vol/linluns/linux.lun     /dev/sdd   host4     iSCSI      200m   C
svmluns            /vol/linluns/linux.lun     /dev/sdc   host5     iSCSI      200m   C
svmluns            /vol/linluns/linux.lun     /dev/sdb   host6     iSCSI      200m   C
[root@rhel1 ~]#

Since the lab includes a pre-configured /etc/multipath.conf file, we just need to start the multipathd service
to handle multipath management and configure it to start automatically at boot time.
[root@rhel1 ~]# service multipathd status
multipathd is stopped
[root@rhel1 ~]# service multipathd start
Starting multipathd daemon: OK
[root@rhel1 ~]# service multipathd status
multipathd (pid 10408) is running...
[root@rhel1 ~]# chkconfig multipathd on
[root@rhel1 ~]# chkconfig --list multipathd
multipathd      0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@rhel1 ~]#

The multipath command displays the configuration of DM-Multipath, and the multipath -ll command
displays a list of the multipath devices. DM-Multipath maintains a device file under /dev/mapper that
you use to access the multipathed LUN (in order to create a filesystem on it and to mount it); the first line
of output from the multipath -ll command lists the name of that device file (in this example
3600a0980424c4830543f444472796366). The autogenerated name for this device file will likely differ in your
copy of the lab. Also pay attention to the output of the sanlun lun show -p command, which shows
information about the Data ONTAP path of the LUN, the LUN's size, its device file name under
/dev/mapper, the multipath policy, and information about the various device paths themselves.


[root@rhel1 ~]# multipath -ll
3600a0980424c4830543f444472796366 dm-2 NETAPP,LUN C-Mode
size=200M features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 6:0:0:0 sdb 8:16 active ready running
| `- 3:0:0:0 sde 8:64 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 5:0:0:0 sdc 8:32 active ready running
  `- 4:0:0:0 sdd 8:48 active ready running
[root@rhel1 ~]# ls -l /dev/mapper
total 0
lrwxrwxrwx 1 root root      7 Aug 13 02:19 3600a0980424c4830543f444472796366 -> ../dm-2
crw-rw---- 1 root root 10, 58 Aug 10 03:08 control
lrwxrwxrwx 1 root root      7 Aug 10 03:08 vg_rhel1-lv_root -> ../dm-0
lrwxrwxrwx 1 root root      7 Aug 10 03:08 vg_rhel1-lv_swap -> ../dm-1
[root@rhel1 ~]# sanlun lun show -p
                  ONTAP Path: svmluns:/vol/linluns/linux.lun
                         LUN: 0
                    LUN Size: 200m
                        Mode: C
                 Host Device: 3600a0980424c4830543f444472796366
            Multipath Policy: round-robin 0
          Multipath Provider: Native
--------- ---------- ------- ------------ --------------------------------------------
host      vserver
path      path       /dev/   host         vserver
state     type       node    adapter      LIF
--------- ---------- ------- ------------ --------------------------------------------
up        primary    sdb     host6        cluster1-01_iscsi_lif_2
up        primary    sde     host3        cluster1-01_iscsi_lif_1
up        secondary  sdc     host5        cluster1-02_iscsi_lif_2
up        secondary  sdd     host4        cluster1-02_iscsi_lif_1
[root@rhel1 ~]#

You can see even more detail about the configuration of multipath and the LUN as a whole by running the
commands multipath -v3 -d -ll or iscsiadm -m session -P 3. As the output of these commands
is rather lengthy we have omitted it here.
The LUN is now fully configured for multipath access, so the only steps remaining before you can use the
LUN on the Linux host are to create a filesystem on it and mount it. When you run the following commands in
your lab you will need to substitute in the /dev/mapper/ string that identifies your LUN (get that string
from the output of ls -l /dev/mapper):
[root@rhel1 ~]# mkfs.ext4 /dev/mapper/3600a0980424c4830543f444472796366
mke2fs 1.41.12 (17-May-2010)
Discarding device blocks:
done
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=4 blocks, Stripe width=64 blocks
51200 inodes, 204800 blocks
10240 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
25 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729


Writing inode tables: done


Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@rhel1 ~]# mkdir /linuxlun
[root@rhel1 ~]# mount -t ext4 -o discard /dev/mapper/3600a0980424c4830543f444472796366 /linuxlun
[root@rhel1 ~]# df
Filesystem                                    1K-blocks     Used Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root                   11877388  4640532   6633516  42% /
tmpfs                                            510320      188    510132   1% /dev/shm
/dev/sda1                                        495844    37739    432505   9% /boot
/dev/mapper/3600a0980424c4830543f444472796366    198337     5646    182451   4% /linuxlun
[root@rhel1 ~]# ls /linuxlun
lost+found
[root@rhel1 ~]# echo "hello" > /linuxlun/test.txt
[root@rhel1 ~]# cat /linuxlun/test.txt
hello
[root@rhel1 ~]# ls -l /linuxlun/test.txt
-rw-r--r-- 1 root root 6 Aug 13 02:23 /linuxlun/test.txt
[root@rhel1 ~]#

The discard option shown in the mount command allows the Red Hat host to take advantage of space
reclamation for the LUN, as discussed in section 4.3.3.
To have the LUN's filesystem automatically mounted at boot time, run the following command (modified to
reflect the multipath device path being used in your instance of the lab) to add the mount information to
the /etc/fstab file. The command should be entered as a single line.
[root@rhel1 ~]# echo '/dev/mapper/3600a0980424c4830543f444472796366 /linuxlun ext4 _netdev,discard,defaults 0 0' >> /etc/fstab
[root@rhel1 ~]#
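As an optional verification sketch (the mount point /linuxlun is the one created above; no reboot is
required), you can confirm the new /etc/fstab entry works by unmounting the filesystem and letting
mount -a re-mount everything listed in fstab:
# Optional verification of the new /etc/fstab entry
[root@rhel1 ~]# umount /linuxlun
[root@rhel1 ~]# mount -a
[root@rhel1 ~]# df /linuxlun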

Appendix 1 Using the clustered Data ONTAP Command Line


If you choose to utilize the clustered Data ONTAP command line to complete portions of this lab then you
should be aware that clustered Data ONTAP supports command line completion. When entering a
command at the Data ONTAP command line you can hit the Tab key at any time mid-typing, and if you
have entered enough unique text for the command interpreter to determine what the rest of the argument
would be it will automatically fill in that text for you. For example, entering the text cluster sh and then
hitting the Tab key will automatically expand the entered command text to cluster show.
At any point mid-typing you can also enter the ? character and the command interpreter will list any
potential matches for the command string. This is a particularly useful feature if you can't remember all of
the various command line options for a given clustered Data ONTAP command; for example, to see the
list of options available for the cluster show command you can enter:


cluster1::> cluster show ?
  [ -instance | -fields <fieldname>, ... ]
  [[-node] <nodename>]            Node
  [ -eligibility {true|false} ]   Eligibility
  [ -health {true|false} ]        Health
cluster1::>

When using tab completion, if the Data ONTAP command interpreter is unable to identify a unique
expansion it will display a list of potential matches, similar to what using the ? character does.
cluster1::> cluster s
Error: Ambiguous command. Possible matches include:
  cluster setup
  cluster show
  cluster statistics
cluster1::>

The Data ONTAP commands are structured hierarchically. When you log in you are placed at the root of
that command hierarchy, but you can step into a lower branch of the hierarchy by entering one of the
base commands. For example, when you first log in to the cluster enter the ? command to see the list of
available base commands, as follows:
cluster1::> ?
  up            Go up one directory
  cluster>      Manage clusters
  dashboard>    Display dashboards
  event>        Manage system events
  exit          Quit the CLI session
  history       Show the history of commands for this CLI session
  job>          Manage jobs and job schedules
  lun>          Manage LUNs
  man           Display the on-line manual pages
  network>      Manage physical and virtual network connections
  qos>          QoS settings
  redo          Execute a previous command
  rows          Show/Set the rows for this CLI session
  run           Run interactive or non-interactive commands in the node shell
  security>     The security directory
  set           Display/Set CLI session settings
  sis           Manage volume efficiency
  snapmirror>   Manage SnapMirror
  statistics>   Display operational statistics
  storage>      Manage physical storage, including disks, aggregates, and failover
  system>       The system directory
  top           Go to the top-level directory
  volume>       Manage virtual storage, including volumes, snapshots, and mirrors
  vserver>      Manage Vservers
cluster1::>
The > character at the end of a command signifies that it has a sub-hierarchy; enter the vserver
command to enter the vserver sub-hierarchy.


cluster1::> vserver
cluster1::vserver> ?
  audit>           Manage auditing of protocol requests that the Vserver services
  cifs>            Manage the CIFS configuration of a Vserver
  context          Set Vserver context
  create           Create a Vserver
  dashboard>       The dashboard directory
  data-policy>     Manage data policy
  delete           Delete a Vserver
  export-policy>   Manage export policies and rules
  fcp>             Manage the FCP service on a Vserver
  fpolicy>         Manage FPolicy
  group-mapping>   The group-mapping directory
  iscsi>           Manage the iSCSI services on a Vserver
  locks>           Manage Client Locks
  modify           Modify a Vserver
  name-mapping>    The name-mapping directory
  nfs>             Manage the NFS configuration of a Vserver
  peer>            Create and manage Vserver peer relationships
  rename           Rename a Vserver
  security>        Manage ontap security
  services>        The services directory
  setup            Vserver setup wizard
  show             Display Vservers
  smtape>          The smtape directory
  start            Start a Vserver
  stop             Stop a Vserver
cluster1::vserver>
Notice how the prompt changed to reflect that you are now in the vserver sub-hierarchy, and that some
of the subcommands here have sub-hierarchies of their own. To return to the root of the hierarchy enter
the top command; you can also navigate upwards one level at a time by using the up or .. commands.
cluster1::vserver> top
cluster1::>

The Data ONTAP command interpreter supports command history. By repeatedly hitting the up arrow key
you can step through the series of commands you ran earlier and you can selectively execute a given
command again when you find it by hitting the Enter key. You can also use the left and right arrow keys to
edit the command before you run it again.
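The history and redo commands listed in the base command hierarchy above provide a related capability
from within the CLI itself. The following is a rough sketch rather than a transcript captured from this lab:
history prints the numbered commands from your current session, and redo re-executes one of them by
number.
cluster1::> history
  (numbered list of the commands run so far in this CLI session)
cluster1::> redo 2
  (re-executes command number 2 from the history list)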

References
The following references were used in writing this lab guide.

TR-3982: NetApp Clustered Data ONTAP 8.2, an Introduction, May 2013

TR-4129: Namespaces in clustered Data ONTAP, August 2013


Version History

Version            Date             Document Version History
Version 1.1        August 2013      Initial Release
Version 1.1 Rev 1  September 2013   Various small revisions to improve clarity and increase best practice compliance
Version 1.2        February 2014    Software version updates, some example changes.
Version 1.2 Rev 1  November 2014    Updated license keys
Version 1.2 Rev 2  November 2015    Updated license keys

NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any
information or recommendations provided in this publication, or with respect to any results that may be
obtained by the use of the information or observance of any recommendations provided herein. The
information in this document is distributed AS IS, and the use of this information or the implementation of
any recommendations or techniques herein is a customer's responsibility and depends on the customer's
ability to evaluate and integrate them into the customer's operational environment. This document and
the information contained herein may be used solely in connection with the NetApp products discussed
in this document.


© 2013 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp,
Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or
registered trademarks of NetApp, Inc. in the United States and/or other countries. <<Insert third-party trademark notices here.>> All
other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such.