
NetApp Lab on Demand (LOD)

Basic Concepts for Clustered Data ONTAP 8.3
April 2015 | SL10220 v1.1

This information is intended to outline our general product direction. It is intended for information purposes
only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or
functionality, and should not be relied upon in making purchasing decisions. NetApp makes no warranties,
express or implied, regarding future functionality or timelines. The development, release, and timing of any
features or functionality described for NetApp's products remain at the sole discretion of NetApp.
NetApp's strategy and possible future developments, product and/or platform directions and functionality
are all subject to change without notice. NetApp has no obligation to pursue any course of business
outlined in this document or any related presentation, or to develop or release any functionality mentioned
therein.

CONTENTS

Introduction
    Why clustered Data ONTAP?
    Lab Objectives
    Prerequisites
    Accessing the Command Line
Lab Environment
Lab Activities
    Clusters
    Create Storage for NFS and CIFS
    Create Storage for iSCSI
Appendix 1: Using the clustered Data ONTAP Command Line
References
Version History


Introduction
This lab introduces the fundamentals of clustered Data ONTAP. In it you will start with a pre-created two-node
cluster and configure Windows 2012R2 and Red Hat Enterprise Linux 6.5 hosts to access storage on
the cluster using CIFS, NFS, and iSCSI.

Why clustered Data ONTAP?


A helpful way to start understanding the benefits offered by clustered Data ONTAP is to consider server
virtualization. Before server virtualization, system administrators frequently deployed applications on
dedicated servers in order to maximize application performance and to avoid the instabilities often
encountered when combining multiple applications on the same operating system instance. While this
design approach was effective, it also had the following drawbacks:

It does not scale well: adding new servers for every new application is extremely expensive.

It is inefficient: most servers are significantly underutilized, meaning that businesses are not extracting
the full benefit of their hardware investment.

It is inflexible: re-allocating standalone server resources for other purposes is time consuming, staff
intensive, and highly disruptive.

Server virtualization directly addresses all three of these limitations by decoupling the application instance
from the underlying physical hardware. Multiple virtual servers can share a pool of physical hardware,
meaning that businesses can now consolidate their server workloads to a smaller set of more effectively
utilized physical servers. In addition, the ability to transparently migrate running virtual machines across a
pool of physical servers enables businesses to reduce the impact of downtime due to scheduled
maintenance activities.
Clustered Data ONTAP brings these same benefits, and many others, to storage systems. As with server
virtualization, clustered Data ONTAP enables you to combine multiple physical storage controllers into a
single logical cluster that can non-disruptively service multiple storage workload needs. With clustered Data
ONTAP you can:

Combine different types and models of NetApp storage controllers (known as nodes) into a shared
physical storage resource pool (referred to as a cluster).

Support multiple data access protocols (CIFS, NFS, Fibre Channel, iSCSI, FCoE) concurrently on the
same storage cluster.

Consolidate various storage workloads to the cluster. Each workload can be assigned its own Storage
Virtual Machine (SVM), which is essentially a dedicated virtual storage controller, and its own data
volumes, LUNs, CIFS shares, and NFS exports.

Support multitenancy with delegated administration of SVMs. Tenants can be different companies,
business units, or even individual application owners, each with their own distinct administrators whose
admin rights are limited to just the assigned SVM.

Use Quality of Service (QoS) capabilities to manage resource utilization between storage workloads.

Non-disruptively migrate live data volumes and client connections from one cluster node to another.

Non-disruptively scale the cluster out by adding nodes. Nodes can likewise be non-disruptively
removed from the cluster, meaning that you can non-disruptively scale a cluster up and down during
hardware refresh cycles.


Leverage multiple nodes in the cluster to simultaneously service a given SVM's storage workloads.
This means that businesses can scale out their SVMs beyond the bounds of a single physical node in
response to growing storage and performance requirements, all non-disruptively.

Apply software & firmware updates and configuration changes without downtime.

Lab Objectives
This lab explores fundamental concepts of clustered Data ONTAP, and utilizes a modular design to allow
you to focus on the topics that are of specific interest to you. The Clusters section is required for all
invocations of the lab (it is a prerequisite for the other sections). If you are interested in NAS functionality
then complete the Storage Virtual Machines for NFS and CIFS section. If you are interested in SAN
functionality, then complete the Storage Virtual Machines for iSCSI section and at least one of its
Windows or Linux subsections (you may do both if you so choose). If you are interested in nondisruptive
operations then you will need to first complete one of the Storage Virtual Machine sections just mentioned
before you can proceed to the Nondisruptive Operations section.
Here is a summary of the exercises in this lab, along with their Estimated Completion Times (ECT):

Clusters (Required, ECT = 20 minutes)
   o Explore a cluster.
   o View Advanced Drive Partitioning.
   o Create a data aggregate.
   o Create a Subnet.

Storage Virtual Machines for NFS and CIFS (Optional, ECT = 40 minutes)
   o Create a Storage Virtual Machine.
   o Create a volume on the Storage Virtual Machine.
   o Configure the Storage Virtual Machine for CIFS and NFS access.
   o Mount a CIFS share from the Storage Virtual Machine on a Windows client.
   o Mount a NFS volume from the Storage Virtual Machine on a Linux client.

Storage Virtual Machines for iSCSI (Optional, ECT = 90 minutes including all optional subsections)
   o Create a Storage Virtual Machine.
   o Create a volume on the Storage Virtual Machine.

   For Windows (Optional, ECT = 40 minutes)
      o Create a Windows LUN on the volume and map the LUN to an igroup.
      o Configure a Windows client for iSCSI and MPIO and mount the LUN.

   For Linux (Optional, ECT = 40 minutes)
      o Create a Linux LUN on the volume and map the LUN to an igroup.
      o Configure a Linux client for iSCSI and multipath and mount the LUN.

This lab includes instructions for completing each of these tasks using either System Manager, NetApp's
graphical administration interface, or the Data ONTAP command line. The end state of the lab produced
by either method is exactly the same, so use whichever method you are most comfortable with.

In a lab section you will encounter orange bars similar to the following that indicate the beginning of the
graphical or command line procedures for that exercise. A few sections only offer one of these two options
rather than both, in which case the text in the orange bar will communicate that point.
***EXAMPLE*** To perform this section's tasks from the GUI: ***EXAMPLE***

Note that while switching back and forth between the graphical and command line methods from one
section of the lab guide to another is supported, this guide is not designed to support switching back and
forth between these methods within a single section. For the best experience we recommend that you stick
with a single method for the duration of a lab section.

Prerequisites
This lab introduces clustered Data ONTAP, and so this guide makes no assumption that the user has
previous experience with Data ONTAP. The lab does assume some basic familiarity with storage system
related concepts such as RAID, CIFS, NFS, LUNs, and DNS.
This lab includes steps for mapping shares and mounting LUNs on a Windows client. These steps assume
that the lab user has a basic familiarity with Microsoft Windows.
This lab also includes steps for mounting NFS volumes and LUNs on a Linux client. All steps are performed
from the Linux command line and assume a basic working knowledge of the Linux command line. A basic
working knowledge of a text editor such as vi may be useful, but is not required.

Accessing the Command Line


PuTTY is the terminal emulation program used in the lab to log into Linux hosts and storage controllers in
order to run command line commands.

1. The launch icon for the PuTTY application is pinned to the taskbar on the Windows host JUMPHOST as
shown in the following screenshot; just double-click the icon to launch it.


Once PuTTY launches you can connect to one of the hosts in the lab by following these steps. This
example shows a user connecting to the Data ONTAP cluster named cluster1.
1. By default PuTTY should launch into the Basic options for your PuTTY session display as shown in
the screenshot. If you accidentally navigate away from this view just click on the Session category item
to return to this view.
2. Use the scrollbar in the Saved Sessions box to navigate down to the desired host and double-click it to
open the connection. A terminal window will open and you will be prompted to log into the host. You
can find the correct username and password for the host in Table 1 in the Lab Environment section at
the beginning of this guide.

The clustered Data ONTAP command line supports a number of usability features that make the
command line much easier to use. If you are unfamiliar with those features, review Appendix 1 of this
lab guide, which contains a brief overview of them.


Lab Environment
The following figure contains a diagram of the environment for this lab.

All of the servers and storage controllers presented in this lab are virtual devices, and the networks that
interconnect them are exclusive to your lab session. While we encourage you to follow the demonstration
steps outlined in this lab guide, you are free to deviate from this guide and experiment with other Data
ONTAP features that interest you. While the virtual storage controllers (vsims) used in this lab offer nearly
all of the same functionality as physical storage controllers, they are not capable of providing the same
performance as a physical controller, which is why these labs are not suitable for performance testing.
Table 1 provides a list of the servers and storage controller nodes in the lab, along with their IP addresses.
Table 1: Lab Host Credentials
Hostname     Description                          IP Address(es)   Username            Password
JUMPHOST     Windows 2012R2 Remote Access host    192.168.0.5      Demo\Administrator  Netapp1!
RHEL1        Red Hat 6.5 x64 Linux host           192.168.0.61     root                Netapp1!
RHEL2        Red Hat 6.5 x64 Linux host           192.168.0.62     root                Netapp1!
DC1          Active Directory Server              192.168.0.253    Demo\Administrator  Netapp1!
cluster1     Data ONTAP cluster                   192.168.0.101    admin               Netapp1!
cluster1-01  Data ONTAP cluster node              192.168.0.111    admin               Netapp1!
cluster1-02  Data ONTAP cluster node              192.168.0.112    admin               Netapp1!

Table 2 lists the NetApp software that is pre-installed on the various hosts in this lab.

Table 2: Preinstalled NetApp Software


Hostname         Preinstalled Software
JUMPHOST         Data ONTAP DSM v4.1 for Windows MPIO, Windows Host Utility Kit v6.0.2
RHEL1, RHEL2     Linux Host Utilities Kit v6.2

Lab Activities
Clusters
Expected Completion Time: 20 Minutes
A cluster is a group of physical storage controllers, or nodes, that have been joined together for the
purpose of serving data to end users. The nodes in a cluster can pool their resources together so that the
cluster can distribute its work across the member nodes. Communication and data transfer between
member nodes (such as when a client accesses data on a node other than the one actually hosting the
data) takes place over a 10Gb cluster-interconnect network to which all the nodes are connected, while
management and client data traffic passes over separate management and data networks configured on
the member nodes.
Clusters typically consist of one or more NetApp storage controller High Availability (HA) pairs. Both
controllers in an HA pair actively host and serve data, but they are also capable of taking over their
partner's responsibilities in the event of a service disruption by virtue of their redundant cable paths to each
other's disk storage. Having multiple HA pairs in a cluster allows the cluster to scale out to handle greater
workloads, and to support non-disruptive migrations of volumes and client connections to other nodes in
the cluster resource pool. This means that cluster expansion and technology refreshes can take place
while the cluster remains fully online and serving data.
Since clusters are almost always comprised of one or more HA pairs, a cluster almost always contains an
even number of controller nodes. There is one exception to this rule, and that is the single node cluster,
which is a special cluster configuration intended to support small storage deployments that can be satisfied
with a single physical controller head. The primary noticeable difference between single node and standard
clusters, besides the number of nodes, is that a single node cluster does not have a cluster network. Single
node clusters can later be converted into traditional multi-node clusters, and at that point become subject to
all the standard cluster requirements like the need to utilize an even number of nodes consisting of HA
pairs. This lab does not contain a single node cluster, and so this lab guide does not discuss them further.
Data ONTAP 8.3 clusters that only serve NFS and CIFS can scale up to a maximum of 24 nodes, although
the node limit may be lower depending on the model of FAS controller in use. Data ONTAP 8.3 clusters
that also host iSCSI and FC can scale up to a maximum of 8 nodes.
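If you would like to confirm your cluster's membership and HA relationships from the command line at any point, the following commands are one way to do so. This is a quick sketch rather than a required lab step; the exact columns displayed in your lab output may differ slightly between releases.

cluster1::> cluster show
cluster1::> storage failover show

The first command lists each node in the cluster along with its health and eligibility, and the second shows each node's HA partner and whether takeover is currently possible.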
This lab utilizes simulated NetApp storage controllers rather than physical FAS controllers. The simulated
controller, also known as a vsim, is a virtual machine that simulates the functionality of a physical controller
without the need for dedicated controller hardware. The vsim is not designed for performance testing, but
does offer much of the same functionality as a physical FAS controller, including the ability to generate I/O
to disks. This makes the vsim a powerful tool to explore and experiment with Data ONTAP product
features. The vsim is limited, however, when a feature requires a specific physical capability that the vsim
does not support; for example, vsims do not support Fibre Channel connections, which is why this lab uses
iSCSI to demonstrate block storage functionality.

This lab starts with a pre-created, minimally configured cluster. The pre-created cluster already includes
Data ONTAP licenses, the cluster's basic network configuration, and a pair of pre-configured HA
controllers. In this next section you will create the aggregates that are used by the SVMs that you will
create in later sections of the lab. You will also take a look at the new Advanced Drive Partitioning feature
introduced in clustered Data ONTAP 8.3.
Connect to the Cluster with OnCommand System Manager
OnCommand System Manager is NetApp's browser-based management tool for configuring and managing
NetApp storage systems and clusters. Prior to 8.3, System Manager was a separate application that you
had to download and install on your client OS. In 8.3, System Manager has moved on board the cluster,
so you just point your web browser to the cluster management address. The on-board System Manager
interface is essentially the same as that of System Manager 3.1, the version you install on a client.

This section's tasks can only be performed from the GUI:

On the Jumphost, the Windows 2012R2 Server desktop you see when you first connect to the lab, open
the web browser of your choice. This lab guide uses Chrome, but you can use Firefox or Internet Explorer if
you prefer one of those. All three browsers already have System Manager set as the browser home page.
1. Launch Chrome to open System Manager.

The OnCommand System Manager Login window opens.


1. Enter the User Name as admin and the Password as Netapp1! and then click Sign In.

System Manager is now logged in to cluster1 and displays a summary page for the cluster. If you are
unfamiliar with System Manager, here is a quick introduction to its layout.


Use the tabs on the left side of the window to manage various aspects of the cluster. The Cluster tab (1)
accesses configuration settings that apply to the cluster as a whole. The Storage Virtual Machines tab (2)
allows you to manage individual Storage Virtual Machines (SVMs, also known as Vservers). The Nodes tab
(3) contains configuration settings that are specific to individual controller nodes. Please take a few
moments to expand and browse these tabs to familiarize yourself with their contents.

Note:

As you use System Manager in this lab, you may encounter situations where buttons at the bottom of a
System Manager pane are beyond the viewing size of the window, and no scroll bar exists to allow you to
scroll down to see them. If this happens, then you have two options; either increase the size of the browser
window (you might need to increase the resolution of your jumphost desktop to accommodate the larger
browser window), or in the System Manager window, use the tab key to cycle through all the various fields
and buttons, which eventually forces the window to scroll down to the non-visible items.

Advanced Drive Partitioning


Disks, whether Hard Disk Drives (HDD) or Solid State Disks (SSD), are the fundamental unit of physical
storage in clustered Data ONTAP, and are tied to a specific cluster node by virtue of their physical
connectivity (i.e., cabling) to a given controller head.
Data ONTAP manages disks in groups called aggregates. An aggregate defines the RAID properties for a
group of disks that are all physically attached to the same node. A given disk can only be a member of a
single aggregate.


By default each cluster node has one aggregate known as the root aggregate, which is a group of the
node's local disks that host the node's Data ONTAP operating system. A node's root aggregate is
automatically created during Data ONTAP installation in a minimal RAID-DP configuration. This means it is
initially comprised of 3 disks (1 data, 2 parity), and has a name that begins with the string "aggr0". For example,
in this lab the root aggregate of the node cluster1-01 is named aggr0_cluster1_01, and the root
aggregate of the node cluster1-02 is named aggr0_cluster1_02.
On higher end FAS systems that have many disks, the requirement to dedicate 3 disks for each controller's
root aggregate is not a burden, but for entry level FAS systems that only have 24 or 12 disks this root
aggregate disk overhead requirement significantly reduces the disks available for storing user data. To
improve usable capacity, NetApp has introduced Advanced Drive Partitioning in 8.3, which divides the Hard
Disk Drives (HDDs) on nodes that have this feature enabled into two partitions: a small root partition, and
a much larger data partition. Data ONTAP allocates the root partitions to the node's root aggregate, and the
data partitions to data aggregates. Each partition behaves like a virtual disk, so in terms of RAID Data
ONTAP treats these partitions just like physical disks when creating aggregates. The key benefit here is
that a much higher percentage of the node's overall disk capacity is now available to host user data.
Data ONTAP only supports HDD partitioning for FAS 22xx and FAS25xx controllers, and only for HDDs
installed in their internal shelf on those models. Advanced Drive Partitioning can only be enabled at system
installation time, and there is no way to convert an existing system to use Advanced Drive Partitioning
other than to completely evacuate the affected HDDs and then re-install Data ONTAP.
All-Flash FAS (AFF) supports a variation of Advanced Drive Partitioning that utilizes SSDs instead of
HDDs. The capability is available for entry-level, mid-range, and high-end AFF platforms. Data ONTAP 8.3
also introduces SSD partitioning for use with Flash Pools, but the details of that feature lie outside the
scope of this lab.
In this section, you will see how to determine if a cluster node is utilizing Advanced Drive Partitioning.
System Manager provides a basic view into this information, but if you want to see more detail then you will
want to use the CLI.
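As a quick CLI preview, a command along the following lines can confirm whether a node's disks are partitioned; the -container-type filter shown here is an assumption about the exact syntax, and the walkthrough that follows relies only on commands whose output is reproduced in full.

cluster1::> storage disk show -container-type shared

Disks that appear in this output are partitioned; on a system without Advanced Drive Partitioning the same command would return no shared disks.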


To perform this section's tasks from the GUI:

1. In System Manager's left pane, navigate to the Cluster tab.


2. Expand cluster1.
3. Expand Storage.
4. Click Disks.
5. In the main window, click on the Summary tab.
6. Scroll the main window down to the Spare Disks section, where you will see that each cluster node has
12 spare disks with a per-disk size of 26.88 GB. These spares represent the data partitions of the
physical disks that belong to each node.


If you scroll back up to look at the Assigned HDDs section of the window, you will see that there are no
entries listed for the root partitions of the disks. Under daily operation, you will be primarily concerned with
data partitions rather than root partitions, and so this view focuses on just showing information about the
data partitions. To see information about the physical disks attached to your system, you will need to select
the Inventory tab.
1. Click on the Inventory tab at the top of the Disks window.

System Manager's main window now shows a list of the physical disks available across all the nodes in the
cluster, which nodes own those disks, and so on. If you look at the Container Type column you see that the
disks in your lab all show a value of shared; this value indicates that the physical disk is partitioned. For
disks that are not partitioned you would typically see values like spare, data, parity, and dparity.


For a FAS controller that will be using Advanced Drive Partitioning, Data ONTAP automatically determines
the size of the root and data disk partitions at system installation time based on the quantity and size of the
available disks assigned to each node. In this lab each cluster node has twelve 32 GB hard disks, and you
can see how your nodes' root aggregates are consuming the root partitions on those disks by going to the
Aggregates page in System Manager.
1. On the Cluster tab, navigate to cluster1->Storage->Aggregates.
2. In the Aggregates list, select aggr0_cluster1_01, which is the root aggregate for cluster node
cluster1-01. Notice that the total size of this aggregate is a little over 10 GB. The Available and Used space
shown for this aggregate in your lab may vary from what is shown in this screenshot, depending on the
quantity and size of the snapshots that exist on your node's root volume.
3. Click the Disk Layout tab at the bottom of the window. The lower pane of System Manager now
displays a list of the disks that are members of this aggregate. Notice that the usable space is 1.52 GB,
which is the size of the root partition on the disk. The Physical Space column displays the total capacity
of the whole disk that is available to clustered Data ONTAP, including the space allocated to both the
disk's root and data partitions.

To perform this section's tasks from the command line:


If you do not already have a PuTTY session established to cluster1, then launch PuTTY as described in the
Accessing the Command Line section at the beginning of this guide, and connect to the host cluster1
using the username admin and the password Netapp1!.


1. List all of the physical disks attached to the cluster:


cluster1::> storage disk show
                     Usable           Disk    Container   Container
Disk                   Size Shelf Bay Type    Type        Name              Owner
---------------- ---------- ----- --- ------- ----------- ----------------- -----------
VMw-1.1             28.44GB     -   0 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.2             28.44GB     -   1 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.3             28.44GB     -   2 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.4             28.44GB     -   3 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.5             28.44GB     -   4 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.6             28.44GB     -   5 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.7             28.44GB     -   6 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.8             28.44GB     -   8 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.9             28.44GB     -   9 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.10            28.44GB     -  10 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.11            28.44GB     -  11 VMDISK  shared                        cluster1-01
VMw-1.12            28.44GB     -  12 VMDISK  shared                        cluster1-01
VMw-1.13            28.44GB     -   0 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.14            28.44GB     -   1 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.15            28.44GB     -   2 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.16            28.44GB     -   3 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.17            28.44GB     -   4 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.18            28.44GB     -   5 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.19            28.44GB     -   6 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.20            28.44GB     -   8 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.21            28.44GB     -   9 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.22            28.44GB     -  10 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.23            28.44GB     -  11 VMDISK  shared                        cluster1-02
VMw-1.24            28.44GB     -  12 VMDISK  shared                        cluster1-02
24 entries were displayed.

cluster1::>


The preceding command listed a total of 24 disks, 12 for each of the nodes in this two-node cluster. The
container type for all the disks is shared, which indicates that the disks are partitioned. For disks that are
not partitioned, you would typically see values like spare, data, parity, and dparity. The Owner field
indicates which node the disk is assigned to, and the Container Name field indicates which aggregate the
disk is assigned to. Notice that two disks for each node do not have a Container Name listed; these are
spare disks that Data ONTAP can use as replacements in the event of a disk failure.
2. At this point, the only aggregates that exist on this new cluster are the root aggregates. List the
aggregates that exist on the cluster:
cluster1::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_01
           10.26GB   510.6MB   95% online       1 cluster1-01      raid_dp,
                                                                   normal
aggr0_cluster1_02
           10.26GB   510.6MB   95% online       1 cluster1-02      raid_dp,
                                                                   normal
2 entries were displayed.
cluster1::>

3. Now list the disks that are members of the root aggregate for the node cluster1-01. Here is the command
that you would ordinarily use to display that information for an aggregate that is not using partitioned
disks.
cluster1::> storage disk show -aggregate aggr0_cluster1_01
There are no entries matching your query.
Info: One or more aggregates queried for use shared disks. Use "storage aggregate show-status"
to get correct set of disks associated with these aggregates.
cluster1::>


4. As you can see, in this instance the preceding command is not able to produce a list of disks because
this aggregate is using shared disks. Instead it refers you to the storage aggregate show-status
command to query the aggregate for a list of its assigned disk partitions.
cluster1::> storage aggregate show-status -aggregate aggr0_cluster1_01
Owner Node: cluster1-01
 Aggregate: aggr0_cluster1_01 (online, raid_dp) (block checksums)
  Plex: /aggr0_cluster1_01/plex0 (online, normal, active, pool0)
   RAID Group /aggr0_cluster1_01/plex0/rg0 (normal, block checksums)
                                                            Usable Physical
     Position Disk                        Pool Type    RPM    Size     Size Status
     -------- --------------------------- ---- ------ ----- -------- -------- --------
     shared   VMw-1.1                        0 VMDISK    -   1.52GB  28.44GB (normal)
     shared   VMw-1.2                        0 VMDISK    -   1.52GB  28.44GB (normal)
     shared   VMw-1.3                        0 VMDISK    -   1.52GB  28.44GB (normal)
     shared   VMw-1.4                        0 VMDISK    -   1.52GB  28.44GB (normal)
     shared   VMw-1.5                        0 VMDISK    -   1.52GB  28.44GB (normal)
     shared   VMw-1.6                        0 VMDISK    -   1.52GB  28.44GB (normal)
     shared   VMw-1.7                        0 VMDISK    -   1.52GB  28.44GB (normal)
     shared   VMw-1.8                        0 VMDISK    -   1.52GB  28.44GB (normal)
     shared   VMw-1.9                        0 VMDISK    -   1.52GB  28.44GB (normal)
     shared   VMw-1.10                       0 VMDISK    -   1.52GB  28.44GB (normal)
10 entries were displayed.
cluster1::>

The output shows that aggr0_cluster1_01 is comprised of 10 disks, each with a usable size of 1.52 GB,
and you know that the aggregate is using the listed disks' root partitions because aggr0_cluster1_01 is a
root aggregate.
For a FAS controller that will be using Advanced Drive Partitioning, Data ONTAP automatically determines
the size of the root and data disk partitions at system installation time. That determination is based on the
quantity and size of the available disks assigned to each node. As you saw earlier, this particular cluster
node has 12 disks, so during installation Data ONTAP partitioned all 12 disks but only assigned 10 of those
root partitions to the root aggregate, so that the node would have 2 spare disks available to protect against
disk failures.
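If you also want to see which partitions remain available as spares, Data ONTAP provides a spare disk report. The following command is a sketch only, and its output is not reproduced here; it should list both the large spare data partitions and the two spare root partitions on each node.

cluster1::> storage aggregate show-spare-disks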


5. The Data ONTAP CLI includes a diagnostic-level command that provides a more comprehensive single
view of a system's partitioned disks. The following command shows the partitioned disks that belong to
the node cluster1-01.
cluster1::> set -priv diag
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
cluster1::*> disk partition show -owner-node-name cluster1-01
                            Usable  Container     Container
Partition                     Size  Type          Name                          Owner
------------------------- -------- ------------- ----------------------------- -----------
VMw-1.1.P1                 26.88GB  spare         Pool0                         cluster1-01
VMw-1.1.P2                  1.52GB  aggregate     /aggr0_cluster1_01/plex0/rg0  cluster1-01
VMw-1.2.P1                 26.88GB  spare         Pool0                         cluster1-01
VMw-1.2.P2                  1.52GB  aggregate     /aggr0_cluster1_01/plex0/rg0  cluster1-01
VMw-1.3.P1                 26.88GB  spare         Pool0                         cluster1-01
VMw-1.3.P2                  1.52GB  aggregate     /aggr0_cluster1_01/plex0/rg0  cluster1-01
VMw-1.4.P1                 26.88GB  spare         Pool0                         cluster1-01
VMw-1.4.P2                  1.52GB  aggregate     /aggr0_cluster1_01/plex0/rg0  cluster1-01
VMw-1.5.P1                 26.88GB  spare         Pool0                         cluster1-01
VMw-1.5.P2                  1.52GB  aggregate     /aggr0_cluster1_01/plex0/rg0  cluster1-01
VMw-1.6.P1                 26.88GB  spare         Pool0                         cluster1-01
VMw-1.6.P2                  1.52GB  aggregate     /aggr0_cluster1_01/plex0/rg0  cluster1-01
VMw-1.7.P1                 26.88GB  spare         Pool0                         cluster1-01
VMw-1.7.P2                  1.52GB  aggregate     /aggr0_cluster1_01/plex0/rg0  cluster1-01
VMw-1.8.P1                 26.88GB  spare         Pool0                         cluster1-01
VMw-1.8.P2                  1.52GB  aggregate     /aggr0_cluster1_01/plex0/rg0  cluster1-01
VMw-1.9.P1                 26.88GB  spare         Pool0                         cluster1-01
VMw-1.9.P2                  1.52GB  aggregate     /aggr0_cluster1_01/plex0/rg0  cluster1-01
VMw-1.10.P1                26.88GB  spare         Pool0                         cluster1-01
VMw-1.10.P2                 1.52GB  aggregate     /aggr0_cluster1_01/plex0/rg0  cluster1-01
VMw-1.11.P1                26.88GB  spare         Pool0                         cluster1-01
VMw-1.11.P2                 1.52GB  spare         Pool0                         cluster1-01
VMw-1.12.P1                26.88GB  spare         Pool0                         cluster1-01
VMw-1.12.P2                 1.52GB  spare         Pool0                         cluster1-01
24 entries were displayed.
cluster1::*> set -priv admin
cluster1::>


Create a New Aggregate on Each Cluster Node


The only aggregates that exist on a newly created cluster are the node root aggregates. The root
aggregate should not be used to host user data, so in this section you will be creating a new aggregate on
each of the nodes in cluster1 so they can host the storage virtual machines, volumes, and LUNs that you
will be creating later in this lab.
A node can host multiple aggregates depending on the data sizing, performance, and isolation needs of the
storage workloads that it will be hosting. When you create a Storage Virtual Machine (SVM) you assign it to
use one or more specific aggregates to host the SVM's volumes. Multiple SVMs can be assigned to use
the same aggregate, which offers greater flexibility in managing storage space, whereas dedicating an
aggregate to just a single SVM provides greater workload isolation.
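For reference, the aggregate-to-SVM relationship is established when the SVM is created, and can optionally be restricted afterward. The commands below are only an illustrative sketch; the SVM name svm1, the root volume name, and the use of the -aggr-list parameter are assumptions for the purpose of the example, and the actual SVM creation steps appear later in this lab.

cluster1::> vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1_cluster1_01 -rootvolume-security-style unix
cluster1::> vserver modify -vserver svm1 -aggr-list aggr1_cluster1_01,aggr1_cluster1_02

The second command limits the SVM to the named aggregates when its delegated administrators provision new volumes.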
For this lab, you will be creating a single user data aggregate on each node in the cluster.

To perform this section's tasks from the GUI:

You can create aggregates from either the Cluster tab or the Nodes tab. For this exercise use the Cluster
tab as follows:

1. Select the Cluster tab. To avoid confusion, always double-check to make sure that you are working in
the correct left pane tab context when performing activities in System Manager!
2. Go to cluster1->Storage->Aggregates.
3. Click on the Create button to launch the Create Aggregate Wizard.


The Create Aggregate wizard window opens.

1. Specify the Name of the aggregate as aggr1_cluster1_01 as shown, and then click Browse.

The Select Disk Type window opens.


1. Select the Disk Type entry for the node cluster1-01.
2. Click the OK button.


The Select Disk Type window closes, and focus returns to the Create Aggregate window.
1. The Disk Type should now show as VMDISK. Set the Number of Disks to 5.

2. Click the Create button to create the new aggregate and to close the wizard.


The Create Aggregate window closes, and focus returns to the Aggregates view in System Manager. The
newly created aggregate should now be visible in the list of aggregates.
1. Select the entry for the aggregate aggr1_cluster1_01 if it is not already selected.
2. Click the Details tab to view more detailed information about this aggregates configuration.
3. Notice that aggr1_cluster1_01 is a 64-bit aggregate. In earlier versions of clustered Data ONTAP 8, an
aggregate could be either 32-bit or 64-bit, but Data ONTAP 8.3 only supports 64-bit aggregates. If you
have an existing clustered Data ONTAP 8.x system that has 32-bit aggregates and you plan to upgrade
that cluster to 8.3, you must convert those 32-bit aggregates to 64-bit aggregates prior to the upgrade.
The procedure for that migration is not covered in this lab, so if you need further details then please
refer to the clustered Data ONTAP documentation.


Now repeat the process to create a new aggregate on the node cluster1-02.
1. Click the Create button again.

The Create Aggregate window opens.


1. Specify the aggregate's Name as aggr1_cluster1_02 and then click the Browse button.


The Select Disk Type window opens.


1. Select the Disk Type entry for the node cluster1-02.
2. Click the OK button.

The Select Disk Type window closes, and focus returns to the Create Aggregate window.
1. The Disk Type should now show as VMDISK. Set the Number of Disks to 5.
2. Click the Create button to create the new aggregate.

The Create Aggregate window closes, and focus returns to the Aggregates view in System Manager.


The new aggregate aggr1_cluster1_02 now appears in the cluster's aggregate list.


To perform this section's tasks from the command line:

From a PuTTY session logged in to cluster1 as the username admin and password Netapp1!, display a
list of the disks attached to the node cluster1-01. (Note that you can omit the -nodelist option to
display a list of all the disks in the cluster.) By default the PuTTY window may wrap output lines because
the window is too small; if this is the case for you then simply expand the window by selecting its edge and
dragging it wider, after which any subsequent output will utilize the visible width of the window.
cluster1::> disk show -nodelist cluster1-01
                     Usable           Disk    Container   Container
Disk                   Size Shelf Bay Type    Type        Name              Owner
---------------- ---------- ----- --- ------- ----------- ----------------- -----------
VMw-1.25            28.44GB     -   0 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.26            28.44GB     -   1 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.27            28.44GB     -   2 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.28            28.44GB     -   3 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.29            28.44GB     -   4 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.30            28.44GB     -   5 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.31            28.44GB     -   6 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.32            28.44GB     -   8 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.33            28.44GB     -   9 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.34            28.44GB     -  10 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.35            28.44GB     -  11 VMDISK  shared                        cluster1-01
VMw-1.36            28.44GB     -  12 VMDISK  shared                        cluster1-01
VMw-1.37            28.44GB     -   0 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.38            28.44GB     -   1 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.39            28.44GB     -   2 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.40            28.44GB     -   3 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.41            28.44GB     -   4 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.42            28.44GB     -   5 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.43            28.44GB     -   6 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.44            28.44GB     -   8 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.45            28.44GB     -   9 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.46            28.44GB     -  10 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.47            28.44GB     -  11 VMDISK  shared                        cluster1-02
VMw-1.48            28.44GB     -  12 VMDISK  shared                        cluster1-02
24 entries were displayed.

cluster1::>


Create the aggregate named aggr1_cluster1_01 on the node cluster1-01 and the aggregate named
aggr1_cluster1_02 on the node cluster1-02.

cluster1::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_01
           10.26GB   510.6MB   95% online       1 cluster1-01      raid_dp,
                                                                   normal
aggr0_cluster1_02
           10.26GB   510.6MB   95% online       1 cluster1-02      raid_dp,
                                                                   normal
2 entries were displayed.
cluster1::> aggr create -aggregate aggr1_cluster1_01 -nodes cluster1-01 -diskcount 5
[Job 257] Job is queued: Create aggr1_cluster1_01.
[Job 257] Job succeeded: DONE
cluster1::> aggr create -aggregate aggr1_cluster1_02 -nodes cluster1-02 -diskcount 5
[Job 258] Job is queued: Create aggr1_cluster1_02.
[Job 258] Job succeeded: DONE
cluster1::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_01
           10.26GB   510.6MB   95% online       1 cluster1-01      raid_dp,
                                                                   normal
aggr0_cluster1_02
           10.26GB   510.6MB   95% online       1 cluster1-02      raid_dp,
                                                                   normal
aggr1_cluster1_01
           72.53GB   72.53GB    0% online       0 cluster1-01      raid_dp,
                                                                   normal
aggr1_cluster1_02
           72.53GB   72.53GB    0% online       0 cluster1-02      raid_dp,
                                                                   normal
4 entries were displayed.
cluster1::>


Networks
Clustered Data ONTAP provides a number of network components that you use to manage your cluster.
Those components include:
Ports are the physical Ethernet and Fibre Channel connections on each node, the interface groups (ifgrps)
you can create to aggregate those connections, and the VLANs you can use to subdivide them.
A logical interface (LIF) is essentially an IP address that is associated with a port, and has a number of
associated characteristics such as an assigned home node, an assigned physical home port, a list of
physical ports it can fail over to, an assigned SVM, a role, a routing group, and so on. A given LIF can only
be assigned to a single SVM, and since LIFs are mapped to physical network ports on cluster nodes this
means that an SVM runs in part on all nodes that are hosting its LIFs.
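You can inspect these LIF attributes from the command line at any time. The command below is safe to run as-is; at this point in the lab only the cluster and node management LIFs exist, so those are the interfaces you will see. Adding a -fields list (for example home-node,home-port,failover-policy, which are assumptions about the exact field names) narrows the output to the attributes of interest.

cluster1::> network interface show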
Routing tables in clustered Data ONTAP are defined for each Storage Virtual Machine. Since each SVM
has its own routing table, changes to one SVM's routing table do not impact any other SVM's
routing table.
IPspaces are new in Data ONTAP 8.3 and allow you to configure a Data ONTAP cluster to logically
separate one IP network from another, even if those two networks are using the same IP address range.
IPspaces are a multi-tenancy feature designed to allow storage service providers to share a cluster
between different companies while still separating storage traffic for privacy and security. Every cluster
includes a default IPspace to which Data ONTAP automatically assigns new SVMs, and that default IPspace
is probably sufficient for most NetApp customers who are deploying a cluster within a single company or
organization that uses a non-conflicting IP address range.
Broadcast Domains are also new in Data ONTAP 8.3, and are collections of ports that all have access to
the same layer 2 networks, both physical and virtual (i.e. VLANs). Every IPspace has its own set of
Broadcast Domains, and Data ONTAP provides a default broadcast domain to go along with the default
IPspace. Broadcast domains are used by Data ONTAP to determine what ports an SVM can use for its
LIFs.
Subnets are another new feature in Data ONTAP 8.3, and are a convenience feature intended to make LIF
creation and management easier for Data ONTAP administrators. A subnet is a pool of IP addresses that
you can specify by name when creating a LIF. Data ONTAP will automatically assign an available IP
address from the pool to the LIF, along with a subnet mask and a gateway. A subnet is scoped to a specific
broadcast domain, so all of the subnet's addresses belong to the same layer 3 network. Data ONTAP
manages the pool automatically as you create or delete LIFs, and if you manually configure a LIF with an
address from the pool then it will detect that the address is in use and mark it as such in the pool.
DNS Zones allow an SVM to manage DNS name resolution for its own LIFs, and since multiple LIFs can
share the same DNS name, this allows the SVM to load balance traffic by IP address across the LIFs. To
use DNS Zones you must configure your DNS server to delegate DNS authority for the subdomain to the
SVM.
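As a sketch of how a DNS zone is attached to a LIF, the -dns-zone parameter of the network interface commands names the subdomain that the SVM will answer for. The SVM, LIF, and zone names below are purely illustrative, and delegating the subdomain on the external DNS server is a separate step performed outside of Data ONTAP.

cluster1::> network interface modify -vserver svm1 -lif svm1_lif1 -dns-zone svm1.demo.netapp.com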
In this section of the lab, you will create a subnet that you will leverage in later sections to provision SVMs
and LIFs. You will not create IPspaces or Broadcast Domains, as the system defaults are sufficient for this
lab.


To perform this section's tasks from the GUI:

1. In the left pane of System Manager, select the Cluster tab.

2. In the left pane, navigate to cluster1->Configuration->Network.


3. In the right pane select the Broadcast Domains tab.
4. Select the Default broadcast domain.

Review the Port Details section at the bottom of the Network pane and note that the e0c through e0g ports on
both cluster nodes are all part of this broadcast domain. These are the network ports that you will be using
in this lab.


Now create a new Subnet for this lab.


1. Select the Subnets tab, and notice that there are no subnets listed in the pane. Unlike Broadcast
Domains and IPSpaces, Data ONTAP does not provide a default Subnet.
2. Click the Create button.


The Create Subnet window opens.


1. Set the fields in the window as follows.

Subnet Name: Demo

Subnet IP/Subnet mask: 192.168.0.0/24

Gateway: 192.168.0.1

2. The values you enter in the IP address box depend on what sections of the lab guide you intend to
complete. It is important that you choose the right values here so that the values in your lab will
correctly match up with the values used in this lab guide.

If you plan to complete just the NAS section or both the NAS and SAN sections then enter
192.168.0.131-192.168.0.139

If you plan to complete just the SAN section then enter 192.168.0.133-192.168.0.139

3. Click the Browse button.


The Select Broadcast Domain window opens.

1. Select the Default entry from the list.

2. Click the OK button.


The Select Broadcast Domain window closes, and focus returns to the Create Subnet window.

1. The values in your Create Subnet window should now match those shown in the following screenshot,
the only possible exception being for the IP Addresses field, whose value may differ depending on what
value range you chose to enter to match your plans for the lab.
Note: If you click the Show ports on this domain link under the Broadcast Domain textbox, you
can once again see the list of ports that this broadcast domain includes.
2. Click Create.


The Create Subnet window closes, and focus returns to the Subnets tab in System Manager. Notice that
the main pane of the Subnets tab now includes an entry for your newly created subnet, and that the
lower portion of the pane includes metrics tracking the consumption of the IP addresses that belong to this
subnet.

Feel free to explore the contents of the other available tabs on the Network page. Here is a brief summary
of the information available on those tabs.
The Ethernet Ports tab displays the physical NICs on your controller, which will be a superset of the NICs
that you saw previously listed as belonging to the default broadcast domain. The other NICs you will see
listed on the Ethernet Ports tab include the nodes' cluster network NICs.
The Network Interfaces tab displays a list of all of the LIFs on your cluster.
The FC/FCoE Adapters tab lists all the WWPNs for all the controllers' NICs in the event they will be used
for iSCSI or FCoE connections. The simulated NetApp controllers you are using in this lab do not include
FC adapters, and this lab does not make use of FCoE.


To perform this section's tasks from the command line:

1. Display a list of the cluster's IPspaces. A cluster actually contains two IPspaces by default: the Cluster
IPspace, which correlates to the cluster network that Data ONTAP uses to have cluster nodes
communicate with each other, and the Default IPspace to which Data ONTAP automatically assigns all
new SVMs. You can create more IPspaces if necessary, but that activity will not be covered in this lab.
cluster1::> network ipspace show
IPspace             Vserver List                  Broadcast Domains
------------------- ----------------------------- ----------------------------
Cluster             Cluster                       Cluster
Default             cluster1                      Default
2 entries were displayed.

cluster1::>

2. Display a list of the cluster's broadcast domains. Remember that broadcast domains are scoped to a
single IPspace. The e0a and e0b ports on the cluster nodes are part of the Cluster broadcast domain in the
Cluster IPspace. The remaining ports are part of the Default broadcast domain in the Default IPspace.
cluster1::> network port broadcast-domain show
IPspace Broadcast                                        Update
Name    Domain Name  MTU   Port List                     Status Details
------- ----------- -----  ----------------------------- --------------
Cluster Cluster      1500
                           cluster1-01:e0a               complete
                           cluster1-01:e0b               complete
                           cluster1-02:e0a               complete
                           cluster1-02:e0b               complete
Default Default      1500
                           cluster1-01:e0c               complete
                           cluster1-01:e0d               complete
                           cluster1-01:e0e               complete
                           cluster1-01:e0f               complete
                           cluster1-01:e0g               complete
                           cluster1-02:e0c               complete
                           cluster1-02:e0d               complete
                           cluster1-02:e0e               complete
                           cluster1-02:e0f               complete
                           cluster1-02:e0g               complete
2 entries were displayed.

cluster1::>

3. Display a list of the cluster's subnets.


cluster1::> network subnet show
This table is currently empty.
cluster1::>


Data ONTAP does not include a default subnet, so you will need to create a subnet now. The specific
command you will use depends on what sections of this lab guide you plan to complete, as you want to
correctly align the IP address pool in your lab with the IP addresses used in the portions of this lab guide
that you want to complete.
4. If you plan to complete the NAS portion of this lab, enter the following command. Also use this
command if you plan to complete both the NAS and SAN portions of this lab.
cluster1::> network subnet create -subnet-name Demo -broadcast-domain Default -ipspace Default
-subnet 192.168.0.0/24 -gateway 192.168.0.1 -ip-ranges 192.168.0.131-192.168.0.139
cluster1::>

5. If you only plan to complete the SAN portion of this lab, then enter the following command instead.
cluster1::> network subnet create -subnet-name Demo -broadcast-domain Default -ipspace Default
-subnet 192.168.0.0/24 -gateway 192.168.0.1 -ip-ranges 192.168.0.133-192.168.0.139
cluster1::>

6. Re-display the list of the cluster's subnets. This example assumes you plan to complete the whole lab.
cluster1::> network subnet show
IPspace: Default
Subnet                     Broadcast                    Avail/
Name      Subnet           Domain    Gateway            Total  Ranges
--------- ---------------- --------- --------------- -------- ---------------------------
Demo      192.168.0.0/24   Default   192.168.0.1          9/9  192.168.0.131-192.168.0.139

cluster1::>

7. If you are interested in seeing a list of all of the network ports on your cluster, you can use the following
command for that purpose.
cluster1::> network port show
                                                            Speed (Mbps)
Node   Port      IPspace      Broadcast Domain Link   MTU   Admin/Oper
------ --------- ------------ ---------------- ----- ------ ------------
cluster1-01
       e0a       Cluster      Cluster          up     1500  auto/1000
       e0b       Cluster      Cluster          up     1500  auto/1000
       e0c       Default      Default          up     1500  auto/1000
       e0d       Default      Default          up     1500  auto/1000
       e0e       Default      Default          up     1500  auto/1000
       e0f       Default      Default          up     1500  auto/1000
       e0g       Default      Default          up     1500  auto/1000
cluster1-02
       e0a       Cluster      Cluster          up     1500  auto/1000
       e0b       Cluster      Cluster          up     1500  auto/1000
       e0c       Default      Default          up     1500  auto/1000
       e0d       Default      Default          up     1500  auto/1000
       e0e       Default      Default          up     1500  auto/1000
       e0f       Default      Default          up     1500  auto/1000
       e0g       Default      Default          up     1500  auto/1000
14 entries were displayed.

cluster1::>


Create Storage for NFS and CIFS


Expected Completion Time: 40 Minutes
If you are only interested in SAN protocols then you do not need to complete the lab steps in this section.
However, we do recommend that you review the conceptual information found here and at the beginning of
each of this section's subsections before you advance to the SAN section, as most of this conceptual
material will not be repeated there.
Storage Virtual Machines (SVMs), previously known as Vservers, are the logical storage servers that
operate within a cluster for the purpose of serving data out to storage clients. A single cluster may host
hundreds of SVMs, with each SVM managing its own set of volumes (FlexVols), Logical Network Interfaces
(LIFs), storage access protocols (e.g. NFS/CIFS/iSCSI/FC/FCoE), and for NAS clients its own namespace.
The ability to support many SVMs in a single cluster is a key feature in clustered Data ONTAP, and
customers are encouraged to actively embrace that feature in order to take full advantage of a cluster's
capabilities. An organization is ill-advised to start out on a deployment intended to scale with only a single
SVM.
You explicitly choose and configure which storage protocols you want a given SVM to support at SVM
creation time, and you can later add or remove protocols as desired. A single SVM can host any
combination of the supported protocols.
An SVM's assigned aggregates and LIFs determine which cluster nodes handle processing for that SVM.
As you saw earlier, an aggregate is directly connected to the specific node hosting its disks, which means
that an SVM runs in part on any nodes whose aggregates are hosting volumes for the SVM. An SVM also
has a direct relationship to any nodes that are hosting its LIFs. LIFs are essentially an IP address with a
number of associated characteristics such as an assigned home node, an assigned physical home port, a
list of physical ports it can fail over to, an assigned SVM, a role, a routing group, and so on. You can only
assign a given LIF to a single SVM, and since LIFs map to physical network ports on cluster nodes, this
means that an SVM runs in part on all nodes that are hosting its LIFs.
When you configure an SVM with multiple data LIFs, clients can use any of those LIFs to access volumes
hosted by the SVM. Which specific LIF IP address a client will use in a given instance, and by extension
which LIF, is a function of name resolution, the mapping of a hostname to an IP address. CIFS Servers
have responsibility under NetBIOS for resolving requests for their hostnames received from clients, and in
so doing can perform some load balancing by responding to different clients with different LIF addresses,
but this distribution is not sophisticated and requires external NetBIOS name servers in order to deal with
clients that are not on the local network. NFS Servers do not handle name resolution on their own.


DNS provides basic name resolution load balancing by advertising multiple IP addresses for the same
hostname. DNS is supported by both NFS and CIFS clients and works equally well with clients on local
area and wide area networks. Since DNS is an external service that resides outside of Data ONTAP, this
architecture creates the potential for service disruptions if the DNS server is advertising IP addresses for
LIFs that are temporarily offline. To compensate for this condition you can configure DNS servers to
delegate the name resolution responsibility for the SVM's hostname records to the SVM itself, so that it can
directly respond to name resolution requests involving its LIFs. This allows the SVM to consider LIF
availability and LIF utilization levels when deciding what LIF address to return in response to a DNS name
resolution request.
LIFs that map to physical network ports residing on the same node as a volume's containing aggregate
offer the most efficient client access path to the volume's data. However, clients can also access volume
data through LIFs bound to physical network ports on other nodes in the cluster; in these cases clustered
Data ONTAP uses the high speed cluster network to bridge communication between the node hosting the
LIF and the node hosting the volume. NetApp best practice is to create at least one NAS LIF for a given
SVM on each cluster node that has an aggregate that is hosting volumes for that SVM. If you desire
additional resiliency then you can also create a NAS LIF on nodes not hosting aggregates for the SVM.
A NAS LIF (a LIF supporting only NFS and/or CIFS) can automatically failover from one cluster node to
another in the event of a component failure; any existing connections to that LIF from NFS and SMB 2.0
and later clients can non-disruptively tolerate the LIF failover event. When a LIF failover happens the NAS
LIF migrates to a different physical NIC, potentially to a NIC on a different node in the cluster, and
continues servicing network requests from that new node/port. Throughout this operation the NAS LIF
maintains its IP address; clients connected to the LIF may notice a brief delay while the failover is in
progress but as soon as it completes the clients resume any in-process NAS operations without any loss of
data.
The number of nodes in the cluster determines the total number of SVMs that can run in the cluster. Each
storage controller node can host a maximum of 125 SVMs, so you can calculate the cluster's effective SVM
limit by multiplying the number of nodes by 125. There is no limit on the number of LIFs that an SVM can
host, but there is a limit on the number of LIFs that can run on a given node. That limit is 256 LIFs per
node, but if the node is part of an HA pair configured for failover then the limit is half that value, 128 LIFs
per node (so that a node can also accommodate its HA partner's LIFs in the event of a failover).
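For example, the two-node cluster used in this lab could in principle host up to 2 x 125 = 250 SVMs, and because its two nodes form an HA pair configured for failover, each of those nodes can run at most 128 LIFs.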
Each SVM has its own NAS namespace, a logical grouping of the SVM's CIFS and NFS volumes into a
single logical filesystem view. Clients can access the entire namespace by mounting a single share or
export at the top of the namespace tree, meaning that SVM administrators can centrally maintain and
present a consistent view of the SVM's data to all clients rather than having to reproduce that view
structure on each individual client. As an administrator maps and unmaps volumes from the namespace,
those volumes instantly become visible or disappear from clients that have mounted CIFS and NFS
volumes higher in the SVM's namespace. Administrators can also create NFS exports at individual junction
points within the namespace and can create CIFS shares at any directory path in the namespace.
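As an illustration, by the end of this lab the svm1 SVM's namespace will look roughly like the following, with each path backed by the volume noted on the right:

/                     <- root of the namespace, hosted on the SVM's root volume (svm1_root)
/engineering          <- volume engineering, junctioned at /engineering
/engineering/users    <- volume eng_users, junctioned beneath engineering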

Create a Storage Virtual Machine for NAS


In this section you will create a new SVM named svm1 on the cluster and will configure it to serve out a
volume over NFS and CIFS. You will be configuring two NAS data LIFs on the SVM, one per node in the
cluster.


To perform this section's tasks from the GUI:

Start by creating the storage virtual machine.


1. In System Manager, open the Storage Virtual Machines tab.
2. Select cluster1.
3. Click the Create button to launch the Storage Virtual Machine Setup wizard.


The Storage Virtual Machine (SVM) Setup window opens.


4. Set the fields as follows:

SVM Name: svm1

Data Protocols: check the CIFS and NFS checkboxes


Note: The list of available Data Protocols is dependent upon what protocols are licensed on your
cluster; if a given protocol isn't listed, it is because you aren't licensed for it.

Security Style: NTFS

Root Aggregate: aggr1_cluster1_01

The default values for IPspace, Volume Type, and Default Language are already populated for you by the
wizard, as is the DNS configuration. When ready, click Submit & Continue.


The wizard creates the SVM and then advances to the protocols window. The protocols window can be rather
large, so this guide will present it in sections.
1. The Subnet setting defaults to Demo since this is the only subnet definition that exists in your lab. Click
the Browse button next to the Port textbox.

The Select Network Port or Adapter window opens.


1. Expand the list of ports for the node cluster1-01 and select port e0c.
2. Click the OK button.


The Select Network Port or Adapter window closes and focus returns to the protocols portion of the
Storage Virtual Machine (SVM) Setup wizard.
1. The Port textbox should have been populated with the cluster and port value you just selected.
2. Populate the CIFS Server Configuration textboxes with the following values:

CIFS Server Name: svm1

Active Directory: demo.netapp.com

Administrator Name: Administrator

Password: Netapp1!

3. The optional Provision a volume for CIFS storage textboxes offer a quick way to provision a simple
volume and CIFS share at SVM creation time. That share would not be multi-protocol, and in most
cases when you create a share you will be doing so for an existing SVM, so this lab guide instead shows
the more full-featured procedure for creating a volume and share in the following sections.

4. Expand the optional NIS Configuration section.


Scroll down in the window to see the expanded NIS Configuration section.
1. Clear the pre-populated values from the Domain Name and IP Address fields. In an NFS
environment where you are running NIS, you would want to configure these values, but this lab
environment is not utilizing NIS and in this case leaving these fields populated will create a name
resolution problem later in the lab.
2. As was the case with CIFS, the provision a volume for NFS storage textboxes offer a quick way to
provision a volume and create an NFS export for that volume. Once again, the volume will not be
inherently multi-protocol, and will in fact be a completely separate volume from the CIFS share volume
that you could have selected to create in the CIFS section. This lab will utilize the more full-featured
volume creation process that you will see in later sections.

3. Click the Submit & Continue button to advance the wizard to the next screen.


The SVM Administration section of the Storage Virtual Machine (SVM) Setup wizard opens. This window
allows you to set up an administration account that is scoped to just this SVM so that you can delegate
administrative tasks for this SVM to an SVM-specific administrator without giving that administrator
cluster-wide privileges. As the comments in this wizard window indicate, this account must also exist for
use with SnapDrive. Although you will not be using SnapDrive in this lab, it is usually a good idea to create
this account and you will do so here.
1. The User Name is pre-populated with the value vsadmin. Set the Password and Confirm Password
textboxes to netapp123. When finished, click the Submit & Continue button.


The New Storage Virtual Machine (SVM) Summary window opens.


1. Review the settings for the new SVM, taking special note of the IP Address listed in the CIFS/NFS
Configuration section. Data ONTAP drew this address from the Subnets pool that you created earlier in
the lab. When finished, click the OK button.


The window closes, and focus returns to the System Manager window, which now displays a summary
page for your newly created svm1 SVM.
1. Notice that in the main pane of the window the CIFS protocol is listed with a green background. This
indicates that a CIFS server is running for this SVM.

2. Notice that the NFS protocol is listed with a yellow background, which indicates that there is not a
running NFS server for this SVM. If you had configured the NIS server settings during the SVM Setup
wizard then the wizard would have started the NFS server, but since this lab is not using NIS you will
manually turn on NFS in a later step.

The New Storage Virtual Machine Setup Wizard only provisions a single LIF when creating a new SVM.
NetApp best practice is to configure a LIF on both nodes in an HA pair so that a client can access the
SVM's shares through either node. To comply with that best practice you will now create a second LIF
hosted on the other node in the cluster.


System Manager for clustered Data ONTAP 8.2 and earlier presented LIF management under the Storage
Virtual Machines tab, only offering visibility to LIFs for a single SVM at a time. With 8.3, that functionality
has moved to the Cluster tab, where you now have a single view for managing all the LIFs in your cluster.
1. Select the Cluster tab in the left navigation pane of System Manager.
2. Navigate to cluster1-> Configuration->Network.
3. Select the Network Interfaces tab in the main Network pane.
4. Select the only LIF listed for the svm1 SVM. Notice that this LIF is named svm1_cifs_nfs_lif1; you will
be following that same naming convention for the new LIF.

5. Click on the Create button to launch the Network Interface Create Wizard.


The Create Network Interface window opens.


1. Set the fields in the window to the following values:

Name: svm1_cifs_nfs_lif2

Interface Role: Serves Data

SVM: svm1

Protocol Access: Check CIFS and NFS checkboxes.

Management Access: Check the Enable Management Access checkbox.

Subnet: Demo

Check The IP address is selected from this subnet checkbox.

Also expand the Port Selection listbox and select the entry for cluster1-02 port e0c.

2. Click the Create button to continue.


The Create Network Interface window closes, and focus returns to the Network pane in System Manager.
1. Notice that a new entry for the svm1_cifs_nfs_lif2 LIF is now present under the Network Interfaces
tab. Select this entry and review the LIFs properties.

Lastly, you need to configure DNS delegation for the SVM so that Linux and Windows clients can
intelligently utilize all of svm1's configured NAS LIFs. To achieve this objective, the DNS server must
delegate to the cluster the responsibility for the DNS zone corresponding to the SVM's hostname, which in
this case will be svm1.demo.netapp.com. The lab's DNS server is already configured to delegate this
responsibility, but you must also configure the SVM to accept it. System Manager does not currently
include the capability to configure DNS delegation so you will need to use the CLI for this purpose.


1. Open a PuTTY connection to cluster1 following the instructions in the Accessing the Command Line
section at the beginning of this guide. Log in using the username admin and the password Netapp1!,
then enter the following commands.
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif1 -dns-zone
svm1.demo.netapp.com
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif2 -dns-zone
svm1.demo.netapp.com
cluster1::> network interface show -vserver svm1 -fields dns-zone,address
vserver lif                address       dns-zone
------- ------------------ ------------- --------------------
svm1    svm1_cifs_nfs_lif1 192.168.0.131 svm1.demo.netapp.com
svm1    svm1_cifs_nfs_lif2 192.168.0.132 svm1.demo.netapp.com
2 entries were displayed.

cluster1::>

2. Validate that delegation is working correctly by opening PowerShell on jumphost and using the
nslookup command as shown in the following CLI output. If the nslookup command returns IP
addresses for the hostname, then delegation is working correctly. If nslookup returns a
"Non-existent domain" error then delegation is not working correctly and you will need to review the
Data ONTAP commands you just entered, as they most likely contained an error.
Also notice in the following output that different executions of the nslookup command return
different addresses, demonstrating that DNS load balancing is working correctly. You may need to run
the nslookup command more than two times before you see it report different addresses for the
hostname.
Windows PowerShell
Copyright (C) 2013 Microsoft Corporation. All rights reserved.
PS C:\Users\Administrator.DEMO> nslookup svm1.demo.netapp.com
Server:  dc1.demo.netapp.com
Address:  192.168.0.253

Non-authoritative answer:
Name:    svm1.demo.netapp.com
Address:  192.168.0.132

PS C:\Users\Administrator.DEMO> nslookup svm1.demo.netapp.com
Server:  dc1.demo.netapp.com
Address:  192.168.0.253

Non-authoritative answer:
Name:    svm1.demo.netapp.com
Address:  192.168.0.131

PS C:\Users\Administrator.DEMO>


To perform this section's tasks from the command line:

If you do not already have a PuTTY connection open to cluster1 then open one now following the directions
in the Accessing the Command Line section at the beginning of this lab guide. The username is admin
and the password is Netapp1!.

1. Create the SVM named svm1. Notice that the clustered Data ONTAP command line syntax still refers
to storage virtual machines as vservers.
cluster1::> vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1_cluster1_01
-language C.UTF-8 -rootvolume-security-style ntfs -snapshot-policy default
[Job 259] Job is queued: Create svm1.
[Job 259]
[Job 259] Job succeeded:
Vserver creation completed
cluster1::>

2. Configure the SVM svm1 to support only the CIFS and NFS protocols:


cluster1::> vserver show-protocols -vserver svm1
Vserver: svm1
Protocols: nfs, cifs, fcp, iscsi, ndmp
cluster1::> vserver remove-protocols -vserver svm1 -protocols fcp,iscsi,ndmp
cluster1::> vserver show-protocols -vserver svm1
Vserver: svm1
Protocols: nfs, cifs
cluster1::> vserver show
                              Admin      Operational Root
Vserver     Type    Subtype   State      State       Volume     Aggregate
----------- ------- --------- ---------- ----------- ---------- ----------
cluster1    admin   -         -          -           -          -
cluster1-01 node    -         -          -           -          -
cluster1-02 node    -         -          -           -          -
svm1        data    default   running    running     svm1_root  aggr1_
                                                                cluster1_
                                                                01
4 entries were displayed.

cluster1::>


3. Display a list of the cluster's network interfaces:


cluster1::> network interface show
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
            cluster1-01_clus1
                         up/up    169.254.224.98/16  cluster1-01   e0a     true
            cluster1-02_clus1
                         up/up    169.254.129.177/16 cluster1-02   e0a     true
cluster1
            cluster1-01_mgmt1
                         up/up    192.168.0.111/24   cluster1-01   e0c     true
            cluster1-02_mgmt1
                         up/up    192.168.0.112/24   cluster1-02   e0c     true
            cluster_mgmt up/up    192.168.0.101/24   cluster1-01   e0c     true
5 entries were displayed.

cluster1::>

4. Notice that there are not yet any LIFs defined for the SVM svm1. Create the svm1_cifs_nfs_lif1 data
LIF for svm1:
cluster1::> network interface create -vserver svm1 -lif svm1_cifs_nfs_lif1 -role data
-data-protocol nfs,cifs -home-node cluster1-01 -home-port e0c -subnet-name Demo
-firewall-policy mgmt
cluster1::>

5. Create the svm1_cifs_nfs_lif2 data LIF for the SVM svm1:


cluster1::> network interface create -vserver svm1 -lif svm1_cifs_nfs_lif2 -role data
-data-protocol nfs,cifs -home-node cluster1-02 -home-port e0c -subnet-name Demo
-firewall-policy mgmt
cluster1::>

6. Display all of the LIFs owned by svm1:


cluster1::> network interface show -vserver svm1
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
svm1
            svm1_cifs_nfs_lif1
                         up/up    192.168.0.131/24   cluster1-01   e0c     true
            svm1_cifs_nfs_lif2
                         up/up    192.168.0.132/24   cluster1-02   e0c     true
2 entries were displayed.

cluster1::>


7. Configure the DNS domain and nameservers for the svm1 SVM:
cluster1::> vserver services dns show
                                                        Name
Vserver         State     Domains                       Servers
--------------- --------- ----------------------------- ----------------
cluster1        enabled   demo.netapp.com               192.168.0.253

cluster1::> vserver services dns create -vserver svm1 -name-servers 192.168.0.253 -domains
demo.netapp.com

cluster1::> vserver services dns show
                                                        Name
Vserver         State     Domains                       Servers
--------------- --------- ----------------------------- ----------------
cluster1        enabled   demo.netapp.com               192.168.0.253
svm1            enabled   demo.netapp.com               192.168.0.253
2 entries were displayed.

cluster1::>

8. Configure the LIFs to accept DNS delegation responsibility for the svm1.demo.netapp.com zone so that
you can advertise addresses for both of the NAS data LIFs that belong to svm1. You could have done
this as part of the network interface create commands but we opted to do it separately here to show
you how you can modify an existing LIF.
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif1 -dns-zone
svm1.demo.netapp.com
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif2 -dns-zone
svm1.demo.netapp.com
cluster1::> network interface show -vserver svm1 -fields dns-zone,address
vserver lif                address       dns-zone
------- ------------------ ------------- --------------------
svm1    svm1_cifs_nfs_lif1 192.168.0.131 svm1.demo.netapp.com
svm1    svm1_cifs_nfs_lif2 192.168.0.132 svm1.demo.netapp.com
2 entries were displayed.

cluster1::>


9. Verify that DNS delegation is working correctly by opening a PuTTY connection to the Linux host rhel1
(username root and password Netapp1!) and executing the following commands. If the delegation is
working correctly then you should see IP addresses returned for the host svm1.demo.netapp.com, and
if you run the command several times you will eventually see the responses alternate the returned
address between the SVM's two LIFs.
[root@rhel1 ~]# nslookup svm1.demo.netapp.com
Server:         192.168.0.253
Address:        192.168.0.253#53

Non-authoritative answer:
Name:   svm1.demo.netapp.com
Address: 192.168.0.132

[root@rhel1 ~]# nslookup svm1.demo.netapp.com
Server:         192.168.0.253
Address:        192.168.0.253#53

Non-authoritative answer:
Name:   svm1.demo.netapp.com
Address: 192.168.0.131

[root@rhel1 ~]#

10. This completes the planned LIF configuration changes for svm1, so now display a detailed
configuration report for the LIF svm1_cifs_nfs_lif1:
cluster1::> network interface show -lif svm1_cifs_nfs_lif1 -instance
Vserver Name: svm1
Logical Interface Name: svm1_cifs_nfs_lif1
Role: data
Data Protocol: nfs, cifs
Home Node: cluster1-01
Home Port: e0c
Current Node: cluster1-01
Current Port: e0c
Operational Status: up
Extended Status: -
Is Home: true
Network Address: 192.168.0.131
Netmask: 255.255.255.0
Bits in the Netmask: 24
IPv4 Link Local: -
Subnet Name: Demo
Administrative Status: up
Failover Policy: system-defined
Firewall Policy: mgmt
Auto Revert: false
Fully Qualified DNS Zone Name: svm1.demo.netapp.com
DNS Query Listen Enable: true
Failover Group Name: Default
FCP WWPN: -
Address family: ipv4
Comment: -
IPspace of LIF: Default
cluster1::>


11. When you created svm1 you enabled CIFS as one of its protocols, but that did not actually create a
CIFS server for the SVM. Now it is time to create that CIFS server.
cluster1::> vserver cifs show
This table is currently empty.
cluster1::> vserver cifs create -vserver svm1 -cifs-server svm1 -domain demo.netapp.com
In order to create an Active Directory machine account for the CIFS server, you
must supply the name and password of a Windows account with sufficient
privileges to add computers to the "CN=Computers" container within the
"DEMO.NETAPP.COM" domain.
Enter the user name: Administrator
Enter the password:
cluster1::> vserver cifs show
            Server          Status    Domain/Workgroup Authentication
Vserver     Name            Admin     Name             Style
----------- --------------- --------- ---------------- --------------
svm1        SVM1            up        DEMO             domain

cluster1::>


Configure CIFS and NFS


Clustered Data ONTAP configures CIFS and NFS on a per SVM basis. When you created the svm1 SVM
in the previous section, you set up and enabled CIFS and NFS for that SVM. However, it is important to
understand that clients cannot yet access the SVM using CIFS and NFS. That is partially because you
have not yet created any volumes on the SVM, but also because you have not told the SVM what you want
to share and who you want to share it with.
Each SVM has its own namespace. A namespace is a logical grouping of a single SVM's volumes into a
directory hierarchy that is private to just that SVM, with the root of that hierarchy hosted on the SVM's root
volume (svm1_root in the case of the svm1 SVM), and it is through this namespace that the SVM shares
data to CIFS and NFS clients. The SVM's other volumes are junctioned (i.e., mounted) within that root
volume or within other volumes that are already junctioned into the namespace. This hierarchy presents
NAS clients with a unified, centrally maintained view of the storage encompassed by the namespace,
regardless of where those junctioned volumes physically reside in the cluster. CIFS and NFS clients cannot
access a volume that has not been junctioned into the namespace.
CIFS and NFS clients can access the entire namespace by mounting a single NFS export or CIFS share
declared at the top of the namespace. While this is a very powerful capability, there is no requirement to
make the whole namespace accessible. You can create CIFS shares at any directory level in the
namespace, and you can create different NFS export rules at junction boundaries for individual volumes
and for individual qtrees within a junctioned volume.
Clustered Data ONTAP does not utilize an /etc/exports file to export NFS volumes; instead it uses a policy
model that dictates the NFS client access rules for the associated volumes. An NFS-enabled SVM implicitly
exports the root of its namespace and automatically associates that export with the SVM's default export
policy, but that default policy is initially empty, and until it is populated with access rules no NFS clients will
be able to access the namespace. The SVM's default export policy applies to the root volume and also to
any volumes that an administrator junctions into the namespace, but an administrator can optionally create
additional export policies in order to implement different access rules within the namespace. You can apply
export policies to a volume as a whole and to individual qtrees within a volume, but a given volume or qtree
can only have one associated export policy. While you cannot create NFS exports at any other directory
level in the namespace, NFS clients can mount from any level in the namespace by leveraging the
namespace's root export.
In this section of the lab, you are going to configure a default export policy for your SVM so that any
volumes you junction into its namespace will automatically pick up the same NFS export rules. You will
also create a single CIFS share at the top of the namespace so that all the volumes you junction into that
namespace are accessible through that one share. Finally, since your SVM will be sharing the same data
over NFS and CIFS, you will be setting up name mapping between UNIX and Windows user accounts to
facilitate smooth multiprotocol access to the volumes and files in the namespace.


To perform this section's tasks from the GUI:

When you create an SVM, Data ONTAP automatically creates a root volume to hold that SVM's
namespace. An SVM always has a root volume, whether or not it is configured to support NAS protocols.
Before you configure NFS and CIFS for your newly created SVM, take a quick look at the SVM's root
volume:
1. Select the Storage Virtual Machines tab.
2. Navigate to cluster1->svm1->Storage->Volumes.
3. Note the existence of the svm1_root volume, which hosts the namespace for the svm1 SVM. The root
volume is not large; only 20 MB in this example. Root volumes are small because they are only intended to
house the junctions that organize the SVM's volumes; all of the files hosted on the SVM should reside
inside the volumes that are junctioned into the namespace rather than directly in the SVM's root
volume.


Confirm that CIFS and NFS are running for your SVM using System Manager. Check CIFS first.
1. Under the Storage Virtual Machines tab, navigate to cluster1->svm1->Configuration->Protocols->CIFS.
2. In the CIFS pane, select the Configuration tab.
3. Note that the Service Status field is listed as Started, which indicates that there is a running CIFS
server for this SVM. If CIFS was not already running for this SVM, then you could configure and start it
using the Setup button found under the Configuration tab.


Now check that NFS is enabled for your SVM.


1. Select NFS under the Protocols section.
2. Notice that the NFS Server Status field shows as Not Configured. Remember that when you ran the
Storage Virtual Machine (SVM) Setup wizard you specified that you wanted the NFS protocol, but you
cleared the NIS fields since this lab isn't using NIS. That combination of actions caused the wizard to leave
the NFS server for this SVM disabled.
3. Click the Enable button in this window to turn NFS on.


The Server Status field in the NFS pane switches from Not Configured to Enabled.

At this point, you have confirmed that your SVM has a running CIFS server and a running NFS server.
However, you have not yet configured those two servers to actually serve any data, and the first step in
that process is to configure the SVM's default NFS export policy.


When you create an SVM with NFS, clustered Data ONTAP automatically creates a default NFS export
policy for the SVM that contains an empty list of access rules. Without any access rules that policy will not
allow clients to access any exports, so you need to add a rule to the default policy so that the volumes you
will create on this SVM later in this lab will be automatically accessible to NFS clients. If any of this seems
a bit confusing, do not worry; the concept should become clearer as you work through this section and the
next one.
1. In System Manager, select the Storage Virtual Machines tab and then go to cluster1->svm1->Policies->Export Policies.
2. In the Export Polices window, select the default policy.
3. Click the Add button in the bottom portion of the Export Policies pane.


The Create Export Rule window opens. Using this dialog you can create any number of rules that provide
fine grained access control for clients and specify their application order. For this lab, you are going to
create a single rule that grants unfettered access to any host on the lab's private network.
1. Set the fields in the window to the following values:

Client Specification: 0.0.0.0/0

Rule Index: 1

Access Protocols: Check the CIFS and NFS checkboxes.

The default values in the other fields in the window are acceptable.
When you finish entering these values, click OK.

The Create Export Policy window closes and focus returns to the Export Policies pane in System Manager.
The new access rule you created now shows up in the bottom portion of the pane. With this updated
default export policy in place, NFS clients will now be able to mount the root of the svm1 SVM's
namespace, and use that mount to access any volumes that you junction into the namespace.


Now create a CIFS share for the svm1 SVM. You are going to create a single share named nsroot at the
root of the SVM's namespace.
1. Select the Storage Virtual Machines tab and navigate to cluster1->svm1->Storage->Shares.
2. In the Shares pane, select Create Share.

The Create Share dialog box opens.


1. Set the fields in the window to the following values:

Folder to Share: / (If you alternately opt to use the Browse button, make sure you select the root
folder).

Share Name: nsroot

2. Click the Create button.


The Create Share window closes, and focus returns to the Shares pane in System Manager. The new nsroot
share now shows up in the Shares pane, but you are not yet finished.
1. Select nsroot from the list of shares.
2. Click the Edit button to edit the share's settings.


The Edit nsroot Settings window opens.


1. Select the Permissions tab. Make sure that you grant the group Everyone Full Control permission.
You can set more fine grained permissions on the share from this tab, but this configuration is sufficient
for the exercises in this lab.


There are other settings to check in this window, so do not close it yet.
1. Select the Options tab at the top of the window and make sure that the Enable as read/write, Enable
Oplocks, Browsable, and Notify Change checkboxes are all checked. All other checkboxes should be
cleared.

2. If you had to change any of the settings listed in Step 1 then the Save and Close button will become
active, and you should click it. Otherwise, click the Cancel button.

The Edit nsroot Settings window closes and focus returns to the Shares pane in System Manager. Setup of
the \\svm1\nsroot CIFS share is now complete.
For this lab you have created just one share at the root of your namespace, which allows users to access
any volume mounted in the namespace through that share. The advantage of this approach is that it
reduces the number of mapped drives that you have to manage on your clients; any changes you make to
the namespace become instantly visible and accessible to your clients. If you prefer to use multiple shares
then clustered Data ONTAP allows you to create additional shares rooted at any directory level within the
namespace.


Since you have configured your SVM to support both NFS and CIFS, you next need to set up username
mapping so that the UNIX root account and the DEMO\Administrator account will have synonymous
access to each other's files. Setting up such a mapping may not be desirable in all environments, but it will
simplify data sharing for this lab since these are the two primary accounts you are using in this lab.

1. In System Manager, open the Storage Virtual Machines tab and navigate to cluster1->svm1->Configuration->Users and Groups->Name Mapping.
2. In the Name Mapping pane, click Add.


The Add Name Mapping Entry window opens.


1. Create a Windows to UNIX mapping by completing all of the fields as follows:

Direction: Windows to UNIX

Position: 1

Pattern: demo\\administrator (the two backslashes listed here are not a typo, and administrator should
not be capitalized)

Replacement: root
When you have finished populating these fields, click Add.

The window closes and focus returns to the Name Mapping pane in System Manager. Click the Add button
again to create another mapping rule.


The Add Name Mapping Entry window opens.


1. Create a UNIX to Windows mapping by completing all of the fields as follows:

Direction: UNIX to Windows

Position: 1

Pattern: root

Replacement: demo\\administrator (the two backslashes listed here are not a typo, and
administrator should not be capitalized)
When you have finished populating these fields, click Add.

The second Add Name Mapping window closes, and focus again returns to the Name Mapping pane in
System Manager. You should now see two mappings listed in this pane that together make the root and
DEMO\Administrator accounts equivalent to each other for the purpose of file access within the SVM.


To perform this section's tasks from the command line:

1. Verify that CIFS is running by default for the SVM svm1:


cluster1::> vserver cifs show
            Server          Status    Domain/Workgroup Authentication
Vserver     Name            Admin     Name             Style
----------- --------------- --------- ---------------- --------------
svm1        SVM1            up        DEMO             domain

cluster1::>

2. Verify that NFS is running for the SVM svm1. It is not initially, so turn it on.
cluster1::> vserver nfs status -vserver svm1
The NFS server is not running on Vserver "svm1".
cluster1::> vserver nfs create -vserver svm1 -v3 enabled -access true
cluster1::> vserver nfs status -vserver svm1
The NFS server is running on Vserver "svm1".
cluster1::> vserver nfs show

Vserver: svm1
General Access: true
v3: enabled
v4.0: disabled
4.1: disabled
UDP: enabled
TCP: enabled
Default Windows User: -
Default Windows Group: -

cluster1::>


3. Review the default export policy for the SVM svm1 and configure the policy's rules:
cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default

cluster1::> vserver export-policy rule show
This table is currently empty.

cluster1::> vserver export-policy rule create -vserver svm1 -policyname default
-clientmatch 0.0.0.0/0 -rorule any -rwrule any -superuser any -anon 65534 -ruleindex 1

cluster1::> vserver export-policy rule show
             Policy          Rule   Access   Client                RO
Vserver      Name            Index  Protocol Match                 Rule
------------ --------------- ------ -------- --------------------- ---------
svm1         default         1      any      0.0.0.0/0             any

cluster1::> vserver export-policy rule show -policyname default -instance

Vserver: svm1
Policy Name: default
Rule Index: 1
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: any
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true

cluster1::>


4. Create a share at the root of the namespace for the SVM svm1:
cluster1::> vserver cifs share show
Vserver        Share         Path              Properties Comment  ACL
-------------- ------------- ----------------- ---------- -------- -----------
svm1           admin$        /                 browsable  -        -
svm1           c$            /                 oplocks    -        BUILTIN\Administrators / Full Control
                                               browsable
                                               changenotify
svm1           ipc$          /                 browsable  -        -
3 entries were displayed.

cluster1::> vserver cifs share create -vserver svm1 -share-name nsroot -path /

cluster1::> vserver cifs share show
Vserver        Share         Path              Properties Comment  ACL
-------------- ------------- ----------------- ---------- -------- -----------
svm1           admin$        /                 browsable  -        -
svm1           c$            /                 oplocks    -        BUILTIN\Administrators / Full Control
                                               browsable
                                               changenotify
svm1           ipc$          /                 browsable  -        -
svm1           nsroot        /                 oplocks    -        Everyone / Full Control
                                               browsable
                                               changenotify
4 entries were displayed.

cluster1::>

5. Set up CIFS <-> NFS user name mapping for the SVM svm1:
cluster1::> vserver name-mapping show
This table is currently empty.

cluster1::> vserver name-mapping create -vserver svm1 -direction win-unix -position 1 -pattern
demo\\administrator -replacement root

cluster1::> vserver name-mapping create -vserver svm1 -direction unix-win -position 1 -pattern
root -replacement demo\\administrator

cluster1::> vserver name-mapping show
Vserver        Direction Position
-------------- --------- --------
svm1           win-unix  1        Pattern: demo\\administrator
                                  Replacement: root
svm1           unix-win  1        Pattern: root
                                  Replacement: demo\\administrator
2 entries were displayed.

cluster1::>


Create a Volume and Map It to the Namespace


Volumes, or FlexVols, are the dynamically sized containers used by Data ONTAP to store data. A volume
only resides in a single aggregate at a time, but any given aggregate can host multiple volumes. Unlike an
aggregate, which can associate with multiple SVMs, a volume can only associate with a single SVM. The
maximum size of a volume can vary depending on what storage controller model is hosting it.
An SVM can host multiple volumes. While there is no specific limit on the number of FlexVols that can be
configured for a given SVM, each storage controller node is limited to hosting no more than 500 or 1000
FlexVols (varies based on controller model), which means that there is an effective limit on the total
number of volumes that a cluster can host, depending on how many nodes there are in your cluster.
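For example, if each node in this lab's two-node cluster supported 1000 FlexVols, the cluster as a whole could host at most roughly 2 x 1000 = 2000 volumes; with nodes limited to 500 FlexVols, the effective cluster-wide limit would be about 1000.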
Each storage controller node has a root aggregate (e.g. aggr0_<nodename>) that contains the node's Data
ONTAP operating system. Do not use the node's root aggregate to host any other volumes or user data;
always create additional aggregates and volumes for that purpose.
Clustered Data ONTAP FlexVols support a number of storage efficiency features including thin
provisioning, deduplication, and compression. One specific storage efficiency feature you will be seeing in
this section of the lab is thin provisioning, which dictates how space for a FlexVol is allocated in its
containing aggregate.
When you create a FlexVol with a volume guarantee of type volume you are thickly provisioning the
volume, pre-allocating all of the space for the volume on the containing aggregate, which ensures that the
volume will never run out of space unless the volume reaches 100% capacity. When you create a FlexVol
with a volume guarantee of none you are thinly provisioning the volume, only allocating space for it on the
containing aggregate at the time and in the quantity that the volume actually requires the space to store the
data.
This latter configuration allows you to increase your overall space utilization and even oversubscribe an
aggregate by allocating more volumes on it than the aggregate could actually accommodate if all the
subscribed volumes reached their full size. However, if an oversubscribed aggregate does fill up then all its
volumes will run out of space before they reach their maximum volume size, therefore oversubscription
deployments generally require a greater degree of administrative vigilance around space utilization.
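To make the distinction concrete, the only difference at creation time is the value of the space guarantee. The two commands below are a hypothetical sketch using example volume names (the volumes you create later in this section use -space-guarantee none, i.e. thin provisioning):

cluster1::> volume create -vserver svm1 -volume example_thick -aggregate aggr1_cluster1_01 -size 10GB -space-guarantee volume
cluster1::> volume create -vserver svm1 -volume example_thin -aggregate aggr1_cluster1_01 -size 10GB -space-guarantee none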
In the Clusters section, you created a new aggregate named aggr1_cluster1_01; you will now use that
aggregate to host a new thinly provisioned volume named engineering for the SVM named svm1.


To perform this section's tasks from the GUI:

1. In System Manager, open the Storage Virtual Machines tab.


2. Navigate to cluster1->svm1->Storage->Volumes.
3. Click Create to launch the Create Volume wizard.


The Create Volume window opens.


1. Populate the following values into the data fields in the window.

Name: engineering

Aggregate: aggr1_cluster1_01

Total Size: 10 GB

Check the Thin Provisioned checkbox.


Leave the other values at their defaults.

2. Click the Create button.


The Create Volume window closes, and focus returns to the Volumes pane in System Manager. The newly
created engineering volume should now appear in the Volumes list. Notice that the volume is 10 GB in
size, and is thin provisioned.

System Manager has also automatically mapped the engineering volume into the SVM's NAS namespace.


1. Navigate to Storage Virtual Machines->cluster1->svm1->Storage->Namespace and notice that the


engineering volume is now junctioned in under the root of the SVM's namespace, and has also
inherited the default NFS Export Policy.

Since you have already configured the access rules for the default policy, the volume is instantly accessible
to NFS clients. As you can see in the preceding screenshot, the engineering volume was junctioned as
/engineering, meaning that any client that had mapped a share to \\svm1\nsroot or NFS mounted svm1:/
would now instantly see the engineering directory in the share, and in the NFS mount.


Now create a second volume.


1. Navigate to Storage Virtual Machines->cluster1->svm1->Storage->Volumes.
2. Click Create to launch the Create Volume wizard.


The Create Volume window opens.


1. Populate the following values into the data fields in the window.

Name: eng_users

Aggregate: aggr1_cluster1_01

Total Size: 10 GB

Check the Thin Provisioned checkbox.


Leave the other values at their defaults.

2. Click the Create button.


The Create Volume window closes, and focus returns again to the Volumes pane in System Manager. The
newly created eng_users volume should now appear in the Volumes list.
1. Select the eng_users volume in the volumes list and examine the details for this volume in the General
box at the bottom of the pane. Specifically, note that this volume has a Junction Path value of
/eng_users.


You do have more options for junctioning than just placing your volumes into the root of your namespace.
In the case of the eng_users volume, you will re-junction that volume underneath the engineering volume
and shorten the junction name to take advantage of an already intuitive context.
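If you prefer the command line, the CLI equivalent of the GUI re-junctioning steps that follow would look roughly like the sketch below (the lab's own CLI instructions later in this section instead junction eng_users at its final location when the volume is created):

cluster1::> volume unmount -vserver svm1 -volume eng_users
cluster1::> volume mount -vserver svm1 -volume eng_users -junction-path /engineering/users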
1. Navigate to Storage Virtual Machines->cluster1->svm1->Storage->Namespace.
2. In the Namespace pane, select the eng_users junction point.
3. Click Unmount.

The Unmount Volume window opens asking for confirmation that you really want to unmount the volume
from the namespace.
1. Click Unmount.


The Unmount Volume window closes, and focus returns to the Namespace pane in System Manager. The
eng_users volume no longer appears in the junction list for the namespace, and since it is no longer
junctioned in the namespace, that means clients can no longer access it or even see it. Now you will
junction the volume in at another location in the namespace.

1. Click Mount.

The Mount Volume window opens.


1. Set the fields in the window as follows.

Volume Name: eng_users

Junction Name: users


Click Browse.


The Browse For Junction Path window opens.


1. Expand the root of the namespace structure.
2. Double-click engineering, which will populate /engineering into the Selected Path textbox.

3. Click OK to accept the selection.

The Browse For Junction Path window closes, and focus returns to the Mount Volume window.
1. The fields in the Mount Volume window should now all contain values as follows:

Volume Name: eng_users

Junction Name: users

Junction Path: /engineering


When ready, click Mount.


The Mount Volume window closes, and focus returns to the Namespace pane in System Manager.
The eng_users volume is now mounted in the namespace as /engineering/users.

You can also create a junction within user-created directories. For example, from a CIFS or NFS client you
could create a folder named projects inside the engineering volume and then create a widgets volume that
junctions in under the projects folder; in that scenario the namespace path to the widgets volume contents
would be /engineering/projects/widgets.
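A hypothetical sketch of that widgets example (it is not part of this lab's steps, and the names are illustrative) might look like the following, creating the projects folder from an NFS client and then junctioning a new volume beneath it:

[root@rhel1 ~]# mkdir /svm1/engineering/projects
cluster1::> volume create -vserver svm1 -volume widgets -aggregate aggr1_cluster1_01 -size 10GB
-space-guarantee none -junction-path /engineering/projects/widgets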
Now you will create a couple of qtrees within the eng_users volume, one for each of the users bob and
susan.


1. Navigate to Storage Virtual Machines->cluster1->svm1->Storage->Qtrees.


2. Click Create to launch the Create Qtree wizard.

The Create Qtree window opens.


1. Select the Details tab and then populate the fields as follows.

Name: bob

Volume: eng_users

2. Click on the Quota tab


The Quota tab is where you define the space usage limits you want to apply to the qtree. You will not
actually be implementing any quota limits in this lab.
1. Click the Create button.

The Create Qtree window closes, and focus returns to the Qtrees pane in System Manager. Now create
another qtree, for the user account susan.
1. Click the Create button.


The Create Qtree window opens.

1. Select the Details tab and then populate the fields as follows.

Name: susan

Volume: eng_users

2. Click Create.

The Create Qtree window closes, and focus returns to the Qtrees pane in System Manager. At this point
you should see both the bob and susan qtrees in System Manager.


To perform this section's tasks from the command line:

Display basic information about the SVM's current list of volumes:


cluster1::> volume show -vserver svm1
Vserver   Volume       Aggregate    State      Type Size  Available Used%
--------- ------------ ------------ ---------- ---- ----- --------- -----
svm1      svm1_root    aggr1_cluster1_01
                                    online     RW   20MB  18.86MB     5%

cluster1::>

Display the junctions in the SVM's namespace:

cluster1::> volume show -vserver svm1 -junction
                               Junction                 Junction
Vserver   Volume       Language Active   Junction Path  Path Source
--------- ------------ -------- -------- -------------- -----------
svm1      svm1_root    C.UTF-8  true     /              -

cluster1::>

Create the volume engineering, junctioning it into the namespace at /engineering:

cluster1::> volume create -vserver svm1 -volume engineering -aggregate aggr1_cluster1_01 -size
10GB -percent-snapshot-space 5 -space-guarantee none -policy default -junction-path /engineering
[Job 267] Job is queued: Create engineering.
[Job 267] Job succeeded: Successful
cluster1::>

Show the volumes for the SVM svm1 and list its junction points:
cluster1::> volume show -vserver svm1
Vserver   Volume       Aggregate    State      Type Size  Available Used%
--------- ------------ ------------ ---------- ---- ----- --------- -----
svm1      engineering  aggr1_cluster1_01
                                    online     RW   10GB  9.50GB      5%
svm1      svm1_root    aggr1_cluster1_01
                                    online     RW   20MB  18.86MB     5%
2 entries were displayed.

cluster1::> volume show -vserver svm1 -junction
                               Junction                 Junction
Vserver   Volume       Language Active   Junction Path  Path Source
--------- ------------ -------- -------- -------------- -----------
svm1      engineering  C.UTF-8  true     /engineering   RW_volume
svm1      svm1_root    C.UTF-8  true     /              -
2 entries were displayed.

cluster1::>


Create the volume eng_users, junctioning it into the namespace at /engineering/users.


cluster1::> volume create -vserver svm1 -volume eng_users -aggregate aggr1_cluster1_01 -size
10GB -percent-snapshot-space 5 -space-guarantee none -policy default -junction-path
/engineering/users
[Job 268] Job is queued: Create eng_users.
[Job 268] Job succeeded: Successful

cluster1::> volume show -vserver svm1 -junction
                               Junction                     Junction
Vserver   Volume       Language Active   Junction Path      Path Source
--------- ------------ -------- -------- ------------------ -----------
svm1      eng_users    C.UTF-8  true     /engineering/users RW_volume
svm1      engineering  C.UTF-8  true     /engineering       RW_volume
svm1      svm1_root    C.UTF-8  true     /                  -
3 entries were displayed.

cluster1::>

Display detailed information about the volume engineering. Notice here that the volume is reporting as thin
provisioned (Space Guarantee Style is set to none) and that the Export Policy is set to default.

cluster1::> volume show -vserver svm1 -volume engineering -instance


Vserver Name: svm1
Volume Name: engineering
Aggregate Name: aggr1_cluster1_01
Volume Size: 10GB
Volume Data Set ID: 1026
Volume Master Data Set ID: 2147484674
Volume State: online
Volume Type: RW
Volume Style: flex
Is Cluster-Mode Volume: true
Is Constituent Volume: false
Export Policy: default
User ID: -
Group ID: -
Security Style: ntfs
UNIX Permissions: ------------
Junction Path: /engineering
Junction Path Source: RW_volume
Junction Active: true
Junction Parent Volume: svm1_root
Comment:
Available Size: 9.50GB
Filesystem Size: 10GB
Total User-Visible Size: 9.50GB
Used Size: 152KB
Used Percentage: 5%
Volume Nearly Full Threshold Percent: 95%
Volume Full Threshold Percent: 98%
Maximum Autosize (for flexvols only): 12GB
(DEPRECATED)-Autosize Increment (for flexvols only): 512MB
Minimum Autosize: 10GB
Autosize Grow Threshold Percentage: 85%
Autosize Shrink Threshold Percentage: 50%

Autosize Mode: off
Autosize Enabled (for flexvols only): false
Total Files (for user-visible data): 311280
Files Used (for user-visible data): 98
Space Guarantee Style: none
Space Guarantee in Effect: true
Snapshot Directory Access Enabled: true
Space Reserved for Snapshot Copies: 5%
Snapshot Reserve Used: 0%
Snapshot Policy: default
Creation Time: Mon Oct 20 02:33:31 2014
Language: C.UTF-8
Clone Volume: false
Node name: cluster1-01
NVFAIL Option: off
Volume's NVFAIL State: false
Force NVFAIL on MetroCluster Switchover: off
Is File System Size Fixed: false
Extent Option: off
Reserved Space for Overwrites: 0B
Fractional Reserve: 0%
Primary Space Management Strategy: volume_grow
Read Reallocation Option: off
Inconsistency in the File System: false
Is Volume Quiesced (On-Disk): false
Is Volume Quiesced (In-Memory): false
Volume Contains Shared or Compressed Data: false
Space Saved by Storage Efficiency: 0B
Percentage Saved by Storage Efficiency: 0%
Space Saved by Deduplication: 0B
Percentage Saved by Deduplication: 0%
Space Shared by Deduplication: 0B
Space Saved by Compression: 0B
Percentage Space Saved by Compression: 0%
Volume Size Used by Snapshot Copies: 0B
Block Type: 64-bit
Is Volume Moving: false
Flash Pool Caching Eligibility: read-write
Flash Pool Write Caching Ineligibility Reason: -
Managed By Storage Service: -
Create Namespace Mirror Constituents For SnapDiff Use: -
Constituent Volume Role: -
QoS Policy Group Name: -
Caching Policy Name: -
Is Volume Move in Cutover Phase: false
Number of Snapshot Copies in the Volume: 0
VBN_BAD may be present in the active filesystem: false
Is Volume on a hybrid aggregate: false
Total Physical Used Size: 152KB
Physical Used Percentage: 0%
cluster1::>


View how much disk space this volume is actually consuming in its containing aggregate; the Total
Footprint value represents the volume's total consumption. The value here is so small because this volume
is thin provisioned and you have not yet added any data to it. If you had thick provisioned the volume then
the footprint here would have been 10 GB, the full size of the volume.

cluster1::> volume show-footprint -volume engineering

      Vserver : svm1
      Volume  : engineering

      Feature                                  Used       Used%
      --------------------------------         ---------- -----
      Volume Data Footprint                    152KB         0%
      Volume Guarantee                         0B            0%
      Flexible Volume Metadata                 13.38MB       0%
      Delayed Frees                            352KB         0%

      Total Footprint                          13.88MB       0%

cluster1::>

Create qtrees in the eng_users volume for the users bob and susan, then generate a list of all the qtrees
that belong to svm1, and finally produce a detailed report of the configuration for the qtree bob.

cluster1::> volume qtree create -vserver svm1 -volume eng_users -qtree bob

cluster1::> volume qtree create -vserver svm1 -volume eng_users -qtree susan

cluster1::> volume qtree show -vserver svm1
Vserver    Volume        Qtree        Style        Oplocks   Status
---------- ------------- ------------ ------------ --------- --------
svm1       eng_users     ""           ntfs         enable    normal
svm1       eng_users     bob          ntfs         enable    normal
svm1       eng_users     susan        ntfs         enable    normal
svm1       engineering   ""           ntfs         enable    normal
svm1       svm1_root     ""           ntfs         enable    normal
5 entries were displayed.

cluster1::> volume qtree show -qtree bob -instance

Vserver Name: svm1
Volume Name: eng_users
Qtree Name: bob
Actual (Non-Junction) Qtree Path: /vol/eng_users/bob
Security Style: ntfs
Oplock Mode: enable
Unix Permissions: -
Qtree Id: 1
Qtree Status: normal
Export Policy: default
Is Export Policy Inherited: true

cluster1::>


Connect to the SVM from a client


The svm1 SVM is up and running and is configured for NFS and CIFS access, so it's time to validate that
everything is working properly by mounting the NFS export on a Linux host, and the CIFS share on a
Windows host. You should complete both parts of this section so you can see that both hosts are able to
seamlessly access the volume and its files.

Connect a Windows client from the GUI:

This part of the lab demonstrates connecting the Windows client jumphost to the CIFS share \\svm1\nsroot
using the Windows GUI.
1. On the Windows host jumphost open Windows Explorer by clicking on the folder icon on the taskbar.

A Windows Explorer window opens.


1. In Windows Explorer click on Computer.


2. Click on Map network drive to launch the Map Network Drive wizard.

The Map Network Drive wizard opens.


1. Set the fields in the window to the following values.

Drive: S:

Folder: \\svm1\nsroot

Check the Reconnect at sign-in checkbox.


When finished click Finish.

Note:

If you encounter problems connecting to the share then most likely you did not properly clear the NIS
Configuration fields when you created the SVM. (This scenario most likely only occurred if you used System
Manager to create the SVM; the CLI method is not as susceptible.) If those NIS Configuration fields remained
populated then the SVM tries to use NIS for user and hostname name resolution, and since this lab doesn't
include a NIS server that resolution attempt will fail and you will not be able to mount the share. To correct
this problem go to System Manager and navigate to Storage Virtual Machines->cluster1->svm1->
Configuration->Services->NIS. If you see an NIS configuration listed in the NIS pane then select it and use
the Delete button to delete it, then try to connect to the share again.


A new Windows Explorer window opens.


The engineering volume you earlier junctioned into svm1's namespace is visible at the top of the nsroot
share, which points to the root of the namespace. If you created another volume on svm1 right now and
mounted it under the root of the namespace, that new volume would instantly become visible in this share,
and to clients like jumphost that have mounted the share. Double-click on the engineering folder to open it.


1. File Explorer displays the contents of the engineering folder. Create a file in this folder to confirm that
you can write to it.
Notice that the eng_users volume that you junctioned in as users is visible inside this folder.
2. Right-click in the empty space in the right pane of File Explorer.
3. In the context menu, select New->Text Document, and name the resulting file cifs.txt.


1. Double-click the cifs.txt file you just created to open it with Notepad.
2. In Notepad, enter some text (make sure you put a carriage return at the end of the line, or else when
you later view the contents of this file on Linux the command shell prompt will appear on the same line
as the file contents).

3. Use the File->Save menu in Notepad to save the file's updated contents to the share. If write access is
working properly you will not receive an error message.

Close Notepad and File Explorer to finish this exercise.

Connect a Linux client from the command line:

This part of the lab demonstrates connecting a Linux client to the NFS volume svm1:/ using the Linux
command line. Follow the instructions in the Accessing the Command Line section at the beginning of this
lab guide to open PuTTY and connect to the system rhel1.
Log in as the user root with the password Netapp1!, then issue the following command to see that you
currently have no NFS volumes mounted on this Linux host.
[root@rhel1 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root  11877388 4962504   6311544  45% /
tmpfs                           444612      76    444536   1% /dev/shm
/dev/sda1                       495844   40084    430160   9% /boot
[root@rhel1 ~]#


Create a mountpoint and mount the NFS export corresponding to the root of your SVM's namespace on
that mountpoint. When you run the df command again afterwards you'll see that the NFS export svm1:/ is
mounted on your Linux host as /svm1.

[root@rhel1 ~]# mkdir /svm1


[root@rhel1 ~]# echo "svm1:/ /svm1 nfs rw,defaults 0 0" >> /etc/fstab
[root@rhel1 ~]# grep svm1 /etc/fstab
svm1:/ /svm1 nfs rw,defaults 0 0
[root@rhel1 ~]# mount -a
[root@rhel1 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root  11877388 4962508   6311540  45% /
tmpfs                           444612      76    444536   1% /dev/shm
/dev/sda1                       495844   40084    430160   9% /boot
svm1:/                           19456     128     19328   1% /svm1
[root@rhel1 ~]#

Navigate into the /svm1 directory and notice that you can see the engineering volume that you previously
junctioned into the SVM's namespace. Navigate into engineering and verify that you can access and create
files.

Note:

The output shown here assumes that you have already performed the Windows client connection steps found
earlier in this section. When you cat the cifs.txt file, if the shell prompt winds up on the same line as the file
output, that indicates that when you created the file on Windows you forgot to include a newline at the end of
the file.

[root@rhel1 ~]# cd /svm1


[root@rhel1 svm1]# ls
engineering
[root@rhel1 svm1]# cd engineering
[root@rhel1 engineering]# ls
cifs.txt users
[root@rhel1 engineering]# cat cifs.txt
write test from jumphost
[root@rhel1 engineering]# echo "write test from rhel1" > nfs.txt
[root@rhel1 engineering]# cat nfs.txt
write test from rhel1
[root@rhel1 engineering]# ll
total 4
-rwxrwxrwx 1 root bin    26 Oct 20 03:05 cifs.txt
-rwxrwxrwx 1 root root   22 Oct 20 03:06 nfs.txt
drwxrwxrwx 4 root root 4096 Oct 20 02:37 users
[root@rhel1 engineering]#


NFS Exporting Qtrees (Optional)


Clustered Data ONTAP 8.2.1 introduced the ability to NFS export qtrees. This optional section explains
how to configure qtree exports and will demonstrate how to set different export rules for a given qtree. For
this exercise you will be working with the qtrees you created in the previous section.
Qtrees had many capabilities in Data ONTAP 7-mode that are no longer present in cluster mode. Qtrees
do still exist in cluster mode, but their purpose is essentially now limited to just quota management, with
most other 7-mode qtree features, including NFS exports, now the exclusive purview of volumes. This
functionality change created challenges for 7-mode customers with large numbers of NFS qtree exports
who were trying to transition to cluster mode and could not convert those qtrees to volumes because they
would exceed clustered Data ONTAP's maximum number of volumes limit.
To solve this problem, clustered Data ONTAP 8.2.1 introduced qtree NFS. NetApp continues to recommend
that customers favor volumes over qtrees in cluster mode whenever practical, but customers requiring
large numbers of qtree NFS exports now have a supported solution under clustered Data ONTAP.
This section provides both graphical and command line methods for configuring qtree NFS exports, but
note that some of the configuration steps can only be accomplished using the command line.

To perform this section's tasks from the GUI:

Begin by creating a new export and rules that only permit NFS access from the Linux host rhel1.
1. In System Manager, select the Storage Virtual Machines tab and then go to cluster1->svm1->Policies->Export Policies.
2. Click the Create button.

The Create Export Policy window opens.



1. Set the Policy Name to rhel1-only and click the Add button.

The Create Export Rule window opens.


1. Set Client Specification to 192.168.0.61, and notice that you are not selecting any Access Protocol
checkboxes. Click OK.


The Create Export Rule window closes, and focus returns to the Create Export Policy window.
1. The new access rule is now present in the rules window, and the rule's Access Protocols entry
indicates that there are no protocol restrictions. If you had selected all the available protocol
checkboxes when creating this rule then each of those selected protocols would have been explicitly
listed here. Click Create.

The Create Export Policy window closes, and focus returns to the Export Policies pane in System
Manager.

Now you need to apply this new export policy to the qtree. System Manager does not support this
capability so you will have to use the clustered Data ONTAP command line. Open a PuTTY connection to
cluster1, and log in using the username admin and the password Netapp1!, then enter the following
commands.


Note:

The following CLI commands are part of this lab section's graphical workflow. If you are looking for the CLI
workflow then keep paging forward until you see the orange bar denoting the start of those instructions.

1. Produce a list of svm1's export policies and then a list of its qtrees:
cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default
svm1             rhel1-only
2 entries were displayed.

cluster1::> volume qtree show
Vserver    Volume        Qtree        Style        Oplocks    Status
---------- ------------- ------------ ------------ ---------- --------
svm1       eng_users     ""           ntfs         enable     normal
svm1       eng_users     bob          ntfs         enable     normal
svm1       eng_users     susan        ntfs         enable     normal
svm1       engineering   ""           ntfs         enable     normal
svm1       svm1_root     ""           ntfs         enable     normal
5 entries were displayed.

cluster1::>

2. Apply the rhel1-only export policy to the susan qtree.


cluster1::> volume qtree modify -vserver svm1 -volume eng_users -qtree susan
-export-policy rhel1-only
cluster1::>

3. Display the configuration of the susan qtree. Notice the Export Policy field shows that this qtree is using
the rhel1-only export policy.
cluster1::> volume qtree show -vserver svm1 -volume eng_users -qtree susan

                           Vserver Name: svm1
                            Volume Name: eng_users
                             Qtree Name: susan
                             Qtree Path: /vol/eng_users/susan
                         Security Style: ntfs
                            Oplock Mode: enable
                       Unix Permissions: -
                               Qtree Id: 2
                           Qtree Status: normal
                          Export Policy: rhel1-only
             Is Export Policy Inherited: false

cluster1::>

4. Produce a report showing the export policy assignments for all the volumes and qtrees that belong to
svm1.
cluster1::> volume qtree show -vserver svm1 -fields export-policy
vserver volume      qtree export-policy
------- ----------- ----- -------------
svm1    eng_users   ""    default
svm1    eng_users   bob   default
svm1    eng_users   susan rhel1-only
svm1    engineering ""    default
svm1    svm1_root   ""    default
5 entries were displayed.

cluster1::>

5. Now you need to validate that the more restrictive export policy that you've applied to the qtree susan is
working as expected. If you still have an active PuTTY session open to the Linux host rhel1 then
bring that window up now, otherwise open a new PuTTY session to that host (username = root,
password = Netapp1!). Run the following commands to verify that you can still access the susan qtree
from rhel1.
[root@rhel1 ~]# cd /svm1/engineering/users
[root@rhel1 users]# ls
bob susan
[root@rhel1 users]# cd susan
[root@rhel1 susan]# echo "hello from rhel1" > rhel1.txt
[root@rhel1 susan]# cat rhel1.txt
hello from rhel1
[root@rhel1 susan]#

6. Now open a PuTTY connection to the Linux host rhel2 (again, username = root and password =
Netapp1!). This host should be able to access all the volumes and qtrees in the svm1 namespace
*except* susan, which should give a permission denied error because that qtree's associated export
policy only grants access to the host rhel1.
[root@rhel2 ~]# mkdir /svm1
[root@rhel2 ~]# mount svm1:/ /svm1
[root@rhel2 ~]# cd /svm1/engineering/users
[root@rhel2 users]# ls
bob susan
[root@rhel2 users]# cd susan
bash: cd: susan: Permission denied
[root@rhel2 users]# cd bob
[root@rhel2 bob]#

To perform this section's tasks from the command line:

1. First create a new export policy, which you will then configure with rules so that only the Linux host rhel1
is granted access to the associated volume and/or qtree:
cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default

cluster1::> vserver export-policy create -vserver svm1 -policyname rhel1-only

cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default
svm1             rhel1-only
2 entries were displayed.

cluster1::>


2. Next add a rule to the policy so that only the Linux host rhel1 will be granted access.
cluster1::> vserver export-policy rule show -vserver svm1 -policyname rhel1-only
There are no entries matching your query.
cluster1::> vserver export-policy rule create -vserver svm1 -policyname rhel1-only
-clientmatch 192.168.0.61 -rorule any -rwrule any -superuser any -anon 65534
-ruleindex 1
cluster1::> vserver export-policy rule show
             Policy          Rule    Access   Client                RO
Vserver      Name            Index   Protocol Match                 Rule
------------ --------------- ------  -------- --------------------- ---------
svm1         default         1       any      0.0.0.0/0             any
svm1         rhel1-only      1       any      192.168.0.61          any
2 entries were displayed.

cluster1::> vserver export-policy rule show -vserver svm1 -policyname rhel1-only -instance

                                               Vserver: svm1
                                           Policy Name: rhel1-only
                                            Rule Index: 1
                                       Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 192.168.0.61
                                        RO Access Rule: any
                                        RW Access Rule: any
           User ID To Which Anonymous Users Are Mapped: 65534
                              Superuser Security Types: any
                          Honor SetUID Bits in SETATTR: true
                             Allow Creation of Devices: true

cluster1::>

3. Produce a list of svm1's export policies and then a list of its qtrees:
cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default
svm1             rhel1-only
2 entries were displayed.

cluster1::> volume qtree show
Vserver    Volume        Qtree        Style        Oplocks    Status
---------- ------------- ------------ ------------ ---------- --------
svm1       eng_users     ""           ntfs         enable     normal
svm1       eng_users     bob          ntfs         enable     normal
svm1       eng_users     susan        ntfs         enable     normal
svm1       engineering   ""           ntfs         enable     normal
svm1       svm1_root     ""           ntfs         enable     normal
5 entries were displayed.

cluster1::>

4. Apply the rhel1-only export policy to the susan qtree.


cluster1::> volume qtree modify -vserver svm1 -volume eng_users -qtree susan
-export-policy rhel1-only
cluster1::>


5. Display the configuration of the susan qtree. Notice the Export Policy field shows that this qtree is using
the rhel1-only export policy.
cluster1::> volume qtree show -vserver svm1 -volume eng_users -qtree susan

                           Vserver Name: svm1
                            Volume Name: eng_users
                             Qtree Name: susan
                             Qtree Path: /vol/eng_users/susan
                         Security Style: ntfs
                            Oplock Mode: enable
                       Unix Permissions: -
                               Qtree Id: 2
                           Qtree Status: normal
                          Export Policy: rhel1-only
             Is Export Policy Inherited: false

cluster1::>

6. Produce a report showing the export policy assignments for all the volumes and qtrees that belong to
svm1.
cluster1::> volume qtree show -vserver svm1 -fields export-policy
vserver volume      qtree export-policy
------- ----------- ----- -------------
svm1    eng_users   ""    default
svm1    eng_users   bob   default
svm1    eng_users   susan rhel1-only
svm1    engineering ""    default
svm1    svm1_root   ""    default
5 entries were displayed.

cluster1::>

7. Now you need to validate that the more restrictive export policy that you've applied to the qtree susan is
working as expected. If you still have an active PuTTY session open to the Linux host rhel1 then
bring that window up now, otherwise open a new PuTTY session to that host (username = root,
password = Netapp1!). Run the following commands to verify that you can still access the susan qtree
from rhel1.
[root@rhel1 ~]# cd /svm1/engineering/users
[root@rhel1 users]# ls
bob susan
[root@rhel1 users]# cd susan
[root@rhel1 susan]# echo "hello from rhel1" > rhel1.txt
[root@rhel1 susan]# cat rhel1.txt
hello from rhel1
[root@rhel1 susan]#


8. Now open a PuTTY connection to the Linux host rhel2 (again, username = root and password =
Netapp1!). This host should be able to access all the volumes and qtrees in the svm1 namespace
*except* susan, which should give a permission denied error because that qtree's associated export
policy only grants access to the host rhel1.
[root@rhel2 ~]# mkdir /svm1
[root@rhel2 ~]# mount svm1:/ /svm1
[root@rhel2 ~]# cd /svm1/engineering/users
[root@rhel2 users]# ls
bob susan
[root@rhel2 users]# cd susan
bash: cd: susan: Permission denied
[root@rhel2 users]# cd bob
[root@rhel2 bob]#

Create Storage for iSCSI


Expected Completion Time: 50 Minutes
This section of the lab is optional, and includes instructions for mounting a LUN on Windows and Linux. If
you choose to complete this section you must first complete the "Create a Storage Virtual Machine for
iSCSI" section, and then complete either the "Create, Map, and Mount a Windows LUN" section or the
"Create, Map, and Mount a Linux LUN" section, as appropriate for your platform of interest.
The 50 minute time estimate assumes you complete only one of the Windows or Linux LUN sections. You
are welcome to complete both of those sections if you choose, but in that case you should plan on needing
about 90 minutes to complete the entire Create and Mount a LUN section.
If you completed the "Create a Storage Virtual Machine for NFS and CIFS" section of this lab then you
explored the concept of a Storage Virtual Machine (SVM) and created an SVM and configured it to serve
data over NFS and CIFS. If you skipped over that section of the lab guide then you should consider
reviewing the introductory text found at the beginning of that section and each of its subsections before
you proceed further here as this section builds on concepts described there.
In this section you are going to create another SVM and configure it for SAN protocols, which in the case of
this lab means you are going to configure the SVM for iSCSI since this virtualized lab does not support FC.
The configuration steps for iSCSI and FC are similar so the information provided here is also useful for FC
deployment. In this section you will create a new SVM, configure it for iSCSI, create a LUN for Windows
and/or a LUN for Linux, and then mount the LUN(s) on their respective hosts.
NetApp supports configuring an SVM to serve data over both SAN and NAS protocols, but it is quite
common to see people use separate SVMs for each in order to separate administrative responsibility or
for architectural and operational clarity. For example, SAN protocols do not support LIF failover, so you
cannot use NAS LIFs to support SAN protocols; you must instead create dedicated LIFs just for SAN.
Implementing separate SVMs for SAN and NAS can in this example simplify the operational complexity of
each SVM's configuration, making each easier to understand and manage, but ultimately whether to mix
or separate is a customer decision and not a NetApp recommendation.
Since SAN LIFs do not support migration to different nodes, an SVM must have dedicated SAN LIFs on
every node that you want to be able to service SAN requests, and you must utilize MPIO and ALUA to
manage the controller's available paths to the LUNs; in the event of a path disruption MPIO and ALUA will
compensate by re-routing the LUN communication over an alternate controller path (i.e. over a different SAN LIF).


NetApp best practice is to configure at least one SAN LIF per storage fabric/network on each node in the
cluster so that all nodes can provide a path to the LUNs. In large clusters where this would result in the
presentation of a large number of paths for a given LUN, NetApp recommends that you use portsets to
limit the LUN to seeing no more than 8 LIFs. Data ONTAP 8.3 introduces a new Selective LUN Mapping
(SLM) feature to provide further assistance in managing fabric paths. SLM limits LUN path access to just
the node that owns the LUN and its HA partner, and Data ONTAP automatically applies SLM to all new
LUN map operations. For further information on Selective LUN Mapping, please see the "Hands-On Lab for
SAN Features in clustered Data ONTAP 8.3" lab.
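Although this lab does not require you to tune SLM, you can inspect which nodes report paths for a mapped LUN from the cluster shell. The command below is only a sketch to illustrate the idea; it assumes the windows.lun LUN that you create and map later in this section, and the reporting-nodes field name is an assumption based on the 8.3 SLM feature:

cluster1::> lun mapping show -vserver svmluns -path /vol/winluns/windows.lun -fields reporting-nodes

With SLM in effect you would expect the output to list only the node that owns the LUN's volume and its HA partner (cluster1-01 and cluster1-02 in this lab).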
In this lab the cluster contains two nodes connected to a single storage network, but you will still be
configuring a total of 4 SAN LIFs simply because it is common to see real world implementations with 2
paths per node for redundancy.
This section of the lab allows you to create and mount a LUN for just Windows, just Linux, or both as you
wish. Both the Windows and Linux LUN creation steps require that you complete the "Create a Storage
Virtual Machine for iSCSI" section that comes next. If you want to create a Windows LUN then you will
need to complete the "Create, Map, and Mount a Windows LUN" section that follows, and if you want to
create a Linux LUN then you will need to complete the "Create, Map, and Mount a Linux LUN" section
that follows after that. You can safely complete both of those last two sections in the same lab.
Create a Storage Virtual Machine for iSCSI
In this section you will create a new SVM named svmluns on the cluster. You will create the SVM,
configure it for iSCSI, and create four data LIFs to support LUN access to the SVM (two on each cluster
node).


To perform this section's tasks from the GUI:

Return to the System Manager window and start the procedure to create a new storage virtual
machine.
1. Open the Storage Virtual Machines tab.
2. Select cluster1.

3. Click Create to launch the Storage Virtual Machine Setup wizard.

The Storage Virtual Machine (SVM) Setup window opens.


1. Set the fields as follows:
   SVM Name: svmluns
   Data Protocols: check the iSCSI checkbox. Note that the list of available Data Protocols is
   dependent upon what protocols are licensed on your cluster; if a given protocol isn't listed, it is because
   you aren't licensed for it.
   Root Aggregate: aggr1_cluster1_01. If you completed the NAS section of this lab you will note that
   this is the same aggregate you used to hold the volumes for svm1. Multiple SVMs can share the same
   aggregate.

The default values for IPspace, Volume Type, Default Language, and Security Style are already populated
for you by the wizard, as is the DNS configuration. When ready, click Submit & Continue.


The Configure iSCSI Protocol step of the wizard opens.


1. Set the fields in the window as follows.
   LIFs Per Node: 2
   Subnet: Demo

2. The Provision a LUN for iSCSI Storage (Optional) section allows you to quickly create a LUN when first
creating an SVM. This lab guide does not use it, in order to show you the much more common activity
of adding a new volume and LUN to an existing SVM in a later step.
3. Check the Review or modify LIF configuration (Advanced Settings) checkbox. Checking this checkbox
changes the window layout and makes some fields uneditable, so the screenshot shows this checkbox
before it has been checked.


Once you check the Review or modify LIF configuration checkbox, the Configure iSCSI Protocol window
changes to include a list of the LIFs that the wizard plans to create. Take note of the LIF names and ports
that the wizard has chosen to assign the LIFs you have asked it to create. Since this lab utilizes a cluster
that only has two nodes and those nodes are configured as an HA pair, Data ONTAP's automatically
configured Selective LUN Mapping is more than sufficient for this lab, so there is no need to create a
portset.
1. Click Submit & Continue.


The wizard advances to the SVM Administration step. Unlike data LIFs for NAS protocols, which
automatically support both data and management functionality, iSCSI LIFs only support data protocols and
so you must create a dedicated management LIF for this new SVM.
1. Set the fields in the window as follows.
   Password: netapp123
   Confirm Password: netapp123
   Subnet: Demo
   Port: cluster1-01:e0c
   Click Submit & Continue.


The New Storage Virtual Machine (SVM) Summary window opens. Review the contents of this window,
taking note of the names, IP addresses, and port assignments for the 4 iSCSI LIFs and the management
LIF that the wizard created for you.
1. Click OK to close the window.

The New Storage Virtual Machine (SVM) Summary window closes, and focus returns to System Manager
which now shows the summary view for the new svmluns SVM.


1. Notice that in the main pane of the window the iSCSI protocol is listed with a green background. This
indicates that iSCSI is enabled and running for this SVM.

To perform this section's tasks from the command line:

If you do not already have a PuTTY session open to cluster1, open one now following the instructions in
the Accessing the Command Line section at the beginning of this lab guide and enter the following
commands.
1. Display the available aggregates so you can decide which one you want to use to host the root volume
for the SVM you will be creating.
cluster1::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_01
           10.26GB   510.6MB   95% online       1 cluster1-01      raid_dp,
                                                                   normal
aggr0_cluster1_02
           10.26GB   510.6MB   95% online       1 cluster1-02      raid_dp,
                                                                   normal
aggr1_cluster1_01
           72.53GB   72.49GB    0% online       3 cluster1-01      raid_dp,
                                                                   normal
aggr1_cluster1_02
           72.53GB   72.53GB    0% online       0 cluster1-02      raid_dp,
                                                                   normal
4 entries were displayed.
cluster1::>


2. Create the SVM svmluns on aggregate aggr1_cluster1_01. Note that the clustered Data ONTAP
command line syntax still refers to storage virtual machines as vservers.
cluster1::> vserver create -vserver svmluns -rootvolume svmluns_root -aggregate
aggr1_cluster1_01 -language C.UTF-8 -rootvolume-security-style unix -snapshot-policy default
[Job 269] Job is queued: Create svmluns.
[Job 269]
[Job 269] Job succeeded:
Vserver creation completed
cluster1::>

3. Add the iSCSI protocol to the SVM svmluns:


cluster1::> vserver iscsi create -vserver svmluns
cluster1::> vserver show-protocols -vserver svmluns
Vserver: svmluns
Protocols: nfs, cifs, fcp, iscsi, ndmp
cluster1::> vserver remove-protocols -vserver svmluns -protocols nfs,cifs,fcp,ndmp
cluster1::> vserver show-protocols -vserver svmluns
Vserver: svmluns
Protocols: iscsi
cluster1::> vserver show -vserver svmluns

                                    Vserver: svmluns
                               Vserver Type: data
                            Vserver Subtype: default
                               Vserver UUID: beeb8ca5-580c-11e4-a807-0050569901b8
                                Root Volume: svmluns_root
                                  Aggregate: aggr1_cluster1_01
                                 NIS Domain: -
                 Root Volume Security Style: unix
                                LDAP Client: -
               Default Volume Language Code: C.UTF-8
                            Snapshot Policy: default
                                    Comment:
                               Quota Policy: default
                List of Aggregates Assigned: -
 Limit on Maximum Number of Volumes allowed: unlimited
                        Vserver Admin State: running
                  Vserver Operational State: running
   Vserver Operational State Stopped Reason: -
                          Allowed Protocols: iscsi
                       Disallowed Protocols: nfs, cifs, fcp, ndmp
            Is Vserver with Infinite Volume: false
                           QoS Policy Group: -
                                Config Lock: false
                               IPspace Name: Default

cluster1::>


4. Create 4 SAN LIFs for the SVM svmluns, 2 per node. Do not forget you can save some typing here by
using the up arrow to recall previous commands that you can edit and then execute.
cluster1::> network interface create -vserver svmluns -lif cluster1-01_iscsi_lif_1
-role data -data-protocol iscsi -home-node cluster1-01 -home-port e0d -subnet-name Demo
-failover-policy disabled -firewall-policy data
cluster1::> network interface create -vserver svmluns -lif cluster1-01_iscsi_lif_2
-role data -data-protocol iscsi -home-node cluster1-01 -home-port e0e -subnet-name Demo
-failover-policy disabled -firewall-policy data
cluster1::> network interface create -vserver svmluns -lif cluster1-02_iscsi_lif_1
-role data -data-protocol iscsi -home-node cluster1-02 -home-port e0d -subnet-name Demo
-failover-policy disabled -firewall-policy data
cluster1::> network interface create -vserver svmluns -lif cluster1-02_iscsi_lif_2
-role data -data-protocol iscsi -home-node cluster1-02 -home-port e0e -subnet-name Demo
-failover-policy disabled -firewall-policy data
cluster1::>

5. Now create a Management Interface LIF for the SVM.


cluster1::> network interface create -vserver svmluns -lif svmluns_admin_lif1 -role data
-data-protocol none -home-node cluster1-01 -home-port e0c -subnet-name Demo
-failover-policy nextavail -firewall-policy mgmt
cluster1::>

6. Display a list of the LIFs in the cluster.


cluster1::> network interface show
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
cluster
            cluster1-01_clus1  up/up  169.254.224.98/16  cluster1-01  e0a   true
            cluster1-02_clus1  up/up  169.254.129.177/16 cluster1-02  e0a   true
cluster1
            cluster1-01_mgmt1  up/up  192.168.0.111/24   cluster1-01  e0c   true
            cluster1-02_mgmt1  up/up  192.168.0.112/24   cluster1-02  e0c   true
            cluster_mgmt       up/up  192.168.0.101/24   cluster1-01  e0c   true
svm1
            svm1_cifs_nfs_lif1 up/up  192.168.0.131/24   cluster1-01  e0c   true
            svm1_cifs_nfs_lif2 up/up  192.168.0.132/24   cluster1-02  e0c   true
svmluns
            cluster1-01_iscsi_lif_1 up/up 192.168.0.133/24 cluster1-01 e0d  true
            cluster1-01_iscsi_lif_2 up/up 192.168.0.134/24 cluster1-01 e0e  true
            cluster1-02_iscsi_lif_1 up/up 192.168.0.135/24 cluster1-02 e0d  true
            cluster1-02_iscsi_lif_2 up/up 192.168.0.136/24 cluster1-02 e0e  true
            svmluns_admin_lif1 up/up  192.168.0.137/24   cluster1-01  e0c   true
12 entries were displayed.

cluster1::>
7. Display detailed information for the LIF cluster1-01_iscsi_lif_1.


cluster1::> network interface show -lif cluster1-01_iscsi_lif_1 -instance

                     Vserver Name: svmluns
           Logical Interface Name: cluster1-01_iscsi_lif_1
                             Role: data
                    Data Protocol: iscsi
                        Home Node: cluster1-01
                        Home Port: e0d
                     Current Node: cluster1-01
                     Current Port: e0d
               Operational Status: up
                  Extended Status: -
                          Is Home: true
                  Network Address: 192.168.0.133
                          Netmask: 255.255.255.0
              Bits in the Netmask: 24
                  IPv4 Link Local: -
                      Subnet Name: Demo
            Administrative Status: up
                  Failover Policy: disabled
                  Firewall Policy: data
                      Auto Revert: false
    Fully Qualified DNS Zone Name: none
          DNS Query Listen Enable: false
              Failover Group Name: -
                         FCP WWPN: -
                   Address family: ipv4
                          Comment: -
                   IPspace of LIF: Default

cluster1::>

8. Display a list of all the volumes on the cluster to see the root volume for the svmluns SVM.
cluster1::> volume show
Vserver     Volume       Aggregate         State   Type       Size  Available Used%
----------- ------------ ----------------- ------- ---- ---------- ---------- -----
cluster1-01 vol0         aggr0_cluster1_01 online  RW       9.71GB     6.97GB   28%
cluster1-02 vol0         aggr0_cluster1_02 online  RW       9.71GB     6.36GB   34%
svm1        eng_users    aggr1_cluster1_01 online  RW         10GB     9.50GB    5%
svm1        engineering  aggr1_cluster1_01 online  RW         10GB     9.50GB    5%
svm1        svm1_root    aggr1_cluster1_01 online  RW         20MB    18.86MB    5%
svmluns     svmluns_root aggr1_cluster1_01 online  RW         20MB    18.86MB    5%
6 entries were displayed.

cluster1::>


Create, Map, and Mount a Windows LUN


In an earlier section you created a new SVM and configured it for iSCSI. In the following sub-sections you
will perform the remaining steps needed to configure and use a LUN under Windows:

Gather the iSCSI Initiator Name of the Windows client.

Create a thin provisioned Windows volume, create a thin provisioned Windows LUN within that
volume, and map the LUN so it can be accessed by the Windows client.

Mount the LUN on a Windows client leveraging multi-pathing.

You must complete all of the subsections of this section in order to use the LUN from the Windows client.
Gather the Windows Client iSCSI Initiator Name
You need to determine the Windows client's iSCSI initiator name so that when you create the LUN you can
set up an appropriate initiator group to control access to the LUN.

This section's tasks must be performed from the GUI:

Perform the following steps on the desktop of the Windows client named jumphost (the main Windows host you use in the lab).

1. Click on the Windows button on the far left side of the task bar.

The Start screen opens.


1) Click on Administrative Tools.


Windows Explorer opens to the List of Administrative Tools.


1) Double-click the entry for the iSCSI Initiator tool.

The iSCSI Initiator Properties window opens.


1. Select the Configuration tab, and take note of the value in the Initiator Name field, which contains the
initiator name for jumphost. The value should read as:
iqn.1991-05.com.microsoft:jumphost.demo.netapp.com
You will need this value later, so you might want to copy this value from the properties window and
paste it into a text file on your lab's desktop so you have it readily available when that time comes.
2. Click OK.

The iSCSI Properties window closes, and focus returns to the Windows Explorer Administrator Tools
window. Leave this window open as you will need to access other tools later in the lab.
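If you ever need to retrieve this value again without the GUI, a PowerShell prompt on jumphost can report it as well. This is just a sketch using the standard Windows Server 2012 storage cmdlet; on a host whose only initiator is the iSCSI software initiator it should return the same IQN noted above:

PS C:\> (Get-InitiatorPort).NodeAddress
iqn.1991-05.com.microsoft:jumphost.demo.netapp.com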


Create and Map a Windows LUN


You will now create a new thin provisioned Windows LUN named windows.lun in the volume winluns on
the SVM svmluns. You will also create an initiator igroup for the LUN and populate it with the Windows host
jumphost. An initiator group, or igroup, defines a list of the Fibre Channel WWPNs or iSCSI node names of
the hosts that are permitted to see and access the associated LUNs.

To perform this section's tasks from the GUI:

Return to the System Manager window.


1. Open the Storage Virtual Machines tab.
2. Navigate to the cluster1->svmluns->Storage->LUNs.
3. Click Create to launch the Create LUN wizard.

The Create LUN Wizard opens.


1. Click Next to advance to the next step in the wizard.

The wizard advances to the General Properties step.


1. Set the fields in the window as follows.

Name: windows.lun

Description: Windows LUN

Type: Windows 2008 or later

Size: 10 GB

Check the Thin Provisioned check box.


Click Next to continue.

The wizard advances to the LUN Container step.


1. Select the radio button to create a new flexible volume and set the fields under that heading as follows.

Aggregate Name: aggr1_cluster1_01

Volume Name: winluns


When finished click Next.

The wizard advances to the Initiator Mappings step.

1. Click the Add Initiator Group button.


The Create Initiator Group window opens.


1. Set the fields in the window as follows.

Name: winigrp

Operating System: Windows

Type: Select the iSCSI radio button.


Click the Initiators tab.

The Initiators tab displays.


1. Click the Add button to add a new initiator.

A new empty entry appears in the list of initiators.


1. Populate the Name entry with the value of the iSCSI Initiator name for jumphost that you saved
earlier. In case you misplaced that value, it was
iqn.1991-05.com.microsoft:jumphost.demo.netapp.com.
When you finish entering the value, click the OK button underneath the entry. Finally, click Create.

An Initiator-Group Summary window opens confirming that the winigrp igroup was created successfully.
1. Click OK to acknowledge the confirmation.

The Initiator-Group Summary window closes, and focus returns to the Initiator Mapping step of the Create
LUN wizard.


1. Click the checkbox under the map column next to the winigrp initiator group. This is a critical step
because this is where you actually map the new LUN to the new igroup.
2. Click Next to continue.

The wizard advances to the Storage Quality of Service Properties step. You will not be creating any QoS
policies in this lab. If you are interested in learning about QoS, please see the "Hands-on Lab for Advanced
Concepts for clustered Data ONTAP 8.3" lab.


1. Click Next to continue.

The wizard advances to the LUN Summary step, where you can review your selections before proceeding
with creating the LUN.


1. If everything looks correct, click Next.

The wizard begins the task of creating the volume that will contain the LUN, creating the LUN, and
mapping the LUN to the new igroup. As it finishes each step the wizard displays a green checkmark in the
window next to that step.


Click the Finish button to terminate the wizard.

The Create LUN wizard window closes, and focus returns to the LUNs view in System Manager. The new
windows.lun LUN now shows up in the LUNs view, and if you select it you can review its details at the
bottom of the pane.


To perform this section's tasks from the command line:

If you do not already have a PuTTY connection open to cluster1 then please open one now following the
instructions in the Accessing the Command Line section at the beginning of this lab guide.
Create the volume winluns to host the Windows LUN you will be creating in a later step:
cluster1::> volume create -vserver svmluns -volume winluns -aggregate aggr1_cluster1_01 -size
10.31GB -percent-snapshot-space 0 -snapshot-policy none -space-guarantee none
-autosize-mode grow -nvfail on
[Job 270] Job is queued: Create winluns.
[Job 270] Job succeeded: Successful
cluster1::> volume show
Vserver     Volume       Aggregate         State   Type       Size  Available Used%
----------- ------------ ----------------- ------- ---- ---------- ---------- -----
cluster1-01 vol0         aggr0_cluster1_01 online  RW       9.71GB     7.00GB   27%
cluster1-02 vol0         aggr0_cluster1_02 online  RW       9.71GB     6.34GB   34%
svm1        eng_users    aggr1_cluster1_01 online  RW         10GB     9.50GB    5%
svm1        engineering  aggr1_cluster1_01 online  RW         10GB     9.50GB    5%
svm1        svm1_root    aggr1_cluster1_01 online  RW         20MB    18.86MB    5%
svmluns     svmluns_root aggr1_cluster1_01 online  RW         20MB    18.86MB    5%
svmluns     winluns      aggr1_cluster1_01 online  RW      10.31GB    21.31GB    0%
7 entries were displayed.

cluster1::>


Create the Windows LUN named windows.lun:


cluster1::> lun create -vserver svmluns -volume winluns -lun windows.lun -size 10GB
-ostype windows_2008 -space-reserve disabled
Created a LUN of size 10g (10742215680)
cluster1::> lun modify -vserver svmluns -volume winluns -lun windows.lun -comment "Windows LUN"
cluster1::> lun show
Vserver   Path                            State   Mapped   Type         Size
--------- ------------------------------- ------- -------- -------- --------
svmluns   /vol/winluns/windows.lun        online  unmapped windows_2008
                                                                      10.00GB
cluster1::>

Display a list of the defined igroups, then create a new igroup named winigrp that you will use to manage
access to the new LUN. Finally, add the Windows client's initiator name to the igroup.
cluster1::> igroup show
This table is currently empty.

cluster1::> igroup create -vserver svmluns -igroup winigrp -protocol iscsi -ostype windows
-initiator iqn.1991-05.com.microsoft:jumphost.demo.netapp.com

cluster1::> igroup show
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
svmluns   winigrp      iscsi    windows  iqn.1991-05.com.microsoft:jumphost.
                                         demo.netapp.com

cluster1::>


Map the LUN windows.lun to the igroup winigrp, then display a list of all the LUNs, all the mapped LUNs,
and finally a detailed report on the configuration of the LUN windows.lun.
cluster1::> lun map -vserver svmluns -volume winluns -lun windows.lun -igroup winigrp
cluster1::> lun show
Vserver   Path                            State   Mapped   Type         Size
--------- ------------------------------- ------- -------- -------- --------
svmluns   /vol/winluns/windows.lun        online  mapped   windows_2008
                                                                      10.00GB
cluster1::> lun mapped show
Vserver    Path                                      Igroup    LUN ID  Protocol
---------- ----------------------------------------- --------  ------  --------
svmluns    /vol/winluns/windows.lun                  winigrp        0  iscsi

cluster1::> lun show -lun windows.lun -instance

                     Vserver Name: svmluns
                         LUN Path: /vol/winluns/windows.lun
                      Volume Name: winluns
                       Qtree Name: ""
                         LUN Name: windows.lun
                         LUN Size: 10.00GB
                          OS Type: windows_2008
                Space Reservation: disabled
                    Serial Number: wOj4Q]FMHlq6
                          Comment: Windows LUN
       Space Reservations Honored: false
                 Space Allocation: disabled
                            State: online
                         LUN UUID: 8e62421e-bff4-4ac7-85aa-2e6e3842ec8a
                           Mapped: mapped
                       Block Size: 512
                 Device Legacy ID: -
                 Device Binary ID: -
                   Device Text ID: -
                        Read Only: false
            Fenced Due to Restore: false
                        Used Size: 0
              Maximum Resize Size: 502.0GB
                    Creation Time: 10/20/2014 04:36:41
                            Class: regular
             Node Hosting the LUN: cluster1-01
                 QoS Policy Group: -
                            Clone: false
         Clone Autodelete Enabled: false
              Inconsistent import: false

cluster1::>


Mount the LUN on a Windows Client


The final step is to mount the LUN on the Windows client. You will be using MPIO/ALUA to support multiple
paths to the LUN using the SAN LIFs you configured earlier on the svmluns SVM. Data ONTAP
DSM for Windows MPIO is the multi-pathing software you will be using for this lab, and that software is
already installed on jumphost.

This section's tasks must be performed from the GUI:

You should begin by validating that the Multi-Path I/O (MPIO) software is working properly on this Windows
host. The Administrative Tools window should still be open on jumphost; if you already closed it then you
will need to re-open it now so you can access the MPIO tool.
1. Double-click the MPIO tool.

The MPIO Properties window opens.


1. Select the Discover Multi-Paths tab.


2. Examine the Add Support for iSCSI devices checkbox. If this checkbox is NOT greyed out then MPIO
is improperly configured. This checkbox should be greyed out for this lab, but in the event it is not then
place a check in that checkbox, click the Add button, and then click Yes in the reboot dialog to reboot
your Windows host. Once the system finishes rebooting, return to this window to verify that the
checkbox is now greyed out, indicating that MPIO is properly configured.
3. Click Cancel.

The MPIO Properties window closes and focus returns to the Administrative Tools window for jumphost.
Now you need to begin the process of connecting jumphost to the LUN.


1. In Administrative Tools, double-click the iSCSI Initiator tool.

The iSCSI Initiator Properties window opens.


1. Select the Targets tab.


2. Notice that there are no targets listed in the Discovered Targets list box, indicating that there are
currently no iSCSI targets mapped to this host.
3. Click the Discovery tab.

The discovery tab is where you begin the process of discovering LUNs, and to do that you must define a
target portal to scan. You are going to manually add a target portal to jumphost.


1. Click the Discover Portal button.

The Discover Target Portal window opens. Here you will specify the first of the IP addresses that were
assigned to your iSCSI LIFs when you created the svmluns SVM. Recall that those LIFs received IP
addresses in the range 192.168.0.133-192.168.0.136.
1. Set the IP Address or DNS name textbox to 192.168.0.133, the first address in the range for your
LIFs, and click OK.

The Discover Target Portal window closes, and focus returns to the iSCSI Initiator Properties window.


1. The Target Portals list now contains an entry for the IP address you entered in the previous step.
2. Click on the Targets tab.

The Targets tab opens to show you the list of discovered targets.

1. In the Discovered targets list select the only listed target. Observe that the target's status is Inactive,
because although you have discovered it you have not yet connected to it. Also note that the Name of
the discovered target in your lab will have a different value than what you see in this guide; that name
string is uniquely generated for each instance of the lab. (Make a mental note of that string value as
you will see it a lot as you continue to configure iSCSI in later steps of this process.)
2. Click the Connect button.

The Connect to Target dialog box opens.



1. Click the Enable multi-path checkbox, then click the Advanced button.

The Advanced Settings window opens.


1. In the Target portal IP dropdown select the entry containing the IP address you specified when you
discovered the target portal, which should be 192.168.0.133. The listed values are IP Address and Port
number combinations, and the specific value you want to select here is 192.168.0.133 / 3260. When
finished, click OK.

The Advanced Setting window closes, and focus returns to the Connect to Target window.


1. Click OK.

The Connect to Target window closes, and focus returns to the iSCSI Initiator Properties window.
1. Notice that the status of the listed discovered target has changed from Inactive to Connected.


Thus far you have added a single path to your iSCSI LUN, using the address for the cluster1-01_iscsi_lif_1
LIF that was created on the node cluster1-01 for the svmluns SVM. You are now going to
add each of the other SAN LIFs present on the svmluns SVM. To begin this procedure you must first edit
the properties of your existing connection.

1. Still on the Targets tab, select the discovered target entry for your existing connection.
2. Click Properties.

The Properties window opens. From this window you will be starting the procedure of connecting alternate
paths for your newly connected LUN. You will be repeating this procedure 3 times, once for each of the
remaining LIFs that are present on the svmluns SVM.

LIF IP Address     Done
--------------     ----
192.168.0.134
192.168.0.135
192.168.0.136


1. The Identifier list will contain an entry for every path you have specified so far, so it can serve as a
visual indicator of your progress in defining all your paths. The first time you enter this window
you will see one entry, for the LIF you used to first connect to this LUN.
2. Click Add Session.

The Connect to Target window opens.


1. Check the Enable multi-path checkbox, and click Advanced.

The Advanced Setting window opens.


1. Select the Target portal IP entry that contains the IP address of the LIF whose path you are adding in
this iteration of the procedure to add an alternate path. The following screenshot shows the
192.168.0.134 address, but the value you specify will depend on which specific path you are
configuring. When finished, click OK.

The Advanced Settings window closes, and focus returns to the Connect to Target window.

1. Click OK.

The Connect to Target window closes, and focus returns to the Properties window, where a new entry now
appears in the Identifier list. Repeat the procedure from the last 4 screenshots for each of the last two
remaining LIF IP addresses.
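For reference, the same three additional sessions can also be established from an elevated PowerShell prompt using the built-in Windows iSCSI cmdlets. This is only a sketch, and it assumes the target has already been discovered through the 192.168.0.133 portal exactly as in the GUI steps above:

$target = Get-IscsiTarget
foreach ($portal in "192.168.0.134","192.168.0.135","192.168.0.136") {
    # Add one session per remaining iSCSI LIF, with multi-path enabled and persistence across reboots
    Connect-IscsiTarget -NodeAddress $target.NodeAddress -TargetPortalAddress $portal -IsMultipathEnabled $true -IsPersistent $true
}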


When you have finished adding all 3 paths the Identifiers list in the Properties window should contain 4
entries.
1. There are 4 entries in the Identifier list when you are finished, indicating that there are 4 sessions, one
for each path. Note that it is normal for the identifier values in your lab to differ from those in the
screenshot.
2. Click OK.

The Properties window closes, and focus returns to the iSCSI Properties window.


1. Click OK.

The iSCSI Properties window closes, and focus returns to the desktop of jumphost. If the Administrative
Tools window is not still open on your desktop, open it again now.
If all went well, the jumphost is now connected to the LUN using multi-pathing, so it is time to format your
LUN and build a filesystem on it.
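Before formatting the LUN you can optionally confirm from PowerShell that all four iSCSI sessions exist and that MPIO has claimed the NetApp disk. Both commands below are standard Windows Server 2012 tools; this is only a verification sketch and the exact output in your lab will differ:

PS C:\> (Get-IscsiSession | Measure-Object).Count   # expect 4, one session per SAN LIF
PS C:\> mpclaim -s -d                               # the NETAPP LUN C-Mode disk should appear as a single MPIO disk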


1. In Administrative Tools, double-click the Computer Management tool.

The Computer Management window opens.

1. In the left pane of the Computer Management window, navigate to Computer Management (Local)->Storage->Disk Management.


2. When you launch Disk Management an Initialize Disk dialog will open informing you that you must
initialize a new disk before Logical Disk Manager can access it. (If you see more than one disk listed
then MPIO has not correctly recognized that the multiple paths you set up are all for the same LUN, so
you will need to cancel the Initialize Disk dialog, quit Computer Management, and go back to the iSCSI
Initiator tool to review your path configuration steps to find and correct any configuration errors, after
which you can return to the Computer Management tool and try again.)
Click OK to initialize the disk.

The Initialize Disk window closes, and focus returns to the Disk Management view in the Computer
Management window.
1. The new disk shows up in the disk list at the bottom of the window, and has a status of Unallocated.

2. Right-click inside the Unallocated box for the disk (if you right-click outside this box you will get the
incorrect context menu) and select New Simple Volume from the context menu.


The New Simple Volume Wizard window opens.


1. Click the Next button to advance the wizard.

The wizard advances to the Specify Volume Size step.


1. The wizard defaults to allocating all of the space in the volume, so click the Next button.

The wizard advances to the Assign Drive Letter or Path step.


1. The wizard automatically selects the next available drive letter, which should be E:. Click Next.

The wizard advances to the Format Partition step.

1. Set the Volume Label field to WINLUN, and click Next.

The wizard advances to the Completing the New Simple Volume Wizard step.


1. Click Finish.

The New Simple Volume Wizard window closes, and focus returns to the Disk Management view of the
Computer Management window.
1. The new WINLUN volume now shows as Healthy in the disk list at the bottom of the window, indicating
that the new LUN is mounted and ready for you to use. Before you complete this section of the lab, take
a look at the MPIO configuration for this LUN by right-clicking inside the box for the WINLUN volume.
2. From the context menu select Properties.

The WINLUN (E:) Properties window opens.



1. Click the Hardware tab.


2. In the All disk drives list select the NETAPP LUN C-Mode Multi-Path Disk entry.
3. Click Properties.

The NETAPP LUN C-Mode Multi-Path Disk Device Properties window opens.


1. Click the MPIO tab.


2. Notice that you are using the Data ONTAP DSM for multi-path access rather than the Microsoft DSM.
NetApp recommends using the Data ONTAP DSM software as it is the most full-featured option
available, although the Microsoft DSM is also supported.

3. The MPIO policy is set to "Least Queue Depth". A number of different multi-pathing policies are
available, but the configuration shown here sends LUN I/O down the path that has the fewest
outstanding I/O requests. You can click the "More information about MPIO policies" link at the bottom
of the dialog window for details about all the available policies.
4. The top two paths show both a Path State and TPG State of "Active/Optimized"; these paths are
connected to the node cluster1-01, and the Least Queue Depth policy makes active use of both paths to
this node. On the other hand, the bottom two paths show a Path State of "Unavailable" and a TPG
State of "Active/Unoptimized"; these paths are connected to the node cluster1-02 and only enter a
Path State of "Active/Optimized" if the node cluster1-01 becomes unavailable or if the volume hosting
the LUN migrates over to the node cluster1-02.
5. When you are finished reviewing the information in this dialog click OK to exit. If you have changed any
of the values in this dialog you may want to consider instead using the Cancel button in order to
discard those changes.


1. The NETAPP LUN C-Mode Multi-Path Disk Device Properties window closes, and focus returns to the
WINLUN (E:) Properties window.
Click OK.

The WINLUN (E:) Properties window closes.


You may also close out the Computer Management window as this is the end of this exercise.
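As an aside, the initialize/partition/format work you just performed in Disk Management can also be scripted with PowerShell. The sketch below is hedged: the disk number 1 is an assumption, so confirm the correct number with Get-Disk before running the remaining commands:

PS C:\> Get-Disk                                              # identify the 10 GB NETAPP LUN C-Mode Multi-Path disk
PS C:\> Initialize-Disk -Number 1 -PartitionStyle MBR
PS C:\> New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter E
PS C:\> Format-Volume -DriveLetter E -FileSystem NTFS -NewFileSystemLabel "WINLUN"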
Create, Map, and Mount a Linux LUN
In an earlier section you created a new SVM and configured it for iSCSI. In the following sub-sections you
will perform the remaining steps needed to configure and use a LUN under Linux:

Gather the iSCSI Initiator Name of the Linux client.

Create a thin provisioned Linux volume, create a thin provisioned Linux LUN named linux.lun within
that volume, and map the LUN to the Linux client.

Mount the LUN on the Linux client.

You must complete all of the following subsections in order to use the LUN from the Linux client. Note that
you are not required to complete the Windows LUN section before starting this section of the lab guide but
the screenshots and command line output shown here assume that you have; if you did not complete the
Windows LUN section then the differences will not affect your ability to create and mount the Linux LUN.


Gather the Linux Client iSCSI Initiator Name


You need to determine the Linux client's iSCSI initiator name so that you can set up an appropriate initiator
group to control access to the LUN.

This section's tasks must be performed from the command line:

You should already have a PuTTY connection open to the Linux host rhel1. If you do not, then open one
now using the instructions found in the Accessing the Command Line section at the beginning of this lab
guide. The username will be root and the password will be Netapp1!.

Run the following command on rhel1 to find the name of its iSCSI initiator.
[root@rhel1 ~]# cd /etc/iscsi
[root@rhel1 iscsi]# ls
initiatorname.iscsi iscsid.conf
[root@rhel1 iscsi]# cat initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:rhel1.demo.netapp.com
[root@rhel1 iscsi]#

The initiator name for rhel1 is iqn.1994-05.com.redhat:rhel1.demo.netapp.com


Create and Map a Linux LUN
You will now create a new thin provisioned Linux LUN on the SVM svmluns under the volume linluns, and
also create an initiator igroup for the LUN so that only the Linux host rhel1 can access it. An initiator group,
or igroup, defines a list of the Fibre Channel WWPNs or iSCSI node names for the hosts that are permitted
to see the associated LUNs.
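For reference, a command line sketch of what this section accomplishes, modeled directly on the Windows LUN CLI commands shown earlier in this guide, might look like the following; the 10.31GB volume size is carried over from that earlier example and is an assumption here:

cluster1::> volume create -vserver svmluns -volume linluns -aggregate aggr1_cluster1_01 -size 10.31GB
            -percent-snapshot-space 0 -snapshot-policy none -space-guarantee none -autosize-mode grow -nvfail on
cluster1::> lun create -vserver svmluns -volume linluns -lun linux.lun -size 10GB -ostype linux -space-reserve disabled
cluster1::> lun modify -vserver svmluns -volume linluns -lun linux.lun -comment "Linux LUN"
cluster1::> igroup create -vserver svmluns -igroup linigrp -protocol iscsi -ostype linux
            -initiator iqn.1994-05.com.redhat:rhel1.demo.netapp.com
cluster1::> lun map -vserver svmluns -volume linluns -lun linux.lun -igroup linigrp

The GUI workflow below performs these same steps through the Create LUN wizard.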
To perform this section's tasks from the GUI:

Switch back to the System Manager window so that you can create the LUN.


1. In System Manager open the Storage Virtual Machines tab.


2. In the left pane, navigate to cluster1->svmluns->Storage->LUNs. You may or may not see a listing
presented for the LUN windows.lun, depending on whether or not you completed the lab sections for
creating a Windows LUN.

3. Click Create.

The Create LUN Wizard opens.


1. Click Next to advance to the next step in the wizard.


The wizard advances to the General Properties step.


1. Set the fields in the window as follows.

Name: linux.lun

Description: Linux LUN

Type: Linux

Size: 10 GB

Check the Thin Provisioned check box.


Click Next to continue.

The wizard advances to the LUN Container step.


1. Select the radio button to create a new flexible volume and set the fields under that heading as follows.

Aggregate Name: aggr1_cluster1_01

Volume Name: linluns


When finished click Next.

The wizard advances to the Initiator Mapping step.


1. Click Add Initiator Group.

The Create Initiator Group window opens.


1. Set the fields in the window as follows.

Name: linigrp

Operating System: Linux

Type: Select the iSCSI radio button.


Click the Initiators tab.

The Initiators tab displays.


1. Click the Add button to add a new initiator.


A new empty entry appears in the list of initiators.


1. Populate the Name entry with the value of the iSCSI initiator name for rhel1. In case you misplaced that
value, it was iqn.1994-05.com.redhat:rhel1.demo.netapp.com.
When you finish entering the value, click OK underneath the entry. Finally, click Create.

An Initiator-Group Summary window opens confirming that the linigrp igroup was created successfully.
1. Click OK to acknowledge the confirmation.

The Initiator-Group Summary window closes, and focus returns to the Initiator Mapping step of the Create
LUN wizard.


1. Click the checkbox under the map column next to the linigrp initiator group. This is a critical step
because this is where you actually map the new LUN to the new igroup.
2. Click Next to continue.


The wizard advances to the Storage Quality of Service Properties step. You will not be creating any QoS
policies in this lab. If you are interested in learning about QoS, please see the Hands-on Lab for Advanced
Concepts for Clustered Data ONTAP 8.3.
1. Click Next to continue.

The wizard advances to the LUN Summary step, where you can review your selections before proceeding
with creating the LUN.


1. If everything looks correct, click Next.


The wizard begins the task of creating the volume that will contain the LUN, creating the LUN, and
mapping the LUN to the new igroup. As it finishes each step the wizard displays a green checkmark in the
window next to that step.
1. Click Finish to terminate the wizard.


The Create LUN wizard window closes, and focus returns to the LUNs view in System Manager. The new
linux.lun LUN now shows up in the LUNs view, and if you select it you can review its details at the bottom
of the pane.

The new Linux LUN now exists and is mapped to your rhel1 client, but there is still one more configuration
step remaining for this LUN as follows:
1. Data ONTAP 8.2 introduced a space reclamation feature that allows Data ONTAP to reclaim space
from a thin provisioned LUN when the client deletes data from it, and also allows Data ONTAP to notify
the client when the LUN cannot accept writes due to lack of space on the volume. This feature is
supported by VMware ESX 5.0 and later, Red Hat Enterprise Linux 6.2 and later, and Microsoft
Windows 2012. The RHEL clients used in this lab are running version 6.5 and so you will enable the
space reclamation feature for your Linux LUN. You can only enable space reclamation through the
Data ONTAP command line, so if you do not already have a PuTTY session open to cluster1 then open
one now following the directions shown in the Accessing the Command Line section at the beginning
of this lab guide. The username will be admin and the password will be Netapp1!.
Enable space reclamation for the LUN.
cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields space-allocation
vserver path                   space-allocation
------- ---------------------- ----------------
svmluns /vol/linluns/linux.lun disabled

cluster1::> lun modify -vserver svmluns -path /vol/linluns/linux.lun -space-allocation enabled

cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields space-allocation
vserver path                   space-allocation
------- ---------------------- ----------------
svmluns /vol/linluns/linux.lun enabled

cluster1::>


To perform this section's tasks from the command line:

If you do not currently have a PuTTY session open to cluster1 then open one now following the instructions
from the Accessing the Command Line section at the beginning of this lab guide. The username will be
admin and the password will be Netapp1!.
Create the thin provisioned volume linluns that will host the Linux LUN you will create in a later step:
cluster1::> volume create -vserver svmluns -volume linluns -aggregate aggr1_cluster1_01 -size 10.31GB -percent-snapshot-space 0 -snapshot-policy none -space-guarantee none -autosize-mode grow -nvfail on
[Job 271] Job is queued: Create linluns.
[Job 271] Job succeeded: Successful

cluster1::> volume show
Vserver     Volume       Aggregate          State    Type       Size  Available  Used%
----------- ------------ ------------------ -------- ---- ---------- ---------- ------
cluster1-01 vol0         aggr0_cluster1_01  online   RW       9.71GB     6.92GB    28%
cluster1-02 vol0         aggr0_cluster1_02  online   RW       9.71GB     6.27GB    35%
svm1        eng_users    aggr1_cluster1_01  online   RW         10GB     9.50GB     5%
svm1        engineering  aggr1_cluster1_01  online   RW         10GB     9.50GB     5%
svm1        svm1_root    aggr1_cluster1_01  online   RW         20MB    18.85MB     5%
svmluns     linluns      aggr1_cluster1_01  online   RW      10.31GB    10.31GB     0%
svmluns     svmluns_root aggr1_cluster1_01  online   RW         20MB    18.86MB     5%
svmluns     winluns      aggr1_cluster1_01  online   RW      10.31GB    10.28GB     0%
8 entries were displayed.

cluster1::>
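If you want to double-check that the new linluns volume was created with the intended thin provisioning settings, a quick field query like the one below should confirm them. This is an optional sketch; the field names correspond to the parameters used on the volume create command above, and the output layout may differ slightly in your lab.

cluster1::> volume show -vserver svmluns -volume linluns -fields space-guarantee,autosize-mode
vserver volume  space-guarantee autosize-mode
------- ------- --------------- -------------
svmluns linluns none            grow
cluster1::>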


Create the thin provisioned Linux LUN linux.lun on the volume linluns:
cluster1::> lun show
Vserver   Path                            State   Mapped   Type             Size
--------- ------------------------------- ------- -------- ------------ --------
svmluns   /vol/winluns/windows.lun        online  mapped   windows_2008  10.00GB

cluster1::> lun create -vserver svmluns -volume linluns -lun linux.lun -size 10GB -ostype linux -space-reserve disabled
Created a LUN of size 10g (10742215680)

cluster1::> lun modify -vserver svmluns -volume linluns -lun linux.lun -comment "Linux LUN"

cluster1::> lun show
Vserver   Path                            State   Mapped   Type             Size
--------- ------------------------------- ------- -------- ------------ --------
svmluns   /vol/linluns/linux.lun          online  unmapped linux            10GB
svmluns   /vol/winluns/windows.lun        online  mapped   windows_2008  10.00GB
2 entries were displayed.

cluster1::>
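Optionally, you can also confirm that the LUN really was created without a space reservation (that is, thin provisioned) before mapping it. The following is a sketch; the space-reserve field corresponds to the -space-reserve option used on the lun create command.

cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields space-reserve
vserver path                   space-reserve
------- ---------------------- -------------
svmluns /vol/linluns/linux.lun disabled
cluster1::>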

Display a list of the cluster's existing igroups, then create a new igroup named linigrp that you will use
to manage access to the LUN linux.lun. Add the iSCSI initiator name for the Linux host rhel1 to the new
igroup.

cluster1::> igroup show
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
svmluns   winigrp      iscsi    windows  iqn.1991-05.com.microsoft:jumphost.demo.netapp.com

cluster1::> igroup create -vserver svmluns -igroup linigrp -protocol iscsi -ostype linux -initiator iqn.1994-05.com.redhat:rhel1.demo.netapp.com

cluster1::> igroup show
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
svmluns   linigrp      iscsi    linux    iqn.1994-05.com.redhat:rhel1.demo.netapp.com
svmluns   winigrp      iscsi    windows  iqn.1991-05.com.microsoft:jumphost.demo.netapp.com
2 entries were displayed.

cluster1::>
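If you later wanted to grant a second host access to the same LUN, you would add that host's initiator name to the existing igroup rather than create a new one. The line below is only a sketch and is not part of this lab; the rhel2 initiator name shown is an assumption based on the naming convention used for rhel1.

cluster1::> igroup add -vserver svmluns -igroup linigrp -initiator iqn.1994-05.com.redhat:rhel2.demo.netapp.com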


Map the LUN linux.lun to the igroup linigrp:

cluster1::> lun map -vserver svmluns -volume linluns -lun linux.lun -igroup linigrp

cluster1::> lun show
Vserver   Path                            State   Mapped   Type             Size
--------- ------------------------------- ------- -------- ------------ --------
svmluns   /vol/linluns/linux.lun          online  mapped   linux            10GB
svmluns   /vol/winluns/windows.lun        online  mapped   windows_2008  10.00GB
2 entries were displayed.

cluster1::> lun mapped show
Vserver    Path                                      Igroup    LUN ID  Protocol
---------- ----------------------------------------- --------  ------  --------
svmluns    /vol/linluns/linux.lun                    linigrp        0  iscsi
svmluns    /vol/winluns/windows.lun                  winigrp        0  iscsi
2 entries were displayed.

cluster1::> lun show -lun linux.lun
Vserver   Path                            State   Mapped   Type             Size
--------- ------------------------------- ------- -------- ------------ --------
svmluns   /vol/linluns/linux.lun          online  mapped   linux            10GB

cluster1::> lun mapped show -lun linux.lun
Vserver    Path                                      Igroup    LUN ID  Protocol
---------- ----------------------------------------- --------  ------  --------
svmluns    /vol/linluns/linux.lun                    linigrp        0  iscsi

cluster1::> lun show -lun linux.lun -instance

                  Vserver Name: svmluns
                      LUN Path: /vol/linluns/linux.lun
                   Volume Name: linluns
                    Qtree Name: ""
                      LUN Name: linux.lun
                      LUN Size: 10GB
                       OS Type: linux
             Space Reservation: disabled
                 Serial Number: wOj4Q]FMHlq7
                       Comment: Linux LUN
    Space Reservations Honored: false
              Space Allocation: disabled
                         State: online
                      LUN UUID: 1b4912fb-b779-4811-b1ff-7bc3a615454c
                        Mapped: mapped
                    Block Size: 512
              Device Legacy ID:
              Device Binary ID:
                Device Text ID:
                     Read Only: false
         Fenced Due to Restore: false
                     Used Size: 0
           Maximum Resize Size: 128.0GB
                 Creation Time: 10/20/2014 06:19:49
                         Class: regular
          Node Hosting the LUN: cluster1-01
              QoS Policy Group: -
                         Clone: false
      Clone Autodelete Enabled: false
           Inconsistent import: false
cluster1::>

Data ONTAP 8.2 introduced a space reclamation feature that allows Data ONTAP to reclaim space from a
thin provisioned LUN when the client deletes data from it, and also allows Data ONTAP to notify the client
when the LUN cannot accept writes due to lack of space on the volume. This feature is supported by
VMware ESX 5.0 and later, Red Hat Enterprise Linux 6.2 and later, and Microsoft Windows 2012. The
RHEL clients used in this lab are running version 6.5 and so you will enable the space reclamation feature
for your Linux LUN.

Configure the LUN to support space reclamation:


cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields space-allocation
vserver path                   space-allocation
------- ---------------------- ----------------
svmluns /vol/linluns/linux.lun disabled

cluster1::> lun modify -vserver svmluns -path /vol/linluns/linux.lun -space-allocation enabled

cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields space-allocation
vserver path                   space-allocation
------- ---------------------- ----------------
svmluns /vol/linluns/linux.lun enabled

cluster1::>


Mount the LUN on a Linux Client


In this section you will be using the Linux command line to configure the host rhel1 to connect to the Linux
LUN /vol/linluns/linux.lun you created in the preceding section.

This section's tasks must be performed from the command line:

The steps in this section assume some familiarity with how to use the Linux command line. If you are not
familiar with those concepts then we recommend that you skip this section of the lab.
If you do not currently have a PuTTY session open to rhel1, open one now and log in as user root with the
password Netapp1!.
The NetApp Linux Host Utilities kit has been pre-installed on both Red Hat Linux hosts in this lab, and the
iSCSI initiator name has already been configured for each host. Confirm that this is the case:
[root@rhel1 ~]# rpm -qa | grep netapp
netapp_linux_unified_host_utilities-7-0.x86_64
[root@rhel1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:rhel1.demo.netapp.com
[root@rhel1 ~]#

In the /etc/iscsi/iscsid.conf file the node.session.timeo.replacement_timeout value is set to 5 to better
support timely path failover, and the node.startup value is set to automatic so that the system will
automatically log in to the iSCSI node at startup.
[root@rhel1 ~]# grep replacement_time /etc/iscsi/iscsid.conf
#node.session.timeo.replacement_timeout = 120
node.session.timeo.replacement_timeout = 5
[root@rhel1 ~]# grep node.startup /etc/iscsi/iscsid.conf
# node.startup = automatic
node.startup = automatic
[root@rhel1 ~]#
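These values are already in place on the lab hosts, but if you ever need to apply the same settings to a freshly installed RHEL 6 client, edits along the following lines should produce an equivalent configuration. This is a sketch only; it assumes the stock iscsid.conf ships those two settings as active (uncommented) lines, and you should back up the file before editing it.

[root@rhel1 ~]# cp /etc/iscsi/iscsid.conf /etc/iscsi/iscsid.conf.bak
[root@rhel1 ~]# sed -i 's/^node\.session\.timeo\.replacement_timeout = .*/node.session.timeo.replacement_timeout = 5/' /etc/iscsi/iscsid.conf
[root@rhel1 ~]# sed -i 's/^node\.startup = .*/node.startup = automatic/' /etc/iscsi/iscsid.conf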


The Red Hat Linux hosts in the lab have the DM-Multipath packages pre-installed, along with a
/etc/multipath.conf file pre-configured to support multipathing so that the RHEL host can access the LUN
using all of the SAN LIFs you created for the svmluns SVM.
[root@rhel1 ~]# rpm -q device-mapper
device-mapper-1.02.79-8.el6.x86_64
[root@rhel1 ~]# rpm -q device-mapper-multipath
device-mapper-multipath-0.4.9-72.el6.x86_64
[root@rhel1 ~]# cat /etc/multipath.conf
# For a complete list of the default configuration values, see
# /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.defaults
# For a list of configuration options with descriptions, see
# /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.annotated
#
# REMEMBER: After updating multipath.conf, you must run
#
# service multipathd reload
#
# for the changes to take effect in multipathd

# NetApp recommended defaults
defaults {
    flush_on_last_del       yes
    max_fds                 max
    queue_without_daemon    no
    user_friendly_names     no
    dev_loss_tmo            infinity
    fast_io_fail_tmo        5
}
blacklist {
    devnode "^sda"
    devnode "^hd[a-z]"
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^ccis.*"
}
devices {
    # NetApp iSCSI LUNs
    device {
        vendor                  "NETAPP"
        product                 "LUN"
        path_grouping_policy    group_by_prio
        features                "3 queue_if_no_path pg_init_retries 50"
        prio                    "alua"
        path_checker            tur
        failback                immediate
        path_selector           "round-robin 0"
        hardware_handler        "1 alua"
        rr_weight               uniform
        rr_min_io               128
        getuid_callout          "/lib/udev/scsi_id -g -u -d /dev/%n"
    }
}
[root@rhel1 ~]#


You now need to start the iSCSI software service on rhel1 and configure it to start automatically at boot
time. Note that a force-start is only necessary the very first time you start the iscsid service on a host.
[root@rhel1 ~]# service iscsid status
iscsid is stopped
[root@rhel1 ~]# service iscsid force-start
Starting iscsid: OK
[root@rhel1 ~]# service iscsi status
No active sessions
[root@rhel1 ~]# chkconfig iscsi on
[root@rhel1 ~]# chkconfig --list iscsi
iscsi           0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@rhel1 ~]#

Next, discover the available targets using the iscsiadm command. Note that the exact values used for the
node paths may differ in your lab from what is shown in this example, and that after running this command
there will not yet be any active iSCSI sessions because you have not yet created the necessary device
files.
[root@rhel1 ~]# iscsiadm --mode discovery --op update --type sendtargets --portal 192.168.0.133
192.168.0.133:3260,1028 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
192.168.0.136:3260,1031 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
192.168.0.135:3260,1030 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
192.168.0.134:3260,1029 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
[root@rhel1 ~]# iscsiadm --mode session
iscsiadm: No active sessions.
[root@rhel1 ~]#
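If you would like to review the node records that the discovery operation just created before logging in to them, you can list them as shown below. This is an optional check; the portal order and target group tags may differ in your lab.

[root@rhel1 ~]# iscsiadm --mode node
192.168.0.133:3260,1028 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
192.168.0.134:3260,1029 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
192.168.0.135:3260,1030 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
192.168.0.136:3260,1031 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
[root@rhel1 ~]#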

Create the devices necessary to support the discovered nodes, after which the sessions become active.

[root@rhel1 ~]# iscsiadm --mode node -l all
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.134,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.136,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.135,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.133,3260] (multiple)
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.134,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.136,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.135,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.133,3260] successful.
[root@rhel1 ~]# iscsiadm --mode session
tcp: [1] 192.168.0.134:3260,1029 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
tcp: [2] 192.168.0.136:3260,1031 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
tcp: [3] 192.168.0.135:3260,1030 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
tcp: [4] 192.168.0.133:3260,1028 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
[root@rhel1 ~]#


At this point the Linux client sees the LUN over all four paths but it does not yet understand that all four
paths represent the same LUN.
[root@rhel1 ~]# sanlun lun show
controller(7mode)/                             device    host     lun
vserver(Cmode)  lun-pathname                   filename  adapter  protocol  size  product
------------------------------------------------------------------------------------------
svmluns         /vol/linluns/linux.lun         /dev/sde  host3    iSCSI     10g   cDOT
svmluns         /vol/linluns/linux.lun         /dev/sdd  host4    iSCSI     10g   cDOT
svmluns         /vol/linluns/linux.lun         /dev/sdc  host5    iSCSI     10g   cDOT
svmluns         /vol/linluns/linux.lun         /dev/sdb  host6    iSCSI     10g   cDOT
[root@rhel1 ~]#

Since the lab includes a pre-configured /etc/multipath.conf file, you just need to start the multipathd service
to handle multipath management and configure it to start automatically at boot time.
[root@rhel1 ~]# service multipathd status
multipathd is stopped
[root@rhel1 ~]# service multipathd start
Starting multipathd daemon: OK
[root@rhel1 ~]# service multipathd status
multipathd (pid 8656) is running...
[root@rhel1 ~]# chkconfig multipathd on
[root@rhel1 ~]# chkconfig --list multipathd
multipathd      0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@rhel1 ~]#
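If you want to confirm that multipathd actually loaded the NetApp device stanza from /etc/multipath.conf, you can query the running daemon's configuration. The command below is a sketch of one common way to do this on RHEL 6; its output is lengthy and is not shown here.

[root@rhel1 ~]# multipathd -k"show config" | grep -A 3 NETAPP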

The multipath command displays the configuration of DM-Multipath, and the multipath -ll
command displays a list of the multipath devices. DM-Multipath maintains a device file under /dev/mapper
that you use to access the multipathed LUN (in order to create a filesystem on it and to mount it); the first
line of output from the multipath -ll command lists the name of that device file (in this example
3600a0980774f6a34515d464d486c7137). The autogenerated name for this device file will likely differ in
your copy of the lab. Also pay attention to the output of the sanlun lun show -p command, which shows
information about the Data ONTAP path of the LUN, the LUN's size, its device file name under
/dev/mapper, the multipath policy, and also information about the various device paths themselves.
[root@rhel1 ~]# multipath -ll
3600a0980774f6a34515d464d486c7137 dm-2 NETAPP,LUN C-Mode
size=10G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 6:0:0:0 sdb 8:16 active ready running
| `- 3:0:0:0 sde 8:64 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 5:0:0:0 sdc 8:32 active ready running
  `- 4:0:0:0 sdd 8:48 active ready running
[root@rhel1 ~]# ls -l /dev/mapper
total 0
lrwxrwxrwx 1 root root      7 Oct 20 06:50 3600a0980774f6a34515d464d486c7137 -> ../dm-2
crw-rw---- 1 root root 10, 58 Oct 19 18:57 control
lrwxrwxrwx 1 root root      7 Oct 19 18:57 vg_rhel1-lv_root -> ../dm-0
lrwxrwxrwx 1 root root      7 Oct 19 18:57 vg_rhel1-lv_swap -> ../dm-1
[root@rhel1 ~]# sanlun lun show -p

                   ONTAP Path: svmluns:/vol/linluns/linux.lun
                          LUN: 0
                     LUN Size: 10g
                      Product: cDOT
                  Host Device: 3600a0980774f6a34515d464d486c7137
             Multipath Policy: round-robin 0
           Multipath Provider: Native
--------- ---------- ------- ------------ ----------------------------------------------
host      vserver
path      path       /dev/   host         vserver
state     type       node    adapter      LIF
--------- ---------- ------- ------------ ----------------------------------------------
up        primary    sdb     host6        cluster1-01_iscsi_lif_1
up        primary    sde     host3        cluster1-01_iscsi_lif_2
up        secondary  sdc     host5        cluster1-02_iscsi_lif_1
up        secondary  sdd     host4        cluster1-02_iscsi_lif_2
[root@rhel1 ~]#

You can see even more detail about the configuration of multipath and the LUN as a whole by running the
commands multipath -v3 -d -ll or iscsiadm -m session -P 3. As the output of these
commands is rather lengthy, it is omitted here.


The LUN is now fully configured for multipath access, so the only steps remaining before you can use the
LUN on the Linux host are to create a filesystem and mount it. When you run the following commands in
your lab you will need to substitute in the /dev/mapper/ string that identifies your LUN (get that string
from the output of ls -l /dev/mapper):
[root@rhel1 ~]# mkfs.ext4 /dev/mapper/3600a0980774f6a34515d464d486c7137
mke2fs 1.41.12 (17-May-2010)
Discarding device blocks: 0/204800 done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1 blocks, Stripe width=16 blocks
655360 inodes, 2621440 blocks
131072 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@rhel1 ~]# mkdir /linuxlun
[root@rhel1 ~]# mount -t ext4 -o discard /dev/mapper/3600a0980774f6a34515d464d486c7137 /linuxlun
[root@rhel1 ~]# df
Filesystem                                    1K-blocks     Used Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root                   11877388  4962816   6311232  45% /
tmpfs                                            444612       76    444536   1% /dev/shm
/dev/sda1                                        495844    40084    430160   9% /boot
svm1:/                                            19456      128     19328   1% /svm1
/dev/mapper/3600a0980774f6a34515d464d486c7137  10321208   154100   9642820   2% /linuxlun
[root@rhel1 ~]# ls /linuxlun
lost+found
[root@rhel1 ~]# echo "hello from rhel1" > /linuxlun/test.txt
[root@rhel1 ~]# cat /linuxlun/test.txt
hello from rhel1
[root@rhel1 ~]# ls -l /linuxlun/test.txt
-rw-r--r-- 1 root root 6 Oct 20 06:54 /linuxlun/test.txt
[root@rhel1 ~]#

The discard option for mount allows the Red Hat host to utilize space reclamation for the LUN.
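In addition to the discards that ext4 issues automatically as files are deleted, RHEL 6.5 also includes the fstrim utility, which can trigger reclamation manually for a mounted filesystem. This is an optional aside rather than a lab step; a sketch of the invocation follows.

[root@rhel1 ~]# fstrim -v /linuxlun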
To have RHEL automatically mount the LUN's filesystem at boot time, run the following command
(modified to reflect the multipath device path being used in your instance of the lab) to add the mount
information to the /etc/fstab file. The following command should be entered as a single line.
[root@rhel1 ~]# echo '/dev/mapper/3600a0980774f6a34515d464d486c7137
/linuxlun ext4 _netdev,discard,defaults 0 0' >> /etc/fstab
[root@rhel1 ~]#
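Before rebooting, you can verify that the entry was appended correctly; the following check should echo back the line you just added (with the device name matching your instance of the lab).

[root@rhel1 ~]# grep linuxlun /etc/fstab
/dev/mapper/3600a0980774f6a34515d464d486c7137 /linuxlun ext4 _netdev,discard,defaults 0 0
[root@rhel1 ~]#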


Appendix 1 Using the clustered Data ONTAP Command Line


If you choose to use the clustered Data ONTAP command line to complete portions of this lab, be aware
that clustered Data ONTAP supports command line completion. When entering a command at the Data
ONTAP command line you can press the Tab key at any time mid-typing, and if you have entered enough
unique text for the command interpreter to determine what the rest of the argument would be, it will
automatically fill in that text for you. For example, entering the text cluster sh and then pressing the Tab
key automatically expands the entered command text to cluster show.
At any point mid-typing you can also enter the ? character, and the command interpreter will list any
potential matches for the command string. This is a particularly useful feature if you cannot remember all of
the various command line options for a given clustered Data ONTAP command; for example, to see the list
of options available for the cluster show command you can enter:
cluster1::> cluster show ?
  [ -instance | -fields <fieldname>, ... ]
  [[-node] <nodename>]           Node
  [ -eligibility {true|false} ]  Eligibility
  [ -health {true|false} ]       Health

cluster1::>

When using tab completion, if the Data ONTAP command interpreter is unable to identify a unique
expansion it will display a list of potential matches, similar to what using the ? character does.
cluster1::> cluster s

Error: Ambiguous command. Possible matches include:
  cluster show
  cluster statistics

cluster1::>

The Data ONTAP commands are structured hierarchically. When you log in you are placed at the root of
that command hierarchy, but you can step into a lower branch of the hierarchy by entering one of the base
commands. For example, when you first log in to the cluster enter the ? command to see the list of
available base commands, as follows:
cluster1::> ?
  up                  Go up one directory
  cluster>            Manage clusters
  dashboard>          (DEPRECATED)-Display dashboards
  event>              Manage system events
  exit                Quit the CLI session
  export-policy       Manage export policies and rules
  history             Show the history of commands for this CLI session
  job>                Manage jobs and job schedules
  lun>                Manage LUNs
  man                 Display the on-line manual pages
  metrocluster>       Manage MetroCluster
  network>            Manage physical and virtual network connections
  qos>                QoS settings
  redo                Execute a previous command
  rows                Show/Set the rows for this CLI session
  run                 Run interactive or non-interactive commands in the nodeshell
  security>           The security directory
  set                 Display/Set CLI session settings
  snapmirror>         Manage SnapMirror
  statistics>         Display operational statistics
  storage>            Manage physical storage, including disks, aggregates, and failover
  system>             The system directory
  top                 Go to the top-level directory
  volume>             Manage virtual storage, including volumes, snapshots, and mirrors
  vserver>            Manage Vservers

cluster1::>

The > character at the end of a command signifies that it has a sub-hierarchy; enter the vserver command
to enter the vserver sub-hierarchy.

cluster1::> vserver
cluster1::vserver> ?
  active-directory>   Manage Active Directory
  add-aggregates      Add aggregates to the Vserver
  add-protocols       Add protocols to the Vserver
  audit>              Manage auditing of protocol requests that the Vserver services
  check>              The check directory
  cifs>               Manage the CIFS configuration of a Vserver
  context             Set Vserver context
  create              Create a Vserver
  dashboard>          The dashboard directory
  data-policy>        Manage data policy
  delete              Delete a Vserver
  export-policy>      Manage export policies and rules
  fcp>                Manage the FCP service on a Vserver
  fpolicy>            Manage FPolicy
  group-mapping>      The group-mapping directory
  iscsi>              Manage the iSCSI services on a Vserver
  locks>              Manage Client Locks
  modify              Modify a Vserver
  name-mapping>       The name-mapping directory
  nfs>                Manage the NFS configuration of a Vserver
  peer>               Create and manage Vserver peer relationships
  remove-aggregates   Remove aggregates from the Vserver
  remove-protocols    Remove protocols from the Vserver
  rename              Rename a Vserver
  security>           Manage ontap security
  services>           The services directory
  show                Display Vservers
  show-protocols      Show protocols for Vserver
  smtape>             The smtape directory
  start               Start a Vserver
  stop                Stop a Vserver
  vscan>              Manage Vscan

cluster1::vserver>


Notice how the prompt changed to reflect that you are now in the vserver sub-hierarchy, and that some of
the subcommands here have sub-hierarchies of their own. To return to the root of the hierarchy enter the
top command; you can also navigate upwards one level at a time by using the up or .. commands.
cluster1::vserver> top
cluster1::>

The Data ONTAP command interpreter supports command history. By repeatedly hitting the up arrow key
you can step through the series of commands you ran earlier, and you can selectively execute a given
command again when you find it by hitting the Enter key. You can also use the left and right arrow keys to
edit the command before you run it again.
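The history and redo commands listed earlier complement the arrow-key history. The short sketch below illustrates the idea; the command numbers and output layout shown are only illustrative and will depend on what you have run in your own session.

cluster1::> history
1 cluster show
2 volume show
cluster1::> redo 2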


Version History
Version        Date            Document Version History
Version 1.0    October 2014    Initial Release for Hands On Labs
Version 1.0.1  December 2014   Updates for Lab on Demand
Version 1.1    April 2015      Updated for Data ONTAP 8.3GA and other application software.
                               NDO section spun out into a separate lab guide.

Refer to the Interoperability Matrix Tool (IMT) on the NetApp Support site to validate that the exact product and feature
versions described in this document are supported for your specific environment. The NetApp IMT defines product components
and versions that can be used to construct configurations that are supported by NetApp. Specific results depend on each
customer's installation in accordance with published specifications.

NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or
recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information
or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of
this information or the implementation of any recommendations or techniques herein is a customer's responsibility and
depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document
and the information contained herein may be used solely in connection with the NetApp products discussed in this document.

2015 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc.
Specifications are subject to change without notice. NetApp and the NetApp logo are registered trademarks of NetApp, Inc. in the United States
and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated
as such.

