The information is intended to outline our general product direction. It is intended for information purposes
only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or
functionality, and should not be relied upon in making purchasing decisions. NetApp makes no warranties,
expressed or implied, on future functionality and timeline. The development, release, and timing of any
features or functionality described for NetApp's products remains at the sole discretion of NetApp.
NetApp's strategy and possible future developments, products, and/or platform directions and functionality
are all subject to change without notice. NetApp has no obligation to pursue any course of business
outlined in this document or any related presentation, or to develop or release any functionality mentioned
therein.
CONTENTS
Introduction .............................................................................................................................. 3
Why clustered Data ONTAP? ............................................................................................................................3
Lab Objectives ..................................................................................................................................................4
Prerequisites ....................................................................................................................................................5
Accessing the Command Line ...........................................................................................................................5
Appendix 1 Using the clustered Data ONTAP Command Line ....................................... 179
References ............................................................................................................................ 181
Version History ..................................................................................................................... 182
Introduction
This lab introduces the fundamentals of clustered Data ONTAP. In it you will start with a pre-created 2-node cluster and configure Windows 2012R2 and Red Hat Enterprise Linux 6.5 hosts to access storage on
the cluster using CIFS, NFS, and iSCSI.
The traditional model of dedicating a physical server to each application suffers from three limitations:
It does not scale well: adding new servers for every new application is extremely expensive.
It is inefficient: most servers are significantly underutilized, meaning that businesses are not extracting
the full benefit of their hardware investment.
It is inflexible: re-allocating standalone server resources for other purposes is time consuming, staff
intensive, and highly disruptive.
Server virtualization directly addresses all three of these limitations by decoupling the application instance
from the underlying physical hardware. Multiple virtual servers can share a pool of physical hardware,
meaning that businesses can now consolidate their server workloads to a smaller set of more effectively
utilized physical servers. In addition, the ability to transparently migrate running virtual machines across a
pool of physical servers enables businesses to reduce the impact of downtime due to scheduled
maintenance activities.
Clustered Data ONTAP brings these same benefits, and many others, to storage systems. As with server
virtualization, clustered Data ONTAP enables you to combine multiple physical storage controllers into a
single logical cluster that can non-disruptively service multiple storage workload needs. With clustered Data
ONTAP you can:
o Combine different types and models of NetApp storage controllers (known as nodes) into a shared
physical storage resource pool (referred to as a cluster).
o Support multiple data access protocols (CIFS, NFS, Fibre Channel, iSCSI, FCoE) concurrently on the
same storage cluster.
o Consolidate various storage workloads to the cluster. Each workload can be assigned its own Storage
Virtual Machine (SVM), which is essentially a dedicated virtual storage controller, and its own data
volumes, LUNs, CIFS shares, and NFS exports.
o Support multitenancy with delegated administration of SVMs. Tenants can be different companies,
business units, or even individual application owners, each with their own distinct administrators whose
admin rights are limited to just the assigned SVM.
o Use Quality of Service (QoS) capabilities to manage resource utilization between storage workloads.
o Non-disruptively migrate live data volumes and client connections from one cluster node to another.
o Non-disruptively scale the cluster out by adding nodes. Nodes can likewise be non-disruptively
removed from the cluster, meaning that you can non-disruptively scale a cluster up and down during
hardware refresh cycles.
o Leverage multiple nodes in the cluster to simultaneously service a given SVM's storage workloads.
This means that businesses can scale out their SVMs beyond the bounds of a single physical node in
response to growing storage and performance requirements, all non-disruptively.
o Apply software and firmware updates and configuration changes without downtime.
Lab Objectives
This lab explores fundamental concepts of clustered Data ONTAP, and utilizes a modular design to allow
you to focus on the topics that are of specific interest to you. The Clusters section is required for all
invocations of the lab (it is a prerequisite for the other sections). If you are interested in NAS functionality
then complete the Storage Virtual Machines for NFS and CIFS section. If you are interested in SAN
functionality, then complete the Storage Virtual Machines for iSCSI section and at least one of its
Windows or Linux subsections (you may do both if you so choose). If you are interested in nondisruptive
operations then you will need to first complete one of the Storage Virtual Machine sections just mentioned
before you can proceed to the Nondisruptive Operations section.
Here is a summary of the exercises in this lab, along with their Estimated Completion Times (ECT):
Clusters (Required, ECT = 20 minutes)
o Explore a cluster.
o Create a Subnet.
Storage Virtual Machines for NFS and CIFS (Optional, ECT = 40 minutes)
o Configure the Storage Virtual Machine for CIFS and NFS access.
o Mount a CIFS share from the Storage Virtual Machine on a Windows client.
o Mount an NFS volume from the Storage Virtual Machine on a Linux client.
Storage Virtual Machines for iSCSI (Optional, ECT = 90 minutes including all optional subsections)
o Create a Windows LUN on the volume and map the LUN to an igroup.
o Configure a Windows client for iSCSI and MPIO and mount the LUN.
o Create a Linux LUN on the volume and map the LUN to an igroup.
o Configure a Linux client for iSCSI and multipath and mount the LUN.
This lab includes instructions for completing each of these tasks using either System Manager, NetApp's
graphical administration interface, or the Data ONTAP command line. The end state of the lab produced
by either method is exactly the same, so use whichever method you are the most comfortable with.
4 Basic Concepts for Clustered Data ONTAP 8.3
In a lab section you will encounter orange bars similar to the following that indicate the beginning of the
graphical or command line procedures for that exercise. A few sections only offer one of these two options
rather than both, in which case the text in the orange bar will communicate that point.
***EXAMPLE*** To perform this section's tasks from the GUI: ***EXAMPLE***
Note that while switching back and forth between the graphical and command line methods from one
section of the lab guide to another is supported, this guide is not designed to support switching back and
forth between these methods within a single section. For the best experience we recommend that you stick
with a single method for the duration of a lab section.
Prerequisites
This lab introduces clustered Data ONTAP, and so this guide makes no assumptions that the user has
previous experience with Data ONTAP. The lab does assume some basic familiarity with storage-system
concepts such as RAID, CIFS, NFS, LUNs, and DNS.
This lab includes steps for mapping shares and mounting LUNs on a Windows client. These steps assume
that the lab user has a basic familiarity with Microsoft Windows.
This lab also includes steps for mounting NFS volumes and LUNs on a Linux client. All steps are performed
from the Linux command line and assume a basic working knowledge of the Linux command line. A basic
working knowledge of a text editor such as vi may be useful, but is not required.
1. The launch icon for the PuTTY application is pinned to the taskbar on the Windows host jumphost as
shown in the following screenshot; just double-click on the icon to launch it.
Once PuTTY launches you can connect to one of the hosts in the lab by following these steps. This
example shows a user connecting to the Data ONTAP cluster named cluster1.
1. By default PuTTY should launch into the Basic options for your PuTTY session display as shown in
the screenshot. If you accidentally navigate away from this view just click on the Session category item
to return to this view.
2. Use the scrollbar in the Saved Sessions box to navigate down to the desired host and double-click it to
open the connection. A terminal window will open and you will be prompted to log into the host. You
can find the correct username and password for the host in Table 1 in the Lab Environment section at
the beginning of this guide.
The clustered Data ONTAP command line supports a number of usability features that make the
command line much easier to use. If you are unfamiliar with those features, review Appendix 1 of this
lab guide, which contains a brief overview of them.
Lab Environment
The following figure contains a diagram of the environment for this lab.
All of the servers and storage controllers presented in this lab are virtual devices, and the networks that
interconnect them are exclusive to your lab session. While we encourage you to follow the demonstration
steps outlined in this lab guide, you are free to deviate from this guide and experiment with other Data
ONTAP features that interest you. While the virtual storage controllers (vsims) used in this lab offer nearly
all of the same functionality as physical storage controllers, they are not capable of providing the same
performance as a physical controller, which is why these labs are not suitable for performance testing.
Table 1 provides a list of the servers and storage controller nodes in the lab, along with their IP addresses and login credentials.
Table 1: Lab Host Credentials

Hostname     Description                        IP Address(es)  Username            Password
-----------  ---------------------------------  --------------  ------------------  --------
JUMPHOST     Windows 2012R2 Remote Access host  192.168.0.5     Demo\Administrator  Netapp1!
RHEL1                                           192.168.0.61    root                Netapp1!
RHEL2                                           192.168.0.62    root                Netapp1!
DC1                                             192.168.0.253   Demo\Administrator  Netapp1!
cluster1                                        192.168.0.101   admin               Netapp1!
cluster1-01                                     192.168.0.111   admin               Netapp1!
cluster1-02                                     192.168.0.112   admin               Netapp1!
Table 2 lists the NetApp software that is pre-installed on the various hosts in this lab.
Hostname      Description
------------  ---------------------------------------------------------------------
JUMPHOST      Data ONTAP DSM v4.1 for Windows MPIO, Windows Host Utility Kit v6.0.2
RHEL1, RHEL2
Lab Activities
Clusters
Expected Completion Time: 20 Minutes
A cluster is a group of physical storage controllers, or nodes, that have been joined together for the
purpose of serving data to end users. The nodes in a cluster can pool their resources together so that the
cluster can distribute its work across the member nodes. Communication and data transfer between
member nodes (such as when a client accesses data on a node other than the one actually hosting the
data) takes place over a 10Gb cluster-interconnect network to which all the nodes are connected, while
management and client data traffic passes over separate management and data networks configured on
the member nodes.
Clusters typically consist of one or more NetApp storage controller High Availability (HA) pairs. Both
controllers in an HA pair actively host and serve data, but they are also capable of taking over their
partner's responsibilities in the event of a service disruption by virtue of their redundant cable paths to each
other's disk storage. Having multiple HA pairs in a cluster allows the cluster to scale out to handle greater
workloads, and to support non-disruptive migrations of volumes and client connections to other nodes in
the cluster resource pool. This means that cluster expansion and technology refreshes can take place
while the cluster remains fully online, and serving data.
Since clusters are almost always comprised of one or more HA pairs, a cluster almost always contains an
even number of controller nodes. There is one exception to this rule, and that is the single node cluster,
which is a special cluster configuration intended to support small storage deployments that can be satisfied
with a single physical controller head. The primary noticeable difference between single node and standard
clusters, besides the number of nodes, is that a single node cluster does not have a cluster network. Single
node clusters can later be converted into traditional multi-node clusters, and at that point become subject to
all the standard cluster requirements like the need to utilize an even number of nodes consisting of HA
pairs. This lab does not contain a single node cluster, and so this lab guide does not discuss them further.
Data ONTAP 8.3 clusters that only serve NFS and CIFS can scale up to a maximum of 24 nodes, although
the node limit may be lower depending on the model of FAS controller in use. Data ONTAP 8.3 clusters
that also host iSCSI and FC can scale up to a maximum of 8 nodes.
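These protocol-dependent limits can be captured in a small sketch (the model-specific caps mentioned above are not enumerated here, since this guide does not list them):

```python
# Sketch of the Data ONTAP 8.3 cluster node limits described above.
def max_nodes(serves_san: bool) -> int:
    # NAS-only (NFS/CIFS) clusters scale to 24 nodes; clusters that also
    # host iSCSI or FC are limited to 8. A lower cap may apply depending
    # on the FAS controller model in use.
    return 8 if serves_san else 24

print(max_nodes(serves_san=False))  # 24
print(max_nodes(serves_san=True))   # 8
```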
This lab utilizes simulated NetApp storage controllers rather than physical FAS controllers. The simulated
controller, also known as a vsim, is a virtual machine that simulates the functionality of a physical controller
without the need for dedicated controller hardware. The vsim is not designed for performance testing, but
does offer much of the same functionality as a physical FAS controller, including the ability to generate I/O
to disks. This makes the vsim a powerful tool to explore and experiment with Data ONTAP product
features. The vsim is limited, however, when a feature requires a specific physical capability that the vsim does
not support; for example, vsims do not support Fibre Channel connections, which is why this lab uses
iSCSI to demonstrate block storage functionality.
This lab starts with a pre-created, minimally configured cluster. The pre-created cluster already includes
Data ONTAP licenses, the cluster's basic network configuration, and a pair of pre-configured HA
controllers. In this next section you will create the aggregates that are used by the SVMs that you will
create in later sections of the lab. You will also take a look at the new Advanced Drive Partitioning feature
introduced in clustered Data ONTAP 8.3.
Connect to the Cluster with OnCommand System Manager
OnCommand System Manager is NetApp's browser-based management tool for configuring and managing
NetApp storage systems and clusters. Prior to 8.3, System Manager was a separate application that you
had to download and install on your client OS. In 8.3, System Manager moved on board the cluster,
so you just point your web browser to the cluster management address. The on-board System Manager
interface is essentially the same as that of System Manager 3.1, the version you install on
a client.
On the Jumphost, the Windows 2012R2 Server desktop you see when you first connect to the lab, open
the web browser of your choice. This lab guide uses Chrome, but you can use Firefox or Internet Explorer if
you prefer one of those. All three browsers already have System Manager set as the browser home page.
1. Launch Chrome to open System Manager.
1. Enter the User Name as admin and the Password as Netapp1! and then click Sign In.
System Manager is now logged in to cluster1 and displays a summary page for the cluster. If you are
unfamiliar with System Manager, here is a quick introduction to its layout.
Use the tabs on the left side of the window to manage various aspects of the cluster. The Cluster tab (1)
accesses configuration settings that apply to the cluster as a whole. The Storage Virtual Machines tab (2)
allows you to manage individual Storage Virtual Machines (SVMs, also known as Vservers). The Nodes tab
(3) contains configuration settings that are specific to individual controller nodes. Please take a few
moments to expand and browse these tabs to familiarize yourself with their contents.
Note:
As you use System Manager in this lab, you may encounter situations where buttons at the bottom of a
System Manager pane are beyond the viewing size of the window, and no scroll bar exists to allow you to
scroll down to see them. If this happens, then you have two options: either increase the size of the browser
window (you might need to increase the resolution of your jumphost desktop to accommodate the larger
browser window), or in the System Manager window, use the tab key to cycle through all the various fields
and buttons, which eventually forces the window to scroll down to the non-visible items.
By default each cluster node has one aggregate known as the root aggregate, which is a group of the
node's local disks that hosts the node's Data ONTAP operating system. A node's root aggregate is
automatically created during Data ONTAP installation in a minimal RAID-DP configuration. This means it is
initially comprised of 3 disks (1 data, 2 parity), and has a name that begins with the string aggr0. For example,
in this lab the root aggregate of the node cluster1-01 is named aggr0_cluster1_01, and the root
aggregate of the node cluster1-02 is named aggr0_cluster1_02.
On higher-end FAS systems that have many disks, the requirement to dedicate 3 disks for each controller's
root aggregate is not a burden, but for entry-level FAS systems that only have 24 or 12 disks this root
aggregate disk overhead requirement significantly reduces the disks available for storing user data. To
improve usable capacity, NetApp introduced Advanced Drive Partitioning in 8.3, which divides the Hard
Disk Drives (HDDs) on nodes that have this feature enabled into two partitions: a small root partition, and
a much larger data partition. Data ONTAP allocates the root partitions to the node's root aggregate, and the
data partitions to data aggregates. Each partition behaves like a virtual disk, so in terms of RAID, Data
ONTAP treats these partitions just like physical disks when creating aggregates. The key benefit is
that a much higher percentage of the node's overall disk capacity is now available to host user data.
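As a back-of-the-envelope sketch of that benefit, consider the disk layout used in this lab (twelve disks per node, 28.44 GB usable each, with 1.52 GB root and 26.88 GB data partitions, as shown in the CLI output later in this section); the three-disk unpartitioned root aggregate is the minimal RAID-DP configuration described above:

```python
# Rough sketch: per-node capacity available for data aggregates, with and
# without Advanced Drive Partitioning, using this lab's per-disk figures.
DISKS = 12                # disks per node in this lab
USABLE_GB = 28.44         # usable capacity of each whole disk
DATA_PART_GB = 26.88      # data partition size (per "disk partition show")

# Without partitioning, 3 whole disks are consumed by the root aggregate.
without_adp = (DISKS - 3) * USABLE_GB

# With partitioning, every disk still contributes its data partition.
with_adp = DISKS * DATA_PART_GB

print(f"without ADP: {without_adp:.2f} GB for data")  # 255.96 GB
print(f"with ADP:    {with_adp:.2f} GB for data")     # 322.56 GB
```

The raw figures ignore RAID parity and other overheads, but the direction of the comparison is the point: partitioning reclaims most of the capacity that whole-disk root aggregates would otherwise consume.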
Data ONTAP only supports HDD partitioning for FAS 22xx and FAS25xx controllers, and only for HDDs
installed in their internal shelf on those models. Advanced Drive Partitioning can only be enabled at system
installation time, and there is no way to convert an existing system to use Advanced Drive Partitioning
other than to completely evacuate the affected HDDs and then re-install Data ONTAP.
All-Flash FAS (AFF) supports a variation of Advanced Drive Partitioning that utilizes SSDs instead of
HDDs. The capability is available for entry-level, mid-range, and high-end AFF platforms. Data ONTAP 8.3
also introduces SSD partitioning for use with Flash Pools, but the details of that feature lie outside the
scope of this lab.
In this section, you will see how to determine if a cluster node is utilizing Advanced Drive Partitioning.
System Manager provides a basic view into this information, but if you want to see more detail then you will
want to use the CLI.
If you scroll back up to look at the Assigned HDDs section of the window, you will see that there are no
entries listed for the root partitions of the disks. Under daily operation, you will be primarly concerned with
data partitions rather than root partitions, and so this view focuses on just showing information about the
data partitions. To see information about the physical disks attached to your system you will need to select
the Inventory tab.
1. Click on the Inventory tab at the top of the Disks window.
System Manager's main window now shows a list of the physical disks available across all the nodes in the
cluster, which nodes own those disks, and so on. If you look at the Container Type column you see that the
disks in your lab all show a value of shared; this value indicates that the physical disk is partitioned. For
disks that are not partitioned you would typically see values like spare, data, parity, and dparity.
For a FAS controller that will be using Advanced Drive Partitioning, Data ONTAP automatically determines
the size of the root and data disk partitions at system installation time based on the quantity and size of the
available disks assigned to each node. In this lab each cluster node has twelve 32 GB hard disks, and you
can see how your nodes root aggregates are consuming the root partitions on those disks by going to the
Aggregates page in System Manager.
1. On the Cluster tab, navigate to cluster1->Storage->Aggregates.
2. In the Aggregates list, select aggr0_cluster1_01, which is the root aggregate for cluster node cluster1-01. Notice that the total size of this aggregate is a little over 10 GB. The Available and Used space
shown for this aggregate in your lab may vary from what is shown in this screenshot, depending on the
quantity and size of the snapshots that exist on your nodes root volume.
3. Click the Disk Layout tab at the bottom of the window. The lower pane of System Manager now
displays a list of the disks that are members of this aggregate. Notice that the usable space is 1.52 GB,
which is the size of the root partition on the disk. The Physical Space column displays the total capacity
of the whole disk that is available to clustered Data ONTAP, including the space allocated to both the
disk's root and data partitions.
1. Display a list of all the disks in the cluster:

cluster1::> disk show
                     Usable           Disk    Container   Container
Disk                   Size Shelf Bay Type    Type        Name              Owner
---------------- ---------- ----- --- ------- ----------- ----------------- -----------
VMw-1.1             28.44GB     -   0 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.2             28.44GB     -   1 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.3             28.44GB     -   2 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.4             28.44GB     -   3 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.5             28.44GB     -   4 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.6             28.44GB     -   5 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.7             28.44GB     -   6 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.8             28.44GB     -   8 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.9             28.44GB     -   9 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.10            28.44GB     -  10 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.11            28.44GB     -  11 VMDISK  shared                        cluster1-01
VMw-1.12            28.44GB     -  12 VMDISK  shared                        cluster1-01
VMw-1.13            28.44GB     -   0 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.14            28.44GB     -   1 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.15            28.44GB     -   2 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.16            28.44GB     -   3 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.17            28.44GB     -   4 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.18            28.44GB     -   5 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.19            28.44GB     -   6 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.20            28.44GB     -   8 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.21            28.44GB     -   9 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.22            28.44GB     -  10 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.23            28.44GB     -  11 VMDISK  shared                        cluster1-02
VMw-1.24            28.44GB     -  12 VMDISK  shared                        cluster1-02
24 entries were displayed.
cluster1::>
The preceding command listed a total of 24 disks, 12 for each of the nodes in this two-node cluster. The
container type for all the disks is shared, which indicates that the disks are partitioned. For disks that are
not partitioned, you would typically see values like spare, data, parity, and dparity. The Owner field
indicates which node the disk is assigned to, and the Container Name field indicates which aggregate the
disk is assigned to. Notice that two disks for each node do not have a Container Name listed; these are
spare disks that Data ONTAP can use as replacements in the event of a disk failure.
2. At this point, the only aggregates that exist on this new cluster are the root aggregates. List the
aggregates that exist on the cluster:
cluster1::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_01
           10.26GB   510.6MB   95% online       1 cluster1-01      raid_dp,
                                                                   normal
aggr0_cluster1_02
           10.26GB   510.6MB   95% online       1 cluster1-02      raid_dp,
                                                                   normal
2 entries were displayed.
cluster1::>
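If you want to sanity-check the Used% column, it is simply (Size - Available) / Size; a quick sketch using the values reported for the root aggregates above:

```python
# Sanity check of the Used% column from "aggr show":
# Used% = (Size - Available) / Size, using the reported values.
size_gb = 10.26              # aggregate Size
available_gb = 510.6 / 1024  # Available, converted from MB to GB
used_pct = (size_gb - available_gb) / size_gb * 100
print(f"{used_pct:.0f}%")    # 95%, matching the CLI output
```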
3. Now list the disks that are members of the root aggregate for the node cluster1-01. Here is the command
that you would ordinarily use to display that information for an aggregate that is not using partitioned
disks.
cluster1::> storage disk show -aggregate aggr0_cluster1_01
There are no entries matching your query.
Info: One or more aggregates queried for use shared disks. Use "storage aggregate show-status"
to get correct set of disks associated with these aggregates.
cluster1::>
4. As you can see, in this instance the preceding command is not able to produce a list of disks because
this aggregate is using shared disks. Instead it refers you to the storage aggregate show-status
command to query the aggregate for a list of its assigned disk partitions.
cluster1::> storage aggregate show-status -aggregate aggr0_cluster1_01
Owner Node: cluster1-01
 Aggregate: aggr0_cluster1_01 (online, raid_dp) (block checksums)
  Plex: /aggr0_cluster1_01/plex0 (online, normal, active, pool0)
   RAID Group /aggr0_cluster1_01/plex0/rg0 (normal, block checksums)
                                                      Usable Physical
     Position Disk                 Pool Type     RPM    Size     Size Status
     -------- -------------------- ---- ------ ----- ------- -------- --------
     shared   VMw-1.1               0   VMDISK     -  1.52GB  28.44GB (normal)
     shared   VMw-1.2               0   VMDISK     -  1.52GB  28.44GB (normal)
     shared   VMw-1.3               0   VMDISK     -  1.52GB  28.44GB (normal)
     shared   VMw-1.4               0   VMDISK     -  1.52GB  28.44GB (normal)
     shared   VMw-1.5               0   VMDISK     -  1.52GB  28.44GB (normal)
     shared   VMw-1.6               0   VMDISK     -  1.52GB  28.44GB (normal)
     shared   VMw-1.7               0   VMDISK     -  1.52GB  28.44GB (normal)
     shared   VMw-1.8               0   VMDISK     -  1.52GB  28.44GB (normal)
     shared   VMw-1.9               0   VMDISK     -  1.52GB  28.44GB (normal)
     shared   VMw-1.10              0   VMDISK     -  1.52GB  28.44GB (normal)
10 entries were displayed.
cluster1::>
The output shows that aggr0_cluster1_01 is comprised of 10 disks, each with a usable size of 1.52 GB,
and you know that the aggregate is using the listed disks' root partitions because aggr0_cluster1_01 is a
root aggregate.
For a FAS controller that will be using Advanced Drive Partitioning, Data ONTAP automatically determines
the size of the root and data disk partitions at system installation time. That determination is based on the
quantity and size of the available disks assigned to each node. As you saw earlier, this particular cluster
node has 12 disks, so during installation Data ONTAP partitioned all 12 disks but only assigned 10 of those
root partitions to the root aggregate, so that the node would have 2 spare disks available to protect against
disk failures.
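The arithmetic behind that layout can be sketched as follows (the two parity positions per RAID group are the standard RAID-DP layout; the other figures come from the output above):

```python
# Sketch of the root aggregate layout described above.
total_disks = 12          # disks assigned to this node
in_root_aggregate = 10    # root partitions placed in aggr0_cluster1_01
spares = total_disks - in_root_aggregate

# RAID-DP reserves two positions per RAID group: parity and double-parity.
parity_partitions = 2
data_partitions = in_root_aggregate - parity_partitions

print(f"spare disks:     {spares}")           # 2
print(f"data partitions: {data_partitions}")  # 8
```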
5. The Data ONTAP CLI includes a diagnostic-level command that provides a more comprehensive single
view of a system's partitioned disks. The following command shows the partitioned disks that belong to
the node cluster1-01.
cluster1::> set -priv diag
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
cluster1::*> disk partition show -owner-node-name cluster1-01
                          Usable  Container     Container
Partition                 Size    Type          Name              Owner
------------------------- ------- ------------- ----------------- -----------------
VMw-1.1.P1                26.88GB spare         Pool0             cluster1-01
VMw-1.1.P2                 1.52GB aggregate     /aggr0_cluster1_01/plex0/rg0
                                                                  cluster1-01
VMw-1.2.P1                26.88GB spare         Pool0             cluster1-01
VMw-1.2.P2                 1.52GB aggregate     /aggr0_cluster1_01/plex0/rg0
                                                                  cluster1-01
VMw-1.3.P1                26.88GB spare         Pool0             cluster1-01
VMw-1.3.P2                 1.52GB aggregate     /aggr0_cluster1_01/plex0/rg0
                                                                  cluster1-01
VMw-1.4.P1                26.88GB spare         Pool0             cluster1-01
VMw-1.4.P2                 1.52GB aggregate     /aggr0_cluster1_01/plex0/rg0
                                                                  cluster1-01
VMw-1.5.P1                26.88GB spare         Pool0             cluster1-01
VMw-1.5.P2                 1.52GB aggregate     /aggr0_cluster1_01/plex0/rg0
                                                                  cluster1-01
VMw-1.6.P1                26.88GB spare         Pool0             cluster1-01
VMw-1.6.P2                 1.52GB aggregate     /aggr0_cluster1_01/plex0/rg0
                                                                  cluster1-01
VMw-1.7.P1                26.88GB spare         Pool0             cluster1-01
VMw-1.7.P2                 1.52GB aggregate     /aggr0_cluster1_01/plex0/rg0
                                                                  cluster1-01
VMw-1.8.P1                26.88GB spare         Pool0             cluster1-01
VMw-1.8.P2                 1.52GB aggregate     /aggr0_cluster1_01/plex0/rg0
                                                                  cluster1-01
VMw-1.9.P1                26.88GB spare         Pool0             cluster1-01
VMw-1.9.P2                 1.52GB aggregate     /aggr0_cluster1_01/plex0/rg0
                                                                  cluster1-01
VMw-1.10.P1               26.88GB spare         Pool0             cluster1-01
VMw-1.10.P2                1.52GB aggregate     /aggr0_cluster1_01/plex0/rg0
                                                                  cluster1-01
VMw-1.11.P1               26.88GB spare         Pool0             cluster1-01
VMw-1.11.P2                1.52GB spare         Pool0             cluster1-01
VMw-1.12.P1               26.88GB spare         Pool0             cluster1-01
VMw-1.12.P2                1.52GB spare         Pool0             cluster1-01
24 entries were displayed.
cluster1::*> set -priv admin
cluster1::>
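Note that each disk's two partitions account for essentially the whole disk: the 26.88 GB data partition plus the 1.52 GB root partition is approximately the 28.44 GB usable size reported for the whole disk. (Attributing the small remainder to partitioning overhead and rounding is an assumption, not something the output states.)

```python
# The two partitions of each disk roughly add back up to the whole disk.
data_part_gb = 26.88   # P1, from "disk partition show"
root_part_gb = 1.52    # P2, from "disk partition show"
whole_disk_gb = 28.44  # usable size from "disk show"

remainder = whole_disk_gb - (data_part_gb + root_part_gb)
print(f"unaccounted: {remainder:.2f} GB")  # ~0.04 GB
```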
You can create aggregates from either the Cluster tab or the Nodes tab. For this exercise use the Cluster
tab as follows:
1. Select the Cluster tab. To avoid confusion, always double-check to make sure that you are working in
the correct left pane tab context when performing activities in System Manager!
2. Go to cluster1->Storage->Aggregates.
3. Click on the Create button to launch the Create Aggregate Wizard.
1. Specify the Name of the aggregate as aggr1_cluster1_01 as shown, and then click Browse.
The Select Disk Type window closes, and focus returns to the Create Aggregate window.
1. The Disk Type should now show as VMDISK. Set the Number of Disks to 5.
2. Click the Create button to create the new aggregate and to close the wizard.
The Create Aggregate window closes, and focus returns to the Aggregates view in System Manager. The
newly created aggregate should now be visible in the list of aggregates.
1. Select the entry for the aggregate aggr1_cluster1_01 if it is not already selected.
2. Click the Details tab to view more detailed information about this aggregates configuration.
3. Notice that aggr1_cluster1_01 is a 64-bit aggregate. In earlier versions of clustered Data ONTAP 8, an
aggregate could be either 32-bit or 64-bit, but Data ONTAP 8.3 only supports 64-bit aggregates. If you
have an existing clustered Data ONTAP 8.x system that has 32-bit aggregates and you plan to upgrade
that cluster to 8.3, you must convert those 32-bit aggregates to 64-bit aggregates prior to the upgrade.
The procedure for that migration is not covered in this lab, so if you need further details then please
refer to the clustered Data ONTAP documentation.
Now repeat the process to create a new aggregate on the node cluster1-02.
1. Click the Create button again.
The Select Disk Type window closes, and focus returns to the Create Aggregate window.
1. The Disk Type should now show as VMDISK. Set the Number of Disks to 5.
2. Click the Create button to create the new aggregate.
The Create Aggregate window closes, and focus returns to the Aggregates view in System Manager.
The new aggregate aggr1_cluster1_02 now appears in the cluster's aggregate list.
From a PuTTY session logged in to cluster1 as the username admin with the password Netapp1!,
display a list of the disks attached to the node cluster1-01. (Note that you can omit the -nodelist option to
display a list of all the disks in the cluster.) By default the PuTTY window may wrap output lines because
the window is too small; if this is the case for you then simply expand the window by selecting its edge and
dragging it wider, after which any subsequent output will utilize the visible width of the window.
cluster1::> disk show -nodelist cluster1-01
                     Usable           Disk    Container   Container
Disk                   Size Shelf Bay Type    Type        Name              Owner
---------------- ---------- ----- --- ------- ----------- ----------------- -----------
VMw-1.25            28.44GB     -   0 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.26            28.44GB     -   1 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.27            28.44GB     -   2 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.28            28.44GB     -   3 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.29            28.44GB     -   4 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.30            28.44GB     -   5 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.31            28.44GB     -   6 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.32            28.44GB     -   8 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.33            28.44GB     -   9 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.34            28.44GB     -  10 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.35            28.44GB     -  11 VMDISK  shared                        cluster1-01
VMw-1.36            28.44GB     -  12 VMDISK  shared                        cluster1-01
VMw-1.37            28.44GB     -   0 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.38            28.44GB     -   1 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.39            28.44GB     -   2 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.40            28.44GB     -   3 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.41            28.44GB     -   4 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.42            28.44GB     -   5 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.43            28.44GB     -   6 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.44            28.44GB     -   8 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.45            28.44GB     -   9 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.46            28.44GB     -  10 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.47            28.44GB     -  11 VMDISK  shared                        cluster1-02
VMw-1.48            28.44GB     -  12 VMDISK  shared                        cluster1-02
24 entries were displayed.
cluster1::>
Create the aggregate named aggr1_cluster1_01 on the node cluster1-01 and the aggregate named
aggr1_cluster1_02 on the node cluster1-02.
Networks
Clustered Data ONTAP provides a number of network components that you use to manage your cluster.
Those components include:
Ports are the physical Ethernet and Fibre Channel connections on each node, the interface groups (ifgrps)
you can create to aggregate those connections, and the VLANs you can use to subdivide them.
A logical interface (LIF) is essentially an IP address that is associated with a port, and has a number of
associated characteristics such as an assigned home node, an assigned physical home port, a list of
physical ports it can fail over to, an assigned SVM, a role, a routing group, and so on. A given LIF can only
be assigned to a single SVM, and since LIFs are mapped to physical network ports on cluster nodes this
means that an SVM runs in part on all nodes that are hosting its LIFs.
Routing tables in clustered Data ONTAP are defined for each Storage Virtual Machine. Since each SVM
has its own routing table, changes to one SVM's routing table do not impact any other SVM's
routing table.
IPspaces are new in Data ONTAP 8.3 and allow you to configure a Data ONTAP cluster to logically
separate one IP network from another, even if those two networks are using the same IP address range.
IPspaces are a multi-tenancy feature designed to allow storage service providers to share a cluster
between different companies while still separating storage traffic for privacy and security. Every cluster
includes a default IPspace to which Data ONTAP automatically assigns new SVMs, and that default IPspace
is probably sufficient for most NetApp customers who are deploying a cluster within a single company or
organization that uses a non-conflicting IP address range.
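The collision that IPspaces solve can be illustrated with a short sketch. This is not Data ONTAP code; it simply shows, using Python's standard ipaddress module, why two tenants bringing the same address range cannot be told apart by address alone, and how keying networks by an (IPspace, range) pair keeps them distinct. The IPspace names are hypothetical examples.

```python
import ipaddress

# Two tenants bring the exact same address range, so an address range
# alone cannot identify a network.
tenant_a = ipaddress.ip_network("192.168.0.0/24")
tenant_b = ipaddress.ip_network("192.168.0.0/24")
assert tenant_a.overlaps(tenant_b)  # identical ranges collide

# An IPspace-aware configuration keys every network by (ipspace, range)
# instead of by range alone, keeping the two tenants distinct.
networks = {("CompanyA", tenant_a): "tenant A storage traffic",
            ("CompanyB", tenant_b): "tenant B storage traffic"}
```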
Broadcast Domains are also new in Data ONTAP 8.3, and are collections of ports that all have access to
the same layer 2 networks, both physical and virtual (i.e. VLANs). Every IPspace has its own set of
Broadcast Domains, and Data ONTAP provides a default broadcast domain to go along with the default
IPspace. Broadcast domains are used by Data ONTAP to determine what ports an SVM can use for its
LIFs.
Subnets are another new feature in Data ONTAP 8.3, and are a convenience feature intended to make LIF
creation and management easier for Data ONTAP administrators. A subnet is a pool of IP addresses that
you can specify by name when creating a LIF. Data ONTAP will automatically assign an available IP
address from the pool to the LIF, along with a subnet mask and a gateway. A subnet is scoped to a specific
broadcast domain, so all of the subnet's addresses belong to the same layer 3 network. Data ONTAP
manages the pool automatically as you create or delete LIFs, and if you manually configure a LIF with an
address from the pool then it will detect that the address is in use and mark it as such in the pool.
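The pool behavior described above can be modeled with a few lines of Python. This is only a toy sketch of the concept, not Data ONTAP's implementation; it uses the Demo subnet values from this lab to show how addresses are handed out at LIF creation and how a manually configured address gets marked as consumed.

```python
import ipaddress

class SubnetPool:
    """Toy model of a Data ONTAP subnet: a named pool of addresses scoped
    to one broadcast domain, handed out automatically as LIFs are created."""

    def __init__(self, name, cidr, gateway, ip_range):
        self.name = name
        self.network = ipaddress.ip_network(cidr)
        self.gateway = gateway
        first, last = (ipaddress.ip_address(a) for a in ip_range.split("-"))
        self.free = [ipaddress.ip_address(i) for i in range(int(first), int(last) + 1)]
        self.used = set()

    def allocate(self):
        # Mimics LIF creation: assign the next available address from the
        # pool, along with the subnet mask and gateway.
        addr = self.free.pop(0)
        self.used.add(addr)
        return str(addr), str(self.network.netmask), self.gateway

    def mark_in_use(self, addr):
        # Mimics manually configuring a LIF with a pool address: the pool
        # detects the address is in use and marks it as consumed.
        addr = ipaddress.ip_address(addr)
        if addr in self.free:
            self.free.remove(addr)
            self.used.add(addr)

# The Demo subnet used in this lab, modeled as a pool.
pool = SubnetPool("Demo", "192.168.0.0/24", "192.168.0.1",
                  "192.168.0.131-192.168.0.139")
ip, mask, gateway = pool.allocate()   # -> 192.168.0.131, 255.255.255.0, 192.168.0.1
```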
DNS Zones allow an SVM to manage DNS name resolution for its own LIFs, and since multiple LIFs can
share the same DNS name, this allows the SVM to load balance traffic by IP address across the LIFs. To
use DNS Zones you must configure your DNS server to delegate DNS authority for the subdomain to the
SVM.
In this section of the lab, you will create a subnet that you will leverage in later sections to provision SVMs
and LIFs. You will not create IPspaces or Broadcast Domains as the system defaults are sufficient for this
lab.
Review the Port Details section at the bottom of the Network pane and note that the e0c through e0g ports on
both cluster nodes are all part of this broadcast domain. These are the network ports that you will be using
in this lab.
Gateway: 192.168.0.1
2. The values you enter in the IP address box depend on what sections of the lab guide you intend to
complete. It is important that you choose the right values here so that the values in your lab will
correctly match up with the values used in this lab guide.
If you plan to complete just the NAS section or both the NAS and SAN sections then enter
192.168.0.131-192.168.0.139
If you plan to complete just the SAN section then enter 192.168.0.133-192.168.0.139
The Select Broadcast Domain window closes, and focus returns to the Create Subnet window.
1. The values in your Create Subnet window should now match those shown in the following screenshot,
the only possible exception being for the IP Addresses field, whose value may differ depending on what
value range you chose to enter to match your plans for the lab.
Note: If you click the Show ports on this domain link under the Broadcast Domain textbox, you
can once again see the list of ports that this broadcast domain includes.
2. Click Create.
The Create Subnet window closes, and focus returns to the Subnets tab in System Manager. Notice that
the main pane of the Subnets tab now includes an entry for your newly created subnet, and that the
lower portion of the pane includes metrics tracking the consumption of the IP addresses that belong to this
subnet.
Feel free to explore the contents of the other available tabs on the Network page. Here is a brief summary
of the information available on those tabs.
The Ethernet Ports tab displays the physical NICs on your controller, which will be a superset of the NICs
that you saw previously listed as belonging to the default broadcast domain. The other NICs you will see
listed on the Ethernet Ports tab include the nodes' cluster network NICs.
The Network Interfaces tab displays a list of all of the LIFs on your cluster.
The FC/FCoE Adapters tab lists all the WWPNs for all of the controllers' NICs in the event they will be used
for iSCSI or FCoE connections. The simulated NetApp controllers you are using in this lab do not include
FC adapters, and this lab does not make use of FCoE.
1. Display a list of the cluster's IPspaces. A cluster actually contains two IPspaces by default: the Cluster
   IPspace, which correlates to the cluster network that Data ONTAP uses to have cluster nodes
   communicate with each other, and the Default IPspace to which Data ONTAP automatically assigns all
   new SVMs. You can create more IPspaces if necessary, but that activity will not be covered in this lab.
cluster1::> network ipspace show
IPspace             Vserver List                  Broadcast Domains
------------------- ----------------------------- ----------------------------
Cluster             Cluster                       Cluster
Default             cluster1                      Default
2 entries were displayed.
cluster1::>
2. Display a list of the cluster's broadcast domains. Remember that broadcast domains are scoped to a
   single IPspace. The e0a ports on the cluster nodes are part of the Cluster broadcast domain in the
   Cluster IPspace. The remaining ports are part of the Default broadcast domain in the Default IPspace.
cluster1::> network port broadcast-domain show
IPspace Broadcast                                        Update
Name    Domain Name    MTU  Port List                    Status Details
------- ----------- ------ ---------------------------- --------------
Cluster Cluster       1500
                            cluster1-01:e0a              complete
                            cluster1-01:e0b              complete
                            cluster1-02:e0a              complete
                            cluster1-02:e0b              complete
Default Default       1500
                            cluster1-01:e0c              complete
                            cluster1-01:e0d              complete
                            cluster1-01:e0e              complete
                            cluster1-01:e0f              complete
                            cluster1-01:e0g              complete
                            cluster1-02:e0c              complete
                            cluster1-02:e0d              complete
                            cluster1-02:e0e              complete
                            cluster1-02:e0f              complete
                            cluster1-02:e0g              complete
2 entries were displayed.
cluster1::>
Data ONTAP does not include a default subnet, so you will need to create a subnet now. The specific
command you will use depends on what sections of this lab guide you plan to complete, as you want to
correctly align the IP address pool in your lab with the IP addresses used in the portions of this lab guide
that you want to complete.
4. If you plan to complete the NAS portion of this lab, enter the following command. Also use this
   command if you plan to complete both the NAS and SAN portions of this lab.
cluster1::> network subnet create -subnet-name Demo -broadcast-domain Default -ipspace Default
-subnet 192.168.0.0/24 -gateway 192.168.0.1 -ip-ranges 192.168.0.131-192.168.0.139
cluster1::>
5. If you only plan to complete the SAN portion of this lab, then enter the following command instead.
cluster1::> network subnet create -subnet-name Demo -broadcast-domain Default -ipspace Default
-subnet 192.168.0.0/24 -gateway 192.168.0.1 -ip-ranges 192.168.0.133-192.168.0.139
cluster1::>
6. Re-display the list of the cluster's subnets. This example assumes you plan to complete the whole lab.
cluster1::> network subnet show
IPspace: Default
Subnet                     Broadcast                 Avail/
Name      Subnet           Domain    Gateway         Total  Ranges
--------- ---------------- --------- --------------- ------ ---------------------------
Demo      192.168.0.0/24   Default   192.168.0.1     9/9    192.168.0.131-192.168.0.139
cluster1::>
7. If you are interested in seeing a list of all of the network ports on your cluster, you can use the following
command for that purpose.
cluster1::> network port show
                                                            Speed (Mbps)
Node   Port      IPspace     Broadcast Domain Link    MTU   Admin/Oper
------ --------- ----------- ---------------- ----- ------- ------------
cluster1-01
       e0a       Cluster     Cluster          up       1500 auto/1000
       e0b       Cluster     Cluster          up       1500 auto/1000
       e0c       Default     Default          up       1500 auto/1000
       e0d       Default     Default          up       1500 auto/1000
       e0e       Default     Default          up       1500 auto/1000
       e0f       Default     Default          up       1500 auto/1000
       e0g       Default     Default          up       1500 auto/1000
cluster1-02
       e0a       Cluster     Cluster          up       1500 auto/1000
       e0b       Cluster     Cluster          up       1500 auto/1000
       e0c       Default     Default          up       1500 auto/1000
       e0d       Default     Default          up       1500 auto/1000
       e0e       Default     Default          up       1500 auto/1000
       e0f       Default     Default          up       1500 auto/1000
       e0g       Default     Default          up       1500 auto/1000
14 entries were displayed.
cluster1::>
DNS provides basic name resolution load balancing by advertising multiple IP addresses for the same
hostname. DNS is supported by both NFS and CIFS clients and works equally well with clients on local
area and wide area networks. Since DNS is an external service that resides outside of Data ONTAP, this
architecture creates the potential for service disruptions if the DNS server is advertising IP addresses for
LIFs that are temporarily offline. To compensate for this condition you can configure DNS servers to
delegate the name resolution responsibility for the SVM's hostname records to the SVM itself, so that it can
directly respond to name resolution requests involving its LIFs. This allows the SVM to consider LIF
availability and LIF utilization levels when deciding what LIF address to return in response to a DNS name
resolution request.
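The SVM-side resolution just described can be sketched in a few lines. This is only an illustration of the idea, not Data ONTAP's actual selection algorithm: the SVM answers queries for its own zone, skips LIFs that are offline, and here simply prefers the least-utilized LIF. The load figures are made-up example values.

```python
# Toy model of DNS-zone delegation: the SVM picks which LIF address to
# return, based on LIF availability and utilization.

def resolve(zone, lifs):
    candidates = [l for l in lifs if l["zone"] == zone and l["up"]]
    if not candidates:
        raise LookupError("no LIF available for " + zone)
    # Illustrative policy: answer with the least-loaded online LIF.
    return min(candidates, key=lambda l: l["load"])["address"]

lifs = [
    {"zone": "svm1.demo.netapp.com", "address": "192.168.0.131", "up": True, "load": 0.4},
    {"zone": "svm1.demo.netapp.com", "address": "192.168.0.132", "up": True, "load": 0.1},
]
print(resolve("svm1.demo.netapp.com", lifs))   # the less-busy LIF answers
```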
LIFs that map to physical network ports that reside on the same node as a volume's containing aggregate
offer the most efficient client access path to the volume's data. However, clients can also access volume
data through LIFs bound to physical network ports on other nodes in the cluster; in these cases clustered
Data ONTAP uses the high speed cluster network to bridge communication between the node hosting the
LIF and the node hosting the volume. NetApp best practice is to create at least one NAS LIF for a given
SVM on each cluster node that has an aggregate that is hosting volumes for that SVM. If you desire
additional resiliency then you can also create a NAS LIF on nodes not hosting aggregates for the SVM.
A NAS LIF (a LIF supporting only NFS and/or CIFS) can automatically failover from one cluster node to
another in the event of a component failure; any existing connections to that LIF from NFS and SMB 2.0
and later clients can non-disruptively tolerate the LIF failover event. When a LIF failover happens the NAS
LIF migrates to a different physical NIC, potentially to a NIC on a different node in the cluster, and
continues servicing network requests from that new node/port. Throughout this operation the NAS LIF
maintains its IP address; clients connected to the LIF may notice a brief delay while the failover is in
progress but as soon as it completes the clients resume any in-process NAS operations without any loss of
data.
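The failover behavior described above can be summarized in a minimal sketch. This is not ONTAP code; it just models the key property that the LIF's current port moves to another node's NIC while its IP address stays the same, which is why client connections survive. The failover target list is a hypothetical example based on this lab's port layout.

```python
# Toy model of a NAS LIF failing over: the current port changes, the
# IP address does not.

lif = {"name": "svm1_cifs_nfs_lif1",
       "address": "192.168.0.131",
       "current_port": ("cluster1-01", "e0c"),
       "failover_targets": [("cluster1-02", "e0c")]}

def fail_over(lif):
    lif["current_port"] = lif["failover_targets"][0]  # migrate to a surviving NIC
    return lif["address"]                             # address is unchanged

addr = fail_over(lif)
```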
The number of nodes in the cluster determines the total number of SVMs that can run in the cluster. Each
storage controller node can host a maximum of 125 SVMs, so you can calculate the cluster's effective SVM
limit by multiplying the number of nodes by 125. There is no limit on the number of LIFs that an SVM can
host, but there is a limit on the number of LIFs that can run on a given node. That limit is 256 LIFs per
node, but if the node is part of an HA pair configured for failover then the limit is half that value, 128 LIFs
per node (so that a node can also accommodate its HA partner's LIFs in the event of a failover).
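The limit arithmetic above can be written as a quick calculation, using only the numbers stated in this guide (125 SVMs per node; 256 LIFs per node, halved to 128 under HA failover):

```python
def cluster_limits(nodes, ha_failover=True):
    """Back-of-the-envelope limits: 125 SVMs per node; 256 LIFs per node,
    halved to 128 when the node is in an HA pair configured for failover."""
    lifs_per_node = 128 if ha_failover else 256
    return {"max_svms": nodes * 125, "max_lifs": nodes * lifs_per_node}

print(cluster_limits(2))   # the two-node lab cluster: 250 SVMs, 256 LIFs
```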
Each SVM has its own NAS namespace, a logical grouping of the SVM's CIFS and NFS volumes into a
single logical filesystem view. Clients can access the entire namespace by mounting a single share or
export at the top of the namespace tree, meaning that SVM administrators can centrally maintain and
present a consistent view of the SVM's data to all clients rather than having to reproduce that view
structure on each individual client. As an administrator maps and unmaps volumes from the namespace,
those volumes instantly become visible to or disappear from clients that have mounted CIFS and NFS
volumes higher in the SVM's namespace. Administrators can also create NFS exports at individual junction
points within the namespace and can create CIFS shares at any directory path in the namespace.
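The junction idea can be sketched as a tiny mount table. This is only a conceptual model, not ONTAP's implementation; volume and junction names below are hypothetical. It shows why a client that mounts the top of the namespace instantly sees every junctioned volume.

```python
# A namespace as a mapping of junction paths to volumes: the longest
# matching junction "owns" a path, like entries in a filesystem mount table.

namespace = {"/": "svm1_root"}          # junction path -> volume

def junction(path, volume):
    namespace[path] = volume            # instantly visible to clients of "/"

def volume_for(path):
    matches = [p for p in namespace
               if path == p or path.startswith(p.rstrip("/") + "/")]
    return namespace[max(matches, key=len)]

junction("/engineering", "eng_vol")
junction("/engineering/builds", "build_vol")
print(volume_for("/engineering/builds/nightly"))   # served by build_vol
print(volume_for("/engineering/specs"))            # served by eng_vol
```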
The default values for IPspace, Volume Type, and Default Language are already populated for you by the
wizard, as is the DNS configuration. When ready, click Submit & Continue.
The wizard creates the SVM and then advances to the protocols window. The protocols window can be
rather large, so this guide will present it in sections.
1. The Subnet setting defaults to Demo since this is the only subnet definition that exists in your lab. Click
the Browse button next to the Port textbox.
The Select Network Port or Adapter window closes and focus returns to the protocols portion of the
Storage Virtual Machine (SVM) Setup wizard.
1. The Port textbox should have been populated with the cluster and port value you just selected.
2. Populate the CIFS Server Configuration textboxes with the following values:
Password: Netapp1!
3. The optional Provision a volume for CIFS storage textboxes offer a quick way to provision a simple
   volume and CIFS share at SVM creation time. However, that share would not be multi-protocol, and in
   most cases you will be creating shares for an existing SVM, so this lab guide instead demonstrates the
   more full-featured procedure for creating a volume and share in the following sections.
Scroll down in the window to see the expanded NIS Configuration section.
1. Clear the pre-populated values from the Domain Name and IP Address fields. In an NFS
   environment where you are running NIS, you would want to configure these values, but this lab
   environment is not utilizing NIS, and in this case leaving these fields populated would create a name
   resolution problem later in the lab.
2. As was the case with CIFS, the Provision a volume for NFS storage textboxes offer a quick way to
   provision a volume and create an NFS export for that volume. Once again, the volume would not be
   inherently multi-protocol, and would in fact be a completely separate volume from the CIFS share volume
   that you could have selected to create in the CIFS section. This lab will utilize the more full-featured
   volume creation process that you will see in later sections.
3. Click the Submit & Continue button to advance the wizard to the next screen.
The SVM Administration section of the Storage Virtual Machine (SVM) Setup wizard opens. This window
allows you to set up an administration account that is scoped to just this SVM so that you can delegate
administrative tasks for this SVM to an SVM-specific administrator without giving that administrator
cluster-wide privileges. As the comments in this wizard window indicate, this account must also exist for use with
SnapDrive. Although you will not be using SnapDrive in this lab, it is usually a good idea to create this
account, and you will do so here.
1. The User Name is pre-populated with the value vsadmin. Set the Password and Confirm Password
textboxes to netapp123. When finished, click the Submit & Continue button.
The window closes, and focus returns to the System Manager window, which now displays a summary
page for your newly created svm1 SVM.
1. Notice that in the main pane of the window the CIFS protocol is listed with a green background. This
indicates that a CIFS server is running for this SVM.
2. Notice that the NFS protocol is listed with a yellow background, which indicates that there is not a
running NFS server for this SVM. If you had configured the NIS server settings during the SVM Setup
wizard then the wizard would have started the NFS server, but since this lab is not using NIS you will
manually turn on NFS in a later step.
The New Storage Virtual Machine Setup Wizard only provisions a single LIF when creating a new SVM.
NetApp best practice is to configure a LIF on both nodes in an HA pair so that a client can access the
SVM's shares through either node. To comply with that best practice you will now create a second LIF
hosted on the other node in the cluster.
System Manager for clustered Data ONTAP 8.2 and earlier presented LIF management under the Storage
Virtual Machines tab, only offering visibility to LIFs for a single SVM at a time. With 8.3, that functionality
has moved to the Cluster tab, where you now have a single view for managing all the LIFs in your cluster.
1. Select the Cluster tab in the left navigation pane of System Manager.
2. Navigate to cluster1->Configuration->Network.
3. Select the Network Interfaces tab in the main Network pane.
4. Select the only LIF listed for the svm1 SVM. Notice that this LIF is named svm1_cifs_nfs_lif1; you will
be following that same naming convention for the new LIF.
5. Click on the Create button to launch the Network Interface Create Wizard.
Name: svm1_cifs_nfs_lif2
SVM: svm1
Subnet: Demo
Also expand the Port Selection listbox and select the entry for cluster1-02 port e0c.
The Create Network Interface window closes, and focus returns to the Network pane in System Manager.
1. Notice that a new entry for the svm1_cifs_nfs_lif2 LIF is now present under the Network Interfaces
   tab. Select this entry and review the LIF's properties.
Lastly, you need to configure DNS delegation for the SVM so that Linux and Windows clients can
intelligently utilize all of svm1's configured NAS LIFs. To achieve this objective, the DNS server must
delegate to the cluster the responsibility for the DNS zone corresponding to the SVM's hostname, which in
this case will be svm1.demo.netapp.com. The lab's DNS server is already configured to delegate this
responsibility, but you must also configure the SVM to accept it. System Manager does not currently
include the capability to configure DNS delegation, so you will need to use the CLI for this purpose.
1. Open a PuTTY connection to cluster1 following the instructions in the Accessing the Command Line
section at the beginning of this guide. Log in using the username admin and the password Netapp1!,
then enter the following commands.
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif1 -dns-zone
svm1.demo.netapp.com
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif2 -dns-zone
svm1.demo.netapp.com
cluster1::> network interface show -vserver svm1 -fields dns-zone,address
vserver lif                address       dns-zone
------- ------------------ ------------- --------------------
svm1    svm1_cifs_nfs_lif1 192.168.0.131 svm1.demo.netapp.com
svm1    svm1_cifs_nfs_lif2 192.168.0.132 svm1.demo.netapp.com
2 entries were displayed.
cluster1::>
2. Validate that delegation is working correctly by opening PowerShell on jumphost and using the
   nslookup command as shown in the following CLI output. If the nslookup command returns IP
   addresses for the hostname, then delegation is working correctly. If the nslookup returns a
   Non-existent domain error then delegation is not working correctly and you will need to review the
   Data ONTAP commands you just entered as they most likely contain an error. Also notice from the
   following output that different executions of the nslookup command return different addresses,
   demonstrating that DNS load balancing is working correctly. You may need to run the nslookup
   command more than two times before you see it report different addresses for the hostname.
Windows PowerShell
Copyright (C) 2013 Microsoft Corporation. All rights reserved.
PS C:\Users\Administrator.DEMO> nslookup svm1.demo.netapp.com
Server:  dc1.demo.netapp.com
Address:  192.168.0.253

Non-authoritative answer:
Name:    svm1.demo.netapp.com
Address:  192.168.0.132

PS C:\Users\Administrator.DEMO> nslookup svm1.demo.netapp.com
Server:  dc1.demo.netapp.com
Address:  192.168.0.253

Non-authoritative answer:
Name:    svm1.demo.netapp.com
Address:  192.168.0.131

PS C:\Users\Administrator.DEMO>
If you do not already have a PuTTY connection open to cluster1 then open one now following the directions
in the Accessing the Command Line section at the beginning of this lab guide. The username is admin
and the password is Netapp1!.
1. Create the SVM named svm1. Notice that the clustered Data ONTAP command line syntax still refers
to storage virtual machines as vservers.
cluster1::> vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1_cluster1_01
-language C.UTF-8 -rootvolume-security ntfs -snapshot-policy default
[Job 259] Job is queued: Create svm1.
[Job 259]
[Job 259] Job succeeded:
Vserver creation completed
cluster1::>
cluster1::> vserver show
                               Admin      Operational Root
Vserver     Type    Subtype    State      State       Volume     Aggregate
----------- ------- ---------- ---------- ----------- ---------- ----------
cluster1    admin   -          -          -           -          -
cluster1-01 node    -          -          -           -          -
cluster1-02 node    -          -          -           -          -
svm1        data    default    running    running     svm1_root  aggr1_
                                                                 cluster1_
                                                                 01
cluster1::>
              Current       Current Is
              Node          Port    Home
              ------------- ------- ----
              cluster1-01   e0a     true
              cluster1-02   e0a     true
              cluster1-01   e0c     true
              cluster1-02   e0c     true
              cluster1-01   e0c     true
cluster1::>
4. Notice that there are not yet any LIFs defined for the SVM svm1. Create the svm1_cifs_nfs_lif1 data
LIF for svm1:
cluster1::> network interface create -vserver svm1 -lif svm1_cifs_nfs_lif1 -role data
-data-protocol nfs,cifs -home-node cluster1-01 -home-port e0c -subnet-name Demo
-firewall-policy mgmt
cluster1::>
cluster1::> network interface show -vserver svm1
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
svm1
            svm1_cifs_nfs_lif1
                       up/up      192.168.0.131/24   cluster1-01   e0c     true
            svm1_cifs_nfs_lif2
                       up/up      192.168.0.132/24   cluster1-02   e0c     true
2 entries were displayed.
cluster1::>
7. Configure the DNS domain and nameservers for the svm1 SVM:
cluster1::> vserver services dns show
                                                      Name
Vserver         State     Domains                     Servers
--------------- --------- --------------------------- ---------------
cluster1        enabled   demo.netapp.com             192.168.0.253
cluster1::> vserver services dns create -vserver svm1 -name-servers 192.168.0.253 -domains
demo.netapp.com
cluster1::> vserver services dns show
                                                      Name
Vserver         State     Domains                     Servers
--------------- --------- --------------------------- ---------------
cluster1        enabled   demo.netapp.com             192.168.0.253
svm1            enabled   demo.netapp.com             192.168.0.253
2 entries were displayed.
cluster1::>
8. Configure the LIFs to accept DNS delegation responsibility for the svm1.demo.netapp.com zone so that
you can advertise addresses for both of the NAS data LIFs that belong to svm1. You could have done
this as part of the network interface create commands but we opted to do it separately here to show
you how you can modify an existing LIF.
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif1 -dns-zone
svm1.demo.netapp.com
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif2 -dns-zone
svm1.demo.netapp.com
cluster1::> network interface show -vserver svm1 -fields dns-zone,address
vserver lif                address       dns-zone
------- ------------------ ------------- --------------------
svm1    svm1_cifs_nfs_lif1 192.168.0.131 svm1.demo.netapp.com
svm1    svm1_cifs_nfs_lif2 192.168.0.132 svm1.demo.netapp.com
2 entries were displayed.
cluster1::>
9. Verify that DNS delegation is working correctly by opening a PuTTY connection to the Linux host rhel1
   (username root and password Netapp1!) and executing the following commands. If the delegation is
   working correctly then you should see IP addresses returned for the host svm1.demo.netapp.com, and
   if you run the command several times you will eventually see that the responses alternate between the
   SVM's two LIF addresses.
[root@rhel1 ~]# nslookup svm1.demo.netapp.com
Server:         192.168.0.253
Address:        192.168.0.253#53

Non-authoritative answer:
Name:   svm1.demo.netapp.com
Address: 192.168.0.132

[root@rhel1 ~]# nslookup svm1.demo.netapp.com
Server:         192.168.0.253
Address:        192.168.0.253#53

Non-authoritative answer:
Name:   svm1.demo.netapp.com
Address: 192.168.0.131

[root@rhel1 ~]#
10. This completes the planned LIF configuration changes for svm1, so now display a detailed
configuration report for the LIF svm1_cifs_nfs_lif1:
cluster1::> network interface show -lif svm1_cifs_nfs_lif1 -instance
                  Vserver Name: svm1
        Logical Interface Name: svm1_cifs_nfs_lif1
                          Role: data
                 Data Protocol: nfs, cifs
                     Home Node: cluster1-01
                     Home Port: e0c
                  Current Node: cluster1-01
                  Current Port: e0c
            Operational Status: up
               Extended Status: -
                       Is Home: true
               Network Address: 192.168.0.131
                       Netmask: 255.255.255.0
           Bits in the Netmask: 24
               IPv4 Link Local: -
                   Subnet Name: Demo
         Administrative Status: up
               Failover Policy: system-defined
               Firewall Policy: mgmt
                   Auto Revert: false
 Fully Qualified DNS Zone Name: svm1.demo.netapp.com
       DNS Query Listen Enable: true
           Failover Group Name: Default
                      FCP WWPN: -
                Address family: ipv4
                       Comment: -
                IPspace of LIF: Default
cluster1::>
11. When you issued the vserver create command to create svm1 you included an option to enable CIFS
    for it, but that command did not actually create a CIFS server for the SVM. Now it is time to create that
    CIFS server.
cluster1::> vserver cifs show
This table is currently empty.
cluster1::> vserver cifs create -vserver svm1 -cifs-server svm1 -domain demo.netapp.com
In order to create an Active Directory machine account for the CIFS server, you
must supply the name and password of a Windows account with sufficient
privileges to add computers to the "CN=Computers" container within the
"DEMO.NETAPP.COM" domain.
Enter the user name: Administrator
Enter the password:
cluster1::> vserver cifs show
            Server          Status    Domain/Workgroup Authentication
Vserver     Name            Admin     Name             Style
----------- --------------- --------- ---------------- --------------
svm1        SVM1            up        DEMO             domain
cluster1::>
When you create an SVM, Data ONTAP automatically creates a root volume to hold that SVM's
namespace. An SVM always has a root volume, whether or not it is configured to support NAS protocols.
Before you configure NFS and CIFS for your newly created SVM, take a quick look at the SVM's root
volume:
1. Select the Storage Virtual Machines tab.
2. Navigate to cluster1->svm1->Storage->Volumes.
3. Note the existence of the svm1_root volume, which hosts the namespace for the svm1 SVM. The root
   volume is not large; only 20 MB in this example. Root volumes are small because they are only intended
   to house the junctions that organize the SVM's volumes; all of the files hosted on the SVM should reside
   inside the volumes that are junctioned into the namespace rather than directly in the SVM's root
   volume.
Confirm that CIFS and NFS are running for your SVM using System Manager. Check CIFS first.
1. Under the Storage Virtual Machines tab, navigate to cluster1->svm1->Configuration->Protocols->CIFS.
2. In the CIFS pane, select the Configuration tab.
3. Note that the Service Status field is listed as Started, which indicates that there is a running CIFS
server for this SVM. If CIFS was not already running for this SVM, then you could configure and start it
using the Setup button found under the Configuration tab.
The Server Status field in the NFS pane switches from Not Configured to Enabled.
At this point, you have confirmed that your SVM has a running CIFS server and a running NFS server.
However, you have not yet configured those two servers to actually serve any data, and the first step in
that process is to configure the SVM's default NFS export policy.
When you create an SVM with NFS, clustered Data ONTAP automatically creates a default NFS export
policy for the SVM that contains an empty list of access rules. Without any access rules, that policy will not
allow clients to access any exports, so you need to add a rule to the default policy so that the volumes you
will create on this SVM later in this lab will be automatically accessible to NFS clients. If any of this seems
a bit confusing, do not worry; the concept should become clearer as you work through this section and the
next one.
1. In System Manager, select the Storage Virtual Machines tab and then go to cluster1->svm1->Policies->Export Policies.
2. In the Export Polices window, select the default policy.
3. Click the Add button in the bottom portion of the Export Policies pane.
The Create Export Rule window opens. Using this dialog you can create any number of rules that provide
fine grained access control for clients and specify their application order. For this lab, you are going to
create a single rule that grants unfettered access to any host on the lab's private network.
1. Set the fields in the window to the following values:
Rule Index: 1
The default values in the other fields in the window are acceptable.
When you finish entering these values, click OK.
The Create Export Policy window closes and focus returns to the Export Policies pane in System Manager.
The new access rule you created now shows up in the bottom portion of the pane. With this updated
default export policy in place, NFS clients will now be able to mount the root of the svm1 SVM's
namespace, and use that mount to access any volumes that you junction into the namespace.
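The rule evaluation just described can be sketched briefly. This is only an illustration of the concept, not ONTAP's export engine: rules are checked in rule-index order, the first rule matching the client wins, and with no matching rule (as in the empty default policy) the client gets no access. The rule contents are illustrative.

```python
import ipaddress

def matching_rule(client_ip, rules):
    # Rules are evaluated in index order; the first client match wins.
    for rule in sorted(rules, key=lambda r: r["index"]):
        if ipaddress.ip_address(client_ip) in ipaddress.ip_network(rule["clients"]):
            return rule
    return None   # no rule matched: the client gets no access

# The single rule created in this section: full access for the lab network.
rules = [{"index": 1, "clients": "192.168.0.0/24", "rorule": "any", "rwrule": "any"}]

print(matching_rule("192.168.0.61", rules) is not None)   # lab client: allowed
print(matching_rule("10.0.0.5", rules))                   # outside client: None
```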
Now create a CIFS share for the svm1 SVM. You are going to create a single share named nsroot at the
root of the SVM's namespace.
1. Select the Storage Virtual Machines tab and navigate to cluster1->svm1->Storage->Shares.
2. In the Shares pane, select Create Share.
Folder to Share: / (If you alternately opt to use the Browse button, make sure you select the root
folder).
The Create Share window closes, and focus returns to Shares pane in System Manager. The new nsroot
share now shows up in the Shares pane, but you are not yet finished.
1. Select nsroot from the list of shares.
2. Click the Edit button to edit the share's settings.
There are other settings to check in this window, so do not close it yet.
1. Select the Options tab at the top of the window and make sure that the Enable as read/write, Enable
Oplocks, Browsable, and Notify Change checkboxes are all checked. All other checkboxes should be
cleared.
2. If you had to change any of the settings listed in Step 1 then the Save and Close button will become
active, and you should click it. Otherwise, click the Cancel button.
The Edit nsroot Settings window closes and focus returns to the Shares pane in System Manager. Setup of
the \\svm1\nsroot CIFS share is now complete.
For this lab you have created just one share at the root of your namespace, which allows users to access
any volume mounted in the namespace through that share. The advantage of this approach is that it
reduces the number of mapped drives that you have to manage on your clients; any changes you make to
the namespace become instantly visible and accessible to your clients. If you prefer to use multiple shares
then clustered Data ONTAP allows you to create additional shares rooted at any directory level within the
namespace.
Since you have configured your SVM to support both NFS and CIFS, you next need to set up username
mapping so that the UNIX root account and the DEMO\Administrator account will have synonymous
access to each other's files. Setting up such a mapping may not be desirable in all environments, but it will
simplify data sharing for this lab since these are the two primary accounts you are using in this lab.
1. In System Manager, open the Storage Virtual Machines tab and navigate to cluster1->svm1->Configuration->Users and Groups->Name Mapping.
2. In the Name Mapping pane, click Add.
Position: 1
Pattern: demo\\administrator (the two backslashes listed here are not a typo, and administrator should
not be capitalized)
Replacement: root
When you have finished populating these fields, click Add.
The window closes and focus returns to the Name Mapping pane in System Manager. Click the Add button
again to create another mapping rule.
Position: 1
Pattern: root
Replacement: demo\\administrator (the two backslashes listed here are not a typo, and
administrator should not be capitalized)
When you have finished populating these fields, click Add.
The second Add Name Mapping window closes, and focus again returns to the Name Mapping pane in
System Manager. You should now see two mappings listed in this pane that together make the root and
DEMO\Administrator accounts equivalent to each other for the purpose of file access within the SVM.
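The Pattern field is a regular expression, which is why the backslash in demo\administrator has to be escaped as demo\\administrator. The two rules can be illustrated with Python's re module (a loose approximation of ONTAP's matching engine, for intuition only):

```python
import re

# (direction, pattern, replacement) triples mirroring the two rules above.
mappings = [
    ("win-unix", r"demo\\administrator", "root"),
    ("unix-win", r"root", r"demo\\administrator"),
]

def map_name(direction, name):
    """Return the replacement from the first rule in the given direction
    whose pattern matches the whole name; otherwise the name unchanged."""
    for d, pattern, replacement in mappings:
        if d == direction and re.fullmatch(pattern, name, re.IGNORECASE):
            return re.sub(pattern, replacement, name, flags=re.IGNORECASE)
    return name

print(map_name("win-unix", r"demo\administrator"))  # -> root
print(map_name("unix-win", "root"))                 # -> demo\administrator
```

Note that two separate rules are needed because each mapping rule only applies in one direction (win-unix or unix-win).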
Domain/Workgroup Name    Authentication Style
---------------------    --------------------
DEMO                     domain

cluster1::>
2. Verify that NFS is running for the SVM svm1. It is not initially, so turn it on.
cluster1::> vserver nfs status -vserver svm1
The NFS server is not running on Vserver "svm1".
cluster1::> vserver nfs create -vserver svm1 -v3 enabled -access true
cluster1::> vserver nfs status -vserver svm1
The NFS server is running on Vserver "svm1".
cluster1::> vserver nfs show

                             Vserver: svm1
                      General Access: true
                                  v3: enabled
                                v4.0: disabled
                                 4.1: disabled
                                 UDP: enabled
                                 TCP: enabled
                Default Windows User: -
               Default Windows Group: -

cluster1::>
3. Create an export policy for the SVM svm1 and configure the policy's rules.
cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default

cluster1::> vserver export-policy rule show
This table is currently empty.

cluster1::> vserver export-policy rule create -vserver svm1 -policyname default
-clientmatch 0.0.0.0/0 -rorule any -rwrule any -superuser any -anon 65534 -ruleindex 1

cluster1::> vserver export-policy rule show
             Policy          Rule   Access   Client                RO
Vserver      Name            Index  Protocol Match                 Rule
------------ --------------- ------ -------- --------------------- ---------
svm1         default         1      any      0.0.0.0/0             any

cluster1::> vserver export-policy rule show -vserver svm1 -policyname default -instance
                                               Vserver: svm1
                                           Policy Name: default
                                            Rule Index: 1
                                       Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0
                                        RO Access Rule: any
                                        RW Access Rule: any
           User ID To Which Anonymous Users Are Mapped: 65534
                              Superuser Security Types: any
                          Honor SetUID Bits in SETATTR: true
                             Allow Creation of Devices: true

cluster1::>
4. Create a share at the root of the namespace for the SVM svm1:
cluster1::> vserver cifs share show
Vserver        Share         Path              Properties   Comment  ACL
-------------- ------------- ----------------- ------------ -------- -----------
svm1           admin$        /                 browsable    -        -
svm1           c$            /                 oplocks      -        BUILTIN\Administrators / Full Control
                                               browsable
                                               changenotify
svm1           ipc$          /                 browsable    -        -
3 entries were displayed.

cluster1::> vserver cifs share create -vserver svm1 -share-name nsroot -path /
cluster1::> vserver cifs share show
Vserver        Share         Path              Properties   Comment  ACL
-------------- ------------- ----------------- ------------ -------- -----------
svm1           admin$        /                 browsable    -        -
svm1           c$            /                 oplocks      -        BUILTIN\Administrators / Full Control
                                               browsable
                                               changenotify
svm1           ipc$          /                 browsable    -        -
svm1           nsroot        /                 oplocks      -        -
                                               browsable
                                               changenotify
4 entries were displayed.

cluster1::>
5. Set up CIFS <-> NFS user name mapping for the SVM svm1:
cluster1::> vserver name-mapping show
This table is currently empty.
cluster1::> vserver name-mapping create -vserver svm1 -direction win-unix -position 1 -pattern
demo\\administrator -replacement root
cluster1::> vserver name-mapping create -vserver svm1 -direction unix-win -position 1 -pattern
root -replacement demo\\administrator
cluster1::> vserver name-mapping show
Vserver        Direction Position
-------------- --------- --------
svm1           win-unix  1
                            Pattern: demo\\administrator
                        Replacement: root
svm1           unix-win  1
                            Pattern: root
                        Replacement: demo\\administrator
2 entries were displayed.

cluster1::>
Name: engineering
Aggregate: aggr1_cluster1_01
Total Size: 10 GB
The Create Volume window closes, and focus returns to the Volumes pane in System Manager. The newly
created engineering volume should now appear in the Volumes list. Notice that the volume is 10 GB in
size, and is thin provisioned.
System Manager has also automatically mapped the engineering volume into the SVM's NAS namespace.
Since you have already configured the access rules for the default policy, the volume is instantly accessible
to NFS clients. As you can see in the preceding screenshot, the engineering volume was junctioned as
/engineering, meaning that any client that had mapped a share to \\svm1\nsroot or NFS mounted svm1:/
would now instantly see the engineering directory in the share, and in the NFS mount.
Name: eng_users
Aggregate: aggr1_cluster1_01
Total Size: 10 GB
The Create Volume window closes, and focus returns again to the Volumes pane in System Manager. The
newly created eng_users volume should now appear in the Volumes list.
1. Select the eng_users volume in the volumes list and examine the details for this volume in the General
box at the bottom of the pane. Specifically, note that this volume has a Junction Path value of
/eng_users.
You do have more options for junctioning than just placing your volumes into the root of your namespace.
In the case of the eng_users volume, you will re-junction that volume underneath the engineering volume
and shorten the junction name to take advantage of an already intuitive context.
1. Navigate to Storage Virtual Machines->cluster1->svm1->Storage->Namespace.
2. In the Namespace pane, select the eng_users junction point.
3. Click Unmount.
The Unmount Volume window opens asking for confirmation that you really want to unmount the volume
from the namespace.
1. Click Unmount.
The Unmount Volume window closes, and focus returns to the Namespace pane in System Manager. The
eng_users volume no longer appears in the junction list for the namespace, and since it is no longer
junctioned in the namespace, clients can no longer access it or even see it. Now you will
junction the volume in at another location in the namespace.
1. Click Mount.
The Browse For Junction Path window closes, and focus returns to the Mount Volume window.
1. The fields in the Mount Volume window should now all contain values as follows:
The Mount Volume window closes, and focus returns to the Namespace pane in System Manager.
The eng_users volume is now mounted in the namespace as /engineering/users.
You can also create a junction within user created directories. For example, from a CIFS or NFS client you
could create a folder named projects inside the engineering volume and then create a widgets volume that
junctions in under the projects folder; in that scenario the namespace path to the widgets volume contents
would be /engineering/projects/widgets.
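Junctions act like transparent mount points, so resolving a client path comes down to finding the longest junction path that prefixes the requested path. A simplified Python model of this lookup, using the junction table built in this section (the resolver logic is illustrative, not ONTAP's actual implementation):

```python
# Junction table for svm1 as configured in this section.
junctions = {
    "/": "svm1_root",
    "/engineering": "engineering",
    "/engineering/users": "eng_users",
}

def resolve_volume(path):
    """Return the volume that serves `path`: the one junctioned at the
    longest junction path that is a prefix of `path`."""
    best = "/"
    for jpath in junctions:
        if path == jpath or path.startswith(jpath.rstrip("/") + "/"):
            if len(jpath) > len(best):
                best = jpath
    return junctions[best]

print(resolve_volume("/engineering/users/bob"))  # -> eng_users
print(resolve_volume("/engineering/projects"))   # -> engineering
print(resolve_volume("/cifs.txt"))               # -> svm1_root
```

This is why the widgets example above works: adding a "/engineering/projects/widgets" junction would simply insert another, longer prefix into the table.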
Now you will create a couple of qtrees within the eng_users volume, one for each of the users bob and
susan.
Name: bob
Volume: eng_users
The Quota tab is where you define the space usage limits you want to apply to the qtree. You will not
actually be implementing any quota limits in this lab.
1. Click the Create button.
The Create Qtree window closes, and focus returns to the Qtrees pane in System Manager. Now create
another qtree, for the user account susan.
1. Click the Create button.
Select the Details tab and then populate the fields as follows.
Name: susan
Volume: eng_users
Click Create.
The Create Qtree window closes, and focus returns to the Qtrees pane in System Manager. At this point
you should see both the bob and susan qtrees in System Manager.
cluster1::> volume create -vserver svm1 -volume engineering -aggregate aggr1_cluster1_01 -size
10GB -percent-snapshot-space 5 -space-guarantee none -policy default -junction-path /engineering
[Job 267] Job is queued: Create engineering.
[Job 267] Job succeeded: Successful
cluster1::>
Show the volumes for the SVM svm1 and list its junction points:
cluster1::> volume show -vserver svm1
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm1      engineering  aggr1_cluster1_01
                                    online     RW         10GB     9.50GB    5%
svm1      svm1_root    aggr1_cluster1_01
                                    online     RW         20MB    18.86MB    5%
2 entries were displayed.

cluster1::> volume show -vserver svm1 -junction
                        Junction                 Junction
Vserver   Volume        Active   Junction Path   Path Source
--------- ------------- -------- --------------- -----------
svm1      engineering   true     /engineering    RW_volume
svm1      svm1_root     -        /               -
2 entries were displayed.

cluster1::> volume show -vserver svm1 -junction
                        Junction                 Junction
Vserver   Volume        Active   Junction Path   Path Source
--------- ------------- -------- --------------- -----------
svm1      eng_users     true     /eng_users      RW_volume
svm1      engineering   true     /engineering    RW_volume
svm1      svm1_root     -        /               -
3 entries were displayed.

cluster1::>
Display detailed information about the volume engineering. Notice here that the volume is reporting as thin
provisioned (Space Guarantee Style is set to none) and that the Export Policy is set to default.
View how much disk space this volume is actually consuming in its containing aggregate; the Total
Footprint value represents the volume's total consumption. The value here is so small because this volume
is thin provisioned and you have not yet added any data to it. If you had thick provisioned the volume then
the footprint here would have been 10 GB, the full size of the volume.
cluster1::> volume show-footprint -vserver svm1 -volume engineering

      Vserver : svm1
      Volume  : engineering

      Feature                               Used       Used%
      --------------------------------      -------    -----
      Volume Data Footprint                 152KB         0%
      Volume Guarantee                      0B            0%
      Flexible Volume Metadata              13.38MB       0%
      Delayed Frees                         352KB         0%

      Total Footprint                       13.88MB       0%

cluster1::>
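The thin-versus-thick difference is simple arithmetic: a thick-provisioned volume charges its full size against the aggregate immediately, while a thin-provisioned volume charges only used data plus metadata. A simplified model (the constants roughly mirror the values reported for the empty 10 GB engineering volume; real footprints have more components):

```python
def footprint_bytes(volume_size, data_used, metadata, thin=True):
    """Approximate aggregate footprint: a thick-provisioned volume
    reserves its full size up front; a thin volume consumes only
    what is actually used, plus volume metadata."""
    if thin:
        return data_used + metadata
    return volume_size + metadata

GB, MB, KB = 1024**3, 1024**2, 1024

thin = footprint_bytes(10 * GB, 152 * KB, 13 * MB, thin=True)
thick = footprint_bytes(10 * GB, 152 * KB, 13 * MB, thin=False)
print(round(thin / MB, 1), "MB vs", thick // GB, "GB")  # -> 13.1 MB vs 10 GB
```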
Create qtrees in the eng_users volume for the users bob and susan, then generate a list of all the qtrees
that belong to svm1, and finally produce a detailed report of the configuration for the qtree bob.
cluster1::> volume qtree create -vserver svm1 -volume eng_users -qtree bob
cluster1::> volume qtree create -vserver svm1 -volume eng_users -qtree susan
cluster1::> volume qtree show -vserver svm1
Vserver    Volume        Qtree        Style        Oplocks   Status
---------- ------------- ------------ ------------ --------- --------
svm1       eng_users     ""           ntfs         enable    normal
svm1       eng_users     bob          ntfs         enable    normal
svm1       eng_users     susan        ntfs         enable    normal
svm1       engineering   ""           ntfs         enable    normal
svm1       svm1_root     ""           ntfs         enable    normal
5 entries were displayed.

cluster1::> volume qtree show -vserver svm1 -volume eng_users -qtree bob
                      Vserver Name: svm1
                       Volume Name: eng_users
                        Qtree Name: bob
                        Qtree Path: /vol/eng_users/bob
                    Security Style: ntfs
                       Oplock Mode: enable
                  Unix Permissions: -
                          Qtree Id: 1
                      Qtree Status: normal
                     Export Policy: default
        Is Export Policy Inherited: true

cluster1::>
This part of the lab demonstrates connecting the Windows client jumphost to the CIFS share \\svm1\nsroot
using the Windows GUI.
1. On the Windows host jumphost, open Windows Explorer by clicking the folder icon on the taskbar.
Drive: S:
Folder: \\svm1\nsroot
Note:
If you encounter problems connecting to the share, then most likely you did not properly clear the NIS
Configuration fields when you created the SVM. (This scenario most likely only occurred if you used System
Manager to create the SVM; the CLI method is not as susceptible.) If those NIS Configuration fields remained
populated then the SVM tries to use NIS for user and hostname resolution, and since this lab does not
include a NIS server that resolution attempt will fail and you will not be able to mount the share. To correct
this problem go to System Manager and navigate to Storage Virtual Machines->cluster1->svm1->Configuration->Services->NIS. If you see an NIS configuration listed in the NIS pane then select it and use
the Delete button to delete it, then try to connect to the share again.
1. File Explorer displays the contents of the engineering folder. Create a file in this folder to confirm that
you can write to it.
Notice that the eng_users volume that you junctioned in as users is visible inside this folder.
2. Right-click in the empty space in the right pane of File Explorer.
3. In the context menu, select New->Text Document, and name the resulting file cifs.txt.
1. Double-click the cifs.txt file you just created to open it with Notepad.
2. In Notepad, enter some text (make sure you put a carriage return at the end of the line, or else when
you later view the contents of this file on Linux the command shell prompt will appear on the same line
as the file contents).
3. Use the File->Save menu in Notepad to save the file's updated contents to the share. If write access is
working properly you will not receive an error message.
This part of the lab demonstrates connecting a Linux client to the NFS volume svm1:/ using the Linux
command line. Follow the instructions in the Accessing the Command Line section at the beginning of this
lab guide to open PuTTY and connect to the system rhel1.
Log in as the user root with the password Netapp1!, then issue the following command to see that you
currently have no NFS volumes mounted on this Linux host.
[root@rhel1 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root  11877388 4962504   6311544  45% /
tmpfs                           444612      76    444536   1% /dev/shm
/dev/sda1                       495844   40084    430160   9% /boot
[root@rhel1 ~]#
Create a mountpoint and mount the NFS export corresponding to the root of your SVM's namespace on
that mountpoint. When you run the df command again afterward you'll see that the NFS export svm1:/ is
mounted on your Linux host as /svm1.
Navigate into the /svm1 directory and notice that you can see the engineering volume that you previously
junctioned into the SVM's namespace. Navigate into engineering and verify that you can access and create
files.
Note:
The output shown here assumes that you have already performed the Windows client connection steps found
earlier in this section. When you cat the cifs.txt file, if the shell prompt winds up on the same line as the file
output, that indicates that when you created the file on Windows you forgot to include a newline at the end of
the file.
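The missing-newline symptom is easy to reason about: cat writes the file's bytes verbatim, and the shell prints its prompt immediately afterward, so without a final newline the prompt lands on the same line as the text. A tiny Python illustration:

```python
def terminal_view(file_bytes: bytes, prompt: str = "[root@rhel1 ~]# ") -> str:
    """What the terminal shows after `cat file`: the file's bytes,
    then the shell prompt, with nothing inserted in between."""
    return file_bytes.decode() + prompt

good = terminal_view(b"hello from jumphost\n")  # prompt starts on its own line
bad = terminal_view(b"hello from jumphost")     # prompt glued onto the text
print(good)
print(bad)
```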
Begin by creating a new export policy and rules that only permit NFS access from the Linux host rhel1.
1. In System Manager, select the Storage Virtual Machines tab and then go to cluster1->svm1->Policies->Export Policies.
2. Click the Create button.
1. Set the Policy Name to rhel1-only and click the Add button.
The Create Export Rule window closes, and focus returns to the Create Export Policy window.
1. The new access rule is now present in the rules window, and the rule's Access Protocols entry
indicates that there are no protocol restrictions. If you had selected all the available protocol
checkboxes when creating this rule then each of those selected protocols would have been explicitly
listed here. Click Create.
The Create Export Policy window closes, and focus returns to the Export Policies pane in System
Manager.
Now you need to apply this new export policy to the qtree. System Manager does not support this
capability, so you will have to use the clustered Data ONTAP command line. Open a PuTTY connection to
cluster1, log in using the username admin and the password Netapp1!, then enter the following
commands.
Note:
The following CLI commands are part of this lab section's graphical workflow. If you are looking for the CLI
workflow then keep paging forward until you see the orange bar denoting the start of those instructions.
1. Produce a list of svm1's export policies and then a list of its qtrees:
cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default
svm1             rhel1-only
2 entries were displayed.

cluster1::> volume qtree show
Vserver    Volume        Qtree        Style        Oplocks   Status
---------- ------------- ------------ ------------ --------- --------
svm1       eng_users     ""           ntfs         enable    normal
svm1       eng_users     bob          ntfs         enable    normal
svm1       eng_users     susan        ntfs         enable    normal
svm1       engineering   ""           ntfs         enable    normal
svm1       svm1_root     ""           ntfs         enable    normal
5 entries were displayed.

cluster1::>
2. Assign the rhel1-only export policy to the qtree susan:
cluster1::> volume qtree modify -vserver svm1 -volume eng_users -qtree susan
-export-policy rhel1-only
cluster1::>
3. Display the configuration of the susan qtree. Notice the Export Policy field shows that this qtree is using
the rhel1-only export policy.
cluster1::> volume qtree show -vserver svm1 -volume eng_users -qtree susan
                      Vserver Name: svm1
                       Volume Name: eng_users
                        Qtree Name: susan
                        Qtree Path: /vol/eng_users/susan
                    Security Style: ntfs
                       Oplock Mode: enable
                  Unix Permissions: -
                          Qtree Id: 2
                      Qtree Status: normal
                     Export Policy: rhel1-only
        Is Export Policy Inherited: false

cluster1::>
4. Produce a report showing the export policy assignments for all the volumes and qtrees that belong to
svm1.
cluster1::> volume qtree show -vserver svm1 -fields export-policy
vserver volume      qtree export-policy
------- ----------- ----- -------------
svm1    eng_users   ""    default
svm1    eng_users   bob   default
svm1    eng_users   susan rhel1-only
svm1    engineering ""    default
svm1    svm1_root   ""    default
5 entries were displayed.

cluster1::>
103 Basic Concepts for Clustered Data ONTAP 8.3
2015 NetApp, Inc. All rights reserved.
5. Now you need to validate that the more restrictive export policy that you've applied to the qtree susan is
working as expected. If you still have an active PuTTY session open to the Linux host rhel1 then
bring that window up now; otherwise open a new PuTTY session to that host (username = root,
password = Netapp1!). Run the following commands to verify that you can still access the susan qtree
from rhel1.
[root@rhel1 ~]# cd /svm1/engineering/users
[root@rhel1 users]# ls
bob susan
[root@rhel1 users]# cd susan
[root@rhel1 susan]# echo "hello from rhel1" > rhel1.txt
[root@rhel1 susan]# cat rhel1.txt
hello from rhel1
[root@rhel1 susan]#
6. Now open a PuTTY connection to the Linux host rhel2 (again, username = root and password =
Netapp1!). This host should be able to access all the volumes and qtrees in the svm1 namespace
*except* susan, which should give a permission denied error because that qtree's associated export
policy only grants access to the host rhel1.
[root@rhel2 ~]# mkdir /svm1
[root@rhel2 ~]# mount svm1:/ /svm1
[root@rhel2 ~]# cd /svm1/engineering/users
[root@rhel2 users]# ls
bob susan
[root@rhel2 users]# cd susan
bash: cd: susan: Permission denied
[root@rhel2 users]# cd bob
[root@rhel2 bob]
1. You need to first create a new export policy and configure it with rules so that only the Linux host rhel1
will be granted access to the associated volume and/or qtree. First create the export policy.
cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default

cluster1::> vserver export-policy create -vserver svm1 -policyname rhel1-only
cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default
svm1             rhel1-only
2 entries were displayed.

cluster1::>
2. Next add a rule to the policy so that only the Linux host rhel1 will be granted access.
cluster1::> vserver export-policy rule show -vserver svm1 -policyname rhel1-only
There are no entries matching your query.
cluster1::> vserver export-policy rule create -vserver svm1 -policyname rhel1-only
-clientmatch 192.168.0.61 -rorule any -rwrule any -superuser any -anon 65534
-ruleindex 1
cluster1::> vserver export-policy rule show
             Policy          Rule   Access   Client                RO
Vserver      Name            Index  Protocol Match                 Rule
------------ --------------- ------ -------- --------------------- ---------
svm1         default         1      any      0.0.0.0/0             any
svm1         rhel1-only      1      any      192.168.0.61          any
2 entries were displayed.

cluster1::> vserver export-policy rule show -vserver svm1 -policyname rhel1-only -instance
                                               Vserver: svm1
                                           Policy Name: rhel1-only
                                            Rule Index: 1
                                       Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 192.168.0.61
                                        RO Access Rule: any
                                        RW Access Rule: any
           User ID To Which Anonymous Users Are Mapped: 65534
                              Superuser Security Types: any
                          Honor SetUID Bits in SETATTR: true
                             Allow Creation of Devices: true

cluster1::>
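The -clientmatch value is what distinguishes the two policies: 0.0.0.0/0 covers every IPv4 client, while 192.168.0.61 covers only rhel1. The standard Python ipaddress module can demonstrate the containment test (rhel2's address of 192.168.0.62 is assumed here purely for illustration):

```python
import ipaddress

def rule_covers(clientmatch: str, client_ip: str) -> bool:
    """True if client_ip falls inside the rule's clientmatch entry
    (a single address or a CIDR network)."""
    return ipaddress.ip_address(client_ip) in ipaddress.ip_network(clientmatch)

print(rule_covers("0.0.0.0/0", "192.168.0.61"))     # True  -> default policy admits rhel1
print(rule_covers("192.168.0.61", "192.168.0.61"))  # True  -> rhel1-only admits rhel1
print(rule_covers("192.168.0.61", "192.168.0.62"))  # False -> any other host is denied
```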
3. Produce a list of svm1's export policies and then a list of its qtrees:
cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default
svm1             rhel1-only
2 entries were displayed.

cluster1::> volume qtree show
Vserver    Volume        Qtree        Style        Oplocks   Status
---------- ------------- ------------ ------------ --------- --------
svm1       eng_users     ""           ntfs         enable    normal
svm1       eng_users     bob          ntfs         enable    normal
svm1       eng_users     susan        ntfs         enable    normal
svm1       engineering   ""           ntfs         enable    normal
svm1       svm1_root     ""           ntfs         enable    normal
5 entries were displayed.

cluster1::>
4. Assign the rhel1-only export policy to the qtree susan:
cluster1::> volume qtree modify -vserver svm1 -volume eng_users -qtree susan
-export-policy rhel1-only
cluster1::>
5. Display the configuration of the susan qtree. Notice the Export Policy field shows that this qtree is using
the rhel1-only export policy.
cluster1::> volume qtree show -vserver svm1 -volume eng_users -qtree susan
                      Vserver Name: svm1
                       Volume Name: eng_users
                        Qtree Name: susan
                        Qtree Path: /vol/eng_users/susan
                    Security Style: ntfs
                       Oplock Mode: enable
                  Unix Permissions: -
                          Qtree Id: 2
                      Qtree Status: normal
                     Export Policy: rhel1-only
        Is Export Policy Inherited: false

cluster1::>
6. Produce a report showing the export policy assignments for all the volumes and qtrees that belong to
svm1.
cluster1::> volume qtree show -vserver svm1 -fields export-policy
vserver volume      qtree export-policy
------- ----------- ----- -------------
svm1    eng_users   ""    default
svm1    eng_users   bob   default
svm1    eng_users   susan rhel1-only
svm1    engineering ""    default
svm1    svm1_root   ""    default
5 entries were displayed.

cluster1::>
7. Now you need to validate that the more restrictive export policy that you've applied to the qtree susan is
working as expected. If you still have an active PuTTY session open to the Linux host rhel1 then
bring that window up now; otherwise open a new PuTTY session to that host (username = root,
password = Netapp1!). Run the following commands to verify that you can still access the susan qtree
from rhel1.
[root@rhel1 ~]# cd /svm1/engineering/users
[root@rhel1 users]# ls
bob susan
[root@rhel1 users]# cd susan
[root@rhel1 susan]# echo "hello from rhel1" > rhel1.txt
[root@rhel1 susan]# cat rhel1.txt
hello from rhel1
[root@rhel1 susan]#
8. Now open a PuTTY connection to the Linux host rhel2 (again, username = root and password =
Netapp1!). This host should be able to access all the volumes and qtrees in the svm1 namespace
*except* susan, which should give a permission denied error because that qtree's associated export
policy only grants access to the host rhel1.
[root@rhel2 ~]# mkdir /svm1
[root@rhel2 ~]# mount svm1:/ /svm1
[root@rhel2 ~]# cd /svm1/engineering/users
[root@rhel2 users]# ls
bob susan
[root@rhel2 users]# cd susan
bash: cd: susan: Permission denied
[root@rhel2 users]# cd bob
[root@rhel2 bob]
NetApp best practice is to configure at least one SAN LIF per storage fabric/network on each node in the
cluster so that all nodes can provide a path to the LUNs. In large clusters where this would result in the
presentation of a large number of paths for a given LUN, NetApp recommends that you use portsets to
limit the LUN to seeing no more than 8 LIFs. Data ONTAP 8.3 introduces a new Selective LUN Mapping
(SLM) feature to provide further assistance in managing fabric paths. SLM limits LUN path access to just
the node that owns the LUN and its HA partner, and Data ONTAP automatically applies SLM to all new
LUN map operations. For further information on Selective LUN Mapping, please see the Hands-On Lab for
SAN Features in clustered Data ONTAP 8.3 lab.
In this lab the cluster contains two nodes connected to a single storage network, but you will still be
configuring a total of 4 SAN LIFs simply because it is common to see real world implementations with 2
paths per node for redundancy.
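The path arithmetic behind these recommendations can be sketched quickly (a hypothetical helper, assuming every node contributes the same number of SAN LIFs):

```python
def paths_per_lun(nodes: int, lifs_per_node: int, slm: bool = True) -> int:
    """Paths a host sees to one LUN: all SAN LIFs in the cluster, or,
    with Selective LUN Mapping, only the LIFs on the owning HA pair
    (the LUN's owner node plus its partner)."""
    reporting_nodes = min(2, nodes) if slm else nodes
    return reporting_nodes * lifs_per_node

# This lab: 2 nodes, 2 iSCSI LIFs per node.
print(paths_per_lun(2, 2))  # -> 4
# A larger 8-node cluster: SLM keeps it at 4 paths instead of 16.
print(paths_per_lun(8, 2), paths_per_lun(8, 2, slm=False))  # -> 4 16
```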
This section of the lab allows you to create and mount a LUN for just Windows, just Linux, or both as you
wish. Both the Windows and Linux LUN creation steps require that you complete the Create a Storage
Virtual Machine for iSCSI section that comes next. If you want to create a Windows LUN then you will then
need to complete the Create, Map, and Mount a Windows LUN section that follows, or if you want to
create a Linux LUN then you will then need to complete the Create, Map, and Mount a Linux LUN section
that follows after that. You can safely complete both of those last two sections in the same lab.
Create a Storage Virtual Machine for iSCSI
In this section you will create a new SVM named svmluns on the cluster. You will create the SVM,
configure it for iSCSI, and create four data LIFs to support LUN access to the SVM (two on each cluster
node).
Return to the System Manager window and start the procedure to create a new storage virtual
machine.
1. Open the Storage Virtual Machines tab.
2. Select cluster1.
Data Protocols: check the iSCSI checkbox. Note that the list of available Data Protocols is
dependent upon what protocols are licensed on your cluster; if a given protocol isn't listed it is because
you aren't licensed for it.
Root Aggregate: aggr1_cluster1_01. If you completed the NAS section of this lab you will note that
this is the same aggregate you used to hold the volumes for svm1. Multiple SVMs can share the same
aggregate.
The default values for IPspace, Volume Type, Default Language, and Security Style are already populated
for you by the wizard, as is the DNS configuration. When ready, click Submit & Continue.
Subnet: Demo
2. The Provision a LUN for iSCSI Storage (Optional) section allows you to quickly create a LUN when first
creating an SVM. This lab guide does not use that option, in order to show you the much more common activity
of adding a new volume and LUN to an existing SVM in a later step.
3. Check the Review or modify LIF configuration (Advanced Settings) checkbox. Checking this checkbox
changes the window layout and makes some fields uneditable, so the screenshot shows this checkbox
before it has been checked.
Once you check the Review or modify LIF configuration checkbox, the Configure iSCSI Protocol window
changes to include a list of the LIFs that the wizard plans to create. Take note of the LIF names and ports
that the wizard has chosen to assign the LIFs you have asked it to create. Since this lab utilizes a cluster
that only has two nodes and those nodes are configured as an HA pair, Data ONTAP's automatically
configured Selective LUN Mapping is more than sufficient for this lab, so there is no need to create a
portset.
1. Click Submit & Continue.
The wizard advances to the SVM Administration step. Unlike data LIFs for NAS protocols, which
automatically support both data and management functionality, iSCSI LIFs only support data protocols,
so you must create a dedicated management LIF for this new SVM.
1. Set the fields in the window as follows.
Password: netapp123
Subnet: Demo
Port: cluster1-01:e0c
Click Submit & Continue.
The New Storage Virtual Machine (SVM) Summary window opens. Review the contents of this window,
taking note of the names, IP addresses, and port assignments for the 4 iSCSI LIFs and the management
LIF that the wizard created for you.
1. Click OK to close the window.
The New Storage Virtual Machine (SVM) Summary window closes, and focus returns to System Manager,
which now shows the summary view for the new svmluns SVM.
1. Notice that in the main pane of the window the iSCSI protocol is listed with a green background. This
indicates that iSCSI is enabled and running for this SVM.
If you do not already have a PuTTY session open to cluster1, open one now following the instructions in
the Accessing the Command Line section at the beginning of this lab guide and enter the following
commands.
1. Display the available aggregates so you can decide which one you want to use to host the root volume
for the SVM you will be creating.
cluster1::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_01
           10.26GB   510.6MB   95% online       1 cluster1-01      raid_dp,
                                                                   normal
aggr0_cluster1_02
           10.26GB   510.6MB   95% online       1 cluster1-02      raid_dp,
                                                                   normal
aggr1_cluster1_01
           72.53GB   72.49GB    0% online       3 cluster1-01      raid_dp,
                                                                   normal
aggr1_cluster1_02
           72.53GB   72.53GB    0% online       0 cluster1-02      raid_dp,
                                                                   normal
4 entries were displayed.

cluster1::>
2. Create the SVM svmluns on aggregate aggr1_cluster1_01. Note that the clustered Data ONTAP
command line syntax still refers to storage virtual machines as vservers.
cluster1::> vserver create -vserver svmluns -rootvolume svmluns_root -aggregate
aggr1_cluster1_01 -language C.UTF-8 -rootvolume-security-style unix -snapshot-policy default
[Job 269] Job is queued: Create svmluns.
[Job 269]
[Job 269] Job succeeded:
Vserver creation completed
cluster1::>
3. Display the configuration of the new SVM:
cluster1::> vserver show -vserver svmluns
                                    Vserver: svmluns
                               Vserver Type: data
                            Vserver Subtype: default
                               Vserver UUID: beeb8ca5-580c-11e4-a807-0050569901b8
                                Root Volume: svmluns_root
                                  Aggregate: aggr1_cluster1_01
                 Root Volume Security Style: unix
               Default Volume Language Code: C.UTF-8
                            Snapshot Policy: default
                               Quota Policy: default
 Limit on Maximum Number of Volumes allowed: unlimited
                        Vserver Admin State: running
                  Vserver Operational State: running
                          Allowed Protocols: iscsi
                       Disallowed Protocols: nfs, cifs, fcp, ndmp
            Is Vserver with Infinite Volume: false
                                Config Lock: false
                               IPspace Name: Default

cluster1::>
4. Create 4 SAN LIFs for the SVM svmluns, 2 per node. Do not forget you can save some typing here by
using the up arrow to recall previous commands that you can edit and then execute.
cluster1::> network interface create -vserver svmluns -lif cluster1-01_iscsi_lif_1
-role data -data-protocol iscsi -home-node cluster1-01 -home-port e0d -subnet-name Demo
-failover-policy disabled -firewall-policy data
cluster1::> network interface create -vserver svmluns -lif cluster1-01_iscsi_lif_2
-role data -data-protocol iscsi -home-node cluster1-01 -home-port e0e -subnet-name Demo
-failover-policy disabled -firewall-policy data
cluster1::> network interface create -vserver svmluns -lif cluster1-02_iscsi_lif_1
-role data -data-protocol iscsi -home-node cluster1-02 -home-port e0d -subnet-name Demo
-failover-policy disabled -firewall-policy data
cluster1::> network interface create -vserver svmluns -lif cluster1-02_iscsi_lif_2
-role data -data-protocol iscsi -home-node cluster1-02 -home-port e0e -subnet-name Demo
-failover-policy disabled -firewall-policy data
cluster1::>
8. Display a list of all the volumes on the cluster to see the root volume for the svmluns SVM.
cluster1::> volume show
Vserver     Volume       Aggregate    State      Type       Size  Available Used%
----------- ------------ ------------ ---------- ---- ---------- ---------- -----
cluster1-01 vol0         aggr0_cluster1_01
                                      online     RW       9.71GB     6.97GB   28%
cluster1-02 vol0         aggr0_cluster1_02
                                      online     RW       9.71GB     6.36GB   34%
svm1        eng_users    aggr1_cluster1_01
                                      online     RW         10GB     9.50GB    5%
svm1        engineering  aggr1_cluster1_01
                                      online     RW         10GB     9.50GB    5%
svm1        svm1_root    aggr1_cluster1_01
                                      online     RW         20MB    18.86MB    5%
svmluns     svmluns_root aggr1_cluster1_01
                                      online     RW         20MB    18.86MB    5%
6 entries were displayed.

cluster1::>
Create a thin provisioned Windows volume, create a thin provisioned Windows LUN within that
volume, and map the LUN so it can be accessed by the Windows client.
You must complete all of the subsections of this section in order to use the LUN from the Windows client.
Gather the Windows Client iSCSI Initiator Name
You need to determine the Windows client's iSCSI initiator name so that when you create the LUN you can
set up an appropriate initiator group to control access to the LUN.
Perform these steps on the desktop of the Windows client named jumphost (the main Windows host you use in the lab).
1. Click the Windows button on the far left side of the taskbar.
1. Select the Configuration tab, and take note of the value in the Initiator Name field, which contains the
initiator name for jumphost. The value should read as:
iqn.1991-05.com.microsoft:jumphost.demo.netapp.com
You will need this value later, so you might want to copy it from the properties window and
paste it into a text file on your lab's desktop so you have it readily available when that time comes.
2. Click OK.
The iSCSI Properties window closes, and focus returns to the Windows Explorer Administrator Tools
window. Leave this window open as you will need to access other tools later in the lab.
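Both initiator names used in this lab follow the standard IQN format, iqn.&lt;year-month&gt;.&lt;reversed-domain&gt;:&lt;identifier&gt;. The following small sketch (illustrative only, not a lab step) splits the jumphost value quoted above into those two parts:

```shell
# Split an iSCSI qualified name (IQN) into its naming-authority and
# identifier parts. The value is the jumphost initiator name from this lab;
# the parsing itself is purely illustrative.
iqn='iqn.1991-05.com.microsoft:jumphost.demo.netapp.com'
naming_authority=${iqn%%:*}   # everything before the first ':'
identifier=${iqn#*:}          # everything after the first ':'
echo "$naming_authority"      # prints: iqn.1991-05.com.microsoft
echo "$identifier"            # prints: jumphost.demo.netapp.com
```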
Name: windows.lun
Size: 10 GB
1. Select the radio button to create a new flexible volume and set the fields under that heading as follows.
Name: winigrp
1. Populate the Name entry with the value of the iSCSI Initiator name for jumphost that you saved
earlier. In case you misplaced that value, it was
iqn.1991-05.com.microsoft:jumphost.demo.netapp.com.
When you finish entering the value, click the OK button underneath the entry. Finally, click Create.
An Initiator-Group Summary window opens confirming that the winigrp igroup was created successfully.
1. Click OK to acknowledge the confirmation.
The Initiator-Group Summary window closes, and focus returns to the Initiator Mapping step of the Create
LUN wizard.
1. Click the checkbox under the map column next to the winigrp initiator group. This is a critical step
because this is where you actually map the new LUN to the new igroup.
2. Click Next to continue.
The wizard advances to the Storage Quality of Service Properties step. You will not be creating any QoS
policies in this lab. If you are interested in learning about QoS, please see the Hands-on Lab for Advanced
Concepts for clustered Data ONTAP 8.3 lab.
The wizard advances to the LUN Summary step, where you can review your selections before proceeding
with creating the LUN.
The wizard begins the task of creating the volume that will contain the LUN, creating the LUN, and
mapping the LUN to the new igroup. As it finishes each step the wizard displays a green checkmark in the
window next to that step.
The Create LUN wizard window closes, and focus returns to the LUNs view in System Manager. The new
windows.lun LUN now shows up in the LUNs view, and if you select it you can review its details at the
bottom of the pane.
If you do not already have a PuTTY connection open to cluster1 then please open one now following the
instructions in the Accessing the Command Line section at the beginning of this lab guide.
Create the volume winluns to host the Windows LUN you will be creating in a later step:
cluster1::> volume create -vserver svmluns -volume winluns -aggregate aggr1_cluster1_01 -size
10.31GB -percent-snapshot-space 0 -snapshot-policy none -space-guarantee none
-autosize-mode grow -nvfail on
[Job 270] Job is queued: Create winluns.
[Job 270] Job succeeded: Successful
cluster1::> volume show
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
cluster1-01 vol0       aggr0_cluster1_01
                                    online     RW       9.71GB     7.00GB   27%
cluster1-02 vol0       aggr0_cluster1_02
                                    online     RW       9.71GB     6.34GB   34%
svm1      eng_users    aggr1_cluster1_01
                                    online     RW         10GB     9.50GB    5%
svm1      engineering  aggr1_cluster1_01
                                    online     RW         10GB     9.50GB    5%
svm1      svm1_root    aggr1_cluster1_01
                                    online     RW         20MB    18.86MB    5%
svmluns   svmluns_root aggr1_cluster1_01
                                    online     RW         20MB    18.86MB    5%
svmluns   winluns      aggr1_cluster1_01
                                    online     RW      10.31GB    21.31GB    0%
7 entries were displayed.
cluster1::>
Display a list of the defined igroups, then create a new igroup named winigrp that you will use to manage
access to the new LUN. Finally, add the Windows client's initiator name to the igroup.
cluster1::> igroup show
This table is currently empty.
cluster1::> igroup create -vserver svmluns -igroup winigrp -protocol iscsi -ostype windows
-initiator iqn.1991-05.com.microsoft:jumphost.demo.netapp.com
cluster1::> igroup show
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
svmluns   winigrp      iscsi    windows  iqn.1991-05.com.microsoft:jumphost.
                                         demo.netapp.com
cluster1::>
Map the LUN windows.lun to the igroup winigrp, then display a list of all the LUNs, all the mapped LUNs,
and finally a detailed report on the configuration of the LUN windows.lun.
cluster1::> lun map -vserver svmluns -volume winluns -lun windows.lun -igroup winigrp

cluster1::> lun show
Vserver   Path                            State   Mapped   Type        Size
--------- ------------------------------- ------- -------- -------- --------
svmluns   /vol/winluns/windows.lun        online  mapped   windows_2008
                                                                    10.00GB

cluster1::> lun mapped show
Vserver    Path                                      Igroup    LUN ID  Protocol
---------- ----------------------------------------- --------- ------- --------
svmluns    /vol/winluns/windows.lun                  winigrp         0  iscsi

cluster1::> lun show -lun windows.lun -instance

                 Vserver Name: svmluns
                     LUN Path: /vol/winluns/windows.lun
                  Volume Name: winluns
                   Qtree Name: ""
                     LUN Name: windows.lun
                     LUN Size: 10.00GB
                      OS Type: windows_2008
            Space Reservation: disabled
                Serial Number: wOj4Q]FMHlq6
                      Comment: Windows LUN
   Space Reservations Honored: false
             Space Allocation: disabled
                        State: online
                     LUN UUID: 8e62421e-bff4-4ac7-85aa-2e6e3842ec8a
                       Mapped: mapped
                   Block Size: 512
                    Read Only: false
                       Fenced: false
                    Used Size: 0
          Maximum Resize Size: 502.0GB
                Creation Time: 10/20/2014 04:36:41
                        Class: regular
         Node Hosting the LUN: cluster1-01
                        Clone: false
     Clone Autodelete Enabled: false
          Inconsistent import: false

cluster1::>
You should begin by validating that the Multi-Path I/O (MPIO) software is working properly on this Windows
host. The Administrative Tools window should still be open on jumphost; if you already closed it, then you
will need to re-open it now so you can access the MPIO tool.
1. Double-click the MPIO tool.
The MPIO Properties window closes and focus returns to the Administrative Tools window for jumphost.
Now you need to begin the process of connecting jumphost to the LUN.
The Discovery tab is where you begin the process of discovering LUNs, and to do that you must define a
target portal to scan. You are going to manually add a target portal to jumphost.
The Discover Target Portal window opens. Here you will specify the first of the IP addresses that the
clustered Data ONTAP Create LUN wizard assigned your iSCSI LIFs when you created the svmluns SVM.
Recall that the wizard assigned your LIFs IP addresses in the range 192.168.0.133-192.168.0.136.
1. Set the IP Address or DNS name textbox to 192.168.0.133, the first address in the range for your
LIFs, and click OK.
The Discover Target Portal window closes, and focus returns to the iSCSI Initiator Properties window.
1. The Target Portals list now contains an entry for the IP address you entered in the previous step.
2. Click on the Targets tab.
The Targets tab opens to show you the list of discovered targets.
1. In the Discovered targets list select the only listed target. Observe that the target's status is Inactive,
because although you have discovered it you have not yet connected to it. Also note that the Name of
the discovered target in your lab will have a different value than what you see in this guide; that name
string is uniquely generated for each instance of the lab. (Make a mental note of that string value as
you will see it a lot as you continue to configure iSCSI in later steps of this process.)
2. Click the Connect button.
1. Click the Enable multi-path checkbox, then click the Advanced button.
The Advanced Setting window closes, and focus returns to the Connect to Target window.
1. Click OK.
The Connect to Target window closes, and focus returns to the iSCSI Initiator Properties window.
1. Notice that the status of the listed discovered target has changed from Inactive to Connected.
Thus far you have added a single path to your iSCSI LUN, using the address for the cluster1-01_iscsi_lif_1
LIF the Create LUN wizard created on the node cluster1-01 for the svmluns SVM. You are now going to
add each of the other SAN LIFs present on the svmluns SVM. To begin this procedure you must first edit
the properties of your existing connection.
1. Still on the Targets tab, select the discovered target entry for your existing connection.
2. Click Properties.
The Properties window opens. From this window you will be starting the procedure of connecting alternate
paths for your newly connected LUN. You will be repeating this procedure 3 times, once for each of the
remaining LIFs that are present on the svmluns SVM.
LIF IP Address    Done
192.168.0.134
192.168.0.135
192.168.0.136
1. The Identifier list will contain an entry for every path you have specified so far, so it can serve as a
visual indicator of your progress in defining all of your paths. The first time you enter this window
you will see one entry, for the LIF you used to first connect to this LUN.
2. Click Add Session.
1. Select the Target port IP entry that contains the IP address of the LIF whose path you are adding in
this iteration of the procedure to add an alternate path. The following screenshot shows the
192.168.0.134 address, but the value you specify will depend on which specific path you are
configuring. When finished, click OK.
The Advanced Settings window closes, and focus returns to the Connect to Target window.
1. Click OK.
The Connect to Target window closes, and focus returns to the Properties window, where a new entry now
appears in the Identifier list. Repeat the procedure from the last 4 screenshots for each of the two remaining LIF IP addresses.
When you have finished adding all 3 paths the Identifiers list in the Properties window should contain 4
entries.
1. There are 4 entries in the Identifier list when you are finished, indicating that there are 4 sessions, one
for each path. Note that it is normal for the identifier values in your lab to differ from those in the
screenshot.
2. Click OK.
The Properties window closes, and focus returns to the iSCSI Properties window.
1. Click OK.
The iSCSI Properties window closes, and focus returns to the desktop of jumphost. If the Administrative
Tools window is not still open on your desktop, open it again now.
If all went well, the jumphost is now connected to the LUN using multi-pathing, so it is time to format your
LUN and build a filesystem on it.
1. In the left pane of the Computer Management window, navigate to Computer Management (Local) > Storage > Disk Management.
2. When you launch Disk Management, an Initialize Disk dialog opens informing you that you must
initialize a new disk before Logical Disk Manager can access it. (If you see more than one disk listed,
then MPIO has not correctly recognized that the multiple paths you set up all refer to the same LUN. In
that case, cancel the Initialize Disk dialog, quit Computer Management, and go back to the iSCSI
Initiator tool to review your path configuration steps and correct any configuration errors, after
which you can return to the Computer Management tool and try again.)
Click OK to initialize the disk.
The Initialize Disk window closes, and focus returns to the Disk Management view in the Computer
Management window.
1. The new disk shows up in the disk list at the bottom of the window, and has a status of Unallocated.
2. Right-click inside the Unallocated box for the disk (if you right-click outside this box you will get the
incorrect context menu) and select New Simple Volume from the context menu.
1. The wizard automatically selects the next available drive letter, which should be E:. Click Next.
The wizard advances to the Completing the New Simple Volume Wizard step.
1. Click Finish.
The New Simple Volume Wizard window closes, and focus returns to the Disk Management view of the
Computer Management window.
1. The new WINLUN volume now shows as Healthy in the disk list at the bottom of the window, indicating
that the new LUN is mounted and ready for you to use. Before you complete this section of the lab, take
a look at the MPIO configuration for this LUN by right-clicking inside the box for the WINLUN volume.
2. From the context menu select Properties.
The NETAPP LUN C-Mode Multi-Path Disk Device Properties window opens.
3. The MPIO policy is set to Least Queue Depth. A number of different multi-pathing policies are
available but the configuration shown here sends LUN I/O down the path that has the fewest
outstanding I/O requests. You can click the More information about MPIO policies link at the bottom
of the dialog window for details about all the available policies.
4. The top two paths show both a Path State and TPG State as Active/Optimized; these paths are
connected to the node cluster1-01 and the Least Queue Depth policy makes active use of both paths to
this node. On the other hand the bottom two paths show a Path State of Unavailable and a TPG
State of Active/Unoptimized; these paths are connected to the node cluster1-02 and only enter a
Path State of Active/Optimized if the node cluster1-01 becomes unavailable or if the volume hosting
the LUN migrates over to the node cluster1-02.
5. When you are finished reviewing the information in this dialog click OK to exit. If you have changed any
of the values in this dialog you may want to consider instead using the Cancel button in order to
discard those changes.
1. The NETAPP LUN C-Mode Multi-Path Disk Device Properties window closes, and focus returns to the
WINLUN (E:) Properties window.
Click OK.
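The Least Queue Depth behavior described above can be sketched with a toy selection over hypothetical per-path queue counts (the path names and counts here are invented for illustration; Windows performs this selection internally):

```shell
# Toy model of Least Queue Depth path selection: given outstanding-I/O
# counts per path, the next I/O goes to the path with the fewest.
# Ties are broken by path name purely to make the output deterministic.
best_path=$(printf '%s\n' 'path1 3' 'path2 1' 'path3 7' 'path4 1' \
    | sort -k2,2n -k1,1 | head -n 1 | awk '{print $1}')
echo "$best_path"   # prints: path2
```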
Create a thin provisioned Linux volume, create a thin provisioned Linux LUN named linux.lun within
that volume, and map the LUN to the Linux client.
You must complete all of the following subsections in order to use the LUN from the Linux client. Note that
you are not required to complete the Windows LUN section before starting this section of the lab guide, but
the screenshots and command line output shown here assume that you have; if you did not complete the
Windows LUN section, the differences will not affect your ability to create and mount the Linux LUN.
You should already have a PuTTY connection open to the Linux host rhel1. If you do not, then open one
now using the instructions found in the Accessing the Command Line section at the beginning of this lab
guide. The username will be root and the password will be Netapp1!.
Run the following command on rhel1 to find the name of its iSCSI initiator.
[root@rhel1 ~]# cd /etc/iscsi
[root@rhel1 iscsi]# ls
initiatorname.iscsi iscsid.conf
[root@rhel1 iscsi]# cat initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:rhel1.demo.netapp.com
[root@rhel1 iscsi]#
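When scripting against the cluster (for example, to feed this value into an igroup create command), the InitiatorName= prefix can be stripped with shell parameter expansion. A minimal sketch using the value shown above:

```shell
# Extract the bare IQN from a line in initiatorname.iscsi format.
# The sample line mirrors the file contents shown above; in a real script
# you would read it from /etc/iscsi/initiatorname.iscsi instead.
line='InitiatorName=iqn.1994-05.com.redhat:rhel1.demo.netapp.com'
iqn=${line#InitiatorName=}
echo "$iqn"   # prints: iqn.1994-05.com.redhat:rhel1.demo.netapp.com
```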
Switch back to the System Manager window so that you can create the LUN.
3. Click Create.
Name: linux.lun
Type: Linux
Size: 10 GB
1. Select the radio button to create a new flexible volume and set the fields under that heading as follows.
Name: linigrp
An Initiator-Group Summary window opens confirming that the linigrp igroup was created successfully.
1. Click OK to acknowledge the confirmation.
The Initiator-Group Summary window closes, and focus returns to the Initiator Mapping step of the Create
LUN wizard.
1. Click the checkbox under the map column next to the linigrp initiator group. This is a critical step
because this is where you actually map the new LUN to the new igroup.
2. Click Next to continue.
The wizard advances to the Storage Quality of Service Properties step. You will not be creating any QoS
policies in this lab. If you are interested in learning about QoS, please see the Hands-on Lab for Advanced
Concepts for clustered Data ONTAP 8.3 lab.
1. Click Next to continue.
The wizard advances to the LUN Summary step, where you can review your selections before proceeding
with creating the LUN.
The wizard begins the task of creating the volume that will contain the LUN, creating the LUN, and
mapping the LUN to the new igroup. As it finishes each step the wizard displays a green checkmark in the
window next to that step.
1. Click Finish to terminate the wizard.
The Create LUN wizard window closes, and focus returns to the LUNs view in System Manager. The new
linux.lun LUN now shows up in the LUNs view, and if you select it you can review its details at the bottom
of the pane.
The new Linux LUN now exists and is mapped to your rhel1 client, but there is still one more configuration
step remaining for this LUN as follows:
1. Data ONTAP 8.2 introduced a space reclamation feature that allows Data ONTAP to reclaim space
from a thin provisioned LUN when the client deletes data from it, and also allows Data ONTAP to notify
the client when the LUN cannot accept writes due to lack of space on the volume. This feature is
supported by VMware ESX 5.0 and later, Red Hat Enterprise Linux 6.2 and later, and Microsoft
Windows 2012. The RHEL clients used in this lab are running version 6.5 and so you will enable the
space reclamation feature for your Linux LUN. You can only enable space reclamation through the
Data ONTAP command line, so if you do not already have a PuTTY session open to cluster1 then open
one now following the directions shown in the Accessing the Command Line section at the beginning
of this lab guide. The username will be admin and the password will be Netapp1!.
Enable space reclamation for the LUN.
cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields space-allocation
vserver path                   space-allocation
------- ---------------------- ----------------
svmluns /vol/linluns/linux.lun disabled

cluster1::> lun modify -vserver svmluns -path /vol/linluns/linux.lun -space-allocation enabled

cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields space-allocation
vserver path                   space-allocation
------- ---------------------- ----------------
svmluns /vol/linluns/linux.lun enabled

cluster1::>
If you do not currently have a PuTTY session open to cluster1 then open one now following the instructions
from the Accessing the Command Line section at the beginning of this lab guide. The username will be
admin and the password will be Netapp1!.
Create the thin provisioned volume linluns that will host the Linux LUN you will create in a later step:
cluster1::> volume create -vserver svmluns -volume linluns -aggregate aggr1_cluster1_01 -size
10.31GB -percent-snapshot-space 0 -snapshot-policy none -space-guarantee none -autosize-mode
grow -nvfail on
[Job 271] Job is queued: Create linluns.
[Job 271] Job succeeded: Successful
cluster1::> volume show
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
cluster1-01 vol0       aggr0_cluster1_01
                                    online     RW       9.71GB     6.92GB   28%
cluster1-02 vol0       aggr0_cluster1_02
                                    online     RW       9.71GB     6.27GB   35%
svm1      eng_users    aggr1_cluster1_01
                                    online     RW         10GB     9.50GB    5%
svm1      engineering  aggr1_cluster1_01
                                    online     RW         10GB     9.50GB    5%
svm1      svm1_root    aggr1_cluster1_01
                                    online     RW         20MB    18.85MB    5%
svmluns   linluns      aggr1_cluster1_01
                                    online     RW      10.31GB    10.31GB    0%
svmluns   svmluns_root aggr1_cluster1_01
                                    online     RW         20MB    18.86MB    5%
svmluns   winluns      aggr1_cluster1_01
                                    online     RW      10.31GB    10.28GB    0%
8 entries were displayed.
cluster1::>
Create the thin provisioned Linux LUN linux.lun on the volume linluns:
cluster1::> lun show
Vserver   Path                            State   Mapped   Type        Size
--------- ------------------------------- ------- -------- -------- --------
svmluns   /vol/winluns/windows.lun        online  mapped   windows_2008
                                                                    10.00GB

cluster1::> lun create -vserver svmluns -volume linluns -lun linux.lun -size 10GB -ostype linux
-space-reserve disabled

Created a LUN of size 10g (10742215680)

cluster1::> lun modify -vserver svmluns -volume linluns -lun linux.lun -comment "Linux LUN"

cluster1::> lun show
Vserver   Path                            State   Mapped    Type        Size
--------- ------------------------------- ------- --------- -------- --------
svmluns   /vol/linluns/linux.lun          online  unmapped  linux        10GB
svmluns   /vol/winluns/windows.lun        online  mapped    windows_2008
                                                                    10.00GB
Display a list of the cluster's igroups and portsets, then create a new igroup named linigrp that you will use
to manage access to the LUN linux.lun. Add the iSCSI initiator name for the Linux host rhel1 to the new
igroup.
cluster1::> lun map -vserver svmluns -volume linluns -lun linux.lun -igroup linigrp

cluster1::> lun show
Vserver   Path                            State   Mapped   Type        Size
--------- ------------------------------- ------- -------- -------- --------
svmluns   /vol/linluns/linux.lun          online  mapped   linux        10GB
svmluns   /vol/winluns/windows.lun        online  mapped   windows_2008
                                                                    10.00GB

cluster1::> lun mapped show
Vserver    Path                                      Igroup    LUN ID  Protocol
---------- ----------------------------------------- --------- ------- --------
svmluns    /vol/linluns/linux.lun                    linigrp         0  iscsi
svmluns    /vol/winluns/windows.lun                  winigrp         0  iscsi

cluster1::> lun mapped show -vserver svmluns -path /vol/linluns/linux.lun
Vserver    Path                                      Igroup    LUN ID  Protocol
---------- ----------------------------------------- --------- ------- --------
svmluns    /vol/linluns/linux.lun                    linigrp         0  iscsi

cluster1::> lun show -lun linux.lun -instance

                 Vserver Name: svmluns
                     LUN Path: /vol/linluns/linux.lun
                  Volume Name: linluns
                   Qtree Name: ""
                     LUN Name: linux.lun
                     LUN Size: 10GB
                      OS Type: linux
            Space Reservation: disabled
                Serial Number: wOj4Q]FMHlq7
                      Comment: Linux LUN
   Space Reservations Honored: false
             Space Allocation: disabled
                        State: online
                     LUN UUID: 1b4912fb-b779-4811-b1ff-7bc3a615454c
                       Mapped: mapped
                   Block Size: 512
                    Read Only: false
                       Fenced: false
                    Used Size: 0
          Maximum Resize Size: 128.0GB
                Creation Time: 10/20/2014 06:19:49
                        Class: regular
         Node Hosting the LUN: cluster1-01
             QoS Policy Group: -
                        Clone: false
     Clone Autodelete Enabled: false
          Inconsistent import: false

cluster1::>
Data ONTAP 8.2 introduced a space reclamation feature that allows Data ONTAP to reclaim space from a
thin provisioned LUN when the client deletes data from it, and also allows Data ONTAP to notify the client
when the LUN cannot accept writes due to lack of space on the volume. This feature is supported by
VMware ESX 5.0 and later, Red Hat Enterprise Linux 6.2 and later, and Microsoft Windows 2012. The
RHEL clients used in this lab are running version 6.5 and so you will enable the space reclamation feature
for your Linux LUN.
The steps in this section assume some familiarity with how to use the Linux command line. If you are not
familiar with those concepts then we recommend that you skip this section of the lab.
If you do not currently have a PuTTY session open to rhel1, open one now and log in as user root with the
password Netapp1!.
The NetApp Linux Host Utilities kit has been pre-installed on both Red Hat Linux hosts in this lab, and the
iSCSI initiator name has already been configured for each host. Confirm that this is the case:
[root@rhel1 ~]# rpm -qa | grep netapp
netapp_linux_unified_host_utilities-7-0.x86_64
[root@rhel1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:rhel1.demo.netapp.com
[root@rhel1 ~]#
The Red Hat Linux hosts in the lab have the DM-Multipath packages pre-installed, along with a
/etc/multipath.conf file pre-configured to support multi-pathing, so that the RHEL host can access the LUN
using all of the SAN LIFs you created for the svmluns SVM.
[root@rhel1 ~]# rpm -q device-mapper
device-mapper-1.02.79-8.el6.x86_64
[root@rhel1 ~]# rpm -q device-mapper-multipath
device-mapper-multipath-0.4.9-72.el6.x86_64
[root@rhel1 ~]# cat /etc/multipath.conf
# For a complete list of the default configuration values, see
# /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.defaults
# For a list of configuration options with descriptions, see
# /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.annotated
#
# REMEMBER: After updating multipath.conf, you must run
#
# service multipathd reload
#
# for the changes to take effect in multipathd

# NetApp recommended defaults
defaults {
        flush_on_last_del       yes
        max_fds                 max
        queue_without_daemon    no
        user_friendly_names     no
        dev_loss_tmo            infinity
        fast_io_fail_tmo        5
}
blacklist {
        devnode "^sda"
        devnode "^hd[a-z]"
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^ccis.*"
}
devices {
        # NetApp iSCSI LUNs
        device {
                vendor                  "NETAPP"
                product                 "LUN"
                path_grouping_policy    group_by_prio
                features                "3 queue_if_no_path pg_init_retries 50"
                prio                    "alua"
                path_checker            tur
                failback                immediate
                path_selector           "round-robin 0"
                hardware_handler        "1 alua"
                rr_weight               uniform
                rr_min_io               128
                getuid_callout          "/lib/udev/scsi_id -g -u -d /dev/%n"
        }
}
[root@rhel1 ~]#
You now need to start the iSCSI software service on rhel1 and configure it to start automatically at boot
time. Note that a force-start is only necessary the very first time you start the iscsid service on a host.
[root@rhel1 ~]# service iscsid status
iscsid is stopped
[root@rhel1 ~]# service iscsid force-start
Starting iscsid: OK
[root@rhel1 ~]# service iscsi status
No active sessions
[root@rhel1 ~]# chkconfig iscsi on
[root@rhel1 ~]# chkconfig --list iscsi
iscsi           0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@rhel1 ~]#
Next, discover the available targets using the iscsiadm command. Note that the exact values used for the
node paths may differ in your lab from what is shown in this example, and that after running this command
there will not yet be any active iSCSI sessions, because you have not yet created the necessary device
files.
[root@rhel1 ~]# iscsiadm --mode discovery --op update --type sendtargets
--portal 192.168.0.133
192.168.0.133:3260,1028 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
192.168.0.136:3260,1031 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
192.168.0.135:3260,1030 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
192.168.0.134:3260,1029 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
[root@rhel1 ~]# iscsiadm --mode session
iscsiadm: No active sessions.
[root@rhel1 ~]#
Create the devices necessary to support the discovered nodes by logging in to the iSCSI targets (for
example, with iscsiadm --mode node --login), after which the sessions become active. At this point the
Linux client sees the LUN over all four paths, but it does not yet understand that all four paths represent
the same LUN.
[root@rhel1 ~]# sanlun lun show
controller(7mode)/                         device      host              lun
vserver(Cmode)   lun-pathname              filename    adapter  protocol size   product
------------------------------------------------------------------------------------------------
svmluns          /vol/linluns/linux.lun    /dev/sde    host3    iSCSI    10g    cDOT
svmluns          /vol/linluns/linux.lun    /dev/sdd    host4    iSCSI    10g    cDOT
svmluns          /vol/linluns/linux.lun    /dev/sdc    host5    iSCSI    10g    cDOT
svmluns          /vol/linluns/linux.lun    /dev/sdb    host6    iSCSI    10g    cDOT
[root@rhel1 ~]#
Since the lab includes a pre-configured /etc/multipath.conf file you just need to start the multipathd service
to handle the multiple path management and configure it to start automatically at boot time.
[root@rhel1 ~]# service multipathd status
multipathd is stopped
[root@rhel1 ~]# service multipathd start
Starting multipathd daemon: OK
[root@rhel1 ~]# service multipathd status
multipathd (pid 8656) is running...
[root@rhel1 ~]# chkconfig multipathd on
[root@rhel1 ~]# chkconfig --list multipathd
multipathd      0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@rhel1 ~]#
The multipath command displays the configuration of DM-Multipath, and the multipath -ll
command displays a list of the multipath devices. DM-Multipath maintains a device file under /dev/mapper
that you use to access the multipathed LUN (in order to create a filesystem on it and to mount it); the first
line of output from the multipath -ll command lists the name of that device file (in this example
3600a0980774f6a34515d464d486c7137). The autogenerated name for this device file will likely differ in
your copy of the lab. Also pay attention to the output of the sanlun lun show -p command, which shows
information about the Data ONTAP path of the LUN, the LUN's size, its device file name under
/dev/mapper, the multipath policy, and also information about the various device paths themselves.
[root@rhel1 ~]# multipath -ll
3600a0980774f6a34515d464d486c7137 dm-2 NETAPP,LUN C-Mode
size=10G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 6:0:0:0 sdb 8:16 active ready running
| `- 3:0:0:0 sde 8:64 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 5:0:0:0 sdc 8:32 active ready running
`- 4:0:0:0 sdd 8:48 active ready running
[root@rhel1 ~]# ls -l /dev/mapper
total 0
lrwxrwxrwx 1 root root      7 Oct 20 06:50 3600a0980774f6a34515d464d486c7137 -> ../dm-2
crw-rw---- 1 root root 10, 58 Oct 19 18:57 control
lrwxrwxrwx 1 root root      7 Oct 19 18:57 vg_rhel1-lv_root -> ../dm-0
lrwxrwxrwx 1 root root      7 Oct 19 18:57 vg_rhel1-lv_swap -> ../dm-1
[root@rhel1 ~]# sanlun lun show -p

                ONTAP Path: svmluns:/vol/linluns/linux.lun
                       LUN: 0
                  LUN Size: 10g
                   Product: cDOT
               Host Device: 3600a0980774f6a34515d464d486c7137
          Multipath Policy: round-robin 0
        Multipath Provider: Native
--------- ---------- ------- ------------ ----------------------------------------------
host      vserver
path      path       /dev/   host         vserver
state     type       node    adapter      LIF
--------- ---------- ------- ------------ ----------------------------------------------
up        primary    sdb     host6        cluster1-01_iscsi_lif_1
up        primary    sde     host3        cluster1-01_iscsi_lif_2
up        secondary  sdc     host5        cluster1-02_iscsi_lif_1
up        secondary  sdd     host4        cluster1-02_iscsi_lif_2
[root@rhel1 ~]#
You can see even more detail about the configuration of multipath and the LUN as a whole by running the
commands multipath -v3 -d -ll or iscsiadm -m session -P 3. As the output of these
commands is rather lengthy, it is omitted here.
The LUN is now fully configured for multipath access, so the only steps remaining before you can use the
LUN on the Linux host are to create a filesystem and mount it. When you run the following commands in
your lab, you will need to substitute in the /dev/mapper/ string that identifies your LUN (get that string
from the output of ls -l /dev/mapper):
[root@rhel1 ~]# mkfs.ext4 /dev/mapper/3600a0980774f6a34515d464d486c7137
mke2fs 1.41.12 (17-May-2010)
Discarding device blocks:
0/204800 done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1 blocks, Stripe width=16 blocks
655360 inodes, 2621440 blocks
131072 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@rhel1 ~]# mkdir /linuxlun
[root@rhel1 ~]# mount -t ext4 -o discard /dev/mapper/3600a0980774f6a34515d464d486c7137
/linuxlun
[root@rhel1 ~]# df
Filesystem                                    1K-blocks     Used Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root                   11877388  4962816   6311232  45% /
tmpfs                                            444612       76    444536   1% /dev/shm
/dev/sda1                                        495844    40084    430160   9% /boot
svm1:/                                            19456      128     19328   1% /svm1
/dev/mapper/3600a0980774f6a34515d464d486c7137  10321208   154100   9642820   2% /linuxlun
[root@rhel1 ~]# ls /linuxlun
lost+found
[root@rhel1 ~]# echo "hello from rhel1" > /linuxlun/test.txt
[root@rhel1 ~]# cat /linuxlun/test.txt
hello from rhel1
[root@rhel1 ~]# ls -l /linuxlun/test.txt
-rw-r--r-- 1 root root 6 Oct 20 06:54 /linuxlun/test.txt
[root@rhel1 ~]#
The discard option for mount allows the Red Hat host to utilize space reclamation for the LUN.
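When scripting these steps, the multipath device name can be picked out of /dev/mapper instead of being copied by hand, since it is the only entry that looks like a 33-character SCSI identifier beginning with 3. A sketch, using a sample listing that mirrors the output above rather than reading the real directory:

```shell
# Find the multipath device name among /dev/mapper entries. The sample
# listing stands in for real `ls /dev/mapper` output; the grep pattern
# matches the scsi_id-style name (a '3' followed by 32 hex digits).
sample='3600a0980774f6a34515d464d486c7137
control
vg_rhel1-lv_root
vg_rhel1-lv_swap'
mpdev=$(printf '%s\n' "$sample" | grep -E '^3[0-9a-f]{32}$')
echo "/dev/mapper/$mpdev"   # prints: /dev/mapper/3600a0980774f6a34515d464d486c7137
```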
To have RHEL automatically mount the LUN's filesystem at boot time, run the following command
(modified to reflect the multipath device path being used in your instance of the lab) to add the mount
information to the /etc/fstab file. The following command should be entered as a single line.
[root@rhel1 ~]# echo '/dev/mapper/3600a0980774f6a34515d464d486c7137
/linuxlun ext4 _netdev,discard,defaults 0 0' >> /etc/fstab
[root@rhel1 ~]#
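A quick sanity check on the new record can save a failed mount at boot: an fstab entry has exactly six whitespace-separated fields, and the _netdev option is what delays the mount until networking is up. A sketch that operates on a local copy of the line, not the real /etc/fstab:

```shell
# Validate the shape of the fstab record added above: six fields, with
# _netdev present in the options field. Works on a string copy, so it is
# safe to run anywhere.
fstab_line='/dev/mapper/3600a0980774f6a34515d464d486c7137 /linuxlun ext4 _netdev,discard,defaults 0 0'
set -- $fstab_line            # word-split the record into positional parameters
nfields=$#
opts=$4
echo "fields: $nfields"       # prints: fields: 6
echo "options: $opts"         # prints: options: _netdev,discard,defaults
```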
When using tab completion, if the Data ONTAP command interpreter is unable to identify a unique
expansion, it displays a list of potential matches similar to what the ? character does.
cluster1::> cluster s
Error: Ambiguous command.
cluster show
cluster statistics
cluster1::>
The Data ONTAP commands are structured hierarchically. When you log in you are placed at the root of
that command hierarchy, but you can step into a lower branch of the hierarchy by entering one of the base
commands. For example, when you first log in to the cluster enter the ? command to see the list of
available base commands, as follows:
cluster1::> ?
  up                          Go up one directory
  cluster>                    Manage clusters
  dashboard>                  (DEPRECATED)-Display dashboards
  event>                      Manage system events
  exit                        Quit the CLI session
  export-policy               Manage export policies and rules
  history                     Show the history of commands for this CLI session
  job>                        Manage jobs and job schedules
  lun>                        Manage LUNs
  man                         Display the on-line manual pages
  metrocluster>               Manage MetroCluster
  network>                    Manage physical and virtual network connections
  qos>                        QoS settings
  redo                        Execute a previous command
  rows                        Show/Set the rows for this CLI session
179 Basic Concepts for Clustered Data ONTAP 8.3
2015 NetApp, Inc. All rights reserved.
  run
  security>
  set
  snapmirror>
  statistics>
  storage>
  system>
  top
  volume>
  vserver>
cluster1::>
The > character at the end of a command signifies that it has a sub-hierarchy; enter the vserver command
to enter the vserver sub-hierarchy.
cluster1::> vserver
cluster1::vserver> ?
active-directory>
add-aggregates
add-protocols
audit>
check>
cifs>
context
create
dashboard>
data-policy>
delete
export-policy>
fcp>
fpolicy>
group-mapping>
iscsi>
locks>
modify
name-mapping>
nfs>
peer>
remove-aggregates
remove-protocols
rename
security>
services>
show
show-protocols
smtape>
start
stop
vscan>
cluster1::vserver>
Notice how the prompt changed to reflect that you are now in the vserver sub-hierarchy, and that some of
the subcommands here have sub-hierarchies of their own. To return to the root of the hierarchy enter the
top command; you can also navigate upwards one level at a time by using the up or .. commands.
cluster1::vserver> top
cluster1::>
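For example, stepping down two levels and then navigating back up one level at a time might look like the following sketch (the prompts will reflect your own cluster name, and the nfs sub-hierarchy is used here purely as an illustration):

```
cluster1::> vserver
cluster1::vserver> nfs
cluster1::vserver nfs> ..
cluster1::vserver> top
cluster1::>
```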
The Data ONTAP command interpreter supports command history. By repeatedly pressing the up arrow key
you can step back through the commands you ran earlier, and you can execute a given command again by
pressing the Enter key when you find it. You can also use the left and right arrow keys to edit the
command before you run it again.
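The history and redo commands shown in the base command list offer another way to reuse earlier commands: history numbers the commands from the current CLI session, and redo re-executes one by number. A sketch (the commands and numbering shown are illustrative):

```
cluster1::> history
    1  cluster show
    2  vserver show
cluster1::> redo 1
```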
References
The following references were used in writing this lab guide.
Version History
Version          Date
Version 1.0      October 2014
Version 1.0.1    December 2014
Version 1.1      April 2015
Refer to the Interoperability Matrix Tool (IMT) on the NetApp Support site to validate that the exact product and feature
versions described in this document are supported for your specific environment. The NetApp IMT defines product components
and versions that can be used to construct configurations that are supported by NetApp. Specific results depend on each
customer's installation in accordance with published specifications.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or
recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information
or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of
this information or the implementation of any recommendations or techniques herein is a customer's responsibility and
depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document
and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
2015 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc.
Specifications are subject to change without notice. NetApp and the NetApp logo are registered trademarks of NetApp, Inc. in the United States
and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated
as such.