Documenti di Didattica
Documenti di Professioni
Documenti di Cultura
Clustered Ontap
Configuration & Operation Instruction NetApp Storage
Configuration &
Operation
Instruction
12.2018
Document version 3.6
Configuration & Operation Instruction
Document control
date version author change/addition
no.
20/07/2013 2.0 Mathias Dubourg Initial Release for Clustered DoT 8.2
19/09/2014 2.1 Julien Lutran Sophos Antivirus configuration added (§4.20)
16/01/2015 2.2 Mathias Dubourg Updated with QOS
12/04/2015 2.3 Morgan DE Add NFS export §3.8
KERSAUSON Add Ipspace configuration §4.3
04/05/2016 2.4 Antoine Gutierrez Added Advanced Drive Partitioning, NDMP
modes & Tape drive operations
18/05/2017 2.5 Antoine Gutierrez Snaplock configuration added. (§4.22)
12/06/2017 2.6 Mathias Dubourg Updated with Snapmirror Transition + Harvest
user
27/06/2017 2.7 Antoine Gutierrez Metrocluster configuration added. (§4.25)
05/09/2017 2.8 Antoine Gutierrez NFSv4 configuration added (§3.9)
30/11/2017 2.9 Christophe Auchabie Disk Clearing/Sanitization process
01/12/2017 3.0 Christophe Auchabie Ndmpcopy usage
08/12/2017 3.1 Christophe Auchabie XCP Migration Tool
08/12/2017 3.2 Antoine Gutierrez §4.25 Metrocluster configuration added and
updated.
15/12/2017 3.3 Christophe Auchabie Update XCP / NDMPCopy
07/06/2018 3.4 Mathias Dubourg Update for Ontap 9.3 (Adaptive QoS, Inline
Aggregate-level Data Deduplication)
16/10/2018 3.5 Antoine Gutierrez SVM DR documentation added.
14/12/2018 3.6 Mathias Dubourg Update with NetApp Volume Encryption
16/01/2019 3.7 François Mocard Update with SVM DR description
08/02/2019 3.8 François Mocard Update with SVM DR activation
Table of contents
1 Introduction........................................................................................... 6
1.1 Document purpose.................................................................................................... 6
1.2 Audience................................................................................................................... 6
1.3 Acronyms and Abbreviations..................................................................................... 6
2 Architecture...........................................................................................7
2.1 Naming Conventions................................................................................................. 7
3 Operation instructions............................................................................8
3.1 NetApp management tools........................................................................................ 8
3.1.1 Storage Web GUIs....................................................................................... 8
3.1.2 Storage Command Line Interface (CLI).......................................................8
3.1.3 Storage Serial Console................................................................................ 9
3.2 Qtree Clustered Data ONTAP..................................................................................... 9
3.2.1 Modify Qtrees............................................................................................. 9
3.2.2 Display Qtree Statistics.............................................................................10
3.2.3 Reset Qtree Statistics............................................................................... 10
3.2.4 Delete Empty Qtrees................................................................................ 10
3.2.5 Delete Qtrees that Contain Files...............................................................11
3.2.6 Display Information About Qtrees.............................................................11
3.2.7 Rename a Qtree........................................................................................ 12
3.3 Deduplication Clustered Data ONTAP......................................................................12
3.3.1 Inline Aggregate-level Data Deduplication................................................14
3.4 Compression Clustered Data ONTAP........................................................................ 16
3.4.1 Enable Postprocess Compression..............................................................16
3.4.2 Compress and Deduplicate Existing Data.................................................16
3.4.3 View Compression Space Savings.............................................................16
3.4.4 Turn Off All Compression...........................................................................17
3.4.5 Compression Inline Clustered Data ONTAP................................................17
3.4.6 Enable Inline Compression........................................................................17
3.4.7 Compress and Deduplicate Existing Data.................................................17
3.4.8 View Compression Space Savings.............................................................17
3.4.9 Turn Off Inline Compression...................................................................... 18
3.5 RLM Clustered Data ONTAP..................................................................................... 18
3.5.1 Log in to RLM from Administration Host....................................................18
3.5.2 Connect to Storage System Console from RLM.........................................18
3.5.3 RLM Administrative Mode Functions.........................................................18
3.5.4 RLM Advanced Mode Display Information.................................................20
3.5.5 Manage RLM with Data ONTAP..................................................................20
3.5.6 RLM and SNMP Traps................................................................................ 21
3.5.7 Disable SNMP Traps for Only RLM.............................................................21
3.6 Firewall for Clustered Data ONTAP...........................................................................21
3.7 CIFS Clustered Data ONTAP..................................................................................... 21
3.7.1 Check Active Client Connections...............................................................22
3.7.2 Create an Export Policy............................................................................. 22
3.7.3 Add a Rule to an Export Policy..................................................................22
3.7.4 Create a Name Mapping from Windows to UNIX.......................................22
3.8 NFSv3 Clustered Data ONTAP.................................................................................. 22
3.8.1 Create Export Policy.................................................................................. 22
3.8.2 Add a Rule to an Export Policy..................................................................22
3.8.3 Attach the export policy to the vol root of the vserver..............................23
3.8.4 Create the volume/qtree to export with junction path...............................23
3.9 NFSv4 Clustered Data ONTAP.................................................................................. 23
3.9.1 Considerations.......................................................................................... 23
3.9.2 Pre-requisites............................................................................................ 25
3.9.3 Configuration............................................................................................ 25
3.10 Syslog Clustered Data ONTAP.................................................................................. 29
3.10.1 Display Events.......................................................................................... 29
3.10.2 Display Event Status................................................................................. 29
3.11 User Access for Clustered Data ONTAP....................................................................29
3.11.1 Security-Related User Tasks...................................................................... 29
3.11.2 General User Administration..................................................................... 30
3.12 Data Protection on Clustered Data ONTAP...............................................................30
1 Introduction
1.1 Document purpose
The Configuration & Operation Instruction (COPI) document provides main details
to do
- Storage Reporting/Management
- Storage Troubleshooting
1.2 Audience
The COPI is a reference document for all engineering team members who have
to work on this specific product.
The document may also be used by Implementation team (TIO) and operational
team (TOO).
Provide a list of the acronyms and abbreviations used in this document and the
meaning of each.
2 Architecture
This table describe in which situation the architecture fits.
For more information about supported architectures, please refer to the TSD
documentation
For more information about naming conventions, please refer to the Naming
Conventions documentation : ENG-Naming Convention for NetApp
3 Operation instructions
3.1 NetApp management tools
Storage GUI: NetApp System Manager Web GUI for storage administration
Storage CLI: use the NetApp CLI if you want to perform some operations not
available through GUI
Storage Service Processor CLI: use the NetApp Service Processor CLI if the
general CLI is unavailable or if you want to perform controller
takeovers/givebacks or any other maintenance requiring a reboot
Storage Serial console: use this console only if you can’t connect through
NetApp CLI anymore (network outage, unstable configuration)
NetApp OnCommand Core: use this central GUI to manage and monitor all
NetApp storage controllers.
On the rear panel, plug a Serial Cable into the RS-232 port
Configure the terminal or a terminal emulation utility, such as CRT or
HyperTerminal, to use these serial connection parameters:
- 9600 bits per second
- 8-bit No Parity (8N1)
- 1 Stop Bit
- No flow control
As soon as you connect through the serial port, enter the admin login and the
associated password.
You are now connected to the CLI
To modify an existing qtree to change its security, locking, and permissions properties,
complete the following steps:
1 Use the volume qtree modify command to modify an existing qtree.
2 To modify a qtree, specify the virtual server, volume, and name of the qtree.
3 To modify the attributes of the default qtree qtree0, omit the -qtree parameter from the command
or specify the value "" for the -qtree parameter.
4 Use the following parameters to modify the additional attributes of a qtree:
Security style. Use the -security-style parameter to specify the security style for the qtree.
Possible security style values include unix (for UNIX mode bits), ntfs (for CIFS ACLs), and
mixed (for mixed NFS and CIFS access).
Opportunistic locking. Use the -oplock-mode parameter to enable oplocks for the qtree.
UNIX permissions. Use the -unix-permissions parameter to specify the default UNIX
permissions for the qtree when the value of the -security-style parameter is set to unix or
mixed. Specify UNIX permissions either as a four-digit octal value (for example, 0700) or in
the style of the UNIX ls command (for example, -rwxr-x---).
For information about UNIX permissions, see the UNIX or Linux documentation.
If UNIX permissions are not specified, the qtree inherits them from the volume on which it is
being creating.
A quota policy or quota policy rule cannot be applied to qtree0. If a value for an optional
attribute is not specified, the qtree inherits it from the volume on which it resides.
5 Use the volume qtree show command to display information about the modified qtree.
The following example displays a modified qtree named qtree1. In this example:
0 Virtual server is named vs0
1 Volume containing the qtree is named vol0
2 Qtree security style is UNIX
3 Oplocks is enabled
node::> volume qtree modify -vserver vs0 -volume vol0 -qtree qtree1 -security-style unix
-oplocks enabled
To delete a qtree, specify the virtual server name, volume on which the qtree is located, and the
qtree name.
Do not delete the special qtree referred to as qtree0, which in the CLI is denoted by empty
quotation marks ("") and has the ID zero (0).
If there is a quota policy or quota policy rule associated with a qtree, it is deleted when the qtree is
deleted.
2 Use the volume qtree show command to display information about the remaining qtrees.
The following example shows the command to delete a qtree named qtree4. The virtual
server name is named vs0 and the volume containing the qtree is named vol0.
node::> volume qtree delete -vserver vs0 -volume vol0 -qtree qtree4
To display information about qtrees for volumes that are online, complete the following
step:
1 Use the volume qtree show command to display information about qtrees.
The command output depends on the parameter or parameters specified with the command. If no
parameters are specified, the command displays the following information about all qtrees:
Qtree0—when a volume is created, a special qtree referred to as qtree0 is automatically
created for the volume. It represents all of the data stored in a volume that isn't contained in a
qtree. In the CLI output, qtree0 is denoted by empty quotation marks ("") and has the ID zero
(0). Qtree0 cannot be manually created or deleted.
Virtual server name
Volume name
Qtree name
Security style: UNIX mode bits, CIFS ACLs, or mixed NFS and CIFS permissions
Whether opportunistic locking is enabled
Status
The following example displays default information about all qtrees and each qtree ID. On
the virtual server vs0, none of the qtrees were manually created, therefore, only the
qtrees referred to as qtree0 are shown. On the virtual server vs1, the volume vs1_vol1
contains qtree0 and two manually created qtrees, qtree1 and qtree2.
node::> volume qtree show -id
Virtual
Server Volume Qtree Style Oplocks Status ID
---------- ------------- ------------ ------------ --------- -------- ---
vs0 vs0_vol1 "" unix enable readonly 0
vs0 vs0_vol2 "" unix enable normal 0
vs0 vs0_vol3 "" unix enable readonly 0
vs0 vs0_vol4 "" unix enable readonly 0
vs0 root_vs_vs0 "" unix enable normal 0
vs1 vs1_vol1 "" unix enable normal 0
vs1 vs1_vol1 qtree1 unix disable normal 1
vs1 vsl_vol1 qtree2 unix enable normal 2
vs1 root_vs_vs1 "" unix enable normal 0
9 entries were displayed.
To display detailed information about a single qtree, complete the following step:
1 Execute the volume qtree show command with the -instance and -qtree parameters. The
detailed view provides information about UNIX permissions, the qtree ID, and the qtree status.
3.2.7 Rename a Qtree
Command Summary
volume efficiency on –vserver Enable deduplication on volume.
<vservername> –volume <volname>
volume efficiency start –vserver Optional: Run deduplication against data that existed in
<vservername> –volume <volname> –scan- the volume before deduplication was enabled.
old-data true
Command Summary
volume efficiency modify –vserver Assign or change a deduplication policy for a volume.
<vservername> –volume <volname>
-policy <policy_name>
volume show -vserver <vservername> View deduplication space savings in the volume.
-volume <volname> -fields dedupe-
space-saved, dedupe-space-saved-
percent
Table 2 presents the deduplication commands for new data.
Table 2) Deduplication commands for new data.
Command Summary
volume efficiency show Display which volumes have deduplication enabled and
the current status of any deduplication processes.
volume efficiency policy show Display the deduplication scheduling policies available
within the cluster.
volume efficiency policy create – Create a scheduling policy for a specific virtual storage
vserver <vservername> - policy server (Vserver). This policy can then be assigned to a
<policyname> -schedule <cron job volume. The value for –schedule must correlate to an
schedule> –duration <time interval> existing cron job schedule within the cluster. The value for
-enabled true
–duration represents how many hours to run
deduplication before stopping.
volume efficiency show -fields policy Display the assigned scheduling policies for each volume
that has deduplication enabled.
volume efficiency modify –vserver Modify the deduplication scheduling policy for a specific
<vservername> –volume <volname> volume to run only when 20% of new data is written to the
-schedule auto volume. The 20% threshold can be adjusted by adding
the @num option to auto (auto@num), where num is a two-
digit number that specifies the percentage.
volume efficiency start –vserver Begin the deduplication process on the specified flexible
<vservername> –volume <volname> volume. This process deduplicates any data that has
been written to disk after deduplication was enabled on
the volume. This process does not deduplicate data that
existed on the volume before deduplication was enabled.
volume efficiency stop –vserver Suspend an active deduplication operation running on the
<vservername> –volume <volname> flexible volume without creating a checkpoint.
volume efficiency show –vserver Check the progress of post-process deduplication
<vservername> –volume <volname> operations running on a volume.
-fields progress
volume show –vserver <vservername> – Show the total percentage of space saved in the volume
volume <volname> -fields dedupe-space- by deduplication (deduplicated / [used + deduplicated] x
saved-percent 100).
volume show –vserver <vservername> – Show the total space saved in the volume by
volume <volname> -fields dedupe-space- deduplication.
saved
Table 3 presents the deduplication commands for existing data.
Command Summary
volume efficiency start –vserver Begin deduplication on the specified flexible volume.
<vservername> –volume <volname> –scan- Deduplication uses the latest checkpoint if one exists and
old-data true is less than 24 hours old.
volume efficiency start –vserver Begin deduplication on the specified flexible volume by
<vservername> –volume <volname> – using the existing checkpoint information, regardless of
scan-old-data true –use-checkpoint the age of the checkpoint information.
true
volume efficiency start –vserver Begin deduplication on the specified flexible volume.
<vservername> –volume <volname> – Deduplication disregards any checkpoints that exist and
scan-old-data true –delete-checkpoint bypasses the compression of any blocks that are already
true deduplicated or locked in Snapshot copies.
volume efficiency show -fields Check the progress of post-process deduplication
progress operations.
Table 4 presents the commands that disable deduplication.
Table 4) Commands to disable deduplication.
Command Summary
volume efficiency stop –vserver Suspend an active deduplication process on the flexible
<vservername> –volume <volname> volume without creating a checkpoint.
volume efficiency off –vserver Disable deduplication on the specified volume. No
<vservername> –volume <volname> additional change logging or deduplication operations are
performed, but the flexible volume remains a
deduplicated volume and the storage savings are
preserved.
If this command is used and then deduplication is turned
back on for this flexible volume, the flexible volume
should be rescanned with the volume efficiency
start –scan-old-data true command to gain the
maximum savings.
Beginning with ONTAP 9.2, you can perform Cross Volume Sharing in volumes belonging
to the same aggregate using Inline Aggregate-level Deduplication. Cross Volume
Deduplication is enabled by default on AFF systems (no need to activate).
Beginning with ONTAP 9.3, background deduplication jobs run automatically with
Automatic Background Deduplication (ADS) on AFF systems. ADS is enabled by default for
all newly created volumes. The feature uses the block fingerprints created during the
inline deduplication process.
Aggregate: aggr0
Node: vivek6-vsim2
Aggregate: aggr1
Node: vivek6-vsim2
Aggregate: aggr0
Node: vivek6-vsim2
Aggregate: aggr1
Node: vivek6-vsim2
Table 5 shows the use cases for compression in clustered Data ONTAP.
Table 5) Use cases for compression in clustered Data ONTAP.
Compress and deduplicate existing data on a Compress and Deduplicate Existing Data
volume
Turn off both inline and postprocess compression on Turn Off All Compression
a volume
3.4.1 Enable Postprocess Compression
2 View the total percentage of space savings achieved through the compression of a volume.
volume show –vserver <<var_vserver01>> –volume <<var_vol01>> –fields compression-space-saved–
percent
Table 6 shows some use cases for inline compression of clustered systems.
Table 6) Use cases for compression inline clustered Data ONTAP.
Compress and deduplicate existing data on a Compress and Deduplicate Existing Data
volume
6 View the total percentage of space savings achieved through the compression of a volume.
volume show –vserver <<var_vserver01>> –volume <<var_vol01>> –fields compression-space-saved–
percent
Perform administrative tasks remotely. Connect to Storage System Console from RLM
Configure and define RLM settings through the Manage RLM with Data ONTAP
storage controller interface.
3.5.1 Log in to RLM from Administration Host
To connect to the storage system console from the RLM, complete the following steps:
1 Run the following command at the RLM prompt:
system console
In the RLM administrative mode, use the RLM commands to perform most tasks. Table 8
lists the RLM commands used in administrative mode:
Table 8) RLM commands used in administrative mode.
Function Command
Display system date and time date
Display storage system events logged by the RLM events {all | info | newest | oldest |
search string }
Set the privilege level to access the specified mode priv set {admin | advanced | diag}
Function Command
Display status for each power supply, such as system power status
presence, input power, and output power
Reset the storage system using the specified system reset {primary | backup | current}
firmware image The RLM remains operational as long as
input power to the storage system is not
interrupted.
Display the RLM version information, including version
hardware and firmware information
3.5.4 RLM Advanced Mode Display Information
Function Command
Display the RLM command history or search for audit rlm log audit
logs from the SEL
Manage the RLM from Data ONTAP using the rlm commands in the nodeshell. Some
management functions include:
4 Set up the RLM
5 Reboot the RLM
6 Display the status of the RLM
7 Update the RLM firmware
Table 10 provides a complete list of the Data ONTAP commands needed to manage the
RLM.
Table 10) Data ONTAP commands.
Reboot the RLM and trigger the RLM to perform a rlm reboot
self test Any console connection through the RLM is
lost during the reboot.
Initiate the interactive RLM setup script rlm setup
If SNMP is enabled for the RLM, the RLM generates SNMP traps to configured trap hosts
for all down system events.
2 Enable SNMP traps for both Data ONTAP and the RLM, but disable the SNMP traps for only
the RLM and leave the SNMP traps for Data ONTAP enabled.
3.5.7 Disable SNMP Traps for Only RLM
To disable SNMP traps for the RLM, complete the following steps:
1 To disable SNMP traps for the RLM, enter the rlm.snmp.traps off command in the nodeshell.
2 The default value is on.
3 Leave SNMP traps for Data ONTAP enabled. Do not enable SNMP traps for the RLM
when SNMP traps for Data ONTAP are disabled. When SNMP traps for Data ONTAP are
disabled, SNMP traps for the RLM are also disabled.
3.6 Firewall for Clustered Data ONTAP
1 From the cluster shell, show the default node-specific firewall options.
firewall show
4 The purpose of firewall policies is to give the user the flexibility to allow management
services such as ssh, http, ntp, on data interfaces. To modify a particular firewall policy, run
the following command.
Firewall policy modify –policy <<var_policy01>>
4 Enter any of the required system services, such as dns, http, https, ndmp, ntp, snmp, ssh,
or telnet.
6 Clone a firewall policy.
The administrator wants to create a new export Add a Rule to an Export Policy
policy rule to govern the behavior of new shares.
The administrator wants to map Windows domain Create a Name Mapping from Windows to UNIX
users to their UNIX IDs.
3.7.1 Check Active Client Connections
1 To check for active client connections, run the network connections active show-
clients command and look for the client host name or IP address.
network connections active show-clients
1 To create an export rule for an export policy, use the vserver export-policy rule create
command. This makes it possible to define client access to data.
vserver export-policy rule create -vserver virtual_server_name -policyname <<var_policy01>>
-ruleindex 1 -protocol cifs -clientmatch 0.0.0.0/0 -rorule any -rwrule any -anon 65535
1 To create a name mapping from Windows to UNIX, such as mapping every user in the
domain to the UNIX text equivalent, enter the vserver name-mapping create command.
vserver name-mapping create -vserver <<var_vserver01>> -direction win-unix -position 1
-pattern "<<var_ad_domainname>>\\(.+)" -replacement "\1"
The first with ro rule, attached to the root volume and the volume parent to the qtree, and the
second with rw rule attached to the qtree
2 To create an export rule for an export policy, use the vserver export-policy rule create
command. This makes it possible to define client access to data.
3 For the ro rule :
vserver export-policy rule create -vserver virtual_server_name -policyname <<var_policy01>>
-ruleindex 1 -protocol nfs -clientmatch 0.0.0.0/0 -rorule any -rwrule none -anon 65535
3.8.3 Attach the export policy to the vol root of the vserver
Create the volume with junction path and the ro export rule
vol qtree create -vserver virtual_server_name -volume vol_name_01 -qtree qtr01 -security-style
unix -oplock-mode enable -export-policy <<var_policy02>>
3.9.1 Considerations
- User authentication. You must plan how users are authenticated. There are
two options, Kerberos and standard UNIX password authentication.
- Directory and file access. You must plan for access control on files and
directories. Depending on their business needs, they can choose standard
UNIX permissions or Windows-style ACLs.
- NFSv4 ACL configuration for POSIX ACLs. NFSv4 only supports NFSv4 ACLs,
not POSIX ACLs. Any data that is migrated from third-party storage must
manually set NFSv4 ACLs if they are required.
- Mounting NFSv4 file systems. Mounting file systems over NFSv4 is not the
same for all NFS clients. Clients mount using different syntax based on
what kernel is being used. Some clients mount NFSv4 by default, so the
same considerations must be used if NFSv3 is desired. youmust plan
appropriately to correctly mount file systems to the NFSv4 client.
- NFSv3 and NFSv4 clients can coexist. The same file system can be mounted
over NFSv3 and NFSv4. However, any ACLs set on a file or directory from
NFSv4 are enforced for NFSv3 mounts as well. Setting permissions on a
folder with mode bits (rwx) affects the NFSv4-style ACL. For example, if a
directory is set to 777 permissions, then the NFSv4 ACL for “everyone” is
added. If a directory is set to 770, then “everyone” is removed from the
NFSv4 ACL. This can be prevented using ACL preservation options,
described in section 5.
3.9.2 Pre-requisites
- The Client system must have all NFSv4 package installed and
configured.
3.9.3 Configuration
On the client, install the NFSv4 package (depend of your distribution and OS,
here it’s tested with CentOS).
On filer:
Exemple:
The vserver nfs options must be like the following, modify the options in order.
- You should allow NFSv4 user and group IDs as numeric strings
If you use local account instead of an authentication server, you should create a
local account on the client, and on the cluster. The user on client and cluster
must be exactly the same that include their UID and GID.
nfs4user:x:1000:1000::/home/nfs4user:/bin/bash
nfs4user:x:1000:
- The NFS export must be applied on / and /vol in read only, and then
another export for the NFS volume in read & write.
You should replace 0.0.0.0/0 by the appropriate client IP that will mount the NFS
share.
If the volume exists, then modify with the good export policy:
- Create the data volume using the good UID & GID
1 View more detailed information about a specific type of event such as occurrence, last
occurrence, and number of events dropped.
event status show
Parameter Values
–application console, http, ontapi, service-processor, snmp, and ssh (for example).
Specific values depend on the application-level access.
Parameter Values
–role admin, none, readonly, vsadmin, vsadmin-protocol, vsadmin-readonly and
vsadmin-volume (for example). Specific values depend on the granularity
of access.
4 Run the following command to switch to a virtual storage server (Vserver) context.
vserver context –vserver <<var_vserver01>>
Parameter Values
–role vsadmin, vsadmin-volume, vsadmin-protocol and vsadmin-readonly
(for example)
–change-delay 0–1000
–username-minsize 3–16
–username-alphanum Enabled or disabled
–passwd-minsize 3–64
–passwd-alphanum Enabled or disabled
–disallowed-reuse 1–25
3.11.2 General User Administration
6 Run the following command to modify the access-control role of a user's login.
security login modify –username
A set of files must be restored, but their exact Restore Entire Volume from Snapshot Copy
name or location is not known.
To restore a single file from a Snapshot copy, complete the following step:
2 Run the following CLI operation.
volume snapshot restore-file -vserver <<var_vserver01>> -volume <<var_vol01>> -snapshot
<<var_snap01>> -path <<var_file_path>>
The snapmirror delete command can be executed on either the destination or the source
cluster, but with different results. When the command is executed from the destination
cluster, the SnapMirror relationship information as well as the Snapshot copy owners
associated with the relationship are removed from both clusters as long as both clusters
are accessible.
If the source cluster is inaccessible, only the relationship information and associated
Snapshot copy owners on the destination cluster are removed. In this case, a note is
generated indicating that only the destination relationship information and Snapshot copy
owners were removed. The remaining source cluster relationship information and
Snapshot copy owners can be removed by performing a snapmirror delete on the source
cluster independently.
Running snapmirror delete on the source cluster only removes the relationship
information and Snapshot copy owners on the source cluster. It does not contact the
destination cluster. A warning/confirmation request is issued by the command when it is
executed on the source cluster indicating that only the source-side information will be
deleted. It can be overridden by using the -force option on the CLI.
To delete a SnapMirror relationship, complete the following step:
3 Run snapmirror delete.
snapmirror delete <<var_cluster02>>://<<var_vserver02>>/<<var_vol02>>
Basic information about cloned volumes is required. View Basic FlexClone Information
6 The volume clone create command is not supported on Vservers with NetApp Infinite
Volume.
3.13.2 View Basic FlexClone Information
FlexClone volume splitting is used to assign physical space to the virtual volume. The
following sections describe operating procedures for using the clone split command.
3.13.5 View Space Estimates
To view the free disk space estimates for FlexClone volumes residing on the Vserver,
complete the following step:
4 From the cluster shell, run the volume clone split estimate command.
cluster1::> volume clone split estimate -vserver <<var_vserver>>.
(volume clone split estimate)
Split
Vserver FlexClone Estimate
--------- ------------- ----------
vs0 fc_vol_1 851.5MB
fc_vol_3 0.00B
flex_clone1 350.3MB
fv_2 47.00MB
tv9 0.00B
5 entries were displayed.
7 The space estimate reported might differ from the space actually required to perform the
split, especially if the cloned volume is changing while the split is being performed.
8 The volume clone split estimate command is not supported on Vservers with Infinite
Volume.
To split or separate a FlexClone volume from the parent Vserver, complete the following
step:
5 From the cluster shell, run the volume clone split start command.
volume clone split start -vserver <<var_vserver>> -flexclone <<var_vol01>>_1 –foreground true
9 Both the parent volume and the FlexClone volume are available during this operation.
10 The volume clone split start command is not supported on Vservers with Infinite
Volume.
3.13.7 Stop FlexClone Split
11 This procedure stops the process of separating the FlexClone volume from its underlying
parent volume without losing any progress achieved during the split operation.
12 The volume clone split stop command is not supported on Vservers with Infinite
Volume.
3.13.8 View FlexClone Split Status
To view the status of active FlexClone volume splitting operations, complete the following
step:
7 From the cluster shell, run the volume clone split show command.
cluster1::> volume clone split show
(volume clone split show)
Inodes Blocks
--------------------- ---------------------
Vserver FlexClone Processed Total Scanned Updated % Complete
--------- ------------- ---------- ---------- ---------- ---------- ----------
vs1 fc_vol_1 0 1260 0 0 0
13 Use the -instance option to view detailed information about all volume-splitting
operations.
The volume clone split show command is not supported on Vservers with Infinite
Volume.
3.14 QOS
The following storage objects can have storage QoS policies applied to them:
SVM
FlexVol volume
LUN
File
Beginning with ONTAP 9.3, Storage QoS supports adaptive QoS. Adaptive QoS
automatically scales a throughput ceiling or floor, maintaining the ratio of IOPS to TBs|
GBs as the size of the volume changes (volumes only). By default, 3 adaptive QOS are
defined but we can create new ones :
After you create an adaptive policy group, use the volume create command or volume
modify command to apply the adaptive policy group to a volume.
4 Configuration instructions
4.1 Disks
All disks owned by the storage system must be initialized when repurposing or reinstalling
a NetApp storage controller. This action wipes all data from the disks, including
aggregates, volumes, qtrees, and LUNs.
3 The controller and disks must be rebooted to begin disk initialization.
4 The disks that will be initialized must be owned by that node.
To initialize disks, complete the following steps:
8 Reboot the storage controller by running the halt command.
9 At the Loader prompt, type autoboot.
10 Press Ctrl-C to bring up the Boot menu.
*******************************
* *
* Press Ctrl-C for Boot Menu. *
* *
*******************************
mkdir: /cfcard/cores: Read-only file system
^C
The Boot Menu will be presented because an alternate boot method was specified.
12 Enter y to zero the disks, reset the configuration, and to install the new file system.
Zero disks, reset config and install a new file system?: y
Data ONTAP 8.3 introduced a new feature called Advanced Drive Partitioning.
With this feature, a single disk drive can be partitioned into multiple logical
partitions.
In the current shipping releases, a single hard disk drive can be partitioned into
two (one small and one large) logical partitions on entry-level systems.
Additionally, a solid state drive can be partitioned into two logical partitions on
All Flash FAS systems to maximize the usable capacity.
Also, the solid state drives can be partitioned into four equal partitions to be used
on hybrid aggregates as a shared storage pool to maximize the usable capacity.
Activating it during the RUN is not possible unless you erase all data on the filer,
and you reconfigure the filer from scratch.
This shows aggregates which have been created with ADP disks. When you run
the storage disk show command, the disks which are partitioned by ADP will have
a 'P1' or 'P2' appended to the end. You can also run this command to view the
partitioned disks:
- You should have decided which node will be the active node and which
node will be the passive node.
This procedure is designed for nodes for which no data aggregate has been
created from the partitioned disks.
Steps
Example
You can see that half of the data partitions are owned by one node and half are
owned by the other node. All of the data partitions should be spare.
set advanced
For each data partition owned by the node that will be the passive node, assign it
to the active node:
You do not need to include the partition as part of the disk name.
Example
You would enter a command similar to the following example for each data
partition you need to reassign:
storage disk assign -force -data true -owner cluster1-01 -disk 1.0.3
Confirm that all of the partitions are assigned to the active node.
Example
set admin
Create your data aggregate, leaving at least one data partition as spare:
To enable SSDs to be shared by multiple Flash Pool aggregates, you place them
in a storage pool. After you add an SSD to a storage pool, you can no longer
manage it as a stand-alone entity—you must use the storage pool to assign or
allocate the storage provided by the SSD.
You create storage pools for a specific HA pair. Then, you add allocation units
from that storage pool to one or more Flash Pool aggregates owned by the same
HA pair. Just as disks must be owned by the same node that owns an aggregate
before they can be allocated to it, storage pools can provide storage only to Flash
Pool aggregates owned by one of the nodes that owns the storage pool.
If you need to increase the amount of Flash Pool cache on your system, you can
add more SSDs to a storage pool, up to the maximum RAID group size for the
RAID type of the Flash Pool caches using the storage pool. When you add an SSD
to an existing storage pool, you increase the size of the storage pool's allocation
units, including any allocation units that are already allocated to a Flash Pool
aggregate.
You should provide one or more spare SSDs for your storage pools, so that if an
SSD in that storage pool becomes unavailable, Data ONTAP can use a spare SSD
to reconstruct the partitions of the malfunctioning SSD. You do not need to
reserve any allocation units as spare capacity; Data ONTAP can use only a full,
unpartitioned SSD as a spare for SSDs in a storage pool.
After you add an SSD to a storage pool, you cannot remove it, just as you cannot
remove disks from an aggregate. If you want to use the SSDs in a storage pool as
discrete drives again, you must destroy all Flash Pool aggregates to which the
storage pool's allocation units have been allocated, and then destroy the storage
pool.
Storage pools do not support a diskcount parameter; you must supply a disk list
when creating or adding disks to a storage pool.
Steps
The SSDs used in a storage pool can be owned by either node of an HA pair.
After the SSDs are placed into the storage pool, they no longer appear as spares
on the cluster, even though the storage provided by the storage pool has not yet
been allocated to any Flash Pool caches. The SSDs can no longer be added to a
RAID group as a discrete drive; their storage can be provisioned only by using the
allocation units of the storage pool to which they belong.
When you add SSDs to an SSD storage pool, you increase the storage pool's
physical and usable sizes and allocation unit size. The larger allocation unit size
also affects allocation units that have already been allocated to Flash Pool
aggregates.
You must have determined that this operation will not cause you to exceed the
cache limit for your HA pair. Data ONTAP does not prevent you from exceeding
the cache limit when you add SSDs to an SSD storage pool, and doing so can
render the newly added storage capacity unavailable for use.
When you add SSDs to an existing SSD storage pool, the SSDs must be owned by
one node or the other of the same HA pair that already owned the existing SSDs
in the storage pool. You can add SSDs that are owned by either node of the HA
pair.
Steps
View the current allocation unit size and available storage for the storage pool:
The system displays which Flash Pool aggregates will have their size increased by
this operation and by how much, and prompts you to confirm the operation.
4.2 Networking
This type of interface group requires two or more Ethernet interfaces and an external
switch or switches that follow the 802.3ad (static) IEEE standard.
4.2.2 Create Static Multimode Interface Group
To add ports to the static multimode interface group, complete the following step:
12 Run the following command from the clustershell interface:
network port ifgrp add-port –node <<var_node01>> –ifgrp <<var_ifgrp01>> –port <<var_port1>>
network port ifgrp add-port –node <<var_node01>> –ifgrp <<var_ifgrp01>> –port <<var_port2>>
2 To add ports to the single-mode interface group, use the following command from the clustershell
interface:
network port ifgrp add-port –node <<var_node01>> –ifgrp <<var_ifgrp01>> –port <<var_port1>>
network port ifgrp add-port –node <<var_node01>> –ifgrp <<var_ifgrp01>> –port <<var_port2>>
4.2.4.2 Control Single-Mode Interface Group Port Favoring
To control manually which ports will be favored or unfavored in the single-mode interface
group, complete the following steps using the ifgrp command from the cluster interface:
This type of interface group requires two or more Ethernet interfaces and a switch that
supports LACP. Therefore, make sure that the switch is configured properly.
1 Run the following command on the command line. This example assumes that there are two
network interfaces called e0a and e0b and that an interface group called <<var_vif01>> is being
created.
network port ifgrp create -node <<var_node01>> -ifgrp <<var_vif01>> -distr-func ip -mode
multimode_lacp
network port ifgrp add-port -node <<var_node01>> -ifgrp <<var_vif01>> -port e0a
network port ifgrp add-port -node <<var_node01>> -ifgrp <<var_vif01>> -port e0b
16 All interfaces must be in the down status before being added to an interface group.
17 The interface group name must follow the standard naming convention of x0x.
4.2.6 VLAN Clustered Data ONTAP
1 Run the following command from the clustershell to create a VLAN network port. This example
assumes that there is a physical network port called e0d on a cluster node named <<var_node01>>
and that traffic tagging is enabled with the VLAN ID VLAN 10.
network port vlan create –node <<var_node01>> -vlan-name e0d-10
1 To configure a clustered Data ONTAP network port to use jumbo frames (which usually have an
MTU of 9,000 bytes), run the following command from the clustershell:
network port modify –node <<var_node01>> -port <network_port> -mtu <<var_mtu>>
WARNING: Changing the network port settings will cause a serveral second interruption in
carrier.
Do you want to continue? {y|n}: y
6 The network port identified by -port can be a physical network port, a VLAN network port, or
an IFGRP.
4.3 IPSPACE creation
4.3.1 Ipspace
To establish a serial connection to the storage console port, complete the following steps:
1 Use the serial console cable shipped with the storage controller to connect the serial console port
on the controller to a computer workstation.
19 The serial console port on the storage controller is indicated by IOIOI.
2 Configure access to the storage controller by opening a terminal emulator application, such as
HyperTerminal in a Windows environment or tip in a UNIX environment.
3 Enter the following serial connection settings:
Bits per second: 9600
Data bits: 8
Parity: none
Stop bits: 1
Flow control: none
4 Power on the storage controller.
4.4.2 Validate Storage Controller Configuration
To verify that the system is ready and configured to boot to clustered Data ONTAP 8.2,
complete the following steps on all of the nodes:
1 Stop the autoboot process by pressing Ctrl+C.
2 At the LOADER prompt, enter the following command:
setenv bootarg.init.boot_clustered
20 Depending on the number of disks attached, the initialization and creation of the root
volume can take 75 minutes or more to complete.
8 After initialization is complete, wait while the storage system reboots.
4.4.4 Create Cluster
The Cluster Setup wizard is used to create the cluster on the first node. The wizard helps
in completing the following tasks:
8 Configuring the cluster network that connects the nodes (if the cluster consists of two or more
nodes)
9 Creating the cluster admin Storage Virtual Machine (SVM), formerly known as Vserver
10 Adding feature license keys
11 Creating the node management interface for the first node
8 The storage system hardware should be installed and cabled, and the console should be
connected to the node on which you intend to create the cluster.
To create the cluster, complete the following steps:
1 On system boot, verify that the console output displays the cluster setup wizard.
Welcome to the cluster setup wizard.
You can return to cluster setup at any time by typing "cluster setup". To accept a default or
omit a question, do not enter a value.
21 If a login prompt appears instead of the Cluster Setup wizard, you must start the wizard by
logging in by using the factory default settings and then entering the cluster setup
command.
2 Enter the following command to create a new cluster:
create
4 NetApp recommends accepting the system defaults. To accept the system defaults, press Enter.
5 Follow the prompts to complete the Cluster Setup wizard. NetApp recommends accepting the
defaults. To accept the defaults, press Enter for each prompt.
6 After the Cluster Setup wizard is completed and exits, verify that the cluster is active and that the
first node is healthy by typing cluster show and pressing Enter:
cluster show
Node Health Eligibility
--------------------- ------- ------------
cluster1-01 true true
9 You can access the Cluster Setup Wizard to change any of the values you entered for the
admin SVM or node SVM by using the cluster setup command.
After creating a new cluster, for each remaining node, use the Cluster Setup wizard to join
the node to the cluster and create its node management interface.
10 The storage system hardware should be installed and cabled, and the console should be
connected to the node on which you intend to create the cluster.
To join the node to the cluster, complete the following steps:
1 Power on the node. The node boots, and the Cluster Setup wizard opens on the console.
Welcome to the cluster setup wizard.
You can return to cluster setup at any time by typing "cluster setup". To accept a default or
omit a question, do not enter a value.
3 Follow the prompts to set up the node and join it to the cluster:
To accept the default value for a prompt, press Enter.
To enter a different value for the prompt, type the value and press Enter.
4 After the Cluster Setup wizard is completed and exits, verify that the node is healthy and eligible to
participate in the cluster. Type cluster show and press Enter:
cluster show
Node Health Eligibility
--------------------- ------- ------------
cluster1-01 true true
cluster1-02 true true
11 If a login prompt displays instead of the Cluster Setup wizard, start the wizard by logging in
using the factory default settings and then enter the cluster setup command.
2 Enter the following command to create a new cluster:
create
4 NetApp recommends accepting the system defaults. To accept the system defaults, press Enter.
5 Follow the prompts to complete the Cluster Setup wizard. NetApp recommends accepting the
defaults. To accept the defaults, press Enter for each prompt.
4.4.7 Cluster Join for Clustered Data ONTAP
12 If a login prompt displays instead of the Cluster Setup wizard, start the wizard by logging in
using the factory default settings, and then enter the cluster setup command.
2 Enter the following command to join a cluster:
join
4 NetApp recommends accepting the system defaults. To accept the system defaults, press Enter.
5 Follow the prompts to complete the Cluster Setup wizard. NetApp recommends accepting the
defaults. To accept the defaults, press Enter for each prompt.
22 command.
To reassign a spare disk to a different node in the same HA pair, complete the following
steps:
1 Run the following command:
storage disk modify -disk <<var_disk01>> -owner <<var_node01>> -force-owner true
13 Disks can be assigned to either storage controller in an HA pair. Use the information obtained
from the sizing done during the presales process to determine how many disks must be assigned to
each node to support each application's workload.
4.6 Flash Cache Clustered Data ONTAP
14 Data ONTAP 8.1 and later do not require a separate license for Flash Cache.
A 64-bit aggregate containing the root volume is created during the Data ONTAP setup
process. To create additional 64-bit aggregates, determine the aggregate name, the node
on which to create it, and how many disks it will contain.
4.7.1.1 Create New Aggregate
15 Leave at least one disk (select the largest disk) in the configuration as a spare. A best practice
is to have at least one disk spare for each disk type and size.
16 Because Data ONTAP 8.2 has a 64-bit default value, the -B 64 option is not needed.
4.8 Compression Clustered Data ONTAP
Table 16) Compression clustered Data ONTAP prerequisite.
Description
Compression can be enabled only on FlexVol volumes that exist within a 64-bit aggregate.
4.8.1.1 Enable Data Compression
Before configuring the Remote LAN Module (RLM), gather information about the network
and the AutoSupport settings.
Configure the RLM using DHCP or static addressing. To use static addressing, first gather
the following information:
12 An available static IP address
13 The netmask of the network
14 The gateway address of the network
15 Autosupport information
As a best practice, configure at least the AutoSupport recipients and mail host and then
configure the RLM; the name or the IP address of the AutoSupport mail host is required.
Data ONTAP automatically sends AutoSupport configuration to the RLM, allowing the RLM
to send alerts and notifications through an AutoSupport message to the system
administrative recipients specified in AutoSupport.
4.9.1.1 Configure RLM
To configure and enable the service processor (SP), complete the following step:
1 Run the following command:
system node service-processor network modify -node <<var_node01>> -address-type <IPv4> -enable
<true> -ip-address <<var_ipaddress>> -netmask <<var_netmask>> -gateway <<var_gateway>>
Where
-address-type specifies whether the IPv4 or IPv6 configuration of the SP should be modified.
-enable enables the network interface of the specified IP address type.
-dhcp specifies whether to use the network configuration from the DHCP server or the
network address that you provide.
23 You can enable DHCP (by setting -dhcp to v4), but only if you are using IPv4. You cannot
enable DHCP for IPv6 configurations.
-ip-address specifies the public IP address for the SP.
2 If needed, create the FC service on each Vserver. This command also starts the FC service and
sets the FC alias to the name of the Vserver.
vserver fcp create -vserver <<var_vserver01>>
5 If needed, make an FC port into a target to allow connections into the node.
For example, make a port called <<var_fctarget01>> into a target port by running the
following command:
system node run -node <<var_node01>> fcadmin config -t target <<var_fctarget01>>
17 If an initiator port is made into a target port, a reboot is required. NetApp recommends
rebooting after completing the entire configuration because other configuration steps might also
require a reboot.
4.12.4 iSCSI Clustered Data ONTAP
2 If needed, create the iSCSI service on each Vserver. This command also starts the iSCSI service
and sets the iSCSI alias to the name of the Vserver.
vserver iscsi create -vserver <<var_vserver01>>
19 To modify an existing entry, replace the word create with modify in the command.
4.14 NTP Clustered Data ONTAP
To configure time synchronization on the cluster, complete the following steps on each
node in the cluster:
1 Run the system services ntp server create command to associate the node with the NTP
server.
system services ntp server create -node <<var_node01>> -server
<<var_global_ntp_server_ip_addr>> -version max
2 Run the cluster date show command to verify that the date, system time, and time zone are set
correctly for each node.
24 All nodes in the cluster should be set to the same time zone.
cluster date show
Node Date Timezone
------------ ------------------- -----------------
cluster1-01 04/06/2013 09:35:15 America/New_York
cluster1-02 04/06/2013 09:35:15 America/New_York
cluster1-03 04/06/2013 09:35:15 America/New_York
cluster1-04 04/06/2013 09:35:15 America/New_York
cluster1-05 04/06/2013 09:35:15 America/New_York
cluster1-06 04/06/2013 09:35:15 America/New_York
6 entries were displayed.
3 To correct any time zone or date values, run cluster date modify to change the date or time
zone on all of the nodes.
1 Configure SNMP basic information, such as the location and contact. When polled, this information
is visible as the sysLocation and sysContact variables in SNMP.
snmp contact "Services Engineering"
snmp location "Firebird Lab"
snmp init 1
options snmp.enable on
2 Configure SNMP traps to send to remote hosts, such as a DFM server or another fault
management system.
snmp traphost add <<var_dfm_server_fqdn>>
4.15.1.1 SNMPv1 Cluster-Mode
20 Use the delete all command with caution. If community strings are used for other
monitoring products, the delete all command will remove them.
4.15.1.2 SNMPv3 Cluster-Mode
2 Select all of the default authoritative entities and select md5 as the authentication protocol.
3 Enter an 8-character minimum-length password for the authentication protocol, when prompted.
4 Select des as the privacy protocol.
5 Enter an 8-character minimum-length password for the privacy protocol, when prompted.
4.16 Syslog Clustered Data ONTAP
Table 17) Syslog prerequisites.
Description
The SMTP server with appropriate mail IDs is configured.
The remote server has syslogd running and listening on the appropriate UDP port.
Complete the following steps to configure syslog on the cluster:
1 Set up the preconfigured SMTP server as the default mail server. Use the mail ID as it appears in
the From: field of the e-mail.
event config modify -mailfrom <<var_admin_username>> -mailserver <<var_site_a_mailhost>>
4 Optimize the information sent to the destination on a specific frequency and time interval.
Best Practice
Delete or lock the default admin account.
There are two default administrative accounts: admin and diag. The admin account serves
in the role of administrator and is allowed access using all applications. To set up user
access for clustered Data ONTAP, complete the following steps:
1 Create a login method for a new administrator from clustershell.
security login create –username <<var_username>> -authmethod password -role admin -application
ssh
security login create –username <<var_username>> -authmethod password -role admin -application
http
security login create –username <<var_username>> -authmethod password -role admin -application
console
security login create –username <<var_username>> -authmethod password -role admin -application
ontapi
security login create –username <<var_username>> -authmethod password -role admin -application
service-processor
Secure access to the storage controller must be configured. To configure secure access,
complete the following steps:
1 Increase the privilege level to access the certificate commands.
set -privilege advanced
Do you want to continue? {y|n}: y
2 Generally, a self-signed certificate is already in place. Check it with the following command:
security certificate show
3 If a self-signed certificate does not exist, run the following command as a one-time command to
generate and install a self-signed certificate:
21 You can also use the security certificate delete command to delete expired
certificates.
security certificate create -vserver <<var_vserver01>> -common-name
<<var_security_cert_common_name>> -size 2048 -country US -state CA -locality Sunnyvale
-organization IT -unit Software -email-addr user@example.com
4 Configure and enable SSL and HTTPS access and disable Telnet access.
system services web modify -external true -sslv3-enabled true
Do you want to continue {y|n}: y
system services firewall policy delete -policy mgmt -service http -action allow
system services firewall policy create -policy mgmt -service http -action deny -ip-list
0.0.0.0/0
system services firewall policy delete -policy mgmt -service telnet -action allow
system services firewall policy create -policy mgmt -service telnet -action deny -ip-list
0.0.0.0/0
security ssl modify -vserver <<var_vserver01>> -certificate <<var_security_cert_common_name>>
-enabled true
22 It is normal for some of these commands to return an error message stating that the entry
does not exist.
4.20 NDMP Clustered Data ONTAP
NDMP must be enabled before it can be used by a NetApp storage controller. To enable
NDMP, complete the following steps:
1 Enable NDMP on each node in the cluster.
system services ndmpd on -node <<var_node01>>
Starting with Data ONTAP 8.2, you can choose to perform tape backup and restore
operations either at the node level as you have been doing until now or at the Storage
Virtual Machine (SVM) level. To perform these operations successfully at the SVM level,
NDMP service must be enabled on the SVM.
If you upgrade from Data ONTAP 8.1 to Data ONTAP 8.2, NDMP continues to follow node-
scoped behavior. You must explicitly disable node-scoped NDMP mode to perform tape
backup and restore operations in the SVM-scoped NDMP mode.
If you install a new Data ONTAP 8.2 cluster, NDMP is in the SVM-scoped NDMP mode by
default. To perform tape backup and restore operations in the node-scoped NDMP mode,
you must explicitly enable the node-scoped NDMP mode.
In this mode, you can perform tape backup and restore operations on a node that owns
the volume. To perform these operations, you must establish NDMP control connections
on a LIF hosted on the node that owns the volume or tape devices.
What SVM-scoped NDMP mode is:
Starting with Data ONTAP 8.2, you can perform tape backup and restore operations at the
Storage Virtual Machine (SVM) level successfully if the NDMP service is enabled on the
SVM. You can back up and restore all volumes hosted across different nodes in an SVM of
a cluster if the backup application supports the CAB extension.
An NDMP control connection can be established on different LIF types. In the SVM-scoped
NDMP mode, these LIFs belong to either the data SVM or admin SVM. Data LIF belongs to
the data SVM and the intercluster LIF, node-management LIF, and cluster-management
LIF belong to the admin SVM. The NDMP control connection can be established on a LIF
only if the NDMP service is enabled on the SVM that owns this LIF.
In the SVM context, the availability of volumes and tape devices for backup and restore
operations depends upon the LIF type on which the NDMP control connection is
established and the status of the CAB extension. If your backup application supports the
CAB extension and a volume and tape device share the same affinity, then the backup
application can perform a local backup or restore operation instead of a three-way backup
or restore operation.
There are commands for viewing information about tape drives and media changers in a
cluster, bringing a tape drive online and taking it offline, modifying the tape drive
cartridge position, setting and clearing tape drive alias name, and resetting a tape drive.
You can also view and reset tape drive statistics.
You have to access the nodeshell to use some of the commands listed in the following
table. You can access the nodeshell by using the system node run command.
Enable or disable a tape trace operation for a tape drive storage tape trace
Set an alias name for tape drive or media changer storage tape alias set
Take a tape drive offline storage tape offline
View information about all tape drives and media changers storage tape show
View information about tape drives attached to the cluster storage tape show-tape-drive
View information about media changers attached to the storage tape show-media-changer
cluster
View error information about tape drives attached to the storage tape show-errors
cluster
View all Data ONTAP qualified and supported tape drives storage tape show-supported-status
attached to each node in the cluster
View aliases of all tape drives and media changers storage tape alias show
attached to each node in the cluster
Reset the statistics reading of a tape drive to zero storage stats tape zero tape_name
Virus scanning is available only for CIFS-related traffic. This procedure indicates how to
connect enable antivirus scanning on a SVM.
Prerequisites :
Best Practice
Credentials used as service accounts to run the Antivirus Connector service must be added as
privileged users in the scanner pool.
The same service account must be used to run the antivirus engine service.
/!\ To finalize the configuration, please refer to the NetApp Antivirus Solution
for Clustered Ontap INM
1.1 SnapLock
SnapLock Compliance and Enterprise modes differ mainly in the level at which
each mode protects WORM files:
A related difference involves how strictly each mode manages file deletes:
• Enterprise-mode WORM files can be deleted during the retention period by the
compliance administrator, using an audited privileged delete procedure.
After the retention period has elapsed, you are responsible for deleting any files
you no longer need. Once a file has been committed to WORM, whether under
Compliance or Enterprise mode, it cannot be modified, even after the retention
period has expired.
The following table shows the differences between SnapLock Compliance and
Enterprise modes.
The SnapLock ComplianceClock ensures against tampering that might alter the
retention period for WORM files. You must initialize the system ComplianceClock
on each node that hosts a SnapLock aggregate. Once you initialize the
ComplianceClock on a node, you cannot initialize it again.
• You cannot destroy a Compliance aggregate until the retention period has
elapsed.
Command:
You must create a SnapLock volume for the files or Snapshot copies that you
want to commit to the WORM state. The volume inherits the SnapLock mode
—Compliance or Enterprise—from the SnapLock aggregate, and the
ComplianceClock time from the node.
• You must have created a standard aggregate and a SVM to host the
SnapLock volume.
Command:
SnapLock uses the default retention period to calculate the retention time.
• Default retention period, with a default value that depends on the mode:
You commit a file to WORM manually by making the file read-only. You can
use any suitable command or program over NFS or CIFS to change the read-
write attribute of a file to read-only.
# chmod -w document.txt
# attrib +r document.txt
The autocommit period specifies the amount of time that files must remain
unchanged before they are autocommitted. Changing a file before the
autocommit period has elapsed restarts the autocommit period for the file.
6. Add rules to the policy that define Snapshot copy labels and the retention
policy for each label:
The following command adds a rule to the SVM1-vault policy that defines the
Daily label and specifies that 30 Snapshot copies matching the label should
be kept in the vault:
8. On the destination SVM, create the SnapVault relationship and assign the
SnapVault policy and schedule:
# snapmirror create -source-path source_path -destination-path
destination_path -type XDP -policy policy_name -schedule
schedule_name
The following command initializes the relationship between the source volume
srcvolA on SVM1 and the destination volume dstvolB on
# SVM2: SVM2::> snapmirror initialize -destination-path
SVM2:dstvolB
4.22.3 Mirroring WORM files
You can use SnapMirror to replicate WORM files to another geographic location
for disaster recovery and other purposes. Both the source volume and
destination volume must be configured for SnapLock, and both volumes must
have the same SnapLock mode, Compliance or Enterprise. All key SnapLock
properties of the volume and files are replicated.
The source and destination volumes must be created in peered clusters with
peered SVMs.
Extract of the official documentation : please refer to this documentation for all
prerequisites.
You can transition 7-Mode volumes in a NAS and SAN environment to clustered
Data ONTAP volumes by using clustered Data ONTAP SnapMirror commands.
You must then set up the protocols, services, and other configuration on the
cluster after the transition is complete.
Steps
1. Add and enable the SnapMirror license on the 7-Mode system:
a. Add the SnapMirror license on the 7-Mode system:
license add license_code
license_code is the license code you purchased.
b. Enable the SnapMirror functionality:
options snapmirror.enable on
2. Configure the 7-Mode system and the target cluster to communicate with
each other by choosing
one of the following options:
• Set the snapmirror.access option to all.
• Set the value of the snapmirror.access option to the IP addresses of all the
LIFs on the
cluster.
• If the snapmirror.access option is legacy and the snapmirror.checkip.enable
option is off, add the SVM name to the /etc/snapmirror.allow file.
• If the snapmirror.access option is legacy and the snapmirror.checkip.enable
option is on, add the IP addresses of the LIFs to the /etc/snapmirror.allow file.
3. Depending on the Data ONTAP version of your 7-Mode system, perform the
following steps:
a. Allow SnapMirror traffic on all the interfaces:
options interface.snapmirror.blocked ""
b. If you are running Data ONTAP version 7.3.7, 8.0.3, or 8.1 and you are using
the IP address of the e0M interface as the management IP address to interact
with 7-Mode Transition Tool, allow data traffic on the e0M interface:
options interface.blocked.mgmt_data_traffic off
You must set up the cluster before transitioning a 7-Mode system and ensure
that the cluster meets requirements such as setting up LIFs and verifying
network connectivity for transition.
Step
1. Create the intercluster LIF on each node of the cluster for communication
between the cluster and 7-Mode system:
b. Create a static route for the intercluster LIF by using the network route
create
command.
Example
cluster1::> network route create -vserver vs0 -destination
0.0.0.0/0 -gateway 10.61.208.1
c. Verify that you can use the intercluster LIF to ping the 7-Mode system by
using the network ping command.
Example
cluster1::> network ping -lif intercluster_lif -lif-owner cluster1-01 -destination
system7mode
system7mode is alive
You must create a transition peer relationship before you can set up a
SnapMirror relationship for transition between a 7-Mode system and a cluster.
Steps
1. Use the vserver peer transition create command to create a transition peer
relationship.
2. Use the vserver peer transition show to verify that the transition peer
relationship is
created successfully.
Steps
1. Copy data from the 7-Mode volume to the clustered Data ONTAP volume:
a. If you want to configure the TCP window size for the SnapMirror relationship
between the 7-Mode system and the SVM, create a SnapMirror policy of type
async-mirror with the
window-size-for-tdp-mirror option.
You must then apply this policy to the TDP SnapMirror relationship between the
7-Mode
system and the SVM.
You can configure the TCP window size in the range of 256 KB to 7 MB for
improving the
SnapMirror transfer throughput so that the transition copy operations get
completed faster.
The default value of TCP window size is 2 MB.
Example
cluster1::> snapmirror policy create -vserver vs1 –policy
tdp_policy -window-size-for-tdp-mirror 5MB -type async-mirror
b. Use the snapmirror create command with the relationship type as TDP to
create a
SnapMirror relationship between the 7-Mode system and the SVM.
If you have created a SnapMirror policy to configure the TCP window size, you
must apply
the policy to this SnapMirror relationship.
Example
cluster1::> snapmirror create -source-path system7mode:dataVol20 -
destination-path vs1:dst_vol -type TDP -policy tdp_policy
Operation succeeded: snapmirror create the relationship with
destination vs1:dst_vol.
c. Use the snapmirror initialize command to start the baseline transfer.
Example
cluster1::> snapmirror initialize -destination-path vs1:dst_vol
Operation is queued: snapmirror initialize of destination
vs1:dst_vol.
d. Use the snapmirror show command to monitor the status.
Example
cluster1::>snapmirror show -destination-path vs1:dst_vol
Source Path: system7mode:dataVol20
Destination Path: vs1:dst_vol
Relationship Type: TDP
Relationship Group Type: none
SnapMirror Schedule:
SnapMirror Policy Type: asyncmirror
SnapMirror Policy: DPDefault
Tries Limit:
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Relationship Status: Idle
File Restore File Count:
File Restore File List:
Transfer Snapshot:
Snapshot Progress:
Total Progress:
Network Compression Ratio:
Snapshot Checkpoint:
Newest Snapshot: vs1(4080431166)_dst_vol.1
Newest Snapshot Timestamp: 10/16 02:49:03
Exported Snapshot: vs1(4080431166)_dst_vol.1
Exported Snapshot Timestamp: 10/16 02:49:03
Healthy: true
Unhealthy Reason:
Constituent Relationship: false
Destination Volume Node: cluster101
Relationship ID:
97b205a154ff11e49f30005056a68289
Current Operation ID:
Transfer Type:
Transfer Error:
Current Throttle:
Current Transfer Priority:
Last Transfer Type: initialize
Last Transfer Error:
Last Transfer Size: 152KB
Last Transfer Network Compression Ratio: 1:1
Last Transfer Duration: 0:0:6
Last Transfer From: system7mode:dataVol20
Last Transfer End Timestamp: 10/16 02:43:53
Progress Last Updated:
Relationship Capability: 8.2 and above
Lag Time:
Number of Successful Updates: 0
Number of Failed Updates: 0
Number of Successful Resyncs: 0
Number of Failed Resyncs: 0
Number of Successful Breaks: 0
Number of Failed Breaks: 0
Total Transfer Bytes: 155648
Total Transfer Time in Seconds: 6
If you have a schedule for incremental transfers, perform the following steps
when you are ready to perform cutover:
a. Optional: Use the snapmirror quiesce command to disable all future update
transfers.
Example
cluster1::> snapmirror quiesce -destination-path vs1:dst_vol
Example
system7mode> snapmirror release dataVol20 vs1:dst_vol
4.23 SVM DR
https://library.netapp.com/ecm/ecm_download_file/ECMLP2496254
https://library.netapp.com/ecm/ecm_download_file/ECMLP2496252
SVMs must use FlexVol volumes on clusters (Infinite volume not supported)
The source SVM does not contain DP or TDP volume (data protection or transition
data protection)
The source SVM do not contain any volume that resides in a FabricPool-enabled
aggregate.
The source SVM root volume must not contain any other data apart from
metadata because the other data is not replicated. Root volume metadata such
as volume junctions, symbolic links, and directories leading to junctions symbolic
links are replicated.
The destination cluster must have at least one non-root aggregate with a
minimum free space of 10 GB for configuration replication
The destination cluster have at least one non-root aggregate with a sufficient
space (replicated datas).
if any clone parent or clone child volumes are moved by using the volume move
command, then you must move the corresponding volume at the destination
SVM.
Depending on the architecture and the need, all or a subset of the SVM
configuration can be replicated:
Replicate data and all the SVM configuration except the NAS data LIFs
4.23.4.2 Preparation
Create the same custom schedules on the destination cluster as on the source
one:
cnasimu02::> job schedule cron create -name weekly -dayofweek "Sunday" -hour 0 -minute 15
Ensure that the destination cluster has at least one non-root aggregate with a
minimum 10GB free space for configuration replication (the best practice is two).
If you want to replicate all the SVM configuration, the IPspaces of the source and
destination SVMs must have ports belonging to the same subnet.
If needed, remove the ports (ifgroup or vlan) from the Default broadcast domain
and create a new one:
ips_802 bcast_vlan-802
9000
cnasimu01_02:e0d-802 complete
cnasimu01_01:e0d-802 complete
Note : Cannot specify options other than Vserver name, comment and ipspace for
a Vserver that is being configured as the destination for Vserver DR
You must create an intercluster SVM peer relationship between the source and
the destination SVMs.
On the destination cluster, verify that the SVM peer relationship is in the peered
state
There are several default SnapMirror policies which can be used for the
relationship. It can be viewed using the following command :
cnasimu02
MirrorAllSnapshots async-mirror 2 8 normal Asynchronous SnapMirror policy for
mirroring all Snapshot copies and the latest active file system.
SnapMirror Label: sm_created Keep: 1
all_source_snapshots 1
Total Keep: 2
If the source and destination SVMs are in different network subnets, and you do
not want to replicate the LIFs, you must create a SnapMirror policy with the
-discard-configs network option.
1. Create a SnapMirror policy to exclude the LIFs from replication by using the
snapmirror policy create command.
2. Verify that the new SnapMirror policy is created by using the snapmirror policy
show command.
You must use the newly created SnapMirror policy when creating the SnapMirror
relationship.
Example
Vserver: dsvm_test_svmdr
SnapMirror Policy Name: exclude_LIF
Create a snapmirror relationship between the source and the destination SVMs :
You can specify the source SVM and the destination SVM as either paths
or SVM names. If you want to specify the source and destination as paths,
then the SVM name must be followed by a colon.
Example :
By default, all rw data volumes of the source SVM are replicated. Volumes can be
excluded from the replication by modifying the –vserver-dr-protection option.
If the source SVM has CIFS configuration and you choose to set identity-preserve
to false, a CIFS server must be created for the destination SVM.
Create a LIF
Create a route
Configure DNS
The destination SVM must be initialized for the baseline transfer of data and
configuration details.
Once the SVM snapmirrored, the relationship status will switch from Transferring
to Idle :
If the source and destination SVMs are in the same network subnets, the LIFs are
configured on the destination SVM (discard-configs network option not set in
snapmirror policy).
It is in down state.
Note : the LIF follows the naming convention of the source site which can be
ambiguous (especially in OCB where the VLAN ID is specified in the LIF name)
For NAS:
Stop the destination SVM if the source SVM has CIFS configuration
Note : a ro access for NFS clients can be set up from the destination SVM
For SAN:
Note : a ro access for SAN hosts can be set up from the destination SVM
The SnapMirror relationship status can be monitored to verify that the updates
are occurring.
Transfer Snapshot: -
Snapshot Progress: -
Total Progress: -
Network Compression Ratio: -
Snapshot Checkpoint: -
Newest Snapshot: vserverdr.2.bed5a4e1-180d-11e9-a9da-
000c2912003c.2019-01-21_090500
Newest Snapshot Timestamp: 01/21 09:05:00
Exported Snapshot: vserverdr.2.bed5a4e1-180d-11e9-a9da-
000c2912003c.2019-01-21_090500
Exported Snapshot Timestamp: 01/21 09:05:00
Healthy: true
Unhealthy Reason: -
Constituent Relationship: false
Destination Volume Node: -
Relationship ID: 2f3cbd01-180e-11e9-a9da-000c2912003c
Current Operation ID: -
Transfer Type: -
Transfer Error: -
Current Throttle: -
Current Transfer Priority: -
Last Transfer Type: update
Last Transfer Error: -
Last Transfer Size: 8.59KB
Last Transfer Network Compression Ratio: -
Last Transfer Duration: 0:0:15
Last Transfer From: svm_test_svmdr:
Last Transfer End Timestamp: 01/21 09:05:15
Progress Last Updated: -
Relationship Capability: -
Lag Time: 0:9:1
Identity Preserve Vserver DR: true
Volume MSIDs Preserved: true
Is Auto Expand Enabled: -
Number of Successful Updates: -
Number of Failed Updates: -
Number of Successful Resyncs: -
Number of Failed Resyncs: -
Number of Successful Breaks: -
Number of Failed Breaks: -
Total Transfer Bytes: -
Total Transfer Time in Seconds: -
Prerequisites:
Warning: These advanced commands are potentially dangerous; use them only when directed to do
so by NetApp personnel.
Do you want to continue? {y|n}: y
Info: The default export policies of the Vserver "svm_test_clone" will be assigned to the
clone volume. Use the "volume modify"
command to change the policies associated with the clone volume.
[Job 334] Job succeeded: Successful
The FlexClone volume inherits the export policies and Snapshot policies from the
SVM to which it belongs.
Notes:
During a disaster, any new data that is written on the source SVM after
the last SnapMirror transfer is lost.
Before activating the destination SVM, you must quiesce the SVM disaster
recovery relationship to stop scheduled SnapMirror transfers from the source
SVM.
Verify that the SnapMirror relationship between the source and the destination
SVMs is in the Quiescing or Quiesced state by using the snapmirror show
command
If necessary, you must abort any ongoing SnapMirror transfers or any long-
running quiesce operations before breaking the SVM disaster recovery
relationship.
Abort any ongoing SnapMirror transfers by using the snapmirror abort command.
Error: command failed: Cannot perform this operation for destination-path "dsvm_test_svmdr"
because the SnapMirror operation status is "Idle".
Verify that the SnapMirror relationship between the source and destination SVMs
is in the Idle state by using the snapmirror show command.
Notice: Volume quota and efficiency operations will be queued after "SnapMirror break"
operation is complete. To check the status, run "job show -description "Vserverdr Break
Callback
job for Vserver : dsvm_test_svmdr"".
cnasimu02::> job show -description "Vserverdr Break Callback job for Vserver :
dsvm_test_svmdr"
Owning
Job ID Name Vserver Node State
------ -------------------- ---------- -------------- ----------
If you chose to set identity-preserve to true or if you want to test the SVM
disaster recovery setup, you must stop the source SVM before activating the
destination SVM.
Before you begin : if the source SVM is available on the source cluster, then you
must have ensured that all clients connected to the source SVM are
disconnected.
Check that the SVM is stopped by using the vserver show command.
In case of a disaster, once the source SVM is stopped or unavailable, you must
activate the destination SVM to provide data access.
Check that the SVM is started by using the vserver show command.
Depending on the SVM DR options, you must configure the destination SVM
(network, etc.).
If the source SVM exists after a disaster, you can reactivate it and protect it by
re-creating the SVM disaster recovery relationship between the source and the
destination SVMs. If the source SVM does not exist, you must create and set up a
new source SVM and then reactivate it.
If the source SVM does not exist, you must delete the SnapMirror relationship
between the source and destination SVMs, delete the SVM peer relationship, and
create and set up a new source SVM to replicate the data and configuration from
the destination SVM.
identify the SVM peer relationship between the source SVM that no longer
exists and its destination SVM by using the vserver peer show
delete the SVM peer relationship by using the vserver peer delete
command
After that, you must set up the disaster recovery relationship by using the same
method and configuration that you used to set up the SnapMirror relationship
before the disaster.
For example, if you chose to replicate data and all the configuration details when
creating the SnapMirror relationship between the original source SVM and
destination SVM, you must choose to replicate data and all the configuration
details when creating the SnapMirror relationship between the new source SVM
and the original destination SVM.
If the source SVM exists after a disaster, you must create the SVM disaster
recovery relationship between the destination and source SVMs and
resynchronize the data and configuration from the destination SVM to the source
SVM.
Steps :
Prerequisites:
the existing source SVM and the destination SVM must be peered
You must set up the disaster recovery relationship by using the same method and
configuration that you used before the disaster.
Before activating the source SVM, you must resynchronize the data and
configuration details from the destination SVM to the existing source SVM for
data access.
Warning : the source SVM must not contain any new protected volumes, delete
them if necessary.
dsvm_test_svmdr:
XDP svm_test_svmdr:
Snapmirrored
Idle - true -
Note : if you run the command with –instance option, you will see that transfer
size is corresponding to a resynchronization.
If you chose to set identity-preserve to true, you must stop the destination SVM
before starting the source SVM to prevent any data corruption.
verify that the destination SVM is in the stopped state by using the
vserver show command
You must update the SnapMirror relationship to replicate the changes from the
destination SVM to the source SVM since the last resynchronization operation.
You must break the SnapMirror relationship created between the source and the
destination SVMs for disaster recovery before reactivating the source SVM.
Verify that the SnapMirror relationship between the source and the
destination SVMs is in the Broken-off state by using the snapmirror show
command.
Notice: Volume quota and efficiency operations will be queued after "SnapMirror break"
operation is complete. To check the status, run "job show -description "Vserverdr Break
Callback
job for Vserver : svm_test_svmdr"".
The source SVM continues to be in the Stopped state and the subtype changes
from dpdestination to default. The state of the volumes in the source SVM
changes from DP to RW.
Vserver: svm_test_svmdr
Vserver Type: data
Vserver Subtype: default
Vserver UUID: 16421db1-14ec-11e9-b005-000c294463d3
Root Volume: svm_test_svmdr_root
Aggregate: simu01_fcal_001
NIS Domain: -
Root Volume Security Style: unix
LDAP Client: -
Default Volume Language Code: C.UTF-8
Snapshot Policy: default
Comment:
Quota Policy: default
List of Aggregates Assigned: -
Limit on Maximum Number of Volumes allowed: unlimited
Vserver Admin State: stopped
Vserver Operational State: stopped
Vserver Operational State Stopped Reason: admin-state-stopped
Allowed Protocols: nfs, cifs, fcp, iscsi, ndmp
Disallowed Protocols: -
Is Vserver with Infinite Volume: false
QoS Policy Group: -
Caching Policy Name: -
Config Lock: false
IPspace Name: ips_802
Foreground Process: -
For providing data access from the source SVM after a disaster, you must
reactivate the source SVM by starting it.
Verify that the source SVM is in the running state and the subtype is
default by using the vserver show command.
Vserver: svm_test_svmdr
Vserver Type: data
Vserver Subtype: default
Vserver UUID: 16421db1-14ec-11e9-b005-000c294463d3
Root Volume: svm_test_svmdr_root
Aggregate: simu01_fcal_001
NIS Domain: -
Root Volume Security Style: unix
LDAP Client: -
Default Volume Language Code: C.UTF-8
Snapshot Policy: default
Comment:
Quota Policy: default
List of Aggregates Assigned: -
Limit on Maximum Number of Volumes allowed: unlimited
Vserver Admin State: running
Vserver Operational State: running
Vserver Operational State Stopped Reason: -
Allowed Protocols: nfs, cifs, fcp, iscsi, ndmp
Disallowed Protocols: -
Is Vserver with Infinite Volume: false
QoS Policy Group: -
Caching Policy Name: -
Config Lock: false
IPspace Name: ips_802
Foreground Process: -
You can protect the reactivated source SVM by resynchronizing the data and
configuration details from the source SVM to the destination SVM.
resynchronize the destination SVM from the source SVM by using the
snapmirror resync command.
4.24 MetroCluster
http://renicm1011.equant.com/dicos/ciiengin.nsf/vTous/MDUG-9B9BMH
Because one controller might own all shelves after a disaster, all shelf numbers in
the disaster-recovery (DR) group must be unique.
Although there are two management ports, only one is configured for use in
MetroCluster configurations. The entire bridge is a field replaceable unit (FRU).
You cannot replace individual parts.
For clarity, the images show only the input output modules (IOMs), not the entire
disk shelves. The intershelf cabling is the same as for any cluster, and the first
and final ports are each connected to a bridge.
The switch configuration files determine which ports are used for which purposes.
The cabling connections in the diagram conform to the settings in the NetApp
reference configuration file (RCF), or golden configuration image file, which you
load on the switches in a later exercise.
Always make cable connections to the specified ports on the FC switches and on
the ATTO bridges.
The example shows the onboard UTA2 ports on a FAS8040 system that is
configured for FC. The onboard UTA2 ports are connected to the Brocade FC
switches.
You can also install FC host bus adapters (HBAs) in expansion slots and use those
ports instead of, or in addition to, the onboard ports.
The classroom environment uses Brocade 6505 switches. You can also use
Brocade 6510 and Cisco 9148 switches. The ports in the diagram are as
configured by the appropriate RCF files.
You must license the correct ports when you use the switches. Not all ports are
licensed by default. You should also always check the port configuration on the
Support site.
Before using a FibreBridge bridge in a live environment, check that the firmware
level is the most recent one. Use the Interoperability Matrix Tool (IMT) to verify
the current version.
Set all of the items as shown on the slide. The connectivity mode is set to ptp-
loop by default. You need to change that modeto
ptp.SetthebridgenametosomethingthatidentifiesitspositionintheMetroClusterconfi
guration.Afteryou configure all of the settings, save the configuration and restart
the bridge.
After you set the IP address, you can use a browser to access the bridge.
When you log in to the FibreBridge bridge, there is no prompt symbol. The word
“Ready” is displayed. Type your command below that word.
If you type a command incorrectly and then press the Backspace key to fix it, the
command fails. If you make a mistake typing a command, retype it.
To display all of the disks, enter the sastargets command. A list of the disks
appears, followed by a list of the input output modules (IOMs) in each shelf.
Ensure that you download the correct file for the site. The Site-A file has the
zoning information. When the links are enabled, the zoning is transferred to the
Site-B switches.
2. In the list of products, find the row for Fibre Channel Switch.
4. On the Fibre Channel Switch for Brocade page, click View & Download.
http://mysupport.netapp.com/NOW/download/software/metrocluster_cisco/sanswi
tch/download.shtml
The RCF for domain 5 and domain 6 have the zone configurations and they are
pulled to the domain 7 and domain 8 when the ISLs are synced.
You must manually calculate and set the ISL distance. Calculation and setting of
ISL distance is not included in the RCF file.
Due to the behavior of virtual interface over Fibre Channel (FC-VI) ports, you
must set the distance to 1.5 times the actual distance. The distance is actual
fabric cable length.
The value for the ISL is calculated as follows (in which the real_distance is
rounded up to the kilometer):
If the value is 10 or less, use the LE parameter without specifying the value.
If the value is greater than 10 (if the distance is greater than 6 km), use the LS
parameter and specify the value.
The number one (1) in the configuration command is the vc_link_init value. A
value of 1 uses the ARB fill word, which is the default. A value of 0 uses IDLE. The
required value might depend on the link that is used.
The fill word must be set to the same value as the fill word that is configured for
the remote port. Therefore, if the remote port link initialization and fill word
combination is idle-idle, the fill word for the long-distance link must be set to
”idle.”
If the combination of remote port link initialization and fill-word is set to arbff-
arbff, idle-arbff, or aa-then-ia, then the fill word for the long-distance link must be
set to ”arb.”
Create intercluster logical interfaces (LIFs) at both sites, and then create the peer
relationship.
During the cluster peering process, authentication must occur. You provide a
passphrase that you create and enter in both clusters during the peer-creation
step.
1. Run the cluster peer create –peer-addrs <intercluster LIFs on the remote
cluster> command.
1. Run the cluster peer create –peer-addrs <intercluster LIFs on the remote
cluster> command.
In Data ONTAP 8.3 and later, Brocade and Cisco switches can also be included for
health-monitoring purposes. You must manually add each management IP
address and then use the storage switch show command to verify that the
switches have all been included for monitoring.
The polling interval is 15 minutes. Something might fail and you might not see it
immediately on the CLI, but AutoSupport sends an alert.
Add the FibreBridge devices in the same way as the switches: by adding each
management-port IP address. Then verify that the bridges have all been added
correctly. Bridge monitoring supports only SNMPv1. However, currently ATTO
firmware prohibits changes to the community string.
After the MetroCluster cluster has been configured, use the metrocluster show
command to see the status of the cluster.
The site from which the command is issued is listed as the Local site, and the
other site is the Remote site. In the example, the command was issued from Site
A.
In the top-left corner of the main Config Advisor window, from the Profile menu,
select the MetroCluster in clustered Data ONTAP execution profile. Select the
type of switches that you are using and complete the login information for the
nodes and switches.
The information about the login page is arranged differently when you select
MetroCluster. All of the Cluster A information is in the left column, and all of the
Cluster B information is in the right column.
Save the query so that you can use it again without having to reenter all of the
information. Click Collect Data.
You should use a non privileged user to connect Harvest to your storage systems.
The password for this user is available on PMR site.
Configure role
Configure user
– Has been revised several times and has had several name changes. They
are all outdated and should no longer be referenced.
Disk Clearing
Disk Sanitization
system is being “released” to users with access level lower than the
accreditation level.
– Note that memory is required to be overwritten as well for both. The tools
available to the NetApp PSE/PSCs don’t include a method to overwrite a
NetApp storage controller’s memory.
– Requirements for tracking disks once they are sanitized is included in the
standard. NetApp doesn’t do tracking of disks once they are returned.
The preferred term to describe the NetApp service offering is “Disk Erasure”, not
“Disk Clearing”, or “Disk Sanitization”.
– Above counts as three cycles, sanitization is not complete until three cycles
are successfully completed.
– If any part of the disk can not be written to, the disk must be destroyed,
according to DoD standards. NetApp does not make a service available for
disk destruction; however, NetApp does have an offering for non-returning
of disks.
– Three passes of a single set of writes is clearly called out in the current
standard. The documentation clarifies that the standard is not three of
each pass, for a total of 9 writes as was mistakenly assumed by numerous
implementers in the past.
Important Notes
– The disk sanitization command can not be run on broken or failed disks.
– The customer may request that NetApp perform a ‘Disk Sanitization’ even
without the ability to sanitize the storage controller cluster’s memory.
– Need to ensure that the customer understands that this operation can not
be undone.
– See sample signoff text, select the one based upon if this is a paid
engagement or not.
Make sure that the motherboard, shelf and disk firmware are up to date.
Remove all failed disks from the storage controller. These disk will need to be
disposed of by the customer.
If all disks are part of a single root aggregate, you will need to build a new volume
and aggregate composed of a minimal number of disks.
Run the appropriate DataONTAP command for each disk to start the disk clearing
or sanitization process.
Wait for process to complete. Progress can be checked via the “disk sanitize
status” command and the “sysconfig –r” command.
Make note of disks that fail the sanitize process. They will need to be removed
and disposed of appropriately by the customer. Note that there may be an
additional charge for non-return of disks.
– See attached sample, select the sample text based upon if this is a paid
engagement or not.
The customer understands that the disk erasure process is non-reversable once started
and all existing data on the storage controllers named above will be non-recoverable.
This work will be performed under NetApp purchase number REPLACE_PO_NUMBER.
Signed for Customer: _________________________
Print name: _________________________
Date: _________________________
Disk erasure work was performed on the following NetApp storage controllers using the
built in DataONTAP tools:
REPLACE_NAME, SN# REPLACE_SSN
The process followed meets the disk clearing requirements detailed in the US
Government publication, “ISFO Process manual V3 14 June 2011”, the generally accepted
industry accepted authority on device erasure.
This work was performed without charge to the customer.
Signed for Customer: _________________________
Print name: _________________________
Date: ________________________
4.27 NDMPCopy
You can configure SVM-scoped NDMP on the cluster by enabling SVM-scoped NDMP mode
and
NDMP service on the cluster (admin SVM).
About this task
Turning off node-scoped NDMP mode enables SVM-scoped NDMP mode on the cluster.
Enable SVM-scoped NDMP mode by using the system services ndmp command with the
node-scope-mode parameter.
Enable NDMP service on the admin SVM by using the vserver services ndmp on
command.
Verify that NDMP service is enabled by using the vserver services ndmp show command.
If you are using an NIS or LDAP user, the user must be created on the respective server.
You cannot
use an Active Directory user.
Create a backup user with the admin or backup role by using the security login create
command.
You can specify a local backup user name or an NIS or LDAP user name for the -user-or-
group-name
parameter.
The following command creates the backup user backup_admin1 with the backup role:
Generate a password for the admin SVM by using the vserver services ndmp generate
password command.
The generated password must be used to authenticate the NDMP connection by the
backup application.
You must identify the LIFs that will be used for establishing a data connection between
the data and
tape resources, and for control connection between the admin SVM and the backup
application. After
identifying the LIFs, you must verify that firewall and failover policies are set for the LIFs,
and
specify the preferred interface role.
Ensure that the firewall policy is enabled for NDMP on the intercluster, cluster-
management
(cluster-mgmt), and node-management (node-mgmt) LIFs:
Verify that the firewall policy is enabled for NDMP by using the system services firewall
policy show command.
The following command displays the firewall policy for the cluster-management LIF:
ndmp 0.0.0.0/0
ndmps 0.0.0.0/0
ntp 0.0.0.0/0
rsh 0.0.0.0/0
snmp 0.0.0.0/0
ssh 0.0.0.0/0
telnet 0.0.0.0/0
10 entries were displayed.
The following command displays the firewall policy for the intercluster LIF:
cluster1::> system services firewall policy show -policy intercluster
Vserver Policy Service Allowed
------- ------------ ---------- -------------------
cluster1 intercluster dns -
http -
https -
ndmp 0.0.0.0/0, ::/0
ndmps -
ntp -
rsh -
ssh -
telnet -
9 entries were displayed.
The following command displays the firewall policy for the node-management LIF:
If the firewall policy is not enabled, enable the firewall policy by using the system services
firewall policy modify command with the –service parameter.
The following command enables firewall policy for the intercluster LIF:
Ensure that the failover policy is set appropriately for all the LIFs:
Verify that the failover policy for the cluster-management LIF is set to broadcast-domain-
wide, and the policy for the intercluster and node-management LIFs is set to local-only by
using the network interface show –failover command.
The following command displays the failover policy for the cluster-management,
intercluster, and node-management LIFs:
If the failover policies are not set appropriately, modify the failover policy by using the
network interface modify command with the -failover-policy parameter.
Specify the LIFs that are required for data connection by using the vserver services ndmp
modify command with the preferred-interface-role parameter.
Verify that the preferred interface role is set for the cluster by using the vserver services
ndmp show command.
The following table displays what resources are available in 'Vserver-scope' without CAB
:
LIF type Volumes available for backup or restore Tape devices available for backup or restore
Tape devices connected to the node hosting the
Node Mgmt LIF All volumes hosted by a node
node-management LIF
The following table displays what resources are available in 'Vserver-scope' when CAB is
supported by the Backup Application:
LIF type Volumes available for backup or restore Tape devices available for backup or restore
Tape devices connected to the node hosting the node-
Node Mgmt LIF All volumes hosted by a node
management LIF
All volumes that belong to the Vserver that hosts
Data LIF None
the data LIF
Cluster Mgmt
All volumes in the cluster All tape devices in the cluster
LIF
InterCluster LIF All volumes in the cluster All tape devices in the cluster
4.27.4.1 Benefits :
4.27.4.2 Drawbacks:
Destination Volume must have sufficient room to store Source Volume data
Destination Volume must have sufficient max number of files (max files > Source Volume max files)
Command to change max files on a Volume:
Example:
Target
Total Files (for user-visible data): 21251126
Files Used (for user-visible data): 21251126
SM
Total Files (for user-visible data): 999999995
Files Used (for user-visible data): 65186433
Command :
volume modify DV_C4U_DA6_OFF_FRA_01_target -vserver svm_fcav2_CUBE4Demo_file_nmd_01
-files 65186500
XCP is a high-performance NFSv3 migration tool for fast and reliable migrations from
third-party storage to NetApp and NetApp to NetApp transitions. The tool supports
discovery, logging, and reporting and a wide variety of sources and targets.
4.28.2 Features
XCP runs on a Linux client as a command-line tool. XCP is packaged as a single binary file
that is easy to deploy and use.
XCP runs on a Linux client host as a CLI tool. It is easy to use single file software and does
not involve complex installation procedures. Users can download the binary from
https://support.netapp.com/eservice/toolchest.
XCP is available for internal, partner, and customer use. Download and activate a free 90-
day renewable license from https://xcp.netapp.com/.
The following are the minimum system requirements.
• 64-bit Intel or AMD server, minimum 4 cores and 8GB RAM
• 20MB of disk space for the xcp binary and at least 50MB for the logs
• Recent Linux distribution (RHEL 5.11 or later or kernel 2.6.18-404 or later)
• No other active applications
• Access to log in as root or run sudo commands
• Network connectivity to source and destination NFS exports
XCP saves operation reports and metadata in an NFS 3–accessible catalog directory.
Provisioning the catalog is a one-time preinstallation task requiring:
• A NetApp NFSv3 export for security and reliability.
• At least 10 disks or SSDs in the aggregate containing the export for performance.
• Storage configured to allow root access to the catalog export for the IP addresses of all
Linux clients used to run XCP (multiple XCP clients can share a catalog location).
• Approximately 1GB of space for every 10 million objects (directories + files + hard
links) to be indexed. Each copy that can be resumed or synchronized and each scan that
can be searched offline requires an index. Note: Store XCP catalogs separately. The
catalogs should not be on either the source or the destination NFS export directory. XCP
maintains metadata and reports in the catalog location specified during initial setup. The
location for storing the reports needs to be specified and updated before you run any
operation with XCP. Edit the xcp.ini file using an appropriate Linux file editor at
/opt/NetApp/xFiles/xcp/.
4.28.5 Activate
The license file must be located in the XCP local configuration directory,
/opt/Netapp/xFiles/xcp:
• Run any xcp command to allow XCP to autocreate the configuration directory (the error
“License file /opt/NetApp/xFiles/xcp/license not found” is expected).
# mv ./license /opt/NetApp/xFiles/xcp
# ./xcp activate
After activating XCP, configure the NFS 3 catalog location. XCP is then ready to run. 5 XCP
NFS v1.3 Migration Tool Quick Start Guide © Copyright 2017 NetApp, Inc. All rights
reserved
• Edit or replace xcp.ini with the catalog location; for example, to use export xcat on
server atlas:
• For basic usage and a few examples, run xcp with no arguments:
Use
Documentation
Examples
Copy from a local SAN or DAS filesystem (requires local NFS service):
xcp copy localhost:/home/smith cdot:/target
Three-level verification: compare stats, attributes, and full data:
xcp verify -stats localhost:/home/smith cdot:/target
xcp verify -nodata localhost:/home/smith cdot:/target
xcp verify localhost:/home/smith cdot:/target
4.28.7.1 Benefits:
4.28.7.2 Drawbacks:
4.29.1 Prerequisites
NVE is not supported if the command output displays the text "no-DARE" (for "no
Data At Rest Encryption"),
cluster1::> version -v
NetApp Release 9.1.0: Tue May 10 19:30:23 UTC 2016 <1no-DARE>
The text "1no-DARE" in the command output indicates that NVE is not supported
on your cluster version.
2. An NVE license entitles you to use the feature on all nodes in the cluster. You must
install the license before you can encrypt data with NVE (with administrator
privilege).
Install the NVE license for a node: system license add -license-code
license_key
Example : The following command installs the license with the key
AAAAAAAAAAAAAAAAAAAAAAAAAAAA.
Verify that the license is installed by displaying all the licenses on the cluster:
system license show. For complete command syntax, see the man page for the
command.
You can use one or more external key management servers to secure the keys that
the cluster uses to access encrypted data. An external key management server is a
third-party system in your storage environment that serves keys to nodes using the
Key Management Interoperability Protocol (KMIP).
You can use the Onboard Key Manager to secure the keys that the cluster uses to
access encrypted data. You must enable Onboard Key Manager on each cluster that
accesses an encrypted volume or a self-encrypting disk (SED).
You must run the security key-manager setup command each time you add a node
to the cluster. In MetroCluster configurations, you must run security key-manager
setup on the local cluster first, then on the remote cluster, using the same
passphrase on each. Starting with ONTAP 9.5, you must run security key-manager
setup and security key-manager setup -sync-metrocluster-config yes on the local
cluster and it will synchronize with the remote cluster.
By default, you are not required to enter the key manager passphrase when a node
is rebooted. Starting with ONTAP 9.4, you can use the -enable-cc-mode true option
to require that users enter the passphrase after a reboot.
Note: After a failed passphrase attempt, you must reboot the node again.
Starting with ONTAP 9.5, ONTAP Key Manager supports Trusted Platform Module
(TPM). TPM is a secure crypto processor and micro-controller designed to provide
hardware-based security. Support for TPM is automatically enabled by ONTAP on
detection of the TPM device driver. If you are upgrading to ONTAP 9.5, you must
create new encryption keys for your data after enabling TPM support.
Steps :
Restart the key manager setup wizard with "security key-manager setup". To accept a
default
or omit a question, do not enter a value.
Would you like to configure onboard key management? {yes, no} [yes]:
Enter the cluster-wide passphrase for onboard key management. To continue the
configuration, enter the passphrase, otherwise type "exit":
Enter the cluster-wide passphrase for onboard key management. To continue the
configuration, enter the passphrase, otherwise type "exit":
Re-enter the cluster-wide passphrase:
After configuring onboard key management, save the encrypted
configuration data
in a safe location so that you can use it if you need to perform a manual
recovery
operation. Copy the passphrase to a secure location (PMR) outside the
storage system for future use.
Node: cnaces84-01
Key Store: onboard
Key ID Used By
---------------------------------------------------------------- --------
00000000000000000200000000000100078596BDE0CDA32B81023B5E5DC6BF44
NSE-AK
0000000000000000020000000000010087EB0C61C8DDBD677F477C8557E76897
NSE-AK
Node: cnaces84-02
Key Store: onboard
Key ID Used By
---------------------------------------------------------------- --------
00000000000000000200000000000100078596BDE0CDA32B81023B5E5DC6BF44
NSE-AK
0000000000000000020000000000010087EB0C61C8DDBD677F477C8557E76897
NSE-AK
You can enable encryption on a new volume or on an existing volume. You must have
installed the NVE license and enabled key management before you can enable volume
encryption. NVE is FIPS-140-2 level 1 compliant.
You can use the volume create command to enable encryption on a new volume.
Steps :
1. Create a new volume and enable encryption on the volume: volume create
-vserver SVM_name -volume volume_name -aggregate aggregate_name
-encrypt true
For complete command syntax, see the man page for the command.
Example
The following command creates a volume named vol1 on aggr1 and enables
encryption on the volume:
The system creates an encryption key for the volume. Any data you put on the volume
is encrypted.
2. Verify that the volume is enabled for encryption: volume show -is-encrypted
true
For complete command syntax, see the man page for the command.
Example
Starting with ONTAP 9.3, you can use the volume encryption conversion start
command to enable encryption on an existing volume. Once you start a conversion
operation, it must complete. If you encounter a performance issue during the
operation, you can run the volume encryption conversion pause command to pause
the operation, and the volume encryption conversion restart command to resume the
operation.
Note: You cannot use volume encryption conversion start to convert a SnapLock or
FlexGroup volume.
Steps :
1. Enable encryption on an existing volume: volume encryption conversion start
-vserver SVM_name -volume volume_name
For complete command syntax, see the man page for the command.
Example
Warning: Conversion from non-encrypted to encrypted volume scans and encrypts all of
the data in the specified volume. It may take a significant amount of time, and
may degrade performance during that time. Are you sure you still want to continue?
{y|n}: y
Conversion started on volume "volmwinad1". Run "volume encryption conversion show
-volume volmwinad1 -vserver svm_fca_se_file_04" to see the status of this operation.
The system creates an encryption key for the volume. The data on the volume is
encrypted.
Verify the status of the conversion operation: volume encryption conversion show
For complete command syntax, see the man page for the command.
Example
When the conversion operation is complete, verify that the volume is enabled for
encryption:
volume show -is-encrypted true
For complete command syntax, see the man page for the command.
Example
You can use the volume move start command to enable encryption on an existing
volume. You must use volume move start in ONTAP 9.2 and earlier. You can use the same
aggregate or a different aggregate.
Steps :
1. Move an existing volume and enable encryption on the volume: volume move
start -vserver SVM_name -volume volume_name -destination-aggregate
aggregate_name -encrypt-destination true|false
For complete command syntax, see the man page for the command.
Example
The following command moves an existing volume named vol1 to the destination
aggregate aggr2 and enables encryption on the volume:
cluster1::> volume move start -vserver vs1 -volume vol1 -destination-
aggregate aggr2 -encrypt-destination true
The system creates an encryption key for the volume. The data on the volume is
encrypted.
2. Verify that the volume is enabled for encryption: volume show -is-encrypted
true
For complete command syntax, see the man page for the command.
Example