
NetApp has been very popular for NAS (Network Attached Storage) for the past decade.

In 2002, NetApp wanted to move beyond the NAS tag and into SAN, so they renamed their product
lines to FAS (Fabric-Attached Storage) to support both NAS and SAN. In the FAS product lines, NetApp provides a unique storage solution that supports multiple protocols in a
single system. NetApp storage systems run the DATA ONTAP operating system, which is based on BSD Unix.

1. DATA ONTAP 7G (NetApp’s Legacy Operating System)


2. DATA ONTAP GX (NetApp’s Grid-Based Operating System)
DATA ONTAP GX is based upon grid technology (a distributed storage model) acquired from Spinnaker Networks.

NetApp 7 Mode vs Cluster Mode:

In the past, NetApp provided 7-Mode storage. 7-Mode storage provides dual-controller, cost-effective storage systems. In 2010, NetApp released a new operating
system called DATA ONTAP 8, which includes both 7-Mode and Cluster Mode. You simply choose the mode at storage controller start-up (similar to a dual-boot OS). In
NetApp Cluster Mode, you can easily scale out the environment on demand.

From the DATA ONTAP 8.3 operating system version onwards, there is no option to choose 7-Mode; it is available only as clustered DATA ONTAP.
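A quick way to confirm which release and mode a controller is running is to check the version and the installed images from the CLI; both commands appear again in the command references later in this document:

version (the release string of a 7-Mode system includes "7-Mode")
system node image show (clustered ONTAP: shows the installed Data ONTAP images and which one is the default boot image)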

Clustered DATA ONTAP Highlights:


Here are some of the key highlights of clustered DATA ONTAP. Some of the features remain the same as in 7-Mode.

1. Supported Protocols:

 FC
 NFS
 FCoE
 iSCSI
 pNFS
 CIFS

2. Easy to Scale out


3. Storage Efficiency

 De-duplication
 Compression
 Thin provisioning
 Cloning

4. Cost and Performance

 Supports Flash Cache


 Option to use SSD Drives
 Flash Pool
 FlexCache
 SAS and SATA drive Options

5. Integrated Data protection

 Snapshot Copies

 Asynchronous Mirroring
 Disk to Disk or Disk to tape backup option.

6. Management

 Unified Management. (Manage SAN and NAS using same portal)


 Secure Multi-tenancy
 Multi-vendor Virtualization.
Clustered DATA ONTAP – Scalability:

Clustered Data ONTAP solutions can scale from 1 to 24 nodes and are mostly managed as one large system. More importantly, to client systems, a cluster looks like a
single system. The performance of the cluster scales linearly to multiple gigabytes per second of throughput, and capacity scales to petabytes. Clusters are built for continuous
operation; no single failure of a port, disk, card, or motherboard will cause data to become inaccessible. Cluster scaling and load balancing are both
transparent.

Clusters provide a robust feature set, including data protection features such as Snapshot copies, intracluster asynchronous mirroring, SnapVault backups, and NDMP
backups.

Clusters are a fully integrated solution. As an example, consider a 20-node cluster that includes 10 FAS systems with 6 disk shelves each and 10 FAS systems with 5 disk
shelves each. Each rack contains a high-availability (HA) pair with storage failover (SFO) capabilities.

Note: When you use both NAS and SAN on the same system, the supported maximum is eight cluster nodes. A 24-node cluster is possible only when the NetApp storage
is used for NAS alone.

NetApp – Clustered DATA ONTAP – Objects and Components


Physical elements of a system, such as disks, nodes, and ports on those nodes, can be touched and seen. Logical elements of a system cannot be touched, but they do
exist and use disk space.
 An ONTAP cluster typically consists of fabric-attached storage (FAS) controllers: computers optimized to run the clustered Data ONTAP operating system. The controllers
provide network ports that clients and hosts use to access storage. These controllers are also connected to each other using a dedicated, redundant 10 gigabit ethernet
interconnect. The interconnect allows the controllers to act as a single cluster. Data is stored on shelves attached to the controllers. The drive bays in these shelves may
contain hard disks, flash media, or both.
 A cluster provides hardware resources, but clients and hosts access storage in clustered ONTAP through storage virtual machines (SVMs). SVMs exist natively inside of
clustered ONTAP. They define the storage available to the clients and hosts. SVMs define authentication, network access to the storage in the form of logical interfaces (LIFs),
and the storage itself, in the form of SAN LUNs or NAS volumes.
 A single cluster may contain multiple storage virtual machines (SVMs) targeted for various use cases, including server and desktop virtualization, large NAS content
repositories, general-purpose file services, and enterprise applications. SVMs may also be used to separate different organizational departments or tenants.
 The components of an SVM are not permanently tied to any specific piece of hardware in the cluster. An SVM’s volumes, LUNs, and logical interfaces can move to different
physical locations inside the cluster, while maintaining the same logical location to clients and hosts. While physical storage and network access moves to a new location inside
the cluster, clients can continue accessing data in those volumes or LUNs, using those logical interfaces.
 Clients and hosts are aware of SVMs, but may be unaware of the underlying cluster. The cluster provides the physical resources the SVMs need in order to serve data.
The clients and hosts connect to an SVM, rather than to a physical storage array. A cluster, which is a physical entity, is made up of other physical and logical pieces. For
example, a cluster is made up of nodes, and each node is made up of a controller, disks, disk shelves, NVRAM, and so on. On the disks are RAID groups and aggregates. Also,
each node has a certain number of physical network ports, each with its own MAC address.
 A Data ONTAP cluster is a physical interconnectivity of storage systems, which are called "nodes". A cluster can include from 2 to 24 nodes in a NAS environment and 2
to 8 nodes in a SAN environment. Nodes are connected to each other by a private, nonroutable 10-Gb Ethernet interconnect. Each node has an HA partner node for storage
failover (abbreviated as SFO). Both nodes are also peer nodes within the cluster. The storage of a node can failover to its HA partner, and its logical interfaces can failover to
any node within the cluster.
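Since SVMs and LIFs are the objects clients actually see, a common administrative task is creating them. The sketch below is illustrative only; the SVM name (svm1), aggregate (aggr1), node (node1), port (e0c) and IP address are assumptions, and exact parameters vary between clustered ONTAP releases:

Cluster::> vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1 -rootvolume-security-style unix
Cluster::> network interface create -vserver svm1 -lif svm1_data1 -role data -data-protocol nfs -home-node node1 -home-port e0c -address 192.168.1.50 -netmask 255.255.255.0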
Cluster
A cluster is typically composed of physical hardware: controllers with attached storage (solid state drives, spinning media, or both; or a third-party storage array when
FlexArray is used), network interface cards, and, optionally, PCI-based flash cards (Flash Cache). Together, all of these components create a physical resource pool.
This physical resource pool is visible to cluster administrators but not to the applications and hosts that use the cluster. The storage virtual machines (SVMs) in the cluster
use these resources to serve data to clients and hosts.
Nodes
Storage controllers are presented and managed as cluster nodes, or instances of clustered ONTAP. Nodes have network connectivity and storage. The terms “node” and
“controller” are sometimes used interchangeably, but “node” more frequently means a controller, its storage, and the instance of clustered ONTAP running on it.
HA Pair

An HA pair consists of 2 identical controllers; each controller actively provides data services and has redundant cabled paths to the other controller’s disk storage. If either controller is
down for any planned or unplanned reason, its HA partner can take over its storage and maintain access to the data. When the downed system rejoins the cluster, the partner will give
back the storage resources.
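In 7-Mode the takeover and giveback behaviour described above is driven by the cf commands, and in clustered ONTAP by the storage failover commands; both sets appear in the command references later in this document. A minimal sketch, assuming an HA pair node1/node2:

cf status (7-Mode: shows whether takeover is enabled and the partner state)
cf takeover (7-Mode: take over the partner's storage, for example before planned maintenance)
cf giveback (7-Mode: return the storage once the partner is healthy again)
Cluster::> storage failover show (clustered ONTAP: shows takeover/giveback state per HA pair)
Cluster::> storage failover takeover -bynode node2
Cluster::> storage failover giveback -bynode node2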

Netapp Filer Tutorial


The NetApp Filer, also known as NetApp Fabric-Attached Storage (FAS), is a data storage device. It can act as a SAN or as a NAS, serving storage over a network using either file-
based or block-based protocols. It uses an operating system called Data ONTAP (based on FreeBSD).

File-Based Protocol : NFS, CIFS, FTP, TFTP, HTTP

Block-Based Protocol : Fibre Channel (FC), Fibre channel over Ethernet (FCoE), Internet SCSI (iSCSI)

1. NFS (Network File System)


NFS is the Network File System for UNIX and Linux operating systems. It allows files to be shared transparently between servers, desktops, laptops etc.
NFS allows network systems (clients) to access shared files and directories that are stored and administered centrally from a storage system.
It is a client/server application that allows a user to view, store and update files on a remote computer as though they were on their own computer.

2. CIFS (Common Internet File System)


CIFS is the Common Internet File System used by Windows operating systems for file sharing. CIFS uses the client/server programming model.
A client program makes a request of a server program (usually in another computer) for access to a file or to pass a message to a program that runs in the server computer. The
server takes the requested action and returns a response. CIFS uses the TCP/IP protocol.

3. FTP (File Transfer Protocol)


The File Transfer Protocol (FTP) is a standard network protocol used to transfer computer files from one host to another host over a TCP-based network, such as the Internet. FTP
uses the Internet's TCP/IP protocols to enable data transfer.
FTP is most commonly used to download a file from a server over the Internet or to upload a file to a server (e.g., uploading a web page file to a server).

4. TFTP (Trivial File Transfer Protocol)


Trivial File Transfer Protocol (TFTP) is a simple, lock-step file transfer protocol which allows a client to get a file from, or put a file onto, a remote host over the network. TFTP uses the User
Datagram Protocol (UDP) and provides no security features.
Differences between FTP and TFTP
1. FTP is a password-based network protocol used to transfer data across a network; TFTP is a network protocol that does not have any authentication process.
2. FTP may be accessed anonymously, but the amount of information transferred is limited; TFTP has no encryption in place and can only successfully transfer files that are
not larger than one terabyte.

5. HTTP (Hyper Text Transfer Protocol)


HTTP is the underlying protocol used by the World Wide Web. HTTP defines how messages are formatted and transmitted, and what actions Web servers and browsers should take
in response to various commands.
For example, when you enter a URL in your browser, this actually sends an HTTP command to the Web server directing it to fetch and transmit the requested Web page.

6. FC (Fibre Channel)
Fibre Channel, or FC, is a high-speed network technology (commonly running at 2-, 4-, 8- and 16-gigabit per second rates) primarily used to connect computer data storage.
Fibre Channel is a widely used protocol for high-speed communication to the storage device. The Fibre Channel interface provides gigabit network speed. It provides a serial data
transmission that operates over copper wire and optical fiber. The latest version of the FC interface (16FC) allows transmission of data up to 16Gb/s.

7. FcoE (Fibre Channel over Ethernet)


Fibre Channel over Ethernet (FCoE) is a computer network technology that encapsulates Fibre Channel frames over Ethernet networks. This allows Fibre Channel to use 10 Gigabit
Ethernet networks (or higher speeds) while preserving the Fibre Channel protocol.

8. iSCSI (Internet Small Computer System Interface)


Pronounced eye skuzzy. Short for Internet SCSI, an IP-based standard for linking data storage devices over a network and transferring data by carrying SCSI commands over IP
networks. iSCSI supports a Gigabit Ethernet interface at the physical layer, which allows systems supporting iSCSI interfaces to connect directly to standard Gigabit Ethernet switches
and/or IP routers.
When an operating system receives a request it generates the SCSI command and then sends an IP packet over an Ethernet connection. At the receiving end, the SCSI commands
are separated from the request, and the SCSI commands and data are sent to the SCSI controller and then to the SCSI storage device. iSCSI will also return a response to the request
using the same protocol. iSCSI is important to SAN technology because it enables a SAN to be deployed in a LAN, WAN or MAN.
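In NetApp practice, block access over iSCSI (or FC) is typically provided by creating a LUN, grouping the host initiators into an igroup, and mapping the LUN to that igroup; the same commands appear in the interview questions and command list later in this document. A rough 7-Mode sketch with made-up names and sizes:

NetappFiler> igroup create -i -t linux ig_linux01 iqn.1994-05.com.redhat:linux01 (create an iSCSI igroup for a Linux host, identified by its IQN)
NetappFiler> lun create -s 100g -t linux /vol/vol1/lun0 (create a 100 GB LUN with the linux host type)
NetappFiler> lun map /vol/vol1/lun0 ig_linux01 (map the LUN to the igroup)
NetappFiler> lun show -m (verify the mapping)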

NetApp Filers can offer the following

 Supports SAN, NAS, FC, SATA, iSCSI, FCoE and Ethernet all on the same platform
 Supports SATA, FC and SAS disk drives
 Supports block protocols such as iSCSI, Fibre Channel and AoE
 Supports file protocols such as NFS, CIFS , FTP, TFTP and HTTP
 High availability
 Easy Management
 Scalable

The most common NetApp configuration consists of a filer (also known as a controller or head node) and disk enclosures (also known as shelves). The disk enclosures are
connected by Fibre Channel or parallel/serial ATA, and the filer is then accessed by Linux, Unix or Windows servers via a network (Ethernet or FC). First we need to describe the Data
ONTAP storage model architecture.

The Data ONTAP 7-Mode architecture is divided into two blades:

1. Network Blade (N-Blade)


2. Data Blade (D-Blade).

Network Blade (N-Blade)


End users' read and write requests are passed to the storage through the Network Blade. Because NetApp FAS is unified storage, a client can access data from the storage
system either through NAS protocols or through SAN (block-based) protocols. The following protocols work on the Network Blade:
* CIFS, NFS
* FC, iSCSI, FCoE
So the Network Blade provides access to data via NAS or SAN protocols.
Data Blade (D-Blade)
The Data Blade consists of 3 layers:
* WAFL
* RAID
* STORAGE (disk array)
The Data Blade is responsible for data read and write operations and for the mechanisms that make NetApp Data ONTAP a very efficient, fast, and robust
architecture for read and write operations.
WAFL is the patented file system (not really a file system in the traditional sense) used by the NetApp ONTAP OS that makes NetApp FAS systems a more powerful solution than the competition. Definition-wise,
WAFL can be described as below.
The Write Anywhere File Layout (WAFL) is a file layout that supports large, high-performance RAID arrays, quick restarts without lengthy consistency checks in the event of a
crash or power failure, and growing the file system size quickly. It was designed by NetApp for use in its storage appliances.
WAFL, as a robust versioning filesystem, provides snapshots, which allow end users to see earlier versions of files in the file system. Snapshots appear in a hidden directory:
~snapshot for Windows (CIFS) or .snapshot for Unix (NFS). Up to 255 snapshots can be made of any traditional or flexible volume. Snapshots are read-only, although Data ONTAP
7 additionally provides the ability to make writable "virtual clones", based on the WAFL snapshot technique, called "FlexClones".
NetApp Write Request Data Flow

For a write operation, whenever a write request arrives at the NetApp D-Blade via the N-Blade (either via NAS or SAN protocols), it is cached in the memory
buffer cache, and a copy is simultaneously written to NVRAM, which is divided into NVLOGs. One thing to remember is that NVRAM in NetApp acts as a journal of incoming writes rather than the write cache itself; the actual write cache is main memory (see below).

NVRAM

NetApp storage systems use several types of memory for data caching. Non-volatile battery-backed memory (NVRAM) is used for write caching (whereas main memory, and flash
memory in the form of either PCIe extension cards or SSD drives, is used for read caching). Before going to the hard drives, all writes are cached in NVRAM. NVRAM is split in half, and
each time 50% of NVRAM gets full, writes start being cached to the second half while the first half is being written to disks. If NVRAM does not fill up within a 10-second interval, a
flush is forced by a system timer.

To be more precise, when data block comes into NetApp it’s actually written to main memory and then journaled in NVRAM. NVRAM here serves as a backup, in case filer fails. The
active file system pointers on the disk are not updated to point to the new locations until a write is completed. Upon completion of a write to disk, the contents of NVRAM are cleared
and made ready for the next batch of incoming write data. This act of writing data to disk and updating active file system pointers is called a Consistency Point (CP). In FAS32xx series
NVRAM has been integrated into main memory and is now called NVMEM.
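Consistency point activity can be observed from the 7-Mode CLI; for instance, the sysstat tool listed in the command reference later in this document includes CP-related columns in its extended output. A monitoring sketch:

NetappFiler> sysstat -x 1 (one-second samples; the CP ty and CP util columns indicate when and why consistency points are being written; press Ctrl-C to stop)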

RAID

RAID (originally redundant array of inexpensive disks, now commonly redundant array of independent disks) is a data storage virtualization technology that combines multiple
physical disk drive components into a single logical unit for the purposes of data redundancy, performance improvement, or both. Using RAID increases performance or provides fault
tolerance or both. Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of some of its components.

Note :

RAID uses two or more physical disk drives and a RAID controller. The RAID controller acts as an interface between the host and the disks.

RAID 0:

A popular disk subsystem that increases performance by interleaving data across two or more drives. Data are broken into blocks, called "stripes," and alternately written to two or
more drives simultaneously to increase speed. For example, stripe 1 is written to drive 1 at the same time stripe 2 is written to drive 2. Then stripes 3 and 4 are written to drives 3
and 4 simultaneously and so on. When reading, stripes 1 and 2 are read simultaneously; then stripes 3 and 4 and so on.

RAID 1:

A popular disk subsystem that increases safety by writing the same data on two drives. Called "mirroring," RAID 1 does not increase performance. However, if one drive fails, the
second drive is used, and the failed drive is manually replaced. After replacement, the RAID controller duplicates the contents of the working drive onto the new one.
RAID 10 and RAID 01:

A RAID subsystem that increases safety by writing the same data on two drives (mirroring), while increasing speed by interleaving data across two or more mirrored "virtual"
drives (striping). RAID 10 provides the most security and speed but uses more drives than the more common RAID 5 method.

RAID Parity:

Parity computations are used in RAID drive arrays for fault tolerance by calculating the data in two drives and storing the results on a third. The parity is computed by XORing a
bit from drive 1 with a bit from drive 2 and storing the result on drive 3. After a failed drive is replaced, the RAID controller rebuilds the lost data from the other two drives. RAID
systems often have a "hot" spare drive ready and waiting to replace a drive that fails.

RAID 2:

RAID 2, which is rarely used in practice, stripes data at the bit (rather than block) level, and uses a Hamming code for error correction. Hamming-code parity is calculated across
corresponding bits and stored on at least one parity drive.

RAID 3:

RAID 3 consists of byte-level striping with dedicated parity. RAID 3 stripes data for performance and uses parity for fault tolerance. Parity information is stored on a dedicated drive
so that the data can be reconstructed if a drive fails in a RAID set. For example, in a set of five disks, four are used for data and one for parity. Although implementations exist,
RAID 3 is not commonly used in practice. RAID 3 provides good performance for applications that involve large sequential data access, such as data backup or video streaming.
RAID 4:

RAID 4 is very similar to RAID 3. The main difference is the way data is shared: data is divided into blocks (16, 32, 64 or 128 KB) and written across the disks, similar to RAID 0. For
each row of written data, a parity block is written to a dedicated parity disk. RAID 4 uses block-level striping.

RAID 5:

RAID 5 is a versatile RAID implementation. It is similar to RAID 4 because it uses striping. The drives (strips) are also independently accessible. The difference between RAID 4 and
RAID 5 is the parity location. In RAID 4, parity is written to a dedicated drive, creating a write bottleneck for the parity disk. In RAID 5, parity is distributed across all disks to
overcome the write bottleneck of a dedicated parity disk.

RAID 6:

RAID 6 works the same way as RAID 5, except that RAID 6 includes a second parity element to enable survival if two disk failures occur in a RAID set. Therefore, a RAID 6
implementation requires at least four disks. RAID 6 distributes the parity across all the disks. The write penalty in RAID 6 is more than that in RAID 5; therefore, RAID 5 writes
perform better than RAID 6. The rebuild operation in RAID 6 may take longer than that in RAID 5 due to the presence of two parity sets.
RAID DP:

RAID-DP, like RAID 4, first uses a horizontal parity (P). As an extension of RAID 4, RAID-DP adds a diagonal parity (DP). The double parity allows up to two drives to fail without
the RAID group losing data. RAID-DP fulfills the requirements for RAID 6 according to the SNIA definition. NetApp RAID-DP uses two parity disks per RAID group. One parity disk stores
parity calculated for horizontal stripes, as described earlier. The second parity disk stores parity calculated from diagonal stripes.

The diagonal parity stripe includes a block from the horizontal parity disk as part of its calculation. RAID-DP treats all disks in the original RAID 4 construct, including both data
and parity disks, the same. Note that one disk is omitted from each diagonal parity stripe.

Data ONTAP Storage Architecture Overview


Storage architecture refers to how Data ONTAP provides data storage resources to host or client systems and applications. Data ONTAP distinguishes between the physical layer of
data storage resources and the logical layer. The physical layer includes drives, array LUNs, virtual disks, RAID groups, plexes, and aggregates.
Note: A drive (or disk) is the basic unit of storage for storage systems that use Data ONTAP to access native disk shelves. An array LUN is the basic unit of storage that a storage
array provides to a storage system that runs Data ONTAP. A virtual disk is the basic unit of storage for a storage system that runs Data ONTAP-v. The disk is the physical disk itself;
normally the disk resides in a disk enclosure and has a pathname like 2a.17, where:
2a = SCSI adapter
17 = disk SCSI ID
Any disks that are classed as spare will be used in any RAID group to replace failed disks.
Aggregates
Aggregates are the raw space in your storage system. You take a bunch of individual disks and aggregate them together into aggregates. But an aggregate can't actually hold
data; it's just raw space. An aggregate is the physical storage. It is made up of one or more RAID groups of disks.
I like to think of an aggregate as a big hard drive. There are a lot of similarities: when you buy a hard drive you need to partition it and format it before it can be used, and until then
it is basically raw space. Well, that's an aggregate: just raw space. One point to remember is that an aggregate can grow but cannot shrink. When I created aggr1 I used the
command:

aggr create aggr1 5

This caused Data ONTAP to create an aggregate named aggr1 with five disks in it. Let's take a look at this with the following command:

sysconfig -r
If you look at aggr1 in the output, you can see that it contains 5 disks: three data disks and two parity disks, "parity" and "dparity". The RAID group was created
automatically to support the aggregate. If I need more space, I can add disks to the aggregate and they will be inserted into the existing RAID group within the aggregate. I can add 3
disks with the following command:

aggr add aggr1 3

Before the disks can be added, they must be zeroed. If they are not already zeroed, then Data ONTAP will zero them first, which may take a significant amount of time.
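To avoid that wait, spares can be zeroed ahead of time and the result verified; these commands also appear in the 7-Mode command list later in this document. A small sketch using the aggregate name from the example above:

disk zero spares (zeroes all spare disks in the background)
vol status -s (lists spare disks; spares that still need zeroing are flagged in the output)
aggr status -r aggr1 (shows the RAID layout of aggr1 after the disks have been added)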

Raid groups

Before all the physical hard disk drives (HDDs) are pooled into a logical construct called an aggregate (which is what ONTAP’s FlexVol is about), the HDDs are grouped into a RAID
group. A RAID group is also a logical construct, in which it combines all HDDs into data or parity disks. The RAID group is the building block of the Aggregate.

Raid groups are protected sets of disks. consisting of 1 or 2 parity, and 1 or more data disks. We don’t build raid groups, they are built automatically behind the scene when you
build an aggregate. For example:
In a default configuration you are configured for RAID-DP and a 16 disk raid group (assuming FC/SAS disks). So, if i create a 16 disk aggregate i get 1 raid group. If I create a 32
disk aggregate, i get 2 raid groups. Raid groups can be adjusted in size. For FC/SAS they can be anywhere from 3 to 28 disks, with 16 being the default. An aggregate is made of Raid
Groups. Lets do a few examples using the command to make an aggregate:

aggr create aggr1 16

If the default RAID group size is 16, then the aggregate will have one RAID group. But if I use the command:

aggr create aggr1 32

now I have two full RAID groups, but still only one aggregate. So the aggregate gets the performance benefit of 2 RAID groups' worth of disks. Notice we did not build a RAID group; Data
ONTAP built the RAID groups based on the default RAID group size.
If I had created an aggregate with 24 disks, then Data ONTAP would have created two RAID groups. The first RAID group would be fully populated with 16 disks (14 data disks and
two parity disks) and the second RAID group would have contained 8 disks (6 data disks and two parity disks). This is a perfectly normal situation. For the most part, it is safe to
ignore RAID groups and simply let Data ONTAP take care of things.
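The RAID group size can also be set explicitly rather than relying on the default. A hedged sketch (the aggregate name and sizes are made up, and exact option syntax can vary slightly between Data ONTAP releases):

aggr create aggr2 -r 20 40 (create a 40-disk aggregate using a RAID group size of 20)
aggr options aggr2 raidsize 20 (change the RAID group size of an existing aggregate; applies as new disks are added)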
Volumes
Volumes are data containers. A volume is analogous to a partition: it's where you put data. Think of the previous analogy. An aggregate is the raw space (hard drive), the
volume is the partition; it's where you put the file system and data. Some other similarities include the ability to have multiple volumes per aggregate, just like you can have multiple
partitions per hard drive, and you can grow and shrink volumes, just like you can grow and shrink partitions.
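For example, a flexible volume is carved out of an aggregate and can be resized later; the same commands appear again in the 7-Mode command reference below. The names and sizes here are purely illustrative:

vol create vol1 aggr1 100g (create a 100 GB flexible volume in aggregate aggr1)
vol size vol1 +20g (grow vol1 by 20 GB)
vol size vol1 -10g (shrink vol1 by 10 GB)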

Qtrees

A qtree is analogous to a subdirectory. Let's continue the analogy: aggregate is hard drive, volume is partition, and qtree is subdirectory. Why use them? To sort data, for the same
reason you use directories on your personal PC. There are 5 things you can do with a qtree that you can't do with a plain directory, and that's why they aren't just called directories:

 Oplocks
 Security style
 Quotas
 SnapVault
 Qtree SnapMirror

Opportunistic lock (OpLock)


Opportunistic lock (OpLock) is a form of file locking used to facilitate caching and access control and improve performance. A cache (pronounced CASH) is a place to store
something temporarily in a computing environment.
OpLocks are made to enable simultaneous file access by multiple users while also improving performance for synchronized caches. In a synchronized cache, when a client requests
a file from a server, the shared file may be cached to avoid subsequent trips over the network to retrieve it.
OpLock is part of the Server message block (SMB) protocol, also known as the Common Internet File System (CIFS) protocol. OpLocks include batch locks, exclusive locks and level 2
OpLocks.
Security Styles
Each volume and qtree on the storage system has a security style. The security style determines what type of permissions are used for data on volumes when authorizing users.
You must understand what the different security styles are, when and where they are set, how they impact permissions, how they differ between volume types, and more.
Every qtree and volume has a security style setting—NTFS, UNIX, or mixed. The setting determines whether files use Windows NT or UNIX (NFS) security. How you set up security
styles depends on what protocols are licensed on your storage system.
Although security styles can be applied to volumes, they are not shown as a volume attribute, and they are managed for both volumes and qtrees using the qtree command. The
security style for a volume applies only to files and directories in that volume that are not contained in any qtree. The volume security style does not affect the security style for any
qtrees in that volume.
There are four different security styles: UNIX, NTFS, mixed, and unified. Each security style has a different effect on how permissions are handled for data. You must understand
the different effects to ensure that you select the appropriate security style for your purposes.
It is important to understand that security styles do not determine what client types can or cannot access data. Security styles only determine the type of permissions Data ONTAP
uses to control data access and what client type can modify these permissions.
For example, if a volume uses UNIX security style, CIFS clients can still access data (provided that they properly authenticate and authorize) due to the multiprotocol nature of
Data ONTAP. However, Data ONTAP uses UNIX permissions that only UNIX clients can modify.

Quotas:
Quotas provide a way to restrict or track the disk space and number of files used by a user, group, or qtree. You specify quotas using the /etc/quotas file. A quota limits the
amount of disk space and the number of files that a particular user or group can consume. A quota can also restrict the total space and files used in a qtree, or the usage of users and
groups within a qtree. A request that would cause a user or group to exceed an applicable quota fails with a "disk quota exceeded" error. A request that would cause the number of
blocks or files in a qtree to exceed the qtree's limit fails with an "out of disk space" error.
User and group quotas do not apply to the root user or to the Windows Administrator account; tree quotas, however, do apply even to root and the Windows Administrator
account.
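As an illustration, a typical /etc/quotas file (viewed with rdfile /etc/quotas) might contain a tree quota line and a user quota line such as the two below; the targets, limits and volume names are made up, and the exact column layout can vary, so treat this as a sketch rather than a definitive template:

/vol/vol1/qt1    tree             10G
jsmith           user@/vol/vol1   500M

quota on vol1 (activate quotas on the volume)
quota report (show current quota usage)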
Before going to SnapVault and SnapMirror we have to know what snapshots are.
Snapshots
A NetApp snapshot is patented NetApp technology that allows the storage admin to take backups of files locally on the storage box, with very fast restoration of files in case
of file corruption or accidental deletion. To understand snapshots we first have to go through the concept of the AFS (active file system).
Active file system
NetApp writes data using a 4 KB block size, so whenever we write data into the NetApp file system it breaks the file up into 4 KB blocks. For example, a file ABC would be written
as a set of blocks A, B, and C. At any point in time, a file is represented by the active file system.
Definition of snapshot on the basis of the active file system
Whenever we take a snapshot of a file, all the blocks constituting the file become frozen. This means that after taking the snapshot, the blocks constituting the file cannot
be altered or deleted; when an end user makes changes to that file, the changes are written to new blocks.
For example, if an end user changes the file ABC so that block C becomes C', the new file becomes ABC'. If the storage admin has taken a snapshot, both the old block C and the new block C' remain on disk.

So, definition-wise, a snapshot can be defined as a read-only image of the active file system at a point in time.

Secret behind very fast data retrieval using snapshot technology

As described above, a snapshot is a read-only image of the active file system at a point in time. So if, after the first snapshot (which froze the ABC blocks), the file gets
deleted or corrupted (that is, file ABC' is lost) and the last image of the file must be retrieved, the original ABC blocks are still there because the snapshot froze them.
Retrieval therefore only changes pointers back to the previously frozen blocks, and within seconds the file is retrieved.
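In 7-Mode, working with snapshots looks roughly like the sketch below; the volume and snapshot names are made up, and the same commands are listed in the command reference later in this document:

snap create vol1 before_upgrade (manually create a snapshot of vol1)
snap list vol1 (list the snapshots of vol1)
snap sched vol1 0 2 6@8,12,16,20 (keep 0 weekly, 2 nightly, and 6 hourly copies taken at 8:00, 12:00, 16:00 and 20:00)
snap reserve vol1 20 (reserve 20% of the volume for snapshot data)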

Snapshots as the secret of NetApp performance and other features

Some other patented NetApp technologies are fully based on the snapshot engine. Below is a description of those technologies.

1. SnapRestore

2. SnapMirror

3. SnapVault
Together these form the data backup and recovery spectrum in a NetApp FAS storage system. Using snapshots alone we can back up a whole volume or aggregate, but we can only restore a
single file at a time, which is not practical on its own, so there are other technologies that use the snapshot engine but provide more features and granularity to the storage admin.

* Snapshots provide the underlying data backup and recovery mechanism.

* SnapRestore enables the storage admin to recover a whole qtree or volume using a single command.

* SnapMirror provides DR (disaster recovery) functionality and works on the snapshot engine.

* SnapVault is the technology that enables the storage admin to take backups of data onto a remote storage system.

Difference between SnapMirror and SnapVault functionality

Both SnapVault and SnapMirror use the snapshot engine and appear similar in functionality, but there is a basic difference between the two technologies.

SnapVault

SnapVault is a feature that provides a disk-to-disk backup solution on a remote storage system; with OSSV (Open Systems SnapVault), data from non-NetApp (open) systems can
also be backed up to a NetApp secondary.

SnapMirror

SnapMirror is a DR solution provided by NetApp: we enable a SnapMirror relationship between two NetApp systems, and in case of disaster the storage admin can route users
to access the data from the replica (mirror) storage.
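A minimal 7-Mode SnapVault baseline sketch, assuming a primary filer named pri and a secondary filer named sec, with made-up volume and qtree names (the snapvault snap sched command is covered again in the interview questions later):

sec> snapvault start -S pri:/vol/vol1/qt1 /vol/sv_vol1/qt1 (run on the secondary; performs the initial baseline transfer of the qtree)
sec> snapvault status (shows the state of the SnapVault relationships)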

https://netappnotesark.blogspot.com/2015/08/index_12.html

How to replace Failed Disk - NetApp


IDENTIFY THE FAILED DISK

To identify the failed disk you can verify it either physically or logically.

NetappFiler>vol status -f
NetappFiler>aggr status -f

If you are in front of the disk shelf, run the commands below to locate the disk. In most cases you will see the failed disk's LED as red.

You need administrator privileges for this.


NetappFiler>priv set advanced
Now you are in advanced mode. Be very careful while running any commands in this mode.

NetappFiler*>blink_on DiskName

Disk LED will blink continuously

NetappFiler*>blink_off DiskName
Disk LED blink will off

NetappFiler*>led_on DiskName
Disk LED will on

NetappFiler*>led_off DiskName
Disk LED will switchoff

Use the following steps in extreme cases only. You can also bring the disk back online forcefully, but the same disk may fail again after
a few hours; if you do not have any spare drives you can try this as a temporary solution.

Come back from Advanced mode to normal


NetappFiler>priv set

Enter into the diag mode


NetappFiler>priv set diag

Then run the below command to bring the disk online again


NetappFiler*>disk unfail -s DiskName

come back from the diag mode to normal


NetappFiler*>priv set

Replace your disk by physically accessing the disk shelf, then follow the steps below to use the new disk.

Verify the newly added disk is detected as an unowned disk


NetappFiler>disk show -n

Assign the disk to the pool


NetappFiler>disk assign DiskName

Verify whether the disk has been added as a spare


NetappFiler> vol status -s
NetappFiler> vol status -f

Now your disk has been replaced successfully.

Netapp Hardware Interview Questions and Answers

1. How to improve the Netapp storage performance?

Ans:- There is no single answer to this question, but performance can be improved in several ways:

 If the volume/LUN is on an ATA/SATA disk aggregate, the volume can be migrated to an FC/SAS disk aggregate. Alternatively, Flash Cache can be used to
improve performance.

 For NFS/CIFS, instead of accessing from a single interface, a multi-mode VIF can be configured for better bandwidth and fault tolerance.

 It is always advisable to keep aggr/vol utilization below 90%.

 Avoid backing up multiple volumes at the same point in time.

 Aggr/volume/LUN reallocation can be done to redistribute the data across multiple disks for better striping performance.

 Schedule scrubbing and de-duplication scans after business hours.

 Create multiple loops and connect different types of shelves to each loop.

 Avoid mixing disks of different speeds and different types in the same aggregate.

 Always keep sufficient spare disks to replace failed disks, because reconstruction takes time and hurts performance.

 Keep the firmware/software at the version recommended by NetApp.

2. Unable to map a LUN to a Solaris server, but there is no issue on the Solaris server side. How do you resolve the issue?

Ans:- FROM THE STORAGE SIDE:

Verify the iscsi/fcp license is added on the storage
Verify the iscsi/fcp session is logged in from the server side using the below command
Netapp> igroup show -v
Verify the LUNs are mapped to the corresponding igroup
Verify the correct host type was specified while creating the igroup and LUN
Verify the correct IQN/WWPN is added to the igroup
Verify zoning is properly configured on the switch side, if the protocol is FCP

3. How to create the LUN for linux server?


Ans:- lun create -s size -t linux /vol/vol1/lunname

4. How to create qtree and provide the security?

Ans:-
Netapp>qtree create /vol/vol1/qtreename
Netapp>qtree security /vol/vol1/qtree unix|ntfs|mixed
5. How to copy volume filer to filer?

Ans:- ndmpcopy or snapmirror

6. How to resize the aggregate?

Ans:-
Netapp> aggr add AggName no.of.disk

7. How to increase the volume?

Ans:- Traditional Volume

vol add VolName no.of.disk

Flexible Volume

vol size VolName +60g

8. What is qtree?

Ans:- A qtree is a logical partition of a volume.

9. What is the default snap reserve in aggregate?

Ans:- 5%

10. What is snapshot?

Ans:-
A Snapshot copy is a read-only image of a traditional or FlexVol volume, or an aggregate, that captures the state of the file system at a point in time.

11. What RAID types does NetApp support, and what is the difference between them?

Ans:-
Supported RAID types:

RAID 4
RAID 6
RAID-DP

12. What are the protocols you are using?


Ans:-
Say some protocols like NFS, CIFS, ISCSI and FCP

13. What is the difference between iscsi and fcp?

Ans:-
iSCSI - sends SCSI blocks over IP. iSCSI does not require a dedicated network; it also works on the existing network, since it runs over TCP/IP.

FCP - sends blocks over a fibre medium. It requires a dedicated FC network. Performance is high compared to iSCSI.

14. What is the iscsi port number?


Ans:- 3260

15. What is the difference between ndmp copy and vol copy?

ndmpcopy - Network Data Management Protocol based copy (also used for tape backup)

vol copy - used to transfer a volume to the same or another aggregate

16. What is the difference between ONTAP 7 & 8?

In ONTAP 7 an individual aggregate is limited to a maximum of 16 TB, whereas ONTAP 8 supports the new 64-bit aggregates and hence the size of an individual
aggregate extends to 100 TB.

17. What are the steps need to perform to configure SnapMirror?

The SnapMirror configuration process consists of the following four steps:


Install the SnapMirror license on the source and destination systems:
license add <code>
On the source, specify the host name or IP address of the SnapMirror destination systems you wish to authorize to replicate this source system:

options snapmirror.access host=dst_hostname1,dst_hostname2

For each source volume or qtree to replicate, perform an initial baseline transfer. For volume SnapMirror,

restrict the destination volume first: vol restrict dst_vol

Then initialize the volume SnapMirror baseline, using the following syntax on the destination:

snapmirror initialize -S src_hostname:src_vol dst_hostname:dst_vol

For a qtree SnapMirror baseline transfer, use the following syntax on the destination:

snapmirror initialize -S src_hostname:/vol/src_vol/src_qtree dst_hostname:/vol/dst_vol/dst_qtree
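Putting those steps together, a worked example might look like the sketch below; the hostnames (filerA, filerB) and volume names are made up, and the license code is left as a placeholder:

filerA> license add <code>
filerA> options snapmirror.access host=filerB (authorize filerB to pull from filerA)
filerB> license add <code>
filerB> vol restrict vol1_dr (restrict the destination volume before the baseline)
filerB> snapmirror initialize -S filerA:vol1 filerB:vol1_dr (run on the destination; performs the baseline transfer)
filerB> snapmirror status (monitor the transfer)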

18. While doing a baseline transfer you get an error message. What troubleshooting steps will you take?

Ans:-
Check that both hosts are reachable by running the "ping" command
Check whether TCP ports 10566 and 10565 are open on the firewall
Check whether the SnapMirror licenses are installed on both filers

19. Explain the different types of replication modes.

SnapMirror Async mode replicates Snapshot copies from a source volume or qtree to a destination volume or qtree. It can replicate over distances of more than 800 km.
Incremental updates are based on a schedule or are performed manually using the snapmirror update command. Async mode works with both
volume SnapMirror and qtree SnapMirror.

SnapMirror Sync mode replicates writes from a source volume to a destination volume at the same time they are written to the source volume. SnapMirror Sync is
used in environments that have zero tolerance for data loss. It does not support distances of more than 300 km.

SnapMirror Semi-Sync provides a middle-ground solution that keeps the source and destination systems more closely synchronized than Async mode, but with
less impact on performance.

20. How do you configure multiple path in Snapmirror?

Add a connection name line in the snapmirror.conf file


/etc/snapmirror.conf
FAS1_conf = multi (FAS1-e0a,FAS2-e0a) (FAS1-e0b,FAS2-e0b)

21. Explain how De-Duplication works?

In the context of disk storage, de-duplication refers to any algorithm that searches for duplicate data objects (for example, blocks, chunks, files) and discards
those duplicates. When duplicate data is detected, it is not retained; instead a "data pointer" is modified so that the storage system references an exact
copy of the data object already stored on disk. This de-duplication feature works well with datasets that have lots of duplicated data (for example, full
backups).

22. What is the command used to see amount of space saved using De-duplication?

df -s <volume name>

23. Command used to check progress and status of De-duplication?

sis status
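For context, before the savings and status commands above are meaningful, de-duplication has to be enabled and run on the volume. A 7-Mode sketch with an illustrative volume name:

sis on /vol/vol1 (enable de-duplication on the volume)
sis start -s /vol/vol1 (scan the existing data; without -s only newly written data is processed)
sis status (check progress)
df -s /vol/vol1 (show the space saved)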

24. How do you setup Snapvault Snapshot schedule?

pri> snapvault snap sched vol1 sv_hourly 22@0-22

This schedule is for the home directories volume vol1


Creates hourly Snapshot copies, except 11:00 p.m.
Keeps nearly a full day of hourly copies

25. What is metadata?

Metadata is data that provides information about one or more aspects of other data. In Data ONTAP this includes:

1. Inode file
2. Used block bitmap file
3. Free block bitmap file

26. How do you shutdown filer through RLM?

ssh “rlm ip address”


RLM_Netapp> system power off (use "system power on" to power the system back up)

27. After creating a LUN (iSCSI) and mapping it to a particular igroup, the client is not able to access the LUN. What troubleshooting steps do you take?

Check whether IQN number specified is correct


Check whether the created LUN is in “restrict” mode
Check the iscsi status
Un-map and map the LUN once again
Check Network connectivity communication

28. In CIFS how do you check who is using most?

cifs top

29. How to check cifs performance statistics.?

cifs stat

30. What do you do if a customer reports a particular CIFS share is responding slow?

Check the read/write activity using "cifs stat" and "sysstat -x 1".


If disk and CPU utilization are high, then the problem is on the filer side.
CPU utilization will be high when there is heavy disk read/write activity, i.e. during tape backups and also during scrub activities.

31. What is degraded mode? What happens if you don't have a spare for a failed disk?

If a spare disk is not added within 24 hours, the filer will shut down automatically to avoid further disk failures and data loss.

32. Did you ever do ontap upgrade? From which version to which version and for what reason?

Yes, I have done an ONTAP upgrade from version 7.2.6.1 to 7.3.3 due to a lot of bugs in the old version.

33. How do you create a lun ?

lun create -s <lunsize> -t <host type> <lunpath>

34. How do you monitor the filers?

Using DFM (Data Fabric Manager) or SNMP you can monitor the filer, or you can use any monitoring system such as Nagios.

35. What are the prerequisites for a cluster?

cluster interconnect cable should be connected.

shelf connect should be properly done for both the controllers with Path1 and Path2

cluster license should be enabled on both the nodes

Interfaces should be properly configured for fail over

cluster should be enabled

36. What is the difference between cf takeover and cf forcetakeover?

If the partner's shelf power is off and you try a takeover, it will not happen; if you force it using (-f), it will work.

37. What is LIF.?

LIF ( Logical interface) :


As the name suggests, it is a logical interface which is created from a physical interface of the NetApp controllers.

38. What is VServer..?

A Vserver is defined as a logical container which holds the volumes. A 7-Mode vFiler is called a Vserver in clustered mode.

39. What is junction path..?


This is a new term in cluster mode and it is used for mounting. Volume junctions are a way to join individual volumes together into a single, logical namespace
to enable data access for NAS clients.

40. What is infinite volumes..?

NetApp Infinite Volume is a software abstraction hosted over clustered Data ONTAP

# NFS TROUBLESHOOTING

Problem1: Stale NFS File handle


Sample Error Messages - NFS Error 70

Error Explanation:

A “stale NFS file handle” error message can and usually is caused by the following events:

1. A certain file or directory that is on the NFS server is opened by the NFS client
2. That specific file or directory is deleted either on that server or on another system that has access to the same share
3. Then that file or directory is accessed on the client

A file handle usually becomes stale when a file or directory referenced by the file handle on the client is removed by another host, while your
client is still holding on to an active reference to that object.
Resolution Tips

 Check connectivity to the storage system (server)


 Check the mount point
 Check the client vfstab or fstab as relevant
 Check showmount -e filerx from the client
 Check exportfs from the command line of the storage system
 Check the storage system /etc/exports file (see the example after this list)
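For reference, /etc/exports on the storage system contains one line per exported path; the sketch below uses made-up paths and hostnames, and exportfs -a (listed above and in the command reference later) re-exports everything in the file:

rdfile /etc/exports
/vol/vol1 -sec=sys,rw=host1:host2,root=host1
exportfs -a (export all file systems listed in /etc/exports)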

Problem2: NFS server not responding


NFS Server (servername) not responding

Error Explanation: NFS client hangs, mount hangs on all clients

Resolution Tips

 Use ping to contact the hostname of the storage system (server) from client
 Use ping to contact the client from the storage system
 Check ifconfig from the storage system
 Check that the correct NFS version is enabled
 Check all nfs options on the storage system
 Check /etc/rc file for nfs options
 Check nfs license

Problem3: Permission denied


nfs mount: mount: /nfs: Permission denied

Error Explanation: The client is trying to access an NFS share from the server without having permission.
Resolution Tips

 Check showmount -e filername from the client


 Try to create a new mount point
 Check exportfs at the storage system command line to see what the system is exporting
 Check the audit log for a recent exportfs -a
 Check /etc/log/auditlog for messages related to exportfs
 Check the storage path with exportfs -s
 Check whether the client can mount the resource with the exportfs -c command
 Flush the access cache and reload the exports, then retry the mount

Problem4: Network Performance Slow


Poor NFS read and/or write performance

Error Explanation: End users are experiencing slowness

Resolution Tips

 Check sysstat 1 for nfs ops/sec vs. kbs/sec


 Check parameters on the network interface card (NIC) with ifconfig -a
 Check netdiag
 Check the network condition with ifstat -a; netstat -m
 Check the client-side network condition
 Check the routing table on the storage system with netstat
 Check the routing table on the client
 Check perfstat.sh
 Check throughput with the sio_ntap tool
 Check rsize and wsize (see the client-side mount example after this list)
 Consider configuring jumbo frames (the entire path must support jumbo frames)
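As a client-side illustration of the rsize/wsize point above, a Linux NFS mount can set the transfer sizes explicitly; the filer name, export path and values are only examples:

mount -t nfs -o vers=3,rsize=65536,wsize=65536 filer1:/vol/vol1 /mnt/vol1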

Problem5: RPC not responding


RPC: Unable to receive or RPC:Timed out
Resolution Tips

 Use ping to contact the storage system (server)


 From the storage system, use ping to contact the client
 Check the mount point
 Check showmount -e filerX from the client
 Verify the name of the directory on the storage system
 Check exportfs to see what the storage system is exporting
 Use the "rpcinfo -p filerx" command from the client to verify that the RPCs are running

Problem6: No Space Left On Disk


No space left on disk error
Resolution Tips

 Check df for available disk space


 Check for snapshot overruns
 Check quota report for exceeded quotas

Netapp Basic Commands Getting Started with Cluster Mode


NetApp basic commands for getting started with cluster mode. If we look at the earlier NetApp 7-Mode, there is no option
to clear all the commands typed on the screen or to change the directory path from one to
another without changing the privilege from admin to advanced.

Change Directory cd Command


Cluster::> cd dashboard
Cluster::> cd volume
Cluster::> cd "storage aggregate"

History of Previously executed commands


Cluster::> history
1 cd dashboard
2 cd volume
3 cd "storage aggregate"
Cluster::> redo 2
history is the command to see the history of previously executed commands and redo is the command to execute the previously executed command number 2

Clear Screen
To clear the screen of cluttered with commands and commands output just press CTRL+L

Instantly getting help from the NetApp command line

If you are suddenly stuck in the middle of a task because you do not remember a command and need help, use the built-in man pages to get help from the command line.

Cluster::> man storage aggregate

Rows to Display in CLI session

Show/Set the rows for CLI session using rows command

Cluster::> rows 45

Above command will set the CLI session display as 45 rows.

Run commands interactively Or non-interactively


Cluster::> run -node node1 -command "aggr status; vol status"
Using the above command we can run nodeshell commands interactively or non-interactively. The above command will show the aggregate status and
volume status at the same time on the node1 system.

Change Privileges as per the requirement


Using set command we can change the privilege level

1. Privilege as Advanced
2. Privilege to admin
3. Privilege as Diag

Cluster::> set -privilege advanced

Note: Use the above command only when you have a definite requirement for advanced mode; it is strictly not required for normal usage.

Cluster::> set diag

For diagnostic log collection or deep performance analysis we can use the above command

Cluster::> set admin

Setting admin brings us back from diag or advanced mode to admin mode

TOP command
The top command is used to jump from any level of the directory path directly to the top, i.e. out of the directory path.

Cluster::> cd network interface


Cluster:: interface> top

UP Command

Using the up command, the directory path moves up by one level.

Cluster::> cd storage aggregate

Cluster:: aggregate> up
Cluster:: storage>

Exit from CLI Session


We can use the exit command to exit from the CLI session, or press CTRL + D
7-Mode commands

This post contains the list of commands that will be most used and will come handy when managing or monitoring or troubleshooting a Netapp
filer in 7-mode.
 sysconfig -a : shows hardware configuration with more verbose information
 sysconfig -d : shows information of the disk attached to the filer
 version : shows the netapp Ontap OS version.
 uptime : shows the filer uptime
 dns info : this shows the dns resolvers, the no of hits and misses and other info
 nis info : this shows the nis domain name, yp servers etc.
 rdfile : Like “cat” in Linux, used to read the contents of text files
 wrfile : Creates/Overwrites a file.
 aggr status : Shows the aggregate status
 aggr status -r : Shows the raid configuration, reconstruction information of the disks in filer
 aggr show_space : Shows the disk usage of the aggregate, WAFL reserve, overheads etc.
 vol status : Shows the volume information
 vol status -s : Displays the spare disks on the filer
 vol status -f / aggr status -f : Displays the failed disks on the filer
 vol status -r : Shows the raid configuration, reconstruction information of the disks
 df -h : Displays volume disk usage
 df -i : Shows the inode counts of all the volumes
 df -Ah : Shows “df” information of the aggregate
 license : Displays/add/removes license on a netapp filer
 maxfiles : Displays and adds more inodes to a volume
 aggr create <Aggr Name> <Disk Names> : Creates aggregate
 vol create : Creates volume in an aggregate
 vol offline : Offlines a volume
 vol online : Onlines a volume
 vol destroy : Destroys and removes a volume
 vol size [+|-] : Resize a volume in netapp filer
 vol options : Displays/Changes volume options in a netapp filer
 qtree create : Creates qtree
 qtree status : Displays the status of qtrees
 quota on : Enables quota on a netapp filer
 quota off : Disables quota
 quota resize : Resizes quota
 quota report : Reports the quota and usage
 snap list : Displays all snapshots on a volume
 snap create : Create snapshot
 snap sched : Schedule snapshot creation
 snap reserve : Display/set snapshot reserve space in volume
 /etc/exports : File that manages the NFS exports
 rdfile /etc/exports : Read the NFS exports file
 wrfile /etc/exports : Write to NFS exports file
 exportfs -a : Exports all the filesystems listed in /etc/exports
 cifs setup : Setup cifs
 cifs shares : Create/displays cifs shares
 cifs access : Changes access of cifs shares
 lun create : Creates iscsi or fcp luns on a netapp filer
 lun map : Maps lun to an igroup
 lun show : Show all the luns on a filer
 igroup create : Creates netapp igroup
 lun stats : Show lun I/O statistics
 disk show : Shows all the disk on the filer
 disk zero spares : Zeros the spare disks
 disk_fw_update : Upgrades the disk firmware on all disks
 options : Display/Set options on netapp filer
 options nfs : Display/Set NFS options
 options timed : Display/Set NTP options on netapp.
 options autosupport : Display/Set autosupport options
 options cifs : Display/Set cifs options
 options tcp : Display/Set TCP options
 options net : Display/Set network options
 ndmpcopy : Initiates ndmpcopy
 ndmpd status : Displays status of ndmpd
 ndmpd killall : Terminates all the ndmpd processes.
 ifconfig : Displays/Sets IP address on a network/vif interface
 vif create : Creates a VIF (bonding/trunking/teaming)
 vif status : Displays status of a vif
 netstat : Displays network statistics
 sysstat -us 1 : begins a 1-second sample of the filer’s current utilization (ctrl-c to end)
 nfsstat : Shows nfs statistics
 nfsstat -l : Displays nfs stats per client
 nfs_hist : Displays the nfs histogram
 statit : begins/ends a performance workload sampling [-b starts / -e ends]
 stats : Displays stats for every counter on netapp. Read stats man page for more info
 ifstat : Displays Network interface stats
 qtree stats : displays I/O stats of qtree
 environment : display environment status on shelves and chassis of the filer
 storage show <disk|shelf|adapter> : Shows storage component details
 snapmirror initialize : Initializes a snapmirror relationship
 snapmirror update : Manually updates a snapmirror relationship
 snapmirror resync : Resyncs a broken snapmirror
 snapmirror quiesce : Quiesces a snapmirror bond
 snapmirror break : Breaks a snapmirror relationship
 snapmirror abort : Abort a running snapmirror
 snapmirror status : Shows snapmirror status
 lock status -h : Displays locks held by filer
 sm_mon : Manage the locks
 storage download shelf : Installs the shelf firmware
 software get : Download the Netapp OS software
 software install : Installs OS
 download : Updates the installed OS
 cf status : Displays cluster status
 cf takeover : Takes over the cluster partner
 cf giveback : Gives back control to the cluster partner
 reboot : Reboots a filer

Netapp Cluster Mode commands cheat sheet - Netapp Notes by ARK

MISC
set -privilege advanced (Enter into privilege mode)
set -privilege diagnostic (Enter into diagnostic mode)
set -privilege admin (Enter into admin mode)
system timeout modify 30 (Sets system timeout to 30 minutes)
system node run -node local sysconfig -a (Run sysconfig on the local node)
The symbol ! means "other than" in clustered ONTAP, i.e. storage aggregate show -state !online (show all aggregates that are not online)
node run -node NODENAME -command sysstat -c 10 -x 3 (Running the sysstat performance tool against a node from cluster mode)
system node image show (Show the running Data Ontap versions and which is the default boot)
dashboard performance show (Shows a summary of cluster performance including interconnect traffic)
node run * environment shelf (Shows information about the Shelves Connected including Model Number)
DIAGNOSTICS USER CLUSTERED ONTAP
security login unlock -username diag (Unlock the diag user)
security login password -username diag (Set a password for the diag user)
security login show -username diag (Show the diag user)

SYSTEM CONFIGURATION BACKUPS FOR CLUSTERED ONTAP


system configuration backup create -backup-name node1-backup -node node1 (Create a cluster backup from node1)
system configuration backup create -backup-name node1-backup -node node1 -backup-type node (Create a node backup of node1)
system configuration backup upload -node node1 -backup node1.7z -destination ftp://username:password@ftp.server.com (Uploads a backup file
to ftp)
LOGS
To look at the logs within clustered ontap you must log in as the diag user to a specific node
set -privilege advanced
systemshell -node
username: diag
password:
cd /mroot/etc/mlog
cat command-history.log | grep volume (searches the command-history.log file for the keyword volume)
exit (exits out of diag mode)
SERVICE PROCESSOR
system node image get -package http://webserver/306-02765_A0_SP_3.0.1P1_SP_FW.zip -replace-package true (Copies the firmware file from the
webserver into the mroot directory on the node)
system node service-processor image update -node node1 -package 306-02765_A0_SP_3.0.1P1_SP_FW.zip -update-type differential (Installs the
firmware package to node1)
system node service-processor show (Show the service processor firmware levels of each node in the cluster)
system node service-processor image update-progress show (Shows the progress of a firmware update on the Service Processor)
CLUSTER
set -privilege advanced (required to be in advanced mode for the below commands)
cluster statistics show (shows statistics of the cluster – CPU, NFS, CIFS, FCP, Cluster Interconnect Traffic)
cluster ring show -unitname vldb (check if volume location database is in quorum)
cluster ring show -unitname mgmt (check if management application is in quorum)
cluster ring show -unitname vifmgr (check if virtual interface manager is in quorum)
cluster ring show -unitname bcomd (check if san management daemon is in quorum)
cluster unjoin (must be run in priv -set admin, disjoins a cluster node. Must also remove its cluster HA partner)
debug vreport show (must be run in priv -set diag, shows WAFL and VLDB consistency)
event log show -messagename scsiblade.* (show that cluster is in quorum)
NODES
system node rename -node -newname
system node reboot -node NODENAME -reason ENTER REASON (Reboot node with a given reason. NOTE: check ha policy)
FLASH CACHE
system node run -node * options flexscale.enable on (Enabling Flash Cache on each node)
system node run -node * options flexscale.lopri_blocks on (Enabling Flash Cache on each node)
system node run -node * options flexscale.normal_data_blocks on (Enabling Flash Cache on each node)
node run NODENAME stats show -p flexscale (flashcache configuration)
node run NODENAME stats show -p flexscale-access (display flash cache statistics)
FLASH POOL
storage aggregate modify -hybrid-enabled true (Change the AGGR to hybrid)
storage aggregate add-disks -disktype SSD (Add SSD disks to AGGR to begin creating a flash pool)
priority hybrid-cache set volume1 read-cache=none write-cache=none (Within node shell and diag mode disable read and write cache on
volume1)
FAIL-OVER
storage failover takeover -bynode (Initiate a failover)
storage failover giveback -bynode (Initiate a giveback)
storage failover modify -node -enabled true (Enabling failover on one of the nodes enables it on the other)
storage failover show (Shows failover status)
storage failover modify -node -auto-giveback false (Disables auto giveback on this ha node)
storage failover modify -node -auto-giveback true (Enables auto giveback on this ha node)
aggregate show -node NODENAME -fields ha-policy (show SFO HA Policy for aggregate)
AGGREGATES
aggr create -aggregate -diskcount -raidtype raid_dp -maxraidsize 18 (Create an AGGR with X amount of disks, raid_dp and raidgroup size 18)
aggr offline | online (Make the aggr offline or online)
aggr rename -aggregate -newname
aggr relocation start -node node01 -destination node02 -aggregate-list aggr1 (Relocate aggr1 from node01 to node02)
aggr relocation show (Shows the status of an aggregate relocation job)
aggr show -space (Show used and used% for volume foot prints and aggregate metadata)
aggregate show (show all aggregates size, used% and state)
aggregate add-disks -aggregate -diskcount (Adds a number of disks to the aggregate)
reallocate measure -vserver vmware -path /vol/datastore1 -once true (Test to see if the volume datastore1 needs to be reallocated or not)
reallocate start -vserver vmware -path /vol/datastore1 -force true -once true (Run reallocate on the volume datastore1 within the vmware
vserver)
DISKS
storage disk assign -disk 0a.00.1 -owner (Assign a specific disk to a node) OR
storage disk assign -count -owner (Assign unallocated disks to a node)
storage disk show -ownership (Show disk ownership to nodes)
storage disk show -state broken | copy | maintenance | partner | percent | reconstructing | removed | spare | unfail |zeroing (Show the
state of a disk)
storage disk modify -disk NODE1:4c.10.0 -owner NODE1 -force-owner true (Force the change of ownership of a disk)
storage disk removeowner -disk NODE1:4c.10.0 -force true (Remove ownership of a drive)
storage disk set-led -disk Node1:4c.10.0 -action blink -time 5 (Blink the led of disk 4c.10.0 for 5 minutes. Use the blinkoff action to
turn it off)
VSERVER
vserver setup (Runs the clustered ontap vserver setup wizard)
vserver create -vserver -rootvolume (Creates a new vserver)
vserver show (Shows all vservers in the system)
vserver show -vserver (Show information on a specific vserver)
VOLUMES
volume create -vserver -volume -aggregate -size 100GB -junction-path /eng/p7/source (Creates a Volume within a vserver)
volume move -vserver -volume -destination-aggregate -foreground true (Moves a Volume to a different aggregate with high priority)
volume move -vserver -volume -destination-aggregate -cutover-action wait (Moves a Volume to a different aggregate with low priority but
does not cutover)
volume move trigger-cutover -vserver -volume (Trigger a cutover of a volume move in waiting state)
volume move show (shows all volume moves currently active or waiting. NOTE: You can only do 8 volume moves at one time, more than 8 and
they get queued)
system node run -node vol size 400g (resize volume_name to 400GB) OR
volume size -volume -new-size 400g (resize volume_name to 400GB)
volume modify -vserver -filesys-size-fixed false -volume (Turn off fixed file sizing on volumes)
LUNS
lun show -vserver (Shows all luns belonging to this specific vserver)
lun modify -vserver -space-allocation enabled -path (Turns on space allocation so you can run lun reclaims via VAAI)
lun geometry -vserver path /vol/vol1/lun1 (Displays the lun geometry)
NFS
vserver nfs modify -vserver vserver1 -v4.1 enabled -v4.1-pnfs enabled (Enable pNFS. NOTE: Cannot coexist with NFSv4)
FCP
storage show adapter (Show Physical FCP adapters)
fcp adapter modify -node NODENAME -adapter 0e -state down (Take port 0e offline)
node run fcadmin config (Shows the config of the adapters - Initiator or Target)
node run fcadmin config -t target 0a (Changes port 0a from initiator to target - You must reboot the node)
CIFS
vserver cifs create -vserver -cifs-server -domain (Enable Cifs)
vserver cifs share create -share-name root -path / (Create a CIFS share called root)
vserver cifs share show
vserver cifs show
SMB
vserver cifs options modify -vserver -smb2-enabled true (Enable SMB2.0 and 2.1)
SNAPSHOTS
volume snapshot create -vserver vserver1 -volume vol1 -snapshot snapshot1 (Create a snapshot on vserver1, vol1 called snapshot1)
volume snapshot restore -vserver vserver1 -volume vol1 -snapshot snapshot1 (Restore a snapshot on vserver1, vol1 called snapshot1)
volume snapshot show -vserver vserver1 -volume vol1 (Show snapshots on vserver1 vol1)
DP MIRRORS AND SNAPMIRRORS
volume create -vserver -volume vol10_mirror -aggregate -type DP (Create a destination Snapmirror Volume)
snapmirror create -vserver -source-path sysadmincluster://vserver1/vol10 -destination-path sysadmincluster://vserver1/vol10_mirror -type DP (Create a snapmirror relationship for sysadmincluster)
snapmirror initialize -source-path sysadmincluster://vserver1/vol10 -destination-path sysadmincluster://vserver1/vol10_mirror -type DP -
foreground true (Initialize the snapmirror example)
snapmirror update -source-path vserver1:vol10 -destination-path vserver2:vol10_mirror -throttle 1000 (Snapmirror update and throttle to
1000KB/sec)
snapmirror modify -source-path vserver1:vol10 -destination-path vserver2:vol10_mirror -throttle 2000 (Change the snapmirror throttle to
2000)
snapmirror restore -source-path vserver1:vol10 -destination-path vserver2:vol10_mirror (Restore a snapmirror from destination to source)
snapmirror show (show snapmirror relationships and status)
NOTE: You can create snapmirror relationships between 2 different clusters by creating a peer relationship
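As a rough sketch of that peering (the cluster name, vserver names and the intercluster LIF address below are made-up examples):
cluster peer create -peer-addrs 10.1.1.50 (Run on the source cluster, pointing at an intercluster LIF of the destination cluster; the destination cluster needs the equivalent pointing back)
vserver peer create -vserver vserver1 -peer-vserver vserver2 -applications snapmirror -peer-cluster cluster2 (Peer the source and destination vservers for snapmirror)
vserver peer accept -vserver vserver2 -peer-vserver vserver1 (Accept the vserver peer request on the destination cluster)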

SNAPVAULT
snapmirror create -source-path vserver1:vol5 -destination-path vserver2:vol5_archive -type XDP -schedule 5min -policy backup-vspolicy
(Create snapvault relationship with 5 min schedule using backup-vspolicy)
NOTE: Type DP (asynchronous), LS (load-sharing mirror), XDP (backup vault, snapvault), TDP (transition), RST (transient restore)

NETWORK INTERFACE
network interface show (show network interfaces)
network port show (Shows the status and information on current network ports)
network port modify -node * -port -mtu 9000 (Enable Jumbo Frames on interface vif_name)
network port modify -node * -port -flowcontrol-admin none (Disables Flow Control on port data_port_name)
network interface revert * (revert all network interfaces to their home port)
INTERFACE GROUPS
ifgrp create -node -ifgrp -distr-func ip -mode multimode (Create an interface group called vif_name on node_name)
network port ifgrp add-port -node -ifgrp -port (Add a port to vif_name)
net int failover-groups create -failover-group data__fg -node -port (Create a failover group – Complete on both nodes)
ifgrp show (Shows the status and information on current interface groups)
net int failover-groups show (Show Failover Group Status and information)
ROUTING GROUPS
network interface show-routing-group (show routing groups for all vservers)
network routing-groups show -vserver vserver1 (show routing groups for vserver1)
network routing-groups route create -vserver vserver1 -routing-group 10.1.1.0/24 -destination 0.0.0.0/0 -gateway 10.1.1.1 (Creates a
default route on vserver1)
ping -lif-owner vserver1 -lif data1 -destination www.google.com (ping www.google.com via vserver1 using the data1 port)
DNS
services dns show (show DNS)
UNIX
vserver services unix-user show
vserver services unix-user create -vserver vserver1 -user root -id 0 -primary-gid 0 (Create a unix user called root)
vserver name-mapping create -vserver vserver1 -direction win-unix -position 1 -pattern (.+) -replacement root (Create a name mapping from
windows to unix)
vserver name-mapping create -vserver vserver1 -direction unix-win -position 1 -pattern (.+) -replacement sysadmin011 (Create a name mapping
from unix to windows)
vserver name-mapping show (Show name-mappings)
NIS
vserver services nis-domain create -vserver vserver1 -domain vmlab.local -active true -servers 10.10.10.1 (Create nis-domain called
vmlab.local pointing to 10.10.10.1)
vserver modify -vserver vserver1 -ns-switch nis-file (Name Service Switch referencing a file)
vserver services nis-domain show
NTP
system services ntp server create -node -server (Adds an NTP server to node_name)
system services ntp config modify -enabled true (Enable ntp)
system node date modify -timezone (Sets timezone for Area/Location Timezone. i.e. Australia/Sydney)
node date show (Show date on all nodes)
DATE AND TIME
timezone -timezone Australia/Sydney (Sets the timezone for Sydney. Type ? after -timezone for a list)
date 201307090830 (Sets date for yyyymmddhhmm)
date -node (Displays the date and time for the node)
CONVERGED NETWORK ADAPTERS (FAS 8000)
ucadmin show -node NODENAME (Show CNA ports on specific node)
ucadmin modify -node NODENAME -adapter 0e -mode cna (Change adapter 0e from FC to CNA. NOTE: A reboot of the node is required)
PERFORMANCE
statistics show-periodic -object volume -instance volumename -node node1 -vserver vserver1 -counter total_ops|avg_latency|read_ops|read_latency (Show the specific counters for a volume)
statistics show-periodic -object nfsv3 -instance vserver1 -counter nfsv3_ops|nfsv3_read_ops|nfsv3_write_ops|read_avg_latency|write_avg_latency (Shows the specific nfsv3 counters for a vserver)
sysstat -x 1 (Shows counters for CPU, NFS, CIFS, FCP, WAFL)

What is DAS, SAN, NAS and Unified Storage..??

DAS: DIRECT-ATTACHED STORAGE

Direct-attached storage (DAS) is digital storage directly attached to the computer accessing it, as opposed to storage accessed over a
computer network. Examples of DAS include hard drives, optical disc drives, and storage on external drives.

Direct-Attach Storage connectivity example

The diagram above shows a storage device attached directly to the server.

Example: attaching an external USB drive to your server or desktop is the simplest real-world example of DAS.

In the same way, we can attach SCSI and SAS storage devices such as HDDs and RAID controllers.

Disadvantages

 An initial investment in a server with built in storage can meet the needs of a small organization for a period of time. But as data is added
and the need for storage capacity increases, the server has to be taken out of service to add additional drives.

 DAS expansion generally requires the expertise of an IT professional, which means either staffing someone or taking on the expense of a
consultant.

 A key disadvantage of DAS storage is its limited scalability.

 A Host Bus Adapter can only support a limited number of drives. For environments with stringent up time requirements, or for
environments with rapidly increasing storage requirements, DAS may not be the right choice.

Advantages

 One advantage of DAS storage is its low initial cost.

SAN: STORAGE AREA NETWORK


SAN (storage area network) is a high-speed network of storage devices that also connects those storage devices with servers. It
provides block-level storage that can be accessed by the applications running on any networked servers. SAN storage devices can include tape
libraries and disk-based devices, like RAID hardware.
Block-level access means the server can create its own file system on the SAN disk that is mapped to it.

Storage Area Network Connectivity Example

Advantages

 SAN Architecture facilitates scalability - Any number of storage devices can be added to store hundreds of terabytes.

 SAN reduces down time - We can upgrade our SAN, replace defective drives, backup our data without taking any servers offline. A well-
configured SAN with mirroring and redundant servers can bring zero downtime.

 Sharing SAN is possible - As a SAN is not directly attached to any particular server or network, it can be shared by all servers.

 SAN provides long distance connectivity - With Fibre Channel capable of running up to 10 kilometers, we can keep our data in a remote,
physically secure location. Fibre Channel switching also makes it very easy to establish private connections with other SANs for mirroring,
backup, or maintenance.

 SAN is truly versatile - A SAN can be single entity, a master grouping of several SANs and can include SANs in remote locations.

Disadvantages

 SANs are very expensive, as Fibre Channel technology is costly.

 Leveraging existing technology investments tends to be much more difficult. Although a SAN makes it possible to reuse existing legacy
storage, a lack of SAN-building skills has greatly diminished deployment of homegrown SANs.

 Management of SAN systems has proved to be difficult for various reasons, and for some organizations a SAN storage facility
can seem wasteful.

 Also, there are only a few SAN product vendors because of the very high price, and only very large enterprises need a SAN setup.

NAS: NETWORK ATTACHED STORAGE

Network-attached storage (NAS) is a type of dedicated file storage device that provides local area network (LAN) nodes
with file-based shared storage through a standard Ethernet connection.
Network Attached Storage Example

NAS devices, which typically do not have a keyboard or display, are configured and managed with a Web-based utility program. Each NAS
resides on the LAN as an independent network node and has its own IP address.

Advantages

 NAS systems store data as files and support both CIFS and NFS protocols. They can be accessed easily over the commonly used TCP/IP
Ethernet based networks and support multiple users connecting to them simultaneously.

 Entry level NAS systems are quite inexpensive – they can be purchased for capacities as low as 1 or 2 TB with just two disks. This
enables them to be deployed with Small and Medium Business (SMB) networks easily.

 A NAS device may support one or more RAID levels to make sure that individual disk failures do not result in loss of data.

 A NAS appliance comes with a GUI-based web management console and hence can be centrally accessed and administered from
remote locations over TCP/IP networks, including the Internet, VPNs, leased lines, etc.

 NAS appliances are connected to the Ethernet network, and the servers accessing them can connect over the same Ethernet network.
So, unlike SAN systems, there is no need for expensive HBA adapters, specialized storage switches, or specialized skills to set
up and maintain NAS systems. With NAS, it's simple and easy.

 Ethernet networks are scaling up to support higher throughputs – Currently 1 GE and 10GE throughputs are possible. NAS systems are
also capable of supporting such high throughputs as they use the Ethernet based networks and TCP/IP protocol to transport data.

 The management tools required to manage the Ethernet network are well established and hence no separate training is required for
setting up and maintaining a separate network for storage unlike SAN systems.

Disadvantages

 Transaction-intensive databases, ERP, CRM systems and other high-performance data are better off stored in a SAN (Storage
Area Network) than NAS, as the former creates a network that is low-latency, reliable, lossless and faster. Also, for large, heterogeneous
block data transfers a SAN might be more appropriate.

 At the end of the day, NAS appliances are going to share the network with their computing counterparts and hence the NAS solution
consumes more bandwidth from the TCP/IP network. Also, the performance of the remotely hosted NAS will depend upon the amount of
bandwidth available for Wide Area Networks and again the bandwidth is shared with computing devices. So, WAN optimization needs to be
performed for deploying NAS solutions remotely in limited bandwidth scenarios.

 Ethernet is a lossy environment, which means packet drops and network congestion are inevitable. So, the performance and architecture
of the IP networks are very important for effective high volume NAS solution implementation at least till the lossless Ethernet framework is
implemented.

 For techniques like Continuous Data Protection, taking frequent Disk Snapshots for backup etc, block level storage with techniques like
Data De-duplication as available with SAN might be more efficient than NAS.

 Sometimes, the IP network might get congested if operations like large data backups are run during business hours.
UNIFIED STORAGE

Unified storage is a storage system that makes it possible to run and manage files and applications from a single device. To this end, a
unified storage system consolidates file-based and block-based access in a single storage platform and supports fibre channel SAN, IP-based
SAN (iSCSI), and NAS.

In other words, the combination of NAS and SAN is called unified storage.

Example for unified Storage Diagram

Coming back to NetApp: NetApp storage systems are unified storage devices.

The following series of models are available in NetApp storage.

FAS: Fiber Attached Storage

V-Series: Virtualization Series; this series is mostly used to virtualize other vendors' SAN devices behind NetApp, to reduce cost in real terms.

E-Series: The E-Series is NetApp's name for new platforms resulting from the acquisition of Engenio. Aimed at the storage stress resulting
from high-performance computing (HPC) applications, NetApp offers full-motion video storage built on the E-Series Platform that enables, for
example, government agencies to take advantage of full-motion video and improve battlefield intelligence. Additionally, NetApp offers a
Hadoop Storage Solution on the E-Series that is designed to enable real-time or near-real-time data analysis of larger and more complex
datasets.

----------
WHAT IS A CONSISTENCY POINT? HOW DOES IT DIFFER FROM A SNAPSHOT?
Consistency Point: a CP is triggered whenever the filesystem reaches a point where it wants to update the physical data on the disks with
whatever has accumulated in cache (and was journaled in NVRAM).

Snapshot: a snapshot is created whenever the snap schedule is configured to trigger it, or whenever any other operation (SnapManager, SnapDrive,
SnapMirror, SnapVault, an administrator) creates a new snapshot. Creating a snapshot also triggers a CP, because the snapshot is always a
consistent image of the filesystem at that point in time.
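One easy way to watch consistency points happening on a 7-Mode filer is the sysstat output, which includes CP columns; a quick sketch:

sysstat -x 1 (the "CP ty" column shows what triggered each consistency point, for example T for timer, F for NVLOG full, S for snapshot)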

Monitor volumes and Aggregates using simple Script


To run scripts from any Linux server, you first have to create a password-less SSH connection from that server to the NetApp storage.
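A minimal sketch of that password-less setup, assuming a 7-Mode filer reachable at 192.168.1.5 with SSH already enabled (the paths and address are examples):

ssh-keygen -t rsa (on the Linux server, generate a key pair; accept the defaults and leave the passphrase empty)
cat ~/.ssh/id_rsa.pub (display the public key so you can copy it)
Append the copied public key to /etc/sshd/root/.ssh/authorized_keys on the filer's root volume, for example by mounting the root volume over NFS or by using wrfile on the console.
ssh 192.168.1.5 version (test: this should now return the Data ONTAP version without asking for a password)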

Then Create the Directories as mentioned below.

~]# mkdir -p /netapp/monitoring/

Copy the script below and paste it into a file. Save the file with a .sh extension and give it executable permissions.

Replace 'YOUREMAILID' with your email address.

################## SCRIPT START #################


#!/bin/bash
## Purpose: Calculate the volume usage percentage as specified; if usage is more than specified, send mail
## Date: 21-08-2015
## Author: Ankam Ravi Kumar

ssh 192.168.1.5 df -h > /netapp/monitoring/volumes.txt


echo "Date: `date`" > /netapp/monitoring/temp.txt
echo "Below volumes usage is more then 95%" >> /netapp/monitoring/temp.txt
echo " " >> /netapp/monitoring/temp.txt
echo "Filer: NETAPP01" >> /netapp/monitoring/temp.txt
cat /netapp/monitoring/volumes.txt | awk ' length($5) > 2 && (substr($5,1,length($5)-1) >= 95) { print $0 }' >>
/netapp/monitoring/temp.txt
echo " " >> /netapp/monitoring/temp.txt
echo " " >> /netapp/monitoring/temp.txt
ssh 192.168.1.6 df -h > /netapp/monitoring/volumes1.txt
echo "Date: `date`" >> /netapp/monitoring/temp.txt
echo "Filer: NETAPP02" >> /netapp/monitoring/temp.txt
cat /netapp/monitoring/volumes1.txt | awk ' length($5) > 2 && (substr($5,1,length($5)-1) >= 95) { print $0 }' >>
/netapp/monitoring/temp.txt
echo " " >> /netapp/monitoring/temp.txt
echo "Below Aggregates Usage is more then 95%" >> /netapp/monitoring/temp.txt
echo "NETAPP01" >> /netapp/monitoring/temp.txt
ssh 192.168.1.5 df -Ah > /netapp/monitoring/aggr1.txt
cat /netapp/monitoring/aggr1.txt | awk ' length($5) > 2 && (substr($5,1,length($5)-1) >= 95) { print $0 }' >>
/netapp/monitoring/temp.txt

echo " " >> /netapp/monitoring/temp.txt


echo "NETAPP02 " >> /netapp/monitoring/temp.txt
ssh 192.168.1.6 df -Ah > /netapp/monitoring/aggr2.txt
cat /netapp/monitoring/aggr2.txt | awk ' length($5) > 2 && (substr($5,1,length($5)-1) >= 95) { print $0 }' >>
/netapp/monitoring/temp.txt

count=`cat /netapp/monitoring/temp.txt | wc -l`


if [ $count -gt 17 ]
then
mail -s "Volumes and Aggregates Usage more than 95%" YOUREMAILID < /netapp/monitoring/temp.txt
else
echo " " > /netapp/monitoring/temp.txt
fi
###################### END SCRIPT #############################

Below is a sample output mail.

Schedule this script using crontab based on your requirement, for example every 4 hours.
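For example, assuming the script was saved as /netapp/monitoring/check_usage.sh (a name I made up), the crontab entry for every 4 hours would look like this:

0 */4 * * * /netapp/monitoring/check_usage.sh

Add it with crontab -e on the Linux server that holds the password-less connection to the filers.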

Create a FlexClone and split it to create a clone of your volume

NetApp® FlexClone® technology instantly replicates data volumes and datasets as transparent, virtual copies—true clones—without
compromising performance or demanding additional storage space.

Using a recent snapshot we can create a FlexClone, and splitting the clone will create an actual copy of the volume.

If you need a temporary copy of your data that can be made quickly and without using a lot of disk space, you can create a FlexClone
volume. FlexClone volumes save data space because all unchanged data blocks are shared between the FlexClone volume and its parent.

Check whether a recent snapshot exists in your volume using the command below.

NetappFiler> snap list Volume1


Volume Volume1
working...

%/used %/total date name


---------- ---------- ------------ --------
0% ( 0%) 0% ( 0%) Aug 25 13:52 NetappFiler(2016531614)_Volume1.79
0% ( 0%) 0% ( 0%) Aug 25 12:00 SqlSnap__recent
0% ( 0%) 0% ( 0%) Aug 25 12:00 NetappFiler(2016531614)_Volume1.78
1% ( 0%) 0% ( 0%) Aug 25 06:00 SqlSnap_08-25-2015_06.00.12
1% ( 0%) 1% ( 0%) Aug 25 00:00 SqlSnap_08-25-2015_00.00.09
2% ( 0%) 1% ( 0%) Aug 24 18:00 SqlSnap_08-24-2015_18.00.10
2% ( 1%) 2% ( 0%) Aug 24 12:00 SqlSnap_08-24-2015_12.00.09
3% ( 0%) 2% ( 0%) Aug 24 06:00 SqlSnap_08-24-2015_06.00.09
3% ( 0%) 2% ( 0%) Aug 24 00:00 SqlSnap_08-24-2015_00.00.10

In this case I am using the snapshot 'SqlSnap__recent'.

NetappFiler> vol clone create CLONE -s volume -b Volume1 SqlSnap__recent


Creation of clone volume 'CLONE' has completed.
NetappFiler> Tue Aug 25 14:18:07 CST [NetappFiler:lun.newLocation.offline:warning]: LUN /vol/CLONE/lun has been taken offline to prevent
map conflicts after a copy or move operation.
Tue Aug 25 14:18:07 CST [NetappFiler:wafl.volume.clone.created:info]: Volume clone CLONE of volume Volume1 was created successfully.

Estimate whether the aggregate has enough space to split your volume using the command below.
NetappFiler> vol clone split estimate CLONE

Check Aggregate space availability using below command


NetappFiler>df -Ah
NetappFiler> vol clone split start CLONE
Clone volume 'CLONE' will be split from its parent.
Monitor system log or use 'vol clone split status' for progress.
NetappFiler> Tue Aug 25 14:20:05 CST [NetappFiler:wafl.volume.clone.split.started:info]: Clone split was started for volume CLONE
Tue Aug 25 14:20:05 CST [NetappFiler:wafl.scan.start:info]: Starting volume clone split on volume CLONE.

NetappFiler> vol clone split status


Volume 'CLONE', 96 of 33587190 inodes processed (0%)
1603331 blocks scanned. 6293 blocks updated.

Thin provisioning and Thick Provisioning - NetappNotes By ARK

THIN PROVISIONING:

Thin provisioning is used in large environments because we can assign more space than what we actually have in the current environment.

For example, if you have 1TB of storage space in the SAN you can still allocate 1.5TB to the clients; with thin
provisioning enabled, the SAN will consume only what has actually been used.

A clear example is shown here.

If you observe the example above closely, after allocating 1.5TB of space to all the clients there is still space available, because thin provisioning
takes only the used space into consideration.

Using NetApp Thin Provisioning

Thin provisioning is enabled on NetApp storage by setting the appropriate option on a volume or LUN. You can thin provision a volume by
changing the "guarantee" option to "none." You can thin provision a LUN by changing the reservation on the LUN. These settings can be set
using NetApp management tools such as NetApp Operations Manager and NetApp Provisioning Manager or by entering the following
commands:

Volume: vol options "targetvol" guarantee none

LUN: lun set reservation "/vol/targetvol/targetlun" disable

The change is instantaneous and nondisruptive.
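To confirm the change took effect, you can check the settings afterwards; a quick sketch using the same example names as above:

vol options targetvol (the output should now include guarantee=none)
lun show -v /vol/targetvol/targetlun (the output should now show Space Reservation: disabled)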

When not to use thin provisioning. There are some situations in which thin provisioning may not be appropriate. Keep these in mind when
deciding whether to implement it and on which volumes to apply it:

 If the storage consumption on a volume is unpredictable or highly volatile


 If the application using the volume is mission critical such that even a brief storage outage could not be tolerated
 If your storage monitoring processes aren't adequate to detect when critical thresholds are crossed; you need well-defined policies for
monitoring and response
 When the time required for new storage acquisition is unpredictable; if your procurement process is too lengthy you may not be able to
bring new storage online fast enough to avoid running out of space

You can periodically initiate the space reclamation process on your LUNs. The GUI tool will first determine how much space can be reclaimed
and ask if you wish to continue. You can limit the amount of time the process will use so that it does not run during peak periods.

Here are a few things to keep in mind when you run space reclamation:

 It's a good practice to run space reclamation before creating a Snapshot copy. Otherwise, blocks that should be available for freeing will
be locked in the Snapshot copy and not be able to be freed.
 Because space reclamation initially consumes cycles on the host, it should be run during periods of low activity.
 Normal data traffic to the LUN can continue while the process runs. However, certain operations cannot be performed during the space
reclamation process:
 Creating or restoring a Snapshot copy stops space reclamation.
 The LUN may not be deleted, disconnected, or expanded.
 The mount point cannot be changed.
 Running Windows® defragmentation is not recommended.

THICK PROVISIONING:

Thick provisioning is difficult to use in large environments, because as soon as you allocate space to a client it is reserved for that
client.

Below is a clear example.

If you observe the example above closely, I have allocated 1TB of space to the clients, and after assigning it there is no space
available in the SAN. Thick provisioning does not consider the used space; it only considers the allocated space.

Thanks for reading this blog.

Please provide your valuable comments and understanding about this topic. Subscribe with your email address to get more
updates directly to your mail box.

NETAPP DISK SHELF CABLING EXAMPLES

This page is dedicated to a few examples I’ve put together in Visio to highlight the correct SAS and ACP cabling combinations for Netapp disk shelf cabling.

Two types of controllers and two types of disk shelves are used in these examples. They are:

 Netapp FAS2040
 Netapp FAS3240
 Disk Shelf DS4243
 Disk Shelf DS2246

A few notes before we get started:

 Netapp FAS2040 uses single-path HA


 SAS Cables can be SAS copper or SAS optical or a mix. EXCEPT Shelf-to-Shelf connections in a stack must be all SAS copper cables or all SAS optical cables. If the Shelf-to-Shelf
cable is SAS copper then the controller to shelf cable must also be SAS copper. If the Shelf-to-Shelf cable is SAS optical then the controller to shelf cable must also be SAS optical.
 SAS optical cables connected to disk shelves require a disk shelf firmware that supports SAS optical cables
 The total end-to-end path, from controller to the last shelf, cannot exceed 510 meters.
 Square Ports are always cabled to circles ports, and circle ports are always cabled to square ports. Never cable square ports to square ports or circle ports to circle ports.
 The illustrations below show the Green Cable as the SAS cable and the Red Cable as the ACP cable.

Let’s dive into some examples.

FAS2040 – 1 Netapp Disk Shelf

This example shows a Netapp FAS2040 with 2 controllers connected to a single DS4243 or DS2246 Netapp disk shelf.

FAS2040 – 2 Netapp Disk Shelves

This example follows on from the previous example by adding an additional disk shelf for a total of 2 disk shelves.
FAS2040 – 3 Netapp Disk Shelves

This example follows on from the previous example by adding another additional disk shelf for a total of 3 disk shelves.
FAS3240 – 1 Netapp Disk Shelf

This example shows 2 Netapp FAS3240’s connected to a single DS4243 Netapp disk shelf.
FAS3240 – 2 Netapp Disk Shelves

This example follows on from the previous example by adding an additional disk shelf for a total of 2 disk shelves.
FAS3240 – 3 Netapp Disk Shelves

This example follows on from the previous example by adding an additional disk shelf for a total of 3 disk shelves.
FAS3240 – 6 Netapp Disk Shelves in 2 Separate Stacks

In this example we use 3 Netapp DS4243 disk shelves in Stack 1 and 3 Netapp DS2246 disk shelves in Stack 2. The requirement for this configuration is 4 SAS ports. I have
added the 4-Port SAS expansion card Netapp X2065A into each controller.
If you have any technical questions about this tutorial or any other tutorials on this site, please open a new thread in the forums and the community will be able to help you
out.

Disclaimer:
All the tutorials included on this site are performed in a lab environment to simulate a real world production scenario. As everything is done to provide the most accurate
steps to date, we take no responsibility if you implement any of these steps in a production environment.

Netapp NFS Troubleshooting

# NFS TROUBLESHOOTING

Problem1: Stale NFS File handle


Sample Error Messages - NFS Error 70

Error Explanation:

A “stale NFS file handle” error message is usually caused by the following sequence of events:

1. A certain file or directory that is on the NFS server is opened by the NFS client
2. That specific file or directory is deleted either on that server or on another system that has access to the same share
3. Then that file or directory is accessed on the client

A file handle usually becomes stale when a file or directory referenced by the file handle on the client is removed by another host, while your
client is still holding on to an active reference to that object.
Resolution Tips

 Check connectivity to the storage system (server)


 Check mount point
 Check client vfstab or fstab as relevant
 Check showmount -e filerx from client
 Check exportfs from command line of the storage system
 Check storage system /etc/exports file
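If the export itself turns out to be healthy, the usual client-side fix is simply to unmount and remount the share; a sketch with made-up names:

umount -f /mnt/data (force-unmount the stale mount point on the client)
mount -t nfs filerx:/vol/vol1 /mnt/data (mount it again)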
Problem2: NFS server not responding
NFS Server (servername) not responding

Error Explanation: NFS client hangs, mount hangs on all clients

Resolution Tips

 Use ping to contact the hostname of the storage system (server) from client
 Use ping to contact the client from the storage system
 Check ifconfig from the storage system
 Check that the correct NFS version is enabled
 Check all nfs options on the storage system
 Check /etc/rc file for nfs options
 Check nfs license

Problem3: Permission denied


nfs mount: mount: /nfs: Permission denied

Error Explanation: Permission is not there but trying to access NFS share from server
Resolution Tips

 Check showmount -e filername from client


 Try to create a new mountpoint
 Check exportfs at the storage system command line to see what system is exporting
 Check auditlog for recent exportfs -a
 Check the /etc/log/auditlog for messages related to exportfs
 Check the storage path with exportfs -s
 Check whether the client can mount the resource with the exportfs -c command
 Flush the access cache and reload the exports, then retry the mount

Problem4: Network Performance Slow


Poor NFS read and/or write performance

Error Explanation: End user is feeling the slowness

Resolution Tips

 Check sysstat 1 for nfs ops/sec vs. kbs/sec


 Check parameters on network card interface (NIC) with ifconfig -a
 Check netdiag
 Check network condition with ifstat -a; netstat -m
 Check client side network condition
 Check routing table on the storage system with netstat
 Check routing table on the client
 Check perfstat.sh
 Check throughput with sio_ntap tool
 Check rsize and wsize
 Consider configuring jumbo frames (entire path must support jumbo frames)

Problem5: RPC not responding


RPC: Unable to receive or RPC: Timed out
Resolution Tips

 Use ping to contact the storage system (server)


 From storage system, use ping to contact client
 Check mountpoint
 Check showmount -e filerX from client
 Verify name of directory on the storage system
 Check exportfs to see what the storage system is exporting
 Use the "rpcinfo -p filerx" command from the client to verify that the RPCs are running

Problem6: No Space Left On Disk


No space left on disk error
Resolution Tips

 Check df for available disk space


 Check for snapshot overruns
 Check quota report for exceeded quotas
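A quick sketch of those checks on the storage system (the volume and snapshot names are examples):

df -h vol1 (check volume usage, including the snapshot reserve line)
snap list vol1 (look for old or unusually large snapshots)
snap delete vol1 nightly.7 (delete an old snapshot if it is no longer needed)
quota report (check which users or qtrees have exceeded their quotas)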

Brocade SAN Switch Zoning

WHAT IS SAN ZONING..?


SAN zoning is a method of arranging Fibre Channel devices into logical groups over the physical configuration of the fabric.

SAN zoning may be utilized to implement compartmentalization of data for security purposes.

Each device in a SAN may be placed into multiple zones.

Before doing the zoning in GUI mode, we have to connect the servers to the SAN switch with FC cables, from a SAN switch port to a server HBA port.
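For reference, the equivalent zoning from the Brocade CLI usually looks something like the sketch below (the aliases, zone name, config name and WWPNs are made-up examples):

alicreate "SERVER1_HBA1", "10:00:00:00:c9:12:34:56" (create an alias for the server HBA port WWPN)
alicreate "NETAPP01_0A", "50:0a:09:81:12:34:56:78" (create an alias for the storage target port WWPN)
zonecreate "SERVER1_NETAPP01", "SERVER1_HBA1; NETAPP01_0A" (create a zone containing both aliases)
cfgadd "PROD_CFG", "SERVER1_NETAPP01" (add the zone to the existing zoning configuration)
cfgsave (save the zoning configuration)
cfgenable "PROD_CFG" (enable the configuration so the zone becomes active)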

NetApp: A Complete Setup of a Netapp Filer

After reinitializing and resetting a filer to factory defaults there is always a time when you want to re-use your precious 100k+ baby. Thinking this should be a piece of
cake, I encountered some unforeseen surprises which led to this document. Here I'll show you how to set up a filer from start to finish. Notice this is a filer with a
partner, so there is a lot of switching around.
Note that I copied a lot of output from the filers to this article for your convenience. Also note that I mixed output from the two filers and that all steps need to be done on both filers.

Initial Configuration

When starting up the filer and connecting to the management console (serial cable, COM1 etc, all default settings if using a Windows machine with Putty) you'll see a
configuration setup. Simply answer the questions, and don't be shy if you're not sure, everything can be changed afterwards:

Note: This wizard can be started by issuing the command setup.

Please enter the new hostname []: filer01b


Do you want to enable IPv6? [n]:
Do you want to configure virtual network interfaces? [n]:
For environments that use dedicated LANs to isolate management
traffic from data traffic, e0M is the preferred Data ONTAP
interface for the management LAN. The e0M interface is separate
from the RLM interface even though they share the same external
connector (port with wrench icon). It is highly recommended that
you configure both interfaces.
Please enter the IP address for Network Interface e0M []: 10.18.1.32
Please enter the netmask for Network Interface e0M [255.255.0.0]:
Should interface e0M take over a partner IP address during failover? [n]: y
The clustered failover software is not yet licensed. To enable
network failover, you should run the 'license' command for
clustered failover.
Please enter the IPv4 address or interface name to be taken over by e0M []: 10.18.1.31
Please enter flow control for e0M {none, receive, send, full} [full]:
Please enter the IP address for Network Interface e0a []:
Should interface e0a take over a partner IP address during failover? [n]:
Please enter the IP address for Network Interface e0b []:
Should interface e0b take over a partner IP address during failover? [n]:
Would you like to continue setup through the web interface? [n]:
Please enter the name or IP address of the IPv4 default gateway: 10.18.1.254
The administration host is given root access to the filer's
/etc files for system administration. To allow /etc root access
to all NFS clients enter RETURN below.
Please enter the name or IP address of the administration host:
Where is the filer located? []: Den Haag
Do you want to run DNS resolver? [n]: y
Please enter DNS domain name []: getshifting.com
You may enter up to 3 nameservers
Please enter the IP address for first nameserver []:
Bad IP address entered - must be of the form a.b, a.b.c, or a.b.c.d
Please enter the IP address for first nameserver []: 10.16.110.1
Do you want another nameserver? [n]:
Do you want to run NIS client? [n]:
The Remote LAN Module (RLM) provides remote management capabilities
including console redirection, logging and power control.
It also extends autosupport by sending
additional system event alerts. Your autosupport settings are used
for sending these alerts via email over the RLM LAN interface.
Would you like to configure the RLM LAN interface [y]:
Would you like to enable DHCP on the RLM LAN interface [y]: n
Please enter the IP address for the RLM: 10.16.121.96
Please enter the netmask for the RLM []: 255.255.0.0
Please enter the IP address for the RLM gateway:
Bad IP address entered - must be of the form a.b, a.b.c, or a.b.c.d
Please enter the IP address for the RLM gateway: 10.16.1.254
The mail host is required by your system to send RLM
alerts and local autosupport email.
Please enter the name or IP address of the mail host [mailhost]:
You may use the autosupport options to configure alert destinations.
Name of primary contact (Required) []: sjoerd @ getshifting.com
Phone number of primary contact (Required) []: 0151234567
Alternate phone number of primary contact []:
Primary Contact e-mail address or IBM WebID? []: sjoerd @ getshifting.com
Name of secondary contact []:
Phone number of secondary contact []:
Alternate phone number of secondary contact []:
Secondary Contact e-mail address or IBM WebID? []:
Business name (Required) []: SHIFT
Business address (Required) []: Street 1
City where business resides (Required) []: Delft
State where business resides []:
2-character country code (Required) []: NL
Postal code where business resides []: 1234AA
The Shelf Alternate Control Path Management process provides the ability
to recover from certain SAS shelf module failures and provides a level of
availability that is higher than systems not using the Alternate Control
Path Management process.
Do you want to configure the Shelf Alternate Control Path Management interface for SAS shelves [n]:
The initial aggregate currentSetting the administrative (root) password for filer01b ...
New password:

Retype new password:

IP Addresses

Setup IP addresses:

filer01b> ifconfig -a
e0M: flags=0x2948867<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
inet 10.18.1.132 netmask-or-prefix 0xffff0000 broadcast 10.18.255.255
partner inet 10.18.1.131 (not in use)
ether 00:a0:98:29:16:32 (auto-100tx-fd-up) flowcontrol full
e0a: flags=0x2508866<BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 00:a0:98:29:16:30 (auto-unknown-cfg_down) flowcontrol full
e0b: flags=0x2508866<BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 00:a0:98:29:16:31 (auto-unknown-cfg_down) flowcontrol full
lo: flags=0x1948049<UP,LOOPBACK,RUNNING,MULTICAST,TCPCKSUM> mtu 9188
inet 127.0.0.1 netmask-or-prefix 0xff000000 broadcast 127.0.0.1
filer01b> ifconfig e0M 10.18.1.132 netmask 255.255.0.0 partner 10.18.1.131

Route

Set default route:

filer01a> route delete default


delete net default

filer01a> route add default 10.18.1.254 1


add net default: gateway 10.18.1.254

Set Up Startup Files

By setting these files correctly your settings will be persistent over reboots:

Node A:

#Auto-generated by setup Mon Sep 17 13:34:05 GMT 2012


hostname filer01a
ifconfig e0a `hostname`-e0a flowcontrol full partner 10.18.1.32
route add default 10.18.1.254 1
routed on
options dns.domainname getshifting.com
options dns.enable on
options nis.enable off
savecore
#Auto-generated by setup Mon Sep 17 13:34:05 GMT 2012
127.0.0.1 localhost
10.18.1.31 filer01a filer01a-e0a
# 0.0.0.0 filer01a-e0M
# 0.0.0.0 filer01a-e0b

Node B:

filer01b> rdfile /etc/rc


#Auto-generated by setup Mon Sep 17 13:34:05 GMT 2012
hostname filer01b
ifconfig e0a `hostname`-e0a flowcontrol full partner 10.18.1.31
route add default 10.18.1.254 1
routed on
options dns.domainname getshifting.com
options dns.enable on
options nis.enable off
savecore
filer01b> rdfile /etc/hosts
#Auto-generated by setup Mon Sep 17 13:34:05 GMT 2012
127.0.0.1 localhost
10.18.1.32 filer01b filer01b-e0a
# 0.0.0.0 filer01b-e0M
# 0.0.0.0 filer01b-e0b
PASSWORD

Set up a password for the root user. Notice that you'll get a warning if you use a password with fewer than 8 characters, but you're still allowed to set it:

filer01a> passwd
New password:
Retype new password:
Mon Sep 17 14:44:09 GMT [snmp.agent.msg.access.denied:warning]: Permission denied for SNMPv3 requests from this user. Reason: Password is
too short (SNMPv3 requires at least 8 characters).
Mon Sep 17 14:44:09 GMT [snmp.agent.msg.access.denied:warning]: Permission denied for SNMPv3 requests from root. Reason: Password is too
short (SNMPv3 requires at least 8 characters).
Mon Sep 17 14:44:09 GMT [passwd.changed:info]: passwd for user 'root' changed.
CIFS AND AUTHENTICATION

This part is a little bit confusing. Although we're not using CIFS, I have to do a cifs setup to configure the normal authentication for these filers:

filer01a> cifs setup


This process will enable CIFS access to the filer from a Windows(R) system.
Use "?" for help at any prompt and Ctrl-C to exit without committing changes.

Your filer does not have WINS configured and is visible only to
clients on the same subnet.
Do you want to make the system visible via WINS? [n]: ?
Answer 'y' if you would like to configure CIFS to register its names
with WINS servers, and to use WINS server queries to locate domain
controllers. You will be prompted to add the IPv4 addresses of up to 4
WINS servers. Answer 'n' if you are not using WINS servers in your
environment or do not want to use them.
Do you want to make the system visible via WINS? [n]: n
A filer can be configured for multiprotocol access, or as an NTFS-only
filer. Since multiple protocols are currently licensed on this filer,
we recommend that you configure this filer as a multiprotocol filer

(1) Multiprotocol filer


(2) NTFS-only filer

Selection (1-2)? [1]:


CIFS requires local /etc/passwd and /etc/group files and default files
will be created. The default passwd file contains entries for 'root',
'pcuser', and 'nobody'.
Enter the password for the root user []:
Retype the password:
The default name for this CIFS server is 'filer01a'.
Would you like to change this name? [n]:
Data ONTAP CIFS services support four styles of user authentication.
Choose the one from the list below that best suits your situation.

(1) Active Directory domain authentication (Active Directory domains only)


(2) Windows NT 4 domain authentication (Windows NT or Active Directory domains)
(3) Windows Workgroup authentication using the filer's local user accounts
(4) /etc/passwd and/or NIS/LDAP authentication

Selection (1-4)? [1]: 3


What is the name of the Workgroup? [WORKGROUP]: SHIFT
Wed Sep 19 13:40:08 GMT [filer01a: snmp.agent.msg.access.denied:warning]: Permission denied for SNMPv3 requests from this user. Reason:
Password is too short (SNMPv3 requires at least 8 characters).
Wed Sep 19 13:40:08 GMT [filer01a: snmp.agent.msg.access.denied:warning]: Permission denied for SNMPv3 requests from root. Reason: Password
is too short (SNMPv3 requires at least 8 characters).
Wed Sep 19 13:40:08 GMT [filer01a: passwd.changed:info]: passwd for user 'root' changed.
CIFS - Starting SMB protocol...
It is recommended that you create the local administrator account
(filer01a\administrator) for this filer.
Do you want to create the filer01a\administrator account? [y]:
Enter the new password for filer01a\administrator:
Retype the password:
Welcome to the SHIFT Windows(R) workgroup

CIFS local server is running.


filer01a>
SETUP SSH

Configure secure access to the filer:

filer01a*> secureadmin setup ssh


SSH Setup
---------
Determining if SSH Setup has already been done before...no

SSH server supports both ssh1.x and ssh2.0 protocols.

SSH server needs two RSA keys to support ssh1.x protocol. The host key is
generated and saved to file /etc/sshd/ssh_host_key during setup. The server
key is re-generated every hour when SSH server is running.

SSH server needs a RSA host key and a DSA host key to support ssh2.0 protocol.
The host keys are generated and saved to /etc/sshd/ssh_host_rsa_key and
/etc/sshd/ssh_host_dsa_key files respectively during setup.

SSH Setup will now ask you for the sizes of the host and server keys.
For ssh1.0 protocol, key sizes must be between 384 and 2048 bits.
For ssh2.0 protocol, key sizes must be between 768 and 2048 bits.
The size of the host and server keys must differ by at least 128 bits.

Please enter the size of host key for ssh1.x protocol [768] :
Please enter the size of server key for ssh1.x protocol [512] :
Please enter the size of host keys for ssh2.0 protocol [768] :

You have specified these parameters:


host key size = 768 bits
server key size = 512 bits
host key size for ssh2.0 protocol = 768 bits
Is this correct? [yes]

Setup will now generate the host keys. It will take a minute.
After Setup is finished the SSH server will start automatically.

filer01a*> Thu Sep 20 08:14:06 GMT [filer01a: secureadmin.ssh.setup.success:info]: SSH setup is done and ssh2 should be enabled. Host keys
are stored in /etc/sshd/ssh_host_key, /etc/sshd/ssh_host_rsa_key, and /etc/sshd/ssh_host_dsa_key.
APPLY A SOFTWARE UPDATE

Notice that when reinitializing a netapp you not only remove the data, you also remove the filer's software, leaving you with a basic version. You need to get the software
update from NetApp / your reseller. You can't download it yourself, so make sure you can get it before you proceed with the wiping.

There is a really easy way to get a new software version on your filer. If you have a local webserver you can download it from there, or use the miniweb webserver:

 Download the miniweb webserver from: http://sourceforge.net/projects/miniweb/


 Extract the package, run the miniweb executable and place the software update file in the webroot folder.

Now continue on the filer by issuing these commands:

 software get http://<webserver ip address>/<software update filename>


 software list
 software install <software update filename>
 download
 version -b
 reboot

filer01a> software get http://10.16.61.16/26405.211.788.737_setup_q.exe


software: copying to /etc/software/26405.211.788.737_setup_q.exe
software: 100% file read from location.
software: /etc/software/26405.211.788.737_setup_q.exe has been copied.

filer01a> software list


26405.211.788.737_setup_q.exe
filer01a> software install 26405.211.788.737_setup_q.exe
software: You can cancel this operation by hitting Ctrl-C in the next 6 seconds.
software: Depending on system load, it may take many minutes
software: to complete this operation. Until it finishes, you will
software: not be able to use the console.
software: installing software, this could take a few minutes...
software: installation of 26405.211.788.737_setup_q.exe completed.
Thu Sep 20 08:46:55 GMT [filer01a: cmds.software.installDone:info]: Software: Installation of 26405.211.788.737_setup_q.exe was completed.
Please type "download" to load the new software,
and "reboot" subsequently for the changes to take effect.

filer01a> download

download: Reminder: upgrade both the nodes in the Cluster


download: You can cancel this operation by hitting Ctrl-C in the next 6 seconds.
download: Depending on system load, it may take many minutes
download: to complete this operation. Until it finishes, you will
download: not be able to use the console.
Thu Sep 20 08:47:33 GMT [filer01a: download.request:notice]: Operator requested download initiated

download: Downloading boot device


download: If upgrading from a version of Data ONTAP prior to 7.3, please ensure
download: there is at least 3% of available space on each aggregate before
download: upgrading. Additional information can be found in the release notes.
........Thu Sep 20 08:48:45 GMT [filer01a: raid.disk.offline:notice]: Marking Disk /aggr0/plex0/rg0/1a.00.0 Shelf 0 Bay 0 [NETAPP
X306_HJUPI02TSSM NA00] S/N [B9JMZ14F] offline.
Thu Sep 20 08:48:45 GMT [filer01a: bdfu.selected:info]: Disk 1a.00.0 [NETAPP X306_HJUPI02TSSM NA00] S/N [B9JMZ14F] selected for
background disk firmware update.
Thu Sep 20 08:48:45 GMT [filer01a: dfu.firmwareDownloading:info]: Now downloading firmware file /etc/disk_fw/X306_HJUPI02TSSM.NA02.LOD on 1
disk(s) of plex [Pool0]...
....Thu Sep 20 08:49:11 GMT [filer01a: raid.disk.online:notice]: Onlining Disk /aggr0/plex0/rg0/1a.00.0 Shelf 0 Bay 0 [NETAPP
X306_HJUPI02TSSM NA02] S/N [B9JMZ14F].
Disk I/O attempted while disk 1a.00.0 is being zeroed.
Thu Sep 20 08:49:11 GMT [filer01a: fmmb.lock.disk.remove:info]: Disk ?.? removed from local mailbox set.
Thu Sep 20 08:49:12 GMT [filer01a: fmmb.current.lock.disk:info]: Disk 1a.00.1 is a local HA mailbox disk.
Thu Sep 20 08:49:12 GMT [filer01a: fmmb.current.lock.disk:info]: Disk 1a.00.2 is a local HA mailbox disk.
Thu Sep 20 08:49:12 GMT [filer01a: fmmb.lock.disk.remove:info]: Disk 1a.00.2 removed from local mailbox set.
Thu Sep 20 08:49:12 GMT [filer01a: fmmb.current.lock.disk:info]: Disk 1a.00.0 is a local HA mailbox disk.
Thu Sep 20 08:49:12 GMT [filer01a: fmmb.current.lock.disk:info]: Disk 1a.00.1 is a local HA mailbox disk.
.....Thu Sep 20 08:50:00 GMT [filer01a: mgr.stack.openFail:warning]: Unable to open function name/address mapping file
/etc/boot/mapfile_7.3.4.2O: No such file or directory

download: Downloading boot device (Service Area)


........Thu Sep 20 08:51:11 GMT [filer01a: raid.disk.offline:notice]: Marking Disk /aggr0/plex0/rg0/1a.00.1 Shelf 0 Bay 1 [NETAPP
X306_HJUPI02TSSM NA00] S/N [B9JM4AUT] offline.
Thu Sep 20 08:51:11 GMT [filer01a: bdfu.selected:info]: Disk 1a.00.1 [NETAPP X306_HJUPI02TSSM NA00] S/N [B9JM4AUT] selected for
background disk firmware update.
Thu Sep 20 08:51:11 GMT [filer01a: dfu.firmwareDownloading:info]: Now downloading firmware file /etc/disk_fw/X306_HJUPI02TSSM.NA02.LOD on 1
disk(s) of plex [Pool0]...
.....Thu Sep 20 08:51:37 GMT [filer01a: raid.disk.online:notice]: Onlining Disk /aggr0/plex0/rg0/1a.00.1 Shelf 0 Bay 1 [NETAPP
X306_HJUPI02TSSM NA02] S/N [B9JM4AUT].
Disk I/O attempted while disk 1a.00.1 is being zeroed.
Thu Sep 20 08:51:37 GMT [filer01a: fmmb.lock.disk.remove:info]: Disk ?.? removed from local mailbox set.
Thu Sep 20 08:51:38 GMT [filer01a: fmmb.current.lock.disk:info]: Disk 1a.00.0 is a local HA mailbox disk.
Thu Sep 20 08:51:38 GMT [filer01a: fmmb.current.lock.disk:info]: Disk 1a.00.2 is a local HA mailbox disk.
Thu Sep 20 08:51:38 GMT [filer01a: fmmb.lock.disk.remove:info]: Disk 1a.00.2 removed from local mailbox set.
Thu Sep 20 08:51:38 GMT [filer01a: fmmb.current.lock.disk:info]: Disk 1a.00.0 is a local HA mailbox disk.
Thu Sep 20 08:51:38 GMT [filer01a: fmmb.current.lock.disk:info]: Disk 1a.00.1 is a local HA mailbox disk.
....
filer01a> Thu Sep 20 08:52:17 GMT [filer01a: download.requestDone:notice]: Operator requested download completed

filer01a> version -b
1:/x86_64/kernel/primary.krn: OS 7.3.7
1:/backup/x86_64/kernel/primary.krn: OS 7.3.4
1:/x86_64/diag/diag.krn: 5.6.1
1:/x86_64/firmware/excelsio/firmware.img: Firmware 1.9.0
1:/x86_64/firmware/DrWho/firmware.img: Firmware 2.5.0
1:/x86_64/firmware/SB_XV/firmware.img: Firmware 4.4.0
1:/boot/loader: Loader 1.8
1:/common/firmware/zdi/zdi_fw.zpk: Flash Cache Firmware 2.2 (Build 0x201012201350)
1:/common/firmware/zdi/zdi_fw.zpk: PAM II Firmware 1.10 (Build 0x201012200653)
1:/common/firmware/zdi/zdi_fw.zpk: X1936A FPGA Configuration PROM 1.0 (Build 0x200706131558)
filer01a>

filer01a> reboot

CIFS local server is shutting down...

CIFS local server has shut down...


Thu Sep 20 08:54:53 GMT [filer01a: kern.shutdown:notice]: System shut down because : "reboot".
Thu Sep 20 08:54:53 GMT [filer01a: perf.archive.stop:info]: Performance archiver stopped.

Phoenix TrustedCore(tm) Server


Copyright 1985-2006 Phoenix Technologies Ltd.
All Rights Reserved
BIOS version: 4.4.0
Portions Copyright (c) 2007-2009 NetApp. All Rights Reserved.
CPU= Dual-Core AMD Opteron(tm) Processor 2216 X 1
Testing RAM
512MB RAM tested
4096MB RAM installed
Fixed Disk 0: STEC

Boot Loader version 1.8


Copyright (C) 2000-2003 Broadcom Corporation.
Portions Copyright (C) 2002-2009 NetApp

CPU Type: Dual-Core AMD Opteron(tm) Processor 2216

Starting AUTOBOOT press Ctrl-C to abort...


Loading x86_64/kernel/primary.krn:................0x200000/49212136 0x30eeae8/23754912 0x4796388/7980681 0x4f32a11/7 Entry at 0x00202018
Starting program at 0x00202018
Press CTRL-C for special boot menu
The platform doesn't support service processor
nvram: Need to update primary image on flash from version 2 to 4
nvram: Need to update secondary image on flash from version 2 to 4
Updating nvram firmware, memory valid is off. The system will automatically reboot when the update is complete.
and nvram6 boot block...................
Thu Sep 20 08:56:14 GMT [nvram.new.fw.downloaded:CRITICAL]: Firmware version 4 has been successfully downloaded on to the NVRAM card. The
system will automatically reboot now.

Starting AUTOBOOT press Ctrl-C to abort...


Loading x86_64/kernel/primary.krn:................0x200000/49212136 0x30eeae8/23754912 0x4796388/7980681 0x4f32a11/7 Entry at 0x00202018
Starting program at 0x00202018
Press CTRL-C for special boot menu
The platform doesn't support service processor
Thu Sep 20 08:56:58 GMT [nvram.battery.state:info]: The NVRAM battery is currently OFF.
Thu Sep 20 08:57:00 GMT [nvram.battery.turned.on:info]: The NVRAM battery is turned ON. It is turned OFF during system shutdown.
Thu Sep 20 08:57:02 GMT [cf.nm.nicTransitionUp:info]: Interconnect link 0 is UP
Thu Sep 20 08:57:02 GMT [sas.adapter.firmware.download:info]: Updating firmware on SAS adapter 1a from version 01.08.00.00 to version
01.10.00.00.
Thu Sep 20 08:57:02 GMT [sas.adapter.firmware.download:info]: Updating firmware on SAS adapter 1c from version 01.08.00.00 to version
01.10.00.00.
Thu Sep 20 08:57:04 GMT [sas.adapter.firmware.download:info]: Updating firmware on SAS adapter 1a from version 01.09.01.00 to version
01.10.14.00.
Thu Sep 20 08:57:04 GMT [sas.adapter.firmware.download:info]: Updating firmware on SAS adapter 1c from version 01.09.01.00 to version
01.10.14.00.
Thu Sep 20 08:57:12 GMT [fci.initialization.failed:error]: Initialization failed on Fibre Channel adapter 0b.
Thu Sep 20 08:57:12 GMT [fci.initialization.failed:error]: Initialization failed on Fibre Channel adapter 0c.

Data ONTAP Release 7.3.7: Thu May 3 04:27:32 PDT 2012 (IBM)
Copyright (c) 1992-2012 NetApp.
Starting boot on Thu Sep 20 08:56:56 GMT 2012
Thu Sep 20 08:57:28 GMT [kern.version.change:notice]: Data ONTAP kernel version was changed from Data ONTAP Release 7.3.4 to Data ONTAP
Release 7.3.7.
Thu Sep 20 08:57:31 GMT [diskown.isEnabled:info]: software ownership has been enabled for this system
Thu Sep 20 08:57:34 GMT [sas.link.error:error]: Could not recover link on SAS adapter 1d after 10 seconds. Offlining the adapter.
Thu Sep 20 08:57:34 GMT [cf.nm.nicReset:warning]: Initiating soft reset on Cluster Interconnect card 0 due to rendezvous reset
Thu Sep 20 08:57:34 GMT [cf.rv.notConnected:error]: Connection for cfo_rv failed
Thu Sep 20 08:57:34 GMT [cf.nm.nicTransitionDown:warning]: Cluster Interconnect link 0 is DOWN
Thu Sep 20 08:57:34 GMT [cf.rv.notConnected:error]: Connection for cfo_rv failed
Thu Sep 20 08:57:34 GMT [fmmb.current.lock.disk:info]: Disk 1a.00.0 is a local HA mailbox disk.
Thu Sep 20 08:57:34 GMT [fmmb.current.lock.disk:info]: Disk 1a.00.1 is a local HA mailbox disk.
Thu Sep 20 08:57:34 GMT [fmmb.instStat.change:info]: normal mailbox instance on local side.
Thu Sep 20 08:57:35 GMT [sas.link.error:error]: Could not recover link on SAS adapter 1b after 10 seconds. Offlining the adapter.
Thu Sep 20 08:57:35 GMT [shelf.config.spha:info]: System is using single path HA attached storage only.
Thu Sep 20 08:57:35 GMT [fmmb.current.lock.disk:info]: Disk 1a.00.12 is a partner HA mailbox disk.
Thu Sep 20 08:57:35 GMT [fmmb.current.lock.disk:info]: Disk 1a.00.13 is a partner HA mailbox disk.
Thu Sep 20 08:57:35 GMT [fmmb.instStat.change:info]: normal mailbox instance on partner side.
Thu Sep 20 08:57:35 GMT [cf.fm.partner:info]: Cluster monitor: partner 'filer01b'
Thu Sep 20 08:57:35 GMT [cf.fm.kernelMismatch:warning]: Cluster monitor: possible kernel mismatch detected local 'Data ONTAP/7.3.7',
partner 'Data ONTAP/7.3.4'
Thu Sep 20 08:57:35 GMT [cf.fm.timeMasterStatus:info]: Acting as cluster time slave
Thu Sep 20 08:57:36 GMT [cf.nm.nicTransitionUp:info]: Interconnect link 0 is UP
Thu Sep 20 08:57:36 GMT [ses.multipath.ReqError:CRITICAL]: SAS-Shelf24 detected without a multipath configuration.
Thu Sep 20 08:57:36 GMT [raid.cksum.replay.summary:info]: Replayed 0 checksum blocks.
Thu Sep 20 08:57:36 GMT [raid.stripe.replay.summary:info]: Replayed 0 stripes.
Thu Sep 20 08:57:37 GMT [localhost: cf.fm.launch:info]: Launching cluster monitor
Thu Sep 20 08:57:38 GMT [localhost: cf.fm.partner:info]: Cluster monitor: partner 'filer01b'
Thu Sep 20 08:57:38 GMT [localhost: cf.fm.notkoverClusterDisable:warning]: Cluster monitor: cluster takeover disabled (restart)
sparse volume upgrade done. num vol 0.
Thu Sep 20 08:57:38 GMT [localhost: cf.fsm.takeoverOfPartnerDisabled:notice]: Cluster monitor: takeover of filer01b disabled (cluster
takeover disabled)
add net 127.0.0.0: gateway 127.0.0.1
Thu Sep 20 08:57:40 GMT [localhost: cf.nm.nicReset:warning]: Initiating soft reset on Cluster Interconnect card 0 due to rendezvous reset
Vdisk Snap Table for host:0 is initialized
Thu Sep 20 08:57:40 GMT [localhost: rc:notice]: The system was down for 164 seconds
Thu Sep 20 08:57:41 GMT [localhost: rc:info]: Registry is being upgraded to improve storing of local changes.
Thu Sep 20 08:57:41 GMT [filer01a: rc:info]: Registry upgrade successful.
Thu Sep 20 08:57:41 GMT [filer01a: cf.partner.short_uptime:warning]: Partner up for 2 seconds only

***** 5 disks have been identified as having an incorrect
***** firmware revision level.
***** Please consult the man pages for disk_fw_update
***** to upgrade the firmware on these disks.

Thu Sep 20 08:57:42 GMT [filer01a: dfu.firmwareDownrev:error]: Downrev firmware on 5 disk(s)


Thu Sep 20 08:57:43 GMT [filer01a: sfu.partnerNotResponding:error]: Partner either responded in the negative, or did not respond in 20
seconds. Aborting shelf firmware update.
Thu Sep 20 08:57:43 GMT [filer01a: perf.archive.start:info]: Performance archiver started. Sampling 23 objects and 211 counters.
Thu Sep 20 08:57:47 GMT [filer01a: netif.linkUp:info]: Ethernet e0a: Link up.
add net default: gateway 10.18.1.254
Thu Sep 20 08:57:48 GMT [filer01a: rpc.dns.file.not.found:error]: Cannot enable DNS: /etc/resolv.conf does not exist
Thu Sep 20 08:57:48 GMT [filer01a: snmp.agent.msg.access.denied:warning]: Permission denied for SNMPv3 requests from root. Reason: Password
is too short (SNMPv3 requires at least 8 characters).
Thu Sep 20 08:57:48 GMT [filer01a: mgr.boot.disk_done:info]: Data ONTAP Release 7.3.7 boot complete. Last disk update written at Thu Sep 20
08:54:54 GMT 2012
Reminder: you should also set option timed.proto on the partner node
or the next takeover may not function correctly.
Thu Sep 20 08:57:48 GMT [filer01a: cf.hwassist.notifyEnableOn:info]: Cluster hw_assist: hw_assist functionality has been enabled by user.
Thu Sep 20 08:57:48 GMT [filer01a: cf.hwassist.intfUnConfigured:info]: Cannot get IP address of preferred e0M ethernet interface for
hardware assist functionality.
Thu Sep 20 08:57:48 GMT [filer01a: cf.hwassist.emptyPrtnrAddr:warning]: Partner address is empty; set it using the command 'options
cf.hw_assist.partner.address' and to make the hardware-assisted takeover work.
Thu Sep 20 08:57:48 GMT [filer01a: mgr.boot.reason_ok:notice]: System rebooted after a reboot command.
CIFS local server is running.
Thu Sep 20 08:57:49 GMT [filer01a: asup.post.host:info]: Autosupport (BATTERY_LOW) cannot connect to url
eccgw01.boulder.ibm.com/support/electronic/nas (Could not find hostname 'eccgw01.boulder.ibm.com', hostname lookup resolution error:
Unknown host)
filer01a> Thu Sep 20 08:57:50 GMT [filer01a: rlm.driver.mailhost:warning]: RLM setup could not access the mailhost specified in Data ONTAP.
Thu Sep 20 08:57:50 GMT [filer01a: unowned.disk.reminder:info]: 24 disks are currently unowned. Use 'disk assign' to assign the disks to a
filer.
Ipspace "acp-ipspace" created
Thu Sep 20 08:57:55 GMT [filer01a: rlm.firmware.upgrade.reqd:warning]: The RLM firmware 3.1 is incompatible with Data ONTAP for IPv6.
Thu Sep 20 08:57:55 GMT [filer01a: cf.hwassist.emptyPrtnrAddr:warning]: Partner address is empty; set it using the command 'options
cf.hw_assist.partner.address' and to make the hardware-assisted takeover work.
Thu Sep 20 08:57:56 GMT [filer01a: cf.hwassist.emptyPrtnrAddr:warning]: Partner address is empty; set it using the command 'options
cf.hw_assist.partner.address' and to make the hardware-assisted takeover work.
Thu Sep 20 08:58:05 GMT [filer01a: monitor.globalStatus.critical:CRITICAL]: Cluster failover of filer01b is not possible: cluster takeover
disabled.
Thu Sep 20 08:58:12 GMT [filer01a: nbt.nbns.registrationComplete:info]: NBT: All CIFS name registrations have completed for the local
server.
Thu Sep 20 08:58:14 GMT [filer01a: asup.post.host:info]: Autosupport (BATTERY_LOW) cannot connect to url
eccgw01.boulder.ibm.com/support/electronic/nas (Could not find hostname 'eccgw01.boulder.ibm.com', hostname lookup resolution error:
Unknown host)
Thu Sep 20 08:58:50 GMT [filer01a: asup.post.host:info]: Autosupport (BATTERY_LOW) cannot connect to url
eccgw01.boulder.ibm.com/support/electronic/nas (Could not find hostname 'eccgw01.boulder.ibm.com', hostname lookup resolution error:
Unknown host)
Thu Sep 20 08:59:43 GMT [filer01a: raid.disk.offline:notice]: Marking Disk 1a.00.3 Shelf 0 Bay 3 [NETAPP X306_HJUPI02TSSM NA00] S/N
[B9JLT7ST] offline.
Thu Sep 20 08:59:43 GMT [filer01a: bdfu.selected:info]: Disk 1a.00.3 [NETAPP X306_HJUPI02TSSM NA00] S/N [B9JLT7ST] selected for
background disk firmware update.
Thu Sep 20 08:59:44 GMT [filer01a: dfu.firmwareDownloading:info]: Now downloading firmware file /etc/disk_fw/X306_HJUPI02TSSM.NA02.LOD on 1
disk(s) of plex [Pool0]...

Firmware update note: a while after the reboot, the disk firmware update completes:


Thu Sep 20 09:29:24 GMT [filer01a: dfu.firmwareUpToDate:info]: Firmware is up-to-date on all disk drives
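
As the reminder at the start of the download step says, both nodes in the HA pair need to be upgraded. A minimal sketch of repeating the same steps on the partner, assuming the setup file has already been copied to /etc/software on filer01b (file names will differ per system):

filer01b> software install 26405.211.788.737_setup_q.exe
filer01b> download
filer01b> version -b
filer01b> reboot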
SET VOL0 SETTINGS

Set vol0 to 20 GB and apply some additional settings (see NetApp Data Planning for more information on these settings):

filer01a> vol size vol0 20g


vol size: Flexible volume 'vol0' size set to 20g.
filer01a> vol options vol0 no_atime_update on
filer01a> vol options vol0 fractional_reserve 0
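
To confirm that the new size and options on vol0 took effect, a quick check such as the following should do (a sketch; the exact output varies per system):

filer01a> vol status vol0 -v
filer01a> df -h vol0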
DNS AND NTP

DNS

Configure DNS:

filer01a> rdfile /etc/resolv.conf


search getshifting.com intranet
nameserver 10.16.110.1
filer01a> options dns.enable on
You are changing option dns.enable which applies to both members of
the cluster in takeover mode.
This value must be the same in both cluster members prior to any takeover
or giveback, or that next takeover/giveback may not work correctly.
Thu Sep 20 12:31:00 GMT [filer01a: reg.options.cf.change:warning]: Option dns.enable changed on one cluster node.

Note: wrfile needs an empty line at the end; save with CTRL+C.
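
If /etc/resolv.conf does not exist yet (the boot log above complained about exactly that), you can create it from the console with wrfile. A sketch, assuming the same search domains and nameserver as above; end with an empty line and press CTRL+C to save:

filer01a> wrfile /etc/resolv.conf
search getshifting.com intranet
nameserver 10.16.110.1

filer01a> rdfile /etc/resolv.conf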

NTP

Configure NTP:

filer01a> options timed


timed.enable on (same value in local+partner recommended)
timed.log off (same value in local+partner recommended)
timed.max_skew 30m (same value in local+partner recommended)
timed.min_skew 0 (same value in local+partner recommended)
timed.proto rtc (same value in local+partner recommended)
timed.sched 1h (same value in local+partner recommended)
timed.servers (same value in local+partner recommended)
timed.window 0s (same value in local+partner recommended)
filer01a> options timed.servers 10.16.123.123
Reminder: you should also set option timed.servers on the partner node
or the next takeover may not function correctly.
filer01a> options timed.proto ntp
Reminder: you should also set option timed.proto on the partner node
or the next takeover may not function correctly.
filer01a> options timed
timed.enable on (same value in local+partner recommended)
timed.log off (same value in local+partner recommended)
timed.max_skew 30m (same value in local+partner recommended)
timed.min_skew 0 (same value in local+partner recommended)
timed.proto ntp (same value in local+partner recommended)
timed.sched 1h (same value in local+partner recommended)
timed.servers 10.16.123.123 (same value in local+partner recommended)
timed.window 0s (same value in local+partner recommended)
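
As the reminders point out, the same timed options should also be set on the partner node so time stays consistent across takeover. A sketch, assuming the same NTP server:

filer01b> options timed.servers 10.16.123.123
filer01b> options timed.proto ntp
filer01b> options timed.enable on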
AUTOSUPPORT

Configure the AutoSupport settings:

options autosupport.from filer01a@getshifting.com


options autosupport.mailhost 10.16.102.111
options autosupport.support.transport smtp
options autosupport.to sjoerd@getshifting.com
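
To verify the AutoSupport configuration and send a test message, something like the following should work (a sketch; the subject after autosupport.doit is arbitrary, and the same options should also be set on filer01b):

filer01a> options autosupport
filer01a> options autosupport.doit TEST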
CLUSTER FAILOVER

Configure the Cluster Failover:

First configure RLM failover since IP failover is already managed by the startup files:
filer01a> options cf.hw_assist.partner.address 10.18.1.32
Validating the new hw-assist configuration. Please wait...
Thu Sep 20 12:45:19 GMT [filer01a: cf.hwassist.localMonitor:warning]: Cluster hw_assist: hw_assist functionality is inactive.
cf hw_assist Error: can not validate new config.
No response from partner(filer01b), timed out.
filer01a> Thu Sep 20 12:46:00 GMT [filer01a: cf.hwassist.hwasstActive:info]: Cluster hw_assist: hw_assist functionality is active on IP
address: 10.18.1.31 port: 4444

Thu Sep 20 12:45:46 GMT [filer01b: cf.hwassist.localMonitor:warning]: Cluster hw_assist: hw_assist functionality is inactive.
Thu Sep 20 12:45:46 GMT [filer01b: cf.hwassist.missedKeepAlive:warning]: Cluster hw_assist: missed keep alive alert from partner(filer01a).
Thu Sep 20 12:45:46 GMT [filer01b: cf.hwassist.hwasstActive:info]: Cluster hw_assist: hw_assist functionality is active on IP address:
10.18.1.32 port: 4444

filer01b> options cf.hw_assist.partner.address 10.18.1.31


Validating the new hw-assist configuration. Please wait...
cf hw_assist Error: can not validate new config.
No response from partner(filer01a), timed out.

Then check and enable (cf enable) the clustering:

filer01a> cf status
Cluster disabled.
filer01a> cf partner
filer01b
filer01a> cf monitor
current time: 20Sep2012 12:48:38
UP 03:19:42, partner 'filer01b', cluster monitor disabled
filer01a> cf enable
filer01a> Thu Sep 20 12:50:48 GMT [filer01a: cf.misc.operatorEnable:warning]: Cluster monitor: operator initiated enabling of cluster
Thu Sep 20 12:50:48 GMT [filer01a: cf.fsm.takeoverOfPartnerEnabled:notice]: Cluster monitor: takeover of filer01b enabled
Thu Sep 20 12:50:48 GMT [filer01a: cf.fsm.takeoverByPartnerEnabled:notice]: Cluster monitor: takeover of filer01a by filer01b enabled

filer01a> Thu Sep 20 12:51:01 GMT [filer01a: monitor.globalStatus.ok:info]: The system's global status is normal.

filer01a> cf status
Cluster enabled, filer01b is up.
filer01a> cf partner
filer01b
filer01a> cf monitor
current time: 20Sep2012 12:51:21
UP 03:22:22, partner 'filer01b', cluster monitor enabled
VIA Interconnect is up (link 0 up, link 1 up), takeover capability on-line
partner update TAKEOVER_ENABLED (20Sep2012 12:51:21)
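
With the cluster monitor enabled you can, during a maintenance window, verify failover end to end with a manual takeover and giveback. A sketch of the commands involved (note that a takeover is disruptive for clients of the partner node):

filer01a> cf takeover
filer01a> cf status
filer01a> cf giveback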
SYSLOG

Configure syslog to send logging to a central syslog server (see NetApp Syslog for more information):

filer01a> wrfile /etc/syslog.conf


# Log messages of priority info or higher to the console and to /etc/messages
*.err /dev/console
*.info /etc/messages
*.info @10.18.2.240
read: error reading standard input: Interrupted system call
filer01a> rdfile /etc/syslog.conf
# Log messages of priority info or higher to the console and to /etc/messages
*.err /dev/console
*.info /etc/messages
*.info @10.18.2.240
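
The same /etc/syslog.conf should normally be written on the partner node as well, so that both heads forward their messages to the central syslog server. A sketch, assuming the same destination (again: end wrfile with an empty line and CTRL+C):

filer01b> wrfile /etc/syslog.conf
# Log messages of priority info or higher to the console and to /etc/messages
*.err /dev/console
*.info /etc/messages
*.info @10.18.2.240

filer01b> rdfile /etc/syslog.conf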
DISABLE TELNET

Disable telnet access to your filers:

filer01a> options telnet.enable off


Reminder: you MUST also set option telnet.enable on the partner node
or the next takeover will not function correctly.
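
And the same on the partner, of course; listing the telnet options afterwards confirms the change. A sketch (if you still need remote CLI access, SSH set up via secureadmin setup ssh is the usual replacement, but that is outside this transcript):

filer01b> options telnet.enable off
filer01b> options telnet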
DISK CONFIG

Now this part is quite personal, I guess: you need to configure your disks and assign the correct disks to the correct heads. How you do this is up to you; I usually divide them equally:

First a few useful commands:

 Find the disk speed: storage show disk -a

See all disks and their ownership (to see only the unowned disks, use disk show -n):

filer01a> disk show -v


DISK OWNER POOL SERIAL NUMBER
------------ ------------- ----- -------------
1a.00.13 filer01b (151762803) Pool0 B9JGMTRT
1a.00.12 filer01b (151762803) Pool0 B9JB8DGF
1a.00.16 filer01b (151762803) Pool0 B9JMT0JF
1a.00.15 filer01b (151762803) Pool0 B9JM4N7T
1a.00.19 filer01b (151762803) Pool0 B9JMDT2T
1a.00.18 filer01b (151762803) Pool0 B9JHNPST
1a.00.14 filer01b (151762803) Pool0 B9JHNPLT
1a.00.17 filer01b (151762803) Pool0 B9JLT7UT
1c.01.8 Not Owned NONE 6SL3PDBR0000N2407Y4X
1c.01.9 Not Owned NONE 6SL3T0EP0000N2401BQE
1c.01.10 Not Owned NONE 6SL3QWQF0000N238L2U8
1c.01.4 Not Owned NONE 6SL3SHRZ0000N240BGZE
1c.01.23 Not Owned NONE 6SL3PR9T0000N237EL3Q
1c.01.12 Not Owned NONE 6SL3QHLG0000N238D6W8
1c.01.11 Not Owned NONE 6SL3PNB30000N2392UAG
1c.01.1 Not Owned NONE 6SL3PF180000N24007YD
1c.01.0 Not Owned NONE 6SL3SRHH0000N2409944
1c.01.6 Not Owned NONE 6SL3SEE80000N2404HQC
1c.01.16 Not Owned NONE 6SL3R6KL0000N23907K2
1c.01.22 Not Owned NONE 6SL3PBX20000N237NBWY
1c.01.13 Not Owned NONE 6SL3RBAS0000N2395038
1c.01.20 Not Owned NONE 6SL3PND40000N238H5GX
1c.01.19 Not Owned NONE 6SL3P3NH0000N238608X
1c.01.18 Not Owned NONE 6SL3PWDK0000N238L4QM
1c.01.5 Not Owned NONE 6SL3S87F0000N239GPHT
1c.01.15 Not Owned NONE 6SL3RC5M0000M125NTE1
1c.01.2 Not Owned NONE 6SL3SB6E0000N2402QZE
1c.01.14 Not Owned NONE 6SL3R1FP0000N239553G
1c.01.17 Not Owned NONE 6SL3R3LJ0000N239575J
1c.01.21 Not Owned NONE 6SL3QQQP0000N23903DJ
1c.01.7 Not Owned NONE 6SL3SG830000N2404H3Z
1c.01.3 Not Owned NONE 6SL3SWBE0000N24007QV
1a.00.2 filer01a (151762815) Pool0 B9JMDP0T
1a.00.0 filer01a (151762815) Pool0 B9JMZ14F
1a.00.7 filer01a (151762815) Pool0 B9JLS1MT
1a.00.1 filer01a (151762815) Pool0 B9JM4AUT
1a.00.5 filer01a (151762815) Pool0 B9JHNPJT
1a.00.4 filer01a (151762815) Pool0 B9JMYGNF
1a.00.3 filer01a (151762815) Pool0 B9JLT7ST
1a.00.6 filer01a (151762815) Pool0 B9JMDL9T

Now assign the disks to an owner. There are 24 unowned disks: 12 for one head, 12 for the other:

filer01a> disk assign 1c.01.0 -o filer01a


Sep 21 15:01:08 [filer01a: diskown.changingOwner:info]: changing ownership for disk 1c.01.3 (S/N 6SL3SWBE0000N24007QV) from unowned (ID -1)
to filer01a (ID 151762815)

Now, because I had forgotten to set this option beforehand:

filer01a*> priv set advanced


filer01a*> options disk.auto_assign off
You are changing option disk.auto_assign which applies to both members of
the cluster in takeover mode.
This value must be the same in both cluster members prior to any takeover
or giveback, or that next takeover/giveback may not work correctly.

This happened: all the remaining unowned disks got auto-assigned:

filer01a> disk show -v


DISK OWNER POOL SERIAL NUMBER
------------ ------------- ----- -------------
1a.00.13 filer01b (151762803) Pool0 B9JGMTRT
1a.00.12 filer01b (151762803) Pool0 B9JB8DGF
1a.00.16 filer01b (151762803) Pool0 B9JMT0JF
1a.00.15 filer01b (151762803) Pool0 B9JM4N7T
1a.00.19 filer01b (151762803) Pool0 B9JMDT2T
1a.00.18 filer01b (151762803) Pool0 B9JHNPST
1a.00.14 filer01b (151762803) Pool0 B9JHNPLT
1a.00.17 filer01b (151762803) Pool0 B9JLT7UT
1c.01.9 filer01a (151762815) Pool0 6SL3T0EP0000N2401BQE
1c.01.10 filer01a (151762815) Pool0 6SL3QWQF0000N238L2U8
1c.01.4 filer01a (151762815) Pool0 6SL3SHRZ0000N240BGZE
1c.01.23 filer01a (151762815) Pool0 6SL3PR9T0000N237EL3Q
1c.01.12 filer01a (151762815) Pool0 6SL3QHLG0000N238D6W8
1c.01.11 filer01a (151762815) Pool0 6SL3PNB30000N2392UAG
1c.01.1 filer01a (151762815) Pool0 6SL3PF180000N24007YD
1c.01.6 filer01a (151762815) Pool0 6SL3SEE80000N2404HQC
1c.01.8 filer01a (151762815) Pool0 6SL3PDBR0000N2407Y4X
1c.01.16 filer01a (151762815) Pool0 6SL3R6KL0000N23907K2
1c.01.22 filer01a (151762815) Pool0 6SL3PBX20000N237NBWY
1c.01.13 filer01a (151762815) Pool0 6SL3RBAS0000N2395038
1c.01.20 filer01a (151762815) Pool0 6SL3PND40000N238H5GX
1c.01.19 filer01a (151762815) Pool0 6SL3P3NH0000N238608X
1c.01.18 filer01a (151762815) Pool0 6SL3PWDK0000N238L4QM
1c.01.5 filer01a (151762815) Pool0 6SL3S87F0000N239GPHT
1c.01.15 filer01a (151762815) Pool0 6SL3RC5M0000M125NTE1
1c.01.2 filer01a (151762815) Pool0 6SL3SB6E0000N2402QZE
1c.01.14 filer01a (151762815) Pool0 6SL3R1FP0000N239553G
1c.01.17 filer01a (151762815) Pool0 6SL3R3LJ0000N239575J
1c.01.21 filer01a (151762815) Pool0 6SL3QQQP0000N23903DJ
1c.01.7 filer01a (151762815) Pool0 6SL3SG830000N2404H3Z
1c.01.3 filer01a (151762815) Pool0 6SL3SWBE0000N24007QV
1a.00.2 filer01a (151762815) Pool0 B9JMDP0T
1a.00.0 filer01a (151762815) Pool0 B9JMZ14F
1a.00.7 filer01a (151762815) Pool0 B9JLS1MT
1a.00.1 filer01a (151762815) Pool0 B9JM4AUT
1a.00.5 filer01a (151762815) Pool0 B9JHNPJT
1a.00.4 filer01a (151762815) Pool0 B9JMYGNF
1a.00.3 filer01a (151762815) Pool0 B9JLT7ST
1a.00.6 filer01a (151762815) Pool0 B9JMDL9T
1c.01.0 filer01a (151762815) Pool0 6SL3SRHH0000N2409944

So I had to remove the ownership of the disks that were wrongly assigned:

filer01a*> disk remove_ownership 1c.01.12 1c.01.13 1c.01.14 1c.01.15 1c.01.16 1c.01.17 1c.01.18 1c.01.19 1c.01.20 1c.01.21 1c.01.22
1c.01.23
Disk 1c.01.12 will have its ownership removed
Disk 1c.01.13 will have its ownership removed
Disk 1c.01.14 will have its ownership removed
Disk 1c.01.15 will have its ownership removed
Disk 1c.01.16 will have its ownership removed
Disk 1c.01.17 will have its ownership removed
Disk 1c.01.18 will have its ownership removed
Disk 1c.01.19 will have its ownership removed
Disk 1c.01.20 will have its ownership removed
Disk 1c.01.21 will have its ownership removed
Disk 1c.01.22 will have its ownership removed
Disk 1c.01.23 will have its ownership removed
Volumes must be taken offline. Are all impacted volumes offline(y/n)?? y

This gave me this result:

filer01a*> disk show -v


DISK OWNER POOL SERIAL NUMBER
------------ ------------- ----- -------------
1a.00.13 filer01b (151762803) Pool0 B9JGMTRT
1a.00.12 filer01b (151762803) Pool0 B9JB8DGF
1a.00.16 filer01b (151762803) Pool0 B9JMT0JF
1a.00.15 filer01b (151762803) Pool0 B9JM4N7T
1a.00.19 filer01b (151762803) Pool0 B9JMDT2T
1a.00.18 filer01b (151762803) Pool0 B9JHNPST
1a.00.14 filer01b (151762803) Pool0 B9JHNPLT
1a.00.17 filer01b (151762803) Pool0 B9JLT7UT
1c.01.9 filer01a (151762815) Pool0 6SL3T0EP0000N2401BQE
1c.01.10 filer01a (151762815) Pool0 6SL3QWQF0000N238L2U8
1c.01.4 filer01a (151762815) Pool0 6SL3SHRZ0000N240BGZE
1c.01.14 Not Owned NONE 6SL3R1FP0000N239553G
1c.01.22 Not Owned NONE 6SL3PBX20000N237NBWY
1c.01.11 filer01a (151762815) Pool0 6SL3PNB30000N2392UAG
1c.01.1 filer01a (151762815) Pool0 6SL3PF180000N24007YD
1c.01.6 filer01a (151762815) Pool0 6SL3SEE80000N2404HQC
1c.01.8 filer01a (151762815) Pool0 6SL3PDBR0000N2407Y4X
1c.01.23 Not Owned NONE 6SL3PR9T0000N237EL3Q
1c.01.19 Not Owned NONE 6SL3P3NH0000N238608X
1c.01.18 Not Owned NONE 6SL3PWDK0000N238L4QM
1c.01.15 Not Owned NONE 6SL3RC5M0000M125NTE1
1c.01.17 Not Owned NONE 6SL3R3LJ0000N239575J
1c.01.5 filer01a (151762815) Pool0 6SL3S87F0000N239GPHT
1c.01.21 Not Owned NONE 6SL3QQQP0000N23903DJ
1c.01.2 filer01a (151762815) Pool0 6SL3SB6E0000N2402QZE
1c.01.12 Not Owned NONE 6SL3QHLG0000N238D6W8
1c.01.20 Not Owned NONE 6SL3PND40000N238H5GX
1c.01.16 Not Owned NONE 6SL3R6KL0000N23907K2
1c.01.7 filer01a (151762815) Pool0 6SL3SG830000N2404H3Z
1c.01.3 filer01a (151762815) Pool0 6SL3SWBE0000N24007QV
1c.01.13 Not Owned NONE 6SL3RBAS0000N2395038
1a.00.2 filer01a (151762815) Pool0 B9JMDP0T
1a.00.0 filer01a (151762815) Pool0 B9JMZ14F
1a.00.7 filer01a (151762815) Pool0 B9JLS1MT
1a.00.1 filer01a (151762815) Pool0 B9JM4AUT
1a.00.5 filer01a (151762815) Pool0 B9JHNPJT
1a.00.4 filer01a (151762815) Pool0 B9JMYGNF
1a.00.3 filer01a (151762815) Pool0 B9JLT7ST
1a.00.6 filer01a (151762815) Pool0 B9JMDL9T
1c.01.0 filer01a (151762815) Pool0 6SL3SRHH0000N2409944

So now I only had to assign the other disks:

filer01a*> disk assign 1c.01.12 1c.01.13 1c.01.14 1c.01.15 1c.01.16 1c.01.17 1c.01.18 1c.01.19 1c.01.20 1c.01.21 1c.01.22 1c.01.23 -o
filer01b
filer01a*> disk show -v
DISK OWNER POOL SERIAL NUMBER
------------ ------------- ----- -------------
1a.00.13 filer01b (151762803) Pool0 B9JGMTRT
1a.00.12 filer01b (151762803) Pool0 B9JB8DGF
1a.00.16 filer01b (151762803) Pool0 B9JMT0JF
1a.00.15 filer01b (151762803) Pool0 B9JM4N7T
1a.00.19 filer01b (151762803) Pool0 B9JMDT2T
1a.00.18 filer01b (151762803) Pool0 B9JHNPST
1a.00.14 filer01b (151762803) Pool0 B9JHNPLT
1a.00.17 filer01b (151762803) Pool0 B9JLT7UT
1c.01.9 filer01a (151762815) Pool0 6SL3T0EP0000N2401BQE
1c.01.10 filer01a (151762815) Pool0 6SL3QWQF0000N238L2U8
1c.01.4 filer01a (151762815) Pool0 6SL3SHRZ0000N240BGZE
1c.01.12 filer01b (151762803) Pool0 6SL3QHLG0000N238D6W8
1c.01.11 filer01a (151762815) Pool0 6SL3PNB30000N2392UAG
1c.01.1 filer01a (151762815) Pool0 6SL3PF180000N24007YD
1c.01.6 filer01a (151762815) Pool0 6SL3SEE80000N2404HQC
1c.01.8 filer01a (151762815) Pool0 6SL3PDBR0000N2407Y4X
1c.01.14 filer01b (151762803) Pool0 6SL3R1FP0000N239553G
1c.01.13 filer01b (151762803) Pool0 6SL3RBAS0000N2395038
1c.01.17 filer01b (151762803) Pool0 6SL3R3LJ0000N239575J
1c.01.22 filer01b (151762803) Pool0 6SL3PBX20000N237NBWY
1c.01.23 filer01b (151762803) Pool0 6SL3PR9T0000N237EL3Q
1c.01.20 filer01b (151762803) Pool0 6SL3PND40000N238H5GX
1c.01.5 filer01a (151762815) Pool0 6SL3S87F0000N239GPHT
1c.01.16 filer01b (151762803) Pool0 6SL3R6KL0000N23907K2
1c.01.2 filer01a (151762815) Pool0 6SL3SB6E0000N2402QZE
1c.01.19 filer01b (151762803) Pool0 6SL3P3NH0000N238608X
1c.01.21 filer01b (151762803) Pool0 6SL3QQQP0000N23903DJ
1c.01.15 filer01b (151762803) Pool0 6SL3RC5M0000M125NTE1
1c.01.7 filer01a (151762815) Pool0 6SL3SG830000N2404H3Z
1c.01.3 filer01a (151762815) Pool0 6SL3SWBE0000N24007QV
1c.01.18 filer01b (151762803) Pool0 6SL3PWDK0000N238L4QM
1a.00.2 filer01a (151762815) Pool0 B9JMDP0T
1a.00.0 filer01a (151762815) Pool0 B9JMZ14F
1a.00.7 filer01a (151762815) Pool0 B9JLS1MT
1a.00.1 filer01a (151762815) Pool0 B9JM4AUT
1a.00.5 filer01a (151762815) Pool0 B9JHNPJT
1a.00.4 filer01a (151762815) Pool0 B9JMYGNF
1a.00.3 filer01a (151762815) Pool0 B9JLT7ST
1a.00.6 filer01a (151762815) Pool0 B9JMDL9T
1c.01.0 filer01a (151762815) Pool0 6SL3SRHH0000N2409944

Which got them nicely divided:


filer01a*> disk show -o filer01a
DISK OWNER POOL SERIAL NUMBER
------------ ------------- ----- -------------
1c.01.9 filer01a (151762815) Pool0 6SL3T0EP0000N2401BQE
1c.01.10 filer01a (151762815) Pool0 6SL3QWQF0000N238L2U8
1c.01.4 filer01a (151762815) Pool0 6SL3SHRZ0000N240BGZE
1c.01.11 filer01a (151762815) Pool0 6SL3PNB30000N2392UAG
1c.01.1 filer01a (151762815) Pool0 6SL3PF180000N24007YD
1c.01.6 filer01a (151762815) Pool0 6SL3SEE80000N2404HQC
1c.01.8 filer01a (151762815) Pool0 6SL3PDBR0000N2407Y4X
1c.01.5 filer01a (151762815) Pool0 6SL3S87F0000N239GPHT
1c.01.2 filer01a (151762815) Pool0 6SL3SB6E0000N2402QZE
1c.01.7 filer01a (151762815) Pool0 6SL3SG830000N2404H3Z
1c.01.3 filer01a (151762815) Pool0 6SL3SWBE0000N24007QV
1a.00.2 filer01a (151762815) Pool0 B9JMDP0T
1a.00.0 filer01a (151762815) Pool0 B9JMZ14F
1a.00.7 filer01a (151762815) Pool0 B9JLS1MT
1a.00.1 filer01a (151762815) Pool0 B9JM4AUT
1a.00.5 filer01a (151762815) Pool0 B9JHNPJT
1a.00.4 filer01a (151762815) Pool0 B9JMYGNF
1a.00.3 filer01a (151762815) Pool0 B9JLT7ST
1a.00.6 filer01a (151762815) Pool0 B9JMDL9T
1c.01.0 filer01a (151762815) Pool0 6SL3SRHH0000N2409944
filer01a*> disk show -o filer01b
DISK OWNER POOL SERIAL NUMBER
------------ ------------- ----- -------------
1a.00.13 filer01b (151762803) Pool0 B9JGMTRT
1a.00.12 filer01b (151762803) Pool0 B9JB8DGF
1a.00.16 filer01b (151762803) Pool0 B9JMT0JF
1a.00.15 filer01b (151762803) Pool0 B9JM4N7T
1a.00.19 filer01b (151762803) Pool0 B9JMDT2T
1a.00.18 filer01b (151762803) Pool0 B9JHNPST
1a.00.14 filer01b (151762803) Pool0 B9JHNPLT
1a.00.17 filer01b (151762803) Pool0 B9JLT7UT
1c.01.12 filer01b (151762803) Pool0 6SL3QHLG0000N238D6W8
1c.01.14 filer01b (151762803) Pool0 6SL3R1FP0000N239553G
1c.01.13 filer01b (151762803) Pool0 6SL3RBAS0000N2395038
1c.01.17 filer01b (151762803) Pool0 6SL3R3LJ0000N239575J
1c.01.22 filer01b (151762803) Pool0 6SL3PBX20000N237NBWY
1c.01.23 filer01b (151762803) Pool0 6SL3PR9T0000N237EL3Q
1c.01.20 filer01b (151762803) Pool0 6SL3PND40000N238H5GX
1c.01.16 filer01b (151762803) Pool0 6SL3R6KL0000N23907K2
1c.01.19 filer01b (151762803) Pool0 6SL3P3NH0000N238608X
1c.01.21 filer01b (151762803) Pool0 6SL3QQQP0000N23903DJ
1c.01.15 filer01b (151762803) Pool0 6SL3RC5M0000M125NTE1
1c.01.18 filer01b (151762803) Pool0 6SL3PWDK0000N238L4QM
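
In hindsight, the cleaner order is to turn off automatic assignment on both heads first and only then hand out the disks. A sketch of that sequence (the disk names are just examples from this system):

filer01a> priv set advanced
filer01a*> options disk.auto_assign off
filer01b> priv set advanced
filer01b*> options disk.auto_assign off
filer01a*> disk assign 1c.01.0 1c.01.1 -o filer01a
filer01a*> disk assign 1c.01.12 1c.01.13 -o filer01b
filer01a*> disk show -n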
CREATE AGGR

Now you can create the aggregates; consider reading this page before you continue:

First see the aggregate status:

filer01a> aggr status -r


Aggregate aggr0 (online, raid_dp) (block checksums)
Plex /aggr0/plex0 (online, normal, active)
RAID group /aggr0/plex0/rg0 (normal)

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 1a.00.0 1a 0 0 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
parity 1a.00.1 1a 0 1 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
data 1a.00.2 1a 0 2 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816

Spare disks

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare 1c.01.0 1c 1 0 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.1 1c 1 1 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.2 1c 1 2 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.3 1c 1 3 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.4 1c 1 4 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.5 1c 1 5 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.6 1c 1 6 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.7 1c 1 7 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.8 1c 1 8 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.9 1c 1 9 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.10 1c 1 10 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.11 1c 1 11 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1a.00.3 1a 0 3 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
spare 1a.00.4 1a 0 4 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
spare 1a.00.5 1a 0 5 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
spare 1a.00.6 1a 0 6 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
spare 1a.00.7 1a 0 7 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816

Partner disks

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
partner 1c.01.22 1c 1 22 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.15 1c 1 15 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.16 1c 1 16 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.21 1c 1 21 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.20 1c 1 20 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.17 1c 1 17 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.18 1c 1 18 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.13 1c 1 13 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.19 1c 1 19 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.12 1c 1 12 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.14 1c 1 14 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.23 1c 1 23 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1a.00.16 1a 0 16 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.12 1a 0 12 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.15 1a 0 15 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.13 1a 0 13 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.19 1a 0 19 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.17 1a 0 17 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.18 1a 0 18 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.14 1a 0 14 SA:A - BSAS 7200 0/0 1695759/3472914816
filer01a>

Now create the aggregates, taking the speed and type of the disks into account (do not mix different disk types or speeds in one aggregate):

filer01a> aggr add aggr0 -d 1a.00.3 1a.00.4 1a.00.5 1a.00.6


Addition of 4 disks to the aggregate has completed.
Note: If you do not have mixed disk types (so there is nothing to worry about), you can add disks like this: aggr add aggr0 7, which will simply add 7 spare disks to the aggregate.

filer01a> aggr status -r


Aggregate aggr0 (online, raid_dp) (block checksums)
Plex /aggr0/plex0 (online, normal, active)
RAID group /aggr0/plex0/rg0 (normal)

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 1a.00.0 1a 0 0 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
parity 1a.00.1 1a 0 1 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
data 1a.00.2 1a 0 2 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
data 1a.00.3 1a 0 3 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
data 1a.00.4 1a 0 4 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
data 1a.00.5 1a 0 5 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
data 1a.00.6 1a 0 6 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816

Spare disks

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare 1c.01.0 1c 1 0 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.1 1c 1 1 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.2 1c 1 2 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.3 1c 1 3 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.4 1c 1 4 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.5 1c 1 5 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.6 1c 1 6 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.7 1c 1 7 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.8 1c 1 8 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.9 1c 1 9 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.10 1c 1 10 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.11 1c 1 11 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1a.00.7 1a 0 7 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816

Partner disks

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
partner 1c.01.22 1c 1 22 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.15 1c 1 15 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.16 1c 1 16 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.21 1c 1 21 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.20 1c 1 20 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.17 1c 1 17 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.18 1c 1 18 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.13 1c 1 13 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.19 1c 1 19 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.12 1c 1 12 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.14 1c 1 14 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.23 1c 1 23 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1a.00.16 1a 0 16 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.12 1a 0 12 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.15 1a 0 15 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.13 1a 0 13 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.19 1a 0 19 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.17 1a 0 17 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.18 1a 0 18 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.14 1a 0 14 SA:A - BSAS 7200 0/0 1695759/3472914816
filer01a>

filer01a> aggr create aggr1_SAS -T SAS -t raid_dp 11


Creation of an aggregate with 11 disks has completed.
filer01a> aggr status -r
Aggregate aggr1_SAS (online, raid_dp) (block checksums)
Plex /aggr1_SAS/plex0 (online, normal, active)
RAID group /aggr1_SAS/plex0/rg0 (normal)

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 1c.01.0 1c 1 0 SA:A - SAS 15000 560000/1146880000 560208/1147307688
parity 1c.01.1 1c 1 1 SA:A - SAS 15000 560000/1146880000 560208/1147307688
data 1c.01.2 1c 1 2 SA:A - SAS 15000 560000/1146880000 560208/1147307688
data 1c.01.3 1c 1 3 SA:A - SAS 15000 560000/1146880000 560208/1147307688
data 1c.01.4 1c 1 4 SA:A - SAS 15000 560000/1146880000 560208/1147307688
data 1c.01.5 1c 1 5 SA:A - SAS 15000 560000/1146880000 560208/1147307688
data 1c.01.6 1c 1 6 SA:A - SAS 15000 560000/1146880000 560208/1147307688
data 1c.01.7 1c 1 7 SA:A - SAS 15000 560000/1146880000 560208/1147307688
data 1c.01.8 1c 1 8 SA:A - SAS 15000 560000/1146880000 560208/1147307688
data 1c.01.9 1c 1 9 SA:A - SAS 15000 560000/1146880000 560208/1147307688
data 1c.01.10 1c 1 10 SA:A - SAS 15000 560000/1146880000 560208/1147307688
Aggregate aggr0 (online, raid_dp) (block checksums)
Plex /aggr0/plex0 (online, normal, active)
RAID group /aggr0/plex0/rg0 (normal)

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 1a.00.0 1a 0 0 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
parity 1a.00.1 1a 0 1 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
data 1a.00.2 1a 0 2 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
data 1a.00.3 1a 0 3 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
data 1a.00.4 1a 0 4 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
data 1a.00.5 1a 0 5 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
data 1a.00.6 1a 0 6 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816

Spare disks

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare 1c.01.11 1c 1 11 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1a.00.7 1a 0 7 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816

Partner disks

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
partner 1c.01.22 1c 1 22 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.15 1c 1 15 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.16 1c 1 16 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.21 1c 1 21 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.20 1c 1 20 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.17 1c 1 17 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.18 1c 1 18 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.13 1c 1 13 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.19 1c 1 19 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.12 1c 1 12 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.14 1c 1 14 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.23 1c 1 23 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1a.00.16 1a 0 16 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.12 1a 0 12 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.15 1a 0 15 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.13 1a 0 13 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.19 1a 0 19 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.17 1a 0 17 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.18 1a 0 18 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.14 1a 0 14 SA:A - BSAS 7200 0/0 1695759/3472914816
filer01a>
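
Before putting the new aggregates to work it can be useful to double-check their options and free space; a quick sketch (output omitted):

filer01a> aggr status -v
filer01a> df -A
filer01a> aggr options aggr1_SAS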

This makes your filers ready for use!
