
Physical Storage

Module 3 Data ONTAP 8.0 7-Mode Administration

Module Objectives
By the end of this module, you should be able to:
  Describe Data ONTAP RAID technology
  Identify a disk in a disk shelf based on its ID
  Execute commands to determine disk ID
  Identify a hot-spare disk in a FAS system
  Describe the effects of using multiple disk types
  Create a 32-bit and a 64-bit aggregate
  Execute aggregate commands in Data ONTAP
  Calculate usable disk space
2009 NetApp. All rights reserved.

Storage
Data ONTAP provides data storage for clients
Storage is made available to clients through a volume (or a smaller increment within a volume), such as vol1
  Volumes are discussed in Module 4
  Volumes are made available to clients through protocols discussed later in this course
Volumes are contained in an aggregate, such as aggr1
  Aggregates are not visible to clients

Storage Architecture


Storage Architecture
Aggregates:
  Are created by administrators
  Contain one or more plexes
Aggregate types:
  Traditional: deprecated
  32-bit: 16-TB limitation
  64-bit: new in Data ONTAP 8.0

[Diagram: aggregate aggr1 containing plex0]

system> aggr status
          Aggr State    Status          Options
     aggr_trad online   raid4, trad               (32-bit)
         aggr0 online   raid_dp, aggr   root      (32-bit)
         aggr1 online   raid_dp, aggr             (64-bit)

Storage Architecture (Cont.)


Plex:
  Provides mirror capabilities (RAID 1-style mirroring) with the SyncMirror product
  Contains one or more RAID groups
An aggregate contains only one plex unless SyncMirror mirroring is used

[Diagram: aggregate aggr1 containing plex0, which holds RAID groups rg0 and rg1]

system> sysconfig -r
...
Plex /aggr1/plex0 (online, normal, active, pool0)
  RAID group /aggr1/plex0/rg0 (normal)
  ...
  RAID group /aggr1/plex0/rg1 (normal)
...

Disks belong to pool0 unless they are part of a SyncMirror configuration

Storage Architecture (Cont.)


RAID group:
  Provides data protection
  Contains two or more disks
RAID types:
  RAID 4
  RAID-DP (a RAID 6 implementation)

[Diagram: aggregate aggr1 containing plex0 with RAID groups rg0 and rg1]

system> sysconfig -r
...
RAID group /aggr1/plex0/rg0 (normal)
RAID Disk Device HA SHELF BAY CHAN Pool ...
--------- ------ -- ----- --- ---- ---- ...
parity    0a.24  0a 1     8   FC:A 0    ...
data      0a.25  0a 1     9   FC:A 0    ...

Storage Architecture (Cont.)


Disks:
  Store data
  Are contained in shelves
  Are made up of 4-KB blocks
Disk types:
  Parity
  Data

[Diagram: aggregate aggr1 containing plex0 with RAID groups rg0 and rg1]

system> sysconfig -r
...
RAID group /aggr1/plex0/rg0 (normal)
RAID Disk Device HA SHELF BAY CHAN Pool ...
--------- ------ -- ----- --- ---- ---- ...
parity    0a.24  0a 1     8   FC:A 0    ...
data      0a.25  0a 1     9   FC:A 0    ...

Disks


Disks
All data is stored on disks
To understand how physical media is managed in your storage system, we will address:
  Disk types
  Disk qualification
  Disk ownership
  Spare disks

Supported Disk Topologies


Shelf Type   Shelf Models                          Supported Platforms
FC           DS14mark2, DS14mark4 (ESH2 and ESH4)  FAS2000, FAS3100, FAS6000
SATA         DS14mark2-AT                          FAS2000, FAS3100, FAS6000
SAS          DS4243                                FAS2000*, FAS3100, FAS6000

* Some limitations; check the NOW site

Disk Qualification
NetApp allows only qualified disks to be used with Data ONTAP
Ensures:
  Quality
  Reliability
Enforced by /etc/qual_devices (do not modify this file)

Caution! Modifying the Disk Qualification Requirement file can cause your storage system to halt.

Disk Names
The system assigns the Disk ID automatically from the host_adapter (HA) and device_id

system> sysconfig -r
Aggregate aggr0 (online, raid_dp, redirect) (block checksums)
  Plex /aggr0/plex0 (online, normal, active)
    RAID group /aggr0/plex0/rg0 (normal)

    RAID Disk Device HA SHELF BAY CHAN Pool Type RPM   Used (MB/blks) ...
    --------- ------ -- ----- --- ---- ---- ---- ----- -------------- ...
    dparity   0a.16  0a 1     0   FC:A      FCAL 10000 34000/69632000 ...
    parity    0a.17  0a 1     1   FC:A      FCAL 10000 34000/69632000 ...
    data      0a.18  0a 1     2   FC:A      FCAL 10000 34000/69632000 ...

Disk ID = host_adapter.device_id
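The naming rule above can be sketched with a small helper. This is an illustrative function, not part of Data ONTAP:

```python
def parse_disk_id(disk_id: str):
    """Split a Data ONTAP disk ID such as '0a.16' into its
    host_adapter and device_id components."""
    host_adapter, device_id = disk_id.split(".")
    return host_adapter, int(device_id)

# The dparity disk from the sysconfig -r output above:
adapter, dev = parse_disk_id("0a.16")
print(adapter, dev)  # 0a 16
```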

Disk Names: host_adapter


host_adapter is the designation for the slot and port where an adapter is located

[Diagram: rear view of a FAS6080 controller, showing onboard adapter ports 0a through 0h and Ethernet ports e0a through e0f]

Disk Names: device_id


[Diagram: DS14 MK4 FC shelf, front view, showing drive bays 13 through 0 and the shelf ID selector]

Shelf ID   Bay Numbers   Device IDs
1          13-0          29-16
2          13-0          45-32
3          13-0          61-48
4          13-0          77-64
5          13-0          93-80
6          13-0          109-96
7          13-0          125-112
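The table follows a simple arithmetic pattern: each shelf spans a 16-ID range, so device_id = (shelf_id x 16) + bay. A sketch, assuming this linear mapping holds for every DS14 shelf on the loop:

```python
def device_id(shelf_id: int, bay: int) -> int:
    # Each DS14 shelf spans 16 device IDs; bays 0-13 map onto
    # the low 14 IDs of that range (the top two IDs are unused).
    return shelf_id * 16 + bay

def shelf_and_bay(dev_id: int):
    # Inverse mapping: recover (shelf_id, bay) from a device ID.
    return divmod(dev_id, 16)

print(device_id(1, 13))      # 29 (shelf 1, top bay)
print(shelf_and_bay(45))     # (2, 13)
```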

The fcstat device_map Command


Use the fcstat command to troubleshoot disks and shelves
Use the fcstat device_map command to show a map of disks and their relative physical positions on an FC loop

system> fcstat device_map
Loop Map for channel 0a:
Translated Map: Port Count 7
  29 15 28 27 25 26 23 22 21 20 16 19 18 17 24
Shelf mapping:
  Shelf 1: 28 27 26 25 24 23 22 21 20 19 18 17 16 15
Loop Map for channel 0b:
Translated Map: Port Count 7
  45 44 43 41 42 39 38 37 36 32 35 34 33 40
Shelf mapping:
  Shelf 2: 44 43 42 41 40 39 38 37 36 35 34 33 32

Disk Ownership
Disks are assigned to one system controller
Disk ownership is either:
  Hardware-based: determined by the slot position of the host bus adapter (HBA) and the shelf module port
  Software-based: determined by the storage system administrator

Storage System    Software Disk Ownership   Hardware Disk Ownership
FAS6000 series    X
FAS3100 series    X
FAS3000 series                              X
FAS2000 series    X

Disk Ownership (Cont.)


To determine your system's ownership type:
system> storage show
  Hardware-based: reports "SANOWN not enabled"
  Software-based: reports the current ownership

In a stand-alone storage system without SyncMirror:
  Disks are owned by a single controller
  Disks are in pool0

High availability and SyncMirror are discussed in Module 13

Software-Based Ownership
Determined by the storage system administrator
To verify current ownership:

system> disk show -v
  DISK       OWNER                POOL    SERIAL NUMBER
  ---------  -------------------  ------  -------------
  0b.43      Not Owned            NONE    41229013
  ...
  0b.29      system (84165672)    Pool0   41229011
  ...

To view all disks without an owner:

system> disk show -n
  DISK       OWNER                POOL    SERIAL NUMBER
  ---------  -------------------  ------  -------------
  0b.43      Not Owned            NONE    41229013
  ...

Software-Based Ownership (Cont.)


To assign disk ownership, use:
system> disk assign {disk_list | all | [-T storage_type] -n count | auto}
  disk_list is the Disk IDs of the unassigned disks (specify the Disk IDs that you wish to work with)
  -T is one of ATA, FCAL, LUN, SAS, or SATA

To assign a specific set of disks:
system> disk assign 0b.43 0b.41 0b.39

To assign all unassigned disks:
system> disk assign all

To unassign disks:
system> disk assign 0b.39 -s unowned -f
  -s specifies the sysid that is to take ownership
  -f forces assignment of previously assigned disks
NOTE: Unassign only hot spare disks

Software-Based Ownership (Cont.)


Automatic assignment option:
system> options disk.auto_assign
  Specifies whether disks are automatically assigned on systems with software disk ownership
  Default is on: Data ONTAP looks for any unassigned disks and assigns them to the same system and pool as the other disks on their loop
Automatic assignment is invoked:
  Every 5 minutes
  10 minutes after boot
  Manually, with: system> disk assign auto

Matching Disk Speeds


When creating an aggregate, Data ONTAP selects disks:
  With the same speed
  That match the speed of existing disks
Data ONTAP verifies that adequate spares are available
  If spares are not available, Data ONTAP warns you
  NetApp recommends keeping spares available

Using Multiple Disk Types in an Aggregate


Drives in an aggregate can be:
  Different speeds (not recommended)
  On the same shelf or on different shelves
Avoid mixing drive types within an aggregate:
  FC and SAS can be mixed (not recommended)
  FC and SATA, or SAS and SATA, cannot be mixed
The spares pool is global within a single controller

Spare Disks
Spare disks are used to:
  Increase aggregate capacity
  Replace failed disks
Disks must be zeroed before use:
  Disks are automatically zeroed when brought into use
  NOTE: NetApp recommends zeroing disks before use:
  system> disk zero spares

System Manager: Disk Management

Select Disks to reveal a list of disks


Disk Protection and Validation


Disk Protection and Validation


Data ONTAP protects data through:
  RAID
Data ONTAP validates data through:
  Disk scrubbing

RAID Groups
RAID groups are collections of data disks and parity disks
RAID groups provide protection through parity
Data ONTAP organizes disks into RAID groups
Data ONTAP supports:
  RAID 4
  RAID-DP

RAID 4 Technology
RAID 4 protects against data loss that results from a single-disk failure in a RAID group
A RAID 4 group requires a minimum of two disks:
  One parity disk
  One data disk

[Diagram: a RAID 4 group of seven data disks and one parity disk]

RAID-DP Technology
RAID-DP protects against data loss that results from a double-disk failure in a RAID group
A RAID-DP group requires a minimum of three disks:
  One parity disk
  One double-parity disk
  One data disk

[Diagram: a RAID-DP group of six data disks, one parity disk, and one double-parity disk]

RAID Group Size


RAID-DP:
  NetApp Platform                              Minimum Group Size   Maximum Group Size   Default Group Size
  All storage systems (with SATA disks)        3                    16                   14
  All storage systems (with FC or SAS disks)   3                    28                   16

RAID 4:
  NetApp Platform                              Minimum Group Size   Maximum Group Size   Default Group Size
  All storage systems (with SATA disks)        2                    7                    7
  All storage systems (with FC or SAS disks)   2                    14                   8
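The group sizes above determine how a disk count is split into RAID groups when an aggregate is created. A rough sketch of that split; `raid_group_layout` and `data_disks` are illustrative names, not Data ONTAP commands, and Data ONTAP's actual placement logic is more nuanced:

```python
def raid_group_layout(disk_count: int, rg_size: int = 16):
    """Split a disk count into RAID groups of at most rg_size disks,
    filling full groups first (default rg_size 16 for FC RAID-DP)."""
    full, remainder = divmod(disk_count, rg_size)
    return [rg_size] * full + ([remainder] if remainder else [])

def data_disks(disk_count: int, rg_size: int = 16, parity_per_group: int = 2):
    # RAID-DP spends two parity disks per group; RAID 4 spends one.
    groups = raid_group_layout(disk_count, rg_size)
    return sum(g - parity_per_group for g in groups)

print(raid_group_layout(32))  # [16, 16]
print(data_disks(32))         # 28
```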

Growing Aggregates
Be careful about how you grow your aggregates

[Diagram: existing rg0 and rg1, each with seven data disks and one parity disk, both nearly full; five new disks are added]

If you grow this nearly full configuration by 5 disks, the newly added data disks can become hot disks, because new writes land mostly on them

Data Validation
NetApp provides data validation using several different methods:
  RAID-level checksums
  The media scrub process
  The RAID scrub process
RAID-level checksums enhance data protection and reliability
WAFL (Write Anywhere File Layout) ensures that real-time validation always occurs
Two different checksum types:
  Block checksums (BCS)
  Zone checksums (ZCS): used only with V-Series

Data Validation and Disk Structure


To understand disk protection and validation, you must understand disk structure

[Diagram: a disk platter showing a track, a mathematical sector, and a sector]

Block Checksums (BCS) Method with FC


In FC disks, the sector size is 520 bytes
Eight 520-byte sectors store one 4096-byte NetApp data block plus 64 bytes of checksum information: the checksum for the 4096 bytes, the inode number, and a timestamp

[Diagram: sectors 1-8, each 520 bytes; NetApp data block = 4096 bytes; checksum area = 64 bytes]

Block Checksums (BCS) Method w/ ATA


In ATA disks, the sector size is 512 bytes
Eight 512-byte sectors store one 4096-byte NetApp data block; the 64 bytes of checksum information (checksum, inode number, and timestamp) for the previous 4096 bytes are stored in a ninth sector, and the remainder of that sector is wasted space

[Diagram: sectors 1-8, each 512 bytes, holding the 4096-byte NetApp data block; the ninth sector holds the 64-byte checksum area]
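The two checksum layouts trade format efficiency differently: the FC layout fits the 64 checksum bytes into the extra 8 bytes of each 520-byte sector, while the ATA layout burns a ninth 512-byte sector. The arithmetic can be checked with a short sketch (illustrative only):

```python
DATA_BYTES = 4096       # one NetApp data block
CHECKSUM_AREA = 64      # checksum + inode number + timestamp

def bcs_format_efficiency(sector_size: int, sectors_per_block: int) -> float:
    """Fraction of raw sector space that carries user data."""
    raw = sector_size * sectors_per_block
    return DATA_BYTES / raw

# FC: 8 x 520 B = 4160 B holds exactly 4096 B of data + 64 B of checksum
print(round(bcs_format_efficiency(520, 8), 3))   # 0.985
# ATA: 8 data sectors + 1 checksum sector = 9 x 512 B = 4608 B
print(round(bcs_format_efficiency(512, 9), 3))   # 0.889
```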

Data Validation Processes


Two processes:

system> options raid.media_scrub
  Checks for media errors only
  If enabled, runs continuously in the background

system> options raid.scrub
  Also called disk scrubbing
  Checks for media errors by verifying the checksum in every block

Comparing Media and RAID Scrubs


A media scrub:
  Is always running in the background when the storage system is not busy
  Looks for unreadable blocks at the lowest level (0s and 1s)
  Is unaware of the data stored in a block
  Takes corrective action when it finds too many unreadable blocks on a disk (sends warnings or fails the disk, depending on findings)

A RAID scrub:
  Is enabled by default
  Can be scheduled or disabled (disabling is not recommended)
  Uses RAID checksums
  Reads a block and then checks the data
  If it finds a discrepancy between the RAID checksum and the data read, re-creates the data from parity and writes it back to the block
  Ensures that data has not become stale by reading every block in an aggregate, even when users haven't accessed the data

About Disk Scrubbing


Automatic RAID scrub:
  By default, begins at 1:00 a.m. on Sundays
  Schedule can be changed by an administrator
  Duration can be specified by an administrator

A manual RAID scrub overrides the automatic settings. To scrub disks manually:
system> options raid.scrub.enable off
and then:
system> aggr scrub start

To view scrub status:
system> aggr scrub status aggrname

To configure the reconstruction impact on performance:
system> options raid.reconstruct.perf_impact

To configure the scrubbing impact on performance:
system> options raid.scrub.perf_impact

Disk Failure and Physical Removal


To fail a disk:
system> disk fail disk_id

To unfail a disk:
system> priv set advanced
system*> disk unfail disk_id

To unload a disk so it can be physically removed:
system> disk remove disk_id
The disk is now ready to be pulled from the shelf

Disk Sanitization
If you have sensitive data on a disk, you might want to do more than remove the disk: sanitize it
Disk sanitization is the process of physically obliterating data by overwriting disks with specified byte patterns or random data so that recovery of the original data becomes impossible
Administrators may choose up to three patterns to use, or use the default pattern specified by Data ONTAP

Disk Sanitization (Cont.)


License the storage system for sanitization:
system> license add XXXXXX

Verify the disks to be sanitized:
system> sysconfig -r

Start the sanitization operation:
system> disk sanitize start -r -c 3 disk_list
  -r overwrites the disks with a random pattern
  -c specifies the number of times to run the operation (maximum = 7)
  Administrators may provide their own pattern with the -p option
  disk_list is a space-separated list of Disk IDs

To check the status of the sanitization operation:
system> disk sanitize status

To release disks back to the spare pool:
system> disk sanitize release disk_list

Degraded Mode
Degraded mode occurs when:
  A single disk fails in a RAID 4 group with no spares
  Two disks fail in a RAID-DP group with no spares
Degraded mode operates for 24 hours, during which time:
  Data is still available
  Performance is less than optimal
  Data must be recalculated from parity until the failed disk is replaced
  CPU usage increases to calculate data from parity
The system shuts down after 24 hours
  To change the time interval, use the options raid.timeout command
If an additional disk in the RAID group fails during degraded mode, the result is data loss

Replacing a Failed Disk by Hot Swapping


Hot swapping is the process of removing or installing a disk drive while the system is running, and allows for:
  Minimal interruption
  The addition of new disks as needed
Removing two disks from a RAID 4 group:
  Double-disk failure
  Data loss will occur
Removing two disks from a RAID-DP group:
  Degraded mode
  No data loss

Replacing Failed Disks


[Diagram: a group of 750-GB disks in which a failed 750-GB disk is replaced by a 1-TB spare]

NOTE: Disk resizing occurs if a smaller disk is replaced by a larger one

Disk Replacement
To replace a data disk with a spare disk:
system> disk replace start diskname spare_diskname
system> disk replace start 0a.21 0a.23

[Diagram: disks 0a.20 (parity disk), 0a.21 (target data disk), 0a.22 (data disk), and 0a.23 (spare disk)]

To check the status of a replace operation:
system> disk replace status

To stop the disk replace operation:
system> disk replace stop diskname

Aggregates


Aggregates
Aggregates logically contain flexible volumes (FlexVol volumes), which are covered in the next module
An aggregate is either:
  32-bit
  64-bit
An aggregate name must:
  Begin with either a letter or the underscore character (_)
  Contain only letters, digits, and underscore characters
  Contain no more than 255 characters

Adding an Aggregate
To add an aggregate using the CLI:
system> aggr create ...
To add an aggregate using NetApp System Manager, use the Aggregate Wizard
When adding aggregates, you must have the following information available:
  Aggregate name
  Aggregate type (32-bit is the default)
  Parity type (RAID-DP is the default)
  RAID group size (minimum)
  Disk selection method
  Disk size
  Number of disks (including parity)

Creating an Aggregate Using the CLI


To create a 64-bit aggregate:
system> aggr create aggrname -B 64 24
  Creates a 64-bit aggregate called aggrname with 24 disks
  By default, this aggregate uses RAID-DP
  24 disks must be available as spares for the command to succeed

To create a 32-bit aggregate:
system> aggr create aggrname -B 32 24
or
system> aggr create aggrname 24

32-bit or 64-bit Aggregate


NetApp recommends the following when creating an aggregate:

32-bit:
  Maximizes performance when there is no need to allocate more than 16 TB of space

64-bit:
  Provides high performance as well as the ability to exceed the 16-TB limitation

Common Aggregate Commands


To grow an existing aggregate:
system> aggr add aggrname [options] disklist
system> aggr status aggrname [options] system> aggr rename aggrname new_aggrname system> aggr offline aggrname system> aggr online aggrname You must destroy all volumes inside before taking the aggregate offline First take the aggregate offline

To check status of an existing aggregate: To rename an aggregate:

To take an aggregate offline:

To put an aggregate back online:

To destroy an aggregate:
system> aggr offline aggrname system> aggr destroy aggrname

2009 NetApp. All rights reserved.

System Manager: Storage View

Select Storage and launch the wizard to configure


Storage Configuration Wizard

NFS and CIFS are discussed in Module 7 and Module 8, respectively
This optional page within the wizard appears if you have NFS and CIFS licensed

Storage Configuration Wizard (Cont.)


System Manager: Aggregate

Select Aggregates to administrate aggregates

Select Create to create a new aggregate


Create Aggregate Wizard

Check the box for a 64-bit aggregate, or leave it clear for a 32-bit aggregate

Create Aggregate Wizard (Cont.)


Create Aggregate Wizard (Cont.)


Space Allocation


Aggregate Space Allocation


Understanding how Data ONTAP allocates space is important Space allocation competing concerns:
Use space efficiency Protect Data
1-TB ATA disks are used

In the this example, we will use the following:


system> aggr create aggr1 5@847
RAID-DP is the default

Data
2009 NetApp. All rights reserved.

Data

Data

Parity Double-Parity

Aggregate Space Allocation (Cont.)


First 20 MB of every disk is used for kernel space and disk metadata such as disk labels
Data

...

1 TB

Data

20 MB

...

1 TB

Data

...

1 TB

Data

...

1 TB

Data

...

1 TB

2009 NetApp. All rights reserved.

Aggregate Space Allocation (Cont.)


Second, disks are right-sized
When you purchase a disk, the disk is originally calculated in decimal format where 1 GB = 1000 MB A 1-TB disk is in decimal format: 1000000 MB When the Data ONTAP analyzes the disk, it computes it in binary format where 1 GB = 1024 MB
1000000 MB / 1024 GB / 1024 MB = 976.56 GB

Data

...

977 GB

1 TB

system> aggr status -r aggr1 ... RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used(MB/blks) Phys(MB/blks) -----------------------------------------------------------------------------data 2b.52 2b 3 4 FC:A - ATA 7200 847555/... 847827/...

But wait Data ONTAP reports more space taken away


2009 NetApp. All rights reserved.
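The decimal-to-binary conversion above is easy to verify with a one-line helper (illustrative, not a Data ONTAP tool):

```python
def decimal_mb_to_binary_gb(size_mb: int) -> float:
    # 1 binary GB = 1024 MB
    return size_mb / 1024

# A "1-TB" disk is 1,000,000 MB in decimal terms:
print(round(decimal_mb_to_binary_gb(1_000_000), 2))  # 976.56
```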

Aggregate Space Allocation (Cont.)


Right-sized also include reducing the size slightly to eliminate manufacturing variance
Data

...

847 GB 1 TB

Data

847 GB is the . . .right-size allocation for 1-TB ATA disks ...

847 GB 1 TB

Data

847 GB 1 TB

Data

...

847 GB

1 TB

Data

...

847 GB

1 TB

2009 NetApp. All rights reserved.

Aggregate Space Allocation (Cont.)


In Data ONTAP prior to version 7.3: Aggregate size is calculated using all disks in the aggregate

Data

Data

Data

Parity Double-Parity

In Data ONTAP 7.3 and later: Aggregate size is calculated using the size of data disks Only data disks in the aggregate are included

Data
2009 NetApp. All rights reserved.

Data

Data

Aggregate Space Allocation (Cont.)


Third, you can use 90% of the available space 10% is for WAFL Reserve which provides efficiency

Data

... 10% WAFL Reserve

847 GB 1 TB

Data

...

847 GB 1 TB

Data

... 90% of the available space

847 GB 1 TB

2009 NetApp. All rights reserved.

Space Usage of an Aggregate


To show the available space in an aggregate:
system> aggr show_space aggrname

Example:

In increments of GB

system> aggr show_space -g aggr1 Aggregate aggr1'

Space available after right-size and kernel space


Total space WAFL reserve Snap reserve Usable space BSR NVLOG A-SIS Smtape 2483GB 248GB 0GB 2234GB 0GB 0GB 0GB This aggregate contains no volume Aggregate Total space Snap reserve WAFL reserve Allocated 0GB 0GB 248GB

10% WAFL reserved


Used 0GB 0GB 0GB Avail 2234GB 0GB 248GB

90% available space can used


2009 NetApp. All rights reserved.
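The show_space numbers above can be reproduced from the per-disk right-sized capacity reported by aggr status -r. A sketch, assuming the display truncates to whole GB and that the WAFL reserve is exactly 10%:

```python
RIGHT_SIZED_MB = 847_555   # per-disk usable size from aggr status -r
DATA_DISKS = 3             # aggr create aggr1 5@847 with RAID-DP: 3 data + 2 parity

total_mb = DATA_DISKS * RIGHT_SIZED_MB
wafl_reserve_mb = total_mb // 10          # 10% WAFL reserve
usable_mb = total_mb - wafl_reserve_mb    # 90% usable space

# Truncate to whole binary GB, as the -g display does
total_gb = total_mb // 1024
wafl_gb = wafl_reserve_mb // 1024
usable_gb = usable_mb // 1024
print(total_gb, wafl_gb, usable_gb)  # 2483 248 2234
```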

Module Summary
In this module, you should have learned to:
  Describe Data ONTAP RAID technology
  Identify a disk in a disk shelf based on its ID
  Execute commands to determine disk ID
  Identify a hot-spare disk in a FAS system
  Describe the effects of using multiple disk types
  Create a 32-bit and a 64-bit aggregate
  Execute aggregate commands in Data ONTAP
  Calculate usable disk space

Exercise
Module 3: Physical Storage
Estimated Time: 60 minutes

Check Your Understanding


What is a RAID group?
A collection of disks organized to protect data that includes:
One or more data disks One or two parity disks for protection

Why use double parity?


To protect against a double-disk failure

2009 NetApp. All rights reserved.

Check Your Understanding (Cont.)


What is the RAID group size and aggregate type of the following command? aggr create newaggr 32
Assuming a default RAID group size of 16, this creates two RAID groups Creates a 32-bit aggregate

What is the minimum number of disks in a RAID-DP group?


Three disks (one data, one parity and one double-parity disk)

2009 NetApp. All rights reserved.
