Administration
Activity Guide
D62269GC30
Edition 3.0
October 2010
D69176
Author
Tammy Shannon
Technical Contributors and Reviewers
David Maxwell, Cindy Swearingen, Glynn Foster, Dominic Kay, Gary Riseborough
This book was published using:
Oracle Tutor
Table of Contents
Practices for Lesson 1: Course Introduction.................................................................................................1-1
Practices for Lesson 1....................................................................................................................................1-3
Practices for Lesson 2: Getting Started with ZFS .........................................................................................2-1
Practices for Lesson 2....................................................................................................................................2-3
Practices for Lesson 3: Mastering ZFS Basics..............................................................................................3-1
Practices for Lesson 3....................................................................................................................................3-3
Practices for Lesson 4: Managing ZFS Storage Pools .................................................................................4-1
Practices for Lesson 4....................................................................................................................................4-3
Practice 4-1: Working with ZFS Pools and Devices .......................................................................................4-4
Practice 4-2: Working with Mirrored Pools .....................................................................................................4-8
Practice 4-3: Managing Devices in Mirrored Pools.........................................................................................4-12
Practice 4-4: Destroying ZFS Storage Pools ..................................................................................................4-18
Practice 4-5: Working with RAID-Z Pools .......................................................................................................4-19
Practice 4-6: Managing Devices in RAID-Z Pools ..........................................................................................4-22
Practice 4-7: Working with the autoexpand Property .....................................................................................4-25
Practices for Lesson 5: Managing ZFS File Systems ...................................................................................5-1
Practices for Lesson 5....................................................................................................................................5-3
Practice 5-1: Creating, Renaming, and Destroying ZFS File Systems ...........................................................5-4
Practice 5-2: Working with ZFS Properties ....................................................................................................5-6
Practice 5-3: Demonstrating ZFS Property Inheritance ..................................................................................5-10
Practice 5-4: Mounting ZFS File Systems ......................................................................................................5-12
Practice 5-5: Sharing ZFS File Systems ........................................................................................................5-15
Practice 5-6: Working with ZFS Quotas and Reservations ............................................................................5-17
Practices for Lesson 6: Working with ZFS Snapshots and Clones .............................................................6-1
Practices for Lesson 6....................................................................................................................................6-3
Practice 6-1: Creating, Holding, and Destroying ZFS Snapshots ...................................................................6-4
Practice 6-2: Working with ZFS Snapshots ....................................................................................................6-7
Practice 6-3: Working with ZFS Clones..........................................................................................................6-12
Practices for Lesson 7: Installing and Booting a ZFS Root File System ....................................................7-1
Practices for Lesson 7....................................................................................................................................7-3
Practice 7-1: Migrating a UFS Root File System to a ZFS Root File System .................................................7-4
Practice 7-2: Booting an Alternate ZFS Root File System ..............................................................................7-11
Practice 7-3: Creating a Mirrored ZFS Root Pool ...........................................................................................7-13
Practice 7-4: Performing Root Pool Recovery ................................................................................................7-17
Practices for Lesson 8: ZFS Troubleshooting and Data Recovery ..............................................................8-1
Practices for Lesson 8....................................................................................................................................8-3
Practice 8-1: Creating ZFS Pools and File Systems ......................................................................................8-4
Practice 8-2: Configuring syslog to Send FMD Messages to a File................................................................8-6
Practice 8-3: Working with a Disk Error in a Mirrored Pool ............................................................................8-7
Practice 8-4: Working with a Disk Error in a RAID-Z Pool ..............................................................................8-11
Preface
Profile
Before You Begin This Course
Before you begin this course, you should be able to:
Related Publications
Oracle Publications
Title
Part Number
819-5461
Additional Publications
Read-me files
Oracle Magazine
Chapter 1
Chapter 2
Chapter 3
Chapter 4
Preparations
To complete this exercise, you must already have a general understanding of disk devices and
file systems used on Solaris systems and you must already be familiar with the format utility.
Some of the ZFS file system space accounting might be different from the student guide
examples. It is important to wait for each command to complete after writing large files to review
reported file system space.
The disk storage used in this and subsequent exercises is provided by a Fibre Channel array.
Because of this, the device names of the LUNs the array provides include world-wide names
(WWNs) in the target portion of the name. For example:
/dev/rdsk/c0t226000C0FFA7C140d0s2
Use the luxadm probe command to display the list of these devices on your system. For
example:
# luxadm probe
No Network Array enclosures found in /dev/es
Found Fibre Channel device(s):
Node WWN:206000c0ff07c140 Device Type:Disk device
Logical Path:/dev/rdsk/c0t226000C0FFA7C140d0s2
Node WWN:206000c0ff07c140 Device Type:Disk device
Logical Path:/dev/rdsk/c0t226000C0FFA7C140d1s2
Node WWN:206000c0ff07c140 Device Type:Disk device
Logical Path:/dev/rdsk/c0t226000C0FFA7C140d2s2
...
# luxadm probe | grep d0s2
Logical Path:/dev/rdsk/c0t226000C0FFA7C140d0s2
Logical Path:/dev/rdsk/c0t256000C0FFD7C140d0s2
Logical Path:/dev/rdsk/c1t216000C0FF87C140d0s2
Logical Path:/dev/rdsk/c1t266000C0FFF7C140d0s2
LUNs labeled d0 through d15 are approximately 8 GB in size. LUNs labeled d16 through
d30 are all 9 GB in size.
You will use only a subset of the LUNs listed, as directed by your instructor. It is important that
you use only the LUNs you are assigned. All LUNs attached to the Fibre Channel network are
seen by all student systems, so it is important to avoid using LUNs that are in use on another
system.
To make it easier to know what LUNs to use, a script called make_disk_list in
/opt/ses/lab/zfs generates a list of the LUNS that are assigned to you.
After your instructor indicates what LUN numbers to use, and which group of LUNs they should
come from, run make_disk_list. When prompted, enter your LUN numbers, separated by
spaces. When prompted, enter a 1 or 2 to select the first or second group of 32 LUNs per
controller. The output is saved in a file named /opt/ses/lab/zfs/my_disks.
# cd /opt/ses/lab/zfs
# ./make_disk_list
Look for LUN number(s): 3 4 5 19 20 21
First or second LUN groups? (1 or 2): 1
Your assigned disks:
c1t226000C0FFA7C140d3
c2t216000C0FF87C140d3
c1t226000C0FFA7C140d4
c2t216000C0FF87C140d4
c1t226000C0FFA7C140d5
c2t216000C0FF87C140d5
c1t226000C0FFA7C140d19
c2t216000C0FF87C140d19
c1t226000C0FFA7C140d20
c2t216000C0FF87C140d20
c1t226000C0FFA7C140d21
c2t216000C0FF87C140d21
If zpool commands return error messages indicating disks are in use by other pools, evaluate
the list of disks you specified. Be certain to use only those disks that have been assigned to
you. Except where directed in task steps to do so, do not use the -f option to override these
errors. For example:
# zpool create firstpool c2t226000C0FFA7C140d0
invalid vdev specification
use -f to override the following errors:
/dev/dsk/c2t226000C0FFA7C140d0s0 is part of potentially active
pool firstpool
Tasks
1.
Use the zpool command to display the list of ZFS pools. Verify that no ZFS pool currently
exists.
# zpool list
no pools available
2.
Use the zfs command to display the list of ZFS file systems. Verify that no ZFS file system
currently exists.
# zfs list
no datasets available
3.
4.
Choose two of the 8-GB disks assigned to you that are attached to different controllers, and
use them to create a new ZFS mirrored pool called mirpool.
# zpool create mirpool mirror c1t226000C0FFA001ABd3
c2t216000C0FF8001ABd3
5.
Use the zpool command to verify that the new pool exists.
# zpool list
NAME      SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
mirpool  7.75G  76.5K  7.75G   0%  ONLINE  -
6.
7.
Use the zpool status command with the -x option to display the status of mirpool.
Run the command again without the -x option. Is mirpool healthy?
# zpool status -x
all pools are healthy
# zpool status
pool: mirpool
state: ONLINE
scrub: none requested
config:
NAME                       STATE   READ WRITE CKSUM
mirpool                    ONLINE     0     0     0
  mirror-0                 ONLINE     0     0     0
    c1t226000C0FFA001ABd3  ONLINE     0     0     0
    c2t216000C0FF8001ABd3  ONLINE     0     0     0
8.
9.
The mirpool pool should be healthy. All devices should be in the ONLINE state. The
zpool status command should report that there are no known data errors, and with
the -x option, it should report that all pools are healthy.
From the output of commands in the previous step, make note of the disks that the
mirpool pool uses.
Use the df -h / command to identify the disk of the UFS root file system on your system
and make a note of it.
# df -h /
Filesystem          size  used  avail  capacity  Mounted on
/dev/dsk/c0t0d0s0    67G  4.4G    62G        7%  /
10. Using the same disk that is in use by the UFS root file system /, attempt to create a pool
called newpool. What happens?
# zpool create newpool c0t0d0
invalid vdev specification
use -f to override the following errors:
/dev/dsk/c0t0d0s0 is currently mounted on /. Please see
umount(1M).
/dev/dsk/c0t0d0s1 is currently used by swap. Please see
swap(1M).
Tasks
1.
Use the zpool command to display the status of mirpool. Verify that the pool and its
devices are in the ONLINE state.
# zpool status
pool: mirpool
state: ONLINE
scrub: none requested
config:
NAME                       STATE   READ WRITE CKSUM
mirpool                    ONLINE     0     0     0
  mirror-0                 ONLINE     0     0     0
    c1t226000C0FFA001ABd3  ONLINE     0     0     0
    c2t216000C0FF8001ABd3  ONLINE     0     0     0
Use the zpool command to display the size and space utilization for mirpool. What is the
reported size?
# zpool list
NAME      SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
mirpool  7.75G    78K  7.75G   0%  ONLINE  -
Identify the size of mirpool. In this example, the reported size should be 7.75 GB.
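The size arithmetic can be sketched as a small helper. This is a rough model, not a ZFS tool: the GB figures are assumptions, and a real pool reports slightly less (7.75 GB for an 8-GB disk) because of metadata overhead. The usable size of a pool built from mirrors is the sum of each mirror's smallest member disk:

```shell
# Rough model of zpool sizing for mirrored pools (not a real ZFS utility).
# Each argument is one mirror vdev, written as colon-separated member
# sizes in GB; the pool's usable size is the sum of each vdev's minimum.
pool_size() {
  total=0
  for m in "$@"; do
    min=""
    for s in $(echo "$m" | tr ':' ' '); do
      if [ -z "$min" ] || [ "$s" -lt "$min" ]; then
        min=$s
      fi
    done
    total=$((total + min))
  done
  echo "$total"
}

pool_size 8:8        # one two-way mirror of 8-GB disks -> 8
pool_size 8:8 8:8    # after a second mirror is added -> 16
```

This matches the practice: one mirror of two 8-GB LUNs reports roughly 8 GB, and adding a second mirror (as in the next step) doubles the pool size.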
3.
4.
Add a new mirror to mirpool that uses the two 8-GB disks you identified in the previous
step.
# zpool add mirpool mirror c1t226000C0FFA001ABd4
c2t216000C0FF8001ABd4
5.
Use the zpool command to display the size and space utilization for mirpool. What is the
new reported size, and how does it compare to the previous size of mirpool?
# zpool list
NAME      SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
mirpool  15.5G    81K  15.5G   0%  ONLINE  -
6.
7.
Display the list of your disks from /opt/ses/lab/zfs/my_disks. Identify a pair of 8-GB
disks assigned to you to add as spare disks to mirpool. Choose two disks that are
accessed through different controllers.
# cat /opt/ses/lab/zfs/my_disks
8.
Add the 8-GB disks identified in the previous step as spares to mirpool.
# zpool add mirpool spare c1t226000C0FFA001ABd5
c2t216000C0FF8001ABd5
9.
Use the zpool command to display the status of mirpool. What is the status of the newly added spares?
# zpool status mirpool
pool: mirpool
state: ONLINE
scrub: none requested
config:
NAME                       STATE
mirpool                    ONLINE
  mirror-0                 ONLINE
    c1t226000C0FFA001ABd3  ONLINE
    c2t216000C0FF8001ABd3  ONLINE
  mirror-1                 ONLINE
    c1t226000C0FFA001ABd4  ONLINE
    c2t216000C0FF8001ABd4  ONLINE
spares
  c1t226000C0FFA001ABd5    AVAIL
  c2t216000C0FF8001ABd5    AVAIL
The zpool iostat -v command reports 7.75 GB available on each mirror device.
The spares do not show capacity until they are used.
11. Create a 2-GB file called /mirpool/file_2g.
# mkfile 2g /mirpool/file_2g
12. Display the I/O statistics and capacity for the virtual devices in mirpool. Has the data for
file_2g been distributed between the two mirror devices? If so, how has it been
distributed?
# zpool iostat -v mirpool
                             capacity     operations    bandwidth
pool                       alloc  avail   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----
mirpool                    2.00G  13.5G      0     30     16  3.51M
  mirror                   1.00G  6.75G      0     15     16  1.76M
    c1t226000C0FFA001ABd3      -      -      0     15    350  1.76M
    c2t216000C0FF8001ABd3      -      -      0     15    238  1.76M
  mirror                   1.00G  6.75G      0     24      0  2.85M
    c1t226000C0FFA001ABd4      -      -      0     24    387  2.86M
    c2t216000C0FF8001ABd4      -      -      0     24    387  2.86M
-------------------------  -----  -----  -----  -----  -----  -----
The data for file_2g has been distributed evenly between the two mirror devices. 1
GB of data has been placed on each mirror.
13. Use zpool to list the capacity summary for mirpool.
FREE  13.5G
Tasks
1.
2.
Use the zpool command to display the size and space utilization for mirpool. Make note
of the current size.
# zpool list
NAME      SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
mirpool  15.5G  2.00G  13.5G  12%  ONLINE  -
3.
Use the zpool status command to display the status of mirpool and identify the disk
components of the first 8-GB mirror device.
# zpool status mirpool
pool: mirpool
state: ONLINE
scrub: none requested
config:
NAME                       STATE
mirpool                    ONLINE
  mirror-0                 ONLINE
    c1t226000C0FFA001ABd3  ONLINE
    c2t216000C0FF8001ABd3  ONLINE
  mirror-1                 ONLINE
    c1t226000C0FFA001ABd4  ONLINE
    c2t216000C0FF8001ABd4  ONLINE
spares
  c1t226000C0FFA001ABd5    AVAIL
  c2t216000C0FF8001ABd5    AVAIL
Use the zpool command to attach the unused 9-GB disk you identified in step 1 to the first
8-GB mirror.
# zpool attach mirpool c2t216000C0FF8001ABd3
c1t226000C0FFA001ABd19
5.
Display the status of mirpool. What resilvering activity does zpool report?
# zpool status mirpool
pool: mirpool
Copyright 2010, Oracle and/or its affiliates. All rights reserved.
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 0h0m, 77.42% done, 0h0m to go
config:
NAME                        STATE   READ WRITE CKSUM
mirpool                     ONLINE     0     0     0
  mirror-0                  ONLINE     0     0     0
    c1t226000C0FFA001ABd3   ONLINE     0     0     0
    c2t216000C0FF8001ABd3   ONLINE     0     0     0
    c1t226000C0FFA001ABd19  ONLINE     0     0     0  785M resilvered
  mirror-1                  ONLINE     0     0     0
    c1t226000C0FFA001ABd4   ONLINE     0     0     0
    c2t216000C0FF8001ABd4   ONLINE     0     0     0
spares
  c1t226000C0FFA001ABd5     AVAIL
  c2t216000C0FF8001ABd5     AVAIL
Messages vary depending on when you run the zpool status command during the
resilvering process. As resilvering proceeds, the command reports how far along the
process is, and gives advice about actions to take. Once complete, messages indicate
that the process was successful, and when it finished.
Note: After the resilver process reports 100% complete, it may take a few seconds for
the status line to clear.
6.
Use the zpool command to display the size and space utilization for mirpool. Does the
size reported differ from the size listed in step 2? If not, why not?
# zpool list mirpool
NAME      SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
mirpool  15.5G  2.00G  13.5G  12%  ONLINE  -
No, the size reported is still 15.5 GB. The 9-GB drive makes the first mirror device a
three-way mirror, but does not increase its capacity.
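The capacity rule at work here can be sketched as follows (a hedged model, not ZFS code; the sizes in GB are assumptions): a mirror's usable space is always that of its smallest member, no matter how many replicas are attached.

```shell
# Usable size of a single mirror vdev = size of its smallest member disk.
# Extra or larger replicas add redundancy, not capacity.
mirror_usable() {
  min=$1
  shift
  for s in "$@"; do
    if [ "$s" -lt "$min" ]; then
      min=$s
    fi
  done
  echo "$min"
}

mirror_usable 8 8      # two-way mirror of 8-GB disks -> 8
mirror_usable 8 8 9    # after attaching a 9-GB disk -> still 8
```

This is why attaching the 9-GB disk leaves the pool at 15.5 GB: the extra gigabyte on the larger disk goes unused.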
7.
Display the I/O statistics and capacity for the virtual devices in mirpool. Using the sum of
the allocated and free columns for each mirror device, compare the sizes of the two 8-GB
mirrors. What can you say about how much of the space might be used or wasted on the
new 9-GB disk?
# zpool iostat -v mirpool
                              capacity     operations     bandwidth
pool                        alloc   free   read  write   read  write
--------------------------  -----  -----  -----  -----  -----  -----
mirpool                     2.00G  13.5G      8     19  1.09M  2.19M
  mirror                    1.00G  6.75G      8     10  1.09M  1.09M
    c1t226000C0FFA001ABd3       -      -      5      9   746K  1.10M
    c2t216000C0FF8001ABd3       -      -      2      9   377K  1.10M
    c1t226000C0FFA001ABd19      -      -      0     56  1.75K  6.62M
  mirror                    1.00G  6.75G      0     12    234  1.44M
    c1t226000C0FFA001ABd4       -      -      0     12  5.31K  1.44M
    c2t216000C0FF8001ABd4       -      -      0     12  1.54K  1.44M
--------------------------  -----  -----  -----  -----  -----  -----

The 9-GB disk is part of the first 8-GB mirror. Because the disks are unequal
sizes, only 8 GB of that disk can be used. 1 GB is wasted space.

8.
Use the zpool command to detach the single 9-GB disk from the first 8-GB mirror in
mirpool.
# zpool detach mirpool c1t226000C0FFA001ABd19
9.
Display the I/O statistics and capacity for the virtual devices in mirpool. Have the
allocated and free values changed?
# zpool iostat -v mirpool
                             capacity     operations     bandwidth
pool                       alloc   free   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----
mirpool                    2.00G  13.5G      8     18  1.01M  2.02M
  mirror                   1.00G  6.75G      8      9  1.01M  1.01M
    c1t226000C0FFA001ABd3      -      -      5      9   690K  1.02M
    c2t216000C0FF8001ABd3      -      -      2      9   349K  1.02M
  mirror                   1.00G  6.75G      0     11    211  1.30M
    c1t226000C0FFA001ABd4      -      -      0     11  4.80K  1.30M
    c2t216000C0FF8001ABd4      -      -      0     11  1.40K  1.30M
-------------------------  -----  -----  -----  -----  -----  -----

No, the values have not changed.
10. Detach the first 8-GB disk in the first 8-GB mirror device listed in mirpool.
# zpool detach mirpool c1t226000C0FFA001ABd3
11. Display the status of mirpool. Are all of the devices and the pool itself in the ONLINE
state?
# zpool status mirpool
pool: mirpool
state: ONLINE
scrub: resilver completed after 0h0m with 0 errors on Wed Sep 8
10:39:29 2010
config:
NAME
mirpool
  c2t216000C0FF8001ABd3
  mirror-0
    c1t226000C0FFA001ABd4
    c2t216000C0FF8001ABd4
spares
  c1t226000C0FFA001ABd5    AVAIL
  c2t216000C0FF8001ABd5    AVAIL
NAME                       STATE     READ WRITE CKSUM
mirpool                    DEGRADED     0     0     0
  c2t216000C0FF8001ABd3    ONLINE       0     0     0
  mirror-0                 DEGRADED     0     0     0
    c1t226000C0FFA001ABd4  ONLINE       0     0     0
    c2t216000C0FF8001ABd4  OFFLINE      0     0     0
spares
  c1t226000C0FFA001ABd5    AVAIL
  c2t216000C0FF8001ABd5    AVAIL
18. Display the I/O statistics and capacity for the virtual devices in mirpool. Verify that the
used capacity has increased, and that the single 8-GB mirror virtual device now contains
1.59 GB of data.
# zpool iostat -v mirpool
                             capacity     operations     bandwidth
pool                       alloc   free   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----
mirpool                    2.59G  12.9G      2     18   280K  2.10M
  c2t216000C0FF8001ABd3    1.59G  6.16G      2     11   280K  1.29M
  mirror                   1.00G  6.75G      0      8    160  1008K
    c1t226000C0FFA001ABd4      -      -      0      8  3.77K  1012K
    c2t216000C0FF8001ABd4      -      -      0      8  1.06K  1012K
-------------------------  -----  -----  -----  -----  -----  -----
19. Bring the disk that is currently OFFLINE in mirpool back to the ONLINE state.
# zpool online mirpool c2t216000C0FF8001ABd4
20. Display the status of mirpool. Are all of the devices and the pool itself in the ONLINE
state? Did the resilver operation complete successfully?
# zpool status mirpool
  pool: mirpool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Wed Sep 8 11:11:35 2010
config:
NAME                       STATE   READ WRITE CKSUM
mirpool                    ONLINE     0     0     0
  c2t216000C0FF8001ABd3    ONLINE     0     0     0
  mirror-1                 ONLINE     0     0     0
    c1t226000C0FFA001ABd4  ONLINE     0     0     0
    c2t216000C0FF8001ABd4  ONLINE     0     0     0  59.5K resilvered
spares
  c1t226000C0FFA001ABd5    AVAIL
  c2t216000C0FF8001ABd5    AVAIL
All devices and mirpool itself are in the ONLINE state, and the resilver operation
completed successfully.
21. Use the zpool history command to identify the disk that you detached from the first
mirror device in mirpool.
# zpool history
.
.
.
2010-08-10.10:43:48 zpool detach mirpool c1t226000C0FFA001ABd3
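When the history is long, the detached device can be pulled out with a text filter. The history line below is illustrative, copied from the format shown above; in practice you would pipe zpool history itself through the same filter.

```shell
# Extract the device name (the last field) from a "zpool detach" history
# line. The sample line is hypothetical, matching the format shown above.
line='2010-08-10.10:43:48 zpool detach mirpool c1t226000C0FFA001ABd3'
echo "$line" | awk '/zpool detach/ { print $NF }'   # prints the disk name
```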
22. Attach the disk you identified in the previous step to the single 8-GB disk in mirpool.
# zpool attach mirpool c2t216000C0FF8001ABd3
c1t226000C0FFA001ABd3
23. Monitor the status of mirpool as the resilver process proceeds and verify that it completes
successfully.
# zpool status -v mirpool
  pool: mirpool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Wed Sep 8 11:16:21 2010
config:

NAME                       STATE   READ WRITE CKSUM
mirpool                    ONLINE     0     0     0
  mirror-0                 ONLINE     0     0     0
    c2t216000C0FF8001ABd3  ONLINE     0     0     0
    c1t226000C0FFA001ABd3  ONLINE     0     0     0  1.59G resilvered
  mirror-1                 ONLINE     0     0     0
    c1t226000C0FFA001ABd4  ONLINE     0     0     0
    c2t216000C0FF8001ABd4  ONLINE     0     0     0
spares
  c1t226000C0FFA001ABd5    AVAIL
  c2t216000C0FF8001ABd5    AVAIL
Tasks
1.
Use the zpool list command to verify that mirpool is the only pool that exists on your
system.
# zpool list
NAME      SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
mirpool  15.5G  2.59G  12.9G  16%  ONLINE  -
2.
3.
4.
Change directory to /.
# cd /
5.
6.
Destroy mirpool. Use the zpool list command to verify that mirpool no longer
exists.
# zpool destroy mirpool
# zpool list
no pools available
7.
8.
9.
If more than one pool with the same name exists in the list of pools that can be imported,
then import the pool by the GUID that was identified above.
Confirm that mirpool is restored.
# zpool list
NAME      SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
mirpool  15.5G  2.59G  12.9G  16%  ONLINE  -
Destroy mirpool again. Use the zpool list command to verify that mirpool no longer
exists.
# zpool destroy mirpool
# zpool list
no pools available
Tasks
1.
Display the list of your disks from /opt/ses/lab/zfs/my_disks. Identify six of the 8-GB
disks assigned to you.
# cat /opt/ses/lab/zfs/my_disks
2.
Create a new pool called rzpool that contains two RAID-Z devices of three disks each.
# zpool create rzpool raidz c1t226000C0FFA001ABd3
c1t226000C0FFA001ABd4 c1t226000C0FFA001ABd5 raidz
c2t216000C0FF8001ABd3 c2t216000C0FF8001ABd4
c2t216000C0FF8001ABd5
3.
Use the zpool list command to display the size and space utilization for rzpool. What
is the reported size, and how much space has been used?
# zpool list
NAME     SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
rzpool  46.8G   147K  46.7G   0%  ONLINE  -
4.
5.
6.
7.
If you used six 8-GB disks for rzpool, as in this example, the available pool capacity is
approximately 46 GB.
Use the zfs list command to identify the disk space that is available to ZFS file
systems.
# zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
rzpool  91.9K  30.6G  28.0K  /rzpool
The pool space that is available to ZFS file systems is decreased due to the space that
is consumed by the two RAID-Z devices for parity.
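The parity cost can be estimated with simple arithmetic. This is a back-of-envelope model that ignores ZFS metadata and allocation overhead, and the disk sizes in GB are assumptions: a RAID-Z vdev of N disks with P parity disks offers roughly (N - P) disks' worth of space.

```shell
# Approximate usable space of one RAID-Z vdev, ignoring metadata overhead.
# $1 = number of disks, $2 = size of each disk in GB, $3 = parity level.
raidz_usable() {
  echo $(( ($1 - $3) * $2 ))
}

raidz_usable 3 8 1    # 3-disk raidz1 of 8-GB disks -> 16 GB usable
raidz_usable 6 8 2    # 6-disk raidz2 of 8-GB disks -> 32 GB usable
```

Two 3-disk raidz1 vdevs therefore give roughly 32 GB of file system space, consistent with the 30.6 GB that zfs list reports after overhead.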
Create a 1-GB file called /rzpool/file_1g.
# mkfile 1g /rzpool/file_1g
Use the zpool list command to display the size and space utilization for rzpool. How
much space has been used to store the 1-GB file you just created?
# zpool list
NAME     SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
rzpool  46.8G  1.50G  45.2G   3%  ONLINE  -

1.5 GB of space has been used to store the 1-GB file.
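The 1.5:1 ratio follows from the vdev geometry and can be sketched as below. This is an approximation that ignores metadata; the file sizes and disk counts are taken from this practice. Data plus parity consumes about N/(N - P) bytes per byte written to an N-disk RAID-Z vdev with P parity disks.

```shell
# Approximate pool space consumed by a file on RAID-Z, including parity.
# $1 = file size in GB, $2 = disks per vdev, $3 = parity disks per vdev.
raidz_consumed() {
  awk -v f="$1" -v n="$2" -v p="$3" 'BEGIN { printf "%.1f\n", f * n / (n - p) }'
}

raidz_consumed 1 3 1    # 1-GB file on 3-disk raidz1 -> 1.5 (GB)
raidz_consumed 2 6 2    # 2-GB file on 6-disk raidz2 -> 3.0 (GB)
```

The second call anticipates the RAID-Z2 pool built later in this practice, where a 2-GB file consumes about 3.0 GB of pool space.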
Display the I/O statistics and capacity for the virtual devices in rzpool.
# zpool iostat -v rzpool
                             capacity     operations     bandwidth
pool                       alloc   free   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----
rzpool                     1.50G  45.2G      0     44     48  5.16M
  raidz1                    768M  22.6G      0     22     18  2.58M
    c1t226000C0FFA001ABd3      -      -      0     11  1.01K  1.30M
    c1t226000C0FFA001ABd4      -      -      0     11  1.01K  1.30M
    c1t226000C0FFA001ABd5      -      -      0     11  1.01K  1.30M
  raidz1                    769M  22.6G      0     22     30  2.58M
    c2t216000C0FF8001ABd3      -      -      0     11  1.01K  1.30M
    c2t216000C0FF8001ABd4      -      -      0     11  1.01K  1.30M
    c2t216000C0FF8001ABd5      -      -      0     11  1.01K  1.30M
-------------------------  -----  -----  -----  -----  -----  -----

The data is written across each RAID-Z device.

8.
Use the zpool command to destroy the rzpool. Confirm that the pool is destroyed.
# zpool destroy rzpool
# zpool list
no pools available
9.
Display the list of your disks from /opt/ses/lab/zfs/my_disks. Identify six 8-GB disks
assigned to you to create a RAID-Z2 pool with 1 RAID-Z2 device of six disks and two disks
designated as spares.
# cat /opt/ses/lab/zfs/my_disks
10. Use the disks you identified in the previous step to create the RAID-Z2 pool called rzpool
with one RAIDZ2 device of six 8-GB disks and two 9-GB spare disks.
# zpool create rzpool raidz2 c1t226000C0FFA001ABd3
c1t226000C0FFA001ABd4 c1t226000C0FFA001ABd5
c2t216000C0FF8001ABd3 c2t216000C0FF8001ABd4
c2t216000C0FF8001ABd5 spare c1t226000C0FFA001ABd19
c2t216000C0FF8001ABd19
NAME                       STATE
rzpool                     ONLINE
  raidz2-0                 ONLINE
    c1t226000C0FFA001ABd3  ONLINE
    c1t226000C0FFA001ABd4  ONLINE
    c1t226000C0FFA001ABd5  ONLINE
    c2t216000C0FF8001ABd3  ONLINE
    c2t216000C0FF8001ABd4  ONLINE
    c2t216000C0FF8001ABd5  ONLINE
spares
  c1t226000C0FFA001ABd19   AVAIL
  c2t216000C0FF8001ABd19   AVAIL
15. Display the I/O statistics and capacity for the top-level virtual devices in rzpool. How much
space has been used to store the 2-GB file you just made?
# zpool iostat -v rzpool
                             capacity     operations     bandwidth
pool                       alloc   free   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----
rzpool                     3.00G  43.7G      0     69     38  8.10M
  raidz2                   3.00G  43.7G      0     69     38  8.10M
    c1t226000C0FFA001ABd3      -      -      0     18    550  2.03M
    c1t226000C0FFA001ABd4      -      -      0     18    550  2.03M
    c1t226000C0FFA001ABd5      -      -      0     18    809  2.03M
    c2t216000C0FF8001ABd3      -      -      0     18    809  2.03M
    c2t216000C0FF8001ABd4      -      -      0     18    550  2.03M
    c2t216000C0FF8001ABd5      -      -      0     18    809  2.03M
-------------------------  -----  -----  -----  -----  -----  -----

A total of 3.0 GB of space has been used to store the new 2-GB file.
Tasks
1.
Display the list of your disks from /opt/ses/lab/zfs/my_disks. Identify a single 9-GB
disk assigned to you that is currently not in use.
# cat /opt/ses/lab/zfs/my_disks
2.
Attempt to attach the 9-GB disk that you identified in the previous step to the RAID-Z2
device in rzpool. What happens and why?
# zpool attach rzpool c2t216000C0FF8001ABd5 c1t226000C0FFA001ABd20
cannot attach c1t226000C0FFA001ABd20 to c2t216000C0FF8001ABd5: can only
attach to mirrors and top-level disks
The attempt fails because the attach operation is not applicable to RAID-Z devices. The
same is true of a zpool detach operation.
3.
Display the status of rzpool. Verify that all devices and the pool are online.
# zpool status
  pool: rzpool
 state: ONLINE
 scrub: none requested
config:
NAME                       STATE
rzpool                     ONLINE
  raidz2-0                 ONLINE
    c1t226000C0FFA001ABd3  ONLINE
    c1t226000C0FFA001ABd4  ONLINE
    c1t226000C0FFA001ABd5  ONLINE
    c2t216000C0FF8001ABd3  ONLINE
    c2t216000C0FF8001ABd4  ONLINE
    c2t216000C0FF8001ABd5  ONLINE
spares
  c1t226000C0FFA001ABd19   AVAIL
  c2t216000C0FF8001ABd19   AVAIL
4.
Use the zpool command to take the sixth disk in the RAID-Z2 device in rzpool offline.
# zpool offline rzpool c2t216000C0FF8001ABd5
5.
Display the status of rzpool. What state is the pool in and why?
# zpool status
pool: rzpool
state: DEGRADED
status: One or more devices has been taken offline by the
administrator. Sufficient replicas exist for the pool
to continue functioning in a degraded state.
action: Online the device using 'zpool online' or replace the
device with 'zpool replace'.
scrub: none requested
config:
NAME                       STATE     READ WRITE CKSUM
rzpool                     DEGRADED     0     0     0
  raidz2-0                 DEGRADED     0     0     0
    c1t226000C0FFA001ABd3  ONLINE       0     0     0
    c1t226000C0FFA001ABd4  ONLINE       0     0     0
    c1t226000C0FFA001ABd5  ONLINE       0     0     0
    c2t216000C0FF8001ABd3  ONLINE       0     0     0
    c2t216000C0FF8001ABd4  ONLINE       0     0     0
    c2t216000C0FF8001ABd5  OFFLINE      0     0     0
spares
  c1t226000C0FFA001ABd19   AVAIL
  c2t216000C0FF8001ABd19   AVAIL
6.
7.
The pool is in a degraded state because one of the devices has been taken offline;
however, because sufficient replicas exist, the pool will continue to function in a
degraded state.
Use the zpool command to bring the off-line disk back online.
# zpool online rzpool c2t216000C0FF8001ABd5
Display the status of rzpool, and monitor the resilvering process until it completes. Verify
that the pool and all devices are in the ONLINE state.
# zpool status rzpool
pool: rzpool
state: ONLINE
scrub: resilver completed after 0h0m with 0 errors on Thu Sep 9 14:30:37 2010
config:
NAME                       STATE   READ WRITE CKSUM
rzpool                     ONLINE     0     0     0
  raidz2-0                 ONLINE     0     0     0
    c1t226000C0FFA001ABd3  ONLINE     0     0     0
    c1t226000C0FFA001ABd4  ONLINE     0     0     0
    c1t226000C0FFA001ABd5  ONLINE     0     0     0
    c2t216000C0FF8001ABd3  ONLINE     0     0     0
    c2t216000C0FF8001ABd4  ONLINE     0     0     0
    c2t216000C0FF8001ABd5  ONLINE     0     0     0  6.50K resilvered
spares
  c1t226000C0FFA001ABd19   AVAIL
  c2t216000C0FF8001ABd19   AVAIL
8.
Tasks
1.
Create a new pool called mypool that contains one 8-GB disk from your list of disks found
in /opt/ses/lab/zfs/my_disks.
# cat /opt/ses/lab/zfs/my_disks
# zpool create mypool c1t226000C0FFA001ABd0
2.
Use the zpool list command to verify that the new pool exists.
# zpool list
NAME     SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
mypool  7.75G  76.5K  7.75G   0%  ONLINE  -
Make note of the size of the pool.
3.
Display the properties for mypool and make note of the autoexpand property setting.
# zpool get all mypool
NAME    PROPERTY       VALUE                SOURCE
mypool  size           7.75G                -
mypool  capacity       0%                   -
mypool  altroot        -                    default
mypool  health         ONLINE               -
mypool  guid           4117010077574798133  default
mypool  version        22                   default
mypool  bootfs         -                    default
mypool  delegation     on                   default
mypool  autoreplace    off                  default
mypool  cachefile      -                    default
mypool  failmode       wait                 default
mypool  listsnapshots  on                   default
mypool  autoexpand     off                  default
mypool  free           7.75G                -
mypool  allocated      95.5K                -
4.
5.
Run the zpool list command again. Do you notice any change in the size of the pool? If
not, why?
# zpool list
Copyright 2010, Oracle and/or its affiliates. All rights reserved.
NAME     SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
mypool  7.75G    87K  7.75G   0%  ONLINE  -
You see no change in the size of the pool because you have not yet enabled the
autoexpand property.
6.
Set the autoexpand property to on and then rerun the zpool list command. Has there
been a change in the size of the pool?
# zpool set autoexpand=on mypool
# zpool list
NAME     SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
mypool  8.75G   117K  8.75G   0%  ONLINE  -
The 1-GB increase is now reflected in the zpool list output.
7.
Destroy mypool, and then use the zpool list command to verify that the pool no longer
exists.
# zpool destroy mypool
# zpool list
no pools available
Chapter 5
Preparation
If the systems used for this exercise have been re-initialized, it will be necessary to run the
make_disk_list script in /opt/ses/lab/zfs. Your instructor will indicate if this is
necessary, and will assign disks to you before you run the script.
Tasks
1.
Review the list of your disks from /opt/ses/lab/zfs/my_disks. Identify six 8-GB disks
assigned to you.
# cat /opt/ses/lab/zfs/my_disks
2.
Use the six 8-GB disks you identified in the previous step to create a pool that contains
three mirror devices. Name the pool mirpool.
# zpool create mirpool mirror c1t226000C0FFA001ABd3
c2t216000C0FF8001ABd3 mirror c1t226000C0FFA001ABd4
c2t216000C0FF8001ABd4 mirror c1t226000C0FFA001ABd5
c2t216000C0FF8001ABd5
3.
Verify that mirpool and all of its devices are in the ONLINE state.
# zpool status
pool: mirpool
state: ONLINE
scrub: none requested
config:
NAME                       STATE
mirpool                    ONLINE
  mirror-0                 ONLINE
    c1t226000C0FFA001ABd3  ONLINE
    c2t216000C0FF8001ABd3  ONLINE
  mirror-1                 ONLINE
    c1t226000C0FFA001ABd4  ONLINE
    c2t216000C0FF8001ABd4  ONLINE
  mirror-2                 ONLINE
    c1t226000C0FFA001ABd5  ONLINE
    c2t216000C0FF8001ABd5  ONLINE
4.
5.
Create a file system named mirpool/users with a mount point of /users. Confirm that
the file system is created.
# zfs create -o mountpoint=/users mirpool/users
# zfs list -r mirpool
NAME           USED  AVAIL  REFER  MOUNTPOINT
mirpool        120K  22.9G    21K  /mirpool
mirpool/users   21K  22.9G    21K  /users
6.
Create an intermediate file system and descendent file systems named
mirpool/users/it/admin1 and mirpool/users/it/admin2. Confirm that the file
systems are created.
# zfs create -p mirpool/users/it/admin1
# zfs create mirpool/users/it/admin2
# zfs list -r mirpool
NAME                     USED  AVAIL  REFER  MOUNTPOINT
mirpool                  204K  22.9G    21K  /mirpool
mirpool/users             88K  22.9G    23K  /users
mirpool/users/it          65K  22.9G    23K  /users/it
mirpool/users/it/admin1   21K  22.9G    21K  /users/it/admin1
mirpool/users/it/admin2   21K  22.9G    21K  /users/it/admin2
7.
Rename the mirpool/users/it file system to mirpool/users/staff. Confirm that
the file system name has changed.
# zfs rename mirpool/users/it mirpool/users/staff
# zfs list -r mirpool
NAME                        USED  AVAIL  REFER  MOUNTPOINT
mirpool                     206K  22.9G    21K  /mirpool
mirpool/users                89K  22.9G    23K  /users
mirpool/users/staff          66K  22.9G    24K  /users/staff
mirpool/users/staff/admin1   21K  22.9G    21K  /users/staff/admin1
mirpool/users/staff/admin2   21K  22.9G    21K  /users/staff/admin2
8.
Attempt to destroy the mirpool/users/staff file system. What happens and why?
# zfs destroy mirpool/users/staff
cannot destroy mirpool/users/staff: filesystem has children
use -r to destroy the following datasets:
mirpool/users/staff/admin1
mirpool/users/staff/admin2
The descendent file systems prevent the staff file system from being destroyed.
9.
Destroy the mirpool/users/staff file system and descendent file systems.
# zfs destroy -r mirpool/users/staff
Care should be taken when destroying file systems. No feature currently exists to
recover a destroyed file system other than restoring from a snapshot or a backup copy.
10. Destroy the mirpool/users file system. Confirm that the file systems are destroyed.
# zfs destroy mirpool/users
# zfs list -r mirpool
NAME     USED  AVAIL  REFER  MOUNTPOINT
mirpool   84K  22.9G    21K  /mirpool
Tasks
1.
Use the zfs get command to display all of the properties for the mirpool file system. Do
all the properties use the same source? If so, which source is it, and why?
# zfs get all mirpool
NAME     PROPERTY       VALUE                  SOURCE
mirpool  type           filesystem             -
mirpool  creation       Fri Sep 17 15:00 2010  -
mirpool  used           84K                    -
mirpool  available      22.9G                  -
mirpool  referenced     21K                    -
mirpool  compressratio  1.00x                  -
mirpool  mounted        yes                    -
mirpool  quota          none                   default
<output omitted>
All of the settable properties have the same source, default, because none of the
properties have been set manually or inherited.
2.
Create a new UFS directory called /class. Confirm that the new directory is created.
# mkdir /class
# ls /class
3.
Use the tar command to create an archive of the /usr/lib directory, and save the
archive as /class/archive.tar. Use the -k option to limit the size of this archive to 820
MB. You will use this file to demonstrate how the compression property functions in ZFS file
systems.
# tar cfk /class/archive.tar 839680 /usr/lib
tar: please insert new volume, then press RETURN. (Enter Control-C)
Note: The tar command prompts you to press RETURN when it has created an archive
of the size (in kilobytes) you specified. Enter Control-C in response to this prompt.
Doing so prevents tar from overwriting archive.tar with the next set of files that
exceed the limit you specified.
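The 839680 argument used with tar above is the -k size limit expressed in kilobytes; a quick arithmetic check that it corresponds to the 820-MB target:

```shell
# tar's -k option takes the archive size limit in kilobytes,
# so an 820 MB limit is 820 * 1024 KB.
limit_kb=$(( 820 * 1024 ))
echo "$limit_kb"    # 839680, the value passed to tar cfk
```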
4.
Use the zfs list command to list the space currently used by mirpool. Make note of
the value indicated.
# zfs list mirpool
NAME     USED  AVAIL  REFER  MOUNTPOINT
mirpool  204K  22.9G    21K  /mirpool
5.
Create a directory named /mirpool/cmp to hold the files that you will copy.
# mkdir /mirpool/cmp
6.
Use ls with the -lh options to list the size of /class/archive.tar. Make note of the
size displayed.
# ls -lh /class/archive.tar
-rw-r--r-- 1 root root 820M Sep 17 15:15 /class/archive.tar
7.
Use the zfs get command to display the current settings of the compression and
compressratio properties for mirpool. Verify that compression is off, and the
compression ratio is 1.00x.
# zfs get compression,compressratio mirpool
NAME     PROPERTY       VALUE  SOURCE
mirpool  compression    off    default
mirpool  compressratio  1.00x  -
8.
9.
Use the zfs list command to list the space used by mirpool. Does the space used
match the size of /mirpool/cmp/archive1.tar?
# zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
mirpool  821M  22.1G   821M  /mirpool
11. Set the compression property for mirpool to gzip, and verify that the new value is set.
# zfs set compression=gzip mirpool
# zfs get compression mirpool
NAME     PROPERTY     VALUE  SOURCE
mirpool  compression  gzip   local
Yes, the archive1.tar and archive2.tar files are the same size.
13. Use the zfs list command to list the space used by mirpool. Does the space used
match the sum of the size of the two files in /mirpool/cmp?
# zfs list
NAME
USED AVAIL REFER MOUNTPOINT
mirpool
No, zfs list reports 1.14 GB used, where the sum of the size of the two files is
approximately 1.64 GB.
14. Use the zfs get command to display the current setting of the compressratio property
for mirpool. What is the current compression ratio?
# zfs get compressratio mirpool
NAME     PROPERTY       VALUE  SOURCE
mirpool  compressratio  1.40x  -
The compression ratio is currently 1.40x in this example.
15. Copy /class/archive.tar to /mirpool/cmp/archive3.tar, and list all files in
/mirpool/cmp to display their sizes. Are the files in /mirpool/cmp the same size?
# cp /class/archive.tar /mirpool/cmp/archive3.tar
# ls -lh /mirpool/cmp/*
-rw-r--r--   1 root  root  820M Sep 17 15:22 /mirpool/cmp/archive1.tar
-rw-r--r--   1 root  root  820M Sep 17 15:25 /mirpool/cmp/archive2.tar
-rw-r--r--   1 root  root  820M Sep 17 15:30 /mirpool/cmp/archive3.tar
Yes, the three files are the same size.
16. Use the du -h command to display the space used by the files in /mirpool/cmp. How
does the space these files use compare?
# du -h /mirpool/cmp/*
821M /mirpool/cmp/archive1.tar
345M /mirpool/cmp/archive2.tar
345M /mirpool/cmp/archive3.tar
The archive1.tar file uses 821 MB of space and the other two files use 345 MB each
in this example.
17. Use the zfs get command to display the current value of the compressratio property
for mirpool. What is the current compression ratio? How has it changed and why?
# zfs get compressratio mirpool
NAME     PROPERTY       VALUE  SOURCE
mirpool  compressratio  1.62x  -
The compression ratio has increased to 1.62x with the addition of the second
compressed file. A larger proportion of the data in the pool is now compressed.
18. Remove the /mirpool/cmp/archive1.tar file.
# rm /mirpool/cmp/archive1.tar
19. Use the zfs get command to display the current value of the compressratio property
for mirpool. What is the current compression ratio and how has it changed?
# zfs get compressratio mirpool
NAME     PROPERTY       VALUE  SOURCE
mirpool  compressratio  2.37x  -
The compression ratio has increased to 2.37x with the removal of the uncompressed file.
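The ratio progression seen in this practice (roughly 1.40x, then 1.62x, then 2.37x) can be approximated from the du sizes: each archive holds 821 MB of logical data, and the gzip-compressed copies occupy about 345 MB on disk. A sketch of the arithmetic (ZFS computes the ratio from exact byte counts, so these estimates land slightly off the reported values):

```shell
# compressratio is logical (uncompressed) bytes divided by allocated bytes.
awk 'BEGIN {
    two   = (2 * 821) / (821 + 345)         # archive1 (uncompressed) + archive2 (gzip)
    three = (3 * 821) / (821 + 345 + 345)   # after adding archive3 (gzip)
    final = (2 * 821) / (345 + 345)         # after removing archive1
    printf "%.2fx  %.2fx  %.2fx\n", two, three, final
}'
```

This prints approximately 1.41x, 1.63x, and 2.38x, close to the 1.40x, 1.62x, and 2.37x that zfs get reported.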
20. Use the zfs list command to list the space used by mirpool, and du -h to list the
space used by the remaining two files in /mirpool/cmp. Does the used value reported by
zfs list reflect the sum of the space used by the two files in /mirpool/cmp?
# zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
mirpool  690M  22.2G   690M  /mirpool
# du -h /mirpool/cmp/*
345M /mirpool/cmp/archive2.tar
345M /mirpool/cmp/archive3.tar
Yes, the two values correlate. The two compressed archives consume approximately
690 MB.
21. Remove the mirpool/cmp directory.
# rm -rf /mirpool/cmp
Tasks
1.
Use the zfs get command to display the current setting of the compression property for
mirpool. What source is listed for it?
# zfs get compression mirpool
NAME     PROPERTY     VALUE  SOURCE
mirpool  compression  gzip   local
The compression property lists the local source.
2.
Use the zfs list command to verify that mirpool is the only ZFS file system currently
available.
# zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
mirpool  128K  22.9G    22K  /mirpool
3.
Use the zfs command to create a new file system named mirpool/fs1.
# zfs create mirpool/fs1
4.
Use zfs get to display all properties for mirpool/fs1. Do all of the properties list the
same source? What inheritance, if any, is described in the source list?
# zfs get all mirpool/fs1
NAME         PROPERTY       VALUE                  SOURCE
mirpool/fs1  type           filesystem             -
mirpool/fs1  creation       Fri Sep 17 15:35 2010  -
mirpool/fs1  used           21K                    -
mirpool/fs1  available      22.9G                  -
mirpool/fs1  referenced     21K                    -
mirpool/fs1  compressratio  1.00x                  -
mirpool/fs1  mounted        yes                    -
mirpool/fs1  quota          none                   default
mirpool/fs1  reservation    none                   default
mirpool/fs1  recordsize     128K                   default
mirpool/fs1  mountpoint     /mirpool/fs1           default
mirpool/fs1  sharenfs       off                    default
mirpool/fs1  checksum       on                     default
mirpool/fs1  compression    gzip                   inherited from mirpool
<output omitted>
The compression property differs from the others that list the default source. The
compression property for mirpool/fs1 is inherited from mirpool.
5.
Use the zfs command to create a new file system named mirpool/fs1/fs2.
# zfs create mirpool/fs1/fs2
6.
Set the compression property for the mirpool/fs1 file system to off.
# zfs set compression=off mirpool/fs1
7.
Use the zfs get command to display the compression property for all file systems
below mirpool. Has this property been inherited among the file systems listed?
# zfs get -r compression mirpool
NAME             PROPERTY     VALUE  SOURCE
mirpool          compression  gzip   local
mirpool/fs1      compression  off    local
mirpool/fs1/fs2  compression  off    inherited from mirpool/fs1
The mirpool/fs1/fs2 file system now inherits the compression value from
mirpool/fs1.
8.
Set the compression property for the mirpool/fs1/fs2 file system to gzip.
# zfs set compression=gzip mirpool/fs1/fs2
9.
Use the zfs get command to display the compression property for all file systems
below mirpool. Has this property been inherited among the file systems listed?
# zfs get -r compression mirpool
NAME             PROPERTY     VALUE  SOURCE
mirpool          compression  gzip   local
mirpool/fs1      compression  off    local
mirpool/fs1/fs2  compression  gzip   local
None of the three file systems inherit the compression property setting. All are locally
set.
10. Use a single zfs inherit command to cause mirpool/fs1 and mirpool/fs1/fs2 to
inherit their compression values from mirpool.
# zfs inherit -r compression mirpool/fs1
11. Use zfs get to verify that mirpool/fs1 and mirpool/fs1/fs2 now inherit their
compression value from mirpool.
# zfs get -r compression mirpool
NAME             PROPERTY     VALUE  SOURCE
mirpool          compression  gzip   local
mirpool/fs1      compression  gzip   inherited from mirpool
mirpool/fs1/fs2  compression  gzip   inherited from mirpool
13. Confirm that compression is disabled for mirpool and its descendents.
# zfs get -r compression mirpool
NAME             PROPERTY     VALUE  SOURCE
mirpool          compression  off    local
mirpool/fs1      compression  off    inherited from mirpool
mirpool/fs1/fs2  compression  off    inherited from mirpool
Tasks
1.
Use the zfs set command to set the mountpoint property for mirpool to /home1. Then
list the mountpoint property for the mirpool file systems.
# zfs set mountpoint=/home1 mirpool
# zfs get -r mountpoint mirpool
NAME             PROPERTY    VALUE           SOURCE
mirpool          mountpoint  /home1          local
mirpool/fs1      mountpoint  /home1/fs1      inherited from mirpool
mirpool/fs1/fs2  mountpoint  /home1/fs1/fs2  inherited from mirpool
2.
Use the zfs mount command to verify the mount points of mirpool file systems.
# zfs mount | grep mirpool
mirpool          /home1
mirpool/fs1      /home1/fs1
mirpool/fs1/fs2  /home1/fs1/fs2
3.
Use the zfs get command to display the mountpoint and mounted properties for
mirpool. Verify that the values match the information displayed in the previous step. What
source is listed for the mountpoint property, and why?
# zfs get mountpoint,mounted mirpool
NAME     PROPERTY    VALUE   SOURCE
mirpool  mountpoint  /home1  local
mirpool  mounted     yes     -
4.
The mountpoint property used the local source because you set the mount point to
a directory different from what would be used by default.
Examine the /etc/vfstab file and verify that no entry for mirpool exists in it.
# grep mirpool /etc/vfstab
5.
Use zfs unmount to unmount the mirpool file system and verify that it has been
unmounted.
# zfs unmount /home1
# zfs mount | grep mirpool
6.
Use zfs mount to mount the mirpool file system and verify that it has been mounted as
/home1.
# zfs mount -a
# zfs mount | grep mirpool
mirpool          /home1
mirpool/fs1      /home1/fs1
mirpool/fs1/fs2  /home1/fs1/fs2
7.
8.
Use zfs mount to verify that the new file systems have been mounted below /home1.
# zfs mount | grep mirpool
mirpool          /home1
mirpool/fs1      /home1/fs1
mirpool/fs1/fs2  /home1/fs1/fs2
mirpool/fsa      /home1/fsa
9.
10. Attempt to use the mount command to mount mirpool/fsa as /home/fsa. What
happens and why?
# mount -F zfs mirpool/fsa /home1/fsa
filesystem mirpool/fsa cannot be mounted using mount -F zfs
Use zfs set mountpoint=/home1/fsa instead.
If you must use mount -F zfs or /etc/vfstab, use zfs set
mountpoint=legacy.
See zfs(1M) for more information.
The attempt fails because you cannot use the mount command to mount a ZFS file
system whose mountpoint property is not set to legacy.
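For reference, a file system with mountpoint=legacy is mounted through the traditional mechanisms; a typical /etc/vfstab entry would look like the following sketch (the device and mount point mirror this exercise; the entry itself is an illustration, not part of the lab):

```
#device         device          mount        FS    fsck  mount    mount
#to mount       to fsck         point        type  pass  at boot  options
mirpool/fsa     -               /home1/fsa   zfs   -     yes      -
```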
11. Set the mountpoint property for mirpool/fsa to legacy.
# zfs set mountpoint=legacy mirpool/fsa
12. Attempt to use zfs mount to mount mirpool/fsa. What happens and why?
# zfs mount mirpool/fsa
cannot mount mirpool/fsa: legacy mountpoint
use mount(1M) to mount this filesystem
The zfs mount attempt fails because the mountpoint property for mirpool/fsa is
set to legacy.
13. Use the zfs inherit command to recursively unset the mountpoint property for file
systems associated with mirpool.
# zfs inherit -r mountpoint mirpool
14. Use zfs mount command to list the currently mounted ZFS file systems. How are the file
systems associated with mirpool mounted?
# zfs mount | grep mirpool
mirpool          /mirpool
mirpool/fs1      /mirpool/fs1
mirpool/fs1/fs2  /mirpool/fs1/fs2
mirpool/fsa      /mirpool/fsa
All of the mirpool file systems are mounted at their default locations.
15. Use zfs get to display the mountpoint property for file systems in mirpool. What
source is now listed for these file systems?
# zfs get -r mountpoint mirpool
NAME             PROPERTY    VALUE             SOURCE
mirpool          mountpoint  /mirpool          default
mirpool/fs1      mountpoint  /mirpool/fs1      default
mirpool/fs1/fs2  mountpoint  /mirpool/fs1/fs2  default
mirpool/fsa      mountpoint  /mirpool/fsa      default
The attempt fails because the readonly property of the file system is set to on.
20. Use the mount command to display the mount options used for mirpool/fsa. Verify that
the options now reflect read-only permission.
# mount | grep mirpool/fsa
/mirpool/fsa on mirpool/fsa read only/setuid/devices/nonbmand/exec/xattr/atime/dev= ...
Tasks
1.
Use the zfs get command to display the sharenfs property for the mirpool file
systems. Is this property currently inherited?
# zfs get -r sharenfs mirpool
NAME             PROPERTY  VALUE  SOURCE
mirpool          sharenfs  off    default
mirpool/fs1      sharenfs  off    default
mirpool/fs1/fs2  sharenfs  off    default
No, the file systems use the default source for the sharenfs property.
2.
Attempt to share the mirpool/fs1 file system by using the zfs share command. What
happens and why?
# zfs share mirpool/fs1
cannot share mirpool/fs1: legacy share
use share(1M) to share this filesystem, or set sharenfs property on
The attempt fails because ZFS file systems are not shared by default. With the
sharenfs property set to off, the file system is managed as a legacy share.
3.
Set the sharenfs property for the mirpool/fs1 file system to on.
# zfs set sharenfs=on mirpool/fs1
4.
Use the zfs get command to display the sharenfs property for all file systems below
mirpool. How has the property you set in the previous command been inherited?
# zfs get -r sharenfs mirpool
NAME             PROPERTY  VALUE  SOURCE
mirpool          sharenfs  off    default
mirpool/fs1      sharenfs  on     local
mirpool/fs1/fs2  sharenfs  on     inherited from mirpool/fs1
The mirpool/fs1 and mirpool/fs1/fs2 file systems have sharenfs set to on,
and mirpool/fs1/fs2 inherits this property from mirpool/fs1.
5.
Use the share command to verify the list of NFS shared file systems.
# share
/mirpool/fs1 rw ""
/mirpool/fs1/fs2 rw ""
6.
7.
Use the zfs get command to display the sharenfs property for all file systems below
mirpool. What is the source and value for the sharenfs property listed for the three file
systems?
# zfs get -r sharenfs mirpool
NAME             PROPERTY  VALUE  SOURCE
mirpool          sharenfs  off    default
mirpool/fs1      sharenfs  off    default
mirpool/fs1/fs2  sharenfs  off    default
The default source is listed for all three file systems, and all are set to off.
8.
Verify that no file system is currently shared.
# share
9.
Use the ifconfig -a command to identify the IP address your system is using, and
determine the network portion of that IP address.
# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.201.25 netmask ffffff00 broadcast 192.168.201.255
        ether 0:3:ba:59:94:15
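The network portion is obtained by ANDing the address with the netmask; for the bge0 values above (192.168.201.25, netmask ffffff00, that is, 255.255.255.0), a small sketch:

```shell
# AND each octet of the address with the corresponding netmask octet.
ip=192.168.201.25
mask=255.255.255.0

oldIFS=$IFS
IFS=.
set -- $ip;   i1=$1 i2=$2 i3=$3 i4=$4
set -- $mask; m1=$1 m2=$2 m3=$3 m4=$4
IFS=$oldIFS

echo "$(( i1 & m1 )).$(( i2 & m2 )).$(( i3 & m3 )).$(( i4 & m4 ))"    # 192.168.201.0
```

The @192.168.201 prefix that appears with the rw= share option in later steps refers to this network.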
11. Use the zfs get command to display the sharenfs property for all file systems below
mirpool. Make note of the sharenfs options listed for mirpool/fs1/fs2.
# zfs get -r sharenfs mirpool
NAME             PROPERTY  VALUE                      SOURCE
mirpool          sharenfs  off                        default
mirpool/fs1      sharenfs  off                        default
mirpool/fs1/fs2  sharenfs  ro,rw=@192.168.201,anon=0  local
12. Use the share command to display the list of NFS shared file systems. Verify that the
share options listed for /mirpool/fs1/fs2 match the options listed for
mirpool/fs1/fs2 in the previous command.
# share
/mirpool/fs1/fs2 anon=0,sec=sys,ro,rw=@192.168.201 ""
13. Use zfs unshare to unshare mirpool/fs1/fs2.
# zfs unshare mirpool/fs1/fs2
14. Verify that no file system is currently shared.
# share
16. Use the zpool command to identify who destroyed the file systems.
# zpool history -l | grep destroy
2010-09-09.15:14:48 zfs destroy -r mirpool/fs1 [user root on
host01:global]
The user is identified as [user root on host01:global].
Tasks
1.
Set the mirpool mount point to /users. Then create six new file systems: mirpool/mkt,
mirpool/edu, mirpool/mkt/usera, mirpool/mkt/userb, mirpool/edu/userc,
and mirpool/edu/userd.
# zfs set mountpoint=/users mirpool
# zfs create -p mirpool/mkt/usera
# zfs create mirpool/mkt/userb
# zfs create -p mirpool/edu/userc
# zfs create mirpool/edu/userd
2.
List the descendent mirpool file systems. What is the amount of available space listed for
all of them?
# zfs list -r mirpool
NAME               USED  AVAIL  REFER  MOUNTPOINT
mirpool            330K  22.9G    24K  /users
mirpool/edu         65K  22.9G    23K  /users/edu
mirpool/edu/userc   21K  22.9G    21K  /users/edu/userc
mirpool/edu/userd   21K  22.9G    21K  /users/edu/userd
mirpool/mkt         66K  22.9G    24K  /users/mkt
mirpool/mkt/usera   21K  22.9G    21K  /users/mkt/usera
mirpool/mkt/userb   21K  22.9G    21K  /users/mkt/userb
All of the file systems show 22.9 GB of available space.
3.
Set an 8-GB reservation on the mirpool/mkt and mirpool/edu file systems and verify
the settings you made.
# zfs set reservation=8g mirpool/mkt mirpool/edu
# zfs get -r reservation mirpool
NAME               PROPERTY     VALUE  SOURCE
mirpool            reservation  none   default
mirpool/edu        reservation  8G     local
mirpool/edu/userc  reservation  none   default
mirpool/edu/userd  reservation  none   default
mirpool/mkt        reservation  8G     local
mirpool/mkt/usera  reservation  none   default
mirpool/mkt/userb  reservation  none   default
4.
Use the zfs list command to list the space used by descendent mirpool file systems.
In the USED column, which file system accounts for the two reservations you just made?
Why is 14.9 GB available to the file systems below both mirpool/mkt and mirpool/edu,
when they both reserve 8 GB from the 22.9 GB of total space in mirpool?
# zfs list -r mirpool
NAME               USED   AVAIL  REFER  MOUNTPOINT
mirpool            16.0G  6.89G    24K  /users
mirpool/edu          66K  14.9G    24K  /users/edu
mirpool/edu/userc    21K  14.9G    21K  /users/edu/userc
mirpool/edu/userd    21K  14.9G    21K  /users/edu/userd
mirpool/mkt          66K  14.9G    24K  /users/mkt
mirpool/mkt/usera    21K  14.9G    21K  /users/mkt/usera
mirpool/mkt/userb    21K  14.9G    21K  /users/mkt/userb
The 16 GB of space reserved is reflected in the USED column for the mirpool
(/users) file system. The reservations guarantee mirpool/mkt and mirpool/edu
each have 8 GB of space reserved, and the 6.89 GB of remaining unreserved space in
mirpool is available to either mirpool/mkt or mirpool/edu.
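The 6.89 GB and 14.9 GB figures can be checked with simple arithmetic (a sketch; the 22.9-GB pool size comes from the zfs list output above, and small metadata overheads account for the 6.9 vs 6.89 difference):

```shell
awk 'BEGIN {
    total      = 22.9          # GB usable in mirpool
    reserved   = 8 + 8         # the two 8-GB reservations, charged to mirpool USED
    unreserved = total - reserved
    printf "mirpool AVAIL: ~%.1f GB\n", unreserved          # reported as 6.89G
    printf "child AVAIL:   ~%.1f GB\n", 8 + unreserved      # reported as 14.9G
}'
```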
5.
6.
Use the zfs list command to list the space used by descendent mirpool file systems.
In the USED column, which file systems account for the two reservations you just made?
Why has the space available to mirpool/mkt and mirpool/edu been reduced to 10.9
GB from 14.9 GB?
# zfs list -r mirpool
NAME               USED   AVAIL  REFER  MOUNTPOINT
mirpool            16.0G  6.89G    24K  /users
mirpool/edu        4.00G  10.9G    24K  /users/edu
mirpool/edu/userc    21K  14.9G    21K  /users/edu/userc
mirpool/edu/userd    21K  10.9G    21K  /users/edu/userd
mirpool/mkt        4.00G  10.9G    24K  /users/mkt
mirpool/mkt/usera    21K  14.9G    21K  /users/mkt/usera
mirpool/mkt/userb    21K  10.9G    21K  /users/mkt/userb
The mirpool/mkt and mirpool/edu file systems account for the two 4-GB
reservations made in the previous step. The reservations on mirpool/mkt/usera and
mirpool/edu/userc claim 4 GB from the 14.9 GB of space that was available to
mirpool/mkt and mirpool/edu, respectively.
7.
Set the reservation on the mirpool/edu/userc to none, and verify the settings you
made.
# zfs set reservation=none mirpool/edu/userc
# zfs get -r reservation mirpool
NAME               PROPERTY     VALUE  SOURCE
mirpool            reservation  none   default
mirpool/edu        reservation  8G     local
mirpool/edu/userc  reservation  none   default
mirpool/edu/userd  reservation  none   default
mirpool/mkt        reservation  8G     local
mirpool/mkt/usera  reservation  4G     local
mirpool/mkt/userb  reservation  none   default
8.
Use the zfs list command to list the space used by descendent mirpool file systems.
How has removing the reservation for mirpool/edu/userc affected the space available
to mirpool/edu and mirpool/edu/userd? Have the used and available values
changed for file systems below mirpool/mkt?
# zfs list -r mirpool
NAME               USED   AVAIL  REFER  MOUNTPOINT
mirpool            16.0G  6.89G    24K  /users
mirpool/edu          66K  14.9G    24K  /users/edu
mirpool/edu/userc    21K  14.9G    21K  /users/edu/userc
mirpool/edu/userd    21K  14.9G    21K  /users/edu/userd
mirpool/mkt        4.00G  10.9G    24K  /users/mkt
mirpool/mkt/usera    21K  14.9G    21K  /users/mkt/usera
mirpool/mkt/userb    21K  10.9G    21K  /users/mkt/userb
Removing the 4-GB reservation for mirpool/edu/userc has increased the space
available to mirpool/edu and mirpool/edu/userd to 14.9 GB. The space used and
available below mirpool/mkt has not changed.
9.
10. Use the zfs list command to list the space used by descendent mirpool file systems.
How have the two quotas you just established affected the amount of space used and
available for the file systems in mirpool?
# zfs list -r mirpool
NAME
USED AVAIL REFER
MOUNTPOINT
NAME               USED   AVAIL  REFER  MOUNTPOINT
mirpool            16.0G  6.89G    24K  /users
mirpool/edu          66K  14.9G    24K  /users/edu
mirpool/edu/userc    21K  14.9G    21K  /users/edu/userc
mirpool/edu/userd    21K  4.00G    21K  /users/edu/userd
mirpool/mkt        4.00G  10.9G    24K  /users/mkt
mirpool/mkt/usera    21K  14.9G    21K  /users/mkt/usera
mirpool/mkt/userb    21K  4.00G    21K  /users/mkt/userb
12. Use the zfs list command to list the space used by descendent mirpool file systems.
# zfs list -r mirpool
NAME               USED   AVAIL  REFER  MOUNTPOINT
mirpool            16.0G  6.89G    24K  /users
mirpool/edu          66K  14.9G    24K  /users/edu
mirpool/edu/userc    21K  1024M    21K  /users/edu/userc
mirpool/edu/userd    21K  4.00G    21K  /users/edu/userd
mirpool/mkt        4.00G  10.9G    24K  /users/mkt
mirpool/mkt/usera    21K  14.9G    21K  /users/mkt/usera
mirpool/mkt/userb    21K  4.00G    21K  /users/mkt/userb
mirpool/mkt/usera    21K  14.9G    21K  /users/mkt/usera
mirpool/mkt/userb    21K  4.00G    21K  /users/mkt/userb
Yes, the used and available values for mirpool/edu and mirpool/edu/userc have
changed. The others have not. This is because space has been taken from the
space already reserved for mirpool/edu and its descendents, but no space has been
taken from space that is also available to mirpool/mkt and its descendents.
14. Attempt to copy /class/archive.tar to /users/edu/userc/archive2.tar. What
happens and why?
# cp /class/archive.tar /users/edu/userc/archive2.tar
cp: /class/archive.tar: Disc quota exceeded
The copy attempt fails because the copy request would exceed the quota for the
destination file system.
15. Attempt to set a 2-GB reservation on the mirpool/edu/userc file system. What happens
and why?
# zfs set reservation=2g mirpool/edu/userc
cannot set property for mirpool/edu/userc: size is greater
than available space
The attempt fails because the size of the reservation you requested was larger than the
quota in place for the file system.
16. Increase the quota on the mirpool/edu/userc file system to 2 GB and list the space
used by descendent mirpool file systems. Verify that the new space is available to
mirpool/edu/userc.
# zfs set quota=2g mirpool/edu/userc
# zfs list -r mirpool
NAME               USED   AVAIL  REFER  MOUNTPOINT
mirpool            16.0G  6.89G    24K  /users
mirpool/edu         821M  14.1G    24K  /users/edu
mirpool/edu/userc   821M  1.20G   821M  /users/edu/userc
mirpool/edu/userd    21K  4.00G    21K  /users/edu/userd
mirpool/mkt        4.00G  10.9G    24K  /users/mkt
mirpool/mkt/usera    21K  14.9G    21K  /users/mkt/usera
mirpool/mkt/userb    21K  4.00G    21K  /users/mkt/userb
17. List the quota and reservation properties for the mirpool/mkt file system. Verify that a
quota has not been set.
# zfs get reservation,quota mirpool/mkt
NAME         PROPERTY     VALUE  SOURCE
mirpool/mkt  reservation  8G     local
mirpool/mkt  quota        none   default
18. Set a 12-GB quota on mirpool/mkt, and verify that the quota has been set.
# zfs set quota=12g mirpool/mkt
# zfs get reservation,quota mirpool/mkt
NAME         PROPERTY     VALUE  SOURCE
mirpool/mkt  reservation  8G     local
mirpool/mkt  quota        12G    local
19. List the space used by descendent mirpool file systems. How have the space used and
available values changed since you last displayed them and why?
# zfs list -r mirpool
NAME               USED   AVAIL  REFER  MOUNTPOINT
mirpool            16.0G  6.89G    24K  /users
mirpool/edu         821M  14.1G    24K  /users/edu
mirpool/edu/userc   821M  1.20G   821M  /users/edu/userc
mirpool/edu/userd    21K  4.00G    21K  /users/edu/userd
mirpool/mkt        4.00G  8.00G    24K  /users/mkt
mirpool/mkt/usera    21K  12.0G    21K  /users/mkt/usera
mirpool/mkt/userb    21K  4.00G    21K  /users/mkt/userb
None of the space used values have changed. The space available to mirpool/mkt
has been reduced from 10.9 GB to 8 GB because, of the 12-GB quota for mirpool/mkt,
4 GB have already been reserved by mirpool/mkt/usera.
The space available to the mirpool/mkt/usera file system is now limited by the
12-GB quota on mirpool/mkt.
The space available to the mirpool/edu file system has not changed because the
reservation for mirpool/mkt has not changed. Space available outside of the 8-GB
reservation for mirpool/mkt is still available to mirpool/edu.
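The 8 GB now shown for mirpool/mkt follows directly from the quota arithmetic (a sketch):

```shell
awk 'BEGIN {
    quota    = 12   # GB quota on mirpool/mkt
    reserved = 4    # GB already reserved by mirpool/mkt/usera
    printf "AVAIL on mirpool/mkt: %d GB\n", quota - reserved
}'
```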
20. Destroy the mirpool/edu and mirpool/mkt file systems.
# zfs destroy -r mirpool/edu
# zfs destroy -r mirpool/mkt
21.
Create a new group called students and assign group number 200 to it.
Create a user account called student1 that belongs to the students group.
22. Set the mirpool mount point to /mirpool/home/students. Then create a new file
system, mirpool/student1, and change the ownership to student1.
# zfs set mountpoint=/mirpool/home/students mirpool
# zfs create mirpool/student1
# chown student1 /mirpool/home/students/student1
# chgrp students /mirpool/home/students/student1
23. Set a 1-GB user quota on student1 and display the quota information.
# zfs set userquota@student1=1G mirpool/student1
# zfs get userquota@student1 mirpool/student1
NAME              PROPERTY            VALUE  SOURCE
mirpool/student1  userquota@student1  1G     local
25. Attempt to create a 2-GB file in the student1 home directory. What happens and why?
$ /usr/sbin/mkfile 2g file2
file2: initialized 1717968896 of 2147483648 bytes: Disc quota
exceeded
The user quota is exceeded, but part (or possibly all) of the file is written before the error is reported.
Note: Due to CR 6813406, enforcement of a user quota can be delayed by several
seconds, which means part of the file might be written before the quota is exceeded. To
determine how much of the file is actually written before the quota is exceeded, use the
following command:
$ ls -s file2
3356571 file2
26. Attempt to create another 2-GB file in the student1 home directory. What happens and
why?
$ /usr/sbin/mkfile 2g file2a
Could not open file2a: Disc quota exceeded
The file is not written because the quota has already been exceeded.
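To see how much space a user is actually charged against a user quota, the per-user space accounting can be queried; a sketch:

```shell
# Show per-user space consumption and quotas on the file system.
zfs userspace mirpool/student1
# Query the accounting for a single user directly.
zfs get userused@student1 mirpool/student1
```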
27. Exit the student1 account.
$ exit
#
28. Destroy the mirpool/student1 file system. Set the mirpool mount point back to
/mirpool.
# zfs destroy -r mirpool/student1
# zfs inherit mountpoint mirpool
# zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
mirpool  176K  22.9G    21K  /mirpool
Chapter 6
Preparation
Both the /class/archive.tar and the mirrored storage pool mirpool will be used in this
lab exercise. Make sure they both exist.
If the systems used for this exercise have been re-initialized, create a directory called /class,
and then use the tar command to create an archive of the /usr/lib directory, and save the
archive as /class/archive.tar. Limit the size of this archive to 820 MB. You will use this
file to demonstrate functions of ZFS snapshots and clones.
# mkdir /class
# tar cfk /class/archive.tar 839680 /usr/lib
tar: please insert new volume, then press RETURN. (Enter
Control-C)
The size of this archive may differ slightly from the size presented in this exercise's examples.
Tasks
1. Set the mirpool mount point to /users. Create two file systems named mirpool/devel
and mirpool/devel/usera. Display the list of descendent mirpool file systems. Make
note of the space used and available in the file systems listed.
# zfs set mountpoint=/users mirpool
# zfs create mirpool/devel
# zfs create mirpool/devel/usera
# zfs list -r mirpool
NAME                 USED  AVAIL  REFER  MOUNTPOINT
mirpool              232K  22.9G    21K  /users
mirpool/devel         42K  22.9G    21K  /users/devel
mirpool/devel/usera   21K  22.9G    21K  /users/devel/usera
2.
3.
4. List the descendent mirpool file systems. Which file system references the new data?
# zfs list -r mirpool
NAME                 USED  AVAIL  REFER  MOUNTPOINT
mirpool              821M  22.1G    21K  /users
mirpool/devel        821M  22.1G    23K  /users/devel
mirpool/devel/usera  821M  22.1G   821M  /users/devel/usera
The mirpool/devel/usera file system references the new data, as indicated by the
REFER column.
5.
6.
# cp /class/archive.tar /users/devel/usera/archive2.tar
7. List the descendent mirpool file systems. Make note of the amount of data used in
mirpool. Has the amount of data referenced by the usera@today snapshot changed?
# zfs list -r mirpool
NAME                        USED  AVAIL  REFER  MOUNTPOINT
mirpool                    1.60G  21.3G    23K  /users
mirpool/devel              1.60G  21.3G    23K  /users/devel
mirpool/devel/usera        1.60G  21.3G  1.60G  /users/devel/usera
mirpool/devel/usera@today    19K      -   821M  -
No, the usera@today snapshot still references 821 MB of data.
8.
9.
10. List the descendent mirpool file systems. Has the amount of data used or referenced by
the usera@today snapshot changed? Has the amount of space used or referenced by
mirpool/devel/usera changed?
# zfs list -r mirpool
NAME                        USED  AVAIL  REFER  MOUNTPOINT
mirpool                    1.60G  21.3G    21K  /users
mirpool/devel              1.60G  21.3G    23K  /users/devel
mirpool/devel/usera        1.60G  21.3G   821M  /users/devel/usera
mirpool/devel/usera@today   821M      -   821M  -
The usera@today snapshot now both uses and references 821 MB; before
archive1.tar was removed, it only referenced that amount of data. The amount of space
referenced by mirpool/devel/usera has been reduced to 821 MB, but the amount used
has not changed. The space used that is reported for mirpool/devel/usera
includes the space that is now used by the mirpool/devel/usera@today snapshot.
You can also use the zfs list -o space command to identify how much space is
consumed by snapshots and descendent datasets.
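A sketch of the command mentioned in the note (the values in the output columns reflect the current state of your pool):

```shell
# Break space usage down by dataset, snapshots, and descendents.
# USEDSNAP is the space consumed by snapshots; USEDCHILD is the
# space consumed by descendent datasets.
zfs list -o space -r mirpool
```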
11. Attempt to destroy the mirpool/devel/usera file system. What happens and why?
# zfs destroy mirpool/devel/usera
cannot destroy mirpool/devel/usera: filesystem has children
use -r to destroy the following datasets:
mirpool/devel/usera@today
The attempt to destroy the file system fails because the
mirpool/devel/usera@today snapshot exists.
The user reference count has been reset to 0. You can now destroy the snapshot.
17. Destroy the mirpool/devel/usera@today snapshot.
# zfs destroy mirpool/devel/usera@today
18. List the descendent mirpool file systems. Has the amount of space used or referenced by
mirpool/devel/usera changed?
# zfs list -r mirpool
NAME                 USED  AVAIL  REFER  MOUNTPOINT
mirpool              821M  22.1G    21K  /users
mirpool/devel        821M  22.1G    23K  /users/devel
mirpool/devel/usera  821M  22.1G   821M  /users/devel/usera
The amount of space used by mirpool/devel/usera has changed to 821 MB, but
the amount referenced has not changed.
19. Destroy the mirpool/devel file system and descendent file systems.
# zfs destroy -r mirpool/devel
Tasks
1. Set the mount point of mirpool to the default. Then create a file system named
mirpool/students with a mount point of /students.
# zfs inherit mountpoint mirpool
# zfs create -o mountpoint=/students mirpool/students
2.
3.
4.
Copy /class/archive.tar to each student file system. Note the amount of data used
and referenced by these file systems.
# cp /class/archive.tar /students/student1/archive1.tar
# cp /class/archive.tar /students/student2/archive2.tar
# cp /class/archive.tar /students/student3/archive3.tar
# zfs list -r mirpool
NAME                        USED  AVAIL  REFER  MOUNTPOINT
mirpool                    2.40G  20.5G    21K  /mirpool
mirpool/students           2.40G  20.5G    25K  /students
mirpool/students/student1   821M  20.5G   821M  /students/student1
mirpool/students/student2   821M  20.5G   821M  /students/student2
mirpool/students/student3   821M  20.5G   821M  /students/student3
The mirpool file system uses 2.40 GB of disk space, which is the sum of the space
used by the three new student file systems, each of which contains the 821-MB tar archive.
Create a recursive snapshot of mirpool/students named
mirpool/students@monday. List the descendent mirpool file systems. Note the
amount of data used and referenced by mirpool/students@monday.
# zfs snapshot -r mirpool/students@monday
# zfs list -r mirpool
NAME                               USED  AVAIL  REFER  MOUNTPOINT
mirpool                           2.40G  20.5G    21K  /mirpool
mirpool/students                  2.40G  20.5G    25K  /students
mirpool/students@monday               0      -    25K  -
mirpool/students/student1          821M  20.5G   821M  /students/student1
mirpool/students/student1@monday      0      -   821M  -
mirpool/students/student2          821M  20.5G   821M  /students/student2
mirpool/students/student2@monday      0      -   821M  -
mirpool/students/student3          821M  20.5G   821M  /students/student3
mirpool/students/student3@monday      0      -   821M  -
The student snapshots do not use any additional space, but their reference counts
include the 821 MB of the tar archive.
5. Create a file named /students/student1/file1. Confirm that the file exists.
# touch /students/student1/file1
# ls /students/student1/file1
/students/student1/file1
6.
7. Attempt to roll back the mirpool/students@monday snapshot. What happens and why?
# zfs rollback mirpool/students@monday
cannot rollback to mirpool/students@monday: more recent
snapshots exist
use -r to force deletion of the following snapshots:
mirpool/students@tuesday
The attempt fails because snapshots of mirpool/students exist that were taken after
mirpool/students@monday.
8. If the mirpool/students snapshot was rolled back to Monday's snapshot, which data
would be lost?
The file named /students/student1/file1.
9.
10. List the descendent mirpool file systems. Roll back the
mirpool/students/student1@tuesday snapshot.
# zfs list -r mirpool
NAME                                USED  AVAIL  REFER  MOUNTPOINT
mirpool                            2.40G  20.5G    21K  /mirpool
mirpool/students                   2.40G  20.5G    25K  /students
mirpool/students@monday                0      -    25K  -
mirpool/students@tuesday               0      -    25K  -
mirpool/students/student1           821M  20.5G   821M  /students/student1
mirpool/students/student1@monday     19K      -   821M  -
mirpool/students/student1@tuesday    19K      -   821M  -
mirpool/students/student2           821M  20.5G   821M  /students/student2
mirpool/students/student2@monday       0      -   821M  -
mirpool/students/student2@tuesday      0      -   821M  -
mirpool/students/student3           821M  20.5G   821M  /students/student3
mirpool/students/student3@monday       0      -   821M  -
mirpool/students/student3@tuesday      0      -   821M  -
# zfs rollback mirpool/students/student1@tuesday
# zfs list -r mirpool
NAME                               USED  AVAIL  REFER  MOUNTPOINT
mirpool                           2.40G  20.5G    21K  /mirpool
mirpool/students                  2.40G  20.5G    25K  /students
mirpool/students@monday               0      -    25K  -
mirpool/students/student1          821M  20.5G   821M  /students/student1
mirpool/students/student1@monday    19K      -   821M  -
mirpool/students/student2          821M  20.5G   821M  /students/student2
mirpool/students/student2@monday      0      -   821M  -
mirpool/students/student3          821M  20.5G   821M  /students/student3
mirpool/students/student3@monday      0      -   821M  -
17. Use the ls -lh command to list the size of the file in /backup. Verify that it matches the
size of the space used by the mirpool/students file systems.
# ls -lh /backup
total 5054896
-rw-r--r--   1 root     root        2.4G Sep 17 16:18 mirpool.students.monday
Yes, the recursive snapshot stream is 2.4 GB in size.
18. Use the zfs send command to send the mirpool/students/student1@monday
snapshot to the /backup directory. Then list the size of the snapshot stream.
/backup/mirpool.students.student1monday
20. Use the zfs receive command to re-create the mirpool/students/student1 file
system. List the descendent mirpool file systems and list the contents of the
mirpool/students/student1 file system.
# zfs receive mirpool/students/student1 < /backup/mirpool.students.student1monday
# zfs list -r mirpool
NAME                               USED  AVAIL  REFER  MOUNTPOINT
mirpool                           2.40G  20.5G    21K  /mirpool
mirpool/students                  2.40G  20.5G    24K  /students
mirpool/students@monday               0      -    25K  -
mirpool/students/student1          821M  20.5G   821M  /students/student1
mirpool/students/student1@monday      0      -   821M  -
mirpool/students/student2          821M  20.5G   821M  /students/student2
mirpool/students/student2@monday      0      -   821M  -
mirpool/students/student3          821M  20.5G   821M  /students/student3
mirpool/students/student3@monday      0      -   821M  -
# ls /students/student1
archive1.tar
Tasks
1. Create a new ZFS file system named mirpool/projects with a mount point of
/projects. Create a new ZFS file system named mirpool/projects/project1. List
the descendent mirpool file systems to verify that they exist.
# zfs create -o mountpoint=/projects mirpool/projects
# zfs create mirpool/projects/project1
# zfs list -r mirpool
NAME                        USED  AVAIL  REFER  MOUNTPOINT
mirpool                     280K  22.9G    21K  /mirpool
mirpool/projects             42K  22.9G    21K  /projects
mirpool/projects/project1    21K  22.9G    21K  /projects/project1
2.
3.
4.
# zfs list -r mirpool
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
mirpool                               821M  22.1G    21K  /mirpool
mirpool/projects                      821M  22.1G    23K  /projects
mirpool/projects/baseline1               0  22.1G   821M  /projects/baseline1
mirpool/projects/project1             821M  22.1G   821M  /projects/project1
mirpool/projects/project1@baseline1      0      -   821M  -
5.
6.
NAME                                 AVAIL  REFER  MOUNTPOINT
mirpool                              21.1G    21K  /mirpool
mirpool/projects                     21.1G    24K  /projects
mirpool/projects/baseline1           21.1G  1.80G  /projects/baseline1
mirpool/projects/project1            21.1G   821M  /projects/project1
mirpool/projects/project1@baseline1      -   821M  -
7.
Use the zfs get command to display the value of the origin property for the descendent
mirpool file systems. What origin is listed for the mirpool/projects/baseline1
clone?
# zfs get -r origin mirpool
NAME                                 PROPERTY  VALUE                                SOURCE
mirpool                              origin    -                                    -
mirpool/projects                     origin    -                                    -
mirpool/projects/baseline1           origin    mirpool/projects/project1@baseline1  -
mirpool/projects/project1            origin    -                                    -
mirpool/projects/project1@baseline1  origin    -                                    -
9.
# zfs list -r mirpool
NAME                                  AVAIL  REFER  MOUNTPOINT
mirpool                               21.1G    21K  /mirpool
mirpool/projects                      21.1G    24K  /projects
mirpool/projects/baseline1            21.1G  1.80G  /projects/baseline1
mirpool/projects/project1             21.1G   821M  /projects/project1
mirpool/projects/baseline1@baseline1      -   821M  -
The baseline1 clone is promoted to become the original ZFS file system, including the
space that is used and referenced.
10. Use the zfs get command to display the value of the origin property for the descendent
mirpool file systems. What origin is listed for the mirpool/projects/baseline1 file
system?
# zfs get -r origin mirpool
NAME                                  PROPERTY  VALUE                                 SOURCE
mirpool                               origin    -                                     -
mirpool/projects                      origin    -                                     -
mirpool/projects/baseline1            origin    -                                     -
mirpool/projects/baseline1@baseline1  origin    -                                     -
mirpool/projects/project1             origin    mirpool/projects/baseline1@baseline1  -
The origin property for the mirpool/projects/baseline1 file system has been
cleared (reset to the default), and the origin property for mirpool/projects/project1
now indicates that it is a clone of mirpool/projects/baseline1.
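The snapshot, clone, and promote sequence exercised in this practice can be summarized as follows; a sketch of the general pattern using the dataset names from this lab:

```shell
# Snapshot the original file system, clone it, then promote the clone.
zfs snapshot mirpool/projects/project1@baseline1
zfs clone mirpool/projects/project1@baseline1 mirpool/projects/baseline1
zfs promote mirpool/projects/baseline1
# After the promote, the origin snapshot migrates to the promoted
# clone (baseline1@baseline1) and project1 becomes a clone of
# baseline1, so project1 can now be destroyed independently.
```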
12. Destroy the remaining descendent mirpool file systems. List the descendent mirpool
datasets to confirm that they are destroyed.
# zfs destroy -r mirpool/projects
# zfs list -r mirpool
NAME     USED  AVAIL  REFER  MOUNTPOINT
mirpool  224K  22.9G    21K  /mirpool
Chapter 7
Practices for Lesson 7: Installing and Booting a ZFS Root File System
Chapter 7 - Page 1
Preparation
To complete this exercise, you must already have a general understanding of disk devices and
file systems used on Solaris systems. If disks have been used previously, then you might need
to use the -f option with the zpool create or zpool attach commands.
Disk storage used in this and subsequent exercises is provided by two internal disk drives and
the Fibre Channel array, if available.
Practice 7-1: Migrating a UFS Root File System to a ZFS Root File
System
Overview
In this practice, you migrate a UFS root file system to a ZFS root file system.
Tasks
1. Identify the software and hardware requirements to install a ZFS root file system.
a. Identify the Solaris 10 release on the system.
# cat /etc/release
Oracle Solaris 10 9/10 s10s_u9wos_11 SPARC
Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.
Assembled 15 June 2010
b.
Based on the above example output, c0t0d0s0 is used by the UFS root file system.
Approximately 16 GB of disk space is recommended for a ZFS root file system.
d. Use the format utility to identify the internal disks on your system. Make a note of the
disks.
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1c,600000/scsi@2/sd@0,0
1. c0t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1c,600000/scsi@2/sd@1,0
.
.
# format c0t1d0
selecting c0t1d0
.
.
.
[disk formatted]
partition> p
Total disk cylinders available: 14087 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders         Size            Blocks
  0 unassigned    wm       0 -  3297       16.00GB    (3298/0/0)   33560448
  1 unassigned    wm    3298 - 14086       52.35GB    (10789/0/0) 109788864
  2     backup    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6 unassigned    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
partition>
f. An SMI labeled disk is identified by the cylinder information. An EFI labeled disk
contains no cylinder information. If the disk has an SMI label, but the entire disk
space is not in slice 0, repartition the disk so that all the disk space is in slice 0.
partition> p
Current partition table (original):
Total disk cylinders available: 14087 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders         Size            Blocks
  0 unassigned    wm       0 -  3297       16.00GB    (3298/0/0)   33560448
  1 unassigned    wm    3298 - 14086       52.35GB    (10789/0/0) 109788864
  2     backup    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6 unassigned    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
partition> modify
Select partitioning base:
        0. Current partition table (original)
        1. All Free Hog
Choose base (enter number) [0]? 1
Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       0                0         (0/0/0)             0
  1       swap    wu       0                0         (0/0/0)             0
  2     backup    wu       0 - 14086       68.35GB    (14087/0/0) 143349312
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6        usr    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
Do you wish to continue creating a new partition
table based on above table[yes]?
Free Hog partition[6]? 0
Enter size of partition 1 [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition 3 [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition 4 [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition 5 [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition 6 [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition 7 [0b, 0c, 0.00mb, 0.00gb]:
Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
  1       swap    wu       0                0         (0/0/0)             0
  2     backup    wu       0 - 14086       68.35GB    (14087/0/0) 143349312
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6        usr    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
Okay to make this the current partition table[yes]? yes
Enter table name (remember quotes): "disk1"
Ready to label disk, continue? yes
partition> quit
g. If the disk has an EFI label, it must be relabeled with an SMI label. At the same time,
ensure that all the disk space is in slice 0.
# format -e c0t1d0s0
selecting c0t1d0s0
[disk formatted]
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
inquiry - show vendor, product and revision
scsi - independent SCSI mode selects
cache - enable, disable or query SCSI disk cache
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> label
Copyright 2010, Oracle and/or its affiliates. All rights reserved.
Free Hog partition[6]? 0
Enter size of partition 1 [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition 3 [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition 4 [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition 5 [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition 6 [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition 7 [0b, 0c, 0.00mb, 0.00gb]:
Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
  1       swap    wu       0                0         (0/0/0)             0
  2     backup    wu       0 - 14086       68.35GB    (14087/0/0) 143349312
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6        usr    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
Okay to make this the current partition table[yes]? yes
Enter table name (remember quotes): "disk1"
Ready to label disk, continue? yes
partition> quit
format> quit
2. Create the ZFS pool that is intended to be the root pool. Confirm that the pool is created.
# zpool create rpool c0t1d0s0
# zpool list
NAME    SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
rpool    68G  95.5K  68.0G   0%  ONLINE  -
3. Confirm that the /etc/vfstab file does not contain an /export/home entry. If it does
contain an /export/home entry, comment the entry, so that upgrade conflicts do not
occur later.
4. Migrate the UFS root file system to a ZFS root file system.
5.
6.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      yes    yes       no     -
zfsBE                      yes      no     no        yes    -
7.
8.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      no     no        yes    -
zfsBE                      yes      yes    yes       no     -
9. Identify the ZFS BE components and review the dump device and the swap device.
# zfs list -r rpool
NAME              USED  AVAIL  REFER  MOUNTPOINT
rpool            7.67G  59.3G    98K  /rpool
rpool/ROOT       5.16G  59.3G    21K  /rpool/ROOT
rpool/ROOT/zfsBE 5.16G  59.3G  5.16G  /
rpool/dump       2.00G  59.3G  2.00G  -
rpool/swap        517M  59.8G    16K  -
# swap -l
swapfile                  dev    swaplo   blocks     free
/dev/zvol/dsk/rpool/swap  256,1      16  1058800  1058800
# dumpadm
Dump content: kernel pages
Dump device: /dev/zvol/dsk/rpool/dump (dedicated)
Savecore directory: /var/crash/host01
Savecore enabled: yes
Save compressed: on
Tasks
1.
2.
3.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      no     no        yes    -
zfsBE                      yes      yes    yes       no     -
zfs2BE                     yes      no     no        yes    -
# luactivate zfs2BE
A Live Upgrade Sync operation will be performed on startup of boot environment
<zfs2BE>.
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
.
.
.
4.
5.
6.
7. Use the boot command to identify the ZFS BEs that are available for booting.
ok boot -L
Rebooting with command: boot -L
Boot device: /pci@1c,600000/scsi@2/sd@1,0:a File and args: -L
1 zfsBE
2 zfs2BE
Select environment to boot: [ 1 - 2 ]:
8.
Tasks
1.
2. Determine the disk that was previously used for the UFS root file system.
# lufslist ufsBE
boot environment name: ufsBE
Filesystem          fstype  device size   Mounted on  Mount Options
------------------  ------  ------------  ----------  -------------
/dev/dsk/c0t0d0s1   swap       541851648  -           -
/dev/dsk/c0t0d0s0   ufs      72852996096  /           -
3.
4.
# format
Specify disk (enter its number): 0
selecting c0t0d0
[disk formatted]
.
.
.
format> p
PARTITION MENU:
0 - change 0 partition
1 - change 1 partition
2 - change 2 partition
3 - change 3 partition
4 - change 4 partition
5 - change 5 partition
6 - change 6 partition
7 - change 7 partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> p
Current partition table (original):
Total disk cylinders available: 14087 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders          Size            Blocks
  0       root    wm     104 - 14086        67.85GB    (13983/0/0) 142291008
  1       swap    wu       0 -   103       516.75MB    (104/0/0)     1058304
  2     backup    wm       0 - 14086        68.35GB    (14087/0/0) 143349312
  3 unassigned    wu       0                 0         (0/0/0)             0
  4 unassigned    wu       0                 0         (0/0/0)             0
  5 unassigned    wu       0                 0         (0/0/0)             0
  6 unassigned    wu       0                 0         (0/0/0)             0
  7 unassigned    wu       0                 0         (0/0/0)             0
partition> modify
Select partitioning base:
        0. Current partition table (original)
        1. All Free Hog
Choose base (enter number) [0]? 1
Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       0                0         (0/0/0)             0
  1       swap    wu       0                0         (0/0/0)             0
  2     backup    wu       0 - 14086       68.35GB    (14087/0/0) 143349312
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6        usr    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
Do you wish to continue creating a new partition
table based on above table[yes]?
Free Hog partition[6]? 0
Enter size of partition 1 [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition 3 [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition 4 [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition 5 [0b, 0c, 0.00mb, 0.00gb]:
Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
  1       swap    wu       0                0         (0/0/0)             0
  2     backup    wu       0 - 14086       68.35GB    (14087/0/0) 143349312
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6        usr    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
5.
6. Use the zpool command to check the resilvering status to ensure that the resilvering of the
second disk is complete. Resilvering could take some time.
7. Install the boot blocks on the newly attached disk after the disk has resilvered.
        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0

errors: No known data errors
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0
8.
9. Boot the system from the newly attached disk. Use the devalias or a similar command to
identify the disk from which to boot. Make sure you use the disk that you identified in step 3.
# init 0
ok devalias
disk0 /pci@1c,600000/scsi@2/disk@0,0
disk1 /pci@1c,600000/scsi@2/disk@1,0
.
.
.
ok boot disk0
The disk pathname identified by the devalias command should match the physical
path name identified in step 8.
10. Confirm that you are booted from the second disk in the mirrored ZFS storage pool.
# prtconf -vp | grep bootpath
bootpath: '/pci@1c,600000/scsi@2/disk@0,0:a'
Tasks
1.
2.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      no     no        yes    -
zfs2BE                     yes      yes    yes       no     -
3.
4.
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      yes    yes       no     -
zfs2BE                     yes      no     no        yes    -
5.
6.
7.
8.
9. Roll back the root pool snapshot.
Starting shell.
# zfs list -r rpool
NAME                    USED  AVAIL  REFER  MOUNTPOINT
rpool                  7.68G  59.3G    98K  /a/rpool
rpool@0918               17K      -    98K  -
rpool/ROOT             5.17G  59.3G    21K  /a/rpool/ROOT
rpool/ROOT@0918            0      -    21K  -
rpool/ROOT/zfsBE       5.17G  59.3G  5.17G  /a
rpool/ROOT/zfsBE@0918  3.94M      -  5.17G  -
rpool/dump             2.00G  59.3G  2.00G  -
rpool/dump@0918            0      -  2.00G  -
rpool/swap              517M  59.8G    16K  -
rpool/swap@0918            0      -    16K  -
# zfs rollback rpool@0918
# zfs rollback rpool/ROOT@0918
# zfs rollback rpool/ROOT/zfsBE@0918
Current ZFS snapshot rollback behavior is that the members of a recursive snapshot are
not rolled back with a single -r option. You must roll back each individual snapshot from
the recursive snapshot. There is no need to roll back the swap and dump snapshots.
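Rolling back each member of a recursive snapshot can be scripted; a minimal sketch, assuming the @0918 recursive snapshot shown above:

```shell
# Roll back each dataset of the recursive @0918 snapshot individually,
# skipping swap and dump, which do not need to be rolled back.
for ds in rpool rpool/ROOT rpool/ROOT/zfsBE; do
    zfs rollback "${ds}@0918"
done
```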
10. Reboot the system.
# init 6
11. Walk through the remaining root disk failure recovery steps with the instructor.
In the event of root disk failure, you should review the steps for complete root pool
recovery.
a. Create a remote file system for storing root pool snapshots. Then, share the remote file
system for the snapshots.
remote# zfs create rpool/snaps
remote# zfs set sharenfs=rw=local-system,root=local-system rpool/snaps
# share
-@rpool/snaps /rpool/snaps sec=sys,rw=local-system,root=local-system ""
b.
c.
d.
e. If the root pool disk is replaced and does not contain a disk label that is usable by ZFS,
you will have to relabel the disk.
f. Re-create the root pool.
g.
h. Use the zfs command to verify that the root pool datasets are restored.
# zfs list -r rpool
i.
j.
Preparation
This lab requires access to the Internet and an available Web browser.
This exercise assumes that all the pools, rpool and mirpool, are available.
If the systems used for this exercise have been re-initialized, it will be necessary to run the
make_disk_list script in /opt/ses/lab/zfs. Your instructor will indicate if this is
necessary, and will assign disks to you before you run the script.
If the systems used for this exercise have been re-initialized, create a directory called /class,
and then use the tar command to create an archive of the /usr/lib directory, and save the
archive as /class/archive.tar. Limit the size of this archive to 820 MB.
# mkdir /class
# tar cfk /class/archive.tar 839680 /usr/lib
tar: please insert new volume, then press RETURN. (Enter
Control-C)
Tasks
1. Display the list of your disks from /opt/ses/lab/zfs/my_disks. Identify two 9-GB
disks to be used as two spares for the existing pool, mirpool, and three 9-GB disks to be
used to create a new RAID-Z pool, rzpool.
# cat /opt/ses/lab/zfs/my_disks
2. Use the zpool command to add two 9-GB disks as spares to mirpool. Use the zpool
status command to verify that all disks are online and available.
# zpool add mirpool spare c1t226000C0FFA001ABd19
c2t216000C0FF8001ABd19
# zpool status mirpool
  pool: mirpool
 state: ONLINE
 scrub: none requested
config:

        NAME                       STATE
        mirpool                    ONLINE
          mirror-0                 ONLINE
            c1t226000C0FFA001ABd3  ONLINE
            c2t216000C0FF8001ABd3  ONLINE
          mirror-1                 ONLINE
            c1t226000C0FFA001ABd4  ONLINE
            c2t216000C0FF8001ABd4  ONLINE
          mirror-2                 ONLINE
            c1t226000C0FFA001ABd5  ONLINE
            c2t216000C0FF8001ABd5  ONLINE
        spares
          c1t226000C0FFA001ABd19   AVAIL
          c2t216000C0FF8001ABd19   AVAIL
Use three of your 9-GB disks to create a pool named rzpool that contains one RAID-Z
device.
# zpool create rzpool raidz c1t226000C0FFA001ABd20
c2t216000C0FF8001ABd20 c1t226000C0FFA001ABd21
# zpool status rzpool
pool: rzpool
e
l
c
a
r
O
state: ONLINE
scrub: none requested
config:
        NAME                       STATE
        rzpool                     ONLINE
          raidz1-0                 ONLINE
            c1t226000C0FFA001ABd20 ONLINE
            c2t216000C0FF8001ABd20 ONLINE
            c1t226000C0FFA001ABd21 ONLINE
errors: No known data errors
4.
Run the zpool status command with the -x option. What does this command report?
# zpool status -x
all pools are healthy
The command reports that all pools are healthy.
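Note: Because zpool status -x prints a fixed string when everything is healthy, a monitoring script can test for it directly. A minimal sketch (not a lab step), with the command's output hard-coded so that it runs anywhere:

```shell
# Sketch: act on the output of "zpool status -x". The healthy-case
# string is the one the command prints when no pool has a problem.
check_pools() {
    if [ "$1" = "all pools are healthy" ]; then
        echo "OK"
    else
        echo "ATTENTION: $1"
    fi
}
check_pools "all pools are healthy"    # prints OK
```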
5.
Create two ZFS file systems named mirpool/fs1 and rzpool/fs1.
# zfs create mirpool/fs1
# zfs create rzpool/fs1
# zfs mount | grep fs1
mirpool/fs1 /mirpool/fs1
rzpool/fs1 /rzpool/fs1
6.
7.
Tasks
1.
2.
3.
4.
Note: Remember to separate the left and right columns in /etc/syslog.conf with
tabs only.
FMA automatically writes ZFS and hardware-related messages to the
/var/adm/messages file. This task ensures that a log of all messages is available
during the lab exercise.
Use the svcadm command to restart the syslog service.
# svcadm restart svc:/system/system-log:default
Tasks
1.
Use tar tvf to verify that you can read the content of /mirpool/fs1/archive.tar.
# tar tvf /mirpool/fs1/archive.tar
drwxr-xr-x 0/2 0 Oct 13 14:15 2010 /usr/lib/
drwxr-xr-x 0/2 0 Oct 13 14:09 2010 /usr/lib/class/
drwxr-xr-x 0/2 0 Oct 13 13:55 2010 /usr/lib/class/FX/
(output omitted)
2.
Display the status of mirpool and verify that the pool and all of its devices are in the
ONLINE state.
# zpool status mirpool
pool: mirpool
state: ONLINE
scrub: none requested
config:
        NAME                       STATE
        mirpool                    ONLINE
          mirror-0                 ONLINE
            c1t226000C0FFA001ABd3  ONLINE
            c2t216000C0FF8001ABd3  ONLINE
          mirror-1                 ONLINE
            c1t226000C0FFA001ABd4  ONLINE
            c2t216000C0FF8001ABd4  ONLINE
          mirror-2                 ONLINE
            c1t226000C0FFA001ABd5  ONLINE
            c2t216000C0FF8001ABd5  ONLINE
        spares
          c1t226000C0FFA001ABd19   AVAIL
          c2t216000C0FF8001ABd19   AVAIL
3.
Ask your instructor to disable access to the first disk in the first mirror device in mirpool to
simulate the spare activation process. Use the output of the previous command to identify
the disk. The first three characters of the target WWN and the LUN number are sufficient to
identify the disk for your instructor. In this example, you could specify your disk as t226,
d3. Take no further action until your instructor indicates that the disk is unavailable.
4.
Display the status of mirpool. You may need to run the zpool status command more
than once before you see a change in the pool's status. Or, use the zpool scrub
command to access all the devices in the pool. What is the state of the pool and the
devices within it?
# zpool status mirpool
  pool: mirpool
 state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using zpool online.
see: http://www.sun.com/msg/ZFS-8000-D3
scrub: resilver completed after 0h0m with 0 errors on Wed Oct 13 09:56:59
2010
config:
        NAME                         STATE
        mirpool                      DEGRADED
          mirror-0                   DEGRADED
            spare                    DEGRADED
              c1t226000C0FFA001ABd3  UNAVAIL
              c1t226000C0FFA001ABd19 ONLINE
            c2t216000C0FF8001ABd3    ONLINE
          mirror-1                   ONLINE
            c1t226000C0FFA001ABd4    ONLINE
            c2t216000C0FF8001ABd4    ONLINE
          mirror-2                   ONLINE
            c1t226000C0FFA001ABd5    ONLINE
            c2t216000C0FF8001ABd5    ONLINE
        spares
          c1t226000C0FFA001ABd19     INUSE     currently in use
          c2t216000C0FF8001ABd19     AVAIL
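Note: The failed disk can also be pulled out of saved status text by matching on the STATE column. A small sketch (not a lab step), using sample rows modeled on the output above:

```shell
# Sketch: print the name of any device whose STATE column is UNAVAIL.
find_unavail() {
    awk '$2 == "UNAVAIL" { print $1 }'
}
find_unavail <<'EOF'
  c1t226000C0FFA001ABd3  UNAVAIL
  c1t226000C0FFA001ABd19 ONLINE
EOF
```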
The pool and the first mirror device are in the DEGRADED state. The first disk in the first
mirror device is in the UNAVAIL state, and the spare disk has replaced the unavailable
disk.
5.
Examine the content of /var/adm/messages.fmd. What is the severity of the event
recorded by the fault manager daemon in this file? Does the URL listed match the URL
found in the zpool status command output?
# more /var/adm/messages.fmd
(output omitted)
Oct 13 14:06:24 host01 fmd: [ID 377184 daemon.error] SUNW-MSG-ID: ZFS-8000-D3,
TYPE: Fault, VER: 1, SEVERITY: Major
Oct 13 14:06:24 host01 EVENT-TIME: Wed Oct 13 14:06:24 PDT 2010
Oct 13 14:06:24 host01 PLATFORM: SUNW,Sun-Fire-V210, CSN: -, HOSTNAME: host01
Oct 13 14:06:24 host01 SOURCE: zfs-diagnosis, REV: 1.0
6.
7.
8.
9.
Clear the error statistics for mirpool, and verify that the error values have been reset to
zero.
# zpool clear mirpool
# zpool status mirpool
        NAME                       STATE
        mirpool                    ONLINE
          mirror-0                 ONLINE
            c1t226000C0FFA001ABd0  ONLINE
            c2t216000C0FF8001ABd0  ONLINE
          mirror-1                 ONLINE
            c1t226000C0FFA001ABd1  ONLINE
            c2t216000C0FF8001ABd1  ONLINE
          mirror-2                 ONLINE
            c1t226000C0FFA001ABd2  ONLINE
            c2t216000C0FF8001ABd2  ONLINE
        spares
          c1t226000C0FFA001ABd22   AVAIL
          c2t216000C0FF8001ABd22   AVAIL
Tasks
1.
Use tar tvf to verify that you can read the content of /rzpool/fs1/archive.tar.
# tar tvf /rzpool/fs1/archive.tar
drwxr-xr-x 0/2 0 Oct 13 14:15 2010 /usr/lib/
drwxr-xr-x 0/2 0 Oct 13 14:09 2010 /usr/lib/class/
drwxr-xr-x 0/2 0 Oct 13 13:55 2010 /usr/lib/class/FX/
(output omitted)
2.
Display the status of rzpool and verify that the pool and all of its devices are in the
ONLINE state.
# zpool status rzpool
pool: rzpool
state: ONLINE
scrub: none requested
config:
        NAME                       STATE     READ WRITE CKSUM
        rzpool                     ONLINE       0     0     0
          raidz1-0                 ONLINE       0     0     0
            c1t226000C0FFA001ABd20 ONLINE       0     0     0
            c2t216000C0FF8001ABd20 ONLINE       0     0     0
            c1t226000C0FFA001ABd21 ONLINE       0     0     0
errors: No known data errors
3.
Ask your instructor to disable access to the first disk in the RAID-Z device in rzpool. Use
the output of the previous command to identify the disk. The first three characters of the
target WWN and the LUN number are sufficient to identify the disk for your instructor. In
this example, you could specify your disk as t226, d20. Take no further action until your
instructor indicates that the disk is unavailable.
4.
Display the status of rzpool. You may need to run the zpool status command more
than once before you see a change in the pool's status. What is the state of the pool and
the devices within it? What read or write errors are reported?
# zpool status rzpool
  pool: rzpool
 state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using zpool online.
see: http://www.sun.com/msg/ZFS-8000-D3
scrub: none requested
config:
        NAME                       STATE     READ WRITE CKSUM
        rzpool                     DEGRADED     0     0     0
          raidz1-0                 DEGRADED     0     0     0
            c1t226000C0FFA001ABd20 UNAVAIL      0     0     0  cannot open
            c2t216000C0FF8001ABd20 ONLINE       0     0     0
            c1t226000C0FFA001ABd21 ONLINE       0     0     0
errors: No known data errors
The pool and the RAID-Z device are in the DEGRADED state. The first disk in the
RAID-Z device is in the UNAVAIL state, and the additional information indicates that the
system cannot open the disk.
5.
In a Web browser, use the URL listed in the output from the previous command to display
diagnostic information for the current state of rzpool. What is the severity of the problem
described by the article? What action does it recommend?
The article describes the failing device error as major. The article recommends running
zpool status -x, offers advice about how to identify a faulty device, and
recommends further action.
6.
Examine the content of /var/adm/messages.fmd. What is the severity of the event
recorded by the fault manager daemon in this file?
# more /var/adm/messages.fmd
(output omitted)
Oct 13 14:19:05 host01 fmd: [ID 377184 daemon.error] SUNW-MSG-ID: ZFS-8000-D3,
TYPE: Fault, VER: 1, SEVERITY: Major
Oct 13 14:19:05 host01 EVENT-TIME: Wed Oct 13 14:19:05 PDT 2010
Oct 13 14:19:05 host01 PLATFORM: SUNW,Sun-Fire-V210, CSN: -, HOSTNAME: host01
Oct 13 14:19:05 host01 SOURCE: zfs-diagnosis, REV: 1.0
Oct 13 14:19:05 host01 EVENT-ID: 44facf03-71a3-c90e-e85a-a9532c9a5095
Oct 13 14:19:05 host01 DESC: A ZFS device failed. Refer to
http://sun.com/msg/ZFS-8000-D3 for more information.
Oct 13 14:19:05 host01 AUTO-RESPONSE: No automated response will occur.
Oct 13 14:19:05 host01 IMPACT: Fault tolerance of the pool may be compromised.
Oct 13 14:19:05 host01 REC-ACTION: Run zpool status -x and replace the bad
device.
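Note: The SEVERITY field of an FMA message line can be extracted with sed. A small sketch (not a lab step), using a sample line modeled on the output above:

```shell
# Sketch: print the text after "SEVERITY: " on a matching message line.
severity_of() {
    sed -n 's/.*SEVERITY: //p'
}
echo 'TYPE: Fault, VER: 1, SEVERITY: Major' | severity_of    # prints Major
```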
7.
8.
9.
Display the status of rzpool, and use the -x option. Make note of any changes in error
statistics.
# zpool status -x rzpool
  pool: rzpool
 state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using zpool online.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: none requested
config:
        NAME                       STATE     READ WRITE CKSUM
        rzpool                     DEGRADED     0     0     0
          raidz1-0                 DEGRADED     0     0     0
            c1t226000C0FFA001ABd20 UNAVAIL      0    45     0  cannot open
            c2t216000C0FF8001ABd20 ONLINE       0     0     0
            c1t226000C0FFA001ABd21 ONLINE       0     0     0
errors: No known data errors
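Note: The WRITE error column in this output can be totaled mechanically. A small sketch (not a lab step) that sums the column from sample rows modeled on the output above:

```shell
# Sketch: sum the WRITE column (field 4) for device rows; rows whose
# error fields are not numeric (such as the header) are skipped.
sum_write_errors() {
    awk '$3 ~ /^[0-9]+$/ && $4 ~ /^[0-9]+$/ { sum += $4 } END { print sum + 0 }'
}
sum_write_errors <<'EOF'
NAME                       STATE     READ WRITE CKSUM
rzpool                     DEGRADED     0     0     0
  c1t226000C0FFA001ABd20   UNAVAIL      0    45     0  cannot open
EOF
```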
10. Display the list of your disks from /opt/ses/lab/zfs/my_disks. Choose one of your
9-GB disks to replace the disk that the instructor disabled for you.
# cat /opt/ses/lab/zfs/my_disks
11. Use the zpool replace command to replace the disabled disk in rzpool.
# zpool replace rzpool c1t226000C0FFA001ABd20
c2t216000C0FF8001ABd21
12. Use the zpool status command to monitor the replacement process and verify that it
completes successfully.
# zpool status rzpool
  pool: rzpool
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h0m, 50.39% done, 0h0m to go
config:
        NAME                         STATE     READ WRITE CKSUM
        rzpool                       ONLINE       0     0     0
          raidz1-0                   ONLINE       0     0     0
            replacing
              c1t226000C0FFA001ABd20 ONLINE       0     0     0
              c2t216000C0FF8001ABd21 ONLINE       0     0     0  412M resilvered
            c2t216000C0FF8001ABd20   ONLINE       0     0     0
            c1t226000C0FFA001ABd21   ONLINE       0     0     0
errors: No known data errors
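Note: A script polling zpool status during a replacement might extract the percent-done figure from the scrub line. A small sketch (not a lab step), fed a sample line modeled on the output above:

```shell
# Sketch: extract the percent-done value from a resilver progress line.
resilver_pct() {
    sed -n 's/.*resilver in progress for [^,]*, \([0-9.]*\)% done.*/\1/p'
}
echo ' scrub: resilver in progress for 0h0m, 50.39% done, 0h0m to go' | resilver_pct
```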
# zpool status rzpool
pool: rzpool
state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Wed Oct 13 15:12:59 2010
config:
        NAME                       STATE     READ WRITE CKSUM
        rzpool                     ONLINE       0     0     0
          raidz1-0                 ONLINE       0     0     0
            c2t216000C0FF8001ABd21 ONLINE       0     0     0  821M resilvered
            c2t216000C0FF8001ABd20 ONLINE       0     0     0
            c1t226000C0FFA001ABd21 ONLINE       0     0     0
Copyright 2010, Oracle and/or its affiliates. All rights reserved.
Note that the write error count has been cleared. The disk that had incurred the errors
has been replaced, and the new disk has no error counted against it. The resilver
process has reconstructed all of the data onto the new disk.
13. Scrub rzpool to check for data integrity errors.
# zpool scrub rzpool
14. Display the status of rzpool. Did the scrub operation complete successfully?
# zpool status rzpool
pool: rzpool
state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Wed Oct 13 15:12:59 2010
config:
        NAME                       STATE     READ WRITE CKSUM
        rzpool                     ONLINE       0     0     0
          raidz1-0                 ONLINE       0     0     0
            c2t216000C0FF8001ABd21 ONLINE       0     0     0
            c2t216000C0FF8001ABd20 ONLINE       0     0     0
            c1t226000C0FFA001ABd21 ONLINE       0     0     0
errors: No known data errors
Chapter 9
Preparation
This exercise uses both the existing pools, rpool and mirpool. You will need access to the
/opt/ses/lab/zfs/zone_template script.
Your instructor will indicate what IP address to use for the non-global zone that you will create in
this exercise.
If the systems used for this exercise have been re-initialized, it will be necessary to run the
make_disk_list script in /opt/ses/lab/zfs. Your instructor will indicate if this is
necessary and will assign disks to you before you run the script.
Tasks
1.
Use the zfs command to review the root pool components that were created when the
system was migrated to a ZFS root file system by using Solaris Live Upgrade.
# zfs list -r rpool
NAME              USED  AVAIL  REFER  MOUNTPOINT
rpool            7.68G  59.3G    98K  /rpool
rpool/ROOT       5.18G  59.3G    21K  /rpool/ROOT
rpool/ROOT/zfsBE 5.18G  59.3G  5.18G  /
rpool/dump       2.00G  59.3G  2.00G  -
rpool/swap        517M  59.8G    16K  -
2.
Use the zfs get command to identify the properties of the rpool/dump device. Identify
the type of dataset and the volsize property of this device.
# zfs get volsize rpool/dump
NAME        PROPERTY  VALUE  SOURCE
rpool/dump  volsize   2G     -
4.
Use the dumpadm command and the swap -l command to confirm that these devices are
the active dump and swap devices.
# dumpadm
Dump content: kernel pages
Dump device: /dev/zvol/dsk/rpool/dump (dedicated)
Savecore directory: /var/crash/host01
Savecore enabled: yes
Save compressed: on
# swap -l
swapfile                  dev     swaplo   blocks     free
/dev/zvol/dsk/rpool/swap  256,1       16  1058800  1058800
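Note: swap -l reports sizes in 512-byte blocks, so the 1058800 blocks shown above work out to roughly the 517M that zfs list reports for rpool/swap (integer division gives 516). A small sketch (not a lab step) of the conversion:

```shell
# Sketch: convert a block count from swap -l (512-byte blocks) to MB.
blocks_to_mb() {
    echo $(( $1 * 512 / 1024 / 1024 ))
}
blocks_to_mb 1058800    # prints 516
```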
Use the zfs set command to resize the rpool/dump device to 3 GB in size. This step
might take some time. Then confirm that the rpool/dump device size has increased.
# zfs set volsize=3g rpool/dump
# zfs get volsize rpool/dump
NAME        PROPERTY  VALUE  SOURCE
rpool/dump  volsize   3G     -
Tasks
1.
Use the ifconfig -a command to display your system's network interface configuration
information. Make note of the interface type used as your primary network interface (bge0
in the example below), and the IP address it uses.
# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.201.25 netmask ffffff00 broadcast 192.168.201.255
ether 0:3:ba:59:94:15
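Note: The interface name needed for the zone template's set physical= line can be picked out of ifconfig -a output mechanically. A small sketch (not a lab step), with sample output standing in for the real command:

```shell
# Sketch: print the first non-loopback interface name from ifconfig -a
# style output. The here-document below is sample text, not live output.
primary_iface() {
    awk -F: '/^[a-z]/ && $1 != "lo0" { print $1; exit }'
}
primary_iface <<'EOF'
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
EOF
```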
2.
3.
4.
Change the physical device in the line that reads set physical=bge0 so that it matches
the physical network device in use as the primary interface on your system.
Create a new ZFS file system named /rpool/zones.
6.
Use the zonecfg command to configure a non-global zone named zone1, using the
information in /opt/ses/lab/zfs/zone_template.
# zonecfg -z zone1 -f /opt/ses/lab/zfs/zone_template
7.
Use the zoneadm command to install zone1. Zone installation takes different amounts of
time on different types of systems.
# zoneadm -z zone1 install
A ZFS file system has been created for this zone.
Preparing to install zone <zone1>.
Creating list of files to copy from the global zone.
Copying <8952> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1261> packages on the zone.
Initialized <1261> packages on zone.
Zone <zone1> is initialized.
The file </rpool/zones/zone1/root/var/sadm/system/logs/install_log>
contains a log of the zone installation.
y
m
8.
Use the zoneadm command to boot zone1. The initial zone boot process can take a few
minutes to complete.
# zoneadm -z zone1 boot
9.
Use zlogin -C to connect to the console of zone1, and respond to the system
identification questions that are presented. Be certain to select a locale and terminal type
that is appropriate for your system. Use the vt100 terminal type.
Note: If you are connected to your system remotely, it may be useful to specify an
alternate escape character for zlogin console connections. Do this to avoid inadvertently
disconnecting your remote connection. For example, to use the caret symbol instead of
the tilde symbol (the default), you would use: zlogin -C -e \^ zone1
# zlogin -C zone1
[Connected to zone zone1 console]
146/146
Reading ZFS config: done.
Select a Language
0. English
1. es
2. fr
Select a Locale
0. English (C - 7-bit ASCII)
1. Canada (English) (UTF-8)
2. Canada-English (ISO8859-1)
3. U.S.A. (UTF-8)
4. U.S.A. (en_US.ISO8859-1)
5. U.S.A. (en_US.ISO8859-15)
6. Go Back to Previous Screen
10. Exit the zlogin console session. If you specified an alternate escape character in your
zlogin command line, enter it instead of the tilde character.
zone1 console login: ~.
[Connection to zone zone1 console closed]
11. Use the zoneadm list command to verify that zone1 is in the running state.
# zoneadm list -cv
  ID NAME     STATUS     PATH                 BRAND    IP
   0 global   running    /                    native   shared
   1 zone1    running    /rpool/zones/zone1   native   shared
Tasks
1.
2.
Set the mountpoint property for mirpool/fs1 to legacy, and use zfs list to display
the list of available ZFS file systems. What mount point is listed for mirpool/fs1?
# zfs set mountpoint=legacy mirpool/fs1
# zfs list -r mirpool
NAME          USED  AVAIL  REFER  MOUNTPOINT
mirpool       161K  22.9G    23K  /mirpool
mirpool/fs1    21K  22.9G    21K  legacy
The mirpool/fs1 file system lists the legacy setting instead of a mount point.
3.
Use the zonecfg command to add the mirpool/fs1 file system to the zone1 non-global
zone. Set mirpool/fs1 to mount as /shared/fs1 in the non-global zone.
# zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set type=zfs
zonecfg:zone1:fs> set special=mirpool/fs1
zonecfg:zone1:fs> set dir=/shared/fs1
zonecfg:zone1:fs> end
zonecfg:zone1> exit
4.
Display the zoned property for all mirpool file systems. What is the value of this property
for mirpool/fs1?
# zfs get -r zoned mirpool
NAME         PROPERTY  VALUE  SOURCE
mirpool      zoned     off    default
mirpool/fs1  zoned     off    default
The zoned property for mirpool/fs1 is set to off.
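Note: The VALUE column of zfs get output can be read for a single dataset with awk. A small sketch (not a lab step), with sample output standing in for the real command:

```shell
# Sketch: print the VALUE column for one dataset from "zfs get" output.
zoned_value() {
    awk -v ds="$1" '$1 == ds { print $3 }'
}
zoned_value mirpool/fs1 <<'EOF'
mirpool      zoned     off    default
mirpool/fs1  zoned     off    default
EOF
```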
5.
6.
Display the zoned property for all mirpool file systems. What is the value of this property
for mirpool/fs1?
# zfs get -r zoned mirpool
NAME         PROPERTY  VALUE  SOURCE
mirpool      zoned     off    default
mirpool/fs1  zoned     off    default
7.
8.
9.
Use the df -h command to display space usage information for mirpool/fs1. What
mount point is listed for the file system, and how much space is available in it?
# df -h mirpool/fs1
Filesystem    size  used  avail capacity  Mounted on
mirpool/fs1    23G   21K    23G      1%   /shared/fs1
10. Change directory to /shared/fs1 and create a new file called file1. Verify that file1
exists.
# cd /shared/fs1
# touch file1
# ls
file1
11. Exit your login session to zone1 to return to the global zone.
# exit
[Connection to zone zone1 pts/2 closed]
Tasks
1.
Create a new ZFS file system named mirpool/fs2 and verify that it exists.
# zfs create mirpool/fs2
# zfs list -r mirpool
NAME          USED  AVAIL  REFER  MOUNTPOINT
mirpool       186K  22.9G    21K  /mirpool
mirpool/fs1    21K  22.9G    21K  legacy
mirpool/fs2    21K  22.9G    21K  /mirpool/fs2
2.
Set the mountpoint property for the mirpool/fs2 file system to /fsb/fs2. Verify that
the source of the mountpoint property for this file system is now local.
# zfs set mountpoint=/fsb/fs2 mirpool/fs2
# zfs get mountpoint mirpool/fs2
NAME         PROPERTY    VALUE     SOURCE
mirpool/fs2  mountpoint  /fsb/fs2  local
3.
Use the zonecfg command to add mirpool/fs2 as a dataset to the zone1 non-global
zone.
# zonecfg -z zone1
zonecfg:zone1> add dataset
zonecfg:zone1:dataset> set name=mirpool/fs2
zonecfg:zone1:dataset> end
zonecfg:zone1> exit
4.
Display the zoned property for all mirpool file systems. What is the value of this property
for mirpool/fs2?
# zfs get -r zoned mirpool
NAME         PROPERTY  VALUE  SOURCE
mirpool      zoned     off    default
mirpool/fs1  zoned     off    default
mirpool/fs2  zoned     off    default
The zoned property for mirpool/fs2 is set to off.
5.
6.
Display the zoned property for descendent mirpool file systems. What is the value of this
property for mirpool/fs2?
# zfs get -r zoned mirpool
NAME         PROPERTY  VALUE  SOURCE
mirpool      zoned     off    default
mirpool/fs1  zoned     off    default
mirpool/fs2  zoned     on     local
8.
Use the df -h command to display space usage information for mirpool/fs2. What
mount point is listed for mirpool/fs2, and from what source is it derived?
# df -h mirpool/fs2
Filesystem    size  used  avail capacity  Mounted on
mirpool/fs2    23G   23K    23G      1%   /fsb/fs2
The mirpool/fs2 file system is mounted as /fsb/fs2. This mount point is derived
from the mountpoint property for this file system.
9.
Use the zfs list command to display the ZFS file systems that are visible in zone1.
Verify that mirpool and mirpool/fs2 are listed.
# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
mirpool       195K  22.9G    21K  /mirpool
mirpool/fs2    21K  22.9G    21K  /fsb/fs2
10. Attempt to set the compression property for the mirpool file system to on. What
happens and why?
# zfs set compression=on mirpool
cannot set compression for mirpool: permission denied
The attempt fails because the mirpool file system has not been delegated to zone1.
11. Set the compression property for the mirpool/fs2 file system to on.
# zfs set compression=on mirpool/fs2
This attempt succeeds because the file system has been delegated to zone1.
12. Exit your login session to zone1 to return to the global zone.
# exit
[Connection to zone zone1 pts/1 closed]
Tasks
1.
2.
3.
4.