
10/23/2020 Document 2018655.1
Copyright (c) 2020, Oracle. All rights reserved. Oracle Confidential.

Solaris Volume Manager (SVM) How to 'growfs' an UFS on Top of a SVM mirror/concat/soft
partition by Expanding an Existing LUN (Doc ID 2018655.1)

In this Document

Goal
Solution
The general approach would be:
Example A: Growing a mirror with downtime
Example B: Growing a concat/stripe with downtime
Example E: Growing a soft partition with downtime
References

APPLIES TO:

Solaris Cluster - Version 3.0 to 4.3 [Release 3.0 to 4.3]


Sun Solaris Volume Manager (SVM) - Version 11.9.0 to 11.11 [Release 11.0]
Oracle Solaris on x86 (32-bit)
Oracle Solaris on SPARC (64-bit)
Oracle Solaris on SPARC (32-bit)
Oracle Solaris on x86-64 (64-bit)

GOAL

Solstice DiskSuite (SDS) and Solaris Volume Manager (SVM) have traditionally allowed you to increase the amount of
available space on a mirror, concat, RAID 5, or soft partition.
The aim of this document is to provide practical examples of how to increase the file system (fs) size by expanding
the LUNs used for metadevices in the possible configurations.

Before increasing the fs size, you need to grow the underlying layer of the metadevice by adding new free space to it.
In this document, free space is added by expanding the existing LUNs at the storage level.
Such a change can be achieved *only* offline, with the fs unmounted.

The reason is that there is no way to make SVM aware of the new size of the partition/LUN. To increase the size of the
SVM metadevice and filesystem, you will have to remove the metadevice and recreate it afterwards.

The subsequent growing of the file system will ONLY work if the amount of space added does not alter the
existing drive geometry or structure of the disk label. If the disk label gets altered in any way other than by
appending more sectors to the existing structure, the data must be restored afterwards.
One example of such a change, where no fs expansion is possible, is a LUN expansion that forces a transition from an SMI to
an EFI label. (On Solaris 10, anything greater than 2 TB requires an EFI label.)

The following are the different metadevice types you may deal with:

A) mirror
B) concat
C) stripe
D) raid 5
E) soft partition

If you have either a stripe (C) or a RAID 5 (D) metadevice, you cannot use this procedure, because the
recreation of the metadevice will destroy the data (RAID 5 and stripe metadevices need to be initialized). Therefore no
examples are available for (C) and (D) in this doc.
There are further limits to growing the fs on an SVM mirrored root. Details in:
Document 1012206.1 Solaris Volume Manager How To Grow SVM (SDS) Mirrored root


If you want to expand your fs online by adding a new LUN, you can refer to the following document:
Document 2018654.1 Solaris Volume Manager (SVM) How to 'growfs' an UFS on Top of a SVM mirror/concat/raid5/soft
partition by Adding a New LUN

It is always recommended to have a valid backup of the filesystem available before making such
changes.

SOLUTION

The general approach would be:

The very first step is to extend the LUN on the storage side. To do this, please refer to the documentation of the
storage vendor.
Any filesystems on the disk must be unmounted first.
Auto-detect the new disk size: run format, select the disk, and set the type to 'Auto configure'.
Modify/verify the slice to the required size in the partition table.
Relabel the disk/LUN.
Recreate the SVM configuration.
Mount the UFS filesystem.
Grow the UFS filesystem.
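Put together as a single command sequence, the steps above look roughly like this (a minimal sketch only; the disk c1t3d0, slice s0, metadevice d80, and mount point /mnt are example names taken from the examples in this document, and the LUN is assumed to have already been expanded on the storage side):

```shell
# Unmount the filesystem that sits on top of the metadevice
umount /mnt

# Let Solaris detect the new LUN size and relabel the disk:
# inside the interactive 'format' utility select the disk,
# choose 'type' -> '0. Auto configure', adjust the slice in the
# 'partition' menu, then 'label' the disk
format c1t3d0

# Recreate the SVM configuration (here: a simple one-way concat)
metaclear d80
metainit d80 1 1 c1t3d0s0

# Mount the UFS filesystem again and grow it into the new space
mount /dev/md/dsk/d80 /mnt
growfs -M /mnt /dev/md/rdsk/d80
```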

Within this approach, take care of:

a) The following Alert, when reflecting the changes at the Solaris level:

Document 1382180.1 Solaris Does Not Automatically Handle an Increase in LUN Size

b) In case you are dealing with an SVM diskset/metaset and Solaris Cluster, you must re-partition the LUN keeping
the slice 7 configuration as it was, to prevent loss of the SVM diskset replica. Also remember that the meta commands require the
"-s <disksetname>" option.
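For instance, inside a diskset the metaclear/metainit/metastat steps shown in the examples below would each take the "-s" option (a sketch only; the set name "myset" is a placeholder, and in a Solaris Cluster diskset the slices are normally addressed through their DID paths, here the hypothetical /dev/did/rdsk/d5s0):

```shell
# All meta commands must name the diskset with -s <disksetname>
metaclear -s myset -r d80
metainit  -s myset d80 1 1 /dev/did/rdsk/d5s0
metastat  -s myset -c d80
```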

After the Disk/LUN is unmounted, expanded, and relabeled, start with one of the following examples to grow
the filesystem.

Example A: Growing a mirror with downtime

This example will use the mirror metadevice d80, which is 1GB in size.

The mirror metadevice is using the disk partitions c1t3d0s0 and c2t3d0s0, which each have a size of 1GB.

# metastat d80
d80: Mirror
Submirror 0: d81
State: Okay
Submirror 1: d82
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 2104515 blocks (1.0 GB)

d81: Concat/Stripe
Size: 2104515 blocks (1.0 GB) <=== 1GB of size
Stripe 0:
Device Start Block Dbase Reloc
c1t3d0s0 0 No Yes

d82: Concat/Stripe
Size: 2104515 blocks (1.0 GB) <=== 1GB of size
Stripe 0:
Device Start Block Dbase Reloc
c2t3d0s0 0 No Yes

Details of the VTOCs before and after the LUN expansion:

The prtvtoc of the disks BEFORE the change:

# prtvtoc /dev/rdsk/c1t3d0s0
* /dev/rdsk/c1t3d0s0 partition map
*
....(output truncated for brevity)
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 0 00 417690 2104515 2522204 <=== Original size of 1GB
(2104515 sectors)

# prtvtoc /dev/rdsk/c2t3d0s0
* /dev/rdsk/c2t3d0s0 partition map
*
....(output truncated for brevity)
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 0 00 417690 2104515 2522204 <=== Original size of 1GB
(2104515 sectors)

The prtvtoc of the disk AFTER the change:

# prtvtoc /dev/rdsk/c1t3d0s0
* /dev/rdsk/c1t3d0s0 partition map
*
....(output truncated for brevity)
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 0 00 417690 3148740 3566430 <=== Size increased to
1.5GB (3148740 sectors)

# prtvtoc /dev/rdsk/c2t3d0s0
* /dev/rdsk/c2t3d0s0 partition map
*
....(output truncated for brevity)
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 0 00 417690 3148740 3566430 <=== Size increased to
1.5GB (3148740 sectors)

After the change, the metastat command will still show the original size of the metadevice.

# metastat -c d80
d80 m 1.0GB d81 d82 <=== size unchanged despite the increase of the partition
d81 s 1.0GB c1t3d0s0
d82 s 1.0GB c2t3d0s0

Now delete the metadevice in order to reflect this change to SVM.

# metaclear -r d80
d80: Mirror is cleared
d81: Concat/Stripe is cleared
d82: Concat/Stripe is cleared

Remember: the filesystem is still unmounted at this point.

Then recreate the metadevice.

# metainit d81 1 1 c1t3d0s0
d81: Concat/Stripe is setup

# metainit d82 1 1 c2t3d0s0
d82: Concat/Stripe is setup
# metainit -m d80 d81
# metattach d80 d82

Now metastat shows the new size of 1.5GB.

# metastat -c d80
d80 m 1.5GB d81 d82
d81 s 1.5GB c1t3d0s0
d82 s 1.5GB c2t3d0s0

Mount the fs back (mount /dev/md/dsk/d80 /mnt) and the df command shows:

# df -h /mnt
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d80 996M 17M 919M 2% /mnt

The metadevice is grown and the file system is still there after the change of the size. But notice that the size of the
filesystem still reflects the original size of the metadevice (1.0 GB).

Now you are ready to grow the file system.

# growfs -M /mnt /dev/md/rdsk/d80


/dev/md/rdsk/d80: 3148740 sectors in 209 cylinders of 240 tracks, 63 sectors
1537.5MB in 35 cyl groups (6 c/g, 44.30MB/g, 10688 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 90816, 181600, 272384, 363168, 453952, 544736, 635520, 726304, 817088, 2269632, 2360416,
2451200, 2541984, 2632768, 2723552, 2814336, 2905120, 2995904, 3086688

And 'df -h' now reports the increased size of 1.5GB for the filesystem as well.

# df -h /mnt
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d80 1.5G 18M 1.4G 2% /mnt

Example B: Growing a concat/stripe with downtime

This example will use the metadevice d80, which is 1GB in size.

The concat metadevice is using the disk partition c1t3d0s0, which has a size of 1GB.

# metastat d80
d80: Concat/Stripe
Size: 2104515 blocks (1.0 GB) <=== 1GB of size
Stripe 0:
Device Start Block Dbase Reloc
c1t3d0s0 0 No Yes

Details about vtocs before and after the LUN expansion.


The prtvtoc of the disk BEFORE the change:

# prtvtoc /dev/rdsk/c1t3d0s0
* /dev/rdsk/c1t3d0s0 partition map
*
....(output truncated for brevity)
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 0 00 417690 2104515 2522204 <=== Original size of 1GB
(2104515 sectors)
1 0 00 2522205 2104515 4626719

The prtvtoc of the disk AFTER the change:


# prtvtoc /dev/rdsk/c1t3d0s0
* /dev/rdsk/c1t3d0s0 partition map
*
....(output truncated for brevity)
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 0 00 417690 3148740 3566430 <=== Size increased to
1.5GB (3148740 sectors)
1 0 00 3566431 1060288 4626719

After increasing the slice, the metastat command will still show the original size of the metadevice.

# metastat -c d80
d80 s 1.0GB c1t3d0s0 <=== size unchanged despite the increase of the partition

Now delete the metadevice in order to reflect this change to SVM.

# metaclear -r d80
d80: Concat/Stripe is cleared

Remember: the filesystem is still unmounted at this point.

Then recreate the metadevice.

# metainit d80 1 1 c1t3d0s0
d80: Concat/Stripe is setup

Now metastat shows the new size of 1.5GB.

# metastat -c
d80 s 1.5GB c1t3d0s0 <=== 1.5GB of size

Mount the fs back (mount /dev/md/dsk/d80 /mnt) and verify with the df command.

# df -h /mnt
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d80 996M 17M 919M 2% /mnt

The metadevice is grown and the file system is still there after the change of the size. But notice that the size of the
filesystem still reflects the original size of the metadevice (1.0 GB).

Now you are ready to grow the file system.

# growfs -M /mnt /dev/md/rdsk/d80


/dev/md/rdsk/d80: 3148740 sectors in 209 cylinders of 240 tracks, 63 sectors
1537.5MB in 35 cyl groups (6 c/g, 44.30MB/g, 10688 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 90816, 181600, 272384, 363168, 453952, 544736, 635520, 726304, 817088, 2269632, 2360416,
2451200, 2541984, 2632768, 2723552, 2814336, 2905120, 2995904, 3086688

And 'df -h' now reports the increased size of 1.5GB for the filesystem as well.

# df -h /mnt
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d80 1.5G 18M 1.4G 2% /mnt

Example E: Growing a soft partition with downtime


Soft partitions can be placed directly above a disk slice, or on top of a mirror, stripe, or RAID-5 volume.

a) If you have a soft partition on top of a metadevice, then follow "Example E" in:
Document 2018654.1 Solaris Volume Manager (SVM) How to 'growfs' an UFS on Top of a SVM mirror/concat/raid5/soft
partition by Adding a New LUN (Doc ID 2018654.1)

b) If the soft partition is on top of a physical disk slice and you have previously extended the disk/LUN at the storage level
as described above, then check the amount of free space by following:
Document 1002219.1 Solaris Volume Manager (SVM)/Solstice DiskSuite (SDS): How much Free Space is Available with soft
partitions?

This example shows how to grow a 2GB soft partition (d100) on a physical disk slice by an additional 4GB.

# metastat d100
d100: Soft Partition
Device: c0t8d0s1
State: Okay
Size: 4194304 blocks (2.0 GB)
Device Start Block Dbase Reloc
c0t8d0s1 0 No Yes

Extent Start Block Block count
0 1 4194304

Run the metattach command to add the 4GB.

# metattach d100 4gb
d100: Soft Partition has been grown

Check the status again to verify that the additional 'Extent' has been added.

# metastat d100
d100: Soft Partition
Device: c0t8d0s1
State: Okay
Size: 12582912 blocks (6.0 GB)
Device Start Block Dbase Reloc
c0t8d0s1 0 No Yes

Extent Start Block Block count
0 1 4194304
1 8388611 8388608

Now the filesystem can be grown.

# growfs /dev/md/rdsk/d100

After mounting the file system, check the increased size with the df command.

# df -h /data

More information on growing an SVM soft partition is available in:

Document 1417827.1 Solaris Volume Manager (SVM): Best Practices for Creation and Implementation of Soft Partitions

Finally, during any of these procedures, whatever the type of metadevice being grown, the file system must be
unmounted, and therefore all applications accessing it must be stopped.

The following previously existing documents were included in this document:

Doc IDs: 1451858.1, 1444612.1, 1608729.1
