

Migration using SnapMirror

Posted in Uncategorized, by StorageSolutions

Volume migration using SnapMirror

ONTAP SnapMirror is designed to be a simple, reliable, and cheap tool to facilitate disaster recovery for business-critical applications. It ships with ONTAP but has to be licensed before use. Apart from DR, SnapMirror is extremely useful in situations like:

1. Aggregates or volumes have reached their maximum size limit.
2. You need to change a volume's disk type (tiering).

I experienced a similar situation last week. My filer runs ONTAP 7.2.3 (which is due for an upgrade). The maximum aggregate size on this version of ONTAP is 12TB usable. A couple of my volumes hosted by the aggregate aggr1 were quickly running out of disk space as nightly database dumps were chewing it up. So far I had survived by oversubscribing the volumes. Although I have spares in the filer, I am unable to expand the aggregate as it has reached its size limit.

The plan is to create a new aggregate (aggr_new) and migrate these volumes onto it. I used SnapMirror for this and it worked like a charm. Listed below is what one needs to do.

Prep work

Build a new aggregate from free disks.

1. List the spares in the system

# vol status -s

Spare disks

RAID Disk  Device  HA  SHELF  BAY  CHAN  Pool  Type  RPM    Used (MB/blks)    Phys (MB/blks)
---------  ------  --  -----  ---  ----  ----  ----  ---    --------------    --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare      7a.18   7a  1      2    FC:B        FCAL  10000  372000/761856000  560879/1148681096
spare      7a.19   7a  1      3    FC:B        FCAL  10000  372000/761856000  560879/1148681096
spare      7a.20   7a  1      4    FC:B        FCAL  10000  372000/761856000  560879/1148681096
spare      7a.21   7a  1      5    FC:B        FCAL  10000  372000/761856000  560879/1148681096
spare      7a.22   7a  1      6    FC:B        FCAL  10000  372000/761856000  560879/1148681096
spare      7a.23   7a  1      7    FC:B        FCAL  10000  372000/761856000  560879/1148681096
spare      7a.24   7a  1      8    FC:B        FCAL  10000  372000/761856000  560879/1148681096
spare      7a.25   7a  1      9    FC:B        FCAL  10000  372000/761856000  560879/1148681096
spare      7a.26   7a  1      10   FC:B        FCAL  10000  372000/761856000  560879/1148681096

2. Create the new aggregate

Add the new disks. Make sure you add enough disks to create complete RAID groups. Otherwise, when you later add new disks to the aggregate, all new writes will go to the newly added disks until they fill up to the level of the other disks in the RAID group. This creates a disk bottleneck in the filer, as all writes are then handled by a limited number of spindles.

# aggr add aggr_new 7a.18,7a.19,7a.20,7a.21,7a.22,7a.23,7a.24,7a.25,7a.26,7a.27

3. Verify the aggregate is online

# aggr status aggr_new

4. Create a new volume named vol_new, sized 1500g, on aggr_new

# vol create vol_new aggr_new 1500g

5. Verify the volume is online

# vol status vol_new

6. Set up SnapMirror between the old and new volumes

First you need to restrict the destination volume:

# vol restrict vol_new

a. Initialize the mirror:

# snapmirror initialize -S filername:volume_name filername:vol_new

b. Also make an entry in the /etc/snapmirror.conf file for this SnapMirror session:

filername:/vol/volume_name filername:/vol/vol_new kbs=1000 0 0-23 * *

Note: kbs=1000 throttles the SnapMirror transfer rate to 1000 KB/s. The four schedule fields (minute, hour, day-of-month, day-of-week) mean updates run every hour on the hour.

On the day of cutover

Update the SnapMirror session:

# snapmirror update vol_new
Transfer started.

Monitor progress with snapmirror status or the snapmirror log.
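Before scheduling the cutover, it is worth sanity-checking how long the baseline transfer will take under the kbs=1000 throttle. A rough back-of-envelope sketch (the 1500g volume size comes from the example above; the 40% fill level is an assumption for illustration only):

```python
def transfer_hours(data_gb: float, kbs: float) -> float:
    """Estimate transfer time in hours for data_gb of data
    at a throttle of kbs kilobytes per second."""
    kb_total = data_gb * 1024 * 1024  # GB -> KB
    return kb_total / kbs / 3600      # seconds -> hours

# 1500 GB volume, assumed 40% full, throttled to 1000 KB/s
used_gb = 1500 * 0.40
hours = transfer_hours(used_gb, 1000)
print(f"Baseline transfer: ~{hours:.0f} hours ({hours / 24:.1f} days)")
```

At 1000 KB/s, even a half-full 1500g volume takes days to mirror, which is a reason to run the initial baseline at a higher (or unthrottled) rate and reserve the tight throttle for the hourly incremental updates.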

# snapmirror status vol_new
Snapmirror is on.
Source                 Destination        State         Lag       Status
filername:volume_name  filername:vol_new  Snapmirrored  00:00:38  Idle

Quiesce the relationship. This completes any in-progress transfers, then halts further updates from the SnapMirror source to the SnapMirror destination.

# snapmirror quiesce vol_new
snapmirror quiesce: in progress
This can be a long-running operation. Use Control-C (^C) to interrupt.
snapmirror quiesce: dbdump_pb : Successfully quiesced

Break the relationship. This causes the destination volume to become writable.

# snapmirror break vol_new
snapmirror break: Destination vol_new is now writable.
Volume size is being retained for potential snapmirror resync. If you would like to grow the volume and do not expect to resync, set vol option fs_size_fixed to off.

Enable quotas: quota on volname

Rename volumes

Once the SnapMirror session is terminated, we can rename the volumes:

# vol rename volume_name volume_name_temp
# vol rename vol_new volume_name

Remember, the shares move with the volume name: if the volume hosting a share is renamed, the corresponding change is reflected in the share's path. This requires us to delete the old share and recreate it against the correct volume name. The file cifsconfig_share.cfg under etc$ has a listing of the commands run to create the shares; use this file as a reference.

# cifs shares -add test_share$ /vol/volume_name -comment "Admin Share"
# cifs access test_share$ "Server Admins" "Full Control"
# cifs access test_share$ S-1-5-32-544 "Full Control"
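When many shares live on the migrated volume, it helps to pull the relevant commands out of the saved config mechanically instead of retyping them. A minimal sketch, assuming cifsconfig_share.cfg simply contains the original cifs shares -add / cifs access command lines (that file name is from the text above; its exact format, and the share names below, are illustrative assumptions):

```python
def shares_to_recreate(cfg_lines, vol):
    """From saved cifs commands, select those whose share path lives on
    /vol/<vol>, appending -f to 'cifs shares -add' lines so they do not
    prompt for confirmation when replayed."""
    needle = f"/vol/{vol}"
    cmds, keep = [], False
    for line in (l.strip() for l in cfg_lines):
        if line.startswith("cifs shares -add"):
            keep = needle in line          # new share block: is it on our volume?
            if keep:
                cmds.append(line + " -f")
        elif line.startswith("cifs access") and keep:
            cmds.append(line)              # access rules for the share above
    return cmds

# Hypothetical contents of cifsconfig_share.cfg
cfg = [
    'cifs shares -add test_share$ /vol/volume_name -comment "Admin Share"',
    'cifs access test_share$ S-1-5-32-544 "Full Control"',
    'cifs shares -add other$ /vol/other_vol',
    'cifs access other$ everyone Read',
]
for cmd in shares_to_recreate(cfg, "volume_name"):
    print(cmd)
```

The output is a list of filer commands ready to paste back into the console; note the simple substring match would also catch volumes whose names merely start with the target name, so eyeball the result before running it.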

Use a -f at the end of the cifs shares -add line to eliminate the y-or-n prompt.

Start quotas on the new volume:

# quota on volume_name

Voila! You are done. The shares and qtrees now refer to the new volume on a new aggregate. Test the shares by mapping them on a Windows host.

If you enjoyed this article, please consider sharing it!
