
Migrating storage with SRDF.

This document does not replace the SRDF documentation, nor does it explain how to migrate storage; it is a brief guide, in procedure form, for those who already have experience with SRDF.

Steps to follow:

0. Create the zones between the SRDF ports of the storage arrays, if they are not direct-attached (DAS).

0. In the switch's configuration mode, run "zone name (zone name) ; member
(storage port WWN) ; member (WWN of the host to connect)". Do NOT enable the zones.

Example:

zone name servername01_HBAp_VMAX_7G1_VSANxxx


member 21:00:00:e0:8b:9a:24:39
member 50:00:09:74:C0:0D:4D:99

1. Establish the relationship between the arrays. To do this, connect both units over the
SRDF link and set up a logical relationship. (Use adaptive copy.)

1. symcfg -sid xyz list -rdfg all

2. Build the SRDF pairs. For this, prepare an xls document with the pairs for each host,
and a txt file for the hands-on part.

2. symrdf -sid xyz createpair -file pares.txt -type RDF1 -rdfg <Ragroup> -invalidate R2 -g
<nuevogrupo>
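
The pairs file referenced above simply lists one R1/R2 device pair per line, as shown later in this document (R1_Symmdev R2_Symmdev). As a sketch, assuming the xls with the pairs is exported to CSV with a header row and R1,R2 columns (the device IDs below are hypothetical), the txt file can be generated like this:

```shell
# Sample CSV export of the pairs spreadsheet (hypothetical device IDs).
printf 'R1,R2\n0418,0A2C\n0419,0A2D\n' > pares.csv
# Convert to the space-separated pares.txt format used by createpair:
# skip the header row, print the two columns separated by a space.
awk -F',' 'NR > 1 { print $1, $2 }' pares.csv > pares.txt
cat pares.txt
```
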

3. SRDF synchronization (bring all links to SYNC).

3. symrdf -g nuevogrupo set mode acp_wp


symrdf -g nuevogrupo establish -full
symrdf -g nuevogrupo query   (when the copy reaches 80-90%, switch to sync)
symrdf -g nuevogrupo set mode sync
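
The query/sync step above can be scripted as a small wait loop. This sketch waits for full synchronization rather than 80-90% (parsing the percentage would require assumptions about the exact query output layout); "SyncInProg" is the pair state symrdf query reports while tracks are still being copied, and symcli is assumed to be in the PATH:

```shell
# Poll the RDF group until it leaves the SyncInProg state, then
# switch the group to synchronous mode. Usage: wait_and_sync nuevogrupo
wait_and_sync() {
  group="$1"
  while symrdf -g "$group" query | grep -q 'SyncInProg'; do
    sleep 60    # re-check once a minute
  done
  symrdf -g "$group" set mode sync -noprompt
}
```
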

4. Shut down the operating system of the server to be migrated (on Windows 2xxx, flush the
filesystem first).

4. shutdown
flushfs.vbs (the flush comes down to the Win32 FlushFileBuffers API):
BOOL WINAPI FlushFileBuffers(
  __in HANDLE hFile
);
5. Query the state of the copy (symrdf query).

5. # symntctl umount -drive "drive_letter" -fs


symrdf -g nuevogrupo query

6. Split the copy (this stops the copy and sets the destination R2 disks to RW, leaving them
ready for use by the migrated host).

6. symrdf -g <grupoexistente> split

7. Disable the original zones against the source storage and enable the new zones against
the destination storage.

7. Disable the old zones / enable the new ones, as explained in step 0.
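
The zone swap is done at the zoneset level. A minimal sketch for a Cisco MDS switch (assumed from the VSAN naming used above; Brocade uses different commands), with placeholder zone and zoneset names:

```text
! In configuration mode; names below are placeholders.
zoneset name ZS_VSANxxx vsan xxx
  no member servername01_HBAp_VMAX_old
  member servername01_HBAp_VMAX_7G1_VSANxxx
exit
! Activating the zoneset is what actually enables the new zones.
zoneset activate name ZS_VSANxxx vsan xxx
```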

8. Do the corresponding masking (or symacl, as applicable).

8. symmask -sid xxx -wwn 210000e08b9a2439 -dir 8G -p 0 add devs 418 -noprompt

-OR-

symaccess -sid xxx create -type ... etc. (VMAX only)
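
As a reference only: a typical symaccess (auto-provisioning) sequence creates an initiator group, a port group, and a storage group, then ties them together in a masking view. The group names, device, and port below are invented for illustration, and the exact flags should be checked against the Solutions Enabler documentation:

```text
symaccess -sid xxx create -name host01_ig -type initiator -wwn 210000e08b9a2439
symaccess -sid xxx create -name host01_pg -type port -dirport 7G:1
symaccess -sid xxx create -name host01_sg -type storage devs 0418
symaccess -sid xxx create view -name host01_mv -ig host01_ig -pg host01_pg -sg host01_sg
```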

9. Boot the servers normally; they should come up normally, seeing the disks just as they
saw the originals.

10. After verifying data consistency and that everything came up OK, remove the R2
attribute from the destination devices on the VMAX.

10. symrdf -g <grupoexistente> deletepair


1) Verify that SRDF is properly configured on both the local and remote hosts.
a) Use the following SYMCLI commands to list and verify the device groups you created in
the previous exercise.
# symdg list (on both the local and remote hosts)
# symdg show mysrcdg (on the local host)
# symdg show mytgtdg (on the remote host)
b) Check the environment variables for SYMCLI and if necessary set the SYMCLI_DG
variable to your device group.
# symcli -def
# SYMCLI_DG=mysrcdg (on the local host)
# set SYMCLI_DG
# SYMCLI_DG=mytgtdg (on the remote host)
# set SYMCLI_DG
c) Verify the status of the devices in your device group. The local host should have RW
access to the source (R1) volumes, the remote host should see the target (R2) volumes
as being WD, the SRDF link should be enabled (RW), and there should be no invalid
tracks.
# symrdf query (on the local host)

2) Verify that drive letters are assigned and available for use. Perform this step on the
local host.
a) Add more data files to your filesystems in Windows Explorer.

3) Pass control of the SRDF volume and associated data to the remote host. Before
we test disaster failover, we will do a normal failover to verify the procedure. This step not
only involves passing control of the SRDF volumes to the remote host, it also requires that
the remote host understand how the volumes are configured. If the local host has created
volume partitions and filesystems on the SRDF source volumes, then the Volume information
must be imported to the remote host, after a failover. In Windows 2000 drive letter
assignments to Basic disks should be automatic after a failover.
a) While in a disaster situation it is not possible to "gracefully" shut down applications and
unassign drive letters, it is always less risky to do so whenever possible. Before passing
control of the SRDF volume to the remote host, we will first unassign the drive letters on
the local host to deactivate the Volume.
i) Unassign the drive letters in “Disk Management” by right clicking on the correct
disk and selecting the option to “change drive letter and path”, then click on the
drive and select “remove”. If you have the SIU/CLI then use the following command
strings:

# symntctl openhandle -drive "drive_letter" (on the local host) - to verify that no
processes are using the R1 volume
# symntctl umount -drive "drive_letter" -fs (to dismiss cache buffers)
# symntctl umount -drive "drive_letter"

b) Initiate failover of the SRDF volumes by executing the following command from the
remote host. The failover command can be executed from the local host. However, in a
true disaster situation, we may not have access to the local host.
# symrdf failover (on the remote host)
Note the verbose output from the failover command. Each step is displayed as it is
executed. The output could be piped to a file and saved for future study or reference.
More detailed information is logged in the /var/symapi/log directory.
c) View the status of both the R1 and R2 volumes.
# symrdf query (on the remote host)
What level of access does the local host have to the source (R1) volumes?
_________________________________________
What level of access does the remote host have to the target (R2) volumes?
_________________________________
The source volumes should be Write Disabled (WD) and the target volumes Read/Write
(RW). Even though the local host has read access to the source volumes, use caution
when accessing it as integrity cannot be guaranteed.

d) Before the remote W2K host can use the data on the SRDF target volumes, the Volume
definition information must be ”imported”. At the remote host, find the drive letters assigned to
the R2 copies. In the Windows Explorer program, select the assigned drive letters to verify that
the copied data is available.

4) Resume activities on the target host. The remote host now has full access to the SRDF
volumes and associated data. To simulate production after a failover, we will make some
changes to the filesystem.
a) While at the remote host, add some data on the R2 mirrored volumes at that server.
b) While changes are being made to the SRDF (R2) volumes from the remote host, the
link between the source and target volumes is disabled. Check to see how many invalid
tracks have accumulated, using the appropriate command.
# symrdf query
How many invalid tracks are there? ___________________________
How many MB does this represent? ___________________________
Are the invalid tracks on the R1 or R2 Volumes? ____________________

5) Pass control of the SRDF volumes and data back to the local host.
a) The remote host should not have the drive letters of the R2 volumes mounted while
control is being passed back to the local host because the remote host’s access to the
target volumes will change to Read Only (WD). Attempting to write to the filesystem
while the volumes are Write Disabled will cause unpredictable results.
i) Unassign the drive letters in “Disk Management” by right clicking on the correct
disk and selecting the option to “change drive letter and path”, then click on the
drive and select “remove”. If you have the SIU/CLI then use the following command
strings:
# symntctl openhandle -drive "drive_letter" (on the remote host) - to verify that no
processes are using the R2 volume
# symntctl umount -drive "drive_letter" -fs (to dismiss cache buffers)
# symntctl umount -drive "drive_letter"
b) Make the source volumes active by executing the following command.
# symrdf failback (on the remote host)
c) View the status of both the source (R1) and the target (R2) volumes.
# symrdf query (on the remote host)
What level of access does the local host have to the source volumes?

Once you upgrade RDF to Dynamic RDF, the syntax for removing the RDF attribute from the devices
is as follows:

1) symrdf -file <filepath.txt> -sid [SYMMid] -rdfg [rdf_group_number] split


This will split the RDF relationship.

Then run:
2) symrdf -file <filepath.txt> -sid [Local_SYMMid] -rdfg [rdf_group_number] deletepair

Where the content of the config file (filepath.txt) is:

R1_Symmdev R2_Symmdev

This will remove the RDF attribute from both the source and target devices, reverting them back
to their original configuration.

Additional items: non-EMC software

MS/SQL Clusters:

The quorum disk is located on a shared bus in the cluster. Use the following procedure to
designate a different drive for the quorum device:

NOTE: If you cannot start Cluster service because the quorum disk is unavailable, use the
/FIXQUORUM switch to start Cluster service. You are then able to change the quorum disk
designation.

1. Start Cluster Administrator (CluAdmin.exe).


2. Right-click the cluster name in the upper-left corner, and then click Properties.
3. Click the Quorum tab.
4. In the Quorum resource box, click a different disk resource.
5. If the disk has more than one partition, click the partition where you want the cluster-
specific data to be kept, and then click OK.

When you change the quorum disk designation, Cluster service does not remove the /Mscs
directory from the old drive. For administrative purposes, you may want to delete this old
directory, or keep it as a backup. Do not continue running Cluster service with the /FIXQUORUM
switch enabled. When the new quorum disk is established, stop the service and restart it
without a switch. Then it is safe to bring other nodes online.
It is recommended that you increase the quorum log size to 4,096 KB.

The procedure standardized by EMC for Storage VMotion is detailed here.

1. Download the Remote Command-Line Interface and install it.

http://www.vmware.com/download/download.do?downloadGroup=VI-RCLI.
Open a command line in c:\program files\vmware\vmware vi remote CLI\bin.
Run: svmotion.pl --interactive.

2. The following message appears: "Enter the VirtualCenter service url you
wish to connect to (e.g. https://myvc.mycorp.com/sdk, or just
myvc.mycorp.com):"
Enter the address of the Virtual Center you want to connect to.

3. At the following command prompts, enter a valid username and password
for the Virtual Center.

4. Once svmotion has connected to the Virtual Center, the following message
appears: "Connected to server."

5. At the message "Enter the name of the datacenter:", enter the name of the
datacenter where the ESX server is located.

6. At the message "Enter the datastore path of the virtual machine (e.g.
[datastore1] myvm/myvm.vmx):", enter the requested data in the following
format: [datastorename] VM name/VM name.vmx.

7. At the message "Enter the name of the destination datastore:", enter the
name of the destination datastore.

8. At the message "You can also move disks independently of the
virtual machine. If you want the disks to stay with the virtual machine,
then skip this step."
"Would you like to individually place the disks (yes/no)?",
answer No if you want all of this VM's virtual disks to be stored
on the same datastore together with the VM.

9. The data of the indicated virtual machine will then be migrated online to
the new datastore.
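
For scripted runs, the interactive dialog above can be collapsed into a single non-interactive invocation. The flags below are assumptions about the RCLI syntax and should be verified with svmotion.pl --help; server, credentials, and datastore names are placeholders:

```text
svmotion.pl --url=https://myvc.mycorp.com/sdk --username=admin --password=secret \
  --datacenter=MyDC --vm="[datastore1] myvm/myvm.vmx:datastore2"
```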

The procedure standardized by EMC for AIX LVM is detailed here.

• Add the new disk to the volume group (for example, hdisk1):
# extendvg AIXTSM hdisk1
• If you use one mirror disk, make sure a quorum is not required for varyon:
# chvg -Qn AIXTSM

Add the mirrors for all AIXTSM logical volumes:

# mklvcopy hd1 2 hdisk1
# mklvcopy hd2 2 hdisk1
# mklvcopy hd3 2 hdisk1
# mklvcopy hd4 2 hdisk1
# mklvcopy hd5 2 hdisk1
# mklvcopy hd6 2 hdisk1
# mklvcopy hd8 2 hdisk1
# mklvcopy hd9var 2 hdisk1
# mklvcopy hd10opt 2 hdisk1
OR
# mirrorvg -s -c 'x' AIXTSM   (also available via smit; or: mirrorvg -m datavg hdiskX hdiskY)
If you have other logical volumes in your rootvg, be sure to create copies for them as
well.

An alternative to running multiple mklvcopy commands is to use mirrorvg. This command
was added in version 4.2 to simplify mirroring VGs. The mirrorvg command by default will disable
quorum and mirror the existing LVs in the specified VG. To mirror AIXTSM, use the command:
mirrorvg -s -c 'x' AIXTSM

• Now synchronize the new copies you created:


# syncvg -v AIXTSM

SRDF command listing.
Step 1

Create SYMCLI Device Groups. Each group can have one or more Symmetrix devices specified in
it.

SYMCLI device group information (name of the group, type, members, and any associations) are
maintained in the SYMAPI database.

In the following we will create a device group that includes two SRDF volumes.

SRDF operations can be performed from the local host that has access to the source volumes or
the remote host that has access to the target volumes. Therefore, both hosts should have device
groups defined.

Complete the following steps on both the local and remote hosts.

a) Identify the SRDF source and target volumes available to your assigned hosts. Execute the
following commands on both the local and remote hosts.

# symrdf list pd (execute on both local and remote hosts)

or

# syminq

b) To view all the RDF volumes configured in the Symmetrix use the following

# symrdf list dev

c) Display a synopsis of the symdg command and reference it in the following steps.

# symdg -h

d) List all device groups that are currently defined.

# symdg list

e) On the local host, create a device group of the type of RDF1. On the remote host, create a
device group of the type RDF2.

# symdg -type RDF1 create newsrcdg (on local host)

# symdg -type RDF2 create newtgtdg (on remote host)

f) Verify that your device group was added to the SYMAPI database on both the local and remote
hosts.

# symdg list
g) Add your two devices to your device group using the symld command. Again, use (-h) for a
synopsis of the command syntax.

On local host:

# symld -h

# symld -g newsrcdg add dev ###

or

# symld -g newsrcdg add pd Physicaldrive#

On remote host:

# symld -g newtgtdg add dev ###

or

# symld -g newtgtdg add pd Physicaldrive#

h) Using the syminq command, identify the gatekeeper device. Determine whether it is currently
defined in the SYMAPI database; if not, define it, and associate it with your device group.

On local host:

# syminq

# symgate list (Check SYMAPI)

# symgate define pd Physicaldrive# (to define)

# symgate -g newsrcdg associate pd Physicaldrive# (to associate)

On remote host:

# syminq

# symgate list (Check SYMAPI)

# symgate define pd Physicaldrive# (to define)

# symgate -g newtgtdg associate pd Physicaldrive# (to associate)

i) Display your device groups. The output is verbose so pipe it to more.

On local host:

# symdg show newsrcdg |more


On remote host:

# symdg show newtgtdg | more

j) Display a synopsis of the symld command.

# symld -h

k) Rename DEV001 to NEWVOL1

On local host:

# symld -g newsrcdg rename DEV001 NEWVOL1

On remote host:

# symld -g newtgtdg rename DEV001 NEWVOL1

l) Display the device group on both the local and remote hosts.

On local host:

# symdg show newsrcdg |more

On remote host:

# symdg show newtgtdg | more

Step 2

Use the SYMCLI to display the status of the SRDF volumes in your device group.

a) If on the local host, check the status of your SRDF volumes using the following command:

# symrdf -g newsrcdg query

Step 3

Set the default device group. You can use the "Environment Variables" option.

# set SYMCLI_DG=newsrcdg (on the local host)

# set SYMCLI_DG=newtgtdg (on the remote host)

a) Check the SYMCLI environment.

# symcli -def (on both the local and remote hosts)

b) Test to see if the SYMCLI_DG environment variable is working properly by performing a
"query" without specifying the device group.
# symrdf query (on both the local and remote hosts)

Step 4

Changing the operational mode. The operational mode for a device or group of devices can be set
dynamically with the symrdf set mode command.

a) On the local host, change the mode of operation for one of your SRDF volumes to enable
semi-synchronous operations. Verify results and change back to synchronous mode.

# symrdf set mode semi NEWVOL1

# symrdf query

# symrdf set mode sync NEWVOL1

# symrdf query

b) Change mode of operation to enable adaptive copy-disk mode for all devices in the device
group. Verify that the mode change occurred and then disable adaptive copy.

# symrdf set mode acp_disk

# symrdf query

# symrdf set mode acp_off

# symrdf query

Step 5

Check the communications link between the local and remote Symmetrix.

a) From the local host, verify that the remote Symmetrix is "alive". If the host is attached to
multiple Symmetrix arrays, you may have to specify the Symmetrix Serial Number (SSN) through
the -sid option.

# symrdf ping [ -sid xx ] (xx = last two digits of the remote SSN)

b) From the local host, display the status of the Remote Link Directors.

# symcfg -RA all list


c) From the local host, display the activity on the Remote Link Directors.

# symstat -RA all -i 10 -c 2

Step 6

Create a partition on each disk, format the partition and assign a filesystem to the partition. Add
data on the R1 volumes defined in the newsrcdg device group.

Step 7

Suspend RDF Link and add data to filesystem. In this step we will suspend the SRDF link, add
data to the filesystem and check for invalid tracks.

a) Check that the R1 and R2 volumes are fully synchronized.

# symrdf query

b) Suspend the link between the source and target volumes.

# symrdf suspend

c) Check link status.

# symrdf query

d) Add data to the filesystems.

e) Check for invalid tracks using the following command:

# symrdf query

f) Invalid tracks can also be displayed using the symdev show command. Execute the following
command on one of the devices in your device group. Look at the Mirror set information.

On the local host:

# symdev show ###

g) From the local host, resume the link and monitor invalid tracks.

# symrdf resume

# symrdf query

Happy SRDF’ing!!!!!
