
HA NFS Sun Cluster Setup

Goal: a two-node NFS failover cluster that shares two concatenated SVM volumes.

Prerequisites: two nodes (hostnames: node1, node2) with Solaris 10 01/06, current patches, Sun Cluster, the NFS agent for Sun Cluster, and VxFS installed. Both nodes are connected to the FC SAN storage, and 8 storage LUNs are mapped to each node.

Configure SVM

On both nodes
- Create the 25MB partition on the boot disk (s7)
- Create the SVM database replicas :
metadb -afc 3 c0d0s7 (use c0t0d0s7 on SPARC)

On one node (node1)
- Create the disk sets :
metaset -s nfs1 -a -h node1 node2
metaset -s nfs1 -t -f
metaset -s nfs1 -a /dev/did/rdsk/d2 /dev/did/rdsk/d3 /dev/did/rdsk/d4 /dev/did/rdsk/d5
metainit -s nfs1 d1 4 1 /dev/did/rdsk/d2s0 1 /dev/did/rdsk/d3s0 1 /dev/did/rdsk/d4s0 1 /dev/did/rdsk/d5s0
metastat -s nfs1 -p >> /etc/lvm/md.tab
metaset -s nfs2 -a -h node2 node1
metaset -s nfs2 -t -f
metaset -s nfs2 -a /dev/did/rdsk/d6 /dev/did/rdsk/d7 /dev/did/rdsk/d8 /dev/did/rdsk/d9
metainit -s nfs2 d1 4 1 /dev/did/rdsk/d6s0 1 /dev/did/rdsk/d7s0 1 /dev/did/rdsk/d8s0 1 /dev/did/rdsk/d9s0
metastat -s nfs2 -p >> /etc/lvm/md.tab
scp /etc/lvm/md.tab node2:/tmp/md.tab
ssh node2 'cat /tmp/md.tab >> /etc/lvm/md.tab'
- Create VxFS on the shared devices :

mkfs -F vxfs /dev/md/nfs1/rdsk/d1
mkfs -F vxfs /dev/md/nfs2/rdsk/d1

On both nodes
- Create the mount-point directories :
mkdir -p /global/nfs1
mkdir -p /global/nfs2
- Add the mount entries to the vfstab file :
cat >> /etc/vfstab << EOF
/dev/md/nfs1/dsk/d1 /dev/md/nfs1/rdsk/d1 /global/nfs1 vxfs 2 no noatime
/dev/md/nfs2/dsk/d1 /dev/md/nfs2/rdsk/d1 /global/nfs2 vxfs 2 no noatime
EOF
(mount-at-boot is "no" because we'll use the HAStoragePlus resource type)
- Add the logical hostnames :
cat >> /etc/hosts << EOF
10.1.1.1 log-name1
10.1.1.2 log-name2
EOF

On one node (node1)
- Mount the metavolumes and create the PathPrefix directories :
mount /global/nfs1
mount /global/nfs2
mkdir -p /global/nfs1/share
mkdir -p /global/nfs2/share

Configure HA NFS

On one node (node1)
- Register the resource types :
scrgadm -a -t SUNW.HAStoragePlus
scrgadm -a -t SUNW.nfs
- Create the failover resource groups :
scrgadm -a -g nfs-rg1 -h node1,node2 -y PathPrefix=/global/nfs1 -y Failback=true
scrgadm -a -g nfs-rg2 -h node2,node1 -y PathPrefix=/global/nfs2 -y Failback=true
- Add logical hostname resources to the resource groups :
scrgadm -a -j nfs-lh-rs1 -L -g nfs-rg1 -l log-name1
scrgadm -a -j nfs-lh-rs2 -L -g nfs-rg2 -l log-name2
- Create a dfstab file for each NFS resource :
mkdir -p /global/nfs1/SUNW.nfs /global/nfs1/share
mkdir -p /global/nfs2/SUNW.nfs /global/nfs2/share
echo 'share -F nfs -o rw /global/nfs1/share' > /global/nfs1/SUNW.nfs/dfstab.share1
echo 'share -F nfs -o rw /global/nfs2/share' > /global/nfs2/SUNW.nfs/dfstab.share2
- Configure the device groups :
scconf -c -D name=nfs1,nodelist=node1:node2,failback=enabled
scconf -c -D name=nfs2,nodelist=node2:node1,failback=enabled
- Create the HAStoragePlus resources :
scrgadm -a -j nfs-hastp-rs1 -g nfs-rg1 -t SUNW.HAStoragePlus -x FilesystemMountPoints=/global/nfs1 -x AffinityOn=True
scrgadm -a -j nfs-hastp-rs2 -g nfs-rg2 -t SUNW.HAStoragePlus -x FilesystemMountPoints=/global/nfs2 -x AffinityOn=True
- Share :
share -F nfs -o rw /global/nfs1/share
share -F nfs -o rw /global/nfs2/share
- Bring the groups online :
scswitch -Z -g nfs-rg1
scswitch -Z -g nfs-rg2
- Create the NFS resources :

scrgadm -a -j share1 -g nfs-rg1 -t SUNW.nfs -y Resource_dependencies=nfs-hastp-rs1
scrgadm -a -j share2 -g nfs-rg2 -t SUNW.nfs -y Resource_dependencies=nfs-hastp-rs2
- Change the probe interval for each NFS resource to a different value so the probes run at different times (see InfoDoc 84817) :
scrgadm -c -j share1 -y Thorough_probe_interval=130
scrgadm -c -j share2 -y Thorough_probe_interval=140
- Change the number of NFS threads : on each node edit the file /opt/SUNWscnfs/bin/nfs_start_daemons and replace
DEFAULT_NFSDCMD="/usr/lib/nfs/nfsd -a 16"
with
DEFAULT_NFSDCMD="/usr/lib/nfs/nfsd -a 1024"
- Enable the NFS resources :
scswitch -e -j share1
scswitch -e -j share2
- Switch the resource groups between nodes to check the cluster :
scswitch -z -h node2 -g nfs-rg1
scswitch -z -h node2 -g nfs-rg2
scswitch -z -h node1 -g nfs-rg1
scswitch -z -h node1 -g nfs-rg2
scswitch -z -h node2 -g nfs-rg2
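The switchover check above can be rehearsed before touching a live cluster. A minimal sketch: a wrapper that prints each scswitch invocation before (optionally) running it. The run() function and the DRY_RUN flag are illustrative assumptions, not Sun Cluster tools; the scswitch arguments are the ones from this document.

```shell
#!/bin/sh
# Rehearse the resource-group switchover sequence.
# DRY_RUN=1 (the default here) only prints the commands;
# unset it on a real cluster node to execute them.
DRY_RUN=${DRY_RUN:-1}

run() {
    echo "+ $*"
    [ "$DRY_RUN" = "1" ] || "$@"
}

# Move both groups to node2, then to node1, then return
# nfs-rg2 to its preferred node (node2).
run scswitch -z -h node2 -g nfs-rg1
run scswitch -z -h node2 -g nfs-rg2
run scswitch -z -h node1 -g nfs-rg1
run scswitch -z -h node1 -g nfs-rg2
run scswitch -z -h node2 -g nfs-rg2
```

Printing the commands first makes it easy to confirm the order (both groups exercised on both nodes) before a real failover test.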

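The nfsd thread-count change is a one-line substitution, so it can be scripted with sed instead of edited by hand (Solaris sed has no -i, hence the redirect-and-move). The sketch below works on a sample file standing in for /opt/SUNWscnfs/bin/nfs_start_daemons, the real path from this document, so it can be tried anywhere; back up the real file before applying it on a cluster node.

```shell
#!/bin/sh
# Bump the nfsd thread count in (a stand-in for) nfs_start_daemons.
# F points at a sample file here; on a cluster node it would be
# /opt/SUNWscnfs/bin/nfs_start_daemons.
F=${F:-/tmp/nfs_start_daemons.sample}

# Recreate the default line shipped by the NFS agent (per this document).
echo 'DEFAULT_NFSDCMD="/usr/lib/nfs/nfsd -a 16"' > "$F"

# Raise the thread count from 16 to 1024 (portable: no sed -i on Solaris).
sed 's|nfsd -a 16"|nfsd -a 1024"|' "$F" > "$F.new" && mv "$F.new" "$F"

# Show the resulting line for verification.
grep 'DEFAULT_NFSDCMD' "$F"
```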