
Configuration

Before starting with the configuration, the cluster must be properly planned. The online planning worksheets (OLPW) can be used for this purpose. This article explains the configuration of a two-node cluster. In the example provided, both nodes have three Ethernet adapters and two shared disks.

Step 1: Fileset installation


After installing AIX, the first step is to install the required filesets. The RSCT and BOS filesets can be found on the AIX base installation CDs. A license for PowerHA needs to be purchased to install the HACMP filesets. Install the following filesets:

HACMP 5.5 filesets:
cluster.adt.es
cluster.es.assist
cluster.es.cspoc
cluster.es.plugins
cluster.assist.license
cluster.doc.en_US.assist
cluster.doc.en_US.es
cluster.es.worksheets
cluster.license
cluster.man.en_US.assist
cluster.man.en_US.es
cluster.es.client
cluster.es.server
cluster.es.nfs
cluster.es.cfs

RSCT filesets:
rsct.compat.basic.hacmp
rsct.compat.clients.hacmp
rsct.basic.hacmp
rsct.basic.rte
rsct.opt.storagerm
rsct.crypt.des
rsct.crypt.3des
rsct.crypt.aes256

BOS filesets:
bos.data
bos.adt.libm
bos.adt.syscalls
bos.clvm.enh
bos.net.nfs.server

After installing the filesets, reboot the partition.
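To confirm that the required filesets are installed and committed, you can list them with lslpp, for example:

#lslpp -l "cluster.*"
#lslpp -l "rsct.*"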

Step 2: Setting the path


Next, the PATH needs to be set so that the cluster utilities can be run without full path names. To do that, add the following line to the /.profile file:

export PATH=$PATH:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities
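To pick up the change in the current session and confirm the new entries, you can run:

#. /.profile
#echo $PATH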

Step 3: Network configuration


To configure an IP address on the Ethernet adapters, do the following:

#smitty tcpip -> Minimum Configuration and Startup -> Choose Ethernet network interface

You will have three Ethernet adapters: two with private IP addresses and one with a public IP address. As shown in the image below, enter the relevant fields for en0 (which you will configure with the public IP address).
Image 1. Configuration of a public IP address

This will configure the IP address and start the TCP/IP services on it.

Similarly, configure the private IP address on en1, as shown in Image 2.
Image 2. Configuration of a private IP address
Similarly, configure en2 with the private IP 10.10.210.21 and start the TCP/IP services. Next, you need to add the IP addresses of node1 and node2, along with the service IP (db2live here), and their labels to the /etc/hosts file. It should look like the following:

# Internet Address Hostname # Comments


127.0.0.1 loopback localhost # loopback (lo0) name/address
192.168.20.72 node1.in.ibm.com node1
192.168.20.201 node2.in.ibm.com node2
10.10.35.5 node2ha1
10.10.210.11 node2ha2
10.10.35.4 node1ha1
10.10.210.21 node1ha2
192.168.22.39 db2live

The idea is that each of the three interfaces on each machine should be included with a relevant label for name resolution.

Perform similar operations on node2: configure en0 with the public IP address and en1 and en2 with private IP addresses, and edit the /etc/hosts file. To test that all is well, you can issue pings to the various IP addresses from each machine.
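For example, from node1 a quick check might look like the following (the labels are the ones defined in /etc/hosts above, and -c limits the number of echo requests):

#ping -c 2 node2
#ping -c 2 node2ha1
#ping -c 2 node2ha2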

Step 4: Storage configuration


We need shared storage to create a heartbeat over an FC disk. The disks need to be allocated from the SAN. Once both nodes are able to see the same disks (these can be identified using the LUN numbers), heartbeat over disk will be configured.

This method does not use Ethernet, so it avoids a single point of failure in the Ethernet network, switches, and protocols. The first step is to identify an available major number on all the nodes (as shown in Image 3 below).
Image 3. Identifying available major number

Pick a number that is free on both nodes. In this case, we picked 100.
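If you prefer the command line, the free major numbers can be listed on each node with the lvlstmajor command:

#lvlstmajor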

On node1
1. Create a vg "hbvg" on the shared disk "hdisk1" with enhanced concurrent capability.

#smitty mkvg
2. Image 4. Volume group creation

3. Once hbvg is created, the autovaryon flag needs to be disabled. To do that, run the following command:

#chvg -an hbvg


4. Next, we create logical volumes in the volume group “hbvg”. Enter an LV name such as hbloglv, select 1 for the
number of logical partitions, select jfslog as the type, and set scheduling to sequential. Let the remaining options
have the default value and press Enter.

#smitty mklv
5. Image 5. Logical Volume creation

6. Once the LV is created, initialize the log logical volume with logform:

#logform /dev/hbloglv

7. Repeat this process to create another LV of type jfs and named hblv (but otherwise identical).
8. Next, we create a filesystem. To do that, enter the following:

#smitty crfs -> Add a Journaled File System ->
  Add a Journaled File System on a Previously Defined Logical Volume ->
  Add a Standard Journaled File System

9. Here, enter the LV name "hblv", the LV for the log as "hbloglv", and the mount point /hb_fs.

10. Image 6. Filesystem creation in a Logical Volume

11. Once the file system is created, try mounting it. Before moving to node2, unmount /hb_fs and run varyoffvg hbvg (a command-line sketch of these node1 steps follows this list).
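For reference, a rough command-line equivalent of the node1 steps above; the smitty panels are the documented route, and the flags shown here are a hedged approximation that should be checked against your AIX level:

#mkvg -C -V 100 -y hbvg hdisk1    (enhanced concurrent capable VG with major number 100)
#chvg -an hbvg
#mklv -y hbloglv -t jfslog -d s hbvg 1
#logform /dev/hbloglv
#mklv -y hblv -t jfs -d s hbvg 1
#crfs -v jfs -d hblv -m /hb_fs -A no    (picks up the existing jfslog hbloglv in hbvg)
#mount /hb_fs
#umount /hb_fs
#varyoffvg hbvg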

On Node 2
1. Identify the shared disk using its PVID (see the sketch after this list for one way to do this). Import the volume group with the same major number (we used 100) from the shared disk (hdisk1):

#importvg -V 100 -y hbvg hdisk1


2. Vary on the volume group and disable automatic varyon at system restart.

#varyonvg hbvg
#chvg -an hbvg

3. Now, you should be able to mount the file system. Once done, unmount it and run varyoffvg hbvg.

4. Verification of heartbeat over FC: Open a session on each of the two nodes. On node1, run the following command, where hdisk1 is the shared disk.

#/usr/sbin/rsct/bin/dhb_read -p hdisk1 -r

5. On node2:

#/usr/sbin/rsct/bin/dhb_read -p hdisk1 -t
6. Basically, one node will send heartbeats to the disk and the other will detect them. Both commands should return to the command line after reporting that the link is operating normally.
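A sketch of the node2 checks mentioned above; running lspv on both nodes shows each disk's PVID, and matching PVIDs identify the shared disk:

#lspv
#mount /hb_fs
#umount /hb_fs
#varyoffvg hbvg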

Application specific configuration


If you are making an application (for example, a DB2 server) highly available, application-specific configuration needs to be done. That is beyond the scope of this article.

HACMP related configuration


Network takeover on both nodes:

1. Run grep -i community /etc/snmpdv3.conf | grep public and ensure that there is an uncommented line similar to COMMUNITY public public noAuthNoPriv 0.0.0.0 0.0.0.0.
2. Next, we need to add all the IP addresses of the nodes' NICs to the /usr/es/sbin/cluster/etc/rhosts file, as shown below:

# cat /usr/es/sbin/cluster/etc/rhosts
192.168.20.72
192.168.20.201
10.10.35.5
10.10.210.11
10.10.35.4
10.10.210.21
192.168.22.39
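This rhosts file is read by the cluster communication daemon (clcomd), so it is worth confirming that the daemon is active on both nodes, for example:

#lssrc -a | grep -i clcomd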

Configuring PowerHA cluster

On Node 1:
1. First, define a cluster:

#smitty hacmp -> Extended Configuration -> Extended Topology Configuration ->
  Configure an HACMP Cluster -> Add/Change/Show an HACMP Cluster
2. Image 7. Defining a cluster

3. Press Enter; now, the cluster is defined.

4. Add nodes to the defined cluster:


#smitty hacmp -> Extended Configuration -> Extended Topology Configuration ->
  Configure HACMP Nodes -> Add a Node to the HACMP Cluster
5. Image 8. Adding nodes to a cluster

6. Similarly, add another node to the cluster. Now, we have defined a cluster and added nodes to it. Next, we will
make the two nodes communicate with each other.

7. Next, we add two kinds of networks: IP (Ethernet) and non-IP (diskhb).

#smitty hacmp -> Extended Configuration -> Extended Topology Configuration ->
  Configure HACMP Networks -> Add a Network to the HACMP Cluster

8. Select “ether” from the list.

9. Image 9. Adding networks to the cluster

10. After this is added, return to “Add a network to the HACMP cluster” and also add the diskhb network.

11. The next step establishes what physical devices from each node are connected to each network.

#smitty hacmp -> Extended Configuration -> Extended Topology Configuration ->
  Configure HACMP Communication Interfaces/Devices -> Add Communication Interfaces/Devices ->
  Add Pre-defined Communication Interfaces and Devices -> Communication Interfaces

12. Pick the network that we added in the last step (IP_network) and enter a configuration similar to this:

13. Image 10. Adding communication devices to the cluster

14. There may be a warning about an insufficient number of communication interfaces on particular networks. These warnings can be ignored for now: the last steps need to be repeated so that the different adapters are assigned to the various networks for HACMP purposes, and by the time all adapters are assigned to networks, the warnings should be gone. In any case, repeat for all interfaces.

15. Note that for the disk communication (the disk heartbeat), the steps are slightly different.

#smitty hacmp -> Extended Configuration -> Extended Topology Configuration ->
  Configure HACMP Communication Interfaces/Devices -> Add Communication Devices

16. Select shared_diskhb or the relevant name as appropriate and fill in the details as below:

17. Image 11. Adding communication interfaces to the cluster


18. Each node in the cluster also needs to have a persistent node IP address. We associate each node with its
persistent IP as follows:

#smitty hacmp -> Extended Configuration -> Extended Topology Configuration ->
  Configure HACMP Persistent Node IP Label/Addresses

19. Add all the details as below:

20. Image 12. Adding persistent IP address to the cluster

21. Checkpoint:

22. After adding everything, we should check if everything was added correctly.

#smitty hacmp -> Extended Configuration -> Extended Topology Configuration ->
  Show HACMP Topology -> Show Cluster Topology

23. It will list all the networks, interfaces, and devices. Verify that they have been added correctly.
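The same topology information can also be viewed from the command line, for example with the cltopinfo utility (located in /usr/es/sbin/cluster/utilities, which is already on the PATH from Step 2):

#cltopinfo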

24. Adding a resource group: Now we have defined a cluster, added nodes to it, and configured both the IP and the non-IP network. The next step is to configure a resource group. As defined earlier, a resource group is a collection of resources. An application server is one such resource that needs to be kept highly available, for example a DB2 server. Adding an application server to the resource group:

#smitty hacmp -> Extended Configuration -> Extended Resource Configuration ->
  HACMP Extended Resources Configuration -> Configure HACMP Application Servers ->
  Add an Application Server
25. Image 13. Adding resources – Application server

26. This specifies the server name and the start and stop scripts needed to start/stop the application server. For applications such as DB2, WebSphere, SAP, Oracle, TSM, ECM, LDAP, and IBM HTTP Server, the start/stop scripts come with the product. For other applications, administrators should write their own scripts to start/stop the application.
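For illustration only, a minimal pair of start/stop scripts for a DB2 instance might look like the following; the script paths and the instance name db2inst1 are assumptions, and the scripts shipped with the product should be preferred where available. PowerHA treats a non-zero return code from these scripts as a failure, so they should exit 0 on success.

#!/usr/bin/ksh
# /usr/local/hascripts/db2_start.sh (hypothetical path): start the DB2 instance
su - db2inst1 -c "db2start"
exit 0

#!/usr/bin/ksh
# /usr/local/hascripts/db2_stop.sh (hypothetical path): stop the DB2 instance
su - db2inst1 -c "db2stop force"
exit 0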

27. The next resource that we will add to the resource group is a service IP. It is through this IP that end users will connect to the application. Hence, the service IP should be kept highly available.

#smitty hacmp -> Extended Configuration -> Extended Resource Configuration ->
  HACMP Extended Resources Configuration -> Configure HACMP Service IP Labels/Addresses ->
  Add a Service IP Label/Address

28. Choose “Configurable on Multiple Nodes” and then “IP_network”. Here we have db2live as the service IP.

29. Image 14. Adding resources – Service IP

30. Now that the resources are added, we will create a resource group (RG), define its policies, and add all these resources to it.

#smitty hacmp -> Extended Configuration -> HACMP Extended Resource Group Configuration ->
  Add a Resource Group
31. Image 15. Resource group creation

32. Once the RG is created, we can change its attributes using:

#smitty hacmp -> Extended Configuration -> HACMP Extended Resource Group Configuration ->
  Change/Show Resources and Attributes for a Resource Group

33. Select db2_rg and configure as desired:

34. Image 16. Defining various attributes of the resource group

35. Verification and synchronization: Once everything is configured on the primary node (node1), we need to synchronize it with all other nodes in the cluster. To do that, do the following:

#smitty hacmp -> Extended Configuration -> Extended Verification and Synchronization
36. Image 17. Verification and synchronization of the cluster

37. This will check the status and configuration of the local node first, and then it will propagate the configuration to the other nodes in the cluster, if they are reachable. There should be plenty of detail about any errors as well as the checks that passed. Once this is done, your cluster is ready. You can test it by moving the RG manually. To do that, do the following:

#smitty hacmp -> System Management (C-SPOC) -> HACMP Resource Group and Application Management ->
  Move a Resource Group to Another Node / Site -> Move Resource Groups to Another Node

38. Choose "node2" and press Enter. You should see the stop scripts running on node1 and the start scripts running on node2. After a few seconds, the RG will be online on node2.
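The state of the resource group before and after the move can be checked from either node with the clRGinfo utility, for example:

#clRGinfo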
