Before starting with the configuration, the cluster must be properly planned. The Online Planning Worksheets (OLPW) can be used for this purpose. This article explains the configuration of a two-node cluster. In the example provided, both nodes have three Ethernet adapters and two shared disks.
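The commands used throughout this article live under the cluster directories, so add them to your PATH first: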
export PATH=$PATH:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities
#smitty tcpip -> Minimum Configuration and Startup -> Choose Ethernet network interface
Each node has three Ethernet adapters: two with private IP addresses and one with a public IP address. As shown in Image 1 below, enter the relevant fields for en0 (which you will configure with the public IP address).
Image 1. Configuration of a public IP address
This will configure the IP address and start the TCP/IP services on it.
Similarly, you configure the private IP addresses on en1 and en2, as shown in Image 2.
Image 2. Configuration of a private IP address
Similarly, configure en2 with the private IP 10.10.210.21 and start the TCP/IP services. Next, you need to add the IP addresses of node1, node2, and the service IP (which is db2live here), along with their labels, to the /etc/hosts file. It should look like the following:
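The original listing is not reproduced here; a sketch of what it could look like follows. The addresses are the ones that appear in the rhosts listing later in this article, but their assignment to the adapters and the host labels (node1_hb1 and so on) are illustrative assumptions only:
192.168.20.72   node1
10.10.35.5      node1_hb1
10.10.210.21    node1_hb2
192.168.20.201  node2
10.10.35.4      node2_hb1
10.10.210.11    node2_hb2
192.168.22.39   db2live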
The idea is to include each of the three interfaces of each machine, with relevant labels, for name resolution. Perform similar operations on node2: configure en0 with the public IP and en1 and en2 with the private IPs, and edit the /etc/hosts file. To test that all is well, ping the various IP addresses from each machine.
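For example, a quick loop such as the following checks reachability (the labels are the hypothetical ones from the /etc/hosts sketch above; substitute your own):
for host in node1 node1_hb1 node1_hb2 node2 node2_hb1 node2_hb2; do
    ping -c 1 $host
done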
This method does not use Ethernet, which avoids making the Ethernet network, switches, and protocols a single point of failure.
The first step is to identify an available major number on all the nodes (as shown in Image 3 below).
Image 3. Identifying available major number
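If you prefer the command line, the lvlstmajor command lists the free major numbers; run it on every node and choose a number that is free on all of them (this example uses 100):
#lvlstmajor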
On node1
1. Create a vg "hbvg" on the shared disk "hdisk1" with enhanced concurrent capability.
#smitty mkvg
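For reference, a roughly equivalent command line would be the following, where -C requests an enhanced concurrent capable volume group and -V 100 sets the major number chosen above (verify the flags against your AIX level):
#mkvg -C -V 100 -y hbvg hdisk1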
2. Image 4. Volume group creation
3. Once hbvg is created, the autovaryon flag needs to be disabled. To do that, run the following command:
#chvg -an hbvg
4. Create a logical volume of type jfslog, named hbloglv, on hbvg:
#smitty mklv
5. Image 5. Logical Volume creation
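A roughly equivalent command line, assuming one logical partition is enough for the heartbeat LV, would be:
#mklv -t jfslog -y hbloglv hbvg 1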
6. Format the log logical volume:
#logform /dev/hbloglv
7. Repeat this process to create another LV of type jfs and named hblv (but otherwise identical).
8. Next, we create a filesystem.
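The smit screen is not reproduced here; the usual path is:
#smitty crfs
and then add a journaled file system on a previously defined logical volume.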
9. Here, enter the LV name "hblv", the LV for log as "hbloglv", and the mount point /hb_fs.
11. Once the filesystem is created, try mounting it. Before moving to node2, unmount /hb_fs and vary off the volume group.
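Explicitly, the check and cleanup commands are:
#mount /hb_fs
#umount /hb_fs
#varyoffvg hbvg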
On Node 2
1. Identify the shared disk using its PVID. Import the volume group, with the same major number (we used 100), from the shared disk (hdisk1):
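Assuming the shared disk also appears as hdisk1 on node2 (confirm via the PVID), the import command would be:
#importvg -V 100 -y hbvg hdisk1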
2. Vary on the volume group and disable the autovaryon flag:
#varyonvg hbvg
#chvg -an hbvg
3. Now, you should be able to mount the filesystem. Once done, unmount the filesystem and varyoffvg hbvg.
4. Verification of heartbeat over FC: Open a session on each of the two nodes. On node1, run the following command, where hdisk1 is the shared disk:
#/usr/sbin/rsct/bin/dhb_read -p hdisk1 -r
5. On node2:
#/usr/sbin/rsct/bin/dhb_read -p hdisk1 -t
6. Basically, one node heartbeats to the disk and the other detects it. Both nodes should report that the link is operating normally and return to the command line.
1. Run grep -i community /etc/snmpdv3.conf | grep public and ensure that there is an uncommented line similar to COMMUNITY public public noAuthNoPriv 0.0.0.0 0.0.0.0.
2. Next, we need to add the IP addresses of all the nodes' NICs to the /usr/es/sbin/cluster/etc/rhosts file.
# cat /usr/es/sbin/cluster/etc/rhosts
192.168.20.72
192.168.20.201
10.10.35.5
10.10.210.11
10.10.35.4
10.10.210.21
192.168.22.39
On Node 1:
1. First, define a cluster:
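The smit screens are not reproduced here. Assuming the HACMP 5.x extended configuration menus, the path would be similar to:
#smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure an HACMP Cluster -> Add/Change/Show an HACMP Cluster
Enter a cluster name on that screen.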
6. Similarly, add another node to the cluster. Now, we have defined a cluster and added nodes to it. Next, we will
make the two nodes communicate with each other.
7. Next, we add the networks. There are two kinds to add: IP (Ethernet) and non-IP (diskhb).
10. After this is added, return to “Add a network to the HACMP cluster” and also add the diskhb network.
11. The next step establishes which physical devices on each node are connected to each network.
12. Pick the network that we added in the last step (IP_network) and enter a configuration similar to this:
14. You may see a warning about an insufficient number of communication interfaces on a particular network. The warning can be ignored at this stage: repeat the last steps until each adapter has been assigned to its network for HACMP purposes, and by the time all adapters are assigned, the warnings should be gone.
15. Note that for the disk communication (the disk heartbeat), the steps are slightly different.
16. Select shared_diskhb or the relevant name as appropriate and fill in the details as below:
21. Checkpoint:
22. After adding everything, we should check that it was all added correctly.
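One way to do this, assuming the standard HACMP utilities location, is the cltopinfo command:
#/usr/es/sbin/cluster/utilities/cltopinfo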
23. It will list all the networks, interfaces, devices. Verify that they are added correctly.
24. Adding a resource group: Now we have defined a cluster, added nodes to it, and configured both the IP and the non-IP (diskhb) networks. The next step is to configure a resource group. As defined earlier, a resource group is a collection of resources. An application server is one such resource that needs to be kept highly available, for example a DB2 server.
25. Adding an application server to the resource group:
26. This specifies the server name and the start and stop scripts needed to start/stop the application server. For applications such as DB2, WebSphere, SAP, Oracle, TSM, ECM, LDAP, and IBM HTTP Server, the start/stop scripts come with the product. For other applications, administrators should write their own scripts to start/stop the application.
27. The next resource that we will add to the resource group is a service IP. It is through this IP that end users connect to the application; hence, the service IP should be kept highly available.
28. Choose “Configurable on Multiple Nodes” and then “IP_network”. Here we have db2live as the service IP.
30. Now that the resources are added, we will create a resource group (RG), define the RG policies, and add all these resources to it.
35. Verification and synchronization: Once everything is configured on the primary node (node1), we need to synchronize it with all the other nodes in the cluster. To do that, do the following:
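The smit path is not shown here; assuming the HACMP 5.x menus, it would be similar to:
#smitty hacmp -> Extended Configuration -> Extended Verification and Synchronization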
37. This will check the status and configuration of the local node first, and then propagate the configuration to the other nodes in the cluster, if they are reachable. The output gives details of all checks, both errors and passes. Once this is done, your cluster is ready. You can test it by moving the RG manually. To do that, do the following:
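The menu is not reproduced here; assuming the HACMP system management (C-SPOC) menus, the path would be similar to:
#smitty cl_admin -> HACMP Resource Group and Application Management -> Move a Resource Group to Another Node
Alternatively, the clRGmove utility can move the RG from the command line.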
38. Choose "node2" and press Enter. You should see the stop scripts running on node1 and the start scripts running on node2. After a few seconds, the RG will be online on node2.