CentOS 6.5 - 64 bit (you can use either 32 or 64 bit; if you use an earlier release, some of the rpm packages will differ for versions below 6.0)
1. Configure a 2 node Red Hat Cluster using CentOS 6.5 (64 bit)
2. One node will be used for cluster management with luci, also running CentOS 6.5 (64 bit)
3. Openfiler will be used to configure shared iSCSI storage for the cluster
4.
5.
6. Create a common GFS2 service which will run on any one node of our cluster with a failover policy
NOTE: I will not be able to configure fencing related settings as it is not supported on VMware. For more information please see Fence Device and Agent Information for Red Hat Enterprise Linux.
IMPORTANT NOTE: In this article I will not be able to explain all the terms used in detail; for that you can always refer to the official Red Hat guide on Cluster Administration for further clarification.
Lab Setup
2 nodes with CentOS 6.5 - 64 bit
Node 1
Hostname: node1.cluster
IP Address: 192.168.1.5
Node 2
Hostname: node2.cluster
IP Address: 192.168.1.6
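If DNS is not available in the lab, make the node hostnames resolvable with /etc/hosts entries on both nodes. A minimal sketch using the addresses above (entries for the Openfiler and management nodes can be added the same way):

# /etc/hosts on node1 and node2 (lab without DNS)
192.168.1.5   node1.cluster
192.168.1.6   node2.cluster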
Create a new partition on the available disk with the options shown below, specifying a cylinder value for the partition.
Next, create the Logical Volumes. Create two Logical Volumes with custom sizes as per your requirement.
In my case I will create two volumes:
1. quorum with a size of 1400 MB (a quorum disk does not require more than 1 GB of disk space)
2. SAN with all the remaining space, which will be used for the GFS2 filesystem in our cluster
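Later, once the quorum LUN is visible on the cluster nodes, it has to be initialized as a quorum disk. A sketch of that step, assuming the quorum LUN shows up as /dev/sdb and using a hypothetical label:

# Initialize the quorum LUN as a cluster quorum disk
# (/dev/sdb and the label "quorumdisk" are assumptions for this lab)
mkqdisk -c /dev/sdb -l quorumdisk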
On the System home page, create an ACL for the subnet which will access the Openfiler storage. In my case the subnet is 192.168.1.0, so I will add a new entry for it with the corresponding subnet mask.
Next, add an iSCSI target for the first disk, i.e. the quorum volume. You can edit the iSCSI target value with a custom name, as I have done in my case, so that it is easier to identify.
Next, map the volume to the iSCSI target. For the quorum target select the quorum partition and click on Map as shown below.
Do the same steps for the SAN volume as we did for the quorum volume above. Edit the target value as shown below.
Map the volume to the iSCSI target as shown in the figure below; be sure to map the correct volume.
Allow the ACL for that particular target in the Network ACL section.
What is Conga?
Conga is an integrated set of software components that provides centralized configuration and management of Red Hat clusters and storage. Among its major features: no need to re-authenticate.
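Conga consists of luci (the web console) and ricci (the agent on each cluster node); the article's Step 6 sets the ricci password. A sketch of getting them running, assuming luci on the management node and ricci on both cluster nodes:

# On the management node: install and start the luci web console (listens on port 8084)
yum install -y luci
service luci start
chkconfig luci on

# On node1 and node2: install the ricci agent, set its password, start it
yum install -y ricci
passwd ricci
service ricci start
chkconfig ricci on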
After running the iSCSI discovery command with the Openfiler IP address, the iSCSI targets configured on Openfiler were discovered automatically:

192.168.1.8:3260,1 iqn.2006-01.com.openfiler:san
192.168.1.8:3260,1 iqn.2006-01.com.openfiler:quorum
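For reference, the step that produces discovery output like the above is the iscsiadm sendtargets discovery against the Openfiler box (a sketch, assuming the stock iSCSI initiator and the Openfiler address 192.168.1.8 used here):

# Install the iSCSI initiator and discover the targets exported by Openfiler
yum install -y iscsi-initiator-utils
iscsiadm -m discovery -t sendtargets -p 192.168.1.8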
Now restart the iscsi service once again to refresh the settings
[root@node1 ~]# service iscsi restart
Stopping iscsi:                                            [  OK  ]
Starting iscsi:                                            [  OK  ]
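After the restart the two Openfiler LUNs should appear as new local block devices. A quick check (the device names /dev/sdb and /dev/sdc used later in this article depend on your setup and are not guaranteed):

# Confirm the newly attached iSCSI disks are visible on the node
fdisk -l | grep "^Disk /dev"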
Device:                    /dev/sdc
Blocksize:                 4096
Device Size
Filesystem Size:
Journals:
Resource Groups:           42
Locking Protocol:          "lock_dlm"
Lock Table:                "cluster1:GFS"
UUID:                      2ff81375-31f9-c57d-59d1-7573cdfaff42
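The fields above are what mkfs.gfs2 reports after formatting the shared SAN LUN. A sketch of the command that would produce them, assuming two journals (one per cluster node) and the lock table shown above:

# Format the SAN LUN as GFS2: DLM locking, lock table <clustername>:<fsname>,
# and one journal per cluster node
mkfs.gfs2 -p lock_dlm -t cluster1:GFS -j 2 /dev/sdc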
Click on Create
Provide the following details for the cluster:
Cluster name: Cluster1 (as provided above)
Node Name: node1.cluster (192.168.1.5) and node2.cluster (192.168.1.6); make sure the hostnames are resolvable
Password: as provided for the ricci agent in Step 6
Check the Shared storage box as we are using GFS2
Once you click on Submit, the procedure to add the nodes will start (if everything goes well; otherwise it will throw an error).
Now the nodes are added but they are shown in red. Let us check the reason behind it: click on any of the nodes for more details.
It looks like most of the cluster services are not running. Let us log in to the console and start the services.
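The exact commands are not captured in the output below, but on CentOS 6 the cluster stack is normally brought up with the cman and rgmanager init scripts. A sketch of what would be run on each node (the stop/start messages that follow are consistent with such a restart):

# Restart the core cluster stack and start the resource group manager
service cman restart
service rgmanager start

# Make both services persistent across reboots
chkconfig cman on
chkconfig rgmanager on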
Stopping cluster:
   Leaving fence domain...                                 [  OK  ]
   Stopping gfs_controld...                                [  OK  ]
   Stopping dlm_controld...                                [  OK  ]
   Stopping fenced...                                      [  OK  ]
   Stopping cman...                                        [  OK  ]
IMPORTANT NOTE: If you are planning to configure a Red Hat Cluster then make sure the NetworkManager service is not running.
[root@node1 ~]# service NetworkManager stop
Stopping NetworkManager daemon:                            [  OK  ]
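To keep NetworkManager from coming back after a reboot, it can also be disabled at boot:

# Disable NetworkManager at boot on both cluster nodes
chkconfig NetworkManager off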
Starting cluster:
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
Now that all the services have started, let us refresh the web console and see the changes.
All the services are running and there is no more warning message on either the cluster or the nodes.
If everything goes fine you should be able to see the below message.
Give a name to your failover domain and follow the settings as shown below.
Select GFS2 from the drop-down menu and fill in the details:
Name: give any name
Mount Point: before giving the mount point, make sure it exists on both the nodes
Let us create these mount points on node1 and node2
[root@node1 ~]# mkdir /GFS
[root@node2 ~]# mkdir /GFS
Next, fill in the device details for the disk we formatted for GFS2, i.e. /dev/sdc.
Check the Force Unmount box and click on Submit.
You will see the below box on your screen. Select the resource we created in Step 11.
As soon as you select GFS, all the saved settings under the GFS resource will be visible under the service group section as shown below. Click on Submit to save the changes.
Once you click on Submit, refresh the web console and you should be able to see the GFS service running on your cluster on one of the nodes as shown below.
13. Verification
On node1
[root@node1 ~]# clustat
Cluster Status for cluster1 @ Wed Feb 26 00:49:04 2014
Member Status: Quorate
 Member Name                                 ID   Status
 ------ ----                                 ---- ------
 node1.cluster
 node2.cluster                                  2 Online, rgmanager
 /dev/block/8:16

 Service Name                 Owner (Last)                 State
 ------- ----                 ----- ------                 -----
 service:GFS                  node1.cluster                started
So, if GFS is running on node1 then the GFS2 filesystem should be mounted on /GFS on node1. Let us verify:
[root@node1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
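A quicker check on whichever node currently owns the service (a sketch; /GFS is the mount point created earlier):

# Verify the GFS2 filesystem is mounted on the active node
mount | grep /GFS
df -h /GFS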
On node2
[root@node2 ~]# clustat
 Member Name                                 ID   Status
 ------ ----                                 ---- ------
 node1.cluster
 node2.cluster                                  2 Online, rgmanager
 /dev/block/8:16

 Service Name                 Owner (Last)                 State
 ------- ----                 ----- ------                 -----
 service:GFS                  node2.cluster                started
[root@node2 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1
/dev/sr0                                  0 100% /media/CentOS_6.5_Final
/dev/sdc
If the service is not running, it can be enabled (started) manually with clusvcadm:
# clusvcadm -e GFS
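For a basic failover test the service can also be moved between the nodes with clusvcadm, for example (a sketch using the node names from this lab):

# Relocate the GFS service to node2, then check the owner again
clusvcadm -r GFS -m node2.cluster
clustat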