
Configuring and Managing a Red Hat Cluster

Red Hat Cluster Suite for Red Hat Enterprise Linux 4.6

ISBN: N/A
Publication date:
Configuring and Managing a Red Hat Cluster describes the configuration and management of Red Hat cluster systems for Red Hat Enterprise Linux 4.6. It does not include information about Red Hat Linux Virtual Servers (LVS). Information about installing and configuring LVS is in a separate document.
Configuring and Managing a Red Hat Cluster: Red
Hat Cluster Suite for Red Hat Enterprise Linux 4.6
Copyright 2007 Red Hat, Inc.

Copyright 2007 by Red Hat, Inc. This material may be distributed only subject to the terms and conditions set forth in the Open Publication License, V1.0 or later (the latest version is presently available at http://www.opencontent.org/openpub/).

Distribution of substantively modified versions of this document is prohibited without the explicit permission of the copyright holder.

Distribution of the work or derivative of the work in any standard (paper) book form for commercial purposes is prohibited unless prior permission is obtained from the copyright holder.

Red Hat and the Red Hat "Shadow Man" logo are registered trademarks of Red Hat, Inc. in the United States and other countries.

All other trademarks referenced herein are the property of their respective owners.
The GPG fingerprint of the security@redhat.com key is:
CA 20 86 86 2B D6 9D FC 65 F6 EC C4 21 91 80 CD DB 42 A6 0E
1801 Varsity Drive
Raleigh, NC 27606-2072
USA
Phone: +1 919 754 3700
Phone: 888 733 4281
Fax: +1 919 754 3701
PO Box 13588
Research Triangle Park, NC 27709
USA
Introduction
   1. Document Conventions
   2. Feedback
1. Red Hat Cluster Configuration and Management Overview
   1. Configuration Basics
      1.1. Setting Up Hardware
      1.2. Installing Red Hat Cluster software
      1.3. Configuring Red Hat Cluster Software
   2. Conga
   3. system-config-cluster Cluster Administration GUI
      3.1. Cluster Configuration Tool
      3.2. Cluster Status Tool
   4. Command Line Administration Tools
2. Before Configuring a Red Hat Cluster
   1. Compatible Hardware
   2. Enabling IP Ports
      2.1. Enabling IP Ports on Cluster Nodes
      2.2. Enabling IP Ports on Computers That Run luci
      2.3. Examples of iptables Rules
   3. Configuring ACPI For Use with Integrated Fence Devices
      3.1. Disabling ACPI Soft-Off with chkconfig Management
      3.2. Disabling ACPI Soft-Off with the BIOS
      3.3. Disabling ACPI Completely in the grub.conf File
   4. Configuring max_luns
   5. Considerations for Using Quorum Disk
   6. Red Hat Cluster Suite and SELinux
   7. Considerations for Using Conga
   8. General Configuration Considerations
3. Configuring Red Hat Cluster With Conga
   1. Configuration Tasks
   2. Starting luci and ricci
   3. Creating A Cluster
   4. Global Cluster Properties
   5. Configuring Fence Devices
      5.1. Creating a Shared Fence Device
      5.2. Modifying or Deleting a Fence Device
   6. Configuring Cluster Members
      6.1. Initially Configuring Members
      6.2. Adding a Member to a Running Cluster
      6.3. Deleting a Member from a Cluster
   7. Configuring a Failover Domain
      7.1. Adding a Failover Domain
      7.2. Modifying a Failover Domain
   8. Adding Cluster Resources
   9. Adding a Cluster Service to the Cluster
   10. Configuring Cluster Storage
4. Managing Red Hat Cluster With Conga
   1. Starting, Stopping, and Deleting Clusters
   2. Managing Cluster Nodes
   3. Managing High-Availability Services
   4. Diagnosing and Correcting Problems in a Cluster
5. Configuring Red Hat Cluster With system-config-cluster
   1. Configuration Tasks
   2. Starting the Cluster Configuration Tool
   3. Configuring Cluster Properties
   4. Configuring Fence Devices
   5. Adding and Deleting Members
      5.1. Adding a Member to a New Cluster
      5.2. Adding a Member to a Running DLM Cluster
      5.3. Deleting a Member from a DLM Cluster
      5.4. Adding a GULM Client-only Member
      5.5. Deleting a GULM Client-only Member
      5.6. Adding or Deleting a GULM Lock Server Member
   6. Configuring a Failover Domain
      6.1. Adding a Failover Domain
      6.2. Removing a Failover Domain
      6.3. Removing a Member from a Failover Domain
   7. Adding Cluster Resources
   8. Adding a Cluster Service to the Cluster
   9. Propagating The Configuration File: New Cluster
   10. Starting the Cluster Software
6. Managing Red Hat Cluster With system-config-cluster
   1. Starting and Stopping the Cluster Software
   2. Managing High-Availability Services
   3. Modifying the Cluster Configuration
   4. Backing Up and Restoring the Cluster Database
   5. Disabling the Cluster Software
   6. Diagnosing and Correcting Problems in a Cluster
A. Example of Setting Up Apache HTTP Server
   1. Apache HTTP Server Setup Overview
   2. Configuring Shared Storage
   3. Installing and Configuring the Apache HTTP Server
B. Fence Device Parameters
Index
Introduction

This document provides information about installing, configuring and managing Red Hat Cluster components. Red Hat Cluster components are part of Red Hat Cluster Suite and allow you to connect a group of computers (called nodes or members) to work together as a cluster. This document does not include information about installing, configuring, and managing Linux Virtual Server (LVS) software. Information about that is in a separate document.
The audience of this document should have advanced working knowledge of Red Hat Enterprise Linux and understand the concepts of clusters, storage, and server computing.

This document is organized as follows:
Chapter 1, Red Hat Cluster Configuration and Management Overview
Chapter 2, Before Configuring a Red Hat Cluster
Chapter 3, Configuring Red Hat Cluster With Conga
Chapter 4, Managing Red Hat Cluster With Conga
Chapter 5, Configuring Red Hat Cluster With system-config-cluster
Chapter 6, Managing Red Hat Cluster With system-config-cluster
Appendix A, Example of Setting Up Apache HTTP Server
Appendix B, Fence Device Parameters
For more information about Red Hat Enterprise Linux 4.6, refer to the following resources:

Red Hat Enterprise Linux Installation Guide - Provides information regarding installation.

Red Hat Enterprise Linux Introduction to System Administration - Provides introductory information for new Red Hat Enterprise Linux system administrators.

Red Hat Enterprise Linux System Administration Guide - Provides more detailed information about configuring Red Hat Enterprise Linux to suit your particular needs as a user.
Red Hat Enterprise Linux Reference Guide - Provides detailed information suited for more experienced users to reference when needed, as opposed to step-by-step instructions.

Red Hat Enterprise Linux Security Guide - Details the planning and the tools involved in creating a secured computing environment for the data center, workplace, and home.

For more information about Red Hat Cluster Suite for Red Hat Enterprise Linux 4.6 and related products, refer to the following resources:

Red Hat Cluster Suite Overview - Provides a high-level overview of the Red Hat Cluster Suite.

LVM Administrator's Guide: Configuration and Administration - Provides a description of the Logical Volume Manager (LVM), including information on running LVM in a clustered environment.

Global File System: Configuration and Administration - Provides information about installing, configuring, and maintaining Red Hat GFS (Red Hat Global File System).

Using Device-Mapper Multipath - Provides information about using the Device-Mapper Multipath feature of Red Hat Enterprise Linux 4.6.

Using GNBD with Global File System - Provides an overview on using Global Network Block Device (GNBD) with Red Hat GFS.

Linux Virtual Server Administration - Provides information on configuring high-performance systems and services with the Linux Virtual Server (LVS).

Red Hat Cluster Suite Release Notes - Provides information about the current release of Red Hat Cluster Suite.

Red Hat Cluster Suite documentation and other Red Hat documents are available in HTML, PDF, and RPM versions on the Red Hat Enterprise Linux Documentation CD and online at http://www.redhat.com/docs/.
1. Document Conventions
Certain words in this manual are represented in different fonts, styles, and weights. This highlighting indicates that the word is part of a specific category. The categories include the following:

Courier font
   Courier font represents commands, file names and paths, and prompts.

   When shown as below, it indicates computer output:

   Desktop       about.html    logs    paulwesterberg.png
   Mail          backupfiles   mail    reports

bold Courier font
   Bold Courier font represents text that you are to type, such as: service jonas start

   If you have to run a command as root, the root prompt (#) precedes the command:

   # gconftool-2

italic Courier font
   Italic Courier font represents a variable, such as an installation directory: install_dir/bin/

bold font
   Bold font represents application programs and text found on a graphical interface.

   When shown like this: OK, it indicates a button on a graphical application interface.

Additionally, the manual uses different strategies to draw your attention to pieces of information. In order of how critical the information is to you, these items are marked as follows:
Note
   A note is typically information that you need to understand the behavior of the system.

Tip
   A tip is typically an alternative way of performing a task.

Important
   Important information is necessary, but possibly unexpected, such as a configuration change that will not persist after a reboot.

Caution
   A caution indicates an act that would violate your support agreement, such as recompiling the kernel.

Warning
   A warning indicates potential data loss, as may happen when tuning hardware for maximum performance.
2. Feedback
If you spot a typo, or if you have thought of a way to make this manual better, we would love to hear from you. Please submit a report in Bugzilla (http://bugzilla.redhat.com/bugzilla/) against the component rh-cs.

Be sure to mention the manual's identifier:

rh-cs(EN)-4.6 (2007-11-14T22:02)

By mentioning this manual's identifier, we know exactly which version of the guide you have.

If you have a suggestion for improving the documentation, try to be as specific as possible. If you have found an error, please include the section number and some of the surrounding text so we can find it easily.
Chapter 1. Red Hat Cluster Configuration and Management Overview
Red Hat Cluster allows you to connect a group of computers (called nodes or members) to work together as a cluster. You can use Red Hat Cluster to suit your clustering needs (for example, setting up a cluster for sharing files on a GFS file system or setting up service failover).
1. Configuration Basics
To set up a cluster, you must connect the nodes to certain cluster hardware and configure the nodes into the cluster environment. This chapter provides an overview of cluster configuration and management, and tools available for configuring and managing a Red Hat Cluster.

Configuring and managing a Red Hat Cluster consists of the following basic steps:

1. Setting up hardware. Refer to Section 1.1, "Setting Up Hardware".

2. Installing Red Hat Cluster software. Refer to Section 1.2, "Installing Red Hat Cluster software".

3. Configuring Red Hat Cluster Software. Refer to Section 1.3, "Configuring Red Hat Cluster Software".
1.1. Setting Up Hardware
Setting up hardware consists of connecting cluster nodes to other hardware required to run a Red Hat Cluster. The amount and type of hardware varies according to the purpose and availability requirements of the cluster. Typically, an enterprise-level cluster requires the following type of hardware (refer to Figure 1.1, "Red Hat Cluster Hardware Overview").

Cluster nodes - Computers that are capable of running Red Hat Enterprise Linux 4 software, with at least 1GB of RAM.

Ethernet switch or hub for public network - This is required for client access to the cluster.
Ethernet switch or hub for private network - This is required for communication among the cluster nodes and other cluster hardware such as network power switches and Fibre Channel switches.

Network power switch - A network power switch is recommended to perform fencing in an enterprise-level cluster.

Fibre Channel switch - A Fibre Channel switch provides access to Fibre Channel storage. Other options are available for storage according to the type of storage interface; for example, SCSI or GNBD. A Fibre Channel switch can be configured to perform fencing.

Storage - Some type of storage is required for a cluster. The type required depends on the purpose of the cluster.

For considerations about hardware and other cluster configuration concerns, refer to Chapter 2, Before Configuring a Red Hat Cluster or check with an authorized Red Hat representative.
Figure 1.1. Red Hat Cluster Hardware Overview
1.2. Installing Red Hat Cluster software
To install Red Hat Cluster software, you must have entitlements for the software. If you are using the Conga configuration GUI, you can let it install the cluster software. If you are using other tools to configure the cluster, secure and install the software as you would with Red Hat Enterprise Linux software.
1.3. Configuring Red Hat Cluster Software
Configuring Red Hat Cluster software consists of using configuration tools to specify the relationship among the cluster components. Figure 1.2, "Cluster Configuration Structure" shows an example of the hierarchical relationship among cluster nodes, high-availability services, and resources. The cluster nodes are connected to one or more fencing devices. Nodes can be grouped into a failover domain for a cluster service. The services comprise resources such as NFS exports, IP addresses, and shared GFS partitions.
Figure 1.2. Cluster Configuration Structure
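The hierarchy shown in Figure 1.2 maps directly onto the cluster configuration file, /etc/cluster/cluster.conf. The following is an illustrative sketch only; the element names are those used by the configuration file, but the node, fence device, and service names are hypothetical:

   <?xml version="1.0"?>
   <cluster name="example" config_version="1">
     <clusternodes>
       <clusternode name="node-01" votes="1">
         <fence>
           <method name="1">
             <device name="apc-switch" port="1"/>
           </method>
         </fence>
       </clusternode>
       <clusternode name="node-02" votes="1">
         <fence>
           <method name="1">
             <device name="apc-switch" port="2"/>
           </method>
         </fence>
       </clusternode>
     </clusternodes>
     <fencedevices>
       <fencedevice agent="fence_apc" name="apc-switch" ipaddr="10.10.10.5" login="admin" passwd="secret"/>
     </fencedevices>
     <rm>
       <failoverdomains/>
       <service name="webservice" autostart="1">
         <ip address="10.10.10.100" monitor_link="1"/>
       </service>
     </rm>
   </cluster>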
The following cluster configuration tools are available with Red Hat Cluster:

Conga - This is a comprehensive user interface for installing, configuring, and managing Red Hat clusters, computers, and storage attached to clusters and computers.

system-config-cluster - This is a user interface for configuring and managing a Red Hat cluster.

Command line tools - This is a set of command line tools for configuring and managing a Red Hat cluster.
A brief overview of each configuration tool is provided in the following sections:

Section 2, "Conga"

Section 3, "system-config-cluster Cluster Administration GUI"

Section 4, "Command Line Administration Tools"

In addition, information about using Conga and system-config-cluster is provided in subsequent chapters of this document. Information about the command line tools is available in the man pages for the tools.
2. Conga
Conga is an integrated set of software components that provides centralized configuration and management of Red Hat clusters and storage. Conga provides the following major features:

One Web interface for managing cluster and storage

Automated Deployment of Cluster Data and Supporting Packages

Easy Integration with Existing Clusters

No Need to Re-Authenticate

Integration of Cluster Status and Logs

Fine-Grained Control over User Permissions

The primary components in Conga are luci and ricci, which are separately installable. luci is a server that runs on one computer and communicates with multiple clusters and computers via ricci. ricci is an agent that runs on each computer (either a cluster member or a standalone computer) managed by Conga.

luci is accessible through a Web browser and provides three major functions that are accessible through the following tabs:

homebase - Provides tools for adding and deleting computers, adding and deleting users, and configuring user privileges. Only a system administrator is allowed to access this tab.

cluster - Provides tools for creating and configuring clusters. Each instance of luci lists clusters that have been set up with that luci. A system administrator can administer all clusters listed on this tab. Other users can administer only clusters that the user has permission to manage (granted by an administrator).

storage - Provides tools for remote administration of storage. With the tools on this tab, you can manage storage on computers whether they belong to a cluster or not.
To administer a cluster or storage, an administrator adds (or registers) a cluster or a computer to a luci server. When a cluster or a computer is registered with luci, the FQDN hostname or IP address of each computer is stored in a luci database.

You can populate the database of one luci instance from another luci instance. That capability provides a means of replicating a luci server instance and provides an efficient upgrade and testing path. When you install an instance of luci, its database is empty. However, you can import part or all of a luci database from an existing luci server when deploying a new luci server.

Each luci instance has one user at initial installation - admin. Only the admin user may add systems to a luci server. Also, the admin user can create additional user accounts and determine which users are allowed to access clusters and computers registered in the luci database. It is possible to import users as a batch operation in a new luci server, just as it is possible to import clusters and computers.

When a computer is added to a luci server to be administered, authentication is done once. No authentication is necessary from then on (unless the certificate used is revoked by a CA). After that, you can remotely configure and manage clusters and storage through the luci user interface. luci and ricci communicate with each other via XML.
The following figures show sample displays of the three major luci tabs: homebase, cluster, and storage.

For more information about Conga, refer to Chapter 3, Configuring Red Hat Cluster With Conga, Chapter 4, Managing Red Hat Cluster With Conga, and the online help available with the luci server.
Figure 1.3. luci homebase Tab
Figure 1.4. luci cluster Tab
Figure 1.5. luci storage Tab
3. system-config-cluster Cluster Administration GUI
This section provides an overview of the cluster administration graphical user interface (GUI) available with Red Hat Cluster Suite - system-config-cluster. It is for use with the cluster infrastructure and the high-availability service management components. system-config-cluster consists of two major functions: the Cluster Configuration Tool and the Cluster Status Tool. The Cluster Configuration Tool provides the capability to create, edit, and propagate the cluster configuration file (/etc/cluster/cluster.conf). The Cluster Status Tool provides the capability to manage high-availability services. The following sections summarize those functions.
Note

While system-config-cluster provides several convenient tools for configuring and managing a Red Hat Cluster, the newer, more comprehensive tool, Conga, provides more convenience and flexibility than system-config-cluster.
3.1. Cluster Configuration Tool
You can access the Cluster Configuration Tool (Figure 1.6, "Cluster Configuration Tool") through the Cluster Configuration tab in the Cluster Administration GUI.
Figure 1.6. Cluster Configuration Tool
The Cluster Configuration Tool represents cluster configuration components in the configuration file (/etc/cluster/cluster.conf) with a hierarchical graphical display in the left pane. A triangle icon to the left of a component name indicates that the component has one or more subordinate components assigned to it. Clicking the triangle icon expands and collapses the portion of the tree below a component. The components displayed in the GUI are summarized as follows:

Cluster Nodes - Displays cluster nodes. Nodes are represented by name as subordinate elements under Cluster Nodes. Using configuration buttons at the bottom of the right frame (below Properties), you can add nodes, delete nodes, edit node properties, and configure fencing methods for each node.
Fence Devices - Displays fence devices. Fence devices are represented as subordinate elements under Fence Devices. Using configuration buttons at the bottom of the right frame (below Properties), you can add fence devices, delete fence devices, and edit fence-device properties. Fence devices must be defined before you can configure fencing (with the Manage Fencing For This Node button) for each node.

Managed Resources - Displays failover domains, resources, and services.

Failover Domains - For configuring one or more subsets of cluster nodes used to run a high-availability service in the event of a node failure. Failover domains are represented as subordinate elements under Failover Domains. Using configuration buttons at the bottom of the right frame (below Properties), you can create failover domains (when Failover Domains is selected) or edit failover domain properties (when a failover domain is selected).

Resources - For configuring shared resources to be used by high-availability services. Shared resources consist of file systems, IP addresses, NFS mounts and exports, and user-created scripts that are available to any high-availability service in the cluster. Resources are represented as subordinate elements under Resources. Using configuration buttons at the bottom of the right frame (below Properties), you can create resources (when Resources is selected) or edit resource properties (when a resource is selected).

Note

The Cluster Configuration Tool provides the capability to configure private resources, also. A private resource is a resource that is configured for use with only one service. You can configure a private resource within a Service component in the GUI.

Services - For creating and configuring high-availability services. A service is configured by assigning resources (shared or private), assigning a failover domain, and defining a recovery policy for the service. Services are represented as subordinate elements under Services. Using configuration buttons at the bottom of the right frame (below Properties), you can create services (when Services is selected) or edit service properties (when a service is selected).
3.2. Cluster Status Tool
You can access the Cluster Status Tool (Figure 1.7, "Cluster Status Tool") through the Cluster Management tab in the Cluster Administration GUI.
Figure 1.7. Cluster Status Tool
The nodes and services displayed in the Cluster Status Tool are determined by the cluster configuration file (/etc/cluster/cluster.conf). You can use the Cluster Status Tool to enable, disable, restart, or relocate a high-availability service.
4. Command Line Administration Tools
In addition to Conga and the system-config-cluster Cluster Administration GUI, command line tools are available for administering the cluster infrastructure and the high-availability service management components. The command line tools are used by the Cluster Administration GUI and init scripts supplied by Red Hat. Table 1.1, "Command Line Tools" summarizes the command line tools.
ccs_tool - Cluster Configuration System Tool
Used with: Cluster Infrastructure
ccs_tool is a program for making online updates to the cluster configuration file. It provides the capability to create and modify cluster infrastructure components (for example, creating a cluster, adding and removing a node). For more information about this tool, refer to the ccs_tool(8) man page.

cman_tool - Cluster Management Tool
Used with: Cluster Infrastructure
cman_tool is a program that manages the CMAN cluster manager. It provides the capability to join a cluster, leave a cluster, kill a node, or change the expected quorum votes of a node in a cluster. cman_tool is available with DLM clusters only. For more information about this tool, refer to the cman_tool(8) man page.

gulm_tool - Cluster Management Tool
Used with: Cluster Infrastructure
gulm_tool is a program used to manage GULM. It provides an interface to lock_gulmd, the GULM lock manager. gulm_tool is available with GULM clusters only. For more information about this tool, refer to the gulm_tool(8) man page.

fence_tool - Fence Tool
Used with: Cluster Infrastructure
fence_tool is a program used to join or leave the default fence domain. Specifically, it starts the fence daemon (fenced) to join the domain and kills fenced to leave the domain. fence_tool is available with DLM clusters only. For more information about this tool, refer to the fence_tool(8) man page.

clustat - Cluster Status Utility
Used with: High-availability Service Management Components
The clustat command displays the status of the cluster. It shows membership information, quorum view, and the state of all configured user services. For more information about this tool, refer to the clustat(8) man page.

clusvcadm - Cluster User Service Administration Utility
Used with: High-availability Service Management Components
The clusvcadm command allows you to enable, disable, relocate, and restart high-availability services in a cluster. For more information about this tool, refer to the clusvcadm(8) man page.

Table 1.1. Command Line Tools
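For instance, a typical administrative session might combine two of these tools: clustat to check membership and service state, and clusvcadm to move a service to another member. The service and node names below are hypothetical:

   # clustat
   # clusvcadm -r webservice -m node-02.example.com

The -r option relocates the named service to the member given with -m; -e and -d enable and disable a service. Refer to the man pages cited above for the full option lists.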
Chapter 2. Before Configuring a Red Hat Cluster
This chapter describes tasks to perform and considerations to make before installing and configuring a Red Hat Cluster, and consists of the following sections:

Section 1, "Compatible Hardware"
Section 2, "Enabling IP Ports"
Section 3, "Configuring ACPI For Use with Integrated Fence Devices"
Section 4, "Configuring max_luns"
Section 5, "Considerations for Using Quorum Disk"
Section 6, "Red Hat Cluster Suite and SELinux"
Section 7, "Considerations for Using Conga"
Section 8, "General Configuration Considerations"
1. Compatible Hardware
Before configuring Red Hat Cluster software, make sure that your cluster uses appropriate hardware (for example, supported fence devices, storage devices, and Fibre Channel switches). Refer to the hardware configuration guidelines at http://www.redhat.com/cluster_suite/hardware/ for the most current hardware compatibility information.
2. Enabling IP Ports
Before deploying a Red Hat Cluster, you must enable certain IP ports on the cluster nodes and on computers that run luci (the Conga user interface server). The following sections specify the IP ports to be enabled and provide examples of iptables rules for enabling the ports:

Section 2.1, "Enabling IP Ports on Cluster Nodes"
Section 2.2, "Enabling IP Ports on Computers That Run luci"
Section 2.3, "Examples of iptables Rules"
2.1. Enabling IP Ports on Cluster Nodes
To allow Red Hat Cluster nodes to communicate with each other, you must enable the IP ports assigned to certain Red Hat Cluster components. Table 2.1, "Enabled IP Ports on Red Hat Cluster Nodes" lists the IP port numbers, their respective protocols, the components to which the port numbers are assigned, and references to iptables rule examples. At each cluster node, enable IP ports according to Table 2.1, "Enabled IP Ports on Red Hat Cluster Nodes". (All examples are in Section 2.3, "Examples of iptables Rules".)
6809 (UDP) - cman (Cluster Manager), for use in clusters with Distributed Lock Manager (DLM) selected. Refer to Example 2.1, "Port 6809: cman".

11111 (TCP) - ricci (part of Conga remote agent). Refer to Example 2.3, "Port 11111: ricci (Cluster Node and Computer Running luci)".

14567 (TCP) - gnbd (Global Network Block Device). Refer to Example 2.4, "Port 14567: gnbd".

16851 (TCP) - modclusterd (part of Conga remote agent). Refer to Example 2.5, "Port 16851: modclusterd".

21064 (TCP) - dlm (Distributed Lock Manager), for use in clusters with Distributed Lock Manager (DLM) selected. Refer to Example 2.6, "Port 21064: dlm".

40040, 40042, 41040 (TCP) - lock_gulmd (GULM daemon), for use in clusters with Grand Unified Lock Manager (GULM) selected. Refer to Example 2.7, "Ports 40040, 40042, 41040: lock_gulmd".

41966, 41967, 41968, 41969 (TCP) - rgmanager (high-availability service management). Refer to Example 2.8, "Ports 41966, 41967, 41968, 41969: rgmanager".

50006, 50008, 50009 (TCP) - ccsd (Cluster Configuration System daemon). Refer to Example 2.9, "Ports 50006, 50008, 50009: ccsd (TCP)".

50007 (UDP) - ccsd (Cluster Configuration System daemon). Refer to Example 2.10, "Port 50007: ccsd (UDP)".

Table 2.1. Enabled IP Ports on Red Hat Cluster Nodes
2.2. Enabling IP Ports on Computers That Run luci
To allow client computers to communicate with a computer that runs luci (the Conga user interface server), and to allow a computer that runs luci to communicate with ricci in the cluster nodes, you must enable the IP ports assigned to luci and ricci. Table 2.2, "Enabled IP Ports on a Computer That Runs luci" lists the IP port numbers, their respective protocols, the components to which the port numbers are assigned, and references to iptables rule examples. At each computer that runs luci, enable IP ports according to Table 2.2, "Enabled IP Ports on a Computer That Runs luci". (All examples are in Section 2.3, "Examples of iptables Rules".)

Note

If a cluster node is running luci, port 11111 should already have been enabled.
8084 (TCP) - luci (Conga user interface server). Refer to Example 2.2, "Port 8084: luci (Cluster Node or Computer Running luci)".

11111 (TCP) - ricci (Conga remote agent). Refer to Example 2.3, "Port 11111: ricci (Cluster Node and Computer Running luci)".

Table 2.2. Enabled IP Ports on a Computer That Runs luci
2.3. Examples of iptables Rules

This section provides iptables rule examples for enabling IP ports on Red Hat Cluster nodes and computers that run luci. The examples enable IP ports for a computer having an IP address of 10.10.10.200, using a subnet mask of 10.10.10.0/24.

Note

Examples are for cluster nodes unless otherwise noted in the example titles.
-A INPUT -i 10.10.10.200 -m state --state NEW -p udp -s 10.10.10.0/24 -d 10.10.10.0/24 --dport 6809 -j ACCEPT

Example 2.1. Port 6809: cman

-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 8084 -j ACCEPT

Example 2.2. Port 8084: luci (Cluster Node or Computer Running luci)

-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 11111 -j ACCEPT

Example 2.3. Port 11111: ricci (Cluster Node and Computer Running luci)

-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 14567 -j ACCEPT

Example 2.4. Port 14567: gnbd

-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 16851 -j ACCEPT

Example 2.5. Port 16851: modclusterd

-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 21064 -j ACCEPT

Example 2.6. Port 21064: dlm

-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 40040,40042,41040 -j ACCEPT

Example 2.7. Ports 40040, 40042, 41040: lock_gulmd

-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 41966,41967,41968,41969 -j ACCEPT

Example 2.8. Ports 41966, 41967, 41968, 41969: rgmanager

-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 50006,50008,50009 -j ACCEPT

Example 2.9. Ports 50006, 50008, 50009: ccsd (TCP)

-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p udp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 50007 -j ACCEPT

Example 2.10. Port 50007: ccsd (UDP)
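Rules added interactively with iptables do not survive a reboot on their own. If your nodes use the standard iptables init script, you can make the rules above persistent; this is a sketch assuming the default /etc/sysconfig/iptables setup:

   # service iptables save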
3. Configuring ACPI For Use with Integrated Fence Devices

If your cluster uses integrated fence devices, you must configure ACPI (Advanced Configuration and Power Interface) to ensure immediate and complete fencing.

Note

For the most current information about integrated fence devices supported by Red Hat Cluster Suite, refer to http://www.redhat.com/cluster_suite/hardware/.
If a cluster node is configured to be fenced by an integrated fence device, disable ACPI Soft-Off for that node. Disabling ACPI Soft-Off allows an integrated fence device to turn off a node immediately and completely rather than attempting a clean shutdown (for example, shutdown -h now). Otherwise, if ACPI Soft-Off is enabled, an integrated fence device can take four or more seconds to turn off a node (refer to the note that follows). In addition, if ACPI Soft-Off is enabled and a node panics or freezes during shutdown, an integrated fence device may not be able to turn off the node. Under those circumstances, fencing is delayed or unsuccessful. Consequently, when a node is fenced with an integrated fence device and ACPI Soft-Off is enabled, a cluster recovers slowly or requires administrative intervention to recover.

Note

The amount of time required to fence a node depends on the integrated fence device used. Some integrated fence devices perform the equivalent of pressing and holding the power button; therefore, the fence device turns off the node in four to five seconds. Other integrated fence devices perform the equivalent of pressing the power button momentarily, relying on the operating system to turn off the node; therefore, the fence device turns off the node in a time span much longer than four to five seconds.
To disable ACPI Soft-Off, use chkconfig management and verify that the node turns off immediately when fenced. The preferred way to disable ACPI Soft-Off is with chkconfig management; however, if that method is not satisfactory for your cluster, you can disable ACPI Soft-Off with one of the following alternate methods:

Changing the BIOS setting to "instant-off" or an equivalent setting that turns off the node without delay

Note

Disabling ACPI Soft-Off with the BIOS may not be possible with some computers.

Appending acpi=off to the kernel boot command line of the /boot/grub/grub.conf file

Important

This method completely disables ACPI; some computers do not boot correctly if ACPI is completely disabled. Use this method only if the other methods are not effective for your cluster.

The following sections provide procedures for the preferred method and alternate methods of disabling ACPI Soft-Off:

Section 3.1, "Disabling ACPI Soft-Off with chkconfig Management" - Preferred method

Section 3.2, "Disabling ACPI Soft-Off with the BIOS" - First alternate method

Section 3.3, "Disabling ACPI Completely in the grub.conf File" - Second alternate method
3.1. Disabling ACPI Soft-Off with chkconfig Management
You can use chkconfig management to disable ACPI Soft-Off either by removing the ACPI daemon (acpid) from chkconfig management or by turning off acpid.

Note

This is the preferred method of disabling ACPI Soft-Off.

Disable ACPI Soft-Off with chkconfig management at each cluster node as follows:

1. Run either of the following commands:

   chkconfig --del acpid - This command removes acpid from chkconfig management.

   - OR -

   chkconfig --level 2345 acpid off - This command turns off acpid.

2. Reboot the node.

3. When the cluster is configured and running, verify that the node turns off immediately when fenced.

Tip

You can fence the node with the fence_node command or Conga.
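As a quick check before rebooting, you can confirm how acpid is configured with chkconfig; the output shown here is illustrative and varies by system:

   # chkconfig --list acpid
   acpid           0:off   1:off   2:off   3:off   4:off   5:off   6:off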
3.2. Disabling ACPI Soft-Off with the BIOS
The preferred method of disabling ACPI Soft-Off is with chkconfig management (Section 3.1, "Disabling ACPI Soft-Off with chkconfig Management"). However, if the preferred method is not effective for your cluster, follow the procedure in this section.

Note

Disabling ACPI Soft-Off with the BIOS may not be possible with some computers.

You can disable ACPI Soft-Off by configuring the BIOS of each cluster node as follows:

1. Reboot the node and start the BIOS CMOS Setup Utility program.

2. Navigate to the Power menu (or equivalent power management menu).

3. At the Power menu, set the Soft-Off by PWR-BTTN function (or equivalent) to Instant-Off (or the equivalent setting that turns off the node via the power button without delay). Example 2.11, "BIOS CMOS Setup Utility: Soft-Off by PWR-BTTN set to Instant-Off" shows a Power menu with ACPI Function set to Enabled and Soft-Off by PWR-BTTN set to Instant-Off.

Note

The equivalents to ACPI Function, Soft-Off by PWR-BTTN, and Instant-Off may vary among computers. However, the objective of this procedure is to configure the BIOS so that the computer is turned off via the power button without delay.

4. Exit the BIOS CMOS Setup Utility program, saving the BIOS configuration.

5. When the cluster is configured and running, verify that the node turns off immediately when fenced.

Tip

You can fence the node with the fence_node command or Conga.
+-------------------------------------------------|------------------------+
|    ACPI Function               [Enabled]        |    Item Help           |
|    ACPI Suspend Type           [S1(POS)]        |------------------------|
|  x Run VGABIOS if S3 Resume     Auto            |    Menu Level   *      |
|    Suspend Mode                [Disabled]       |                        |
|    HDD Power Down              [Disabled]       |                        |
|    Soft-Off by PWR-BTTN        [Instant-Off]    |                        |
|    CPU THRM-Throttling         [50.0%]          |                        |
|    Wake-Up by PCI card         [Enabled]        |                        |
|    Power On by Ring            [Enabled]        |                        |
|    Wake Up On LAN              [Enabled]        |                        |
|  x USB KB Wake-Up From S3       Disabled        |                        |
|    Resume by Alarm             [Disabled]       |                        |
|  x Date(of Month) Alarm         0               |                        |
|  x Time(hh:mm:ss) Alarm         0 :  0 :  0     |                        |
|    POWER ON Function           [BUTTON ONLY]    |                        |
|  x KB Power ON Password         Enter           |                        |
|  x Hot Key Power ON             Ctrl-F1         |                        |
|                                                 |                        |
|                                                 |                        |
+-------------------------------------------------|------------------------+

This example shows ACPI Function set to Enabled, and Soft-Off by PWR-BTTN set to Instant-Off.

Example 2.11. BIOS CMOS Setup Utility: Soft-Off by PWR-BTTN set to Instant-Off
3.3. Disabling ACPI Completely in the grub.conf File
The preferred method of disabling ACPI Soft-Off is with chkconfig management (Section 3.1, "Disabling ACPI Soft-Off with chkconfig Management"). If the preferred method is not effective for your cluster, you can disable ACPI Soft-Off with the BIOS power management (Section 3.2, "Disabling ACPI Soft-Off with the BIOS"). If neither of those methods is effective for your cluster, you can disable ACPI completely by appending acpi=off to the kernel boot command line in the grub.conf file.

Important

This method completely disables ACPI; some computers do not boot correctly if ACPI is completely disabled. Use this method only if the other methods are not effective for your cluster.

You can disable ACPI completely by editing the grub.conf file of each cluster node as follows:

1. Open /boot/grub/grub.conf with a text editor.

2. Append acpi=off to the kernel boot command line in /boot/grub/grub.conf (refer to Example 2.12, "Kernel Boot Command Line with acpi=off Appended to It").

3. Reboot the node.

4. When the cluster is configured and running, verify that the node turns off immediately when fenced.

Tip

You can fence the node with the fence_node command or Conga.
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
#          initrd /initrd-version.img
#boot=/dev/hda
default=0
timeout=5
serial --unit=0 --speed=115200
terminal --timeout=5 serial console
title Red Hat Enterprise Linux Server (2.6.18-36.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-36.el5 ro root=/dev/VolGroup00/LogVol00 console=ttyS0,115200n8 acpi=off
        initrd /initrd-2.6.18-36.el5.img

In this example, acpi=off has been appended to the kernel boot command line - the line starting with "kernel /vmlinuz-2.6.18-36.el5".

Example 2.12. Kernel Boot Command Line with acpi=off Appended to It
4. Configuring max_luns
If RAID storage in your cluster presents multiple LUNs (Logical Unit Numbers), each cluster node must be able to access those LUNs. To enable access to all LUNs presented, configure max_luns in the /etc/modprobe.conf file of each node as follows:

1. Open /etc/modprobe.conf with a text editor.

2. Append the following line to /etc/modprobe.conf. Set N to the highest numbered LUN that is presented by RAID storage.

   options scsi_mod max_luns=N

   For example, with the following line appended to the /etc/modprobe.conf file, a node can access LUNs numbered as high as 255:

   options scsi_mod max_luns=255

3. Save /etc/modprobe.conf.

4. Run mkinitrd to rebuild initrd for the currently running kernel as follows. Set the kernel variable to the currently running kernel:

   # cd /boot
   # mkinitrd -f -v initrd-kernel.img kernel

   For example, the currently running kernel in the following mkinitrd command is 2.6.9-34.0.2.EL:

   # mkinitrd -f -v initrd-2.6.9-34.0.2.EL.img 2.6.9-34.0.2.EL

   Tip

   You can determine the currently running kernel by running uname -r.

5. Restart the node.
5. Considerations for Using Quorum Disk

Quorum Disk is a disk-based quorum daemon, qdiskd, that provides supplemental heuristics to determine node fitness. With heuristics you can determine factors that are important to the operation of the node in the event of a network partition. For example, in a four-node cluster with a 3:1 split, ordinarily, the three nodes automatically "win" because of the three-to-one majority. Under those circumstances, the one node is fenced. With qdiskd however, you can set up heuristics that allow the one node to win based on access to a critical resource (for example, a critical network path). If your cluster requires additional methods of determining node health, then you should configure qdiskd to meet those needs.

Note

Configuring qdiskd is not required unless you have special requirements for node health. An example of a special requirement is an "all-but-one" configuration. In an all-but-one configuration, qdiskd is configured to provide enough quorum votes to maintain quorum even though only one node is working.
Important

Overall, heuristics and other qdiskd parameters for your Red Hat Cluster depend on the site environment and special requirements needed. To understand the use of heuristics and other qdiskd parameters, refer to the qdisk(5) man page. If you require assistance understanding and using qdiskd for your site, contact an authorized Red Hat support representative.

If you need to use qdiskd, you should take into account the following considerations:

Cluster node votes
Each cluster node should have the same number of votes.

CMAN membership timeout value
The CMAN membership timeout value (the time a node needs to be unresponsive before CMAN considers that node to be dead, and not a member) should be at least two times that of the qdiskd membership timeout value. The reason is because the quorum daemon must detect failed nodes on its own, and can take much longer to do so than CMAN. The default value for CMAN membership timeout is 10 seconds. Other site-specific conditions may affect the relationship between the membership timeout values of CMAN and qdiskd. For assistance with adjusting the CMAN membership timeout value, contact an authorized Red Hat support representative.

Fencing
To ensure reliable fencing when using qdiskd, use power fencing. While other types of fencing (such as watchdog timers and software-based solutions to reboot a node internally) can be reliable for clusters not configured with qdiskd, they are not reliable for a cluster configured with qdiskd.

Maximum nodes
A cluster configured with qdiskd supports a maximum of 16 nodes. The reason for the limit is because of scalability; increasing the node count increases the amount of synchronous I/O contention on the shared quorum disk device.

Quorum disk device
A quorum disk device should be a shared block device with concurrent read/write access by all nodes in a cluster. The minimum size of the block device is 10 Megabytes. Examples of shared block devices that can be used by qdiskd are a multi-port SCSI RAID array, a Fibre Channel RAID SAN, or a RAID-configured iSCSI target. You can create a quorum disk device with mkqdisk, the Cluster Quorum Disk Utility. For information about using the utility refer to the mkqdisk(8) man page.
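For example, you might create and label a quorum disk on a shared partition and then verify that it is visible; the device name and label here are hypothetical:

   # mkqdisk -c /dev/sdb1 -l myqdisk
   # mkqdisk -L

The -c and -l options initialize the device with a cluster label; -L lists the quorum disks visible to the node.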
Note

Using JBOD as a quorum disk is not recommended. A JBOD cannot provide dependable performance and therefore may not allow a node to write to it quickly enough. If a node is unable to write to a quorum disk device quickly enough, the node is falsely evicted from a cluster.
6. Red Hat Cluster Suite and SELinux
Red Hat Cluster Suite for Red Hat Enterprise Linux 4 requires that SELinux be disabled. Before configuring a Red Hat cluster, make sure to disable SELinux. For example, you can disable SELinux upon installation of Red Hat Enterprise Linux 4 or you can specify SELINUX=disabled in the /etc/selinux/config file.
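As a sketch, the relevant lines in /etc/selinux/config on a node with SELinux disabled look like this (SELINUXTYPE is left at its installed value):

   SELINUX=disabled
   SELINUXTYPE=targeted

The setting takes effect at the next reboot.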
7. Considerations for Using Conga
When using Conga to configure and manage your Red Hat Cluster, make sure that each computer running luci (the Conga user interface server) is running on the same network that the cluster is using for cluster communication. Otherwise, luci cannot configure the nodes to communicate on the right network. If the computer running luci is on another network (for example, a public network rather than a private network that the cluster is communicating on), contact an authorized Red Hat support representative to make sure that the appropriate host name is configured for each cluster node.
8. General Configuration Considerations
You can configure a Red Hat Cluster in a variety of ways to suit your needs. Take into account the following considerations when you plan, configure, and implement your Red Hat Cluster.

No-single-point-of-failure hardware configuration
Clusters can include a dual-controller RAID array, multiple bonded network channels, multiple paths between cluster members and storage, and redundant un-interruptible power supply (UPS) systems to ensure that no single failure results in application down time or loss of data.

Alternatively, a low-cost cluster can be set up to provide less availability than a no-single-point-of-failure cluster. For example, you can set up a cluster with a single-controller RAID array and only a single Ethernet channel.

Certain low-cost alternatives, such as host RAID controllers, software RAID without cluster support, and multi-initiator parallel SCSI configurations are not compatible or appropriate for use as shared cluster storage.

Data integrity assurance
To ensure data integrity, only one node can run a cluster service and access cluster-service data at a time. The use of power switches in the cluster hardware configuration enables a node to power-cycle another node before restarting that node's cluster services during a failover process. This prevents two nodes from simultaneously accessing the same data and corrupting it. It is strongly recommended that fence devices (hardware or software solutions that remotely power, shutdown, and reboot cluster nodes) are used to guarantee data integrity under all failure conditions. Watchdog timers provide an alternative way to ensure correct operation of cluster service failover.

Ethernet channel bonding
Cluster quorum and node health is determined by communication of messages among cluster nodes via Ethernet. In addition, cluster nodes use Ethernet for a variety of other critical cluster functions (for example, fencing). With Ethernet channel bonding, multiple Ethernet interfaces are configured to behave as one, reducing the risk of a single-point-of-failure in the typical switched Ethernet connection among cluster nodes and other cluster hardware. A minimal bonding sketch follows.
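On Red Hat Enterprise Linux 4, a bond is declared in /etc/modprobe.conf and the physical interfaces are enslaved through their ifcfg files. The interface names, IP address, and bonding mode below are illustrative assumptions:

   # /etc/modprobe.conf
   alias bond0 bonding
   options bonding miimon=100 mode=1

   # /etc/sysconfig/network-scripts/ifcfg-bond0
   DEVICE=bond0
   IPADDR=10.10.10.1
   NETMASK=255.255.255.0
   ONBOOT=yes
   BOOTPROTO=none

   # /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for each slave interface)
   DEVICE=eth0
   MASTER=bond0
   SLAVE=yes
   ONBOOT=yes
   BOOTPROTO=none

mode=1 (active-backup) is a common choice for cluster links because it fails over between interfaces rather than load-balancing across them.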
Chapter 3. Configuring Red Hat Cluster With Conga
This chapter describes how to configure Red Hat Cluster software using Conga, and consists of the following sections:

Section 1, "Configuration Tasks"
Section 2, "Starting luci and ricci"
Section 3, "Creating A Cluster"
Section 4, "Global Cluster Properties"
Section 5, "Configuring Fence Devices"
Section 6, "Configuring Cluster Members"
Section 7, "Configuring a Failover Domain"
Section 8, "Adding Cluster Resources"
Section 9, "Adding a Cluster Service to the Cluster"
Section 10, "Configuring Cluster Storage"
1. Configuration Tasks
Configuring Red Hat Cluster software with Conga consists of the following steps:

1. Configuring and running the Conga configuration user interface - the luci server. Refer to Section 2, "Starting luci and ricci".

2. Creating a cluster. Refer to Section 3, "Creating A Cluster".

3. Configuring global cluster properties. Refer to Section 4, "Global Cluster Properties".

4. Configuring fence devices. Refer to Section 5, "Configuring Fence Devices".

5. Configuring cluster members. Refer to Section 6, "Configuring Cluster Members".

6. Creating failover domains. Refer to Section 7, "Configuring a Failover Domain".

7. Creating resources. Refer to Section 8, "Adding Cluster Resources".

8. Creating cluster services. Refer to Section 9, "Adding a Cluster Service to the Cluster".

9. Configuring storage. Refer to Section 10, "Configuring Cluster Storage".
2. Starting luci and ricci
To administer Red Hat Clusters with Conga, install and run luci and ricci as follows:

1. At each node to be administered by Conga, install the ricci agent. For example:

   # up2date -i ricci

2. At each node to be administered by Conga, start ricci. For example:

   # service ricci start
   Starting ricci:                                            [  OK  ]

3. Select a computer to host luci and install the luci software on that computer. For example:

   # up2date -i luci

   Note

   Typically, a computer in a server cage or a data center hosts luci; however, a cluster computer can host luci.

4. At the computer running luci, initialize the luci server using the luci_admin init command. For example:

   # luci_admin init
   Initializing the Luci server

   Creating the 'admin' user

   Enter password: <Type password and press ENTER.>
   Confirm password: <Re-type password and press ENTER.>

   Please wait...
   The admin password has been successfully set.
   Generating SSL certificates...
   Luci server has been successfully initialized
   Restart the Luci server for changes to take effect
   eg. service luci restart

5. Start luci using service luci restart. For example:

   # service luci restart
   Shutting down luci:                                        [  OK  ]
   Starting luci: generating https SSL certificates...  done
                                                              [  OK  ]

   Please, point your web browser to https://nano-01:8084 to access luci

6. At a Web browser, place the URL of the luci server into the URL address box and click Go (or the equivalent). The URL syntax for the luci server is https://luci_server_hostname:8084. The first time you access luci, two SSL certificate dialog boxes are displayed. Upon acknowledging the dialog boxes, your Web browser displays the luci login page.
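Though not part of the procedure above, it is reasonable to also enable the init scripts so that ricci and luci start automatically at boot on their respective machines:

   # chkconfig ricci on
   # chkconfig luci on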
3. Creating A Cluster
Creating a cluster with luci consists of selecting cluster nodes, entering their passwords, and submitting the request to create a cluster. If the node information and passwords are correct, Conga automatically installs software into the cluster nodes and starts the cluster. Create a cluster as follows:

1. As administrator of luci, select the cluster tab.

2. Click Create a New Cluster.

3. At the Cluster Name text box, enter a cluster name. The cluster name cannot exceed 15 characters. Add the node name and password for each cluster node. Enter the node name for each node in the Node Hostname column; enter the root password for each node in the Root Password column. Check the Enable Shared Storage Support checkbox if clustered storage is required.

4. Click Submit. Clicking Submit causes the Create a new cluster page to be displayed again, showing the parameters entered in the preceding step, and Lock Manager parameters. The Lock Manager parameters consist of the lock manager option buttons, DLM (preferred) and GULM, and Lock Server text boxes in the GULM lock server properties group box. Configure Lock Manager parameters for either DLM or GULM as follows:

   For DLM - Click DLM (preferred) or confirm that it is set.

   For GULM - Click GULM or confirm that it is set. At the GULM lock server properties group box, enter the FQDN or the IP address of each lock server in a Lock Server text box.

   Note

   You must enter the FQDN or the IP address of one, three, or five GULM lock servers.

5. Re-enter the root password for each node in the Root Password column.

6. Click Submit. Clicking Submit causes the following actions:

   a. Cluster software packages to be downloaded onto each cluster node.

   b. Cluster software to be installed onto each cluster node.

   c. Cluster configuration file to be created and propagated to each node in the cluster.

   d. Starting the cluster.

A progress page shows the progress of those actions for each node in the cluster. When the process of creating a new cluster is complete, a page is displayed providing a configuration interface for the newly created cluster.
4. Global Cluster Properties
When a cluster is created, or if you select a cluster to configure, a cluster-specific page is displayed. The page provides an interface for configuring cluster-wide properties and detailed properties. You can configure cluster-wide properties with the tabbed interface below the cluster name. The interface provides the following tabs: General, GULM (GULM clusters only), Fence (DLM clusters only), Multicast (DLM clusters only), and Quorum Partition (DLM clusters only). To configure the parameters in those tabs, follow the steps in this section. If you do not need to configure parameters in a tab, skip the step for that tab.

1. General tab - This tab displays the cluster name and provides an interface for configuring the configuration version and advanced cluster properties. The parameters are summarized as follows:

   The Cluster Name text box displays the cluster name; it does not accept a cluster name change. The only way to change the name of a Red Hat cluster is to create a new cluster configuration with the new name.

   The Configuration Version value is set to 1 by default and is automatically incremented each time you modify your cluster configuration. However, if you need to set it to another value, you can specify it at the Configuration Version text box.

   You can enter advanced cluster properties by clicking Show advanced cluster properties. Clicking Show advanced cluster properties reveals a list of advanced properties. You can click any advanced property for online help about the property.

   Enter the values required and click Apply for changes to take effect.
2. Fence tab (DLM custers ony) - Ths tab provdes an nterface for confgurng
these Fence Daemon Properties parameters: Post-Fail Delay and Post-]oin
Global Cluster Properties
35
Delay. The parameters are summarized as follows:
The Post-Fail Delay parameter is the number of seconds the fence daemon
(fenced) waits before fencing a node (a member of the fence domain) after the
node has failed. The Post-Fail Delay default value is 6. Its value may be varied
to suit cluster and network performance.
The Post-Join Delay parameter is the number of seconds the fence daemon
(fenced) waits before fencing a node after the node joins the fence domain. The
Post-Join Delay default value is 3. A typical setting for Post-Join Delay is
between 20 and 30 seconds, but can vary according to cluster and network
performance.
Enter the values required and click Apply for changes to take effect.
Note
For more information about Post-Join Delay and Post-Fail Delay,
refer to the fenced(8) man page.
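Changes made in the Fence tab are stored as attributes of the fence_daemon
element in /etc/cluster/cluster.conf. As a sketch of how you might confirm them
on a node (the values shown are hypothetical):

   # Inspect the fence daemon settings in the cluster configuration file
   grep fence_daemon /etc/cluster/cluster.conf
   #   <fence_daemon post_join_delay="20" post_fail_delay="6"/>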
3. GULM tab (GULM clusters only) - This tab provides an interface for configuring
GULM lock servers. The tab indicates each node in a cluster that is configured as
a GULM lock server and provides the capability to change lock servers. Follow the
rules provided at the tab for configuring GULM lock servers and click Apply for
changes to take effect.
Important
The number of nodes that can be configured as GULM lock servers is
limited to either one, three, or five.
4. Multicast tab (DLM clusters only) - This tab provides an interface for
configuring these Multicast Configuration parameters: Do not use multicast
and Use multicast. Multicast Configuration specifies whether a multicast
address is used for cluster management communication among cluster nodes. Do
not use multicast is the default setting. To use a multicast address for cluster
management communication among cluster nodes, click Use multicast. When
Use multicast is selected, the Multicast address and Multicast network
interface text boxes are enabled; enter the multicast address into the Multicast
address text box and the multicast network interface into the Multicast
network interface text box. Click Apply for changes to take effect.
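These settings are written to /etc/cluster/cluster.conf. A minimal sketch of how
to inspect them on a cluster node; the multicast address shown is hypothetical:

   # Inspect the multicast setting that Conga wrote to the configuration
   grep multicast /etc/cluster/cluster.conf
   #   <multicast addr="239.192.0.1"/>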
5. Quorum Partition tab (DLM clusters only) - This tab provides an interface for
configuring these Quorum Partition Configuration parameters: Do not use a
Quorum Partition, Use a Quorum Partition, Interval, Votes, TKO, Minimum
Score, Device, Label, and Heuristics. The Do not use a Quorum Partition
parameter is enabled by default. Table 3.1, "Quorum-Disk Parameters" describes
the parameters. If you need to use a quorum disk, click Use a Quorum
Partition, enter quorum disk parameters, click Apply, and restart the cluster for
the changes to take effect.
Important
Quorum-disk parameters and heuristics depend on the site
environment and any special requirements. To understand the
use of quorum-disk parameters and heuristics, refer to the qdisk(5)
man page. If you require assistance understanding and using quorum
disk, contact an authorized Red Hat support representative.
Note
Clicking Apply on the Quorum Partition tab propagates changes to
the cluster configuration file (/etc/cluster/cluster.conf) in each
cluster node. However, for the quorum disk to operate, you must
restart the cluster (refer to Section 1, "Starting, Stopping, and
Deleting Clusters").
Parameter - Description
Do not use a Quorum Partition - Disables quorum partition. Disables
quorum-disk parameters in the Quorum Partition tab.
Use a Quorum Partition - Enables quorum partition. Enables quorum-disk
parameters in the Quorum Partition tab.
Interval - The frequency of read/write cycles, in seconds.
Votes - The number of votes the quorum daemon advertises to CMAN when it
has a high enough score.
TKO - The number of cycles a node must miss to be declared dead.
Minimum Score - The minimum score for a node to be considered "alive". If
omitted or set to 0, the default function, floor((n+1)/2), is used, where n is the
sum of the heuristics scores. The Minimum Score value must never exceed the
sum of the heuristic scores; otherwise, the quorum disk cannot be available.
Device - The storage device the quorum daemon uses. The device must be the
same on all nodes.
Label - Specifies the quorum disk label created by the mkqdisk utility. If this
field contains an entry, the label overrides the Device field. If this field is used,
the quorum daemon reads /proc/partitions and checks for qdisk signatures on
every block device found, comparing the label against the specified label. This is
useful in configurations where the quorum device name differs among nodes.
Heuristics -
Path to Program - The program used to determine if this heuristic is alive. This
can be anything that can be executed by /bin/sh -c. A return value of 0
indicates success; anything else indicates failure. This field is required.
Interval - The frequency (in seconds) at which the heuristic is polled. The
default interval for every heuristic is 2 seconds.
Score - The weight of this heuristic. Be careful when determining scores for
heuristics. The default score for each heuristic is 1.
Apply - Propagates the changes to the cluster configuration file
(/etc/cluster/cluster.conf) in each cluster node.
Table 3.1. Quorum-Disk Parameters
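Before enabling Use a Quorum Partition, the shared quorum device is typically
initialized with the mkqdisk utility named in the table above. A minimal sketch,
assuming a shared partition /dev/sdb1 and the hypothetical label myqdisk:

   # Initialize the quorum partition; run once from one node, on a
   # device that is visible to all cluster nodes
   mkqdisk -c /dev/sdb1 -l myqdisk

   # List detected quorum disks to verify the label was written
   mkqdisk -L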
5. Configuring Fence Devices
Configuring fence devices consists of creating, modifying, and deleting fence
devices. Creating a fence device consists of selecting a fence device type and
entering parameters for that fence device (for example, name, IP address, login,
and password). Modifying a fence device consists of selecting an existing fence
device and changing parameters for that fence device. Deleting a fence device
consists of selecting an existing fence device and deleting it.
Tip
If you are creating a new cluster, you can create fence devices when
you configure cluster nodes. Refer to Section 6, "Configuring Cluster
Members".
With Conga you can create shared and non-shared fence devices.
The following shared fence devices are available:
APC Power Switch
Brocade Fabric Switch
Bull PAP
Egenera SAN Controller
GNBD
IBM Blade Center
McData SAN Switch
QLogic SANbox2
SCSI Fencing
Virtual Machine Fencing
Vixel SAN Switch
WTI Power Switch
The following non-shared fence devices are available:
Dell DRAC
HP iLO
IBM RSA II
IPMI LAN
RPS10 Serial Switch
This section provides procedures for the following tasks:
Creating shared fence devices - Refer to Section 5.1, "Creating a Shared Fence
Device". The procedures apply only to creating shared fence devices. You can
create non-shared (and shared) fence devices while configuring nodes (refer to
Section 6, "Configuring Cluster Members").
Modifying or deleting fence devices - Refer to Section 5.2, "Modifying or
Deleting a Fence Device". The procedures apply to both shared and non-shared
fence devices.
The starting point of each procedure is at the cluster-specific page that you
navigate to from Choose a cluster to administer displayed on the cluster tab.
5.1. Creating a Shared Fence Device
To create a shared fence device, follow these steps:
1. At the detailed menu for the cluster (below the clusters menu), click Shared
Fence Devices. Clicking Shared Fence Devices causes the display of the fence
devices for a cluster and causes the display of menu items for fence device
configuration: Add a Fence Device and Configure a Fence Device.
Note
If this is an initial cluster configuration, no fence devices have been
created, and therefore none are displayed.
2. Click Add a Fence Device. Clicking Add a Fence Device causes the Add a
Sharable Fence Device page to be displayed (refer to Figure 3.1, "Fence
Device Configuration").
Figure 3.1. Fence Device Configuration
3. At the Add a Sharable Fence Device page, cck the drop-down box under
Fencing Type and seect the type of fence devce to confgure.
4. Specfy the nformaton n the Fencing Type daog box accordng to the type of
fence devce. Refer to Appendix 8, Fence Device Parameters for more
nformaton about fence devce parameters.
5. Cck Add this shared fence device.
6. Cckng Add this shared fence device causes a progress page to be dspayed
temporary. After the fence devce has been added, the detaed custer
Creating a Shared Fence Device
41
propertes menu s updated wth the fence devce under Configure a Fence
Device.
5.2. Modifying or Deleting a Fence Device
To modify or delete a fence device, follow these steps:
1. At the detailed menu for the cluster (below the clusters menu), click Shared
Fence Devices. Clicking Shared Fence Devices causes the display of the fence
devices for a cluster and causes the display of menu items for fence device
configuration: Add a Fence Device and Configure a Fence Device.
2. Click Configure a Fence Device. Clicking Configure a Fence Device causes
the display of a list of fence devices under Configure a Fence Device.
3. Click a fence device in the list. Clicking a fence device in the list causes the
display of a Fence Device Form page for the fence device selected from the list.
4. Either modify or delete the fence device as follows:
To modify the fence device, enter changes to the parameters displayed. Refer
to Appendix B, Fence Device Parameters for more information about fence
device parameters. Click Update this fence device and wait for the
configuration to be updated.
To delete the fence device, click Delete this fence device and wait for the
configuration to be updated.
Note
You can also create shared fence devices on the node configuration
page. However, you can only modify or delete a shared fence device via
Shared Fence Devices at the detailed menu for the cluster (below
the clusters menu).
6. Configuring Cluster Members
Configuring cluster members consists of initially configuring nodes in a newly
configured cluster, adding members, and deleting members. The following sections
provide procedures for initial configuration of nodes, adding nodes, and deleting
nodes:
Section 6.1, "Initially Configuring Members"
Section 6.2, "Adding a Member to a Running Cluster"
Section 6.3, "Deleting a Member from a Cluster"
6.1. Initially Configuring Members
Creating a cluster consists of selecting a set of nodes (or members) to be part of the
cluster. Once you have completed the initial steps of creating a cluster and creating
fence devices, you need to configure cluster nodes. To initially configure cluster
nodes after creating a new cluster, follow the steps in this section. The starting
point of the procedure is at the cluster-specific page that you navigate to from
Choose a cluster to administer displayed on the cluster tab.
1. At the detailed menu for the cluster (below the clusters menu), click Nodes.
Clicking Nodes causes the display of an Add a Node element and a Configure
element with a list of the nodes already configured in the cluster.
2. Click a link for a node at either the list in the center of the page or in the list in
the detailed menu under the clusters menu. Clicking a link for a node causes a
page to be displayed for that link showing how that node is configured.
3. At the bottom of the page, under Main Fencing Method, click Add a fence
device to this level.
4. Select a fence device and provide parameters for the fence device (for example,
port number).
Note
You can choose from an existing fence device or create a new fence
device.
5. Click Update main fence properties and wait for the change to take effect.
6.2. Adding a Member to a Running Cluster
To add a member to a running cluster, follow the steps in this section. The starting
point of the procedure is at the cluster-specific page that you navigate to from
Choose a cluster to administer displayed on the cluster tab.
1. At the detailed menu for the cluster (below the clusters menu), click Nodes.
Clicking Nodes causes the display of an Add a Node element and a Configure
element with a list of the nodes already configured in the cluster. (In addition, a
list of the cluster nodes is displayed in the center of the page.)
2. Click Add a Node. Clicking Add a Node causes the display of the Add a node
to cluster name page.
3. At that page, enter the node name in the Node Hostname text box; enter the
root password in the Root Password text box. Check the Enable Shared
Storage Support checkbox if clustered storage is required. If you want to add
more nodes, click Add another entry and enter the node name and password
for each additional node.
4. Click Submit. Clicking Submit causes the following actions:
a. Cluster software packages to be downloaded onto the added node.
b. Cluster software to be installed (or verification that the appropriate software
packages are installed) onto the added node.
c. Cluster configuration file to be updated and propagated to each node in the
cluster - including the added node.
d. Joining of the added node to the cluster.
A progress page shows the progress of those actions for each added node.
5. When the process of adding a node is complete, a page is displayed providing a
configuration interface for the cluster.
6. At the detailed menu for the cluster (below the clusters menu), click Nodes.
Clicking Nodes causes the following displays:
A list of cluster nodes in the center of the page
The Add a Node element and the Configure element with a list of the nodes
configured in the cluster at the detailed menu for the cluster (below the
clusters menu)
7. Click the link for an added node at either the list in the center of the page or in
the list in the detailed menu under the clusters menu. Clicking the link for the
added node causes a page to be displayed for that link showing how that node is
configured.
8. At the bottom of the page, under Main Fencing Method, click Add a fence
device to this level.
9. Select a fence device and provide parameters for the fence device (for example,
port number).
Note
You can choose from an existing fence device or create a new fence
device.
10. Click Update main fence properties and wait for the change to take effect.
6.3. Deleting a Member from a Cluster
To delete a member from an existing cluster that is currently in operation, follow the
steps in this section. The starting point of the procedure is at the Choose a cluster
to administer page (displayed on the cluster tab).
1. Click the link of the node to be deleted. Clicking the link of the node to be deleted
causes a page to be displayed for that link showing how that node is configured.
Note
To allow services running on a node to fail over when the node is
deleted, skip the next step.
2. Disable or relocate each service that is running on the node to be deleted:
Note
Repeat this step for each service that needs to be disabled or started
on another node.
a. Under Services on this Node, click the link for a service. Clicking that link
causes a configuration page for that service to be displayed.
b. On that page, at the Choose a task drop-down box, choose to either disable
the service or start it on another node and click Go.
c. Upon confirmation that the service has been disabled or started on another
node, click the cluster tab. Clicking the cluster tab causes the Choose a
cluster to administer page to be displayed.
d. At the Choose a cluster to administer page, click the link of the node to be
deleted. Clicking the link of the node to be deleted causes a page to be
displayed for that link showing how that node is configured.
3. On that page, at the Choose a task drop-down box, choose Delete this node
and click Go. When the node is deleted, a page is displayed that lists the nodes in
the cluster. Check the list to make sure that the node has been deleted.
7. Configuring a Failover Domain
A failover domain is a named subset of cluster nodes that are eligible to run a
cluster service in the event of a node failure. A failover domain can have the
following characteristics:
Unrestricted - Allows you to specify that a subset of members are preferred, but
that a cluster service assigned to this domain can run on any available member.
Restricted - Allows you to restrict the members that can run a particular cluster
service. If none of the members in a restricted failover domain are available, the
cluster service cannot be started (either manually or by the cluster software).
Unordered - When a cluster service is assigned to an unordered failover domain,
the member on which the cluster service runs is chosen from the available
failover domain members with no priority ordering.
Ordered - Allows you to specify a preference order among the members of a
failover domain. The member at the top of the list is the most preferred, followed
by the second member in the list, and so on.
Note
Changing a failover domain configuration has no effect on currently
running services.
Note
Failover domains are not required for operation.
By default, failover domains are unrestricted and unordered.
In a cluster with several members, using a restricted failover domain can minimize
the work to set up the cluster to run a cluster service (such as httpd), which
requires you to set up the configuration identically on all members that run the
cluster service. Instead of setting up the entire cluster to run the cluster service,
you must set up only the members in the restricted failover domain that you
associate with the cluster service.
Tip
To configure a preferred member, you can create an unrestricted
failover domain comprising only one cluster member. Doing that
causes a cluster service to run on that cluster member primarily (the
preferred member), but allows the cluster service to fail over to any of
the other members.
The following sections describe adding a failover domain and modifying a failover
domain:
Section 7.1, "Adding a Failover Domain"
Section 7.2, "Modifying a Failover Domain"
7.1. Adding a Failover Domain
To add a failover domain, follow the steps in this section. The starting point of the
procedure is at the cluster-specific page that you navigate to from Choose a
cluster to administer displayed on the cluster tab.
1. At the detailed menu for the cluster (below the clusters menu), click Failover
Domains. Clicking Failover Domains causes the display of failover domains
with related services and the display of menu items for failover domains: Add a
Failover Domain and Configure a Failover Domain.
2. Click Add a Failover Domain. Clicking Add a Failover Domain causes the
display of the Add a Failover Domain page.
3. At the Add a Failover Domain page, specify a failover domain name at the
Failover Domain Name text box.
Note
The name should be descriptive enough to distinguish its purpose
relative to other names used in your cluster.
4. To enable setting failover priority of the members in the failover domain, click the
Prioritized checkbox. With Prioritized checked, you can set the priority value,
Priority, for each node selected as a member of the failover domain.
5. To restrict failover to members in this failover domain, click the checkbox next to
Restrict failover to this domain's members. With Restrict failover to this
domain's members checked, services assigned to this failover domain fail over
only to nodes in this failover domain.
6. Configure members for this failover domain. Under Failover domain
membership, click the Member checkbox for each node that is to be a member
of the failover domain. If Prioritized is checked, set the priority in the Priority
text box for each member of the failover domain.
7. Click Submit. Clicking Submit causes a progress page to be displayed followed
by the display of the Failover Domain Form page. That page displays the
added resource and includes the failover domain in the cluster menu to the left
under Domain.
8. To make additional changes to the failover domain, continue modifications at the
Failover Domain Form page and click Submit when you are done.
7.2. Modifying a Failover Domain
To modify a failover domain, follow the steps in this section. The starting point of
the procedure is at the cluster-specific page that you navigate to from Choose a
cluster to administer displayed on the cluster tab.
1. At the detailed menu for the cluster (below the clusters menu), click Failover
Domains. Clicking Failover Domains causes the display of failover domains
with related services and the display of menu items for failover domains: Add a
Failover Domain and Configure a Failover Domain.
2. Click Configure a Failover Domain. Clicking Configure a Failover Domain
causes the display of failover domains under Configure a Failover Domain at
the detailed menu for the cluster (below the clusters menu).
3. At the detailed menu for the cluster (below the clusters menu), click the failover
domain to modify. Clicking the failover domain causes the display of the Failover
Domain Form page. At the Failover Domain Form page, you can modify the
failover domain name, prioritize failover, restrict failover to this domain, and
modify failover domain membership.
4. Modifying the failover domain name - To change the failover domain name,
modify the text at the Failover Domain Name text box.
Note
The name should be descriptive enough to distinguish its purpose
relative to other names used in your cluster.
5. Failover priority - To enable or disable prioritized failover in this failover domain,
click the Prioritized checkbox. With Prioritized checked, you can set the
priority value, Priority, for each node selected as a member of the failover
domain. With Prioritized not checked, setting priority levels is disabled for this
failover domain.
6. Restricted failover - To enable or disable restricted failover for members in this
failover domain, click the checkbox next to Restrict failover to this domain's
members. With Restrict failover to this domain's members checked,
services assigned to this failover domain fail over only to nodes in this failover
domain. With Restrict failover to this domain's members not checked,
services assigned to this failover domain can fail over to nodes outside this
failover domain.
7. Modifying failover domain membership - Under Failover domain membership,
click the Member checkbox for each node that is to be a member of the failover
domain. A checked box for a node means that the node is a member of the
failover domain. If Prioritized is checked, you can adjust the priority in the
Priority text box for each member of the failover domain.
8. Click Submit. Clicking Submit causes a progress page to be displayed followed
by the display of the Failover Domain Form page. That page displays the
added resource and includes the failover domain in the cluster menu to the left
under Domain.
9. To make additional changes to the failover domain, continue modifications at the
Failover Domain Form page and click Submit when you are done.
8. Adding Cluster Resources
To add a cluster resource, follow the steps in this section. The starting point of the
procedure is at the cluster-specific page that you navigate to from Choose a
cluster to administer displayed on the cluster tab.
1. At the detailed menu for the cluster (below the clusters menu), click Resources.
Clicking Resources causes the display of resources in the center of the page and
causes the display of menu items for resource configuration: Add a Resource
and Configure a Resource.
2. Click Add a Resource. Clicking Add a Resource causes the Add a Resource
page to be displayed.
3. At the Add a Resource page, click the drop-down box under Select a Resource
Type and select the type of resource to configure. The resource options are
described as follows:
GFS
Name - Create a name for the file system resource.
Mount Point - Choose the path to which the file system resource is mounted.
Device - Specify the device file associated with the file system resource.
Options - Mount options.
File System ID - When creating a new file system resource, you can leave this
field blank. Leaving the field blank causes a file system ID to be assigned
automatically after you click Submit at the File System Resource
Configuration dialog box. If you need to assign a file system ID explicitly,
specify it in this field.
Force Unmount checkbox - If checked, forces the file system to unmount. The
default setting is unchecked. Force Unmount kills all processes using the
mount point to free up the mount when it tries to unmount. With GFS resources,
the mount point is not unmounted at service tear-down unless this box is
checked.
File System
Name - Create a name for the file system resource.
File System Type - Choose the file system for the resource using the
drop-down menu.
Mount Point - Choose the path to which the file system resource is mounted.
Device - Specify the device file associated with the file system resource.
Options - Mount options.
File System ID - When creating a new file system resource, you can leave this
field blank. Leaving the field blank causes a file system ID to be assigned
automatically after you click Submit at the File System Resource
Configuration dialog box. If you need to assign a file system ID explicitly,
specify it in this field.
Checkboxes - Specify mount and unmount actions when a service is stopped
(for example, when disabling or relocating a service):
Force unmount - If checked, forces the file system to unmount. The default
setting is unchecked. Force Unmount kills all processes using the mount
point to free up the mount when it tries to unmount.
Reboot host node if unmount fails - If checked, reboots the node if
unmounting this file system fails. The default setting is unchecked.
Check file system before mounting - If checked, causes fsck to be run on
the file system before mounting it. The default setting is unchecked.
IP Address
IP Address - Type the IP address for the resource.
Monitor Link checkbox - Check the box to enable or disable link status
monitoring of the IP address resource.
NFS Mount
Name - Create a symbolic name for the NFS mount.
Mount Point - Choose the path to which the file system resource is mounted.
Host - Specify the NFS server name.
Export Path - NFS export on the server.
NFS version - Specify the NFS protocol:
NFS3 - Specifies using the NFSv3 protocol. The default setting is NFS3.
NFS4 - Specifies using the NFSv4 protocol.
Options - Mount options. For more information, refer to the nfs(5) man page.
Force Unmount checkbox - If checked, forces the file system to unmount. The
default setting is unchecked. Force Unmount kills all processes using the
mount point to free up the mount when it tries to unmount.
NFS Client
Name - Enter a name for the NFS client resource.
Target - Enter a target for the NFS client resource. Supported targets are
hostnames, IP addresses (with wild-card support), and netgroups.
Options - Additional client access rights. For more information, refer to the
exports(5) man page, General Options section.
NFS Export
Name - Enter a name for the NFS export resource.
Script
Name - Enter a name for the custom user script.
File (with path) - Enter the path where this custom script is located (for
example, /etc/init.d/userscript).
Samba Service
Name - Enter a name for the Samba server.
Workgroup - Enter the Windows workgroup name or Windows NT domain of
the Samba service.
Note
When creating or editing a cluster service, connect a Samba-service
resource directly to the service, not to a resource within a service.
4. Click Submit. Clicking Submit causes a progress page to be displayed followed
by the display of the Resources for cluster name page. That page displays the
added resource (and other resources).
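A Script resource points at an init-style script that the cluster service manager
runs with start, stop, and status arguments. The following is a minimal sketch of
such a script; the path and daemon name are hypothetical:

   #!/bin/sh
   # /etc/init.d/userscript - skeleton for a cluster Script resource
   case "$1" in
   start)
       /usr/sbin/mydaemon &        # launch the application (hypothetical)
       ;;
   stop)
       killall mydaemon            # stop the application
       ;;
   status)
       # Exit 0 if running; a nonzero exit tells the cluster the
       # service has failed
       pidof mydaemon > /dev/null
       exit $?
       ;;
   esac
   exit 0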
9. Adding a Cluster Service to the Cluster
To add a cluster service to the cluster, follow the steps in this section. The starting
point of the procedure is at the cluster-specific page that you navigate to from
Choose a cluster to administer displayed on the cluster tab.
1. At the detailed menu for the cluster (below the clusters menu), click Services.
Clicking Services causes the display of services in the center of the page and
causes the display of menu items for services configuration: Add a Service and
Configure a Service.
2. Click Add a Service. Clicking Add a Service causes the Add a Service page to
be displayed.
3. On the Add a Service page, at the Service name text box, type the name of
the service. Below the Service name text box is a checkbox labeled
Automatically start this service. The checkbox is checked by default. When
the checkbox is checked, the service is started automatically when a cluster is
started and running. If the checkbox is not checked, the service must be started
manually any time the cluster comes up from the stopped state.
Tip
Use a descriptive name that clearly distinguishes the service from
other services in the cluster.
4. Add a resource to the service; click Add a resource to this service. Clicking
Add a resource to this service causes the display of two drop-down boxes:
Add a new local resource and Use an existing global resource. Adding a
new local resource adds a resource that is available only to this service. The
process of adding a local resource is the same as adding a global resource
described in Section 8, "Adding Cluster Resources". Adding a global resource
adds a resource that has been previously added as a global resource (refer to
Section 8, "Adding Cluster Resources").
5. At the drop-down box of either Add a new local resource or Use an existing
global resource, select the resource to add and configure it according to the
options presented. (The options are the same as described in Section 8, "Adding
Cluster Resources".)
Note
If you are adding a Samba-service resource, connect a Samba-service
resource directly to the service, not to a resource within a service.
6. If you want to add resources to that resource, click Add a child. Clicking Add a
child causes the display of additional options for local and global resources. You
can continue adding child resources to the resource to suit your requirements.
To view child resources, click the triangle icon to the left of Show Children.
7. When you have completed adding resources to the service, and have completed
adding child resources to resources, click Submit. Clicking Submit causes a
progress page to be displayed followed by a page displaying the added service
(and other services).
Note
To verify the existence of the IP service resource used in a cluster
service, you must use the /sbin/ip addr list command on a cluster
node. The following output shows the /sbin/ip addr list command
executed on a node running a cluster service:
1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1356 qdisc pfifo_fast qlen 1000
    link/ether 00:05:5d:9a:d8:91 brd ff:ff:ff:ff:ff:ff
    inet 10.11.4.31/22 brd 10.11.7.255 scope global eth0
    inet6 fe80::205:5dff:fe9a:d891/64 scope link
    inet 10.11.4.240/22 scope global secondary eth0
       valid_lft forever preferred_lft forever
10. Configuring Cluster Storage
To configure storage for a cluster, click the storage tab. Clicking that tab causes
the display of the Welcome to Storage Configuration Interface page.
The storage tab allows you to monitor and configure storage on remote systems. It
provides a means for configuring disk partitions, logical volumes (clustered and
single-system use), file system parameters, and mount points. The storage tab
provides an interface for setting up shared storage for clusters and offers GFS and
other file systems as file system options. When you select the storage tab, the
Welcome to Storage Configuration Interface page shows a list of systems
available to you in a navigation table to the left. A small form allows you to
choose a storage unit size to suit your preference. That choice is persisted and can
be changed at any time by returning to this page. In addition, you can change the
unit type on specific configuration forms throughout the storage user interface. This
general choice allows you to avoid difficult decimal representations of storage size
(for example, if you know that most of your storage is measured in gigabytes,
terabytes, or other more familiar representations).
Additionally, the Welcome to Storage Configuration Interface page lists
systems that you are authorized to access, but currently are unable to administer
because of a problem. Examples of problems:
A computer is unreachable via the network.
A computer has been re-imaged and the luci server admin must re-authenticate
with the ricci agent on the computer.
A reason for the trouble is displayed if the storage user interface can determine it.
Only those computers that the user is privileged to administer are shown in the
main navigation table. If you have no permissions on any computers, a message is
displayed.
After you select a computer to administer, a general properties page is displayed for
the computer. This page is divided into three sections:
Hard Drives
Partitions
Volume Groups
Each section is set up as an expandable tree, with links to property sheets for
specific devices, partitions, and storage entities.
Configure the storage for your cluster to suit your cluster requirements. If you are
configuring Red Hat GFS, configure clustered logical volumes first, using CLVM. For
more information about CLVM and GFS refer to Red Hat documentation for those
products.
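As a brief sketch of that first step, clustered logical volumes are created with the
standard LVM tools once clvmd is running on every node; the device and volume
names below are hypothetical:

   # Create a clustered volume group (-c y marks it clustered) and a
   # logical volume on shared storage; run from one node with clvmd active
   pvcreate /dev/sdc
   vgcreate -c y vg_cluster /dev/sdc
   lvcreate -L 10G -n lv_gfs vg_cluster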
Chapter 4. Managing Red Hat Cluster With Conga
This chapter describes various administrative tasks for managing a Red Hat Cluster
and consists of the following sections:
Section 1, "Starting, Stopping, and Deleting Clusters"
Section 2, "Managing Cluster Nodes"
Section 3, "Managing High-Availability Services"
Section 4, "Diagnosing and Correcting Problems in a Cluster"
1. Starting, Stopping, and Deleting Clusters
You can perform the following cluster-management functions through the luci
server component of Conga:
Restart a cluster.
Start a cluster.
Stop a cluster.
Delete a cluster.
To perform one of the functions in the preceding list, follow the steps in this section.
The starting point of the procedure is at the cluster tab (at the Choose a cluster
to administer page).
1. At the right of the Cluster Name for each cluster listed on the Choose a cluster
to administer page is a drop-down box. By default, the drop-down box is set to
Restart this cluster. Clicking the drop-down box reveals all the selections
available: Restart this cluster, Stop this cluster/Start this cluster, and
Delete this cluster. The actions of each function are summarized as follows:
Restart this cluster - Selecting this action causes the cluster to be restarted.
You can select this action for any state the cluster is in.
Stop this cluster/Start this cluster - Stop this cluster is available when a
cluster is running. Start this cluster is available when a cluster is stopped.
Selecting Stop this cluster shuts down cluster software in all cluster nodes.
Selecting Start this cluster starts cluster software.
Delete this cluster - Selecting this action halts a running cluster, disables
cluster software from starting automatically, and removes the cluster
configuration file from each node. You can select this action for any state the
cluster is in. Deleting a cluster frees each node in the cluster for use in another
cluster.
2. Select one of the functions and click Go.
3. Clicking Go causes a progress page to be displayed. When the action is complete,
a page is displayed showing one of the following, according to the action
selected:
For Restart this cluster and Stop this cluster/Start this cluster -
Displays a page with the list of nodes for the cluster.
For Delete this cluster - Displays the Choose a cluster to administer
page in the cluster tab, showing a list of clusters.
2. Managing Cluster Nodes
You can perform the following node-management functions through the luci server
component of Conga:
Make a node leave or join a cluster.
Fence a node.
Reboot a node.
Delete a node.
To perform one of the functions in the preceding list, follow the steps in this section.
The starting point of the procedure is at the cluster-specific page that you navigate
to from Choose a cluster to administer displayed on the cluster tab.
1. At the detailed menu for the cluster (below the clusters menu), click Nodes.
Clicking Nodes causes the display of nodes in the center of the page and causes
the display of an Add a Node element and a Configure element with a list of
the nodes already configured in the cluster.
2. At the right of each node listed on the page displayed from the preceding step,
click the Choose a task drop-down box. Clicking the Choose a task drop-down
box reveals the following selections: Have node leave cluster/Have node join
cluster, Fence this node, Reboot this node, and Delete. The actions of each
function are summarized as follows:
Have node leave cluster/Have node join cluster - Have node leave
cluster is available when a node has joined a cluster. Have node join
cluster is available when a node has left a cluster.
Selecting Have node leave cluster shuts down cluster software and makes
the node leave the cluster. Making a node leave a cluster prevents the node
from automatically joining the cluster when it is rebooted.
Selecting Have node join cluster starts cluster software and makes the node
join the cluster. Making a node join a cluster allows the node to automatically
join the cluster when it is rebooted.
Fence this node - Selecting this action causes the node to be fenced
according to how the node is configured to be fenced.
Reboot this node - Selecting this action causes the node to be rebooted.
Delete - Selecting this action causes the node to be deleted from the cluster
configuration. It also stops all cluster services on the node, and deletes the
cluster.conf file from /etc/cluster/.
3. Select one of the functions and click Go.
4. Clicking Go causes a progress page to be displayed. When the action is complete,
a page is displayed showing the list of nodes for the cluster.
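For reference, comparable low-level operations are available from the command
line on the nodes themselves; the following is a sketch, assuming a DLM cluster
and a hypothetical node name:

   # Make the local node leave or rejoin the cluster membership
   cman_tool leave
   cman_tool join

   # Fence a node by name, using the fence method configured for it
   fence_node node-01.example.com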
3. Managing High-Availability Services
You can perform the following management functions for high-availability services
through the luci server component of Conga:
Configure a service.
Stop or start a service.
Restart a service.
Delete a service.
To perform one of the functions in the preceding list, follow the steps in this section.
The starting point of the procedure is at the cluster-specific page that you navigate
to from Choose a cluster to administer displayed on the cluster tab.
1. At the detailed menu for the cluster (below the clusters menu), click Services.
Clicking Services causes the display of services for the cluster in the center of
the page.
2. At the right of each service listed on the page, click the Choose a task
drop-down box. Clicking the Choose a task drop-down box reveals the following
selections, depending on whether the service is running:
If the service is running - Configure this service, Restart this service, and
Stop this service.
If the service is not running - Configure this service, Start this service, and
Delete this service.
The actions of each function are summarized as follows:
Configure this service - Configure this service is available whether the
service is running or not. Selecting Configure this service causes the
services configuration page for the service to be displayed. On that page, you
can change the configuration of the service. For example, you can add a
resource to the service. (For more information about adding resources and
services, refer to Section 8, "Adding Cluster Resources" and Section 9, "Adding
a Cluster Service to the Cluster".) In addition, a drop-down box on the page
provides other functions depending on whether the service is running.
When a service is running, the drop-down box provides the following functions:
restarting, disabling, and relocating the service.
When a service is not running, the drop-down box on the configuration page
provides the following functions: enabling and deleting the service.
If you are making configuration changes, save the changes by clicking Save.
Clicking Save causes a progress page to be displayed. When the change is
complete, another page is displayed showing a list of services for the cluster.
If you have selected one of the functions in the drop-down box on the
configuration page, click Go. Clicking Go causes a progress page to be
displayed. When the change is complete, another page is displayed showing a
list of services for the cluster.
Restart this service and Stop this service - These selections are available
when the service is running. Select either function and click Go to make the
change take effect. Clicking Go causes a progress page to be displayed. When
the change is complete, another page is displayed showing a list of services for
the cluster.
Start this service and Delete this service - These selections are available
when the service is not running. Select either function and click Go to make the
change take effect. Clicking Go causes a progress page to be displayed. When
the change is complete, another page is displayed showing a list of services for
the cluster.
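The same service operations can also be performed from any cluster node with
the clusvcadm utility from the rgmanager package; a brief sketch, with
hypothetical service and node names:

   # Enable (start), disable (stop), restart, and relocate a service
   clusvcadm -e httpd_service             # start the service
   clusvcadm -d httpd_service             # stop (disable) the service
   clusvcadm -R httpd_service             # restart it in place
   clusvcadm -r httpd_service -m node-02  # relocate to another member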
4. Diagnosing and Correcting Problems in a Cluster
For information about diagnosing and correcting problems in a cluster, contact an
authorized Red Hat support representative.
Chapter 5. Configuring Red Hat Cluster With system-config-cluster
This chapter describes how to configure Red Hat Cluster software using
system-config-cluster, and consists of the following sections:
Section 1, "Configuration Tasks"
Section 2, "Starting the Cluster Configuration Tool"
Section 3, "Configuring Cluster Properties"
Section 4, "Configuring Fence Devices"
Section 5, "Adding and Deleting Members"
Section 6, "Configuring a Failover Domain"
Section 7, "Adding Cluster Resources"
Section 8, "Adding a Cluster Service to the Cluster"
Section 9, "Propagating The Configuration File: New Cluster"
Section 10, "Starting the Cluster Software"
Tip
While system-config-cluster provides several convenient tools for
configuring and managing a Red Hat Cluster, the newer, more
comprehensive tool, Conga, provides more convenience and flexibility
than system-config-cluster. You may want to consider using Conga
instead (refer to Chapter 3, Configuring Red Hat Cluster With Conga
and Chapter 4, Managing Red Hat Cluster With Conga).
1. Configuration Tasks
Configuring Red Hat Cluster software with system-config-cluster consists of the
following steps:
1. Starting the Cluster Configuration Tool, system-config-cluster. Refer to
Section 2, "Starting the Cluster Configuration Tool".
2. Configuring cluster properties. Refer to Section 3, "Configuring Cluster
Properties".
3. Creating fence devices. Refer to Section 4, "Configuring Fence Devices".
4. Creating cluster members. Refer to Section 5, "Adding and Deleting Members".
5. Creating failover domains. Refer to Section 6, "Configuring a Failover Domain".
6. Creating resources. Refer to Section 7, "Adding Cluster Resources".
7. Creating cluster services. Refer to Section 8, "Adding a Cluster Service to the
Cluster".
8. Propagating the configuration file to the other nodes in the cluster. Refer to
Section 9, "Propagating The Configuration File: New Cluster".
9. Starting the cluster software. Refer to Section 10, "Starting the Cluster
Software".
2. Starting the Cluster Configuration Tool
You can start the Cluster Configuration Tool by logging in to a cluster node as
root with the ssh -Y command and issuing the system-config-cluster command.
For example, to start the Cluster Configuration Tool on cluster node nano-01, do
the following:
1. Log in to a cluster node and run system-config-cluster. For example:
$ ssh -Y root@nano-01
.
.
.
# system-config-cluster
2. If this is the first time you have started the Cluster Configuration Tool, the
program prompts you to either open an existing configuration or create a new
one. Click Create New Configuration to start a new configuration file (refer to
Figure 5.1, "Starting a New Configuration File").
Figure 5.1. Starting a New Configuration File
Note
The Cluster Management tab for the Red Hat Cluster Suite
management GUI is available after you save the configuration file with
the Cluster Configuration Tool, exit, and restart the Red Hat
Cluster Suite management GUI (system-config-cluster). (The Cluster
Management tab displays the status of the cluster service manager,
cluster nodes, and resources, and shows statistics concerning cluster
service operation. To manage the cluster system further, choose the
Cluster Configuration tab.)
3. Clicking Create New Configuration causes the New Configuration dialog box
to be displayed (refer to Figure 5.2, "Creating A New Configuration"). The New
Configuration dialog box provides a text box for a cluster name and group
boxes for the following configuration options: Choose Lock Method, Use
Multicast (DLM clusters only), and Use a Quorum Disk (DLM clusters only). In
most circumstances you only need to configure a cluster name and a lock
method. Distributed Lock Manager (DLM) is the default lock method. To
configure a GULM cluster, select Grand Unified Lock Manager (GULM).
(Selecting Grand Unified Lock Manager (GULM) disables Use Multicast and
Use a Quorum Disk, which are applicable only to DLM clusters.) Use Multicast
specifies whether a multicast address is used for cluster management
communication among cluster nodes. Use Multicast is disabled (checkbox
unchecked) by default. To use a multicast address for cluster management
communication among cluster nodes, click the Use Multicast checkbox (enabled
when checked). When Use Multicast is enabled, the Address text boxes are
enabled; enter the multicast address into the Address text boxes. To use a
quorum disk, click the Use a Quorum Disk checkbox and enter quorum disk
parameters. The following quorum-disk parameters are available in the dialog box
if you enable Use a Quorum Disk: Interval, TKO, Votes, Minimum Score,
Device, Label, and Quorum Disk Heuristic. Table 5.1, "Quorum-Disk
Parameters" describes the parameters.
Important
Quorum-disk parameters and heuristics depend on the site
environment and any special requirements. To understand the use
of quorum-disk parameters and heuristics, refer to the qdisk(5) man
page. If you require assistance understanding and using quorum disk,
contact an authorized Red Hat support representative.
Tip
It is probable that configuring a quorum disk requires changing
quorum-disk parameters after the initial configuration. The Cluster
Configuration Tool (system-config-cluster) provides only the display
of quorum-disk parameters after initial configuration. If you need to
configure a quorum disk, consider using Conga instead; Conga allows
modification of quorum-disk parameters.
Figure 5.2. Creating A New Configuration
4. When you have completed entering the cluster name and other parameters in the
New Configuration dialog box, click OK. Clicking OK starts the Cluster
Configuration Tool, displaying a graphical representation of the configuration
(Figure 5.3, "The Cluster Configuration Tool").
Figure 5.3. The Cluster Configuration Tool
Parameter - Description
Use a Quorum Disk - Enables quorum disk. Enables quorum-disk parameters in
the New Configuration dialog box.
Interval - The frequency of read/write cycles, in seconds.
TKO - The number of cycles a node must miss in order to be declared dead.
Votes - The number of votes the quorum daemon advertises to CMAN when it
has a high enough score.
Minimum Score - The minimum score for a node to be considered "alive". If
omitted or set to 0, the default function, floor((n+1)/2), is used, where n is the
sum of the heuristics scores. The Minimum Score value must never exceed the
sum of the heuristic scores; otherwise, the quorum disk cannot be available.
Device - The storage device the quorum daemon uses. The device must be the
same on all nodes.
Label - Specifies the quorum disk label created by the mkqdisk utility. If this
field contains an entry, the label overrides the Device field. If this field is used,
the quorum daemon reads /proc/partitions and checks for qdisk signatures on
every block device found, comparing the label against the specified label. This is
useful in configurations where the quorum device name differs among nodes.
Quorum Disk Heuristics -
Program - The program used to determine if this heuristic is alive. This can be
anything that can be executed by /bin/sh -c. A return value of 0 indicates
success; anything else indicates failure. This field is required.
Score - The weight of this heuristic. Be careful when determining scores for
heuristics. The default score for each heuristic is 1.
Interval - The frequency (in seconds) at which the heuristic is polled. The
default interval for every heuristic is 2 seconds.
Table 5.1. Quorum-Disk Parameters
3. Configuring Cluster Properties
In addition to configuring cluster parameters in the preceding section (Section 2,
"Starting the Cluster Configuration Tool"), you can configure the following
cluster properties: Cluster Alias (optional), a Config Version (optional), and
Fence Daemon Properties. To configure cluster properties, follow these steps:
1. At the left frame, click Cluster.
2. At the bottom of the right frame (labeled Properties), click the Edit Cluster
Properties button. Clicking that button causes a Cluster Properties dialog box
to be displayed. The Cluster Properties dialog box presents text boxes for
Cluster Alias and Config Version, and two Fence Daemon Properties
parameters (DLM clusters only): Post-Join Delay and Post-Fail Delay.
3. (Optional) At the Cluster Alias text box, specify a cluster alias for the cluster.
The default cluster alias is set to the true cluster name provided when the cluster
is set up (refer to Section 2, "Starting the Cluster Configuration Tool"). The
cluster alias should be descriptive enough to distinguish it from other clusters and
systems on your network (for example, nfs_cluster or httpd_cluster). The cluster
alias cannot exceed 15 characters.
4. (Optional) The Config Version value is set to 1 by default and is automatically
incremented each time you save your cluster configuration. However, if you need
to set it to another value, you can specify it at the Config Version text box.
5. Specify the Fence Daemon Properties parameters (DLM clusters only):
Post-Join Delay and Post-Fail Delay.
a. The Post-Join Delay parameter is the number of seconds the fence daemon
(fenced) waits before fencing a node after the node joins the fence domain. The
Post-Join Delay default value is 3. A typical setting for Post-Join Delay is
between 20 and 30 seconds, but can vary according to cluster and network
performance.
b. The Post-Fail Delay parameter is the number of seconds the fence daemon
(fenced) waits before fencing a node (a member of the fence domain) after the
node has failed. The Post-Fail Delay default value is 6. Its value may be varied
to suit cluster and network performance.
Note
For more information about Post-Join Delay and Post-Fail Delay,
refer to the fenced(8) man page.
6. Save cluster configuration changes by selecting File => Save.
4. Configuring Fence Devices
Configuring fence devices for the cluster consists of selecting one or more fence
devices and specifying fence-device-dependent parameters (for example, name, IP
address, login, and password).
To configure fence devices, follow these steps:
1. Click Fence Devices. At the bottom of the right frame (labeled Properties),
click the Add a Fence Device button. Clicking Add a Fence Device causes the
Fence Device Configuration dialog box to be displayed (refer to Figure 5.4,
"Fence Device Configuration").
Figure 5.4. Fence Device Configuration
2. At the Fence Device Configuration dialog box, click the drop-down box under
Add a New Fence Device and select the type of fence device to configure.
3. Specify the information in the Fence Device Configuration dialog box
according to the type of fence device. Refer to Appendix B, Fence Device
Parameters for more information about fence device parameters.
4. Click OK.
5. Choose File => Save to save the changes to the cluster configuration.
5. Adding and Deleting Members
The procedure to add or delete a cluster member varies depending on whether the
cluster is a newly configured cluster or a cluster that is already configured and
running.
To add a member to a new cluster, refer to Section 5.1, "Adding a Member to a
New Cluster".
To add or delete a cluster member in an existing cluster, refer to the following
sections:
Section 5.2, "Adding a Member to a Running DLM Cluster"
Section 5.3, "Deleting a Member from a DLM Cluster"
Section 5.4, "Adding a GULM Client-only Member"
Section 5.5, "Deleting a GULM Client-only Member"
Section 5.6, "Adding or Deleting a GULM Lock Server Member"
5.1. Adding a Member to a New Cluster
To add a member to a new cluster, follow these steps:
1. At system-config-cluster, in the Cluster Configuration Tool tab, click Cluster
Node.
2. At the bottom of the right frame (labeled Properties), click the Add a Cluster
Node button. Clicking that button causes a Node Properties dialog box to be
displayed. For a DLM cluster, the Node Properties dialog box presents text
boxes for Cluster Node Name and Quorum Votes (refer to Figure 5.5, "Adding
a Member to a New DLM Cluster"). For a GULM cluster, the Node Properties
dialog box presents text boxes for Cluster Node Name and Quorum Votes,
and presents a checkbox for GULM Lockserver (refer to Figure 5.6, "Adding a
Member to a New GULM Cluster").
Important
The number of nodes that can be configured as GULM lock servers is
limited to either one, three, or five.
Figure 5.5. Adding a Member to a New DLM Cluster
Figure 5.6. Adding a Member to a New GULM Cluster
3. At the Cluster Node Name text box, specify a node name. The entry can be a
name or an IP address of the node on the cluster subnet.
Note
Each node must be on the same subnet as the node from which you
are running the Cluster Configuration Tool and must be defined
either in DNS or in the /etc/hosts file of each cluster node, as in the
example after this note.
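A minimal sketch of /etc/hosts entries for a three-node cluster; the addresses and
node names are hypothetical, and the file should be kept identical on every node:

   # /etc/hosts - cluster node entries (example values)
   10.0.0.1    nano-01.example.com    nano-01
   10.0.0.2    nano-02.example.com    nano-02
   10.0.0.3    nano-03.example.com    nano-03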
Note
The node on which you are running the Cluster Configuration Tool
must be explicitly added as a cluster member; the node is not
automatically added to the cluster configuration as a result of running
the Cluster Configuration Tool.
4. Optionally, at the Quorum Votes text box, you can specify a value; however, in
most configurations you can leave it blank. Leaving the Quorum Votes text box
blank causes the quorum votes value for that node to be set to the default value
of 1.
5. Click OK.
6. Configure fencing for the node:
a. Click the node that you added in the previous step.
b. At the bottom of the right frame (below Properties), click Manage Fencing
For This Node. Clicking Manage Fencing For This Node causes the Fence
Configuration dialog box to be displayed.
c. At the Fence Configuration dialog box, at the bottom of the right frame (below
Properties), click Add a New Fence Level. Clicking Add a New Fence
Level causes a fence-level element (for example, Fence-Level-1,
Fence-Level-2, and so on) to be displayed below the node in the left frame of
the Fence Configuration dialog box.
d. Click the fence-level element.
e. At the bottom of the right frame (below Properties), click Add a New Fence
to this Level. Clicking Add a New Fence to this Level causes the Fence
Properties dialog box to be displayed.
f. At the Fence Properties dialog box, click the Fence Device Type drop-down
box and select the fence device for this node. Also, provide the additional
information required (for example, Port and Switch for an APC Power Device).
g. At the Fence Properties dialog box, click OK. Clicking OK causes a fence
device element to be displayed below the fence-level element.
h. To create additional fence devices at this fence level, return to step 6d.
Otherwise, proceed to the next step.
i. To create additional fence levels, return to step 6c. Otherwise, proceed to the
next step.
j. If you have configured all the fence levels and fence devices for this node, click
Close.
7. Choose File => Save to save the changes to the cluster configuration.
To continue configuring a new cluster, proceed to Section 6, "Configuring a Failover
Domain".
5.2. Adding a Member to a Running DLM Cluster
The procedure for adding a member to a running DLM cluster depends on whether
the cluster contains only two nodes or more than two nodes. To add a member to a
running DLM cluster, follow the steps in one of the following sections according to
the number of nodes in the cluster:
For clusters with only two nodes -
Section 5.2.1, "Adding a Member to a Running DLM Cluster That Contains Only
Two Nodes"
For clusters with more than two nodes -
Section 5.2.2, "Adding a Member to a Running DLM Cluster That Contains More
Than Two Nodes"
5.2.1. Adding a Member to a Running DLM Cluster That
Contains Only Two Nodes
To add a member to an existing DLM cluster that is currently in operation, and
contains only two nodes, follow these steps:
1. Add the node and configure fencing for it as in Section 5.1, "Adding a Member to
a New Cluster".
2. Click Send to Cluster to propagate the updated configuration to other running
nodes in the cluster.
3. Use the scp command to send the updated /etc/cluster/cluster.conf file from
one of the existing cluster nodes to the new node (see the example after this
procedure).
4. At system-config-cluster, in the Cluster Status Tool tab, disable each service
listed under Services.
5. Stop the cluster software on the two running nodes by running the following
commands at each node in this order:
a. service rgmanager stop, if the cluster is running high-availability services
(rgmanager)
b. service gfs stop, if you are using Red Hat GFS
c. service clvmd stop, if CLVM has been used to create clustered volumes
d. service fenced stop
e. service cman stop
f. service ccsd stop
6. Start cluster software on all cluster nodes (including the added one) by running
the following commands in this order:
a. service ccsd start
b. service cman start
c. service fenced start
d. service clvmd start, if CLVM has been used to create clustered volumes
e. service gfs start, if you are using Red Hat GFS
f. service rgmanager start, if the cluster is running high-availability services
(rgmanager)
7. Start system-config-cluster (refer to Section 2, "Starting the Cluster
Configuration Tool"). At the Cluster Configuration Tool tab, verify that the
configuration is correct. At the Cluster Status Tool tab, verify that the nodes and
services are running as expected.
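As a minimal sketch of step 3, run from an existing cluster node; nano-03 is a
hypothetical name for the newly added node:

   # Copy the updated configuration file to the new node
   scp /etc/cluster/cluster.conf root@nano-03:/etc/cluster/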
Note
Make sure to configure other parameters that may be affected by
changes in this section. Refer to Section 1, "Configuration Tasks".
5.2.2. Adding a Member to a Running DLM Cluster That
Contains More Than Two Nodes
To add a member to an existing DLM cluster that is currently in operation, and
contains more than two nodes, follow these steps:
1. Add the node and configure fencing for it as in Section 5.1, "Adding a Member to
a New Cluster".
2. Click Send to Cluster to propagate the updated configuration to other running
nodes in the cluster.
3. Use the scp command to send the updated /etc/cluster/cluster.conf file from
one of the existing cluster nodes to the new node.
4. Start cluster services on the new node by running the following commands in this
order:
a. service ccsd start
b. service cman start
c. service fenced start
d. service clvmd start, if CLVM has been used to create clustered volumes
e. service gfs start, if you are using Red Hat GFS
f. service rgmanager start, if the cluster is running high-availability services
(rgmanager)
5. Start system-config-cluster (refer to Section 2, "Starting the Cluster
Configuration Tool"). At the Cluster Configuration Tool tab, verify that the
configuration is correct. At the Cluster Status Tool tab, verify that the nodes and
services are running as expected.
Note
Make sure to configure other parameters that may be affected by changes in this section. Refer to Section 1, "Configuration Tasks".
5.3. Deleting a Member from a DLM Cluster
To delete a member from an existing DLM cluster that is currently in operation, follow these steps:
1. At one of the running nodes (not at a node to be deleted), start system-config-cluster (refer to Section 2, "Starting the Cluster Configuration Tool"). At the Cluster Status Tool tab, under Services, disable or relocate each service that is running on the node to be deleted.
2. Stop the cluster software on the node to be deleted by running the following commands at that node in this order:
a. service rgmanager stop, if the cluster is running high-availability services (rgmanager)
b. service gfs stop, if you are using Red Hat GFS
c. service clvmd stop, if CLVM has been used to create clustered volumes
d. service fenced stop
e. service cman stop
f. service ccsd stop
3. At system-config-cluster (running on a node that is not to be deleted), in the Cluster Configuration Tool tab, delete the member as follows:
a. If necessary, click the triangle icon to expand the Cluster Nodes property.
b. Select the cluster node to be deleted. At the bottom of the right frame (labeled Properties), click the Delete Node button.
c. Clicking the Delete Node button causes a warning dialog box to be displayed requesting confirmation of the deletion (Figure 5.7, "Confirm Deleting a
Member").
Figure 5.7. Confirm Deleting a Member
d. At that dialog box, click Yes to confirm deletion.
e. Propagate the updated configuration by clicking the Send to Cluster button. (Propagating the updated configuration automatically saves the configuration.)
4. Stop the cluster software on the remaining running nodes by running the following commands at each node in this order:
a. service rgmanager stop, if the cluster is running high-availability services (rgmanager)
b. service gfs stop, if you are using Red Hat GFS
c. service clvmd stop, if CLVM has been used to create clustered volumes
d. service fenced stop
e. service cman stop
f. service ccsd stop
5. Start cluster software on all remaining cluster nodes by running the following commands in this order:
a. service ccsd start
b. service cman start
c. service fenced start
d. service clvmd start, if CLVM has been used to create clustered volumes
e. service gfs start, if you are using Red Hat GFS
f. service rgmanager start, if the cluster is running high-availability services (rgmanager)
6. At system-config-cluster (running on a node that was not deleted), in the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab, verify that the nodes and services are running as expected.
Note
Make sure to configure other parameters that may be affected by changes in this section. Refer to Section 1, "Configuration Tasks".
5.4. Adding a GULM Client-only Member
The procedure for adding a member to a running GULM cluster depends on the type of GULM node: either a node that functions only as a GULM client (a cluster member capable of running applications, but not eligible to function as a GULM lock server) or a node that functions as a GULM lock server. This procedure describes how to add a member that functions only as a GULM client. To add a member that functions as a GULM lock server, refer to Section 5.6, "Adding or Deleting a GULM Lock Server Member".
To add a member that functions only as a GULM client to an existing cluster that is currently in operation, follow these steps:
1. At one of the running members, start system-config-cluster (refer to Section 2, "Starting the Cluster Configuration Tool"). At the Cluster Configuration Tool tab, add the node and configure fencing for it as in Section 5.1, "Adding a Member to a New Cluster".
2. Click Send to Cluster to propagate the updated configuration to other running nodes in the cluster.
3. Use the scp command to send the updated /etc/cluster/cluster.conf file from one of the existing cluster nodes to the new node.
4. Start cluster services on the new node by running the following commands in this order:
a. service ccsd start
b. service lock_gulmd start
c. service clvmd start, if CLVM has been used to create clustered volumes
d. service gfs start, if you are using Red Hat GFS
e. service rgmanager start, if the cluster is running high-availability services (rgmanager)
5. At system-config-cluster, in the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab, verify that the nodes and services are running as expected.
Note
Make sure to configure other parameters that may be affected by changes in this section. Refer to Section 1, "Configuration Tasks".
5.5. Deleting a GULM Client-only Member
The procedure for deleting a member from a running GULM cluster depends on the type of member to be removed: either a node that functions only as a GULM client (a cluster member capable of running applications, but not eligible to function as a GULM lock server) or a node that functions as a GULM lock server. The procedure in this section describes how to delete a member that functions only as a GULM client. To remove a member that functions as a GULM lock server, refer to Section 5.6, "Adding or Deleting a GULM Lock Server Member".
To delete a member functioning only as a GULM client from an existing cluster that is currently in operation, follow these steps:
1. At one of the running nodes (not at a node to be deleted), start system-config-cluster (refer to Section 2, "Starting the Cluster Configuration Tool"). At the Cluster Status Tool tab, under Services, disable or relocate each service that is running on the node to be deleted.
2. Stop the cluster software on the node to be deleted by running the following commands at that node in this order:
a. service rgmanager stop, if the cluster is running high-availability services (rgmanager)
b. service gfs stop, if you are using Red Hat GFS
c. service clvmd stop, if CLVM has been used to create clustered volumes
d. service lock_gulmd stop
e. service ccsd stop
3. At system-config-cluster (running on a node that is not to be deleted), in the Cluster Configuration Tool tab, delete the member as follows:
a. If necessary, click the triangle icon to expand the Cluster Nodes property.
b. Select the cluster node to be deleted. At the bottom of the right frame (labeled Properties), click the Delete Node button.
c. Clicking the Delete Node button causes a warning dialog box to be displayed requesting confirmation of the deletion (Figure 5.8, "Confirm Deleting a Member").
Figure 5.8. Confirm Deleting a Member
d. At that dialog box, click Yes to confirm deletion.
e. Propagate the updated configuration by clicking the Send to Cluster button. (Propagating the updated configuration automatically saves the configuration.)
4. Stop the cluster software on the remaining running nodes by running the following commands at each node in this order:
a. service rgmanager stop, if the cluster is running high-availability services (rgmanager)
b. service gfs stop, if you are using Red Hat GFS
c. service clvmd stop, if CLVM has been used to create clustered volumes
d. service lock_gulmd stop
e. service ccsd stop
5. Start cluster software on all remaining cluster nodes by running the following commands in this order:
a. service ccsd start
b. service lock_gulmd start
c. service clvmd start, if CLVM has been used to create clustered volumes
d. service gfs start, if you are using Red Hat GFS
e. service rgmanager start, if the cluster is running high-availability services (rgmanager)
6. At system-config-cluster (running on a node that was not deleted), in the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab, verify that the nodes and services are running as expected.
Note
Make sure to configure other parameters that may be affected by changes in this section. Refer to Section 1, "Configuration Tasks".
5.6. Adding or Deleting a GULM Lock Server Member
The procedure for adding or deleting a GULM cluster member depends on the type of GULM node: either a node that functions only as a GULM client (a cluster member capable of running applications, but not eligible to function as a GULM lock server) or a node that functions as a GULM lock server. The procedure in this section describes how to add or delete a member that functions as a GULM lock server. To
add a member that functions only as a GULM client, refer to Section 5.4, "Adding a GULM Client-only Member"; to delete a member that functions only as a GULM client, refer to Section 5.5, "Deleting a GULM Client-only Member".
Important
The number of nodes that can be configured as GULM lock servers is limited to either one, three, or five.
To add or delete a GULM member that functions as a GULM lock server in an existing cluster that is currently in operation, follow these steps:
1. At one of the running members (running on a node that is not to be deleted), start system-config-cluster (refer to Section 2, "Starting the Cluster Configuration Tool"). At the Cluster Status Tool tab, disable each service listed under Services.
2. Stop the cluster software on each running node by running the following commands at each node in this order:
a. service rgmanager stop, if the cluster is running high-availability services (rgmanager)
b. service gfs stop, if you are using Red Hat GFS
c. service clvmd stop, if CLVM has been used to create clustered volumes
d. service lock_gulmd stop
e. service ccsd stop
3. To add a GULM lock server member, at system-config-cluster, in the Cluster Configuration Tool tab, add each node and configure fencing for it as in Section 5.1, "Adding a Member to a New Cluster". Make sure to select GULM Lockserver in the Node Properties dialog box (refer to Figure 5.6, "Adding a Member to a New GULM Cluster").
4. To delete a GULM lock server member, at system-config-cluster (running on a node that is not to be deleted), in the Cluster Configuration Tool tab, delete each member as follows:
a. If necessary, click the triangle icon to expand the Cluster Nodes property.
b. Select the cluster node to be deleted. At the bottom of the right frame (labeled Properties), click the Delete Node button.
c. Clicking the Delete Node button causes a warning dialog box to be displayed requesting confirmation of the deletion (Figure 5.9, "Confirm Deleting a Member").
Figure 5.9. Confirm Deleting a Member
d. At that dialog box, click Yes to confirm deletion.
5. Propagate the configuration file to the cluster nodes as follows:
a. Log in to the node where you created the configuration file (the same node used for running system-config-cluster).
b. Using the scp command, copy the /etc/cluster/cluster.conf file to all nodes in the cluster.
Note
Propagating the cluster configuration file this way is necessary under these circumstances because the cluster software is not running, and therefore not capable of propagating the configuration. Once a cluster is installed and running, the cluster configuration file is propagated using the Red Hat cluster management GUI Send to Cluster button. For more information about propagating the cluster configuration using the GUI Send to Cluster button, refer to Section 3, "Modifying the Cluster Configuration".
c. After you have propagated the cluster configuration to the cluster nodes, you can either reboot each node or start the cluster software on each cluster node by running the following commands at each node in this order:
i. service ccsd start
ii. service lock_gulmd start
iii. service clvmd start, if CLVM has been used to create clustered volumes
iv. service gfs start, if you are using Red Hat GFS
v. service rgmanager start, if the node is also functioning as a GULM client and the cluster is running cluster services (rgmanager)
d. At system-config-cluster (running on a node that was not deleted), in the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab, verify that the nodes and services are running as expected.
Note
Make sure to configure other parameters that may be affected by changes in this section. Refer to Section 1, "Configuration Tasks".
6. Configuring a Failover Domain
A failover domain is a named subset of cluster nodes that are eligible to run a cluster service in the event of a node failure. A failover domain can have the following characteristics:
Unrestricted - Allows you to specify that a subset of members are preferred, but that a cluster service assigned to this domain can run on any available member.
Restricted - Allows you to restrict the members that can run a particular cluster service. If none of the members in a restricted failover domain are available, the cluster service cannot be started (either manually or by the cluster software).
Unordered - When a cluster service is assigned to an unordered failover domain, the member on which the cluster service runs is chosen from the available
failover domain members with no priority ordering.
Ordered - Allows you to specify a preference order among the members of a failover domain. The member at the top of the list is the most preferred, followed by the second member in the list, and so on.
Note
Changing a failover domain configuration has no effect on currently running services.
Note
Failover domains are not required for operation.
By default, failover domains are unrestricted and unordered.
In a cluster with several members, using a restricted failover domain can minimize the work of setting up the cluster to run a cluster service (such as httpd), which requires you to set up the configuration identically on all members that run the cluster service. Instead of setting up the entire cluster to run the cluster service, you must set up only the members in the restricted failover domain that you associate with the cluster service.
Tip
To configure a preferred member, you can create an unrestricted failover domain comprising only one cluster member. Doing that causes a cluster service to run on that cluster member primarily (the preferred member), but allows the cluster service to fail over to any of the other members.
The following sections describe adding a failover domain, removing a failover domain, and removing members from a failover domain:
Section 6.1, "Adding a Failover Domain"
Section 6.2, "Removing a Failover Domain"
Section 6.3, "Removing a Member from a Failover Domain"
6.1. Adding a Failover Domain
To add a failover domain, follow these steps:
1. At the left frame of the Cluster Configuration Tool, click Failover Domains.
2. At the bottom of the right frame (labeled Properties), click the Create a Failover Domain button. Clicking the Create a Failover Domain button causes the Add Failover Domain dialog box to be displayed.
3. At the Add Failover Domain dialog box, specify a failover domain name at the Name for new Failover Domain text box and click OK. Clicking OK causes the Failover Domain Configuration dialog box to be displayed (Figure 5.10, "Failover Domain Configuration: Configuring a Failover Domain").
Note
The name should be descriptive enough to distinguish its purpose relative to other names used in your cluster.
Figure 5.10. Failover Domain Configuration: Configuring a
Failover Domain
4. Click the Available Cluster Nodes drop-down box and select the members for this failover domain.
5. To restrict failover to members in this failover domain, click (check) the Restrict Failover To This Domains Members checkbox. (With Restrict Failover To This Domains Members checked, services assigned to this failover domain fail over only to nodes in this failover domain.)
6. To prioritize the order in which the members in the failover domain assume control of a failed cluster service, follow these steps:
a. Click (check) the Prioritized List checkbox (Figure 5.11, "Failover Domain Configuration: Adjusting Priority"). Clicking Prioritized List causes the Priority column to be displayed next to the Member Node column.
Figure 5.11. Failover Domain Configuration: Adjusting
Priority
b. For each node that requires a priority adjustment, click the node listed in the Member Node/Priority columns and adjust priority by clicking one of the Adjust Priority arrows. Priority is indicated by the position in the Member Node column and the value in the Priority column. The node priorities are listed highest to lowest, with the highest priority node at the top of the Member Node column (having the lowest Priority number).
7. Click Close to create the domain.
8. At the Cluster Configuration Tool, perform one of the following actions depending on whether the configuration is for a new cluster or for one that is operational and running:
New cluster - If this is a new cluster, choose File => Save to save the changes to the cluster configuration.
Running cluster - If this cluster is operational and running, and you want to propagate the change immediately, click the Send to Cluster button. Clicking Send to Cluster automatically saves the configuration change. If you do not want to propagate the change immediately, choose File => Save to save the changes to the cluster configuration.
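The outcome of this procedure is a failoverdomains section in /etc/cluster/cluster.conf. The following is a minimal sketch of what a restricted, ordered domain might look like; the domain and node names are hypothetical, and the attribute values correspond to the checkboxes and priorities chosen above:

<rm>
    <failoverdomains>
        <!-- Restricted, ordered domain; a lower priority value is more preferred -->
        <failoverdomain name="httpd-domain" restricted="1" ordered="1">
            <failoverdomainnode name="node1" priority="1"/>
            <failoverdomainnode name="node2" priority="2"/>
        </failoverdomain>
    </failoverdomains>
</rm>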
6.2. Removing a Failover Domain
To remove a failover domain, follow these steps:
1. At the left frame of the Cluster Configuration Tool, click the failover domain that you want to delete (listed under Failover Domains).
2. At the bottom of the right frame (labeled Properties), click the Delete Failover Domain button. Clicking the Delete Failover Domain button causes a warning dialog box to be displayed asking if you want to remove the failover domain. Confirm that the failover domain identified in the warning dialog box is the one you want to delete and click Yes. Clicking Yes causes the failover domain to be removed from the list of failover domains under Failover Domains in the left frame of the Cluster Configuration Tool.
3. At the Cluster Configuration Tool, perform one of the following actions depending on whether the configuration is for a new cluster or for one that is operational and running:
New cluster - If this is a new cluster, choose File => Save to save the changes to the cluster configuration.
Running cluster - If this cluster is operational and running, and you want to propagate the change immediately, click the Send to Cluster button. Clicking Send to Cluster automatically saves the configuration change. If you do not want to propagate the change immediately, choose File => Save to save the changes to the cluster configuration.
6.3. Removing a Member from a Failover Domain
To remove a member from a failover domain, follow these steps:
1. At the left frame of the Cluster Configuration Tool, click the failover
domain that you want to change (listed under Failover Domains).
2. At the bottom of the right frame (labeled Properties), click the Edit Failover Domain Properties button. Clicking the Edit Failover Domain Properties button causes the Failover Domain Configuration dialog box to be displayed (Figure 5.10, "Failover Domain Configuration: Configuring a Failover Domain").
3. At the Failover Domain Configuration dialog box, in the Member Node column, click the node name that you want to delete from the failover domain and click the Remove Member from Domain button. Clicking Remove Member from Domain removes the node from the Member Node column. Repeat this step for each node that is to be deleted from the failover domain. (Nodes must be deleted one at a time.)
4. When finished, click Close.
5. At the Cluster Configuration Tool, perform one of the following actions depending on whether the configuration is for a new cluster or for one that is operational and running:
New cluster - If this is a new cluster, choose File => Save to save the changes to the cluster configuration.
Running cluster - If this cluster is operational and running, and you want to propagate the change immediately, click the Send to Cluster button. Clicking Send to Cluster automatically saves the configuration change. If you do not want to propagate the change immediately, choose File => Save to save the changes to the cluster configuration.
7. Adding Cluster Resources
To specify a device for a cluster service, follow these steps:
1. On the Resources property of the Cluster Configuration Tool, click the Create a Resource button. Clicking the Create a Resource button causes the Resource Configuration dialog box to be displayed.
2. At the Resource Configuration dialog box, under Select a Resource Type, click the drop-down box. At the drop-down box, select a resource to configure. The resource options are described as follows:
GFS
Name - Create a name for the file system resource.
Mount Point - Choose the path to which the file system resource is mounted.
Device - Specify the device file associated with the file system resource.
Options - Mount options.
File System ID - When creating a new file system resource, you can leave this field blank. Leaving the field blank causes a file system ID to be assigned automatically after you click OK at the Resource Configuration dialog box. If you need to assign a file system ID explicitly, specify it in this field.
Force Unmount checkbox - If checked, forces the file system to unmount. The default setting is unchecked. Force Unmount kills all processes using the mount point to free up the mount when it tries to unmount. With GFS resources, the mount point is not unmounted at service tear-down unless this box is checked.
File System
Name - Create a name for the file system resource.
File System Type - Choose the file system for the resource using the drop-down menu.
Mount Point - Choose the path to which the file system resource is mounted.
Device - Specify the device file associated with the file system resource.
Options - Mount options.
File System ID - When creating a new file system resource, you can leave this field blank. Leaving the field blank causes a file system ID to be assigned automatically after you click OK at the Resource Configuration dialog box. If you need to assign a file system ID explicitly, specify it in this field.
Checkboxes - Specify mount and unmount actions when a service is stopped (for example, when disabling or relocating a service):
Force unmount - If checked, forces the file system to unmount. The default setting is unchecked. Force Unmount kills all processes using the mount point to free up the mount when it tries to unmount.
Reboot host node if unmount fails - If checked, reboots the node if unmounting this file system fails. The default setting is unchecked.
Check file system before mounting - If checked, causes fsck to be run on the file system before mounting it. The default setting is unchecked.
IP Address
IP Address - Type the IP address for the resource.
Monitor Link checkbox - Check the box to enable or disable link status monitoring of the IP address resource.
NFS Mount
Name - Create a symbolic name for the NFS mount.
Mount Point - Choose the path to which the file system resource is mounted.
Host - Specify the NFS server name.
Export Path - NFS export on the server.
NFS and NFS4 options - Specify NFS protocol:
NFS - Specifies using NFSv3 protocol. The default setting is NFS.
NFS4 - Specifies using NFSv4 protocol.
Options - Mount options. For more information, refer to the nfs(5) man page.
Force Unmount checkbox - If checked, forces the file system to unmount. The default setting is unchecked. Force Unmount kills all processes using the mount point to free up the mount when it tries to unmount.
NFS Client
Name - Enter a name for the NFS client resource.
Target - Enter a target for the NFS client resource. Supported targets are hostnames, IP addresses (with wild-card support), and netgroups.
Read-Write and Read Only options - Specify the type of access rights for this NFS client resource:
Read-Write - Specifies that the NFS client has read-write access. The default setting is Read-Write.
Read Only - Specifies that the NFS client has read-only access.
Options - Additional client access rights. For more information, refer to the exports(5) man page, General Options.
NFS Export
Name - Enter a name for the NFS export resource.
Script
Name - Enter a name for the custom user script.
File (with path) - Enter the path where this custom script is located (for example, /etc/init.d/userscript).
Samba Service
Name - Enter a name for the Samba server.
Workgroup - Enter the Windows workgroup name or Windows NT domain of the Samba service.
Note
When creating or editing a cluster service, connect a Samba-service resource directly to the service, not to a resource within a service. That is, at the Service Management dialog box, use either Create a new resource for this service or Add a Shared Resource to this service; do not use Attach a new Private Resource to the Selection or Attach a Shared Resource to the selection.
3. When finished, click OK.
4. Choose File => Save to save the change to the /etc/cluster/cluster.conf configuration file.
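In /etc/cluster/cluster.conf, the resources created in this section are written into a resources block under the rm section. The sketch below is illustrative only; the resource names, device, mount point, and IP address are hypothetical values of the kind you might enter in the dialog boxes above:

<rm>
    <resources>
        <!-- An ext3 file system resource; force_unmount="1" corresponds to
             checking the Force Unmount checkbox -->
        <fs name="content-fs" fstype="ext3" device="/dev/sde3"
            mountpoint="/var/www/html" force_unmount="1"/>
        <!-- A floating IP address with link monitoring enabled -->
        <ip address="10.0.0.100" monitor_link="1"/>
        <!-- A custom script resource -->
        <script name="user-script" file="/etc/init.d/userscript"/>
    </resources>
</rm>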
8. Adding a Cluster Service to the Cluster
To add a cluster service to the cluster, follow these steps:
1. At the left frame, click Services.
2. At the bottom of the right frame (labeled Properties), click the Create a Service button. Clicking Create a Service causes the Add a Service dialog box
to be displayed.
3. At the Add a Service dialog box, type the name of the service in the Name text box and click OK. Clicking OK causes the Service Management dialog box to be displayed (refer to Figure 5.12, "Adding a Cluster Service").
Tip
Use a descriptive name that clearly distinguishes the service from other services in the cluster.
Figure 5.12. Adding a Cluster Service
4. If you want to restrict the members on which this cluster service is able to run,
choose a failover domain from the Failover Domain drop-down box. (Refer to Section 6, "Configuring a Failover Domain" for instructions on how to configure a failover domain.)
5. Autostart This Service checkbox - This is checked by default. If Autostart This Service is checked, the service is started automatically when the cluster is started and running. If Autostart This Service is not checked, the service must be started manually any time the cluster comes up from the stopped state.
6. Run Exclusive checkbox - This sets a policy wherein the service only runs on nodes that have no other services running on them. For example, for a very busy web server that is clustered for high availability, it would be advisable to keep that service on a node alone with no other services competing for its resources - that is, with Run Exclusive checked. On the other hand, services that consume few resources (like NFS and Samba) can run together on the same node with little concern over contention for resources. For those types of services you can leave Run Exclusive unchecked.
Note
Circumstances that require enabling Run Exclusive are rare. Enabling Run Exclusive can render a service offline if the node it is running on fails and no other nodes are empty.
7. Select a recovery policy to specify how the resource manager should recover from a service failure. At the upper right of the Service Management dialog box, there are three Recovery Policy options available:
Restart - Restart the service on the node where the service is currently located. The default setting is Restart. If the service cannot be restarted on the current node, the service is relocated.
Relocate - Relocate the service before restarting. Do not restart the node where the service is currently located.
Disable - Do not restart the service at all.
8. Click the Add a Shared Resource to this service button and choose a resource listed that you have configured in Section 7, "Adding Cluster Resources".
Note
If you are adding a Samba-service resource, connect the Samba-service resource directly to the service, not to a resource within a service. That is, at the Service Management dialog box, use either Create a new resource for this service or Add a Shared Resource to this service; do not use Attach a new Private Resource to the Selection or Attach a Shared Resource to the selection.
9. If needed, you may also create a private resource that becomes a subordinate resource by clicking the Attach a new Private Resource to the Selection button. The process is the same as creating a shared resource described in Section 7, "Adding Cluster Resources". The private resource appears as a child of the shared resource with which you associated it. Click the triangle icon next to the shared resource to display any private resources associated with it.
10. When finished, click OK.
11. Choose File => Save to save the changes to the cluster configuration.
Note
To verify the existence of the IP service resource used in a cluster service, you must use the /sbin/ip addr list command on a cluster node. The following output shows the /sbin/ip addr list command executed on a node running a cluster service:

1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1356 qdisc pfifo_fast qlen 1000
    link/ether 00:05:5d:9a:d8:91 brd ff:ff:ff:ff:ff:ff
    inet 10.11.4.31/22 brd 10.11.7.255 scope global eth0
    inet6 fe80::205:5dff:fe9a:d891/64 scope link
    inet 10.11.4.240/22 scope global secondary eth0
       valid_lft forever preferred_lft forever
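Tying the pieces together, a complete service definition in /etc/cluster/cluster.conf might look roughly like the following sketch. The service and resource names are hypothetical; the ref attributes point at shared resources defined in the resources block, and the recovery attribute corresponds to the Recovery Policy selected above:

<service name="example-service" domain="httpd-domain" autostart="1"
         recovery="relocate">
    <!-- References to shared (globally defined) resources -->
    <fs ref="content-fs"/>
    <ip ref="10.0.0.100"/>
    <script ref="user-script"/>
</service>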
9. Propagating The Configuration File: New Cluster
For newly defined clusters, you must propagate the configuration file to the cluster nodes as follows:
1. Log in to the node where you created the configuration file.
2. Using the scp command, copy the /etc/cluster/cluster.conf file to all nodes in the cluster.
Note
Propagating the cluster configuration file this way is necessary for the first time a cluster is created. Once a cluster is installed and running, the cluster configuration file is propagated using the Red Hat cluster management GUI Send to Cluster button. For more information about propagating the cluster configuration using the GUI Send to Cluster button, refer to Section 3, "Modifying the Cluster Configuration".
10. Starting the Cluster Software
After you have propagated the cluster configuration to the cluster nodes, you can either reboot each node or start the cluster software on each cluster node by running the following commands at each node in this order:
1. service ccsd start
2. service cman start (or service lock_gulmd start for GULM clusters)
3. service fenced start (DLM clusters only)
4. service clvmd start, if CLVM has been used to create clustered volumes
5. service gfs start, if you are using Red Hat GFS
6. service rgmanager start, if the cluster is running high-availability services (rgmanager)
7. Start the Red Hat Cluster Suite management GUI. At the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab, verify that the nodes and services are running as expected.
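On a DLM cluster where every component above is in use, the start sequence can be scripted. This is a minimal sketch; it assumes all six services are installed and needed on the node:

# Start the cluster software in dependency order (DLM cluster, all components)
for s in ccsd cman fenced clvmd gfs rgmanager; do
    service "$s" start
done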
Chapter 6. Managing Red Hat Cluster With system-config-cluster
This chapter describes various administrative tasks for managing a Red Hat Cluster and consists of the following sections:
Section 1, "Starting and Stopping the Cluster Software"
Section 2, "Managing High-Availability Services"
Section 3, "Modifying the Cluster Configuration"
Section 4, "Backing Up and Restoring the Cluster Database"
Section 5, "Disabling the Cluster Software"
Section 6, "Diagnosing and Correcting Problems in a Cluster"
1. Starting and Stopping the Cluster Software
To start the cluster software on a member, type the following commands in this order:
1. service ccsd start
2. service cman start (or service lock_gulmd start for GULM clusters)
3. service fenced start (DLM clusters only)
4. service clvmd start, if CLVM has been used to create clustered volumes
5. service gfs start, if you are using Red Hat GFS
6. service rgmanager start, if the cluster is running high-availability services (rgmanager)
To stop the cluster software on a member, type the following commands in this order:
1. service rgmanager stop, if the cluster is running high-availability services (rgmanager)
2. service gfs stop, if you are using Red Hat GFS
3. service clvmd stop, if CLVM has been used to create clustered volumes
4. service fenced stop (DLM clusters only)
5. service cman stop (or service lock_gulmd stop for GULM clusters)
6. service ccsd stop
Stopping the cluster services on a member causes its services to fail over to an active member.
2. Managing High-Availability Services
You can manage cluster services with the Cluster Status Tool (Figure 6.1, "Cluster Status Tool") through the Cluster Management tab in the Cluster Administration GUI.
Figure 6.1. Cluster Status Tool
You can use the Cluster Status Tool to enable, disable, restart, or relocate a high-availability service. The Cluster Status Tool displays the current cluster status in the Services area and automatically updates the status every 10 seconds.
To enable a service, select the service in the Services area and click Enable. To disable a service, select it and click Disable. To restart a service, select it and click Restart. To relocate a service from one node to another, drag the service to another node and drop the service onto that node. Relocating a service
restarts the service on that node. (Relocating a service to its current node - that is, dragging a service to its current node and dropping the service onto that node - restarts the service.)
The following tables describe the members and services status information displayed by the Cluster Status Tool.
Members Status - Description
Member - The node is part of the cluster. Note: A node can be a member of a cluster; however, the node may be inactive and incapable of running services. For example, if rgmanager is not running on the node, but all other cluster software components are running on the node, the node appears as a Member in the Cluster Status Tool.
Dead - The node is unable to participate as a cluster member. The most basic cluster software is not running on the node.
Table 6.1. Members Status
Services Status - Description
Started - The service resources are configured and available on the cluster system that owns the service.
Pending - The service has failed on a member and is pending start on another member.
Disabled - The service has been disabled, and does not have an assigned owner. A disabled service is never restarted automatically by the cluster.
Stopped - The service is not running; it is waiting for a member capable of starting the service. A service remains in the stopped state if autostart is disabled.
Failed - The service has failed to start on the cluster and cannot be successfully stopped. A failed service is never restarted automatically by the cluster.
Table 6.2. Services Status
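From the command line, comparable member and service status is available with the clustat utility, which ships with rgmanager. This is a minimal sketch; run it on any cluster member:

# Display cluster member and service status once
clustat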
3. Modifying the Cluster Configuration
To modify the cluster configuration (the cluster configuration file, /etc/cluster/cluster.conf), use the Cluster Configuration Tool. For more information about using the Cluster Configuration Tool, refer to Chapter 5, Configuring Red Hat Cluster With system-config-cluster.
Warning
Do not manually edit the contents of the /etc/cluster/cluster.conf file without guidance from an authorized Red Hat representative or unless you fully understand the consequences of editing the /etc/cluster/cluster.conf file manually.
Important
Although the Cluster Configuration Tool provides a Quorum Votes parameter in the Properties dialog box of each cluster member, that parameter is intended only for use during initial cluster configuration. Furthermore, it is recommended that you retain the default Quorum Votes value of 1. For more information about using the Cluster Configuration Tool, refer to Chapter 5, Configuring Red Hat Cluster With system-config-cluster.
Important
If you are changing the number of cluster members, refer to Section 5, "Adding and Deleting Members". You must take into account certain circumstances for both DLM and GULM clusters when adding or deleting members.
To edit the cluster configuration file, click the Cluster Configuration tab in the cluster configuration GUI. Clicking the Cluster Configuration tab displays a graphical representation of the cluster configuration. Change the configuration file according to the following steps:
1. Make changes to cluster elements (for example, create a service).
2. Propagate the updated configuration file throughout the cluster by clicking Send to Cluster.
Note
The Cluster Configuration Tool does not display the Send to Cluster button if the cluster is new and has not been started yet, or if the node from which you are running the Cluster Configuration Tool is not a member of the cluster. If the Send to Cluster button is not displayed, you can still use the Cluster Configuration Tool; however, you cannot propagate the configuration. You can still save the configuration file. For information about using the Cluster Configuration Tool for a new cluster configuration, refer to Chapter 5, Configuring Red Hat Cluster With system-config-cluster.
3. Clicking Send to Cluster causes a Warning dialog box to be displayed. Click Yes to save and propagate the configuration.
4. Clicking Yes causes an Information dialog box to be displayed, confirming that the current configuration has been propagated to the cluster. Click OK.
5. Click the Cluster Management tab and verify that the changes have been propagated to the cluster members.
4. Backing Up and Restoring the Cluster Database
The Cluster Configuration Tool automatically retains backup copies of the three most recently used configuration files (besides the currently used configuration file). Retaining the backup copies is useful if the cluster does not function correctly because of misconfiguration and you need to return to a previous working configuration.
Each time you save a configuration file, the Cluster Configuration Tool saves backup copies of the three most recently used configuration files as /etc/cluster/cluster.conf.bak.1, /etc/cluster/cluster.conf.bak.2, and /etc/cluster/cluster.conf.bak.3. The backup file /etc/cluster/cluster.conf.bak.1
is the newest backup, /etc/cluster/cluster.conf.bak.2 is the second newest backup, and /etc/cluster/cluster.conf.bak.3 is the third newest backup.
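Before restoring, it can help to list the backups and see what changed; the following is a minimal sketch, run as root on the affected node:

# List the current configuration file and its three backups
ls -l /etc/cluster/cluster.conf*
# Show the differences between the newest backup and the current file
diff -u /etc/cluster/cluster.conf.bak.1 /etc/cluster/cluster.conf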
If a cluster member becomes inoperable because of misconfiguration, restore the configuration file according to the following steps:
1. At the Cluster Configuration Tool tab of the Red Hat Cluster Suite management GUI, click File => Open.
2. Clicking File => Open causes the system-config-cluster dialog box to be displayed.
3. At the system-config-cluster dialog box, select a backup file (for example, /etc/cluster/cluster.conf.bak.1). Verify the file selection in the Selection box and click OK.
4. Click File => Save As.
5. Clicking File => Save As causes the system-config-cluster dialog box to be displayed.
6. At the system-config-cluster dialog box, select /etc/cluster/cluster.conf and click OK. (Verify the file selection in the Selection box.)
7. Clicking OK causes an Information dialog box to be displayed. At that dialog box, click OK.
8. Propagate the updated configuration file throughout the cluster by clicking Send to Cluster.
Note
The Cluster Configuration Tool does not display the Send to Cluster button if the cluster is new and has not been started yet, or if the node from which you are running the Cluster Configuration Tool is not a member of the cluster. If the Send to Cluster button is not displayed, you can still use the Cluster Configuration Tool; however, you cannot propagate the configuration. You can still save the configuration file. For information about using the Cluster Configuration Tool for a new cluster configuration, refer to Chapter 5, Configuring Red Hat Cluster With system-config-cluster.
9. Clicking Send to Cluster causes a Warning dialog box to be displayed. Click Yes to propagate the configuration.
10. Click the Cluster Management tab and verify that the changes have been propagated to the cluster members.
5. Disabling the Cluster Software
It may become necessary to temporarily disable the cluster software on a cluster member. For example, if a cluster member experiences a hardware failure, you may want to reboot that member, but prevent it from rejoining the cluster so that you can perform maintenance on the system.
Use the /sbin/chkconfig command to stop the member from joining the cluster at boot-up as follows:
# chkconfig --level 2345 rgmanager off
# chkconfig --level 2345 gfs off
# chkconfig --level 2345 clvmd off
# chkconfig --level 2345 fenced off
# chkconfig --level 2345 lock_gulmd off
# chkconfig --level 2345 cman off
# chkconfig --level 2345 ccsd off
Once the problems with the disabled cluster member have been resolved, use the following commands to allow the member to rejoin the cluster:
# chkconfig --level 2345 rgmanager on
# chkconfig --level 2345 gfs on
# chkconfig --level 2345 clvmd on
# chkconfig --level 2345 fenced on
# chkconfig --level 2345 lock_gulmd on
# chkconfig --level 2345 cman on
# chkconfig --level 2345 ccsd on
You can then reboot the member for the changes to take effect or run the following commands in the order shown to restart the cluster software:
1. service ccsd start
2. service cman start (or service lock_gulmd start for GULM clusters)
3. service fenced start (DLM clusters only)
4. service clvmd start, if CLVM has been used to create clustered volumes
5. service gfs start, if you are using Red Hat GFS
6. service rgmanager start, if the cluster is running high-availability services (rgmanager)
6. Diagnosing and Correcting Problems in a Cluster
For information about diagnosing and correcting problems in a cluster, contact an authorized Red Hat support representative.
Appendix A. Example of Setting Up Apache HTTP Server
This appendix provides an example of setting up a highly available Apache HTTP Server on a Red Hat Cluster. The example describes how to set up a service to fail over an Apache HTTP Server. Variables in the example apply to this example only; they are provided to assist in setting up a service that suits your requirements.
Note
This example uses the Cluster Configuration Tool (system-config-cluster). You can use comparable Conga functions to make an Apache HTTP Server highly available on a Red Hat Cluster.
1. Apache HTTP Server Setup Overview
First, configure Apache HTTP Server on all nodes in the cluster. If using a failover domain, assign the service to all cluster nodes configured to run the Apache HTTP Server. Refer to Section 6, "Configuring a Failover Domain" for instructions. The cluster software ensures that only one cluster system runs the Apache HTTP Server at one time. The example configuration consists of installing the httpd RPM package on all cluster nodes (or on nodes in the failover domain, if used) and configuring a shared GFS resource for the Web content.
When installing the Apache HTTP Server on the cluster systems, run the following command to ensure that the cluster nodes do not automatically start the service when the system boots:
# chkconfig --del httpd
Rather than having the system init scripts spawn the httpd daemon, the cluster infrastructure initializes the service on the active cluster node. This ensures that the corresponding IP address and file system mounts are active on only one cluster
node at a time.
When adding an httpd service, a floating IP address must be assigned to the service so that the IP address will transfer from one cluster node to another in the event of failover or service relocation. The cluster infrastructure binds this IP address to the network interface on the cluster system that is currently running the Apache HTTP Server. This IP address ensures that the cluster node running httpd is transparent to the clients accessing the service.
The file systems that contain the Web content cannot be automatically mounted on the shared storage resource when the cluster nodes boot. Instead, the cluster software must mount and unmount the file system as the httpd service is started and stopped. This prevents the cluster systems from accessing the same data simultaneously, which may result in data corruption. Therefore, do not include the file systems in the /etc/fstab file.
2. Configuring Shared Storage
To set up the shared file system resource, perform the following tasks as root on one cluster system:
1. On one cluster node, use the interactive parted utility to create a partition to use for the document root directory. Note that it is possible to create multiple document root directories on different disk partitions.
2. Use the mkfs command to create an ext3 file system on the partition you created in the previous step. Specify the drive letter and the partition number. For example:
# mkfs -t ext3 /dev/sde3
3. Mount the file system that contains the document root directory. For example:
# mount /dev/sde3 /var/www/html
Do not add this mount information to the /etc/fstab file because only the cluster software can mount and unmount file systems used in a service.
4. Copy all the required files to the document root directory.
5. If you have CGI files or other files that must be in different directories or in separate partitions, repeat these steps, as needed.
3. Installing and Configuring the Apache HTTP Server
The Apache HTTP Server must be installed and configured on all nodes in the assigned failover domain, if used, or in the cluster. The basic server configuration must be the same on all nodes on which it runs for the service to fail over correctly. The following example shows a basic Apache HTTP Server installation that includes no third-party modules or performance tuning.
On all nodes in the cluster (or nodes in the failover domain, if used), install the httpd RPM package. For example:
# rpm -Uvh httpd-<version>.<arch>.rpm
To configure the Apache HTTP Server as a cluster service, perform the following tasks:
1. Edit the /etc/httpd/conf/httpd.conf configuration file and customize the file according to your configuration. For example:
Specify the directory that contains the HTML files. Also specify this mount point when adding the service to the cluster configuration. It is only required to change this field if the mount point for the web site's content differs from the default setting of /var/www/html/. For example:
DocumentRoot "/mnt/httpdservice/html"
Specify a unique IP address to which the service will listen for requests. For example:
Listen 192.168.1.100:80
This IP address then must be configured as a cluster resource for the service using the Cluster Configuration Tool.
If the script directory resides in a non-standard location, specify the directory that contains the CGI programs. For example:
ScriptAlias /cgi-bin/ "/mnt/httpdservice/cgi-bin/"
Specify the path that was used in the previous step, and set the access permissions to default to that directory. For example:
<Directory "/mnt/httpdservice/cgi-bin">
AllowOverride None
Options None
Order allow,deny
Allow from all
</Directory>
Additional changes may need to be made to tune the Apache HTTP Server or add module functionality. For information on setting up other options, refer to the Red Hat Enterprise Linux System Administration Guide and the Red Hat Enterprise Linux Reference Guide.
2. The standard Apache HTTP Server start script, /etc/rc.d/init.d/httpd, is also used within the cluster framework to start and stop the Apache HTTP Server on the active cluster node. Accordingly, when configuring the service, specify this script by adding it as a Script resource in the Cluster Configuration Tool.
3. Copy the configuration file over to the other nodes of the cluster (or nodes of the failover domain, if configured).
Before the service is added to the cluster configuration, ensure that the Apache HTTP Server directories are not mounted. Then, on one node, invoke the Cluster Configuration Tool to add the service, as follows. This example assumes a failover domain named httpd-domain was created for this service.
1. Add the init script for the Apache HTTP Server service.
Select the Resources tab and click Create a Resource. The Resources Configuration properties dialog box is displayed.
Select Script from the drop-down menu.
Enter a Name to be associated with the Apache HTTP Server service.
Specify the path to the Apache HTTP Server init script (for example, /etc/rc.d/init.d/httpd) in the File (with path) field.
Click OK.
2. Add a device for the Apache HTTP Server content files and/or custom scripts.
Click Create a Resource.
In the Resource Configuration dialog, select File System from the drop-down menu.
Enter the Name for the resource (for example, httpd-content).
Choose ext3 from the File System Type drop-down menu.
Enter the mount point in the Mount Point field (for example, /var/www/html/).
Enter the device special file name in the Device field (for example, /dev/sda3).
3. Add an IP address for the Apache HTTP Server service.
Click Create a Resource.
Choose IP Address from the drop-down menu.
Enter the IP Address to be associated with the Apache HTTP Server service.
Make sure that the Monitor Link checkbox is left checked.
Click OK.
4. Click the Services property.
5. Create the Apache HTTP Server service.
Click Create a Service. Type a Name for the service in the Add a Service dialog.
In the Service Management dialog, select a Failover Domain from the drop-down menu or leave it as None.
Click the Add a Shared Resource to this service button. From the available
list, choose each resource that you created in the previous steps. Repeat this step until all resources have been added.
Click OK.
6. Choose File => Save to save your changes.
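For reference, the configuration saved by these steps would produce entries in /etc/cluster/cluster.conf along the following lines. This is a sketch only; the resource names are hypothetical, while the script path, device, IP address, and domain name reflect the example values used in this appendix:

<rm>
    <failoverdomains>
        <failoverdomain name="httpd-domain"/>
    </failoverdomains>
    <resources>
        <script name="httpd-init" file="/etc/rc.d/init.d/httpd"/>
        <fs name="httpd-content" fstype="ext3" device="/dev/sda3"
            mountpoint="/var/www/html/"/>
        <ip address="192.168.1.100" monitor_link="1"/>
    </resources>
    <service name="httpd-service" domain="httpd-domain" autostart="1">
        <script ref="httpd-init"/>
        <fs ref="httpd-content"/>
        <ip ref="192.168.1.100"/>
    </service>
</rm>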
Appendix B. Fence Device Parameters
This appendix provides tables with parameter descriptions of fence devices.
Note
Certain fence devices have an optional Password Script parameter. The Password Script parameter allows specifying that a fence-device password is supplied from a script rather than from the Password parameter. Using the Password Script parameter supersedes the Password parameter, allowing passwords to not be visible in the cluster configuration file (/etc/cluster/cluster.conf).
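As an illustration of how these parameters end up in the configuration file, a fence device entry in /etc/cluster/cluster.conf might look like the following sketch for an APC power switch; the device name, address, login, and script path are hypothetical:

<fencedevices>
    <!-- passwd_script, if set, is used instead of a literal passwd attribute -->
    <fencedevice agent="fence_apc" name="apc-switch" ipaddr="10.0.0.50"
                 login="apc" passwd_script="/root/apc-passwd.sh"/>
</fencedevices>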
Field - Description
Name - A name for the APC device connected to the cluster.
IP Address - The IP address assigned to the device.
Login - The login name used to access the device.
Password - The password used to authenticate the connection to the device.
Password Script (optional) - The script that supplies a password for access to the fence device. Using this supersedes the Password parameter.
Table B.1. APC Power Switch
Field - Description
Name - A name for the Brocade device connected to the cluster.
IP Address - The IP address assigned to the device.
Login - The login name used to access the device.
Password - The password used to authenticate the connection to the device.
Password Script (optional) - The script that supplies a password for access to the fence device. Using this supersedes the Password parameter.
Table B.2. Brocade Fabric Switch
Field - Description
IP Address - The IP address assigned to the PAP console.
Login - The login name used to access the PAP console.
Password - The password used to authenticate the connection to the PAP console.
Password Script (optional) - The script that supplies a password for access to the fence device. Using this supersedes the Password parameter.
Domain - Domain of the Bull PAP system to power cycle.
Table B.3. Bull PAP (Platform Administration Processor)
Field - Description
Name - The name assigned to the DRAC.
IP Address - The IP address assigned to the DRAC.
Login - The login name used to access the DRAC.
Password - The password used to authenticate the connection to the DRAC.
Password Script (optional) - The script that supplies a password for access to the fence device. Using this supersedes the Password parameter.
Table B.4. Dell DRAC
Field - Description
Name - A name for the BladeFrame device connected to the cluster.
CServer - The hostname (and optionally the username in the form of username@hostname) assigned to the device. Refer to the fence_egenera(8) man page.
ESH Path (optional) - The path to the esh command on the cserver (default is /opt/pan-mgr/bin/esh).
Table B.5. Egenera SAN Controller
Field - Description
Name - A name for the GNBD device used to fence the cluster. Note that the GFS server must be accessed via GNBD for cluster node fencing support.
Server - The hostname of each GNBD to disable. For multiple hostnames, separate each hostname with a space.
Table B.6. GNBD (Global Network Block Device)
Field - Description
Name - A name for the server with HP iLO support.
Hostname - The hostname assigned to the device.
Login - The login name used to access the device.
Password - The password used to authenticate the connection to the device.
Password Script (optional) - The script that supplies a password for access to the fence device. Using this supersedes the Password parameter.
Table B.7. HP iLO (Integrated Lights Out)
Field - Description
Name - A name for the IBM BladeCenter device connected to the cluster.
IP Address - The IP address assigned to the device.
Login - The login name used to access the device.
Password - The password used to authenticate the connection to the device.
Password Script (optional) - The script that supplies a password for access to the fence device. Using this supersedes the Password parameter.
Table B.8. IBM BladeCenter
Field - Description
Name - A name for the RSA device connected to the cluster.
IP Address - The IP address assigned to the device.
Login - The login name used to access the device.
Password - The password used to authenticate the connection to the device.
Password Script (optional) - The script that supplies a password for access to the fence device. Using this supersedes the Password parameter.
Table B.9. IBM Remote Supervisor Adapter II (RSA II)
Field - Description
IP Address - The IP address assigned to the IPMI port.
Login - The login name of a user capable of issuing power on/off commands to the given IPMI port.
Password - The password used to authenticate the connection to the IPMI port.
Password Script (optional) - The script that supplies a password for access to the fence device. Using this supersedes the Password parameter.
Authentication Type - none, password, md2, or md5.
Use Lanplus - True or 1. If blank, then the value is False.
Table B.10. IPMI (Intelligent Platform Management Interface) LAN
Field - Description
Name - A name to assign the Manual fencing agent. Refer to fence_manual(8) for more information.
Table B.11. Manual Fencing
Warning
Manual fencing is not supported for production environments.
Field - Description
Name - A name for the McData device connected to the cluster.
IP Address - The IP address assigned to the device.
Login - The login name used to access the device.
Password - The password used to authenticate the connection to the device.
Password Script (optional) - The script that supplies a password for access to the fence device. Using this supersedes the Password parameter.
Table B.12. McData SAN Switch
Field - Description
Name - A name for the WTI RPS-10 power switch connected to the cluster.
Device - The device the switch is connected to on the controlling host (for example, /dev/ttyS2).
Port - The switch outlet number.
Table B.13. RPS-10 Power Switch (two-node clusters only)
Field - Description
Name - A name for the SANBox2 device connected to the cluster.
IP Address - The IP address assigned to the device.
Login - The login name used to access the device.
Password - The password used to authenticate the connection to the device.
Password Script (optional) - The script that supplies a password for access to the fence device. Using this supersedes the Password parameter.
Table B.14. QLogic SANBox2 Switch
Field - Description
Name - Name of the node to be fenced. Refer to fence_scsi(8) for more information.
Table B.15. SCSI Fencing
Field - Description
Name - Name of the guest to be fenced.
Table B.16. Virtual Machine Fencing
Field - Description
Name - A name for the Vixel switch connected to the cluster.
IP Address - The IP address assigned to the device.
Password - The password used to authenticate the connection to the device.
Password Script (optional) - The script that supplies a password for access to the fence device. Using this supersedes the Password parameter.
Table B.17. Vixel SAN Switch
Field - Description
Name - A name for the WTI power switch connected to the cluster.
IP Address - The IP address assigned to the device.
Password - The password used to authenticate the connection to the device.
Password Script (optional) - The script that supplies a password for access to the fence device. Using this supersedes the Password parameter.
Table B.18. WTI Power Switch
Index
A
ACPI
    configuring, 20
Apache HTTP Server
    httpd.conf, 113
    setting up service, 111
C
cluster
    administration, 15, 57, 101
    diagnosing and correcting problems, 61, 109
    disabling the cluster software, 108
    displaying status, 13, 104
    managing node, 58
    starting, 99
    starting, stopping, restarting, and deleting, 57
cluster administration, 15, 57, 101
    backing up the cluster database, 106
    compatible hardware, 15
    configuring ACPI, 20
    configuring iptables, 15
    configuring max_luns, 26
    Conga considerations, 29
    considerations for using qdisk, 27
    considerations for using quorum disk, 27
    diagnosing and correcting problems in a cluster, 61, 109
    disabling the cluster software, 108
    displaying cluster and service status, 13, 104
    enabling IP ports, 15
    general considerations, 29
    managing cluster node, 58
    managing high-availability services, 59
    modifying the cluster configuration, 105
    restoring the cluster database, 106
    SELinux, 29
    starting and stopping the cluster software, 101
    starting, stopping, restarting, and deleting a cluster, 57
cluster configuration, 31
    modifying, 105
Cluster Configuration Tool
    accessing, 12
cluster database
    backing up, 106
    restoring, 106
cluster service
    displaying status, 13, 104
cluster service managers
    configuration, 53, 95, 99
cluster services, 53, 95
    (see also adding to the cluster configuration)
    Apache HTTP Server, setting up, 111
        httpd.conf, 113
cluster software
    configuration, 31
    disabling, 108
    installation and configuration, 63
    starting and stopping, 101
cluster software installation and configuration, 63
cluster storage
    configuration, 55
command line tools table, 13
configuration file
    propagation of, 99
configuring cluster storage, 55
Conga
    accessing, 3
    considerations for cluster administration, 29
    overview, 5
Conga overview, 5
F
feedback, x
G
general
    considerations for cluster administration, 29
H
hardware
    compatible, 15
HTTP services
    Apache HTTP Server
        httpd.conf, 113
        setting up, 111
I
integrated fence devices
    configuring ACPI, 20
introduction, x
    other Red Hat Enterprise Linux documents, x
IP ports
    enabling, 15
iptables
    configuring, 15
M
max_luns
    configuring, 26
P
parameters, fence device, 117
power controller connection, configuring, 117
power switch, 117
    (see also power controller)
Q
qdisk
    considerations for using, 27
quorum disk
    considerations for using, 27
S
SELinux
    configuring, 29
starting the cluster software, 99
System V init, 101
T
table
    command line tools, 13
tables
    power controller connection, configuring, 117
troubleshooting
    diagnosing and correcting problems in a cluster, 61, 109