
Cluster - Quick Reference

Sun Cluster 3.2 Features & Limitations


Support for 2-16 nodes.
Global device capability--devices can be shared across the cluster.
Global file system --allows a file system to be accessed simultaneously by all cluster nodes.
Tight implementation with Solaris--The cluster framework services have been implemented in the kernel.
Application agent support.
Tight integration with zones.
Each node must run the same revision and update of the Solaris OS.
Two node clusters must have at least one quorum device.
Each cluster needs at least two separate private networks. (Supported hardware, such as ce and bge may use tagged VLANs to run private
and public networks on the same physical connection.)
Each node's boot disk should include a 500M partition mounted at /globaldevices
Attached storage must be multiply connected to the nodes.
ZFS is a supported file system and volume manager. Veritas Volume Manager (VxVM) and Solaris Volume Manager (SVM) are also
supported volume managers.
Veritas Dynamic Multipathing (vxdmp) is not supported as the multipathing solution. Since current VxVM versions require vxdmp to be enabled, VxVM must be used in conjunction with MPxIO or a similar product such as EMC PowerPath, which performs the actual multipathing (see the sketch after this list).
SMF services can be integrated into the cluster, and all framework daemons are defined as SMF services
PCI and SBus based systems cannot be mixed in the same cluster.
Boot devices cannot be on a disk that is shared with other cluster nodes. Doing this may lead to a locked-up cluster due to data fencing.
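As a hedged follow-up to the multipathing note above: on Solaris 10, MPxIO is typically enabled with the stmsboot utility (a reboot is required, and the exact options should be confirmed against the platform and HBA documentation for the site):
# stmsboot -e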

Cluster Configuration
The cluster's configuration information is stored in global files known as the "cluster configuration repository" (CCR). The cluster framework files
in /etc/cluster/ccr should not be edited manually; they should be managed via the administrative commands.
The cluster show command displays the cluster configuration in a nicely-formatted report. The CCR contains:
Names of the cluster and the nodes.
The configuration of the cluster transport.
Device group configuration.
Nodes that can master each device group.
NAS device information (if relevant).
Data service parameter values and callback method paths.
Disk ID (DID) configuration.
Cluster status.
Some commands to directly maintain the CCR are:
ccradm: Allows (among other things) a checksum re-configuration of files in /etc/cluster/ccr after manual edits. (Do NOT edit these files
manually unless there is no other option. Even then, back up the original files.) ccradm -i /etc/cluster/ccr/filename -o
scgdefs: Brings new devices under cluster control after they have been discovered by devfsadm.
The scinstall and clsetup commands may also update the CCR.
We have observed that the installation process may disrupt a previously installed NTP configuration (even though the installation notes promise
that this will not happen). It may be worth using ntpq to verify that NTP is still working properly after a cluster installation.
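For example, a quick post-installation sanity check (the peer list shown will be site-specific):
# ntpq -p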
Resource Groups
Resource groups are collections of resources, including data services. Examples of resources include disk sets, virtual IP addresses, or server processes like httpd.


Resource groups may either be failover or scalable resource groups. Failover resource groups allow a group of services to be started together on another node if the active node fails. Scalable resource groups run on several nodes at once.
The rgmd is the Resource Group Management Daemon. It is responsible for monitoring, stopping, and starting the resources within the different
resource groups.
Some common resource types are:
SUNW.LogicalHostname: Logical IP address associated with a failover service.
SUNW.SharedAddress: Logical IP address shared between nodes running a scalable resource group.
SUNW.HAStoragePlus: Manages global raw devices, global file systems, non-ZFS failover file systems, and failover ZFS zpools.
Resource groups also handle resource and resource group dependencies. Sun Cluster allows services to start or stop in a particular order.
Dependencies are a particular type of resource property. The r_properties man page contains a list of resource properties and their meanings.
The rg_properties man page has similar information for resource groups. In particular, the Resource_dependencies property specifies something
on which the resource is dependent.
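As a hedged illustration, assuming two hypothetical resources named app-rs and app-hasp-rs, a dependency could be set with something like:
# clrs set -p Resource_dependencies=app-hasp-rs app-rs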
Some resource group cluster commands are:
# clrt register resource-type: Register a resource type.
# clrt register -n node1name,node2name resource-type: Register a resource type to specific nodes.
# clrt unregister resource-type: Unregister a resource type.
# clrt list -v: List all resource types and their associated node lists.
# clrt show resource-type: Display all information for a resource type.
# clrg create -n node1name,node2name rgname: Create a resource group.
# clrg delete rgname: Delete a resource group.
# clrg set -p property-name rgname: Set a property.
# clrg show -v rgname: Show resource group information.
# clrs create -t HAStoragePlus -g rgname -p AffinityOn=true -p FilesystemMountPoints=/mountpoint resource-name: Create an HAStoragePlus resource in the resource group.
# clrg online -M rgname: Bring the resource group online and place it under management.
# clrg switch -M -n nodename rgname: Switch the resource group to the specified node.
# clrg offline rgname: Offline the resource group, but leave it in a managed state.
# clrg restart rgname: Restart the resource group.
# clrs disable resource-name: Disable a resource and its fault monitor.
# clrs enable resource-name: Re-enable a resource and its fault monitor.
# clrs clear -n nodename -f STOP_FAILED resource-name: Clear the STOP_FAILED error flag on the resource.
# clrs unmonitor resource-name: Disable the fault monitor, but leave resource running.
# clrs monitor resource-name: Re-enable the fault monitor for a resource that is currently enabled.
# clrg suspend rgname: Preserves online status of group, but does not continue monitoring.
# clrg resume rgname: Resumes monitoring of a suspended group
# clrg status: List status of resource groups.
# clrs status -g rgname: Show the status of resources in the specified resource group.
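Putting several of the commands above together, a hedged end-to-end sketch of building a simple failover group (the names app-rg, app-lh, app-hasp-rs, /app, and the node names are hypothetical, and the logical hostname must be resolvable):
# clrt register SUNW.HAStoragePlus
# clrg create -n node1,node2 app-rg
# clrslh create -g app-rg app-lh
# clrs create -t SUNW.HAStoragePlus -g app-rg -p AffinityOn=true -p FilesystemMountPoints=/app app-hasp-rs
# clrg online -M app-rg
# clrg status app-rg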

Data Services
A data service agent is a set of components that allow a data service to be monitored and fail over within the cluster. The agent includes
methods for starting, stopping, monitoring, or failing over the data service. It also includes a registration information file that allows this information
to be stored in the CCR, where it is encapsulated as a resource type.
The fault monitors for a data service place its daemons under the control of the process monitoring facility (rpc.pmfd) and monitor the health of the
service itself using client commands.
Public Network
The public network uses pnmd (Public Network Management Daemon) and the IPMP in.mpathd daemon to monitor and control the public
network addresses.
IPMP should be used to provide failovers for the public network paths. The health of the IPMP elements can be monitored with scstat -i
The clrslh and clrssa commands are used to configure logical and shared hostnames, respectively.
# clrslh create -g rgname logical-hostname
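Similarly, a shared address for a scalable group might be created with something like the following (rgname and shared-hostname are placeholders, and the hostname must be resolvable):
# clrssa create -g rgname shared-hostname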
Private Network


The "private," or "cluster transport" network is used to provide a heartbeat between the nodes so that they can determine which nodes are
available. The cluster transport network is also used for traffic related to global devices.
While a 2-node cluster may use crossover cables to construct a private network, switches should be used for anything more than two nodes.
(Ideally, separate switching equipment should be used for each path so that there is no single point of failure.)
The default base IP address is 172.16.0.0, and private networks are assigned subnets based on the results of the cluster setup. Available
network interfaces can be identified by using a combination of dladm show-dev and ifconfig.
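For example (interface names will differ from system to system):
# dladm show-dev
# ifconfig -a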
Private networks should be installed and configured using the scinstall command during cluster configuration. Make sure that the interfaces in
question are connected, but down and unplumbed before configuration. The clsetup command also has menu options to guide you through the
private network setup process.
Alternatively, something like the following command string can be used to establish a private network:
# clintr add nodename1:ifname1
# clintr add nodename2:ifname2
# clintr add switchname
# clintr add nodename1:ifname1,switchname
# clintr add nodename2:ifname2,switchname
# clintr status
The health of the heartbeat networks can be checked with the scstat -W command. The physical paths may be checked with clintr status or
cluster status -t intr.
Quorum
Sun Cluster uses a quorum voting system to prevent split-brain and cluster amnesia. The Sun Cluster documentation refers to "failure fencing" as
the mechanism to prevent split-brain (where two nodes run the same service at the same time, leading to potential data corruption).
"Amnesia" occurs when a change is made to the cluster while a node is down, then that node attempts to bring up the cluster. This can result in
the changes being forgotten, hence the use of the word "amnesia."
One result of this is that the last node to leave a cluster when it is shut down must be the first node to re-enter the cluster. Later in this section,
we will discuss ways of circumventing this protection.
Quorum voting is defined by allowing each device one vote. A quorum device may be a cluster node, a specified external server running quorum
software, or a disk or NAS device. A majority of all defined quorum votes is required in order to form a cluster. At least half of the quorum votes
must be present in order for cluster services to remain in operation. (If a node cannot contact at least half of the quorum votes, it will panic.
During the reboot, if a majority cannot be contacted, the boot process will be frozen. Nodes that are removed from the cluster due to a quorum
problem also lose access to any shared file systems. This is called "data fencing" in the Sun Cluster documentation.)
Quorum devices must be available to at least two nodes in the cluster.
Disk quorum devices may also contain user data. (Note that if a ZFS disk is used as a quorum device, it should be brought into the zpool
before being specified as a quorum device.)
Sun recommends configuring n-1 quorum devices (the number of nodes minus 1). Two node clusters must contain at least one quorum
device.
Disk quorum devices must be specified using the DID names.
Quorum disk devices should be at least as available as the storage underlying the cluster resource groups.
Quorum status and configuration may be investigated using:
# scstat -q
# clq status
These commands report on the configured quorum votes, whether they are present, and how many are required for a majority.
Quorum devices can be manipulated through the following commands:
# clq add did-device-name: (Adds the specified DID device as a quorum device.)
# clq remove did-device-name: (Only removes the device from the quorum configuration. No data on the device is affected.)
# clq enable did-device-name: (Returns a previously disabled quorum device to the list of available quorum votes.)
# clq disable did-device-name: (Removes the quorum device from the total list of available quorum votes. This might be
valuable if the device is down for maintenance.)
# clq reset: (Resets the configuration to the default.)


By default, doubly-connected disk quorum devices use SCSI-2 locking. Devices connected to more than two nodes use SCSI-3 locking. SCSI-3
offers persistent reservations, but SCSI-2 requires the use of emulation software. The emulation software uses a 64-bit reservation key written to
a private area on the disk.
In either case, the cluster node that wins a race to the quorum device attempts to remove the keys of any node that it is unable to contact,
which cuts that node off from the quorum device. As noted before, any group of nodes that cannot communicate with at least half of the quorum
devices will panic, which prevents a cluster partition (split-brain).
In order to add nodes to a 2-node cluster, it may be necessary to change the default fencing with scdidadm -G prefer3 or cluster set -p
global_fencing=prefer3, create a SCSI-3 quorum device with clq add, then remove the SCSI-2 quorum device with clq remove.
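A hedged sketch of that sequence, using hypothetical DID devices d5 (new SCSI-3 quorum device) and d3 (old SCSI-2 quorum device):
# cluster set -p global_fencing=prefer3
# clq add d5
# clq remove d3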
NetApp filers and systems running the scqsd daemon may also be selected as quorum devices. NetApp filers use SCSI-3 locking over the iSCSI
protocol to perform their quorum functions.
The claccess deny-all command may be used to deny all other nodes access to the cluster. claccess allow nodename re-enables access for a
node.
Purging Quorum Keys
CAUTION: Purging the keys from a quorum device may result in amnesia. It should only be done after careful diagnostics have been done to
verify why the cluster is not coming up. This should never be done as long as the cluster is able to come up. It may need to be done if the last
node to leave the cluster is unable to boot, leaving everyone else fenced out. In that case, boot one of the other nodes to single-user mode,
identify the quorum device, and:
For SCSI-2 disk reservations, the relevant command is pgre, which is located in /usr/cluster/lib/sc:
# pgre -c pgre_inkeys -d /dev/did/rdsk/d#s2 (List the keys in the quorum device.)
# pgre -c pgre_scrub -d /dev/did/rdsk/d#s2 (Remove the keys from the quorum device.)
Similarly, for SCSI-3 disk reservations, the relevant command is scsi:
# scsi -c inkeys -d /dev/did/rdsk/d#s2 (List the keys in the quorum device.)
# scsi -c scrub -d /dev/did/rdsk/d#s2 (Remove the keys from the quorum device.)

Global Storage
Sun Cluster provides a unique global device name for every disk, CD, and tape drive in the cluster. The format of these global device names
is /dev/did/device-type (e.g., /dev/did/dsk/d2s3). (Note that the DIDs are a global naming system, which is separate from the global device or
global file system functionality.)
DIDs are components of SVM volumes, though VxVM does not recognize DID device names as components of VxVM volumes.
DID disk devices, CD-ROM drives, tape drives, SVM volumes, and VxVM volumes may be used as global devices. A global device is physically
accessed by just one node at a time, but all other nodes may access the device by communicating across the global transport network.
The file systems in /global/.devices store the device files for global devices on each node. These are mounted on mount points of the
form /global/.devices/node@nodeid, where nodeid is the identification number assigned to the node. These are visible on all nodes. Symbolic
links may be set up to the contents of these file systems, if they are desired. Sun Cluster sets up some such links in the /dev/global directory.
Global file systems may be ufs, VxFS, or hsfs. To mount a file system as a global file system, add a "global" mount option to the file system's
vfstab entry and remount. Alternatively, run a mount -o global... command.
(Note that all nodes in the cluster should have the same vfstab entry for all cluster file systems. This is true for both global and failover file
systems, though ZFS file systems do not use the vfstab at all.)
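For instance, a hedged sketch of a vfstab entry for a global UFS file system (the DID-backed device d5 and the mount point /global/appdata are hypothetical):
/dev/global/dsk/d5s0 /dev/global/rdsk/d5s0 /global/appdata ufs 2 yes global,logging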
In the Sun Cluster documentation, global file systems are also known as "cluster file systems" or "proxy file systems."
Note that global file systems are different from failover file systems. The former are accessible from all nodes; the latter are only accessible from
the active node.
Maintaining Devices
New devices need to be read into the cluster configuration as well as the OS. As usual, we should run something like devfsadm or drvconfig;
disks to create the /device and /dev links across the cluster. Then we use the scgdevs or scdidadm command to add more disk devices to the cluster configuration.
Some useful options for scdidadm are:
# scdidadm -l: Show local DIDs
# scdidadm -L: Show all cluster DIDs
# scdidadm -r: Rebuild DIDs
We should also clean up unused links from time to time with devfsadm -C and scdidadm -C
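Putting the device steps together, a hedged sketch for bringing a newly presented LUN under cluster control (run on each attached node; no specific device names are assumed):
# devfsadm
# scgdevs
# cldev list -v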
The status of device groups can be checked with scstat -D. Devices may be listed with cldev list -v. They can be switched to a different node via
a cldg switch -n target-node dgname command.
Monitoring for devices can be enabled and disabled by using commands like:
# cldev monitor all
# cldev unmonitor d#
# cldev unmonitor -n nodename d#
# cldev status -s Unmonitored
Parameters may be set on device groups using the cldg set command, for example:
# cldg set -p failback=false dgname
A device group can be taken offline or placed online with:
# cldg offline dgname
# cldg online dgname
The overall health of the cluster may be monitored using the cluster status or scstat -v commands. Other useful options include:
scstat -g: Resource group status
scstat -D: Device group status
scstat -W: Heartbeat status
scstat -i: IPMP status
scstat -n: Node status
Failover applications (also known as "cluster-unaware" applications in the Sun Cluster documentation) are controlled by rgmd (the resource
group manager daemon). Each application has a data service agent, which is the way that the cluster controls application startups, shutdowns,
and monitoring. Each application is typically paired with an IP address, which will follow the application to the new node when a failover occurs.
"Scalable" applications are able to run on several nodes concurrently. The clustering software provides load balancing and makes a single service
IP address available for outside entities to query the application.
"Cluster aware" applications take this one step further, and have cluster awareness programmed into the application. Oracle RAC is a good
example of such an application.
All the nodes in the cluster may be shut down with cluster shutdown -y -g0. To boot a node outside of the cluster (for troubleshooting or
recovery operations), run boot -x.
clsetup is a menu-based utility that can be used to perform a broad variety of configuration tasks, including configuration of resources and
resource groups.

File Location

man pages: /usr/cluster/man
log files: /var/cluster/logs, /var/adm/messages
Configuration files (CCR, eventlog, etc.): /etc/cluster/
Cluster and other commands: /usr/cluster/lib/sc
Cluster infrastructure file: /etc/cluster/ccr/infrastructure (Version 3.1), /etc/cluster/ccr/global/infrastructure (Version 3.2)

Basic Commands

Sun Cluster 3.2        Sun Cluster 3.1
# cluster status       # scstat
# clrg status          # scstat -g
# clrs status          # scstat -D
# cldev status         # scstat -n
                       # scdidadm -l
                       # scdidadm -L
                       # scrgadm -pvv

Resources & Resource Groups Related Commands

                              Version 3.2                              Version 3.1
Disabling Resource Group      clrg offline [-n <node>] <res_group>     scswitch -F -g <res_group>
Enabling Resource Group       clrg online [-n <node>] <res_group>      scswitch -Z -g <res_group>
Disabling Resources           clrs disable <resource>                  scswitch -n -j res-ip
Enabling Resources            clrs enable <resource>                   scswitch -e -j res-ip
Clearing a failed resource    clrs clear -f STOP_FAILED <resource>     scswitch -c -h <host>,<host> -j <resource> -f STOP_FAILED

Command Reference

Cluster Information            Sun Cluster 3.1            Sun Cluster 3.2
Cluster                        scstat -pv                 cluster list -v, cluster show, cluster status
Nodes                          scstat -n                  clnode list -v, clnode show, clnode status
Devices                        scstat -D                  cldevice list, cldevice show, cldevice status
Quorum                         scstat -q                  clquorum list -v, clquorum show, clquorum status
Transport info                 scstat -W                  clinterconnect show, clinterconnect status
Resources                      scstat -g                  clresource list -v, clresource show, clresource status
Resource Groups                scstat -g, scrgadm -pv     clresourcegroup list -v, clresourcegroup show, clresourcegroup status
Resource Types                                            clresourcetype list -v, clresourcetype list-props -v, clresourcetype show
IP Networking Multipathing     scstat -i                  clnode status -m
Installation info (prints      scinstall -pv              clnode show-rev -v
packages and version)


Shutting Down and Booting a Cluster


The Sun Cluster cluster shutdown command stops cluster services in an orderly fashion and cleanly shuts down the entire cluster.
Note: Use the cluster shutdown command instead of the shutdown or halt commands to ensure proper shutdown of the entire cluster. The
Solaris shutdown command is used with the clnode evacuate command to shut down individual nodes.
The cluster shutdown command stops all nodes in a cluster by performing the following actions:
Takes offline all running resource groups.
Unmounts all cluster file systems.
Shuts down active device services.
Runs init 0 and brings all nodes to the ok prompt on a SPARC based system or to the GRUB menu on an x86 based system.

How to Shut Down a Cluster


From a single node in the cluster, type the following command.
# cluster shutdown -g0 -y
Verify that all nodes are showing the ok prompt on a SPARC-based system or a GRUB menu on an x86 based system.
Do not power off any nodes until all cluster nodes are at the ok prompt on a SPARC-based system or in a Boot Subsystem on an x86 based
system.
Example 1 : SPARC: Shutting Down a Cluster
The following example shows the console output when normal cluster operation is stopped and all nodes are shut down so that the ok prompt is
shown. The -g 0 option sets the shutdown grace period to zero, and the -y option provides an automatic yes response to the confirmation
question. Shutdown messages also appear on the consoles of the other nodes in the cluster.
# cluster shutdown -g0 -y
Wed Mar 10 13:47:32 phys-schost-1 cl_runtime:
WARNING: CMM monitoring disabled.
phys-schost-1#
INIT: New run level: 0
The system is coming down. Please wait.
System services are now being stopped.
/etc/rc0.d/K05initrgm: Calling scswitch -S (evacuate)
The system is down.
syncing file systems... done
Program terminated
ok
Example 2 : x86: Shutting Down a Cluster
The following example shows the console output when normal cluster operation is stopped and all nodes are shut down. In this example, the ok
prompt is not displayed on all of the nodes. The -g 0 option sets the shutdown grace period to zero, and the -y option provides an automatic yes
response to the confirmation question. Shutdown messages also appear on the consoles of the other nodes in the cluster.
# cluster shutdown -g0 -y
May 2 10:32:57 phys-schost-1 cl_runtime:
WARNING: CMM: Monitoring disabled.
root@phys-schost-1#
INIT: New run level: 0
The system is coming down. Please wait.
System services are now being stopped.
/etc/rc0.d/K05initrgm: Calling scswitch -S (evacuate)
failfasts already disabled on node 1
Print services already stopped.
May 2 10:33:13 phys-schost-1 syslogd: going down on signal 15
The system is down.
syncing file systems... done
Type any key to continue


How to Boot a Cluster


Boot each node into cluster mode.
On SPARC based systems, do the following:
ok boot
On x86 based systems, do the following:
When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:

GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
+-------------------------------------------------------------------------+
| Solaris 10 /sol_10_x86
| Solaris failsafe
+-------------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the commands before booting, or 'c' for a command-line.
Note: Cluster nodes must have a working connection to the cluster interconnect to attain cluster membership.
Verify that the nodes booted without error and are online. The cluster status command reports the nodes' status.
# cluster status -t node
Example 3 SPARC: Booting a Cluster
The following example shows the console output when node phys-schost-1 is booted into the cluster. Similar messages appear on the consoles
of the other nodes in the cluster.
ok boot
Rebooting with command: boot
...
Hostname: phys-schost-1
Booting as part of a cluster
NOTICE: Node phys-schost-1 with votecount = 1 added.
NOTICE: Node phys-schost-2 with votecount = 1 added.
NOTICE: Node phys-schost-3 with votecount = 1 added.
...
NOTICE: Node phys-schost-1: attempting to join cluster
...
NOTICE: Node phys-schost-2 (incarnation # 937690106) has become reachable.
NOTICE: Node phys-schost-3 (incarnation # 937690290) has become reachable.
NOTICE: cluster has reached quorum.
NOTICE: node phys-schost-1 is up; new incarnation number = 937846227.
NOTICE: node phys-schost-2 is up; new incarnation number = 937690106.
NOTICE: node phys-schost-3 is up; new incarnation number = 937690290.
NOTICE: Cluster members: phys-schost-1 phys-schost-2 phys-schost-3.
...

How to Shut Down a Single Node in Cluster


Before shutting down, switch all resource groups, resources, and device groups from the node being shut down to other cluster members.
On the node to be shut down, type the following command. The clnode evacuate command switches over all resource groups and device groups
including all non-global zones from the specified node to the next preferred node.
# clnode evacuate node-name
node-name : Specifies the node from which you are switching resource groups and device groups.
Shut down the cluster node.


On the node to be shut down, type the following command.


# shutdown -g0 -y -i0
Verify that the cluster node is showing the ok prompt on a SPARC based system or the Press any key to continue message on the GRUB menu
on an x86 based system.
If necessary, power off the node.
Example 4 : SPARC: Shutting Down a Cluster Node
The following example shows the console output when node phys-schost-1 is shut down. The -g0 option sets the grace period to zero, and the -y option provides an automatic yes response to the confirmation question. Shutdown messages for this node appear on the consoles of other
nodes in the cluster.
# clnode evacuate phys-schost-1
# shutdown -g0 -y
Wed Mar 10 13:47:32 phys-schost-1 cl_runtime:
WARNING: CMM monitoring disabled.
phys-schost-1#
INIT: New run level: 0
The system is coming down. Please wait.
Notice: rgmd is being stopped.
Notice: rpc.pmfd is being stopped.
Notice: rpc.fed is being stopped.
umount: /global/.devices/node@1 busy
umount: /global/phys-schost-1 busy
The system is down.
syncing file systems... done
Program terminated
ok

How to Boot a Cluster Node in Noncluster Mode


Shut down the node by using the clnode evacuate and shutdown commands.
The clnode evacuate command switches over all device groups from the specified node to the next preferred node. The command also switches
all resource groups from global or non-global zones on the specified node to the next-preferred global or non-global zones on other nodes.
# clnode evacuate node-name
# shutdown -g0 -y
Verify that the node is showing the ok prompt on a SPARC based system or the Press any key to continue message on a GRUB menu on an x86
based system.
Boot the node in noncluster mode.
On SPARC based systems, perform the following command:
phys-schost# boot -xs
On x86 based systems, perform the following commands:
phys-schost# shutdown -g0 -y -i0
Press any key to continue
In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.
The GRUB menu appears similar to the following:
GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
+-------------------------------------------------------------------------+
| Solaris 10 /sol_10_x86
| Solaris failsafe
+-------------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the commands before booting, or 'c' for a command-line.
In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.
The GRUB boot parameters screen appears similar to the following:
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)
| kernel /platform/i86pc/multiboot
| module /platform/i86pc/boot_archive
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the selected line, or escape to go back to the main menu.
Add -x to the command to specify that the system boot into noncluster mode.
[ Minimal BASH-like line editing is supported. For the first word, TAB lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename. ESC at any time exits. ]
grub edit> kernel /platform/i86pc/multiboot -x
Press the Enter key to accept the change and return to the boot parameters screen. The screen displays the edited command.
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)
| kernel /platform/i86pc/multiboot -x
| module /platform/i86pc/boot_archive
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the selected line, or escape to go back to the main menu.
Type b to boot the node into noncluster mode.
Note: This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will
boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter
command.

Example 5 : SPARC: Booting a Cluster Node in Noncluster Mode


The following example shows the console output when node phys-schost-1 is shut down and restarted in noncluster mode. The -g0 option sets
the grace period to zero, the -y option provides an automatic yes response to the confirmation question, and -i0 invokes run level 0 (zero).
Shutdown messages for this node appear on the consoles of other nodes in the cluster.
# clnode evacuate phys-schost-1
# cluster shutdown -g0 -y
Shutdown started.

Wed Mar 10 13:47:32 phys-schost-1 cl_runtime:

WARNING: CMM monitoring disabled.


phys-schost-1#
...
rg_name = schost-sa-1 ...
offline node = phys-schost-2 ...
num of node = 0 ...
phys-schost-1#
INIT: New run level: 0
The system is coming down. Please wait.
System services are now being stopped.
Print services stopped.
syslogd: going down on signal 15
...
The system is down.
syncing file systems... done


WARNING: node phys-schost-1 is being shut down.


Program terminated
ok boot -x
...
Not booting as part of cluster
...
The system is ready.
phys-schost-1 console login:
