
How to Install and Configure a Two-Node Cluster Using Oracle Solaris Cluster 4.0 on Oracle Solaris 11

by Subarna Ganguly and Jonathan Mellors, December 2011

How to quickly and easily install and configure Oracle Solaris Cluster software for two nodes, including
configuring a quorum device.

Introduction
This article provides a step-by-step process for using the
interactive scinstall utility to install and configure Oracle Solaris Cluster
software for two nodes, including the configuration of a quorum device. It does
not cover the configuration of highly available services.


Note: For more details on how to install and configure other Oracle Solaris
Cluster software configurations, see the Oracle Solaris Cluster Software
Installation Guide.
The interactive scinstall utility is menu-driven. The menus help reduce the chance of mistakes and promote
best practices by using default values, prompting you for information specific to your cluster, and identifying
invalid entries.
The scinstall utility also eliminates the need to manually set up a quorum device by automating the
configuration of a quorum device for the new cluster.
Note: This article refers to the Oracle Solaris Cluster 4.0 release. For more information about the latest Oracle
Solaris Cluster release, see the release notes.
Prerequisites, Assumptions, and Defaults
This section discusses several prerequisites, assumptions, and defaults for two-node clusters.

Configuration Assumptions
This article assumes the following conditions are met:

You are installing on Oracle Solaris 11 and you have basic system administration skills.

You are installing Oracle Solaris Cluster 4.0 software.

The cluster hardware is supported with Oracle Solaris Cluster 4.0 software. (See Oracle Solaris Cluster
System Requirements.)

A two-node x86 cluster is installed. However, the installation procedure is applicable to SPARC clusters
as well.

Each node has two spare network interfaces to be used as private interconnects, also known as
transports, and at least one network interface that is connected to the public network.

SCSI shared storage is connected to the two nodes.

Your setup looks like Figure 1, although you might have fewer or more devices, depending on your
system or network configuration.
Note: It is recommended, but not required, that you have console access to the nodes during cluster installation.

Figure 1. Oracle Solaris Cluster Hardware Configuration

Prerequisites for Each System


This article assumes that Oracle Solaris 11 has been installed on both systems.

Initial Preparation of Public IP Addresses and Logical Host Names


You must have the logical names (host names) and IP addresses of the nodes that are to be configured as a
cluster. Add those entries to each node's /etc/inet/hosts file or to a naming service if a naming service,
such as DNS, NIS, or NIS+ maps, is used.
The example in this article uses the NIS service and the configuration shown in Table 1.
Table 1. Configuration

COMPONENT        NAME
Cluster Name     phys-schost
Node 1           phys-schost-1
Node 2           phys-schost-2

Defaults
The scinstall interactive utility in Typical mode installs the Oracle Solaris Cluster software with the following
defaults:

Private-network address 172.16.0.0

Private-network netmask 255.255.248.0

Cluster-transport switches switch1 and switch2


The example in this article has no cluster-transport switches. Instead, the private networking is resolved by using
back-to-back cables.
In the example in this article, the interfaces of the private interconnects are nge1 and e1000g1 on both cluster
nodes.
Preinstallation Checks
Perform the following steps.

1. Temporarily enable rsh or ssh access for root on the cluster nodes.

2. Log in to the cluster nodes on which you are installing Oracle Solaris Cluster software and become superuser.

3. On each node, verify the /etc/inet/hosts file entries. If no other name resolution service is available, add the name and IP address of the other node to this file.

In our example (with the NIS service), the /etc/inet/hosts files are as follows.

On node 1:

# Internet host table
#
::1 phys-schost-1 localhost
127.0.0.1 phys-schost-1 localhost loghost

On node 2:

# Internet host table
#
::1 phys-schost-2 localhost
127.0.0.1 phys-schost-2 localhost loghost
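If no naming service were available, each node's /etc/inet/hosts file would also need an entry for its peer. A minimal sketch, using the example addresses that appear later in this article (substitute your own names and addresses):

On node 1, the file would also contain:

1.2.3.5   phys-schost-2

On node 2, the file would also contain:

1.2.3.4   phys-schost-1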
4. On each node, verify that at least one shared storage disk is available, as shown in Listing 1.

In our example, there are two disks that are shared between the two nodes:

c0t600A0B800026FD7C000019B149CCCFAEd0
c0t600A0B800026FD7C000019D549D0A500d0
Listing 1. Verifying Shared Storage Is Available

# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
     0. c4t0d0 <FUJITSU-MBB2073RCSUN72G-0505 cyl 8921 alt 2 hd 255 sec 63>
        /pci@7b,0/pci1022,7458@11/pci1000,3060@2/sd@0,0
        /dev/chassis/SYS/HD0/disk
     1. c4t1d0 <SUN72G cyl 14084 alt 2 hd 24 sec 424>
        /pci@7b,0/pci1022,7458@11/pci1000,3060@2/sd@1,0
        /dev/chassis/SYS/HD1/disk
     2. c0t600A0B800026FD7C000019B149CCCFAEd0 <SUN-CSM200_R-0660 cyl 2607 alt 2 hd 255 sec 63>
        /scsi_vhci/disk@g600a0b800026fd7c000019b149cccfae
     3. c0t600A0B800026FD7C000019D549D0A500d0 <SUN-CSM200_R-0660 cyl 2607 alt 2 hd 255 sec 63>
        /scsi_vhci/disk@g600a0b800026fdb600001a0449d0a6d3
5. On each node, ensure the right OS version is installed:

# more /etc/release
                            Oracle Solaris 11 11/11 X86
  Copyright (c) 1983, 2011, Oracle and/or its affiliates.  All rights reserved.
                           Assembled 26 September 2011

6. Ensure that the network interfaces are configured as static IP addresses (not DHCP or of type addrconf), as displayed by the command ipadm show-addr -o all.
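A quick way to check is to list the address objects and their types; the TYPE column should report static rather than dhcp or addrconf. The output below is an illustrative sketch only (interface names and addresses will differ on your systems):

# ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
net0/v4           static   ok           1.2.3.4/24
lo0/v6            static   ok           ::1/128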

If the nodes are configured as static, proceed to the section Configuring the Oracle Solaris Cluster Publisher.
Otherwise, continue with this procedure and do the following:

If the network interfaces are not configured as static IP addresses, on each node, run the
command shown in Listing 2 to unconfigure all network interfaces and services.

Listing 2. Unconfigure the Network Interfaces and Services

# netadm enable -p ncp defaultfixed
Enabling ncp 'DefaultFixed'
phys-schost-1: Sep 27 08:19:19 phys-schost-1 in.ndpd[1038]: Interface net0 has been removed from kernel. in.ndpd will no longer use it
Sep 27 08:19:19 phys-schost-1 in.ndpd[1038]: Interface net1 has been removed from kernel. in.ndpd will no longer use it
Sep 27 08:19:19 phys-schost-1 in.ndpd[1038]: Interface net2 has been removed from kernel. in.ndpd will no longer use it
Sep 27 08:19:20 phys-schost-1 in.ndpd[1038]: Interface net3 has been removed from kernel. in.ndpd will no longer use it
Sep 27 08:19:20 phys-schost-1 in.ndpd[1038]: Interface net4 has been removed from kernel. in.ndpd will no longer use it
Sep 27 08:19:20 phys-schost-1 in.ndpd[1038]: Interface net5 has been removed from kernel. in.ndpd will no longer use it

Then, on each node, run the commands shown in Listing 3.

Listing 3. Commands to Run on Both Nodes

# svccfg -s svc:/network/nis/domain setprop config/domainname = hostname: nisdomain.example.com
# svccfg -s svc:/network/nis/domain:default refresh
# svcadm enable svc:/network/nis/domain:default
# svcadm enable svc:/network/nis/client:default
# /usr/sbin/svccfg -s svc:/system/name-service/switch setprop config/host = astring: \"files nis\"
# /usr/sbin/svccfg -s svc:/system/name-service/switch setprop config/netmask = astring: \"files nis\"
# /usr/sbin/svccfg -s svc:/system/name-service/switch setprop config/automount = astring: \"files nis\"
# /usr/sbin/svcadm refresh svc:/system/name-service/switch

On each node, bind back to the NIS server:

# ypinit -c

Reboot each node to make sure the new network setup is working fine.
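Note that switching to the DefaultFixed profile removes the automatically configured addresses, and this article does not show how to recreate them. As a minimal sketch only, assuming the public interface is net0 and using the example address from this article plus a hypothetical default gateway, a static address could be configured as follows before rebooting:

# ipadm create-ip net0
# ipadm create-addr -T static -a 1.2.3.4/24 net0/v4
# route -p add default 1.2.3.1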

(Optional) On each node, create a boot environment (BE), without the cluster software, as a pre-cluster backup BE, for example:

# beadm create Pre-Cluster-s11
# beadm list
BE                Active  Mountpoint  Space   Policy  Created
--                ------  ----------  -----   ------  -------
Pre-Cluster-s11   -       -           179.0K  static  2011-09-27 08:51
s11               NR      /           4.06G   static  2011-09-26 08:50
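If you later need to back out the cluster installation, you can boot back into this BE. A sketch, assuming the BE name used above:

# beadm activate Pre-Cluster-s11
# init 6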

Configuring the Oracle Solaris Cluster Publisher


There are two main ways to access the Oracle Solaris Cluster package repository, depending on whether the
cluster nodes have direct access (or access through a Web proxy) to the Internet:

Use a repository hosted on pkg.oracle.com.


Use a local copy of the repository.

Using a Repository Hosted on pkg.oracle.com


To access either the Oracle Solaris Cluster Release or Support repository, obtain the SSL public and private keys, as follows:
1. Go to http://pkg-register.oracle.com (login required).

2. Choose the Oracle Solaris Cluster Release or Support repository.

3. Accept the license.

4. Request a new certificate by choosing the Oracle Solaris Cluster software and submitting a request. (A certification page is displayed with download buttons for the key and certificate.)

5. Download the key and certificate and install them, as described in the certification page.

6. Configure the ha-cluster publisher with the downloaded SSL keys to point to the selected repository URL on pkg.oracle.com. The following example uses the release repository:

# pkg set-publisher \
    -k /var/pkg/ssl/Oracle_Solaris_Cluster_4.0.key.pem \
    -c /var/pkg/ssl/Oracle_Solaris_Cluster_4.0.certificate.pem \
    -g https://pkg.oracle.com/ha-cluster/release/ ha-cluster
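Before installing anything, it is worth confirming that the publisher is reachable and that the cluster packages are visible. A quick check (the package pattern is illustrative):

# pkg publisher ha-cluster
# pkg list -a 'ha-cluster*'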

Using a Local Copy of the Repository


To access a local copy of the Oracle Solaris Cluster Release or Support repository, download the repository
image, as follows.
1. Download the repository image from one of the following sites:

Oracle Technology Network

Oracle Software Delivery Cloud (login required). On the Media Pack Search page, select Oracle Solaris as the Product Pack and click Go. Choose Oracle Solaris Cluster 4.0 Media Pack and download the file.

2. Mount the repository image and copy the data to a shared file system that all the cluster nodes can access.

# lofiadm -a /tmp/osc4.0-repo-full.iso
/dev/lofi/1
# mount -F hsfs /dev/lofi/1 /mnt
# rsync -aP /mnt/repo /export
# share /export/repo

3. Configure the ha-cluster publisher. The following example uses node 1 as the system that shared the local copy of the repository:

# pkg set-publisher -g file:///net/phys-schost-1/export/repo ha-cluster
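You can also confirm that the copied repository is intact, either before or after setting the publisher. A sketch, assuming the /export/repo path used above:

# pkgrepo info -s /export/repo
# pkgrepo list -s /export/repo 'ha-cluster*'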

Installing the Oracle Solaris Cluster Software Packages

1. On each node, ensure the correct Oracle Solaris package repositories are published. If they are not, unset the incorrect publishers and set the correct ones. The installation of the ha-cluster packages is likely to fail if it cannot access the Oracle Solaris publisher.

# pkg publisher
PUBLISHER        TYPE     STATUS   URI
solaris          origin   online   <solaris repository>
ha-cluster       origin   online   <ha-cluster repository>

2. On each cluster node, install the ha-cluster-full package group, as shown in Listing 4.

Listing 4. Installing the Package Group

# pkg install ha-cluster-full
           Packages to install:  68
       Create boot environment:  No
Create backup boot environment: Yes
            Services to change:   1

DOWNLOAD                              PKGS        FILES    XFER (MB)
Completed                            68/68    6456/6456    48.5/48.5

PHASE                                        ACTIONS
Install Phase                              8928/8928

PHASE                                          ITEMS
Package State Update Phase                     68/68
Image State Update Phase                         2/2

Loading smf(5) service descriptions: 9/9
Loading smf(5) service descriptions: 57/57
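After the installation completes, you can confirm on each node that the package group landed and check its version. A quick check:

# pkg list ha-cluster-full
# pkg info ha-cluster-full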

Configuring the Oracle Solaris Cluster Software


1. On each node of the cluster, identify the network interfaces that will be used for the private interconnects, for example:

On node 1, run this command.

# dladm show-phys
LINK     MEDIA      STATE     SPEED   DUPLEX    DEVICE
net3     Ethernet   unknown   0       unknown   e1000g1
net0     Ethernet   up        1000    full      nge0
net4     Ethernet   unknown   0       unknown   e1000g2
net2     Ethernet   unknown   0       unknown   e1000g0
net1     Ethernet   unknown   0       unknown   nge1
net5     Ethernet   unknown   0       unknown   e1000g3

On node 2, run this command.

# dladm show-phys
LINK     MEDIA      STATE     SPEED   DUPLEX    DEVICE
net3     Ethernet   unknown   0       unknown   e1000g1
net0     Ethernet   up        1000    full      nge0
net4     Ethernet   unknown   0       unknown   e1000g2
net2     Ethernet   unknown   0       unknown   e1000g0
net1     Ethernet   unknown   0       unknown   nge1
net5     Ethernet   unknown   0       unknown   e1000g3

In our example, we will be using net1 and net3 on each node as private interconnects.

2. On both nodes, ensure that SMF services are not disabled.

# svcs -x
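With svcs -x, no output means that all enabled services are running normally. If a service is reported as being in maintenance, clear it before continuing; the FMRI below is only a hypothetical example:

# svcadm clear svc:/network/nis/client:default
# svcs -x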

3. On each node, ensure that the service network/rpc/bind:default has its local_only configuration set to false.

# svcprop network/rpc/bind:default | grep local_only
config/local_only boolean false
If it is not set to false, set it as follows:

# svccfg
svc:> select network/rpc/bind
svc:/network/rpc/bind> setprop config/local_only=false
svc:/network/rpc/bind> quit
# svcadm refresh network/rpc/bind:default
# svcprop network/rpc/bind:default | grep local_only
config/local_only boolean false
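The same change can also be made non-interactively, which is convenient when preparing both nodes from a script; a sketch equivalent to the session above:

# svccfg -s svc:/network/rpc/bind setprop config/local_only=false
# svcadm refresh network/rpc/bind:default
# svcprop network/rpc/bind:default | grep local_only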
4. From one of the nodes, start the Oracle Solaris Cluster configuration utility by running the scinstall command, which will configure the software on the other node as well, and then type 1 from the Main menu to choose to create a new cluster or add a cluster node.

In the example shown in Listing 5, the command is run on the second node, phys-schost-2.

Listing 5. Running the scinstall Command

# /usr/cluster/bin/scinstall

  *** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Create a new cluster or add a cluster node
      * 2) Print release information for this cluster node
      * ?) Help with menu options
      * q) Quit

    Option:
5. In the Create a New Cluster screen, shown in Listing 6, answer yes and then press Enter.

Listing 6. Creating a New Cluster

  *** Create a New Cluster ***

    This option creates and configures a new cluster.

    Press Control-D at any time to return to the Main Menu.

    Do you want to continue (yes/no) [yes]?

Checking the value of property "local_only" of service svc:/network/rpc/bind ...
Property "local_only" of service svc:/network/rpc/bind is already correctly set to "false" on this node.

Press Enter to continue:

6. In the installation mode selection screen, select the default option (Typical), as shown in Listing 7.

Listing 7. Selecting the Installation Mode

  >>> Typical or Custom Mode <<<

    This tool supports two modes of operation, Typical mode and Custom
    mode. For most clusters, you can use Typical mode. However, you might
    need to select the Custom mode option if not all of the Typical mode
    defaults can be applied to your cluster.

    For more information about the differences between Typical and Custom
    modes, select the Help option from the menu.

    Please select from one of the following options:

        1) Typical
        2) Custom

        ?) Help
        q) Return to the Main Menu

    Option [1]:

7. Provide the name of the cluster (in our example, phys-schost).

  >>> Cluster Name <<<

    Each cluster has a name assigned to it. The name can be made up of any
    characters other than whitespace. Each cluster name should be unique
    within the namespace of your enterprise.

    What is the name of the cluster you want to establish?  phys-schost
8. Provide the name of the other node (in our example, phys-schost-1), press Control-D to finish the node list, and answer yes to confirm the list of nodes, as shown in Listing 8.

Listing 8. Confirming the List of Nodes

  >>> Cluster Nodes <<<

    This Oracle Solaris Cluster release supports a total of up to 16 nodes.

    List the names of the other nodes planned for the initial cluster
    configuration. List one node name per line. When finished, type
    Control-D:

    Node name (Control-D to finish):  phys-schost-1
    Node name (Control-D to finish):  ^D

    This is the complete list of nodes:

        phys-schost-2
        phys-schost-1

    Is it correct (yes/no) [yes]?
9. The next screen configures the cluster's private interconnects, also known as the transport adapters. In our example, we are selecting interfaces net1 and net3, as determined previously. If the tool finds network traffic on those interfaces, it will ask for confirmation to use them anyway. Ensure that those interfaces are not connected to any other network, and then confirm their use as transport adapters, as shown in Listing 9.

Listing 9. Selecting the Transport Adapters

  >>> Cluster Transport Adapters and Cables <<<

    You must identify the cluster transport adapters which attach this
    node to the private cluster interconnect.

    Select the first cluster transport adapter:

        1) net1
        2) net2
        3) net3
        4) net4
        5) net5
        6) Other

    Option:  1

    Searching for any unexpected network traffic on "net1" ... done
    Unexpected network traffic was seen on "net1".
    "net1" may be cabled to a public network.

    Do you want to use "net1" anyway (yes/no) [no]?  yes

    Select the second cluster transport adapter:

        1) net1
        2) net2
        3) net3
        4) net4
        5) net5
        6) Other

    Option:  3

    Searching for any unexpected network traffic on "net3" ... done
    Unexpected network traffic was seen on "net3".
    "net3" may be cabled to a public network.

    Do you want to use "net3" anyway (yes/no) [no]?  yes

10. Next, configure the quorum device by accepting the default answers, as shown in Listing 10:

When asked whether to disable automatic quorum device selection, answer no (this keeps the automatic configuration).
Confirm that it is okay to create the new cluster by answering yes.
When asked whether to interrupt cluster creation for cluster check errors, answer no.

Listing 10. Configuring the Quorum Device

  >>> Quorum Configuration <<<

    Every two-node cluster requires at least one quorum device. By
    default, scinstall selects and configures a shared disk quorum device
    for you.

    This screen allows you to disable the automatic selection and
    configuration of a quorum device.

    You have chosen to turn on the global fencing. If your shared storage
    devices do not support SCSI, such as Serial Advanced Technology
    Attachment (SATA) disks, or if your shared disks do not support
    SCSI-2, you must disable this feature.

    If you disable automatic quorum device selection now, or if you intend
    to use a quorum device that is not a shared disk, you must instead use
    clsetup(1M) to manually configure quorum once both nodes have joined
    the cluster for the first time.

    Do you want to disable automatic quorum device selection (yes/no) [no]?

    Is it okay to create the new cluster (yes/no) [yes]?

    During the cluster creation process, cluster check is run on each of
    the new cluster nodes. If cluster check detects problems, you can
    either interrupt the process or check the log files after the cluster
    has been established.

    Interrupt cluster creation for cluster check errors (yes/no) [no]?
Listing 11 shows the final output, which indicates the configuration of the nodes and the installation log file name. The utility then reboots each node in cluster mode.

Listing 11. Details of the Node Configuration

Cluster Creation

Log file - /var/cluster/logs/install/scinstall.log.3386

Configuring global device using lofi on phys-schost-1: done

Starting discovery of the cluster transport configuration.

The following connections were discovered:

    phys-schost-2:net1  switch1  phys-schost-1:net1
    phys-schost-2:net3  switch2  phys-schost-1:net3

Completed discovery of the cluster transport configuration.

Started cluster check on "phys-schost-2".
Started cluster check on "phys-schost-1".
.
.
.
Refer to the log file for details.
The name of the log file is /var/cluster/logs/install/scinstall.log.3386.

Configuring "phys-schost-1" ... done
Rebooting "phys-schost-1" ...

Configuring "phys-schost-2" ...
Rebooting "phys-schost-2" ...

Log file - /var/cluster/logs/install/scinstall.log.3386
When the scinstall utility finishes, the installation and configuration of the basic Oracle Solaris Cluster software is complete. The cluster is now ready for you to configure the components you will use to support highly available applications. These cluster components can include device groups, cluster file systems, highly available local file systems, and individual data services and zone clusters. To configure these components, consult the documentation library.

On each node, verify that multi-user services for the Oracle Solaris Service Management Facility (SMF) are online. Also ensure that the new services added by Oracle Solaris Cluster are all online.

# svcs -x
# svcs multi-user-server
STATE          STIME    FMRI
online         9:58:44  svc:/milestone/multi-user-server:default
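To see the services added by the cluster software, you can filter the svcs output on the cluster FMRIs; the pattern below is illustrative:

# svcs "svc:/system/cluster/*"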

From one of the nodes, verify that both nodes have joined the cluster, as shown in Listing 12.

Listing 12. Verifying that Both Nodes Joined the Cluster

# cluster status

=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
phys-schost-1                                   Online
phys-schost-2                                   Online

=== Cluster Transport Paths ===

Endpoint1               Endpoint2               Status
---------               ---------               ------
phys-schost-1:net3      phys-schost-2:net3      Path online
phys-schost-1:net1      phys-schost-2:net1      Path online

=== Cluster Quorum ===

--- Quorum Votes Summary from (latest node reconfiguration) ---

            Needed   Present   Possible
            ------   -------   --------
            2        3         3

--- Quorum Votes by Node (current status) ---

Node Name          Present       Possible       Status
---------          -------       --------       ------
phys-schost-1      1             1              Online
phys-schost-2      1             1              Online

--- Quorum Votes by Device (current status) ---

Device Name        Present       Possible       Status
-----------        -------       --------       ------
d1                 1             1              Online

=== Cluster Device Groups ===

--- Device Group Status ---

Device Group Name     Primary     Secondary     Status
-----------------     -------     ---------     ------

--- Spare, Inactive, and In Transition Nodes ---

Device Group Name   Spare Nodes   Inactive Nodes   In Transition Nodes
-----------------   -----------   --------------   -------------------

--- Multi-owner Device Group Status ---

Device Group Name           Node Name           Status
-----------------           ---------           ------

=== Cluster Resource Groups ===

Group Name       Node Name       Suspended      State
----------       ---------       ---------      -----

=== Cluster Resources ===

Resource Name       Node Name       State       Status Message
-------------       ---------       -----       --------------

=== Cluster DID Devices ===

Device Instance              Node               Status
---------------              ----               ------
/dev/did/rdsk/d1             phys-schost-1      Ok
                             phys-schost-2      Ok

/dev/did/rdsk/d2             phys-schost-1      Ok
                             phys-schost-2      Ok

/dev/did/rdsk/d3             phys-schost-1      Ok

/dev/did/rdsk/d4             phys-schost-1      Ok

/dev/did/rdsk/d5             phys-schost-2      Ok

/dev/did/rdsk/d6             phys-schost-2      Ok

=== Zone Clusters ===

--- Zone Cluster Status ---

Name     Node Name     Zone HostName     Status     Zone Status
----     ---------     -------------     ------     -----------

Verification (Optional)
Now, we will create a failover resource group with a LogicalHostname resource for a highly available network resource and an HAStoragePlus resource for a highly available ZFS file system on a zpool resource.

1. Identify the network address that will be used for this purpose and add it to the /etc/inet/hosts file on the nodes.

In the following example, schost-lh is used as the logical host name for the resource group. This resource is of the type SUNW.LogicalHostname, which is a preregistered resource type.

On node 1:

# Internet host table
#
::1 localhost
127.0.0.1 localhost loghost
1.2.3.4   phys-schost-1 # Cluster Node
1.2.3.5   phys-schost-2 # Cluster Node
1.2.3.6   schost-lh

On node 2:

# Internet host table
#
::1 localhost
127.0.0.1 localhost loghost
1.2.3.4   phys-schost-1 # Cluster Node
1.2.3.5   phys-schost-2 # Cluster Node
1.2.3.6   schost-lh

2. From one of the nodes, create a zpool with the two shared storage disks: /dev/did/rdsk/d1s0 and /dev/did/rdsk/d2s0. In our example, we have assigned the entire disk to slice 0 of the disks, using the format utility.

# zpool create -m /zfs1 pool1 mirror /dev/did/dsk/d1s0 /dev/did/dsk/d2s0
# df -k /zfs1
Filesystem     1024-blocks   Used   Available   Capacity   Mounted on
pool1          20514816      31     20514722    1%         /zfs1
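Before handing the pool over to the cluster, it is reasonable to confirm that the mirror is healthy; a quick check:

# zpool status pool1
# zpool list pool1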

3. The zpool will now be placed in a highly available resource group as a resource of type SUNW.HAStoragePlus. This resource type has to be registered before it is used for the first time.

Create a highly available resource group to house the resources by doing the following on one node:

# /usr/cluster/bin/clrg create test-rg

Then add the network resource to the group:

# /usr/cluster/bin/clrslh create -g test-rg -h schost-lh schost-lhres

Register the storage resource type:

# /usr/cluster/bin/clrt register SUNW.HAStoragePlus

Add the zpool to the group:

# /usr/cluster/bin/clrs create -g test-rg -t SUNW.HAStoragePlus -p zpools=pool1 hasp-res

Bring the group online:

# /usr/cluster/bin/clrg online -eM test-rg

Check the status of the group and the resources, as shown in Listing 13.

Listing 13. Checking the Group and Resource Status

# /usr/cluster/bin/clrg status

=== Cluster Resource Groups ===

Group Name       Node Name          Suspended      Status
----------       ---------          ---------      ------
test-rg          phys-schost-1      No             Online
                 phys-schost-2      No             Offline

# /usr/cluster/bin/clrs status

=== Cluster Resources ===

Resource Name    Node Name          State        Status Message
-------------    ---------          -----        --------------
hasp-res         phys-schost-1      Online       Online
                 phys-schost-2      Offline      Offline

schost-lhres     phys-schost-1      Online       Online - LogicalHostname online.
                 phys-schost-2      Offline      Offline

From the above status, we see that the resources and the group are online on node 1.
4. To verify availability, switch the resource group to node 2 and check the status of the resources and the group, as shown in Listing 14.

Listing 14. Switching the Resource Group to Node 2

# /usr/cluster/bin/clrg switch -n phys-schost-2 test-rg
# /usr/cluster/bin/clrg status

=== Cluster Resource Groups ===

Group Name       Node Name          Suspended      Status
----------       ---------          ---------      ------
test-rg          phys-schost-1      No             Offline
                 phys-schost-2      No             Online

# /usr/cluster/bin/clrs status

=== Cluster Resources ===

Resource Name    Node Name          State        Status Message
-------------    ---------          -----        --------------
hasp-res         phys-schost-1      Offline      Offline
                 phys-schost-2      Online       Online

schost-lhres     phys-schost-1      Offline      Offline - LogicalHostname offline.
                 phys-schost-2      Online       Online - LogicalHostname online.
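To finish the verification, you can switch the resource group back to the first node and confirm the status again; a sketch using the same commands as above:

# /usr/cluster/bin/clrg switch -n phys-schost-1 test-rg
# /usr/cluster/bin/clrg status
# /usr/cluster/bin/clrs status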
Summary
This article described how to install and configure a two-node cluster with Oracle Solaris Cluster 4.0 on Oracle
Solaris 11. It also explained how to verify that the cluster is behaving correctly by creating and running two
resources on one node and then switching over those resources to the secondary node.
