
Oracle applications high availability on HP ProLiant blades utilizing Oracle Clusterware

Technical white paper

Table of contents
Executive summary............................................................................................................................... 2
High availability for Oracle applications ................................................................................................ 2
Configuration tested ............................................................................................................................. 2
Why the HP BladeSystem c-Class enclosure and ProLiant BL460c blades? .................................................. 3
Why the HP 4400 Enterprise Virtual Array? ............................................................................................ 5
Why Oracle Clusterware and Oracle ASM Cluster File System software? ................................................... 5
Oracle Clusterware .............................................................................................................................. 6
Steps for creating a highly available environment for Oracle applications .................................................. 6
Create virtual disks for server boot and root file systems ....................................................................... 7
Use host bus adapter BIOS to record HBA WWIDs .............................................................................. 7
Utilize HP Command View EVA to associate WWIDs with Vdisks .......................................................... 9
Install operating system ................................................................................................................... 19
Create virtual disks to utilize for ASM disk storage ............................................................................. 27
Check Oracle prerequisites ............................................................................................................. 29
Modify multipath configuration to prevent blacklist of new virtual disks ................................................. 32
Create partitions and ASM disks ...................................................................................................... 35
Install Oracle Clusterware ............................................................................................................... 38
Create ASM cluster file system ......................................................................................................... 49
Install application in the cluster file system ......................................................................................... 57
Steps for placing an application under the protection of the Oracle Clusterware ....................................... 62
Create and register an application VIP ............................................................................................. 63
Create an action program ............................................................................................................... 63
Create and register an application ................................................................................................... 63
Commands to monitor, start, stop, service ......................................................................................... 64
Appendix A: Oracle HTTPD Server action program ............................................................................... 65
For more information .......................................................................................................................... 66
Executive summary
This document describes a set of best practices for Oracle applications high availability (HA) on Linux
using HP BladeSystem, Oracle Clusterware, and the Oracle ASM Cluster File Systems (ACFS). These
best practices provide the basis for a highly available failover clustered server environment for Oracle
applications. The final section of the document details installing the Oracle Web Server in an Oracle Clusterware environment and provides an example of using this environment for a real-world application.
Target audience: This document is intended to assist Oracle application administrators, HP pre-sales,
and HP partners. Readers should already be familiar with their Oracle applications products, Oracle
Clusterware, and Oracle ASM, including basic administration and installation.

High availability for Oracle applications


The goal of this HA cluster solution is to provide continuous availability of an Oracle application
environment by eliminating single points of failure and by failing over an inoperative Oracle
application service. To provide HA for the physical layer, the following are utilized:
• HP 4400 Enterprise Virtual Array (EVA4400)
• Boot from SAN
• Dual HBAs
• HBA multipathing
• Multiple NICs
• Multiple ProLiant servers
There are multiple software clustering solutions available; this solution utilizes an all-Oracle software stack for clustering.
HP provides excellent solutions for disaster recovery and scheduled downtime maintenance with the
HP Matrix Operating Environment recovery management. This solution addresses unscheduled
application downtime and how to keep an Oracle application highly available without manual
intervention.
The operating system utilized in this example is Red Hat Enterprise Linux (RHEL 5.3). The same
concepts apply for utilizing these components in a Microsoft® Windows® environment.

Configuration tested
The diagram shown in figure 1 illustrates the configuration tested for the preparation of this document.
The table below shows the mapping of systems, names, HBA information, and virtual disks for this
example.

Server Name                        nodea                                  nodeb
Blade bay                          #4                                     #5
HBAs (SAN initiator host names)    blade4-mezzslot1, blade4-mezzslot2     blade5-mezzslot1, blade5-mezzslot2
Vdisks (SAN targets)               Vdisk001, Vdisk002                     Vdisk003, Vdisk004

Figure 1. Tested configuration

Why the HP BladeSystem c-Class enclosure and ProLiant BL460c blades?
The BladeSystem c7000 enclosure provides all the power, cooling, and I/O infrastructure needed to
support modular servers, interconnects, and storage components today with expansion for the future.
The enclosure is 10U high and holds up to 16 servers and/or storage blades plus optional redundant
network and storage interconnect modules. More information on the c-Class enclosure can be found
at http://h18004.www1.hp.com/products/blades/components/enclosures/c-class/index.html.

Figure 2. BladeSystem c7000 enclosure

As the world's most popular blade server, the HP ProLiant BL460c server blade sets the standard for data center computing. Packing two processors, two hot plug hard drives, up to 384GB of
memory, and a dual-port FlexFabric adapter into a half-height blade, the BL460c gives IT managers
the performance and expandability they need for demanding data center applications. More
information on the BL460c server blades can be found at: http://www.hp.com/servers/bl460c.

Figure 3. HP ProLiant BL460c BladeSystem server

Why the HP 4400 Enterprise Virtual Array?
The HP 4400 Enterprise Virtual Array (EVA4400) offers an easily deployed enterprise class virtual
storage array for midsized customers at an affordable price. More information on the EVA4400 can
be found at: http://www.hp.com/go/eva4400.

Figure 4. HP EVA4400

Why Oracle Clusterware and Oracle ASM Cluster File System software?
Oracle Clusterware manages the availability of user applications and Oracle databases in a
clustered environment. In an Oracle Real Application Clusters (Oracle RAC) environment, Oracle
Clusterware manages all of the Oracle database processes automatically. Anything managed by
Oracle Clusterware is known as a cluster resource, which could be a database instance, a listener, a
virtual IP (VIP) address, or an application process.
In this solution, Oracle Clusterware is being used for a high availability failover cluster solution with a
shared storage backend. Oracle Clusterware will monitor the resource status of an Oracle
application, and fail that application and its associated storage over to an alternate node in the event
of a failure.
Oracle ASM Cluster File System (ACFS) is a general purpose cluster file system implemented as part
of ASM. It can be used to store almost anything. The only things that should not be stored in ACFS
are the Grid Infrastructure home and any Oracle files that can be directly stored in Oracle ASM.
ACFS can be used to store application files (static and dynamic). In this solution, an example
application (the Oracle Web Server) has all of its binaries, plugins, and document root stored in
ACFS.
More information on Oracle Clusterware can be found at:
http://www.oracle.com/technetwork/database/index-090666.html
Oracle licensing information can be found at:
http://download.oracle.com/docs/cd/E11882_01/license.112/e10594/editions.htm#CJAHFHBJ

Oracle Clusterware
Oracle Clusterware can be used to protect any application (restarting or failing over the application
in the event of a failure), free of charge, if one or more of the following conditions are met.
• The server OS is supported by a valid Oracle Unbreakable Linux support contract.
• The product to be protected is either an Oracle product (e.g. Oracle applications, Siebel, Hyperion, Oracle Database EE, Oracle Database XE), or any third-party product that directly or indirectly stores data in an Oracle database.
• At least one of the servers in the cluster is licensed for Oracle Database (SE or EE).
A cluster is defined to include all the machines that share the same Oracle Cluster Registry (OCR) and
voting disk.

Steps for creating a highly available environment for Oracle applications
This section describes the steps that can be used to create a HA environment for an Oracle
application. The HP white paper, Booting Linux x86 and x86_64 systems from a Storage Area
Network with Device Mapper Multipath, available at:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01513866/c01513866.pdf?ju
mpid=reg_R1002_USEN, details the steps for properly configuring storage and installing Linux on a
multipath device. The document details the importance of going to the Single Point of Connectivity
Knowledge (SPOCK) web site, http://www.hp.com/storage/spock. Utilize SPOCK to verify your
configuration (hardware, firmware, software).
HP Command View EVA (the web interface) was utilized to configure the EVA4400 storage. More
information on Command View EVA can be found at:
http://h10032.www1.hp.com/ctg/Manual/c00605846.pdf.
The steps listed below can be used to set up an HA environment. Steps 1-4 will have to be done for each of the two servers (nodea and nodeb). Start with nodeb (blade in bay 5). Details for each step are
provided in the remainder of this paper.
1. Create virtual disks for your server boot and root file systems using your SAN storage management
tools. In this case, the Command View EVA for the EVA4400 was utilized.
2. Use host bus adapter BIOS to record the WWIDs of your HBA adapters.
3. Utilize the Command View EVA to associate these WWIDs with the assigned virtual disks.
4. Install your operating system in a boot from SAN multipath environment created in steps 1-3.
5. On the storage side, create five additional ASM virtual disks to use for the cluster registry, voting
disk, and data storage. Make them 50GB each; mirror them.
6. Check Oracle prerequisites (RPMs, host file definitions, kernel tuning, etc.).
7. Modify multipath.conf so it does not blacklist all WWIDs (i.e. the newly created storage for ASM
virtual disks).
8. Create partitions for each ASM disk, install ASM libraries, and create ASM disks.
9. Install Oracle Clusterware.
10. Create ASM cluster file system.
11. Install application in the cluster file system.

Create virtual disks for server boot and root file systems
You must first create the virtual disks that you intend to utilize for your boot from SAN environment. In this example, the Command View EVA (web interface) was used to create the virtual disks on the EVA4400 SAN: four 100 GB virtual disks (RAID 0+1), two for each server blade. You could use one for the boot partition (in which case it can be smaller) and another one for the root file system. In this example, we will place the boot and root file systems on a single 100 GB virtual disk and utilize the second drive for possible future expansion.

Use host bus adapter BIOS to record HBA WWIDs


The general steps are detailed in the white paper Booting Linux x86 and x86_64 systems from a
Storage Area Network with Device Mapper Multipath found at:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01513866/c01513866.pdf?ju
mpid=reg_R1002_USEN. When accessing the Host Bus Adapter BIOS, record the WWID numbers
for each of your Host Bus Adapters. Each of the Host Bus Adapters has dual ports, and there are two
Host Bus Adapters in each blade. The naming convention we will utilize later for presenting this
storage will be blade4-mezzslot1, blade4-mezzslot2, blade5-mezzslot1, and blade5-mezzslot2. This
task allows you to associate the HBAs with specific SAN virtual disks (i.e. the ones for the boot and
root file systems). Figure 5 shows example screen shots for a QLogic HBA (model 2462).

Note
You may press Ctrl-Q during the boot process to enter the QLogic HBA BIOS. The boot process is observed via the iLO console for a server blade.

Figure 5. Two HBAs with two ports each

In Figure 5 you see two QMH2462 HBAs, each with two ports. Select the first adapter in slot 1.

Figure 6. Adapter settings for HBA in slot 1

Make sure the Host Bus Adapter (HBA) BIOS is enabled. Record the adapter port name (WWID) in a
safe location. The WWID in Figure 6 is 50060B00006AE2A4. This is the bay 5, BL460c server
blade mezzanine HBA in slot 1.
Repeat for the second blade mezzanine HBA in bay 5. It is located in slot 2. We are not utilizing the
second ports on the HBA in slot 1 or the HBA in slot 2. The goal is to ensure that we multipath across
two distinct HBAs for the highest availability.

Figure 7. Adapter settings for HBA in slot 2

Make sure the Host Bus Adapter (HBA) BIOS is enabled. Record the second Adapter Port Name
(WWID) in a safe location. The WWID in Figure 7 is 500110A000863260. This is the bay 5,
BL460c server blade mezzanine HBA in slot 2.

Utilize HP Command View EVA to associate WWIDs with Vdisks


In the HP Command View EVA screen, add the host entries for the two WWIDs, one for each HBA in the
server blade. Figure 8 below shows the initial Command View EVA screen after login.

Figure 8. Initial Command View EVA screen after login

Figure 9 below shows the screen for adding host information.

Figure 9. Add host information

Add host entries for blade5-mezzslot1 and blade5-mezzslot2.


Use HP Command View EVA to record the World Wide Name (WWN) for each controller so they can be
used later in the blade HBA BIOS as a boot device. See Figure 10 for the controller 2 example.

Figure 10. EVA controller 2 WWN

The Controller WWN in this example is 50014380 025B7B8C for Controller 2 Port FP1, and
50014380 025B7B89 for Controller 1 Port FP2. We use different controllers for high availability.
In this example, Vdisk003 will be used for both the boot and root file systems of the server blade in
bay 5. A second virtual disk was created for future expansion. In this example, Vdisk004 was utilized
for that purpose.
Now present the virtual disks to the associated host as shown in Figure 11. The names used in this
example are blade4-mezzslot1, blade4-mezzslot2, blade5-mezzslot1, and blade5-mezzslot2.

Figure 11. Present Vdisk to hosts

Specify the LUN numbers for these virtual disks. In Figure 12, Vdisk003 is presented as LUN3 to both
HBAs of the server blade in bay 5.

Figure 12. Change LUN number for first disk (Vdisk003)

Repeat the process for the second virtual disk. In this example, present Vdisk004 as LUN4.

Note
Under Vdisk properties, it is very IMPORTANT to click "Save changes" after making changes so that they are properly saved.

Now go back to the blade server iLO console (which is in the HBA BIOS) and scan the fibre channel
devices to view the controllers on the EVA.

The screen in Figure 13 below shows 4 SAN targets because there are two controllers with two ports
each.

Figure 13. Results of scan of fibre channel loop

The server blade in bay 5 has LUN3 assigned to Vdisk003 and LUN4 to Vdisk004. Enter 3 (LUN3)
as a boot device as shown in Figure 14.

Figure 14. Assign boot device

Repeat the steps for the second HBA as shown in Figure 15, but use a different SAN target WWN
(controller 1 - 50014380025B7B89). A different controller is utilized for high availability. The LUN
will still be LUN3.

Figure 15. Assign boot device for second controller

Now reboot the server and press F9 during the system BIOS startup (figure 16). This action will allow
you to access the system BIOS.

Figure 16. System options

Note

HP best practices are to utilize the latest versions of firmware.

Make sure the HBA storage is listed first in the Boot Controller order, as shown in Figure 17. The
Local disk storage is not utilized in this solution.

Figure 17. Boot controller order

Repeat the following steps for the blade server in bay 4.


• Use host bus adapter BIOS to record HBA WWNs.
• Utilize the Command View EVA to associate these WWNs with the assigned virtual disks.
There will be different WWNs for HBAs, Vdisks, and the LUNs used. LUNs 1 and 2 were used for the
blade server in bay 4.

Install operating system


Multipathing ensures that your server does not rely on a single path to the disk storage. Multiple Host Bus Adapters and multiple storage controllers are utilized to achieve no single point of failure.
In this example we are creating a two-node cluster. Ensure that you have decided on the IP addresses for the node names. Three IP addresses will be needed for each node: a public, a private, and a virtual IP address. Utilize eth1 for the public node names and eth0 for the private node names (matching the installation settings below). Add entries in your domain name server (DNS) for the host names. You'll need public, private, and virtual IPs. Also add a Single Client Access Name (SCAN) address in DNS. The SCAN address is the single name that you wish clients to use to access a service. An example is listed below.
# Public
192.168.10.91 nodea.localdomain nodea
192.168.10.92 nodeb.localdomain nodeb

# Private
192.168.20.91 nodea-priv.localdomain nodea-priv
192.168.20.92 nodeb-priv.localdomain nodeb-priv
# Virtual
192.168.10.93 nodea-vip.localdomain nodea-vip
192.168.10.94 nodeb-vip.localdomain nodeb-vip
# SCAN
192.168.10.95 node-scan.localdomain node-scan

Note
The SCAN address should be defined on the DNS to round-robin between
the two public IPs.

Figure 18. Red Hat Enterprise Linux 5

The mpath option is necessary to do a multipath installation of the OS. The vnc option makes it easy
to access the installation consoles via a VNC client. Refer to Figure 18. This option allows you to view
the installation process via a VNC client and to use the iLO console interface to switch to alternate
terminal windows and verify the multipath configuration during the install. Use the iLO interface to
define a hot key for switching to alternate consoles (e.g. ctrl-alt-f1) as shown in Figure 19.

Figure 19 shows how to program a remote console hot key.

Figure 19. Remote console hot keys

Once you press control-t in your iLO session, the ctrl-alt-f1 sequence is sent to the console window, which
allows you to switch between alternate terminal windows during the installation. This action is useful
to check on the multipath configurations that the installation is intending to utilize.

Figure 20. Select the drives for installation

Check (by pressing control-t and using the multipath -ll command) that the LUNs you expected are being presented as the mpath0 and mpath1 devices. See Figure 21.
This solution utilized mpath0 for /boot and the root file system, so mpath1 will be left unchecked.

Figure 21. Verify usage of the correct LUN

Verify that mpath0 is using LUN3 (Vdisk003). The syntax is 1:0:0:LUN#

The image in Figure 22 shows the chosen drive (mpath0).

Figure 22. Red Hat Drive mpath0

Figure 23. Red Hat GRUB boot loader

Next check the grub boot loader options as shown in Figure 23.

Make sure grub is installed in the master boot record as shown in Figure 24.

Figure 24. Ensure grub installation in master boot record

Customize the software installation to add the multipath packages. Check the Customize now option as shown in Figure 25.

Figure 25. Customize software install

Add the device mapper multipath software package from the base and ensure that the prerequisite
Oracle-required software is installed. The Oracle Clusterware Installation guide for Linux lists the
software prerequisites. It can be found at:
http://download.oracle.com/docs/cd/B28359_01/install.111/b28263/prelinux.htm#BABIHHFG

Check Oracle prerequisites for the cluster nodes.


Oracle requires the following package groups to be selected.
• GNOME desktop environment
• Editors
• Graphical internet
• Text-based internet
• Development libraries
• Development tools
• Server configuration tools
• Administration tools
• Base

• System tools
• X Windows system
The following information should be set during the installation.
• hostname: nodeb.localdomain
• IP Address eth0: 192.168.20.92 (private address)
• Default Gateway eth0: none
• IP Address eth1: 192.168.10.92 (public address)
• Default Gateway eth1: 192.168.10.10 (public address)

Utilize IP addresses to suit your network configuration.

Figure 26. Select the device-mapper-multipath package

Installation of the operating system with support for multipath should now be completed. Reboot the
system and verify a successful boot. Repeat the operating system installation using similar steps for
nodea (the blade in bay 4). Nodea will utilize LUN1 as its mpath0 device for the boot and root file
systems.

Create virtual disks to utilize for ASM disk storage


Create five additional virtual disks for ASM to use as cluster registry, voting disk, and data storage. In
this example these are RAID 1+0 and 50GB each. They are presented as LUNs 5-9, because LUN1 is used for OS multipath on blade 4, LUN2 is used for future expansion on blade 4, LUN3 is used for OS multipath on blade 5, and LUN4 is used for future expansion on blade 5.
In the example below the ASM disk storage was prefixed with the name ASM. In Figure 27,
ASMVdisk001, ASMVdisk002, ASMVdisk003, ASMVdisk004, ASMVdisk005 are shown.

Figure 27. ASM disks

Present these virtual disks to all hosts in this cluster. These are the host definitions you created earlier in HP Command View EVA (i.e. blade4-mezzslot1, blade4-mezzslot2, blade5-mezzslot1, blade5-mezzslot2). Figure 28 shows the presentation of the first ASM disk.

Figure 28. ASM Vdisks presented to both HBAs on both blades.

To conclude the step of creating virtual disks for ASM storage, reboot blade 4 and blade 5 so that both servers see the newly presented LUNs.

Check Oracle prerequisites


These steps will need to be done on both nodea and nodeb. After the successful OS installation and the creation of the storage to be utilized by the cluster file system, add the prerequisites for Oracle ASM and ACFS. Prerequisite rpms are found on the RHEL 5.3 DVD ISO or via the Red Hat Network. For convenience, in this example the RHEL 5.3 DVD ISO was copied to the local system and loopback mounted. It could also have been mounted using the iLO virtual media interface.
# mount -o loop RHEL5.3-Server-20090106.0-x86_64-DVD.iso /mnt/5.3dvd
Now modify the yum configuration file to utilize this local media mount for the Server packages. An
example yum configuration is shown below with modifications to the local-mediaserver section for the
name and baseurl.

The /etc/yum.conf file:

[main]
cachedir=/var/cache/yum
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
distroverpkg=redhat-release
tolerant=1
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1

# Note that yum-RHN-plugin doesn't honor this.
metadata_expire=1h

# Default.
# installonly_limit = 3

# PUT YOUR REPOS HERE OR IN separate files named file.repo
# in /etc/yum.repos.d
[local-mediaserver]
name=loopbackmounted5.3dvdserver
baseurl=file:///mnt/5.3dvd/Server
enabled=1
gpgcheck=0
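After editing the configuration, you can confirm that the local repository is visible before proceeding. A quick check (assuming the yum version shipped with RHEL 5.3, which supports repolist):
# yum clean all
# yum repolist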

Once the basic installation is complete, install the following packages while logged in as the root
user, including the 64-bit and 32-bit versions of prerequisite packages from the Enterprise Linux 5
DVD.
# cd /media/cdrom/Server
# rpm -Uvh binutils-2.*
# rpm -Uvh compat-libstdc++-33*
# rpm -Uvh elfutils-libelf-0.*
# rpm -Uvh elfutils-libelf-devel-*
# rpm -Uvh gcc-4.*
# rpm -Uvh gcc-c++-4.*
# rpm -Uvh glibc-2.*
# rpm -Uvh glibc-common-2.*
# rpm -Uvh glibc-devel-2.*
# rpm -Uvh glibc-headers-2.*
# rpm -Uvh ksh-2*
# rpm -Uvh libaio-0.*
# rpm -Uvh libaio-devel-0.*
# rpm -Uvh libgcc-4.*
# rpm -Uvh libstdc++-4.*
# rpm -Uvh libstdc++-devel-4.*
# rpm -Uvh make-3.*
# rpm -Uvh sysstat-7.*
# rpm -Uvh unixODBC-2.*
# rpm -Uvh unixODBC-devel-2.*
# cd /
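With the local repository configured, an alternative to the individual rpm commands above is to let yum install the same packages and resolve their dependencies. This is a sketch only; the package list mirrors the rpm commands above, and 32-bit (i386) counterparts may still need to be added explicitly where Oracle requires them.
# Alternative: install the prerequisite packages from the local yum repository
yum install -y binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
  gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh libaio libaio-devel \
  libgcc libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel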

Perform the following steps while logged into the nodea machine as the root user.
Ensure that the shared memory file system is big enough for Oracle Automatic Memory Manager to
work.
# umount tmpfs
# mount -t tmpfs shmfs -o size=1500m /dev/shm
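To make the larger /dev/shm persist across reboots, the tmpfs line in /etc/fstab can be amended as well. A sketch using the same 1500m size as the mount command above; adjust the size to your memory target.
# /etc/fstab entry (illustrative)
tmpfs   /dev/shm   tmpfs   size=1500m   0 0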

Add or amend the following lines to the /etc/sysctl.conf file.


fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048586
Run the following command to change the current kernel parameters.
# /sbin/sysctl -p
Add the following lines to the "/etc/security/limits.conf" file.
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
Add the following line to the "/etc/pam.d/login" file, if it does not already exist.
session required pam_limits.so
Disable secure Linux by editing the "/etc/selinux/config" file, making sure the SELINUX flag is set as
follows.
SELINUX=disabled
Alternatively, this can be done using the GUI tool (System > Administration > Security Level and
Firewall). Click on the SELinux tab and disable the feature.
Either configure NTP, or make sure it is not configured so the Oracle Cluster Time Synchronization
Service (ctssd) can synchronize the times of the cluster nodes. In this case, we will deconfigure NTP.
# service ntpd stop
Shutting down ntpd: [OK]
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.org
# rm /var/run/ntpd.pid

If you are using NTP, you must add the "-x" option into the following line in the "/etc/sysconfig/ntpd"
file.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

Then restart NTP.


# service ntpd restart

Create the new groups and users.
# groupadd -g 1000 oinstall
# groupadd -g 1200 dba
# useradd -u 1100 -g oinstall -G dba oracle
# passwd oracle

Create the directories in which the Oracle software will be installed.


# mkdir -p /u01/app/11.2.0/grid
# mkdir -p /u01/app/oracle/product/11.2.0/db_1
# chown -R oracle:oinstall /u01
# chmod -R 775 /u01/

Log in as the oracle user and add the following lines at the end of the .bash_profile file:
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_HOSTNAME=nodea.localdomain; export ORACLE_HOSTNAME


ORACLE_UNQNAME=RAC; export ORACLE_UNQNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME
ORACLE_SID=RAC1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH


CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib;
export CLASSPATH

if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi

Modify multipath configuration to prevent blacklist of new virtual disks


By default the multipath configuration will blacklist newly added virtual disks. To see the multipath
devices for the disks that we added for cluster storage, we need to modify the multipath.conf file so
that it does not by default blacklist all WWID numbers other than the devices used for the operating
system installation.
While logged into nodea, remove the line
wwid "*"

from the blacklist section of the /etc/multipath.conf file. Then reload the multipath daemon.
service multipathd reload
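For reference, after the change the blacklist section of /etc/multipath.conf might look something like the following (a sketch; the devnode patterns shown are typical RHEL 5 defaults and your file may differ):
blacklist {
    # wwid "*"    (line removed so the new EVA virtual disks are not blacklisted)
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
}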

List all multipath devices using multipath -ll.

Below is an example of the output from the multipath -ll command.
mpath2 (36001438002a5642d00011000004e0000) dm-6 HP,HSV300
[size=50G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
\_ 2:0:0:9 sdai 66:32 [active][ready]
\_ 2:0:1:9 sdap 66:144 [active][ready]
\_ 0:0:0:9 sdg 8:96 [active][ready]
\_ 0:0:1:9 sdn 8:208 [active][ready]
\_ round-robin 0 [prio=40][enabled]
\_ 0:0:3:9 sdab 65:176 [active][ready]
\_ 2:0:2:9 sdaw 67:0 [active][ready]
\_ 2:0:3:9 sdbd 67:112 [active][ready]
\_ 0:0:2:9 sdu 65:64 [active][ready]
mpath1 (36001438002a5642d00011000004a0000) dm-5 HP,HSV300
[size=50G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
\_ 0:0:3:8 sdaa 65:160 [active][ready]
\_ 2:0:2:8 sdav 66:240 [active][ready]
\_ 2:0:3:8 sdbc 67:96 [active][ready]
\_ 0:0:2:8 sdt 65:48 [active][ready]
\_ round-robin 0 [prio=40][enabled]
\_ 2:0:0:8 sdah 66:16 [active][ready]
\_ 2:0:1:8 sdao 66:128 [active][ready]
\_ 0:0:0:8 sdf 8:80 [active][ready]
\_ 0:0:1:8 sdm 8:192 [active][ready]
mpath0 (36001438002a5642d00011000000a0000) dm-0 HP,HSV300
[size=100G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
\_ 2:0:0:1 sdac 65:192 [active][ready]
\_ 2:0:1:1 sdaj 66:48 [active][ready]
\_ 0:0:0:1 sda 8:0 [active][ready]
\_ 0:0:1:1 sdh 8:112 [active][ready]
\_ round-robin 0 [prio=40][enabled]
\_ 2:0:2:1 sdaq 66:160 [active][ready]
\_ 2:0:3:1 sdax 67:16 [active][ready]
\_ 0:0:2:1 sdo 8:224 [active][ready]
\_ 0:0:3:1 sdv 65:80 [active][ready]
mpath6 (36001438002a5642d0001100000460000) dm-12 HP,HSV300
[size=50G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
\_ 2:0:0:7 sdag 66:0 [active][ready]
\_ 2:0:1:7 sdan 66:112 [active][ready]
\_ 0:0:0:7 sde 8:64 [active][ready]
\_ 0:0:1:7 sdl 8:176 [active][ready]
\_ round-robin 0 [prio=40][enabled]
\_ 2:0:2:7 sdau 66:224 [active][ready]
\_ 2:0:3:7 sdbb 67:80 [active][ready]
\_ 0:0:2:7 sds 65:32 [active][ready]
\_ 0:0:3:7 sdz 65:144 [active][ready]
mpath5 (36001438002a5642d0001100000420000) dm-11 HP,HSV300
[size=50G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
\_ 2:0:2:6 sdat 66:208 [active][ready]
\_ 2:0:3:6 sdba 67:64 [active][ready]
\_ 0:0:2:6 sdr 65:16 [active][ready]
\_ 0:0:3:6 sdy 65:128 [active][ready]
\_ round-robin 0 [prio=40][enabled]
\_ 2:0:0:6 sdaf 65:240 [active][ready]
\_ 2:0:1:6 sdam 66:96 [active][ready]
\_ 0:0:0:6 sdd 8:48 [active][ready]
\_ 0:0:1:6 sdk 8:160 [active][ready]

mpath4 (36001438002a5642d00011000003e0000) dm-10 HP,HSV300
[size=50G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
\_ 2:0:0:5 sdae 65:224 [active][ready]
\_ 2:0:1:5 sdal 66:80 [active][ready]
\_ 0:0:0:5 sdc 8:32 [active][ready]
\_ 0:0:1:5 sdj 8:144 [active][ready]
\_ round-robin 0 [prio=40][enabled]
\_ 2:0:2:5 sdas 66:192 [active][ready]
\_ 2:0:3:5 sdaz 67:48 [active][ready]
\_ 0:0:2:5 sdq 65:0 [active][ready]
\_ 0:0:3:5 sdx 65:112 [active][ready]
mpath3 (36001438002a5642d00011000000e0000) dm-8 HP,HSV300
[size=100G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
\_ 2:0:2:2 sdar 66:176 [active][ready]
\_ 2:0:3:2 sday 67:32 [active][ready]
\_ 0:0:2:2 sdp 8:240 [active][ready]
\_ 0:0:3:2 sdw 65:96 [active][ready]
\_ round-robin 0 [prio=40][enabled]
\_ 2:0:0:2 sdad 65:208 [active][ready]
\_ 2:0:1:2 sdak 66:64 [active][ready]
\_ 0:0:0:2 sdb 8:16 [active][ready]
\_ 0:0:1:2 sdi 8:128 [active][ready]

You can verify the WWIDs given in the multipath -ll output against the WWN information provided by HP Command View EVA - see Figure 29. Verify the device names of the virtual disks you created on the EVA for the ASM cluster file system. Record the WWID mappings and their respective mpath device names. These WWIDs will be used later in a multipath aliases file. The purpose of the aliases file is to ensure that the mpath devices are always mapped to the same Vdisk (as identified by the WWID).

Figure 29. ASM Virtual disk properties

Create partitions and ASM disks


Create the primary partition for each ASM disk. They are /dev/mpath/mpath* devices. To verify the correct mpathN device name, check the output from multipath -ll to see the mappings from the WWID to the mpath device number. In this solution example, the devices are mpath1, mpath2, mpath4, mpath5, and mpath6.
While logged into nodea, use the following command,
fdisk /dev/mpath/mpath1
to create a primary partition that spans the whole disk. Your responses will be (n, p, 1, return, return, p, w).
Repeat this step for each of the five mpath devices. Creating the partitions will only be done on
nodea.
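If you prefer to script this step, the same keystroke sequence can be piped into fdisk. This is a sketch only; it assumes the five device names above and writes partition tables, so double-check the device list before running it.
# Scripted equivalent of the interactive responses (n, p, 1, return, return, p, w)
for dev in mpath1 mpath2 mpath4 mpath5 mpath6; do
    echo -e "n\np\n1\n\n\np\nw" | fdisk /dev/mpath/$dev
done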
Determine your current kernel version, download/install the appropriate Linux ASM packages, and
configure on both nodea and nodeb.
oracleasm-support
oracleasmlib
oracleasm drivers

The packages can be found at:
http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5-084877.html

Check kernel version.

uname -rm
2.6.18-164.el5 x86_64

Download the appropriate ASMLib RPMs from OTN. In our example, we needed the following:
• oracleasm-support-2.1.3-1.el5.x86_64.rpm
• oracleasmlib-2.0.4-1.el5.x86_64.rpm
• oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm

Install the packages using this command.


rpm -Uvh oracleasm*.rpm

Configure ASMLib using this command.


# oracleasm configure -i
The following questions will determine whether the driver is loaded on boot and its permissions. The
current values will be shown in brackets ('[]'). Hitting <ENTER> without typing an answer will keep
that current value. Ctrl-C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
#
Load the kernel module using the following command.
# /usr/sbin/oracleasm init
Loading module "oracleasm": oracleasm
Mounting ASMlib driver file system: /dev/oracleasm
#
In case of problems loading the kernel module, you can force the drivers to be updated using the
update-driver command option.
# /usr/sbin/oracleasm update-driver

Repeat the ASM software download, ASM package installation and ASM library configuration for
nodeb.
While logged into nodea, create the ASM disk for each of the five devices. Do this only on nodea.
/usr/sbin/oracleasm createdisk DISK1 /dev/mpath/mpath1p1
/usr/sbin/oracleasm createdisk DISK2 /dev/mpath/mpath2p1
/usr/sbin/oracleasm createdisk DISK3 /dev/mpath/mpath4p1
/usr/sbin/oracleasm createdisk DISK4 /dev/mpath/mpath5p1
/usr/sbin/oracleasm createdisk DISK5 /dev/mpath/mpath6p1

Scan for and list the ASM disks.
/usr/sbin/oracleasm scandisks

/usr/sbin/oracleasm listdisks

DISK1
DISK2
DISK3
DISK4
DISK5

Next, on nodeb, make a copy of the /var/lib/multipath/bindings file from nodea. Save it in /tmp and edit the /var/lib/multipath/bindings file on nodeb so that mpath1, 2, 4, 5, and 6 map to the same WWIDs. This ensures those devices will have consistent naming across the cluster.
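As an illustration, after editing, the shared entries in nodeb's /var/lib/multipath/bindings would pair each mpath name with the same WWID shown in the multipath -ll output from nodea (mpath0 is left with nodeb's own root-disk WWID):
mpath1 36001438002a5642d00011000004a0000
mpath2 36001438002a5642d00011000004e0000
mpath4 36001438002a5642d00011000003e0000
mpath5 36001438002a5642d0001100000420000
mpath6 36001438002a5642d0001100000460000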

Note
DO NOT CHANGE the bindings for mpath0, as it is unique to each root file system. An alternative way to accomplish this is to define aliases for these WWIDs in the multipath.conf file.

The following example multipaths section (in multipath.conf) demonstrates how to set up the aliases.


multipaths {
    multipath {
        wwid  36001438002a5642d00011000004a0000
        alias mpath1
    }
    multipath {
        wwid  36001438002a5642d00011000004e0000
        alias mpath2
    }
    multipath {
        wwid  36001438002a5642d00011000003e0000
        alias mpath4
    }
    multipath {
        wwid  36001438002a5642d0001100000420000
        alias mpath5
    }
    multipath {
        wwid  36001438002a5642d0001100000460000
        alias mpath6
    }
}

Modify the ASM scan order and scan exclude settings in /etc/sysconfig/oracleasm to ensure that the device-mapper (mpath) devices are scanned first and the underlying sd devices are excluded:

ORACLEASM_SCANORDER="mapper"
ORACLEASM_SCANEXCLUDE="sd"

Then restart the asm drivers on both nodes.


/etc/init.d/oracleasm restart

Verify on each node that the disks can be seen.
/usr/sbin/oracleasm listdisks

DISK1
DISK2
DISK3
DISK4
DISK5

Verify that all disks show up properly after a reboot and that the multipath bindings (multipath -ll) all map to the same WWID.
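A quick way to confirm this on each node is to query every ASM disk label; a small sketch:
# Confirm each ASM disk label is visible and valid on this node
for d in DISK1 DISK2 DISK3 DISK4 DISK5; do
    /usr/sbin/oracleasm querydisk $d
done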

Install Oracle Clusterware


First, verify network connectivity between nodes.

ping -c 3 nodea
ping -c 3 nodeb
ping -c 3 nodea-priv
ping -c 3 nodeb-priv

The Oracle Clusterware software is part of the Oracle 11gR2 Grid infrastructure kit. It can be found
under the database 11g downloads for Linux, available at
http://www.oracle.com/technetwork/database/enterprise-edition/downloads/112010-linx8664soft-
100572.html
Download and save the Oracle 11gR2 Grid infrastructure kit for Linux. This can be done from either
nodea or nodeb. This example uses nodeb.
Download and run the cluster verification utility.
http://www.oracle.com/technetwork/database/clustering/downloads/index.html

runcluvfy.sh stage -pre crsinst -n nodea,nodeb -verbose

Note that the cluster verification will fail because user equivalence has not been set up yet; in 11gR2
the install process will set up user equivalence for you.
Unzip the Grid infrastructure kit.
unzip linux.x64_11gR2.grid.zip
Perform the Grid installation.
./runInstaller
Figures 30-40 show the installation process. Select Install and Configure Grid Infrastructure for a
Cluster – see Figure 30.

Figure 30. Install and configure grid infrastructure for a cluster option

Select Typical Installation – see figure 31.

Figure 31. Typical Installation

Modify scan name to node-scan – see figure 32.

Figure 32. Set scan name

Set your inventory directory (pick default) – see figure 33.

Figure 33. Inventory directory

Add the additional nodes. In our example, nodea was added, because the install was done from nodeb.

Figure 34. Additional nodes

Click on SSH connectivity, then Setup and then Test – see Figure 35. You will need to provide the oracle user password (oracle was used in this example).

Figure 35. SSH connectivity

Set up the network interfaces for the private and public networks for nodea and nodeb – see figure
36.

Figure 36. Private and public networks

Specify Oracle base, software location, and utilize ASM for the cluster registry storage type. Set your
passwords and Oracle ASM group – see figure 37.

Figure 37. Install location

Select the Disk Group Characteristics. The Disk Group Name used was DATA. Select all five disks.
Redundancy could be external since we used hardware RAID (1+0) for the disks. This example just
happened to use the default of “normal” – see figure 38.

Figure 38. ASM Disk Group

Check the summary and finish the installation – see figure 39.

Figure 39. Summary

Follow the instructions for configuration scripts. See figure 40.

Figure 40. Configuration scripts

Create ASM cluster file system


Utilize the ASM configuration assistant to create the ASM cluster file system for the Oracle Web Server. The asmcmd utility can be used later for browsing, modifying, and further configuring your environment. The environment settings follow:

ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_SID
export ORACLE_HOME
In this example we used nodea.
As the oracle user, set the correct environment to work with ASM.
. oraenv
ORACLE_SID = [RAC2] ? +ASM2
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/oracle
Run the ASM Configuration Assistant.
asmca
It will show the disk group shown in Figure 41.

Figure 41. ASM configuration assistant

Click on the ASM Instances tab.

Figure 42. ASM Instance

Click on the ASM cluster file system tab – see figure 43.

Figure 43. ASM Cluster File Systems tab

Click on Create (see figure 44) and specify the volume name and size (figure 45). In this example, siebelvol1 was used for the volume name and 20.0GB for the size.

Figure 44. Create ASM cluster file system initial screen before changes

Figure 45. Create volume screen before entering a volume name and size

Figure 46. Mount the volume as a general purpose file system

When the system is finished creating the volume, select General Purpose File System (see figure 46) and accept the default mount point, which will be used by your Oracle application for installation. Set the register mount point value to "yes" (automatically mount and unmount at startup/shutdown). Click OK.

Figure 47. Confirmation of creation of ASM cluster file system

Note
The default mount point will be /u01/app/oracle/acfsmounts/data_siebelvol1

Directory names must begin with an alphabetic character and can use underscores as part of the
name, but no spaces or special characters may be used.

Figure 48. ASM cluster file systems list

Open a terminal window to see the new ASM cluster file system. Create a file on one node and verify
that it is seen on the others. If you need to manually mount the ASM cluster file system, use the
following:
/sbin/mount.acfs -o all
To unmount, use /bin/umount -t acfs -a
This ASM cluster file system can now be used for Oracle applications such as an Oracle Web Server,
E-Business Suite or Siebel. This shared file system can be used for application binaries, logs, shared
documents, reports, etc. You can also export the ACFS file system as an NFS file system and NFS
mount it for access by other middle-tier nodes.
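If you choose to NFS-export the ACFS mount, a minimal sketch (using the default mount point shown above; the export options are illustrative and should be tightened to your security requirements) would be:
# /etc/exports entry on the node currently hosting the ACFS mount (illustrative options)
/u01/app/oracle/acfsmounts/data_siebelvol1   *(rw,sync,no_root_squash)
# re-export and verify (the nfs service must be running)
exportfs -a
showmount -e localhost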

Install application in the cluster file system


The example application used in this solution is the Oracle HTTP Server. It can be found on the Oracle
Application Server 10.1.3 companion CD. More information can be found at:
http://www.oracle.com/technetwork/middleware/ias/index-091236.html

Downloads are located at:


http://www.oracle.com/technetwork/middleware/ias/downloads/101310-085449.html

For more information, refer to the following documentation guide on how to install Oracle Application Server on 64-bit Linux:
http://download.oracle.com/docs/cd/B14099_19/linux.1012/install.1012/install/reqs.htm#CIHD
EIFG
Unpack the companion CD.
On nodea,
cpio -idmv < as_linux_x86_companion_cd_101300_disk1.cpio

Steps for all cluster nodes


There are 32-bit installation requirements on Linux x86-64 in order to allow compatibility with the 32-bit Application Server product. The 32-bit version of the web server was utilized in this example. Before beginning the installation, the following steps must be performed on all nodes in the cluster.
• On Linux x86-64, ensure 32-bit packages are in place and always use 32-bit shell emulation with the "linux32 bash" command before running the installer and any other Oracle Application Server commands or scripts.
This soft link must be performed.
# ln -s /usr/lib/libgdbm.so.2.0.0 /usr/lib/libdb.so.2

Otherwise, httpd will give the following error.


"httpd: error while loading shared libraries: libdb.so.2:
cannot open shared object file: No such file or directory"
With this link in place, httpd should start up successfully in RHEL 5.0 or higher.
• Verify the following kernel parameters:

Verify values for these lines in the file /etc/sysctl.conf


kernel.shmmni = 4096
kernel.sem = 256 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 1048576
net.core.rmem_max = 1048576
net.core.wmem_default = 262144
net.core.wmem_max = 262144
net.ipv4.tcp_wmem = 262144 262144 262144
net.ipv4.tcp_rmem = 4194304 4194304 4194304
kernel.shmmax = 4294967295

To put these changes into effect, execute the command.


# sysctl -p

• Change the default group of the user doing the installation to oinstall. In this example, we used siebeluser since this web server can be the base for a Siebel Web Server installation.
usermod -g oinstall siebeluser
• The libXp.so libraries are deprecated and no longer installed as part of RHEL 5.0. The missing libraries are on the RHEL 5.0 installation DVD, in the Server directory:
libXp-1.0.0-8.1.el5.i386.rpm
libXp-1.0.0-8.1.el5.x86_64.rpm

Install both versions using the following:
rpm -Uvh libXp-1.0.0-8.1.el5.i386.rpm
rpm -Uvh libXp-1.0.0-8.1.el5.x86_64.rpm

The installation will be successful for the Oracle HTTPD server, but the HTTPD process will not automatically start until the libdb library is fixed. This process is detailed in Oracle document 415244.1.
To solve the issue with not being able to find the libdb-3.3.so library:
cd /usr/lib
ln -s libdb-4.3.so libdb-3.3.so

Begin installation on nodea


• Prior to running the OUI, the following command must be executed.
$ linux32 bash
The post-installation root.sh script will attempt to overwrite dbhome, oraenv and coraenv; do not overwrite them.
• Make sure the shared file system is writable for the Oracle HTTP Server (OHS) installation.
chmod 777 /u01/app/oracle/acfsmounts/data_siebelvol1
• Run the Web Server installer.
./runInstaller
• Specify the installation location.
/u01/app/oracle/acfsmounts/data_siebelvol1/ohinst

Figure 49. OHS installer

Select the Web Server Services.

Figure 50. Web Server Services

You will see the summary screen shown in Figure 51.

Figure 51. Summary of installation for Web Server Services

• After the installation is complete, verify the httpd server is running. You start the daemon by going to the installation directory and using the opmnctl application.
In this case,
# cd /home/siebeluser/oracle/companionCDHome_1/opmn/bin
# linux32 bash
# ./opmnctl startall
• Test that the web server is up and running. The default port is 7780. In this example:
http://nodea.zko.hp:7780
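The listener can also be checked from the command line on any client that can reach the node, for example (assuming curl is available):
# Request only the response headers from the Oracle HTTP Server
curl -I http://nodea.zko.hp:7780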

Steps for placing an application under the protection of the Oracle Clusterware
There are three steps that need to be completed to successfully place an application under the
protection of Oracle Clusterware.
• Create and register an application VIP. This registration is required if the application is accessed via network clients.
• Create an action program. This program is used by Oracle Clusterware to start, stop and query the status of the protected application. This program can be written in C, Java or almost any scripting language.

• Create and register an Application Profile. The profile describes the application process and the limits covering how it is to be protected.

Once established, Oracle Clusterware will manage the application. It will be started, stopped and
made highly available according to the information and rules contained in the Application Profile
used to register with the Oracle Cluster Registry (OCR). The OCR is used to provide consistent
configuration information of applications to Oracle Clusterware.
On all cluster nodes, add the ORA_CRS_HOME environment variable to root's .bash_profile.
ORA_CRS_HOME=/u01/app/11.2.0/grid
export ORA_CRS_HOME

Create and register an application VIP


Create a profile for the VIP.
# $ORA_CRS_HOME/bin/crs_profile -create webvip -t application -a \
> $ORA_CRS_HOME/bin/usrvip -o oi=eth1,ov=192.168.10.96,on=255.255.0.0

Register the VIP with Oracle Clusterware.


# $ORA_CRS_HOME/bin/crs_register webvip

The Application VIP script has to run as root. Change the owner of the resource.
# $ORA_CRS_HOME/bin/crs_setperm webvip -o root

Allow oracle to execute this script.


# $ORA_CRS_HOME/bin/crs_setperm webvip -u user:oracle:r-x

Start the VIP as the oracle user.


# $ORA_CRS_HOME/bin/crs_start webvip
Attempting to start `webvip` on member `nodea`
Start of `webvip` on member `nodea` succeeded.

You can use ifconfig to verify that this IP has also been assigned to interface eth1.
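For example, the following commands show the Clusterware view of the resource and the interfaces; the VIP typically appears as a secondary eth1:N label carrying 192.168.10.96 (a sketch only):
# Check the webvip resource state, then look for the eth1:N alias carrying the VIP
$ORA_CRS_HOME/bin/crs_stat webvip
/sbin/ifconfig -a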

Create an action program


A simple bash script for starting, stopping, and checking the Oracle HTTPD server is included in
Appendix A. It returns 0 on success and 1 on failure.

Create and register an application


The application profile is a text file of name-value pairs. Use the crs_profile command to create this profile.
oracle@nodea$ $ORA_CRS_HOME/bin/crs_profile -create webapp \
> -t application \
> -d "Oracle HTTPD Server" \
> -r webvip \
> -a /u01/app/oracle/acfsmounts/data_siebelvol1/webapp.sh -o ci=5

In the example above, a profile webapp.cap has been created in the $ORA_CRS_HOME/crs/public
directory. It is an application type profile; the description is “Oracle HTTPD Server”. It has a
dependency on another Clusterware-managed resource called webvip. The action program is called
webapp.sh. The check interval is every 5 seconds.
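For illustration, the registered profile is plain text along these lines (an abbreviated, hypothetical excerpt; the real webapp.cap also contains additional attributes with default values):
NAME=webapp
TYPE=application
DESCRIPTION=Oracle HTTPD Server
ACTION_SCRIPT=/u01/app/oracle/acfsmounts/data_siebelvol1/webapp.sh
CHECK_INTERVAL=5
REQUIRED_RESOURCES=webvip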

The application profile is now registered with the Oracle Clusterware service using the crs_register
command.
oracle@nodea$ $ORA_CRS_HOME/bin/crs_register webapp

Commands to monitor, start, stop, service


The application (the Oracle httpd web server) is now registered with Oracle Clusterware. Oracle
Clusterware now controls the availability of the service.
You can check the status using the crs_stat command.
[oracle@nodea ~]$ $ORA_CRS_HOME/bin/crs_stat webapp
NAME=webapp
TYPE=application
TARGET=OFFLINE
STATE=OFFLINE

You can start the service using crs_start command.


[oracle@nodea ~]$ $ORA_CRS_HOME/bin/crs_start webapp
Attempting to start `webvip` on member `nodea`
Start of `webvip` on member `nodea` succeeded.
Attempting to start `webapp` on member `nodea`
Start of `webapp` on member `nodea` succeeded.

Verify that the service is now online using the crs_stat command.
[oracle@nodea ~]$ $ORA_CRS_HOME/bin/crs_stat webapp
NAME=webapp
TYPE=application
TARGET=ONLINE
STATE=ONLINE on nodea

You can force the relocation of a service with the crs_relocate command.
$ORA_CRS_HOME/bin/crs_relocate -f webapp
Attempting to stop `webapp` on member `nodea`
Stop of `webapp` on member `nodea` succeeded.
Attempting to stop `webvip` on member `nodea`
Stop of `webvip` on member `nodea` succeeded.
Attempting to start `webvip` on member `nodeb`
Start of `webvip` on member `nodeb` succeeded.
Attempting to start `webapp` on member `nodeb`
Start of `webapp` on member `nodeb` succeeded.

[oracle@nodea ~]$ $ORA_CRS_HOME/bin/crs_stat webapp


NAME=webapp
TYPE=application
TARGET=ONLINE
STATE=ONLINE on nodeb

Appendix A: Oracle HTTPD Server action program
#!/bin/bash
#
# Oracle Clusterware action program for the Oracle HTTPD server.
# Returns 0 on success and 1 on failure for each action.

prog="Oracle HTTPD Server"
RETVAL=0

start() {
    echo -n $"Starting $prog: "
    /u01/app/oracle/acfsmounts/data_siebelvol1/ohinst/opmn/bin/opmnctl startall
    if [ "$?" -eq 0 ]; then
        RETVAL=0
    else
        RETVAL=1
    fi
    return $RETVAL
}

stop() {
    echo -n $"Stopping $prog: "
    /u01/app/oracle/acfsmounts/data_siebelvol1/ohinst/opmn/bin/opmnctl stopall
    if [ "$?" -eq 0 ]; then
        RETVAL=0
    else
        RETVAL=1
    fi
    return $RETVAL
}

check() {
    echo -n $"check $prog: "
    /u01/app/oracle/acfsmounts/data_siebelvol1/ohinst/opmn/bin/opmnctl status | grep "Alive"
    if [ "$?" -eq 0 ]; then
        RETVAL=0
    else
        RETVAL=1
    fi
    return $RETVAL
}

# See how we were called.
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    check)
        check
        ;;
    *)
        echo $"Usage: $prog {start|stop|check}"
        exit 1
esac
exit $RETVAL

For more information
HP ProLiant BL460c server: http://www.hp.com/servers/bl460c
HP BladeSystem c7000 Enclosure:
http://h18004.www1.hp.com/products/blades/components/enclosures/c-class/c7000/
HP 4400 Enterprise Virtual Array: www.hp.com/go/eva4400
Booting Linux x86 and x86_64 systems from a Storage Area Network with Device Mapper Multipath:
(http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01513866/c01513866.pdf?ju
mpid=reg_R1002_USEN)
HP Single Point of Connectivity Knowledge (SPOCK) web site: http://www.hp.com/storage/spock
Red Hat Enterprise Linux for ProLiant:
http://h18004.www1.hp.com/products/servers/linux/redhat/rhel/index.html?jumpid=reg_R1002_
USEN
Oracle licensing information can be found at:
http://download.oracle.com/docs/cd/E11882_01/license.112/e10594/editions.htm#CJAHFHBJ

Oracle ASM libraries for Linux can be found at:


http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5-084877.html

The Oracle Clusterware software is part of the Oracle 11gR2 Grid infrastructure kit. It can be found
under the database 11g downloads for Linux.
http://www.oracle.com/technetwork/database/enterprise-edition/downloads/112010-linx8664soft-
100572.html

Oracle Cluster verification utility:


http://www.oracle.com/technetwork/database/clustering/downloads/index.html
Red Hat Enterprise Linux: http://www.redhat.com
HP Oracle Alliances website: http://www.hporacleapps.com/

To help us improve our documents, please provide feedback at www.hp.com/solutions/feedback

© Copyright 2011 Hewlett-Packard Development Company, L.P. The information contained herein is subject to
change without notice. The only warranties for HP products and services are set forth in the express warranty
statements accompanying such products and services. Nothing herein should be construed as constituting an
additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. Oracle and Java are
registered trademarks of Oracle and/or its affiliates.

4AA3-5126ENW, Created June 2011
