
Verify Requirements

The cluster verify utility – ‘cluvfy’ – is used to determine that the new node is, in fact, ready to be added to the
cluster.

Verify New Node (HWOS)

From an existing node, run 'cluvfy' to ensure that 'node2' – the cluster node to be added – is ready from a
hardware and operating system perspective:

# su - grid
$ export GRID_HOME=/u01/app/11.2.0/grid
$ $GRID_HOME/bin/cluvfy stage -post hwos -n node2

If successful, the command will end with: ‘Post-check for hardware and operating system setup was
successful.’ Otherwise, the script will print meaningful error messages.
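
Since the output can be lengthy, it may be convenient to capture it to a log file and then search for the success message; a minimal sketch (the log file path is arbitrary):

$ $GRID_HOME/bin/cluvfy stage -post hwos -n node2 | tee /tmp/cluvfy_post_hwos.log
$ grep "Post-check for hardware and operating system setup" /tmp/cluvfy_post_hwos.log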

Verify Peer (REFNODE)

$ $GRID_HOME/bin/cluvfy comp peer -refnode node1 -n node2 -orainv oinstall -osdba dba -verbose

In this case, existing node 'node1' is compared with the new node 'node2', comparing such things as the
existence/version of required binaries, kernel settings, etc. Invariably, the command will report that
'Verification of peer compatibility was unsuccessful.' This is because the command simply looks for
mismatches between the systems in question, and certain properties will inevitably differ. For example, the
amount of free space in /tmp rarely matches exactly. Therefore, certain errors from this command can be
ignored, such as 'Free disk space for "/tmp"', and so forth. Differences in kernel settings
and OS packages/rpms should, however, be addressed.
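
One way to chase down reported kernel-setting mismatches is to compare the values directly on both nodes; a rough sketch (the parameter list below is only illustrative):

$ ssh node1 /sbin/sysctl kernel.shmmax kernel.shmall kernel.sem fs.file-max
$ ssh node2 /sbin/sysctl kernel.shmmax kernel.shmall kernel.sem fs.file-max

Likewise, running 'rpm -q <package>' on each node can confirm that the required OS packages are present at matching versions.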

Verify New Node (NEW NODE PRE)

The cluster verify utility – ‘cluvfy’ – is used to determine the integrity of the cluster and whether it is ready for
a new node. From an existing node, run ‘cluvfy’ to verify the integrity of the cluster:

$GRID_HOME/bin/cluvfy stage -pre nodeadd -n node2 -fixup -verbose

If your shared storage is ASM using asmlib, you may get an error similar to the following due to Bug
#10310848:

ERROR:
PRVF-5449: Check of Voting Disk location "ORCL:CRS1(ORCL:CRS1)" failed on the
following nodes:
node2: No such file or directory
PRVF-5431: Oracle Cluster Voting Disk configuration check failed

The aforementioned error can be safely ignored, whereas other errors should be addressed before continuing.
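
If you would like to confirm that the voting disks are in fact intact before dismissing the error, they can be listed from an existing node (an optional sanity check, not part of the documented procedure):

$ $GRID_HOME/bin/crsctl query css votedisk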

Extend Clusterware

The clusterware software will be extended to the new node.

Run “addNode.sh”

From an existing node, run 'addNode.sh' to extend the clusterware to the new node 'node2':

$ export IGNORE_PREADDNODE_CHECKS=Y
$ $GRID_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={node2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node2-vip}"

Since I am using ASM with asmlib on 11.2.0.2.0, I experienced the aforementioned Bug #10310848. Therefore, I
had to set the 'IGNORE_PREADDNODE_CHECKS' environment variable so that the command would run to
completion; if you did not experience the bug in prior steps, then you can omit it.
If the command is successful, you should see output similar to the following:
The following configuration scripts need to be executed as the "root" user in each cluster node.
/u01/app/oraInventory/orainstRoot.sh #On nodes node2
/u01/app/11.2.0/grid/root.sh #On nodes node2
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/app/11.2.0/grid was successful.
Run the 'root.sh' scripts on the new node as directed:

# /u01/app/oraInventory/orainstRoot.sh
# /u01/app/11.2.0/grid/root.sh

If successful, the clusterware daemons, the listener, the ASM instance, etc. should be started by the 'root.sh'
script:

$ $GRID_HOME/bin/crs_stat -t -v -c node2
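
Note that 'crs_stat' is deprecated in 11.2; the same resource information can also be viewed with 'crsctl':

$ $GRID_HOME/bin/crsctl stat res -t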

Verify New Node (NEW NODE POST)

Again, the cluster verify utility – ‘cluvfy’ – is used to verify that the clusterware has been extended to the new
node properly:

$ $GRID_HOME/bin/cluvfy stage -post nodeadd -n node2 -verbose

A successful run should yield ‘Post-check for node addition was successful.’

Extend Oracle Database Software

Run “addNode.sh”

From an existing node – as the database software owner – run the following commands to extend the Oracle
database software to the new node 'node2':

$ echo $ORACLE_HOME

$ $ORACLE_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={node2}"

If the command is successful, you should see output similar to the following:
The following configuration scripts need to be executed as the "root" user in each cluster node.
/u01/app/oracle/product/11.2.0/db_1/root.sh #On nodes node2
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/app/oracle/product/11.2.0/db_1 was successful.
Run the ‘root.sh’ commands on the new node as directed:

# /u01/app/oracle/product/11.2.0/db_1/root.sh
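
Optionally, the newly attached home can be reviewed from 'node2' with OPatch, which simply lists the inventory without applying anything (assumes the same $ORACLE_HOME path as above):

$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
$ $ORACLE_HOME/OPatch/opatch lsinventory
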
Change Ownership of 'oracle'

If you are using job/role separation, then you will have to change the group and permissions of the 'oracle'
executable in the newly created $ORACLE_HOME on 'node2':

# export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
# chgrp asmadmin $ORACLE_HOME/bin/oracle
# chmod 6751 $ORACLE_HOME/bin/oracle
# ls -lart $ORACLE_HOME/bin/oracle
-rwsr-s--x 1 oracle asmadmin 228886450 Feb 21 11:33 /u01/app/oracle/product/11.2.0/db_1/bin/oracle

The end goal is for the ownership and permissions of the 'oracle' binary to match those on the other nodes in the cluster.
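
A quick way to compare is to list the binary on an existing node and on the new node side by side (a simple sanity check; paths match those used above):

$ ssh node1 ls -l /u01/app/oracle/product/11.2.0/db_1/bin/oracle
$ ssh node2 ls -l /u01/app/oracle/product/11.2.0/db_1/bin/oracle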

Verify Administrative Privileges (ADMPRV)

Verify administrative privileges across all nodes in the cluster for the Oracle database software home:

$ $ORACLE_HOME/bin/cluvfy comp admprv -o db_config -d $ORACLE_HOME -n node1,node2 -verbose

A successful run should yield ‘Verification of administrative privileges was successful.’

Add Instance to Clustered Database

A database instance will be established on the new node. Specifically, an instance named 'orcl2' will be added
to 'orcl' – a pre-existing clustered database.
Satisfy Node Instance Dependencies

Satisfy all node instance dependencies, such as the password file, 'init.ora' parameters, etc.

From the new node 'node2', run the following commands to create the password file, 'init.ora' file, and
'oratab' entry for the new instance:

$ echo $ORACLE_HOME

/u01/app/oracle/product/11.2.0/db_1
$ cd $ORACLE_HOME/dbs
$ mv initorcl1.ora initorcl2.ora
$ mv orapworcl1 orapworcl2
$ echo "orcl2:$ORACLE_HOME:N" >> /etc/oratab
From a node with an existing instance of ‘orcl,’ issue the following commands to create the needed public log
thread, undo tablespace, and ‘init.ora’ entries for the new instance:
$ export ORACLE_SID=orcl1
$ . oraenv
The Oracle base remains unchanged with value /u01/app/oracle

$ sqlplus "/ as sysdba"

SQL> alter database add logfile thread 2 group 4 ('+DATA','+FRA') size 100M, group 5
('+DATA','+FRA') size 100M, group 6 ('+DATA','+FRA') size 100M;

SQL> alter database enable public thread 2;

SQL> create undo tablespace undotbs2 datafile '+DATA' size 200M;

SQL> alter system set undo_tablespace=undotbs2 scope=spfile sid='orcl2';

SQL> alter system set instance_number=2 scope=spfile sid='orcl2';

SQL> alter system set cluster_database_instances=2 scope=spfile sid='*';
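
Optionally, the new redo thread and undo tablespace can be confirmed from the same session before proceeding:

SQL> select thread#, group#, bytes/1024/1024 as mb, status from v$log order by thread#, group#;
SQL> select tablespace_name, status from dba_tablespaces where tablespace_name = 'UNDOTBS2';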

Update Oracle Cluster Registry (OCR)

The OCR will be updated to account for a new instance – 'orcl2' – being added to the 'orcl' cluster database, as
well as changes to a service – 'ora.orcl.serv1.svc'.

Add the 'orcl2' instance to the 'orcl' database and verify:

$ srvctl add instance -d orcl -i orcl2 -n node2

$ srvctl status database -d orcl -v

$ srvctl config database -d orcl

Update the existing service 'ora.orcl.serv1.svc' to include the new instance as a preferred instance, and verify:

$ srvctl add service -d orcl -s ora.orcl.serv1.svc -r orcl2 -u

$ srvctl config service -d orcl -s ora.orcl.serv1.svc
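
Optionally, the status of the service can be checked as well; it will not run on 'orcl2' until the instance is started in the next step:

$ srvctl status service -d orcl -s ora.orcl.serv1.svc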


Start the Instance

Now that all the prerequisites have been satisfied and the OCR has been updated, the 'orcl2' instance will be started.
Start the newly created instance – 'orcl2' – and verify:

$ srvctl start instance -d orcl -i orcl2


$ srvctl status database -d orcl -v
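
As a final check, the new instance can also be confirmed from SQL*Plus on any node (an optional verification step):

$ sqlplus "/ as sysdba"
SQL> select inst_id, instance_name, host_name, status from gv$instance order by inst_id;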
