
Will an Operating System Upgrade Affect Oracle Clusterware? [ID 743649.1]

Modified: 03-NOV-2011
In this Document: Goal, Solution

Type: HOWTO

Status: PUBLISHED

Applies to:
Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 11.1.0.7 [Release: 10.1 to 11.1]
Generic UNIX
Generic Linux

Goal
Is it necessary to relink Oracle Clusterware 10.2.0.1 to 11.1.0.7 when upgrading the operating system? What is the best procedure to follow when doing so?

This note does not apply to Oracle Clusterware 11gR2, where a relink is always needed after the operating system is patched or upgraded. For details, refer to the Oracle Grid Infrastructure Installation Guide, section 6.3, 'Relinking Oracle Grid Infrastructure for a Cluster Binaries'.
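For reference, the 11gR2 relinking procedure described in that guide follows the general pattern sketched below. This is an outline only; the exact steps, and any additional root scripts, vary by release and platform, so treat the Grid Infrastructure Installation Guide for your version as authoritative. <Grid_home> is a placeholder for the Grid Infrastructure home directory.

As root, unlock the Grid Infrastructure home:

   cd <Grid_home>/crs/install
   perl rootcrs.pl -unlock

As the Grid Infrastructure owner, relink the binaries:

   $ export ORACLE_HOME=<Grid_home>
   $ <Grid_home>/bin/relink

As root, re-lock the home and restart the stack:

   cd <Grid_home>/crs/install
   perl rootcrs.pl -patch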

Solution
The Oracle Clusterware code itself cannot be relinked. The client shared libraries can be relinked; however, this is only necessary if problems are encountered after upgrading or patching the operating system. In most cases relinking will not be needed.

This note provides the methodology for relinking the client shared libraries on those occasions when it is necessary. The operating system on a cluster running Oracle Clusterware can be upgraded either by shutting down all servers and upgrading them together, or by performing a rolling upgrade. The method given below is for a rolling upgrade; the procedure for a simultaneous upgrade of all servers is similar, except that all services on all nodes are shut down at the same time.
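Before starting on the first node, it may be worth confirming that the stack is healthy and recording which instances and services run where, so the layout can be compared after each node is done. This is a suggested sanity check rather than part of the procedure in this note; <dbname> is a placeholder.

Check that the Clusterware daemons (CSS, CRS, EVM) are healthy:

   $CRS_HOME/bin/crsctl check crs

List the nodes in the cluster:

   $CRS_HOME/bin/olsnodes -n

Show which instances are running and on which nodes:

   $ srvctl status database -d <dbname>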

Oracle supports a rolling upgrade of the operating system in a cluster when both the old and the new versions of the operating system are certified with the version of Oracle Database you are running. Note that mixed operating system versions are only supported for the duration of an upgrade (i.e. 1-2 hours); the cluster should never be operated with mixed operating system versions for an extended period.

To perform a rolling upgrade:

1. Ensure that the services used by the application are available on more than one instance:

   $ srvctl status service -d <dbname>

   If you are running more than one version of the database under the Clusterware, repeat each srvctl step from the $ORACLE_HOME of each installed version.

2. Stop and disable the services for the instances running on the node being worked on:

   $ srvctl stop service -d <db_unique_name> -s <service_name> -i <inst_name>
   $ srvctl disable service -d <db_unique_name> -s <service_name> -i <inst_name>

   This stops the services on the specified instance only.

3. Use srvctl to shut down (shutdown immediate) the instances on the current node:

   $ srvctl stop instance -d <db_unique_name> -i <instance_name> -o immediate

4. Stop any listeners running from the $ORACLE_HOME or from the $ASM_HOME:

   $ srvctl stop listener -n <nodename> -l <listener_list>

5. As root, shut down the Oracle Clusterware stack on the current node:

   $CRS_HOME/bin/crsctl stop crs

6. As root, disable the Oracle Clusterware stack on the current node:

   $CRS_HOME/bin/crsctl disable crs

7. Perform the operating system upgrade. Ensure that a full backup of the server has been taken before the upgrade is done. If the method outlined in this note is being followed, it is essential that the Oracle software is in the same location after the upgrade as it was before.

8. Relink the Oracle RDBMS code as the RDBMS owner:

   $ cd $ORACLE_HOME/bin
   $ relink all

9. Check that the following files, where they exist in $CRS_HOME/lib and $CRS_HOME/lib32, are owned by the owner of the Clusterware installation (this usually defaults to oracle):

   clntsh.map
   clntst_1.lis
   clntst_2.lis
   clntst.lis
   libclntsh.so
   libclntsh.so.10.1
   libclntst10.so
   libclntsh.so.11.1
   libclntst11.so
   libclntst11.a

   Keep a note of their original settings so that you can change them back later.

10. Relink the Oracle Clusterware client shared libraries as the Clusterware owner:

   $ cd $CRS_HOME/bin
   $ ./genclntsh

11. If you changed the permissions on the files in step 9, change them back to their original settings.

12. As root, enable the Oracle Clusterware stack on the current node:

   $CRS_HOME/bin/crsctl enable crs

13. As root, start the Oracle Clusterware stack on the current node:

   $CRS_HOME/bin/crsctl start crs

14. Re-enable and restart the services on the instances that have been restarted:

   $ srvctl enable service -d <db_unique_name> -s <service_name> -i <instance_name>
   $ srvctl start service -d <db_unique_name> -s <service_name> -i <instance_name>

15. If this is a rolling upgrade, repeat the steps on the next node until all nodes have been upgraded.

For further information, read the white paper 'Best Practices for Optimizing Availability During Planned Maintenance Using Oracle Clusterware and Oracle Real Application Clusters'.
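As an illustration only, the per-node sequence above might look as follows with sample values: a node node1 running instance ORCL1 of database ORCL, a single service OLTP, and a listener LISTENER_NODE1. All of these names are hypothetical placeholders; substitute the values from your own srvctl configuration, and note that $CRS_HOME must point at your actual Clusterware home.

As the RDBMS owner, before the operating system upgrade:

   $ srvctl status service -d ORCL
   $ srvctl stop service -d ORCL -s OLTP -i ORCL1
   $ srvctl disable service -d ORCL -s OLTP -i ORCL1
   $ srvctl stop instance -d ORCL -i ORCL1 -o immediate
   $ srvctl stop listener -n node1 -l LISTENER_NODE1

As root:

   $CRS_HOME/bin/crsctl stop crs
   $CRS_HOME/bin/crsctl disable crs

Upgrade the operating system and reboot. Then, as the RDBMS owner:

   $ cd $ORACLE_HOME/bin
   $ relink all

As the Clusterware owner, check ownership of the client library files and relink them:

   $ ls -l $CRS_HOME/lib/clntsh.map $CRS_HOME/lib/libclntsh.so*
   $ cd $CRS_HOME/bin
   $ ./genclntsh

As root:

   $CRS_HOME/bin/crsctl enable crs
   $CRS_HOME/bin/crsctl start crs

Finally, as the RDBMS owner:

   $ srvctl enable service -d ORCL -s OLTP -i ORCL1
   $ srvctl start service -d ORCL -s OLTP -i ORCL1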

Update History
2009-Jan-29: set <CRS_HOME> correctly.
2009-Mar-10: altered syntax on point 2 and added syntax to stop the listener.
2009-Mar-12: altered guidelines on when a relink is needed; added permissions checks.

Related

Products

Oracle Database Products > Oracle Database > Oracle Database > Oracle Server Enterprise Edition

Keywords

CLUSTERWARE; GENCLNTSH; ORACLE DATABASE; REAL APPLICATION CLUSTERS


