
A Practical Guide to Oracle 10g RAC: It's REAL Easy!

Gavin Soorma, Emirates Airline, Dubai - Session #106

Agenda
RAC concepts
Planning for a RAC installation
Pre-installation steps
Installation of 10g R2 Clusterware
Installation of 10g R2 software
Creation of a RAC database
Configuring Services and TAF
Migration of a single instance to RAC

What Is a RAC Cluster?


(Diagram: cluster nodes linked by an interconnect, each node running an instance, all attached to a shared disk subsystem holding the database.)

Database vs Instance
A RAC cluster consists of:
One or more instances
One database residing on shared storage

(Diagram: Instance 1 on Node 1 and Instance 2 on Node 2, each with a local disk, connected by the interconnect and sharing the database on shared storage.)

Why RAC?
High Availability - survive node and instance failures
Scalability - add or remove nodes when needed
Pay as you grow - harness the power of multiple low-cost computers
Enable Grid Computing
DBAs have their own vested interests!

What is Real Application Clusters?


Two or more interconnected, but independent, servers
One instance per node
Multiple instances accessing the same database
Database files stored on disks physically or logically connected to each node, so that every instance can read from or write to them

A RAC Database - what's different?


Contents are similar to a single-instance database, except:
Create and enable one redo thread per instance
If using Automatic Undo Management, one UNDO tablespace per instance is also required
Additional cluster-specific data dictionary views are created by running the script $ORACLE_HOME/rdbms/admin/catclust.sql

New background processes


Cluster specific init.ora parameters

RAC specific Background Processes


LMON - Global Enqueue Service Monitor
LMD0 - Global Enqueue Service Daemon
LMSx - Global Cache Server Processes
LCK0 - Lock Process
DIAG - Diagnosability Process
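The five processes above can be picked out of a process listing on a RAC node. The sketch below filters a captured sample ps listing; the PIDs and the instance name racdb1 are illustrative, and on a live node you would pipe the real `ps -ef` output through the same egrep pattern:

```shell
# Sample 'ps -ef' output from a hypothetical RAC node (illustrative PIDs;
# on a live node run 'ps -ef' instead of echoing this variable)
ps_output='oracle  3401     1  0 10:00 ?  00:00:01 ora_lmon_racdb1
oracle  3403     1  0 10:00 ?  00:00:01 ora_lmd0_racdb1
oracle  3405     1  0 10:00 ?  00:00:05 ora_lms0_racdb1
oracle  3407     1  0 10:00 ?  00:00:00 ora_lck0_racdb1
oracle  3409     1  0 10:00 ?  00:00:00 ora_diag_racdb1
oracle  3411     1  0 10:00 ?  00:00:00 ora_pmon_racdb1'

# Keep only the five RAC-specific background processes (pmon is filtered out,
# since it exists in single-instance databases too)
rac_procs=$(echo "$ps_output" | grep -E 'ora_(lmon|lmd0|lms[0-9]+|lck0|diag)_')
echo "$rac_procs"
```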

RAC init.ora Parameters


*.db_cache_size=113246208
*.java_pool_size=4194304
*.db_name='racdb'
racdb2.instance_number=2
racdb1.instance_number=1
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
racdb2.thread=2
racdb1.thread=1
*.undo_management='AUTO'
racdb2.undo_tablespace='UNDOTBS2'
racdb1.undo_tablespace='UNDOTBS1'
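Entries prefixed `*.` apply to every instance, while entries prefixed with a SID apply to that instance only. A small sketch of this rule, using a hypothetical pfile copy under /tmp (Oracle itself resolves precedence when reading the spfile; the grep below only illustrates which lines an instance sees):

```shell
# A pfile fragment with both global ('*.') and SID-prefixed entries,
# taken from the parameters above (the /tmp path is illustrative)
cat > /tmp/pfile_demo.ora <<'EOF'
*.undo_management='AUTO'
racdb1.thread=1
racdb2.thread=2
racdb1.undo_tablespace='UNDOTBS1'
racdb2.undo_tablespace='UNDOTBS2'
EOF

# Lines visible to instance racdb1: the global '*.' settings plus the
# lines carrying its own SID prefix
sid=racdb1
effective=$(grep -E "^(\*|$sid)\." /tmp/pfile_demo.ora | sed -E "s/^(\*|$sid)\.//")
echo "$effective"
```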

10g RAC Implementation Steps


Hardware - Network Interface Cards, HBA cards etc
Interconnects - physical cable, Gigabit Ethernet switch
Network - Virtual IP addresses
Plan the type of shared storage (ASM, OCFS etc)
Download the latest RPMs - ASM, OCFS
Install Clusterware (Cluster Ready Services)
Install 10g RAC software
Create the RAC database
Configure Services and TAF (Transparent Application Failover)

RAC Database Storage


Oracle files (control file, data files, redo log files)
Server Parameter File (SPFILE)
Archive log files
Flash Recovery Area
Voting File
Oracle Cluster Registry (OCR) File
OCFS version 2.x will support a shared ORACLE_HOME

Oracle Cluster Registry File


Contains important metadata about the RAC instances and nodes that make up the cluster
Needs to be on a shared storage device
About 100MB in size
In Oracle 10g Release 2, higher availability for this critical component is provided by enabling a second OCR file location

Voting Disk File


Contains information about cluster membership
Used by CRS to avoid split-brain scenarios if any node loses contact over the interconnect
Must be located on shared storage
Typically about 20MB in size
Can be mirrored in Oracle 10g Release 2

Shared Storage Considerations


Mandatory for: datafiles, redo log files, control files, SPFILE
Optional for: archive log files, executables, binaries, network configuration files
Supported shared storage: NAS (network attached storage), SAN (storage area network)
Supported file storage: raw volumes, Cluster File System, ASM

Shared Storage Considerations


Archive log files cannot be placed on raw devices
CRS files (Voting Disk and Cluster Registry (OCR)) cannot be stored on ASM
Software is installed on a regular file system local to each node
Database files can exist on raw devices, ASM or Cluster File System (OCFS)

Network Requirements
Each node must have at least two network adapters: one for the public network interface and one for the private network interface (the interconnect)
The public network adapter must support TCP/IP
For the private network, the interconnect should preferably be a Gigabit Ethernet switch that supports UDP; this is used for the Cache Fusion inter-node connection
Host names and IP addresses associated with the public interface should be registered in DNS and /etc/hosts

IP Address Requirements
For each public network interface, an IP address and host name registered in the DNS
One unused virtual IP address and associated host name registered in the DNS for each node to be used in the cluster
A private IP address and optional host name for each private interface
The virtual IP addresses are used in the network configuration files

Virtual IP Addresses
VIPs are used to facilitate faster failover in the event of a node failure
Each node has not only its own statically assigned IP address but also a virtual IP address assigned to it
The listener on each node listens on the virtual IP, and client connections also come in via this virtual IP
Without VIPs, clients would have to wait for a long TCP/IP timeout before getting an error message or TCP reset from nodes that have died

Sample /etc/hosts file


racdb1:/opt/oracle> cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
#127.0.0.1 itlinuxbl53.hq.emirates.com itlinuxbl53 localhost.localdomain localhost
57.12.70.59   itlinuxbl54.hq.emirates.com     itlinuxbl54
57.12.70.58   itlinuxbl53.hq.emirates.com     itlinuxbl53
10.20.176.74  itlinuxbl54-pvt.hq.emirates.com itlinuxbl54-pvt
10.20.176.73  itlinuxbl53-pvt.hq.emirates.com itlinuxbl53-pvt
57.12.70.80   itlinuxbl54-vip.hq.emirates.com itlinuxbl54-vip
57.12.70.79   itlinuxbl53-vip.hq.emirates.com itlinuxbl53-vip
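A quick sanity check on a hosts file like the one above: every node should appear three times, once each for the public, private (-pvt) and virtual (-vip) interfaces. A sketch against a copy of the sample entries (the /tmp path is illustrative):

```shell
# Copy of the sample /etc/hosts entries from the slide (illustrative path)
cat > /tmp/rac_hosts <<'EOF'
57.12.70.59   itlinuxbl54.hq.emirates.com     itlinuxbl54
57.12.70.58   itlinuxbl53.hq.emirates.com     itlinuxbl53
10.20.176.74  itlinuxbl54-pvt.hq.emirates.com itlinuxbl54-pvt
10.20.176.73  itlinuxbl53-pvt.hq.emirates.com itlinuxbl53-pvt
57.12.70.80   itlinuxbl54-vip.hq.emirates.com itlinuxbl54-vip
57.12.70.79   itlinuxbl53-vip.hq.emirates.com itlinuxbl53-vip
EOF

# Each node name should match three lines: public, -pvt and -vip
for node in itlinuxbl53 itlinuxbl54; do
  echo "$node: $(grep -c "$node" /tmp/rac_hosts) entries"
done
```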

Setup User equivalence using SSH


Enables installation on all nodes in the cluster by launching OUI on just one node
OUI will not prompt for a password
OUI will use ssh or rcp to copy files to the remote nodes
ssh-keygen -t dsa
cat id_dsa.pub > authorized_keys

Copy authorized_keys from this node to the other nodes
Run the same commands on all nodes to generate the authorized_keys file
Finally, all nodes will have the same authorized_keys file
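The merge itself is just concatenation of the nodes' public keys. The sketch below simulates it with placeholder key strings rather than real ssh-keygen output, so the mechanics are visible without generating keys; in a real setup each id_dsa.pub comes from ssh-keygen -t dsa on its own node:

```shell
# Placeholder public keys standing in for each node's id_dsa.pub
# (NOT real key material - real keys come from ssh-keygen -t dsa)
mkdir -p /tmp/ssh_demo
echo 'ssh-dss PLACEHOLDER_KEY_1 oracle@itlinuxbl53' > /tmp/ssh_demo/id_dsa_bl53.pub
echo 'ssh-dss PLACEHOLDER_KEY_2 oracle@itlinuxbl54' > /tmp/ssh_demo/id_dsa_bl54.pub

# Node 1 starts the file, node 2 appends its key; the merged file is then
# copied back (scp in real life) so every node ends up with the same file
cat /tmp/ssh_demo/id_dsa_bl53.pub >  /tmp/ssh_demo/authorized_keys
cat /tmp/ssh_demo/id_dsa_bl54.pub >> /tmp/ssh_demo/authorized_keys
wc -l < /tmp/ssh_demo/authorized_keys
```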

Setting up User Equivalence


ITLINUXBL53
ssh-keygen -t dsa
cat id_dsa.pub > authorized_keys
scp authorized_keys itlinuxbl54:/opt/oracle

ITLINUXBL54
ssh-keygen -t dsa
cat id_dsa.pub >> authorized_keys
scp authorized_keys itlinuxbl53:/opt/oracle/.ssh
ssh itlinuxbl54 hostname
ssh itlinuxbl53 hostname

Configure the hang check timer


Monitors the Linux kernel for hangs
If a hang occurs, the module reboots the node
hangcheck_tick defines how often (in seconds) the module checks for hangs
hangcheck_margin defines how long the module waits for a response from the kernel

[root@itlinuxbl53 rootpre]# /sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
Using /lib/modules/2.4.21-37.ELsmp/kernel/drivers/char/hangcheck-timer.o

[root@itlinuxbl53 rootpre]# lsmod | grep hang
hangcheck-timer         2672   0  (unused)
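insmod loads the module only for the current boot. To make the setting persistent across reboots on a RHEL 3 / 2.4 kernel, the module options can be placed in the module configuration file; a sketch, assuming the same tick and margin values as above:

```
# /etc/modules.conf (2.4 kernels) - load hangcheck-timer with these options
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
```

The module can then be loaded with modprobe hangcheck-timer (for example from a boot script) and will pick up these options automatically.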

Case Study Environment


Operating System: Linux x86_64 RHEL 3 AS
Hardware: HP BL25p blade servers with 2 CPUs (AMD 64-bit processors) and 4 GB of RAM
Oracle Software: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit
Two Node Cluster: ITLINUXBL53.hq.emirates.com, ITLINUXBL54.hq.emirates.com
Shared Storage: OCFS for the Cluster Registry and Voting Disks; ASM for all other database-related files
Database Name: racdb
Instance Names: racdb1, racdb2

Oracle 10g CRS Install


Oracle 10g Clusterware - Cluster Ready Services
Oracle's own full-stack clusterware, coupled with RAC
Replaces the earlier dependency on third-party clusterware
Oracle CRS replaces the Oracle Cluster Manager (ORACM) of Oracle9i RAC
CRS must be installed prior to the installation of Oracle RAC

CRS Installation Key Steps


Voting Disk - about 20MB (the Oracle9i Quorum Disk)
- Maintains the node heartbeat and avoids the node split-brain syndrome
Oracle Cluster Registry - about 100MB
- Stores cluster configuration and cluster database information
Private Interconnect Information
- Select the network interface for internode communication
- A Gigabit Ethernet interface is recommended
Run root.sh
- Starts the CRS daemon processes: evmd, cssd, crsd

Oracle Cluster File System


Shared disk cluster file system for Linux and Windows
Improves management of data by eliminating the need to manage raw devices
Can be downloaded from OTN: http://oss.oracle.com/projects/ocfs
OCFS 2.1.2 provides support on Linux for the Oracle software installation as well

Install the OCFS RPMs


[root@itlinuxbl54 recyclebin]# rpm -ivh ocfs-support-1.1.5-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:ocfs-support           ########################################### [100%]
[root@itlinuxbl54 recyclebin]# rpm -ivh ocfs-tools-1.0.10-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:ocfs-tools             ########################################### [100%]
[root@itlinuxbl54 recyclebin]# rpm -ivh ocfs-2.4.21-EL-smp-1.0.14-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:ocfs-2.4.21-EL-smp     ########################################### [100%]

OCFSTOOL Generate Config

The OCFS Configuration File


[root@itlinuxbl53 etc]# cat /etc/ocfs.conf
#
# ocfs config
# Ensure this file exists in /etc
#
node_name = itlinuxbl53.hq.emirates.com
ip_address = 10.20.176.73
ip_port = 7000
comm_voting = 1
guid = 5D9FF90D969078C471310016353C6B23

OCFSTOOL Format Partition

OCFSTOOL Mount File System


ASM Architecture

(Diagram: each clustered server runs an ASM instance alongside its Oracle DB instance; ASM disk groups form a clustered pool of storage for the RAC database.)

Install the ASMLIB RPMs


[root@itlinuxbl53 recyclebin]# rpm -ivh oracleasm-support-2.0.1-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [100%]
[root@itlinuxbl53 recyclebin]# rpm -ivh oracleasm-2.4.21-37.ELsmp-1.0.4-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:oracleasm-2.4.21-37.ELs########################################## [100%]
[root@itlinuxbl53 recyclebin]# rpm -ivh oracleasmlib-2.0.1-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:oracleasmlib           ########################################### [100%]

Creating the ASM Disks


[root@itlinuxbl53 init.d]# ./oracleasm createdisk VOL1 /dev/sddlmab1 Marking disk "/dev/sddlmab1" as an ASM disk: [ OK ]

[root@itlinuxbl53 init.d]# ./oracleasm createdisk VOL2 /dev/sddlmac1 Marking disk "/dev/sddlmac1" as an ASM disk: [ OK ]
[root@itlinuxbl53 init.d]# ./oracleasm createdisk VOL3 /dev/sddlmaf1 Marking disk "/dev/sddlmaf1" as an ASM disk: [ OK ]

[root@itlinuxbl53 init.d]# ./oracleasm listdisks
VOL1
VOL2
VOL3


[root@itlinuxbl54 init.d]# ./oracleasm scandisks Scanning system for ASM disks: [ OK ]

The Cluster Verification Utility (cluvfy)


Performs pre-installation and post-installation checks at various stages of the RAC installation
Available in 10g Release 2
./runcluvfy.sh comp nodereach -n itlinuxbl53,itlinuxbl54 -verbose
./runcluvfy.sh stage -pre crsinst -n itlinuxbl53,itlinuxbl54 -verbose
./runcluvfy.sh comp nodecon -n itlinuxbl53,itlinuxbl54 -verbose
./runcluvfy.sh stage -post hwos -n itlinuxbl53 -verbose

Install the cvuqdisk RPM for cluvfy


[root@itlinuxbl53 root]# cd /opt/oracle/cluster_cd/clusterware/rpm
[root@itlinuxbl53 rpm]# ls
cvuqdisk-1.0.1-1.rpm

[root@itlinuxbl53 rpm]# export CVUQDISK_GRP=dba


[root@itlinuxbl53 rpm]# rpm -ivh cvuqdisk-1.0.1-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]

10g Clusterware Installation

Prerequisites Validation

Configuring the 10g RAC Cluster

Configuring the 10g RAC Cluster

Configuring the Network Interfaces

Oracle Cluster Registry (OCR)

Mirroring the OCR

Voting Disk

10g Clusterware OUI Remote Installation

10g Clusterware root.sh

Configuration Assistants

10g RAC phase one complete!

Verifying the Oracle Clusterware Installation


Check node reachability
[oracle@itlinuxbl53 bin]$ ./olsnodes -n
itlinuxbl53     1
itlinuxbl54     2
Check for the Clusterware processes
ps -ef | grep crs
ps -ef | grep css
ps -ef | grep evm
Check the health of the CRS stack
./crsctl check crs

10g RAC Software Installation



Remote Node Installation


Creating the RAC Database using DBCA

Configuring ASM

Creating the ASM Instances


Creating the ASM Disk Groups


Creating the RAC Database using DBCA

DBCA is also Cluster aware


Enabling Flashback & Archive logging


Archive log files should preferably be located on shared storage - in this case, the ASM disk group
SQL> alter system set db_recovery_file_dest_size=2G scope=both sid='*';
SQL> alter system set db_recovery_file_dest='+DG1' scope=both sid='*';
SQL> alter system set log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST' scope=both sid='*';
SQL> shutdown immediate;

Note: shut down the other instances as well

Enabling Flashback & Archive logging


Connect to one of the instances in the RAC cluster and mount the database
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;

Startup other instances in the RAC cluster as well


SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     15
Next log sequence to archive   16
Current log sequence           16

Services
Logically group consumers who share common attributes such as workload, a database schema or some common application functionality
Manage client-side load balancing
Manage server-side load balancing
Connect-time failover with TAF
Controlled by tnsnames.ora parameters: FAILOVER=ON, FAILOVER_MODE, METHOD
Managed via DBCA or SRVCTL commands
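The services used later in this session (racdb_blade53 / racdb_blade54) could also be created from the command line instead of through DBCA; a sketch, assuming the preferred/available layout shown on the srvctl slides:

```
srvctl add service -d racdb -s racdb_blade53 -r racdb1 -a racdb2
srvctl add service -d racdb -s racdb_blade54 -r racdb2 -a racdb1
srvctl start service -d racdb -s racdb_blade53
srvctl start service -d racdb -s racdb_blade54
```

Here -r names the preferred instance and -a the available (failover) instance for each service.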

Configuring Services


Managing services (srvctl)


racdb2:/opt/oracle/product/10.2.0/db/bin> srvctl status asm -n itlinuxbl53
ASM instance +ASM1 is running on node itlinuxbl53.

racdb2:/opt/oracle/product/10.2.0/db/bin> srvctl config database -d racdb
itlinuxbl53 racdb1 /opt/oracle/product/10.2.0/db
itlinuxbl54 racdb2 /opt/oracle/product/10.2.0/db

racdb2:/var/opt/oracle> srvctl start database -d racdb

racdb2:/var/opt/oracle> srvctl status database -d racdb
Instance racdb1 is running on node itlinuxbl53
Instance racdb2 is running on node itlinuxbl54

racdb2:/var/opt/oracle> srvctl config service -d racdb
racdb_blade53 PREF: racdb1 AVAIL: racdb2
racdb_blade54 PREF: racdb2 AVAIL: racdb1

racdb2:/var/opt/oracle> srvctl status service -d racdb -s racdb_blade53
Service racdb_blade53 is running on instance(s) racdb1

Transparent Application Failover (TAF)


TAF is defined by the FAILOVER_MODE parameter
TYPE=SESSION
- User does not need to reconnect
- Session is failed over to another available instance in the list
- But SQL statements in progress will have to be reissued
TYPE=SELECT
- Query will be restarted after failover
- Rows not fetched before failover will be retrieved

Transparent Application Failover (TAF)


Connection modes: METHOD=BASIC or PRECONNECT
BASIC
- After failover, the connection must reconnect to the next address in the list
- Additional time to fail over
PRECONNECT
- A session is opened against all addresses in the list
- Only one is used; the others remain connected
- Faster failover with preconnected sessions
- More memory resources consumed by preconnected sessions on other nodes
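A PRECONNECT variant of the tnsnames.ora entry might look like the sketch below; the alias names ERP_PRE and ERP_PRE2 are hypothetical, and BACKUP names the second alias against which the shadow session is pre-established:

```
ERP_PRE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = ERP.WORLD)
      (FAILOVER_MODE =
        (BACKUP = ERP_PRE2)
        (TYPE = SELECT)
        (METHOD = PRECONNECT))))
```

ERP_PRE2 would be a second alias pointing at the backup instance, so the preconnected session is ready the moment the primary connection fails.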

tnsnames.ora for RAC


Client-side load balancing
ERP =
  (DESCRIPTION =
    (LOAD_BALANCE = ON)
    (FAILOVER = ON)
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac1vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac2vip)(PORT = 1521)))
    (CONNECT_DATA =
      (SERVICE_NAME = ERP.WORLD)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC))))

Server-side load balancing


*.REMOTE_LISTENER=RACDB_LISTENERS (init.ora parameter)

RACDB_LISTENERS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2vip)(PORT = 1521)))

Recovery and RAC


SQL> INSERT INTO SH.MYOBJECTS SELECT * FROM DBA_OBJECTS;
SQL> DELETE FROM SH.MYOBJECTS;
SQL> COMMIT;

Now get the log sequence:

SQL> SELECT SEQUENCE#, THREAD#, STATUS FROM V$LOG;

 SEQUENCE#    THREAD# STATUS
---------- ---------- ----------------
         9          1 INACTIVE
        10          1 CURRENT
         4          2 ACTIVE
         5          2 CURRENT

Recovery and RAC (contd)


RMAN> LIST BACKUP OF DATABASE SUMMARY;

List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
--- -- -- - ----------- --------------- ------- ------- ---------- ---
12  B  F  A DISK        05-FEB-06       1       1       NO         BACKUP_RACDB.HQ.EM_020506082809

Connect to instance racdb2

RMAN> LIST BACKUP OF DATABASE SUMMARY;

List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
--- -- -- - ----------- --------------- ------- ------- ---------- ---
12  B  F  A DISK        05-FEB-06       1       1       NO         BACKUP_RACDB.HQ.EM_020506082809

Since the same control file is used by both instances racdb1 and racdb2 (same database RACDB), the output is the same on both sides.

Recovery and RAC (contd)


srvctl stop database -d RACDB
srvctl start database -d RACDB -o mount
export ORACLE_SID=racdb1

run {
  set until logseq 10 thread 1;
  set autolocate on;
  allocate channel c1 type disk;
  restore database;
  recover database;
  release channel c1;
}

Recovery and RAC (contd)


Starting restore at 05-FEB-06
channel c1: starting datafile backupset restore
channel c1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to +DG1/racdb/datafile/system.256.1
..
..
piece handle=+DG1/racdb/backupset/2006_02_05/nnndf0_tag20060205t101703_0.665.7 tag=TAG20060205T101703
channel c1: restore complete
Finished restore at 05-FEB-06

Starting recover at 05-FEB-06
starting media recovery
archive log thread 1 sequence 9 is already on disk as file +DG1/racdb/archivelog/2006_02_05/thread_1_seq_9.653.7
archive log thread 2 sequence 4 is already on disk as file +DG1/racdb/archivelog/2006_02_05/thread_2_seq_4.662.7

RMAN> sql 'alter database open resetlogs';

Migrate a Single-instance database to RAC


Create the directory structure for the database files and archive log files on the OCFS file system

$ cd /ocfs/oradata/
$ mkdir gavin
$ cd /ocfs/oradata/gavin
$ mkdir arch

Migrate a Single-instance database to RAC


Back up the current control file to trace
SQL> alter database backup controlfile to trace;
Edit the CREATE CONTROLFILE script to change the location of all the datafiles and redo log files to the OCFS file system

Migrate a Single-instance database to RAC


CREATE CONTROLFILE REUSE DATABASE "GAVIN" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 454
LOGFILE
  GROUP 1 '/ocfs/oradata/gavin/redo01.log' SIZE 10M,
  GROUP 2 '/ocfs/oradata/gavin/redo02.log' SIZE 10M,
  GROUP 3 '/ocfs/oradata/gavin/redo03.log' SIZE 10M
-- STANDBY LOGFILE
DATAFILE
  '/ocfs/oradata/gavin/system01.dbf',
  '/ocfs/oradata/gavin/undotbs01.dbf',
  '/ocfs/oradata/gavin/sysaux01.dbf',
  '/ocfs/oradata/gavin/users01.dbf',
  '/ocfs/oradata/gavin/example01.dbf'
CHARACTER SET WE8ISO8859P1;

Migrate a Single-instance database to RAC


Shut down the database and copy the files from the original location to the OCFS location

$ cd /u01/ORACLE/gavin/
$ ls
arch           control03.ctl  redo02.log    system01.dbf  users01.dbf
control01.ctl  example01.dbf  redo03.log    temp01.dbf
control02.ctl  redo01.log     sysaux01.dbf  undotbs01.dbf
$ cp *.* /ocfs/oradata/gavin

Migrate a Single-instance database to RAC


Change the location of the control files in the init.ora

*.control_files='/ocfs/oradata/gavin/control01.ctl','/ocfs/oradata/gavin/control02.ctl','/ocfs/oradata/gavin/control03.ctl'

Run the script to recreate the control file

SQL> startup nomount; SQL> @crectl

Migrate a Single-instance database to RAC


Note that the new location of the datafiles of the database is now the shared OCFS file system

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/ocfs/oradata/gavin/system01.dbf
/ocfs/oradata/gavin/undotbs01.dbf
/ocfs/oradata/gavin/sysaux01.dbf
/ocfs/oradata/gavin/users01.dbf
/ocfs/oradata/gavin/example01.dbf

Migrate a Single-instance database to RAC


Create the cluster-specific data dictionary views by running the catclust.sql script

SQL> @?/rdbms/admin/catclust.sql

Migrate a Single-instance database to RAC


Each instance in the cluster needs to have access to its own thread of online redo log files.

Create another thread of online log files


SQL> alter database add logfile thread 2 2 group 4 ('/ocfs/oradata/gavin/redo04.log') size 10m, 3 group 5 ('/ocfs/oradata/gavin/redo05.log') size 10m, 4 group 6 ('/ocfs/oradata/gavin/redo06.log') size 10m; Database altered.

Enable the thread


SQL> alter database enable public thread 2; Database altered.

Migrate a Single-instance database to RAC


Each instance needs to have its own dedicated undo tablespace as well
SQL> create undo tablespace undotbs2 datafile
  2  '/ocfs/oradata/gavin/undotbs02.dbf' size 200m;

Tablespace created.

Migrate a Single-instance database to RAC


Make the following changes to the Init.ora parameter file:
ADD the following entries:
*.cluster_database=TRUE
*.cluster_database_instances=2
gavin1.instance_name=gavin1
gavin2.instance_name=gavin2
gavin1.instance_number=1
gavin2.instance_number=2
gavin1.thread=1
gavin2.thread=2
gavin1.undo_tablespace=UNDOTBS1
gavin2.undo_tablespace=UNDOTBS2
*.remote_listener='LISTENERS_GAVIN'

Migrate a Single-instance database to RAC


Make the following changes to the Init.ora parameter file:
EDIT the following entries
gavin.__db_cache_size=171966464
gavin.__java_pool_size=8388608
gavin.__large_pool_size=4194304
gavin.__shared_pool_size=75497472

Change to...

*.__db_cache_size=171966464
*.__java_pool_size=8388608
*.__large_pool_size=4194304
*.__shared_pool_size=75497472
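This edit is mechanical, so it can be scripted; a sketch using sed on a copy of the pfile (the /tmp file name is illustrative):

```shell
# Copy of the four memory parameters as they appear in the
# single-instance pfile (illustrative file name)
cat > /tmp/initgavin_demo.ora <<'EOF'
gavin.__db_cache_size=171966464
gavin.__java_pool_size=8388608
gavin.__large_pool_size=4194304
gavin.__shared_pool_size=75497472
EOF

# Replace the single-instance SID prefix 'gavin.' with '*.' so the
# settings apply to all instances (a .bak copy is kept)
sed -i.bak 's/^gavin\./*./' /tmp/initgavin_demo.ora
cat /tmp/initgavin_demo.ora
```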

Migrate a Single-instance database to RAC


Change the archive log destination to the shared disk as all instances need access to the archive log files generated by each individual instance.
*.log_archive_dest_1='LOCATION=/ocfs/oradata/gavin/arch/'

Create the password file on each node


$ cd $ORACLE_HOME/dbs
$ orapwd file=orapwgavin1 password=oracle

Migrate a Single-instance database to RAC


Add the following lines to the tnsnames.ora file on BOTH NODES
LISTENERS_GAVIN =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = hqlinuxrac101.hq.emirates.com)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = hqlinuxrac102.hq.emirates.com)(PORT = 1521))
  )

Migrate a Single-instance database to RAC


Create the spfile which will be used by both instances on the shared disk storage as well
SQL> create spfile='/ocfs/oradata/gavin/spfilegavin.ora' from 2 pfile='initgavin.ora';

Create the init.ora for the instance gavin1 - add only one line, with the SPFILE parameter pointing to the spfile we created on the OCFS file system

$ cat initgavin1.ora
SPFILE=/ocfs/oradata/gavin/spfilegavin.ora

Note: Do the same on the other node for the instance gavin2

Migrate a Single-instance database to RAC


Start the instance on both nodes - first on hqlinux05 and then on hqlinux06
SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/ocfs/oradata/gavin/system01.dbf
/ocfs/oradata/gavin/undotbs01.dbf
/ocfs/oradata/gavin/sysaux01.dbf
/ocfs/oradata/gavin/users01.dbf
/ocfs/oradata/gavin/example01.dbf
/ocfs/oradata/gavin/undotbs02.dbf

SQL> select host_name from v$instance;

HOST_NAME
----------------------------------------------------------------
hqlinux06.hq.emirates.com

Migrate a Single-instance database to RAC


Using the SRVCTL commands to configure services
$ srvctl add database -d gavin -o /opt/oracle/product10g/10.1.0.3
$ srvctl add instance -d gavin -i gavin1 -n hqlinux05
$ srvctl add instance -d gavin -i gavin2 -n hqlinux06
$ srvctl status instance -d gavin -i gavin1
Instance gavin1 is running on node hqlinux05
$ srvctl status instance -d gavin -i gavin2
Instance gavin2 is running on node hqlinux06

Thanks for attending!!


GAVIN SOORMA
Technical Team Manager, Databases
Emirates Airline, Dubai
Contact me at: +971507843900 or gavin.soorma@emirates.com

QUESTIONS & ANSWERS
Contact me:
Email: gavin.soorma@emirates.com
Phone: +971507843900

Acknowledgements & Thanks


10g RAC - Madhu Tumma
High Availability with RAC, Flashback and Data Guard - Matthew Hart & Scott Jesse
A Rough Guide to RAC - Julian Dyke
Oracle 10g Linux Administration - Edward Whalen
