
11g RAC TO RAC DATA GUARD SETUP

Env Detail:
ORACLE_HOME=/u03/oracle/TESTdb/11.2.0
ORACLE_SID=TEST1
Check the archive log mode:
SQL> select log_mode from v$database;
LOG_MODE
------------
ARCHIVELOG
Enabling the Force Logging:
SQL> alter database force logging;
SQL> select force_logging from v$database;
Check the space used by the database to determine how much space is needed to accommodate the backup:
SQL> select sum(bytes/1024/1024/1024) "GB" from dba_segments;
After identifying the required space, raise a ticket with the sysadmin team for file-system space to stage the backup.
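A quick way to confirm the candidate file system really has that much free space (assuming /u03/oracle will hold the stage directory created later in this document):
df -h /u03/oracle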
Check the ASM usage:
select NAME, TOTAL_MB/1024 "Total GB", (TOTAL_MB/1024 - FREE_MB/1024) "Used GB", FREE_MB/1024 "Free GB" from v$asm_diskgroup;
After identifying the ASM free space we can start creating the standby log files; if the free space is not enough, request the sysadmin team to add an additional LUN.
Check the logfile detail
SELECT * FROM gv$logfile ORDER BY GROUP#;
Calculation to add the standby redo logs:
(groups per thread + 1) * number of threads
We are working with a 4-node RAC; each node has one thread with 2 redo log groups, so we have 8 groups across 4 threads. Each thread therefore needs (2 + 1) = 3 standby log groups, i.e. 4 * (2 + 1) = 12 standby redo log groups in total.
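To confirm the group count per thread and the online redo size the standby logs should match, a quick check against the standard v$log view:
SQL> select thread#, count(*) groups, max(bytes)/1024/1024 size_mb
     from v$log
     group by thread#
     order by thread#;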
Command to add the standby logfiles:

Before adding the standby logfiles we need to change standby_file_management to MANUAL; once the standby log files have been created we change it back to AUTO.
alter system set standby_file_management=manual scope=both sid='*';
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 9 ('+DATA','+DATA') SIZE 250M,
GROUP 10 ('+DATA','+DATA') SIZE 250M,GROUP 11 ('+DATA','+DATA') SIZE 250M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 12 ('+DATA','+DATA') SIZE 250M,
GROUP 13 ('+DATA','+DATA') SIZE 250M, GROUP 14 ('+DATA','+DATA') SIZE 250M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 3 GROUP 15 ('+DATA','+DATA') SIZE 250M,
GROUP 16 ('+DATA','+DATA') SIZE 250M, GROUP 17 ('+DATA','+DATA') SIZE 250M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 4 GROUP 18 ('+DATA','+DATA') SIZE 250M,
GROUP 19 ('+DATA','+DATA') SIZE 250M, GROUP 20 ('+DATA','+DATA') SIZE 250M;
alter system set standby_file_management=auto scope=both sid='*';
Check the log file detail.
SELECT * FROM gv$logfile ORDER BY GROUP#;
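The standby redo logs themselves can also be verified directly (v$standby_log is the standard view for them):
SQL> select group#, thread#, bytes/1024/1024 size_mb, status from v$standby_log order by group#;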
Parameter Modification at the Primary Level:
alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(TEST,TEST_KAN)' sid='*';
alter system set LOG_ARCHIVE_DEST_2='SERVICE=TEST_KAN LGWR ASYNC NOAFFIRM
REOPEN=60 DB_UNIQUE_NAME=TEST_KAN VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE)'
sid='*';
The remote destination should stay deferred for now and be enabled only after the DR is configured.
To stop the archive log shipping:
alter system set LOG_ARCHIVE_DEST_STATE_2='DEFER' sid='*';
alter system set standby_file_management='auto' sid='*';
The below parameter is necessary at the time of switchover:
alter system set fal_server=TEST_KAN scope=both sid='*';
alter system set LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST
VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=TEST' scope=both sid='*';
alter system set LOG_ARCHIVE_DEST_STATE_1='ENABLE' sid='*';
alter system set db_file_name_convert='+DATA/TEST_KAN','+DATA/TEST' scope=spfile sid='*';
alter system set
log_file_name_convert='+DATA/TEST_KAN','+DATA/TEST','+FRA/TEST_KAN','+FRA/TEST' scope=spfile
sid='*';
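After setting these, the values can be cross-checked on every instance; gv$parameter is the standard view, and the parameter list below is just the ones changed above:
SQL> select inst_id, name, value
     from gv$parameter
     where name in ('log_archive_config','log_archive_dest_2','fal_server','standby_file_management')
     order by inst_id, name;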
Create the Stage Area:

mkdir -p /u03/oracle/stage
Creating the pfile:
create pfile='/u03/oracle/stage/pfile.ora' from spfile;
Taking the RMAN backup:
run
{
allocate channel c1 type disk;
allocate channel c2 type disk;
backup current controlfile for standby format '/u03/oracle/stage/TEST_STBY_BFR_%U';
BACKUP COMPRESSED BACKUPSET DATABASE format '/u03/oracle/stage/%d_%U.bckp' PLUS ARCHIVELOG format '/u03/oracle/stage/%d_%U.bckp';
backup current controlfile for standby format '/u03/oracle/stage/TEST_STBY_AFT_%U';
release channel c1;
release channel c2;
}
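Before shipping the pieces it is worth confirming the backup completed and listing what was produced (standard RMAN command):
RMAN> list backup summary;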
Moving the backup from the primary to the standby (replace <standby_host> with the standby node name):
scp /u03/oracle/stage/*.bckp <standby_host>:/u03/oracle/stage/
scp /u03/oracle/stage/TEST_STBY_BFR_cnnpjpsq_1_1 <standby_host>:/u03/oracle/stage/
scp /u03/oracle/stage/TEST_STBY_AFT_cvnpjq1u_1_1 <standby_host>:/u03/oracle/stage/
Add the standby TNS entry alongside the primary entry and copy these TNS entries to all nodes, including the standby nodes.
Sample entry (same format for both primary and standby):

Test =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = Scan Name)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = Database Service Name)
)
)
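Connectivity through the new entries can be checked from each node with tnsping; the alias names below are assumed to match the DB_UNIQUE_NAMEs used in this setup:
tnsping TEST
tnsping TEST_KAN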

Configure the password file and copy it to all primary and standby nodes:
orapwd file=orapwTEST1 password=***** entries=5 ignorecase=Y
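A hedged example of distributing it; the node name li9200 is one of the standby hosts used later in this document, and the file name must match each instance's SID on the receiving node:
scp $ORACLE_HOME/dbs/orapwTEST1 oracle@li9200:$ORACLE_HOME/dbs/orapwTEST1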
Standby pfile creation

audit_file_dest='/u03/oracle/admin/TEST/adump'
audit_trail='db'
cluster_database=false
compatible='11.2.0.0.0'
control_files='+DATA/TEST_KAN/CONTROLFILE/current.346.798618115','+FRA/TEST_KAN/CONTROLFILE/current.3587.798618115' #Restore Controlfile
db_block_size=8192
db_create_file_dest='+DATA'
db_file_name_convert='+DATA/TEST','+DATA/TEST_KAN'
db_name='TEST'
db_recovery_file_dest='+FRA'
db_recovery_file_dest_size=26214400000
db_unique_name='TEST_KAN'
diagnostic_dest='/u03/oracle'
dispatchers='(PROTOCOL=TCP) (SERVICE=TESTXDB)'
fal_client='TEST_KAN'
fal_server='TEST'
TEST1.instance_number=1
TEST2.instance_number=2
TEST3.instance_number=3
TEST4.instance_number=4
log_archive_config='DG_CONFIG=(TEST,TEST_KAN)'
log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST LGWR MANDATORY REOPEN=5 VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=TEST_KAN'
log_archive_dest_2='SERVICE=TEST LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=TEST'
log_archive_dest_state_1='enable'
log_archive_dest_state_2='defer'
log_archive_format='TEST_%t_%s_%r.arc'
log_file_name_convert='+DATA/TEST','+DATA/TEST_KAN','+FRA/TEST','+FRA/TEST_KAN'
open_cursors=300
pga_aggregate_target=536870912
processes=515
remote_listener='scanname:1521'
remote_login_passwordfile='exclusive'
sessions=800
sga_target=1610612736
TEST1.thread=1
TEST2.thread=2
TEST3.thread=3
TEST4.thread=4
TEST1.undo_tablespace='UNDOTBS1'
TEST2.undo_tablespace='UNDOTBS2'
TEST3.undo_tablespace='UNDOTBS3'
TEST4.undo_tablespace='UNDOTBS4'

Startup the Standby database:


Set the env:
export ORACLE_HOME=/u03/oracle/TESTdb/11.2.0
export ORACLE_SID=TEST1
export PATH=$ORACLE_HOME/bin:$PATH
Startup the Database in Nomount State:
sqlplus / as sysdba
startup nomount pfile='/u03/oracle/stage/pfile.ora'
After the instance starts up, connect to RMAN to create the physical standby:
rman target sys/******@TEST auxiliary /

DUPLICATE TARGET DATABASE FOR STANDBY;
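A minimal sketch of the duplicate run, assuming the backup pieces were copied to the same stage path on the standby host; the auxiliary channel names and NOFILENAMECHECK are added here for illustration:
run
{
allocate auxiliary channel aux1 type disk;
allocate auxiliary channel aux2 type disk;
duplicate target database for standby nofilenamecheck;
}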


Once the restore has completed, enable the log archive destination on the primary database:
alter system set LOG_ARCHIVE_DEST_STATE_2='ENABLE' sid='*';
Enable recovery on the standby site:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE
DISCONNECT;
Check the archive log gap:
SQL> select * from v$archive_gap;
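Besides v$archive_gap, the received versus applied sequence per thread can be compared on the standby (standard v$archived_log columns):
SQL> select thread#, max(sequence#) last_received,
            max(case when applied = 'YES' then sequence# end) last_applied
     from v$archived_log
     group by thread#
     order by thread#;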
Check for a gap; if there is no gap we are good to open the database in read-only mode.
Open the database in read-only mode:
SQL> alter database open read only;
Enable recovery on the standby site:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE
DISCONNECT;
SQL> SELECT NAME,DB_UNIQUE_NAME,DATABASE_ROLE,OPEN_MODE FROM V$DATABASE;
NAME      DB_UNIQUE_NAME   DATABASE_ROLE     OPEN_MODE
--------- ---------------- ----------------- --------------------
TEST      TEST_KAN         PHYSICAL STANDBY  READ ONLY WITH APPLY

Now we are going to see how to:

Convert the single-instance standby to RAC
Creating the spfile for RAC:
create spfile='+DATA/TEST_kan/spfile_TEST.ora' from pfile='/u03/oracle/stage/pfile.ora';
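With the spfile in ASM, each instance typically needs a local init<SID>.ora in $ORACLE_HOME/dbs that simply points to it; a minimal sketch for the first instance, with the file name assumed from the SID used above:
# $ORACLE_HOME/dbs/initTEST1.ora
spfile='+DATA/TEST_kan/spfile_TEST.ora'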
Shut down the instance and start it up with the spfile:
shutdown immediate;
Nomount using the spfile:
startup nomount
Enable the RAC parameter (cluster_database is a static parameter, so scope=spfile is required):

alter system set cluster_database=TRUE scope=spfile sid='*';


After modifying this parameter we can convert the single instance to RAC.
Shut down the instance:
shutdown immediate;

Add this database with OCR


srvctl add database -d TEST -n TEST -o /u03/oracle/TESTdb/11.2.0 -m KAN.IRONMOUNTAIN.COM -p
+DATA/TEST_kan/spfile_TEST.ora -r physical_standby -a DATA,FRA
Add the Instance with OCR
srvctl add instance -d TEST -i TEST1 -n li9200
srvctl add instance -d TEST -i TEST2 -n li9201
srvctl add instance -d TEST -i TEST3 -n li9202
srvctl add instance -d TEST -i TEST4 -n li9203
Shut down the database: shutdown immediate;
Startup the cluster database
srvctl start database -d TEST -o 'READ ONLY'
Check the final status of the database with the cluster.
Set the ASM env:
export ORACLE_HOME=/u00/grid/11.2.0
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID=+ASM1
Cluster status checking / cluster verification command:
crsctl stat res -t
Sample Output:
ora.TEST.db
1 ONLINE ONLINE li9200 Open,Readonly
2 ONLINE ONLINE li9201 Open,Readonly
3 ONLINE ONLINE li9202 Open,Readonly
4 ONLINE ONLINE li9203 Open,Readonly
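As a final check that redo is being applied in real time, the MRP process can be inspected on the standby (standard v$managed_standby view):
SQL> select process, status, thread#, sequence#
     from v$managed_standby
     where process like 'MRP%';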

Now the DR is fully operational.


=================> End of the Document <=============================
