
Author: A.Kishore, http://www.appsdba.info

Introduction
Oracle Data Guard (known as Oracle Standby Database prior to Oracle9i) forms an extension to the Oracle RDBMS and provides organizations with high availability, data protection, and disaster recovery for enterprise databases. Oracle Data Guard provides the DBA with services for creating, maintaining, managing, and monitoring one or more standby databases. The functionality included with Oracle Data Guard enables enterprise data systems to survive both data corruption and major disasters.

This article provides instructions for creating and configuring a physical standby database from a primary database using Oracle Database 10g Release 2 (10.2) operating in the maximum performance data protection mode. It should be noted that several different methods exist to create a physical standby database configuration and that this is just one of those ways. The methods outlined in this guide present a simple approach that should be easy to implement in most situations. In fact, if you break down the essential tasks required to build a physical standby database, it is essentially nothing more than taking a backup of the primary database, creating a standby controlfile, transferring the files to the standby host, mounting the standby database, putting the standby database in managed recovery mode (Redo Apply), and starting remote archiving from the primary database (Redo Transport). Obviously there are a number of smaller steps I am leaving out, all of which will be discussed in more depth throughout this guide. All configuration parameters related to the Oracle instance and networking will be discussed, as well as how to place the standby database in managed recovery mode.

Introduction to Oracle Data Guard


The standby database feature of Oracle was first introduced with the release of Oracle 7 in the early 1990s. The design was fairly simple: Oracle used media recovery to apply archive logs to a remote standby database; however, none of the automation we now take for granted was present in this release of the product. DBAs were required to write custom scripts that shipped and applied archive logs to the remote standby database. It wasn't until Oracle8i that some form of automation was introduced that relied on Oracle Net Services to transfer and apply archived redo logs. DBAs were still required to supply scripts that handled gap resolution and resynchronized the primary and standby databases when they lost connectivity with one another. Also included in Oracle8i was a set of pre-written scripts that simplified the switchover and failover process. With the introduction of Oracle9i, the standby database feature was renamed to Oracle Data Guard. In addition to the re-branding of the product, Oracle delivered a comprehensive automated solution for disaster recovery that was fully integrated with the database kernel. Finally, a fully integrated disaster recovery solution without the need to maintain custom-written scripts! Oracle9i also provided a vast array of new features, which included automatic gap resolution, enhanced redo transport methods (synchronous and asynchronous redo transport), the ability to configure zero data loss, and the concept of protection modes.

Until Oracle9i Release 2, the only standby database type available was the physical standby database. A physical standby database is an identical, block-for-block copy of the primary database and is kept in sync with the primary using media recovery (also referred to as Redo Apply). Oracle introduced a new type of standby database with Oracle9i Release 2, named the logical standby database. This new type of standby database keeps in sync with the primary database using SQL Apply (versus Redo Apply, which is used with a physical standby database). A logical standby database remains open for user access while logical records are being received and applied from the primary database, which makes it a great candidate for a reporting database. When the standby database site is hosted in a different geographical location than the primary site, it provides for an excellent High Availability (HA) solution. When creating a standby database configuration, the DBA should always attempt to keep the primary and standby database sites identical as well as keep the physical location of the production database transparent to the end user. This allows for an easy role transition scenario for both planned and unplanned outages. When the secondary (standby) site is identical to the primary site, it allows predictable performance and response time after failing over (or switching over) from the primary site.

Oracle Database Enterprise Edition Requirement


Oracle Data Guard is only available as a bundled feature included with the Enterprise Edition release of the Oracle Database software; it is not available with Oracle Database Standard Edition. With the exception of performing a rolling database upgrade using a logical standby database, it is mandatory that the same release of Oracle Database Enterprise Edition be installed on the primary database and all standby databases. While it remains possible to simulate a standby database environment running Oracle Database Standard Edition, it requires the DBA to develop custom scripts that manually transfer archived redo log files and then manually apply them to the standby database. This is similar to the methods used to maintain a standby database with Oracle 7. The consequence of this type of configuration is that it does not provide the ease-of-use, manageability, performance, and disaster recovery capabilities available with Data Guard.

Standby Database Types


There are two types of standby databases that can be created with Oracle Data Guard: physical or logical. Deciding which of the two types to create is critical and depends on the nature of the business needs the organization is trying to satisfy. A physical standby database is an identical, block-for-block copy of the primary database and is kept in sync with the primary using media recovery. As redo gets generated on the primary database, it gets transferred to the standby database, where an RFS process receives the redo and the change vectors are applied directly to the standby database. A physical standby database is an excellent choice for disaster recovery. A logical standby database works in a different manner: it keeps in sync with the primary by transforming redo data received from the primary database into logical SQL statements and then executing those SQL statements against the standby database. With a logical standby database, the standby remains open for user access in read/write mode while still receiving and applying logical records from the primary. While a physical standby database is an exact physical replica of the primary, a logical standby database is not. Because Oracle is applying SQL statements to the standby database and not performing media recovery (as is done with a physical standby database), it is possible for the logical standby database to contain the same logical data but at the same time have a different physical structure. A logical standby database is an excellent solution for a reporting database while at the same time retaining the attributes of a disaster recovery solution. Not only does a logical standby database contain the same logical information as the primary, it can also support the creation of additional objects to support improved reporting requirements.

Data Protection Modes


After deciding between a physical or logical standby database, the next major decision is which data protection mode should be used to operate the Data Guard configuration. At the heart of this decision lies the answer to one important question: how much data loss is your organization willing to endure in the event of a failover? The obvious answer to expect from management is none. Configuring Data Guard with guaranteed no data loss, however, requires a significant investment in equipment and other resources necessary to support this type of environment. An Oracle Database 10g Data Guard configuration will always run in one of three data protection modes: Maximum Protection, Maximum Availability, or Maximum Performance. Each of the three modes provides a high degree of data protection; however, they differ with regard to data availability and performance of the primary database. When selecting a protection mode, always choose the one that best meets the needs of your business, carefully weighing the need to protect the data against any loss versus the availability and performance expectations of the primary database. An in-depth discussion of the three available data protection modes and how redo transport works to support them is beyond the scope of this guide. To keep the article simple, I will be using the default protection mode of Maximum Performance.
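If you later want to confirm which mode a configuration is actually running in, the same query used in the verification section near the end of this guide works on either the primary or the standby:

SQL> select protection_mode, protection_level, database_role from v$database;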

Creating a 10gR2 Data Guard Physical Standby Database with Real-Time Apply

Primary host  : ggate
Standby host  : ggate1
ORACLE_SID    : source
Kernel        : 2.6.9-78.0.0.0.1.EL
Service names : Primary source / Standby source_s1

Primary (ggate): Service_Name = source, ORACLE_SID = source

Standby (ggate1): Service_Name = source_s1, ORACLE_SID = source_s1

#Primary Initialization parameters
db_name='source'
db_unique_name=source

##COMMON TO BOTH PRIMARY AND STANDBY ROLES
LOG_ARCHIVE_CONFIG='DG_CONFIG=(source,source_s1)'
LOG_ARCHIVE_DEST_1='LOCATION=/home/oracle10g/oracle/temp/oracle/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=source'
LOG_ARCHIVE_DEST_2='SERVICE=source_s1 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=source_s1'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_MAX_PROCESSES=10
log_archive_format='%t_%s_%r.dbf'

#SPECIFIC TO STANDBY ROLE
STANDBY_FILE_MANAGEMENT=AUTO
STANDBY_ARCHIVE_DEST='/home/oracle10g/oracle/temp/oracle/arch'
FAL_SERVER=source_s1
FAL_CLIENT=source

#Standby Initialization parameters
db_name='source'
db_unique_name=source_s1

#COMMON TO BOTH PRIMARY AND STANDBY ROLES
LOG_ARCHIVE_CONFIG='DG_CONFIG=(source,source_s1)'
LOG_ARCHIVE_DEST_1='LOCATION=/home/oracle10g/oracle/temp/oracle/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=source_s1'
LOG_ARCHIVE_DEST_2='SERVICE=source LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=source'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_MAX_PROCESSES=10
log_archive_format='%t_%s_%r.dbf'

#SPECIFIC TO STANDBY ROLE
STANDBY_FILE_MANAGEMENT=AUTO
STANDBY_ARCHIVE_DEST='/home/oracle10g/oracle/temp/oracle/arch'
FAL_SERVER=source
FAL_CLIENT=source_s1

initsource.ora
--------------
source.__db_cache_size=117440512
source.__java_pool_size=4194304
source.__large_pool_size=4194304
source.__shared_pool_size=54525952
source.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/source/adump'
*.background_dump_dest='/u01/app/oracle/admin/source/bdump'
*.compatible='10.2.0.1.0'
*.control_files='/u01/app/oracle/oradata/source/control01.ctl','/u01/app/oracle/oradata/source/control02.ctl','/u01/app/oracle/oradata/source/control03.ctl'
*.core_dump_dest='/u01/app/oracle/admin/source/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.dispatchers='(PROTOCOL=TCP) (SERVICE=sourceXDB)'
*.job_queue_processes=10
*.open_cursors=300
*.pga_aggregate_target=60817408
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=184549376
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/app/oracle/admin/source/udump'

#Primary Initialization parameters
db_name='source'
db_unique_name=source

##COMMON TO BOTH PRIMARY AND STANDBY ROLES
LOG_ARCHIVE_CONFIG='DG_CONFIG=(source,source_s1)'
LOG_ARCHIVE_DEST_1='LOCATION=/home/oracle10g/oracle/temp/oracle/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=source'
LOG_ARCHIVE_DEST_2='SERVICE=source_s1 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=source_s1'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_MAX_PROCESSES=10
log_archive_format='%t_%s_%r.dbf'

#SPECIFIC TO STANDBY ROLE
STANDBY_FILE_MANAGEMENT=AUTO
STANDBY_ARCHIVE_DEST='/home/oracle10g/oracle/temp/oracle/arch'

FAL_SERVER=source_s1
FAL_CLIENT=source

Note 1: Create just one initialization parameter file that contains the parameters used in both roles (primary/standby). Parameters specific to a role are only used while the database is in that role; for example, the FAL_ parameters are used only when the database is in the standby role. For LOG_ARCHIVE_DEST_n, the VALID_FOR attribute differentiates the roles; if it is not specified, the default is (ALL_LOGFILES,ALL_ROLES). This VALID_FOR attribute allows us to use the same initialization parameter file for both the primary and standby roles.

Note 2: If the file structure is the same on both nodes, there is no need to specify the file name convert strings. If the file structure is different, the following two additional parameters are needed:
DB_FILE_NAME_CONVERT - converts the path names of the primary database data files to the standby data file path names
LOG_FILE_NAME_CONVERT - converts the path names of the primary database log files to the path names on the standby database

In my case the file structures are different; here are the parameters in my standby init file:
db_file_name_convert='/u01/app/oracle/oradata/source','/u01/app/oracle/oradata/source_s1'
log_file_name_convert='/u01/app/oracle/oradata/source','/u01/app/oracle/oradata/source_s1'

Enable Archiving: ensure that the primary is in archivelog mode

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /home/oracle10g/oracle/temp/oracle/arch
Oldest online log sequence     36
Next log sequence to archive   38
Current log sequence           38
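As an additional sanity check (not part of the original steps), the two archive destinations and their current status can be inspected on the primary:

SQL> select dest_id, status, destination from v$archive_dest where dest_id in (1,2);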


TNSNAMES.ORA sample - Primary

SOURCE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ggate.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = source)
    )
  )

SOURCE_S1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ggate1.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = source_s1)
    )
  )
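Before going further, it can be worth confirming that each alias resolves and the listeners are reachable from both hosts; a quick check (not shown in the original) is:

tnsping source
tnsping source_s1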

Listener.ora (primary host)
---------------------------
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = source.com)
      (SID_NAME = source)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0)
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
      (ADDRESS = (PROTOCOL = TCP)(HOST = ggate.com)(PORT = 1521))
    )
  )

Listener.ora (standby host)
---------------------------
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = source_s1)
      (SID_NAME = source_s1)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0)
    )
  )

Standby Redo Log (SRL) creation

SQL> SELECT * FROM V$LOGFILE;

GROUP# STATUS TYPE   MEMBER                                      IS_RECOVERY_DEST_FILE
------ ------ ------ ------------------------------------------- ---------------------
     1        ONLINE /u01/app/oracle/oradata/source/redo01.log   NO
     2        ONLINE /u01/app/oracle/oradata/source/redo02.log   NO
     3        ONLINE /u01/app/oracle/oradata/source/redo03.log   NO

The number of standby redo logs required for the physical standby database in this example is (3 + 1) * 1 = 4, at 50MB each: the primary has three online redo log groups and a single thread, and the usual rule of thumb is (number of online redo log groups per thread + 1) * number of threads. A best practice generally followed is to create the standby redo logs on both the primary and the standby database so as to make role transitions smoother. By creating the standby redo logs at this stage, it is assured that they will exist on both the primary and the newly created standby database. From the primary database, connect as SYS and run the following to create four standby redo log file groups:

Create the SRLs:
----------------
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/u01/app/oracle/oradata/source/redo04.log') SIZE 50M;

ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('/u01/app/oracle/oradata/source/redo05.log') SIZE 50M;

ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 ('/u01/app/oracle/oradata/source/redo06.log') SIZE 50M;

ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 ('/u01/app/oracle/oradata/source/redo07.log') SIZE 50M;
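Once created, the standby redo logs can be verified with the V$STANDBY_LOG view (a standard view; this check is not in the original text):

SQL> select group#, thread#, bytes/1024/1024 as size_mb, status from v$standby_log;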

Create a Password File

As part of the redo transport security and authentication features, it is mandatory that each database in an Oracle Data Guard configuration use a password file. In addition, the SYS password must be identical on every database in order for redo transport to function. If a password file does not exist for the primary database, create one using the following steps:

cd $ORACLE_HOME/dbs
orapwd file=orapwsource password=oracle

After creating the password file, set the remote_login_passwordfile initialization parameter to EXCLUSIVE in the spfile on the primary database. Since this parameter cannot be modified dynamically for the running instance, the change has to be made in the spfile and the database bounced for it to take effect:

SQL> alter system set remote_login_passwordfile=exclusive scope=spfile;

Enable Force Logging (optional)

Any nologging operations performed on the primary database are not fully logged in the redo stream. As Oracle Data Guard relies on the redo stream to maintain the standby database, this can result in data inconsistencies between the primary and standby, along with a massive headache for the DBA to resolve. To prevent this from occurring, one solution is to place the primary database into force logging mode. In this mode, all nologging operations are permitted to run without error, but their changes are placed in the redo stream anyway. Although this is considered an optional step, I make it mandatory when designing an Oracle Data Guard configuration for my clients. Overlooking it on a production environment can cost the DBA considerable time during a disaster recovery situation. To place the primary database in force logging mode, connect as SYS and run the following:

SQL> alter database force logging;
Database altered.

To verify force logging is enabled for the database:

SQL> select force_logging from v$database;

FORCE_LOGGING
-------------
YES

Net Services

Set up net service entries in the tnsnames.ora in such a way that you can connect via SQL*Plus to the remote database using the alias, for example from the primary:

sqlplus sys/oracle@source_s1 as sysdba

SQL> SELECT DB_UNIQUE_NAME FROM V$DATABASE;

DB_UNIQUE_NAME
----------------
source_s1

Do the same from the standby to the primary.

Backup the Primary

Take a cold backup of the primary. (You can also take a hot backup; Oracle recommends the use of RMAN.)

SQL> SHUTDOWN IMMEDIATE

Back up the datafiles, the online redo logs, and the standby redo logs if created, then ftp/restore them on the standby site.

Note 4:
- Standby redo logs can be created even after the standby has been created. In this case we created the SRLs on the primary before the creation of the standby database. Also, we have used the default ARCH transport to ship the logs across in the log_archive_dest_2 parameter. In 10g either the archiver (ARCn) process or the log writer (LGWR) process on the primary database can transmit redo data directly to remote standby redo logs (see the sketch below).
- MAXLOGFILES defaults to 16. To create more online redo logs plus standby redo logs than this, recreate the control file with a larger MAXLOGFILES value to accommodate the number.
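For reference, if LGWR-based transport is preferred over the default ARCH, the primary destination could be changed along the following lines. This is only a sketch using the service name from this guide, not a step the original walkthrough performs:

SQL> alter system set log_archive_dest_2='SERVICE=source_s1 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=source_s1' scope=both;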

A physical standby database can be created using either a hot or cold backup of the primary as long as all of the necessary archivelogs are available to bring the standby database to a consistent state. For the purpose of this guide, I will be performing an online (hot) backup of the primary database using RMAN. The RMAN backupsets will be written to a staging directory located outside of the Flash Recovery Area; namely /home/oracle10g/backup. I start by creating the staging directory on both the primary and standby hosts:


oracle10g@ggate  - mkdir -p /home/oracle10g/backup
oracle10g@ggate1 - mkdir -p /home/oracle10g/backup

From the primary host, perform an RMAN backup of the primary database that places the backupset into the staging directory:

[oracle10g@ggate ~]$ rman target /

RMAN> backup device type disk format '/home/oracle10g/backup/%U' database plus archivelog;

Starting backup at 08-OCT-11
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=159 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=11 recid=10 stamp=763835014
input archive log thread=1 sequence=12 recid=11 stamp=763929332
input archive log thread=1 sequence=13 recid=12 stamp=763930110
input archive log thread=1 sequence=14 recid=13 stamp=763930216
input archive log thread=1 sequence=15 recid=14 stamp=763997170
input archive log thread=1 sequence=16 recid=15 stamp=764000962
input archive log thread=1 sequence=17 recid=16 stamp=764001297
input archive log thread=1 sequence=18 recid=17 stamp=764001476
input archive log thread=1 sequence=19 recid=18 stamp=764001491
input archive log thread=1 sequence=20 recid=19 stamp=764002512
input archive log thread=1 sequence=21 recid=20 stamp=764003526
input archive log thread=1 sequence=22 recid=21 stamp=764004777
input archive log thread=1 sequence=23 recid=22 stamp=764005140
input archive log thread=1 sequence=24 recid=23 stamp=764005633
input archive log thread=1 sequence=25 recid=24 stamp=764006489
input archive log thread=1 sequence=26 recid=25 stamp=764010669
input archive log thread=1 sequence=27 recid=26 stamp=764010697
input archive log thread=1 sequence=28 recid=35 stamp=764011369
input archive log thread=1 sequence=29 recid=55 stamp=764011419
input archive log thread=1 sequence=30 recid=57 stamp=764011425
input archive log thread=1 sequence=31 recid=59 stamp=764011500
input archive log thread=1 sequence=32 recid=61 stamp=764011708
input archive log thread=1 sequence=33 recid=62 stamp=764011740

input archive log thread=1 sequence=34 recid=63 stamp=764011767
input archive log thread=1 sequence=35 recid=64 stamp=764011772
input archive log thread=1 sequence=36 recid=65 stamp=764011794
input archive log thread=1 sequence=37 recid=71 stamp=764015097
input archive log thread=1 sequence=38 recid=73 stamp=764026085
channel ORA_DISK_1: starting piece 1 at 08-OCT-11
channel ORA_DISK_1: finished piece 1 at 08-OCT-11
piece handle=/home/oracle10g/backup/06mok776_1_1 tag=TAG20111008T212806 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:09
Finished backup at 08-OCT-11

Starting backup at 08-OCT-11
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=39 recid=74 stamp=764026172
channel ORA_DISK_1: starting piece 1 at 08-OCT-11
channel ORA_DISK_1: finished piece 1 at 08-OCT-11
piece handle=/home/oracle10g/backup/09mok79t_1_1 tag=TAG20111008T212932 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 08-OCT-11

Create a Standby Controlfile

Using the same process as above, create a standby controlfile in the staging directory using RMAN:

[oracle10g@ggate ~]$ rman target /

RMAN> backup device type disk format '/home/oracle10g/backup/%U' current controlfile for standby;

Starting backup at 08-OCT-11
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including standby control file in backupset
channel ORA_DISK_1: starting piece 1 at 08-OCT-11
channel ORA_DISK_1: finished piece 1 at 08-OCT-11
piece handle=/home/oracle10g/backup/0cmok7dr_1_1 tag=TAG20111008T213139 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 08-OCT-11

Prepare an Initialization Parameter File for the Standby Database

Create an initialization parameter file for the standby database using the primary as the source. The primary database in this example is using an spfile, which will need to be copied to a pfile so it can be modified and used by the standby database. When configuring the standby database later in this guide, I will be converting the modified standby pfile back to an spfile. From the primary database, create a pfile in the staging directory:

SQL> create spfile from pfile;
File created.

SQL> shutdown immediate
SQL> startup

SQL> create pfile='/home/oracle10g/backup/initsource_s1.ora' from spfile;
File created.

Next, modify the necessary parameters in the new pfile to allow the database to operate in the standby role:

source_s1.__db_cache_size=117440512
source_s1.__java_pool_size=4194304
source_s1.__large_pool_size=4194304
source_s1.__shared_pool_size=54525952
source_s1.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/source_s1/adump'
*.background_dump_dest='/u01/app/oracle/admin/source_s1/bdump'
*.compatible='10.2.0.1.0'
*.control_files='/u01/app/oracle/oradata/source_s1/control01.ctl','/u01/app/oracle/oradata/source_s1/control02.ctl','/u01/app/oracle/oradata/source_s1/control03.ctl'
*.core_dump_dest='/u01/app/oracle/admin/source_s1/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='source'
*.db_unique_name='source'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=source_s1XDB)'

*.FAL_CLIENT='source'
*.FAL_SERVER='source_s1'
*.job_queue_processes=10
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(source,source_s1)'
*.LOG_ARCHIVE_DEST_1='LOCATION=/home/oracle10g/oracle/temp/oracle/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=source_s1'
*.LOG_ARCHIVE_DEST_2='SERVICE=source_s1 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=source_s1'
*.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
*.log_archive_format='%t_%s_%r.dbf'
*.LOG_ARCHIVE_MAX_PROCESSES=10
*.open_cursors=300
*.pga_aggregate_target=60817408
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=184549376
*.STANDBY_ARCHIVE_DEST='/home/oracle10g/oracle/temp/oracle/arch'
*.STANDBY_FILE_MANAGEMENT='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/app/oracle/admin/source/udump'
db_file_name_convert='/u01/app/oracle/oradata/source','/u01/app/oracle/oradata/source_s1'
log_file_name_convert='/u01/app/oracle/oradata/source','/u01/app/oracle/oradata/source_s1'

Transfer Files to the Standby Host

Using an OS remote copy utility, transfer the backup of the primary database, the standby controlfile, and the standby initialization parameter file to the standby host, ggate1:

[oracle10g@ggate backup]$ scp * oracle10g@ggate1:/home/oracle10g/backup
oracle10g@ggate1's password:
06mok776_1_1         100%   92MB  13.1MB/s   00:07
07mok77f_1_1         100%  600MB  14.0MB/s   00:43
08mok79q_1_1         100% 7040KB   3.4MB/s   00:02
09mok79t_1_1         100%   41KB  40.5KB/s   00:00
0cmok7dr_1_1         100% 7040KB   6.9MB/s   00:00
initsource_s1.ora    100% 1553     1.5KB/s   00:00

Configure the Standby Database

This section contains the steps used to create, mount, and start Redo Apply services for the physical standby database.

Create the Standby Password File

As part of the redo transport security and authentication features, it is mandatory that each database in an Oracle Data Guard configuration use a password file. In addition, the SYS password must be identical on every database in order for redo transport to function. Create the password file on the standby host using the following steps:

[oracle10g@ggate1 ~]$ cd $ORACLE_HOME/dbs
[oracle10g@ggate1 dbs]$ orapwd file=orapwsource_s1 password=oracle

Create an spfile for the Standby Instance

Using the prepared standby initialization parameter file created on and copied from the primary host, convert the pfile to an spfile by entering the following command on the standby instance:

SQL> create spfile from pfile='/home/oracle10g/backup/initsource_s1.ora';

SQL> !ls -l $ORACLE_HOME/dbs
orapwsource_s1
spfilesource_s1.ora

Create and Start the Standby Instance

Start by creating the "dump directories" on the standby host as referenced in the standby initialization parameter file:

mkdir -p /u01/app/oracle/admin/source_s1/adump
mkdir -p /u01/app/oracle/admin/source_s1/bdump
mkdir -p /u01/app/oracle/admin/source/udump
mkdir -p /u01/app/oracle/admin/source/cdump

Next, create and verify all directories on the standby host that will be used for database files

mkdir -p /u01/app/oracle/oradata/source_s1

After verifying that the appropriate environment variables are set on the standby host ($ORACLE_SID, $ORACLE_HOME, $PATH, $LD_LIBRARY_PATH), start the physical standby instance:

export ORACLE_BASE=/u01/app/oracle
export ORACLE_SID=source_s1
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0
export PATH=$PATH:$ORACLE_HOME/bin:/u01/app/oracle/product/10.2.0/ggate
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/u01/app/oracle/product/10.2.0/ggate

-- Add the below two parameters to the init file (create pfile from spfile; first if needed):
db_file_name_convert='/u01/app/oracle/oradata/source','/u01/app/oracle/oradata/source_s1'
log_file_name_convert='/u01/app/oracle/oradata/source','/u01/app/oracle/oradata/source_s1'

sqlplus "/as sysdba"

SQL*Plus: Release 10.2.0.1.0 - Production on Sat Oct 8 21:55:31 2011
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to an idle instance.

SQL> startup pfile=initsource_s1.ora nomount
ORACLE instance started.

Total System Global Area  184549376 bytes
Fixed Size                  1218412 bytes
Variable Size              62916756 bytes
Database Buffers          117440512 bytes
Redo Buffers                2973696 bytes

Create the Physical Standby Database

From the standby host where the standby instance was just started, duplicate the primary database as a standby using RMAN:

[oracle10g@ggate1 ~]$ rman target sys/oracle@source auxiliary sys/oracle@source_s1

Recovery Manager: Release 10.2.0.1.0 - Production on Sat Oct 8 21:56:34 2011
Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: SOURCE (DBID=2877522219)
connected to auxiliary database: SOURCE (not mounted)

RMAN> duplicate target database for standby;

Starting Duplicate Db at 08-OCT-11
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=156 devtype=DISK

contents of Memory Script:
{
   restore clone standby controlfile;
   sql clone 'alter database mount standby database';
}
executing Memory Script

Starting restore at 08-OCT-11
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /home/oracle10g/backup/0cmok7dr_1_1
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=/home/oracle10g/backup/0cmok7dr_1_1 tag=TAG20111008T213139
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:02
output filename=/u01/app/oracle/oradata/source_s1/control01.ctl
output filename=/u01/app/oracle/oradata/source_s1/control02.ctl
output filename=/u01/app/oracle/oradata/source_s1/control03.ctl
Finished restore at 08-OCT-11

sql statement: alter database mount standby database
released channel: ORA_AUX_DISK_1

contents of Memory Script:
{
   set newname for tempfile 1 to "/u01/app/oracle/oradata/source_s1/temp01.dbf";
   switch clone tempfile all;

   set newname for datafile 1 to "/u01/app/oracle/oradata/source_s1/system01.dbf";
   set newname for datafile 2 to "/u01/app/oracle/oradata/source_s1/undotbs01.dbf";
   set newname for datafile 3 to "/u01/app/oracle/oradata/source_s1/sysaux01.dbf";
   set newname for datafile 4 to "/u01/app/oracle/oradata/source_s1/users01.dbf";
   set newname for datafile 5 to "/u01/app/oracle/oradata/source_s1/example01.dbf";
   set newname for datafile 6 to "/u01/app/oracle/oradata/source_s1/ggate_data01.dbf";
   restore check readonly clone database;
}
executing Memory Script

executing command: SET NEWNAME
renamed temporary file 1 to /u01/app/oracle/oradata/source_s1/temp01.dbf in control file
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME

Starting restore at 08-OCT-11
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=156 devtype=DISK
channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to /u01/app/oracle/oradata/source_s1/system01.dbf
restoring datafile 00002 to /u01/app/oracle/oradata/source_s1/undotbs01.dbf
restoring datafile 00003 to /u01/app/oracle/oradata/source_s1/sysaux01.dbf

restoring datafile 00004 to /u01/app/oracle/oradata/source_s1/users01.dbf
restoring datafile 00005 to /u01/app/oracle/oradata/source_s1/example01.dbf
restoring datafile 00006 to /u01/app/oracle/oradata/source_s1/ggate_data01.dbf
channel ORA_AUX_DISK_1: reading from backup piece /home/oracle10g/backup/07mok77f_1_1
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=/home/oracle10g/backup/07mok77f_1_1 tag=TAG20111008T212815
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:01:16
Finished restore at 08-OCT-11

contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy recid=9 stamp=764028368 filename=/u01/app/oracle/oradata/source_s1/system01.dbf
datafile 2 switched to datafile copy
input datafile copy recid=10 stamp=764028368 filename=/u01/app/oracle/oradata/source_s1/undotbs01.dbf
datafile 3 switched to datafile copy
input datafile copy recid=11 stamp=764028368 filename=/u01/app/oracle/oradata/source_s1/sysaux01.dbf
datafile 4 switched to datafile copy
input datafile copy recid=12 stamp=764028368 filename=/u01/app/oracle/oradata/source_s1/users01.dbf
datafile 5 switched to datafile copy
input datafile copy recid=13 stamp=764028368 filename=/u01/app/oracle/oradata/source_s1/example01.dbf
datafile 6 switched to datafile copy
input datafile copy recid=14 stamp=764028368 filename=/u01/app/oracle/oradata/source_s1/ggate_data01.dbf
Finished Duplicate Db at 08-OCT-11

Start Redo Apply on the Standby Database

Now that the standby is in place, start Redo Apply on the standby database by putting it in managed recovery mode. This instructs the standby database to begin applying changes from archived redo logs transferred from the primary database:

SQL> alter database recover managed standby database disconnect;
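Since this guide's title mentions real-time apply, note that with standby redo logs in place the standby can instead be told to apply redo as it arrives in the standby redo logs rather than waiting for each archived log. The standard 10g syntax for that variant is:

SQL> alter database recover managed standby database using current logfile disconnect;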


Verify the Standby

- Identify the existing files on the standby:

SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

-- On the primary
SQL> alter system set log_archive_dest_state_2=defer scope=both;
SQL> alter system set log_archive_dest_state_2=enable scope=both;
SQL> alter system switch logfile;

SQL> select dest_id, error from v$archive_dest_status;

   DEST_ID ERROR
---------- -----------------------------------------------------------------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10

-- On the standby
SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

 SEQUENCE# FIRST_TIM NEXT_TIME
---------- --------- ---------
        39 08-OCT-11 08-OCT-11
        40 08-OCT-11 08-OCT-11
        41 08-OCT-11 08-OCT-11
        42 08-OCT-11 08-OCT-11
        43 08-OCT-11 08-OCT-11
        44 08-OCT-11 08-OCT-11
        45 08-OCT-11 08-OCT-11

- Standby setup is complete.
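The receipt and apply processes on the standby can also be watched with the V$MANAGED_STANDBY view (a standard check, not part of the original verification steps):

SQL> select process, status, thread#, sequence#, block# from v$managed_standby;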

Note 5: ARCH processing

- In our setup so far we have used the ARCH process for log_archive_dest_2 (if nothing is specified, ARCH is the default) and SRLs have been created on the standby.
- Archiving happens when there is a log switch on the primary.
- On the primary database, after the ARC0 process successfully archives the local online redo log to the local destination (LOG_ARCHIVE_DEST_1), the ARC1 process transmits redo from the local archived redo log files (instead of the online redo log files) to the remote standby destination (LOG_ARCHIVE_DEST_2).
- On the remote destination, the remote file server (RFS) process will, in turn, write the redo data to an archived redo log file or a standby redo log file. Log apply services then use Redo Apply (the MRP process) to apply the redo to the standby database.

With the protection mode used in this guide (Maximum Performance), archiving of redo data to the remote standby does not occur until after a log switch. By default, a log switch occurs when an online redo log becomes full, which means the standby database does not get updated until then. To force the current redo log to be archived immediately, use the following statement on the primary database:

SQL> alter system archive log current;
System altered.

Verifying the Physical Standby Database

With the standby and primary databases now in operation, the next step is to verify the Data Guard configuration. This will ensure that Redo Transport on the primary and Redo Apply on the physical standby are working correctly.

Given this Data Guard configuration is running in maximum performance mode, the validation tasks will involve switching redo log files on the primary and verifying that those log files are being shipped to and applied on the physical standby database.

Redo Transport

From the primary database, perform a log switch and then verify that the transmission of the archived redo log file was successful:

SQL> alter system switch logfile;
System altered.

SQL> select status, error from v$archive_dest where dest_id = 2;

STATUS    ERROR
--------- ---------------------------------------------------------
VALID

If the transmission was successful, the status of the destination will be VALID as shown above. If for any reason the transmission was unsuccessful, the status will be INVALID and the full text of the error message will be populated in the ERROR column which can be used to investigate and correct the issue.
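If logs appear to be missing on the standby, check for an archive gap with the standard V$ARCHIVE_GAP view (an extra troubleshooting step, not part of the original text):

SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;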
Apply Redo Logs on the Standby Database

Remember that archived logs are transferred to the standby database, but they still have to be applied there. The archived logs can be applied either through SQL or through the Data Guard command-line tool (DGMGRL). Let us see how to apply them through SQL. Go to the standby database environment:

SQL> startup nomount
SQL> alter database mount standby database;
SQL> alter database recover managed standby database disconnect;
Warning: after executing the above statement you will get the SQL prompt back, but that does not mean the recovery is complete; the statement starts the managed recovery process, which keeps applying redo in the background.

SQL> recover managed standby database cancel;

The above statement stops managed recovery after the current media recovery completes. At that point, all received archived redo logs have been applied to the standby database.
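To confirm how far the apply has progressed, the APPLIED column of V$ARCHIVED_LOG can be checked on the standby (a simple check added here for convenience; it is not in the original):

SQL> select max(sequence#) from v$archived_log where applied = 'YES';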


Standby DB as Read-Only
For Oracle 10gR2: when the physical standby database is shut down, simply issue STARTUP. This will bring the database up in read-only mode.

SQL> select open_mode, database_role, switchover_status from v$database;

OPEN_MODE  DATABASE_ROLE    SWITCHOVER_STATUS
---------- ---------------- --------------------
MOUNTED    PHYSICAL STANDBY NOT ALLOWED

SQL> shutdown immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area  167772160 bytes
Fixed Size                  1218316 bytes
Variable Size              75499764 bytes
Database Buffers           88080384 bytes
Redo Buffers                2973696 bytes
Database mounted.
Database opened.

SQL> select open_mode, database_role, switchover_status from v$database;

OPEN_MODE  DATABASE_ROLE    SWITCHOVER_STATUS
---------- ---------------- --------------------
READ ONLY  PHYSICAL STANDBY NOT ALLOWED

or

To open a standby database for read-only access when it is currently performing managed recovery:

Cancel log apply services:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

Open the database for read-only access:
SQL> ALTER DATABASE OPEN READ ONLY;
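When the read-only work is finished, Redo Apply can be resumed without a restart; after disconnecting any active user sessions, issue the following on the standby (standard 10g behavior, not shown in the original):

SQL> alter database recover managed standby database disconnect;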


Redo Apply

To verify Redo Apply, identify the existing archived redo logs on the standby, archive a log or two from the primary, and then check the standby database again. This test ensures that redo data was shipped from the primary and then successfully received, archived, and applied on the standby. First, identify the existing archived redo logs on the standby database:

select sequence#, first_time, next_time, archived, applied from v$archived_log order by sequence#;

From the primary database, archive the current log using the following SQL statement:

SQL> alter system archive log current;
System altered.

Go back to the standby database and re-query the V$ARCHIVED_LOG view to verify redo data was shipped, received, archived, and applied:

SQL> select sequence#, first_time, next_time, archived, applied from v$archived_log order by sequence#;

Monitoring the alert.log of the Standby Database

Querying the V$ARCHIVED_LOG view from the standby database is a good way to ensure Redo Transport and Redo Apply are doing their job correctly. In addition, I also like to tail the alert.log file of the standby database as a double check. From the standby database, perform a tail -f against the alert.log while issuing the "alter system archive log current" statement from the primary:

tail -f alert_source_s1.log

RFS[7]: Archived Log: '/home/oracle10g/oracle/temp/oracle/arch/1_40_759161580.dbf'
Sat Oct 8 22:20:32 2011
Media Recovery Log /home/oracle10g/oracle/temp/oracle/arch/1_39_759161580.dbf
Media Recovery Log /home/oracle10g/oracle/temp/oracle/arch/1_40_759161580.dbf
Media Recovery Log /home/oracle10g/oracle/temp/oracle/arch/1_41_759161580.dbf
Media Recovery Log /home/oracle10g/oracle/temp/oracle/arch/1_42_759161580.dbf

Media Recovery Log /home/oracle10g/oracle/temp/oracle/arch/1_43_759161580.dbf
Media Recovery Log /home/oracle10g/oracle/temp/oracle/arch/1_44_759161580.dbf
Media Recovery Log /home/oracle10g/oracle/temp/oracle/arch/1_45_759161580.dbf

Switchover databases
In a real environment, if the primary database crashes or becomes unavailable for some reason, you need to make the standby database the primary database. (Strictly speaking, a planned role change like the one shown here is a switchover; an unplanned role change after losing the primary is a failover.)

Switchover Standby to Primary
On the primary:

SQL> conn sys@prod as sysdba
Enter password:
Connected.

SQL> select switchover_status from v$database;

SWITCHOVER_STATUS
------------------
SESSIONS ACTIVE

SQL> alter database commit to switchover to standby with session shutdown;


On the standby:


After this statement completes, the primary database is converted into a standby database. The current control file is backed up to the current SQL session trace file before the switchover operation. This makes it possible to reconstruct a current control file, if necessary.
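On the standby side, the role change is typically completed with the following standard commands once SWITCHOVER_STATUS reports TO PRIMARY (a sketch of the usual sequence; adapt it to your environment):

SQL> select switchover_status from v$database;
SQL> alter database commit to switchover to primary;
SQL> alter database open;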


Shut down and restart the former primary instance.


You get the status below if you query the standby before the primary has been switched over:

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;

SWITCHOVER_STATUS
--------------------
NOT ALLOWED

NOT ALLOWED - Either this is a standby database and the primary database has not been switched first, or this is a primary database and there are no standby databases.
SESSIONS ACTIVE - There are active SQL sessions attached to the primary or standby database that need to be disconnected before the switchover operation is permitted. Query the V$SESSION view to identify the specific processes that need to be terminated.
SWITCHOVER PENDING - This is a standby database and the primary database switchover request has been received but not processed.
SWITCHOVER LATENT - The switchover was in pending mode, but did not complete and went back to the primary database.
TO PRIMARY - This is a standby database and is allowed to switch over to a primary database.

http://download.oracle.com/docs/cd/B10501_01/server.920/a96653/role_management.htm


Some more useful queries:

select sequence#, first_time, next_time from v$archived_log;
select sequence#, applied from v$archived_log order by sequence#;

shutdown immediate
startup nomount
alter database mount standby database;
alter database recover automatic standby database;

SQL> SELECT name, value FROM gv$parameter WHERE name = 'log_archive_dest_state_1';

NAME
--------------------------------------------------------------------------------
VALUE
--------------------------------------------------------------------------------
log_archive_dest_state_1
ENABLE

SQL> SELECT name, value FROM gv$parameter WHERE name = 'log_archive_dest_state_2';

NAME
--------------------------------------------------------------------------------
VALUE
--------------------------------------------------------------------------------
log_archive_dest_state_2
ENABLE

SQL> SELECT name, value FROM gv$parameter WHERE name = 'standby_archive_dest';

NAME
--------------------------------------------------------------------------------
VALUE
--------------------------------------------------------------------------------
standby_archive_dest
/home/oracle10g/oracle/temp/oracle/arch

SQL> select status, error from v$archive_dest where dest_id=2;

STATUS    ERROR
--------- -----------------------------------------------------------------
VALID

SQL> select switchover_status from v$database;

SWITCHOVER_STATUS
--------------------
SESSIONS ACTIVE

SQL> select protection_mode, protection_level, database_role from v$database;

PROTECTION_MODE      PROTECTION_LEVEL     DATABASE_ROLE
-------------------- -------------------- ----------------
MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE  PRIMARY

SQL> SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;

MAX(SEQUENCE#)    THREAD#
-------------- ----------
            45          1

***** On the physical standby *****

SQL> select protection_mode, protection_level, database_role from v$database;

PROTECTION_MODE      PROTECTION_LEVEL     DATABASE_ROLE
-------------------- -------------------- ----------------
MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE  PHYSICAL STANDBY

SQL> SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;

MAX(SEQUENCE#)    THREAD#
-------------- ----------
            45          1


Problems and solutions
----------------------

- Switch a log on the primary:

SQL> ALTER SYSTEM SWITCH LOGFILE;

SQL> select dest_id, error from v$archive_dest_status;

   DEST_ID ERROR
---------- -----------------------------------------------------------------
         1
         2 ORA-16047: DGID mismatch between destination setting and standby
         3
         4
         5
         6
         7
         8
         9
        10

10 rows selected.

Solution

The db_unique_name is the same on both the primary and the standby; it needs to be changed on the standby.

SQL> show parameter db_unique_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      source
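A sketch of the fix on the standby: db_unique_name is a static parameter, so the change has to go to the spfile and the instance must be restarted before it takes effect.

SQL> alter system set db_unique_name='source_s1' scope=spfile;
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database recover managed standby database disconnect;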


References

Creating a 10gR2 Data Guard Physical Standby Database with Real-Time Apply (Doc ID 343424.1)
http://www.idevelopment.info/data/Oracle/DBA_tips/Data_Guard/DG_40.shtml
