
Oracle Streams 10g+

==>Set the following initialization parameters, as necessary, at each participating instance: global_names, _job_queue_interval, sga_target, streams_pool_size (a hedged example of setting these appears at the end of these general notes, after the Propagation Restart examples).

==>Tablespace/User for Streams Administrator queues
CREATE TABLESPACE &streams_tbs_name DATAFILE '&db_file_directory/&db_file_name'
  SIZE 100M REUSE AUTOEXTEND ON NEXT 25M MAXSIZE UNLIMITED;
create user STRMADMIN identified by STRMADMIN;
ALTER USER strmadmin DEFAULT TABLESPACE &streams_tbs_name
  QUOTA UNLIMITED ON &streams_tbs_name;

==>Privileges
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE, DBA to STRMADMIN;
execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');

==>Separate queues for capture and apply
Configure separate queues for changes that are captured locally and for receiving captured changes from each remote site. This is especially important when configuring bi-directional replication between multiple databases. For example, consider the situation where database db1.net replicates its changes to database db2.net, and db2.net replicates to db1.net. Each database maintains two queues: one for capturing the changes made locally and another for receiving changes from the other database.

==>Streams and Flash Recovery Area (FRA)
In Oracle 10g and above, configure a separate log archive destination, independent of the Flash Recovery Area, for the Streams capture process. Archive logs in the FRA can be removed automatically under space pressure, even if the Streams capture process still requires them. Do not allow the archive logs needed by Streams capture to reside solely in the FRA.

==>Archive Logging must be enabled

==>Supplemental logging
If you set the apply process parallelism parameter to a value greater than 1, you must specify a conditional supplemental log group at the source database for all of the unique and foreign key columns in the tables for which an apply process applies changes. Supplemental logging may be required for other columns in these tables as well, depending on your configuration. Any columns referenced in rule-based transformations or used within DML handlers at the target site must be unconditionally logged at the source site.
Supplemental logging can be specified at the source either at the database level or for each individual replicated table. In 10gR2, supplemental logging is automatically configured for tables on which primary, unique, or foreign keys are defined when the database object is prepared for Streams capture. The procedures for maintaining Streams and adding rules in the DBMS_STREAMS_ADM package automatically prepare objects for a local Streams capture.

-->Database level logging:
-->Minimal supplemental logging
SQL> alter database add supplemental log data;
-->Identification key logging
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL, PRIMARY KEY, UNIQUE, FOREIGN KEY) COLUMNS;
select SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI,
       SUPPLEMENTAL_LOG_DATA_FK, SUPPLEMENTAL_LOG_DATA_ALL
from   v$database;

-->Table level logging:
alter table HR.EMPLOYEES ADD SUPPLEMENTAL LOG GROUP emp_fulltime (EMPLOYEE_ID, LAST_NAME, DEPARTMENT_ID);
alter table HR.EMPLOYEES add SUPPLEMENTAL LOG data (PRIMARY KEY, UNIQUE, FOREIGN KEY, ALL) columns;
SELECT supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_fk
  FROM dba_capture_prepared_tables
UNION
SELECT supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_fk
  FROM dba_capture_prepared_schemas
UNION
SELECT supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_fk
  FROM dba_capture_prepared_database;
-->Check supplemental log groups
select log_group_name, table_name,
       decode(always, 'ALWAYS', 'Unconditional', NULL, 'Conditional') ALWAYS
from   dba_log_groups;
-->Check columns in supplemental log groups
select log_group_name, column_name, position
from   dba_log_group_columns
where  table_name = 'DEPARTMENTS' and owner = 'HR';

==>Implement a Heartbeat Table
To ensure that the APPLIED_SCN column of the DBA_CAPTURE view is updated periodically, implement a "heartbeat" table. A heartbeat table is especially useful for databases that have a low activity rate. The Streams capture process requests a checkpoint after every 10 MB of generated redo. During the checkpoint, the Streams metadata is maintained if there are active transactions. Implementing a heartbeat table ensures that open transactions occur regularly within the source database, giving additional opportunities for the metadata to be updated frequently.
-->Submit a job to monitor the heartbeat table
create table job (a number primary key, b date);
create sequence temp_seq start with 1;
variable jobno number;
begin
  dbms_job.submit(:jobno, 'insert into job values (temp_seq.nextval, sysdate);',
                  sysdate, 'sysdate+60/(60*60*24)');
  commit;
end;
/

==>Configuring Capture
Use the DBMS_STREAMS_ADM.MAINTAIN_* procedures (where * = TABLES, SCHEMAS, GLOBAL, TTS). These procedures minimize the number of steps required to configure Streams processes. Note that it is possible to create rules for non-existent objects, so check the spelling of each object specified in a rule carefully. A capture process requires a rule set with rules. The ADD_GLOBAL_RULES procedure cannot be used to capture DML changes for the entire database; ADD_GLOBAL_RULES can be used to capture all DDL changes for the database.

==>Propagation Configuration
If the MAINTAIN_* (TABLES, SCHEMAS, GLOBAL) procedures are used to configure Streams, queue_to_queue is automatically set to TRUE, if possible. The database link for this queue_to_queue propagation must use a TNS service name (or connect name) that specifies the GLOBAL_NAME in the CONNECT_DATA clause of the descriptor.

==>Propagation Restart
Use the procedures START_PROPAGATION and STOP_PROPAGATION from DBMS_PROPAGATION_ADM to enable and disable the propagation schedule. These procedures automatically handle queue_to_queue propagation. Example:
exec DBMS_PROPAGATION_ADM.STOP_PROPAGATION('name_of_propagation');
or
exec DBMS_PROPAGATION_ADM.STOP_PROPAGATION('name_of_propagation', force=>true);

exec DBMS_PROPAGATION_ADM.START_PROPAGATION('name_of_propagation');
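As a hedged illustration of the initialization parameters listed at the start of these notes, the sketch below sets them with placeholder values; the sizes are assumptions only and should be derived from your own workload, and the hidden parameter should be set only when Oracle Support advises it.

conn / as sysdba
-- Placeholder sizes; adjust to your environment.
ALTER SYSTEM SET global_names = TRUE SCOPE=BOTH;
ALTER SYSTEM SET sga_target = 1G SCOPE=SPFILE;
ALTER SYSTEM SET streams_pool_size = 200M SCOPE=BOTH;
-- Hidden parameter; set only when directed by Oracle Support.
ALTER SYSTEM SET "_job_queue_interval" = 1 SCOPE=SPFILE;
-- Verify the current values:
select name, value from v$parameter
where  name in ('global_names','sga_target','streams_pool_size');

======= ========= ============ ============ ==============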

==>Streams Table Level Replication Setup Script


Configuring the Script
To run this script, either set your environment so the values below match yours or replace them in the script with values appropriate to your environment:
STRM1.NET = Global database name of the source (capture) site
STRM2.NET = Global database name of the target (apply) site
STRMADMIN = Streams administrator, with password strmadmin
HR.EMPLOYEES = table to be replicated to the target database

Running the Script
The script assumes that:
-- The sample HR schema is installed on the source site - STRM1.NET
-- The HR schema exists on the destination site - STRM2.NET
-- The target site table is empty

Script
/* Step 1 - Connected as the Streams Administrator, create the streams queue and the database link that will be used for propagation at STRM1.NET. */
conn strmadmin/strmadmin@strm1.net
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_name  => 'STREAMS_QUEUE',
    queue_table => 'STREAMS_QUEUE_TABLE',
    queue_user  => 'STRMADMIN');
END;
/
conn sys/oracle@strm1.net as sysdba
create public database link STRM2.NET using 'strm2.net';
conn strmadmin/strmadmin@strm1.net
create database link STRM2.NET connect to strmadmin identified by strmadmin;

/* Step 2 - Connect as the Streams Administrator in the target site STRM2.NET and create the streams queue */
conn strmadmin/strmadmin@strm2.net
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_name  => 'STREAMS_QUEUE',
    queue_table => 'STREAMS_QUEUE_TABLE',
    queue_user  => 'STRMADMIN');
END;
/

/* Step 3 - Connected to STRM1.NET, create CAPTURE and PROPAGATION rules for HR.EMPLOYEES */
conn strmadmin/strmadmin@strm1.net
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name              => 'HR.EMPLOYEES',
    streams_name            => 'STRMADMIN_PROP',
    source_queue_name       => 'STRMADMIN.STREAMS_QUEUE',
    destination_queue_name  => 'STRMADMIN.STREAMS_QUEUE@STRM2.NET',
    include_dml             => true,
    include_ddl             => true,
    source_database         => 'STRM1.NET');
END;
/

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'HR.EMPLOYEES',
    streams_type    => 'CAPTURE',
    streams_name    => 'STRMADMIN_CAPTURE',
    queue_name      => 'STRMADMIN.STREAMS_QUEUE',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'STRM1.NET');
END;
/

/* Step 4 - Connected as STRMADMIN at STRM2.NET, create APPLY rules for HR.EMPLOYEES */
conn STRMADMIN/STRMADMIN@strm2.net
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'HR.EMPLOYEES',
    streams_type    => 'APPLY',
    streams_name    => 'STRMADMIN_APPLY',
    queue_name      => 'STRMADMIN.STREAMS_QUEUE',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'STRM1.NET');
END;
/
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'STRMADMIN_APPLY',
    apply_user => 'HR');
END;
/
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'STRMADMIN_APPLY',
    parameter  => 'disable_on_error',
    value      => 'n');
END;
/

/* Step 5 - Take an export of the table at STRM1.NET */
exp USERID=SYSTEM/oracle@strm1.net TABLES=HR.EMPLOYEES FILE=hr.dmp LOG=hr_exp.log OBJECT_CONSISTENT=Y STATISTICS=NONE

/* Step 6 - Transfer the export dump file to STRM2.NET and import */
imp USERID=SYSTEM/<password>@strm2.net CONSTRAINTS=Y FULL=Y FILE=hr.dmp IGNORE=Y COMMIT=Y LOG=hr_imp.log STREAMS_INSTANTIATION=Y

/* Step 7 - Start apply and capture */
conn strmadmin/strmadmin@strm2.net
BEGIN
  DBMS_APPLY_ADM.START_APPLY(apply_name => 'STRMADMIN_APPLY');
END;
/
conn strmadmin/strmadmin@strm1.net
BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STRMADMIN_CAPTURE');
END;
/
For bidirectional Streams setup, run steps 1 through 7 after interchanging Db1 and Db2. Exercise caution when setting the instantiation SCN this time, as you may not want to export and import the data; the export option ROWS=N can be used for the instantiation of objects from DB2 to DB1.

Script Output

/* Perform changes to HR.EMPLOYEES and confirm that they are applied to the table on the destination */
conn hr/hr@strm1.net
insert into hr.employees values (99999,'TEST','TEST','TEST@oracle','1234567',sysdate,'ST_MAN',null,null,null,null);
commit;
conn hr/hr@strm2.net
select * from employees where employee_id=99999;
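The script above does not include an explicit verification step. As a hedged sketch, the following queries confirm from each site that the Streams components are enabled; the views are standard, but column availability can vary by release:

conn strmadmin/strmadmin@strm1.net
-- Source: capture and propagation should both report ENABLED
select capture_name, status, captured_scn, applied_scn from dba_capture;
select propagation_name, destination_dblink, status from dba_propagation;

conn strmadmin/strmadmin@strm2.net
-- Destination: the apply process should report ENABLED
select apply_name, status from dba_apply;

======= ========= ============ ============ ==============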

==> How To Setup One-Way SCHEMA Level Streams Replication :


Running the Sample Code
To run this script, either set your environment so the values below match yours or replace them in the script with values appropriate to your environment:
STRM1.NET = Global database name of the source (capture) site
STRM2.NET = Global database name of the target (apply) site
STRMADMIN = Streams administrator, with password strmadmin
HR = Source schema to be replicated - this schema is already installed on the source site
The sample code replicates both DML and DDL. The Streams Administrator (STRMADMIN) has been created as per Note 786528.1 "How to create STRMADMIN user and grant privileges".

/************************* BEGINNING OF SCRIPT ******************************
Run SET ECHO ON and specify the spool file for the script. Check the spool file for errors after you run this script. */
SET ECHO ON
SPOOL stream_oneway.out

/* STEP 1.- Create the streams queue and the database links that will be used for propagation. */
connect STRMADMIN/STRMADMIN@STRM1.NET
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'STREAMS_QUEUE_TABLE',
    queue_name  => 'STREAMS_QUEUE',
    queue_user  => 'STRMADMIN');
END;
/
-- CREATE DATABASE LINK AT SOURCE as SYS
conn sys/&sys_pwd_source@strm1.net as sysdba
create public database link STRM2.NET using 'strm2.net';
-- CREATE DATABASE LINK AT SOURCE as STRMADMIN
conn strmadmin/strmadmin@strm1.net
create database link STRM2.NET connect to strmadmin identified by strmadmin;
-- Create the database link at the destination database too

/* STEP 2.- Connect as the Streams Administrator in the target site strm2.net and create the streams queue */
connect STRMADMIN/STRMADMIN@STRM2.NET
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'STREAMS_QUEUE_TABLE',
    queue_name  => 'STREAMS_QUEUE',
    queue_user  => 'STRMADMIN');
END;
/

/* STEP 3.- Add apply rules for the schema at the destination database */
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'HR',
    streams_type    => 'APPLY',
    streams_name    => 'STREAM_APPLY',
    queue_name      => 'STRMADMIN.STREAMS_QUEUE',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'STRM1.NET');
END;
/

/* STEP 4.- Add capture rules for the schema HR at the source database */
CONN STRMADMIN/STRMADMIN@STRM1.NET
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'HR',
    streams_type    => 'CAPTURE',
    streams_name    => 'STREAM_CAPTURE',
    queue_name      => 'STRMADMIN.STREAMS_QUEUE',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'STRM1.NET');
END;
/

/* STEP 5.- Add propagation rules for the schema HR at the source database. This step also creates a propagation job to the destination database */
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name             => 'HR',
    streams_name            => 'STREAM_PROPAGATE',
    source_queue_name       => 'STRMADMIN.STREAMS_QUEUE',
    destination_queue_name  => 'STRMADMIN.STREAMS_QUEUE@STRM2.NET',
    include_dml             => true,
    include_ddl             => true,
    source_database         => 'STRM1.NET');
END;
/

/* STEP 6.- Export, import and instantiation of tables from the source to the destination database. If the objects are not present in the destination database, perform an export of the objects from the source database and import them into the destination database.
Export from the source database: specify the OBJECT_CONSISTENT=Y clause on the export command. By doing this, an export is performed that is consistent for each individual object at a particular system change number (SCN). */
$ exp USERID=SYSTEM/&system_pwd_source@STRM1.NET OWNER=HR FILE=hr.dmp LOG=hr_exp.log OBJECT_CONSISTENT=Y STATISTICS=NONE

/* Import into the destination database: specify the STREAMS_INSTANTIATION=Y clause on the import command. By doing this, the Streams metadata in the destination database is updated with the appropriate information corresponding to the SCN that is recorded in the export file. */
$ imp USERID=SYSTEM/&system_pwd_dest@STRM2.NET FULL=Y CONSTRAINTS=Y FILE=hr.dmp IGNORE=Y COMMIT=Y LOG=hr_imp.log STREAMS_INSTANTIATION=Y

/* If the objects are already present in the destination database, there are two ways of instantiating the objects at the destination site.
1. By means of metadata-only export/import:
   Specify ROWS=N during export.
   Specify IGNORE=Y during import, along with the import parameters above.
2. By manually instantiating the objects:
   Get the instantiation SCN at the source database: */
connect STRMADMIN/STRMADMIN@STRM1.NET
set serveroutput on

DECLARE
  iscn NUMBER;  -- Variable to hold instantiation SCN value
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_OUTPUT.PUT_LINE('Instantiation SCN is: ' || iscn);
END;
/

/* Instantiate the objects at the destination database with this SCN value. The SET_TABLE_INSTANTIATION_SCN procedure controls which LCRs for a table are to be applied by the apply process. If the commit SCN of an LCR from the source database is less than or equal to this instantiation SCN, the apply process discards the LCR; otherwise, the apply process applies the LCR. */
connect STRMADMIN/STRMADMIN@STRM2.NET
BEGIN
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    SOURCE_SCHEMA_NAME   => 'HR',
    SOURCE_DATABASE_NAME => 'STRM1.NET',
    RECURSIVE            => TRUE,
    INSTANTIATION_SCN    => &iscn);
END;
/
Enter value for iscn: <provide the SCN value obtained from the source database above>

/* STEP 7.- Specify an APPLY USER at the destination database. This is the user who applies all DML and DDL statements. The user specified in the APPLY_USER parameter must have the necessary privileges to perform DML and DDL changes on the apply objects. */
conn strmadmin/strmadmin@strm2.net
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'STREAM_APPLY',
    apply_user => 'HR');
END;
/

/* STEP 8.- Set disable_on_error to 'n' so the apply process does not abort on the first error; then start the apply process on the destination */
conn strmadmin/strmadmin@strm2.net
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'STREAM_APPLY',
    parameter  => 'disable_on_error',
    value      => 'n');
END;
/
DECLARE
  v_started NUMBER;
BEGIN
  SELECT decode(status, 'ENABLED', 1, 0) INTO v_started
  FROM   dba_apply
  WHERE  apply_name = 'STREAM_APPLY';
  IF (v_started = 0) THEN
    DBMS_APPLY_ADM.START_APPLY(apply_name => 'STREAM_APPLY');
  END IF;
END;
/

/* STEP 9.- Set up capture to retain 7 days' worth of LogMiner checkpoint information, then start the capture process on the source */
conn strmadmin/strmadmin@strm1.net
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name              => 'STREAM_CAPTURE',
    checkpoint_retention_time => 7);
END;
/
begin
  DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STREAM_CAPTURE');
end;

/

/* Check the spool results. Check the stream_oneway.out spool file to ensure that all actions finished successfully after this script is completed. */
SET ECHO OFF
SPOOL OFF
/*************************** END OF SCRIPT ******************************/

-->Sample Code Output:
/* Perform changes in tables belonging to HR on the source site and check that they are applied on the destination */
conn HR/HR@strm1.net
insert into HR.DEPARTMENTS values (99,'OTHER',205,1700);
commit;
alter table HR.EMPLOYEES add (NEWCOL VARCHAR2(10));
/* Confirm that the insert has been applied to HR.DEPARTMENTS at the destination and that HR.EMPLOYEES now has a new column */
conn HR/HR@strm2.net
select * from HR.DEPARTMENTS where department_id=99;
desc HR.EMPLOYEES;
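Because the apply parameter disable_on_error is set to 'n', failed transactions accumulate in the error queue instead of stopping the apply process. A hedged sketch for reviewing and retrying them on the destination follows; the transaction id '1.2.345' is only a placeholder, substitute the value reported by DBA_APPLY_ERROR:

conn strmadmin/strmadmin@strm2.net
-- List transactions that failed to apply
select apply_name, local_transaction_id, error_number, error_message
from   dba_apply_error;
-- After fixing the root cause, re-execute one failed transaction ...
exec DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '1.2.345', execute_as_user => false);
-- ... or retry everything in the error queue
exec DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS;
-- Transactions that can never be replayed can be removed
-- exec DBMS_APPLY_ADM.DELETE_ERROR(local_transaction_id => '1.2.345');

======= ========= ============ ============ ==============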

==>How to setup Database Level Streams Replication


Script
/* Step 1 - Connected as the Streams Administrator, create the streams queue and the database link that will be used for propagation at STRM1.NET. */
conn strmadmin/strmadmin@strm1.net
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_name  => 'STREAMS_QUEUE',
    queue_table => 'STREAMS_QUEUE_TABLE',
    queue_user  => 'STRMADMIN');
END;
/
conn sys/oracle@strm1.net as sysdba
create public database link STRM2.NET using 'strm2.net';
conn strmadmin/strmadmin@strm1.net
create database link STRM2.NET connect to strmadmin identified by strmadmin;

/* Step 2 - Connect as the Streams Administrator in the target site STRM2.NET and create the streams queue */
conn strmadmin/strmadmin@strm2.net
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_name  => 'STREAMS_QUEUE',
    queue_table => 'STREAMS_QUEUE_TABLE',
    queue_user  => 'STRMADMIN');
END;
/

/* Step 3 - Connected to STRM1.NET, create CAPTURE and PROPAGATION rules */
conn strmadmin/strmadmin@strm1.net
BEGIN
  DBMS_STREAMS_ADM.ADD_GLOBAL_PROPAGATION_RULES(
    streams_name            => 'STRMADMIN_PROP',
    source_queue_name       => 'STRMADMIN.STREAMS_QUEUE',
    destination_queue_name  => 'STRMADMIN.STREAMS_QUEUE@STRM2.NET',
    include_dml             => true,
    include_ddl             => true,
    source_database         => 'STRM1.NET');
END;
/

BEGIN
  DBMS_STREAMS_ADM.ADD_GLOBAL_RULES(
    streams_type    => 'CAPTURE',
    streams_name    => 'STRMADMIN_CAPTURE',
    queue_name      => 'STRMADMIN.STREAMS_QUEUE',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'STRM1.NET');
END;
/

/* Step 4 - Connected as STRMADMIN at STRM2.NET, create APPLY rules */
conn STRMADMIN/STRMADMIN@strm2.net
BEGIN
  DBMS_STREAMS_ADM.ADD_GLOBAL_RULES(
    streams_type    => 'APPLY',
    streams_name    => 'STRMADMIN_APPLY',
    queue_name      => 'STRMADMIN.STREAMS_QUEUE',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'STRM1.NET');
END;
/
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'STRMADMIN_APPLY',
    parameter  => 'disable_on_error',
    value      => 'n');
END;
/

/* Step 5 - Take a full export of the database at STRM1.NET */
$ exp USERID=SYSTEM/oracle@strm1.net FULL=Y FILE=hr.dmp LOG=hr_exp.log OBJECT_CONSISTENT=Y STATISTICS=NONE

/* Step 6 - Transfer the export dump file to STRM2.NET and import */
$ imp USERID=SYSTEM/<password>@strm2.net CONSTRAINTS=Y FULL=Y FILE=hr.dmp IGNORE=Y COMMIT=Y LOG=hr_imp.log STREAMS_INSTANTIATION=Y

/* Step 7 - Start apply and capture */
conn strmadmin/strmadmin@strm2.net
BEGIN
  DBMS_APPLY_ADM.START_APPLY(apply_name => 'STRMADMIN_APPLY');
END;
/
conn strmadmin/strmadmin@strm1.net
BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STRMADMIN_CAPTURE');
END;
/
For bidirectional Streams setup, run steps 1 through 7 after interchanging Db1 and Db2. Exercise caution when setting the instantiation SCN this time, as you may not want to export and import the data; the export option ROWS=N can be used for the instantiation of objects from DB2 to DB1.

Script Output
Perform DML on one of the objects and make sure it is propagated to the other site.
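As a hedged verification sketch once the global rules exist on both sites, the DBA_STREAMS_RULES view shows which rules each Streams client ended up with; run it on the site that owns the component:

conn strmadmin/strmadmin@strm1.net
-- Source: capture and propagation rules
select streams_name, streams_type, rule_type, rule_name
from   dba_streams_rules
order  by streams_name, streams_type;

conn strmadmin/strmadmin@strm2.net
-- Destination: apply rules
select streams_name, streams_type, rule_type, rule_name
from   dba_streams_rules;

======= ========= ============ ============ ==============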

==>How To Configure Streams Real-Time Downstream Environment

Creating the streams tablespace:
=========================
-- You may create the tablespace on both sides, but it is most important on the downstream site:
conn /as sysdba
CREATE TABLESPACE streams_tbs DATAFILE 'streams_tbs_01.dbf' SIZE 100M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;

Creating the streams admin on both sides:
===============================
-- Create the streams admin on both sides:
conn /as sysdba
CREATE USER strmadmin IDENTIFIED BY strmadmin
  DEFAULT TABLESPACE streams_tbs QUOTA UNLIMITED ON streams_tbs;
GRANT DBA TO strmadmin;
BEGIN
  DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
    grantee          => 'strmadmin',
    grant_privileges => true);
END;
/
-- Checking that the streams admin is created:
SELECT * FROM dba_streams_administrator;

-- Instruct LogMiner to use the streams_tbs tablespace, from the downstream site:
conn /as sysdba
exec DBMS_LOGMNR_D.SET_TABLESPACE('streams_tbs');

-- Creating the connection between source and downstream:
-- Setting the connection between the databases:
Check $TNS_ADMIN to point to the location of tnsnames.ora. Make sure that the service names of both databases exist in each other's tnsnames.ora file. Check that the listeners are working and the databases are registered. If the source database is RAC, check that all nodes have the connect identifiers to reach the downstream database. Make sure that source and target can both connect to each other; you may use TNSPING to verify.

-- Setting the SYS password:
The SYS password needs to be the same on both source and target, so that the connection between the source and downstream sites can be established successfully to send the redo data.

-- Set GLOBAL_NAMES = TRUE on both sides:
Alter system set global_names=TRUE scope=BOTH;

-- Create a dblink from downstream to source for administration purposes:
conn strmadmin/strmadmin
create database link ORCL102C.EG.ORACLE.COM connect to strmadmin identified by strmadmin using 'ORCL102C';
select * from dual@ORCL102C.EG.ORACLE.COM;

Setting parameters for downstream archiving:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=SPFILE;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='LOCATION=/home/oracle/archives/ORCL102D/standby-archives/ VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)' SCOPE=SPFILE;


LOCATION - the place where archived logs will be written from the standby redo logs coming from the source site.
VALID_FOR - specify either (STANDBY_LOGFILE,PRIMARY_ROLE) or (STANDBY_LOGFILE,ALL_ROLES).
Redo logs are stored in two locations: we will get a duplicate file on the downstream capture site because of the parameter VALID_FOR=(ONLINE_LOGFILE,ALL_ROLES) for dest_1.

-- Specify source and downstream databases for LOG_ARCHIVE_CONFIG, using the DB_UNIQUE_NAME of both sites:
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(ORCL102C,ORCL102D)' SCOPE=SPFILE;

-- Creating standby redo logs to receive redo data from the source:
-- From the source:
1) Determine the log file size used on the source database:
conn /as sysdba
select THREAD#, GROUP#, BYTES/1024/1024 from V$LOG;
Note:
- The standby log file size must exactly match (or be larger than) the source database log file size.
- The number of standby log file groups must be at least one more than the number of online log file groups on the source database.

If source is NON-RAC:
==============
For example, if the query from V$LOG showed the following:
   THREAD#     GROUP#  BYTES/1024/1024
---------- ---------- ----------------
         1          1               50
         1          2               50
         1          3               50
The output above indicates that we have 3 groups, each of size 50M. This means that we will need 4 standby log groups of at least 50M each.

-- From the downstream site:
2) Add standby logs:
-- For example, the source database has three online redo log file groups, each with a log file size of 50 MB. In this case, use the following statements to create the appropriate standby log file groups:
conn /as sysdba
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/home/oracle/archives/ORCL102D/standbylogs/slog4.rdo') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('/home/oracle/archives/ORCL102D/standbylogs/slog5.rdo') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 ('/home/oracle/archives/ORCL102D/standbylogs/slog6.rdo') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 ('/home/oracle/archives/ORCL102D/standbylogs/slog7.rdo') SIZE 50M;

3) Ensure that the standby log file groups were added successfully:
conn /as sysdba
SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$STANDBY_LOG;
-- Your output should be similar to the following:
    GROUP#    THREAD#  SEQUENCE# ARC STATUS
---------- ---------- ---------- --- ----------
         4          0          0 YES UNASSIGNED
         5          0          0 YES UNASSIGNED
         6          0          0 YES UNASSIGNED
         7          0          0 YES UNASSIGNED

If source is RAC:
==========
For example, if the source is a 2-node RAC database, the output of V$LOG will show something like this:
   THREAD#     GROUP#  BYTES/1024/1024
---------- ---------- ----------------
         1          1               50
         1          2               50
         2          3               50
         2          4               50


The output above indicates that we have two threads (instances), each with two redo log groups of 50M. This means that we will need 3 standby log groups per thread, each of at least 50M.

-- From the downstream site:
2) Add standby logs:
-- Use the following statements to create the appropriate standby log file groups:
conn /as sysdba
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 4 ('/home/oracle/archives/ORCL102D/standbylogs/slog4.rdo') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 5 ('/home/oracle/archives/ORCL102D/standbylogs/slog5.rdo') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 6 ('/home/oracle/archives/ORCL102D/standbylogs/slog6.rdo') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 7 ('/home/oracle/archives/ORCL102D/standbylogs/slog7.rdo') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 8 ('/home/oracle/archives/ORCL102D/standbylogs/slog8.rdo') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 9 ('/home/oracle/archives/ORCL102D/standbylogs/slog9.rdo') SIZE 50M;

3) Ensure that the standby log file groups were added successfully:
conn /as sysdba
SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$STANDBY_LOG;
-- Your output should be similar to the following:
    GROUP#    THREAD#  SEQUENCE# ARC STATUS
---------- ---------- ---------- --- ----------
         4          1          0 YES UNASSIGNED
         5          1          0 YES UNASSIGNED
         6          1          0 YES UNASSIGNED
         7          2          0 YES UNASSIGNED
         8          2          0 YES UNASSIGNED
         9          2          0 YES UNASSIGNED

Get the downstream database to archivelog mode:
====================================
-- Set the following parameters for the location and format of the local archives:
conn /as sysdba
ALTER SYSTEM SET log_archive_dest_1='LOCATION=/home/oracle/archives/ORCL102D/redo-archives/' SCOPE=SPFILE;
ALTER SYSTEM SET log_archive_dest_state_1 = 'ENABLE' SCOPE=SPFILE;
ALTER SYSTEM SET log_archive_format = 'ORCL102D_%t_%s_%r.arc' SCOPE=SPFILE;
-- Set the database to archivelog mode:
conn /as sysdba
shutdown immediate
startup mount
alter database archivelog;
alter database open;
-- Increase the number of archiving processes:
ALTER SYSTEM SET log_archive_max_processes=5 SCOPE=BOTH;

**************************************************
***   Preparing the Source site (ORCL102C)    ****
**************************************************
-- Enable shipping of online redo log data from the source to the downstream database:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=SPFILE;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=ORCL102D LGWR SYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCL102D' SCOPE=SPFILE;


SERVICE - the connect identifier of the downstream database, from the tnsnames.ora of the source.
LGWR ASYNC or LGWR SYNC - specify a redo transport mode. The advantage of specifying LGWR SYNC is that redo data is sent to the downstream database faster than when LGWR ASYNC is specified. You can specify LGWR SYNC for a real-time downstream capture process only.
NOREGISTER - specify this attribute so that the location of the archived redo log files is not recorded in the downstream database control file.
VALID_FOR - specify either (ONLINE_LOGFILE,PRIMARY_ROLE) or (ONLINE_LOGFILE,ALL_ROLES).
DB_UNIQUE_NAME - the value of db_unique_name of the downstream database.

-- Specify source and downstream databases for LOG_ARCHIVE_CONFIG, using the DB_UNIQUE_NAME of both sites:
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(ORCL102C,ORCL102D)' SCOPE=SPFILE;

-- Get the source database to archivelog mode:
-- Set the following parameters for the location and format of the local archives:
-- For a single instance source:
conn /as sysdba
ALTER SYSTEM SET log_archive_dest_1='LOCATION=/home/oracle/archives/ORCL102C/redo-archives/' SCOPE=SPFILE;
ALTER SYSTEM SET log_archive_dest_state_1 = 'ENABLE' SCOPE=SPFILE;
ALTER SYSTEM SET log_archive_format = 'ORCL102C_%t_%s_%r.arc' SCOPE=SPFILE;
-- Set the database to archivelog mode:
conn /as sysdba
shutdown immediate
startup mount
alter database archivelog;
alter database open;

-- For a RAC source:
conn /as sysdba
ALTER SYSTEM SET log_archive_dest_1='LOCATION=+FLASH/NODE/' SCOPE=SPFILE SID='*';
ALTER SYSTEM SET log_archive_dest_state_1 = 'ENABLE' SCOPE=SPFILE SID='*';
-- From NODE1
ALTER SYSTEM SET log_archive_format = 'NODE1_%t_%s_%r.arc' SCOPE=SPFILE SID='bless1';
-- From NODE2
ALTER SYSTEM SET log_archive_format = 'NODE2_%t_%s_%r.arc' SCOPE=SPFILE SID='bless2';
-- Shut down all nodes:
conn /as sysdba
shutdown immediate
-- Set the database to archivelog mode:
startup mount
alter database archivelog;
alter database open;
-- Start up the rest of the nodes:
conn /as sysdba
startup

Note:
- Check the alert logs of all the RAC nodes, or of the single instance, for the source database to make sure that there are no errors reported for the log shipping.
- Check in the background_dump_dest directory for traces containing the strings "lns" and "arc" and make sure they are not reporting any errors for archiving or for the connection to the downstream database.
- If any errors are shown, do not proceed until you fix these errors.
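In addition to the alert-log and trace checks above, a hedged SQL check on the source can confirm the state of the remote destination (dest_id 2 matches the LOG_ARCHIVE_DEST_2 setting used here):

conn /as sysdba
-- On the source: destination 2 should be VALID with an empty error column
SELECT dest_id, status, error FROM v$archive_dest WHERE dest_id = 2;
-- Force a log switch so redo is shipped, then re-run the query above
ALTER SYSTEM ARCHIVE LOG CURRENT;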


*********************************************
****    Setting up Streams Replication  ****
*********************************************
Creating the replicated schema at the source site, if not already created:
============================================================
conn /as sysdba
drop user mars cascade;
create user mars identified by mars;
grant connect, resource, create table to mars;
conn mars/mars
create table test(id number, name varchar2(20));

Creating the streams queue on the downstream site:
========================================
conn strmadmin/strmadmin
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.DOWNSTREAM_Q_TABLE',
    queue_name  => 'strmadmin.DOWNSTREAM_Q',
    queue_user  => 'STRMADMIN');
END;
/
-- Check the created queue:
select name, queue_table from user_queues;
Using a single queue is the best practice for queue configuration for real-time downstream capture. A single combined queue for both capture and apply is preferable, as it eliminates the redundant propagation/queue-to-queue transfer.

-- Creating the apply process at the downstream site:
conn strmadmin/strmadmin
BEGIN
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name     => 'strmadmin.DOWNSTREAM_Q',
    apply_name     => 'DOWNSTREAM_APPLY',
    apply_captured => TRUE);
END;
/
-- Checking apply info:
SELECT apply_name, status, queue_name FROM DBA_APPLY;
SELECT parameter, value, set_by_user FROM DBA_APPLY_PARAMETERS WHERE apply_name = 'DOWNSTREAM_APPLY';

-- Creating the capture process at the downstream site:
conn strmadmin/strmadmin
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name          => 'strmadmin.DOWNSTREAM_Q',
    capture_name        => 'DOWNSTREAM_CAPTURE',
    rule_set_name       => NULL,
    start_scn           => NULL,
    source_database     => 'ORCL102C.EG.ORACLE.COM',
    use_database_link   => true,        -- For administrative purposes.
    first_scn           => NULL,
    logfile_assignment  => 'implicit'); -- The capture process accepts redo data implicitly from the source.
END;
/
-- Checking the capture info:
SELECT capture_name, status from dba_capture;
SELECT parameter, value, set_by_user FROM DBA_CAPTURE_PARAMETERS;

Set capture for real-time capturing of changes:
=====================================
-- To be executed from the downstream site:


conn strmadmin/strmadmin
BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'DOWNSTREAM_CAPTURE',
    parameter    => 'downstream_real_time_mine',
    value        => 'y');
END;
/
-- Archive the current log file from the source database.
-- If the source is RAC, then do this from one of the nodes:
conn /as sysdba
ALTER SYSTEM ARCHIVE LOG CURRENT;
Note: archiving the current log file at the source database starts real-time mining of the source database redo log.
-- Now check that the status of one or more of the standby logs has changed from UNASSIGNED to ACTIVE:
conn /as sysdba
SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$STANDBY_LOG;

-- Add rules to instruct the capture process what to capture:
conn strmadmin/strmadmin
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name        => 'mars',
    streams_type       => 'capture',
    streams_name       => 'downstream_capture',
    queue_name         => 'strmadmin.downstream_q',
    include_dml        => true,
    include_ddl        => true,
    include_tagged_lcr => false,
    source_database    => 'ORCL102C.EG.ORACLE.COM',
    inclusion_rule     => TRUE);
END;
/
-- Check the created rules:
SELECT rule_name, rule_condition FROM DBA_STREAMS_SCHEMA_RULES
WHERE streams_name = 'DOWNSTREAM_CAPTURE' AND streams_type = 'CAPTURE';

-- Instantiating the replicated objects:
There are three ways to instantiate. In our example we want to replicate schema MARS, so here is what we shall do:
1) If using Data Pump to exp/imp the replicated objects from source to downstream:
-- From a source sqlplus session:
conn system/oracle
!mkdir /tmp/schema_export
!chmod 777 /tmp/schema_export
create or replace directory schema_export as '/tmp/schema_export';
!expdp system/oracle SCHEMAS=MARS DUMPFILE=schema_export:schema.dmp LOGFILE=schema_export:schema.log
-- From a downstream sqlplus session:
conn system/oracle
!mkdir /tmp/schema_import
!chmod 777 /tmp/schema_import
create or replace directory schema_import as '/tmp/schema_import';
-- Copy the dump file schema.dmp from '/tmp/schema_export' on the source site to '/tmp/schema_import' on the downstream site.
!impdp system/oracle SCHEMAS=mars DIRECTORY=schema_import DUMPFILE=schema.dmp
2) If using ordinary exp/imp to instantiate:
-- From the source:


exp system/oracle owner=mars file=mars.dump log=mars.log object_consistent=Y
-- object_consistent must be set to Y, so that the imported data is consistent with the source.
-- From the downstream:
imp system/oracle file=mars.dump full=y ignore=y STREAMS_INSTANTIATION=Y
Note: when importing with STREAMS_INSTANTIATION=Y and the export was taken with object_consistent=Y, the instantiation SCN for the apply process is set to the SCN at the time the export was taken, which ensures that data at the target is consistent with data at the source.
3) If you want to instantiate manually:
A- Create the replicated objects at the downstream site. You need to create the same objects at the downstream site, and if the tables contain data, you will need to copy it by some means, for example: insert into downstream_table select * from source_table@dblink.
B- Set the instantiation SCN for the replicated objects:
-- Run the following from the downstream site:
conn strmadmin/strmadmin
DECLARE
  iscn NUMBER;  -- Variable to hold instantiation SCN value
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@ORCL102C.EG.ORACLE.COM;  -- Get the current SCN from the source
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name   => 'mars',
    source_database_name => 'ORCL102C.EG.ORACLE.COM',
    instantiation_scn    => iscn,
    recursive            => TRUE);
END;
/
* You have to make sure that the objects at the source and downstream sites are consistent at the time you set the instantiation SCN manually, or you may end up with ORA-01403 apply errors.
-- After instantiating, check that the instantiation is done:
select * from DBA_APPLY_INSTANTIATED_OBJECTS;
select * from DBA_APPLY_INSTANTIATED_SCHEMAS;

-- Start the apply process:
conn strmadmin/strmadmin
exec DBMS_APPLY_ADM.START_APPLY(apply_name => 'DOWNSTREAM_APPLY');
select apply_name, status from dba_apply;
-- Start the capture process:
conn strmadmin/strmadmin
exec DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'DOWNSTREAM_CAPTURE');
select capture_name, status from dba_capture;

***********************
***   Testing...   ****
***********************
-- From the source:
conn mars/mars
insert into mars.test values(1,'Test message');
commit;


-- From the downstream:
conn mars/mars
select * from mars.test;
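Once changes are flowing, a hedged way to watch the downstream components in real time is to query the dynamic Streams views on the downstream site; the capture STATE column should cycle through values such as CAPTURING CHANGES or WAITING FOR REDO:

conn strmadmin/strmadmin
-- Downstream capture: current state and progress
SELECT capture_name, state, total_messages_captured, capture_message_number
FROM   v$streams_capture;
-- Downstream apply reader: messages dequeued so far
SELECT apply_name, state, total_messages_dequeued
FROM   v$streams_apply_reader;

======= ========= ============ ============ ==============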

==> HealthCheck For Streams :


streams_hc_10GR2.sql


streams_hc_11_2_0_2.sql


Check Apply:

Wp_appy.sql

Change Data Capture Health Check:

cdc_healthcheck.sql

Stream_performance_Advisor.sql

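The scripts listed above are downloads from My Oracle Support and are not reproduced here. As a lighter-weight, hedged alternative on 11g, the built-in Streams Performance Advisor can be invoked directly; the package and the DBA_STREAMS_TP_* views below are the documented interface, but availability depends on your release:

conn strmadmin/strmadmin
-- Take a performance snapshot of all Streams components and paths
exec DBMS_STREAMS_ADVISOR_ADM.ANALYZE_CURRENT_PERFORMANCE;
-- Components discovered by the advisor
select component_id, component_name, component_type from dba_streams_tp_component;
-- Any bottleneck identified on a stream path
select * from dba_streams_tp_path_bottleneck;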

======= ========= ============ ============ ==============

