Streams Troubleshooting Guide (Doc ID 883372.1)

PURPOSE

This note explains the diagnostics required when troubleshooting Streams replication
problems.

SCOPE & APPLICATION

To be used by DBAs as a reference when troubleshooting the Streams-related processes.

CONTENT

Troubleshooting the Capture Process
Troubleshooting the Propagation Process
Troubleshooting the Apply Process
Troubleshooting Rules and Rule-Based Transformations Problems

Troubleshooting the Capture Process

Is the capture process enabled?

Verify if the capture process is ENABLED using the following query:

prompt
prompt ++ <a name="Capture Processes">CAPTURE PROCESSES IN DATABASE</a> ++
col capture_name HEADING 'Capture|Name' format a30 wrap
col status HEADING 'Status' format a10 wrap
col QUEUE HEADING 'Queue' format a25 wrap
col RSN HEADING 'Positive|Rule Set' format a25 wrap
col RSN2 HEADING 'Negative|Rule Set' format a25 wrap
col capture_type HEADING 'Capture|Type' format a10 wrap
col error_message HEADING 'Capture|Error Message' format a60 word
col logfile_assignment HEADING 'Logfile|Assignment'
col checkpoint_retention_time HEADING 'Days to |Retain|Checkpoints'
col Status_change_time HEADING 'Status|Timestamp'
col error_number HEADING 'Error|Number'
col version HEADING 'Version'

SELECT capture_name, queue_owner||'.'||queue_name QUEUE, capture_type, status,
       rule_set_owner||'.'||rule_set_name RSN,
       negative_rule_set_owner||'.'||negative_rule_set_name RSN2,
       checkpoint_retention_time, version, logfile_assignment, error_number,
       status_change_time, error_message
FROM DBA_CAPTURE;

If the capture process is DISABLED, restart the process using:

exec dbms_capture_adm.start_capture('<capture_name>');

If the capture process is ABORTED, check the error_number and error_message columns for
the error that caused the process to abort.

Some typical capture errors are:

ORA-1323 Invalid State

This error typically indicates that an archived log file is missing.

Check the Logminer log table (SYSTEM.LOGMNR_LOG$) for the last log file for each thread,
then look at the operating system files for the next log file in sequence. Typically, this is the log
file that cannot be found.
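A query along these lines can show the last registered log for each thread; this is a sketch against the SYSTEM.LOGMNR_LOG$ base table, and the column names may vary by release:

-- Last log file registered per thread (column names assumed; verify in your release)
SELECT thread#, MAX(sequence#) last_sequence
FROM system.logmnr_log$
GROUP BY thread#;

Compare the result with the archived log files present on disk to find the first missing sequence.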

If this does not help, turn on LogMiner and capture tracing, restart the capture process, and
look at the capture trace file in the bdump directory.

ALTER SYSTEM SET EVENTS '1349 trace name context forever, level 7';
exec dbms_capture_adm.set_parameter('yourcapturename','trace_level','127');
exec dbms_capture_adm.start_capture('yourcapturename');

To turn off tracing:

ALTER SYSTEM SET EVENTS '1349 trace name context off';


exec dbms_capture_adm.set_parameter('yourcapturename','trace_level',null);

ORA-1347 Supplemental log data no longer found

This error indicates that minimum supplemental logging is not enabled for the instance. This
occurs most commonly on 9iR2 RAC instances. When configuring supplemental logging for RAC
in 9iR2, it is necessary to issue the ALTER DATABASE command at each instance in the cluster
BEFORE creating the capture process. In 10g, supplemental logging can be initiated from a
single instance so it is no longer necessary to issue the ALTER DATABASE ADD SUPPLEMENTAL
LOG DATA command at multiple instances. After issuing the ALTER DATABASE ADD
SUPPLEMENTAL LOG DATA, be sure to issue an ALTER SYSTEM ARCHIVE LOG CURRENT or
ALTER SYSTEM SWITCH LOGFILE.
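For example (a sketch of the command sequence described above; the verification query confirms the setting took effect):

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER SYSTEM ARCHIVE LOG CURRENT;

-- Verify minimal supplemental logging is enabled (expected: YES)
SELECT supplemental_log_data_min FROM v$database;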
This error can also be signaled if supplemental logging has been dropped, either explicitly or
implicitly. ALTER DATABASE DROP SUPPLEMENTAL LOG DATA explicitly disables supplemental
logging. If this command is issued, the capture process will abort with an ORA-1347 error.

Supplemental logging can be implicitly disabled by DML statements that use a BUFFER hint.
The BUFFER hint is frequently used in TPCC benchmarks. Logging can also be disabled when
using a TEMPORARY TABLE and CLOB in combination. This is reported as bug 3172456 and
fixed in 9.2.0.6.

ORA-1372: Insufficient processes for specified LogMiner operation

This error indicates that there are not enough processes available to start the capture process.
Check the following:

1. Verify that the init.ora parameter parallel_max_servers is sufficient to start the capture and
apply processes. For each capture process defined on the database, the number of processes
required is 2 + the parallelism defined for the capture. If the capture parallelism parameter is
set to 1 (the default), then 3 processes are required to start the capture; for a capture
parallelism value of 3, 2+3 = 5 processes are required.
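The following sketch compares the configured limit with the parallelism of each capture process (it assumes DBA_CAPTURE_PARAMETERS holds the per-capture PARALLELISM value):

SELECT value parallel_max_servers
FROM v$parameter WHERE name = 'parallel_max_servers';

-- Parallelism configured for each capture process
SELECT capture_name, value parallelism
FROM dba_capture_parameters
WHERE parameter = 'PARALLELISM';

The sum of (2 + parallelism) across all capture processes should not exceed parallel_max_servers.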

2. Check if the database resource manager is used for this database. Check for any plans that
have limitations set for parallel processes by running the following:

select PLAN, TYPE, PARALLEL_DEGREE_LIMIT_P1 from DBA_RSRC_PLAN_DIRECTIVES;

PARALLEL_DEGREE_LIMIT_P1 specifies a limit on the degree of parallelism for any operation;
the default is UNLIMITED.

If this is the cause, you may need to disable the plan or set the parallelism value high enough
for the SYSTEM_PLAN.

ORA-23605 invalid value for the Streams parameter FILE_NAME

A capture abort with ORA-23605 can occur for one of the following reasons:

1. Invalid value for the Streams parameter FILE_NAME indicates an inconsistency between the
capture logminer session and existing logminer sessions. Generally, the registered archived
logs view is empty when this occurs.

This error can occur when attempting to add a logfile to a logminer session. To confirm this
problem run the following query:

select logminer_id from dba_capture c
where not exists (select 1 from system.logmnr_session$ s where s.session# = c.logminer_id);

If rows are returned, this is most likely the problem. Check whether metadata was removed at
some point or an incomplete drop of the capture process was performed. To fix, drop the
existing capture process with the non-existent logminer session, then recreate the capture
process.
2. Attempting to use a dictionary build from a previously deleted logfile (bug 5278539, fixed in
10.2.0.4, 11.1). In this situation, there are multiple entries in the V$ARCHIVED_LOG view for
the same logfile, with the name being NULL for deleted logfiles. The patch avoids checking
DELETED entries in V$ARCHIVED_LOG:

select name, status from v$archived_log;

What is the status of the capture process?

If the capture process is ENABLED check its current status using the following query:

prompt ++ <a name="Capture Statistics">CAPTURE STATISTICS</a> ++

COLUMN PROCESS_NAME HEADING "Capture|Process|Number" FORMAT A7
COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A10
COLUMN SID HEADING 'Session|ID' FORMAT 99999999999999
COLUMN SERIAL# HEADING 'Session|Serial|Number'
COLUMN STATE HEADING 'State' FORMAT A17

SELECT SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS_NAME,
       c.CAPTURE_NAME,
       c.STARTUP_TIME,
       c.SID,
       c.SERIAL#,
       c.STATE
FROM gV$STREAMS_CAPTURE c, gV$SESSION s
WHERE c.SID = s.SID AND c.SERIAL# = s.SERIAL#;

If the status of the capture process is CAPTURING CHANGES, verify if messages are being
enqueued into the capture queue.

Are LCRs being enqueued into the capture queue?

The following query can be used to verify if capture is enqueuing messages to the queue:

SELECT SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS_NAME,
       c.CAPTURE_NAME,
       c.STARTUP_TIME,
       c.SID,
       c.SERIAL#,
       c.STATE,
       c.STATE_CHANGED_TIME,
       c.TOTAL_MESSAGES_CAPTURED,
       c.TOTAL_MESSAGES_ENQUEUED,
       c.TOTAL_MESSAGES_CREATED
FROM gV$STREAMS_CAPTURE c, gV$SESSION s
WHERE c.SID = s.SID AND c.SERIAL# = s.SERIAL#;

Are capture rules defined properly?


If the capture process is capturing changes but messages are not being enqueued to the
capture queue, verify if capture rules are defined properly using the following queries:

prompt ++ STREAMS TABLE SUBSETTING RULES ++

col NAME Heading 'Capture Name' format a25 wrap
col object format A25 WRAP
col source_database format a15 wrap
col RULE format a35 wrap
col TYPE format a15 wrap
col dml_condition format a40 wrap

select streams_name NAME, schema_name||'.'||object_name OBJECT,
       RULE_TYPE || ' TABLE RULE' TYPE,
       rule_owner||'.'||rule_name RULE,
       DML_CONDITION, SUBSETTING_OPERATION
from dba_streams_rules where streams_type = 'CAPTURE' and
(dml_condition is not null or subsetting_operation is not null);

prompt
prompt ++ CAPTURE RULES BY RULE SET ++
col capture_name format a25 wrap heading 'Capture|Name'
col RULE_SET format a25 wrap heading 'Rule Set|Name'
col RULE_NAME format a25 wrap heading 'Rule|Name'
col condition format a50 wrap heading 'Rule|Condition'
set long 4000
REM break on rule_set
select c.capture_name, rsr.rule_set_owner||'.'||rsr.rule_set_name RULE_SET,
rsr.rule_owner||'.'||rsr.rule_name RULE_NAME, r.rule_condition CONDITION
from dba_rule_set_rules rsr, DBA_RULES r ,DBA_CAPTURE c
where rsr.rule_name = r.rule_name and rsr.rule_owner = r.rule_owner and
rsr.rule_set_owner=c.rule_set_owner and rsr.rule_set_name=c.rule_set_name and
rsr.rule_set_name in
(select rule_set_name from dba_capture) order by rsr.rule_set_owner,rsr.rule_set_name;

prompt
prompt ++ CAPTURE RULES IN NEGATIVE RULE SET ++
select c.capture_name, rsr.rule_set_owner||'.'||rsr.rule_set_name RULE_SET
,rsr.rule_owner||'.'||rsr.rule_name RULE_NAME, r.rule_condition CONDITION
from dba_rule_set_rules rsr, DBA_RULES r ,DBA_CAPTURE c
where rsr.rule_name = r.rule_name and rsr.rule_owner = r.rule_owner and
rsr.rule_set_owner=c.negative_rule_set_owner and
rsr.rule_set_name=c.negative_rule_set_name and
rsr.rule_set_name in
(select negative_rule_set_name rule_set_name from dba_capture) order by
rsr.rule_set_owner,rsr.rule_set_name;

Are there any overlapping rules?


If capture rules are set up correctly and capture is still not enqueuing messages, verify if there
are any overlapping rules. The following PL/SQL code can be used for that:

set serveroutput on
declare
overlap_rules boolean := FALSE;
verbose boolean := TRUE;
cursor overlapping_rules is
select a.streams_name sname, a.streams_type stype,
a.rule_set_owner rule_set_owner, a.rule_set_name rule_set_name,
a.rule_owner owner1, a.rule_name name1, a.streams_rule_type type1,
b.rule_owner owner2, b.rule_name name2, b.streams_rule_type type2
from dba_streams_rules a, dba_streams_rules b
where a.rule_set_owner = b.rule_set_owner
and a.rule_set_name = b.rule_set_name
and a.streams_name = b.streams_name and a.streams_type = b.streams_type
and a.rule_type = b.rule_type
and (a.subsetting_operation is null or b.subsetting_operation is null)
and (a.rule_owner != b.rule_owner or a.rule_name != b.rule_name)
and ((a.streams_rule_type = 'GLOBAL' and b.streams_rule_type
in ('SCHEMA', 'TABLE') and a.schema_name = b.schema_name)
or (a.streams_rule_type = 'SCHEMA' and b.streams_rule_type = 'TABLE'
and a.schema_name = b.schema_name)
or (a.streams_rule_type = 'TABLE' and b.streams_rule_type = 'TABLE'
and a.schema_name = b.schema_name and a.object_name = b.object_name
and a.rule_name < b.rule_name)
or (a.streams_rule_type = 'SCHEMA' and b.streams_rule_type = 'SCHEMA'
and a.schema_name = b.schema_name and a.rule_name < b.rule_name)
or (a.streams_rule_type = 'GLOBAL' and b.streams_rule_type = 'GLOBAL'
and a.rule_name < b.rule_name))
order by a.rule_name;
begin
for rec in overlapping_rules loop
overlap_rules := TRUE;
dbms_output.put_line('+ WARNING: The rule ''' || rec.owner1 || '''.''' || rec.name1 ||
  ''' and ''' || rec.owner2 || '''.''' || rec.name2 || ''' from rule set ''' ||
  rec.rule_set_owner || '''.''' || rec.rule_set_name || ''' overlap.');
end loop;
if overlap_rules and verbose then
dbms_output.put_line('+Overlapping rules are a problem especially when rule-based transformations exist.');
dbms_output.put_line('+Streams makes no guarantees of which rule in a rule set will evaluate to TRUE,');
dbms_output.put_line('+thus overlapping rules will cause inconsistent behavior, and should be avoided.');
end if;
dbms_output.put_line('+');
end;
/

Is the capture paused for flow control?

If the capture is PAUSED FOR FLOW CONTROL, verify if messages that have been enqueued
have been browsed using the following queries:

select * from gv$buffered_publishers;

Check the current number of messages in the queue and compare it to the number of
unbrowsed messages. If messages have been browsed and are not getting removed from the
queue then verify if aq_tm_processes is set explicitly to 0 in the source or target databases:

declare
mycheck number;
begin
select 1 into mycheck from v$parameter where name = 'aq_tm_processes' and value = '0' and
(ismodified <> 'FALSE' OR isdefault='FALSE');
if mycheck = 1 then
dbms_output.put_line('+ERROR: The parameter ''aq_tm_processes'' should not be explicitly set to 0!');
dbms_output.put_line('+Queue monitoring is disabled for all queues.');
dbms_output.put_line('+To resolve this problem, set the value to 1 using: ALTER SYSTEM SET AQ_TM_PROCESSES=1;');
end if;
exception when no_data_found then null;
end;
/

If messages are not being browsed, check the status of the propagation and apply processes.

For additional information, please review the following note:

Note 746247.1 Troubleshooting Streams Capture when status is Paused For Flow Control

Is the capture waiting for REDO or in DICTIONARY INITIALIZATION state?

Turn on logminer and capture tracing and restart capture. Look at the capture trace file in the
bdump directory.

ALTER SYSTEM SET EVENTS '1349 trace name context forever, level 7';
exec dbms_capture_adm.set_parameter('yourcapturename','trace_level','127');
exec dbms_capture_adm.start_capture('yourcapturename');

To turn off tracing:

ALTER SYSTEM SET EVENTS '1349 trace name context off';


exec dbms_capture_adm.set_parameter('yourcapturename','trace_level',null);

The trace will indicate which archive is required by the capture process. Verify if the archive
exists in the archive_destination specified for the database. If the file exists, verify if it is
registered with the capture process using the following query:

COLUMN CONSUMER_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A10
COLUMN SEQUENCE# HEADING 'Sequence|Number' FORMAT 999999
COLUMN NAME HEADING 'Archived Redo Log|File Name' format a35
COLUMN FIRST_SCN HEADING 'Archived Log|First SCN'
COLUMN FIRST_TIME HEADING 'Archived Log Begin|Timestamp'
COLUMN NEXT_SCN HEADING 'Archived Log|Last SCN'
COLUMN NEXT_TIME HEADING 'Archived Log Last|Timestamp'
COLUMN MODIFIED_TIME HEADING 'Archived Log|Registered Time'
COLUMN DICTIONARY_BEGIN HEADING 'Dictionary|Build|Begin' format A6
COLUMN DICTIONARY_END HEADING 'Dictionary|Build|End' format A6
COLUMN PURGEABLE HEADING 'Purgeable|Archive|Log' format a9
SELECT r.CONSUMER_NAME,
r.SOURCE_DATABASE,
r.SEQUENCE#,
r.NAME,
r.first_scn,
r.FIRST_TIME,
r.next_scn,
r.next_time,
r.MODIFIED_TIME,
r.DICTIONARY_BEGIN,
r.DICTIONARY_END,
r.purgeable
FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
WHERE r.CONSUMER_NAME = c.CAPTURE_NAME
ORDER BY source_database, consumer_name, r.first_scn;

If the archive is not registered, check the alert.log for any errors during registration and try
registering it manually using:

alter database register logfile '<path/file_name>' for '<capture_name>';

Is the capture process current?

The following query can be used to determine the message enqueuing latency of each capture
process on the database:

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A10
COLUMN LATENCY_SECONDS HEADING 'Latency|in|Seconds' FORMAT 999999
COLUMN CREATE_TIME HEADING 'Message Creation|Time' FORMAT A20
COLUMN ENQUEUE_TIME HEADING 'Enqueue Time' FORMAT A20
COLUMN ENQUEUE_MESSAGE_NUMBER HEADING 'Message|Number' FORMAT 999999
SELECT CAPTURE_NAME,
(ENQUEUE_TIME-ENQUEUE_MESSAGE_CREATE_TIME)*86400 LATENCY_SECONDS,
TO_CHAR(ENQUEUE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') CREATE_TIME,
TO_CHAR(ENQUEUE_TIME, 'HH24:MI:SS MM/DD/YY') ENQUEUE_TIME,
ENQUEUE_MESSAGE_NUMBER
FROM V$STREAMS_CAPTURE;

Are there any long running transactions?

The following query can be used to identify any long running transactions in the system:

prompt
prompt ++ Current Long Running Transactions ++
prompt Current transactions open for more than 20 minutes
prompt
col runlength HEAD 'Txn Open|Minutes' format 9999.99
col sid HEAD 'Session' format a13
col xid HEAD 'Transaction|ID' format a18
col terminal HEAD 'Terminal' format a10
col program HEAD 'Program' format a27 wrap
select t.inst_id, sid||','||serial# sid,xidusn||'.'||xidslot||'.'||xidsqn xid,
(sysdate - start_date ) * 1440 runlength ,terminal,program from gv$transaction t, gv$session s
where t.addr=s.taddr and (sysdate - start_date) * 1440 > 20;

Long running transactions are also reported in the alert.log.

Are there any large transactions being reported in the system?

Check the alert.log for any messages related to large transactions. The alert.log shows when
the large transaction was identified and whether it has been committed or rolled back.

If no commit or rollback for the transaction has been reported in the alert.log, the
transaction is still running.
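To spot a large open transaction directly, open transactions can be ranked by the undo they consume; this is a sketch, on the assumption that USED_UBLK and USED_UREC grow with transaction size:

SELECT t.inst_id, s.sid||','||s.serial# session_id,
       t.xidusn||'.'||t.xidslot||'.'||t.xidsqn xid,
       t.used_ublk undo_blocks, t.used_urec undo_records
FROM gv$transaction t, gv$session s
WHERE t.addr = s.taddr
ORDER BY t.used_ublk DESC;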

Troubleshooting the Propagation Process

Does the Propagation Use the Correct Source and Destination Queue?

Make sure the propagation has been configured properly to propagate messages from the
correct source queue to the correct destination queue, and using a valid database link.

Check how the propagation has been defined:

- Queue-to-Database Link : The propagation is defined by a source queue and a database link
pair. This is the default. The QUEUE_TO_QUEUE parameter is set to FALSE in this case.

- Queue-to-Queue : The propagation is defined by a source queue and destination queue pair.
The QUEUE_TO_QUEUE parameter is set to TRUE.
From the Healthcheck report this can be visualized in section "++ PROPAGATIONS IN
DATABASE ++"

COLUMN 'Source Queue' FORMAT A39
COLUMN 'Destination Queue' FORMAT A39
COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A35
column queue_to_queue HEADING 'Q-2-Q'
column error_message HEADING 'Error Message'
column error_date HEADING 'Error Date'

SELECT p.propagation_name, p.SOURCE_QUEUE_OWNER ||'.'||
       p.SOURCE_QUEUE_NAME ||'@'||
       g.GLOBAL_NAME "Source Queue",
p.DESTINATION_QUEUE_OWNER ||'.'||
p.DESTINATION_QUEUE_NAME ||'@'||
p.DESTINATION_DBLINK "Destination Queue",
p.queue_to_queue,
p.status,
p.error_date,
p.error_message
FROM DBA_PROPAGATION p, GLOBAL_NAME g;

Is the Propagation Enabled?

For a propagation job to propagate messages, the propagation must be enabled. If messages
are not being propagated by a propagation as expected, then the propagation might not be
enabled.

Check:

- The propagation has an associated Jxxx Process

- whether the propagation is ENABLED, DISABLED, or ABORTED

- The date of the last error, if there are any propagation errors and the error number/ error
message of the last error

From the Healthcheck report this can be visualized in section "++ SCHEDULE FOR EACH
PROPAGATION++"

prompt
COLUMN PROPAGATION_NAME Heading 'Propagation|Name' format a17 wrap
COLUMN START_DATE HEADING 'Start Date'
COLUMN PROPAGATION_WINDOW HEADING 'Duration|in Seconds' FORMAT 9999999999999999
COLUMN NEXT_TIME HEADING 'Next|Time' FORMAT A8
COLUMN LATENCY HEADING 'Latency|in Seconds' FORMAT 9999999999999999
COLUMN SCHEDULE_DISABLED HEADING 'Status' FORMAT A8
COLUMN PROCESS_NAME HEADING 'Process' FORMAT A8
COLUMN FAILURES HEADING 'Number of|Failures' FORMAT 99
COLUMN LAST_ERROR_MSG HEADING 'Error Message' FORMAT A50
COLUMN TOTAL_BYTES HEADING 'Total Bytes|Propagated' FORMAT 9999999999999999
COLUMN CURRENT_START_DATE HEADING 'Current|Start' FORMAT A17
COLUMN LAST_RUN_DATE HEADING 'Last|Run' FORMAT A17
COLUMN NEXT_RUN_DATE HEADING 'Next|Run' FORMAT A17
COLUMN LAST_ERROR_DATE HEADING 'Last|Error' FORMAT A17
column message_delivery_mode HEADING 'Message|Delivery|Mode'
column queue_to_queue HEADING 'Q-2-Q'

SELECT p.propagation_name, TO_CHAR(s.START_DATE, 'HH24:MI:SS MM/DD/YY') START_DATE,
       s.PROPAGATION_WINDOW,
s.NEXT_TIME,
s.LATENCY,
DECODE(s.SCHEDULE_DISABLED,
'Y', 'Disabled',
'N', 'Enabled') SCHEDULE_DISABLED,
s.PROCESS_NAME, s.total_bytes,
s.FAILURES,
s.message_delivery_mode,
p.queue_to_queue,
s.LAST_ERROR_MSG
FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
WHERE p.DESTINATION_DBLINK =
NVL(REGEXP_SUBSTR(s.destination, '[^@]+', 1, 2), s.destination)
AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
AND s.QNAME = p.SOURCE_QUEUE_NAME order by message_delivery_mode,
propagation_name;

SELECT p.propagation_name, message_delivery_mode,
       TO_CHAR(s.LAST_RUN_DATE, 'HH24:MI:SS MM/DD/YY') LAST_RUN_DATE,
TO_CHAR(s.CURRENT_START_DATE, 'HH24:MI:SS MM/DD/YY') CURRENT_START_DATE,
TO_CHAR(s.NEXT_RUN_DATE, 'HH24:MI:SS MM/DD/YY') NEXT_RUN_DATE,
TO_CHAR(s.LAST_ERROR_DATE, 'HH24:MI:SS MM/DD/YY') LAST_ERROR_DATE
FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
WHERE p.DESTINATION_DBLINK =
NVL(REGEXP_SUBSTR(s.destination, '[^@]+', 1, 2), s.destination)
AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
AND s.QNAME = p.SOURCE_QUEUE_NAME order by message_delivery_mode,
propagation_name;

Are There Enough Job Queue Processes?

In 10.2, propagation jobs use job queue processes to propagate messages. Make sure the
JOB_QUEUE_PROCESSES initialization parameter is set to 2 or higher in each database instance
that runs propagations.
It should be set to a value that is high enough to accommodate all of the jobs that run
simultaneously.

Check DBA_JOBS to ensure that:

- It shows a JOB like:

next_date := sys.dbms_aqadm.aq$_propaq(job);

- This associated propagation JOB is not broken (BROKEN flag is N),

- The LOG_USER, PRIV_USER, SCHEMA_USER for this propagation JOB is SYS

prompt
set recsep each
set recsepchar =
select * from dba_jobs;
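To narrow the output to the propagation jobs themselves, the WHAT column can be filtered; a sketch, assuming the job text has the form shown above:

SELECT job, what, broken, failures, log_user, priv_user, schema_user
FROM dba_jobs
WHERE what LIKE '%aq$_propaq%';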

In 11.1, AQ propagation uses Oracle Scheduler, enabling AQ propagation to take advantage of
Scheduler features. Job queue process parameters need not be set in Oracle Database 11g
for propagation to work. Oracle Scheduler automatically starts up the required number of
slaves for the existing propagation schedules.

prompt
select * from dba_scheduler_jobs;

Check the Trace Files and Alert Log for Problems

Messages about propagation are recorded in trace files for the database in which the
propagation job is running. A propagation job runs on the database containing the source
queue in the propagation. These trace file messages can help you to identify and resolve
problems in a Streams environment.

All trace files for background processes are written to the destination directory specified by
the initialization parameter BACKGROUND_DUMP_DEST. The names of trace files are
operating system specific, but each file usually includes the name of the process writing the
file.
Each propagation uses a propagation job that depends on the job queue coordinator process
and a job queue process. The job queue coordinator process is named cjqnn, where nn is the
job queue coordinator process number, and a job queue process is named jnnn, where nnn is
the job queue process number.

For example, on some operating systems, if the system identifier for a database running a
propagation job is hqdb and the job queue coordinator process is 01, then the trace file for the
job queue coordinator process starts with hqdb_cjq01. Similarly, on the same database, if a
job queue process is 001, then the trace file for the job queue process starts with
hqdb_j001. You can check the process name by querying the PROCESS_NAME column in the
DBA_QUEUE_SCHEDULES data dictionary view.

Determining the Rule Sets for Each Propagation

Make sure the rule sets and rules are set up properly according to the requirements.

From the Healthcheck report this can be visualized in sections:

++ PROPAGATION RULE SETS IN DATABASE ++

COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A35
COLUMN Positive HEADING 'Positive|Rule Set' FORMAT A35
COLUMN Negative HEADING 'Negative|Rule Set' FORMAT A35

SELECT PROPAGATION_NAME, RULE_SET_OWNER||'.'||RULE_SET_NAME Positive,
       NEGATIVE_RULE_SET_OWNER||'.'||NEGATIVE_RULE_SET_NAME Negative
FROM DBA_PROPAGATION;

++ STREAMS PROPAGATION RULES CONFIGURED ++

col NAME Heading 'Name' format a25 wrap
col PropNAME format a25 Heading 'Propagation Name'
col object format a25 wrap
col source_database format a15 wrap
col RULE format a35 wrap
col TYPE format a15 wrap
col dml_condition format a40 wrap

select streams_name NAME, schema_name||'.'||object_name OBJECT,
       rule_set_type,
SOURCE_DATABASE,
STREAMS_RULE_TYPE ||' '||Rule_type TYPE ,
INCLUDE_TAGGED_LCR,
rule_owner||'.'||rule_name RULE
from dba_streams_rules where streams_type = 'PROPAGATION'
order by name,object, source_database, rule_set_type,rule;

++ STREAMS TABLE SUBSETTING RULES ++

col NAME format a25 wrap
col object format A25 WRAP
col source_database format a15 wrap
col RULE format a35 wrap
col TYPE format a15 wrap
col dml_condition format a40 wrap

select streams_name NAME, schema_name||'.'||object_name OBJECT,
       RULE_TYPE || ' TABLE RULE' TYPE,
rule_owner||'.'||rule_name RULE,
DML_CONDITION , SUBSETTING_OPERATION
from dba_streams_rules where streams_type = 'PROPAGATION' and (dml_condition is not null
or subsetting_operation is not null);

++ PROPAGATION RULES BY RULE SET ++

col RULE_SET format a25 wrap
col RULE_NAME format a25 wrap
col condition format a60 wrap
set long 4000
REM break on RULE_SET

select rsr.rule_set_owner||'.'||rsr.rule_set_name RULE_SET
,rsr.rule_owner||'.'||rsr.rule_name RULE_NAME,
r.rule_condition CONDITION from
dba_rule_set_rules rsr, dba_rules r
where rsr.rule_name = r.rule_name and rsr.rule_owner = r.rule_owner and rule_set_name in
(select rule_set_name from dba_propagation) order by rsr.rule_set_owner,rsr.rule_set_name;

++ PROPAGATION RULES IN NEGATIVE RULE SET ++

select c.propagation_name, rsr.rule_set_owner||'.'||rsr.rule_set_name RULE_SET,
       rsr.rule_owner||'.'||rsr.rule_name RULE_NAME,
r.rule_condition CONDITION from
dba_rule_set_rules rsr, DBA_RULES r ,DBA_PROPAGATION c
where rsr.rule_name = r.rule_name and rsr.rule_owner = r.rule_owner and
rsr.rule_set_owner=c.negative_rule_set_owner and
rsr.rule_set_name=c.negative_rule_set_name
and rsr.rule_set_name in
(select negative_rule_set_name rule_set_name from dba_propagation) order by
rsr.rule_set_owner,rsr.rule_set_name;

++ PROPAGATION RULE TRANSFORMATIONS BY RULE SET ++

col RULE_SET format a25 wrap
col RULE_NAME format a25 wrap
col action_context_name format a32 wrap
col action_context_value format a32 wrap
REM break on RULE_SET

select rsr.rule_set_owner||'.'||rsr.rule_set_name RULE_SET, r.* from
       dba_rule_set_rules rsr, dba_streams_transformations r
where
r.rule_name = rsr.rule_name and r.rule_owner = rsr.rule_owner and rule_set_name in
(select rule_set_name from dba_propagation)
order by rsr.rule_set_owner,rsr.rule_set_name, r.rule_owner, r.rule_name,transform_type
desc, step_number, precedence;

Determining the Total Number of Messages and Bytes Propagated

To determine the number of messages sent by a propagation, as well as the number of
acknowledgements being returned from the target site, query the V$PROPAGATION_SENDER
view at the source site and the V$PROPAGATION_RECEIVER view at the destination site.

The following queries display information for each propagation:

++ EVENTS AND BYTES PROPAGATED FOR EACH PROPAGATION ++

COLUMN Elapsed_propagation_TIME HEADING 'Elapsed |Propagation Time|(Seconds)' FORMAT 9999999999999999
COLUMN TOTAL_NUMBER HEADING 'Total |Events|Propagated' FORMAT 9999999999999999
COLUMN TOTAL_BYTES HEADING 'Total Bytes|Propagated' FORMAT 9999999999999999
COLUMN SCHEDULE_STATUS HEADING 'Schedule|Status'
column elapsed_dequeue_time HEADING 'Elapsed|Dequeue Time|(Seconds)'
column elapsed_pickle_time HEADING 'Elapsed|Pickle Time|(Seconds)'
column total_time HEADING 'Total Time|(Seconds)'
column high_water_mark HEADING 'High|Water|Mark'
column acknowledgement HEADING 'Target |Ack'

SELECT p.propagation_name, q.message_delivery_mode, DECODE(p.STATUS,
       'DISABLED', 'Disabled',
'ENABLED', 'Enabled') SCHEDULE_STATUS,
q.instance,
q.total_number TOTAL_NUMBER, q.TOTAL_BYTES ,
q.elapsed_dequeue_time/100 elapsed_dequeue_time, q.elapsed_pickle_time/100
elapsed_pickle_time,
q.total_time/100 total_time
FROM DBA_PROPAGATION p, dba_queue_schedules q
WHERE p.DESTINATION_DBLINK =
NVL(REGEXP_SUBSTR(q.destination, '[^@]+', 1, 2), q.destination)
AND q.SCHEMA = p.SOURCE_QUEUE_OWNER
AND q.QNAME = p.SOURCE_QUEUE_NAME
order by q.message_delivery_mode, p.propagation_name;

++ PROPAGATION SENDER STATISTICS ++

col queue_id HEADING 'Queue ID'
col queue_schema HEADING 'Source|Queue|Owner'
col queue_name HEADING 'Source|Queue|Name'
col dst_queue_schema HEADING 'Destination|Queue|Owner'
col dst_queue_name HEADING 'Destination|Queue|Name'
col dblink Heading 'Destination|Database|Link'
col total_msgs HEADING 'Total|Messages'
col max_num_per_win HEADING 'Max Msgs|per|Window'
col max_size HEADING 'Max|Size'
col src_queue_schema HEADING 'Source|Queue|Owner'
col src_queue_name HEADING 'Source|Queue|Name'
column elapsed_dequeue_time HEADING 'Elapsed|Dequeue Time|(CentiSecs)'
column elapsed_pickle_time HEADING 'Elapsed|Pickle Time|(CentiSecs)'
column total_time HEADING 'Total Time|(CentiSecs)'

SELECT * from v$propagation_sender;

++ PROPAGATION RECEIVER STATISTICS ++

column src_queue_name HEADING 'Source|Queue|Name'
column src_dbname HEADING 'Source|Database|Name'
column startup_time HEADING 'Startup|Time'
column elapsed_unpickle_time HEADING 'Elapsed|Unpickle Time|(CentiSeconds)'
column elapsed_rule_time HEADING 'Elapsed|Rule Time|(CentiSeconds)'
column elapsed_enqueue_time HEADING 'Elapsed|Enqueue Time|(CentiSeconds)'

SELECT * from gv$propagation_receiver;


To display information about subscribers for all buffered queues in the instance, query
V$BUFFERED_SUBSCRIBERS. In Oracle Database 10g Release 2 and later, there are two rows
for each propagation: one for queue-to-database link propagation and one for queue-to-queue
propagation.

++ BUFFERED SUBSCRIBERS ++

select * from gv$buffered_subscribers;

NOTE: An optimization first available in Oracle Database 11g Release 1 is a capture process
that automatically sends LCRs directly to an apply process. This occurs when there is a single
publisher and consumer defined for the queue that contains the captured changes. This
optimized configuration is called Combined Capture and Apply (CCA). When CCA is in use, LCRs
are transmitted directly from the capture process to the apply process via a database link. In
this mode, the capture does not stage the LCRs in a queue or use queue propagation to deliver
them.

Propagation statistics and Buffered Subscribers view statistics are zero when CCA
optimization is in effect.
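One way to check whether CCA is in effect is the OPTIMIZATION column of V$STREAMS_CAPTURE, which is greater than zero when the optimization is used (available from 11.1):

SELECT capture_name, optimization
FROM v$streams_capture;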

Propagation does not start after error or database restart

At times, the propagation job may become "broken" or fail to start after an error has been
encountered or after a database restart.

The typical solution is to disable the propagation and then re-enable it.
For example, for the propagation named STRMADMIN_PROPAGATE the commands would be:

10.2:
exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE');
exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE');

If the above does not fix the problem, stop the propagation specifying the force parameter
(the second parameter of stop_propagation) as TRUE. For example, for the propagation named
STRMADMIN_PROPAGATE, the commands would be:

exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE', true);
exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE');

The statistics for the propagation are cleared when the force parameter is set to TRUE.

Common Propagation Errors

The most common propagation errors result from an incorrect network configuration.

The list below shows errors caused by an incorrectly configured tnsnames.ora file or database
links.

- ORA-12154: TNS:could not resolve service name.

- ORA-12505: TNS:listener does not currently know of service requested in connect descriptor.

- ORA-12514: TNS:listener does not currently know of service requested in connect descriptor.

- ORA-12541: TNS-12541 TNS:no listener

Other common errors:

- ORA-02082: A loopback database link must have a connection qualifier.

Can occur if you use the Streams Setup Wizard in Oracle Enterprise Manager without first
configuring the GLOBAL_NAME for your database.

- ORA-25307: Enqueue rate too high. Enable flow control.

This is an informative message that indicates flow control has been automatically enabled to
reduce the rate at which events are being enqueued into the staging area.

DBA_QUEUE_SCHEDULES will display this informational message when the automatic flow
control (10g feature of Streams) has been invoked.

This typically occurs when the target site is unable to keep up with the rate of messages
flowing from the source site. Other than checking that the apply process is running normally
on the target site, no action is required by the DBA. Propagation and the capture process will
be resumed automatically when the target site is able to accept more messages.

In some situations, propagation may become disabled (when the number of consecutive
failures reaches 16). In these situations, the propagation can be re-enabled manually.
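To check whether a propagation schedule has been disabled by failures, the failure count and last error can be queried from DBA_QUEUE_SCHEDULES, as in this sketch:

```sql
col destination format a35 wrap
col last_error_msg format a50 word_wrapped

-- FAILURES counts consecutive failures; reaching 16 disables the schedule
SELECT schema, qname, destination,
       schedule_disabled,    -- 'Y' means the schedule is disabled
       failures,
       last_error_msg
FROM   dba_queue_schedules;
```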

- ORA-25315 unsupported configuration for propagation of buffered messages.

This error typically indicates that an attempt was made to propagate buffered messages with
the database link pointing to an instance in the destination database which is not the owner
instance of the destination queue. To resolve, use queue to queue propagation for buffered
messages.
- ORA-600 [KWQBMCRCPTS101] after dropping / recreating propagation

For cause/fixes refer to:

Note 421237.1 ORA-600 [KWQBMCRCPTS101] reported by a Qmon slave process after
dropping a Streams Propagation

PROPAGATION Related NOTEs

Note 437838.1 Recommended Patches for Streams


Note.749181.1 How to Recover Streams After Dropping Propagation
Note 368912.1 Queue to Queue Propagation Schedule encountered ORA-12514 in a RAC
environment

Troubleshooting the Apply Process

Is the Apply process ENABLED?

SELECT APPLY_NAME, STATUS FROM DBA_APPLY;

If the apply process is DISABLED, try starting it using:

EXEC DBMS_APPLY_ADM.START_APPLY( '<apply name>' )

If the apply process is ABORTED, you can use the following query to identify the error
condition that caused the apply process to abort:

COLUMN APPLY_NAME HEADING 'Apply|Process|Name' FORMAT A15

COLUMN STATUS_CHANGE_TIME HEADING 'Abort Time'

COLUMN ERROR_NUMBER HEADING 'Error|Number ' FORMAT 99999999

COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A33


SELECT APPLY_NAME, TO_CHAR(STATUS_CHANGE_TIME, 'DD-MON-RRRR hh24:mi:ss')
STATUS_CHANGE_TIME, ERROR_NUMBER, ERROR_MESSAGE FROM DBA_APPLY WHERE
STATUS='ABORTED';

If the apply process aborted due to an error in the apply error queue, check the error queue
using the following query:

col source_commit_scn HEADING 'Source|Commit|Scn'

col message_number HEADING 'Message in| Txn causing|Error'

col message_count HEADING 'Total|Messages|in Txn'

col local_transaction_id HEADING 'Local|Transaction| ID'

col error_message HEADING 'Apply|Error|Message'

col ERROR_CREATION_TIME HEADING 'Error|Creation|Timestamp'

col source_transaction_id HEADING 'Source|Transaction| ID'

select apply_name, source_database, source_commit_scn, message_number,
       message_count, local_transaction_id, error_message, error_creation_time,
       source_transaction_id
from DBA_APPLY_ERROR
order by apply_name, source_commit_scn;

The apply process will not restart if there is any error in the apply error queue and the apply
parameter DISABLE_ON_ERROR is TRUE (the default) for that apply process. Use the query
below to check this parameter:

col APPLY_NAME format a30

col parameter format a20

col value format a20

break on apply_name

select * from dba_apply_parameters order by apply_name,parameter;

Some common apply errors include:

APPLY MOST COMMON ERRORS TROUBLESHOOTING


ORA-26666 cannot alter STREAMS process

On shutdown of the apply process, an ORA- timeout error occurred. When attempting to
restart the apply process, the apply process aborts with the following message:

ORA-26666: cannot alter STREAMS process STRMADMIN_SITE1_US_ORACLE


ORA-6512: at SYS.DBMS_LOGREP_UTIL
ORA-6512: at SYS.DBMS_APPLY_ADM
ORA-6512: at line 2

In this case, you will need to do a "forced" shutdown of the apply process, then restart the
apply process. For example:

exec dbms_apply_adm.stop_apply('STRMADMIN_SITE1_US_ORACLE',force=>true)
exec dbms_apply_adm.start_apply('STRMADMIN_SITE1_US_ORACLE');

ORA-01403 No Data Found

A classic error, most often caused by a data mismatch between the source and destination
tables. The ORA-1403 error occurs when an apply process tries to update an existing row and
the OLD_VALUES in the row LCR do not match the current values at the destination database.
The conditions that commonly produce this error are:

- Missing supplemental log for key columns at source database;

- Missing Primary Key or Unique Key at source database;

- Missing substitute key columns at destination database when there is no primary key
or unique key at source database;

- Applying a transaction out of order, or applying a transaction that depends on another
transaction which has not yet been executed.

- Data mismatch between the row LCR and the table for which the LCR is applying the
change.

There are two ways to fix the ORA-1403:


- You can manage the ORA-1403 automatically via a DML handler, or

- You can update the current values in the row so that the row LCR can be applied
successfully. If changes to the row at the apply site are also captured by a capture process at
the destination database, then you will need to use DBMS_STREAMS.SET_TAG to avoid re-
capturing these changes, which would lead to new ORA-1403 errors at other apply sites. For
example:

a. Set a tag in the session that corrects the row. Make sure you set the tag to a value that
prevents the manual change from being replicated. For example, the tag can prevent the
change from being captured by a capture process.

EXEC DBMS_STREAMS.SET_TAG(tag => HEXTORAW('17'));

In some environments, you might need to set the tag to a different value.

b. Update the row in the table so that the data matches the old values in the LCR.

c. Re-execute the error or re-execute all errors. To re-execute an error, run the
EXECUTE_ERROR procedure in the DBMS_APPLY_ADM package, and specify the transaction
identifier for the transaction that caused the error. For example:

EXEC DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '5.4.312');

Or, execute all errors for the apply process by running the EXECUTE_ALL_ERRORS
procedure:

EXEC DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS(apply_name => 'APPLY');

d. If you are going to make other changes in the current session that you want to
replicate to destination databases, then reset the tag for the session to an appropriate value,
as in the following example:

EXEC DBMS_STREAMS.SET_TAG(tag => NULL);


In some environments, you might need to set the tag to a value other than NULL.

Refer to the following note for additional instructions:

Note:265201.1 Troubleshooting Streams Apply Error ORA-1403, ORA-26787 or ORA-26786

ORA-26687 Instantiation SCN Not Set

This error occurs because the instantiation SCN is not set for the object. It is typically caused
by:

- Importing an object without preparing the object for instantiation prior to export;

- You used original export/import for instantiation, and you performed the import without
specifying y for the STREAMS_INSTANTIATION import parameter.

- You have not called any of these DBMS_APPLY_ADM procedures:

SET_TABLE_INSTANTIATION_SCN

SET_SCHEMA_INSTANTIATION_SCN

SET_GLOBAL_INSTANTIATION_SCN

- When applying DDL changes, and you have not set the instantiation SCN at the SCHEMA or
GLOBAL level.
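As a sketch, the instantiation SCN can be set manually at the destination using one of the procedures above; the schema name, database link, and global name below are placeholders:

```sql
DECLARE
  iscn NUMBER;  -- current SCN obtained from the source database
BEGIN
  -- Fetch the source SCN over a database link (link name is an example)
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@SOURCE_DBLINK;

  -- Record it as the instantiation SCN for the schema at the destination
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name   => 'HR',
    source_database_name => 'SOURCE_DB.EXAMPLE.COM',
    instantiation_scn    => iscn);
END;
/
```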

Check the objects currently prepared for instantiation through the view
DBA_APPLY_INSTANTIATED_OBJECTS, or the section "++ TABLES PREPARED FOR CAPTURE ++"
of your Streams Health Check report.

Refer to the following notes for additional instructions:

Note:272731.1 Objects Not Instantiated When Using
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN ORA-26687

Note:223000.1 Streams Apply Process Fails with ORA-26687 or ''Missing Streams multi-version
data dictionary''

Note:783815.1 DBA_APPLY_INSTANTIATED_OBJECTS and ORA-26687

ORA-26688 Missing Key in LCR

- Missing or disabled primary key at the source database. When not using primary keys,
make sure you have set a substitute key at the destination database using the
SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package.

- Missing supplemental logging at the source database, if using apply PARALLELISM greater
than 1, for any indexed column at a destination database, which includes unique or
non-unique index columns, foreign key columns, and bitmap index columns.
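When the source table has no primary key, a substitute key can be defined at the destination as sketched below (the table and column names are placeholders):

```sql
BEGIN
  -- Define a substitute key for hr.employees at the destination database
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name => 'hr.employees',
    column_list => 'employee_id,last_name');
END;
/
```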

Refer to the following note for additional instructions:

Note:290280.1 Ora-26688 on Apply Process of Streams after Online Table Redefinition

ORA-1031 Insufficient Privileges

The user designated as the apply user does not have the necessary privileges to perform SQL
operations on the replicated objects. The apply user privileges must be granted by an explicit
grant of each privilege. Granting these privileges through a role is not sufficient for the
Streams apply user.
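As a sketch, the grants must name the apply user directly; the schema, table, and user names below are placeholders:

```sql
-- Privileges must be granted explicitly to the apply user; grants received
-- through a role are not sufficient for the Streams apply user
GRANT INSERT, UPDATE, DELETE ON hr.employees TO strmadmin;
```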

Additionally, if the apply user does not have explicit EXECUTE privilege on an apply handler
procedure or custom rule-based transformation function, then an ORA-06550 error might
result when the apply user tries to run the procedure or function. Typically, this error causes
the apply process to abort without adding errors to the DBA_APPLY_ERROR view. However,
the trace file for the apply coordinator reports the error. Specifically, errors similar to the
following appear in the trace file:

ORA-12801 in STREAMS process

ORA-12801: error signaled in parallel query server P000

ORA-06550: line 1, column 15:

PLS-00201: identifier 'STRMADMIN.TO_AWARDFCT_RULEDML' must be declared

ORA-06550: line 1, column 7:

PL/SQL: Statement ignored

In this example, the apply user dssdbo does not have execute privilege on the
TO_AWARDFCT_RULEDML function in the strmadmin schema. To correct the problem, grant the
required EXECUTE privilege to the apply user.
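Using the names from the trace above, the fix would look like:

```sql
-- Grant the apply user direct EXECUTE privilege on the handler function
GRANT EXECUTE ON strmadmin.to_awardfct_ruledml TO dssdbo;
```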

Refer to the following note for additional instructions:

Note:215372.1 Streams Apply Process Fails With ORA-1031: Insufficient Privileges

MISSING Streams multi-version data dictionary

Although you see this message at the Streams apply destination site, it is caused by the
Streams data dictionary information for the specified object not being available at the source
database at the time the Streams capture process was created.

Please refer to the Streams Capture Troubleshooting section of this article and to the Metalink
note below for additional instructions:
Note 212044.1 Resolving the MISSING Streams Multi-version Data Dictionary Error

Other common errors:

ORA-24031 Invalid Value, parameter_name Should Be Non-NULL

ORA-26689 Column Type Mismatch

For additional details on the above errors, please refer to the section "Troubleshooting Specific
Apply Errors" in the online Oracle documentation:

Manual: Oracle Streams Replication Administrator's Guide.

Chapter: 13 Troubleshooting Streams Replication

Section: Troubleshooting Specific Apply Errors

Apply Enabled but not dequeuing/moving

1. First of all, check whether messages are reaching the apply destination queue by running
the following query against the buffered queue multiple times:

col QUEUE format a50 wrap

col "Message Count" format 9999999999999999 heading 'Current Num. of|Outstanding|Mesg|in Queue'

col "Spilled Msgs" format 9999999999999999 heading 'Current Number of|Spilled|Messages|in Queue'

col "Total Messages" format 9999999999999999 heading 'Cumulative |Number| of Messages|in Queue'

col "Total Spilled Msgs" format 9999999999999999 heading 'Cum. Num of|Spilled|Messages|in Queue'

col "Expired_Msgs" heading 'Current Number of|Expired|Messages|in Queue'


SELECT queue_schema||'.'||queue_name Queue, startup_time, num_msgs "Message Count",
spill_msgs "Spilled Msgs", cnum_msgs "Total Messages", cspill_msgs "Total Spilled Msgs",

expired_msgs FROM gv$buffered_queues;

Look for values changing for "Cumulative Number of Messages in Queue" and "Current
Number of Outstanding Messages in Queue". In an event that messages are not reaching Apply
buffered queue, this might be an indication for some problem at Capture and/or Propagation
processes, please refer to the related sections on this Article for further troubleshooting
information.

2. If messages are reaching the apply queue, check whether messages are being dequeued by
the apply reader process using the query below:

col oldest_scn_num HEADING 'Oldest|SCN'

col apply_name HEADING 'Apply Name'

col apply_captured HEADING 'Captured or|User-Enqueued LCRs'

col process_name HEADING 'Process'

col state HEADING 'STATE'

col total_messages_dequeued HEADING 'Total Messages|Dequeued'

col total_messages_spilled Heading 'Total Messages|Spilled'

col sga_used HEADING 'SGA Used'

col oldest_transaction_id HEADING 'Oldest|Transaction'

SELECT ap.APPLY_NAME,

DECODE(ap.APPLY_CAPTURED,

'YES','Captured LCRS',

'NO','User-Enqueued','UNKNOWN') APPLY_CAPTURED,

SUBSTR(s.PROGRAM,INSTR(S.PROGRAM,'(')+1,4) PROCESS_NAME,

r.STATE,

r.TOTAL_MESSAGES_DEQUEUED,

r.TOTAL_MESSAGES_SPILLED,

r.SGA_USED,
oldest_scn_num,

oldest_xidusn||'.'||oldest_xidslt||'.'||oldest_xidsqn

oldest_transaction_id

FROM gV$STREAMS_APPLY_READER r, gV$SESSION s, DBA_APPLY ap

WHERE r.SID = s.SID AND

r.SERIAL# = s.SERIAL# AND

r.APPLY_NAME = ap.APPLY_NAME;

Ideally, "Total Messages Dequeued" should be increasing; if spilling is occurring, "Total
Messages Spilled" will increase instead.

3. You might also get into a scenario where messages are reaching the apply queue but are
not being dequeued; in that case, the apply process might have fallen behind. You can check
apply process latency by querying the V$STREAMS_APPLY_COORDINATOR dynamic
performance view. If apply process latency is high, then you might be able to improve
performance by adjusting the setting of the parallelism apply process parameter.

Run the following queries to display the capture to apply latency using the
V$STREAMS_APPLY_COORDINATOR view for a message for each apply process:

COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A17

COLUMN 'Latency in Seconds' FORMAT 999999

COLUMN 'Message Creation' FORMAT A17

COLUMN 'Apply Time' FORMAT A17

COLUMN HWM_MESSAGE_NUMBER HEADING 'Applied|Message|Number' FORMAT 999999

COLUMN APPLIED_MESSAGE_NUMBER HEADING 'Applied|Message|Number' FORMAT 999999

SELECT APPLY_NAME,

(HWM_TIME-HWM_MESSAGE_CREATE_TIME)*86400 "Latency in Seconds",

TO_CHAR(HWM_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD/YY')

"Message Creation",

TO_CHAR(HWM_TIME,'HH24:MI:SS MM/DD/YY') "Apply Time",


HWM_MESSAGE_NUMBER

FROM V$STREAMS_APPLY_COORDINATOR;

SELECT APPLY_NAME,

(APPLY_TIME-APPLIED_MESSAGE_CREATE_TIME)*86400 "Latency in Seconds",

TO_CHAR(APPLIED_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD/YY')

"Message Creation",

TO_CHAR(APPLY_TIME,'HH24:MI:SS MM/DD/YY') "Apply Time",

APPLIED_MESSAGE_NUMBER

FROM DBA_APPLY_PROGRESS;

4. Also check Apply Servers:

COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A22

COLUMN PROCESS_NAME HEADING 'Process Name' FORMAT A12

COLUMN STATE HEADING 'State' FORMAT A17

COLUMN TOTAL_ASSIGNED HEADING 'Total|Transactions|Assigned' FORMAT 99999999

COLUMN TOTAL_MESSAGES_APPLIED HEADING 'Total|Messages|Applied' FORMAT 99999999

SELECT r.APPLY_NAME,

SUBSTR(s.PROGRAM,INSTR(S.PROGRAM,'(')+1,4) PROCESS_NAME,

r.STATE,

r.TOTAL_ASSIGNED,

r.TOTAL_MESSAGES_APPLIED

FROM V$STREAMS_APPLY_SERVER R, V$SESSION S

WHERE r.SID = s.SID AND

r.SERIAL# = s.SERIAL#

ORDER BY r.APPLY_NAME, r.SERVER_ID;


You might be able to see one of these Apply server states when it is processing a transaction:

RECORD LOW-WATERMARK

ADD PARTITION

DROP PARTITION

EXECUTE TRANSACTION

WAIT COMMIT

WAIT DEPENDENCY

WAIT FOR NEXT CHUNK

TRANSACTION CLEANUP

or IDLE when there is no transaction currently being applied.

5. Check if AQ_TM_PROCESSES initialization parameter is set explicitly to 0 (zero) or is not set.

conn / as sysdba

show parameter aq_tm_processes

Refer to the following note for additional instructions:

Note.428441.1 "Warning Aq_tm_processes Is Set To 0" Message in Alert Log After Upgrade
to 10.2.0.3 or 10.2.0.4

6. Check Apply PARALLELISM parameter running below query:

col APPLY_NAME format a30

col parameter format a20

col value format a20

break on apply_name

select * from dba_apply_parameters order by apply_name,parameter;


You may need to increase the apply parallelism parameter, for example:

BEGIN

DBMS_APPLY_ADM.SET_PARAMETER(

apply_name => 'strm01_apply',

parameter => 'parallelism',

value => '2');

END;

7. Check for Apply Process Encountering Contention

An apply server encounters contention when the apply server must wait for a resource
that is being used by another session. Contention can result from logical dependencies. For
example, when an apply server tries to apply a change to a row that a user has locked, then
the apply server must wait for the user. Contention can also result from physical
dependencies. For example, interested transaction list (ITL) contention results when two
transactions that are being applied, which might not be logically dependent, are trying to lock
the same block on disk. In this case, one apply server locks rows in the block, and the other
apply server must wait for access to the block, even though the second apply server is trying to
lock different rows. See "Is the Apply Process Waiting for a Dependent Transaction?" for
detailed information about ITL contention.

The following four wait states are possible for an apply server:

- Not waiting: The apply server is not encountering contention and is not waiting. No action is
necessary in this case.

- Waiting for an event that is not related to another session:

An example of an event that is not related to another session is a log file sync event,
where redo data must be flushed because of a commit or rollback. In these cases, nothing is
written to the log initially because such waits are common and are usually transient. If the
apply server is waiting for the same event after a certain interval of time, then the apply
server writes a message to the alert log and apply process trace file. For example, an apply
server a001 might write a message similar to the following:

A001: warning -- apply server 1, sid 26 waiting for event:

A001: [log file sync] ...

This output is written to the alert log at intervals until the problem is rectified.

- Waiting for an event that is related to a non apply server session:

The apply server writes a message to the alert log and apply process trace file
immediately. For example, an apply server a001 might write a message similar to the
following:

A001: warning -- apply server 1, sid 10 waiting on user sid 36 for event:

A001: [enq: TM - contention] name|mode=544d0003, object #=a078, table/partition=0

This output is written to the alert log at intervals until the problem is rectified.

- Waiting for another apply server session:

This state can be caused by interested transaction list (ITL) contention, but it can also
be caused by more serious issues, such as an apply handler that obtains conflicting locks. In
this case, the apply server that is blocked by another apply server prints only once to the alert
log and the trace file for the apply process, and the blocked apply server issues a rollback to
the blocking apply server. When the blocking apply server rolls back, another message
indicating that the apply server has been rolled back is printed to the log files, and the rolled
back transaction is reassigned by the coordinator process for the apply process.

For example, if apply server 1 of apply process a001 is blocked by apply server 2 of the
same apply process (a001), then the apply process writes the following messages to the log
files:

A001: apply server 1 blocked on server 2

A001: [enq: TX - row lock contention] name|mode=54580006, usn&lt;&lt;16 | slot=1000e, sequence=1853

A001: apply server 2 rolled back

Check TOTAL_ROLLBACKS using the query below:

col apply_name HEADING 'Apply Name' format a22 wrap

col process HEADING 'Process' format a7

col RECEIVED HEADING 'Total|Txns|Received'

col ASSIGNED HEADING 'Total|Txns|Assigned'

col APPLIED HEADING 'Total|Txns|Applied'

col ERRORS HEADING 'Total|Txns|w/ Error'

col total_ignored HEADING 'Total|Txns|Ignored'

col total_rollbacks HEADING 'Total|Txns|Rollback'

col WAIT_DEPS HEADING 'Total|Txns|Wait_Deps'

col WAIT_COMMITS HEADING 'Total|Txns|Wait_Commits'

col STATE HEADING 'State' format a10 word

SELECT ap.APPLY_NAME,

SUBSTR(s.PROGRAM,INSTR(S.PROGRAM,'(')+1,4) PROCESS,

c.STATE,

c.TOTAL_RECEIVED RECEIVED,

c.TOTAL_ASSIGNED ASSIGNED,

c.TOTAL_APPLIED APPLIED,

c.TOTAL_ERRORS ERRORS,

c.total_ignored,

c.total_rollbacks,

c.TOTAL_WAIT_DEPS WAIT_DEPS, c.TOTAL_WAIT_COMMITS WAIT_COMMITS

FROM gV$STREAMS_APPLY_COORDINATOR c, gV$SESSION s, DBA_APPLY ap

WHERE c.SID = s.SID AND

c.SERIAL# = s.SERIAL# AND

c.APPLY_NAME = ap.APPLY_NAME;
8. Check for the apply process waiting for dependent transactions (applies only when the
apply PARALLELISM parameter is greater than 1). Use the same query as above:

SELECT ap.APPLY_NAME,

SUBSTR(s.PROGRAM,INSTR(S.PROGRAM,'(')+1,4) PROCESS,

c.STATE,

c.TOTAL_RECEIVED RECEIVED,

c.TOTAL_ASSIGNED ASSIGNED,

c.TOTAL_APPLIED APPLIED,

c.TOTAL_ERRORS ERRORS,

c.total_ignored,

c.total_rollbacks,

c.TOTAL_WAIT_DEPS WAIT_DEPS, c.TOTAL_WAIT_COMMITS WAIT_COMMITS

FROM gV$STREAMS_APPLY_COORDINATOR c, gV$SESSION s, DBA_APPLY ap

WHERE c.SID = s.SID AND

c.SERIAL# = s.SERIAL# AND

c.APPLY_NAME = ap.APPLY_NAME;

To avoid the problem in the future, perform one of the following actions:

- Increase the number of ITLs available. You can do so by changing the INITRANS setting
for the table using the ALTER TABLE statement.

- Set the commit_serialization parameter to none for the apply process.

- Set the parallelism apply process parameter to 1 for the apply process.
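The first two actions above can be sketched as follows (the table and apply names are placeholders):

```sql
-- Increase the number of ITL slots available in each block of the table
ALTER TABLE hr.employees INITRANS 10;

-- Relax commit ordering for the apply process
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'strm01_apply',
    parameter  => 'commit_serialization',
    value      => 'none');
END;
/
```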
9. Check for poor Apply performance for certain transactions:

The following query displays information about the transactions being applied by each
apply server:

SET LINE 120

COLUMN APPLY_NAME HEADING 'Apply Name' FORMAT A20

COLUMN SERVER_ID HEADING 'Apply Server ID' FORMAT 99999999

COLUMN STATE HEADING 'Apply Server State' FORMAT A20

COLUMN APPLIED_MESSAGE_NUMBER HEADING 'Applied Message|Number' FORMAT 99999999

COLUMN MESSAGE_SEQUENCE HEADING 'Message Sequence|Number' FORMAT 99999999

SELECT APPLY_NAME, SERVER_ID, STATE, APPLIED_MESSAGE_NUMBER, MESSAGE_SEQUENCE

FROM V$STREAMS_APPLY_SERVER

ORDER BY APPLY_NAME,SERVER_ID;

If you run this query repeatedly, then over time the apply server state, applied
message number, and message sequence number should continue to change for each apply
server as it applies transactions. If these values do not change for one or more apply servers,
then the apply server might not be performing well. In this case, you should make sure that,
for each table to which the apply process applies changes, every key column has an index.

Use following queries to determine the object in interest for the poor Apply processing
transaction:

COLUMN OPERATION HEADING 'Operation' FORMAT A20

COLUMN OPTIONS HEADING 'Options' FORMAT A20

COLUMN OBJECT_OWNER HEADING 'Object|Owner' FORMAT A10

COLUMN OBJECT_NAME HEADING 'Object|Name' FORMAT A10

COLUMN COST HEADING 'Cost' FORMAT 99999999


SELECT p.OPERATION, p.OPTIONS, p.OBJECT_OWNER, p.OBJECT_NAME, p.COST

FROM V$SQL_PLAN p, V$SESSION s, V$STREAMS_APPLY_SERVER a

WHERE a.APPLY_NAME = '<apply name>' AND a.SERVER_ID = <n>

AND s.SID = a.SID

AND p.HASH_VALUE = s.SQL_HASH_VALUE;

SELECT t.SQL_TEXT

FROM V$SESSION s, V$SQLTEXT t, V$STREAMS_APPLY_SERVER a

WHERE a.APPLY_NAME = '<apply name>' AND a.SERVER_ID = <n>

AND s.SID = a.SID

AND s.SQL_ADDRESS = t.ADDRESS

AND s.SQL_HASH_VALUE = t.HASH_VALUE

ORDER BY PIECE;

p.s. Fill out "a.APPLY_NAME" and "a.SERVER_ID" from the WHERE clause
appropriately with information from previous query.

This query returns the operation being performed currently by the specified apply
server. The query also returns the owner and name of the table on which the operation is
being performed and the cost of the operation. If the results show FULL for the COST column,
then the operation is causing full table scans.

You might improve apply performance in this scenario by:

- Creating an index on each key column of the table.

10. Refer to Note 335516.1 "Streams Performance Recommendations", section "Apply
Recommendations", for further suggestions to speed up the Streams apply process.
APPLY DML AND ERROR HANDLERS TROUBLESHOOTING & SAMPLE

1. Check for Apply DML / Error Handlers defined at Destination database:

col object format a35 wrap

col user_procedure HEADING 'User |Procedure' format a40 wrap

col dblink Heading 'Apply|DBLink' format a15 wrap

col apply_database_link HEAD 'Database Link|for Remote|Apply' format a25 wrap

col operation_name HEADING 'Operation|Name' format a13

col typ Heading 'Handler|Type' format a5 wrap

col lob_assemble HEADING 'Assemble|Lob?' format a8

col apply_name HEADING 'Apply Process|Name' FORMAT A15

select object_owner||'.'||object_name OBJECT, operation_name ,

user_procedure, apply_name, decode(error_handler,'Y','Error','N','DML','UNKNOWN') TYP,

decode(assemble_lobs,'Y','Yes','N','No','UNKNOWN') lob_assemble,

APPLY_Database_link

from dba_apply_dml_handlers

order by object_owner,object_name,apply_name;

p.s. (i) If "Apply Process Name" is NULL in the result of the above query, the handler is a
general handler that runs for all of the local apply processes.

(ii) "Handler Type" indicates whether the apply handler runs for every "DML" applied
(DML handler) or only when an "ERROR" happens (error handler).

DML and error handlers are customized according to application needs and the data model,
so if an apply process is not behaving as expected, check the handler PL/SQL procedures
used by the apply process and correct any flaws. You might need to modify a handler
procedure or remove it to correct an apply problem.
2. Common errors when DML / Error handlers are implemented:

a. ORA-1435 user not found

If using schema name transformation in any way, you might get this error if the source
database schema does not exist at the destination database. Say you have tables in schema
'REP1' to be replicated to another database where the schema name is 'REP2'; you will get the
ORA-1435 error if the schema 'REP1' does not exist at the destination database. The schema
and its object definitions need to exist at the destination site, but NO rows or data are
required in this schema. The workaround for this problem is to create the structure definition
of the original schema and objects.

This can generally be done with a schema-level export from the source site and a schema-level
import with ROWS=NO into the target site.
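A sketch of that approach using original export/import; connect strings, file names, and the schema name are placeholders, and the REP1 user must already exist at the target before the import:

```shell
# Export the REP1 schema definitions only (no rows) from the source
exp system/manager@source owner=REP1 rows=n file=rep1_ddl.dmp

# Import the structure into the target site, again without rows
imp system/manager@target fromuser=REP1 touser=REP1 rows=n file=rep1_ddl.dmp
```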

b. ORA-6550 line <n>, column <nn>:

One of the most common reasons for receiving this error in a DML HANDLER or
TRANSFORMATION is privileges. Typically, this error causes the apply process to 'ABORT'
with no ERRORS in the DBA_APPLY_ERROR view. However, the trace file for the apply
coordinator will report the error. If the specified apply user does not have explicit privilege
to execute the dml_handler procedure or the transformation function, you will receive errors
similar to the following in the apply trace files:

ORA-12801 in STREAMS process

ORA-12801: error signaled in parallel query server P000

ORA-06550: line 1, column 15:

PLS-00201: identifier 'STRMADMIN.USER_FUNCTION_NAME' must be declared

ORA-06550: line 1, column 7:

PL/SQL: Statement ignored

In this example, the apply user does not have execute privilege on the
"USER_FUNCTION_NAME" function in the STRMADMIN schema.
p.s. Check the "APPLY_USER" column of the DBA_APPLY view to see which schema is being
used to apply the changes.

c. ORA-23605 Invalid Value for Streams Parameter

When calling SYS.LCR$ member functions, this error may be raised if the values of the
parameters do not match the LCR: for example, adding an old column value to an insert LCR,
or setting the value of a LOB column to a number. This error can occur if an incorrect value is
passed for a Streams parameter, if an INSERT LCR contains 'OLD' values, or if a DELETE LCR
contains 'NEW' values. Verify that the correct parameter type ('OLD','NEW') is specified for
the LCR type (INSERT/DELETE/UPDATE).

d. ORA-23607 Invalid Column

This error is raised by the SYS.LCR$* member functions when the value of the column_name
parameter does not match the name of any of the columns in the LCR. Check the column
names in the LCR. This error is encountered if:

- You attempt to delete a column from an LCR and the LCR does not have the column
(typically occurs on UPDATEs);

- You attempt to rename a column that does not exist in the LCR;

- In an UPDATE statement, you use the GET_VALUE or GET_VALUES members for NEW values
in a DML handler or transformation and explicitly set the USE_OLD parameter to 'N' (the
default is 'Y').

e. ORA-24031 Invalid parameter, should be non-NULL

This error can occur when a 'NULL' value is passed to an LCR method instead of an ANYDATA.

Wrong:

new_lcr.add_column('OLD','LANGUAGE',NULL);
Correct:

new_lcr.add_column('OLD','LANGUAGE',sys.AnyData.ConvertVarchar2(NULL));

f. ORA-26689 Column Type Mismatch

Generally this is some sort of metadata mismatch. Some possibilities:

- "column name is valid but the types do not match"

The source type does not equal the target type. Confirm that the conversion utility data
type matches the column data type in the handler / transformation. For example, if the
column is specified as VARCHAR2, then use sys.anydata.convertvarchar2 to convert the data
from type ANY to VARCHAR2. Confirm that the datatype of the column matches between the
LCR and the target table.

- "lcr contains extra columns"

Confirm that all of the columns in the LCR are defined at the destination site. If the destination
table does not have all of the columns specified in the LCR, eliminate any columns from the
LCR that should not be applied at the destination table. Check that the column name casing
matches the database; generally column names are upper case.

3. For DML Handler samples, please refer to the following Metalink Notes:

Note.265867.1 Example Streams Apply DML Handler Showing the Adding of Columns to the
Apply LCR

Note.302018.1 Example Streams Apply DML Handler Showing LCR Column Rename

Note.265481.1 Example Streams Apply DML Handler Showing Rows and Columns Filter from
the Apply Process
Note.234094.1 Usage and Restrictions of Streams Apply Handlers

4. For Error Handler samples, please refer to the following Metalink Notes:

Note.401275.1 Handling Apply Insert And Delete Conflicts In A Streams Environment -


Error Handlers

Note.387829.1 Auto Correction Example for Streams using Error Handlers

APPLY MEMORY CONTENTION TROUBLESHOOTING

<to be completed>

APPLY PERFORMANCE TROUBLESHOOTING

<to be completed>

APPLY IN HETEROGENEOUS ENVIRONMENTS

In a heterogeneous environment where the destination database is a non-Oracle database,
the Streams apply process exists at the source Oracle database and applies the changes
through an Oracle Transparent Gateway to the destination non-Oracle database.

1. Configuration checking:
a. If you use substitute key columns for any of the tables at the non-Oracle database,
then make sure to specify the database link to the non-Oracle database when you run the
SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package.

b. If you use a DML handler to process row LCRs for any of the tables at the non-Oracle
database, then specify the database link to the non-Oracle database when you run the
SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package.

c. If you want to use a message handler to process user-enqueued messages for a non-
Oracle database, then, when you run the CREATE_APPLY procedure in the DBMS_APPLY_ADM
package, specify the database link to the non-Oracle database using the apply_database_link
parameter, and specify the message handler procedure using the message_handler parameter.

d. You must set the parallelism apply process parameter to 1, the default setting, when
an apply process is applying changes to a non-Oracle database. Currently, parallel apply to
non-Oracle databases is not supported. However, you can use multiple apply processes to
apply changes to a non-Oracle database.
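As an illustrative sketch of steps (a) and (b), the calls might look as follows; the table, handler procedure and the database link 'hetero.example.com' are hypothetical placeholders, not values taken from this note:

```sql
BEGIN
  -- (a) substitute key columns, naming the link to the non-Oracle database
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name         => 'hr.employees',
    column_list         => 'employee_id',
    apply_database_link => 'hetero.example.com');

  -- (b) DML handler for UPDATE row LCRs applied through the gateway
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name         => 'hr.employees',
    object_type         => 'TABLE',
    operation_name      => 'UPDATE',
    error_handler       => FALSE,
    user_procedure      => 'strmadmin.emp_dml_handler',
    apply_database_link => 'hetero.example.com');
END;
/
```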

2. Most of the Apply sections in this article can be used to troubleshoot the
heterogeneous Streams Apply process. All the steps apply, except for Error and Conflict
Handlers, which are currently not supported. If an apply error occurs, then the transaction
containing the LCR that caused the error is moved into the error queue in the Oracle database.

Also please refer to the following Metalink notes for additional information:

Note.313766.1 Streams Heterogeneous Apply To SQL*Server

Note.377857.1 Apply process aborts with ORA-28584 setting up streams replication to MySQL

Note.436112.1 'ORA-28550 : Pass-Through SQL: Cursor Not Found' Error When Using Oracle
Streams Heterogenous Apply to Sybase

Note.466882.1 Streams Apply Process Aborts On Decimal Values Using Tg4sybase - Error ORA-
28500

APPLY SPECIFIC STATIC & DYNAMIC VIEWS


As of the current RDBMS version, these are the Streams Apply-specific views which can be
used to retrieve useful Streams Apply information:

Static Views

ALL_APPLY

ALL_APPLY_CONFLICT_COLUMNS

ALL_APPLY_DML_HANDLERS

ALL_APPLY_ENQUEUE

ALL_APPLY_ERROR

ALL_APPLY_EXECUTE

ALL_APPLY_KEY_COLUMNS

ALL_APPLY_PARAMETERS

ALL_APPLY_PROGRESS

ALL_APPLY_TABLE_COLUMNS

DBA_APPLY_TABLE_COLUMNS

DBA_APPLY_PROGRESS

DBA_APPLY_PARAMETERS

DBA_APPLY_KEY_COLUMNS

DBA_APPLY_EXECUTE

DBA_APPLY_ERROR

DBA_APPLY_ENQUEUE

DBA_APPLY_CONFLICT_COLUMNS

DBA_APPLY_DML_HANDLERS

DBA_APPLY_INSTANTIATED_GLOBAL

DBA_APPLY_INSTANTIATED_OBJECTS

DBA_APPLY_INSTANTIATED_SCHEMAS

DBA_APPLY_OBJECT_DEPENDENCIES

DBA_APPLY_SPILL_TXN
DBA_APPLY_VALUE_DEPENDENCIES

DBA_HIST_STREAMS_APPLY_SUM

Dynamic Views

V$STREAMS_APPLY_COORDINATOR

V$STREAMS_APPLY_READER

V$STREAMS_APPLY_SERVER

GV$STREAMS_APPLY_COORDINATOR

GV$STREAMS_APPLY_READER

GV$STREAMS_APPLY_SERVER

Troubleshooting Rules and Rule-Based Transformations Problems

If a capture process, a propagation, an apply process, or a messaging client is behaving in an
unexpected way, then the problem might be that the rules in either the positive rule set or
negative rule set for the Streams client are not configured properly. For example, if you expect
a capture process to capture changes made to a particular table, but the capture process is not
capturing these changes, then the cause might be that the rules in the rule sets used by the
capture process do not instruct the capture process to capture changes to the table.

Spelling Counts!

Rules can be thought of as a SQL WHERE clause against which each message is evaluated. If
the message does not meet the rule condition specification, the rule evaluation return is set to
FALSE and the message is excluded from further handling by the particular streams process.
For example, if you configure Streams to capture changes to the 'SOCTT.EMP' table, changes
made to the actual table 'SCOTT.EMP' will not be captured. Each expression included in the
rule_condition must evaluate to TRUE in order for the rule to evaluate to TRUE.
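To verify how an object name is actually spelled inside the rule conditions, the rules can be inspected directly; the table name 'EMP' below is only an example:

```sql
COL rule_condition FORMAT a60 WORD_WRAPPED
SELECT streams_type, streams_name, rule_name, rule_condition
  FROM dba_streams_rules
 WHERE object_name = 'EMP';
```

If this returns no rows for a table you expect to be replicated, query DBA_RULES and look for a misspelled variant of the name in the rule conditions.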

Eliminate duplicate/overlapping rules


Duplicate rules should be eliminated to reduce confusion and eliminate unnecessary
processing time. Two rules with the same rule condition, one with a transformation specified
and a similar rule without a transformation, is a good example of the confusion that can be
caused by duplicate rules. When the rule set is processed, there is no guarantee as to the
ordering of the rule processing. These duplicate rules can lead to inconsistent behavior.

Avoid including tables with unsupported data types

When using SCHEMA or GLOBAL rules, be sure to modify the rules so that no objects with
unsupported data types are included for Streams.
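In 10gR2 and later, the DBA_STREAMS_UNSUPPORTED view reports the tables that capture cannot support and the reason; for example (the schema name is only an example):

```sql
SELECT owner, table_name, reason, auto_filtered
  FROM dba_streams_unsupported
 WHERE owner = 'SCOTT';
```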

Avoid complex rules wherever possible

Avoid eliminating tables by pattern (e.g. :dml.get_object_name like 'DR%' ) or using a NOT
operator as this will force a full rule evaluation for the rule. It is frequently much faster to
explicitly name the desired table, even if it results in multiple rules.

If you are configuring a propagation that takes ALL changes from the source queue to the
destination queue (i.e. no selectivity requirements), you can remove the rule set from the
propagation definition. This eliminates the need to do ANY rule evaluation and will
result in higher propagation throughput.
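A sketch of removing the positive rule set from an existing propagation with DBMS_PROPAGATION_ADM.ALTER_PROPAGATION; the propagation name is a placeholder:

```sql
BEGIN
  DBMS_PROPAGATION_ADM.ALTER_PROPAGATION(
    propagation_name => 'strm_propagation',
    remove_rule_set  => TRUE);
END;
/
```

After this change the propagation sends every message in the source queue to the destination queue without evaluating any rules.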

Use views to look at ruleset and rules

The DBA_STREAMS_TABLE_RULES view shows the original configuration of the rule and
ruleset. Manual modifications can be performed using the DBMS_RULE_ADM package. Be
sure to use the DBA_RULE_SET_RULES view to obtain the full set of rules participating in a
ruleset. To get the rule condition of each rule, use the DBA_RULES view.

select rsr.rule_set_name RULE_SET,
       rsr.rule_owner ||'.'|| rsr.rule_name RULE_NAME,
       r.rule_condition CONDITION
  from dba_rule_set_rules rsr, dba_rules r
 where rsr.rule_name = r.rule_name
   and rsr.rule_owner = r.rule_owner
 order by rsr.rule_set_owner, rsr.rule_set_name;

If that query returns rules you did not expect, or rules whose conditions do not match the
objects you intend to replicate, then the rules returned might be causing the capture process
to discard changes to the table. If the query returns no relevant rules, then make sure there
are one or more table rules in the positive rule set for the capture process that evaluate to
TRUE for the table. "Displaying the Rules in the Positive Rule Set for a Streams Client" in the
Streams documentation contains an example of a query that shows rules in a positive rule set.

Resolving Problems with Rules

It is possible that the Streams capture process, propagation, apply process, or messaging client
is not behaving as expected because one or more rules should be altered or removed from a
rule set.

If you have the correct rules, and the relevant messages are still filtered out by a Streams
capture process, propagation, or apply process, then check your trace files and alert log for a
warning about a missing "multi-version data dictionary", which is a Streams data dictionary. If
you find such messages, and you are using custom capture process rules or reusing existing
capture process rules for a new destination database, then make sure you run the appropriate
procedure to prepare for instantiation:

PREPARE_TABLE_INSTANTIATION

PREPARE_SCHEMA_INSTANTIATION

PREPARE_GLOBAL_INSTANTIATION
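These procedures are run at the source database as the Streams administrator; for example, at each level (the schema and table names are placeholders):

```sql
exec DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION('scott.emp');
exec DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION('scott');
exec DBMS_CAPTURE_ADM.PREPARE_GLOBAL_INSTANTIATION;
```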

Are Declarative Rule-Based Transformations Configured Properly?

A declarative rule-based transformation is a rule-based transformation that covers one of a
common set of transformation scenarios for row LCRs. Declarative rule-based transformations
are run internally without using PL/SQL. If a Streams capture process, propagation, apply
process, or messaging client is not behaving as expected, then check the declarative rule-based
transformations specified for the rules used by the Streams client and correct any mistakes.
The most common problems with declarative rule-based transformations are:

The declarative rule-based transformation is specified for a table or involves columns
in a table, but the schema either was not specified or was incorrectly specified when the
transformation was created. If the schema is not correct in a declarative rule-based
transformation, then the transformation will not be run on the appropriate LCRs. You should
specify the owning schema for a table when you create a declarative rule-based
transformation. If the schema is not specified when a declarative rule-based transformation is
created, then the user who creates the transformation is specified for the schema by
default.

If the schema is not correct for a declarative rule-based transformation, then, to
correct the problem, remove the transformation and re-create it, specifying the correct
schema for each table.

If more than one declarative rule-based transformation is specified for a particular
rule, then make sure the ordering is correct for execution of these transformations. Incorrect
ordering of declarative rule-based transformations can result in errors or inconsistent data. If
the ordering is not correct for the declarative rule-based transformations specified on a
single rule, then, to correct the problem, remove the transformations and re-create them
with the correct ordering.
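For example, assuming a declarative RENAME_TABLE transformation (the rule and table names below are hypothetical), the transformation is removed and re-created on the same rule by calling the procedure first with operation => 'REMOVE' and then with operation => 'ADD':

```sql
BEGIN
  DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name       => 'strmadmin.employees_rule',
    from_table_name => 'hr.emps',
    to_table_name   => 'hr.employees',
    operation       => 'REMOVE');
  DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name       => 'strmadmin.employees_rule',
    from_table_name => 'hr.emps',
    to_table_name   => 'hr.employees',
    operation       => 'ADD');
END;
/
```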

Are the Custom Rule-Based Transformations Configured Properly?

A custom rule-based transformation is any modification by a user-defined function to a
message when a rule evaluates to TRUE. A custom rule-based transformation is specified in the
action context of a rule, and these action contexts contain a name-value pair with
STREAMS$_TRANSFORM_FUNCTION for the name and a user-created function name for the
value. This user-created function performs the transformation. If the user-created function
contains any flaws, then unexpected behavior can result.

If a Streams capture process, propagation, apply process, or messaging client is not behaving
as expected, then check the custom rule-based transformation functions specified for the rules
used by the Streams client and correct any flaws. You can find the names of these functions by
querying the DBA_STREAMS_TRANSFORM_FUNCTION data dictionary view. You might need to
modify a transformation function or remove a custom rule-based transformation to correct
the problem. Also, make sure the name of the function is spelled correctly when you specify
the transformation for a rule.
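The following query lists each rule that has a custom rule-based transformation and the user-created function it runs:

```sql
COL user_function_name FORMAT a40
SELECT rule_owner, rule_name, user_function_name
  FROM dba_streams_transform_function;
```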

An error caused by a custom rule-based transformation might cause a capture process,
propagation, apply process, or messaging client to abort. In this case, you might need to
correct the transformation before the Streams client can be restarted or invoked.
Rule evaluation is done before a custom rule-based transformation. For example, if you have a
transformation that changes the name of a table from emps to employees, then make sure
each rule using the transformation specifies the table name emps, rather than employees, in
its rule condition.

Are Incorrectly Transformed LCRs in the Error Queue?

In some cases, incorrectly transformed LCRs might have been moved to the error queue by an
apply process. When this occurs, you should examine the transaction in the error queue to
analyze the feasibility of re-executing the transaction successfully. If an abnormality is found in
the transaction, then you might be able to configure a DML handler to correct the problem.
The DML handler will run when you re-execute the error transaction. When a DML handler is
used to correct a problem in an error transaction, the apply process that uses the DML handler
should be stopped to prevent the DML handler from acting on LCRs that are not involved with
the error transaction. After successful re-execution, if the DML handler is no longer needed,
then remove it. Also, correct the rule-based transformation to avoid future errors.
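Error transactions can be listed from DBA_APPLY_ERROR and re-executed with DBMS_APPLY_ADM once the cause is corrected; the transaction ID '5.4.312' below is only an example value:

```sql
SELECT apply_name, local_transaction_id, error_message
  FROM dba_apply_error;

exec DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '5.4.312');
```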

The rule sets used by all Streams clients, including capture processes and propagations,
determine the behavior of these Streams clients. Therefore, make sure the rule sets for any
capture processes or propagations on which an apply process depends contain the correct
rules. If the rules for these Streams clients are not configured properly, then the apply process
queue might never receive the appropriate messages. Also, a message traveling through a
stream is the composition of all of the transformations done along the path. For example, if a
capture process uses subset rules and performs row migration during capture of a message,
and a propagation uses a rule-based transformation on the message to change the table name,
then, when the message reaches an apply

process, the apply process rules must account for these transformations.

Rules related views/tables

V$RULE

GV$RULE

V$RULE_SET

GV$RULE_SET

V$RULE_SET_AGGREGATE_STATS
GV$RULE_SET_AGGREGATE_STATS

DBA_STREAMS_GLOBAL_RULES

DBA_STREAMS_MESSAGE_RULES

DBA_STREAMS_RULES

DBA_STREAMS_SCHEMA_RULES

DBA_STREAMS_TABLE_RULES

DBA_RULE_SET_RULES
DBA_RULE_SETS
DBA_RULES

DBA_HIST_RULE_SET

Check the Logminer log table (SYSTEM.LOGMNR_LOG$) for the last log file for each thread,
then look at the operating system files for the next log file in sequence. Typically, this is the
log file that cannot be found.

If this doesn't help, try turning on logminer and capture tracing and restart capture, then look
at the capture trace file in the bdump directory.

ALTER SYSTEM SET EVENTS '1349 trace name context forever, level 7';
exec dbms_capture_adm.set_parameter('yourcapturename','trace_level','127');
exec dbms_capture_adm.start_capture('yourcapturename');

To turn off tracing:

ALTER SYSTEM SET EVENTS '1349 trace name context off';
exec dbms_capture_adm.set_parameter('yourcapturename','trace_level',null);

ORA-1347 Supplemental log data no longer found

This error indicates that minimum supplemental logging is not enabled for the instance. This
occurs most commonly on 9iR2 RAC instances. When configuring supplemental logging for RAC
in 9iR2, it is necessary to issue the ALTER DATABASE command at each instance in the cluster
BEFORE creating the capture process. In 10g, supplemental logging can be initiated from a
single instance so it is no longer necessary to issue the ALTER DATABASE ADD SUPPLEMENTAL
LOG DATA command at multiple instances. After issuing the ALTER DATABASE ADD
SUPPLEMENTAL LOG DATA, be sure to issue an ALTER SYSTEM ARCHIVE LOG CURRENT or
ALTER SYSTEM SWITCH LOGFILE.

This error can also be signaled if supplemental logging has been dropped, either explicitly or
implicitly. ALTER DATABASE DROP SUPPLEMENTAL LOG DATA explicitly disables supplemental
logging. If this command is issued, the capture process will abort with an ORA-1347 error.

Supplemental logging can be implicitly disabled by DML statements that use a BUFFER hint.
The BUFFER hint is frequently used in TPCC benchmarks. Logging can also be disabled when
using a TEMPORARY TABLE and CLOB in combination. This is reported as bug 3172456 and
fixed in 9.2.0.6.

ORA-1372: Insufficient processes for specified LogMiner operation

This error indicates that there are not enough processes available to start the capture process.
Check the following:

1. Verify that the init.ora parameter parallel_max_servers is sufficient to start the capture and
apply processes. For each capture defined on the database, the number of processes required
is 2 + the parallelism defined for the capture. If the capture parallelism parameter is set to 1
(the default), then 3 processes are required to start the capture. For a capture parallelism
value of 3, 2+3 or 5 processes are required to start the capture.
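The current setting can be checked and, if required, raised dynamically; the value 24 below is only an example and should be sized for all capture and apply processes on the database:

```sql
SELECT name, value FROM v$parameter WHERE name = 'parallel_max_servers';

ALTER SYSTEM SET parallel_max_servers = 24;
```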

2. Check if the database resource manager is used for this database. Check for any plans that
have limitations set for parallel processes by running the following:

select PLAN, TYPE, PARALLEL_DEGREE_LIMIT_P1
  from DBA_RSRC_PLAN_DIRECTIVES;

PARALLEL_DEGREE_LIMIT_P1 specifies a limit on the degree of parallelism for any operation
and the default is UNLIMITED.

If this is the cause, you may need to disable the plan or set the parallelism value high enough
for the system_plan.

ORA-23605 invalid value for the Streams parameter FILE_NAME

Capture abort with ORA-23605 can occur for one of the following reasons:

1. Invalid value for the Streams parameter FILE_NAME indicates an inconsistency between the
capture logminer session and existing logminer sessions. Generally, the registered archived
logs view is empty when this occurs.

This error can occur when attempting to add a logfile to a logminer session. To confirm this
problem run the following query:

select logminer_id from dba_capture where not exists
  (select session# from system.logmnr_session$);

If rows are returned, most likely this is the problem. Check whether an attempt was made to
remove metadata at some point, or whether an incomplete drop of the capture process was
performed. To fix, drop the existing capture process with the non-existent logminer session.
Then recreate the capture process.
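A sketch of that fix, assuming a capture process named 'strm_capture' (a placeholder); the process must afterwards be re-created with its original queue and rules:

```sql
BEGIN
  DBMS_CAPTURE_ADM.DROP_CAPTURE(
    capture_name          => 'strm_capture',
    drop_unused_rule_sets => FALSE);
END;
/
```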

2. Attempting to use a dictionary build from a previously deleted logfile (bug 5278539, fixed in
10.2.0.4, 11.1). In this situation, there are multiple entries in the V$ARCHIVED_LOG view for
the same logfile, with the name being NULL for deleted logfiles. The patch avoids checking
DELETED entries in V$ARCHIVED_LOG:

select name, status from v$archived_log;

What is the status of the capture process?

If the capture process is ENABLED, check its current status using the following query:

prompt ++ <a name="Capture Statistics">CAPTURE STATISTICS</a> ++
COLUMN PROCESS_NAME HEADING "Capture|Process|Number" FORMAT A7
COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A10
COLUMN SID HEADING 'Session|ID' FORMAT 99999999999999
COLUMN SERIAL# HEADING 'Session|Serial|Number'
COLUMN STATE HEADING 'State' FORMAT A17

SELECT SUBSTR(s.PROGRAM,INSTR(S.PROGRAM,'(')+1,4) PROCESS_NAME,
c.CAPTURE_NAME,
c.STARTUP_TIME,
c.SID,
c.SERIAL#,
c.STATE
FROM gV$STREAMS_CAPTURE c, gV$SESSION s
WHERE c.SID = s.SID AND c.SERIAL# = s.SERIAL#;

If the status of the capture process is CAPTURING CHANGES, verify if messages are being
enqueued into the capture queue.

Are LCRs being enqueued into the capture queue?

The following query can be used to verify if capture is enqueuing messages to the queue:

SELECT SUBSTR(s.PROGRAM,INSTR(S.PROGRAM,'(')+1,4) PROCESS_NAME,
c.CAPTURE_NAME,
c.STARTUP_TIME,
c.SID,
c.SERIAL#,
c.STATE,
c.state_changed_time,
c.TOTAL_MESSAGES_CAPTURED,
c.TOTAL_MESSAGES_ENQUEUED, total_messages_created
FROM gV$STREAMS_CAPTURE c, gV$SESSION s
WHERE c.SID = s.SID AND c.SERIAL# = s.SERIAL#;

Are capture rules defined properly?

If the capture process is capturing changes but messages are not being enqueued to the
capture queue, verify if capture rules are defined properly using the following queries:

prompt ++ STREAMS TABLE SUBSETTING RULES ++

col NAME Heading 'Capture Name' format a25 wrap
col object format A25 WRAP
col source_database format a15 wrap
col RULE format a35 wrap
col TYPE format a15 wrap
col dml_condition format a40 wrap

select streams_name NAME, schema_name||'.'||object_name OBJECT,
RULE_TYPE || ' TABLE RULE' TYPE,
rule_owner||'.'||rule_name RULE,
DML_CONDITION , SUBSETTING_OPERATION
from dba_streams_rules where streams_type = 'CAPTURE' and
(dml_condition is not null or subsetting_operation is not null);

prompt
prompt ++ CAPTURE RULES BY RULE SET ++
col capture_name format a25 wrap heading 'Capture|Name'
col RULE_SET format a25 wrap heading 'Rule Set|Name'
col RULE_NAME format a25 wrap heading 'Rule|Name'
col condition format a50 wrap heading 'Rule|Condition'
set long 4000
REM break on rule_set
select c.capture_name,
rsr.rule_set_owner||'.'||rsr.rule_set_name RULE_SET,
rsr.rule_owner||'.'||rsr.rule_name RULE_NAME,
r.rule_condition CONDITION
from dba_rule_set_rules rsr, DBA_RULES r, DBA_CAPTURE c
where rsr.rule_name = r.rule_name and rsr.rule_owner = r.rule_owner and
rsr.rule_set_owner=c.rule_set_owner and rsr.rule_set_name=c.rule_set_name and
rsr.rule_set_name in
(select rule_set_name from dba_capture) order by
rsr.rule_set_owner,rsr.rule_set_name;

prompt +** CAPTURE RULES IN NEGATIVE RULE SET **+


prompt
select c.capture_name,
rsr.rule_set_owner||'.'||rsr.rule_set_name RULE_SET
,rsr.rule_owner||'.'||rsr.rule_name RULE_NAME,
r.rule_condition CONDITION
from dba_rule_set_rules rsr, DBA_RULES r ,DBA_CAPTURE
c
where rsr.rule_name = r.rule_name and rsr.rule_owner =
r.rule_owner and
rsr.rule_set_owner=c.negative_rule_set_owner and
rsr.rule_set_name=c.negative_rule_set_name and
rsr.rule_set_name in
(select negative_rule_set_name rule_set_name from
dba_capture) order by
rsr.rule_set_owner,rsr.rule_set_name;

Are there any overlapping rules?

If capture rules are set up correctly and capture is still not enqueuing messages, verify if there
are any overlapping rules. The following PL/SQL code can be used for that:
rules. The following PL/SQL code can be used for that:

set serveroutput on
declare
overlap_rules boolean := FALSE;
verbose boolean := TRUE;
cursor overlapping_rules is
select a.streams_name sname, a.streams_type stype,
a.rule_set_owner rule_set_owner, a.rule_set_name
rule_set_name,
a.rule_owner owner1, a.rule_name name1,
a.streams_rule_type type1,
b.rule_owner owner2, b.rule_name name2,
b.streams_rule_type type2
from dba_streams_rules a, dba_streams_rules b
where a.rule_set_owner = b.rule_set_owner
and a.rule_set_name = b.rule_set_name
and a.streams_name = b.streams_name and
a.streams_type = b.streams_type
and a.rule_type = b.rule_type
and (a.subsetting_operation is null or
b.subsetting_operation is null)
and (a.rule_owner != b.rule_owner or a.rule_name !=
b.rule_name)
and ((a.streams_rule_type = 'GLOBAL' and
b.streams_rule_type
in ('SCHEMA', 'TABLE') and a.schema_name =
b.schema_name)
or (a.streams_rule_type = 'SCHEMA' and
b.streams_rule_type = 'TABLE'
and a.schema_name = b.schema_name)
or (a.streams_rule_type = 'TABLE' and
b.streams_rule_type = 'TABLE'
and a.schema_name = b.schema_name and
a.object_name = b.object_name
and a.rule_name < b.rule_name)
or (a.streams_rule_type = 'SCHEMA' and
b.streams_rule_type = 'SCHEMA'
and a.schema_name = b.schema_name and a.rule_name
< b.rule_name)
or (a.streams_rule_type = 'GLOBAL' and
b.streams_rule_type = 'GLOBAL'
and a.rule_name < b.rule_name))
order by a.rule_name;
begin
for rec in overlapping_rules loop
overlap_rules := TRUE;
dbms_output.put_line('+ WARNING: The rule ''' ||
rec.owner1 || '''.''' || rec.name1 || ''' and
''' || rec.owner2 || '''.''' || rec.name2 || ''' from rule set '''
|| rec.rule_set_owner || '''.''' ||
rec.rule_set_name || ''' overlap.');
end loop;
if overlap_rules and verbose then
dbms_output.put_line('+Overlapping rules are a problem
especially when rule-based transformations exist.');
dbms_output.put_line('+Streams makes no guarantees of
which rule in a rule set will evaluate to TRUE,');
dbms_output.put_line('+Thus overlapping rules will cause
inconsistent behavior, and should be avoided.');
end if;
dbms_output.put_line('+');
end;
/

Is the capture paused for flow control?

If the capture is PAUSED FOR FLOW CONTROL, verify if messages that have been enqueued
have been browsed using the following queries:

select * from gv$buffered_publishers;

Check the current number of messages in the queue and compare it to the number of
unbrowsed messages. If messages have been browsed and are not getting removed from the
queue, then verify if aq_tm_processes is set explicitly to 0 in the source or target databases:

declare
mycheck number;
begin
select 1 into mycheck from v$parameter where name =
'aq_tm_processes' and value = '0' and
(ismodified <> 'FALSE' OR isdefault='FALSE');
if mycheck = 1 then
dbms_output.put_line('+ERROR: The parameter
''aq_tm_processes'' should not be explicitly set to 0!');
dbms_output.put_line('+Queue monitoring is disabled for
all queues.');
dbms_output.put_line('+To resolve this problem, set the
value to 1 using: ALTER SYSTEM SET
AQ_TM_PROCESSES=1; ');
end if;
exception when no_data_found then null;
end;
/

If messages are not being browsed, check the status of the propagation and apply processes.

For additional information, please review the following note:

Note.746247.1 Troubleshooting Streams Capture when status is Paused For Flow Control

Is the capture waiting for REDO or in DICTIONARY INITIALIZATION state?

Turn on logminer and capture tracing and restart capture. Look at the capture trace file in the
bdump directory.

ALTER SYSTEM SET EVENTS '1349 trace name context forever, level 7';
exec dbms_capture_adm.set_parameter('yourcapturename','trace_level','127');
exec dbms_capture_adm.start_capture('yourcapturename');

To turn off tracing:

ALTER SYSTEM SET EVENTS '1349 trace name context off';
exec dbms_capture_adm.set_parameter('yourcapturename','trace_level',null);

The trace will indicate which archive is required by the capture process. Verify if the archive
exists in the archive destination specified for the database. If the file exists, verify if it is
registered with the capture process using the following query:

COLUMN CONSUMER_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A10
COLUMN SEQUENCE# HEADING 'Sequence|Number' FORMAT 999999
COLUMN NAME HEADING 'Archived Redo Log|File Name' format a35
COLUMN FIRST_SCN HEADING 'Archived Log|First SCN'
COLUMN FIRST_TIME HEADING 'Archived Log Begin|Timestamp'
COLUMN NEXT_SCN HEADING 'Archived Log|Last SCN'
COLUMN NEXT_TIME HEADING 'Archived Log Last|Timestamp'
COLUMN MODIFIED_TIME HEADING 'Archived Log|Registered Time'
COLUMN DICTIONARY_BEGIN HEADING 'Dictionary|Build|Begin' format A6
COLUMN DICTIONARY_END HEADING 'Dictionary|Build|End' format A6
COLUMN PURGEABLE HEADING 'Purgeable|Archive|Log' format a9
SELECT r.CONSUMER_NAME,
r.SOURCE_DATABASE,
r.SEQUENCE#,
r.NAME,
r.first_scn,
r.FIRST_TIME,
r.next_scn,
r.next_time,
r.MODIFIED_TIME,
r.DICTIONARY_BEGIN,
r.DICTIONARY_END,
r.purgeable
FROM DBA_REGISTERED_ARCHIVED_LOG r,
DBA_CAPTURE c
WHERE r.CONSUMER_NAME = c.CAPTURE_NAME
ORDER BY source_database, consumer_name, r.first_scn;

If the archive is not registered, check the alert.log for any errors during registration and try
registering it manually using:

alter database register logfile '<path/file_name>' for '<capture_name>';

Is the capture process current?

The following query can be used to determine the message enqueuing latency of each capture
process on the database:

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A10
COLUMN LATENCY_SECONDS HEADING 'Latency|in|Seconds' FORMAT 999999
COLUMN CREATE_TIME HEADING 'Message Creation|Time' FORMAT A20
COLUMN ENQUEUE_TIME HEADING 'Enqueue Time' FORMAT A20
COLUMN ENQUEUE_MESSAGE_NUMBER HEADING 'Message|Number' FORMAT 999999
SELECT CAPTURE_NAME,
(ENQUEUE_TIME-ENQUEUE_MESSAGE_CREATE_TIME)*86400 LATENCY_SECONDS,
TO_CHAR(ENQUEUE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') CREATE_TIME,
TO_CHAR(ENQUEUE_TIME, 'HH24:MI:SS MM/DD/YY') ENQUEUE_TIME,
ENQUEUE_MESSAGE_NUMBER
FROM V$STREAMS_CAPTURE;

Are there any long running transactions?

The following query can be used to identify if there are any long running transactions in the
system:

prompt
prompt ++ Current Long Running Transactions ++
prompt Current transactions open for more than 20
minutes
prompt
col runlength HEAD 'Txn Open|Minutes' format 9999.99
col sid HEAD 'Session' format a13
col xid HEAD 'Transaction|ID' format a18
col terminal HEAD 'Terminal' format a10
col program HEAD 'Program' format a27 wrap
select t.inst_id, sid||','||serial#
sid,xidusn||'.'||xidslot||'.'||xidsqn xid,
(sysdate - start_date ) * 1440 runlength
,terminal,program from gv$transaction t, gv$session s
where t.addr=s.taddr and (sysdate - start_date) * 1440 >
20;

Long running transactions are also reported in the alert.log.

Are there any large transactions being reported in the system?

Check the alert.log for any messages related to large transactions. The alert.log will show
information of when the large transaction was identified and also if it has been committed or
rolled back.

If no commit or rollback for the transaction has been reported in the alert.log, the transaction
is still running.

Troubleshooting the Propagation Process

Does the Propagation Use the Correct Source and Destination Queue?

Make sure the propagation has been configured properly to propagate messages from the
correct source queue to the correct destination queue, using a valid database link.

Check how the propagation has been defined:

- Queue-to-Database Link : The propagation is defined by a source queue and database link
pair. This is the default. The QUEUE_TO_QUEUE parameter is set to FALSE in this case.

- Queue-to-Queue : The propagation is defined by a source queue and destination queue pair.
The QUEUE_TO_QUEUE parameter is set to TRUE.

From the Healthcheck report this can be visualized in section "++ PROPAGATIONS IN
DATABASE ++"
COLUMN 'Source Queue' FORMAT A39
COLUMN 'Destination Queue' FORMAT A39
COLUMN PROPAGATION_NAME HEADING 'Propagation'
FORMAT A35
column queue_to_queue HEADING 'Q-2-Q'
column error_message HEADING 'Error Message'
column error_date HEADING 'Error Date'

SELECT p.propagation_name,
p.SOURCE_QUEUE_OWNER ||'.'|| p.SOURCE_QUEUE_NAME ||'@'||
g.GLOBAL_NAME "Source Queue",
p.DESTINATION_QUEUE_OWNER ||'.'||
p.DESTINATION_QUEUE_NAME ||'@'||
p.DESTINATION_DBLINK "Destination Queue",
p.queue_to_queue,
p.status,
p.error_date,
p.error_message
FROM DBA_PROPAGATION p, GLOBAL_NAME g;

Is the Propagation Enabled?

For a propagation job to propagate messages, the propagation must be enabled. If messages
are not being propagated by a propagation as expected, then the propagation might not be
enabled.

Check:

- The propagation has an associated Jxxx process.

- Whether the propagation is ENABLED, DISABLED, or ABORTED.

- The date of the last error, if there are any propagation errors, and the error number/error message of the last error.

From the Healthcheck report this can be visualized in section "++ SCHEDULE FOR EACH PROPAGATION ++":

prompt
COLUMN PROPAGATION_NAME Heading 'Propagation|Name' format a17 wrap
COLUMN START_DATE HEADING 'Start Date'
COLUMN PROPAGATION_WINDOW HEADING 'Duration|in Seconds' FORMAT 9999999999999999
COLUMN NEXT_TIME HEADING 'Next|Time' FORMAT A8
COLUMN LATENCY HEADING 'Latency|in Seconds' FORMAT 9999999999999999
COLUMN SCHEDULE_DISABLED HEADING 'Status' FORMAT A8
COLUMN PROCESS_NAME HEADING 'Process' FORMAT A8
COLUMN FAILURES HEADING 'Number of|Failures' FORMAT 99
COLUMN LAST_ERROR_MSG HEADING 'Error Message' FORMAT A50
COLUMN TOTAL_BYTES HEADING 'Total Bytes|Propagated' FORMAT 9999999999999999
COLUMN CURRENT_START_DATE HEADING 'Current|Start' FORMAT A17
COLUMN LAST_RUN_DATE HEADING 'Last|Run' FORMAT A17
COLUMN NEXT_RUN_DATE HEADING 'Next|Run' FORMAT A17
COLUMN LAST_ERROR_DATE HEADING 'Last|Error' FORMAT A17
column message_delivery_mode HEADING 'Message|Delivery|Mode'
column queue_to_queue HEADING 'Q-2-Q'

SELECT p.propagation_name,
       TO_CHAR(s.START_DATE, 'HH24:MI:SS MM/DD/YY') START_DATE,
       s.PROPAGATION_WINDOW,
       s.NEXT_TIME,
       s.LATENCY,
       DECODE(s.SCHEDULE_DISABLED,
              'Y', 'Disabled',
              'N', 'Enabled') SCHEDULE_DISABLED,
       s.PROCESS_NAME, s.total_bytes,
       s.FAILURES,
       s.message_delivery_mode,
       p.queue_to_queue,
       s.LAST_ERROR_MSG
FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
WHERE p.DESTINATION_DBLINK =
      NVL(REGEXP_SUBSTR(s.destination, '[^@]+', 1, 2), s.destination)
  AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
  AND s.QNAME = p.SOURCE_QUEUE_NAME
ORDER BY message_delivery_mode, propagation_name;

SELECT p.propagation_name, message_delivery_mode,
       TO_CHAR(s.LAST_RUN_DATE, 'HH24:MI:SS MM/DD/YY') LAST_RUN_DATE,
       TO_CHAR(s.CURRENT_START_DATE, 'HH24:MI:SS MM/DD/YY') CURRENT_START_DATE,
       TO_CHAR(s.NEXT_RUN_DATE, 'HH24:MI:SS MM/DD/YY') NEXT_RUN_DATE,
       TO_CHAR(s.LAST_ERROR_DATE, 'HH24:MI:SS MM/DD/YY') LAST_ERROR_DATE
FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
WHERE p.DESTINATION_DBLINK =
      NVL(REGEXP_SUBSTR(s.destination, '[^@]+', 1, 2), s.destination)
  AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
  AND s.QNAME = p.SOURCE_QUEUE_NAME
ORDER BY message_delivery_mode, propagation_name;

Are There Enough Job Queue Processes?

In 10.2, propagation jobs use job queue processes to propagate messages. Make sure the JOB_QUEUE_PROCESSES initialization parameter is set to 2 or higher in each database instance that runs propagations. It should be set to a value that is high enough to accommodate all of the jobs that run simultaneously.
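As a quick sketch of checking and adjusting the parameter (the value 10 below is illustrative only, not a recommendation):

```sql
-- Check the current setting of JOB_QUEUE_PROCESSES.
show parameter job_queue_processes

-- Raise it dynamically if it is too low to cover all concurrent jobs
-- (the value 10 is illustrative).
alter system set job_queue_processes = 10;
```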

Check DBA_JOBS to ensure that:

- It shows a JOB like:

  next_date := sys.dbms_aqadm.aq$_propaq(job);

- The associated propagation JOB is not broken (BROKEN flag is N).

- The LOG_USER, PRIV_USER, and SCHEMA_USER for this propagation JOB is SYS.

prompt
set recsep each
set recsepchar =
select * from dba_jobs;

In 11.1, AQ propagation uses Oracle Scheduler, enabling AQ propagation to take advantage of Scheduler features. Job queue processes parameters need not be set in Oracle Database 11g for propagation to work. Oracle Scheduler automatically starts up the required number of slaves for the existing propagation schedules.

prompt
select * from dba_scheduler_jobs;

Check the Trace Files and Alert Log for Problems

Messages about propagation are recorded in trace files for the database in which the propagation job is running. A propagation job runs on the database containing the source queue in the propagation. These trace file messages can help you to identify and resolve problems in a Streams environment.

All trace files for background processes are written to the destination directory specified by the initialization parameter BACKGROUND_DUMP_DEST. The names of trace files are operating system specific, but each file usually includes the name of the process writing the file.

Each propagation uses a propagation job that depends on the job queue coordinator process and a job queue process. The job queue coordinator process is named cjqnn, where nn is the job queue coordinator process number, and a job queue process is named jnnn, where nnn is the job queue process number.

For example, on some operating systems, if the system identifier for a database running a propagation job is hqdb and the job queue coordinator process is 01, then the trace file for the job queue coordinator process starts with hqdb_cjq01. Similarly, on the same database, if a job queue process is 001, then the trace file for the job queue process starts with hqdb_j001. You can check the process name by querying the PROCESS_NAME column in the DBA_QUEUE_SCHEDULES data dictionary view.

Determining the Rule Sets for Each Propagation

Make sure the rule sets and rules are set up properly according to the requirements.

From the Healthcheck report this can be visualized in sections:

++ PROPAGATION RULE SETS IN DATABASE ++

COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A35
COLUMN Positive HEADING 'Positive|Rule Set' FORMAT A35
COLUMN Negative HEADING 'Negative|Rule Set' FORMAT A35

SELECT PROPAGATION_NAME,
       RULE_SET_OWNER||'.'||RULE_SET_NAME Positive,
       NEGATIVE_RULE_SET_OWNER||'.'||NEGATIVE_RULE_SET_NAME Negative
FROM DBA_PROPAGATION;

++ STREAMS PROPAGATION RULES CONFIGURED ++

col NAME Heading 'Name' format a25 wrap
col PropNAME format a25 Heading 'Propagation Name'
col object format a25 wrap
col source_database format a15 wrap
col RULE format a35 wrap
col TYPE format a15 wrap
col dml_condition format a40 wrap

select streams_name NAME,
       schema_name||'.'||object_name OBJECT,
       rule_set_type,
       SOURCE_DATABASE,
       STREAMS_RULE_TYPE ||' '|| Rule_type TYPE,
       INCLUDE_TAGGED_LCR,
       rule_owner||'.'||rule_name RULE
from dba_streams_rules
where streams_type = 'PROPAGATION'
order by name, object, source_database, rule_set_type, rule;

++ STREAMS TABLE SUBSETTING RULES ++

col NAME format a25 wrap
col object format a25 wrap
col source_database format a15 wrap
col RULE format a35 wrap
col TYPE format a15 wrap
col dml_condition format a40 wrap

select streams_name NAME,
       schema_name||'.'||object_name OBJECT,
       RULE_TYPE || ' TABLE RULE' TYPE,
       rule_owner||'.'||rule_name RULE,
       DML_CONDITION, SUBSETTING_OPERATION
from dba_streams_rules
where streams_type = 'PROPAGATION'
  and (dml_condition is not null or subsetting_operation is not null);

++ PROPAGATION RULES BY RULE SET ++

col RULE_SET format a25 wrap
col RULE_NAME format a25 wrap
col condition format a60 wrap
set long 4000
REM break on RULE_SET

select rsr.rule_set_owner||'.'||rsr.rule_set_name RULE_SET,
       rsr.rule_owner||'.'||rsr.rule_name RULE_NAME,
       r.rule_condition CONDITION
from dba_rule_set_rules rsr, dba_rules r
where rsr.rule_name = r.rule_name
  and rsr.rule_owner = r.rule_owner
  and rule_set_name in (select rule_set_name from dba_propagation)
order by rsr.rule_set_owner, rsr.rule_set_name;

++ PROPAGATION RULES IN NEGATIVE RULE SET ++

select c.propagation_name,
       rsr.rule_set_owner||'.'||rsr.rule_set_name RULE_SET,
       rsr.rule_owner||'.'||rsr.rule_name RULE_NAME,
       r.rule_condition CONDITION
from dba_rule_set_rules rsr, DBA_RULES r, DBA_PROPAGATION c
where rsr.rule_name = r.rule_name
  and rsr.rule_owner = r.rule_owner
  and rsr.rule_set_owner = c.negative_rule_set_owner
  and rsr.rule_set_name = c.negative_rule_set_name
  and rsr.rule_set_name in
      (select negative_rule_set_name rule_set_name from dba_propagation)
order by rsr.rule_set_owner, rsr.rule_set_name;

++ PROPAGATION RULE TRANSFORMATIONS BY RULE SET ++

col RULE_SET format a25 wrap
col RULE_NAME format a25 wrap
col action_context_name format a32 wrap
col action_context_value format a32 wrap
REM break on RULE_SET

select rsr.rule_set_owner||'.'||rsr.rule_set_name RULE_SET, r.*
from dba_rule_set_rules rsr, dba_streams_transformations r
where r.rule_name = rsr.rule_name
  and r.rule_owner = rsr.rule_owner
  and rule_set_name in (select rule_set_name from dba_propagation)
order by rsr.rule_set_owner, rsr.rule_set_name,
         r.rule_owner, r.rule_name, transform_type desc,
         step_number, precedence;

Determining the Total Number of Messages and Bytes Propagated

To determine the number of messages sent by a propagation, as well as the number of acknowledgements being returned from the target site, query the V$PROPAGATION_SENDER view at the source site and the V$PROPAGATION_RECEIVER view at the destination site.

The following queries display information for each propagation:

++ EVENTS AND BYTES PROPAGATED FOR EACH PROPAGATION ++

COLUMN Elapsed_propagation_TIME HEADING 'Elapsed|Propagation Time|(Seconds)' FORMAT 9999999999999999
COLUMN TOTAL_NUMBER HEADING 'Total|Events|Propagated' FORMAT 9999999999999999
COLUMN TOTAL_BYTES HEADING 'Total Bytes|Propagated' FORMAT 9999999999999999
COLUMN SCHEDULE_STATUS HEADING 'Schedule|Status'
column elapsed_dequeue_time HEADING 'Elapsed|Dequeue Time|(Seconds)'
column elapsed_pickle_time HEADING 'Elapsed|Pickle Time|(Seconds)'
column total_time HEADING 'Total Time|(Seconds)'
column high_water_mark HEADING 'High|Water|Mark'
column acknowledgement HEADING 'Target|Ack'

SELECT p.propagation_name, q.message_delivery_mode,
       DECODE(p.STATUS,
              'DISABLED', 'Disabled',
              'ENABLED', 'Enabled') SCHEDULE_STATUS,
       q.instance,
       q.total_number TOTAL_NUMBER, q.TOTAL_BYTES,
       q.elapsed_dequeue_time/100 elapsed_dequeue_time,
       q.elapsed_pickle_time/100 elapsed_pickle_time,
       q.total_time/100 total_time
FROM DBA_PROPAGATION p, dba_queue_schedules q
WHERE p.DESTINATION_DBLINK =
      NVL(REGEXP_SUBSTR(q.destination, '[^@]+', 1, 2), q.destination)
  AND q.SCHEMA = p.SOURCE_QUEUE_OWNER
  AND q.QNAME = p.SOURCE_QUEUE_NAME
ORDER BY q.message_delivery_mode, p.propagation_name;

++ PROPAGATION SENDER STATISTICS ++

col queue_id HEADING 'Queue ID'
col queue_schema HEADING 'Source|Queue|Owner'
col queue_name HEADING 'Source|Queue|Name'
col dst_queue_schema HEADING 'Destination|Queue|Owner'
col dst_queue_name HEADING 'Destination|Queue|Name'
col dblink Heading 'Destination|Database|Link'
col total_msgs HEADING 'Total|Messages'
col max_num_per_win HEADING 'Max Msgs|per|Window'
col max_size HEADING 'Max|Size'
col src_queue_schema HEADING 'Source|Queue|Owner'
col src_queue_name HEADING 'Source|Queue|Name'
column elapsed_dequeue_time HEADING 'Elapsed|Dequeue Time|(CentiSecs)'
column elapsed_pickle_time HEADING 'Elapsed|Pickle Time|(CentiSecs)'
column total_time HEADING 'Total Time|(CentiSecs)'

SELECT * from v$propagation_sender;

++ PROPAGATION RECEIVER STATISTICS ++

column src_queue_name HEADING 'Source|Queue|Name'
column src_dbname HEADING 'Source|Database|Name'
column startup_time HEADING 'Startup|Time'
column elapsed_unpickle_time HEADING 'Elapsed|Unpickle Time|(CentiSeconds)'
column elapsed_rule_time HEADING 'Elapsed|Rule Time|(CentiSeconds)'
column elapsed_enqueue_time HEADING 'Elapsed|Enqueue Time|(CentiSeconds)'

SELECT * from gv$propagation_receiver;

To display information about subscribers for all buffered queues in the instance, query V$BUFFERED_SUBSCRIBERS. In Oracle 10g Release 2 and later, there are two rows for each propagation: one for queue-to-database link propagation and one for queue-to-queue propagation.

++ BUFFERED SUBSCRIBERS ++

select * from gv$buffered_subscribers;

NOTE: An optimization first available in Oracle Database 11g Release 1 is a capture process that automatically sends LCRs directly to an apply process. This occurs when there is a single publisher and consumer defined for the queue that contains the captured changes. This optimized configuration is called Combined Capture and Apply (CCA). When CCA is in use, LCRs are transmitted directly from the capture process to the apply process via a database link. In this mode, the capture process does not stage the LCRs in a queue or use queue propagation to deliver them.

Propagation statistics are zero when CCA optimization is in effect. Buffered Subscribers view statistics are also zero when CCA optimization is in effect.

Propagation does not start after error or database restart

At times, the propagation job may become "broken" or fail to start after an error has been encountered or after a database restart.

The typical solution is to disable the propagation and then re-enable it. For example, for the propagation named STRMADMIN_PROPAGATE the commands would be:

10.2
exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE');
exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE');

If the above does not fix the problem, stop the propagation specifying the force parameter (the second parameter of stop_propagation) as TRUE. For example, for the propagation named STRMADMIN_PROPAGATE, the commands would be:

exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE', true);
exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE');

The statistics for the propagation are cleared when the force parameter is set to TRUE.

Common Propagation Errors

The most common propagation errors result from an incorrect network configuration.

The list below shows errors caused by the tnsnames.ora file or database links being configured incorrectly.

- ORA-12154: TNS:could not resolve service name.

- ORA-12505: TNS:listener does not currently know of SID given in connect descriptor.

- ORA-12514: TNS:listener does not currently know of service requested in connect descriptor.

- ORA-12541: TNS:no listener.

Other common errors:

- ORA-02082: A loopback database link must have a connection qualifier.

Can occur if you use the Streams Setup Wizard in Oracle Enterprise Manager without first configuring the GLOBAL_NAME for your database.

- ORA-25307: Enqueue rate too high. Enable flow control.

This is an informational message indicating that flow control has been automatically enabled to reduce the rate at which events are being enqueued into the staging area.

DBA_QUEUE_SCHEDULES displays this informational message when automatic flow control (a 10g feature of Streams) has been invoked.

This typically occurs when the target site is unable to keep up with the rate of messages flowing from the source site. Other than checking that the apply process is running normally on the target site, no action is required by the DBA. Propagation and the capture process will resume automatically when the target site is able to accept more messages.
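As a sketch of how to spot this condition, the informational text can surface in the schedule's last error message (whether it lands there can vary by version):

```sql
-- Look for the ORA-25307 flow-control message in the propagation schedules.
SELECT schema, qname, last_error_msg
FROM dba_queue_schedules
WHERE last_error_msg LIKE '%ORA-25307%';
```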

In some situations, propagation may become disabled (if the number of failures reaches 16). In these situations, the propagation can be re-enabled manually.

- ORA-25315 unsupported configuration for propagation of buffered messages.

This error typically indicates that an attempt was made to propagate buffered messages with the database link pointing to an instance in the destination database which is not the owner instance of the destination queue. To resolve, use queue-to-queue propagation for buffered messages.

- ORA-600 [KWQBMCRCPTS101] after dropping / recreating propagation

For cause/fixes refer to:

Note 421237.1 ORA-600 [KWQBMCRCPTS101] reported by a Qmon slave process after dropping a Streams Propagation

PROPAGATION Related NOTEs

Note 437838.1 Recommended Patches for Streams
Note 749181.1 How to Recover Streams After Dropping Propagation
Note 368912.1 Queue to Queue Propagation Schedule encountered ORA-12514 in a RAC environment

Troubleshooting the Apply Process

Is the Apply process ENABLED?

SELECT APPLY_NAME, STATUS FROM DBA_APPLY;


If the apply process is DISABLED, try starting it using:

EXEC DBMS_APPLY_ADM.START_APPLY( '<apply name>' )

If the apply process is ABORTED, you can use the following query to identify the error condition that caused the apply process to abort:

COLUMN APPLY_NAME HEADING 'Apply|Process|Name' FORMAT A15
COLUMN STATUS_CHANGE_TIME HEADING 'Abort Time'
COLUMN ERROR_NUMBER HEADING 'Error|Number' FORMAT 99999999
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A33

SELECT APPLY_NAME,
       TO_CHAR(STATUS_CHANGE_TIME, 'DD-MON-RRRR hh24:mi:ss') STATUS_CHANGE_TIME,
       ERROR_NUMBER, ERROR_MESSAGE
FROM DBA_APPLY
WHERE STATUS = 'ABORTED';

If the apply process aborted due to an error in the apply error queue, check the error queue using the following query:

col source_commit_scn HEADING 'Source|Commit|Scn'
col message_number HEADING 'Message in|Txn causing|Error'
col message_count HEADING 'Total|Messages|in Txn'
col local_transaction_id HEADING 'Local|Transaction|ID'
col error_message HEADING 'Apply|Error|Message'
col ERROR_CREATION_TIME HEADING 'Error|Creation|Timestamp'
col source_transaction_id HEADING 'Source|Transaction|ID'

select apply_name, source_database, source_commit_scn, message_number,
       message_count, local_transaction_id, error_message,
       error_creation_time, source_transaction_id
from DBA_APPLY_ERROR
order by apply_name, source_commit_scn;

Apply will not restart if there are any errors in the apply error queue when the apply parameter DISABLE_ON_ERROR is TRUE (the default) for the specific apply process. Use the query below to check this parameter:

col APPLY_NAME format a30
col parameter format a20
col value format a20
break on apply_name

select * from dba_apply_parameters order by apply_name, parameter;

Some common apply errors include:

APPLY MOST COMMON ERRORS TROUBLESHOOTING

ORA-26666 cannot alter STREAMS process

On shutdown of the apply process, the error ORA- timeout occurred. When attempting to restart the apply process, the apply process aborts with the following message:

ORA-26666: cannot alter STREAMS process STRMADMIN_SITE1_US_ORACLE
ORA-6512: at SYS.DBMS_LOGREP_UTIL
ORA-6512: at SYS.DBMS_APPLY_ADM
ORA-6512: at line 2

In this case, you will need to do a "forced" shutdown of the apply process, then restart the apply process. For example:

exec dbms_apply_adm.stop_apply('STRMADMIN_SITE1_US_ORACLE', force=>true);
exec dbms_apply_adm.start_apply('STRMADMIN_SITE1_US_ORACLE');

ORA-01403 No Data Found

A classic error, most of the time caused by a data mismatch between the source and destination tables. The ORA-1403 error occurs when an apply process tries to update an existing row and the OLD_VALUES in the row LCR do not match the current values at the destination database. The basic conditions that produce the error are:

- Missing supplemental logging for key columns at the source database;

- Missing primary key or unique key at the source database;

- Missing substitute key columns at the destination database when there is no primary key or unique key at the source database;

- Applying a transaction out of order, or the transaction being applied depends on another transaction which has not yet executed;

- Data mismatch between the row LCR and the table for which the LCR is applying the change.

There are two ways to fix the ORA-1403:

- You can manage the ORA-1403 automatically via a DML handler, or

- You can update the current values in the row so that the row LCR can be applied successfully. If changes to the row at the apply site are also captured by a capture process at the destination database, then you will need to use DBMS_STREAMS.SET_TAG to avoid re-capturing these changes, which would lead to new ORA-1403 errors at other apply sites. For example:

a. Set a tag in the session that corrects the row. Make sure you set the tag to a value that prevents the manual change from being replicated. For example, the tag can prevent the change from being captured by a capture process.

EXEC DBMS_STREAMS.SET_TAG(tag => HEXTORAW('17'));

In some environments, you might need to set the tag to a different value.

b. Update the row in the table so that the data matches the old values in the LCR.

c. Re-execute the error or re-execute all errors. To re-execute an error, run the EXECUTE_ERROR procedure in the DBMS_APPLY_ADM package, and specify the transaction identifier for the transaction that caused the error. For example:

EXEC DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '5.4.312');

Or, execute all errors for the apply process by running the EXECUTE_ALL_ERRORS procedure:

EXEC DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS(apply_name => 'APPLY');

d. If you are going to make other changes in the current session that you want to replicate to destination databases, then reset the tag for the session to an appropriate value, as in the following example:

EXEC DBMS_STREAMS.SET_TAG(tag => NULL);

In some environments, you might need to set the tag to a value other than NULL.

Refer to the following note for additional instructions:

Note 265201.1 Troubleshooting Streams Apply Error ORA-1403, ORA-26787 or ORA-26786

ORA-26687 Instantiation SCN Not Set

The error occurs because the instantiation SCN is not set for the object. Basically it might be caused by:

- Importing an object without preparing the object for instantiation prior to export;

- Using original export/import for instantiation, and performing the import without specifying y for the STREAMS_INSTANTIATION import parameter;

- Not having called any of these DBMS_APPLY_ADM procedures:

SET_TABLE_INSTANTIATION_SCN
SET_SCHEMA_INSTANTIATION_SCN
SET_GLOBAL_INSTANTIATION_SCN

- When applying DDL changes, not having set the instantiation SCN at the SCHEMA or GLOBAL level.
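As a minimal sketch of setting the instantiation SCN manually for a single table (the object name, source database global name, and database link below are placeholders; the SCN would normally be obtained at the source database):

```sql
DECLARE
  iscn NUMBER;  -- instantiation SCN value
BEGIN
  -- Obtain the current SCN at the source database over a database link
  -- (SRC.EXAMPLE.COM is a placeholder).
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@SRC.EXAMPLE.COM;
  -- Record it at the destination for the replicated table (placeholders).
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'hr.employees',
    source_database_name => 'SRC.EXAMPLE.COM',
    instantiation_scn    => iscn);
END;
/
```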

Check the current objects prepared for instantiation through the view DBA_APPLY_INSTANTIATED_OBJECTS or in section "++ TABLES PREPARED FOR CAPTURE ++" of your Streams Health Check report.

Refer to the following notes for additional instructions:

Note 272731.1 Objects Not Instantiated When Using DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN ORA-26687

Note 223000.1 Streams Apply Process Fails with ORA-26687 or ''Missing Streams multi-version data dictionary''

Note 783815.1 DBA_APPLY_INSTANTIATED_OBJECTS and ORA-26687

ORA-26688 Missing Key in LCR

- Missing/disabled primary key at the source database. When not using primary keys, make sure you have set an alternative key at the destination database using the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package.

- Missing supplemental logging at the source database, if using apply PARALLELISM greater than 1, for any indexed column at the destination database, which includes unique or non-unique index columns, foreign key columns, and bitmap index columns.
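A minimal sketch of designating a substitute key at the destination (the table and column names are placeholders):

```sql
BEGIN
  -- Use these columns as the key for apply when the source table
  -- has no usable primary or unique key (names are placeholders).
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name => 'hr.employees',
    column_list => 'employee_id,last_name');
END;
/
```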

Refer to the following note for additional instructions:

Note 290280.1 Ora-26688 on Apply Process of Streams after Online Table Redefinition

ORA-1031 Insufficient Privileges

The user designated as the apply user does not have the
necessary privileges to perform SQL operations on the
replicated objects. The apply user privileges must be
granted by an explicit grant of each privilege. Granting
these privileges through a role is not sufficient for the
Streams apply user.
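For example, the required privileges might be granted directly as follows (the user and object names are placeholders; role-based grants do not count for the apply user):

```sql
-- Direct object grants to the apply user (names are placeholders).
GRANT INSERT, UPDATE, DELETE ON hr.employees TO streams_apply;

-- Explicit EXECUTE on any apply handler or transformation function.
GRANT EXECUTE ON strmadmin.my_dml_handler TO streams_apply;
```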

Additionally, if the apply user does not have explicit EXECUTE privilege on an apply handler procedure or custom rule-based transformation function, then an ORA-06550 error might result when the apply user tries to run the procedure or function. Typically, this error causes the apply process to abort without adding errors to the DBA_APPLY_ERROR view. However, the trace file for the apply coordinator reports the error. Specifically, errors similar to the following appear in the trace file:

ORA-12801 in STREAMS process
ORA-12801: error signaled in parallel query server P000
ORA-06550: line 1, column 15:
PLS-00201: identifier 'STRMADMIN.TO_AWARDFCT_RULEDML' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored

In this example, the apply user dssdbo does not have execute privilege on the to_award_fct_ruledml function in the strmadmin schema. To correct the problem, grant the required EXECUTE privilege to the apply user.

Refer to the following note for additional instructions:

Note 215372.1 Streams Apply Process Fails With ORA-1031: Insufficient Privileges

MISSING Streams multi-version data dictionary

Although you see this message on the Streams apply destination site, it is caused by the Streams data dictionary information for the specified object not being available on the source database at the time the Streams capture was created.

Please refer to the Streams Capture Troubleshooting section in this same article and the Metalink note below for additional instructions:

Note 212044.1 Resolving the MISSING Streams Multi-version Data Dictionary Error

Other common errors:

ORA-24031 Invalid Value, parameter_name Should Be Non-NULL

ORA-26689 Column Type Mismatch

For additional details on the above errors, please refer to the section "Troubleshooting Specific Apply Errors" in the online documentation:

Manual: Oracle Streams Replication Administrator's Guide
Chapter: 13 Troubleshooting Streams Replication
Section: Troubleshooting Specific Apply Errors

Apply Enabled but not dequeuing/moving

1. First of all, check whether messages are reaching the apply destination queue by running the following query against the buffered queue multiple times:

col QUEUE format a50 wrap
col "Message Count" format 9999999999999999 heading 'Current Num. of|Outstanding|Mesg|in Queue'
col "Spilled Msgs" format 9999999999999999 heading 'Current Number of|Spilled|Messages|in Queue'
col "Total Messages" format 9999999999999999 heading 'Cumulative|Number|of Messages|in Queue'
col "Total Spilled Msgs" format 9999999999999999 heading 'Cum. Num|of Spilled|Messages|in Queue'
col "Expired_Msgs" heading 'Current Number of|Expired|Messages|in Queue'

SELECT queue_schema||'.'||queue_name Queue,
       startup_time, num_msgs "Message Count", spill_msgs "Spilled Msgs",
       cnum_msgs "Total Messages", cspill_msgs "Total Spilled Msgs",
       expired_msgs
FROM gv$buffered_queues;

Look for changing values in "Cumulative Number of Messages in Queue" and "Current Number of Outstanding Messages in Queue". If messages are not reaching the apply buffered queue, this might indicate a problem with the capture and/or propagation processes; refer to the related sections of this article for further troubleshooting information.

2. If messages are reaching the apply queue, check whether messages are being dequeued by the apply reader process using the query below:

col oldest_scn_num HEADING 'Oldest|SCN'
col apply_name HEADING 'Apply Name'
col apply_captured HEADING 'Captured or|User-Enqueued LCRs'
col process_name HEADING 'Process'
col state HEADING 'STATE'
col total_messages_dequeued HEADING 'Total Messages|Dequeued'
col total_messages_spilled Heading 'Total Messages|Spilled'
col sga_used HEADING 'SGA Used'
col oldest_transaction_id HEADING 'Oldest|Transaction'

SELECT ap.APPLY_NAME,
       DECODE(ap.APPLY_CAPTURED,
              'YES', 'Captured LCRS',
              'NO', 'User-Enqueued', 'UNKNOWN') APPLY_CAPTURED,
       SUBSTR(s.PROGRAM, INSTR(s.PROGRAM, '(')+1, 4) PROCESS_NAME,
       r.STATE,
       r.TOTAL_MESSAGES_DEQUEUED,
       r.TOTAL_MESSAGES_SPILLED,
       r.SGA_USED,
       oldest_scn_num,
       oldest_xidusn||'.'||oldest_xidslt||'.'||oldest_xidsqn oldest_transaction_id
FROM gV$STREAMS_APPLY_READER r, gV$SESSION s, DBA_APPLY ap
WHERE r.SID = s.SID
  AND r.SERIAL# = s.SERIAL#
  AND r.APPLY_NAME = ap.APPLY_NAME;

Ideally, "Total Messages Dequeued" should be increasing; otherwise watch "Total Messages Spilled" in case of spilling.

3. We might also get into the scenario where messages are reaching the apply queue but dequeue is not happening; the problem might be that the apply process has fallen behind. You can check apply process latency by querying the V$STREAMS_APPLY_COORDINATOR dynamic performance view. If apply process latency is high, then you might be able to improve performance by adjusting the setting of the parallelism apply process parameter.

Run the following queries to display the capture-to-apply latency using the V$STREAMS_APPLY_COORDINATOR view for a message for each apply process:

COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A17
COLUMN 'Latency in Seconds' FORMAT 999999
COLUMN 'Message Creation' FORMAT A17
COLUMN 'Apply Time' FORMAT A17
COLUMN HWM_MESSAGE_NUMBER HEADING 'Applied|Message|Number' FORMAT 999999
COLUMN APPLIED_MESSAGE_NUMBER HEADING 'Applied|Message|Number' FORMAT 999999

SELECT APPLY_NAME,
       (HWM_TIME-HWM_MESSAGE_CREATE_TIME)*86400 "Latency in Seconds",
       TO_CHAR(HWM_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') "Message Creation",
       TO_CHAR(HWM_TIME, 'HH24:MI:SS MM/DD/YY') "Apply Time",
       HWM_MESSAGE_NUMBER
FROM V$STREAMS_APPLY_COORDINATOR;

SELECT APPLY_NAME,
       (APPLY_TIME-APPLIED_MESSAGE_CREATE_TIME)*86400 "Latency in Seconds",
       TO_CHAR(APPLIED_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') "Message Creation",
       TO_CHAR(APPLY_TIME, 'HH24:MI:SS MM/DD/YY') "Apply Time",
       APPLIED_MESSAGE_NUMBER
FROM DBA_APPLY_PROGRESS;

4. Also check Apply Servers:

COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A22
COLUMN PROCESS_NAME HEADING 'Process Name' FORMAT A12
COLUMN STATE HEADING 'State' FORMAT A17
COLUMN TOTAL_ASSIGNED HEADING 'Total|Transactions|Assigned' FORMAT 99999999
COLUMN TOTAL_MESSAGES_APPLIED HEADING 'Total|Messages|Applied' FORMAT 99999999

SELECT r.APPLY_NAME,
       SUBSTR(s.PROGRAM, INSTR(s.PROGRAM, '(')+1, 4) PROCESS_NAME,
       r.STATE,
       r.TOTAL_ASSIGNED,
       r.TOTAL_MESSAGES_APPLIED
FROM V$STREAMS_APPLY_SERVER r, V$SESSION s
WHERE r.SID = s.SID
  AND r.SERIAL# = s.SERIAL#
ORDER BY r.APPLY_NAME, r.SERVER_ID;

You might see one of these apply server states when it is processing a transaction:

RECORD LOW-WATERMARK
ADD PARTITION
DROP PARTITION
EXECUTE TRANSACTION
WAIT COMMIT
WAIT DEPENDENCY
WAIT FOR NEXT CHUNK
TRANSACTION CLEANUP

or IDLE in case there is no transaction currently being applied.

5. Check if the AQ_TM_PROCESSES initialization parameter is explicitly set to 0 (zero) or is not set.

conn / as sysdba

show parameter aq_tm_processes

Refer to the following note for additional instructions:

Note 428441.1 "Warning Aq_tm_processes Is Set To 0" Message in Alert Log After Upgrade to 10.2.0.3 or 10.2.0.4
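If the parameter was explicitly set to 0, one remedial sketch is to remove the explicit setting so the queue time manager is auto-managed again (takes effect after an instance restart):

```sql
-- Remove the explicit AQ_TM_PROCESSES setting from the spfile.
ALTER SYSTEM RESET aq_tm_processes SCOPE=SPFILE SID='*';
```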

6. Check the apply PARALLELISM parameter by running the query below:

col APPLY_NAME format a30
col parameter format a20
col value format a20
break on apply_name

select * from dba_apply_parameters order by apply_name, parameter;

You may need to increase the parallelism parameter, for example:

BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'strm01_apply',
    parameter  => 'parallelism',
    value      => '2');
END;
/

7. Check for Apply Process Encountering Contention

An apply server encounters contention when it must wait for a resource that is being used by another session. Contention can result from logical dependencies. For example, when an apply server tries to apply a change to a row that a user has locked, the apply server must wait for the user. Contention can also result from physical dependencies. For example, interested transaction list (ITL) contention results when two transactions that are being applied, which might not be logically dependent, try to lock the same block on disk. In this case, one apply server locks rows in the block, and the other apply server must wait for access to the block, even though the second apply server is trying to lock different rows. See "Is the Apply Process Waiting for a Dependent Transaction?" for detailed information about ITL contention.

The following four wait states are possible for an apply server:

- Not waiting: The apply server is not encountering contention and is not waiting. No action is necessary in this case.

- Waiting for an event that is not related to another session:

An example of an event that is not related to another session is a log file sync event, where redo data must be flushed because of a commit or rollback. In these cases, nothing is written to the log initially because such waits are common and usually transient. If the apply server is still waiting for the same event after a certain interval of time, then the apply server writes a message to the alert log and apply process trace file. For example, an apply server a001 might write a message similar to the following:

A001: warning -- apply server 1, sid 26 waiting for event:
A001: [log file sync] ...

This output is written to the alert log at intervals until the problem is rectified.

- Waiting for an event that is related to a non-apply server session: The apply server writes a message to the alert log and apply process trace file immediately. For example, an apply server A001 might write a message similar to the following:

A001: warning -- apply server 1, sid 10 waiting on user sid 36 for event:
A001: [enq: TM - contention] name|mode=544d0003, object #=a078, table/partition=0

This output is written to the alert log at intervals until the problem is rectified.

- Waiting for another apply server session: This state can be caused by interested transaction list (ITL) contention, but it can also be caused by more serious issues, such as an apply handler that obtains conflicting locks. In this case, the apply server that is blocked by another apply server prints only once to the alert log and the trace file for the apply process, and the blocked apply server issues a rollback to the blocking apply server. When the blocking apply server rolls back, another message indicating that the apply server has been rolled back is printed to the log files, and the rolled-back transaction is reassigned by the coordinator process for the apply process.

For example, if apply server 1 of apply process A001 is blocked by apply server 2 of the same apply process, then the apply process writes the following messages to the log files:

A001: apply server 1 blocked on server 2
A001: [enq: TX - row lock contention] name|mode=54580006, usn<<16 | slot=1000e, sequence=1853
A001: apply server 2 rolled back

Check TOTAL_ROLLBACKS using the query below:

col apply_name HEADING 'Apply Name' format a22 wrap
col process HEADING 'Process' format a7
col RECEIVED HEADING 'Total|Txns|Received'
col ASSIGNED HEADING 'Total|Txns|Assigned'
col APPLIED HEADING 'Total|Txns|Applied'
col ERRORS HEADING 'Total|Txns|w/ Error'
col total_ignored HEADING 'Total|Txns|Ignored'
col total_rollbacks HEADING 'Total|Txns|Rollback'
col WAIT_DEPS HEADING 'Total|Txns|Wait_Deps'
col WAIT_COMMITS HEADING 'Total|Txns|Wait_Commits'
col STATE HEADING 'State' format a10 word

SELECT ap.APPLY_NAME,
       SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS,
       c.STATE,
       c.TOTAL_RECEIVED RECEIVED,
       c.TOTAL_ASSIGNED ASSIGNED,
       c.TOTAL_APPLIED APPLIED,
       c.TOTAL_ERRORS ERRORS,
       c.total_ignored,
       c.total_rollbacks,
       c.TOTAL_WAIT_DEPS WAIT_DEPS,
       c.TOTAL_WAIT_COMMITS WAIT_COMMITS
  FROM gV$STREAMS_APPLY_COORDINATOR c, gV$SESSION s, DBA_APPLY ap
 WHERE c.SID = s.SID
   AND c.SERIAL# = s.SERIAL#
   AND c.APPLY_NAME = ap.APPLY_NAME;

8. Check for the Apply Process waiting for dependent transactions (applies only when the Apply PARALLELISM parameter is greater than 1). Use the same query as in step 7 and examine the WAIT_DEPS column.

To avoid the problem in the future, perform one of the following actions:

- Increase the number of ITLs available. You can do so by changing the INITRANS setting for the table using the ALTER TABLE statement.

- Set the commit_serialization parameter to none for the apply process.

- Set the parallelism apply process parameter to 1 for the apply process.
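The three actions above can be sketched as follows; this is a minimal illustration, and the table name (hr.employees) and apply name (strm01_apply) are placeholders:

```sql
-- Assumption: hr.employees and strm01_apply are placeholder names.
-- 1. Increase the ITLs available in each block of the table.
ALTER TABLE hr.employees INITRANS 10;

-- 2. Relax commit ordering for the apply process.
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'strm01_apply',
    parameter  => 'commit_serialization',
    value      => 'none');
END;
/

-- 3. Alternatively, serialize the apply process entirely.
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'strm01_apply',
    parameter  => 'parallelism',
    value      => '1');
END;
/
```

Note that setting parallelism to 1 trades throughput for the elimination of inter-server contention, so prefer the INITRANS fix when the contention is purely ITL-related.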

9. Check for poor Apply performance for certain transactions:

The following query displays information about the transactions being applied by each apply server:

SET LINE 120
COLUMN APPLY_NAME HEADING 'Apply Name' FORMAT A20
COLUMN SERVER_ID HEADING 'Apply Server ID' FORMAT 99999999
COLUMN STATE HEADING 'Apply Server State' FORMAT A20
COLUMN APPLIED_MESSAGE_NUMBER HEADING 'Applied Message|Number' FORMAT 99999999
COLUMN MESSAGE_SEQUENCE HEADING 'Message Sequence|Number' FORMAT 99999999

SELECT APPLY_NAME, SERVER_ID, STATE,
       APPLIED_MESSAGE_NUMBER, MESSAGE_SEQUENCE
  FROM V$STREAMS_APPLY_SERVER
 ORDER BY APPLY_NAME, SERVER_ID;

If you run this query repeatedly, then over time the apply server state, applied message number, and message sequence number should continue to change for each apply server as it applies transactions. If these values do not change for one or more apply servers, then the apply server might not be performing well. In this case, you should make sure that, for each table to which the apply process applies changes, every key column has an index.

Use the following queries to determine the object involved in the poorly performing Apply transaction:

COLUMN OPERATION HEADING 'Operation' FORMAT A20
COLUMN OPTIONS HEADING 'Options' FORMAT A20
COLUMN OBJECT_OWNER HEADING 'Object|Owner' FORMAT A10
COLUMN OBJECT_NAME HEADING 'Object|Name' FORMAT A10
COLUMN COST HEADING 'Cost' FORMAT 99999999

SELECT p.OPERATION, p.OPTIONS, p.OBJECT_OWNER, p.OBJECT_NAME, p.COST
  FROM V$SQL_PLAN p, V$SESSION s, V$STREAMS_APPLY_SERVER a
 WHERE a.APPLY_NAME = '<apply name>'
   AND a.SERVER_ID = <n>
   AND s.SID = a.SID
   AND p.HASH_VALUE = s.SQL_HASH_VALUE;

SELECT t.SQL_TEXT
  FROM V$SESSION s, V$SQLTEXT t, V$STREAMS_APPLY_SERVER a
 WHERE a.APPLY_NAME = '<apply name>'
   AND a.SERVER_ID = <n>
   AND s.SID = a.SID
   AND s.SQL_ADDRESS = t.ADDRESS
   AND s.SQL_HASH_VALUE = t.HASH_VALUE
 ORDER BY PIECE;

P.S. Fill in "a.APPLY_NAME" and "a.SERVER_ID" in the WHERE clause with the appropriate information from the previous query.

This query returns the operation currently being performed by the specified apply server. The query also returns the owner and name of the table on which the operation is being performed and the cost of the operation. If the results show TABLE ACCESS in the OPERATION column with FULL in the OPTIONS column, then the operation is causing full table scans.

You might increase Apply performance in such a scenario by:

- Creating an index for each key column in the table.
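As an illustrative sketch of that remedy (the table, column, and index names below are hypothetical), an index on a key column used by the apply process to identify rows can be created as:

```sql
-- Assumption: hr.employees with key column employee_id is a placeholder.
-- Create one index per key column the apply process uses to locate rows,
-- so row lookups during apply avoid full table scans.
CREATE INDEX hr.employees_empid_ix ON hr.employees (employee_id);
```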
10. Refer to Note 335516.1 "Streams Performance Recommendations", section "Apply Recommendations", for further suggestions on how to speed up the Streams Apply process.

APPLY DML AND ERROR HANDLERS TROUBLESHOOTING & SAMPLE

1. Check for Apply DML / Error Handlers defined at the destination database:

col object format a35 wrap
col user_procedure HEADING 'User |Procedure' format a40 wrap
col dblink Heading 'Apply|DBLink' format a15 wrap
col apply_database_link HEAD 'Database Link|for Remote|Apply' format a25 wrap
col operation_name HEADING 'Operation|Name' format a13
col typ Heading 'Handler|Type' format a5 wrap
col lob_assemble HEADING 'Assemble|Lob?' format a8
col apply_name HEADING 'Apply Process|Name' FORMAT A15

select object_owner||'.'||object_name OBJECT,
       operation_name,
       user_procedure, apply_name,
       decode(error_handler,'Y','Error','N','DML','UNKNOWN') TYP,
       decode(assemble_lobs,'Y','Yes','N','No','UNKNOWN') lob_assemble,
       apply_database_link
  from dba_apply_dml_handlers
 order by object_owner, object_name, apply_name;

P.S. (i) If "Apply Process Name" is NULL in the result of the above query, it means that the handler is a general handler that runs for all of the local apply processes.

(ii) "Handler Type" indicates whether the Apply Handler runs for every "DML" change applied (DML Handler) or only when an "ERROR" happens (Error Handler).

DML and Error handlers are customized according to application needs and the data model, so if an apply process is not behaving as expected, check the handler PL/SQL procedures used by the apply process and correct any flaws. You might need to modify a handler procedure or remove it to correct an apply problem.
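For instance, a handler can be removed by passing NULL as the user procedure. A hedged sketch, in which the object and operation names are placeholders:

```sql
-- Assumption: hr.employees and the UPDATE operation are placeholders.
-- Passing user_procedure => NULL unsets the handler registered for this
-- object/operation combination.
BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'hr.employees',
    object_type    => 'TABLE',
    operation_name => 'UPDATE',
    error_handler  => FALSE,
    user_procedure => NULL);
END;
/
```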

2. Common errors when DML / Error handlers are implemented:

a. ORA-1435 user not found

If you use schema name transformation in any way, you might get this error when the source database schema does not exist at the destination database. Say you have tables in schema 'REP1' to be replicated to another database where the schema name is 'REP2'; you will get the ORA-1435 error if the schema 'REP1' does not exist at the destination database. The schema name and its object definitions need to exist at the destination site, but NO rows or data are required in this schema. The workaround for this problem is to create the structure definition of the original schema and objects. This can generally be done with a schema-level export from the source site and a schema-level import with ROWS=NO into the target site.

b. ORA-6550 line <n>, column <nn>:

One of the most common reasons for receiving this error in a DML HANDLER or TRANSFORMATION is privileges. Typically, this error causes the apply process to ABORT with no errors in the DBA_APPLY_ERROR view. However, the trace file for the apply coordinator will report the error. If the specified apply user does not have explicit privilege to execute the dml_handler procedure or the transformation function, you will receive errors similar to the following in the apply trace files:

ORA-12801 in STREAMS process
ORA-12801: error signaled in parallel query server P000
ORA-06550: line 1, column 15:
PLS-00201: identifier 'STRMADMIN.USER_FUNCTION_NAME' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored

In this example, the apply user does not have execute privilege on the "USER_FUNCTION_NAME" function in the STRMADMIN schema.

P.S. Check the "APPLY_USER" column in the DBA_APPLY view to see which schema is being used to apply the changes.
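Following the trace example above, the fix is a direct grant to the apply user. The function and user names here are the hypothetical ones from that trace; substitute the APPLY_USER reported by DBA_APPLY:

```sql
-- Assumption: apply_user is whatever DBA_APPLY.APPLY_USER reports, and
-- strmadmin.user_function_name is the handler/transformation being called.
GRANT EXECUTE ON strmadmin.user_function_name TO apply_user;
```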

c. ORA-23605 Invalid Value for Streams Parameter

When calling SYS.LCR$ member functions, this error may be raised if the values of the parameters do not match the LCR, for example adding an OLD column value to an INSERT LCR, or setting the value of a LOB column to a number. This error can occur if an incorrect value is passed for a Streams parameter, if an INSERT LCR contains 'OLD' values, or if a DELETE LCR contains 'NEW' values. Verify that the correct parameter type ('OLD','NEW') is specified for the LCR type (INSERT/DELETE/UPDATE).

d. ORA-23607 Invalid Column

This error is raised by SYS.LCR$* member functions when the value of the column_name parameter does not match the name of any of the columns in the LCR. Check the column names in the LCR. This error is encountered if:

- You attempt to delete a column from an LCR and the LCR does not have the column (typically occurs on UPDATEs);

- You attempt to rename a column that does not exist in the LCR;

- In an UPDATE statement, you use the GET_VALUE or GET_VALUES member functions for NEW values in a dml_handler or transformation and explicitly set the USE_OLD parameter to 'N' (default is 'Y').
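As a hedged illustration of that last case, the sketch below shows a hypothetical DML handler (the procedure, schema, and column names are assumptions, not from this note) that keeps the default USE_OLD behavior:

```sql
-- Hypothetical DML handler; strmadmin.emp_dml_handler and SALARY are
-- placeholder names.
CREATE OR REPLACE PROCEDURE strmadmin.emp_dml_handler(in_any IN SYS.AnyData)
IS
  lcr     SYS.LCR$_ROW_RECORD;
  rc      PLS_INTEGER;
  sal_val SYS.AnyData;
BEGIN
  rc := in_any.GetObject(lcr);
  -- use_old => 'Y' (the default): if the NEW value is absent from an
  -- UPDATE LCR, the OLD value is returned instead of raising ORA-23607.
  sal_val := lcr.GET_VALUE('NEW', 'SALARY', 'Y');
  lcr.EXECUTE(TRUE);
END;
/
```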

e. ORA-24031 Invalid parameter, should be non-NULL

This error can occur when a NULL value is passed to an LCR method instead of an ANYDATA containing NULL.

Wrong:
new_lcr.add_column('OLD','LANGUAGE',NULL);

Correct:
new_lcr.add_column('OLD','LANGUAGE',sys.AnyData.ConvertVarchar2(NULL));

f. ORA-26689 Column Type Mismatch

Generally this is some sort of metadata mismatch. Some possibilities:

- "column name is valid but the types do not match"

The source type does not equal the target type. Confirm that the conversion utility data type matches the column data type in the handler / transformation. For example, if the column is specified as VARCHAR2, then use sys.anydata.convertvarchar2 to convert the data from type ANY to VARCHAR2. Confirm that the datatype of the column matches between the LCR and the target table.

- "lcr contains extra columns"

Confirm that all of the columns in the LCR are defined at the destination site. If the destination table does not have all of the columns specified in the LCR, eliminate from the LCR any columns that should not be applied at the destination table. Check that column name casing matches the database; generally column names are upper case.
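Dropping an extra column from the LCR can be sketched as a custom transformation; the function and column names below are hypothetical placeholders:

```sql
-- Hypothetical transformation removing a column that does not exist at
-- the destination table; 'LEGACY_COL' is a placeholder.
CREATE OR REPLACE FUNCTION strmadmin.drop_legacy_col(in_any IN SYS.AnyData)
RETURN SYS.AnyData
IS
  lcr SYS.LCR$_ROW_RECORD;
  rc  PLS_INTEGER;
BEGIN
  rc := in_any.GetObject(lcr);
  -- '*' removes the column from both the OLD and NEW value lists.
  lcr.DELETE_COLUMN('LEGACY_COL', '*');
  RETURN SYS.AnyData.ConvertObject(lcr);
END;
/
```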

3. For DML Handler samples, please refer to the following Metalink Notes:

Note.265867.1 Example Streams Apply DML Handler Showing the Adding of Columns to the Apply LCR
Note.302018.1 Example Streams Apply DML Handler Showing LCR Column Rename
Note.265481.1 Example Streams Apply DML Handler Showing Rows and Columns Filter from the Apply Process
Note.234094.1 Usage and Restrictions of Streams Apply Handlers

4. For Error Handler samples, please refer to the following Metalink Notes:

Note.401275.1 Handling Apply Insert And Delete Conflicts In A Streams Environment - Error Handlers
Note.387829.1 Auto Correction Example for Streams using Error Handlers

APPLY MEMORY CONTENTION TROUBLESHOOTING

<to be completed>

APPLY PERFORMANCE TROUBLESHOOTING

<to be completed>
APPLY IN HETEROGENEOUS ENVIRONMENTS

In a heterogeneous environment where the destination database is a non-Oracle database, the Streams Apply process will exist at the source Oracle database and it will apply the changes through an Oracle Transparent Gateway to the destination non-Oracle database.

1. Configuration checking:

a. If you use substitute key columns for any of the tables at the non-Oracle database, then make sure to specify the database link to the non-Oracle database when you run the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package.

b. If you use a DML handler to process row LCRs for any of the tables at the non-Oracle database, then specify the database link to the non-Oracle database when you run the SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package.

c. If you want to use a message handler to process user-enqueued messages for a non-Oracle database, then, when you run the CREATE_APPLY procedure in the DBMS_APPLY_ADM package, specify the database link to the non-Oracle database using the apply_database_link parameter, and specify the message handler procedure using the message_handler parameter.

d. You must set the parallelism apply process parameter to 1, the default setting, when an apply process is applying changes to a non-Oracle database. Currently, parallel apply to non-Oracle databases is not supported. However, you can use multiple apply processes to apply changes to a non-Oracle database.
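Point (b) above can be sketched as follows; the object, procedure, and database link names are placeholders, not values from this note:

```sql
-- Assumption: mysite.example.com is a placeholder database link to the
-- non-Oracle database; object and procedure names are hypothetical.
BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name         => 'hr.employees',
    object_type         => 'TABLE',
    operation_name      => 'INSERT',
    error_handler       => FALSE,
    user_procedure      => 'strmadmin.emp_dml_handler',
    apply_database_link => 'mysite.example.com');
END;
/
```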

2. You can refer to most of the Apply sections in this article to troubleshoot the heterogeneous Streams Apply process. All the steps apply, except for Error and Conflict Handlers, which currently are not supported. If an apply error occurs, then the transaction containing the LCR that caused the error is moved into the error queue in the Oracle database.

Also please refer to the following Metalink notes for additional information:

Note.313766.1 Streams Heterogeneous Apply To SQL*Server
Note.377857.1 Apply process aborts with ORA-28584 setting up streams replication to MySQL
Note.436112.1 'ORA-28550 : Pass-Through SQL: Cursor Not Found' Error When Using Oracle Streams Heterogenous Apply to Sybase
Note.466882.1 Streams Apply Process Aborts On Decimal Values Using Tg4sybase - Error ORA-28500

APPLY SPECIFIC STATIC & DYNAMIC VIEWS

As of the current RDBMS version, these are the Streams Apply specific views which can be used to retrieve useful Streams Apply information:

Static Views

ALL_APPLY

ALL_APPLY_CONFLICT_COLUMNS

ALL_APPLY_DML_HANDLERS

ALL_APPLY_ENQUEUE
ALL_APPLY_ERROR

ALL_APPLY_EXECUTE

ALL_APPLY_KEY_COLUMNS

ALL_APPLY_PARAMETERS

ALL_APPLY_PROGRESS

ALL_APPLY_TABLE_COLUMNS

DBA_APPLY_TABLE_COLUMNS

DBA_APPLY_PROGRESS

DBA_APPLY_PARAMETERS

DBA_APPLY_KEY_COLUMNS

DBA_APPLY_EXECUTE

DBA_APPLY_ERROR

DBA_APPLY_ENQUEUE

DBA_APPLY_CONFLICT_COLUMNS

DBA_APPLY_DML_HANDLERS

DBA_APPLY_INSTANTIATED_GLOBAL

DBA_APPLY_INSTANTIATED_OBJECTS

DBA_APPLY_INSTANTIATED_SCHEMAS

DBA_APPLY_OBJECT_DEPENDENCIES

DBA_APPLY_SPILL_TXN

DBA_APPLY_VALUE_DEPENDENCIES

DBA_HIST_STREAMS_APPLY_SUM

Dynamic Views

V$STREAMS_APPLY_COORDINATOR

V$STREAMS_APPLY_READER

V$STREAMS_APPLY_SERVER

GV$STREAMS_APPLY_COORDINATOR
GV$STREAMS_APPLY_READER

GV$STREAMS_APPLY_SERVER

Troubleshooting Rules and Rule-Based Transformations Problems

If a capture process, a propagation, an apply process, or a messaging client is behaving in an unexpected way, then the problem might be that the rules in either the positive rule set or the negative rule set for the Streams client are not configured properly. For example, if you expect a capture process to capture changes made to a particular table, but the capture process is not capturing these changes, then the cause might be that the rules in the rule sets used by the capture process do not instruct the capture process to capture changes to the table.

Spelling Counts!

Rules can be thought of as a SQL WHERE clause against which each message is evaluated. If the message does not meet the rule condition specification, the rule evaluation returns FALSE and the message is excluded from further handling by the particular Streams process. For example, if you configure Streams to capture changes to the 'SOCTT.EMP' table, changes made to the actual table 'SCOTT.EMP' will not be captured. Each expression included in the rule_condition must evaluate to TRUE in order for the rule to evaluate to TRUE.

Eliminate duplicate/overlapping rules

Duplicate rules should be eliminated to reduce confusion and avoid unnecessary processing time. Two rules with the same rule condition, one with a transformation specified and a similar rule without a transformation, is a good example of the confusion that can be caused by duplicate rules. When the rule set is processed, there is no guarantee as to the ordering of the rule processing. These duplicate rules can lead to inconsistent behavior.

Avoid including tables with unsupported data types

When using SCHEMA or GLOBAL rules, be sure to modify the rules so that no objects with unsupported data types are included for Streams.

Avoid complex rules wherever possible

Avoid eliminating tables by pattern (e.g. :dml.get_object_name like 'DR%') or using a NOT operator, as this forces a full rule evaluation for the rule. It is frequently much faster to explicitly name the desired table, even if it results in multiple rules.

If you are configuring a propagation that takes ALL changes from the source queue to the destination queue (i.e. no selectivity requirements), you can remove the rule set from the propagation definition. This eliminates the need to do ANY rule evaluation and results in higher propagation throughput.
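Removing the rule sets from such a pass-everything propagation can be sketched as below; the propagation name is a placeholder:

```sql
-- Assumption: prop_dbs1_to_dbs2 is a placeholder propagation name.
-- With no rule set attached, every message in the source queue is
-- propagated without any rule evaluation.
BEGIN
  DBMS_PROPAGATION_ADM.ALTER_PROPAGATION(
    propagation_name         => 'prop_dbs1_to_dbs2',
    remove_rule_set          => TRUE,
    remove_negative_rule_set => TRUE);
END;
/
```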

Use views to look at rule sets and rules

The DBA_STREAMS_TABLE_RULES view shows the original configuration of the rule and rule set. Manual modifications can be performed using the DBMS_RULE_ADM package. Be sure to use the DBA_RULE_SET_RULES view to obtain the full set of rules participating in a rule set. To get the rule condition of each rule, use the DBA_RULES view.

select rsr.rule_set_name RULE_SET,
       rsr.rule_owner ||'.'|| rsr.rule_name RULE_NAME,
       r.rule_condition CONDITION
  from dba_rule_set_rules rsr, dba_rules r
 where rsr.rule_name = r.rule_name
   and rsr.rule_owner = r.rule_owner
 order by rsr.rule_set_owner, rsr.rule_set_name;

If that query returns any such rules, then the rules returned might be causing the capture process to discard changes to the table. If that query returns no rules, then make sure there are one or more table rules in the positive rule set for the capture process that evaluate to TRUE for the table. "Displaying the Rules in the Positive Rule Set for a Streams Client" contains an example of a query that shows rules in a positive rule set.

Resolving Problems with Rules

It is possible that the Streams capture process, propagation, apply process, or messaging client is not behaving as expected because one or more rules should be altered or removed from a rule set.

If you have the correct rules, and the relevant messages are still filtered out by a Streams capture process, propagation, or apply process, then check your trace files and alert log for a warning about a missing "multi-version data dictionary", which is a Streams data dictionary. If you find such messages, and you are using custom capture process rules or reusing existing capture process rules for a new destination database, then make sure you run the appropriate procedure to prepare for instantiation:

PREPARE_TABLE_INSTANTIATION

PREPARE_SCHEMA_INSTANTIATION

PREPARE_GLOBAL_INSTANTIATION
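These procedures live in the DBMS_CAPTURE_ADM package. A minimal sketch at each level, run at the source database; the schema and table names are placeholders:

```sql
-- Assumption: hr / hr.employees are placeholder names. Each call
-- populates the Streams data dictionary for the corresponding scope.
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name => 'hr.employees');
  DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(
    schema_name => 'hr');
  DBMS_CAPTURE_ADM.PREPARE_GLOBAL_INSTANTIATION;
END;
/
```

In practice you would call only the one procedure that matches the scope of your capture rules.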
Are Declarative Rule-Based Transformations Configured Properly?

A declarative rule-based transformation is a rule-based transformation that covers one of a common set of transformation scenarios for row LCRs. Declarative rule-based transformations are run internally without using PL/SQL. If a Streams capture process, propagation, apply process, or messaging client is not behaving as expected, then check the declarative rule-based transformations specified for the rules used by the Streams client and correct any mistakes.

The most common problems with declarative rule-based transformations are:

- The declarative rule-based transformation is specified for a table or involves columns in a table, but the schema either was not specified or was incorrectly specified when the transformation was created. If the schema is not correct in a declarative rule-based transformation, then the transformation will not be run on the appropriate LCRs. You should specify the owning schema for a table when you create a declarative rule-based transformation. If the schema is not specified when a declarative rule-based transformation is created, then the schema of the user who creates the transformation is used by default.

If the schema is not correct for a declarative rule-based transformation, then, to correct the problem, remove the transformation and re-create it, specifying the correct schema for each table.

- If more than one declarative rule-based transformation is specified for a particular rule, then make sure the ordering is correct for execution of these transformations. Incorrect ordering of declarative rule-based transformations can result in errors or inconsistent data. If the ordering is not correct for the declarative rule-based transformations specified on a single rule, then, to correct the problem, remove the transformations and re-create them with the correct ordering.

Are the Custom Rule-Based Transformations Configured Properly?

A custom rule-based transformation is any modification by a user-defined function to a message when a rule evaluates to TRUE. A custom rule-based transformation is specified in the action context of a rule, and these action contexts contain a name-value pair with STREAMS$_TRANSFORM_FUNCTION for the name and a user-created function name for the value. This user-created function performs the transformation. If the user-created function contains any flaws, then unexpected behavior can result.

If a Streams capture process, propagation, apply process, or messaging client is not behaving as expected, then check the custom rule-based transformation functions specified for the rules used by the Streams client and correct any flaws. You can find the names of these functions by querying the DBA_STREAMS_TRANSFORM_FUNCTION data dictionary view. You might need to modify a transformation function or remove a custom rule-based transformation to correct the problem. Also, make sure the name of the function is spelled correctly when you specify the transformation for a rule.
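That dictionary lookup can be sketched as the following query (column formatting is illustrative):

```sql
-- Lists each rule that has a custom transformation attached and the
-- user-created function registered under STREAMS$_TRANSFORM_FUNCTION.
COLUMN rule_name FORMAT A25
COLUMN transform_function_name FORMAT A40

SELECT rule_owner, rule_name, transform_function_name
  FROM DBA_STREAMS_TRANSFORM_FUNCTION
 ORDER BY rule_owner, rule_name;
```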

An error caused by a custom rule-based transformation might cause a capture process, propagation, apply process, or messaging client to abort. In this case, you might need to correct the transformation before the Streams client can be restarted or invoked.

Rule evaluation is done before a custom rule-based transformation. For example, if you have a transformation that changes the name of a table from emps to employees, then make sure each rule using the transformation specifies the table name emps, rather than employees, in its rule condition.
Are Incorrectly Transformed LCRs in the Error Queue?

In some cases, incorrectly transformed LCRs might have been moved to the error queue by an apply process. When this occurs, you should examine the transaction in the error queue to analyze the feasibility of re-executing the transaction successfully. If an abnormality is found in the transaction, then you might be able to configure a DML handler to correct the problem. The DML handler will run when you re-execute the error transaction. When a DML handler is used to correct a problem in an error transaction, the apply process that uses the DML handler should be stopped to prevent the DML handler from acting on LCRs that are not involved with the error transaction. After successful re-execution, if the DML handler is no longer needed, then remove it. Also, correct the rule-based transformation to avoid future errors.
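Re-executing an error transaction can be sketched as follows; the transaction identifier below is a placeholder taken from DBA_APPLY_ERROR:

```sql
-- Find the failed transaction, then re-execute it.
-- '1.17.2472' is a placeholder LOCAL_TRANSACTION_ID.
SELECT local_transaction_id, error_message
  FROM DBA_APPLY_ERROR;

BEGIN
  DBMS_APPLY_ADM.EXECUTE_ERROR(
    local_transaction_id => '1.17.2472',
    execute_as_user      => FALSE);
END;
/
```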

The rule sets used by all Streams clients, including capture processes and propagations, determine the behavior of these Streams clients. Therefore, make sure the rule sets for any capture processes or propagations on which an apply process depends contain the correct rules. If the rules for these Streams clients are not configured properly, then the apply process queue might never receive the appropriate messages. Also, a message traveling through a stream is the composition of all of the transformations done along the path. For example, if a capture process uses subset rules and performs row migration during capture of a message, and a propagation uses a rule-based transformation on the message to change the table name, then, when the message reaches an apply process, the apply process rules must account for these transformations.

Rules related views/tables

V$RULE
GV$RULE

V$RULE_SET

GV$RULE_SET

V$RULE_SET_AGGREGATE_STATS

GV$RULE_SET_AGGREGATE_STATS

DBA_STREAMS_GLOBAL_RULES

DBA_STREAMS_MESSAGE_RULES

DBA_STREAMS_RULES

DBA_STREAMS_SCHEMA_RULES

DBA_STREAMS_TABLE_RULES

DBA_RULE_SET_RULES

DBA_RULE_SETS

DBA_RULES

DBA_HIST_RULE_SET

Related Products

Oracle Database Products > Oracle Database Suite > Oracle Database > Oracle Database - Enterprise Edition > Streams (Replication and Messaging)
Auto Correction Example for Streams using Error Handlers (Doc ID 387829.1)

In this Document
  Purpose
  Requirements
  Configuring
  Instructions
  Sample Code
  Sample Output
  References

Document Details
  Type: SAMPLE CODE
  Status: PUBLISHED
  Last Major Update: 22-Apr-2013
  Last Update: 22-Apr-2013

Applies to:

Oracle Database - Enterprise Edition - Version 10.1.0.4 to 11.2.0.2 [Release 10.1 to 11.2]
Information in this document applies to any platform.

Purpose

This is a complete example of configuring Streams and extending it for auto correction.

To prepare for running this sql script, configure two (2) Oracle databases (9iR2 or 10g). The source database (where changes are captured) must be Enterprise Edition and enabled for ARCHIVE LOGGING.

Requirements

Applicable with SQL, SQL*Plus and iSQL*Plus

Configuring

Modify the init.ora or spfiles of both databases to configure the size of the streams pool (10gR1 only) and enable job queue processing.

The job_queue_processes parameter should be set to a positive non-zero integer. Typically setting job_queue_processes=4 is sufficient.

Create the streams administrator user STRADM.
Grant DBA privilege to the STRADM user.
Create tnsnames.ora entries at the source database for dbs1.net and dbs2.net, for the source and target databases respectively. Alternatively, modify the script so that dbs1.net and dbs2.net reflect the correct service names for each database.

The script assumes that the HR sample schema exists.
Be sure to review the script before running it. There are comments embedded in the script that describe its actions.

Note: This script modifies the GLOBAL_NAME of each database to match dbs1.net and dbs2.net. If this is not desired, please modify the script accordingly.
Instructions

See below.

Caution

This sample code is provided for educational purposes only, and is not supported by Oracle Support. It has been tested internally; however, we do not guarantee that it will work for you. Ensure that you run it in your test environment before using.

Sample Code

SQL SCRIPT
===========

Rem
Rem Copyright (c) 2002, 2004, Oracle. All rights reserved.
Rem
Rem NAME
Rem Best Practices Examples - Auto Correction
Rem
Rem DESCRIPTION
Rem Auto-correction with control at source
Rem control table is created and replicated to all sites
Rem Setting auto_correct to 'Y' handles in case of error during apply
Rem
Rem NOTES
Rem We recommend that PKs are not modified when autocorrection is
Rem being used.
Rem

SET ECHO ON
SET FEEDBACK 1
SET NUMWIDTH 10
SET LINESIZE 80
SET TRIMSPOOL ON
SET TAB OFF
SET PAGESIZE 100

variable site1 varchar2(80);
variable site2 varchar2(80);
variable scn number;

set echo on

-- Set database name and Create database links

connect sys/change_on_install@dbs1.net as sysdba
ALTER DATABASE RENAME GLOBAL_NAME TO DBS1.NET;
connect sys/change_on_install@dbs2.net as sysdba
ALTER DATABASE RENAME GLOBAL_NAME TO DBS2.NET;

connect sys/change_on_install@dbs1.net as sysdba
create public database link dbs2.net using 'dbs2.net';

connect sys/change_on_install@dbs2.net as sysdba
create public database link dbs1.net using 'dbs1.net';

connect stradm/stradm@dbs1.net;
create database link dbs2.net connect to stradm identified by stradm
using 'dbs2.net';

connect stradm/stradm@dbs2.net;
create database link dbs1.net connect to stradm identified by stradm
using 'dbs1.net';

connect hr/hr@dbs1.net;
create database link dbs2.net connect to hr identified by hr
using 'dbs2.net';

connect hr/hr@dbs2.net;
create database link dbs1.net connect to hr identified by hr
using 'dbs1.net';

connect stradm/stradm@dbs1.net;
execute select global_name into :site1 from global_name;
execute select global_name into :site2 from global_name@dbs2.net;
print site1;
print site2;

------------------------------------------------------
-- - Create Streams Queues
-- - Verify if they are secure Queues.
------------------------------------------------------

connect stradm/stradm@dbs2.net;
exec dbms_streams_adm.set_up_queue( ) ;
select owner , queue_table , secure from dba_queue_tables
where queue_table = 'STREAMS_QUEUE_TABLE' order by
owner , queue_table;

connect stradm/stradm@dbs1.net;
exec dbms_streams_adm.set_up_queue( ) ;
select owner , queue_table , secure from dba_queue_tables
where queue_table = 'STREAMS_QUEUE_TABLE' order by
owner , queue_table;

------------------------------------------------------
-- Create propagation rules to dbs2.net
------------------------------------------------------

connect stradm/stradm@dbs1.net;
begin
dbms_streams_adm.add_schema_propagation_rules(
schema_name => 'hr',
streams_name => 'dbs1net_to_dbs2net',
source_queue_name => 'stradm.streams_queue',
destination_queue_name =>
'stradm.streams_queue@'||:site2,
include_dml => true,
include_ddl => true,
source_database => :site1);
end;
/

select PROPAGATION_NAME, SOURCE_QUEUE_NAME, DESTINATION_QUEUE_NAME
from all_propagation order by propagation_name;

connect hr/hr@dbs1.net
-------------------------------------------------------------------------
-- Create control table for control at source
-- auto_correct is set to 'Y' if error needs to
-- be handled
-------------------------------------------------------------------------

CREATE TABLE control_table (sname varchar2(30),
                            oname varchar2(30),
                            auto_correct varchar2(2));

-- Add supplemental logging


alter table control_table add SUPPLEMENTAL LOG GROUP
control_table_log_group (sname,oname,auto_correct);

insert into regions values ( 1002,'YL');


insert into regions values ( 1003,'YM');
commit;

connect hr/hr@dbs2.net
-------------------------------------------------------------------------
-- Create control table for control at source at target
database
-- auto_correct is set to 'Y' if error needs to
-- be handled
-------------------------------------------------------------------------
CREATE TABLE control_table (sname varchar2(30),
                            oname varchar2(30),
                            auto_correct varchar2(2));

alter table control_table add SUPPLEMENTAL LOG GROUP
control_table_log_group (sname,oname,auto_correct);

insert into regions values ( 1002,'YL');


insert into regions values ( 1003,'YM');
commit;
------------------------------------------------------
-- - Create Capture Process at dbs1.net
-- - and set table inst scn for hr.regions
------------------------------------------------------

connect stradm/stradm@dbs1.net
begin
dbms_streams_adm.add_table_rules(
table_name => 'hr.regions',
streams_type => 'capture',
streams_name => 'capture_hr',
queue_name => 'stradm.streams_queue',
include_dml => true,
include_ddl => true);
end;
/

begin
dbms_streams_adm.add_table_rules(
table_name => 'hr.control_table',
streams_type => 'capture',
streams_name => 'capture_hr',
queue_name => 'stradm.streams_queue',
include_dml => true,
include_ddl => true);
end;
/
select capture_name, queue_name, queue_owner, status
from dba_capture order by 1,2;

connect stradm/stradm@dbs1.net
exec :scn:= dbms_flashback.get_system_change_number;

------------------------------------------------------
-- - Create apply @ dbs2.net
------------------------------------------------------
connect stradm/stradm@dbs2.net
begin
dbms_streams_adm.add_table_rules(
table_name => 'hr.regions',
streams_type => 'apply',
streams_name => 'apply_from_dbs1net',
queue_name => 'stradm.streams_queue',
include_dml => true,
include_ddl => true,
source_database => :site1);
end;
/

begin
dbms_streams_adm.add_table_rules(
table_name => 'hr.control_table',
streams_type => 'apply',
streams_name => 'apply_from_dbs1net',
queue_name => 'stradm.streams_queue',
include_dml => true,
include_ddl => true,
source_database => :site1);
end;
/

-- Set table instantiation


begin
dbms_apply_adm.set_table_instantiation_scn
('hr.regions',:site1,:scn);
dbms_apply_adm.set_table_instantiation_scn
('hr.control_table',:site1,:scn);
end;
/

------------------------------------------------------
-- - Start apply at dbs2.net and capture at dbs1.net
------------------------------------------------------
connect stradm/stradm@dbs2.net
begin
dbms_apply_adm.set_parameter(
apply_name => 'apply_from_dbs1net',
parameter => 'disable_on_error',
value => 'N');
end;
/
begin
dbms_apply_adm.start_apply(
apply_name => 'apply_from_dbs1net');
end;
/

connect stradm/stradm@dbs1.net
begin
dbms_capture_adm.start_capture(
capture_name => 'capture_hr');
end;
/

connect stradm/stradm@dbs2.net
create sequence reg_exception_s start with 9000;
-------------------------------------------------------------------------
-- Create error handler package. When an error is raised
-- and control is set to 'Y' at source, the error is handled
-------------------------------------------------------------------------

CREATE OR REPLACE PACKAGE err_audpkg AS
type emsg_array IS table of VARCHAR2(2000) index by
BINARY_INTEGER;
PROCEDURE error_hand
(
lcr_anydata in sys.anydata ,
err_arr_len in number ,
err_num in dbms_utility.number_array,
err_msg in emsg_array
);
END err_audpkg ;
/

CREATE OR REPLACE PACKAGE body err_audpkg AS


PROCEDURE error_hand (
lcr_anydata in sys.anydata ,
err_arr_len in number ,
err_num in dbms_utility.number_array,
err_msg in emsg_array )
IS
lcr sys.lcr$_row_record;
ret pls_integer;
vc varchar2(25) ;
ov2 sys.lcr$_row_list;
auto_correct_mode varchar2(20);
cmd_type varchar2(200);
r_id number;
cnt number;
tmp sys.anydata;
BEGIN

-- Check if auto-correct is Enabled.


select upper(auto_correct) into auto_correct_mode
from hr.control_table where oname = 'REGIONS' and sname
= 'HR';

-- try to access/manipulate the lcr


ret := lcr_anydata.getObject(lcr);

-- Get command type


cmd_type := lcr.get_command_type();

-- If command_type is INSERT or UPDATE then delete the value
-- at destination and insert the new value
IF cmd_type = 'INSERT' then
ov2 := lcr.get_values ( 'NEW' ) ;
FOR i in 1 .. ov2.COUNT
LOOP
IF lcr.get_object_name() = 'REGIONS' and
auto_correct_mode = 'Y' THEN
IF ov2(i).column_name = 'REGION_ID' THEN

ret := ov2(i).DATA.GetNumber(r_id) ;

-- Delete the value at destination


delete from hr.regions where region_id = r_id;

END IF;
ELSIF lcr.get_object_name() = 'REGIONS' and
auto_correct_mode = 'N' THEN

-- Fix in different way, Change the value using sequence,


-- append something to region_name
IF ov2(i).column_name = 'REGION_ID' THEN

SELECT stradm.reg_exception_s.nextval INTO r_id FROM DUAL;
ov2(i).data := Sys.AnyData.ConvertNumber( r_id ) ;

ELSIF ov2(i).COLUMN_NAME = 'REGION_NAME' THEN

ret := ov2(i).DATA.GetVarchar2(vc) ;
vc := vc || '_A'||r_id;
ov2(i).DATA := Sys.AnyData.ConvertVarchar2( vc ) ;

END IF;

END IF;
END LOOP;
-- set NEW values in the LCR.
lcr.set_values ( value_type => 'NEW' , value_list => ov2 );

-- Execute the LCR


lcr.execute ( true );

ELSIF cmd_type = 'UPDATE' then


IF lcr.get_object_name() = 'REGIONS' and
auto_correct_mode = 'Y' THEN
-- A primary key is changed
IF ( lcr.get_value ( 'NEW','REGION_ID' ) is not NULL ) THEN

-- Delete the value with OLD value of region_id


tmp := lcr.get_value ( 'OLD','REGION_ID' );

ret := tmp.getnumber ( r_id );

-- Delete the old value at destination


delete from hr.regions where region_id = r_id;

-- Insert the new values with new value of region_id, the primary key
IF ( lcr.get_value ( 'NEW','REGION_NAME' ) is not NULL )
THEN
-- if region_name is also updated, use the value
lcr.delete_column('REGION_ID','OLD');
lcr.delete_column('REGION_NAME','OLD');

-- Convert Update to Insert


lcr.set_command_type ( 'INSERT');

-- Execute the LCR


lcr.execute ( true );
ELSE
tmp := lcr.get_value ( 'OLD','REGION_NAME');
ret := tmp.getvarchar2(vc);
lcr.delete_column('REGION_ID','OLD');
lcr.delete_column('REGION_NAME','OLD');

-- use old value of region as new region_name


lcr.add_column
('NEW','REGION_NAME',sys.anydata.convertvarchar2(vc) );

-- Convert Update to Insert


lcr.set_command_type ( 'INSERT');

-- Execute the LCR


lcr.execute ( true );
END IF;

ELSIF ( lcr.get_value ( 'NEW','REGION_ID' ) is null ) THEN

-- Delete the value with OLD value of region_id


tmp := lcr.get_value ( 'OLD','REGION_ID' );

ret := tmp.getnumber ( r_id );


-- Delete the old value at destination
delete from hr.regions where region_id = r_id;

-- Insert with old value of region_id and new value of region_name
lcr.execute ( true );
END IF;
ELSIF lcr.get_object_name() = 'REGIONS' and
auto_correct_mode = 'N' THEN

-- Fix in different way, Change the value using sequence,


-- append something to region_name
IF ( lcr.get_value ( 'OLD','REGION_ID' ) is not NULL ) THEN

SELECT stradm.reg_exception_s.nextval INTO r_id FROM DUAL;
lcr.set_value('NEW','REGION_ID',sys.anydata.convertnumber(r_id));

tmp := lcr.get_value ( 'NEW','REGION_NAME' );

ret := tmp.getvarchar2(vc);
vc := vc || '_A'||r_id;
lcr.set_value('NEW','REGION_NAME',sys.anydata.convertvarchar2(vc) );

lcr.execute ( true );
END IF;

END IF;
-- if delete is failing because of some foreign key constraint, handle this
-- if auto_correct is TRUE
ELSIF cmd_type = 'DELETE' and auto_correct_mode = 'Y'
THEN
-- Delete the row referencing region_id and delete from regions
IF lcr.get_object_name() = 'REGIONS' THEN
IF ( lcr.get_value ( 'OLD','REGION_ID' ) is not null ) THEN

tmp := lcr.get_value ( 'OLD','REGION_ID' );

ret := tmp.getnumber (r_id );

-- Delete the referencing column first


delete from hr.countries where region_id = r_id;
-- Execute the LCR
lcr.execute ( true );

END IF;
END IF;
END IF;

EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN
tmp := lcr.get_value ( 'NEW','REGION_ID' );
ret := tmp.getnumber ( r_id );

-- delete using new.region_id


delete from hr.regions where region_id = r_id;

-- Execute the LCR


lcr.execute ( true );
END error_hand;
END err_audpkg;
/
show errors;

-- Error handler for regions


begin
dbms_apply_adm.set_dml_handler(
  'hr.regions','TABLE','DEFAULT',TRUE,'STRADM.ERR_AUDPKG.ERROR_HAND');
end ;
/

-- Verify handler is set


select object_owner , object_name , user_procedure
from dba_apply_dml_handlers order by object_owner ,
object_name ;

Sample Output

---------------------------------------------------------------------
-- Case 1 : Auto-correction for a hr.regions
-- Auto_correct is set to TRUE. The value is replicated and
-- the value is checked inside an error handler and error is
-- handled. Supplemental logging is enabled in hr.regions
---------------------------------------------------------------------
connect hr/hr@dbs1.net;
alter table regions add SUPPLEMENTAL LOG GROUP
regions_log_group (region_id,region_name);

-- Disable primary key in regions


-- This is done only for demonstration purpose
exec dbms_streams.set_tag ( 'FF' );
alter table countries drop constraint countr_reg_fk cascade
drop index;
alter table regions drop primary key cascade drop index;
exec dbms_streams.set_tag ;

connect hr/hr@dbs1.net

-- Set control table to TRUE, the error is handled for HR.REGIONS.
insert into control_table values('HR','REGIONS','Y');
commit;

insert into hr.regions values(1001,'North');


commit;

-- induce primary key violation


insert into hr.regions values(1001,'South');
commit;

connect system/manager@dbs1.net
set serveroutput on
-- sleep to allow for replication; this time can be adjusted (default=5min)
exec dbms_lock.sleep(300);

-- Primary key is handled. region_name should be 'South'


connect hr/hr@dbs2.net
select * from regions
where region_id = 1001
order by region_id, region_name;

connect stradm/stradm@dbs2.net
-- no error message expected
select error_message from dba_apply_error;

-------------------------------------------------------------------------
-- Auto_correct is set to FALSE. The value is replicated and
-- the value is checked inside an error handler.
-- The error is handled and a new value is inserted instead of the
-- failing value.
-- Disable supplemental logging for hr.regions.
-------------------------------------------------------------------------
connect hr/hr@dbs1.net;
alter table regions drop SUPPLEMENTAL LOG GROUP
regions_log_group;

delete from control_table;


insert into control_table values('HR','REGIONS','N');
commit;

-- induce error again


insert into hr.regions values(1001,'North_West');
commit;

connect system/manager@dbs1.net
set serveroutput on
-- wait for replication - time can be adjusted (default=5min)
exec dbms_lock.sleep(300);

connect hr/hr@dbs2.net
-- one more row is inserted
select * from regions
where region_id = 1001 or region_id >= 9000
order by region_id, region_name;

connect stradm/stradm@dbs2.net
-- No errors reported
select error_message from dba_apply_error;

------------------------------------------------------
-- Test for update
------------------------------------------------------
connect hr/hr@dbs1.net;
delete from control_table;
-- Set auto correct to true
insert into control_table values('HR','REGIONS','Y');
commit;

-- Enable Supplemental logging for hr.regions;


alter table regions add SUPPLEMENTAL LOG GROUP
regions_log_group (region_id,region_name);

-- Update a value in regions


update regions set region_id = 1001,region_name='Central'
where
region_id = 1002;
commit;

connect system/manager@dbs1.net
set serveroutput on
-- sleep for 5 min
exec dbms_lock.sleep(300);

connect hr/hr@dbs2.net
select * from regions
where region_id = 1001 or region_id = 1002 or region_id >=
9000
order by region_id, region_name;

connect stradm/stradm@dbs2.net
-- no error message expected
select error_message from dba_apply_error;

------------------------------------------------------
-- - Test for Delete. Violate foreign key constraint
-- - After bug fix 2271626, set an apply parameter.
------------------------------------------------------
connect stradm/stradm@dbs2.net
begin
dbms_apply_adm.set_parameter(
apply_name => 'apply_from_dbs1net',
parameter => '_restrict_all_ref_cons',
value => 'N');
end;
/

connect hr/hr@dbs2.net
insert into countries values ( 'N1','N1land',1003 );
commit;

-------------------------------------------------------------------------
-- Delete region=1003. This will raise an error at dbs2.net,
-- since the row is referenced by countries. The error is handled by
-- the error handler because auto-correct is set to true.
-------------------------------------------------------------------------
connect hr/hr@dbs1.net
delete from regions where region_id = 1003;
commit;

connect system/manager@dbs1.net
set serveroutput on
-- sleep for 5 min
exec dbms_lock.sleep(300);

-- rows are deleted;


connect hr/hr@dbs2.net
select * from regions
where region_id = 1003;

select * from countries
where region_id = 1003;

connect stradm/stradm@dbs2.net
select error_message from dba_apply_error;

References

NOTE:230049.1 - Streams Conflict Resolution
NOTE:234094.1 - Usage and Restrictions of Streams Apply Handlers
NOTE:265201.1 - Master Note for Troubleshooting Streams Apply Errors ORA-1403, ORA-26787 or ORA-26786, Conflict Resolution
NOTE:265481.1 - Example Streams Apply DML Handler Showing Rows and Columns Filter from the Apply Process
NOTE:401275.1 - Handling Apply Insert And Delete Conflicts In A Streams Environment - Error Handlers

Related Products

Oracle Database Products > Oracle Database Suite > Oracle Database > Oracle Database - Enterprise Edition > Streams (Replication and Messaging)

Keywords

CONFLICT RESOLUTION; DBA_APPLY_DML_HANDLERS; FOREIGN KEY; ORA-1403; ORA-2291; STREAMS

Errors

ORA-01403
Master Note for Streams Recommended Configuration (Doc ID 418755.1)

Type: BULLETIN
Status: PUBLISHED
Last Major Update: 30-May-2014
Last Update: 30-May-2014

In this Document

Purpose
Scope
Details
  Configuration
  1.0 Software Version
  2.0 Database Parameters
    2.1 Significance of AQ_TM_PROCESSES with respect to Oracle Streams
  3.0 Database Storage
    3.1. Tablespace for Streams Administrator queues
    3.2. Separate queues for capture and apply
  4.0 Privileges
  5.0 Source Site Configuration
    5.1. Streams and Flash Recovery Area (FRA)
    5.2. Archive Logging must be enabled
    5.3. Supplemental logging
    5.4. Implement a Heartbeat Table
    5.5. Flow Control
    5.6. Perform periodic maintenance
      Database Version 9iR2 and 10gR1
      Database Version 10gR2 and above
    5.7. Capture Process Configuration
    5.8. Propagation Configuration
    5.9. Additional Configuration for RAC Environments for a Source Database
  6.0 Target Site Configuration
    6.1. Privileges
    6.2. Instantiation
    6.3. Conflict Resolution
    6.4. Apply Process Configuration
    6.5. Additional Configuration for RAC Environments for an Apply Database
  OPERATION
    Global Name
    Certification/compatibility/interoperability between different database versions
    Apply Error Management
    Backup Considerations
    NLS and Characterset considerations
    Batch Processing
    Source Queue Growth
    Streams Cleanup/Removal
    Automatic Optimizer Statistics Collection
  MONITORING
    Streams Healthcheck Scripts
    Alert Log
    Monitoring Utility STRMMON
References

Applies to:

Oracle Database - Enterprise Edition - Version 9.2.0.1 to 12.1.0.1 [Release 9.2 to 12.1]
Information in this document applies to any platform.

Purpose

Oracle Streams enables the sharing of data and events in a data stream, either within a database or from one database to another. This note describes best practices for Oracle Streams configurations, for both downstream capture and upstream (local) capture, in version 9.2 and above.
Scope

The information contained in this note targets replication administrators implementing Streams replication in Oracle 9.2 and higher. This note contains key recommendations for successful implementation of Streams in Oracle database release 9.2 and above.

Details

Configuration

To ensure a successful Streams implementation, use the following recommendations when setting up a Streams environment:

Software Version

Database Settings: Parameters, Storage, and Privileges

Source Site Configuration

Target Site Configuration


1.0 Software Version

Oracle recommends running Streams with the latest available patchset, together with the list of recommended patches from Document 437838.1.

Please assess whether any recommended patch conflicts with existing patches on your system.

There is Streams support in both DbControl and GridControl. GridControl should be used to manage multiple databases in a Streams environment.

2.0 Database Parameters

For best results in a Streams environment, set the following initialization parameters, as necessary, at each participating instance: global_names, _job_queue_interval, sga_target, streams_pool_size.

Some initialization parameters are important for the configuration, operation, reliability, and performance of an Oracle Streams environment. Set these parameters appropriately for your Oracle Streams environment.

Setting Initialization Parameters Relevant to Oracle Streams
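As an illustrative sketch only (the values below are placeholders, not sized recommendations; tune them for your own workload), these parameters can be set with ALTER SYSTEM:

```sql
-- Illustrative values only; size streams_pool_size for your LCR volume.
ALTER SYSTEM SET global_names = TRUE SCOPE=BOTH;
ALTER SYSTEM SET streams_pool_size = 256M SCOPE=BOTH;
-- When sga_target is set, streams_pool_size acts as a lower bound
-- for the automatically tuned Streams pool.
ALTER SYSTEM SET sga_target = 2G SCOPE=SPFILE;
```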

For Oracle version 12.1

Note: There is no new functionality/new features in Oracle Streams 12c; the guidelines for 11.2x are applicable to 12.1x Streams configurations as well.

Refer to Document 1364398.1 Oracle Streams Statement of Direction.

For Oracle version 11.2


For Oracle version 11.1
For Oracle version 10.2

2.1 Significance of AQ_TM_PROCESSES with respect to Oracle


Streams

AQ_TM_PROCESSES controls the Qmon process. Qmon takes care of queue maintenance operations such as message expiration, retry, delay, maintaining queue statistics, removing PROCESSED messages from a queue table, and updating the dequeue IOT as necessary. Qmon monitors and maintains system- and user-owned AQ persistent and buffered objects. For example, the Oracle job scheduler uses AQ and serves as a client to various database components to allow operations to be coordinated at scheduled times and intervals. Similarly, Oracle Grid Control relies on AQ for its Alerts and Service Metrics, and database server utilities such as Data Pump now use AQ. Furthermore, Oracle Applications has been using AQ for a significant period of time, and this will continue.

In 10.2 and above, it is recommended to leave the parameter aq_tm_processes unset and let the database autotune the parameter.

Refer to the following notes to understand more about Qmon and AQ_TM_PROCESSES:

Document 305662.1 Master Note for AQ Queue Monitor Process (QMON)
Document 428441.1 "Warning Aq_tm_processes Is Set To 0" Message in Alert Log After Upgrade to 10.2.0.3
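One hedged way to confirm the parameter has been left unset (so the database can autotune it) is to check the server parameter file:

```sql
-- A NULL value here means aq_tm_processes is not explicitly set.
SELECT value
FROM   v$spparameter
WHERE  name = 'aq_tm_processes';
```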

3.0 Database Storage

3.1. Tablespace for Streams Administrator queues

Create a separate tablespace for the streams administrator schema


(STRMADMIN) at each participating Streams database. This
tablespace will be used for any objects created in the streams
administrator schema, including any spillover of messages from the
in-memory queue.

For example:

CREATE TABLESPACE &streams_tbs_name DATAFILE
'&db_file_directory/&db_file_name' SIZE 25M REUSE
AUTOEXTEND ON NEXT 25M MAXSIZE UNLIMITED;

ALTER USER strmadmin DEFAULT TABLESPACE &streams_tbs_name
QUOTA UNLIMITED ON &streams_tbs_name;

3.2. Separate queues for capture and apply

Configure separate queues for changes that are captured locally and
for receiving captured changes from each remote site. This is
especially important when configuring bi-directional replication
between multiple databases. For example, consider the situation
where Database db1.net replicates its changes to databases db2.net,
and Database db2.net replicates to db1.net. Each database will
maintain 2 queues: one for capturing the changes made locally and
other queue receiving changes from the other database.

Similarly, for 3 databases (db1.net, db2.net, db3.net) replicating the


local changes directly to each other database, there will be 3 queues
at each database. For example at db1.net, queue1 for the capture
process, and queue2 and queue3 for receiving changes from each of
the other databases. The two apply processes on db1.net
(apply_from_db2, apply_from_db3) apply the changes, each
associated with a specific queue (queue2 or queue3)

Queue names should not exceed 24 characters in length. Queue table


names should not exceed 24 characters in length. To pre-create a
queue for Streams, use the SET_UP_QUEUE procedure in the
DBMS_STREAMS_ADM package. If you use the MAINTAIN_TABLES,
MAINTAIN_SCHEMAS, or MAINTAIN_GLOBAL procedures to
configure Streams and do not identify specific queue names,
individual queues will be created automatically.

Example: To configure a site (SITEA) that is capturing changes for


distribution to another site, as well as receiving changes from that
other site (SITEB), configure each queue at SITEA with a separate
queue_table as follows:

exec dbms_streams_adm.set_up_queue(queue_table => 'QT_CAP_SITEA', queue_name => 'CAP_SITEA');

exec dbms_streams_adm.set_up_queue(queue_table => 'QT_APP_FROM_SITEB', queue_name => 'APP_FROM_SITEB');

If desired, the above set_up_queue procedure calls can include a


storage_clause parameter to configure separate tablespace and
storage specifications for each queue table. Typically, Logical Change
Records (LCRs) are queued to an in-memory buffer and processed
from memory. However, they can be spilled to disk if they remain in
memory too long due to an unavailable destination or on memory
pressure (Streams_Pool memory is too low). The storage clause
parameter can be used to preallocate space for the queue table or
specify an alternative tablespace for the queue table without
changing the default tablespace for the Streams Administrator.
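For example, a sketch (the queue, queue table, and tablespace names are illustrative) of pre-creating a queue whose queue table lives in its own tablespace via the storage_clause parameter:

```sql
BEGIN
  dbms_streams_adm.set_up_queue(
    queue_table    => 'strmadmin.QT_CAP_SITEA',
    queue_name     => 'strmadmin.CAP_SITEA',
    storage_clause => 'TABLESPACE streams_q_tbs');
END;
/
```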

4.0 Privileges

The Streams administrator (strmadmin) must be granted the following on each participating Streams database:

GRANT EXECUTE ON DBMS_AQADM TO strmadmin;


GRANT EXECUTE ON DBMS_APPLY_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_CAPTURE_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_STREAMS TO strmadmin;
GRANT EXECUTE ON DBMS_STREAMS_ADM TO strmadmin;

BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
grantee => 'strmadmin',
grant_option => FALSE); END;
/

BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
grantee => 'strmadmin',
grant_option => FALSE);
END;
/

In order to create capture and apply processes, the Streams


Administrator must have DBA privilege. This privilege must be
explicitly granted to the Streams Administrator.

GRANT DBA to STRMADMIN;

In addition, other required privileges must be granted to the Streams


Administrator schema (strmadmin) on each participating Streams
database with the GRANT_ADMIN_PRIVILEGE procedure:

In Oracle 10g and above, all the above (except DBA) can be granted
using the procedure:

DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');

5.0 Source Site Configuration

The following recommendations apply to source databases, i.e., databases in which Streams capture is configured.

5.1. Streams and Flash Recovery Area (FRA)

In Oracle 10g and above, configure a separate log archive destination, independent of the Flash Recovery Area, for the Streams capture process for the database. Archive logs in the FRA can be removed automatically on space pressure, even if the Streams capture process still requires them. Do not allow the archive logs for Streams capture to reside solely in the FRA.
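A hedged sketch (the directory path is illustrative) of adding a second local archive destination outside the FRA:

```sql
-- Keep a copy of archived logs outside the FRA for the capture process.
ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=/u02/oradata/arch' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_state_2 = 'ENABLE' SCOPE=BOTH;
```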

5.2. Archive Logging must be enabled

Verify that each source database is running in ARCHIVELOG mode. For downstream capture sites (i.e., databases in which the Streams capture is configured for another database), the database at which the source redo logs are created must have archive logging enabled.
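This can be checked with a simple query:

```sql
-- Should return ARCHIVELOG on each database that generates redo
-- for a Streams capture process.
SELECT log_mode FROM v$database;
```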

5.3. Supplemental logging

Oracle redo log files contain redo information needed for instance and media recovery. However, some redo-based applications, such as Streams, Logical Standby, and ad hoc LogMiner, need additional information to be logged into the redo log files. The process of logging this additional information into the redo files is called supplemental logging. Confirm that supplemental logging is enabled at each source site.

SUPPLEMENTAL LOGGING LEVELS

There are two levels of Supplemental Logging.

1. Database Level Supplemental Logging - There are two types of database level logging.

Minimal Supplemental Logging - This places information needed for identifying the rows in the redo logs. It can be enabled using the following command:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Identification Key Logging - This places the before and after images of the specified type of columns in the redo log files. This type of logging can be specified for ALL, PRIMARY KEY, UNIQUE, and FOREIGN KEY columns. It can be enabled using the following command:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL, PRIMARY KEY, UNIQUE, FOREIGN KEY) COLUMNS;

You can check the database level supplemental logging using the following query:

select supplemental_log_data_min, supplemental_log_data_pk,
       supplemental_log_data_ui, supplemental_log_data_fk,
       supplemental_log_data_all
from v$database;

2. Table-level Supplemental Logging - Creates individual log groups for each table. Logging can be unconditional or conditional.

Unconditional logging means the before images of the columns are logged regardless of whether the columns are updated. Conditional means the before images of the columns are logged only when the corresponding columns are updated. After images are always captured for the columns specified in the log group. The following queries can be used to check the table level supplemental logging defined in the database.

SELECT supplemental_log_data_pk, supplemental_log_data_ui,


supplemental_log_data_fk FROM dba_capture_prepared_tables
UNION
SELECT supplemental_log_data_pk, supplemental_log_data_ui,
supplemental_log_data_fk FROM dba_capture_prepared_schemas
UNION
SELECT supplemental_log_data_pk, supplemental_log_data_ui,
supplemental_log_data_fk FROM dba_capture_prepared_database;

select log_group_name, table_name,
       decode(always, 'ALWAYS', 'Unconditional', NULL, 'Conditional') always
from DBA_LOG_GROUPS;

select owner, log_group_name, table_name, column_name, logging_property
from DBA_LOG_GROUP_COLUMNS;

SUPPLEMENTAL LOGGING REQUIREMENTS FOR STREAMS


REPLICATION

Supplemental logging should always be enabled on the source database. The following types of columns at the APPLY site are candidates for supplemental logging on the source.

1. All columns that are used in Primary keys at the source site for
which changes are applied
on the target must be unconditionally logged at the table level or at
the db level.

2. All columns that are used as substitute columns at the APPLY site
must be unconditionally logged .

3. All columns that are used in DML handlers, Error handlers, Rules,
Rule based transformations,
virtual dependency definitions, Subset rules must be unconditionally
logged.

4. All columns that are used in a column list for conflict resolution methods must be conditionally logged, if more than one column from the source is part of the column list.

5. If apply parallelism is > 1, then all columns that are part of FOREIGN KEY and UNIQUE KEY constraints defined on more than one column, and BITMAP indexes defined on more than one column, at the source must be conditionally logged.

You can enable table level supplemental logging using the following command:

alter table HR.EMPLOYEES
ADD SUPPLEMENTAL LOG GROUP emp_fulltime (EMPLOYEE_ID, LAST_NAME, DEPARTMENT_ID);

Alternatively, you can enable identification key logging at the table level using the following command. Omitting the ALWAYS clause creates a conditional log group.

Alter table HR.EMPLOYEES
add SUPPLEMENTAL LOG data (PRIMARY KEY, UNIQUE, FOREIGN KEY, ALL) columns;

Note: LOB, LONG, and ADT columns cannot be supplementally logged. These columns are ignored when an ALL clause is specified in the command.

Alter table HR.EMPLOYEES


ADD SUPPLEMENTAL LOG GROUP emp_parttime(
DEPARTMENT_ID NO LOG, EMPLOYEE_ID);

In 9iR2, Streams apply requires unconditional logging of Unique Index and Foreign Key constraint columns, even if those columns are not modified. This is because of unpublished Bug 4198593 "Apply incorrectly requires unconditional logging of Unique and FK constraints", fixed in 9.2.0.8.

In versions 10g and above, the prepare_xxx_instantiation procedures implicitly create supplemental log groups. For downstream capture sites (i.e., databases in which the Streams capture is configured for another database), the database at which the source redo logs are created must have supplemental logging for the database objects of interest to the downstream capture process. The type of supplemental logging that is enabled implicitly by this command can be checked using the SQL in the following link to the documentation. However, additional supplemental logging might need to be enabled, depending on the requirements mentioned above.

Refer to Monitoring Supplemental Logging for more information.

5.4. Implement a Heartbeat Table

To ensure that the applied_scn of the DBA_CAPTURE view is updated periodically, implement a "heartbeat" table. A heartbeat table is especially useful for databases that have a low activity rate. The Streams capture process requests a checkpoint after every 10Mb of generated redo. During the checkpoint, the metadata for Streams is maintained if there are active transactions. Implementing a heartbeat table ensures that there are open transactions occurring regularly within the source database, enabling additional opportunities for the metadata to be updated frequently. Additionally, the heartbeat table provides quick feedback to the database administrator as to the health of the Streams replication.

To implement a heartbeat table: create a table at the source site that includes a date or timestamp column and the global name of the database. Add a rule to capture changes to this table and propagate the changes to each target destination. Make sure that the target destination will apply changes to this table as well. Set up an automated job to update this table at the source site periodically, for example every minute.
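The steps above can be sketched in SQL as follows. This is a minimal sketch only; the STRMADMIN schema, capture name, queue name, and job name are illustrative assumptions, not taken from this note, and the propagation/apply rules at each target must be added separately:

```sql
-- heartbeat table at the source site
CREATE TABLE strmadmin.heartbeat (
  src_gname VARCHAR2(128),
  update_ts TIMESTAMP
);

INSERT INTO strmadmin.heartbeat
  SELECT global_name, SYSTIMESTAMP FROM global_name;
COMMIT;

-- capture rule for the heartbeat table (capture/queue names assumed)
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'strmadmin.heartbeat',
    streams_type => 'capture',
    streams_name => 'capture_ex',
    queue_name   => 'strmadmin.streams_queue');
END;
/

-- job to touch the table every minute
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'strmadmin.heartbeat_job',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'UPDATE strmadmin.heartbeat SET update_ts = SYSTIMESTAMP; COMMIT;',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=1',
    enabled         => TRUE);
END;
/
```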

Refer to Document 461278.1 Example of a Streams Heartbeat Table

5.5. Flow Control

In Oracle 9iR2, when the threshold for memory of the buffer queue is
exceeded, Streams will write the messages to disk. This is sometimes
referred to as "spillover". When spillover occurs, Streams can no
longer take advantage of the in-memory queue optimization. One
technique to minimize this spillover is to implement a form of flow
control. See the following note for the scripts and pre-requisites:

Script to Prevent Excessive Spill of Message From the Streams Buffer


Queue To Disk Document 259609.1

In Oracle 10g and above flow control is automatically handled by the


database so there is no need to implement it manually.

5.6. Perform periodic maintenance

Database Version 9iR2 and 10gR1

Periodically force capture to checkpoint. This checkpoint is not the same as a database checkpoint. To force capture to checkpoint, use the capture parameter _CHECKPOINT_FORCE and set the value to YES. Forcing a checkpoint ensures that the DBA_CAPTURE view columns CAPTURED_SCN and APPLIED_SCN are maintained.

Database Version 10gR2 and above

A. Confirm checkpoint retention. In Oracle 10gR2 and above, the mining process checkpoints itself for quicker restart. These checkpoints are maintained in the SYSAUX tablespace by default. The capture parameter checkpoint_retention_time controls the amount of checkpoint data retained by moving the FIRST_SCN of the capture process forward. The FIRST_SCN is the lowest possible SCN available for capturing changes. When the checkpoint_retention_time is exceeded (default = 60 days), the FIRST_SCN is moved, and the Streams metadata tables previous to this SCN (FIRST_SCN) can be purged and space in the SYSAUX tablespace reclaimed. To alter the checkpoint_retention_time, use the DBMS_CAPTURE_ADM.ALTER_CAPTURE procedure.
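For example, to reduce checkpoint retention to 7 days for a capture process (the capture name CAPTURE_EX is an illustrative assumption):

```sql
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name              => 'capture_ex',
    checkpoint_retention_time => 7);
END;
/
```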

B. Dump a fresh copy of the dictionary to redo. Issue a DBMS_CAPTURE_ADM.BUILD command to dump a current copy of the data dictionary to the redo logs. Doing this will reduce the amount of logs to be processed in case of additional capture process creation or process rebuild.
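A sketch of the BUILD call, which returns the SCN at which the dictionary was written to redo (record this SCN for later capture creation):

```sql
SET SERVEROUTPUT ON
DECLARE
  scn NUMBER;
BEGIN
  DBMS_CAPTURE_ADM.BUILD(first_scn => scn);
  DBMS_OUTPUT.PUT_LINE('Dictionary build at SCN: ' || scn);
END;
/
```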

C. Prepare database objects for instantiation. Issue a DBMS_CAPTURE_ADM.PREPARE_*_INSTANTIATION call, where * indicates the level (TABLE, SCHEMA, GLOBAL), for the database objects captured by Streams. This is used in conjunction with the BUILD in B above for new capture creation or rebuild purposes.
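For example, at schema level (the HR schema is an illustrative assumption; for a single table, PREPARE_TABLE_INSTANTIATION takes a table_name instead):

```sql
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(
    schema_name => 'HR');
END;
/
```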

5.7. Capture Process Configuration

A. Configuring Capture

Use the DBMS_STREAMS_ADM.ADD_*_RULES procedures (ADD_TABLE_RULES, ADD_SCHEMA_RULES for DML and DDL, ADD_GLOBAL_RULES for DDL only). These procedures minimize the number of steps required to configure Streams processes. Also, it is possible to create rules for non-existent objects, so be sure to check the spelling of each object specified in a rule carefully.

CAPTURE requires a rule set with rules. The ADD_GLOBAL_RULES procedure cannot be used to capture DML changes for the entire database. ADD_GLOBAL_RULES can be used to capture all DDL changes for the database.

A single Streams capture can process rules for multiple tables or schemas. For best performance, rules should be simple. Rules that include NOT or LIKE clauses are not simple and will impact the performance of Streams.

Minimize the number of rules added into the process rule set. A
good rule of thumb is to keep the number of rules in the rule set to
less than 100. If more objects need to be included in the ruleset,
consider constructing rules using the IN clause. For example, a rule
for the 6 TB_M21* tables in the MYACCT schema would look like the
following:

(:dml.get_object_owner() = 'MYACCT' and :dml.is_null_tag() = 'Y' and
 :dml.get_object_name() IN ('TB_M21_1','TB_M21_2','TB_M21_3',
 'TB_M21_40','TB_M21_10','TB_M211B010'))
In version 10.2 and above, use the DBMS_STREAMS_ADM.MAINTAIN_* procedures (where * = TABLE, SCHEMA, GLOBAL, TTS) to configure Streams. These procedures automate the entire configuration of the Streams processes between databases, following the Streams best practices. For local capture, the default behavior of these procedures is to implement a separate queue for capture and apply. If you are configuring a downstream capture and applying the changes within the same database, override this behavior by specifying the same queue for both the capture_queue_name and apply_queue_name.

If the maintain_* procedures are not suitable for your environment, please use the ADD_*_RULES procedures (ADD_TABLE_RULES, ADD_SCHEMA_RULES for DML and DDL, ADD_SUBSET_RULES for DML only, and ADD_GLOBAL_RULES for DDL only). These procedures minimize the number of steps required to configure Streams processes. It is also possible to create rules for non-existent objects, so be sure to check the spelling of each object specified in a rule carefully.

The Streams capture process requires a rule set with rules. The ADD_GLOBAL_RULES procedure can be used to capture DML changes for the entire database as long as a negative rule set is created for the capture process that includes rules for objects with unsupported datatypes. ADD_GLOBAL_RULES can be used to capture all DDL changes for the database.

A single Streams capture can process changes for multiple tables or schemas. For best performance, rules for these multiple tables or schemas should be simple. Rules that include LIKE clauses are not simple and will impact the performance of Streams.

To eliminate changes for particular tables or objects, specify the include_tagged_lcr clause along with the table or object name in the negative rule set for the Streams process. Setting this clause will eliminate ALL changes, tagged or not, for the table or object.
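A sketch of excluding one table this way (the table, capture, and queue names are illustrative assumptions):

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => 'hr.audit_log',
    streams_type       => 'capture',
    streams_name       => 'capture_ex',
    queue_name         => 'strmadmin.streams_queue',
    include_dml        => TRUE,
    include_ddl        => TRUE,
    include_tagged_lcr => TRUE,   -- eliminate ALL changes, tagged or not
    inclusion_rule     => FALSE); -- place the rules in the negative rule set
END;
/
```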

B. Capture Parameters

Set the following parameters after a capture process is created:

Parameter & Recommendation: PARALLELISM=1
Default: 1
Comment: Number of parallel execution servers used to configure one or more preparer processes that prefilter changes for the capture process. The recommended value is 1.

Parameter & Recommendation: _CHECKPOINT_FREQUENCY=500
Default: 10 (below 10.2.0.4); 1000 in 10.2.0.4
Comment: Modifies the frequency of logminer checkpoints, especially in a database with significant LOB or DDL activity. Larger values decrease the frequency of logminer checkpoints; smaller values increase their frequency. Logminer checkpoints are not the same as database checkpoints. Availability of logminer checkpoints impacts the time required to recover/restart the capture after database restart. In a low-activity database (ie, small amounts of data, or the data to be captured is changed infrequently), use a lower value, such as 100.

A logminer checkpoint is requested by default every 10Mb of redo mined. If the value is set to 500, a logminer checkpoint is requested after every 500Mb of redo mined. Increasing the value of this parameter is recommended for active databases with significant redo generated per hour.

It should not be necessary to configure _CHECKPOINT_FREQUENCY in 10.2.0.4 or higher.

Parameter & Recommendation: _SGA_SIZE
Default: 10
Comment: Amount of memory (in Mb) available from the streams pool for logminer processing. The default amount of streams_pool memory allocated to logminer is 10Mb. Increase this value especially in environments where large LOBs are processed. This parameter should not be increased unless the logminer error ORA-1341 is encountered. Streams pool memory allocated to logminer is unavailable for other usage.

Capture parameters can be set using the SET_PARAMETER procedure from the DBMS_CAPTURE_ADM package. For example, to set the checkpoint frequency of the Streams capture process named CAPTURE_EX, use the following syntax while logged in as the Streams Administrator to request a logminer checkpoint after processing every gigabyte (1000Mb) of redo:

exec dbms_capture_adm.set_parameter('capture_ex','_checkpoint_frequency','1000');

5.8. Propagation Configuration

A. Configuring Propagation

If the maintain_* procedures are not suitable for your environment (Oracle 9iR2 and 10gR1), please use the ADD_*_PROPAGATION_RULES procedures (ADD_TABLE_PROPAGATION_RULES, ADD_SCHEMA_PROPAGATION_RULES, ADD_GLOBAL_PROPAGATION_RULES for both DML and DDL, ADD_SUBSET_PROPAGATION_RULES for DML only). These procedures minimize the number of steps required to configure Streams processes. Also, it is possible to create rules for non-existent objects, so be sure to check the spelling of each object specified in a rule carefully.

The rules in the rule set for propagation can differ from the rules
specified for the capture process. For example, to configure that all
captured changes be propagated to a target site, a single
ADD_GLOBAL_PROPAGATION_RULES procedure can be specified for
the propagation even though multiple ADD_TABLE_RULES might have
been configured for the capture process.

B. Propagation mode

For new propagation processes configured in 10.2 and above, set the queue_to_queue propagation parameter to TRUE. If the database is RAC enabled, an additional service is created, typically named in the format sys$schema.queue_name.global_name, when the Streams subscribers are initially created. A Streams subscriber is a defined propagation between two Streams queues, or an apply process with the apply_captured parameter set to TRUE. This service automatically follows the ownership of the queue on queue ownership switches (ie, instance startup, shutdown, etc). The service name can be found in the NETWORK_NAME column of the DBA_SERVICES view.

If the maintain_* (TABLE, SCHEMA, GLOBAL) procedures are used to configure Streams, queue_to_queue is automatically set to TRUE, if possible. The database link for this queue_to_queue propagation must use a TNS servicename (or connect name) that specifies the GLOBAL_NAME in the CONNECT_DATA clause of the descriptor. See section 6 on Additional Considerations for RAC below.

Propagation processes configured prior to 10.2 continue to use the dblink mode of propagation. In this situation, if the database link no longer connects to the owning instance of the queue, propagation will not succeed. You can continue to use the 10.1 best practices for this propagation, or recreate the propagation during a maintenance window. Make sure that the queue is empty with no unapplied spilled messages before you drop the propagation. Then, recreate the propagation with the queue_to_queue parameter set to TRUE.

Queues created prior to 10.2 on RAC instances should be dropped and recreated in order to take advantage of the automatic service generation and queue_to_queue propagation. Be sure to perform this activity when the queue is empty and no new LCRs are being enqueued into the queue.
C. Propagation Parameters

Parameter & Recommendation: latency=5
Default: 60
Comment: Maximum wait, in seconds, in the propagation window for a message to be propagated after it is enqueued. The default value is 60. Caution: if latency is not specified for this call, then any existing value will be overwritten with this default value (60).

For example, if the latency is 60 seconds, then during the propagation window, if there are no messages to be propagated, messages from that queue for the destination will not be propagated for at least 60 more seconds. It will be at least 60 seconds before the queue is checked again for messages to be propagated to the specified destination. If the latency is 600, then the queue will not be checked for 10 minutes; if the latency is 0, then a job queue process will wait for messages to be enqueued for the destination and propagate them as soon as they are enqueued.

Propagation parameters can be set using the ALTER_PROPAGATION_SCHEDULE procedure from the DBMS_AQADM package. For example, to set the latency parameter of the Streams propagation from the STREAMS_QUEUE owned by STRMADMIN to the target database whose global_name is DEST_DB for the queue Q1, use the following syntax while logged in as the Streams Administrator:

exec dbms_aqadm.alter_propagation_schedule('strmadmin.streams_queue','DEST_DB',destination_queue=>'Q1',latency=>5);

D. Network Connectivity
When using Streams propagation across a Wide Area Network
(WAN), increase the session data unit (SDU) to improve the
propagation performance. The maximum value for SDU is 32K
(32767). The SDU value for network transmission is negotiated
between the sender and receiver sides of the connection: the
minimum SDU value of the two endpoints is used for any individual
connection. In order to take advantage of an increased SDU for
Streams propagation, the receiving side sqlnet.ora file must include
the default_sdu_size parameter. The receiving side listener.ora must
indicate the SDU change for the SID. The sending side tnsnames.ora
connect string must also include the SDU modification for the
particular service.
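The three settings described above can be sketched as follows. Host names, SID, paths, and service names are placeholders, and 32767 is the maximum SDU mentioned above:

```sql
-- sqlnet.ora on the receiving side
-- DEFAULT_SDU_SIZE=32767

-- listener.ora on the receiving side (SDU declared for the SID):
-- SID_LIST_LISTENER=
--   (SID_LIST=
--     (SID_DESC=
--       (SDU=32767)
--       (SID_NAME=orcl)))

-- tnsnames.ora on the sending side (SDU in the connect descriptor):
-- DEST_DB=
--   (DESCRIPTION=
--     (SDU=32767)
--     (ADDRESS=(PROTOCOL=tcp)(HOST=target-host)(PORT=1521))
--     (CONNECT_DATA=(SERVICE_NAME=dest_db.mycompany.com)))
```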

Tuning the tcp/ip networking parameters can significantly improve performance across the WAN. Here are some example tuning parameters for Linux. These parameters can be set in the /etc/sysctl.conf file and loaded by running sysctl -p. When using RAC, be sure to configure this at each instance.

net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# increase Linux autotuning TCP buffer limits # min, default, and max
# number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

In addition, the SEND_BUF_SIZE and RECV_BUF_SIZE sqlnet.ora


parameters increase the performance of propagation on your
system. These parameters increase the size of the buffer used to
send or receive the propagated messages. These parameters should
only be increased after careful analysis on their overall impact on
system performance.

For further information, please review the Oracle Database Net


Services Administrator's Guide

5.9. Additional Configuration for RAC Environments for a Source Database

Archive Logs

The archive log threads from all instances must be available to any
instance running a capture process. This is true for both local and
downstream capture.

Queue Ownership

When Streams is configured in a RAC environment, each queue table has an "owning" instance. All queues within an individual queue table are owned by the same instance. The Streams components (capture/propagation/apply) all use that same owning instance to perform their work. This means that:

a capture process is run at the owning instance of the source queue.

a propagation job must run at the owning instance of the queue.

a propagation job must connect to the owning instance of the target queue.

Ownership of the queue can be configured to remain on a specific instance, as long as that instance is available, by setting the PRIMARY_INSTANCE and/or SECONDARY_INSTANCE parameters of DBMS_AQADM.ALTER_QUEUE_TABLE. If the primary_instance is set to a specific instance (ie, not 0), the queue ownership will return to the specified instance whenever the instance is up.

Capture will automatically follow the ownership of the queue. If the ownership changes while capture is running, capture will stop on the current instance and restart at the new owner instance.

For queues created with Oracle Database 10g Release 2, a service will
be created with the service name= schema.queue and the network
name SYS$schema.queue.global_name for that queue. If the
global_name of the database does not match the
db_name.db_domain name of the database, be sure to include the
global_name as a service name in the init.ora.

For propagations created with the Oracle Database 10g Release 2 code with the queue_to_queue parameter set to TRUE, the propagation job will deliver only to the specific queue identified. Also, the source dblink for the target database connect descriptor must specify the correct service (the global name of the target database) to connect to the target database. For example, the tnsnames.ora entry for the target database should include the CONNECT_DATA clause in the connect descriptor. This clause should specify (CONNECT_DATA=(SERVICE_NAME='global_name of target database')). Do NOT include a specific INSTANCE in the CONNECT_DATA clause.

For example, consider the tnsnames.ora file for a database with the
global name db.mycompany.com. Assume that the alias name for the
first instance is db1 and that the alias for the second instance is db2.
The tnsnames.ora file for this database might include the following
entries:

db.mycompany.com=
(description=
(load_balance=on)
(address=(protocol=tcp)(host=node1-vip)(port=1521))
(address=(protocol=tcp)(host=node2-vip)(port=1521))
(connect_data=
(service_name=db.mycompany.com)))

db1.mycompany.com=
(description=
(address=(protocol=tcp)(host=node1-vip)(port=1521))
(connect_data=
(service_name=db.mycompany.com)
(instance_name=db1)))

db2.mycompany.com=
(description=
(address=(protocol=tcp)(host=node2-vip)(port=1521))
(connect_data=
(service_name=db.mycompany.com)
(instance_name=db2)))

Use the load-balanced tnsnames.ora alias (db.mycompany.com in this example) in the target database link USING clause.

DBA_SERVICES lists all services for the database. GV$ACTIVE_SERVICES identifies all active services for the database. In non-RAC configurations, the service name will typically be the global_name. However, it is possible for users to manually create alternative services and use them in the TNS connect_data specification. For RAC configurations, the service will appear in these views as SYS$schema.queue.global_name.

Propagation Restart

Use the procedures START_PROPAGATION and STOP_PROPAGATION from DBMS_PROPAGATION_ADM to enable and disable the propagation schedule. These procedures automatically handle queue_to_queue propagation.

Example:

exec DBMS_PROPAGATION_ADM.STOP_PROPAGATION('name_of_propagation');

or

exec DBMS_PROPAGATION_ADM.STOP_PROPAGATION('name_of_propagation',force=>true);

exec DBMS_PROPAGATION_ADM.START_PROPAGATION('name_of_propagation');

6.0 Target Site Configuration

The following recommendations apply to target databases, ie, databases in which Streams apply is configured.

6.1. Privileges

Grant Explicit Privileges to APPLY_USER for the user tables

Examples:

Privileges for table level DML: INSERT/UPDATE/DELETE,

Privileges for table level DDL: CREATE (ANY) TABLE , CREATE (ANY)
INDEX, CREATE (ANY) PROCEDURE

6.2. Instantiation

Set instantiation SCNs manually if not using export/import. If manually configuring the instantiation SCN for each table within the schema, use the RECURSIVE=>TRUE option on the DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN procedure.

For DDL Set Instantiation SCN at next higher level (ie, SCHEMA or
GLOBAL level).
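A sketch of the schema-level call, run at the target site. The schema, source global name, and SCN value are illustrative assumptions; note that RECURSIVE=>TRUE requires a database link from the target back to the source, named after the source database:

```sql
DECLARE
  iscn NUMBER := 4711001;  -- SCN obtained at the source, e.g. via
                           -- DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER
BEGIN
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name   => 'HR',
    source_database_name => 'SRC_DB.MYCOMPANY.COM',
    instantiation_scn    => iscn,
    recursive            => TRUE);  -- also sets the SCN for each table in the schema
END;
/
```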

6.3. Conflict Resolution

If updates will be performed in multiple databases for the same shared object, be sure to configure conflict resolution. Refer to Document 230049.1 Streams Conflict Resolution for more detail.

To simplify conflict resolution on tables with LOB columns, create an error handler to handle errors for the table. When registering the handler using the DBMS_APPLY_ADM.SET_DML_HANDLER procedure, be sure to specify the ASSEMBLE_LOBS parameter as TRUE.

Document 265201.1 Master Note for Troubleshooting Streams Apply Errors ORA-1403, ORA-26787 or ORA-26786, Conflict Resolution

6.4. Apply Process Configuration

A. Rules

If the maintain_* procedures are not suitable for your environment, please use the ADD_*_RULES procedures (ADD_TABLE_RULES, ADD_SCHEMA_RULES, ADD_GLOBAL_RULES for DML and DDL, ADD_SUBSET_RULES for DML only).

APPLY can be configured with or without a rule set. The ADD_GLOBAL_RULES procedure can be used to apply all changes in the queue for the database. If no rule set is specified for the apply process, all changes in the queue are processed by the apply process.

A single Streams apply can process rules for multiple tables or schemas located in a single queue that are received from a single source database. For best performance, rules should be simple. Rules that include LIKE clauses are not simple and will impact the performance of Streams.

To eliminate changes for particular tables or objects, specify the include_tagged_lcr clause along with the table or object name in the negative rule set for the Streams process. Setting this clause will eliminate all changes, tagged or not, for the table or object.

B. Parameters

Parameter & Recommendation: DISABLE_ON_ERROR=N
Default: Y
Comment: If Y, then the apply process is disabled on the first unresolved error, even if the error is not fatal. If N, then the apply process continues regardless of unresolved errors.

Parameter & Recommendation: PARALLELISM=4
Default: 1
Comment: Parallelism configures the number of apply servers available to the apply process for performing user transactions from the source database. Choose a value of 4, 8, 12, or 16 based on the concurrent replicated workload generated at the source AND the number of CPUs available at the target.

Parameter & Recommendation: TXN_LCR_SPILL_THRESHOLD
Default: 10,000
Comment: New in 10.2. Leave this parameter at the default initially. It enables you to specify that an apply process begins to spill messages for a transaction from memory to disk when the number of messages in memory for a particular transaction exceeds the specified number.

Setting this parameter to a value that is higher than the default, to try to stage everything in memory, must be done carefully so that queue spilling is not increased. Setting TXN_LCR_SPILL_THRESHOLD to 'infinite' is not recommended because this will revert Streams to the old pre-10.2 behaviour.

The DBA_APPLY_SPILL_TXN and V$STREAMS_APPLY_READER views enable you to monitor the number of transactions and messages spilled by an apply process.

Refer to Document 365648.1 Explain TXN_LCR_SPILL_THRESHOLD in Oracle10GR2 Streams.

Apply parameters can be set using the SET_PARAMETER procedure from the DBMS_APPLY_ADM package. For example, to set the DISABLE_ON_ERROR parameter of the Streams apply process named APPLY_EX, use the following syntax while logged in as the Streams Administrator:

exec dbms_apply_adm.set_parameter('apply_ex','disable_on_error','n');

In some cases, performance can be improved by setting the following hidden parameter. This parameter should be set when the major workload is UPDATEs and the updates are performed on just a few columns of a many-column table.

Parameter & Recommendation: _DYNAMIC_STMTS=Y
Default: N
Comment: If Y, then for UPDATE statements, the apply process will optimize the generation of SQL statements based on required columns.

Parameter & Recommendation: _HASH_TABLE_SIZE=1000000
Default: 80*parallelism
Comment: Sets the size of the hash table used to calculate transaction dependencies to 1 million.

6.5. Additional Configuration for RAC Environments for an Apply Database

Queue Ownership

When Streams is configured in a RAC environment, each queue table has an "owning" instance. All queues within an individual queue table are owned by the same instance. The Streams components (capture/propagation/apply) all use that same owning instance to perform their work. This means that:

the database link specified in the propagation must connect to the owning instance of the target queue.

the apply process is run at the owning instance of the target queue.

Ownership of the queue can be configured to remain on a specific instance, as long as that instance is available, by setting the PRIMARY_INSTANCE and SECONDARY_INSTANCE parameters of DBMS_AQADM.ALTER_QUEUE_TABLE. If the primary_instance is set to a specific instance (ie, not 0), the queue ownership will return to the specified instance whenever the instance is up.

Apply will automatically follow the ownership of the queue. If the ownership changes while apply is running, apply will stop on the current instance and restart at the new owner instance.

Changing the GLOBAL_NAME of the Database

See the OPERATION section on Global_name below. The following are some additional considerations when running in a RAC environment. If the GLOBAL_NAME of the database is changed, ensure that the queue is empty before changing the name, and that the apply process is dropped and recreated with the apply_captured parameter = TRUE. In addition, if the GLOBAL_NAME does not match the db_name.db_domain of the database, include the GLOBAL_NAME in the list of services for the database in the database parameter initialization file.

OPERATION

A Streams process will automatically restart after a database startup, assuming that the process was in a running state before the database shut down. No special startup or shutdown procedures are required in the normal case.

Global Name

Streams uses the GLOBAL_NAME of the database to identify changes from or to a particular database. Do not modify the GLOBAL_NAME of a Streams database after capture has been created. Changes captured by the Streams capture process automatically include the current global name of the source database. This means that if the global name is modified after a capture process has been configured, the capture process will need to be dropped and recreated following the GLOBAL_NAME modification. In addition, the system-generated rules for capture, propagation, and apply typically specify the global name of the source database. These rules will need to be modified or recreated to adjust the source_database_name. Finally, if the GLOBAL_NAME does not match the db_name.db_domain of the database, include the GLOBAL_NAME in the list of services for the database in the database parameter initialization file.

If the global name must be modified on the database, do it at a time when NO user changes are possible on the database and the Streams queues are empty with no outstanding changes to be applied, so that the Streams configuration can be recreated. Keep in mind that all subscribers (propagations to target databases and the target apply processes) must also be recreated if the source database global_name is changed. Follow the directions in the Streams Replication Administrator's Guide for Changing the DBID or GLOBAL NAME of a source database.

It is also strongly recommended that the database init.ora parameter global_names be set to TRUE to guarantee that database link names match the global name of the target database.
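For example, the setting can be verified and enforced as follows (ALTER SYSTEM with SCOPE=BOTH assumes an spfile is in use):

```sql
-- check the current global name and the global_names setting
SELECT * FROM global_name;
SELECT value FROM v$parameter WHERE name = 'global_names';

-- enforce matching database link names
ALTER SYSTEM SET global_names = TRUE SCOPE = BOTH;
```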

Certification/compatibility/interoperability between different database versions

Streams has 2 types of capture: local capture and downstream capture.

Local capture:

Local capture means that a capture process runs on the source database. Local capture can be used across different hardware, different OS, and different Oracle versions.

Downstream capture:

Downstream capture means that a capture process runs on a database other than the source database. The redo log files from the source database are copied to the other database, called a downstream database, and the capture process captures changes in these redo log files at the downstream database.

Operational Requirements for Downstream Capture

The following are operational requirements for using downstream capture:

* The source database must be running at least Oracle Database 10g, and the downstream capture database must be running the same version of Oracle as the source database or higher.
* The operating system on the source and downstream capture sites
must be the same, but the operating system release does not need to
be the same. In addition, the downstream sites can use a different
directory structure from the source site.
* The hardware architecture on the source and downstream capture
sites must be the same. For example, a downstream capture
configuration with a source database on a 32-bit Sun system must
have a downstream database that is configured on a 32-bit Sun
system. Other hardware elements, such as the number of CPUs,
memory size, and storage configuration, can be different between
the source and downstream sites.
In a downstream capture environment, the source database can be a
single instance database or a multi-instance Real Application Clusters
(RAC) database. The downstream database can be a single instance
database or a multi-instance RAC database, regardless of whether
the source database is single instance or multi-instance.

Apply Error Management

The view DBA_APPLY_ERROR includes the message_number within the transaction on which the reported error occurred. Use this message number in conjunction with the procedures from the documentation manual Streams Concepts and Administration, chapter titled Monitoring Streams Apply Processes, under the section Displaying Detailed Information About Apply Errors, to print out the column values of each logical change record within the failed transaction.
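For example, a quick listing of failed transactions and the message on which each failed might look like this:

```sql
SELECT apply_name,
       local_transaction_id,
       source_database,
       message_number,   -- LCR within the transaction that raised the error
       message_count,    -- total LCRs in the failed transaction
       error_number,
       error_message
FROM   dba_apply_error
ORDER  BY error_creation_time;
```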

Backup Considerations

1. Ensure that any manual backup procedures that include any of the following statements include a non-null Streams tag:

ALTER TABLESPACE ... BEGIN BACKUP

ALTER TABLESPACE ... END BACKUP

The tag should be chosen such that these DDL commands will be
ignored by the capture rule set.

To set a Streams tag, use the DBMS_STREAMS.SET_TAG procedure. A non-null tag should be specified to avoid capturing these commands.
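A sketch of wrapping the backup commands with a tag. The tag value HEXTORAW('11') is an arbitrary assumption; choose a value that your capture rule set ignores:

```sql
-- set a non-null tag so the backup DDL is not captured
BEGIN
  DBMS_STREAMS.SET_TAG(tag => HEXTORAW('11'));
END;
/

-- ALTER TABLESPACE ... BEGIN BACKUP / END BACKUP commands go here

-- restore default (null) tagging for this session
BEGIN
  DBMS_STREAMS.SET_TAG(tag => NULL);
END;
/
```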

Backups performed using RMAN do not need to set a Streams tag.

2. Do not allow any automated backup of the archived logs to remove necessary archive logs. It is especially important in a Streams environment that all necessary archived logs remain available online and in the expected location until the capture process has finished processing them. If a log required by the capture process is unavailable, the capture process will abort. Force a checkpoint (capture/logminer) before beginning the manual backup procedures. To force a checkpoint, explicitly reset the hidden capture parameter _CHECKPOINT_FORCE to 'Y'. The REQUIRED_CHECKPOINT_SCN column of the DBA_CAPTURE view specifies the lowest required SCN to restart capture. A procedure to determine the minimum archive log necessary for successful capture restart is available in the Streams health check script.

3. Ensure that all archive logs (from all threads) are available.
Database recovery depends on the availability of these logs, and a
missing log will result in incomplete recovery.

4. Ensure that the APPLY process parameter COMMIT_SERIALIZATION is set to the default value, FULL, so that the apply commits all transactions, regardless of whether they contain dependent row LCRs, in the same order as the corresponding transactions on the source database.

5. Implement a "heartbeat" table. To ensure that the applied_scn of the DBA_CAPTURE view is updated periodically, implement a heartbeat table. Implementing a heartbeat table ensures that the metadata is updated frequently. Additionally, the heartbeat table provides quick feedback as to the health of Streams replication. Refer to the Source Site Configuration section, Implement a Heartbeat Table, for more details.

6. In situations that result in incomplete recovery (point-in-time recovery) at the source site, follow the instructions in Chapter 9 of the Streams Replication Administrator's Guide:

Performing Point-in-Time Recovery on the Source in a Single-Source Environment
Performing Point-in-Time Recovery in a Multiple-Source Environment

7. In situations that result in incomplete recovery at the destination site, follow the instructions in Chapter 9 of the Streams Replication Administrator's Guide:

Performing Point-in-Time Recovery on a Destination Database

8. In situations where combined capture and apply (CCA) is
implemented in a single-source replication environment:

Combined Capture and Apply and Point-in-Time Recovery

NLS and Characterset considerations

Ensure that all the databases in the streams configuration and
corresponding NLS settings are configured appropriately for the
successful replication of linguistic data.

For more information, refer to the Database Globalization Support
Guide.

Batch Processing

For best performance, the commit point for batch processing should
be kept low. It is preferable that excessively large batch processing be
run independently at each site. If this technique is utilized, be sure to
use DBMS_STREAMS.SET_TAG to skip the capture of the batch
processing session. Setting this tag is valid only in the connected
session issuing the SET_TAG command and will not impact the capture
of changes from any other database sessions.
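For example, the batch session can be tagged so that capture rules ignore its changes (the tag value used here is arbitrary; it simply must be non-NULL and match what the capture rules check):

```sql
-- Tag all changes made by this session so capture rules can skip them
exec dbms_streams.set_tag(tag => hextoraw('11'));

-- ... run the batch processing in this session ...

-- Clear the tag so later changes from this session are captured again
exec dbms_streams.set_tag(tag => NULL);
```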
DDL Replication

When replicating DDL, keep in mind the effect the DDL statement will
have on the replicated sites. In particular, do not allow system
generated naming for constraints or indexes, as modifications to
these will most likely fail at the replicated site. Also, storage clauses
may cause some issues if the target sites are not identical.

If you decide NOT to replicate DDL in your Streams environment, any
table structure change must be performed manually.

Propagation
At times, the propagation job may become "broken" or fail to start
after an error has been encountered or after a database restart. The
typical solution is to disable the propagation and then re-enable it.

exec dbms_propagation_adm.stop_propagation('propagation_name');

exec dbms_propagation_adm.start_propagation('propagation_name');

If the above does not fix the problem, perform a stop of propagation
with the force parameter and then start propagation again.

exec dbms_propagation_adm.stop_propagation('propagation_name', force => true);

exec dbms_propagation_adm.start_propagation('propagation_name');

An additional side-effect of stopping the propagation with the force
parameter is that the statistics for the propagation are cleared.

The above is documented in the Streams Replication Administrator's
Guide: Restart Broken Propagations.

Source Queue Growth

The source queue may grow if one of the target sites is down for an
extended period, or if propagation is unable to deliver the messages to
a particular target site (subscriber) due to network problems for an
extended period.

Automatic flow control minimizes the impact of this queue growth.
Queued messages (LCRs) for unavailable target sites will spill to disk
storage while messages for available sites are processed normally.

Propagation is implemented using the DBMS_JOB subsystem. If a job
is unable to execute 16 successive times, the job will be marked as
"broken" and become disabled. Be sure to periodically check that the
job is running successfully to minimize source queue growth due to
this problem.
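A quick check for broken or failing propagation jobs (applicable where propagation runs under DBMS_JOB; later releases use Oracle Scheduler instead) might look like:

```sql
-- Broken jobs no longer execute; a non-zero failure count is an early warning
SELECT job, what, broken, failures, last_date, next_date
FROM   dba_jobs
WHERE  broken = 'Y' OR failures > 0;
```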

Streams Cleanup/Removal

Removing the Streams administrator schema with DROP USER ...
CASCADE can be used to remove the entire Streams configuration.
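For example (assuming the Streams administrator is named strmadmin), an explicit configuration cleanup followed by the schema drop:

```sql
-- Remove the Streams configuration, then drop the administrator schema
exec dbms_streams_adm.remove_streams_configuration;
DROP USER strmadmin CASCADE;
```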

Automatic Optimizer Statistics Collection

Oracle Database 10g has the Automatic Optimizer Statistics Collection
feature that runs every night and gathers optimizer stats of tables
whose stats have become stale. The problem with volatile tables,
such as the Streams queue tables, is that it is quite possible that
when the stats collection job runs, these tables may not have data
that is representative of their full load period. For this reason we
recommend that for volatile tables, customers run the
dbms_stats.gather job manually when the tables are at their fullest,
and then immediately lock the stats using the PL/SQL APIs
(dbms_stats.lock ...) provided. This will ensure that when the nightly
Automatic Optimizer Statistics Collection job runs, these volatile
tables will be skipped and hence not analyzed.

These volatile AQ/Streams tables are created through a call to
dbms_aqadm.create_queue_table (qtable_name, etc.) or the
dbms_streams_adm.setup_queue() command with a user-defined
queue table (qtable_name). In addition to the queue table, the call
internally creates the following tables, which also tend to be volatile:

aq$_{qtable_name}_i
aq$_{qtable_name}_h
aq$_{qtable_name}_t
aq$_{qtable_name}_p
aq$_{qtable_name}_d
aq$_{qtable_name}_c

Oracle has the ability to restore old stats on tables, including data
dictionary tables, using the dbms_stats.restore... APIs. This feature
can be used for short-term resolution, but the real solution is the first
one, where you lock the optimizer stats of volatile tables.
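As a sketch (the owner and queue table names are placeholders):

```sql
-- Gather stats while the queue table holds a representative load, then lock them
exec dbms_stats.gather_table_stats('STRMADMIN', 'STREAMS_QUEUE_TABLE');
exec dbms_stats.lock_table_stats('STRMADMIN', 'STREAMS_QUEUE_TABLE');

-- Short-term alternative: restore statistics from an earlier point in time
exec dbms_stats.restore_table_stats('STRMADMIN', 'STREAMS_QUEUE_TABLE',
                                    systimestamp - INTERVAL '1' DAY);
```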
MONITORING

All Streams processing is done at the "owning instance" of the queue.
To determine the owning instance, use the query below:

SELECT q.owner, q.name, t.queue_table, t.owner_instance
FROM DBA_QUEUES q, DBA_QUEUE_TABLES t
WHERE t.object_type = 'SYS.ANYDATA' AND
q.queue_table = t.queue_table AND
q.owner = t.owner;

To display the monitoring view information, either query the
monitoring views from the owning instance or use the GV$ views for
dynamic streams views.

Refer to Document 735976.1 for streams related views.

Streams Healthcheck Scripts

The Streams health check script is a collection of queries used to
determine the configuration of the streams environment. This script
should be run at each participating database in a streams
configuration. In addition to configuration information, analysis of
the rules specified for streams is included to enable quicker diagnosis
of problems. A guide to interpreting the output is provided. The
healthcheck script is an invaluable tool for diagnosing customer
issues. The Streams Healthcheck script is available from Document
273674.1 Streams Configuration Report and Health Check Script.

Alert Log

Streams capture and apply processes report long-running and large
transactions in the alert log.

Long-running transactions are open transactions with no activity (i.e.,
no new change records, rollback, or commit) for an extended period
(20 minutes). Large transactions are open transactions with a large
number of change records. The alert log will report the fact that a
long-running or large transaction has been seen every 20 minutes.
Not all such transactions will be reported - only 1 per 10 minute
period. When the commit or rollback is received, this fact will be
reported in the alert log as well.

Refer to Document 783927.1 Streams Long-Running Transactions:
Explanation, Diagnosis, and Troubleshooting.

Monitoring Utility STRMMON

STRMMON is a monitoring tool focused on Oracle Streams. Using this
tool, database administrators get a quick overview of the Streams
activity occurring within a database. In a single-line display, strmmon
reports Streams activity information. The reporting interval and
number of iterations to display are configurable. STRMMON is
available in the rdbms/demo directory in $ORACLE_HOME. The most
recent version of the tool is available from Document 290605.1 Oracle
Streams STRMMON Monitoring Utility.

Streams Performance Advisor

In 11g and later, Oracle Streams Performance Advisor can be used to
monitor and gather performance statistics on the Streams
components.

See also Document 732644.1 Oracle Streams Performance Advisor

Still have questions?

To discuss this information further with Oracle experts and industry
peers, we encourage you to review, join, or start a discussion in the
My Oracle Support GoldenGate, Streams and Distributed Database
Community Forum.

Enjoy a short video about Oracle's Support Communities to quickly
understand its benefits for you right now (http://bcove.me/tlygjitz).

To provide feedback on this note, click on the "Rate this document"
link above.

References

NOTE:238455.1 - Streams DML Types Supported and Supported Datatypes
NOTE:437838.1 - Streams Recommended Patches
NOTE:730036.1 - Troubleshooting Oracle Streams Performance Issues
NOTE:461278.1 - Example of a Streams Heartbeat Table
NOTE:335516.1 - Master Note for Streams Performance Recommendations
NOTE:313279.1 - Master Note for Troubleshooting Streams capture 'WAITING For REDO' or INITIALIZING
NOTE:783927.1 - Troubleshooting Long-Running Transactions in Oracle Streams
NOTE:789445.1 - Master Note for Streams Setup and Administration
NOTE:259609.1 - Script to Prevent Excessive Spill of Message From the Streams Buffer Queue To Disk
NOTE:265201.1 - Master Note for Troubleshooting Streams Apply Errors ORA-1403, ORA-26787 or ORA-26786, Conflict Resolution
NOTE:273674.1 - Streams Configuration Report and Health Check Script
NOTE:290605.1 - Oracle Streams STRMMON Monitoring Utility
NOTE:1264598.1 - Master Note for Streams Downstream Capture - 10g and 11g [Video]
NOTE:305662.1 - Master Note for AQ Queue Monitor Process (QMON)
NOTE:365648.1 - Explain TXN_LCR_SPILL_THRESHOLD in Oracle10GR2 Streams
NOTE:428441.1 - "Warning: Aq_tm_processes Is Set To 0" Message in Alert Log After Upgrade to 10.2.0.3 or Higher
NOTE:735976.1 - All Replication Configuration Views For Streams, AQ, CDC and Advanced Replication
NOTE:1364398.1 - Oracle Streams Statement of Direction
NOTE:230049.1 - Streams Conflict Resolution

Related

Products

Oracle Database Products > Oracle Database Suite > Oracle Database > Oracle Database -
Enterprise Edition > Streams (Replication and Messaging)

Keywords

CONFIGURATION; DBMS_CAPTURE_ADM; DBMS_STREAMS_ADM; DOWNSTREAM CAPTURE;
FLOW CONTROL; HEALTH; HEALTHCHECK; INSTANTIATION; MAINTENANCE; MONITOR;
PROPAGATION; REQUIRED_CHECKPOINT_SCN

Errors

ORA-1341; ORA-1403; ORA-26786; ORA-26787


Introduction to "Bug Description" Articles (Doc ID 245840.1)

This is a customer overview of the "Bug Description" documents which can be seen in My
Oracle Support.

What are "Bug Description" Articles ?

Each article is a placeholder note for a summary description of a bug (defect) in an Oracle
product. The articles give the following summary information on an individual bug:

Affects - Product version/s affected

Fixed - Product version/s where the bug is fixed

Symptoms - Key symptoms of the bug

Related To - Key features / product areas / parameters / views etc. to which the issue is
related

Description - A short description of the problem, sometimes including an example and/or a
workaround

Many of the articles will show very little information against the description of the problem
but may still be useful as they show which versions are likely to be affected and can give some
idea of the impact of the bug.

IMPORTANT:

These short articles will not usually give enough information to identify that you are hitting, or
may hit, a specific issue. Always check with Oracle Support if a particular issue is of interest.
Also please note that some of the workarounds given may be very specific, or may have side
effects. Again contact Oracle Support for information about any particular bug.

Please note that not all bugs have a bug description present. The aim is to have a description
for all customer related bugs which appear in RDBMS Patch Sets. Other products do not have
these summaries present.

Affects

This section shows the product that the bug is reported against, the range of versions believed
to be affected, the versions which are confirmed as being affected, and details of the platforms
affected. If the issue is believed to be a regression then this is indicated also.

Product (Component)

The product / component shows the code area that the bug is filed against. In some cases the
tools affected may differ from the actual code area where the fix occurs. Eg: An Oracle Server
(CORE) fix may only show up as an Export / Import issue. You are advised to see the "Related
To" section of the bug description for features / products believed to be affected.

Versions believed to be affected

Oracle does not test every version of a product for a given bug. When a bug is reported it is
typically tested against the version it is reported in and the latest version. The values here give
an idea of the range of versions the bug is believed to affect.

Versions confirmed as affected

These are versions where the bug is known to reproduce.

Platforms affected

If the bug is specific to certain platforms this is indicated here. If the bug is "Generic" this is
shown. A generic bug is one which affects all or most platforms, although the exact symptoms
and chance of hitting the bug may differ between platforms. eg: Some bugs may affect only big
or little endian platforms but the bug itself is marked as generic if the issue is in code which is
common to all platforms.

Regressions

A regression is a problem which is introduced in a particular release but does not affect the
default behaviour in lower releases. Hence it is something to watch out for if upgrading.
Problems with new features are NOT considered as regressions unless the feature is enabled
by default in the new release and it is not obviously controlled by some compatibility related
parameter. The most serious regressions are those introduced by application of a Patch Set.

Changes in behaviour

Some bug fixes intentionally introduce a change in documented behaviour. Ideally such
changes should be protected by some event or parameter to allow them to be turned off but
this is not always the case.

More Information about Versions Affected

Current Patch Set.


Affects 12.1.0.2 Consider using any workaround if given. It may be possible to get
an interim patch.

Patch Set Update


Consider using any workaround if given. Consider using the latest
Affects the listed
Patch Set if available on your platform. It may be possible to get an
12.1.0.1.X DB PSU
interim patch if the issue has serious enough impact and business
justification.

Current Version.
Consider using any workaround if given. Consider using the latest
Affects 12.1.0.1
Patch Set if available on your platform. It may be possible to get an
interim patch.

Patch Set Update


Affects the listed Consider using any workaround if given. It may be possible to get
11.2.0.4.X DB PSU an interim patch if the issue has serious enough impact and
business justification.
Current Patch Set.
Affects 11.2.0.4 Consider using any workaround if given. It may be possible to get
an interim patch.

Patch Set Update


Affects the listed Consider using any workaround if given. It may be possible to get
11.2.0.3.X DB PSU an interim patch if the issue has serious enough impact and
business justification.

Second 11.2 Patch Set.


Affects 11.2.0.3 Consider using any workaround if given or a later version where the
issue is fixed. It may be possible to get an interim patch.

Patch Set Update


Affects the listed Consider using any workaround if given. It may be possible to get
11.2.0.2.X PSU an interim patch if the issue has serious enough impact and
business justification.

First 11.2 Patch Set.


Affects 11.2.0.2 Consider using any workaround if given. It may be possible to get
an interim patch.

Patch Set Update


Affects the listed
Consider using any workaround if given. Interim patches are no
11.2.0.1.X PSU
longer created for 11.2.0.1 releases, including PSU versions.

Current Base release.


Affects 11.2.0.1 Consider using the current patch set (11.2.0.3) or any workaround
if given. Interim patches are no longer created for this release.

Patch Set Update


Affects the listed Consider using any workaround if given. It may be possible to get
11.1.0.7.X PSU an interim patch if the issue has serious enough impact and
business justification.

Current Patch Set.


Consider using any workaround if given. If you have purchased
Affects 11.1.0.7
Extended Support then it may be possible to get an interim patch if
the issue has serious enough impact and business justification.

Current Base release


Affects 11.1.0.6 Consider using the current patch set or any workaround if given.
Interim patches are no longer created for this release.

Patch Set Update


Affects the listed Consider using any workaround if given. It may be possible to get
10.2.0.5.X PSU an interim patch if the issue has serious enough impact and
business justification.
Current Patch Set.
Consider using any workaround if given. If you have purchased
Affects 10.2.0.5
Extended Support then it may be possible to get an interim patch if
the issue has serious enough impact and business justification.

Patch Set Update


Affects the listed Consider using latest patch set or any workaround if given. Interim
10.2.0.4.X PSU patches are no longer created for these releases except for specific
platforms where noted in the support policy.

Consider using latest patch set or any workaround if given. Interim


Affects 10.2.0.4 patches are no longer created for this release except for specific
platforms where noted in the support policy.

Consider using latest patch set or any workaround if given. Interim


Affects 10.2.0.3
patches are no longer created for this release.

Affects 10.2.0.2 or Consider using latest patch set. Interim patches are no longer
10.2.0.1. created for this release.

Current Patch Set.


Consider using 10.2 or 11g or any workaround if given. If you have
Affects 10.1.0.5 purchased Extended Support then it may be possible to get an
interim patch if the issue has serious enough impact and business
justification.

Affects 10.1.0.2, Consider using latest patch set (10.1.0.5) or 10.2 or 11g. Interim
10.1.0.3 or 10.1.0.4 patches are no longer created for this release.

Terminal Patch Set.


Consider using 10g or 11g or any workaround if given. If you have
Affects 9.2.0.8 purchased Extended Support then it may be possible to get an
interim patch if the issue has serious enough impact and business
justification.

Consider using a newer release or any workaround. Interim patches


Affects 9.2.0.1, 9.2.0.2,
are no longer created for these releases.
9.2.0.3, 9.2.0.4, 9.2.0.5,
The support status of various releases can be seen in
9.2.0.6 or 9.2.0.7
Note:161818.1

Consider using a newer release or any workaround. Interim patches


Affects a release prior to are no longer created for these releases.
9.2 The support status of various releases can be seen in
Note:161818.1

Fixed

This lists the releases where the bug has been fixed.
Interim / one-off patches are NOT listed in this section. You can search the "Patches" section of
My Oracle Support using the bug number to check for one-off / interim emergency patch availability.
Note that if an issue is listed as fixed in a particular patch set then the fix should also be
included in all subsequent Patch Sets for that release. Eg: If a bug is listed as fixed in 9.2.0.3
then the fix will also be included in the 9.2.0.4 Patch Set. It is advisable always to use the latest
Patch Set rather than a specific patch set version.

Fixed in version - Release Information

This bug fix is only available as an interim patch:
This fix has not been included in any full database release nor any Patch Set and is only
available as an interim (one-off) patch. Typically such issues are addressed in some other
manner in a later release and so the fix is not applicable to versions other than those detailed
in the bug description.

This issue is reported as a bug but currently has no fix coded:
The issue may be fixed at some point in the future but the fix may be done under some other
bug number or even under some separate project.

Expected to be Fixed in Oracle 12c Release 2:
This is just a marker version for a future version of Oracle. There is currently no such version. A
fix version of 12.2.0.0 just means that the issue is expected to be fixed in 12c Release 2, but
this is not guaranteed.

Fixed in the 12.1.0.2 Patch Set:
See Note:1683799.1

Fixed in the listed 12.1.0.1 Patch Set Update (PSU):
See Note:854428.1 for details of Patch Set Updates (PSU).
See Note:1591141.1 for details of fixes in each 12.1.0.1 PSU.

Fixed in 12.1.0.1:
See Note:1565065.1

Fixed in the listed 11.2.0.4 Patch Set Update (PSU):
See Note:854428.1 for details of Patch Set Updates (PSU).
See Note:1611785.1 for details of fixes in each 11.2.0.4 PSU.

Fixed in the 11.2.0.4 Patch Set:
See Note:1562139.1

Fixed in the listed 11.2.0.3 Patch Set Update (PSU):
See Note:854428.1 for details of Patch Set Updates (PSU).
See Note:1449750.1 for details of fixes in each 11.2.0.3 PSU.

Fixed in the 11.2.0.3 Patch Set:
See Note:1348336.1

Fixed in the listed 11.2.0.2 Patch Set Update (PSU):
See Note:854428.1 for details of Patch Set Updates (PSU).
See Note:1340011.1 for details of fixes in each 11.2.0.2 PSU.

Fixed in 11.2.0.2 (and higher) Patch Set:
See Note:1179474.1

Fixed in the listed 11.2.0.1 Patch Set Update (PSU):
See Note:854428.1 for details of Patch Set Updates (PSU).
See Note:1340010.1 for details of fixes in each 11.2.0.1 PSU.

Fixed in 11.2.0.1:
See Note:880782.1

Fixed in the listed 11.1.0.7 Patch Set Update (PSU):
See Note:854428.1 for details of Patch Set Updates (PSU).
See Note:1337836.1 for details of fixes in each 11.1.0.7 PSU.

Fixed in the 11.1.0.7 Patch Set:
See Note:738538.1

Fixed in the 11.1.0.6 base release:
See Note:454506.1

Fixed in the listed 10.2.0.5 Patch Set Update (PSU):
See Note:854428.1 for details of Patch Set Updates (PSU).
See Note:1337394.1 for details of fixes in each 10.2.0.5 PSU.

Fixed in the 10.2.0.5 Patch Set:
See Note:1087991.1

Fixed in the listed 10.2.0.4 Patch Set Update (PSU) Overlay patch:
See Note:854428.1 for details of Patch Set Updates (PSU).
See Note:1340024.1 for details of fixes in each 10.2.0.4 PSU.
Note that 10.2.0.4.5 onwards PSUs are supplied as an overlay patch which must be applied on
top of the 10.2.0.4.4 Patch Set Update.

Fixed in the listed 10.2.0.4 Patch Set Update (PSU):
See Note:854428.1 for details of Patch Set Updates (PSU).
See Note:1340024.1 for details of fixes in each 10.2.0.4 PSU.

Fixed in 10.2.0.4 and subsequent 10.2 Patch Sets:
See Note:555579.1

Fixed in 10.2.0.3 and subsequent 10.2 Patch Sets:
See Note:401435.1

Fixed in 10.2.0.2 and subsequent 10.2 Patch Sets:
See Note:359415.1

Fixed in the 10.2.0.1 base release:
See Note:316901.1

Fixed in the 10.1.0.5 Patch Set:
See Note:356201.1

Fixed in 10.1.0.4 and subsequent 10.1 Patch Sets:
See Note:302663.1

Fixed in 10.1.0.3 and subsequent 10.1 Patch Sets:
See Note:280554.1

Fixed in the 10.1.0.2 base release:
See Note:263729.1

Fixed in the 9.2.0.8 Patch Set:
See Note:388281.1

Fixed in one of 9.2.0.1, 9.2.0.2, 9.2.0.3, 9.2.0.4, 9.2.0.5, 9.2.0.6, 9.2.0.7:
These are old 9.2 releases. It is advisable to use 9.2.0.8 or a newer release such as 10g or 11g.
Summary 9.2 information can be seen in Note:189908.1.

Fixed in one of 9.0.1.0, 9.0.1.2, 9.0.1.3, 9.0.1.4, 9.0.1.5:
These are old 9i releases. It is advisable to use a newer release such as 10g or 11g. Summary
9.0 information can be seen in Note:149018.1.

Fixed in a release prior to Oracle9i:
These releases are no longer supported for error correction. A summary of releases and their
current status can be seen in Note:161818.1.

Note: If a release is shown as "does not exist yet" then this indicates a planned future base
release or Patch Set. A patch set or base release with the indicated version is expected to be
released at some time in the future but this is not a guarantee that there will be such a release
- plans are subject to change.

Symptom

Code Improvement

The code fix in this bug is considered as an enhancement. This may be as simple as enhanced
diagnostics or may be a small enhancement to functionality.

Corrupt/Bad Backup

The problem can result in a bad backup being produced, such that restore from that backup
may be impossible or not restore data to be the same as it was at the time of the backup.

Corruption

A corruption issue which does not fit into one of the corruption issues below.

Corruption (Corrupt Export)

The export file produced is corrupt in some form. This may make the export totally unusable,
or it may only affect a small part of the export file.

Corruption (Dictionary)

The Oracle data dictionary can end up containing incorrect or inconsistent data. Sometimes
such issues can be repaired with the help of Oracle Support, and sometimes it requires the
database to be rebuilt or restored. Never attempt to repair dictionary corruptions by yourself.
Doing so can make your database unsupportable.

Corruption (Incorrect / Missing Corruption Checks)

Either a false corruption error is reported or code which checks for corrupt blocks / data is
incorrect and may not notice that bad data really is corrupt.

Corruption (Index)

This issue can cause an index / table data mismatch where the problem can typically be
resolved by rebuilding the index.
Note: When an index is built / rebuilt, it may be built based on data in the corrupt index itself.
Hence it is very important to drop ALL corrupt indexes before recreating them.

Corruption (Logical)

This issue can cause logical corruption to data. This category is typically used where data itself
can get corrupted but the underlying table structure is intact.
Eg: "DBVERIFY" or "ANALYZE VALIDATE" may not report any problems for any table affected.

This category is closely related to Wrong Results in that wrong results issues can cause logical
corruption if used to update data or take decisions. The "Corruption" category is typically used
where the corrupt data is persistent (stored).

Corruption (Physical)

This issue can cause physical corruption. Physical corruption is typically taken to mean that the
underlying structure of datablocks or files can be corrupted.
Eg: "DBVERIFY" or "ANALYZE VALIDATE" is likely to report a problem.

Deadlock

This issue can either lead to a deadlock scenario, or a deadlock situation may go unnoticed. In
some cases a false deadlock may be reported.

Diagnostic Output Problem / Improvement

Diagnostic output relates to output in Oracle trace files etc. used by Oracle Support to
diagnose a problem. The diagnostic information may not be sufficiently detailed, or may
contain incorrect, misleading or incomplete output. This sort of issue does not normally affect
day-to-day running of a database but can hamper diagnosis when things go wrong.

Error May Occur

The issue causes an unexpected or incorrect error to be reported. Details of the actual error/s
appear in the description of the bug.
Excessive CPU Usage

This issue can cause high CPU usage but is not typically a spin scenario - just excessive CPU use.
For CPU spinning issues see Hang / Spin.

Feature Unusable

This bug makes a particular feature unusable.

Hang (Involving Shared Resource)

A process may hold a shared resource a lot longer than normally expected leading to many
other processes having to wait for that resource. Such a resource could be a lock, a library
cache pin, a latch etc.. The overall symptom is that typically numerous processes all appear to
be stuck, although some processes may continue unhindered if they do not need the blocked
resource.

Hang (Process Hang)

A process may hang, typically in a wait state. Note that this is different to a process which is
spinning and consuming CPU.

Hang (Process Spins)

A process enters a tight CPU loop so appears to hang but is actually consuming CPU.

Install/Patching Is Not Performed Correctly

Installation or patching of a product or option is not performed correctly.


eg: Files may be missing or permissions changed incorrectly.

Instance May Crash

This issue can cause an instance to crash.

Internal Error May Occur (ORA-600)

An internal error (ORA-600) may occur. Details of the actual error/s appear in the description
of the bug.

Latch Contention

This issue can result in latch contention within the database.

Leak (Memory Leak / Growth)

Memory is continually consumed appearing as a memory leak. Some memory leaks are not
"true" leaks in that the memory may be freed up when a long running operation completes but
the issue is still marked as a leak if the operation should run without the memory growth.

Leak (Resources Eg: File Handles)

The issue leaks some form of resource (other than memory or CPU).
Examples are cursor leaks, file handle leaks etc.

Memory Corruption

This issue can result in memory corruption. Memory corruptions can have side effects of
signalling unexpected errors, unexpected internal errors or can even produce incorrect results
depending on how the memory gets corrupted. The text description of the bug usually
indicates if the issue affects private or shared memory and whether the issue is a client or
server side memory corruption issue.

Mutex Contention

Mutex contention may be seen. Typically mutex contention is focused around specific objects
or SQL statements. As mutexes can use very tight loops with only a yield of the CPU between
iterations, mutex contention may often be accompanied by increased CPU usage.

Optimizer Bad Cardinality

The optimizer estimates a bad cardinality when evaluating the best execution plan. This can
typically lead to a suboptimal execution plan being chosen as the cost computations are based
on bad estimates.

Parsing SQL is expensive / slow

This issue can cause parsing of a SQL statement to take excessive resource and time. In this
context "parsing" includes time taken in the optimizer to choose the best plan for a statement.
If a SQL statement takes a long time or a lot of resource during the parse / optimize operations
this can cause waits in other sessions wanting to execute the same SQL statement.

One can typically help the optimizer portion of parse time by use of hints or outlines to reduce
the number of options that the optimizer has to consider.

Performance Affected (General)

The issue can affect performance generally.

Performance Of Certain Operations Affected

Performance of specific operations may be affected.

Performance Of Query/ies Affected

Query performance is affected. This may be due to a poor execution plan or due to
unnecessary operations to execute the query.

Process May Dump (ORA-7445) / Abend / Abort

The process may die unexpectedly. More details on the likely functions or error at the time of
the dump should appear in the bug description.

Relink errors / undefined symbols

This issue can show as errors during relink operations or as undefined symbols.

Security Or Service Denial Vulnerability

This issue is either a security loophole or a vulnerability to denial of service attacks. Such issues
are typically either alerted and/or are included in Oracle Critical Patch Updates .

Shared Pool Affected

The shared pool may be affected. Issues which affect the shared pool may lead to wider
problems for other users such as ORA-4031 errors, very slow performance, increased CPU etc.

Storage Space Usage Affected

The amount of space used for the storage of database objects is affected by this issue.
Typically more space than expected is used. Note that this relates to the "on disk" storage
space in the database and not to memory space used.

Unstable Execution Plan

This issue can cause an execution plan for a statement to change suddenly. ie: The plan is not
stable and the SQL may execute quickly sometimes and poorly at other times.

Unwanted or Excess Trace / Alert messages

This issue can cause unwanted tracefiles, tracefile content, alert log entries or other
extraneous output which might be considered a nuisance. Such files / entries can typically be
deleted at regular intervals but be careful not to also remove useful trace / output.

Wrong Results

This bug can cause wrong results to be returned. If the source of the wrong results is used in
any form of data update or decision this issue could lead to permanent logical corruption.

Wrong/Bad Shared Cursor

A wrong version of a shared cursor may be used. eg: A wrong child cursor may be used. This
sort of problem can show as strange errors (such as ORA-942) or can lead to logically incorrect
behaviour, such as accessing data from tables in the wrong schema.

Wrong OS file/directory Permissions

This issue can cause incorrect permissions to be set on files / directories at OS level. Such
permissions may prevent access when expected OR may allow access to the files/directories by
users that should not normally have permission to read/write the file.

HCHECK script reports this corruption

"hcheck.sql" is a custom script available in Note:136697.1 which can be executed to help check
for potential DB data dictionary inconsistencies. The script specifically includes checks for
dictionary inconsistencies that could be caused by this bug and reports any found with an
HCKE-nnnn or HCKW-nnnn message.

Task Related

Instance Startup

This issue can occur when attempting to start a database or ASM instance.

Migration / Upgrade / Downgrade

This issue can affect upgrade, downgrade or migration of a database between releases.

Performance Monitoring
Performance monitoring may be affected.
eg: performance monitoring views may not show correct information to identify a problem.

Recovery

Issues which affect database recovery operations of some form.

SQL Commands / Features Used

Adaptive Cursor Sharing

This issue is related to the Adaptive Cursor Sharing functionality introduced in 11g. See
Note:836256.1 for details of this feature.

Analytic SQL (Windowing etc..)

Relates to the use of analytic SQL constructs such as the windowing functions.

ANSI Joins

Relates to the use of ANSI joins. Often a workaround for problems with ANSI SQL is to recode
the SQL to Oracle conventional format.

Application Context

Relates to the use of application context. eg: the use of SYS_CONTEXT()

Connect By / Hierarchical Queries

This issue affects SQL which uses the "CONNECT BY" SQL clause.

Constraint Related

This issue is related to the use of constraints. The description should clarify which kinds of
constraint the problem may relate to. eg: CHECK constraints, use of Foreign key constraints etc..

Datatypes (AnyData)

Relates to the use of the ANYDATA datatype

Datatypes (LOBs/CLOB/BLOB/BFILE)

Relates to one of the large object (LOB) datatypes such as CLOB, BLOB or BFILE.

Datatypes (Objects/Types/Collections)

Relates to object data types such as TYPEs or collections.

Datatypes (Special Datatypes)

Special datatypes which are only available from Oracle8 onwards.

Datatypes (TIMESTAMP)

Relates to use of the TIMESTAMP datatype, or timezone data as used for timestamp datatypes.

Direct Path Operations

Direct path operations may be affected. Direct path operations can occur in various places. eg:
INSERT /*+APPEND*/ type operations use direct path access at SQL level, whilst direct path
SQL Load and direct path OCI APIs can also use this form of data access. For issues affecting
direct path operations a workaround can often be to use the equivalent non-direct path
option.
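As a minimal sketch of the two access paths mentioned above (the table names here are hypothetical), a direct path insert at SQL level and its conventional path equivalent look like:

```sql
-- Direct path insert: loads above the high water mark, bypassing the buffer cache
INSERT /*+ APPEND */ INTO sales_hist SELECT * FROM sales;
COMMIT;

-- Workaround for direct-path-specific issues: the conventional path equivalent
INSERT INTO sales_hist SELECT * FROM sales;
COMMIT;
```

Note that after a direct path insert the table cannot be queried in the same transaction until a commit is issued.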

Expression Filters / Rules

This issue relates to the use of rules / expression filters. eg: As created by DBMS_RLMGR

Global Temporary Tables

Can affect the use of global temporary tables.

Globalization Support (NLS)

Related to the use of NLS (Globalization support).

Hash Join

A HASH Join is a specific form of join of row sources. This issue relates to this specific join
method. Hash joins can typically be disabled and other join methods used.
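One way to steer the optimizer away from a hash join for a single statement is a hint; a sketch using hypothetical EMP / DEPT tables:

```sql
-- Ask the optimizer not to hash join these row sources;
-- it will choose another join method such as nested loops or sort merge
SELECT /*+ NO_USE_HASH(e d) */ e.ename, d.dname
FROM   emp e, dept d
WHERE  e.deptno = d.deptno;
```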

Literal Replacement (CURSOR_SHARING)

This issue relates to the use of literal replacement, which is used when the parameter
CURSOR_SHARING is set to either SIMILAR or FORCE. A workaround for such issues is to
disable literal replacement by setting CURSOR_SHARING=EXACT for any problem statement,
although this can then result in increased load on the shared pool in systems with high
concurrency.
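The workaround described above can be applied at session level; a minimal sketch:

```sql
-- Disable literal replacement for this session only
ALTER SESSION SET CURSOR_SHARING = EXACT;
```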

Merge SQL (MERGE .. USING)

Relates to use of the MERGE SQL statement.

Multi Table Insert SQL

Relates to the use of Multi table insert statements.

Online DDL

Relates to the use of ONLINE DDL operations. Often a workaround for such issues is to use the
non-online equivalent, although that may then need a short outage to allow the operation to
run.

Optimizer

This problem is related to the SQL optimizer which determines the execution plan for a given
SQL statement.

Optimizer (Bind Peeking)

Relates specifically to the use of bind peeking by the Cost Based Optimizer.

Optimizer (SQL Plan Management)

This issue affects, or is related to, the use of the SQL Plan Management (SPM) feature within
Oracle which was introduced in Oracle 11g. SQL Plan Management (SPM) is intended to allow
controlled plan evolution by only using a new plan after it has been verified to perform
better than the current plan. The feature is typically controlled via:

<Parameter:optimizer_capture_sql_plan_baselines>

<Parameter:optimizer_use_sql_plan_baselines>

<Package:DBMS_SPM>

See the "Performance Tuning Guide" for more details of this feature.

Optimizer (Stored Outlines)

Relates to the use of stored outlines.

Optimizer (Subquery Factoring - WITH clause)

Relates to the use of WITH clauses within SQL statements. Most statements which use a WITH
clause can be recoded into equivalent SQL which does not use the WITH clause.
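A sketch of such a recode, using a hypothetical EMP table — the WITH form and the inline view form are logically equivalent:

```sql
-- Using a WITH clause (subquery factoring)
WITH dept_tot AS
  (SELECT deptno, SUM(sal) total_sal FROM emp GROUP BY deptno)
SELECT * FROM dept_tot WHERE total_sal > 10000;

-- Recoded without the WITH clause, as an inline view
SELECT *
FROM  (SELECT deptno, SUM(sal) total_sal FROM emp GROUP BY deptno)
WHERE total_sal > 10000;
```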

Parallel Query (Auto DOP)

Parallel query automatic degree of parallelism (Auto DOP) is a Parallel Query feature
introduced in 11.2 which is enabled when the parameter PARALLEL_DEGREE_POLICY=AUTO.
If you encounter problems with this feature it can be disabled by setting
PARALLEL_DEGREE_POLICY to MANUAL.
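A minimal sketch of disabling the feature, either instance wide or for one session:

```sql
-- Instance wide (spfile permitting)
ALTER SYSTEM SET PARALLEL_DEGREE_POLICY = MANUAL;

-- Or for the current session only
ALTER SESSION SET PARALLEL_DEGREE_POLICY = MANUAL;
```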

Parallel Query (PQO)

Relates to the use of Parallel Query or Parallel DML. A workaround for such issues may be to
run the statement serially.

Query Rewrite (Including Materialized Views)

Relates to the use of materialized views / snapshots and / or query rewrite.

Regular Expressions

Related to the use of Regular expressions. eg: the REGEXP_LIKE operator

Result Cache

This issue affects, or is related to, the use of the Result Cache feature within Oracle which was
introduced in Oracle 11g. When the result cache is enabled then a query execution plan may
include a RESULT CACHE node in the plan. When such a query executes the database looks in
the cache memory to determine whether the result exists in the cache. If the result exists, then
the database retrieves the result from memory instead of executing the query. If the result is
not cached, then the database executes the query, returns the result as output, and stores the
result in the result cache.

The result cache can be controlled by the following:

<Parameter:RESULT_CACHE_MODE>

The /*+ RESULT_CACHE */ or /*+ NO_RESULT_CACHE */ hints

See the "Performance Tuning Guide" for more details of this feature.
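A hedged sketch of the two hints mentioned above, run against a hypothetical EMP table:

```sql
-- Request that this query's result be stored in / fetched from the result cache
SELECT /*+ RESULT_CACHE */ deptno, AVG(sal) FROM emp GROUP BY deptno;

-- Bypass the result cache for this query
-- (useful when isolating a suspected result cache issue)
SELECT /*+ NO_RESULT_CACHE */ deptno, AVG(sal) FROM emp GROUP BY deptno;
```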

Securefiles

Relates to the use of Secure File LOBS. Often a workaround may be to use the equivalent
BASICFILE lob.

Star Transformation

Relates to using STAR transformation in SQL statements. Sometimes such issues can be
avoided by disabling star transformation for the problem SQL.
eg: Set STAR_TRANSFORMATION_ENABLED=FALSE

Star Transformation With Temp. Tables

Relates to using STAR temporary table transformations. Often such issues can be avoided by
using STAR transformation but without the temp table transformation.
eg: Set STAR_TRANSFORMATION_ENABLED=TEMP_DISABLE

Triggers

Relates to the use of TRIGGERs.

Truncate

Relates to use of the TRUNCATE SQL statement.

Virtual Columns

This issue is related to the use of Virtual Columns. A virtual column is a column that is not
stored on disk but has a queryable value which is the result of some expression.

Virtual columns may be defined explicitly in the table definition or may be used implicitly by
some SQL operations (such as when a function based index is defined or used).
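A minimal sketch of an explicitly defined virtual column (the table and column names are hypothetical):

```sql
CREATE TABLE emp_pay (
  salary     NUMBER,
  bonus      NUMBER,
  -- not stored on disk; computed from the expression when queried
  total_comp NUMBER GENERATED ALWAYS AS (salary + NVL(bonus, 0)) VIRTUAL
);
```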

How Things Are Stored / Defined

ASSM Space Management (Bitmap Managed Segments)

Relates to the use of ASSM segments. eg: SEGMENT SPACE MANAGEMENT AUTO

Automatic Storage Management (ASM)

Relates to ASM storage or instances.

BIGFILE Tablespaces

Relates to BIGFILE tablespaces.

Bitmap Indexes

Relates to the use of bitmap indexes.

Chained or Migrated Rows

This issue is related to chained or migrated rows. A "chained" row is one where the row is split
into two or more pieces - the pieces may be stored in the same block or in different blocks.
Any row with more than 255 columns is internally stored in a chained manner. Row chaining
can typically occur if a row is updated with longer data than currently in place - the row then
has to be split and chained in order to fit in the new data. A migrated row is similar but occurs
where the entire row piece is moved leaving just the head piece in the original location in
order to retain the same "rowid" value. Delete and reinsert of chained / migrated rows can
often remove the chaining (unless the row is very long or has more than 255 columns).
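Chained / migrated rows can be listed with ANALYZE, assuming the CHAINED_ROWS output table has first been created from the utlchain.sql script shipped with the database (the table name SCOTT.EMP is hypothetical):

```sql
-- Create the output table once: @?/rdbms/admin/utlchain.sql
ANALYZE TABLE scott.emp LIST CHAINED ROWS INTO chained_rows;

-- Rowids of the chained / migrated rows found
SELECT head_rowid FROM chained_rows WHERE table_name = 'EMP';
```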

Compressed Data Storage

Relates to the use of compressed data storage in tables.


ie: Use of the COMPRESS keyword on the table itself.
Note that this is different to compressed key storage in indexes / IOTs.

Compressed Key Index / IOT

Relates to the use of COMPRESS KEY indexes or IOTs.

Domain Indexes

Relates to domain indexes in general.

External Tables

Relates to the use of External Tables.

Function Based Index (Including DESC Indexes)

Relates to the use of function based indexes, including DESC indexes.

Index Organized Tables (IOT)

Relates to the use of Index Only Tables. ie: Created with ORGANIZATION INDEX

Oracle Disk Manager

Relates to use of Oracle Disk Manager

Partition Views (PV)

Relates to the use of partition views.

Partitioned Tables

Relates to use of partitioned or composite partitioned tables.

Read Only Tablespace/s

Relates to the use of read only tablespaces. eg: ALTER TABLESPACE READ ONLY

Space Management

Relates to dictionary based space management.

Space Management (Locally Managed Tablespaces)

Relates to space management within locally managed tablespaces. ie: Tablespaces created
with "EXTENT MANAGEMENT LOCAL"

SYSAUX Tablespace

This issue relates to the use of the SYSAUX tablespace which is a special auxiliary tablespace
that is a standard part of the database.

System Managed Undo (SMU)

Relates to the use of System Managed Undo (SMU). ie: As is used when UNDO_MANAGEMENT
is set to AUTO.

Transparent Data Encryption

Related to the use of Transparent Data Encryption (TDE).

Net / Connectivity / Authentication / Distributed

Advanced / Secure Networking

This issue is related to Advanced or Secure Networking. Such issues may relate to a specific
encryption or advanced Net feature.
eg: The use of a specific secure network protocol such as Kerberos or SSL etc..

Database Link / Distributed

The issue is related to the use of a database link between Oracle databases. Note that this
does NOT include database links to Heterogeneous Services / Gateways but only those
between Oracle instances.

Gateways / Heterogeneous Services

Related to the use of Oracle Gateways / Heterogeneous Services generally (as opposed to a
specific gateway / HS service)

Gateways / MGW Message Gateway

Related specifically to the MGW Message Gateway.

Gateways / ODBC

Related specifically to the ODBC Gateway

Gateways / Procedural Gateway For APPC

Related specifically to the Procedural Gateway For APPC

Gateways / Procedural Gateway For MQ Series

Related specifically to the Procedural Gateway For MQ Series

Gateways / Transparent Gateway For DRDA

Related specifically to the Transparent Gateway For DRDA

Gateways / Transparent Gateway For Informix

Related specifically to the Transparent Gateway For Informix

Gateways / Transparent Gateway For SQL Server

Related specifically to the Transparent Gateway For SQL Server

Interoperability (Between Releases)

Related to interoperability between different Oracle versions.


Network

Related to the Net layer of communication between processes.

Network (Connection Manager)

Related to the use of Connection Manager (CMAN).

Oracle Data Provider for .NET (ODP)

Related to Oracle Data Provider for .NET (ODP)

Oracle Names

Related to the use of Oracle Names.

Security ( Authentication / Privileges / Auditing )

This issue relates to auditing, authentication or privileges used in the database.

Transparent Application Failover

This issue is related to Transparent Application Failover (TAF). This is a feature of the client
that enables an application to automatically reconnect to a database if the database instance
to which the connection is made fails.
See Note:453293.1 for more details of TAF.

XA / Distributed Transactions

This issue is related to the use of XA or distributed transactions.

Database Options / Major Features

Active Dataguard (ADG)

This issue is related to Active Data Guard.


See Note:1101938.1 for information about Data Guard.

Advanced Queuing

This issue relates to use of the Advanced Queue features in the Oracle database.

Automatic Memory Management

This issue relates to use of the Automatic Memory Management feature of Oracle.
eg: As used when the MEMORY_TARGET parameter is set.
Problems in this area can often be avoided by manually configuring memory for the database.

Block Change Tracking (BCT)

"Block change tracking" is a feature that allows the database to keep track of blocks that have
been modified (changed). This might be enabled on a primary or standby database. If enabled
then RMAN uses a block change tracking file to identify changed blocks for incremental
backups. By reading this small bitmap file to determine which blocks changed, RMAN avoids
having to scan every block in the data file that it is backing up.

If a bug issue affects the Block Change Tracking feature then one can often work around /
avoid the bug issue by disabling this feature. Disabling BCT may result in longer incremental backup times.

To check if BCT is used:


SELECT STATUS, FILENAME FROM V$BLOCK_CHANGE_TRACKING;

To enable / disable BCT use:

ALTER DATABASE ENABLE | DISABLE BLOCK CHANGE TRACKING;

Change Data Capture

This issue relates to use of Change Data Capture features.

Database File System (DBFS)

This issue relates to use of Database File System (DBFS), including the DBFS client.

Database Resident Connection Pooling (DRCP)

This issue relates to use of Database Resident Connection Pooling (DRCP), which is a
connection pool that can be configured in the Database Server. DRCP can be used to achieve
scalability by sharing connections among multiple processes. For details of this feature and
how to enable / disable it see Note:1501987.1

Direct NFS (dNFS)

This issue relates to the Direct NFS feature of Oracle introduced in 11g. You can configure the
Oracle Database to access NFS version 3 servers directly using Direct NFS. This enables the
storage of data files on a supported NFS system. See the "Database Installation Guide" for your
platform and Oracle version for further details of this feature.

Editions

This issue relates to the use of Oracle Editions which allows editioned objects to be used. See
the documentation for details of Oracle Editions.

Flashback

This issue relates to some Flashback feature/s in the database.


See Note:1138253.1 for details of the flashback features.

Flashback Data Archive (Oracle Total Recall)

This issue relates to use of the Flashback Data Archive feature.


See Note:470199.1 for details of this feature.

JavaVM / JSP / Corba etc..

This issue relates to the use of stored Java in the database. Note that this is separate from
general JDBC and Java issues outside of the database.

Job Queues

This issue relates to the use of job queues or the Database job scheduler options in the
database, including the DBMS_JOB and DBMS_SCHEDULER packages.

LogMiner

This issue relates to the use of LogMiner, particularly adhoc LogMiner for manually mining
redo. Note that issues in LogMiner can also affect other options which use the LogMiner code,
such as Streams and Logical Standby.

Multimedia (Formerly interMedia) (Image)

This issue relates to use of Oracle Multimedia for handling Images.


See Note:1078222.1 for details of Multimedia options.

Multimedia (Formerly interMedia) (Video)

This issue relates to use of Oracle Multimedia for handling Video.


See Note:1078222.1 for details of Multimedia options.

Oracle Data Vault

This issue relates to use of Oracle Data Vault.

Oracle Database Smart Flash Cache

This issue relates to use of Oracle Smart Flash Cache.


For more details of Database Smart Flash Cache see the 11.2 documentation.

Oracle GoldenGate (OGG) Integrated Extract

This issue relates to the use of the Oracle GoldenGate (OGG) Integrated Extract product.

Oracle Label Security

This issue is related to the use of Oracle Label Security in the database.

Oracle OLAP

This issue is related to the Oracle OLAP product.

Oracle Text (Formerly interMedia Text)

This issue relates to use of Oracle Text which allows text indexing of table content within the
database.
See Note:1087143.1 for details of Oracle Text.

Oracle Text Filters

This issue relates specifically to use of the Text filters within Oracle Text. The filters allow
non-ASCII documents to be text indexed.
See Note:1087143.1 for details of Oracle Text, including links to supported document formats
and filters.

Physical Standby Database / Dataguard

This issue is related to Standby databases, either manually configured or part of a Data Guard
configuration.
See Note:1101938.1 for information about Data Guard.

Pluggable Database (PDB) / Container


This issue can affect the Multi-Tenant feature of the database. This feature was introduced in
12c. The issue may relate to individual pluggable databases or to the container database.

RAC (Real Application Clusters) / OPS

This issue is related to the use of RAC, or for older releases is related to the use of Parallel
Server.

Read Only Database

This issue is related to a database opened in READ ONLY mode.

Recycle Bin

This issue is related to the Recycle Bin in the database.

Replication

This issue is related to the use of Advanced Replication in the database.

Resource Manager

This issue is related to the use of the Database Resource Manager.

Row Level Security / FGA

This issue is related to the use of Row Level Security (RLS) or Fine Grain Auditing (FGA) against
tables / views in the database.

Shared Server (formerly MTS)

This issue relates to the use of shared servers in the database. Often one can avoid problems
with shared servers by using dedicated connections instead.

Spatial Data

This issue is related to the Oracle Spatial product.

Spatial RTREE

This issue is related specifically to the use of Spatial RTREE indexes.

Streams / Logical Standby

This issue is related to the use of Streams and/or Logical Standby. Such issues can affect any
product / feature built on top of the Streams architecture.

Supplemental Logging

This issue affects, or is related to, the use of Supplemental Logging. When enabled,
Supplemental Logging logs additional data into the redo stream - such data is typically needed for
redo-based applications such as LogMiner, Streams, Logical Standby etc.. An issue which
affects supplemental logging can have downstream effects on these other features.

Transportable Tablespaces

This issue is related to the use of transportable tablespaces. It may be directly related to the
transport operation, or may be some effect seen later in time which relates to transported
data.

Trusted Oracle

This issue relates to the use of Trusted Oracle.

Workload repository / reporting

This issue relates to use of the workload repository or related reporting features.

XDB

This issue relates to the use of Oracle XML DB .

Programming Languages

JDBC

This issue relates to the use of JDBC .

NCOMP

This issue relates to the use of the NCOMP Native Compilation option for PLSQL. Such issues
can typically be avoided by not "ncomp"ing the code.

OCCI

This issue is related to OCCI clients.

OCI

This issue is related to the use of the Oracle Call Interface on the client side. Such issues can
affect various clients which use OCI to interface to the database. eg: OCI issues can affect pre-
compiler based clients, client SQLPLUS etc..

ODBC Driver

This issue is related to use of the Oracle ODBC driver.

PL/SQL

This issue is related to the use of PL/SQL.

PL/SQL (DBMS Packages)

This issue affects the use of certain packages in the database. The actual packages affected will
typically be listed in the bug description note.

PL/SQL External Procedures (EXTPROC)

This issue affects the use of PLSQL External procedures as can be executed by the "extproc"
process from PLSQL.

Pro* Precompiler

This issue is related to the use of one of the Precompilers.

SQLJ

This issue is related to the use of SQLJ.


XDK

This issue affects the Oracle XDK.

XML

This issue is related to the processing of XML data.

Product Related

Advanced Clustered File System

This issue relates to Oracle Automatic Storage Management Cluster File System (Oracle ACFS).
This is a multi-platform file system and storage management technology that extends Oracle
Automatic Storage Management (Oracle ASM) functionality to support customer files
maintained outside of the Oracle Database. See the "Automatic Storage Management
Administrator's Guide" for more information.

Automatic Diagnostic Repository

This issue relates to the Automatic Diagnostic Repository introduced in Oracle 11g. The
Automatic Diagnostic Repository (ADR) is a file-based repository that stores database
diagnostic data such as trace files, the alert log, and Health Monitor reports. Key
characteristics of ADR include:

Unified directory structure

Consistent diagnostic data formats

Unified tool set

See the documentation for further details.

Cluster Ready Services / Parallel Server Management

This issue relates to the Cluster Ready Services / Parallel Server Management / Grid
Infrastructure elements of Oracle as used in a RAC environment.

Database Configuration Assistant

This issue relates to use of the Oracle Database Configuration Assistant (DBCA). Often one can
use manual steps to perform the same task as DBCA.

Database Replay

This issue relates to use of Database Replay.

Database Upgrade Assistant

This issue relates to use of the Oracle Database Upgrade Assistant (DBUA). Often one can use
manual steps to perform the same task as DBUA.

Datapatch Utility

This issue relates to the Datapatch utility, which is used for automated apply and rollback of
SQL steps of patches. See Note:1585822.1 for information about Datapatch.

Datapump Export/Import

This issue relates to the use of the Datapump Export / Import utilities of the Oracle Database.
ie: The "expdp" and/or "impdp" executables.
Note that this is different to conventional Export / Import which it replaces.
In some cases a workaround to Datapump export / import issues can be to use conventional
export / import.

DBVerify Utility

This issue relates to the DBVERIFY (DBV) utility used to check the consistency of database
files.
See Note:35512.1 for details of the DBVERIFY utility.

Exadata

This issue is related to the use of Oracle Database Machine (Exadata).

Export/Import

This issue relates to the use of the conventional Export / Import utilities of the Oracle
Database. ie: The "exp" and/or "imp" executables.
Note that this is different to Datapump Export / Import which replaces conventional export /
import .
In some cases a workaround to conventional export / import issues can be to use Datapump
export / import.

FailSafe

This issue is related to use of the Oracle Failsafe product (OPFS).

Intelligent Agent

This issue relates to the Intelligent Agent process.

J-Publisher

This issue relates to use of the J-Publisher product.

LDAP / Oracle Internet Directory

This issue is related to the use of LDAP features in the database, including the use of Oracle
Internet Directory from the database. This includes the DBMS_LDAP package and the use of
Enterprise Users configured in OID.

Net Configuration Assistant

This issue is related to the use of the Net Configuration Assistant.

OLEDB

This issue relates to the OLEDB product.

Oracle Administration Assistant

This issue is related to the use of the Oracle Administration Assistant

Oracle COM Automation


This issue relates to use of Oracle COM Automation

Oracle Counters for Windows Performance Monitor

This issue relates to the Oracle Counters for Windows Performance Monitor product.

Oracle Data Mining

This issue relates to the use of Oracle Data Mining.


See Note:1087643.1 for details of this feature.

Oracle Enterprise Manager

This issue affects Enterprise Manager.

Oracle Lite

This issue relates to the Oracle Lite product.

Oracle ONS

This issue relates to Oracle ONS.

Oracle Type Translator

This issue relates to use of Oracle Type Translator

Oracle Universal Installer

This issue relates to use of the Oracle Universal Installer product. Note that this is different to an
issue where an install / patch operation is not performed correctly.

Portal (MOD_PLSQL)

This issue is related to the use of MOD_PLSQL in the database.

RMAN (Recovery Manager)

This issue relates to use of Recovery Manager (RMAN).


See Note:1116484.1 for details of RMAN.

Spatial / Spatial Advisor

This issue relates to use of Spatial Advisor

SQL*Loader

This issue is related to the use of the SQL*Loader utility.

SQL*Plus

This issue relates to use of the SQL*Plus client.

Ultra Search

This issue is related to the use of Oracle UltraSearch.

Wallet Manager

This issue relates to use of Wallets in Oracle, and especially to issues with Oracle Wallet
Manager itself.

Workspace Manager

This issue relates to the use of Oracle Workspace Manager.

Miscellaneous

Critical Patch Update molecule

This bug# is a special marker bug for a molecule in a Critical Patch Update. Refer to the
relevant Critical Patch Update documentation for details of what security issues the molecule
addresses.
See Note:467881.1 for details of the latest Critical Patch Update.

Fine Grained Dependencies

From 11g the Oracle database records more detailed information about dependencies
between objects. For example if table T has columns C1, C2 and C3 but view V selects only
columns C1 and C2 then adding a fourth column C4 to T or changing the definition of C3 has no
need to invalidate the view. Prior to 11g any change to T would cause V to be invalidated.
From 11g, only changes that affect the parts of T that V depends on will cause V to be
invalidated. Similar fine grained dependency checking applies to other types of objects too,
especially PL/SQL library units, and is known as fast validation.

If you have problems with Fine Grained Dependencies see help notes in Note:1061696.1 under
"11g and Fine Grained Dependency Checking"
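The example in the text above can be sketched as follows (the object names are illustrative); from 11g the view stays valid because it does not reference the added column:

```sql
CREATE TABLE t (c1 NUMBER, c2 NUMBER, c3 NUMBER);
CREATE VIEW v AS SELECT c1, c2 FROM t;

-- Prior to 11g this DDL would invalidate V; from 11g V remains valid
ALTER TABLE t ADD (c4 NUMBER);

-- Check the view's status
SELECT status FROM user_objects WHERE object_name = 'V';
```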

Large File / Memory Issues (Eg: 2Gb)

This issue relates to number boundary issues such as 2Gb limits for filesize, memory size etc..

NUMA Related

This issue is related to the use of NUMA features within the Oracle database.

Year 2000 / Date limit Issues

This issue is related to date limits, particularly year 2000 issues.

Miscellaneous

Description

This is a brief description of the bug itself, including any workaround if known. Sometimes this
will be very short such as "A dump can occur in XXXXX" and sometimes this will include a good
description with an example. Any workaround should be treated with caution. Any hidden
parameters or events mentioned should not be used unless clarified with Oracle Support.
Streams Complete Reference FAQ (Doc ID 752871.1)

Last Major Update: 07-Mar-2014

In this Document

Purpose
Questions and Answers

STREAMS CONCEPTS

What is streams ?
Architecture / Components of streams
What are Rules & Rulesets ?
What are transformations ?
Why we need streams ?
What are the advantages of streams over Advanced Replication ?
What are Streams Queue Buffers ?
What are the different types of Streams Configuration ?
What is Spilling ?
Can Oracle Streams be used between different hardware platforms and OS versions ?
New Streams features with different versions of Oracle
What is the significance of AQ_TM_PROCESS with respect to Oracle Streams ?
Why we need instantiation under streams environment ?
What is streams conflict resolution ?
What is Streams tags ?
What is Streams Flow control ?
What are the different states of Streams processes ?
Streams Heterogeneous services
What are the different SCN's with respect to Streams ?

STREAMS CONFIGURATION / ADMINISTRATION

How to set up streams ?
How to perform Instantiation ?
How to stop/start streams components ?
Streams Best practices
Streams performance best practices
How to minimize the Performance Impact of Batch Processing ?
How to convert current existing Advanced replication set up to Streams ?
How are LOBs queued and propagated? What happens when there is an update/insert operation involving 1GB sized LOB ?
When does Streams read the Oracle on-line Redo Logs ?
Does the presence of Stream replication affect log-switching/archiving mechanism ?
When using Streams, how can I tell which archived logs can be removed from disk? How can I tell which archivelogs are needed by Streams capture ?
Can I still use Streams if I can't set the init.ora parameter GLOBAL_NAMES to TRUE ?
Can Streams be used to maintain availability during migrations between platforms or database upgrades ?
Backup considerations in Streams environment
Streams Specific Patches
How to recreate the capture process ?
How to remove the Streams configuration ?
Minimum Archive Log Necessary To Restart Capture Process
Streams Idle Wait events
Streams Configuration views
How to Determine Latency for Streams Processes at Different States ?
Initialization Parameters Relevant to Streams
How to Purge Apply Spill ?
How to skip/ignore a Transaction at Apply site ?
Upgrading the Database with Streams in place.
Supplied PL/SQL packages for administering Streams.

STREAMS TROUBLESHOOTING

How to trace Streams components ?
How to troubleshoot Streams components ?
Streams Healthcheck
Streams Monitor
Is Streams replication certified with PeopleSoft tables? Any restrictions ?
References

Applies to:

Oracle Database - Enterprise Edition - Version 9.2.0.1 to 11.1.0.6 [Release 9.2 to 11.1]
Information in this document applies to any platform.
Oracle Server Enterprise Edition - Version: 9.2 to 11.1

Purpose

This document is intended to answer Frequently Asked Questions about Streams.

Questions and Answers


STREAMS CONCEPTS

What is streams ?

Oracle Streams is an information sharing feature that provides replication, message queuing,
data warehouse loading, and event notification. It is also the foundation behind Oracle
Advanced Queuing and Oracle Data Guard SQL Apply. Streams is extremely flexible. Using the
Streams components in different combinations, Streams can also be used to minimize
downtime during application upgrades, and to audit DML/DDL activity within the Oracle database.

Architecture / Components of Streams

Capture

The capture process captures database changes and application-generated messages into the staging area (queue).
Changes are captured in two ways:
With implicit capture, the server captures DML and DDL events at a source database.
With explicit capture, applications explicitly generate events and place them in the staging area.

Implicit capture mines the redo log, either by hot mining the online redo log or, if necessary, by mining archived log files. After retrieving the data, the capture process formats it into a Logical Change Record (LCR) and places it in a staging area for further processing. The capture process can intelligently filter LCRs based upon defined rules, so only changes to desired objects are captured.

User applications can explicitly enqueue user messages representing events into the staging area. These messages can be formatted as LCRs, which allows them to be consumed by the apply engine, or they can be formatted for consumption by another user application.

Staging

Once captured, events are placed in a staging area. The staging area is a queue that provides a service to store and manage captured events. Subscribers examine the contents of the staging area and determine whether or not they have an interest in an event. A subscriber can be a user application, another staging area (usually on another system), or the default apply process.

Propagation

If the subscriber is another staging area, the event is propagated to the other staging area, either within the same database or in a remote database, as appropriate. To simplify network routing and reduce WAN traffic, events need not be sent to all databases and applications. Rather, they can be directed through staging areas on one or more systems until they reach the subscribing system. For example, an event may propagate via a hub database that does not actually apply the event. A single staging area can stage events from multiple databases, simplifying setup and configuration.

Consumption

Events in a staging area are consumed by the apply engine, where the changes they represent are applied to a database, or they are consumed by an application. Oracle Streams includes a flexible apply engine that allows use of a standard or custom apply function, enabling data to be transformed when necessary. Support for explicit dequeue allows application developers to use Oracle Streams to notify applications of changes to data, while still leveraging the change capture and propagation features of Oracle Streams.

Default Apply

The default apply engine applies DML changes and DDL changes
represented by implicitly or explicitly captured LCRs. The default
apply engine will detect conflicts where the destination row has
been changed and does not contain the expected values. If a
conflict is detected, then a resolution routine may be invoked.

User-Defined Function Apply

The apply engine can pass the LCR or a user message to a user-
defined function. This provides the greatest amount of flexibility
in processing an event. A typical application of a user-defined
function would be to reformat the data represented by the LCR
before applying it to a local table, for example, field format,
object name and column name mapping transformations. A
user-defined function could also be used to perform column
subsetting, or to update other objects that may not be present
in the source database.

Explicit Dequeue
User applications can explicitly dequeue LCRs or user messages from the receiving staging area. This allows a user application to efficiently access the data in a Streams staging area. Streams can send notifications to registered PL/SQL or OCI functions, giving the applications an alternative to polling for new messages. Of course, applications can still poll, or even wait, for new subscribed messages in the staging area to become available.

What are Rules & Rulesets ?

Streams lets users control which information to share and where to send it by specifying rules. At the highest level, users can indicate whether they want to capture, propagate or apply changes at the table, schema, or global (database) level. For more complex requirements, for example, to apply only a particular subset of data at a given location, users can specify a rule condition similar to the condition in the WHERE clause of a SQL query. If necessary, related rules can be grouped into rule sets.

Note.382826.1 Understanding Rules and Rulesets
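As a minimal sketch of the rule-creation step described above, the DBMS_STREAMS_ADM package can generate table-level rules and attach them to a Streams client in one call. The capture, queue and owner names below are placeholders, not taken from this note:

```sql
-- Sketch: create positive rule set entries so that DML changes to
-- HR.EMPLOYEES are captured into a Streams queue.
-- 'streams_capture' / 'strmadmin.streams_queue' are hypothetical names.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'hr.employees',
    streams_type   => 'capture',
    streams_name   => 'streams_capture',
    queue_name     => 'strmadmin.streams_queue',
    include_dml    => TRUE,
    include_ddl    => FALSE,
    inclusion_rule => TRUE);   -- TRUE adds the rule to the positive rule set
END;
/
```

For subset conditions resembling a WHERE clause, DBMS_STREAMS_ADM.ADD_SUBSET_RULES serves the same purpose with a dml_condition argument.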

What are transformations ?

A transformation is a change in the form of an object participating in capture and apply, or a change in the data it holds. Transformations can include changing the datatype representation of a particular column in a table at a particular site, adding a column to a table at one site only, or including a subset of the data in a table at a particular site.

A transformation can be specified during enqueue, to transform the message to the correct type before inserting it into the staging area. It can also be specified for propagation, which may be useful for subsetting data before it is sent to a remote site. Finally, it can be specified at dequeue or local apply, which can be useful for formatting a message in a manner appropriate for a specific destination.

Note.455797.1 Transformations in streams

Why do we need Streams?

Replication of data across different sites
Message queuing
Data warehouse loading
Event management and notification
Data protection
Database availability during upgrade and maintenance operations

What are the advantages of Streams over Advanced Replication?

Streams provides enhanced functionality and flexibility when compared to Advanced Replication, including some features that Advanced Replication does not offer:
No quiesce is needed for DDL changes.
Setting up Streams is online, whereas Advanced Replication requires a minimal downtime on the source.
Streams allows a different structure for tables on source and target (even the schema can be different), which is not possible with Advanced Replication.
Streams captures transactions by mining the redo logs, whereas Advanced Replication uses triggers, which have a performance overhead.
Streams can be used to replicate only a subset of data.
Streams can be used for heterogeneous support.

What are Streams Queue Buffers ?

Note.230901.1 What are Streams Queue Buffers?

What are the different types of Streams Configuration ?

Unidirectional Streams
Bi-directional Streams
Hub-spoke Configuration
Downstream Capture
Real time Downstream Capture

Note 733691.1 How To Setup Schema Level Streams Replication with an Archived-Log Downstream Capture Process with Implicit Log Assignment
Note 753158.1 How To Configure Streams Real-Time Downstream Environment
Note 274456.1 Downstream Capture

What is Spilling?

The logical change records are staged in a memory buffer associated with the queue; they are not ordinarily written to disk. If the messages/LCRs stay in the buffer for a period of time without being dequeued, or if there is not enough space in memory to hold all of the captured events, then they are spilled to disk.

AQ$_<queue_table_name>_P is the table that stores the captured events that spill from memory.
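To see how much of a buffered queue is currently spilled, the V$BUFFERED_QUEUES view can be queried; this is a monitoring sketch, not part of the note above:

```sql
-- Compare in-memory vs spilled message counts for each buffered queue.
SELECT queue_schema,
       queue_name,
       num_msgs   AS msgs_in_memory,
       spill_msgs AS msgs_spilled
FROM   v$buffered_queues;
```

A persistently high MSGS_SPILLED relative to MSGS_IN_MEMORY usually points at a slow consumer or an undersized Streams pool; see Note.259609.1 referenced below.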

Can Oracle Streams be used between different hardware platforms and OS versions?

Streams can be used across different hardware, different operating systems, and different Oracle versions.

New Streams features with different versions of Oracle?

Note:336266.1 10g Release1 Streams New Features


Note.392809.1 11g Streams New Features

What is the significance of AQ_TM_PROCESSES with respect to Oracle Streams?

Note.305662.1 Master Note for AQ Queue Monitor Process (QMON)
Note.428441.1 "Warning Aq_tm_processes Is Set To 0" Message in Alert Log After Upgrade to 10.2.0.3

Why do we need instantiation in a Streams environment?

Note 429543.1 Purpose of instantiation in Streams environment

What is streams conflict resolution ?

Note.230049.1 Streams Conflict Resolution
Note.401275.1 Handling Apply Insert And Delete Conflicts In A Streams Environment - Error Handlers

What are Streams tags?

Every redo entry in the redo log has a tag associated with it. The datatype of the tag is RAW. By default, when a user or application generates redo entries, the value of the tag is NULL for each redo entry, and a NULL tag consumes no space in the redo entry. The size limit for a tag value is 2000 bytes.

In Streams, rules can have conditions relating to tag values to control the behavior of Streams clients. For example, a tag can be used to determine whether an LCR contains a change that originated in the local database or at a different database, so that you can avoid change cycling (sending an LCR back to the database where it originated). Also, a tag can be used to specify the set of destination databases for each LCR. Tags can be used for other LCR tracking purposes as well.

You can specify Streams tags for redo entries generated by a certain session or by an apply process. These tags then become part of the LCRs captured by a capture process. Typically, tags are used in Streams replication environments, but you can use them whenever it is necessary to track database changes and LCRs.

You can control the value of the tags generated in the redo log in
the following ways:

1. Use the DBMS_STREAMS.SET_TAG procedure to specify the value of the redo tags generated in the current session. When a database change is made in the session, the tag becomes part of the redo entry that records the change. Different sessions can have the same tag setting or different tag settings.

2. Use the CREATE_APPLY or ALTER_APPLY procedure in the DBMS_APPLY_ADM package to control the value of the redo tags generated when an apply process runs. All sessions coordinated by the apply process coordinator use this tag setting. By default, redo entries generated by an apply process have a tag value that is the hexadecimal equivalent of '00' (double zero).
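The two mechanisms above can be sketched as follows; the apply process name is a placeholder:

```sql
-- 1. Session-level tag: redo generated by this session will carry tag 0x1D.
BEGIN
  DBMS_STREAMS.SET_TAG(tag => HEXTORAW('1D'));
END;
/

-- Read the current session tag back.
SELECT DBMS_STREAMS.GET_TAG() FROM dual;

-- 2. Apply-level tag: redo generated by the (hypothetical) apply process
--    'streams_apply' will carry tag 0x11 instead of the default 0x00.
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'streams_apply',
    apply_tag  => HEXTORAW('11'));
END;
/
```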

Note.550593.1 Minimize Performance Impact of Batch Processing in Streams

10g
http://download.oracle.com/docs/cd/B19306_01/server.102/b14228/rep_tags.htm#STREP00

11g
http://docs.oracle.com/cd/E11882_01/server.112/e10705/rep_tags.htm#i1007387

What is Streams Flow control ?

Note.746247.1 Troubleshooting Streams Capture when status is Paused For Flow Control
Note.259609.1 Script to Prevent Excessive Spill of Message From the Streams Buffer Queue To Disk

What are the different states of Streams processes ?

Note.471713.1 Different States of Capture & Apply Process

Streams Heterogeneous services

Oracle Streams is an open information sharing solution. Each element supports industry standard languages and standards. Streams supports capture and apply from Oracle to non-Oracle systems. Changes can be applied to a non-Oracle system via a transparent gateway. Streams also includes an API to allow non-Oracle data sources to easily submit or receive change records, allowing for heterogeneous data movement in both directions. In addition, messages can be sent to and received from other message queuing systems such as MQ Series via the Message Gateway.

What are the different SCNs with respect to Streams?


First SCN

The first SCN is the lowest SCN in the redo log from which a
capture process can capture changes. If you specify a first SCN
during capture process creation, then the database must be able
to access redo data from the SCN specified and higher.

Start SCN

The start SCN is the SCN from which a capture process begins to
capture changes. You can specify a start SCN that is different
than the first SCN during capture process creation, or you can
alter a capture process to set its start SCN. The start SCN does
not need to be modified for normal operation of a capture
process. Typically, you reset the start SCN for a capture process
if point-in-time recovery must be performed on one of the
destination databases that receive changes from the capture
process. In these cases, the capture process can be used to
capture the changes made at the source database after the
point-in-time of the recovery.

Captured SCN

The system change number (SCN) that corresponds to the most recent change scanned in the redo log by a capture process.

Applied SCN

A system change number (SCN) relating to a capture process that corresponds to the most recent message dequeued by an apply process that applies changes captured by the capture process.

Required Checkpoint SCN

The SCN that corresponds to the lowest checkpoint for which a capture process requires redo data is the required checkpoint SCN. The redo log file that contains the required checkpoint SCN, and all subsequent redo log files, must be available to the capture process. If a capture process is stopped and restarted, then it starts scanning the redo log from the SCN that corresponds to its required checkpoint SCN. The required checkpoint SCN is important for recovery if a database stops unexpectedly. Also, if the first SCN is reset for a capture process, then it must be set to a value that is less than or equal to the required checkpoint SCN for the capture process. You can determine the required checkpoint SCN for a capture process by querying the REQUIRED_CHECKPOINT_SCN column in the DBA_CAPTURE data dictionary view.

Instantiation SCN

The system change number (SCN) for a table which specifies that
only changes that were committed after the SCN at the source
database are applied by an apply process.
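All of the SCNs described above are exposed through the data dictionary; these monitoring queries are a sketch, run as a suitably privileged user:

```sql
-- Capture-side SCNs in one query.
SELECT capture_name,
       first_scn,
       start_scn,
       required_checkpoint_scn,
       captured_scn,
       applied_scn
FROM   dba_capture;

-- Instantiation SCNs recorded for objects at the destination database.
SELECT source_object_owner,
       source_object_name,
       source_database,
       instantiation_scn
FROM   dba_apply_instantiated_objects;
```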

STREAMS CONFIGURATION / ADMINISTRATION

How to set up Streams?

Note.301431.1 How To Setup One-Way SCHEMA Level Streams Replication
Note.224255.1 9i: How To Setup Oracle Streams Replication
Note.459922.1 How to setup Database Level Streams Replication
Note.471845.1 Streams Bidirectional Setup

How to perform Instantiation?

Note 550955.1 Instantiating Objects in a 10g Streams Using Original Export/Import and Data Pump Export/Import - Example
Note 551106.1 Instantiating Objects in Streams Using Transportable Tablespace or RMAN

How to stop/start streams components ?

Note.268994.1 How to Start/Stop Apply Process in Streams
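Each Streams component has its own administration package with start/stop procedures; the component names below are placeholders for your own configuration:

```sql
-- Capture process
EXEC DBMS_CAPTURE_ADM.STOP_CAPTURE(capture_name => 'streams_capture');
EXEC DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'streams_capture');

-- Propagation
EXEC DBMS_PROPAGATION_ADM.STOP_PROPAGATION(propagation_name => 'streams_prop');
EXEC DBMS_PROPAGATION_ADM.START_PROPAGATION(propagation_name => 'streams_prop');

-- Apply process
EXEC DBMS_APPLY_ADM.STOP_APPLY(apply_name => 'streams_apply');
EXEC DBMS_APPLY_ADM.START_APPLY(apply_name => 'streams_apply');
```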

Streams Best practices

Note.418755.1 Master Note for Streams Recommended Configuration
Note.297273.1 9i Streams Recommended Configuration
Note.413353.1 10.2 Best Practices For Streams in RAC environment
Note.304268.1 9i Best Practices For Streams RAC Setup

Streams performance best practices

Note.335516.1 Master Note for Streams Performance Recommendations

How to minimize the Performance Impact of Batch Processing?

Note 550593.1 Minimize Performance Impact of Batch Processing in Streams

How to convert an existing Advanced Replication setup to Streams?

Note.249443.1 Migrate 9i Advanced Replication to 10g Streams

How are LOBs queued and propagated? What happens when there is an update/insert operation involving a 1GB sized LOB?

A single Logical Change Record (LCR) for all non-LOB columns is queued. For each LOB column, LCRs based on the chunk size of the LOB indicated in the redo log are queued. The number of LCRs queued depends on the size of the LOB and the chunk size indicated. As an implementation detail: Streams uses a queue buffer and may buffer all LOB chunks, which may cause spill-over to disk.

When does Streams read the Oracle on-line Redo Logs? Does the
presence of Stream replication affect log-switching/archiving
mechanism?

Streams Capture reads the changes after they are written to the redo log. Streams is independent of the log-switching and archiving mechanism. Streams can seamlessly switch between reading the archived logs and the online logs, and back again, if necessary.

In Streams, Logical Change Records (LCRs) are eagerly captured and propagated. Each captured change is staged and propagated independent of the transaction boundary. The APPLY process, however, only applies committed transactions.

When using Streams, how can I tell which archived logs can be removed from disk? How can I tell which archived logs are needed by Streams capture?

Note.290143.1 Minimum Archived Log Necessary to Restart 10g and 11g Streams Capture Process
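As a sketch of the approach described in Note.290143.1, the archived logs a capture process still needs are those registered for it whose SCN range has not yet passed its required checkpoint SCN:

```sql
-- Archived log files still needed by each capture process:
-- anything whose NEXT_SCN is at or above the required checkpoint SCN.
SELECT c.capture_name,
       r.name AS archived_log
FROM   dba_registered_archived_log r,
       dba_capture c
WHERE  r.consumer_name = c.capture_name
AND    r.next_scn >= c.required_checkpoint_scn;
```

Logs below that boundary are candidates for removal from disk as far as Streams capture is concerned (normal backup retention rules still apply).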

Can I still use Streams if I can't set the init.ora parameter GLOBAL_NAMES to TRUE?

The init.ora parameter GLOBAL_NAMES should be set to TRUE in any distributed system. This parameter requires that the GLOBAL_NAME (from the GLOBAL_NAME view) of the database link's target database match the database link name. Streams does not enforce this setting and will run independent of it. In a replicated scenario, setting this parameter to TRUE is an additional confirmation that the correct database is being updated.
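To verify the convention described above, the setting and the names involved can be checked with simple queries:

```sql
-- Check the parameter on the source database.
SHOW PARAMETER global_names

-- On the target database: its global name.
SELECT * FROM global_name;

-- On the source database: the link names, which should match
-- the target's global name when GLOBAL_NAMES = TRUE.
SELECT owner, db_link FROM dba_db_links;
```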

Can Streams be used to maintain availability during migrations between platforms or database upgrades?

Yes, Streams can be used to ensure database availability to users while a database maintenance or migration is in progress. See Appendix B of the Oracle 10g Streams Concepts manual for further details.

http://download.oracle.com/docs/cd/B19306_01/server.102/b14229/ap_strup.htm#i642623

Backup considerations in Streams environment

Note.336265.1 Best Practices For Managing Backups In A Streams Environment
Note.421176.1 Usage of RMAN in Streams Environment

Streams Specific Patches

Note.437838.1 Streams Specific Patches

How to recreate the capture process ?

Note.471695.1 Required Steps to Recreate a Capture Process

How to remove the Streams configuration ?

Note.276648.1 Remove Streams Procedure for 9.2.0.X

In 10g there is a procedure that enables you to remove the entire Streams configuration locally.

NOTE: THIS WILL REMOVE THE ENTIRE STREAMS SETUP IN THE GIVEN DATABASE. IF YOU JUST WANT TO REMOVE ONE STREAMS CONFIGURATION, THIS CANNOT BE USED.

It is part of the DBMS_STREAMS_ADM package and is run simply as-is:

SQL> connect / as sysdba
connected.

SQL> execute DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();

You can run this multiple times, so if the procedure fails to complete, there is no harm in running it again.

Note that the procedure will not remove the STRMADMIN user, and it needs to be run separately, once at each database where a Streams environment resides.

Minimum Archive Log Necessary To Restart Capture Process

Note.275323.1 Minimum Archive Log Necessary To Restart Capture Process - 9iR2
Note.290143.1 Minimum Archived Log Necessary to Restart 10g and 11g Streams Capture Process

Streams Idle Wait events

Note.461279.1 Streams Idle Wait Events in 10g

Streams Configuration views

Note.735976.1 All Replication Configuration Views For Streams, AQ, CDC and Advanced Replication

How to determine latency for Streams processes at different states?

Note 729860.1 Troubleshooting Queries in Streams

Initialization Parameters Relevant to Streams

For Oracle version 11.2


For Oracle version 11.1
For Oracle version 10.2

How to Purge Apply Spill ?

Note.472440.1 How to Purge Apply Spill in 10.2

How to skip/ignore a Transaction at Apply site ?

Note.422252.1 How to skip a transaction at APPLY site

Upgrading the Database with Streams in place.

Note.733853.1 DATABASE UPGRADE FROM 9.2 TO 10.2 IS VERY SLOW FOR STREAMS ENABLED DATABASE

Supplied PL/SQL packages for administering Streams.

DBMS_STREAMS
DBMS_STREAMS_ADM
DBMS_STREAMS_AUTH
DBMS_STREAMS_MESSAGING
DBMS_STREAMS_TABLESPACE_ADM
DBMS_CAPTURE_ADM
DBMS_APPLY_ADM
DBMS_PROPAGATION_ADM
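A typical first administrative step with these packages is granting Streams administrator privileges to the administration user; the user name below is a placeholder:

```sql
-- Sketch: make the (hypothetical) STRMADMIN user a Streams administrator.
BEGIN
  DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'strmadmin');
END;
/
```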

STREAMS TROUBLESHOOTING

How to trace Streams components ?

Note 855964.1 How to do SQL Trace for the Streams Processes

How to troubleshoot Streams components ?

Note.265201.1 Master Note for Troubleshooting Streams Apply Errors ORA-1403, ORA-26787 or ORA-26786, Conflict Resolution
Note.746247.1 Troubleshooting Streams Capture when status is Paused For Flow Control

Streams Healthcheck

Note 273674.1 Streams Configuration Report and Health Check Script

Streams Monitor

Note.290605.1 Oracle Streams STRMMON Monitoring Utility

Is Streams replication certified with PeopleSoft tables? Any restrictions?

For EBS:

It is true that Streams can replicate tables that are part of EBS 11.5, but it is also true that EBS 11.5 cannot be used at the target on these tables. In other words, EBS tables can be replicated to a target database for the use of a third-party application, but not for use by EBS 11.5. Hence active-active replication is not supported with E-Business Suite.

You can try to replicate some PeopleSoft tables, but you cannot use active-active replication, because PeopleSoft uses its internal metadata to make any change to its tables. Thus the certification should be done by PeopleSoft, as they best know their own functionality.

Still have questions?

To discuss this information further with Oracle experts and industry peers, we encourage you to review, join or start a discussion via the My Oracle Support Streams and Distributed Database Community.

References
NOTE:259609.1 - Script to Prevent Excessive Spill of Message
From the Streams Buffer Queue To Disk
NOTE:421176.1 - Usage of RMAN in Streams Environment
NOTE:455797.1 - Streams Transformation
NOTE:422252.1 - How to Skip a Transaction at Apply Site
NOTE:428441.1 - "Warning: Aq_tm_processes Is Set To 0"
Message in Alert Log After Upgrade to 10.2.0.3 or Higher
NOTE:429543.1 - Purpose of Instantiation in Streams
Environment
NOTE:437838.1 - Streams Recommended Patches
NOTE:418755.1 - Master Note for Streams Recommended
Configuration
NOTE:230901.1 - What are Streams Queue Buffers?
NOTE:550955.1 - Instantiating Objects Using Original
Export/Import and Data Pump Export/Import - Example
NOTE:336266.1 - 10gR1 Streams New Features
NOTE:382826.1 - Understanding Rules and Rulesets
NOTE:392809.1 - 11g R1 Streams New Features
NOTE:471845.1 - Streams Bi-Directional Setup
NOTE:301431.1 - How To Setup One-Way SCHEMA Level
Streams Replication
NOTE:304268.1 - 9i Best Practices For Streams RAC Setup
NOTE:305662.1 - Master Note for AQ Queue Monitor Process
(QMON)
NOTE:335516.1 - Master Note for Streams Performance
Recommendations
NOTE:336265.1 - Best Practices For Managing Backups In A
Streams Environment
NOTE:1264598.1 - Master Note for Streams Downstream
Capture - 10g and 11g [Video]
NOTE:224255.1 - 9i: How To Setup Oracle streams replication.
NOTE:249443.1 - Migrate 9i Advanced Replication to 10g
Streams
NOTE:459922.1 - How to setup Database Level Streams
Replication
NOTE:746247.1 - Troubleshooting Streams Capture when status
is Paused For Flow Control
NOTE:753158.1 - How To Configure an Oracle Streams Real-Time
Downstream Capture Environment
NOTE:551106.1 - Instantiating Objects in Streams Using
Transportable Tablespace or RMAN
NOTE:472440.1 - How to Purge Apply Spilled Transactions in
Streams Environment.
NOTE:550593.1 - Minimize Performance Impact of Batch
Processing in Streams
NOTE:733853.1 - Database Upgrade From 9.2 To 10.2 Very Slow
For Streams Enabled Database
NOTE:735976.1 - All Replication Configuration Views For
Streams, AQ, CDC and Advanced Replication
NOTE:230049.1 - Streams Conflict Resolution
NOTE:789445.1 - Master Note for Streams Setup and
Administration
NOTE:855964.1 - How to do SQL Trace for the Streams Processes
NOTE:733691.1 - How To Setup Schema Level Streams
Replication with a Downstream Capture Process with Implicit
Log Assignment
NOTE:265201.1 - Master Note for Troubleshooting Streams
Apply Errors ORA-1403, ORA-26787 or ORA-26786,Conflict
Resolution
NOTE:729860.1 - Troubleshooting Queries in Streams
NOTE:268994.1 - How to Start and Stop Apply Process in
Streams
NOTE:273674.1 - Streams Configuration Report and Health
Check Script
NOTE:274456.1 - Downstream Capture
NOTE:275323.1 - Minimum Archive Log Necessary To Restart
Capture Process - 9iR2
NOTE:276648.1 - Remove Streams Procedure for 9.2.0.X
NOTE:461279.1 - Streams Idle Wait Events in 10g
NOTE:471695.1 - Required Steps to Recreate a Capture Process.
NOTE:401275.1 - Handling Apply Insert And Delete Conflicts In A
Streams Environment - Error Handlers
NOTE:290143.1 - Minimum Archived Log Necessary to Restart
10g and 11g Streams Capture Process
NOTE:290605.1 - Oracle Streams STRMMON Monitoring Utility
NOTE:297273.1 - 9i Streams Recommended Configuration
NOTE:471713.1 - Different States of Capture & Apply Process
NOTE:313279.1 - Master Note for Troubleshooting Streams
capture 'WAITING For REDO' or INITIALIZING
NOTE:413353.1 - 10.2 Best Practices For Streams in RAC
environment

Related

Products

Oracle Database Products > Oracle Database Suite > Oracle Database > Oracle Database -
Enterprise Edition > Streams (Replication and Messaging)

Keywords

ORA-1403; ORA-26786; ORA-26787; STREAMS


Errors

ORA-1403; ORA-26786; ORA-26787