
Get a handle on DDL using Oracle 9iR2 Streams

NoCOUG Summer Conference, Aug. 19, 2004

4750 Walnut Street, Suite 106


Boulder, Colorado 80301

303.938.8282 x115
720.219.3773 (mobile)
jameskoopmann@confio.com
www.confio.com

James F. Koopmann
Director of Technical Consulting

James F. Koopmann
Where to Find Me on the NET

- N years of IT / core RDBMS experience
- Oracle (V6-10g); 8i & 9i OCP
- Writer
  - 3 technical articles a month
  - Database-centric vendor reviews
  - Database general-interest issues
  - Various publications & portals
- Forum expert / moderator
- Speaker: Here, There, & Anywhere

Content
1. The DBA nightmare
2. How do you detect DDL changes
3. STREAMS overview
4. STREAMS environment
5. STREAMS setup
6. STREAMS sample DDL session
7. OEM
8. Sweet Dreams for the DBA
9. Running STREAMS

The DBA Nightmare


Problem Detection & Viable Solutions

Problems
- Are problems problems only if seen?
- Is it a problem if no one sees a benefit?
- Should I be working on something that doesn't solve a problem?
- Problems are caused by change

We need to be able to answer the hard questions
- What has happened?
- When did it happen?
- Will it happen again?

We have to fix problems
- Reduce finger pointing
- Research viable sources -> Test / Validate -> Acceptance
- Be wary of "Your mileage may vary" type statements

The DBA Nightmare


Lazy Detection Methodology Shortcomings

- Your actions are reactive, not proactive, in nature
- The events that trigger an investigation are often very specific to an incident, narrow in scope, and the solutions typically do not take the full health of a database into consideration
- Most of your time is spent in problem investigation & detection, not problem solving
- Investigation & detection inherently waste money
- Customers / users drive the work flow of the database administrators
- The database administration group is seen as ineffective

The DBA Nightmare


Intelligent Detection Methodology Benefits

- Has the issue been seen by others?
- Will someone benefit from you working on the issue?
- Are you solving a real problem that is causing pain for your users?
- Before solving the problem, are you able to determine the true root cause?
- Will solving the problem keep it from recurring?

The DBA Nightmare


What Can We Detect / Solve

The DBA Nightmare


Buffer Cache - Hit Ratio

SQL> SELECT name, block_size,
       100*(1 - (physical_reads / (db_block_gets+consistent_gets))) buffhit
     FROM v$buffer_pool_statistics;

How we size the buffer cache
1. Just issue the SQL to see the hit ratio
2. Increase the buffer cache if hit ratio is low
3. Done if hit ratio is high

The DBA Nightmare


Buffer Cache - diff'd Hit Ratio

SQL> SELECT b.name, b.block_size,
       100*(1 - ((e.physical_reads-b.physical_reads) /
       ((e.db_block_gets-b.db_block_gets)+
        (e.consistent_gets-b.consistent_gets)))) buffhit
     FROM beg_buffer_pool_statistics b, end_buffer_pool_statistics e
     WHERE b.name=e.name AND b.block_size=e.block_size;

How to diff
1. Create table beg_buffer_pool_statistics as select * from v$buffer_pool_statistics
2. Run workload through system
3. Create table end_buffer_pool_statistics as select * from v$buffer_pool_statistics
4. Just issue the SQL to see the hit ratio
5. Increase the buffer cache if hit ratio is low
6. Done if hit ratio is high
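
The snapshot steps as runnable SQL (a minimal sketch; the beg_/end_ table names are simply the ones used in the query above):

SQL> CREATE TABLE beg_buffer_pool_statistics AS
       SELECT * FROM v$buffer_pool_statistics;

     -- ... run a representative workload ...

SQL> CREATE TABLE end_buffer_pool_statistics AS
       SELECT * FROM v$buffer_pool_statistics;

     -- issue the diff'd hit ratio query, then drop both snapshot tables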

The DBA Nightmare


Buffer Cache - Advisory

SQL> SELECT name, block_size, size_for_estimate,
       estd_physical_read_factor, estd_physical_reads
     FROM V$DB_CACHE_ADVICE
     WHERE advice_status = 'ON';

How to get the statistics
1. Set db_cache_advice to READY
2. Set db_cache_advice to ON
3. Run a valid workload through the system
4. Just issue the SQL to see the report
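
The setup steps as statements (a minimal sketch; db_cache_advice is dynamically settable with ALTER SYSTEM):

SQL> ALTER SYSTEM SET db_cache_advice = READY;  -- allocate advisory memory
SQL> ALTER SYSTEM SET db_cache_advice = ON;     -- start collecting
     -- ... run a valid workload ...
SQL> SELECT name, block_size, size_for_estimate,
            estd_physical_read_factor, estd_physical_reads
     FROM V$DB_CACHE_ADVICE
     WHERE advice_status = 'ON';
SQL> ALTER SYSTEM SET db_cache_advice = OFF;    -- optional: stop collecting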

The DBA Nightmare


Buffer Cache - Advisory Report

         BLOCK  Cache  Estd Phys    Estd Phys
NAME     SIZE   Size   Read Factor  Reads
-------- -----  -----  -----------  ---------
DEFAULT  8192     48   2.1133       343669940
DEFAULT  8192     96   1.7266       280783364
DEFAULT  8192    144   1.4763       240091867
DEFAULT  8192    192   1.3573       220733606
DEFAULT  8192    240   1.2801       208181172
DEFAULT  8192    288   1.2165       197842182
DEFAULT  8192    336   1.1686       190052917
DEFAULT  8192    384   1.1202       182180544
DEFAULT  8192    432   1.0877       176884743
DEFAULT  8192    480   1.0602       172420984
DEFAULT  8192    528   1.0196       165812231
DEFAULT  8192    544   1.0000       162626093  <-- CURRENT
DEFAULT  8192    576    .9765       158797352
DEFAULT  8192    624    .9392       152735392
DEFAULT  8192    672    .9216       149879874
DEFAULT  8192    720    .9013       146571255
DEFAULT  8192    768    .885        143928671
DEFAULT  8192    816    .8726       141908868
DEFAULT  8192    864    .8607       139972381

It's all about reads
1. Don't reduce the size of your cache if you are going to incur more physical reads
2. Don't increase the size of your cache if you are not going to reduce the number of reads

How Do You Detect DDL Changes


Do I Care About Capturing DDL
What kind of shop are you in?
1. You run a tight / secure database shop (does it need to be tighter?)
2. Changes are kept to a minimum (will one slip by?)
3. The time window is not sufficiently large between DDL changes.
4. You need an inexpensive investigative approach for DDL extraction.
5. Do you hedge on the side of caution or paranoia?
6. Do you need to be concerned with each and every DDL statement?
7. Are you proactive?
8. Do you like systematic approaches to solving problems?
9. Do you need to be notified of problems?

The DBA Nightmare


Attempts at DDL stability through policy
Have you tried to set policies?
I don't know how many shops I have worked in and tried to set policies like:
- NO changes after 3:00pm on Friday
- Can't use production for Test / Stress / QA
- ALL changes must be approved through the DBA
- ALL changes must be implemented by the DBA
- ALL source must go through QA & Test
- and the list goes on and on and on and on

The reality:
- You get a call at 4am on Sunday
- You spend time with data integrity
- "This change MUST be in today"
- You don't know what anyone is doing
- No one will tell you what they are doing
- You have to pick up the pieces

Bottom line
1. Startup fallacy
2. Everyone is in a rush
3. Most don't want to do the job the right or proper way the first time

How Do You Detect DDL Changes


Methodologies
1. Take a look at change DATEs in DBA views
   - SELECT statements against the DBA views
2. Compare database objects over time
   - Take snapshot at T1
   - Take snapshot at T2
   - Compare T1 to T2
3. Use LogMiner
   - Search and extract DDL from current redo logs & archived logs
4. Streams
   - A set of database structures to capture and replicate changes from a source database to a destination database

How Do You Detect DDL Changes


Take a Look at Dates In DBA Views

Object            Date Field?  What
----------------  -----------  ----------------------
DBA_OBJECTS       YES          CREATED, LAST_DDL_TIME
DBA_TABLES        YES          LAST_ANALYZED
DBA_INDEXES       YES          LAST_ANALYZED
DBA_TAB_COLUMNS   NO
DBA_IND_COLUMNS   NO

The Good - We know when an object was added / changed
The Bad - No granularity
The Ugly - We still have to validate a change by looking at the total object
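
As a minimal sketch of this approach, a query that surfaces anything touched in the last 24 hours (the one-day window is arbitrary):

SQL> SELECT owner, object_name, object_type, created, last_ddl_time
     FROM dba_objects
     WHERE last_ddl_time > SYSDATE - 1
     ORDER BY last_ddl_time;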

How Do You Detect DDL Changes


Compare Database Objects Over Time
1. T0 - CREATE TABLE time_dba_tables with a DATE-type column
2. T1 - INSERT INTO time_dba_tables (SELECT sysdate,* FROM dba_tables)
3. T2 - INSERT INTO time_dba_tables (SELECT sysdate,* FROM dba_tables)
4. TC - SELECT <changes> FROM time_dba_tables t1, time_dba_tables t2 (see the sketch below)
   - OUTER JOINs - for what was added and deleted
   - DECODEs - for proper column comparison
   - t1.column != t2.column - WHERE clause is OR'd for the columns of interest

The Good - Build it yourself; no reliance on 3rd-party software or the database vendor
The Bad - Complicated code (you better be good)
The Ugly - Oracle could change / add tables that are important to object change
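
A minimal sketch of the TC compare step, assuming snap_time is the name of the DATE column added at T0 and that a change in num_rows or blocks is what interests you (a full version would OR in every column you care about, with DECODE/NVL guards for NULLs):

SQL> SELECT NVL(t1.table_name, t2.table_name) table_name,
            CASE WHEN t1.table_name IS NULL THEN 'ADDED'
                 WHEN t2.table_name IS NULL THEN 'DROPPED'
                 ELSE 'CHANGED' END what
     FROM (SELECT * FROM time_dba_tables WHERE snap_time = :t1) t1
          FULL OUTER JOIN
          (SELECT * FROM time_dba_tables WHERE snap_time = :t2) t2
          ON t1.owner = t2.owner AND t1.table_name = t2.table_name
     WHERE t1.table_name IS NULL        -- added since T1
        OR t2.table_name IS NULL        -- dropped since T1
        OR t1.num_rows != t2.num_rows   -- changed
        OR t1.blocks   != t2.blocks;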

How Do You Detect DDL Changes


LogMiner
1. Lets you look at the redo log activity in your database
2. You can look in the current redo logs or archived logs
3. New in 9i is the ability to look at DDL statements

How Do You Detect DDL Changes


LogMiner
1. Create a dictionary
   EXECUTE DBMS_LOGMNR_D.BUILD(
     dictionary_filename => 'dictionary.log',
     dictionary_location => '/ora/oradata/hcmc/log');

2. Specify which log files are to be mined
   EXECUTE DBMS_LOGMNR.ADD_LOGFILE(
     LogFileName => '/ora/oradata/hcmc/arch/arc00075.001',
     Options => dbms_logmnr.NEW);

3. Start LogMiner
   EXECUTE DBMS_LOGMNR.START_LOGMNR(
     DictFileName => '/ora/oradata/hcmc/log/dictionary.log');

4. Query the LogMiner views
   SELECT sql_redo FROM V$LOGMNR_CONTENTS
   WHERE operation = 'DDL';

The Good - Easy to set up and use
The Bad - Pinpointing the time when something might have happened
The Ugly - Manual process of adding logs and searching through them
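
One cleanup step is worth adding when you are done (standard package call):

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;
     -- releases the LogMiner session and its resources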

Oracle Streams
Overview
Oracle Streams
A feature within the Oracle database that allows for the replication of database structures and information between two separate databases.

1. A new feature introduced in 9iR2
2. A more robust replication environment that allows for the sharing of information and structures
3. Improved replication performance, because Streams extracts DML and DDL from the redo log files through separate background processes
4. Supports hot mining of the current active redo log for change data at the same time it is written to the log
5. Supports the mining of archived log files
6. No loss of data
7. Directed networks

Oracle Streams
Directed Networks
A directed network is a networked system of hosts that allows information to be passed to a destination database whose host is not directly accessible from the source host.
Two methods of information propagation:
1. Queue forwarding - information is received by an intermediate database and automatically forwarded to another intermediate or destination database.
2. Apply forwarding - information is received by an intermediate database, applied, captured, re-queued, and made ready for propagation to another destination database.

Streams Environment
[Diagram: a user DDL change enters HCMC (source), where CAPTURE (SET_UP_QUEUE, ADD_GLOBAL_RULES) stages it; PROPAGATION (ADD_GLOBAL_PROPAGATION_RULES) moves it to SAIGON (destination), where APPLY (SET_UP_QUEUE, CREATE TABLE history_ddl_lcrs, CREATE PROCEDURE history_ddl, ADD_GLOBAL_RULES, ALTER_APPLY) processes it]

Oracle Streams - A feature within the Oracle database that allows for the replication of database structures and information between two separate databases.

Streams Environment
[Same diagram: User DDL change -> CAPTURE at HCMC (source) -> PROPAGATION -> APPLY at SAIGON (destination)]

Data Definition Language (DDL) - SQL statements that affect the structure of database objects, such as CREATE TABLE, ALTER TABLE, DROP TABLE, and RENAME TABLE.

Streams Environment
[Same diagram]

Source database - The database that originates information to be shared within the Oracle Streams environment (here, HCMC).

Streams Environment
[Same diagram]

Capture - A capture background process (cp01-cp99) runs on the source database for each of the capture queues defined. As DDL is issued within the Oracle database, a set of capture rules governs what is important and extracts those statements from the redo logs. The statements are then stored (staged) in a capture queue on the source database.

Streams Environment
[Same diagram]

Propagation - A mechanism that, through a set of rules, picks up the captured statements from the source queue and transports them across a network to a destination database queue.

Streams Environment
[Same diagram]

Destination database - The database that receives information in the Oracle Streams environment (here, SAIGON).

Streams Environment
[Same diagram]

Apply - The apply process (one for each capture process propagating to this site) runs on the destination database and picks statements off the queue that were transported from the source database. A set of apply rules then acts on the statements, either applying the changes to the database, transforming the statements into another usable form, or rejecting them.

Streams Environment
[Same diagram]

Rules - From the capture process, through the propagation process, and into the apply process, rules govern what data are of concern, where to send the data, and what to do with that data.

Streams Environment
[Same diagram]

The Good - Able to report on every DDL statement issued without intervention
The Bad - Learning curve is a bit high
The Ugly - Intensive & cumbersome setup

Streams Environment
[Recap diagram: CAPTURE (SET_UP_QUEUE, ADD_GLOBAL_RULES) at HCMC (source) -> PROPAGATION (ADD_GLOBAL_PROPAGATION_RULES) -> APPLY (SET_UP_QUEUE, CREATE TABLE history_ddl_lcrs, CREATE PROCEDURE history_ddl, ADD_GLOBAL_RULES, ALTER_APPLY) at SAIGON (destination)]

Streams Environment
Software Requirements
1. All databases using Oracle Streams must be upgraded to version 9.2.0.2 (patch #2632931)
2. Run catalog.sql & catproc.sql after you have upgraded to version 9.2.0.2

Streams Environment
Archive Log Requirement
1. The source database must be running in ARCHIVELOG mode in order to capture changes.
   i.   No data loss
   ii.  No database hangs caused by LGWR waiting for the capture process to complete
   iii. Tune your redo log generation before implementing Streams
   iv.  Do not use the NOLOGGING option
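
If the source is not yet in ARCHIVELOG mode, the switch is the standard one (a sketch; run as SYSDBA during a maintenance window):

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> ARCHIVE LOG LIST   -- verify the mode and destination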

Streams Environment
Parameter Requirements

COMPATIBLE - 9.2.0 or higher. For Streams to work, you must set this to at least 9.2.0.

JOB_QUEUE_PROCESSES - 2 or higher. Should be at least 2 for each database in the environment, but should equal the number of dbms_jobs that can run concurrently, plus 2.

LOG_PARALLELISM - 1, if you're running one or more capture processes on the database.

LOGMNR_MAX_PERSISTENT_SESSIONS - equal to or higher than the number of capture processes. If this parameter is set too low, capture processes will not run, and it may also inhibit you from dropping the capture processes.

OPEN_LINKS - 4 or higher. Allows the distributed transactions between the source database and destination database to occur.

PARALLEL_MAX_SERVERS - current value + (3 * capture processes) + (3 * apply processes). This is just an estimate; monitor it to handle the number of parallel execution servers.

PROCESSES - current value + ((capture processes + apply processes) * 10). Again, just an estimate; most installations do not account for enough processes.

SHARED_POOL_SIZE - current size + ((capture processes + apply processes) * 10M). Should be at least 100M.

GLOBAL_NAMES - TRUE. Ensures valid database names and no loopbacks.
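
For illustration only, an init.ora fragment for a one-capture / one-apply setup (the absolute values are hypothetical; derive your own from the formulas above):

# Streams-related parameters (illustrative values)
compatible                     = 9.2.0
global_names                   = true
job_queue_processes            = 4
log_parallelism                = 1
logmnr_max_persistent_sessions = 1
open_links                     = 4
parallel_max_servers           = 12     # prior value + 3 + 3
processes                      = 220    # prior value + (1+1)*10
shared_pool_size               = 150M   # current size + 2*10M, minimum 100M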

Intermission

Streams Setup
Create Administrator
1. CONNECT sys/<passwd>@[hcmc|saigon] AS SYSDBA

Normal database stuff:

2. CREATE TABLESPACE ddlman
     DATAFILE 'C:\ORACLE\ORADATA\[HCMC|SAIGON]\DDLMAN01.DBF' SIZE 100M
     REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;

3. CREATE USER ddlman IDENTIFIED BY ddlman
     DEFAULT TABLESPACE ddlman
     QUOTA UNLIMITED ON ddlman;

4. GRANT CONNECT, RESOURCE TO ddlman;

Streams Setup
Grant Privileges to Administrator
1. CONNECT sys/<passwd>@[hcmc|saigon] AS SYSDBA

Advanced Queuing administration:
2. GRANT EXECUTE ON DBMS_AQADM TO DDLMAN;

Streams administration:
3. GRANT EXECUTE ON DBMS_STREAMS_ADM TO DDLMAN;
4. GRANT EXECUTE ON DBMS_APPLY_ADM TO DDLMAN;
5. GRANT EXECUTE ON DBMS_CAPTURE_ADM TO DDLMAN;
6. GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO DDLMAN;

Instantiation purposes:
7. GRANT EXECUTE ON DBMS_FLASHBACK TO DDLMAN;

Easy monitoring and usage:
8. GRANT SELECT_CATALOG_ROLE TO DDLMAN;
9. GRANT SELECT ANY DICTIONARY TO DDLMAN;
10. GRANT SELECT ON DBA_APPLY_ERROR TO DDLMAN;

Streams Setup
Grant Privileges to Administrator to Create Rules
1. CONNECT sys/<passwd>@[hcmc|saigon] AS SYSDBA

2. BEGIN
     DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
       privilege    => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
       grantee      => 'DDLMAN',
       grant_option => FALSE);
   END;
   /

3. BEGIN
     DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
       privilege    => DBMS_RULE_ADM.CREATE_RULE_OBJ,
       grantee      => 'DDLMAN',
       grant_option => FALSE);
   END;
   /

Streams Setup
Switch LogMiner out of the SYSTEM Tablespace
Why
1. By default, LogMiner tables are in the SYSTEM tablespace.
2. When you create a capture or apply process, Oracle creates a subset of the data dictionary to keep track of changes to structures.
3. Streams will actually keep multiple versions of object information.
4. The SYSTEM tablespace may not have enough room for these tables.

How
1. CONNECT sys/<passwd>@hcmc AS SYSDBA
2. CREATE TABLESPACE logminer
     DATAFILE 'C:\ORACLE\ORADATA\HCMC\LOGMINER01.DBF' SIZE 100M
     REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
3. EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('LOGMINER');

Streams Setup
LogMiner / Streams Issues
What
1. If you move LogMiner after you have captured or applied, you will lose the Streams dictionary changes that have been recorded.
2. The Streams data dictionary is not kept clean by Oracle, which can put greater strain on the Streams dictionary and allow it to grow uncontrollably.

To Do
1. Simple - don't forget to move LogMiner first.
2. When an object is no longer being used, clean it out of the Streams dictionary with DBMS_STREAMS_ADM.PURGE_SOURCE_CATALOG for that particular object.

Streams Setup
Database Link from source to destination
Why
1. For transporting the captured DDL from the source database to the destination database

How
1. CONNECT ddlman/ddlman@hcmc
2. CREATE DATABASE LINK saigon
     CONNECT TO ddlman IDENTIFIED BY ddlman USING 'saigon';

Streams Setup
Capture
1. CONNECT ddlman/ddlman@hcmc

2. BEGIN
     DBMS_STREAMS_ADM.SET_UP_QUEUE(
       queue_table => 'ddl_cap_table',
       queue_name  => 'ddl_cap_q',
       queue_user  => 'ddlman');
   END;
   /

3. BEGIN
     DBMS_STREAMS_ADM.ADD_GLOBAL_RULES(
       streams_type       => 'capture',
       streams_name       => 'cap_ddl',
       queue_name         => 'ddl_cap_q',
       include_dml        => false,
       include_ddl        => true,
       include_tagged_lcr => false,
       source_database    => 'hcmc');
   END;
   /

Streams Setup
Propagation Rules
1. CONNECT ddlman/ddlman@hcmc

2. BEGIN
     DBMS_STREAMS_ADM.ADD_GLOBAL_PROPAGATION_RULES(
       streams_name           => 'prop_ddl',
       source_queue_name      => 'ddl_cap_q',
       destination_queue_name => 'ddlman.ddl_apply_q@saigon',
       include_dml            => false,
       include_ddl            => true,
       include_tagged_lcr     => false,
       source_database        => 'hcmc');
   END;
   /

Streams Setup
Create Queue
1. CONNECT ddlman/ddlman@saigon

2. BEGIN
     DBMS_STREAMS_ADM.SET_UP_QUEUE(
       queue_table => 'ddl_apply_table',
       queue_name  => 'ddl_apply_q',
       queue_user  => 'ddlman');
   END;
   /

Streams Setup
Create Table to hold DDL
1. CONNECT ddlman/ddlman@saigon

2. CREATE TABLE ddlman.ddl_history(
     timestamp            DATE,
     source_database_name VARCHAR2(128),
     command_type         VARCHAR2(30),
     object_owner         VARCHAR2(32),
     object_name          VARCHAR2(32),
     object_type          VARCHAR2(18),
     ddl_text             CLOB,
     logon_user           VARCHAR2(32),
     current_schema       VARCHAR2(32),
     base_table_owner     VARCHAR2(32),
     base_table_name      VARCHAR2(32),
     tag                  RAW(10),
     transaction_id       VARCHAR2(10),
     scn                  NUMBER);

Streams Setup
Logical Change Records (LCRs)
When the capture process mines information from the redo log, it reformats that information into LCRs. Each LCR is specific to the type of information captured and completely defines the change that has occurred.

SYS.ANYDATA
This is an overloaded object type that can hold any scalar (NUMBER, VARCHAR2, CHAR, DATE) or user-defined data type. It comes with methods that let us query the true data type it holds, as well as methods to retrieve the values.
$ORACLE_HOME/rdbms/admin/dbmsany.sql
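
A tiny standalone demo of the ANYDATA methods (not part of the Streams setup itself):

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
       d  SYS.ANYDATA;
       n  NUMBER;
       rc PLS_INTEGER;
     BEGIN
       d := SYS.ANYDATA.CONVERTNUMBER(42);     -- wrap a scalar
       DBMS_OUTPUT.PUT_LINE(d.GETTYPENAME());  -- prints SYS.NUMBER
       rc := d.GETNUMBER(n);                   -- unwrap the value
       DBMS_OUTPUT.PUT_LINE(n);
     END;
     /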

Streams Setup
Create Procedure to handle DDL
1. CONNECT ddlman/ddlman@saigon

2. CREATE PROCEDURE history_ddl(in_any IN SYS.ANYDATA) IS
     lcr      SYS.LCR$_DDL_RECORD;
     rc       PLS_INTEGER;
     ddl_text CLOB;
   BEGIN
     rc := in_any.GETOBJECT(lcr);
     DBMS_LOB.CREATETEMPORARY(ddl_text, TRUE);
     lcr.GET_DDL_TEXT(ddl_text);
     INSERT INTO ddlman.ddl_history
     VALUES(SYSDATE, lcr.GET_SOURCE_DATABASE_NAME(), lcr.GET_COMMAND_TYPE(),
            lcr.GET_OBJECT_OWNER(), lcr.GET_OBJECT_NAME(), lcr.GET_OBJECT_TYPE(),
            ddl_text, lcr.GET_LOGON_USER(), lcr.GET_CURRENT_SCHEMA(),
            lcr.GET_BASE_TABLE_OWNER(), lcr.GET_BASE_TABLE_NAME(), lcr.GET_TAG(),
            lcr.GET_TRANSACTION_ID(), lcr.GET_SCN());
     COMMIT;
     DBMS_LOB.FREETEMPORARY(ddl_text);
   END;
   /

Streams Setup
Create Rules
1. CONNECT ddlman/ddlman@saigon

2. BEGIN
     DBMS_STREAMS_ADM.ADD_GLOBAL_RULES(
       streams_type       => 'apply',
       streams_name       => 'apply_ddl',
       queue_name         => 'ddl_apply_q',
       include_dml        => false,
       include_ddl        => true,
       include_tagged_lcr => false,
       source_database    => 'hcmc');
   END;
   /

Streams Setup
Hook in the DDL handler
1. CONNECT ddlman/ddlman@saigon

2. BEGIN
     DBMS_APPLY_ADM.ALTER_APPLY(
       apply_name  => 'apply_ddl',
       ddl_handler => 'ddlman.history_ddl');
   END;
   /

Streams Setup
Instantiate the Streams Environment
Definition
Before we can start capturing, propagating, and applying within our Streams environment, we must instantiate the destination database. This is nothing more than registering the source SCN with the destination database so it knows the point in time from which it can start applying captured information.

There are three methods for instantiating objects:
1. Instantiating and creating database objects by using Export/Import
2. Setting the instantiation SCN for an existing table, schema, or database manually, by executing procedures in the DBMS_APPLY_ADM package at the destination database
3. Performing a metadata-only export and import for existing tables or schemas

You can verify the instantiation by querying DBA_APPLY_INSTANTIATED_OBJECTS.

Streams Setup
Instantiate the Streams Environment
1. CONNECT ddlman/ddlman@hcmc

2. EXEC DBMS_CAPTURE_ADM.PREPARE_GLOBAL_INSTANTIATION;

3. DECLARE
     iscn NUMBER;
   BEGIN
     iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
     DBMS_APPLY_ADM.SET_GLOBAL_INSTANTIATION_SCN@saigon(
       source_database_name => 'hcmc',
       instantiation_scn    => iscn,
       apply_database_link  => 'saigon');
   END;
   /

Streams Setup
Start the Apply Process
1. CONNECT ddlman/ddlman@saigon

2. BEGIN
     DBMS_APPLY_ADM.START_APPLY(
       apply_name => 'apply_ddl');
   END;
   /
   - Sets the status of the apply process to enabled
   - Starts the background coordinator process (apNN)
   - Starts the reader server PX process (pnn)
   - Starts the apply server PX process (pnn)

Stop the Apply Process
   BEGIN
     DBMS_APPLY_ADM.STOP_APPLY(
       apply_name => 'apply_ddl');
   END;
   /

Streams Setup
Start the Capture Process
1. CONNECT ddlman/ddlman@hcmc

2. BEGIN
     DBMS_CAPTURE_ADM.START_CAPTURE(
       capture_name => 'cap_ddl');
   END;
   /
   - Sets the status of the capture process to enabled
   - Starts the background capture process (cpnn)

Stop the Capture Process
   BEGIN
     DBMS_CAPTURE_ADM.STOP_CAPTURE(
       capture_name => 'cap_ddl');
   END;
   /

OEM
Streams

OEM
Streams - Capture

OEM
Streams - Propagate

OEM
Streams - Apply

Sample DDL Session


Make Some Changes
HCMC-SQL> connect scott/tigger@hcmc
Connected.

HCMC-SQL> CREATE TABLE DDL_CHECK_TABLE (COL1 NUMBER);
Table created.
HCMC-SQL> ALTER TABLE DDL_CHECK_TABLE ADD (COL2 VARCHAR2(500));
Table altered.
HCMC-SQL> TRUNCATE TABLE DDL_CHECK_TABLE;
Table truncated.
HCMC-SQL> ALTER TABLE DDL_CHECK_TABLE DROP COLUMN COL2;
Table altered.
HCMC-SQL> DROP TABLE DDL_CHECK_TABLE;
Table dropped.

Sample DDL Session


View From DDL_HISTORY Table

SAIGON-SQL> SELECT timestamp, substr(source_database_name,1,4) source,
  2         logon_user, command_type, object_owner owner, object_name name,
  3         object_type type
  4  FROM ddl_history;

TIMESTAMP  SOURCE  LOGON_USER  COMMAND_TYPE       OWNER  NAME             TYPE
---------  ------  ----------  -----------------  -----  ---------------  -----
11-OCT-03  HCMC    SYS         CREATE USER               AA               USER
11-OCT-03  HCMC    SYS         CREATE TABLESPACE
11-OCT-03  HCMC    SYS         CREATE USER               TEMPUSER         USER
11-OCT-03  HCMC    SYS         ALTER USER                SCOTT            USER
11-OCT-03  HCMC    SCOTT       CREATE TABLE       SCOTT  DDL_CHECK_TABLE  TABLE
11-OCT-03  HCMC    SCOTT       ALTER TABLE        SCOTT  DDL_CHECK_TABLE  TABLE
11-OCT-03  HCMC    SCOTT       TRUNCATE TABLE     SCOTT  DDL_CHECK_TABLE  TABLE
11-OCT-03  HCMC    SCOTT       ALTER TABLE        SCOTT  DDL_CHECK_TABLE  TABLE
11-OCT-03  HCMC    SCOTT       DROP TABLE         SCOTT  DDL_CHECK_TABLE  TABLE

Sample DDL Session


View From DDL_HISTORY Table
SAIGON-SQL> SELECT ddl_text
  2  FROM ddl_history;

DDL_TEXT
---------------------------------------------------------------------------
CREATE user aa identified by VALUES '1468620FBA6271E8'
create temporary tablespace temp01
create user tempuser identified by VALUES '2B4C9C62A2919AEF'
alter user scott identified by VALUES 'A7E7E0150C6D5EF3'
CREATE TABLE DDL_CHECK_TABLE (COL1 NUMBER)
ALTER TABLE DDL_CHECK_TABLE ADD (COL2 VARCHAR2(500))
TRUNCATE TABLE DDL_CHECK_TABLE
ALTER TABLE DDL_CHECK_TABLE DROP COLUMN COL2
DROP TABLE DDL_CHECK_TABLE
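
With the history in place, auditing becomes plain SQL; for example (filter values are illustrative):

SAIGON-SQL> SELECT timestamp, logon_user, command_type, object_name
            FROM ddl_history
            WHERE object_owner = 'SCOTT'
            AND timestamp > SYSDATE - 7
            ORDER BY timestamp;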

The DBA Nightmare


Remember Me

Sweet Dreams for The DBA


SQL Statements Executed Over Time
[Chart: wait time plotted over time, annotated with a DROP INDEX and a later CREATE INDEX; the waits tracked include db file sequential reads, buffer busy waits, and parse calls]

Running Streams
DDL Types Captured
CREATE/ALTER/DROP Tables (includes table comments)
CREATE/ALTER/DROP Tablespace (requires global rules to be set)
CREATE/ALTER/DROP Indexes
CREATE/ALTER/DROP Triggers
CREATE/ALTER/DROP Views
CREATE/ALTER/DROP Synonyms
CREATE/ALTER/DROP Sequences
Creation of PL/SQL packages, procedures and functions
Changes to users/roles
GRANT or REVOKE on users/roles
COMMIT
ROLLBACK
AUDIT (can be done on user objects)

Running Streams
DDL Types Captured But NOT Applied
CREATE, ALTER, or DROP MATERIALIZED VIEW LOG
CREATE, ALTER, or DROP MATERIALIZED VIEW
CREATE or ALTER TABLE for index-organized tables
CREATE SCHEMA AUTHORIZATION
CREATE or DROP DATABASE LINK
RENAME (use ALTER TABLE instead)
CREATE TABLE ... AS SELECT for clustered tables

Running Streams
DDL Types NOT Captured
CREATE or ALTER DATABASE
ALTER SESSION
ALTER SYSTEM
TRUNCATE
CREATE/ALTER/DROP ROLLBACK SEGMENT
CREATE/ALTER/DROP TYPE
CREATE/ALTER/DROP PROFILE
CREATE/DROP LIBRARY
CREATE/DROP DIRECTORY
SET ROLE
SET TRANSACTION
SET CONSTRAINT
CREATE CONTROLFILE
CREATE SPFILE
CREATE PFILE
ANALYZE
EXPLAIN
CALL
PL/SQL procedural calls
LOCK TABLE


Running Streams
Problems You May Encounter
Setup / Running Streams
1. Status of dba_capture & dba_apply shows ABORTED
2. ORA-01925: Maximum of 30 enabled roles exceeded
3. What object is that?
4. Mixed-case global_name causing a mismatch between Streams and LogMiner

Remedy
1. Stop the capture and apply processes and start them again
2. Increase the current value for MAX_ENABLED_ROLES:
   ALTER SYSTEM SET max_enabled_roles=100
3. Avoid using system-generated names
   - Creation will work
   - ALTER / DROP statements will more than likely fail
4. Change db_name & db_domain to uppercase

Running Streams
Problems You May Encounter
Bugs
1. ANALYZE statements are not propagated to the target database when using DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES.
2. DDL issued through EXECUTE IMMEDIATE is not applied at the destination.
3. Checkpoints are skipped if there is any DDL activity in the redo logs, causing the capture process to reprocess logs during startup.
4. Dropping the Streams environment is difficult to clean up.

Remedy
1. Use DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES instead.
2. Do not use dynamic DDL.
3. Ensure that no DDL has been issued around scheduled shutdowns.
4. Get it right the first time, or choose a version naming scheme.

Running Streams
Monitoring
Just some of the views:
DBA_QUEUES
DBA_QUEUE_TABLES
DBA_APPLY
DBA_APPLY_PARAMETERS
DBA_CAPTURE
DBA_CAPTURE_PARAMETERS
DBA_PROPAGATION
DBA_APPLY_ERROR
DBA_RULES
DBA_RULE_SETS
DBA_RULE_SET_RULES
DBA_JOBS
DBA_QUEUE_SCHEDULES
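
A couple of starter health checks against these views (a minimal sketch; the names match the setup earlier in this deck):

-- capture and apply should be ENABLED, not ABORTED
SELECT capture_name, status FROM dba_capture;
SELECT apply_name, status FROM dba_apply;

-- anything the apply process could not handle?
SELECT apply_name, source_transaction_id, error_message
FROM dba_apply_error;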

Get a handle on DDL using Oracle 9iR2 Streams


NoCOUG Summer Conference, Aug. 19, 2004

4750 Walnut Street, Suite 106


Boulder, Colorado 80301

303.938.8282 x115
720.219.3773 (mobile)
jameskoopmann@confio.com
www.confio.com

James F. Koopmann
Director of Technical Consulting
