
Using LogMiner to Analyze Redo Logs

The Oracle LogMiner utility enables you to query redo logs through a SQL
interface. Redo logs contain information about the history of activity on a
database.

This chapter contains the following sections:

* Potential Uses for Data Stored in Redo Logs
* Accessing Information Stored in Redo Logs
* Redo Logs and Dictionary Files
* LogMiner Recommendations and Restrictions
* Filtering Data That is Returned
* Accessing LogMiner Information
* Querying V$LOGMNR_CONTENTS
* Extracting Actual Data Values from Redo Logs
* Supplemental Logging
* Steps in a Typical LogMiner Session
* Example Uses of LogMiner

This chapter describes LogMiner functionality as it is used from the command line.
You also have the option of accessing LogMiner functionality through the Oracle
LogMiner Viewer graphical user interface (GUI). The LogMiner Viewer is a part of
Oracle Enterprise Manager.
Potential Uses for Data Stored in Redo Logs

All changes made to user data or to the data dictionary are recorded in the Oracle
redo logs. Therefore, redo logs contain all the necessary information to perform
recovery operations. Because redo log data is often kept in archived files, the
data is already available. To ensure that redo logs contain useful information,
you should enable at least minimal supplemental logging.
See Also:

Supplemental Logging

The following are some of the potential uses for data contained in redo logs:

* Pinpointing when a logical corruption to a database, such as an error made at
the application level, may have begun. For example, a user might mistakenly update
a database to give all employees 100 percent salary increases rather than 10
percent increases. It is important to know exactly when corruption began so that
you know when to initiate time-based or change-based recovery. This enables you to
restore the database to the state it was in just before corruption.

See Also:

Extracting Actual Data Values from Redo Logs for details about how you can
use LogMiner to accomplish this.
* Detecting and, whenever possible, correcting user error, which is a more
likely scenario than logical corruption. User errors include deleting the wrong
rows because of incorrect values in a WHERE clause, updating rows with incorrect
values, dropping the wrong index, and so forth.
* Determining what actions you would have to take to perform fine-grained
recovery at the transaction level. If you fully understand and take into account
existing dependencies, it may be possible to perform a table-based undo operation
to roll back a set of changes. Normally you would have to restore the table to its
previous state, and then apply an archived redo log to roll it forward.
* Performance tuning and capacity planning through trend analysis. You can
determine which tables get the most updates and inserts. That information provides
a historical perspective on disk access statistics, which can be used for tuning
purposes.
* Performing post-auditing. The redo logs contain all the information
necessary to track any DML and DDL statements executed on the database, the order
in which they were executed, and who executed them.

Accessing Information Stored in Redo Logs

Oracle Corporation provides SQL access to the redo logs through LogMiner, which is
part of the Oracle database server. LogMiner presents the information in the redo
logs through the V$LOGMNR_CONTENTS fixed view. This view contains historical
information about changes made to the database including, but not limited to, the
following:

* The type of change made to the database (INSERT, UPDATE, DELETE, or DDL).
* The SCN at which a change was made (SCN column).
* The SCN at which a change was committed (COMMIT_SCN column).
* The transaction to which a change belongs (XIDUSN, XIDSLT, and XIDSQN
columns).
* The table and schema name of the modified object (SEG_NAME and SEG_OWNER
columns).
* The name of the user who issued the DDL or DML statement to make the change
(USERNAME column).
* Reconstructed SQL statements showing SQL that is equivalent (but not
necessarily identical) to the SQL used to generate the redo records (SQL_REDO
column). If a password is part of the statement in a SQL_REDO column, the password
is encrypted.
* Reconstructed SQL statements showing the SQL statements needed to undo the
change (SQL_UNDO column). SQL_UNDO columns that correspond to DDL statements are
always NULL. Similarly, the SQL_UNDO column may be NULL for some datatypes and for
rolled back operations.
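
For example, assuming a LogMiner session has already been started, a query that
selects several of these columns might look like the following sketch:

SQL> SELECT OPERATION, SCN, COMMIT_SCN, SEG_OWNER, SEG_NAME,
  2  USERNAME, SQL_REDO, SQL_UNDO
  3  FROM V$LOGMNR_CONTENTS;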

The redo logs contain internally generated numerical identifiers to identify
tables and their associated columns. To reconstruct SQL statements, LogMiner needs
to know how the internal identifiers map to user-defined names. This mapping
information is stored in the data dictionary for the database. LogMiner provides a
procedure (DBMS_LOGMNR_D.BUILD) that lets you extract the data dictionary.
See Also:

Oracle9i Supplied PL/SQL Packages and Types Reference for a complete description
of the DBMS_LOGMNR_D.BUILD procedure

The following section describes redo logs and dictionary files in further detail.
Redo Logs and Dictionary Files

Before you begin using LogMiner, it is important to understand how LogMiner works
with redo logs and dictionary files. This will help you to get accurate results
and to plan the use of your system resources. The following concepts are discussed
in this section:

* Redo Logs
* Dictionary Options
* Tracking DDL Statements

Redo Logs
When you run LogMiner, you specify the names of redo logs that you want to
analyze. LogMiner retrieves information from those redo logs and returns it
through the V$LOGMNR_CONTENTS view. To ensure that the redo logs contain
information of value to you, you must enable at least minimal supplemental
logging. See Supplemental Logging.

You can then use SQL to query the V$LOGMNR_CONTENTS view, as you would any other
view. Each select operation that you perform against the V$LOGMNR_CONTENTS view
causes the redo logs to be read sequentially.

Keep the following things in mind about redo logs:

* The redo logs must be from a release 8.0 or later Oracle database. However,
several of the LogMiner features introduced as of release 9.0.1 only work with
redo logs produced on an Oracle9i or later database. See Restrictions.
* Support for LOB and LONG datatypes is available as of release 9.2, but only
for redo logs generated on a release 9.2 Oracle database.
* The redo logs must use a database character set that is compatible with the
character set of the database on which LogMiner is running.
* In general, the analysis of redo logs requires a dictionary that was
generated from the same database that generated the redo logs.
* If you are using a dictionary that is in flat file format or that is stored
in the redo logs, then the redo logs you want to analyze can be from either the
database on which LogMiner is running or from other databases.
* If you are using the online catalog as the LogMiner dictionary, you can only
analyze redo logs from the database on which LogMiner is running.
* LogMiner must be running on the same hardware platform that generated the
redo logs being analyzed. However, it does not have to be on the same system.
* It is important to specify the correct redo logs when running LogMiner. If
you omit redo logs that contain some of the data you need, you will get inaccurate
results when you query the V$LOGMNR_CONTENTS view.

To determine which redo logs are being analyzed in the current LogMiner session,
you can look at the V$LOGMNR_LOGS view, which contains one row for each redo log.
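
For example, a quick check of the current log list (a minimal sketch; it assumes
redo logs have already been added with DBMS_LOGMNR.ADD_LOGFILE) is:

SQL> SELECT FILENAME FROM V$LOGMNR_LOGS;
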
See Also:

"Specify Redo Logs for Analysis"


Dictionary Options

To fully translate the contents of redo logs, LogMiner requires access to a
database dictionary.

LogMiner uses the dictionary to translate internal object identifiers and
datatypes to object names and external data formats. Without a dictionary,
LogMiner returns internal object IDs and presents data as hex bytes.

For example, instead of the SQL statement:

INSERT INTO emp(name, salary) VALUES ('John Doe', 50000);

LogMiner will display:

insert into Object#2581(col#1, col#2) values (hextoraw('4a6f686e20446f65'),
hextoraw('c306'));

A LogMiner dictionary file contains information that identifies the database it
was created from and the time it was created. This information is used to validate
the dictionary against the selected redo logs, automatically detecting any
mismatch between LogMiner's internal dictionary and the redo logs.

The dictionary file must have the same database character set and be created from
the same database as the redo logs being analyzed. However, once the dictionary is
extracted, you can use it to mine the redo logs of that database in a separate
database instance without being connected to the source database.

Extracting a dictionary file also prevents problems that can occur when the
current data dictionary contains only the newest table definitions. For instance,
if a table you are searching for was dropped sometime in the past, the current
dictionary will not contain any references to it.

LogMiner gives you three choices for your source dictionary:

* Extracting the Dictionary to a Flat File
* Extracting a Dictionary to the Redo Logs
* Using the Online Catalog

Extracting the Dictionary to a Flat File

When the dictionary is in a flat file, fewer system resources are used than when
it is contained in the redo logs. It is recommended that you regularly back up the
dictionary extracts to ensure correct analysis of older redo logs.

To extract database dictionary information to a flat file, use the
DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_FLAT_FILE option.

Be sure that no DDL operations occur while the dictionary is being built.

The following steps describe how to extract a dictionary to a flat file (including
extra steps you must take if you are using Oracle8). Steps 1 through 4 are
preparation steps. You only need to do them once, and then you can extract a
dictionary to a flat file as many times as you wish.

1. The DBMS_LOGMNR_D.BUILD procedure requires access to a directory where it
can place the dictionary file. Because PL/SQL procedures do not normally access
user directories, you must specify a directory for use by the DBMS_LOGMNR_D.BUILD
procedure or the procedure will fail. To specify a directory, set the
initialization parameter, UTL_FILE_DIR, in the init.ora file.

See Also:

Oracle9i Database Reference for more information about the init.ora file

For example, to set UTL_FILE_DIR to use /oracle/database as the directory
where the dictionary file is placed, enter the following in the init.ora file:

UTL_FILE_DIR = /oracle/database

Remember that for the changes to the init.ora file to take effect, you must
stop and restart the database.
2. For Oracle8 only. Otherwise, go to the next step: Use your operating
system's copy command to copy the dbmslmd.sql script, which is contained in the
$ORACLE_HOME/rdbms/admin directory on the Oracle8i database, to the same directory
in the Oracle8 database. For example, enter:

% cp /8.1/oracle/rdbms/admin/dbmslmd.sql /8.0/oracle/rdbms/admin/dbmslmd.sql
3. If the database is closed, use SQL*Plus to mount and then open the database
whose redo logs you want to analyze. For example, entering the STARTUP command
mounts and opens the database:

SQL> STARTUP

4. For Oracle8 only. Otherwise, go to the next step: Execute the copied
dbmslmd.sql script on the 8.0 database to install the DBMS_LOGMNR_D package. For
example, enter:

@dbmslmd.sql

You may need to enter the complete path to the script.


5. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify a filename for the
dictionary and a directory path name for the file. This procedure creates the
dictionary file. For example, enter the following to create the file
dictionary.ora in /oracle/database:

SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', -
  2  '/oracle/database/', -
  3  OPTIONS => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);

You could also specify a filename and location without specifying the
STORE_IN_FLAT_FILE option. The result would be the same.
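
For example, the following equivalent call (a sketch) omits the OPTIONS parameter:

SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', -
  2  '/oracle/database/');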

Extracting a Dictionary to the Redo Logs

To extract a dictionary to the redo logs, the database must be open and in
ARCHIVELOG mode and archiving must be enabled. While the dictionary is being
extracted to the redo log stream, no DDL statements can be executed. Therefore,
the dictionary snapshot extracted to the redo logs is guaranteed to be consistent,
whereas the dictionary extracted to a flat file is not.

To extract database dictionary information to the redo logs, use the
DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_REDO_LOGS option. Do not specify
a filename or location.

SQL> EXECUTE DBMS_LOGMNR_D.BUILD( -
  2  OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

To ensure that the redo logs contain information of value to you, you must enable
at least minimal supplemental logging. See Supplemental Logging.
See Also:

Oracle9i Recovery Manager User's Guide for more information about ARCHIVELOG mode

The process of extracting the dictionary to the redo logs does consume database
resources, but if you limit the extraction to off-peak hours, this should not be a
problem and it is faster than extracting to a flat file. Depending on the size of
the dictionary, it may be contained in multiple redo logs. Provided the relevant
redo logs have been archived, you can find out which redo logs contain the start
and end of an extracted dictionary. To do so, query the V$ARCHIVED_LOG view, as
follows:

SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN='YES';
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_END='YES';

The names of the start and end redo logs, and possibly other logs in between them,
are specified with the ADD_LOGFILE procedure when you are preparing to start a
LogMiner session.

It is recommended that you periodically back up the redo logs so that the
information is saved and available at a later date. Ideally, this will not involve
any extra steps because if your database is being properly managed, there should
already be a process in place for backing up and restoring archived redo logs.
Again, because of the time required, it is good practice to do this during off-
peak hours.
Using the Online Catalog

To direct LogMiner to use the dictionary currently in use for the database,
specify the online catalog as your dictionary source when you start LogMiner, as
follows:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
  2  DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

Using the online catalog means that you do not have to bother extracting a
dictionary to a flat file or to the redo logs. In addition to using the online
catalog to analyze online redo logs, you can use it to analyze archived redo logs
provided you are on the same system that generated the archived redo logs.

The online catalog contains the latest information about the database and may be
the fastest way to start your analysis. Because DDL operations that change
important tables are somewhat rare, the online catalog generally contains the
information you need for your analysis.

Remember, however, that the online catalog can only reconstruct SQL statements
that are executed on the latest version of a table. As soon as the table is
altered, the online catalog no longer reflects the previous version of the table.
This means that LogMiner will not be able to reconstruct any SQL statements that
were executed on the previous version of the table. Instead, LogMiner generates
nonexecutable SQL in the SQL_REDO column (including hex-to-raw formatting of
binary values) similar to the following example:

insert into Object#2581(col#1, col#2) values (hextoraw('4a6f686e20446f65'),
hextoraw('c306'));

The online catalog option requires that the database be open.

The online catalog option is not valid with the DDL_DICT_TRACKING option.
Tracking DDL Statements

LogMiner automatically builds its own internal dictionary from the source
dictionary that you specify when you start LogMiner (either a flat file
dictionary, a dictionary in the redo logs, or an online catalog).

If your source dictionary is a flat file dictionary or a dictionary in the redo
logs, you can use the DDL_DICT_TRACKING option to direct LogMiner to track data
definition language (DDL) statements. DDL tracking is disabled by default. To
enable it, use the OPTIONS parameter to specify DDL_DICT_TRACKING when you start
LogMiner. For example:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
  2  DBMS_LOGMNR.DDL_DICT_TRACKING);

With this option set, LogMiner applies any DDL statements seen in the redo logs to
its internal dictionary. For example, to see all the DDLs executed by user SYS,
you could issue the following query:

SQL> SELECT USERNAME, SQL_REDO
  2  FROM V$LOGMNR_CONTENTS
  3  WHERE USERNAME = 'SYS' AND OPERATION = 'DDL';

The information returned might be similar to the following, although the actual
information and how it is displayed will be different on your screen.

USERNAME  SQL_REDO

SYS       ALTER TABLE SCOTT.ADDRESS ADD CODE NUMBER;
SYS       CREATE USER KATHY IDENTIFIED BY VALUES 'E4C8B920449B4C32' DEFAULT
          TABLESPACE TS1;

Keep the following in mind when you use the DDL_DICT_TRACKING option:

* The DDL_DICT_TRACKING option is not valid with the DICT_FROM_ONLINE_CATALOG
option.
* The DDL_DICT_TRACKING option requires that the database be open.

The ability to track DDL statements helps you monitor schema evolution because SQL
statements used to change the logical structure of a table (because of DDL
operations such as adding or dropping of columns) can be reconstructed. In
addition, data manipulation language (DML) operations performed on new tables
created after the dictionary was extracted can also be shown.
Note:

In general, it is a good idea to keep the DDL tracking feature enabled because if
it is not enabled and a DDL event occurs, LogMiner returns some of the redo data
as hex bytes. Also, a metadata version mismatch could occur.

Because LogMiner automatically assigns versions to the database metadata, it will
detect and notify you of any mismatch between its internal dictionary and the redo
logs.
Note:

It is important to understand that the LogMiner internal dictionary is not the
same as the LogMiner dictionary contained in a flat file or in redo logs. LogMiner
does update its internal dictionary, but it does not update the dictionary that is
contained in a flat file or in redo logs.
LogMiner Recommendations and Restrictions

When you are using LogMiner, keep the recommendations and restrictions described
in the following sections in mind.
Recommendations

Oracle Corporation recommends that you take the following into consideration when
you are using LogMiner:

* All databases should employ an alternate tablespace for LogMiner tables. By
default, all LogMiner tables are created to use the SYSTEM tablespace. Use the
DBMS_LOGMNR_D.SET_TABLESPACE routine to re-create all LogMiner tables in an
alternate tablespace. For example, the following statement will re-create all
LogMiner tables to use the logmnrts$ tablespace:

SQL> EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('logmnrts$');


See Also:

Oracle9i Supplied PL/SQL Packages and Types Reference for a full description
of the DBMS_LOGMNR_D.SET_TABLESPACE routine

Restrictions

The following restrictions apply when you are using LogMiner:

* The following are not supported:

o Simple and nested abstract datatypes (ADTs)
o Collections (nested tables and VARRAYs)
o Object Refs
o Index-organized tables (IOTs)
o CREATE TABLE AS SELECT of a table with a clustered key
* LogMiner runs only on databases of release 8.1 or higher, but you can use it
to analyze redo logs from release 8.0 databases. However, the information that
LogMiner is able to retrieve from a redo log depends on the version of the log,
not the version of the database in use. For example, redo logs for Oracle9i can be
augmented to capture additional information when supplemental logging is enabled.
This allows LogMiner functionality to be used to its fullest advantage. Redo logs
created with older releases of Oracle will not have that additional data and may
therefore have limitations on the operations and datatypes supported by LogMiner.

For example, the following features require that supplemental logging be
turned on. (Note that in Oracle9i release 9.0.1, supplemental logging was always
on; it was not available at all in releases prior to 9.0.1. In release 9.2, you
must specifically turn on supplemental logging; otherwise, it will not be
enabled.)
o Support for index clusters, chained rows, and migrated rows (for
chained rows, supplemental logging is required, regardless of the compatibility
level to which the database is set).
o Support for direct-path inserts (also requires that ARCHIVELOG mode be
enabled).
o Extracting the data dictionary into the redo logs.
o DDL tracking.
o Generating SQL_REDO and SQL_UNDO with primary key information for
updates.
o LONG and LOB datatypes are supported only if supplemental logging is
enabled.

See Also:

Supplemental Logging

Filtering Data That is Returned

LogMiner can potentially be dealing with large amounts of information. There are
several methods you can use to limit the information that is returned to the
V$LOGMNR_CONTENTS view, as well as the speed at which it is returned. These
options are specified when you start LogMiner.

* Showing Only Committed Transactions
* Skipping Redo Corruptions
* Filtering Data By Time
* Filtering Data By SCN
Showing Only Committed Transactions

When you use the COMMITTED_DATA_ONLY option, only rows belonging to committed
transactions are shown in the V$LOGMNR_CONTENTS view. This enables you to filter
out rolled back transactions, transactions that are in progress, and internal
operations.

To enable this option, you specify it when you start LogMiner, as follows:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
  2  DBMS_LOGMNR.COMMITTED_DATA_ONLY);

When you specify the COMMITTED_DATA_ONLY option, LogMiner groups together all DML
operations that belong to the same transaction. Transactions are returned in the
order in which they were committed.

If long-running transactions are present in the redo logs being analyzed, use of
this option may cause an "Out of Memory" error.

The default is for LogMiner to show rows corresponding to all transactions and to
return them in the order in which they are encountered in the redo logs.

For example, suppose you start LogMiner without specifying COMMITTED_DATA_ONLY and
you execute the following query:

SQL> SELECT (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,
  2  USERNAME AS "USER",
  3  SQL_REDO AS SQL_REDO
  4  FROM V$LOGMNR_CONTENTS;

The output would be as follows. Both committed and uncommitted transactions are
returned and rows from different transactions are interwoven.

XID USER SQL_REDO

1.5.123 SCOTT SET TRANSACTION READ WRITE;
1.5.123 SCOTT INSERT INTO "SCOTT"."EMP"("EMPNO","ENAME")
VALUES (8782, 'Frost');
1.6.124 KATHY SET TRANSACTION READ WRITE;
1.6.124 KATHY INSERT INTO "SCOTT"."CUSTOMER"("ID","NAME","PHONE_DAY")
VALUES (8839, 'Cummings', '415-321-1234');
1.6.124 KATHY INSERT INTO "SCOTT"."CUSTOMER"("ID","NAME","PHONE_DAY")
VALUES (7934, 'Yeats', '033-334-1234');
1.5.123 SCOTT INSERT INTO "SCOTT"."EMP" ("EMPNO","ENAME")
VALUES (8566, 'Browning');
1.6.124 KATHY COMMIT;
1.7.234 GOUTAM SET TRANSACTION READ WRITE;
1.5.123 SCOTT COMMIT;
1.7.234 GOUTAM INSERT INTO "SCOTT"."CUSTOMER"("ID","NAME","PHONE_DAY")
VALUES (8499, 'Emerson', '202-334-1234');

Now suppose you start LogMiner, but this time you specify the COMMITTED_DATA_ONLY
option. If you executed the previous query again, the output would look as
follows:

1.6.124 KATHY SET TRANSACTION READ WRITE;
1.6.124 KATHY INSERT INTO "SCOTT"."CUSTOMER"("ID","NAME","PHONE_DAY")
VALUES (8839, 'Cummings', '415-321-1234');
1.6.124 KATHY INSERT INTO "SCOTT"."CUSTOMER"("ID","NAME","PHONE_DAY")
VALUES (7934, 'Yeats', '033-334-1234');
1.6.124 KATHY COMMIT;
1.5.123 SCOTT SET TRANSACTION READ WRITE;
1.5.123 SCOTT INSERT INTO "SCOTT"."EMP" ("EMPNO","ENAME")
VALUES (8566, 'Browning');
1.5.123 SCOTT INSERT INTO "SCOTT"."EMP"("EMPNO","ENAME")
VALUES (8782, 'Frost');
1.5.123 SCOTT COMMIT;

Because the commit for the 1.6.124 transaction happened before the commit for the
1.5.123 transaction, the entire 1.6.124 transaction is returned first. This is
true even though the 1.5.123 transaction started before the 1.6.124 transaction.
None of the 1.7.234 transaction is returned because a commit was never issued for
it.
Skipping Redo Corruptions

When you use the SKIP_CORRUPTION option, any corruptions in the redo logs are
skipped during select operations from the V$LOGMNR_CONTENTS view. Rows that are
retrieved after the corruption are flagged with a "Log File Corruption
Encountered" message. Additionally, for every corrupt redo record encountered, an
informational row is returned that indicates how many blocks were skipped.

The default is for the select operation to terminate at the first corruption it
encounters in the redo log.

To enable this option, you specify it when you start LogMiner, as follows:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
  2  DBMS_LOGMNR.SKIP_CORRUPTION);

Filtering Data By Time

To filter data by time, set the STARTTIME and ENDTIME parameters. The procedure
expects date values. Use the TO_DATE function to specify date and time, as in this
example:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
  2  DICTFILENAME => '/oracle/dictionary.ora', -
  3  STARTTIME => TO_DATE('01-Jan-1998 08:30:00', 'DD-MON-YYYY HH:MI:SS'), -
  4  ENDTIME => TO_DATE('01-Jan-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS'));

If no STARTTIME or ENDTIME parameters are specified, the entire redo log is read
from start to end, for each SELECT statement issued.

The timestamps should not be used to infer ordering of redo records. You can infer
the order of redo records by using the SCN.
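
For example, the following query (a sketch; it assumes LogMiner has been started)
returns the SCN alongside each timestamp so that the two can be compared:

SQL> SELECT SCN, TIMESTAMP, SQL_REDO FROM V$LOGMNR_CONTENTS;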
Filtering Data By SCN

To filter data by SCN (system change number), use the STARTSCN and ENDSCN
parameters, as in this example:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
  2  DICTFILENAME => '/oracle/dictionary.ora', -
  3  STARTSCN => 100, -
  4  ENDSCN => 150);

The STARTSCN and ENDSCN parameters override the STARTTIME and ENDTIME parameters
in situations where all are specified.

If no STARTSCN or ENDSCN parameters are specified, the entire redo log is read
from start to end, for each SELECT statement issued.
Accessing LogMiner Information

LogMiner information is contained in the following views. You can use SQL to query
them as you would any other view.

* V$LOGMNR_CONTENTS

Shows changes made to user and table information.

* V$LOGMNR_DICTIONARY

Shows information about the LogMiner dictionary file, provided the
dictionary was created using the STORE_IN_FLAT_FILE option. The information shown
includes the database name and status information.

* V$LOGMNR_LOGS

Shows information about specified redo logs. There is one row for each redo
log.

* V$LOGMNR_PARAMETERS

Shows information about optional LogMiner parameters, including starting and
ending system change numbers (SCNs) and starting and ending times.
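
For example, the following queries (a sketch; they assume a session has been set
up) display the current contents of two of these views:

SQL> SELECT * FROM V$LOGMNR_PARAMETERS;
SQL> SELECT * FROM V$LOGMNR_DICTIONARY;
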
See Also:

Oracle9i Database Reference for detailed information about the contents of
these views

The rest of this section discusses the following topics with regard to accessing
LogMiner information:

* Querying V$LOGMNR_CONTENTS
* Extracting Actual Data Values from Redo Logs

Querying V$LOGMNR_CONTENTS

LogMiner output is contained in the V$LOGMNR_CONTENTS view. After LogMiner is
started, you can issue SQL statements at the command line to query the data
contained in V$LOGMNR_CONTENTS.

When a SQL select operation is executed against the V$LOGMNR_CONTENTS view, the
redo logs are read sequentially. Translated information from the redo logs is
returned as rows in the V$LOGMNR_CONTENTS view. This continues until either the
filter criteria specified at startup are met or the end of the redo log is
reached.

LogMiner returns all the rows in SCN order unless you have used the
COMMITTED_DATA_ONLY option to specify that only committed transactions should be
retrieved. SCN order is the order normally applied in media recovery.

For example, suppose you wanted to find out about any delete operations that a
user named Ron had performed on the scott.orders table. You could issue a query
similar to the following:

SQL> SELECT OPERATION, SQL_REDO, SQL_UNDO
  2  FROM V$LOGMNR_CONTENTS
  3  WHERE SEG_OWNER = 'SCOTT' AND SEG_NAME = 'ORDERS' AND
  4  OPERATION = 'DELETE' AND USERNAME = 'RON';

The following output would be produced. The formatting may be different on your
display than that shown here.

OPERATION SQL_REDO                      SQL_UNDO

DELETE    delete from "SCOTT"."ORDERS"  insert into "SCOTT"."ORDERS"
          where "ORDER_NO" = 2 and      ("ORDER_NO", "QTY", "EXPR_SHIP")
          "QTY" = 3 and                 values(2,3,'Y');
          "EXPR_SHIP" = 'Y' and
          ROWID = 'AAABM8AABAAALm/AAA';
DELETE    delete from "SCOTT"."ORDERS"  insert into "SCOTT"."ORDERS"
          where "ORDER_NO" = 4 and      ("ORDER_NO", "QTY", "EXPR_SHIP")
          "QTY" = 7 and                 values(4,7,'Y');
          "EXPR_SHIP" = 'Y' and
          ROWID = 'AAABM8AABAAALm/AAC';

This output shows that user Ron deleted two rows from the scott.orders table. The
reconstructed SQL statements are equivalent, but not necessarily identical, to the
actual statement that Ron issued. The reason for this is that the original WHERE
clause is not logged in the redo logs, so LogMiner can only show deleted (or
updated or inserted) rows individually.

Therefore, even though a single DELETE statement may have been responsible for the
deletion of both rows, the output in V$LOGMNR_CONTENTS does not reflect that.
Thus, the actual DELETE statement may have been DELETE FROM SCOTT.ORDERS WHERE
EXPR_SHIP = 'Y' or it might have been DELETE FROM SCOTT.ORDERS WHERE QTY < 8.
Executing Reconstructed SQL Statements

By default, SQL_REDO and SQL_UNDO statements are ended with a semicolon. Depending
on how you plan to use the reconstructed statements, you may or may not want them
to include the semicolon. To suppress the semicolon, specify the
DBMS_LOGMNR.NO_SQL_DELIMITER option when you start LogMiner.
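
For example, the following call (a sketch patterned on the other START_LOGMNR
examples in this chapter) starts LogMiner with the semicolon suppressed:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
  2  DBMS_LOGMNR.NO_SQL_DELIMITER);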

Note that if the STATUS field of V$LOGMNR_CONTENTS contains
dbms_logmnr.invalid_sql, then the SQL cannot be executed.
Formatting of Returned Data

Sometimes a query can result in a large number of columns containing reconstructed
SQL statements, which can be visually busy and hard to read. LogMiner provides the
DBMS_LOGMNR.PRINT_PRETTY_SQL option to address this problem. The PRINT_PRETTY_SQL
option formats the reconstructed SQL statements as follows, which makes them
easier to read:

insert into "SCOTT"."EMP" values


"EMPNO": 5505,
"ENAME": "Parker",
"SAL": 9000
"DEPTNO": NULL;
update "SCOTT"."EMP"
set
"EMPNO" = 5505 and
"SAL" = 9000
where
"EMPNO" = 5505 and
"SAL" = 9000 and
"ROWID" = AABBCEXFGHA;
SQL statements that are reconstructed when the PRINT_PRETTY_SQL option is enabled
are not executable because they do not use standard SQL syntax.
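
For example, the following call (a sketch; combine the option with others as
needed) starts LogMiner with formatted output enabled:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
  2  DBMS_LOGMNR.PRINT_PRETTY_SQL);
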
Extracting Actual Data Values from Redo Logs

LogMiner lets you make queries based on actual data values. For instance, you
could perform a query to show all updates to scott.emp that increased sal more
than a certain amount. Data such as this can be used to analyze system behavior
and to perform auditing tasks.

LogMiner data extraction from redo logs is performed using two mine functions:
DBMS_LOGMNR.MINE_VALUE and DBMS_LOGMNR.COLUMN_PRESENT. These functions are part of
the DBMS_LOGMNR package. Support for these mine functions is provided by the
REDO_VALUE and UNDO_VALUE columns in the V$LOGMNR_CONTENTS view.

The following is an example of how you could use the MINE_VALUE function to select
all updates to scott.emp that increased the sal column to more than twice its
original value:

SQL> SELECT SQL_REDO FROM V$LOGMNR_CONTENTS
  2  WHERE
  3  SEG_NAME = 'EMP' AND
  4  SEG_OWNER = 'SCOTT' AND
  5  OPERATION = 'UPDATE' AND
  6  DBMS_LOGMNR.MINE_VALUE(REDO_VALUE, 'SCOTT.EMP.SAL') >
  7  2*DBMS_LOGMNR.MINE_VALUE(UNDO_VALUE, 'SCOTT.EMP.SAL');

As shown in this example, the MINE_VALUE function takes two arguments. The first
one specifies whether to mine the redo (REDO_VALUE) or undo (UNDO_VALUE) portion
of the data. The second argument is a string that specifies the fully-qualified
name of the column to be mined (in this case, SCOTT.EMP.SAL). The MINE_VALUE
function always returns a string that can be converted back to the original
datatype.
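
For example, because sal is a NUMBER column, the string returned by MINE_VALUE can
be passed to TO_NUMBER. The following query (a sketch that assumes the standard
scott.emp table) returns each update's new salary as a number:

SQL> SELECT TO_NUMBER(DBMS_LOGMNR.MINE_VALUE(REDO_VALUE, 'SCOTT.EMP.SAL'))
  2  AS NEW_SAL
  3  FROM V$LOGMNR_CONTENTS
  4  WHERE SEG_OWNER = 'SCOTT' AND SEG_NAME = 'EMP' AND OPERATION = 'UPDATE';
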
NULL Returns From the MINE_VALUE Function

If the MINE_VALUE function returns a NULL value, it can mean either:

* The specified column is not present in the redo or undo portion of the data.
* The specified column is present and has a null value.

To distinguish between these two cases, use the DBMS_LOGMNR.COLUMN_PRESENT
function, which returns a 1 if the column is present in the redo or undo portion
of the data. Otherwise, it returns a 0. For example, suppose you wanted to find
out the increment by which the values in the sal column were modified and the
corresponding transaction identifier. You could issue the following query:

SQL> SELECT
2 (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,
3 (DBMS_LOGMNR.MINE_VALUE(REDO_VALUE, 'SCOTT.EMP.SAL') -
4 DBMS_LOGMNR.MINE_VALUE(UNDO_VALUE, 'SCOTT.EMP.SAL')) AS INCR_SAL
5 FROM V$LOGMNR_CONTENTS
6 WHERE
7 DBMS_LOGMNR.COLUMN_PRESENT(REDO_VALUE, 'SCOTT.EMP.SAL') = 1 AND
8 DBMS_LOGMNR.COLUMN_PRESENT(UNDO_VALUE, 'SCOTT.EMP.SAL') = 1 AND
9 OPERATION = 'UPDATE';

Usage Rules for the MINE_VALUE and COLUMN_PRESENT Functions

The following usage rules apply to the MINE_VALUE and COLUMN_PRESENT functions:

* They can only be used within a LogMiner session.
* They must be invoked in the context of a select operation from the
V$LOGMNR_CONTENTS view.
* They do not support LONG, LOB, ADT, or COLLECTION datatypes.
* When the column argument is of type DATE, the string that is returned is
formatted in canonical form (DD-MON-YYYY HH24:MI:SS.SS) regardless of the date
format of the current session.

See Also:

Oracle9i Supplied PL/SQL Packages and Types Reference for a description of
the DBMS_LOGMNR package, which contains the MINE_VALUE and COLUMN_PRESENT
functions

Supplemental Logging

Redo logs are generally used for instance recovery and media recovery. The data
needed for such operations is automatically recorded in the redo logs. However, a
redo-based application may require that additional information be logged in the
redo logs. The following are examples of situations in which supplemental data may
be needed:

* An application that wanted to apply the reconstructed SQL statements to a
different database would need to identify the update statement by its primary key,
not by its ROWID, which is the usual method used by LogMiner. (Primary keys are
not, by default, logged in the redo logs unless the key itself is changed by the
update.)
* To make tracking of row changes more efficient, an application may require
that the before image of the whole row be logged, not just the modified columns.

The default behavior of the Oracle database server is to not provide any
supplemental logging at all, which means that certain features will not be
supported (see Restrictions). If you want to make full use of LogMiner support,
you must enable supplemental logging.

The use of LogMiner with minimal supplemental logging enabled does not have any
significant performance impact on the instance generating the redo logs. However,
the use of LogMiner with database-wide supplemental logging enabled does impose
significant overhead and affects performance.

There are two types of supplemental logging: database supplemental logging and
table supplemental logging. Each of these is described in the following sections.
Database Supplemental Logging

There are two types of database supplemental logging: minimal and identification
key logging.

Minimal supplemental logging logs the minimal amount of information needed for
LogMiner to identify, group, and merge the REDO operations associated with DML
changes. It ensures that LogMiner (and any products building on LogMiner
technology) have sufficient information to support chained rows and various
storage arrangements such as cluster tables. In most situations, you should at
least enable minimal supplemental logging. To do so, execute the following
statement:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Note:

In LogMiner release 9.0.1, minimal supplemental logging was the default behavior.
In release 9.2, the default is no supplemental logging. It must be specifically
enabled.

Identification key logging enables database-wide before-image logging of primary
keys or unique indexes (in the absence of primary keys) for all updates. With this
type of logging, an application can identify updated rows logically rather than
resorting to ROWIDs.

Identification key logging is necessary when supplemental log data will be the
source of change in another database, such as a logical standby.

To enable identification key logging, execute the following statement:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX)
  2  COLUMNS;

This statement results in all primary key values, database-wide, being logged
regardless of whether or not any of them are modified.

If a table does not have a primary key, but has one or more non-null unique key
constraints, one of the constraints is chosen arbitrarily for logging as a means
of identifying the row getting updated.

If the table has neither a primary key nor a unique index, then all columns except
LONG and LOB are supplementally logged. Therefore, Oracle Corporation recommends
that when you use supplemental logging, all or most tables be defined to have
primary or unique keys.
Note:

Regardless of whether or not identification key logging is enabled, the SQL
statements returned by LogMiner always contain the ROWID clause. You can filter
out the ROWID clause by using the RTRIM function and appropriate arguments on the
reconstructed SQL statement.

To disable either minimal or identification key logging, execute the following
statement:

SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;

Usage Notes for Identification Key Logging

Keep the following in mind when you use identification key logging:

* Identification key logging is not required for delete operations because
DELETE statements contain all the column values required to identify a row.
* If the database is open when you enable identification key logging, all DML
cursors in the cursor cache are invalidated. This can have a performance impact
until the cache is repopulated.

Table Supplemental Logging

Table supplemental logging uses log groups to log supplemental information. There
are two types of log groups:
* Unconditional log groups - The before images of specified columns are logged
any time the table is updated, regardless of whether the update affected any of
the specified columns. This is sometimes referred to as an ALWAYS log group.
* Conditional log groups - The before images of all specified columns are
logged only if at least one of the columns in the log group is updated.

Unconditional Log Groups

To enable supplemental logging that uses unconditional log groups, use the ALWAYS
clause as shown in the following example:

SQL> ALTER TABLE scott.emp
  2  ADD SUPPLEMENTAL LOG GROUP emp_parttime (empno, ename, deptno) ALWAYS;

This creates a log group named emp_parttime on scott.emp that consists of the
columns empno, ename, and deptno. These columns will be logged every time an
UPDATE statement is executed on scott.emp, regardless of whether or not the update
affected them. If you wanted to have the entire row image logged any time an
update was made, you could create a log group that contained all the columns in
the table.
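
For example, the following statement is a sketch of such a log group; the group
name emp_allcols is arbitrary and the column list assumes the standard scott.emp
table:

SQL> ALTER TABLE scott.emp
  2  ADD SUPPLEMENTAL LOG GROUP emp_allcols
  3  (empno, ename, job, mgr, hiredate, sal, comm, deptno) ALWAYS;
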
Note:

LOBs, LONGs, and ADTs cannot be part of a log group.


Conditional Log Groups

To enable supplemental logging that uses conditional log groups, omit the ALWAYS
clause from your ALTER TABLE statement, as shown in the following example:

SQL> ALTER TABLE scott.emp
  2  ADD SUPPLEMENTAL LOG GROUP emp_fulltime (empno, ename, deptno);

This creates a log group named emp_fulltime on scott.emp. Just like the previous
example, it consists of the columns empno, ename, and deptno. But because the
ALWAYS clause was omitted, before images of the columns will be logged only if at
least one of the columns is updated.
Usage Notes for Log Groups

Keep the following in mind when you use log groups:

* A column can belong to more than one log group. However, the before image of
the columns gets logged only once.
* Redo logs do not contain any information about which log group a column is
part of or whether a column's before image is being logged because of log group
logging or identification key logging.
* If you specify the same columns to be logged both conditionally and
unconditionally, the columns are logged unconditionally.

Steps in a Typical LogMiner Session

This section describes the steps in a typical LogMiner session. Each step is
described in its own subsection.

1. Perform Initial Setup Activities
2. Extract a Dictionary (unless you plan to use the online catalog)
3. Specify Redo Logs for Analysis
4. Start a LogMiner Session
5. Query V$LOGMNR_CONTENTS
6. End a LogMiner Session

To run LogMiner, you use the DBMS_LOGMNR PL/SQL package. Additionally, you might
use the DBMS_LOGMNR_D package if you choose to extract a dictionary rather than
use the online catalog.

The DBMS_LOGMNR package contains the procedures used to initialize and run
LogMiner, including interfaces to specify names of redo logs, filter criteria, and
session characteristics. The DBMS_LOGMNR_D package queries the dictionary tables
of the current database to create a LogMiner dictionary file.

The LogMiner packages are owned by the SYS schema. Therefore, if you are not
connected as user SYS, you must include SYS in your call. For example:

EXECUTE SYS.DBMS_LOGMNR.END_LOGMNR

See Also:

* Oracle9i Supplied PL/SQL Packages and Types Reference for details about
syntax and parameters for these LogMiner packages
* Oracle9i Application Developer's Guide - Fundamentals for information about
executing PL/SQL procedures

Perform Initial Setup Activities

There are initial setup activities that you must perform before using LogMiner for
the first time. You only need to perform these activities once, not every time you
use LogMiner:

* Enable the type of supplemental logging you want to use. At the very least,
Oracle Corporation recommends that you enable minimal supplemental logging, as
follows:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

See Supplemental Logging for more information.


* Use the DBMS_LOGMNR_D.SET_TABLESPACE routine to re-create all LogMiner
tables in an alternate tablespace. For example:

SQL> EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('logmnrts$');

See Recommendations for more information.

Extract a Dictionary

To use LogMiner you must supply it with a dictionary by doing one of the
following:

* Extract database dictionary information to a flat file. See Extracting the
Dictionary to a Flat File.
* Extract database dictionary information to the redo logs. See Extracting a
Dictionary to the Redo Logs.
* Specify use of the online catalog by using the DICT_FROM_ONLINE_CATALOG
option when you start LogMiner. See Using the Online Catalog.

Specify Redo Logs for Analysis

Before you can start LogMiner, you must specify the redo logs that you want to
analyze. To do so, execute the DBMS_LOGMNR.ADD_LOGFILE procedure, as demonstrated
in the following steps. You can add and remove redo logs in any order.
Note:

If you will be mining in the same instance that is generating the redo logs, you
only need to specify one archived redo log and the CONTINUOUS_MINE option when you
start LogMiner. See Continuous Mining.

1. Use SQL*Plus to start an Oracle instance, with the database either mounted
or unmounted. For example, enter:

SQL> STARTUP

2. Create a list of redo logs. Specify the NEW option of the
DBMS_LOGMNR.ADD_LOGFILE procedure to signal that this is the beginning of a new
list. For example, enter the following to specify /oracle/logs/log1.f:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
  2  LOGFILENAME => '/oracle/logs/log1.f', -
  3  OPTIONS => DBMS_LOGMNR.NEW);

3. If desired, add more redo logs by specifying the ADDFILE option of the
DBMS_LOGMNR.ADD_LOGFILE procedure. For example, enter the following to add
/oracle/logs/log2.f:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
  2  LOGFILENAME => '/oracle/logs/log2.f', -
  3  OPTIONS => DBMS_LOGMNR.ADDFILE);

The OPTIONS parameter is optional when you are adding additional redo logs.
For example, you could simply enter the following:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
  2  LOGFILENAME => '/oracle/logs/log2.f');

4. If desired, remove redo logs by specifying the REMOVEFILE option of the
DBMS_LOGMNR.ADD_LOGFILE procedure. For example, enter the following to remove
/oracle/logs/log2.f:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
  2  LOGFILENAME => '/oracle/logs/log2.f', -
  3  OPTIONS => DBMS_LOGMNR.REMOVEFILE);

Continuous Mining

The continuous mining option is useful if you are mining in the same instance that
is generating the redo logs. When you plan to use the continuous mining option,
you only need to specify one archived redo log before starting LogMiner. Then,
when you start LogMiner, specify the DBMS_LOGMNR.CONTINUOUS_MINE option, which
directs LogMiner to automatically add and mine subsequent archived redo logs and
also the online catalog.
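
For example, after adding a single archived redo log, you might start LogMiner as
follows (a sketch that pairs continuous mining with the online catalog as the
dictionary source):

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
  2  DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
  3  DBMS_LOGMNR.CONTINUOUS_MINE);
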
Note:

Continuous mining is not available in a Real Application Clusters environment.


Start a LogMiner Session

After you have created a dictionary file and specified which redo logs to analyze,
you can start a LogMiner session. Take the following steps:
1. Execute the DBMS_LOGMNR.START_LOGMNR procedure to start LogMiner.

It is recommended that you specify a dictionary option. If you do not,
LogMiner cannot translate internal object identifiers and datatypes to object
names and external data formats. Therefore, it would return internal object IDs
and present data as hex bytes. Additionally, the MINE_VALUE and COLUMN_PRESENT
functions cannot be used without a dictionary.

If you are specifying the name of a flat file dictionary, you must supply a
fully qualified filename for the dictionary file. For example, to start LogMiner
using /oracle/database/dictionary.ora, issue the following command:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
  2  DICTFILENAME => '/oracle/database/dictionary.ora');

If you are not specifying a flat file dictionary name, then use the OPTIONS
parameter to specify either the DICT_FROM_REDO_LOGS or DICT_FROM_ONLINE_CATALOG
option.

If you specify DICT_FROM_REDO_LOGS, LogMiner expects to find a dictionary in
the redo logs that you specified with the DBMS_LOGMNR.ADD_LOGFILE procedure. To
determine which redo logs contain a dictionary, look at the V$ARCHIVED_LOG view.
See Extracting a Dictionary to the Redo Logs for an example.
Note:

If you add additional redo logs after your LogMiner session has been
started, you must restart LogMiner. You can specify new startup parameters if
desired. Otherwise, LogMiner uses the parameters you specified for the previous
session.

For more information on using the online catalog, see Using the Online
Catalog.
2. Optionally, you can filter your query by time or by SCN. See Filtering Data
By Time or Filtering Data By SCN.
3. You can also use the OPTIONS parameter to specify additional characteristics
of your LogMiner session. For example, you might decide to use the online catalog
as your dictionary and to have only committed transactions shown in the
V$LOGMNR_CONTENTS view, as follows:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
  2  DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
  3  DBMS_LOGMNR.COMMITTED_DATA_ONLY);

The following list is a summary of LogMiner settings that you can specify
with the OPTIONS parameter and where to find more information about them.
* DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG -- See Using the Online Catalog
* DBMS_LOGMNR.DICT_FROM_REDO_LOGS -- See step 1 in this list
* DBMS_LOGMNR.COMMITTED_DATA_ONLY -- See Showing Only Committed
Transactions
* DBMS_LOGMNR.SKIP_CORRUPTION -- See Skipping Redo Corruptions
* DBMS_LOGMNR.DDL_DICT_TRACKING -- See Tracking DDL Statements
* DBMS_LOGMNR.NEW, DBMS_LOGMNR.ADDFILE, and DBMS_LOGMNR.REMOVEFILE --
See Specify Redo Logs for Analysis
* DBMS_LOGMNR.NO_SQL_DELIMITER -- See Executing Reconstructed SQL Statements
* DBMS_LOGMNR.PRINT_PRETTY_SQL -- See Formatting of Returned Data
* DBMS_LOGMNR.CONTINUOUS_MINE -- See Continuous Mining

You can execute the DBMS_LOGMNR.START_LOGMNR procedure multiple times,
specifying different options each time. This can be useful, for example, if you
did not get the desired results from a query of V$LOGMNR_CONTENTS and want to
restart LogMiner with different options. You do not need to re-add redo logs that
were already added for a previous session.

Query V$LOGMNR_CONTENTS

At this point, LogMiner is started and you can perform queries against the
V$LOGMNR_CONTENTS view. See Querying V$LOGMNR_CONTENTS for examples of this.
End a LogMiner Session

To properly end a LogMiner session, use the DBMS_LOGMNR.END_LOGMNR procedure, as
follows:

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;

This procedure closes all the redo logs and allows all the database and system
resources allocated by LogMiner to be released.

If this procedure is not executed, LogMiner retains all its allocated resources
until the end of the Oracle session in which it was invoked. It is particularly
important to use this procedure to end LogMiner if either the DDL_DICT_TRACKING
option or the DICT_FROM_REDO_LOGS option was used.
Example Uses of LogMiner

This section provides the following example uses of LogMiner.

* Example: Using LogMiner to Track Changes Made By a Specific User
* Example: Using LogMiner to Calculate Table Access Statistics

Example: Using LogMiner to Track Changes Made By a Specific User

This example shows how to see all changes made to the database in a specific time
range by one of your users: joedevo. Connect to the database and then take the
following steps:

* Step 1: Creating the Dictionary File
* Step 2: Adding Redo Logs
* Step 3: Starting LogMiner and Limiting the Search Range
* Step 4: Querying V$LOGMNR_CONTENTS

Step 1: Creating the Dictionary File

To use LogMiner to analyze joedevo's data, you must either create a dictionary
file before joedevo makes any changes or specify use of the online catalog at
LogMiner startup. See Extract a Dictionary for examples of creating dictionaries.
Step 2: Adding Redo Logs

Assume that joedevo has made some changes to the database. You can now specify the
names of the redo logs that you want to analyze, as follows:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
  2  LOGFILENAME => 'log1orc1.ora', -
  3  OPTIONS => DBMS_LOGMNR.NEW);

If desired, add additional redo logs, as follows:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
  2  LOGFILENAME => 'log2orc1.ora', -
  3  OPTIONS => DBMS_LOGMNR.ADDFILE);

Step 3: Starting LogMiner and Limiting the Search Range

Start LogMiner and limit the search to the specified time range:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
  2  DICTFILENAME => 'orcldict.ora', -
  3  STARTTIME => TO_DATE('01-Jan-1998 08:30:00', 'DD-MON-YYYY HH:MI:SS'), -
  4  ENDTIME => TO_DATE('01-Jan-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS'));

Step 4: Querying V$LOGMNR_CONTENTS

At this point, the V$LOGMNR_CONTENTS view is available for queries. You decide to
find all of the changes made by user joedevo to the salary table. Execute the
following SELECT statement:

SQL> SELECT SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS
  2  WHERE USERNAME = 'JOEDEVO' AND SEG_NAME = 'SALARY';

For both the SQL_REDO and SQL_UNDO columns, two rows are returned (the format of
the data display will be different on your screen). You discover that joedevo
requested two operations: he deleted his old salary and then inserted a new,
higher salary. You now have the data necessary to undo this operation.

SQL_REDO                              SQL_UNDO
--------                              --------
delete * from SALARY                  insert into SALARY(NAME, EMPNO, SAL)
where EMPNO = 12345                   values ('JOEDEVO', 12345, 500)
and ROWID = 'AAABOOAABAAEPCABA';

insert into SALARY(NAME, EMPNO, SAL)  delete * from SALARY
values('JOEDEVO',12345, 2500)         where EMPNO = 12345
                                      and ROWID = 'AAABOOAABAAEPCABA';

2 rows selected

Example: Using LogMiner to Calculate Table Access Statistics

In this example, assume you manage a direct marketing database and want to
determine how productive the customer contacts have been in generating revenue for
a two week period in August. Assume that you have already created the dictionary
and added the redo logs you want to search (as demonstrated in the previous
example). Take the following steps:

1. Start LogMiner and specify a range of times:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
  2  STARTTIME => TO_DATE('07-Aug-1998 08:30:00', 'DD-MON-YYYY HH:MI:SS'), -
  3  ENDTIME => TO_DATE('21-Aug-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS'), -
  4  DICTFILENAME => '/usr/local/dict.ora');

2. Query the V$LOGMNR_CONTENTS view to determine which tables were modified in
the time range you specified, as shown in the following example. (This query
filters out system tables that traditionally have a $ in their name.)

SQL> SELECT SEG_OWNER, SEG_NAME, COUNT(*) AS Hits FROM
  2  V$LOGMNR_CONTENTS WHERE SEG_NAME NOT LIKE '%$' GROUP BY
  3  SEG_OWNER, SEG_NAME;

3. The following data is displayed. (The format of your display may be
different.)

SEG_OWNER SEG_NAME  Hits
--------- --------- ----
CUST      ACCOUNT    384
SCOTT     EMP         12
SYS       DONOR       12
UNIV      DONOR      234
UNIV      EXECDONOR  325
UNIV      MEGADONOR   32

The values in the Hits column show the number of times that the named table
had an insert, delete, or update operation performed on it during the two week
period specified in the query.
