The Oracle LogMiner utility enables you to query redo logs through a SQL
interface. Redo logs contain information about the history of activity on a
database.
This chapter describes LogMiner functionality as it is used from the command line.
You also have the option of accessing LogMiner functionality through the Oracle
LogMiner Viewer graphical user interface (GUI). The LogMiner Viewer is a part of
Oracle Enterprise Manager.
Potential Uses for Data Stored in Redo Logs
All changes made to user data or to the data dictionary are recorded in the Oracle
redo logs. Therefore, redo logs contain all the necessary information to perform
recovery operations. Because redo log data is often kept in archived files, the
data is already available. To ensure that redo logs contain useful information,
you should enable at least minimal supplemental logging.
See Also:
Supplemental Logging
The following are some of the potential uses for data contained in redo logs:
* Pinpointing when a logical corruption to a database, such as an error made at the application level, may have begun. This information enables you to restore the database to the state it was in just before the corruption occurred.
See Also:
See Extracting Actual Data Values from Redo Logs for details about how you
can use LogMiner to accomplish this.
* Detecting and, whenever possible, correcting user error, which is a more
likely scenario than logical corruption. User errors include deleting the wrong
rows because of incorrect values in a WHERE clause, updating rows with incorrect
values, dropping the wrong index, and so forth.
* Determining what actions you would have to take to perform fine-grained
recovery at the transaction level. If you fully understand and take into account
existing dependencies, it may be possible to perform a table-based undo operation
to roll back a set of changes. Normally you would have to restore the table to its
previous state, and then apply an archived redo log to roll it forward.
* Performance tuning and capacity planning through trend analysis. You can
determine which tables get the most updates and inserts. That information provides
a historical perspective on disk access statistics, which can be used for tuning
purposes.
* Performing post-auditing. The redo logs contain all the information
necessary to track any DML and DDL statements executed on the database, the order
in which they were executed, and who executed them.
Oracle Corporation provides SQL access to the redo logs through LogMiner, which is
part of the Oracle database server. LogMiner presents the information in the redo
logs through the V$LOGMNR_CONTENTS fixed view. This view contains historical
information about changes made to the database including, but not limited to, the
following:
* The type of change made to the database (INSERT, UPDATE, DELETE, or DDL).
* The SCN at which a change was made (SCN column).
* The SCN at which a change was committed (COMMIT_SCN column).
* The transaction to which a change belongs (XIDUSN, XIDSLT, and XIDSQN
columns).
* The table and schema name of the modified object (SEG_NAME and SEG_OWNER
columns).
* The name of the user who issued the DDL or DML statement to make the change
(USERNAME column).
* Reconstructed SQL statements showing SQL that is equivalent (but not
necessarily identical) to the SQL used to generate the redo records (SQL_REDO
column). If a password is part of the statement in a SQL_REDO column, the password
is encrypted.
* Reconstructed SQL statements showing the SQL statements needed to undo the
change (SQL_UNDO column). SQL_UNDO columns that correspond to DDL statements are
always NULL. Similarly, the SQL_UNDO column may be NULL for some datatypes and for
rolled back operations.
See Also:
Oracle9i Supplied PL/SQL Packages and Types Reference for a complete description
of the DBMS_LOGMNR_D.BUILD procedure
The following section describes redo logs and dictionary files in further detail.
Redo Logs and Dictionary Files
Before you begin using LogMiner, it is important to understand how LogMiner works
with redo logs and dictionary files. This will help you to get accurate results
and to plan the use of your system resources. The following concepts are discussed
in this section:
* Redo Logs
* Dictionary Options
* Tracking DDL Statements
Redo Logs
When you run LogMiner, you specify the names of redo logs that you want to
analyze. LogMiner retrieves information from those redo logs and returns it
through the V$LOGMNR_CONTENTS view. To ensure that the redo logs contain
information of value to you, you must enable at least minimal supplemental
logging. See Supplemental Logging.
You can then use SQL to query the V$LOGMNR_CONTENTS view, as you would any other
view. Each select operation that you perform against the V$LOGMNR_CONTENTS view
causes the redo logs to be read sequentially.
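For example, a minimal query against the view might look like the following sketch (in practice you would add a WHERE clause to filter the results):

```sql
SELECT OPERATION, SQL_REDO, SQL_UNDO
  FROM V$LOGMNR_CONTENTS;
```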
* The redo logs must be from a release 8.0 or later Oracle database. However,
several of the LogMiner features introduced as of release 9.0.1 only work with
redo logs produced on an Oracle9i or later database. See Restrictions.
* Support for LOB and LONG datatypes is available as of release 9.2, but only
for redo logs generated on a release 9.2 Oracle database.
* The redo logs must use a database character set that is compatible with the
character set of the database on which LogMiner is running.
* In general, the analysis of redo logs requires a dictionary that was
generated from the same database that generated the redo logs.
* If you are using a dictionary that is in flat file format or that is stored
in the redo logs, then the redo logs you want to analyze can be from either the
database on which LogMiner is running or from other databases.
* If you are using the online catalog as the LogMiner dictionary, you can only
analyze redo logs from the database on which LogMiner is running.
* LogMiner must be running on the same hardware platform that generated the
redo logs being analyzed. However, it does not have to be on the same system.
* It is important to specify the correct redo logs when running LogMiner. If
you omit redo logs that contain some of the data you need, you will get inaccurate
results when you query the V$LOGMNR_CONTENTS view.
To determine which redo logs are being analyzed in the current LogMiner session,
query the V$LOGMNR_LOGS view, which contains one row for each redo log.
The dictionary file must have the same database character set and be created from
the same database as the redo logs being analyzed. However, once the dictionary is
extracted, you can use it to mine the redo logs of that database in a separate
database instance without being connected to the source database.
Extracting a dictionary file also prevents problems that can occur when the
current data dictionary contains only the newest table definitions. For instance,
if a table you are searching for was dropped sometime in the past, the current
dictionary will not contain any references to it.
When the dictionary is in a flat file, fewer system resources are used than when
it is contained in the redo logs. It is recommended that you regularly back up the
dictionary extracts to ensure correct analysis of older redo logs.
Be sure that no DDL operations occur while the dictionary is being built.
Extracting the Dictionary to a Flat File
The following steps describe how to extract a dictionary to a flat file (including
extra steps you must take if you are using Oracle8). Steps 1 through 4 are
preparation steps. You only need to do them once, and then you can extract a
dictionary to a flat file as many times as you wish.
See Also:
Oracle9i Database Reference for more information about the init.ora file
1. To specify the directory in which the flat file dictionary will be created,
set the UTL_FILE_DIR initialization parameter in the init.ora file. For example,
to place the dictionary file in the /oracle/database directory, enter:
UTL_FILE_DIR = /oracle/database
Remember that for the changes to the init.ora file to take effect, you must
stop and restart the database.
2. For Oracle8 only. Otherwise, go to the next step: Use your operating
system's copy command to copy the dbmslmd.sql script, which is contained in the
$ORACLE_HOME/rdbms/admin directory on the Oracle8i database, to the same directory
in the Oracle8 database. For example, enter:
% cp /8.1/oracle/rdbms/admin/dbmslmd.sql /8.0/oracle/rdbms/admin/dbmslmd.sql
3. If the database is closed, use SQL*Plus to mount and then open the database
whose redo logs you want to analyze. For example, entering the STARTUP command
mounts and opens the database:
SQL> STARTUP
4. For Oracle8 only. Otherwise, go to the next step: Execute the copied
dbmslmd.sql script on the 8.0 database to install the DBMS_LOGMNR_D package. For
example, enter:
@dbmslmd.sql
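The extraction itself is performed by the DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_FLAT_FILE option. A sketch of the call, with an illustrative filename and directory (the directory should match the UTL_FILE_DIR setting):

```sql
SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', -
       '/oracle/database/', -
       OPTIONS => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
```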
You could also specify a filename and location without specifying the
STORE_IN_FLAT_FILE option. The result would be the same.
Extracting a Dictionary to the Redo Logs
To extract a dictionary to the redo logs, the database must be open and in
ARCHIVELOG mode and archiving must be enabled. While the dictionary is being
extracted to the redo log stream, no DDL statements can be executed. Therefore,
the dictionary snapshot extracted to the redo logs is guaranteed to be consistent,
whereas the dictionary extracted to a flat file is not.
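A sketch of the corresponding BUILD call for this case:

```sql
SQL> EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
```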
To ensure that the redo logs contain information of value to you, you must enable
at least minimal supplemental logging. See Supplemental Logging.
See Also:
Oracle9i Recovery Manager User's Guide for more information about ARCHIVELOG mode
The process of extracting the dictionary to the redo logs does consume database
resources, but if you limit the extraction to off-peak hours, this should not be a
problem and it is faster than extracting to a flat file. Depending on the size of
the dictionary, it may be contained in multiple redo logs. Provided the relevant
redo logs have been archived, you can find out which redo logs contain the start
and end of an extracted dictionary. To do so, query the V$ARCHIVED_LOG view, as
follows:
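Queries along the following lines, using the DICTIONARY_BEGIN and DICTIONARY_END columns of V$ARCHIVED_LOG, would identify those redo logs:

```sql
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN = 'YES';
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_END = 'YES';
```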
It is recommended that you periodically back up the redo logs so that the
information is saved and available at a later date. Ideally, this will not involve
any extra steps because if your database is being properly managed, there should
already be a process in place for backing up and restoring archived redo logs.
Again, because of the time required, it is good practice to do this during off-
peak hours.
Using the Online Catalog
To direct LogMiner to use the dictionary currently in use for the database,
specify the online catalog as your dictionary source when you start LogMiner, as
follows:
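A sketch of the start call using the online catalog as the dictionary source:

```sql
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
```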
Using the online catalog means that you do not have to bother extracting a
dictionary to a flat file or to the redo logs. In addition to using the online
catalog to analyze online redo logs, you can use it to analyze archived redo logs
provided you are on the same system that generated the archived redo logs.
The online catalog contains the latest information about the database and may be
the fastest way to start your analysis. Because DDL operations that change
important tables are somewhat rare, the online catalog generally contains the
information you need for your analysis.
Remember, however, that the online catalog can only reconstruct SQL statements
that are executed on the latest version of a table. As soon as the table is
altered, the online catalog no longer reflects the previous version of the table.
This means that LogMiner will not be able to reconstruct any SQL statements that
were executed on the previous version of the table. Instead, LogMiner generates
nonexecutable SQL in the SQL_REDO column (including hex-to-raw formatting of
binary values) similar to the following example:
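Such statements take roughly the following form (the object number, column names, and hex values here are purely illustrative):

```sql
insert into "UNKNOWN"."OBJ# 45522"("COL 1","COL 2")
  values (HEXTORAW('4a6f686e'), HEXTORAW('270f'));
```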
The online catalog option is not valid with the DDL_DICT_TRACKING option.
Tracking DDL Statements
LogMiner automatically builds its own internal dictionary from the source
dictionary that you specify when you start LogMiner (either a flat file
dictionary, a dictionary in the redo logs, or an online catalog).
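Tracking is enabled with the DDL_DICT_TRACKING option at startup; a query on the view can then select the DDL operations. Both statements below are sketches:

```sql
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
       DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.DDL_DICT_TRACKING);
SQL> SELECT USERNAME, SQL_REDO FROM V$LOGMNR_CONTENTS WHERE OPERATION = 'DDL';
```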
The information returned might be similar to the following, although the actual
information and how it is displayed will be different on your screen.
USERNAME   SQL_REDO
SYS        ALTER TABLE SCOTT.ADDRESS ADD CODE NUMBER;
SYS        CREATE USER KATHY IDENTIFIED BY VALUES 'E4C8B920449B4C32'
           DEFAULT TABLESPACE TS1;
The ability to track DDL statements helps you monitor schema evolution because SQL
statements used to change the logical structure of a table (because of DDL
operations such as adding or dropping of columns) can be reconstructed. In
addition, data manipulation language (DML) operations performed on new tables
created after the dictionary was extracted can also be shown.
Keep the following in mind when you use the DDL_DICT_TRACKING option:
Note:
In general, it is a good idea to keep the DDL tracking feature enabled because if
it is not enabled and a DDL event occurs, LogMiner returns some of the redo data
as hex bytes. Also, a metadata version mismatch could occur.
When you are using LogMiner, keep the recommendations and restrictions described
in the following sections in mind.
Recommendations
Oracle Corporation recommends that you take the following into consideration when
you are using LogMiner:
See Also:
Oracle9i Supplied PL/SQL Packages and Types Reference for a full description
of the DBMS_LOGMNR_D.SET_TABLESPACE routine
Restrictions
See Also:
Supplemental Logging
LogMiner can potentially be dealing with large amounts of information. There are
several methods you can use to limit the information that is returned to the
V$LOGMNR_CONTENTS view, as well as the speed at which it is returned. These
options are specified when you start LogMiner.
Showing Only Committed Transactions
When you use the COMMITTED_DATA_ONLY option, only rows belonging to committed
transactions are shown in the V$LOGMNR_CONTENTS view. This enables you to filter
out rolled back transactions, transactions that are in progress, and internal
operations.
To enable this option, you specify it when you start LogMiner, as follows:
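A sketch of the start call:

```sql
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.COMMITTED_DATA_ONLY);
```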
When you specify the COMMITTED_DATA_ONLY option, LogMiner groups together all DML
operations that belong to the same transaction. Transactions are returned in the
order in which they were committed.
If long-running transactions are present in the redo logs being analyzed, use of
this option may cause an "Out of Memory" error.
The default is for LogMiner to show rows corresponding to all transactions and to
return them in the order in which they are encountered in the redo logs.
For example, suppose you start LogMiner without specifying COMMITTED_DATA_ONLY and
you execute the following query:
The output would be as follows. Both committed and uncommitted transactions are
returned and rows from different transactions are interwoven.
Now suppose you start LogMiner, but this time you specify the COMMITTED_DATA_ONLY
option. If you executed the previous query again, the output would look as
follows:
Because the commit for the 1.6.124 transaction happened before the commit for the
1.5.123 transaction, the entire 1.6.124 transaction is returned first. This is
true even though the 1.5.123 transaction started before the 1.6.124 transaction.
None of the 1.7.234 transaction is returned because a commit was never issued for
it.
Skipping Redo Corruptions
When you use the SKIP_CORRUPTION option, any corruptions in the redo logs are
skipped during select operations from the V$LOGMNR_CONTENTS view. Rows that are
retrieved after the corruption are flagged with a "Log File Corruption
Encountered" message. Additionally, for every corrupt redo record encountered, an
informational row is returned that indicates how many blocks were skipped.
The default is for the select operation to terminate at the first corruption it
encounters in the redo log.
To enable this option, you specify it when you start LogMiner, as follows:
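A sketch of the start call:

```sql
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.SKIP_CORRUPTION);
```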
Filtering Data By Time
To filter data by time, set the STARTTIME and ENDTIME parameters. The procedure
expects date values. Use the TO_DATE function to specify date and time, as in this
example:
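A sketch of the call (the dictionary path and times are illustrative):

```sql
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
       DICTFILENAME => '/oracle/database/dictionary.ora', -
       STARTTIME => TO_DATE('01-Jan-2003 08:30:00', 'DD-MON-YYYY HH:MI:SS'), -
       ENDTIME   => TO_DATE('01-Jan-2003 08:45:00', 'DD-MON-YYYY HH:MI:SS'));
```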
If no STARTTIME or ENDTIME parameters are specified, the entire redo log is read
from start to end, for each SELECT statement issued.
The timestamps should not be used to infer ordering of redo records. You can infer
the order of redo records by using the SCN.
Filtering Data By SCN
To filter data by SCN (system change number), use the STARTSCN and ENDSCN
parameters, as in this example:
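A sketch of the call (the dictionary path and SCN values are illustrative):

```sql
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
       DICTFILENAME => '/oracle/database/dictionary.ora', -
       STARTSCN => 100, -
       ENDSCN   => 150);
```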
The STARTSCN and ENDSCN parameters override the STARTTIME and ENDTIME parameters
in situations where all are specified.
If no STARTSCN or ENDSCN parameters are specified, the entire redo log is read
from start to end, for each SELECT statement issued.
Accessing LogMiner Information
LogMiner information is contained in the following views. You can use SQL to query
them as you would any other view.
* V$LOGMNR_CONTENTS
Shows changes made to user data and to the data dictionary.
* V$LOGMNR_DICTIONARY
Shows information about the LogMiner dictionary file, provided the dictionary
was created using the STORE_IN_FLAT_FILE option.
* V$LOGMNR_LOGS
Shows information about specified redo logs. There is one row for each redo
log.
* V$LOGMNR_PARAMETERS
Shows information about optional LogMiner parameters, including starting and
ending system change numbers (SCNs) and starting and ending times.
The rest of this section discusses the following topics with regard to accessing
LogMiner information:
* Querying V$LOGMNR_CONTENTS
* Extracting Actual Data Values from Redo Logs
Querying V$LOGMNR_CONTENTS
When a SQL select operation is executed against the V$LOGMNR_CONTENTS view, the
redo logs are read sequentially. Translated information from the redo logs is
returned as rows in the V$LOGMNR_CONTENTS view. This continues until either the
filter criteria specified at startup are met or the end of the redo log is
reached.
LogMiner returns all the rows in SCN order unless you have used the
COMMITTED_DATA_ONLY option to specify that only committed transactions should be
retrieved. SCN order is the order normally applied in media recovery.
For example, suppose you wanted to find out about any delete operations that a
user named Ron had performed on the scott.orders table. You could issue a query
similar to the following:
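A query along these lines would do it (the user and table names follow the scenario above):

```sql
SQL> SELECT OPERATION, SQL_REDO, SQL_UNDO
  2    FROM V$LOGMNR_CONTENTS
  3   WHERE SEG_OWNER = 'SCOTT' AND SEG_NAME = 'ORDERS' AND
  4         OPERATION = 'DELETE' AND USERNAME = 'RON';
```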
The following output would be produced. The formatting may be different on your
display than that shown here.
This output shows that user Ron deleted two rows from the scott.orders table. The
reconstructed SQL statements are equivalent, but not necessarily identical, to the
actual statement that Ron issued. The reason for this is that the original WHERE
clause is not logged in the redo logs, so LogMiner can only show deleted (or
updated or inserted) rows individually.
Therefore, even though a single DELETE statement may have been responsible for the
deletion of both rows, the output in V$LOGMNR_CONTENTS does not reflect that.
Thus, the actual DELETE statement may have been DELETE FROM SCOTT.ORDERS WHERE
EXPR_SHIP = 'Y' or it might have been DELETE FROM SCOTT.ORDERS WHERE QTY < 8.
Executing Reconstructed SQL Statements
By default, SQL_REDO and SQL_UNDO statements are ended with a semicolon. Depending
on how you plan to use the reconstructed statements, you may or may not want them
to include the semicolon. To suppress the semicolon, specify the
DBMS_LOGMNR.NO_SQL_DELIMITER option when you start LogMiner.
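A sketch of a start call that suppresses the delimiter:

```sql
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.NO_SQL_DELIMITER);
```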
Extracting Actual Data Values from Redo Logs
LogMiner lets you make queries based on actual data values. For instance, you
could perform a query to show all updates to scott.emp that increased sal more
than a certain amount. Data such as this can be used to analyze system behavior
and to perform auditing tasks.
LogMiner data extraction from redo logs is performed using two mine functions:
DBMS_LOGMNR.MINE_VALUE and DBMS_LOGMNR.COLUMN_PRESENT. These functions are part of
the DBMS_LOGMNR package. Support for these mine functions is provided by the
REDO_VALUE and UNDO_VALUE columns in the V$LOGMNR_CONTENTS view.
The following is an example of how you could use the MINE_VALUE function to select
all updates to scott.emp that increased the sal column to more than twice its
original value:
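A query along these lines (a sketch; it assumes the scott.emp table from the example) compares the mined redo and undo values of the sal column:

```sql
SQL> SELECT SQL_REDO
  2    FROM V$LOGMNR_CONTENTS
  3   WHERE SEG_OWNER = 'SCOTT' AND SEG_NAME = 'EMP' AND OPERATION = 'UPDATE' AND
  4         DBMS_LOGMNR.MINE_VALUE(REDO_VALUE, 'SCOTT.EMP.SAL') >
  5         2 * DBMS_LOGMNR.MINE_VALUE(UNDO_VALUE, 'SCOTT.EMP.SAL');
```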
As shown in this example, the MINE_VALUE function takes two arguments. The first
one specifies whether to mine the redo (REDO_VALUE) or undo (UNDO_VALUE) portion
of the data. The second argument is a string that specifies the fully-qualified
name of the column to be mined (in this case, SCOTT.EMP.SAL). The MINE_VALUE
function always returns a string that can be converted back to the original
datatype.
NULL Returns From the MINE_VALUE Function
The MINE_VALUE function returns NULL if either of the following conditions is
true:
* The specified column is not present in the redo or undo portion of the data.
* The specified column is present and has a null value.
To distinguish between these two cases, use the DBMS_LOGMNR.COLUMN_PRESENT
function, which returns 1 if the column is present in the redo or undo portion of
the data and 0 otherwise. For example, the following query returns the transaction
identifier and the increase in the sal column for each update that contains the
column in both its redo and undo portions:
SQL> SELECT
2 (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,
3 (DBMS_LOGMNR.MINE_VALUE(REDO_VALUE, 'SCOTT.EMP.SAL') -
4 DBMS_LOGMNR.MINE_VALUE(UNDO_VALUE, 'SCOTT.EMP.SAL')) AS INCR_SAL
5 FROM V$LOGMNR_CONTENTS
6 WHERE
7 DBMS_LOGMNR.COLUMN_PRESENT(REDO_VALUE, 'SCOTT.EMP.SAL') = 1 AND
8 DBMS_LOGMNR.COLUMN_PRESENT(UNDO_VALUE, 'SCOTT.EMP.SAL') = 1 AND
9 OPERATION = 'UPDATE';
Supplemental Logging
Redo logs are generally used for instance recovery and media recovery. The data
needed for such operations is automatically recorded in the redo logs. However, a
redo-based application may require that additional information be logged in the
redo logs. The following are examples of situations in which supplemental data may
be needed:
The default behavior of the Oracle database server is to not provide any
supplemental logging at all, which means that certain features will not be
supported (see Restrictions). If you want to make full use of LogMiner support,
you must enable supplemental logging.
The use of LogMiner with minimal supplemental logging enabled does not have any
significant performance impact on the instance generating the redo logs. However,
the use of LogMiner with database-wide supplemental logging enabled does impose
significant overhead and affects performance.
There are two types of supplemental logging: database supplemental logging and
table supplemental logging. Each of these is described in the following sections.
Database Supplemental Logging
There are two types of database supplemental logging: minimal and identification
key logging.
Minimal supplemental logging logs the minimal amount of information needed for
LogMiner to identify, group, and merge the REDO operations associated with DML
changes. It ensures that LogMiner (and any products building on LogMiner
technology) have sufficient information to support chained rows and various
storage arrangements such as cluster tables. In most situations, you should at
least enable minimal supplemental logging. To do so, execute the following
statement:
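In Oracle9i the statement takes this form:

```sql
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
```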
In LogMiner release 9.0.1, minimal supplemental logging was the default behavior.
In release 9.2, the default is no supplemental logging. It must be specifically
enabled.
Identification key logging is necessary when supplemental log data will be the
source of change in another database, such as a logical standby. To enable
database-wide identification key logging, execute the following statement:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX)
COLUMNS;
This statement results in all primary key values, database-wide, being logged
regardless of whether or not any of them are modified.
If a table does not have a primary key, but has one or more non-null unique key
constraints, one of the constraints is chosen arbitrarily for logging as a means
of identifying the row getting updated.
If the table has neither a primary key nor a unique index, then all columns except
LONG and LOB are supplementally logged. Therefore, Oracle Corporation recommends
that when you use supplemental logging, all or most tables be defined to have
primary or unique keys.
Note:
Keep the following in mind when you use identification key logging:
Table Supplemental Logging
Table supplemental logging uses log groups to log supplemental information. There
are two types of log groups:
* Unconditional log groups - The before images of specified columns are logged
any time the table is updated, regardless of whether the update affected any of
the specified columns. This is sometimes referred to as an ALWAYS log group.
* Conditional log groups - The before images of all specified columns are
logged only if at least one of the columns in the log group is updated.
To enable supplemental logging that uses unconditional log groups, use the ALWAYS
clause as shown in the following example:
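For example (the log group and column names follow the description below):

```sql
SQL> ALTER TABLE scott.emp
  2    ADD SUPPLEMENTAL LOG GROUP emp_parttime (empno, ename, deptno) ALWAYS;
```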
This creates a log group named emp_parttime on scott.emp that consists of the
columns empno, ename, and deptno. These columns will be logged every time an
UPDATE statement is executed on scott.emp, regardless of whether or not the update
affected them. If you wanted to have the entire row image logged any time an
update was made, you could create a log group that contained all the columns in
the table.
Note:
To enable supplemental logging that uses conditional log groups, omit the ALWAYS
clause from your ALTER TABLE statement, as shown in the following example:
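For example (matching the description that follows):

```sql
SQL> ALTER TABLE scott.emp
  2    ADD SUPPLEMENTAL LOG GROUP emp_fulltime (empno, ename, deptno);
```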
This creates a log group named emp_fulltime on scott.emp. Just like the previous
example, it consists of the columns empno, ename, and deptno. But because the
ALWAYS clause was omitted, before images of the columns will be logged only if at
least one of the columns is updated.
Usage Notes for Log Groups
* A column can belong to more than one log group. However, the before image of
the columns gets logged only once.
* Redo logs do not contain any information about which log group a column is
part of or whether a column's before image is being logged because of log group
logging or identification key logging.
* If you specify the same columns to be logged both conditionally and
unconditionally, the columns are logged unconditionally.
Steps in a Typical LogMiner Session
This section describes the steps in a typical LogMiner session. Each step is
described in its own subsection.
The DBMS_LOGMNR package contains the procedures used to initialize and run
LogMiner, including interfaces to specify names of redo logs, filter criteria, and
session characteristics. The DBMS_LOGMNR_D package queries the dictionary tables
of the current database to create a LogMiner dictionary file.
The LogMiner packages are owned by the SYS schema. Therefore, if you are not
connected as user SYS, you must include SYS in your call. For example:
EXECUTE SYS.DBMS_LOGMNR.END_LOGMNR
See Also:
* Oracle9i Supplied PL/SQL Packages and Types Reference for details about
syntax and parameters for these LogMiner packages
* Oracle9i Application Developer's Guide - Fundamentals for information about
executing PL/SQL procedures
Perform Initial Setup Activities
There are initial setup activities that you must perform before using LogMiner for
the first time. You only need to perform these activities once, not every time you
use LogMiner:
* Enable the type of supplemental logging you want to use. At the very least,
Oracle Corporation recommends that you enable minimal supplemental logging, as
follows:
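The statement for minimal supplemental logging is:

```sql
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
```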
Extract a Dictionary
To use LogMiner you must supply it with a dictionary by doing one of the
following:
Specify Redo Logs for Analysis
Before you can start LogMiner, you must specify the redo logs that you want to
analyze. To do so, execute the DBMS_LOGMNR.ADD_LOGFILE procedure, as demonstrated
in the following steps. You can add and remove redo logs in any order.
Note:
If you will be mining in the same instance that is generating the redo logs, you
only need to specify one archived redo log and the CONTINUOUS_MINE option when you
start LogMiner. See Continuous Mining.
1. Use SQL*Plus to start an Oracle instance, with the database either mounted
or unmounted. For example, enter:
SQL> STARTUP
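2. Create a list of redo logs by specifying the NEW option of the
DBMS_LOGMNR.ADD_LOGFILE procedure, which signals the beginning of a new list. The
following sketch uses an illustrative log name:

```sql
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
       LOGFILENAME => '/oracle/logs/log1.f', -
       OPTIONS => DBMS_LOGMNR.NEW);
```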
3. If desired, add more redo logs by specifying the ADDFILE option of the
DBMS_LOGMNR.ADD_LOGFILE procedure. For example, enter the following to add
/oracle/logs/log2.f:
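A sketch of the call:

```sql
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
       LOGFILENAME => '/oracle/logs/log2.f', -
       OPTIONS => DBMS_LOGMNR.ADDFILE);
```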
The OPTIONS parameter is optional when you are adding additional redo logs.
For example, you could simply enter the following:
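Omitting the OPTIONS parameter, the call reduces to:

```sql
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/logs/log2.f');
```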
Continuous Mining
The continuous mining option is useful if you are mining in the same instance that
is generating the redo logs. When you plan to use this option, you only need to
specify one archived redo log before starting LogMiner. Then, when you start
LogMiner, specify the DBMS_LOGMNR.CONTINUOUS_MINE option, which directs LogMiner
to automatically add and mine subsequent archived redo logs as they are generated;
the option is typically used with the online catalog as the dictionary source.
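A sketch of such a start call, combining continuous mining with the online catalog:

```sql
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
       DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE);
```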
Note:
Start LogMiner
After you have created a dictionary file and specified which redo logs to analyze,
you can start a LogMiner session. Take the following steps:
1. Execute the DBMS_LOGMNR.START_LOGMNR procedure to start LogMiner.
If you are specifying the name of a flat file dictionary, you must supply a
fully qualified filename for the dictionary file. For example, to start LogMiner
using /oracle/database/dictionary.ora, issue the following command:
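Using the DICTFILENAME parameter:

```sql
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(DICTFILENAME => '/oracle/database/dictionary.ora');
```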
If you are not specifying a flat file dictionary name, then use the OPTIONS
parameter to specify either the DICT_FROM_REDO_LOGS or DICT_FROM_ONLINE_CATALOG
option.
If you add additional redo logs after your LogMiner session has been
started, you must restart LogMiner. You can specify new startup parameters if
desired. Otherwise, LogMiner uses the parameters you specified for the previous
session.
For more information on using the online catalog, see Using the Online
Catalog.
2. Optionally, you can filter your query by time or by SCN. See Filtering Data
By Time or Filtering Data By SCN.
3. You can also use the OPTIONS parameter to specify additional characteristics
of your LogMiner session. For example, you might decide to use the online catalog
as your dictionary and to have only committed transactions shown in the
V$LOGMNR_CONTENTS view, as follows:
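A sketch of the call, combining the two options:

```sql
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
       DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY);
```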
The following list is a summary of LogMiner settings that you can specify
with the OPTIONS parameter and where to find more information about them.
* DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG -- See Using the Online Catalog
* DBMS_LOGMNR.DICT_FROM_REDO_LOGS -- See step 1 in this list
* DBMS_LOGMNR.COMMITTED_DATA_ONLY -- See Showing Only Committed
Transactions
* DBMS_LOGMNR.SKIP_CORRUPTION -- See Skipping Redo Corruptions
* DBMS_LOGMNR.DDL_DICT_TRACKING -- See Tracking DDL Statements
* DBMS_LOGMNR.NEW, DBMS_LOGMNR.ADDFILE, and DBMS_LOGMNR.REMOVEFILE --
See Specify Redo Logs for Analysis
* DBMS_LOGMNR.NO_SQL_DELIMITER -- See Formatting of Returned Data
* DBMS_LOGMNR.PRINT_PRETTY_SQL -- See Formatting of Returned Data
* DBMS_LOGMNR.CONTINUOUS_MINE -- See Continuous Mining
Query V$LOGMNR_CONTENTS
At this point, LogMiner is started and you can perform queries against the
V$LOGMNR_CONTENTS view. See Querying V$LOGMNR_CONTENTS for examples of this.
End a LogMiner Session
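To end the session, execute the DBMS_LOGMNR.END_LOGMNR procedure:

```sql
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;
```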
This procedure closes all the redo logs and allows all the database and system
resources allocated by LogMiner to be released.
If this procedure is not executed, LogMiner retains all its allocated resources
until the end of the Oracle session in which it was invoked. It is particularly
important to use this procedure to end LogMiner if either the DDL_DICT_TRACKING
option or the DICT_FROM_REDO_LOGS option was used.
Example Uses of LogMiner
Example: Tracking Changes Made by a Specific User
This example shows how to see all changes made to the database in a specific time
range by one of your users: joedevo. Connect to the database and then take the
following steps:
Step 1: Extracting a Dictionary
To use LogMiner to analyze joedevo's data, you must either create a dictionary
file before joedevo makes any changes or specify use of the online catalog at
LogMiner startup. See Extract a Dictionary for examples of creating dictionaries.
Step 2: Adding Redo Logs
Assume that joedevo has made some changes to the database. You can now specify the
names of the redo logs that you want to analyze, as follows:
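A sketch of the calls (the log names are illustrative):

```sql
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
       LOGFILENAME => 'log1orc1.ora', OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
       LOGFILENAME => 'log2orc1.ora', OPTIONS => DBMS_LOGMNR.ADDFILE);
```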
Step 3: Starting LogMiner
Start LogMiner and limit the search to the specified time range:
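A sketch of the start call (the dictionary path and times are illustrative):

```sql
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
       DICTFILENAME => '/oracle/database/dictionary.ora', -
       STARTTIME => TO_DATE('07-Aug-2002 08:30:00', 'DD-MON-YYYY HH:MI:SS'), -
       ENDTIME   => TO_DATE('07-Aug-2002 08:45:00', 'DD-MON-YYYY HH:MI:SS'));
```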
At this point, the V$LOGMNR_CONTENTS view is available for queries. You decide to
find all of the changes made by user joedevo to the salary table. Execute the
following SELECT statement:
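A query of this shape (a sketch; the table name follows the example) returns the SQL_REDO and SQL_UNDO columns:

```sql
SQL> SELECT SQL_REDO, SQL_UNDO
  2    FROM V$LOGMNR_CONTENTS
  3   WHERE USERNAME = 'JOEDEVO' AND SEG_NAME = 'SALARY';
```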
For both the SQL_REDO and SQL_UNDO columns, two rows are returned (the format of
the data display will be different on your screen). You discover that joedevo
requested two operations: he deleted his old salary and then inserted a new,
higher salary. You now have the data necessary to undo this operation.
SQL_REDO                             SQL_UNDO
--------                             --------
delete from SALARY                   insert into SALARY(NAME, EMPNO, SAL)
where EMPNO = 12345                  values ('JOEDEVO', 12345, 500)
and ROWID = 'AAABOOAABAAEPCABA';
Example: Calculating Table Access Statistics
In this example, assume you manage a direct marketing database and want to
determine how productive the customer contacts have been in generating revenue for
a two week period in August. Assume that you have already created the dictionary
and added the redo logs you want to search (as demonstrated in the previous
example). Take the following steps:
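A query along these lines (a sketch; the Hits alias matches the description below) would aggregate the operations per table over the mined range:

```sql
SQL> SELECT SEG_OWNER, SEG_NAME, COUNT(*) AS Hits
  2    FROM V$LOGMNR_CONTENTS
  3   WHERE SEG_NAME NOT LIKE '%$'
  4   GROUP BY SEG_OWNER, SEG_NAME;
```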
The values in the Hits column show the number of times that the named table
had an insert, delete, or update operation performed on it during the two week
period specified in the query.