- Increase DB_BLOCK_SIZE when recreating the database, if possible. The
larger the block size, the smaller the number of I/O cycles needed.
This change is permanent, so consider all effects it will have before
changing it.
For more info on db block sizing see Note:34020.1
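As a quick check before rebuilding, the current block size can be read from the parameter view (a sketch; run it from any SQL*Plus session):

```sql
-- Show the current block size. DB_BLOCK_SIZE cannot be changed in place;
-- a new value only takes effect when the database is recreated.
SELECT value
  FROM v$parameter
 WHERE name = 'db_block_size';
```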
Init.ora Parameter Changes
--------------------------
- Set LOG_CHECKPOINT_INTERVAL to a number that is larger than the size
of the redo log files. This number is in OS blocks (512 bytes on most
Unix systems). This reduces checkpoints to a minimum (only at log
switch time).
- Increase SORT_AREA_SIZE. Indexes are not being built yet, but any
unique or primary key constraints will be. The increase depends on what
other activity is on the machine and how much free memory is available.
Try 5-10 times the normal setting. If the machine starts swapping and
paging, you have set it too high.
- Try increasing db_block_buffers and shared_pool_size.
The shared pool holds cached dictionary information and objects such as
cursors, procedures, and triggers. Dictionary entries and cursors created
on the import's behalf (there may be many, since import is always working
on a new table) can fragment the pool: stale objects sit around until the
aging/flush mechanism kicks in on a per-request basis, when a request
cannot be satisfied from the lookaside lists. ALTER SYSTEM FLUSH
SHARED_POOL throws out *all* currently unused objects in one operation
and therefore defragments the pool.
If you can restart the instance with a bigger SHARED_POOL_SIZE prior
to importing, that would definitely help. When it starts to slow down,
at least you can see what's going on by doing the following:
SQL> set serveroutput on size 2000;
SQL> begin
  2    dbms_shared_pool.sizes(2000);
  3  end;
  4  /
The dbms_shared_pool package is created by $ORACLE_HOME/rdbms/admin/dbmspool.sql
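Taken together, the settings above might look like the following init.ora fragment. All values here are illustrative assumptions, not recommendations; size them against your machine's memory and your redo log size:

```
# init.ora fragment -- example values only
log_checkpoint_interval = 1000000    # larger than the redo log size (in OS blocks)
sort_area_size          = 10485760   # 5-10x the normal setting; watch for paging
db_block_buffers        = 20000      # bigger buffer cache during the load
shared_pool_size        = 52428800   # room for dictionary info and cursors
```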
Import Options Changes
----------------------
- Use COMMIT=N. This will cause import to commit after each object (table),
not after each buffer. This is why one large rollback segment is needed.
- Use a large BUFFER size. This value also depends on system activity,
database size, etc. Several megabytes is usually enough, but if you
have the memory it can go higher. Again, check for paging and swapping
at the OS level to see if it is set too high. A larger buffer reduces the
number of times the import program has to go to the export file for data:
each time, it fetches one buffer's worth of data.
- Consider using INDEXES=N during import. By default, user-defined indexes
are created after each table has been created and populated, but if the
primary objective of the import is to get the data in as fast as possible,
then importing with INDEXES=N will help. The indexes can then be created
at a later date when time is not a factor.
If this approach is chosen, you will need to use the INDEXFILE option
to extract the DDL for the index creation, or to re-run the import with
INDEXES=Y and ROWS=N.
For more info on extracting the DDL from an export file see
Note:29765.1
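The options above can be collected in an import parameter file. A sketch for the fast data-load pass (the file names and buffer size are placeholder assumptions):

```
# imp.par -- data-load pass only; index creation deferred
FILE=expdat.dmp
LOG=imp_data.log
FULL=Y
COMMIT=N
BUFFER=10485760
INDEXES=N
```

It would be invoked as `imp username/password parfile=imp.par`; the index DDL can then be extracted in a second pass with the INDEXFILE option, or the import re-run with INDEXES=Y and ROWS=N, as described above.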
Importing a table with a LONG column may cause a higher rate of I/O and disk
utilization than importing a table without a LONG column.
There are no parameters available within IMP to help optimize the import of
these data types.