CREATING A SYNONYM
Database links are notoriously slow and are not an optimal solution for applications that need real-time
response; expect the link to add a delay to every access of the remote data.
A synonym (or a view) can hide the link syntax from users. The session below is reconstructed around the
original output (the object names are illustrative):
SQL> create synonym emp_syn for emp@remotedb;

Synonym created.

SQL> select count(*) from emp_syn;

  COUNT(*)
----------
        14

SQL> create view emp_view as select * from emp@remotedb;

View created.

SQL> select count(*) from emp_view;

  COUNT(*)
----------
        14
HOW TO DROP A DATABASE LINK
Dropping a database link from your Oracle database is as easy as doing the following:
drop database link remotedb;
or
drop public database link remotedb;
To drop your own database link, you need no system privilege other than 'create database link',
granted either directly to your username or via a role.
You cannot drop another schema's private database link by qualifying it with the schema name:
Oracle will simply look in your own schema for a private database link whose name
includes the other schema's name, and will not find it.
To drop a public database link, you will need the system privilege 'drop public database
link'.
The general syntax for querying over a database link is:
select ... from table_name@db_link_name;
For example:
SQL> select * from dept@remotedb;
The database link is opened during the select (or other DML transaction) and remains open
for the duration of the session.
After you close a session, the links that were active in the session are automatically closed.
You can also close a db link explicitly, without ending the session:
SQL> alter session close database link remotedb;

Session altered.
You can use insert/update/delete statements just as easily over database links. The DML statements
below are reconstructed examples around the original output:
SQL> insert into dept@remotedb values (50, 'IT', 'BOSTON');
1 row created.
SQL> commit;
Commit complete.
SQL> update dept@remotedb set loc = 'DALLAS' where deptno = 50;
1 row updated.
SQL> commit;
Commit complete.
SQL> delete from dept@remotedb where deptno = 50;
1 row deleted.
SQL> commit;
Commit complete.
Note that DDL operations are not allowed through a database link:
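For instance, a TRUNCATE over the link is rejected (a sketch; the table name is an assumption, and the exact error layout may vary by version):

```sql
-- DDL such as TRUNCATE cannot be issued against a remote object via a link.
SQL> truncate table dept@remotedb;
ERROR at line 1:
ORA-02021: DDL operations are not allowed on a remote database
```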
******************************************************************************
Oracle Tablespace:-
create, add a file, remove a file, drop, rename
Dropping a tablespace
DROP TABLESPACE DATA_1_TBS; (if the tablespace is empty)
DROP TABLESPACE DATA_1_TBS INCLUDING CONTENTS; (if the objects in the tablespace
are no longer needed)
However, the datafiles must then be deleted at the OS level (or use DROP TABLESPACE ...
INCLUDING CONTENTS AND DATAFILES to have Oracle remove them as well).
Rename a tablespace
(in 10g)
ALTER TABLESPACE DATA_1_TBS RENAME TO DATA_10_TBS;
(in 9i)
1. Create a new tablespace NEW_TBLS.
2. Copy all objects from OLD_TBLS to NEW_TBLS.
3. Drop tablespace OLD_TBLS.
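In 9i, step 2 is typically done object by object with ALTER TABLE ... MOVE and ALTER INDEX ... REBUILD (a sketch; the object names are assumptions):

```sql
-- Move a table's segment into the new tablespace.
ALTER TABLE emp MOVE TABLESPACE new_tbls;
-- Moving a table marks its indexes UNUSABLE, so rebuild them in the new tablespace.
ALTER INDEX emp_pk REBUILD TABLESPACE new_tbls;
```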
Moving the tablespace files
=> for Data files, Log files:
1) Shutdown the database.
2) Rename the physical file on the OS. ==> Win: SQL> HOST MOVE file1.dbf file2.dbf
3) Start the database in MOUNT mode.
ALTER DATABASE RENAME FILE 'C:\OHOME_9I\ORADATA\DB9\REDO01.LOG' TO 'C:\ORACLE\data\REDO01.LOG';
ALTER DATABASE RENAME FILE 'C:\OHOME_9I\ORADATA\DB9\REDO02.LOG' TO 'C:\ORACLE\data\REDO02.LOG';
ALTER DATABASE RENAME FILE 'C:\OHOME_9I\ORADATA\DB9\REDO03.LOG' TO 'C:\ORACLE\data\REDO03.LOG';
ALTER DATABASE OPEN;
=> for Control File (SPFILE is used)
1) Alter control_files initialisation parameter in SPFILE
ALTER SYSTEM SET control_files = 'C:\NEW_PATH\RENAME_CONTROL01.CTL',
'C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\CONTROL02.CTL',
'C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\CONTROL03.CTL'
SCOPE = SPFILE;
2) Shutdown the database.
3) Rename the physical control file on the OS. ==> Win: SQL> HOST MOVE file1.ctl file2.ctl
4) Start the database.
* EMNC (event monitor coordinator) is the background server process used for database
event management and notifications.
* FBDA (flashback data archiver process) archives the historical rows of tracked tables into
flashback data archives. Tracked tables are tables which are enabled for flashback archive.
When a transaction containing DML on a tracked table commits, this process stores the pre-
image of the rows into the flashback archive. It also keeps metadata on the current rows.
FBDA is also responsible for automatically managing the flashback data archive for space,
organization, and retention and keeps track of how far the archiving of tracked transactions
has occurred.
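As a sketch of how a table becomes "tracked" (the archive, tablespace, and table names are assumptions):

```sql
-- Create a flashback data archive and enable a table for it;
-- FBDA will then archive the pre-images of the table's changed rows.
CREATE FLASHBACK ARCHIVE fda1 TABLESPACE fda_ts RETENTION 1 YEAR;
ALTER TABLE emp FLASHBACK ARCHIVE fda1;
```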
* GTX0-j (global transaction) processes provide transparent support for XA global
transactions in an Oracle RAC environment. The database autotunes the number of these
processes based on the workload of XA global transactions. Global transaction processes
are only seen in an Oracle RAC environment.
* MARK marks ASM allocation units as stale following a missed write to an offline disk.
*******************************************************************************
TABLE EXPORTS/IMPORTS
The TABLES parameter is used to specify the tables to be exported/imported. For example:
expdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log
impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log
For example output files see expdpEMP_DEPT.log and impdpEMP_DEPT.log.
The TABLE_EXISTS_ACTION=APPEND parameter allows data to be imported into existing
tables.
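A sketch of such an import, reusing the connection and file names from the examples above:

```shell
# Append the dump file's rows into EMP and DEPT even though the tables already exist.
impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp table_exists_action=APPEND
```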
SCHEMA EXPORTS/IMPORTS
The OWNER parameter of exp has been replaced by the SCHEMAS parameter which is used to
specify the schemas to be exported. The following is an example of the schema export and
import syntax.
expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp
logfile=expdpSCOTT.log
DATABASE EXPORTS/IMPORTS
The FULL parameter indicates that a complete database export/import is required. For example:
expdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=expdpDB10G.log
impdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=impdpDB10G.log
For an example output file see expdpDB10G.log.
INCLUDE AND EXCLUDE
The INCLUDE and EXCLUDE parameters can be used to limit the export/import to specific
objects. When the INCLUDE parameter is used, only those objects specified by it will be
included in the export/import. When the EXCLUDE parameter is used, all objects except those
specified by it will be included in the export/import. The two parameters are mutually
exclusive, so use the parameter that requires the fewest entries to give you the result you
require. The basic syntax for both parameters is the same.
INCLUDE=object_type[:name_clause] [, ...]
EXCLUDE=object_type[:name_clause] [, ...]
The following code shows how they can be used as command line parameters.
expdp scott/tiger@db10g schemas=SCOTT include=TABLE:"IN ('EMP', 'DEPT')" directory=TEST_DIR dumpfile=SCOTT.dmp
logfile=expdpSCOTT.log
or, placed in a parameter file:
INCLUDE=TABLE
INCLUDE=VIEW
INCLUDE=PACKAGE:"LIKE '%API'"
The valid object type paths that can be included or excluded can be displayed using
the DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS,
and TABLE_EXPORT_OBJECTS views.
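For instance, the table-related paths available at schema level can be listed like this (a sketch; the filter pattern is an assumption):

```sql
-- List the object paths that INCLUDE/EXCLUDE will accept for schema-mode jobs.
SELECT object_path, comments
FROM   schema_export_objects
WHERE  object_path LIKE '%TABLE%';
```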
NETWORK EXPORTS/IMPORTS (NETWORK_LINK)
The NETWORK_LINK parameter identifies a database link to be used as the source for a
network export/import. The following database link will be used to demonstrate its use.
CONN / AS SYSDBA
GRANT CREATE DATABASE LINK TO test;
CONN test/test
CREATE DATABASE LINK remote_scott CONNECT TO scott IDENTIFIED BY tiger
USING 'DEV';
In the case of exports, the NETWORK_LINK parameter identifies the database link pointing to
the source server. The objects are exported from the source server in the normal manner,
but written to a directory object on the local server, rather than one on the source server.
Both the local and remote users require the EXP_FULL_DATABASE role granted to them.
expdp test/test@db10g tables=SCOTT.EMP network_link=REMOTE_SCOTT
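For imports over a link, no dump file is needed at all: rows are pulled straight across the network. A sketch reusing the link above (the remap_schema mapping and log file name are assumptions):

```shell
# Import SCOTT.EMP directly from the remote database into the local TEST schema.
impdp test/test@db10g tables=SCOTT.EMP network_link=REMOTE_SCOTT directory=TEST_DIR logfile=impdpSCOTT.log remap_schema=SCOTT:TEST
```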
MISCELLANEOUS INFORMATION
Unlike the original exp and imp utilities, all Data Pump ".dmp" and ".log" files are created on
the Oracle server, not the client machine.
All data pump actions are performed by multiple jobs (server processes not DBMS_JOB
jobs). These jobs are controlled by a master control process which uses Advanced
Queuing. At runtime an advanced queue table, named after the job name, is created and
used by the master control process. The table is dropped on completion of the data pump
job. The job and the advanced queue can be named using the JOB_NAME parameter.
Cancelling the client process does not stop the associated data pump job. Issuing "ctrl+c"
on the client during a job stops the client output and presents a command prompt. Typing
"status" at this prompt allows you to monitor the current job.
Export> status
Job: SYS_EXPORT_FULL_01
Operation: EXPORT
Mode: FULL
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: D:\TEMP\DB10G.DMP
bytes written: 4,096
Worker 1 Status:
State: EXECUTING
Object Schema: SYSMAN
Object Name: MGMT_CONTAINER_CRED_ARRAY
Object Type: DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
Completed Objects: 261
Total Objects: 261
Data pump performance can be improved by using the PARALLEL parameter. This should be
used in conjunction with the "%U" wildcard in the DUMPFILE parameter to allow multiple
dumpfiles to be created or read.
expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR parallel=4
dumpfile=SCOTT_%U.dmp logfile=expdpSCOTT.log
The DBA_DATAPUMP_JOBS view can be used to monitor the current jobs.
system@db10g> select * from dba_datapump_jobs;
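A narrower query is often more readable (a sketch; the column list is a subset of the view's columns):

```sql
-- Show who owns each Data Pump job, what it is doing, and its current state.
SELECT owner_name, job_name, operation, job_mode, state
FROM   dba_datapump_jobs;
```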
The same export can be driven from PL/SQL via the DBMS_DATAPUMP API. The anonymous block below has been completed with the declaration and DBMS_DATAPUMP.open call missing from the original text (the job name is illustrative):
DECLARE
l_dp_handle NUMBER;
BEGIN
-- Open a schema-mode export job and keep its handle.
l_dp_handle := DBMS_DATAPUMP.open(
operation => 'EXPORT',
job_mode => 'SCHEMA',
job_name => 'SCOTT_EXPORT');
DBMS_DATAPUMP.add_file(
handle => l_dp_handle,
filename => 'SCOTT.dmp',
directory => 'TEST_DIR');
DBMS_DATAPUMP.add_file(
handle => l_dp_handle,
filename => 'SCOTT.log',
directory => 'TEST_DIR',
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
DBMS_DATAPUMP.metadata_filter(
handle => l_dp_handle,
name => 'SCHEMA_EXPR',
value => '= ''SCOTT''');
DBMS_DATAPUMP.start_job(l_dp_handle);
DBMS_DATAPUMP.detach(l_dp_handle);
END;
/
Once the job has started, the status can be checked using:
system@db10g> select * from dba_datapump_jobs;
EXTERNAL TABLES
Oracle have incorporated support for data pump technology into external tables. The
ORACLE_DATAPUMP access driver can be used to unload data to data pump export files
and subsequently reload it. The unload of data occurs when the external table is created
using the "AS" clause.
CREATE TABLE emp_xt
ORGANIZATION EXTERNAL
(
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY test_dir
LOCATION ('emp_xt.dmp')
)
AS SELECT * FROM emp;
The data can then be queried using the following.
SELECT * FROM emp_xt;
The syntax to create the external table pointing to an existing file is similar, but
without the "AS" clause.
DROP TABLE emp_xt;
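Since there is no "AS" clause to derive the structure from, the column list must be stated explicitly. A sketch, assuming the standard SCOTT.EMP columns:

```sql
-- Recreate the external table over the existing dump file.
CREATE TABLE emp_xt (
  empno    NUMBER(4),
  ename    VARCHAR2(10),
  job      VARCHAR2(9),
  mgr      NUMBER(4),
  hiredate DATE,
  sal      NUMBER(7,2),
  comm     NUMBER(7,2),
  deptno   NUMBER(2)
)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY test_dir
  LOCATION ('emp_xt.dmp')
);
```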
HELP
The HELP=Y option displays the available parameters.
expdp
expdp help=y
The Data Pump export utility provides a mechanism for transferring data objects
between Oracle databases. The utility is invoked with the following command:
You can control how Export runs by entering the 'expdp' command followed
by various parameters. To specify parameters, you use keywords:
Command Description
------------------------------------------------------------------------------
ADD_FILE Add dumpfile to dumpfile set.
ADD_FILE=dumpfile-name
CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT Quit client session and leave job running.
HELP Summarize interactive commands.
KILL_JOB Detach and delete job.
PARALLEL Change the number of active workers for current job.
PARALLEL=<number of workers>.
START_JOB Start/resume current job.
STATUS Frequency (secs) job status is to be monitored where
the default (0) will show new status when available.
STATUS=[interval]
STOP_JOB Orderly shutdown of job execution and exits the client.
STOP_JOB=IMMEDIATE performs an immediate shutdown of the
Data Pump job.
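The interactive commands above can be exercised by re-attaching to a running or stopped job (the job name is an assumption):

```shell
# Attach the client to an existing Data Pump job.
expdp scott/tiger@db10g attach=SCOTT_EXPORT
# Then, at the Export> prompt:
#   Export> status      -- monitor progress
#   Export> start_job   -- resume a stopped job
```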
Oracle 10g Release 2 (10.2) added the following parameters.
Keyword Description (Default)
------------------------------------------------------------------------------
COMPRESSION Reduce size of dumpfile contents where valid
keyword values are: (METADATA_ONLY) and NONE.
ENCRYPTION_PASSWORD Password key for creating encrypted column data.
SAMPLE Percentage of data to be exported;
Command Description
------------------------------------------------------------------------------
FILESIZE Default filesize (bytes) for subsequent ADD_FILE
commands.
Oracle 11g Release 1 (11.1) added the following parameters.
Keyword Description (Default)
------------------------------------------------------------------------------
DATA_OPTIONS Data layer flags where the only valid value is:
XML_CLOBS-write XML datatype in CLOB format
ENCRYPTION Encrypt part or all of the dump file where valid keyword
values are: ALL, DATA_ONLY, METADATA_ONLY,
ENCRYPTED_COLUMNS_ONLY, or NONE.
ENCRYPTION_ALGORITHM Specify how encryption should be done where valid
keyword values are: (AES128), AES192, and AES256.
ENCRYPTION_MODE Method of generating encryption key where valid keyword
values are: DUAL, PASSWORD, and (TRANSPARENT).
REMAP_DATA Specify a data conversion function,
e.g. REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO.
REUSE_DUMPFILES Overwrite destination dump file if it exists (N).
TRANSPORTABLE Specify whether transportable method can be used where
valid keyword values are: ALWAYS, (NEVER).
Command Description
------------------------------------------------------------------------------
REUSE_DUMPFILES Overwrite destination dump file if it exists (N).
Oracle 11g Release 2 (11.2) altered the format of the help output as well as adding the
following parameters.
CLUSTER
Utilize cluster resources and distribute workers across the Oracle RAC.
Valid keyword values are: [Y] and N.
SERVICE_NAME
Name of an active Service and associated resource group to constrain Oracle
RAC resources.
SOURCE_EDITION
Edition to be used for extracting metadata.
impdp
impdp help=y
You can control how Import runs by entering the 'impdp' command followed
by various parameters. To specify parameters, you use keywords:
SERVICE_NAME
Name of an active Service and associated resource group to constrain Oracle
RAC resources.
SOURCE_EDITION
Edition to be used for extracting metadata.
TARGET_EDITION
Edition to be used for loading metadata.