
/***********************************************************************/
/* Document    : Oracle 8i,9i,10g queries, information, and tips       */
/* Doc. Versie : 59                                                    */
/* File        : oracle9i10g.txt                                       */
/* Date        : 30-06-2008                                            */
/* Content     : Just a series of handy DBA queries.                   */
/* Compiled by : Albert van der Sel                                    */
/***********************************************************************/

CONTENTS:

 0. Common data dictionary queries for sessions, locks, performance etc..
 1. DATA DICTIONARY QUERIES with regard to files, tablespaces, logs:
 2. NOTES ON PERFORMANCE:
 3. Data dictionary queries with regard to performance:
 4. IMP and EXP, 10g IMPDB and EXPDB, and SQL*Loader Examples
 5. Add, Move AND Size Datafiles, logfiles, create objects etc..:
 6. Install Oracle 92 on Solaris:
 7. Install Oracle 9i on Linux:
 8. Install Oracle 9.2.0.2 on OpenVMS:
 9. Install Oracle 9.2.0.1 on AIX
 9. Installation Oracle 8i - 9i:
10. CONSTRAINTS:
11. DBMS_JOB and scheduled Jobs:
12. Net8,9,10 / SQLNet:
13. Data dictionary queries Rollback segments:
14. Data dictionary queries with regard to security, permissions:
15. INIT.ORA parameters:
16. Snapshots:
17. Triggers:
19. BACKUP RECOVERY, TROUBLESHOOTING:
20. TRACING:
21. Miscellaneous:
22. DBA% and v$ views
23. TUNING:
24. RMAN:
25. UPGRADE AND MIGRATION
26. Some info on Rdb:
27. Some info on IFS
28. Some info on 9iAS rel. 2
29 - 35 9iAS configurations and troubleshooting
30. BLOBS
31. BLOCK CORRUPTION
32. iSQL*Plus and EM 10g
33. ADDM
34. ASM and 10g RAC
35. CDC and Streams
36. X$ Tables

===========================================================================================
0. QUICK INFO/VIEWS ON SESSIONS, LOCKS, AND UNDO/ROLLBACK INFORMATION IN A SINGLE INSTANCE:
===========================================================================================

SINGLE INSTANCE QUERIES:
========================

-- ---------------------------
-- 0.1 QUICK VIEW ON SESSIONS:
-- ---------------------------

SELECT substr(username, 1, 10), osuser, sql_address,
       to_char(logon_time, 'DD-MM-YYYY;HH24:MI'), sid, serial#, command,
       substr(program, 1, 30), substr(machine, 1, 30), substr(terminal, 1, 30)
FROM v$session;

SELECT sql_text, rows_processed from v$sqlarea where address='';

-- ------------------------
-- 0.2 QUICK VIEW ON LOCKS: (use the sys.obj$ to find ID1:)
-- ------------------------

First, let's take a look at some important dictionary views with respect to locks:

SQL> desc v$lock;
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 ADDR                                   RAW(8)
 KADDR                                  RAW(8)
 SID                                    NUMBER
 TYPE                                   VARCHAR2(2)
 ID1                                    NUMBER
 ID2                                    NUMBER
 LMODE                                  NUMBER
 REQUEST                                NUMBER
 CTIME                                  NUMBER
 BLOCK                                  NUMBER

This view stores all information relating to locks in the database. The interesting columns
in this view are sid (identifying the session holding or acquiring the lock), type, and the
lmode/request pair. Important possible values of type are TM (DML or Table Lock),
TX (Transaction), MR (Media Recovery), ST (Disk Space Transaction).
Exactly one of the lmode/request pair is either 0 or 1 while the other indicates the lock mode.
If lmode is not 0 or 1, then the session has acquired the lock, while it waits to acquire
the lock if request is other than 0 or 1. The possible values for lmode and request are:

1: null,
2: Row Share (SS),
3: Row Exclusive (SX),
4: Share (S),
5: Share Row Exclusive (SSX) and
6: Exclusive (X)
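Putting the lmode/request pair to work: the following self-join on v$lock is a common
formulation (a sketch, not from the original notes) to list blocking and waiting sessions,
matching holder and waiter on the same resource via id1/id2:

```sql
-- Sessions holding a lock that blocks others (block=1), joined to the
-- sessions waiting on the same resource (request > 0).
SELECT l1.sid     AS blocker_sid,
       l2.sid     AS waiter_sid,
       l1.type,
       l1.lmode   AS mode_held,
       l2.request AS mode_requested
FROM   v$lock l1, v$lock l2
WHERE  l1.block = 1
AND    l2.request > 0
AND    l1.id1 = l2.id1
AND    l1.id2 = l2.id2;
```

The blocker_sid/waiter_sid values can then be fed into the v$session queries above.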

If the lock type is TM, the column id1 is the object's id, and the name of the object can then be queried like so:

select name from sys.obj$ where obj# = id1;

A lock type of JI indicates that a materialized view is being refreshed.

SQL> desc v$locked_object;
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 XIDUSN                                 NUMBER
 XIDSLOT                                NUMBER
 XIDSQN                                 NUMBER
 OBJECT_ID                              NUMBER
 SESSION_ID                             NUMBER
 ORACLE_USERNAME                        VARCHAR2(30)
 OS_USER_NAME                           VARCHAR2(30)
 PROCESS                                VARCHAR2(12)
 LOCKED_MODE                            NUMBER

SQL> desc dba_waiters;
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 WAITING_SESSION                        NUMBER
 HOLDING_SESSION                        NUMBER
 LOCK_TYPE                              VARCHAR2(26)
 MODE_HELD                              VARCHAR2(40)
 MODE_REQUESTED                         VARCHAR2(40)
 LOCK_ID1                               NUMBER
 LOCK_ID2                               NUMBER

SQL> desc v$transaction;
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 ADDR                                   RAW(8)
 XIDUSN                                 NUMBER
 XIDSLOT                                NUMBER
 XIDSQN                                 NUMBER
 UBAFIL                                 NUMBER
 UBABLK                                 NUMBER
 UBASQN                                 NUMBER
 UBAREC                                 NUMBER
 STATUS                                 VARCHAR2(16)
 START_TIME                             VARCHAR2(20)
 START_SCNB                             NUMBER
 START_SCNW                             NUMBER
 START_UEXT                             NUMBER
 START_UBAFIL                           NUMBER
 START_UBABLK                           NUMBER
 START_UBASQN                           NUMBER
 START_UBAREC                           NUMBER
 SES_ADDR                               RAW(8)
 FLAG                                   NUMBER
 SPACE                                  VARCHAR2(3)
 RECURSIVE                              VARCHAR2(3)
 NOUNDO                                 VARCHAR2(3)
 PTX                                    VARCHAR2(3)
 NAME                                   VARCHAR2(256)
 PRV_XIDUSN                             NUMBER
 PRV_XIDSLT                             NUMBER
 PRV_XIDSQN                             NUMBER
 PTX_XIDUSN                             NUMBER
 PTX_XIDSLT                             NUMBER
 PTX_XIDSQN                             NUMBER
 DSCN-B                                 NUMBER
 DSCN-W                                 NUMBER
 USED_UBLK                              NUMBER
 USED_UREC                              NUMBER
 LOG_IO                                 NUMBER
 PHY_IO                                 NUMBER
 CR_GET                                 NUMBER
 CR_CHANGE                              NUMBER
 START_DATE                             DATE
 DSCN_BASE                              NUMBER
 DSCN_WRAP                              NUMBER
 START_SCN                              NUMBER
 DEPENDENT_SCN                          NUMBER
 XID                                    RAW(8)
 PRV_XID                                RAW(8)
 PTX_XID                                RAW(8)

Queries you can use in investigating locks:
===========================================

SELECT XIDUSN, OBJECT_ID, SESSION_ID, ORACLE_USERNAME, OS_USER_NAME, PROCESS
FROM v$locked_object;

SELECT d.OBJECT_ID, substr(OBJECT_NAME,1,20), l.SESSION_ID, l.ORACLE_USERNAME, l.LOCKED_MODE
FROM v$locked_object l, dba_objects d
WHERE d.OBJECT_ID=l.OBJECT_ID;

SELECT ADDR, KADDR, SID, TYPE, ID1, ID2, LMODE, BLOCK
FROM v$lock;

SELECT a.sid, a.saddr, b.ses_addr, a.username, b.xidusn, b.used_urec, b.used_ublk
FROM v$session a, v$transaction b
WHERE a.saddr = b.ses_addr;

SELECT s.sid, l.lmode, l.block, substr(s.username, 1, 10), substr(s.schemaname, 1, 10),
       substr(s.osuser, 1, 10), substr(s.program, 1, 30), s.command
FROM v$session s, v$lock l
WHERE s.sid=l.sid;

SELECT p.spid, s.sid, p.addr, s.paddr, substr(s.username, 1, 10),
       substr(s.schemaname, 1, 10), s.command, substr(s.osuser, 1, 10),
       substr(s.machine, 1, 10)
FROM v$session s, v$process p
WHERE s.paddr=p.addr;

SELECT sid, serial#, command, substr(username, 1, 10), osuser, sql_address, LOCKWAIT,
       to_char(logon_time, 'DD-MM-YYYY;HH24:MI'), substr(program, 1, 30)
FROM v$session;

SELECT sid, serial#, username, LOCKWAIT from v$session;

SELECT v.SID, v.BLOCK_GETS, v.BLOCK_CHANGES, w.USERNAME, w.OSUSER, w.TERMINAL
FROM v$sess_io v, V$session w
WHERE v.SID=w.SID
ORDER BY v.SID;

SELECT * from dba_waiters;

SELECT waiting_session, holding_session, lock_type, mode_held
FROM dba_waiters;

SELECT p.spid                      unix_spid,
       s.sid                       sid,
       p.addr,
       s.paddr,
       substr(s.username, 1, 10)   username,
       substr(s.schemaname, 1, 10) schemaname,
       s.command                   command,
       substr(s.osuser, 1, 10)     osuser,
       substr(s.machine, 1, 25)    machine
FROM v$session s, v$process p
WHERE s.paddr=p.addr
ORDER BY p.spid;

Usage of v$session_longops:
===========================

SQL> desc v$session_longops;
 SID              NUMBER         Session identifier
 SERIAL#          NUMBER         Session serial number
 OPNAME           VARCHAR2(64)   Brief description of the operation
 TARGET           VARCHAR2(64)   The object on which the operation is carried out
 TARGET_DESC      VARCHAR2(32)   Description of the target
 SOFAR            NUMBER         The units of work done so far
 TOTALWORK        NUMBER         The total units of work
 UNITS            VARCHAR2(32)   The units of measurement
 START_TIME       DATE           The starting time of operation
 LAST_UPDATE_TIME DATE           Time when statistics last updated
 TIMESTAMP        DATE           Timestamp
 TIME_REMAINING   NUMBER         Estimate (in seconds) of time remaining for the
                                 operation to complete
 ELAPSED_SECONDS  NUMBER         The number of elapsed seconds from the start of
                                 operations
 CONTEXT          NUMBER         Context
 MESSAGE          VARCHAR2(512)  Statistics summary message
 USERNAME         VARCHAR2(30)   User ID of the user performing the operation
 SQL_ADDRESS      RAW(4 | 8)     Used with the value of the SQL_HASH_VALUE column
                                 to identify the SQL statement associated with the operation
 SQL_HASH_VALUE   NUMBER         Used with the value of the SQL_ADDRESS column
                                 to identify the SQL statement associated with the operation
 SQL_ID           VARCHAR2(13)   SQL identifier of the SQL statement associated with the operation
 QCSID            NUMBER         Session identifier of the parallel coordinator

This view displays the status of various operations that run for longer than 6 seconds
(in absolute time). These operations currently include many backup and recovery functions,
statistics gathering, and query execution, and more operations are added for every Oracle
release. To monitor query execution progress, you must be using the cost-based optimizer
and you must:

- Set the TIMED_STATISTICS or SQL_TRACE parameter to true
- Gather statistics for your objects with the ANALYZE statement or the DBMS_STATS package

You can add information to this view about application-specific long-running operations
by using the DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS procedure.

Select 'long', to_char(l.sid), to_char(l.serial#), to_char(l.sofar), to_char(l.totalwork),
       to_char(l.start_time, 'DD-Mon-YYYY HH24:MI:SS'),
       to_char(l.last_update_time, 'DD-Mon-YYYY HH24:MI:SS'),
       to_char(l.time_remaining), to_char(l.elapsed_seconds),
       l.opname, l.target, l.target_desc, l.message, s.username, s.osuser, s.lockwait
from v$session_longops l, v$session s
where l.sid = s.sid
and l.serial# = s.serial#;

Select 'long', to_char(l.sid), to_char(l.serial#), to_char(l.sofar), to_char(l.totalwork),
       to_char(l.start_time, 'DD-Mon-YYYY HH24:MI:SS'),
       to_char(l.last_update_time, 'DD-Mon-YYYY HH24:MI:SS'),
       s.username, s.osuser, s.lockwait
from v$session_longops l, v$session s
where l.sid = s.sid
and l.serial# = s.serial#;

select substr(username,1,15), target, to_char(start_time, 'DD-Mon-YYYY HH24:MI:SS'),
       SOFAR, substr(MESSAGE,1,70)
from v$session_longops;

select USERNAME, to_char(start_time, 'DD-Mon-YYYY HH24:MI:SS'), substr(message,1,90),
       to_char(time_remaining)
from v$session_longops;

9i and 10G note:
================
Oracle has a view inside the Oracle data buffers. The view is called v$bh, and while v$bh
was originally developed for Oracle Parallel Server (OPS), the v$bh view can be used to show
the number of data blocks in the data buffer for every object type in the database.
The following query is especially exciting because you can now see what objects are consuming
the data buffer caches. In Oracle9i, you can use this information to segregate tables
to separate RAM buffers with different blocksizes.

Here is a sample query that shows data buffer utilization for individual objects in the database.
Note that this script uses an Oracle9i scalar sub-query, and will not work in pre-Oracle9i
systems unless you comment out column c3.

column c0 heading 'Owner'                     format a15
column c1 heading 'Object|Name'               format a30
column c2 heading 'Number|of|Buffers'         format 999,999
column c3 heading 'Percentage|of Data|Buffer' format 999,999,999

select
   owner                                        c0,
   object_name                                  c1,
   count(1)                                     c2,
   (count(1)/(select count(*) from v$bh)) *100  c3
from
   dba_objects o,
   v$bh bh
where
   o.object_id = bh.objd
and
   o.owner not in ('SYS','SYSTEM','AURORA$JIS$UTILITY$')
group by
   owner, object_name
order by
   count(1) desc;

-- -----------------------------
-- 0.3 QUICK VIEW ON TEMP USAGE:
-- -----------------------------

select total_extents, used_extents, total_extents, current_users, tablespace_name
from v$sort_segment;

select username, user, sqladdr, extents, tablespace
from v$sort_usage;

SELECT b.tablespace,
       ROUND(((b.blocks*p.value)/1024/1024),2),
       a.sid||','||a.serial# SID_SERIAL,
       a.username,
       a.program
FROM sys.v_$session a, sys.v_$sort_usage b, sys.v_$parameter p
WHERE p.name = 'db_block_size'
AND a.saddr = b.session_addr
ORDER BY b.tablespace, b.blocks;

-- --------------------------------
-- 0.4 QUICK VIEW ON UNDO/ROLLBACK:
-- --------------------------------

SELECT substr(username, 1, 10), substr(terminal, 1, 10), substr(osuser, 1, 10),
       t.start_time, r.name, t.used_ublk "ROLLB BLKS", log_io, phy_io
FROM sys.v_$transaction t, sys.v_$rollname r, sys.v_$session s
WHERE t.xidusn = r.usn
AND t.ses_addr = s.saddr;

SELECT substr(n.name, 1, 10), s.writes, s.gets, s.waits, s.wraps, s.extents,
       s.status, s.optsize, s.rssize
FROM V$ROLLNAME n, V$ROLLSTAT s
WHERE n.usn=s.usn;

SELECT substr(r.name, 1, 10) "RBS", s.sid, s.serial#, s.taddr, t.addr,
       substr(s.username, 1, 10) "USER", t.status, t.cr_get, t.phy_io, t.used_ublk,
       t.noundo, substr(s.program, 1, 15) "COMMAND"
FROM sys.v_$session s, sys.v_$transaction t, sys.v_$rollname r
WHERE t.addr = s.taddr
AND t.xidusn = r.usn
ORDER BY t.cr_get, t.phy_io;

SELECT substr(segment_name, 1, 20), substr(tablespace_name, 1, 20), status,
       INITIAL_EXTENT, NEXT_EXTENT, MIN_EXTENTS, MAX_EXTENTS, PCT_INCREASE
FROM DBA_ROLLBACK_SEGS;

select 'FREE', count(*) from sys.fet$
union
select 'USED', count(*) from sys.uet$;

-- Quick view active transactions

SELECT NAME, XACTS "ACTIVE TRANSACTIONS"
FROM V$ROLLNAME, V$ROLLSTAT
WHERE V$ROLLNAME.USN = V$ROLLSTAT.USN;

SELECT to_char(BEGIN_TIME, 'DD-MM-YYYY;HH24:MI'), to_char(END_TIME, 'DD-MM-YYYY;HH24:MI'),
       UNDOTSN, UNDOBLKS, TXNCOUNT, MAXCONCURRENCY AS "MAXCON"
FROM V$UNDOSTAT
WHERE trunc(BEGIN_TIME)=trunc(SYSDATE);

select TO_CHAR(MIN(Begin_Time),'DD-MON-YYYY HH24:MI:SS') "Begin Time",
       TO_CHAR(MAX(End_Time),'DD-MON-YYYY HH24:MI:SS') "End Time",
       SUM(Undoblks) "Total Undo Blocks Used",
       SUM(Txncount) "Total Num Trans Executed",
       MAX(Maxquerylen) "Longest Query(in secs)",
       MAX(Maxconcurrency) "Highest Concurrent TrCount",
       SUM(Ssolderrcnt),
       SUM(Nospaceerrcnt)
from V$UNDOSTAT;

SELECT used_urec
FROM v$session s, v$transaction t
WHERE s.audsid=sys_context('userenv', 'sessionid')
AND s.taddr = t.addr;

(used_urec = Used Undo records)

SELECT a.sid, a.username, b.xidusn, b.used_urec, b.used_ublk
FROM v$session a, v$transaction b
WHERE a.saddr = b.ses_addr;

SELECT v.SID, v.BLOCK_GETS, v.BLOCK_CHANGES, w.USERNAME, w.OSUSER, w.TERMINAL
FROM v$sess_io v, V$session w
WHERE v.SID=w.SID
ORDER BY v.SID;

-- ----------------------
-- 0.5 SOME EXPLANATIONS:
-- ----------------------

-- explanation of "COMMAND":

 1: CREATE TABLE      2: INSERT                  3: SELECT             4: CREATE CLUSTER       5: ALTER CLUSTER
 6: UPDATE            7: DELETE                  8: DROP CLUSTER       9: CREATE INDEX        10: DROP INDEX
11: ALTER INDEX      12: DROP TABLE             13: CREATE SEQUENCE   14: ALTER SEQUENCE      15: ALTER TABLE
16: DROP SEQUENCE    17: GRANT                  18: REVOKE            19: CREATE SYNONYM      20: DROP SYNONYM
21: CREATE VIEW      22: DROP VIEW              23: VALIDATE INDEX    24: CREATE PROCEDURE    25: ALTER PROCEDURE
26: LOCK TABLE       27: NO OPERATION           28: RENAME            29: COMMENT             30: AUDIT
31: NOAUDIT          32: CREATE DATABASE LINK   33: DROP DATABASE LINK 34: CREATE DATABASE    35: ALTER DATABASE
36: CREATE ROLLBACK SEGMENT  37: ALTER ROLLBACK SEGMENT  38: DROP ROLLBACK SEGMENT
39: CREATE TABLESPACE 40: ALTER TABLESPACE      41: DROP TABLESPACE   42: ALTER SESSION       43: ALTER USER
44: COMMIT           45: ROLLBACK               46: SAVEPOINT         47: PL/SQL EXECUTE      48: SET TRANSACTION
49: ALTER SYSTEM SWITCH LOG  50: EXPLAIN        51: CREATE USER       52: CREATE ROLE         53: DROP USER
54: DROP ROLE        55: SET ROLE               56: CREATE SCHEMA     57: CREATE CONTROL FILE 58: ALTER TRACING
59: CREATE TRIGGER   60: ALTER TRIGGER          61: DROP TRIGGER      62: ANALYZE TABLE       63: ANALYZE INDEX
64: ANALYZE CLUSTER  65: CREATE PROFILE         66: DROP PROFILE      67: ALTER PROFILE       68: DROP PROCEDURE
69: DROP PROCEDURE   70: ALTER RESOURCE COST    71: CREATE SNAPSHOT LOG 72: ALTER SNAPSHOT LOG 73: DROP SNAPSHOT LOG
74: CREATE SNAPSHOT  75: ALTER SNAPSHOT         76: DROP SNAPSHOT     79: ALTER ROLE          85: TRUNCATE TABLE
86: TRUNCATE CLUSTER 88: ALTER VIEW             91: CREATE FUNCTION   92: ALTER FUNCTION      93: DROP FUNCTION
94: CREATE PACKAGE   95: ALTER PACKAGE          96: DROP PACKAGE      97: CREATE PACKAGE BODY 98: ALTER PACKAGE BODY
99: DROP PACKAGE BODY

-- explanation of locks:

Locks:
0, 'None',           /* Mon Lock equivalent */
1, 'Null',           /* N */
2, 'Row-S (SS)',     /* L */
3, 'Row-X (SX)',     /* R */
4, 'Share',          /* S */
5, 'S/Row-X (SRX)',  /* C */
6, 'Exclusive',      /* X */
to_char(b.lmode)

TX: enqueue, waiting
TM: DDL on object
MR: Media Recovery

A TX lock is acquired when a transaction initiates its first change and is held until the
transaction does a COMMIT or ROLLBACK. It is used mainly as a queuing mechanism so that other
sessions can wait for the transaction to complete. TM per-table locks are acquired during the
execution of a transaction when referencing a table with a DML statement, so that the object
is not dropped or altered during the execution of the transaction, if and only if the
dml_locks parameter is non-zero.

LOCKS: locks on user objects, such as tables and rows
LATCH: locks on system objects, such as shared data structures in memory and data dictionary rows

LOCKS - shared or exclusive
LATCH - always exclusive

UL = user locks, placed by application code using, for example, the DBMS_LOCK package

DML LOCKS: data manipulation: table lock, row lock
DDL LOCKS: preserves the structure of an object (no simultaneous DML, DDL statements)

DML locks:
  row lock (TX): for rows (insert, update, delete)
  row lock plus table lock: row lock, but also prevents DDL statements
  table lock (TM): automatic with insert, update, delete, to prevent DDL on the table

table lock:
  S:   share lock
  RS:  row share
  RSX: row share exclusive
  RX:  row exclusive
  X:   exclusive (other transactions can only SELECT..)

in V$LOCK lmode column:
  0: None
  1: Null (NULL)
  2: Row-S (SS)
  3: Row-X (SX)
  4: Share (S)
  5: S/Row-X (SSX)
  6: Exclusive (X)
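The decode fragment above is typically embedded in a session overview; here is a sketch
(not from the original notes) that joins v$session to v$lock and translates lmode into the
labels listed above:

```sql
-- Current locks per session, with lmode decoded to a readable name.
SELECT s.sid,
       s.username,
       l.type,
       decode(l.lmode, 0,'None', 1,'Null', 2,'Row-S (SS)', 3,'Row-X (SX)',
                       4,'Share', 5,'S/Row-X (SSX)', 6,'Exclusive',
                       to_char(l.lmode)) mode_held,
       l.block
FROM   v$session s, v$lock l
WHERE  s.sid = l.sid
ORDER BY s.sid;
```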

Internal Implementation of Oracle Locks (Enqueue)

The Oracle server uses locks to provide concurrent access to shared resources, whereas it uses
latches to provide exclusive and short-term access to memory structures inside the SGA. Latches
also prevent more than one process from executing the same piece of code which another process
might be executing. A latch is also a simple lock, which provides serialized and exclusive-only
access to a memory area in the SGA. Oracle doesn't use latches to provide shared access to
resources because that would increase CPU usage. Latches are used for big memory structures and
allow the operations required for locking the sub-structures. Shared resources can be tables,
transactions, redo threads, etc. An enqueue can be local or global. In a single instance,
enqueues are local to that instance. There are global enqueues also, like the ST enqueue, which
is held before any space transaction can occur on any tablespace in RAC. ST enqueues are held
only for dictionary-managed tablespaces.

These Oracle locks are generally known as enqueues, because whenever a session requests a lock
on any shared resource structure, its lock data structure is queued to one of the linked lists
attached to that resource structure (the resource structure is discussed later).

Before proceeding further with this topic, here is a little brief about Oracle locks. Oracle
locks can be applied to compound and simple objects like tables and the cache buffer. Locks can
be held in different modes like shared, exclusive, null, sub-shared, sub-exclusive and shared
sub-exclusive. Depending on the type of object, different modes are applied. For example, for a
compound object like a table with rows, all the above-mentioned modes could be applicable,
whereas for simple objects only the first three are applicable. These lock modes don't have any
importance of their own; the importance is in how they are used by the subsystem. These lock
modes (compatibility between locks) define how a session will get a lock on that object.
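To see which enqueue types (TX, TM, ST, ...) actually accumulate waits on an instance, the
v$enqueue_stat view (9i and later) can be queried; a sketch:

```sql
-- Enqueue types with the most accumulated wait time, worst first.
SELECT eq_type,
       total_req#,
       total_wait#,
       cum_wait_time
FROM   v$enqueue_stat
WHERE  total_wait# > 0
ORDER BY cum_wait_time DESC;
```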

-- Explanation of Waits:

SQL> desc v$system_event;
 Name
 ------------------------
 EVENT
 TOTAL_WAITS
 TOTAL_TIMEOUTS
 TIME_WAITED
 AVERAGE_WAIT
 TIME_WAITED_MICRO

v$system_event
This view displays the count (total_waits) of all wait events since startup of the instance.
If timed_statistics is set to true, the sum of the wait times for all events is also displayed
in the column time_waited. The unit of time_waited is one hundredth of a second.
Since 10g, an additional column (time_waited_micro) measures wait times in millionths of a second.

total_waits where event='buffer busy waits' is equal to the sum of count in v$waitstat.
v$enqueue_stat can be used to break down waits on the enqueue wait event. While this view
totals all events in an instance, v$session_event shows the same information broken down
per session.

select event, total_waits, time_waited
from v$system_event
where event like '%file%'
order by total_waits desc;

column c1 heading 'Event|Name'             format a30
column c2 heading 'Total|Waits'            format 999,999,999
column c3 heading 'Seconds|Waiting'        format 999,999
column c4 heading 'Total|Timeouts'         format 999,999,999
column c5 heading 'Average|Wait|(in secs)' format 99.999

ttitle 'System-wide Wait Analysis|for current wait events'

select event          c1,
       total_waits    c2,
       time_waited  / 100 c3,
       total_timeouts c4,
       average_wait / 100 c5
from sys.v_$system_event
where event not in (
  'dispatcher timer',
  'lock element cleanup',
  'Null event',
  'parallel query dequeue wait',
  'parallel query idle wait - Slaves',
  'pipe get',
  'PL/SQL lock timer',
  'pmon timer',
  'rdbms ipc message',
  'slave wait',
  'smon timer',
  'SQL*Net break/reset to client',
  'SQL*Net message from client',
  'SQL*Net message to client',
  'SQL*Net more data to client',
  'virtual circuit status',
  'WMON goes to sleep' )
AND event not like 'DFS%'
and event not like '%done%'
and event not like '%Idle%'
AND event not like 'KXFX%'
order by c2 desc;

-- Snapshot method to find the true wait events over an interval:

Create table beg_system_event as select * from v$system_event;
-- Run workload through system or user task
Create table end_system_event as select * from v$system_event;
-- Issue SQL to determine true wait events:

SELECT b.event,
       (e.total_waits    - b.total_waits)    total_waits,
       (e.total_timeouts - b.total_timeouts) total_timeouts,
       (e.time_waited    - b.time_waited)    time_waited
FROM beg_system_event b, end_system_event e
WHERE b.event = e.event;

drop table beg_system_event;
drop table end_system_event;

Cumulative info, after startup:
-------------------------------
SELECT * FROM v$system_event WHERE event = 'enqueue';

SELECT * FROM v$sysstat WHERE class=4;

select c.name, a.addr, a.gets, a.misses, a.sleeps,
       a.immediate_gets, a.immediate_misses, a.wait_time,
       b.pid
from v$latch a, v$latchholder b, v$latchname c
where a.addr = b.laddr(+)
and a.latch# = c.latch#
order by a.latch#;

-- ------------------------------------------------
-- 0.6 QUICK INFO ON HIT RATIO, SHARED POOL etc..
-- ------------------------------------------------

-- Hit ratio:
SELECT (1-(pr.value/(dbg.value+cg.value)))*100
FROM v$sysstat pr, v$sysstat dbg, v$sysstat cg
WHERE pr.name = 'physical reads'
AND dbg.name = 'db block gets'
AND cg.name = 'consistent gets';

SELECT * FROM V$SGA;

-- free memory shared pool:
SELECT * FROM v$sgastat
WHERE name = 'free memory';

-- hit ratio shared pool:
SELECT gethits, gets, gethitratio FROM v$librarycache
WHERE namespace = 'SQL AREA';

SELECT SUM(PINS) "EXECUTIONS", SUM(RELOADS) "CACHE MISSES WHILE EXECUTING"
FROM V$LIBRARYCACHE;

SELECT sum(sharable_mem)
FROM v$db_object_cache;

-- finding literals in SP:
SELECT substr(sql_text,1,50) "SQL",
       count(*),
       sum(executions) "TotExecs"
FROM v$sqlarea
WHERE executions < 5
GROUP BY substr(sql_text,1,50)
HAVING count(*) > 30
ORDER BY 2;

-- ---------------------------------------
-- 0.7 Quick Table and object information
-- ---------------------------------------

SELECT distinct substr(t.owner, 1, 25), substr(t.table_name,1,50),
       substr(t.tablespace_name,1,20), t.chain_cnt, t.logging, s.relative_fno
FROM dba_tables t, dba_segments s
WHERE t.owner not in ('SYS','SYSTEM','OUTLN','DBSNMP','WMSYS','ORDSYS','ORDPLUGINS','MDSYS','CTXSYS','XDB')
AND t.table_name=s.segment_name
AND s.segment_type='TABLE'
AND s.segment_name like 'CI_PAY%';

SELECT substr(segment_name, 1, 30), segment_type, substr(owner, 1, 10),
       extents, initial_extent, next_extent, max_extents
FROM dba_segments
WHERE extents > max_extents - 100
AND owner not in ('SYS','SYSTEM');

SELECT segment_name, owner, tablespace_name, extents
FROM dba_segments
WHERE owner='SALES'   -- you use the correct schema here
AND extents > 700;

SELECT owner, substr(object_name, 1, 30), object_type, created, last_ddl_time, status
FROM dba_objects
WHERE owner='RM_LIVE'
AND created > SYSDATE-5;

SELECT owner, substr(object_name, 1, 30), object_type, created,
       to_char(last_ddl_time, 'DD-MM-YYYY;HH24:MI'), status
FROM dba_objects
WHERE owner='RM_LIVE'
AND last_ddl_time > SYSDATE-5;

SELECT owner, substr(object_name, 1, 30), object_type, created, last_ddl_time, status
FROM dba_objects
WHERE status='INVALID';

Compare 2 owners:
-----------------
select table_name from dba_tables where owner='MIS_OWNER'
and table_name not in
(SELECT table_name from dba_tables where OWNER='MARPAT');

Table and column information:
-----------------------------
select substr(table_name, 1, 3) schema,
       table_name,
       column_name,
       substr(data_type,1,1) data_type
from user_tab_columns
where COLUMN_NAME='ENV_ID'
and (   table_name like 'ALG%'
     or table_name like 'STG%'
     or table_name like 'ODS%'
     or table_name like 'DWH%'
     or table_name like 'MKM%' )
order by decode(substr(table_name, 1, 3),
                'ALG', 10, 'STG', 20, 'ODS', 30, 'DWH', 40, 'MKM', 50, 60),
         table_name,
         column_id;

Check on existence of JServer:
------------------------------
select count(*) from all_objects
where object_name = 'DBMS_JAVA';

This should return a count of 3.

-- ---------------------------------------
-- 0.8 QUICK INFO ON PRODUCT INFORMATION:
-- ---------------------------------------

SELECT * FROM PRODUCT_COMPONENT_VERSION;
SELECT * FROM NLS_DATABASE_PARAMETERS;
SELECT * FROM NLS_SESSION_PARAMETERS;
SELECT * FROM NLS_INSTANCE_PARAMETERS;
SELECT * FROM V$OPTION;
SELECT * FROM V$LICENSE;
SELECT * FROM V$VERSION;

Oracle RDBMS releases:
----------------------
9.2.0.1 is the terminal release for Oracle 9i Rel. 2. Normally it's patched to 9.2.0.4.
As from October, patches 9.2.0.5 and a little later 9.2.0.6 were available.
9.2.0.4 is patch ID 3095277.

9.0.1.4 is the terminal release for Oracle 9i Rel. 1.
8.1.7   is the terminal release for Oracle8i. Additional patchsets exist.
8.0.6   is the terminal release for Oracle8.  Additional patchsets exist.
7.3.4   is the terminal release for Oracle7.  Additional patchsets exist.

IS ORACLE 32BIT or 64BIT?
-------------------------
Starting with version 8, Oracle began shipping 64bit versions of its RDBMS product on UNIX
platforms that support 64bit software. IMPORTANT: 64bit Oracle can only be installed on
Operating Systems that are 64bit enabled.

In general, if Oracle is 64bit, '64bit' will be displayed on the opening banners of Oracle
executables such as 'svrmgrl', 'exp' and 'imp'. It will also be displayed in the headers of
Oracle trace files. Otherwise, if '64bit' is not displayed at these locations, it can be
assumed that Oracle is 32bit.

or, from the OS level:

% cd $ORACLE_HOME/bin
% file oracle

...if 64bit, '64bit' will be indicated.
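The same banner check can be done from inside the database; a small sketch, under the
assumption (consistent with the executable banners mentioned above) that the version row
in v$version carries the string '64bit' on a 64bit install:

```sql
-- Inspect the version banner for the '64bit' marker.
SELECT banner,
       decode(instr(banner, '64bit'), 0, 'probably 32bit', '64bit') wordsize
FROM   v$version
WHERE  banner LIKE 'Oracle%';
```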

To verify the wordsize of a downloaded patchset:
------------------------------------------------
The filename of the downloaded patchset usually dictates which version and word size of Oracle
it should be applied against. For instance:

p1882450_8172_SOLARIS64.zip is the 8.1.7.2 patchset for 64bit Oracle on Solaris.

Also refer to the README that is included with the patch or patch set and this Note:

Win2k Server Certifications:
----------------------------
OS            Product             Certified With  Version  Status       Addtl. Info.  Components  Other  Install Issue
2000          10g                 N/A             N/A      Certified    Yes           None        None   None
2000          9.2 32-bit -Opteron N/A             N/A      Certified    Yes           None        None   None
2000          9.2                 N/A             N/A      Certified    Yes           None        None   None
2000          9.0.1               N/A             N/A      Desupported  Yes           None        N/A    N/A
2000          8.1.7 (8i)          N/A             N/A      Desupported  Yes           None        N/A    N/A
2000          8.1.6 (8i)          N/A             N/A      Desupported  Yes           None        N/A    N/A
2000, Beta 3  8.1.5 (8i)          N/A             N/A      Withdrawn    Yes           N/A         N/A    N/A

Solaris Server certifications:
------------------------------
OS    Product            Certified With  Version  Status       Addtl. Info.  Components  Other  Install Issue
9     10g 64-bit         N/A             N/A      Certified    Yes           None        None   None
8     10g 64-bit         N/A             N/A      Certified    Yes           None        None   None
10    10g 64-bit         N/A             N/A      Projected    None          N/A         N/A    N/A
9     9.2 64-bit         N/A             N/A      Certified    Yes           None        None   None
8     9.2 64-bit         N/A             N/A      Certified    Yes           None        None   None
10    9.2 64-bit         N/A             N/A      Projected    None          N/A         N/A    N/A
2.6   9.2                N/A             N/A      Certified    Yes           None        None   None
9     9.2                N/A             N/A      Certified    Yes           None        None   None
8     9.2                N/A             N/A      Certified    Yes           None        None   None
7     9.2                N/A             N/A      Certified    Yes           None        None   None
10    9.2                N/A             N/A      Projected    None          N/A         N/A    N/A
9     9.0.1 64-bit       N/A             N/A      Desupported  Yes           None        N/A    N/A
8     9.0.1 64-bit       N/A             N/A      Desupported  Yes           None        N/A    N/A
2.6   9.0.1              N/A             N/A      Desupported  Yes           None        N/A    N/A
9     9.0.1              N/A             N/A      Desupported  Yes           None        N/A    N/A
8     9.0.1              N/A             N/A      Desupported  Yes           None        N/A    N/A
7     9.0.1              N/A             N/A      Desupported  Yes           None        N/A    N/A
9     8.1.7 (8i) 64-bit  N/A             N/A      Desupported  Yes           None        N/A    N/A
8     8.1.7 (8i) 64-bit  N/A             N/A      Desupported  Yes           None        N/A    N/A

2.6   8.1.7 (8i)         N/A             N/A      Desupported  Yes           None        N/A    N/A
9     8.1.7 (8i)         N/A             N/A      Desupported  Yes           None        N/A    N/A
8     8.1.7 (8i)         N/A             N/A      Desupported  Yes           None        N/A    N/A
7     8.1.7 (8i)         N/A             N/A      Desupported  Yes           None        N/A    N/A

everything below: desupported

Oracle clients:
---------------
Server Client Version 10.1.0 7.3.4 10.1.0 Yes No 9.2.0 Yes No #1 9.0.1 Was Was 8.1.7 Yes Was 8.1.6 No Was 8.1.5 No Was 8.0.6 No Was 8.0.5 No Was 7.3.4 No Was Version 9.2.0 9.0.1 Yes Yes Was Yes No No Was No Was Was Was Was Was Was No Was No Was 8.1.7 8.1.6 8.1.5 No No No Was Was Was Was Was Was 8.0.6 No Was Was Was Was Was Was Was Was 8.0.5 No No No Was Was Was Was Was Was

Yes #2 No Yes Was Yes Was Was Was Was Was No Was Was Was Was Was Was Was

-- ------------------------------------------------------
-- 0.9 QUICK INFO WITH REGARDS LOGS AND BACKUP RECOVERY:
-- ------------------------------------------------------

SELECT * from V$BACKUP;

-- from the controlfile:
SELECT file#, substr(name, 1, 30), status, checkpoint_change#
FROM V$DATAFILE;

SELECT d.file#, d.status, d.checkpoint_change#, b.status, b.CHANGE#,
       to_char(b.TIME,'DD-MM-YYYY;HH24:MI'), substr(d.name, 1, 40)
FROM V$DATAFILE d, V$BACKUP b
WHERE d.file#=b.file#;

-- from the file header:
SELECT file#, substr(name, 1, 30), status, fuzzy, checkpoint_change#
FROM V$DATAFILE_HEADER;

SELECT first_change#, next_change#, sequence#, archived, substr(name, 1, 40),
       COMPLETION_TIME, FIRST_CHANGE#, FIRST_TIME
FROM V$ARCHIVED_LOG
WHERE COMPLETION_TIME > SYSDATE -2;

SELECT recid, first_change#, sequence#, next_change#

FROM V$LOG_HISTORY;

SELECT resetlogs_change#, checkpoint_change#, controlfile_change#, open_resetlogs
FROM V$DATABASE;

SELECT * FROM V$RECOVER_FILE;   -- Which file needs recovery

-- ----------------------------------------------------------------------------
-- 0.10 QUICK INFO WITH REGARDS TO TABLESPACES, DATAFILES, REDO LOGFILES etc..:
-- ----------------------------------------------------------------------------

-- online redo log information: V$LOG, V$LOGFILE:

SELECT l.group#, l.members, l.status, l.bytes, substr(lf.member, 1, 50)
FROM V$LOG l, V$LOGFILE lf
WHERE l.group#=lf.group#;

SELECT THREAD#, SEQUENCE#, FIRST_CHANGE#, FIRST_TIME,
       to_char(FIRST_TIME, 'DD-MM-YYYY;HH24:MI')
FROM V$LOG_HISTORY;
-- WHERE SEQUENCE#

SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;

-- tablespace free-used:

SELECT Total.name "Tablespace Name",
       Free_space,
       (total_space-Free_space) Used_space,
       total_space
FROM (SELECT tablespace_name, sum(bytes/1024/1024) Free_Space
      FROM sys.dba_free_space
      GROUP BY tablespace_name) Free,
     (SELECT b.name, sum(bytes/1024/1024) TOTAL_SPACE
      FROM sys.v_$datafile a, sys.v_$tablespace B
      WHERE a.ts# = b.ts#
      GROUP BY b.name) Total
WHERE Free.Tablespace_name = Total.name;

SELECT substr(file_name, 1, 70), tablespace_name
FROM dba_data_files;

-- ---------------------------------------------
-- 0.11 AUDIT Statements:
-- ---------------------------------------------

select v.sql_text, v.FIRST_LOAD_TIME, v.PARSING_SCHEMA_ID, v.DISK_READS,
       v.ROWS_PROCESSED, v.CPU_TIME, b.username
from v$sqlarea v, dba_users b
where v.FIRST_LOAD_TIME > '2009-03-08'
and v.PARSING_SCHEMA_ID=b.user_id
order by v.FIRST_LOAD_TIME;

-- ----------------------------------------------
-- 0.12 EXAMPLE OF DYNAMIC SQL:
-- ----------------------------------------------

select 'UPDATE '||t.table_name||' SET '||c.column_name||'=REPLACE('||c.column_name||','''',CHR(7));'
from user_tab_columns c, user_tables t
where c.table_name=t.table_name
and t.num_rows>0
and c.DATA_LENGTH>10
and data_type like '%CHAR%'
ORDER BY t.table_name desc;

create public synonym EMPLOYEE for HARRY.EMPLOYEE;

select 'create public synonym '||table_name||' for CISADM.'||table_name||';'
from dba_tables where owner='CISADM';

select 'GRANT SELECT, INSERT, UPDATE, DELETE ON '||table_name||' TO CISUSER;'
from dba_tables where owner='CISADM';

select 'GRANT SELECT ON '||table_name||' TO CISREAD;'
from dba_tables where owner='CISADM';

========================
1. NOTES ON PERFORMANCE:
========================

1.1 POOLS:
==========

-- SHARED POOL:
-- ------------

A literal SQL statement is considered as one which uses literals in the predicate/s
rather than bind variables, where the value of the literal is likely to differ
between various executions of the statement.

Eg 1:
  SELECT * FROM emp WHERE ename='CLARK';
is used by the application instead of
  SELECT * FROM emp WHERE ename=:bind1;

The second form is referred to as a sharable SQL statement, as the same cursor can
be shared between executions with different values.

-- Hard Parse
If a new SQL statement is issued which does not exist in the shared pool then this
has to be parsed fully. Eg: Oracle has to allocate memory for the statement from
the shared pool, check the statement syntactically and semantically etc...
This is referred to as a hard parse and is very expensive in both terms of CPU used
and in the number of latch gets performed.

--Soft Parse

If a session issues a SQL statement which is already in the shared pool AND it can
use an existing version of that statement, then this is known as a 'soft parse'.
As far as the application is concerned it has asked to parse the statement.

If two statements are textually identical but cannot be shared, then these are
called 'versions' of the same statement. If Oracle matches to a statement with many
versions, it has to check each version in turn to see if it is truly identical to
the statement currently being parsed. Hence high version counts are best avoided.
The best approach to take is that all SQL should be sharable, unless it is adhoc or
infrequently used SQL, where it is important to give the CBO as much information as
possible in order for it to produce a good execution plan.

--Eliminating Literal SQL
If you have an existing application it is unlikely that you could eliminate all
literal SQL, but you should be prepared to eliminate some if it is causing problems.
By looking at the V$SQLAREA view it is possible to see which literal statements are
good candidates for converting to use bind variables.
The following query shows SQL in the SGA where there are a large number of similar
statements:

SELECT substr(sql_text,1,40) "SQL",
       count(*),
       sum(executions) "TotExecs"
FROM v$sqlarea
WHERE executions < 5
GROUP BY substr(sql_text,1,40)
HAVING count(*) > 30
ORDER BY 2;

The values 40, 5 and 30 are example values, so this query is looking for different
statements whose first 40 characters are the same, which have only been executed a
few times each, and of which there are at least 30 different occurrences in the
shared pool. This query uses the idea that it is common for literal statements to
begin "SELECT col1,col2,col3 FROM table WHERE ...", with the leading portion of
each statement being the same.

--Avoid Invalidations
Some specific operations will change the state of cursors to INVALID. These
operations directly modify the context of the objects associated with the cursors.
Such operations are TRUNCATE, ANALYZE or DBMS_STATS.GATHER_XXX on tables or
indexes, and grant changes on underlying objects. The associated cursors will stay
in the SQLAREA, but the next time they are referenced they will be reloaded and
fully reparsed, so global performance will be impacted.
The following query can help to identify the concerned cursors:

SELECT substr(sql_text, 1, 40) "SQL", invalidations
FROM v$sqlarea
ORDER BY invalidations DESC;

-- CURSOR_SHARING parameter (8.1.6 onwards)

<Parameter:CURSOR_SHARING> is a new parameter introduced in Oracle8.1.6. It should
be used with caution in this release.
If this parameter is set to FORCE, then literals will be replaced by system
generated bind variables where possible. For multiple similar statements which
differ only in the literals used, this allows the cursors to be shared, even though
the application supplied SQL uses literals.
The parameter can be set dynamically at the system or session level thus:

  ALTER SESSION SET cursor_sharing = FORCE;
or
  ALTER SYSTEM SET cursor_sharing = FORCE;

or it can be set in the init.ora file.

Note: As the FORCE setting causes system generated bind variables to be used in
place of literals, a different execution plan may be chosen by the cost based
optimizer (CBO), as it no longer has the literal values available to it when
costing the best execution plan.

In Oracle9i, it is possible to set CURSOR_SHARING=SIMILAR. SIMILAR causes
statements that may differ in some literals, but are otherwise identical, to share
a cursor, unless the literals affect either the meaning of the statement or the
degree to which the plan is optimized. This enhancement improves the usability of
the parameter for situations where FORCE would normally cause a different,
undesired execution plan. With CURSOR_SHARING=SIMILAR, Oracle determines which
literals are "safe" for substitution with bind variables. This will result in some
SQL not being shared, in an attempt to provide a more efficient execution plan.

-- SESSION_CACHED_CURSORS parameter

<Parameter:SESSION_CACHED_CURSORS> is a numeric parameter which can be set at
instance level or at session level using the command:

  ALTER SESSION SET session_cached_cursors = NNN;

The value NNN determines how many 'cached' cursors there can be in your session.
Whenever a statement is parsed, Oracle first looks at the statements pointed to by
your private session cache - if a sharable version of the statement exists it can
be used. This provides a shortcut access to frequently parsed statements that uses
less CPU and far fewer latch gets than a soft or hard parse.
To get placed in the session cache the same statement has to be parsed 3 times
within the same cursor - a pointer to the shared cursor is then added to your
session cache. If all session cache cursors are in use, then the least recently
used entry is discarded.
If you do not have this parameter set already, then it is advisable to set it to a
starting value of about 50. The statistics section of the bstat/estat report
includes a value for 'session cursor cache hits', which shows if the cursor cache
is giving any benefit. The size of the cursor cache can then be increased or
decreased as necessary.
SESSION_CACHED_CURSORS is particularly useful with Oracle Forms applications, where
forms are frequently opened and closed.

-- SHARED_POOL_RESERVED_SIZE parameter

There are quite a few notes explaining <Parameter:SHARED_POOL_RESERVED_SIZE>
already in circulation. The parameter was introduced in Oracle 7.1.5 and provides
a means of reserving a portion of the shared pool for large memory allocations.
The reserved area comes out of the shared pool itself.
From a practical point of view one should set SHARED_POOL_RESERVED_SIZE to about
10% of SHARED_POOL_SIZE, unless either the shared pool is very large OR
SHARED_POOL_RESERVED_MIN_ALLOC has been set lower than the default value:
If the shared pool is very large, then 10% may waste a significant amount of
memory when a few Mb will suffice.
If SHARED_POOL_RESERVED_MIN_ALLOC has been lowered, then many space requests may
be eligible to be satisfied from this portion of the shared pool, and so 10% may
be too little.
It is easy to monitor the space usage of the reserved area using the
<View:V$SHARED_POOL_RESERVED>, which has a column FREE_SPACE.

-- SHARED_POOL_RESERVED_MIN_ALLOC parameter

In Oracle8i this parameter is hidden.
SHARED_POOL_RESERVED_MIN_ALLOC should generally be left at its default value,
although in certain cases values of 4100 or 4200 may help relieve some contention
on a heavily loaded shared pool.

-- SHARED_POOL_SIZE parameter

<Parameter:SHARED_POOL_SIZE> controls the size of the shared pool itself. The size
of the shared pool can impact performance. If it is too small, then it is likely
that sharable information will be flushed from the pool and then later need to be
reloaded (rebuilt). If there is heavy use of literal SQL and the shared pool is
too large, then over time a lot of small chunks of memory can build up on the
internal memory freelists, causing the shared pool latch to be held for longer,
which in turn can impact performance. In this situation a smaller shared pool may
perform better than a larger one. This problem is greatly reduced in 8.0.6 and in
8.1.6 onwards due to the enhancement in <bug:986149>.
NB: The shared pool itself should never be made so large that paging or swapping
occur, as performance can then decrease by many orders of magnitude.

-- _SQLEXEC_PROGRESSION_COST parameter (8.1.5 onwards)

This is a hidden parameter which was introduced in Oracle 8.1.5. The parameter is
included here as the default setting has caused some problems with SQL
sharability. Setting this parameter to 0 can avoid these issues, which result in
multiple versions of statements in the shared pool.
Eg: Add the following to the init.ora file:

  # _SQLEXEC_PROGRESSION_COST is set to ZERO to avoid SQL sharing issues
  # See Note:62143.1 for details
  _sqlexec_progression_cost=0

Note that a side effect of setting this to '0' is that the V$SESSION_LONGOPS view
is not populated by long running queries.

-- MTS, Shared Server and XA

The multi-threaded server (MTS) adds to the load on the shared pool and can
contribute to any problems, as the User Global Area (UGA) resides in the shared
pool. This is also true of XA sessions in Oracle7, as their UGA is located in the
shared pool. (In Oracle8/8i XA sessions do NOT put their UGA in the shared pool.)
In Oracle8 the Large Pool can be used for MTS, reducing its impact on shared pool
activity - however, memory allocations in the Large Pool still make use of the
"shared pool latch". See <Note:62140.1> for a description of the Large Pool.
Using dedicated connections rather than MTS causes the UGA to be allocated out of
process private memory rather than the shared pool. Private memory allocations do
not use the "shared pool latch", and so a switch from MTS to dedicated connections
can help reduce contention in some cases.
In Oracle9i, MTS was renamed to "Shared Server". For the purposes of the shared
pool, the behaviour is essentially the same.

Useful SQL for looking at memory and Shared Pool problems
---------------------------------------------------------

SGA layout:
-----------
SELECT * FROM V$SGA;

free memory shared pool:
------------------------
SELECT * FROM v$sgastat WHERE name = 'free memory';

hit ratio shared pool:
----------------------
SELECT gethits, gets, gethitratio FROM v$librarycache
WHERE namespace = 'SQL AREA';

SELECT SUM(PINS) "EXECUTIONS", SUM(RELOADS) "CACHE MISSES WHILE EXECUTING"
FROM V$LIBRARYCACHE;

SELECT sum(sharable_mem) FROM v$db_object_cache;

statistics:
-----------
SELECT class, value, name FROM v$sysstat;

Executions:
-----------
SELECT substr(sql_text,1,90) "SQL",
       count(*),
       sum(executions) "TotExecs"
FROM v$sqlarea

WHERE executions > 5
GROUP BY substr(sql_text,1,90)
HAVING count(*) > 10
ORDER BY 2;
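If literal SQL has already fragmented the pool, flushing it is a blunt but sometimes useful remedy during a quiet period (it forces a reparse of everything that is not pinned):

```sql
ALTER SYSTEM FLUSH SHARED_POOL;
```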

The values 90, 5 and 10 are example values here, so this query looks for different
statements whose first 90 characters are the same, which have each been executed
more than 5 times, and of which there are more than 10 different occurrences in
the shared pool. This query uses the idea that it is common for literal statements
to begin "SELECT col1,col2,col3 FROM table WHERE ...", with the leading portion of
each statement being the same.

V$SQLAREA:

SQL_TEXT VARCHAR2(1000)
First thousand characters of the SQL text for the current cursor

SHARABLE_MEM NUMBER
Amount of shared memory used by a cursor. If multiple child cursors exist, then
the sum of all shared memory used by all child cursors.

PERSISTENT_MEM NUMBER
Fixed amount of memory used for the lifetime of an open cursor. If multiple child
cursors exist, the fixed sum of memory used for the lifetime of all the child
cursors.

RUNTIME_MEM NUMBER
Fixed amount of memory required during execution of a cursor. If multiple child
cursors exist, the fixed sum of all memory required during execution of all the
child cursors.

SORTS NUMBER
Sum of the number of sorts that were done for all the child cursors

VERSION_COUNT NUMBER
Number of child cursors that are present in the cache under this parent

LOADED_VERSIONS NUMBER
Number of child cursors that are present in the cache and have their context heap
(KGL heap 6) loaded

OPEN_VERSIONS NUMBER
The number of child cursors that are currently open under this current parent

USERS_OPENING NUMBER
The number of users that have any of the child cursors open

FETCHES NUMBER Number of fetches associated with the SQL statement EXECUTIONS NUMBER Total number of executions, totalled over all the child cursors USERS_EXECUTING NUMBER Total number of users executing the statement over all child cursors LOADS NUMBER The number of times the object was loaded or reloaded FIRST_LOAD_TIME VARCHAR2(19) Timestamp of the parent creation time INVALIDATIONS NUMBER Total number of invalidations over all the child cursors PARSE_CALLS NUMBER The sum of all parse calls to all the child cursors under this parent DISK_READS NUMBER The sum of the number of disk reads over all child cursors BUFFER_GETS NUMBER The sum of buffer gets over all child cursors ROWS_PROCESSED NUMBER The total number of rows processed on behalf of this SQL statement COMMAND_TYPE NUMBER The Oracle command type definition OPTIMIZER_MODE VARCHAR2(10) Mode under which the SQL statement is executed PARSING_USER_ID NUMBER The user ID of the user that has parsed the very first cursor under this parent PARSING_SCHEMA_ID NUMBER The schema ID that was used to parse this child cursor KEPT_VERSIONS NUMBER The number of child cursors that have been marked to be kept using the DBMS_SHA RED_POOL package

ADDRESS RAW(4) The address of the handle to the parent for this cursor HASH_VALUE NUMBER The hash value of the parent statement in the library cache MODULE VARCHAR2(64) Contains the name of the module that was executing at the time that the SQL sta tement was first parsed as set by calling DBMS_APPLICATION_INFO.SET_MODULE MODULE_HASH NUMBER The hash value of the module that is named in the MODULE column ACTION VARCHAR2(64) Contains the name of the action that was executing at the time that the SQL sta tement was first parsed as set by calling DBMS_APPLICATION_INFO.SET_ACTION ACTION_HASH NUMBER The hash value of the action that is named in the ACTION column SERIALIZABLE_ABORTS NUMBER Number of times the transaction fails to serialize, producing ORA-08177 errors, totalled over all the child cursors IS_OBSOLETE VARCHAR2(1) Indicates whether the cursor has become obsolete (Y) or not (N). This can happe n if the number of child cursors is too large. CHILD_LATCH NUMBER Child latch number that is protecting the cursor V$SQL: -----V$SQL lists statistics on shared SQL area without the GROUP BY clause and contai ns one row for each child of the original SQL text entered. Column Datatype Description SQL_TEXT VARCHAR2(1000) First thousand characters of the SQL text for the current cursor SHARABLE_MEM NUMBER Amount of shared memory used by this child cursor (in bytes)

PERSISTENT_MEM NUMBER Fixed amount of memory used for the lifetime of this child cursor (in bytes) RUNTIME_MEM NUMBER Fixed amount of memory required during the execution of this child cursor SORTS NUMBER Number of sorts that were done for this child cursor LOADED_VERSIONS NUMBER Indicates whether the context heap is loaded (1) or not (0) OPEN_VERSIONS NUMBER Indicates whether the child cursor is locked (1) or not (0) USERS_OPENING NUMBER Number of users executing the statement FETCHES NUMBER Number of fetches associated with the SQL statement EXECUTIONS NUMBER Number of executions that took place on this object since it was brought into t he library cache USERS_EXECUTING NUMBER Number of users executing the statement LOADS NUMBER Number of times the object was either loaded or reloaded FIRST_LOAD_TIME VARCHAR2(19) Timestamp of the parent creation time INVALIDATIONS NUMBER Number of times this child cursor has been invalidated PARSE_CALLS NUMBER Number of parse calls for this child cursor DISK_READS NUMBER Number of disk reads for this child cursor BUFFER_GETS NUMBER

Number of buffer gets for this child cursor ROWS_PROCESSED NUMBER Total number of rows the parsed SQL statement returns COMMAND_TYPE NUMBER Oracle command type definition OPTIMIZER_MODE VARCHAR2(10) Mode under which the SQL statement is executed OPTIMIZER_COST NUMBER Cost of this query given by the optimizer PARSING_USER_ID NUMBER User ID of the user who originally built this child cursor PARSING_SCHEMA_ID NUMBER Schema ID that was used to originally build this child cursor KEPT_VERSIONS NUMBER Indicates whether this child cursor has been marked to be kept pinned in the ca che using the DBMS_SHARED_POOL package ADDRESS RAW(4) Address of the handle to the parent for this cursor TYPE_CHK_HEAP RAW(4) Descriptor of the type check heap for this child cursor HASH_VALUE NUMBER Hash value of the parent statement in the library cache PLAN_HASH_VALUE NUMBER Numerical representation of the SQL plan for this cursor. Comparing one PLAN_HA SH_VALUE to another easily identifies whether or not two plans are the same (rather than comparing the two plans line by line). CHILD_NUMBER NUMBER Number of this child cursor MODULE VARCHAR2(64) Contains the name of the module that was executing at the time that the SQL sta tement was first parsed, which is set by calling DBMS_APPLICATION_INFO.SET_MODULE

MODULE_HASH NUMBER Hash value of the module listed in the MODULE column ACTION VARCHAR2(64) Contains the name of the action that was executing at the time that the SQL sta tement was first parsed, which is set by calling DBMS_APPLICATION_INFO.SET_ACTION ACTION_HASH NUMBER Hash value of the action listed in the ACTION column SERIALIZABLE_ABORTS NUMBER Number of times the transaction fails to serialize, producing ORA-08177 errors, per cursor OUTLINE_CATEGORY VARCHAR2(64) If an outline was applied during construction of the cursor, then this column d isplays the category of that outline. Otherwise the column is left blank. CPU_TIME NUMBER CPU time (in microseconds) used by this cursor for parsing/executing/fetching ELAPSED_TIME NUMBER Elapsed time (in microseconds) used by this cursor for parsing/executing/fetchi ng OUTLINE_SID NUMBER Outline session identifier CHILD_ADDRESS RAW(4) Address of the child cursor SQLTYPE NUMBER Denotes the version of the SQL language used for this statement REMOTE VARCHAR2(1) (Y/N) Identifies whether the cursor is remote mapped or not OBJECT_STATUS VARCHAR2(19) Status of the cursor (VALID/INVALID) LITERAL_HASH_VALUE NUMBER Hash value of the literals which are replaced with system-generated bind variab les and are to be matched, when CURSOR_SHARING is used. This is not the hash value for the SQL statement. If CURSOR_SHARING is not used,

then the value is 0. LAST_LOAD_TIME VARCHAR2(19) IS_OBSOLETE VARCHAR2(1) Indicates whether the cursor has become obsolete (Y) or not (N). This can happe n if the number of child cursors is too large. CHILD_LATCH NUMBER Child latch number that is protecting the cursor

Checking for high version counts:
---------------------------------
SELECT address, hash_value, version_count, users_opening, users_executing,
       substr(sql_text,1,40) "SQL"
FROM v$sqlarea
WHERE version_count > 10;

"Versions" of a statement occur where the SQL is character for character identical
but the underlying objects or binds etc.. are different.

Finding statement/s which use lots of shared pool memory:
---------------------------------------------------------
SELECT substr(sql_text,1,60) "Stmt", count(*),
       sum(sharable_mem) "Mem",
       sum(users_opening) "Open",
       sum(executions) "Exec"
FROM v$sql
GROUP BY substr(sql_text,1,60)
HAVING sum(sharable_mem) > 20000;

SELECT substr(sql_text,1,100) "Stmt", count(*),
       sum(sharable_mem) "Mem",
       sum(users_opening) "Open",
       sum(executions) "Exec"
FROM v$sql
GROUP BY substr(sql_text,1,100)
HAVING sum(executions) > 200;

SELECT substr(sql_text,1,100) "Stmt", count(*),
       sum(executions) "Exec"
FROM v$sql
GROUP BY substr(sql_text,1,100)
HAVING sum(executions) > 200

;

where the HAVING threshold on sharable_mem (20000 in the first query above) should
be set to roughly 10% of the shared pool size in bytes. This should show if there
are similar literal statements, or multiple versions of statements, which account
for a large portion of the memory in the shared pool.
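Once a heavy package or cursor has been identified, it can be pinned so that it is not aged out of the shared pool. This uses the DBMS_SHARED_POOL package, which may first need to be installed by running dbmspool.sql as SYS; the object name below is just an example:

```sql
-- pin a frequently used package ('P' = package/procedure/function):
exec DBMS_SHARED_POOL.KEEP('SYS.STANDARD', 'P');

-- verify what is currently kept:
SELECT owner, name, type
FROM v$db_object_cache
WHERE kept = 'YES';
```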

1.2 statistics:
---------------
- Rule based / Cost based
- apply EXPLAIN PLAN to the query
- ANALYZE command:

ANALYZE TABLE EMPLOYEE COMPUTE STATISTICS;
ANALYZE TABLE EMPLOYEE COMPUTE STATISTICS FOR ALL INDEXES;
ANALYZE INDEX scott.indx1 COMPUTE STATISTICS;
ANALYZE TABLE EMPLOYEE ESTIMATE STATISTICS SAMPLE 10 PERCENT;
ANALYZE TABLE EMPLOYEE DELETE STATISTICS;

- DBMS_UTILITY.ANALYZE_SCHEMA() procedure:

DBMS_UTILITY.ANALYZE_SCHEMA (
   schema            VARCHAR2,
   method            VARCHAR2,
   estimate_rows     NUMBER   DEFAULT NULL,
   estimate_percent  NUMBER   DEFAULT NULL,
   method_opt        VARCHAR2 DEFAULT NULL);

DBMS_UTILITY.ANALYZE_DATABASE (
   method            VARCHAR2,
   estimate_rows     NUMBER   DEFAULT NULL,
   estimate_percent  NUMBER   DEFAULT NULL,
   method_opt        VARCHAR2 DEFAULT NULL);

method = COMPUTE, ESTIMATE or DELETE

To execute:
exec DBMS_UTILITY.ANALYZE_SCHEMA('CISADM','COMPUTE');
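From 8i onwards, DBMS_STATS is the preferred interface for gathering optimizer statistics (ANALYZE and DBMS_UTILITY remain for backward compatibility); typical calls, reusing the schema and table names from the examples above:

```sql
exec DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'CISADM', estimate_percent => 10, cascade => TRUE);

exec DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMPLOYEE', cascade => TRUE);
```

cascade => TRUE also gathers statistics on the indexes of the analyzed tables.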

1.3 Storage parameters:
-----------------------
segment: pctfree, pctused, number AND size of extents in the STORAGE clause

- very low updates  : pctfree low
- if updates, OLTP  : pctfree 10, pctused 40
- if only inserts   : pctfree low
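As an illustration, a segment created with such parameters might look as follows (table name, columns, sizes and tablespace are made up):

```sql
CREATE TABLE scott.orders
( order_id   NUMBER(10),
  order_date DATE
)
PCTFREE 10 PCTUSED 40
STORAGE (INITIAL 5M NEXT 5M PCTINCREASE 0)
TABLESPACE data;
```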

1.4 rebuild indexes on a regular basis:
---------------------------------------
alter index SCOTT.EMPNO_INDEX rebuild
tablespace INDEX
storage (initial 5M next 5M pctincrease 0);

You should next use the ANALYZE TABLE ... COMPUTE STATISTICS command.

1.5 Is an index used in a query?:
---------------------------------
The WHERE clause of a query must use the 'leading column' of (one of the)
index(es). Suppose an index 'indx1' exists on EMPLOYEE(city, state, zip),
and a user issues the query:

  SELECT .. FROM EMPLOYEE WHERE state='NY'

Then this query will not use that index!
Therefore you must pay attention to the leading column of any index.

1.6 set transaction parameters:
-------------------------------
ONLY ORACLE 7, 8, 8i:
Suppose you must perform an action which will generate a lot of redo and
rollback. If you want to influence which rollback segment will be used in your
transaction, you can use the statement:

  set transaction use rollback segment SEGMENT_NAME

1.7 Reduce fragmentation of a dictionary managed tablespace:
------------------------------------------------------------
alter tablespace DATA coalesce;

1.8 normalisation of tables:
----------------------------
The more tables are 'normalized', the higher the performance cost of queries
joining those tables.

1.9 committing every N rows:
----------------------------
declare
  i number := 0;
  cursor s1 is SELECT * FROM tab1 WHERE col1 = 'value1' FOR UPDATE;
begin
  for c1 in s1 loop
    update tab1 set col1 = 'value2' WHERE current of s1;
    i := i + 1;
    if i > 1000 then
      commit;   -- Commit after every X records

      i := 0;
    end if;
  end loop;
  commit;
end;
/

-- ------------------------------

CREATE TABLE TEST
( ID    NUMBER(10)   NULL,
  DATUM DATE         NULL,
  NAME  VARCHAR2(10) NULL
);

declare
  i number := 1000;
begin
  while i>1 loop
    insert into TEST values (1, sysdate+i, 'joop');
    i := i - 1;
    commit;
  end loop;
  commit;
end;
/

-- ------------------------------

CREATE TABLE TEST2
( I     NUMBER       NULL,
  ID    NUMBER(10)   NULL,
  DATUM DATE         NULL,
  DAG   VARCHAR2(10) NULL,
  NAME  VARCHAR2(10) NULL
);

declare
  i number := 1;
  j date;
  k varchar2(10);
begin
  while i<1000000 loop
    j := sysdate+i;
    k := TO_CHAR(SYSDATE+i,'DAY');
    insert into TEST2 values (i, 1, j, k, 'joop');
    i := i + 1;
    commit;
  end loop;
  commit;

end;
/

-- ------------------------------

CREATE TABLE TEST3
( ID    NUMBER(10)   NULL,
  DATUM DATE         NULL,
  DAG   VARCHAR2(10) NULL,
  VORIG VARCHAR2(10) NULL,
  NAME  VARCHAR2(10) NULL
);

declare
  i number := 1;
  j date;
  k varchar2(10);
  l varchar2(10);
begin
  while i<1000 loop
    j := sysdate+i;
    k := TO_CHAR(SYSDATE+i,'DAY');
    l := TO_CHAR(SYSDATE+i-1,'DAY');
    insert into TEST3 (ID,DATUM,DAG,VORIG,NAME) values (i, j, k, l, 'joop');
    i := i + 1;
    commit;
  end loop;
  commit;
end;
/

1.10 explain plan command, autotrace:
-------------------------------------

1 explain plan command:
-----------------------
First execute the utlxplan.sql script. This script will create the PLAN_TABLE
table, needed for storage of the performance data.
Now it is possible to do the following:

-- optionally, delete the former performance data
DELETE FROM plan_table WHERE statement_id = 'XXX';
COMMIT;

-- now you can run the query that is to be analyzed
EXPLAIN PLAN
SET STATEMENT_ID = 'XXX'
FOR SELECT * FROM EMPLOYEE WHERE city > 'Y%';

To view results, you can use the utlxpls.sql script.
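Instead of utlxpls.sql, the PLAN_TABLE can also be queried directly; the classic hierarchical query, using the same statement_id as above, is:

```sql
SELECT LPAD(' ', 2*level) || operation || ' ' || options || ' ' || object_name "Plan"
FROM plan_table
START WITH id = 0 AND statement_id = 'XXX'
CONNECT BY PRIOR id = parent_id AND statement_id = 'XXX';
```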

2. set autotrace on / off
-------------------------
This also uses the PLAN_TABLE, and the PLUSTRACE role must exist.
If needed, the plustrce.sql script can be run (as SYS).

Note: execution plan / access path for a join query:
- nested loop: one table is the driving table, accessed by a full table scan or
  via an index, and the second table is accessed using an index of that second
  table, based on the WHERE clause.
- merge join: if there is no usable index, all rows are retrieved, sorted, and
  joined into a result set.
- hash join: certain init.ora parameters must be set (HASH_JOIN_ENABLED=TRUE,
  HASH_AREA_SIZE= , or via ALTER SESSION SET HASH_JOIN_ENABLED=TRUE).
  Usually very effective when joining a small table to a large table. The small
  table becomes the driving table in memory, and what follows is an algorithm
  resembling the nested loop.
  It can also be forced with a hint:

SELECT /*+ USE_HASH(COMPANY) */ COMPANY.Name, SUM(Dollar_Amount)
FROM COMPANY, SALES
WHERE COMPANY.Company_ID = SALES.Company_ID
GROUP BY COMPANY.Name;

3 SQL trace and TKPROF
----------------------
SQL trace can be activated via init.ora, or via:

ALTER SESSION SET SQL_TRACE=TRUE

DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(sid, serial#, TRUE);
DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(12, 398, TRUE);
DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(12, 398, FALSE);
DBMS_SUPPORT.START_TRACE_IN_SESSION(12,398);

Turn SQL tracing on in session 448. The trace information will get written to
user_dump_dest:
SQL> exec dbms_system.set_sql_trace_in_session(448,2288,TRUE);

Turn SQL tracing off in session 448:
SQL> exec dbms_system.set_sql_trace_in_session(448,2288,FALSE);

Init.ora:
Max_dump_file_size in OS blocks
SQL_TRACE=TRUE   (can produce very large files; applies to all sessions)
USER_DUMP_DEST=  location of the trace files
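The raw trace file in user_dump_dest is then formatted with the tkprof utility; a typical invocation (file names are illustrative):

```shell
tkprof ora_0123.trc report.txt sys=no sort=exeela explain=scott/tiger
```

sort=exeela orders the statements by elapsed execution time, and sys=no suppresses recursive SYS statements.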

1.12 If the CBO does not use the best access path: hints in the query:
----------------------------------------------------------------------
Goal hints:          ALL_ROWS, FIRST_ROWS, CHOOSE, RULE
Access method hints: FULL, ROWID, CLUSTER, HASH, INDEX

SELECT /*+ INDEX(emp emp_pk) */ * FROM emp WHERE empno=12345;

SELECT /*+ RULE */ ename, dname
FROM emp, dept
WHERE emp.deptno=dept.deptno

===================================================
3. Data dictionary queries with regard to performance:
===================================================

3.1 Reads AND writes in files:
------------------------------
V$FILESTAT, V$DATAFILE

- Relative File I/O (1)

SELECT fs.file#, df.file#, substr(df.name, 1, 50), fs.phyrds, fs.phywrts, df.status
FROM v$filestat fs, v$datafile df
WHERE fs.file#=df.file#

- Relative File I/O (2)

set pagesize 60 linesize 80 newpage 0 feedback off
ttitle skip centre 'Datafile IO Weights' skip centre
column Total_IO format 999999999
column Weight format 999.99
column file_name format A40
break on drive skip 2
compute sum of Weight on Drive

SELECT substr(DF.Name, 1, 6) Drive, DF.Name File_Name,
       FS.Phyblkrd+FS.Phyblkwrt Total_IO,
       100*(FS.Phyblkrd+FS.Phyblkwrt)/MaxIO Weight
FROM V$FILESTAT FS, V$DATAFILE DF,
     (SELECT MAX(Phyblkrd+Phyblkwrt) MaxIO FROM V$FILESTAT)
WHERE DF.File#=FS.File#
ORDER BY Weight desc
/

3.2 undocumented init parameters:
---------------------------------
SELECT * FROM SYS.X$KSPPI

WHERE SUBSTR(KSPPINM,1,1) = '_';

3.3 Will an index be used or not?:
----------------------------------
Look at DBA_TAB_COLUMNS.NUM_DISTINCT and DBA_TABLES.NUM_ROWS:
if num_distinct comes close to num_rows, an index is favoured over a full
table scan.

Look at DBA_INDEXES, USER_INDEXES.CLUSTERING_FACTOR:
if clustering_factor = the number of blocks, the table is well ordered with
respect to the index.

3.4 quick overview of the buffer cache hit ratio:
-------------------------------------------------
Hit ratio = (LR - PR) / LR

Suppose there are hardly any physical reads PR, i.e. PR=0; then the
hit ratio = LR/LR = 1. In that case no blocks are read from disk.
In practice the hit ratio should on average be > 0.8 - 0.9.

V$sess_io, v$sysstat and v$session can be consulted to determine the hit ratio.
V$sess_io: sid, consistent_gets, physical_reads
V$session: sid, username

SELECT name, value
FROM v$sysstat
WHERE name IN ('db block gets', 'consistent gets', 'physical reads');

SELECT (1-(pr.value/(dbg.value+cg.value)))*100
FROM v$sysstat pr, v$sysstat dbg, v$sysstat cg
WHERE pr.name = 'physical reads'
AND dbg.name = 'db block gets'
AND cg.name = 'consistent gets';

-- more elaborate query with regard to the hit ratio

CLEAR
SET HEAD ON
SET VERIFY OFF

col HitRatio format 999.99 heading 'Hit Ratio'
col CGets format 9999999999999 heading 'Consistent Gets'
col DBGets format 9999999999999 heading 'DB Block Gets'
col PhyGets format 9999999999999 heading 'Physical Reads'

SELECT substr(Username, 1, 10), v$sess_io.sid, consistent_gets, block_gets,
       physical_reads,
       100*(consistent_gets+block_gets-physical_reads)/
           (consistent_gets+block_gets) HitRatio
FROM v$session, v$sess_io
WHERE v$session.sid = v$sess_io.sid
AND (consistent_gets+block_gets) > 0
AND Username is NOT NULL
/

SELECT 'Hit Ratio' Database, cg.value CGets, db.value DBGets, pr.value PhyGets,
       100*(cg.value+db.value-pr.value)/(cg.value+db.value) HitRatio
FROM v$sysstat db, v$sysstat cg, v$sysstat pr
WHERE db.name = 'db block gets'
AND cg.name = 'consistent gets'
AND pr.name = 'physical reads'
/

3.6 What are the active transactions?:
--------------------------------------
SELECT substr(username, 1, 10), substr(terminal, 1, 10), substr(osuser, 1, 10),
       t.start_time, r.name, t.used_ublk "ROLLB BLKS",
       decode(t.space, 'YES', 'SPACE TX',
         decode(t.recursive, 'YES', 'RECURSIVE TX',
           decode(t.noundo, 'YES', 'NO UNDO TX', t.status)
       )) status
FROM sys.v_$transaction t, sys.v_$rollname r, sys.v_$session s
WHERE t.xidusn = r.usn
AND t.ses_addr = s.saddr

3.7 SIDs, resource load and locks:
----------------------------------
SELECT sid, lmode, ctime, block FROM v$lock

SELECT s.sid, substr(s.username, 1, 10), substr(s.schemaname, 1, 10),
       substr(s.osuser, 1, 10), substr(s.program, 1, 10), s.command,
       l.lmode, l.block
FROM v$session s, v$lock l
WHERE s.sid=l.sid;

SELECT l.addr, s.saddr, l.sid, s.sid, l.type, l.lmode, s.status,
       substr(s.schemaname, 1, 10), s.lockwait, s.row_wait_obj#
FROM v$lock l, v$session s
WHERE l.addr=s.saddr

SELECT sid, substr(owner, 1, 10), substr(object, 1, 10)
FROM v$access

SID    Session number that is accessing an object
OWNER  Owner of the object
OBJECT Name of the object

TYPE   Type identifier for the object

SELECT substr(s.username, 1, 10), s.sid, t.log_io, t.phy_io
FROM v$session s, v$transaction t
WHERE t.ses_addr=s.saddr

3.8 latch use in the SGA (locks on a process):
----------------------------------------------
SELECT c.name, a.gets, a.misses, a.sleeps,
       a.immediate_gets, a.immediate_misses, b.pid
FROM v$latch a, v$latchholder b, v$latchname c
WHERE a.addr = b.laddr(+)
AND a.latch# = c.latch#
AND (c.name like 'redo%' or c.name like 'row%')
ORDER BY a.latch#;

column latch_name format a40

SELECT name latch_name, gets, misses,
       round(decode(gets-misses,0,1,gets-misses)/
             decode(gets,0,1,gets),3) hit_ratio
FROM v$latch
WHERE name = 'redo allocation';

column latch_name format a40

SELECT name latch_name, immediate_gets, immediate_misses,
       round(decode(immediate_gets-immediate_misses,0,1,
             immediate_gets-immediate_misses)/
             decode(immediate_gets,0,1,immediate_gets),3) hit_ratio
FROM v$latch
WHERE name = 'redo copy';

column name format a40
column value format a10

SELECT name, value FROM v$parameter
WHERE name in ('log_small_entry_max_size','log_simultaneous_copies','cpu_count');

-- latches and locks overview

set pagesize 23
set pause on
set pause 'Hit any key...'

col sid format 999999
col serial# format 999999
col username format a12 trunc
col process format a8 trunc
col terminal format a12 trunc
col type format a12 trunc
col lmode format a4 trunc
col lrequest format a4 trunc
col object format a73 trunc

SELECT s.sid, s.serial#,
       decode(s.process, null,
         decode(substr(p.username,1,1), '?', upper(s.osuser), p.username),
         decode(p.username, 'ORACUSR ', upper(s.osuser), s.process)) process,
       nvl(s.username, 'SYS ('||substr(p.username,1,4)||')') username,
       decode(s.terminal, null, rtrim(p.terminal, chr(0)), upper(s.terminal)) terminal,
       decode(l.type,
         -- Long locks
         'TM', 'DML/DATA ENQ',  'TX', 'TRANSAC ENQ ',  'UL', 'PLS USR LOCK',
         -- Short locks
         'BL', 'BUF HASH TBL',  'CF', 'CONTROL FILE',  'CI', 'CROSS INST F',
         'DF', 'DATA FILE   ',  'CU', 'CURSOR BIND ',  'DL', 'DIRECT LOAD ',
         'DM', 'MOUNT/STRTUP',  'DR', 'RECO LOCK   ',  'DX', 'DISTRIB TRAN',
         'FS', 'FILE SET    ',  'IN', 'INSTANCE NUM',  'FI', 'SGA OPN FILE',
         'IR', 'INSTCE RECVR',  'IS', 'GET STATE   ',  'IV', 'LIBCACHE INV',
         'KK', 'LOG SW KICK ',  'LS', 'LOG SWITCH  ',  'MM', 'MOUNT DEF   ',
         'MR', 'MEDIA RECVRY',  'PF', 'PWFILE ENQ  ',  'PR', 'PROCESS STRT',
         'RT', 'REDO THREAD ',  'SC', 'SCN ENQ     ',  'RW', 'ROW WAIT    ',
         'SM', 'SMON LOCK   ',  'SN', 'SEQNO INSTCE',  'SQ', 'SEQNO ENQ   ',
         'ST', 'SPACE TRANSC',  'SV', 'SEQNO VALUE ',  'TA', 'GENERIC ENQ ',
         'TD', 'DLL ENQ     ',  'TE', 'EXTEND SEG  ',  'TS', 'TEMP SEGMENT',
         'TT', 'TEMP TABLE  ',  'UN', 'USER NAME   ',  'WL', 'WRITE REDO  ',
         'TYPE='||l.type) type,
       decode(l.lmode, 0,'NONE', 1,'NULL', 2,'RS', 3,'RX', 4,'S', 5,'RSX', 6,'X',
              to_char(l.lmode)) lmode,
       decode(l.request, 0,'NONE', 1,'NULL', 2,'RS', 3,'RX', 4,'S', 5,'RSX', 6,'X',
              to_char(l.request)) lrequest,
       decode(l.type,
         'MR', decode(u.name, null, 'DICTIONARY OBJECT', u.name||'.'||o.name),
         'TD', u.name||'.'||o.name,
         'TM', u.name||'.'||o.name,
         'RW', 'FILE#='||substr(l.id1,1,3)||
               ' BLOCK#='||substr(l.id1,4,5)||' ROW='||l.id2,
         'TX', 'RS+SLOT#'||l.id1||' WRP#'||l.id2,
         'WL', 'REDO LOG FILE#='||l.id1,
         'RT', 'THREAD='||l.id1,
         'TS', decode(l.id2, 0, 'ENQUEUE', 'NEW BLOCK ALLOCATION'),
         'ID1='||l.id1||' ID2='||l.id2) object
FROM   sys.v_$lock l, sys.v_$session s, sys.obj$ o, sys.user$ u, sys.v_$process p
WHERE  s.paddr  = p.addr(+)
AND    l.sid    = s.sid
AND    l.id1    = o.obj#(+)
AND    o.owner# = u.user#(+)
AND    l.type  <> 'MR'
UNION ALL   /*** LATCH HOLDERS ***/
SELECT s.sid, s.serial#, s.process, s.username, s.terminal,
       'LATCH', 'X', 'NONE',
       h.name||' ADDR='||rawtohex(laddr)
FROM   sys.v_$process p, sys.v_$session s, sys.v_$latchholder h
WHERE  h.pid  = p.pid
AND    p.addr = s.paddr
UNION ALL   /*** LATCH WAITERS ***/
SELECT s.sid, s.serial#, s.process, s.username, s.terminal,
       'LATCH', 'NONE', 'X',
       name||' LATCH='||p.latchwait
FROM   sys.v_$session s, sys.v_$process p, sys.v_$latch l
WHERE  latchwait is not null
AND    p.addr = s.paddr
AND    p.latchwait = l.addr
/

SELECT v.SID, v.BLOCK_GETS, v.BLOCK_CHANGES, w.USERNAME, w.OSUSER, w.TERMINAL
FROM   v$sess_io v, v$session w
WHERE  v.SID = w.SID
ORDER BY v.SID;

SQL> desc v$sess_io
Name                          Null?    Type
----------------------------- -------- --------------------
SID                                    NUMBER
BLOCK_GETS                             NUMBER
CONSISTENT_GETS                        NUMBER
PHYSICAL_READS                         NUMBER
BLOCK_CHANGES                          NUMBER
CONSISTENT_CHANGES                     NUMBER

SQL> desc v$session
Name                          Null?    Type
----------------------------- -------- --------------------
SADDR                                  RAW(8)
SID                                    NUMBER
SERIAL#                                NUMBER
AUDSID                                 NUMBER
PADDR                                  RAW(8)
USER#                                  NUMBER
USERNAME                               VARCHAR2(30)
COMMAND                                NUMBER
OWNERID                                NUMBER
TADDR                                  VARCHAR2(16)
LOCKWAIT                               VARCHAR2(16)
STATUS                                 VARCHAR2(8)
SERVER                                 VARCHAR2(9)
SCHEMA#                                NUMBER
SCHEMANAME                             VARCHAR2(30)
OSUSER                                 VARCHAR2(30)
PROCESS                                VARCHAR2(12)
MACHINE                                VARCHAR2(64)
TERMINAL                               VARCHAR2(30)
PROGRAM                                VARCHAR2(48)
TYPE                                   VARCHAR2(10)
SQL_ADDRESS                            RAW(8)
SQL_HASH_VALUE                         NUMBER
SQL_ID                                 VARCHAR2(13)
SQL_CHILD_NUMBER                       NUMBER
PREV_SQL_ADDR                          RAW(8)
PREV_HASH_VALUE                        NUMBER
PREV_SQL_ID                            VARCHAR2(13)
PREV_CHILD_NUMBER                      NUMBER
PLSQL_ENTRY_OBJECT_ID                  NUMBER
PLSQL_ENTRY_SUBPROGRAM_ID              NUMBER
PLSQL_OBJECT_ID                        NUMBER
PLSQL_SUBPROGRAM_ID                    NUMBER
MODULE                                 VARCHAR2(48)
MODULE_HASH                            NUMBER
ACTION                                 VARCHAR2(32)
ACTION_HASH                            NUMBER
CLIENT_INFO                            VARCHAR2(64)
FIXED_TABLE_SEQUENCE                   NUMBER
ROW_WAIT_OBJ#                          NUMBER
ROW_WAIT_FILE#                         NUMBER
ROW_WAIT_BLOCK#                        NUMBER
ROW_WAIT_ROW#                          NUMBER
LOGON_TIME                             DATE
LAST_CALL_ET                           NUMBER
PDML_ENABLED                           VARCHAR2(3)
FAILOVER_TYPE                          VARCHAR2(13)
FAILOVER_METHOD                        VARCHAR2(10)
FAILED_OVER                            VARCHAR2(3)
RESOURCE_CONSUMER_GROUP                VARCHAR2(32)
PDML_STATUS                            VARCHAR2(8)
PDDL_STATUS                            VARCHAR2(8)
PQ_STATUS                              VARCHAR2(8)
CURRENT_QUEUE_DURATION                 NUMBER
CLIENT_IDENTIFIER                      VARCHAR2(64)
BLOCKING_SESSION_STATUS                VARCHAR2(11)
BLOCKING_INSTANCE                      NUMBER
BLOCKING_SESSION                       NUMBER
SEQ#                                   NUMBER
EVENT#                                 NUMBER
EVENT                                  VARCHAR2(64)
P1TEXT                                 VARCHAR2(64)
P1                                     NUMBER
P1RAW                                  RAW(8)
P2TEXT                                 VARCHAR2(64)
P2                                     NUMBER
P2RAW                                  RAW(8)
P3TEXT                                 VARCHAR2(64)
P3                                     NUMBER
P3RAW                                  RAW(8)
WAIT_CLASS_ID                          NUMBER
WAIT_CLASS#                            NUMBER
WAIT_CLASS                             VARCHAR2(64)
WAIT_TIME                              NUMBER
SECONDS_IN_WAIT                        NUMBER
STATE                                  VARCHAR2(19)
SERVICE_NAME                           VARCHAR2(64)
SQL_TRACE                              VARCHAR2(8)
SQL_TRACE_WAITS                        VARCHAR2(5)
SQL_TRACE_BINDS                        VARCHAR2(5)

SQL>
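The SQL_ID column in the 10g v$session describe above makes it easy to pull the statement a session is currently executing. A hedged sketch (assumes SELECT access on the v$ views):

```sql
-- Current statement per active user session (10g).
SELECT s.sid, s.username, q.sql_text
FROM   v$session s, v$sql q
WHERE  s.sql_id           = q.sql_id
AND    s.sql_child_number = q.child_number
AND    s.username IS NOT NULL;
```

On pre-10g versions the same join can be made via SQL_ADDRESS and SQL_HASH_VALUE against v$sqlarea instead.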

========================================================
4. IMP and EXP, IMPDP and EXPDP, and SQL*Loader Examples
========================================================

4.1 EXPDP and IMPDP examples:
=============================

New for Oracle 10g are the impdp and expdp utilities.

EXPDP practice/practice PARFILE=par1.par
EXPDP hr/hr DUMPFILE=export_dir:hr_schema.dmp LOGFILE=export_dir:hr_schema.explog
EXPDP system/******** PARFILE=c:\rmancmd\dpe_1.expctl

Oracle 10g provides two new views, DBA_DATAPUMP_JOBS and DBA_DATAPUMP_SESSIONS,
that allow the DBA to monitor the progress of all Data Pump operations.
But of course there are other views as well:

SQL> select view_name from dba_views where view_name like '%PUMP%';

VIEW_NAME
DATAPUMP_PATHS
DATAPUMP_PATHMAP
DATAPUMP_TABLE_DATA
DATAPUMP_OBJECT_CONNECT
DATAPUMP_DDL_TRANSFORM_PARAMS
DATAPUMP_REMAP_OBJECTS
V_$DATAPUMP_JOB
V_$DATAPUMP_SESSION
GV_$DATAPUMP_JOB
GV_$DATAPUMP_SESSION
USER_DATAPUMP_JOBS
DBA_DATAPUMP_JOBS
DBA_DATAPUMP_SESSIONS
AQ$KUPC$DATAPUMP_QUETAB_S
AQ$_KUPC$DATAPUMP_QUETAB_F
AQ$KUPC$DATAPUMP_QUETAB
AQ$KUPC$DATAPUMP_QUETAB_R

SELECT owner_name, job_name, operation, job_mode, state, degree, attached_sessions
FROM   dba_datapump_jobs;

SELECT DPS.owner_name, DPS.job_name, S.osuser
FROM   dba_datapump_sessions DPS, v$session S
WHERE  S.saddr = DPS.saddr;
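Once a job is visible in DBA_DATAPUMP_JOBS, a client can re-attach to it and control it interactively. A hedged sketch; the job name SYS_EXPORT_FULL_01 is only an example of a system-generated name:

```text
$ expdp system/password ATTACH=SYS_EXPORT_FULL_01

Export> STATUS              -- show progress of the running job
Export> STOP_JOB=IMMEDIATE  -- pause the job; it can be restarted later
Export> START_JOB           -- resume a stopped job after re-attaching
Export> CONTINUE_CLIENT     -- leave interactive mode, resume logging
```

KILL_JOB and PARALLEL=n are also available at the Export> prompt.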

Oracle Data Pump is a newer, faster, and more flexible alternative to the "exp" and
"imp" utilities used in previous Oracle versions. In addition to basic import and
export functionality, Data Pump provides a PL/SQL API and support for external tables.

For the examples to work, we must first unlock the SCOTT account and create a
directory object it can access:

CONN sys/password@db10g AS SYSDBA
ALTER USER scott IDENTIFIED BY tiger ACCOUNT UNLOCK;
GRANT CREATE ANY DIRECTORY TO scott;
CREATE OR REPLACE DIRECTORY test_dir AS '/u01/app/oracle/oradata/';
GRANT READ, WRITE ON DIRECTORY test_dir TO scott;

-- Table Exports/Imports:
-------------------------
The TABLES parameter is used to specify the tables that are to be exported.
The following is an example of the table export and import syntax:

$ expdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log
$ impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log

For example output files see expdpEMP_DEPT.log and impdpEMP_DEPT.log.
The "TABLE_EXISTS_ACTION=APPEND" parameter allows data to be imported into existing tables.

-- Schema Exports/Imports:
--------------------------
The OWNER parameter of exp has been replaced by the SCHEMAS parameter, which is used
to specify the schemas to be exported. The following is an example of the schema
export and import syntax:

$ expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
$ impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=impdpSCOTT.log

For example output files see expdpSCOTT.log and impdpSCOTT.log.

-- Database Exports/Imports:
----------------------------
The FULL parameter indicates that a complete database export is required.
The following is an example of the full database export and import syntax:

$ expdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=expdpDB10G.log
$ impdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=impdpDB10G.log

For an example output file see expdpDB10G.log.
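Long option lists like the ones above can also be collected in a parameter file and passed with PARFILE= (the parfile examples in the next section use exactly this mechanism). A minimal sketch; the file name and option values are examples:

```shell
# Write a schema-export parameter file, then point expdp at it with
# PARFILE=; this sidesteps shell quoting of long option lists.
cat > scott_schema.par <<'EOF'
schemas=SCOTT
directory=TEST_DIR
dumpfile=SCOTT.dmp
logfile=expdpSCOTT.log
EOF

cat scott_schema.par
# expdp scott/tiger@db10g parfile=scott_schema.par   # needs a reachable instance
```

The commented-out expdp line is the only part that requires a running database.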

Example 1. EXPDP parfile
------------------------
JOB_NAME=NightlyDRExport
DIRECTORY=export_dir
DUMPFILE=export_dir:fulldb_%U.dmp
LOGFILE=export_dir:NightlyDRExport.explog
FULL=Y
PARALLEL=2
FILESIZE=650M
CONTENT=ALL
STATUS=30
ESTIMATE_ONLY=Y

Example 2. EXPDP parfile, only for getting an estimate of the export size
-------------------------------------------------------------------------
JOB_NAME=EstimateOnly
DIRECTORY=export_dir
LOGFILE=export_dir:EstimateOnly.explog
FULL=Y
CONTENT=DATA_ONLY
ESTIMATE=STATISTICS
ESTIMATE_ONLY=Y
STATUS=60

Example 3. EXPDP parfile, only 1 schema, writing to multiple files with the %U variable, limited to 650M
--------------------------------------------------------------------------------------------------------
JOB_NAME=SH_TABLESONLY
DIRECTORY=export_dir
DUMPFILE=export_dir:SHONLY_%U.dmp
LOGFILE=export_dir:SH_TablesOnly.explog
SCHEMAS=SH
PARALLEL=2
FILESIZE=650M
STATUS=60

Example 4. EXPDP parfile, multiple tables, writing to multiple files with the %U variable, limited
---------------------------------------------------------------------------------------------------
JOB_NAME=HR_PAYROLL_REFRESH
DIRECTORY=export_dir
DUMPFILE=export_dir:HR_PAYROLL_REFRESH_%U.dmp
LOGFILE=export_dir:HR_PAYROLL_REFRESH.explog
STATUS=20

FILESIZE=132K
CONTENT=ALL
TABLES=HR.EMPLOYEES,HR.DEPARTMENTS,HR.PAYROLL_CHECKS,HR.PAYROLL_HOURLY,HR.PAYROLL_SALARY,HR.PAYROLL_TRANSACTIONS

Example 5. EXPDP parfile, exports all objects in the HR schema, including metadata, as of just before midnight on April 10, 2005
--------------------------------------------------------------------------------------------------------------------------------
JOB_NAME=HREXPORT
DIRECTORY=export_dir
DUMPFILE=export_dir:HREXPORT_%U.dmp
LOGFILE=export_dir:2005-04-10_HRExport.explog
SCHEMAS=HR
CONTENTS=ALL
FLASHBACK_TIME=TO_TIMESTAMP"('04-10-2005 23:59', 'MM-DD-YYYY HH24:MI')"

Example 6. IMPDP parfile, imports data +only+ into selected tables in the HR schema. Multiple dump files will be used
----------------------------------------------------------------------------------------------------------------------
JOB_NAME=HR_PAYROLL_IMPORT
DIRECTORY=export_dir
DUMPFILE=export_dir:HR_PAYROLL_REFRESH_%U.dmp
LOGFILE=export_dir:HR_PAYROLL_IMPORT.implog
STATUS=20
TABLES=HR.PAYROLL_CHECKS,HR.PAYROLL_HOURLY,HR.PAYROLL_SALARY,HR.PAYROLL_TRANSACTIONS
CONTENT=DATA_ONLY
TABLE_EXISTS_ACTION=TRUNCATE

Example 7. IMPDP parfile, selected tables in the SH schema are the only tables to be refreshed. These tables will be truncated before loading
----------------------------------------------------------------------------------------------------------------------------------------------
DIRECTORY=export_dir
JOB_NAME=RefreshSHTables
DUMPFILE=export_dir:fulldb_%U.dmp
LOGFILE=export_dir:RefreshSHTables.implog
STATUS=30
CONTENT=DATA_ONLY
SCHEMAS=SH
INCLUDE=TABLE:"IN('COUNTRIES','CUSTOMERS','PRODUCTS','SALES')"
TABLE_EXISTS_ACTION=TRUNCATE

Example 8. IMPDP parfile, generates SQLFILE output showing the DDL statements. Note that this code is +not+ executed!
----------------------------------------------------------------------------------------------------------------------
DIRECTORY=export_dir
JOB_NAME=GenerateImportDDL
DUMPFILE=export_dir:hr_payroll_refresh_%U.dmp
LOGFILE=export_dir:GenerateImportDDL.implog
SQLFILE=export_dir:GenerateImportDDL.sql
INCLUDE=TABLE
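The parfiles above drive the expdp/impdp clients; the same jobs can also be started from PL/SQL through the DBMS_DATAPUMP package, which is what a stored procedure such as the HR.SP_EXPORT referenced in the scheduler example below would contain. A hedged sketch; the dump file name and EXPORT_DIR directory object are examples:

```sql
-- Minimal schema-mode export via the DBMS_DATAPUMP API.
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
  DBMS_DATAPUMP.ADD_FILE(h, 'hr_api.dmp', 'EXPORT_DIR');            -- dump file
  DBMS_DATAPUMP.ADD_FILE(h, 'hr_api.log', 'EXPORT_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''HR'')');   -- which schema
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);   -- the job keeps running server-side
END;
/
```

Such a block can be wrapped in a procedure and scheduled exactly as shown with DBMS_SCHEDULER.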

Example: schedule a procedure which uses DBMS_DATAPUMP
------------------------------------------------------
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'HR_EXPORT'
   ,job_type        => 'PLSQL_BLOCK'
   ,job_action      => 'BEGIN HR.SP_EXPORT;END;'
   ,start_date      => '04/18/2005 23:00:00.000000'
   ,repeat_interval => 'FREQ=DAILY'
   ,enabled         => TRUE
   ,comments        => 'Performs HR Schema Export nightly at 11 PM'
  );
END;
/

======================================
How to use the NETWORK_LINK parameter:
======================================

Note 1:
=======
Lora, the DBA at Acme Bank, is at the center of attention in a high-profile meeting
of the bank's top management team. The objective is to identify ways of enabling end
users to slice and dice the data in the company's main data warehouse. At the
meeting, one idea presented is to create several small data marts, each based on a
particular functional area, that can each be used by specialized teams.

To effectively implement the data mart approach, the data specialists must get data
into the data marts quickly and efficiently. The challenge the team faces is
figuring out how to quickly refresh the warehouse data to the data marts, which run
on heterogeneous platforms. And that's why Lora is at the meeting. What options does
she propose for moving the data? An experienced and knowledgeable DBA, Lora provides
the meeting attendees with three possibilities, as follows:

Using transportable tablespaces
Using Data Pump (Export and Import)
Pulling tablespaces

This article shows Lora's explanation of these options, including their
implementation details and their pros and cons.

Transportable Tablespaces:
Lora starts by describing the transportable tablespaces option. The quickest way to
transport an entire tablespace to a target system is to simply transfer the
tablespace's underlying files, using FTP (file transfer protocol) or rcp (remote copy).
However, just copying the Oracle data files is not sufficient; the target database
must recognize and import the files

and the corresponding tablespace before the tablespace data can become available to
end users. Using transportable tablespaces involves copying the tablespace files and
making the data available in the target database.

A few checks are necessary before this option can be considered. First, for a
tablespace TS1 to be transported to a target system, it must be self-contained.
That is, all the indexes, partitions, and other dependent segments of the tables in
the tablespace must be inside the tablespace. Lora explains that if a set of
tablespaces contains all the dependent segments, the set is considered to be
self-contained. For instance, if tablespaces TS1 and TS2 are to be transferred as a
set and a table in TS1 has an index in TS2, the tablespace set is self-contained.
However, if another index of a table in TS1 is in tablespace TS3, the tablespace set
(TS1, TS2) is not self-contained.

To transport the tablespaces, Lora proposes using the Data Pump Export utility in
Oracle Database 10g. Data Pump is Oracle's next-generation data transfer tool, which
replaces the earlier Oracle Export (EXP) and Import (IMP) tools. Unlike those older
tools, which use regular SQL to extract and insert data, Data Pump uses proprietary
APIs that bypass the SQL buffer, making the process extremely fast. In addition,
Data Pump can extract specific objects, such as a particular stored procedure or a
set of tables from a particular tablespace. Data Pump Export and Import are
controlled by jobs, which the DBA can pause, restart, and stop at will.

Lora has run a test before the meeting to see if Data Pump can handle Acme's
requirements. Lora's test transports the TS1 and TS2 tablespaces as follows:

1. Check that the set of TS1 and TS2 tablespaces is self-contained. Issue the
   following command:

BEGIN
  SYS.DBMS_TTS.TRANSPORT_SET_CHECK ('TS1','TS2');
END;

2. Identify any nontransportable sets. If no rows are selected, the tablespaces
   are self-contained:

SELECT * FROM SYS.TRANSPORT_SET_VIOLATIONS;

no rows selected

3. Ensure the tablespaces are read-only:

SELECT STATUS

FROM   DBA_TABLESPACES
WHERE  TABLESPACE_NAME IN ('TS1','TS2');

STATUS
---------
READ ONLY
READ ONLY

4. Transfer the data files of each tablespace to the remote system, into the
   directory /u01/oradata, using a transfer mechanism such as FTP or rcp.

5. In the target database, create a database link to the source database
   (named srcdb in the line below).

CREATE DATABASE LINK srcdb USING 'srcdb';

6. In the target database, import the tablespaces into the database, using
   Data Pump Import.

impdp lora/lora123 TRANSPORT_DATAFILES="'/u01/oradata/ts1_1.dbf','/u01/oradata/ts2_1.dbf'"
NETWORK_LINK='srcdb' TRANSPORT_TABLESPACES=\(TS1,TS2\) NOLOGFILE=Y

This step makes the TS1 and TS2 tablespaces and their data available in the target
database. Note that Lora doesn't export the metadata from the source database.
She merely specifies the value srcdb, the database link to the source database,
for the parameter NETWORK_LINK in the impdp command above. Data Pump Import fetches
the necessary metadata from the source across the database link and re-creates it
in the target.

7. Finally, make the TS1 and TS2 tablespaces in the source database read-write.

ALTER TABLESPACE TS1 READ WRITE;
ALTER TABLESPACE TS2 READ WRITE;

Note 2:
=======
One of the most significant characteristics of an import operation is its mode,
because the mode largely determines what is imported. The specified mode applies
to the source of the operation, either a dump file set or another database if the
NETWORK_LINK parameter is specified.

The NETWORK_LINK parameter initiates a network import. This means that the impdp
client initiates the import request, typically to the local database. That server
contacts the remote source database referenced by the database link

in the NETWORK_LINK parameter, retrieves the data, and writes it directly back to
the target database. There are no dump files involved.

In the following example, source_database_link would be replaced with the name of
a valid database link that must already exist.

impdp hr/hr TABLES=employees DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link EXCLUDE=CONSTRAINT

This example results in an import of the employees table (excluding constraints)
from the source database. The log file is written to dpump_dir1, specified on the
DIRECTORY parameter.

4.2 Export / Import examples:
=============================

In all Oracle versions (7, 8, 8i, 9i, 10g) you can use the exp and imp utilities.

exp system/manager file=expdat.dmp compress=Y owner=(HARRY, PIET)
exp system/manager file=hr.dmp owner=HR indexes=Y
exp system/manager file=expdat.dmp TABLES=(john.SALES)

imp system/manager file=hr.dmp full=Y buffer=64000 commit=Y
imp system/manager file=expdat.dmp fromuser=ted touser=john indexes=N commit=Y buffer=64000
imp rm_live/rm file=dump.dmp tables=(employee)
imp system/manager file=expdat.dmp fromuser=ted touser=john buffer=4194304

c:\> cd [oracle_db_home]\bin
c:\> set nls_lang=american_america.WE8ISO8859P15
# export NLS_LANG=AMERICAN_AMERICA.UTF8
# export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
c:\> imp system/manager fromuser=mis_owner touser=mis_owner file=[yourexport.dmp]

From Oracle8i one can use the QUERY= export parameter to selectively unload a
subset of the data from a table. Look at this example:

exp scott/tiger tables=emp query=\"WHERE deptno=10\"

-- Export metadata only:
The Export utility is used to export the metadata describing the objects contained
in the transported tablespace. For our example scenario, the Export command could be:

EXP TRANSPORT_TABLESPACE=y TABLESPACES=ts_temp_sales FILE=jan_sales.dmp

This operation will generate an export file, jan_sales.dmp. The export file will be
small, because it contains only metadata. In this case, the export file will contain
information describing

the table temp_jan_sales, such as the column names, column datatypes, and all other
information that the target Oracle database will need in order to access the objects
in ts_temp_sales.

$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$

Extended example:
-----------------

CASE 1:
=======
We create a user Albert on a 10g DB. This user will create a couple of tables with
referential constraints (PK-FK relations). Then we will export this user, drop the
user, and do an import, and see what we have after the import.

-- User:
create user albert identified by albert
default tablespace ts_cdc
temporary tablespace temp
QUOTA 10M ON sysaux
QUOTA 20M ON users
QUOTA 50M ON TS_CDC
;

-- GRANTS:
GRANT create session TO albert;
GRANT create table TO albert;
GRANT create sequence TO albert;
GRANT create procedure TO albert;
GRANT connect TO albert;
GRANT resource TO albert;

-- connect albert/albert

-- create tables
create table LOC     -- table of locations
( LOCID  int,
  CITY   varchar2(16),
  constraint pk_loc primary key (locid)
);

create table DEPT    -- table of departments
( DEPID    int,
  DEPTNAME varchar2(16),
  LOCID    int,
  constraint pk_dept primary key (depid),
  constraint fk_dept_loc foreign key (locid) references loc(locid)
);

create table EMP     -- table of employees
( EMPID   int,
  EMPNAME varchar2(16),
  DEPID   int,
  constraint pk_emp primary key (empid),
  constraint fk_emp_dept foreign key (depid) references dept(depid)
);
-- show constraints:
SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_NAME, R_CONSTRAINT_NAME from user_constraints;

CONSTRAINT_NAME   C  TABLE_NAME  R_CONSTRAINT_NAME
----------------- -  ----------- -----------------
FK_EMP_DEPT       R  EMP         PK_DEPT
FK_DEPT_LOC       R  DEPT        PK_LOC
PK_LOC            P  LOC
PK_DEPT           P  DEPT
PK_EMP            P  EMP

-- insert some data:
INSERT INTO LOC  VALUES (1,'Amsterdam');
INSERT INTO LOC  VALUES (2,'Haarlem');
INSERT INTO LOC  VALUES (3,null);
INSERT INTO LOC  VALUES (4,'Utrecht');
INSERT INTO DEPT VALUES (1,'Sales',1);
INSERT INTO DEPT VALUES (2,'PZ',1);
INSERT INTO DEPT VALUES (3,'Management',2);
INSERT INTO DEPT VALUES (4,'RD',3);
INSERT INTO DEPT VALUES (5,'IT',4);
INSERT INTO EMP  VALUES (1,'Joop',1);
INSERT INTO EMP  VALUES (2,'Gerrit',2);
INSERT INTO EMP  VALUES (3,'Harry',2);
INSERT INTO EMP  VALUES (4,'Christa',3);
INSERT INTO EMP  VALUES (5,null,4);
INSERT INTO EMP  VALUES (6,'Nina',5);
INSERT INTO EMP  VALUES (7,'Nadia',5);

-- make an export

C:\oracle\expimp>exp '/@test10g2 as sysdba' file=albert.dat owner=albert

Export: Release 10.2.0.1.0 - Production on Sat Mar 1 08:03:59 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
server uses AL32UTF8 character set (possible charset conversion)

About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user ALBERT
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user ALBERT
About to export ALBERT's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export ALBERT's tables via Conventional Path ...
. . exporting table                 DEPT          5 rows exported
. . exporting table                  EMP          7 rows exported
. . exporting table                  LOC          4 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

C:\oracle\expimp>

-- drop user albert
SQL> drop user albert cascade;

-- create user albert
See above.

-- do the import

C:\oracle\expimp>imp '/@test10g2 as sysdba' file=albert.dat fromuser=albert touser=albert

Import: Release 10.2.0.1.0 - Production on Sat Mar 1 08:09:26 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.02.01 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
import server uses AL32UTF8 character set (possible charset conversion)

. importing ALBERT's objects into ALBERT
. . importing table               "DEPT"          5 rows imported
. . importing table                "EMP"          7 rows imported
. . importing table                "LOC"          4 rows imported
About to enable constraints...
Import terminated successfully without warnings.

C:\oracle\expimp>

-- connect albert/albert

SQL> select * from emp;

     EMPID EMPNAME               DEPID
---------- ---------------- ----------
         1 Joop                      1
         2 Gerrit                    2
         3 Harry                     2
         4 Christa                   3
         5                           4
         6 Nina                      5
         7 Nadia                     5

7 rows selected.

SQL> select * from loc;

     LOCID CITY
---------- ----------------
         1 Amsterdam
         2 Haarlem
         3
         4 Utrecht

SQL> select * from dept;

     DEPID DEPTNAME              LOCID
---------- ---------------- ----------
         1 Sales                     1
         2 PZ                        1
         3 Management                2
         4 RD                        3
         5 IT                        4

-- show constraints:
SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_NAME, R_CONSTRAINT_NAME from user_constraints;

CONSTRAINT_NAME   C  TABLE_NAME  R_CONSTRAINT_NAME
----------------- -  ----------- -----------------
FK_DEPT_LOC       R  DEPT        PK_LOC
FK_EMP_DEPT       R  EMP         PK_DEPT
PK_DEPT           P  DEPT
PK_EMP            P  EMP
PK_LOC            P  LOC

Everything is back again.

CASE 2:
=======
We are not going to drop the user, but empty the tables:

SQL> alter table dept disable constraint FK_DEPT_LOC;
SQL> alter table emp disable constraint FK_EMP_DEPT;
SQL> alter table dept disable constraint PK_DEPT;
SQL> alter table emp disable constraint pk_emp;
SQL> alter table loc disable constraint pk_loc;
SQL> truncate table emp;
SQL> truncate table loc;
SQL> truncate table dept;
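Typing disable/enable statements for every constraint by hand does not scale; they can be generated from the data dictionary instead. A hedged sketch (run as the schema owner; spool and execute the output):

```sql
-- Generate DISABLE statements for the current schema's constraints.
-- Foreign keys (type R) are ordered first so they are disabled before
-- the primary/unique keys they reference.
SELECT 'alter table ' || table_name ||
       ' disable constraint ' || constraint_name || ';' AS ddl
FROM   user_constraints
WHERE  constraint_type IN ('R','P','U')
ORDER  BY DECODE(constraint_type, 'R', 0, 1);
```

Swapping 'disable' for 'enable' and reversing the ORDER BY gives the matching enable script.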

-- do the import

C:\oracle\expimp>imp '/@test10g2 as sysdba' file=albert.dat ignore=y fromuser=albert touser=albert

Import: Release 10.2.0.1.0 - Production on Sat Mar 1 08:25:27 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.02.01 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
import server uses AL32UTF8 character set (possible charset conversion)
. importing ALBERT's objects into ALBERT
. . importing table               "DEPT"          5 rows imported
. . importing table                "EMP"          7 rows imported
. . importing table                "LOC"          4 rows imported
About to enable constraints...
IMP-00017: following statement failed with ORACLE error 2270:
 "ALTER TABLE "EMP" ENABLE CONSTRAINT "FK_EMP_DEPT""
IMP-00003: ORACLE error 2270 encountered
ORA-02270: no matching unique or primary key for this column-list
IMP-00017: following statement failed with ORACLE error 2270:
 "ALTER TABLE "DEPT" ENABLE CONSTRAINT "FK_DEPT_LOC""
IMP-00003: ORACLE error 2270 encountered
ORA-02270: no matching unique or primary key for this column-list
Import terminated successfully with warnings.

So the data gets imported, but we have a problem with the FOREIGN KEYS:

SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_NAME, R_CONSTRAINT_NAME, STATUS from user_constraints;

CONSTRAINT_NAME   C  TABLE_NAME  R_CONSTRAINT_NAME  STATUS
----------------- -  ----------- -----------------  --------
FK_DEPT_LOC       R  DEPT        PK_LOC             DISABLED
FK_EMP_DEPT       R  EMP         PK_DEPT            DISABLED
PK_LOC            P  LOC                            DISABLED
PK_EMP            P  EMP                            DISABLED
PK_DEPT           P  DEPT                           DISABLED

alter table dept enable constraint pk_dept;
alter table emp  enable constraint pk_emp;
alter table loc  enable constraint pk_loc;
alter table dept enable constraint FK_DEPT_LOC;
alter table emp  enable constraint FK_EMP_DEPT;
alter table dept enable constraint PK_DEPT;

SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_NAME, R_CONSTRAINT_NAME, STATUS from user_constraints;

CONSTRAINT_NAME   C  TABLE_NAME  R_CONSTRAINT_NAME  STATUS
----------------- -  ----------- -----------------  --------
FK_DEPT_LOC       R  DEPT        PK_LOC             ENABLED
FK_EMP_DEPT       R  EMP         PK_DEPT            ENABLED
PK_DEPT           P  DEPT                           ENABLED
PK_EMP            P  EMP                            ENABLED
PK_LOC            P  LOC                            ENABLED

SQL>

Everything is back again.

$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$

What is exported?:
------------------
Tables, indexes, data, and database links get exported.

Example:
--------

exp system/manager file=oemuser.dmp owner=oemuser

Connected to: Oracle9i Enterprise Edition Release 9.0.1.4.0 - Production
With the Partitioning option
JServer Release 9.0.1.4.0 - Production

Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user OEMUSER
. exporting object type definitions for user OEMUSER
About to export OEMUSER's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export OEMUSER's tables via Conventional Path ...
. . exporting table            CUSTOMERS          2 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting snapshots
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

D:\temp>

Can one import tables to a different tablespace?
------------------------------------------------
Import the dump file using the INDEXFILE= option.
Edit the indexfile: remove remarks and specify the correct tablespaces.
Run this indexfile against your database; this will create the required tables
in the appropriate tablespaces.
Import the table(s) with the IGNORE=Y option.
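A hedged sketch of the first step (file names are examples); INDEXFILE writes the CREATE statements to a script without importing any rows:

```text
imp system/manager file=expdat.dmp indexfile=create_objects.sql fromuser=ted touser=john
-- then edit create_objects.sql (uncomment the CREATE TABLE lines, adjust the
-- TABLESPACE clauses) and run it before re-importing with ignore=y
```

The generated script contains the index DDL as well, so unwanted statements should be removed before running it.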
Change the default tablespace for the user:
Revoke the "UNLIMITED TABLESPACE" privilege from the user.
Revoke the user's quota on the tablespace from which the object was exported.
This forces the import utility to create tables in the user's default tablespace.
Make the tablespace to which you want to import the default tablespace for the user.
Import the table.

Can one export to multiple files? / Can one beat the Unix 2 Gig limit?

----------------------------------------------------------------------
From Oracle8i, the export utility supports multiple output files:

exp SCOTT/TIGER FILE=D:\F1.dmp,E:\F2.dmp FILESIZE=10m LOG=scott.log

Use the following technique if you use an Oracle version prior to 8i:
create a compressed export on the fly.

# create a named pipe
mknod exp.pipe p
# read the pipe - output to zip file in the background
gzip < exp.pipe > scott.exp.gz &
# feed the pipe
exp userid=scott/tiger file=exp.pipe ...

Some famous Errors:
-------------------

Error 1:
--------
EXP-00008: ORACLE error 6550 encountered
ORA-06550: line 1, column 31:
PLS-00302: component 'DBMS_EXPORT_EXTENSION' must be declared

1. The errors indicate that
     $ORACLE_HOME/rdbms/admin/CATALOG.SQL and
     $ORACLE_HOME/rdbms/admin/CATPROC.SQL
   should be run again, as has been previously suggested. Were these scripts run
   connected as SYS? Try:

   SELECT OBJECT_NAME, OBJECT_TYPE
   FROM   DBA_OBJECTS
   WHERE  STATUS = 'INVALID' AND OWNER = 'SYS';

   Do you have invalid objects? Is DBMS_EXPORT_EXTENSION invalid? If so, try
   compiling it manually:

   ALTER PACKAGE DBMS_EXPORT_EXTENSION COMPILE BODY;

   If you receive errors during manual compilation, please show errors for
   further information.

2. Or possibly different imp/exp versions are run against another version of the
   database. The problem can be resolved by copying the higher version CATEXP.SQL
   and executing it in the lower version RDBMS.

3. Other fix: if there are problems in exp/imp from single-byte to multibyte
   databases:
   - Analyze which tables/rows could be affected by national characters before
     running the export.
   - Increase the size of affected rows.
   - Export the table data once again.

Error 2:
--------

EXP-00091: Exporting questionable statistics.

This warning is generated because the statistics are questionable due to the
client character set differing from the server character set. There is an
article which discusses the causes of questionable statistics, available via
the MetaLink Advanced Search option by Doc ID:

  Doc ID: 159787.1  "9i: Import STATISTICS=SAFE"

If you do not want this conversion to occur, you need to ensure the client
NLS environment performing the export is set to match the server.

Fix
~~~~
a) If the statistics of a table are not required to be included in the
   export, take the export with the parameter STATISTICS=NONE. Example:

   $ exp scott/tiger file=emp1.dmp tables=emp STATISTICS=NONE

b) In case the statistics need to be included, use STATISTICS=ESTIMATE
   or COMPUTE (the default is ESTIMATE).

Error 3:
--------
EXP-00056: ORACLE error 1403 encountered
ORA-01403: no data found
EXP-00056: ORACLE error 1403 encountered
ORA-01403: no data found
EXP-00000: Export terminated unsuccessfully

You can't export any DB with an exp utility of a newer version.
The exp version must be equal to or older than the DB version.

Doc ID: Note 281780.1     Content Type: TEXT/PLAIN
Subject: Oracle 9.2.0.4.0: Schema Export Fails with ORA-1403 (No Data Found)
         on Exporting Cluster Definitions
Creation Date: 29-AUG-2004     Type: PROBLEM
Last Revision Date: 29-AUG-2004     Status: PUBLISHED

The information in this article applies to:
- Oracle Server - Enterprise Edition - Version: 9.2.0.4 to 9.2.0.4
- Oracle Server - Personal Edition - Version: 9.2.0.4 to 9.2.0.4
- Oracle Server - Standard Edition - Version: 9.2.0.4 to 9.2.0.4
This problem can occur on any platform.

ERRORS
------
EXP-56   ORACLE error encountered
ORA-1403 no data found
EXP-0    Export terminated unsuccessfully

SYMPTOMS
--------
A schema level export with the 9.2.0.4 export utility from a 9.2.0.4 or
higher release database in which XDB has been installed, fails when
exporting the cluster definitions with:

...
. exporting cluster definitions
EXP-00056: ORACLE error 1403 encountered
ORA-01403: no data found
EXP-00000: Export terminated unsuccessfully

You can confirm that XDB has been installed in the database:

SQL> SELECT substr(comp_id,1,15) comp_id, status,
            substr(version,1,10) version,
            substr(comp_name,1,30) comp_name
     FROM dba_registry ORDER BY 1;

COMP_ID         STATUS      VERSION    COMP_NAME
--------------- ----------- ---------- ------------------------------
...
XDB             INVALID     9.2.0.4.0  Oracle XML Database
XML             VALID       9.2.0.6.0  Oracle XDK for Java
XOQ             LOADED      9.2.0.4.0  Oracle OLAP API

You create a trace file of the ORA-1403 error:

SQL> SHOW PARAMETER user_dump
SQL> ALTER SYSTEM SET EVENTS '1403 trace name errorstack level 3';
System altered.

-- Re-run the export

SQL> ALTER SYSTEM SET EVENTS '1403 trace name errorstack off';
System altered.

The trace file that was written to your USER_DUMP_DEST directory shows:

ksedmp: internal or fatal error
ORA-01403: no data found
Current SQL statement for this session:
SELECT xdb_uid FROM SYS.EXU9XDBUID

You can confirm that you have no invalid XDB objects in the database:

SQL> SET lines 200
SQL> SELECT status, object_id, object_type,
            owner||'.'||object_name "OWNER.OBJECT"
     FROM dba_objects
     WHERE owner='XDB' AND status != 'VALID' ORDER BY 4,2;

no rows selected

Note: If you do have invalid XDB objects, and the same ORA-1403 error occurs
when performing a full database export, see the solution mentioned in:
[NOTE:255724.1] "Oracle 9i: Full Export Fails with ORA-1403 (No Data Found)
on Exporting Cluster Definitions"

CHANGES
-------

You recently restored the database from a backup or you recreated the controlfile, or you performed Operating System actions on your database tempfiles. CAUSE ----The Temporary tablespace does not have any tempfiles. Note that the errors are different when exporting with a 9.2.0.3 or earlier export utility: . exporting cluster definitions EXP-00056: ORACLE error 1157 encountered ORA-01157: cannot identify/lock data file 201 - see DBWR trace file ORA-01110: data file 201: 'M:\ORACLE\ORADATA\M9201WA\TEMP01.DBF' ORA-06512: at "SYS.DBMS_LOB", line 424 ORA-06512: at "SYS.DBMS_METADATA", line 1140 ORA-06512: at line 1 EXP-00000: Export terminated unsuccessfully The errors are also different when exporting with a 9.2.0.5 or later export utility: . exporting cluster definitions EXP-00056: ORACLE error 1157 encountered ORA-01157: cannot identify/lock data file 201 - see DBWR trace file ORA-01110: data file 201: 'M:\ORACLE\ORADATA\M9205WA\TEMP01.DBF' EXP-00000: Export terminated unsuccessfully FIX --1. If the controlfile does not have any reference to the tempfile(s), add the tempfile(s): SQL> SET lines 200 SQL> SELECT status, enabled, name FROM v$tempfile; no rows selected SQL> ALTER TABLESPACE temp ADD TEMPFILE 'M:\ORACLE\ORADATA\M9204WA\TEMP01.DBF' REUSE; or: If the controlfile has a reference to the tempfile(s), but the files are missing on disk, re-create the temporary tablespace, e.g.: SQL> SET lines 200 SQL> CREATE TEMPORARY TABLESPACE temp2 TEMPFILE 'M:\ORACLE\ORADATA\M9204WA\TEMP201.DBF' SIZE 100m AUTOEXTEND ON NEXT 100M MAXSIZE 2000M; SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2; SQL> DROP TABLESPACE temp; SQL> CREATE TEMPORARY TABLESPACE temp TEMPFILE 'M:\ORACLE\ORADATA\M9204WA\TEMP01.DBF' SIZE 100m AUTOEXTEND ON NEXT 100M MAXSIZE 2000M;

SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> DROP TABLESPACE temp2 INCLUDING CONTENTS AND DATAFILES;

2. Now re-run the export.

Other errors:
-------------

Doc ID: Note 175624.1     Content Type: TEXT/X-HTML
Subject: Oracle Server - Export and Import FAQ
Creation Date: 08-FEB-2002     Type: FAQ
Last Revision Date: 16-FEB-2005     Status: PUBLISHED

PURPOSE
=======
This Frequently Asked Questions (FAQ) provides common Export and Import
issues in the following sections:
- GENERIC        - LARGE FILES   - INTERMEDIA        - TOP EXPORT DEFECTS
- COMPATIBILITY  - TABLESPACE    - ADVANCED QUEUING  - TOP IMPORT DEFECTS
- PARAMETERS     - ORA-942       - REPLICATION
- PERFORMANCE    - NLS           - FREQUENT ERRORS

GENERIC
=======
Question: What is actually happening when I export and import data?
  See Note 61949.1 "Overview of Export and Import in Oracle7"

Question: What is important when doing a full database export or import?
  See Note 10767.1 "How to perform full system Export/Import"

Question: Can data corruption occur using export & import (version 8.1.7.3 to 9.2.0)?
  See Note 199416.1 "ALERT: EXP Can Produce Dump File with Corrupted Data"

Question: How to Connect AS SYSDBA when Using Export or Import?
  See Note 277237.1 "How to Connect AS SYSDBA when Using Export or Import"

COMPATIBILITY
=============
Question: Which version should I use when moving data between different database releases?
  See Note 132904.1 "Compatibility Matrix for Export & Import Between Different Oracle Versions"
  See Note 291024.1 "Compatibility and New Features when Transporting Tablespaces with Export and Import"
  See Note 76542.1  "NT: Exporting from Oracle8, Importing Into Oracle7"

Question: How to resolve the IMP-69 error when importing into a database?
  See Note 163334.1 "Import Gets IMP-00069 when Importing 8.1.7 Export"
  See Note 1019280.102 "IMP-69 on Import"

PARAMETERS
==========
Question: What is the difference between a Direct Path and a Conventional Path Export?
  See Note 155477.1 "Parameter DIRECT: Conventional Path Export versus Direct Path Export"

Question: What is the meaning of the Export parameter CONSISTENT=Y and when should I use it?
  See Note 113450.1 "When to Use CONSISTENT=Y During an Export"

Question: How to use the Oracle8i/9i Export parameter QUERY=... and what does it do?
  See Note 91864.1 "Query= Syntax in Export in 8i"
  See Note 277010.1 "How to Specify a Query in Oracle10g Export DataPump and Import DataPump"

Question: How to create multiple export dumpfiles instead of one large file?
  See Note 290810.1 "Parameter FILESIZE - Make Export Write to Multiple Export Files"

PERFORMANCE
===========
Question: Import takes so long to complete. How can I improve the performance of Import?
  See Note 93763.1 "Tuning Considerations when Import is slow"

Question: Why has export performance decreased after creating tables with LOB columns?
  See Note 281461.1 "Export and Import of Table with LOB Columns (like CLOB and BLOB) has Slow Performance"

LARGE FILES
===========
Question: Which commands to use for solving Export dump file problems on UNIX platforms?
  See Note 30528.1 "QREF: Export/Import/SQL*Load Large Files in Unix - Quick Reference"

Question: How to solve the EXP-15 and EXP-2 errors when Export dump file is larger than 2Gb?
  See Note 62427.1 "2Gb or Not 2Gb - File limits in Oracle"
  See Note 1057099.6 "Unable to export when export file grows larger than 2GB"
  See Note 290810.1 "Parameter FILESIZE - Make Export Write to Multiple Export Files"

Question: How to export to a tape device by using a named pipe?
  See Note 30428.1 "Exporting to Tape on Unix System"

TABLESPACE
==========
Question: How to transport tablespace between different versions?
  See Note 291024.1 "Compatibility and New Features when Transporting Tablespaces with Export and Import"

Question: How to move tables to a different tablespace and/or different user?
  See Note 1012307.6 "Moving Tables Between Tablespaces Using EXPORT/IMPORT"
  See Note 1068183.6 "How to change the default tablespace when importing using the INDEXFILE option"

Question: How can I export all tables of a specific tablespace?
  See Note 1039292.6 "How to Export Tables for a specific Tablespace"

ORA-942
=======
Question: How to resolve an ORA-942 during import of the ORDSYS schema?
  See Note 109576.1 "Full Import shows Errors when adding Referential Constraint on Cartrige Tables"

Question: How to resolve an ORA-942 during import of a snapshot (log) into a different schema?
  See Note 1017292.102 "IMP-00017 IMP-00003 ORA-00942 USING FROMUSER/TOUSER ON SNAPSHOT [LOG] IMPORT"

Question: How to resolve an ORA-942 during import of a trigger on a renamed table?
  See Note 1020026.102 "ORA-01702, ORA-00942, ORA-25001, When Importing Triggers"

Question: How to resolve an ORA-942 during import of one specific table?
  See Note 1013822.102 "ORA-00942: ON TABLE LEVEL IMPORT"

NLS
===
Question: Which effect has the client's NLS_LANG setting on an export and import?
  See Note 227332.1 "NLS considerations in Import/Export - Frequently Asked Questions"
  See Note 15656.1 "Export/Import and NLS Considerations"

Question: How to prevent the loss of diacritical marks during an export/import?
  See Note 96842.1 "Loss Of Diacritics When Performing EXPORT/IMPORT Due To Incorrect Charactersets"

INTERMEDIA OBJECTS
==================
Question: How to solve an EXP-78 when exporting metadata for an interMedia Text index?
  See Note 130080.1 "Problems with EXPORT after upgrading from 8.1.5 to 8.1.6"

Question: I dropped the ORDSYS schema, but now I get ORA-6550 and PLS-201 when exporting?
  See Note 120540.1 "EXP-8 PLS-201 After Drop User ORDSYS"

ADVANCED QUEUING OBJECTS
========================
Question: Why does export show ORA-1403 and ORA-6512 on an AQ object, after an upgrade?
  See Note 159952.1 "EXP-8 and ORA-1403 When Performing A Full Export"

Question: How to resolve export errors on DBMS_AQADM_SYS and DBMS_AQ_SYS_EXP_INTERNAL?
  See Note 114739.1 "ORA-4068 while performing full database export"

REPLICATION OBJECTS
===================
Question: How to resolve import errors on DBMS_IJOB.SUBMIT for Replication jobs?
  See Note 137382.1 "IMP-3, PLS-306 Unable to Import Oracle8i JobQueues into Oracle8"

Question: How to reorganize Replication base tables with Export and Import?
  See Note 1037317.6 "Move Replication System Tables using Export/Import for Oracle 8.X"

FREQUENTLY REPORTED EXPORT/IMPORT ERRORS
========================================
EXP-00002: Error in writing to export file
  Note 1057099.6 "Unable to export when export file grows larger than 2GB"

EXP-00002: error in writing to export file
  The export file could not be written to disk anymore, probably because the
  disk is full or the device has an error. Most of the time this is followed
  by a device (filesystem) error message indicating the problem. Possible
  causes are file systems that do not support a certain limit (e.g. dump
  file size > 2Gb) or a disk/filesystem that ran out of space.
EXP-00003: No storage definition found for segment(%s,%s) (EXP-3 EXP-0)
  Note 274076.1 "EXP-00003 When Exporting From Oracle9i 9.2.0.5.0 with a Pre-9.2.0.5.0 Export Utility"
  Note 124392.1 "EXP-3 while exporting Rollback Segment definitions during FULL Database Export"

EXP-00067: "Direct path cannot export %s which contains object or lob data."
  Note 1048461.6 "EXP-00067 PERFORMING DIRECT PATH EXPORT"

EXP-00079: Data in table %s is protected (EXP-79)
  Note 277606.1 "How to Prevent EXP-00079 or EXP-00080 Warning (Data in Table xxx is Protected) During Export"

EXP-00091: Exporting questionable statistics
  Note 159787.1 "9i: Import STATISTICS=SAFE"

IMP-00016: Required character set conversion (type %lu to %lu) not supported
  Note 168066.1 "IMP-16 When Importing Dumpfile into a Database Using Multibyte Characterset"

IMP-00020: Long column too large for column buffer size
  Note 148740.1 "ALERT: Export of table with dropped functional index may cause IMP-20 on import"

ORA-00904: Invalid column name (EXP-8 ORA-904 EXP-0)
  Note 106155.1 "EXP-00008 ORA-1003 ORA-904 During Export"
  Note 172220.1 "Export of Database fails with EXP-00904 and ORA-01003"
  Note 158048.1 "Oracle8i Export Fails on Synonym Export with EXP-8 and ORA-904"
  Note 130916.1 "ORA-904 using EXP73 against Oracle8/8i Database"
  Note 1017276.102 "Oracle8i Export Fails on Synonym Export with EXP-8 and ORA-904"

ORA-01406: Fetched column value was truncated (EXP-8 ORA-1406 EXP-0)
  Note 163516.1 "EXP-0 and ORA-1406 during Export of Object Types"

ORA-01422: Exact fetch returns more than requested number of rows
  Note 221178.1 "PLS-201 and ORA-06512 at 'XDB.DBMS_XDBUTIL_INT' while Exporting Database"
  Note 256548.1 "Export of Database with XDB Throws ORA-1422 Error"

ORA-01555: Snapshot too old
  Note 113450.1 "When to Use CONSISTENT=Y During an Export"

ORA-04030: Out of process memory when trying to allocate %s bytes (%s,%s) (IMP-3 ORA-4030 ORA-3113)
  Note 165016.1 "Corrupt Packages When Export/Import Wrapper PL/SQL Code"

ORA-06512: at "SYS.DBMS_STATS", line ... (IMP-17 IMP-3 ORA-20001 ORA-6512)
  Note 123355.1 "IMP-17 and IMP-3 errors referring dbms_stats package during import"

ORA-29344: Owner validation failed - failed to match owner 'SYS'
  Note 294992.1 "Import DataPump: Transport Tablespace Fails with ORA-39123 and 29344 (Failed to match owner SYS)"

ORA-29516: Aurora assertion failure: Assertion failure at %s (EXP-8 ORA-29516 EXP-0)
  Note 114356.1 "Export Fails With ORA-29516 Aurora Assertion Failure EXP-8"

PLS-00103: Encountered the symbol "," when expecting one of the following ... (IMP-17 IMP-3 ORA-6550 PLS-103)
  Note 123355.1 "IMP-17 and IMP-3 errors referring dbms_stats package during import"
  Note 278937.1 "Import DataPump: ORA-39083 and PLS-103 when Importing Statistics Created with Non "." NLS Decimal Character"

EXPORT TOP ISSUES CAUSED BY DEFECTS
===================================
Release   : 8.1.7.2 and below
Problem   : Export may fail with ORA-1406 when exporting object type definitions
Solution  : apply patch-set 8.1.7.3
Workaround: no, see Note 163516.1 "EXP-0 and ORA-1406 during Export of Object Types"

Bug 1098503
Release   : Oracle8i (8.1.x) and Oracle9i (9.x)
Problem   : EXP-79 when Exporting Protected Tables
Solution  : this is not a defect
Workaround: N/A, see Note 277606.1 "How to Prevent EXP-00079 or EXP-00080 Warning
            (Data in Table xxx is Protected) During Export"

Bug 2410612
Release   : 8.1.7.3 and higher and 9.0.1.2 and higher
Problem   : Conventional export may produce an export file with corrupt data
Solution  : 8.1.7.5 and 9.2.0.x or check for Patch 2410612 (for 8.1.7.x),
            2449113 (for 9.0.1.x)
Workaround: yes, see Note 199416.1 "ALERT: Client Program May Give Incorrect
            Query Results (EXP Can Produce Dump File with Corrupted Data)"

Release   : Oracle8i (8.1.x)
Problem   : Full database export fails with EXP-3: no storage definition found
            for segment
Solution  : Oracle9i (9.x)
Workaround: yes, see Note 124392.1 "EXP-3 while exporting Rollback Segment
            definitions during FULL Database Export"

Bug 2900891
Release   : 9.0.1.4 and below and 9.2.0.3 and below
Problem   : Export with 8.1.7.3 and 8.1.7.4 from Oracle9i fails with invalid
            identifier SPOLICY (EXP-8 ORA-904 EXP-0)
Solution  : 9.2.0.4 or 9.2.0.5
Workaround: yes, see Bug 2900891 how to recreate view sys.exu81rls

Bug 2685696
Release   : 9.2.0.3 and below
Problem   : Export fails when exporting triggers in call to XDB.DBMS_XDBUTIL_INT
            (EXP-56 ORA-1422 ORA-6512)
Solution  : 9.2.0.4 or check for Patch 2410612 (for 9.2.0.2 and 9.2.0.3)
Workaround: yes, see Note 221178.1 "ORA-01422 ORA-06512: at "XDB.DBMS_XDBUTIL_INT"
            while exporting full database"

Bug 2919120
Release   : 9.2.0.4 and below
Problem   : Export fails when exporting triggers in call to XDB.DBMS_XDBUTIL_INT
            (EXP-56 ORA-1422 ORA-6512)
Solution  : 9.2.0.5 or check for Patch 2919120 (for 9.2.0.4)
Workaround: yes, see Note 256548.1 "Export of Database with XDB Throws ORA-1422 Error"

IMPORT TOP ISSUES CAUSED BY DEFECTS
===================================
Bug 1335408
Release   : 8.1.7.2 and below
Problem   : Bad export file using a locale with a ',' decimal separator
            (IMP-17 IMP-3 ORA-6550 PLS-103)
Solution  : apply patch-set 8.1.7.3 or 8.1.7.4
Workaround: yes, see Note 123355.1 "IMP-17 and IMP-3 errors referring DBMS_STATS
            package during import"

Bug 1879479
Release   : 8.1.7.2 and below and 9.0.1.2 and below
Problem   : Export of a wrapped package can result in a corrupt package being
            imported (IMP-3 ORA-4030 ORA-3113 ORA-7445 ORA-600[16201]).
Solution  : in Oracle8i with 8.1.7.3 and higher; in Oracle9iR1 with 9.0.1.3 and higher
Workaround: no, see Note 165016.1 "Corrupt Packages When Export/Import Wrapper PL/SQL Code"

Bug 2067904
Release   : Oracle8i (8.1.7.x) and 9.0.1.2 and below
Problem   : Trigger-name causes call to DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY to
            fail during Import (IMP-17 IMP-3 ORA-931 ORA-23308 ORA-6512).
Solution  : in Oracle9iR1 with patchset 9.0.1.3
Workaround: yes, see Note 239821.1 "ORA-931 or ORA-23308 in SET_TRIGGER_FIRING_PROPERTY
            on Import of Trigger in 8.1.7.x and 9.0.1.x"

Bug 2854856
Release   : Oracle8i (8.1.7.x) and 9.0.1.2 and below
Problem   : Schema-name causes call to DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY to
            fail during Import (IMP-17 IMP-3 ORA-911 ORA-6512).
Solution  : in Oracle9iR2 with patchset 9.2.0.4
Workaround: yes, see Note 239890.1 "ORA-911 in SET_TRIGGER_FIRING_PROPERTY on
            Import of Trigger in 8.1.7.x and Oracle9i"

4.3 SQL*Loader examples:
========================

SQL*Loader is used for loading data from text files into Oracle tables.
The text file can have fixed column positions, or columns separated by a
special character, for example a ",".

To call SQL*Loader:

  sqlldr system/manager control=smssoft.ctl
  sqlldr parfile=bonus.par

Example 1:
----------

BONUS.PAR:

userid=scott
control=bonus.ctl
bad=bonus.bad
log=bonus.log
discard=bonus.dis
rows=2
errors=2
skip=0

BONUS.CTL:

LOAD DATA
INFILE bonus.dat
APPEND
INTO TABLE BONUS
(name   position(01:08) char,
 city   position(09:19) char,
 salary position(20:22) integer external)

Now you can use the command:

$ sqlldr parfile=bonus.par

Example 2:
----------

LOAD1.CTL:

LOAD DATA
INFILE 'PLAYER.TXT'
INTO TABLE BASEBALL_PLAYER
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(player_id,last_name,first_name,middle_initial,start_date)

SQLLDR system/manager CONTROL=LOAD1.CTL LOG=LOAD1.LOG BAD=LOAD1.BAD DISCARD=LOAD1.DSC

Example 3: another controlfile:
-------------------------------

SMSSOFT.CTL:

LOAD DATA

INFILE 'SMSSOFT.TXT'
TRUNCATE
INTO TABLE SMSSOFTWARE
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(DWMACHINEID, SERIALNUMBER, NAME, SHORTNAME, SOFTWARE, CMDB_ID, LOGONNAME)

Example 4: another controlfile:
-------------------------------

LOAD DATA
INFILE *
BADFILE 'd:\stage\loader\load.bad'
DISCARDFILE 'd:\stage\loader\load.dsc'
APPEND
INTO TABLE TEST
FIELDS TERMINATED BY "<tab>"
TRAILING NULLCOLS
(
c1,
c2 char,
c3 date(8) "DD-MM-YY"
)
BEGINDATA
1<tab>X<tab>25-12-00
2<tab>Y<tab>31-12-00

Note: The <tab> placeholder is only for illustration purposes; in the actual
implementation, one would use a real tab character, which is not visible.

- Conventional path load: When the DIRECT=Y parameter is not used, the
  conventional path is used. This means that essentially INSERT statements
  are used, triggers and referential integrity are in normal use, and the
  buffer cache is used.

- Direct path load: The buffer cache is not used. Existing used blocks are
  not reused; new blocks are written as needed. Referential integrity and
  triggers are disabled during the load.

Example 5:
----------

The following shows the control file (sh_sales.ctl) loading the sales table:

LOAD DATA
INFILE sh_sales.dat
APPEND
INTO TABLE sales
FIELDS TERMINATED BY "|"
(PROD_ID, CUST_ID, TIME_ID, CHANNEL_ID, PROMO_ID, QUANTITY_SOLD, AMOUNT_SOLD)

It can be loaded with the following command:

$ sqlldr sh/sh control=sh_sales.ctl direct=true
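Example 1 above assumes the BONUS table already exists. A minimal DDL sketch
that matches the positional spec in BONUS.CTL; the column types and sizes are
inferred from the position ranges, so treat them as an assumption:

```sql
-- Hypothetical target table for bonus.ctl; sizes follow the position ranges:
-- name (01:08) = 8 chars, city (09:19) = 11 chars, salary (20:22) = 3 digits
CREATE TABLE BONUS
( name    CHAR(8),
  city    CHAR(11),
  salary  NUMBER(3)
);
```

A mismatch between the control file positions and the column sizes shows up
as rejected rows in the .bad file, so check bonus.bad and bonus.log after the load.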

4.4 Creation of a new table on the basis of an existing table:
==============================================================

CREATE TABLE EMPLOYEE_2
AS SELECT * FROM EMPLOYEE;

CREATE TABLE temp_jan_sales NOLOGGING TABLESPACE ts_temp_sales
AS
SELECT * FROM sales
WHERE time_id BETWEEN '31-DEC-1999' AND '01-FEB-2000';

insert into t SELECT * FROM t2;

insert into DSA_IMPORT SELECT * FROM MDB_DW_COMPONENTEN@SALES;
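A related trick: with a predicate that is never true, CREATE TABLE ... AS
SELECT copies only the structure, not the rows. A sketch, reusing the
EMPLOYEE example above (the table name EMPLOYEE_EMPTY is arbitrary):

```sql
-- WHERE 1=2 matches no rows, so only the column definitions are copied
-- and the new table is created empty.
CREATE TABLE EMPLOYEE_EMPTY
AS SELECT * FROM EMPLOYEE WHERE 1=2;
```

Note that CTAS does not copy indexes, triggers, or constraints other than
NOT NULL; those must be created separately on the new table.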

4.5 Copy command to get data from a remote database:
====================================================

set copycommit 1
set arraysize 1000
copy FROM HR/PASSWORD@loc create EMPLOYEE using SELECT * FROM employee WHERE state='NM'

4.6 Simple differences between table versions:
==============================================

SELECT * FROM new_version
MINUS
SELECT * FROM old_version;

SELECT * FROM old_version
MINUS
SELECT * FROM new_version;
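The two MINUS queries above can also be combined into a single
symmetric-difference query that tags each row with where it came from.
A sketch; the 'src' label column is an arbitrary name:

```sql
SELECT 'NEW' AS src, d.*
FROM  (SELECT * FROM new_version MINUS SELECT * FROM old_version) d
UNION ALL
SELECT 'OLD' AS src, d.*
FROM  (SELECT * FROM old_version MINUS SELECT * FROM new_version) d;
```

An empty result means the two versions are identical.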

=======================================================
5. Add, Move and Size Datafiles, tablespaces, logfiles:
=======================================================

5.1 ADD OR DROP REDO LOGFILE GROUP:
===================================

ADD:
----
alter database add logfile group 4
('/db01/oracle/CC1/log_41.dbf',
 '/db02/oracle/CC1/log_42.dbf') size 5M;

ALTER DATABASE ADD LOGFILE ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo')
SIZE 500K;

Add logfile plus group:

ALTER DATABASE ADD LOGFILE GROUP 4 ('/dbms/tdbaeduc/educslot/recovery/redo_logs/redo04.log') SIZE 50M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/dbms/tdbaeduc/educslot/recovery/redo_logs/redo05.log') SIZE 50M;

ALTER DATABASE ADD LOGFILE ('G:\ORADATA\AIRM\REDO05.LOG') SIZE 20M;

DROP:
-----
- An instance requires at least two groups of online redo log files,
  regardless of the number of members in the groups. (A group is one
  or more members.)
- You can drop an online redo log group only if it is inactive. If you
  need to drop the current group, first force a log switch to occur.

ALTER DATABASE DROP LOGFILE GROUP 3;
ALTER DATABASE DROP LOGFILE 'G:\ORADATA\AIRM\REDO02.LOG';

5.2 ADD REDO LOGFILE MEMBER:
============================

alter database add logfile member '/db03/oracle/CC1/log_3c.dbf'
to group 4;

Note: More on ONLINE LOGFILES:
------------------------------

-- Log Files Without Redundancy

LOGFILE GROUP 1 '/u01/oradata/redo01.log' SIZE 10M,
        GROUP 2 '/u02/oradata/redo02.log' SIZE 10M,
        GROUP 3 '/u03/oradata/redo03.log' SIZE 10M,
        GROUP 4 '/u04/oradata/redo04.log' SIZE 10M

-- Log Files With Redundancy

LOGFILE GROUP 1 ('/u01/oradata/redo1a.log','/u05/oradata/redo1b.log') SIZE 10M,
        GROUP 2 ('/u02/oradata/redo2a.log','/u06/oradata/redo2b.log') SIZE 10M,
        GROUP 3 ('/u03/oradata/redo3a.log','/u07/oradata/redo3b.log') SIZE 10M,
        GROUP 4 ('/u04/oradata/redo4a.log','/u08/oradata/redo4b.log') SIZE 10M

-- Related Queries

-- View information on log files
SELECT * FROM gv$log;

-- View information on log file history
SELECT thread#, first_change#,
       TO_CHAR(first_time,'MM-DD-YY HH12:MIPM'), next_change#
FROM gv$log_history;

-- Forcing log file switches

ALTER SYSTEM SWITCH LOGFILE;

-- Clear A Log File If It Has Become Corrupt

ALTER DATABASE CLEAR LOGFILE GROUP <group_number>;

This statement overcomes two situations where dropping redo logs is not possible:
- If there are only two log groups
- The corrupt redo log file belongs to the current group

ALTER DATABASE CLEAR LOGFILE GROUP 4;

-- Clear A Log File If It Has Become Corrupt And Avoid Archiving

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP <group_number>;

-- Use this version of clearing a log file if the corrupt log file
-- has not been archived.

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;

Managing Log File Groups
------------------------

-- Adding a redo log file group
ALTER DATABASE ADD LOGFILE ('<log_member_path_and_name>',
                            '<log_member_path_and_name>') SIZE <integer> <K|M>;

ALTER DATABASE ADD LOGFILE ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo')
SIZE 500K;

-- Adding a redo log file group and specifying the group number
ALTER DATABASE ADD LOGFILE GROUP <group_number>
('<log_member_path_and_name>') SIZE <integer> <K|M>;

ALTER DATABASE ADD LOGFILE GROUP 4 ('c:\temp\newlog1.log') SIZE 100M;

-- Relocating redo log files
ALTER DATABASE RENAME FILE '<existing_path_and_file_name>'
TO '<new_path_and_file_name>';

conn / as sysdba
SELECT member FROM v_$logfile;
SHUTDOWN;
host
$ cp /u03/logs/log1a.log /u04/logs/log1a.log
$ cp /u03/logs/log1b.log /u05/logs/log1b.log
$ exit
startup mount
ALTER DATABASE RENAME FILE '/u03/logs/log1a.log' TO '/u04/oradata/log1a.log';
ALTER DATABASE RENAME FILE '/u04/logs/log1b.log' TO '/u05/oradata/log1b.log';
ALTER DATABASE OPEN
host
$ rm /u03/logs/log1a.log

$ rm /u03/logs/log1b.log
$ exit
SELECT member FROM v_$logfile;

-- Drop a redo log file group
ALTER DATABASE DROP LOGFILE GROUP <group_number>;

ALTER DATABASE DROP LOGFILE GROUP 4;

Managing Log File Members
-------------------------

-- Adding log file group members
ALTER DATABASE ADD LOGFILE MEMBER '<log_member_path_and_name>'
TO GROUP <group_number>;

ALTER DATABASE ADD LOGFILE MEMBER '/oracle/dbs/log2b.rdo' TO GROUP 2;

-- Dropping log file group members
ALTER DATABASE DROP LOGFILE MEMBER '<log_member_path_and_name>';

ALTER DATABASE DROP LOGFILE MEMBER '/oracle/dbs/log3c.rdo';

Dumping Log Files
-----------------

-- Dumping a log file to trace
ALTER SYSTEM DUMP LOGFILE '<logfile_path_and_name>'
DBA MIN <file_number> <block_number>
DBA MAX <file_number> <block_number>;

or

ALTER SYSTEM DUMP LOGFILE '<logfile_path_and_name>'
TIME MIN <value>
TIME MAX <value>;

conn uwclass/uwclass
alter session set nls_date_format='MM/DD/YYYY HH24:MI:SS';
SELECT SYSDATE FROM dual;

CREATE TABLE test AS
SELECT owner, object_name, object_type
FROM all_objects
WHERE SUBSTR(object_name,1,1) BETWEEN 'A' AND 'W';

INSERT INTO test (owner, object_name, object_type)
VALUES ('UWCLASS', 'log_dump', 'TEST');
COMMIT;

conn / as sysdba
SELECT ((SYSDATE-1/1440)-TO_DATE('01/01/2007','MM/DD/YYYY'))*86400 ssec FROM dual;

ALTER SYSTEM DUMP LOGFILE 'c:\oracle\product\oradata\orabase\redo01.log' TIME MIN 579354757;

Disable Log Archiving
---------------------

-- Stop log file archiving

The following is undocumented and unsupported and should be used only with
great care and following thorough tests. One might consider this for loading
a data warehouse. Be sure to restart logging as soon as the load is complete
or the system will be at extremely high risk.

The rest of the database remains unchanged. The buffer cache works in exactly
the same way: old buffers get overwritten, old dirty buffers get written to
disk. It's just the process of physically flushing the redo buffer that gets
disabled.

I used it in a very large test environment where I wanted to perform a
massive amount of changes (a process to convert blobs to clobs actually) and
it was going to take days to complete. By disabling logging, I completed the
task in hours, and if anything untoward were to have happened, I was quite
happy to restore the test database back from backup.
~ the above paraphrased from a private email from Richard Foote.

conn / as sysdba
SHUTDOWN;
STARTUP MOUNT EXCLUSIVE;
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;
ALTER SYSTEM SET "_disable_logging"=TRUE;
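Before dropping or clearing a redo log group, it helps to check which group
is CURRENT or ACTIVE. A quick sketch; group 3 is just an example number:

```sql
-- CURRENT/ACTIVE groups cannot be dropped; only INACTIVE ones can
SELECT group#, thread#, status, archived FROM v$log;

-- If the group you want to drop is CURRENT, switch away from it first:
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;   -- helps push an ACTIVE group towards INACTIVE
ALTER DATABASE DROP LOGFILE GROUP 3;
```

Remember that DROP LOGFILE only removes the group from the controlfile; the
OS files themselves still have to be deleted by hand.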

5.3 RESIZE DATABASE FILE:
=========================

alter database datafile '/db05/oracle/CC1/data01.dbf' resize 400M;
(increase or decrease size)

5.4 ADD FILE TO TABLESPACE:
===========================

alter tablespace DATA
add datafile '/db05/oracle/CC1/data02.dbf' size 50M
autoextend ON maxsize unlimited;

5.5 ALTER STORAGE FOR FILE:
===========================

alter database datafile '/db05/oracle/CC1/data01.dbf'
autoextend ON maxsize unlimited;

alter database datafile '/oradata/temp/temp.dbf' autoextend off;

The AUTOEXTEND option cannot be turned off for the entire tablespace with a single
command. Each datafile within the tablespace must explicitly turn off the AUTOEXTEND
option via the ALTER DATABASE command.
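Before issuing a RESIZE, it can help to check the file's current size and autoextend
settings first; a small sketch against the data dictionary (the file name is just the
example used above). Note that a decrease fails with ORA-03297 if the file still
contains data beyond the requested size.

```sql
-- Current size, autoextend flag, and maximum size of a datafile:
SELECT file_name,
       bytes/1024/1024    AS size_mb,
       autoextensible,
       maxbytes/1024/1024 AS max_mb
FROM   dba_data_files
WHERE  file_name = '/db05/oracle/CC1/data01.dbf';
```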

5.6 MOVE OF DATA FILE:
======================

connect internal
shutdown

mv /db01/oracle/CC1/data01.dbf /db02/oracle/CC1

connect / as SYSDBA
startup mount CC1
alter database rename file '/db01/oracle/CC1/data01.dbf' to
'/db02/oracle/CC1/data01.dbf';
alter database open;

alter database rename file '/dbms/tdbaplay/playdwhs/database/playdwhs/sysaux01.dbf'
to '/dbms/tdbaplay/playdwhs/database/default/sysaux01.dbf';
alter database rename file '/dbms/tdbaplay/playdwhs/database/playdwhs/system01.dbf'
to '/dbms/tdbaplay/playdwhs/database/default/system01.dbf';
alter database rename file '/dbms/tdbaplay/playdwhs/database/playdwhs/temp01.dbf'
to '/dbms/tdbaplay/playdwhs/database/default/temp01.dbf';
alter database rename file '/dbms/tdbaplay/playdwhs/database/playdwhs/undotbs01.dbf'
to '/dbms/tdbaplay/playdwhs/database/default/undotbs01.dbf';
alter database rename file '/dbms/tdbaplay/playdwhs/database/playdwhs/users01.dbf'
to '/dbms/tdbaplay/playdwhs/database/default/users01.dbf';
alter database rename file '/dbms/tdbaplay/playdwhs/database/playdwhs/redo01.log'
to '/dbms/tdbaplay/playdwhs/recovery/redo_logs/redo01.log';
alter database rename file '/dbms/tdbaplay/playdwhs/database/playdwhs/redo02.log'
to '/dbms/tdbaplay/playdwhs/recovery/redo_logs/redo02.log';
alter database rename file '/dbms/tdbaplay/playdwhs/database/playdwhs/redo03.log'
to '/dbms/tdbaplay/playdwhs/recovery/redo_logs/redo03.log';

5.7 MOVE OF REDO LOG FILE:
==========================

connect internal
shutdown

mv /db05/oracle/CC1/redo01.dbf /db02/oracle/CC1

connect / as SYSDBA
startup mount CC1
alter database rename file '/db05/oracle/CC1/redo01.dbf' to
'/db02/oracle/CC1/redo01.dbf';
alter database open;

in case of problems:
ALTER DATABASE CLEAR LOGFILE GROUP n

example:
--------
shutdown immediate

on Unix:
mv /u01/oradata/spltst1/redo01.log /u02/oradata/spltst1/
mv /u03/oradata/spltst1/redo03.log /u02/oradata/spltst1/

startup mount pfile=/apps/oracle/admin/SPLTST1/pfile/init.ora
alter database rename file '/u01/oradata/spltst1/redo01.log' to '/u02/oradata/spltst1/redo01.log';
alter database rename file '/u03/oradata/spltst1/redo03.log' to '/u02/oradata/spltst1/redo03.log';
alter database open;
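After the rename and open, the new member locations and group status can be verified
with a quick sanity-check query joining v$log and v$logfile:

```sql
-- Show each redo group, its status, and its member files:
SELECT l.group#, l.status, f.member
FROM   v$log l, v$logfile f
WHERE  f.group# = l.group#
ORDER BY l.group#;
```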

5.8 Put a datafile or tablespace ONLINE or OFFLINE: =================================================== alter tablespace data offline; alter tablespace data online; alter database datafile 8 offline; alter database datafile 8 online; 5.9 ALTER DEFAULT STORAGE: ========================== alter tablespace AP_INDEX_SMALL default storage (initial 5M next 5M pctincrease 0); 5.10 CREATE TABLESPACE STORAGE PARAMETERS: ========================================== locally managed 9i style:

-- autoallocate:
----------------
CREATE TABLESPACE DEMO DATAFILE '/u02/oracle/data/lmtbsb01.dbf' size 100M
extent management local autoallocate;

-- uniform size, 1M is default:
-------------------------------
CREATE TABLESPACE LOBS DATAFILE 'f:\oracle\oradata\pegacc\lobs01.dbf' SIZE 3000M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K;

CREATE TABLESPACE LOBS2 DATAFILE 'f:\oracle\oradata\pegacc\lobs02.dbf' SIZE 3000M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

CREATE TABLESPACE CISTS_01 DATAFILE '/u04/oradata/pilactst/cists_01.dbf' SIZE 1000M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;

CREATE TABLESPACE CISTS_01 DATAFILE '/u01/oradata/spldev1/cists_01.dbf' SIZE 400M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;

CREATE TABLESPACE PUB DATAFILE 'C:\ORACLE\ORADATA\TEST10G\PUB.DBF' SIZE 50M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;

CREATE TABLESPACE STAGING DATAFILE 'C:\ORACLE\ORADATA\TEST10G\STAGING.DBF' SIZE 50M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;

CREATE TABLESPACE RMAN DATAFILE 'C:\ORACLE\ORADATA\RMAN\RMAN.DBF' SIZE 100M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;

CREATE TABLESPACE CISTS_01 DATAFILE '/u07/oradata/spldevp/cists_01.dbf' SIZE 1200M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;

CREATE TABLESPACE USERS DATAFILE '/u06/oradata/splpack/users01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;

CREATE TABLESPACE INDX DATAFILE '/u06/oradata/splpack/indx01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE

CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/u07/oradata/spldevp/temp01.dbf' SIZE 200M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 10M;

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP;
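After creating tablespaces, a quick free-space overview per tablespace can be taken
from the data dictionary, for example:

```sql
-- Free space per tablespace, in MB:
SELECT tablespace_name,
       ROUND(SUM(bytes)/1024/1024) AS free_mb
FROM   dba_free_space
GROUP BY tablespace_name
ORDER BY tablespace_name;
```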

ALTER TABLESPACE CISTS_01 ADD DATAFILE '/u03/oradata/splplay/cists_02.dbf' SIZE 1000M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;

ALTER TABLESPACE UNDOTBS
ADD DATAFILE '/dbms/tdbaprod/prodross/database/default/undotbs03.dbf' SIZE 2000M;

alter tablespace DATA
add datafile '/db05/oracle/CC1/data02.dbf' size 50M
autoextend ON maxsize unlimited;

-- segment management manual or automatic:
-- ---------------------------------------
We can have a locally managed tablespace while segment space management, via the
free lists and the pct_free and pct_used parameters, is still done manually.
To specify manual space management, use the SEGMENT SPACE MANAGEMENT MANUAL clause:

CREATE TABLESPACE INDX2 DATAFILE '/u06/oradata/bcict2/indx09.dbf' SIZE 5000M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT MANUAL;

or if you want segment space management to be automatic:

CREATE TABLESPACE INDX2 DATAFILE '/u06/oradata/bcict2/indx09.dbf' SIZE 5000M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;

-- temporary tablespace:
------------------------
CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/u04/oradata/pilactst/temp01.dbf' SIZE 200M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 10M;

create user cisadm identified by cisadm default tablespace cists_01 temporary tablespace temp;
create user cisuser identified by cisuser default tablespace cists_01 temporary tablespace temp;
create user cisread identified by cisread default tablespace cists_01 temporary tablespace temp;

grant connect to cisadm;
grant connect to cisuser;
grant connect to cisread;
grant resource to cisadm;
grant resource to cisuser;
grant resource to cisread;
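To see which combination a tablespace actually got (dictionary vs. locally managed,
autoallocate vs. uniform, manual vs. auto segment space management), query
dba_tablespaces, for example:

```sql
-- Verify how each tablespace is managed:
SELECT tablespace_name,
       extent_management,
       allocation_type,
       segment_space_management
FROM   dba_tablespaces
ORDER BY tablespace_name;
```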

CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/u04/oradata/bcict2/tempt01.dbf' SIZE 5000M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 100M;

alter tablespace TEMP add tempfile '/u04/oradata/bcict2/temp02.dbf' SIZE 5000M;

alter tablespace UNDO add datafile '/u04/oradata/bcict2/undo07.dbf' size 500M;

ALTER DATABASE datafile '/u04/oradata/bcict2/undo07.dbf' RESIZE 3000M;

CREATE TEMPORARY TABLESPACE TEMP2 TEMPFILE '/u04/oradata/bcict2/temp01.dbf' SIZE 5000M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 100M;

ALTER TABLESPACE TEMP ADD TEMPFILE '/u04/oradata/bcict2/tempt4.dbf' SIZE 5000M;

1 /u03/oradata/bcict2/temp.dbf
2 /u03/oradata/bcict2/temp01.dbf
3 /u03/oradata/bcict2/temp02.dbf

ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' DROP INCLUDING DATAFILES;

The extent management clause is optional for temporary tablespaces because all
temporary tablespaces are created with locally managed extents of a uniform size.
The Oracle default for SIZE is 1M. But if you want to specify another value for SIZE,
you can do so as shown in the above statement. The AUTOALLOCATE clause is not allowed
for temporary tablespaces.

If you get errors:
------------------
If the controlfile does not have any reference to the tempfile(s), add the tempfile(s):

SQL> SET lines 200
SQL> SELECT status, enabled, name FROM v$tempfile;

no rows selected

SQL> ALTER TABLESPACE temp ADD TEMPFILE 'M:\ORACLE\ORADATA\M9204WA\TEMP01.DBF' REUSE;

or:

If the controlfile has a reference to the tempfile(s), but the files are missing
on disk, re-create the temporary tablespace, e.g.:

SQL> SET lines 200
SQL> CREATE TEMPORARY TABLESPACE temp2
     TEMPFILE 'M:\ORACLE\ORADATA\M9204WA\TEMP201.DBF' SIZE 100m AUTOEXTEND ON
     NEXT 100M MAXSIZE 2000M;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;
SQL> DROP TABLESPACE temp;
SQL> CREATE TEMPORARY TABLESPACE temp
     TEMPFILE 'M:\ORACLE\ORADATA\M9204WA\TEMP01.DBF' SIZE 100m AUTOEXTEND ON
     NEXT 100M MAXSIZE 2000M;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> DROP TABLESPACE temp2 INCLUDING CONTENTS AND DATAFILES;

-- undo tablespace: -- ---------------CREATE UNDO TABLESPACE undotbs_02 DATAFILE '/u01/oracle/rbdb1/undo0201.dbf' SIZE 2M REUSE AUTOEXTEND ON; ALTER SYSTEM SET UNDO_TABLESPACE = undotbs_02; -- ROLLBACK TABLESPACE: -- -------------------create tablespace RBS datafile '/disk01/oracle/oradata/DB1/rbs01.dbf' size 25M default storage ( initial 500K next 500K pctincrease 0 minextents 2 ); ################################################################################ ####### CREATE TABLESPACE "DRSYS" LOGGING DATAFILE '/u02/oradata/pegacc/drsys01.dbf' SIZE 20M REUSE AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ; CREATE TABLESPACE "INDX" LOGGING DATAFILE '/u02/oradata/pegacc/indx01.dbf' SIZE 100M REUSE AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ; CREATE TABLESPACE "TOOLS" LOGGING DATAFILE '/u02/oradata/pegacc/tools01.dbf' SIZE 100M REUSE AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ; CREATE TABLESPACE "USERS" LOGGING DATAFILE '/u02/oradata/pegacc/users01.dbf' SIZE 1000M REUSE AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ; CREATE TABLESPACE "XDB" LOGGING DATAFILE '/u02/oradata/pegacc/xdb01.dbf' SIZE 20M REUSE AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ; CREATE TABLESPACE "LOBS" LOGGING DATAFILE '/u02/oradata/pegacc/lobs01.dbf' SIZE 2000M REUSE AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED

EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

#######################################################################################

General form of an 8i type statement:

CREATE TABLESPACE DATA DATAFILE 'G:\ORADATA\RCDB\DATA01.DBF' size 100M
EXTENT MANAGEMENT DICTIONARY
default storage
( initial 512K
  next 512K
  minextents 1
  pctincrease 0
)
minimum extent 512K
logging
online
permanent;

More info:
----------
By declaring a tablespace as DICTIONARY managed, you are specifying that extent
management for segments in this tablespace will be managed using the dictionary tables
sys.fet$ and sys.uet$. Oracle updates these tables in the data dictionary whenever an
extent is allocated, or freed for reuse. This is the default in Oracle8i when no extent
management clause is used in the CREATE TABLESPACE statement.

The sys.fet$ table is clustered in the C_TS# cluster. Because it is created without a
SIZE clause, one block will be reserved in the cluster for each tablespace. However, if
a tablespace has more free extents than can be contained in a single cluster block, then
cluster block chaining will occur, which can significantly impact performance on the
data dictionary and space management transactions in particular. Unfortunately, chaining
in this cluster cannot be repaired without recreating the entire database. Preferably,
the number of free extents in a tablespace should never be greater than can be recorded
in the primary cluster block for that tablespace, which is about 500 free extents for a
database with an 8K database block size.

Used extents, on the other hand, are recorded in the data dictionary table sys.uet$,
which is clustered in the C_FILE#_BLOCK# cluster. Unlike the C_TS# cluster,
C_FILE#_BLOCK# is sized on the assumption that segments will have an average of just
4 or 5 extents each. Unless your data dictionary was specifically customized prior to
database creation to allow for more used extents per segment, creating segments with
thousands of extents (like mentioned in the previous section) will cause excessive
cluster block chaining in this cluster.

The major dilemma with an excessive number of used and/or free extents is that they can
misrepresent the operations of the dictionary cache LRU mechanism. Extents should
therefore not be allowed to grow into the thousands, not because of the impact of full
table scans, but rather the performance of the data dictionary and dictionary cache.

A Locally Managed Tablespace is a tablespace that manages its own extents by maintaining
a bitmap in each datafile to keep track of the free or used status of blocks in that
datafile. Each bit in the bitmap corresponds to a block or a group of blocks. When the
extents are allocated or freed for reuse, Oracle simply changes the bitmap values to
show the new status of the blocks. These changes do not generate rollback information
because they do not update tables in the data dictionary (except for tablespace quota
information).

This is the default in Oracle9i. If COMPATIBLE is set to 9.0.0, then the default extent
management for any new tablespace is locally managed in Oracle9i. If COMPATIBLE is less
than 9.0.0, then the default extent management for any new tablespace is dictionary
managed in Oracle9i.

While free space is represented in a bitmap within the tablespace, used extents are only
recorded in the extent map in the segment header block of each segment, and if
necessary, in additional extent map blocks within the segment. Keep in mind, though,
that this information is not cached in the dictionary cache. It must be obtained from
the database block every time that it is required, and if those blocks are not in the
buffer cache, that involves I/O, and potentially lots of it. Take for example a query
against DBA_EXTENTS. This query would be required to read every segment header and
every additional extent map block in the entire database. It is for this reason that it
is recommended that the number of extents per segment in locally managed tablespaces be
limited to the number of rows that can be contained in the extent map within the segment
header block. This would be approximately (db_block_size / 16) - 7. For a database with
a db block size of 8K, the above formula would give 505 extents.

Drop a tempfile from the TEMP tablespace: alter tablespace TEMP drop tempfile '/dbms/tdbaaccp/accproca/database/default/temp02.dbf';
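Before dropping a tempfile, it can be useful to list the existing tempfiles and the
tablespace they belong to; a small sketch:

```sql
-- Tempfiles with their tablespace and size in MB:
SELECT t.name            AS tempfile,
       ts.name           AS tablespace,
       t.bytes/1024/1024 AS size_mb
FROM   v$tempfile t, v$tablespace ts
WHERE  ts.ts# = t.ts#;
```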

5.11 DEALLOCATE AND FIND UNUSED SPACE IN A TABLE:
=================================================

alter table emp
deallocate unused;

alter table emp
deallocate unused keep 100K;

alter table emp
allocate extent ( size 100K
datafile '/db05/oradata/CC1/user05.dbf');

This datafile must exist in the same tablespace.

-- using the dbms_space.unused_space package

declare
var1 number;
var2 number;
var3 number;
var4 number;
var5 number;
var6 number;
var7 number;
begin
dbms_space.unused_space('AUTOPROV1', 'MACADDRESS_INDEX', 'INDEX',
var1, var2, var3, var4, var5, var6, var7);
dbms_output.put_line('OBJECT_NAME = YET ANOTHER BAD INDEX');
dbms_output.put_line('TOTAL_BLOCKS ='||var1);
dbms_output.put_line('TOTAL_BYTES ='||var2);
dbms_output.put_line('UNUSED_BLOCKS ='||var3);
dbms_output.put_line('UNUSED_BYTES ='||var4);
dbms_output.put_line('LAST_USED_EXTENT_FILE_ID ='||var5);
dbms_output.put_line('LAST_USED_EXTENT_BLOCK_ID ='||var6);
dbms_output.put_line('LAST_USED_BLOCK ='||var7);
end;
/

5.12 CREATE TABLE:
==================

-- STORAGE PARAMETERS EXAMPLE:
-- ---------------------------
create table emp
( id number,
name varchar(2)
)
tablespace users
pctfree 10
storage (initial 1024K next 1024K pctincrease 10 minextents 2);

ALTER a COLUMN:
===============

ALTER TABLE GEWEIGERDETRANSACTIE MODIFY (VERBRUIKTIJD DATE); -- Creation of new table on basis of existing table: -- ------------------------------------------------CREATE TABLE EMPLOYEE_2 AS SELECT * FROM EMPLOYEE insert into t SELECT * FROM t2; insert into DSA_IMPORT SELECT * FROM MDB_DW_COMPONENTEN@SALES -- Creation of a table with an autoincrement: -- -----------------------------------------CREATE SEQUENCE seq_customer INCREMENT BY 1 START WITH 1 MAXVALUE 99999 NOCYCLE; CREATE SEQUENCE seq_employee INCREMENT BY 1 START WITH 1218 MAXVALUE 99999 NOCYCLE; CREATE SEQUENCE seq_a INCREMENT BY 1 START WITH 1 MAXVALUE 99999 NOCYCLE; CREATE TABLE CUSTOMER ( CUSTOMER_ID NUMBER (10) NOT NULL, NAAM VARCHAR2 (30) NOT NULL, CONSTRAINT PK_CUSTOMER PRIMARY KEY ( CUSTOMER_ID ) USING INDEX TABLESPACE INDX PCTFREE 10 STORAGE ( INITIAL 16K NEXT 16K PCTINCREASE 0 )) TABLESPACE USERS PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE ( INITIAL 80K NEXT 80K PCTINCREASE 0 MINEXTENTS 1 MAXEXTENTS 2147483645 ) NOCACHE; CREATE OR REPLACE TRIGGER tr_CUSTOMER_ins BEFORE INSERT ON CUSTOMER FOR EACH ROW BEGIN SELECT seq_customer.NEXTVAL INTO :NEW.CUSTOMER_ID FROM dual; END;

CREATE SEQUENCE seq_brains_verbruik INCREMENT BY 1 START WITH 1750795 MAXVALUE 100000000 NOCYCLE;

CREATE OR REPLACE TRIGGER tr_PARENTEENHEID_ins
BEFORE INSERT ON PARENTEENHEID
FOR EACH ROW
BEGIN
SELECT seq_brains_verbruik.NEXTVAL INTO :NEW.VERBRUIKID FROM dual;
END;

5.13 REBUILD OF INDEX:
======================

ALTER INDEX emp_pk REBUILD -- online (8.1.6 or higher)
NOLOGGING
TABLESPACE INDEX_BIG
PCTFREE 10
STORAGE ( INITIAL 5M NEXT 5M pctincrease 0 );

ALTER INDEX emp_ename
INITRANS 5
MAXTRANS 10
STORAGE (PCTINCREASE 50);

In situations where you have B*-tree index leaf blocks that can be freed up for reuse,
you can merge those leaf blocks using the following statement:

ALTER INDEX vmoore COALESCE;

-- Basic example of creating an index:

CREATE INDEX emp_ename ON emp(ename)
TABLESPACE users
STORAGE (INITIAL 20K NEXT 20k PCTINCREASE 75)
PCTFREE 0;

If you have a LMT, you can just do:

create index cust_indx on customers(id) nologging;

This statement is without storage parameters.

-- Dropping an index:

DROP INDEX emp_ename;
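To find indexes that actually need a rebuild (for example after having been marked
unusable), the status column of dba_indexes can be checked; for partitioned indexes,
look in dba_ind_partitions as well:

```sql
-- Indexes currently marked UNUSABLE:
SELECT owner, index_name, status
FROM   dba_indexes
WHERE  status = 'UNUSABLE';
```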

5.14 MOVE TABLE TO OTHER TABLESPACE:
====================================

ALTER TABLE CHARLIE.CUSTOMERS MOVE TABLESPACE USERS2;

5.15 SYNONYM (pointer to an object):
====================================

example:
create public synonym EMPLOYEE for HARRY.EMPLOYEE;

5.16 DATABASE LINK:
===================

CREATE PUBLIC DATABASE LINK SALESLINK
CONNECT TO FRONTEND IDENTIFIED BY cygnusx1
USING 'SALES';

SELECT * FROM employee@MY_LINK;

For example, using a database link to database sales.division3.acme.com, a user or
application can reference remote data as follows:

SELECT * FROM scott.emp@sales.division3.acme.com;    # emp table in scott's schema
SELECT loc FROM scott.dept@sales.division3.acme.com;

If GLOBAL_NAMES is set to FALSE, then you can use any name for the link to
sales.division3.acme.com. For example, you can call the link foo. Then, you can access
the remote database as follows:

SELECT name FROM scott.emp@foo;    # link name different from global name

Synonyms for Schema Objects:
Oracle lets you create synonyms so that you can hide the database link name from the
user. A synonym allows access to a table on a remote database using the same syntax
that you would use to access a table on a local database. For example, assume you issue
the following query against a table in a remote database:

SELECT * FROM emp@hq.acme.com;

You can create the synonym emp for emp@hq.acme.com so that you can issue the following
query instead to access the same data:

SELECT * FROM emp;

View DATABASE LINKS:

select substr(owner,1,10), substr(db_link,1,50), substr(username,1,25),
substr(host,1,40), created
from dba_db_links;

5.17 TO CLEAR TABLESPACE TEMP: ============================== alter tablespace TEMP default storage (pctincrease 0); alter session set events 'immediate trace name DROP_SEGMENTS level TS#+1'; 5.18 RENAME OF OBJECT: ====================== RENAME sales_staff TO dept_30; RENAME emp2 TO emp; 5.19 CREATE PROFILE: ==================== CREATE PROFILE DEVELOPER LIMIT IDLE_TIME 10; CREATE PROFILE PRIOLIMIT LIMIT SESSIONS_PER_USER 10; ALTER USER GERRIT PROFILE DEVELOPER; ALTER PROFILE EXTERNLIMIT LIMIT PASSWORD_REUSE_TIME 90 PASSWORD_REUSE_MAX UNLIMITED; ALTER PROFILE EXTERNLIMIT LIMIT SESSIONS_PER_USER 20 IDLE_TIME 20; 5.20 RECOMPILE OF FUNCTION, PACKAGE, PROCEDURE: =============================================== ALTER FUNCTION schema.function COMPILE; example: ALTER FUNCTION oe.get_bal COMPILE; ALTER PACKAGE schema.package COMPILE specification/body/package example ALTER PACKAGE emp_mgmt COMPILE PACKAGE; ALTER PROCEDURE schema.procedure COMPILE; example ALTER PROCEDURE hr.remove_emp COMPILE; TO FIND OBJECTS: SELECT 'ALTER '||decode( object_type, 'PACKAGE SPECIFICATION' ,'PACKAGE' ,'PACKAGE BODY' ,'PACKAGE' ,object_type) ||' '||owner ||'.'|| object_name ||' COMPILE ' ||decode( object_type, 'PACKAGE SPECIFICATION'

,'SPECIFICATION'
,'PACKAGE BODY'
,'BODY'
, NULL)
||';'
FROM dba_objects
WHERE status = 'INVALID';

5.21 CREATE PACKAGE:
====================

A package is a set of related functions and / or routines. Packages are used to group
together PL/SQL code blocks which make up a common application or are attached to a
single business function. Packages consist of a specification and a body.
The package specification lists the public interfaces to the blocks within the package
body. The package body contains the public and private PL/SQL blocks which make up the
application; private blocks are not defined in the package specification and cannot be
called by any routine other than one defined within the package body.

The benefits of packages are that they improve the organisation of procedure and
function blocks, allow you to update the blocks that make up the package body without
affecting the specification (which is the object that users have rights to), and allow
you to grant execute rights once instead of for each and every block.

To create a package specification we use a variation on the CREATE command; all we need
put in the specification is each PL/SQL block header that will be public within the
package. An example follows:

CREATE OR REPLACE PACKAGE MYPACK1 AS
  PROCEDURE MYPROC1 (REQISBN IN NUMBER, MYVAR1 IN OUT CHAR, TCOST OUT NUMBER);
  FUNCTION MYFUNC1 RETURN NUMBER;
END MYPACK1;

To create a package body we now specify each PL/SQL block that makes up the package.
Note that we are not creating these blocks separately (no CREATE OR REPLACE is required
for the procedure and function definitions).
An example follows:

CREATE OR REPLACE PACKAGE BODY MYPACK1 AS

PROCEDURE MYPROC1 (REQISBN IN NUMBER, MYVAR1 IN OUT CHAR, TCOST OUT NUMBER) IS
  TEMP_COST NUMBER(10,2);
BEGIN
  SELECT COST INTO TEMP_COST FROM JD11.BOOK WHERE ISBN = REQISBN;
  IF TEMP_COST > 0 THEN
    UPDATE JD11.BOOK SET COST = (TEMP_COST*1.175) WHERE ISBN = REQISBN;
  ELSE
    UPDATE JD11.BOOK SET COST = 21.32 WHERE ISBN = REQISBN;
  END IF;
  TCOST := TEMP_COST;
  COMMIT;

EXCEPTION
  WHEN NO_DATA_FOUND THEN
    INSERT INTO JD11.ERRORS (CODE, MESSAGE) VALUES (99, 'ISBN NOT FOUND');
END MYPROC1;

FUNCTION MYFUNC1 RETURN NUMBER IS
  RCOST NUMBER(10,2);
BEGIN
  SELECT COST INTO RCOST FROM JD11.BOOK WHERE ISBN = 21;
  RETURN (RCOST);
END MYFUNC1;

END MYPACK1;

You can execute a public package block like this:

EXECUTE :PCOST := JD11.MYPACK1.MYFUNC1;

where JD11 is the schema name that owns the package. You can use DROP PACKAGE and
DROP PACKAGE BODY to remove the package objects from the database.

CREATE OR REPLACE PACKAGE schema.package

CREATE PACKAGE emp_mgmt AS
FUNCTION hire (last_name VARCHAR2, job_id VARCHAR2,
manager_id NUMBER, salary NUMBER,
commission_pct NUMBER, department_id NUMBER)
RETURN NUMBER;
FUNCTION create_dept(department_id NUMBER, location NUMBER)
RETURN NUMBER;
PROCEDURE remove_emp(employee_id NUMBER);
PROCEDURE remove_dept(department_id NUMBER);
PROCEDURE increase_sal(employee_id NUMBER, salary_incr NUMBER);
PROCEDURE increase_comm(employee_id NUMBER, comm_incr NUMBER);
no_comm EXCEPTION;
no_sal EXCEPTION;
END emp_mgmt;
/

Before you can call this package's procedures and functions, you must define these
procedures and functions in the package body.

5.22 View a view:
=================

set long 2000
SELECT text FROM sys.dba_views WHERE view_name = 'CONTROL_PLAZA_V';

5.23 ALTER SYSTEM:
==================

ALTER SYSTEM CHECKPOINT;
ALTER SYSTEM ENABLE/DISABLE RESTRICTED SESSION;
ALTER SYSTEM FLUSH SHARED_POOL;
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM SUSPEND/RESUME;
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;

ALTER SYSTEM SET LICENSE_MAX_USERS = 300;
ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
ALTER SYSTEM SET COMPATIBLE = '9.2.0' SCOPE=SPFILE;
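After changing a parameter with ALTER SYSTEM, its current in-memory value can be
verified through v$parameter (or simply SHOW PARAMETER in SQL*Plus), for example:

```sql
-- Check current values of some parameters after an ALTER SYSTEM SET:
SELECT name, value, isdefault
FROM   v$parameter
WHERE  name IN ('global_names', 'license_max_users', 'compatible');
```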

5.24 HOW TO ENABLE OR DISABLE TRIGGERS:
=======================================

Disable / enable a trigger:

ALTER TRIGGER Reorder DISABLE;
ALTER TRIGGER Reorder ENABLE;

Or in one go for all triggers on a table:

ALTER TABLE Inventory DISABLE ALL TRIGGERS;

5.25 DISABLING AND ENABLING AN INDEX:
=====================================

alter index HEAT_CUSTOMER_POSTAL_CODE unusable;
alter index HEAT_CUSTOMER_POSTAL_CODE rebuild;

5.26 CREATE A VIEW:
===================

CREATE VIEW v1 AS
SELECT LPAD(' ',40-length(size_tab.size_col)/2,' ') size_col
FROM size_tab;

CREATE VIEW X AS SELECT * FROM gebruiker@aptest;

5.27 MAKE A USER:
=================

CREATE USER jward
IDENTIFIED BY aZ7bC2
DEFAULT TABLESPACE data_ts
QUOTA 100M ON test_ts
QUOTA 500K ON data_ts
TEMPORARY TABLESPACE temp_ts
PROFILE clerk;

GRANT connect TO jward;

create user jaap identified by jaap
default tablespace users
temporary tablespace temp;

grant connect to jaap;
grant resource to jaap;

Dynamic queries:

------------------------------------------------
-- CREATE USER AND GRANT PERMISSION STATEMENTS
-- dynamic queries
------------------------------------------------

SELECT 'CREATE USER '||USERNAME||' identified by '||USERNAME||' default tablespace '||
DEFAULT_TABLESPACE||' temporary tablespace '||TEMPORARY_TABLESPACE||';'
FROM DBA_USERS
WHERE USERNAME NOT IN ('SYS','SYSTEM','OUTLN','CTXSYS','ORDSYS','MDSYS');

SELECT 'GRANT CREATE SESSION to '||USERNAME||';'
FROM DBA_USERS
WHERE USERNAME NOT IN ('SYS','SYSTEM','OUTLN','CTXSYS','ORDSYS','MDSYS');

SELECT 'GRANT connect to '||USERNAME||';'
FROM DBA_USERS
WHERE USERNAME NOT IN ('SYS','SYSTEM','OUTLN','CTXSYS','ORDSYS','MDSYS');

SELECT 'GRANT resource to '||USERNAME||';'
FROM DBA_USERS
WHERE USERNAME NOT IN ('SYS','SYSTEM','OUTLN','CTXSYS','ORDSYS','MDSYS');

SELECT 'GRANT unlimited tablespace to '||USERNAME||';'
FROM DBA_USERS
WHERE USERNAME NOT IN ('SYS','SYSTEM','OUTLN','CTXSYS','ORDSYS','MDSYS');

Becoming another user:
======================

- Do the query:
  select 'ALTER USER '||username||' IDENTIFIED BY VALUES '||''''||password||''''||';'
  from dba_users;
- change the password
- do what you need to do as the other account
- change the password back to the original value

-- grant <other roles or permissions> to <user>

SELECT 'ALTER TABLE RM_LIVE.'||table_name||' disable constraint '||constraint_name||';'
from dba_constraints
where owner='RM_LIVE' and CONSTRAINT_TYPE='R';

SELECT 'ALTER TABLE RM_LIVE.'||table_name||' disable constraint '||constraint_name||';'
from dba_constraints
where owner='RM_LIVE' and CONSTRAINT_TYPE='P';

5.28 CREATE A SEQUENCE:
=======================

Sequences are database objects from which multiple users can generate unique integers.
You can use sequences to automatically generate primary key values.

CREATE SEQUENCE <sequence name>
INCREMENT BY <increment number>
START WITH <start number>
MAXVALUE <maximum value>
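The generated statements above can be captured into a runnable script with SQL*Plus
spooling; a sketch (the spool file name is just an example):

```sql
-- Capture generated DDL into a script, then run it:
SET pagesize 0 feedback off heading off
SPOOL create_users.sql
SELECT 'CREATE USER '||username||' identified by '||username||';'
FROM   dba_users
WHERE  username NOT IN ('SYS','SYSTEM','OUTLN','CTXSYS','ORDSYS','MDSYS');
SPOOL OFF
-- then: @create_users.sql
```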

CYCLE;

CREATE SEQUENCE department_seq
INCREMENT BY 1
START WITH 1
MAXVALUE 99999
NOCYCLE;

5.29 STANDARD USERS IN 9i:
==========================

CTXSYS is the primary schema for interMedia.
MDSYS, ORDSYS, and ORDPLUGINS are schemas required when installing any of the cartridges.
MTSSYS is required for the Oracle Service for MTS and is specific to NT.
OUTLN is an integral part of the database, required for the plan stability feature in Oracle8i.

While the interMedia and cartridge schemas can be recreated by running their associated
scripts as needed, I am not 100% on the steps associated with the MTSSYS user.
Unfortunately, the OUTLN user is created at database creation time when sql.bsq is run.
The OUTLN user owns the package OUTLN_PKG, which is used to manage stored outlines and
their outline categories. There are other tables (base tables), indexes, grants, and
synonyms related to this package.

By default, the following are automatically created during database creation:

SCOTT      by script $ORACLE_HOME/rdbms/admin/utlsampl.sql
OUTLN      by script $ORACLE_HOME/rdbms/admin/sql.bsq

Optionally:

DBSNMP     if Enterprise Manager Intelligent Agent is installed
TRACESVR   if Enterprise Manager is installed
AURORA$ORB$UNAUTHENTICATED \
AURORA$JIS$UTILITY$         -- if Oracle Servlet Engine (OSE) is installed
OSE$HTTP$ADMIN             /
MDSYS      if Oracle Spatial option is installed
ORDSYS     if interMedia Audio option is installed
ORDPLUGINS if interMedia Audio option is installed
CTXSYS     if Oracle Text option is installed
REPADMIN   if Replication Option is installed
LBACSYS    if Oracle Label Security option is installed
ODM        if Oracle Data Mining option is installed
ODM_MTR    idem
OLAPSYS    if OLAP option is installed
WMSYS      if Oracle Workspace Manager script owmctab.plb is executed
ANONYMOUS,
XDB        if catqm.sql catalog script for SQL XML management is executed

5.30 FORCED LOGGING:
====================

alter database no force logging;

If a database is in force logging mode, all changes, except those in temporary
tablespaces, will be logged, independently of any nologging specification. It is also
possible to put individual tablespaces into force logging mode:
alter tablespace <tablespace_name> force logging. Enabling force logging might take a
while to complete, because it waits for all ongoing unlogged operations to finish.

alter database add supplemental log data;
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;

ALTER TABLESPACE TDBA_CDC NO FORCE LOGGING;
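The current force-logging state can be checked at both the database and tablespace
level, for example:

```sql
-- Is force logging enabled for the database, or for any tablespace?
SELECT force_logging FROM v$database;
SELECT tablespace_name, force_logging FROM dba_tablespaces;
```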

====================================================
  ORACLE INSTALLATIONS ON SOLARIS, LINUX, AIX, VMS:
====================================================

6: Install on Solaris
7: Install on Linux
8: Install on OpenVMS
9: Install on AIX

==================================
6.1. Install Oracle 92 on Solaris:
==================================

6.1 Tutorial 1:
===============

Short Guide to install Oracle 9.2.0 on SUN Solaris 8
--------------------------------------------------------------------------------
The Oracle 9i Distribution can be found on Oracle Technet (http://technet.oracle.com).

The following, short Installation Guide shows how to install Oracle 9.2.0 for SUN
Solaris 8. You may download our scripts to create a database; we suggest this way and
NOT using DBASSIST. Besides these scripts, you can download our SQLNET configuration
files TNSNAMES.ORA, LISTENER.ORA and SQLNET.ORA.

Check Hardware Requirements
Operating System Software Requirements
Java Runtime Environment (JRE)
Check Software Limits
Setup the Solaris Kernel
Create Unix Group dba
Create Unix User oracle
Setup ORACLE environment ($HOME/.profile) as follows

Install from CD-ROM ...
... or Unpacking downloaded installation files
Check oraInst.loc File
Install with Installer in interactive mode
Create the Database
Start Listener
Automatically Start / Stop the Database
Install Oracle Options (optional)
Download Scripts for Sun Solaris

For our installation, we used the following ORACLE_HOME and ORACLE_SID; please adjust
these parameters for your own environment.

ORACLE_HOME = /opt/oracle/product/9.2.0
ORACLE_SID  = TYP2

--------------------------------------------------------------------------------
Check Hardware Requirements

Minimal Memory: 256 MB
Minimal Swap Space: Twice the amount of the RAM

To determine the amount of RAM memory installed on your system, enter the following
command:

$ /usr/sbin/prtconf

To determine the amount of SWAP installed on your system, enter the following command
and multiply the BLOCKS column by 512:

$ swap -l

Use the latest kernel patch from Sun Microsystems (http://sunsolve.sun.com).

Operating System Software Requirements

Use the latest kernel patch from Sun Microsystems.
- Download the Patch from: http://sunsolve.sun.com
- Read the README File included in the Patch
- Usually the only thing you have to do is:

$ cd <patch cluster directory>
$ ./install_cluster
$ cat /var/sadm/install_data/<cluster name>_log
$ showrev -p

- Reboot the system

To determine your current operating system information:

  $ uname -a

To determine which operating system patches are installed:

  $ showrev -p

To determine which operating system packages are installed:

  $ pkginfo -i [package_name]

To determine if your X-windows system is working properly, run xclock on your
local system (you can also redirect the X-windows output to another system):

  $ xclock

To determine if you are using the correct system executables:

  $ /usr/bin/which make
  $ /usr/bin/which ar
  $ /usr/bin/which ld
  $ /usr/bin/which nm

Each of the four commands above should point to the /usr/ccs/bin directory. If
not, add /usr/ccs/bin to the beginning of the PATH environment variable in the
current shell.

Java Runtime Environment (JRE)

The JRE shipped with Oracle9i is the only one supported; it is used by Oracle
Java applications such as the Oracle Universal Installer. You should not modify
this JRE, unless it is done through a patch provided by Oracle Support Services.
The inventory can contain multiple versions of the JRE, each of which can be
used by one or more products or releases. The Installer creates the oraInventory
directory the first time it is run to keep an inventory of products that it
installs on your system as well as other installation information. The location
of oraInventory is defined in /var/opt/oracle/oraInst.loc.

Products in an ORACLE_HOME access the JRE through a symbolic link in
$ORACLE_HOME/JRE to the actual location of a JRE within the inventory. You
should not modify the symbolic link.

Check Software Limits

Oracle9i includes native support for files greater than 2 GB. Check your shell
to determine whether it will impose a limit.

To check current soft shell limits, enter the following command:

  $ ulimit -Sa

To check maximum hard limits, enter the following command:

  $ ulimit -Ha

The file (blocks) value should be multiplied by 512 to obtain the maximum file
size imposed by the shell. A value of unlimited is the operating system default
and is the maximum value of 1 TB.

Setup the Solaris Kernel

SEMMNS: set this to the sum of the PROCESSES parameter for each Oracle
database, adding the largest one twice, then add an additional 10 for each
database.

For example, consider a system that has three Oracle instances with the
PROCESSES parameter in their initSID.ora files set to the following values:

  ORACLE_SID=TYP1, PROCESSES=100
  ORACLE_SID=TYP2, PROCESSES=100
  ORACLE_SID=TYP3, PROCESSES=200

The value of SEMMNS is calculated as follows:

  SEMMNS = [(A=100) + (B=100)] + [(C=200) * 2] + [(# of instances=3) * 10] = 630

Setting parameters too high for the operating system can prevent the machine
from booting up. Refer to Sun Microsystems Sun SPARC Solaris system
administration documentation for parameter limits.

*
* Kernel Parameters on our SUN Enterprise with 640MB for Oracle 9
*
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=100
set semsys:seminfo_semmns=2500
set semsys:seminfo_semopm=100
set semsys:seminfo_semvmx=32767

-- remarks:
The parameter for shared memory (shminfo_shmmax) can be set to the maximum
value; it will not impact Solaris in any way.
The values for semaphores (seminfo_semmni and seminfo_semmns) depend on the
number of clients you want to collect data from. As a rule of thumb, the values
should be set to at least (2 * nr of clients + 15).
You will have to reboot the system after making changes to the /etc/system
file. Solaris doesn't automatically allocate shared memory, unless you specify
the value in /etc/system and reboot.

If I were you, I'd put lines in /etc/system that look something like the
following. Only the first value is *really* important: it specifies the maximum
amount of shared memory to allocate. I'd make this parameter about 70-75% of
your physical RAM (assuming nothing else is running on this machine besides
Oracle; if not, adjust down accordingly). This value will then dictate your
maximum SGA size as you build your database.
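The SEMMNS rule above (sum of all PROCESSES values, with the largest counted
twice, plus 10 per instance) can be checked with a small shell sketch. The
PROCESSES values below are the ones from the example; substitute your own:

```shell
# SEMMNS sketch: sum of all PROCESSES values, with the largest one
# counted twice, plus 10 for each database on the system.
procs="100 100 200"   # PROCESSES of TYP1, TYP2, TYP3 (example values)

sum=0; largest=0; instances=0
for p in $procs; do
  sum=$((sum + p))
  if [ "$p" -gt "$largest" ]; then largest=$p; fi
  instances=$((instances + 1))
done

# sum already counts the largest once, so add it once more here
semmns=$((sum + largest + instances * 10))
echo "SEMMNS should be at least: $semmns"
```

With the example settings this prints 630, matching the hand calculation above.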
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmsl=256
set semsys:seminfo_semmns=1024

set semsys:seminfo_semmni=400
-- end remarks

Create Unix Group dba

  $ groupadd -g 400 dba

(to remove the group again: $ groupdel dba)

Create Unix User oracle

  $ useradd -u 400 -c "Oracle Owner" -d /export/home/oracle \
    -g "dba" -m -s /bin/ksh oracle

Setup ORACLE environment ($HOME/.profile) as follows

# Setup ORACLE environment
ORACLE_HOME=/opt/oracle/product/9.2.0; export ORACLE_HOME
ORACLE_SID=TYP2; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
TNS_ADMIN=/export/home/oracle/config/9.2.0; export TNS_ADMIN
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1; export NLS_LANG
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data; export ORA_NLS33
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucblib:/usr/local/lib
export LD_LIBRARY_PATH
# Set up the search paths:
PATH=/bin:/usr/bin:/usr/sbin:/opt/bin:/usr/ccs/bin:/opt/local/GNU/bin
PATH=$PATH:/opt/local/bin:/opt/NSCPnav/bin:$ORACLE_HOME/bin
PATH=$PATH:/usr/local/samba/bin:/usr/ucb:.
export PATH
# CLASSPATH must include the following JRE location(s):
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
CLASSPATH=$CLASSPATH:$ORACLE_HOME/network/jlib

Install from CD-ROM ...

Usually the CD-ROM will be mounted automatically by the Solaris Volume Manager;
if not, do it as follows as user root:

  $ su root
  $ mkdir /cdrom
  $ mount -r -F hsfs /dev/.... /cdrom
  exit or CTRL-D

... or Unpacking downloaded installation files

If you downloaded database installation files from the Oracle site
(901solaris_disk1.cpio.gz, 901solaris_disk2.cpio.gz and
901solaris_disk3.cpio.gz), gunzip them somewhere and you'll get three .cpio
files. The best way to download the huge files is to use the tool GetRight
( http://www.getright.com/ )

  $ cd <somewhere>

  $ mkdir Disk1 Disk2 Disk3
  $ cd Disk1
  $ gunzip 901solaris_disk1.cpio.gz
  $ cat 901solaris_disk1.cpio | cpio -icd

This will extract all the files for Disk1; repeat the steps for Disk2 and Disk3.
Now you should have three directories (Disk1, Disk2 and Disk3) containing
installation files.

Check oraInst.loc File

If you used Oracle before on your system, then you must edit the Oracle
Inventory File, usually located in: /var/opt/oracle/oraInst.loc

  inventory_loc=/opt/oracle/product/oraInventory

Install with Installer in interactive mode

Install Oracle 9i with Oracle Installer

  $ cd /Disk1
  $ DISPLAY=<Any X-Window Host>:0.0
  $ export DISPLAY
  $ ./runInstaller

example display:  $ export DISPLAY=192.168.1.10:0.0

Answer the questions in the Installer; we use the following install directories:

  Inventory Location:            /opt/oracle/product/oraInventory
  Oracle Universal Installer in: /opt/oracle/product/oui
  Java Runtime Environment in:   /opt/oracle/product/jre/1.1.8

Edit the Database Startup Script /var/opt/oracle/oratab

  TYP2:/opt/oracle/product/9.2.0:Y

Create the Database

Edit and save the CREATE DATABASE File initTYP2.sql in $ORACLE_HOME/dbs, or
create a symbolic link from $ORACLE_HOME/dbs to your location.

  $ cd $ORACLE_HOME/dbs
  $ ln -s /export/home/oracle/config/9.2.0/initTYP2.ora initTYP2.ora
  $ ls -l
  initTYP2.ora -> /export/home/oracle/config/9.2.0/initTYP2.ora

First start the Instance, just to test your initTYP2.ora file for correct
syntax and system resources.

  $ cd /export/home/oracle/config/9.2.0/
  $ sqlplus /nolog
  SQL> connect / as sysdba
  SQL> startup nomount

  SQL> shutdown immediate

Now you can create the database

  SQL> @initTYP2.sql
  SQL> shutdown immediate
  SQL> startup

Check the Logfile: initTYP2.log

Start Listener

  $ lsnrctl start LSNRTYP2

Automatically Start / Stop the Database

To start the Database automatically at boot time, create or use our startup
scripts dbora and lsnrora (included in ora_config_sol_920.tar.gz), which must
be installed in /etc/init.d. Create symbolic links from the startup directories:

  lrwxrwxrwx 1 root root S99dbora -> ../init.d/dbora*
  lrwxrwxrwx 1 root root S99lsnrora -> ../init.d/lsnrora*

Install Oracle Options (optional)

You may want to install the following options:

  Oracle JVM
  Oracle XML
  Oracle Spatial
  Oracle Ultra Search
  Oracle OLAP
  Oracle Data Mining
  Example Schemas

Run the script install_options.sh to enable these options in the database.
Before running this script, adjust the initSID.ora parameters as follows for
the build process. Afterwards, you can reset the parameters to smaller values.

  parallel_automatic_tuning = false
  shared_pool_size = 200000000
  java_pool_size = 100000000

  $ ./install_options.sh

Download Scripts for Sun Solaris

These scripts can be used as templates. Please note that some parameters like
ORACLE_HOME, ORACLE_SID and PATH must be adjusted for your own environment.
Besides this, you should check the initSID.ora parameters for your database
(Size, Archivelog, ...)

6.2 Environment oracle user:
----------------------------

typical profile for Oracle account on most unix systems:

.profile
--------
MAIL=/usr/mail/${LOGNAME:?}
umask 022
EDITOR=vi; export EDITOR
ORACLE_BASE=/opt/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/9.2; export ORACLE_HOME
ORACLE_SID=OWS; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
NLS_LANG=AMERICAN_AMERICA.AL16UTF8; export NLS_LANG
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data; export ORA_NLS33
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucblib:/usr/local/lib
export LD_LIBRARY_PATH
PATH=.:/usr/bin:/usr/sbin:/sbin:/usr/ucb:/etc:$ORACLE_HOME/lib:/usr/oasys/bin:$ORACLE_HOME/bin:/usr/local/bin:
export PATH
PS1='$PWD >'
DISPLAY=172.17.2.128:0.0
export DISPLAY

/etc >more passwd
-----------------
root:x:0:1:Super-User:/:/sbin/sh
daemon:x:1:1::/:
bin:x:2:2::/usr/bin:
sys:x:3:3::/:
adm:x:4:4:Admin:/var/adm:
lp:x:71:8:Line Printer Admin:/usr/spool/lp:
uucp:x:5:5:uucp Admin:/usr/lib/uucp:
nuucp:x:9:9:uucp Admin:/var/spool/uucppublic:/usr/lib/uucp/uucico
smmsp:x:25:25:SendMail Message Submission Program:/:
listen:x:37:4:Network Admin:/usr/net/nls:
nobody:x:60001:60001:Nobody:/:
noaccess:x:60002:60002:No Access User:/:
nobody4:x:65534:65534:SunOS 4.x Nobody:/:
avdsel:x:1002:100:Albert van der Sel:/export/home/avdsel:/bin/ksh
oraclown:x:1001:102:Oracle owner:/export/home/oraclown:/bin/ksh
brighta:x:1005:102:Bright Alley:/export/home/brighta:/bin/ksh
customer:x:2000:102:Customer account:/export/home/customer:/usr/bin/tcsh

/etc >more group
----------------
root::0:root
other::1:
bin::2:root,bin,daemon
sys::3:root,bin,sys,adm
adm::4:root,adm,daemon
uucp::5:root,uucp
mail::6:root
tty::7:root,adm
lp::8:root,lp,adm
nuucp::9:root,nuucp
staff::10:
daemon::12:root,daemon
sysadmin::14:

smmsp::25:smmsp nobody::60001: noaccess::60002: nogroup::65534: dba::100:oraclown,brighta oper::101: oinstall::102:

=====================================
7. install Oracle 9i on Linux:
=====================================

====================
7.1.Article 1:
====================

The Oracle 9i Distribution can be found on Oracle Technet (http://technet.oracle.com)
The following short guide shows how to install and configure Oracle 9.2.0 on
Red Hat Linux 7.2 / 8.0. You may download our scripts to create a database; we
suggest this way and NOT using DBASSIST. Besides these scripts, you can
download our NET configuration files: LISTENER.ORA, TNSNAMES.ORA and SQLNET.ORA.

System Requirements
Create Unix Group dba
Create Unix User oracle
Setup Environment ($HOME/.bash_profile) as follows
Mount the Oracle 9i CD-ROM (only if you have the CD) ...
... or Unpacking downloaded installation files
Install with Installer in interactive mode
Create the Database
Create your own DB-Create Script (optional)
Start Listener
Automatically Start / Stop the Database
Setup Kernel Parameters ( if necessary )
Install Oracle Options (optional)
Download Scripts for RedHat Linux 7.2

For our installation, we used the following ORACLE_HOME and ORACLE_SID; please
adjust these parameters for your own environment.

ORACLE_HOME = /opt/oracle/product/9.2.0
ORACLE_SID  = VEN1

--------------------------------------------------------------------------------
System Requirements

Oracle 9i needs Kernel Version 2.4 and glibc 2.2, which is included in RedHat
Linux 7.2.

  Component             Check with ...    ... Output
  Linux Kernel 2.4      rpm -q kernel     kernel-2.4.7-10
  System Libraries      rpm -q glibc      glibc-2.2.4-19.3
  Proc*C/C++            rpm -q gcc        gcc-2.96-98

Create Unix Group dba

  $ groupadd -g 400 dba

Create Unix User oracle

  $ useradd -u 400 -c "Oracle Owner" -d /home/oracle \
    -g "dba" -m -s /bin/bash oracle

Setup Environment ($HOME/.bash_profile) as follows

# Setup ORACLE environment
ORACLE_HOME=/opt/oracle/product/9.2.0; export ORACLE_HOME
ORACLE_SID=VEN1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
ORACLE_OWNER=oracle; export ORACLE_OWNER
TNS_ADMIN=/home/oracle/config/9.2.0; export TNS_ADMIN
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1; export NLS_LANG
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data; export ORA_NLS33
CLASSPATH=$ORACLE_HOME/jdbc/lib/classes111.zip
LD_LIBRARY_PATH=$ORACLE_HOME/lib; export LD_LIBRARY_PATH
### see JSDK: export CLASSPATH
# Set up JAVA and JSDK environment:
export JAVA_HOME=/usr/local/jdk
export JSDK_HOME=/usr/local/jsdk
CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JSDK_HOME/lib/jsdk.jar
export CLASSPATH
# Set up the search paths:
PATH=$POSTFIX/bin:$POSTFIX/sbin:$POSTFIX/sendmail
PATH=$PATH:/usr/local/jre/bin:/usr/local/jdk/bin:/bin:/sbin:/usr/bin:/usr/sbin
PATH=$PATH:/usr/local/bin:$ORACLE_HOME/bin:/usr/local/jsdk/bin
PATH=$PATH:/usr/local/sbin:/usr/bin/X11:/usr/X11R6/bin:/root/bin
PATH=$PATH:/usr/local/samba/bin
export PATH

Mount the Oracle 9i CD-ROM (only if you have the CD) ...

Mount the CD-ROM as user root.

  $ su root
  $ mkdir /cdrom
  $ mount -t iso9660 /dev/cdrom /cdrom
  $ exit

... or Unpacking downloaded installation files

If you downloaded database installation files from the Oracle site
(Linux9i_Disk1.cpio.gz, Linux9i_Disk2.cpio.gz and Linux9i_Disk3.cpio.gz),
gunzip them somewhere and you'll get three .cpio files. The best way to
download the huge files is to use the tool GetRight ( http://www.getright.com/ )

  $ cd <somewhere>
  $ cpio -idmv < Linux9i_Disk1.cpio
  $ cpio -idmv < Linux9i_Disk2.cpio
  $ cpio -idmv < Linux9i_Disk3.cpio
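The three extract commands can also be driven by a loop; a sketch (the loop
below only prints each command so you can eyeball it first - remove the echo to
actually run the extraction):

```shell
# Sketch: generate/run the unpack step for each downloaded disk image.
# 'echo' keeps this a dry run; drop it when the .cpio files are present.
for n in 1 2 3; do
  echo cpio -idmv "< Linux9i_Disk${n}.cpio"
done
```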

Now you should have three directories (Disk1, Disk2 and Disk3) containing
installation files.

Install with Installer in interactive mode

Install Oracle 9i with Oracle Installer

  $ cd Disk1
  $ DISPLAY=<Any X-Window Host>:0.0
  $ export DISPLAY
  $ ./runInstaller

Answer the questions in the Installer; we use the following install directories:

  Inventory Location:            /opt/oracle/product/oraInventory
  Oracle Universal Installer in: /opt/oracle/product/oui
  Java Runtime Environment in:   /opt/oracle/product/jre/1.1.8

Edit the Database Startup Script /etc/oratab

  VEN1:/opt/oracle/product/9.2.0:Y

Create the Database

Edit and save the CREATE DATABASE File initVEN1.sql in $ORACLE_HOME/dbs, or
create a symbolic link from $ORACLE_HOME/dbs to your location.

  $ cd $ORACLE_HOME/dbs
  $ ln -s /home/oracle/config/9.2.0/initVEN1.ora initVEN1.ora
  $ ls -l
  initVEN1.ora -> /home/oracle/config/9.2.0/initVEN1.ora

First start the Instance, just to test your initVEN1.ora file for correct
syntax and system resources.

  $ cd /home/oracle/config/9.2.0/
  $ sqlplus /nolog
  SQL> connect / as sysdba

  SQL> startup nomount
  SQL> shutdown immediate

Now you can create the database

  SQL> @initVEN1.sql
  SQL> shutdown immediate
  SQL> startup

Check the Logfile: initVEN1.log

Create your own DB-Create Script (optional)

You can generate your own DB-Create Script using the tool: $ORACLE_HOME/bin/dbca

Start Listener

  $ lsnrctl start LSNRVEN1

Automatically Start / Stop the Database

To start the Database automatically at boot time, create or use our startup
scripts dbora and lsnrora (included in ora_config_linux_901.tar.gz), which must
be installed in /etc/rc.d/init.d. Create symbolic links from the startup
directories in /etc/rc.d (e.g. /etc/rc.d/rc2.d).

  lrwxrwxrwx 1 root root S99dbora -> ../init.d/dbora*
  lrwxrwxrwx 1 root root S99lsnrora -> ../init.d/lsnrora*

Setup Kernel Parameters ( if necessary )

Oracle9i uses UNIX resources such as shared memory, swap space, and semaphores
extensively for interprocess communication. If your kernel parameter settings
are insufficient for Oracle9i, you will experience problems during installation
and instance startup. The greater the amount of data you can store in memory,
the faster your database will operate. In addition, by maintaining data in
memory, the UNIX kernel reduces disk I/O activity.

Use the ipcs command to obtain a list of the system's current shared memory and
semaphore segments, and their identification number and owner.

You can modify the kernel parameters by using the /proc file system. To modify
kernel parameters using the /proc file system:

1. Log in as root user.
2. Change to the /proc/sys/kernel directory.
3. Review the current semaphore parameter values in the sem file using the cat
   or more utility:

   # cat sem

   The output will list, in order, the values for the SEMMSL, SEMMNS, SEMOPM,
   and SEMMNI parameters. The following example shows how the output will
   appear.

   250 32000 32 128

   In the preceding example, 250 is the value of the SEMMSL parameter, 32000 is
   the value of the SEMMNS parameter, 32 is the value of the SEMOPM parameter,
   and 128 is the value of the SEMMNI parameter.

4. Modify the parameter values using the following command:

   # echo SEMMSL_value SEMMNS_value SEMOPM_value SEMMNI_value > sem

   In the preceding command, all parameters must be entered in order.

5. Review the current shared memory parameters using the cat or more utility:

   # cat shared_memory_parameter

   In the preceding example, the shared_memory_parameter is either the SHMMAX
   or SHMMNI parameter. The parameter name must be entered in lowercase
   letters.

6. Modify the shared memory parameter using the echo utility. For example, to
   modify the SHMMAX parameter, enter the following:

   # echo 2147483648 > shmmax

7. Write a script to initialize these values during system startup and include
   the script in your system init files.

Refer to the following table to determine if your system shared memory and
semaphore kernel parameters are set high enough for Oracle9i. The parameters in
the following table are the minimum values required to run Oracle9i with a
single database instance.

You can put the initialization in the file /etc/rc.d/rc.local

  # Setup Kernel Parameters for Oracle 9i
  echo 250 32000 100 128 > /proc/sys/kernel/sem
  echo 2147483648 > /proc/sys/kernel/shmmax
  echo 4096 > /proc/sys/kernel/shmmni

Install Oracle Options (optional)

You may want to install the following options:

  Oracle JVM
  Oracle XML
  Oracle Spatial
  Oracle Ultra Search
  Oracle OLAP
  Oracle Data Mining
  Example Schemas

Run the script install_options.sh to enable these options in the database.
Before running this script, adjust the initSID.ora parameters as follows for
the build process. Afterwards, you can reset the parameters to smaller values.

  parallel_automatic_tuning = false
  shared_pool_size = 200000000
  java_pool_size = 100000000
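The four numbers written to (and read from) the sem file are positional; a
sketch of pulling them into named variables follows. The sample line is the
default output quoted above, since reading /proc/sys/kernel/sem directly needs
a Linux box with the file present:

```shell
# The sem file holds, in order: SEMMSL SEMMNS SEMOPM SEMMNI.
# Sample line as shown above; on a live system use:
#   sem_line=$(cat /proc/sys/kernel/sem)
sem_line="250 32000 32 128"

set -- $sem_line          # split on whitespace into $1..$4
SEMMSL=$1; SEMMNS=$2; SEMOPM=$3; SEMMNI=$4

echo "SEMMSL=$SEMMSL SEMMNS=$SEMMNS SEMOPM=$SEMOPM SEMMNI=$SEMMNI"
```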

  $ ./install_options.sh

Download Scripts for RedHat Linux 7.2

These scripts can be used as templates. Please note that some parameters like
ORACLE_HOME, ORACLE_SID and PATH must be adjusted for your own environment.
Besides this, you should check the initSID.ora parameters for your database
(Size, Archivelog, ...)

====================
7.2.Article 2:
====================

Installing Oracle9i (9.2.0.5.0) on Red Hat Linux (Fedora Core 2)
by Jeff Hunter, Sr. Database Administrator

--------------------------------------------------------------------------------
Contents

Overview
Swap Space Considerations
Configuring Shared Memory
Configuring Semaphores
Configuring File Handles
Create Oracle Account and Directories
Configuring the Oracle Environment
Configuring Oracle User Shell Limits
Downloading / Unpacking the Oracle9i Installation Files
Update Red Hat Linux System - (Oracle Metalink Note: 252217.1)
Install the Oracle 9.2.0.4.0 RDBMS Software
Install the Oracle 9.2.0.5.0 Patchset
Post Installation Steps
Creating the Oracle Database

--------------------------------------------------------------------------------
Overview

The following article is a summary of the steps required to successfully
install the Oracle9i (9.2.0.4.0) RDBMS software on Red Hat Linux Fedora Core 2.
Also included in this article is a detailed overview for applying the Oracle9i
(9.2.0.5.0) patchset. Keep in mind the following assumptions throughout this
article:

When installing Red Hat Linux Fedora Core 2, I install ALL components
(everything). This makes it easier than trying to troubleshoot missing software
components.

As of March 26, 2004, Oracle ships the Oracle9i RDBMS software with the
9.2.0.4.0 patchset already included. This will save considerable time since
that patchset does not have to be downloaded and installed. We will, however,
be applying the 9.2.0.5.0 patchset.

Although it is not required, it is recommended to apply the 9.2.0.5.0 patchset.
The post installation section includes steps for configuring the Oracle
Networking files, configuring the database to start and stop when the machine
is cycled, and other miscellaneous tasks. Finally, at the end of this article,
we will be creating an Oracle 9.2.0.5.0 database named ORA920 using supplied
scripts.

--------------------------------------------------------------------------------
Swap Space Considerations

Ensure enough swap space is available. Installing Oracle9i requires a minimum
of 512MB of memory. (An inadequate amount of swap during the installation will
cause the Oracle Universal Installer to either "hang" or "die".)

To check the amount of memory / swap you have allocated, type either:

  # free
  - OR -
  # cat /proc/swaps
  - OR -
  # cat /proc/meminfo | grep MemTotal

If you have less than 512MB of memory (between your RAM and SWAP), you can add
temporary swap space by creating a temporary swap file. This way you do not
have to use a raw device or, even more drastic, rebuild your system.

As root, make a file that will act as additional swap space, let's say about
300MB:

  # dd if=/dev/zero of=tempswap bs=1k count=300000

Now we should change the file permissions:

  # chmod 600 tempswap

Finally we format the file as swap and add it to the swap space:

  # mkswap tempswap
  # swapon tempswap
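The 512MB check above boils down to adding MemTotal and SwapTotal; a sketch
with assumed kB figures (on a real box, take both numbers from /proc/meminfo):

```shell
# Sketch: does RAM + swap reach the 512MB the installer wants?
# The kB figures are assumed examples; on a live system read them from
# /proc/meminfo (MemTotal and SwapTotal lines).
mem_kb=255608
swap_kb=262136

total_mb=$(( (mem_kb + swap_kb) / 1024 ))
if [ "$total_mb" -lt 512 ]; then
  echo "only ${total_mb}MB of RAM+swap: add a temporary swap file"
else
  echo "${total_mb}MB of RAM+swap: OK"
fi
```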

--------------------------------------------------------------------------------
Configuring Shared Memory

The Oracle RDBMS uses shared memory in UNIX to allow processes to access common
data structures and data. These data structures and data are placed in a shared
memory segment to allow processes the fastest form of Interprocess
Communications (IPC) available. The speed is primarily a result of processes
not needing to copy

data between each other to share common data and structures - relieving the
kernel from having to get involved.

Oracle uses shared memory in UNIX to hold its Shared Global Area (SGA). This is
an area of memory within the Oracle instance that is shared by all Oracle
background and foreground processes. It is important to size the SGA to
efficiently hold the database buffer cache, shared pool, and redo log buffer as
well as other shared Oracle memory structures. Inadequate sizing of the SGA can
cause a dramatic decrease in performance of the database.

To determine all shared memory limits you can use the ipcs command. The
following example shows the values of my shared memory limits on a fresh RedHat
Linux install using the defaults:

  # ipcs -lm

  ------ Shared Memory Limits --------
  max number of segments = 4096
  max seg size (kbytes) = 32768
  max total shared memory (kbytes) = 8388608
  min seg size (bytes) = 1

Let's continue this section with an overview of the parameters that are
responsible for configuring the shared memory settings in Linux.

SHMMAX

The SHMMAX parameter is used to define the maximum size (in bytes) for a shared
memory segment and should be set large enough for the largest SGA size. If
SHMMAX is set incorrectly (too low), it is possible that the Oracle SGA (which
is held in shared segments) may be limited in size. An inadequate SHMMAX
setting would result in the following:

  ORA-27123: unable to attach to shared memory segment

You can determine the value of SHMMAX by performing the following:

  # cat /proc/sys/kernel/shmmax
  33554432

As you can see from the output above, the default value for SHMMAX is 32MB.
This is often too small to configure the Oracle SGA. I generally set the SHMMAX
parameter to 2GB.

NOTE: With a 32-bit Linux operating system, the default maximum size of the SGA
is 1.7GB. This is the reason I will often set the SHMMAX parameter to 2GB,
since a larger SGA requires a larger value for SHMMAX.

On a 32-bit Linux operating system, without Physical Address Extension (PAE),
the physical memory is divided into a 3GB user space and a 1GB kernel space. It
is therefore possible to create a 2.7GB SGA, but you will need to make several
changes at the Linux operating system level by changing the mapped base. In the
case of a 2.7GB SGA, you would want to set the SHMMAX parameter to 3GB. Keep in
mind that the maximum value of the SHMMAX parameter is 4GB.
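Whether a planned SGA fits into a single shared memory segment is one
comparison against SHMMAX; a sketch using the 32MB default shown above and an
assumed 1.5GB SGA:

```shell
# Sketch: will the planned SGA fit in one segment under SHMMAX?
# On a live system: shmmax=$(cat /proc/sys/kernel/shmmax)
shmmax=33554432                      # the 32MB Red Hat default from above
sga_bytes=$((1536 * 1024 * 1024))    # assumed 1.5GB SGA

if [ "$sga_bytes" -gt "$shmmax" ]; then
  echo "SHMMAX too small: raise it to at least $sga_bytes bytes"
else
  echo "SHMMAX is large enough for this SGA"
fi
```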

To change the value of SHMMAX, you can use one of the following three methods:

This is the method I use most often. It sets SHMMAX on startup by inserting the
following kernel parameter in the /etc/sysctl.conf startup file:

  # echo "kernel.shmmax=2147483648" >> /etc/sysctl.conf

If you want to dynamically alter the value of SHMMAX without rebooting the
machine, you can make this change directly to the /proc file system. This
command can be made permanent by putting it into the /etc/rc.local startup
file:

  # echo "2147483648" > /proc/sys/kernel/shmmax

You can also use the sysctl command to change the value of SHMMAX:

  # sysctl -w kernel.shmmax=2147483648

SHMMNI

We now look at the SHMMNI parameter. This kernel parameter is used to set the
maximum number of shared memory segments system wide. The default value for
this parameter is 4096. This value is sufficient and typically does not need to
be changed. You can determine the value of SHMMNI by performing the following:

  # cat /proc/sys/kernel/shmmni
  4096

SHMALL

Finally, we look at the SHMALL shared memory kernel parameter. This parameter
controls the total amount of shared memory (in pages) that can be used at one
time on the system. In short, the value of this parameter should always be at
least:

  ceil(SHMMAX/PAGE_SIZE)

The default size of SHMALL is 2097152 and can be queried using the following
command:

  # cat /proc/sys/kernel/shmall
  2097152

From the above output, the total amount of shared memory (in bytes) that can be
used at one time on the system is:

  SM = (SHMALL * PAGE_SIZE) = 2097152 * 4096 = 8,589,934,592 bytes

The default setting for SHMALL should be adequate for our Oracle installation.

NOTE: The page size in Red Hat Linux on the i386 platform is 4096 bytes. You
can, however, use bigpages, which supports the configuration of larger memory
page sizes.
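The ceil(SHMMAX/PAGE_SIZE) relation can be verified in the shell; a sketch with
the values quoted above (2GB SHMMAX, 4096-byte pages):

```shell
# Sketch: check SHMALL >= ceil(SHMMAX / PAGE_SIZE), with the values above.
page_size=4096
shmmax=2147483648
shmall=2097152

needed_pages=$(( (shmmax + page_size - 1) / page_size ))  # ceiling division
total_bytes=$(( shmall * page_size ))

echo "SHMALL covers $total_bytes bytes; SHMMAX needs $needed_pages pages"
if [ "$shmall" -ge "$needed_pages" ]; then
  echo "SHMALL is large enough"
fi
```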

--------------------------------------------------------------------------------
Configuring Semaphores

Now that we have configured our shared memory settings, it is time to take care
of configuring our semaphores. A semaphore can be thought of as a counter that
is used to control access to a shared resource. Semaphores provide low level
synchronization between processes (or threads within a process) so that only
one process (or thread) has access to the shared segment, thereby ensuring the
integrity of that shared resource. When an application requests semaphores, it
does so using "sets".

To determine all semaphore limits, use the following:

  # ipcs -ls

  ------ Semaphore Limits --------
  max number of arrays = 128
  max semaphores per array = 250
  max semaphores system wide = 32000
  max ops per semop call = 32
  semaphore max value = 32767

You can also use the following command:

  # cat /proc/sys/kernel/sem
  250 32000 32 128

SEMMSL

The SEMMSL kernel parameter is used to control the maximum number of semaphores
per semaphore set. Oracle recommends setting SEMMSL to the largest PROCESSES
instance parameter setting in the init.ora file for all databases hosted on the
Linux system, plus 10. Also, Oracle recommends setting SEMMSL to a value of no
less than 100.

SEMMNI

The SEMMNI kernel parameter is used to control the maximum number of semaphore
sets on the entire Linux system. Oracle recommends setting SEMMNI to a value of
no less than 100.

SEMMNS

The SEMMNS kernel parameter is used to control the maximum number of semaphores
(not semaphore sets) on the entire Linux system. Oracle recommends setting
SEMMNS to the sum of the PROCESSES instance parameter setting for each database
on the system, adding the largest PROCESSES twice, and then finally adding 10
for each Oracle database on the system. To summarize:

  SEMMNS = sum of PROCESSES setting for each database on the system
           + (2 * [largest PROCESSES setting])
           + (10 * [number of databases on system])

To determine the maximum number of semaphores that can be allocated on a Linux
system, use the following calculation. It will be the lesser of:

  SEMMNS  -or-  (SEMMSL * SEMMNI)

SEMOPM

The SEMOPM kernel parameter is used to control the number of semaphore
operations that can be performed per semop system call. The semop system call
(function) provides the ability to do operations on multiple semaphores with
one semop system call. A semaphore set can have at most SEMMSL semaphores per
set, and it is therefore recommended to set SEMOPM equal to SEMMSL. Oracle
recommends setting SEMOPM to a value of no less than 100.
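The "lesser of SEMMNS or (SEMMSL * SEMMNI)" rule is a one-liner to evaluate; a
sketch with the default values shown earlier:

```shell
# Sketch: system-wide semaphore ceiling = min(SEMMNS, SEMMSL * SEMMNI),
# using the default values shown earlier in this section.
SEMMSL=250
SEMMNS=32000
SEMMNI=128

product=$((SEMMSL * SEMMNI))
if [ "$SEMMNS" -lt "$product" ]; then
  max_sems=$SEMMNS
else
  max_sems=$product
fi
echo "maximum allocatable semaphores: $max_sems"
```

With these defaults both terms happen to be 32000, so the ceiling is 32000.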
Setting Semaphore Kernel Parameters

Finally, we see how to set all semaphore parameters using several methods. In
the following, the only parameter I care about changing (raising) is SEMOPM;
all other default settings should be sufficient for our example installation.

This is the method I use most often. It sets all semaphore kernel parameters on
startup by inserting the following kernel parameter in the /etc/sysctl.conf
startup file:

  # echo "kernel.sem=250 32000 100 128" >> /etc/sysctl.conf

If you want to dynamically alter the value of all semaphore kernel parameters
without rebooting the machine, you can make this change directly to the /proc
file system. This command can be made permanent by putting it into the
/etc/rc.local startup file:

  # echo "250 32000 100 128" > /proc/sys/kernel/sem

You can also use the sysctl command to change the value of all semaphore
settings:

  # sysctl -w kernel.sem="250 32000 100 128"

--------------------------------------------------------------------------------
Configuring File Handles

When configuring our Linux database server, it is critical to ensure that the
maximum number of file handles is large enough. The setting for file handles
designates the number of open files that you can have on the entire Linux
system.

Use the following command to determine the maximum number of file handles for
the entire system:

  # cat /proc/sys/fs/file-max
  103062

Oracle recommends that the file handles for the entire system be set to at
least 65536. In most cases, the default for Red Hat Linux is 103062. I have
seen others (Red Hat Linux AS 2.1, Fedora Core 1, and Red Hat version 9) that
will only default to 32768. If this is the case, you will want to increase this
value to at least 65536.

This is the method I use most often. It sets the maximum number of file handles
(using the kernel parameter file-max) on startup by inserting the following
kernel parameter in the /etc/sysctl.conf startup file:

  # echo "fs.file-max=65536" >> /etc/sysctl.conf

If you want to dynamically alter the value of the file-max parameter without
rebooting the machine, you can make this change directly to the /proc file
system.
The change can be made permanent by putting the command into the /etc/rc.local startup file:

# echo "65536" > /proc/sys/fs/file-max

You can also use the sysctl command to change the maximum number of file handles:

# sysctl -w fs.file-max=65536

NOTE: It is also possible to query the current usage of file handles using the following command:

# cat /proc/sys/fs/file-nr
1140 0 103062

In the example output above, the three values from the file-nr command are:

Total number of allocated file handles.
Total number of file handles currently being used.
Maximum number of file handles that can be allocated. This is essentially the value of file-max (see above).
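The three fields above can be split out and labeled with a one-liner. This is just an illustrative sketch using the sample values shown above; parse_filenr is a hypothetical helper, not part of the article's scripts. On a live system you would feed it the real contents of /proc/sys/fs/file-nr.

```shell
# Sketch: label the three fields of /proc/sys/fs/file-nr output.
# parse_filenr is a hypothetical helper, fed the sample line from above;
# on a live system: parse_filenr "$(cat /proc/sys/fs/file-nr)"
parse_filenr() {
  echo "$1" | awk '{ printf "allocated=%s in_use=%s max=%s\n", $1, $2, $3 }'
}
parse_filenr "1140 0 103062"   # prints: allocated=1140 in_use=0 max=103062
```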

NOTE: If you need to increase the value in /proc/sys/fs/file-max, then make sure that the ulimit is set properly. Usually for kernel 2.4.20 it is set to unlimited. Verify the ulimit setting by issuing the ulimit command:

# ulimit
unlimited

--------------------------------------------------------------------------------
Create Oracle Account and Directories

Now let's create the Oracle UNIX account and all required directories:

Login as the root user id.
% su -

Create directories.
# mkdir -p /u01/app/oracle
# mkdir -p /u03/app/oradata
# mkdir -p /u04/app/oradata
# mkdir -p /u05/app/oradata
# mkdir -p /u06/app/oradata

Create the UNIX group for the Oracle user id.
# groupadd -g 115 dba

Create the UNIX user for the Oracle software.
# useradd -u 173 -c "Oracle Software Owner" -d /u01/app/oracle -g "dba" -m -s /bin/bash oracle
# passwd oracle
Changing password for user oracle.
New UNIX password: ************
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password: ************
passwd: all authentication tokens updated successfully.

Change ownership of all Oracle directories to the Oracle UNIX user.
# chown -R oracle:dba /u01
# chown -R oracle:dba /u03
# chown -R oracle:dba /u04
# chown -R oracle:dba /u05
# chown -R oracle:dba /u06

Oracle Environment Variable Settings

NOTE: Ensure to set the environment variable LD_ASSUME_KERNEL=2.4.1. Failing to set the LD_ASSUME_KERNEL parameter will cause the Oracle Universal Installer to hang!
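The directory creation above can also be written as a loop. A sketch, with a ROOT prefix added so it can be dry-run as a non-root user against a scratch directory; ROOT is my addition, not part of the article's commands, and the chown step is left out of the dry run since it needs root and the oracle account.

```shell
# Sketch: create the article's mount-point directories under a prefix.
# ROOT=/ reproduces the real layout (run as root); the default is a
# scratch directory so the loop can be tested safely.
ROOT=${ROOT:-$(mktemp -d)}
for mp in u01/app/oracle u03/app/oradata u04/app/oradata u05/app/oradata u06/app/oradata; do
  mkdir -p "$ROOT/$mp"
done
ls "$ROOT"   # lists: u01 u03 u04 u05 u06
```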

Verify all mount points. Please keep in mind that all of the following mount points can simply be directories if you only have one hard drive.

For our installation, we will be using five mount points (or directories) as follows:

/u01 : The Oracle RDBMS software will be installed to /u01/app/oracle.

/u03 : This mount point will contain the physical Oracle files:

Control File 1
Online Redo Log File - Group 1 / Member 1
Online Redo Log File - Group 2 / Member 1
Online Redo Log File - Group 3 / Member 1

/u04 : This mount point will contain the physical Oracle files:

Control File 2
Online Redo Log File - Group 1 / Member 2
Online Redo Log File - Group 2 / Member 2
Online Redo Log File - Group 3 / Member 2

/u05 : This mount point will contain the physical Oracle files:

Control File 3
Online Redo Log File - Group 1 / Member 3
Online Redo Log File - Group 2 / Member 3
Online Redo Log File - Group 3 / Member 3

/u06 : This mount point will contain all physical Oracle data files. This will be one large RAID 0 stripe for all Oracle data files: all tablespaces including System, UNDO, Temporary, Data, and Index.

--------------------------------------------------------------------------------
Configuring the Oracle Environment

After configuring the Linux operating environment, it is time to set up the Oracle UNIX user id for the installation of the Oracle RDBMS software. Keep in mind that the following steps need to be performed by the oracle user id.

Before delving into the details of configuring the Oracle user id, I packaged an archive of shell scripts and configuration files to assist with the Oracle preparation and installation. You should download the archive "oracle_920_installation_files_linux.tar" as the oracle user id and place it in its HOME directory.

Login as the oracle user id.
% su - oracle

Unpackage the contents of the oracle_920_installation_files_linux.tar archive. After extracting the archive, you will have a new directory called oracle_920_installation_files_linux that contains all required files.
The following set of commands describes how to extract the file and where to copy/extract all required files:

$ id
uid=173(oracle) gid=115(dba) groups=115(dba)
$ pwd
/u01/app/oracle
$ tar xvf oracle_920_installation_files_linux.tar
oracle_920_installation_files_linux/
oracle_920_installation_files_linux/admin.tar
oracle_920_installation_files_linux/common.tar
oracle_920_installation_files_linux/dbora

oracle_920_installation_files_linux/dbshut
oracle_920_installation_files_linux/.bash_profile
oracle_920_installation_files_linux/dbstart
oracle_920_installation_files_linux/ldap.ora
oracle_920_installation_files_linux/listener.ora
oracle_920_installation_files_linux/sqlnet.ora
oracle_920_installation_files_linux/tnsnames.ora
oracle_920_installation_files_linux/crontabORA920.txt
$ cp oracle_920_installation_files_linux/.bash_profile ~/.bash_profile
$ tar xvf oracle_920_installation_files_linux/admin.tar
$ tar xvf oracle_920_installation_files_linux/common.tar
$ . ~/.bash_profile
.bash_profile executed
$

--------------------------------------------------------------------------------
Configuring Oracle User Shell Limits

Many of the Linux shells (including BASH) implement controls over certain critical resources, like the number of file descriptors that can be opened and the maximum number of processes available to a user's session. In most cases you will not need to alter any of these shell limits, but if you find yourself getting errors when creating or maintaining the Oracle database, you may want to read through this section.

You can use the following command to query these shell limits:

# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 16383
virtual memory (kbytes, -v) unlimited

Maximum Number of Open File Descriptors for Shell Session

Let's first talk about the maximum number of open file descriptors for a user's shell session.

NOTE: Make sure that throughout this section you are logged in as the oracle user account, since this is the shell account we want to test!
Ok, you are first going to tell me, "But I've already altered my Linux environment by setting the system-wide kernel parameter /proc/sys/fs/file-max". Yes, this is correct, but there is still a per-user limit on the number of open file descriptors. This typically defaults to 1024. To check that, use the following command:

% su - oracle
% ulimit -n
1024

If you want to change the maximum number of open file descriptors for a user's shell session, you can edit /etc/security/limits.conf as the root account. For our Linux system, you would add the following lines:

oracle soft nofile 4096
oracle hard nofile 101062

The first line above sets the soft limit, which is the number of file handles (or open files) that the oracle user will have after logging in to the shell account. The hard limit defines the maximum number of file handles (or open files) possible for the user's shell account. If the oracle user account starts to receive error messages about running out of file handles, the number of file handles should be increased using the hard limit setting. You can increase the value of this parameter to 101062 for the current session by using the following:

% ulimit -n 101062

Keep in mind that the above command will only affect the current shell session. If you were to log out and log back in, the value would be set back to its default for that shell session.

NOTE: Although you can set the soft and hard file limits higher, it is critical to understand that you should never set the hard limit for nofile for your shell account equal to /proc/sys/fs/file-max. If you were to do this, your shell session could use up all of the file descriptors for the entire Linux system, which means that the entire Linux system would run out of file descriptors. At this point, you would not be able to initiate any new logins, since the system would not be able to open any PAM modules, which are required for login. Notice that I set my hard limit to 101062 and not 103062. In short, I am leaving 2000 spare!

We're not totally done yet. We still need to ensure that pam_limits is configured in the /etc/pam.d/system-auth file.
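To see both sides of the soft/hard pair described above for the current shell, a quick sketch:

```shell
# Sketch: print this shell's current soft and hard open-files limits.
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft=$soft hard=$hard"
```

The soft value is what the session starts with; ulimit -n can raise it per session, but never beyond the hard value.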
The steps described below should already be performed by a normal Red Hat Linux installation, but should still be validated! The PAM module will read the /etc/security/limits.conf file. You should have an entry in the /etc/pam.d/system-auth file as follows:

session required /lib/security/$ISA/pam_limits.so

I typically validate that my /etc/pam.d/system-auth file has the following two entries:

session required /lib/security/$ISA/pam_limits.so
session required /lib/security/$ISA/pam_unix.so

Finally, let's test our new settings for the maximum number of open file descriptors for the oracle shell session. Logout and log back in as the oracle user account, then run the following commands.

Let's first check all current soft shell limits:

$ ulimit -Sa
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 16383
virtual memory (kbytes, -v) unlimited

Finally, let's check all current hard shell limits:

$ ulimit -Ha
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 101062
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 16383
virtual memory (kbytes, -v) unlimited

The soft limit is now set to 4096 while the hard limit is now set to 101062.

NOTE: There may be times when you cannot get access to the root user account to change the /etc/security/limits.conf file. You can set this value in the user's login script for the shell as follows:

su - oracle
cat >> ~oracle/.bash_profile << EOF
ulimit -n 101062
EOF

NOTE: For this section, I used the BASH shell. The session values will not always be the same for other shells.

Maximum Number of Processes for Shell Session

This section is very similar to the previous section, "Maximum Number of Open File Descriptors for Shell Session", and deals with the same concepts of soft limits, hard limits, and configuring pam_limits. For most default Red Hat Linux installations, you will not need to be concerned with the maximum number of user processes, as this value is generally high enough!

NOTE: For this section, I used the BASH shell. The session values will not always be the same for other shells.

Let's start by querying the current limit of the maximum number of processes for the oracle user:

% su - oracle
% ulimit -u
16383

If you want to change the soft and hard limits for the maximum number of processes for the oracle user (and for that matter, all users), you can edit /etc/security/limits.conf as the root account. For our Linux system, you would add the following lines:

oracle soft nproc 2047
oracle hard nproc 16384

NOTE: There may be times when you cannot get access to the root user account to change the /etc/security/limits.conf file. You can set this value in the user's login script for the shell as follows:

su - oracle
cat >> ~oracle/.bash_profile << EOF
ulimit -u 16384
EOF
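The cat >> ... << EOF idiom above appends the ulimit call to the user's login script. A sketch of the same idiom against a scratch file, so you can see exactly what lands in the profile without touching the real ~oracle/.bash_profile:

```shell
# Sketch: heredoc-append the ulimit line to a scratch file instead of
# ~oracle/.bash_profile.
profile=$(mktemp)
cat >> "$profile" << EOF
ulimit -u 16384
EOF
cat "$profile"   # prints: ulimit -u 16384
```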

Miscellaneous Notes

To check all current soft shell limits, enter the following command:

$ ulimit -Sa
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 16383
virtual memory (kbytes, -v) unlimited

To check maximum hard limits, enter the following command:

$ ulimit -Ha
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 101062
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 16383
virtual memory (kbytes, -v) unlimited

The file (blocks) value should be multiplied by 512 to obtain the maximum file size imposed by the shell. A value of unlimited is the operating system default and typically has a maximum value of 1 TB.

NOTE: Oracle9i Release 2 (9.2.0) includes native support for files greater than 2 GB. Check your shell to determine whether it will impose a limit.
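The blocks-to-bytes rule above is simple arithmetic; for example, a 2097152-block file-size limit works out to exactly 1 GB. blocks_to_bytes is a hypothetical helper for illustration, not something from the article:

```shell
# Sketch: convert a ulimit file-size value (in 512-byte blocks) to bytes.
blocks_to_bytes() { echo $(( $1 * 512 )); }
blocks_to_bytes 2097152   # prints: 1073741824 (= 1 GB)
```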

--------------------------------------------------------------------------------
Downloading / Unpacking the Oracle9i Installation Files

Most of the actions throughout the rest of this document should be done as the "oracle" user account unless otherwise noted. If you are not logged in as the "oracle" user account, do so now.

Download Oracle9i from Oracle's OTN site. (If you do not currently have an account with Oracle OTN, you will need to create one. This is a FREE account!)

http://www.oracle.com/technology/software/products/oracle9i/htdocs/linuxsoft.html

Download the following files to a temporary directory (i.e. /u01/app/oracle/orainstall):

ship_9204_linux_disk1.cpio.gz (538,906,295 bytes) (cksum - 245082434)
ship_9204_linux_disk2.cpio.gz (632,756,922 bytes) (cksum - 2575824107)
ship_9204_linux_disk3.cpio.gz (296,127,243 bytes) (cksum - 96915247)

Directions to extract the files:

Run "gunzip <filename>" on all the files.
% gunzip ship_9204_linux_disk1.cpio.gz

Extract the cpio archives with the command "cpio -idmv < <filename>".
% cpio -idmv < ship_9204_linux_disk1.cpio

NOTE: Some browsers will uncompress the files but leave the extension the same (gz) when downloading. If the above steps do not work for you, try skipping step 1 and go directly to step 2 without changing the filename.
% cpio -idmv < ship_9204_linux_disk1.cpio.gz
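Since a cksum value is quoted for each disk image above, it is worth comparing each download against it before extracting. A sketch; verify_cksum is my helper for illustration, not part of the article's files:

```shell
# Sketch: compare a file's cksum CRC against a published value.
verify_cksum() {
  want=$1; file=$2
  got=$(cksum "$file" | awk '{print $1}')
  if [ "$got" = "$want" ]; then echo "OK: $file"; else echo "MISMATCH: $file"; fi
}
# e.g. verify_cksum 245082434 ship_9204_linux_disk1.cpio.gz
```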

You should now have three directories called "Disk1", "Disk2" and "Disk3" containing the Oracle9i installation files:

/Disk1
/Disk2
/Disk3

--------------------------------------------------------------------------------
Update Red Hat Linux System - (Oracle Metalink Note: 252217.1)

The following RPMs, all of which are available on the Red Hat Fedora Core 2 CDs, will need to be updated as per the steps described in Metalink Note: 252217.1, "Requirements for Installing Oracle 9iR2 on RHEL3". All of these packages will need to be installed as the root user:

From Fedora Core 2 / Disk #1
# cd /mnt/cdrom/Fedora/RPMS
# rpm -Uvh libpng-1.2.2-22.i386.rpm

From Fedora Core 2 / Disk #2
# cd /mnt/cdrom/Fedora/RPMS
# rpm -Uvh gnome-libs-1.4.1.2.90-40.i386.rpm

From Fedora Core 2 / Disk #3
# cd /mnt/cdrom/Fedora/RPMS
# rpm -Uvh compat-libstdc++-7.3-2.96.126.i386.rpm
# rpm -Uvh compat-libstdc++-devel-7.3-2.96.126.i386.rpm
# rpm -Uvh compat-db-4.1.25-2.1.i386.rpm
# rpm -Uvh compat-gcc-7.3-2.96.126.i386.rpm
# rpm -Uvh compat-gcc-c++-7.3-2.96.126.i386.rpm
# rpm -Uvh openmotif21-2.1.30-9.i386.rpm
# rpm -Uvh pdksh-5.2.14-24.i386.rpm

From Fedora Core 2 / Disk #4
# cd /mnt/cdrom/Fedora/RPMS
# rpm -Uvh sysstat-5.0.1-2.i386.rpm

Set gcc296 and g++296 in PATH

Put gcc296 and g++296 first in the $PATH variable by creating the following symbolic links:

# mv /usr/bin/gcc /usr/bin/gcc323
# mv /usr/bin/g++ /usr/bin/g++323
# ln -s /usr/bin/gcc296 /usr/bin/gcc
# ln -s /usr/bin/g++296 /usr/bin/g++

Check hostname

Make sure the hostname command returns a fully qualified host name by amending the /etc/hosts file if necessary:

# hostname

Install the 3006854 patch

The Oracle / Linux Patch 3006854 can be downloaded here.

# unzip p3006854_9204_LINUX.zip
# cd 3006854
# sh rhel3_pre_install.sh

--------------------------------------------------------------------------------
Install the Oracle 9.2.0.4.0 RDBMS Software

As the "oracle" user account:

Set your DISPLAY variable to a valid X Windows display.
% DISPLAY=<Any X-Windows Host>:0.0
% export DISPLAY

NOTE: If you forgot to set the DISPLAY environment variable and you get the following error:

Xlib: connection to ":0.0" refused by server
Xlib: Client is not authorized to connect to Server

you will then need to execute the following command to get "runInstaller" working again:

% rm -rf /tmp/OraInstall

If you don't do this, the Installer will hang without giving any error messages. Also make sure that "runInstaller" has stopped running in the background. If not, kill it.

Change directory to the Oracle installation files you downloaded and extracted, then run runInstaller:

$ su - oracle
$ cd orainstall/Disk1
$ ./runInstaller

Initializing Java Virtual Machine from /tmp/OraInstall2004-05-02_08-45-13PM/jre/bin/java. Please wait...

Screen Name Response
Welcome Screen: Click "Next"
Inventory Location: Click "OK"
UNIX Group Name: Use "dba"
Root Script Window: Open another window, login as the root userid, and run "/tmp/orainstRoot.sh". When the script has completed, return to the dialog from the Oracle Installer and hit Continue.

File Locations: Leave the "Source Path" at its default setting. For the Destination name, I like to use "OraHome920". You can leave the Destination path at its default value, which should be "/u01/app/oracle/product/9.2.0".
Available Products: Select "Oracle9i Database 9.2.0.4.0" and click "Next"
Installation Types: Select "Enterprise Edition (2.84GB)" and click "Next"
Database Configuration: Select "Software Only" and click "Next"
Summary: Click "Install"

Running root.sh script.
When the "Link" phase is complete, you will be prompted to run the $ORACLE_HOME/root.sh script as the "root" user account.

Shutdown any started Oracle processes.
The Oracle Universal Installer will succeed in starting some Oracle programs, in particular the Oracle HTTP Server (Apache), the Oracle Intelligent Agent, and possibly the Oracle TNS Listener. Make sure all programs are shut down before attempting to continue with installing the Oracle 9.2.0.5.0 patchset:

% $ORACLE_HOME/Apache/Apache/bin/apachectl stop
% agentctl stop
% lsnrctl stop

--------------------------------------------------------------------------------
Install the Oracle 9.2.0.5.0 Patchset

Once you have completed installing the Oracle9i (9.2.0.4.0) RDBMS software, you should now apply the 9.2.0.5.0 patchset.

NOTE: The details and instructions for applying the 9.2.0.5.0 patchset in this article are not absolutely necessary. I provide them here simply as a convenience for those who do want to apply the latest patchset.

The 9.2.0.5.0 patchset can be downloaded from Oracle Metalink:

Patch Number: 3501955
Description: ORACLE 9i DATABASE SERVER RELEASE 2 - PATCH SET 4 VERSION 9.2.0.5.0
Product: Oracle Database Family
Release: Oracle 9.2.0.5
Select a Platform or Language: Linux x86
Last Updated: 26-MAR-2004
Size: 313M (328923077 bytes)

Use the following steps to install the Oracle10g Universal Installer and then the Oracle 9.2.0.5.0 patchset. To start, let's unpack the Oracle 9.2.0.5.0 patchset to a temporary directory:

% cd orapatch
% unzip p3501955_9205_LINUX.zip
% cpio -idmv < 9205_lnx32_release.cpio

Next, we need to install the Oracle10g Universal Installer into the same $ORACLE_HOME we used to install the Oracle9i RDBMS software.

NOTE: The old Universal Installer that was used to install the Oracle9i RDBMS software (OUI release 2.2) cannot be used to install the 9.2.0.5.0 patchset and higher!

Starting with the Oracle 9.2.0.5.0 patchset, Oracle requires the use of the Oracle10g Universal Installer to apply the 9.2.0.5.0 patchset and to perform all subsequent maintenance operations on the Oracle software $ORACLE_HOME.

Let's get this thing started by installing the Oracle10g Universal Installer. This must be done by running the runInstaller that is included with the 9.2.0.5.0 patchset we extracted in the above step:

% cd orapatch/Disk1
% ./runInstaller -ignoreSysPrereqs

Starting Oracle Universal Installer...
Checking installer requirements...
Checking operating system version: must be redhat-2.1, UnitedLinux-1.0, redhat-3, SuSE-7 or SuSE-8
Failed <<<<
>>> Ignoring required pre-requisite failures. Continuing...
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2004-08-30_07-48-15PM. Please wait ...
Oracle Universal Installer, Version 10.1.0.2.0 Production
Copyright (C) 1999, 2004, Oracle. All rights reserved.

Use the following options in the Oracle Universal Installer to install the Oracle10g OUI:

Screen Name Response
Welcome Screen: Click "Next"
File Locations: The "Source Path" should be pointing to the products.xml file by default. For the Destination name, choose the same one you created when installing the Oracle9i software. The name we used in this article was "OraHome920", and the destination path should be "/u01/app/oracle/product/9.2.0".
Select a Product to Install: Select "Oracle Universal Installer 10.1.0.2.0" and click "Next"
Summary: Click "Install"

Exit from the Oracle Universal Installer.

Correct the runInstaller symbolic link bug. (Bug 3560961)

After the installation of the Oracle10g Universal Installer, there is a bug that does NOT update the $ORACLE_HOME/bin/runInstaller symbolic link to point to the new 10g installation location. Since the symbolic link does not get updated, the runInstaller command still points to the old installer (2.2), which will be run instead of the new 10g installer.

To correct this, you will need to manually update the $ORACLE_HOME/bin/runInstaller symbolic link:

% cd $ORACLE_HOME/bin
% ln -s -f $ORACLE_HOME/oui/bin/runInstaller.sh runInstaller

We now install the Oracle 9.2.0.5.0 patchset by executing the newly installed 10g Universal Installer:

% cd
% runInstaller -ignoreSysPrereqs

Starting Oracle Universal Installer...
Checking installer requirements...
Checking operating system version: must be redhat-2.1, UnitedLinux-1.0, redhat-3, SuSE-7 or SuSE-8
Failed <<<<
>>> Ignoring required pre-requisite failures. Continuing...
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2004-08-30_07-59-30PM. Please wait ...
Oracle Universal Installer, Version 10.1.0.2.0 Production
Copyright (C) 1999, 2004, Oracle. All rights reserved.

Here is an overview of the selections I made while performing the 9.2.0.5.0 patchset install:

Screen Name Response
Welcome Screen: Click "Next"
File Locations: The "Source Path" should be pointing to the products.xml file by default. For the Destination name, choose the same one you created when installing the Oracle9i software. The name we used in this article was "OraHome920", and the destination path should be "/u01/app/oracle/product/9.2.0".
Select a Product to Install: Select "Oracle 9iR2 Patchsets 9.2.0.5.0" and click "Next"
Summary: Click "Install"
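The -f flag on ln is what makes this fix work: it replaces the stale link in place instead of failing because the name already exists. A sketch on a scratch link (the paths here are stand-ins, not the real $ORACLE_HOME files):

```shell
# Sketch: show that `ln -s -f` repoints an existing symbolic link.
dir=$(mktemp -d)
touch "$dir/old" "$dir/new"
ln -s "$dir/old" "$dir/link"      # link -> old
ln -s -f "$dir/new" "$dir/link"   # -f repoints it: link -> new
readlink "$dir/link"
```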

Running root.sh script.
When the Link phase is complete, you will be prompted to run the $ORACLE_HOME/root.sh script as the "root" user account. Go ahead and run the root.sh script.

Exit Universal Installer.
Exit from the Universal Installer and continue on to the Post Installation section of this article.

--------------------------------------------------------------------------------
Post Installation Steps

After applying the Oracle 9.2.0.5.0 patchset, we should perform several miscellaneous tasks, like configuring the Oracle Networking files and setting up startup and shutdown scripts for when the machine is cycled.

Configuring Oracle Networking Files:

I already included sample configuration files (contained in the oracle_920_installation_files_linux.tar file) that can simply be copied to their proper location and started. Change to the oracle HOME directory and copy the files as follows:

% cd
% cd oracle_920_installation_files_linux
% cp ldap.ora $ORACLE_HOME/network/admin/
% cp tnsnames.ora $ORACLE_HOME/network/admin/
% cp sqlnet.ora $ORACLE_HOME/network/admin/
% cp listener.ora $ORACLE_HOME/network/admin/

% cd
% lsnrctl start

Update /etc/oratab:

The dbora script (below) relies on an entry in /etc/oratab. Perform the following action as the oracle user account:

% echo "ORA920:/u01/app/oracle/product/9.2.0:Y" >> /etc/oratab

Configuring Startup / Shutdown Scripts:

Also included in the oracle_920_installation_files_linux.tar file is a script called dbora. This script can be used by the init process to startup and shutdown the database when the machine is cycled. The following tasks will need to be performed by the root user account:

% su
# cp /u01/app/oracle/oracle_920_installation_files_linux/dbora /etc/init.d
# chmod 755 /etc/init.d/dbora
# ln -s /etc/init.d/dbora /etc/rc3.d/S99dbora
# ln -s /etc/init.d/dbora /etc/rc4.d/S99dbora
# ln -s /etc/init.d/dbora /etc/rc5.d/S99dbora
# ln -s /etc/init.d/dbora /etc/rc0.d/K10dbora
# ln -s /etc/init.d/dbora /etc/rc6.d/K10dbora
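The five ln commands above follow one pattern (S99 links in the start runlevels, K10 links in the stop runlevels), so they can be written as a loop. A sketch, with an ETC prefix added so it can be dry-run without root; ETC is my addition, and you would use /etc for the real thing, as root:

```shell
# Sketch: create the dbora start/kill rc links under a prefix directory.
ETC=${ETC:-$(mktemp -d)}
mkdir -p "$ETC/init.d" "$ETC/rc3.d" "$ETC/rc4.d" "$ETC/rc5.d" "$ETC/rc0.d" "$ETC/rc6.d"
touch "$ETC/init.d/dbora"
for l in rc3.d/S99dbora rc4.d/S99dbora rc5.d/S99dbora rc0.d/K10dbora rc6.d/K10dbora; do
  ln -s "$ETC/init.d/dbora" "$ETC/$l"
done
ls "$ETC/rc3.d"   # lists: S99dbora
```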

--------------------------------------------------------------------------------
Creating the Oracle Database

Finally, let's create an Oracle9i database. This can be done using scripts that I already included with the oracle_920_installation_files_linux.tar download. The scripts are included in the ~oracle/admin/ORA920/create directory. To create the database, perform the following steps:

% su - oracle
% cd admin/ORA920/create
% ./RUN_CRDB.sh

After starting RUN_CRDB.sh, there will be no screen activity until the database creation is complete. You can, however, bring up a new console window to the Linux database server as the oracle user account, navigate to the same directory you started the database creation from, and tail the crdb.log log file.

$ telnet linux3
...
Fedora Core release 2 (Tettnang)
Kernel 2.6.5-1.358 on an i686
login: oracle

Password: xxxxxx
.bash_profile executed
[oracle@linux3 oracle]$ cd admin/ORA920/create
[oracle@linux3 create]$ tail -f crdb.log

=====================================
8. Install Oracle 9.2.0.2 on OpenVMS:
=====================================

VMS:
====

Using OUI to install Oracle9i Release 2 on an OpenVMS System

We have a PC running Xcursion and a 16 processor GS1280 with the 2 built-in disks. In the examples we booted on disk DKA0:. The Oracle account is on disk DKA100. Oracle and the database will be installed on DKA100. The install disk MUST be ODS-5.

The installation uses the 9.2 kit downloaded from the Oracle website. It comes in a Java JAR file. Oracle ships a JRE with its product. However, you will have to install Java on OpenVMS so you can unpack the 9.2 JAR file that comes from the Oracle website.

Unpack the JAR file as described on the Oracle website. This will create two .BCK files. Follow the instructions in the VMS_9202_README.txt file on how to restore the 2 backup save sets. When the two backup save set files are restored, you should end up with two directories:

[disk1] directory
[disk2] directory

These directories will be in the root of a disk. In this example they are in the root of DKA100.

The OUI requires X-Windows. If the Alpha system you are using does not have a graphic head, use a PC with an X-Windows terminal such as Xcursion.

During this install we discovered a problem: the instructions tell you to run @DKA100:[disk1]runinstaller. This will not work, because the RUNINSTALLER.COM file is not in the root of DKA100:[disk1]. You must first copy RUNINSTALLER.COM from the dka100:[disk1.000000] directory into dka100:[disk1]:

$ Copy dka100:[disk1.000000]runinstaller.com dka100:[disk1]

From a terminal window execute:

@DKA100:[disk1]runinstaller

- The Oracle Installer starts.

Start the installation
Click Next to start the installation.

- Assign name and directory structure for the Oracle Home (ORACLE_HOME)
Assign a name for your Oracle home, and assign the directory structure for the home, for example:
Ora_home Dka100:[oracle.oracle9]
This is where the OUI will install Oracle. The OUI will create the directories as necessary.

- Select product to install
Select Database. Click Next.

- Select type of installation
Select Enterprise Edition (or Standard Edition or Custom). Click Next.

- Enable RAC
Select No. Click Next.

- Database summary
View the list of products that will be installed. Click Install.

- Installation begins
Installation takes from 45 minutes to an hour.

- Installation ends
Click Exit.

Oracle is now installed in DKA100:[oracle.oracle9].

To create the first database, you must first set up the Oracle logicals. To do this, use a terminal and execute @[.oracle9]orauser.

The tool to create and manage databases is DBCA. On the terminal, type DBCA to launch the Database Configuration Assistant.

Welcome to Database Configuration Assistant
DBCA starts. Click Next.

Select an operation
Select Create a Database. Click Next.

Select a template
Select New Database. Click Next.

Enter database name and SID
Enter the name of the database and the Oracle System Identifier (SID). In this example, the database name is DB9I and the SID is DB9I1. Click Next.

Select database features
Select which demo databases are installed.

In the example, we selected all possible databases. Click Next.

Select default mode
Select the mode in which you want your database to operate by default. In the example, we selected Shared Server Mode. Click Next.

Select memory
In the example, we selected the default. Click Next.

Specify database storage parameters
Select the device and directory. Use the UNIX device syntax, i.e. DKA100:[oracle.oracle9.database] would be:
/DKA100/oracle/oracle9/database/
In the example, we kept the default settings. Click Next.

Select database creation options
Creating a template saves time when creating a database. Click Finish.

Create a template
Click OK.

Creating and starting Oracle Instance
The database builds. If it completes successfully, click Exit. If it does not complete successfully, build it again.

Running the database
Enter show system to see the Oracle database up and running.

Set up some files to start and stop the database.

Example of a start file
This command sets the logicals to manage the database:
$ @dka100:[oracle.oracle9]orauser db9i1
The next line starts the Listener (needed for client connects). The final lines start the database.

Stop database example
Example of how to stop the database.

Test database server
Use the Enterprise Manager console to test the database server.

Oracle Enterprise Manager
Enter the address of the server and the SID. Name the server. Click OK.

Database connect information
Select the database. Enter the system account and password. Change the connection box to AS SYSDBA. Click OK.

Open database
The database is opened and exposed.

Listener
The Listener automatically picks up the SID from the database. Start the Listener before the database and the SID will display in the Listener. If you start the database before the Listener, the SID may not appear immediately. To see if the SID is registered in the Listener, enter:

$ lsnrctl stat

Alter a user
The user is altered:
SQL> alter user oe identified by oe account unlock;
SQL> exit
The preferred method is to use the Enterprise Manager Console.

==================================================
9. Installation of Oracle 9i on AIX and other UNIX
==================================================

AIX:
====

9.1 Installation of Oracle 9i on AIX

Doc ID: Note:201019.1
Content Type: TEXT/PLAIN
Subject: AIX: Quick Start Guide - 9.2.0 RDBMS Installation
Date: 25-JUN-2002
Type: REFERENCE
Last Revision Date: 14-APR-2004
Status: PUBLISHED

Quick Start Guide
Oracle9i Release 2 (9.2.0) RDBMS Installation
AIX Operating System

Purpose
=======
This document is designed to be a quick reference that can be used when installing Oracle9i Release 2 (9.2.0) on an AIX platform. It is NOT designed to replace the Installation Guide or other documentation. A familiarity with the AIX Operating System is assumed. If more detailed information is needed, please see the Appendix at the bottom of this document for additional resources.

Each step should be done in the order that it is listed. These steps are the bare minimum necessary for a typical install of the Oracle9i RDBMS.

Verify OS version is certified with the RDBMS version
======================================================
The following steps are required to verify your version of the AIX operating system is certified with the version of the RDBMS (Oracle9i Release 2 (9.2.0)):

1. Point your web browser to http://metalink.oracle.com.
2. Click the "Certify & Availability" button near the left.
3. Click the "Certifications" button near the top middle.
4. Click the "View Certifications by Platform" link.
5. Select "IBM RS/6000 AIX" and click "Submit".
6. Select Product Group "Oracle Server" and click "Submit".
7. Select Product "Oracle Server - Enterprise Edition" and click "Submit".
8. Read any general notes at the top of the page.
9. Select "9.2 (9i) 64-bit" and click "Submit".


The "Status" column displays the certification status. The links in the "Addt'l Info" and "Install Issue" columns may contain additional information relevant to a given version. Note that if patches are listed under one of these links, your installation is not considered certified unless you apply them. The "Addt'l Info" link also contains information about available patchsets. Installation of patchsets is not required to be considered certified, but they are highly recommended. Pre-Installation Steps for the System Administrator ==================================================== The following steps are required to verify your operating system meets minimum requirements for installation, and should be performed by the root user. For assistance with system administration issues, please contact your system administator or operating system vendor. Use these steps to manually check the operating system requirements before attempting to install Oracle RDBMS software, or you may choose to use the convenient "Unix InstallPrep script" which automates these checks for you. For more information about the script, including download information, please review the following article: Note:189256.1 UNIX: Script to Verify Installation Requirements for Oracle 9.x version of RDBMS

The InstallPrep script currently does not check requirements for AIX5L systems.

The Following Steps Need to be Performed by the Root User:

1. Configure Operating System Resources:

   Ensure that the system has at least the following resources:
   - 400 MB in /tmp *
   - 256 MB of physical RAM memory
   - Two times the amount of physical RAM memory for Swap/Paging space
     (On systems with more than 2 GB of physical RAM memory, the requirements
     for Swap/Paging space can be lowered, but Swap/Paging space should never
     be less than physical RAM memory.)

   * You may also redirect /tmp by setting the TEMP environment variable.
     This is only recommended in rare circumstances where /tmp cannot be
     expanded to meet free space requirements.

2. Create an Oracle Software Owner and Group:

   Create an AIX user and group that will own the Oracle software.
   (user = oracle, group = dba)
   - Use the "smit security" command to create a new group and user

   Please ensure that the user and group you use are defined in the local
   /etc/passwd (user) and /etc/group (group) files rather than resolved via
   a network service such as NIS.

3. Create a Software Mount Point and Datafile Mount Points:

Create a mount point for the Oracle software installation. (at least 3.5 GB, typically /u01) Create a second, third, and fourth mount point for the database files. (typically /u02, /u03, and /u04) Use of multiple mount points is not required, but is highly recommended for best performance and ease of recoverability. 4. Ensure that Asynchronous Input Output (AIO) is "Available": Use the following command to check the current AIO status: # lsdev -Cc aio Verify that the status shown is "Available". If the status shown is "Defined", then change the "STATE to be configured at system restart" to "Available" after running the following command: # smit chaio 5. Ensure that the math library is installed on your system: Use the following command to determine if the math library is installed: # lslpp -l bos.adt.libm If this fileset is not installed and "COMMITTED", then you must install it from the AIX operating system CD-ROM from IBM. With the correct CD-ROM mounted, run the following command to begin the process to load the required bos.adt.libm fileset: # smit install_latest AIX5L systems also require the following filesets: # lslpp -l bos.perf.perfstat # lslpp -l bos.perf.libperfstat 6. Download and install JDK 1.3.1 from IBM. At the time this article was created, the JDK could be downloaded from the following URL: http://www.ibm.com/developerworks/java/jdk/aix/index.html Please contact IBM Support if you need assistance downloading or installing the JDK. 7. Mount the Oracle CD-ROM: Mount the Oracle9i Release 2 (9.2.0) CD-ROM using the command: # mount -rv cdrfs /dev/cd0 /cdrom 8. Run the rootpre.sh script: NOTE: You must shutdown ALL Oracle database instances (if any) before running the rootpre.sh script. Do not run the rootpre.sh script if you have a newer version of an Oracle database already installed on this system.

Use the following command to run the rootpre.sh script:

# /cdrom/rootpre.sh

Installation Steps for the Oracle User
=======================================
The Following Steps Need to be Performed by the Oracle User:

1. Set Environment Variables

   Environment variables should be set in the login script for the oracle
   user. If the oracle user's default shell is the C-shell (/usr/bin/csh),
   then the login script will be named ".login". If the oracle user's
   default shell is the Bourne-shell (/usr/bin/bsh) or the Korn-shell
   (/usr/bin/sh or /usr/bin/ksh), then the login script will be named
   ".profile". In either case, the login script will be located in the
   oracle user's home directory ($HOME). The examples below assume that your
   software mount point is /u01.

   Parameter     Value
   -----------   -----------------------------
   ORACLE_HOME   /u01/app/oracle/product/9.2.0
   PATH          /u01/app/oracle/product/9.2.0/bin:/usr/ccs/bin:/usr/bin/X11:
                 (followed by any other directories you wish to include)
   ORACLE_SID    Set this to what you will call your database instance.
                 (typically 4 characters in length)
   DISPLAY       <ip-address>:0.0 (review Note:153960.1 for detailed information)

2. Set the umask:

   Set the oracle user's umask to "022" in your ".profile" or ".login" file.

   Example: umask 022

3. Verify the Environment

   Log off and log on as the oracle user to ensure all environment variables
   are set correctly. Use the following command to view them:

   % env | more

   Before attempting to run the Oracle Universal Installer (OUI), verify
   that you can successfully run the following command:

   % /usr/bin/X11/xclock

   If this does not display a clock on your display screen, please review
   the following article:

   Note:153960.1 FAQ: X Server testing and troubleshooting

4. Start the Oracle Universal Installer and install the RDBMS software:

   Use the following commands to start the installer:

   % cd /tmp
   % /cdrom/runInstaller

   Respond to the installer prompts as shown below:

   - When prompted for whether rootpre.sh has been run by root, enter "y".
     This should have been done in Pre-Installation step 8 above.
   - At the "Welcome Screen", click Next.
   - If prompted, enter the directory to use for the "Inventory Location".
     This can be any directory, but is usually not under ORACLE_HOME because
     the oraInventory is shared with all Oracle products on the system.
   - If prompted, enter the "UNIX Group Name" for the oracle user (dba).
   - At the "File Locations Screen", verify the Destination listed is your
     ORACLE_HOME directory. Also enter a NAME to identify this ORACLE_HOME.
     The NAME can be anything, but is typically "DataServer" and the first
     three digits of the version. For example: "DataServer920"
   - At the "Available Products Screen", choose Oracle9i Database, then
     click Next.
   - At the "Installation Types Screen", choose Enterprise Edition, then
     click Next.
   - If prompted, click Next at the "Component Locations Screen" to accept
     the default directories.
   - At the "Database Configuration Screen", choose the configuration based
     on how you plan to use the database, then click Next.
   - If prompted, click Next at the "Privileged Operating System Groups
     Screen" to accept the default values (your current OS primary group).
   - If prompted, enter the Global Database Name in the format
     "ORACLE_SID.hostname" at the "Database Identification Screen".
     For example: "TEST.AIXhost". The SID entry should be filled in with the
     value of ORACLE_SID. Click Next.
   - If prompted, enter the directory where you would like to put datafiles
     at the "Database File Location Screen". Click Next.
   - If prompted, select "Use the default character set" (WE8ISO8859P1) at
     the "Database Character Set Screen". Click Next.
   - At the "Choose JDK Home Directory", enter the directory where you have
     previously installed the JDK 1.3.1 from IBM. This should have been done
     in Pre-Installation step 6 above.
   - At the "Summary Screen", review your choices, then click Install.

   The install will begin. Follow instructions regarding running "root.sh"

and any other prompts. When completed, the install will have created a
default database, configured a Listener, and started both for you.

Note: If you are having problems changing CD-ROMs when prompted to do so,
please review the following article:

Note:146566.1 How to Unmount / Eject First Cdrom

Your Oracle9i Release 2 (9.2.0) RDBMS installation is now complete and ready
for use.

Appendix A
==========
Documentation is available from the following resources:

Oracle9i Release 2 (9.2.0) CD-ROM Disk1
----------------------------------------
Mount the CD-ROM, then use a web browser to open the file "index.htm"
located at the top level directory of the CD-ROM. On this CD-ROM you will
find the Installation Guide, Administrator's Reference, and other useful
documentation.

Oracle Documentation Center
---------------------------
Point your web browser to the following URL:

http://otn.oracle.com/documentation/content.html

Select the highest version CD-pack displayed to ensure you get the most
up-to-date information.

Unattended install:
-------------------
Note 1:
-------
This note describes how to start the unattended install of patch 9.2.0.5 on
AIX 5L, which can be applied to 9.2.0.2, 9.2.0.3, 9.2.0.4.

Shut down the existing Oracle server instance with normal or immediate
priority. For example, shut down all instances (cleanly) if running Parallel
Server. Stop all listener, agent and other processes running in or against
the ORACLE_HOME that will have the patch set installation. Run slibclean
(/usr/sbin/slibclean) as root to remove any currently unused modules in
kernel and library memory.

To perform a silent installation requiring no user intervention:

Copy the response file template provided in the response directory where you
unpacked the patch set tar file. Edit the values for all fields labeled as
<Value Required> according to the comments and examples in the template.
Start the Oracle Universal Installer from the directory described in Step 4
which applies to your situation. You should pass the full path of the
response file template you have edited locally as the last argument, with
your own value of ORACLE_HOME and FROM_LOCATION. The following is an example
of the command:

% ./runInstaller -silent -responseFile full_path_to_your_response_file

Run the $ORACLE_HOME/root.sh script from a root session. If you are applying
the patch set in a cluster database environment, the root.sh script should
be run in the same way on both the local node and all participating nodes.

Note 2:
-------
In order to make an unattended install of 9.2.0.1 on Win2K:

Running Oracle Universal Installer and Specifying a Response File

To run Oracle Universal Installer and specify the response file:
Go to the MS-DOS command prompt.
Go to the directory where Oracle Universal Installer is installed.
Run the appropriate response file. For example:

C:\program files\oracle\oui\install> setup.exe -silent -nowelcome -responseFile filename

Where...     Description
filename     Identifies the full path of the specific response file.
-silent      Runs Oracle Universal Installer in complete silent mode. The
             Welcome window is suppressed automatically. This parameter is
             optional. If you use -silent, -nowelcome is not necessary.
-nowelcome   Suppresses the Welcome window that appears during installation.
             This parameter is optional.

Note 3:
-------
Unattended install of 9.2.0.5 on Win2K:

To perform a silent installation requiring no user intervention:

Make a copy of the response file template provided in the response directory
where you unzipped the patch set file. Edit the values for all fields
labeled as <Value Required> according to the comments and examples in the
template. Start Oracle Universal Installer release 10.1.0.2 located in the
unzipped area of the patch set. For example, Disk1\setup.exe. You should
pass the full path of the response file template you have edited locally as
the last argument, with your own value of ORACLE_HOME and FROM_LOCATION.
The syntax is as follows:

setup.exe -silent -responseFile ORACLE_BASE\ORACLE_HOME\response_file_path
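Before launching any of the silent installs above, it pays to verify that the copied response file has no unedited placeholders left. A minimal sketch (the file name /tmp/patch.rsp and its contents are hypothetical sample data, not an actual Oracle template):

```shell
# Fail fast if a response file still contains "<Value Required>" markers.
check_response_file() {
  if grep -q '<Value Required>' "$1"; then echo INCOMPLETE; else echo READY; fi
}

# Hypothetical, partially edited response file:
printf 'ORACLE_HOME="/u01/app/oracle/product/9.2.0"\nFROM_LOCATION=<Value Required>\n' > /tmp/patch.rsp
check_response_file /tmp/patch.rsp   # prints INCOMPLETE
```

Running the installer only when the check prints READY avoids a half-configured silent install.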

===============================
9.2 Oracle and UNIX and other OS:
===============================

You have the following options for creating your new Oracle database:

- Use the Database Configuration Assistant (DBCA). DBCA can be launched by
  the Oracle Universal Installer, depending upon the type of install that
  you select, and provides a graphical user interface (GUI) that guides you
  through the creation of a database. You can choose not to use DBCA, or you
  can launch it as a standalone tool at any time in the future to create a
  database. Run DBCA as:
  % dbca

- Create the database manually from a script. If you already have existing
  scripts for creating your database, you can still create your database
  manually. However, consider editing your existing script to take
  advantage of new Oracle features. Oracle provides a sample database
  creation script and a sample initialization parameter file with the
  database software files it distributes, both of which can be edited to
  suit your needs.

- Upgrade an existing database.

In all cases, the Oracle software needs to be installed on your host machine.

9.1.1 Operating system dependencies:
------------------------------------
First, determine which OS settings must be made for this version of Oracle,
and whether any patches must be installed.

For example, on Linux, glibc 2.1.3 is needed with Oracle version 8.1.7.
Linux can be quite critical with respect to libraries in combination with
Oracle. You may also need to adjust shmmax (the maximum size of a shared
memory segment) and similar kernel parameters:

# sysctl -w kernel.shmmax=100000000
# echo "kernel.shmmax = 100000000" >> /etc/sysctl.conf

Note: the following is generic, but it is also derived from an Oracle 8.1.7
installation on Linux RedHat 6.2. For the 8.1.7 installation, the Java
JDK 1.1.8 is also needed. It can be downloaded from www.blackdown.org

Download jdk-1.1.8_v3 jdk118_v3-glibc-2.1.3.tar.bz2
in /usr/local
tar xvif jdk118_v3-glibc-2.1.3.tar.bz2
ln -s /usr/local/jdk118_v3 /usr/local/java

9.1.2 Environment variables:
----------------------------
Make sure you have the following environment variables set:

ON UNIX:
========

Example 1:
----------
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE              (root of the Oracle software tree)
ORACLE_HOME=$ORACLE_BASE/product/8.1.5; export ORACLE_HOME   (directory holding the software of this instance)
ORACLE_SID=brdb; export ORACLE_SID                           (name of the current instance)
ORACLE_TERM=xterm, vt100, ansi, or similar; export ORACLE_TERM
ORA_NLSxx=$ORACLE_HOME/ocommon/nls/admin/data; export ORA_NLS  (NLS directory with data files for multiple languages)
NLS_LANG="Dutch_The Netherlands.WE8ISO8859P1"; export NLS_LANG (specifies the language, territory, and character set for client applications)
LD_LIBRARY_PATH=/u01/app/oracle/product/8.1.7/lib; export LD_LIBRARY_PATH
PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/bin; export PATH

Place these variables in the oracle user's profile file: .profile,
.bash_profile, etc.

Example 2:
----------
/dbs01                                            Db directory 1
/dbs01/app                                        constant
/dbs01/app/oracle                $ORACLE_BASE     Oracle base directory
/dbs01/app/oracle/admin          $ORACLE_ADMIN    Oracle admin directory
/dbs01/app/oracle/product        -                constant
/dbs01/app/oracle/product/817    $ORACLE_HOME     Oracle home directory

# LISTENER.ORA Network Configuration File: /dbs01/app/oracle/product/817/network/admin/listener.ora
# TNSNAMES.ORA Network Configuration File: /dbs01/app/oracle/product/817/network/admin/tnsnames.ora

Example 3:
----------
/dbs01/app/oracle
/dbs02/oradata
/dbs03/oradata
..
..
/var/opt/oracle
/opt/oracle/admin/bin

Example 4:
----------
Mount point  Device              Size (MB)  Purpose
/            /dev/md/dsk/d1          100    Unix root filesystem
/usr         /dev/md/dsk/d3         1200    Unix usr filesystem
/var         /dev/md/dsk/d4          200    Unix var filesystem
/home        /dev/md/dsk/d5          200    Unix home filesystem
/opt         /dev/md/dsk/d6         4700    Oracle_Home
/u01         /dev/md/dsk/d7         8700    Oracle datafiles
/u02         /dev/md/dsk/d8         8700    Oracle datafiles
/u03         /dev/md/dsk/d9         8700    Oracle datafiles
/u04         /dev/md/dsk/d10        8700    Oracle datafiles
/u05         /dev/md/dsk/d110       8700    Oracle datafiles
/u06         /dev/md/dsk/d120       8700    Oracle datafiles
/u07         /dev/md/dsk/d123       8650    Oracle datafiles

Example 5:
----------
initBENE.ora   /opt/oracle/product/8.0.6/dbs
tnsnames.ora   /opt/oracle/product/8.0.6/network/admin
listener.ora   /opt/oracle/product/8.0.6/network/admin
alert log      /var/opt/oracle/bene/bdump
oratab         /var/opt/oracle

Example 6:
----------
ORACLE_BASE  /u01/app/oracle
ORACLE_HOME  $ORACLE_BASE/product/10.1.0/db_1
ORACLE_PATH  /u01/app/oracle/product/10.1.0/db_1/bin:.
             Note: The period adds the current working directory to the
             search path.

ORACLE_SID   SAL1
ORAENV_ASK   NO
SQLPATH      /home:/home/oracle:/u01/oracle
TNS_ADMIN    $ORACLE_HOME/network/admin

TWO_TASK
Function: Specifies the default connect identifier to use in the connect
string. If this environment variable is set, you do not need to specify the
connect identifier in the connect string. For example, if the TWO_TASK
environment variable is set to sales, you can connect to a database using
the CONNECT username/password command rather than the
CONNECT username/password@sales command.
Syntax: Any connect identifier.
Example: PRODDB_TCP

To identify the SID and Oracle home directory for the instance that you want
to shut down, enter the following command:

Solaris:                 $ cat /var/opt/oracle/oratab
Other operating systems: $ cat /etc/oratab

ON NT/2000:
===========

SET ORACLE_BASE=G:\ORACLE
SET ORACLE_HOME=G:\ORACLE\ORA81
SET ORACLE_SID=AIRM
SET ORA_NLSxxx=G:\ORACLE\ORA81\ocommon\nls\admin\data
SET NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
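The oratab file mentioned above is a simple colon-separated list of SID:ORACLE_HOME:autostart-flag entries, with # for comments, so it is easy to query with awk. A small sketch (the file /tmp/oratab.demo and its entries below are hypothetical sample data):

```shell
# Build a sample oratab-format file, then list SID and ORACLE_HOME pairs,
# skipping comment lines.
cat > /tmp/oratab.demo <<'EOF'
# SID:ORACLE_HOME:startup-flag
SAL1:/u01/app/oracle/product/9.2.0:Y
TEST:/u01/app/oracle/product/8.1.7:N
EOF

awk -F: '/^[^#]/ {print $1, $2}' /tmp/oratab.demo
```

The same one-liner works against /var/opt/oracle/oratab (Solaris) or /etc/oratab on other systems.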

ON OpenVMS:
===========

When Oracle is installed on VMS, a root directory is chosen which is pointed
to by the logical name ORA_ROOT. This directory can be placed anywhere on
the VMS system. The majority of code, configuration files and command
procedures are found below this root directory.

When a new database is created, a new directory is created in the root
directory to store database-specific configuration files. This directory is
called [.DB_dbname]. It will normally hold the system tablespace data file
as well as the database-specific startup, shutdown and orauser files.

The Oracle environment for a VMS user is set up by running the appropriate
ORAUSER_dbname.COM file. This sets up the necessary command symbols and
logical names to access the various ORACLE utilities. Each database created
on a VMS system will have an ORAUSER file in its home directory, named
ORAUSER_dbname.COM; e.g. for a database SALES the file specification could be:

ORA_ROOT:[DB_SALES]ORAUSER_SALES.COM

To have the environment set up automatically on login, run this command file
in your login.com file.

To access SQLPLUS use the following command with a valid username and password:

$ SQLPLUS username/password

SQLDBA is also available on VMS and can be invoked similarly:

$ SQLDBA username/password

9.1.3 OFA directory structure:
------------------------------
Stick to OFA. An example for database PROD:

/opt/oracle/product/8.1.6
/opt/oracle/product/8.1.6/admin/PROD
/opt/oracle/product/8.1.6/admin/pfile
/opt/oracle/product/8.1.6/admin/adhoc
/opt/oracle/product/8.1.6/admin/bdump
/opt/oracle/product/8.1.6/admin/udump
/opt/oracle/product/8.1.6/admin/adump
/opt/oracle/product/8.1.6/admin/cdump
/opt/oracle/product/8.1.6/admin/create
/u02/oradata/PROD
/u03/oradata/PROD
/u04/oradata/PROD
etc..

Example mountpoints and disks:
------------------------------
Mount point  Device              Size (MB)  Purpose
/            /dev/md/dsk/d1          100    Unix root filesystem
/usr         /dev/md/dsk/d3         1200    Unix usr filesystem
/var         /dev/md/dsk/d4          200    Unix var filesystem
/home        /dev/md/dsk/d5          200    Unix home filesystem
/opt         /dev/md/dsk/d6         4700    Oracle_Home
/u01         /dev/md/dsk/d7         8700    Oracle datafiles
/u02         /dev/md/dsk/d8         8700    Oracle datafiles
/u03         /dev/md/dsk/d9         8700    Oracle datafiles
/u04         /dev/md/dsk/d10        8700    Oracle datafiles
/u05         /dev/md/dsk/d110       8700    Oracle datafiles
/u06         /dev/md/dsk/d120       8700    Oracle datafiles
/u07         /dev/md/dsk/d123       8650    Oracle datafiles
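The OFA admin tree above is just a fixed set of subdirectories, so it can be created in one loop with mkdir -p. A runnable sketch (/tmp/ofa_demo stands in for the real base directory /opt/oracle/product/8.1.6):

```shell
# Create an OFA-style admin tree for database PROD under a demo base dir.
base=/tmp/ofa_demo
for d in PROD pfile adhoc bdump udump adump cdump create; do
  mkdir -p "$base/admin/$d"
done
ls "$base/admin"
```

For a real install, point base at $ORACLE_HOME (or $ORACLE_BASE, depending on which OFA variant you follow) and create the /u0x/oradata/PROD directories the same way.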

9.1.4 Users and groups:
-----------------------
If you want to use OS authentication, the init.ora must contain:
remote_login_passwordfile=none     (password file authentication is selected via EXCLUSIVE)

Required groups in UNIX: group dba. It must be present in the /etc/group
file; often the group oinstall is needed as well.

groupadd dba
groupadd oinstall
groupadd oper

Now create the oracle user:

adduser -g oinstall -G dba -d /home/oracle oracle

# groupadd dba
# useradd oracle
# mkdir /usr/oracle
# mkdir /usr/oracle/9.0
# chown -R oracle:dba /usr/oracle
# touch /etc/oratab
# chown oracle:dba /etc/oratab
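Since the dba/oinstall groups must exist in the local /etc/group file, a quick existence check before running groupadd avoids duplicate-group errors. A sketch (the check is demonstrated with the "root" group, which exists on a typical Linux box; dba/oinstall are the names used above):

```shell
# Return success if the named group is defined in /etc/group.
group_exists() { cut -d: -f1 /etc/group | grep -qx "$1"; }

group_exists root && echo "root group present"
group_exists dba  || echo "dba group missing: run groupadd dba"
```

The same pattern with cut -d: -f1 /etc/passwd checks whether the oracle user already exists.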

9.1.5 Mount points and disks:
-----------------------------
Create the mount points:

mkdir /opt/u01
mkdir /opt/u02
mkdir /opt/u03
mkdir /opt/u04

For a production environment, these must be separate disks.
Now give ownership of these mount points to user oracle and group oinstall:

chown -R oracle:oinstall /opt/u01
chown -R oracle:oinstall /opt/u02
chown -R oracle:oinstall /opt/u03
chown -R oracle:oinstall /opt/u04

directories: drwxr-xr-x oracle dba
files      : -rw-r----- oracle dba
           : -rw-r--r-- oracle dba

chmod 644 *
chmod u+x filename
chmod ug+x filename

9.1.6 Test of user oracle:
--------------------------
Log in as user oracle and give the commands:

$ groups    shows the groups (oinstall, dba)
$ umask     should show 022; if not, put the line "umask 022" in the .profile

umask is the default mode of a file or directory when it is created.

rwxrwxrwx=777
rw-rw-rw-=666
rw-r--r--=644, which corresponds with umask 022

Now change the .profile or .bash_profile of the user oracle.
Place the environment variables of section 9.1 in the profile.
Log out and in as user oracle, and test the environment:

% env
% echo $variablename
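The umask arithmetic above (files created as 666 minus the mask, directories as 777 minus the mask) can be verified directly. A sketch, assuming GNU stat for the -c option (i.e. a Linux box; on other UNIXes use ls -l instead):

```shell
# With umask 022, a new file gets mode 644 and a new directory mode 755.
umask 022
tmpd=$(mktemp -d)
touch "$tmpd/file"
mkdir "$tmpd/dir"
file_mode=$(stat -c '%a' "$tmpd/file")
dir_mode=$(stat -c '%a' "$tmpd/dir")
echo "file=$file_mode dir=$dir_mode"   # prints file=644 dir=755
```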

9.1.7 Oracle Installer with 8.1.x on Linux:
-------------------------------------------
Log in as user oracle. Now run the Oracle installer:

Linux:
startx
cd /usr/local/src/Oracle8iR3
./runInstaller

or go to install/linux on the CD and run runIns.sh

A graphical setup follows. Answer the questions. The installer may ask you
to run scripts such as orainstRoot.sh and root.sh. To run these:
open a new window
su root
cd $ORACLE_HOME
./orainstRoot.sh

Installing a database on Unix:
------------------------------
$ export PATH=$PATH:$ORACLE_HOME/bin
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
$ dbca &

or

$ echo "db1:/usr/oracle/9.0:Y" >> /etc/oratab
$ cd $ORACLE_HOME/dbs
$ cat initdw.ora | sed s/"#db_name = MY_DB_NAME"/"db_name = db1"/ | sed s/#control_files/control_files/ > initdb1.ora

Start and create database:

$ export PATH=$PATH:$ORACLE_HOME/bin
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
$ export ORACLE_SID=db1
$ sqlplus /nolog <<!

connect / as sysdba
startup nomount
create database db1
!

This creates a default database with files in $ORACLE_HOME/dbs.
Now add the database metadata to actually make it useful:

$ sqlplus /nolog <<!
connect / as sysdba
@?/rdbms/admin/catalog     # E.g: /apps/oracle/product/9.2/rdbms/admin
@?/rdbms/admin/catproc
!

Now create a user and give it wide-ranging permissions:

$ sqlplus /nolog <<!
connect / as sysdba
create user myuser identified by password;
grant create session, create any table to myuser;
grant unlimited tablespace to myuser;
!

9.1.8 OS or Password Authentication:
------------------------------------
-- Preparing to Use OS Authentication

To enable authentication of an administrative user using the operating
system, you must do the following:
Create an operating system account for the user.
Add the user to the OSDBA or OSOPER operating system defined groups.
Ensure that the initialization parameter REMOTE_LOGIN_PASSWORDFILE is set to
NONE. This is the default value for this parameter.

A user can be authenticated, enabled as an administrative user, and
connected to a local database by typing one of the following SQL*Plus
commands:

CONNECT / AS SYSDBA
CONNECT / AS SYSOPER

For a remote database connection over a secure connection, the user must
also specify the net service name of the remote database:

CONNECT /@net_service_name AS SYSDBA
CONNECT /@net_service_name AS SYSOPER

OSDBA:
  unix   : dba
  windows: ORA_DBA
OSOPER:
  unix   : oper

  windows: ORA_OPER

-- Preparing to Use Password File Authentication

To enable authentication of an administrative user using password file
authentication, you must do the following:
Create an operating system account for the user.
If not already created, create the password file using the ORAPWD utility:

ORAPWD FILE=filename PASSWORD=password ENTRIES=max_users

Set the REMOTE_LOGIN_PASSWORDFILE initialization parameter to EXCLUSIVE.
Connect to the database as user SYS (or as another user with the
administrative privilege). If the user does not already exist in the
database, create the user. Grant the SYSDBA or SYSOPER system privilege to
the user:

GRANT SYSDBA to scott;

This statement adds the user to the password file, thereby enabling
connection AS SYSDBA. For example, user scott has been granted the SYSDBA
privilege, so he can connect as follows:

CONNECT scott/tiger AS SYSDBA

9.1.9 Create a 9i database:
---------------------------
Step 1: Decide on Your Instance Identifier (SID)
Step 2: Establish the Database Administrator Authentication Method
Step 3: Create the Initialization Parameter File
Step 4: Connect to the Instance
Step 5: Start the Instance
Step 6: Issue the CREATE DATABASE Statement
Step 7: Create Additional Tablespaces
Step 8: Run Scripts to Build Data Dictionary Views
Step 9: Run Scripts to Install Additional Options (Optional)
Step 10: Create a Server Parameter File (Recommended)
Step 11: Back Up the Database

Step 1:
-------

% ORACLE_SID=ORATEST; export ORACLE_SID

Step 2: see above
-----------------

Step 3: init.ora
----------------

Note DB_CACHE_SIZE 10g:
Parameter type: Big integer
Syntax: DB_CACHE_SIZE = integer [K | M | G]
Default value:
  If SGA_TARGET is set: if the parameter is not specified, then the default
  is 0 (internally determined by the Oracle Database). If the parameter is
  specified, then the user-specified value indicates a minimum value for the
  memory pool.
  If SGA_TARGET is not set, then the default is either 48 MB or
  4 MB * number of CPUs * granule size, whichever is greater.
Modifiable: ALTER SYSTEM
Basic: No

Oracle10g Obsolete Oracle SGA Parameters

Using AMM via the sga_target parameter renders several parameters obsolete.
Remember, you can continue to perform manual SGA tuning if you like, but if
you set sga_target, then these parameters will default to zero:

db_cache_size - This parameter determines the number of database block
buffers in the Oracle SGA and is the single most important parameter in
Oracle memory.

db_xk_cache_size - This set of parameters (with x replaced by 2, 4, 8, 16,
or 32) sets the size for specialized areas of the buffer area used to store
data from tablespaces with varying blocksizes. When these are set, they
impose a hard limit on the maximum size of their respective areas.

db_keep_cache_size - This is used to store small tables that perform full
table scans. This data buffer pool was a sub-pool of db_block_buffers in
Oracle8i.

db_recycle_cache_size - This is reserved for table blocks from very large
tables that perform full table scans. This was buffer_pool_keep in Oracle8i.

large_pool_size - This is a special area of the shared pool that is reserved
for SGA usage when using the multi-threaded server. The large pool is used
for parallel query and RMAN processing, as well as setting the size of the
Java pool.

log_buffer - This parameter determines the amount of memory to allocate for
Oracle's redo log buffers.
If there is a high amount of update activity, the log_buffer should be
allocated more space.

shared_pool_size - This parameter defines the pool that is shared by all
users in the system, including SQL areas and data dictionary caching. A
large shared_pool_size is not always better than a smaller shared pool. If
your application contains non-reusable SQL, you may get better performance
with a smaller shared pool.

java_pool_size - This parameter specifies the size of the memory area used
by Java, which is similar to the shared pool used by SQL and PL/SQL.

streams_pool_size - This is a new area in Oracle Database 10g that is used
to provide buffer areas for the streams components of Oracle.

This is exactly the same automatic tuning principle behind the Oracle9i
pga_aggregate_target parameter that made these parameters obsolete. If you
set pga_aggregate_target, then these parameters are ignored:

sort_area_size - This parameter determines the memory region that is
allocated for in-memory sorting. When the v$sysstat value sorts (disk)
becomes excessive, you may want to allocate additional memory.

hash_area_size - This parameter determines the memory region reserved for
hash joins. Starting with Oracle9i, Oracle Corporation does not recommend
using hash_area_size unless the instance is configured with the shared
server option. Oracle recommends that you enable automatic sizing of SQL
work areas by setting pga_aggregate_target. hash_area_size is retained only
for backward compatibility purposes.
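The DB_CACHE_SIZE default rule quoted above (when SGA_TARGET is unset: the greater of 48 MB and a per-CPU amount) can be sketched as simple shell arithmetic. The CPU count below is a sample value, and granule rounding is ignored, so this is only an illustration of the "whichever is greater" comparison, not Oracle's exact internal computation:

```shell
# Lower bound for the DB_CACHE_SIZE default: max(48 MB, 4 MB * CPU count).
cpus=16                                   # sample CPU count
candidate_a=$((48 * 1024 * 1024))         # 48 MB
candidate_b=$((4 * 1024 * 1024 * cpus))   # 4 MB per CPU
if [ "$candidate_b" -gt "$candidate_a" ]; then
  chosen=$candidate_b
else
  chosen=$candidate_a
fi
echo "default lower bound: $chosen bytes"
```

With 16 CPUs the per-CPU term (64 MB) wins over the 48 MB floor.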

Sample Initialization Parameter File

# Cache and I/O
DB_BLOCK_SIZE=4096
DB_CACHE_SIZE=20971520
# Cursors and Library Cache
CURSOR_SHARING=SIMILAR
OPEN_CURSORS=300
# Diagnostics and Statistics
BACKGROUND_DUMP_DEST=/vobs/oracle/admin/mynewdb/bdump
CORE_DUMP_DEST=/vobs/oracle/admin/mynewdb/cdump
TIMED_STATISTICS=TRUE
USER_DUMP_DEST=/vobs/oracle/admin/mynewdb/udump
# Control File Configuration
CONTROL_FILES=("/vobs/oracle/oradata/mynewdb/control01.ctl",
               "/vobs/oracle/oradata/mynewdb/control02.ctl",
               "/vobs/oracle/oradata/mynewdb/control03.ctl")
# Archive
LOG_ARCHIVE_DEST_1='LOCATION=/vobs/oracle/oradata/mynewdb/archive'
LOG_ARCHIVE_FORMAT=%t_%s.dbf
LOG_ARCHIVE_START=TRUE
# Shared Server
# Uncomment and use the first DISPATCHERS parameter below when your listener
# is configured for SSL (listener.ora and sqlnet.ora)
# DISPATCHERS = "(PROTOCOL=TCPS)(SER=MODOSE)",
#               "(PROTOCOL=TCPS)(PRE=oracle.aurora.server.SGiopServer)"
DISPATCHERS="(PROTOCOL=TCP)(SER=MODOSE)",
            "(PROTOCOL=TCP)(PRE=oracle.aurora.server.SGiopServer)",
            (PROTOCOL=TCP)
# Miscellaneous
COMPATIBLE=9.2.0

DB_NAME=mynewdb # Distributed, Replication and Snapshot DB_DOMAIN=us.oracle.com REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE # Network Registration INSTANCE_NAME=mynewdb # Pools JAVA_POOL_SIZE=31457280 LARGE_POOL_SIZE=1048576 SHARED_POOL_SIZE=52428800 # Processes and Sessions PROCESSES=150 # Redo Log and Recovery FAST_START_MTTR_TARGET=300 # Resource Manager RESOURCE_MANAGER_PLAN=SYSTEM_PLAN # Sort, Hash Joins, Bitmap Indexes SORT_AREA_SIZE=524288 # Automatic Undo Management UNDO_MANAGEMENT=AUTO UNDO_TABLESPACE=undotbs Reasonable 10g init.ora: -----------------------########################################### # Cache and I/O ########################################### db_block_size=8192 db_file_multiblock_read_count=16 ########################################### # Cursors and Library Cache ########################################### open_cursors=300 ########################################### # Database Identification ########################################### db_domain=antapex.org db_name=test10g ########################################### # Diagnostics and Statistics ########################################### background_dump_dest=C:\oracle/admin/test10g/bdump core_dump_dest=C:\oracle/admin/test10g/cdump user_dump_dest=C:\oracle/admin/test10g/udump ###########################################

# File Configuration
###########################################
control_files=("C:\oracle\oradata\test10g\control01.ctl",
               "C:\oracle\oradata\test10g\control02.ctl",
               "C:\oracle\oradata\test10g\control03.ctl")
db_recovery_file_dest=C:\oracle/flash_recovery_area
db_recovery_file_dest_size=2147483648
###########################################
# Job Queues
###########################################
job_queue_processes=10
###########################################
# Miscellaneous
###########################################
compatible=10.2.0.1.0
###########################################
# Processes and Sessions
###########################################
processes=150
###########################################
# SGA Memory
###########################################
sga_target=287309824
###########################################
# Security and Auditing
###########################################
audit_file_dest=C:\oracle/admin/test10g/adump
remote_login_passwordfile=EXCLUSIVE
###########################################
# Shared Server
###########################################
dispatchers="(PROTOCOL=TCP) (SERVICE=test10gXDB)"
###########################################
# Sort, Hash Joins, Bitmap Indexes
###########################################
pga_aggregate_target=95420416
###########################################
# System Managed Undo and Rollback Segments
###########################################
undo_management=AUTO
undo_tablespace=UNDOTBS1

LOG_ARCHIVE_DEST=c:\oracle\oradata\log
LOG_ARCHIVE_FORMAT='arch_%t_%s_%r.dbf'

Flash_recovery_area: the location where RMAN stores disk-based backups.
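Byte-valued settings such as sga_target and db_recovery_file_dest_size above are easier to sanity-check when converted to megabytes. A small shell sketch, with the values copied from the sample file:

```shell
# Convert the byte-valued init.ora settings above to megabytes (1 MB = 1048576
# bytes) for a quick sanity check of the sample file.
for setting in sga_target=287309824 \
               pga_aggregate_target=95420416 \
               db_recovery_file_dest_size=2147483648
do
  name=${setting%%=*}
  bytes=${setting#*=}
  echo "$name = $(( bytes / 1048576 )) MB"
done
```

This confirms the flash recovery area is sized at exactly 2048 MB (2 GB) and the SGA target at 274 MB.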

Step 4: Connect to the Instance:
--------------------------------

Start SQL*Plus and connect to your Oracle instance AS SYSDBA:

$ SQLPLUS /nolog
CONNECT SYS/password AS SYSDBA

Step 5: Start the Instance:
---------------------------
Start an instance without mounting a database. Typically, you do this only during
database creation or while performing maintenance on the database.
Use the STARTUP command with the NOMOUNT option. In this example, because the
initialization parameter file is stored in the default location, you are not
required to specify the PFILE clause:

STARTUP NOMOUNT

At this point, there is no database. Only the SGA is created and background
processes are started, in preparation for the creation of a new database.

Step 6: Issue the CREATE DATABASE Statement:
--------------------------------------------
To create the new database, use the CREATE DATABASE statement. The following
statement creates database mynewdb:

CREATE DATABASE mynewdb
   USER SYS IDENTIFIED BY pz6r58
   USER SYSTEM IDENTIFIED BY y1tz5p
   LOGFILE GROUP 1 ('/vobs/oracle/oradata/mynewdb/redo01.log') SIZE 100M,
           GROUP 2 ('/vobs/oracle/oradata/mynewdb/redo02.log') SIZE 100M,
           GROUP 3 ('/vobs/oracle/oradata/mynewdb/redo03.log') SIZE 100M
   MAXLOGFILES 5
   MAXLOGMEMBERS 5
   MAXLOGHISTORY 1
   MAXDATAFILES 100
   MAXINSTANCES 1
   CHARACTER SET US7ASCII
   NATIONAL CHARACTER SET AL16UTF16
   DATAFILE '/vobs/oracle/oradata/mynewdb/system01.dbf' SIZE 325M REUSE
   EXTENT MANAGEMENT LOCAL
   DEFAULT TEMPORARY TABLESPACE tempts1
      TEMPFILE '/vobs/oracle/oradata/mynewdb/temp01.dbf' SIZE 20M REUSE
   UNDO TABLESPACE undotbs
      DATAFILE '/vobs/oracle/oradata/mynewdb/undotbs01.dbf' SIZE 200M REUSE
      AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED;
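Step 5 works without a PFILE clause because the instance searches default locations. On Unix, STARTUP looks in $ORACLE_HOME/dbs for spfile$ORACLE_SID.ora, then spfile.ora, then init$ORACLE_SID.ora. That lookup can be mimicked with a short shell sketch (the default paths below are illustrative only):

```shell
# Sketch of the default parameter-file lookup an instance performs on Unix when
# STARTUP is issued without a PFILE clause: spfile$SID.ora, then spfile.ora,
# then init$SID.ora, all under $ORACLE_HOME/dbs. Defaults are illustrative.
DBS=${ORACLE_HOME:-/u01/app/oracle/product/9.2}/dbs
SID=${ORACLE_SID:-mynewdb}

find_parameter_file() {
  for f in "$DBS/spfile$SID.ora" "$DBS/spfile.ora" "$DBS/init$SID.ora"; do
    if [ -f "$f" ]; then
      echo "$f"
      return 0
    fi
  done
  return 1
}

if PFILE=$(find_parameter_file); then
  echo "instance would start with: $PFILE"
else
  echo "no default parameter file for $SID; STARTUP would need PFILE=..."
fi
```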

Oracle 10g create statement:

CREATE DATABASE playdwhs
   USER SYS IDENTIFIED BY cactus
   USER SYSTEM IDENTIFIED BY cactus
   LOGFILE GROUP 1 ('/dbms/tdbaplay/playdwhs/recovery/redo_logs/redo01.log') SIZE 100M,
           GROUP 2 ('/dbms/tdbaplay/playdwhs/recovery/redo_logs/redo02.log') SIZE 100M,

           GROUP 3 ('/dbms/tdbaplay/playdwhs/recovery/redo_logs/redo03.log') SIZE 100M
   MAXLOGFILES 5
   MAXLOGMEMBERS 5
   MAXLOGHISTORY 1
   MAXDATAFILES 100
   MAXINSTANCES 1
   CHARACTER SET US7ASCII
   NATIONAL CHARACTER SET AL16UTF16
   DATAFILE '/dbms/tdbaplay/playdwhs/database/default/system01.dbf' SIZE 500M REUSE
   EXTENT MANAGEMENT LOCAL
   SYSAUX DATAFILE '/dbms/tdbaplay/playdwhs/database/default/sysaux01.dbf' SIZE 300M REUSE
   DEFAULT TEMPORARY TABLESPACE temp
      TEMPFILE '/dbms/tdbaplay/playdwhs/database/default/temp01.dbf' SIZE 1000M REUSE
   UNDO TABLESPACE undotbs
      DATAFILE '/dbms/tdbaplay/playdwhs/database/default/undotbs01.dbf' SIZE 1000M REUSE
      AUTOEXTEND ON MAXSIZE UNLIMITED;

CONNECT SYS/password AS SYSDBA

-- create a user tablespace to be assigned as the default tablespace for users
CREATE TABLESPACE users LOGGING
   DATAFILE '/u01/oracle/oradata/mynewdb/users01.dbf' SIZE 25M REUSE
   AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
   EXTENT MANAGEMENT LOCAL;

-- create a tablespace for indexes, separate from user tablespace
CREATE TABLESPACE indx LOGGING
   DATAFILE '/u01/oracle/oradata/mynewdb/indx01.dbf' SIZE 25M REUSE
   AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
   EXTENT MANAGEMENT LOCAL;

For information about creating tablespaces, see Chapter 8, "Managing Tablespaces".

Step 9: Run Scripts to Build Data Dictionary Views:
---------------------------------------------------
Run the scripts necessary to build views, synonyms, and PL/SQL packages:

CONNECT SYS/password AS SYSDBA
@/u01/oracle/rdbms/admin/catalog.sql
@/u01/oracle/rdbms/admin/catproc.sql
EXIT

catalog.sql    All databases              Creates the data dictionary and public
                                          synonyms for many of its views.
                                          Grants PUBLIC access to the synonyms.
catproc.sql    All databases              Runs all scripts required for, or used
                                          with, PL/SQL.
catclust.sql   Real Application Clusters  Creates Real Application Clusters data
                                          dictionary views.

Oracle supplies other scripts that create additional structures you can use in
managing your database and creating database applications. These scripts are
listed in Table B-2.

See Also:

Your operating system-specific Oracle documentation for the exact names and
locations of these scripts on your operating system.

Table B-2 Creating Additional Data Dictionary Structures

Script name (Needed for; Run by): Description

catblock.sql (Performance management; SYS):
    Creates views that can dynamically display lock dependency graphs.
catexp7.sql (Exporting data to Oracle7; SYS):
    Creates the dictionary views needed for the Oracle7 Export utility to export
    data from the Oracle Database in Oracle7 Export file format.
caths.sql (Heterogeneous Services; SYS):
    Installs packages for administering heterogeneous services.
catio.sql (Performance management; SYS):
    Allows I/O to be traced on a table-by-table basis.
catoctk.sql (Security; SYS):
    Creates the Oracle Cryptographic Toolkit package.
catqueue.sql (Advanced Queuing):
    Creates the dictionary objects required for Advanced Queuing.
catrep.sql (Oracle Replication; SYS):
    Runs all SQL scripts for enabling database replication.
catrman.sql (Recovery Manager; RMAN or any user with GRANT_RECOVERY_CATALOG_OWNER role):
    Creates recovery manager tables and views (schema) to establish an external
    recovery catalog for the backup, restore, and recovery functionality
    provided by the Recovery Manager (RMAN) utility.
dbmsiotc.sql (Storage management; any user):
    Analyzes chained rows in index-organized tables.
dbmsotrc.sql (Performance management; SYS or SYSDBA):
    Enables and disables generation of Oracle Trace output.
dbmspool.sql (Performance management; SYS or SYSDBA):
    Enables the DBA to lock PL/SQL packages, SQL statements, and triggers into
    the shared pool.
userlock.sql (Concurrency control; SYS or SYSDBA):
    Provides a facility for user-named locks that can be used in a local or
    clustered environment to aid in sequencing application actions.
utlbstat.sql and utlestat.sql (Performance monitoring; SYS):
    Respectively start and stop collecting performance tuning statistics.
utlchn1.sql (Storage management; any user):
    For use with the Oracle Database. Creates tables for storing the output of
    the ANALYZE command with the CHAINED ROWS option. Can handle both physical
    and logical rowids.
utlconst.sql (Year 2000 compliance; any user):
    Provides functions to validate that CHECK constraints on date columns are
    year 2000 compliant.
utldtree.sql (Metadata management; any user):
    Creates tables and views that show dependencies between objects.
utlexpt1.sql (Constraints; any user):
    For use with the Oracle Database. Creates the default table (EXCEPTIONS) for
    storing exceptions from enabling constraints. Can handle both physical and
    logical rowids.
utlip.sql (PL/SQL; SYS):
    Used primarily for upgrade and downgrade operations. It invalidates all
    existing PL/SQL modules by altering certain dictionary tables so that
    subsequent recompilations will occur in the format required by the database.
    It also reloads the packages STANDARD and DBMS_STANDARD, which are necessary
    for any PL/SQL compilations.
utlirp.sql (PL/SQL; SYS):
    Used to change from 32-bit to 64-bit word size or vice versa. This script
    recompiles existing PL/SQL modules in the format required by the new
    database. It first alters some data dictionary tables. Then it reloads the
    packages STANDARD and DBMS_STANDARD, which are necessary for using PL/SQL.
    Finally, it triggers a recompilation of all PL/SQL modules, such as
    packages, procedures, and types.
utllockt.sql (Performance monitoring; SYS or SYSDBA):
    Displays a lock wait-for graph, in tree structure format.
utlpwdmg.sql (Security; SYS or SYSDBA):
    Creates PL/SQL functions for default password complexity verification. Sets
    the default password profile parameters and enables password management
    features.
utlrp.sql (PL/SQL; SYS):
    Recompiles all existing PL/SQL modules that were previously in an INVALID
    state, such as packages, procedures, and types.
utlsampl.sql (Examples; SYS or any user with DBA role):
    Creates sample tables, such as emp and dept, and users, such as scott.
utlscln.sql (Oracle Replication; any user):
    Copies a snapshot schema from another snapshot site.
utltkprf.sql (Performance management; SYS):
    Creates the TKPROFER role to allow the TKPROF profiling utility to be run by
    non-DBA users.
utlvalid.sql (Partitioned tables; any user):
    Creates tables required for storing output of ANALYZE TABLE ... VALIDATE
    STRUCTURE of a partitioned table.
utlxplan.sql (Performance management; any user):
    Creates the output table (PLAN_TABLE) used by the EXPLAIN PLAN statement.

+++++++

Please create on pl003 the following two instances:
- playdwhs
- accpdwhs
And on pl101 the following instance:
- proddwhs

Please follow the current standard for filesystems. That is, all of these
databases go on volume group roca_vg, with the following mount points
underneath:

/dbms/tdba[env]/[env]dwhs/admin
/dbms/tdba[env]/[env]dwhs/database
/dbms/tdba[env]/[env]dwhs/recovery
/dbms/tdba[env]/[env]dwhs/export

/dev/fslv32  admin
/dev/fslv33  database
/dev/fslv34  recovery
/dev/fslv35  export

1. FS:                               SIZE(G):  LPs:  PPs:
/dbms/tdbaplay/playdwhs/admin        0.25      4     8
/dbms/tdbaplay/playdwhs/database     15        240   480
/dbms/tdbaplay/playdwhs/recovery     4         64    128
/dbms/tdbaplay/playdwhs/export       5         80    160

/dbms/tdbaaccp/accpdwhs/admin        0.25      4     8
/dbms/tdbaaccp/accpdwhs/database     15        240   480
/dbms/tdbaaccp/accpdwhs/recovery     4         64    128
/dbms/tdbaaccp/accpdwhs/export       5         80    160

/dbms/tdbaprod/proddwhs/admin        0.25      4     8
/dbms/tdbaprod/proddwhs/database     15        240   480
/dbms/tdbaprod/proddwhs/recovery     4         64    128
/dbms/tdbaprod/proddwhs/export                 80    160
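The SIZE/LP/PP figures in the request above are mutually consistent if the volume group uses a 64 MB physical partition with two-way mirroring; that is an inference from the numbers, not something stated in the request. A quick shell check:

```shell
# Verify the filesystem sizing figures above: assuming (not stated in the
# request) a 64 MB partition size and mirroring, size = LPs * 64 MB and
# PPs = 2 * LPs.
check_row() {
  lps=$1; pps=$2; size_mb=$3   # size given in MB to avoid fractional GB
  if [ $(( lps * 64 )) -ne "$size_mb" ] || [ $(( lps * 2 )) -ne "$pps" ]; then
    echo "inconsistent row: LPs=$lps PPs=$pps size=${size_mb}MB"
    return 1
  fi
  echo "LPs=$lps PPs=$pps size=${size_mb}MB: consistent"
}

check_row 4   8   256     # 0.25 GB admin
check_row 240 480 15360   # 15 GB database
check_row 64  128 4096    # 4 GB recovery
check_row 80  160 5120    # 5 GB export
```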

CREATE DATABASE mynewdb
   USER SYS IDENTIFIED BY pz6r58
   USER SYSTEM IDENTIFIED BY y1tz5p
   LOGFILE GROUP 1 ('/vobs/oracle/oradata/mynewdb/redo01.log') SIZE 100M,
           GROUP 2 ('/vobs/oracle/oradata/mynewdb/redo02.log') SIZE 100M,
           GROUP 3 ('/vobs/oracle/oradata/mynewdb/redo03.log') SIZE 100M
   MAXLOGFILES 5
   MAXLOGMEMBERS 5
   MAXLOGHISTORY 1
   MAXDATAFILES 100
   MAXINSTANCES 1
   CHARACTER SET US7ASCII
   NATIONAL CHARACTER SET AL16UTF16
   DATAFILE '/vobs/oracle/oradata/mynewdb/system01.dbf' SIZE 325M REUSE
   EXTENT MANAGEMENT LOCAL
   DEFAULT TEMPORARY TABLESPACE tempts1
      TEMPFILE '/vobs/oracle/oradata/mynewdb/temp01.dbf' SIZE 20M REUSE
   UNDO TABLESPACE undotbs
      DATAFILE '/vobs/oracle/oradata/mynewdb/undotbs01.dbf' SIZE 200M REUSE
      AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED;

Or use a character set such as AL32UTF8.

Or use mirrored logs, with:

   LOGFILE
     GROUP 1 ('/disk03/oracle/oradata/DB1/redo1a.log',
              '/disk04/oracle/oradata/DB1/redo1b.log') SIZE 100M,
     GROUP 2 ('/disk05/oracle/oradata/DB1/redo2a.log',
              '/disk06/oracle/oradata/DB1/redo2b.log') SIZE 100M

+++++++

Step 7: Create Additional Tablespaces:
--------------------------------------
To make the database functional, you need to create additional files and
tablespaces for users. The following sample script creates some additional
tablespaces:

CONNECT SYS/password AS SYSDBA

-- create a user tablespace to be assigned as the default tablespace for users
CREATE TABLESPACE users LOGGING
   DATAFILE '/vobs/oracle/oradata/mynewdb/users01.dbf' SIZE 25M REUSE
   AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
   EXTENT MANAGEMENT LOCAL;

-- create a tablespace for indexes, separate from user tablespace
CREATE TABLESPACE indx LOGGING
   DATAFILE '/vobs/oracle/oradata/mynewdb/indx01.dbf' SIZE 25M REUSE
   AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
   EXTENT MANAGEMENT LOCAL;

EXIT

Step 8: Run Scripts to Build Data Dictionary Views:
---------------------------------------------------
Run the scripts necessary to build views, synonyms, and PL/SQL packages:

CONNECT SYS/password AS SYSDBA
@/vobs/oracle/rdbms/admin/catalog.sql
@/vobs/oracle/rdbms/admin/catproc.sql
EXIT

Do not forget to run, as SYSTEM, the script /sqlplus/admin/pupbld.sql:

@/dbms/tdbaaccp/ora10g/home/sqlplus/admin/pupbld.sql
@/dbms/tdbaaccp/ora10g/home/rdbms/admin/catexp.sql

The following table contains descriptions of the scripts:

Script       Description
CATALOG.SQL: Creates the views of the data dictionary tables, the dynamic
             performance views, and public synonyms for many of the views.
             Grants PUBLIC access to the synonyms.
CATPROC.SQL: Runs all scripts required for or used with PL/SQL.

Step 10: Create a Server Parameter File (Recommended):
------------------------------------------------------
Oracle recommends you create a server parameter file as a dynamic means of
maintaining initialization parameters. The following script creates a server
parameter file from the text initialization parameter file and writes it to the
default location. The instance is shut down, then restarted using the server
parameter file (in the default location).

CONNECT SYS/password AS SYSDBA
-- create the server parameter file
CREATE SPFILE='/vobs/oracle/dbs/spfilemynewdb.ora'
  FROM PFILE='/vobs/oracle/admin/mynewdb/scripts/init.ora';
SHUTDOWN
-- this time you will start up using the server parameter file
CONNECT SYS/password AS SYSDBA
STARTUP
EXIT

CREATE SPFILE='/opt/app/oracle/product/9.2/dbs/spfileOWS.ora'
  FROM PFILE='/opt/app/oracle/admin/OWS/pfile/init.ora';

CREATE SPFILE='/opt/app/oracle/product/9.2/dbs/spfilePEGACC.ora'
  FROM PFILE='/opt/app/oracle/admin/PEGACC/scripts/init.ora';

CREATE SPFILE='/opt/app/oracle/product/9.2/dbs/spfilePEGTST.ora'
  FROM PFILE='/opt/app/oracle/admin/PEGTST/scripts/init.ora';

9.10 Oracle 9i licenses:
------------------------

Setting License Parameters

Oracle no longer offers licensing by the number of concurrent sessions.
Therefore the LICENSE_MAX_SESSIONS and LICENSE_SESSIONS_WARNING initialization
parameters have been deprecated.

- Named user licensing:

If you use named user licensing, Oracle can help you enforce this form of
licensing. You can set a limit on the number of users created in the database.
Once this limit is reached, you cannot create more users.

Note: This mechanism assumes that each person accessing the database has a
unique user name and that no people share a user name. Therefore, so that named
user licensing can help you ensure compliance with your Oracle license
agreement, do not allow multiple users to log in using the same user name.

To limit the number of users created in a database, set the LICENSE_MAX_USERS
initialization parameter in the database's initialization parameter file, as
shown in the following example:

LICENSE_MAX_USERS = 200

- Per-processor licensing:

Oracle encourages customers to license the database on the per-processor
licensing model. With this licensing method you count up the number of CPUs in
your computer, and multiply that number by the licensing cost of the database
and database options you need.

Currently the Standard (STD) edition of the database is priced at $15,000 per
processor, and the Enterprise (EE) edition is priced at $40,000 per processor.
The RAC feature is $20,000 per processor extra, and you need to add 22 percent
annually for the support contract.

It's possible to license the database on a per-user basis, which makes financial
sense if there will never be many users accessing the database. However, the
licensing method can't be changed after it is initially licensed. So if the
business grows and requires significantly more users to access the database, the
costs could exceed the costs under the per-processor model.

You also have to understand what Oracle Corporation considers to be a user for
licensing purposes. If 1,000 users access the database through an application
server which only makes five connections to the database, then Oracle will
require that either 1,000 user licenses be purchased, or that the database be
licensed via the per-processor pricing model.

The Oracle STD edition is licensed at $300 per user (with a five user minimum),
and the EE edition costs $800 per user (with a 25 user minimum). There is still
an annual support fee of 22 percent, which should be budgeted in addition to
the licensing fees.

If the support contract is not paid each year, then the customer is not licensed
to upgrade to the latest version of the database, and must re-purchase all of
the licenses over again in order to upgrade versions. This section only gives
you a brief overview of the available licensing options and costs, so if you
have additional questions you really should contact an Oracle sales
representative.

Note about 10g init.ora:
------------------------
PARALLEL_MAX_SERVERS=(at least the number of apply or capture processes)
    Each capture process and apply process may use multiple parallel execution
    servers. The apply process by default needs two parallel servers, so this
    parameter needs to be set to at least 2, even for a single non-parallel
    apply process. Specify a value for this parameter to ensure that there are
    enough parallel execution servers. In our installation we went for 12 apply
    servers, so we increased parallel_max_servers above this figure of 12.

_kghdsidx_count=1
    This parameter prevents the shared_pool from being divided among CPUs.

LOG_PARALLELISM=1
    This parameter must be set to 1 at each database that captures events.

Parameters set using the DBMS_CAPTURE_ADM package:

Using the DBMS_CAPTURE_ADM.SET_PARAMETER procedure, there are 3 parameters that
are commonly used to affect an installation:

PARALLELISM=3
    There may be only one logminer session for the whole ruleset, and only one
    enqueuer process that will push the objects. You can safely define as many
    as 3 execution capture processes per CPU.

_CHECKPOINT_FREQUENCY=1
    Increases the frequency of logminer checkpoints, especially in a database
    with significant LOB or DDL activity. A logminer checkpoint is requested by
    default every 10Mb of redo mined.

_SGA_SIZE
    The amount of memory available from the shared pool for logminer
    processing. The default amount of shared_pool memory allocated to logminer
    is 10Mb. Increase this value especially in environments where large LOBs
    are processed.

9.11 Older Database installations:
----------------------------------
CREATE DATABASE Examples on 8.x

The easiest way to create an 8i or 9i database is to use the "Database
Configuration Assistant". Using this tool, you are able to create a database
and set up the NET configuration and the listener, in a graphical environment.
It is also possible to use a script running in sqlplus (8i, 9i) or svrmgrl
(only in 8i).

Charactersets that are used a lot in Europe: WE8ISO8859P15, WE8MSWIN1252

Example 1:
----------
$ SQLPLUS /nolog

CONNECT username/password AS sysdba
STARTUP NOMOUNT PFILE=<path to init.ora>

-- Create database
CREATE DATABASE rbdb1
    CONTROLFILE REUSE
    LOGFILE
        '/u01/oracle/rbdb1/redo01.log' SIZE 1M REUSE,
        '/u01/oracle/rbdb1/redo02.log' SIZE 1M REUSE,
        '/u01/oracle/rbdb1/redo03.log' SIZE 1M REUSE,
        '/u01/oracle/rbdb1/redo04.log' SIZE 1M REUSE
    DATAFILE '/u01/oracle/rbdb1/system01.dbf' SIZE 10M REUSE
       AUTOEXTEND ON NEXT 10M MAXSIZE 200M
    CHARACTER SET WE8ISO8859P1;

run catalog.sql
run catproc.sql

-- Create a temporary rollback segment in the SYSTEM tablespace
CREATE ROLLBACK SEGMENT rb_temp STORAGE (INITIAL 100K NEXT 250K);

-- Alter the temporary rollback segment online before proceeding
ALTER ROLLBACK SEGMENT rb_temp ONLINE;

-- Create additional tablespaces ...
-- RBS:   For rollback segments
-- USERS: Create user sets this as the default tablespace
-- TEMP:  Create user sets this as the temporary tablespace
CREATE TABLESPACE rbs
    DATAFILE '/u01/oracle/rbdb1/rbs01.dbf' SIZE 5M REUSE
    AUTOEXTEND ON NEXT 5M MAXSIZE 150M;

CREATE TABLESPACE users
    DATAFILE '/u01/oracle/rbdb1/users01.dbf' SIZE 3M REUSE
    AUTOEXTEND ON NEXT 5M MAXSIZE 150M;

CREATE TABLESPACE temp
    DATAFILE '/u01/oracle/rbdb1/temp01.dbf' SIZE 2M REUSE
    AUTOEXTEND ON NEXT 5M MAXSIZE 150M;

-- Create rollback segments.
CREATE ROLLBACK SEGMENT rb1 STORAGE (INITIAL 50K NEXT 250K) TABLESPACE rbs;
CREATE ROLLBACK SEGMENT rb2 STORAGE (INITIAL 50K NEXT 250K) TABLESPACE rbs;
CREATE ROLLBACK SEGMENT rb3 STORAGE (INITIAL 50K NEXT 250K) TABLESPACE rbs;
CREATE ROLLBACK SEGMENT rb4 STORAGE (INITIAL 50K NEXT 250K) TABLESPACE rbs;

-- Bring new rollback segments online and drop the temporary system one
ALTER ROLLBACK SEGMENT rb1 ONLINE;
ALTER ROLLBACK SEGMENT rb2 ONLINE;
ALTER ROLLBACK SEGMENT rb3 ONLINE;
ALTER ROLLBACK SEGMENT rb4 ONLINE;

ALTER ROLLBACK SEGMENT rb_temp OFFLINE;
DROP ROLLBACK SEGMENT rb_temp;

Example 2:
----------
connect internal
startup nomount pfile=/disk00/oracle/software/7.3.4/dbs/initDB1.ora

create database "DB1"
    maxinstances 2
    maxlogfiles 32
    maxdatafiles 254
    character set "US7ASCII"
    datafile '/disk02/oracle/oradata/DB1/system01.dbf' size 128M
      autoextend on next 8M maxsize 256M
    logfile
      group 1 ('/disk03/oracle/oradata/DB1/redo1a.log',
               '/disk04/oracle/oradata/DB1/redo1b.log') size 100M,
      group 2 ('/disk05/oracle/oradata/DB1/redo2a.log',
               '/disk06/oracle/oradata/DB1/redo2b.log') size 100M

REM * install data dictionary views
@/disk00/oracle/software/7.3.4/rdbms/admin/catalog.sql
@/disk00/oracle/software/7.3.4/rdbms/admin/catproc.sql

create rollback segment SYSROLL tablespace system
    storage (initial 2M next 2M minextents 2 maxextents 255);

alter rollback segment SYSROLL online;

create tablespace RBS
    datafile '/disk01/oracle/oradata/DB1/rbs01.dbf' size 25M
    default storage (initial 500K next 500K pctincrease 0 minextents 2);

create rollback segment RBS01 tablespace RBS
    storage (initial 512K next 512K minextents 50);

create rollback segment RBS02 tablespace RBS
    storage (initial 500K next 500K minextents 2 optimal 1M);
etc..

alter rollback segment RBS01 online;
alter rollback segment RBS02 online;

etc..

create tablespace DATA
    datafile '/disk05/oracle/oradata/DB1/data01.dbf' size 25M
    default storage (initial 500K next 500K pctincrease 0 maxextents unlimited);

etc.. other tablespaces you need

run other scripts you need.

alter user sys temporary tablespace TEMP;
alter user system default tablespace TOOLS temporary tablespace TEMP;

connect system/manager
@/disk00/oracle/software/7.3.4/rdbms/admin/catdbsyn.sql
@/disk00/oracle/software/7.3.4/rdbms/admin/pubbld.sql

for PRODUCT_USER_PROFILE, SQLPLUS_USER_PROFILE

Example 3: on NT/2000 8i best example:
--------------------------------------
Suppose you want a second database on a NT/2000 Server:

1. create a service with oradim

oradim -new -sid -startmode -pfile

2. sqlplus /nolog (or use svrmgrl)

startup nomount pfile="G:\oracle\admin\hd\pfile\init.ora"

SVRMGR> CREATE DATABASE hd
    LOGFILE 'G:\oradata\hd\redo01.log' SIZE 2048K,
            'G:\oradata\hd\redo02.log' SIZE 2048K,
            'G:\oradata\hd\redo03.log' SIZE 2048K
    MAXLOGFILES 32
    MAXLOGMEMBERS 2
    MAXLOGHISTORY 1
    DATAFILE 'G:\oradata\hd\system01.dbf' SIZE 264M REUSE AUTOEXTEND ON NEXT 10240K
    MAXDATAFILES 254
    MAXINSTANCES 1
    CHARACTER SET WE8ISO8859P1
    NATIONAL CHARACTER SET WE8ISO8859P1;

@catalog.sql
@catproc.sql

Oracle 9i:
----------

Example 1:
----------
CREATE DATABASE mynewdb
   USER SYS IDENTIFIED BY pz6r58
   USER SYSTEM IDENTIFIED BY y1tz5p
   LOGFILE GROUP 1 ('/vobs/oracle/oradata/mynewdb/redo01.log') SIZE 100M,
           GROUP 2 ('/vobs/oracle/oradata/mynewdb/redo02.log') SIZE 100M,
           GROUP 3 ('/vobs/oracle/oradata/mynewdb/redo03.log') SIZE 100M
   MAXLOGFILES 5
   MAXLOGMEMBERS 5
   MAXLOGHISTORY 1
   MAXDATAFILES 100
   MAXINSTANCES 1
   CHARACTER SET US7ASCII
   NATIONAL CHARACTER SET AL16UTF16
   DATAFILE '/vobs/oracle/oradata/mynewdb/system01.dbf' SIZE 325M REUSE
   EXTENT MANAGEMENT LOCAL
   DEFAULT TEMPORARY TABLESPACE tempts1
      TEMPFILE '/vobs/oracle/oradata/mynewdb/temp01.dbf' SIZE 20M REUSE
   UNDO TABLESPACE undotbs
      DATAFILE '/vobs/oracle/oradata/mynewdb/undotbs01.dbf' SIZE 200M REUSE
      AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED;

9.2 Automatic start of Oracle at system boot:
=============================================

9.2.1 oratab:
-------------
Contents of ORATAB in /etc or /var/opt:

Example:
# $ORACLE_SID:$ORACLE_HOME:[N|Y]
# ORCL:/u01/app/oracle/product/8.0.5:Y
#

The Oracle scripts to start and stop the database are $ORACLE_HOME/bin/dbstart
and dbshut, or startdb and stopdb, or something similar. These look in ORATAB
to see which databases must be started.

9.2.2 dbstart and dbshut:
-------------------------
The dbstart script reads oratab, and also performs tests to determine the
Oracle version. Beyond that, its core consists of:

- starting sqldba, svrmgrl or sqlplus
- then performing a connect
- then issuing the startup command.

For dbshut a similar story applies.

9.2.3 init, sysinit, rc:
------------------------
For an automatic start, now add the proper entries to the /etc/rc2.d/S99dbstart
(or equivalent) file. During Unix startup, the scripts in /etc/rc2.d whose
names begin with an 'S' are executed, in alphabetical order. The Oracle
database processes will be started as (one of) the last processes. The file
S99oracle is linked into this directory.

Contents of S99oracle:

su - oracle -c "/path/to/$ORACLE_HOME/bin/dbstart"        # Start DBs
su - oracle -c "/path/to/$ORACLE_HOME/bin/lsnrctl start"  # Start listener
su - oracle -c "/path/to/$ORACLE_HOME/bin/namesctl start" # Start OraNames (optional)

The dbstart script is a standard Oracle script. It looks in oratab to see which
SIDs are set to 'Y', and will start those databases.

Or customized, via a custom startdb script:

ORACLE_ADMIN=/opt/oracle/admin; export ORACLE_ADMIN

su - oracle -c "$ORACLE_ADMIN/bin/startdb WPRD 1>$ORACLE_ADMIN/log/WPRD/startWPRD.$$ 2>&1"
su - oracle -c "$ORACLE_ADMIN/bin/startdb WTST 1>$ORACLE_ADMIN/log/WTST/startWTST.$$ 2>&1"
su - oracle -c "$ORACLE_ADMIN/bin/startdb WCUR 1>$ORACLE_ADMIN/log/WCUR/startWCUR.$$ 2>&1"
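The oratab-driven startup that dbstart performs can be sketched in a few lines of shell. This is a simplified illustration, not the real Oracle script; dbstart also probes the Oracle version and handles more edge cases:

```shell
# Simplified sketch of dbstart's core loop: read oratab (SID:HOME:FLAG), skip
# comments and empty lines, and report every SID whose autostart flag is Y.
start_flagged_instances() {
  oratab=$1
  [ -f "$oratab" ] || return 0
  while IFS=: read -r sid home flag; do
    case "$sid" in
      \#*|'') continue ;;   # comment or empty line
    esac
    if [ "$flag" = "Y" ]; then
      echo "would start instance $sid (ORACLE_HOME=$home)"
      # the real script would su to oracle and run startup for this SID here
    fi
  done < "$oratab"
}

start_flagged_instances /etc/oratab
```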

9.3 Stopping Oracle on Unix:
----------------------------
During a Unix shutdown (shutdown -i 0), the scripts in the directory /etc/rc2.d
whose names begin with a 'K' are executed, in alphabetical order. The Oracle
database processes are among the first processes to be shut down. The file
K10oracle is linked as /etc/rc2.d/K10oracle.

# Configuration File: /opt/oracle/admin/bin/K10oracle
ORACLE_ADMIN=/opt/oracle/admin; export ORACLE_ADMIN

su - oracle -c "$ORACLE_ADMIN/bin/stopdb WPRD 1>$ORACLE_ADMIN/log/WPRD/stopWPRD.$$ 2>&1"
su - oracle -c "$ORACLE_ADMIN/bin/stopdb WCUR 1>$ORACLE_ADMIN/log/WCUR/stopWCUR.$$ 2>&1"
su - oracle -c "$ORACLE_ADMIN/bin/stopdb WTST 1>$ORACLE_ADMIN/log/WTST/stopWTST.$$ 2>&1"

9.4 startdb and stopdb:
-----------------------

Startdb [ORACLE_SID]
--------------------
This script is part of the script S99oracle. It takes one parameter, ORACLE_SID.

# Configuration File: /opt/oracle/admin/bin/startdb

# Set the general environment
. $ORACLE_ADMIN/env/profile

ORACLE_SID=$1
echo $ORACLE_SID

# Set the RDBMS environment
. $ORACLE_ADMIN/env/$ORACLE_SID.env

# Start the database
sqlplus /nolog << EOF
connect / as sysdba
startup
EOF

# Start the listener
lsnrctl start $ORACLE_SID

# Start the intelligent agent for all instances
#lsnrctl dbsnmp_start

Stopdb [ORACLE_SID]
-------------------
This script is part of the script K10oracle. It takes one parameter, ORACLE_SID.

# Configuration File: /opt/oracle/admin/bin/stopdb

# Set the general environment
. $ORACLE_ADMIN/env/profile

ORACLE_SID=$1
export ORACLE_SID

# RDBMS settings
. $ORACLE_ADMIN/env/$ORACLE_SID.env

# Stop the intelligent agent
#lsnrctl dbsnmp_stop

# Stop the listener
lsnrctl stop $ORACLE_SID

# Stop the database
sqlplus /nolog << EOF
connect / as sysdba
shutdown immediate
EOF

9.5 Batches:
------------
The batches (jobs) are started by the Unix cron process.

# Batches (Oracle)
# Configuration File: /var/spool/cron/crontabs/root
# Format of lines:
# min hour daymo month daywk cmd
#
# Dayweek 0=sunday, 1=monday...
0 9 * * 6 /sbin/sh /opt/oracle/admin/batches/bin/batches.sh >> /opt/oracle/admin/batches/log/batcheserroroutput.log 2>&1

# Configuration File: /opt/oracle/admin/batches/bin/batches.sh
# Setting 'BL_TRACE=T ; export BL_TRACE' on the command line shows all commands.
case $BL_TRACE in T) set -x ;; esac

ORACLE_ADMIN=/opt/oracle/admin; export ORACLE_ADMIN
ORACLE_HOME=/opt/oracle/product/8.1.6; export ORACLE_HOME

ORACLE_SID=WCUR ; export ORACLE_SID
su - oracle -c ". $ORACLE_ADMIN/env/profile ; . $ORACLE_ADMIN/env/$ORACLE_SID.env; cd $ORACLE_ADMIN/batches/bin; sqlplus /NOLOG @$ORACLE_ADMIN/batches/bin/Analyse_WILLOW2K.sql 1> $ORACLE_ADMIN/batches/log/batches$ORACLE_SID.`date +"%y%m%d"` 2>&1"

ORACLE_SID=WCON ; export ORACLE_SID
su - oracle -c ". $ORACLE_ADMIN/env/profile ; . $ORACLE_ADMIN/env/$ORACLE_SID.env; cd $ORACLE_ADMIN/batches/bin; sqlplus /NOLOG @$ORACLE_ADMIN/batches/bin/Analyse_WILLOW2K.sql 1> $ORACLE_ADMIN/batches/log/batches$ORACLE_SID.`date +"%y%m%d"` 2>&1"

9.6 Autostart in NT/Win2K:
--------------------------
1) Older versions

Delete the existing instance from the command prompt:
    oradim80 -delete -sid SID

Recreate the instance from the command prompt:
    oradim -new -sid SID -intpwd <password> -startmode <auto> -pfile <path\initSID.ora>

Execute the command file from the command prompt:
    oracle_home\database\strt<sid>.cmd

Check the log file generated from this execution:
    oracle_home\rdbmsxx\oradimxx.log

2) NT Registry value

HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\HOME0\ORA_SID_AUTOSTART REG_EXPAND_SZ TRUE

9.7 Tools:
----------

Relink of Oracle:
-----------------
info:
    showrev -p
    pkginfo -i

relink:
    make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk install
    make -f $ORACLE_HOME/svrmgr/lib/ins_svrmgr.mk install
    make -f $ORACLE_HOME/network/lib/ins_network.mk install
    $ORACLE_HOME/bin/relink all

Relinking Oracle

Background: Applications for UNIX are generally not distributed as complete
executables. Oracle, like many application vendors who create products for
UNIX, distributes individual object files, library archives of object files,
and some source files, which then get 'relinked' at the operating system level
during installation to create usable executables. This guarantees a reliable
integration with functions provided by the OS system libraries.

Relinking occurs automatically under these circumstances:
- An Oracle product has been installed with an Oracle provided installer.
- An Oracle patch set has been applied via an Oracle provided installer.

[Step 1] Log into the UNIX system as the Oracle software owner.
Typically this is the user 'oracle'.

[Step 2] Verify that your $ORACLE_HOME is set correctly:
For all Oracle Versions and Platforms, perform this basic environment check
first:

% cd $ORACLE_HOME
% pwd

...Doing this will ensure that $ORACLE_HOME is set correctly in your current environment.

[Step 3] Verify and/or Configure the UNIX Environment for Proper Relinking:

For all Oracle Versions and UNIX Platforms: The platform-specific environment variables
LIBPATH, LD_LIBRARY_PATH, & SHLIB_PATH typically are already set to include system library
locations like '/usr/lib'. In most cases, you need only check what they are set to first,
then add the $ORACLE_HOME/lib directory to them where appropriate. i.e.:

% setenv LD_LIBRARY_PATH ${ORACLE_HOME}/lib:${LD_LIBRARY_PATH}

(see [NOTE:131207.1] How to Set UNIX Environment Variables for help with setting
UNIX environment variables)

If on SOLARIS (Sparc or Intel) with Oracle 7.3.X, 8.0.X, or 8.1.X:
- Ensure that /usr/ccs/bin is before /usr/ucb in $PATH
  % which ld
  ....should return '/usr/ccs/bin/ld'

If using 32bit (non 9i) Oracle:
- Set LD_LIBRARY_PATH=$ORACLE_HOME/lib

If using 64bit (non 9i) Oracle:
- Set LD_LIBRARY_PATH=$ORACLE_HOME/lib
- Set LD_LIBRARY_PATH_64=$ORACLE_HOME/lib64

Oracle 9.X.X (64Bit) on Solaris (64Bit) OS:
- Set LD_LIBRARY_PATH=$ORACLE_HOME/lib32
- Set LD_LIBRARY_PATH_64=$ORACLE_HOME/lib

Oracle 9.X.X (32Bit) on Solaris (64Bit) OS:
- Set LD_LIBRARY_PATH=$ORACLE_HOME/lib

[Step 4] For all Oracle Versions and UNIX Platforms:

Verify that you performed Step 2 correctly:

% env | pg

....make sure that you see the correct absolute path for $ORACLE_HOME in the
variable definitions.

[Step 5] Run the OS Commands to Relink Oracle:

Before relinking Oracle, shut down both the database and the listener.

Oracle 8.1.X or 9.X.X
-----------------------
*** NEW IN 8i AND ABOVE ***

A 'relink' script is provided in the $ORACLE_HOME/bin directory.

% cd $ORACLE_HOME/bin
% relink

...this will display all of the command's options:

usage: relink <parameter>

accepted values for parameter:

  all               Every product executable that has been installed
  oracle            Oracle Database executable only
  network           net_client, net_server, cman
  client            net_client, plsql
  client_sharedlib  Client shared library
  interMedia        ctx
  ctx               Oracle Text utilities
  precomp           All precompilers that have been installed
  utilities         All utilities that have been installed
  oemagent          oemagent

  Note: To give the correct permissions to the nmo and nmb executables,
  you must run the root.sh script after relinking oemagent.

  ldap              ldap, oid

Note: ldap option is available only from 9i. In 8i, you would have to manually relink ldap.

You can relink most of the executables associated with an Oracle Server Installation
by running the following command:

% relink all

This will not relink every single executable Oracle provides (you can discern which
executables were relinked by checking their timestamp with 'ls -l' in the
$ORACLE_HOME/bin directory). However, 'relink all' will recreate the shared libraries
that most executables rely on and thereby resolve most issues that require a proper relink.

-or-

Since the 'relink' command merely calls the traditional 'make' commands, you still
have the option of running the 'make' commands independently:

For executables: oracle, exp, imp, sqlldr, tkprof, mig, dbv, orapwd, rman, svrmgrl, ogms, ogmsctl
% cd $ORACLE_HOME/rdbms/lib
% make -f ins_rdbms.mk install

For executables: sqlplus
% cd $ORACLE_HOME/sqlplus/lib
% make -f ins_sqlplus.mk install

For executables: isqlplus
% cd $ORACLE_HOME/sqlplus/lib
% make -f ins_sqlplus.mk install_isqlplus

For executables: dbsnmp, oemevent, oratclsh
% cd $ORACLE_HOME/network/lib
% make -f ins_oemagent.mk install

For executables: names, namesctl
% cd $ORACLE_HOME/network/lib
% make -f ins_names.mk install

For executables: osslogin, trcasst, trcroute, onrsd, tnsping
% cd $ORACLE_HOME/network/lib
% make -f ins_net_client.mk install

For executables: tnslsnr, lsnrctl
% cd $ORACLE_HOME/network/lib
% make -f ins_net_server.mk install

For executables related to ldap (for example Oracle Internet Directory):
% cd $ORACLE_HOME/ldap/lib
% make -f ins_ldap.mk install

Note: Unix Installation/OS: RDBMS Technical Forum
Displayed below are the messages of the selected thread.
Thread Status: Closed

From: Ray Stell 20-Apr-05 21:43
Subject: solaris upgrade

RDBMS Version: 9.2.0.4
Operating System and Version: Solaris 8

Error Number (if applicable):
Product (i.e. SQL*Loader, Import, etc.):
Product Version:

solaris upgrade

I need to move a server from solaris 5.8 to 5.9. Does this require a new oracle
9.2.0 ee server install or relink or nothing at all? Thanks.

--------------------------------------------------------------------------------
From: Samir Saad 21-Apr-05 03:28
Subject: Re : solaris upgrade

You must relink even if you find that the databases came up after Solaris upgrade
and they seem fine. As for the existing Oracle installations, they will all be fine.
Samir.

--------------------------------------------------------------------------------
From: Oracle, soumya anand 21-Apr-05 10:59
Subject: Re : solaris upgrade

Hello Ray,

As rightly pointed out by Samir, after an OS upgrade it is sufficient to relink
the executables.

Regards,
Soumya

Note: troubles after relink:
----------------------------

If you see on AIX something that resembles the following:

P522:/home/oracle $lsnrctl
exec(): 0509-036 Cannot load program lsnrctl because of the following errors:
        0509-130 Symbol resolution failed for /usr/lib/libc.a[aio_64.o] because:
        0509-136 Symbol kaio_rdwr64 (number 0) is not exported from dependent module /unix.
        0509-136 Symbol listio64 (number 1) is not exported from dependent module /unix.
        0509-136 Symbol acancel64 (number 2) is not exported from dependent module /unix.
        0509-136 Symbol iosuspend64 (number 3) is not exported from dependent module /unix.
        0509-136 Symbol aio_nwait (number 4) is not exported from dependent module /unix.
        0509-150 Dependent module libc.a(aio_64.o) could not be loaded.
        0509-026 System error: Cannot run a file that does not have a valid format.
        0509-192 Examine .loader section symbols with the 'dump -Tv' command.

If this occurs, you have asynchronous I/O turned off.

To turn on asynchronous I/O:
- Run smitty chgaio and set STATE to be configured at system restart from
  defined to available. Press Enter.
- Do one of the following:
  - Restart your system, or
  - Run smitty aio and move the cursor to Configure defined Asynchronous I/O.
    Then press Enter.

trace:
------
truss -aef -o /tmp/trace svrmgrl

To trace what a Unix process is doing enter:

truss -rall -wall -p <PID>
truss -p $
lsnrctl dbsnmp_start

NOTE: The "truss" command works on SUN and Sequent. Use "tusc" on HP-UX, "strace"
on Linux, "trace" on SCO Unix, or call your system administrator to find the
equivalent command on your system.

Monitor your Unix system:

Logfiles:
---------
Unix message files record all system problems like disk errors, swap errors,
NFS problems, etc. Monitor the following files on your system to detect system problems:

tail -f /var/adm/SYSLOG
tail -f /var/adm/messages
tail -f /var/log/syslog

===============
10. CONSTRAINTS:
===============

10.1 index owner and table owner information: DBA_INDEXES
---------------------------------------------------------

set linesize 100

SELECT DISTINCT
       substr(owner, 1, 10)          as INDEX_OWNER,
       substr(index_name, 1, 40)     as INDEX_NAME,
       substr(tablespace_name,1,40)  as TABLE_SPACE,
       substr(index_type, 1, 10)     as INDEX_TYPE,
       substr(table_owner, 1, 10)    as TABLE_OWNER,
       substr(table_name, 1, 40)     as TABLE_NAME,
       BLEVEL, NUM_ROWS, STATUS
FROM DBA_INDEXES
order by index_owner;
SELECT DISTINCT
       substr(owner, 1, 10)        as INDEX_OWNER,
       substr(index_name, 1, 40)   as INDEX_NAME,
       substr(index_type, 1, 10)   as INDEX_TYPE,
       substr(table_owner, 1, 10)  as TABLE_OWNER,
       substr(table_name, 1, 40)   as TABLE_NAME
FROM DBA_INDEXES
WHERE table_name='HEAT_CUSTOMER';

SELECT substr(owner, 1, 10)        as INDEX_OWNER,
       substr(index_name, 1, 40)   as INDEX_NAME,
       substr(index_type, 1, 10)   as INDEX_TYPE,
       substr(table_owner, 1, 10)  as TABLE_OWNER,
       substr(table_name, 1, 40)   as TABLE_NAME
FROM DBA_INDEXES
WHERE owner<>table_owner;
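The queries above also report the index STATUS. As a small hedged addition (not in the original notes), the same view can be used to list only the problem indexes, for instance those left UNUSABLE after a table move or a failed build:

```sql
-- Sketch: list indexes that are not usable; run as DBA.
SELECT substr(owner, 1, 10)      as INDEX_OWNER,
       substr(index_name, 1, 40) as INDEX_NAME,
       status
FROM DBA_INDEXES
WHERE status = 'UNUSABLE';
```

Such indexes can then be rebuilt as shown in 10.14.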
10.2 PK and FK constraint relations:
------------------------------------

SELECT c.constraint_type                  as TYPE,
       SUBSTR(c.table_name, 1, 40)        as TABLE_NAME,
       SUBSTR(c.constraint_name, 1, 40)   as CONSTRAINT_NAME,
       SUBSTR(c.r_constraint_name, 1, 40) as REF_KEY,
       SUBSTR(b.column_name, 1, 40)       as COLUMN_NAME
FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b
WHERE c.constraint_name=b.constraint_name AND
      c.OWNER in ('TRIDION_CM','TCMLOGDBUSER','VPOUSERDB') AND
      c.constraint_type in ('P', 'R', 'U');

SELECT c.constraint_type                  as TYPE,
       SUBSTR(c.table_name, 1, 40)        as TABLE_NAME,
       SUBSTR(c.constraint_name, 1, 40)   as CONSTRAINT_NAME,
       SUBSTR(c.r_constraint_name, 1, 40) as REF_KEY,
       SUBSTR(b.column_name, 1, 40)       as COLUMN_NAME
FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b
WHERE c.constraint_name=b.constraint_name AND
      c.OWNER='RM_LIVE' AND
      c.constraint_type in ('P', 'R', 'U');

SELECT distinct c.constraint_type as TYPE,
SUBSTR(c.table_name, 1, 40) as TABLE_NAME, SUBSTR(c.constraint_name, 1, 40) as CONSTRAINT_NAME, SUBSTR(c.r_constraint_name, 1, 40) as REF_KEY FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b WHERE c.constraint_name=b.constraint_name AND c.OWNER='RM_LIVE' AND c.constraint_type ='R'; ----------------------------------------------------------------------create table reftables (TYPE varchar2(32), TABLE_NAME varchar2(40), CONSTRAINT_NAME varchar2(40), REF_KEY varchar2(40), REF_TABLE varchar2(40));

insert into reftables (type,table_name,constraint_name,ref_key)
SELECT distinct
       c.constraint_type                  as TYPE,
       SUBSTR(c.table_name, 1, 40)        as TABLE_NAME,
       SUBSTR(c.constraint_name, 1, 40)   as CONSTRAINT_NAME,
       SUBSTR(c.r_constraint_name, 1, 40) as REF_KEY
FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b
WHERE c.constraint_name=b.constraint_name AND
      c.OWNER='RM_LIVE' AND
      c.constraint_type ='R';

update reftables
set REF_TABLE=(select distinct table_name from dba_cons_columns
               where owner='RM_LIVE' and CONSTRAINT_NAME=REF_KEY);

----------------------------------------------------------------------

SELECT c.constraint_type                  as TYPE,
       SUBSTR(c.table_name, 1, 40)        as TABLE_NAME,
       SUBSTR(c.constraint_name, 1, 40)   as CONSTRAINT_NAME,
       SUBSTR(c.r_constraint_name, 1, 40) as REF_KEY
FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b
WHERE c.constraint_name=b.constraint_name AND
      c.OWNER='RM_LIVE' AND
      c.constraint_type ='R';

SELECT c.constraint_type                  as TYPE,
       SUBSTR(c.table_name, 1, 40)        as TABLE_NAME,
       SUBSTR(c.constraint_name, 1, 40)   as CONSTRAINT_NAME,
       SUBSTR(c.r_constraint_name, 1, 40) as REF_KEY,
       (select distinct x.table_name from dba_cons_columns x
        where x.constraint_name=c.r_constraint_name)  as REF_TABLE
FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b
WHERE c.constraint_name=b.constraint_name AND

      c.OWNER='RM_LIVE' AND
      (c.constraint_type ='R' or c.constraint_type ='P');

select c.constraint_name, c.constraint_type, c.table_name,
       c.r_constraint_name, o.constraint_name, o.column_name
from   dba_constraints c, dba_cons_columns o
where  c.constraint_name=o.constraint_name
and    c.constraint_type='R'
and    c.owner='BRAINS';

SELECT 'SELECT * FROM '||c.table_name||' WHERE '||b.column_name||' '||c.search_condition
FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b
WHERE c.constraint_name=b.constraint_name AND
      c.OWNER='BRAINS' AND
      c.constraint_type = 'C';

SELECT 'ALTER TABLE PROJECTS.'||table_name||' enable constraint '||constraint_name||';'
FROM DBA_CONSTRAINTS
WHERE owner='PROJECTS'
AND constraint_type='R';

SELECT 'ALTER TABLE BRAINS.'||table_name||' disable constraint '||constraint_name||';'
FROM USER_CONSTRAINTS
WHERE owner='BRAINS'
AND constraint_type='R';

10.3 PK and FK constraint information: DBA_CONSTRAINTS
------------------------------------------------------

-- owner and all foreign key constraints

SELECT SUBSTR(owner, 1, 10)             as OWNER,
       constraint_type                  as TYPE,
       SUBSTR(table_name, 1, 40)        as TABLE_NAME,
       SUBSTR(constraint_name, 1, 40)   as CONSTRAINT_NAME,
       SUBSTR(r_constraint_name, 1, 40) as REF_KEY,
       DELETE_RULE                      as DELETE_RULE,
       status
FROM DBA_CONSTRAINTS
WHERE OWNER='BRAINS' AND
      constraint_type in ('R', 'P', 'U');

SELECT SUBSTR(owner, 1, 10)             as OWNER,
       constraint_type                  as TYPE,
       SUBSTR(table_name, 1, 30)        as TABLE_NAME,
       SUBSTR(constraint_name, 1, 30)   as CONSTRAINT_NAME,
       SUBSTR(r_constraint_name, 1, 30) as REF_KEY,
       DELETE_RULE                      as DELETE_RULE,
       status
FROM DBA_CONSTRAINTS
WHERE OWNER='BRAINS' AND
      constraint_type in ('R');

-- Determine the owner and all primary key constraints of a given user, on given objects.
-- Same query: set OWNER='<desired_owner>' AND constraint_type='P'

select owner, CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_NAME, R_CONSTRAINT_NAME, STATUS
from dba_constraints
where owner='FIN_VLIEG'
and constraint_type in ('P','R','U');
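A related check that often pays off in practice: foreign key columns on a child table that have no supporting index can cause locking problems on the parent table. The following is a hedged sketch, not from the original notes; schema BRAINS follows the examples above, and for simplicity it only checks the leading index column:

```sql
-- Sketch: FK columns with no index starting on that column
SELECT c.table_name, cc.column_name, c.constraint_name
FROM   dba_constraints c, dba_cons_columns cc
WHERE  c.owner='BRAINS'
AND    c.constraint_type='R'
AND    cc.owner=c.owner
AND    cc.constraint_name=c.constraint_name
AND NOT EXISTS (SELECT 1 FROM dba_ind_columns ic
                WHERE ic.table_owner=c.owner
                AND   ic.table_name=c.table_name
                AND   ic.column_name=cc.column_name
                AND   ic.column_position=1);
```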

10.4 Find the index belonging to a given constraint: DBA_INDEXES, DBA_CONSTRAINTS
---------------------------------------------------------------------------------

SELECT c.constraint_type                as Type,
       substr(x.index_name, 1, 40)      as INDX_NAME,
       substr(c.constraint_name, 1, 40) as CONSTRAINT_NAME,
       substr(x.tablespace_name, 1, 40) as TABLESPACE
FROM DBA_CONSTRAINTS c, DBA_INDEXES x
WHERE c.constraint_name=x.index_name AND
      c.constraint_name='UN_DEMO1';

SELECT c.constraint_type                as Type,
       substr(x.index_name, 1, 40)      as INDX_NAME,
       substr(c.constraint_name, 1, 40) as CONSTRAINT_NAME,
       substr(c.table_name, 1, 40)      as TABLE_NAME,
       substr(c.owner, 1, 10)           as OWNER
FROM DBA_CONSTRAINTS c, DBA_INDEXES x
WHERE c.constraint_name=x.index_name AND
      c.owner='JOOPLOC';

10.5 Find the tablespace of a constraint or constraint owner:
-------------------------------------------------------------

SELECT substr(s.segment_name, 1, 40)    as Segmentname,
       substr(c.constraint_name, 1, 40) as Constraintname,
       substr(s.tablespace_name, 1, 40) as Tablespace,
       substr(s.segment_type, 1, 10)    as Type
FROM DBA_SEGMENTS s, DBA_CONSTRAINTS c
WHERE s.segment_name=c.constraint_name
AND c.owner='PROJECTS';

10.6 Retrieving index create statements: DBA_INDEXES, DBA_IND_COLUMNS
---------------------------------------------------------------------

SELECT substr(i.index_name, 1, 40)  as INDEX_NAME,
       substr(i.index_type, 1, 15)  as INDEX_TYPE,
       substr(i.table_name, 1, 40)  as TABLE_NAME,
       substr(c.index_owner, 1, 10) as INDEX_OWNER,
       substr(c.column_name, 1, 40) as COLUMN_NAME,
       c.column_position            as POSITION
FROM DBA_INDEXES i, DBA_IND_COLUMNS c
WHERE i.index_name=c.index_name AND
      i.owner='SALES';

10.7 Enabling and disabling constraints:
----------------------------------------

-- enable:
alter table tablename enable constraint constraint_name
-- disable:
alter table tablename disable constraint constraint_name

-- example:
ALTER TABLE EMPLOYEE DISABLE CONSTRAINT FK_DEPNO;
ALTER TABLE EMPLOYEE ENABLE CONSTRAINT FK_DEPNO;

but this is also possible:

ALTER TABLE DEMO ENABLE PRIMARY KEY;

-- Disable all FK constraints of a schema in one go:

SELECT 'ALTER TABLE MIS_OWNER.'||table_name||' disable constraint '||constraint_name||';'
FROM DBA_CONSTRAINTS
WHERE owner='MIS_OWNER'
AND constraint_type='R'
AND TABLE_NAME LIKE 'MKM%';

SELECT 'ALTER TABLE MIS_OWNER.'||table_name||' enable constraint '||constraint_name||';'
FROM DBA_CONSTRAINTS
WHERE owner='MIS_OWNER'
AND constraint_type='R'
AND TABLE_NAME LIKE 'MKM%';
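The generated ALTER TABLE statements above can be captured to a script and executed in one go with SQL*Plus spool. A hedged sketch (the file name disable_fk.sql is arbitrary):

```sql
set heading off feedback off pagesize 0
spool disable_fk.sql
SELECT 'ALTER TABLE MIS_OWNER.'||table_name||' disable constraint '||constraint_name||';'
FROM DBA_CONSTRAINTS
WHERE owner='MIS_OWNER' AND constraint_type='R';
spool off
@disable_fk.sql
```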

10.8 Creating a constraint that is initially disabled:
------------------------------------------------------

This can be handy when, for example, loading a table in which duplicate values may occur.

ALTER TABLE CUSTOMERS ADD CONSTRAINT PK_CUST PRIMARY KEY (custid) DISABLE;

If it turns out that enabling the constraint fails because duplicate records exist,
we can place those duplicate records in the EXCEPTIONS table:

1. Create the EXCEPTIONS table:

@ORACLE_HOME\rdbms\admin\utlexcpt.sql

2. Enable the constraint:

ALTER TABLE CUSTOMERS ENABLE PRIMARY KEY exceptions INTO EXCEPTIONS;

Now the EXCEPTIONS table contains the duplicate rows.

3. Which duplicate rows:

SELECT c.custid, c.name
FROM CUSTOMERS c, EXCEPTIONS s
WHERE c.rowid=s.row_id;

10.9 Using PK and FK constraints:
---------------------------------

10.9.1: Example of normal use with DRI:

create table customers
( custid number not null,
  custname varchar(10),
  CONSTRAINT pk_cust PRIMARY KEY (custid)
);

create table contacts
( contactid number not null,
  custid number,
  contactname varchar(10),
  CONSTRAINT pk_contactid PRIMARY KEY (contactid),
  CONSTRAINT fk_cust FOREIGN KEY (custid) REFERENCES customers(custid)
);

With this in place you cannot simply delete a row with a certain custid from
customers if a row with the same custid exists in contacts.

10.9.2: Example with ON DELETE CASCADE:

create table contacts
( contactid number not null,
  custid number,
  contactname varchar(10),
  CONSTRAINT pk_contactid PRIMARY KEY (contactid),
  CONSTRAINT fk_cust FOREIGN KEY (custid) REFERENCES customers(custid) ON DELETE CASCADE
);

The clause "ON DELETE SET NULL" can also be used.

Now it is possible to delete a row from customers while a matching custid exists
in contacts: the row in contacts is then deleted as well.

10.10 Procedures for insert, delete:
------------------------------------

As an example on the customers table:

CREATE OR REPLACE PROCEDURE newcustomer (custid NUMBER, custname VARCHAR)
IS
BEGIN
  INSERT INTO customers values (custid,custname);
  commit;
END;
/

CREATE OR REPLACE PROCEDURE delcustomer (cust NUMBER)
IS
BEGIN
  delete from customers where custid=cust;
  commit;
END;
/

10.11 User data dictionary views:
---------------------------------

We have already seen that for constraint information we mainly consult these views:

DBA_TABLES
DBA_INDEXES,
DBA_CONSTRAINTS,
DBA_IND_COLUMNS,
DBA_SEGMENTS

These, however, are for the DBA. Ordinary users can query information from the
USER_ and ALL_ views.

USER_ : objects in the user's own schema
ALL_  : objects the user has access to

USER_TABLES        ALL_TABLES
USER_INDEXES       ALL_INDEXES
USER_CONSTRAINTS   ALL_CONSTRAINTS
USER_VIEWS         ALL_VIEWS
USER_SEQUENCES     ALL_SEQUENCES
USER_CONS_COLUMNS  ALL_CONS_COLUMNS
USER_TAB_COLUMNS   ALL_TAB_COLUMNS
USER_SOURCE        ALL_SOURCE

and the short synonyms: cat, tab, col, dict

10.12 Create and drop index examples:
-------------------------------------

CREATE UNIQUE INDEX HEATCUST0 ON HEATCUST(CUSTTYPE)
TABLESPACE INDEX_SMALL
PCTFREE 10
STORAGE(INITIAL 163840 NEXT 163840 PCTINCREASE 0);

DROP INDEX indexname;

10.13 Check the height of indexes:
----------------------------------

Is an index rebuild necessary?

SELECT index_name, owner, blevel,
decode(blevel,0,'OK BLEVEL',1,'OK BLEVEL',
2,'OK BLEVEL',3,'OK BLEVEL',4,'OK BLEVEL','BLEVEL HIGH') OK
FROM dba_indexes
WHERE owner='SALES' and blevel > 3;

10.14 Make indexes unusable (before a large dataload):
------------------------------------------------------

-- Make indexes unusable
alter index HEAT_CUSTOMER_DISCON_DATE unusable;
alter index HEAT_CUSTOMER_EMAIL_ADDRESS unusable;
alter index HEAT_CUSTOMER_POSTAL_CODE unusable;

-- Enable indexes again
alter index HEAT_CUSTOMER_DISCON_DATE rebuild;
alter index HEAT_CUSTOMER_EMAIL_ADDRESS rebuild;
alter index HEAT_CUSTOMER_POSTAL_CODE rebuild;
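After a large dataload you can generate the rebuild statements instead of typing one per index, in the same style as the generator queries in 10.7. A hedged sketch (owner SALES is just an example):

```sql
-- Sketch: generate rebuild statements for all unusable indexes of a schema
SELECT 'alter index '||owner||'.'||index_name||' rebuild;'
FROM DBA_INDEXES
WHERE owner='SALES'
AND   status='UNUSABLE';
```

Spool the output to a script and run it, as with the constraint generators.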

================================ 11. DBMS_JOB and scheduled Jobs: ================================ Used in Oracle 9i and lower versions.

11.1 SNP background process:
----------------------------
Scheduled jobs are possible when the SNP background process is activated.
This is done via the init.ora:

JOB_QUEUE_PROCESSES=1   -- number of SNP processes (SNP0, SNP1, ...), max 36; used for replication and job queues
JOB_QUEUE_INTERVAL=60   -- check interval

11.2 DBMS_JOB package:
----------------------
DBMS_JOB.SUBMIT()
DBMS_JOB.REMOVE()
DBMS_JOB.CHANGE()
DBMS_JOB.WHAT()
DBMS_JOB.NEXT_DATE()
DBMS_JOB.INTERVAL()
DBMS_JOB.RUN()

11.2.1 DBMS_JOB.SUBMIT()
------------------------
There are actually two versions: SUBMIT() and ISUBMIT().

PROCEDURE DBMS_JOB.SUBMIT
(job       OUT BINARY_INTEGER,
 what      IN VARCHAR2,
 next_date IN DATE DEFAULT SYSDATE,
 interval  IN VARCHAR2 DEFAULT 'NULL',
 no_parse  IN BOOLEAN DEFAULT FALSE);

PROCEDURE DBMS_JOB.ISUBMIT
(job       IN BINARY_INTEGER,
 what      IN VARCHAR2,
 next_date IN DATE DEFAULT SYSDATE,
 interval  IN VARCHAR2 DEFAULT 'NULL',
 no_parse  IN BOOLEAN DEFAULT FALSE);

The difference between ISUBMIT and SUBMIT is that ISUBMIT specifies a job number,
whereas SUBMIT returns a job number generated by the DBMS_JOB package.

Look for submitted jobs:
------------------------
select job, last_date, next_date, interval, substr(what, 1, 50)
from dba_jobs;

Submit a job:
-------------
The job number (if you use SUBMIT() ) will be derived from the sequence SYS.JOBSEQ.

Suppose you have the following procedure:

create or replace procedure test1
is
begin
  dbms_output.put_line('Hallo grapjas.');
end;
/

Example 1:
----------
variable jobno number;
begin
 DBMS_JOB.SUBMIT(:jobno, 'test1;', Sysdate, 'Sysdate+1');
 commit;
end;
/

DECLARE
  jobno NUMBER;
BEGIN
  DBMS_JOB.SUBMIT (job => jobno
                  ,what => 'test1;'
                  ,next_date => SYSDATE
                  ,interval => 'SYSDATE+1/24');
  COMMIT;
END;
/

So suppose you submit the above job at 08.15h. Then the next, and first time,
that the job will run is at 09.15h.

Example 2:
----------
variable jobno number;
begin
 DBMS_JOB.SUBMIT(:jobno, 'test1;', LAST_DAY(SYSDATE+1), 'LAST_DAY(ADD_MONTHS(LAST_DAY(SYSDATE+1),1))');
 commit;
end;
/

Example 3:
----------
VARIABLE jobno NUMBER
BEGIN
  DBMS_JOB.SUBMIT(:jobno,
  'DBMS_DDL.ANALYZE_OBJECT(''TABLE'', ''CHARLIE'', ''X1'', ''ESTIMATE'', NULL, 50);',
  SYSDATE, 'SYSDATE + 1');
  COMMIT;
END;

/

PRINT jobno

JOBNO
----------
14144

Example 4: this job is scheduled every hour
-------------------------------------------
DECLARE
  jobno NUMBER;
BEGIN
  DBMS_JOB.SUBMIT (job => jobno
                  ,what => 'begin space_logger; end;'
                  ,next_date => SYSDATE
                  ,interval => 'SYSDATE+1/24');
  COMMIT;
END;
/

Example 5: Examples of intervals
--------------------------------
'SYSDATE + 7'                                                  : exactly seven days from the last execution
'SYSDATE + 1/48'                                               : every half hour
'NEXT_DAY(TRUNC(SYSDATE), ''MONDAY'') + 15/24'                 : every Monday at 3PM
'NEXT_DAY(ADD_MONTHS(TRUNC(SYSDATE, ''Q''), 3), ''THURSDAY'')' : first Thursday of each quarter
'TRUNC(SYSDATE + 1)'                                           : every day at 12:00 midnight
'TRUNC(SYSDATE + 1) + 8/24'                                    : every day at 8:00 a.m.
'NEXT_DAY(TRUNC(SYSDATE), ''TUESDAY'') + 12/24'                : every Tuesday at 12:00 noon
'TRUNC(LAST_DAY(SYSDATE) + 1)'                                 : first day of the month at midnight
'TRUNC(ADD_MONTHS(SYSDATE + 2/24, 3), ''Q'') - 1/24'           : last day of the quarter at 11:00 p.m.
'TRUNC(LEAST(NEXT_DAY(SYSDATE,''MONDAY''), NEXT_DAY(SYSDATE,''WEDNESDAY''), NEXT_DAY(SYSDATE,''FRIDAY''))) + 9/24'
                                                               : every Monday, Wednesday, and Friday at 9:00 a.m.

--------------------------------------------------------------------------------

Example 6:
----------
You have this test procedure:

create or replace procedure test1
as
  id_next number;
begin

  select max(id) into id_next from iftest;
  insert into iftest (id) values (id_next+1);
  commit;
end;
/

Suppose on 16 July at 9:26h you do:

variable jobno number;
begin
 DBMS_JOB.SUBMIT(:jobno, 'test1;', LAST_DAY(SYSDATE+1), 'LAST_DAY(ADD_MONTHS(LAST_DAY(SYSDATE+1),1))');
 commit;
end;
/

select job, to_char(this_date,'DD-MM-YYYY;HH24:MI'), to_char(next_date, 'DD-MM-YYYY;HH24:MI')
from dba_jobs;

JOB        TO_CHAR(THIS_DAT TO_CHAR(NEXT_DAT
---------- ---------------- ----------------
25                          31-07-2004;09:26

Suppose on 16 July at 9:38h you do:

variable jobno number;
begin
 DBMS_JOB.SUBMIT(:jobno, 'test1;', LAST_DAY(SYSDATE)+1, 'LAST_DAY(ADD_MONTHS(LAST_DAY(SYSDATE+1),1))');
 commit;
end;
/

JOB        TO_CHAR(THIS_DAT TO_CHAR(NEXT_DAT
---------- ---------------- ----------------
25                          31-07-2004;09:26
26                          01-08-2004;09:38

Suppose on 16 July at 9:41h you do:

variable jobno number;
begin
 DBMS_JOB.SUBMIT(:jobno, 'test1;', SYSDATE, 'LAST_DAY(ADD_MONTHS(LAST_DAY(SYSDATE+1),1))');
 commit;
end;
/

JOB        TO_CHAR(THIS_DAT TO_CHAR(NEXT_DAT
---------- ---------------- ----------------
27                          31-08-2004;09:41
25                          31-07-2004;09:26
26                          01-08-2004;09:39

Suppose on 16 July at 9:46h you do:

variable jobno number;
begin
 DBMS_JOB.SUBMIT(:jobno, 'test1;', SYSDATE, 'TRUNC(LAST_DAY(SYSDATE + 1/24))');
 commit;
end;
/

JOB       TO_CHAR(THIS_DAT TO_CHAR(NEXT_DAT
--------- ---------------- ----------------
27                         31-08-2004;09:41
28                         31-07-2004;00:00
25                         31-07-2004;09:26
29                         31-07-2004;00:00

--------------------------------------------------------------------------------

variable jobno number;
begin
 DBMS_JOB.SUBMIT(:jobno, 'test1;', null, 'TRUNC(LAST_DAY(SYSDATE) + 1)');
 commit;
end;
/

In the job definition, use two single quotation marks around strings.
Always include a semicolon at the end of the job definition.
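Since an interval is just a date expression evaluated at run time, it can be sanity-checked against DUAL before submitting, and the job queue itself can be monitored from the dictionary. A hedged sketch (DBA_JOBS_RUNNING and the FAILURES/BROKEN columns of DBA_JOBS are the standard places to look):

```sql
-- What would the interval evaluate to if the job ran right now?
select LAST_DAY(ADD_MONTHS(LAST_DAY(SYSDATE+1),1)) as next_run
from dual;

-- Jobs executing at this moment
select sid, job, this_date, this_sec
from dba_jobs_running;

-- Jobs that have failed at least once, or are marked broken
select job, broken, failures, substr(what,1,40) as what
from dba_jobs
where failures > 0 or broken = 'Y';
```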

11.2.2 DBMS_JOB.REMOVE() -----------------------Removing a Job FROM the Job Queue To remove a job FROM the job queue, use the REMOVE procedure in the DBMS_JOB pac kage. The following statements remove job number 14144 FROM the job queue: BEGIN DBMS_JOB.REMOVE(14144); END; / 11.2.3 DBMS_JOB.CHANGE() -----------------------In this example, job number 14144 is altered to execute every three days: BEGIN DBMS_JOB.CHANGE(1, NULL, NULL, 'SYSDATE + 3'); END; / If you specify NULL for WHAT, NEXT_DATE, or INTERVAL when you call the

procedure DBMS_JOB.CHANGE, the current value remains unchanged. 11.2.4 DBMS_JOB.WHAT() ---------------------You can alter the definition of a job by calling the DBMS_JOB.WHAT procedure. The following example changes the definition for job number 14144: BEGIN DBMS_JOB.WHAT(14144, 'DBMS_DDL.ANALYZE_OBJECT(''TABLE'', ''HR'', ''DEPARTMENTS'', ''ESTIMATE'', NULL, 50);'); END; / 11.2.5 DBMS_JOB.NEXT_DATE() --------------------------You can alter the next execution time for a job by calling the DBMS_JOB.NEXT_DATE procedure, as shown in the following example: BEGIN DBMS_JOB.NEXT_DATE(14144, SYSDATE + 4); END; / 11.2.6 DBMS_JOB.INTERVAL(): --------------------------The following example illustrates changing the execution interval for a job by calling the DBMS_JOB.INTERVAL procedure: BEGIN DBMS_JOB.INTERVAL(14144, 'NULL'); END; / execute dbms_job.interval(<job number>,'SYSDATE+(1/48)'); In this case, the job will not run again after it successfully executes and it will be deleted FROM the job queue 11.2.7 DBMS_JOB.BROKEN(): ------------------------A job is labeled as either broken or not broken. Oracle does not attempt to run broken jobs. Example: BEGIN DBMS_JOB.BROKEN(10, TRUE); END; / Example:

The following example marks job 14144 as not broken and sets its next execution
date to the following Monday:

BEGIN
  DBMS_JOB.BROKEN(14144, FALSE, NEXT_DAY(SYSDATE, 'MONDAY'));
END;
/

Example:

exec DBMS_JOB.BROKEN( V_JOB_ID, true);

Example:

select JOB into V_JOB_ID from DBA_JOBS where WHAT like '%SONERA%';
DBMS_SNAPSHOT.REFRESH( 'SONERA', 'C');
DBMS_JOB.BROKEN( V_JOB_ID, false);

fix broken jobs:
----------------

/* Filename on companion disk: job5.sql */
CREATE OR REPLACE PROCEDURE job_fixer
AS
/*
|| calls DBMS_JOB.BROKEN to try and set
|| any broken jobs to unbroken
*/
  /* cursor selects user's broken jobs */
  CURSOR broken_jobs_cur
  IS
  SELECT job
  FROM user_jobs
  WHERE broken = 'Y';
BEGIN
  FOR job_rec IN broken_jobs_cur
  LOOP
    DBMS_JOB.BROKEN(job_rec.job,FALSE);
  END LOOP;
END job_fixer;
/

11.2.8 DBMS_JOB.RUN():
----------------------
BEGIN
  DBMS_JOB.RUN(14144);
END;
/

11.3 DBMS_SCHEDULER:
--------------------

Used in Oracle 10g.

BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'test_self_contained_job',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.gather_schema_stats(''JOHN''); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'freq=hourly; byminute=0',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'Job created using the CREATE JOB procedure.');
END;
/

BEGIN
  DBMS_SCHEDULER.run_job (job_name => 'TEST_PROGRAM_SCHEDULE_JOB',
                          use_current_session => FALSE);
END;
/

BEGIN
  DBMS_SCHEDULER.stop_job (job_name => 'TEST_PROGRAM_SCHEDULE_JOB');
END;
/

Jobs can be deleted using the DROP_JOB procedure:

BEGIN
  DBMS_SCHEDULER.drop_job (job_name => 'TEST_PROGRAM_SCHEDULE_JOB');
  DBMS_SCHEDULER.drop_job (job_name => 'test_self_contained_job');
END;
/

Oracle 10g:
-----------
DBMS_JOB has been replaced by DBMS_SCHEDULER.

Views:

V_$SCHEDULER_RUNNING_JOBS
GV_$SCHEDULER_RUNNING_JOBS
DBA_QUEUE_SCHEDULES
USER_QUEUE_SCHEDULES
_DEFSCHEDULE
DEFSCHEDULE
AQ$SCHEDULER$_JOBQTAB_S
AQ$_SCHEDULER$_JOBQTAB_F
AQ$SCHEDULER$_JOBQTAB
AQ$SCHEDULER$_JOBQTAB_R
AQ$SCHEDULER$_EVENT_QTAB_S
AQ$_SCHEDULER$_EVENT_QTAB_F
AQ$SCHEDULER$_EVENT_QTAB
AQ$SCHEDULER$_EVENT_QTAB_R
DBA_SCHEDULER_PROGRAMS
USER_SCHEDULER_PROGRAMS
ALL_SCHEDULER_PROGRAMS

DBA_SCHEDULER_JOBS USER_SCHEDULER_JOBS ALL_SCHEDULER_JOBS DBA_SCHEDULER_JOB_CLASSES ALL_SCHEDULER_JOB_CLASSES DBA_SCHEDULER_WINDOWS ALL_SCHEDULER_WINDOWS DBA_SCHEDULER_PROGRAM_ARGS USER_SCHEDULER_PROGRAM_ARGS ALL_SCHEDULER_PROGRAM_ARGS DBA_SCHEDULER_JOB_ARGS USER_SCHEDULER_JOB_ARGS ALL_SCHEDULER_JOB_ARGS DBA_SCHEDULER_JOB_LOG DBA_SCHEDULER_JOB_RUN_DETAILS USER_SCHEDULER_JOB_LOG USER_SCHEDULER_JOB_RUN_DETAILS ALL_SCHEDULER_JOB_LOG ALL_SCHEDULER_JOB_RUN_DETAILS DBA_SCHEDULER_WINDOW_LOG DBA_SCHEDULER_WINDOW_DETAILS ALL_SCHEDULER_WINDOW_LOG ALL_SCHEDULER_WINDOW_DETAILS DBA_SCHEDULER_WINDOW_GROUPS ALL_SCHEDULER_WINDOW_GROUPS DBA_SCHEDULER_WINGROUP_MEMBERS ALL_SCHEDULER_WINGROUP_MEMBERS DBA_SCHEDULER_SCHEDULES USER_SCHEDULER_SCHEDULES ALL_SCHEDULER_SCHEDULES DBA_SCHEDULER_RUNNING_JOBS ALL_SCHEDULER_RUNNING_JOBS USER_SCHEDULER_RUNNING_JOBS DBA_SCHEDULER_GLOBAL_ATTRIBUTE ALL_SCHEDULER_GLOBAL_ATTRIBUTE DBA_SCHEDULER_CHAINS USER_SCHEDULER_CHAINS ALL_SCHEDULER_CHAINS DBA_SCHEDULER_CHAIN_RULES USER_SCHEDULER_CHAIN_RULES ALL_SCHEDULER_CHAIN_RULES DBA_SCHEDULER_CHAIN_STEPS USER_SCHEDULER_CHAIN_STEPS ALL_SCHEDULER_CHAIN_STEPS DBA_SCHEDULER_RUNNING_CHAINS USER_SCHEDULER_RUNNING_CHAINS ALL_SCHEDULER_RUNNING_CHAINS
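To check on the jobs created with DBMS_SCHEDULER above, one of the listed views can be queried directly; a short hedged example on DBA_SCHEDULER_JOBS:

```sql
-- Sketch: status and next run time of scheduler jobs
select job_name, enabled, state,
       to_char(next_run_date,'DD-MM-YYYY HH24:MI') as next_run
from dba_scheduler_jobs;
```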

==================
12. Net8 / SQLNet:
==================

In sql*plus, for example, you enter:
-----------------
Username: system
Password: manager
Host String: XXX

-----------------

NET8 on the client looks in TNSNAMES.ORA for the first entry
XXX= (description.. protocol..host...port.. SERVICE_NAME=Y)

XXX is really an alias and thus arbitrary, although it will usually match the
instance name or database name you want to connect to. It could even be "pipo".
If XXX is not found, the client reports:

ORA-12154 TNS: could not resolve SERVICE NAME

Next, NET8 uses the connect descriptor Y to contact the listener on the server
that listens for Y. If Y is not what the listener expects, the listener reports
to the client:

TNS: listener could not resolve SERVICE_NAME in connect descriptor

12.1 sqlnet.ora example:
------------------------
SQLNET.AUTHENTICATION_SERVICES= (NTS)
NAMES.DIRECTORY_PATH= (TNSNAMES)

12.2 tnsnames.ora examples:
---------------------------
example 1.

DB1=
(DESCRIPTION=
  (ADDRESS_LIST=
    (ADDRESS=(PROTOCOL=TCP)(HOST=STARBOSS)(PORT=1521))
  )
  (CONNECT_DATA=
    (SERVICE_NAME=DB1.world)
  )
)

example 2.

DB1.world=
(DESCRIPTION=
  (ADDRESS_LIST=
    (ADDRESS=(COMMUNITY=tcp.world)(PROTOCOL=TCP)(HOST=STARBOSS)(PORT=1521))
  )
  (CONNECT_DATA=(SID=DB1))
)

DB2.world=
(... )

DB3.world=
(... )

etc..

example 3.

RCAT =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = w2ktest)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = rcat.antapex)
    )
  )

12.3 listener.ora voorbeelden: -----------------------------Example 1: ---------LISTENER= (DESCRIPTION= (ADDRESS=(PROTOCOL=TCP)(HOST=STARBOSS)(PORT=1521)) ) SID_LIST_LISTENER= (SID_LIST= (SID_DESC= (GLOBAL_DBNAME=DB1.world) (ORACLE_HOME=D:\oracle8i) (SID_NAME=DB1) ) ) Example 2: ---------############## WPRD ##################################################### LOG_DIRECTORY_WPRD = /opt/oracle/admin/WPRD/network/log LOG_FILE_WPRD = WPRD.log TRACE_LEVEL_WPRD = OFF #ADMIN TRACE_DIRECTORY_WPRD = /opt/oracle/admin/WPRD/network/trace TRACE_FILE_WPRD = WPRD.trc WPRD = (DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=blnl01)(PORT=1521))))) SID_LIST_WPRD = (SID_LIST = (SID_DESC = (GLOBAL_DBNAME = WPRD)

(ORACLE_HOME = /opt/oracle/product/8.1.6) (SID_NAME = WPRD))) ############## WTST ##################################################### LOG_DIRECTORY_WTST = /opt/oracle/admin/WTST/network/log LOG_FILE_WTST = WTST.log TRACE_LEVEL_WTST = OFF #ADMIN TRACE_DIRECTORY_WTST = /opt/oracle/admin/WTST/network/trace TRACE_FILE_WTST = WTST.trc WTST = (DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=blnl01)(PORT=1522))))) SID_LIST_WTST = (SID_LIST = (SID_DESC = (GLOBAL_DBNAME = WTST) (ORACLE_HOME = /opt/oracle/product/8.1.6) (SID_NAME = WTST))) Example 3: ---------# LISTENER.ORA Network Configuration File: D:\oracle\ora901\NETWORK\ADMIN\listen er.ora # Generated by Oracle configuration tools. LISTENER = (DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0)) ) (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = missrv)(PORT = 1521)) ) ) SID_LIST_LISTENER = (SID_LIST = (SID_DESC = (SID_NAME = PLSExtProc) (ORACLE_HOME = D:\oracle\ora901) (PROGRAM = extproc) ) (SID_DESC = (GLOBAL_DBNAME = o901) (ORACLE_HOME = D:\oracle\ora901) (SID_NAME = o901) ) (SID_DESC = (SID_NAME = MAST) (ORACLE_HOME = D:\oracle\ora901) (PROGRAM = hsodbc) ) (SID_DESC = (SID_NAME = NATOPS)

      (ORACLE_HOME = D:\oracle\ora901)
      (PROGRAM = hsodbc)
    )
    (SID_DESC =
      (SID_NAME = VRF)
      (ORACLE_HOME = D:\oracle\ora901)
      (PROGRAM = hsodbc)
    )
    (SID_DESC =
      (SID_NAME = DRILLS)
      (ORACLE_HOME = D:\oracle\ora901)
      (PROGRAM = hsodbc)
    )
    (SID_DESC =
      (SID_NAME = DDS)
      (ORACLE_HOME = D:\oracle\ora901)
      (PROGRAM = hsodbc)
    )
    (SID_DESC =
      (SID_NAME = IVP)
      (ORACLE_HOME = D:\oracle\ora901)
      (PROGRAM = hsodbc)
    )
    (SID_DESC =
      (SID_NAME = ALBERT)
      (ORACLE_HOME = D:\oracle\ora901)
      (PROGRAM = hsodbc)
    )
  )

12.4: CONNECT TIME FAILOVER:
----------------------------

The connect-time failover feature allows clients to connect to another listener
if the initial connection to the first listener fails. Multiple listener locations
are specified in the client's tnsnames.ora file. If a connection attempt to the
first listener fails, a connection request to the next listener in the list is
attempted. This feature increases the availability of the Oracle service should
a listener location be unavailable.

Here is an example of what a tnsnames.ora file looks like with connect-time
failover enabled:

ORCL=
(DESCRIPTION=
  (ADDRESS_LIST=
    (ADDRESS=(PROTOCOL=TCP)(HOST=DBPROD)(PORT=1521))
    (ADDRESS=(PROTOCOL=TCP)(HOST=DBFAIL)(PORT=1521))
  )
  (CONNECT_DATA=(SERVICE_NAME=PROD)(SERVER=DEDICATED)
  )
)

12.5: CLIENT LOAD BALANCING:
----------------------------

Client Load Balancing is a feature that allows clients to randomly select from a
list of listeners. Oracle Net moves through the list of listeners and balances
the load of connection requests across the available listeners.

Here is an example of the tnsnames.ora entry that allows for load balancing:

ORCL=
(DESCRIPTION=
  (LOAD_BALANCE=ON)
  (ADDRESS_LIST=
     (ADDRESS=(PROTOCOL=TCP)(HOST=MWEISHAN-DELL)(PORT=1522))
     (ADDRESS=(PROTOCOL=TCP)(HOST=MWEISHAN-DELL)(PORT=1521))
  )
  (CONNECT_DATA=(SERVICE_NAME=PROD)(SERVER=DEDICATED)
  )
)

Notice the additional parameter LOAD_BALANCE. This enables load balancing
between the two listener locations specified.

12.6: ORACLE SHARED SERVER:
---------------------------

With a dedicated server, each server process has a PGA outside the SGA.
When Shared Server is used, the user program areas are kept in the SGA, in the
large pool. With a few init.ora parameters, you can configure Shared Server.

1. DISPATCHERS:

The DISPATCHERS parameter defines the number of dispatchers that should start
when the instance is started. For example, if you want to configure 3 TCP/IP
dispatchers and 2 IPC dispatchers, you set the parameter as follows:

DISPATCHERS="(PRO=TCP)(DIS=3)(PRO=IPC)(DIS=2)"

For example, if you have 500 concurrent TCP/IP connections, and you want each
dispatcher to manage 50 concurrent connections, you need 10 dispatchers.
You set your DISPATCHERS parameter as follows:

DISPATCHERS="(PRO=TCP)(DIS=10)"

2. SHARED_SERVERS:

The SHARED_SERVERS parameter specifies the minimum number of shared servers to
start and retain when the Oracle instance is started.

View information about dispatchers and shared servers with the following
commands and queries:

lsnrctl services

SELECT name, status, messages, idle, busy, bytes, breaks
FROM v$dispatcher;

12.7: Keeping Oracle connections alive through a Firewall:
----------------------------------------------------------

Implementing keep-alive packets: SQLNET.EXPIRE_TIME (in minutes, in the
server-side sqlnet.ora). SQLNET.INBOUND_CONNECT_TIMEOUT limits how long a
client may take to complete the connect handshake.
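The IDLE and BUSY columns of v$dispatcher (queried above) are cumulative
counters in hundredths of a second, so a busy percentage can be derived from
them. A minimal sketch with made-up sample values; the 50% threshold often
quoted for adding dispatchers is a rule of thumb, not an Oracle-documented
limit:

```python
def dispatcher_busy_pct(busy, idle):
    """Percentage of time a dispatcher was busy, from the cumulative
    BUSY and IDLE columns (hundredths of seconds) of v$dispatcher."""
    total = busy + idle
    return 100.0 * busy / total if total else 0.0

# Hypothetical sample: dispatcher busy 1500, idle 8500 -> mostly idle.
print(round(dispatcher_busy_pct(busy=1500, idle=8500), 1))   # 15.0
```

If this rate stays high over several samples, consider raising the DIS count
in the DISPATCHERS parameter.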

Notes:
=======

Note 1:
-------

Doc ID:             Note:274130.1
Subject:            SHARED SERVER CONFIGURATION
Type:               BULLETIN
Status:             PUBLISHED
Content Type:       TEXT/PLAIN
Creation Date:      25-MAY-2004
Last Revision Date: 24-JUN-2004

PURPOSE
-------

This article discusses the configuration of shared servers on a 9i database.

SHARED SERVER CONFIGURATION:
===========================

1. Add the parameter SHARED_SERVERS in the init.ora.

SHARED_SERVERS specifies the number of server processes that you want to create
when an instance is started up. If system load decreases, this minimum number
of servers is maintained. Therefore, you should take care not to set
SHARED_SERVERS too high at system startup.

Parameter type:  Integer
Parameter class: Dynamic: ALTER SYSTEM

2. Add the parameter DISPATCHERS in the init.ora.

DISPATCHERS configures dispatcher processes in the shared server architecture.

USAGE:
------
DISPATCHERS = "(PROTOCOL=TCP)(DISPATCHERS=3)"

3. Save the init.ora file.

4. Change the connect string in tnsnames.ora from

ORACLE.IDC.ORACLE.COM =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = xyzac)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = oracle)
    )
  )

to

ORACLE.IDC.ORACLE.COM =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = xyzac)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = SHARED)
      (SERVICE_NAME = Oracle)
    )
  )

That is, change to SERVER=SHARED.

5. Shutdown and startup the database.

6. Make a new connection to the database other than SYSDBA.
   (NOTE: SYSDBA will always acquire a dedicated connection by default.)

7. Check whether the connection goes through a shared server:

   SQL> SELECT server FROM v$session;

   SERVER
   ---------
   DEDICATED
   DEDICATED
   DEDICATED
   SHARED
   DEDICATED

NOTE:
====
The following parameters are optional (if not specified, Oracle selects
defaults):

MAX_DISPATCHERS:
===============
Specifies the maximum number of dispatcher processes that can run
simultaneously.

SHARED_SERVERS:
==============

Specifies the number of shared server processes created when an instance is started up. MAX_SHARED_SERVERS: ================== Specifies the maximum number of shared server processes that can run simultaneously. CIRCUITS: ======== Specifies the total number of virtual circuits that are available for inbound and outbound network sessions. SHARED_SERVER_SESSIONS: ====================== Specifies the total number of shared server user sessions to allow. Setting this parameter enables you to reserve user sessions for dedicated servers.

Other parameters affected by shared server that may require adjustment: LARGE_POOL_SIZE: =============== Specifies the size in bytes of the large pool allocation heap. Shared server may force the default value to be set too high, causing performance problems or problems starting the database. SESSIONS: ======== Specifies the maximum number of sessions that can be created in the system. May need to be adjusted for shared server.
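The dispatcher sizing rule from section 12.6 (total concurrent connections
divided by the connections each dispatcher should manage, rounded up) can be
sketched as a small helper; the 50-connections-per-dispatcher figure is just
the example value from the text, not a fixed limit:

```python
import math

def dispatchers_needed(concurrent_connections, connections_per_dispatcher):
    """Number of dispatchers so that each one serves at most the given
    number of concurrent connections (rounded up)."""
    return math.ceil(concurrent_connections / connections_per_dispatcher)

# 500 concurrent TCP/IP connections, 50 per dispatcher
# -> DISPATCHERS="(PRO=TCP)(DIS=10)"
print(dispatchers_needed(500, 50))   # 10
```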

12.8 Password for the listener:
-------------------------------

Note 1:
LSNRCTL> set password <password>

where <password> is the password you want to use. To change a password, use
CHANGE_PASSWORD. You can also designate a password when you configure the
listener with the Net8 Assistant. These passwords are stored in the
listener.ora file and although they will not show in the Net8 Assistant,
they are readable in the listener.ora file.

Note 2:
The password can be set either by specifying it through the command
CHANGE_PASSWORD, or through a parameter in the listener.ora file. We saw how
to do that through the CHANGE_PASSWORD command earlier. If the password is
changed this way, it should not be specified in the listener.ora file. The
password is not displayed anywhere. When supplying the password in the
listener control utility, you must supply it at the Password: prompt as shown
above. You cannot specify the password on one line as shown below.

LSNRCTL> set password t0p53cr3t
LSNRCTL> stop
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC)))
TNS-01169: The listener has not recognized the password
LSNRCTL>

Note 3:
A more correct method would be to password protect the listener functions.
See the Net8 admin guide for details, but in short you can:

LSNRCTL> change_password
Old password: <just hit enter if you don't have one yet>
New password:
Reenter new password:
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=slackdog)(PORT=1521)))
Password changed for LISTENER
The command completed successfully
LSNRCTL> set password
Password:
The command completed successfully
LSNRCTL> save_config
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=slackdog)(PORT=1521)))
Saved LISTENER configuration parameters.
Listener Parameter File /d01/home/oracle8i/network/admin/listener.ora
Old Parameter File /d01/home/oracle8i/network/admin/listener.bak
The command completed successfully
LSNRCTL>

Now you need to use a password to do various operations (such as STOP) but not
others (such as STATUS).

=============================================
13. Datadictionary queries Rollback segments:
=============================================

13.1 Name, location, and status of rollback segments:
-----------------------------------------------------

SELECT substr(segment_name, 1, 10), substr(tablespace_name, 1, 20), status,
INITIAL_EXTENT, NEXT_EXTENT, MIN_EXTENTS, MAX_EXTENTS, PCT_INCREASE
FROM DBA_ROLLBACK_SEGS;

13.2 Number of active transactions per rollback segment:
--------------------------------------------------------

number of active transactions: V$ROLLSTAT
name of the rollback segment:  V$ROLLNAME

SELECT n.name, s.xacts
FROM V$ROLLNAME n, V$ROLLSTAT s
WHERE n.usn=s.usn;        (usn = undo segment number)

13.3 Size, name, extents, and bytes of the rollback segments:
-------------------------------------------------------------

SELECT substr(segment_name, 1, 15), bytes/1024/1024 Size_in_MB, blocks, extents,
substr(tablespace_name, 1, 15)
FROM DBA_SEGMENTS
WHERE segment_type='ROLLBACK';

SELECT n.name, s.extents, s.rssize
FROM V$ROLLNAME n, V$ROLLSTAT s
WHERE n.usn=s.usn;

Create Tablespace RBS
datafile '/db1/oradata/oem/rbs.dbf' SIZE 200M
AUTOEXTEND ON NEXT 20M MAXSIZE 500M
LOGGING
DEFAULT STORAGE (
INITIAL 5M
NEXT 5M
MINEXTENTS 2
MAXEXTENTS 100
PCTINCREASE 0 )
ONLINE
PERMANENT;

13.4 The OPTIMAL parameter:
---------------------------

SELECT n.name, s.optsize
FROM V$ROLLNAME n, V$ROLLSTAT s
WHERE n.usn=s.usn;

13.5 Writes to rollback segments:
---------------------------------

Run this query at the start and at the end of a measurement period, and
compare the difference:

SELECT n.name, s.writes
FROM V$ROLLNAME n, V$ROLLSTAT s
WHERE n.usn=s.usn;

13.6 Which users and processes use the rollback segments:
---------------------------------------------------------

Query 1: query on v$lock, v$session, v$rollname

column rr heading 'RB Segment' format a15
column us heading 'Username'   format a10
column os heading 'OS user'    format a10
column te heading 'Terminal'   format a15

SELECT R.name rr, nvl(S.username, 'no transaction') us,
       S.Osuser os, S.Terminal te
FROM V$LOCK L, V$SESSION S, V$ROLLNAME R
WHERE L.Sid=S.Sid(+)
AND trunc(L.Id1/65536)=R.usn
AND L.Type='TX'
AND L.Lmode=6
ORDER BY R.name
/

Query 2:

SELECT r.name "RBS", s.sid, s.serial#, s.username "USER", t.status, t.cr_get,
t.phy_io, t.used_ublk, t.noundo, substr(s.program, 1, 78) "COMMAND"
FROM sys.v_$session s, sys.v_$transaction t, sys.v_$rollname r
WHERE t.addr = s.taddr
AND t.xidusn = r.usn
ORDER BY t.cr_get, t.phy_io
/

13.7 Determining the minimum number of rollback segments:
---------------------------------------------------------

Check the init.ora settings via "show parameter transactions":

transactions                      = a  (max no of concurrent transactions, say 100)
transactions_per_rollback_segment = b  (allowed no of concurrent transactions per rollback segment, say 10)

minimum = a/b  (100/10 = 10)
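The arithmetic in 13.7 can be expressed as a small helper; it rounds up, since
a fractional rollback segment is not possible:

```python
import math

def min_rollback_segments(transactions, transactions_per_rbs):
    """Minimum number of rollback segments so the configured transaction
    load (init.ora TRANSACTIONS) can be spread across them, given
    TRANSACTIONS_PER_ROLLBACK_SEGMENT."""
    return math.ceil(transactions / transactions_per_rbs)

print(min_rollback_segments(100, 10))   # 10
```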

13.8 Determining the minimum size of rollback segments:
-------------------------------------------------------

lts      = largest transaction size (during normal production, not the occasional batch load)
min_size = minimum size of a rollback segment

min_size = lts * 100 / (100 - (40 {%free} + 15 {iaiu} + 5 {header}))
min_size = lts * 2.5

So if lts = 700K, the starting size of a rollback segment is 1750K.

============================================================
14. Data dictionary queries regarding security, permissions:
============================================================

14.1 User information in the data dictionary:
---------------------------------------------

SELECT username, user_id, password
FROM DBA_USERS
WHERE username='Kees';

14.2 Default tablespace and account_status of users:
----------------------------------------------------

SELECT username, default_tablespace, account_status
FROM DBA_USERS;

14.3 Tablespace quotas of users:
--------------------------------

SELECT tablespace_name, bytes, max_bytes, blocks, max_blocks
FROM DBA_TS_QUOTAS
WHERE username='CHARLIE';

14.4 Retrieving the system privileges of a user: DBA_SYS_PRIVS
--------------------------------------------------------------

SELECT substr(grantee, 1, 15), substr(privilege, 1, 40), admin_option
FROM DBA_SYS_PRIVS
WHERE grantee='CHARLIE';

SELECT * FROM dba_sys_privs WHERE grantee='Kees';

14.5 Invalid objects in DBA_OBJECTS:
------------------------------------

SELECT substr(owner, 1, 10), substr(object_name, 1, 40),
substr(object_type, 1, 40), status
FROM DBA_OBJECTS
WHERE status='INVALID';

14.6 Session information:
-------------------------

SELECT sid, serial#, substr(username, 1, 10), substr(osuser, 1, 10),
substr(schemaname, 1, 10), substr(program, 1, 15), substr(module, 1, 15),
status, logon_time, substr(terminal, 1, 15), substr(machine, 1, 15)
FROM V$SESSION;

14.7 Kill a session:
--------------------

ALTER SYSTEM KILL SESSION 'SID, SERIAL#';

========================
15. INIT.ORA parameters:
========================

15.1 init.ora parameters and ARCHIVE MODE:
------------------------------------------

LOG_ARCHIVE_DEST=/oracle/admin/cc1/arch
LOG_ARCHIVE_START=TRUE
LOG_ARCHIVE_FORMAT=archcc1_%s.log

10g:
LOG_ARCHIVE_DEST='c:\oracle\oradata\log'
LOG_ARCHIVE_FORMAT='arch_%t_%s_%r.dbf'

other:
LOG_ARCHIVE_DEST_1=
LOG_ARCHIVE_DEST_2=
LOG_ARCHIVE_MAX_PROCESSES=2

15.2 init.ora, performance, and the SGA:
----------------------------------------

SORT_AREA_SIZE                = 65536      (per PGA, max sort area)
SORT_AREA_RETAINED_SIZE       = 65536      (size after sort)
PROCESSES                     = 100        (all processes)
DB_BLOCK_SIZE                 = 8192
DB_BLOCK_BUFFERS              = 3400       (DB_CACHE_SIZE in Oracle 9i)
SHARED_POOL_SIZE              = 52428800
LOG_BUFFER                    =
LARGE_POOL_SIZE               = 26214400
DBWR_IO_SLAVES                =            (see also DB_WRITER_PROCESSES)
DB_WRITER_PROCESSES           = 2
LGWR_IO_SLAVES                =
DB_FILE_MULTIBLOCK_READ_COUNT = 16         (minimize IO during table scans; specifies the max
                                            number of blocks in one IO operation during
                                            sequential read)
BUFFER_POOL_RECYCLE           =
BUFFER_POOL_KEEP              =
TIMED_STATISTICS              = TRUE       (whether statistics related to time are collected)
OPTIMIZER_MODE                = RULE, CHOOSE, FIRST_ROWS, ALL_ROWS
PARALLEL_MIN_SERVERS          = 2          (for Parallel Query and parallel recovery)
PARALLEL_MAX_SERVERS          = 4
RECOVERY_PARALLELISM          = 2          (set parallel recovery at database level)

SHARED_POOL_SIZE: in bytes, or K or M.
SHARED_POOL_SIZE specifies (in bytes) the size of the shared pool. The shared
pool contains shared cursors, stored procedures, control structures, and other
structures. If you set PARALLEL_AUTOMATIC_TUNING to false, Oracle also
allocates parallel execution message buffers from the shared pool. Larger
values improve performance in multi-user systems. Smaller values use less
memory. You can monitor utilization of the shared pool by querying the view
V$SGASTAT.

SHARED_POOL_RESERVED_SIZE:
The parameter was introduced in Oracle 7.1.5 and provides a means of reserving
a portion of the shared pool for large memory allocations. The reserved area
comes out of the shared pool itself. From a practical point of view one should
set SHARED_POOL_RESERVED_SIZE to about 10% of SHARED_POOL_SIZE, unless either
the shared pool is very large OR SHARED_POOL_RESERVED_MIN_ALLOC has been set
lower than the default value.
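The 10% rule of thumb above can be expressed as a quick calculation; this is a
sketch of the guideline, not an Oracle-supplied formula:

```python
def shared_pool_reserved_size(shared_pool_size_bytes, pct=10):
    """Rule-of-thumb reserved area: about 10% of SHARED_POOL_SIZE."""
    return shared_pool_size_bytes * pct // 100

# 50 MB shared pool (52428800 bytes) -> 5 MB reserved area
print(shared_pool_reserved_size(52428800))   # 5242880
```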

15.3 init.ora and jobs:
-----------------------

JOB_QUEUE_PROCESSES=1    number of SNP processes (SNP0, SNP1), max 36, used for
                         replication and job queues
JOB_QUEUE_INTERVAL=60    check interval

15.4 instance name, sid:
------------------------

db_name       = CC1
global_names  = TRUE
instance_name = CC1
db_domain     = antapex.net

15.5 Other parameters:
----------------------

OS_AUTHENT_PREFIX         = ""                (default is OPS$)
REMOTE_OS_AUTHENTICATION  = TRUE or FALSE     (whether OS authentication over the network is allowed)
REMOTE_LOGIN_PASSWORDFILE = NONE or EXCLUSIVE
distributed_transactions  = 0 or >0           (starts the RECO process)
aq_tm_processes           =                   (advanced queuing, message queues)
mts_servers               =                   (number of shared server processes in multithreaded server)
mts_max_servers           =
audit_file_dest           = /dbs01/app/oracle/admin/AMI_PRD/adump
background_dump_dest      = /dbs01/app/oracle/admin/AMI_PRD/bdump
user_dump_dest            = /dbs01/app/oracle/admin/AMI_PRD/udump
core_dump_dest            = /dbs01/app/oracle/admin/AMI_PRD/cdump
resource_limit            = TRUE              (specifies whether resource limits in profiles are enforced)
license_max_sessions      =                   (max number of concurrent user sessions)
license_sessions_warning  =                   (at this limit, warning in alert log)
license_max_users         =                   (maximum number of users that can be created in the database)
compatible                = 8.1.7.0.0

control_files             = /dbs04/oradata/AMI_PRD/ctrl/cc1_01.ctl
control_files             = /dbs05/oradata/AMI_PRD/ctrl/cc1_02.ctl
control_files             = /dbs06/oradata/AMI_PRD/ctrl/cc1_03.ctl
db_files                  = 150               (max number of data files opened)
java_pool_size            = 0
log_checkpoint_interval   = 10000
log_checkpoint_timeout    = 1800
max_dump_file_size        = 10240
max_enabled_roles         = 40
nls_date_format           = "DD-MM-YYYY"
nls_language              = AMERICAN
nls_territory             = AMERICA
o7_dictionary_accessibility = TRUE
open_cursors              = 250
optimizer_max_permutations = 1000
optimizer_mode            = CHOOSE
parallel_max_servers      = 5
pre_page_sga              = TRUE
service_names             = CC1
utl_file_dir              = /app01/oradata/cc1/utl_file

All init.ora parameters:
------------------------

PARAMETER                      DESCRIPTION
------------------------------ ----------------------------------------
O7_DICTIONARY_ACCESSIBILITY    Version 7 Dictionary Accessibility support [TRUE | FALSE]
active_instance_count          Number of active instances in the cluster database [NUMBER]
aq_tm_processes                Number of AQ Time Managers to start [NUMBER]
archive_lag_target             Maximum number of seconds of redos the standby could lose [NUMBER]
asm_diskgroups                 Disk groups to mount automatically [CHAR]
asm_diskstring                 Disk set locations for discovery [CHAR]
asm_power_limit                Number of processes for disk rebalancing [NUMBER]
audit_file_dest                Directory in which auditing files are to reside ['Path']
audit_sys_operations           Enable sys auditing [TRUE | FALSE]
audit_trail                    Enable system auditing [NONE | DB | DB_EXTENDED | OS]
background_core_dump           Core Size for Background Processes [partial | full]
background_dump_dest           Detached process dump directory [file_path]
backup_tape_io_slaves          BACKUP Tape I/O slaves [TRUE | FALSE]
bitmap_merge_area_size         Maximum memory allow for BITMAP MERGE [NUMBER]
blank_trimming                 Blank trimming semantics parameter [TRUE | FALSE]
buffer_pool_keep               Number of database blocks/latches in keep buffer pool [CHAR: (buffers:n, latches:m)]
buffer_pool_recycle            Number of database blocks/latches in recycle buffer pool [CHAR: (buffers:n, latches:m)]

circuits                       Max number of virtual circuits [NUMBER]
cluster_database               If TRUE startup in cluster database mode [TRUE | FALSE]
cluster_database_instances     Number of instances to use for sizing cluster db SGA structures [NUMBER]
cluster_interconnects          Interconnects for RAC use [CHAR]
commit_point_strength          Bias this node has toward not preparing in a two-phase commit [NUMBER (0-255)]
compatible                     Database will be completely compatible with this software version [CHAR: 9.2.0.0.0]
control_file_record_keep_time  Control file record keep time in days [NUMBER]
control_files                  Control file names list [file_path,file_path..]
core_dump_dest                 Core dump directory [file_path]
cpu_count                      Initial number of cpu's for this instance [NUMBER]
create_bitmap_area_size        Size of create bitmap buffer for bitmap index [INTEGER]
cursor_sharing                 Cursor sharing mode [EXACT | SIMILAR | FORCE]
create_stored_outlines         Create stored outlines for DML statements [TRUE | FALSE | category_name]
cursor_space_for_time          Use more memory in order to get faster execution [TRUE | FALSE]
db_16k_cache_size              Size of cache for 16K buffers [bytes]
db_2k_cache_size               Size of cache for 2K buffers [bytes]
db_32k_cache_size              Size of cache for 32K buffers [bytes]
db_4k_cache_size               Size of cache for 4K buffers [bytes]
db_8k_cache_size               Size of cache for 8K buffers [bytes]
db_block_buffers               Number of database blocks to cache in memory [bytes: 8M or NUMBER of blocks (Ora7)]
db_block_checking              Data and index block checking [TRUE | FALSE]
db_block_checksum              Store checksum in db blocks and check during reads [TRUE | FALSE]
db_block_size                  Size of database block [bytes]
db_cache_advice                Buffer cache sizing advisory [internal use only]
db_cache_size                  Size of DEFAULT buffer pool for standard block size buffers [bytes]
db_create_file_dest            Default database location ['Path_to_directory']
db_create_online_log_dest_n    Online log/controlfile destination (where n=1-5) ['Path']
db_domain                      Directory part of global database name stored with CREATE DATABASE [CHAR]
db_file_multiblock_read_count  Db blocks to be read each IO [NUMBER]
db_file_name_convert           Datafile name convert patterns and strings for standby/clone db [, ]
db_files                       Max allowable # db files [NUMBER]
db_flashback_retention_target  Maximum Flashback Database log retention time in minutes [NUMBER]
db_keep_cache_size             Size of KEEP buffer pool for standard block size buffers [bytes]
db_name                        Database name specified in CREATE DATABASE [CHAR]
db_recovery_file_dest          Default database recovery file location [CHAR]
db_recovery_file_dest_size     Database recovery files size limit [bytes]
db_recycle_cache_size          Size of RECYCLE buffer pool for standard block size buffers [bytes]
db_unique_name                 Database Unique Name [CHAR]
db_writer_processes            Number of background database writer processes to start [NUMBER]

dblink_encrypt_login           Enforce password for distributed login always be encrypted [TRUE | FALSE]
dbwr_io_slaves                 DBWR I/O slaves [NUMBER]
ddl_wait_for_locks             Disable NOWAIT DML lock acquisitions [TRUE | FALSE]
dg_broker_config_file1         Data guard broker configuration file #1 ['Path']
dg_broker_config_file2         Data guard broker configuration file #2 ['Path']
dg_broker_start                Start Data Guard broker framework (DMON process) [TRUE | FALSE]
disk_asynch_io                 Use asynch I/O for random access devices [TRUE | FALSE]
dispatchers                    Specifications of dispatchers (MTS_dispatchers in Ora 8) [CHAR]
distributed_lock_timeout       Number of seconds a distributed transaction waits for a lock [Internal]
dml_locks                      Dml locks - one for each table modified in a transaction [NUMBER]
drs_start                      Start DG Broker monitor (DMON process) [TRUE | FALSE]
enqueue_resources              Resources for enqueues [NUMBER]
event                          Debug event control - default null string [CHAR]
fal_client                     FAL client [CHAR]
fal_server                     FAL server list [CHAR]
fast_start_io_target           Upper bound on recovery reads [NUMBER]
fast_start_mttr_target         MTTR target of forward crash recovery in seconds [NUMBER]
fast_start_parallel_rollback   Max number of parallel recovery slaves that may be used [LOW | HIGH | FALSE]
file_mapping                   Enable file mapping [TRUE | FALSE]
fileio_network_adapters        Network Adapters for File I/O [CHAR]
filesystemio_options           IO operations on filesystem files [Internal]
fixed_date                     Fix SYSDATE value for debugging [NONE or '2000_12_30_24_59_00']
gc_files_to_locks              RAC/OPS - lock granularity number of global cache locks per file (DFS) [CHAR]
gcs_server_processes           Number of background gcs server processes to start [NUMBER]
global_context_pool_size       Global Application Context Pool Size in Bytes [bytes]
global_names                   Enforce that database links have same name as remote database [TRUE | FALSE]
hash_area_size                 Size of in-memory hash work area (Shared Server) [bytes]
hash_join_enabled              Enable/disable hash join (CBO) [TRUE | FALSE]
hi_shared_memory_address       SGA starting address (high order 32-bits on 64-bit platforms) [NUMBER]
hs_autoregister                Enable automatic server DD updates in HS agent self-registration [TRUE | FALSE]
ifile                          Include file in init.ora ['path_to_file']
instance_groups                List of instance group names [CHAR]
instance_name                  Instance name supported by the instance [CHAR]
instance_number                Instance number [NUMBER]

instance_type                  Type of instance to be executed, RDBMS or Automated Storage Management [RDBMS | ASM]
java_max_sessionspace_size     Max allowed size in bytes of a Java sessionspace [bytes]
java_pool_size                 Size in bytes of the Java pool [bytes]
java_soft_sessionspace_limit   Warning limit on size in bytes of a Java sessionspace [NUMBER]
job_queue_processes            Number of job queue slave processes [NUMBER]
large_pool_size                Size in bytes of the large allocation pool [bytes]
ldap_directory_access          RDBMS's LDAP access option [NONE | PASSWORD | SSL]
license_max_sessions           Maximum number of non-system user sessions (concurrent licensing) [NUMBER]
license_max_users              Maximum number of named users that can be created (named user licensing) [NUMBER]
license_sessions_warning       Warning level for number of non-system user sessions [NUMBER]
local_listener                 Define which listeners instances register with [CHAR]
lock_name_space                Used for generating lock names for standby/primary database; assign each a unique name space [CHAR]
lock_sga                       Lock entire SGA in physical memory [Internal]
log_archive_config             Log archive config [SEND|NOSEND] [RECEIVE|NORECEIVE] [DG_CONFIG]
log_archive_dest               Archive logs destination ['path_to_directory']
log_archive_dest_n             Archive logging parameters (n=1-10), Enterprise Edition [CHAR]
log_archive_dest_state_n       Archive logging parameter status (n=1-10), Enterprise Edition [CHAR]
log_archive_duplex_dest        Duplex archival destination ['path_to_directory']
log_archive_format             Archive log filename format [CHAR: "MyApp%S.ARC"]
log_archive_local_first        Establish EXPEDITE attribute default value [TRUE | FALSE]
log_archive_max_processes      Maximum number of active ARCH processes [NUMBER]
log_archive_min_succeed_dest   Minimum number of archive destinations that must succeed [NUMBER]
log_archive_start              Start archival process on SGA initialization [TRUE | FALSE]
log_archive_trace              Archive log tracing level [NUMBER]
log_buffer                     Redo circular buffer size [bytes]
log_checkpoint_interval        Checkpoint threshold, # redo blocks [NUMBER]
log_checkpoint_timeout         Checkpoint threshold, maximum time interval between checkpoints in seconds [NUMBER]
log_checkpoints_to_alert       Log checkpoint begin/end to alert file [TRUE | FALSE]
log_file_name_convert          Logfile name convert patterns and strings for standby/clone db [, ]
log_parallelism                Number of log buffer strands [NUMBER]

logmnr_max_persistent_sessions Maximum number of threads to mine [NUMBER]
max_commit_propagation_delay   Max age of new snapshot in .01 seconds [NUMBER]
max_dispatchers                Max number of dispatchers [NUMBER]
max_dump_file_size             Maximum size (blocks) of dump file [UNLIMITED or bytes]
max_enabled_roles              Max number of roles a user can have enabled [NUMBER]
max_rollback_segments          Max number of rollback segments in SGA cache [NUMBER]
max_shared_servers             Max number of shared servers [NUMBER]
mts_circuits                   Max number of circuits [NUMBER]
mts_dispatchers                Specifications of dispatchers [CHAR]
mts_listener_address           Address(es) of network listener [CHAR]
mts_max_dispatchers            Max number of dispatchers [NUMBER]
mts_max_servers                Max number of shared servers [NUMBER]
mts_multiple_listeners         Are multiple listeners enabled? [TRUE | FALSE]
mts_servers                    Number of shared servers to start up [NUMBER]
mts_service                    Service supported by dispatchers [CHAR]
mts_sessions                   Max number of shared server sessions [NUMBER]
nls_calendar                   NLS calendar system name (Default=GREGORIAN) [CHAR]
nls_comp                       NLS comparison, Enterprise Edition [BINARY | ANSI]
nls_currency                   NLS local currency symbol [CHAR]
nls_date_format                NLS Oracle date format [CHAR]
nls_date_language              NLS date language name (Default=AMERICAN) [CHAR]
nls_dual_currency              Dual currency symbol [CHAR]
nls_iso_currency               NLS ISO currency territory name; overrides the default set by NLS_TERRITORY [CHAR]
nls_language                   NLS language name (session default) [CHAR]
nls_length_semantics           Create columns using byte or char semantics by default [BYTE | CHAR]
nls_nchar_conv_excp            NLS raise an exception instead of allowing implicit conversion [CHAR]
nls_numeric_characters         NLS numeric characters [CHAR]
nls_sort                       Case-sensitive or insensitive sort [Language]; language may be BINARY, BINARY_CI, BINARY_AI, GERMAN, GERMAN_CI, etc
nls_territory                  NLS territory name (country settings) [CHAR]
nls_time_format                Time format [CHAR]
nls_time_tz_format             Time with timezone format [CHAR]
nls_timestamp_format           Time stamp format [CHAR]
nls_timestamp_tz_format        Timestamp with timezone format [CHAR]
object_cache_max_size_percent  Percentage of maximum size over optimal of the user session's object cache [NUMBER]
object_cache_optimal_size      Optimal size of the user session's object cache in bytes [bytes]
olap_page_pool_size            Size of the olap page pool in bytes [bytes]
open_cursors                   Max # cursors per session [NUMBER]
open_links                     Max # open links per session [NUMBER]
open_links_per_instance        Max # open links per instance [NUMBER]
optimizer_dynamic_sampling     Optimizer dynamic sampling [NUMBER]
optimizer_features_enable      Optimizer plan compatibility (oracle version, e.g. 8.1.7) [CHAR]
optimizer_index_caching        Optimizer index caching percent [NUMBER]
optimizer_index_cost_adj optimizer_max_permutations optimizer_mode L_ROWS] oracle_trace_collection_name oracle_trace_collection_path oracle_trace_collection_size oracle_trace_enable oracle_trace_facility_name oracle_trace_facility_path os_authent_prefix os_roles | FALSE] parallel_adaptive_multi_user

Optimizer index cost adjustment [NUMBER] Optimizer maximum join permutations per query block [NUMBER] Optimizer mode [RULE | CHOOSE | FIRST_ROWS | AL Oracle TRACE default collection name [CHAR] Oracle TRACE collection path [CHAR] Oracle TRACE collection file max. size [NUMBER] Oracle Trace enabled/disabled [TRUE | FALSE] Oracle TRACE default facility name [CHAR] Oracle TRACE facility path [CHAR] Prefix for auto-logon accounts [CHAR] Retrieve roles from the operating system [TRUE

Enable adaptive setting of degree for multiple user streams [TRUE | FALSE] parallel_automatic_tuning Enable intelligent defaults for parallel execution parameters [TRUE | FALSE] parallel_execution_message_size Message buffer size for parallel execution [bytes] parallel_instance_group Instance group to use for all parallel operations [CHAR] parallel_max_servers Maximum parallel query servers per instance [NUMBER] parallel_min_percent Minimum percent of threads required for parallel query [NUMBER] parallel_min_servers Minimum parallel query servers per instance [NUMBER] parallel_server If TRUE startup in parallel server mode [TRUE | FALSE] parallel_server_instances Number of instances to use for sizing OPS SGA structures [NUMBER] parallel_threads_per_cpu Number of parallel execution threads per CPU [NUMBER] partition_view_enabled Enable/disable partitioned views [TRUE | FALSE] pga_aggregate_target Target size for the aggregate PGA memory consumed by the instance [bytes] plsql_code_type PL/SQL code-type [INTERPRETED | NATIVE] plsql_compiler_flags PL/SQL compiler flags [CHAR] plsql_debug PL/SQL debug [TRUE | FALSE] plsql_native_c_compiler plsql native C compiler [CHAR] plsql_native_library_dir plsql native library dir ['Path_to_directory'] plsql_native_library_subdir_count plsql native library number of subdirectories [NUMBER] plsql_native_linker plsql native linker [CHAR] plsql_native_make_file_name plsql native compilation make file [CHAR] plsql_native_make_utility plsql native compilation make utility [CHAR] plsql_optimize_level PL/SQL optimize level [NUMBER] plsql_v2_compatibility PL/SQL version 2.x compatibility flag [TRUE | F ALSE] plsql_warnings PL/SQL compiler warnings settings [CHAR] See also DBMS_WARNING and DBA_PLSQL_OBJECT_SET TINGS pre_page_sga Pre-page sga for process [TRUE | FALSE] processes User processes [NUMBER] query_rewrite_enabled ws Allow rewrite of queries using materialized vie if enabled [FORCE | TRUE | FALSE]

query_rewrite_integrity TED | ENFORCED] rdbms_server_dn read_only_open_delayed recovery_parallelism remote_archive_enable | FALSE | TRUE] remote_dependencies_mode

Perform rewrite using materialized views with desired integrity [STALE_TOLERATED | TRUS RDBMS's Distinguished Name [CHAR] If TRUE delay opening of read only files until first access [TRUE | FALSE] Number of server processes to use for parallel recovery [NUMBER] Remote archival enable setting [RECEIVE[,SEND]

Remote-procedure-call dependencies mode parameter [TIMESTAMP | SIGNATURE] remote_listener Remote listener [CHAR] remote_login_passwordfile Use a password file [NONE | SHARED | EXCLUSIVE] remote_os_authent Allow non-secure remote clients to use auto-logon accounts [TRUE | FALSE] remote_os_roles Allow non-secure remote clients to use os roles [TRUE | FALSE] replication_dependency_tracking Tracking dependency for Replication parallel propagation [TRUE | FALSE] resource_limit Master switch for resource limit [TRUE | FALSE] resource_manager_plan Resource mgr top plan [Plan_Name] resumable_timeout Set resumable_timeout, seconds [NUMBER] rollback_segments Undo segment list [CHAR] row_locking Row-locking [ALWAYS | DEFAULT | INTENT] (Defaul t=always) serial_reuse PLSQL|ALL|NULL] serializable service_names session_cached_cursors session_max_open_files sessions sga_max_size sga_target shadow_core_dump | NONE] shared_memory_address shared_pool_reserved_size shared_pool_size shared_server_sessions shared_servers skip_unusable_indexes LSE] sort_area_retained_size sort_area_size smtp_out_server r [server_clause] spfile sp_name sql92_security sql_trace Reuse the frame segments [DISABLE | SELECT|DML| Serializable [Internal] Service names supported by the instance [CHAR] Number of cursors to save in the session cursor cache [NUMBER] Maximum number of open files allowed per session [NUMBER] User and system sessions [NUMBER] Max total SGA size [bytes] Target size of SGA [bytes] Core Size for Shadow Processes [PARTIAL | FULL SGA starting address (low order 32-bits on 64-bit platforms) [NUMBER] Size in bytes of reserved area of shared pool [bytes] Size in bytes of shared pool [bytes] Max number of shared server sessions [NUMBER] Number of shared servers to start up [NUMBER] Skip unusable indexes if set to true [TRUE | FA Size of in-memory sort work area retained between fetch calls [bytes] Size of in-memory sort work area [bytes] utl_smtp server and port configuration paramete Server 
parameter file [CHAR] Service Provider Name [CHAR] Require select privilege for searched update/delete [TRUE | FALSE] Enable SQL trace [TRUE | FALSE]

sqltune_category sql_version standby_archive_dest standby_file_management star_transformation_enabled statistics_level streams_pool_size tape_asynch_io | FALSE] thread timed_os_statistics timed_statistics SE] trace_enabled | FALSE] tracefile_identifier transaction_auditing

Category qualifier for applying hintsets [CHAR] Sql language version parameter for compatibility issues [CHAR] Standby database archivelog destination text string ['Path_to_directory'] If auto then files are created/dropped automatically on standby [MANUAL | AUTO] Enable the use of star transformation [TRUE | FALSE | DISABLE_TEMP_TABLE] Statistics level [ALL | TYPICAL | BASIC] Size in bytes of the streams pool [bytes] Use asynch I/O requests for tape devices [TRUE Redo thread to mount [NUMBER] Internal os statistic gathering interval in seconds [NUMBER] Maintain internal timing statistics [TRUE | FAL Enable KST tracing (Internal parameter) [TRUE

Trace file custom identifier [CHAR] Transaction auditing records generated in the redo log [TRUE | FALSE] transactions Max. number of concurrent active transactions [NUMBER] transactions_per_rollback_segment Number of active transactions per rollback segment [NUMBER] undo_management undo_retention undo_suppress_errors undo_tablespace use_indirect_data_buffers Instance runs in SMU mode if TRUE, else in RBU mode [MANUAL | AUTO] Undo retention in seconds [NUMBER] Suppress RBU errors in SMU mode [TRUE | FALSE] Use or switch undo tablespace [Undo_tbsp_name] Enable indirect data buffers (very large SGA on 32-bit platforms [TRUE | FALSE] User process dump directory ['Path_to_directory utl_file accessible directories list utl_file_dir='Path1', 'Path2'.. or utl_file_dir='Path1' # Must be utl_file_dir='Path2' # consecutive entries Policy used to size SQL working areas [MANUAL |

user_dump_dest '] utl_file_dir

workarea_size_policy AUTO]

db_file_multiblock_read_count:
The db_file_multiblock_read_count initialization parameter determines the maximum number of database blocks
read in one I/O operation during a full table scan. The setting of this parameter can reduce the number
of I/O calls required for a full table scan, thus improving performance.

15.6 9i UNDO or ROLLBACK parameters:
------------------------------------

- UNDO_MANAGEMENT
  If AUTO, use automatic undo management mode. If MANUAL, use manual undo management mode.
- UNDO_TABLESPACE
  A dynamic parameter specifying the name of an undo tablespace to use.
- UNDO_RETENTION
  A dynamic parameter specifying the length of time to retain undo. Default is 900 seconds.
- UNDO_SUPPRESS_ERRORS
  If TRUE, suppress error messages if manual undo management SQL statements are issued when operating
  in automatic undo management mode. If FALSE, issue error message. This is a dynamic parameter.

If your database is on manual undo management, you can still use the following 8i type parameters:

- ROLLBACK_SEGMENTS
  Specifies the rollback segments to be acquired at instance startup
- TRANSACTIONS
  Specifies the maximum number of concurrent transactions
- TRANSACTIONS_PER_ROLLBACK_SEGMENT
  Specifies the number of concurrent transactions that each rollback segment is expected to handle
- MAX_ROLLBACK_SEGMENTS
  Specifies the maximum number of rollback segments that can be online for any instance
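For example, in automatic undo mode you can raise the retention on a running instance, and watch undo
consumption per 10-minute interval via V$UNDOSTAT (the value 1800 here is just an illustration):

ALTER SYSTEM SET undo_retention = 1800;

SELECT begin_time, end_time, undoblks, maxquerylen
FROM   v$undostat;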

15.7 Oracle 9i init file examples:
----------------------------------

Example 1:
----------
# Cache and I/O
DB_BLOCK_SIZE=4096
DB_CACHE_SIZE=20971520
# Cursors and Library Cache
CURSOR_SHARING=SIMILAR
OPEN_CURSORS=300
# Diagnostics and Statistics
BACKGROUND_DUMP_DEST=/vobs/oracle/admin/mynewdb/bdump
CORE_DUMP_DEST=/vobs/oracle/admin/mynewdb/cdump
TIMED_STATISTICS=TRUE
USER_DUMP_DEST=/vobs/oracle/admin/mynewdb/udump
# Control File Configuration
CONTROL_FILES=("/vobs/oracle/oradata/mynewdb/control01.ctl",
               "/vobs/oracle/oradata/mynewdb/control02.ctl",
               "/vobs/oracle/oradata/mynewdb/control03.ctl")
# Archive
LOG_ARCHIVE_DEST_1='LOCATION=/vobs/oracle/oradata/mynewdb/archive'
LOG_ARCHIVE_FORMAT=%t_%s.dbf
LOG_ARCHIVE_START=TRUE
# Shared Server
# Uncomment and use first DISPATCHERS parameter below when your listener is
# configured for SSL (listener.ora and sqlnet.ora)
# DISPATCHERS = "(PROTOCOL=TCPS)(SER=MODOSE)",
#               "(PROTOCOL=TCPS)(PRE=oracle.aurora.server.SGiopServer)"
DISPATCHERS="(PROTOCOL=TCP)(SER=MODOSE)",
            "(PROTOCOL=TCP)(PRE=oracle.aurora.server.SGiopServer)",
            (PROTOCOL=TCP)
# Miscellaneous
COMPATIBLE=9.2.0
DB_NAME=mynewdb
# Distributed, Replication and Snapshot
DB_DOMAIN=us.oracle.com
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
# Network Registration
INSTANCE_NAME=mynewdb
# Pools
JAVA_POOL_SIZE=31457280
LARGE_POOL_SIZE=1048576
SHARED_POOL_SIZE=52428800
# Processes and Sessions
PROCESSES=150
# Redo Log and Recovery
FAST_START_MTTR_TARGET=300
# Resource Manager
RESOURCE_MANAGER_PLAN=SYSTEM_PLAN
# Sort, Hash Joins, Bitmap Indexes
SORT_AREA_SIZE=524288
# Automatic Undo Management
UNDO_MANAGEMENT=AUTO
UNDO_TABLESPACE=undotbs

Example 2:
----------
##############################################################################
# Copyright (c) 1991, 2001 by Oracle Corporation
##############################################################################
###########################################
# Cache and I/O
###########################################
db_block_size=8192
db_cache_size=50331648
###########################################
# Cursors and Library Cache
###########################################
open_cursors=300
###########################################
# Diagnostics and Statistics
###########################################
background_dump_dest=D:\oracle\admin\iasdb\bdump
core_dump_dest=D:\oracle\admin\iasdb\cdump
timed_statistics=TRUE
user_dump_dest=D:\oracle\admin\iasdb\udump
###########################################
# Distributed, Replication and Snapshot
###########################################
db_domain=missrv.miskm.mindef.nl
remote_login_passwordfile=EXCLUSIVE
###########################################
# File Configuration
###########################################
control_files=("D:\oracle\oradata\iasdb\CONTROL01.CTL", "D:\oracle\oradata\iasdb\CONTROL02.CTL", "D:\oracle\oradata\iasdb\CONTROL03.CTL")
###########################################
# Job Queues
###########################################
job_queue_processes=4
###########################################
# MTS
###########################################
dispatchers="(PROTOCOL=TCP)(PRE=oracle.aurora.server.GiopServer)", "(PROTOCOL=TCP)(PRE=oracle.aurora.server.SGiopServer)"
###########################################
# Miscellaneous
###########################################
aq_tm_processes=1
compatible=9.0.0
db_name=iasdb
###########################################
# Network Registration
###########################################
instance_name=iasdb
###########################################
# Pools
###########################################
java_pool_size=41943040
shared_pool_size=33554432
###########################################
# Processes and Sessions
###########################################
processes=150
###########################################
# Redo Log and Recovery
###########################################
fast_start_mttr_target=300
###########################################
# Sort, Hash Joins, Bitmap Indexes
###########################################
pga_aggregate_target=33554432
sort_area_size=524288
###########################################
# System Managed Undo and Rollback Segments
###########################################
undo_management=AUTO
undo_tablespace=UNDOTBS

==============
17. Snapshots:
==============

Snapshots allow you to replicate data based on column- and/or row-level subsetting, while multimaster
replication requires replication of the entire table. You need a database link to implement replication.

17.1 Database link:
-------------------
In the "local" database, where the snapshot copy will reside, issue a statement such as:

CREATE PUBLIC DATABASE LINK MY_LINK
CONNECT TO HARRY IDENTIFIED BY password
USING 'DB1';

The service name "DB1" is resolved via tnsnames.ora into a connect descriptor, which supplies the
remote server name, protocol, and SID of the remote database.

Now it is possible to SELECT, for example, the table employee in the remote database "DB1":

SELECT * FROM employee@MY_LINK;

2PC (two-phase commit) is also implemented:

update employee set amount=amount-100;
update employee@my_link set amount=amount+100;
commit;
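Existing links can afterwards be inspected via the data dictionary:

SELECT owner, db_link, username, host
FROM   dba_db_links;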

17.2 Snapshots:
---------------
There are in general 2 styles of snapshots available:

Simple snapshot:
  One-to-one replication of a remote table to a local snapshot (=table).
  The refresh of the snapshot can be a complete refresh, with the refresh rate specified
  in the "create snapshot" command.
  Also, a snapshot log can be used at the remote original table in order to replicate
  only the transaction data.

Complex snapshot:
  If multiple remote tables are joined in order to create/refresh a local snapshot,
  it is a "complex snapshot". Only complete refreshes are possible.
  If joins or complex query clauses are used, like GROUP BY, one can only use a "complex snapshot".

-> Example COMPLEX snapshot:

On the local database:

CREATE SNAPSHOT EMP_DEPT_COUNT
pctfree 5
tablespace SNAP
storage (initial 100K next 100K pctincrease 0)
REFRESH COMPLETE
START WITH SYSDATE NEXT SYSDATE+7
AS SELECT DEPTNO, COUNT(*) Dept_count
   FROM EMPLOYEE@MY_LINK
   GROUP BY Deptno;

Because the records in this snapshot will not correspond one to one with the records in the
master table (since the query contains a GROUP BY clause), this is a complex snapshot.
Thus the snapshot will be completely recreated every time it is refreshed.

-> Example SIMPLE snapshot:

On the local database:

CREATE SNAPSHOT EMP_DEPT_COUNT
pctfree 5
tablespace SNAP
storage (initial 100K next 100K pctincrease 0)
REFRESH FAST

START WITH SYSDATE NEXT SYSDATE+7
AS SELECT * FROM EMPLOYEE@MY_LINK;

In this case the REFRESH FAST clause tells Oracle to use a snapshot log to refresh the local snapshot.
When a snapshot log is used, only the changes to the master table are sent to the targets.
The snapshot log must be created in the master database (where the original object is):

create snapshot log on employee
tablespace data
storage (initial 100K next 100K pctincrease 0);

Snapshot groups:
----------------
A snapshot group in a replication system maintains a partial or complete copy of the objects
at the target master group. Snapshot groups cannot span master group boundaries.
Figure 3-7 displays the correlation between Groups A and B at the master site and Groups A and B
at the snapshot site. Group A at the snapshot site contains only some of the objects in the
corresponding Group A at the master site. Group B at the snapshot site contains all objects in
Group B at the master site. Under no circumstances, however, could Group B at the snapshot site
contain objects from Group A at the master site.
As illustrated in Figure 3-7, a snapshot group has the same name as the master group on which
the snapshot group is based. For example, a snapshot group based on a "PERSONNEL" master group
is also named "PERSONNEL".
In addition to maintaining organizational consistency between snapshot sites and master sites,
snapshot groups are required for supporting updateable snapshots. If a snapshot does not belong
to a snapshot group, then it must be a read-only snapshot. A snapshot group is used to organize
snapshots in a logical manner.

Refresh groups:
---------------
If 2 or more master tables which have a PK-FK relationship are replicated, it is possible that
the 2 corresponding snapshots violate referential integrity, because of different refresh times
and schedules. Related snapshots can be collected into refresh groups.
The purpose of a refresh group is to coordinate the refresh schedules of its members.
This is achieved via the DBMS_REFRESH package. The procedures in this package are
MAKE, ADD, SUBTRACT, CHANGE, DESTROY, and REFRESH.
A refresh group can contain more than one snapshot group.
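A coordinated refresh of related snapshots could be set up as follows (group name, member list,
and interval are just illustrations):

BEGIN
  DBMS_REFRESH.MAKE(
    name      => 'SALES.SALES_GROUP',
    list      => 'SALES.ORDERS, SALES.CUSTOMER',
    next_date => SYSDATE,
    interval  => 'SYSDATE + 1/24');   -- refresh every hour
END;
/

-- Refresh all members of the group on demand:
EXECUTE DBMS_REFRESH.REFRESH('SALES.SALES_GROUP');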

Types of snapshots:
-------------------

Primary Key
-----------
Primary key snapshots are the default type of snapshot. They are updateable if the snapshot was
created as part of a snapshot group and "FOR UPDATE" was specified when defining the snapshot.
Changes are propagated according to the row-level changes that have occurred, as identified by
the primary key value of the row (not the ROWID). The SQL statement for creating an updateable,
primary key snapshot might look like:

CREATE SNAPSHOT sales.customer FOR UPDATE
AS SELECT * FROM sales.customer@dbs1.acme.com;

Primary key snapshots may contain a subquery so that you can create a horizontally partitioned
subset of data at the remote snapshot site. This subquery may be as simple as a basic WHERE clause
or as complex as a multilevel WHERE EXISTS clause. Primary key snapshots that contain a selected
class of subqueries can still be incrementally or fast refreshed. The following is a subquery
snapshot with a WHERE clause containing a subquery:

CREATE SNAPSHOT sales.orders REFRESH FAST
AS SELECT * FROM sales.orders@dbs1.acme.com o
WHERE EXISTS
  (SELECT 1 FROM sales.customer@dbs1.acme.com c
   WHERE o.c_id = c.c_id AND zip = 19555);

ROWID
-----
For backwards compatibility, Oracle supports ROWID snapshots in addition to the default primary
key snapshots. A ROWID snapshot is based on the physical row identifiers (ROWIDs) of the rows in
a master table. ROWID snapshots should be used only for snapshots based on master tables from an
Oracle7 database, and should not be used when creating new snapshots based on master tables from
Oracle release 8.0 or greater databases.

CREATE SNAPSHOT sales.customer REFRESH WITH ROWID
AS SELECT * FROM sales.customer@dbs1.acme.com;

Complex
-------

To be fast refreshed, the defining query for a snapshot must observe certain restrictions.
If you require a snapshot whose defining query is more general and cannot observe the restrictions,
then the snapshot is complex and cannot be fast refreshed.
Specifically, a snapshot is considered complex when the defining query of the snapshot contains:

- A CONNECT BY clause
- Clauses that do not comply with the requirements detailed in Table 3-1,
  "Restrictions for Snapshots with Subqueries"
- A set operation, such as UNION, INTERSECT, or MINUS
- In most cases, a distinct or aggregate function, although it is possible to have a distinct
  or aggregate function in the defining query and still have a simple snapshot

See Also: Oracle8i Data Warehousing Guide for more information about complex materialized views.
"Snapshot" is synonymous with "materialized view" in Oracle documentation, and "materialized view"
is used in the Oracle8i Data Warehousing Guide.

The following statement is an example of a complex snapshot CREATE statement:

CREATE SNAPSHOT scott.snap_employees AS
 SELECT emp.empno, emp.ename FROM scott.emp@dbs1.acme.com
 UNION ALL
 SELECT new_emp.empno, new_emp.ename FROM scott.new_emp@dbs1.acme.com;

Read Only
---------
Any of the previously described types of snapshots can be made read-only by omitting the
FOR UPDATE clause or disabling the equivalent checkbox in the Replication Manager interface.
Read-only snapshots use many of the same mechanisms as updateable snapshots, except that
they do not need to belong to a snapshot group.

Snapshot Registration at a Master Site
--------------------------------------
At the master site, an Oracle database automatically registers information about the snapshots
based on its master table(s). The following sections explain more about Oracle's snapshot
registration mechanism.
DBA_REGISTERED_SNAPSHOTS and DBA_SNAPSHOT_REFRESH_TIMES dictionary views You can query the DBA_REGISTERED_SNAPSHOTS data dictionary view to list the following information about a remote snapshot: The owner, name, and database that contains the snapshot The snapshot's defining query

Other snapshot characteristics, such as its refresh method (fast or complete).

You can also query the DBA_SNAPSHOT_REFRESH_TIMES view at the master site to obtain the last
refresh times for each snapshot. Administrators can use this information to monitor snapshot
activity from master sites and coordinate changes to snapshot sites if a master table needs
to be dropped, altered, or relocated.

Internal Mechanisms
Oracle automatically registers a snapshot at its master database when you create the snapshot,
and unregisters the snapshot when you drop it.

Caution: Oracle cannot guarantee the registration or unregistration of a snapshot at its master
site during the creation or drop of the snapshot, respectively. If Oracle cannot successfully
register a snapshot during creation, Oracle completes snapshot registration during a subsequent
refresh of the snapshot. If Oracle cannot successfully unregister a snapshot when you drop the
snapshot, the registration information for the snapshot persists in the master database until
it is manually unregistered. Complex snapshots might not be registered.

Manual registration
-------------------
If necessary, you can maintain registration manually. Use the REGISTER_SNAPSHOT and
UNREGISTER_SNAPSHOT procedures of the DBMS_SNAPSHOT package at the master site to add, modify,
or remove snapshot registration information.

Snapshot Log
------------
When you create a snapshot log for a master table, Oracle creates an underlying table as the
snapshot log. A snapshot log holds the primary keys and/or the ROWIDs of rows that have been
updated in the master table. A snapshot log can also contain filter columns to support fast
refreshes of snapshots with subqueries.
The name of a snapshot log's table is MLOG$_master_table_name. The snapshot log is created in
the same schema as the target master table. One snapshot log can support multiple snapshots
on its master table.

As described in the previous section, the internal trigger adds change information to the
snapshot log whenever a DML transaction has taken place on the target master table.

There are three types of snapshot logs:

Primary Key: The snapshot records changes to the master table based on the primary key of the
affected rows.

Row ID: The snapshot records changes to the master table based on the ROWID of the affected rows.

Combination: The snapshot records changes to the master table based on both the primary key and
the ROWID of the affected rows. This snapshot log supports both primary key and ROWID snapshots,
which is helpful for mixed environments. A combination snapshot log works in the same manner as
the primary key and ROWID snapshot log, except that both the primary key and the ROWID of the
affected row are recorded.

Though the difference between snapshot logs based on primary keys and ROWIDs is small (one records
affected rows using the primary key, while the other records affected rows using the physical
ROWID), the practical impact is large. Using ROWID snapshots and snapshot logs makes reorganizing
and truncating your master tables difficult, because it prevents your ROWID snapshots from being
fast refreshed. If you reorganize or truncate your master table, your ROWID snapshot must be
COMPLETE refreshed because the ROWIDs of the master table have changed.

To delete a snapshot log, execute the DROP SNAPSHOT LOG SQL statement in SQL*Plus. For example,
the following statement deletes the snapshot log for a table named CUSTOMERS in the SALES schema:

DROP SNAPSHOT LOG ON sales.customers;

To truncate the master table and also purge its snapshot log, use:

truncate table TABLE_NAME purge snapshot log;
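At the master site you can check which snapshots are registered, and when they were last
refreshed, with queries such as:

SELECT owner, name, snapshot_site, refresh_method
FROM   dba_registered_snapshots;

SELECT owner, name, last_refresh
FROM   dba_snapshot_refresh_times;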

=============
18. Triggers:
=============

A trigger is a PL/SQL code block attached to a database table and executed by an event which
occurs to that table. Triggers are implicitly invoked by DML commands.
Triggers are stored as text and compiled at execute time. Because of this, it is wise not to
include much code in them, but to call out to previously stored procedures or packages, as this
will greatly improve performance.
You may not use COMMIT, ROLLBACK and SAVEPOINT statements within trigger blocks.
Remember that triggers may be executed thousands of times for a large update; they can seriously
affect SQL execution performance.

Triggers may be called BEFORE or AFTER the following events: INSERT, UPDATE and DELETE.
Triggers may be STATEMENT or ROW types.

- STATEMENT triggers fire BEFORE or AFTER the execution of the statement that caused the trigger
  to fire.
- ROW triggers fire BEFORE or AFTER any affected row is processed.

An example of a statement trigger follows:

CREATE OR REPLACE TRIGGER MYTRIG1
BEFORE DELETE OR INSERT OR UPDATE ON JD11.BOOK
BEGIN
  -- 'DY' yields e.g. 'SAT'; the original used 'DAY', which returns the full
  -- padded day name and would never match a three-letter abbreviation
  IF (TO_CHAR(SYSDATE,'DY') IN ('SAT','SUN'))
  OR (TO_CHAR(SYSDATE,'hh24:mi') NOT BETWEEN '08:30' AND '18:30') THEN
    RAISE_APPLICATION_ERROR(-20500,'Table is secured');
  END IF;
END;

After the CREATE OR REPLACE statement is the object identifier (TRIGGER) and the object name
(MYTRIG1). This trigger specifies that before any data change event on the BOOK table this PL/SQL
code block will be compiled and executed. The user will not be allowed to update the table outside
of normal working hours.

An example of a row trigger follows:

CREATE OR REPLACE TRIGGER MYTRIG2
AFTER DELETE OR INSERT OR UPDATE ON JD11.BOOK
FOR EACH ROW
BEGIN
  IF DELETING THEN
    INSERT INTO JD11.XBOOK (PREVISBN, TITLE, DELDATE)
    VALUES (:OLD.ISBN, :OLD.TITLE, SYSDATE);
  ELSIF INSERTING THEN
    INSERT INTO JD11.NBOOK (ISBN, TITLE, ADDDATE)
    VALUES (:NEW.ISBN, :NEW.TITLE, SYSDATE);
  ELSIF UPDATING ('ISBN') THEN
    INSERT INTO JD11.CBOOK (OLDISBN, NEWISBN, TITLE, UP_DATE)
    VALUES (:OLD.ISBN, :NEW.ISBN, :NEW.TITLE, SYSDATE);
  ELSE /* UPDATE TO ANYTHING ELSE THAN ISBN */
    INSERT INTO JD11.UBOOK (ISBN, TITLE, UP_DATE)
    VALUES (:OLD.ISBN, :NEW.TITLE, SYSDATE);
  END IF;
END;

In this case we have specified that the trigger will be executed after any data change event on
any affected row. Within the PL/SQL block body we can check which update action is being performed
for the currently affected row and take whatever action we feel is appropriate. Note that we can
specify the old and new values of updated rows by prefixing column names with the :OLD and :NEW
qualifiers.
--------------------------------------------------------------------------------

The following statement creates a trigger for the Emp_tab table:

CREATE OR REPLACE TRIGGER Print_salary_changes
BEFORE DELETE OR INSERT OR UPDATE ON Emp_tab
FOR EACH ROW
WHEN (new.Empno > 0)
DECLARE
    sal_diff number;
BEGIN
    sal_diff := :new.sal - :old.sal;
    dbms_output.put('Old salary: ' || :old.sal);
    dbms_output.put(' New salary: ' || :new.sal);
    dbms_output.put_line(' Difference ' || sal_diff);
END;
/

If you enter a SQL statement, such as the following:

UPDATE Emp_tab SET sal = sal + 500.00 WHERE deptno = 10;

Then, the trigger fires once for each row that is updated, and it prints the new and old salaries,
and the difference.

CREATE OR REPLACE TRIGGER "SALES".HENKILOROOLI_CHECK2
AFTER INSERT OR UPDATE OR DELETE ON AH_HENKILOROOLI
BEGIN
  IF INSERTING OR DELETING THEN
    handle_delayed_triggers ('AH_HENKILOROOLI', 'HENKILOROOLI_CHECK');
  END IF;
  IF INSERTING OR UPDATING OR DELETING THEN                      /* FE */
    handle_delayed_triggers('AH_HENKILOROOLI', 'FRONTEND_FLAG'); /* FE */
  END IF;                                                        /* FE */
END;

A trigger is either a stored PL/SQL block or a PL/SQL, C, or Java procedure associated with a
table, view, schema, or the database itself. Oracle automatically executes a trigger when a
specified event takes place, which may be in the form of a system event or a DML statement being
issued against the table.

Triggers can be:
-DML triggers on tables.
-INSTEAD OF triggers on views.
-System triggers on DATABASE or SCHEMA: With DATABASE, triggers fire for each event for all users;
 with SCHEMA, triggers fire for each event for that specific user.

BEFORE and AFTER Options
The BEFORE or AFTER option in the CREATE TRIGGER statement specifies exactly when to fire the
trigger body in relation to the triggering statement that is being run. In a CREATE TRIGGER
statement, the BEFORE or AFTER option is specified just before the triggering statement.
For example, the PRINT_SALARY_CHANGES trigger in the previous example is a BEFORE trigger.

INSTEAD OF Triggers
The INSTEAD OF option can also be used in triggers. INSTEAD OF triggers provide a transparent way
of modifying views that cannot be modified directly through UPDATE, INSERT, and DELETE statements.
These triggers are called INSTEAD OF triggers because, unlike other types of triggers, Oracle
fires the trigger instead of executing the triggering statement.
The trigger performs UPDATE, INSERT, or DELETE operations directly on the underlying tables.

CREATE TABLE Project_tab (
    Prj_level NUMBER,
    Projno    NUMBER,
    Resp_dept NUMBER);

CREATE TABLE Emp_tab (
    Empno    NUMBER NOT NULL,
    Ename    VARCHAR2(10),
    Job      VARCHAR2(9),
    Mgr      NUMBER(4),
    Hiredate DATE,
    Sal      NUMBER(7,2),
    Comm     NUMBER(7,2),
    Deptno   NUMBER(2) NOT NULL);

CREATE TABLE Dept_tab (
    Deptno    NUMBER(2) NOT NULL,
    Dname     VARCHAR2(14),
    Loc       VARCHAR2(13),
    Mgr_no    NUMBER,
    Dept_type NUMBER);

The following example shows an INSTEAD OF trigger for inserting rows into the MANAGER_INFO view.

CREATE OR REPLACE VIEW manager_info AS
    SELECT e.ename, e.empno, d.dept_type, d.deptno, p.prj_level, p.projno
    FROM   Emp_tab e, Dept_tab d, Project_tab p
    WHERE  e.empno  = d.mgr_no
    AND    d.deptno = p.resp_dept;

CREATE OR REPLACE TRIGGER manager_info_insert
INSTEAD OF INSERT ON manager_info
REFERENCING NEW AS n            -- new manager information
FOR EACH ROW
DECLARE
    rowcnt number;
BEGIN
    SELECT COUNT(*) INTO rowcnt FROM Emp_tab WHERE empno = :n.empno;
    IF rowcnt = 0 THEN
        INSERT INTO Emp_tab (empno,ename) VALUES (:n.empno, :n.ename);
    ELSE

UPDATE Emp_tab SET Emp_tab.ename = :n.ename WHERE Emp_tab.empno = :n.empno; END IF; SELECT COUNT(*) INTO rowcnt FROM Dept_tab WHERE deptno = :n.deptno; IF rowcnt = 0 THEN INSERT INTO Dept_tab (deptno, dept_type) VALUES(:n.deptno, :n.dept_type); ELSE UPDATE Dept_tab SET Dept_tab.dept_type = :n.dept_type WHERE Dept_tab.deptno = :n.deptno; END IF; SELECT COUNT(*) INTO rowcnt FROM Project_tab WHERE Project_tab.projno = :n.projno; IF rowcnt = 0 THEN INSERT INTO Project_tab (projno, prj_level) VALUES(:n.projno, :n.prj_level); ELSE UPDATE Project_tab SET Project_tab.prj_level = :n.prj_level WHERE Project_tab.projno = :n.projno; END IF; END;

FOR EACH ROW Option
The FOR EACH ROW option determines whether the trigger is a row trigger or a statement trigger.
If you specify FOR EACH ROW, then the trigger fires once for each row of the table that is
affected by the triggering statement. The absence of the FOR EACH ROW option indicates that the
trigger fires only once for each applicable statement, but not separately for each row affected
by the statement. For example, you define the following trigger:

--------------------------------------------------------------------------------
Note: You may need to set up the following data structures for certain examples to work:

CREATE TABLE Emp_log (
    Emp_id     NUMBER,
    Log_date   DATE,
    New_salary NUMBER,
    Action     VARCHAR2(20));
--------------------------------------------------------------------------------

CREATE OR REPLACE TRIGGER Log_salary_increase
AFTER UPDATE ON Emp_tab
FOR EACH ROW
WHEN (new.Sal > 1000)
BEGIN
    INSERT INTO Emp_log (Emp_id, Log_date, New_salary, Action)
    VALUES (:new.Empno, SYSDATE, :new.SAL, 'NEW SAL');

END;

Then, you enter the following SQL statement:

UPDATE Emp_tab SET Sal = Sal + 1000.0 WHERE Deptno = 20;

If there are five employees in department 20, then the trigger fires five times when this
statement is entered, because five rows are affected.

The following trigger fires only once for each UPDATE of the Emp_tab table:

CREATE OR REPLACE TRIGGER Log_emp_update
AFTER UPDATE ON Emp_tab
BEGIN
    INSERT INTO Emp_log (Log_date, Action)
    VALUES (SYSDATE, 'Emp_tab COMMISSIONS CHANGED');
END;

Trigger Size
The size of a trigger cannot be more than 32K.

Valid SQL Statements in Trigger Bodies
The body of a trigger can contain DML SQL statements. It can also contain SELECT statements,
but they must be SELECT... INTO... statements or the SELECT statement in the definition of a
cursor. DDL statements are not allowed in the body of a trigger. Also, no transaction control
statements are allowed in a trigger: ROLLBACK, COMMIT, and SAVEPOINT cannot be used.
For system triggers, {CREATE/ALTER/DROP} TABLE statements and ALTER...COMPILE are allowed.

Recompiling Triggers
Use the ALTER TRIGGER statement to recompile a trigger manually. For example, the following
statement recompiles the PRINT_SALARY_CHANGES trigger:

ALTER TRIGGER Print_salary_changes COMPILE;

Disable/enable a trigger:

ALTER TRIGGER Reorder DISABLE;
ALTER TRIGGER Reorder ENABLE;

Or in one statement for all triggers on a table:

ALTER TABLE Inventory DISABLE ALL TRIGGERS;
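The state of the triggers can then be verified in the data dictionary (DBA_TRIGGERS has a
STATUS column showing ENABLED or DISABLED):

SELECT trigger_name, trigger_type, triggering_event, status
FROM   dba_triggers
WHERE  table_name = 'INVENTORY';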

ALTER DATABASE RENAME GLOBAL_NAME TO new_name;

==================================== 19 BACKUP RECOVERY, TROUBLESHOOTING: ====================================

19.1 SCN:
---------
The controlfiles and all datafiles contain the last SCN (System Change Number) after:
- a checkpoint, for example via ALTER SYSTEM CHECKPOINT,
- shutdown normal/immediate/transactional,
- a log switch, either by the system or via ALTER SYSTEM SWITCH LOGFILE,
- ALTER TABLESPACE BEGIN BACKUP, etc.
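The last checkpoint SCN as recorded in the controlfile and in the datafile headers can be
viewed with queries such as:

SELECT checkpoint_change# FROM v$database;

SELECT name, checkpoint_change#
FROM   v$datafile_header;

In 10g you can also query the current SCN directly: SELECT current_scn FROM v$database;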

at checkpoint the following occurs: ------------------------------------ The database writer (DBWR) writes all modified database blocks in the buffer cache back to datafiles, - Log writer (LGWR) or Checkpoint process (CHKPT) updates both the controlfile and the datafiles to indicate when the last checkpoint occurred (SCN) Log switching causes a checkpoint, but a checkpoint does not cause a logswitch. LGWR writes logbuffers to online redo log: ------------------------------------------ at commit - redolog buffers 1/3 full, > 1 MB changes - before DBWR writes modified blocks to datafiles LOG_CHECKPOINT_INTERVAL init.ora parameter: ------------------------------------------The LOG_CHECKPOINT_INTERVAL init.ora parameter controls how often a checkpoint operation will be performed based upon the number of operating system blocks that have been written to the redo log. If this value is larger than the size of the redo log, then the checkpoint will only occur when Oracle performs a log switch FROM one group to another, which is preferred. NOTE: Starting with Oracle 8.1, LOG_CHECKPOINT_INTERVAL will be interpreted to mean that the incremental checkpoint should not lag the tail of the log by more than log_checkpoint_interval number of redo blocks. On most Unix systems the operating system block size is 512 bytes. This means that setting LOG_CHECKPOINT_INTERVAL to a value of 10,000 (the default setting), causes a checkpoint to occur after 5,120,000 (5M) bytes are written to the redo log. If the size of your redo log is 20M, you are taking 4 checkpoints for each log. LOG_CHECKPOINT_TIMEOUT init.ora parameter:

------------------------------------------
The LOG_CHECKPOINT_TIMEOUT init.ora parameter controls how often a checkpoint will be performed
based on the number of seconds that have passed since the last checkpoint.

NOTE: Starting with Oracle 8.1, LOG_CHECKPOINT_TIMEOUT will be interpreted to mean that the
incremental checkpoint should be at the log position where the tail of the log was
LOG_CHECKPOINT_TIMEOUT seconds ago.

Checkpoint frequency impacts the time required for the database to recover from an unexpected
failure. Longer intervals between checkpoints mean that more time will be required during
database recovery.

LOG_CHECKPOINTS_TO_ALERT init.ora parameter:
--------------------------------------------
The LOG_CHECKPOINTS_TO_ALERT init.ora parameter, when set to a value of TRUE, allows you to log
checkpoint start and stop times in the alert log. This is very helpful in determining if
checkpoints are occurring at the optimal frequency, and gives a chronological view of checkpoints
and other database activities occurring in the background.

It is a misconception that setting LOG_CHECKPOINT_TIMEOUT to a given value will initiate a log
switch at that interval, enabling a recovery window used for a stand-by database configuration.
Log switches cause a checkpoint, but a checkpoint does not cause a log switch. The only way to
cause a log switch is manually, with ALTER SYSTEM SWITCH LOGFILE, or by resizing the redo logs
so that log switches occur more frequently.

FAST_START_MTTR_TARGET init.ora parameter:
------------------------------------------
FAST_START_MTTR_TARGET enables you to specify the number of seconds the database takes to perform
crash recovery of a single instance. The lower the value, the more often DBWR will write the
blocks to disk. FAST_START_MTTR_TARGET can be overridden by either FAST_START_IO_TARGET or
LOG_CHECKPOINT_INTERVAL.
FAST_START_IO_TARGET init.ora parameter:
----------------------------------------
FAST_START_IO_TARGET (available only with the Oracle Enterprise Edition)
specifies the number of I/Os that should be needed during crash or instance
recovery. Smaller values for this parameter result in faster recovery times.
This improvement in recovery performance is achieved at the expense of
additional writing activity during normal processing.

ARCHIVE_LAG_TARGET init.ora parameter:
--------------------------------------
The following initialization parameter setting sets the log switch interval
to 30 minutes (a typical value).

ARCHIVE_LAG_TARGET = 1800
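To see how these parameters are currently set, and (from 9i onwards) how the
instance translates them into an estimated recovery time, something like the
following can be used (V$INSTANCE_RECOVERY columns per the standard dictionary):

SELECT name, value
FROM v$parameter
WHERE name IN ('log_checkpoint_interval',
               'log_checkpoint_timeout',
               'fast_start_mttr_target',
               'archive_lag_target');

-- 9i and later: target vs. estimated mean time to recover (in seconds):
SELECT target_mttr, estimated_mttr, recovery_estimated_ios
FROM v$instance_recovery;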

Note: More on SCN:
==================

>>>> thread from asktom

You Asked
Tom, would you tell me what the "snapshot too old" error is. When does it
happen? What are the possible causes? How to fix it? Thank you very much. Jane

and we said...
I think support note <Note:40689.1> covers this topic very well:

ORA-01555 "Snapshot too old" - Detailed Explanation
===================================================

Overview
~~~~~~~~
This article will discuss the circumstances under which a query can return the
Oracle error ORA-01555 "snapshot too old (rollback segment too small)". The
article will then proceed to discuss actions that can be taken to avoid the
error and finally will provide some simple PL/SQL scripts that illustrate the
issues discussed.

Terminology
~~~~~~~~~~~
It is assumed that the reader is familiar with standard Oracle terminology such
as 'rollback segment' and 'SCN'. If not, the reader should first read the
Oracle Server Concepts manual and related Oracle documentation.

In addition to this, two key concepts are briefly covered below which help in
the understanding of ORA-01555:

1. READ CONSISTENCY:
====================
This is documented in the Oracle Server Concepts manual and so will not be
discussed further. However, for the purposes of this article this should be
read and understood if not understood already.

Oracle Server has the ability to have multi-version read consistency, which is
invaluable to you because it guarantees that you are seeing a consistent view
of the data (no 'dirty reads').

2. DELAYED BLOCK CLEANOUT:
==========================
This is best illustrated with an example: Consider a transaction that updates
a million row table. This obviously visits a large number of database blocks
to make the change to the data. When the user commits the transaction Oracle
does NOT go back and revisit these blocks to make the change permanent. It is
left for the next transaction that visits any block affected by the update to
'tidy up' the block (hence the term 'delayed block cleanout').

Whenever Oracle changes a database block (index, table, cluster) it stores a
pointer in the header of the data block which identifies the rollback segment
used to hold the rollback information for the changes made by the transaction.
(This is required if the user later elects to not commit the changes and
wishes to 'undo' the changes made.)

Upon commit, the database simply marks the relevant rollback segment header
entry as committed. Now, when one of the changed blocks is revisited Oracle
examines the header of the data block, which indicates that it has been
changed at some point. The database needs to confirm whether the change has
been committed or whether it is currently uncommitted. To do this, Oracle
determines the rollback segment used for the previous transaction (from the
block's header) and then determines whether the rollback header indicates
whether it has been committed or not.

If it is found that the block is committed then the header of the data block
is updated so that subsequent accesses to the block do not incur this
processing.

This behaviour is illustrated in a very simplified way below. Here we walk
through the stages involved in updating a data block.

STAGE 1 - No changes made

Description: This is the starting point.
At the top of the data block we have an area used to link active

transactions to a rollback segment (the 'tx' part), and the rollback segment header has a table that stores information upon all the latest transactions that have used that rollback segment. In our example, we have two active transaction slots (01 and 02) and the next free slot is slot 03. (Since we are free to overwrite committed transactions.) Data Block 500 +----+--------------+ | tx | None | +----+--------------+ | row 1 | | row 2 | | ... .. | | row n | +-------------------+ STAGE 2 - Row 2 is updated Description: We have now updated row 2 of block 500. Note that the data block header is updated to point to the rollback segment 5, transaction slot 3 (5.3) and that it is marked uncommitted (Active). Data Block 500 Rollback Segment Header 5 +----+--------------+ +----------------------+---------+ | tx |5.3uncommitted|-+ | transaction entry 01 |ACTIVE | +----+--------------+ | | transaction entry 02 |ACTIVE | | row 1 | +-->| transaction entry 03 |ACTIVE | | row 2 *changed* | | transaction entry 04 |COMMITTED| | ... .. | | ... ... .. | ... | | row n | | transaction entry nn |COMMITTED| +------------------+ +--------------------------------+ STAGE 3 - The user issues a commit Description: Next the user hits commit. Note that all that this does is it updates the rollback segment header's corresponding transaction slot as committed. It does *nothing* to the data block. Data Block 500 Rollback Segment Header 5 +----+--------------+ +----------------------+---------+ | tx |5.3uncommitted|--+ | transaction entry 01 |ACTIVE | +----+--------------+ | | transaction entry 02 |ACTIVE | | row 1 | +--->| transaction entry 03 |COMMITTED| | row 2 *changed* | | transaction entry 04 |COMMITTED| | ... .. | | ... ... .. | ... 
| | row n | | transaction entry nn |COMMITTED| +------------------+ +--------------------------------+ STAGE 4 - Another user selects data block 500 Rollback Segment Header 5 +----------------------+---------+ | transaction entry 01 |ACTIVE | | transaction entry 02 |ACTIVE | | transaction entry 03 |COMMITTED| | transaction entry 04 |COMMITTED| | ... ... .. | ... | | transaction entry nn |COMMITTED| +--------------------------------+

Description: Some time later another user (or the same user) revisits data
block 500. We can see that there is an uncommitted change in the data block
according to the data block's header. Oracle then uses the data block header
to look up the corresponding rollback segment transaction table slot, sees
that it has been committed, and changes data block 500 to reflect the true
state of the datablock (i.e. it performs delayed cleanout).

Data Block 500
+----+--------------+
| tx | None         |
+----+--------------+
| row 1             |
| row 2             |
| ... ..            |
| row n             |
+-------------------+

ORA-01555 Explanation
~~~~~~~~~~~~~~~~~~~~~
There are two fundamental causes of the error ORA-01555 that are a result of
Oracle trying to attain a 'read consistent' image. These are:

o The rollback information itself is overwritten so that Oracle is unable to
  rollback the (committed) transaction entries to attain a sufficiently old
  enough version of the block.

o The transaction slot in the rollback segment's transaction table (stored in
  the rollback segment's header) is overwritten, and Oracle cannot rollback
  the transaction header sufficiently to derive the original rollback segment
  transaction slot.

Note: If the transaction of User A is not committed, the rollback segment
entries will NOT be reused, but if User A commits, the entries become free for
reuse, and if a query of User B takes a lot of time and "meets" those
overwritten entries, User B gets an error.

Both of these situations are discussed below with the series of steps that
cause the ORA-01555. In the steps, reference is made to 'QENV'. 'QENV' is
short for 'Query Environment', which can be thought of as the environment that
existed when a query is first started and to which Oracle is trying to attain
a read consistent image.
Associated with this environment is the SCN (System Change Number) at that
time; hence, QENV 50 is the query environment with SCN 50.

Rollback Segment Header 5
+----------------------+---------+
| transaction entry 01 |ACTIVE   |
| transaction entry 02 |ACTIVE   |
| transaction entry 03 |COMMITTED|
| transaction entry 04 |COMMITTED|
| ...     ...       .. | ...     |
| transaction entry nn |COMMITTED|
+--------------------------------+

CASE 1 - ROLLBACK OVERWRITTEN

This breaks down into two cases: another session overwriting the rollback that
the current session requires, or the case where the current session overwrites
the rollback information that it requires. The latter is discussed in this
article because this is usually the harder one to understand.

Steps:
1. Session 1 starts query at time T1 and QENV 50
2. Session 1 selects block B1 during this query
3. Session 1 updates the block at SCN 51
4. Session 1 does some other work that generates rollback information.
5. Session 1 commits the changes made in steps '3' and '4'.
   (Now other transactions are free to overwrite this rollback information)
6. Session 1 revisits the same block B1 (perhaps for a different row).

   Now, Oracle can see from the block's header that it has been changed and it
   is later than the required QENV (which was 50). Therefore we need to get an
   image of the block as of this QENV. If an old enough version of the block
   can be found in the buffer cache then we will use this, otherwise we need
   to rollback the current block to generate another version of the block as
   at the required QENV.

   It is under this condition that Oracle may not be able to get the required
   rollback information because Session 1's changes have generated rollback
   information that has overwritten it, and returns the ORA-1555 error.

CASE 2 - ROLLBACK TRANSACTION SLOT OVERWRITTEN

1. Session 1 starts query at time T1 and QENV 50
2. Session 1 selects block B1 during this query
3. Session 1 updates the block at SCN 51
4. Session 1 commits the changes
   (Now other transactions are free to overwrite this rollback information)
5. A session (Session 1, another session or a number of other sessions) then
   use the same rollback segment for a series of committed transactions.

These transactions each consume a slot in the rollback segment transaction
table such that it eventually wraps around (the slots are written to in a
circular fashion) and overwrites all the slots. Note that Oracle is free to
reuse these slots since all transactions are committed.

6. Session 1's query then visits a block that has been changed since the
   initial QENV was established. Oracle therefore needs to derive an image of
   the block as at that point in time.

   Next Oracle attempts to look up the rollback segment header's transaction
   slot pointed to by the top of the data block. It then realises that this
   has been overwritten and attempts to rollback the changes made to the
   rollback segment header to get the original transaction slot entry.

   If it cannot rollback the rollback segment transaction table sufficiently
   it will return ORA-1555 since Oracle can no longer derive the required
   version of the data block.

It is also possible to encounter a variant of the transaction slot being
overwritten when using block cleanout. This is briefly described below:

Session 1 starts a query at QENV 50. After this another process updates the
blocks that Session 1 will require. When Session 1 encounters these blocks it
determines that the blocks have changed and have not yet been cleaned out (via
delayed block cleanout). Session 1 must determine whether the rows in the
block existed at QENV 50 or were subsequently changed. In order to do this,
Oracle must look at the relevant rollback segment transaction table slot to
determine the committed SCN. If this SCN is after the QENV then Oracle must
try to construct an older version of the block, and if it is before then the
block just needs cleanout to be good enough for the QENV.

If the transaction slot has been overwritten and the transaction table cannot
be rolled back to a sufficiently old enough version then Oracle cannot derive
the block image and will return ORA-1555.
(Note: Normally Oracle can use an algorithm for determining a block's SCN
during block cleanout even when the rollback segment slot has been
overwritten. But in this case

Oracle cannot guarantee that the version of the block has not changed since
the start of the query).

Solutions
~~~~~~~~~
This section lists some of the solutions that can be used to avoid the
ORA-01555 problems discussed in this article. It addresses the cases where
rollback segment information is overwritten by the same session and when the
rollback segment transaction table entry is overwritten.

It is worth highlighting that if a single session experiences the ORA-01555
and it is not one of the special cases listed at the end of this article, then
the session must be using an Oracle extension whereby fetches across commits
are tolerated. This does not follow the ANSI model, and in the rare cases
where ORA-01555 is returned one of the solutions below must be used.

CASE 1 - ROLLBACK OVERWRITTEN

1. Increase the size of the rollback segment, which will reduce the likelihood
   of overwriting rollback information that is needed.
2. Reduce the number of commits (same reason as 1).
3. Run the processing against a range of data rather than the whole table.
   (Same reason as 1).
4. Add additional rollback segments. This will allow the updates etc. to be
   spread across more rollback segments, thereby reducing the chances of
   overwriting required rollback information.
5. If fetching across commits, the code can be changed so that this is not
   done.
6. Ensure that the outer select does not revisit the same block at different
   times during the processing. This can be achieved by:
   - Using a full table scan rather than an index lookup
   - Introducing a dummy sort so that we retrieve all the data, sort it and
     then sequentially visit these data blocks.

CASE 2 - ROLLBACK TRANSACTION SLOT OVERWRITTEN

1. Use any of the methods outlined above except for '6'. This will allow
   transactions to spread their work across multiple rollback segments,
   therefore reducing the likelihood

   of rollback segment transaction table slots being consumed.

2. If it is suspected that the block cleanout variant is the cause, then force
   block cleanout to occur prior to the transaction that returns the ORA-1555.
   This can be achieved by issuing the following in SQL*Plus, SQL*DBA or
   Server Manager:

   alter session set optimizer_goal = rule;
   select count(*) from table_name;

   If indexes are being accessed then the problem may be an index block, and
   cleanout can be forced by ensuring that all of the index is traversed.
   Eg, if the index is on a numeric column with a minimum value of 25 then
   the following query will force cleanout of the index:

   select index_column from table_name where index_column > 24;

Examples
~~~~~~~~
Listed below are some PL/SQL examples that can be used to illustrate the
ORA-1555 cases given above. Before these PL/SQL examples will return this
error the database must be configured as follows:

o Use a small buffer cache (db_block_buffers).
  REASON: You do not want the session executing the script to be able to find
  old versions of the block in the buffer cache which can be used to satisfy
  a block visit without requiring the rollback information.

o Use one rollback segment other than SYSTEM.
  REASON: You need to ensure that the work being done is generating rollback
  information that will overwrite the rollback information required.

o Ensure that the rollback segment is small.
  REASON: See the reason for using one rollback segment.

ROLLBACK OVERWRITTEN

rem * 1555_a.sql - Example of getting ora-1555 "Snapshot too old" by
rem *              session overwriting the rollback information required
rem *              by the same session.

drop table bigemp; create table bigemp (a number, b varchar2(30), done char(1)); drop table dummy1; create table dummy1 (a varchar2(200));

rem * Populate the example tables.
begin
  for i in 1..4000 loop
    insert into bigemp values (mod(i,20), to_char(i), 'N');
    if mod(i,100) = 0 then
      insert into dummy1 values ('ssssssssssss');
      commit;
    end if;
  end loop;
  commit;
end;
/

rem * Ensure that table is 'cleaned out'.
select count(*) from bigemp;

declare
  -- Must use a predicate so that we revisit a changed block at a different
  -- time.
  -- If another tx is updating the table then we may not need the predicate
  cursor c1 is select rowid, bigemp.* from bigemp where a < 20;
begin
  for c1rec in c1 loop
    update dummy1 set a = 'aaaaaaaa';
    update dummy1 set a = 'bbbbbbbb';
    update dummy1 set a = 'cccccccc';
    update bigemp set done='Y' where c1rec.rowid = rowid;
    commit;
  end loop;
end;
/

ROLLBACK TRANSACTION SLOT OVERWRITTEN rem * 1555_b.sql - Example of getting ora-1555 "Snapshot too old" by rem * overwriting the transaction slot in the rollback rem * segment header. This just uses one session. drop table bigemp; create table bigemp (a number, b varchar2(30), done char(1)); rem * Populate demo table. begin for i in 1..200 loop insert into bigemp values (mod(i,20), to_char(i), 'N'); if mod(i,100) = 0 then commit; end if; end loop; commit; end; / drop table mydual; create table mydual (a number); insert into mydual values (1); commit;

rem * Cleanout demo table.
select count(*) from bigemp;

declare
  cursor c1 is select * from bigemp;
begin
  -- The following update is required to illustrate the problem if block
  -- cleanout has been done on 'bigemp'. If the cleanout (above) is commented
  -- out then the update and commit statements can be commented and the
  -- script will fail with ORA-1555 for the block cleanout variant.
  update bigemp set b = 'aaaaa';
  commit;
  for c1rec in c1 loop
    for i in 1..20 loop
      update mydual set a=a;
      commit;
    end loop;
  end loop;
end;
/

Special Cases
~~~~~~~~~~~~~
There are other special cases that may result in an ORA-01555. These are given
below, but are rare and so not discussed in this article:

o Trusted Oracle can return this if configured in OS MAC mode. Decreasing
  LOG_CHECKPOINT_INTERVAL on the secondary database may overcome the problem.
o If a query visits a data block that has been changed by using the Oracle
  discrete transaction facility then it will return ORA-01555.
o It is feasible that a rollback segment created with the OPTIMAL clause may
  cause a query to return ORA-01555 if it has shrunk during the life of the
  query, causing rollback segment information required to generate consistent
  read versions of blocks to be lost.

Summary
~~~~~~~
This article has discussed the reasons behind the error ORA-01555 "Snapshot
too old", has provided a list of possible methods to avoid the error when it
is encountered, and has provided simple PL/SQL scripts that illustrate the
cases discussed.
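The solutions above are written for manually managed rollback segments. With
automatic undo management (9i and later), the same trade-offs are controlled
by the undo tablespace size and the UNDO_RETENTION parameter; a minimal
sketch (the retention value is only an example):

-- seconds of committed undo to keep available for consistent reads:
ALTER SYSTEM SET undo_retention = 3600;

-- longest running query and undo consumption per 10-minute interval:
SELECT begin_time, end_time, undoblks, maxquerylen
FROM v$undostat
WHERE rownum <= 10;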

>>>>> thread about SCN

Do It Yourself (DIY) Oracle replication

Here's a demonstration. First I create a simple table, called TBL_SRC. This is
the table on which we want to perform change-data-capture (CDC).

create table tbl_src
( x number primary key,
  y number );

Next, I show a couple of CDC tables, and the trigger on TBL_SRC that will load
the CDC tables.

create table trx
( trx_id   varchar2(25) primary key,
  SCN      number,
  username varchar2(30) );

create table trx_detail
( trx_id    varchar(25)
, step_id   number
, step_tms  date
, old_x     number
, old_y     number
, new_x     number
, new_y     number
, operation char(1)
);

alter table trx_detail add constraint xp_trx_detail primary key ( trx_id, step_id );

create or replace trigger b4_src
before insert or update or delete on tbl_src
for each row
DECLARE
  l_trx_id  VARCHAR2(25);
  l_step_id NUMBER;
BEGIN
  BEGIN
    l_trx_id  := dbms_transaction.local_transaction_id;
    l_step_id := dbms_transaction.step_id;
    INSERT INTO trx VALUES (l_trx_id, userenv('COMMITSCN'), USER);
  EXCEPTION
    WHEN dup_val_on_index THEN NULL;
  END;
  INSERT INTO trx_detail (trx_id, step_id, step_tms, old_x, old_y, new_x, new_y)
  VALUES (l_trx_id, l_step_id, SYSDATE, :OLD.x, :OLD.y, :NEW.x, :NEW.y);
END;
/

Let's see the magic in action. I'll insert a record. We'll see the
'provisional' SCN in the TRX table. Then we'll commit, and see the
'true'/post-commit SCN:

insert into tbl_src values ( 1, 1 );

1 row created.

select * from trx;

TRX_ID                           SCN USERNAME
------------------------- ---------- -------------------
3.4.33402                 3732931665 CIDW

commit;

Commit complete.

select * from trx;

TRX_ID                           SCN USERNAME
------------------------- ---------- -------------------
3.4.33402                 3732931668 CIDW

Notice how the SCN "changed" from 3732931665 to 3732931668. Oracle was doing
some background transactions in between.

And we can look at the details of the transaction:

column step_id format 999,999,999,999,999,999,999;
/

TRX_ID                        STEP_ID STEP_TMS       OLD_X      OLD_Y      NEW_X      NEW_Y O
------------------------- ----------------------- --------- ---------- ---------- ---------- -
3.4.33402                 4,366,162,821,393,448   11-NOV-06                       1          1

This approach works back to at least Oracle 7.3.4. Not perfect, because it
only captures DML. A TRUNCATE is DDL, and that's not captured. For the actual
implementation, I stored the before and after values as CSV strings. For 9i
or later, I'd use built-in Oracle functionality.
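To read the current SCN directly, rather than via the COMMITSCN trick above,
the documented built-ins can be used:

-- 9i and later:
SELECT dbms_flashback.get_system_change_number FROM dual;

-- 10g and later:
SELECT current_scn FROM v$database;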

19.2 init.ora parameters and ARCHIVE MODE: ---------------------------------------LOG_ARCHIVE_DEST=/oracle/admin/cc1/arch

LOG_ARCHIVE_DEST_1=d:\oracle\oradata\arc
LOG_ARCHIVE_START=TRUE
LOG_ARCHIVE_FORMAT=arc_%s.log
LOG_ARCHIVE_DEST_1=
LOG_ARCHIVE_DEST_2=
LOG_ARCHIVE_MAX_PROCESSES=2

19.3 Enabling or disabling archive mode:
----------------------------------------
ALTER DATABASE ARCHIVELOG      (mounted, not open)
ALTER DATABASE NOARCHIVELOG    (mounted, not open)

19.4 Implementation backup in archive mode via OS script:
--------------------------------------------------------
19.4.1 OS backup script in unix
-------------------------------
###############################################
# Example archive log backup script in UNIX:  #
###############################################
# Set up the environment to point to the correct database
ORACLE_SID=CC1; export ORACLE_SID
ORAENV_ASK=NO; export ORAENV_ASK
. oraenv

# Backup the tablespaces
svrmgrl <<EOFarch1
connect internal
alter tablespace SYSTEM begin backup;
! tar -cvf /dev/rmt/0hc /u01/oradata/sys01.dbf
alter tablespace SYSTEM end backup;
alter tablespace DATA begin backup;
! tar -rvf /dev/rmt/0hc /u02/oradata/data01.dbf
alter tablespace DATA end backup;
etc
..
..
# Now we backup the archived redo logs before we delete them.
# We must briefly stop the archiving process in order that
# we do not miss the latest files for sure.
archive log stop;
exit
EOFarch1

# Get a listing of all archived files.
FILES='ls /db01/oracle/arch/cc1/arch*.dbf'; export FILES

# Start archiving again
svrmgrl <<EOFarch2
connect internal
archive log start;
exit
EOFarch2

# Now backup the archived files to tape
tar -rvf /dev/rmt/0hc $FILES

# Delete the backupped archived files
rm -f $FILES

# Backup the control file
svrmgrl <<EOFarch3
connect internal
alter database backup controlfile to '/db01/oracle/cc1/cc1controlfile.bck';
exit
EOFarch3

tar -rvf /dev/rmt/0hc /db01/oracle/cc1/cc1controlfile.bck
###############################
# End backup script example   #
###############################

19.5 Tablespaces and datafiles online/offline in non-archive and archive mode:
------------------------------------------------------------------------------
Tablespace:
A tablespace can be taken offline in archive mode and in non-archive mode
without media recovery being needed. This is the case with the NORMAL clause:
alter tablespace offline normal;
With the IMMEDIATE clause, recovery is needed.

Datafile:
A datafile can be taken offline in archive mode. When the datafile is brought
online again, media recovery must first be applied.
A datafile cannot be taken offline in non-archive mode.

Backup mode:
When you issue ALTER TABLESPACE .. BEGIN BACKUP, it freezes the datafile
header. This is so that we know what redo logs we need to apply to a given
file to make it consistent. While you are backing up that file hot, we are
still writing to it -- it is logically inconsistent. Some of the backed up
blocks could be from the SCN in place at the time the backup began -- others
from the time it ended, and others from various points in between.

19.6 Recovery in archive mode:

-----------------------------
19.6.1: recovery with a current controlfile present
===================================================
Media recovery after the loss of datafile(s) and the like normally takes place
on the basis of the SCN in the controlfile.

A1: complete recovery:
----------------------
RECOVER DATABASE            (database not open)
RECOVER TABLESPACE DATA     (database open, except this tablespace)
RECOVER DATAFILE 5          (database open, except this datafile)

A2: incomplete recovery:
------------------------
time based:   recover database until time '1999-12-31:23.40.00'
cancel based: recover database until cancel
change based: recover database until change 60747681;

In both recoveries the archived redo logs are applied.
Always finish an incomplete recovery with "alter database open resetlogs;"
to purge the new log entries from the online redo files.

19.6.2: Recovery without a current controlfile
==============================================
Media recovery when no current controlfile exists.
The controlfile then contains an SCN that is too old compared to the SCNs in
the archived redo logs. You must tell Oracle this via:

RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;

Specifying "using backup controlfile" is effectively telling Oracle that
you've lost your controlfile, and thus SCNs in file headers cannot be compared
to anything. So Oracle will happily keep applying archives until you tell it
to stop (or run out).

19.7 Queries to find an SCN:
----------------------------
Every redo log is associated with a high and a low SCN.
V$LOG_HISTORY, V$ARCHIVED_LOG, V$DATABASE, V$DATAFILE_HEADER and V$DATAFILE
contain SCNs.

Queries:
--------

SELECT file#, substr(name, 1, 30), status, checkpoint_change#    -- from controlfile
FROM V$DATAFILE;

SELECT file#, substr(name, 1, 30), status, fuzzy, checkpoint_change#    -- from file header
FROM V$DATAFILE_HEADER;

SELECT first_change#, next_change#, sequence#, archived, substr(name, 1, 40)
FROM V$ARCHIVED_LOG;

SELECT recid, first_change#, sequence#, next_change#
FROM V$LOG_HISTORY;

SELECT resetlogs_change#, checkpoint_change#, controlfile_change#, open_resetlogs
FROM V$DATABASE;

SELECT * FROM V$RECOVER_FILE;    -- Which file needs recovery

Find the latest archived redologs:

SELECT name
FROM v$archived_log
WHERE sequence# = (SELECT max(sequence#)
                   FROM v$archived_log
                   WHERE 1699499 >= first_change#);
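The SCN range of the current online log group can be read in the same way
(V$LOG columns per the standard dictionary):

-- SCN range and sequence of the current online redo log group:
SELECT group#, sequence#, first_change#, status
FROM v$log
WHERE status = 'CURRENT';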

sequence#          : number of the archived redo log
first_change#      : first SCN in the archived redo log
next_change#       : last SCN in the archived redo log, and the first SCN of
                     the next log
checkpoint_change# : latest actual SCN
FUZZY              : Y/N; if YES, the file contains changes that are later
                     than the SCN in its header

A datafile that contains a block whose SCN is more recent than the SCN of its
header is called a fuzzy datafile.

19.8 Archived redo logs needed for recovery:
--------------------------------------------
V$RECOVERY_LOG lists the archived logs that are needed for a recovery.
You can also use V$RECOVER_FILE to determine which files must be recovered.

SELECT * FROM v$recover_file;

Here you find the FILE#, which you can then use with v$datafile and
v$tablespace:

SELECT d.name, t.name
FROM v$datafile d, v$tablespace t
WHERE t.ts# = d.ts#
AND d.file# in (14,15,21);    -- use values obtained from the V$RECOVER_FILE query
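A minimal sketch of listing those required archives directly (V$RECOVERY_LOG
columns per the standard dictionary; it is populated while the database is
mounted and a recovery is pending):

SELECT thread#, sequence#, archive_name
FROM v$recovery_log;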

19.9 Example: recovery of 1 datafile:
-------------------------------------
Suppose one datafile is corrupt. Then only that one file needs to be restored,
after which recovery is applied to it.

SVRMGRL>alter database datafile '/u01/db1/users01.dbf' offline;
$ cp /stage/users01.dbf /u01/db1
SVRMGRL>recover datafile '/u01/db1/users01.dbf';
and Oracle comes with a suggestion for applying the archived logfiles
SVRMGRL>alter database datafile '/u01/db1/users01.dbf' online;

19.10 Example: recovery of the database:
----------------------------------------
Suppose several datafiles are lost. Restore the backup files now.

SVRMGRL>startup mount;
SVRMGRL>recover database;
and Oracle will apply the archived redo logfiles.
media recovery complete
SVRMGRL>alter database open;
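After a recovery like the ones above, a quick sanity check that no file was
left offline or still in need of recovery:

SELECT file#, status, substr(name, 1, 40)
FROM v$datafile
WHERE status NOT IN ('ONLINE', 'SYSTEM');

SELECT * FROM v$recover_file;    -- should return no rows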

19.11 Restore to other disks:
-----------------------------
- alter database backup controlfile to trace;
- restore the files to the new location
- edit the controlfile with the new file locations
- save this as a .sql script and run it:
  SVRMGRL>@new.sql

controlfile:

startup nomount
create controlfile reuse database "brdb" noresetlogs archivelog
maxlogfiles 16
maxlogmembers 2
maxdatafiles 100
maxinstances 1
maxloghistory 226
logfile
group 1 ('/disk03/db1/redo/redo01a.dbf',
         '/disk04/db1/redo/redo01b.dbf') size 2M ,
group 2 ('/disk03/db1/redo/redo02a.dbf',
         '/disk04/db1/redo/redo02b.dbf') size 2M
datafile
'/disk04/oracle/db1/sys01.dbf',
'/disk05/oracle/db1/rbs01.dbf',

'/disk06/oracle/db1/data01.dbf',
'/disk04/oracle/db1/index01.dbf',
character set 'us7ascii'
;

RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
ALTER DATABASE OPEN RESETLOGS;

19.12 Copy a database to another Server:
----------------------------------------
1. copy all files exactly from the one location to the other
2. source server: alter database backup controlfile to trace
3. create a correct init.ora with references to the new server
4. edit the ascii version of the controlfile from step 2 so that all disk
   locations point to the target

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE SET DATABASE "FSYS" RESETLOGS noARCHIVELOG
MAXLOGFILES 8
MAXLOGMEMBERS 4
etc..
ALTER DATABASE OPEN resetlogs;

or

CREATE CONTROLFILE REUSE SET DATABASE "TEST" RESETLOGS ARCHIVELOG
..
#RECOVER DATABASE
ALTER DATABASE OPEN RESETLOGS;

CREATE CONTROLFILE REUSE DATABASE "PROD" NORESETLOGS ARCHIVELOG
..
..
RECOVER DATABASE

# All logs need archiving AND a log switch is needed.
ALTER SYSTEM ARCHIVE LOG ALL;

# Database can now be opened normally.
ALTER DATABASE OPEN;

5. SVRMGRL>@script
   If there is a problem: delete the original controlfiles and do not use
   REUSE.

Example create controlfile:
---------------------------
If you want another database name, use CREATE CONTROLFILE SET DATABASE

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "O901" RESETLOGS NOARCHIVELOG
    MAXLOGFILES 50
    MAXLOGMEMBERS 5
    MAXDATAFILES 100
    MAXINSTANCES 1
    MAXLOGHISTORY 113
LOGFILE
  GROUP 1 'D:\ORACLE\ORADATA\O901\REDO01.LOG' SIZE 100M,
  GROUP 2 'D:\ORACLE\ORADATA\O901\REDO02.LOG' SIZE 100M,
  GROUP 3 'D:\ORACLE\ORADATA\O901\REDO03.LOG' SIZE 100M
DATAFILE
  'D:\ORACLE\ORADATA\O901\SYSTEM01.DBF',
  'D:\ORACLE\ORADATA\O901\UNDOTBS01.DBF',
  'D:\ORACLE\ORADATA\O901\CWMLITE01.DBF',
  'D:\ORACLE\ORADATA\O901\DRSYS01.DBF',
  'D:\ORACLE\ORADATA\O901\EXAMPLE01.DBF',
  'D:\ORACLE\ORADATA\O901\INDX01.DBF',
  'D:\ORACLE\ORADATA\O901\TOOLS01.DBF',
  'D:\ORACLE\ORADATA\O901\USERS01.DBF'
CHARACTER SET UTF8
;

Example controlfile:
--------------------
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "SALES" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 5
    MAXLOGMEMBERS 2
    MAXDATAFILES 255
    MAXINSTANCES 2
    MAXLOGHISTORY 1363
LOGFILE
  GROUP 1 ('/oradata/system/log/log1.log', '/oradata/dump/log/log1.log') SIZE 100M,
  GROUP 2 ('/oradata/system/log/log2.log', '/oradata/dump/log/log2.log') SIZE 100M
DATAFILE
  '/oradata/system/system.dbf',
  '/oradata/rbs/rollback.dbf',
  '/oradata/rbs/rollbig.dbf',
  '/oradata/system/users.dbf',
  '/oradata/temp/temp.dbf',
  '/oradata/data_big/ahp_lkt_data_small.dbf',
  '/oradata/data_small/ahp_lkt_data_big.dbf',
  '/oradata/data_big/ahp_lkt_index_small.dbf',
  '/oradata/index_small/ahp_lkt_index_big.dbf',
  '/oradata/data_small/maniin_ah_data_small.dbf',
  '/oradata/index_small/maniin_ah_data_big.dbf',
  '/oradata/index_big/maniin_ah_index_small.dbf',
  '/oradata/index_big/maniin_ah_index_big.dbf',
  '/oradata/index_big/fe_heat_data_big.dbf',
  '/oradata/data_small/fe_heat_index_big.dbf',
  '/oradata/data_small/eksa_data_small.dbf',
  '/oradata/data_big/eksa_data_big.dbf',
  '/oradata/index_small/eksa_index_small.dbf',
  '/oradata/index_big/eksa_index_big.dbf',
  '/oradata/data_small/provisioning_data_small.dbf',
  '/oradata/data_small/softplan_data_small.dbf',
  '/oradata/index_small/provisioning_index_small.dbf',
  '/oradata/system/tools.dbf',
  '/oradata/index_small/fe_heat_index_small.dbf',
  '/oradata/data_small/softplan_data_big.dbf',
  '/oradata/index_small/softplan_index_small.dbf',
  '/oradata/index_small/softplan_index_big.dbf',
  '/oradata/data_small/fe_heat_data_small.dbf'
;

# Recovery is required if any of the datafiles are restored backups,
# or if the last shutdown was not normal or immediate.
RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
ALTER DATABASE OPEN RESETLOGS;

19.13 PROBLEMS DURING RECOVERY:
-------------------------------

 normal business         switch logfile                CRASH
       |                       |                         |
 ------------------------------------------------------------------------------
     t=t0                    t=t1                      t=t2        t=t3

 (datafile header SCNs at the crash: system=453, users=455, tools=459)

ORA-01194, ORA-01195:
---------------------

Note 1:
-------
Suppose the system comes with:

ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u03/oradata/tstc/dbsyst01.dbf'

Either you had the database in archive mode or in non-archive mode:

archive mode:
  RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
  ALTER DATABASE OPEN RESETLOGS;

non-archive mode:
  RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
  ALTER DATABASE OPEN RESETLOGS;

If you have checked that the SCNs of all files are the same number, you might try in the init.ora file:



_allow_resetlogs_corruption = true

-------
Note 2:
-------
Problem Description
-------------------
You restored your hot backup and you are trying to do a point-in-time recovery.
When you tried to open your database you received the following error:

ORA-01195: online backup of file <name> needs more recovery to be consistent

Cause: An incomplete recovery session was started, but an insufficient number of
redo logs were applied to make the file consistent. The reported file is an online
backup that must be recovered to the time the backup ended.

Action: Either apply more redo logs until the file is consistent or restore the file
from an older backup and repeat the recovery. For more information about online backup,
see the index entry "online backups" in the <Oracle7 Server Administrator's Guide>.
This is assuming that the hot backup completed error free.

Solution Description
--------------------
Continue to apply the requested logs until you are able to open the database.

Explanation
-----------
When you perform hot backups on a file, the file header is frozen.
For example, datafile01 may have a file header frozen at SCN #456. When you back up
the next datafile, the SCN # may be different. For example, the file header for
datafile02 may be frozen with SCN #457. Therefore, you must apply archive logs
until you reach the SCN # of the last file that was backed up. Usually, applying
one or two more archive logs will solve the problem, unless there was a lot of
activity on the database during the backup.

-------
Note 3:
-------
ORA-01194: file 1 needs more recovery to be consistent

I am working with a test server, I can load it again but I would like to know if
this kind of problem could be solved or not. Just to let you know, that I am new
in Oracle Database Administration.
I ran a hot backup script, which deleted the old ARCHIVE logs at the end.
After checking the script's log, I realized that the hot backup was not successful
and it deleted the Archives. I tried to startup the database and an error occurred:
"ORA-01589: must use RESETLOGS or NORESETLOGS option for database open"
I tried to open it with the RESETLOGS option then another error occurred:
"ORA-01195: online backup of file 1 needs more recovery to be consistent"
Just because it was a test environment, I have never taken any cold backups.
I still have hot backups. I don't know how to recover from those.
If anyone can tell me how to do it from SQLPLUS (SVRMGRL is not loaded),
I would really appreciate it.
Thanks,

Hi Hima,
The following might help. You now have a database that is operating like it's in
noarchive mode since the logs are gone.

1. Mount the database.

2. Issue the following query:
   SELECT V1.GROUP#, MEMBER, SEQUENCE#, FIRST_CHANGE#
   FROM V$LOG V1, V$LOGFILE V2
   WHERE V1.GROUP# = V2.GROUP#;
   This will list all your online redolog files and their respective sequence
   and first change numbers.

3. If the database is in NOARCHIVELOG mode, issue the query:
   SELECT FILE#, CHANGE# FROM V$RECOVER_FILE;
   If the CHANGE# is GREATER than the minimum FIRST_CHANGE# of your logs,
   the datafile can be recovered.

4. Recover the datafile, after taking it offline (you cannot take system offline,
   which is the file in error in your case):
   RECOVER DATAFILE '<full_path_file_name>'

5. Confirm each of the logs that you are prompted for until you receive the message
   "Media recovery complete". If you are prompted for a non-existing archived log,
   Oracle probably needs one or more of the online logs to proceed with the recovery.
   Compare the sequence number referenced in the ORA-280 message with the sequence
   numbers of your online logs. Then enter the full path name of one of the members
   of the redo group whose sequence number matches the one you are being asked for.
   Keep entering online logs as requested until you receive the message
   "Media recovery complete".

6. Bring the datafile online. No need for system.

7.
If the database is at mount point, open it.

8. Perform a full closed backup of the existing database.
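As a complement to the steps above, the checkpoint SCN and fuzzy status of each datafile header can be inspected directly; a small sketch using the standard v$ views (a file with FUZZY = YES still contains changes past its header checkpoint, which is typical for a restored hot backup that needs more redo applied):

```sql
-- Checkpoint SCN and backup-fuzzy status per datafile header:
SELECT file#, status, fuzzy, checkpoint_change#
FROM   v$datafile_header
ORDER  BY checkpoint_change#;

-- The controlfile checkpoint SCN to compare against:
SELECT checkpoint_change# FROM v$database;
```

When all file headers show the same checkpoint SCN and FUZZY = NO, the database should open without further recovery.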

-------
Note 4:
-------
Recover until time using backup controlfile

Hi,
I am trying to perform an incomplete recovery to an arbitrary point in time in the past.
Eg. I want to go back five minutes. I have a hot backup of my database.
(Tablespaces into hotbackup mode, copy files, tablespaces out of hotbackup mode,
archive current log, backup controlfile to a file and also to a trace).
(yep im in archivelog mode as well)
I shutdown the current database and blow the datafiles, online redo logs, controlfiles away.
I restore my backup copy of the database - (just the datafiles) startup nomount
and then run an edited controlfile trace backup (with resetlogs).
I then RECOVER DATABASE UNTIL TIME 'whenever' USING BACKUP CONTROLFILE.
I'm prompted for logs in the usual way but the recovery ends with an ORA-1547:
Recover succeeded but open resetlogs would give the following error.
The next error is that datafile 1 (system ts) - would need more recovery.
Now metalink tells me that this is usually due to backups being restored that are older
than the archive redo logs - this isn't the case. I have all the archive redo logs I need
to cover the time the backup was taken up to the present. The time specified in the
recovery is after the backup as well.
What am I missing here? Its driving me nuts. I'm off back to the docs again!
Thanks in advance
Tim

-------------------------------------------------------------------------------
From: Anand Devaraj 15-Aug-02 15:15
Subject: Re : Recover until time using backup controlfile

The error indicates that Oracle requires a few more scns to get all the datafiles in sync.
It is quite possible that those scns are present in the online redo logfiles which were lost.
In such cases when Oracle asks for a non-existent archive log, you should provide the
complete path of the online log file for the recovery to succeed.
Since you dont have an online log file you should use

RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE.

In this case when you exhaust all the archive log files, you issue the cancel command
which will automatically rollback all the incomplete transactions and get all the
datafile headers in sync with the controlfile.
To do an incomplete recovery using time, you usually require the online logfiles
to be present.
Anand

-------------------------------------------------------------------------------
From: Radhakrishnan paramukurup 15-Aug-02 16:19
Subject: Re : Recover until time using backup controlfile

I am not sure whether you have missed this step or just missed it in the note.
You need also to switch the log at the end of the backup (I do as a matter of practice,
else you need the next log, which is not sure to be available in case of a failure).
Otherwise some of the changes needed to reach a consistent state are still in the
online log and you can never open until you reach a consistent state.
Hope this helps ........

-------------------------------------------------------------------------------
From: Mark Gokman 15-Aug-02 16:41
Subject: Re : Recover until time using backup controlfile

To successfully perform incomplete recovery, you need a full db backup that was completed
prior to the point to which you want to recover, plus you need all archive logs containing
all SCNs up to the point to which you want to recover. Applying these rules to your case,
I have two questions:
- are you recovering to the point in time AFTER the time the successful full backup
  was completed?
- is there an archive log that was generated AFTER the time you specify in until time?
If both answers are yes, then you should have no problems. I actually recently performed
such a recovery several times.

-------------------------------------------------------------------------------
From: Tim Palmer 15-Aug-02 18:02
Subject: Re : Re : Recover until time using backup controlfile

Thanks Guys! I think Mark has hit the nail on the head here. I was being an idiot!
Ive ran this exercise a few more times (with success) and I am convinced that what I was
doing was trying to recover to a point in time that basically was before the latest scn
of any one file in the hot backup set I was using - convinced myself that I wasnt but
I must have been..... perhaps I need a holiday!
Thanks again
Tim

-------------------------------------------------------------------------------
From: Oracle, Rowena Serna 16-Aug-02 15:44
Subject: Re : Recover until time using backup controlfile

Thanks to mark for his input for helping you out.

-------
Note 5:
-------
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 2 was not restored from a sufficiently old backup
ORA-01110: data file 2: 'D:\ORACLE\ORADATA\<instance>\UNDOTBS01.DBF'
File number, name and directory may vary depending on Oracle configuration

Details:
Undo tablespace data description

In an Oracle database, Undo tablespace data is an image or snapshot of the original
contents of a row (or rows) in a table. This data is stored in Undo segments (formerly
Rollback segments in earlier releases of Oracle) in the Undo tablespace. When a user
begins to make a change to the data in a row in an Oracle table, the original data is
first written to Undo segments in the Undo tablespace. The entire process (including
the creation of the Undo data) is recorded in Redo logs before the change is completed
and written in the Database Buffer Cache, and then the data files via the database
writer (DBWn) process. If the transaction does not complete due to some error or should
there be a user decision to reverse (rollback) the change, this Undo data is critical
for the ability to roll back or undo the changes that were made. Undo data also ensures
a way to provide read consistency in the database.
Read consistency means that if there is a data change in a row of data that is not yet
committed, a new query of this same row or table will not display any of the uncommitted
data to other users, but will use the information from the Undo segments in the Undo
tablespace to actually construct and present a consistent view of the data that only
includes committed transactions or information.

During recovery, Oracle uses its Redo logs to play forward through transactions

in a database so that all lost transactions (data changes and their Undo data generation)
are replayed into the database. Then, once all the Redo data is applied to the data files,
Oracle uses the information in the Undo segments to undo or roll back all uncommitted
transactions. Once recovery is complete, all data in the database is committed data,
the System Change Numbers (SCN) on all data files and the control_files match, and the
database is considered consistent.

As for Oracle 9i, the default method of Undo management is no longer manual, but automatic;
there are no Rollback segments in individual user tablespaces, and all Undo management is
processed by the Oracle server, using the Undo tablespace as the container to maintain the
Undo segments for the user tablespaces in the database. The tablespace that still maintains
its own Rollback segments is the System tablespace, but this behavior is by design and
irrelevant to the discussion here.

If this configuration is left as the default for the database, and the 5.022 or 5.025
version of the VERITAS Backup Exec (tm) Oracle Agent is used to perform Oracle backups,
the Undo tablespace will not be backed up. If Automatic Undo Management is disabled and
the database administrator (DBA) has modified the locations for the Undo segments (if the
Undo data is no longer in the Undo tablespace), this data may be located elsewhere, and
the issues addressed by this TechNote may not affect the ability to fully recover the
database, although it is still recommended that the upgrade to the 5.026 Oracle Agent
be performed.

Scenario 1
The first scenario would be a recovery of the entire database to a previous point-in-time.
This type of recovery would utilize the RECOVER DATABASE USING BACKUP CONTROLFILE statement
and its customizations to restore the entire database to a point before the entry of
improper or corrupt data, or to roll back to a point before the accidental deletion of
critical data.

In this type of situation, the most common procedure for the restore is to just restore
the entire online backup over the existing Oracle files with the database shutdown.
(See the Related Documents section for the appropriate instructions on how to restore
and recover an Oracle database to a point-in-time using an online backup.)
In this scenario, where the entire database would be rolled back in time, an offline
restore would include all data files, archived log files, and the backup control_file
from the tape or backup media. Once the RECOVER DATABASE USING BACKUP CONTROLFILE command was

executed, Oracle would begin the recovery process to roll forward through the Redo log
transactions, and it would then roll back or undo uncommitted transactions.

At the point when the recovery process started on the actual Undo tablespace, Oracle
would see that the SCN of that tablespace was too high (in relation to the record in
the control_file). This would happen simply because the Undo tablespace wasn't on the
tape or backup media that was restored, so the original Undo tablespace wouldn't have
been overwritten, as were the other data files, during the restore operation. The failure
would occur because the Undo tablespace would still be at its SCN before the restore from
backup (an SCN in the future as related to the restored backup control_file).
All other tablespaces and control_files would be back at their older SCNs (not necessarily
consistent yet), and the Oracle server would respond with the following error messages:

ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 2 was not restored from a sufficiently old backup
ORA-01110: data file 2: 'D:\ORACLE\ORADATA\<instance>\UNDOTBS01.DBF'

At this point, the database cannot be opened with the RESETLOGS option, nor in a normal
mode. Any attempt to do so yields the error referenced above.

SQL> alter database open resetlogs;
alter database open resetlogs
*
Error at line 1:
ORA-01152: file 2 was not restored from a sufficiently old backup
ORA-01110: data file 2: 'D:\ORACLE\ORADATA\DRTEST\UNDOTBS01.DBF'

The only recourse here is to recover or restore an older backup that contains an Undo
tablespace, whether from an older online backup, or from a closed or offline backup or
copy of the database. Without this ability to acquire an older Undo tablespace to rerun
the recovery operation, it will not be possible to start the database. At this point,
Oracle Technical Support must be contacted.
Scenario 2
The second scenario would involve the actual corruption or loss of the Undo tablespace's
data files. If the Undo tablespace data is lost or corrupted due to media failure or other
internal logical error or user error, this data/tablespace must be recovered.

Oracle 9i does offer the ability to create a new Undo tablespace and to alter the Oracle
Instance to use this new tablespace when deemed necessary by the DBA. One of the
requirements to accomplish this change, though, is that there cannot be any active
transactions in the Undo segments of the tablespace when it is time to actually drop it.
In the case of data file corruption, uncommitted transactions in the database that have
data in Undo segments can be extremely troublesome because the existence of any
uncommitted transactions will lock the Undo segments holding the data so that they cannot
be dropped. This will be evidenced by an "ORA-01548" error if this is attempted.
This error, in turn, prevents the drop and recreation of the Undo tablespace, and thus
prevents the successful recovery of the database.

To overcome this problem, the transaction tables of the Undo segments can be traced to
provide details on transactions that Oracle is trying to recover via rollback, and these
traces will also identify the objects that Oracle is trying to apply the undo to.
Oracle Doc ID: 94114.1 may be referenced to set up a trace on the database startup so
that the actual transactions that are locking the Undo segments can be identified and
dropped. Dropping objects that contain uncommitted transactions that are holding locks
on Undo segments does entail data loss, and the amount of loss depends on how much
uncommitted data was in the Undo segments at the point of failure.

When utilized, this trace is actually monitoring or dumping data from the transaction
tables in the headers of the Undo segments (where the records that track the data in the
Undo segments are located), but if the Undo tablespace's data file is actually missing,
has been offline dropped, or if these Undo segment headers have been corrupted, even the
ability to dump the transaction table data is lost and the only recourse at this point
may be to open the database, export, and rebuild. At this point, Oracle Technical Support
must be contacted.
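The "create a new Undo tablespace and alter the Instance to use it" procedure mentioned above could look as follows; a sketch, in which the tablespace names and the datafile path are made-up examples:

```sql
-- Create a replacement undo tablespace (path and size are hypothetical):
CREATE UNDO TABLESPACE undotbs2
  DATAFILE 'D:\ORACLE\ORADATA\O901\UNDOTBS02.DBF' SIZE 200M;

-- Point the instance at the new undo tablespace:
ALTER SYSTEM SET undo_tablespace = undotbs2;

-- The old undo tablespace can only be dropped when no active transactions
-- remain in its segments (otherwise ORA-01548, as described above):
DROP TABLESPACE undotbs1;
```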
Backup Exec Agent for Oracle 5.022 and 5.025 should be upgraded to 5.026
When using the 5.022 or 5.025 version of the Backup Exec for Windows Servers Oracle Agent
(see the Related Documents section for the appropriate instructions on how to identify
the version of the Oracle Agent in use), the Oracle Undo tablespace is not available for
backup because the Undo tablespace falls into the type category of Undo, and only
tablespaces with a content type of PERMANENT are located and made available for backup.
Normal full backups with all Oracle components selected will run without error and will
complete with a successful status since the Undo tablespace is not actually flagged as
a selection.

In most Oracle recovery situations, this absence of the Undo tablespace data for restore
would not cause any problem because the original Undo tablespace is still available on
the database server. Restores of User tablespaces, which do not require a rollback in
time, would proceed normally since lost data or changes would be replayed back into the
database, and Undo data would be

available to roll back uncommitted transactions to leave the database in a consistent
state and ready for user access.

However, in certain recovery scenarios (in which a rollback in time or full database
recovery is attempted, or in the case of damaged or missing Undo tablespace data files),
this missing Undo data can result in the inability to properly recover tablespaces back
to a point-in-time, and could potentially render the database unrecoverable without an
offline backup or the assistance of Oracle Technical Support. The scenarios in this
TechNote describe two examples (this does not necessarily imply that these are the only
scenarios) of how this absence of the Undo tablespace on tape or backup media, and thus
its inability to be restored, can result in failure of the database to open and can
result in actual data loss.

The only solution to the problems referenced within this TechNote is to upgrade the
Backup Exec for Windows Servers Oracle Agent to version 5.026, and to take new offline
(closed database) and then new online (running database) backups of the entire Oracle 9i
database as per the Oracle Agent documentation in the Backup Exec 9.0 for Windows Servers
Administrator's Guide. Oracle 9i database backups made with the 5.022 and 5.025 Agent
that shipped with Backup Exec 9.0 for Windows Servers build 4367 or build 4454 should be
considered suspect in the context of the information provided in this TechNote.

Note: The 5.022, 5.025, and 5.026 versions of the Oracle Agent are compatible with
Backup Exec 8.6 for Windows NT and Windows 2000, which includes support for Oracle 9i,
as well as Backup Exec 9.0 for Windows Servers. See the Related Documents section for
instructions on how to identify the version of the Oracle Agent in use.

-------
Note 6:
-------
Backup

a) Consistent backups
A consistent backup means that all data files and control files are consistent to
a point in time. I.e. they have the same SCN.
This is the only method of backup when the database is in NOARCHIVELOG mode.

b) Inconsistent backups
An inconsistent backup is possible only when the database is in Archivelog mode and
proper Oracle-aware software is used. Most default backup software can not backup open
files. Special precautions need to be used and testing needs to be done.
You must apply redo logs to the data files, in order to restore the database to a
consistent state.
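Whether the database currently runs in archivelog mode can also be read from the dictionary; a small sketch using standard v$ views:

```sql
-- Current archiving mode of the database:
SELECT log_mode FROM v$database;     -- ARCHIVELOG or NOARCHIVELOG

-- Status of the archiver process:
SELECT archiver FROM v$instance;     -- STARTED, STOPPED or FAILED
```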

c) Database Archive mode
The database can run in either Archivelog mode or noarchivelog mode. When you first
create the database, you specify if it is to be in Archivelog mode. Then in the init.ora
file you set the parameter log_archive_start=true so that archiving will start
automatically on startup. If the database has not been created with Archivelog mode
enabled, you can issue the command whilst the database is mounted, not open:

SVRMGR> alter database archivelog;
SVRMGR> log archive start
SVRMGR> alter database open
SVRMGR> archive log list

This command will show you the log mode and if automatic archival is set.

d) Backup Methods
Essentially, there are two backup methods, hot and cold, also known as online and
offline, respectively. A cold backup is one taken when the database is shutdown.
A hot backup is one taken when the database is running.

Commands for a hot backup:

1. SVRMGR> alter database archivelog
   SVRMGR> log archive start
   SVRMGR> alter database open

2. SVRMGR> archive log list
   -- This will show what the oldest online log sequence is. As a precaution, always
   -- keep all archived log files starting from the oldest online log sequence.

3. SVRMGR> alter tablespace tablespace_name BEGIN BACKUP

4. -- Using an OS command, backup the datafile(s) of this tablespace.

5. SVRMGR> alter tablespace tablespace_name END BACKUP
   -- Repeat steps 3, 4, 5 for each tablespace.

6. SVRMGR> archive log list
   -- Do this again to obtain the current log sequence. You will want to make sure
   -- you have a copy of this redo log file.

7. So to force an archived log, issue:
   SVRMGR> alter system switch logfile
   A better way to force this would be:
   SVRMGR> alter system archive log current;

8. SVRMGR> archive log list
   -- This is done again to check if the log file had been archived and to find
   -- the latest archived sequence number.

9. Backup all archived log files determined from steps 2 and 8.
   Do not backup the online redo logs.
   These will contain the end-of-backup marker and can cause corruption if used
   during recovery.

10. Back up the control file:
    SVRMGR> alter database backup controlfile to 'filename'

e) Incremental backups
These are backups that are taken on blocks that have been modified since the last backup.
These are useful as they don't take up as much space and time. There are two kinds of
incremental backups: Cumulative and Noncumulative.
Cumulative incremental backups include all blocks that were changed since the last backup
at a lower level. This one reduces the work during restoration as only one backup
contains all the changed blocks.
Noncumulative only includes blocks that were changed since the previous backup at the
same or lower level.
Using rman, you issue the command "backup incremental level n"

f) Support scenarios

When the database crashes, you now have a backup. You restore the backup and then
recover the database. Also, don't forget to take a backup of the control file whenever
there is a schema change.

RECOVERY
========
There are several kinds of recovery you can perform, depending on the type of failure
and the kind of backup you have. Essentially, if you are not running in archive log
mode, then you can only recover the cold backup of the database and you will lose any
new data and changes made since that backup was taken.
If, however, the database is in Archivelog mode you will be able to restore the database
up to the time of failure.

There are three basic types of recovery:

1. Online Block Recovery. This is performed automatically by Oracle (pmon).
   Occurs when a process dies while changing a buffer. Oracle will reconstruct the
   buffer using the online redo logs and writes it to disk.

2. Thread Recovery. This is also performed automatically by Oracle.
   Occurs when an instance crashes while having the database open. Oracle applies all
   the redo changes in the thread that occurred since the last time the thread was
   checkpointed.

3. Media Recovery. This is required when a data file is restored from backup.
   The checkpoint count in the data files here is not equal to the checkpoint count in
   the control file. This is also required when a file was offlined without checkpoint
   and when using a backup control file.

Now let's explain a little about Redo vs Rollback.
Redo information is recorded so that all commands that took place can be repeated
during recovery. Rollback information is recorded so that you can undo changes made
by the current transaction but were not committed. The Redo Logs are used to Roll
Forward the changes made, both committed and non-committed changes. Then from the
Rollback segments, the undo information is used to rollback the uncommitted changes.
Media Failure and Recovery in Noarchivelog Mode
In this case, your only option is to restore a backup of your Oracle files. The files
you need are all datafiles, and control files. You only need to restore the password
file or parameter files if they are lost or are corrupted.

Media Failure and Recovery in Archivelog Mode
In this case, there are several kinds of recovery you can perform, depending on what
has been lost. The three basic kinds of recovery are:

1. Recover database - here you use the recover database command and the database must
   be closed and mounted. Oracle will recover all datafiles that are online.
2. Recover tablespace - use the recover tablespace command. The database can be open
   but the tablespace must be offline.
3. Recover datafile - use the recover datafile command. The database can be open but
   the specified datafile must be offline.

Note: You must have all archived logs since the backup you restored from, or else you
will not have a complete recovery.

a) Point in Time recovery:
A typical scenario is that you dropped a table at say noon, and want to recover it.
You will have to restore the appropriate datafiles and do a point-in-time recovery
to a time just before noon.
Note: you will lose any transactions that occurred after noon.
After you have recovered until noon, you must open the database with resetlogs.
This is necessary to reset the log numbers, which will protect the database from
having the redo logs that weren't used be applied.
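The "dropped table at noon" scenario could be handled with a sequence like the following; a sketch with a hypothetical timestamp, assuming the datafiles have already been restored from the hot backup:

```sql
-- Database restored and mounted, all needed archives available:
STARTUP MOUNT;

-- Recover to just before the drop (timestamp is a made-up example):
RECOVER DATABASE UNTIL TIME '2008-06-30:11:55:00';

-- After incomplete recovery the logs must be reset:
ALTER DATABASE OPEN RESETLOGS;
```

Take a fresh full backup immediately afterwards, since backups from before the RESETLOGS can no longer be rolled forward past it.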

The four incomplete recovery scenarios all work the same:

Recover database until time '1999-12-01:12:00:00';
Recover database until cancel;    (you type in cancel to stop)
Recover database until change n;
Recover database until cancel using backup controlfile;

Note: When performing an incomplete recovery, the datafiles must be online.
Do a select name, status from v$datafile to find out if there are any files which are
offline. If you were to perform a recovery on a database which has tablespaces offline,
and they had not been taken offline in a normal state, you will lose them when you issue
the open resetlogs command. This is because the data file needs recovery from a point
before the resetlogs option was used.

b) Recovery without control file
If you have lost the current control file, or the current control file is inconsistent
with files that you need to recover, you need to recover either by using a backup control
file command or create a new control file. You can also recreate the control file based
on the current one using the 'backup control file to trace' command, which will create
a script for you to run to create a new one.
Recover database using backup control file command must be used when using a control
file other than the current. The database must then be opened with the resetlogs option.

c) Recovery of missing datafile with rollback segment
The tricky part here is if you are performing online recovery. Otherwise you can just
use the recover datafile command. Now, if you are performing an online recovery, you
must first ensure that in the init.ora file, you remove the parameter rollback_segments.
Otherwise, Oracle will want to use those rollback segments when opening the database,
but can't find them and won't open. Until you recover the datafiles that contain the
rollback segments, you need to create some temporary rollback segments in order for new
transactions to work. Even if other rollback segments are ok, they will have to be
taken offline.
So, all the rollback segments that belong to the datafile need to be recovered.
If all the datafiles belonging to the tablespace rollback_data were lost, you can now
issue a recover tablespace rollback_data. Next bring the tablespace online and check
the status of the rollback segments by doing a
select segment_name, status from dba_rollback_segs;
You will see the list of rollback segments that are in status Need Recovery.
Simply issue the alter rollback segment online command to complete.
Don't forget to reset the rollback_segments parameter in the init.ora.

d) Recovery of missing datafile without rollback segment
There are three ways to recover in this scenario, as mentioned above.
1. recover database
2. recover datafile 'c:\orant\database\usr1orcl.ora'
3. recover tablespace user_data

e) Recovery with missing online redo logs
Missing online redo logs means that somehow you have lost your redo logs before they
had a chance to be archived. This means that crash recovery cannot be performed, so
media recovery is required instead. All datafiles will need to be restored and rolled
forward until the last available archived log file is applied. This is thus an
incomplete recovery, and as such, the recover database command is necessary
(i.e. you cannot do a datafile or tablespace recovery). As always, when an incomplete
recovery is performed, you must open the database with resetlogs.
Note: the best way to avoid this kind of a loss is to mirror your online log files.

f) Recovery with missing archived redo logs
If your archives are missing, the only way to recover the database is to restore from
your latest backup. You will have lost any transactions which were recorded in the
archived redo logs. Again, this is why Oracle strongly suggests mirroring your online
redo logs and duplicating copies

of the archives.

g) Recovery with resetlogs option

The resetlogs option should be the last resort; however, as we have seen from above, it may be required due to incomplete recoveries (recover using a backup control file, or a point in time recovery). It is imperative that you back up the database immediately after you have opened the database with resetlogs. The reason is that Oracle updates the control file and resets the log numbers, and you will not be able to recover using the old logs.

The next concern will be if the database crashes after you have opened the database with resetlogs, but you have not had time to back up the database. How to recover?

Shut down the database
Backup all the datafiles and the control file
Startup mount
Alter database open resetlogs

This will work, because you have a copy of a control file after the resetlogs point.

Media failure before a backup after resetlogs: if a media failure should occur before a backup was made after you opened the database using resetlogs, you will most likely lose data. The reason is that restoring a lost datafile from a backup prior to the resetlogs will give an error that the file is from an earlier point in time, and you no longer have usable logs from before the resetlogs to roll it forward.

h) Recovery with corrupted/missing rollback segments

If a rollback segment is missing or corrupted, you will not be able to open the database. The first step is to find out what object is causing the rollback to appear corrupted. If we can determine that, we can drop that object. If we can't, we will need to log an iTar to engage support. So, how do we find out if it's actually a bad object?

1. Make sure that all tablespaces are online and all datafiles are online. This can be checked through v$datafile, under the status column. For tablespaces associated with the datafiles, look in dba_tablespaces. If this doesn't show us anything, i.e., all are online, then:
2. 
Put the following in the init.ora: event = "10015 trace name context forever, level 10" This event will generate a trace file that will reveal information about the transaction Oracle is trying to roll back and most importantly, what object Oracle is trying to apply the undo to. Stop and start the database. 3. Check in the directory that is specified by the user_dump_dest parameter (in the init.ora or show parameter command) for a trace file that was generated at startup time. 4. In the trace file, there should be a message similar to: error recovery tx(#,#) object #. TX(#,#) refers to transaction information. The object # is the same as the object_id in sys.dba_objects. 5. Use the following query to find out what object Oracle is trying to perform recovery on. select owner, object_name, object_type, status from dba_objects where object_id = <object #>; 6. Drop the offending object so the undo can be released. An export or relying on a backup may be necessary to restore the object after the corrupted rollback segment goes away. 7. After dropping the object, put the rollback segment back in the init.ora parameter rollback_segments, remove the event, and shutdown and startup the database. In most cases, the above steps will resolve the problematic rollback segment. If this still does not resolve the problem, it may be likely that the corruption is in the actual rollback segment.
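As an illustration of steps 4 and 5 above, suppose the trace file contains a line like "error recovery tx(6,13) object 20127" (the transaction and object numbers here are purely hypothetical examples). The lookup then becomes:

   select owner, object_name, object_type, status
   from dba_objects
   where object_id = 20127;

The row returned identifies the object holding the undo that cannot be applied, i.e. the candidate for the drop in step 6.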

If in fact the rollback segment itself is corrupted, we should see if we can restore from a backup. However, that isn't always possible; there may not be a recent backup, etc. In this case, we have to force the database open with the unsupported, hidden parameters, and you will need to log an iTar to engage support. Please note that this is potentially dangerous! When these are used, transaction tables are not read on opening of the database. Because of this, the typical safeguards associated with the rollback segment are disabled. Their status is 'offline' in dba_rollback_segs. Consequently, there is no check for active transactions before dropping the rollback segment. If you drop a rollback segment which contains active transactions, then you will have logical corruption. Possibly this corruption will be in the data dictionary.

If the rollback segment datafile is physically missing, has been offline dropped, or the rollback segment header itself is corrupt, there is no way to dump the transaction table to check for active transactions. So the only thing to do is get the database open, export and rebuild. Log an iTar to engage support to help with this process. If you cannot get the database open, there is no other alternative than restoring from a backup.

i) Recovery with System Clock change

You can end up with duplicate timestamps in the datafiles when a system clock changes. A solution here is to recover the database until time 'yyyy-mm-dd:00:00:00', with the time set later than when the problem occurred. That way it will roll forward through the records that were actually performed later, but have an earlier timestamp due to the system clock change. Performing a complete recovery is optimal, as all transactions will be applied.

j) Recovery with missing System tablespace

The only option is to restore from a backup. 
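Returning to scenario i), a point-in-time recovery session could look like the sketch below. It assumes the affected datafiles have already been restored from backup; the timestamp is only an example, and should be chosen safely past the clock change:

   SQL> shutdown immediate
   SQL> startup mount
   SQL> recover database until time '2008-06-30:12:00:00';
   SQL> alter database open resetlogs;

As with every incomplete recovery, take a full backup immediately after the resetlogs open.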
k) Media Recovery of offline tablespace

When a tablespace is offline, you cannot recover datafiles belonging to this tablespace using the recover database command. The reason is that a recover database command will only recover online datafiles. Since the tablespace is offline, it thinks the datafiles are offline as well, so even if you recover database and roll forward, the datafiles in this tablespace will not be touched. Instead, you need to perform a recover tablespace command. Alternatively, you could restore the datafiles from a cold backup, mount the database and select from the v$datafile view to see if any of the datafiles are offline. If they are, bring them online, and then you can perform a recover database command.

l) Recovery of Read-Only tablespaces

If you have a current control file, then recovery of read-only tablespaces is no different from recovering read-write files. The issues with read-only tablespaces arise if you have to use a backup control file. If the tablespace is in read-only mode, and hasn't changed to read-write since the last backup, then you will be able to perform media recovery using a backup control file by taking the tablespace offline. The reason here is that when you are using the backup control file, you must open the database with resetlogs. And we know that Oracle won't let you read files from before a resetlogs was done. However, there is an exception with read-only tablespaces. You will be able to take the datafiles online after you have opened the database.

When you have tablespaces that switch modes and you don't have a current control file, you should use a backup control file that recognizes the tablespace in read-write mode. If you don't have a backup control file, you can create a new one using the create controlfile command. Basically, the point here is that you should take a backup of the control file every time you switch a tablespace's mode.

ORA-01547:
ORA-01110:
ORA-01588:
ORA-00205:
----------

-------
NOTE 7:
-------

PURPOSE
-------
To consolidate the common reasons & solutions for the ORA-1113 error.

SCOPE & APPLICATION
-------------------
Customers facing ORA-1113 and analysts requiring information on known issues with ORA-1113 errors.

ORA-1113
========
An ORA-1113 occurs when a datafile needs recovery.

Error Explanation:
------------------
01113, 00000, "file %s needs media recovery"
Cause:  An attempt was made to online or open a database with a file that is in need of media recovery.
Action: First apply media recovery to the file.

This error is usually followed by an ORA-1110 error which will indicate the name of the datafile that needs media recovery. Eg:

ORA-01113: file 28 needs media recovery
ORA-01110: data file 28: '/h04/usupport/app/oracle/oradata/v817/nar.dbf'

This error message indicates a datafile that is not up-to-date with respect to the controlfile and other datafiles. Oracle's architecture is tightly coupled in the sense that all database files -- i.e., datafiles, redolog files, and controlfiles -- must be in sync when the database is opened or at the end of a checkpoint. This implies that the checkpoint SCN (System Change Number) of all datafiles must be the same. If that is not the case for a particular datafile, an ORA-1113 error will be generated. For example, when you put a tablespace in hot backup mode, the checkpoint

SCN of all its datafiles is frozen at the current value until you issue the corresponding end backup. If the database crashes during a hot backup and you try to restart it without doing recovery, you will likely get ORA-1113 for at least one of the datafiles in the tablespace that was being backed up, since its SCN will probably be lower than that of the controlfile and the datafiles in other tablespaces. Likewise, offlining a datafile causes its checkpoint SCN to freeze. If you simply attempt to online the file without recovering it first, its SCN will likely be much older than that of the online datafiles, and thus an ORA-1113 will result.

**********************************************
Before starting these actions do the following:
**********************************************

Note: If you are using Oracle9i, use SQL*Plus instead of Server Manager to execute the mentioned commands, since Server Manager is not available in Oracle9i.

Query V$LOG and V$LOGFILE:

1. If the database is down, you need to mount it first.
   SVRMGR> STARTUP MOUNT PFILE=<location_of_pfile>;

2. Then connect internal in Server Manager and issue the query:
   SVRMGR> CONNECT INTERNAL;
   or
   SQL> connect / as sysdba     (for Oracle9i)

   SVRMGR> SELECT V1.GROUP#, MEMBER, SEQUENCE#, FIRST_CHANGE#
           FROM V$LOG V1, V$LOGFILE V2
           WHERE V1.GROUP# = V2.GROUP# ;

This will list all your online redolog files and their respective sequence and first change numbers. The steps to take next depend on the scenario in which the ORA-1113 was issued. This is discussed in the following sections.

POSSIBLE CAUSES AND SOLUTIONS SUMMARY:
=====================================
I.   AT STARTUP AFTER CRASH WITH TABLESPACE(S) IN HOT BACKUP
II.  AT STARTUP AFTER RESTORING A DATAFILE OR TABLESPACE FROM A BACKUP
III. TRYING TO ONLINE A DATAFILE OR TABLESPACE
IV.  WHEN RECOVERING 'USING BACKUP CONTROLFILE' OPTION TO DO INCOMPLETE RECOVERY
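Before picking one of the four scenarios, it can help to ask Oracle directly which files it thinks need recovery:

   SQL> select * from v$recover_file;

The ERROR and CHANGE# columns indicate why each file needs recovery and from which SCN recovery must start; an empty result means no online datafile currently requires media recovery.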

I. AT STARTUP AFTER CRASH WITH TABLESPACE(S) IN HOT BACKUP
**********************************************************

A. WITH ORACLE 7.1 OR LOWER

1. Mount the database. (If the database is NOT already mounted.)
   SVRMGR> STARTUP MOUNT PFILE=<location_of_pfile>;
2. Apply media recovery to the database.
   SVRMGR> RECOVER DATABASE;
3. Confirm each of the archived logs that you are prompted for until you receive the message "Media recovery complete". If you are prompted for an archived log that does not exist, Oracle probably needs one or more of the online logs to proceed with the recovery. Compare the sequence number referenced in the ORA-280 message with the sequence numbers of your online logs. Then enter the full path name of one of the members of the redo group whose sequence number matches the one you are being asked for.
4. Open the database.
   SVRMGR> ALTER DATABASE OPEN;

B. WITH ORACLE 7.2 OR HIGHER

1. Mount the database.
   SVRMGR> STARTUP MOUNT;
2. Find out which datafiles were in hot backup mode when the database crashed, was shutdown abort, or the machine was rebooted, by running the query:
   SVRMGR> SELECT V1.FILE#, NAME
           FROM V$BACKUP V1, V$DATAFILE V2
           WHERE V1.STATUS = 'ACTIVE' AND V1.FILE# = V2.FILE# ;
3. For each of the files returned by the above query, issue the command:
   SVRMGR> ALTER DATABASE DATAFILE '<full path name>' END BACKUP;
4. Open the database.
   SVRMGR> ALTER DATABASE OPEN;

II. AT STARTUP AFTER RESTORING A DATAFILE OR TABLESPACE FROM A BACKUP
*********************************************************************

A. WITH THE DATABASE IN ARCHIVELOG MODE

1. Mount the database.
   SVRMGR> STARTUP MOUNT;
2. Recover the datafile:
   SVRMGR> RECOVER DATAFILE '<full path name>';

   If recovering more than one datafile in a tablespace, issue a
   SVRMGR> RECOVER TABLESPACE;
   If recovering more than one tablespace, issue a
   SVRMGR> RECOVER DATABASE;
3. Confirm each of the archived logs that you are prompted for until you receive the message "Media recovery complete". If you are prompted for an archived log that does not exist, Oracle probably needs one or more of the online logs to proceed with the recovery. Compare the sequence number referenced in the ORA-280 message with the sequence numbers of your online logs. Then enter the full path name of one of the members of the redo group whose sequence number matches the one you are being asked for.
4. Open the database.
   SVRMGR> ALTER DATABASE OPEN;

B. WITH THE DATABASE IN NOARCHIVELOG MODE

In this case, you will only succeed in recovering the datafile or tablespace if the redo to be applied to it is within the range of your online logs. Issue the query:

   SVRMGR> SELECT FILE#, CHANGE# FROM V$RECOVER_FILE;

Compare the change number you obtain with the FIRST_CHANGE# of your online logs.

If the CHANGE# is GREATER than the minimum FIRST_CHANGE# of your logs, the datafile can be recovered. In this case, the procedure to be followed is analogous to that of scenario II.A above, except that you must always enter the appropriate online log when prompted, until recovery is finished.

If the CHANGE# is LESS than the minimum FIRST_CHANGE# of your logs, the file cannot be recovered. Your options at this point include:

- If the datafile is in a temporary or index tablespace, you may drop it with an ALTER DATABASE DATAFILE '<full path name>' OFFLINE DROP statement and then open the database. Once the database is up, you must drop the tablespace to which the datafile belongs and recreate it. 
- If the datafile is in the SYSTEM or in a rollback tablespace, restore an up-to-date copy of the datafile (if available) or your most recent full backup. In case you do not have either of these, then it might not be possible to recover the database fully. For more details or to assist you in your decision, please contact Oracle Customer Support.

For all other cases in this scenario, you must weigh the cost of going to a backup versus the cost of recreating the tablespace involved, as described in the two previous cases. For more details or to assist you in your decision,

please contact Oracle Customer Support.

III. TRYING TO ONLINE A DATAFILE OR TABLESPACE
**********************************************

1. Recover the datafile:
   SVRMGRL> RECOVER DATAFILE '<datafile_name>';
   If recovering a tablespace, do
   SVRMGRL> RECOVER TABLESPACE <tablespace_name>;
   If recovering a database, do
   SVRMGRL> RECOVER DATABASE;
2. Confirm each of the archived logs that you are prompted for until you receive the message "Media recovery complete". If you are prompted for an archived log that does not exist, Oracle probably needs one or more of the online logs to proceed with the recovery. Compare the sequence number referenced in the ORA-280 message with the sequence numbers of your online logs. Then enter the full path name of one of the members of the redo group whose sequence number matches the one you are being asked for.
3. Open the database.
   SVRMGR> ALTER DATABASE OPEN;

IV. WHEN RECOVERING 'USING BACKUP CONTROLFILE' OPTION TO DO INCOMPLETE RECOVERY
*******************************************************************************

If the database is recovered with the "RECOVER DATABASE USING BACKUP CONTROLFILE;" option without specifying the "UNTIL CANCEL" option, then upon "ALTER DATABASE OPEN RESETLOGS;" you will encounter the ORA-1113 error.

Steps to work around this issue:

1. Recover the database again using:
   SVRMGR> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
2. Cancel recovery by issuing the "CANCEL" command.
3. Open the database using:
   SVRMGR> ALTER DATABASE OPEN RESETLOGS;

Notes that explain other possible recovery scenarios involving ORA-1113:
------------------------------------------------------------------------

Note 116374.1    -- "ORA-1113 on Datafile After Moving Datafile Using USFDMP Utility"
                    ORA-1113 on datafile after moving datafile using USFDMP
Note 1079626.6   -- "VMS: Mount Phase of Database Startup Results in ORA-01113, ORA-01186 & ORA-01122"
                    ORA-1113 due to datafile getting locked on OpenVMS
Note 1020262.102 -- "ORA-01113, ORA-01110: TRYING TO STARTUP DATABASE AFTER INCOMPLETE RECOVERY"
                    Another scenario of ORA-1113
Note 168115.1    -- "ORA-01113 ORA-01110 on Database Startup after Write Disk failure"
                    Datafile header contains a different SCN compared to other database files due to a write disk failure
Note 146039.1    -- "Database Startup Fails with ORA-01113, ORA-01110"
                    Dropping the datafile while ORA-1113, if the datafile is not required

Some useful queries:

set pagesize 20000
set linesize 1000
set pause off
set serveroutput on
set feedback on
set echo on
set numformat 999999999999999

spool recover.lst

show parameter pfile;
archive log list;

select * from v$backup;

select file#, status, substr(name, 1, 70) from v$datafile;

select distinct checkpoint_change# from v$datafile_header;

select status, resetlogs_change#, resetlogs_time, checkpoint_change#,
       to_char(checkpoint_time, 'DD-MON-YYYY HH24:MI:SS') as checkpoint_time, count(*)
from v$datafile_header
group by status, resetlogs_change#, resetlogs_time, checkpoint_change#, checkpoint_time
order by status, checkpoint_change#, checkpoint_time;

select substr(name,1,60), recover, fuzzy, checkpoint_change#,
       resetlogs_change#, resetlogs_time
from v$datafile_header;

select name, open_mode, checkpoint_change#, ARCHIVE_CHANGE# from v$database;

select GROUP#, THREAD#, SEQUENCE#, MEMBERS, ARCHIVED, STATUS, FIRST_CHANGE# from v$log;

select GROUP#, substr(member,1,60) from v$logfile;

select * from v$log_history;

select * from v$recover_file;

select * from v$recovery_log;

select HXFIL File_num, substr(HXFNM,1,70) File_name, FHTYP Type, HXERR Validity,
       FHSCN SCN, FHTNM Tablespace_Name, FHSTA Status, FHRBA_SEQ Sequence
from X$KCVFH;

select hxfil FileNo, FHSTA Status from x$kcvfhall;

spool off

-------
NOTE 8:
-------

Subject: How to recover and open the database if the archivelog required for recovery is either missing, lost or corrupted?
Doc ID: Note:465478.1
Type: HOWTO
Last Revision Date: 28-MAY-2008
Status: PUBLISHED

In this Document
  Goal
  Solution
  References

--------------------------------------------------------------------------------

Applies to:
Oracle Server - Enterprise Edition - Version: 8.1.7.4 to 11.1
Information in this document applies to any platform.

Goal

How to recover and open the database if the archivelog required for recovery is either missing, lost or corrupted?

Solution

The assumption here is that we have exhausted all possible locations to find another good and valid copy or backup of the archivelog that we are looking for, which could be in one of the following:

directories defined in the LOG_ARCHIVE_DEST_n
another directory in the same server or another server
standby database
RMAN backup
OS backup

If the archivelog is not found in any of the above mentioned locations, then the approach and strategy on how to recover and open the database depends on the SCN (System Change Number) of the datafiles, as well as, whether the log sequence#

required for the recovery is still available in the online redologs.

For the SCN of the datafiles, it is important to know the mode of the database when the datafiles are backed up. That is, whether the database is open, mounted or shutdown (normally) when the backup is taken.

If the datafiles are restored from an online or hot backup, which means that the database is open when the backup is taken, then we must apply at least the archivelog(s) or redolog(s) whose log sequence# are generated from the beginning and until the completion of the said backup that was used to restore the datafiles.

However, if the datafiles are restored from an offline or cold backup, and the database is cleanly shutdown before the backup is taken, that means that the database is either not open, is in nomount mode or mounted when the backup is taken, then the datafiles are already synchronized in terms of their SCN. In this situation, we can immediately open the database without even applying archivelogs, because the datafiles are already in a consistent state, except if there is a requirement to roll the database forward to a point-in-time after the said backup is taken.

The critical key thing here is to ensure that all of the online datafiles are synchronized in terms of their SCN before we can normally open the database. So, run the following SQL statement, as shown below, to determine whether the datafiles are synchronized or not. Take note that we query the V$DATAFILE_HEADER, because we want to know the SCN recorded in the header of the physical datafile, and not the V$DATAFILE, which derives the information from the controlfile. 
select status, checkpoint_change#,
       to_char(checkpoint_time, 'DD-MON-YYYY HH24:MI:SS') as checkpoint_time,
       count(*)
from v$datafile_header
group by status, checkpoint_change#, checkpoint_time
order by status, checkpoint_change#, checkpoint_time;

The results of the above query must return one and only one row for the online datafiles, which means that they are already synchronized in terms of their SCN. Otherwise, if the results return more than one row for the online datafiles, then the datafiles are still not synchronized yet. In this case, we need to apply archivelog(s) or redolog(s) to synchronize all of the online datafiles. By the way, take note of the CHECKPOINT_TIME in the V$DATAFILE_HEADER, which indicates the date and time how far the datafiles have been recovered.

The results of the query above may return some offline datafiles. So, ensure that all of the required datafiles are online, because we may not be able to recover the offline datafile later once we open the database in resetlogs. Even though we can recover the database beyond resetlogs for the Oracle database starting from 10g and later versions due to the introduction of the format "%R" in the LOG_ARCHIVE_FORMAT, it is recommended that you online the required datafiles now rather than after the database is open in resetlogs, to avoid any possible problems. However, in some cases, we intentionally offline the datafile(s), because we are doing a partial database restore, or perhaps we don't need the contents of the said datafile. You may run the following query to determine the offline datafiles:

select file#, name from v$datafile
where file# in (select file# from v$datafile_header where status='OFFLINE');
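To see how far each datafile header lags behind what the controlfile expects, the two views can also be joined directly. This is a sketch building on the queries above:

   select d.file#, substr(d.name,1,50) name,
          d.checkpoint_change#  ctl_scn,
          h.checkpoint_change#  hdr_scn
   from   v$datafile d, v$datafile_header h
   where  d.file# = h.file#
   and    d.checkpoint_change# <> h.checkpoint_change#;

Any row returned points at a file whose header SCN differs from the controlfile's view of it, i.e. a file that still needs recovery (or, if the header SCN is ahead, a controlfile that is older than the datafiles).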

You may issue the following SQL statement to change the status of the required datafile(s) from "OFFLINE" to "ONLINE":

alter database datafile <file#> online;

If we are lucky that the required log sequence# is still available in the online redologs and the corresponding redolog member is still physically existing on disk, then we may apply them instead of the archivelog. To confirm, issue the following query, as shown below, to determine the redolog member(s) that you can apply to recover the database:

set echo on feedback on pagesize 100 numwidth 16
alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';
select LF.member, L.group#, L.thread#, L.sequence#, L.status,
       L.first_change#, L.first_time, DF.min_checkpoint_change#
from v$log L, v$logfile LF,
     (select min(checkpoint_change#) min_checkpoint_change#
      from v$datafile_header
      where status='ONLINE') DF
where LF.group# = L.group#
and L.first_change# >= DF.min_checkpoint_change#;

If the above query returns no rows, because the V$DATABASE.CONTROLFILE_TYPE has a value of "BACKUP", then try to apply each of the redolog members one at a time during the recovery. You may run the following query to determine the redolog members:

select * from v$logfile;

If you have tried to apply all of the online redolog members instead of an archivelog during the recovery, but you always received the ORA-00310 error, as shown in the example below, then the log sequence# required for recovery is no longer available in the online redolog. 
ORA-00279: change 189189555 generated at 11/03/2007 09:27:46 needed for thread 1
ORA-00289: suggestion : +BACKUP
ORA-00280: change 189189555 for thread 1 is in sequence #428

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
+BACKUP/prmy/onlinelog/group_2.258.603422107
ORA-00310: archived log contains sequence 503; sequence 428 required
ORA-00334: archived log: '+BACKUP/prmy/onlinelog/group_2.258.603422107'

After trying all of the possible solutions mentioned above, but you still cannot open the database, because the archivelog required for recovery is either missing, lost or corrupted, or the corresponding log sequence# is no longer available in the online redolog, since they are already overwritten during the redolog switches, then we cannot normally open the database, since the datafiles are in an inconsistent state. So, the following are the 3 options available to allow you to open the database:

Option#1: Force open the database by setting the _ALLOW_RESETLOGS_CORRUPTION=TRUE in the init.ora. But there is no 100% guarantee that this will open the database. However, once the database is opened, then we must immediately rebuild the database. Database rebuild means doing the following, namely: (1) perform a full database export, (2) create a brand new and separate database, and finally (3) import the recent export dump. This option can be tedious and time consuming, but once we successfully open the new database, then we expect minimal or perhaps no data loss at all. Before you try this option, ensure that you have a good and valid backup of the current database.

Option#2: If you have a good and valid backup of the database, then restore the database from the said backup, and recover the database by applying up to the last available archivelog. In this option, we will only recover the database up to the last archivelog that is applied, and any data after that is lost. If no archivelogs are applied at all, then we can only recover the database from the backup that is restored. However, if we restored from an online or hot backup, then we may not be able to open the database, because we still need to apply the archivelogs generated during the said backup in order to synchronize the SCN of the datafiles before we can normally open the database.

Option#3: Manually extract the data using Oracle's Data Unloader (DUL), which is performed by Oracle Field Support at the customer site on the next business day and for an extra charge. If the customer wants to pursue this approach, we need the complete name, phone# and email address of the person who has the authority to sign the work order on behalf of the customer.

References

OTHER ERRORS:
=============

1. Control file missing

ORA-00202: controlfile: 'g:\oradata\airm\control03.ctl'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.

Sat May 24 20:02:40 2003
ORA-205 signalled during: alter database airm mount...

Solution: just copy one of the present control files to the missing one.

ORA-00214
---------
1. One control file is of a different version.

Solution: just copy one of the present control files over the differing one.

19.13 Recovery from failed distributed transactions
---------------------------------------------------
ORA-2019
ORA-2058
ORA-2068
ORA-2050: FAILED DISTRIBUTED TRANSACTIONS

The above errors indicate that there is a failed distributed transaction that needs to be manually cleaned up. See <Note 1012842.102> for step by step instructions on how to proceed.

In some cases, the instance may crash before the solutions are implemented. If this is the case, issue an

  alter system disable distributed recovery

immediately after the database starts, to allow the database to run without having RECO terminate the instance.

19.14 Get a tablespace out of backup mode:
------------------------------------------
SVRMGR> connect internal
SVRMGR> startup mount
SVRMGR> SELECT df.name, bk.time FROM v$datafile df, v$backup bk
     2> WHERE df.file# = bk.file# and bk.status = 'ACTIVE';

Shows the datafiles currently in a hot backup state.

SVRMGR> alter database datafile
     2> '/u03/oradata/PROD/devlPROD_1.dbf' end backup;

Do an "end backup" on those listed hot backup datafiles.

SVRMGR> alter database open;

19.15 Disk full, corrupt archive log
------------------------------------
A mandatory archive destination in log_archive_dest is unavailable and it's impossible to make a full recovery.

Workaround:
Configure log_archive_min_succeed_dest = 2
Do not use log_archive_duplex_dest

19.16 ORA-1578 ORACLE data block corrupted (file # %s, block # %s)
------------------------------------------------------------------
SELECT segment_name, segment_type, owner, tablespace_name
FROM   sys.dba_extents
WHERE  file_id = &bad_file_id
AND    &bad_block_id BETWEEN block_id AND block_id + blocks - 1;
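For 19.14 above, rather than typing an END BACKUP statement per file, you can let SQL generate them. A sketch; the spool file name is arbitrary, and you may need to edit the spooled file to strip headers and prompts before running it:

SVRMGR> spool end_backup.sql
SVRMGR> SELECT 'alter database datafile '''||df.name||''' end backup;'
     2> FROM v$datafile df, v$backup bk
     3> WHERE df.file# = bk.file# AND bk.status = 'ACTIVE';
SVRMGR> spool off
SVRMGR> @end_backup.sql

This scales better than the manual approach when many datafiles were left in hot backup mode.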

19.17 Database does not start (1) SGADEF.DBF LK.DBF
---------------------------------------------------
Note:1034037.6
Subject: ORA-01102: WHEN STARTING THE DATABASE
Type: PROBLEM
Status: PUBLISHED
Content Type: TEXT/PLAIN
Creation Date: 25-JUL-1997
Last Revision Date: 10-FEB-2000

Problem Description:
====================
You are trying to startup the database and you receive the following error:

ORA-01102: cannot mount database in EXCLUSIVE mode
Cause:  Some other instance has the database mounted exclusive or shared.
Action: Shutdown other instance or mount in a compatible mode.

or

scumnt: failed to lock /opt/oracle/product/8.0.6/dbs/lkSALES
Fri Sep 13 14:29:19 2002
ORA-09968: scumnt: unable to lock file
SVR4 Error: 11: Resource temporarily unavailable
Fri Sep 13 14:29:19 2002
ORA-1102 signalled during: alter database mount...
Fri Sep 13 14:35:20 2002
Shutting down instance (abort)

Problem Explanation:
====================
A database is started in EXCLUSIVE mode by default. Therefore, the ORA-01102 error is misleading and may have occurred due to one of the following reasons:

- there is still an "sgadef<sid>.dbf" file in the "ORACLE_HOME/dbs" directory
- the processes for Oracle (pmon, smon, lgwr and dbwr) still exist
- shared memory segments and semaphores still exist even though the database has been shutdown
- there is an "ORACLE_HOME/dbs/lk<sid>" file

Search Words:
=============
ORA-1102, crash, immediate, abort, fail, fails, migration

Solution Description:
=====================
Verify that the database was shutdown cleanly by doing the following:

1. Verify that there is not a "sgadef<sid>.dbf" file in the directory "ORACLE_HOME/dbs".

   % ls $ORACLE_HOME/dbs/sgadef<sid>.dbf

   If this file does exist, remove it.

   % rm $ORACLE_HOME/dbs/sgadef<sid>.dbf

2. Verify that there are no background processes owned by "oracle"

   % ps -ef | grep ora_ | grep $ORACLE_SID

   If background processes exist, remove them by using the Unix command "kill". For example:

   % kill -9 <Process_ID_Number>

3. Verify that no shared memory segments and semaphores that are owned by "oracle" still exist

   % ipcs -b

   If there are shared memory segments and semaphores owned by "oracle", remove the shared memory segments

   % ipcrm -m <Shared_Memory_ID_Number>

   and remove the semaphores

   % ipcrm -s <Semaphore_ID_Number>

   NOTE: The example shown above assumes that you only have one database on this machine. If you have more than one database, you will need to shutdown all other databases before proceeding with Step 4.

4. Verify that the "$ORACLE_HOME/dbs/lk<sid>" file does not exist

5. Startup the instance

Solution Explanation:
=====================
The "lk<sid>" and "sgadef<sid>.dbf" files are used for locking shared memory. It seems that even though no memory is allocated, Oracle thinks memory is still locked. By removing the "sgadef" and "lk" files you remove any knowledge Oracle has of shared memory that is in use. Now the database can start.

19.18 Rollback segment missing, active transactions
---------------------------------------------------
Note:1013221.6
Subject: RECOVERING FROM A LOST DATAFILE IN A ROLLBACK TABLESPACE
Type: PROBLEM
Status: PUBLISHED
Content Type: TEXT/PLAIN
Creation Date: 16-OCT-1995
Last Revision Date: 18-JUN-2002

Solution 1:
-----------
Error scenario:
1. set transaction use rollback segment rb1;
2. INSERTs into ...
3. SHUTDOWN ABORT; (simulate media errors)
4. Delete file rb1.ora (tablespace RB1 with segment rb1);
5. Restore a backup of the file

Recover:
1.  comment out the INIT.ORA ROLLBACK_SEGMENT parameter, so ORACLE does not
    try to find the incorrect segment rb1
2.  STARTUP MOUNT
3.  ALTER DATABASE DATAFILE 'rb1.ora' OFFLINE;
4.  ALTER DATABASE OPEN;                              # now we are in business
5.  CREATE ROLLBACK SEGMENT rbtemp TABLESPACE SYSTEM; # We need a temporary RBS
                                                      # for the further steps
6.  ALTER ROLLBACK SEGMENT rbtemp ONLINE;
7.  RECOVER TABLESPACE RB1;
8.  ALTER TABLESPACE RB1 ONLINE;
9.  ALTER ROLLBACK SEGMENT rb1 ONLINE;
10. ALTER ROLLBACK SEGMENT rbtemp OFFLINE;
11. DROP ROLLBACK SEGMENT rbtemp;

Result: uncommitted transactions are successfully rolled back; no suspect
instance.

Solution 2:
---------------
INTRODUCTION
------------
Rollback segments can be monitored through the data dictionary view
dba_rollback_segs. There is a status column that describes what state the
rollback segment is currently in. Normal states are either online or offline.
Occasionally, the status of "needs recovery" will appear. When a rollback
segment is in this state, bringing the rollback segment offline or online,
either through the alter rollback segment command or by removing it FROM the
rollback_segments parameter in the init.ora, usually has no effect.

UNDERSTANDING
-------------
A rollback segment falls into this status of needs recovery whenever Oracle
tries to roll back an uncommitted transaction in its transaction table and
fails. Here are some examples of why a transaction may need to roll back:
1- A user may do a dml transaction and decide to issue rollback
2- A shutdown abort occurs and the database needs to do an instance recovery,
   in which case Oracle has to roll back all uncommitted transactions.

When a rollback of a transaction occurs, undo must be applied to the data
block the modified row/s are in. If for whatever reason that data block is
unavailable, the undo cannot be applied. The result is a 'corrupted' rollback
segment with the status of needs recovery.

What could be some reasons a datablock is inaccessible for undo?
1- If a tablespace or a datafile is offline or missing.
2- If the object the datablock belongs to is corrupted.
3- If the datablock that is corrupt is actually in the rollback segment
   itself rather than the object.

HOW TO RESOLVE IT
-----------------
1-MAKE sure that all tablespaces are online and all datafiles are online.
This can be checked through v$datafile, under the status column. For
tablespaces associated with the datafiles, look in dba_tablespaces.
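The first check can be scripted directly against the dictionary. A minimal sketch using the standard V$DATAFILE and DBA_TABLESPACES columns; an empty result from both queries means everything is online:

```sql
-- Any datafile not simply ONLINE (the SYSTEM file reports status SYSTEM)
SELECT file#, status, name
FROM   v$datafile
WHERE  status NOT IN ('ONLINE', 'SYSTEM');

-- Any tablespace not ONLINE
SELECT tablespace_name, status
FROM   dba_tablespaces
WHERE  status <> 'ONLINE';
```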

If that still does not resolve the problem, then:

2-PUT the following in the init.ora:

   event = "10015 trace name context forever, level 10"

Setting this event will generate a trace file that will reveal the necessary
information about the transaction Oracle is trying to roll back and, most
importantly, what object Oracle is trying to apply the undo to.

3-SHUTDOWN the database (if normal does not work, immediate; if that does
not work, abort) and bring it back up.
Note: An ora-1545 may be encountered, or other errors. If the database
cannot startup, contact customer support at this point.

4-CHECK in the directory that is specified by the user_dump_dest parameter
(in the init.ora or show parameter command) for a trace file that was
generated at startup time.

5-IN the trace file, there should be a message similar to:

   error recovery tx(#,#) object #.

TX(#,#) refers to transaction information. The object # is the same as the
object_id in sys.dba_objects.

6-USE the following query to find out what object Oracle is trying to
perform recovery on:

   SELECT owner, object_name, object_type, status
   FROM dba_objects
   WHERE object_id = <object #>;

7-THIS object must be dropped so the undo can be released. An export or
relying on a backup may be necessary to restore the object after the
corrupted rollback segment goes away.

8-AFTER dropping the object, put the rollback segment back in the init.ora
parameter rollback_segments, remove the event, and shutdown and startup
the database.

In most cases, the above steps will resolve the problematic rollback segment.
If this still does not resolve the problem, it may be likely that the
corruption is in the actual rollback segment. At this point, if the problem
has not been resolved, please contact customer support.
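Rollback segments currently stuck in this state can be listed via the same dictionary view mentioned in the introduction:

```sql
-- Segments whose undo could not be applied show the 'NEEDS RECOVERY' status
SELECT segment_name, tablespace_name, status
FROM   dba_rollback_segs
WHERE  status = 'NEEDS RECOVERY';
```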
Solution 3:
---------------
Recovery FROM the loss of a Rollback segment datafile containing active transactions

How do I recover the datafile containing rollback segments having active
transactions, if the backup is done with RMAN without using a catalog?
I have tried the case study FROM the Oracle recovery handbook, but when I
tried to open the database after offlining the Rollback segment file I got
the following errors

ORA-00604: error occurred at recursive SQL level 2
ORA-00376: file 2 cannot be read at this time
ORA-01110: data file 2: '/orabackup/CCD1prod/oradata/rbs01CCD1prod.dbf'

The status of the datafile was "Recover". Anyhow, shutting down and startup
mounting the database allows for the database or the datafile recovery, but
this was done through SVRMGRL.
Here is what's happening: simulate the loss of the datafile by removing it
FROM the os and shutdown abort the database. Mount the database so RMAN can
restore the file; at this point offlining the file succeeds, but you cannot
open the database. So the question is: can we offline a rollback segment
datafile containing active transactions and open the database? How to perform
recovery in such a case using an RMAN backup without using the catalog?
I appreciate any insight and tips into this issue.
Madhukar

FROM: Oracle, Tom Villane 01-May-02 21:04
Subject: Re : Recovery FROM the loss of a Rollback segment datafile containing active transactions

Hi,
The only supported way to recover FROM the loss of a rollback segment datafile
containing a rollback segment with a potentially active data dictionary
transaction is to restore the datafile FROM backup and roll forward to a point
in time prior to the loss of the datafile (assuming archivelog mode).
Tom Villane
Oracle Support Metalink Analyst

FROM: Madhukar Yedulapuram 02-May-02 06:46
Subject: Re : Recovery FROM the loss of a Rollback segment datafile containing active transactions

Hi Tom,
What does rolling forward up to a time prior to the loss of the datafile have
to do with the recovery? Are you suggesting this so that the active
transaction is not lost; is that possible? Because during the recovery the
rollforward is followed by rollback, and all the active transactions FROM the
rollback segment's transaction table will be rolled back, isn't it?
My question is: if I have an active transaction in a rollback segment, and the
file containing that rollback segment is lost, and the database crashed or did
a shutdown abort, can we open the database after offlining the datafile and
commenting out the rollback_segments parameter in the init.ora file?

I tried to do it and got the errors which I mentioned earlier. So in this case
I have to do offline recovery only, or what?
Thanks, madhukar

FROM: Oracle, Tom Villane 02-May-02 16:24
Subject: Re : Re : Recovery FROM the loss of a Rollback segment datafile containing active transactions

Hi,
You won't be able to open the database if you lose a rollback segment datafile
that contains an active transaction. You will have to:

Restore a good backup of the file
RECOVER DATAFILE '<name>'
ALTER DATABASE DATAFILE '<name>' ONLINE;

The only way you would be able to open the database is if the status of the
rollback were OFFLINE; any other status requires that you recover as noted
before. As recovering FROM rollback corruption needs to be done properly, you
may want to log an iTAR if you have additional questions.
Regards
Tom Villane
Oracle Support Metalink Analyst

FROM: Madhukar Yedulapuram 03-May-02 07:22
Subject: Re : Recovery FROM the loss of a Rollback segment datafile containing active transactions

Hi Tom,
Thank you for the reply. You said that the only way the database can be opened
is if the status of the rollback segment was offline, but what happens to an
active transaction which was using this rollback segment? Once the database is
opened and the media recovery performed on the datafile, the database will
show values which were part of an active transaction and not committed; isn't
this logical corruption?
madhukar

FROM: Madhukar Yedulapuram 05-May-02 08:14
Subject: Re : Recovery FROM the loss of a Rollback segment datafile containing active transactions

Tom,
Can I get some response to my questions?
Thank You, Madhukar

FROM: Oracle, Tom Villane 07-May-02 13:53

Subject: Re : Re : Recovery FROM the loss of a Rollback segment datafile containing active transactions

Hi,
Sorry for the confusion, I should not have said "rolling forward to a point in
time..." in my previous reply.
No, there won't be corruption or inconsistency. The redo logs will contain the
information for both committed and uncommitted transactions. Since this
includes changes made to rollback segment blocks, it follows that rollback
data is also (indirectly) recorded in the redo log.
To recover FROM a loss of datafiles in the SYSTEM tablespace, or datafiles
with active rollback segments, you must perform closed database recovery:
-Shutdown the database
-Restore the file FROM backup
-Recover the datafile
-Open the database.

References: Oracle8i Backup and Recovery Guide, chapter 6 under "Losing
Datafiles in ARCHIVELOG Mode".
Regards
Tom Villane
Oracle Support Metalink Analyst

FROM: Madhukar Yedulapuram 07-May-02 22:23
Subject: Re : Recovery FROM the loss of a Rollback segment datafile containing active transactions

Hi Tom,
After offlining the rollback segment containing the active transaction, you
can open the database and do the recovery, and after that any active
transactions should be rolled back and the data should not show up. But I
performed the following test, and Oracle is showing logical corruption by
showing data which was never actually committed.

SVRMGR> create tablespace test_rbs datafile
'/orabackup/CCD1prod/oradata/test_rbs01.dbf' size 10M
2> default storage (initial 1M next 1M minextents 1 maxextents 1024);
Statement processed.
SVRMGR> create rollback segment test_rbs tablespace test_rbs;
Statement processed.
SVRMGR> create table case5 (c1 number) tablespace tools;
Statement processed.
SVRMGR> set transaction use rollback segment test_rbs;
ORA-01598: rollback segment 'TEST_RBS' is not online
SVRMGR> alter rollback segment test_rbs online;
Statement processed.
SVRMGR> set transaction use rollback segment test_rbs;
Statement processed.
SVRMGR> insert into case5 values (5);
1 row processed.
SVRMGR> alter rollback segment test_rbs offline;

Statement processed.
SVRMGR> shutdown abort
ORACLE instance shut down.
SVRMGR> startup mount
ORACLE instance started.
Total System Global Area  145981600 bytes
Fixed Size                    73888 bytes
Variable Size              98705408 bytes
Database Buffers           26214400 bytes
Redo Buffers               20987904 bytes
Database mounted.
SVRMGR> alter database datafile '/orabackup/CCD1prod/oradata/test_rbs01.dbf' offline;
Statement processed.
SVRMGR> alter database open;
Statement processed.
SVRMGR> recover tablespace test_rbs;
Media recovery complete.
SVRMGR> alter tablespace test_rbs online;
Statement processed.
SVRMGR> SELECT * FROM case5;
C1
----------
5
1 row SELECTed.
SVRMGR> alter rollback segment test_rbs online;
Statement processed.
SVRMGR> SELECT * FROM case5;
C1
----------
5
1 row SELECTed.
SVRMGR> drop rollback segment test_rbs;
drop rollback segment test_rbs
*
ORA-01545: rollback segment 'TEST_RBS' specified not available
SVRMGR> SELECT segment_name,status FROM dba_rollback_segs;
SEGMENT_NAME                   STATUS
------------------------------ ----------------
SYSTEM                         ONLINE
R0                             OFFLINE
R01                            OFFLINE
R02                            OFFLINE
R03                            OFFLINE
R04                            OFFLINE
R05                            OFFLINE
R06                            OFFLINE
R07                            OFFLINE
R08                            OFFLINE
R09                            OFFLINE
R10                            OFFLINE
R11                            OFFLINE
R12                            OFFLINE
BIG_RB                         OFFLINE
TEST_RBS                       ONLINE
16 rows SELECTed.
SVRMGR> drop rollback segment test_rbs;
drop rollback segment test_rbs
*

ORA-01545: rollback segment 'TEST_RBS' specified not available

Here I have to bring the rollback segment offline to drop it.
Can this be explained, or is this a bug? Because this caused logical
corruption.

FROM: Oracle, Tom Villane 10-May-02 13:19
Subject: Re : Re : Recovery FROM the loss of a Rollback segment datafile containing active transactions

Hi,
What you are showing is expected and normal, and not corruption. At the time
that you issue the "alter rollback segment test_rbs online;" Oracle does an
implicit commit, because any "ALTER" statement is considered DDL, and Oracle
issues an implicit COMMIT before and after any data definition language (DDL)
statement.
Regards
Tom Villane
Oracle Support Metalink Analyst
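The implicit-commit behaviour Tom describes can be seen in a session of its own. A small sketch reusing the names from this thread (case5, test_rbs); any DDL statement, not just this ALTER, would have the same effect:

```sql
INSERT INTO case5 VALUES (6);              -- pending, uncommitted change
ALTER ROLLBACK SEGMENT test_rbs OFFLINE;   -- DDL: the implicit COMMIT fires here
ROLLBACK;                                  -- too late, there is nothing to undo
SELECT * FROM case5;                       -- the inserted row is already permanent
```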

--------------------------------------------------------------------------------
FROM: Madhukar Yedulapuram 14-May-02 20:12
Subject: Re : Recovery FROM the loss of a Rollback segment datafile containing active transactions

Hi Tom,
So what you are saying is that the moment I say "alter rollback segment RBS#
online", Oracle will issue an implicit commit. But look at my test: just after
performing the tablespace recovery (I had only one datafile in the RBS
tablespace, which was offlined before opening the database and doing the
recovery), I brought the tablespace online and did a SELECT FROM the table
which was having the active transaction in one of the rollback segments. So
this statement has issued an implicit commit, and I could see the data which
was never actually committed. Doesn't this contradict Oracle's stance that
only committed data will be shown? I think this statement is true for Instance
and Crash recovery, not for media recovery, as the case in point proves. But
still, if you say Oracle issues an implicit commit, then the stance of Oracle
is consistent.
madhukar

FROM: Oracle, Tom Villane 15-May-02 18:30
Subject: Re : Re : Recovery FROM the loss of a Rollback segment datafile containing active transactions

Hi,
A slight correction to what I posted: I should have said the implicit commit
happened when the rollback segment was altered offline.
Whether it's an implicit commit (before and after a DDL statement like CREATE,
DROP, RENAME, ALTER), or the user did the commit, or the user exits the
application (which forces a commit): all of the above are considered commits
and the data will be saved.
Regards
Tom Villane
Oracle Support Metalink Analyst

FROM: Madhukar Yedulapuram 16-May-02 23:17
Subject: Re : Recovery FROM the loss of a Rollback segment datafile containing active transactions

Hi Tom,
Thank you very much. So the moment I brought the RBS offline, the transaction
was committed and the data saved in the table; is that what you are saying?
So the data was committed even before performing the recovery, so the recovery
is essentially not applying anything in this case.
madhukar

FROM: Oracle, Tom Villane 17-May-02 12:18
Subject: Re : Re : Recovery FROM the loss of a Rollback segment datafile containing active transactions

Hi,
Yes, that is what happened.
Regards
Tom Villane
Oracle Support Metalink Analyst

19.19 After backup you increase a datafile.
------------------------------------------
problem 2: "the backed up datafile size is smaller, and Oracle won't accept it
for recovery." isn't a problem, because we most certainly will accept that
file. As a test you can do this (I just did):

o create a small 1m tablespace with a datafile.

o alter it and begin backup.
o copy the datafile.
o alter it and end backup.
o alter the datafile and "autoextend on next 1m" it.
o create a table with initial 2m initial extent. This will grow the datafile.
o offline the tablespace.
o copy the 1m original file back.
o try to online it -- it'll tell you the file that needs recovery
  (it has already accepted the smaller file at this point).
o alter database recover datafile 'that file';
o alter the tablespace online again -- all is well.

As for the questions: 1) There is such a command -- "alter database create datafile". Here is an example I just ran through: tkyte@TKYTE816> alter tablespace t begin backup; Tablespace altered. I copied the single datafile that is in T at this point tkyte@TKYTE816> alter tablespace t end backup; Tablespace altered. tkyte@TKYTE816> alter tablespace t add datafile 'c:\temp\t2.dbf' size 1m; Tablespace altered. So, I added a datafile AFTER the backup... tkyte@TKYTE816> alter tablespace t offline; Tablespace altered. At this point, I went out and erased the two datafiles associated with T. I moved the copy of the one datafile in place... tkyte@TKYTE816> alter tablespace t online; alter tablespace t online * ERROR at line 1: ORA-01113: file 9 needs media recovery ORA-01110: data file 9: 'C:\TEMP\T.DBF' So, it sees the copy is out of sync... tkyte@TKYTE816> recover tablespace t; ORA-00283: recovery session canceled due to errors ORA-01157: cannot identify/lock data file 10 - see DBWR trace file ORA-01110: data file 10: 'C:\TEMP\T2.DBF' and now it tells of the missing datafile -- all we need do at this point is: tkyte@TKYTE816> alter database create datafile 'c:\temp\t2.dbf'; Database altered. tkyte@TKYTE816> recover tablespace t; Media recovery complete.

tkyte@TKYTE816> alter tablespace t online; Tablespace altered. and we are back in business....

19.22 Setting Trace Events
-------------------------
database level via init.ora
EVENT="604 TRACE NAME ERRORSTACK FOREVER"
EVENT="10210 TRACE NAME CONTEXT FOREVER, LEVEL 10"
session level
ALTER SESSION SET EVENTS 'IMMEDIATE TRACE NAME BLOCKDUMP LEVEL 67109037';
ALTER SESSION SET EVENTS 'IMMEDIATE TRACE NAME CONTROLF LEVEL 10';
system trace dump file
ALTER SESSION SET EVENTS 'IMMEDIATE TRACE NAME SYSTEMSTATE LEVEL 10';

19.23 DROP TEMP DATAFILE
-----------------------
SVRMGRL>startup mount
SVRMGRL>alter database open;
ora-01157 cannot identify datafile 4 - file not found
ora-01110 data file 4 '/oradata/temp/temp.dbf'
SVRMGRL>alter database datafile '/oradata/temp/temp.dbf' offline drop;
SVRMGRL>alter database open;
SVRMGRL>drop tablespace temp including contents;
SVRMGRL>create tablespace temp datafile '....

19.24 SYSTEM DATAFILE RECOVERY
-----------------------------
- a normal datafile can be taken offline and the database started up.
- the system file can be taken offline, but the database cannot start.
- restore a backup copy of the system file.
- recover the file.

19.25 Strange processes=.. and database does not start
-----------------------------------------------------
Does the PROCESSES initialization parameter of init.ora depend on some other
parameter? We were getting the error "maximum number of processes (50)
exceeded". The value was initially set to 50, so the value was changed to 200
and the database was restarted; it then gave an error of "end-of-file on
communication channel". The value was reduced to 150 and then 100 and the same
error was encountered. When it was set back to 50, the database started.
Can anyone clarify?

Check out your semaphore settings in /etc/system.
Try increasing seminfo_semmns.

19.26 ORA-00600
--------------
I work with ORACLE DB ver. 8.0.5 and received an error in alert.log:
ksedmp: internal or fatal error
ORA-00600: internal error code, arguments: [12700], [3383], [41957137], [44],
[], [], [], []

oerr ora 600
00600, 00000, "internal error code, arguments: [%s], [%s], [%s], [%s], [%s],
[%s], [%s], [%s]"
Cause: This is the generic internal error number for Oracle program
exceptions. This indicates that a process has encountered an exceptional
condition.
Action: Report as a bug - the first argument is the internal error number.
Number [12700] indicates "invalid NLS parameter value (%s)"
Cause: An invalid or unknown NLS configuration parameter was specified.

19.27 segment has reached its max_extents
----------------------------------------
Version 7.3 and later: You can set the MAXEXTENTS storage parameter value to
UNLIMITED for any object.

Rollback Segment
================
ALTER ROLLBACK SEGMENT rollback_segment STORAGE ( MAXEXTENTS UNLIMITED);

Temporary Segment
=================
ALTER TABLESPACE tablespace DEFAULT STORAGE ( MAXEXTENTS UNLIMITED);

Table Segment
=============
ALTER TABLE MANIIN_ASIAKAS STORAGE ( MAXEXTENTS UNLIMITED);
ALTER TABLE MANIIN_ASIAKAS STORAGE ( NEXT 5M );

Index Segment
=============
ALTER INDEX index STORAGE ( MAXEXTENTS UNLIMITED);

Table Partition Segment
=======================

ALTER TABLE table MODIFY PARTITION partition STORAGE (MAXEXTENTS UNLIMITED);
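Before the error ever occurs, segments approaching their limit can be found in the dictionary. A diagnostic sketch against DBA_SEGMENTS, which exposes both counters (the threshold of 10 is an arbitrary choice):

```sql
-- Segments within 10 extents of their MAXEXTENTS setting
SELECT owner, segment_name, segment_type, extents, max_extents
FROM   dba_segments
WHERE  max_extents - extents < 10
ORDER  BY max_extents - extents;
```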

19.28 max logs -------------Problem Description ------------------In the "alert.log", you find the following warning messages: kccrsz: denied expansion of controlfile section 9 by 65535 record(s) the number of records is already at maximum value (65535) krcpwnc: following controlfile record written over: RECID #520891 Recno 53663 Record timestamp ... kccrsz: denied expansion of controlfile section 9 by 65535 record(s) the number of records is already at maximum value (65535) krcpwnc: following controlfile record written over: RECID #520892 Recno 53664 Record timestamp The database is still running. The CONTROL_FILE_RECORD_KEEP_TIME init parameter is set to 7. If you display the records used in the LOG HISTORY section 9 of the controlfile: SQL> SELECT * FROM v$controlfile_record_section WHERE type='LOG HISTORY' ; TYPE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID ------------- ------------- ------------ ----------- ---------- ---------LOG HISTORY 65535 65535 33864 33863 520892 The number of RECORDS_USED has reached the maximum allowed in RECORDS_TOTAL. Solution Description -------------------Set the CONTROL_FILE_RECORD_KEEP_TIME to 0: * Insert the parameter CONTROL_FILE_RECORD_KEEP_TIME = 0 IN "INIT.ORA" -OR* Set it momentarily if you cannot shut the database down now: SQL> alter system set control_file_record_keep_time=0; Explanation ----------The default value for * the CONTROL_FILE_RECORD_KEEP_TIME is 7 days. SELECT value FROM v$parameter WHERE name='control_file_record_keep_time'; VALUE ----7 * the MAXLOGHISTORY database parameter has already reached the maximum of 65535 and it cannot be increased anymore. SQL> alter database backup controlfile to trace; => in the trace file, MAXLOGHISTORY is 65535 The MAXLOGHISTORY increases dynamically when the CONTROL_FILE_RECORD_KEEP_TIME is set to a value different FROM 0, but does not exceed 65535. Once reached, the message appears in the alert.log warning you that a controlfile record is written over.
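The same check can be widened to all controlfile record sections, not just LOG HISTORY; any section whose RECORDS_USED approaches RECORDS_TOTAL is a candidate for the same treatment:

```sql
-- Usage of every controlfile record section
SELECT type, records_total, records_used
FROM   v$controlfile_record_section
ORDER  BY type;
```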

19.29 ORA-470 maxloghistory
--------------------------
Problem Description:
====================
Instance cannot be started because of ORA-470. LGWR has also died, creating
a trace file with an ORA-204 error. It is possible that the maxloghistory
limit of 65535 as specified in the controlfile has been reached.

Diagnostic Required:
====================
The following information should be requested for diagnostics:
1. LGWR trace file produced
2. Dump of the control file - using the command:
   ALTER SESSION SET EVENTS 'immediate trace name controlf level 10'
3. Controlfile contents, using the command:
   ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

Diagnostic Analysis:
====================
The following observations will indicate that we have the maxloghistory
limit of 65535:
1. The Lgwr trace file should show the following stack trace:
   - in 8.0.3 and 8.0.4, OSD skgfdisp returns ORA-27069, stack:
     kcrfds -> kcrrlh -> krcpwnc -> kccroc -> kccfrd -> kccrbl -> kccrbp
   - in 8.0.5, kccrbl causes SEGV before the call to skgfdisp with wrong
     block number, stack:
     kcrfds -> kcrrlh -> krcpwnc -> kccwnc -> kccfrd -> kccrbl
2. FROM the 'dump of the controlfile':
   ... numerous lines omitted ...
   LOG FILE HISTORY RECORDS:
   (blkno = 0x13, size = 36, max = 65535, in-use = 65535, last-recid= 188706)
   ...
   the max value of 65535 reconfirms that the limit has been reached.
3. Further confirmation can be seen FROM the controlfile trace:
   CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS NOARCHIVELOG
   MAXLOGFILES 16
   MAXLOGMEMBERS 2
   MAXDATAFILES 50
   MAXINSTANCES 1
   MAXLOGHISTORY 65535
   ...

Diagnostic Solution:
====================
1. Set control_file_record_keep_time = 0 in the init.ora.
   This parameter specifies the minimum age of a log history record in days
   before it can be reused. With the parameter set to 0, reusable sections
   never expand and records are reused immediately as required.
   [NOTE:1063567.6] gives a good description on the use of this parameter.

2. Mount the database and retrieve details of online redo log files for use
   in step 6. Because the recovery will need to roll forward through current
   online redo logs, a list of online log details is required to indicate
   which redo log is current. This can be obtained using the following
   command:

   startup mount

   SELECT * FROM v$logfile;

3. Open the database. This is a very important step: although the startup
   will fail, it must be done before recreating the controlfile in step 5,
   hence enabling crash recovery to repair any incomplete log switch.
   Without this step it may be impossible to recover the database.

   alter database open

4. Shutdown the database, if it did not already crash in step 3.

5. Using the backup controlfile trace, recreate the controlfile with a
   smaller maxloghistory value. The MAXLOGHISTORY section of the current
   control file cannot be extended beyond 65536 entries. The value should
   reflect the amount of log history that you wish to maintain. An ORA-219
   may be returned when the size of the controlfile, based on the values of
   the MAX- parameters, is higher than the maximum allowable size.
   [NOTE:1012929.6] gives a good step-by-step guide to recreating the
   control file.

6. Recover the database. The database will automatically be mounted due to
   the recreation of the controlfile in step 5:

   Recover database using backup controlfile;

   At the recovery prompt, apply the online logs in sequence by typing the
   unquoted full path and file name of the online redo log to apply, as
   noted in step 2. After applying the current redo log, you will receive
   the message 'Media Recovery Complete'.

7.
   Once media recovery is complete, open the database as follows:

   alter database open resetlogs;

Note: you may see a recurring "Control file resized from ..." message in a
trace file, as in this example:

> /dbms/tdbaplay/playroca/admin/dump/udump/playroca_ora_1548438.trc
> Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
> With the Partitioning, OLAP and Data Mining options
> ORACLE_HOME = /dbms/tdbaplay/ora10g/home
> System name: AIX
> Node name: pl003
> Release: 3
> Version: 5
> Machine: 00CB560D4C00
> Instance name: playroca
> Redo thread mounted by this instance: 1
> Oracle process number: 28
> Unix process pid: 1548438, image: oracle@pl003 (TNS V1-V3)
>
> *** 2008-02-21 12:51:57.587
> *** ACTION NAME:(0000010 FINISHED67) 2008-02-21 12:51:57.583
> *** SERVICE NAME:(SYS$USERS) 2008-02-21 12:51:57.583
> *** SESSION ID:(518.643) 2008-02-21 12:51:57.583
> Control file resized from 454 to 470 blocks
> kccrsd_append: rectype = 28, lbn = 227, recs = 1128

19.30 Compatible init.ora change: -------------------------------Database files have the COMPATIBLE version in the file header. If you set the parameter to a higher value, all the headers will be updated at next

database startup. This means that if you shutdown your database, downgrade
the COMPATIBLE parameter, and try to restart your database, you'll receive an
error message something like:

ORA-00201: control file version 7.3.2.0.0 incompatible with ORACLE version 7.0.12.0.0
ORA-00202: control file: '/usr2/oracle/dbs/V73A/ctrl1V73A.ctl'

In the above case, the database was running with COMPATIBLE 7.3.2.0. I
commented out the parameter in init.ora, that is, the kernel uses the default
7.0.12.0 and returns an error before mounting, since the kernel cannot read
the controlfile header.

- You may only change the value of COMPATIBLE after a COLD Backup.
- You may only change the value of COMPATIBLE if the database has been
  shutdown in NORMAL/IMMEDIATE mode.

This parameter allows you to use a new release, while at the same time
guaranteeing backward compatibility with an earlier release (in case it
becomes necessary to revert to the earlier release). This parameter specifies
the release with which Oracle7 Server must maintain compatibility. Some
features of the current release may be restricted. For example, if you are
running release 7.2.2.0 with compatibility set to 7.1.0.0 in order to
guarantee compatibility, you will not be able to use 7.2 features. When using
the standby database feature, this parameter must have the same value on the
primary and standby databases, and the value must be 7.3.0.0.0 or higher.
This parameter allows you to immediately take advantage of the maintenance
improvements of a new release in your production systems without testing the
new functionality in your environment. The default value is the earliest
release with which compatibility can be guaranteed.
Ie: It is not possible to set COMPATIBLE to 7.3 on an Oracle8 database.

-----------------
Hi Tom, I just installed DB 9.0.1. I tried to modify a parameter in the
init.ora file: compatible=9.0.0 (default) to 8.1.0.
After I restarted the 901 DB, I got the error below when I login to sqlplus:
ERROR: ORA-01033: ORACLE initialization or shutdown in progress
Anything wrong with that? If I change back, everything is ok.

The database could not start up. If you start the database manually, from the
command line, you would discover this. For example:

idle> startup pfile=initora920.ora
ORACLE instance started.
Total System Global Area  143725064 bytes
Fixed Size                   451080 bytes
Variable Size             109051904 bytes
Database Buffers           33554432 bytes
Redo Buffers                 667648 bytes
Database mounted.
ORA-00402: database changes by release 9.2.0.0.0 cannot be used by release 8.1.0.0.0

ORA-00405: compatibility type "Locally Managed SYSTEM tablespace"
.....

Generally, compatible cannot be set DOWN, as you are already using new
features, many times, that are not compatible with the older release. You
would have had to create the database with 8.1 file formats (compatible set
to 8.1 from the very beginning).

------------------------------
19.31 ORA-27044: unable to write the header block of file:
---------------------------------------------------------
Problem Description:
====================
When you manually switch redo logs, or when the log buffer causes the redo
threads to switch, you see errors similar to the following in your alert log:
...
Fri Apr 24 13:42:00 1998
Thread 1 advanced to log sequence 170
Current log# 4 seq# 170 mem# 0: /.../rdlACPT04.rdl
Fri Apr 24 13:42:04 1998
Errors in file /.../acpt_arch_15973.trc:
ORA-202: controlfile: '/.../ctlACPT01.dbf'
ORA-27044: unable to write the header block of file
SVR4 Error: 48: Operation not supported
Additional information: 3
Fri Apr 24 13:42:04 1998
kccexpd: controlfile resize from 356 to 368 block(s) denied by OS
...
Note: The particular SVR4 error observed may differ in your case and is
irrelevant here.

ORA-00202: "controlfile: '%s'"
Cause: This message reports the name file involved in other messages.
Action: See associated error messages for a description of the problem.

ORA-27044: "unable to write the header block of file"
Cause: write system call failed, additional information indicates which
function encountered the error
Action: check errno

Solution Description:
=====================
To workaround this problem you can:
1. Use a database blocksize smaller than 16k. This may not be practical in
   all cases, and to change the db_block_size of a database you must rebuild
   the database.
- OR -
2. Set the init.ora parameter CONTROL_FILE_RECORD_KEEP_TIME equal to zero.
   This can be done by adding the following line to your init.ora file:

   CONTROL_FILE_RECORD_KEEP_TIME = 0

The database must be shut down and restarted to have the changed init.ora
file read.

Explanation:
============
This is [BUG:663726], which is fixed in release 8.0.6.
The write of a 16K buffer to a control file seems to fail during an implicit
resize operation on the controlfile that came as a result of adding log
history records (V$LOG_HISTORY) when archiving an online redo log after a
log switch.
Starting with Oracle8 the control file can grow to a much larger size than it
was able to in Oracle7. Bug 663726 is only reproducible when the control file
needs to grow AND when the db_block_size = 16k. This has been tested on
instances with a smaller database block size and the problem has not been
able to be reproduced.
Records in some sections in the control file are circularly reusable while
records in other sections are never reused. CONTROL_FILE_RECORD_KEEP_TIME
applies to reusable sections. It specifies the minimum age in days that a
record must have before it can be reused. In the event a new record needs to
be added to a reusable section and the oldest record has not aged enough,
the record section expands. If CONTROL_FILE_RECORD_KEEP_TIME is set to 0,
then reusable sections never expand and records are reused as needed.
If there is still not a single chunk large enough to satisfy the request,
ORA-04031 is returned.
The message that you will get when this error appears is the following:

Error: ORA 4031
Text: unable to allocate %s bytes of shared memory (%s,%s,%s)

The ORA-04031 error is usually due to fragmentation in the library cache

or shared pool reserved space. Before increasing the shared pool size,
consider tuning the application to use shared SQL, and tuning
SHARED_POOL_SIZE, SHARED_POOL_RESERVED_SIZE, and
SHARED_POOL_RESERVED_MIN_ALLOC.

First determine if the ORA-04031 was a result of fragmentation in the library
cache or in the shared pool reserved space by issuing the following query:

SELECT free_space, avg_free_size, used_space, avg_used_size,
       request_failures, last_failure_size
FROM v$shared_pool_reserved;

The ORA-04031 is a result of lack of contiguous space in the shared pool
reserved space if:
REQUEST_FAILURES is > 0 and LAST_FAILURE_SIZE is > SHARED_POOL_RESERVED_MIN_ALLOC.

To resolve this consider increasing SHARED_POOL_RESERVED_MIN_ALLOC to lower
the number of objects being cached into the shared pool reserved space and
increase SHARED_POOL_RESERVED_SIZE and SHARED_POOL_SIZE to increase the
available memory in the shared pool reserved space.

The ORA-04031 is a result of lack of contiguous space in the library cache if:
REQUEST_FAILURES is > 0 and LAST_FAILURE_SIZE is < SHARED_POOL_RESERVED_MIN_ALLOC
or
REQUEST_FAILURES is 0 and LAST_FAILURE_SIZE is < SHARED_POOL_RESERVED_MIN_ALLOC

The first step would be to consider lowering SHARED_POOL_RESERVED_MIN_ALLOC
to put more objects into the shared pool reserved space and increase
SHARED_POOL_SIZE.

The V$SQLAREA view keeps information about every SQL statement and PL/SQL
block executed in the database. The following SQL can show you statements
with literal values, or candidates to include bind variables:

SELECT substr(sql_text,1,40) "SQL", count(*), sum(executions) "TotExecs"
FROM v$sqlarea
WHERE executions < 5
GROUP BY substr(sql_text,1,40)
HAVING count(*) > 30
ORDER BY 2;

19.33 ORA-4030 Out of memory:
----------------------------

Possibly no memory left in Oracle, or the OS does not grant more memory.
Also inspect the size of any swap file.
The error is also reported if execute permissions are not in place on some
procedure.
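To make the intent of that V$SQLAREA query concrete, here is a small sketch of literal SQL versus bind-variable SQL (the EMP table and values are illustrative, not part of the original note):

```sql
-- Literal values: each distinct statement is parsed and cached separately,
-- which fragments the shared pool over time.
SELECT ename FROM emp WHERE empno = 7369;
SELECT ename FROM emp WHERE empno = 7499;

-- Bind variable: a single shared cursor is parsed once and reused.
VARIABLE v_empno NUMBER
EXEC :v_empno := 7369
SELECT ename FROM emp WHERE empno = :v_empno;
```

The first pattern is what the query above surfaces: many nearly identical statements, each with a low execution count.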
19.34 wrong permissions on oracle: ---------------------------------Hi,

I am under very confusing situation. I'm running database (8.1.7)
My oracle is installed under ownership of userid "oracle"
when i login with unix id "TEST" and give oracle_sid,oracle_home,PATH
variables and then do sqlplus sys, after logging in when i give
"select file#,error from v$datafile_header;"
for some file# i get error as "CAN NOT READ HEADER"
but when i login through other unix id and do the same thing, I'm not getting
any error.. This seems very very confusing, Could you tell me the reason
behind this??
Thank & Regards, Atul

Followup:
sounds like you did not run the root.sh during the install and the
permissions on the oracle binaries are wrong.
what does ls -l $ORACLE_HOME/bin/oracle look like. it should look like this:

$ ls -l $ORACLE_HOME/bin/oracle
-rwsr-s--x 1 ora920 ora920 51766646 Mar 31 13:03 /usr/oracle/ora920/bin/oracle

with the "s" bits set. The same applies to other binaries, for example:

-rwsr-s--x 1 oracle dba 494456 Dec 7 1999 lsnrctl

regardless of who I log in as, when you have a setuid program as the oracle
binary is, it'll be running "as the owner".

tell me, what does ipcs -a show you, who is the owner of the shared memory
segments associated with the SGA. If that is not Oracle -- you are "getting
confused" somewhere for the s bit would ensure that Oracle was the owner.

Some connection troubleshooting:
--------------------------------

19.35:
======

ORA-12545:
----------
This one is probably due to the fact that the IP or HOSTNAME in tnsnames is
wrong.

ORA-12514:
----------
This one is probably due to the fact that the SERVICE_NAME in tnsnames is
wrong, or should be fully qualified with the domain name.

ORA-12154:
----------
This one is probably due to the fact that the alias you have used in the
logon dialog box is wrong, or should be fully qualified with the domain name.

ORA-12535:
----------
The TNS-12535 or ORA-12535 error is normally a timeout error associated
with Firewalls or slow Networks.
+ It can also be an incorrect listener.ora parameter setting for the
  CONNECT_TIMEOUT_<listener_name> value specified.
+ In essence, the ORA-12535/TNS-12535 is a timing issue between the client
  and server.

ORA-12505:
----------
TNS:listener does not currently know of SID given in connect descriptor

Note 1:
-------
Symptom:
When trying to connect to Oracle the following error is generated:
ORA-12224: TNS: listener could not resolve SID given in connection description.

Cause:
The SID specified in the connection was not found in the listener's tables.
This error will be returned if the database instance has not registered with
the listener.

Possible Remedy:
Check to make sure that the SID is correct. The SIDs that are currently
registered with the listener can be obtained by typing:
LSNRCTL SERVICES <listener-name>
These SIDs correspond to SID_NAMEs in TNSNAMES.ORA or DB_NAME in the
initialisation file.

Note 2:
-------
ORA-12505: TNS:listener could not resolve SID given in connect descriptor
You are trying to connect to a database, but the SID is not known.

Although it is possible that a tnsping command succeeds, there might still be
a problem with the SID parameter of the connection string. eg.

C:\>tnsping ora920

TNS Ping Utility for 32-bit Windows: Version 9.2.0.7.0 - Production
Copyright (c) 1997 Oracle Corporation. All rights reserved.
Used parameter files:
c:\oracle\ora920\network\admin\sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)
(HOST = DEV01)(PORT = 2491))) (CONNECT_DATA = (SID = UNKNOWN) (SERVER = DEDICATED)))
OK (20 msec)

As one can see, this is the connection information stored in a tnsnames.ora
file:

ORA920.EU.DBMOTIVE.COM =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = DEV01)(PORT = 2491))
    )
    (CONNECT_DATA =
      (SID = UNKNOWN)
      (SERVER = DEDICATED)
    )
  )

However, the SID UNKNOWN is not known by the listener at the database server
side. In order to test the services known by a listener, we can issue the
following command at the database server side:

C:\>lsnrctl services

LSNRCTL for 32-bit Windows: Version 10.1.0.2.0 - Production
Copyright (c) 1991, 2004, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=DEV01)(PORT=1521)))
Services Summary...
Service "ORA10G.eu.dbmotive.com" has 1 instance(s).
  Instance "ORA10G", status UNKNOWN, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0
      LOCAL SERVER
Service "ORA920.eu.dbmotive.com" has 2 instance(s).
  Instance "ORA920", status UNKNOWN, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0
      LOCAL SERVER
  Instance "ORA920", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:2 refused:0 state:ready
      LOCAL SERVER
The command completed successfully

Known services are ORA10G and ORA920. Changing the SID in our tnsnames.ora to
a service known by the listener (ORA920) solved the problem.
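For illustration, a corrected tnsnames.ora entry for this example could look as follows (host, port and domain are taken from the listing above; treat this as a sketch rather than a verified configuration):

```
ORA920.EU.DBMOTIVE.COM =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = DEV01)(PORT = 2491))
    )
    (CONNECT_DATA =
      (SID = ORA920)
      (SERVER = DEDICATED)
    )
  )
```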

19.36 ORA-12560
---------------

Note 1:
-------
Oracle classifies this as a "generic protocol adapter error". In my
experience it indicates that the Oracle client does not know what instance to
connect to or what TNS alias to use. Set the correct ORACLE_HOME and
ORACLE_SID variables.

Note 2:
-------
Doc ID: Note:73399.1
Subject: WINNT: ORA-12560 DB Start via SVRMGRL or SQL*PLUS ORACLE_SID is set correctly
Type: BULLETIN
Status: PUBLISHED
Content Type: TEXT/PLAIN
Creation Date: 28-JUL-1999
Last Revision Date: 14-JAN-2004

PURPOSE
To assist in resolving ORA-12560 errors on Oracle8i.

SCOPE & APPLICATION
Support Analysts and customers.

RELATED DOCUMENTS
PR:1070749.6
NOTE:1016454.102 TNS 12560 DB CREATE VIA INSTALLATION OR CONFIGURATION ASSISTANT FAILS
BUG:948671 ORADIM SUCCESSFULLY CREATES AN UNUSABLE SID WITH NON-ALPHANUMERIC CHARACTER
BUG:892253 ORA-12560 CREATING DATABASE WITH DB CONFIGURATION ASSISTANT IF SID HAS NON-ALPHA

If you encounter an ORA-12560 error when you try to start Server Manager or
SQL*Plus locally on your Windows NT server, you should first check the
ORACLE_SID value. Make sure the SID is correctly set, either in the Windows
NT registry or in your environment (with a set command). Also, you must
verify that the service is running. See the entries above for more details.

If you have verified that ORACLE_SID is properly set, and the service is
running, yet you still get an ORA-12560, then it is possible that you have
created an instance with a non-alphanumeric character. The Getting Started
Guide for Oracle8i on Windows NT documents that SID names can contain only
alphanumerics; however, if you attempt to create a SID with an underscore or
a dash on Oracle8i you are not prevented from doing so. The service will be
created and started successfully, but attempts to connect will fail with an
ORA-12560. You must delete the instance and recreate it with no special
characters: only alphanumerics are allowed in the SID name. See BUG#948671,
which was logged against 8.1.5 on Windows NT for this issue.

Note 3:
-------
Doc ID: Note:119008.1
Content Type: TEXT/PLAIN
Subject: ORA-12560 Connecting to the Server on Unix - Troubleshooting
Creation Date: 04-SEP-2000
Type: PROBLEM
Last Revision Date: 20-MAR-2003
Status: PUBLISHED

PURPOSE
-------
This note describes some of the possible reasons for ORA-12560 errors
connecting to the server on a Unix box.
The list below shows some of the causes, the symptoms and the action to take.
It is possible you will hit a cause not described here; in that case the
information above should allow it to be identified.

SCOPE & APPLICATION
-------------------
Support Analysts and customers alike.

ORA-12560 CONNECTING TO THE SERVER ON UNIX - TROUBLESHOOTING
------------------------------------------------------------

ORA-12560: TNS:protocol adapter error
Cause: A generic protocol adapter error occurred.
Action: Check addresses used for proper protocol specification. Before
reporting this error, look at the error stack and check for lower level
transport errors. For further details, turn on tracing and re-execute the
operation. Turn off tracing when the operation is complete.

This is a high-level error just reporting that an error occurred in the
actual transport layer.
Look at the next error down the stack and process that.

1. ORA-12500 ORA-12560 MAKING MULTIPLE CONNECTIONS TO DATABASE

Problem: Trying to connect to the database via the listener, the error
ORA-12500 is prompted. You may see ORA-12500 and ORA-12560 in the
listener.log:

ORA-12500: TNS:listener failed to start a dedicated server process
Cause: The process of starting up a dedicated server process failed.
The executable could not be found or the environment may be set up
incorrectly.
Action: Turn on tracing at the ADMIN level and re-execute the operation.
Verify that the ORACLE Server executable is present and has execute
permissions enabled. Ensure that the ORACLE environment is specified
correctly in LISTENER.ORA. If the error persists, contact Worldwide
Customer Support.

In many cases the error ORA-12500 is caused by a leak of resources in the
Unix box. If you are able to connect to the database and only randomly get
the error, your operating system has reached the maximum values for some
resources. Otherwise, if you get the error on the first connection, the
problem may be in the configuration of the system.

Solution: Finding the resource which has been reached is difficult;
Note 2064862.102 indicates some suggestions to solve the problem.

2. ORA-12538/ORA-12560 connecting to the database via SQL*Net

Problem: Trying to connect to the database via SQL*Net, the error ORA-12538
is prompted. In the trace file you can see:

nscall: error exit
nioqper: error from nscall
nioqper: nr err code: 0
nioqper: ns main err code: 12538
nioqper: ns (2) err code: 12560
nioqper: nt main err code: 508
nioqper: nt (2) err code: 0
nioqper: nt OS err code: 0

Solution: - Check the protocol used in the TNSNAMES.ORA by the connection string - Ensure that the TNSNAMES.ORA you check is the one that is actually being used by Oracle. Define the TNS_ADMIN environment variable to point to the TNSNAMES directory. - Using the $ORACLE_HOME/bin/adapters command, ensure the protocol is installed. Run the command without parameters to check if the protocol is installed, then run the command with parameters to see whether a particular tool/application contains the protocol symbols e.g.: 1. $ORACLE_HOME/bin/adapters 2. $ORACLE_HOME/bin/adapters $ORACLE_HOME/bin/oracle $ORACLE_HOME/bin/adapters $ORACLE_HOME/bin/sqlplus Explanation: If the protocol is not installed every connection attempting to use it will fail with ORA-12538 because the executable doesn't contain the required protocol symbol/s.

Error ORA-12538 may also be caused by an issue with the '$ORACLE_HOME/bin/relink all' command. 'Relink All' does not relink the sqlplus executable. If you receive error ORA-12538 when making a sqlplus connection, it may be for this reason. To relink sqlplus manually: $ su - oracle $ cd $ORACLE_HOME/sqlplus/lib $ make -f ins_sqlplus.mk install $ ls -l $ORACLE_HOME/bin/sqlplus --> should show a current date/time stamp 3. ORA-12546 ORA-12560 connecting locally to the database Problem: Trying to connect to database locally with a different account to the software owner, the error the error ORA-12546 is prompted. In the trace file you can see: nioqper: error from nscall nioqper: nr err code: 0 nioqper: ns main err code: 12546 nioqper: ns (2) err code: 12560 nioqper: nt main err code: 516 nioqper: nt (2) err code: 13 nioqper: nt OS err code: 0 Solution: Make sure the permissions of oracle executable are correct, this should be: 52224 -rwsr-sr-x 1 oracle dba 53431665 Aug 10 11:07 oracle

Explanation: The problem occurs due to an incorrect setting on the oracle
executable.

4. ORA-12541 ORA-12560 TRYING TO CONNECT TO A DATABASE

Problem: You are trying to connect to a database using SQL*Net and receive
the errors ORA-12541 ORA-12560 after changing the TCP/IP port in the
listener.ora, and you are using the parameter USE_CKPFILE_LISTENER in
listener.ora.
The following error stack appears in the SQLNET.LOG:

nr err code: 12203
TNS-12203: TNS:unable to connect to destination
ns main err code: 12541
TNS-12541: TNS:no listener
ns secondary err code: 12560
nt main err code: 511
TNS-00511: No listener
nt secondary err code: 239
nt OS err code: 0

Solution: Check [NOTE:1061927.6] to resolve the problem.

Explanation: If TCP protocol is listed in the Listener.ora's ADDRESS_LIST
section and the parameter USE_CKPFILE_LISTENER = TRUE, the Listener ignores
the TCP port number defined in the ADDRESS section and listens on a random
port.

RELATED DOCUMENTS
-----------------
Note:39774.1 LOG & TRACE Facilities on SQL*Net
Note:45878.1 SQL*Net Common Errors & Diagnostic Worksheet
Net8i Admin/Ch.11 Troubleshooting Net8 / Resolving the Most Common Error Messages
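A trivial first check when chasing these Unix connection errors is that the environment actually identifies an instance before anything else is debugged; a sketch (the path and SID are purely illustrative):

```shell
# Fail fast if the Oracle environment is incomplete (illustrative values).
export ORACLE_HOME=/u01/oracle/product/9.2.0
export ORACLE_SID=ACPT

# The :? expansion aborts with the given message if the variable is unset/empty.
: "${ORACLE_HOME:?ORACLE_HOME is not set}"
: "${ORACLE_SID:?ORACLE_SID is not set}"
echo "Environment points at instance $ORACLE_SID under $ORACLE_HOME"
```

If either variable is missing, the script stops immediately instead of producing a confusing protocol adapter error later on.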

19.37 ORA-12637
---------------
Packet receive failed.
A process was unable to receive a packet from another process. Possible
causes are:
1. The other process was terminated.
2. The machine on which the other process is running went down.
3. Some other communications error occurred.

Note 1:
Just edit the file sqlnet.ora and search for the string
SQLNET.AUTHENTICATION_SERVICES. When it exists and is set to (TNS), change
this to (NONE). When it doesn't exist, add the string
SQLNET.AUTHENTICATION_SERVICES = (NONE)

Note 2:
What does SQLNET.AUTHENTICATION_SERVICES do?

SQLNET.AUTHENTICATION_SERVICES

Purpose
Use the parameter SQLNET.AUTHENTICATION_SERVICES to enable one or more
authentication services. If authentication has been installed, it is
recommended that this parameter be set to either none or to one of the
authentication methods.

Default
None

Values
Authentication Methods Available with Oracle Net Services:
none for no authentication methods. A valid username and password can be
used to access the database.
all for all authentication methods
nts for Windows NT native authentication

Authentication Methods Available with Oracle Advanced Security:
kerberos5 for Kerberos authentication
cybersafe for Cybersafe authentication

radius for RADIUS authentication
dcegssapi for DCE GSSAPI authentication

See Also: Oracle Advanced Security Administrator's Guide

Example
SQLNET.AUTHENTICATION_SERVICES=(kerberos5, cybersafe)

Note 3:
ORA-12637 for members of one NT group, using OPS$ login
Being "identified externally", users can work fine until the user is added to
a "wwwauthor" NT group to allow them to publish documents on Microsoft IIS
(intranet) -- then they get ORA-12637 starting the Oracle c/s application
(document management system).
The environment is: Oracle 9.2.0.1.0 on Windows 2000 Advanced Server w. SP4,
Windows 2003 domain controllers in W2K compatible mode, client workstations
with W2K and Win XP. Any hint will be appreciated.

Problem solved. The specific NT group (wwwauthor) which caused the problems
had already existed with specific permissions, then it was dropped and
created again with exactly the same name (but, of course, with a different
internal ID). This situation has been identified as causing some kind of
mess. A completely new group with a different name has been created.

Note 4:
ORA-12637 packet receive failure
I added a second instance to the Oracle server. Since then, on the server and
all clients, I get ORA-12637 packet receive failure when I try to connect to
this database. Why is this?

Hello
Try commenting out the SQLNET.CRYPTO_SEED and SQLNET.AUTHENTICATION_SERVICES
in the server's SQLNET.ORA and on the client sqlnet file if they exist.
Please also verify that the server's LISTENER.ORA file contains the following
parameter:
CONNECT_TIMEOUT_LISTENER=0

Note 5:
Workaround is to turn off prespawned server processes in "listener.ora".
In the "listener.ora", comment out or delete the prespawn parameters, ie:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = prd)
      (ORACLE_HOME = /raid/app/oracle/product/7.3.4)

#     (PRESPAWN_MAX = 99)
#     (PRESPAWN_LIST =
#       (PRESPAWN_DESC = (PROTOCOL = TCP) (POOL_SIZE = 1) (TIMEOUT = 30))
#     )
    )
  )

Note 6:
Problem Description
-------------------
Connections to Oracle 9.2 using a Cybersafe authenticated user fail on
Solaris 2.6 with ORA-12637 and a core dump is generated.

Solution Description
--------------------
1) Shutdown Oracle, the listener and any clients.
2) In $ORACLE_HOME/lib take a backup copy of the file sysliblist
3) Edit sysliblist. Move the -lthread entry to the beginning. So change from,
   -lnsl -lsocket -lgen -ldl -lsched -lthread
   To,
   -lthread -lnsl -lsocket -lgen -ldl -lsched
4) Do $ORACLE_HOME/bin/relink all

Note 7:
fact: Oracle Server - Personal Edition 8.1
fact: MS Windows
symptom: Starting Server Manager (Svrmgrl) Fails
symptom: ORA-12637: Packet Receive Failed
cause: Oracle's installer will set the authentication to (NTS) by default.
However, if the Windows machine is not in a Domain where there is a Windows
Domain Controller, it will not be able to contact the KDC (Key Distribution
Centre) needed for Authentication.
fix: Comment out SQLNET.AUTHENTICATION_SERVICES=(NTS) in sqlnet.ora

19.38 ORA-02058:
================

dba_2pc_pending:
Lists all in-doubt distributed transactions. The view is empty until
populated by an in-doubt transaction. After the transaction is resolved, the
view is purged.

SQL> SELECT LOCAL_TRAN_ID, GLOBAL_TRAN_ID, STATE, MIXED, HOST, COMMIT#
  2  FROM DBA_2PC_PENDING
  3  /

LOCAL_TRAN_ID          GLOBAL_TRAN_ID

---------------------- ---------------------------------------------------------
6.31.5950              1145324612.10D447310B5FCE408A296417959EBEEC00000000

SQL> select STATE, MIXED, HOST, COMMIT#
  2  FROM DBA_2PC_PENDING
  3  /

STATE            MIX HOST
---------------- --- -----------------------------------------------------------
forced rollback  no  REBV\PGSS-TST-TCM

SQL> select * from dba_2pc_neighbors;

LOCAL_TRAN_ID          IN_ DATABASE
---------------------- --- --------------------------------------------------
6.31.5950              in  O

SQL> select state, tran_comment, advice from dba_2pc_pending;

STATE            TRAN_COMMENT
---------------- ------------------------------------------------------------
prepared

SQL> rollback force '6.31.5950';

Rollback complete.

SQL> commit;

Doc ID: Note:290405.1
Subject: ORA-30019 When Executing Dbms_transaction.Purge_lost_db_entry
Type: PROBLEM
Status: MODERATED
Content Type: TEXT/X-HTML
Creation Date: 11-NOV-2004
Last Revision Date: 16-NOV-2004

The information in this document applies to:
Oracle Server - Enterprise Edition - Version: 9.2.0.5
This problem can occur on any platform.

Errors
ORA-30019 Illegal rollback Segment operation in Automatic Undo mode

Symptoms
Attempting to clean up the pending transaction using
DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY, getting ora-30019:
ORA-30019: Illegal rollback Segment operation in Automatic Undo mode

Changes
AUTO UNDO MANAGEMENT is running

Cause
DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY is not supported in AUTO UNDO MANAGEMENT.
This is due to the fact that "set transaction use rollback segment.." cannot
be done in AUM.

Fix
1.) alter session set "_smu_debug_mode" = 4;
2.) execute DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('local_tran_id');
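Putting the pieces of this section together, a complete cleanup of an in-doubt transaction might look like the following sketch (the transaction id 6.31.5950 comes from the example above; under Automatic Undo Management the _smu_debug_mode step from Note:290405.1 is required):

```sql
-- 1. Identify the pending transaction and its state.
SELECT local_tran_id, state FROM dba_2pc_pending;

-- 2. Resolve it manually (COMMIT FORCE would be the alternative).
ROLLBACK FORCE '6.31.5950';

-- 3. Purge the entry; in AUM, first enable the workaround from Note:290405.1.
ALTER SESSION SET "_smu_debug_mode" = 4;
EXECUTE DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('6.31.5950');
COMMIT;
```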

19.39. ORA-600 [12850]:
=======================

Doc ID: Note:1064436.6
Content Type: TEXT/PLAIN
Subject: ORA-00600 [12850], AND ORA-00600 [15265]: WHEN SELECT OR DESCRIBE ON TABLE
Creation Date: 14-JAN-1999
Type: PROBLEM
Last Revision Date: 29-FEB-2000
Status: PUBLISHED

Problem Description:
--------------------
You are doing a describe or select on a table and receive:
ORA-600 [12850]:
Meaning: 12850 occurs when it can't find the user who owns the object from
the dictionary.

If you try to delete the table, you receive:
ORA-600 [15625]:
Meaning: The argument 15625 is occurring because some index entry for the
table is not found in obj$.

Problem Explanation:
--------------------
The data dictionary is corrupt. You cannot drop the tables in question
because the data dictionary doesn't know they exist.

Search Words:
-------------
ORA-600 [12850] ORA-600 [15625] describe delete table

Solution Description:
---------------------
You need to rebuild the database.

Solution Explanation:
---------------------
Since the table(s) cannot be accessed or dropped because of the data
dictionary corruption, rebuilding the database is the only option.

19.40 ORA-01092: ================ ------------------------------------------------------------------------------------------

Doc ID: Note:222132.1
Content Type: TEXT/PLAIN
Subject: ORA-01599 and ORA-01092 while starting database
Creation Date: 03-DEC-2002
Type: PROBLEM
Last Revision Date: 07-AUG-2003
Status: PUBLISHED

PURPOSE
-------
The purpose of this Note is to fix errors ORA-01599 & ORA-01092 when received
at startup.

SCOPE & APPLICATION
-------------------
All DBAs, Support Analysts.

Symptom(s)
~~~~~~~~~~
Starting the database gives errors similar to:
ORA-01599: failed to acquire rollback segment (20), cache space is full
(currently has (19) entries)
ORA-01092: ORACLE instance terminated

Change(s)
~~~~~~~~~
Increased shared_pool_size parameter.
Increased processes and/or sessions parameters.

Cause
~~~~~
Low value for max_rollback_segments.
The above changes changed the value for max_rollback_segments internally.

Fix
~~~
The value for max_rollback_segments is to be calculated as follows:
max_rollback_segments = transactions/transactions_per_rollback_segment
or 30, whichever is greater.
transactions = sessions * 1.1;
sessions = (processes * 1.1) + 5;
The default value for transactions_per_rollback_segment = 5;

1. Use these calculations and find out the value for max_rollback_segments.
2. Set it to this value or 30, whichever is greater.

3. Startup the database after this correct setting.

Reference info
~~~~~~~~~~~~~~
[BUG:2233336] RDBMS ERRORS AT STARTUP CAN CAUSE ODMA TO OMIT CLEANUP ACTIONS
[NOTE:30764.1] Init.ora Parameter "MAX_ROLLBACK_SEGMENTS" Reference Note

------------------------------------------------------------------------------------------

Doc ID: Note:1038418.6
Content Type: TEXT/PLAIN
Subject: ORA-01092 STARTING UP ORACLE RDBMS DATABASE
Creation Date: 17-NOV-1997
Type: PROBLEM
Last Revision Date: 06-JUL-1999
Status: PUBLISHED

Problem Summary:
================
ORA-01092 starting up Oracle RDBMS database.

Problem Description:
====================
When you startup your Oracle RDBMS database, you receive the following error:
ORA-01092: ORACLE instance terminated. Disconnection forced.

Problem Explanation:
====================
Oracle cannot write to the alert_<SID>.log file because the ownership and/or
permissions on the BACKGROUND_DUMP_DEST directory are incorrect.

Solution Summary:
=================
Modify the ownership and permissions of directory BACKGROUND_DUMP_DEST.

Solution Description:
=====================
To allow oracle to write to the BACKGROUND_DUMP_DEST directory (contains
alert_<SID>.log), modify the ownership of directory BACKGROUND_DUMP_DEST so
that the oracle user (software owner) is the owner and make the permissions
on directory BACKGROUND_DUMP_DEST 755. Follow these steps:

1. Determine the location of the BACKGROUND_DUMP_DEST parameter defined in
   the init<SID>.ora or config<SID>.ora files.

2. Login as root.
3. Change directory to the location of BACKGROUND_DUMP_DEST.
4. Change the owner of all the files and the directory to the software owner.
   For example:
   % chown oracle *
5. Change the permissions on the directory to 755.
   % chmod 755 .

Solution Explanation:
=====================
Changing the ownership and permissions of the BACKGROUND_DUMP_DEST directory
enables oracle to write to the alert_<SID>.log file.

---------------------------------------------------------------------------

Doc ID: Note:273413.1
Content Type: TEXT/X-HTML
Subject: Database Does not Start, Ora-00604 Ora-25153 Ora-00604 Ora-1092
Creation Date: 19-MAY-2004
Type: PROBLEM
Last Revision Date: 04-OCT-2004
Status: MODERATED

The information in this article applies to:
Oracle Server - Enterprise Edition - Version: 8.1.7.4 to 10.1.0.4
This problem can occur on any platform.

Errors
ORA-1092 Oracle instance terminated.
ORA-25153 Temporary Tablespace is Empty
ORA-604 error occurred at recursive SQL level <num>

Symptoms
The database is not opening and in the alert.log the following errors are
reported:

ORA-00604: error occurred at recursive SQL level 1
ORA-25153: Temporary Tablespace is Empty
Error 604 happened during db open, shutting down database
USER: terminating instance due to error 604
Instance terminated by USER, pid = xxxxx
ORA-1092 signalled during: alter database open...

You might find SQL in the trace file like:
select distinct d.p_obj#,d.p_timestamp from sys.dependency$ d, obj$ o
where d.p_obj#>=:1 and d.d_obj#=o.obj# and o.status!=5

Cause
In the case where there's a locally managed temp tablespace in the database,
after the controlfile is re-created using the statement generated by
"alter database backup controlfile to trace", the database

can't be opened again because it complains that the temp tablespace is empty.
However no tempfiles can be added to the temp tablespace, nor can the temp
tablespace be dropped, because the database is not yet open.
The query failed because of inadequate sort space (memory + disk).

Fix
We can increase the sort_area_size and sort_area_retained_size to a very high
value so that the query completes. Then the DB will open and we can take care
of the TEMP tablespace.
If the error still persists after increasing the sort_area_size and
sort_area_retained_size to a high value, then the only remaining option is to
restore and recover.

-------------------------------------------------------------------------------

From: Ronald Shaffer 17-Mar-05 19:23
Subject: Deleted OUTLN and now I get ORA-1092 and ORA-18008
RDBMS Version: 10G
Operating System and Version: RedHat ES 3
Error Number (if applicable): ORA-1092 and ORA-18008
Product (i.e. SQL*Loader, Import, etc.):
Product Version:

One of our DBAs dropped the OUTLN user in 10G and now the instance will not
start. We get an ORA-18008 specifying the schema is missing and an ORA-1092
when it attempts to OPEN. Startup mount is as far as we can get.
Any experience with this issue out there? Thanks...

From: Fairlie Rego 23-Mar-05 01:26
Subject: Re : Deleted OUTLN and now I get ORA-1092 and ORA-18008

Hi Ronald,
You are hitting bug 3786479
AFTER DROPPING THE OUTLN USER/SCHEMA, DB WILL NO LONGER OPEN. ORA-18008
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=BUG&p_id=3786479
If this is still an issue file a Tar and get a backport.
Regards,
Fairlie Rego

-------------------------------------------------------------------------------

From: Henry Lau 06-Mar-03 10:38
Subject: ORA-01092 while alter database open
RDBMS Version: 9.0.1.3
Operating System and Version: Linux Redhat 7.1
Error Number (if applicable): ORA-01092
Product (i.e. SQL*Loader, Import, etc.): ORACLE DATABASE
Product Version: 9.0.1.3

Hi,
Since our undotbs is very large and we try to follow the Doc ID: 157278.1,
we are trying to change the undotbs to a new one.
We try to:
1. Create UNDO tablespace undotb2 datafile $ORACLE_HOME/oradata/undotb2.dbf size 300M
2. ALTER SYSTEM SET undo_tablespace=undotb2;
3. Change undo = undotb2;
4. Restart the database;
5. alter tablespace undotbs offline;
6. when we restart the database, it shows the following error.

SQL> startup mount pfile=$ORACLE_HOME/admin/TEST/pfile/init.ora
ORACLE instance started.
Total System Global Area 386688540 bytes
Fixed Size 280092 bytes
Variable Size 318767104 bytes
Database Buffers 67108864 bytes
Redo Buffers 532480 bytes
Database mounted.

SQL> alter database nomount;
alter database nomount
*
ERROR at line 1:
ORA-02231: missing or invalid option to ALTER DATABASE

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01092: ORACLE instance terminated. Disconnection forced

I have checked the Log file as follow:

/u01/oracle/product/9.0.1/admin/TEST/udump/ora_29151.trc
Oracle9i Release 9.0.1.3.0 - Production
JServer Release 9.0.1.3.0 - Production
ORACLE_HOME = /u01/oracle/product/9.0.1
System name: Linux
Node name: utxrho01.unitex.com.hk
Release: 2.4.2-2smp
Version: #1 SMP Sun Apr 8 20:21:34 EDT 2001
Machine: i686
Instance name: TEST
Redo thread mounted by this instance: 1
Oracle process number: 9
Unix process pid: 29151, image: oracle@utxrho01.unitex.com.hk (TNS V1-V3)

*** SESSION ID:(8.3) 2003-03-06 17:25:38.615
Evaluating checkpoint for thread 1 sequence 8 block 2
ORA-00376: file 2 cannot be read at this time
ORA-01110: data file 2: '/u01/oracle/product/9.0.1/oradata/TEST/undotbs01.dbf'

Please help to check what the problem is??
Thank you!!
Regards,
Henry

From: Oracle, Pravin Sheth 07-Mar-03 09:31
Subject: Re : ORA-01092 while alter database open

Hi Henry,
What you are seeing is bug 2360088, which is fixed in Oracle 9.2.0.2.
I suggest that you log an iSR (formerly iTAR) for a quicker solution for the
problem.
Regards
Pravin

----------------------------------------------------------------------------------
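For reference, the undo-tablespace switch the poster attempted (per Doc 157278.1) can be sketched as follows; the datafile path and size are illustrative, and the old tablespace should only be dropped once it no longer contains active undo segments:

```sql
CREATE UNDO TABLESPACE undotb2
  DATAFILE '/u01/oracle/oradata/TEST/undotb2.dbf' SIZE 300M;

ALTER SYSTEM SET undo_tablespace = undotb2;

-- After all transactions using the old undo segments have completed:
DROP TABLESPACE undotbs INCLUDING CONTENTS AND DATAFILES;
```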

19.41 ORA-600 [qerfxFetch_01]
=============================

Note 1:
-------
Doc ID: Note:255881.1
Subject: ORA-600 [qerfxFetch_01]
Type: REFERENCE
Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 10-NOV-2003
Last Revision Date: 12-NOV-2004

<Internal_Only>
This note contains information that has not yet been reviewed by the PAA
Internals group or DDR. As such, the contents are not necessarily accurate
and care should be taken when dealing with customers who have encountered
this error. If you are going to use the information held in this note then
please take whatever steps are needed in order to confirm that the
information is accurate. Until the article has been set to EXTERNAL, we do
not guarantee the contents. Thanks. PAA Internals Group
(Note - this section will be deleted as the note moves to publication)
</Internal_Only>

Note: For additional ORA-600 related information please read Note 146580.1

PURPOSE:
This article represents a partially published OERI note. It has been
published because the ORA-600 error has been reported in at least one
confirmed bug. Therefore, the SUGGESTIONS section of this article may help
in terms of identifying the cause of the error. This specific ORA-600 error
may be considered for full publication at a later date. If/when fully
published, additional information will be available here on the nature of
this error.

<Internal_Only>
PURPOSE:
This article discusses the internal error "ORA-600 [qerfxFetch_01]", what
it means and possible actions. The information here is only applicable to
the versions listed and is provided only for guidance.

ERROR: ORA-600 [qerfxFetch_01]

VERSIONS: 9.2

DESCRIPTION:
During database operations, user interrupts need to be handled correctly.
ORA-600 [qerfxFetch_01] is raised when an interrupt has been trapped but
has not been handled correctly.

FUNCTIONALITY: Fixed table row source.

IMPACT:
NON CORRUPTIVE - No underlying data corruption.

</Internal_Only>

SUGGESTIONS:
If the Known Issues section below does not help in terms of identifying
a solution, please submit the trace files and alert.log to Oracle Support
Services for further analysis.

Known Issues:
  Bug# 2306106  See Note 2306106.8  OERI:[qerfxFetch_01] possible - affects OEM
  Fixed: 9.2.0.2, 10.1.0.2

<Internal_Only>
INTERNAL ONLY SECTION - NOT FOR PUBLICATION OR DISTRIBUTION TO CUSTOMERS
========================================================================
Ensure that this note comes out on top in Metalink when "ora-600
qerfxFetch_01" is searched.
</Internal_Only>

Note 2:
-------
Doc ID: Note:2306106.8
Content Type: TEXT/X-HTML
Subject: Support Description of Bug 2306106
Creation Date: 13-AUG-2003
Type: PATCH
Last Revision Date: 14-AUG-2003
Status: PUBLISHED

Bug 2306106  OERI:[qerfxFetch_01] possible - affects OEM

This note gives a brief overview of bug 2306106.

Affects:
  Product (Component): Oracle Server (RDBMS)
  Range of versions believed to be affected: Versions >= 9.2 but < 10G
  Versions confirmed as being affected: 9.2.0.1
  Platforms affected: Generic (all / most platforms affected)

Fixed:
  This issue is fixed in 9.2.0.2 (Server Patch Set) and the 10G Production
  Base Release.

Symptoms:
  Error may occur
  Internal Error may occur (ORA-600)
  ORA-600 [qerfxFetch_01]

Related To: (None Specified)

Description:

ORA-600 [qerfxFetch_01] possible - affects OEM

Note 3:
-------
Bug 2306106 is fixed in the 9.2.0.2 patchset. This bug is not published and
thus cannot be viewed externally in MetaLink. All it says on this bug is
'ORA-600 [qerfxFetch_01] possible - affects OEM'.

19.42 Undo corruption:
======================

Note 1:
-------
Doc ID: Note:2431450.8
Content Type: TEXT/X-HTML
Subject: Support Description of Bug 2431450
Creation Date: 08-AUG-2003
Type: PATCH
Last Revision Date: 05-JAN-2004
Status: PUBLISHED

Bug 2431450  SMU Undo corruption possible on instance crash

This note gives a brief overview of bug 2431450.

Affects:
  Product (Component): (Rdbms)
  Range of versions believed to be affected: Versions >= 9 but < 10G
  Versions confirmed as being affected: 9.0.1.4, 9.2.0.3
  Platforms affected: Generic (all / most platforms affected)

Fixed:
  This issue is fixed in 9.0.1.5 (iAS Patch Set), 9.2.0.4 (Server Patch
  Set) and the 10g Production Base Release.

Symptoms:
  Corruption (Physical)
  Internal Error may occur (ORA-600)
  ORA-600 [kteuPropTime-2] / ORA-600 [4191]

Related To: System Managed Undo

Description:
SMU (System Managed Undo) Undo corruption possible on instance crash.
This can result in subsequent ORA-600 errors due to the undo corruption.

Note 2:
-------
Doc ID: Note:233864.1
Content Type: TEXT/X-HTML
Subject: ORA-600 [kteuproptime-2]
Type: REFERENCE
Status: PUBLISHED

Creation Date: 28-MAR-2003
Last Revision Date: 07-APR-2005

Note: For additional ORA-600 related information please read Note 146580.1

PURPOSE:
This article discusses the internal error "ORA-600 [kteuproptime-2]", what
it means and possible actions. The information here is only applicable to
the versions listed and is provided only for guidance.

ERROR: ORA-600 [kteuproptime-2]

VERSIONS: 9.0 to 9.2

DESCRIPTION:
Oracle has encountered an error propagating Extent Commit Times in the
Undo Segment Header / Extent Map Blocks, for System Managed Undo Segments.
The extent being referenced is not valid.

FUNCTIONALITY: UNDO EXTENTS

IMPACT:
INSTANCE FAILURE POSSIBLE
PHYSICAL CORRUPTION

SUGGESTIONS:
If the instance is down and fails to restart due to this error then set the
following parameter, which will gather additional information to assist
support in identifying the cause:

# Dump Undo Segment Headers during transaction recovery
event="10015 trace name context forever, level 10"

Restart the instance and submit the trace files and alert.log to Oracle
Support Services for further analysis. Do not set any other
undo/rollback_segment parameters without direction from Support.

Known Issues:
  Bug# 2431450  SMU Undo corruption possible on instance crash
  Fixed: 9.2.0.4, 10.1.0.2

Note 3:
-------
Hi,

Apply patchset 9.2.0.2; bug 2431450, which made SMU (System Managed Undo)
undo corruption possible on instance crash, is fixed in 9.2.0.2.
See Note 2431450.8

It's a very rare scenario: this will only cause a problem if there was an
instance crash after a transaction committed but before it propagated the
extent commit times to all its extents, AND there was a shrink of extents
before the transaction could be recovered.

But still, this bug was not published (not for any particular reason except
it was found internally).

Greetings,

Note 4:
-------
From: Oracle, Ken Robinson 21-Feb-03 17:44
Subject: Re : ORA-600 kteuPropTime-2

Forgot to mention the second bug for this.... bug 2689239:
ORA-600 [4191] possible on shrink of system managed undo segment.

Regards,
Ken Robinson
Oracle Server EE Analyst

Note 5:
-------
BUGBUSTER - System-managed undo segment corruption

Affects Versions: 9.2.0.1.0, 9.2.0.2.0, 9.2.0.3.0
Fixed in: Patch 2431450, 9.2.0.4.0
BUG# (if recognised): 2431450
This info. correct on: 31-AUG-2003

Symptoms
Oracle instance crashes and details of the ORA-00600 error are written to
the alert.log:

ORA-00600: internal error code, arguments: [kteuPropTime-2], [], [], []

Followed by:

Fatal internal error happened while SMON was doing active transaction
recovery.

Then:

SMON: terminating instance due to error 600
Instance terminated by SMON, pid = 22972

This occurs as Oracle encounters an error when propagating Extent Commit
Times in the Undo Segment Header Extent Map Blocks. It could be because
SMON is over-enthusiastic in shrinking extents in SMU segments. As a
result, extent commit times do not get written to all the extents and SMON
causes the instance to crash, leaving one or more of the undo segments

corrupt. When opening the database following the crash, Oracle tries to
perform crash recovery and encounters problems recovering committed
transactions stored in the corrupt undo segments. This leads to more
ORA-00600 errors and a further instance crash. The net result is that the
database cannot be opened:

"Error 600 happened during db open, shutting down database"

Workaround
Until the corrupt undo segment can be identified and offlined then
unfortunately the database will not open. Identify the corrupt undo
segment by setting the following parameters in the init.ora file:

_smu_debug_mode=1
event="10015 trace name context forever, level 10"
event="10511 trace name context forever, level 2"

_smu_debug_mode simply collects diagnostic information for support
purposes. Event 10015 is the undo segment recovery tracing event. Use this
to identify corrupted rollback/undo segments when a database cannot be
started. Event 10511 provides additional tracing.

With these parameters set, an attempt to open the database will still
cause a crash, but Oracle will write vital information about the corrupt
rollback/undo segments to a trace file in user_dump_dest. This is an
extract from such a trace file, revealing that undo segment number 6
(_SYSSMU6$) is corrupt. Notice that the information stored in the segment
header about the number of extents was inconsistent with the extent map.

Recovering rollback segment _SYSSMU6$
UNDO SEG (BEFORE RECOVERY): usn = 6
Extent Control Header
-----------------------------------------------------------------
Extent Header:: spare1: 0 spare2: 0 #extents: 7 #blocks: 1934
last map 0x00805f89 #maps: 1 offset: 4080
Highwater:: 0x0080005b ext#: 0 blk#: 1 ext size: 7 #blocks in seg.
hdr's freelists: 0 #blocks below: 0 mapblk 0x00000000 offset: 0
Unlocked
Map Header:: next 0x00805f89 #extents: 5 obj#: 0 flag: 0x40000000
Extent Map
-----------------------------------------------------------------
0x0080005a length: 7
0x00800061 length: 8
0x0081ac89 length: 1024
0x00805589 length: 256
0x00805a89 length: 256

Retention Table
-----------------------------------------------------------
Extent Number:0 Commit Time: 1060617115
Extent Number:1 Commit Time: 1060611728
Extent Number:2 Commit Time: 1060611728
Extent Number:3 Commit Time: 1060611728
Extent Number:4 Commit Time: 1060611728

Comment out parameters undo_management and undo_tablespace and set the
undocumented _corrupted_rollback_segments parameter to tell Oracle to
ignore any corruptions and force the database open:

_corrupted_rollback_segments=(_SYSSMU6$)

This time, Oracle will start and open OK, which will allow you to check
the status of the undo segments by querying DBA_ROLLBACK_SEGS.

select segment_id, segment_name, tablespace_name, status
from dba_rollback_segs
where owner='PUBLIC';

SEGMENT_ID SEGMENT_NAME    TABLESPACE_NAME STATUS
---------- --------------- --------------- ----------------
         1 _SYSSMU1$       UNDOTS          OFFLINE
         2 _SYSSMU2$       UNDOTS          OFFLINE
         3 _SYSSMU3$       UNDOTS          OFFLINE
         4 _SYSSMU4$       UNDOTS          OFFLINE
         5 _SYSSMU5$       UNDOTS          OFFLINE
         6 _SYSSMU6$       UNDOTS          NEEDS RECOVERY
         7 _SYSSMU7$       UNDOTS          OFFLINE
         8 _SYSSMU8$       UNDOTS          OFFLINE
         9 _SYSSMU9$       UNDOTS          OFFLINE
        10 _SYSSMU10$      UNDOTS          OFFLINE

SMON will complain every 5 minutes by writing entries to the alert.log as
long as there are undo segments in need of recovery:

SMON: about to recover undo segment 6
SMON: mark undo segment 6 as needs recovery

At this point, you must either download and apply patch 2431450 or create
private rollback segments.

Note 6:
-------
Repair UNDO log corruption
Don Burleson

In rare cases (usually DBA error) the Oracle UNDO tablespace can become
corrupted. This manifests with this error:

ORA-00376: file xx cannot be read at this time

In cases of UNDO log corruption, you must:

- Change the undo_management parameter from AUTO to MANUAL
- Create a new UNDO tablespace

- Drop the old UNDO tablespace

Dropping the corrupt UNDO tablespace can be tricky and you may get the
message:

ORA-00376: file string cannot be read at this time

To drop a corrupt UNDO tablespace:

1. Identify the bad segment:

select segment_name, status
from dba_rollback_segs
where tablespace_name='undotbs_corrupt'
and status = 'NEEDS RECOVERY';

SEGMENT_NAME                   STATUS
------------------------------ ----------------
_SYSSMU22$                     NEEDS RECOVERY

2. Bounce the instance with the hidden parameter _offline_rollback_segments,
   specifying the bad segment name:

_OFFLINE_ROLLBACK_SEGMENTS=_SYSSMU22$

3. Bounce database, nuke the corrupt segment and tablespace:

SQL> drop rollback segment "_SYSSMU22$";
Rollback segment dropped.

SQL> drop tablespace undotbs including contents and datafiles;
Tablespace dropped.

Note 7:
-------
Sometimes there can be trouble with an undo segment. Actually there might
be something with a normal object.

PUT the following in the init.ora:

event = "10015 trace name context forever, level 10"

Setting this event will generate a trace file that will reveal the
necessary information about the transaction Oracle is trying to roll back
and, most importantly, what object Oracle is trying to apply the undo to.

USE the following query to find out what object Oracle is trying to
perform recovery on:

select owner, object_name, object_type, status
from dba_objects
where object_id = <object #>;

THIS object must be dropped so the undo can be released. An export or
relying on a backup may be necessary to restore the object after the
corrupted

rollback segment goes away.

19.43 ORA-1653
==============

Note 1:
-------
Doc ID: Note:151994.1
Content Type: TEXT/PLAIN
Subject: Overview Of ORA-01653: Unable To Extend Table %s.%s By %s In
         Tablespace %s
Creation Date: 12-JUL-2001
Type: TROUBLESHOOTING
Last Revision Date: 15-JUN-2004
Status: PUBLISHED

PURPOSE
-------
This bulletin is an overview of the ORA-1653 error message for
dictionary-managed tablespaces.

SCOPE & APPLICATION
-------------------
It is for users requiring further information on the ORA-01653 error
message. When looking to resolve the error by using any of the solutions
suggested, please consult the DBA for assistance.

Error: ORA-01653
Text: unable to extend table %s.%s by %s in tablespace %s
-------------------------------------------------------------------------------
Cause: Failed to allocate an extent for table segment in tablespace.
Action: Use ALTER TABLESPACE ADD DATAFILE statement to add one or more
files to the tablespace indicated.

Explanation:
------------
This error does not necessarily indicate whether or not you have enough
space in the tablespace, it merely indicates that Oracle could not find a
large enough area of free contiguous space in which to fit the next extent.

Diagnostic Steps:
-----------------
1. In order to see the free space available for a particular tablespace,
   you must use the view DBA_FREE_SPACE. Within this view, each record
   represents one fragment of space. How the view DBA_FREE_SPACE can be
   used to determine the space available in the database is described in:
   [NOTE:121259.1] Using DBA_FREE_SPACE

2. The DBA_TABLES view describes the size of next extent (NEXT_EXTENT)
   and the percentage increase (PCT_INCREASE) for all tables in the
   database.

The "next_extent" size is the size of extent that is trying to be
allocated (and for which you have the error). When the extent is
allocated:

next_extent = next_extent * (1 + (pct_increase/100))

The algorithm to allocate an extent for a segment is described in the
Concepts Guide, chapter "Data Blocks, Extents, and Segments - How Extents
Are Allocated".

3. Look to see if any users have the tablespace in question as their
   temporary tablespace. This can be checked by looking at DBA_USERS
   (TEMPORARY_TABLESPACE).

Possible solutions:
-------------------
- Manually Coalesce Adjacent Free Extents:

  ALTER TABLESPACE <tablespace name> COALESCE;

  The extents must be adjacent to each other for this to work.

- Add a Datafile:

  ALTER TABLESPACE <tablespace name> ADD DATAFILE
  '<full path and file name>' SIZE <integer> <k|m>;

- Resize the Datafile:

  ALTER DATABASE DATAFILE '<full path and file name>' RESIZE <integer> <k|m>;

- Enable autoextend:

  ALTER DATABASE DATAFILE '<full path and file name>' AUTOEXTEND ON
  MAXSIZE UNLIMITED;

- Defragment the Tablespace.

- Lower "next_extent" and/or "pct_increase" size:

  ALTER <segment_type> <segment_name>
  STORAGE (next <integer> <k|m> pctincrease <integer>);

- If the tablespace is being used as a temporary tablespace, temporary
  segments may be still holding the space.
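The two diagnostic steps above can be rolled into one statement that puts the size of the extent a table will request next to the largest contiguous free chunk in its tablespace. A sketch -- the owner and table name (SCOTT/EMP) are placeholders for the segment named in the error:

```sql
SELECT t.table_name,
       t.tablespace_name,
       -- size of the extent Oracle will try to allocate next
       t.next_extent * (1 + NVL(t.pct_increase, 0) / 100) AS required_bytes,
       -- largest contiguous free chunk in that tablespace
       (SELECT MAX(f.bytes)
          FROM dba_free_space f
         WHERE f.tablespace_name = t.tablespace_name)     AS largest_free_chunk
FROM   dba_tables t
WHERE  t.owner      = 'SCOTT'
  AND  t.table_name = 'EMP';
```

If required_bytes exceeds largest_free_chunk, the next extent allocation will fail with ORA-1653 and one of the solutions above is needed.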
References:
-----------
[NOTE:1025288.6] How to Diagnose and Resolve ORA-01650, ORA-01652,
                 ORA-01653, ORA-01654, ORA-01688: Unable to Extend
                 <OBJECT> by %S in Tablespace
[NOTE:1020090.6] Script to Report on Space in Tablespaces
[NOTE:1020182.6] Script to Detect Tablespace Fragmentation
[NOTE:1012431.6] Overview of Database Fragmentation
[NOTE:121259.1]  Using DBA_FREE_SPACE
[NOTE:61997.1]   SMON - Temporary Segment Cleanup and Free Space Coalescing
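Since DBA_FREE_SPACE holds one row per free fragment, a quick per-tablespace overview of free space and fragmentation can be had with a sketch like:

```sql
SELECT tablespace_name,
       COUNT(*)                 AS free_fragments,
       SUM(bytes) / 1024 / 1024 AS total_free_mb,
       MAX(bytes) / 1024 / 1024 AS largest_chunk_mb
FROM   dba_free_space
GROUP  BY tablespace_name
ORDER  BY tablespace_name;
```

A large gap between total_free_mb and largest_chunk_mb combined with many fragments points at fragmentation rather than a genuine lack of space.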

Note 2:
-------
Doc ID: Note:1025288.6
Content Type: TEXT/PLAIN
Subject: How to Diagnose and Resolve ORA-01650, ORA-01652, ORA-01653,
         ORA-01654, ORA-01688: Unable to Extend <OBJECT> by %S in
         Tablespace %S
Creation Date: 02-JAN-1997
Type: TROUBLESHOOTING
Last Revision Date: 10-JUN-2004
Status: PUBLISHED

PURPOSE
-------
This document can be used to diagnose and resolve space management
errors - ORA-1650, ORA-1652, ORA-1653, ORA-1654 and ORA-1688.

SCOPE & APPLICATION
-------------------
You are working with the database and have encountered one of the
following errors:

ORA-01650: unable to extend rollback segment %s by %s in tablespace %s
  Cause: Failed to allocate extent for the rollback segment in tablespace.
  Action: Use the ALTER TABLESPACE ADD DATAFILE statement to add one or
  more files to the specified tablespace.

ORA-01652: unable to extend temp segment by %s in tablespace %s
  Cause: Failed to allocate an extent for temp segment in tablespace.
  Action: Use ALTER TABLESPACE ADD DATAFILE statement to add one or more
  files to the tablespace indicated, or create the object in another
  tablespace.

ORA-01653: unable to extend table %s.%s by %s in tablespace %s
  Cause: Failed to allocate extent for table segment in tablespace.
  Action: Use the ALTER TABLESPACE ADD DATAFILE statement to add one or
  more files to the specified tablespace.

ORA-01654: unable to extend index %s.%s by %s in tablespace %s
  Cause: Failed to allocate extent for index segment in tablespace.
  Action: Use the ALTER TABLESPACE ADD DATAFILE statement to add one or
  more files to the specified tablespace.

ORA-01688: unable to extend table %s.%s partition %s by %s in tablespace %s
  Cause: Failed to allocate an extent for table segment in tablespace.
  Action: Use ALTER TABLESPACE ADD DATAFILE statement to add one or more
  files to the tablespace indicated.
How to Solve the Following Errors About UNABLE TO EXTEND -------------------------------------------------------An "unable to extend" error is raised when there is insufficient contiguous space available to extend the object. A. In order to address the UNABLE TO EXTEND issue, you need to get the following

information:

1. The largest contiguous space available for the tablespace:

   SELECT max(bytes)
   FROM dba_free_space
   WHERE tablespace_name = '<tablespace name>';

   The above query returns the largest available contiguous chunk of
   space. Please note that if the tablespace you are concerned with is of
   type TEMPORARY, then please refer to [NOTE:188610.1].

   If this query is done immediately after the failure, it will show that
   the largest contiguous space in the tablespace is smaller than the
   next extent the object was trying to allocate.

2. => "next_extent" for the object
   => "pct_increase" for the object
   => The name of the tablespace in which the object resides

   Use the "next_extent" size with "pct_increase" in the following
   formula to determine the size of the extent that is trying to be
   allocated:

   extent size = next_extent * (1 + (pct_increase/100))

   next_extent = 512000
   pct_increase = 50
   => extent size = 512000 * (1 + (50/100)) = 512000 * 1.5 = 768000

ORA-01650 Rollback Segment
==========================
SELECT next_extent, pct_increase, tablespace_name
FROM dba_rollback_segs
WHERE segment_name = '<rollback segment name>';

Note: pct_increase is only needed for early versions of Oracle; by default
in later versions pct_increase for a rollback segment is 0.

ORA-01652 Temporary Segment
===========================
SELECT next_extent, pct_increase, tablespace_name
FROM dba_tablespaces
WHERE tablespace_name = '<tablespace name>';

Temporary segments take the default storage clause of the tablespace in
which they are created. If this error is caused by a query, then try and
ensure that the query is tuned to perform its sorts as efficiently as
possible. To find the owner of a sort, please refer to [NOTE:1069041.6].

ORA-01653 Table Segment
=======================
SELECT next_extent, pct_increase, tablespace_name
FROM dba_tables
WHERE table_name = '<table name>'
AND owner = '<owner>';

ORA-01654 Index Segment
=======================
SELECT next_extent, pct_increase, tablespace_name
FROM dba_indexes
WHERE index_name = '<index name>'
AND owner = '<owner>';

ORA-01688 Table Partition
=========================
SELECT next_extent, pct_increase, tablespace_name
FROM dba_tab_partitions
WHERE partition_name='<partition name>'
AND table_owner = '<owner>';

B. Possible Solutions

There are several options for solving errors due to failure to extend:

a. Manually Coalesce Adjacent Free Extents
   ---------------------------------------
   ALTER TABLESPACE <tablespace name> COALESCE;

   The extents must be adjacent to each other for this to work.

b. Add a Datafile
   --------------
   ALTER TABLESPACE <tablespace name> ADD DATAFILE
   '<full path and file name>' SIZE <integer> <k|m>;

c. Lower "next_extent" and/or "pct_increase" size
   ----------------------------------------------
   For a non-temporary and non-partitioned segment problem:

   ALTER <segment_type> <segment_name>
   STORAGE (next <integer> <k|m> pctincrease <integer>);

   For a non-temporary and partitioned segment problem:

   ALTER TABLE <table_name> MODIFY PARTITION <partition_name>
   STORAGE (next <integer> <k|m> pctincrease <integer>);

   For a temporary segment problem:

   ALTER TABLESPACE <tablespace name> DEFAULT STORAGE
   (initial <integer> next <integer> <k|m> pctincrease <integer>);

d. Resize the Datafile
   -------------------
   ALTER DATABASE DATAFILE '<full path and file name>' RESIZE <integer> <k|m>;

e. Defragment the Tablespace
   -------------------------
   If you would like more information on fragmentation, the following
   documents are available from Oracle WorldWide Support (this is not a
   comprehensive list):

   [NOTE:1020182.6] Script to Detect Tablespace Fragmentation
   [NOTE:1012431.6] Overview of Database Fragmentation
   [NOTE:30910.1]   Recreating Database Objects

Related Documents:
==================
[NOTE:15284.1]   Understanding and Resolving ORA-01547
[NOTE:151994.1]  Overview Of ORA-01653: Unable To Extend Table %s.%s By %s
                 In Tablespace %s
[NOTE:146595.1]  Overview Of ORA-01654: Unable To Extend Index %s.%s By %s
                 In Tablespace %s
[NOTE:188610.1]  DBA_FREE_SPACE Does not Show Information about Temporary
                 Tablespaces
[NOTE:1069041.6] How to Find Creator of a SORT or TEMPORARY SEGMENT or
                 Users Performing Sorts for Oracle8 and 9

Search Words:
=============
ORA-1650 ORA-1652 ORA-1653 ORA-1654 ORA-1688
ORA-01650 ORA-01652 ORA-01653 ORA-01654 ORA-01688
1650 1652 1653 1654 1688

19.44: Other ORA- errors on 9i:
===============================
Doc ID: Note:201342.1
Content Type: TEXT/X-HTML
Subject: Top Internal Errors - Oracle Server Release 9.2.0
Creation Date: 27-JUN-2002
Type: BULLETIN
Last Revision Date: 24-MAY-2004
Status: PUBLISHED

Top Internal Errors - Oracle Server Release 9.2.0

Additional information or documentation on ORA-600 errors not listed here

may be available from the ORA-600 Lookup tool: Note 153788.1
Note 189908.1  Oracle9i Release 2 (9.2) Support Status and Alerts

ORA-600 [KSLAWE:!PWQ]
  Possible bugs:
    Bug 3566420 - BACKGROUND PROCESS GOT OERI:KSLAWE:!PWQ AND INSTANCE
                  CRASHES
                  Fixed in: 9.2.0.6, 10G
  References:
    Note 271084.1 - ALERT: ORA-600[KSLAWE:!PWQ] RAISED IN V92040 OR V92050
                    ON SUN 64BIT ORACLE

ORA-600 [ksmals]
  Possible bugs:
    Bug 2662683 - ORA-7445 & HEAP CORRUPTION WHEN RUNNING APPS PROGRAM
                  THAT DOES HEAVY INSERTS
                  Fixed in: 9.2.0.4
  References:
    Note 247822.1 - ORA-600 [ksmals]

ORA-600 [4000]
  Possible bugs:
    Bug 2959556 - STARTUP after an ORA-701 fails with OERI[4000]
                  Fixed in: 9.2.0.5, 10G
    Bug 1371820 - OERI:4506 / OERI:4000 possible against transported
                  tablespace
                  Fixed in: 8.1.7.4, 9.0.1.4, 9.2.0.1
  References:
    Note 47456.1 - ORA-600 [4000] "trying to get dba of undo segment
                   header block from usn"

ORA-600 [4454]
  Possible bugs:
    Bug 1402161 - OERI:4411/OERI:4454 on long running job
                  Fixed in: 8.1.7.3, 9.0.1.3, 9.2.0.1
  References:
    Note 138836.1 - ORA-600 [4454]

ORA-600 [kcbgcur_9]
  Possible bugs:
    Bug 2722809 - OERI:kcbgcur_9 on direct load into AUTO space managed
                  segment
                  Fixed in: 9.2.0.4, 10G
    Bug 2392885 - Direct path load may fail with OERI:kcbgcur_9 /
                  OERI:ktfduedel2
                  Fixed in: 9.2.0.4, 10G
    Bug 2202310 - OERI:KCBGCUR_9 possible from SMON dropping a rollback
                  segment in locally managed tablespace
                  Fixed in: 9.0.1.4, 9.2.0.1
    Bug 2035267 - OERI:KCBGCUR_9 possible during TEMP space operations
                  Fixed in: 9.0.1.3, 9.2.0.1
    Bug 1804676 - OERI:KCBGCUR_9 possible from ONLINE REBUILD INDEX with
                  concurrent DML
                  Fixed in: 8.1.7.3, 9.0.1.3, 9.2.0.1
    Bug 1785175 - OERI:kcbgcur_9 from CLOB TO CHAR or BLOB TO RAW
                  conversion
                  Fixed in: 9.2.0.2, 10G
  References:
    Note 114058.1 - ORA-600 [kcbgcur_9] "Block class pinning violation"

ORA-600 [qerrmOFBu1], [1003]
  Possible bugs:
    Bug 2308496 - SQL*PLUS CRASH IN TTC LOGGING INTO ORACLE 7.3.4 DATABASE
  References:
    Note 209363.1 - ORA-600 [qerrmOFBu1] - "Error during remote row fetch
                    operation"
    Note 207319.1 - ALERT: Connections from Oracle 9.2 to Oracle7 are Not
                    Supported

ORA-600 [ktsgsp5] or ORA-600 [kdddgb2]
  Possible bugs:
    Bug 2384289 - ORA-600 [KDDDGB2] [435816] [2753588] & PROBABLE INDEX
                  CORRUPTION
                  Fixed in: 9.2.0.2
  References:
    Note 139037.1 - ORA-600 [kdddgb2]
    Note 139180.1 - ORA-600 [ktsgsp5]
    Note 197737.1 - ALERT: Corruption / Internal Errors possible after
                    Upgrading to 9.2.0.1

19.45: ADJUST SCN:
==================

Note 1 Adjust SCN:
------------------
Doc ID: Note:30681.1
Subject: EVENT: ADJUST_SCN - Quick Reference
Type: REFERENCE
Status: PUBLISHED
Content Type: TEXT/PLAIN
Creation Date: 20-OCT-1997
Last Revision Date: 04-AUG-2000
Language: USAENG

ADJUST_SCN Event
~~~~~~~~~~~~~~~~
*** WARNING ***
This event should only ever be used under the guidance of an experienced
Oracle analyst. If an SCN is ahead of the current database SCN, this
indicates some form of database corruption. The database should be rebuilt
after bumping the SCN.
****************

The ADJUST_SCN event is useful in some recovery situations where the
current SCN needs to be incremented by a large value to ensure it is ahead
of the highest SCN in the database. This is typically required if either:

a. An ORA-600 [2662] error is signalled against database blocks
or
b. ORA-1555 errors keep occurring after forcing the database open, or
   ORA-604 / ORA-1555 errors occur during database open.
   (Note: If startup reports ORA-704 & ORA-1555 errors together then the
   ADJUST_SCN event cannot be used to bump the SCN as the error is
   occurring during bootstrap. Repeated startup/shutdown attempts may
   help if the SCN mismatch is small)
or
c. If a database has been forced open using _ALLOW_RESETLOGS_CORRUPTION
   (See <Parameter:Allow_Resetlogs_Corruption>)

The ADJUST_SCN event acts as described below.

**NOTE: You can check that the ADJUST_SCN event has fired as it should
write a message to the alert log in the form "Debugging event used to
advance scn to %s". If this message is NOT present in the alert log the
event has probably not fired.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If the database will NOT open:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Take a backup. You can use event 10015 to trigger an ADJUST_SCN on
database open:

startup mount;
alter session set events '10015 trace name adjust_scn level 1';

(NB: You can only use IMMEDIATE here on an OPEN database. If the database
is only mounted use the 10015 trigger to adjust SCN, otherwise you get
ORA-600 [2251], [65535], [4294967295])

alter database open;

If you get an ORA-600 [2256], shutdown, use a higher level and reopen.

Do *NOT* set this event in init.ora or the instance will crash as soon as
SMON or PMON try to do any clean up. Always use it with the
"alter session" command.

~~~~~~~~~~~~~~~~~~~~~~~~~~
If the database *IS* OPEN:
~~~~~~~~~~~~~~~~~~~~~~~~~~
You can increase the SCN thus:

alter session set events 'IMMEDIATE trace name ADJUST_SCN level 1';

LEVEL: Level 1 is usually sufficient - it raises the SCN to 1 billion
(1024*1024*1024). Level 2 raises it to 2 billion, etc...

If you try to raise the SCN to a level LESS THAN or EQUAL to its current
setting you will get <OERI:2256> - see below. I.e.: the event steps the
SCN to known levels.
You cannot use the same level twice.

Calculating a Level from 600 errors:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To get a LEVEL for ADJUST_SCN:

a) Determine the TARGET scn:
     ora-600 [2662]  See <OERI:2662>  Use TARGET >= blocks SCN
     ora-600 [2256]  See <OERI:2256>  Use TARGET >= Current SCN

b) Multiply the TARGET wrap number by 4. This will give you the level to
   use in the adjust_scn to get the correct wrap number.

c) Next, add the following value to the level to get the desired base
   value as well:

     Add to Level    Base
     ~~~~~~~~~~~~    ~~~~~~~~~~~~
     0               0
     1               1073741824
     2               2147483648
     3               3221225472
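Reading the table above as "one level unit covers 1073741824 (2^30) of SCN base", steps a)-c) can be sanity-checked with a bit of SQL arithmetic. The target wrap/base values below are made up for illustration:

```sql
-- target wrap = 1, target base = 500000000 (hypothetical values)
-- level = (wrap * 4) + enough level units to cover the base
SELECT (1 * 4) + CEIL(500000000 / 1073741824) AS level_to_use
FROM   dual;
-- gives 5: level 4 covers the wrap, one more unit covers the base
```

The resulting SCN (level * 2^30) is then at or above the target, which satisfies the "TARGET >=" requirement in step a).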

Note 2: Adjust SCN
------------------
Subject: OERR: 600 2662 Block SCN is ahead of Current SCN
Creation Date: 21-OCT-1997
ORA-600 [2662] [a] [b] [c] [d] [e]
Versions: 7.0.16 - 8.0.5
Source: kcrf.h
===========================================================================
Meaning:
There are 3 forms of this error.

4/5 argument forms:
  The SCN found on a block (dependent SCN) was ahead of the current SCN.
  See below for this.

1 Argument (before 7.2.3):
  Oracle is in the process of writing a block to a log file. If the
  calculated block checksum is less than or equal to 1 (0 and 1 are
  reserved) ORA-600 [2662] is returned. This is a problem generating an
  offline immediate log marker (kcrfwg). *NOT DOCUMENTED HERE*
---------------------------------------------------------------------------
Argument Description:
Until version 7.2.3 this internal error can be logged for two separate
reasons, which we will refer to as type I and type II. The two types can
be distinguished by the number of arguments:
  Type I has four or five arguments after the [2662].
  Type II has one argument after the [2662].
From 7.2.3 onwards type II no longer exists.

Type I
~~~~~~

a. Current SCN WRAP
b. Current SCN BASE
c. dependent SCN WRAP
d. dependent SCN BASE
e. Where present this is the DBA where the dependent SCN came from.
   From kcrf.h: If the SCN comes from the recent or current SCN then a dba
   of zero is saved. If it comes from undo$ because the undo segment is
   not available then the undo segment number is saved, which looks like a
   block from file 0. If the SCN is for a media recovery redo (i.e. block
   number == 0 in change vector), then the dba is for block 0 of the
   relevant datafile. If it is from another database for a distributed
   xact then dba is DBAINF(). If it comes from a TX lock then the dba is
   really usn<<16+slot.
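The wrap/base pairs above combine into a single value: an SCN is effectively 48 bits, stored as a 16-bit wrap and a 32-bit base, so full SCN = wrap * 2^32 + base. A minimal sketch (the helper name is mine):

```shell
# full_scn WRAP BASE -> the combined SCN value (wrap * 2^32 + base).
# Useful for comparing arguments [a]/[b] (current SCN) against [c]/[d]
# (dependent SCN) as one number each.
full_scn() {
  echo $(( $1 * 4294967296 + $2 ))
}

full_scn 0 71195      # a base-only SCN, as in the trace examples
full_scn 1 100        # one full wrap of the base, plus 100
```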

Type II
~~~~~~~
a. checksum -> log block checksum - zero if none (thread # in old format)
---------------------------------------------------------------------------
Diagnosis:
~~~~~~~~~~
In addition to the different basic types from above, there are different
situations and coherences where ORA-600 [2662] type 'I' can be raised.
For diagnosis we can split up startup-issues and no-startup-issues.
Usually the startup-issues are more critical.

Getting started:
~~~~~~~~~~~~~~~~
(1) is the error raised during normal database operations (i.e. when the
    database is up) or during startup of the database?
(2) what is the SCN difference [d]-[b] (subtract argument 'b' from
    arg 'd')?
(3) is there a fifth argument [e]? If so convert the dba to file# block#.
    Is it a data dictionary object? (file#=1)
    If so find out the object name with the help of a reference dictionary
    from a second database
(4) What is the current SQL statement? (see trace)
    Which table is referred to? Does the table match the object you found
    in the previous step?
    Be careful at this point: there may be no relationship between the DBA
    in [e] and the real source of the problem (blockdump).

Deeper analysis:
~~~~~~~~~~~~~~~~
- investigate the trace file
  this will be a user trace file normally but could be an smon trace too
- search for: 'buffer' ("buffer dba" in Oracle7 dumps, "buffer tsn" in
  Oracle8 dumps)
  this will bring you to a blockdump which usually represents the 'real'
  source of OERI:2662
  WARNING: There may be more than one buffer pinned to the process so
  ensure you check out all pinned buffers.

-> does the blockdump match the dba from e.?
-> what kind of blockdump is it?
   (a) rollback segment header
   (b) datablock
   (c) other
SEE BELOW for EXAMPLES which demonstrate the sort of output you may see in
trace files and the things to check.

Check list and possible causes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- If Parallel Server, check both nodes are using the same lock manager
  instance & point at the same control files.
- If not Parallel Server, check that 2 instances haven't mounted the same
  database (Is there a second PMON process around?? - shut down any other
  instances to be sure)

Possible causes:
- doing an open resetlogs with _ALLOW_RESETLOGS_CORRUPTION enabled
- a hardware problem, like a faulty controller, resulting in a failed
  write to the control file or the redo logs
- restoring parts of the database from backup and not doing the
  appropriate recovery
- restoring a control file and not doing a RECOVER DATABASE USING BACKUP
  CONTROLFILE
- having _DISABLE_LOGGING set during crash recovery
- problems with the DLM in a parallel server environment
- a bug

Solutions:
- if the SCNs in the error are very close:
  Attempting a startup several times will bump up the dscn every time we
  open the database even if the open fails. The database will open when
  dscn=scn.
- ** You can bump the SCN on open using <Event:ADJUST_SCN>
  See [NOTE:30681.1]
  Be aware that you should really rebuild the database if you use this
  option.
- Once this has occurred you would normally want to rebuild the database
  via exp/rebuild/imp as there is no guarantee that some other blocks are
  not ahead of time.
Articles:
~~~~~~~~~
Solutions:
[NOTE:30681.1]    Details of the ADJUST_SCN Event
[NOTE:1070079.6]  alter system checkpoint
Possible Causes:
[NOTE:1021243.6]  CHECK INIT.ORA SETTING _DISABLE_LOGGING
[NOTE:74903.1]    How to Force the Database Open
                  (_ALLOW_RESETLOGS_CORRUPTION)
[NOTE:41399.1]    Forcing the database open with
                  `_ALLOW_RESETLOGS_CORRUPTION`
[NOTE:851959.9]   OERI:2662 DURING CREATE SNAPSHOT AT MASTER SITE

Known Bugs:

~~~~~~~~~~~
Fixed In.  Bug No.     Description
---------+-----------+-----------------------------------------------------
7.0.14     BUG:153638
7.1.5      BUG:229873
7.1.3      Bug:195115  Miscalculation of SCN on startup for distributed TX ?
7.1.6.2.7  Bug:297197  Port specific Solaris OPS problem
7.3        Bug:336196  Port specific IBM SP AIX problem -> dlm issue
7.3.4.5    Bug:851959  OERI:2662 possible from distributed OPS select
---------------------------------------------------------------------------
Examples:
~~~~~~~~~
Below are some examples of this type of error and the information you will
see in the trace files.

~~~~~~~~~~
CASE (a)
~~~~~~~~~~
blockdump should look like this:
***
buffer dba: 0x05000002 inc: 0x00000001 seq: 0x0001a9c6 ver: 1
type: 1=KTU UNDO HEADER
Extent Control Header
-----------------------------------------------------------------
Extent Control:: inc#: 716918 tsn: 4 object#: 0
***
-> interpret:
   dba: 0x05000002 -> 83886082 (0x05000002) = 5,2
   tsn: 4 -> this is rollback segment 4
   tsn: 4 -> this rollback segment is in tablespace 4

ORA-00600: internal error code, arguments: [2662], [0], [71183], [0],
[71195], [83886082], [], []

-> [e] > 0 and represents the dba from the block which is in the trace
-> [d]-[b] = 71195 - 71183 = 12
-> convert [d] to hex: 71195 = 0x1161B
   so this value can be found in the blockdump:
***
TRN TBL::
index  state cflags wrap#  uel    scn              dba
-----------------------------------------------------------------
...
0x4e   9     0x00   0x00d6 0xffff 0x0000.0001161b  0x00000000
...
***
-> possible cause
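The decimal-to-hex conversions used when matching the ORA-600 arguments against a blockdump can be reproduced with printf:

```shell
# Convert the decimal SCN base from the ORA-600 arguments to hex so it can
# be grepped for in the blockdump (compare with the TRN TBL scn column or
# the ITL Scn/Fsc column).
printf '0x%x\n' 71195     # the dependent SCN base from CASE (a)
printf '0x%x\n' 392919    # the dependent SCN base from CASE (b)
```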

So in this case the CURRENT SCN is LOWER than the SCN on this transaction,
ie: the current SCN looks like it has decreased!
This could happen if the database is opened with the
_allow_resetlogs_corruption parameter.
-> If some recovery steps have just been performed, review these steps as
   the mismatch may be due to an open resetlogs with
   _allow_resetlogs_corruption enabled or similar.
   See <Parameter:Allow_Resetlogs_corruption> for information on this
   parameter.
-----------------------------------------------------------------
~~~~~~~~~~
CASE (b)
~~~~~~~~~~
blockdump looks like this:
***
buffer dba: 0x0100012f inc: 0x00000815 seq: 0x00000d48 ver: 1
type: 6=trans data
Block header dump: dba: 0x0100012f
Object id on Block? Y
seg/obj: 0xe  csc: 0x00.5fed6  itc: 2  flg: O  typ: 1 - DATA
fsl: 0 fnx: 0x0
Itl   Xid                  Uba                 Flag  Lck  Scn/Fsc
0x01  0x0000.00b.0000036c  0x0100261c.0138.04  --U-  1    fsc 0x0000.0005fed7
0x02  0x0000.00a.0000037b  0x0100261d.0138.01  --U-  1    fsc 0x0000.0005fed4

data_block_dump
===============
...
***
interpret:
dba: 0x0100012f -> 16777519 (0x0100012f) ==> 1,303 (0x1, 0x12f)
***
SVRMGR> SELECT SEGMENT_NAME, SEGMENT_TYPE FROM DBA_EXTENTS
     2> WHERE FILE_ID = 1 AND 303 BETWEEN BLOCK_ID AND
     3> BLOCK_ID + BLOCKS - 1;
SEGMENT_NAME                                               SEGMENT_TYPE
---------------------------------------------------------- -----------------
UNDO$                                                      TABLE
1 row selected.
***
-> current sql-statement (trace):
***
update undo$ set name=:2,file#=:3,block#=:4,status$=:5,user#=:6,
undosqn=:7,xactsqn=:8,scnbas=:9,scnwrp=:10,inst#=:11 where us#=:1
ksedmp: internal or fatal error
ORA-00600: internal error code, arguments: [2662], [0], [392916], [0],
[392919], [0], [], []
***
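The dba-to-(file#, block#) conversions in these examples can be sketched as shell helpers. The 8-bit-file/24-bit-block split below matches the Oracle 7 dumps shown here; Oracle 8+ uses a 10-bit/22-bit split instead (second helper). Helper names are mine.

```shell
# dba_v7 DBA -> "file#,block#" using the Oracle 7 split (8 / 24 bits)
dba_v7() {
  echo "$(( $1 >> 24 )),$(( $1 & 16777215 ))"
}

# dba_v8 DBA -> "file#,block#" using the Oracle 8+ split (10 / 22 bits)
dba_v8() {
  echo "$(( $1 >> 22 )),$(( $1 & 4194303 ))"
}

dba_v7 $(( 0x0100012f ))   # the CASE (b) dba: file 1, block 303
dba_v7 $(( 0x05000002 ))   # the CASE (a) dba: file 5, block 2
```

The file#/block# pair then feeds straight into the DBA_EXTENTS query above.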

-> e. = 0, info not available
-> d-b = 392919 - 392916 = 3
-> dba from blockdump matches the object from the current sql statement
-> convert d. to hex: 392919 = 0x5FED7
   so this value can be found in the blockdump -> see ITL slot 0x01!

---------------------------------------------------------------------------
Some more internals:
~~~~~~~~~~~~~~~~~~~~
I will try to give another example in order to answer the question whether
the current SCN is decreased or the dependent SCN increased.

hypothesis: current SCN decreased
Evidence: reproduced ORA-600 [2662] by aborting a tx and using
_allow_resetlogs_corruption while doing open resetlogs.
Check the database SCN before!

Prerequisites:
  _allow_resetlogs_corruption = true in init<SID>.ora
  shutdown/startup db

*** BEGIN TESTCASE
SVRMGR> drop table tx;
Statement processed.
SVRMGR> create table tx (scn# number);
Statement processed.
SVRMGR> insert into tx values( userenv('COMMITSCN') );
1 row processed.
SVRMGR> select * from tx;
SCN#
----------
392942
1 row selected.

************ another session **************
SQL> connect scott/tiger
Connected.
SQL> update emp set sal=sal+1;
13 rows processed.
SQL> -- no commit here
*******************************************

SVRMGR> insert into tx values( userenv('COMMITSCN') );
1 row processed.
SVRMGR> select * from tx;
SCN#
----------
392942
392943
2 rows selected.

-- so current SCN will be 392943

SVRMGR> shutdown abort
ORACLE instance shut down.
-- this breaks tx
SVRMGR> startup mount pfile=e:\jv734\initj734.ora
ORACLE instance started.
Total System Global Area   11018952 bytes
Fixed Size                    35760 bytes
Variable Size               7698200 bytes
Database Buffers            3276800 bytes
Redo Buffers                   8192 bytes
Database mounted.
SVRMGR> recover database until cancel;
ORA-00279: Change 392925 generated at 10/26/99 17:13:03 needed for thread 1
ORA-00289: Suggestion : e:\jv734\arch\arch_2.arc
ORA-00280: Change 392925 for thread 1 is in sequence #2
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
cancel
Media recovery cancelled.
SVRMGR> alter database open resetlogs;
alter database open resetlogs
*
ORA-00600: internal error code, arguments: [2662], [0], [392928], [0],
[392931], [0], [], []
*** END TESTCASE

Because we know the current SCN from before (392943), we see that the
current SCN has decreased.

After solving the problem with shutdown abort/startup -> works:

SVRMGR> drop table tx;
Statement processed.
SVRMGR> create table tx (scn# number);
Statement processed.
SVRMGR> insert into tx values( userenv('COMMITSCN') );
1 row processed.
SVRMGR> select * from tx;
SCN#
----------
392943
1 row selected.

So we have exactly reached the current SCN from before 'shutdown abort'.
So the current SCN was bumped up from 392928 to 392942.

Note 3: Adjust SCN
------------------
Doc ID:       Note:28929.1
Content Type: TEXT/X-HTML
Subject:      ORA-600 [2662] "Block SCN is ahead of Current SCN"
Creation Date: 21-OCT-1997
Type:         REFERENCE
Last Revision Date: 15-OCT-2004
Status:       PUBLISHED
<Internal_Only>
This note contains information that was not reviewed by DDR. As such, the
contents are not necessarily accurate and care should be taken when
dealing with customers who have encountered this error. Thanks.
PAA Internals Group
</Internal_Only>

Note: For additional ORA-600 related information please read Note 146580.1

PURPOSE:
This article discusses the internal error "ORA-600 [2662]", what it means
and possible actions. The information here is only applicable to the
versions listed and is provided only for guidance.

ERROR:
ORA-600 [2662] [a] [b] [c] [d] [e]

VERSIONS:
versions 6.0 to 10.1

DESCRIPTION:
A data block SCN is ahead of the current SCN.
The ORA-600 [2662] occurs when an SCN is compared to the dependent SCN
stored in a UGA variable. If the SCN is less than the dependent SCN then
we signal the ORA-600 [2662] internal error.

ARGUMENTS:
Arg [a] Current SCN WRAP
Arg [b] Current SCN BASE
Arg [c] dependent SCN WRAP
Arg [d] dependent SCN BASE
Arg [e] Where present this is the DBA where the dependent SCN came from.

FUNCTIONALITY:
File and IO buffer management for redo logs

IMPACT:
INSTANCE FAILURE
POSSIBLE PHYSICAL CORRUPTION

SUGGESTIONS:
There are different situations where ORA-600 [2662] can be raised.
It can be raised on startup or during database operation.
If not using Parallel Server, check that 2 instances have not mounted the
same database.

Check for SMON traces and have the alert.log and trace files ready to send
to support.

Check the SCN difference [argument d]-[argument b]. If the SCNs in the
error are very close, then try to shutdown and startup the instance
several times. In some situations, the SCN increment during startup may
permit the database to open. Keep track of the number of times you
attempted a startup.

If the Known Issues section below does not help in terms of identifying a
solution, please submit the trace files and alert.log to Oracle Support
Services for further analysis.

Known Issues:
Bug# 2899477   See Note 2899477.8
               Minimise risk of a false OERI[2662]
               Fixed: 9.2.0.5, 10.1.0.2
Bug# 2764106   See Note 2764106.8
               False OERI[2662] possible on SELECT which can crash the
               instance
               Fixed: 9.2.0.5, 10.1.0.2
Bug# 2054025   See Note 2054025.8
               OERI:2662 possible on new TEMPORARY index block
               Fixed: 9.0.1.3, 9.2.0.1
Bug# 851959    See Note 851959.8
               OERI:2662 possible from distributed OPS select
               Fixed: 7.3.4.5
Bug# 647927 P  See Note 647927.8
               Digital Unix ONLY: OERI:2662 could occur under heavy load
               Fixed: 8.0.4.2, 8.0.5.0

<Internal_Only>
INTERNAL ONLY SECTION - NOT FOR PUBLICATION OR DISTRIBUTION TO CUSTOMERS
========================================================================
There were 2 forms of this error until 7.2.3:

Type I: 4/5 argument forms
   The SCN found on a block (dependent SCN) is ahead of the current SCN.
   See below for this.

Type II: 1 Argument (before 7.2.3 only)
   Oracle is in the process of writing a block to a log file. If the
   calculated block checksum is less than or equal to 1 (0 and 1 are
   reserved) ORA-600 [2662] is returned. This is a problem generating an
   offline immediate log marker (kcrfwg). *NOT DOCUMENTED HERE*

Type I
~~~~~~
a. Current SCN WRAP
b. Current SCN BASE
c. dependent SCN WRAP
d. dependent SCN BASE
e. Where present this is the DBA where the dependent SCN came from.
   From kcrf.h: If the SCN comes from the recent or current SCN then a dba
   of zero is saved. If it comes from undo$ because the undo segment is
   not available then the undo segment number is saved, which looks like a
   block from file 0. If the SCN is for a media recovery redo (i.e. block
   number == 0 in change vector), then the dba is for block 0 of the
   relevant datafile. If it is from another database for a distributed
   transaction then dba is DBAINF(). If it comes from a TX lock then the
   dba is really usn<<16+slot.

Type II
~~~~~~~
a. checksum -> log block checksum - zero if none (thread # in old format)
---------------------------------------------------------------------------
Diagnosis:
~~~~~~~~~~
In addition to the different basic types from above, there are different
situations where ORA-600 [2662] type I can be raised.

Getting started:
~~~~~~~~~~~~~~~~
(1) is the error raised during normal database operations (i.e. when the
    database is up) or during startup of the database?
(2) what is the SCN difference [d]-[b] (subtract argument 'b' from
    arg 'd')?
(3) is there a fifth argument [e]? If so convert the dba to file# block#.
    Is it a data dictionary object? (file#=1)
    If so find out the object name with the help of a reference dictionary
    from a second database
(4) What is the current SQL statement? (see trace)
    Which table is referred to? Does the table match the object you found
    in the previous step?
    Be careful at this point: there may be no relationship between the DBA
    in [e] and the real source of the problem (blockdump).

Deeper analysis:
~~~~~~~~~~~~~~~~
(1) investigate trace file:
    this will be a user trace file normally but could be an smon trace too
(2) search for: 'buffer' ("buffer dba" in Oracle7 dumps, "buffer tsn" in
    Oracle8/Oracle9 dumps)
    this will bring you to a blockdump which usually represents the 'real'
    source of OERI:2662
    WARNING: There may be more than one buffer pinned to the process so
    ensure you check out all pinned buffers.

-> does the blockdump match the dba from e.?

-> what kind of blockdump is it?
   (a) rollback segment header
   (b) datablock
   (c) other

Check list and possible causes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If Parallel Server, check both nodes are using the same lock manager
instance & point at the same control files.

Possible causes:
(1) doing an open resetlogs with _ALLOW_RESETLOGS_CORRUPTION enabled
(2) a hardware problem, like a faulty controller, resulting in a failed
    write to the control file or the redo logs
(3) restoring parts of the database from backup and not doing the
    appropriate recovery
(4) restoring a control file and not doing a RECOVER DATABASE USING BACKUP
    CONTROLFILE
(5) having _DISABLE_LOGGING set during crash recovery
(6) problems with the DLM in a parallel server environment
(7) a bug

Solutions:
(1) if the SCNs in the error are very close, attempting a startup several
    times will bump up the dscn every time we open the database even if
    the open fails. The database will open when dscn=scn.
(2) You can bump the SCN either on open or while the database is open
    using <Event:ADJUST_SCN> (see Note 30681.1). Be aware that you should
    rebuild the database if you use this option.
Once this has occurred you would normally want to rebuild the database via
exp/rebuild/imp as there is no guarantee that some other blocks are not
ahead of time.

Articles:
~~~~~~~~~
Solutions:
Note 30681.1    Details of the ADJUST_SCN Event
Note 1070079.6  Alter System Checkpoint
Possible Causes:
Note 1021243.6  CHECK INIT.ORA SETTING _DISABLE_LOGGING
Note 41399.1    Forcing the database open with
                `_ALLOW_RESETLOGS_CORRUPTION`
Note 851959.9   OERI:2662 DURING CREATE SNAPSHOT AT MASTER SITE

Known Bugs:
~~~~~~~~~~~

Fixed In.  Bug No.      Description
---------+------------+----------------------------------------------------
7.1.5      Bug 229873
7.1.3      Bug 195115   Miscalculation of SCN on startup for distributed
                        TX ?
7.1.6.2.7  Bug 297197   Port specific Solaris OPS problem
7.3        Bug 336196   Port specific IBM SP AIX problem -> dlm issue
7.3.4.5    Bug 851959   OERI:2662 possible from distributed OPS select
Not fixed  Bug 2216823  OERI:2662 reported when reusing tempfile with
                        restored DB
8.1.7.4    Bug 2177050  OERI:729 space leak possible (with tags "define
                        var info"/"oactoid info") can corrupt UGA and
                        cause OERI:2662
---------------------------------------------------------------------------
</Internal_Only>

19.47: _allow_read_only_corruption
==================================
If you have a media failure and for some reason (such as having lost an
archived log file) you cannot perform a complete recovery on some
datafiles, then you might need this parameter. It is new for 8i.
Previously there was only _allow_resetlogs_corruption which allowed you to
do a RESETLOGS open of the database in such situations. Of course, a
database forced open in this way would be in a crazy state because the
current SCN would reflect the extent of the incomplete recovery, but some
datafiles would have blocks in the future, which would lead to lots of
nasty ORA-00600 errors (although there is an ADJUST_SCN event that could
be used for relief).
Once in this position, the only thing to do would be to do a full database
export, rebuild the database, import and then assess the damage.
The new _allow_read_only_corruption provides a much cleaner solution to
the same problem. You should only use it if all other recovery options
have been exhausted, and you cannot open the database read/write. Once
again, the intent is to export, rebuild and import. Not pleasant, but
sometimes better than going back to an older usable backup and performing
incomplete recovery to a consistent state. Also, the read only open allows
you to better assess which recovery option you want to take without
committing you to either.

19.48: _allow_resetlogs_corruption
==================================
log problem:
Try this approach to solve problems with redolog files:
1. create a backup of all datafiles, redolog files and controlfiles.
2. set the next initialization parameter in init.ora:
   _allow_resetlogs_corruption = true
3. startup the database and try to open it
4. if the database can't be opened, then mount it and try to issue:
   alter session set events '10015 trace name adjust_scn level 1';
   # or if the previous doesn't work, increase the level to 2
   alter session set events '10015 trace name adjust_scn level 4096';
5. alter database open

You can try with recover database until cancel and then open it with the
resetlogs option. With this procedure I successfully recovered from losing
my redolog files.

Using event 10015 you are forcing an SCN jump that will eventually
synchronize the SCN values from your datafiles and controlfiles. The level
controls how much the SCN will be incremented. In the case of a 9.0.1 I
had, it worked only with 4096, however it may be that even a level of 1 to
3 would make the SCN jump 1 million. So you have to dump those headers and
compare the SCNs inside before and after the event 10015.

I was successful too in opening a db after losing the controlfile and
online redo logs, however Oracle support made it pretty clear that the
only usage for the database afterwards is to do a full export and recreate
it from that. It would be better if Oracle support walks you through this
procedure.

19.49: ORA-01503: CREATE CONTROLFILE failed
============================================
ORA-01503: CREATE CONTROLFILE failed
ORA-01161: database name PEGACC in file header does not match given name
of PEGSAV
ORA-01110: data file 1: '/u02/oradata/pegsav/system01.dbf'

Note 1:
=======

Problem:
You are attempting to recreate a controlfile with a 'create controlfile'
script and the script fails with the following error when it tries to
access one of the datafiles:

  ORA-1161, database name <name> in file header does not match given name

You are certain that the file is good and that it belongs to that
database.

Solution:
Check the file's properties in Windows Explorer and verify that it is not
a "Hidden" file.

Explanation:
If you have set the "Show All Files" option under Explorer, View, Options,
you are able to see 'hidden' files that other users and/or applications
cannot. If any or all datafiles are marked as 'hidden' files, Oracle does
not see them when it tries to recreate the controlfile. You must change
the properties of the file by right-clicking on the file in Windows
Explorer and then deselecting the check box marked "Hidden" under the
General tab. You should then be able to create the controlfile.

References:
Note 1084048.6 ORA-01503, ORA-01161: on Create Controlfile.

Note 2:
=======
This message may result if the db_name in the init.ora does not match the
"db_name" given while creating the controlfile. Also, remove any old
controlfiles present in the specified directory.
Thanks,

Note 3:
=======
We ran into a similar problem when trying to create a new instance with
datafiles from another database. The error comes in the create controlfile
statement. Oracle uses REUSE as the default option when you do the alter
database backup controlfile to trace. If you delete REUSE and set the new
database name, Oracle will change all the header information in the
database datafiles and you will be able to start up the instance.
Hope this helps.

Note 4:
=======
Try the command "CREATE CONTROLFILE SET DATABASE..." instead of
"CREATE CONTROLFILE REUSE DATABASE...". I think it would be better.

19.50. ORA-01031
================
Note 1:
-------
The 'OSDBA' and 'OSOPER' groups are chosen at installation time and
usually both default to the group 'dba'. These groups are compiled into
the 'oracle' executable and so are the same for all databases running from
a given ORACLE_HOME directory. The actual groups being used for OSDBA and
OSOPER can be checked thus:

  cd $ORACLE_HOME/rdbms/lib
  cat config.[cs]

The line '#define SS_DBA_GRP "group"' should name the chosen OSDBA group.
The line '#define SS_OPER_GRP "group"' should name the chosen OSOPER
group.

Note 2:
-------

Doc ID:   Note:69642.1
Content Type: TEXT/PLAIN
Subject:  UNIX: Checklist for Resolving Connect AS SYSDBA Issues
Creation Date: 20-APR-1999
Type:     TROUBLESHOOTING
Last Revision Date: 31-DEC-2004
Status:   PUBLISHED

Introduction:
~~~~~~~~~~~~~
This bulletin lists the documented causes of getting
---> prompted for a password when trying to CONNECT as SYSDBA
---> errors such as ORA-01031, ORA-01034, ORA-06401, ORA-03113,
     ORA-09925, ORA-09817, ORA-12705, ORA-12547

a) SQLNET.ORA Checks:
---------------------
1. The "sqlnet.ora" can be found in the following locations (listed by
   search order):

     $TNS_ADMIN/sqlnet.ora
     $HOME/sqlnet.ora
     $ORACLE_HOME/network/admin/sqlnet.ora

   Depending upon your operating system, it may also be located in:

     /var/opt/oracle/sqlnet.ora

     /etc/sqlnet.ora

   A corrupted "sqlnet.ora" file, or one with security options set, will
   cause a 'connect internal' request to prompt for a password.
   To determine if this is the problem, locate the "sqlnet.ora" that is
   being used. The one being used will be the first one found according to
   the search order listed above. Next, move the file so that it will not
   be found by this search:

     % mv sqlnet.ora sqlnet.ora_save

   Try to connect internal again. If it still fails, search for other
   "sqlnet.ora" files according to the search order listed above and
   repeat using the move command until you are sure there are no other
   "sqlnet.ora" files being used.
   If this does not resolve the issue, use the move command to put all the
   "sqlnet.ora" files back where they were before you made the change:

     % mv sqlnet.ora_save sqlnet.ora

   If moving the "sqlnet.ora" resolves the issue, then verify the contents
   of the file:

   a) SQLNET.AUTHENTICATION_SERVICES
      If you are not using database links, comment this line out or try
      setting it to:
        SQLNET.AUTHENTICATION_SERVICES = (BEQ,NONE)

   b) SQLNET.CRYPTO_SEED
      This should not be set in a "sqlnet.ora" file on UNIX. If it is,
      comment the line out. (This setting is added to the "sqlnet.ora" if
      it is built by one of Oracle's network configuration products
      shipped with client products)

   c) AUTOMATIC_IPC
      If this is set to "ON" it can force a "TWO_TASK" connection. Try
      setting this to "OFF":
        AUTOMATIC_IPC = OFF

2. Set the permissions correctly on the "TNS_ADMIN" files.
   The environment variable TNS_ADMIN defines the directory where the
   "sqlnet.ora", "tnsnames.ora", and "listener.ora" files reside.
   These files must contain the correct permissions, which are set when
   "root.sh" runs during installation.
   As root, run "root.sh" or edit the permissions on the "sqlnet.ora",
   "tnsnames.ora", and "listener.ora" files by hand as follows:

   $ cd $TNS_ADMIN
   $ chmod 644 sqlnet.ora tnsnames.ora listener.ora
   $ ls -l sqlnet.ora tnsnames.ora listener.ora
   -rw-r--r--  1 oracle dba  1628 Jul 12 15:25 listener.ora
   -rw-r--r--  1 oracle dba   586 Jun  1 12:07 sqlnet.ora
   -rw-r--r--  1 oracle dba 82274 Jul 12 15:23 tnsnames.ora
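The sqlnet.ora search order listed in step 1 can be walked with a small helper. The function name is mine and the path list is the one given in this note; verify the OS-level paths against your platform.

```shell
# Print the first sqlnet.ora Oracle would pick up, following the search
# order from step 1: $TNS_ADMIN, $HOME, $ORACLE_HOME/network/admin, then
# the OS-level locations.
find_sqlnet_ora() {
  for d in "$TNS_ADMIN" "$HOME" "$ORACLE_HOME/network/admin" \
           /var/opt/oracle /etc; do
    if [ -n "$d" ] && [ -f "$d/sqlnet.ora" ]; then
      echo "$d/sqlnet.ora"
      return 0
    fi
  done
  echo "no sqlnet.ora found" >&2
  return 1
}

find_sqlnet_ora || true   # prints the path, or a warning if none is found
```

The file it prints is the one to move aside when testing the 'connect internal' password prompt described above.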

b) Software and Operating System Issues: ---------------------------------------1. Be sure $ORACLE_HOME is set to the correct directory and does not have any typing mistakes: % cd $ORACLE_HOME % pwd If this returns a location other than your "ORACLE_HOME" or is invalid, you will need to reset the value of this environment variable: sh or ksh: ---------$ ORACLE_HOME=<path_to_ORACLE_HOME> $ export ORACLE_HOME Example: $ ORACLE_HOME=/u01/app/oracle/product/7.3.3 $ export ORACLE_HOME csh: ---% setenv ORACLE_HOME <path_to_ORACLE_HOME> Example: % setenv ORACLE_HOME /u01/app/oracle/product/7.3.3 If your "ORACLE_HOME" contains a link or the instance was started with the "ORACLE_HOME" set to another value, the instance may try to start using the memory location that another instance is using. An example of this might be: You have "ORACLE_HOME" set to "/u01/app/oracle/product/7.3.3" and start the instance. Then you do something like: % ln -s /u01/app/oracle/product/7.3.3 /u01/app/oracle/7.3.3 % setenv ORACLE_HOME /u01/app/oracle/7.3.3 % svrmgrl SVRMGR> connect internal If this prompts for a password then most likely the combination of your "ORACLE_HOME" and "ORACLE_SID" hash to the same shared memory address of another running instance. Otherwise you may be able to connect internal but you will receive an ORA-01034 "Oracle not available" error. In most cases using a link as part of your "ORACLE_HOME" is fine as long as you are consistent. Oracle recommends that links not be used as part of the "ORACLE_HOME", but

their use is supported. 2. Check that $ORACLE_SID is set to the correct SID, (including capitalization), and does not have any typos: % echo $ORACLE_SID Refer to Note:1048876.6 for more information. 3. Ensure $TWO_TASK is not set. To check if "TWO_TASK" is set, do the following: sh, ksh or on HP/UX only csh: ----------------------------env |grep -i two - or echo $TWO_TASK csh: ---setenv |grep -i two If any lines are returned such as: TWO_TASK= - or TWO_TASK=PROD You will need to unset the environment variable "TWO_TASK": sh or ksh: ---------unset TWO_TASK csh: ---unsetenv TWO_TASK Example : $ TWO_TASK=V817 $ export TWO_TASK $ sqlplus /nolog SQL*Plus: Release 8.1.7.0.0 - Production on Fri Dec 31 10:12:25 2004 (c) Copyright 2000 Oracle Corporation. All rights reserved. SQL> conn / as sysdba ERROR: ORA-01031: insufficient privileges $ unset TWO_TASK $ sqlplus /nolog SQL> conn / as sysdba Connected. If you are running Oracle release 8.0.4, and upon starting "svrmgrl" you receive an ORA-06401 "NETCMN: invalid driver designator" error, you should also unset two_task.

The login connect string may be getting its value from the TWO_TASK environment variable if this is set for the user. 4. Check the permissions on the Oracle executable: % cd $ORACLE_HOME/bin % ls -l oracle ('ls -n oracle' should work as well)

The permissions should be rwsr-s--x, or 6751. If the permissions are
incorrect, do the following as the "oracle" software owner:

  % chmod 6751 oracle

If you receive an ORA-03113 "end-of-file on communication" error followed
by a prompt for a password, then you may also need to check the ownership
and permissions on the dump directories.
These directories must belong to Oracle, group dba (or the appropriate
names for your installation). This error may occur while creating a
database. Permissions should be: 755 (drwxr-xr-x)

Also, the alert.log must not be greater than 2 Gigabytes in size.
When you start up "nomount" an Oracle pseudo process will try to write the
"alert.log" file in "udump". When Oracle cannot do this (either because of
permissions or because of the "alert.log" being greater than 2 Gigabytes
in size), it will issue the ORA-03113 error.

5. "osdba" group checks:

   a. Make sure the operating system user issuing the CONNECT INTERNAL
      belongs to the "osdba" group as defined in the
      "$ORACLE_HOME/rdbms/lib/config.s" or
      "$ORACLE_HOME/rdbms/lib/config.c". Typically this is set to "dba".
      To verify the operating system groups the user belongs to, do the
      following:

        % id
        uid=1030(oracle) gid=1030(dba)

      The "gid" here is "dba" so the "config.s" or "config.c" may contain
      an entry such as:

        /* 0x0008 15 */ .ascii "dba\0"

      If these do not match, you either need to add the operating system
      user to the group as it is seen in the "config" file, or modify the
      "config" file and relink the "oracle" binary.
      Refer to entry [NOTE:50507.1] section 3 for more details.

   b. Be sure you are not logged in as the "root" user and that the
      environment variables "USER", "USERNAME", and "LOGNAME" are not set
      to "root".
      The "root" user is a special case and cannot connect to Oracle as the
      "internal" user unless the effective group is changed to the "osdba"
      group, which is typically "dba".
      To do this, either modify the "/etc/passwd" file (not recommended) or

      use the "newgrp" command:

         # newgrp dba

      "newgrp" always opens a new shell, so you cannot issue "newgrp" from
      within a shell script. Keep this in mind if you plan on executing
      scripts as the "root" user.

   c. Verify that the "osdba" group is only listed once in the "/etc/group"
      file:

         % grep dba /etc/group
         dba::1010:
         dba::1100:

      If more than one line starting with the "osdba" group is returned,
      you need to remove the ones that are not correct. It is not possible
      to have more than one group use the same group name.

   d. Check that the oracle user uid and gid match in /etc/passwd and
      /etc/group:

         $ id
         uid=500(oracle) gid=235(dba)

         $ grep oracle /etc/passwd
         oracle:x:500:235:oracle:/home/oracle:/bin/bash
                     ^^^
         $ grep dba /etc/group
         dba:x:253:oracle
               ^^^

      The mismatch also causes an ORA-1031 error.

6. Verify that the file system is not mounted "nosuid":

      % mount
      /u07 on /dev/md/dsk/d7 nosuid/read/write

   If the filesystem is mounted "nosuid", as seen in this example, you will
   need to unmount the filesystem and mount it without the "nosuid" option.
   Consult your operating system documentation or your operating system
   vendor for instructions on modifying mount options.

7. Please read the following warning before you attempt to use the
   information in this step:

   ******************************************************************
   *                                                                *
   * WARNING: If you remove segments that belong to a running       *
   *          instance you will crash the instance, and this may    *
   *          cause database corruption.                            *
   *                                                                *
   *          Please call Oracle Support Services for assistance    *
   *          if you have any doubts about removing shared memory   *
   *          segments.                                             *
   *                                                                *
   ******************************************************************

   If an instance crashed or was killed off using "kill" there may be shared

   memory segments hanging around that belong to the down instance. If there
   are no other instances running on the machine you can issue:

      % ipcs -b
      T    ID     KEY          MODE          OWNER    GROUP    SEGSZ
      Shared Memory:
      m    0      0x50000ffe   --rw-r--r--   root     root     68
      m    1601   0x0eedcdb8   --rw-r-----   oracle   dba      4530176

   In this case the "ID" of "1601" is owned by "oracle", and if there are no
   other instances running, in most cases this can safely be removed:

      % ipcrm -m 1601

   If your SGA is split into multiple segments you will have to remove all
   segments associated with the instance.

   If there are other instances running, and you are not sure which memory
   segments belong to the failed instance, you can do the following:

   a. Shut down all the instances on the machine and remove whatever shared
      memory still exists that is owned by the software owner.

   b. Reboot the machine.

   c. If your Oracle software is release 7.3.3 or newer, you can connect
      into each instance that is up and identify the shared memory owned by
      that instance:

         % svrmgrl
         SVRMGR> connect internal
         SVRMGR> oradebug ipc

      In Oracle8:
      -----------
      Area #0 `Fixed Size', containing Subareas 0-0
      Total size 000000000000b8c0, Minimum Subarea size 00000000
      Subarea   Shmid   Size               Stable Addr
      0         7205    000000000000c000   80000000

      In Oracle7:
      -----------
      -------- Shared memory --------
      Seg Id   Address    Size
      2016     80000000   4308992
      Total: # of segments = 1, size = 4308992

      Note the "Shmid" for Oracle8 and "Seg Id" for Oracle7 for each running
      instance. By process of elimination find the segments that do not
      belong to an instance and remove them.

8. If you are prompted for a password and then receive error ORA-09925
   "unable to create audit trail file" or error ORA-09817 "write to audit
   file failed", along with "SVR4 Error: 28: No space left on device", do
   the following:

   Check your "pfile". It is typically in the "$ORACLE_HOME/dbs" directory
   and will be named "init<your_sid>.ora", where "<your_sid>" is the value
   of "ORACLE_SID" in your environment. If the "init<your_sid>.ora" file has
   the "ifile" parameter set, you will also have to check the included file
   as well. You are looking for the parameter "audit_file_dest".

   If "audit_file_dest" is set, change to that directory; otherwise change
   to the "$ORACLE_HOME/rdbms/audit" directory, as this is the default
   location for audit files. If the directory does not exist, create it.
   Ensure that you have enough space to create the audit file. The audit
   file is generally 600 bytes in size. If it does exist, verify you can
   write to the directory:

      % touch afile

   If it could not create the file called "afile", you need to change the
   permissions on your audit directory:

      % chmod 751

9. If connect internal prompts you for a password and then you receive an
   ORA-12705 "invalid or unknown NLS parameter value specified" error, you
   need to verify the settings for "ORA_NLS", "ORA_NLS32", "ORA_NLS33" or
   "NLS_LANG". You will need to consult your Installation and Configuration
   Guide for the proper settings for these environment variables.

10. If you have installed Oracle software and are trying to connect with
    Server Manager to create or start the database, and receive a TNS-12571
    "packet writer failure" error, please refer to Note:1064635.6

11. If in SVRMGRL (Server Manager line mode), you are running the
    "startup.sql" script and receive the following error:

       ld:so.1: oracle_home/bin/svrmgrl fatal relocation error
       symbol not found kgffiop

    RDBMS v7.3.2 is installed, RDBMS v8.0.4 is in a separate "oracle_home",
    and you are attempting to have them coexist. This is due to the wrong
    version of the client shared library "libclntsh.so.1" being used at
    runtime. Verify environment variable settings. You need to ensure that
    "ORACLE_HOME" and "LD_LIBRARY_PATH" are set correctly.
    For C-shell, type:

       % setenv ORACLE_HOME /u01/app/oracle/product/8.0.4
       % setenv LD_LIBRARY_PATH $ORACLE_HOME/lib

    For Bourne or Korn shell, type:

       $ ORACLE_HOME=/u01/app/oracle/product/8.0.4
       $ export ORACLE_HOME
       $ LD_LIBRARY_PATH=$ORACLE_HOME/lib
       $ export LD_LIBRARY_PATH

    (Set ORACLE_HOME first, so that $ORACLE_HOME expands to the correct
    value in LD_LIBRARY_PATH.)

12. Ensure that the disk the instance resides on has not reached 100% capacity.

       % df -k

    If it has reached 100% capacity, this may be the cause of
    'connect internal' prompting for a password. Additional disk space will
    need to be made available before 'connect internal' will work.
    For additional information refer to Note:97849.1

13. Delete the process.dat and regid.dat files in the
    $ORACLE_HOME/otrace/admin directory.
    Oracle Trace is enabled by default on 7.3.2 and 7.3.3 (depends on
    platform). This can cause high disk space usage by these files and
    cause a number of apparently mysterious side effects.
    See Note:45482.1 for more details.

14. When you get ORA-1031 "insufficient privileges" on connect internal
    after you supply a valid password, and you have multiple instances
    running from the same ORACLE_HOME, be sure that if an instance has
    REMOTE_LOGIN_PASSWORDFILE set to exclusive, the file
    $ORACLE_HOME/dbs/orapw<sid> does exist; otherwise it defaults to the
    use of the file orapw, which consequently causes access problems for
    any other database that has the parameter set to shared.
    Set the parameter REMOTE_LOGIN_PASSWORDFILE to shared for all instances
    that share the common password file, and create an exclusive orapw<sid>
    password file for any instances that have this set to exclusive.

15. Check permissions on the /etc/passwd file (Unix only).
    If Oracle cannot open the password file, connect internal fails with
    ORA-1031, since Oracle is not able to verify if the user trying to
    connect is indeed in the dba group.

    Example:
    --------
       # chmod 711 /etc/passwd
       # ls -ltr passwd
       -rwx--x--x 1 root sys 901 Sep 21 14:26 passwd

       $ sqlplus '/ as sysdba'
       SQL*Plus: Release 9.2.0.1.0 - Production on Sat Sep 21 16:21:18 2002
       Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
       ERROR:
       ORA-01031: insufficient privileges

    Trussing sqlplus will also show the problem:

       25338:  munmap(0xFF210000, 8192)              = 0
       25338:  lwp_mutex_wakeup(0xFF3E0778)          = 0
       25338:  lwp_mutex_lock(0xFF3E0778)            = 0
       25338:  time()                                = 1032582594
       25338:  open("/etc/passwd", O_RDONLY)         Err#13 EACCES
       25338:  getrlimit(RLIMIT_NOFILE, 0xFFBE8B28)  = 0
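The /etc/passwd vs /etc/group cross-checks in steps 5d and 15 lend themselves to scripting. A sketch that runs against throwaway copies of the files (the file names, uid and gids below deliberately mirror the mismatch example in step 5d):

```shell
#!/bin/sh
# Cross-check the oracle user's primary gid against the dba group's gid.
# Uses illustrative copies under /tmp, not the live system files.

passwd_file=/tmp/passwd.example
group_file=/tmp/group.example
cat > "$passwd_file" <<'EOF'
oracle:x:500:235:oracle:/home/oracle:/bin/bash
EOF
cat > "$group_file" <<'EOF'
dba:x:253:oracle
EOF

# 4th field of the passwd entry = the user's primary gid
gid=$(awk -F: '$1 == "oracle" { print $4 }' "$passwd_file")
# 3rd field of the group entry = the dba group's gid
dba_gid=$(awk -F: '$1 == "dba" { print $3 }' "$group_file")

if [ "$gid" = "$dba_gid" ]; then
    echo "MATCH: oracle's primary group is dba ($gid)"
else
    # Exactly this mismatch (235 vs 253) produces the ORA-1031 in step 5d
    echo "MISMATCH: oracle gid=$gid, dba group gid=$dba_gid"
fi
rm -f "$passwd_file" "$group_file"
```

Point the two variables at the real /etc/passwd and /etc/group to run the check for real.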

c) Operating System Specific checks:
------------------------------------

1. On OpenVMS, check that the privileges have been granted at the Operating
   System level:

      $ SET DEFAULT SYS$SYSTEM:
      $ RUN AUTHORIZE

   If the list returned by AUTHORIZE does not contain ORA_<SID>_DBA, or
   ORA_DBA, then you do not have the correct OS privileges to issue a
   connect internal.
   If ORA_<SID>_DBA was added AFTER ORA_DBA, then ORA_DBA needs to be
   removed and granted again to be updated.
   Please refer to Note:1010852.6 for more details.

2. On Windows NT, check if DBA_AUTHORIZATION is set to BYPASS in the
   registry.

3. On Windows NT, if you are able to connect internally but then startup
   fails for some reason, successive connect internal attempts might prompt
   for a password. You may also receive errors such as:

      ORA-12705: invalid or unknown NLS parameter value specified
      ORA-01012: not logged on
      LCC-00161: Oracle error (possible syntax error)
      ORA-01031: insufficient privileges

   Refer to entry Note:1027964.6 for suggestions on how to resolve this
   problem.

4. If you are using Multi-Threaded Server (MTS), make sure you are using a
   dedicated server connection. A dedicated server connection is required
   to start up or shut down the database.
   Unless the database alias in the "TNSNAMES.ORA" file includes a
   parameter to make a dedicated server connection, it will make a shared
   connection to a dispatcher. See Note:1058680.6 for more details.

5. On Solaris, if the file "/etc/.name_service_door" has incorrect
   permissions, Oracle cannot read the file. You will receive a message
   that the Oracle user cannot access "/etc/.name_service_door"
   (permission denied).
   This file is a flavor of IPC specific to Solaris which Oracle software
   is using. This can also cause connect internal problems.
   See entry Note:1066589.6

6. You are on Digital Unix, running SVRMGRL (Server Manager line mode),
   and you receive an ORA-12547 "TNS:lost contact" error and a password
   prompt. This problem occurs when using Parallel Server and the
   TruCluster software together. If Parallel Server is not linked in,
   svrmgrl works as expected.

   Oracle V8.0.5 requires an Operating System patch which previous versions
   of Oracle did not require. The above patch allows svrmgrl to communicate
   with the TCR software.
   You can determine if the patch is applied by running:

      % nm /usr/ccs/lib/libssn.a | grep adjust

   If this returns nothing, then you need to:

   1. Obtain the patch for TCR 1.5 from Digital. This patch is for the
      MC SCN and adds the symbol "adjustSequenceNumber" to the library
      /usr/ccs/lib/libssn.a.
   2. Apply the patch.
   3. Relink Oracle.

   Another possibility is that you need to raise the value of the kernel
   parameter per-proc-stack-size; increasing it from its default value of
   2097152 to 83886080 has resolved this problem.

7. You are on version 6.2 of the Silicon Graphics UNIX (IRIX) operating
   system and you have recently installed RDBMS release 8.0.3.
   If you are logged on as "oracle/dba" and an attempt to log in to Server
   Manager using "connect/internal" prompts you for a password, you should
   refer to entry Note:1040607.6

8. On AIX 4.3.3, after applying ML5 or higher you can no longer connect as
   internal, or if on 9.X '/ as sysdba' does not work as well.
   This is a known AIX bug and it occurs on all RS6000 ports including SP2.
   There are two workarounds and one solution. They are as follows:

   1) Use the mkpasswd command to remove the index. This is valid until a
      new user is added to "/etc/passwd" or modified:

         # mkpasswd -v -d

   2) Touch the "/etc/passwd" file. If the "/etc/passwd" file is newer
      than the index, it will not use the password file index:

         # touch /etc/passwd

   3) Obtain APAR IY22458 from IBM. Any questions about this APAR should
      be directed to IBM.

d) Additional Information:
--------------------------

1. In the "Oracle7 Administrator's Reference for UNIX", there is a note
   that states:

      If REMOTE_OS_AUTHENT is set to true, users who are members of the
      dba group on the remote machine are able to connect as INTERNAL
      without a password.
   However, if you are connecting remotely, that is, connecting via
   anything except the bequeath adapter, you will be prompted for a
   password regardless of the value of "REMOTE_OS_AUTHENT".
   Refer to bug 644988.

References:
~~~~~~~~~~~
[NOTE:1048876.6] UNIX: Connect internal prompts for password after install
[NOTE:1064635.6] ORA-12571: PACKET WRITER FAILURE WHEN STARTING SVRMGR
[NOTE:1010852.6] OPENVMS: ORA-01031: WHEN ISSUING "CONNECT INTERNAL" IN
                 SQL*DBA OR SERVER MANAGER
[NOTE:1027964.6] LCC-00161 AND ORA-01031 ON STARTUP
[NOTE:1058680.6] ORA-00106 or ORA-01031 ERROR when trying to STARTUP or
                 SHUTDOWN DATABASE
[NOTE:1066589.6] UNIX: Connect Internal asks for password when TWO_TASK
                 is set
[NOTE:1040607.6] SGI: ORA-01012 ORA-01031: WHEN USING SRVMGR AFTER 8.0.3
                 INSTALL WITHOUT PASSWORD
[NOTE:97849.1]   Connect internal Requires Password
[NOTE:50507.1]   SYSDBA and SYSOPER Privileges in Oracle8 and Oracle7
[NOTE:18089.1]   UNIX: Connect INTERNAL / AS SYSDBA Privilege on Oracle 7/8
[BUG:644988]     REMOTE_OS_AUTHENT=TRUE: NOT ALLOWING USERS TO CONNECT
                 INTERNAL

Search Words: ~~~~~~~~~~~~~ svrmgrm sqldba sqlplus sqlnet remote_login_passwordfile
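As an addendum to step 14 above (per-SID password files), the presence of the exclusive password file can be checked mechanically. A sketch in which the ORACLE_HOME path and SID are illustrative stand-ins created under /tmp for the demonstration:

```shell
#!/bin/sh
# Check that orapw<SID> exists when REMOTE_LOGIN_PASSWORDFILE=exclusive.
# The home and SID below are fabricated for the example.

ORACLE_HOME=/tmp/fake_oracle_home
ORACLE_SID=PROD
mkdir -p "$ORACLE_HOME/dbs"
: > "$ORACLE_HOME/dbs/orapwPROD"      # simulate an existing password file

pwfile="$ORACLE_HOME/dbs/orapw$ORACLE_SID"
if [ -f "$pwfile" ]; then
    echo "found $pwfile"
else
    echo "missing $pwfile; the instance falls back to the shared orapw file"
fi
rm -rf "$ORACLE_HOME"
```

With the real ORACLE_HOME and SID exported, the same test flags the fall-back-to-orapw situation described in step 14.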

Note 3:
-------
ORA-01031: insufficient privileges
Cause: An attempt was made to change the current username or password
without the appropriate privilege. This error also occurs if attempting to
install a database without the necessary operating system privileges.
Action: Ask the database administrator to perform the operation or grant
the required privileges.

Note 4:
-------
ORA-01031: insufficient privileges
In most cases, the user receiving this error lacks a privilege to create an
object (such as a table, view, procedure and the like). Grant the required
privilege like so:

   grant create table to user_lacking_privilege;

Startup
If someone receives this error while trying to start up the instance, the
logged-on user must belong to the ora_dba group on Windows or the dba group
on Unix.

Note 5:
-------
I am not sure it is the same, but I got this error today on Windows when
SQLNET.AUTHENTICATION_SERVICES in sqlnet.ora was NONE. Changing it to NTS
solved the problem.
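For reference, the sqlnet.ora setting that Note 5 above refers to looks like this. A minimal fragment (NTS applies to Windows OS authentication):

```
# sqlnet.ora -- NONE here causes the ORA-01031 described above;
# NTS enables NT native (OS) authentication for "/ as sysdba"
SQLNET.AUTHENTICATION_SERVICES = (NTS)
```

After changing the value, no restart of the database is needed; the setting is read at connect time.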

19.51 ORA-00600: internal error code, arguments: [17059]:
=========================================================

Note 1:
-------
Doc ID: Note:138554.1            Content Type: TEXT/PLAIN
Subject: ORA-600 [17059]         Creation Date: 02-APR-2001
Type: REFERENCE                  Last Revision Date: 09-DEC-2004
Status: PUBLISHED

Note: For additional ORA-600 related information please read [NOTE:146580.1]

PURPOSE:
  This article discusses the internal error "ORA-600 [17059]", what it
  means and possible actions. The information here is only applicable to
  the versions listed and is provided only for guidance.

ERROR:
  ORA-600 [17059] [a]

VERSIONS:
  versions 7.1 to 10.1

DESCRIPTION:
  While building a table to hold the list of child cursor dependencies
  relating to a given parent cursor, we exceed the maximum possible size
  of the table.

ARGUMENTS:
  Arg [a] Object containing the table

FUNCTIONALITY:
  Kernel Generic Library cache manager

IMPACT:
  PROCESS FAILURE
  NON CORRUPTIVE - No underlying data corruption.

SUGGESTIONS:
  One symptom of this error is that the session will appear to hang for a
  period of time prior to this error being reported.

  If the Known Issues section below does not help in terms of identifying
  a solution, please submit the trace files and alert.log to Oracle Support
  Services for further analysis.

  Issuing this SQL as SYS (SYSDBA) may help show any problem objects in
  the dictionary:

    select do.obj#,

           po.obj#, p_timestamp, po.stime,
           decode(sign(po.stime - p_timestamp), 0, 'SAME', '*DIFFER*') X
      from sys.obj$ do,
           sys.dependency$ d,
           sys.obj$ po
     where p_obj# = po.obj#(+)
       and d_obj# = do.obj#
       and do.status = 1             /* dependent is valid          */
       and po.status = 1             /* parent is valid             */
       and po.stime != p_timestamp   /* parent timestamp not match  */
     order by 2, 1;

  Normally the above select would return no rows. If any rows are returned
  the listed dependent objects may need recompiling.

  Known Issues:

    Bug# 3555003  See [NOTE:3555003.8]
      View compilation hangs / OERI:17059 after
      DBMS_APPLY_ADM.SET_DML_HANDLER
      Fixed: 9.2.0.6

    Bug# 2707304  See [NOTE:2707304.8]
      OERI:17059 / OERI:kqlupd2 / PLS-907 after adding partitions to
      partitioned IOT
      Fixed: 9.2.0.3, 10.1.0.2

    Bug# 2636685  See [NOTE:2636685.8]
      Hang / OERI:[17059] after adding a list value to a partition
      Fixed: 9.2.0.3, 10.1.0.2

    Bug# 2626347  See [NOTE:2626347.8]
      OERI:17059 accessing view after ADD / SPLIT PARTITION
      Fixed: 9.2.0.3, 10.1.0.2

    Bug# 2306331  See [NOTE:2306331.8]
      Hang / OERI[17059] on view after SET_KEY or SET_DML_INVOKATION on
      base table
      Fixed: 9.2.0.2

    Bug# 1115424  See [NOTE:1115424.8]
      Cursor authorization and dependency lists too long - can impact
      shared pool / OERI:17059
      Fixed: 8.0.6.2, 8.1.6.2, 8.1.7.0

    Bug# 631335   See [NOTE:631335.8]
      OERI:17059 from extensive re-use of a cursor
      Fixed: 8.0.4.2, 8.0.5.0, 8.1.5.0

    Bug# 558160   See [NOTE:558160.8]
      OERI:17059 from granting privileges multiple times

Fixed: 8.0.3.2, 8.0.4.0, 8.1.5.0 Note 2: ------Doc ID </help/usaeng/Search/search.html>: Note:234457.1 Content Type: TEXT/X-HTML Subject: ORA-600 [17059] Error When Compiling A Package Creation Date: 19-FEB-2003 Type: PROBLEM Last Revision Date: 24-AUG-2004 Status: PUBLISHED

fact: fact: Oracle Server - Enterprise Edition fact: Partitioned Tables / Indexes symptom: ORA-600 [17059] Error When Compiling A Package symptom: When Compiling a Package symptom: The Package Accesses a Partitioned Table symptom: ORA-00600: internal error code, arguments: [%s], [%s], [%s], [%s], [%s], [%s], [%s] symptom: internal error code, arguments: [17059], [352251864] symptom: Calling Location kglgob symptom: Calling Location kgldpo symptom: Calling Location kgldon symptom: Calling Location pkldon symptom: Calling Location pkloud symptom: Calling Location - phnnrl_name_resolve_by_loading cause: This is due to <bug:2073948 </metalink/plsql/showdoc?db=bug&id=2073948>> fixed in 10i, and occurs when accessing a partitioned table via a dblink within the package, where DDL (such as adding/dropping partitions) is performed on the table.

fix: This is fixed in 9.0.1.4, 9.2.0.2 & 10i. One-off patches are available for 8.1.7.4. A workaround is to flush the shared pool. Note 3: -------

Doc ID: Note:239796.1            Content Type: TEXT/PLAIN
Subject: ORA-600 [17059] when querying dba_tablespaces, dba_indexes,
         dba_ind_partitions etc
Creation Date: 28-MAY-2003       Type: PROBLEM
Last Revision Date: 13-AUG-2004  Status: PUBLISHED

Problem:
~~~~~~~~
The information in this article applies to:
Internal Error ORA-600 [17059] when querying data dictionary views like
dba_tablespaces, dba_indexes, dba_ind_partitions etc.

Symptom(s)
~~~~~~~~~~
While querying data dictionary views like dba_tablespaces, dba_indexes,
dba_ind_partitions etc, getting internal error ORA-600 [17059].

Change(s)
~~~~~~~~~~
You probably altered some objects or executed some cat*.sql scripts.

Cause
~~~~~~~
Some SYS objects are INVALID.

Fix
~~~~
Connect as SYS, run $ORACLE_HOME/rdbms/admin/utlrp.sql and make sure all
the objects are valid.

19.52: ORA-00600: internal error code, arguments: [17003]
=========================================================

Note 1:
-------
The information in this article applies to:
  Oracle Forms - Version: 9.0.2.7 to 9.0.2.12
  Oracle Server - Enterprise Edition - Version: 9.2
This problem can occur on any platform.

Errors
ORA 600 "internal error code, arguments: [%s],[%s],[%s], [%s], [%s],

Symptoms
The following error occurs when compiling a form or library (fmb / pll)
against RDBMS 9.2:

   PL/SQL ERROR 0 at line 0, column 0
   ORA-00600: internal error code, arguments: [17003], [0x11360BC], [275],
   [1], [], [], [], []

The error reproduces every time.

Triggers / local program units in the form / library contain calls to
stored database procedures and / or functions.
The error does not occur when compiling against RDBMS 9.0.1 or lower.

Cause
This is a known bug / issue. The compilation error occurs when the form
contains a call to a stored database function / procedure which has two
DATE IN variables receiving DEFAULT values such as SYSDATE.
Reference: <Bug:2713384>
Abstract: INTERNAL ERROR [1401] WHEN COMPILE FUNCTION WITH 2 DEFAULT DATE
VARIABLES ON 9.2

Fix
The bug is fixed in Oracle Forms 10g (9.0.4).
There is no backport fix available for Forms 9i (9.0.2).
To work around it, modify the offending calls to the stored database
procedures / functions so that DEFAULT parameter values are not passed
directly. For example, pass the DEFAULT value SYSDATE indirectly to the
stored database procedure / function by first assigning it to a local
variable in the form.

Note 2:
-------
Doc ID: Note:138537.1            Content Type: TEXT/PLAIN
Subject: ORA-600 [17003]         Creation Date: 02-APR-2001
Type: REFERENCE                  Last Revision Date: 15-OCT-2004
Status: PUBLISHED

Note: For additional ORA-600 related information please read [NOTE:146580.1]

PURPOSE:
  This article discusses the internal error "ORA-600 [17003]", what it
  means and possible actions. The information here is only applicable to
  the versions listed and is provided only for guidance.

ERROR:
  ORA-600 [17003] [a] [b] [c]

VERSIONS:
  versions 7.0 to 10.1

DESCRIPTION:
  The error indicates that we have tried to lock a library cache object by
  using the dependency number to identify the target object and have found
  that no such dependency exists.
  Under this situation we will raise an ORA-600 [17003] if the dependency
  number that we are using exceeds the number of entries in the dependency
  table or the dependency entry is not marked as invalidated.
ARGUMENTS: Arg [a] Library Cache Object Handle Arg [b] Dependency number

Arg [c] 1 or 2 (indicates where the error was raised internally) FUNCTIONALITY: Kernel Generic Library cache manager IMPACT: PROCESS MEMORY FAILURE NO UNDERLYING DATA CORRUPTION. SUGGESTIONS: A common condition where this error is seen is problematic upgrades. If a patchset has recently been applied, please confirm that there were no errors associated with this upgrade. Specifically, there are some XDB related bugs which can lead to this error being reported. Known Issues: Bug# 2611590 See [NOTE:2611590.8] <ml2_documents.showDocument?p_id=2611590.8 &p_database_id=NOT> OERI:[17003] running XDBRELOD.SQL Fixed: 9.2.0.3, 10.1.0.2 Bug# 3073414 XDB may not work after applying a 9.2 patch set Fixed: 9.2.0.5

19.53: ORA-00600: internal error code, arguments: [qmxiUnpPacked2], [121], [], [ ], [], [], [], [] ================================================================================ ================= Note 1. ------Doc ID: Subject: Note:222876.1 Content Type: ORA-600 [qmxiUnpPacked2] TEXT/PLAIN Creation Date: 09-DEC-2002

Type: REFERENCE Last Revision Date: 15-OCT-2004 Status: PUBLISHED Note: For additional ORA-600 related information please read [NOTE:146580.1] PURPOSE: This article discusses the internal error "ORA-600 [qmxiUnpPacked2]", what it means and possible actions. The information here is only applicable to the versions listed and is provided only for guidance. ERROR: ORA-600 [qmxiUnpPacked2] [a] VERSIONS: versions 9.2 to 10.1

DESCRIPTION: When unpickling an XOB or an array of XOBs an unexpected datatype was found. Generally due to XMLType data that has not been successfully upgraded from a previous version. ARGUMENTS: Arg [a] Type of XOB FUNCTIONALITY: Qernel xMl support Xob to/from Image IMPACT: PROCESS FAILURE NON CORRUPTIVE - No underlying data corruption. SUGGESTIONS: Please review the following article on Metalink : [NOTE:235423.1] How to resolve ORA-600 [qmxiUnpPacked2] during upgrade If you still encounter the error having tried the suggestions in the above article, or the article isn't applicible to your environment then ensure that the upgrade to current version was completed succesfully without error. If the Known Issues section below does not help in terms of identifying a solution, please submit the trace files and alert.log to Oracle Support Services for further analysis. Known Issues: Bug# 2607128 See [NOTE:2607128.8] OERI:[qmxiUnpPacked2] if CATPATCH.SQL/XDBPATCH.SQL fails Fixed: 9.2.0.3 Bug# 2734234 CONSOLIDATION BUG FOR ORA-600 [QMXIUNPPACKED2] DURING CATPATCH.SQL 9.2.0.2

Note 2. -------

Doc ID: Note:235423.1 Content Type: TEXT/X-HTML Subject: How to resolve ORA-600 [qmxiUnpPacked2] during upgrade Creation Date: 14-APR-2003 Type: HOWTO Last Revision Date: 18-MAR-2005 Status: PUBLISHED The information in this article applies to: Oracle 9.2.0.2 Multiple Platforms, 64-bit

Symptom(s)
~~~~~~~~~~
ORA-600 [qmxiUnpPacked2] []

Cause
~~~~~
If the error is seen after applying 9.2.0.2 on a 9.2.0.1 database, or if
using DBCA in 9.2.0.2 to create a new database (which is using the 9.2.0.1
seed database), then it is very likely that either shared_pool_size or
java_pool_size was too small when catpatch.sql was executed.
The error is generally seen as:

   ORA-600: internal error code, arguments: [qmxiUnpPacked2], [121]

There are 3 options to proceed from here:

Fix
~~~~
Option 1
========
If your shared_pool_size and java_pool_size are less than 150Mb then do
the following:

1/ Set your shared_pool_size and java_pool_size to 150Mb each.
   In some cases you may need to use larger pool sizes.
2/ Get the xdbpatch.sql script from Note 237305.1
3/ Copy xdbpatch.sql to $ORACLE_HOME/rdbms/admin/xdbpatch.sql, having
   taken a backup of the original file first
4/ Restart the instance with:
      startup migrate;
5/ spool catpatch
   @?/rdbms/admin/catpatch.sql

Option 2
========
If you already have shared_pool_size and java_pool_size set at greater
than 150Mb, then the problem may be caused by the shared memory allocated
during the JVM upgrade not being released properly. In that case do the
following:

1/ Set your shared_pool_size and java_pool_size to 150Mb each.
   In some cases you may need to use larger pool sizes.
2/ Get the xdbpatch.sql script from Note 237305.1

3/ Edit the xdbpatch.sql script and add the following as the first line in
   the script:
      alter system flush shared_pool;
4/ Copy xdbpatch.sql to $ORACLE_HOME/rdbms/admin/xdbpatch.sql, having
   taken a backup of the original file first
5/ Restart the instance with:
      startup migrate;
6/ spool catpatch
   @?/rdbms/admin/catpatch.sql

Option 3
========
If XDB is NOT in use and there are NO registered XML Schemas, an
alternative is to drop, and maybe re-install, XDB:

1/ To drop the XDB subsystem, connect as sys and run
      @?/rdbms/admin/catnoqm.sql
2/ You can then run catpatch.sql to perform the upgrade:
      startup migrate;
      @?/rdbms/admin/catpatch.sql
3/ Once complete you may choose to re-install the XDB subsystem; if so,
   connect as sys and run catqm.sql:
      @?/rdbms/admin/catqm.sql <XDB_PASSWD> <TABLESPACE> <TEMP_TABLESPACE>

If the error is seen during normal database operation, ensure that the
upgrade to the current version was completed successfully without error.
Once this is confirmed, attempt to reproduce the error; if successful,
forward the ALERT.LOG, trace files and full error stack to Oracle Support
Services for further analysis.

References
~~~~~~~~~~~
Bug 2734234    CONSOLIDATION BUG FOR ORA-600 [QMXIUNPPACKED2] DURING
               CATPATCH.SQL 9.2.0.2
Note 237305.1  Modified xdbpatch.sql
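For options 1 and 2 above, the pool sizes go into the init<SID>.ora before restarting with 'startup migrate'. A fragment with the note's 150 Mb minimum expressed in bytes (adjust upward if larger pools are needed):

```
# init<SID>.ora fragment -- minimum sizes suggested by Note 235423.1
shared_pool_size = 157286400    # 150 Mb
java_pool_size   = 157286400    # 150 Mb
```

157286400 is simply 150 * 1048576; the values can also be written as 150M on releases that accept the suffix notation.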

19.54 ORA-00600: internal error code, arguments: [kcbget_37], [1], [], [], [], [ ], [], [] ================================================================================ =========

ORA-00600: internal error code, arguments: [kcbso1_1], [], [], [], [], [], [], [ ] ORA-00600: internal error code, arguments: [kcbget_37], [1], [], [], [], [], [], [] Doc ID: Note:2652771.8 Subject: Support Description of Bug 2652771 Type: PATCH Status: PUBLISHED Content Type: TEXT/X-HTML Creation Date: 13-AUG-2003 Last Revision Date: 14-AUG-2003 Click here for details of sections in this note. Bug 2652771 AIX: OERI[1100] / OERI[KCBGET_37] SGA corruption This note gives a brief overview of bug 2652771. Affects: Product (Component) Oracle Server (RDBMS) Range of versions believed to be affected Versions < 10G Versions confirmed as being affected 8.1.7.4 9.2.0.2 Platforms affected Aix 64bit 5L Aix 64bit 433 Fixed: This issue is fixed in 9.2.0.3 (Server Patch Set) Symptoms: Memory Corruption Internal Error may occur (ORA-600) ORA-600 [1100] / ORA-600 [kcbget_37] Known Issues: Bug# 2652771 P See [NOTE:2652771.8] AIX: OERI[1100] / OERI[KCBGET_37] SGA corruption Fixed: 9.2.0.3

19.55 ORA-00600: internal error code, arguments: [kcbzwb_4], [], [], [], [], [], [], [] ================================================================================ ======= Doc ID: Note:4036717.8 Subject: Bug 4036717 - Truncate table in exception handler can causes OERI:kcbz wb_4 Type: PATCH Status: PUBLISHED Content Type: TEXT/X-HTML Creation Date: 25-FEB-2005 Last Revision Date: 09-MAR-2005 Click here for details of sections in this note. Bug 4036717 Truncate table in exception handler can causes OERI:kcbzwb_4

This note gives a brief overview of bug 4036717. Affects: Product (Component) PL/SQL (Plsql) Range of versions believed to be affected Versions < 10.2 Versions confirmed as being affected 10.1.0.3 Platforms affected Generic (all / most platforms affected) Fixed: This issue is fixed in 9.2.0.7 (Server Patch Set) 10.1.0.4 (Server Patch Set) 10g Release 2 (future version) Symptoms: Related To: Internal Error May Occur (ORA-600) ORA-600 [kcbzwb_4] PL/SQL Truncate Description Truncate table in exception handler can cause OERI:kcbzwb_4 with the fix for bug 3768052 installed. Workaround: Turn off or deinstall the fix for bug 3768052. Note that the procedure containing the affected transactional commands will have to be recompiled after backing out the bug fix.


19.56 ORA-00600: internal error code, arguments: [kcbgtcr_6], [], [], [], [], [] , [], [] ================================================================================ ======== Doc ID: Note:248874.1 Subject: ORA-600 [kcbgtcr_6] Type: REFERENCE Status: PUBLISHED Content Type: TEXT/X-HTML Creation Date: 18-SEP-2003 Last Revision Date: 25-MAR-2004 <Internal_Only> This note contains information that has not yet been reviewed by DDR. As such, the contents are not necessarily accurate and care should be taken when dealing with customers who have encountered this error. Thanks. PAA Internals Group </Internal_Only> Note: For additional ORA-600 related information please read Note 146580.1 PURPOSE: This article discusses the internal error "ORA-600 [kcbgtcr_6]", what it means and possible actions. The information here is only applicable to the versions listed and is provided only for guidance. ERROR: ORA-600 [kcbgtcr_6] [a] VERSIONS: versions 8.0 to 10.1 DESCRIPTION: Two buffers have been found in the buffer cache that are both current and for the same DBA (Data Block Address).

We should not have two 'current' buffers for the same DBA in the cache, if this is the case then this error is raised. ARGUMENTS: Arg [a] Buffer class Note that for Oracle release 9.2 and earlier there are no additional arguments reported with this error. FUNCTIONALITY: Kernel Cache Buffer management IMPACT: PROCESS FAILURE POSSIBLE INSTANCE FAILURE NON CORRUPTIVE - No underlying data corruption. SUGGESTIONS: Retry the operation. Does the error still occur after an instance bounce? If using 64bit AIX then ensure that minimum version in use is 9.2.0.3 or patch for Bug 2652771 has been applied. If the Known Issues section below does not help in terms of identifying a solution, please submit the trace files and alert.log to Oracle Support Services for further analysis. Known Issues: Bug 2652771 Shared data structures corrupted around latch code on 64bit AIX ports. Fixed 9.2.0.3 backports available for older versions (8.1.7) from Metalink. <Internal_Only> ORA-600 [kcbgtcr_6] Versions: 8.0.5 - 10.1 Meaning: We have two 'CURRENT' buffers for the same DBA. Argument Description: None --------------------------------------------------------------------------Explanation: We have identified two 'CURRENT' buffers for the same DBA in the cache, this is incorrect, and this error will be raised. --------------------------------------------------------------------------Diagnosis: Source: kcb.c

Check the trace file, this will show the buffers i.e :BH (0x70000003ffe9800) file#: 39 rdba: 0x09c131e6 (39/78310) class 1 ba: 0x70000003fcf0000 set: 6 dbwrid: 0 obj: 11450 objn: 11450 hash: [70000000efa9b00,70000004d53a870] lru: [70000000efa9b68,700000006fb8d68] ckptq: [NULL] fileq: [NULL] st: XCURRENT md: NULL rsop: 0x0 tch: 1 LRBA: [0x0.0.0] HSCN: [0xffff.ffffffff] HSUB: [255] RRBA: [0x0.0.0] BH (0x70000000efa9b00) file#: 39 rdba: 0x09c131e6 (39/78310) class 1 ba: 0x70000000e4f6000 set: 6 dbwrid: 0 obj: 11450 objn: 11450 hash: [70000004d53a870,70000003ffe9800] lru: [700000012fbaf68,70000003ffe9868] ckptq: [NULL] fileq: [NULL] st: XCURRENT md: NULL rsop: 0x0 tch: 2 LRBA: [0x0.0.0] HSCN: [0xffff.ffffffff] HSUB: [255] RRBA: [0x0.0.0] Here it is clear that we have two current buffers for the dba. Most likely cause for this is 64bit AIX Bug 2652771. If this isn't the case check the error reproduces consistently after bouncing the instance? Via SQLplus? What level of concurrency to reproduce? Is a testcase available? Check OS memory for errors. --------------------------------------------------------------------------Known Bugs: Bug 2652771 Shared data structures corrupted around latch code on 64bit AIX ports. - Fixed 9.2.0.3, backports available for older versions.
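As a side note on reading the BH lines in such a trace: for a 32-bit dba of this format, the relative file number sits in the top 10 bits and the block number in the low 22 bits, so the rdba from the dump above can be decoded in any POSIX shell:

```shell
# Decode the rdba 0x09c131e6 from the BH dump into file# and block#.
# Assumes the classic 10-bit-file / 22-bit-block layout of a 32-bit dba.
dba=$(( 0x09c131e6 ))
file_no=$(( dba >> 22 ))           # top 10 bits -> relative file number
block_no=$(( dba & 0x3FFFFF ))     # low 22 bits -> block number
echo "file#: $file_no block#: $block_no"
```

The result agrees with the (39/78310) printed next to the rdba in the dump above.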

19.57 ORA-00600: internal error code, arguments: [1100], [0x7000002FDF83F40], [0x7000002FDF83F40], [], [], [], [], []
=====================================================================================================================
Doc ID: Note:138123.1 Subject: ORA-600 [1100] Type: REFERENCE Status: PUBLISHED Content Type: TEXT/X-HTML Creation Date: 28-MAR-2001 Last Revision Date: 08-FEB-2005
Note: For additional ORA-600 related information please read Note 146580.1

PURPOSE: This article discusses the internal error "ORA-600 [1100]", what it means and possible actions. The information here is only applicable to the versions listed and is provided only for guidance. ERROR: ORA-600 [1100] [a] [b] [c] [d] [e] VERSIONS: versions 6.0 to 9.2 DESCRIPTION: This error relates to the management of standard double-linked (forward and backward) lists. Generally, if the list is damaged an attempt to repair the links is performed. Additional information will accompany this internal error. A dump of the link and often a core dump will coincide with this error. This is a problem with a linked list structure in memory. FUNCTIONALITY: GENERIC LINKED LISTS IMPACT: PROCESS FAILURE POSSIBLE INSTANCE FAILURE IF DETECTED BY PMON PROCESS No underlying data corruption. SUGGESTIONS: Known Issues: Bug# 3724548 See Note 3724548.8 OERI[kglhdunp2_2] / OERI[1100] under high load Fixed: 9.2.0.6, 10.1.0.4, 10.2 Bug# 3691672 + See Note 3691672.8 OERI[17067]/ OERI[26599] / dump (kgllkdl) from JavaVM / OERI:1100 from PMO N Fixed: 10.1.0.4, 10.2 Bug# 2652771 P See Note 2652771.8 AIX: OERI[1100] / OERI[KCBGET_37] SGA corruption Fixed: 9.2.0.3 Bug# 1951929 See Note 1951929.8 ORA-7445 in KQRGCU/kqrpfr/kqrpre possible Fixed: 8.1.7.3, 9.0.1.2, 9.2.0.1 Bug# 959593 See Note 959593.8 CTRL-C During a truncate crashes the instance Fixed: 8.1.6.3, 8.1.7.0 <Internal_Only>

INTERNAL ONLY SECTION - NOT FOR PUBLICATION OR DISTRIBUTION TO CUSTOMERS
No internal information at the present time.
</Internal_Only>
Note 2:
-------
Doc ID: Note:3724548.8 Subject: Bug 3724548 - OERI[kglhdunp2_2] / OERI[1100] under high load Type: PATCH Status: PUBLISHED Content Type: TEXT/X-HTML Creation Date: 24-SEP-2004 Last Revision Date: 13-JAN-2005
Bug 3724548 OERI[kglhdunp2_2] / OERI[1100] under high load
This note gives a brief overview of bug 3724548.
Affects: Product (Component) Oracle Server (Rdbms)
Range of versions believed to be affected: Versions < 10.2
Versions confirmed as being affected: 9.2.0.4, 9.2.0.5
Platforms affected: Generic (all / most platforms affected)
Fixed: This issue is fixed in 9.2.0.6 (Server Patch Set), 10.1.0.4 (Server Patch Set), 10g Release 2 (future version)
Symptoms: Related To: Memory Corruption, Internal Error May Occur (ORA-600), ORA-600 [kglhdunp2_2], ORA-600 [1100] (None Specified)
Description: When an instance is under high load it is possible for sessions to get ORA-600 [KGLHDUNP2_2] and ORA-600 [1100] errors. This can also show as a corrupt linked list in the SGA.
The full bug text (if published) can be seen at <Bug:3724548> (This link will not work for UNPUBLISHED bugs).
You can search for any interim patches for this bug here: <Patch:3724548> (This link will error if no interim patches exist).

19.58 Compilation problems DBI DBD:
===================================
We upgraded Oracle from 8.1.6 to 9.2.0.5 and I tried to rebuild the DBD::Oracle module but it threw errors like:
gcc: unrecognized option `-q64'
ld: 0711-736 ERROR: Input file /lib/crt0_64.o: XCOFF64 object files are not allowed in 32-bit mode.
collect2: ld returned 8 exit status
make: 1254-004 The error code from the last command is 1. Stop.
After some digging I found out that this is because the machine is AIX 5.2 running under 32-bit and it is looking at Oracle's lib directory, which has 64-bit libraries. So after running "perl Makefile.PL", I edited the Makefile:
1. changing the references to Oracle's ../lib to ../lib32,
2. changing crt0_64.o to crt0_r.o,
3. removing the -q32 and/or -q64 options from the list of libraries to link with.
Now when I ran "make" it went smoothly, and so did make test and make install. I ran my own simple perl testfile which connects to Oracle and gets some info, and it works fine. Now I have an application which can be customised to call perl scripts, and when I call this test script from that application it fails with:
install_driver(Oracle) failed: Can't load '/usr/local/perl/lib/site_perl/5.8.5/aix/auto/DBD/Oracle/Oracle.so' for module DBD::Oracle:
0509-022 Cannot load module /usr/local/perl/lib/site_perl/5.8.5/aix/auto/DBD/Oracle/Oracle.so.
0509-150 Dependent module /u00/oracle/product/9.2.0/lib/libclntsh.a(shr.o) could not be loaded.
0509-103 The module has an invalid magic number.
0509-022 Cannot load module /u00/oracle/product/9.2.0/lib/libclntsh.a.
0509-150 Dependent module /u00/oracle/product/9.2.0/lib/libclntsh.a could not be loaded.
at /usr/local/perl/lib/5.8.5/aix/DynaLoader.pm line 230. at (eval 3) line 3
Compilation failed in require at (eval 3) line 3.
Perhaps a required shared library or dll isn't installed where expected at /opt/dscmdevc/src/udps/test_oracle_dbd.pl line 45
What's happening here is that the application sets its own LIBPATH to include Oracle's lib (instead of lib32) at the beginning, and that makes perl look in the wrong place for the file libclntsh.a. Unfortunately it will take too long for the application developers to change this in their

application and I am looking for a quick solution. The test script is something like:
use Env;
use strict;
use lib qw( /opt/harvest/common/perl/lib ) ;
#use lib qw( $ORACLE_HOME/lib32 ) ;
use DBI;
my $connect_string="dbi:Oracle:";
my $datasource="d1ach2";
$ENV{'LIBPATH'} = "${ORACLE_HOME}/lib32:$ENV{'LIBPATH'}" ;
.
.
my $dbh = DBI->connect($connect_string, $dbuser, $dbpwd)
or die "Can't connect to $datasource: $DBI::errstr";
.
.
Adding 'use lib' or using '$ENV{LIBPATH}' to change the LIBPATH is not working because I need to make this work in this perl script, and the "use DBI" is run (or whatever the term is) in the compile phase before the LIBPATH is set in the run phase. I have a workaround for it: write a wrapper ksh script which exports the LIBPATH and then calls the perl script, which works fine; but I was wondering if there is a way to set the LIBPATH or do something else inside the current perl script so that it knows where to look for the right library files in spite of the wrong LIBPATH? Or did I miss something when I changed the Makefile and did not install everything right? Is there any way I can check this? (the make install did not throw any errors) Any help or thoughts on this would be much appreciated. Thanks! Rachana.

note 12:
--------
P550:/ # find . -name "libclnt*" -print
./apps/oracle/product/9.2/lib/libclntst9.a
./apps/oracle/product/9.2/lib/libclntsh.a
./apps/oracle/product/9.2/lib32/libclntst9.a
./apps/oracle/product/9.2/lib32/libclntsh.a
./apps/oracle/oui/bin/aix/libclntsh.so.9.0
P550:/ #
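The wrapper-script workaround mentioned above could look like the sketch below: it forces the 32-bit Oracle client libraries onto LIBPATH before the wrapped command (e.g. the perl interpreter) starts. The script name run32.sh is hypothetical, and the ORACLE_HOME default is an assumption taken from the paths in the error messages above.

```shell
#!/bin/sh
# run32.sh (hypothetical name): prepend Oracle's 32-bit client libraries to
# LIBPATH so DynaLoader resolves libclntsh.a from lib32, then exec the command.
ORACLE_HOME=${ORACLE_HOME:-/u00/oracle/product/9.2.0}   # assumed default location
LIBPATH="$ORACLE_HOME/lib32${LIBPATH:+:$LIBPATH}"
export ORACLE_HOME LIBPATH
echo "LIBPATH=$LIBPATH"     # show what the wrapped command will inherit
exec "$@"                   # e.g.: ./run32.sh perl test_oracle_dbd.pl
```

Because the LIBPATH is exported before perl is exec'ed, DynaLoader sees the lib32 directory at compile phase, which is exactly what cannot be done from inside the perl script itself.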

19.59 Listener problem: IBM/AIX RISC System/6000 Error: 13: Permission denied
-----------------------------------------------------------------------------
When starting the listener (LSNRCTL> start listener), it fails with:
TNS-12546: TNS:permission denied

TNS-12560: TNS:protocol adapter error
TNS-00516: Permission denied
IBM/AIX RISC System/6000 Error: 13: Permission denied

Note 1:
'TNS-12531: TNS:cannot allocate memory' may be misleading; it seems to be a permission problem (see also IBM/AIX RISC System/6000 Error: 13: Permission denied). A possible reason is: Oracle (more specifically the listener) is unable to read /etc/hosts because of permission problems, so host resolution is not possible.
..
..
The problem really was in the permissions of /etc/hosts on node2. It was -rw-r----- (640). Now it is -rw-rw-r-- (664) and everything goes OK. Thank you!

BUGS WITH REGARDS TO PRO*COBOL ON 9i:

19.60 Listener problem: IBM/AIX RISC System/6000 Error: 79: Connection refused
-----------------------------------------------------------------------------
d0planon@zb121l01:/data/oracle/d0planon/admin/home/$ lsnrctl
LSNRCTL for IBM/AIX RISC System/6000: Version 10.2.0.3.0 - Production on 12-OCT-2007 08:29:14
Copyright (c) 1991, 2006, Oracle. All rights reserved.
Welcome to LSNRCTL, type "help" for information.
LSNRCTL> status
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
TNS-12541: TNS:no listener
TNS-12560: TNS:protocol adapter error
TNS-00511: No listener
IBM/AIX RISC System/6000 Error: 79: Connection refused
Answer 1: Check if the oracle user can read /etc/hosts
Answer 2: Maybe there are multiple instances of the listener, so if you try the following
LSNRCTL> status <listener_name>
You might have a correct response.
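Since both listener notes above (Error 13 and Error 79) point at an unreadable /etc/hosts as the usual culprit, a tiny check script (a sketch, to be run as the user that starts the listener, usually oracle) rules that out quickly:

```shell
#!/bin/sh
# Quick sanity check: can the current user read /etc/hosts?
# TNS-12546 / "Error: 13: Permission denied" is often nothing more than this.
if [ -r /etc/hosts ]; then
    echo "/etc/hosts is readable - host name resolution should work"
else
    echo "/etc/hosts is NOT readable - as root, try: chmod 644 /etc/hosts"
fi
```

If the file is not readable, fixing the mode (as in the note above, 640 to 664, or simply 644) and restarting the listener is usually enough.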

19.61: 64BIT PRO*COBOL IS NOT THERE EVNN AFTER UPGRDING TO 9.2.0.3 ON AIX-5L BOX
--------------------------------------------------------------------------------

Bug No. 2859282
Filed 19-MAR-2003, Updated 01-NOV-2003
Product: Precompilers
Product Version: 9.2.0.3
Platform: AIX5L Based Systems (64-bit)
Platform Version: 5.*
Database Version: 9.2.0.3
Affects: Platforms Port-Specific
Severity: Severe Loss of Service
Status: Closed, Duplicate Bug
Base Bug: 2440385
Fixed in Product Version: No Data
Problem statement: 64BIT PRO*COBOL IS NOT THERE EVNN AFTER UPGRDING TO 9.2.0.3 ON AIX-5L BOX

*** 03/19/03 10:13 am *** 2889686.996
=========================
PROBLEM:
1. Clear description of the problem encountered:
Cst. has upgraded from 9.2.0.2 to 9.2.0.3 on an AIX 5L 64-Bit box and is not seeing the 64-bit Procob executable. Actually the same problem existed when upgrading from 9.2.0.1 to 9.2.0.2, but a one-off patch was provided in Bug#2440385 to resolve the issue. As per the Bug, the problem has been fixed in 9.2.0.3. But my Cst. is facing the same problem on 9.2.0.3 also.
This is what the Cst. says:
============================
This is the original bug # 2440385. The fix provides 64 bit versions of Pro*Cobol. There are two versions of the patch for the bug: one is for the 9.2.0.1 RDBMS and the other is for 9.2.0.2. So the last time I hit this issue, I applied the 9.2.0.2 RDBMS patch to the 9.2.0.1 install. The 9.2.0.2 patch also experienced the relinking problem on rtsora just like the 9.2.0.1 install did. I ignored the error to complete the patch application. Then I used the patch for the 2440385 bug to get 64 bit procob/rtsora executables (the patch actually provides executables rather than performing a successful relinking) to get the Pro*Cobol 1.8.77 precompiler to work with the MicroFocus Server Express 2.0.11 (64 bit) without encountering the "bad magic number" error.
For the current install that I am performing, I haven't downloaded the Oracle 9.2.0.3 Pro*Cobol capability fix either, so the rtsora relinking fails as well. Thus I don't have a working Pro*Cobol precompiler to allow me to generate our Cobol programs against the database.
2. Pertinent configuration information (MTS/OPS/distributed/etc)

3. Indication of the frequency and predictability of the problem . 4. Sequence of events leading to the problem . 5. Technical impact on the customer. Include persistent after effects. . ========================= DIAGNOSTIC ANALYSIS: . One-off patch should be provided on top of 9.2.0.3 as provided on top of 9.2.0.2/9.2.0.1 . ========================= WORKAROUND: . . ========================= RELATED BUGS: . 2440385 . ========================= REPRODUCIBILITY: . 1. State if the problem is reproducible; indicate where and predictability . 2. List the versions in which the problem has reproduced . 9.2.0.3 . 3. List any versions in which the problem has not reproduced

Further notes on PRO*COBOL:
===========================

Note 1:
=======
(9201,9202,9203,9204,9205)
32 bit cobol: procob32 or procob18_32
64 bit cobol: procob or procob18

PATCHES:

1. Patch 2663624: (Cobol patch for 9202 AIX 5L)
-----------------------------------------------
PSE FOR BUG2440385 ON 9.2.0.2 FOR AIX5L PORT 212
Patchset Exception: 2663624 / Base Bug 2440385
#------------------------------------------------------------------------#
# DATE: November 26, 2002
# ----------------------
# Platform Patch for : AIX Based Systems (Oracle 64bit) for 5L
# Product Version # : 9.2.0.2
# Product Patched : RDBMS


Bugs Fixed by this patch: ------------------------2440385 : PLEASE PROVIDE THE PATCH FOR SUPPORTING 64BIT PRO*COBOL Patch Installation Instructions: -------------------------------To apply the patch, unzip the PSE container file; % unzip p2440385_9202_AIX64-5L.zip Set your current directory to the directory where the patch is located: % cd 2663624 Ensure that the directory containing the opatch script appears in your $PATH; then enter the following command: % opatch apply

2. Patch 2440385: ----------------Results for Platform : AIX5L Based Systems (64-bit) Patch Description Release Updated Size 2440385 Pro*COBOL: PATCH FOR SUPPORTING 64BIT PRO*COBOL 9.2.0.3 27-APR-2003 34M 2440385 Pro*COBOL: PATCH FOR SUPPORTING 64BIT PRO*COBOL 9.2.0.2 26-NOV-2002 17M 2440385 Pro*COBOL: PATCH FOR SUPPORTING 64BIT PRO*COBOL 9.2.0.1 01-OCT-2002 17M

3. Patch 3501955 9205:
----------------------
Also includes 2440385: Provide the patch for supporting 64-bit Pro*COBOL.

Note 2:
=======
Problem precompiling Cobol program under Oracle 9i......
Hi, we recently upgraded to 9i. However, we still have 32 bit Cobol, so we're using the procob18_32 precompiler to compile our programs. Some of my compiles have worked successfully. However, I'm receiving the following error in one of my compiles:
1834 183400 01 IB0-STATUS PIC 9. 7SA 350 1834 ...................................^
PCC-S-0018: Expected "PICTURE clause", but found "9" at line 1834 in file
What's strange is that if I compile the program against the same DB using procob instead of procob18_32,

it compiles cleanly. I noticed in my compile that failed using procob18_32, it had the following message:
System default option values taken from: /u01/app/oracle/product/9.2.0.4/precomp/admin/pcccob.cfg
Yet, when I used procob, it had this message:
System default option values taken from: /u01/app/oracle/product/9.2.0.4/precomp/admin/pcbcfg.cfg
..
..
Hi, I started using procob32 instead of procob18_32, and that resolved my problem. Thanks for any help you may have already started to provide.

Note 3:
=======
Doc ID: Note:257934.1 Content Type: TEXT/X-HTML Subject: Pro*COBOL Application Fails in Runtime When Using Customized old Make Files With Signal 11 (MF Error 114) Creation Date: 20-NOV-2003 Type: PROBLEM Last Revision Date: 04-APR-2005 Status: MODERATED
The information in this article applies to: Precompilers - Version: 9.2.0.4. This problem can occur on any platform.
Symptoms
After upgrading Oracle server and Pro*COBOL from 9.2.0.3.0 to 9.2.0.4.0, applications are failing with cobol runtime error 114 when using 32-bit builds. Platform is AIX 4.3.3, which does not support 64-bit builds with Micro Focus Server Express 2.0.11.
Execution error : file 'sample1' error code: 114, pc=0, call=1, seg=0
114 Attempt to access item beyond bounds of memory (Signal 11)
Changes
Upgraded from 9.2.0.3.0 to 9.2.0.4.0.
Cause
The customized old make files for building 32-bit applications invoked the 64-bit precompilers procob or procob18 instead of procob32 or procob18_32.
Fix
Use the Oracle supplied make templates or change the customized old make files for 32-bit application builds. $ORACLE_HOME/precomp/demo/procob2/demo_procob_32.mk, $ORACLE_HOME/precomp/demo/procob/demo_procob_32.mk and $ORACLE_HOME/precomp/demo/procob/demo_procob18_32.mk invoke the wrong precompiler. To fix the problem add the following to $ORACLE_HOME/precomp/demo/procob2/demo_procob_32.mk:

PROCOB=procob32
Using $ORACLE_HOME/precomp/demo/procob/demo_procob_32.mk:
PROCOB_32=procob32
Using $ORACLE_HOME/precomp/demo/procob/demo_procob18_32.mk:
PROCOB18_32=procob18_32
The change can be added to the bottom of the make file.
References: Bug 3220095 - Procobol App Fails 114 Attempt To Access Item Beyond Bounds Of Memory (Signal 11)

Note 4:
=======
Thread Status: Closed
From: Jean-Daniel DUMAS 23-Nov-04 16:39 Subject: PROCOB18_32 Problem at execution ORA-00933
We are trying to migrate from Oracle 8.1.7.4 to Oracle 9.2.0.5. We've got problems with a lot of procobol programs using host table variables in PL/SQL blocks like:
EXEC SQL EXECUTE
BEGIN
FOR nIndice IN 1..:WI-NB-APPELS-TFO009S LOOP
UPDATE tmp_edition_erreur SET mon_nb_dec = :WTI-S2-MON-NB-DEC (nIndice)
WHERE mon_cod = :WTC-S2-MON-COD (nIndice) AND run_id = :WC-O-RUN-ID;
END LOOP;
END;
END-EXEC
At execution, we've got "ORA-00933 SQL command not properly ended". The problem seems to appear only if the host table variable is used inside a SELECT, UPDATE or DELETE command. For the INSERT VALUES command, it seems that we've got no problem. A workaround consists of assigning the host table variables to oracle table variables and replacing the host table variables inside the SQL command by the oracle table variables. But, as we've got a lot of programs like this, we don't enjoy doing this. Does somebody have another idea? jddumas@eram.fr
From: Oracle, Amit Joshi 05-Jan-05 06:26 Subject: Re : PROCOB18_32 Problem at execution ORA-00933

Hi Please refer to bug 3802067 on Metalink. From the details provided , it seems you are hitting the same. Best Regards Amit Joshi Note 5: ======= Re: Server Express 64bit and Oracle 9i problem (114) on AIX 5.2 Hi Wayne (and Panos) Apologies if you're aware of some of this already, but I just wanted to clarify the steps involved in creating and executing a Pro*COBOL application with Micro Focus Server Express on UNIX. When installing Pro*COBOL on UNIX (as part of the main Oracle installation), you need to have your COBOL environment setup, in order for the installer to relink a COBOL RTS containing the Oracle support libraries (rtsora/rtsora32/rtsora64). The 64-bit edition of Oracle 9i on AIX 5.x creates rtsora -- the 64-bit version of the run-time -- and rtsora32 -- the 32-bit version of the run-time. It's imperative that you use the correct edition of Server Express, i.e. 32-bit or 64-bit -- note well, that these are separate products on this platform -- for the mode in which you wish to use Oracle. In addition, you need to ensure that LIBPATH is set to point to the correct Oracle 'lib' directory -- $ORACLE_HOME/lib32 for 32-bit, or $ORACLE_HOME/lib for 64-bit If you wish to recreate those executables, say if you've updated your COBOL environment since installing Oracle, then from looking at the makefiles -ins_precomp.mk and env_precomp.mk -- then the effective commands to use to re-link the run-time correctly are as follows (logged in under your Oracle user ID) : either mode: <set up COBDIR, ORACLE_HOME, ORACLE_BASE, ORACLE_SID as appropriate for your installation> export PATH=$COBDIR/bin:$ORACLE_HOME/bin:$PATH 32-bit : export LIBPATH=$COBDIR/lib:$ORACLE_HOME/lib32:$LIBPATH cd $ORACLE_HOME/precomp/lib make LIBDIR=lib32 -f ins_precomp.mk EXE=rtsora32 rtsora32 64-bit: export LIBPATH=$COBDIR/lib:$ORACLE_HOME/lib:$LIBPATH cd $ORACLE_HOME/precomp/lib make -f ins_precomp.mk rtsora Regarding precompiling your application, Oracle provide two versions of Pro*COBOL. 
Again, you need to use the correct one depending on whether you're creating a 32-bit or 64-bit application, as the precompiler will

generate different code. If invoking Pro*COBOL directly, you need to use:
32-bit : procob32 / procob18_32, e.g.
procob32 myapp.pco
cob -it myapp.cob
rtsora32 myapp.int
or 64-bit : procob / procob18, e.g.
procob myapp.pco
cob -it myapp.cob
rtsora myapp.int
If you're using Server Express 2.2 SP1 or later, you can also compile using the Cobsql preprocessor, which will invoke the correct version of Pro*COBOL under the covers, allowing for a single precompile-compile step, e.g.
cob -ik myapp.pco -C "p(cobsql) csqlt==oracle8 endp"
This method also aids debugging, as you will see the original source code while animating, rather than the output from the precompiler. See the Server Express Database Access manual. Prior to SX 2.2 SP1, Cobsql only supported the creation of 32-bit applications.
I hope this helps -- if you're still having problems, please let me know. Regards, SimonT.

Re: Re: Server Express 64bit and Oracle 9i problem (114) on AIX 5.2
Hi Simon (and anyone else)
Thanks for that. We still seem to be getting a very unusual error with our compiles in our makes. A bit of background: we are "upgrading" from Oracle8i, SAS6, Solaris, MF COBOL 4.5 to AIX 5L, Oracle9i, SAS8 and MF Server Express COBOL. When we attempt to compile our COBOL it works fine. However if the COBOL has embedded Oracle SQL our precomp makes try to access ADA. We do not use ADA. I thought this must have been included by accident; but can find no flag or install option for it. So can you give us any clues as to why we are suffering an ADA plague :-))
Wayne

Re: Server Express 64bit and Oracle 9i problem (114) on AIX 5.2
Hi Wayne. On the surface, it appears as if you're not picking up the correct Pro*COBOL binary. If you invoke 'procob' from the command line, you should see something along the lines of :
Pro*COBOL: Release 9.2.0.4.0 - Production on Mon Apr 19 13:38:07 2004

followed by a list of Pro*COBOL options. Do you see this, or do you see a different banner (say, Pro*ADA, or Pro*Fortran)? Assuming you see something other than a Pro*COBOL banner, then if you invoke 'whence procob', does it show procob as being picked up from your Oracle bin directory (/home/oracle/9.2.0/bin/procob in my case)? If you're either not seeing the correct Pro*COBOL banner, or it's not located in the correct directory, I'd suggest rebuilding the procob and procob32 binaries. Logged in under your Oracle user ID, with the Oracle environment set up :
cd $ORACLE_HOME/precomp/lib
make -f ins_precomp.mk procob32 procob
and then try your compilation process again. Regards, SimonT.

Re: Re: Server Express 64bit and Oracle 9i problem (114) on AIX 5.2
Hi Simon
Firstly, thanks for all your help, it was greatly appreciated. We have the solution to our problem: The problem is resolved by modifying the line in the job from:
make -f $SRC_DIR/procob.mk COBS="$SRC_DIR/PFEM025A.cob SYSDATE.cob CNTLGET.cob" EXE=$SRC_DIR/PFEM025A
to
make -f $SRC_DIR/procob.mk build COBS="$SRC_DIR/PFEM025A.cob SYSDATE.cob CNTLGET.cob" EXE=$SRC_DIR/PFEM025A
It appears this (build keyword) is not a requirement for the job to run on Solaris but is for AIX. All is working fine. Cheers Wayne

Note 6:
=======
Doc ID: Note:2440385.8 Content Type: TEXT/X-HTML Subject: Support Description of Bug 2440385 Creation Date: 08-AUG-2003 Type: PATCH Last Revision Date: 15-AUG-2003 Status: PUBLISHED
Bug 2440385 AIX: Support for 64 bit ProCobol

This note gives a brief overview of bug 2440385. Affects: Product (Component) Precompilers (Pro*COBOL) Range of versions believed to be affected Versions >= 7 but < 10G Versions confirmed as being affected Platforms affected Aix 64bit 5L Fixed: 9.2.0.3

This issue is fixed in 9.2.0.4 (Server Patch Set)
Symptoms: (None Specified)
Related To: Pro* Precompiler
Description: Add support for 64 bit ProCobol
The full bug text (if published) can be seen at Bug 2440385. This link will not work for UNPUBLISHED bugs.

Note 7:
=======
Thread Status: Closed
From: Cathy Agada 18-Sep-03 21:40 Subject: How do I relink rtsora for 64 bit processing
I have the following error while relinking "rtsora" on AIX 5L/64bit platform on oracle 9.2.0.3 (I believe my patch is up-to-date). Our Micro Focus compiler version is 2.0.11
$>make -f ins_precomp.mk relink EXENAME=rtsora
/bin/make -f ins_precomp.mk LIBDIR=lib32 EXE=/app/oracle/product/9.2.0/precomp/lib/rtsora rtsora32
Linking /app/oracle/product/9.2.0/precomp/lib/rtsora
cob64: bad magic number: /app/oracle/product/9.2.0/precomp/lib32/cobsqlintf.o
make: 1254-004 The error code from the last command is 1. Stop.
make: 1254-004 The error code from the last command is 2.
My environment variables are as follows:
COBDIR=/usr/lpp/cobol
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/app/oracle/product/9.2.0/network/lib
SHLIB_PATH=$ORACLE_HOME/lib64:/app/oracle/product/9.2.0/lib32
I added 'define=bit64' in the precomp config file. Any ideas on what could be wrong. Thanks.

From: Oracle, Amit Chitnis 19-Sep-03 05:26 Subject: Re : How do I relink rtsora for 64 bit processing
Cathy,
Support for 64 bit Pro*Cobol 9.2.0.3 on AIX 5.1 was provided through a one-off patch for bug 2440385. You will need to download and apply the patch for bug 2440385.
==OR==
You can download and apply the latest 9.2.0.4 patchset where the bug is fixed.
Thanks, Amit Chitnis.

Note 8:
=======
Doc ID: Note:215279.1 Content Type: TEXT/X-HTML Subject: Building Pro*COBOL Programs Fails With "cob64: bad magic number: " Creation Date: 08-APR-2003 Type: PROBLEM Last Revision Date: 15-APR-2003 Status: PUBLISHED
fact: Pro*COBOL 9.2.0.2
fact: Pro*COBOL 9.2.0.1
fact: AIX-Based Systems (64-bit)
symptom: Building Pro*COBOL programs fails
symptom: cob64: bad magic number: %s
symptom: /oracle/product/9.2.0/precomp/lib32/cobsqlintf.o
cause: Bug 2440385 AIX: Support for 64 bit ProCobol
fix: This is fixed in Pro*COBOL 9.2.0.3. A One-Off patch for Pro*COBOL 9.2.0.2 has been provided in Metalink Patch Number 2440385.
Reference: How to Download a Patch from Oracle

Note 9:
=======

If you wish to recreate those executables, say if you've updated your COBOL environment since installing Oracle, then from looking at the makefiles -ins_precomp.mk and env_precomp.mk -- then the effective commands to use to re-link the run-time correctly are as follows (logged in under your Oracle user ID) : either mode: <set up COBDIR, ORACLE_HOME, ORACLE_BASE, ORACLE_SID as appropriate for your installation> export PATH=$COBDIR/bin:$ORACLE_HOME/bin:$PATH 32-bit : export LIBPATH=$COBDIR/lib:$ORACLE_HOME/lib32:$LIBPATH cd $ORACLE_HOME/precomp/lib make LIBDIR=lib32 -f ins_precomp.mk EXE=rtsora32 rtsora32 64-bit: export LIBPATH=$COBDIR/lib:$ORACLE_HOME/lib:$LIBPATH cd $ORACLE_HOME/precomp/lib make -f ins_precomp.mk rtsora Note 10: ======== On 9.2.0.5, try to get the pro cobol patch for 9203. Then just copy the procobol files to the cobol directory.

19.62: ORA-12170:
=================
Connection Timeout.
Doc ID: Note:274303.1 Content Type: TEXT/X-HTML Subject: Description of parameter SQLNET.INBOUND_CONNECT_TIMEOUT Creation Date: 26-MAY-2004 Type: BULLETIN Last Revision Date: 10-FEB-2005 Status: MODERATED
*** This article is being delivered in Draft form and may contain errors. Please use the MetaLink "Feedback" button to advise Oracle of any issues related to this article. ***
PURPOSE
-------
To specify the time, in seconds, for a client to connect with the database server and provide the necessary authentication information.
Description of parameter SQLNET.INBOUND_CONNECT_TIMEOUT
-------------------------------------------------------

This parameter has been introduced in 9i version. This has to be configured in sqlnet.ora file. Use the SQLNET.INBOUND_CONNECT_TIMEOUT parameter to specify the time, in seconds, for a client to connect with the database server and provide the necessary authentication information. If the client fails to establish a connection and complete authentication in the time specified, then the database server terminates the connection. In addition, the database server logs the IP address of the client and an ORA-12170: TNS:Connect timeout occurred error message to the sqlnet.log file. The client receives either an ORA-12547: TNS:lost contact or an ORA-12637: Packet receive failed error message. Without this parameter, a client connection to the database server can stay open indefinitely without authentication. Connections without authentication can introduce possible denial-of-service attacks, whereby malicious clients attempt to flood database servers with connect requests that consume resources. To protect both the database server and the listener, Oracle Corporation recommends setting this parameter in combination with the INBOUND_CONNECT_TIMEOUT_listener_name parameter in the listener.ora file. When specifying values for these parameters, consider the following recommendations: *Set both parameters to an initial low value. *Set the value of the INBOUND_CONNECT_TIMEOUT_listener_name parameter to a lower value than the SQLNET.INBOUND_CONNECT_TIMEOUT parameter. For example, you can set INBOUND_CONNECT_TIMEOUT_listener_name to 2 seconds and INBOUND_CONNECT_TIMEOUT parameter to 3 seconds. If clients are unable to complete connections within the specified time due to system or network delays that are normal for the particular environment, then increment the time as needed. 
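Put together, the recommended pairing above might look like this on the database server (a sketch; LISTENER is a placeholder for your actual listener name, and the values are the initial low settings suggested above):

```
# sqlnet.ora on the database server
SQLNET.INBOUND_CONNECT_TIMEOUT = 3

# listener.ora on the same server (parameter suffix = listener name)
INBOUND_CONNECT_TIMEOUT_LISTENER = 2
```

Note that the listener-side value is deliberately lower than the sqlnet.ora value, matching the recommendation above.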
By default it is set to None.
Example: SQLNET.INBOUND_CONNECT_TIMEOUT=3
RELATED DOCUMENTS
-----------------
Oracle9i Net Services Reference Guide, Release 2 (9.2), Part Number A96581-02
SQLNET.EXPIRE_TIME:
-------------------
Purpose: Determines the time interval to send a probe to verify the session is alive
See Also: Oracle Advanced Security Administrator's Guide
Default: None

Minimum Value: 0 minutes
Recommended Value: 10 minutes
Example:
  sqlnet.expire_time=10

sqlnet.expire_time enables dead connection detection: after the specified time (in minutes),
the server checks whether the client is still connected. If not, the server process exits.
This parameter must be set on the server.

PROBLEM: Long query (20 minutes) returns ORA-01013 after about a minute.

SOLUTION: The SQLNET.ORA parameter SQLNET.EXPIRE_TIME was set to one (1).
The parameter was changed to...
  SQLNET.EXPIRE_TIME=2147483647
This allowed the query to complete. This is documented in the Oracle Troubleshooting
manual on page 324. The manual part number is A54757.01.

Keywords: SQLNET.EXPIRE_TIME,SQLNET.ORA,ORA-01013

sqlnet.expire_time should be set on the server. The server sends keep-alive traffic
over connections that have already been established. You won't need to change your
firewall.

sqlnet.expire_time is actually intended to test connections in order to allow Oracle
to clean up resources from connections that abnormally terminated. The architecture
to do that means that the server will send a probe packet to the client. That probe
packet is viewed by most firewalls as traffic on the line, which in short resets the
idle timers on the firewall. So if you suffer from disconnects caused by idle timers,
it may help. It was not intended for that purpose, but it is a byproduct of the design.

19.63: Tracing SQLNET:
======================

Note 1:
-------

Doc ID:             Note:219968.1
Subject:            SQL*Net, Net8, Oracle Net Services - Tracing and Logging at a Glance
Type:               BULLETIN
Status:             PUBLISHED
Content Type:       TEXT/X-HTML
Creation Date:      20-NOV-2002
Last Revision Date: 26-AUG-2003

TITLE
-----
SQL*Net, Net8, Oracle Net Services - Tracing and Logging at a Glance.

PURPOSE
-------
The purpose of Oracle Net tracing and logging is to provide detailed information
to track and diagnose Oracle Net problems such as connectivity issues, abnormal
disconnection and connection delay.

Tracing provides varying degrees of information that describe connection-specific
internal operations during Oracle Net usage. Logging reports summary, status and
error messages.

Oracle Net Services is the replacement name for the Oracle Networking product
formerly known as SQL*Net (Oracle7 [v2.x]) and Net8 (Oracle8/8i [v8.0/8.1]).
For consistency, the term Oracle Net is used throughout this article and refers
to all Oracle Net product versions.

SCOPE & APPLICATION
-------------------
The aim of this document is to overview SQL*Net, Net8, Oracle Net Services
tracing and logging facilities. The intended audience includes novice Oracle
users and DBAs alike. Although only basic information on how to enable and
disable tracing and logging features is described, the document also serves as
a quick reference. The document provides the reader with the minimum information
necessary to generate trace and log files with a view to forwarding them to
Oracle Support Services (OSS) for further diagnosis. The article does not intend
to describe trace/log file contents or explain how to interpret them.

LOG & TRACE PARAMETER OVERVIEW
------------------------------
The following is an overview of Oracle Net trace and log parameters.
TRACE_LEVEL_[CLIENT|SERVER|LISTENER]     = [0-16|USER|ADMIN|SUPPORT|OFF]
TRACE_FILE_[CLIENT|SERVER|LISTENER]      = <FILE NAME>
TRACE_DIRECTORY_[CLIENT|SERVER|LISTENER] = <DIRECTORY>
TRACE_UNIQUE_[CLIENT|SERVER|LISTENER]    = [ON|TRUE|OFF|FALSE]
TRACE_TIMESTAMP_[CLIENT|SERVER|LISTENER] = [ON|TRUE|OFF|FALSE]   #Oracle8i+
TRACE_FILELEN_[CLIENT|SERVER|LISTENER]   = <SIZE in KB>          #Oracle8i+
TRACE_FILENO_[CLIENT|SERVER|LISTENER]    = <NUMBER>              #Oracle8i+
LOG_FILE_[CLIENT|SERVER|LISTENER]        = <FILE NAME>
LOG_DIRECTORY_[CLIENT|SERVER|LISTENER]   = <DIRECTORY NAME>
LOGGING_LISTENER                         = [ON|OFF]
TNSPING.TRACE_LEVEL                      = [0-16|USER|ADMIN|SUPPORT|OFF]
TNSPING.TRACE_DIRECTORY                  = <DIRECTORY>
NAMES.TRACE_LEVEL                        = [0-16|USER|ADMIN|SUPPORT|OFF]
NAMES.TRACE_FILE                         = <FILE NAME>
NAMES.TRACE_DIRECTORY                    = <DIRECTORY>
NAMES.TRACE_UNIQUE                       = [ON|OFF]
NAMES.LOG_FILE                           = <FILE NAME>
NAMES.LOG_DIRECTORY                      = <DIRECTORY>
NAMES.LOG_UNIQUE                         = [ON|OFF]
NAMESCTL.TRACE_LEVEL                     = [0-16|USER|ADMIN|SUPPORT|OFF]
NAMESCTL.TRACE_FILE                      = <FILE NAME>
NAMESCTL.TRACE_DIRECTORY                 = <DIRECTORY>
NAMESCTL.TRACE_UNIQUE                    = [ON|OFF]

Note: With the exception of parameters suffixed with LISTENER, all other
parameter suffixes and prefixes [CLIENT|NAMES|NAMESCTL|SERVER|TNSPING] are fixed
and cannot be changed. For parameters suffixed with LISTENER, the suffix name
should be the actual Listener name. For example, if the Listener name is
PROD_LSNR, an example trace parameter name would be TRACE_LEVEL_PROD_LSNR=OFF.

CONFIGURATION FILES
-------------------
Files required to enable Oracle Net tracing and logging features include:

Oracle Net Listener         LISTENER.ORA                   LISTENER.TRC
Oracle Net - Client         SQLNET.ORA on client           SQLNET.TRC
Oracle Net - Server         SQLNET.ORA on server           SQLNET.TRC
TNSPING Utility             SQLNET.ORA on client/server    TNSPING.TRC
Oracle Name Server          NAMES.ORA                      NAMES.TRC
Oracle NAMESCTL             SQLNET.ORA on server
Oracle Connection Manager   CMAN.ORA
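For instance, for a listener named PROD_LSNR (the name used in the Note above), the
LISTENER.ORA trace parameters would be suffixed accordingly; the directory shown here
is only an illustrative assumption:

```
# listener.ora -- trace parameters for a listener named PROD_LSNR
TRACE_LEVEL_PROD_LSNR     = ADMIN
TRACE_FILE_PROD_LSNR      = prod_lsnr
TRACE_DIRECTORY_PROD_LSNR = /u01/app/oracle/network/trace
```

Remember that the listener must be reloaded or restarted before these take effect.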

CONSIDERATIONS WHEN USING LOGGING/TRACING
-----------------------------------------
1. Verify which Oracle Net configuration files are in use. By default, Oracle
   Net configuration files are sought and resolved from the following locations:

   TNS_ADMIN environment variable (incl. Windows Registry Key)
   /etc or /var/opt/oracle (Unix)
   $ORACLE_HOME/network/admin (Unix)
   %ORACLE_HOME%/Network/Admin or %ORACLE_HOME%/Net80/Admin (Windows)

   Note: User-specific Oracle Net parameters may also reside in the
   $HOME/sqlnet.ora file. An Oracle Net server installation is also a client.

2. Oracle Net tracing and logging can consume vast quantities of disk space.
   Monitor for sufficient disk space when tracing is enabled. On some Unix
   operating systems, /tmp is used for swap space. Although generally writable
   by all users, this is not an ideal location for trace/log file generation.

3. Oracle Net tracing should only be enabled for the duration of the issue at
   hand. Oracle Net tracing should always be disabled after problem resolution.

4. Large trace/log files place an overhead on the processes that generate them.
   In the absence of issues, the disabling of tracing and/or logging will
   improve Oracle Net overall efficiency. Alternatively, regularly truncating
   log files will also improve efficiency.

5. Ensure that the target trace/log directory is writable by the connecting
   user, Oracle software owner and/or user that starts the Net Listener.

LOG & TRACE PARAMETERS
----------------------
This section provides a detailed description of each trace and log parameter.

TRACE LEVELS

TRACE_LEVEL_[CLIENT|SERVER|LISTENER] = [0-16|USER|ADMIN|SUPPORT|OFF]

Determines the degree to which Oracle Net tracing is provided.
Configuration file is SQLNET.ORA, LISTENER.ORA.
Level 0 is disabled - level 16 is the most verbose tracing level.
Listener tracing requires the Net Listener to be reloaded or restarted after
adding trace parameters to LISTENER.ORA. Oracle Net (client/server) tracing
takes immediate effect after tracing parameters are added to SQLNET.ORA.
By default, the trace level is OFF.

OFF     (equivalent to 0)  disabled - provides no tracing.
USER    (equivalent to 4)  traces to identify user-induced error conditions.
ADMIN   (equivalent to 6)  traces to identify installation-specific problems.
SUPPORT (equivalent to 16) trace information required by OSS for troubleshooting.

TRACE FILE NAME

TRACE_FILE_[CLIENT|SERVER|LISTENER] = <FILE NAME>

Determines the trace file name. Any valid operating system file name.
Configuration file is SQLNET.ORA, LISTENER.ORA.
Trace file is automatically appended with '.TRC'.
Default trace file name is SQLNET.TRC, LISTENER.TRC.

TRACE DIRECTORY

TRACE_DIRECTORY_[CLIENT|SERVER|LISTENER] = <DIRECTORY>

Determines the directory in which trace files are written.
Any valid operating system directory name.
Configuration file is SQLNET.ORA, LISTENER.ORA.
Directory should be writable by the connecting user and/or Oracle software owner.
Default trace directory is $ORACLE_HOME/network/trace.

UNIQUE TRACE FILES

TRACE_UNIQUE_[CLIENT|SERVER|LISTENER] = [ON|TRUE|OFF|FALSE]

Allows generation of unique trace files per connection.
Trace file names are automatically appended with '_<PID>.TRC'.
Configuration file is SQLNET.ORA, LISTENER.ORA.
Unique tracing is ideal for sporadic issues/errors that occur infrequently or
randomly.
Default value is OFF.

TRACE TIMING

TRACE_TIMESTAMP_[CLIENT|SERVER|LISTENER] = [ON|TRUE|OFF|FALSE]

A timestamp in the form of [DD-MON-YY 24HH:MI:SS] is recorded against each
operation traced by the trace file.
Configuration file is SQLNET.ORA, LISTENER.ORA.
Suitable for hanging or slow connection issues.
Available from Oracle8i onwards.
Default value is OFF.

MAXIMUM TRACE FILE LENGTH

TRACE_FILELEN_[CLIENT|SERVER|LISTENER] = <SIZE>

Determines the maximum trace file size in Kilobytes (Kb).
Configuration file is SQLNET.ORA, LISTENER.ORA.
Available from Oracle8i onwards.
Default value is UNLIMITED.

TRACE FILE CYCLING

TRACE_FILENO_[CLIENT|SERVER|LISTENER] = <NUMBER>

Determines the maximum number of trace files through which to perform cyclic
tracing.
Configuration file is SQLNET.ORA, LISTENER.ORA.
Suitable when disk space is limited or when tracing is required to be enabled
for long periods.
Available from Oracle8i onwards.
Default value is 1 (file).

LOG FILE NAME

LOG_FILE_[CLIENT|SERVER|LISTENER] = <FILE NAME>

Determines the log file name. May be any valid operating system file name.
Configuration file is SQLNET.ORA, LISTENER.ORA.
Log file is automatically appended with '.LOG'.
Default log file name is SQLNET.LOG, LISTENER.LOG.

LOG DIRECTORY

LOG_DIRECTORY_[CLIENT|SERVER|LISTENER] = <DIRECTORY NAME>

Determines the directory in which log files are written.
Any valid operating system directory name.
Configuration file is SQLNET.ORA, LISTENER.ORA.
Directory should be writable by the connecting user or Oracle software owner.
Default directory is $ORACLE_HOME/network/log.

DISABLING LOGGING

LOGGING_LISTENER = [ON|OFF]

Disables Listener logging facility.
Configuration file is LISTENER.ORA.
Default value is ON.
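As a quick sanity check of which sqlnet.ora actually wins, the resolution order listed
under CONSIDERATIONS above (TNS_ADMIN, then /etc or /var/opt/oracle, then
$ORACLE_HOME/network/admin) can be sketched in a few lines of Python. Note that
find_sqlnet_ora is just an illustrative helper written for this document, not an
Oracle-supplied tool:

```python
import os

def find_sqlnet_ora(env=os.environ):
    """Return the first sqlnet.ora found, following the documented Unix
    resolution order: TNS_ADMIN, /etc, /var/opt/oracle,
    $ORACLE_HOME/network/admin. Returns None if nothing is found."""
    candidates = []
    if env.get("TNS_ADMIN"):
        candidates.append(env["TNS_ADMIN"])
    candidates += ["/etc", "/var/opt/oracle"]
    if env.get("ORACLE_HOME"):
        candidates.append(os.path.join(env["ORACLE_HOME"], "network", "admin"))
    for d in candidates:
        path = os.path.join(d, "sqlnet.ora")
        if os.path.isfile(path):
            return path
    return None
```

Running find_sqlnet_ora() on a server tells you which file your trace parameters must
go into; remember a $HOME/sqlnet.ora may still add user-specific parameters on top.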

ORACLE NET TRACE/LOG EXAMPLES
-----------------------------
CLIENT (SQLNET.ORA)

trace_level_client      = 16
trace_file_client       = cli
trace_directory_client  = /u01/app/oracle/product/9.0.1/network/trace
trace_unique_client     = on
trace_timestamp_client  = on
trace_filelen_client    = 100
trace_fileno_client     = 2
log_file_client         = cli
log_directory_client    = /u01/app/oracle/product/9.0.1/network/log
tnsping.trace_directory = /u01/app/oracle/product/9.0.1/network/trace
tnsping.trace_level     = admin

SERVER (SQLNET.ORA)

trace_level_server       = 16
trace_file_server        = svr
trace_directory_server   = /u01/app/oracle/product/9.0.1/network/trace
trace_unique_server      = on
trace_timestamp_server   = on
trace_filelen_server     = 100
trace_fileno_server      = 2
log_file_server          = svr
log_directory_server     = /u01/app/oracle/product/9.0.1/network/log
namesctl.trace_level     = 16
namesctl.trace_file      = namesctl
namesctl.trace_directory = /u01/app/oracle/product/9.0.1/network/trace
namesctl.trace_unique    = on

LISTENER (LISTENER.ORA)

trace_level_listener     = 16
trace_file_listener      = listener
trace_directory_listener = /u01/app/oracle/product/9.0.1/network/trace
trace_timestamp_listener = on
trace_filelen_listener   = 100
trace_fileno_listener    = 2
logging_listener         = off
log_directory_listener   = /u01/app/oracle/product/9.0.1/network/log
log_file_listener        = listener

NAMESERVER TRACE (NAMES.ORA)

names.trace_level     = 16
names.trace_file      = names
names.trace_directory = /u01/app/oracle/product/9.0.1/network/trace
names.trace_unique    = off

CONNECTION MANAGER TRACE (CMAN.ORA)

tracing = yes

RELATED DOCUMENTS
-----------------

Note 16658.1     (7) Tracing SQL*Net/Net8
Note 111916.1    SQLNET.ORA Logging and Tracing Parameters
Note 39774.1     Log & Trace Facilities on Net v2
Note 73988.1     How to Get Cyclic SQL*Net Trace Files when Disk Space is Limited
Note 1011114.6   SQL*Net V2 Tracing
Note 1030488.6   Net8 Tracing

Note 2:
-------
Doc ID:             Note:39774.1
Subject:            LOG & TRACE Facilities on NET v2.
Type:               FAQ
Status:             PUBLISHED
Content Type:       TEXT/X-HTML
Creation Date:      25-JUL-1996
Last Revision Date: 31-JAN-2002

LOG AND TRACE FACILITIES ON SQL*NET V2
======================================
This article describes the log and trace facilities that can be used to examine
application connections that use SQL*Net. This article is based on usage of
SQL*Net v2.3. It explains how to invoke the trace facility and how to use the
log and trace information to diagnose and resolve operating problems.

Following topics are covered below:

o What the log facility is
o What the trace facility is
o How to invoke the trace facility
o Logging and tracing parameters
o Sample log output
o Sample trace output

Note: Information in this section is generic to all operating system
environments. You may require further information from the Oracle operating
system-specific documentation for some details of your specific operating
environment.
________________________________________

1. What is the Log Facility?
============================
All errors encountered in SQL*Net are logged to a log file for evaluation by a
network or database administrator. The log file provides additional information
for an administrator when the error on the screen is inadequate to understand
the failure. The log file, by way of the error stack, shows the state of the
TNS software at various layers. The properties of the log file are:

o Error information is appended to the log file when an error occurs.

o Generally, a log file can only be replaced or erased by an administrator,
  although client log files can be deleted by the user whose application created
  them. (Note that in general it is bad practice to delete these files while
  the program using them is still actively logging.)

o Logging of errors for the client, server, and listener cannot be disabled.
  This is an essential feature that ensures all errors are recorded.

o The Navigator and Connection Manager components of the MultiProtocol
  Interchange may have logging turned on or off. If on, logging includes
  connection statistics.

o The Names server may have logging turned on or off. If on, a Names server's
  operational events are written to a specified logfile.

You set logging parameters using the Oracle Network Manager.
________________________________________

2. What is the Trace Facility?
==============================
The trace facility allows a network or database administrator to obtain more
information on the internal operations of the components of a TNS network than
is provided in a log file. Tracing an operation produces a detailed sequence of
statements that describe the events as they are executed. All trace output is
directed to trace output files which can be evaluated after the failure to
identify the events that lead up to an error. The trace facility is typically
invoked during the occurrence of an abnormal condition, when the log file does
not provide a clear indication of the cause.

Attention: The trace facility uses a large amount of disk space and may have a
significant impact upon system performance. Therefore, you are cautioned to
turn the trace facility ON only as part of a diagnostic procedure and to turn
it OFF promptly when it is no longer necessary.
Components that can be traced using the trace facility are:

o Network listener
o SQL*Net version 2 components
  - SQL*Net client
  - SQL*Net server
o MultiProtocol Interchange components
  - the Connection Manager and pumps
  - the Navigator
o Oracle Names
  - Names server
  - Names Control Utility

The trace facility can be used to identify the following types of problems:

- Difficulties in establishing connections
- Abnormal termination of established connections
- Fatal errors occurring during the operation of TNS network components

________________________________________

3. What is the Difference between Logging and Tracing?
======================================================
While logging provides the state of the TNS components at the time of an error,
tracing provides a description of all software events as they occur, and
therefore provides additional information about events prior to an error.

There are three levels of diagnostics, each providing more information than the
previous level. The three levels are:

1. The reported error from Oracle7 or tools; this is the single error that is
   commonly returned to the user.

2. The log file containing the state of TNS at the time of the error. This can
   often uncover low level errors in interaction with the underlying protocols.

3. The trace file containing English statements describing what the TNS
   software has done from the time the trace session was initiated until the
   failure is recreated.

When an error occurs, a simple error message is displayed and a log file is
generated. Optionally, a trace file can be generated for more information.
(Remember, however, that using the trace facility has an impact on your system
performance.)

In the following example, the user failed to use Oracle Network Manager to
create a configuration file, and misspelled the word "PORT" as "POT" in the
connect descriptor. It is not important that you understand in detail the
contents of each of these results; this example is intended only to provide a
comparison.
Reported Error (On the screen in SQL*Forms): ERROR: ORA-12533: Unable to open message file (SQL-02113) Logged Error (In the log file, SQLNET.LOG): **************************************************************** Fatal OSN connect error 12533, connecting to: (DESCRIPTION=(CONNECT_DATA=(SID=trace)(CID=(PROGRAM=)(HOST=lala) (USER=ginger)))(ADDRESS_LIST=(ADDRESS=(PROTOCOL=ipc) (KEY=bad_port))(ADDRESS=(PROTOCOL=tcp)(HOST=lala)(POT=1521)))) VERSION INFORMATION: TNS for SunOS: Version 2.0.14.0.0 - Developer's Release Oracle Bequeath NT Protocol Adapter for SunOS: Version 2.0.14.0.0 - Developer's Release Unix Domain Socket IPC NT Protocol Adaptor for SunOS: Version 2.0.14.0.0 - Developer's Release TCP/IP NT Protocol Adapter for SunOS: Version 2.0.14.0.0 Developer's Release Time: 07-MAY-93 17:38:50 Tracing to file: /home/ginger/trace_admin.trc Tns error struct:

nr err code: 12206
TNS-12206: TNS:received a TNS error while doing navigation
ns main err code: 12533
TNS-12533: TNS:illegal ADDRESS parameters
ns secondary err code: 12560
nt main err code: 503
TNS-00503: Illegal ADDRESS parameters
nt secondary err code: 0
nt OS err code: 0

Example of Trace of Error
-------------------------
The trace file, SQLNET.TRC at the USER level, contains the following information:

--- TRACE CONFIGURATION INFORMATION FOLLOWS ---
New trace stream is "/private1/oracle/trace_user.trc"
New trace level is 4
--- TRACE CONFIGURATION INFORMATION ENDS ---

--- PARAMETER SOURCE INFORMATION FOLLOWS --Attempted load of system pfile source /private1/oracle/network/admin/sqlnet.ora Parameter source was not loaded Error stack follows: NL-00405: cannot open parameter file Attempted load of local pfile source /home/ginger/.sqlnet.ora Parameter source loaded successfully -> PARAMETER TABLE LOAD RESULTS FOLLOW <Some parameters may not have been loaded See dump for parameters which loaded OK -> PARAMETER TABLE HAS THE FOLLOWING CONTENTS <TRACE_DIRECTORY_CLIENT = /private1/oracle trace_level_client = USER TRACE_FILE_CLIENT = trace_user --- PARAMETER SOURCE INFORMATION ENDS ----- LOG CONFIGURATION INFORMATION FOLLOWS --Attempted open of log stream "/tmp_mnt/home/ginger/sqlnet.log" Successful stream open --- LOG CONFIGURATION INFORMATION ENDS --Unable to get data from navigation file tnsnav.ora local names file is /home/ginger/.tnsnames.ora system names file is /etc/tnsnames.ora -<ERROR>- failure, error stack follows -<ERROR>- NL-00427: bad list -<ERROR>- NOTE: FILE CONTAINS ERRORS, SOME NAMES MAY BE MISSING Calling address: (DESCRIPTION=(CONNECT_DATA=(SID=trace)(CID=(PROGRAM=)(HOST=lala)(USER=gi nger))) (ADDRESS_LIST=(ADDRESS=(PROTOCOL=ipc)(KEY=bad_port))(ADDRESS=(PROTOCOL=t cp)(HOST Getting local community information Looking for local addresses setup by nrigla No addresses in the preferred address list

TNSNAV.ORA is not present. No local communities entry. Getting local address information Address list being processed... No community information so all addresses are "local" Resolving address to use to call destination or next hop Processing address list... No community entries so iterate over address list This a local community access Got routable address information Making call with following address information: (DESCRIPTION=(EMPTY=0)(ADDRESS=(PROTOCOL=ipc)(KEY=bad_port))) Calling with outgoing connect data (DESCRIPTION=(CONNECT_DATA=(SID=trace)(CID=(PROGRAM=)(HOST=lala)(USER=gi nger))) (ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=lala)(POT=1521)))) (DESCRIPTION=(EMPTY=0)(ADDRESS=(PROTOCOL=ipc)(KEY=bad_port))) KEY = bad_port connecting... opening transport... -<ERROR>- sd=8, op=1, resnt[0]=511, resnt[1]=2, resnt[2]=0 -<ERROR>- unable to open transport -<ERROR>- nsres: id=0, op=1, ns=12541, ns2=12560; nt[0]=511, nt[1]=2, nt[2]=0 connect attempt failed Call failed... Call made to destination Processing address list so continuing Getting local community information Looking for local addresses setup by nrigla No addresses in the preferred address list TNSNAV.ORA is not present. No local communities entry. Getting local address information Address list being processed... No community information so all addresses are "local" Resolving address to use to call destination or next hop Processing address list... 
No community entries so iterate over address list This a local community access Got routable address information Making call with following address information: (DESCRIPTION=(EMPTY=0)(ADDRESS=(PROTOCOL=tcp)(HOST=lala)(POT=1521))) Calling with outgoing connect data (DESCRIPTION=(CONNECT_DATA=(SID=trace)(CID=(PROGRAM=)(HOST=lala)(USER=gi nger))) (ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=lala)(POT=521)))) (DESCRIPTION=(EMPTY=0)(ADDRESS=(PROTOCOL=tcp)(HOST=lala)(POT=1521))) -<FATAL?>- failed to recognize: POT -<ERROR>- nsres: id=0, op=13, ns=12533, ns2=12560; nt[0]=503, nt[1]=0, nt[2]=0 Call failed... Exiting NRICALL with following termination result -1 -<ERROR>- error from nricall -<ERROR>- nr err code: 12206 -<ERROR>- ns main err code: 12533 -<ERROR>- ns (2) err code: 12560 -<ERROR>- nt main err code: 503 -<ERROR>- nt (2) err code: 0 -<ERROR>- nt OS err code: 0

-<ERROR>- Couldn't connect, returning 12533

In the trace file, note that unexpected events are preceded with an -<ERROR>-
stamp. These events may represent serious errors, minor errors, or merely
unexpected results from an internal operation. More serious and probably fatal
errors are stamped with the -<FATAL?>- prefix.

In this example trace file, you can see that the root problem, the misspelling
of "PORT," is indicated by the trace line:

-<FATAL?>- failed to recognize: POT

Most tracing is very similar to this. If you have a basic understanding of the
events the components will perform, you can identify the probable cause of an
error in the text of the trace.
________________________________________

4. Log File Names
=================
Log files produced by different components have unique names. The default file
names are:

SQLNET.LOG     Contains client and/or server information
LISTENER.LOG   Contains listener information
INTCHG.LOG     Contains Connection Manager and pump information
NAVGATR.LOG    Contains Navigator information
NAMES.LOG      Contains Names server information
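Since trace files can run to many megabytes, it helps to pull out just the stamped
lines first. The -<ERROR>- and -<FATAL?>- stamps described above can be filtered with
a few lines of Python; scan_trace is a made-up helper name for illustration:

```python
def scan_trace(lines):
    """Collect trace lines stamped -<ERROR>- or -<FATAL?>-.
    Fatal stamps usually point closest to the root cause."""
    errors = [l.strip() for l in lines if "-<ERROR>-" in l]
    fatals = [l.strip() for l in lines if "-<FATAL?>-" in l]
    return errors, fatals
```

Typical use would be scan_trace(open("sqlnet.trc")), then reading the fatal lines
first and the error lines bottom-up, just as with the log file error stack.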

You can control the name of the log file. For each component, any valid string
can be used to create a log file name. The parameters are of the form:

LOG_FILE_component = string

For example:

LOG_FILE_LISTENER = TEST

Some platforms have restrictions on the properties of a file name. See your
Oracle operating system specific manuals for platform specific restrictions.
_____________________________________

5. Using Log Files
==================
Follow these steps to track an error using a log file:

1. Browse the log file for the most recent error that matches the error number
   you have received from the application. This is almost always the last entry
   in the log file. Notice that an entry or error stack in the log file is
   usually many lines in length. In the example earlier in this chapter, the
   error number was 12207.

2. Starting at the bottom, look up to the first non-zero entry in the error
   report. This is usually the actual cause. In the example earlier in this
   chapter, the last non-zero entry is the "ns" error 12560.

3. Look up the first non-zero entry in later chapters of this book for its
   recommended cause and action. (For example, you would find the "ns" error
   12560 under ORA-12560.) To understand the notation used in the error report,
   see the previous chapter, "Interpreting Error Messages."

4. If that error does not provide the desired information, move up the error
   stack to the second to last error and so on.

5. If the cause of the error is still not clear, turn on tracing and re-execute
   the statement that produced the error message. The use of the trace facility
   is described in detail later in this chapter. Be sure to turn tracing off
   after you have re-executed the command.
________________________________________

6. Using the Trace Facility
===========================
The steps used to invoke tracing are outlined here. Each step is fully
described in subsequent sections.

1. Choose the component to be traced from the list:

   o Client
   o Server
   o Listener
   o Connection Manager and pump (cmanager)
   o Navigator (navigator)
   o Names server
   o Names Control Utility

2. Save the existing trace file if you need to retain the information in it. By
   default most trace files will overwrite an existing one. The TRACE_UNIQUE
   parameter needs to be included in the appropriate configuration files if
   unique trace files are required. This appends the Process Id to each file.

   For example:
   For Names server tracing, NAMES.TRACE_UNIQUE=ON needs to be set in the
   NAMES.ORA file.
   For the Names Control Utility, NAMESCTL.TRACING_UNIQUE=TRUE needs to be in
   SQLNET.ORA.
   TRACE_UNIQUE_CLIENT=ON in SQLNET.ORA for Client Tracing.

3. For any component, you can invoke the trace facility by editing the
   component configuration file that corresponds to the component traced. The
   component configuration files are SQLNET.ORA, LISTENER.ORA, INTCHG.ORA, and
   NAMES.ORA.

4. Execute or start the component to be traced. If the trace component
   configuration files are modified while the component is running, the
   modified trace parameters will take effect the next time the component is
   invoked or restarted. Specifically for each component:

CLIENT:   Set the trace parameters in the client-side SQLNET.ORA and invoke a
          client application, such as SQL*Plus, a Pro*C application, or any
          application that uses the Oracle network products.

SERVER:   Set the trace parameters in the server-side SQLNET.ORA. The next
          process started by the listener will have tracing enabled. The trace
          parameters must be created or edited manually.

LISTENER: Set the trace parameters in the LISTENER.ORA.

CONNECTION MANAGER: Set the trace parameters in INTCHG.ORA and start the
          Connection Manager from the Interchange Control Utility or command
          line. The pumps are started automatically with the Connection
          Manager, and their trace files are controlled by the trace parameters
          for the Connection Manager.

NAVIGATOR: Again, set the trace parameters in INTCHG.ORA and start the
          Navigator.

NAMES SERVER: Trace parameters need to be set in NAMES.ORA; then start the
          Names server.

NAMES CONTROL UTILITY: Set the trace parameters in SQLNET.ORA and start the
          Names Control Utility.

5. Be sure to turn tracing off when you do not need it for a specific
   diagnostic purpose.
________________________________________

7. Setting Trace Parameters
===========================
The trace parameters are defined in the same configuration files as the log
parameters. Table below shows the configuration files for different network
components and the default names of the trace files they generate.

-------------------------------------------------------
| Trace Parameters  | Configuration   |                 |
| Corresponding to  | File            | Output Files    |
|-------------------|-----------------|-----------------|
| Client            | SQLNET.ORA      | SQLNET.TRC      |
| Server            |                 | SQLNET.TRC      |
| TNSPING Utility   |                 | TNSPING.TRC     |
| Names Control     |                 |                 |
| Utility           |                 | NAMESCTL.TRC    |
|-------------------|-----------------|-----------------|
| Listener          | LISTENER.ORA    | LISTENER.TRC    |
|-------------------|-----------------|-----------------|

| Interchange       | INTCHG.ORA      |                 |
| Connection        |                 |                 |
| Manager           |                 | CMG.TRC         |
| Pumps             |                 | PMP.TRC         |
| Navigator         |                 | NAV.TRC         |
|-------------------|-----------------|-----------------|
| Names server      | NAMES.ORA       | NAMES.TRC       |
|___________________|_________________|_________________|

The configuration files for each component are located on the computer running
that component. The trace characteristics for two or more components of an
Interchange are controlled by different parameters in the same configuration
file. For example, there are separate sets of parameters for the Connection
Manager and the Navigator that determine which components will be traced, and
at what level. Similarly, if there are multiple listeners on a single computer,
each listener is controlled by parameters that include the unique listener name
in the LISTENER.ORA file.

For each component, the configuration files contain the following information:

o A valid trace level to be used (Default is OFF)
o The trace file name (optional)
o The trace file directory (optional)
________________________________________

7a. Valid SQLNET.ORA Diagnostic Parameters
==========================================
The SQLNET.ORA caters for:

o Client Logging & Tracing
o Server Logging & Tracing
o TNSPING utility
o NAMESCTL program

---------------------------------------------------------------------------------
| PARAMETERS               | VALUES          | Example (DOS client, UNIX server) |
|--------------------------|-----------------|-----------------------------------|
| Parameters for Client                                                          |
|--------------------------------------------------------------------------------|
| TRACE_LEVEL_CLIENT       | OFF/USER/ADMIN  | TRACE_LEVEL_CLIENT=USER           |
| TRACE_FILE_CLIENT        | string          | TRACE_FILE_CLIENT=CLIENT          |
| TRACE_DIRECTORY_CLIENT   | valid directory | TRACE_DIRECTORY_CLIENT=c:\NET\ADMIN |
| TRACE_UNIQUE_CLIENT      | OFF/ON          | TRACE_UNIQUE_CLIENT=ON            |

| LOG_FILE_CLIENT          | string          | LOG_FILE_CLIENT=CLIENT            |
| LOG_DIRECTORY_CLIENT     | valid directory | LOG_DIRECTORY_CLIENT=c:\NET\ADMIN |
|--------------------------------------------------------------------------------|
| Parameters for Server                                                          |
|--------------------------------------------------------------------------------|
| TRACE_LEVEL_SERVER       | OFF/USER/ADMIN  | TRACE_LEVEL_SERVER=ADMIN          |
| TRACE_FILE_SERVER        | string          | TRACE_FILE_SERVER=unixsrv_2345.trc |
| TRACE_DIRECTORY_SERVER   | valid directory | TRACE_DIRECTORY_SERVER=/tmp/trace |
| LOG_FILE_SERVER          | string          | LOG_FILE_SERVER=unixsrv.log       |
| LOG_DIRECTORY_SERVER     | valid directory | LOG_DIRECTORY_SERVER=/tmp/trace   |
|--------------------------------------------------------------------------------|

---(SQLNET.ORA Cont.)------------------------------------------------------------
| PARAMETERS               | VALUES          | Example (DOS client, UNIX server) |
|--------------------------|-----------------|-----------------------------------|
| Parameters for TNSPING                                                         |
|--------------------------------------------------------------------------------|
| TNSPING.TRACE_LEVEL      | OFF/USER/ADMIN  | TNSPING.TRACE_LEVEL=user          |
| TNSPING.TRACE_DIRECTORY  | directory       | TNSPING.TRACE_DIRECTORY=          |
|                          |                 |   /oracle7/network/trace          |
|--------------------------------------------------------------------------------|
| Parameters for Names Control Utility                                           |
|--------------------------------------------------------------------------------|
| NAMESCTL.TRACE_LEVEL     | OFF/USER/ADMIN  | NAMESCTL.TRACE_LEVEL=user         |
| NAMESCTL.TRACE_FILE      | file            | NAMESCTL.TRACE_FILE=nc_south.trc  |
| NAMESCTL.TRACE_DIRECTORY | directory       | NAMESCTL.TRACE_DIRECTORY=/o7/net/trace |
| NAMESCTL.TRACE_UNIQUE    | TRUE/FALSE      | NAMESCTL.TRACE_UNIQUE=TRUE or ON/OFF |
-----------------------------------------------------------------------------Note: You control log and trace parameters for the client through Oracle Network Manager. You control log and trace parameters for the server by manually adding the desired parameters to the SQLNET.ORA file. Parameters for Names Control Utility & TNSPING Utility need to be added manually to SQLNET.ORA file. You cannot create them using Oracle Network M anager.
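Putting the client parameters above to use: a minimal SQLNET.ORA fragment that
switches on ADMIN-level client tracing might look as follows (the trace
directory is just an example):

```text
# sqlnet.ora on the client
TRACE_LEVEL_CLIENT = ADMIN
TRACE_FILE_CLIENT = CLIENT
TRACE_DIRECTORY_CLIENT = /tmp/trace
LOG_DIRECTORY_CLIENT = /tmp/trace
```

Set TRACE_LEVEL_CLIENT back to OFF when done; ADMIN-level tracing produces
very large files.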

________________________________________
7b. Valid LISTENER.ORA Diagnostic Parameters
============================================

The following table shows the valid LISTENER.ORA parameters used in logging
and tracing of the listener.

  TRACE_LEVEL_LISTENER      OFF/USER/ADMIN    TRACE_LEVEL_LISTENER=OFF
  TRACE_FILE_LISTENER       string            TRACE_FILE_LISTENER=LISTENER
  TRACE_DIRECTORY_LISTENER  valid directory   TRACE_DIRECTORY_LISTENER=$ORA_SQLNETV2
  LOG_FILE_LISTENER         string            LOG_FILE_LISTENER=LISTENER
  LOG_DIRECTORY_LISTENER    valid directory   LOG_DIRECTORY_LISTENER=$ORA_ERRORS

________________________________________
7c. Valid INTCHG.ORA Diagnostic Parameters
==========================================

The following table shows the valid INTCHG.ORA parameters used in logging and
tracing of the Interchange (defaults in parentheses).

  TRACE_LEVEL_CMANAGER       OFF/USER/ADMIN           TRACE_LEVEL_CMANAGER=USER
  TRACE_FILE_CMANAGER        string (CMG.TRC)         TRACE_FILE_CMANAGER=CMANAGER
  TRACE_DIRECTORY_CMANAGER   valid directory          TRACE_DIRECTORY_CMANAGER=C:\ADMIN
  LOG_FILE_CMANAGER          string (INTCHG.LOG)      LOG_FILE_CMANAGER=CMANAGER
  LOG_DIRECTORY_CMANAGER     valid directory          LOG_DIRECTORY_CMANAGER=C:\ADMIN
  LOGGING_CMANAGER           OFF/ON                   LOGGING_CMANAGER=ON
  LOG_INTERVAL_CMANAGER      any no. of minutes (60)  LOG_INTERVAL_CMANAGER=60
  TRACE_LEVEL_NAVIGATOR      OFF/USER/ADMIN           TRACE_LEVEL_NAVIGATOR=ADMIN
  TRACE_FILE_NAVIGATOR       string (NAV.TRC)         TRACE_FILE_NAVIGATOR=NAVIGATOR
  TRACE_DIRECTORY_NAVIGATOR  valid directory          TRACE_DIRECTORY_NAVIGATOR=C:\ADMIN
  LOG_FILE_NAVIGATOR         string (NAVGATR.LOG)     LOG_FILE_NAVIGATOR=NAVIGATOR
  LOG_DIRECTORY_NAVIGATOR    valid directory          LOG_DIRECTORY_NAVIGATOR=C:\ADMIN
  LOGGING_NAVIGATOR          OFF/ON                   LOGGING_NAVIGATOR=OFF
  LOG_LEVEL_NAVIGATOR        ERRORS/ALL (ERRORS)      LOG_LEVEL_NAVIGATOR=ERRORS

Note: The pump component shares the trace parameters of the Connection
Manager, but it generates a separate trace file with the unchangeable default
name PMPpid.TRC.

________________________________________
7d. Valid NAMES.ORA Diagnostic Parameters

=========================================

The following table shows the valid NAMES.ORA parameters used in logging and
tracing of the Names server (defaults in parentheses).

  NAMES.TRACE_LEVEL      OFF/USER/ADMIN     NAMES.TRACE_LEVEL=ADMIN
  NAMES.TRACE_FILE       file (names.trc)   NAMES.TRACE_FILE=nsrv3.trc
  NAMES.TRACE_DIRECTORY  directory          NAMES.TRACE_DIRECTORY=/o7/net/trace
  NAMES.TRACE_UNIQUE     TRUE/FALSE         NAMES.TRACE_UNIQUE=TRUE (or ON/OFF)
  NAMES.LOG_FILE         file (names.log)   NAMES.LOG_FILE=nsrv1.log
  NAMES.LOG_DIRECTORY    directory          NAMES.LOG_DIRECTORY=/o7/net/log

-----------------------------------------------------------------------------________________________________________ 8. Example of a Trace File =========================== In the following example, the SQLNET.ORA file includes the following line: TRACE_LEVEL_CLIENT = ADMIN The following trace file is the result of a connection attempt that failed because the hostname is invalid. The trace output is a combination of debugging aids for Oracle specialists and English information for network administrators. Several key events can be seen by analyzing this output from beginning to end: (A) The client describes the outgoing data in the connect descriptor used to contact the server. (B) An event is received (connection request). (C) A connection is established over the available transport (in this case TCP/IP). (D) The connection is refused by the application, which is the listener. (E) The trace file shows the problem, as follows: -<FATAL?>- ***hostname lookup failure! ***

(F) Error 12545 is reported back to the client. If you look up Error 12545
    in Chapter 3 of this Manual, you will find the following description:

    ORA-12545 TNS:Name lookup failure
    Cause: A protocol specific ADDRESS parameter cannot be resolved.
    Action: Ensure the ADDRESS parameters have been entered correctly;
    the most likely incorrect value is the node name.

++++++ NOTE: TRACE FILE EXTRACT +++++++

--- TRACE CONFIGURATION INFORMATION FOLLOWS ---
New trace stream is "/private1/oracle/trace_admin.trc"
New trace level is 6
--- TRACE CONFIGURATION INFORMATION ENDS ---

++++++ NOTE: Loading Parameter files now. +++++++ --- PARAMETER SOURCE INFORMATION FOLLOWS --Attempted load of system pfile source /private1/oracle/network/admin/sqlnet.ora Parameter source was not loaded Error stack follows: NL-00405: cannot open parameter file Attempted load of local pfile source /home/ginger/.sqlnet.ora Parameter source loaded successfully -> PARAMETER TABLE LOAD RESULTS FOLLOW <Some parameters may not have been loaded See dump for parameters which loaded OK -> PARAMETER TABLE HAS THE FOLLOWING CONTENTS <TRACE_DIRECTORY_CLIENT = /private1/oracle trace_level_client = ADMIN TRACE_FILE_CLIENT = trace_admin --- PARAMETER SOURCE INFORMATION ENDS --++++++ NOTE: Reading Parameter files. +++++++ --- LOG CONFIGURATION INFORMATION FOLLOWS --Attempted open of log stream "/private1/oracle/sqlnet.log" Successful stream open --- LOG CONFIGURATION INFORMATION ENDS --Unable to get data from navigation file tnsnav.ora local names file is /home/ginger/.tnsnames.ora system names file is /etc/tnsnames.ora initial retry timeout for all servers is 500 csecs max request retries per server is 2 default zone is [root] Using nncin2a() to build connect descriptor for (possibly remote) databa se. initial load of /home/ginger/.tnsnames.ora -<ERROR>- failure, error stack follows -<ERROR>- NL-00405: cannot open parameter file -<ERROR>- NOTE: FILE CONTAINS ERRORS, SOME NAMES MAY BE MISSING

initial load of /etc/tnsnames.ora -<ERROR>- failure, error stack follows -<ERROR>- NL-00427: bad list -<ERROR>- NOTE: FILE CONTAINS ERRORS, SOME NAMES MAY BE MISSING Inserting IPC address into connect descriptor returned from nncin2a(). ++++++ NOTE: Looking for Routing Information. +++++++ Calling address: (DESCRIPTION=(CONNECT_DATA=(SID=trace)(CID=(PROGRAM=)(HOST=lala) (USER=ginger)))(ADDRESS_LIST=(ADDRESS=(PROTOCOL=ipc (KEY=bad_host))(ADDRESS=(PROTOCOL=tcp)(HOST=lavender) (PORT=1521)))) Getting local community information Looking for local addresses setup by nrigla No addresses in the preferred address list TNSNAV.ORA is not present. No local communities entry. Getting local address information Address list being processed... No community information so all addresses are "local" Resolving address to use to call destination or next hop Processing address list... No community entries so iterate over address list This a local community access Got routable address information ++++++ NOTE: Calling first address (IPC). +++++++ Making call with following address information: (DESCRIPTION=(EMPTY=0)(ADDRESS=(PROTOCOL=ipc)(KEY=bad_host))) Calling with outgoing connect data (DESCRIPTION=(CONNECT_DATA=(SID=trace)(CID=(PROGRAM=)(HOST=lala) (USER=ginger)))(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp) (HOST=lavender)(PORT=1521)))) (DESCRIPTION=(EMPTY=0)(ADDRESS=(PROTOCOL=ipc)(KEY=bad_host))) KEY = bad_host connecting... opening transport... -<ERROR>- sd=8, op=1, resnt[0]=511, resnt[1]=2, resnt[2]=0 -<ERROR>- unable to open transport -<ERROR>- nsres: id=0, op=1, ns=12541, ns2=12560; nt[0]=511, nt[1]=2, nt[2]=0 connect attempt failed Call failed... Call made to destination Processing address list so continuing ++++++ NOTE: Looking for Routing Information. +++++++ Getting local community information Looking for local addresses setup by nrigla No addresses in the preferred address list TNSNAV.ORA is not present. No local communities entry. 
Getting local address information Address list being processed...

No community information so all addresses are "local"
Resolving address to use to call destination or next hop
Processing address list...
No community entries so iterate over address list
This a local community access
Got routable address information
++++++ NOTE: Calling second address (TCP/IP). +++++++
Making call with following address information:
(DESCRIPTION=(EMPTY=0)(ADDRESS=(PROTOCOL=tcp)(HOST=lavender)(PORT=1521)))
Calling with outgoing connect data
(DESCRIPTION=(CONNECT_DATA=(SID=trace)(CID=(PROGRAM=)(HOST=lala)
(USER=ginger)))(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=lavender)
(PORT=1521))))
(DESCRIPTION=(EMPTY=0)(ADDRESS=(PROTOCOL=tcp)(HOST=lavender)(PORT=1521)))
port resolved to 1521
looking up IP addr for host: lavender
-<FATAL?>- *** hostname lookup failure! ***
-<ERROR>- nsres: id=0, op=13, ns=12545, ns2=12560; nt[0]=515, nt[1]=0, nt[2]=0
Call failed...
Exiting NRICALL with following termination result -1
-<ERROR>- error from nricall
-<ERROR>- nr err code: 12206
-<ERROR>- ns main err code: 12545
-<ERROR>- ns (2) err code: 12560
-<ERROR>- nt main err code: 515
-<ERROR>- nt (2) err code: 0
-<ERROR>- nt OS err code: 0
-<ERROR>- Couldn't connect, returning 12545

Most tracing is very similar to this. If you have a basic understanding of the
events the components will perform, you can identify the probable cause of an
error in the text of the trace.

19.64 ORA-01595: error freeing extent (2) of rollback segment (9)):
===================================================================

Note 1:
ORA-01595, 00000, "error freeing extent (%s) of rollback segment (%s))"
Cause: Some error occurred while freeing inactive rollback segment extents.
Action: Investigate the accompanying error.

Note 2:
Two factors are necessary for this to happen:
- A rollback segment has extended beyond OPTIMAL.
- There are two or more transactions sharing the rollback segment at the time
  of the shrink.
What happens is that the first process gets to the end of an extent, notices the

need to shrink, and begins the recursive transaction to do so. But the next
transaction blunders past the end of that extent before the recursive
transaction has been committed.

The preferred solution is to have sufficient rollback segments to eliminate
the sharing of rollback segments between processes. Look in V$RESOURCE_LIMIT
for the high-water mark of transactions. That is the number of rollback
segments you need. The alternative solution is to raise OPTIMAL to reduce the
risk of the error.

Note 3:
This error is harmless. You can try (and probably should) setting OPTIMAL to
null and MAXEXTENTS to unlimited, which might minimize the frequency of these
errors. These errors sometimes happen when Oracle is shrinking the rollback
segments down to the optimal size. The undo data for the shrink is also kept
in the rollback segments, so when Oracle attempts to shrink the same rollback
segment where it is trying to write the undo, it throws this warning. It is
not a failure per se, since Oracle will retry and succeed.

19.65: OUI-10022: oraInventory cannot be used because it is in an invalid state
===============================================================================

Note 1:
-------
If there are other products installed through the OUI, create a copy of the
oraInst.loc file (depending on the UNIX system, possibly in /etc or
/var/opt/oracle). Modify the inventory_loc parameter to point to a different
location for the OUI to create the oraInventory directory. Run the installer
using the -invPtrLoc parameter (eg: runInstaller -invPtrLoc /PATH/oraInst.loc).
This will retain the existing oraInventory directory and create a new one for
use by the new product.

19.66: Failure to extend rollback segment because of 30036 condition
====================================================================
Not a serious problem. Do some undo tuning.
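The V$RESOURCE_LIMIT check and the storage change recommended for ORA-01595 in
19.64 above can be sketched like this (the rollback segment name rbs01 is only
an example):

```sql
-- High-water mark of concurrent transactions = number of rollback segments needed:
SELECT resource_name, current_utilization, max_utilization
FROM   v$resource_limit
WHERE  resource_name = 'transactions';

-- Avoid the automatic shrinks that trigger the error (segment name is an example):
ALTER ROLLBACK SEGMENT rbs01 STORAGE (OPTIMAL NULL MAXEXTENTS UNLIMITED);
```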

19.67: ORA-06502: PL/SQL: numeric or value error: character string buffer too small
===================================================================================

Note 1:
Hi,
I am having a strange problem with an ORA-06502 error I am getting and don't
understand why. I would expect this error to be quite easy to fix: it would
suggest that a variable is not large enough to cope with a value being
assigned to it. But I'm fairly sure that isn't the problem. Anyway, I have a
stored procedure similar to the following:

PROCEDURE myproc(a_user IN VARCHAR2,
                 p_1 OUT <my_table>.<my_first_column>%TYPE,
                 p_2 OUT <my_table>.<my_second_column>%TYPE)
IS
BEGIN
  SELECT my_first_column, my_second_column
  INTO   p_1, p_2
  FROM   my_table
  WHERE  user_id = a_user;
END;
/

The procedure is larger than this, but using error_position variables I have
tracked it down to one SQL statement. But I don't understand why I'm getting
the ORA-06502, because the variables I am selecting into are defined as the
same types as the columns I'm selecting. The variable I am selecting into is
in fact a VARCHAR2(4), but if I replace the SQL statement with p_1 := 'AB';
it still fails. It succeeds if I do p_1 := 'A';
Has anyone seen this before, or anything similar that they might be able to
help me with, please?
Thanks, mtae.

-- Answer 1:
It is the code from which you are calling it that has the problem, e.g.

DECLARE
  v1 varchar2(1);
  v2 varchar2(1);
BEGIN
  my_proc('USER', v1, v2);
END;
/

-- Answer 2
try this:

PROCEDURE myproc(a_user IN VARCHAR2,
                 p_1 OUT varchar2,
                 p_2 OUT varchar2)
IS
  v_1 <my_table>.<my_first_column>%TYPE;
  v_2 <my_table>.<my_second_column>%TYPE;
BEGIN
  SELECT my_first_column, my_second_column
  INTO   v_1, v_2
  FROM   my_table
  WHERE  user_id = a_user;
  p_1 := v_1;
  p_2 := v_2;
END;
/

Comment from mtae, Date: 07/28/2004 04:24AM PDT
Author Comment: It was the size of the variable that was being used as the
actual parameter being passed in. Feeling very silly, but thanks; sometimes
you can look at a problem too long.
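The caller-side fix implied by Answer 1 is to size the actual parameters like
the columns, for instance with %TYPE (the table, column, and procedure names
are the thread's placeholders):

```sql
DECLARE
  v1 my_table.my_first_column%TYPE;   -- sized like the column, so the OUT value always fits
  v2 my_table.my_second_column%TYPE;
BEGIN
  myproc('USER', v1, v2);
END;
/
```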

19.68 ORA-00600: internal error code, arguments: [LibraryCacheNotEmptyOnClose], [], [], [], [], [], [], []
==========================================================================================================

thread:
I see this error every time I shutdown a 10gR3 grid control database on a
10.2.0.3 RDBMS, even though all opmn and OMS processes are down. So far, I
have not seen any problems, apart from the annoying shutdown warning.
Note 365103.1 seems to indicate it can be ignored:

Cause
This is due to unpublished Bug 4483084 'ORA-600 [LIBRARYCACHENOTEMPTYONCLOSE]'.
This is a bug in that an ORA-600 error is reported when it is found that
something is still going on during shutdown. It does not indicate any damage
or a problem in the system.

Solution

At the time of writing, it is likely that the fix will be to report a more
meaningful external error, although this has not been finalised. The error is
harmless, so it is unlikely that this will be backported to 10.2. The error
can be safely ignored as it does not indicate a problem with the database.

thread:
ORA-00600: internal error code, arguments: [LibraryCacheNotEmptyOnClose],
[], [], [], [], [], [], []
14-DEC-06 05:15:35 GMT
Hi,
There is no patch available for the bug 4483084. You need to ignore this
error, as there is absolutely no impact to the database due to this error.
Thanks, Ram

19.69: ORA-12518 Tns: Listener could not hand off:
--------------------------------------------------

>>>> thread 1:

Q:
ORA-12518 Tns: Listener could not hand off client connection
Posted: May 31, 2007 2:02 AM
Dear experts,
Please tell me how I can resolve
ORA-12518: TNS:listener could not hand off client connection.

A:
Your server is probably running out of memory and needs to swap memory to
disk. One cause can be an Oracle process consuming too much memory. A
possible workaround is to set the following parameter in the listener.ora
and restart the listener:
DIRECT_HANDOFF_TTC_LISTENER=OFF
You might need to increase the value of large_pool_size.
Regards.

>>>> thread 2:

Q:
Hi All,
I'm using Oracle 10g on a Windows XP system. Java programmers will be
accessing the database. Frequently they get the "ORA-12518: TNS:listener
could not hand off" error, and through sqlplus I also get this error. But
after some time it works fine. I checked the tnsnames.ora and listener.ora
entries; they seem to be ok. I have used the system name itself for the HOST
flag instead of the IP address, but I'm still getting this error. Can anybody
tell me what the problem might be?
Thanks,

A:
From Oracle's error messages docco, we see
--------
TNS-12518 TNS:listener could not hand off client connection
Cause: The process of handing off a client connection to another process
failed.
Action: Turn on listener tracing and re-execute the operation. Verify that
the listener and database instance are properly configured for direct
handoff. If the problem persists, contact Oracle Support Services.
--------
So what does the listener trace indicate?

A:
Did you by any chance upgrade with SP2? If so, you could be running into
firewall problems: 1521 is open, the initial contact is made, but the handoff
to a random (blocked!) port fails...
Regards, Frank van Bortel

>>>> thread 3:

Q:
I installed Oracle9i and Oracle8i on a Win2000 Server. I used the 9i
listener; my database is based on Oracle8i. I found the error ORA-12518:
TNS:listener could not hand off client connection when I logged on to the
database. If I restarted the database and listener it ran, but after a few
minutes it failed again. Can you help me???

A:
Are you using MTS? First start the listener and then the database (both
databases). Now check the status of the listener. If nothing works, try
DIRECT_HANDOFF_TTC_<listener name> = OFF in listener.ora.

>>>> thread 4

Q:
This weekend I installed Oracle Enterprise 10g release 2 on a Windows 2003
server. The server is a Xeon dual processor, 2.5GHz each, with 3GB RAM and a
300GB harddisk on RAID 1. The installation was fine; I then installed our
application on it, and that went smoothly as well. I had 3 users logged in to
test the installation and everything was ok. This morning we had 100 users
trying to login; some got access, but the majority got the ORA error above
and have no access. I checked the tnsnames.ora file, the sqlnet.ora file,
and the service on the database; all look ok. I also restarted the listener
service on the server, but I still get this error message. I've also
increased the number of sessions to 1000. Has anyone ever come across an
issue like this in Oracle 10g?
Regards

A:
I think I've resolved the problem; the majority of my users are away on
Easter break, so when they return I will know whether this tweak has paid off
or not. Basically my SGA settings were quite high, so 60% of RAM was being
used by the SGA and 40% by Windows. I reduced the total SGA to 800 MB and
I've had no connection problems ever since.

>>>> thread 5

ORA-12518: TNS:listener could not hand off client connection
Your server is probably running out of memory and needs to swap memory to
disk. One cause can be an Oracle process consuming too much memory. A
possible workaround is to set the following parameter in the listener.ora
and restart the listener:
DIRECT_HANDOFF_TTC_LISTENER=OFF
Should you be working with Multi Threaded Server connections, you might need
to increase the value of large_pool_size.
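Summarizing the workaround that recurs in the threads above (restart the
listener afterwards; a per-listener form DIRECT_HANDOFF_TTC_<listener name>
is also mentioned):

```text
# listener.ora
DIRECT_HANDOFF_TTC_LISTENER = OFF
```

With shared server (MTS) connections, a larger large pool can also help,
e.g. ALTER SYSTEM SET large_pool_size=64M SCOPE=SPFILE; (the size is only an
example).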

19.70: Private strand flush not complete:
-----------------------------------------

-- thread:

Q:
I just upgraded to Oracle 10g release 2 and I keep getting this error in my
alert log:

Thread 1 cannot allocate new log, sequence 509
Private strand flush not complete
  Current log# 2 seq# 508 mem# 0: /usr/local/o1_mf_2_2cx5wnw5_.log
  Current log# 2 seq# 508 mem# 1: /usr/local/o1_mf_2_2cx5wrjk_.log

What causes the "private strand flush not complete" message?

A:
This is not a bug, it's the expected behavior in 10gr2. The "private strand
flush not complete" is a "noise" error, and can be disregarded because it
relates to internal cache redo file management. Oracle Metalink note 372557.1
says that a "strand" is a new 10gr2 term for redo latches. It notes that a
strand is a new mechanism to assign redo latches to multiple processes, and
it's related to the log_parallelism parameter. The note says that the number
of strands depends on the cpu_count. When you switch redo logs you will see
this alert log message, since all private strands have to be flushed to the
current redo log.

-- thread: Q: HI, I'm using the Oracle 10g R2 in a server with Red Hat ES 4.0, and i received the following message in alert log "Private strand flush not complete", somebody knows this error? The part of log, where I found this error is: Fri Feb 10 10:30:52 2006 Thread 1 advanced to log sequence 5415 Current log# 8 seq# 5415 mem# 0: /db/oradata/bioprd/redo081.log Current log# 8 seq# 5415 mem# 1: /u02/oradata/bioprd/redo082.log Fri Feb 10 10:31:21 2006 Thread 1 cannot allocate new log, sequence 5416 Private strand flush not complete Current log# 8 seq# 5415 mem# 0: /db/oradata/bioprd/redo081.log Current log# 8 seq# 5415 mem# 1: /u02/oradata/bioprd/redo082.log Thread 1 advanced to log sequence 5416 Current log# 13 seq# 5416 mem# 0: /db/oradata/bioprd/redo131.log Current log# 13 seq# 5416 mem# 1: /u02/oradata/bioprd/redo132.log

Thanks,

A:
Hi,
Note:372557.1 has a brief explanation of this message.
Best Regards,

-- thread:

Q:
Hi, I`m having such info in alert_logfile... maybe some ideas or info...
"Private strand flush not complete" -- what could this possibly mean??

Thu Feb 9 22:03:44 2006
Thread 1 cannot allocate new log, sequence 387
Private strand flush not complete
  Current log# 2 seq# 386 mem# 0: /path/redo02.log
Thread 1 advanced to log sequence 387
  Current log# 3 seq# 387 mem# 0: /path/redo03.log

Thanks

A:
see
http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref4478

regards

log file switch (private strand flush incomplete)
User sessions trying to generate redo wait on this event when LGWR waits for
DBWR to complete flushing redo from IMU buffers into the log buffer; when
DBWR is complete, LGWR can then finish writing the current log, and then
switch log files.
Wait Time: 1 second
Parameters: None

Error message: Thread 1 cannot allocate new log
-----------------------------------------------
Note 1:
-------
Q:
Hi,
I am getting the error message "Thread 1 cannot allocate new log", sequence
40994.

can any one help me out , how to overcome this problem. Give me a solution. regards A: Perhaps this will provide some guidance. Rick Sometimes, you can see in your alert.log file, the following corresponding messages: Thread 1 advanced to log sequence 248 Current log# 2 seq# 248 mem# 0: /prod1/oradata/logs/redologs02.log Thread 1 cannot allocate new log, sequence 249 Checkpoint not complete

This message indicates that Oracle wants to reuse a redo log file, but the corresponding checkpoint associated is not terminated. In this case, Oracle must wait until the checkpoint is completely realized. This situation may be encountered particularly when the transactional activity is important. This situation may also be checked by tracing two statistics in the BSTAT/ESTAT report.txt file. The two statistics are: - Background checkpoint started. - Background checkpoint completed. These two statistics must not be different more than once. If this is not true, your database hangs on checkpoints. LGWR is unable to continue writing the next transactions until the checkpoints complete. Three reasons may explain this difference: - A frequency of checkpoints which is too high. - A checkpoints are starting but not completing - A DBWR which writes too slowly. The number of checkpoints completed and started as indicated by these statistics should be weighed against the duration of the bstat/estat report. Keep in mind the goal of only one log switch per hour, which ideally

should equate to one checkpoint per hour as well. The way to resolve incomplete checkpoints is through tuning checkpoints and logs: 1) Give the checkpoint process more time to cycle through the logs - add more redo log groups - increase the size of the redo logs 2) Reduce the frequency of checkpoints - increase LOG_CHECKPOINT_INTERVAL - increase size of online redo logs 3) Improve the efficiency of checkpoints enabling the CKPT process with CHECKPOINT_PROCESS=TRUE 4) Set LOG_CHECKPOINT_TIMEOUT = 0. This disables the checkpointing based on time interval. 5) Another means of solving this error is for DBWR to quickly write the dirty buffers on disk. The parameter linked to this task is: DB_BLOCK_CHECKPOINT_BATCH.

DB_BLOCK_CHECKPOINT_BATCH specifies the number of blocks which are dedicated inside the batch size for writing checkpoints. When you want to accelerate the checkpoints, it is necessary to increase this value. Note 2: ------Q: Hi All, Lets generate a good discussion thread for this database performance issue. Sometimes this message is found in the alert log generated. Thread 1 advanced to log sequence xxx Current log# 2 seq# 248 mem# 0: /df/sdfds Thread 1 cannot allocate new log, sequence xxx Checkpoint not complete I would appreciate a discussion on the following 1. What are the basic reasons for this warning 2. What is the preventive measure to be taken / Methods to detect its occurance 3. What are the post occurance measures/solutions for this. Regards A: Increase size of your redo logs.

A: Amongst other reasons, this happens when redo logs are not sized properly. A checkpoint could not be completed because a new log is trying to be allocated while it is still in use (or hasn't been archived yet). This can happen if you are running very long transactions that are producing large amounts of redo (which you did not anticipate) and the redo logs are too small to handle it. If you are not archiving, increasing the size of your logfiles should help (each log group should have at least 2 members on separate disks). Also, be aware of what type of hardware you are using. Typically, raid-5 is slower for writes than raid-1. If you are archiving and have increased the size of the redo logs, also try adding an additional arch process. I have read plenty of conflicting documentation on how to resolve this problem. One of the "solutions" is to increase the size of your logbuffer. I have not found this to be helpful (for my particular databases). In the future, make sure to monitor the ratio of redo log entries to requests (it should be around 5000 to 1). If it slips below this ratio, you may want to consider adding addtional members to your log groups and increasing their size. A: Configuring redo logs is an art and you may never archieve 100% of the time that there is no waiting for available log files. But in my opinion, the best bet for your situation is to add one (or more) redo log instead of increase the size of the redo logs. Because even if your redo logs are huge, but if your disk controller is slow, a large transaction (for example, data loading) may use up all three redo logs before the first redo log completes the archive and becomes available, thus Oracle will halt until the archive is completed.
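The checks and remedies discussed in the notes above can be sketched as
follows (the group number, file paths, and size are examples only):

```sql
-- Compare checkpoints started vs completed (should differ by at most 1):
SELECT name, value
FROM   v$sysstat
WHERE  name LIKE 'background checkpoint%';

-- Current redo log groups and their sizes:
SELECT group#, members, bytes/1024/1024 AS mb, status FROM v$log;

-- Give checkpointing more room by adding a larger group:
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u01/oradata/prod/redo04a.log',
   '/u02/oradata/prod/redo04b.log') SIZE 200M;
```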

19.71: tkcrrsarc: (WARN) Failed to find ARCH for message (message:0x10):
------------------------------------------------------------------------

-- thread 1:

Q:
tkcrrsarc: (WARN) Failed to find ARCH for message (message:0x1)
tkcrrpa: (WARN) Failed initial attempt to send ARCH message (message:0x1)

Most reports speak of a harmless message. Some reports refer to a bug
affecting Oracle versions up to 10.2.0.2.

19.72 ORA-600 12333
-------------------

thread 1:
ORA-600 [12333] is reported with three additional numeric values when a
request is being received from a network packet and the request code in the
packet is not recognized. The three additional values report the invalid
request values received.
The error may have a number of different root causes. For example, a network
error may have caused bad data to be received, or the client application may
have sent wrong data, or the data in the network buffer may have been
overwritten. Since there are many potential causes of this error, it is
essential to have a reproducible testcase to correctly diagnose the
underlying cause. If operating system network logs are available, it is
advisable to check them for evidence of network failures which may indicate
network transmission problems.

thread 2:
We just found out that it was related to the Block option DML RETURNING VALUE
in Forms 4.5. We set it to NO, and the problem was solved. Thanks anyway.

thread 3:
From: Oracle, Kalpana Malligere 05-Oct-99 22:09
Subject: Re : ORA-00600: internal error code, arguments: [12333], [0], [3],
[81], [], [], []

Hello,
An ORA-600 12333 occurs because there has been a client/server protocol
violation. There can be many reasons for this: network errors, network
hardware problems, etc. Where do you see or when do you get this error? Do
you have any idea what was going on at the time of this error? Which process
received it, i.e., was it a background or user process? Were you running
sql*loader? Does this error have any adverse impact on the application or
database? We cannot generally progress unless there is a reproducible test
case or reproducible environment. There are many bugs logged for this error
which are closed as 'could not reproduce'. In one such bug, the developer
indicated that "The problem does not normally have any bad side effects."
So suggest you try to isolate what is causing it as much as possible.
The error can be due to underlying network problems as well. It is not indicative of a problem with the database itself.

19.73: SMON: Parallel transaction recovery tried: ------------------------------------------------Note 1: -------

Q:
I was inserting 2.000.000 records in a table and the connection was killed.
In my alert file I found the following message: "SMON: Parallel transaction
recovery tried". Here is the content of the smon log file:

Redo thread mounted by this instance: 1
Oracle process number: 6
Windows thread id: 2816, image: ORACLE.EXE
*** 2006-06-29 21:33:05.484
*** SESSION ID:(5.1) 2006-06-29 21:33:05.453
*** 2006-06-29 21:33:05.484
SMON: Restarting fast_start parallel rollback
*** 2006-06-30 02:50:54.695
SMON: Parallel transaction recovery tried

A:
Hi,
This is an expected message when cleanup is occurring and you have
fast_start_parallel_rollback set to clean up rollback segments after a
failed transaction.

Note 2:
-------
You get this message if SMON failed to generate the slave servers necessary
to perform a parallel rollback of a transaction. Check the value of the
parameter FAST_START_PARALLEL_ROLLBACK (default is LOW).
LOW limits the number of rollback processes to 2 * CPU_COUNT.
HIGH limits the number of rollback processes to 4 * CPU_COUNT.
You may want to set the value of this parameter to FALSE.
Received on Wed Mar 10 2004 - 23:58:40 CST

Note 3:
-------
Q: SMON: Parallel transaction recovery tried
We found the above message in the alert_sid.log file.
A: No need to worry about it; it is an informational message. SMON started
recovery in parallel but failed, and the recovery was done in serial mode.

Note 4:

-------

The system monitor process (SMON) performs recovery, if necessary, at instance
startup. SMON is also responsible for cleaning up temporary segments that are no
longer in use and for coalescing contiguous free extents within dictionary
managed tablespaces. If any terminated transactions were skipped during instance
recovery because of file-read or offline errors, SMON recovers them when the
tablespace or file is brought back online. SMON checks regularly to see whether
it is needed. Other processes can call SMON if they detect a need for it.
With Real Application Clusters, the SMON process of one instance can perform
instance recovery for a failed CPU or instance.
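As a follow-up to the notes above: you can check and change the
FAST_START_PARALLEL_ROLLBACK setting, and follow the progress of a large
rollback, from SQL*Plus. A sketch using 10g dictionary names; test on your own
system first:

```sql
-- current setting (LOW is the default):
SQL> show parameter fast_start_parallel_rollback

-- disable parallel transaction recovery, so SMON rolls back serially:
SQL> alter system set fast_start_parallel_rollback=false;

-- follow the progress of a transaction being rolled back:
SQL> select usn, state, undoblockstotal, undoblocksdone
  2  from v$fast_start_transactions;
```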

19.74: KGX Atomic Operation:
============================

Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /dbms/tdbaaccp/ora10g/home
System name:    AIX
Node name:      pl003
Release:        3
Version:        5
Machine:        00CB560D4C00
Instance name: accptrid
Redo thread mounted by this instance: 1
Oracle process number: 16
Unix process pid: 2547914, image: oracle@pl003 (TNS V1-V3)

*** 2008-03-20 07:22:28.571
*** SERVICE NAME:(SYS$USERS) 2008-03-20 07:22:28.570
*** SESSION ID:(161.698) 2008-03-20 07:22:28.570
KGX cleanup...
KGX Atomic Operation Log 700000036eb4350
 Mutex 70000003f9adcf8(161, 0) idn 0 oper EXAM
 Cursor Parent uid 161 efd 5 whr 26 slp 0
 oper=DEFAULT pt1=700000039ce1c30 pt2=700000039ce1e18 pt3=700000039ce2338
 pt4=0 u41=0 stt=0

Note 1:
-------

Q:
Hi there,
Oracle has started using mutexes, and it is said that they are more efficient
compared to latches.

Questions:
1) What is a mutex? I know mutexes are mutual exclusions, and that they are a
concept from multi-threading. What I want to know is how this concept is
implemented in the Oracle database.

2) How are they better than latches? Both are used for low-level locking, so how
is one better than the other?

Any input is welcome.
Thanks and regards
Aman....

A:
1) Simply put, mutexes are memory structures. They are used to serialize the
access to shared structures. IMHO their most important characteristics are two.
First, they can be taken in shared or exclusive mode. Second, getting a mutex
can be done in wait or no-wait mode.
2) The main advantages over latches are that mutexes require less memory and
are faster to get and release.

A:
In Oracle, latches and mutexes are different things and are managed using
different modules: KSL* modules for latches and KGX* for mutexes.
As Chris said, general mutex operations require fewer CPU instructions than
latch operations (as they aren't as sophisticated as latches and don't maintain
get/miss counts as latches do).
But the main scalability benefit comes from the fact that there is a mutex
structure in each child cursor handle, and the mutex itself acts as the cursor
pin structure. So if you have a cursor open (or cached in the session cursor
cache) you don't need to get the library cache latch (which was previously
needed for changing the cursor pin status), but you can modify the cursor's
mutex refcount directly (with help of pointers in the open cursor state area in
the session's UGA).
Therefore you have much higher scalability when pinning/unpinning cursors (no
library cache latching needed, virtually no false contention) and no separate
pin structures need to be allocated/maintained.
A few notes:
1) library cache latching is still needed for parsing etc; the mutexes address
   only the pinning issue in the library cache
2) mutexes are currently used for library cache cursors only (not for other
   objects like PL/SQL stored procs, table defs etc)
3) as mutexes are a generic mechanism (not library cache specific) they're used
   in the V$SQLSTATS underlying structures too
4) when mutexes are enabled, you won't see cursor pins from X$KGLPN anymore (as
   X$KGLPN is a fixed table based on the KGL pin array - which wouldn't be used
   for cursors anymore)
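If you want to see whether these cursor mutexes are actually a point of
contention on a 10.2 instance, the mutex sleep statistics can be queried. A
sketch (v$mutex_sleep exists from 10.2 onwards):

```sql
-- mutex types/locations with the most sleeps (i.e. contention):
SQL> select mutex_type, location, sleeps, wait_time
  2  from v$mutex_sleep
  3  order by sleeps desc;
```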

19.75: ktsmgtur(): TUR was not tuned for 361 secs:
==================================================

[pl101][tdbaprod][/dbms/tdbaprod/prodrman/admin/dump/bdump] cat prodrman_mmnl_1011950.trc
/dbms/tdbaprod/prodrman/admin/dump/bdump/prodrman_mmnl_1011950.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /dbms/tdbaprod/ora10g/home
System name:    AIX
Node name:      pl101
Release:        3
Version:        5
Machine:        00CB85FF4C00
Instance name: prodrman
Redo thread mounted by this instance: 1
Oracle process number: 12
Unix process pid: 1011950, image: oracle@pl101 (MMNL)

*** 2008-03-25 06:58:08.841
*** SERVICE NAME:(SYS$BACKGROUND) 2008-03-25 06:58:08.811
*** SESSION ID:(105.1) 2008-03-25 06:58:08.811
ktsmgtur(): TUR was not tuned for 361 secs

What does this mean?

Note 1:
-------

TUR works along with a path checker, and if a SAN connection is lost, TUR will
complain.

Note 2:
-------

The MMNL process also tracks query lengths and is involved in tuning
undo_retention. Here it means that TUR was not tuned for 361 seconds.
If these are occasional messages, they are nothing to worry about. Otherwise you
should dive into undo tuning.

19.76: tkcrrpa: (WARN) Failed initial attempt to send ARCH message:
===================================================================

> *** SERVICE NAME:() 2008-03-22 14:56:43.590
> *** SESSION ID:(221.1) 2008-03-22 14:56:43.590
> Maximum redo generation record size = 132096 bytes
> Maximum redo generation change vector size = 98708 bytes
> tkcrrsarc: (WARN) Failed to find ARCH for message (message:0x10)
> tkcrrpa: (WARN) Failed initial attempt to send ARCH message (message:0x10)

No good answer yet.

19.77: Weird errors 1:
======================

In a trace file of an Oracle 10.2.0.3 db on AIX 5.3 we can find:

>>>> DATABASE CALLED PRODTRID: > OS pid = 3907726 > loadavg : 1.12 1.09 1.13 > swap info: free_mem = 49.16M rsv = 24.00M > alloc = 2078.75M avail = 6144.00M swap_free = 4065.25M > F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD > 240001 A tdbaprod 3907726 1 0 60 20 1cfff7400 90692 06: 00:39 - 0:00 ora_m000_prodtrid > open: Permission denied > 3907726: ora_m000_prodtrid > 0x00000001000f81e0 sskgpwwait(??, ??, ??, ??, ??) + ?? > 0x00000001000f5c54 skgpwwait(??, ??, ??, ??, ??) + 0x94 > 0x000000010010ba00 ksliwat(??, ??, ??, ??, ??, ??, ??, ??) + 0x640 > 0x0000000100116744 kslwaitns_timed(??, ??, ??, ??, ??, ??, ??, ??) + 0x24 > 0x0000000100170374 kskthbwt(0x0, 0x7000000, 0x0, 0x0, 0x15ab3c, 0x28284288 , 0xfffffff, 0x7000000) + 0x214 > 0x0000000100116884 kslwait(??, ??, ??, ??, ??, ??) + 0x84 > 0x00000001002c8fb0 ksvrdp() + 0x550 > 0x00000001041c8c34 opirip(??, ??, ??) + 0x554 > 0x0000000102ab4ba8 opidrv(??, ??, ??) + 0x448 > 0x000000010409df30 sou2o(??, ??, ??, ??) + 0x90 > 0x0000000100000870 opimai_real(??, ??) + 0x150 > 0x00000001000006d8 main(??, ??) + 0x98 > 0x0000000100000360 __start() + 0x90 > *** 2008-04-01 06:01:43.294 At other instances we find: >>>> DATABASE CALLED PRODRMAN 06:01:41 - Check for changes since lastscan in file: /dbms/tdbaprod/prodrman/adm in/dump/bdump/prodrman_cjq0_1003754.trc Warning: Errors detected in file /dbms/tdbaprod/prodrman/admin/dump/bdump/prodrm an_cjq0_1003754.trc > OS pid = 3997922 > loadavg : 1.00 1.09 1.17 > swap info: free_mem = 62.76M rsv = 24.00M > alloc = 2087.91M avail = 6144.00M swap_free = 4056.09M > F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD > 240001 A tdbaprod 3997922 1 4 62 20 1322c8400 91516 05: 43:28 - 0:00 ora_j000_prodrman > open: Permission denied > 3997922: ora_j000_prodrman > 0x00000001000f81e0 sskgpwwait(??, ??, ??, ??, ??) + ?? > 0x00000001000f5c54 skgpwwait(??, ??, ??, ??, ??) + 0x94 > 0x000000010010ba00 ksliwat(??, ??, ??, ??, ??, ??, ??, ??) 
+ 0x640
> 0x0000000100116744 kslwaitns_timed(??, ??, ??, ??, ??, ??, ??, ??) + 0x24
> 0x0000000100170374 kskthbwt(0x0, 0x0, 0x7000000, 0x7000000, 0x15ab10, 0x1, 0xfffffff, 0x7000000) + 0x214
> 0x0000000100116884 kslwait(??, ??, ??, ??, ??, ??) + 0x84
> 0x00000001021d4fcc kkjsexe() + 0x32c
> 0x00000001021d5d58 kkjrdp() + 0x478
> 0x00000001041c8bd0 opirip(??, ??, ??) + 0x4f0
> 0x0000000102ab4ba8 opidrv(??, ??, ??) + 0x448
> 0x000000010409df30 sou2o(??, ??, ??, ??) + 0x90
> 0x0000000100000870 opimai_real(??, ??) + 0x150
> 0x00000001000006d8 main(??, ??) + 0x98
> 0x0000000100000360 __start() + 0x90
> *** 2008-04-01 05:46:23.170

05:46:20 - Check for changes since lastscan in file: /dbms/tdbaprod/prodrman/admin/dump/bdump/prodrman_cjq0_1003754.trc
Warning: Errors detected in file /dbms/tdbaprod/prodrman/admin/dump/bdump/prodrman_cjq0_1003754.trc
> /dbms/tdbaprod/prodrman/admin/dump/bdump/prodrman_cjq0_1003754.trc
> Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
> With the Partitioning, OLAP and Data Mining options
> ORACLE_HOME = /dbms/tdbaprod/ora10g/home
> System name:    AIX
> Node name:      pl101
> Release:        3
> Version:        5
> Machine:        00CB85FF4C00
> Instance name: prodrman
> Redo thread mounted by this instance: 1
> Oracle process number: 10
> Unix process pid: 1003754, image: oracle@pl101 (CJQ0)
>
> *** 2008-04-01 05:46:17.709
> *** SERVICE NAME:(SYS$BACKGROUND) 2008-04-01 05:44:28.394
> *** SESSION ID:(107.1) 2008-04-01 05:44:28.394
> Waited for process J000 to initialize for 60 seconds
> *** 2008-04-01 05:46:17.709
> Dumping diagnostic information for J000:

>>>> DATABASE CALLED ACCPROSS

06:01:26 - Check for changes since lastscan in file: /dbms/tdbaaccp/accpross/admin/dump/bdump/accpross_cjq0_1970272.trc
Warning: Errors detected in file /dbms/tdbaaccp/accpross/admin/dump/bdump/accpross_cjq0_1970272.trc
> /dbms/tdbaaccp/accpross/admin/dump/bdump/accpross_cjq0_1970272.trc
> Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
> With the Partitioning, OLAP and Data Mining options
> ORACLE_HOME = /dbms/tdbaaccp/ora10g/home
> System name:    AIX
> Node name:      pl003
> Release:        3
> Version:        5
> Machine:        00CB560D4C00
> Instance name: accpross
> Redo thread mounted by this instance: 1
> Oracle process number: 10
> Unix process pid: 1970272, image: oracle@pl003 (CJQ0)
>
> *** 2008-04-01 06:01:21.210
> *** SERVICE NAME:(SYS$BACKGROUND) 2008-04-01 06:00:48.099
> *** SESSION ID:(217.1) 2008-04-01 06:00:48.099
> Waited for process J001 to initialize for 60 seconds
> *** 2008-04-01 06:01:21.210
> Dumping diagnostic information for J001:
> OS pid = 3645448
> loadavg : 1.28 1.18 1.16
> swap info: free_mem = 107.12M rsv = 24.00M
>            alloc = 3749.61M avail = 6144.00M swap_free = 2394.39M
> F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
> 240001 A tdbaaccp 3645448 1 8 64 20 7566c510 91844 05:59:48 - 0:00 ora_j001_accpross
> open: Permission denied
> 3645448: ora_j001_accpross
> 0x00000001000f81e0 sskgpwwait(??, ??, ??, ??, ??) + ??
> 0x00000001000f5c54 skgpwwait(??, ??, ??, ??, ??) + 0x94
> 0x000000010010ba00 ksliwat(??, ??, ??, ??, ??, ??, ??, ??) + 0x640
> 0x0000000100116744 kslwaitns_timed(??, ??, ??, ??, ??, ??, ??, ??) + 0x24
> 0x0000000100170374 kskthbwt(0x0, 0x0, 0x7000000, 0x7000000, 0x16656c, 0x1, 0xfffffff, 0x7000000) + 0x214
> 0x0000000100116884 kslwait(??, ??, ??, ??, ??, ??) + 0x84
> 0x00000001021d4fcc kkjsexe() + 0x32c
> 0x00000001021d5d58 kkjrdp() + 0x478
> 0x00000001041c8bd0 opirip(??, ??, ??) + 0x4f0
> 0x0000000102ab4ba8 opidrv(??, ??, ??) + 0x448
> 0x000000010409df30 sou2o(??, ??, ??, ??) + 0x90
> 0x0000000100000870 opimai_real(??, ??) + 0x150
> 0x00000001000006d8 main(??, ??) + 0x98
> 0x0000000100000360 __start() + 0x90
> *** 2008-04-01 06:01:26.792

>>>> DATABASE CALLED PRODROSS

05:15:00 - Check for changes since lastscan in file: /dbms/tdbaprod/prodross/admin/dump/bdump/prodross_cjq0_2068516.trc
Warning: Errors detected in file /dbms/tdbaprod/prodross/admin/dump/bdump/prodross_cjq0_2068516.trc
> /dbms/tdbaprod/prodross/admin/dump/bdump/prodross_cjq0_2068516.trc
> Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
> With the Partitioning, OLAP and Data Mining options
> ORACLE_HOME = /dbms/tdbaprod/ora10g/home
> System name:    AIX
> Node name:      pl101
> Release:        3
> Version:        5
> Machine:        00CB85FF4C00
> Instance name: prodross
> Redo thread mounted by this instance: 1
> Oracle process number: 10
> Unix process pid: 2068516, image: oracle@pl101 (CJQ0)

> *** 2008-04-01 05:13:52.362
> *** SERVICE NAME:(SYS$BACKGROUND) 2008-04-01 05:11:46.862
> *** SESSION ID:(217.1) 2008-04-01 05:11:46.861
> Waited for process J000 to initialize for 60 seconds
> *** 2008-04-01 05:13:52.362
> Dumping diagnostic information for J000:
> OS pid = 1855710
> loadavg : 1.08 1.15 1.20
> swap info: free_mem = 63.91M rsv = 24.00M
>            alloc = 2110.61M avail = 6144.00M swap_free = 4033.39M
> F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
> 240001 A tdbaprod 1855710 1 4 66 22 1cb2f5400 92672 05:10:46 - 0:00 ora_j000_prodross
> open: Permission denied
> 1855710: ora_j000_prodross
> 0x00000001000f81e0 sskgpwwait(??, ??, ??, ??, ??) + ??
> 0x00000001000f5c54 skgpwwait(??, ??, ??, ??, ??) + 0x94
> 0x000000010010ba00 ksliwat(??, ??, ??, ??, ??, ??, ??, ??) + 0x640
> 0x0000000100116744 kslwaitns_timed(??, ??, ??, ??, ??, ??, ??, ??) + 0x24
> 0x0000000100170374 kskthbwt(0x0, 0x0, 0x7000000, 0x7000000, 0x15aab2, 0x1, 0xfffffff, 0x7000000) + 0x214
> 0x0000000100116884 kslwait(??, ??, ??, ??, ??, ??) + 0x84
> 0x00000001021d4fcc kkjsexe() + 0x32c
> 0x00000001021d5d58 kkjrdp() + 0x478
> 0x00000001041c8bd0 opirip(??, ??, ??) + 0x4f0
> 0x0000000102ab4ba8 opidrv(??, ??, ??) + 0x448
> 0x000000010409df30 sou2o(??, ??, ??, ??) + 0x90
> 0x0000000100000870 opimai_real(??, ??) + 0x150
> 0x00000001000006d8 main(??, ??) + 0x98
> 0x0000000100000360 __start() + 0x90
> *** 2008-04-01 05:13:59.017

06:01:42 - Check for changes since lastscan in file: /dbms/tdbaprod/prodroca/admin/dump/bdump/prodroca_cjq0_757946.trc
Warning: Errors detected in file /dbms/tdbaprod/prodroca/admin/dump/bdump/prodroca_cjq0_757946.trc
> OS pid = 1867996
> loadavg : 1.00 1.09 1.17
> swap info: free_mem = 66.71M rsv = 24.00M
>            alloc = 2087.91M avail = 6144.00M swap_free = 4056.09M
> F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
> 240001 A tdbaprod 1867996 1 3 65 22 1078c5400 92656 05:44:06 - 0:00 ora_j000_prodroca
> open: Permission denied
> 1867996: ora_j000_prodroca
> 0x00000001000f81e0 sskgpwwait(??, ??, ??, ??, ??) + ??
> 0x00000001000f5c54 skgpwwait(??, ??, ??, ??, ??) + 0x94
> 0x000000010010ba00 ksliwat(??, ??, ??, ??, ??, ??, ??, ??) + 0x640
> 0x0000000100116744 kslwaitns_timed(??, ??, ??, ??, ??, ??, ??, ??) + 0x24
> 0x0000000100170374 kskthbwt(0x0, 0x0, 0x7000000, 0x7000000, 0x15ab10, 0x1, 0xfffffff, 0x7000000) + 0x214
> 0x0000000100116884 kslwait(??, ??, ??, ??, ??, ??) + 0x84
> 0x00000001021d4fcc kkjsexe() + 0x32c
> 0x00000001021d5d58 kkjrdp() + 0x478
> 0x00000001041c8bd0 opirip(??, ??, ??) + 0x4f0
> 0x0000000102ab4ba8 opidrv(??, ??, ??) + 0x448
> 0x000000010409df30 sou2o(??, ??, ??, ??) + 0x90
> 0x0000000100000870 opimai_real(??, ??) + 0x150
> 0x00000001000006d8 main(??, ??) + 0x98
> 0x0000000100000360 __start() + 0x90
> *** 2008-04-01 05:46:23.398

06:01:42 - Check for changes since lastscan in file: /dbms/tdbaprod/prodtrid/admin/dump/bdump/prodtrid_mmon_921794.trc
Warning: Errors detected in file /dbms/tdbaprod/prodtrid/admin/dump/bdump/prodtrid_mmon_921794.trc
> /dbms/tdbaprod/prodtrid/admin/dump/bdump/prodtrid_mmon_921794.trc
> Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
> With the Partitioning, OLAP and Data Mining options
> ORACLE_HOME = /dbms/tdbaprod/ora10g/home
> System name:    AIX
> Node name:      pl101
> Release:        3
> Version:        5
> Machine:        00CB85FF4C00
> Instance name: prodtrid
> Redo thread mounted by this instance: 1
> Oracle process number: 11
> Unix process pid: 921794, image: oracle@pl101 (MMON)
>
> *** 2008-04-01 06:01:39.797
> *** SERVICE NAME:(SYS$BACKGROUND) 2008-04-01 06:01:39.385
> *** SESSION ID:(106.1) 2008-04-01 06:01:39.385
> Waited for process m000 to initialize for 60 seconds
> *** 2008-04-01 06:01:39.797
> Dumping diagnostic information for m000:

06:01:42 - Check for changes since lastscan in file: /dbms/tdbaprod/prodrman/admin/dump/bdump/alert_prodrman.log
06:01:42 - Check for changes since lastscan in file: /dbms/tdbaprod/prodrman/admin/dump/udump/sbtio.log
06:01:42 - Check for changes since lastscan in file: /dbms/tdbaprod/prodroca/admin/dump/bdump/alert_prodroca.log
06:01:42 - Check for changes since lastscan in file: /dbms/tdbaprod/prodroca/admin/dump/udump/sbtio.log
06:01:42 - Check for changes since lastscan in file: /dbms/tdbaprod/prodross/admin/dump/bdump/alert_prodross.log
06:01:42 - Check for changes since lastscan in file: /dbms/tdbaprod/prodross/admin/dump/udump/sbtio.log
06:01:42 - Check for changes since lastscan in file: /dbms/tdbaprod/prodslot/admin/dump/bdump/alert_prodslot.log
06:01:42 - Check for changes since lastscan in file: /dbms/tdbaprod/prodslot/admin/dump/udump/sbtio.log
06:01:42 - Check for changes since lastscan in file: /dbms/tdbaprod/prodtrid/admin/dump/bdump/alert_prodtrid.log

06:01:42 - Check for changes since lastscan in file: /dbms/tdbaprod/prodtrid/admin/dump/udump/sbtio.log
06:01:42 - Check for changes since lastscan in file: /dbms/tdbaprod/ora10g/home/network/log/listener.log
           File /dbms/tdbaprod/ora10g/home/network/log/listener.log is changed, but no errors detected

Note 1:
-------

Q:
Hi, we're running Oracle 10 on AIX 5.3 TL04. We're experiencing some troubles
with paging space. We've got 7 GB real mem and 10 GB paging space, and sometimes
the paging space occupation increases and it "freezes" the server (no telnet nor
console connection). We've seen Oracle has shown this error:

CODE
*** 2007-06-18 11:16:49.696
Dump diagnostics for process q002 pid 786600 which did not start after 120 seconds:
(spawn_time:x10BF1F175 now:x10BF3CB36 diff:x1D9C1)
*** 2007-06-18 11:16:54.668
Dumping diagnostic information for q002:
OS pid = 786600
loadavg : 0.07 0.27 0.28
swap info: free_mem = 9.56M rsv = 40.00M
           alloc = 4397.23M avail = 10240.00M swap_free = 5842.77M
skgpgpstack: fgets() timed out after 60 seconds
skgpgpstack: pclose() timed out after 60 seconds
ERROR: process 786600 is not alive
*** 2007-06-18 11:19:41.152
*** 2007-06-18 11:27:36.403
Process startup failed, error stack:
ORA-27300: OS system dependent operation:fork failed with status: 12
ORA-27301: OS failure message: Not enough space
ORA-27302: failure occurred at: skgpspawn3

So we think it's Oracle's fault, but we're not sure. We're AIX guys, not Oracle
guys, so we're not sure about this. Can anyone confirm if this is caused by
Oracle?

A:
Looks like a bug. We are running on a Windows 2003 Server Standard edition. I
had the same problem. The server was not responding anymore after the following
errors:

ORA-27300: OS system dependent operation:spcdr:9261:4200 failed with status: 997
ORA-27301: OS failure message: Overlapped I/O operation is in progress.
ORA-27302: failure occurred at: skgpspawn

And later:
O/S-Error: (OS 1450) Insufficient system resources exist to complete the requested service.

We are running the latest patchset 10.2.0.2 because of a big problem in 10.2.0.1
(wrong parsing causes client memory problems; Pro*COBOL, PL/SQL Developer etc.
crash because Oracle made mistakes skipping the parse process, going directly to
execute, and returning corrupted data to the client).
Tomorrow I will raise a level 1 TAR indicating we had a crash. The server is now
running normally.

A:
Oracle finally admitted there was a bug:
BUG 5607984 ORACLE DOES NOT CLOSE TCP CONNECTIONS. REMAINS IN CLOSE_WAIT STATE.
[On Windows 32-bit].
Patch 10 (patch number 5639232) is supposed to solve the problem for 10.2.0.2.0.
We applied it Monday morning and everything is fine up to now.
This bug is also supposed to be solved in the 10.2.0.3.0 patchset that is
available on the Metalink site.

Note 2:
-------

Q:
My bdump received two error message traces this morning. One of the traces
displays a lot of detail, mainly as:

*** SESSION ID:(822.1) 2007-02-11 00:35:06.147
Waited for process J000 to initialize for 60 seconds
*** 2007-02-11 00:35:20.276
Dumping diagnostic information for J000:
OS pid = 811172
loadavg : 0.55 0.42 0.44
swap info: free_mem = 3.77M rsv = 24.50M
           alloc = 2418.36M avail = 6272.00M swap_free = 3853.64M
F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
240001 A oracle 811172 1 0 60 20 5bf12400 86396 00:34:32 - 0:00 ora_j000_BAAN
open: The file access permissions do not allow the specified action.

Then a whole bunch of pointers and something like this:
"0x0000000100055800 kghbshrt(??, ??, ??, ??, ??, ??) + 0x80"

How do I find out what really went wrong? This error occurred after I did an
export pump of the DB, about 10 minutes later. This is the first time I saw
such an error, and the export pump has run for a year.
My system is Oracle 10g R2 on AIX 5.3L
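Not an answer from the threads themselves, but when spawn/fork errors such as
ORA-27300 appear, it can also help to compare the instance's process usage
against its configured limits. A sketch (10g view and column names):

```sql
-- how close is the instance to its processes/sessions limits?
SQL> select resource_name, current_utilization, max_utilization, limit_value
  2  from v$resource_limit
  3  where resource_name in ('processes','sessions');
```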

Note 3:
-------

At least here you have an explanation of the Oracle background processes:

pmon
The process monitor performs process recovery when a user process fails. PMON is
responsible for cleaning up the cache and freeing resources that the process was
using. PMON also checks on the dispatcher processes (described later in this
table) and server processes and restarts them if they have failed.

mman
Used for internal database tasks.

dbw0
The database writer writes modified blocks from the database buffer cache to the
datafiles. Oracle Database allows a maximum of 20 database writer processes
(DBW0-DBW9 and DBWa-DBWj). The initialization parameter DB_WRITER_PROCESSES
specifies the number of DBWn processes. The database selects an appropriate
default setting for this initialization parameter (or might adjust a user
specified setting) based upon the number of CPUs and the number of processor
groups.

lgwr
The log writer process writes redo log entries to disk. Redo log entries are
generated in the redo log buffer of the system global area (SGA), and LGWR
writes the redo log entries sequentially into a redo log file. If the database
has a multiplexed redo log, LGWR writes the redo log entries to a group of redo
log files.

ckpt
At specific times, all modified database buffers in the system global area are
written to the datafiles by DBWn. This event is called a checkpoint. The
checkpoint process is responsible for signalling DBWn at checkpoints and
updating all the datafiles and control files of the database to indicate the
most recent checkpoint.

smon
The system monitor performs recovery when a failed instance starts up again. In
a Real Application Clusters database, the SMON process of one instance can
perform instance recovery for other instances that have failed. SMON also cleans
up temporary segments that are no longer in use and recovers dead transactions
skipped during system failure and instance recovery because of file-read or
offline errors. These transactions are eventually recovered by SMON when the
tablespace or file is brought back online.

reco
The recoverer process is used to resolve distributed transactions that are
pending due to a network or system failure in a distributed database. At timed
intervals, the local RECO attempts to connect to remote databases and
automatically complete the commit or rollback of the local portion of any
pending distributed transactions.

cjq0
Job Queue Coordinator (CJQ0). Job queue processes are used for batch processing.
The CJQ0 process dynamically spawns job queue slave processes (J000...J999) to
run the jobs.

d000
Dispatchers are optional background processes, present only when the shared
server configuration is used.

s000
Dunno.

qmnc
Queue monitor background process. A queue monitor process which monitors the
message queues. Used by Oracle Streams Advanced Queuing.

mmon
Performs various manageability-related background tasks.

mmnl
Performs frequent and light-weight manageability-related tasks, such as session
history capture and metrics computation.

j000
A job queue slave. (See cjq0)

Addition:
---------

Sep 13, 2006
Oracle Background Processes, incl. 10gR2

--------------------
New in 10gR2
--------------------

PSP0 (new in 10gR2) - Process SPawner - to create and manage other Oracle
processes. NOTE: There is no documentation currently in the Oracle Documentation
set on this process.

LNS1 (new in 10gR2) - a network server process used in a Data Guard (primary)
database.
Further explanation from "What's New in Oracle Data Guard?" in the Oracle Data
Guard Concepts and Administration 10g Release 2 (10.2):
"During asynchronous redo transmission, the network server (LNSn) process
transmits redo data out of the online redo log files on the primary database and
no longer interacts directly with the log writer process. This change in
behavior allows the log writer (LGWR) process to write redo data to the current
online redo log file and continue processing the next request without waiting
for inter-process communication or network I/O to complete."

--------------------
New in 10gR1
--------------------

MMAN - Memory MANager - it serves as SGA Memory Broker and coordinates the
sizing of the memory components, keeping track of the sizes of the components
and pending resize operations. Used by the Automatic Shared Memory Management
feature.

RVWR - Recovery Writer - is responsible for writing flashback logs, which store
pre-image(s) of data blocks. It is used by the Flashback Database feature in
10g, which provides a way to quickly revert an entire Oracle database to the
state it was in at a past point in time.
- This is different from traditional point in time recovery.
- One can use Flashback Database to back out changes that:
  - Have resulted in logical data corruptions.
  - Are a result of user error.
- This feature is not applicable for recovering the database in case of media
  failure.
- The time required for flashbacking a database to a specific time in the past
  is DIRECTLY PROPORTIONAL to the number of changes made, and not to the size
  of the database.

Jnnn - Job queue processes which are spawned as needed by CJQ0 to complete
scheduled jobs. This is not a new process.

CTWR - Change Tracking Writer (CTWR), which works with the new block change
tracking features in 10g for fast RMAN incremental backups.

MMNL - Memory Monitor Light process - which works with the Automatic Workload
Repository new features (AWR) to write out full statistics buffers to disk as
needed.

MMON - Memory MONitor (MMON) process - is associated with the Automatic Workload
Repository new features used for automatic problem detection and self-tuning.
MMON writes out the required statistics for AWR on a scheduled basis.

M000 - MMON background slave (m000) processes.

CJQn - Job Queue monitoring process - which is initiated with the
job_queue_processes parameter. This is not new.

RBAL - It is the ASM related process that performs rebalancing of disk resources
controlled by ASM.

ARBx - These processes are managed by the RBAL process and are used to do the
actual rebalancing of ASM controlled disk resources. The number of ARBx
processes invoked is directly influenced by the asm_power_limit parameter.

ASMB - is used to provide information to and from the Cluster Synchronization
Services used by ASM to manage the disk resources.

It is also used to update statistics and provide a heartbeat mechanism.

Changes about Queue Monitor Processes:
The QMON processes are optional background processes for Oracle Streams Advanced
Queueing (AQ) which monitor and maintain all the system and user owned AQ
objects. These optional processes, like the job_queue processes, do not cause
the instance to fail on process failure. They provide the mechanism for message
expiration, retry, and delay, maintain queue statistics, remove processed
messages from the queue table and maintain the dequeue IOT.

QMNx - Pre-10g QMON Architecture
The number of queue monitor processes is controlled via the dynamic
initialisation parameter AQ_TM_PROCESSES. If this parameter is set to a non-zero
value X, Oracle creates that number of QMNx processes starting from
ora_qmn0_<SID> (where <SID> is the identifier of the database) up to
ora_qmnX_<SID>; if the parameter is not specified or is set to 0, then QMON
processes are not created. There can be a maximum of 10 QMON processes running
on a single instance. For example, the parameter can be set in the init.ora as
follows:

  aq_tm_processes=1

or set dynamically via:

  alter system set aq_tm_processes=1;

QMNC & Qnnn - 10g QMON Architecture
Beginning with release 10.1, the architecture of the QMON processes has been
changed to an automatically controlled coordinator/slave architecture. The Queue
Monitor Coordinator, ora_qmnc_<SID>, dynamically spawns slaves named
ora_qXXX_<SID>, depending on the system load, up to a maximum of 10 in total.
For version 10.01.XX.XX onwards it is no longer necessary to set AQ_TM_PROCESSES
when Oracle Streams AQ or Streams is used.
However, if you do specify a value, then that value is taken into account.
However, the number of qXXX processes can be different from what was specified
by AQ_TM_PROCESSES.
If AQ_TM_PROCESSES is not specified in versions 10.1 and above, QMNC only runs
when you have AQ objects in your database.
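The background processes described in the notes above can also be listed from
the instance itself, for example via v$bgprocess (the paddr <> '00' filter
restricts the output to processes that are actually running):

```sql
-- background processes currently running in this instance:
SQL> select name, description
  2  from v$bgprocess
  3  where paddr <> '00'
  4  order by name;
```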

19.78: ORA-00600: internal error code, arguments: [13080], [], [], [], [], [], [], []:
======================================================================================

When running a statement of the form ALTER TABLE ... ENABLE CONSTRAINT, this
ORA-00600 error can appear.

19.79: WARNING: inbound connection timed out (ORA-3136):
========================================================

Note 1:
-------

Q:
WARNING: inbound connection timed out (ORA-3136). This error is appearing in the
alert log. Please explain the following:
1. How to overcome this error?
2. Is there any adverse effect in the long run?
3. Is it required to SHUTDOWN the DATABASE to solve it?

A:
A good discussion at freelists.org:
http://www.freelists.org/archives/oracle-l/08-2005/msg01627.html
In 10gR2, the SQLNET.INBOUND_CONNECT_TIMEOUT parameters were set to have a
default of 60 (seconds). Set the parameters SQLNET.INBOUND_CONNECT_TIMEOUT and
INBOUND_CONNECT_TIMEOUT_listenername to 0 (indefinite).

A:
What the error is telling you is that a connection attempt was made, but the
session authentication was not provided before SQLNET.INBOUND_CONNECT_TIMEOUT
seconds.
As far as adverse effects in the long run: you have a user or process that is
unable to connect to the database. So someone is unhappy about the
database/application.
Before setting SQLNET.INBOUND_CONNECT_TIMEOUT, verify that there is not a
firewall or Network Address Translation (NAT) between the client and server.
Those are common causes for ORA-3136.

Q:
Subject: WARNING: inbound connection timed out (ORA-3136)
I have been getting like 50 of these error messages a day in my alert_log the
past couple of days. Anybody know what they mean?
WARNING: inbound connection timed out (ORA-3136)

A:
Yep this is annoying, especially if you have alert log monitors :(.
I had these when I first went to 10G... make these changes to get rid of them:

Listener.ora:
  INBOUND_CONNECT_TIMEOUT_<LISTENER_NAME>=0   .. for every listener

Sqlnet.ora:
  SQLNET.INBOUND_CONNECT_TIMEOUT=0

Then the errors stop...

Note 2:
-------

SQLNET.INBOUND_CONNECT_TIMEOUT

Purpose:
Use the SQLNET.INBOUND_CONNECT_TIMEOUT parameter to specify the time, in
seconds, for a client to connect with the database server and provide the
necessary authentication information. If the client fails to establish a
connection and complete authentication in the time specified, then the database
server terminates the connection. In addition, the database server logs the IP
address of the client and an ORA-12170: TNS:Connect timeout occurred error
message to the sqlnet.log file. The client receives either an ORA-12547:
TNS:lost contact or an ORA-12637: Packet receive failed error message.
Without this parameter, a client connection to the database server can stay
open indefinitely without authentication. Connections without authentication
can introduce possible denial-of-service attacks, whereby malicious clients
attempt to flood database servers with connect requests that consume resources.
To protect both the database server and the listener, Oracle Corporation
recommends setting this parameter in combination with the
INBOUND_CONNECT_TIMEOUT_listener_name parameter in the listener.ora file.
When specifying values for these parameters, consider the following
recommendations:
Set both parameters to an initial low value. Set the value of the
INBOUND_CONNECT_TIMEOUT_listener_name parameter to a lower value than the
SQLNET.INBOUND_CONNECT_TIMEOUT parameter.
For example, you can set INBOUND_CONNECT_TIMEOUT_listener_name to 2 seconds and
the INBOUND_CONNECT_TIMEOUT parameter to 3 seconds. If clients are unable to
complete connections within the specified time due to system or network delays
that are normal for the particular environment, then increment the time as
needed.
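Following that recommendation, a minimal configuration sketch (assuming a
listener simply named LISTENER; adjust the name and the values to your
environment, and reload or restart the listener after the change):

```
# listener.ora on the server:
INBOUND_CONNECT_TIMEOUT_LISTENER = 2

# sqlnet.ora on the server:
SQLNET.INBOUND_CONNECT_TIMEOUT = 3
```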

19.80 How to insert special symbols:
====================================
Note 1:
-------
Q: Hi, does anyone who know how to insert a value containing "&" into a table?
Something like this:

insert into test_tab (test_field) values ('&test');

I tried '&test' and many more, but none of them works :-(

As far as I know, Oracle tries to bind a value when it encounters '&sth'...
thanks in advance

A: Try:

set define off

Then execute your insert.

19.81: SGA POLICY: Cache below reserve getting from component1:
===============================================================

19.82: AUTO SGA: Not free:
==========================
Q: Hi, we have 10gR2 on Windows Server 2003 Standard Edition.
The below errors are generated in mman trace files every now and then:

AUTO SGA: Not free 0x2DFE78A8, 4, 1, 0
AUTO SGA: Not free 0x2DFE78A8, 4, 1, 0
AUTO SGA: Not free 0x2DFE795C, 4, 1, 0
AUTO SGA: Not free 0x2DFE7A10, 4, 1, 0
AUTO SGA: Not free 0x2DFE7AC4, 4, 1, 0
AUTO SGA: Not free 0x2DFE7B78, 4, 1, 0
AUTO SGA: Not free 0x2DFE7C2C, 4, 1, 0
AUTO SGA: Not free 0x2DFE7CE0, 4, 1, 0
AUTO SGA: Not free 0x2DFE7D94, 4, 1, 0
AUTO SGA: Not free 0x2DFF2708, 4, 1, 0

Metalink doesn't give much info either (Bug 5201883 for your reference).
Did anybody happen to come across this issue and probably resolve it?
Any comments are appreciated.

A: This can be safely ignored. Since ASMM (Automatic Shared Memory Management)
is enabled at instance level, you might be hitting this bug.
Check Metalink note 394026.1.

A: As stated in the bug description, either
1) ignore the messages and delete generated trace files periodically, and/or

2) wait for patchset 10.2.0.4

19.83: ORA-06512:
=================
Error:  ORA-06512: at line <number>
Cause:  This error message indicates the line number in the PL/SQL code at
        which the error occurred.
Action: For example, if you had the following PL/SQL code:

declare
  v_number number(2);
begin
  v_number := 100;
end;

You would receive the following error messages:

ORA-06502: PL/SQL: numeric or value error: number precision too large
ORA-06512: at line 4

So it tells you at which line the error is.

19.84: ORA-06512: at "SYS.DBMS_CDC_UTILITY":
============================================
Errormessage in trace file:

> /dbms/tdbaplay/playroca/admin/dump/udump/playroca_ora_1667142.trc
> Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
> With the Partitioning, OLAP and Data Mining options
> ORACLE_HOME = /dbms/tdbaplay/ora10g/home
> System name:    AIX
> Node name:      pl003
> Release:        3
> Version:        5
> Machine:        00CB560D4C00
> Instance name: playroca
> Redo thread mounted by this instance: 1
> Oracle process number: 27
> Unix process pid: 1667142, image: oracleplayroca@pl003
>
> *** SERVICE NAME:(SYS$USERS) 2008-05-28 12:40:22.399
> *** SESSION ID:(530.27892) 2008-05-28 12:40:22.399
> oracle.jdbc.driver.OracleSQLException: ORA-01405: fetched column value is NULL
> ORA-06512: at "SYS.DBMS_CDC_UTILITY", line 226
> ORA-06512: at line 1
>   at oracle.jdbc.driver.T2SConnection.check_error(T2SConnection.java:153)
>   at oracle.jdbc.driver.T2SCallableStatement.checkError(T2SCallableStatement.java:92)
>   at oracle.jdbc.driver.T2SCallableStatement.executeForRows(T2SCallableStatement.java:449)
>   at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1294)
>   at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3514)
>   at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3620)
>   at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:5261)
>   at oracle.CDC.SubscriptionWindow.qccsero(SubscriptionWindow.java:759)
>   at oracle.CDC.SubscriptionWindow.extendWindow(SubscriptionWindow.java:337)
>   at oracle.CDC.SubscribeApi.extendWindow(SubscribeApi.java:428)
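As a side note on reading ORA-06512 chains like the ones above: from 10g
onwards you can capture the full error stack and line-number backtrace from
within PL/SQL itself via DBMS_UTILITY. A minimal sketch (run with SET
SERVEROUTPUT ON; the variable is just the example from section 19.83):

```sql
declare
  v_number number(2);
begin
  v_number := 100;   -- raises ORA-06502: number precision too large
exception
  when others then
    -- the ORA-xxxxx message itself
    dbms_output.put_line(dbms_utility.format_error_stack);
    -- the "ORA-06512: at line N" chain, without re-raising the error
    dbms_output.put_line(dbms_utility.format_error_backtrace);
end;
/
```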

19.85: TNS-12519: TNS:no appropriate service handler found:
===========================================================
Note 1:
-------
> Hi all, we have an Oracle 10 (version details below) on Linux (RHEL 4).
> When connecting via JDBC we get intermittent ORA-12519 (reflected as
> TNS-12519 in listener.log). sqlldr also has a problem, although at the
> moment I can't exactly determine whether it's the same (I'm guessing it is
> because they happen about the same time).
> Research on the web revealed that a too low value for "processes" might be
> the reason. (The other possible cause I found was non-matching versions of
> DB and client, but this is not the case here.)
> So we increased DB param "parallel_max_servers" to 200. Since the error
> still showed up we went up to 400. It's been quiet since the last change of
> this parameter on Tuesday, but some minutes ago I got an email notification
> that the error occurred again.
> I rather not want to increase the value by trial and error, since we have
> only 36 sessions on the database right now and there seems to be a
> discrepancy between parameter "processes" (at 150 now, the value is derived
> from "parallel_max_servers") and the actual # of processes. Also the system
> is not much utilized and there's enough free resources (CPU wise and
> memory wise). So I'd like to first find out what is causing this error
> before I take further measures.
> I checked the alert log but there were no significant entries. I checked
> job scheduling to check whether there might be a job that eats up
> connections, but no. I guess switching on some trace might be helpful, but
> at the moment I don't have an idea which one would be appropriate.
> Any ideas? Thanks for any insights!


I fought this battle earlier this month. The problem is that the more recent
version listeners "count" the incoming connection requests. When the count
would exceed the PROCESSES value, the ORA-12519 error is raised. The problem
is that the listener does not really count the disconnections; it learns of
them only periodically. If/when you have many, many short-lived connections
you can see this error.

I had found this article which basically states the same:
http://forums.oracle.com/forums/thread.jspa?threadID=360226&tstart=0

Note 2:
-------
This is an issue your DBA needs to address; it is not due to your client
application. Here is a Metalink note I found - for further information you
will need to contact support.

Article-ID: Note 240710.1
Title:      Intermittent TNS-12516 or TNS-12519 Errors Connecting Via Net

Symptom(s)
~~~~~~~~~~
Client connections may intermittently fail with either of the following errors:

TNS-12516 TNS:listener could not find instance with matching protocol stack
TNS-12519 TNS:no appropriate service handler found

Additionally, a TNS-12520 error may appear in the listener log:
TNS:listener could not find available handler for requested type of server

The output of the lsnrctl services command may show that the service handler
is in a "blocked" state, e.g. '"DEDICATED" established:1 refused:0 state:blocked'

Change(s)
~~~~~~~~~~
None necessarily. Perhaps an increase in load.

Cause
~~~~~~~
By way of instance registration, PMON is responsible for updating the listener
with information about a particular instance, such as load and dispatcher
information. Maximum load for dedicated connections is determined by the
PROCESSES parameter. The frequency at which PMON provides SERVICE_UPDATE
information varies according to the workload of the instance. The maximum
interval between these service updates is 10 minutes.

The listener counts the number of connections it has established to the
instance, but does not immediately get information about connections that
have terminated. Only when PMON updates the listener via SERVICE_UPDATE is
the listener informed of the current load. Since this can take as long as
10 minutes, there can be a difference between the current instance load
according to the listener and the actual instance load.

When the listener believes the current number of connections has reached
maximum load, it may set the state of the service handler for an instance
to "blocked" and begin refusing incoming client connections with either of
the following errors:

TNS-12516 TNS:listener could not find instance with matching protocol stack
TNS-12519 TNS:no appropriate service handler found

Additionally, a TNS-12520 error may appear in the listener log.
The output of lsnrctl services may show that the service handler is "blocked",
e.g. '"DEDICATED" established:1 refused:0 state:blocked'

Fix
~~~~
Increase the value for PROCESSES.

Note 3:
-------
Q: Hi all, I installed 10g Release 2 on Windows 2003... My application was
working fine... Recently I got an error:

TNS-12519: TNS:no appropriate service handler found

I checked my listener, it is working fine... but after some time the
connections start working again... What was the issue?
Regards, Mani

A: It could be a system load issue. If the problem happens again, try
executing "lsnrctl services". If you see something like the following:

Service "test10" has 1 instance(s).
  Instance "test10", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:blocked
         LOCAL SERVER

then you may want to increase the PROCESSES initialization parameter.
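A sketch of how the PROCESSES fix could be checked and applied. The value 300
is only an example; pick one that fits your workload, and note that PROCESSES
is a static parameter, so with an spfile the change requires an instance
restart:

```sql
-- current setting and observed high-water marks
show parameter processes

select resource_name, current_utilization, max_utilization
from   v$resource_limit
where  resource_name in ('processes', 'sessions');

-- raise the limit, then restart the instance for it to take effect
alter system set processes=300 scope=spfile;
```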

19.86. ORA-00313: open failed for members of log group 1 of thread 1:
=====================================================================
Note 1:
-------
Subject:  Loss Of Online Redo Log And ORA-312 And ORA-313
Doc ID:   117481.1   Type: BULLETIN
Modified Date: 04-AUG-2008   Status: PUBLISHED

Scenario
--------
You have a database in archive log mode, shutdown immediate and deleted one
of the online redo logs; in this case there are only 2 groups with 1 log
member in each. When you try to open the database you receive the following
errors:

ora-313 open failed for members of log group 2 of thread 1
ora-312 online log 2 thread 1 'filename'

It is not possible to recover the missing log, so the following needs to be
performed. Mount the database and check v$log to see if the deleted log is
current:

- If the log is not current, simply drop the log group
  (alter database drop logfile group N). If there are only 2 log groups, then
  it will be necessary to add another group before dropping this one.

- If the log is current, simply perform fake recovery and then open resetlogs:

  connect internal
  startup mount
  recover database until cancel;    (cancel immediately)
  alter database open resetlogs;

The database will open up as required, providing the log file directory is
available. If not available, then create it and rerun the resetlogs.
This will give error ora-344 unable to recreate online log <name>.

search words
------------
lost online redo

ora-312
ora-313

Recovering After the Loss of Online Redo Log Files: Scenarios
=============================================================
If a media failure has affected the online redo logs of a database, then the
appropriate recovery procedure depends on the following:
- The configuration of the online redo log: mirrored or non-mirrored
- The type of media failure: temporary or permanent
- The types of online redo log files affected by the media failure:
  current, active, unarchived, or inactive

1) Recovering After Losing a Member of a Multiplexed Online Redo Log Group
--------------------------------------------------------------------------
If the online redo log of a database is multiplexed, and if at least one
member of each online redo log group is not affected by the media failure,
then the database continues functioning as normal, but error messages are
written to the log writer trace file and the alert_SID.log of the database.

ACTION PLAN
===========
If the hardware problem is temporary, then correct it. The log writer process
accesses the previously unavailable online redo log files as if the problem
never existed.
If the hardware problem is permanent, then drop the damaged member and add a
new member by using the following procedure.

To replace a damaged member of a redo log group:
================================================
Locate the filename of the damaged member in V$LOGFILE. The status is INVALID
if the file is inaccessible:

SQL> SELECT GROUP#, STATUS, MEMBER FROM V$LOGFILE WHERE STATUS='INVALID';

GROUP#  STATUS   MEMBER
------  -------  -------------------------------
0002    INVALID  /oracle/oradata/trgt/redo02.log

++ Drop the damaged member. For example, to drop member redo02.log from
group 2, issue:

SQL> ALTER DATABASE DROP LOGFILE MEMBER '/oracle/oradata/trgt/redo02.log';

++ Add a new member to the group. For example, to add redo02b.log to group 2,
issue:

SQL> ALTER DATABASE ADD LOGFILE MEMBER '/oracle/oradata/trgt/redo02b.log' TO GROUP 2;

If the file you want to add already exists, then it must be the same size as
the other group members, and you must specify REUSE. For example:

SQL> ALTER DATABASE ADD LOGFILE MEMBER '/oracle/oradata/trgt/redo02b.log' REUSE TO GROUP 2;

2) Losing an Inactive Online Redo Log Group
===========================================
If all members of an online redo log group with INACTIVE status are damaged,
then the procedure depends on whether you can fix the media problem that
damaged the inactive redo log group:

- If the failure is temporary: fix the problem. LGWR can reuse the redo log
  group when required.
- If the failure is permanent: the damaged inactive online redo log group
  eventually halts normal database operation.

ACTION PLAN
===========
Reinitialize the damaged group manually by issuing ALTER DATABASE CLEAR LOGFILE.
You can clear an inactive redo log group when the database is open or closed.
The procedure depends on whether the damaged group has been archived.

To clear an inactive, online redo log group that has been archived:
-------------------------------------------------------------------
If the database is shut down, then start a new instance and mount the database:

STARTUP MOUNT

Reinitialize the damaged log group. For example, to clear redo log group 2,
issue the following statement:

ALTER DATABASE CLEAR LOGFILE GROUP 2;

Clearing Inactive, Not-Yet-Archived Redo
========================================
Clearing a not-yet-archived redo log allows it to be reused without archiving
it. This action makes backups unusable if they were started before the last
change in the log, unless the file was taken offline prior to the first
change in the log. Hence, if you need the cleared log file for recovery of a
backup, then you cannot recover that backup. Also, it prevents complete
recovery from backups due to the missing log.

To clear an inactive, online redo log group that has not been archived:
If the database is shut down, then start a new instance and mount the database:

STARTUP MOUNT

Clear the log using the UNARCHIVED keyword.
For example, to clear log group 2, issue:

ALTER DATABASE CLEAR LOGFILE UNARCHIVED GROUP 2;

If there is an offline datafile that requires the cleared log to bring it
online, then the keywords UNRECOVERABLE DATAFILE are required. The datafile
and its entire tablespace have to be dropped, because the redo necessary to
bring it online is being cleared, and there is no copy of it. For example,
enter:

ALTER DATABASE CLEAR LOGFILE UNARCHIVED GROUP 2 UNRECOVERABLE DATAFILE;
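The choice between the archived and unarchived CLEAR LOGFILE variants above
hinges on the state of each group, which can be verified beforehand with a
quick query against V$LOG; a sketch:

```sql
-- STATUS shows CURRENT/ACTIVE/INACTIVE; ARCHIVED shows YES/NO.
-- Only INACTIVE groups are candidates for CLEAR LOGFILE, and ARCHIVED=NO
-- means the UNARCHIVED keyword (and a fresh backup afterwards) is needed.
SELECT GROUP#, SEQUENCE#, STATUS, ARCHIVED FROM V$LOG;
```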

Immediately back up the whole database with an operating system utility, so
that you have a backup you can use for complete recovery without relying on
the cleared log group. For example, enter:

% cp /disk1/oracle/dbs/*.f /disk2/backup

Back up the database's control file with the ALTER DATABASE statement. For
example, enter:

ALTER DATABASE BACKUP CONTROLFILE TO '/oracle/dbs/cf_backup.f';

Failure of CLEAR LOGFILE Operation
----------------------------------
The ALTER DATABASE CLEAR LOGFILE statement can fail with an I/O error due to
media failure when it is not possible to:

* Relocate the redo log file onto alternative media by re-creating it under
  the currently configured redo log filename
* Reuse the currently configured log filename to re-create the redo log file,
  because the name itself is invalid or unusable (for example, due to media
  failure)

In these cases, the ALTER DATABASE CLEAR LOGFILE statement (before receiving
the I/O error) would have successfully informed the control file that the log
was being cleared and did not require archiving. The I/O error occurred at
the step in which the CLEAR LOGFILE statement attempts to create the new redo
log file and write zeros to it. This fact is reflected in
V$LOG.CLEARING_CURRENT.

Note 2:
-------
Subject:  RMAN Recovery Until Time Failed When Redo-Logs Missed - ORA-00313,
          ORA-00312 AND ORA-27037
Doc ID:   550077.1   Type: PROBLEM
Modified Date: 05-AUG-2008   Status: PUBLISHED

In this Document:
  Symptoms
  Cause
  Solution
  References
--------------------------------------------------------------------------------

Applies to:
Oracle Server - Enterprise Edition - Version 9.2.0.6 to 10.2.0.3
This problem can occur on any platform.

Symptoms
--------
o Testing RMAN in this scenario:
  - Taken an RMAN backup for the database and archive logs -- successful.
  - All other files (datafiles, online redo log files) are deleted.

  - The CURRENT CONTROLFILE (with rman-repository data) is still available,
    and the current controlfile is used for RMAN Point-In-Time Recovery
    (Restore/Recover).

RMAN failed with errors:
----
ORA-00283: recovery session canceled due to errors
RMAN-11003: failure during parse/execution of SQL statement:
alter database recover logfile
'D:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\L102\ARCHIVELOG\2008_01_22\O1_MF_1_12_3SCBO34D_.ARC'
ORA-00283: recovery session canceled due to errors
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\L102\REDO01.LOG'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
----

Cause
-----
Bug 3902506 INCOMPLETE RECOVERY USING RMAN ENDED WITH ORA-313, ORA-312 AND
ORA-27037
Bug 3991656 PLEASE ADD DESCRIPTION USING ONLINE REDO LOG WHEN USE UNTIL SCN
AND UNTIL TIME

If you use the "UNTIL TIME" or "UNTIL SCN" clause without online redo logs,
you should use BACKUP CONTROL FILES.
When using the 'UNTIL SEQUENCE' clause for incomplete recovery of a database,
archive logs are restored and used for recovery with the ALTER DATABASE
RECOVER LOGFILE command. However, when UNTIL TIME or UNTIL CHANGE is used,
even if the archive log is available, it is not used with 'ALTER DATABASE
RECOVER LOGFILE'; instead RMAN tries to use the online redo log.

Solution
--------
Solution/Workaround:
o use a BACKUP CONTROL FILE
o use UNTIL SEQUENCE - instead of UNTIL TIME or UNTIL SCN

Example
-------
SQL> select current_scn from v$database;

CURRENT_SCN
-----------
   37079844

SQL> select sysdate from dual;

SYSDATE
--------------------
22-JAN-2008:16:06:35

SQL> select * from v$log;

GROUP# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIME
------ ---------- ---------