Página 1 de 13
http://www.databaseskill.com/1182859/
26/09/2015
Only active sessions are sampled; an inactive session will not be sampled. The sampling interval is controlled by the hidden parameter _ash_sampling_interval.
10g introduced a new view, V$SESSION_WAIT_HISTORY. This view keeps, for each active session, the last 10 wait events from v$session_wait. For monitoring performance status over a period of time, however, this data is not enough.
To solve this problem, 10g added another new view: V$ACTIVE_SESSION_HISTORY. This is ASH (Active Session History).
2.2 The strategy adopted by ASH
Typically, to diagnose the current state of the database, you need information from roughly the last five to ten minutes. However, recording the activity information of every session costs both time and space, so ASH adopts the following strategy: once per second, sample the information of the sessions that are active (in a wait state) from v$session_wait and v$session, and store the sampled information in memory (note: the data sampled by ASH is kept in memory).
2.3 How ASH works
Active sessions are sampled once per second (information is collected from the related views) and the data is stored in the SGA. The size of the SGA area allocated to ASH can be queried from v$sgastat (the 'ASH buffers' entry under the shared pool). The space is recyclable: when required, previous information is overwritten by new information. Recording the activity of all sessions would be very resource-consuming, so ASH obtains only the session information of the active sessions, from V$SESSION and a few other views.
ASH collects session information every second by accessing memory directly rather than running SQL statements, which is considerably more efficient.
Because data is sampled every second, the ASH cache holds a very large amount of data. Flushing all of it to disk would consume a great deal of disk space, so the ASH data in the cache is flushed to the AWR tables according to the following strategy:
1. By default, every 60 minutes (adjustable) MMON flushes 1/10 of the data in the ASH buffers to disk.
2. By default, when the ASH buffers become 66% full, MMNL writes 1/10 of the ASH buffer data to disk (which specific 1/10 of the data is chosen follows the FIFO principle).
3. The percentage written by MMNL, 10%, is a percentage of the total amount of sampled data in the ASH buffers, not a proportion of the total size of the ASH buffers.
4. To save space, the data collected by AWR is automatically purged after 7 days by default.
The relevant hidden parameters:
_ash_sampling_interval: sampling interval, once per second
_ash_size: minimum size of the ASH buffer, 1M by default
_ash_enable: enables ASH sampling
_ash_disk_write_enable: enables writing sampled data to disk
_ash_disk_filter_ratio: percentage of the total sampled data in the ASH buffer that is written to disk, 10% by default
_ash_eflush_trigger: how full the ASH buffer must be before an early flush, 66% by default
_ash_sample_all: if set to TRUE, all sessions are sampled, including those waiting on idle events; the default is FALSE.
The ASH cache is a fixed-size area of the SGA: 2M for each CPU. The ASH cache cannot exceed 5% of the shared pool or 2% of sga_target.
View for querying the sampled data in the ASH buffers: v$active_session_history.
Table the ASH buffers are flushed to: WRH$_ACTIVE_SESSION_HISTORY
(a partitioned table; WRH = Workload Repository History).
View on that table: dba_hist_active_sess_history.
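As a quick check, the current size of the ASH buffers can be read from v$sgastat, as mentioned above (a minimal sketch; the reported size depends on version and configuration):

```sql
-- Size of the in-memory ASH buffers, allocated from the shared pool
SELECT pool, name, ROUND(bytes/1024/1024, 1) AS size_mb
FROM   v$sgastat
WHERE  name = 'ASH buffers';
```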
2.4 The ASH view
The V$ACTIVE_SESSION_HISTORY view gives access to the sampled data, and some performance information can also be obtained from it. Its main columns:
---------- Sample information
SAMPLE_ID: sample ID
SAMPLE_TIME: sample time
IS_AWR_SAMPLE: whether this sample belongs to the 1/10 of the data flushed to AWR
---------- Information that uniquely identifies the session
SESSION_ID: corresponds to SID in V$SESSION
SESSION_SERIAL#: uniquely identifies a session object
SESSION_TYPE: foreground or background program (FOREGROUND / BACKGROUND)
USER_ID: Oracle user identifier; maps to V$SESSION.USER#
SERVICE_HASH: hash that identifies the service; maps to V$ACTIVE_SERVICES.NAME_HASH
PROGRAM: the program
MODULE: the module of the corresponding software
ACTION: the action within the module
CLIENT_ID: client identifier of the session
---------- Information about the SQL statement the session is executing
SQL_ID: SQL ID of the statement being executed at sample time
SQL_CHILD_NUMBER: child cursor number of the statement being executed
SQL_PLAN_HASH_VALUE: hash value of the SQL plan
SQL_OPCODE: indicates which phase of operation the SQL statement is in; corresponds to V$SESSION.COMMAND
QC_SESSION_ID: query coordinator session ID
QC_INSTANCE_ID: query coordinator instance ID
---------- Session wait state
SESSION_STATE: session state, WAITING / ON CPU
WAIT_TIME
---------- Session wait event information
EVENT
EVENT_ID
EVENT#
SEQ#
P1
P2
P3
TIME_WAITED
---------- Information about the object the session is waiting on
CURRENT_OBJ#
CURRENT_FILE#
CURRENT_BLOCK#
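A minimal sketch of how these columns are typically combined (the five-minute window is an arbitrary choice for illustration): finding the top wait events among the ASH samples of the last five minutes.

```sql
-- Top wait events sampled by ASH over the last 5 minutes
SELECT   event, COUNT(*) AS samples
FROM     v$active_session_history
WHERE    sample_time > SYSDATE - 5/1440
AND      session_state = 'WAITING'
GROUP BY event
ORDER BY samples DESC;
```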
3 AWR (Automatic Workload Repository)
ASH sample data is stored in memory. The memory space allocated to ASH is limited; once the allocated space is occupied, old records are overwritten, and when the database is restarted, all of this ASH information disappears. Long-term performance monitoring of Oracle is therefore impossible with ASH alone. Oracle 10g provides a method to retain the ASH information permanently: AWR (Automatic Workload Repository). Oracle recommends using AWR to replace Statspack (10gR2 still retains Statspack).
3.1 From ASH to AWR
The flow from ASH to AWR can be described quickly with the following chain:
v$session -> v$session_wait -> v$session_wait_history (in fact this step is skipped)
-> v$active_session_history (ASH) -> wrh$_active_session_history (AWR)
-> dba_hist_active_sess_history
v$session is the source from which all database activity information originates;
v$session_wait records the current, real-time wait information of active sessions;
v$session_wait_history is an enhancement of v$session_wait that simply records the last 10 waits of each active session;
v$active_session_history is the core of ASH: it records the historical wait information of active sessions, sampled once per second. These records are kept in memory, with an expected capacity of about one hour of records;
wrh$_active_session_history is the storage pool of v$active_session_history in the AWR: the information recorded in v$active_session_history is regularly flushed (once per hour) into the repository, and is retained for one week by default for analysis;
dba_hist_active_sess_history is a view built as a join of wrh$_active_session_history and several other views; it is the view through which we usually access the historical data.
As mentioned above, by default the MMON and MMNL background processes sample data from the ASH buffers every hour. So where is the collected data stored?
AWR uses many tables to store the collected performance statistics. The tables are owned by the SYS user, stored in the SYSAUX tablespace, and named in the formats WRM$_*, WRH$_*, WRI$_* and WRR$_*. The AWR historical data is stored in the underlying table wrh$_active_session_history (a partitioned table).
WRM$_* tables store AWR metadata (such as the databases examined and the snapshots collected); M stands for metadata.
WRH$_* tables store the historical statistics of the sampled snapshots; H stands for historical data.
WRI$_* tables store data related to the database advisor features.
WRR$_* tables hold information related to the 11g Workload Capture and Workload Replay features.
Several views with the DBA_HIST_ prefix are built on these tables; these views can be used to write your own performance diagnostic tools. The view names map directly to the tables: for example, the view DBA_HIST_SYSMETRIC_SUMMARY is built on the WRH$_SYSMETRIC_SUMMARY table.
Note: ASH holds the most recent wait records of the sessions in the system and can be used to diagnose the current state of the database, whereas the AWR information may lag by as much as 1 hour (adjustable manually), so its samples cannot be used to diagnose the current state of the database; instead they serve as a reference for database performance over a period of time.
3.2 Setting up AWR
To use AWR, the STATISTICS_LEVEL parameter must be set. It has three possible values: BASIC, TYPICAL, ALL.
A. typical: the default value. Enables all the automated features and collects their information in the database. The information collected includes Buffer Cache Advice, MTTR Advice, Timed Statistics, Segment Level Statistics, PGA Advice, and so on. You can run
select statistics_name, activation_level from v$statistics_level order by 2;
to query what information is collected. Oracle recommends using the default value, typical.
B. all: if set to all, then in addition to everything in typical, extra information is collected, including SQL execution plan statistics and timed OS statistics (see item A). At this setting, collecting the diagnostic information may consume too many server resources.
C. basic: turns off all the automated features.
3.3 AWR data collection and management
3.3.1 Data
In fact, the information recorded by AWR is not only ASH: AWR also collects statistics and wait information on all aspects of the running database for diagnostic analysis.
AWR samples all of its important statistics and load information at fixed time intervals and stores the samples in the repository. One can say that the ASH information is saved in the AWR table wrh$_active_session_history; ASH is a subset of AWR.
The sampled data is stored in the SYSAUX tablespace. When the SYSAUX tablespace is full, AWR automatically overwrites old information and records a message in the alert log:
ORA-1688: unable to extend table SYS.WRH$_ACTIVE_SESSION_HISTORY partition WRH$_ACTIVE_3533490838_1522 by 128 in tablespace SYSAUX
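To see how much SYSAUX space each occupant is using (AWR is listed as the SM/AWR occupant), V$SYSAUX_OCCUPANTS can be queried (a sketch):

```sql
-- Space used per SYSAUX occupant; the AWR appears as SM/AWR
SELECT   occupant_name, occupant_desc,
         ROUND(space_usage_kbytes/1024) AS used_mb
FROM     v$sysaux_occupants
ORDER BY space_usage_kbytes DESC;
```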
3.3.2 Collection and management
The AWR performance diagnostic information is saved permanently and is owned by the SYS user. After a period of time you may want to get rid of old information, and sometimes, for performance diagnosis, you may need to define the sampling frequency at which system snapshot information is obtained. The Oracle 10g package dbms_workload_repository provides many procedures with which you can manage snapshots and set baselines.
The AWR information retention period can be changed by modifying the retention parameter. The default is seven days; the minimum is one day. Setting retention to zero turns automatic purging off. When AWR finds that SYSAUX space is insufficient, it reuses the space by removing the oldest snapshots, and it also sends a warning to the DBA (in the alert log) that SYSAUX space is insufficient.
The AWR sampling frequency can be changed by modifying the interval parameter. The minimum value is 10 minutes; the default is 60 minutes. Typical values are 10, 20, 30, 60, 120 and so on (note: the unit is minutes). Setting interval to 0 turns automatic snapshot capture off.
The frequency at which MMON collects snapshots (hourly) and the retention time of the collected data (7 days) can thus both be modified by the user. To view the current settings:
select * from dba_hist_wr_control;
For example, to change the snapshot frequency to 20 minutes and retain the collected data for two days:
begin
  dbms_workload_repository.modify_snapshot_settings (interval => 20, retention => 2 * 24 * 60);
end;
/
3.4 Manually creating and deleting AWR snapshots
AWR snapshots are generated automatically by Oracle, but they can also be created, deleted and modified manually through the DBMS_WORKLOAD_REPOSITORY package. The desc command can be used to view the procedures of the package. The following are just a few commonly used ones:
SQL> select count(*) from wrh$_active_session_history;
  COUNT(*)
----------
       317
SQL> begin
  2    dbms_workload_repository.create_snapshot();
  3  end;
  4  /
PL/SQL procedure successfully completed.
SQL> select count(*) from wrh$_active_session_history;
  COUNT(*)
----------
       320
Manually deleting a specified range of snapshots:
SQL> select * from wrh$_active_session_history;
SQL> begin
  2    dbms_workload_repository.drop_snapshot_range(low_snap_id => 96, high_snap_id => 96, dbid => 1160732652);
  3  end;
  4  /
SQL> select * from wrh$_active_session_history where snap_id = 96;
no rows selected
3.5 Setting and removing a baseline
A baseline is a mechanism that lets you tag the snapshot sets of important periods of time. A baseline is defined between a pair of snapshots, identified by their snapshot sequence numbers; each baseline has one and only one pair of snapshots. A typical performance tuning exercise starts by capturing a measurable baseline set, making the changes, and then capturing another baseline set. The two sets can then be compared to check the effect of the changes made. In AWR, this same type of comparison can be performed on the existing collections of snapshots.
Suppose a highly resource-intensive process named apply_interest runs between 1:00 and 3:00 pm, corresponding to snapshot IDs 95 to 98. We can define a baseline named apply_interest_1 over these snapshots:
SQL> select * from dba_hist_baseline;
SQL> select * from wrm$_baseline;
SQL> exec dbms_workload_repository.create_baseline(95, 98, 'apply_interest_1');
After some tuning steps, we can create another baseline, say with the name apply_interest_2, and then compare the metrics using only the snapshots of the two baselines:
SQL> exec dbms_workload_repository.create_baseline(92, 94, 'apply_interest_2');
After the analysis, drop_baseline() can be used to delete a baseline; the snapshots are retained (unless cascade deletion is requested). Furthermore, when the purge routine deletes old snapshots, snapshots belonging to a baseline are not purged, to allow further analysis.
To delete a baseline:
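The deletion call would look like the following (a sketch; the baseline name is the one created above, and cascade => FALSE keeps the underlying snapshots):

```sql
-- Drop the baseline; its snapshots are retained because cascade is FALSE
exec dbms_workload_repository.drop_baseline(baseline_name => 'apply_interest_1', cascade => FALSE);
```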
buffer busy waits
Conditions under which it occurs:
A block is being read into the buffer cache, or is already in the buffer cache and being modified by another session; when a session tries to pin the block while it is already pinned, contention arises and produces a buffer busy wait. The value should not be greater than 1%. Look at v$waitstat to see the approximate distribution of buffer busy waits.
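The distribution can be inspected with a simple query (a sketch):

```sql
-- Approximate distribution of buffer busy waits by block class
SELECT   class, count, time
FROM     v$waitstat
ORDER BY count DESC;
```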
Solutions:
This situation can usually be addressed in several ways: increase the data buffer, increase freelists, reduce pctused, increase the number of rollback segments, increase initrans, or consider using LMT + ASSM; confirm whether it is caused by hot blocks (if so, consider a reverse-key index or a smaller block size).
This wait event indicates a wait for a buffer held in a non-shareable mode, or one currently being read into the buffer cache. In general, buffer busy waits should not exceed 1%. Check the buffer wait statistics section of the report (Segments by Buffer Busy Waits, or V$WAITSTAT) to see whether the wait is on a segment header. If so, you can consider increasing the freelists (for Oracle 8i DMT) or increasing the freelist groups (in many cases this adjustment takes effect immediately; in 8.1.6 and later, dynamic modification of freelists requires COMPATIBLE to be at least 8.1.6). Oracle 9i and later can use ASSM.
alter table xxx storage (freelists n);
-- Find the type of block being waited on
SELECT 'Segment Header' CLASS,
       a.Segment_Type, a.Segment_Name, a.Partition_Name
  FROM Dba_Segments a, V$Session_Wait b
 WHERE a.Header_File = b.P1
   AND a.Header_Block = b.P2
   AND b.Event = 'buffer busy waits'
UNION
SELECT 'Freelist Groups' CLASS,
       a.Segment_Type, a.Segment_Name, a.Partition_Name
  FROM Dba_Segments a, V$Session_Wait b
 WHERE b.P2 BETWEEN a.Header_Block + 1
                AND (a.Header_Block + a.Freelist_Groups)
   AND a.Header_File = b.P1
   AND a.Freelist_Groups > 1
   AND b.Event = 'buffer busy waits'
UNION
SELECT a.Segment_Type || ' Block' CLASS,
       a.Segment_Type, a.Segment_Name, a.Partition_Name
  FROM Dba_Extents a, V$Session_Wait b
 WHERE b.P2 BETWEEN a.Block_Id AND a.Block_Id + a.Blocks - 1
   AND a.File_Id = b.P1
   AND b.Event = 'buffer busy waits'
   AND NOT EXISTS (SELECT 1
                     FROM Dba_Segments
                    WHERE Header_File = b.P1
                      AND Header_Block = b.P2);
We take a different approach depending on the type of block being waited on:
1. data segment header:
Processes repeatedly access the data segment header usually for two reasons: to obtain or modify freelist information, or to extend the high-water mark. In the first case, processes frequently accessing the freelist information cause freelist contention; we can increase the freelists or freelist groups storage parameter of the corresponding segment object. If the contention results from processes frequently moving data blocks on and off the freelist, you can set a large gap between the pctfree and pctused values, so that blocks do not move on and off the freelist so frequently. In the second case, the segment consumes space quickly while its next extent is set too small, causing frequent extension of the high-water mark; the approach is to increase the next extent storage parameter of the segment object, or to create the object in a tablespace with a uniform extent size.
2. data block:
One or more data blocks are being read and written by multiple processes simultaneously and have become hot blocks. This problem can be solved in the following ways:
(1) Reduce the concurrency of the program. If the program uses parallel query, reduce the parallel degree, to avoid many parallel slaves accessing the same data object simultaneously and degrading performance through the waits.
(2) Adjust the application so that it can obtain the required data while reading fewer data blocks, reducing buffer gets and physical reads.
(3) Reduce the number of records in each block, spreading the records across more data blocks. This can be achieved in several ways: you can raise the pctfree value of the segment object, rebuild the segment in a tablespace with a smaller block size, or use the alter table ... minimize records_per_block statement to reduce the number of records in each block.
(4) If the hot block object is something like an index on an incrementing id column, you can convert the index into a reverse-key index to scatter the data distribution and disperse the hot blocks. For waits on index blocks, consider rebuilding the index, partitioning the index, or using a reverse-key index.
ITL contention waits can occur when many transactions concurrently access the same table; to reduce this wait, increase initrans so that multiple ITL slots are used.
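For example (the table name is hypothetical), raising INITRANS on a table that suffers ITL waits:

```sql
-- New blocks will be formatted with 8 initial ITL slots;
-- existing blocks are unaffected until the table is rebuilt (e.g. ALTER TABLE ... MOVE)
ALTER TABLE orders INITRANS 8;
```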
3. undo segment header:
undo segment header contention arises because the system does not have enough undo segments, and more need to be added. Regarding the undo segment management method: in manual management mode, you need to modify the ROLLBACK_SEGMENTS initialization parameter to add rollback segments; in automatic mode, you can reduce the transactions_per_rollback_segment initialization parameter so that Oracle automatically increases the number of rollback segments.
4. undo block:
undo block contention arises when the application reads and writes the same data at the same time (large-scale consistent reads should be reduced appropriately): reading processes must go to the undo segment to obtain a consistent image of the data. The solution is to stagger in time the application's heavy data modification and its heavy data querying.
ASSM combined with LMT completely changed the Oracle storage mechanism. Bitmap freelists can reduce buffer busy waits, which were a serious problem in versions before Oracle 9i.
Oracle claims that ASSM significantly improves the performance of concurrent DML operations, because different portions of the bitmap can be used simultaneously, which eliminates the serialized search for free space. According to Oracle's test results, using the bitmap eliminates all contention on segment headers and also gives very fast concurrent insert operations. In Oracle 9i and later, buffer busy waits are no longer common.
free buffer waits
There is no free buffer available in the data buffer, so the process of the current session enters the free buffer waits state. The reasons for free buffer waits are usually the following:
- the data buffer is too small;
- the DBWR process writes with relatively low efficiency;
- LGWR writes too slowly and DBWR has to wait;
- a large number of dirty blocks are being written to disk;
- inefficient SQL statements; the Top SQL need to be optimized.
enqueue
Queue contention: an enqueue is a locking mechanism that protects shared resources, such as a record in the data, to prevent two people from updating the same data at the same time. The enqueue includes a queuing mechanism: FIFO (first-in, first-out).
Common enqueue waits: ST, HW, TX, TM.
The ST enqueue is used for space management and extent allocation, and is typical of dictionary-managed tablespaces (DMT): it is contention on the uet$ and fet$ data dictionary tables. On versions with LMT, you should use locally managed tablespaces wherever possible, or consider manually pre-allocating a certain number of extents to reduce the serious queue contention caused by dynamic extension.
The HW enqueue is the wait related to the segment high-water mark; manually allocating appropriate extents can avoid this wait.
The TX lock (transaction lock) is the most common enqueue wait. A TX enqueue wait is usually the result of one of the following three issues.
The first issue is a duplicate entry in a unique index; to release the enqueue, a commit or rollback must be performed.
The second issue is multiple updates to the same bitmap index fragment. Since a single bitmap fragment may cover more than one row address (rowid), when multiple users attempt to update the same fragment, one user locks the records requested by the other users, who then wait. The enqueue is released when the locking user commits or rolls back.
The third issue, and the most likely to occur, is multiple users updating the same block. If there are not enough ITL slots, block-level locking occurs. This situation can easily be avoided by increasing initrans and/or maxtrans to allow the use of multiple ITL slots (for tables subject to frequent concurrent DML operations, reasonable values for the corresponding parameters should be considered when the table is first built, to avoid changing them while the system is running online; before 8i, freelists and other parameters could not be changed online, so this design consideration was particularly important), or by increasing the pctfree value of the table.
The TM enqueue lock is acquired before DML operations, in order to prevent any DDL operations on the table being manipulated (while DML operations are in progress on a table, its structure cannot be changed).
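Since 10g, the blocking session behind an enqueue wait can be read directly from V$SESSION (a sketch):

```sql
-- Who is blocked, on what event, and by which session
SELECT sid, serial#, event, blocking_session, seconds_in_wait
FROM   v$session
WHERE  blocking_session IS NOT NULL;
```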
log file parallel write / log file sync
If the log group has several members, the write operation is performed in parallel when the log buffer is flushed; this is when the log file parallel write event may occur.
The LGWR process is triggered by:
1. a user commit;
2. the redo log buffer being 1/3 full;
3. more than 1M of redo in the log buffer not yet written to disk;
4. a 3-second timeout;
5. DBWR needing to write data whose SCN is greater than the SCN recorded by LGWR; DBWR triggers an LGWR write.
When a user commits or rolls back, the session's redo information needs to be written out to the redo logfile. The user process notifies LGWR to perform the write, and LGWR notifies the user process when the task is complete. The log file sync wait event means the user process is waiting for LGWR's write-completion notification. For rollback operations, the event records the time from the user issuing the rollback command to the completion of the rollback.
If there are too many of these waits, it may indicate that LGWR writes inefficiently or that commits are too frequent. To investigate, follow the log file parallel write wait event; the user commits and user rollbacks statistics can be used to observe the number of commits and rollbacks.
Solutions:
1. Improve LGWR performance: use fast disks, and do not store the redo log files on RAID 5 disks.
2. Use batch commits.
3. Use the NOLOGGING / UNRECOVERABLE options where appropriate.
The average redo write size can be calculated by the following equation:
avg. redo write size = (redo blocks written / redo writes) * 512 bytes
If the system generates a lot of redo while each write is small, it generally means LGWR is activated too often, which may lead to excessive redo latch contention.
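The average redo write size formula above can be computed directly from v$sysstat (a sketch using the standard statistic names):

```sql
-- avg. redo write size = (redo blocks written / redo writes) * 512 bytes
SELECT ROUND(blks.value / wrts.value * 512) AS avg_redo_write_bytes
FROM   (SELECT value FROM v$sysstat WHERE name = 'redo blocks written') blks,
       (SELECT value FROM v$sysstat WHERE name = 'redo writes') wrts;
```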
The following wait events are related to RAC (resource contention between nodes):
gc current block busy
gcs log flush sync
gc buffer busy: hot blocks; use node isolation / service isolation to reduce inter-node resource contention.
log file switch
When this wait appears, it means that commit requests need to wait for the log file switch to complete. This wait event usually occurs because the log groups have cycled around and filled up while archiving of the first log is not yet complete, so sessions wait. The wait may indicate an I/O problem.
Solutions:
- consider enlarging the log files and adding log groups;
- move the archive files to faster disks;
- adjust log_archive_max_processes.
log file switch (checkpoint incomplete)
This wait event usually indicates that your DBWR write speed is slow or there is an I/O problem. Consider adding additional DBWR processes, or adding log groups or increasing the log file size.
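Adding a log group or using larger log files would look like this (paths and sizes are illustrative):

```sql
-- Add a fourth log group with two 512M members on separate disks
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u01/oradata/orcl/redo04a.log', '/u02/oradata/orcl/redo04b.log') SIZE 512M;
```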
control file sequential read / control file parallel write
If these waits take a long time, clearly you need to consider improving the I/O of the disks where the control files reside.
SQL Statistics
This section sorts the data by various statistics; combining all the statistics makes it easy to identify poorly performing SQL statements and SQL that runs unreasonably (for example, with a very high number of executions). It is easy to understand and is not described in detail here.
Most of the items above are easy to understand; a brief explanation of a few of them follows:
SQL ordered by Parse Calls: see the discussion of parse calls (including hard parses, soft parses, and softer soft parses).
SQL ordered by Version Count: SQL statements with many versions, i.e. one parent cursor with many child cursors. The SQL text is exactly the same, so the parent cursor can be shared, but different optimizer environment settings (OPTIMIZER_MISMATCH), a significant change in the length of a bind variable's value on a later execution (BIND_MISMATCH), mismatched privileges (AUTH_CHECK_MISMATCH), or a mismatched translation of a base object (TRANSLATION_MISMATCH) prevent the child cursors from being shared, so a new child cursor must be generated. This is related to cursor sharing. In this case the execution plans may be different or may be the same (as can be seen from plan_hash_value); the specific mismatch can be queried from V$SQL_SHARED_CURSOR.
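The mismatch reasons can be checked per statement (a sketch; the SQL_ID is a placeholder, and the exact column set of V$SQL_SHARED_CURSOR varies by version):

```sql
-- Why the child cursors of one statement could not be shared
SELECT child_number, optimizer_mode_mismatch, bind_mismatch,
       auth_check_mismatch, translation_mismatch
FROM   v$sql_shared_cursor
WHERE  sql_id = '&sql_id';
```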
Advisory Statistics
This section shows the advisors' recommendations, which can also be queried through the following views:
GV_$DB_CACHE_ADVICE
GV_$MTTR_TARGET_ADVICE
GV_$PGA_TARGET_ADVICE_HISTOGRAM
GV_$PGA_TARGET_ADVICE
GV_$SHARED_POOL_ADVICE
V_$DB_CACHE_ADVICE
V_$MTTR_TARGET_ADVICE
V_$PGA_TARGET_ADVICE
V_$PGA_TARGET_ADVICE_HISTOGRAM
V_$SHARED_POOL_ADVICE