
AWR basic operations, analysis - Database - Database Skill


AWR basic operations, analysis


Tag: buffer, sql, session, oracle, database Category: Database Author: jj1006238384 Date: 2010-09-21

1 AWR basic operation


C:\> sqlplus "/ as sysdba"
SQL*Plus: Release 10.2.0.1.0 - Production on Wed May 25 08:20:25 2011
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining Scoring Engine options
SQL> @D:\oracle\product\10.2.0\db_2\RDBMS\ADMIN\awrrpt.sql
Current Instance
~~~~~~~~~~~~~~~~
   DB Id    DB Name      Inst Num Instance
----------- ------------ -------- ------------
 3556425887 TEST01              1 test01
Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
Would you like an HTML report, or a plain text report?
Enter 'html' for an HTML report, or 'text' for plain text
Defaults to 'html'
Enter value for report_type:
Type Specified: html
Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   DB Id     Inst Num DB Name      Instance     Host
------------ -------- ------------ ------------ ------------
* 3556425887        1 TEST01       test01       PCE-TSG-036
Using 3556425887 for database Id
Using 1 for instance number
Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing <return> without
specifying a number lists all completed snapshots.
Enter value for num_days: 2
Listing the last 2 days of Completed Snapshots
                                                        Snap
Instance     DB Name        Snap Id    Snap Started     Level
------------ ------------ --------- ------------------ -----
test01       TEST01             214 24 May 2011 07:53      1
                                215 24 May 2011 09:00      1
                                216 24 May 2011 10:01      1

http://www.databaseskill.com/1182859/

26/09/2015


                                217 24 May 2011 11:00      1
                                218 24 May 2011 12:00      1
                                219 24 May 2011 13:01      1
                                220 24 May 2011 14:00      1
                                221 24 May 2011 15:00      1
                                222 24 May 2011 16:00      1
                                223 24 May 2011 17:00      1
                                224 25 May 2011 07:51      1
Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 223
Begin Snapshot Id specified: 223
Enter value for end_snap: 224
End Snapshot Id specified: 224
declare
*
ERROR at line 1:
ORA-20200: The instance was shutdown between snapshots 223 and 224
ORA-06512: at line 42
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining Scoring Engine options
Trying again, this time with a valid snapshot range:
Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 214
Begin Snapshot Id specified: 214
Enter value for end_snap: 215
End Snapshot Id specified: 215
Then enter the name of the report you want to generate ....
......
<P>
<P>
End of Report
</BODY></HTML>
Report written to awrrpt_1_0524_08_09.html
SQL>
We will put off analyzing the generated report; first, let's get to know ASH and AWR.
2 Understanding ASH (Active Session History)
2.1 ASH architecture
Before Oracle 10g, the record of the current session was kept in v$session, and a session in a wait state was also reflected in
v$session_wait. When the connection was closed, the session's rows in v$session and v$session_wait
were deleted. No view could tell you what a session had been doing, and what resources it had been waiting for, at each
point in the past: v$session and v$session_wait only show the SQL the current session is running and the resources it is
waiting for right now.
In Oracle 10g, Oracle provides Active Session History (ASH) to solve this problem. Once per second,
ASH records information about the currently active sessions into a circular buffer in the SGA. This
process is called sampling. By default ASH samples the active sessions in v$session every second and records their wait


events; inactive sessions are not sampled. The sampling interval is controlled by the hidden parameter _ash_sampling_interval.
10g also introduces a new view, v$session_wait_history. It saves the last 10 wait events of each active session from
v$session_wait, but that is still not enough for monitoring performance over a period of time.
To solve this, 10g adds one more view: v$active_session_history. This is ASH
(active session history).
2.2 The strategy ASH adopts
Typically, to diagnose the current state of the database you need information about the most recent five to ten minutes. But because recording
session activity costs both time and space, the strategy ASH adopts is to keep only the information of sessions that are active or waiting: it samples v$session_wait
and v$session once per second and stores the samples in memory (note: ASH's sampled data lives in memory).
2.3 How ASH works
The per-second samples of active sessions are stored in the SGA. The size allocated to ASH within the SGA can be queried from v$sgastat
(the 'ASH buffers' entry under the shared pool). The space is recycled: when required, earlier information is overwritten by new. Recording the
activity of every session would be very resource-intensive, so ASH takes only the information about active sessions, from V$SESSION and a few other views.
ASH collects session information every second not by running SQL statements, but by reading memory directly, which is considerably more efficient.
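The size of the ASH buffer mentioned above can be checked directly. A minimal sketch (v$sgastat and its columns are standard; the byte count will of course vary per instance):

```sql
-- How much SGA memory is allocated to the ASH buffers?
SELECT pool, name, ROUND(bytes / 1024 / 1024, 1) AS mb
FROM   v$sgastat
WHERE  name = 'ASH buffers';
```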
Data has to be sampled every second, so ASH buffers a very large amount of it. Flushing all of it to disk would consume a great deal of
disk space, so when the ASH data in the buffer is flushed to the AWR tables the following strategy applies:
1. By default MMON flushes ASH data to disk every 60 minutes (adjustable); only 1/10 of the data in the ASH buffer is written.
2. By default MMNL writes 1/10 of the ASH buffer's data to disk when the buffer becomes 66% full (which 1/10 it is follows the FIFO principle).
3. The 10% that MMNL writes is 10% of the amount of sampled data in the ASH buffer (not 10% of the
total size of the ASH buffer).
4. To save space, data collected by AWR is automatically purged after 7 days by default.
The relevant hidden parameters:
_ash_sampling_interval: sampling interval, once per second by default
_ash_size: minimum size of the ASH buffer, 1M by default
_ash_enable: enables ASH sampling
_ash_disk_write_enable: enables writing sampled data to disk
_ash_disk_filter_ratio: percentage of the sampled data in the ASH buffer that is written to disk, 10% by default
_ash_eflush_trigger: fill level at which the ASH buffer is flushed early, 66% by default
_ash_sample_all: if set to TRUE, all sessions are sampled, including idle, waiting ones. The default is FALSE.
The ASH buffer is a fixed-size area of the SGA, about 2M per CPU. It cannot exceed 5% of the shared pool or 2% of sga_target.
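Hidden parameters do not appear in v$parameter. If you want to see the values above on your own instance, a query against the fixed tables along these lines works (run as SYS; the x$ksppi/x$ksppcv join is a widely used pattern, not an official API, and may vary between versions):

```sql
-- List the ASH-related hidden parameters and their current values (run as SYS)
SELECT i.ksppinm AS parameter, v.ksppstvl AS value
FROM   x$ksppi  i
JOIN   x$ksppcv v ON i.indx = v.indx
WHERE  i.ksppinm LIKE '\_ash\_%' ESCAPE '\';
```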
To query the sampled data in the ASH buffer: v$active_session_history
The table the ASH buffer is flushed to: WRH$_ACTIVE_SESSION_HISTORY
(a partitioned table; WRH = Workload Repository History)
The view over that table: dba_hist_active_sess_history
2.4 ASH views
The v$active_session_history view gives access to the ASH data, from which some performance information can also be derived. Its main columns:
---------- sampling information ----------
SAMPLE_ID: sample ID
SAMPLE_TIME: sampling time
IS_AWR_SAMPLE: whether this row belongs to the 1/10 of samples kept for AWR
---------- columns that uniquely identify the session ----------
SESSION_ID: corresponds to SID in V$SESSION
SESSION_SERIAL#: uniquely identifies a session object
SESSION_TYPE: FOREGROUND or BACKGROUND
USER_ID: Oracle user identifier; maps to V$SESSION.USER#
SERVICE_HASH: hash identifying the service; maps to V$ACTIVE_SERVICES.NAME_HASH


PROGRAM: program name
MODULE: module within the program
ACTION: action within the module
CLIENT_ID: client identifier of the session
---------- the SQL statement the session is executing ----------
SQL_ID: ID of the SQL executing at sample time
SQL_CHILD_NUMBER: child cursor number of the executing SQL
SQL_PLAN_HASH_VALUE: hash value of the SQL plan
SQL_OPCODE: which phase of the operation the SQL statement is in; maps to V$SESSION.COMMAND
QC_SESSION_ID: parallel query coordinator session ID
QC_INSTANCE_ID: parallel query coordinator instance ID
---------- session wait state ----------
SESSION_STATE: WAITING / ON CPU
WAIT_TIME
---------- wait event information ----------
EVENT
EVENT_ID
EVENT#
SEQ#
P1
P2
P3
TIME_WAITED
---------- the object the session is waiting on ----------
CURRENT_OBJ#
CURRENT_FILE#
CURRENT_BLOCK#
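Put together, the columns above make ASH immediately useful. A minimal sketch of a commonly used query pattern: the top wait events of active sessions over the last 10 minutes.

```sql
-- Top wait events sampled by ASH in the last 10 minutes
SELECT event, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 10 / 1440
AND    session_state = 'WAITING'
GROUP  BY event
ORDER  BY samples DESC;
```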
3 AWR (Automatic Workload Repository)
ASH sample data is stored in memory. The memory allocated to ASH is limited; once it
is full, older records are overwritten, and when the database is restarted all of the ASH information disappears.
Long-term performance monitoring of Oracle is therefore impossible with ASH alone. Oracle 10g's way of permanently retaining the ASH
information is AWR (Automatic Workload Repository). Oracle recommends using AWR in place of
Statspack (which 10gR2 still ships).
3.1 From ASH to AWR
The flow from ASH to AWR can be described quickly with the following diagram:
v$session -> v$session_wait -> v$session_wait_history (in fact this step does not occur)
-> v$active_session_history (ASH) -> wrh$_active_session_history (AWR)
-> dba_hist_active_sess_history
v$session represents the source, where database activity begins;


the v$session_wait view records the current, real-time wait information of active sessions;
v$session_wait_history enhances v$session_wait by simply recording each active session's last 10 waits;
v$active_session_history is the core of ASH: it records the historical wait information of active sessions, sampled once per second. This part
is kept in memory, with an expected capacity of about one hour of records;
wrh$_active_session_history is the persistent store of v$active_session_history in the AWR:
the records in v$active_session_history are flushed into it regularly (once per hour), and by default
retained for one week for analysis;
the view dba_hist_active_sess_history is a join of wrh$_active_session_history with several other views;
it is how we normally access the historical data.
As mentioned above, the MMON and MMNL background processes flush sampled data out of the ASH buffer (by default once per hour). Where is the collected data stored?
AWR uses a set of tables to store the collected performance statistics. The tables are owned by SYS, stored in the SYSAUX tablespace, and named in the
WRM$_*, WRH$_*, WRI$_* and WRR$_* formats. The AWR history lands in the underlying table wrh$_active_session_history
(a partitioned table).
WRM$_* tables store AWR metadata (such as the databases examined and the snapshots taken); M stands for metadata.
WRH$_* tables store the historical statistics of the sampled snapshots; H stands for historical.
WRI$_* tables store data related to the database's advisory functions (advisors).
WRR$_* tables hold information for the 11g Workload Capture and Workload Replay features.
Several views with the DBA_HIST_ prefix are built on these tables, and they can be used to write your own performance diagnostic tools. A view's name maps
directly to its table; for example, the view DBA_HIST_SYSMETRIC_SUMMARY is built on the WRH$_SYSMETRIC_SUMMARY table.
Note: ASH holds the system's most recent wait records and can be used to diagnose the current state of the database,
while the information in AWR may lag by as long as 1 hour (adjustable), so its samples are not suited
to diagnosing the database's current state, but serve as a reference when tuning the database's performance over a period.
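That division of labor is easy to see from the data itself. A minimal sketch: compare how far back the in-memory view and the persisted history reach (the columns are standard; the windows you see depend on your buffer size and retention settings):

```sql
-- How much history does each layer hold?
SELECT MIN(sample_time) AS oldest, MAX(sample_time) AS newest, COUNT(*) AS samples
FROM   v$active_session_history;

SELECT MIN(sample_time) AS oldest, MAX(sample_time) AS newest, COUNT(*) AS samples
FROM   dba_hist_active_sess_history;
```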
3.2 Setting up AWR
To use AWR, the STATISTICS_LEVEL parameter must be set. It takes three values: BASIC, TYPICAL, ALL.
A. TYPICAL: the default. Enables all the automated features and collects their information in the database. The information collected includes Buffer Cache
Advice, MTTR Advice, Timed Statistics, Segment Level Statistics, PGA Advice and so on. You can run select statistics_name, activation_level from v$statistics_level
order by 2; to see what is collected. Oracle recommends the default, TYPICAL.
B. ALL: if set to ALL, everything under TYPICAL is collected, plus additional information including
plan execution statistics and timed OS statistics for SQL queries.
At this setting, collecting the diagnostic information may consume too many server resources.
C. BASIC: turns off all the automated features.
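To check the current setting and, if necessary, change it (a sketch; SCOPE = BOTH assumes the instance uses an spfile):

```sql
-- Verify the current level, then set it (TYPICAL is the default and the recommendation)
SHOW PARAMETER statistics_level
ALTER SYSTEM SET statistics_level = TYPICAL SCOPE = BOTH;
```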
3.3 AWR data collection and management
3.3.1 Data
In fact, AWR records more than just ASH: it also collects statistics on every aspect of how the database is running, plus wait information,
for diagnostic analysis.
AWR samples all of its important statistics and load information at a fixed interval
and stores the samples in the repository. Put another way, the ASH information is saved into the AWR
table wrh$_active_session_history; ASH is a subset of AWR.
The samples are stored in the SYSAUX tablespace. When SYSAUX is full, AWR automatically overwrites the old
information and records a message in the alert log:
ORA-1688: unable to extend table SYS.WRH$_ACTIVE_SESSION_HISTORY partition WRH$_ACTIVE_3533490838_1522 by 128 in tablespace SYSAUX
3.3.2 Collection and management
The system performance diagnostic information AWR keeps permanently is owned by the SYS user. After a period of time you may want to get rid of this
information; and sometimes, for performance diagnosis, you may need to define your own sampling frequency to snapshot the system.
The dbms_workload_repository package in Oracle 10g provides many procedures with which you can manage snapshots and set baselines.
The AWR retention period can be changed through the retention parameter. The default is seven days; the smallest value is one day.
If retention is set to zero, automatic purging is turned off. If AWR finds that SYSAUX is short on space, it reuses space by removing
the oldest snapshots, and it also sends the DBA a warning about the SYSAUX shortage
(in the alert log). The AWR sampling frequency can be changed through the interval parameter. The smallest
value is 10 minutes; the default is 60. Typical values are 10, 20, 30, 60, 120 and so on. Setting interval to 0 turns off


automatic snapshot capture. (Note: the units are minutes.)
The frequency at which MMON takes snapshots (hourly) and how long the collected data is retained (7 days) can both be modified by the user.
To view the current settings: select * from dba_hist_wr_control;
For example, to change the frequency to one snapshot every 20 minutes and retain the collected data for two days:
begin
  dbms_workload_repository.modify_snapshot_settings (interval => 20, retention => 2 * 24 * 60);
end;
/
3.4 Manually creating and deleting AWR snapshots
AWR snapshots are generated automatically by Oracle, but they can also be created, deleted and modified manually through the DBMS_WORKLOAD_REPOSITORY
package. The desc command can be used to inspect the procedures in the package. Only a few commonly used ones follow:
SQL> select count(*) from wrh$_active_session_history;
  COUNT(*)
----------
       317
SQL> begin
  2  dbms_workload_repository.create_snapshot ();
  3  end;
  4  /
PL/SQL procedure successfully completed.
SQL> select count(*) from wrh$_active_session_history;
  COUNT(*)
----------
       320
To manually delete a specified range of snapshots:
SQL> select * from wrh$_active_session_history;
SQL> begin
  2  dbms_workload_repository.drop_snapshot_range (low_snap_id => 96, high_snap_id => 96, dbid => 1160732652);
  3  end;
  4  /
SQL> select * from wrh$_active_session_history where snap_id = 96;
no rows selected
3.5 Setting and removing a baseline
A baseline is a mechanism that lets you tag the snapshots of an important period. A baseline is defined
between a pair of snapshots, identified by their snapshot IDs; every baseline has one and only one pair of snapshots. A typical
performance tuning exercise starts by capturing a measurable baseline set, then makes the changes, then captures another baseline set.
You can compare the two sets to check the effect of the changes made. In AWR, the same kind of comparison can be performed
on the existing collection of snapshots.
Suppose a highly resource-intensive process named apply_interest runs between 1:00 and 3:00 pm,
corresponding to snapshot IDs 95 to 98. We can define a baseline named apply_interest_1 over these snapshots:
SQL> select * from dba_hist_baseline;
SQL> select * from wrm$_baseline;
SQL> exec dbms_workload_repository.create_baseline (95, 98, 'apply_interest_1');
After some tuning steps, we can create another baseline, say apply_interest_2, and then
compare the metrics of just the two baselines' snapshots:
SQL> exec dbms_workload_repository.create_baseline (92, 94, 'apply_interest_2');
After the analysis, drop_baseline () can be used to delete a baseline; the snapshots are retained (unless the delete is cascaded). Furthermore,
when the purge routine deletes old snapshots, a baseline's snapshots are not purged, to allow further analysis.
To delete a baseline:


SQL> exec dbms_workload_repository.drop_baseline (baseline_name => 'apply_interest_1', cascade => false);


4 AWR in a RAC environment
In a RAC environment, each snapshot covers all the nodes of the cluster (the data is stored in the shared database, not per instance). Each node's snapshot data
has the same snap_id and is distinguished by instance id. In general, snapshots in RAC are captured at the same time. You can also use Database Control to take
manual snapshots; manual snapshots supplement the automatic ones.
5 ADDM
Automatic Database Diagnostic Monitor: with the AWR data warehouse in place, Oracle can naturally build more intelligent
applications on top of it and get greater value out of AWR. This is another feature introduced in Oracle 10g: the Automatic Database Diagnostic Monitor
(ADDM). Through ADDM, Oracle attempts to make database maintenance, management
and tuning more automated and simple.
ADDM periodically examines the state of the database and, using its built-in expert system, automatically identifies potential database performance
bottlenecks and produces corrective measures and recommendations. It is all built into the Oracle database system, runs very efficiently, and has almost no effect on
the overall performance of the database. The new version of Database Control presents ADDM findings and recommendations in a convenient, intuitive form and guides the
administrator through implementing the ADDM recommendations step by step, quickly resolving performance problems.
6 AWR common operations
AWR is configured through the dbms_workload_repository package.
6.1 Adjust the AWR snapshot frequency and retention policy. For example, change the collection interval to 30 minutes and retain the data for five days (units are
minutes):
SQL> exec dbms_workload_repository.modify_snapshot_settings (interval => 30, retention => 5 * 24 * 60);
6.2 Turn off AWR: setting the interval to 0 turns off automatic snapshot capture
SQL> exec dbms_workload_repository.modify_snapshot_settings (interval => 0);
6.3 Manually create a snapshot
SQL> exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT ();
6.4 View snapshots
SQL> select * from sys.wrh$_active_session_history;
6.5 Manually delete a specified range of snapshots
SQL> exec DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE (low_snap_id => 973, high_snap_id => 999, dbid => 262089084);
6.6 Create a baseline to keep the data for later analysis and comparison
SQL> exec dbms_workload_repository.create_baseline (start_snap_id => 1003, end_snap_id => 1013, baseline_name => 'apply_interest_1');
6.7 Delete a baseline
SQL> exec DBMS_WORKLOAD_REPOSITORY.DROP_BASELINE (baseline_name => 'apply_interest_1', cascade => FALSE);
6.8 Export AWR data so it can be migrated to another database for later analysis
SQL> exec DBMS_SWRF_INTERNAL.AWR_EXTRACT (dmpfile => 'awr_data.dmp', dmpdir => 'DIR_BDUMP', bid => 1003, eid => 1013);
6.9 Load the AWR data file into the other database
SQL> exec DBMS_SWRF_INTERNAL.AWR_LOAD (SCHNAME => 'AWR_TEST', dmpfile => 'awr_data.dmp', dmpdir => 'DIR_BDUMP');
Then transfer the AWR data into the TEST schema:
SQL> exec DBMS_SWRF_INTERNAL.MOVE_TO_AWR (SCHNAME => 'TEST');
7 Analyzing an AWR report
Uploading files is prohibited, so the company's AWR report could not be attached; the descriptions below identify the corresponding fields.
DB Time = CPU time + wait time (not including idle waits, not including background processes). DB time records the time the server spent on database
operations (foreground processes) plus non-idle waits.
The system has 24 CPU cores. The snapshot interval covers about 1380.04 minutes, i.e. a total of 1380.04 * 24 = 33120.96 minutes of CPU time; the DB time is 2591.15
minutes, which means the CPUs spent 2591.15 minutes handling Oracle's non-idle waits and operations (logical reads,
for example).
That is, 2591.15 / 33120.96 * 100% = 7.82% of CPU capacity was spent handling Oracle's operations, not including background processes; the server load
is relatively low. From the Elapsed time and DB Time of an AWR report you can get an idea of the database load.
That is: DB Time / (Elapsed * CPU count) * 100% gives the share of CPU spent on Oracle's operations (excluding
background processes). The higher the proportion, the higher the load.
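The arithmetic above can be checked in one line from SQL*Plus (the figures are the ones quoted from this report):

```sql
-- DB Time / (Elapsed * CPU count), as a percentage
SELECT ROUND(2591.15 / (1380.04 * 24) * 100, 2) AS pct_db_time FROM dual;
```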
The Load Profile section describes the overall state of the instance.
Redo size: the average redo log generated is 161K bytes per second, and 5K bytes per transaction.
Physical writes: on average 66.52 blocks are physically written per second.
Physical reads / Logical reads = 430.48 / 38788.19 = 1.1% of logical reads led to physical I/O. On average each transaction performs 1351.11 logical reads
(unit: blocks). This number should be as small as possible.
Parses: 1454.21 parses per second means the system is busy, with 35.79 hard parses per second (hard parses are about 2.5% of the total): roughly every 1/35.79 = 0.03
seconds the CPU has to deal with a brand-new SQL statement. The system has relatively many distinct SQL statements; bind variables and procedures are recommended.
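Outside a report, the same parse activity can be read, as cumulative counters since instance startup, from v$sysstat; the statistic names below are standard:

```sql
-- Cumulative parse and execute counters since startup
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('parse count (total)', 'parse count (hard)', 'execute count');
```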


Sorts: 70.30 sorts per second is on the high side.


Transactions: the number of transactions generated per second, reflecting how heavy the database's workload is.
% Blocks changed per Read: 1 - 2.43% = 97.57% of logical reads were of read-only rather than modified blocks; updates touched only
2.43% of the blocks read. That is, over the (roughly 23-hour) snapshot window, the blocks changed by DML updates were 2.43% of the total blocks operated on (logical reads).
Recursive Call %: 71.77% of SQL was executed via PL/SQL.
Rollback per transaction %: the percentage of transactions rolled back; the smaller the better. 19.95 is very high (0.1995 rollbacks per transaction): the system has
a rollback problem, and rollbacks are very expensive. Put another way, on average one in every five (1/0.1995) transactions rolls back. Combined
with the earlier figure of 28.71 transactions per second, 28.71 / 5 = 5.7 rollbacks happen per second. You should examine carefully why the rollback rate is
so high.
Instance efficiency percentages. The target for these values is 100%.
Buffer Nowait %: the percentage of buffer requests satisfied without waiting (the buffer cache no-wait rate). If Buffer Nowait < 99%, there may be hot blocks
(check tch in x$bh and the cache buffers chains children in v$latch_children).
Redo NoWait %: the percentage of redo buffer allocations obtained without waiting.
Buffer Hit %: the hit ratio of data blocks in the buffer cache. It should normally be above 95%; below 95%, important parameters need tuning; below
90%, db_cache_size may need to be increased. But a large number of unselective index reads can also push this value up (db file sequential read).
In-memory Sort %: the share of sorts done in memory. If it is too low, consider increasing the PGA or inspecting the code to reduce sorting.
Library Hit %: mainly the hit rate of SQL in the shared area (library cache), usually above 95%. Otherwise consider enlarging the shared pool, using bind
variables, or modifying parameters such as cursor_sharing (be careful changing that one).
Soft Parse %: the percentage of soft parses, roughly the SQL hit rate in the shared area. Below 95%, consider bind variables; below
80%, your SQL is basically not being reused.
Execute to Parse %: the ratio of SQL executions to parses. If a new SQL statement is parsed, executed once, and never executed again in
the same session, the ratio is 0; this ratio should be as high as possible. A value of 36.04%, for example, says that of the SQL statements executed in the same session, only
36.04% were already parsed and did not need parsing again; the database sees relatively many new SQL statements.
Execute to Parse = round(100 * (1 - Parses/Executions), 2). If the parse count exceeds the execution count, this value goes negative, which hurts performance.
The closer to 100% the better (i.e. Parses/Executions close to zero, meaning almost all SQL is already parsed and only has to run).
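The same formula can be evaluated live from the cumulative counters (a sketch; these are instance-lifetime totals, not a snapshot interval, so the figure will differ from the report's):

```sql
-- Execute-to-Parse from instance-wide counters
SELECT ROUND(100 * (1 - p.value / e.value), 2) AS execute_to_parse
FROM   v$sysstat p, v$sysstat e
WHERE  p.name = 'parse count (total)'
AND    e.name = 'execute count';
```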
Latch Hit %: for each latch request, the probability of success. If it is below 99%, there is latch contention. Keep it above 99%;
otherwise there is a serious performance issue, to be addressed with bind variables, dispersing hot blocks, or adjusting an undersized shared pool.
Parse CPU to Parse Elapsd %:
Computed as: Parse CPU to Parse Elapsd % = 100 * (parse time cpu / parse time elapsed). That is: actual parse CPU time over total parse elapsed time (CPU time
plus time spent waiting for resources). Here it was 89.28%: each CPU second spent parsing took about 1/0.8928 = 1.12 seconds of wall-clock time, of which 0.12
seconds were spent waiting for a resource. A ratio of 100% means CPU time equals elapsed time, with no waiting at all. The larger the value, the less time
was consumed waiting for resources.
% Non-Parse CPU: computed as: % Non-Parse CPU = round(100 * (1 - PARSE_CPU/TOT_CPU), 2). Too low means parsing is consuming
too much time. The closer to 100% the better: the database then spends most of its time executing SQL statements rather than parsing them.
Memory Usage %: the percentage of the total shared pool in use. Too low wastes memory; too high means over-utilization, probably because shared pool
objects are frequently flushed out of memory, which in turn increases hard parsing of SQL statements. This number should
hold steady between 75% and 90% over the long term.
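A rough live equivalent of this figure can be computed from v$sgastat (a sketch: it compares the 'free memory' entry against the sum of all shared pool entries):

```sql
-- Approximate shared pool usage percentage
SELECT ROUND(100 * (1 - f.bytes / t.total), 2) AS pct_used
FROM   (SELECT bytes FROM v$sgastat
        WHERE  pool = 'shared pool' AND name = 'free memory') f,
       (SELECT SUM(bytes) AS total FROM v$sgastat
        WHERE  pool = 'shared pool') t;
```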
% SQL with executions > 1: the proportion of SQL statements in the shared pool executed more than once; here
94.48%.
% Memory for SQL w/exec > 1: the percentage of shared pool memory consumed by frequently used SQL statements, as opposed to rarely used ones. In
general this figure is very close to % SQL with executions > 1, unless some query task consumes unusual amounts of
memory. In a steady state you will see roughly 75% to 85% of the shared pool in use over time. If the report's time window is large
enough to cover a whole cycle, the percentage of SQL statements executed more than once should approach 100%. This is observed over the duration of
the statistics, so you can expect it to increase as the observation window grows.
Top 5 Timed Events: idle wait events need no attention; we only care about the non-idle wait events. Common idle events:
dispatcher timer
lock element cleanup
null event
parallel query dequeue wait
parallel query idle wait - Slaves
pipe get
PL/SQL lock timer
pmon timer
rdbms ipc message
slave wait
smon timer
SQL*Net break/reset to client
SQL*Net message from client


SQL*Net message to client


SQL*Net more data to client
virtual circuit status
client message
A wait event you care about may not appear in the Top 5 Timed Events, since only five are listed and they change with each collection.
Here is a brief analysis of some of the common events. Note that in Oracle 9.2 this section was called Top 5 Wait Events; after 9.2 it became Top
5 Timed Events and includes CPU time. In it, Waits is the number of waits and Time (s) the wait time in seconds; generally the wait time is the main thing to look
at. Avg Wait (ms) is the average time per wait, % Total Call Time the event's percentage of total call time, and Wait Class the class of the wait.
CPU time: CPU time is not a real wait event. It is an important indicator of whether the CPU is the bottleneck, and
Elapsed Time = CPU Time + Wait Time. In general, in a healthy system, CPU time should rank first in the Top 5 Timed Events; otherwise, tuning is needed to
reduce the other wait times. This is relative, of course: if significant latch waits or excessive logical reads accompany the high percentage
of CPU time, it is no cause for comfort. A CPU working at high efficiency is a good thing, but CPU time consumed by inefficient settings or SQL needs
attention.
db file sequential read and db file scattered read.
These two are among the most frequent events. They indicate that the Oracle kernel is requesting data blocks to be read from disk (into the buffer cache). The
difference between them is that sequential is a single-block read (a serial read) and scattered is a multi-block read. (This has nothing to do with whether a full
table scan is involved; it is just that full table scans generally perform multi-block reads.) The two event names describe how the data blocks are placed in
memory, not how they are read from disk.
db file scattered read
The fetched blocks are scattered across discontinuous buffer space. It usually means too many full table scans; check whether the application makes reasonable use
of indexes and whether the database's indexes are reasonably built. db file scattered read indicates sequential disk reads (for example, a full table scan).
db file sequential read
Usually implies reading a large volume of data through an index (for example, an index range scan that fetches too large a percentage of the table, or use of the
wrong index), an improper join order in a multi-table join, or a hash join whose hash table does not fit in hash_area_size. db file sequential read indicates random
disk reads (for example, an index scan).
In-depth analysis of db file sequential read and db file scattered read:
Definition
The event names db file sequential read and db file scattered read describe how the data blocks are stored in memory, not how they are read from disk. When the memory filled by a disk read is contiguous, the read is reported as db file sequential read; when the contiguity of the memory filled by the read cannot be guaranteed, the read is reported as db file scattered read.
db file sequential read
Oracle reports a db file sequential read event for every single-block read (being a single block, it is of course contiguous; the P3 parameter of a db file sequential read wait is generally 1). Oracle always stores a single data block in a single cache buffer, so single-block reads never produce a db file scattered read event. Index blocks, unless read by a fast full index scan, are generally read one block at a time, so many of these waits are index reads.
This event usually indicates single-block read operations (such as index reads). If this wait is significant, it may indicate a join-order problem in a multi-table join (perhaps the wrong driving table), or indiscriminate use of indexes. In most cases, indexes give faster access to records, and for a well-coded, well-tuned database this wait is normal. In many cases, however, an index is not the best choice: for example, reading a large amount of data from a large table by full table scan may be significantly faster than by index scan, so in development we should take care to avoid index scans for such queries.
db file scattered read
db file scattered read waits generally occur when multiple blocks are read into memory at once. For performance, and for more efficient use of memory space, Oracle generally scatters these blocks in memory. The P3 parameter of the db file scattered read wait event indicates the number of blocks read per I/O, which is controlled by the parameter db_file_multiblock_read_count. Full table scans and index fast full scans generally read blocks this way, so the wait is often caused by full table scans; in most cases, a full table scan or fast full index scan produces one or more db file scattered reads. Sometimes, however, these scans produce only db file sequential reads.
Blocks read by a full table scan are placed at the cold end of the LRU (Least Recently Used) list. For frequently accessed small tables, you can choose to cache them in memory to avoid repeated reads. When this wait event is significant, the dynamic performance view v$session_longops can help with diagnosis: it records long-running operations (running longer than 6 seconds), among which there may be many full table scans (in any case, this information is worth our attention).
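A sketch of reading that view (column names as in 10g):

```sql
-- Long-running operations still in progress, possibly full table scans
SELECT sid, serial#, opname, target,
       sofar, totalwork,
       ROUND(sofar / totalwork * 100, 1) AS pct_done,
       elapsed_seconds
  FROM v$session_longops
 WHERE totalwork > 0
   AND sofar < totalwork        -- not yet finished
 ORDER BY elapsed_seconds DESC;
```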
latch free
A latch is a lightweight lock. In general, a latch consists of three memory elements: pid (process id), memory address, and memory length. Latches guarantee exclusive access to shared data structures, protecting the integrity of memory structures from damage. When multiple sessions simultaneously modify or inspect the same memory structure in the SGA, access must be serialized to guarantee the integrity of the SGA data structures.
Latches protect memory structures in the SGA; objects in the database are protected by locks, not latches. Oracle uses many latches in the SGA to keep its memory structures from being damaged by concurrent access. Common latch free waits are caused by hot blocks (buffer cache latch contention) and by not using bind variables (shared pool latch contention).
The most common latch contention is concentrated in the Buffer Cache and the Shared Pool. The Buffer Cache latches involved are cache buffers chains and cache buffers lru chain; the Shared Pool latches are the shared pool latch and the library cache latch. Buffer Cache latch contention is often caused by hot-block contention or inefficient SQL statements; Shared Pool latch contention is usually caused by hard-parsed SQL. An oversized shared pool could also lead to shared pool latch contention (before version 9i).
When system-wide latch wait time is significant, the sleeps column of v$latch identifies the latches with significant contention:
SELECT name, gets, misses, immediate_gets, immediate_misses, sleeps
  FROM v$latch
 ORDER BY sleeps DESC;
buffer busy waits

Conditions of occurrence:
A block is being read into the buffer, or is already in the buffer and being modified by another session, while the current session tries to pin it; since the block is already pinned, the contention produces a buffer busy wait. Its value should not be greater than 1%. Look at v$waitstat to see the approximate distribution of buffer busy waits.
The solution:
This can usually be adjusted in several ways: increase the data buffer; increase freelists; reduce pctused; increase the number of rollback segments; increase initrans; consider using LMT + ASSM; and confirm whether it is caused by hot blocks (if so, consider a reverse-key index or a smaller block size).
This wait event indicates waiting for a buffer that is non-shared or is currently being read into the buffer cache. In general, buffer busy waits should not exceed 1%. Check the buffer wait statistics section, Segments by Buffer Busy Waits (or V$WAITSTAT), to see whether the wait is on a segment header. If so, consider increasing freelists (for Oracle8i DMT) or freelist groups (in many cases this adjustment takes effect immediately; in 8.1.6 and later, dynamically modifying freelists requires COMPATIBLE to be at least 8.1.6). Oracle9i and later can use ASSM.
alter table xxx storage (freelists n);
-- Find the type of block being waited for
SELECT 'segment header' class,
       a.segment_type,
       a.segment_name,
       a.partition_name
  FROM dba_segments a, v$session_wait b
 WHERE a.header_file = b.p1
   AND a.header_block = b.p2
   AND b.event = 'buffer busy waits'
UNION
SELECT 'freelist groups' class,
       a.segment_type,
       a.segment_name,
       a.partition_name
  FROM dba_segments a, v$session_wait b
 WHERE b.p2 BETWEEN a.header_block + 1
               AND (a.header_block + a.freelist_groups)
   AND a.header_file = b.p1
   AND a.freelist_groups > 1
   AND b.event = 'buffer busy waits'
UNION
SELECT a.segment_type || ' block' class,
       a.segment_type,
       a.segment_name,
       a.partition_name
  FROM dba_extents a, v$session_wait b
 WHERE b.p2 BETWEEN a.block_id AND a.block_id + a.blocks - 1
   AND a.file_id = b.p1
   AND b.event = 'buffer busy waits'
   AND NOT EXISTS (SELECT 1
                     FROM dba_segments
                    WHERE header_file = b.p1
                      AND header_block = b.p2);
For different types of waited-for blocks, we take different approaches:
1.data segment header:
Processes repeatedly access the data segment header usually for two reasons: to obtain or modify process freelists information, or to extend the high-water mark. In the first case, processes frequently accessing process freelists information leads to freelist contention; we can increase the freelists or freelist groups storage parameter of the corresponding segment object. If processes frequently move data blocks on and off the freelist, we can widen the gap between the pctfree and pctused settings, so as to avoid blocks constantly moving on and off the freelist. In the second case, the segment's space is consumed quickly while next extent is set too small, resulting in frequent extension of the high-water mark; the approach is to increase the segment object's next extent storage parameter, or to create the tablespace with a uniform extent size.
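A sketch of the corresponding statements (the table name and sizes are hypothetical, and FREELISTS/FREELIST GROUPS apply only to manually managed segments, not ASSM):

```sql
-- Ease freelist contention on a manually managed (non-ASSM) segment
ALTER TABLE orders STORAGE (FREELISTS 4 FREELIST GROUPS 2);

-- Reduce high-water-mark extension frequency with a larger next extent
ALTER TABLE orders STORAGE (NEXT 10M);
```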
2.data block:
One or more data blocks are being read and written by multiple processes simultaneously and have become hot blocks. This can be solved in the following ways:

(1) Reduce the concurrency of the program. If the program uses parallel query, reduce the parallel degree, so as to avoid multiple parallel slaves accessing the same data object simultaneously and degrading performance through waits.
(2) Adjust the application so that it obtains the required data while reading fewer blocks, reducing buffer gets and physical reads.
(3) Reduce the number of records in any one block, so that records are spread across more data blocks. This can be achieved in several ways: raise the segment object's pctfree value; rebuild the segment in a tablespace with a smaller block size; or use the alter table minimize records_per_block statement to reduce the number of records per block.
(4) If the hot block is on an object such as an index on an incrementing id column, convert the index into a reverse-key index to scatter the data distribution and disperse the hot blocks. For waits on index blocks, consider rebuilding the index, partitioning the index, or using a reverse-key index.
ITL contention waits may occur when multiple transactions concurrently access the same data block; to reduce this wait, you can increase initrans to allow multiple ITL slots.
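A sketch of the statements involved (object names are hypothetical; a changed INITRANS affects newly formatted blocks only):

```sql
-- Scatter a hot right-hand index block caused by an incrementing key
ALTER INDEX orders_pk REBUILD REVERSE;

-- Leave room for more ITL slots on a table with heavy concurrent DML
ALTER TABLE orders INITRANS 8;

-- Cap the rows per block to spread records across more blocks
ALTER TABLE orders MINIMIZE RECORDS_PER_BLOCK;
```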
3.undo segment header:
undo segment header contention arises because the system does not have enough undo segments and more need to be added. Under the manual undo management mode, modify the ROLLBACK_SEGMENTS initialization parameter to add rollback segments; under automatic mode, you can reduce the transactions_per_rollback_segment initialization parameter so that Oracle automatically increases the number of rollback segments.
4.undo block:
undo block contention occurs when the application reads and writes the same data at the same time (large-scale consistent reads should be reduced where appropriate): reading processes go to the undo segment to obtain a consistent image of the data. The solution is to stagger the times at which the application modifies data and queries large amounts of data. ASSM combined with LMT completely changes Oracle's storage mechanism; the bitmap freelist can reduce buffer busy waits, which were a serious problem in versions before Oracle9i.
Oracle claims that ASSM significantly improves the performance of concurrent DML operations, because different portions of the bitmap can be used simultaneously, eliminating the serialized search for remaining space. According to Oracle's test results, using bitmaps eliminates all segment-header contention and also makes concurrent inserts very fast. In Oracle9i and later, buffer busy waits are no longer common.
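A locally managed tablespace with automatic segment space management is created as in the following sketch (the tablespace name, datafile path, and sizes are hypothetical):

```sql
-- LMT + ASSM: uniform local extents, bitmap-managed segment space
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/test01/app_data01.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
  SEGMENT SPACE MANAGEMENT AUTO;
```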
Free buffer waits
There is no free buffer available in the data buffer, so the process of the current session enters the free buffer waits state. The reasons for free buffer waits include the following:
- the data buffer is too small;
- the DBWR process writes with relatively low efficiency;
- LGWR writes too slowly and DBWR waits on it;
- a large number of dirty blocks are being written to disk;
- inefficient SQL statements: the Top SQL need to be optimized.
enqueue
Queue contention: an enqueue is a locking mechanism that protects shared resources — such as a record of data — to prevent two sessions from updating the same data at the same time. The enqueue includes a queuing mechanism, a FIFO (first-in, first-out) queue.
Common enqueue waits: ST, HW, TX, TM.
The ST enqueue is for space management and extent allocation in dictionary-managed tablespaces (DMT); it typically shows up as contention on the uet$ and fet$ data dictionary tables. On versions that support LMT, try to use locally managed tablespaces, or consider manually pre-allocating a certain number of extents to reduce the serious queue contention caused by dynamic extension.
The HW enqueue is a wait related to the segment high-water mark; manually allocating appropriate extents can avoid this wait.
The TX lock (transaction lock) is the most common enqueue wait. A TX enqueue wait is usually the result of one of the following three issues.
The first issue is duplicate rows in a unique index; releasing the enqueue requires a commit or rollback operation.
The second issue is multiple updates to the same bitmap index fragment. Since a single bitmap fragment may cover multiple row addresses (rowids), when multiple users attempt to update the same fragment, one user locks the records requested by the others, and they wait until the locking user commits or rolls back and the enqueue is released.
The third issue, and the most likely to occur, is multiple users updating the same block. If there are not enough ITL slots, block-level locking occurs. This situation can easily be avoided by increasing initrans and/or maxtrans to allow multiple ITL slots (for tables with frequent concurrent DML, reasonable values for these parameters should be chosen when the table is created, to avoid changing a running system online; before 8i, freelists and other parameters could not be changed online, so design-time consideration is especially important), or by increasing the pctfree value on the table.
The TM enqueue is a queue lock acquired before DML operations to prevent any DDL operation on the table being manipulated (while DML operations are in progress on a table, its structure cannot be changed).
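Sessions currently blocked on an enqueue can be listed from v$lock; a minimal sketch:

```sql
-- Sessions waiting on an enqueue: lock type, ids, held vs requested mode
SELECT sid, type, id1, id2, lmode, request, ctime
  FROM v$lock
 WHERE request > 0;
```

Joining ID1/ID2 back to dba_objects (for TM locks, ID1 is the object_id) identifies the object under contention.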
log file parallel write / log file sync (log file synchronization)
If a log group has several members, the write is performed in parallel when the log buffer is flushed; this wait event is possible at that time.
The LGWR process is triggered by:
1. a user commit;
2. the redo log buffer becoming 1/3 full;
3. more than 1M of redo in the log buffer not yet written to disk;
4. a 3-second timeout;
5. DBWR needing to write data blocks whose SCN is greater than the SCN LGWR has recorded, in which case DBWR triggers LGWR to write.

When a user commits or rolls back, the session's redo information needs to be written out to the redo log file. The user process notifies LGWR to perform the write, and LGWR notifies the user process when the task is complete. This wait event means the user process is waiting for LGWR's write-completion notification. For a rollback operation, the event records the time from the user issuing the rollback command to the completion of the rollback.
If this wait is excessive, it may indicate that LGWR writes inefficiently or that commits are too frequent. To investigate, follow the log file parallel write wait event; the user commits and user rollbacks statistics can be used to observe the number of commits and rollbacks.
Solution:
1. Improve LGWR performance: use fast disks, and do not store the redo log files on RAID 5 disks.
2. Use batch commits.
3. Use the NOLOGGING / UNRECOVERABLE options where appropriate.
The average redo write size can be calculated by the following equation:
avg. redo write size = (redo blocks written / redo writes) * 512 bytes
If the system generates a lot of redo but each write is small, it generally means LGWR is activated too frequently. This may lead to excessive redo latch contention.
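All the statistics in that formula live in v$sysstat; a sketch of the calculation (instance-lifetime totals, not an interval):

```sql
-- Average bytes per LGWR write since instance startup
SELECT (SELECT value FROM v$sysstat WHERE name = 'redo blocks written') /
       (SELECT value FROM v$sysstat WHERE name = 'redo writes') * 512
         AS avg_redo_write_bytes
  FROM dual;

-- Commit/rollback frequency for the same period
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('user commits', 'user rollbacks', 'redo size', 'redo writes');
```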
The following wait events are related to RAC (resource contention between nodes):
gc current block busy
gcs log flush sync
gc buffer busy: hot blocks; use node isolation / service isolation to reduce inter-node resource contention.
Log File Switch
When this wait appears, it means that a commit request needs to wait for a log file switch to complete. This wait usually occurs because the log groups have cycled around and filled up while archiving of the first log is not yet complete, so the wait appears. The wait may indicate an I/O problem.
The solution:
Consider enlarging the log files and increasing the number of log groups.
Move the archive files to fast disks.
Adjust log_archive_max_processes.
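A sketch of the corresponding statements (the group number, file path, and size here are hypothetical):

```sql
-- Add a log group; larger logs switch less often
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u01/oradata/test01/redo04.log') SIZE 200M;

-- Allow more concurrent archiver processes
ALTER SYSTEM SET log_archive_max_processes = 4;
```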
log file switch (checkpoint incomplete) - log switch (checkpoint not complete)
This wait event usually indicates that your DBWR write speed is slow, or that there are I/O problems.
Consider adding additional DBWR processes, or increasing the number or size of your log groups.
control file sequential read / control file parallel write
If these waits are long, it is clear that you need to consider improving the I/O of the disks holding the control files.
SQL Statistics sorts the data by different statistics; combining them all, you can identify poorly performing SQL statements as well as SQL that runs unreasonably (for example, with a very high number of executions). This section is easy to understand and is not described in detail here.
Most of the above is straightforward; a few items are briefly explained below:
SQL ordered by Parse Calls: parse calls (including hard parses, soft parses, and softer soft parses).
SQL ordered by Version Count: SQL statements whose parent cursor contains many child cursors. That is, the SQL text is exactly the same and the parent cursor can be shared, but the child cursors cannot be — because of different optimizer environment settings (OPTIMIZER_MISMATCH), a significant change in the length of a bind variable's value between executions (BIND_MISMATCH), mismatched authorization (AUTH_CHECK_MISMATCH), or a mismatched base object translation (TRANSLATION_MISMATCH) — and a new child cursor must be generated for cursor sharing. In this case the execution plans may differ, or may be the same (which we can see through plan_hash_value); the specific mismatch can be queried from V$SQL_SHARED_CURSOR.
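A sketch of that query (the &sql_id substitution variable is a placeholder for the statement you are investigating):

```sql
-- Why can't the child cursors of one SQL text be shared?
SELECT sql_id, child_number,
       optimizer_mismatch, bind_mismatch,
       auth_check_mismatch, translation_mismatch
  FROM v$sql_shared_cursor
 WHERE sql_id = '&sql_id';
```

A column showing 'Y' names the reason the corresponding child cursor could not be reused.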
Advisory Statistics
Recommendations in this section come from the advisory views, which can be queried directly:
GV$DB_CACHE_ADVICE
GV$MTTR_TARGET_ADVICE
GV$PGA_TARGET_ADVICE
GV$PGA_TARGET_ADVICE_HISTOGRAM
GV$SHARED_POOL_ADVICE
V$DB_CACHE_ADVICE
V$MTTR_TARGET_ADVICE
V$PGA_TARGET_ADVICE
V$PGA_TARGET_ADVICE_HISTOGRAM
V$SHARED_POOL_ADVICE
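For example, the buffer cache advisory can be read as in the following sketch (the 8192 block size is an assumption matching a default 8K pool):

```sql
-- Estimated physical reads at different candidate buffer cache sizes
SELECT size_for_estimate, size_factor, estd_physical_reads
  FROM v$db_cache_advice
 WHERE name = 'DEFAULT'
   AND block_size = 8192
   AND advice_status = 'ON'
 ORDER BY size_for_estimate;
```

A size_factor of 1 marks the current cache size; rows below and above it show the estimated effect of shrinking or growing the cache.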

Buffer Pool Advisory / PGA Memory Advisory / SGA Target Advisory / ...


Wait Statistics
Describes what type of block the buffer waits occurred on (refer to the buffer waits discussion above and the ways to improve).
Segment Statistics:
* Segments by Logical Reads
* Segments by Physical Reads
* Segments by Row Lock Waits
* Segments by ITL Waits
* Segments by Buffer Busy Waits
* Segments by Global Cache Buffer Busy
* Segments by CR Blocks Received
* Segments by Current Blocks Received
