AWR



From Oracle FAQ

AWR (Automatic Workload Repository) is a built-in repository (in the SYSAUX tablespace) that exists in every Oracle Database. At regular intervals, the Oracle Database makes a snapshot of all of its vital statistics and workload information and stores them in the AWR.


  • 1 History
  • 2 Licensing
  • 3 Change snapshot settings
  • 4 Also see
  • 5 External links

History

AWR was first introduced with Oracle 10g.

Licensing

Only users who have licensed the Oracle Enterprise Manager Diagnostic Pack are entitled to use AWR. Other users should continue to use Statspack.

Change snapshot settings

-- View the current snapshot interval and retention:
SELECT * FROM dba_hist_wr_control;
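The query above only shows the current settings. To change them, the snapshot interval and retention (both in minutes) can be passed to the standard DBMS_WORKLOAD_REPOSITORY package; the values below are illustrative, not recommendations:

```sql
-- Change AWR snapshot frequency and retention (values in minutes).
-- Example values only: 30-minute snapshots kept for 30 days.
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    interval  => 30,          -- snapshot every 30 minutes
    retention => 43200        -- 30 days * 24 hours * 60 minutes
  );
END;
/
```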

Also see

  • ADDM – Automatic Database Diagnostic Monitor
  • Statspack – tool for taking and comparing database performance snapshots

External links

  • spViewer Tools – tools for AWR and STATSPACK performance data analysis and tuning
  • Performance Explorer-i – tool for automated root cause analysis of Oracle AWR and Statspack performance spikes

This page was last edited on 9 March 2016, at 08:37.
Oracle AWR Reports for Dummies


Ravi Yogesh


Web Performance Engineer | Wells Fargo | BITS Pilani


Life is really simple, but we insist on making it complicated.

Introduced in Oracle 10g, and evolved from the Statspack functionality of Oracle 8i/9i, the Automatic Workload Repository report, or AWR report as it is more commonly known, is a vital tool used by DBAs and system performance engineers to collect historical performance statistics of an Oracle Database. An AWR report is primarily useful for gathering data on wait events, Active Session History (ASH) statistics, object usage statistics and resource-hungry SQL queries, which together help in identifying, analyzing and resolving performance bottlenecks in the database. In addition, ADDM (Automatic Database Diagnostic Monitor), the self-diagnosis utility of the Oracle database, relies on the statistics contained in AWR snapshots to identify the root causes of performance issues. An AWR report runs into hundreds of metrics and can be quite overwhelming to analyze at once for someone who does not make a living doing so. This article calls out the most important sections of the report, listed below, for drilling down to performance bottlenecks and fixing them fast. (An advanced user can combine these with the recommendations from the advisory sections towards the end of an AWR report and with ADDM report recommendations to get an in-depth picture of database health; both are beyond the scope of this article.)

1. DB/Host/Snapshot Details: The topmost part of an AWR report contains a row indicating the DB name, instance name, startup time and whether the DB is RAC enabled, followed by a row with host configuration details such as host name, OS platform, number of CPUs/cores and total memory available in GB. This is followed by the snapshot details, which indicate the start and end of the time interval represented in the report and the time elapsed between them. Elapsed DB time is cumulative across all CPU cores, and is therefore usually greater than the actual wall-clock elapsed time on a multi-core system. By default, snapshots of the relevant data are taken every hour and retained for 7 days, but this can be altered as needed.
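The hourly schedule can be supplemented with on-demand snapshots, for example immediately before and after a load test, using the standard DBMS_WORKLOAD_REPOSITORY package; a minimal sketch:

```sql
-- Take an on-demand AWR snapshot (e.g. just before and after a test run).
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- The report itself is then generated interactively between any two
-- snapshot ids with the standard script shipped with the database:
-- SQL> @?/rdbms/admin/awrrpt.sql
```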

2. Load Profile: The first row in this table, DB Time per second, is the ratio of elapsed DB time to the elapsed clock time listed in the snapshot details section, while DB CPU per second indicates the average number of CPU cores in use per second. Ideally this value should stay below the total number of CPU cores available for the DB to perform well. The next metric to look at in this section is the ratio of hard parses to parses, which should not be much higher than in the baseline report, as that indicates potential issues with cursor management and bind variables (more on parsing below). High hard parse rates cause serious performance problems and are usually accompanied by latch contention on the shared pool and library cache latches. Frequent hard parses are typically caused by an inadequately sized shared pool and ineffective use of bind variables.
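The parse ratio discussed above can also be watched outside the report from the v$sysstat counters (cumulative since instance startup, unlike the per-interval figures in AWR); a sketch:

```sql
-- Cumulative parse counts since instance startup; a hard parse count
-- growing nearly as fast as the total suggests bind-variable or
-- shared-pool problems.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('parse count (total)', 'parse count (hard)');
```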

3. Instance Level Statistics: Metrics in this section such as Buffer Hit % (the percentage of times a block was found in the buffer cache instead of requiring a physical read from disk), Buffer Nowait % and Latch Hit % are targeted at 100% for best performance. % Non-Parse CPU indicates the percentage of CPU time spent on non-parsing activities; since parsing is CPU intensive, this is preferred to be on the higher side for better DB performance. The execute-to-parse ratio should also be very high in an ideal situation.
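For reference, the Buffer Hit % shown here corresponds roughly to the classic ratio over v$sysstat counters. This is a simplified sketch (newer releases expose more precise counters such as 'physical reads cache'):

```sql
-- Approximate buffer cache hit ratio from cumulative instance counters.
SELECT ROUND(100 * (1 - phy.value / (db.value + cons.value)), 2)
         AS buffer_hit_pct
FROM   v$sysstat phy, v$sysstat db, v$sysstat cons
WHERE  phy.name  = 'physical reads'
AND    db.name   = 'db block gets'
AND    cons.name = 'consistent gets';
```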

4. Top 5 Timed Events: This is the most important table to consider when identifying potential bottlenecks. A value of "Concurrency" in the last column, Wait Class, indicates serious issues. The average wait time in the fourth column should also be checked for alarmingly high values relative to the other events listed. The fifth column, %DB Time (or %Total Call Time), denotes the percentage of total DB time spent on the corresponding event, and ideally DB CPU should take the major chunk of it. If not, the corresponding event (say X) needs to be investigated further. To do this, click the SQL Statistics link in the main report section and then the 'SQL ordered by' link closest to the wait event (X) identified earlier. The resulting table can be used to drill down to the SQL statements consuming the most of the identified resource (X) for further optimization.
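Between snapshots, a similar top-events picture (cumulative since instance startup rather than per interval) can be pulled from v$system_event; a sketch:

```sql
-- Top 5 non-idle wait events since instance startup
-- (AWR shows the same data per snapshot interval).
SELECT event, total_waits, time_waited, average_wait, wait_class
FROM  (SELECT event, total_waits, time_waited, average_wait, wait_class
       FROM   v$system_event
       WHERE  wait_class <> 'Idle'
       ORDER  BY time_waited DESC)
WHERE ROWNUM <= 5;
```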

5. Shared Pool Statistics: Indicates the percentage of memory allocated to the shared pool that is in use, and the percentage of SQL statements with an execution count greater than one. Extremely high usage of the shared pool indicates a need to increase its size, while very low utilization indicates a larger-than-required pool for the given workload. The percentage of SQL with executions > 1 should be close to 100 for effective reuse of shared SQL statements; if it is not, bind variables are being used ineffectively.

6. Time Model Statistics: This section summarizes the components where the database is spending its time. The table contains system resource consumption metrics, which can be ordered by Time (s) and % of DB Time. Look for the statistic taking the maximum % of DB time and compare it with the baseline for any abnormal behavior. If SQL execute elapsed time is much greater than DB CPU time, the system should be diagnosed for I/O issues. As stated earlier, hard parse time is also an important metric to watch and compare with the baseline.
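The same time model breakdown is available live from v$sys_time_model, where values are cumulative microseconds since instance startup; a sketch:

```sql
-- Where the database has spent its time since startup.
SELECT stat_name, ROUND(value / 1e6, 1) AS seconds
FROM   v$sys_time_model
ORDER  BY value DESC;
```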

7. OS Details: Lists the CPU utilization breakup across system, user, busy, iowait and idle time, along with load statistics.

8. SQL Statistics: This section allows sorting all the SQL queries executed in the desired fashion, e.g. 'SQL ordered by Elapsed Time', 'SQL ordered by CPU Time', etc. Any query figuring in the top five of two or more sorted lists is a definite candidate for tuning. Used in combination with 'Top 5 Timed Events', this section is the most vital and fastest way to identify bottlenecks from an AWR report. More specifically, a query with a low number of executions and a high 'Elapsed Time per Exec (s)' should be a prime suspect. Similarly, when high CPU utilization is observed on the DB, check for queries with the highest 'CPU per Exec (s)' and low execution counts, and target them for optimization.
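The same 'SQL ordered by' style of ranking can be approximated outside the report from v$sql (figures are cumulative per cursor since it was loaded into the shared pool); a sketch:

```sql
-- Top 5 statements by total elapsed time, with per-execution cost.
SELECT *
FROM  (SELECT sql_id,
              executions,
              ROUND(elapsed_time / 1e6, 2)                         AS elapsed_s,
              ROUND(elapsed_time / NULLIF(executions, 0) / 1e6, 2) AS elapsed_per_exec_s,
              ROUND(cpu_time / 1e6, 2)                             AS cpu_s
       FROM   v$sql
       ORDER  BY elapsed_time DESC)
WHERE ROWNUM <= 5;
```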


  • Baseline: A statistical baseline is a collection of statistic rates, ideally captured over a period when the system is performing well at peak load. Comparing statistics captured during a period of bad performance to a baseline helps discover the specific statistics that have increased significantly and could be the cause of the problem.
  • Latching: Latches are a lightweight serialization mechanism used to single-thread access to internal Oracle structures.
  • ADDM: ADDM is a self-diagnostic engine designed to analyze AWR data automatically after each AWR snapshot and make specific performance recommendations.
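A baseline as defined above can also be pinned inside the AWR itself, between two existing snapshot ids, so that its data survives the normal retention purge. The snapshot ids and name below are hypothetical:

```sql
-- Preserve a known-good pair of snapshots as a named baseline.
BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE(
    start_snap_id => 100,            -- hypothetical snapshot ids
    end_snap_id   => 105,
    baseline_name => 'peak_load_good');
END;
/
```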
