In an AWR report, the ‘Top 10 Foreground Events’ table appears under the ‘Report Summary’ section. Refer to the figure below:
The following metrics are available in the ‘Top 10 Foreground Events’ table of an AWR report:
DB CPU
It shows the total time the database spent on CPU and its percentage of DB time. Referring to Figure 01, the DB CPU time is 1151.4 seconds (= 19.19 minutes). The DB time observed in the snapshot summary section is 22.25 minutes (Figure 02).
Formula: % DB Time = DB CPU Time / DB Time
% DB Time = 19.19/22.25 => 86.2%
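The calculation above can be sketched in a few lines of Python, using the figures quoted from the example report:

```python
# Figures taken from the example above (Figure 01/02):
db_cpu_seconds = 1151.4      # DB CPU time from the Top 10 table (seconds)
db_time_minutes = 22.25      # DB time from the snapshot summary (minutes)

db_cpu_minutes = db_cpu_seconds / 60                 # 1151.4 s = 19.19 min
pct_db_time = db_cpu_minutes / db_time_minutes * 100

print(f"DB CPU = {db_cpu_minutes:.2f} min, % DB Time = {pct_db_time:.1f}%")
```

A DB CPU share this high (86.2%) means the instance spent most of its DB time on CPU rather than waiting, which is generally a healthy sign.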
Log File Sync
When a user commits or rolls back, LGWR flushes the session’s redo from the log buffer to the online redo log files. The session waits on the ‘log file sync’ event until this write completes.
To reduce wait events here:
- Slow disk I/O: Segregating the redo log files onto separate disk spindles can reduce log file sync waits, as can moving the online redo logs to fast SSD storage and increasing the log_buffer size above 10 megabytes (it is sized automatically in 11g and beyond). If I/O is slow (redo write timings in the AWR or STATSPACK report exceed 15 ms), then the only solution for log file sync waits is to improve I/O bandwidth.
- LGWR is not getting enough CPU: If the vmstat run-queue column is greater than cpu_count, the instance is CPU-bound, and this can manifest itself as high log file sync waits. The solution is to tune SQL (to reduce CPU overhead), to add processors, or to ‘nice’ the dispatching priority of the LGWR process.
- High COMMIT activity: A poorly-written application is issuing COMMITs too frequently, causing high LGWR activity and high log file sync waits. The solution would be to reduce the frequency of COMMIT statements in the application.
- LGWR is paged out: Check the server for RAM swapping, and add RAM if the instance processes are getting paged out.
- Bugs: There is also the possibility that bugs can cause high log file sync waits. In summary, high log file sync waits can be caused either by too-high COMMIT frequency in the application or by exhausted CPU, disk or RAM resources.
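The COMMIT-frequency point above can be illustrated with a small back-of-the-envelope sketch (the function and figures here are hypothetical, not from the report): every COMMIT posts LGWR and incurs one log file sync wait, so batching commits cuts the wait count proportionally.

```python
# Hypothetical sketch: one COMMIT per row versus one COMMIT per batch.
# Each COMMIT posts LGWR and costs one log file sync wait.

def commits_needed(rows: int, batch_size: int) -> int:
    """Number of COMMITs (and hence log file sync waits) for a load."""
    return -(-rows // batch_size)  # ceiling division

rows = 100_000
print("commit per row :", commits_needed(rows, 1))     # 100000 waits
print("commit per 1000:", commits_needed(rows, 1000))  # 100 waits
```

Committing once per 1,000-row batch instead of once per row reduces the number of log file sync waits by three orders of magnitude in this example.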
Buffer Busy Waits
It indicates that a particular block is being accessed by more than one session at the same time. While the first session is reading the block, the other sessions wait because the block is held in an unshared mode. A typical scenario for this event is a batch process that continuously polls the database by executing a particular SQL statement repeatedly, with more than one parallel instance of the process running. All instances of the process try to access the same memory blocks because they execute the same SQL. Buffer busy waits should not account for more than 1% of total waits.
Note: Refer to the Buffer Wait Statistics section of the AWR report to find the types of waits and their wait times.
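The 1% rule of thumb above can be sketched as a simple check; the wait-time figures below are hypothetical, not taken from the report in this article:

```python
# Minimal sketch of the 1% rule of thumb for buffer busy waits:
# compare time spent on 'buffer busy waits' against total DB time.

def buffer_busy_pct(buffer_busy_s: float, db_time_s: float) -> float:
    """Share of DB time (in percent) spent on buffer busy waits."""
    return buffer_busy_s / db_time_s * 100

db_time_s = 1335.0    # e.g. 22.25 minutes of DB time
buffer_busy_s = 9.5   # hypothetical time spent on buffer busy waits

pct = buffer_busy_pct(buffer_busy_s, db_time_s)
print(f"buffer busy waits = {pct:.2f}% of DB time")
if pct > 1.0:
    print("investigate the Buffer Wait Statistics section")
```

With these sample numbers the event sits at about 0.71% of DB time, under the 1% threshold, so no further drill-down would be needed.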
DB File Scattered Read
This generally indicates waits related to full table scans. As full table scans are pulled into memory, they rarely fall into contiguous buffers but instead are scattered throughout the buffer cache. A large number here indicates that the table may have missing or suppressed indexes.
DB File Parallel Read
It measures the number of occurrences of heavy partition activity; the object involved can be either a table partition or an index partition.
Latch: Cache Buffers Chains
Latches are like locks on memory that are very quickly obtained and released. They are used to prevent concurrent access to a shared memory structure. If the latch is not available, a latch-free miss is recorded.
This metric provides the wait time for obtaining the latch.
If latch free waits appear in the Top 10 Foreground Events or rank high in the complete Wait Events list, look at the latch-specific sections of the AWR report to see which latches are being contended for.
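The latch behaviour described above can be sketched with a toy model (a hedged illustration, not Oracle's implementation): a latch get is non-blocking, and if the latch is already held, a miss is recorded. Real latches would spin and retry before sleeping; this sketch only counts the miss, using a standard-library lock as the stand-in latch.

```python
import threading

# Toy model of a latch: a non-queued lock on a shared memory structure.
# If the latch is already held, the requester records a miss.
latch = threading.Lock()
misses = 0

def try_latch() -> bool:
    """Attempt a non-blocking latch get, counting a miss on failure."""
    global misses
    got = latch.acquire(blocking=False)
    if not got:
        misses += 1
    return got

print("first get :", try_latch())   # succeeds: latch was free
print("second get:", try_latch())   # fails: latch held, miss recorded
latch.release()
print("latch misses:", misses)
```

The second attempt fails immediately rather than queuing, which is the key difference between a latch and an ordinary enqueue lock.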
Failed Logon Delay
It is specific to this case: it reports the time sessions spent waiting to connect to the database after failed logon attempts.
SQL*Net Message to client
It indicates network contention; a high value points to network latency problems between the database server and the client.
Row Cache Lock
The Row Cache, or Data Dictionary Cache, is a memory area in the shared pool that holds data dictionary information; it holds data as rows instead of buffers. A row cache enqueue lock is a lock on data dictionary rows. It is used primarily to serialize changes to the data dictionary and to wait for a lock on a data dictionary cache entry. The enqueue is taken on a specific data dictionary object; this is called the enqueue type and can be found in the v$rowcache view. Waits on this event usually indicate some form of DDL occurring, or possibly recursive operations such as storage management or frequently incrementing sequence numbers.
You may be interested:
- AWR – Buffer Hit Ratio – Analysis
- AWR – Execute to Parse Ratio – Analysis
- AWR – Instance Efficiency Percentages
- AWR Report Analysis Guide