AWR – Buffer Hit Ratio – Analysis

What is the Buffer Hit Ratio (Buffer Hit %)?

The ‘Buffer Hit Ratio’ is the percentage of data block requests that are satisfied from memory (the buffer cache) rather than read from disk. It is also called the ‘Buffer Cache Hit Ratio’.

Figure 01: Buffer Hit % as reported in AWR.
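
The same ratio can also be approximated directly from the instance statistics rather than read off the AWR report. The sketch below uses the classic V$SYSSTAT formulation; it assumes the traditional statistic names ('db block gets', 'consistent gets', 'physical reads'), and later releases expose cache-specific variants of these counters, so treat it as an approximation rather than the exact figure AWR prints.

    -- Sketch: approximate buffer cache hit ratio since instance startup
    -- (classic V$SYSSTAT formulation; statistic names can vary by release)
    SELECT ROUND(100 * (1 - phy.value / (cur.value + con.value)), 2) AS "Buffer Hit %"
      FROM v$sysstat cur, v$sysstat con, v$sysstat phy
     WHERE cur.name = 'db block gets'
       AND con.name = 'consistent gets'
       AND phy.name = 'physical reads';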

Effective use of the buffer cache can greatly reduce the I/O load on the database. If the buffer cache is too small, frequently accessed data will be flushed from the buffer cache too quickly which forces the information to be re-fetched from disk. Since disk access is much slower than memory access, application performance will suffer. In addition, the extra burden imposed on the I/O subsystem could introduce a bottleneck at one or more devices that would further degrade performance.

How do you interpret it?

Although known as one of the most important statistics to evaluate, this ratio can sometimes be misleading. A low buffer hit ratio does not necessarily mean the cache is too small; it may be that valid full-table scans are artificially reducing what would otherwise be a good ratio.
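
One rough way to see whether full-table scans are a factor is to compare the long and short table-scan counters in V$SYSSTAT; the sketch below uses the traditional statistic names, and a large count of long-table scans suggests the low ratio may be driven by scans rather than an undersized cache.

    -- Sketch: full-table scan activity since instance startup
    SELECT name, value
      FROM v$sysstat
     WHERE name IN ('table scans (short tables)', 'table scans (long tables)');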

Similarly, a high buffer hit ratio (say, 99%) normally indicates that the cache is adequately sized, but this assumption may not always be valid.

For example, frequently executed SQL statements that repeatedly refer to a small number of buffers via indexed lookups can create a misleadingly high buffer hit ratio.

When these buffers are read, they are placed at the most recently used (MRU) end of the buffer cache; iterative access to these buffers can artificially inflate the buffer hit ratio. This inflation makes tuning the buffer cache a challenge. Sometimes you can identify a too-small buffer cache by the appearance of the write complete waits event, which indicates that hot blocks (that is, blocks that are still being modified) are ageing out of the cache while they are still needed; check the Wait Events list for evidence of this event.
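
If you want to confirm this outside the AWR report, the cumulative wait statistics can be queried directly. A minimal sketch against V$SYSTEM_EVENT is shown below; a non-zero, growing count for 'write complete waits' is worth investigating.

    -- Sketch: check for 'write complete waits' since instance startup
    SELECT event, total_waits, time_waited
      FROM v$system_event
     WHERE event = 'write complete waits';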

Tips for Tuning

If the queries themselves are well tuned, the size of the buffer cache determines how often buffers need to be fetched from disk. The DB_BLOCK_BUFFERS initialization parameter determines the number of database buffers available in the buffer cache and is one of the primary contributors to the total memory requirements of the SGA for the instance. Together with the DB_BLOCK_SIZE parameter, DB_BLOCK_BUFFERS controls the total size of the buffer cache. Since DB_BLOCK_SIZE can only be specified when the database is first created, the size of the buffer cache is normally controlled using the DB_BLOCK_BUFFERS parameter.
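
Before changing anything, it is worth noting the current values of both parameters; a simple sketch using V$PARAMETER (SHOW PARAMETER in SQL*Plus gives the same information):

    -- Sketch: current buffer cache sizing parameters
    SELECT name, value
      FROM v$parameter
     WHERE name IN ('db_block_buffers', 'db_block_size');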

Consider increasing the DB_BLOCK_BUFFERS initialization parameter to increase the size of the buffer cache. This increase allows the Oracle Server to keep more information in memory, thus reducing the number of I/O operations required to do an equivalent amount of work using the current cache size.
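
As a sketch only, the change might look like the following. The value 20000 is purely illustrative; because DB_BLOCK_BUFFERS is a static parameter the instance must be restarted, on older releases without an SPFILE the value is edited directly in the init.ora file, and later releases size the cache with DB_CACHE_SIZE instead.

    -- Illustrative only: raise the buffer count in the SPFILE, then restart
    ALTER SYSTEM SET db_block_buffers = 20000 SCOPE = SPFILE;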

