The Tuning Process :
It is imperative to take a proactive approach to the Oracle tuning process. If you wait until users complain about performance, it is usually too late to take advantage of some of the most effective tuning strategies. The later in the life cycle performance issues are identified and addressed, the more it costs in time and resources to resolve the problems. When unforeseen performance issues do arise, it is the responsibility of the DBA to pinpoint the cause of the problem as early as possible and to correct the situation promptly.
Before you start any tuning process, you need to establish a set of quantifiable goals that directly relate to a reason for tuning. You should keep these goals in mind as you evaluate any modifications that you are considering for your system. Your tuning goals should be specific and measurable rather than generic. For example, a tuning goal for a catalog ordering system might be the ability to process 25,000 orders a day by the end of the month.
Before you begin any effort to tune the Oracle system, make sure that the operating system is performing at its peak. Work with your system administrator to take advantage of the operating system monitoring tools to check for paging and swapping and resource contention from non-Oracle applications. The operating system might need to move information from one storage area to another to accommodate large amounts of information that will not fit into real memory. This procedure is known as paging and swapping. Excessive paging or swapping can degrade performance.
It is best to approach tuning with a structured methodology. After ensuring that the operating system is performing at its peak and that sufficient operating system resources have been allocated to your Oracle system, we recommend that you tune the following in this order:
1. Design
2. Application
3. Memory
4. I/O
5. Contention
Tuning The Design
A properly designed system can dramatically reduce performance problems throughout the life cycle of an application. Spend enough time in the design phase to ensure that the application will operate at an acceptable (that is, optimal) level of performance.
The application designer should clearly understand the performance expectations of the system he or she is designing and must have a solid understanding of Oracle's query processing to design the database for optimal efficiency and minimal maintenance.
Careful consideration to tuning during the design phase gives you the maximum benefit at the lowest cost in the life cycle of the application. You need to avoid any possibility of needing to completely revamp the logical design of the database simply because performance is unacceptable and all other tuning options have been exhausted.
Tuning The Application
Developers also play an important role in writing applications with effective SQL statements. They should utilize hints, indexes, and bind variables whenever necessary to obtain optimal performance. Do not underestimate the importance of application tuning. As much as 80 percent of your performance gains will be accomplished through application tuning.
The application developer should have a solid understanding of Oracle's SQL processing, including:
DML (Data Manipulation Language)
DDL (Data Definition Language)
Transaction control
Shared SQL and PL/SQL areas
Optimizer modes
Parallel query
It is important for developers to use as many PL/SQL packages as possible because PL/SQL packages are stored in parsed format. If necessary, the packages can be pinned in the shared pool to increase performance.
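As a sketch (the package name APP_OWNER.ORDER_PKG is hypothetical), pinning is done with the Oracle-supplied DBMS_SHARED_POOL package, created by running the dbmspool.sql script as SYS:

```sql
-- Pin a package in the shared pool so the LRU mechanism cannot age it out.
-- 'P' indicates that the object being kept is a package.
EXECUTE DBMS_SHARED_POOL.KEEP('APP_OWNER.ORDER_PKG', 'P');

-- To release the pin later:
EXECUTE DBMS_SHARED_POOL.UNKEEP('APP_OWNER.ORDER_PKG', 'P');
```

Pinning is best done immediately after instance startup, while the shared pool is still relatively empty and contiguous space is available.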
Database design and application design might not fall under the job description of the DBA in your organization. However, you need to understand the importance of proper initial design and, if necessary, be able to guide designers and developers to achieve this goal. It is strongly recommended that all parties involved in the development and maintenance of an Oracle system have the proper training in the version of Oracle that is used for the application. It's important to note that you can have a finely tuned instance that runs poorly if database design is improper or SQL statements are poorly written within the application.
Tuning Memory
The DBA's primary tuning responsibility really begins once you have a well-designed model and a properly tuned application. The proper sizing of memory structures allows sufficient information to be stored in memory. Because memory access is significantly faster than disk access, it is always better to satisfy requests for information in memory than from disk. Tuning memory allocation involves proper memory distribution to each Oracle memory area while ensuring that paging and swapping is not occurring at the operating system level.
The system global area (SGA) is the memory area that holds the most commonly requested information about the database. When a database is started, the SGA is allocated, and the background processes are started. The combination of the SGA and the background processes is called an instance. The SGA is divided into three memory components:
Buffer cache
Shared pool
Redo log buffers
By setting the database initialization file parameter PRE_PAGE_SGA to TRUE, Oracle will read the entire SGA into memory when you start the instance. Setting this parameter will most likely increase the amount of time required for instance startup, but it should decrease the amount of time required for Oracle to reach its full performance capacity after instance startup.
The buffer cache is the area of the SGA that stores copies of the data blocks that are read from the segments in the database. These data blocks include tables, clusters, rollback segments, and indexes. Each buffer holds a single Oracle data block. The size of the buffer cache, in bytes, is determined by multiplying the database initialization file parameter DB_BLOCK_BUFFERS by DB_BLOCK_SIZE. Once the database has been created, the size of the buffer cache can be altered only by changing the DB_BLOCK_BUFFERS parameter; the DB_BLOCK_SIZE parameter should never be changed after the database has been created. When the database buffer cache is properly sized, performance is optimized by satisfying the requests for data blocks from memory rather than disk.
The shared pool is an area of the SGA that contains two major memory areas: the library cache and the dictionary cache. The library cache consists of shared and private SQL areas, PL/SQL packages and procedures, and control structures. When there is insufficient memory in the library cache, no space will be available to cache new statements until old statements are removed to make room. Any statements that were removed but are needed again will have to be reparsed. Excessive reparsing shows up as large values in the RELOADS and INVALIDATIONS columns of the library cache section of report.txt, the output of utlestat. This reparsing requires CPU and I/O resources. To avoid objects being unloaded and reloaded in the library cache, be sure that the value of SHARED_POOL_SIZE is large enough, and use functions, procedures, and packages whenever possible, as they do not need to be parsed when they are loaded into the database. Database triggers also do not need to be parsed because they are stored in compiled format. The library cache is sized only indirectly, through the database initialization file parameter SHARED_POOL_SIZE.
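A hedged way to measure reparsing directly is the ratio of reloads to pins in V$LIBRARYCACHE; as a rule of thumb, reloads should stay well below 1 percent of pins:

```sql
-- Ratio of reloads to pins in the library cache. A high percentage
-- suggests that SHARED_POOL_SIZE is too small or that statements
-- are not being shared.
SELECT SUM(pins) "Pins",
       SUM(reloads) "Reloads",
       ROUND(100 * SUM(reloads) / SUM(pins), 2) "Reload %"
FROM   v$librarycache;
```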
The dictionary cache stores information about the database, its structures, and its users. Information in the dictionary cache includes segment names, users, privileges, and locations of extents. Before most operations can take place in the Oracle database, the data dictionary tables must be read; that data is then stored in the dictionary cache. As with the buffer cache and the library cache, the efficiency of the dictionary cache is determined by its hit ratio. The V$ROWCACHE view is useful in determining the hit ratio of the dictionary cache. The dictionary cache is sized only indirectly with the database initialization file parameter SHARED_POOL_SIZE.
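As a sketch, the dictionary cache hit ratio can be computed from V$ROWCACHE as follows; figures above roughly 90 percent are generally considered healthy:

```sql
-- Overall dictionary cache hit ratio. GETS is the total number of
-- requests; GETMISSES counts requests that had to read from disk.
SELECT ROUND(100 * SUM(gets - getmisses) / SUM(gets), 2) "Dict Cache Hit %"
FROM   v$rowcache;
```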
If you are running in a multithreaded server (MTS) environment, user session information, such as private SQL and sort areas is stored in the shared pool rather than in the memory of user processes. If you are using an MTS, you might need to make your shared pool larger to accommodate the extra memory requirements caused by moving some sort information into the user global area (UGA). The size of the shared pool is determined by the database initialization file parameter SHARED_POOL_SIZE. The size of the library cache and dictionary cache is limited by the size of the shared pool.
The redo log buffer is an area of the SGA that records all changes made to the database. Information is periodically written from the redo log buffer to the online redo logfiles so that they can be applied in a roll-forward action if recovery is needed. The size of the redo log buffer, in bytes, is specified by the database initialization file parameter LOG_BUFFER.
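One common check, sketched here, is the 'redo log space requests' statistic in V$SYSSTAT, which counts how often a process had to wait for space in the redo log buffer; a value near zero is the goal, and a steadily growing value suggests increasing LOG_BUFFER:

```sql
-- Number of times a process waited for space in the redo log buffer.
SELECT name, value
FROM   v$sysstat
WHERE  name = 'redo log space requests';
```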
Tuning I/O
To obtain maximum performance on your system, even out the I/O distribution by spreading the Oracle database files across multiple devices. Disk contention can occur when multiple processes try to access the same disk simultaneously. When the maximum number of accesses to a disk has been reached, other processes will need to wait for access to the disk.
The V$FILESTAT view provides information about datafile read and write statistics. FILE# can be used to join to the V$DATAFILE view to obtain the file name.
The sum of PHYRDS and PHYWRTS will give the total I/O for a given file. Excessive I/O on the SYSTEM tablespace might indicate that the shared pool size is too small, causing excessive physical reads and writes to the data dictionary, which resides in the SYSTEM tablespace.
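A sketch of such a query, joining V$FILESTAT to V$DATAFILE and listing the busiest files first:

```sql
-- Total physical I/O per datafile, busiest files first.
SELECT d.name "File Name",
       f.phyrds "Physical Reads",
       f.phywrts "Physical Writes",
       f.phyrds + f.phywrts "Total I/O"
FROM   v$filestat f, v$datafile d
WHERE  f.file# = d.file#
ORDER  BY f.phyrds + f.phywrts DESC;
```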
Another cause of heavy I/O on the SYSTEM tablespace can result from a failure to specify a user's default tablespace. Make sure that data segments are not written to the SYSTEM tablespace. If a user is created without a default tablespace assignment, the user's default tablespace will be the SYSTEM tablespace. Always assign a non-SYSTEM default tablespace to the user.
If sort segments are written to the SYSTEM tablespace, excessive I/O will result. If a user is created without a temporary tablespace assignment, the user will perform all sorts to disk in the SYSTEM tablespace. Always assign a non-SYSTEM temporary tablespace to the user.
Report.txt and V$FILESTAT report I/O statistics for the Oracle database files only. If you are running non-Oracle applications on your system, you might want to utilize operating system utilities to track I/O for all files on the system.
Tuning Contention
Contention occurs when a process competes with another process for the same resource simultaneously. This causes processes to wait for a resource on the Oracle system and can have an effect on performance.
Latch contention can occur for the library cache, buffer cache, and log buffer structures in the SGA. Latch contention is identified in report.txt, which is the output file generated when you run utlestat. In addition, the V$LATCH view will provide information about each type of latch in your system.
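As a sketch, the following query lists the latches that have experienced misses; a high ratio of MISSES to GETS indicates contention for that latch:

```sql
-- Latches showing contention. SLEEPS counts how often a process
-- had to sleep while waiting for the latch.
SELECT name, gets, misses, sleeps
FROM   v$latch
WHERE  misses > 0
ORDER  BY misses DESC;
```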
Contention for rollback segments can also result in poor system performance. Rollback segments are used to store undo information for transactions and for rollback, read consistency, and recovery. Contention problems can occur when transactions experience waits for rollback segments. You must ensure that you have a sufficient number of rollback segments and that the segments are sized properly.
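One hedged way to spot rollback segment contention is to check V$WAITSTAT for waits on undo blocks; counts that are more than a small fraction of total data requests suggest adding rollback segments:

```sql
-- Waits on undo (rollback segment) header and data blocks.
SELECT class, count
FROM   v$waitstat
WHERE  class IN ('undo header', 'undo block');
```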
Resolving the Common Performance Issues :
Topics in this section address the most common database performance issues.
Buffer Cache
Missing Indexes
Too many Indexes
Similar Indexes
Optimizer Selection
I/O Distribution
Referential Integrity Locking Problems
Buffer Cache :
Many sites have very large buffer caches. For these, scripts that check disk reads do not catch all poorly performing SQL. A column that you can look at is the BUFFER_GETS column in the V$SQLAREA view. A SQL statement may be scanning hundreds of thousands of buffers in the buffer cache. To capture these types of statements, use the following script. Note that the response-time estimate in the script assumes a scan rate of 500 buffers per second.
COLUMN "Response" FORMAT 999,999,999.99
TTITLE 'List Statements in Shared Pool with the Most Buffer Gets'
SELECT sql_text, buffer_gets ,
executions ,
buffer_gets / DECODE(executions, 0,1,executions) "Average",
' Estimated Response = ',
buffer_gets / DECODE(executions, 0,1,executions) / 500
"Response"
FROM v$sqlarea
WHERE buffer_gets / DECODE(executions, 0,1,executions) > 2000;
Missing Indexes:
The following script lists all tables that don't have indexes. Many sites leave indexes off smaller tables, expecting them to be stored in the buffer cache. You can force a table to be cached using the statement ALTER TABLE tname CACHE, to guarantee that it's in the buffer cache. The problem is that having the unindexed table cached does not always guarantee the best performance. When an unindexed small table is joined to a larger table, the smaller table will often become the driving table because the smaller table has a full table scan performed on it. This situation provides much slower performance than having the larger table being the driving table. Also, a table without an index cannot be guaranteed of uniqueness unless it is done through application code.
TTITLE 'Report on all Tables Without Indexes'
SELECT owner, table_name
FROM all_tables
MINUS
SELECT owner, table_name
FROM all_indexes;
Too many Indexes :
The script below lists all tables that have more than six indexes. There are no hard-and-fast rules for the number of indexes a table has to have to maintain optimal performance. At the very least, though, tables with more than six indexes should have their indexes justified.
TTITLE 'Tables which have > 6 Indexes'
SELECT owner, table_name, COUNT(*)
FROM all_indexes
WHERE owner NOT IN ('SYS', 'SYSTEM')
GROUP BY owner, table_name
HAVING COUNT(*) > 6;
Similar Indexes :
The script below gives you a list of indexes that have the same column as the leading column in the index. These indexes can cause queries to use an inappropriate index: when two indexes are of equal ranking, Oracle will use the index that was created most recently. This can cause different indexes to be used from one environment to the next (e.g., from DEV to TEST to PROD), because each environment may have had its indexes created in a different sequence. We strongly recommend that you use hints in your programs, where necessary, to force specific queries to use a specified index.
The following information does not automatically indicate that an index is incorrect; however, you'll need to justify the existence of each of the indexes. The script lists all tables that have more than one index with the same leading column. If you run EXPLAIN PLAN on your SQL statements, you may be surprised at how many times a statement is not using the most appropriate index and the index is one of the indexes provided by the following script.
Note: Multiple indexes on a table with the same leading column may be totally legitimate, so you can't always assume that they will cause a problem.
COLUMN table_owner FORMAT a20;
COLUMN table_name FORMAT a26;
COLUMN column_name FORMAT a26;
TTITLE 'Indexes which may be Superfluous'
SELECT table_owner, table_name ,column_name FROM all_ind_columns
WHERE column_position =1
GROUP BY table_owner, table_name, column_name
HAVING COUNT(*) > 1;
Optimizer Selection :
We recommend that every site aim to use the cost-based optimizer. If application developers are not skilled in writing SQL tuned for the rule-based optimizer, the cost-based optimizer can provide more acceptable response times, because it selects more appropriate driving tables and sometimes performs full table scans on tables in which a high percentage of rows match the value you are searching on. The cost-based optimizer also offers performance advantages such as the parallel query option, which makes more intelligent decisions about load sharing across processors, and distributed queries that can use indexes unavailable to the rule-based optimizer. Oracle's performance-monitoring tool, Oracle Expert, operates with and provides advice based on the cost-based optimizer.
The DBA must tell the application development team which optimizer to use and must ensure that every new table is analyzed. If a table that has not been analyzed is joined to one that has, an inefficient full table scan on the unanalyzed table often results. The following script checks which tables have been analyzed. If some are analyzed and some aren't, you will almost always experience performance problems. If you are supposed to be running the cost-based optimizer, your tables must be analyzed.
COLUMN owner FORMAT a16
COLUMN "Chained Rows" FORMAT 99,999
COLUMN table_name FORMAT a26
COLUMN analyzed FORMAT a16
TTITLE 'Tables that Are Analyzed (Summary by Owner)'
SELECT owner,
SUM(DECODE(NVL(num_rows,9999), 9999,0,1)) "Tables Analyzed",
SUM(DECODE(NVL(num_rows,9999), 9999,1,0)) "Tables Not Analyzed"
FROM all_tables
WHERE owner NOT IN ('SYS', 'SYSTEM')
GROUP BY owner;
The next script provides you with a table-by-table account of which tables are analyzed. Don't analyze indexes only! This will cause the cost-based optimizer to make some bad decisions. The ideal is to compute statistics on all tables. For very large tables, however, this may take too long. If this is the case, we recommend that you estimate statistics on 40% or more of the rows in your tables and compute statistics on the indexes. One other item that you should be aware of is that the optimizer path can change with new versions of the Oracle RDBMS. It may also change if you re-analyze tables on a periodic basis.
Notice that this script also lists the number of chained rows in each table.
TTITLE 'Tables that Are Analyzed'
SELECT owner, table_name,
DECODE(NVL(to_char(num_rows), '** Not Analyzed' ),
'** Not Analyzed', '**> Not Analyzed' , 'OK') "Analyzed"
, NVL(chain_cnt,0) "Chained Rows"
FROM all_tables
WHERE owner NOT IN ('SYS', 'SYSTEM')
ORDER BY owner, table_name ;
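As a sketch (the owner, table, and index names are hypothetical), the analysis strategy described above translates to ANALYZE commands like these:

```sql
-- Compute full statistics on a table (preferred when time permits).
ANALYZE TABLE app_owner.orders COMPUTE STATISTICS;

-- For very large tables, estimate statistics from a sample of the rows.
ANALYZE TABLE app_owner.order_lines ESTIMATE STATISTICS SAMPLE 40 PERCENT;

-- Compute statistics on an index.
ANALYZE INDEX app_owner.orders_pk COMPUTE STATISTICS;
```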
I/O Distribution :
Check the output of the following query from the UTLBSTAT/UTLESTAT temporary tables to observe how well the I/O load is distributed across the disk devices in your system.
set charwidth 48;
set numwidth 12;
rem I/O should be spread evenly across drives. A big difference
rem between PHYS_READS and PHYS_BLKS_RD implies table
rem scans are going on
select * from stats$files;
Sample Output:
TABLE_SPACE  FILE_NAME          PHYS_READS  PHYS_BLKS_RD  PHYS_RD_TIME  PHYS_WRITES  PHYS_BLKS_WR  PHYS_WRI_TIM
-----------  -----------------  ----------  ------------  ------------  -----------  ------------  ------------
ROLLBACK     /db/datafile1.ora           0             0             0           78            78          2034
TEMP         /db/datafile2.ora           0             0             0            0             0             0
DATA1        /db/datafile3.ora         202           766          1330           54            54          1058
DATA2        /db/datafile4.ora          62           391           720           29            29          1112
INDEX1       /db/datafile5.ora          37            37           117            3             3            39
SYSTEM       /db/datafile0.ora         740          3353          6779           44            44          1619

6 rows selected.
In the example shown, the SYSTEM tablespace was the most active, which is not a healthy scenario. The output shown here indicates that there might be sort activity going on in the SYSTEM tablespace. To identify the users who will perform their disk sorting in the SYSTEM tablespace, run the following command:
SELECT username FROM dba_users WHERE temporary_tablespace = 'SYSTEM';
What can you do about this problem? Create a separate tablespace for temporary table activity. With the ALTER USER username TEMPORARY TABLESPACE tablespace command, you can assign each user to this new temporary tablespace. If users do not have a temporary tablespace explicitly assigned, they default to the SYSTEM tablespace, which should not be allowed. If you cannot add a new temporary tablespace because there isn't enough free disk space on your system, assign your users elsewhere, perhaps to the DATA1 tablespace.
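As a sketch (the datafile path, size, and user name are hypothetical), the steps described above look like this:

```sql
-- Create a dedicated tablespace for sort activity.
CREATE TABLESPACE temp
    DATAFILE '/db/temp01.ora' SIZE 100M
    TEMPORARY;

-- Move a user's disk sorts out of the SYSTEM tablespace.
ALTER USER scott TEMPORARY TABLESPACE temp;
```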
Referential Integrity Constraints :
RI constraints are validated by the database via a simple SELECT against the parent (referenced) table in question. This is very simple and straightforward. If a row is deleted or a primary key is modified within the parent table, all associated child tables need to be scanned to make sure no orphaned records will result. If a row is inserted or the foreign key is modified, the parent table is scanned to ensure that the new foreign key value(s) is valid. If a DELETE CASCADE clause is included, all associated child table records are deleted. Problems begin to arise when we look at how the referential integrity is enforced.
Oracle assumes the existence of an index over every foreign key within a table. This assumption is valid for a primary key constraint or even a unique key constraint but is a little presumptuous for every foreign key. Most applications we've seen have a small number of core tables which can have 100 or more columns, of which 10 or 20 columns will be foreign keys. For example, the EMP table could have many foreign keys: emp_category, emp_type, salary_range, post_code, dependency_code, mgr_emp_no, dept_code, and the list goes on. It would not be practical to have 20 or more indexes over each of these core tables.
If an index exists on the foreign key column of the child table, no DML lock, other than a lock over the rows being modified, is required. If the index is not created, a share lock is taken out on the child table for the duration of the transaction. The referential integrity validation could take several minutes or even hours to resolve. The share lock over the child table will allow other users to simultaneously read from the table, while restricting certain types of modification. The share lock over the table can actually block other normal, everyday modification of other rows in that table.
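For the important constraints, the fix is simply to index the foreign key column of the child table (the EMP table and DEPT_CODE column here are hypothetical):

```sql
-- Index the foreign key column of the child table so that DML against
-- the parent does not take a share lock on the entire child table.
CREATE INDEX emp_dept_code_i ON emp (dept_code);
```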
Referential integrity has been responsible for bringing many good applications to their knees. Well-meaning DBAs try to make their jobs easier by implementing every conceivable RI constraint possible. They assume that this will guarantee the integrity of the data. However, creating every conceivable foreign key index is out of the question.
The only viable workarounds to this locking behavior are the following:
Enable referential integrity and create all associated indexes for the core RI constraints of the application.
Replace embedded RI constraints with row-level stored triggers for those important RI constraints that are not practical to index.
Omit all referential integrity for the less important RI application constraints, leaving the validation up to the application code itself.
Module 2 :
Memory Tuning :
The following topics are covered in this section:
Buffer Cache
Multiple Buffer Pool Feature
Shared Pool
Shared Executable Images
System Global Area
Memory Tuning - Buffer Cache :
The buffer cache is an area in memory that stores copies of database blocks for tables, indexes, rollback segments, clusters, sort data (if SORT_DIRECT_WRITES=FALSE and COMPATIBLE is not set), and Oracle dictionary data. Each buffer holds a single Oracle data block as set in the DB_BLOCK_SIZE INIT.ORA parameter. The buffer cache significantly reduces disk I/O and improves performance. By simply increasing one INIT.ORA parameter, DB_BLOCK_BUFFERS, you can often realize better than 60% performance improvements for both long-running update jobs and online transaction-processing systems. In general, the larger the buffer cache, the faster Oracle runs on most systems. This is not the case if enormous tables are being serially scanned on a continual basis, but it is true for the more common online transactions.
In DB_BLOCK_BUFFERS, you specify the number of Oracle database blocks that will fit in the buffer cache. To make this change take effect, you must shut down the database or database instance and then restart it. Typically, a production database cannot perform with a buffer cache that is under 10 megabytes, and most perform at their optimum with a buffer cache 50 megabytes or larger.
Oracle recommends that you gauge the performance of the buffer cache by looking at the hit ratio — how often the data being retrieved are available in memory (rather than having to be retrieved from disk). We find, however, that the hit ratio is only one part of the performance story. The other consideration is how effectively the database writer (DBWR process) is writing to your database. When a user process reads data into the buffer cache, it has to find free buffers to place the data into. If the buffers are dirty, it has to wait for the DBWR process to write the dirty buffers to disk before the user data can be loaded in. This task can increase response times markedly.
As a guide, you should aim for a hit ratio as high as 95% for OLTP applications and 85% for batch applications. The main reason for the higher target for OLTP applications is that dictionary tables, rollback data, and index range data are accessed often, boosting the figure. Sorts performed through the buffer cache (when SORT_DIRECT_WRITES and COMPATIBLE are not set) accumulate physical reads without corresponding logical reads, which forces the hit ratio down.
Note: Increase the buffer cache only after you have tuned the shared pool size.
Remember that you must have at least 5% free memory on your machine. If you have closer to 10%, ask yourself whether you would benefit from adding this memory to your buffer cache. The answer is almost always yes. Make sure you include the planned memory usage of new users coming on to your system!
Multiple Buffer Pool Features :
The Oracle buffer cache is critical to obtaining optimal database performance. Prior to Oracle8, there were very few tuning options for the buffer cache other than assigning buffers using the DB_BLOCK_BUFFERS initialization parameter. In most cases, the LRU algorithm governing the buffer cache works quite well, and no other tuning is required. However, certain data access patterns have been identified that could benefit from alternative buffer aging algorithms. Oracle8 allows the buffer cache to be divided into up to three separate buffer pools to accommodate multiple cache management strategies, with objects assigned to specific buffer pools. These new cache areas are known as the DEFAULT, RECYCLE, and KEEP buffer pools.
This topic presents results of preliminary investigations into this new and little-utilized feature of Oracle8, including some experimental evidence that it can provide significant performance benefits under the right circumstances.
In addition, it discusses some Oracle8i/9i enhancements to the buffer cache and presents test results of the midpoint insertion algorithm introduced in Oracle8i.
Buffer Cache Review
Oracle8 Buffer Pools
Viewing Buffer Cache Information
Testing the Multiple Buffer Pool
Oracle8i Buffer Cache Changes
Buffer Cache Review :
The following points are discussed in this section:
Cache Hit Ratio
Buffer Aging and the LRU Lists
DBWR and the Dirty List
Full Table Scans: CACHE and NOCACHE
Buffer Cache Problems
All Oracle data is obtained by users from the buffer cache. The basic purpose of the cache is to minimize physical disk I/O by holding (buffering) copies of requested data blocks in memory. Data that is buffered in the cache can be served to users at memory access speed, much faster than going to disk for it. The large performance penalty of disk I/O makes tuning the buffer cache an extremely important task for the DBA.
Oracle data is stored on disk in identically sized units called blocks. Block size is determined at database creation by the DB_BLOCK_SIZE initialization parameter. The buffer cache is also divided into buffers of this same size, and each buffer can hold exactly one database block. Thus, the block is the basic unit of data transfer between disk and memory in Oracle.
The figure below shows a schematic of block movement between disk and the buffer cache through user processes and the DBWR background process. User processes obtain blocks from the cache if they can, otherwise they read from disk into the cache. DBWR is responsible for writing dirty (modified) blocks out from the cache to disk.
Cache Hit Ratio
The buffer cache is a shared resource, accessible by all users. When a user process requests data, Oracle first looks for that data's block in the cache. If the data is buffered in the cache, it is returned to the requestor immediately. This is called a cache hit. When the data is not found, a cache miss occurs and the user process reads the data from disk into an available buffer in the cache. The cache hit ratio is the overall percentage of data requests that are served directly from the cache. In Oracle, the buffer cache hit ratio is normally computed using the following formula:
Cache Hit Ratio = 100 * (1 - physical reads/logical reads)
In this formula, 'physical reads' corresponds to cache misses and 'logical reads' corresponds to total data requests.
Tuning the buffer cache for optimum performance usually involves adding buffers to the cache until the hit ratio has been maximized. The number of buffers in the cache is specified by the DB_BLOCK_BUFFERS initialization parameter.
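A sketch of computing this ratio directly from V$SYSSTAT, where logical reads are the sum of the 'db block gets' and 'consistent gets' statistics:

```sql
-- Buffer cache hit ratio since instance startup, as a percentage.
SELECT ROUND(100 * (1 - p.value / (d.value + c.value)), 2) "Hit Ratio %"
FROM   v$sysstat p, v$sysstat d, v$sysstat c
WHERE  p.name = 'physical reads'
AND    d.name = 'db block gets'
AND    c.name = 'consistent gets';
```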
Buffer Aging and LRU Lists
An Oracle database typically has many more data blocks on disk than memory buffers in the buffer cache. Since not all blocks can be buffered in the cache at once, new block requests (cache misses) must find room in the cache to be read in from disk. When this happens, another block in the cache is usually discarded since the cache is normally full (and fixed in size).
The buffer cache is carefully designed to favor keeping frequently requested blocks in memory and allow less popular blocks to be replaced by new block requests. These decisions about which blocks to replace are made using a least recently used (or LRU) algorithm. This algorithm uses a data structure called the LRU list. This list basically orders the buffers in the cache according to when they were last accessed by a user. When a block is accessed, it is moved to the MRU (most recently used) end of the list. Blocks in the cache that are not accessed for awhile will find more and more blocks ahead of them in the list, and they will be closer to the LRU end of the list. This is also known as buffer aging in the LRU list.
Buffers are replaced in the cache from the least recently used end of the LRU list. This helps insure that frequently accessed buffers are not discarded, as they are regularly moved to the MRU end of the list with each access. This mechanism of keeping the most requested blocks in the buffer cache is normally very effective at minimizing disk I/O.
Managing a single LRU list can sometimes be a bottleneck in a heavily loaded database. The buffer cache can be divided into multiple working sets of buffers, each of which is effectively an individual LRU list. The number of working sets (LRU lists) used by Oracle is determined by the DB_BLOCK_LRU_LATCHES initialization parameter. This parameter is also important for configuring multiple buffer pools, as each pool must have at least one working set (and thus LRU latch).
DBWR and the Dirty List
In addition to the LRU list, Oracle keeps a list of buffers that contain data that has been modified by users. This list is called the dirty list. Changed data committed by users must eventually be written to disk, as this is the permanent storage of the database. The DBWR background process is responsible for moving blocks from the dirty list to their permanent locations in disk files.
Dirty blocks cannot be replaced in the cache until they have been written to disk, otherwise the changes would be lost. An overabundance of dirty buffers can negatively impact cache efficiency by reducing available slots for new blocks. This can happen when DBWR is unable to keep up with the volume of update activity. Multiple DBWR processes can be configured in this case to increase the capacity to write out dirty blocks.
Full Table Scans: CACHE and NOCACHE
The LRU aging algorithm of Oracle's buffer cache treats blocks read by full table scans differently from blocks accessed randomly, based on the observation that a full scan could flood the MRU end of the list with blocks that have a low probability of re-use. To avoid the damage this could do to cache performance, Oracle places blocks from full scans at the LRU end of the list, allowing them to be quickly replaced rather than aging out other blocks.
In certain cases (like small lookup tables) full table scan blocks may actually be good candidates for re-use and would be better placed at the MRU end of the list. Oracle provides the ability to override the default cache treatment of full table scans (for large or small tables) at the table level using the CACHE or NOCACHE clauses of the CREATE TABLE and ALTER TABLE statements. Specifying CACHE indicates that the table's blocks should be placed at the MRU end of the LRU list when read during a full table scan.
Full scan LRU treatment of blocks can also be specified at the individual SQL statement level using the CACHE and NOCACHE hints.
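As a sketch (the table and alias names are hypothetical), a small lookup table could be marked for MRU placement either at the table level or per statement:

```sql
-- Table-level: blocks read by full scans of this table go to the MRU end
ALTER TABLE state_codes CACHE;

-- Statement-level: the CACHE hint achieves the same effect for one query
SELECT /*+ FULL(s) CACHE(s) */ *
FROM state_codes s;
```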
Buffer Cache Problems
The LRU algorithm of the Oracle buffer cache is normally very good at providing efficient minimization of physical disk I/O. However, there are some situations where normal buffer cache aging may not be the best option for overall performance, for instance:
Blocks that should not go to the MRU end of the list
Blocks that should be excluded from aging and stay in the cache
The first situation can occur when very large tables are accessed randomly by users with very little block use overlap between users. In this case, the MRU end of the list is flooded by blocks that will not result in subsequent cache hits yet age other blocks down to the LRU end of the list. These other blocks may be replaced when they could have resulted in cache hits had they been kept.
The second situation occurs when there are data blocks that will definitely be requested regularly and we want to ensure that physical disk I/O is not necessary to obtain their data. An example might be small lookup tables, or perhaps specific indexes.
The multiple buffer pool feature allows greater control over buffer cache usage to help address these problems and obtain even better cache performance.
Buffer Cache Pools :
The Oracle8 buffer cache can be divided into three separate pools of buffers to help segregate blocks with different LRU aging requirements. These pools are known as the RECYCLE, KEEP and DEFAULT buffer pools.
The RECYCLE Pool
In the first case, random access blocks flooding the MRU end of the list will not have a high re-use rate, yet they can age more desirable blocks out of the cache. Physical disk reads will be incurred when these other blocks are subsequently requested again. It would be better if these random access blocks could "age faster" in the cache and be quickly replaced themselves rather than pushing other blocks out. The Oracle8 RECYCLE buffer pool is specifically designed to provide working sets of buffers that can be rapidly aged out of the cache.
By isolating blocks from large, randomly accessed tables away from the rest of the buffer cache the RECYCLE pool relieves the pressure these blocks can place on the LRU list. This allows buffers from other objects to age less quickly and increases the chances of subsequent cache hits.
The RECYCLE pool can also be used to place a strict upper bound on the number of buffers any particular object will consume in the buffer cache. Since blocks from objects assigned to the RECYCLE pool will not likely be needed again soon, the pool itself can often be far smaller than the number of buffers these objects would occupy in the DEFAULT pool. This results in more memory available for the other pools, increasing their efficiency.
The KEEP Pool
Some blocks would be better off if they "aged slower" (or not at all) out of the cache. Imagine a transaction that randomly accesses some table and joins in data from five small lookup tables based on foreign keys. If the blocks from these lookup tables and their indexes are not in the cache, this transaction may incur an additional 10 or more physical disk reads to get the data. When response time is critical, this extra overhead can cause a wide disparity between the best and worst cases. The Oracle8 KEEP buffer pool allows objects to be effectively "pinned" into the buffer cache and excluded from the LRU aging process.
The KEEP pool is used to ensure the presence of buffers in the cache regardless of when they were last accessed. This feature should be used carefully, though, as pinning infrequently used objects into the pool can be a waste of memory that could be better used elsewhere.
The DEFAULT Pool
The DEFAULT pool is for all objects not explicitly assigned to one of the other pools. There is always a DEFAULT pool and it will most likely be the largest of the pools under normal circumstances.
Configuring and Using Multiple Buffer Pools
The DEFAULT, KEEP and RECYCLE buffer pools are configured using the initialization parameters shown in the following table.
Parameter Description
DB_BLOCK_BUFFERS Total number of block buffers for all pools
DB_BLOCK_LRU_LATCHES Total number of LRU latches for all pools
BUFFER_POOL_KEEP Number of buffers and latches for the KEEP pool
BUFFER_POOL_RECYCLE Number of buffers and latches for the RECYCLE pool
The KEEP and RECYCLE pools are explicitly configured with buffers and LRU latches; the DEFAULT pool is allocated the remainder of the overall totals specified by DB_BLOCK_BUFFERS and DB_BLOCK_LRU_LATCHES.
For example, suppose we have 1000 total cache buffers to allocate. We can assign 700 to DEFAULT, 200 to KEEP and 100 to the RECYCLE pools with one LRU latch per pool as follows:
db_block_buffers = 1000
db_block_lru_latches = 3
buffer_pool_keep = (buffers:200,lru_latches:1)
buffer_pool_recycle = (buffers:100,lru_latches:1)
Note the syntax for the two new parameters BUFFER_POOL_KEEP and BUFFER_POOL_RECYCLE.
The minimum size of an LRU working set is 50 buffers, so the number of buffers configured for any of the pools must be at least 50 times the number of LRU latches. Conversely, the number of LRU latches cannot exceed DB_BLOCK_BUFFERS divided by 50. Each LRU working set in a pool is equally sized at the number of buffers divided by the number of latches.
The maximum number of LRU latches that can be configured for all the pools is six times the number of CPUs.
Load Balancing Multiple DBWRs
When multiple buffer pools are used in combination with multiple DBWR processes, there is a potential for uneven load balancing to occur between the DBWR processes. The DBWR processes are assigned to LRU working sets in round-robin fashion, and the working sets from different pools may be of different sizes and impose different database writing workloads. For instance, a DBWR assigned to working sets from the KEEP and DEFAULT pools may have a much different workload than one assigned sets from the RECYCLE pool. Oracle recommends avoiding such load imbalances by allocating to each pool a number of LRU latches equal to a multiple of the number of DBWR processes. Under this strategy, each DBWR draws the same number of working sets from each pool, and the load is balanced.
Assigning Objects to Buffer Pools
Segments can be assigned to buffer pools using the new BUFFER_POOL option of the STORAGE clause. For example, a table can be created and assigned to the KEEP pool as follows:
CREATE TABLE new_table
(col1 NUMBER, col2 VARCHAR2(100))
TABLESPACE tablespace_name
STORAGE (INITIAL 64K
NEXT 128K
BUFFER_POOL KEEP);
Other valid values are BUFFER_POOL RECYCLE and BUFFER_POOL DEFAULT.
The following statements accept this new STORAGE clause option:
CREATE TABLE
CREATE INDEX
CREATE CLUSTER
ALTER TABLE
ALTER INDEX
ALTER CLUSTER
When a pool is not explicitly specified, segments are assigned to the DEFAULT pool. Clustered tables inherit the pool of their cluster. Note also that pools can be specified at the partition level for partitioned segments.
If a segment is reassigned from one pool to another using an ALTER command, blocks in the cache from the segment stay in their current pool until aged out. New blocks from the segment will be loaded into the new pool.
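For example, an existing table (the name here is hypothetical) could be moved to the RECYCLE pool; as noted above, only newly read blocks honor the change:

```sql
-- Reassign the segment; cached blocks stay in their old pool until aged out
ALTER TABLE order_history
STORAGE (BUFFER_POOL RECYCLE);
```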
Memory Tuning - Shared Pool :
The SGA contains a shared buffer pool that holds the following:
Library cache, which contains shared SQL and PL/SQL statements
Dictionary cache, which stores data dictionary information
Some session information (only if you have a multithreaded server architecture)
To tune this pool most effectively, you need to set several INIT.ORA parameters: SHARED_POOL_SIZE, SHARED_POOL_RESERVED_MIN_ALLOC, and SHARED_POOL_RESERVED_SIZE. You should also be making use of a packaged procedure, dbms_shared_pool.keep('user.object'), that allows you to maintain a selected package, procedure, trigger, or other code object in memory.
The topics below describe each of these components and suggest how to monitor the shared buffer pool to make sure you have allocated enough space for this area.
Tuning the Data Dictionary Cache
Tuning the Library Cache
Tuning the Shared SQL Area
Tuning the Shared Pool - Splitting the Pool
Tuning Session Data
Pinning objects in the shared pool
Tuning the Data Dictionary Cache:
You need to set only one parameter, SHARED_POOL_SIZE, to control the amount of space allocated for the entire dictionary cache. You need to monitor the number of times Oracle tried but failed to find items in the dictionary.
Missing a get on the data dictionary or shared pool area of the SGA is more costly than missing a get on a data buffer or waiting for a redo buffer.
Some things to notice about the script are that it selects only statistics that have been used more than 100 times and that have had getmisses occur. The assumption here is that if a parameter has had no getmisses, it should be satisfactory. The floor of 100 gets was selected to ensure that the parameter has had enough activity to generate valid statistics. Note also that the percentage of misses is automatically calculated and reported for each parameter.
In reviewing this report the following things should be checked:
1. Review Count and Usage columns. If Usage is equal to Count the cache area is being fully utilized. If Usage is consistently low compared to Counts, consider reducing the INIT.ORA parameter that controls the caches (SHARED_POOL_SIZE).
2. If Counts and Usage are equal and Percents are greater than 10%, consider increasing the INIT.ORA parameter that controls the caches (SHARED_POOL_SIZE).
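A sketch of the kind of report script described above might look like the following (the 100-get floor and getmisses filter mirror the assumptions stated earlier):

```sql
-- Dictionary cache statistics with enough activity to be meaningful
SELECT parameter, gets, getmisses,
       ROUND(getmisses / gets * 100, 2) "Percent"
FROM v$rowcache
WHERE gets > 100
AND getmisses > 0
ORDER BY parameter;
```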
Since we are actually only concerned with an aggregate view of the cache area's performance, the following query can be substituted into the report to give you an overall health indicator:
SELECT (SUM(getmisses) / SUM(gets)) "DD CACHE MISS RATIO"
FROM V$ROWCACHE;
This substitution simplifies the report into:
Date: 06/21/97 Page: 1
Time: 04:39 PM DD Cache Miss Ratio SYSTEM
ORTEST1 database
RATIO
---------
.01141403
1 row selected.
Tuning the Library Cache :
The library cache is used to store information concerning the shared objects. These consist mainly of the SQL AREA, TABLE/PROCEDURE, BODY, and TRIGGER type objects.
There is only a certain amount of memory in the library cache. If no more room is available in the cache for new entries, old statements are removed to make room. Then, if the statement that was removed from the cache is needed again, Oracle will have to parse it again before putting it back into the shared area. This will consume CPU and I/O resources.
To avoid objects being unloaded and reloaded into the library cache, be sure that the value of SHARED_POOL_SIZE is large enough, and use packages, procedures, and functions wherever possible. The advantage of using packages, procedures, and functions is that they do not need to be parsed when they are loaded into the database.
To determine the optimal size for the library cache monitor your system to find out how often Oracle is looking for statements in the cache that have been removed because of inadequate memory. You can determine this using the V$LIBRARYCACHE table.
Warning: Oracle always has some free memory in the shared pool area, which it sets aside for incoming objects, so having free memory does not give you an immediate instruction to decrease the size of the shared pool. Oracle is also quite wasteful of memory, allocating larger chunks of memory to objects than they actually require. It does this to reduce the CPU overhead of allocating memory chunks to the many objects being loaded into and flushed from the shared pool.
The V$LIBRARYCACHE view includes the columns: NAMESPACE, GETS, GETHITRATIO, PINS, PINHITRATIO, RELOADS, INVALIDATIONS.
NAMESPACE refers to the type of object (listed above). PINS refers to the number of times the object was executed. RELOADS shows the number of library cache misses on execution steps. If the ratio of reloads to pins exceeds 1% the SHARED_POOL_SIZE parameter should be increased. This can be determined by a simple query:
SELECT (SUM(reloads)/SUM(pins)) * 100 "Miss %"
FROM v$librarycache;
To fully utilize the higher value for SHARED_POOL_SIZE, you may also want to increase the number of cursors available to each user. This is accomplished via the OPEN_CURSORS INIT.ORA parameter.
If your gethitratio (Get Hit%) or pinhitratio (Pin Hit%) falls below 70-80% for objects with high values for Gets and Pins, you should increase your SHARED_POOL_SIZE parameter. If you see excessive reloads, you will also want to increase the shared pool. Invalidations should not occur in a production environment but will probably be seen frequently during development.
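The per-namespace figures discussed above can be pulled with a query along these lines:

```sql
-- Library cache health by object type
SELECT namespace, gets,
       ROUND(gethitratio * 100, 2) "Get Hit%",
       pins,
       ROUND(pinhitratio * 100, 2) "Pin Hit%",
       reloads, invalidations
FROM v$librarycache;
```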
Tuning the Shared SQL Area :
Each SQL statement that is executed is stored partly in the shared SQL area and partly in the private SQL area. The private area is further broken into two parts: the persistent area and the run-time area. The persistent area is used for binding information. The larger the number of columns in a query, the larger the persistent area. The size of the run-time area depends on the complexity of the statement. The type of statement is also a factor. An INSERT, UPDATE, or DELETE statement will use more run-time area than a SELECT statement will.
For INSERT, UPDATE, and DELETE statements, the run-time area is freed immediately after the statement has been executed. For a query, the run-time area is cleared only after all rows have been fetched or the query is canceled.
A private SQL area continues to exist until the corresponding cursor is closed. Note that the run-time area is freed but the persistent (binding) area remains open. If a statement is re-used, leaving its cursor open is not bad practice, provided you have sufficient memory on your machine. However, leaving open cursors that are unlikely to be used again is bad practice, particularly if you are short of memory. The number of private areas is limited by the setting of the INIT.ORA parameter OPEN_CURSORS. The user process will continue to operate despite having reached the OPEN_CURSORS limit; cursors will be flushed and will need to be re-parsed the next time they are accessed. Recursive calls are used to reload cursors that must be re-bound after being closed.
There are numerous statistics available via the v$sqlarea dynamic performance table (DPT). The text of SQL statements in the shared pool can be retrieved from the v$sqltext DPT. A useful report lists the SQL statements in the SQL area with the greatest number of disk reads; these will probably be the ones you will want to review and tune, with the disk reads threshold set to suit your system.
You might also be interested in the amount of memory used by a single user. This could point to a user who is relying too heavily on ad hoc query SQL and not enough on packages, procedures, and functions.
If you detect a user that seems to be using more than their share of the shared pool, the script below can be run to see exactly what they have been executing. From the results of the script you might be able to help them optimize their processing.
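A sketch of such a per-user script (the username is a placeholder) might join v$sqlarea to dba_users via the parsing user:

```sql
-- SQL parsed by one user, largest shared memory consumers first
SELECT a.sql_text, a.sharable_mem, a.disk_reads
FROM v$sqlarea a, dba_users u
WHERE a.parsing_user_id = u.user_id
AND u.username = 'SCOTT'
ORDER BY a.sharable_mem DESC;
```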
Tuning the Shared Pool - Splitting the Shared Pool :
The SHARED_POOL_RESERVED_SIZE and SHARED_POOL_RESERVED_MIN_ALLOC parameters allow you to separate large objects from small objects. Enough memory is reserved for the loading of large packages and procedures without having too disruptive an effect on shared pool performance. Smaller objects cannot fragment the reserved area because all objects smaller than the size specified by SHARED_POOL_RESERVED_MIN_ALLOC are placed into a shared pool area that is especially reserved for the smaller objects.
SHARED_POOL_RESERVED_SIZE specifies the total amount of space given to the larger area.
The amount of space assigned to the small objects is the SHARED_POOL_SIZE less the SHARED_POOL_RESERVED_SIZE.
There is also a procedure that controls the amount of flushing from the shared pool to make way for new objects being moved into the pool. The RDBMS will continue to flush unused objects from the shared pool until enough free memory is available to fit the object. If not enough memory is available even after all unused objects have been flushed, Oracle reports an ORA-04031 error. Unfortunately, getting far enough to discover that there is not enough memory can be very resource-consuming.
The DBMS_SHARED_POOL.ABORT_REQUEST_THRESHOLD procedure sets the limit on the size of objects allowed to flush the shared pool when the free space is not sufficient to satisfy the request. All requests larger than the setting (the valid range is 5,000 to 2,147,483,647) will immediately return an ORA-04031 error if sufficient free space is not available.
As a rule of thumb, if a site is using a lot of packages and procedures, we recommend setting the SHARED_POOL_RESERVED_SIZE to half the size of the SHARED_POOL_SIZE. Typically, for a large production system, the SHARED_POOL_SIZE will be set to 60 megabytes and the SHARED_POOL_RESERVED_SIZE will be set to 30 megabytes. We then set the size of the SHARED_POOL_RESERVED_MIN_ALLOC to 2,500. Note that these are all starting point recommendations.
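Expressed as an INIT.ORA fragment, those starting-point figures would be:

```
shared_pool_size               = 60000000   # 60 MB total shared pool
shared_pool_reserved_size      = 30000000   # half reserved for large objects
shared_pool_reserved_min_alloc = 2500       # larger allocations use the reserved area
```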
The following scripts can be used to assist your tuning of the SHARED_POOL_RESERVED_SIZE:
1. This script provides you with the current settings from your database.
TTITLE ' The Reserve Pool Settings for the Shared Pool Area'
SELECT SUBSTR(name,1,32) "Parameter", substr(value,1,12) "Setting"
FROM v$parameter
WHERE NAME LIKE '%reser%'
OR
name = 'shared_pool_size';
2. This script tells us whether larger objects have failed to obtain memory in the portion of the shared pool reserved for larger objects. Increasing the size of the SHARED_POOL_RESERVED_SIZE will often overcome this problem. You should increase your SHARED_POOL_RESERVED_SIZE in 20% increments.
COLUMN next_line FORMAT a60 newline
TTITLE ' Shared Pool Reserved Size Recommendation'
SELECT 'You may need to increase the SHARED_POOL_RESERVED_SIZE' next_line,
       'Request Failures = '||request_failures
FROM v$shared_pool_reserved
WHERE request_failures > 0
AND 0 != ( SELECT to_number(value)
FROM v$parameter
WHERE name = 'shared_pool_reserved_size' );
3. This script tells you if you can decrease the SHARED_POOL_RESERVED_SIZE. Decreasing in 20% chunks is a reasonable starting point.
SELECT 'You may be able to decrease the SHARED_POOL_RESERVED_SIZE'
next_line,
'Request Failures = '||request_failures
FROM v$shared_pool_reserved
WHERE request_failures < 5
AND 0 != ( SELECT to_number(value)
FROM v$parameter
WHERE name = 'shared_pool_reserved_size' );
4. This script reports the total memory of packages, procedures, triggers, views, functions, and other objects stored in the database. Some sites prefer to sum all currently running objects that are larger than the SHARED_POOL_RESERVED_MIN_ALLOC size. The following script assumes that you have set your SHARED_POOL_RESERVED_MIN_ALLOC to 2,500.
SELECT SUM(sharable_mem)
FROM v$db_object_cache
WHERE sharable_mem > 2500;
5. This script shows the amount of storage required for SQL. If you add the two values together, you have an approximate sizing for the SHARED_POOL_RESERVED_SIZE. It is best to add on some contingency, perhaps 40%, for factors such as dynamic SQL and for statements that are not currently running. The same method can be used to calculate the total shared pool size: simply remove the WHERE sharable_mem > 2500 clause.
SELECT SUM(sharable_mem)
FROM v$sqlarea
WHERE sharable_mem > 2500;
6. When you are estimating the total shared pool size, you have to take into account user cursors, which also use memory. You need about 250 bytes of shared pool memory per user for each cursor that the user has open. To obtain the total cursor usage, either run the following query or get the figure from the "library cache" from the V$SGASTAT table.
SELECT SUM(250 * users_opening)
FROM v$sqlarea;
Shared Pool Tuning - Session Data :
If you are using Oracle's multithreaded server, you may need to allocate a larger-than-usual SHARED_POOL_SIZE. This is because for such systems, the shared pool stores the user process's private SQL area and sort areas.
To gauge how much larger to make your shared pool size to accommodate session data access, query the V$SESSTAT table as follows:
SELECT SUM(value)
FROM v$sesstat se, v$statname st
WHERE st.name = 'session uga memory'
AND se.statistic# = st.statistic#;
The result indicates the memory that is currently allocated to all sessions. You can use this figure to increase the shared pool size if you are planning to use the multithreaded server. The following script also lets you obtain the maximum amount of memory that the server sessions have used:
SELECT SUM(value)
FROM v$sesstat se, v$statname st
WHERE st.name = 'session uga memory max'
AND se.statistic# = st.statistic#;
It is usually best to use the latter calculation and add on a 30% contingency.
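Combining the two recommendations, a sizing estimate with the 30% contingency built in might look like this:

```sql
-- High-water session memory plus 30% contingency, as a shared pool addition
-- for multithreaded server configurations
SELECT ROUND(SUM(value) * 1.3) "MTS addition (bytes)"
FROM v$sesstat se, v$statname st
WHERE st.name = 'session uga memory max'
AND se.statistic# = st.statistic#;
```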
Pinning Objects into SGA :
It is the DBA's responsibility to ensure that all large, critical procedures, packages, and triggers are pinned into the SGA. Pinning objects into memory yields the best results for OLTP users. Because long-running jobs load these packages into the SGA at the beginning of the job and hold them there for the duration of the task, performance improvements for them will be minimal (but every little bit helps).
Once an object has been loaded into the shared buffer pool it can be pinned there via the DBMS_SHARED_POOL package. Pinned objects are not aged out of memory until the database is shut down or until they are explicitly unpinned.
Be careful not to pin too many objects into memory and use up all of your available shared buffer space or the performance of the remainder of the applications will suffer.
By default, the RDBMS handles all loading and flushing of SQL information from the shared buffer pool. The database uses a least recently used algorithm to prioritize shared pool memory. Database objects are loaded into the shared pool (in compiled pcode form) the first time they are referenced and become eligible for removal once all references to the object have finished. Pinning database objects into the SGA shared pool is very important to tuning large TPO-based applications. Database packages, functions, procedures, and individual cursors can all be pinned into memory. For example:
PACKAGE dbms_shared_pool IS
   PROCEDURE keep (pkg_name VARCHAR2);
END;

SQL> EXECUTE dbms_shared_pool.keep('scott.GL_package');
When a database object is pinned into memory and it is not already in the shared pool, the object is queued to be kept when it is first referenced. Note that the keep function does not compile and load the object into the SGA.
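To decide what is worth pinning, a query along these lines (a sketch; the type filter is one reasonable choice) lists the largest code objects in the cache and whether they are already kept:

```sql
-- Candidate objects for pinning, largest first
SELECT owner, name, type, sharable_mem, executions, kept
FROM v$db_object_cache
WHERE type IN ('PACKAGE', 'PACKAGE BODY', 'PROCEDURE', 'FUNCTION', 'TRIGGER')
ORDER BY sharable_mem DESC;
```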
Memory Tuning - SGA :
The System Global Area (SGA) is an area in memory that is shared among all users. The SGA is divided into four major areas: the shared pool, the redo log buffer, the database buffers, and (for databases using the multithreaded server and Net8) the large pool.
The script below lists the full contents of the SGA. The listing gives details of the SGA breakdown in memory. One particularly significant piece of information in the list is "free memory," which indicates that the SHARED_POOL_SIZE INIT.ORA parameter may be reduced if the free memory is excessive. If it is low, you should not be decreasing the SHARED_POOL_SIZE. Be careful, however, because Oracle tends to maintain some free memory even when the shared pool is flooded with activity and should be made larger.
Other figures that are useful are the size of the buffer cache, "db_block_buffers," which is usually required to be at least 20 megabytes for optimum performance; the "sql_area," which is where all of the shared SQL is placed; the "dictionary cache," which is where Oracle's dictionary is placed; and the "log buffer," which is where all changes are written prior to writing to your redo logs. User cursor usage is the value shown in "library cache."
The log buffer should typically be 32,768 bytes or larger. The "shared_sql" and "dictionary_cache" sizes are affected by the size to which you set the SHARED_POOL_SIZE INIT.ORA parameter. Note that the dictionary cache is sized automatically and cannot be tuned directly. Most sites operate most efficiently with a shared pool size of at least 30,000,000 bytes.
TTITLE ' SGA Statistic Listing '
SELECT *
FROM v$sgastat;
Module 3:
Tuning Disk I/O :
Once the application and memory areas have been tuned, the next performance bottleneck can be the disk subsystem. This system is tuned by tuning the input and output processes that Oracle uses, reducing contention for disk resources and reducing or eliminating dynamic space allocation within database data files.
I/O per Tablespace or Datafile
SORT_AREA_SIZE Tuning
Chained and Migrated Rows
Archive Writer (ARCH) Process
Identifying Full Table Scans
I/O per Tablespace or Datafile
V$FILESTAT provides a file-monitoring table that you can query to find out the number of disk I/Os per disk file.
Ideally, the disk I/Os should be spread evenly across disks. If the disk I/O on one of the disks is too high, consider moving one of the data files on that disk to another disk. If you are using raw devices, it is handy to make the raw devices consistent sizes for this purpose. If you can't move one of the data files, consider moving objects into alternate tablespaces on different disks.
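A sketch of a spread-of-I/O report can be built by joining V$FILESTAT to V$DATAFILE:

```sql
-- Physical reads and writes per data file, busiest files first
SELECT df.name, fs.phyrds, fs.phywrts
FROM v$datafile df, v$filestat fs
WHERE df.file# = fs.file#
ORDER BY fs.phyrds + fs.phywrts DESC;
```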
The output from this query will show you which files are the most active. However, it does not show you if any disk is exceeding its maximum I/Os per second at any given point in time. If you combine all of the I/Os on data files on a per-disk basis, you can identify the data files most likely to cause a disk bottleneck. You must spread your database differently if you are experiencing disk bottlenecks.
If there is a large amount of activity on one of the disks, there is often an untuned query causing the damage. Be aware of which table is causing the problems. The V$SQLAREA table has a column called disk_reads as well as the SQL statement that was performed. If you select all statements with an average greater than 2,500 disk_reads, and order it by disk_reads desc, this is often an indicator of the problem query.
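Using the 2,500-read rule of thumb from the text, a query to surface the likely culprit statements might be (GREATEST guards against division by zero for unexecuted statements):

```sql
-- Statements averaging more than 2,500 disk reads per execution
SELECT sql_text, disk_reads, executions
FROM v$sqlarea
WHERE disk_reads / GREATEST(executions, 1) > 2500
ORDER BY disk_reads DESC;
```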
Another symptom of poor performance is when you have almost as many reads as writes on the rollback segment data files. The two major causes of this are an undersized buffer cache, or a batch update job running during prime OLTP hours that updates tables read heavily by OLTP users. We recommend that batch update jobs be run overnight.
For sites that are more batch-oriented, be aware that heavy writes to the disk containing the rollback segments can cause a disk I/O bottleneck. If you have several users performing heavy updates at the same time, consider alternating rollback segments from one disk to the next by placing them into alternate tablespaces that have data files on different disks; for example, rollback1 is created in the rollback_disk1 tablespace, rollback2 in rollback_disk2, rollback3 in rollback_disk1, rollback4 in rollback_disk2, and so on. The rollback_disk1 tablespace must be on a separate disk drive from rollback_disk2.
For both recovery and performance reasons, it's also important to have your data files on different disks from your redo logs. The archive logs should be on a different disk from both data files and redo logs.
The script below can be used to determine if you have your redo logs on the same disk as your data files, or the archive logs on the same disk as your data files:
TTITLE ' Breakup of files across Disks / Check Recovery '
SELECT value
FROM v$parameter
WHERE name LIKE 'log_archive_dest'
UNION
SELECT name
FROM v$datafile
UNION
SELECT SUBSTR(member,1,55)
FROM v$logfile;
Another common occurrence is when the redo logs are multiplexed, with the first redos being placed away from the data files for performance reasons and the multiplexed redos being placed on the same disk as your most heavily hit data files. This causes an instant bottleneck.
Tuning Disk I/O Sort Area Size :
A sort area in memory is used to sort records before they are written out to disk. Increasing the size of this area by increasing the value of the INIT.ORA parameter, SORT_AREA_SIZE, lets you sort more efficiently. To allow a new value for SORT_AREA_SIZE to take effect, you must shut down the database and then restart it.
Another INIT.ORA parameter that has an effect on sort performance is COMPATIBLE, which avoids reading sorted data through the buffer cache. Both parameters can have a significant effect on performance.
Most online queries request sorts of only a handful of records at a time, so unless the size of your sort area is unusually small, the whole operation can usually be done in memory. However, in large batch jobs the size of the sort area becomes an issue. The problem is that most sites have a mixture of OLTP and batch processing, so you are faced with the dilemma of tuning for both. Oracle provides the SORT_AREA_RETAINED_SIZE parameter to assist you with allocating smaller amounts of memory for sorting to all users. If required, an individual user's memory allocation for sorting can be increased to the number of bytes specified by the SORT_AREA_SIZE parameter.
If the data being sorted does not fit in memory, Oracle must sort it in small runs. As each run is completed, Oracle stores the data in temporary segments on disk. After all of the runs have completed, Oracle merges the data to produce the sorted data. This is less efficient than doing the entire sort in memory.
In general, try to allocate as much space in memory as possible for SORT_AREA_SIZE (assuming that your entire sort can fit into memory). Consider using a two megabyte SORT_AREA_SIZE when your sorted data exceeds 100 megabytes in size. Because SORT_AREA_SIZE is allocated per user, increasing this parameter can exhaust memory very quickly if a large number of users are logged on. You also need to make sure that the temporary segments to which the sort operation will write its output (if it runs out of memory) are large enough, with appropriately sized INITIAL and NEXT extents.
To find out whether sorting is affecting performance in your system, monitor the sorting disk activity in your system and then adjust accordingly. One good way to do this is to define a separate tablespace for temporary tables. By watching the I/O rate on the temporary tablespaces, you can detect how frequently the sort process failed to perform the entire sort in memory. You can monitor the V$SYSSTAT table to observe memory and disk activity.
Below is an example of querying the V$SYSSTAT table:
SELECT name, value
FROM v$sysstat
WHERE name IN ('sorts(memory)', 'sorts(disk)');
Name                          Value
----------------------------  -----
sorts(memory)                  1291
sorts(disk)                       2
The "sorts(memory)" statistic shows the total number of sort operations that could be performed completely within the sort buffer in memory, without using the temporary tablespace segments. The "sorts(disk)" statistic shows the number of sort operations that could not be performed in memory. (Note that this number does not represent the total number of times a temporary table extent was written by a sort process.) Out of a total of 1293 sorts, only two required disk usage.
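The same check can be automated: take the two statistics and compute the fraction of sorts that spilled to disk. A sketch using the example figures above (the 5% alert threshold is our assumption, not an Oracle guideline):

```python
def disk_sort_ratio(sorts_memory, sorts_disk):
    """Fraction of all sorts that could not be completed in memory."""
    total = sorts_memory + sorts_disk
    return sorts_disk / total if total else 0.0

ratio = disk_sort_ratio(1291, 2)   # figures from the V$SYSSTAT output above
needs_tuning = ratio > 0.05        # assumed threshold: more than 5% disk sorts
```

With 2 disk sorts out of 1293, the ratio is about 0.15%, comfortably below the threshold, which matches the text's conclusion that this system sorts well.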
You may not always realize that your program statements invoke a sort. Sorting is performed by the following statements:
- CREATE INDEX
- DISTINCT
- GROUP BY
- ORDER BY
- INTERSECT
- MINUS
- UNION
- Unindexed table joins
- Some correlated subqueries
Follow these suggestions if your monitoring shows that you have a sorting problem:
1. Check whether the sort is really necessary. Has an index been inadvertently overlooked? Can the SQL statement be structured more efficiently?
2. Increase the value of the SORT_AREA_SIZE parameter. Because this increase applies to all user processes, this is likely to consume a lot of memory. Make sure you don't increase the value of SORT_AREA_SIZE to the point where you have little free memory. The maximum allowable value is system-dependent.
3. Verify that your temporary tablespace extents are large enough. Make sure that you specify large enough extents (in the INITIAL and NEXT parameters on the CREATE statement) to allow SORT_AREA_SIZE of memory to be written to disk without having to allocate multiple extents. Make your temporary segments a minimum of SORT_AREA_SIZE + 1 block.
4. Let users who require larger sorts use a temporary tablespace with larger INITIAL and NEXT default tablespace storage parameters. This is a less likely, but possible, alternative which will help reduce the degree of dynamic extension. For example, during daily online transaction processing hours, set your default settings to:
ALTER TABLESPACE temp_tspace DEFAULT STORAGE
(INITIAL 260K NEXT 260K PCTINCREASE 0);
For overnight processing, you might set the default storage to:
ALTER TABLESPACE temp_tspace DEFAULT STORAGE
(INITIAL 5M NEXT 5M PCTINCREASE 0);
5. Set your INITIAL and NEXT extent sizes to one block plus a multiple of the sort area size. This will achieve minor improvements in response times. Assuming that you have a DB_BLOCK_SIZE of 4K and a SORT_AREA_SIZE of 64K, you may consider any of the following sizes or a higher size, depending on your requirements:
4K + (1 * 64K) = 68K
4K + (2 * 64K) = 132K
4K + (3 * 64K) = 196K
4K + (4 * 64K) = 260K
6. Make sure to use the SORT_AREA_RETAINED_SIZE. Oracle will restore the sort area available to user processes to the size specified in this parameter if it believes that the sort area data will not be referenced in the near future. This will save memory. If memory is tight, we highly recommend that you take advantage of this feature by setting your SORT_AREA_RETAINED_SIZE to half the SORT_AREA_SIZE. For example, you might set:
SORT_AREA_SIZE = 131072 (128K)
SORT_AREA_RETAINED_SIZE = 65536 (64K)
7. Use the TEMPORARY type when creating tablespaces. This tablespace type is tuned especially for sorting and will boost your performance. We recommend that you use it as your users' TEMPORARY tablespace.
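The extent-sizing rule in step 5 above (one block plus a multiple of the sort area size) is easy to tabulate. A sketch assuming the 4K DB_BLOCK_SIZE and 64K SORT_AREA_SIZE used in the example:

```python
def temp_extent_size_kb(db_block_kb, sort_area_kb, multiple):
    """INITIAL/NEXT extent size: one block plus a multiple of the sort area."""
    return db_block_kb + multiple * sort_area_kb

# Candidate extent sizes for multiples 1 through 4, in kilobytes.
sizes = [temp_extent_size_kb(4, 64, m) for m in range(1, 5)]
```

For this block and sort area size the candidates come out to 68K, 132K, 196K, and 260K; pick one of these (or a higher multiple), depending on your requirements.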
Tuning Disk I/O -- Chained Rows :
Disk chaining occurs when a row cannot physically fit into one Oracle block, so another block is required to store the remainder of the row. Chaining can cause serious performance problems and is especially prevalent at sites storing multimedia data or large binary objects (BLOBs). You should pay special attention to the DB_BLOCK_SIZE parameter when you create your database. Block sizes of 4 kilobytes or more are the norm, not the exception.
Migration of an Oracle row occurs when a row is updated in an Oracle block and the amount of free space in the block is not adequate to store all of the row's data. The row is migrated to another physical block in the table. The problem is that the indexes that refer to the migrated row are still pointing to the block where the row used to be, and hence the table reads are doubled. Note however that full table scans will scan blocks as they come and will perform the same number of reads whether the rows are migrated or not.
If a table has chaining problems, you can rebuild the table, specifying a larger value for the PCTFREE parameter. If the bulk of the rows currently in the table have already been updated to their full lengths, a lot of space will be wasted. The free space will be reserved for rows that will not expand any further. To eliminate this waste, you can create the table with a smaller PCTFREE parameter, load the existing data, and then run the ALTER command on the table with a larger PCTFREE.
The following query can be used to identify tables with chaining problems:
TTITLE 'Tables Experiencing Chaining'
SELECT owner, table_name,
NVL(chain_cnt,0) "Chained Rows"
from all_tables
WHERE owner NOT IN ('SYS', 'SYSTEM')
AND NVL(chain_cnt,0) > 0
ORDER BY owner, table_name;
The above query is useful only for tables that have been analyzed. Note the NVL function to replace a NULL with a zero; tables that have not been analyzed will appear to have no chained rows.
The following steps explain how to list all of the chained rows in any selected table:
1. Create a table named CHAINED_ROWS using the following script (taken from Oracle's utlchain.sql script):
CREATE TABLE chained_rows (
owner_name VARCHAR2(30),
table_name VARCHAR2(30),
cluster_name VARCHAR2(30),
partition_name VARCHAR2(30),
subpartition_name VARCHAR2(30),
head_rowid ROWID,
analyze_timestamp DATE
);
2. Issue the ANALYZE command to collect the necessary statistics; for example, for the ACCOUNT table used below:
ANALYZE TABLE account LIST CHAINED ROWS INTO chained_rows;
3. Query the CHAINED_ROWS table to see a full listing of all chained rows, as shown below:
SELECT * FROM CHAINED_ROWS
WHERE TABLE_NAME = 'ACCOUNT';
Sample Output:
Owner_Name  Table_Name  Cluster_Name  Head_Rowid          Timestamp
----------  ----------  ------------  ------------------  ---------
GURRY       ACCOUNT                   00000723.0012.0004  30-SEP-93
GURRY       ACCOUNT                   00000723.0007.0004  30-SEP-93
The following is an example of how to eliminate the chained rows, again using the ACCOUNT table:
CREATE TABLE chained_temp AS
SELECT * FROM account
WHERE ROWID IN (SELECT head_rowid
FROM chained_rows
WHERE table_name = 'ACCOUNT');

DELETE FROM account
WHERE ROWID IN (SELECT head_rowid
FROM chained_rows
WHERE table_name = 'ACCOUNT');

INSERT INTO account
SELECT * FROM chained_temp;
4. Drop the temporary table when you are convinced that everything has worked properly.
DROP TABLE CHAINED_TEMP;
5. Clean out the CHAINED_ROWS table:
DELETE FROM chained_rows
WHERE table_name = 'ACCOUNT';
Even when you analyze your tables without the LIST CHAINED ROWS option (i.e., ANALYZE TABLE account COMPUTE STATISTICS), the CHAIN_CNT column in ALL_TABLES is still populated with the number of chained and migrated rows.
NOTE: If you are using the rule-based optimizer and you have set OPTIMIZER_MODE to CHOOSE (the default), don't forget to remove the statistics from your tables and indexes afterward using ANALYZE TABLE account DELETE STATISTICS; otherwise, the presence of statistics will cause the cost-based optimizer to be used.
You can also obtain the overall number of chained and migrated rows read by your instance since startup time using the V$SYSSTAT table.
SELECT SUBSTR(name, 1, 30) "Parameter", value "Value"
FROM V$SYSSTAT
WHERE name = 'table fetch continued row';
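One common rule of thumb (our assumption here, not a figure from the text) is to compare 'table fetch continued row' against the total rows fetched, which you can approximate from the 'table fetch by rowid' and 'table scan rows gotten' statistics in V$SYSSTAT. A sketch with hypothetical figures:

```python
def continued_row_pct(continued, fetch_by_rowid, scan_rows_gotten):
    """Percentage of row fetches that needed a second block read
    because the row was chained or migrated."""
    total = fetch_by_rowid + scan_rows_gotten
    return 100.0 * continued / total if total else 0.0

# Hypothetical figures: 1,000 continued-row fetches out of ~2.1M rows read.
pct = continued_row_pct(1000, 2_000_000, 100_000)
```

If this percentage climbs over time, the tables flagged by the chaining query above are the first place to look.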
Tuning Disk I/O --- Archive Writer :
The archive writer process (ARCH) copies your after-image data from the redo logs to the archive logs. As Oracle databases are put under more stress by a large number of changes, many sites experience a serious archive bottleneck.
The easiest way to detect a problem with the ARCH process is to look for an I/O bottleneck on your redo log disks, or for messages in your alert file indicating a bottleneck in the archiver and mentioning that LGWR is being held up.
Follow the steps below to alleviate archive bottlenecks:
- Ping-pong your redo logs from disk to disk, with redo logs 1 and 3 on disk A and redo logs 2 and 4 on disk B. This allows the ARCH process to read from one disk while the LGWR writes to a separate disk. Archive logs should not be on the same disk as active data files or redo logs. In the case of multiplexed logs, try to multiplex so that no two members are on the same disk and they are "ping-ponged" so that while a log member is being written, another isn't being read for archiving at the same time on the same disk.
- Enlarge the INIT.ORA parameter LOG_ARCHIVE_BUFFER_SIZE to a value such as 256 or 512 kilobytes. The parameter has a default on most machines of about 150 operating system blocks, which is often inadequate for a heavily used production system.
- If you are archiving to tape, consider archiving to disk and then copying to tape in the background. You may also consider compressing your archive logs (though not the one currently being written) to save disk space if it is at a premium.
- Place your archive log files onto higher-speed disks.
- Add more redo logs; for example, have six redo logs, with logs 1, 3, and 5 on disk A and logs 2, 4, and 6 on disk B.
- Enlarge your redo logs. Note, however, that if you lose your instance, you may be off the air for quite some time while your current redo log is being read to ensure that the database is intact.
- Avoid performing hot backups at the same time as overnight jobs, in particular at the same time that batch updates are occurring. Entire blocks are written to your log buffer, redo logs, and, eventually, archive logs for the duration of the hot backup on the data file being written to. If you must have 24-hour uptime, coordinate your backups so that there is little activity on the data file being backed up.
- Add another archive writer using the ALTER SYSTEM ARCHIVE LOG ALL command.
Identifying Full Table Scans :
A full table scan occurs when every block is read from a table. Full table scans are often a preferred performance option in batch-style applications, such as decision support. We have seen some excellent run-time improvements in decision support systems that use the parallel query option, which relies on full table scans to operate. However, full table scans at an OLTP site during prime online usage times can create havoc with response times. Full table scans, even on small tables, can degrade response times because the small table drives the query, and this table is not always the most efficient access path.
The following query reports how many full table scans are taking place:
SELECT name, value
FROM v$sysstat
WHERE name LIKE '%table %'
ORDER BY name;
The values relating to the full table scans are:
- table scans (long tables): a scan of a table that has more than five database blocks
- table scans (short tables): a count of full table scans on tables of five or fewer blocks
If the number of long table scans is significant, there is a strong possibility that SQL statements in your application need tuning or indexes need to be added.
To get an appreciation of how many rows and blocks are being accessed on average for the long full table scans, use this calculation:
Average Long Table Scan Blocks
= (table scan blocks gotten - (short table scans * 5))
/ long table scans
= (3,540,450 - (160,618 * 5)) / 661
= (3,540,450 - (803,090)) / 661
= 4,141 blocks read per full table scan
In our example, 4141 average disk reads performed on an OLTP application 661 times in the space of a few short hours is not a healthy situation.
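The calculation above packages neatly into a small function; the figures below are the ones from the example.

```python
def avg_long_scan_blocks(scan_blocks_gotten, short_scans, long_scans,
                         short_table_blocks=5):
    """Average blocks read per long full table scan.

    Subtracts the blocks attributable to short scans (at most
    short_table_blocks each) and divides by the long-scan count."""
    return (scan_blocks_gotten - short_scans * short_table_blocks) / long_scans

avg = avg_long_scan_blocks(3_540_450, 160_618, 661)   # about 4,141 blocks
```

Run this against your own V$SYSSTAT figures periodically; a rising average is an early sign that a query plan has flipped to a long full table scan.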
If you can identify the users who are experiencing the full table scans, you can find out what they were running to cause these scans.
Module 4 :
Tuning Contention
Contention occurs when one or more of your user processes vies with another process for use of an Oracle or system resource. The fastest-growing contention problem is latch contention, particularly for the library cache, buffer cache, and log buffer. These growing contention problems have been caused by the increased use of symmetric multiprocessor (SMP) machines.
Database Writer
Rollback Segments
Redo Log Files
Buffer Cache Latches
Library Cache Latches
Multi-Threaded Server
Locking Contention
Tuning Contention --- Database Writer :
The database writer (DBWR) process handles all writes to the database. The DBWR process maintains two lists of buffers:
- The dirty list holds modified buffers that have not yet been written to disk.
- The least-recently-used (LRU) list holds free buffers, pinned buffers that are currently in use, and dirty buffers that have not yet been moved to the dirty list. You must be sure that free buffers are available in the buffer cache as needed.
When a user process requires a block that is currently in the cache, the process moves that buffer to the most-recently-used end of the LRU list. If the block is not in the cache, the process starts at the least-recently-used end of the LRU list and searches until it finds a free buffer or until _DB_BLOCK_MAX_SCAN_CNT buffers have been scanned. If the user process finds dirty buffers during the scan, it moves them to the dirty list.
The DBWR writes dirty buffers to disk under the following circumstances:
- A user process finds that there are _DB_BLOCK_WRITE_BATCH / 2 buffers in the dirty list.
- A user process scans _DB_BLOCK_MAX_SCAN_CNT buffers but doesn't find a free one.
- A timeout occurs (timeouts occur every 3 seconds).
- A checkpoint occurs, and LGWR gives DBWR a list of buffers to write.
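The first two triggers can be illustrated with a toy model of the LRU scan. This is a deliberate simplification for intuition, not Oracle's actual algorithm, and the parameter values merely stand in for the hidden _DB_BLOCK_WRITE_BATCH and _DB_BLOCK_MAX_SCAN_CNT parameters:

```python
WRITE_BATCH = 8      # stands in for _DB_BLOCK_WRITE_BATCH (assumed value)
MAX_SCAN_CNT = 30    # stands in for _DB_BLOCK_MAX_SCAN_CNT (assumed value)

def scan_for_free_buffer(lru):
    """lru: list of 'free'/'dirty' buffer states, coldest end first.
    Returns (found_free, dirty_list, dbwr_triggered)."""
    dirty = []
    for scanned, state in enumerate(lru, start=1):
        if state == 'free':
            # Found a free buffer; DBWR is still signalled if the
            # dirty list has reached half the write batch.
            return True, dirty, len(dirty) >= WRITE_BATCH // 2
        dirty.append(state)            # move the dirty buffer aside
        if scanned >= MAX_SCAN_CNT:    # scan limit hit: signal DBWR
            return False, dirty, True
    return False, dirty, True          # list exhausted: signal DBWR

found, dirty, trigger = scan_for_free_buffer(['dirty'] * 4 + ['free'])
```

Here the process scans past four dirty buffers before finding a free one, and because four is half the (assumed) write batch, DBWR is signalled to flush the dirty list.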
A lazy DBWR becomes most noticeable when a checkpoint occurs. The LGWR instructs the DBWR to write all dirty buffers to the data files, whether they have been committed or not. You will see a definite spike of poor performance when a checkpoint occurs during benchmarks. The more poorly tuned the DBWR is, the more severe the spike.
There are many methods of identifying a DBWR that is not writing frequently enough. The following statement detects contention in the buffer cache:
SELECT name, value
FROM V$SYSSTAT
WHERE name = 'free buffer waits';
The following are methods to reduce DBWR contention:
- Increase the INIT.ORA parameter DBWR_IO_SLAVES. If you are using DBWR_IO_SLAVES, do not set the parameter higher than two times the number of disk drives that hold database files being written to. Before setting this parameter, check with Oracle to ensure that the parameter will not create any problems on your machine.
NOTE: Regardless of whether there is DBWR contention, using additional DBWR_IO_SLAVES processes can improve performance, and we recommend that you take advantage of this parameter. We have found the response time improvements most evident when a checkpoint occurs and LGWR requests that the DBWR clear all dirty buffers from the buffer cache to the database.
- Tune disk I/O. Disk I/O load can be a critical factor for DBWR performance. If the DBWR is continually writing to one disk, or is competing with reads on the same disk it is trying to write to, the DBWR is forever waiting for disk I/O, and a huge backlog of data to be written (the dirty list) often builds up.
- Tune the buffer cache. An undersized buffer cache can be a major cause of database writer problems, because the DBWR is continually running in "panic mode," cleaning out dirty buffers to make way for new buffers being brought into the buffer cache.
- Increase checkpoint frequency, by lowering the LOG_CHECKPOINT_INTERVAL parameter below the size of the redo logs, by giving LOG_CHECKPOINT_TIMEOUT a value, or by making your redo logs smaller. More frequent checkpoints tend to cause more performance spikes, but each spike is smaller than the one associated with a larger, less frequent checkpoint.
Tuning Contention --- Rollback Segments :
Rollback segments are used by all kinds of transactions for rollback, read consistency, and recovery. Tuning problems can occur when transactions experience contention for rollback segments.
Rollback segments are usually stored in their own tablespace (other than the single SYSTEM rollback segment, which is in the SYSTEM tablespace) for two reasons. First, a tablespace cannot be taken offline if it contains even a single online rollback segment. Second, rollback segments grow and shrink as needed (assuming that the OPTIMAL setting is used), and, with dynamic table extension, this could result in severe tablespace fragmentation.
The following statement reports the overall number of rollback segment waits:
SELECT class, count
FROM v$waitstat
WHERE class IN ('undo header', 'undo block');
If the number of waits is greater than zero, you have had contention for your rollback segments.
Each rollback segment has a transaction table that controls the transactions accessing the rollback segment. Oracle documentation says that the transaction table has approximately 30 slots in the rollback segment header if your database has a 2-kilobyte block size.
The following query lists the number of waits on a slot in the transaction tables:
SELECT usn "Rollback Table", gets, waits, xacts "Active Transactions"
FROM V$ROLLSTAT;
Ideally, the number of waits should be zero, but in the real world this is not always achievable. Keep waits as close to zero as possible; at the very worst, waits should be no more than about 1 percent of gets.
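The gets-to-waits check is a one-liner per rollback segment; the 1 percent figure restates the guideline above, and the counts are hypothetical.

```python
def rollback_wait_pct(gets, waits):
    """Waits as a percentage of gets for one rollback segment."""
    return 100.0 * waits / gets if gets else 0.0

# A segment with 10,000 gets and 50 waits has a 0.5% wait rate,
# which is within the guideline of under 1%.
ok = rollback_wait_pct(10_000, 50) < 1.0
```

Applying this to each row of the V$ROLLSTAT query above tells you which segments are contended and therefore how many new segments to add.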
We recommend creating n new rollback segments, where n equals the number of rollback segments that have experienced waits. For example, if three rollback segments have experienced waits, you would add three rollback segments.
Large updates
Transactions performing large updates perform better with larger rollback segments because dynamic extension is minimized. Take advantage of the SET TRANSACTION statement:
SET TRANSACTION USE ROLLBACK SEGMENT rbs_large; -- name one of your larger rollback segments
Long-running jobs
Long-running overnight-style jobs should use very large rollback segments. It is common to swap rollback segments from small to large for overnight processing and back to small for daily processing.
Tuning Contention --- Redo Log Files :
The role of the redo logs is to protect your database against losing a disk, having the processor suddenly go off the air, or suffering any other kind of system failure. The system writes heavily to the redo logs. They contain all of the changes made to your tables, indexes, and rollback segments, as well as checkpoint and other administrative information that can be used to recover your database.
Some conditions that cause redo log contention are:
- Placing mirrored redo logs on disks that contain heavily used data files or operating system files.
- An undersized LOG_BUFFER parameter used with batch processing.
Oracle does not provide a means of monitoring redo disk I/Os, so you must use your operating system disk monitoring commands, such as sar -d 1 15, which monitors the disks 15 times at 1-second intervals on UNIX machines.
The following describes the INIT.ORA parameters that you can set to make the writing of redo log files as efficient as possible:
LOG_CHECKPOINT_INTERVAL
Number of new redo log file blocks needed to trigger a checkpoint. (These are operating system blocks, not Oracle blocks.) This parameter controls the frequency of checkpoints, which has a major impact on performance. Checkpoints occur regardless of archiving method. Each checkpoint forces all modified database buffers in the buffer cache to be written to the database, so older log files no longer need to be applied during instance recovery. Because database-processing overheads are incurred each time a checkpoint occurs, we recommend that you perform checkpoints only as each log file fills and a log file switch occurs, by setting LOG_CHECKPOINT_INTERVAL larger than the size of your redo log files. All of your redo logs should be the same size.
Forcing checkpoints within your redo log files reduces the time taken to do instance recovery, because the amount of work needed to roll forward is not as large. The same effect can be achieved by having very small redo log files, automatically forcing checkpoints every time each one fills. The ongoing performance of the database is a more important consideration than the time taken to perform a recovery of your database, which is a much rarer event.
We have found that having either four or six redo logs of 5 or 10 megabytes each works well for most sites. If you are performing huge once-off data loads into your database, consider increasing the size of your redos (perhaps to 500 megabytes). Other options that you may consider are to use the UNRECOVERABLE clause in your CREATE TABLE and CREATE INDEX statements. You may also consider taking a cold backup before processing and turning off archiving for the duration of the processing. You then perform your data loads, take a cold backup, and reenable archiving.
It is advisable to ping-pong your redo logs if you have archiving enabled. This means that redo logs 1, 3, and 5 will be on disk A, and redo logs 2, 4, and 6 will be on disk B. When a redo log switch occurs, having the next redo on an alternate disk allows you to read from one redo (e.g., redo 1), which is written to your archive logs, and write new changes from the log buffer to redo 2, which is located on another disk.
CHECKPOINT_PROCESS
Enables the CKPT process. When a checkpoint occurs, it forces the redo log writer process (LGWR) to update each data file in your database with the latest checkpoint information. This writing momentarily stops the LGWR process from performing its primary role of writing log entries to the redo logs. Set CHECKPOINT_PROCESS to TRUE to cause CKPT to handle the updating of the data files and prevent the LGWR process from being held up while it performs this task. This is especially important when you have many data files in your database.
LOG_CHECKPOINT_TIMEOUT
Specifies the frequency, in seconds, with which checkpoints will occur. We recommend that you leave this parameter at its default value of 0. That value forces the LOG_CHECKPOINT_INTERVAL parameter to be the deciding factor on the frequency of checkpoints. We also recommend that you set the LOG_CHECKPOINT_INTERVAL to a value higher than the size of your redo logs. This forces checkpoints to occur when a redo log fills and a redo log switch occurs.
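Whether LOG_CHECKPOINT_INTERVAL actually defers checkpoints to log switches depends on units: the interval is counted in operating-system blocks, while redo log sizes are usually quoted in bytes. A sketch of the comparison (the 512-byte OS block size and the sizes below are assumptions for illustration):

```python
def checkpoints_only_at_log_switch(interval_os_blocks, redo_log_bytes,
                                   os_block_bytes=512):
    """True if the checkpoint interval exceeds the redo log size,
    so checkpoints happen only when a log fills and switches."""
    return interval_os_blocks * os_block_bytes > redo_log_bytes

# A 10 MB redo log with an interval of 25,000 512-byte blocks
# (about 12.8 MB) checkpoints only at log switch.
deferred = checkpoints_only_at_log_switch(25_000, 10 * 1024 * 1024)
```

If this returns False for your settings, checkpoints will fire mid-log, which may be intentional if you are trading throughput for faster instance recovery.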
LOG_BUFFER
Number of physical bytes allocated to the redo log buffer in the SGA. Whenever a database buffer block is modified, the redo information is also written to the log buffer pool. Only the modified data are written by the LGWR to the redo log, not the entire database block, unless you are performing hot backups, in which case the entire block is written to the log buffer, redo logs, and archive logs. Whenever a commit occurs, only the changes within the redo buffer need to be written to disk. The actual database blocks can be written by the DBWR from the buffer cache to the data files at a later time.
If the redo log buffer is too small, LGWR will have to write to disk too frequently, causing a disk I/O bottleneck on the redo log disks and potentially a wait for a user process to have its changes placed into the log buffer. If many processes are accessing the log buffer, they will be forced to wait for the write to complete; the result is redo log contention.
You can reduce log file disk I/O overheads and redo log contention by increasing the value of the LOG_BUFFER parameter to at least 64 kilobytes on a busy OLTP database. Our benchmarks show that heavily used applications that have a lot of database modifications will often have their response times improved more than 50% by enlarging the LOG_BUFFER to 1 megabyte. Overnight jobs that perform heavy updates may also improve by increasing LOG_BUFFER even more.
WARNING: Do not enlarge your log buffer to the point at which you have little free memory available on your machine.
To optimize redo log file disk I/O, do the following:
1. Separate redo log files from database data files.
2. Place redo files on faster disks.
3. Increase the LOG_BUFFER parameter.
4. If you are multiplexing your redo logs using Oracle's redo group facility, be careful where you place your mirrored redos. You must ensure that they are not located on a heavily used data file disk. Ideally, each side of the redo should be on a disk of its own.
5. If you have archivelog enabled, ping-pong your redos.
Tuning Contention --- Buffer Cache Latches :
The buffer cache latches are used to allocate buffers on the LRU chain of the buffer cache; all user processes must acquire them to read data into the buffer cache. Sites that have many OLTP users combined with multiple CPUs are the worst hit. To identify buffer cache latch contention, run the following statement:
SELECT SUBSTR(name,1,25), gets, misses,
immediate_gets, immediate_misses
FROM V$LATCH
WHERE (misses > 0 OR immediate_misses > 0)
AND name LIKE 'cache bu%';
If your miss ratios (misses to gets, or immediate_misses to immediate_gets) are greater than 3 percent, do the following:
- Make sure you have DB_BLOCK_LRU_EXTENDED_STATISTICS turned off. The parameter is used to test the effect of increasing and decreasing the buffer cache. It can cause chronic latch contention, so use it sparingly.
- Increase the size of your buffer cache using the DB_BLOCK_BUFFERS parameter.
- Decrease the SPIN_COUNT parameter if you are running a UNIX system. You will need to check your Installation and User's Guide for details on this parameter. Decreasing the parameter can often relieve latch contention.
- Set DB_BLOCK_LRU_LATCHES to the number of CPUs on your machine for Oracle7.3 or later. This parameter is a huge help in avoiding buffer latch contention.
- Avoid the buffer cache whenever possible by setting parameters such as SORT_DIRECT_WRITES and COMPATIBLE.
- Try not to mix OLTP users with long-running batch and decision support users on the one instance.
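The 3 percent rule of thumb translates directly into miss ratios over the V$LATCH columns. A sketch with hypothetical latch counts:

```python
def latch_miss_ratios(gets, misses, immediate_gets, immediate_misses):
    """Return (willing-to-wait, immediate) latch miss ratios as percentages."""
    wtw = 100.0 * misses / gets if gets else 0.0
    imm = 100.0 * immediate_misses / immediate_gets if immediate_gets else 0.0
    return wtw, imm

# Hypothetical counts for a buffer cache latch.
wtw, imm = latch_miss_ratios(500_000, 25_000, 80_000, 400)
contended = wtw > 3.0 or imm > 3.0
```

Here the willing-to-wait miss ratio is 5 percent, over the threshold, so the remedies listed above would apply even though the immediate ratio is healthy.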
Tuning Contention --- Library Cache Latches :
The library cache latches serialize access to the shared SQL area in the shared pool. The most common cause of library cache latch contention is an application that does not use bind variables, forcing Oracle to parse many nearly identical SQL statements instead of sharing one cached cursor. To identify library cache latch contention, run the following statement:
SELECT SUBSTR(name,1,25), gets, misses,
immediate_gets, immediate_misses
FROM V$LATCH
WHERE (misses > 0 OR immediate_misses > 0)
AND name LIKE 'library%';
If your miss ratios are greater than 3 percent, do the following:
- Rewrite your applications to use bind variables rather than literal values, so that similar statements can share a single parsed cursor.
- Make sure your shared pool is adequately sized using the SHARED_POOL_SIZE parameter; an undersized shared pool forces cursors to be aged out and reparsed.
- Try not to mix OLTP users with long-running batch and decision support users on the one instance.
Tuning Contention --- Locking :
To avoid most locking problems, enable the transaction processing option (TPO), which allows you to use row-level locking. Make sure you set the INIT.ORA parameter ROW_LOCKING to ALWAYS in all database instances.
The most common cause of locking problems is a user who leaves a screen in the middle of an update and walks away without committing the changes. The only solution to this problem, however, is to train your users to commit all transactions and exit the screen back to the menu whenever they leave their terminals for any period of time.
Oracle provides several tools that can help you monitor the locks in your system:
- Use the UTLLOCKT.sql script.
- Be sure to run CATBLOCK.sql as part of your database installation; if you haven't, UTLLOCKT.sql won't function correctly. You can run CATBLOCK.sql at any time and then rerun UTLLOCKT.sql to obtain current locking information.
Other scripts that you can run to detect lock contention in your database are listed below.
Foreign key locking problems
Warning: Unless an index is placed on the child table on the columns that are used in the foreign key, a share lock occurs on the parent table for the duration of the insert, update, or delete on the child table.
The effect of a share lock is that all query users hoping to access the table have to wait until a single update user on the table completes his or her update. Update users cannot perform their update until all query users complete their queries against the table. The bottom line is that if the parent table is a volatile table, the share lock can cause the most incredible performance degradation.
The downside of the index on the child table is the cost of maintaining it: we have observed tables with as many as 20 indexes on them, and response times were severely damaged by maintaining the excessive number of indexes. Our advice to these sites has been to use foreign key constraints only on columns that have an index that can be used for other purposes (e.g., reporting) or that point to a nonvolatile reference table. Most tables have difficulty maintaining acceptable performance if they have more than 10 indexes on them.
You may wish to take the foreign keys offline during the day and put them online at night to report any errors into an exceptions table. You should do this when the parent table is not being accessed.
Current Contention
Oracle provides a means of performing a spot check on any contention currently occurring in your database. The information that you receive is a list of all contention happening "right now." It is not cumulative. Note that the results will fluctuate from one run to the next.
SELECT event, count(*)
FROM v$session_wait
GROUP BY event;
At one site with 2,500 users running through a transaction processor, we captured this query's output while a redo log file switch was taking place; that is when you will observe the most waits on your system. There is usually a response time pause of between 1 and 2 seconds for a log switch. If you have ongoing waits on the items listed below, you should investigate further:
- free buffer waits: DBWR not writing frequently enough; increase checkpoints
- latch free: any one of a number of latches is causing contention
- buffer busy waits: contention for a data, index, or rollback buffer
- db file sequential read: I/O contention caused by indexed lookups
- db file scattered read: I/O contention caused by full table scans
- db file parallel write: not checkpointing often enough
- undo segment extension: too much dynamic extension or shrinkage of rollback segments
- undo segment tx slot: not enough rollback segments
To determine which users are experiencing contention, you can run the following script:
SELECT ses.username || '('||sw.sid||')' users, event
FROM v$session ses, v$session_wait sw
WHERE ses.sid = sw.sid;
You can also determine the objects that your users are accessing when they are forced to wait for disk I/O. Note that the objects listed are not always the cause of the wait; but if an object continually appears on the list, there is a good chance that it is one of several causing a disk I/O bottleneck on the disk that stores it.
Tuning Sorts :
Oracle uses disk sorts and memory sorts. Disk accesses are slower than memory operations, so disk sorts can be a major performance hit. Memory sorts are done entirely within the process memory area preallocated for sorting; they are cheap in terms of resources and should be your goal.
This topic addresses:
Disk Sorts
Memory Sorts
Sorts And The Temporary Tablespace
Temporary Tablespace Storage Considerations
Controlling Sorts
Disk Sorts
If a sort exceeds the size of the SORT_AREA_SIZE initialization parameter, it will require a disk sort. Because sorts are always performed in the user's temporary tablespace, the database administrator (DBA) must be sure that no user has the SYSTEM tablespace assigned as a temporary tablespace. One exception is sorts performed during index creation; these are done in the target tablespace for the index. Be especially aware of index creations on parallel tables, as these involve multiple initial extents that are then consolidated into the final set of extents. If the value of SORT_AREA_SIZE is greater than 10 times DB_BLOCK_SIZE and SORT_DIRECT_WRITES is set to AUTO, memory from the sort area is allocated to perform direct writes to disk.
IMPORTANT NOTE: When a user is created, if the default and temporary tablespaces are not explicitly assigned, the assignment will default to the SYSTEM tablespace. Always explicitly assign users their default and temporary tablespaces. Fragmentation and excessive SYSTEM tablespace I/O are prime indicators that a user's temporary tablespace assignment has been set to SYSTEM.
In environments rich in memory and disk space, you might want to do disk sorts rather than tie up memory resources.
Forced disk sorting is accomplished by setting the following initialization parameters correctly.
- SORT_DIRECT_WRITES: Turns on forced disk sorts when set to TRUE; its default setting is AUTO. This controls whether sort writes bypass the buffer cache. Bypassing the buffer cache means that sort speed can be increased by a factor of 6 or more.
- SORT_WRITE_BUFFERS: The number of sort direct write buffers; defaults to 2 and can be increased to a maximum of 8 to improve sort speed. Each process has its own set of sort write buffers.
- SORT_WRITE_BUFFER_SIZE: The size of each sort write buffer, initially 32,768 bytes; increase as needed up to 65,536 bytes to improve sort speed.
- SORT_SPACEMAP_SIZE: The size of the disk map used to track disk areas used for sorting; defaults to 512 bytes, which is usually enough unless you have extremely large sorts, such as during index builds. A suggested setting is (total sort bytes / SORT_AREA_SIZE) + 64, where total sort bytes equals (number of records) * (sum of column sizes + (2 * number of columns)).
- SORT_READ_FAC: The number of blocks read from disk in a single operation for a sort; defaults to 20, which is usually enough but can be increased if you do numerous large sorts. The calculation for this parameter is (avg_seek_time + avg_latency + blk_transfer_time) / blk_transfer_time.
Therefore, if you have abundant memory and temporary disk space on your system, using SORT_DIRECT_WRITES to bypass the buffer cache and to write directly to disk for large sorts (through sort buffers defined for each process) can increase the speed of large sorts by up to 6 times. Because we are talking about increasing disk I/O load by using SORT_DIRECT_WRITES, I/O should be properly tuned for your system as well.
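Put together, the direct-write sort settings described above might look like the following INIT.ORA fragment. The values are illustrative only, not recommendations for every system:

```
# Bypass the buffer cache for large sorts
SORT_DIRECT_WRITES     = TRUE
SORT_WRITE_BUFFERS     = 4       # default 2, maximum 8
SORT_WRITE_BUFFER_SIZE = 65536   # default 32768, maximum 65536
```

Remember that each process gets its own set of write buffers, so the memory cost is (buffers * buffer size) per sorting process.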
Memory Sorts
Ideally, all sorts should be done in memory because memory provides the fastest data sorting and access capabilities. Doing all sorts in memory, except for a few small databases, is not a very realistic possibility. However, we can tune sorts using the SORT_AREA_SIZE and SORT_AREA_RETAINED_SIZE initialization parameters to minimize disk sorts and to reduce memory requirements for sorts.
By monitoring the V_$SYSSTAT table through use of a query similar to:
SELECT name, value FROM v$sysstat WHERE name like '%sort%';
we can determine how many sorts are being done to disk and how many in memory. For example, running the previous query against a 20GB production system with mixed online transaction processing (OLTP) and batch work, a 3MB SORT_AREA_SIZE, and a 2MB SORT_AREA_RETAINED_SIZE produced the following results:
SQL> SELECT name,value FROM v$sysstat WHERE name LIKE '%sort%';
NAME                                                                  VALUE
---------------------------------------------------------------- ---------
sorts (memory)                                                      234126
sorts (disk)                                                            30
sorts (rows)                                                      75234036
In this example, we are seeing 0.013 percent of sorts going to disk (not a bad ratio). The 0.013 percent ratio is especially good when you realize that several large batch jobs (e.g., snapshot builds and table analysis operations) were run since the last time this database was restarted. Your goal should be to reduce disk sorts to less than 10 percent, and generally less than 1 percent is attainable.
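The disk-sort percentage quoted above can be computed directly from V$SYSSTAT rather than by hand; a sketch:

```sql
-- Percentage of sorts going to disk; aim for under 10%, ideally under 1%.
SELECT d.value disk_sorts,
       m.value memory_sorts,
       ROUND(100 * d.value / (d.value + m.value), 3) pct_disk
FROM   v$sysstat d, v$sysstat m
WHERE  d.name = 'sorts (disk)'
AND    m.name = 'sorts (memory)';
```

Because V$SYSSTAT is cumulative since startup, compare snapshots taken at intervals if you want the ratio for a specific workload window.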
Some general guidelines for setting SORT_AREA_SIZE and SORT_AREA_RETAINED_SIZE follow:
SORT_AREA_SIZE The default setting usually is insufficient, except in the most minimal of OLTP environments. If you can quantify the average sort done on your system, set this parameter to that value or a multiple of that value; otherwise, monitor the sort statistics in V$SYSSTAT and adjust this parameter up until disk sorts are less than 10 percent of total sorts as a minimum. For non-multithreaded-server (non-MTS) systems, this memory is taken from the user's process global area (PGA). For systems using MTS, this memory is taken from the shared pool area of the system global area (SGA), which is referred to as the user global area (UGA). If a sort is larger than SORT_AREA_SIZE, it is broken into SORT_AREA_SIZE chunks and done on disk.
SORT_AREA_RETAINED_SIZE This parameter sets the initial size of the sort area allocated per user. In memory-poor environments, set this parameter to a fraction of the value of SORT_AREA_SIZE. Users will be allocated SORT_AREA_RETAINED_SIZE for initial sorting and will grow to SORT_AREA_SIZE. Any memory in addition to the SORT_AREA_RETAINED_SIZE that is allocated for a large sort will be returned to the user's UGA after the sort.
Sorts And The Temporary Tablespace
All disk sorts are done in the user's assigned temporary tablespace. If default and temporary tablespaces are not assigned when a user is created, both default to the SYSTEM tablespace. All users, including SYS and SYSTEM, should have their temporary tablespace reassigned to one other than SYSTEM. Sorts that are allowed to occur in the SYSTEM tablespace can cause excessive fragmentation of it, and allowing large sorts there could hang the database if the SYSTEM tablespace runs out of space. In addition, the default storage parameters of the SYSTEM tablespace are not optimized for sorting, which results in inefficient sort operations.
Temporary Tablespace Storage Considerations
Temporary tablespaces exist entirely to support sorting operations. The temporary tablespace default storage clause must be based directly on how you have designated sorting to be done on your system.
In all systems, even where direct writes are being done, the INITIAL and NEXT values for the temporary tablespace default storage clause should be the same and should equal the value of the SORT_AREA_SIZE initialization parameter. The PCTINCREASE default storage parameter for a temporary tablespace should always be set to zero. Setting PCTINCREASE to zero for a temporary tablespace does two things. First, it prevents automatic coalescing of unused extents. Second, it ensures that the extent size always matches SORT_AREA_SIZE and therefore wastes little space.
You don't want automatic coalescing of unused extents in a temporary tablespace: because PCTINCREASE is zero, any used extent returned to the free extent pool is already the proper size for the next sort operation that requires an extent, so it can be reused directly, reducing the time required to allocate extents.
If your system is using Oracle7.3.x or above, use the TEMPORARY clause for creating a temporary tablespace. A tablespace designated as TEMPORARY will not allow the creation of nontemporary segments (such as tables, indexes, and clusters) and is optimized for sorting. Note that the COMPATIBLE initialization parameter must be set to your Oracle release (such as 7.3.3) to make full use of new features and capabilities. The actual sort segment in a tablespace designated as TEMPORARY is not created until the first sort request is issued.
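Putting the preceding recommendations together, a temporary tablespace for a system running a 3MB SORT_AREA_SIZE might be created as follows. The datafile path and sizes are hypothetical:

```sql
-- INITIAL/NEXT match SORT_AREA_SIZE; PCTINCREASE 0 prevents automatic
-- coalescing; TEMPORARY (Oracle 7.3+) forbids permanent segments here.
CREATE TABLESPACE temp
  DATAFILE '/u02/oradata/PROD/temp01.dbf' SIZE 500M
  DEFAULT STORAGE (INITIAL 3M NEXT 3M PCTINCREASE 0)
  TEMPORARY;
```

If you later change SORT_AREA_SIZE, remember to adjust the tablespace's default INITIAL and NEXT values to match.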
The extents in a temporary tablespace are mapped into a sort extent pool (SEP) when the first sort operation occurs against that tablespace. Once a tablespace is mapped into a SEP, subsequent sorts will use the SEP to locate available sort extents. The V$SORT_SEGMENT dynamic performance view shows information about sort extents in the SEP; this view was new in version 7.3 and is not present in earlier Oracle releases.
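A quick way to see how the SEP is being used is to query V$SORT_SEGMENT; the columns shown are as documented from 7.3 onward:

```sql
-- One row per TEMPORARY tablespace with an active sort segment.
SELECT tablespace_name, total_extents, used_extents, free_extents
FROM   v$sort_segment;
```

A steadily growing TOTAL_EXTENTS figure suggests sorts larger than the extent sizing anticipated.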
The use of SORT_DIRECT_WRITES can increase the required temporary tablespace disk space. Therefore, when going from a system where SORT_DIRECT_WRITES is not configured to one where it is used, be sure to allow for more temporary tablespace storage space.
Controlling Sorts
By properly tuning SQL statements and understanding which type of operations produce which sorts, you can reduce the number and size of sort operations and therefore allow more sorts to be performed in memory.
The following operations always require a sort:
- SELECT DISTINCT
- SELECT UNIQUE
- SELECT .... ORDER BY ...
- SELECT .... GROUP BY ...
- CREATE INDEX
- CREATE TABLE .... AS SELECT with primary key specification
- Use of INTERSECT, MINUS, and UNION set operators
- Unindexed table joins
- Some correlated subqueries
Proper indexing often can reduce sorts. If a SELECT uses an index, it will normally return the values in indexed order. By creating an index on the column by which you want the query to be ordered, you can eliminate the need for an ORDER BY clause. Avoid using SELECT DISTINCT or SELECT UNIQUE unless absolutely necessary. An example of an unneeded DISTINCT or UNIQUE might be a SELECT that includes the primary key value as, by definition, a row including a primary key will always be distinct or unique.
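As a sketch of the indexing approach (the EMP table and its columns here are hypothetical), creating an index on the ordering column allows the optimizer to return rows in index order and skip a separate sort step:

```sql
CREATE INDEX emp_last_name_idx ON emp (last_name);

-- With the index in place, the optimizer can satisfy the ordering from
-- the index. The NOT NULL predicate helps it choose the index, since
-- NULL keys are not stored in single-column B-tree indexes.
SELECT last_name, salary
FROM   emp
WHERE  last_name IS NOT NULL
ORDER  BY last_name;
```

Verify with EXPLAIN PLAN that the SORT (ORDER BY) step has actually disappeared; the optimizer may still prefer a full scan plus sort if it estimates that to be cheaper.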
Module 5 :
Tuning Long Running Jobs
Before trying any of the following tuning suggestions, be sure that you first tune your database and SQL statements.
Correct Object Sizing
Exploiting Array Processing
Optimizing INIT.ORA Parameters
Tuning the I/O
Alternative DBA Tuning
Improving Index Creation Performance
Inline Functions
Utilizing Concurrent Cursors
Correct Object Sizing
The accuracy with which you assign table and index sizes can contribute a great deal to the performance of long-running jobs. If a table or index fits in one large contiguous extent, Oracle could read the entire object with a single multiblock read. If the table or index consists of many small extents, scattered all over the disk (as the result of dynamic extension), the many disk accesses will slow down overall processing.
Note: OLTP applications generally don't see large performance degradation when objects begin to fragment. Because small amounts of data are randomly accessed by many simultaneous users, contiguous data won't make that much difference in such environments.
Many DBAs fail to thoroughly investigate the optimal settings of the PCTFREE and PCTUSED parameters when creating database objects. Ignoring the problem and opting for the database defaults are all too common.
- The PCTFREE parameter specifies the percentage of space in each of a segment's data blocks reserved for future record expansion. Records can be inserted into a data block only until this threshold is reached; after that point, no new rows can be inserted into the block until its used space falls below the PCTUSED threshold. The default for PCTFREE is 10%.
- The PCTUSED parameter defines the minimum percentage of data block used space necessary before the block is again eligible for row insertion. A segment block is added to the free space list once its used space falls below this threshold. The default for PCTUSED is 40%.
You might set these parameters as follows:
CREATE TABLE emp (......) PCTFREE 5 PCTUSED 80;
The sum of PCTFREE and PCTUSED must be less than 100. Correct choice of these parameters can improve the efficiency of table and index segments. For example, tables that are insert-only (auditing tables, history tables, etc.) should have a PCTFREE setting of 1. This simple change to a table's storage definition can reduce its total disk space requirements by roughly 10%: there is less object extension, each data block read or written carries about 10% more data, and about 10% more data can be held in the database's SGA. This is a very simple way of increasing the effective capacity of your block buffer cache without having to buy any more memory.
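For example, an insert-only history table might be created with minimal reserved free space. The table and columns here are hypothetical:

```sql
-- Rows are inserted once and never updated, so almost no space is
-- reserved for row growth (PCTFREE 1); blocks are packed nearly full
-- before leaving the free list (PCTFREE + PCTUSED must stay below 100).
CREATE TABLE order_history (
  order_id    NUMBER,
  status      VARCHAR2(10),
  changed_on  DATE
) PCTFREE 1 PCTUSED 80;
```

The same reasoning applies to audit trails and append-only log tables; reserve extra free space only for tables whose rows genuinely grow after insertion.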
Optimizing Init.ora Parameters :
You'll typically run long-running jobs outside normal business hours, when fewer user processes are logged on to the system. At such times, valuable machine resources, such as real memory, can be put to better use: memory that would normally support a multitude of user connections can be allocated to the database itself. The following are a few of the more important INIT.ORA parameters to consider resetting when you run long-running jobs.
DB_BLOCK_BUFFERS
The buffer cache holds database data blocks for tables, indexes, rollback segments, and clusters. The larger the buffer cache, the more application data can be cached in memory, and the faster long-running jobs will execute. By holding more data in memory, you avoid expensive disk I/O that can adversely affect performance. Once data is read into the SGA, it becomes accessible to other processes, alleviating the need to read it again from disk.
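For a dedicated batch window, you might temporarily raise the cache along these lines; the figure is illustrative, and each buffer consumes DB_BLOCK_SIZE bytes of SGA:

```
# 20000 buffers at an 8K block size is roughly 160MB of block cache;
# size this against the real memory freed up by absent OLTP sessions.
DB_BLOCK_BUFFERS = 20000
```

Restore the normal value before the OLTP day begins, or the enlarged SGA plus a full complement of user processes may push the machine into swapping.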
DBWR_IO_SLAVES, LIST_IO, and DISK_ASYNCH_IO
Oracle uses a database writer (DBWR) process to write all database changes to disk. By default, this process uses synchronous disk writes and can perform only one write at a time. During heavy batch updating periods, this single path to the disk(s) can become a bottleneck, which you can reduce by increasing the output bandwidth of the database. Increasing the overall speed and total throughput of your database matters for normal day-to-day processing but is absolutely paramount during batch processing and other periods of very high I/O. A few minutes spent investigating alternative I/O strategies can mean the difference between a heavy update program taking 1 hour and taking 4 hours to complete.
Oracle provides three different methods to help increase DBWR output. You can only use one, or sometimes two, of these features on any one platform, depending on your hardware, the version of Oracle, the operating system level, and the OS kernel patches you currently have loaded. Consult your Hardware Installation Guide for more details.
- DBWR_IO_SLAVES (prior to version 8.0, DB_WRITERS): The number of DBWR slave processes started. If this parameter is used, set it to the number of disk drives used by the Oracle datafiles. Using it increases the need for system semaphores and requires an increase in the PROCESSES parameter.
- LIST_IO: Causes the database to queue many I/O requests into a list and process the list as a single I/O request. This provides nonblocking disk writes, allowing a program to continue executing after having queued its I/O. Most SVR4 UNIX releases support list I/O functionality.
- DISK_ASYNCH_IO (prior to release 8.0, ASYNCH_WRITE): Permits a program to continue executing without waiting for an I/O write to complete; DBWR can keep writing modified database blocks to disk without waiting for each I/O operation to finish. You should use asynchronous I/O if it is available on your platform.
DB_FILE_MULTIBLOCK_READ_COUNT
Long-running jobs tend to perform many times more database I/O than your normal OLTP activity. This I/O is often made up of full table scans, data aggregation, and data sorting. Increasing the DB_FILE_MULTIBLOCK_READ_COUNT forces the database to read more data blocks when performing a sequential data scan (full table scan). Setting this parameter to a higher value for full table scans and long-running batch routines can help to reduce program times.
When altering this parameter, do not increase it beyond the physical I/O block size of your disk and/or controller. If you do, you will be performing several physical disk I/Os for every single logical database I/O and in this way may actually degrade overall performance.
SORT_AREA_SIZE and SORT_AREA_RETAINED_SIZE
By setting the INIT.ORA parameter SORT_AREA_SIZE, you specify the size of each process's sort area in memory. The Oracle RDBMS tries to perform the entire sort or sort/merge operation in this area of memory. If the area fills up, the system writes data out to temporary segments in the temporary tablespace. To prevent sorting from degrading performance, especially for long-running jobs, you need to set the sizes of SORT_AREA_SIZE and the temporary segments to the largest sizes you can.
If you are fortunate enough to be able to configure your database for batch-only periods, you should increase the SORT_AREA_RETAINED_SIZE parameter to equal SORT_AREA_SIZE. Each process will then be allocated its maximum sorting quota at connection time and will retain it for the duration of the session. If SORT_AREA_RETAINED_SIZE is lower than SORT_AREA_SIZE, Oracle will automatically release a process's sort memory when it believes the sort area will not be referenced in the near future; that behavior is beneficial only for OLTP applications with many concurrent sessions sporadically performing large transaction sorts.
If you suspect that sorting is slowing down processing in your system (a likely risk with long-running jobs), follow these tuning steps:
1. Before you incur any sort overhead, ask some basic questions:
Is this sort really necessary? Has an index been inadvertently overlooked? Can a SQL statement be structured more efficiently?
2. Increase the value of the SORT_AREA_SIZE parameter. The maximum allowable value is system dependent.
3. Increase the value of the SORT_AREA_RETAINED_SIZE parameter to equal SORT_AREA_SIZE. This stops Oracle from automatically shrinking a user process's sorting memory back to an unacceptable level.
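The last two steps might appear in a batch-period INIT.ORA as follows; the 10MB figure is illustrative and should be sized against available real memory and the number of concurrent batch sessions:

```
SORT_AREA_SIZE          = 10485760   # 10MB per-process sort area
SORT_AREA_RETAINED_SIZE = 10485760   # retain the full quota for the session
```

Re-run the V$SYSSTAT sort query after the change to confirm that the disk-sort percentage has actually fallen.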
LOG_BUFFER
This parameter governs the amount of memory allocated to the redo log buffer within the SGA. All database changes are buffered in this area before being written to the redo logs. Increasing the size of the redo log buffer can produce surprisingly good performance benefits, especially for large, long-running update tasks. If the log buffer is too small, contention will occur and updating tasks will have to wait while the buffer is continually flushed to disk. A log buffer of between 512 kilobytes and 1 megabyte is not unreasonable for batch-only processing periods.
LOG_CHECKPOINT_INTERVAL
This parameter specifies the frequency of checkpoints in terms of the number of redo log file blocks that can exist between an incremental checkpoint and the last block written to the redo log. This number refers to physical operating system blocks, not database blocks.
Regardless of this value, a checkpoint always occurs when switching from one online redo log file to another. Therefore, if the value exceeds the actual redo log file size, checkpoints occur only when switching logs. Checkpoint frequency is one of the factors that influence the time required for the database to recover from an unexpected failure.
Specifying a value of 0 (zero) for LOG_CHECKPOINT_INTERVAL has the same effect as setting the parameter to infinity and causes the parameter to be ignored. Only nonzero values of this parameter are considered meaningful.
Recovery I/O can also be limited by setting the FAST_START_IO_TARGET or LOG_CHECKPOINT_TIMEOUT parameter or by the size specified for the smallest redo log. For information on which mechanism is controlling checkpointing behavior, query the V$INSTANCE_RECOVERY view.
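A sketch of such a query follows; the columns shown are as documented for V$INSTANCE_RECOVERY in Oracle8i, so check your release's reference before relying on them:

```sql
-- The smallest of the *_REDO_BLKS limits is the mechanism currently
-- constraining checkpoint behavior.
SELECT target_redo_blks,
       log_file_size_redo_blks,
       log_chkpt_timeout_redo_blks,
       log_chkpt_interval_redo_blks
FROM   v$instance_recovery;
```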
DBAs should try to reduce the number and frequency of database checkpoints, whether they are operating in OLTP or in batch mode. Checkpoints force all modified database blocks (whether or not their transactions have committed) to be written to disk, and also update all control and data file headers with the latest checkpoint number. Many large databases have 50 or more data files, causing checkpoints to take several seconds (or even minutes).
Large, long-running updating jobs generate more changes, causing more checkpoints to fire. Setting the LOG_CHECKPOINT_TIMEOUT to zero (its default) and setting LOG_CHECKPOINT_INTERVAL to a very large number will result in checkpoints being generated only on a redo log switch.
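The checkpoint-on-log-switch-only configuration described above might look like this; the interval value is illustrative and simply needs to exceed the size of your largest redo log in operating system blocks:

```
LOG_CHECKPOINT_TIMEOUT  = 0          # disable time-based checkpoints
LOG_CHECKPOINT_INTERVAL = 100000000  # larger than any redo log, so
                                     # only log switches trigger checkpoints
```

Weigh this against recovery time: fewer checkpoints mean more redo to apply after a crash, which is why the later discussion cautions against a zero timeout in general-purpose operation.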
LOG_CHECKPOINT_TIMEOUT
This parameter specifies the maximum amount of time, in seconds, that can pass between the incremental checkpoint position and the last write to the redo log (sometimes called the tail of the log). It also guarantees that no buffer will remain dirty in the cache for more than that number of seconds.
Specifying a value of 0 for the timeout disables time-based checkpoints. Hence, setting the value to 0 is not recommended.
A checkpoint scheduled to occur because of this parameter is delayed until the completion of the previous checkpoint if the previous checkpoint has not yet completed.
LOG_ARCHIVE_BUFFERS and LOG_ARCHIVE_BUFFER_SIZE
Jobs that generate large numbers of database changes in a short period of time may experience archive log contention: changes are written to the online redo logs faster than they can be copied to the archive disk. This can produce a "log writer unable to proceed" wait, which in turn forces your application jobs to wait.
Increasing the values of both LOG_ARCHIVE_BUFFERS and LOG_ARCHIVE_BUFFER_SIZE will help to avoid this bottleneck and will contribute to the faster running of your application batch routines.
SMALL_TABLE_THRESHOLD
This parameter controls the number of blocks from one table that will be stored at the most-recently-used end of the buffer cache before the remainder of blocks are held at the least-recently-used end of the list. If your batch processing is repeatedly performing full table scans for the same table(s) over and over again and that table is of moderate size, increasing this parameter can mean that it will be retained within the Oracle buffer cache for a longer period of time.
PROCESSES
Reducing the number of processes (the PROCESSES parameter) in the INIT.ORA file limits the maximum number of concurrent database connections. This has very little effect on overall memory usage but helps to prevent mistakes. When the database is in batch mode, we may be allocating more physical memory to each process than usual. Maximizing the SORT_AREA_SIZE, increasing the DB_BLOCK_BUFFERS, and the like, all lead to higher physical memory usage. Lowering the PROCESSES parameter to just enough to handle the batch requirements prevents normal users from mistakenly logging on to the database and wasting large amounts of valuable memory resources. It also immediately highlights the fact that the database may still be mistakenly in batch mode before too many users are able to connect to the database and force the machine into uncontrollable swapping.
Improving Index Creation Performance :