AUDITING AND TUNING ORACLE APPLICATIONS- TIPS AND TRICKS
Summary
Application performance depends not only on the database component but on many other factors: utilization of shared resources such as CPU, I/O and memory, client application configuration, the type of application, and so on. This article briefly defines an audit and tuning methodology for improving the performance of Oracle-based applications. It also discusses rolling out newer enhancements such as parallel features, bitmapped indexes and PL/SQL functionality with a view to improving application performance. The paper is intended to help the audience prepare a Performance Audit and Recommendation Plan for ensuring that their Oracle-based applications perform well and make optimal use of available resources and features.
Introduction
This auditing and tuning methodology identifies a plan which helps to
· Analyze the performance problem to identify various factors contributing to it like application layer, database layer, operating system layer etc.
· Establish guidelines for application and database tuning
· Benchmark and maintain a set of features and recommend using them at appropriate situations like parallel queries, PL/SQL features, bitmapped indexes etc.
The performance audit methodology is a recursive, on-going process. Periodically revisit all the steps and make appropriate changes and additions so that they reflect current technology trends. It helps to maintain a collection of tuning tips and techniques and to apply them in appropriate situations. This paper summarizes this performance audit and tuning methodology.
Performance Audit and Tuning Strategy
The application of the audit and tuning methodology differs depending on whether the application is custom-built or purchased (Oracle Financials, Manufacturing, PeopleSoft etc.). With purchased applications you may spend comparatively more pre-implementation time and will probably have less control over the tuning process than with custom applications. The audit methodology also varies with the nature of the application, e.g. whether it is OLTP, DSS or batch, since data access patterns, the nature of transactions and SQL calls differ drastically.
Performance Audit Cycle
A typical performance audit cycle has four broad phases:
· Identify and define an objective and setup a baseline
· Monitor
· Analyze
· Tune
Tuning is a recursive process. No matter how much memory, disk and CPU is available, requirements can still outgrow availability. Audit and tuning should therefore be planned as an ongoing task. Auditing and tuning processes and policies can be laid out to standardize and deploy them across various application environments.
Performance Audit Layers
Several layers govern performance monitoring:
· Application Layer (Client Apps, third party tools, other apps)
· Database Layer (Oracle)
· System Layer (OS, network, client configuration, hardware)
Optimum performance involves tuning of all layers to work together.
The performance of an application is a factor of these components. Some of the factors that affect performance-monitoring activities include memory, hardware configuration, user load and number of instances, tools/products used and database features used.
It is critical to examine each layer and understand its effect on overall performance. It often helps to tune the application first, before the other layers, since the benefits obtained by tuning the application far exceed the benefits obtained by tuning the other layers. How far this can be taken depends on the level of control over the application, availability of source code, trained resources etc.
Performance Audit Approach
The audit and tuning methodology can be broadly classified into two types:
(a) Overall performance bottleneck
(b) An identified process going slow (e.g. batch programs or invoice entry process etc.)
In case (a) the support group and users may have a general perception about the overall response of the system. This calls for collecting statistics for the database and operating system, sometimes performing an application audit, and analyzing the findings to identify causes.
In case (b) it may require tracing SQL statements and making other database modifications, or rolling out newer features to improve those programs.
Depending on the situations there could be small variations in this broad picture. The Audit and Tuning Methodology can be divided into the following sections:
Application
· Designing and coding applications to ensure optimum use of available resources.
Oracle
· Tracking use of memory, CPU, I/O and other system contentions
Operating System (We shall discuss UNIX in this article)
In the following sections we shall cover some of the tips for application and database tuning.
Application Tuning
1. Identify SQL statements taking a long time to execute. Also identify SQL statements involving joins of a large number of big tables and outer joins.
Trace SQL statements by setting SQL_TRACE=TRUE and TIMED_STATISTICS=TRUE in init.ora. For session-level tracing, use ALTER SESSION SET SQL_TRACE=TRUE. Also capture database statistics using utlbstat and utlestat to study database performance during the time the application is running. In Oracle 7.3, the cost-based optimizer also gathers information on the distribution of data values within a table. These statistics help the optimizer decide among candidate execution paths, and the one with the lowest COST is selected. Set OPTIMIZER_GOAL to FIRST_ROWS or ALL_ROWS as appropriate, and ensure that the tables are analyzed regularly using a batch process. Trace long-running queries using EXPLAIN PLAN and Oracle Trace.
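As an illustration, a session can be traced and the resulting trace file formatted with tkprof (the trace file name, output file name and the scott/tiger account are illustrative; TIMED_STATISTICS=TRUE should already be set in init.ora so that timings are recorded):
ALTER SESSION SET SQL_TRACE = TRUE;
-- run the application statements to be traced, then:
ALTER SESSION SET SQL_TRACE = FALSE;
From the operating system, format the trace file found under USER_DUMP_DEST:
tkprof ora_12345.trc trace_report.prf explain=scott/tiger sort=exeela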
Star queries are useful in many data mart applications, especially when joining one large fact table with many small dimension tables. Snowflake schemas are a more complex form of the star schema. Star queries require Cartesian product joins of the small tables and are not handled efficiently by the rule-based optimizer. Use the STAR hint to force the optimizer to use the star approach.
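A sketch of a star query using the STAR hint; the sales fact table and the product, customer and time dimension tables are hypothetical:
SELECT /*+ STAR */ p.product_name, c.customer_name, SUM(s.amount)
FROM sales s, product p, customer c, time t
WHERE s.product_id  = p.product_id
AND   s.customer_id = c.customer_id
AND   s.time_id     = t.time_id
AND   t.fiscal_year = 1997
GROUP BY p.product_name, c.customer_name;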
Limit the number of tables in a join. There are several instances when processing time can be reduced several-fold by breaking a SQL statement into smaller statements and writing a PL/SQL block to reduce database calls. Packages also reduce I/O since all related functions and procedures are cached together. Use the DBMS_SHARED_POOL package to pin a SQL or PL/SQL area. To pin a set of packages in the SQL area, start up the database and make a reference to the objects so that they are loaded, then use DBMS_SHARED_POOL.KEEP to pin them. Pinning prevents memory fragmentation. Pinning of packages is done when processes have varying memory requirements; it also helps to reserve memory for specific programs.
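For example, a frequently used package can be pinned after startup (the package name MY_APP.ORDER_PKG is illustrative; DBMS_SHARED_POOL is created by the dbmspool.sql script if it is not already installed):
BEGIN
  DBMS_SHARED_POOL.KEEP('MY_APP.ORDER_PKG');
END;
/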
Recursive calls should be kept to a minimum. Recursive calls are SQL statements that are triggered by Oracle itself. The Oracle trace utility tkprof provides information about recursive calls; this value should be taken into consideration when calculating the resource requirements of a process. Tkprof also reports library cache misses and the username of the user who executed the SQL statement. Tkprof-generated statistics can be stored in the table tkprof_table to be queried later.
The shared pool usage can be examined using the query
SELECT name, bytes FROM v$sgastat WHERE name IN
('db block buffers', 'log buffer', 'dictionary cache', 'sql area', 'library cache', 'free memory');
A high value of free memory could indicate that objects were aged out of the shared pool, which suggests a fragmented shared pool.
2. Tuning applications for parallel query
Queries can benefit from parallel query features if
· Combined CPU utilization (%usr + %sys) is well below 100%, leaving spare capacity
· There are multiple processors
· Adequate memory available
Most DSS queries require full table scans of large tables. Such queries can be improved by implementing the parallel query feature. If %usr + %sys is below about 50%, there is spare CPU capacity for additional query servers. Check the number of busy parallel query servers using:
SELECT * FROM v$pq_sysstat WHERE statistic='Servers Busy';
Observe CPU utilization using sar -u for this period of time and ensure no query is processing sequentially.
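As a sketch, a degree of parallelism can be requested for a full scan with hints (the sales table and the degree of 4 are illustrative):
SELECT /*+ FULL(s) PARALLEL(s, 4) */ COUNT(*)
FROM sales s;
Alternatively, a default degree can be set as a table attribute with ALTER TABLE sales PARALLEL (DEGREE 4);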
3. Tune SORTS
SQL statements using CREATE INDEX, GROUP BY, ORDER BY, DISTINCT, joins, UNION/MINUS/INTERSECT, etc. use the sort area. Set an appropriate SORT_AREA_SIZE: a large sort area reduces the number of merge passes Oracle needs to produce the final result, and fewer sort runs mean less disk I/O to temporary segments; the trade-off is higher memory utilization. Poorly written applications are hard to maintain, so it is best to trace and tune SQL statements before an application is rolled out to production. Of course, in spite of the audit and tuning process, business needs such as new modules, enhancements and requirement changes make tuning an ongoing process.
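The proportion of sorts spilling to disk can be checked from v$sysstat:
SELECT name, value FROM v$sysstat
WHERE name IN ('sorts (memory)', 'sorts (disk)');
If 'sorts (disk)' is a significant fraction of the total, consider increasing SORT_AREA_SIZE.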
4. Choose and use indexes appropriately
Indexes should have high selectivity. Bitmapped indexes improve performance when the indexed column has few distinct values (e.g. Male or Female). Bitmapped indexes are stored in sorted order in bitmap segments, one per distinct key value. The related init.ora parameters are CREATE_BITMAP_AREA_SIZE (the memory used for bitmap index creation) and BITMAP_MERGE_AREA_SIZE (the memory used for bitmap merges).
Parallel index creation can be used for large data loads. Oracle 7.3 also has an index rebuild feature.
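Illustrative statements (the table, column and index names are hypothetical):
CREATE BITMAP INDEX emp_gender_bix ON emp (gender);
CREATE INDEX ord_cust_idx ON ord (cust_id) PARALLEL (DEGREE 4);
ALTER INDEX ord_cust_idx REBUILD;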
5. Implement PL/SQL enhancements
PL/SQL offers a rich set of features and powerful enhancements. Version 2.3, available with Oracle 7.3, supports PL/SQL tables of records (avoiding multiple parallel scalar tables), the UTL_FILE package for operating on operating system files, and dynamic SQL and cursor variable functionality.
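A minimal sketch of a PL/SQL table of records, assuming the standard emp demo table:
DECLARE
  TYPE emp_tab_type IS TABLE OF emp%ROWTYPE INDEX BY BINARY_INTEGER;
  emp_tab emp_tab_type;
  i       BINARY_INTEGER := 0;
BEGIN
  FOR r IN (SELECT * FROM emp) LOOP
    i := i + 1;
    emp_tab(i) := r;   -- one record per row instead of parallel scalar tables
  END LOOP;
END;
/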
Oracle Tuning
Analyze and tune the three critical memory-related init.ora parameters: DB_BLOCK_SIZE, DB_BLOCK_BUFFERS and SHARED_POOL_SIZE.
6. Size DB_BLOCK_BUFFERS appropriately.
The buffer requirement for a given application can be determined quite accurately and DB_BLOCK_BUFFERS changed accordingly. Set the init.ora parameters DB_BLOCK_LRU_STATISTICS=TRUE and DB_BLOCK_LRU_EXTENDED_STATISTICS=n, where n is the proposed number of DB_BLOCK_BUFFERS.
Use the report generated by utlbstat/utlestat to calculate
((consistent gets + db block gets) - physical reads) / (consistent gets + db block gets) * 100%. If this value (the buffer cache hit ratio) is below about 80%, consider increasing DB_BLOCK_BUFFERS.
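The same ratio can also be computed directly from v$sysstat:
SELECT (1 - phy.value / (db.value + con.value)) * 100 "Cache Hit Ratio %"
FROM v$sysstat db, v$sysstat con, v$sysstat phy
WHERE db.name  = 'db block gets'
AND   con.name = 'consistent gets'
AND   phy.name = 'physical reads';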
7. Lock SGA in memory
The SGA should never be swapped; if it is, the system runs very slowly. Use sar -w and sar -q to obtain swap statistics for the system. Use ps -elf | grep oracle and look at the F (flags) field; a value of 0 indicates the process is being swapped.
8. Ensure minimal library cache reloads, cache misses
Use the query
SELECT SUM(reloads)/SUM(pins)*100 FROM v$librarycache;
If this value is greater than 1%, it indicates library cache misses; increase SHARED_POOL_SIZE and OPEN_CURSORS.
9. Ensure that there is enough memory for individual users.
Extrapolate the number of concurrent users and estimate the expected memory consumption. I have developed a resource consumption utility using PL/SQL which generates useful statistics such as the number of users at various times of the day, memory consumption, CPU consumed, I/O statistics etc. Many of the third-party performance monitoring tools can also be used for this.
10. Size data properly
Tables and indexes should be assigned proper storage parameters to avoid fragmentation, row chaining etc. Large tables should be stored in a small number of extents. Analyze tables to get statistics. These statistics provide important feedback about the storage parameters. If the objects are sized properly, they do not have to be reorganized very often.
It usually pays to estimate the growth of tables and the nature of operations and decide appropriate values for storage parameters like INITIAL, NEXT, PCTFREE, PCTUSED etc. Starting with Oracle 7.3 there is no limit on the number of extents. Analyze objects (tables and indexes) to study row size variances and decide on values for PCTFREE and PCTUSED; PCTFREE should be larger if rows are going to grow. The difference between PCTFREE and PCTUSED should be carefully decided. Calculate the average row size by summing VSIZE(column_name) over all columns and set PCTFREE and PCTUSED as appropriate percentages based on that average row size.
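A sketch of such a calculation, assuming a hypothetical orders table with columns order_no, order_date and amount:
SELECT AVG(NVL(VSIZE(order_no),0) + NVL(VSIZE(order_date),0) + NVL(VSIZE(amount),0)) avg_row_bytes
FROM orders;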
11. Make SQL statements use same cursor area.
If many users run similar SQL statements that differ only in the literal values of the WHERE clause, use bind variables (e.g. WHERE c = :bind_variable). Whitespace and letter case matter: statements must match exactly to share the same SQL area. Sharing the same SQL area improves library cache performance. Monitor the library cache using v$librarycache.
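For example (the orders table is hypothetical), the first two statements below each occupy a separate SQL area, while the third is shared by every user who issues it:
SELECT * FROM orders WHERE order_no = 1001;
SELECT * FROM orders WHERE order_no = 1002;
SELECT * FROM orders WHERE order_no = :order_no;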
12. Adjusting the CURSOR_SPACE_FOR_TIME
Adjusting the CURSOR_SPACE_FOR_TIME init.ora parameter helps speed up execution even when there are no significant library cache misses. This parameter governs when a shared SQL area may be deallocated from the cache to make room for a new statement. TRUE means a shared SQL area is deallocated only when all application cursors associated with its statement are closed. Before setting it to TRUE, make sure that you have enough memory available and that there are no cache misses on execution calls.
13. If memory and temporary space are abundant, set SORT_DIRECT_WRITES to increase sort performance.
Oracle allocates several large buffers in memory for direct I/O. Set SORT_WRITE_BUFFERS and SORT_WRITE_BUFFER_SIZE to control the number and size of the buffers. The oracle process performs sort writes directly to the disk bypassing the buffer cache.
Note that using sort direct writes causes each Oracle process that sorts to allocate SORT_WRITE_BUFFERS * SORT_WRITE_BUFFER_SIZE bytes of memory in addition to the memory already allocated to the sort area, so ensure that you have enough free memory available. Memory usage can be reduced by decreasing the sort area by the amount of memory allocated to direct writes, but this can affect the performance of disk sorts. Size the direct write buffers at roughly 8-10% of the sort area.
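An illustrative init.ora fragment (the values are examples only and must be sized against available memory):
SORT_AREA_SIZE         = 1572864   # 1.5 MB sort area per process
SORT_DIRECT_WRITES     = TRUE
SORT_WRITE_BUFFERS     = 2
SORT_WRITE_BUFFER_SIZE = 65536     # 2 x 64 KB, roughly 8% of the sort area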
14. Check the database write process for INSERT and UPDATE intensive applications
Ensure that redo log files are segregated onto a separate disk. Increase the number of DB_WRITERS (n to 2n, where n is the number of disks used, excluding those holding log files). Tune DB_WRITERS using the query
SELECT * FROM v$system_event
Look for the 'free buffer waits' event; if its count is greater than 0, increasing DB_WRITERS may help. The statistic 'dirty buffers inspected' should be close to 0 (it gives the number of dirty buffers found by foreground processes scanning the LRU list). Also check that 'DBWR summed scan depth' is close to 'DBWR buffers scanned'; this measures whether DBWR is keeping up with the demand from foreground processes.
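A sketch of these checks:
SELECT event, total_waits FROM v$system_event WHERE event = 'free buffer waits';
SELECT name, value FROM v$sysstat
WHERE name IN ('dirty buffers inspected', 'DBWR buffers scanned', 'DBWR summed scan depth');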
15. Identify large disk requests.
Refer to the flow chart under the UNIX Tuning section.
16. Identify Hot files and move them to non-hot disks.
Assign weights to different tablespaces or datafiles based on the number of reads/writes on them, then ensure that many high-weight datafiles do not sit on the same disk. Spread datafiles over as many disks and controllers as possible. For parallel query implementations, to avoid I/O bottlenecks, stripe data over at least as many disks as the degree of parallelism.
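Per-file read/write counts can be obtained with a query such as:
SELECT df.name, fs.phyrds, fs.phywrts
FROM v$datafile df, v$filestat fs
WHERE df.file# = fs.file#
ORDER BY fs.phyrds + fs.phywrts DESC;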
17. Using Partition View feature available in Oracle 7.3
Using the Partition View feature available in Oracle 7.3, a large table can be partitioned into multiple small tables (partitions). These tables can then be accessed together through a view that uses UNION ALL. This allows indexing, data purges etc. to be done at the individual table level, thus improving performance. Consider partition views for very large tables that are frequently scanned by range, e.g. customer number, date/time etc. Partition views should not be updated directly.
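A minimal sketch of a partition view, using hypothetical yearly sales tables with CHECK constraints on the partitioning column so that the optimizer can skip partitions that cannot satisfy a query:
CREATE TABLE sales_1996 (sale_year NUMBER CHECK (sale_year = 1996), amount NUMBER);
CREATE TABLE sales_1997 (sale_year NUMBER CHECK (sale_year = 1997), amount NUMBER);
CREATE VIEW all_sales AS
SELECT * FROM sales_1996
UNION ALL
SELECT * FROM sales_1997;
A query such as SELECT SUM(amount) FROM all_sales WHERE sale_year = 1997 can then be restricted by the optimizer to sales_1997.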
18. Set the value of LOG_CHECKPOINT_INTERVAL
Set the value of LOG_CHECKPOINT_INTERVAL to be larger than the size of the largest redo log file, so that checkpoints occur only at log switches. Also set LOG_CHECKPOINT_TIMEOUT=0 to disable time-based checkpoints, and set CHECKPOINT_PROCESS=TRUE to enable the CKPT background process.
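An illustrative init.ora fragment (the interval assumes redo logs of roughly 50 MB on a system with 512-byte OS blocks; adjust to your own log size):
LOG_CHECKPOINT_INTERVAL = 200000    # OS blocks; larger than the largest redo log
LOG_CHECKPOINT_TIMEOUT  = 0         # disable time-based checkpoints
CHECKPOINT_PROCESS      = TRUE      # enable the CKPT background process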
19. Create the temporary tablespace as TEMPORARY, for example (the datafile name and size are illustrative):
CREATE TABLESPACE temp DATAFILE '/u01/oradata/temp01.dbf' SIZE 200M TEMPORARY;
An existing tablespace can be converted with ALTER TABLESPACE temp TEMPORARY. This feature is available starting with Oracle 7.3.
20. Checking overall performance
Check V$SESSION_WAIT, look for session waits and tune accordingly. V$SESSION_WAIT lists the events on which user and system sessions are currently waiting. Many times this is a quick way of identifying performance bottlenecks.
SELECT sid,event,wait_time FROM v$session_wait ORDER BY wait_time,event;
Analysis of this will lead to further investigation of the causes of waits.
Tuning UNIX for Oracle
This section discusses tips to identify contention at the UNIX level. The flow charts below provide a step-by-step method to analyze system performance and take corrective action. The UNIX statistics can be collected from utilities like sar, glance, perfview and other UNIX utilities that report process-wise resource consumption. I normally collect statistics over a period of time and focus on the trend rather than a particular point in time. The analysis at the UNIX level helps identify specific areas to be drilled down to at the Oracle or application level. The first flow chart shows the overall tuning picture. Flow chart (0) discusses analysis of CPU usage, (1) I/O bottlenecks, (2) monitoring memory and swap, (3) disk usage and (4) tuning UNIX-related Oracle parameters.
Tuning Network Traffic
Performance can be improved by minimizing the communication between the client application and the database layer. The overall goal is to package all SQL requests into as few calls as possible. The following Oracle Server features help reduce network traffic by reducing calls:
· Stored procedures
· Database triggers (triggers are stored in compiled form in Oracle 7.3)
· Integrity constraints
· Sequences
· Array processing
Writing PL/SQL-based code considerably reduces network traffic. If you are running the multi-threaded server, network statistics over a period can be monitored using the query
SELECT network, SUM(busy)/(SUM(busy)+SUM(idle)) FROM v$dispatcher GROUP BY network;
Or
SELECT network, DECODE(SUM(totalq), 0, 'No Responses',
       SUM(wait)/SUM(totalq) || ' 1/100 seconds') "Avg Wait/Response"
FROM v$queue q, v$dispatcher d
WHERE q.type = 'DISPATCHER'
AND q.paddr = d.paddr
GROUP BY network;
Conclusion
Effective auditing and tuning of Oracle applications requires not only a set of tested techniques applied according to the environment and situation, but also keeping current with technology so that all available features can be used. The attempt in this paper has been to help plan the audit process and present some tips and tricks for tuning the environment.