Oracle & MySQL Support

InfraStack-Labs Oracle & MySQL DBA Services help you manage, maintain, and optimize your critical Oracle systems. We deliver 24/7, year-round support with flexible monthly contracts that don’t lock you in.


Tuesday, September 30, 2008

Troubleshooting Concurrent Managers

  • Submit a standard concurrent report and check whether it completes successfully.
  • Check whether you are able to view the log/output file; if not, check whether your APPS listener is running.
  • If your concurrent report fails, check the report log or request log file.
  • Check from the front end whether your Standard Manager is running.
  • Check whether the Internal Concurrent Manager is up and running.
  • Check the UNIX background processes of your concurrent managers:
    ps -ef | grep FNDLIBR
  • Check your concurrent manager log files under $APPLCSF/$APPLLOG.
  • If the managers are failing due to a problematic request, try running cmclean.sql or use the recover concurrent manager option in Oracle Applications Manager.
  • If your concurrent managers are unable to start because they fail to acquire locks, try clearing the locks from the database.
  • If you are still having trouble with your concurrent managers, you can try re-creating the concurrent manager views.
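Several of the checks above can also be made from SQL*Plus. The sketch below is a hedged example, assuming the standard FND schema is accessible through the APPS user; it lists each manager's target and actual process counts (FNDICM is the Internal Concurrent Manager):

```sql
-- List concurrent managers with their target and actual process counts.
-- A healthy manager shows RUNNING_PROCESSES equal to MAX_PROCESSES.
SELECT concurrent_queue_name,
       max_processes,
       running_processes
FROM   apps.fnd_concurrent_queues
WHERE  concurrent_queue_name IN ('FNDICM', 'STANDARD');
```

If FNDICM shows zero running processes, start with the Internal Concurrent Manager log before looking at the individual managers.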


Monday, September 22, 2008

Oracle Applications 11i Architecture Diagrams

This post describes the Oracle Applications architecture in brief with architecture diagrams.

• The Desktop Tier
• The Application Tier
• The Database Tier
• The Oracle Applications Technology Layer

The Oracle Applications Architecture is a framework for multi-tiered, distributed
computing that supports Oracle Applications products. In this model, various servers are
distributed among multiple levels, or tiers.
A server is a process or group of processes that runs on a single machine and provides a
particular functionality, often referred to as a service. For example, the HTTP server is a
process that listens for and processes HTTP requests, and the Forms server is a process
that listens for and processes requests for activities related to Oracle Forms.
A tier is a logical grouping of services, potentially spread across more than one
physical machine. The three-tier architecture that comprises an Oracle E-Business Suite
installation is made up of the database tier, which supports and manages the Oracle
database; the application tier, which supports and manages the various Applications
components, and is sometimes known as the middle tier; and the desktop tier, which
provides the user interface via an add-on component to a standard web browser.
A machine may be referred to as a node, particularly in the context of a group of
computers that work closely together in a cluster. Each tier may consist of one or more
nodes, and each node can potentially accommodate more than one tier. For example, the
database can reside on the same node as one or more application tier components. This
may be done for simplicity of administration, or (as in the case of concurrent processing)
to maximize performance by reducing network traffic.
Centralizing the Oracle Applications software on the application tier eliminates the
need to install and maintain application software on each desktop client PC, and also
enables Oracle Applications to scale well with an increasing load. Extending this concept
further, one of the key benefits of using the Shared APPL_TOP model (see Chapter 9) is
the need to maintain only a single copy of the relevant Applications code, instead of a
copy for every application tier machine.

On the database tier, there is increasing use of Real Application Clusters (RAC), where
multiple instances on separate nodes support a single database to give greater availability and
scalability.

Figure 1-1 Oracle Applications Architecture

Figure 1-2 Forms-based Desktop Tier Architecture

Figure 1-3 Example Oracle E-Business Suite Home Page Applications

Figure 1-4 HTML-Based Applications Architecture

Figure 1-5 Oracle Applications Framework Architecture

Figure 1-6 Forms Server Architecture

Figure 1-7 Forms Server Load Balancing Forms

Figure 1-8 Reports Server Architecture

Figure 1-9 Viewing Concurrent Processing Output

Figure 1-10 Example of Deploying Oracle Applications on Heterogeneous Platforms


Automatic Storage Management (ASM) in Oracle Database 10g

Automatic Storage Management (ASM) is a new feature introduced in Oracle 10g to simplify the storage of Oracle datafiles, controlfiles and logfiles.

  • Overview of Automatic Storage Management (ASM)
  • Initialization Parameters and ASM Instance Creation
  • Startup and Shutdown of ASM Instances
  • Administering ASM Disk Groups
    • Disks
    • Templates
    • Directories
    • Aliases
    • Files
    • Checking Metadata
  • ASM Filenames
  • ASM Views
  • SQL and ASM
  • Migrating to ASM Using RMAN

Overview of Automatic Storage Management (ASM)

Automatic Storage Management (ASM) simplifies the administration of Oracle-related files by allowing the administrator to reference disk groups rather than individual disks and files, which are managed by ASM. The ASM functionality is an extension of the Oracle Managed Files (OMF) functionality that also includes striping and mirroring to provide balanced and secure storage. The new ASM functionality can be used in combination with existing raw and cooked file systems, along with OMF and manually managed files.

The ASM functionality is controlled by an ASM instance. This is not a full database instance, just the memory structures, and as such it is very small and lightweight.

The main components of ASM are disk groups, each of which comprises several physical disks that are controlled as a single unit. The physical disks are known as ASM disks, while the files that reside on the disks are known as ASM files. The locations and names of the files are controlled by ASM, but user-friendly aliases and directory structures can be defined for ease of reference.

The level of redundancy and the granularity of the striping can be controlled using templates. Default templates are provided for each file type stored by ASM, but additional templates can be defined as needed.

Failure groups are defined within a disk group to support the required level of redundancy. For two-way mirroring you would expect a disk group to contain two failure groups so individual files are written to two locations.

In summary ASM provides the following functionality:
  • Manages groups of disks, called disk groups.
  • Manages disk redundancy within a disk group.
  • Provides near-optimal I/O balancing without any manual tuning.
  • Enables management of database objects without specifying mount points and filenames.
  • Supports large files.

Initialization Parameters and ASM Instance Creation

The initialization parameters that are of specific interest for an ASM instance are:
  • INSTANCE_TYPE - Set to ASM or RDBMS depending on the instance type. The default is RDBMS.
  • DB_UNIQUE_NAME - Specifies a globally unique name for the database. This defaults to +ASM but must be altered if you intend to run multiple ASM instances.
  • ASM_POWER_LIMIT -The maximum power for a rebalancing operation on an ASM instance. The valid values range from 1 to 11, with 1 being the default. The higher the limit the more resources are allocated resulting in faster rebalancing operations. This value is also used as the default when the POWER clause is omitted from a rebalance operation.
  • ASM_DISKGROUPS - The list of disk groups that should be mounted by an ASM instance during instance startup, or by the ALTER DISKGROUP ALL MOUNT statement. ASM configuration changes are automatically reflected in this parameter.
  • ASM_DISKSTRING - Specifies a value that can be used to limit the disks considered for discovery. Altering the default value may improve the speed of disk group mount time and the speed of adding a disk to a disk group. Changing the parameter to a value which prevents the discovery of already mounted disks results in an error. The default value is NULL allowing all suitable disks to be considered.
Incorrect usage of these parameters in ASM or RDBMS instances results in ORA-15021 errors.

To create an ASM instance first create a file called init+ASM.ora in the /tmp directory containing the following information.
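For example, a minimal parameter file might look like the following (the disk string and disk group name are illustrative assumptions; adjust them to your environment):

```
INSTANCE_TYPE=ASM
ASM_POWER_LIMIT=1
ASM_DISKSTRING='/devices/disk*'
ASM_DISKGROUPS='disk_group_1'
```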
Next, using SQL*Plus, connect to the idle instance.
sqlplus / as sysdba
Create an spfile using the contents of the init+ASM.ora file.
SQL> CREATE SPFILE FROM PFILE='/tmp/init+ASM.ora';

File created.
Finally, start the instance with the NOMOUNT option.
SQL> startup nomount
ASM instance started

Total System Global Area 125829120 bytes
Fixed Size 1301456 bytes
Variable Size 124527664 bytes
Database Buffers 0 bytes
Redo Buffers 0 bytes
The ASM instance is now ready to use for creating and mounting disk groups. To shutdown the ASM instance issue the following command.
SQL> shutdown
ASM instance shutdown
Once an ASM instance is present, disk groups can be used for the following parameters in database instances (INSTANCE_TYPE=RDBMS) to allow ASM file creation: DB_CREATE_FILE_DEST, DB_CREATE_ONLINE_LOG_DEST_n, DB_RECOVERY_FILE_DEST, CONTROL_FILES and LOG_ARCHIVE_DEST_n.

Startup and Shutdown of ASM Instances

ASM instances are started and stopped in a similar way to normal database instances. The options for the STARTUP command are:
  • FORCE - Performs a SHUTDOWN ABORT before restarting the ASM instance.
  • MOUNT - Starts the ASM instance and mounts the disk groups specified by the ASM_DISKGROUPS parameter.
  • NOMOUNT - Starts the ASM instance without mounting any disk groups.
  • OPEN - This is not a valid option for an ASM instance.
The options for the SHUTDOWN command are:
  • NORMAL - The ASM instance waits for all connected ASM instances and SQL sessions to exit then shuts down.
  • IMMEDIATE - The ASM instance waits for any SQL transactions to complete then shuts down. It doesn't wait for sessions to exit.
  • ABORT - The ASM instance shuts down instantly.

Administering ASM Disk Groups


Disk groups are created using the CREATE DISKGROUP statement. This statement allows you to specify the level of redundancy:
  • NORMAL REDUNDANCY - Two-way mirroring, requiring two failure groups.
  • HIGH REDUNDANCY - Three-way mirroring, requiring three failure groups.
  • EXTERNAL REDUNDANCY - No mirroring for disks that are already protected using hardware mirroring or RAID.
In addition, failure groups and preferred names for disks can be defined. If the NAME clause is omitted the disks are given a system-generated name like "disk_group_1_0001". The FORCE option can be used to move a disk from another disk group into this one.
CREATE DISKGROUP disk_group_1 NORMAL REDUNDANCY
  FAILGROUP failure_group_1 DISK
    '/devices/diska1' NAME diska1,
    '/devices/diska2' NAME diska2
  FAILGROUP failure_group_2 DISK
    '/devices/diskb1' NAME diskb1,
    '/devices/diskb2' NAME diskb2;
Disk groups can be deleted using the DROP DISKGROUP statement.
Disks can be added or removed from disk groups using the ALTER DISKGROUP statement. Remember that the wildcard "*" can be used to reference disks so long as the resulting string does not match a disk already used by an existing disk group.
-- Add disks.
ALTER DISKGROUP disk_group_1 ADD DISK
  '/devices/disk*3',
  '/devices/disk*4';

-- Drop a disk.
ALTER DISKGROUP disk_group_1 DROP DISK diska2;
Disks can be resized using the RESIZE clause of the ALTER DISKGROUP statement. The statement can be used to resize individual disks, all disks in a failure group or all disks in the disk group. If the SIZE clause is omitted the disks are resized to the size of the disk returned by the OS.
-- Resize a specific disk.
ALTER DISKGROUP disk_group_1
  RESIZE DISK diska1 SIZE 100G;

-- Resize all disks in a failure group.
ALTER DISKGROUP disk_group_1
  RESIZE DISKS IN FAILGROUP failure_group_1 SIZE 100G;

-- Resize all disks in a disk group.
ALTER DISKGROUP disk_group_1
  RESIZE ALL SIZE 100G;
The UNDROP DISKS clause of the ALTER DISKGROUP statement allows pending disk drops to be undone. It will not revert drops that have completed, or disk drops associated with the dropping of a disk group.
Disk groups can be rebalanced manually using the REBALANCE clause of the ALTER DISKGROUP statement. If the POWER clause is omitted the ASM_POWER_LIMIT parameter value is used. Rebalancing is only needed when the speed of the automatic rebalancing is not appropriate.
Disk groups are mounted at ASM instance startup and unmounted at ASM instance shutdown. Manual mounting and dismounting can be accomplished using the ALTER DISKGROUP statement as seen below.
ALTER DISKGROUP ALL DISMOUNT;
ALTER DISKGROUP ALL MOUNT;
ALTER DISKGROUP disk_group_1 DISMOUNT;
ALTER DISKGROUP disk_group_1 MOUNT;


Templates are named groups of attributes that can be applied to the files within a disk group. The following example show how templates can be created, altered and dropped.
-- Create a new template.
ALTER DISKGROUP disk_group_1 ADD TEMPLATE my_template ATTRIBUTES (MIRROR FINE);

-- Modify a template.
ALTER DISKGROUP disk_group_1 ALTER TEMPLATE my_template ATTRIBUTES (COARSE);

-- Drop template.
ALTER DISKGROUP disk_group_1 DROP TEMPLATE my_template;
Available attributes include:
  • UNPROTECTED - No mirroring or striping regardless of the redundancy setting.
  • MIRROR - Two-way mirroring for normal redundancy and three-way mirroring for high redundancy. This attribute cannot be set for external redundancy.
  • COARSE - Specifies a lower granularity for striping. This attribute cannot be set for external redundancy.
  • FINE - Specifies higher granularity for striping. This attribute cannot be set for external redundancy.


A directory hierarchy can be defined using the ALTER DISKGROUP statement to support ASM file aliasing. The following examples show how ASM directories can be created, modified and deleted.
-- Create a directory.
ALTER DISKGROUP disk_group_1 ADD DIRECTORY '+disk_group_1/my_dir';

-- Rename a directory.
ALTER DISKGROUP disk_group_1 RENAME DIRECTORY '+disk_group_1/my_dir' TO '+disk_group_1/my_dir_2';

-- Delete a directory and all its contents.
ALTER DISKGROUP disk_group_1 DROP DIRECTORY '+disk_group_1/my_dir_2' FORCE;


Aliases allow you to reference ASM files using user-friendly names, rather than the fully qualified ASM filenames.
-- Create an alias using the fully qualified filename.
ALTER DISKGROUP disk_group_1 ADD ALIAS '+disk_group_1/my_dir/my_file.dbf'
FOR '+disk_group_1/mydb/datafile/my_ts.342.3';

-- Create an alias using the numeric form filename.
ALTER DISKGROUP disk_group_1 ADD ALIAS '+disk_group_1/my_dir/my_file.dbf'
FOR '+disk_group_1.342.3';

-- Rename an alias.
ALTER DISKGROUP disk_group_1 RENAME ALIAS '+disk_group_1/my_dir/my_file.dbf'
TO '+disk_group_1/my_dir/my_file2.dbf';

-- Delete an alias.
ALTER DISKGROUP disk_group_1 DELETE ALIAS '+disk_group_1/my_dir/my_file.dbf';
Attempting to drop a system alias results in an error.


Files are not deleted automatically if they are created using aliases, as they are not Oracle Managed Files (OMF), or if a recovery is done to a point-in-time before the file was created. For these circumstances it is necessary to manually delete the files, as shown below.
-- Drop file using an alias.
ALTER DISKGROUP disk_group_1 DROP FILE '+disk_group_1/my_dir/my_file.dbf';

-- Drop file using a numeric form filename.
ALTER DISKGROUP disk_group_1 DROP FILE '+disk_group_1.342.3';

-- Drop file using a fully qualified filename.
ALTER DISKGROUP disk_group_1 DROP FILE '+disk_group_1/mydb/datafile/my_ts.342.3';

Checking Metadata

The internal consistency of disk group metadata can be checked in a number of ways using the CHECK clause of the ALTER DISKGROUP statement.
-- Check metadata for a specific file.
ALTER DISKGROUP disk_group_1 CHECK FILE '+disk_group_1/my_dir/my_file.dbf';

-- Check metadata for a specific failure group in the disk group.
ALTER DISKGROUP disk_group_1 CHECK FAILGROUP failure_group_1;
-- Check metadata for a specific disk in the disk group.
ALTER DISKGROUP disk_group_1 CHECK DISK diska1;

-- Check metadata for all disks in the disk group.
ALTER DISKGROUP disk_group_1 CHECK ALL;

ASM Views

The ASM configuration can be viewed using the V$ASM_% views, which often contain different information depending on whether they are queried from the ASM instance or a dependent database instance.

V$ASM_ALIAS
  ASM instance: Displays a row for each alias present in every disk group mounted by the instance.
  DB instance: Returns no rows.

V$ASM_CLIENT
  ASM instance: Displays a row for each database instance using a disk group managed by the ASM instance.
  DB instance: Displays a row for the ASM instance if the database has open ASM files.

V$ASM_DISK
  ASM instance: Displays a row for each disk discovered by the ASM instance, including disks which are not part of any disk group.
  DB instance: Displays a row for each disk in the disk groups in use by the database instance.

V$ASM_DISKGROUP
  ASM instance: Displays a row for each disk group discovered by the ASM instance.
  DB instance: Displays a row for each disk group mounted by the local ASM instance.

V$ASM_FILE
  ASM instance: Displays a row for each file in each disk group mounted by the ASM instance.
  DB instance: Displays no rows.

V$ASM_OPERATION
  ASM instance: Displays a row for each long-running operation executing in the ASM instance.
  DB instance: Displays no rows.

V$ASM_TEMPLATE
  ASM instance: Displays a row for each template present in each disk group mounted by the ASM instance.
  DB instance: Displays a row for each template present in each disk group mounted by the ASM instance with which the database instance communicates.

ASM Filenames

There are several ways to reference ASM files. Some forms are used during creation and some for referencing existing ASM files. The creation forms are incomplete, relying on ASM to create the fully qualified name, which can be retrieved from the supporting views. The forms of ASM filenames are summarised below.

Filename Type Format
Fully Qualified ASM Filename +dgroup/dbname/file_type/file_type_tag.file.incarnation
Numeric ASM Filename +dgroup.file.incarnation
Alias ASM Filenames +dgroup/directory/filename
Alias ASM Filename with Template +dgroup(template)/alias
Incomplete ASM Filename +dgroup
Incomplete ASM Filename with Template +dgroup(template)


ASM filenames can be used in place of conventional filenames for most Oracle file types, including controlfiles, datafiles, logfiles, etc. For example, the following command creates a new tablespace with a datafile in the disk_group_1 disk group.
CREATE TABLESPACE my_ts DATAFILE '+disk_group_1' SIZE 100M AUTOEXTEND ON;

Migrating to ASM Using RMAN

The following method shows how a primary database can be migrated to ASM from a disk-based backup:
  • Disable change tracking (only available in Enterprise Edition) if it is currently being used.
  • Shutdown the database.
  • Modify the parameter file of the target database as follows:
    • Set the DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n parameters to the relevant ASM disk groups.
    • Remove the CONTROL_FILES parameter from the spfile so the control files will be moved to the DB_CREATE_* destination and the spfile gets updated automatically. If you are using a pfile the CONTROL_FILES parameter must be set to the appropriate ASM files or aliases.

  • Start the database in nomount mode.
  • Restore the controlfile into the new location from the old location.
    RMAN> RESTORE CONTROLFILE FROM 'old_control_file_name';
  • Mount the database.
  • Copy the database into the ASM disk group.
  • Switch all datafiles to the new ASM location.
  • Open the database.
  • Create new redo logs in ASM and delete the old ones.

  • Enable change tracking if it was being used.
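The steps above can be sketched as an RMAN session. This is a hedged outline only, assuming a disk group named +disk_group_1 and a 10g-style backup-as-copy migration; the old controlfile path is a hypothetical placeholder:

```sql
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM '/u01/oradata/MYDB/control01.ctl';
RMAN> ALTER DATABASE MOUNT;

-- Copy the whole database into the ASM disk group, then point
-- the controlfile at the new copies.
RMAN> BACKUP AS COPY DATABASE FORMAT '+disk_group_1';
RMAN> SWITCH DATABASE TO COPY;
RMAN> ALTER DATABASE OPEN;
```

The redo logs still need to be recreated in ASM afterwards, as noted in the last steps above.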

Tuesday, September 16, 2008

Oracle Apps 11i - Key Configuration Files

The following table lists the file names and contents as they relate to instance-specific information for the Applications Administrator role.

The following table lists the file names and contents as they relate to instance-specific information for the Database Administrator role.

The following table lists the file names and contents as they relate to instance-specific information for the Web Administrator role.


Monday, September 15, 2008

Minimising Physical I/Os in Oracle Applications

This article focuses on how to minimise the physical (disk) I/O in any Oracle application
system. It mainly addresses large-scale OLTP systems and describes pointers to areas that
can be examined for performance bottlenecks and improvements.

Physical I/Os vs. Logical I/Os
Logical I/O can be defined as the amount of input and output Oracle performs at the lowest
level of its storage, i.e. the Oracle blocks. Logical I/O is therefore calculated on the basis of the
number of blocks read and the number of times the block(s) is changed during a particular DML
operation. Logical I/Os are calculated as the sum of Consistent Gets and DB Block Gets, which are further
explained below:

Consistent Gets – the number of buffers gotten in consistent mode. During the course of a read operation on a table, there is a possibility that some other transaction may change the underlying matching records for the query. During the read operation, any information for the row changes is first checked from the Block Header. This avoids the need to check for each row in the block, if the corresponding row has been changed or not. The buffers as read in this mode are referred to as Consistent Gets.
DB Block Gets – these are the number of buffers gotten normally during update/insert
operations and where the information on the segment header block needs to be updated.

Logical I/O = Consistent Gets + Db Block Gets.

Physical I/Os are part of Logical I/Os. This means that for a typical read operation on a table,
the resulting operation can be a mix of information gotten from the Buffer Cache and the
physical disk (for whatever information not available in the cache). The information read from
the disk forms physical I/O for the operation, while the total information forms Logical I/Os
for the operation.
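These counters can be read directly from the V$SYSSTAT dynamic performance view, for example:

```sql
-- Logical I/O = 'consistent gets' + 'db block gets';
-- 'physical reads' shows how much of that work went to disk.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('consistent gets', 'db block gets', 'physical reads');
```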

Logical I/O can be relatively straightforward (albeit not simple) to calculate, and is mostly done
by breaking down the application in terms of tables being accessed against the transactions in the
system. However, the estimates depend heavily on various factors like data volumes, data
distribution, probable access methods, frequency of access, frequency of change, and so forth.
There are, however, other simple rules to calculate estimates for the logical I/O. The resulting
physical I/O is much more subject to the specific nature of the system.

Why look at physical I/Os
Physical I/Os tend to be expensive in terms of desired performance from a typical OLTP
system. Physical disk I/O is by far the slowest of the three core components found in all
computing devices (CPU, memory, and disk). For that reason, minimising the volume of physical
I/O becomes a major step in system performance tuning process.

While investigating physical I/O for a system, it should, however, be kept in mind that physical
I/O is not inherently a problem, and is in fact necessary for the database to function at all.
However, unnecessary physical I/O can become major bottlenecks in the system performance.

We can convert peak Logical I/Os per second to the amount of CPU required for the
application, and we can convert peak Physical I/Os per second to the number of disks. This
leads us to the direction of tuning the process of disk access to enable better performance.
However, it must be kept be kept in mind, that steps for tuning physical I/O still start from the
very first step of tuning the application first, moving on to the database, and finally attacking the
operating system and hardware.

The following sections describe pointers to areas to tune with respect to the application, the database, and the operating system.

Pointers for Application Tuning
1) Minimising block accesses – choosing best retrieval mechanism – Indexes vs. Table Scans
Typically, we always try to tune the application in order to avoid Full Table Scans for the queries
being executed. There is a choice between Full Table Scans and putting indexes in force.
Most often, in a long-running and continually growing OLTP system, index look-ups tend to
experience less physical I/O in comparison with table scans. However, the choice lies with
the optimiser unless we force it to use indexes with hints (setting the optimiser mode to RULE
is not advised, as it is going to be de-supported in later versions of Oracle). Table scans
will sometimes consume fewer resources in terms of I/O than the equivalent index look-up.
This will prominently be true when the selection criteria point to a large proportion of the
table data. Apart from this, it will also depend on many other factors, such as the data
distribution, whether the optimiser statistics are up to date, the selectivity and depth of the
index, and init file parameter settings.
So properly analysing the factors influencing the cost-based optimiser's choice
between table scans and index scans will help minimise the physical I/O for the system.
We can help the cost-based optimiser choose between index and table scans by doing the following
basic things:

a) Keeping optimiser statistics up to date
b) Specifying the correct optimiser goal – the rule of thumb is to set FIRST_ROWS_n (n is 1, 10,
100...) for best response times, and ALL_ROWS for best possible throughput.
c) Using hints, wherever necessary
d) Creating histograms, wherever necessary
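As an illustration of point (a), statistics are typically kept current with the DBMS_STATS package; the schema and table names below are hypothetical:

```sql
-- Gather up-to-date optimiser statistics for one table.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'SCOTT',
    tabname => 'EMP',
    cascade => TRUE);  -- also gather statistics for the table's indexes
END;
/
```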

2) Avoiding Disk sorts
Disk sorts occur when there is not sufficient memory available for any sort operation to
complete; in which case, Oracle allocates a temporary segment within the temporary
tablespace. This causes additional I/O overheads for the read and write operations from the
temporary tablespace.
We try to avoid / minimise disk sorts by:
a) Properly setting the value for configuration parameter – SORT_AREA_SIZE. The value
of this parameter should not exceed the memory capacity of the system.
b) Properly setting the value for configuration parameter –
SORT_AREA_RETAINED_SIZE. One should be very careful in setting the value for
this parameter. Setting the value too high can cause memory contentions resulting in
heavy paging and swapping activities.
c) Think of striping the temporary tablespace across disks
d) With optimiser mode set to FIRST_ROWS, the optimiser will favour avoiding sorts if
other alternatives are available.
e) Avoiding the DISTINCT clause in queries, if possible. The DISTINCT clause will usually
perform a sort.
f) Using UNION ALL instead of UNION, if possible. This avoids the sort required for
duplicate row elimination when UNION is used.
g) Sorts can be avoided if indexes are being used. For example, if an index exists on some or
all of the columns in the ORDER BY clause, the already sorted index may get used.
h) Replacing sort-merge join operations with hash joins.
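Point (f) can be illustrated with a short sketch (the table and column names are hypothetical):

```sql
-- UNION must sort the combined result to eliminate duplicates.
SELECT emp_id FROM current_employees
UNION
SELECT emp_id FROM former_employees;

-- UNION ALL simply concatenates the two result sets, avoiding the sort;
-- use it when duplicates are impossible or acceptable.
SELECT emp_id FROM current_employees
UNION ALL
SELECT emp_id FROM former_employees;
```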

3) Avoiding Parse Operations
Excessive parse operations cause recursive SQL to be frequently generated. Excessive
recursive SQL may sometimes result in physical disk I/O if the desired information is not
available in the dictionary cache.
Avoid parse overhead by:
a) Using bind variables
b) Reusing cursors available for later operations in a transaction
c) Properly setting the value of configuration parameter –
d) Properly setting the value of the configuration parameter CURSOR_SHARING
(especially when it is difficult to re-write the application to use bind variables)
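A minimal SQL*Plus sketch of point (a), using a hypothetical emp table:

```sql
-- With a literal, each distinct value forces a new hard parse:
SELECT ename FROM emp WHERE empno = 7839;

-- With a bind variable, one shared cursor is reused for every value:
VARIABLE v_empno NUMBER
EXEC :v_empno := 7839
SELECT ename FROM emp WHERE empno = :v_empno;
```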

Pointers for Database Tuning
1) Proper Space Management
Proper space management plays a major role in enhancing a system's performance. From an
I/O perspective, wastage of space means Oracle has to perform more I/O in order to satisfy
a request or get a job done. Therefore, it becomes imperative to take steps to avoid
wastage of space in order to achieve the desired performance.
A proper value for the block size, along with other space parameters like PCTFREE,
PCTUSED, INITRANS and MAXTRANS (as explained below), can help utilise the space
effectively, thereby reducing I/O.
It all depends on how efficiently a block is packed, so that the minimum number of I/Os is
required to retrieve a particular amount of data.
a) INITRANS and MAXTRANS
These two parameters affect the way the transactional section of the block is allowed to
grow for the transactions applied to the block. INITRANS specifies the initial number
of slots reserved for transactions on the block. MAXTRANS specifies the maximum number of
slots that can be created in the block (defaulting to 255). Setting too high a value for
MAXTRANS means allowing Oracle to push the transaction table to a size that is
unnecessarily large for most of the life cycle of the table/index.

b) PCTFREE and PCTUSED
PCTFREE tells Oracle how much of the remainder of the block is available for inserting
new data. PCTUSED tells to what percentage the used space must drop to allow Oracle
to insert new data into the same block. Setting a high value for PCTFREE could mean
more empty space in the block. This, in turn, means more I/Os have to be performed to
retrieve the amount of data required. An improper setting of this parameter can also
lead to row migration, leaving a lot of wasted space in the original block (this is not
desirable, as an extra I/O will still be required on the original block to read the rowid of
the migrated row).
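The block-level parameters above are set per segment at creation time. A hedged sketch, with illustrative names and values only:

```sql
CREATE TABLE orders (
  order_id   NUMBER,
  order_data VARCHAR2(200)
)
PCTFREE  10   -- keep 10% of each block free for row growth (updates)
PCTUSED  40   -- allow new inserts once used space drops below 40%
INITRANS 2    -- reserve two transaction slots per block
MAXTRANS 255  -- upper bound on transaction slots
TABLESPACE users;
```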

c) Choosing appropriate block size
Traditionally speaking, smaller block sizes are often quoted to be good for OLTP
systems and larger blocks are better for DSS systems. Bigger blocks mean fewer buffers,
and a lower degree of buffering. In a typical OLTP system, where the data is accessed
mostly through indexes, there is a higher probability that some other query may flush
data from a previously buffered block.
The above problem, however, can be addressed well by proper use of the various buffer
pools available – KEEP, RECYCLE, and DEFAULT.
For a typical OLTP system averaging 1-1.5 Terabytes of data for one year into
production, a block size of 16K can work well to deliver good performance.

d) Extent size
Never set the extent size smaller than the physical maximum read size allowed by the
operating system. This is important because if any object were to be scanned, it would
mean extra I/O if Oracle has to issue more than one read request in order to get its
single maximum read.

e) Tablespace
Splitting database across multiple Tablespaces is a common practice and offers great deal
of benefits in terms of backup, recovery, and the administration tasks getting easier to
handle. Identifying and correcting I/O bottlenecks becomes easy, as the I/O costs can
be isolated across sub-sets of databases.

2) Physical I/O Distribution across physical devices
Balancing physical I/O is an important step in maximising an Oracle system's
performance. This step involves striping the total I/O across datafiles that are spread evenly
across physical disk drives.
Following points below indicate areas to tune for achieving this goal:

a) Identifying Hot-spots
Querying the dictionary views V$FILESTAT and V$DBFILE yields the information needed
to identify hot files / Tablespaces.
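For example, the per-file read and write counts can be retrieved as follows:

```sql
-- Join file I/O statistics to file names to identify hot datafiles.
SELECT df.name,
       fs.phyrds,       -- physical reads
       fs.phywrts,      -- physical writes
       fs.phyblkrd,     -- blocks read
       fs.phyblkwrt     -- blocks written
FROM   v$filestat fs,
       v$dbfile   df
WHERE  fs.file# = df.file#
ORDER BY fs.phyrds DESC;
```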

b) Balancing Physical I/O
Hot data files i.e. files with the most I/O can be identified by sorting the above
information by either Physical Reads or Physical Writes. If a small number of data-files
are experiencing a large percentage of the load (around 80% of total load), and the overall
volume (Blks Rd/Min + Blks Wrt/Min) approaches the capacity of the disk, then it is an
indication that I/O should be spread across multiple disks. If there is just one hot file on a disk
that is responsible for heavy I/O, moving it on to another disk will not help. In this case,
considering the option of moving heavily accessed segments to separate Tablespaces can
help reduce I/O bottleneck here.
It's a common and useful practice to have multiple small files for a tablespace, instead of one
large file, and to spread them across multiple disks (if available). For example, a required 1GB
tablespace could use ten 100MB files spread across ten disks instead of a single 1GB datafile.
c) Placing Redo Logs on separate disk
For large databases involving heavy update and insert activities, performance can be
improved by placing the redo log files on a separate disk that supports no other activity.

d) Separate Indexes from tables
Having indexes and tables assigned to different Tablespaces can help
improve I/O performance. Performance can be expected to improve further if the
datafiles under the individual Tablespaces are spread across different disks.

3) Database Fragmentation
Extent / Tablespace Fragmentation: Database segments may be composed of many
non-contiguous extents of disk space. I/O times will increase due to non-sequential disk
reads and/or split I/Os. Extent fragmentation further results in Tablespace
fragmentation (the Oracle Tablespaces may be composed of many files, and the Oracle
segments within a Tablespace may be composed of many individual extents, thus resulting in
Tablespace fragmentation).
Tablespace fragmentation prevents Oracle from taking advantage of its multi-block read
capability, as well as resulting in wastage of space.
Steps to eliminate Tablespace fragmentation:
a) Recreate Tablespace – this can just be a temporary solution and might not prevent it
from recurring
b) Appropriately sizing segment storage parameters

4) Adjusting Database buffers
Increasing the number of database buffers to an optimum value (check for excessive
paging here) will also help improve I/O efficiency. Check the Buffer Hit Ratio: if it is
consistently less than 90%, it becomes necessary to increase DB_BLOCK_BUFFERS.
Note: If the dictionary cache hit ratio is less than 95%, consider increasing the shared pool
size to an optimum value.
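A common sketch for computing the buffer cache hit ratio from V$SYSSTAT (the three statistic names quoted below are the standard Oracle ones):

```sql
-- Buffer cache hit ratio = 1 - physical reads / logical reads
SELECT ROUND(100 * (1 - phy.value / (db.value + con.value)), 2)
         AS buffer_hit_ratio_pct
FROM   v$sysstat phy, v$sysstat db, v$sysstat con
WHERE  phy.name = 'physical reads'
AND    db.name  = 'db block gets'
AND    con.name = 'consistent gets';
```

A result consistently below 90 suggests, per the guideline above, that DB_BLOCK_BUFFERS should be increased.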

5) Considering Multiple DBWRs
For a large-scale Oracle application, multiple database writers should be considered. The
configurable parameter DB_WRITERS can be set in the init.ora file to reflect the number
of DBWR processes. It should be adjusted iteratively: increase it conservatively and
monitor the results for I/O improvements.
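A hedged init.ora fragment (the value 2 is only a conservative starting point, not a recommendation; re-measure I/O after each change):

```
# init.ora -- increase DBWR count conservatively and iterate
DB_WRITERS = 2
```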




The high-water mark (HWM) is a pointer to the highest block up to which a table or index segment has ever contained data.

Let us illustrate this statement.

Suppose we create an empty table. The high-water mark is at the beginning of the table segment; all the blocks and extents the segment occupies on initial creation are unused:

    HWM
    |
    [ Un-used blocks ................................ ]

After inserting data into the table segment, the HWM moves to point at the position up to which the segment contains data:

    [ DATA ][ Un-used blocks ]
            |
            HWM

Inserting more data moves the HWM further along:

    <--------------- DATA --------------->[ Un-used blocks ]
                                          |
                                          HWM

Now let us delete the data and look at the HWM:

    [ Empty blocks ][ Un-used blocks ]
                    |
                    HWM
    ---------------------------------> Full Table Scan

As shown above, deleting the data does not move the HWM. The main disadvantage of this is that Oracle always reads all the blocks up to the high-water mark during a full table scan. You may have noticed that a count(*) on an emptied table still takes time to return 0 rows; the delay is caused by the HWM remaining at its former high position.

Now the question arises: how do we set the high-water mark back to a lower position?

The only way to reset the HWM is to truncate the table.

Let us see how truncate resets the HWM. After truncating, there is no data in the segment and the HWM is back at the beginning:

    HWM
    |
    [ Un-used blocks ]
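A minimal sketch (the table name is hypothetical; both forms are standard Oracle syntax):

```sql
TRUNCATE TABLE my_table;                -- resets the HWM and, by default,
                                        -- deallocates space (DROP STORAGE)
TRUNCATE TABLE my_table REUSE STORAGE;  -- resets the HWM but keeps the
                                        -- allocated extents for reuse
```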

A table in which many deletions and insertions take place will probably have a high HWM. If the HWM is high relative to the actual data, it is better to rebuild the table segment for performance's sake.

Calculate the HWM percentage using the queries and formula shown further below.
If the degree of difference is 40% or more , then performing the rebuild to reset the HWM is worthwhile. However , consider the overall database size in terms of the number of rows in the table, the total blocks held by the table, and the frequency of FTS being performed (on that table), prior to deciding whether the difference between HWM blocks and actual data blocks is substantial and whether the table is indeed a candidate for resetting the HWM.

In the case of a partitioned table, you deal with the HWM at the partition level.

Calculate the empty blocks and the total blocks occupied by the segment:



/* Analyze the table first so the dictionary statistics are current */

ANALYZE TABLE &table_name COMPUTE STATISTICS;

SELECT blocks "Blocks containing data", empty_blocks "Empty blocks"
FROM user_tables
WHERE table_name = UPPER('&table_name');

-- or, as a DBA, for any schema:

SELECT blocks "Blocks containing data", empty_blocks "Empty blocks"
FROM dba_tables
WHERE table_name = UPPER('&table_name')
AND owner = UPPER('&owner');

HWMBLOCKS = "Blocks containing data" + "Empty blocks"

              HWMBLOCKS – "Blocks containing data"
    HWM % = ( ------------------------------------ ) * 100
                          HWMBLOCKS


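The same percentage can be computed directly from the dictionary; a sketch (using the &table_name substitution variable as a placeholder, and assuming the table has been analyzed so BLOCKS and EMPTY_BLOCKS are populated):

```sql
-- HWM % = empty_blocks / (blocks + empty_blocks) * 100
SELECT ROUND(100 * empty_blocks / (blocks + empty_blocks), 1) AS hwm_pct
FROM   user_tables
WHERE  table_name = UPPER('&table_name');
```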
You can also use the DBMS_SPACE.UNUSED_SPACE procedure to determine the HWM. Among its OUT parameters it returns TOTAL_BLOCKS and UNUSED_BLOCKS; the number of blocks below the HWM is (TOTAL_BLOCKS – UNUSED_BLOCKS).
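A PL/SQL sketch of the call (the table name MY_TABLE is hypothetical; the parameter list shown is the documented signature for non-partitioned segments):

```sql
SET SERVEROUTPUT ON
DECLARE
  l_total_blocks   NUMBER;
  l_total_bytes    NUMBER;
  l_unused_blocks  NUMBER;
  l_unused_bytes   NUMBER;
  l_last_file_id   NUMBER;
  l_last_block_id  NUMBER;
  l_last_block     NUMBER;
BEGIN
  DBMS_SPACE.UNUSED_SPACE(
    segment_owner             => USER,
    segment_name              => 'MY_TABLE',   -- hypothetical table
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_last_file_id,
    last_used_extent_block_id => l_last_block_id,
    last_used_block           => l_last_block);
  DBMS_OUTPUT.PUT_LINE('Blocks below HWM: '
                       || (l_total_blocks - l_unused_blocks));
END;
/
```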

I also want to point out that people are often confused about resetting the HWM with the ALTER TABLE ... DEALLOCATE UNUSED clause. This clause only frees unused space above the high-water mark; it cannot reset the HWM itself.

I hope this article has helped you understand the concept of the HWM.