Applications Tier Tuning In R12
Check for necessary upgrades in the tech stack
==============================================
a) Upgrade to the latest certified technology stack
OC4J: OracleAS 10g 10.1.3.3.0, Metalink Note 454811.1
Forms: OracleAS 10.1.2.2, Metalink Note 437878.1
ATG: RUP 12.0.4 (Patch 6272680)
b) Upgrade to the latest JDK
Metalink Note 418664.1 for using Java with R12
Metalink Note 300482.1 for the latest certifications
c) Apps Version / JDK Version / Metalink Note (certification matrix)
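As a quick check before planning the JDK upgrade, confirm which JDK the applications tier is currently using. A minimal sketch, assuming the APPS environment file has been sourced so the tier's java binary is on the PATH:
## Confirm the JDK currently used by the applications tier environment
which java
java -version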
Forms
======
a) Deploy with socket mode for internal users
R12: Refer to Note 384241.1.
b) Enable Forms Dead Client Detection
Value specified in minutes: FORMS_TIMEOUT=10
In the context file, search for the following line:
FORMS_TIMEOUT oa_var="s_forms_time"
This setting terminates fwebmx processes for dead clients.
c) Enable Forms Abnormal Termination Handler
Do not set FORMS_CATCHTERM.
In the context file, the related line is:
FORMS_CATCHTERM oa_var="s_forms_catchterm"
d) Disable Cancel Query
Cancel Query increases both middle-tier CPU and database CPU usage.
To Disable Cancel Query
Set the Profile “FND: Enable Cancel Query” to ‘No’
In the context file, search for the following line:
FORMS_BLOCKING_LONGLIST oa_var="s_forms_blocklist"
Note: For any Forms-related issues, check for errors in the following log file:
$LOG_HOME/ora/10.1.2/forms/socket.log
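As a quick way to review the current values of the Forms settings above, grep the applications context file and scan the socket-mode log for errors. A minimal sketch, assuming $CONTEXT_FILE points to your applications context file (as set by the APPS environment file):
## Review the Forms-related context variables discussed above
grep -E 's_forms_time|s_forms_catchterm|s_forms_blocklist' $CONTEXT_FILE
## Scan the Forms socket-mode log for recent errors
grep -i error $LOG_HOME/ora/10.1.2/forms/socket.log | tail -20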
OC4J/JVM
=========
Rules of thumb
• One JVM per two CPUs
• No more than one JVM per CPU
• No more than 100 concurrent users per JVM
Response Time/CPU Usage
Customer complains about response time?
Solution: Configure Apache to log the time it takes to service a request.
Edit: $ORA_CONFIG_HOME/10.1.3/Apache/Apache/conf/httpd.conf
LogFormat "%h %T
Logs: $LOG_HOME/ora/10.1.3/Apache/access_log*
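A minimal sketch of the LogFormat change, assuming you start from the standard common log format and simply append %T (the time taken to serve the request, in seconds). With this layout the HTTP status is field 9 and the service time is field 11 of each access_log entry, which is what the script further below relies on:
## httpd.conf: common log format with %T (service time in seconds) appended
LogFormat "%h %l %u %t \"%r\" %>s %b %T" common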
Examples of what to look for in access_log:
• Status 500 (internal server error) may typically be seen for a JServ request and often means the JVM has some kind of problem or has died.
For example: This entry may indicate that the JServ JVM is not responding to any requests:
192.168.1.10 - - [21/Jun/2006:13:25:30 +0100] "POST /oa_servlet/actions/processApplicantSearch HTTP/1.1" 500 0
• Status 403 (forbidden) could typically be seen for oprocmgr requests and often means there is a misconfiguration that needs to be resolved.
For example: This entry in access_log may indicate a problem with the system configuration (oprocmgr.conf):
192.168.1.10 - - [21/Jun/2006:13:25:30 +0100] "GET /oprocmgr-service?cmd=Register&index=0&modName=JServ
&grpName=OACoreGroup&port=16000 HTTP/1.1" 403 226
Run the script below to search access_log for the errors described above.
## Start of script
##
## Check for HTTP statuses in 400 or 500 range for JServ
## or PLSQL requests only
##
awk ' $9>=400 && $9<=599 { print $0 }' access_log* |
grep -e "servlet" -e "\/pls\/" |
grep -v .gif
##
## Check for requests taking more than 30 seconds to be returned
##
awk ' $11>30 {print $0} ' access_log*
##
## This one is not an exception report, you need to manually check
## Look for when the JVMs are restarting
##
grep "GET /oprocmgr-service?cmd=Register" access_log*
##
## End of script
Framework applications
========================
If there are no database-related issues, then you need to analyze the JVM.
Techniques you can use: review the verbose GC output in the OC4J log files:
$LOG_HOME/ora/10.1.3/opmn/OC4J~oacore~default_group_*
Example:
94562.018: [GC 670227K->595360K(892672K), 0.0221060 secs]
94617.600: [GC 672480K->617324K(799104K), 0.0307160 secs]
94648.483: [GC 694444K->623826K(872384K), 0.0405620 secs]
94706.754: [Full GC 756173K->264184K(790720K), 0.8990440 secs]
94718.575: [GC 458782K->424403K(737536K), 0.0471040 secs]
94740.380: [GC 501646K->436633K(793600K), 0.0656750 secs]
94817.197: [GC 512473K->441116K(795136K), 0.0749340 secs]
Description:
The first column (94562.018, 94617.600, ...) shows the time in seconds, measured from JVM startup, at which the collection happened. The text inside the square brackets indicates whether it was a minor GC or a Full GC. It is followed by a pair of numbers such as 670227K->595360K: the value to the left of the arrow is the size of live objects before the collection and the value to the right is the size of live objects after it. The number in parentheses (892672K) is the total heap size available, and the number after the comma is the time the collection took. For example, the first line shows a collection that took 0.0221060 seconds.
Review the frequency of collections, especially major collections (i.e. Full GCs).
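A quick way to see how often Full GCs occur and how long each one takes, based on the log location and format shown above (the awk field positions assume exactly that output format):
## Print each Full GC with its timestamp and pause time
## (the filename prefix added by grep shows which JVM log it came from)
grep "Full GC" $LOG_HOME/ora/10.1.3/opmn/OC4J~oacore~default_group_* | awk '{print $1, $(NF-1), "secs"}'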
Recommendations
a) Enable verbose GC to tune heap sizes based on the GC traffic
b) If Full GCs are too frequent, consider increasing -Xms and -Xmx (the initial and maximum heap sizes)
c) Bigger heaps => GC will take longer
d) Longer GCs => users may experience pauses
e) For the OACoreGroup JVMs, start with the lower of the following two values:
Number of cores on the middle tier
Peak Concurrent users / 100
For example:
If you have 2 x Dual Core CPUs in your server and have 500 peak users, then 4 JVMs is the recommended starting point, since the number of cores is the lower number. However, if you only had 300 peak users, then you would configure 3 JVMs as a starting point as this is now the lower figure.
f) Set the maximum heap size to either 512 MB or 1 GB. If you start with 512 MB and find that more memory is required, increase to a maximum of 1 GB. If more than 1 GB is required, then add another JVM instead (free physical memory permitting) to increase the total memory available overall.
For example:
You are using 1 x JVM with 1 GB heap size and find you need to increase memory. Configure your system for 2 JVMs, each with 750 MB heap size, thus providing 1.5 GB in total.
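To review (and then adjust) the number of OACore JVMs and their heap sizes, the values live in the applications context file. A minimal sketch for checking the current settings; the context-variable names s_oacore_nprocs and s_oacore_jvm_start_options are the usual R12 ones, but verify them in your own context file and run AutoConfig after any change:
## Current number of OACore JVMs and their -Xms/-Xmx start options
grep -E 's_oacore_nprocs|s_oacore_jvm_start_options' $CONTEXT_FILE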
HAPPY LEARNING!