Every system of hardware and installed applications is different. Even though Oracle Fusion Applications are written and installed using industry-standard best practices, the system can be tailored to better support the environment.
Tuning the system, however, requires locating and examining data. This section explains what data needs to be examined and what tools are used to gather it.
In general, most of the default settings in Oracle Fusion Applications are already tuned.
The following guidelines are provided to help ensure the Oracle Fusion Applications instance runs optimally:
Note that all metrics listed are from Oracle Enterprise Manager Cloud Control.
Monitor the key host metrics, shown in Table 14-1, to ensure the underlying server hosts are healthy. Rather than constantly checking the metric values, it is possible to set up alert thresholds in Cloud Control and receive notification when thresholds are exceeded.
Monitor the key component metrics, such as WebLogic server metrics, to ensure each component is healthy.
Monitor the number of incidents and logs to ensure the application is configured properly and not constantly wasting resources generating error messages. Review log levels to ensure they are not set too low. See Prepare to Troubleshoot Using Incidents, Logs, QuickTrace, and Diagnostic Tests for more information.
Monitor the database to ensure it is operating optimally.
Table 14-1 Key Host Metrics
| Metric Category | Metric Name | Warning Threshold | Critical Threshold | Comments |
|---|---|---|---|---|
| Disk Activity | Disk Device Busy | >80% | >95% | N/A |
| Filesystems | Filesystem Space Available | <20% | <5% | N/A |
| Load | CPU in I/O wait | >60% | >80% | N/A |
| Load | CPU Utilization | >80% | >95% | N/A |
| Load | Run Queue (5 min average) | >2 | >4 | The run queue is normalized by the number of CPU cores. |
| Load | Swap Utilization | >75% | >90% | N/A |
| Load | Total Processes | >15000 | >25000 | N/A |
| Load | Logical Free Memory % | <20 | <10 | N/A |
| Load | CPU in System Mode | >20% | >40% | N/A |
| Network Interfaces Summary | All Network Interfaces Combined Utilization | >80% | >95% | N/A |
| Switch/Swap Activity | Total System Swaps | >3 | >5 | Value is per second. |
| Paging Activity | Pages Paged-in (per second) | N/A | N/A | N/A |
| Paging Activity | Pages Paged-out (per second) | N/A | N/A | The combined value of Pages Paged-in and Pages Paged-out should be <=1000. |
Administrators will find it useful to study these suggestions for further analysis to undertake when a metric value exceeds a threshold. The commands provided are for UNIX-based operating systems.
When logical free memory/swap activity or paging activity is beyond threshold
This usually happens when memory is not sufficient to handle demands from all the running processes.
Linux: Run cat /proc/meminfo and confirm the total RAM is as expected.
AIX: Run /usr/sbin/prtconf | grep "Memory Size" and confirm the total RAM is as expected.
Solaris: Run /usr/sbin/prtconf | grep "Memory size" and confirm the total RAM is as expected.
Check if there are unallocated huge pages. If there are and the WebLogic Server/Oracle instances are not expected to use them, reduce the huge page pool size.
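On Linux, the huge-page pool can be inspected directly from /proc/meminfo; a minimal sketch (Linux-specific, field names as exposed by the kernel):

```shell
# Inspect the Linux huge-page pool. A HugePages_Free value close to
# HugePages_Total suggests pages are reserved but going unused and the
# pool size could be reduced.
grep -i '^HugePages' /proc/meminfo
```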
Linux: Run top and sort by resident memory (type OQ). Look for processes using the most resident memory and investigate those processes.
Solaris: Run top -o res. Look for processes using the most resident memory and investigate those processes.
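A non-interactive alternative to sorting in top, sketched with GNU ps flags (Linux; ps options differ on other platforms):

```shell
# List the ten processes using the most resident memory (RSS, in KB);
# equivalent to sorting top by the RES column.
ps -eo pid,rss,comm --sort=-rss | head -n 11
```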
When page activity is beyond threshold
Follow the steps in When logical free memory/swap activity or paging activity is beyond threshold to view and analyze memory usage.
When Network Interface Error Rate Is Beyond Threshold
The normal cause is a misconfiguration between the host and the network switch. A bad network card or cabling can also cause this error. Run /sbin/ifconfig to identify which interface is having packet errors. Contact the network administrator to ensure the host and the switch are using the same data rate and duplex mode. Otherwise, check whether the cabling or the network card is faulty and replace as appropriate.
When Packet Loss Rate Is Beyond Threshold
The normal cause of this error is network saturation or bad network hardware.
On a UNIX system:
Run lsof -Pni | grep ESTAB to determine which network paths are generating the problem.
Linux: Then run mtr <target host> or ping <target host> and look for packet loss on that segment.
    20 packets transmitted, 20 received, 0% packet loss, time 18997ms
    rtt min/avg/max/mdev = 0.168/0.177/0.200/0.010 ms
The packet loss should be 0% and rtt should be less than .5 ms.
Solaris: Then run /usr/sbin/ping -s <target host> and look for packet loss on that segment.
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip (ms) min/avg/max/stddev = 0.453/0.465/0.477/0.017
The packet loss should be 0% and round-trip should be less than .5 ms.
Ask the network monitoring staff to look for saturation or network packet loss from their side.
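Extracting the loss figure from the ping summary can be scripted; a sketch, with the sample summary line embedded so the parsing can be checked offline:

```shell
# Parse the packet-loss percentage out of a ping summary line and flag
# anything above 0%. The summary is a captured sample, not live output.
summary='20 packets transmitted, 20 received, 0% packet loss, time 18997ms'
loss=$(printf '%s\n' "$summary" | grep -o '[0-9.]*% packet loss' | cut -d'%' -f1)
if [ "$loss" = "0" ]; then
  echo "OK: ${loss}% packet loss"       # prints: OK: 0% packet loss
else
  echo "WARN: ${loss}% packet loss"
fi
```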
When Network Utilization Is Beyond Threshold
The normal cause is very heavy application load.
On a UNIX system:
Linux/Solaris: Run top or lsof to determine which processes are moving a lot of data.
Solaris: Use /usr/sbin/snoop to capture and inspect network packets.
Linux: Use atop, iftop, ntop or pkstat to see which processes are moving data.
Solaris: Use top to see which processes are moving data.
When CPU Usage or Run Queue Length Is Beyond Threshold
The normal cause is runaway demand, a poorly performing application, or poor capacity planning.
Linux/Solaris: Run top to identify which application/process is using time.
If top processes are WebLogic Server JVM processes, conduct a basic WebLogic Server health check. That is, review logs to see if there are configuration errors causing excessive exceptions, and review metrics to see if the load has increased. Use JVMD for a more detailed analysis.
If top processes are Oracle processes, use Enterprise Manager to look for high load SQL.
When System CPU Usage Is Beyond Threshold
High system CPU use could be due to kernel processes looking for pages to swap out during a memory shortage. Follow the steps listed in the When logical free memory/swap activity or paging activity is beyond threshold section to further diagnose the problem.
High system CPU use is also frequently related to various device failures. Run dmesg | less and look for repeated messages about errors on a particular device, and also have hardware support personnel check the hardware console to see if any errors are reported.
When Filesystem Usage Is Beyond Threshold
The normal cause is an application that is logging excessively or leaving behind temporary files.
On a UNIX system:
Linux/Solaris: Run lsof -d 1-99999 | grep REG | sort -nrk 7 | less to see currently open files sorted by size from largest to smallest. Investigate the large files.
Run du -k /mount_point_running_out_of_space > /tmp/sizes to get the space used by directories under the mount point. This may take a long time. While it is running, run sort -nr /tmp/sizes and find the directories using the most space; investigate those first.
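The two-step du/sort sequence can also be run as one pipeline; a sketch using /tmp as a stand-in for the mount point that is filling up:

```shell
# Summarize space used (in KB) by each directory under a mount point and
# show the ten largest first. /tmp is illustrative; substitute the real
# mount point running out of space.
du -k /tmp 2>/dev/null | sort -nr | head -n 10
```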
When Total Processes Is Beyond Threshold
The normal cause is runaway code or a stuck NFS filesystem.
Solaris: Run /usr/ucb/ps aux. If many processes are in status D, run df to check for stuck mounts.
If there are hundreds or thousands of processes of a particular program, determine why.
Linux: Run ps -eo pid,nlwp,cmd | sort -nrk 2 | head to look for processes with many threads.
Solaris: Run ps -eo pid,nlwp,comm,user | sort -nrk 2 | head to look for processes with many threads.
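On Linux, ps can do the sorting itself; a sketch with GNU ps flags:

```shell
# Show the ten processes with the most lightweight processes (threads),
# sorted by the nlwp field directly instead of piping through sort.
ps -eo pid,nlwp,comm --sort=-nlwp | head -n 11
```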
When Disk Device Busy Is Beyond Threshold
Check for disk drive failure.
UNIX: As root, check /var/log/messages* and /var/log/mcelog on Linux, and /var/adm/messages* on Solaris, to see if there are any error messages indicating disk failure. For a RAID array, the disk controller needs to be checked; the commands are specific to the controller manufacturer.
Look for processes that are using the disk. From a shell window, execute ps aux | grep ' D. ' several consecutive times to look for processes with stat D.
Poor performance is a major indicator of network connectivity problems.
Check for cumulative dropped packets for each host.
UNIX:
netstat -s | grep 'TCP data loss'
    4007 segments retransmited
    3302 TCP data loss events
The counts should be 0 or grow very slowly over time.
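Extracting the counters so they can be tracked over time can be sketched against the captured sample output (embedded here so the parsing can be verified offline):

```shell
# Pull the retransmit and data-loss counters out of netstat -s output.
# The sample below is a captured snippet, not live output.
sample='    4007 segments retransmited
    3302 TCP data loss events'
printf '%s\n' "$sample" | awk '/retransmited|TCP data loss/ {print $1}'
```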
Check for realtime dropped packets on specific network paths.
Run the ping command:
ping -c 20 other_host
    20 packets transmitted, 20 received, 0% packet loss, time 18997ms
    rtt min/avg/max/mdev = 0.168/0.177/0.200/0.010 ms
Packet loss should be 0%.
rtt should be less than .5 ms, except that it can be higher between the browser and load balancer.
Solaris: /usr/sbin/ping -s other_host <data_size> <npackets>
    2 packets transmitted, 2 packets received, 0% packet loss
Packet loss should be 0%.
Check for network interface errors.
Linux:
/sbin/ifconfig eth0 | grep errors
    RX packets:842803463 errors:0 dropped:0 overruns:0 frame:0
    TX packets:667946307 errors:0 dropped:0 overruns:0 carrier:0
Solaris:
/sbin/ifconfig -a    # list all the adapter names
kstat <adapter name> | grep -i error
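Pulling the error count out of an ifconfig line for alerting can be sketched as follows (the sample line is embedded so the extraction can be checked offline):

```shell
# Extract the error count from an ifconfig RX/TX line; anything
# persistently above zero warrants investigation.
line='RX packets:842803463 errors:0 dropped:0 overruns:0 frame:0'
errors=$(printf '%s\n' "$line" | grep -o 'errors:[0-9]*' | cut -d: -f2)
echo "interface errors: $errors"       # prints: interface errors: 0
```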
These metrics provide an indication of whether the WebLogic Server is in a healthy state. Performance may degrade if any of the metrics is exceeding its threshold.
Table 14-2 describes the WebLogic Server metrics that should be monitored in Cloud Control.
Table 14-2 WebLogic Server Metrics
| Metric Category | Metric Name | Warning Threshold | Critical Threshold | Comments |
|---|---|---|---|---|
| Datasource Metrics | Connections in Use | >250 | >400 | N/A |
| Datasource Metrics | Connection Requests that Waited (%) | >10% | >20% | N/A |
| Datasource Metrics | Connection Creation Time (ms) | N/A | N/A | N/A |
| JVM Garbage Collectors | Garbage Collector - Percent Time spent (elapsed) | >10% | >20% | N/A |
| JVM Metrics | Heap Usage | >90% | >98% | N/A |
| Response | Status | N/A | =Down | This provides instance availability. |
| Server Servlet/JSP Metrics | Request Processing Time (ms) | >10s | >15s | N/A |
| Server Work Manager Metrics | Work Manager Stuck Threads | >5 | >10 | N/A |
| JVM Threads | Deadlocked Threads | >2 | >5 | N/A |
| Module Metrics By Server | Active Sessions | N/A | N/A | N/A |
When CPU Usage on Host is Beyond Threshold and WebLogic Server Process is Identified as Top CPU Consumer
Examine the % Time spent in the GC metric to see if JVM is doing excessive GC (>20 percent). If so, follow the process for diagnosing WebLogic Server heap pressure as indicated in section When Percent Time Spent in Garbage Collector is Beyond Threshold.
Look for incident creation rate and error logs and see if something is triggering a massive amount of logging/errors.
In JVMD, select the CPU state filter and look at top methods. Look for threads that are consistently in a CPU state.
When There is a Spike in Active Web Sessions
Check access logs to see if there is a spike in the number of users.
Check if there are stuck threads, which could cause users to log in again.
Check session distribution across WebLogic Server managed servers and see if there is a problem with the load balancer.
Check the session timeout in web.xml, and see if it is too high or too low.
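For reference, the session timeout lives in the application's web.xml; a minimal illustrative fragment (the 30-minute value is an example, not an Oracle recommendation):

```xml
<session-config>
    <!-- timeout in minutes; tune to balance heap use against re-login churn -->
    <session-timeout>30</session-timeout>
</session-config>
```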
When There are Stuck Threads on the System
Analyze live stuck threads using Live Thread Analysis (LTA). For more information about investigating both live and past stuck threads, see the sections Viewing the JVM Live Thread Analysis Page and Using Java Workload Explorer in the Oracle Enterprise Manager Cloud Control Oracle Fusion Middleware Management Guide.
When There are Deadlocks Detected on the System
In JVMD, inspect the threads that are in a blocked state.
Deadlocked threads will normally also be reported as stuck threads in the WebLogic Server log. Use the Request Monitor to search for the ECID and expand down into JVMD to show the blocking thread.
When Request Processing Time is Beyond Threshold
Examine the % Time spent in GC metric to see if the JVM is doing excessive garbage collection.
Look for the incident creation rate and error logs and see if something is triggering a massive amount of logging/errors.
In JVMD, look at the thread states and see where most processing time is going.
Check the metric Garbage Collection - Invocation Time (ms) under the JVM Garbage Collectors metric category.
Sometimes, if many managed server instances run on the same host, it is possible to reduce time spent in garbage collection by reducing the number of garbage collector threads in each JVM. The default is based on the number of CPUs and could be too high if there are multiple active JVMs running on the same machine. In those cases, if JRockit is being used, add the -XXgcThreads=4 option when starting the JVM. To add the option, edit the DOMAIN_HOME/bin/fusionapps_start_params.properties file, look for -Xgc:genpar, and add the -XXgcThreads=4 option after it (for example, -Xgc:genpar -XXgcThreads=4). The value 4 directs the JVM to use four threads to perform garbage collection. Try different values from 4 up to the number of CPU cores and observe whether the % Time spent in GC metric improves.
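The edit described above can be scripted; a sketch that splices the option in with sed, run here against a temporary stand-in file rather than the real DOMAIN_HOME/bin/fusionapps_start_params.properties (the stand-in content below is an assumption for illustration):

```shell
# Splice -XXgcThreads=4 in after -Xgc:genpar. A minimal stand-in file is
# created so the substitution can be demonstrated safely; point f at the
# real properties file when applying this for real (back it up first).
f=$(mktemp)
printf 'jvm.args=-Xgc:genpar -Xmx4g\n' > "$f"
sed -i 's/-Xgc:genpar/-Xgc:genpar -XXgcThreads=4/' "$f"
cat "$f"       # prints: jvm.args=-Xgc:genpar -XXgcThreads=4 -Xmx4g
rm -f "$f"
```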
When Percent Time Spent in Garbage Collector is Beyond Threshold
Check the session count. If there is a sudden surge of sessions due to user load, the JVM could be short on heap. Increase heap if possible, or add additional managed server instances.
Look at the stuck threads count. Stuck threads could increase the number of active sessions, as users could be launching new sessions hoping for a faster response.
Look at the incident creation rate and error logs and see if something is triggering a massive amount of logging/errors. The incident creation/logging operations could be causing a high amount of object creation and garbage collection stress.
Generate a heap dump using JVMD and analyze the top retainer of memory.
Use JRMC to connect and extract a JFR recording. Examine the Memory panel and allocation details to see what is doing a lot of allocations.
When Percent Connection Requests Waiting is Beyond Threshold
Examine the number of sessions and request rate, and see if there is a spike in the load that would account for an increased demand for connections.
In JVMD, see where time is spent. For example, requests could be running longer due to slow SQLs (and retain the connection longer). In that case, identify and tune slow SQLs.
Consider increasing the initial capacity setting of the corresponding data source.
These metrics provide an indication of whether the Oracle HTTP Server is in a healthy state. Performance may degrade if any of the metrics is exceeding its threshold.
Table 14-3 describes the Oracle HTTP Server metrics that should be monitored in Cloud Control. See the Monitor Mid-Tier Elements section.
Table 14-3 Oracle HTTP Server Metrics
| Metric Category | Metric Name | Warning Threshold | Critical Threshold |
|---|---|---|---|
| OHS Server Metrics | Busy Threads (%) | >85% | >95% |
| OHS Server Metrics | Request Throughput (requests per second) | N/A | N/A |
| OHS Response Code Metrics | HTTP 4xx errors | N/A | N/A |
| OHS Response Code Metrics | HTTP 5xx errors | N/A | N/A |
| OHS Virtual Host Metrics | Request Processing Time for a Virtual Host | >10s | >15s |
When Busy Threads % is Beyond Threshold
Check request throughput to see if load has increased. If the increased load is expected and the CPU and memory resources on the OHS host have not exceeded their thresholds, consider increasing ServerLimit/MaxClients and ThreadsPerChild in httpd.conf.
Check request process time on both OHS and underlying WebLogic Server to see if requests are taking longer. If WebLogic Server response time is increasing, check the key metrics for the WebLogic Server.
If possible, ensure the client browser cache is enabled to reduce number of requests submitted.
Check OHS Response Code Metrics. If there is a sudden increase of HTTP 4xx errors or HTTP 5xx errors, check the health of the underlying WebLogic Servers.
Check and increase the minimum and maximum spare threads for Oracle HTTP Server in the httpd.conf file located at instance_home/config/ohs/<ohs_name>/httpd.conf:
Increase MaxSpareThreads to 800.
Increase MinSpareThreads to 200.
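An illustrative httpd.conf fragment with those values; the directive names and values are from the text, while the surrounding <IfModule> wrapper for the worker MPM is an assumption about how the file is organized:

```apache
# Spare-thread pool sizing from the tuning guidance above.
<IfModule mpm_worker_module>
    MinSpareThreads   200
    MaxSpareThreads   800
</IfModule>
```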
When Request Processing Time for a Virtual Host Exceeds Threshold
Check the key host metrics to ensure the OHS host is healthy.
For each URL requested, OHS will first check DocumentRoot before passing the request to WebLogic Server. Check the utilization and health of the disk to which the DocumentRoot is pointing. If it is an NFS mount, check the health of the NFS mount point.
Check the key metrics for the underlying WebLogic Server(s) and see if they are healthy.
OHS accesses /tmp for each POST request, so check the performance of the /tmp filesystem.
These metrics provide an indication of whether the Oracle Business Intelligence Server is in a healthy state.
To start monitoring:
Fusion Applications Control can monitor various BI components, including:
WebLogic Analytics Application
Oracle BI Presentation Services
Oracle BI Server
Oracle WebLogic Server (administration and managed servers)
Oracle Access Manager and Oracle Identity Manager are both WebLogic Server instances. See Analyze WebLogic Server Metrics to monitor their health. See also Tune and Troubleshoot Oracle Identity Management.
Use Cloud Control to monitor the Oracle Internet Directory and Oracle Identity Manager databases. See Table 14-4 for Oracle Identity Manager metrics.
Table 14-4 Oracle Identity Manager Metrics
| Oracle Identity Manager Cluster or Server | Metric Category | Metric Name | Warning Threshold | Critical Threshold |
|---|---|---|---|---|
| Oracle Identity Manager Cluster | Provisioning Requests | Completed Provisioning Requests | NA | NA |
| Oracle Identity Manager Cluster | Provisioning Requests | Failed Provisioning Requests | NA | NA |
| Oracle Identity Manager Cluster | Reconciliations (Last 24 Hours) | Jobs Completed | NA | NA |
| Oracle Identity Manager Cluster | Response | Status | NA | NA |
| Oracle Identity Manager Cluster | Role Grant Requests | Completed Role Grant Requests | NA | NA |
| Oracle Identity Manager Cluster | Role Grant Requests | Completed Role Grant Requests Processing Time (per sec) | NA | NA |
| Oracle Identity Manager Cluster | Role Grant Requests | Failed Role Grant Requests | NA | NA |
| Oracle Identity Manager Cluster | Role Grant Requests | Pending Role Grant Requests | NA | NA |
| Oracle Identity Manager Cluster | Self Service Requests | Completed Self Service Requests | NA | NA |
| Oracle Identity Manager Cluster | Self Service Requests | Completed Self Service Requests Processing Time (per sec) | NA | NA |
| Oracle Identity Manager Cluster | Self Service Requests | Failed Self Service Requests | NA | NA |
| Oracle Identity Manager Cluster | Self Service Requests | Pending Self Service Requests | NA | NA |
| Oracle Identity Manager Server | Response | Status | NA | NA |
| Oracle Identity Manager Server | Resource Utilization | CPU Utilization (%) | 80 | 90 |
| Oracle Identity Manager Server | Resource Utilization | Memory Utilization (%) | 80 | 90 |
| Oracle Identity Manager Server | Adapters | Average Adapter Execution Time (ms) | NA | NA |
| Oracle Identity Manager Server | Adapters | Completed Adapter Executions | NA | NA |
| Oracle Identity Manager Server | Adapters | Maximum Adapter Execution Time (ms) | NA | NA |
| Oracle Identity Manager Server | Adapters | Minimum Adapter Execution Time (ms) | NA | NA |
See Table 14-5 for Oracle Internet Directory metrics.
Table 14-5 Oracle Internet Directory Metrics
| Metric Category | Metric Name | Warning Threshold | Critical Threshold |
|---|---|---|---|
| LDAP Operation Response Time | Bind Operation Response Time | NA | NA |
| LDAP Operation Response Time | Request Processing Time (ms) | NA | NA |
| LDAP Server Resource Usage | Total CPU Usage (%) | 80 | 90 |
| LDAP Server Resource Usage | Total Memory Usage (%) | 80 | 90 |
| Response | Status | NA | NA |
| (Critical Events) System Resource Events (3113 Errors) | Number of 3113 Error Occurrences | NA | NA |
| (Critical Events) System Resource Events (3114 Errors) | Number of 3114 Error Occurrences | NA | NA |
| LDAP Failed Bind Operations Profile | Failed Bind Operations | NA | NA |
| LDAP Server Resource Usage | Completed Bind Operations | NA | NA |
| LDAP Server Resource Usage | Completed Compare Operations | NA | NA |
| LDAP Server Resource Usage | Completed Modify Operations | NA | NA |
| LDAP Server Resource Usage | Completed Search Operations | NA | NA |
| LDAP Server Resource Usage | Total Operations | NA | NA |
| User LDAP Operations Statistics | Failed Base Search Operations | NA | NA |
| User LDAP Operations Statistics | Failed Bind Operations | NA | NA |
| User LDAP Operations Statistics | Failed Compare Operations | NA | NA |
| User LDAP Operations Statistics | Failed Delete Operations | NA | NA |
| User LDAP Operations Statistics | Successful Base Search Operations | NA | NA |
To enable the collection of user LDAP operation statistics, edit the configuration using Fusion Applications Control and enable user statistics collection for the relevant user (for example, cn=orcladmin).

The metrics shown in Table 14-6 provide an indication of whether the Enterprise Scheduler instance is performing well.
Table 14-6 Key Enterprise Scheduler Metrics
| Metric Category | Metric Name | Warning Threshold | Critical Threshold | Comments |
|---|---|---|---|---|
| Completed Job Summary | Average Elapsed Time (ms) | NA | NA | Different thresholds can be defined for different job names. |
| Long Running Job | Elapsed Time (ms) | NA | NA | NA |
| WorkAssignment Metrics aggregated across Group Members | Average Wait Time for Requests in Ready State (seconds) | NA | NA | NA |
When the Value of Average Elapsed Time for the Completed Jobs is Higher than Expected
Check the key host and WebLogic Server metrics and see if any component that could be involved in process batch jobs is in an unhealthy state.
Follow the steps listed in section Using Java Workload Explorer in the Oracle Enterprise Manager Cloud Control Oracle Fusion Middleware Management Guide.
When the Value of Elapsed Time Under the Long Running Job Category is Higher than Expected
Open the Enterprise Scheduler home page in Oracle Enterprise Manager Fusion Applications Control and examine the Top 10 Long Running Jobs.
Follow the steps listed in section Using Java Workload Explorer in the Oracle Enterprise Manager Cloud Control Oracle Fusion Middleware Management Guide.
When Average Wait Time for Requests in Ready State (seconds) is Higher than Expected
Follow the steps listed in section Using Java Workload Explorer in the Oracle Enterprise Manager Cloud Control Oracle Fusion Middleware Management Guide.
Monitoring SOA involves monitoring SOA Infrastructure, SOA Composite and SOA servers.
Use Table 14-7 to locate the key performance metrics for SOA Composite.
Table 14-7 SOA Composite Metrics
| Metric Category | Metric Name |
|---|---|
| Mediator Case | Invocation count throughput in last 5 minutes |
| SOA Composite - Response Metrics | Composite Status |
| SOA Composite - Component Detail Metrics | Component: Business Faults |
| SOA Composite - Component Detail Metrics | Component: Error Rate (%) |
| SOA Composite - Services/References Detail Metrics | Service/Reference: Average Incoming Messages Processing Time (ms) |
| SOA Composite - Services/References Detail Metrics | Service/Reference: Average Outbound Messages Processing Time (ms) |
| SOA Composite - Services/References Detail Metrics | SOA Composite: Error Rate (%) |
| SOA Composite - Services/References Detail Metrics | SOA Composite: Synchronous Response Time (ms) |
| SOA Composite - Services/References Detail Metrics | SOA Composite: Total Business |
Use Table 14-8 to locate the key performance metrics for SOA Infrastructure.
Table 14-8 SOA Infrastructure Metrics
| Metric Category | Metric Name |
|---|---|
| SOA Infra Response | Up Down Status |
| SOA Infrastructure - Message Detail Metrics | Errors (minute) |
| SOA Infrastructure - Service Engine Detail Metrics | Service Engine: Error Rate (%) |
Follow the steps in this section to tune Oracle Identity Management specifically for Oracle Fusion Applications.
Most of these settings should be set by default if the environment is newly provisioned. If the environment is upgraded from a previous release, it will be necessary to manually check and adjust the settings.
Two Oracle Internet Directory configuration parameters, orclmaxcc and orclserverprocs, need to be tuned.
Change orclmaxcc to 10 and tune the number of OID processes:
Name the sample script config_oid_tuning.ldif. The cn=oid1 value needs to be set to the component name; in a multi-component environment, change it accordingly. Set orclserverprocs to the number of cores in the OID server being used.
dn: cn=oid1,cn=osdldapd,cn=subconfigsubentry
changetype: modify
replace: orclmaxcc
orclmaxcc: 10
-
replace: orclserverprocs
orclserverprocs: <number of cores>
Apply the script by running this command:
ldapmodify -p portNum -h hostname -D cn=orcladmin -f config_oid_tuning.ldif
Add parameters to enable timing logging for OID.
Add this entry to the config.xml file in the ./oid/user_projects/domains/oid_domain/config/ and ./oim/user_projects/domains/oim_domain/config/ directories for each WebLogic Server in the Oracle Identity Management domain:
<web-server>
  <web-server-log>
    <file-name>logs/access.log.%yyyyMMdd%</file-name>
    <rotation-type>byTime</rotation-type>
    <number-of-files-limited>true</number-of-files-limited>
    <rotate-log-on-startup>true</rotate-log-on-startup>
    <buffer-size-kb>0</buffer-size-kb>
    <logging-enabled>true</logging-enabled>
    <elf-fields>date time time-taken bytes c-ip s-ip sc-status sc(X-ORACLE-DMS-ECID) cs-method cs-uri cs(User-Agent) cs(ECID-Context) cs(Proxy-Remote-User) cs(Proxy-Client-IP)</elf-fields>
    <log-file-format>extended</log-file-format>
    <log-time-in-gmt>false</log-time-in-gmt>
    <log-milli-seconds>true</log-milli-seconds>
  </web-server-log>
</web-server>
To set the access log format, add this string to the httpd.conf file in the /u01/ohsauth/ohsauth_inst/config/OHS/ohs1 path:
LogFormat "%h %l %u %t \"%r\" %>s %b %D %{X-ORACLE-DMS-ECID}o" common