The installation log files are located at $ORACLE_BASE/oraInventory/logs. For example: /u01/app/oraInventory/logs.
When installing the OHF Middle-Tier, the installer generates the following installation log files:
| Log File | Description |
|---|---|
| installActions&lt;timestamp&gt;.log | Records the actions of the installer and can be used to diagnose installer issues. |
| oraInstall&lt;timestamp&gt;.out | Records the output of all scripts run by the installer. |
| oraInstall&lt;timestamp&gt;.err | Records the errors from all scripts run by the installer. |
The log files are time-stamped, and each installation session creates a new set of log files.
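Because each session writes a fresh timestamped set, listing the newest files is a quick way to find the logs for the most recent run. A minimal sketch, assuming a Linux shell; the `/u01/app` fallback used when `ORACLE_BASE` is unset is illustrative only:

```shell
# Sketch: list the newest installer log files from the most recent session.
# ORACLE_BASE and the /u01/app fallback are assumptions; adjust for your system.
LOGDIR="${ORACLE_BASE:-/u01/app}/oraInventory/logs"

# Newest files first; each install session writes a fresh timestamped set.
ls -t "$LOGDIR"/installActions*.log \
      "$LOGDIR"/oraInstall*.out \
      "$LOGDIR"/oraInstall*.err 2>/dev/null | head -3
```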
An installation summary with all the parameters provided for the installer is saved at:
<INSTALL_HOME>/reports/dps_install_<timestamp>.html
Note:
When reporting any problems that occur during Middle-Tier installation, make sure that you include all of the above log files.

| Issue | Fix |
|---|---|
| The installer fails due to the time taken by the node manager process to start. | Check the machine's network configuration to make sure that no other process is listening on the node manager port, and that the user running the installer has the required file system permissions. |
| The AdminServer fails to start because the node manager process is not available. | Verify that the node manager process is still running. |
| A wrong database configuration is provided. | Modify the database configuration from the WebLogic Admin console. |
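To check whether another process already holds the node manager port, a port scan along these lines can help. This is a sketch: port 5556 is the common WebLogic Node Manager default, but your configuration may use a different port.

```shell
# Sketch: check whether another process already listens on the node manager port.
# Port 5556 is the usual WebLogic Node Manager default; yours may differ.
NM_PORT=5556

if ss -ltn 2>/dev/null | grep -q ":${NM_PORT} "; then
    echo "Port ${NM_PORT} is already in use; identify the listening process."
else
    echo "Port ${NM_PORT} appears free."
fi

# Also confirm the installing user can write to the Oracle directories, e.g.:
#   ls -ld "${ORACLE_BASE:-/u01/app}"
```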
| Issue | Fix |
|---|---|
| The installer fails to connect to the AdminServer. | Verify that the AdminServer is running on the primary node by accessing the WebLogic Admin console from the secondary node. |
| The installer fails due to a wrong FMW path. | Make sure WebLogic is installed in the same file system location as on the primary node. |
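Reachability of the AdminServer from the secondary node can be probed from the command line before rerunning the installer. A hedged sketch: the host name is a placeholder, and port 7001 is only the WebLogic default; substitute your actual admin host and port.

```shell
# Sketch: from the secondary node, probe the AdminServer on the primary node.
# ADMIN_HOST is a placeholder; 7001 is the WebLogic default listen port.
ADMIN_HOST=primary-node.example.com
ADMIN_PORT=7001

if curl -s -o /dev/null --connect-timeout 5 "http://${ADMIN_HOST}:${ADMIN_PORT}/console"; then
    echo "AdminServer console is reachable."
else
    echo "Cannot reach the AdminServer; check that it is running on the primary node."
fi
```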
Sometimes, a primary or secondary node may not start due to one of the following errors in the WebLogic log files:
<Warning> (thread=Cluster, member=n/a): Received a discovery message that indicates the presence of an existing cluster that does not respond to join requests; this is usually caused by a network layer failure.
<Warning> (thread=Cluster, member=n/a): Delaying formation of a new cluster; IpMonitor failed to verify the reachability of senior Member…
…
If this persists it is likely the result of a local or remote firewall rule blocking either ICMP pings, or connections to TCP port 7.
To resolve these errors, make sure that DNS resolution for the primary and secondary node machines returns the same IP address whether you ping the machines from the local system or from other systems.
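A consistency check of this kind can be scripted. In this sketch the host names are placeholders for your primary and secondary node machines; run it on each system in the cluster and compare the addresses reported.

```shell
# Sketch: confirm the cluster node host names resolve consistently.
# The host names below are placeholders; substitute your own machines.
for host in primary-node.example.com secondary-node.example.com; do
    # getent queries the same resolver order the OS uses (hosts file, then DNS).
    getent hosts "$host" || echo "Could not resolve $host"
done
# Repeat from every machine in the cluster; each system should see the
# same IP address for a given host name.
```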