Oracle Solaris Cluster Data Service for Oracle Application Server Guide |
1. Installing and Configuring Solaris Cluster HA for Oracle Application Server
HA for Oracle Application Server Overview
Overview of Installing and Configuring HA for Oracle Application Server
Planning the HA for Oracle Application Server Installation and Configuration
Restriction for the supported configurations of HA for Oracle Application Server
Restriction for the location of Oracle Application Server files
Determine which Solaris zone Oracle Application Server will use
Verifying the Installation and Configuration of Oracle Application Server
How to Verify the Installation and Configuration of Oracle Application Server
Installing the HA for Oracle Application Server Packages
How to Install the HA for Oracle Application Server Packages
Registering and Configuring Solaris Cluster HA for Oracle Application Server
How to Register and Configure Solaris Cluster HA for Oracle Application Server
Verifying the Solaris Cluster HA for Oracle Application Server Installation and Configuration
How to Verify the Solaris Cluster HA for Oracle Application Server Installation and Configuration
Upgrading HA for Oracle Application Server
How to Upgrade to the New Version of HA for Oracle Application Server
Understanding the Solaris Cluster HA for Oracle Application Server Fault Monitor
Probing Algorithm and Functionality
Operations of the Oracle 9iAS Application Server Probe
Operations of the Oracle Application Server 10g Probe
Debug Solaris Cluster HA for Oracle Application Server
How to turn on debug for Solaris Cluster HA for Oracle Application Server
A. Deployment Example: Installing Oracle Application Server in Zones
This section contains the procedures you need to install and configure Oracle Application Server.
Refer to Determine which Solaris zone Oracle Application Server will use for more information.
Refer to System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones for complete information about installing and configuring a zone.
Repeat this step on all nodes of the cluster if a zone is being used.
Boot the zone if it is not running.
# zoneadm list -v
# zoneadm -z zonename boot
Refer to Oracle Solaris Cluster Software Installation Guide for information about creating a cluster file system and to Oracle Solaris Cluster Data Services Planning and Administration Guide for information about creating a highly available local file system.
Note - Currently, Oracle Application Server does not support the infrastructure installation on a hardware cluster. Therefore, for the duration of the installation and post-installation configuration, Oracle Solaris Cluster must be stopped.
Your data service configuration might not be supported if you do not adhere to these requirements.
If a cluster file system was created, then you must temporarily remove the global mount option from /etc/vfstab, as you will manually mount the cluster file system as a local file system after you have stopped the cluster.
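For illustration, a cluster file system entry in /etc/vfstab and its temporary local form might look like the following. The device names and mount point here are assumptions, not values from this procedure; only the removal of the global option matters.

```
# Original entry (cluster file system, mounted globally):
/dev/md/oraset/dsk/d100  /dev/md/oraset/rdsk/d100  /global/ora9ias  ufs  2  yes  global,logging

# Temporary entry for the installation (global option removed):
/dev/md/oraset/dsk/d100  /dev/md/oraset/rdsk/d100  /global/ora9ias  ufs  2  no   logging
```

Reinstate the global option in this entry after the installation, before rebooting the nodes into the cluster.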
# cluster shutdown -y -g0
ok boot -x
Perform this step from the global zone on one node of the cluster.
Ensure the node has ownership of the disk set or disk group.
For Solaris Volume Manager:
# metaset -s disk-set -t -C take
For Veritas Volume Manager:
# vxdg -C import disk-group
# vxdg -g disk-group startall
# mount highly-available-local-file-system
Create the mount point on all zones of the cluster that are being used for Oracle Application Server.
# zlogin zonename mkdir highly-available-local-file-system
Mount the highly available local file system on one of the zones being used.
# mount -F lofs highly-available-local-file-system \
> /zonepath/root/highly-available-local-file-system
# zpool import -R / HAZpool
# zpool export -f HAZpool
# zpool import -R /zonepath/root HAZpool
Perform this step from the global zone on one node of the cluster.
# ifconfig interface addif logical-hostname up
# ifconfig interface addif logical-hostname up zone zonename
Perform this step from the global zone or zone on one node of the cluster, if you are installing Oracle 9iAS Application Server Infrastructure for version 9.0.2 or version 9.0.3.
Note - Refer to Step 10 if you are installing the Oracle Application Server 10g Infrastructure version 9.0.4 or version 10.
Refer to Oracle Application Server, Quick Installation Guide for complete installation information.
Perform this step on all cluster nodes where Oracle 9iAS Application Server infrastructure will run.
To provide logical host interpositioning for the Oracle 9iAS Application Server infrastructure, you must create a symbolic link from /usr/lib/secure/libschost.so.1 to /usr/cluster/lib/libschost.so.1 on all cluster nodes where the infrastructure will run.
# cp /usr/cluster/lib/libschost.so.1 /usr/lib/libschost.so.1
# cd /usr/lib/secure
# ln -s /usr/lib/libschost.so.1 libschost.so.1
You must specify the short logical hostname for the Oracle 9iAS Application Server infrastructure, not the fully qualified name.
# su - oracle-application-server-userid
$ LD_PRELOAD_32=libschost.so.1
$ SC_LHOSTNAME=logical-hostname
$ export LD_PRELOAD_32 SC_LHOSTNAME
Test that the short logical hostname is returned.
$ uname -n
If the short logical hostname is returned, you can proceed with the installation.
$ cd oracle-application-server-install-directory
$ ./runInstaller
Follow the Oracle Universal Installer screens as required.
When prompted for the Oracle home directory you must enter a destination directory that resides on the cluster file system or highly available local file system. If the destination directory does not exist, Oracle Universal Installer creates it.
After installing the software you will be prompted to execute $ORACLE_HOME/root.sh before the Oracle Universal Installer continues with the configuration assistants.
Before executing $ORACLE_HOME/root.sh, you must complete the following in another window, as some configuration assistants require that LD_PRELOAD_32 and SC_LHOSTNAME are set.
$ cd $ORACLE_HOME/Apache/Apache/bin
$ vi apachectl
Add the following three lines to the CONFIGURATION section in apachectl just before the PIDFILE=. You must specify the short logical hostname for SC_LHOSTNAME.
LD_PRELOAD_32=libschost.so.1
SC_LHOSTNAME=logical-hostname
export LD_PRELOAD_32 SC_LHOSTNAME
After amending apachectl, you can execute $ORACLE_HOME/root.sh as root and continue with the Oracle Universal Installer.
After the install is complete, you must copy the emtab and oratab files from /var/opt/oracle to the other cluster nodes or zones.
If other Oracle databases have been defined on other cluster nodes or zones, instead of copying emtab and oratab to /var/opt/oracle on the other cluster nodes or zones, you must add the Oracle Application Server database entries to emtab and oratab in /var/opt/oracle on the other cluster nodes or zones.
other-node# mkdir -p /var/opt/oracle
other-node# rcp install-node:/var/opt/oracle/*tab /var/opt/oracle
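If you append entries instead of copying the files, each oratab line has the form SID:ORACLE_HOME:startup-flag. The following sketch uses a /tmp stand-in for /var/opt/oracle/oratab, an assumed SID iasdb, and an assumed ORACLE_HOME; substitute the values from your own installation.

```shell
# Stand-in for /var/opt/oracle/oratab on the other node
# (path, SID "iasdb", and ORACLE_HOME are illustrative assumptions).
ORATAB=/tmp/oratab.example
echo "localdb:/u01/app/oracle:Y" > "$ORATAB"        # pre-existing local entry
echo "iasdb:/global/ora9ias/infra:N" >> "$ORATAB"   # appended Application Server entry
cat "$ORATAB"
```

Appending preserves any database entries already registered on that node, which copying the whole file would overwrite.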
Complete any prerequisites before installing the Oracle 9.0.2.3 Core Patchset. Refer to Oracle Metalink Note 243561.1 for more information.
When installing the Oracle 9.0.2.3 Core Patchset, follow the Oracle Universal Installer screens as required.
After installing the software, you will be prompted to execute $ORACLE_HOME/root.sh before the Oracle Universal Installer continues with the configuration assistants.
Before executing $ORACLE_HOME/root.sh, you must complete the following in another window, as some configuration assistants require that LD_LIBRARY_PATH, LD_PRELOAD_32, and SC_LHOSTNAME are set.
Edit $ORACLE_HOME/opmn/conf/opmn.xml and include entries for LD_LIBRARY_PATH, LD_PRELOAD_32, and SC_LHOSTNAME in the environment section for OC4J_DAS, home, OC4J_Demos, and CUSTOM.
Note - LD_LIBRARY_PATH should contain the fully qualified $ORACLE_HOME/lib value.
DISPLAY should contain the short logical hostname.
SC_LHOSTNAME should contain the fully qualified logical hostname.
The following shows a sample $ORACLE_HOME/opmn/conf/opmn.xml after it has been modified.
# su - oracle-application-server-userid
$ cd $ORACLE_HOME/opmn/conf
$ cat opmn.xml
<ias-instance xmlns="http://www.oracle.com/ias-instance">
  <notification-server>
    <port local="6100" remote="6200" request="6003"/>
    <log-file path="/global/ora9ias/infra/opmn/logs/ons.log" level="3"/>
  </notification-server>
  <process-manager>
    <ohs gid="HTTP Server" maxRetry="3">
      <start-mode mode="ssl"/>
    </ohs>
    <oc4j maxRetry="3" instanceName="home" numProcs="1">
      <config-file path="/global/ora9ias/infra/j2ee/home/config/server.xml"/>
      <oc4j-option value="-properties"/>
      <port ajp="3000-3100" jms="3201-3300" rmi="3101-3200"/>
      <environment>
        <prop name="DISPLAY" value="ora9ias:0.0"/>
        <prop name="LD_LIBRARY_PATH" value="/global/ora9ias/infra/lib"/>
        <prop name="SC_LHOSTNAME" value="ora9ias.com"/>
        <prop name="LD_PRELOAD_32" value="libschost.so.1"/>
      </environment>
    </oc4j>
    <oc4j maxRetry="3" instanceName="OC4J_DAS" gid="OC4J_DAS" numProcs="1">
      <config-file path="/global/ora9ias/infra/j2ee/OC4J_DAS/config/server.xml"/>
      <java-option value="-server -Xincgc -Xnoclassgc -Xmx256m "/>
      <oc4j-option value="-properties"/>
      <port ajp="3001-3100" jms="3201-3300" rmi="3101-3200"/>
      <environment>
        <prop name="DISPLAY" value="ora9ias:0.0"/>
        <prop name="LD_LIBRARY_PATH" value="/global/ora9ias/infra/lib"/>
        <prop name="SC_LHOSTNAME" value="ora9ias.com"/>
        <prop name="LD_PRELOAD_32" value="libschost.so.1"/>
      </environment>
    </oc4j>
    <oc4j maxRetry="3" instanceName="OC4J_Demos" gid="OC4J_Demos" numProcs="1">
      <config-file path="/global/ora9ias/infra/j2ee/OC4J_Demos/config/server.xml"/>
      <java-option value="-Xmx512M "/>
      <oc4j-option value="-userThreads -properties"/>
      <port ajp="3001-3100" jms="3201-3300" rmi="3101-3200"/>
      <environment>
        <prop name="%LIB_PATH_ENV%" value="%LIB_PATH_VALUE%"/>
        <prop name="DISPLAY" value="ora9ias:0.0"/>
        <prop name="LD_LIBRARY_PATH" value="/global/ora9ias/infra/lib"/>
        <prop name="SC_LHOSTNAME" value="ora9ias.com"/>
        <prop name="LD_PRELOAD_32" value="libschost.so.1"/>
      </environment>
    </oc4j>
    <custom gid="dcm-daemon" numProcs="1" noGidWildcard="true">
      <start path="/global/ora9ias/infra/dcm/bin/dcmctl daemon -logdir /global/ora9ias/infra/dcm/logs/daemon_logs"/>
      <stop path="/global/ora9ias/infra/dcm/bin/dcmctl shutdowndaemon"/>
      <environment>
        <prop name="DISPLAY" value="ora9ias:0.0"/>
        <prop name="LD_LIBRARY_PATH" value="/global/ora9ias/infra/lib"/>
        <prop name="SC_LHOSTNAME" value="ora9ias.com"/>
        <prop name="LD_PRELOAD_32" value="libschost.so.1"/>
      </environment>
    </custom>
    <log-file path="/global/ora9ias/infra/opmn/logs/ipm.log" level="3"/>
  </process-manager>
</ias-instance>
After amending opmn.xml, you can execute $ORACLE_HOME/root.sh as root and continue with the Oracle Universal Installer.
Perform this step within the global zone or non-global zone as required, if you are installing Oracle Application Server 10g infrastructure version 9.0.4 or version 10.
Note - Refer to Step 9 if you are installing the Oracle 9iAS Application Server infrastructure version 9.0.2 or version 9.0.3.
Refer to Oracle Application Server, Quick Installation Guide for complete installation information.
# su - oracle-application-server-userid
$ cd oracle-application-server-install-directory
$ ./runInstaller
Follow the Oracle Universal Installer screens as required.
When prompted for the Oracle home directory you must enter a destination directory that resides on the cluster file system or highly available local file system. If the destination directory does not exist, Oracle Universal Installer creates it.
When prompted with the Configuration Options screen, you must select High Availability and Replication and provide the fully qualified logical hostname when requested to do so.
After the install is complete, you must copy the oratab file from /var/opt/oracle to the other cluster nodes or zones.
If other Oracle databases have been defined on other cluster nodes or zones, instead of copying oratab to /var/opt/oracle on the other cluster nodes or zones, you must add the Oracle Application Server database entries to oratab in /var/opt/oracle on the other cluster nodes or zones.
other-node# mkdir -p /var/opt/oracle
other-node# rcp install-node:/var/opt/oracle/oratab /var/opt/oracle
The middle tier may be installed on multiple active nodes to achieve high availability. Typically the middle tier and infrastructure are installed on separate nodes. However, you may wish to install the middle tier on the same cluster nodes as the infrastructure. This can be done by installing the middle tier on local disks on each cluster node.
However, whenever the middle tier and infrastructure share a cluster node, two /var/opt/oracle directories must be maintained.
One for the infrastructure where oraInst.loc points to the infrastructure oraInventory directory on shared disk.
Another for the middle tier instance installed on local disk on each cluster node where oraInst.loc points to another oraInventory directory on local disk of that node.
These separate directories are needed for applying patches and performing other upgrades or maintenance tasks.
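As an illustration of the two directories, the oraInst.loc files might look like the following. The inventory paths and group name are assumptions; use the values recorded by your own installations.

```
# Infrastructure copy, with oraInventory on the shared disk:
inventory_loc=/global/ora9ias/oraInventory
inst_group=dba

# Middle-tier copy, with oraInventory on each node's local disk:
inventory_loc=/u01/app/oracle/oraInventory
inst_group=dba
```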
If you install the middle tier on the same cluster nodes as the infrastructure, you must copy the infrastructure /var/opt/oracle to another location. You can then apply patches or upgrades to the infrastructure or middle tier by reinstating the corresponding original copy of /var/opt/oracle.
Save /var/opt/oracle on each cluster node where the middle tier and infrastructure are installed together.
# cp -rp /var/opt/oracle /var/opt/oracle_infra
If you have copied /var/opt/oracle you will need to supply the new infrastructure directory when you register the HA for Oracle Application Server data service.
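The swap between the two copies can be sketched as follows. This example uses /tmp stand-ins so it can be run without root access; on a cluster node the real paths are /var/opt/oracle and /var/opt/oracle_infra, and the _mt suffix for the saved middle-tier copy is an illustrative assumption.

```shell
# Set up stand-in directories (on a real node these already exist).
BASE=/tmp/varopt-example
rm -rf "$BASE"
mkdir -p "$BASE/oracle" "$BASE/oracle_infra"
echo "middle-tier inventory" > "$BASE/oracle/oraInst.loc"
echo "infrastructure inventory" > "$BASE/oracle_infra/oraInst.loc"

# Before patching the infrastructure: save the middle-tier copy,
# then reinstate the infrastructure copy so the patch tools see
# the infrastructure oraInventory.
cp -rp "$BASE/oracle" "$BASE/oracle_mt"
rm -rf "$BASE/oracle"
cp -rp "$BASE/oracle_infra" "$BASE/oracle"
cat "$BASE/oracle/oraInst.loc"
```

After the patch completes, reverse the swap to restore the middle-tier copy of /var/opt/oracle.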
Perform this step from the global zone or zone where you installed Oracle 9iAS Application Server Infrastructure or Oracle Application Server 10g Infrastructure.
Note - This step assumes that you have completed the infrastructure install and have included ORACLE_HOME and ORACLE_SID within the profile for the oracle-application-server-userid.
For Oracle 9iAS Application Server Infrastructure
# su - oracle-application-server-userid
$ $ORACLE_HOME/bin/emctl stop
$ $ORACLE_HOME/opmn/bin/opmnctl stopall
$ $ORACLE_HOME/bin/oidctl server=oidldapd configset=0 instance=1 stop
$ $ORACLE_HOME/bin/oidmon stop
For Oracle Application Server 10g Infrastructure
# su - oracle-application-server-userid
$ $ORACLE_HOME/bin/emctl stop iasconsole
$ $ORACLE_HOME/opmn/bin/opmnctl stopall
Perform this step from the global zone or zone where you installed Oracle 9iAS Application Server Infrastructure or Oracle Application Server 10g Infrastructure.
# su - oracle-application-server-userid
$ $ORACLE_HOME/bin/lsnrctl stop
$ $ORACLE_HOME/bin/sqlplus "/ as sysdba"
SQL> shutdown immediate
SQL> quit
$ exit
Perform this step from the global zone on the node where you installed the Oracle 9iAS Application Server Infrastructure or Oracle Application Server 10g Infrastructure.
# umount oracle-application-server-highly-available-local-file-system
# umount /zonepath/root/oracle-application-server-highly-available-local-file-system
# zpool export -f HAZpool
# ifconfig interface removeif logical-hostname
If a cluster file system was created and you temporarily removed the global mount option from /etc/vfstab, you must reinstate the global mount option before you reboot each node to start the cluster.
# reboot