Sun Cluster Data Service for Oracle Application Server Guide for Solaris OS

Installing and Configuring Oracle Application Server

This section contains the procedures you need to install and configure Oracle Application Server.

How to Install and Configure Oracle Application Server

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Determine which Solaris zone to use.

    Refer to Determine which Solaris zone Oracle Application Server will use for more information.

  3. If a zone will be used, create the zone.

    Refer to System Administration Guide: Solaris Containers-Resource Management and Solaris Zones for complete information about installing and configuring a zone.

  4. If a zone is being used, ensure the zone is booted.

    Repeat this step on all nodes of the cluster.

    Boot the zone if it is not running.


    # zoneadm list -v
    # zoneadm -z zonename boot
    
  5. Create a cluster file system or highly available local file system for the Oracle Application Server files.

    Refer to Sun Cluster Software Installation Guide for Solaris OS for information about creating a cluster file system and to Sun Cluster Data Services Planning and Administration Guide for Solaris OS for information about creating a highly available local file system.
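
    For example, assuming a Solaris Volume Manager metadevice d100 in a disk set named oraset and a mount point of /global/ora9ias (hypothetical names), a cluster file system entry in /etc/vfstab might look similar to the following:


    /dev/md/oraset/dsk/d100 /dev/md/oraset/rdsk/d100 /global/ora9ias ufs 2 yes global,logging
    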

  6. Stop the cluster and reboot each node in noncluster mode.


    Note –

    Oracle Application Server does not currently support installing the infrastructure on a hardware cluster. Therefore, Sun Cluster must be stopped for the duration of the installation and post-installation configuration.

    Your data service configuration might not be supported if you do not adhere to these requirements.


    If a cluster file system was created, you must temporarily remove the global mount option from /etc/vfstab, because you will manually mount the cluster file system as a local file system after you have stopped the cluster.
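
    Continuing the hypothetical entry shown in Step 5, only the global keyword is removed from the mount options; all other fields are unchanged:


    /dev/md/oraset/dsk/d100 /dev/md/oraset/rdsk/d100 /global/ora9ias ufs 2 yes logging
    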


    # cluster shutdown -y -g0
    ok boot -x
    
  7. Mount the cluster file system (as a local file system) or the highly available local file system.

    Perform this step from the global zone on one node of the cluster.

    1. If a non-ZFS highly available local file system is being used for Oracle Application Server.

      Ensure the node has ownership of the disk set or disk group.

      For Solaris Volume Manager.


      # metaset -s disk-set -t -C take
      

      For Veritas Volume Manager.


      # vxdg -C import disk-group
      # vxdg -g disk-group startall
      
      1. If the global zone is being used for Oracle Application Server.


        # mount highly-available-local-file-system
        
      2. If a zone is being used for Oracle Application Server.

        Create the mount point on all zones of the cluster that are being used for Oracle Application Server.


        # zlogin zonename mkdir highly-available-local-file-system
        

        Mount the highly available local file system on one of the zones being used.


        # mount -F lofs highly-available-local-file-system \
        > /zonepath/root/highly-available-local-file-system
        
    2. If a ZFS highly available local file system is being used for Oracle Application Server.

      1. If the global zone is being used for Oracle Application Server.


        # zpool import -R / HAZpool
        
      2. If a zone is being used for Oracle Application Server.


        # zpool export -f HAZpool
        # zpool import -R /zonepath/root HAZpool
        
  8. Plumb the Oracle Application Server logical hostname.

    Perform this step from the global zone on one node of the cluster.

    1. If the global zone is being used for Oracle Application Server.


      # ifconfig interface addif logical-hostname up
      
    2. If a zone is being used for Oracle Application Server.


      # ifconfig interface addif logical-hostname up zone zonename
      
  9. Install the Oracle 9i Application Server Infrastructure.

    Perform this step from the global zone or zone on one node of the cluster, if you are installing Oracle 9iAS Application Server Infrastructure for version 9.0.2 or version 9.0.3.


    Note –

    Refer to Step 10 if you are installing the Oracle Application Server 10g Infrastructure version 9.0.4 or version 10.


    Refer to Oracle Application Server, Quick Installation Guide for complete installation information.

    1. Enable logical host interpositioning.

      Perform this step on all cluster nodes where Oracle 9iAS Application Server infrastructure will run.

      To provide logical host interpositioning for Oracle 9iAS Application Server infrastructure, copy /usr/cluster/lib/libschost.so.1 to /usr/lib and create a symbolic link to it from /usr/lib/secure/libschost.so.1.


      # cp /usr/cluster/lib/libschost.so.1 /usr/lib/libschost.so.1
      # cd /usr/lib/secure
      # ln -s /usr/lib/libschost.so.1 libschost.so.1
      
    2. Execute runInstaller.

      You must specify the short logical hostname for Oracle 9iAS Application Server infrastructure, not the fully qualified name.


      # su - oracle-application-server-userid
      $ LD_PRELOAD_32=libschost.so.1
      $ SC_LHOSTNAME=logical-hostname
      $ export LD_PRELOAD_32 SC_LHOSTNAME
      

      Test that the short logical hostname is returned.


      $ uname -n
      

      If the short logical hostname is returned, you can proceed with the installation.


      $ cd oracle-application-server-install-directory
      $ ./runInstaller
      

      Follow the Oracle Universal Installer screens as required.

      When prompted for the Oracle home directory you must enter a destination directory that resides on the cluster file system or highly available local file system. If the destination directory does not exist, Oracle Universal Installer creates it.

      After installing the software you will be prompted to execute $ORACLE_HOME/root.sh before the Oracle Universal Installer continues with the configuration assistants.


      Note –

      Before you execute $ORACLE_HOME/root.sh you must perform Step c.


    3. Pre-task before executing $ORACLE_HOME/root.sh from ./runInstaller.

      Before executing $ORACLE_HOME/root.sh, you must complete the following in another window, as some configuration assistants require that LD_PRELOAD_32 and SC_LHOSTNAME are set.


      $ cd $ORACLE_HOME/Apache/Apache/bin
      $ vi apachectl
       
      

      Add the following three lines to the CONFIGURATION section in apachectl, just before the PIDFILE= entry. You must specify the short logical hostname for SC_LHOSTNAME.


      LD_PRELOAD_32=libschost.so.1
      SC_LHOSTNAME=logical-hostname
      export LD_PRELOAD_32 SC_LHOSTNAME
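
      For example, after this edit the relevant part of the CONFIGURATION section in apachectl might look similar to the following (the existing PIDFILE= line is shown only as a placeholder; its value is unchanged):


      # Added for Sun Cluster logical host interpositioning
      LD_PRELOAD_32=libschost.so.1
      SC_LHOSTNAME=logical-hostname
      export LD_PRELOAD_32 SC_LHOSTNAME
      PIDFILE=...
      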
      
    4. As root, execute $ORACLE_HOME/root.sh.

      After amending apachectl, you can execute $ORACLE_HOME/root.sh as root and continue with the Oracle Universal Installer.

    5. As root, copy /var/opt/oracle/*tab to the other cluster nodes or zones.

      After the install is complete, you must copy the emtab and oratab files from /var/opt/oracle to the other cluster nodes or zones.

      If other Oracle databases have been defined on other cluster nodes or zones, instead of copying emtab and oratab to /var/opt/oracle on the other cluster nodes or zones, you must add the Oracle Application Server database entries to emtab and oratab in /var/opt/oracle on the other cluster nodes or zones.


      other-node# mkdir -p /var/opt/oracle
      other-node# rcp install-node:/var/opt/oracle/*tab /var/opt/oracle
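
      If you are merging entries rather than copying the files, an Oracle Application Server infrastructure entry in /var/opt/oracle/oratab uses the standard oratab format, for example (hypothetical SID and Oracle home):


      iasdb:/global/ora9ias/infra:N
      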
      
    6. Install the Oracle 9.0.2.3 Core Patchset.

      Complete any prerequisites before installing the Oracle 9.0.2.3 Core Patchset. Refer to Oracle Metalink Note 243561.1 for more information.

      When installing the Oracle 9.0.2.3 Core Patchset, follow the Oracle Universal Installer screens as required.

      After installing the software, you will be prompted to execute $ORACLE_HOME/root.sh before the Oracle Universal Installer will continue with the configuration assistants.


      Note –

      Before you execute $ORACLE_HOME/root.sh you must perform Step g.


    7. Pre-task before executing $ORACLE_HOME/root.sh from Oracle 9.0.2.3 Core Patchset.

      Before executing $ORACLE_HOME/root.sh, you must complete the following in another window, as some configuration assistants require that LD_LIBRARY_PATH, LD_PRELOAD_32 and SC_LHOSTNAME are set.

      Edit $ORACLE_HOME/opmn/conf/opmn.xml and include entries for LD_LIBRARY_PATH, LD_PRELOAD_32, and SC_LHOSTNAME in the environment section for OC4J_DAS, home, OC4J_Demos, and CUSTOM.


      Note –

      LD_LIBRARY_PATH should contain the fully qualified $ORACLE_HOME/lib value.

      DISPLAY should contain the short logical hostname.

      SC_LHOSTNAME should contain the fully qualified logical hostname.


      The following shows a sample $ORACLE_HOME/opmn/conf/opmn.xml after it has been modified.


      # su - oracle-application-server-userid
      $ cd $ORACLE_HOME/opmn/conf
      $ cat opmn.xml
      <ias-instance xmlns="http://www.oracle.com/ias-instance">
        <notification-server>
          <port local="6100" remote="6200" request="6003"/>
          <log-file path="/global/ora9ias/infra/opmn/logs/ons.log" level="3"/>
        </notification-server>
        <process-manager>
          <ohs gid="HTTP Server" maxRetry="3">
            <start-mode mode="ssl"/>
          </ohs>
          <oc4j maxRetry="3" instanceName="home" numProcs="1">
            <config-file path="/global/ora9ias/infra/j2ee/home/config/server.xml"/>
            <oc4j-option value="-properties"/>
            <port ajp="3000-3100" jms="3201-3300" rmi="3101-3200"/>
            <environment>
              <prop name="DISPLAY" value="ora9ias:0.0"/>
              <prop name="LD_LIBRARY_PATH" value="/global/ora9ias/infra/lib"/>
              <prop name="SC_LHOSTNAME" value="ora9ias.com"/>
              <prop name="LD_PRELOAD_32" value="libschost.so.1"/>
            </environment>
          </oc4j>
          <oc4j maxRetry="3" instanceName="OC4J_DAS" gid="OC4J_DAS" numProcs="1">
            <config-file path="/global/ora9ias/infra/j2ee/OC4J_DAS/config/server.xml"/>
            <java-option value="-server -Xincgc -Xnoclassgc -Xmx256m "/>
            <oc4j-option value="-properties"/>
            <port ajp="3001-3100" jms="3201-3300" rmi="3101-3200"/>
            <environment>
              <prop name="DISPLAY" value="ora9ias:0.0"/>
              <prop name="LD_LIBRARY_PATH" value="/global/ora9ias/infra/lib"/>
              <prop name="SC_LHOSTNAME" value="ora9ias.com"/>
              <prop name="LD_PRELOAD_32" value="libschost.so.1"/>
             </environment>
          </oc4j>
          <oc4j maxRetry="3" instanceName="OC4J_Demos" gid="OC4J_Demos" numProcs="1">
            <config-file path="/global/ora9ias/infra/j2ee/OC4J_Demos/config/server.xml"/>
            <java-option value="-Xmx512M "/>
            <oc4j-option value="-userThreads  -properties"/>
            <port ajp="3001-3100" jms="3201-3300" rmi="3101-3200"/>
            <environment>
              <prop name="%LIB_PATH_ENV%" value="%LIB_PATH_VALUE%"/>
              <prop name="DISPLAY" value="ora9ias:0.0"/>
              <prop name="LD_LIBRARY_PATH" value="/global/ora9ias/infra/lib"/>
              <prop name="SC_LHOSTNAME" value="ora9ias.com"/>
              <prop name="LD_PRELOAD_32" value="libschost.so.1"/> 
             </environment> 
          </oc4j>
          <custom gid="dcm-daemon" numProcs="1" noGidWildcard="true">
            <start path="/global/ora9ias/infra/dcm/bin/dcmctl daemon -logdir 
      	/global/ora9ias/infra/dcm/logs/daemon_logs"/>
            <stop path="/global/ora9ias/infra/dcm/bin/dcmctl shutdowndaemon"/>
            <environment>
              <prop name="DISPLAY" value="ora9ias:0.0"/>
              <prop name="LD_LIBRARY_PATH" value="/global/ora9ias/infra/lib"/>
              <prop name="SC_LHOSTNAME" value="ora9ias.com"/>
              <prop name="LD_PRELOAD_32" value="libschost.so.1"/>
            </environment>
          </custom>
          <log-file path="/global/ora9ias/infra/opmn/logs/ipm.log" level="3"/>
        </process-manager>
      </ias-instance>
    8. As root, execute $ORACLE_HOME/root.sh.

      After amending opmn.xml, you can execute $ORACLE_HOME/root.sh as root and continue with the Oracle Universal Installer.

  10. Install the Oracle Application Server 10g Infrastructure.

    Perform this step within the global zone or non-global zone as required, if you are installing Oracle Application Server 10g infrastructure version 9.0.4 or version 10.


    Note –

    Refer to Step 9 if you are installing the Oracle 9iAS Application Server infrastructure version 9.0.2 or version 9.0.3.


    Refer to Oracle Application Server, Quick Installation Guide for complete installation information.

    1. Execute runInstaller.


      # su - oracle-application-server-userid
      $ cd oracle-application-server-install-directory
      $ ./runInstaller
      

      Follow the Oracle Universal Installer screens as required.

      When prompted for the Oracle home directory you must enter a destination directory that resides on the cluster file system or highly available local file system. If the destination directory does not exist, Oracle Universal Installer creates it.

      When prompted with the Configuration Options screen, you must select High Availability and Replication and provide the fully qualified logical hostname when requested to do so.

    2. As root, copy /var/opt/oracle/oratab to the other cluster nodes or zones.

      After the install is complete, you must copy the oratab file from /var/opt/oracle to the other cluster nodes or zones.

      If other Oracle databases have been defined on other cluster nodes or zones, instead of copying oratab to /var/opt/oracle on the other cluster nodes or zones, you must add the Oracle Application Server database entries to oratab in /var/opt/oracle on the other cluster nodes or zones.


      other-node# mkdir -p /var/opt/oracle
      other-node# rcp install-node:/var/opt/oracle/oratab /var/opt/oracle
      
  11. (Optional) Installing the Middle Tier on the same cluster nodes as the Infrastructure.

    The middle tier may be installed on multiple active nodes to achieve high availability. Typically the middle tier and infrastructure are installed on separate nodes. However, you may wish to install the middle tier on the same cluster nodes as the infrastructure. This can be done by installing the middle tier on local disks on each cluster node.

    However, whenever the middle tier and infrastructure share a cluster node, two /var/opt/oracle directories must be maintained: one for the infrastructure, where oraInst.loc points to the infrastructure oraInventory directory on shared disk, and another for the middle tier instance installed on local disk on each cluster node, where oraInst.loc points to a separate oraInventory directory on the local disk of that node.

    These separate directories are needed for applying patches and performing other upgrades or maintenance tasks.

    If you install the middle tier on the same cluster nodes as the infrastructure, you must copy the infrastructure /var/opt/oracle directory to another location. You can then apply patches or upgrades to the infrastructure or middle tier by reinstating the corresponding original copy of /var/opt/oracle.

    Save /var/opt/oracle on each cluster node where the middle tier and infrastructure are installed together.


     # cp -rp /var/opt/oracle /var/opt/oracle_infra
    

    If you have copied /var/opt/oracle, you will need to supply the new infrastructure directory when you register the Sun Cluster HA for Oracle Application Server data service.
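
    As a sketch of what the two copies might contain (hypothetical paths), the infrastructure and middle tier oraInst.loc files would each point to a different oraInventory directory:


     # cat /var/opt/oracle_infra/oraInst.loc
     inventory_loc=/global/ora9ias/infra/oraInventory
     inst_group=dba
     # cat /var/opt/oracle/oraInst.loc
     inventory_loc=/export/home/oracle/oraInventory
     inst_group=dba
    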

  12. Stop the Oracle 9iAS Application Server or Oracle Application Server 10g Infrastructure.

    Perform this step from the global zone or zone where you installed Oracle 9iAS Application Server Infrastructure or Oracle Application Server 10g Infrastructure.


    Note –

    This step assumes that you have completed the infrastructure install and have included ORACLE_HOME and ORACLE_SID within the profile for the oracle-application-server-userid.
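
    For example, the profile for the oracle-application-server-userid might contain entries similar to the following (hypothetical Oracle home and SID):


    ORACLE_HOME=/global/ora9ias/infra
    ORACLE_SID=iasdb
    export ORACLE_HOME ORACLE_SID
    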


    For Oracle 9iAS Application Server Infrastructure


    # su - oracle-application-server-userid
    $ $ORACLE_HOME/bin/emctl stop
    $ $ORACLE_HOME/opmn/bin/opmnctl stopall
    $ $ORACLE_HOME/bin/oidctl server=oidldapd configset=0 instance=1 stop
    $ $ORACLE_HOME/bin/oidmon stop
    

    For Oracle Application Server 10g Infrastructure


    # su - oracle-application-server-userid
    $ $ORACLE_HOME/bin/emctl stop iasconsole
    $ $ORACLE_HOME/opmn/bin/opmnctl stopall
    
  13. Stop the Oracle Database and Listener.

    Perform this step from the global zone or zone where you installed Oracle 9iAS Application Server Infrastructure or Oracle Application Server 10g Infrastructure.


    # su - oracle-application-server-userid
    $ $ORACLE_HOME/bin/lsnrctl stop
    $ $ORACLE_HOME/bin/sqlplus "/ as sysdba"
    SQL> shutdown immediate
    SQL> quit
    $ exit
    
  14. Unmount the cluster file system or highly available local file system.

    Perform this step from the global zone on the node where you installed the Oracle 9iAS Application Server Infrastructure or Oracle Application Server 10g Infrastructure.

    1. If a non-ZFS highly available local file system is being used for Oracle Application Server.

      1. If the global zone is being used for Oracle Application Server.


        # umount oracle-application-server-highly-available-local-file-system
        
      2. If a zone is being used for Oracle Application Server.


        # umount /zonepath/root/oracle-application-server-highly-available-local-file-system
        
    2. If a ZFS highly available local file system is being used for Oracle Application Server.


      # zpool export -f HAZpool
      
  15. Unplumb the Oracle Application Server logical hostname.


    # ifconfig interface removeif logical-hostname
    
  16. Reboot each node in cluster mode to start the cluster.

    If a cluster file system was created and you temporarily removed the global mount option from /etc/vfstab, you must reinstate the global mount option before you reboot each node to start the cluster.


    # reboot