Sun Cluster 2.2 Software Installation Guide

Chapter 5 Installing and Configuring Sun Cluster HA for Oracle

This chapter provides instructions for setting up and administering the Sun Cluster HA for Oracle data service on your Sun Cluster nodes.

Preparing to Install Sun Cluster HA for Oracle

Use the following sections to prepare Sun Cluster nodes for Sun Cluster HA for Oracle installation.

Selecting an Install Location for Sun Cluster HA for Oracle

You can install the Oracle binaries either on the local disks of the physical hosts or on the multihost disks. Both locations have advantages and disadvantages. Consider the following points when selecting an install location.

Placing Oracle binaries on the multihost disk eases administration, because there is only one copy to administer, and it keeps the binaries available to whichever node masters the logical host after a cluster reconfiguration. However, it sacrifices redundancy, and therefore availability, in the face of some failures: if the binaries are corrupted or accidentally removed from the multihost disk, the data service becomes unavailable on every node until they are restored. Note also that any binaries installed on the multihost disk are mirrored as part of a disk group, so you must allocate space accordingly.

Alternatively, placing Oracle binaries on the local disk of the physical host increases redundancy (and therefore availability). In case of failure or accidental removal of one copy, Oracle can still run on any other physical node containing the binaries. Note that placing the Oracle binaries on the local disk increases the administrative overhead, because you must manage multiple copies of the files.

Setting Up the /etc/nsswitch.conf File

On each node that can master the logical host running Sun Cluster HA for Oracle, the /etc/nsswitch.conf file must have one of the following entries for group.


group: files
group: files [NOTFOUND=return] nis
group: files [NOTFOUND=return] nisplus

Sun Cluster HA for Oracle uses the su(1M) command when starting and stopping the database. The settings above ensure that the su(1M) command does not consult NIS or NIS+ when the network name service is unavailable because of a public network failure on the cluster node.
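
The required entry can be checked with a small Bourne-shell test. This is an illustrative sketch; the check_group_entry function name and the sample file path are not part of Sun Cluster:

```shell
# Sketch: verify that the "group" entry of an nsswitch.conf-style file is one
# of the three forms that Sun Cluster HA for Oracle accepts.
check_group_entry() {
    entry=`grep '^group:' "$1" | sed 's/^group:[[:space:]]*//'`
    case "$entry" in
        files|"files [NOTFOUND=return] nis"|"files [NOTFOUND=return] nisplus")
            echo ok ;;
        *)
            echo bad ;;
    esac
}

# Run the check against a sample file rather than the live /etc/nsswitch.conf.
echo 'group: files [NOTFOUND=return] nis' > /tmp/nss.sample
check_group_entry /tmp/nss.sample
```

On a cluster node you would point the function at /etc/nsswitch.conf on each host that can master the logical host.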

Setting Up Multihost Disks for Sun Cluster HA for Oracle

If you are using Solstice DiskSuite, you can configure Sun Cluster HA for Oracle to use UFS logging or raw mirrored metadevices. Refer to Appendix B, Configuring Solstice DiskSuite, for details about setting up metadevices.

If you are using VERITAS Volume Manager, you can configure Sun Cluster HA for Oracle to use VxFS logging or raw devices. Refer to Appendix C, Configuring VERITAS Volume Manager, for information about configuring VxFS file systems.

Installing Sun Cluster HA for Oracle

Use the procedures in this section to prepare the Sun Cluster nodes, to install the Oracle software, to create Oracle databases, and to set up Sun Cluster HA for Oracle.

Before setting up Sun Cluster HA for Oracle, you must have configured the Sun Cluster software on each node by using the procedures described in Chapter 3, Installing and Configuring Sun Cluster Software.

How to Prepare the Nodes and Install the Oracle Software

These are the detailed steps to prepare Sun Cluster nodes for Oracle installation and to install the Oracle software:

  1. Prepare the environment for Oracle installation.

    Choose a location for the $ORACLE_HOME directory, on either a local or multihost disk.


    Note -

    If you choose to install the Oracle binaries on the local disk of each physical host, then mount the Oracle software distribution as a file system on its own separate disk, if possible. This will prevent Oracle binaries from being overwritten if the operating environment is reinstalled.


  2. On each node, create an entry for the database administrator group in the /etc/group file, and add potential users to the group.

    This group normally is named dba. Verify that root and oracle are members of the dba group, and add entries as necessary for other dba users. Make sure that the group IDs are the same on all nodes running Sun Cluster HA for Oracle. For example:


    dba:*:520:root,oracle 

    While you can make the name service entries in a network name service (for example, NIS or NIS+) so that the information is available to Sun Cluster HA for Oracle clients, you also should make entries in the local /etc files to eliminate dependency on the network name service.


    Note -

    This information must be replicated on each node.
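
    One way to confirm the replication that this note requires is to compare the dba entry across nodes. The node file names below are placeholders standing in for copies of /etc/group gathered from each cluster node:

```shell
# Sketch: extract the dba GID from each node's group file and confirm the
# values match across nodes.
dba_gid() {
    awk -F: '$1 == "dba" { print $3 }' "$1"
}

echo 'dba:*:520:root,oracle' > /tmp/group.node1
echo 'dba:*:520:root,oracle' > /tmp/group.node2

gid1=`dba_gid /tmp/group.node1`
gid2=`dba_gid /tmp/group.node2`
if [ -n "$gid1" ] && [ "$gid1" = "$gid2" ]; then
    echo "dba GID $gid1 consistent"
else
    echo "dba GID mismatch" >&2
fi
```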


  3. On each node, create an entry for the Oracle user ID (oracle_id) in the /etc/passwd file and run the pwconv(1M) command to create an entry in the /etc/shadow file, or run the following useradd(1M) command.

    The oracle_id is normally oracle.


    # useradd -u 120 -g dba -d $ORACLE_HOME oracle_id
    

    Make sure that the user IDs are the same on all nodes running Sun Cluster HA for Oracle.

  4. Verify that the $ORACLE_HOME directory is owned by oracle_id and is included in the dba group.

    Permissions on the $ORACLE_HOME directory should be set to 755.


    # chown oracle $ORACLE_HOME
    # chgrp dba $ORACLE_HOME
    

    If $ORACLE_HOME is a symbolic link to the Oracle home directory, change the ownership of both the link target and, with the -h option, the link itself:


    # chown oracle $ORACLE_HOME
    # chgrp dba $ORACLE_HOME
    # chown -h oracle $ORACLE_HOME
    # chgrp -h dba $ORACLE_HOME
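
    The ownership and permission requirements in this step can be spot-checked with a short script. The directory below is a temporary stand-in for $ORACLE_HOME, and the owner and group checks are shown commented out because they require the real oracle user to exist:

```shell
# Sketch: confirm that a directory standing in for $ORACLE_HOME has mode 755.
OH=/tmp/orahome.demo
mkdir -p "$OH"
chmod 755 "$OH"

# Parse the symbolic mode string from ls(1); characters 2-10 are the
# user/group/other permission bits.
mode=`ls -ld "$OH" | awk '{ print substr($1, 2, 9) }'`
if [ "$mode" = "rwxr-xr-x" ]; then
    echo "mode ok"
else
    echo "unexpected mode: $mode"
fi

# On a real node, also verify owner and group:
# [ "`ls -ld $ORACLE_HOME | awk '{ print $3 }'`" = "oracle" ]
# [ "`ls -ld $ORACLE_HOME | awk '{ print $4 }'`" = "dba" ]
```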
    

  5. Note the requirements for Oracle installation.

    Oracle binaries can be installed on either the local disks of each physical host, or on the multihost disks. See "Selecting an Install Location for Sun Cluster HA for Oracle", for more information.

    If you plan to install Oracle software on the multihost disks, you first must start Sun Cluster and take ownership of the logical host. See "Creating an Oracle Database and Setting Up Sun Cluster HA for Oracle", for details.

    If you are running Oracle versions 7.x.x through 8.0.x, select the Install/Upgrade/Patch Software Only option when first installing Oracle. This is necessary because database initialization and configuration files must be modified to reflect the logical hosts as the location for the database.

  6. Install the Oracle software.

    On each node, modify the /etc/system files according to standard Oracle installation procedures. Also, on each node, create a /var/opt/oracle directory owned by user oracle and group dba.

    Log in as oracle to ensure ownership of the entire directory before performing this step. For complete instructions on installing Oracle software, refer to your Oracle documentation.

  7. (Oracle versions 7.x.x through 8.0.x only) Create the /var/opt/oracle/oratab file.

    As root, run the script $ORACLE_HOME/orainst/oratab.sh to create the /var/opt/oracle/oratab file with the appropriate permissions.


    Note -

    This step is not necessary if you are running Oracle version 8i or later. Creation of the oratab file is handled automatically by the Oracle installer in those versions.


  8. Verify the Oracle installation.

    1. Verify that the Oracle kernel, $ORACLE_HOME/bin/oracle, is owned by oracle and the dba group.

    2. Verify that the $ORACLE_HOME/bin/oracle permissions are set as follows:


      -rwsr-x--x

    3. Verify that the listener binaries exist in $ORACLE_HOME/bin.

    4. For Oracle versions 7.x.x through 8.0.x, verify that the $ORACLE_HOME/orainst/RELVER file exists.

      This step is not necessary with Oracle8i or later.

      The $ORACLE_HOME/orainst/RELVER file is created when the Oracle UNIX installer is run on the node. If the file does not exist, create it manually, including the correct version number of the Oracle software installed on the node. For example:


      # cat $ORACLE_HOME/orainst/RELVER
      RELEASE_VERSION=8.0.4.0.0
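
      The manual repair described above can be scripted. In this sketch ORACLE_HOME is a temporary directory and the release number is the example from the text; substitute the values for your installation:

```shell
# Sketch: create $ORACLE_HOME/orainst/RELVER with the installed release
# number if the Oracle installer did not create it.
ORACLE_HOME=/tmp/orahome.relver
RELVER="$ORACLE_HOME/orainst/RELVER"

if [ ! -f "$RELVER" ]; then
    mkdir -p "$ORACLE_HOME/orainst"
    echo "RELEASE_VERSION=8.0.4.0.0" > "$RELVER"
fi
cat "$RELVER"
```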

Creating an Oracle Database and Setting Up Sun Cluster HA for Oracle

Complete both procedures in this section to create and configure the initial Oracle database in a Sun Cluster configuration. If you are creating and configuring additional databases, perform only the procedure described in "How to Create an Oracle Database".

How to Prepare Logical Hosts for Oracle Databases
  1. Make sure Sun Cluster is started and the node owns the disk groups.

    To start the cluster, use the scadmin(1M) command as root. This command starts the cluster and assigns ownership of the disk group to the node you specify:


    # scadmin startcluster physicalhost clustername
    

    If the cluster already exists and the node is in the cluster, use the following command to ensure ownership of the disk group:


    # haswitch physicalhost logicalhost
    

  2. Configure the disk devices for use by your volume manager.


    Caution -

    While Oracle supports raw I/O to both raw physical devices and raw metadevices (mirrored or nonmirrored), Sun Cluster only supports raw Oracle I/O on raw mirrored volumes or metadevices. You cannot use devices such as /dev/rdsk/c1t1d1s2 to contain Oracle data under Sun Cluster.


    1. If you are using Solstice DiskSuite, set up UFS logs or raw mirrored metadevices on all nodes that will be running Sun Cluster HA for Oracle.

      If you will be using raw mirrored metadevices to contain the databases, change the owner, group, and mode of each of the raw mirrored metadevices. If you are not using raw mirrored metadevices, skip this step. Instructions for creating mirrored metadevices are provided in Appendix B, Configuring Solstice DiskSuite.

      If you are creating raw mirrored metadevices, type the following commands for each metadevice:


      # chown oracle_id /dev/md/disk_group/rdsk/dn
      # chgrp dba_id /dev/md/disk_group/rdsk/dn
      # chmod 600 /dev/md/disk_group/rdsk/dn
      

    2. If you are using VERITAS Volume Manager, set up VxFS logs or raw devices on all nodes.

      If you will be using raw devices to contain the databases, change the owner, group, and mode of each device. If you are not using raw devices, skip this step. Refer to your VERITAS Volume Manager documentation for information on setting up VxFS logs.

      If you are creating raw devices, type the following command for each raw device:


      # vxvol set owner=oracle_id group=dba_id mode=600 \
      /dev/vx/rdsk/diskgroup_name/volume_name
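
      The per-device permission settings above can be applied with one loop. This sketch only prints the commands it would run; the disk group, device names, and user names are placeholders, and on a real node you would execute the commands as root instead of echoing them:

```shell
# Sketch: emit the ownership and mode commands for each raw device that will
# hold Oracle data (Solstice DiskSuite metadevice paths shown here).
oracle_id=oracle
dba_id=dba

emit_perm_cmds() {
    # $1 is a raw device path
    echo "chown $oracle_id $1"
    echo "chgrp $dba_id $1"
    echo "chmod 600 $1"
}

for dev in /dev/md/oradg/rdsk/d10 /dev/md/oradg/rdsk/d11; do
    emit_perm_cmds "$dev"
done
```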
      

How to Create an Oracle Database

These are the detailed steps to create an Oracle database:

  1. Prepare database configuration files.

    Place all parameter files, data files, redolog files, and control files on the logical host, that is, the disk group's multihost disks.

    Within the init$ORACLE_SID.ora or config$ORACLE_SID.ora file, you might need to modify the assignments for control_files and background_dump_dest to specify the location of control files and alert files on the logical host.


    Note -

    If you are using Solaris authentication for database logins, the remote_os_authent variable in the init$ORACLE_SID.ora file must be set to TRUE. Full path names must be provided for background_dump_dest. The special character ? for specifying ORACLE_HOME should not be used.


  2. Create the database.

    During creation, ensure that all configuration and database files are placed on the logical hosts.

    1. Start the Oracle installer (orainst or runInstaller) and select the Create New Database Objects option.


      Note -

      Alternatively, you can create the database using the Oracle svrmgrl command, depending on your Oracle version. For Oracle8i or later installations, you can create databases later, using the Oracle dbassist utility. See your Oracle8i documentation for details.


    2. During the Oracle install session, place all the database files on the logical hosts.

      Override the default file locations provided by the Oracle installer.

    3. Verify that the file names of your control files match the file names in your configuration files.

  3. Verify that the v$sysstat view exists.

    If v$sysstat does not exist already, create it by running the appropriate Oracle catalog scripts. The v$sysstat view is used by the Sun Cluster fault monitoring scripts.

How to Set Up Sun Cluster HA for Oracle

These are the detailed steps to set up Sun Cluster HA for Oracle:

  1. Make SID entries for the Sun Cluster HA for Oracle databases of all database instances.

    You must include the SID of the instance associated with your database in the /var/opt/oracle/oratab file on all nodes running Sun Cluster HA for Oracle, and you must keep this file current on all of those nodes for a failover to succeed. Update the file manually as you add or remove a SID. If the oratab files do not match, the haoracle start command fails and returns an error message.

    All entries in the /var/opt/oracle/oratab file should have the :N option specified to ensure that the instance will not start automatically on Solaris reboot. For example:


    oracle_sid:/oracle:N
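
    Whether every oratab entry carries the required :N flag can be checked mechanically. The check_oratab function and the sample file are illustrative; on a node you would pass /var/opt/oracle/oratab:

```shell
# Sketch: print any SID whose oratab entry does not end with the N flag,
# and exit nonzero if one is found.
check_oratab() {
    awk -F: '/^[^#]/ && NF >= 3 && $3 != "N" { print $1; bad = 1 }
             END { exit bad }' "$1"
}

cat > /tmp/oratab.sample <<EOF
# comment line
ora8:/oracle:N
ora7:/oracle:Y
EOF
check_oratab /tmp/oratab.sample || echo "entries above must use :N"
```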

  2. Depending on which authentication method you choose (Oracle authentication or Solaris authentication), perform one of the following steps.

    1. Enable access for the user and password to be used for fault monitoring.

      You must complete this step if you do not enable Solaris authentication, as described in Step b.

      In the following example, the user is scott and the password is tiger. Note that if you are using Oracle authentication, the user and password pair must agree with those you supply to the haoracle insert command in Step 6.

      For all supported Oracle releases, enable access by becoming the oracle user and entering the following script at the prompt displayed by the svrmgrl(1M) command.


      # svrmgrl

      connect internal;
      grant connect, resource to scott identified by tiger;
      alter user scott default tablespace system quota 1m on system;
      grant select on v_$sysstat to scott;
      grant create session to scott;
      grant create table to scott;
      disconnect;
      exit;

    2. Grant permission for the database to use Solaris authentication.

      The following sample entry enables Solaris authentication:


      # svrmgrl

      connect internal;
      create user ops$root identified externally
      default tablespace system quota 1m on system;
      grant connect, resource to ops$root;
      grant select on v_$sysstat to ops$root;
      grant create session to ops$root;
      grant create table to ops$root;
      disconnect;
      exit;

  3. Configure the listener.ora and tnsnames.ora files for Sun Cluster.


    Note -

    Create and update the tnsnames.ora and listener.ora files under /var/opt/oracle on all nodes.


    Sun Cluster HA for Oracle imposes no restrictions on the listener name; it can be any valid Oracle listener name.

    The following code sample identifies the lines in listener.ora that are updated.


    LISTENER =
      (ADDRESS_LIST =
        (ADDRESS =
          (PROTOCOL = TCP)
          (HOST = hostname)       <- use the logical host name
          (PORT = 1527)
        )
      )
    .
    .
    SID_LIST_LISTENER =
      .
      .
        (SID_NAME = SID)          <- database name, default is ORCL

    The following sample identifies the lines in tnsnames.ora that are updated.


    service_name =
      .
      .
        (ADDRESS =                <- listener address
          (HOST = server_name)    <- logical host name
          (PORT = 1527)           <- must match the port in listener.ora
        )
      )
      (CONNECT_DATA =
        (SID = <SID>)             <- database name, default is ORCL
      )

    Sun Cluster HA for Oracle opens the /var/opt/oracle/tnsnames.ora file and scans it for the service name whose entry matches both the SID of the database instance and the logical host name in its HOST field. The remote fault monitor of Sun Cluster HA for Oracle then uses the service_name obtained from tnsnames.ora to connect to the server.

    In the listener.ora file, put the logical host name in place of the physical host name on the "HOST = hostname" line.

    The following example shows how to update the listener.ora and tnsnames.ora files given the Oracle instances listed in Table 5-1.

    Table 5-1 Example Oracle Configuration

    Instance    Logical Host    Listener
    ora8        hadbms3         LISTENER-ora8
    ora7        hadbms4         LISTENER-ora7

    The following is a sample listener.ora file. See the Oracle documentation for your version of Oracle to configure these files.


    LISTENER-ora7 =
    	(ADDRESS_LIST =
    			(ADDRESS =
    				(PROTOCOL = TCP)
    				(HOST = hadbms4)
    				(PORT = 1530)
    			)
    		)
    SID_LIST_LISTENER-ora7 =
    	(SID_LIST =
    			(SID_DESC =
    				(SID_NAME = ora7)
    			)
    		)
    LISTENER-ora8 =
      (ADDRESS_LIST =
        (ADDRESS= (PROTOCOL=TCP) (HOST=hadbms3)(PORT=1806))
      )
    SID_LIST_LISTENER-ora8 =
      (SID_LIST =
         (SID_DESC = 
    			(SID_NAME = ora8) 
    		 )	
      )

    The following is a sample tnsnames.ora file. See the Oracle documentation for your version of Oracle to configure these files.


    ora8 =
    (DESCRIPTION =
       (ADDRESS_LIST = 
    			(ADDRESS = (PROTOCOL = TCP) 
    			(HOST = hadbms3) 
    			(PORT = 1806))
       	)    
    	(CONNECT_DATA = (SID = ora8))
    )
    ora7 =
    (DESCRIPTION =
      (ADDRESS_LIST =
            (ADDRESS = 
    				(PROTOCOL = TCP) 
    				(HOST = hadbms4) 
    				(PORT = 1530))
      	)
      	(CONNECT_DATA = (
    			SID = ora7))
    )
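
    The lookup that the fault monitor performs can be approximated with awk. This is an illustrative sketch that assumes the simple entry layout of the samples above; real Net8 files can be formatted more freely:

```shell
# Sketch: find the service name in a tnsnames.ora-style file whose entry
# contains both the given SID and the given logical host name.
find_service() {
    # $1 = file, $2 = SID, $3 = logical host
    awk -v sid="$2" -v host="$3" '
        /^[A-Za-z0-9_]+ *=/   { name = $1 }
        $0 ~ ("SID = " sid)   { s = name }
        $0 ~ ("HOST = " host) { h = name }
        END { if (s != "" && s == h) print s }' "$1"
}

# Sample entry matching the ora8/hadbms3 example in the text.
cat > /tmp/tns.sample <<EOF
ora8 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP) (HOST = hadbms3) (PORT = 1806)))
    (CONNECT_DATA = (SID = ora8)))
EOF
find_service /tmp/tns.sample ora8 hadbms3
```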

  4. Verify that Sun Cluster and the cluster daemon are installed and running on all hosts.


    # hastat
    

    If they are not running already, start them by using the scadmin startnode command.

  5. From only one cluster node, register and activate Sun Cluster HA for Oracle by using the hareg(1M) command.

    If the Oracle server is not yet registered, use the hareg(1M) command to register it. To register the data service on only one logical host, use the -h option and supply the logical host name:


    # hareg -s -r oracle [-h logicalhost]

    If the cluster is running already, use the hareg(1M) command to activate the Oracle data service. Run this command from only one cluster node:


    # hareg -y oracle
    

  6. Set up the Sun Cluster HA for Oracle configuration data.

    Run the following command so that the instance will be monitored by Sun Cluster:


    # haoracle insert $ORACLE_SID logicalhost 60 10 120 300 \
    user/password /logicalhost/.../init$ORACLE_SID.ora listener
    

    The previous command line includes the following:

    • haoracle insert - Command and subcommand

    • $ORACLE_SID - Name of the Oracle database instance

    • logicalhost - Logical host serving $ORACLE_SID (not the physical host)

    • 60 10 120 300 - These parameters specify a probe cycle time of 60 seconds, a connectivity probe cycle count of 10, a probe timeout of 120 seconds, and a restart delay of 300 seconds.

    • user/password - These are the user and password to be used for fault monitoring. They must agree with the permission levels granted in Step 2. To use Solaris authentication, enter a slash (/) instead of the user name and password.

    • /logicalhost/.../init$ORACLE_SID.ora - This indicates the pfile to use to start up the database. This must be on a logical host's disk group.

    • listener - The Oracle listener. The listener is started and monitored using this name. The default is LISTENER. This field is optional.

      See the haoracle(1M) man page for details on all options to haoracle(1M).
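
      The positional arguments are easier to audit if the command line is assembled from named variables first. In this sketch the values are the examples used in this chapter and the pfile path is hypothetical; haoracle exists only on a Sun Cluster node, so the command is printed rather than executed:

```shell
# Sketch: build the haoracle insert command line from named fields, then
# print it for review (dry run).
ORACLE_SID=ora8
LOGICALHOST=hadbms3
PROBE_PARAMS="60 10 120 300"   # cycle time, cycle count, timeout, restart delay
CREDENTIALS="scott/tiger"      # use "/" for Solaris authentication
PFILE="/$LOGICALHOST/oracle/initora8.ora"   # hypothetical pfile location
LISTENER=LISTENER-ora8

cmd="haoracle insert $ORACLE_SID $LOGICALHOST $PROBE_PARAMS $CREDENTIALS $PFILE $LISTENER"
echo "$cmd"
```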

  7. Bring the database instance into service.

    Bring the Sun Cluster HA for Oracle database into service by executing the haoracle(1M) command. Monitoring for that instance will start automatically.


    # haoracle start $ORACLE_SID
    

Setting Up Sun Cluster HA for Oracle Clients

Clients must always refer to the database by using the logical host name and not the physical host name.

For example, in the tnsnames.ora file for the Oracle listener, you must specify the logical host as the host on which the database instance is running. See "How to Set Up Sun Cluster HA for Oracle".


Note -

Oracle client-server connections will not survive a Sun Cluster HA for Oracle switchover. The client application must be prepared to handle disconnection and reconnection or recovery as appropriate. A transaction monitor might simplify the application. Further, Sun Cluster HA for Oracle node recovery time is application-dependent.


If your application uses functions from RDBMS dynamic link libraries, you must ensure that these libraries are available on every node that can master the logical host, so that they remain accessible after a failover.

Verifying the Sun Cluster HA for Oracle Installation

Perform the following verification tests to ensure the Sun Cluster HA for Oracle installation was performed correctly.

The purpose of these sanity checks is to ensure that the Oracle instance can be started by all nodes running Sun Cluster HA for Oracle and can be accessed successfully by the other nodes in the configuration. Perform these sanity checks to isolate any problems starting Oracle from the Sun Cluster HA for Oracle data service.

How to Verify the Sun Cluster HA for Oracle Installation
  1. Log in to the node mastering the logical host, and set the Oracle environment variables.

    Log in as oracle to the node that currently masters the logical host, and set the environment variables ORACLE_SID and ORACLE_HOME.

    1. Confirm that you can start the Oracle instance from this host.

    2. Confirm that you can connect to the Oracle instance.

      Log in as root and use the sqlplus command with the tns_service variable defined in the tnsnames.ora file:


      # su - oracle_id -c "echo 'select * from v\$sysstat;' | \
      sqlplus user/password@tns_service"
      

    3. Shut down the Oracle instance.

  2. Transfer the logical host containing the Oracle database to another node in the cluster.

    For example:


    # haswitch phys-hahost2 hahost1
    

  3. Log in to the node now mastering the logical host, and repeat the checks listed in Step 1.

    Log in as oracle to the new master node and confirm interactions with the Oracle instance.