Sun Cluster 2.2 Software Installation Guide

Chapter 10 Installing and Configuring Sun Cluster HA for SAP

Sun Cluster HA for SAP makes SAP components highly available by running them in a Sun Cluster environment. This chapter provides instructions for planning and configuring Sun Cluster HA for SAP on Sun Cluster servers.

This chapter includes the following procedures:

10.1 Sun Cluster HA for SAP Overview

The Sun Cluster HA for SAP data service eliminates single points of failure in an SAP system and provides fault monitoring and failover mechanisms for the SAP application.

These basic services of the SAP system should be placed within the Sun Cluster framework:

In a Sun Cluster configuration, protection of SAP components is best provided as described in Table 10-1.

Table 10-1 Protection of SAP Components

SAP Component              Protected by...
SAP database instance      Sun Cluster HA for Oracle
SAP central instance       Sun Cluster HA for SAP
NFS file service           Sun Cluster HA for NFS
SAP application servers    SAP, through redundant configuration

The Sun Cluster HA for SAP data service can be installed during or after initial cluster installation using scinstall(1M). Sun Cluster HA for SAP requires a functioning cluster that already contains logical hosts and associated IP addresses and disk groups. See Chapter 3, Installing and Configuring Sun Cluster Software, for details about initial installation of clusters and data services. The Sun Cluster HA for SAP data service can be registered after the basic components of the Sun Cluster and SAP software have been installed.

10.2 Configuration Guidelines for Sun Cluster HA for SAP

Consider these general guidelines when designing a Sun Cluster HA for SAP configuration:

10.2.1 Supported Configurations

See your Enterprise Services representative for the most current information about supported SAP versions. More information on each configuration type is provided in the following sections.

10.2.1.1 Two-Node Cluster With One Logical Host

The simplest SAP cluster configuration is a two-node cluster with one logical host, as illustrated in Figure 10-1. In this asymmetric configuration, the SAP central instance and database instance (collectively called the central system) are both placed on one node. NFS is also placed on the same node. This configuration is relatively easy to configure and administer. A drawback is that the backup node is underutilized. In case of failover, the central instance, database instance, and NFS service are switched to the backup node.

Figure 10-1 Asymmetric SAP Configuration

Graphic

10.2.1.2 Two-Node Cluster With One Logical Host and Development or Test System

In this configuration, the central system (the central instance and database instance) is placed on one node and a development or test system is placed on a backup node. The development or test system remains running until a failover of the logical host moves the central system to the backup node. This scenario is illustrated in Figure 10-2. In this configuration, you must customize the Sun Cluster HA for SAP hasap_stop_all_instances script such that the development or test system is shut down before the SAP central instance is switched over and brought up. See the hasap_stop_all_instances(1M) man page and "10.2.4 Configuration Options for Application Servers and Test/Development Systems", for more information.

Figure 10-2 Asymmetric SAP Configuration with Development or Test System

Graphic

10.2.1.3 Two-Node Cluster With One Logical Host, Application Servers, and Separate NFS Cluster

You can also place SAP application servers on one or both physical hosts. In this configuration, you must provide NFS services from a host outside the cluster. Set up the application servers to NFS-mount the file systems from the external NFS cluster, as illustrated in Figure 10-3. In case of failover, the logical host containing the central system (the central instance and database instance) switches to the backup node. The application servers do not migrate with the logical host, but are instead started or shut down depending on where the logical host is mastered. This prevents the application servers from competing for resources with the central instance and database.

NFS is protected by Sun Cluster HA for NFS. For more information, see "10.2.5 Sun Cluster HA for NFS Considerations".

Figure 10-3 Asymmetric SAP Configuration with Application Servers and External HA-NFS

Graphic

10.2.1.4 Two-Node Cluster With Two Logical Hosts

A two-node cluster with two logical hosts can be configured with the SAP central instance on one logical host and the SAP database instance on the other logical host, as illustrated in Figure 10-4. In this configuration, the nodes are load-balanced and both are utilized. In case of failover, the central instance or database instance is switched to the sibling node.

Figure 10-4 Symmetric SAP Configuration With Two Logical Hosts

Graphic

10.2.1.5 Two-Node Cluster With Two Logical Hosts, Application Servers, and Separate NFS Cluster

A two-node cluster with two logical hosts can be configured with SAP application servers on one or both physical hosts. In this configuration, you must provide NFS services from a host outside the cluster. Set up the application servers to NFS-mount the file systems from the external NFS cluster, as illustrated in Figure 10-5. In this case, both nodes are utilized and load-balanced.

In case of failover, the logical hosts switch over to the sibling node. The application servers do not fail over.

If the central instance logical host fails over, the application server can be shut down through the hasap_stop_all_instances script.

There are no customizable scripts to start and stop application servers in case of failover of the database logical host. If the database logical host fails over, the application servers cannot be shut down to release resources for the database logical host. Therefore, you must size your configuration to allow for the possible scenario in which the central instance, database instance, and application server are all running on the same node simultaneously.

In this configuration, NFS is protected by Sun Cluster HA for NFS. For more information, see "10.2.5 Sun Cluster HA for NFS Considerations".

Figure 10-5 Symmetric SAP Configuration With Two Logical Hosts and Application Servers

Graphic

10.2.2 Pre-Installation Considerations

Before installing Sun Cluster with scinstall(1M), consider the following issues that apply to SAP configurations.

See also "10.2.5 Sun Cluster HA for NFS Considerations", and "10.4 Preparing the SAP Environment".

10.2.3 Sun Cluster Software Upgrade Considerations

Note these SAP-related issues before performing an upgrade to Sun Cluster 2.2 from HA 1.3 or Sun Cluster 2.1.

10.2.4 Configuration Options for Application Servers and Test/Development Systems

Conventionally, you stop and restart the application server instances manually after the central instance is restarted. Sun Cluster HA for SAP provides hooks that are invoked whenever the central instance logical host switches over or fails over. These hooks call the hasap_stop_all_instances and hasap_start_all_instances scripts. The scripts must be idempotent.

If you configure application servers and want to control them automatically when the logical host switches over or fails over, you can create start and stop scripts according to your needs. Sun Cluster provides sample scripts that can be copied and customized: /opt/SUNWcluster/ha/sap/hasap_stop_all_instances.sample and /opt/SUNWcluster/ha/sap/hasap_start_all_instances.sample. Customization examples are included in these scripts. Copy the sample scripts, rename them by removing the ".sample" suffix, and modify them as appropriate.

After failovers, Sun Cluster HA for SAP will invoke the customized scripts to restart the application servers. The scripts control the application servers from the central instance, and are invoked by the full path name.

If you include a test or development system in your configuration, modify the hasap_stop_all_instances script to stop the test or development system in case of failover of the central instance logical host.
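As a rough illustration of the customization described above, the following dry-run sketch prints the remote shutdown commands a customized hasap_stop_all_instances script might issue. The host name devhost, the user ha1adm, and the use of rsh are assumptions for illustration only, not part of the shipped sample scripts.

```shell
#!/bin/sh
# Dry-run sketch of a customized hasap_stop_all_instances script.
# "devhost" (a test/development host) and "ha1adm" (the <sapsid>adm
# user) are placeholders; rsh is one possible remote-execution choice.
# A real script must be idempotent: stopping an already-stopped
# system must not fail.
DEV_HOSTS="devhost"
stop_cmds=""
for host in $DEV_HOSTS; do
    cmd="rsh $host -l ha1adm stopsap all"
    echo "$cmd"            # dry run: print the command instead of executing it
    stop_cmds="$stop_cmds $cmd"
done
```

In a real script, the echo would be removed so that the command actually runs, and any nonzero exit from stopping an already-stopped instance would be tolerated.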

During a central instance logical host switchover or failover, the scripts are called in the following sequence:

  1. Stopping the application server instances and test or development systems by calling hasap_stop_all_instances

  2. Stopping the central instance

  3. Switching over the logical host(s) and disk group(s)

  4. Calling hasap_stop_all_instances again to make sure all application servers and test or development systems have stopped

  5. Starting the central instance

  6. Starting the application server instances by calling hasap_start_all_instances

See the hasap_stop_all_instances(1M) and hasap_start_all_instances(1M) man pages for more information.

Additionally, you must enable root access to the SAP administrative account (<sapsid>adm) on all SAP application servers and test or development systems from all logical hosts and all physical hosts in the cluster. For test or development systems, also grant root access to the database administrative account (ora<sapsid>). Accomplish this by creating .rhosts files for these users. For example:

...
phys-hahost1  root
phys-hahost2  root
phys-hahost3  root
hahost1       root
hahost2       root
hahost3       root
...
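The .rhosts entries shown above can also be generated mechanically. This sketch prints one "host root" line per cluster host, using the placeholder host names from the example:

```shell
#!/bin/sh
# Sketch: build .rhosts contents granting root access from every
# physical and logical host in the cluster. The host names are the
# placeholders from the example above.
HOSTS="phys-hahost1 phys-hahost2 phys-hahost3 hahost1 hahost2 hahost3"
rhosts=""
for h in $HOSTS; do
    echo "$h root"
    rhosts="$rhosts$h root
"
done
# The output would be appended to the .rhosts file in the <sapsid>adm
# home directory (and, for test or development systems, the ora<sapsid>
# home directory) on each server.
```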

In configurations that include several application servers or a test or development system, consider increasing the timeout value of the STOP_NET method for Sun Cluster HA for SAP. The default STOP_NET timeout is 60 seconds; you need to increase it only if the hasap_stop_all_instances script takes longer than 60 seconds to execute.

Check the timeout value of the STOP_NET method by using the following command:

# hareg -q sap -T STOP_NET

Note -

The hasap_dbms command can be used only when Sun Cluster HA for SAP is registered but is in the off state. Run the command on only one node, while that node is a member of the cluster. See the hasap_dbms(1M) man page for more information.



Caution -

If the hasap_dbms(1M) command returns an error stating that it cannot add rows to or update the CCD, it might be because another cluster utility is also trying to update the CCD. If this occurs, re-run hasap_dbms(1M) until it runs successfully. After the hasap_dbms(1M) command runs successfully, verify that all necessary rows are included in the resulting CCD by running the command hareg -q sap.

If the hareg(1M) command returns an error, then first restore the original method timeouts by running the command hasap_dbms -f. Second, restore the default dependencies by running the command hasap_dbms -r. After both commands complete successfully, retry the original hasap_dbms(1M) command to configure new dependencies and method timeouts. See the hasap_dbms(1M) man page for more information.


Increase the STOP_NET timeout value by using the following command:

# /opt/SUNWcluster/ha/sap/hasap_dbms -t STOP_NET=new_timeout_value

If you increase the STOP_NET method timeout value, you also must increase the timeouts that the Sun Cluster framework uses when remastering logical hosts during cluster reconfiguration. Use the scconf(1M) command to increase logical host timeout values. Refer to the section on configuring timeouts for cluster transition steps in the Sun Cluster 2.2 System Administration Guide for details about how to increase the timeouts for the cluster framework. Make sure that the loghost_timeout value is at least double the new STOP_NET timeout value.
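As a quick sanity check of the sizing rule above, this sketch computes the minimum loghost_timeout for an assumed new STOP_NET timeout; the value 120 is only a placeholder.

```shell
#!/bin/sh
# Sanity check for the rule above: loghost_timeout must be at least
# double the new STOP_NET timeout. 120 seconds is a placeholder value.
STOP_NET_TIMEOUT=120
MIN_LOGHOST_TIMEOUT=$((STOP_NET_TIMEOUT * 2))
echo "STOP_NET=$STOP_NET_TIMEOUT: set loghost_timeout >= $MIN_LOGHOST_TIMEOUT"
```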

10.2.5 Sun Cluster HA for NFS Considerations

If you have application servers outside the cluster, you must configure Sun Cluster HA for NFS on the central instance logical host. Application servers outside the cluster must NFS-mount the SAP profile directories and executable directories from the SAP central instance. See Chapter 11, Setting Up and Administering Sun Cluster HA for NFS, for detailed procedures on setting up Sun Cluster HA for NFS, and note the following SAP-specific guidelines:

10.3 Overview of Procedures

Table 10-2 summarizes the tasks you must complete to configure SAP.

Table 10-2 High-Level Steps to Install and Configure SAP

Task: Plan the SAP installation
- Read through all guidelines and procedures ("10.1 Sun Cluster HA for SAP Overview" and "10.2 Configuration Guidelines for Sun Cluster HA for SAP")
- Complete the SAP installation worksheet ("10.3.1 Installation Worksheet for Sun Cluster HA for SAP")

Task: Prepare the environment for SAP ("10.4 Preparing the SAP Environment"; see also Chapter 3, Installing and Configuring Sun Cluster Software, Appendix B, Configuring Solstice DiskSuite, and Appendix C, Configuring Sun StorEdge Volume Manager and Cluster Volume Manager)
- Perform all prerequisite installation tasks
- Set up Solaris
- Set up the volume manager
- Create disk groups or disksets
- Create volumes and file systems
- Install Sun Cluster
- Set up PNM
- Set up logical hosts and mount points
- Set up HA-NFS, if necessary
- Adjust kernel parameters
- Create swap space
- Create user and group accounts

Task: Install and configure SAP and the database ("10.5.1 How to Install SAP and the Database")
- Install the SAP central instance and database instance
- Load the database
- Load all reports
- Install the GUI

Task: Enable SAP to run in the cluster ("10.5.2 How to Enable SAP to Run in the Cluster")
- Set up the SAP central instance admin environment
- Modify SAP profile files
- Modify the database environment
- Update /etc/services and create /usr/sap/tmp
- Test the SAP installation

Task: Configure the HA-DBMS ("10.5.2 How to Enable SAP to Run in the Cluster")
- Shut down SAP and the database
- Adjust Oracle alert files and listener files
- Register and activate the database
- Set up the database instance
- Start fault monitoring for the database
- Test switchover of the database

Task: Configure Sun Cluster HA for SAP
- Install and register Sun Cluster HA for SAP ("10.6.1 How to Configure Sun Cluster HA for SAP")
- Configure Sun Cluster HA for SAP ("10.6.1 How to Configure Sun Cluster HA for SAP" and "10.6.2 Configuration Parameters for Sun Cluster HA for SAP")
- Set dependencies, if necessary ("10.7 Setting Data Service Dependencies for SAP")
- Test switchover of Sun Cluster HA for SAP ("10.6.1 How to Configure Sun Cluster HA for SAP")
- Customize and test start and stop scripts for the application servers and test/development systems ("10.2.4 Configuration Options for Application Servers and Test/Development Systems")

10.3.1 Installation Worksheet for Sun Cluster HA for SAP

Complete the following worksheet before beginning the Sun Cluster HA for SAP installation.

Table 10-3 Installation Worksheet for Sun Cluster HA for SAP

Name of cluster: ______________

Number of logical hosts: ______________

Name and IP address of all physical hosts that are potential masters of the CI logical host: ______________

Name and IP address of CI logical host: ______________

SAP system ID (<SAPSID>): ______________

SAP system number: ______________

Name and IP address of all physical hosts that are potential masters of the DB logical host: ______________

Name and IP address of DB logical host (in asymmetric configurations, this is identical to the CI logical host): ______________

Name of NFS logical host (if all application servers are external to the cluster, this is the central instance logical host; if the application servers are inside the cluster, this is the logical host that provides NFS service from the external NFS cluster; see "10.2.5 Sun Cluster HA for NFS Considerations"): ______________

SAP license for each potential master of the CI logical host: ______________

10.4 Preparing the SAP Environment

Before beginning the SAP or Sun Cluster HA for SAP installation procedures, perform the following prerequisite tasks.

Table 10-4 Worksheet: File Systems and Volume Names for the SAP Central Instance

File System Name / Mount Point    Volume Name
/usr/sap/trans                    ______________
/sapmnt/<SAPSID>                  ______________
/usr/sap/<SAPSID>                 ______________

Use Table 10-5 as a worksheet to capture the name of the volume that corresponds to each file system used for the database instance. Refer to the SAP installation guide for the file system sizes recommended for your particular configuration. These are database-dependent file systems.

Table 10-5 Worksheet: File Systems and Volume Names for the SAP Database Instance

File System Name / Mount Point    Volume Name
/oracle/<SAPSID>                  ______________
/oracle/stage/stage_<version>     ______________
/oracle/<SAPSID>/origlogA         ______________
/oracle/<SAPSID>/origlogB         ______________
/oracle/<SAPSID>/mirrlogA         ______________
/oracle/<SAPSID>/mirrlogB         ______________
/oracle/<SAPSID>/saparch          ______________
/oracle/<SAPSID>/sapreorg         ______________
/oracle/<SAPSID>/sapdata1         ______________
/oracle/<SAPSID>/sapdata2         ______________
/oracle/<SAPSID>/sapdata3         ______________
/oracle/<SAPSID>/sapdata4         ______________
/oracle/<SAPSID>/sapdata5         ______________
/oracle/<SAPSID>/sapdata6         ______________

Table 10-6 File Systems and Mount Points for the SAP Central Instance and Database Instance

Disk Group (SSVM)  Diskset (Solstice DiskSuite)  Volume Name  Mount Point
ci_dg              CIloghost                     sap          /usr/sap/<SAPSID>
ci_dg              CIloghost                     saptrans     /usr/sap/trans
ci_dg              CIloghost                     sapmnt       /sapmnt/<SAPSID>
db_dg              DBloghost                     oracle       /oracle/<SAPSID>
db_dg              DBloghost                     stage        /oracle/stage/stage_<version>
db_dg              DBloghost                     origlogA     /oracle/<SAPSID>/origlogA
db_dg              DBloghost                     origlogB     /oracle/<SAPSID>/origlogB
db_dg              DBloghost                     mirrlogA     /oracle/<SAPSID>/mirrlogA
db_dg              DBloghost                     mirrlogB     /oracle/<SAPSID>/mirrlogB
db_dg              DBloghost                     saparch      /oracle/<SAPSID>/saparch
db_dg              DBloghost                     sapreorg     /oracle/<SAPSID>/sapreorg
db_dg              DBloghost                     sapdata1     /oracle/<SAPSID>/sapdata1
db_dg              DBloghost                     sapdata2     /oracle/<SAPSID>/sapdata2
db_dg              DBloghost                     sapdata3     /oracle/<SAPSID>/sapdata3
db_dg              DBloghost                     sapdata4     /oracle/<SAPSID>/sapdata4
db_dg              DBloghost                     sapdata5     /oracle/<SAPSID>/sapdata5
db_dg              DBloghost                     sapdata6     /oracle/<SAPSID>/sapdata6
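To illustrate how the central-instance mount points from Table 10-6 might be prepared, the following dry-run sketch prints the mkdir commands rather than running them; the SAPSID HA1 is a placeholder.

```shell
#!/bin/sh
# Dry-run sketch: print the mkdir commands for the central-instance
# mount points in Table 10-6. SAPSID "HA1" is a placeholder; the
# leading "echo" keeps this from actually creating anything.
SAPSID=HA1
created=""
for dir in /usr/sap/$SAPSID /usr/sap/trans /sapmnt/$SAPSID; do
    echo mkdir -p "$dir"
    created="$created $dir"
done
```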

Table 10-7 File Systems to Share in HA-NFS to External SAP Application Servers

File Systems to Share to External Application Servers 

/usr/sap/trans

/sapmnt/<SAPSID>/exe

/sapmnt/<SAPSID>/profile

/sapmnt/<SAPSID>/global
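As a sketch of how the Table 10-7 file systems might be shared, the following dfstab-style entries use the placeholder SAPSID HA1 and assume read-write access; adapt the share options and the dfstab file location to your HA-NFS setup as described in Chapter 11.

```shell
# Hypothetical dfstab entries sharing the Table 10-7 file systems to
# external application servers. SAPSID "HA1" and the "rw" option are
# placeholders; adjust both for your configuration.
share -F nfs -o rw /usr/sap/trans
share -F nfs -o rw /sapmnt/HA1/exe
share -F nfs -o rw /sapmnt/HA1/profile
share -F nfs -o rw /sapmnt/HA1/global
```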

Table 10-8 Home Directory Paths for SAP User Accounts

User           Home directory
<sapsid>adm    /usr/sap/<SAPSID>/home
ora<sapsid>    /oracle/<SAPSID>


Note -

For SAP 4.0b, read OSS note 0100125 for special steps required when creating user home directories outside of the /home location.


After all of these prerequisites have been fulfilled, proceed to "10.5 Installing and Configuring SAP and the Database".

10.5 Installing and Configuring SAP and the Database

This section describes how to install and configure SAP.

10.5.1 How to Install SAP and the Database

  1. Verify that you have completed all tasks listed in "10.4 Preparing the SAP Environment".

  2. Verify that all nodes are running in the cluster.

  3. Switch over all logical hosts to the node from which you will install SAP and the database.

    # scadmin switch clustername phys-hahost1 CIloghost DBloghost ...
    
  4. Create the SAP installation directory and begin SAP installation.

    Refer to the "R/3 Installation on UNIX" guidelines in the SAP documentation for details.


    Note -

    Read all SAP OSS notes prior to beginning the SAP installation.


    1. Install the central instance and database instance on the node currently mastering the central instance and database instance logical host.

      (For SAP 3.1x only) When installing SAP using R3INST, specify the physical host name of the current master of the database logical host when prompted for "Database Server." After the installation is complete, you must manually adjust various files to refer to the logical host where the database resides.

      (For SAP 4.0x only) When installing SAP using R3SETUP, select the CENTRDB.SH script to generate the installation command file.

    2. Continue the SAP installation to install the central instance, to create and load the database, to load all reports, and to install the R/3 Frontend (GUI).

10.5.2 How to Enable SAP to Run in the Cluster

  1. Set up the SAP central instance administrative environment.

    During SAP installation, SAP creates files and shell scripts on the server on which the SAP central instance is installed. These files and scripts use physical host names. Follow these steps to replace all occurrences of physical host names with logical host names.


    Note -

    Make backup copies of all files before performing the following steps.


    First, shut down the SAP central instance and database using the following command:

    # su - <sapsid>adm
    $ stopsap all
    ...
    # su - ora<sapsid>
    $ lsnrctl stop
    

    Note -

    Become the <sapsid>adm user before editing these files.


    1. Revise the .cshrc file in the <sapsid>adm home directory.

      On the server on which the SAP central instance is installed, the .cshrc file contains aliases that use Sun Cluster physical host names. Replace the physical host names with the central instance logical host name.

      (For SAP 3.1x only) The resulting .cshrc file should look similar to the following example, in which CIloghost is the logical host containing the central instance and DBloghost is the logical host containing the database. If the central instance and database are on the same logical host, then use that logical host name for the substitutions.

      # aliases
      alias startsap "$HOME/startsap_CIloghost_00"
      alias stopsap "$HOME/stopsap_CIloghost_00"

      # RDBMS environment
      if (-e $HOME/.dbenv_DBloghost.csh) then
         source $HOME/.dbenv_DBloghost.csh
      else if (-e $HOME/.dbenv.csh) then
         source $HOME/.dbenv.csh
      endif

      (For SAP 4.0x only) The resulting .cshrc file should look similar to the following example, in which CIloghost is the logical host containing the central instance and DBloghost is the logical host containing the database. If the central instance and database are on the same logical host, then use that logical host name for the substitutions:

      if ( -e $HOME/.sapenv_CIloghost.csh ) then
         source $HOME/.sapenv_CIloghost.csh
      else if ( -e $HOME/.sapenv.csh ) then
         source $HOME/.sapenv.csh
      endif

      # RDBMS environment
      if ( -e $HOME/.dbenv_DBloghost.csh ) then
         source $HOME/.dbenv_DBloghost.csh
      else if ( -e $HOME/.dbenv.csh ) then
         source $HOME/.dbenv.csh
      endif
    2. (For SAP 4.0x only) Rename the file .sapenv_physicalhost.csh to .sapenv_CIloghost.csh, and edit it to replace occurrences of the physical host name with the logical host name.

      First rename the file, replacing the physical host name with the central instance logical host name.

      $ mv .sapenv_physicalhost.csh .sapenv_CIloghost.csh
      

      Then edit the aliases in the file. For example:

      alias startsap "$HOME/startsap_CIloghost_00"
      alias stopsap "$HOME/stopsap_CIloghost_00"
    3. Rename the .dbenv_physicalhost.csh file.

      Rename the .dbenv_physicalhost.csh file to .dbenv_DBloghost.csh. If the central instance and database are on the same logical host, use that logical host name for the substitution.

      $ mv .dbenv_physicalhost.csh .dbenv_DBloghost.csh
      
    4. (For SAP 4.0x only) Edit the .dbenv_DBloghost.csh file to set the ORA_NLS environment variable to point to the appropriate subdirectories of /var/opt/oracle for the database client configuration files. Also, set the TNS_ADMIN environment variable to point to the /var/opt/oracle directory.

      The .dbenv_DBloghost.csh file is located in the <sapsid>adm home directory.

      #setenv ORA_NLS /oracle/<SAPSID>/ocommon/NLS_723/admin/data
      setenv ORA_NLS /var/opt/oracle/ocommon/NLS_723/admin/data

      #setenv ORA_NLS32 /oracle/<SAPSID>/ocommon/NLS_733/admin/data
      setenv ORA_NLS32 /var/opt/oracle/ocommon/NLS_733/admin/data

      #setenv ORA_NLS33 /oracle/<SAPSID>/ocommon/NLS_804/admin/data
      setenv ORA_NLS33 /var/opt/oracle/ocommon/NLS_804/admin/data

      ...

      # setenv TNS_ADMIN @TNS_ADMIN@
      setenv TNS_ADMIN /var/opt/oracle
      ...
    5. Rename and revise the SAP instance startsap and stopsap shell scripts in the <sapsid>adm home directory.

      On the server on which the SAP central instance is installed, the <sapsid>adm home directory contains shell scripts that include physical host names. Rename these shell scripts by replacing the physical host names with logical host names. In this example, CIloghost represents the logical host name of the central instance:

      $ mv startsap_physicalhost_00 startsap_CIloghost_00
      $ mv stopsap_physicalhost_00 stopsap_CIloghost_00
      

      The startsap_CIloghost_00 and stopsap_CIloghost_00 shell scripts specify physical host names in their START_PROFILE parameters. Replace the physical host name with the central instance logical host name in the START_PROFILE parameters in both files.

      ...
       START_PROFILE="START_DVEBMGS00_CIloghost"
       ...
    6. Revise the SAP central instance profile files.

      During SAP installation, SAP creates three profile files on the server on which the SAP central instance is installed. These files use physical host names. Use these steps to replace all occurrences of physical host names with logical host names. To revise these files, you must be user <sapsid>adm, and you must be in the profile directory.

      • Rename the START_DVEBMGS00_physicalhost and <SAPSID>_DVEBMGS00_physicalhost profile files.

        In the /sapmnt/<SAPSID>/profile directory, replace the physical host name with the logical host name. In this example, the <SAPSID> is HA1:

      $ cdpro; pwd
      /sapmnt/HA1/profile
      $ mv START_DVEBMGS00_physicalhost START_DVEBMGS00_CIloghost
      $ mv HA1_DVEBMGS00_physicalhost HA1_DVEBMGS00_CIloghost
      
      • In the START_DVEBMGS00_CIloghost profile file, replace occurrences of the physical host name with the central instance logical host name for all `pf=' arguments.

        In this example, the <SAPSID> is HA1:

      ...
      Execute_00 =local $(DIR_EXECUTABLE)/sapmscsa -n \
      pf=$(DIR_PROFILE)/HA1_DVEBMGS00_CIloghost
      Start_Program_01 =local $(_MS) pf=$(DIR_PROFILE)/HA1_DVEBMGS00_CIloghost
      Start_Program_02 =local $(_DW) pf=$(DIR_PROFILE)/HA1_DVEBMGS00_CIloghost
      Start_Program_03 =local $(_CO) -F pf=$(DIR_PROFILE)/HA1_DVEBMGS00_CIloghost
      Start_Program_04 =local $(_SE) -F pf=$(DIR_PROFILE)/HA1_DVEBMGS00_CIloghost
      ...
      • Edit the <SAPSID>_DVEBMGS00_CIloghost file to add a new entry for the SAPLOCALHOST parameter.

        Add this entry only for the central instance profile. Set the SAPLOCALHOST parameter to be the central instance logical host name. This parameter allows external application servers to locate the central instance by using the logical host name.

      ...
      SAPLOCALHOST =CIloghost
      ...
      • Edit the DEFAULT.PFL file to replace occurrences of the physical host name with the logical host name.

        For each of the rdisp/ parameters, replace the physical host name with the central instance logical host name. For the SAPDBHOST parameter, enter the logical host name of the database. If the central instance and database are installed on the same logical host, enter the central instance logical host name. If the database is installed on a different logical host, use the database logical host name instead. In this example, CIloghost represents the logical host name of the central instance, DBloghost represents the logical host name of the database, and HA1 is the <SAPSID>:

      ...
      SAPDBHOST =DBloghost
      rdisp/mshost =CIloghost
      rdisp/sna_gateway =CIloghost
      rdisp/vbname =CIloghost_HA1_00
      rdisp/enqname =CIloghost_HA1_00
      rdisp/btcname =CIloghost_HA1_00
      ...
    7. Revise the TPPARAM transport configuration file.

      Change to the directory containing the transport configuration file.

      # cd /usr/sap/trans/bin
      

      Replace the database physical host name with the database logical host name. In this example, DBloghost represents the database logical host name and HA1 is the <SAPSID>. For example:

      ...
       HA1/dbhost = DBloghost
      ...
    8. (For SAP 4.0x only) In the TPPARAM file, also set /var/opt/oracle to be the location for the database client configuration files.

      ...
       HA1/dbconfpath = /var/opt/oracle
       ...
  2. Modify the environment for the SAP database user.

    During SAP installation, SAP creates Oracle files that use Sun Cluster physical host names. Replace the physical host names with logical host names using the following steps.


    Note -

    Become the ora<sapsid> user before editing these files.


    1. Revise the .cshrc file in the ora<sapsid> home directory.

      The .cshrc file on the server on which SAP was installed contains aliases that use Sun Cluster physical host names. Replace the physical host names with logical host names.

      (For SAP 3.1x only) The resulting file should look similar to the following example, in which CIloghost represents the central instance logical host and DBloghost is the database logical host. If the central instance and database reside on the same logical host, use the central instance logical host name for each of the substitutions:

      # aliases
      alias startsap "$HOME/startsap_CIloghost_00"
      alias stopsap "$HOME/stopsap_CIloghost_00"

      # RDBMS environment
      if (-e $HOME/.dbenv_DBloghost.csh) then
         source $HOME/.dbenv_DBloghost.csh
      else if (-e $HOME/.dbenv.csh) then
         source $HOME/.dbenv.csh
      endif

      (For SAP 4.0x only) The resulting .cshrc file should look similar to the following example, in which CIloghost is the central instance logical host and DBloghost is the database logical host. If the central instance and database reside on the same logical host, use the central instance logical host name for each of the substitutions:

      if ( -e $HOME/.sapenv_CIloghost.csh ) then
         source $HOME/.sapenv_CIloghost.csh
      else if ( -e $HOME/.sapenv.csh ) then
         source $HOME/.sapenv.csh
      endif

      # RDBMS environment
      if ( -e $HOME/.dbenv_DBloghost.csh ) then
         source $HOME/.dbenv_DBloghost.csh
      else if ( -e $HOME/.dbenv.csh ) then
         source $HOME/.dbenv.csh
      endif
    2. (For SAP 4.0x only) Rename the .sapenv_physicalhost.csh to .sapenv_CIloghost.csh.

      In this example, CIloghost represents the central instance logical host name.

      $ mv .sapenv_physicalhost.csh .sapenv_CIloghost.csh
      
    3. Rename the .dbenv_physicalhost.csh file.

      Replace the physical host name with the database logical host name in the .dbenv_physicalhost.csh file name. If the central instance and database are on the same logical host, use the central instance logical host name for the substitution. In this example, DBloghost represents the database logical host:

      $ mv .dbenv_physicalhost.csh .dbenv_DBloghost.csh
      
    4. (For SAP 4.0x only) Edit the .dbenv_DBloghost.csh file to set the ORA_NLS environment variable to point to the appropriate subdirectories of /var/opt/oracle for the database client configuration files. Also, set the TNS_ADMIN environment variable to point to the /var/opt/oracle directory.

      The .dbenv_DBloghost.csh file is located in the ora<sapsid> home directory.

      #setenv ORA_NLS /oracle/<SAPSID>/ocommon/NLS_723/admin/data
       setenv ORA_NLS /var/opt/oracle/ocommon/NLS_723/admin/data
      
       #setenv ORA_NLS32 /oracle/<SAPSID>/ocommon/NLS_733/admin/data
       setenv ORA_NLS32 /var/opt/oracle/ocommon/NLS_733/admin/data
      
       #setenv ORA_NLS33 /oracle/<SAPSID>/ocommon/NLS_804/admin/data
       setenv ORA_NLS33 /var/opt/oracle/ocommon/NLS_804/admin/data
      
       ...
      
       # setenv TNS_ADMIN @TNS_ADMIN@
       setenv TNS_ADMIN /var/opt/oracle
       ...
  3. Edit the Oracle SQL*Net configuration files to replace occurrences of the physical host name with the database logical host name.

    If the central instance and database instance are on the same logical host, use the central instance logical host name for the substitutions.

  4. Make the SQL*Net configuration files locally accessible on every potential master.

    Use the following steps to accomplish this.

    1. Replace all occurrences of physical host names with the database logical host name in the listener.ora and tnsnames.ora files.

      (For SAP 3.1x only) The listener.ora file is located at /etc/listener.ora. The tnsnames.ora file is located at /usr/sap/trans/tnsnames.ora.

      (For SAP 4.0x only) The listener.ora file is located at /oracle/<SAPSID>/network/admin/listener.ora. The tnsnames.ora file is located at /oracle/<SAPSID>/network/admin/tnsnames.ora.
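      The substitution itself can be scripted with sed. The following sketch is only an illustration: the file content, the physical host name physhost1, and the working directory in /tmp are hypothetical, and you should apply the same pattern to your actual listener.ora and tnsnames.ora files after backing them up.

      ```shell
      # Demonstration in /tmp; substitute your real file locations.
      mkdir -p /tmp/sqlnet-demo && cd /tmp/sqlnet-demo
      cat > tnsnames.ora <<'EOF'
      HA1.world =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = physhost1)(PORT = 1527)))
      EOF
      # Keep a backup, then replace every occurrence of the physical host name
      cp tnsnames.ora tnsnames.ora.orig
      sed 's/physhost1/DBloghost/g' tnsnames.ora.orig > tnsnames.ora
      grep HOST tnsnames.ora    # the ADDRESS line should now name DBloghost
      ```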

    2. Relocate the SQL*Net configuration files on the node where the database is installed.

      (For SAP 3.1x only) During installation, SAP places the listener.ora file in the local /etc directory of the node where the installation took place, and creates a soft link in /usr/sap/trans. Move the listener.ora file to /var/opt/oracle. Reset soft links in /usr/sap/trans to point to the new location. Move the tnsnames.ora and sqlnet.ora files to the /var/opt/oracle directory.

      $ su
      # mv /etc/listener.ora /var/opt/oracle
      # rm /usr/sap/trans/listener.ora
      # ln -s /var/opt/oracle/listener.ora /usr/sap/trans
      # mv /usr/sap/trans/tnsnames.ora /var/opt/oracle
      # ln -s /var/opt/oracle/tnsnames.ora /usr/sap/trans
      # mv /usr/sap/trans/sqlnet.ora /var/opt/oracle
      # ln -s /var/opt/oracle/sqlnet.ora /usr/sap/trans
      

      (For SAP 4.0x only) SAP places the listener.ora file in the default directory, $ORACLE_HOME/network/admin. Use the steps below to move the listener.ora file to /var/opt/oracle, and reset soft links in the original directory to point to the new location. Move all other SQL*Net files to the new location and reset their links in the same way.

      $ su
      # mv /oracle/<SAPSID>/network/admin/listener.ora /var/opt/oracle
      # ln -s /var/opt/oracle/listener.ora /oracle/<SAPSID>/network/admin
      # mv /oracle/<SAPSID>/network/admin/tnsnames.ora /var/opt/oracle
      # ln -s /var/opt/oracle/tnsnames.ora /oracle/<SAPSID>/network/admin
      # mv /oracle/<SAPSID>/network/admin/sqlnet.ora /var/opt/oracle
      # ln -s /var/opt/oracle/sqlnet.ora /oracle/<SAPSID>/network/admin
      # mv /oracle/<SAPSID>/network/admin/protocol.ora /var/opt/oracle
      # ln -s /var/opt/oracle/protocol.ora /oracle/<SAPSID>/network/admin
      
    3. (For SAP 4.0x only) Copy the Oracle client configuration files to the common /var/opt/oracle directory.

      # cd /var/opt/oracle; mkdir  rdbms ocommon lib
      # cd /var/opt/oracle/rdbms; cp -R /oracle/<SAPSID>/rdbms/mesg .
      # cd /var/opt/oracle/ocommon; cp -R /oracle/<SAPSID>/ocommon/NLS* .
      # cd /var/opt/oracle/lib; cp /oracle/<SAPSID>/lib/libclntsh.so.1.0 .
      
    4. Distribute the SQL*Net configuration files to all potential masters of the central instance and database instance.

      Copy or transfer the SQL*Net configuration files from the node on which the database was initially installed into the local directory /var/opt/oracle on all potential central instance and database masters. In this example, physicalhost2 represents the name of the backup physical host.

      $ su
      # tar cvf - /var/opt/oracle | rsh physicalhost2 tar xvf -
      

      Note -

      As part of ongoing HA-DBMS maintenance, whenever these configuration files are modified, they must be resynchronized on all potential master nodes.


  5. Update the /etc/services files on all potential masters to include the new SAP service entries.

    The /etc/services files must be identical on all nodes.
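    For example, for a hypothetical system HA1 with instance number 00, the SAP entries might look like the following sketch. The port numbers follow the usual SAP 32NN/33NN/36NN convention; verify the actual values created by your SAP installation rather than copying these.

    ```
    # SAP R/3 entries for system HA1, instance 00 (hypothetical values)
    sapdp00     3200/tcp    # dispatcher port
    sapgw00     3300/tcp    # gateway port
    sapmsHA1    3600/tcp    # message server port
    ```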

  6. Create the /usr/sap/tmp directory on all nodes.

    The saposcol program will rely on this directory.
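    For example, run the following as root on each node. The ha1adm:sapsys ownership shown is hypothetical; substitute your own <sapsid>adm account and group.

    ```shell
    # Create the working directory that saposcol relies on, on every node
    mkdir -p /usr/sap/tmp
    # Optionally give the SAP administration account ownership, e.g.:
    # chown ha1adm:sapsys /usr/sap/tmp
    ```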

  7. Test the SAP installation.

    Test the SAP installation by manually shutting down SAP, manually switching the logical host between the potential master nodes, and then manually starting SAP on the backup node. This will verify that all kernel parameters, service port entries, file systems and mount points, and user/group permissions are properly set on all potential masters of the logical hosts.

    1. Start the central instance and database.

      # su - ora<sapsid>
      $ lsnrctl start
      ...
       # su - <sapsid>adm
      $ startsap all
      
    2. Run the GUI and verify that SAP comes up correctly.

      In this example, the dispatcher port number is 3200.

      # su - <sapsid>adm
      $ setenv DISPLAY your_workstation:0
      $ sapgui /H/CIloghost/S/3200
      
    3. Verify that SAP can connect to the database.

      # su - <sapsid>adm
      $ R3trans -d
      
    4. Run the saplicense utility to get a CUSTOMER KEY for the current node.

      You will need a SAP license for all potential masters of the central instance logical host.

    5. Stop SAP and the database.

      # su - <sapsid>adm
      $ stopsap all
      ...
       # su - ora<sapsid>
      $ lsnrctl stop
      
  8. For each remaining node that is a potential master of the central instance logical host, switch the central instance logical host to that node and repeat the test sequence described in Step 7.

    # scadmin switch clustername phys-hahost2 CIloghost
    

10.5.3 How to Configure the HA-DBMS

  1. Shut down SAP and the database.

    # su - <sapsid>adm
    $ stopsap all
    ...
     # su - ora<sapsid>
    $ lsnrctl stop
    
  2. (For SAP 3.1x only) Adjust the Oracle alert file parameter in the init<SAPSID>.ora file.

    By default, SAP uses the prefix "?/..." in the init<SAPSID>.ora file to denote the relative path from $ORACLE_HOME. The Sun Cluster fault monitors cannot parse the prefix, but instead require the full path name to the alert file. Therefore, you must edit the /oracle/<SAPSID>/dbs/init<SAPSID>.ora file and define the dump destination parameters as follows:

    background_dump_dest = /oracle/<SAPSID>/saptrace/background
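    Depending on your Oracle release, init<SAPSID>.ora may contain additional dump destination parameters that use the "?/" prefix; if present, convert them to full path names in the same way. The directory names below are illustrative; check your actual saptrace layout:

    ```
    background_dump_dest = /oracle/<SAPSID>/saptrace/background
    user_dump_dest       = /oracle/<SAPSID>/saptrace/usertrace
    core_dump_dest       = /oracle/<SAPSID>/saptrace/background
    ```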
  3. Register and activate the database.

    Run the hareg(1M) command from only one node. For example, for Oracle:

    # hareg -s -r oracle -h DBloghost
     # hareg -y oracle
    
  4. Set up the database instance.

    See Chapter 5, Setting Up and Administering Sun Cluster HA for Oracle, for more information.

    For example, for Oracle:

    # haoracle insert <SAPSID> DBloghost 60 10 120 300 \
     user/password /oracle/<SAPSID>/dbs/init<SAPSID>.ora LISTENER
    
  5. Start fault monitoring for the database instance.

    For example:

    # haoracle start <SAPSID>
    
  6. Test switchover of the HA-DBMS.

    For example:

    # scadmin switch clustername phys-hahost2 DBloghost
    

10.6 Configuring Sun Cluster HA for SAP

This section describes how to register and configure Sun Cluster HA for SAP.

10.6.1 How to Configure Sun Cluster HA for SAP

  1. If Sun Cluster HA for SAP has not yet been installed, install it now by running scinstall(1M) on all nodes and adding the Sun Cluster HA for SAP data service.

    See "3.2 Installation Procedures", for details. If the cluster is already running, you must stop it before installing the data service.

  2. Register the Sun Cluster HA for SAP data service by running the hareg(1M) command.

    Run this command on only one node:

    # hareg -s -r sap -h CIloghost
    
  3. Verify that all nodes are running in the cluster.

  4. Create a new Sun Cluster HA for SAP instance using the hadsconfig(1M) command.

    The hadsconfig(1M) command is used to create, edit, and delete instances of the Sun Cluster HA for SAP data service. The configuration parameters are described in "10.6.2 Configuration Parameters for Sun Cluster HA for SAP".

    Run this command on only one node, while all nodes are running in the cluster:

    # hadsconfig
    
  5. If Sun Cluster HA for SAP is dependent upon other data services within the same logical host, set dependencies on those data services.

    See "10.7.1 How to Set a Data Service Dependency for SAP". If you do set dependencies, start all services on which SAP depends before proceeding.

  6. Stop the central instance before starting SAP under the control of Sun Cluster HA for SAP.

    # su - <sapsid>adm
    $ stopsap r3
    

    Caution - Caution -

    The SAP central instance must be stopped before Sun Cluster HA for SAP is turned on.


  7. Turn on the Sun Cluster HA for SAP instance.

    # hareg -y sap
    
  8. Test switchover of Sun Cluster HA for SAP.

    For example:

    # scadmin switch clustername phys-hahost2 CIloghost
    
  9. (Optional) If you have application servers or a test/development system, customize and test the hasap_start_all_instances and hasap_stop_all_instances scripts.

    See "10.2.4 Configuration Options for Application Servers and Test/Development Systems", for details. Test switchover of Sun Cluster HA for SAP and verify start and stop of application servers. Verify that the test/development system stops when the central instance logical host is switched to the test/development system physical host.

    # scadmin switch clustername phys-hahost1 CIloghost
    

10.6.2 Configuration Parameters for Sun Cluster HA for SAP

This section describes the information you supply to hadsconfig(1M) to create configuration files for the Sun Cluster HA for SAP data service. The hadsconfig(1M) command uses templates to create these configuration files. The templates contain some default, some hard coded, and some unspecified parameters. You must provide values for all parameters that are unspecified.

The fault probe parameters, in particular, can affect the performance of Sun Cluster HA for SAP. Tuning the probe interval value too low (increasing the frequency of fault probes) might degrade system performance, and might also result in false takeovers or attempted restarts when the system is merely slow.

Configure Sun Cluster HA for SAP by supplying the hadsconfig(1M) command with parameters listed in Table 10-9.

Table 10-9 Sun Cluster HA for SAP Configuration Parameters

Name of the Instance 

Nametag used internally as an identifier for the instance. The log messages generated by Sun Cluster refer to this nametag. The hadsconfig(1M) command prefixes the package name to the value you supply here. You can use the SAPSID for this nametag. For example, if you specify HA1, hadsconfig(1M) produces SUNWscsap_HA1.

Logical Host 

Name of the logical host that provides service for this instance of Sun Cluster HA for SAP. This name should be the logical host name for the central instance. 

Time Between Probes 

The interval, in seconds, of the fault probing cycle. The default value is 60 seconds. 

SAP R/3 System ID 

This is the SAP system name or <SAPSID>.

Central Instance ID 

This is the SAP system number or Instance ID. For example, the CI is normally "00." 

SAP Admin Login Name 

The name used by Sun Cluster HA for SAP to log in to the SAP central instance administrative account. This name must exist on all central instance and application server hosts. This is the <sapsid>adm. For example, "ha1adm."

Database Admin Login Name 

This is the SAP database administrator's account. For SAP with Oracle, this is the ora<sapsid>. For example, oraha1.

Database Logical Host Name 

Name of the logical host for the database used by SAP. This might be the same as the logical host name used for the central instance, depending on your configuration. 

Log Database Warnings 

Possible values are "y" or "n." If set to "y" and the Sun Cluster HA for SAP probe detects that it cannot connect to the database during a probe cycle, a warning message appears saying the database is unavailable. For example, this occurs if the database logical host is in maintenance mode or if the database is being relocated to another node in the cluster. If the parameter is set to "n," then no messages appear if the probe cannot connect to the database.  

Central Instance Start Retry Count 

This must be an integer greater than or equal to 1. This is the number of times Sun Cluster HA for SAP should attempt to start the central instance before giving up. This value is also the number of times the Sun Cluster HA for SAP fault monitor will probe in grace mode before entering normal probe mode. While in grace mode, the probe will not perform a restart or initiate a failover of the central instance if the probe detects that the central instance is not yet up. Instead, the fault monitor will report the status of all probes and will continue in grace mode until all probes pass, or until the retry count has been exhausted. 

Central Instance Start Retry Interval 

This is the number of seconds Sun Cluster HA for SAP should wait between each attempt to start the central instance. This value is also the number of seconds that the Sun Cluster HA for SAP fault monitor will sleep (between probe attempts) while in grace mode.  

Time Allowed to Stop All Instances Before Central Instance Starts 

This must be an integer greater than or equal to 0. This parameter dictates how much time (in seconds) the hasap_stop_all_instances script is allowed to run before the central instance is started. If set to 0, then hasap_stop_all_instances is run in the background while the central instance is being started. If set to a positive integer, then hasap_stop_all_instances is run for that amount of time in the foreground before the central instance is started.

Allow the Central Instance to Start if Foregrounded Stop All Instances Returns Error 

This flag should be set to either "y" or "n." It determines whether the central instance should be started when the hasap_stop_all_instances script returns a non-zero exit code or does not complete in the time specified by the "Time Allowed to Stop All Instances Before Central Instance Starts" parameter. If set to "n" and that time value is greater than 0, then whenever hasap_stop_all_instances does not complete in the configured time or returns a non-zero exit status, the central instance is not started and the fault monitors take action based on the other configuration parameters. If set to "y," the central instance is started regardless of whether hasap_stop_all_instances returns an error code or finishes within the configured time.

Number of Central Instance Restarts on Local Node 

This must be an integer greater than or equal to 0. This dictates how many times the SAP central instance will be restarted on the local node before giving up, after a failure has been detected. When this number of restarts has been exhausted, Sun Cluster HA for SAP either issues a failover request, if permitted by the "Allow Central Instance Failover" parameter, or does nothing to correct the failure detected by the fault monitor. 

Number of Probe Successes to Reset the Restart Count 

This parameter should be an integer that is greater than or equal to 0. If set to a positive integer, then after that many consecutive successful probes, the count of restarts done so far on the local node will be reset to 0. For example, if the value for "Number of Central Instance Restarts on Local Node" parameter is 1 and the value for "Number of Probe Successes to Reset the Restart Count" is 60, then after the first failure occurs, the probe will try to restart the central instance on the local node. If this restart succeeds, then after 60 successful probes, the restart count will be reset to 0, allowing the probe to do another restart if it detects another failure. If the parameter "Number of Probe Successes to Reset the Restart Count" is set to 0, then the restart count is never reset. This means that the number of restarts set in the parameter "Number of Central Instance Restarts on Local Node" is the absolute number of restarts that will be done on the local node before failing over. 

Allow Central Instance Failover 

Possible values are "y" or "n." If set to "y" and Sun Cluster HA for SAP detects an error in the SAP instance it is monitoring and the "Number of Central Instance Restarts on Local Node" count has been exhausted, then Sun Cluster HA for SAP issues a request to relocate the instance's logical host to another cluster node. If this flag is set to "n," then even if an error is detected and all of the local restarts have been exhausted, Sun Cluster HA for SAP will not cause a relocation of this instance's logical host. When this occurs, the central instance is left in its failed state, and the probe exits.

10.7 Setting Data Service Dependencies for SAP

Setting a dependency with hasap_dbms is necessary only when you need to control the order in which data services are started and stopped within a single logical host. There is no mechanism for setting dependencies between data services configured on two different logical hosts.

If Sun Cluster HA for Oracle or Sun Cluster HA for NFS is configured on the same logical host as Sun Cluster HA for SAP, then you should make Sun Cluster HA for SAP dependent on those data services. You can use the hasap_dbms command to create or remove such a dependency. These dependencies affect the order in which the services are started and stopped: Sun Cluster HA for Oracle and Sun Cluster HA for NFS should always be started before Sun Cluster HA for SAP, and Sun Cluster HA for SAP should always be stopped before the other data services are stopped.


Caution - Caution -

If Sun Cluster HA for Oracle or Sun Cluster HA for NFS is not configured on the same logical host as Sun Cluster HA for SAP, then do not use the hasap_dbms command.


10.7.1 How to Set a Data Service Dependency for SAP

To set a data service dependency, issue one of the hasap_dbms commands described below.


Note -

The hasap_dbms command can be used only when Sun Cluster HA for SAP is registered but is in the off state. Run the command on only one node, while that node is a member of the cluster. See the hasap_dbms(1M) man page for more information.



Caution - Caution -

If the hasap_dbms(1M) command returns an error stating that it cannot add rows to or update the CCD, it might be because another cluster utility is also trying to update the CCD. If this occurs, re-run hasap_dbms(1M) until it runs successfully. After the hasap_dbms(1M) command runs successfully, verify that all necessary rows are included in the resulting CCD by running the command hareg -q sap.

If the hareg(1M) command returns an error, first restore the original method timeouts by running the command hasap_dbms -f, and then restore the default dependencies by running the command hasap_dbms -r. After both commands complete successfully, retry the original hasap_dbms(1M) command to configure the new dependencies and method timeouts. See the hasap_dbms(1M) man page for more information.
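One way to follow this advice is a small retry loop. This is only a sketch, assuming a Bourne shell; the hasap_dbms invocation shown in the comment is the oracle,nfs variant from the procedure below, and you should substitute whichever form applies to your configuration.

```shell
# Retry a command until it succeeds (e.g., hasap_dbms while the CCD is busy)
retry() {
    until "$@"; do
        echo "command failed; retrying in 10 seconds..." >&2
        sleep 10
    done
}

# Hypothetical usage on the cluster node:
# retry /opt/SUNWcluster/ha/sap/hasap_dbms -d oracle,nfs
# hareg -q sap
retry true && echo "retry helper defined"
```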


  1. Set the data service dependency using one of the following commands.

    If you are using only Sun Cluster HA for NFS and Sun Cluster HA for SAP on the same logical host, use the following command:

    # /opt/SUNWcluster/ha/sap/hasap_dbms -d nfs
    

    If you are using only Sun Cluster HA for Oracle and Sun Cluster HA for SAP on the same logical host, use the following command:

    # /opt/SUNWcluster/ha/sap/hasap_dbms -d oracle
    

    If you are using Sun Cluster HA for Oracle, Sun Cluster HA for NFS, and Sun Cluster HA for SAP on the same logical host, use the following command:

    # /opt/SUNWcluster/ha/sap/hasap_dbms -d oracle,nfs
    
  2. Check the dependencies set for Sun Cluster HA for SAP using the following command:

    # hareg -q sap -D
    

10.7.2 How to Remove a Data Service Dependency for SAP

You can remove the dependencies set for Sun Cluster HA for SAP by running the hasap_dbms -r command, which removes all of the dependencies at once.


Note -

The hasap_dbms command can be used only when Sun Cluster HA for SAP is registered but is in the off state. Run the command on only one node, while that node is a member of the cluster. See the hasap_dbms(1M) man page for more information.



Caution - Caution -

If the hasap_dbms(1M) command returns an error stating that it cannot add rows to or update the CCD, it might be because another cluster utility is also trying to update the CCD. If this occurs, re-run hasap_dbms(1M) until it runs successfully. After the hasap_dbms(1M) command runs successfully, verify that all necessary rows are included in the resulting CCD by running the command hareg -q sap.

If the hareg(1M) command returns an error, first restore the original method timeouts by running the command hasap_dbms -f, and then restore the default dependencies by running the command hasap_dbms -r. After both commands complete successfully, retry the original hasap_dbms(1M) command to configure the new dependencies and method timeouts. See the hasap_dbms(1M) man page for more information.


  1. Remove all of the dependencies set for Sun Cluster HA for SAP, using the following command:

    # /opt/SUNWcluster/ha/sap/hasap_dbms -r
    
  2. Check the dependencies set for Sun Cluster HA for SAP, using the following command:

    # hareg -q sap -D