Sun Cluster Data Service for SAP liveCache Guide for Solaris OS

Installing and Configuring Sun Cluster HA for SAP liveCache

This chapter contains the procedures for installing and configuring Sun Cluster HA for SAP liveCache.

Sun Cluster HA for SAP liveCache Overview

Use the information in this section to understand how Sun Cluster HA for SAP liveCache makes liveCache highly available.

For conceptual information on scalable services, see the Sun Cluster Concepts Guide for Solaris OS.

To eliminate a single point of failure in an SAP Advanced Planner & Optimizer (APO) System, Sun Cluster HA for SAP liveCache provides fault monitoring and automatic failover for liveCache and fault monitoring and automatic restart for SAP xserver. The following table lists the data services that best protect SAP Supply Chain Management (SCM) components in a Sun Cluster configuration. Figure 1–1 also illustrates the data services that best protect SAP SCM components in a Sun Cluster configuration.

Table 1–1 Protection of liveCache Components

liveCache Component 

Protected by 

SAP APO Central Instance 

Sun Cluster HA for SAP 

The resource type is SUNW.sap_ci_v2.

For more information on this data service, see Sun Cluster Data Service for SAP Guide for Solaris OS.

SAP APO database 

All highly available databases that are supported with Sun Cluster software and by SAP. 

SAP APO Application Server 

Sun Cluster HA for SAP 

The resource type is SUNW.sap_as_v2.

For more information on this data service, see Sun Cluster Data Service for SAP Guide for Solaris OS.

SAP xserver 

Sun Cluster HA for SAP liveCache 

The resource type is SUNW.sap_xserver.

SAP liveCache database 

Sun Cluster HA for SAP liveCache 

The resource type is SUNW.sap_livecache.

NFS file system 

Sun Cluster HA for NFS 

The resource type is SUNW.nfs.

For more information on this data service, see Sun Cluster Data Service for Network File System (NFS) Guide for Solaris OS.

Figure 1–1 Protection of liveCache Components

This illustration shows the protection of the liveCache components. The preceding table also outlines the protection of these components.

Installing and Configuring Sun Cluster HA for SAP liveCache

Table 1–2 lists the tasks for installing and configuring Sun Cluster HA for SAP liveCache. Perform these tasks in the order that they are listed.

Table 1–2 Task Map: Installing and Configuring Sun Cluster HA for SAP liveCache

Task 

For Instructions, Go To 

Plan the Sun Cluster HA for SAP liveCache installation 

Your SAP documentation 

Planning the Sun Cluster HA for SAP liveCache Installation and Configuration

Prepare the nodes and disks 

How to Prepare the Nodes

Install and configure liveCache 

How to Install and Configure liveCache

How to Enable liveCache to Run in a Cluster

Verify liveCache installation and configuration 

How to Verify the liveCache Installation and Configuration

Install Sun Cluster HA for SAP liveCache packages 

Installing the Sun Cluster HA for SAP liveCache Packages

Register and configure Sun Cluster HA for SAP liveCache as a failover data service 

How to Register and Configure Sun Cluster HA for SAP liveCache

Verify Sun Cluster HA for SAP liveCache installation and configuration 

Verifying the Sun Cluster HA for SAP liveCache Installation and Configuration

Understand Sun Cluster HA for SAP liveCache Fault Monitors 

Understanding Sun Cluster HA for SAP liveCache Fault Monitors

(Optional) Upgrade the SUNW.sap_xserver resource type

Upgrading the SUNW.sap_xserver Resource Type

Planning the Sun Cluster HA for SAP liveCache Installation and Configuration

This section contains the information you need to plan your Sun Cluster HA for SAP liveCache installation and configuration.


Note –

If you have not already done so, read your SAP documentation before you begin planning your Sun Cluster HA for SAP liveCache installation and configuration because your SAP documentation includes configuration restrictions and requirements that are not outlined in Sun Cluster documentation or dictated by Sun Cluster software.


Configuration Requirements


Caution –

Your data service configuration might not be supported if you do not adhere to these requirements.


Use the requirements in this section to plan the installation and configuration of Sun Cluster HA for SAP liveCache. These requirements apply to Sun Cluster HA for SAP liveCache only. You must meet these requirements before you proceed with your Sun Cluster HA for SAP liveCache installation and configuration.

For requirements that apply to all data services, see Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Standard Data Service Configurations

Use the standard configurations in this section to plan the installation and configuration of Sun Cluster HA for SAP liveCache. Sun Cluster HA for SAP liveCache supports these standard configurations and might support additional configurations. However, you must contact your Sun service provider for information about additional configurations.

Figure 1–2 illustrates a four-node cluster with SAP APO Central Instance, APO application servers, a database, and liveCache. APO Central Instance, the database, and liveCache are configured as failover data services. SAP xserver can be configured only as a scalable data service. APO application servers can be configured as scalable or failover data services.

Figure 1–2 Four-Node Cluster

This illustration shows the four-node cluster configuration that the preceding paragraph describes.

Configuration Considerations

Use the information in this section to plan the installation and configuration of Sun Cluster HA for SAP liveCache. The information in this section encourages you to think about the impact your decisions have on the installation and configuration of Sun Cluster HA for SAP liveCache.

Configuration Planning Questions

Use the questions in this section to plan the installation and configuration of Sun Cluster HA for SAP liveCache. Insert the answers to these questions into the data service worksheets in “Configuration Worksheets” in Sun Cluster Data Services Planning and Administration Guide for Solaris OS. See Configuration Considerations for information that might apply to these questions.

Preparing the Nodes and Disks

This section contains the procedures you need to prepare the nodes and disks.

How to Prepare the Nodes

Use this procedure to prepare for the installation and configuration of liveCache.

  1. Become superuser on all of the nodes.

  2. Configure the /etc/nsswitch.conf file.

    1. On each node that can master the liveCache resource, include one of the following entries for the group, project, and passwd databases in the /etc/nsswitch.conf file.


      database:
      database: files 
      database: files [NOTFOUND=return] nis
      database: files [NOTFOUND=return] nisplus
    2. On each node that can master the liveCache resource, ensure that files appears first for the protocols database entry in the /etc/nsswitch.conf file.

      Example:


      protocols: files nis

    Sun Cluster HA for SAP liveCache uses the su - user command and the dbmcli command to start and stop liveCache.

    The network information name service might become unavailable when a cluster node's public network fails. Implementing the preceding changes to the /etc/nsswitch.conf file ensures that the su(1M) command and the dbmcli command do not refer to the NIS/NIS+ name services.
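
    For illustration, the following hypothetical /etc/nsswitch.conf fragment satisfies both requirements by choosing the files-only form for the group, project, and passwd databases. Adapt the entries to the name services that your site actually uses.


      group:     files
      project:   files
      passwd:    files
      protocols: files nis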

Installing and Configuring liveCache

This section contains the procedures you need to install and configure liveCache.

How to Install and Configure liveCache

Use this procedure to install and configure liveCache.

  1. Install and configure SAP APO System.

    See Sun Cluster Data Service for SAP Guide for Solaris OS for the procedures on how to install and configure SAP APO System on Sun Cluster software.

  2. Install liveCache.


    Note –

    Install liveCache by using the physical hostname if you have not already created the required logical host.


    For more information, see your SAP documentation.

  3. Create the .XUSER.62 file for the SAP APO administrator user and the liveCache administrator user by using the following command.


    # dbmcli -d LC-NAME -n logical-hostname -us user,passwd
    
    LC-NAME

    Uppercase name of liveCache database instance

    logical-hostname

    Logical hostname that is used with the liveCache resource


    Caution –

    Neither SAP APO transaction LC10 nor Sun Cluster HA for SAP liveCache functions properly if you do not create this file correctly.


  4. Copy /usr/spool/sql from the node on which you installed liveCache to all the nodes that will run the liveCache resource. Ensure that the ownership of these files is the same on all nodes as it is on the node on which you installed liveCache.

    Example:


    # tar cfB - /usr/spool/sql | rsh phys-schost-1 tar xfB -
    

How to Enable liveCache to Run in a Cluster

During a standard SAP installation, liveCache is installed with a physical hostname. You must modify liveCache to use a logical hostname so that liveCache works in a Sun Cluster environment. Use this procedure to enable liveCache to run in a cluster.

  1. Create the failover resource group to hold the network and liveCache resource.


    # scrgadm -a -g livecache-resource-group [-h nodelist]
  2. Verify that you added all the network resources you use to your name service database.

  3. Add a network resource (logical hostname) to the failover resource group.


    # scrgadm -a -L -g livecache-resource-group \
    -l lc-logical-hostname [-n netiflist]
  4. Enable the failover resource group.


    # scswitch -Z -g livecache-resource-group
    
  5. Log on to the node that hosts the liveCache resource group.

  6. Start SAP xserver manually on the node that hosts the liveCache resource group.


    # su - lc-nameadm
    # x_server start
    
    lc-name

    Lowercase name of liveCache database instance

  7. Log on to SAP APO System by using your SAP GUI with user DDIC.

  8. Go to transaction LC10 and change the liveCache host to the logical hostname you defined in Step 3.


    liveCache host: lc-logical-hostname
    
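The following hedged example shows the preceding steps with hypothetical values: a four-node cluster with nodes phys-schost-1 through phys-schost-4, a failover resource group named lc1-rg, a logical hostname lc1-lh, and a liveCache instance named LC1 (lowercase lc1, administrator user lc1adm). All of these names are placeholders.


# scrgadm -a -g lc1-rg -h phys-schost-1,phys-schost-2,phys-schost-3,phys-schost-4
# scrgadm -a -L -g lc1-rg -l lc1-lh
# scswitch -Z -g lc1-rg
# su - lc1adm
# x_server start

In transaction LC10, the liveCache host is then changed to lc1-lh.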

Verifying the liveCache Installation and Configuration

This section contains the procedure you need to verify the liveCache installation and configuration.

How to Verify the liveCache Installation and Configuration

Use this procedure to verify the liveCache installation and configuration. This procedure does not verify that your application is highly available because you have not installed your data service yet.

  1. Log on to SAP APO System by using your SAP GUI with user DDIC.

  2. Go to transaction LC10.

  3. Ensure that you can check the state of liveCache.

  4. Ensure that the following dbmcli commands work as user lc_nameadm.


    # dbmcli -d LC_NAME -n logical-hostname db_state
    # dbmcli -d LC_NAME -n logical-hostname db_enum
    

Installing the Sun Cluster HA for SAP liveCache Packages

If you did not install the Sun Cluster HA for SAP liveCache packages during your initial Sun Cluster installation, perform this procedure to install the packages. Perform this procedure on each cluster node where you are installing the Sun Cluster HA for SAP liveCache packages. To complete this procedure, you need the Sun Java Enterprise System Accessory CD Volume 3.

If you are installing more than one data service simultaneously, perform the procedure in “Installing the Software” in Sun Cluster Software Installation Guide for Solaris OS.

Install the Sun Cluster HA for SAP liveCache packages by using one of the following installation tools:

  • The Web Start program

  • The scinstall utility


Note –

The Web Start program is not available in releases earlier than Sun Cluster 3.1 Data Services 10/03.


How to Install the Sun Cluster HA for SAP liveCache Packages by Using the Web Start Program

You can run the Web Start program with a command-line interface (CLI) or with a graphical user interface (GUI). The content and sequence of instructions in the CLI and the GUI are similar. For more information about the Web Start program, see the installer(1M) man page.

  1. On the cluster node where you are installing the Sun Cluster HA for SAP liveCache packages, become superuser.

  2. (Optional) If you intend to run the Web Start program with a GUI, ensure that your DISPLAY environment variable is set.

  3. Load the Sun Java Enterprise System Accessory CD Volume 3 into the CD-ROM drive.

    If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/cdrom0 directory.

  4. Change to the Sun Cluster HA for SAP liveCache component directory of the CD-ROM.

    The Web Start program for the Sun Cluster HA for SAP liveCache data service resides in this directory.


    # cd /cdrom/cdrom0/\
    components/SunCluster_HA_SAP_liveCache_3.1
    
  5. Start the Web Start program.


    # ./installer
    
  6. When you are prompted, select the type of installation.

    • To install only the C locale, select Typical.

    • To install other locales, select Custom.

  7. Follow instructions on the screen to install the Sun Cluster HA for SAP liveCache packages on the node.

    After the installation is finished, the Web Start program provides an installation summary. This summary enables you to view logs that the Web Start program created during the installation. These logs are located in the /var/sadm/install/logs directory.

  8. Exit the Web Start program.

  9. Unload the Sun Java Enterprise System Accessory CD Volume 3 from the CD-ROM drive.

    1. To ensure that the CD-ROM is not being used, change to a directory that does not reside on the CD-ROM.

    2. Eject the CD-ROM.


      # eject cdrom
      

How to Install the Sun Cluster HA for SAP liveCache Packages by Using the scinstall Utility

  1. Load the Sun Java Enterprise System Accessory CD Volume 3 into the CD-ROM drive.

  2. Run the scinstall utility with no options.

    This step starts the scinstall utility in interactive mode.

  3. Choose the Add Support for New Data Service to This Cluster Node menu option.

    The scinstall utility prompts you for additional information.

  4. Provide the path to the Sun Java Enterprise System Accessory CD Volume 3.

    The utility refers to the CD-ROM as the “data services cd.”

  5. Specify the data service to install.

    The scinstall utility lists the data service that you selected and asks you to confirm your choice.

  6. Exit the scinstall utility.

  7. Unload the CD-ROM from the drive.

Registering and Configuring Sun Cluster HA for SAP liveCache

This section contains the procedures you need to configure Sun Cluster HA for SAP liveCache.

Setting Sun Cluster HA for SAP liveCache Extension Properties

Use the extension properties in Appendix A, Sun Cluster HA for SAP liveCache Extension Properties to create your resources. Use the following command line to configure extension properties when you create your resource.


scrgadm -x parameter=value 
Use the procedure in “Changing Resource Type, Resource Group, and Resource Properties” in Sun Cluster Data Services Planning and Administration Guide for Solaris OS to configure the extension properties if you have already created your resources. You can update some extension properties dynamically. You can update others, however, only when you create or disable a resource. The Tunable fields in Appendix A, Sun Cluster HA for SAP liveCache Extension Properties indicate when you can update each property. See “Standard Properties” in Sun Cluster Data Services Planning and Administration Guide for Solaris OS for details on all Sun Cluster properties.
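
For example (a hedged sketch that reuses the placeholder names from the procedure that follows), the first command sets the livecache_name extension property when the liveCache resource is created, and the second shows the general form for changing an extension property on an existing resource:


# scrgadm -a -j livecache-resource -g livecache-resource-group \
-t SUNW.sap_livecache -x livecache_name=LC-NAME
# scrgadm -c -j livecache-resource -x extension-property=new-value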

How to Register and Configure Sun Cluster HA for SAP liveCache

Use this procedure to configure Sun Cluster HA for SAP liveCache as a failover data service for the liveCache database and to configure SAP xserver as a scalable data service. This procedure assumes that you installed the data service packages. If you did not install the Sun Cluster HA for SAP liveCache packages as part of your initial Sun Cluster installation, go to Installing the Sun Cluster HA for SAP liveCache Packages to install the data service packages. Otherwise, use this procedure to configure Sun Cluster HA for SAP liveCache.


Caution –

Do not configure more than one SAP xserver resource on the same cluster because one SAP xserver serves multiple liveCache instances in the cluster. More than one SAP xserver resource that runs on the same cluster causes conflicts between the SAP xserver resources. These conflicts cause all SAP xserver resources to become unavailable. If you attempt to start the SAP xserver twice, you receive an error message that says Address already in use.


  1. Become superuser on one of the nodes in the cluster that will host the liveCache resource.

  2. Copy the lccluster file to the same location as the lcinit file.


    # cp /opt/SUNWsclc/livecache/bin/lccluster \
    /sapdb/LC-NAME/db/sap
    
    LC-NAME

    Uppercase name of liveCache database instance

  3. Edit the lccluster file to substitute values for put-LC_NAME-here and put-Confdir_list-here.


    Note –

    The put-Confdir_list-here value exists only in the Sun Cluster 3.1 version.


    1. Open the lccluster file.


      # vi /sapdb/LC-NAME/db/sap/lccluster
      LC_NAME="put-LC_NAME-here"
      CONFDIR_LIST="put-Confdir_list-here"

      Note –

      The CONFDIR_LIST="put-Confdir_list-here" entry exists only in the Sun Cluster 3.1 version.


    2. Replace put-LC_NAME-here with the liveCache instance name. The liveCache instance name is the value you defined in the Livecache_Name extension property.

      For an example, see Step c.


      LC_NAME="liveCache-instance-name"
      
    3. Replace put-Confdir_list-here with the value of the Confdir_list extension property.


      Note –

      This step is only for the Sun Cluster 3.1 version. Skip this step if you are running an earlier version of Sun Cluster.



      CONFDIR_LIST="liveCache-software-directory"
      

    Example:

    If the liveCache instance name is LC1 and the liveCache software directory is /sapdb, edit the lccluster script as follows.


    LC_NAME="LC1"
    CONFDIR_LIST="/sapdb" [Sun Cluster 3.1 version only]
    
  4. Add the HAStoragePlus resource to the liveCache resource group.


    # scrgadm -a -t SUNW.HAStoragePlus
    # scrgadm -a -j livecache-storage-resource -g livecache-resource-group \
    -t SUNW.HAStoragePlus -x filesystemmountpoints=mountpoint,... \
    -x globaldevicepaths=livecache-device-group  -x affinityon=TRUE
    

    Note –

    AffinityOn must be set to TRUE, and the local file system must reside on global disk groups in order to fail over.


    For the procedure on how to set up an HAStoragePlus resource, see Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

  5. Enable the liveCache storage resource.


    # scswitch -e -j livecache-storage-resource
    
  6. Register the resource type for liveCache database.


    # scrgadm -a -t SUNW.sap_livecache
    
  7. Register the resource type for SAP xserver.


    # scrgadm -a -t SUNW.sap_xserver
    
  8. Create a scalable resource group for SAP xserver. Configure SAP xserver to run on all the potential nodes that liveCache will run on.


    Note –

    Configure SAP xserver so that SAP xserver starts on all nodes that the liveCache resources can fail over to. To implement this configuration, ensure that the nodelist parameter of the SAP xserver resource group contains all the nodes that are listed in the nodelist of the liveCache resource groups. Also, the values of Desired_primaries and Maximum_primaries of the SAP xserver resource group must be equal. A worked example with hypothetical values follows this procedure.



    # scrgadm -a -g xserver-resource-group \
    -y Maximum_primaries=value \
    -y Desired_primaries=value \
    -h nodelist
    
  9. Create an SAP xserver resource in this scalable resource group.


    # scrgadm -a -j xserver-resource \
     -g xserver-resource-group -t SUNW.sap_xserver 
    

    See Setting Sun Cluster HA for SAP liveCache Extension Properties for a list of extension properties.

  10. Enable the scalable resource group that now includes the SAP xserver resource.


    # scswitch -Z -g xserver-resource-group
    
  11. Register the liveCache resource.


    # scrgadm -a -j livecache-resource -g livecache-resource-group \
    -t SUNW.sap_livecache -x livecache_name=LC-NAME \
    -y resource_dependencies=livecache-storage-resource,xserver-resource
    
  12. Ensure that the liveCache resource group is brought online only on the node where the SAP xserver resource group is online.

    To meet this requirement, create on the liveCache resource group a strong positive affinity for the SAP xserver resource group.


    # scrgadm -c -g livecache-resource-group \
    -y rg_affinities=++xserver-resource-group
    
  13. Enable the liveCache failover resource group.


    # scswitch -Z -g livecache-resource-group
    
  14. (Optional) Consider configuring your cluster to prevent the APO application server resource group from being brought online on the same node as the liveCache resource group.

    You might plan to run the APO application server on a node to which the liveCache resource can fail over. In this situation, consider using resource group affinities to shut down the APO application server when the liveCache resource fails over to the node.

    To specify this behavior, create on the APO application server resource group a strong negative affinity for the liveCache resource group.


    # scrgadm -c -g apo-resource-group \
    -y rg_affinities=--liveCache-resource-group
    
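The following hedged example consolidates the preceding steps for a hypothetical configuration: the four-node cluster and failover resource group lc1-rg from the earlier example (which already contains the logical-hostname resource), liveCache instance LC1, a device group named lc1-dg, and a failover file system mounted at /sapdb. All resource and resource group names are placeholders.


# scrgadm -a -t SUNW.HAStoragePlus
# scrgadm -a -j lc1-stor-rs -g lc1-rg -t SUNW.HAStoragePlus \
-x filesystemmountpoints=/sapdb -x globaldevicepaths=lc1-dg -x affinityon=TRUE
# scswitch -e -j lc1-stor-rs
# scrgadm -a -t SUNW.sap_livecache
# scrgadm -a -t SUNW.sap_xserver
# scrgadm -a -g xsrv-rg -y Maximum_primaries=4 -y Desired_primaries=4 \
-h phys-schost-1,phys-schost-2,phys-schost-3,phys-schost-4
# scrgadm -a -j xsrv-rs -g xsrv-rg -t SUNW.sap_xserver
# scswitch -Z -g xsrv-rg
# scrgadm -a -j lc1-rs -g lc1-rg -t SUNW.sap_livecache -x livecache_name=LC1 \
-y resource_dependencies=lc1-stor-rs,xsrv-rs
# scrgadm -c -g lc1-rg -y rg_affinities=++xsrv-rg
# scswitch -Z -g lc1-rg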

Verifying the Sun Cluster HA for SAP liveCache Installation and Configuration

This section contains the procedure you need to verify that you installed and configured your data service correctly.

How to Verify the Sun Cluster HA for SAP liveCache Installation and Configuration

Use this procedure to verify that you installed and configured Sun Cluster HA for SAP liveCache correctly. You need the information in the following table to understand the various states of the liveCache database.

Table 1–3 States of the liveCache database

State 

Description 

OFFLINE

liveCache is not running. 

COLD

liveCache is available for administrator tasks. 

WARM

liveCache is online. 

STOPPED INCORRECTLY

liveCache stopped incorrectly. This is also one of the interim states while liveCache starts or stops. 

ERROR 

Cannot determine the current state. This is also one of the interim states while liveCache starts or stops. 

UNKNOWN

This is one of the interim states while liveCache starts or stops. 

  1. Log on to the node that hosts the resource group that contains the liveCache resource, and verify that the fault monitor functionality works correctly.

    1. Terminate liveCache abnormally by stopping all liveCache processes.

      Sun Cluster software restarts liveCache.

      If you do not see this behavior, you might not have correctly performed Step 2 and Step 3 in How to Register and Configure Sun Cluster HA for SAP liveCache.


      # ps -ef|grep sap|grep kernel
      # kill -9 livecache-processes
      
    2. Terminate liveCache by using the Stop liveCache button in LC10 or by running the lcinit command.

      Sun Cluster software does not restart liveCache. However, the liveCache resource status message reflects that liveCache was stopped outside of Sun Cluster software through the use of the Stop liveCache button in LC10 or the lcinit command. The state of the liveCache resource is UNKNOWN. When the user successfully restarts liveCache by using the Start liveCache button in LC10 or the lcinit command, the Sun Cluster HA for SAP liveCache Fault Monitor updates the resource state and status message to indicate that liveCache is running under the control of Sun Cluster software. You can observe the resource state and status message as shown in the example after this procedure.

      If you do not see this behavior, you might not have correctly performed Step 2 and Step 3 in How to Register and Configure Sun Cluster HA for SAP liveCache.

  2. Log on to SAP APO by using your SAP GUI with user DDIC, and verify that liveCache starts correctly by using transaction LC10.

  3. As user root, switch the liveCache resource group to another node.


    # scswitch -z -g livecache-resource-group -h node2
    
  4. Repeat Step 1 through Step 3 for each potential node on which the liveCache resource can run.

  5. Log on to the nodes that host the SAP xserver resource, and verify that the fault monitor functionality works correctly.

    Terminate SAP xserver abnormally by stopping all SAP xserver processes.


    # ps -ef|grep xserver
    # kill -9 xserver-process
    
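While you perform the preceding checks, you can observe the resource state and status message that the fault monitors set. One way to do so (a hedged sketch; the output format varies by release) is to filter the output of the scstat(1M) command for the resource names:


# scstat -g | grep livecache-resource
# scstat -g | grep xserver-resource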

Understanding Sun Cluster HA for SAP liveCache Fault Monitors

Use the information in this section to understand Sun Cluster HA for SAP liveCache Fault Monitors. This section describes the Fault Monitors' probing algorithm, the conditions, messages, and recovery actions that are associated with unsuccessful probing, and the conditions and messages that are associated with successful probing.

Extension Properties

See Appendix A, Sun Cluster HA for SAP liveCache Extension Properties for the extension properties that the Sun Cluster HA for SAP liveCache fault monitors use.

Monitor Check Method

The Monitor_check method of a liveCache resource checks whether SAP xserver is available on a given node. If SAP xserver is not available on that node, this method returns an error and rejects the failover of liveCache to that node.

This method is needed to enforce the cross-resource group resource dependency between SAP xserver and liveCache.

Probing Algorithm and Functionality

Sun Cluster HA for SAP liveCache has a fault monitor for each resource type.

SAP xserver Fault Monitor

The SAP xserver parent process is under the control of process monitor pmfadm. If the parent process is stopped or killed, the process monitor contacts the SAP xserver Fault Monitor, and the SAP xserver Fault Monitor decides what action must be taken.

The SAP xserver Fault Monitor performs the following steps in a loop.

  1. Sleeps for Thorough_probe_interval.

  2. Uses the SAP utility dbmcli with db_enum to check SAP xserver availability.

    • If SAP xserver is unavailable, the SAP xserver probe restarts the SAP xserver resource. If the maximum number of restarts is reached, the SAP xserver Fault Monitor takes the SAP xserver resource offline on the node where SAP xserver is unavailable.

    • If any system error messages are logged in syslog during the checking process, the SAP xserver probe concludes that a partial failure has occurred. If system error messages are logged in syslog four times within the probe interval, the SAP xserver probe restarts SAP xserver.
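
As an illustration of the availability check in step 2, the following hedged sketch shows the equivalent manual test, run as the liveCache administrator user. The instance name and logical hostname are placeholders.


# su - lc-nameadm -c "dbmcli -d LC-NAME -n lc-logical-hostname db_enum"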

liveCache Fault Monitor

The liveCache probe checks for the presence of the liveCache parent process, the state of the liveCache database, and whether the user intentionally stopped liveCache outside of Sun Cluster software. If a user used the Stop liveCache button in LC10 or the lcinit command to stop liveCache outside of Sun Cluster software, the liveCache probe concludes that the user intentionally stopped liveCache outside of Sun Cluster software.

If the user intentionally stopped liveCache outside of Sun Cluster software by using the Stop liveCache button in LC10 or the lcinit command, the Sun Cluster HA for SAP liveCache Fault Monitor updates the resource state and status message to reflect this action, but it does not restart liveCache. When the user successfully restarts liveCache outside of Sun Cluster software by using the Start liveCache button in LC10 or the lcinit command, the Sun Cluster HA for SAP liveCache Fault Monitor updates the resource state and status message to indicate that liveCache is running under the control of Sun Cluster software, and Sun Cluster HA for SAP liveCache Fault Monitor takes appropriate action if it detects liveCache is OFFLINE.

If the liveCache database state reports that liveCache is not running, or if the liveCache parent process has terminated, the Sun Cluster HA for SAP liveCache Fault Monitor restarts or fails over liveCache.

The Sun Cluster HA for SAP liveCache Fault Monitor performs the following steps in a loop. If any step returns liveCache is offline, the liveCache probe restarts or fails over liveCache.

  1. Sleeps for Thorough_probe_interval.

  2. Uses the dbmcli utility with db_state to check the liveCache database state.

  3. If liveCache is online, liveCache probe checks the liveCache parent process.

    • If the parent process terminates, liveCache probe returns liveCache is offline.

    • If the parent process is online, liveCache probe returns OK.

  4. If liveCache is not online, liveCache probe determines if the user stopped liveCache outside of Sun Cluster software by using the Stop liveCache button in LC10 or the lcinit command.

  5. If the user stopped liveCache outside of Sun Cluster software by using the Stop liveCache button in LC10 or the lcinit command, returns OK.

  6. If the user did not stop liveCache outside of Sun Cluster software by using the Stop liveCache button in LC10 or the lcinit command, checks SAP xserver availability.

    • If SAP xserver is unavailable, returns OK because the probe cannot restart liveCache if SAP xserver is unavailable.

    • If SAP xserver is available, returns liveCache is offline.

  7. If any errors are reported from system function calls, returns system failure.
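
The following hypothetical shell sketch illustrates the probe loop that the preceding steps describe. It is not the shipped fault monitor: the restart and failover decisions, the check for an intentional stop, and the exact parsing of the dbmcli output are simplified placeholders.


#!/bin/ksh
# Hypothetical sketch of the liveCache probe loop -- not the actual fault monitor.
# LC_NAME, LC_HOST, and PROBE_INTERVAL stand for the Livecache_Name extension
# property, the logical hostname, and Thorough_probe_interval.
LC_NAME=LC1
LC_HOST=lc1-lh
PROBE_INTERVAL=60

intentionally_stopped() {
    # Placeholder: the real monitor detects a stop through LC10 or lcinit.
    return 1
}

while true; do
    sleep ${PROBE_INTERVAL}
    # Step 2: check the database state (output parsing varies by liveCache version).
    STATE=$(dbmcli -d ${LC_NAME} -n ${LC_HOST} db_state 2>/dev/null | tail -1)
    if [ "${STATE}" = "WARM" ]; then
        # Step 3: liveCache is online; verify that the parent process still exists.
        ps -ef | grep sap | grep kernel | grep -v grep >/dev/null ||
            echo "liveCache is offline"
    elif intentionally_stopped; then
        echo "OK"                          # Step 5: stopped through LC10 or lcinit
    elif dbmcli -d ${LC_NAME} -n ${LC_HOST} db_enum >/dev/null 2>&1; then
        echo "liveCache is offline"        # Step 6: SAP xserver available, so restart
    else
        echo "OK"                          # Step 6: cannot restart without SAP xserver
    fi
done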

Upgrading the SUNW.sap_xserver Resource Type

Upgrade the SUNW.sap_xserver resource type if all conditions in the following list apply:

For general instructions that explain how to upgrade a resource type, see “Upgrading a Resource Type” in Sun Cluster Data Services Planning and Administration Guide for Solaris OS. The information that you need to complete the upgrade of the SUNW.sap_xserver resource type is provided in the subsections that follow.

Information for Registering the New Resource Type Version

The relationship between a resource type version and the release of Sun Cluster data services is shown in the following table. The release of Sun Cluster data services indicates the release in which the version of the resource type was introduced.

Resource Type Version      Sun Cluster Data Services Release
1.0                        3.0 5/02 asynchronous release
2                          3.1 4/04

To determine the version of the resource type that is registered, use one command from the following list:

The resource type registration (RTR) file for this resource type is /opt/SUNWsclc/xserver/etc/SUNW.sap_xserver.
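
For example (a hedged sketch; the exact commands may differ by release), you can inspect the registered resource types with the verbose scrgadm listing, and register the new version of the resource type from this RTR file:


# scrgadm -pv | grep SUNW.sap_xserver
# scrgadm -a -t SUNW.sap_xserver -f /opt/SUNWsclc/xserver/etc/SUNW.sap_xserver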

Information for Migrating Existing Instances of the Resource Type

The information that you need to migrate instances of the SUNW.sap_xserver resource type is as follows:

The following example shows a command for editing an instance of the SUNW.sap_xserver resource type.


Example 1–1 Editing an Instance of the SUNW.sap_xserver Resource Type During Upgrade


# scrgadm -cj sapxserver-rs -y Type_version=2 \
  -x Independent_Program_Path=/sapdb/indep_prog

This command edits a SUNW.sap_xserver resource as follows:

  • The Type_version property of the sapxserver-rs resource is set to 2.

  • The Independent_Program_Path extension property of this resource is set to the directory /sapdb/indep_prog.