Sun Cluster 3.0 5/02 Supplement

Chapter 5 Data Services

This chapter provides new data services installation and configuration information that has been added to the Sun Cluster 3.0 5/02 update release. This information supplements the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide. For new cluster framework installation information, see Chapter 4, Installation.

This chapter contains new information for the following topics.

Installing and Configuring Sun Cluster HA for SAP

The following information applies to this update release and all subsequent updates.

Synchronizing the Startups Between Resource Groups and Disk Device Groups

The following feature was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.

After a cluster boots or services fail over to another node, global devices and cluster file systems might require time to become available. However, a data service can run its START method before the global devices and cluster file systems on which the data service depends come online. In this instance, the START method times out, and you must reset the state of the resource groups that the data service uses and restart the data service manually.

The resource types HAStorage and HAStoragePlus monitor the global devices and cluster file systems and cause the START method of the other resources in the same resource group to wait until those devices and file systems become available. (To determine which resource type to use, see "Recommendations" in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide.) To avoid these additional administrative tasks, set up HAStorage or HAStoragePlus for all of the resource groups whose data service resources depend on global devices or cluster file systems.

To set up the HAStoragePlus resource type, see "How to Set Up HAStoragePlus Resource Type (5/02)".
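
The following minimal sketch shows this pattern, using hypothetical resource and resource group names, a hypothetical mount point, and a placeholder data service resource type; only options that appear in the procedures later in this chapter are used.


[Register the HAStoragePlus resource type and create a storage resource.]
# scrgadm -a -t SUNW.HAStoragePlus
# scrgadm -a -j example-hastp-rs -g example-rg -t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/global/example-fs \
-x AffinityOn=TRUE
 
[Create the data service resource (data-service-type is a placeholder) and make it wait for its storage by declaring a dependency.]
# scrgadm -a -j example-ds-rs -g example-rg -t data-service-type \
-y Resource_dependencies=example-hastp-rs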

Enabling Highly Available Local File Systems

The following feature was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.

The HAStoragePlus resource type can be used to make a local file system highly available within a Sun Cluster environment. The local file system partitions must reside on global disk groups with affinity switchovers enabled and the Sun Cluster environment must be configured for failover. This enables the user to make any file system on multi-host disks accessible from any host directly connected to those multi-host disks. (You cannot use HAStoragePlus to make a root file system highly available.)

Using a highly available local file system is strongly recommended for some I/O intensive data services, and configuring the HAStoragePlus resource type has been added to the Registration and Configuration procedures for these data services. For procedures on how to set up the HAStoragePlus resource type for these data services, see the following sections in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide.

For the procedure to set up HAStoragePlus resource type for other data services, see "How to Set Up HAStoragePlus Resource Type (5/02)".

How to Set Up HAStoragePlus Resource Type (5/02)

The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.

The HAStoragePlus resource type was introduced in Sun Cluster 3.0 5/02. This new resource type performs the same functions as HAStorage, and synchronizes the startups between resource groups and disk device groups. The HAStoragePlus resource type has an additional feature to make a local file system highly available. (For background information on making a local file system highly available, see "Enabling Highly Available Local File Systems".) To use both of these features, set up the HAStoragePlus resource type.

To set up HAStoragePlus, the local file system partitions must reside on global disk groups with affinity switchovers enabled and the Sun Cluster environment must be configured for failover.

The following example uses a simple NFS service that shares out home directory data from a locally mounted directory /global/local-fs/nfs/export/home. The example assumes the following:

  1. Become superuser on a cluster member.

  2. Determine whether the resource type is registered.

    The following command prints a list of registered resource types.


    # scrgadm -p | egrep Type
    

  3. If you need to, register the resource type.


    # scrgadm -a -t SUNW.nfs
    

  4. Create the failover resource group nfs-rg.


    # scrgadm -a -g nfs-rg -y PathPrefix=/global/local-fs/nfs
    

  5. Create a logical host resource of type SUNW.LogicalHostname.


    # scrgadm -a -j nfs-lh-rs -g nfs-rg -L -l log-nfs
    

  6. Register the HAStoragePlus resource type with the cluster.


    # scrgadm -a -t SUNW.HAStoragePlus
    

  7. Create the resource nfs-hastp-rs of type SUNW.HAStoragePlus.


    # scrgadm -a -j nfs-hastp-rs -g nfs-rg -t SUNW.HAStoragePlus \
    -x FilesystemMountPoints=/global/local-fs/nfs \
    -x AffinityOn=TRUE
    

  8. Bring the resource group nfs-rg online on a cluster node.

    This node will become the primary node for the /global/local-fs/nfs file system's underlying global device partition. The file system /global/local-fs/nfs will then be locally mounted on this node.


    # scswitch -Z -g nfs-rg
    

  9. Register the SUNW.nfs resource type with the cluster. Create the resource nfs-rs of type SUNW.nfs and specify its resource dependency on the resource nfs-hastp-rs.

    dfstab.nfs-rs will be present in /global/local-fs/nfs/SUNW.nfs.


    # scrgadm -a -t SUNW.nfs
    # scrgadm -a -g nfs-rg -j nfs-rs -t SUNW.nfs \
    -y Resource_dependencies=nfs-hastp-rs
    


    Note -

    The nfs-hastp-rs resource must be online before you can set the dependency in the nfs resource.


  10. Bring the resource nfs-rs online.


    # scswitch -Z -g nfs-rg
    

Now whenever the service is migrated to a new node, the primary I/O path for /global/local-fs/nfs will always be online and collocated with the NFS servers. The file system /global/local-fs/nfs will be locally mounted before starting the NFS server.
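
To verify this configuration on the current primary, you can check the resource group state and confirm the local mount. This is a sketch only; the exact scstat output depends on your release.


# scstat -g
# mount | grep /global/local-fs/nfs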

Registering and Configuring Sun Cluster HA for Oracle

The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software. Register and configure Sun Cluster HA for Oracle as a failover data service. You must register the data service and configure resource groups and resources for the Oracle server and listener. See "Planning for Sun Cluster Data Services" in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide and the Sun Cluster 3.0 12/01 Concepts document for details on resources and resource groups.

How to Register and Configure Sun Cluster HA for Oracle (5/02)

The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software. This procedure describes how to use the scrgadm command to register and configure Sun Cluster HA for Oracle.

This procedure includes creating the HAStoragePlus resource type. This resource type synchronizes actions between HAStoragePlus and the data service and enables you to use a highly available local file system. Sun Cluster HA for Oracle is disk-intensive, and therefore you should configure the HAStoragePlus resource type.

See the SUNW.HAStoragePlus(5) man page and "Relationship Between Resource Groups and Disk Device Groups" on page 5 for background information.


Note -

Other options also enable you to register and configure the data service. See "Tools for Data Service Resource Administration" on page 10 for details about these options.


You must have the following information to perform this procedure.


Note -

Perform this procedure on any cluster member.


  1. Become superuser on a cluster member.

  2. Run the scrgadm command to register the resource types for the data service.

    For Sun Cluster HA for Oracle, you register two resource types, SUNW.oracle_server and SUNW.oracle_listener, as follows.


    # scrgadm -a -t SUNW.oracle_server
    # scrgadm -a -t SUNW.oracle_listener
    

    -a

    Adds the data service resource type.

    -t SUNW.oracle_type

    Specifies the predefined resource type name for your data service.

  3. Create a failover resource group to hold the network and application resources.

    You can optionally select the set of nodes on which the data service can run with the -h option, as follows.


    # scrgadm -a -g resource-group [-h nodelist]
    -g resource-group

    Specifies the name of the resource group. This name can be your choice but must be unique for resource groups within the cluster.

    -h nodelist

    Specifies an optional comma-separated list of physical node names or IDs that identify potential masters. The order here determines the order in which the nodes are considered as primary during failover.


    Note -

    Use the -h option to specify the order of the node list. If all of the nodes that are in the cluster are potential masters, you do not need to use the -h option.


  4. Verify that all of the network resources that you use have been added to your name service database.

    You should have performed this verification during the Sun Cluster installation.


    Note -

    Ensure that all of the network resources are present in the server's and client's /etc/hosts file to avoid any failures because of name service lookup.


  5. Add a network resource to the failover resource group.


    # scrgadm -a -L -g resource-group -l logical-hostname [-n netiflist] 
    -l logical-hostname

    Specifies a network resource. The network resource is the logical hostname or shared address (IP address) that clients use to access Sun Cluster HA for Oracle.

    [-n netiflist]

    Specifies an optional, comma-separated list that identifies the NAFO groups on each node. All of the nodes in nodelist of the resource group must be represented in the netiflist. If you do not specify this option, scrgadm(1M) attempts to discover a net adapter on the subnet that the hostname list identifies for each node in nodelist. For example, -n nafo0@nodename, nafo0@nodename2.

  6. Register the HAStoragePlus resource type with the cluster.


    # scrgadm -a -t SUNW.HAStoragePlus
    

  7. Create the resource oracle-hastp-rs of type HAStoragePlus.


    # scrgadm -a -j oracle-hastp-rs -g oracle-rg -t SUNW.HAStoragePlus \
     
    [If your database is on a raw device, specify the global device path.]
    -x GlobalDevicePaths=ora-set1,/dev/global/dsk/d1 \
     
    [If your database is on a cluster file system, specify the global file system mount points.]
    -x FilesystemMountPoints=/global/ora-inst,/global/ora-data/logs \
     
    [If your database is on a highly available local file system, specify the local file system mount points.]
    -x FilesystemMountPoints=/local/ora-data \
     
    [Set AffinityOn to true.]
    -x AffinityOn=TRUE
    


    Note -

    AffinityOn must be set to TRUE, and the local file system must reside on global disk groups, for failover to work.


  8. Run the scswitch command to complete the following tasks and bring the resource group oracle-rg online on a cluster node.

    • Move the resource group into a managed state.

    • Bring the resource group online.

    This node becomes the primary node for the device group ora-set1 and the raw device /dev/global/dsk/d1. This node also becomes the primary node for the device groups that are associated with file systems such as /global/ora-inst and /global/ora-data/logs.


    # scswitch -Z -g oracle-rg
    

  9. Create Oracle application resources in the failover resource group.


    # scrgadm -a -j resource -g resource-group \
    
    -t SUNW.oracle_server \ 
    -x Connect_string=user/passwd \
    -x ORACLE_SID=instance \
    -x ORACLE_HOME=Oracle-home \
    -x Alert_log_file=path-to-log \
    -y resource_dependencies=storageplus-resource
     
    # scrgadm -a -j resource -g resource-group \
    -t SUNW.oracle_listener \
    -x LISTENER_NAME=listener \
    -x ORACLE_HOME=Oracle-home \
    -y resource_dependencies=storageplus-resource
    

    -j resource

    Specifies the name of the resource to add.

    -g resource-group

    Specifies the name of the resource group into which the resources are to be placed.

    -t SUNW.oracle_server/listener

    Specifies the type of the resource to add.

    -x Alert_log_file=path-to-log

    Sets the path under $ORACLE_HOME for the server message log.

    -x Connect_string=user/passwd

    Specifies the user and password that the fault monitor uses to connect to the database. These settings must agree with the permissions that you set up in "How to Set Up Oracle Database Permissions" on page 23 in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide. If you use Solaris authorization, type a slash (/) instead of the user name and password.

    -x ORACLE_SID=instance

    Sets the Oracle system identifier.

    -x LISTENER_NAME=listener

    Sets the name of the Oracle listener instance. This name must match the corresponding entry in listener.ora.

    -x ORACLE_HOME=Oracle-home

    Sets the path to the Oracle home directory.


    Note -

    When a fault occurs in an Oracle server resource and causes a restart, the whole resource group is restarted. Any other resources (such as Apache or DNS) in the resource group are restarted, even if they did not have a fault. To prevent other resources from being restarted along with an Oracle server resource, put them in a separate resource group.

    Optionally, you can set additional extension properties that belong to the Oracle data service to override their default values. See "Configuring Sun Cluster HA for Oracle Extension Properties" in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide for a list of extension properties.


  10. Run the scswitch command to complete the following task.

    • Enable the resource and fault monitoring.


      # scswitch -Z -g resource-group
      
      -Z

      Enables the resource and monitor, moves the resource group to the managed state, and brings it online.

      -g resource-group

      Specifies the name of the resource group.

Example - Registering Sun Cluster HA for Oracle

The following example shows how to register Sun Cluster HA for Oracle on a two-node cluster.


Cluster Information
Node names: phys-schost-1, phys-schost-2
Logical Hostname: schost-1
Resource group: resource-group-1 (failover resource group)
Oracle Resources: oracle-server-1, oracle-listener-1
Oracle Instances: ora-lsnr (listener), ora-srvr (server)
 
(Add the failover resource group to contain all of the resources.)
# scrgadm -a -g resource-group-1
 
(Add the logical hostname resource to the resource group.)
# scrgadm -a -L -g resource-group-1 -l schost-1 
 
(Register the Oracle resource types)
# scrgadm -a -t SUNW.oracle_server
# scrgadm -a -t SUNW.oracle_listener
 
(Add the Oracle application resources to the resource group.)
# scrgadm -a -j oracle-server-1 -g resource-group-1 \
-t SUNW.oracle_server -x ORACLE_HOME=/global/oracle \
-x Alert_log_file=/global/oracle/message-log \
-x ORACLE_SID=ora-srvr -x Connect_string=scott/tiger
 
# scrgadm -a -j oracle-listener-1 -g resource-group-1 \
-t SUNW.oracle_listener -x ORACLE_HOME=/global/oracle \
-x LISTENER_NAME=ora-lsnr
 
(Bring the resource group online.)
# scswitch -Z -g resource-group-1

Registering and Configuring Sun Cluster HA for Sybase ASE

The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software. Use the procedures in this section to register and configure the Sun Cluster HA for Sybase ASE data service. Register and configure Sun Cluster HA for Sybase ASE as a failover data service.

How to Register and Configure Sun Cluster HA for Sybase ASE (5/02)

The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software. This procedure describes how to use the scrgadm(1M) command to register and configure Sun Cluster HA for Sybase ASE.

This procedure includes creating the HAStoragePlus resource type. This resource type synchronizes actions between HAStoragePlus and Sun Cluster HA for Sybase ASE and enables you to use a highly available local file system. Sun Cluster HA for Sybase ASE is disk-intensive, and therefore you should configure the HAStoragePlus resource type.

See the SUNW.HAStoragePlus(5) man page and "Relationship Between Resource Groups and Disk Device Groups" on page 5 for more information about the HAStoragePlus resource type.


Note -

Other options also enable you to register and configure the data service. See "Tools for Data Service Resource Administration" on page 10 for details about these options.


To perform this procedure, you must have the following information.


Note -

Perform the following steps on one cluster member.


  1. Become superuser on a cluster member.

  2. Run the scrgadm command to register resource types for Sun Cluster HA for Sybase ASE.


    # scrgadm -a -t SUNW.sybase
    

    -a

    Adds the resource type for the data service.

    -t SUNW.sybase

    Specifies the resource type name that is predefined for your data service.

  3. Create a failover resource group to hold the network and application resources.

    You can optionally select the set of nodes on which the data service can run with the -h option, as follows.


    # scrgadm -a -g resource-group [-h nodelist]
    -g resource-group

    Specifies the name of the resource group. This name can be your choice but must be unique for resource groups within the cluster.

    -h nodelist

    Specifies an optional comma-separated list of physical node names or IDs that identify potential masters. The order here determines the order in which the nodes are considered as primary during failover.


    Note -

    Use the -h option to specify the order of the node list. If all of the nodes that are in the cluster are potential masters, you do not need to use the -h option.


  4. Verify that all of the network resources that you use have been added to your name service database.

    You should have performed this verification during the Sun Cluster installation.


    Note -

    Ensure that all of the network resources are present in the server's and client's /etc/hosts file to avoid any failures because of name service lookup.


  5. Add a network resource to the failover resource group.


    # scrgadm -a -L -g resource-group -l logical-hostname [-n netiflist] 
    -l logical-hostname

    Specifies a network resource. The network resource is the logical hostname or shared address (IP address) that clients use to access Sun Cluster HA for Sybase ASE.

    [-n netiflist]

    Specifies an optional, comma-separated list that identifies the NAFO groups on each node. All of the nodes in nodelist of the resource group must be represented in the netiflist. If you do not specify this option, scrgadm(1M) attempts to discover a net adapter on the subnet that the hostname list identifies for each node in nodelist. For example, -n nafo0@nodename, nafo0@nodename2.

  6. Register the HAStoragePlus resource type with the cluster.


    # scrgadm -a -t SUNW.HAStoragePlus
    

  7. Create the resource sybase-hastp-rs of type HAStoragePlus.


    # scrgadm -a -j sybase-hastp-rs -g sybase-rg \
    -t SUNW.HAStoragePlus \
    -x GlobalDevicePaths=sybase-set1,/dev/global/dsk/d1 \
    -x FilesystemMountPoints=/global/sybase-inst \
    -x AffinityOn=TRUE
    


    Note -

    AffinityOn must be set to TRUE, and the local file system must reside on global disk groups, for failover to work.


  8. Run the scswitch command to complete the following tasks and bring the resource group sybase-rg online on a cluster node.

    • Move the resource group into a managed state.

    • Bring the resource group online.

    This node becomes the primary node for the device group sybase-set1 and the raw device /dev/global/dsk/d1. This node also becomes the primary node for the device groups that are associated with file systems such as /global/sybase-inst.


    # scswitch -Z -g sybase-rg
    

  9. Create Sybase ASE application resources in the failover resource group.


    # scrgadm -a -j resource -g resource-group \
    -t SUNW.sybase \ 
    -x Environment_File=environment-file-path \
    -x Adaptive_Server_Name=adaptive-server-name \
    -x Backup_Server_Name=backup-server-name \
    -x Text_Server_Name=text-server-name \
    -x Monitor_Server_Name=monitor-server-name \
    -x Adaptive_Server_Log_File=log-file-path \
    -x Stop_File=stop-file-path \
    -x Connect_string=user/passwd \
    -y resource_dependencies=storageplus-resource
    

    -j resource

    Specifies the resource name to add.

    -g resource-group

    Specifies the name of the resource group into which the RGM places the resources.

    -t SUNW.sybase

    Specifies the resource type to add.

    -x Environment_File=environment-file

    Sets the name of the environment file.

    -x Adaptive_Server_Name=adaptive-server-name

    Sets the name of the adaptive server.

    -x Backup_Server_Name=backup-server-name

    Sets the name of the backup server.

    -x Text_Server_Name=text-server-name

    Sets the name of the text server.

    -x Monitor_Server_Name=monitor-server-name

    Sets the name of the monitor server.

    -x Adaptive_Server_Log_File=log-file-path

    Sets the path to the log file for the adaptive server.

    -x Stop_File=stop-file-path

    Sets the path to the stop file.

    -x Connect_string=user/passwd

    Specifies the user and password that the fault monitor uses to connect to the database.

    You do not have to specify extension properties that have default values. See "Configuring Sun Cluster HA for Sybase ASE Extension Properties" in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide for more information.

  10. Run the scswitch(1M) command to complete the following task.

    • Enable the resource and fault monitoring.


    # scswitch -Z -g resource-group
    

Where to Go From Here

After you register and configure Sun Cluster HA for Sybase ASE, go to "How to Verify the Sun Cluster HA for Sybase ASE Installation" in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide.

Configuration Guidelines for Sun Cluster Data Services

The following information applies to this update release and all subsequent updates.

Determining the Location of the Application Binaries (5/02)

The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.

Planning the Cluster File System Configuration (5/02)

The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.

The resource type HAStoragePlus enables you to use a highly available local file system in a Sun Cluster environment configured for failover. This resource type is supported in Sun Cluster 3.0 5/02. See "Enabling Highly Available Local File Systems" for information on setting up the HAStoragePlus resource type.

See the planning chapter of the Sun Cluster 3.0 12/01 Software Installation Guide for information on how to create cluster file systems.

Relationship Between Resource Groups and Disk Device Groups

The following information applies to this update release and all subsequent releases.

HAStorage and HAStoragePlus Resource Types (5/02)

The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software. The resource types HAStorage and HAStoragePlus can be used to configure the following options.

In addition, HAStoragePlus is capable of mounting any cluster file system found to be in an unmounted state. See "Planning the Cluster File System Configuration" in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide for more information.


Note -

If the device group is switched to another node while the HAStorage or HAStoragePlus resource is online, AffinityOn has no effect and the resource group does not migrate along with the device group. On the other hand, if the resource group is switched to another node, AffinityOn being set to True causes the device group to follow the resource group to the new node.
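
The following sketch illustrates both cases, using the resource group nfs-rg from the NFS example earlier in this chapter, a hypothetical device group nfs-dg, and a hypothetical node name.


[Switching the device group alone does not cause the resource group to migrate.]
# scswitch -z -D nfs-dg -h phys-schost-2
 
[Switching the resource group causes the device group to follow it when AffinityOn is set to True.]
# scswitch -z -g nfs-rg -h phys-schost-2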


Recommendations (5/02)

The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.

To determine whether to create HAStorage or HAStoragePlus resources within a data service resource group, consider the following criteria.

See the individual chapters on data services in this book for specific recommendations.

See "Synchronizing the Startups Between Resource Groups and Disk Device Groups" for information about the relationship between disk device groups and resource groups. The SUNW.HAStorage(5) and SUNW.HAStoragePlus(5) man pages provides additional details.

See "Enabling Highly Available Local File Systems" for procedures for mounting of file systems such as VxFS in a local mode. The SUNW.HAStoragePlus man page provides additional details.

Freeing Node Resources by Off-Loading Non-Critical Resource Groups

The following feature was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.

Prioritized Service Management (RGOffload) allows your cluster to automatically free a node's resources for critical data services. RGOffload is used when the startup of a critical failover data service requires that a non-critical scalable or failover data service be brought offline. RGOffload off-loads the resource groups that contain the non-critical data services.


Note -

The critical data service must be a failover data service. The data service to be off-loaded can be a failover or scalable data service.


How to Set Up an RGOffload Resource (5/02)

The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.

  1. Become superuser on a cluster member.

  2. Determine whether the RGOffload resource type is registered.

    The following command indicates whether the SUNW.RGOffload resource type is registered.


    # scrgadm -p|egrep SUNW.RGOffload
    

  3. If needed, register the resource type.


    # scrgadm -a -t SUNW.RGOffload
    

  4. Set the Desired_primaries to zero in each resource group to be offloaded by the RGOffload resource.


    # scrgadm -c -g offload-rg -y Desired_primaries=0
    

  5. Add the RGOffload resource to the critical failover resource group and set the extension properties.


    Caution -

    Do not place a resource group on more than one resource's rg_to_offload list. Placing a resource group on multiple rg_to_offload lists may cause the resource group to be taken offline and brought back online repeatedly.


    See "Configuring RGOffload Extension Properties (5/02)" for extension property descriptions.


    # scrgadm -aj rgoffload-resource -t SUNW.RGOffload -g critical-rg \
    -x rg_to_offload=offload-rg-1,offload-rg-2,... \
    -x continue_to_offload=TRUE -x max_offload_retry=15
    


    Note -

    Extension properties other than rg_to_offload are shown with default values here. rg_to_offload is a comma-separated list of resource groups that are not dependent on each other. This list cannot include the resource group to which the RGOffload resource is being added.


  6. Enable the RGOffload resource.


    # scswitch -ej rgoffload-resource
    

  7. Set the dependency of the critical failover resource on the RGOffload resource.


    # scrgadm -c -j critical-resource \
    -y Resource_dependencies=rgoffload-resource
    

    You can also use Resource_dependencies_weak. Setting Resource_dependencies_weak on the critical resource to name the RGOffload resource, instead of Resource_dependencies, allows the critical failover resource to start up even if errors are encountered during offload of offload-rg. (A sketch of this variation follows the example below.)

  8. Bring the resource groups to be offloaded online.


    # scswitch -z -g offload-rg,offload-rg-2,... -h nodelist
    

    The resource group remains online on all nodes where the critical resource group is offline. The fault monitor prevents the resource group from running on the node where the critical resource group is online.

    Because Desired_primaries for resource groups to be offloaded is set to 0 (see Step 4), the -Z option will not bring these resource groups online.

  9. If the critical failover resource group is not online, bring it online.


    # scswitch -Z -g critical-rg
    

Example - Configuring an RGOffload Resource

This example describes how to configure an RGOffload resource (rgofl), the critical resource group that contains the RGOffload resource (oracle_rg), and scalable resource groups that are off-loaded when the critical resource group comes online (IWS-SC, IWS-SC-2). The critical resource in this example is oracle-server-rs.

In this example, oracle_rg, IWS-SC, and IWS-SC-2 can be mastered on any node of cluster triped: phys-triped-1, phys-triped-2, phys-triped-3.


[Determine whether the SUNW.RGOffload resource type is registered.]
# scrgadm -p|egrep SUNW.RGOffload
 
[If needed, register the resource type.]
# scrgadm -a -t SUNW.RGOffload
 
[Set the Desired_primaries to zero in each resource group to be offloaded by 
the RGOffload resource.]
# scrgadm -c -g IWS-SC-2 -y Desired_primaries=0
# scrgadm -c -g IWS-SC -y Desired_primaries=0
 
[Add the RGOffload resource to the critical resource group and set the extension properties.]
# scrgadm -aj rgofl -t SUNW.RGOffload -g oracle_rg \
-x rg_to_offload=IWS-SC,IWS-SC-2 -x continue_to_offload=TRUE \
-x max_offload_retry=15
 
[Enable the RGOffload resource.]
# scswitch -ej rgofl
 
[Set the dependency of the critical failover resource to the RGOffload resource.]
# scrgadm -c -j oracle-server-rs -y Resource_dependencies=rgofl
 
[Bring the resource groups to be offloaded online on all nodes.]
# scswitch -z -g IWS-SC,IWS-SC-2 -h phys-triped-1,phys-triped-2,phys-triped-3
 
[If the critical failover resource group is not online, bring it online.]
# scswitch -Z -g oracle_rg
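
If you prefer the weak dependency described in Step 7, the last dependency setting in this example would instead use Resource_dependencies_weak. This is a sketch only, reusing the resource names from the example above.


[Alternatively, set a weak dependency so that the critical resource can start even if offloading fails.]
# scrgadm -c -j oracle-server-rs -y Resource_dependencies_weak=rgofl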

Configuring RGOffload Extension Properties (5/02)

The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.

Typically, you use the command line scrgadm -x parameter=value to configure extension properties when you create the RGOffload resource. See "Standard Properties" in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide for details on all of the Sun Cluster standard properties.
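
Because these extension properties are tunable at any time, you can also change them on an existing RGOffload resource with scrgadm -c. The following is a sketch only, reusing the resource name rgofl from the example in the previous section.


# scrgadm -c -j rgofl -x max_offload_retry=20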

Table 5-1 describes extension properties that you can configure for RGOffload. The Tunable entries indicate when you can update the property.

Table 5-1 RGOffload Extension Properties

rg_to_offload (string)

    A comma-separated list of resource groups that need to be offloaded on a node when a critical failover resource group starts up on that node. This list should not contain resource groups that depend upon each other. This property has no default and must be set.

    RGOffload does not check for dependency loops in the list of resource groups set in the rg_to_offload extension property. For example, if resource group RG-B depends in some way on RG-A, then both RG-A and RG-B should not be included in rg_to_offload.

    Default: None

    Tunable: Any time

continue_to_offload (Boolean)

    A Boolean that indicates whether to continue offloading the remaining resource groups in the rg_to_offload list after an error occurs while offloading a resource group.

    This property is used only by the START method.

    Default: True

    Tunable: Any time

max_offload_retry (integer)

    The number of attempts to offload a resource group during startup in case of failures caused by cluster or resource group reconfiguration. There is an interval of 10 seconds between successive retries.

    Set max_offload_retry so that (the number of resource groups to be offloaded * max_offload_retry * 10 seconds) is less than the Start_timeout of the RGOffload resource. If this value is close to or greater than Start_timeout, the START method of the RGOffload resource might time out before the maximum number of offload attempts is completed. A worked example follows this table.

    This property is used only by the START method.

    Default: 15

    Tunable: Any time
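
As a worked example of the Start_timeout guideline for max_offload_retry: if two resource groups are to be offloaded and max_offload_retry keeps its default value of 15, the retries can consume up to 2 * 15 * 10 = 300 seconds, so the Start_timeout of the RGOffload resource should be set comfortably above 300 seconds.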

Fault Monitor (5/02)

The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.

The Fault Monitor probe for the RGOffload resource is used to keep resource groups specified in the rg_to_offload extension property offline on the node mastering the critical resource. During each probe cycle, the Fault Monitor verifies that resource groups to be off-loaded (offload-rg) are offline on the node mastering the critical resource. If offload-rg is online on the node mastering the critical resource, the Fault Monitor attempts to start offload-rg on nodes other than the node mastering the critical resource, thereby bringing offload-rg offline on the node mastering the critical resource.

Because Desired_primaries for offload-rg is set to 0, off-loaded resource groups are not restarted on nodes that become available later. Therefore, the RGOffload Fault Monitor attempts to start up offload-rg on as many primaries as possible, until the Maximum_primaries limit is reached, while keeping offload-rg offline on the node mastering the critical resource.

RGOffload attempts to start up all off-loaded resource groups unless they are in the maintenance or unmanaged state. To place a resource group in an unmanaged state, use the scswitch command.


# scswitch -u -g resourcegroup

The Fault Monitor probe cycle is invoked after every Thorough_probe_interval.
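
For example, to change how often the probe cycle runs for an RGOffload resource named rgofl, you can adjust this standard resource property. This is a sketch only, reusing the resource name from the earlier example; the value is in seconds.


# scrgadm -c -j rgofl -y Thorough_probe_interval=120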

Installing and Configuring iPlanet Directory Server

The following information applies to this update release and all subsequent updates.

How to Install iPlanet Directory Server for Solaris 9 (5/02)

The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.

The iPlanet Directory Server is bundled with the Solaris 9 operating environment. If you are using Solaris 9, use the Solaris 9 CD-ROMs to install the iPlanet Directory Server.

  1. Install the iPlanet Directory Server packages on all the nodes of the cluster, if they are not already installed.

  2. Identify a location on a cluster file system where you intend to keep all your directory servers (for example, /global/nsldap).

    If you want to, you may create a separate directory for this file system.

  3. On all nodes, create a link to this directory from /var/ds5. If /var/ds5 already exists on a node, remove it and create the link.


    # rmdir /var/ds5
    # ln -s /global/nsldap /var/ds5
    
  4. On any one node, set up the directory server(s) in the usual way.


    # directoryserver setup
    

    On this node, a link, /usr/iplanet/ds5/slapd-instance-name, will be created automatically. On all other nodes, create the link manually.

    In the following example, dixon-1 is the name of the Directory Server.


    # ln -s /var/ds5/slapd-dixon-1 /usr/iplanet/ds5/slapd-dixon-1
    
  5. Supply the logical hostname when the setup command prompts you for the server name.

    This step is required for failover to work correctly.


    Note -

    The logical host that you specify must be online on the node from which you run the directoryserver setup command. This state is necessary because at the end of the iPlanet Directory Server installation, iPlanet Directory Server automatically starts and will fail if the logical host is offline on that node.


  6. If prompted for the logical hostname, select the logical hostname along with your domain for the computer name, for example, phys-schost-1.example.com.

    Supply the hostname that is associated with a network resource when the setup command prompts you for the full server name.

  7. If prompted for the IP address to be used as the iPlanet Directory Server Administrative Server, specify the IP address of the cluster node on which you are running directoryserver setup.

As part of the installation, you set up an iPlanet Directory Server Administrative Server. The IP address that you specify for this server must be that of a physical cluster node, not the name of the logical host that will fail over.

Where to Go From Here

After you configure and activate the network resources, go to "How to Configure iPlanet Directory Server" in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide.

Installing and Configuring an iPlanet Web Server

The following information applies to this update release and all subsequent updates.

How to Configure an iPlanet Web Server (5/02)

The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.

This procedure describes how to configure an instance of the iPlanet Web Server to be highly available. Use the Netscape browser to interact with this procedure.

Consider the following points before you perform this procedure.

  1. Create a directory on the local disk of all the nodes to hold the logs, error files, and PID file that iPlanet Web Server manages.

    For iPlanet to work correctly, these files must be located on each node of the cluster, not on the cluster file system.

    Choose a location on the local disk that is the same for all the nodes in the cluster. Use the mkdir -p command to create the directory. Make nobody the owner of this directory.

    The following example shows how to complete this step.


    phys-schost-1# mkdir -p /var/pathname/http-instance/logs/
    

    Note -

    If you anticipate large error logs and PID files, do not put them in a directory under /var because they will overwhelm this directory. Rather, create a directory in a partition with adequate space to handle large files.


  2. From the administrative workstation or a cluster node, start the Netscape browser.

  3. On one of the cluster nodes, go to the directory https-admserv, then start the iPlanet admin server.


    # cd https-admserv
    # ./start
    

  4. Enter the URL of the iPlanet admin server in the Netscape browser.

    The URL consists of the physical hostname and port number that the iPlanet installation script established in Step 4 of the server installation procedure, for example, n1.example.com:8888. When you perform Step 3 of this procedure, the ./start command displays the admin URL.

    When prompted, use the user ID and password you specified in Step 6 of the server installation procedure to log in to the iPlanet administration server interface.

  5. Using the administration server where possible and manual changes otherwise, complete the following:

    • Verify that the server name is correct.

    • Verify that the server user is set as superuser.

    • Change the bind address field to one of the following addresses.

      • A logical hostname or shared address if you use DNS as your name service

      • The IP address associated with the logical hostname or shared address if you use NIS as your name service

    • Update the ErrorLog, PidLog, and Access Log entries to reflect the directory created in Step 1 of this section.

    • Save your changes.

Resource Group Properties

The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.

A new resource group property, Auto_start_on_new_cluster, has been added to the Resource Group Properties list.

Table 5-2 Resource Group Properties

Auto_start_on_new_cluster (Boolean)

    This property can be used to disable automatic startup of the resource group when a new cluster is forming.

    The default is TRUE. If set to TRUE, the Resource Group Manager attempts to start the resource group automatically to achieve Desired_primaries when the cluster is rebooted. If set to FALSE, the resource group does not start automatically when the cluster is rebooted.

    Category: Optional

    Default: True

    Tunable: Any time
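
For example, to prevent an existing resource group from being brought online automatically when the cluster reboots, you might change the property as follows. This is a sketch only; resource-group-1 is a hypothetical resource group name, and the scrgadm -c -g and -y options are used as shown elsewhere in this chapter.


# scrgadm -c -g resource-group-1 -y Auto_start_on_new_cluster=FALSE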