This chapter provides new data services installation and configuration information that has been added to the Sun Cluster 3.0 5/02 update release. This information supplements the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide. For new cluster framework installation information, see Chapter 4, Installation.
This chapter contains new information for the following topics.
"Synchronizing the Startups Between Resource Groups and Disk Device Groups"
"Relationship Between Resource Groups and Disk Device Groups"
"Freeing Node Resources by Off-Loading Non-Critical Resource Groups"
The following information applies to this update release and all subsequent updates.
Updated Sun Cluster HA for SAP chapter that includes procedures that support SAP as a scalable data service. See Appendix B, Installing and Configuring Sun Cluster HA for SAP.
Updated Sun Cluster HA for SAP chapter that includes procedures on how to set up a lock file. See "Setting Up a Lock File".
The following feature was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
After a cluster boots or services fail over to another node, global devices and cluster file systems might take time to become available. However, a data service can run its START method before the global devices and cluster file systems on which it depends come online. In this instance, the START method times out, and you must reset the state of the resource groups that the data service uses and restart the data service manually. The resource types HAStorage and HAStoragePlus monitor the global devices and cluster file systems and cause the START methods of the other resources in the same resource group to wait until those devices and file systems become available. (To determine which resource type to use, see "Recommendations" in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide.) To avoid additional administrative tasks, set up HAStorage or HAStoragePlus for all resource groups whose data service resources depend on global devices or cluster file systems.
To set up the HAStoragePlus resource type, see "How to Set Up HAStoragePlus Resource Type (5/02)".
The following feature was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
The HAStoragePlus resource type can be used to make a local file system highly available within a Sun Cluster environment. The local file system partitions must reside on global disk groups with affinity switchovers enabled, and the Sun Cluster environment must be configured for failover. You can then make any file system on multi-host disks accessible from any host that is directly connected to those multi-host disks. (You cannot use HAStoragePlus to make a root file system highly available.)
Using a highly available local file system is strongly recommended for some I/O-intensive data services, and configuring the HAStoragePlus resource type has been added to the Registration and Configuration procedures for these data services. For procedures on how to set up the HAStoragePlus resource type for these data services, see the following sections in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide.
For the procedure to set up HAStoragePlus resource type for other data services, see "How to Set Up HAStoragePlus Resource Type (5/02)".
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
The HAStoragePlus resource type was introduced in Sun Cluster 3.0 5/02. This new resource type performs the same functions as HAStorage and synchronizes the startups between resource groups and disk device groups. The HAStoragePlus resource type has an additional feature to make a local file system highly available. (For background information on making a local file system highly available, see "Enabling Highly Available Local File Systems".) To use both of these features, set up the HAStoragePlus resource type.
To set up HAStoragePlus, the local file system partitions must reside on global disk groups with affinity switchovers enabled, and the Sun Cluster environment must be configured for failover.
The following example uses a simple NFS service that shares out home directory data from a locally mounted directory /global/local-fs/nfs/export/home. The example assumes the following:
The mount point /global/local-fs/nfs will be used to mount a UFS local file system on a Sun Cluster global device partition.
The /etc/vfstab entry for the /global/local-fs/nfs file system specifies a local file system with the mount-at-boot flag set to no.
The PathPrefix directory (the directory used by HA-NFS to maintain administrative and status information) is on the root directory of the same file system to be mounted (for example, /global/local-fs/nfs).
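Under these assumptions, the /etc/vfstab entry might look like the following line. The Solstice DiskSuite metadevice names shown here are illustrative; substitute the global device partition that actually backs the file system.

```
/dev/md/nfsset/dsk/d30  /dev/md/nfsset/rdsk/d30  /global/local-fs/nfs  ufs  2  no  logging
```

Note that the mount-at-boot field is set to no, as required for a file system that HAStoragePlus mounts.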
Become superuser on a cluster member.
Determine whether the resource type is registered.
The following command prints a list of registered resource types.
# scrgadm -p | egrep Type
If necessary, register the resource type.
# scrgadm -a -t SUNW.nfs
Create the failover resource group nfs-rg.
# scrgadm -a -g nfs-rg -y PathPrefix=/global/local-fs/nfs
Create a logical host resource of type SUNW.LogicalHostname.
# scrgadm -a -j nfs-lh-rs -g nfs-rg -L -l log-nfs
Register the HAStoragePlus resource type with the cluster.
# scrgadm -a -t SUNW.HAStoragePlus
Create the resource nfs-hastp-rs of type SUNW.HAStoragePlus.
# scrgadm -a -j nfs-hastp-rs -g nfs-rg -t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/global/local-fs/nfs \
-x AffinityOn=TRUE
Bring the resource group nfs-rg online on a cluster node.
This node will become the primary node for the /global/local-fs/nfs file system's underlying global device partition. The file system /global/local-fs/nfs will then be locally mounted on this node.
# scswitch -Z -g nfs-rg
Register the SUNW.nfs resource type with the cluster. Create the resource nfs-rs of type SUNW.nfs and specify its resource dependency on the resource nfs-hastp-rs.
dfstab.nfs-rs will be present in /global/local-fs/nfs/SUNW.nfs.
# scrgadm -a -t SUNW.nfs
# scrgadm -a -g nfs-rg -j nfs-rs -t SUNW.nfs \
-y Resource_dependencies=nfs-hastp-rs
The nfs-hastp-rs resource must be online before you can set the dependency in the nfs resource.
Bring the resource nfs-rs online.
# scswitch -Z -g nfs-rg
Now whenever the service is migrated to a new node, the primary I/O path for /global/local-fs/nfs will always be online and collocated with the NFS servers. The file system /global/local-fs/nfs will be locally mounted before starting the NFS server.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software. Register and configure Sun Cluster HA for Oracle as a failover data service. You must register the data service and configure resource groups and resources for the Oracle server and listener. See "Planning for Sun Cluster Data Services" in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide and the Sun Cluster 3.0 12/01 Concepts document for details on resources and resource groups.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software. This procedure describes how to use the scrgadm command to register and configure Sun Cluster HA for Oracle.
This procedure includes creating the HAStoragePlus resource type. This resource type synchronizes actions between HAStoragePlus and the data service and enables you to use a highly available local file system. Sun Cluster HA for Oracle is disk-intensive, and therefore you should configure the HAStoragePlus resource type.
See the SUNW.HAStoragePlus(5) man page and "Relationship Between Resource Groups and Disk Device Groups" on page 5 for background information.
Other options also enable you to register and configure the data service. See "Tools for Data Service Resource Administration" on page 10 for details about these options.
You must have the following information to perform this procedure.
The names of the cluster nodes that master the data service.
The network resource that clients use to access the data service. Normally, you set up this IP address when you install the cluster. See the Sun Cluster 3.0 12/01 Concepts document for details on network resources.
The path to the Oracle application binaries for the resources that you plan to configure.
Perform this procedure on any cluster member.
Become superuser on a cluster member.
Run the scrgadm command to register the resource types for the data service.
For Sun Cluster HA for Oracle, you register two resource types, SUNW.oracle_server and SUNW.oracle_listener, as follows.
# scrgadm -a -t SUNW.oracle_server
# scrgadm -a -t SUNW.oracle_listener
Adds the data service resource type.
Specifies the predefined resource type name for your data service.
Create a failover resource group to hold the network and application resources.
You can optionally select the set of nodes on which the data service can run with the -h option, as follows.
# scrgadm -a -g resource-group [-h nodelist]
Specifies the name of the resource group. This name can be your choice but must be unique for resource groups within the cluster.
Specifies an optional comma-separated list of physical node names or IDs that identify potential masters. The order here determines the order in which the nodes are considered as primary during failover.
Use the -h option to specify the order of the node list. If all of the nodes that are in the cluster are potential masters, you do not need to use the -h option.
Verify that all of the network resources that you use have been added to your name service database.
You should have performed this verification during the Sun Cluster installation.
Ensure that all of the network resources are present in the server's and client's /etc/hosts file to avoid any failures because of name service lookup.
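For example, the /etc/hosts entry for the logical hostname might look like the following. The IP address and hostname shown here are illustrative.

```
192.168.10.25   schost-1    # logical hostname used by clients of the data service
```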
Add a network resource to the failover resource group.
# scrgadm -a -L -g resource-group -l logical-hostname [-n netiflist]
Specifies a network resource. The network resource is the logical hostname or shared address (IP address) that clients use to access Sun Cluster HA for Oracle.
Specifies an optional, comma-separated list that identifies the NAFO groups on each node. All of the nodes in nodelist of the resource group must be represented in the netiflist. If you do not specify this option, scrgadm(1M) attempts to discover a network adapter on the subnet that the hostname list identifies for each node in nodelist. For example, -n nafo0@nodename,nafo0@nodename2.
Register the HAStoragePlus resource type with the cluster.
# scrgadm -a -t SUNW.HAStoragePlus
Create the resource oracle-hastp-rs of type HAStoragePlus.
# scrgadm -a -j oracle-hastp-rs -g oracle-rg -t SUNW.HAStoragePlus \
[If your database is on a raw device, specify the global device path.]
-x GlobalDevicePaths=ora-set1,/dev/global/dsk/d1 \
[If your database is on a cluster file system, specify the global file system mount points.]
-x FilesystemMountPoints=/global/ora-inst,/global/ora-data/logs \
[If your database is on a highly available local file system, specify the local file system mount points.]
-x FilesystemMountPoints=/local/ora-data \
[Set AffinityOn to TRUE.]
-x AffinityOn=TRUE
AffinityOn must be set to TRUE, and the local file system must reside on global disk groups for failover to work.
Run the scswitch command to complete the following tasks and bring the resource group oracle-rg online on a cluster node.
Move the resource group into a managed state.
Bring the resource group online.
This node will be made the primary for device group ora-set1 and raw device /dev/global/dsk/d1. Device groups associated with file systems such as /global/ora-inst and /global/ora-data/logs will also be made primaries on this node.
# scswitch -Z -g oracle-rg
Create Oracle application resources in the failover resource group.
# scrgadm -a -j resource -g resource-group \
-t SUNW.oracle_server \
-x Connect_string=user/passwd \
-x ORACLE_SID=instance \
-x ORACLE_HOME=Oracle-home \
-x Alert_log_file=path-to-log \
-y Resource_dependencies=storageplus-resource

# scrgadm -a -j resource -g resource-group \
-t SUNW.oracle_listener \
-x LISTENER_NAME=listener \
-x ORACLE_HOME=Oracle-home \
-y Resource_dependencies=storageplus-resource
Specifies the name of the resource to add.
Specifies the name of the resource group into which the resources are to be placed.
Specifies the type of the resource to add.
Sets the path under $ORACLE_HOME for the server message log.
Specifies the user and password that the fault monitor uses to connect to the database. These settings must agree with the permissions that you set up in "How to Set Up Oracle Database Permissions" on page 23 in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide. If you use Solaris authorization, type a slash (/) instead of the user name and password.
Sets the Oracle system identifier.
Sets the name of the Oracle listener instance. This name must match the corresponding entry in listener.ora.
Sets the path to the Oracle home directory.
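For example, if LISTENER_NAME is set to ora-lsnr, the $ORACLE_HOME/network/admin/listener.ora file might contain an entry such as the following. The host and port values are illustrative; the host is typically the logical hostname that the resource group manages.

```
ora-lsnr =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = schost-1)(PORT = 1521)))
```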
When a fault occurs in an Oracle server resource and causes a restart, the whole resource group is restarted. Any other resources (such as Apache or DNS) in the resource group are restarted, even if they did not have a fault. To prevent other resources from being restarted along with an Oracle server resource, put them in a separate resource group.
Optionally, you can set additional extension properties that belong to the Oracle data service to override their default values. See "Configuring Sun Cluster HA for Oracle Extension Properties" in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide for a list of extension properties.
The following example shows how to register Sun Cluster HA for Oracle on a two-node cluster.
Cluster Information
Node names: phys-schost-1, phys-schost-2
Logical Hostname: schost-1
Resource group: resource-group-1 (failover resource group)
Oracle Resources: oracle-server-1, oracle-listener-1
Oracle Instances: ora-lsnr (listener), ora-srvr (server)

(Add the failover resource group to contain all of the resources.)
# scrgadm -a -g resource-group-1

(Add the logical hostname resource to the resource group.)
# scrgadm -a -L -g resource-group-1 -l schost-1

(Register the Oracle resource types.)
# scrgadm -a -t SUNW.oracle_server
# scrgadm -a -t SUNW.oracle_listener

(Add the Oracle application resources to the resource group.)
# scrgadm -a -j oracle-server-1 -g resource-group-1 \
-t SUNW.oracle_server -x ORACLE_HOME=/global/oracle \
-x Alert_log_file=/global/oracle/message-log \
-x ORACLE_SID=ora-srvr -x Connect_string=scott/tiger

# scrgadm -a -j oracle-listener-1 -g resource-group-1 \
-t SUNW.oracle_listener -x ORACLE_HOME=/global/oracle \
-x LISTENER_NAME=ora-lsnr

(Bring the resource group online.)
# scswitch -Z -g resource-group-1
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software. Use the procedures in this section to register and configure Sun Cluster HA for Sybase ASE as a failover data service.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software. This procedure describes how to use the scrgadm(1M) command to register and configure Sun Cluster HA for Sybase ASE.
This procedure includes creating the HAStoragePlus resource type. This resource type synchronizes actions between HAStoragePlus and Sun Cluster HA for Sybase ASE and enables you to use a highly available local file system. Sun Cluster HA for Sybase ASE is disk-intensive, and therefore you should configure the HAStoragePlus resource type.
See the SUNW.HAStoragePlus(5) man page and "Relationship Between Resource Groups and Disk Device Groups" on page 5 for more information about the HAStoragePlus resource type.
Other options also enable you to register and configure the data service. See "Tools for Data Service Resource Administration" on page 10 for details about these options.
To perform this procedure, you must have the following information.
The names of the cluster nodes that master the data service.
The network resource that clients use to access the data service. You typically configure the IP address when you install the cluster. See the sections in the Sun Cluster 3.0 12/01 Software Installation Guide on planning the Sun Cluster environment and on how to install the Solaris operating environment for details.
The path to the Sybase ASE application installation.
Perform the following steps on one cluster member.
Become superuser on a cluster member.
Run the scrgadm command to register resource types for Sun Cluster HA for Sybase ASE.
# scrgadm -a -t SUNW.sybase
Adds the resource type for the data service.
Specifies the resource type name that is predefined for your data service.
Create a failover resource group to hold the network and application resources.
You can optionally select the set of nodes on which the data service can run with the -h option, as follows.
# scrgadm -a -g resource-group [-h nodelist]
Specifies the name of the resource group. This name can be your choice but must be unique for resource groups within the cluster.
Specifies an optional comma-separated list of physical node names or IDs that identify potential masters. The order here determines the order in which the nodes are considered as primary during failover.
Use the -h option to specify the order of the node list. If all of the nodes that are in the cluster are potential masters, you do not need to use the -h option.
Verify that all of the network resources that you use have been added to your name service database.
You should have performed this verification during the Sun Cluster installation.
Ensure that all of the network resources are present in the server's and client's /etc/hosts file to avoid any failures because of name service lookup.
Add a network resource to the failover resource group.
# scrgadm -a -L -g resource-group -l logical-hostname [-n netiflist]
Specifies a network resource. The network resource is the logical hostname or shared address (IP address) that clients use to access Sun Cluster HA for Sybase ASE.
Specifies an optional, comma-separated list that identifies the NAFO groups on each node. All of the nodes in nodelist of the resource group must be represented in the netiflist. If you do not specify this option, scrgadm(1M) attempts to discover a network adapter on the subnet that the hostname list identifies for each node in nodelist. For example, -n nafo0@nodename,nafo0@nodename2.
Register the HAStoragePlus resource type with the cluster.
# scrgadm -a -t SUNW.HAStoragePlus
Create the resource sybase-hastp-rs of type HAStoragePlus.
# scrgadm -a -j sybase-hastp-rs -g sybase-rg \
-t SUNW.HAStoragePlus \
-x GlobalDevicePaths=sybase-set1,/dev/global/dsk/d1 \
-x FilesystemMountPoints=/global/sybase-inst \
-x AffinityOn=TRUE
AffinityOn must be set to TRUE, and the local file system must reside on global disk groups for failover to work.
Run the scswitch command to complete the following tasks and bring the resource group sybase-rg online on a cluster node.
Move the resource group into a managed state.
Bring the resource group online.
This node will be made the primary for device group sybase-set1 and raw device /dev/global/dsk/d1. Device groups associated with file systems such as /global/sybase-inst will also be made primaries on this node.
# scswitch -Z -g sybase-rg
Create Sybase ASE application resources in the failover resource group.
# scrgadm -a -j resource -g resource-group \
-t SUNW.sybase \
-x Environment_File=environment-file-path \
-x Adaptive_Server_Name=adaptive-server-name \
-x Backup_Server_Name=backup-server-name \
-x Text_Server_Name=text-server-name \
-x Monitor_Server_Name=monitor-server-name \
-x Adaptive_Server_Log_File=log-file-path \
-x Stop_File=stop-file-path \
-x Connect_string=user/passwd \
-y Resource_dependencies=storageplus-resource
Specifies the resource name to add.
Specifies the name of the resource group into which the RGM places the resources.
Specifies the resource type to add.
Sets the name of the environment file.
Sets the name of the adaptive server.
Sets the name of the backup server.
Sets the name of the text server.
Sets the name of the monitor server.
Sets the path to the log file for the adaptive server.
Sets the path to the stop file.
Specifies the user and password that the fault monitor uses to connect to the database.
You do not have to specify extension properties that have default values. See "Configuring Sun Cluster HA for Sybase ASE Extension Properties" in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide for more information.
Run the scswitch(1M) command to complete the following task.
Enable the resource and fault monitoring.
# scswitch -Z -g resource-group
After you register and configure Sun Cluster HA for Sybase ASE, go to "How to Verify the Sun Cluster HA for Sybase ASE Installation" in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide.
The following information applies to this update release and all subsequent updates.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
Highly available local file system - Using HAStoragePlus, you can integrate a local file system into the Sun Cluster environment, making the local file system highly available. HAStoragePlus provides additional file system capabilities, such as checks, mounts, and unmounts, that enable Sun Cluster to fail over local file systems. To fail over, the local file system must reside on global disk groups with affinity switchovers enabled.
See the individual data service chapters or "Enabling Highly Available Local File Systems" for information on how to use the HAStoragePlus resource type.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
The resource type HAStoragePlus enables you to use a highly available local file system in a Sun Cluster environment configured for failover. This resource type is supported in Sun Cluster 3.0 5/02. See "Enabling Highly Available Local File Systems" for information on setting up the HAStoragePlus resource type.
See the planning chapter of the Sun Cluster 3.0 12/01 Software Installation Guide for information on how to create cluster file systems.
The following information applies to this update release and all subsequent releases.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software. The resource types HAStorage and HAStoragePlus can be used to configure the following options.
Coordinate the boot order of disk devices and resource groups by causing the START methods of the other resources in the same resource group that contains the HAStorage or HAStoragePlus resource to wait until the disk device resources become available
With AffinityOn set to True, enforce colocation of resource groups and disk device groups on the same node, thus enhancing the performance of disk-intensive data services
In addition, HAStoragePlus is capable of mounting any cluster file system found to be in an unmounted state. See "Planning the Cluster File System Configuration" in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide for more information.
If the device group is switched to another node while the HAStorage or HAStoragePlus resource is online, AffinityOn has no effect and the resource group does not migrate along with the device group. On the other hand, if the resource group is switched to another node, AffinityOn being set to True causes the device group to follow the resource group to the new node.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
To determine whether to create HAStorage or HAStoragePlus resources within a data service resource group, consider the following criteria.
Determine whether to use HAStorage or HAStoragePlus.
Use HAStorage if you are using the Sun Cluster 3.0 12/01 software release or earlier.
Use HAStoragePlus if you are using the Sun Cluster 3.0 5/02 software release. (To make a local file system highly available in a Sun Cluster environment configured for failover, you must upgrade to Sun Cluster 3.0 5/02 and use the HAStoragePlus resource type. See "Planning the Cluster File System Configuration" in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide for more information.)
In cases where a data service resource group has a node list in which some of the nodes are not directly connected to the storage, you must configure HAStorage or HAStoragePlus resources in the resource group and set the dependency of the other data service resources to the HAStorage or HAStoragePlus resource. This requirement coordinates the boot order between the storage and the data services.
If your data service is disk intensive, such as Sun Cluster HA for Oracle and Sun Cluster HA for NFS, ensure that you perform the following tasks.
Add a HAStorage or HAStoragePlus resource to your data service resource group.
Switch the HAStorage or HAStoragePlus resource online.
Set the dependency of your data service resources to the HAStorage or HAStoragePlus resource.
Set AffinityOn to True.
When you perform these tasks, the resource groups and disk device groups are collocated on the same node.
If your data service is not disk intensive, such as one that reads all of its files at startup (for example, Sun Cluster HA for DNS), configuring the HAStorage or HAStoragePlus resource type is optional.
See the individual chapters on data services in this book for specific recommendations.
See "Synchronizing the Startups Between Resource Groups and Disk Device Groups" for information about the relationship between disk device groups and resource groups. The SUNW.HAStorage(5) and SUNW.HAStoragePlus(5) man pages provides additional details.
See "Enabling Highly Available Local File Systems" for procedures for mounting of file systems such as VxFS in a local mode. The SUNW.HAStoragePlus man page provides additional details.
The following feature was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
Prioritized Service Management (RGOffload) allows your cluster to automatically free a node's resources for critical data services. Use RGOffload when the startup of a critical failover data service requires a non-critical scalable or failover data service to be brought offline. RGOffload off-loads resource groups that contain non-critical data services.
The critical data service must be a failover data service. The data service to be off-loaded can be a failover or scalable data service.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
Become superuser on a cluster member.
Determine whether the RGOffload resource type is registered.
The following command prints a list of resource types.
# scrgadm -p | egrep SUNW.RGOffload
If needed, register the resource type.
# scrgadm -a -t SUNW.RGOffload
Set the Desired_primaries to zero in each resource group to be offloaded by the RGOffload resource.
# scrgadm -c -g offload-rg -y Desired_primaries=0
Add the RGOffload resource to the critical failover resource group and set the extension properties.
Do not place a resource group on more than one resource's rg_to_offload list. Placing a resource group on multiple rg_to_offload lists may cause the resource group to be taken offline and brought back online repeatedly.
See "Configuring RGOffload Extension Properties (5/02)" for extension property descriptions.
# scrgadm -a -j rgoffload-resource -t SUNW.RGOffload -g critical-rg \
-x rg_to_offload=offload-rg-1,offload-rg-2,... \
-x continue_to_offload=TRUE -x max_offload_retry=15
Extension properties other than rg_to_offload are shown with default values here. rg_to_offload is a comma-separated list of resource groups that are not dependent on each other. This list cannot include the resource group to which the RGOffload resource is being added.
Enable the RGOffload resource.
# scswitch -e -j rgoffload-resource
Set the dependency of the critical failover resource on the RGOffload resource.
# scrgadm -c -j critical-resource \
-y Resource_dependencies=rgoffload-resource
You can also use Resource_dependencies_weak. A weak dependency on the RGOffload resource allows the critical failover resource to start even if errors are encountered during the off-load of offload-rg.
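As an illustrative sketch, a weak dependency would be set with a command such as the following; the resource names are the placeholders used in this procedure.

```
# scrgadm -c -j critical-resource \
-y Resource_dependencies_weak=rgoffload-resource
```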
Bring the resource groups to be offloaded online.
# scswitch -z -g offload-rg,offload-rg-2,... -h nodelist
The resource group remains online on all nodes where the critical resource group is offline. The fault monitor prevents the resource group from running on the node where the critical resource group is online.
Because Desired_primaries for resource groups to be offloaded is set to 0 (see Step 4), the -Z option will not bring these resource groups online.
If the critical failover resource group is not online, bring it online.
# scswitch -Z -g critical-rg
This example describes how to configure an RGOffload resource (rgofl), the critical resource group that contains the RGOffload resource (oracle_rg), and scalable resource groups that are off-loaded when the critical resource group comes online (IWS-SC, IWS-SC-2). The critical resource in this example is oracle-server-rs.
In this example, oracle_rg, IWS-SC, and IWS-SC-2 can be mastered on any node of cluster triped: phys-triped-1, phys-triped-2, phys-triped-3.
[Determine whether the SUNW.RGOffload resource type is registered.]
# scrgadm -p | egrep SUNW.RGOffload

[If needed, register the resource type.]
# scrgadm -a -t SUNW.RGOffload

[Set the Desired_primaries to zero in each resource group to be offloaded by the RGOffload resource.]
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
Typically, you use the command line scrgadm -x parameter=value to configure extension properties when you create the RGOffload resource. See "Standard Properties" in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide for details on all of the Sun Cluster standard properties.
Table 5-1 describes extension properties that you can configure for RGOffload. The Tunable entries indicate when you can update the property.
Table 5-1 RGOffload Extension Properties
rg_to_offload (string)
    A comma-separated list of resource groups to off-load on a node when a critical failover resource group starts up on that node. The list should not contain resource groups that depend on each other. This property has no default and must be set.
    RGOffload does not check for dependency loops in the list of resource groups that you set in the rg_to_offload extension property. For example, if resource group RG-B depends in some way on RG-A, do not include both RG-A and RG-B in rg_to_offload.
    Default: None
    Tunable: Any time

continue_to_offload (Boolean)
    A Boolean that indicates whether to continue off-loading the remaining resource groups in the rg_to_offload list after an error occurs while a resource group is being off-loaded.
    This property is used only by the START method.
    Default: True
    Tunable: Any time

max_offload_retry (integer)
    The number of attempts to off-load a resource group during startup if failures occur because of cluster or resource group reconfiguration. Successive retries are 10 seconds apart.
    Set max_offload_retry so that (the number of resource groups to be off-loaded * max_offload_retry * 10 seconds) is less than the Start_timeout of the RGOffload resource. If this value approaches or exceeds Start_timeout, the START method of the RGOffload resource can time out before the maximum number of offload attempts is completed.
    This property is used only by the START method.
    Default: 15
    Tunable: Any time
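The sizing rule for max_offload_retry can be checked with simple shell arithmetic. The values below are illustrative assumptions (two off-loaded resource groups, the default max_offload_retry, and an assumed Start_timeout of 600 seconds), not values read from a cluster:

```shell
# Worst-case time spent in offload retries versus Start_timeout.
num_rgs=2               # resource groups listed in rg_to_offload (assumed)
max_offload_retry=15    # default value of the property
retry_interval=10       # seconds between successive retries
start_timeout=600       # assumed Start_timeout of the RGOffload resource

worst_case=$((num_rgs * max_offload_retry * retry_interval))
if [ "$worst_case" -lt "$start_timeout" ]; then
    echo "OK: worst case ${worst_case}s fits within Start_timeout ${start_timeout}s"
else
    echo "Increase Start_timeout or reduce max_offload_retry"
fi
```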
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
The Fault Monitor probe for the RGOffload resource keeps the resource groups that are specified in the rg_to_offload extension property offline on the node that masters the critical resource. During each probe cycle, the Fault Monitor verifies that the resource groups to be off-loaded (offload-rg) are offline on the node that masters the critical resource. If offload-rg is online on that node, the Fault Monitor attempts to start offload-rg on nodes other than the node that masters the critical resource, thereby bringing offload-rg offline on the node that masters the critical resource.
Because Desired_primaries for offload-rg is set to 0, off-loaded resource groups are not restarted on nodes that become available later. Therefore, the RGOffload Fault Monitor attempts to start offload-rg on as many primaries as possible, until the Maximum_primaries limit is reached, while keeping offload-rg offline on the node that masters the critical resource.
RGOffload attempts to start up all off-loaded resource groups unless they are in the maintenance or unmanaged state. To place a resource group in an unmanaged state, use the scswitch command.
# scswitch -u -g resourcegroup
The Fault Monitor probe cycle is invoked after every Thorough_probe_interval.
The following information applies to this update release and all subsequent updates.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
The iPlanet Directory Server is bundled with the Solaris 9 operating environment. If you are using Solaris 9, use the Solaris 9 CD-ROMs to install the iPlanet Directory Server.
Install the iPlanet Directory Server packages on all the nodes of the cluster, if they are not already installed.
Identify a location on a cluster file system where you intend to keep all your directory servers (for example, /global/nsldap).
Optionally, you can create a separate directory for this file system.
On all nodes, create a link to this directory from /var/ds5. If /var/ds5 already exists on a node, remove it and create the link.
# rmdir /var/ds5
# ln -s /global/nsldap /var/ds5
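Because the link must exist on every node, the remove-and-relink step is worth wrapping in a small helper that is safe to rerun. This is a sketch; relink is a hypothetical helper name, and the paths are the example's, not fixed names:

```shell
# relink TARGET LINK: replace an existing symlink or empty directory at
# LINK with a symlink that points to TARGET.
relink() {
    target=$1
    link=$2
    if [ -L "$link" ]; then
        rm "$link"            # stale symlink from an earlier run
    elif [ -d "$link" ]; then
        rmdir "$link"         # empty directory, as with a fresh /var/ds5
    fi
    ln -s "$target" "$link"
}

# On each node, as root:
#   relink /global/nsldap /var/ds5
```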
On any one node, set up the directory server(s) in the usual way.
# directoryserver setup
On this node, the link /usr/iplanet/ds5/slapd-instance-name is created automatically. On all other nodes, create the link manually.
In the following example, dixon-1 is the name of the Directory Server.
# ln -s /var/ds5/slapd-dixon-1 /usr/iplanet/ds5/slapd-dixon-1
Supply the logical hostname when the setup command prompts you for the server name.
This step is required for failover to work correctly.
The logical host that you specify must be online on the node from which you run the directoryserver setup command. This state is necessary because the iPlanet Directory Server starts automatically at the end of installation and fails to start if the logical host is offline on that node.
If prompted for the logical hostname, select the logical hostname along with your domain for the computer name, for example, phys-schost-1.example.com.
Supply the hostname that is associated with a network resource when the setup command prompts you for the full server name.
If prompted for the IP address to be used as the iPlanet Directory Server Administrative Server, specify the IP address of the cluster node on which you are running directoryserver setup.
As part of the installation, you set up an iPlanet Directory Server Administrative Server. The IP address that you specify for this server must be that of a physical cluster node, not the name of the logical host that will fail over.
After you configure and activate the network resources, go to "How to Configure iPlanet Directory Server" in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide.
The following information applies to this update release and all subsequent updates.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
This procedure describes how to configure an instance of the iPlanet Web Server to be highly available. Use the Netscape™ browser to interact with this procedure.
Consider the following points before you perform this procedure.
Before you start, ensure that you have installed the browser on a machine that can access the network on which the cluster resides. You can install the browser on a cluster node or on the administrative workstation for the cluster.
Your configuration files can reside on either a local file system or on the cluster file system.
Any certificates that are installed for the secure instances must be installed from all cluster nodes. This installation involves running the admin console on each node. Thus, if a cluster has nodes n1, n2, n3, and n4, the installation steps are as follows.
Run the admin server on node n1.
From your Web browser, connect to the admin server as http://n1.domain:port (for example, http://n1.example.com:8888), where port is whatever you specified as the admin server port. The port is typically 8888.
Install the certificate.
Stop the admin server on node n1 and run the admin server from node n2.
From the Web browser, connect to the new admin server as http://n2.domain:port, for example, http://n2.example.com:8888.
Repeat these steps for nodes n3 and n4.
After you have considered the preceding points, complete the following steps.
Create a directory on the local disk of all the nodes to hold the logs, error files, and PID file that iPlanet Web Server manages.
For iPlanet to work correctly, these files must be located on each node of the cluster, not on the cluster file system.
Choose a location on the local disk that is the same for all the nodes in the cluster. Use the mkdir -p command to create the directory. Make nobody the owner of this directory.
The following example shows how to complete this step.
phys-schost-1# mkdir -p /var/pathname/http-instance/logs/
If you anticipate large error logs and PID files, do not put them in a directory under /var, because they will overwhelm this directory. Instead, create the directory in a partition with adequate space to handle large files.
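Step 1 can be scripted so that every node gets the same directory with the same owner. This is a sketch; make_logdir is a hypothetical helper, the path is the example's placeholder, and because chown requires root its failure is tolerated when the script runs unprivileged:

```shell
# make_logdir DIR: create the per-node logs directory for the iPlanet
# Web Server instance and make user nobody its owner.
make_logdir() {
    mkdir -p "$1"
    chown nobody "$1" 2>/dev/null || true   # requires root privileges
}

# On each node, using the same local-disk path, for example:
#   make_logdir /var/pathname/http-instance/logs
```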
From the administrative workstation or a cluster node, start the Netscape browser.
On one of the cluster nodes, go to the directory https-admserv, then start the iPlanet admin server.
# cd https-admserv
# ./start
Enter the URL of the iPlanet admin server in the Netscape browser.
The URL consists of the physical hostname and port number that the iPlanet installation script established in Step 4 of the server installation procedure, for example, n1.example.com:8888. When you perform Step 2 of this procedure, the ./start command displays the admin URL.
When prompted, use the user ID and password you specified in Step 6 of the server installation procedure to log in to the iPlanet administration server interface.
Using the administration server where possible and manual changes otherwise, complete the following:
Verify that the server name is correct.
Verify that the server user is set as superuser.
Change the bind address field to one of the following addresses.
A logical hostname or shared address if you use DNS as your name service
The IP address associated with the logical hostname or shared address if you use NIS as your name service
Update the ErrorLog, PidLog, and Access Log entries to reflect the directory created in Step 1 of this section.
Save your changes.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
A new resource group property, Auto_start_on_new_cluster, has been added to the Resource Group Properties list.
Table 5-2 Resource Group Properties
Auto_start_on_new_cluster (Boolean)
    This property can be used to disable automatic startup of the resource group when a new cluster is forming.
    The default is TRUE. If set to TRUE, the Resource Group Manager attempts to start the resource group automatically to achieve Desired_primaries when the cluster is rebooted. If set to FALSE, the resource group does not start automatically when the cluster is rebooted.
    Category: Optional
    Default: True
    Tunable: Any time
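As a sketch of how this property is set, the usual scrgadm pattern applies. Here resource-group-name is a placeholder, and the command runs only on a Sun Cluster node, so it is shown as a fragment:

```shell
# Prevent resource-group-name from starting automatically when a new
# cluster forms.
scrgadm -c -g resource-group-name -y Auto_start_on_new_cluster=False
```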