Sun Cluster uses the concept of node lists for disk device groups and resource groups. Node lists are ordered lists of primary nodes, which are potential masters of the disk device group or resource group. Sun Cluster uses a failback policy to determine what happens when a node that has been down rejoins the cluster and the rejoining node appears earlier in the node list than the current primary node. If failback is set to True, the device group or resource group is switched off the current primary and onto the rejoining node, making the rejoining node the new primary.
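As an illustration, the failback policy of an existing disk device group can be changed with the scconf(1M) command; the device group name disk-group-1 below is a placeholder for your own device group.

```shell
# Enable failback on an existing disk device group.
# Run as superuser on any cluster node; replace disk-group-1
# with the name of your device group.
scconf -c -D name=disk-group-1,failback=enabled

# Print the cluster configuration to verify the device group's
# node list and failback setting.
scconf -p
```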
To ensure high availability of a failover resource group, make the resource group's node list match the node list of associated disk device groups. For a scalable resource group, the resource group's node list cannot always match the device group's node list because, currently, a device group's node list must contain exactly two nodes. For a greater-than-two-node cluster, the node list for the scalable resource group can have more than two nodes.
For example, assume you have a disk device group disk-group-1 that has nodes phys-schost-1 and phys-schost-2 in its node list, and the failback policy is set to Enabled. Assume you also have a failover resource group, resource-group-1, which uses disk-group-1 to hold its application data. When you set up resource-group-1, also specify phys-schost-1 and phys-schost-2 for the resource group's node list and set the failback policy to True.
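A sketch of this resource-group setup with the scrgadm(1M) command might look as follows; the group, node, and device group names are taken from the example above.

```shell
# Create the failover resource group with a node list that matches
# the node list of disk-group-1, and set its failback policy to True.
scrgadm -a -g resource-group-1 -h phys-schost-1,phys-schost-2 \
    -y Failback=True

# Set the matching failback policy on the disk device group itself.
scconf -c -D name=disk-group-1,failback=enabled
```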
To ensure high availability of a scalable resource group, make the scalable resource group's node list a superset of the node list for the disk device group. Doing so ensures that the nodes that are directly connected to the disks are also nodes that can run the scalable resource group. The advantage is that, when at least one cluster node connected to the data is up, the scalable resource group runs on that same node, making the scalable services available also.
See the Sun Cluster 3.0 U1 Installation Guide for information on how to set up disk device groups. See the Sun Cluster 3.0 U1 Concepts document for more details on the relationship between disk device groups and resource groups.
The resource type SUNW.HAStorage serves the following purposes:

- Coordinates the boot order of disk devices and resource groups, by causing the START methods of the other resources in the resource group that contains the SUNW.HAStorage resource to wait until the disk device resources become available

- With AffinityOn set to True, enforces colocation of resource groups and disk device groups on the same node, thus enhancing the performance of disk-intensive data services
If the device group is switched to another node while the SUNW.HAStorage resource is online, AffinityOn has no effect and the resource group does not migrate along with the device group. On the other hand, if the resource group is switched to another node, AffinityOn being set to True causes the device group to follow the resource group to the new node.
To determine whether to create SUNW.HAStorage resources within a data service resource group, consider the following criteria.
In cases where a data-service resource group has a node list in which some of the nodes are not directly connected to the storage, you must configure SUNW.HAStorage resources in the resource group and set the dependency of the other data-service resources to the SUNW.HAStorage resource. This requirement coordinates the boot order between the storage and the data services.
If your data service is disk intensive, such as the Sun Cluster HA for Oracle and Sun Cluster HA for NFS data services, add a SUNW.HAStorage resource to your data-service resource group, set the dependency of your data-service resources to the SUNW.HAStorage resource, and set AffinityOn to True. When you perform these steps, the resource groups and disk device groups are colocated on the same node.
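As a sketch, the steps above might be carried out as follows with scrgadm(1M) and scswitch(1M); the resource names hastorage-res and oracle-res, and the path given to ServicePaths, are hypothetical and must match your own configuration.

```shell
# Register the SUNW.HAStorage resource type (once per cluster).
scrgadm -a -t SUNW.HAStorage

# Add a SUNW.HAStorage resource to the data-service resource group,
# naming the service path it coordinates and setting AffinityOn=True
# to enforce colocation with the disk device group.
scrgadm -a -j hastorage-res -g resource-group-1 -t SUNW.HAStorage \
    -x ServicePaths=/global/disk-group-1 -x AffinityOn=True

# Make the data-service resource depend on the SUNW.HAStorage
# resource, so that its START method waits for the storage.
scrgadm -c -j oracle-res -y Resource_dependencies=hastorage-res

# Enable the SUNW.HAStorage resource.
scswitch -e -j hastorage-res
```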
If your data service is not disk intensive, such as one that reads all of its files at startup (for example, the Sun Cluster HA for DNS data service), configuring the SUNW.HAStorage resource type is optional.
If your cluster contains only two nodes, configuring the SUNW.HAStorage resource type is optional. However, if you plan to add nodes and run scalable services later on, you must configure the SUNW.HAStorage resource type when you perform these tasks. To prepare, you can set up the SUNW.HAStorage resource type now and add nodes to the node list later.
See the individual chapters on data services in this document for specific recommendations.
See "How to Set Up SUNW.HAStorage Resource Type for New Resources" for information about the relationship between disk device groups and resource groups. Additional details are in the SUNW.HAStorage(5) man page.