Sun Cluster Data Services Planning and Administration Guide for Solaris OS

How to Migrate the Application From a Global-Cluster Voting Node to a Global-Cluster Non-Voting Node

This procedure assumes a three-node cluster with a global-cluster non-voting node created on each of the three nodes. The configuration directory that is made highly available by using the HAStoragePlus resource must also be accessible from the global-cluster non-voting nodes.

  1. Create the failover resource group, with a node list of global-cluster voting nodes, to hold the shared address that the scalable resource group is to use.


    # clresourcegroup create -n node1,node2,node3 sa-resource-group
    
    sa-resource-group

    Specifies your choice of the name of the failover resource group to add. This name must begin with an ASCII character.
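
    To confirm that the failover resource group was created with the expected node list, you can display its configuration. The following line is only a verification sketch that uses the example names from this procedure.

    # clresourcegroup show sa-resource-group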

  2. Add the shared address resource to the failover resource group.


    # clressharedaddress create -g sa-resource-group -h hostnamelist, … \
    [-X auxnodelist] [-N netiflist] network-resource
    
    -g sa-resource-group

    Specifies the resource group name. In the node list of a shared address resource, do not specify more than one global-cluster non-voting node on the same global-cluster voting node. Specify the same list of nodename:zonename pairs as the node list of the scalable resource group.

    -h hostnamelist, …

    Specifies a comma-separated list of shared address hostnames.

    -X auxnodelist

    Specifies a comma-separated list of node names, node IDs, or zones that identify the cluster nodes that can host the shared address but never serve as primary if failover occurs. These nodes are mutually exclusive with the nodes that are identified as potential masters in the resource group's node list. If no auxiliary node list is explicitly specified, the list defaults to all cluster node names that are not included in the node list of the resource group that contains the shared address resource.


    Note –

    To ensure that a scalable service runs in all global-cluster non-voting nodes that were created to master the service, the complete list of nodes must be included in the node list of the shared address resource group or the auxnodelist of the shared address resource. If all the global-cluster non-voting nodes are listed in the node list, the auxnodelist can be omitted.


    -N netiflist

    Specifies an optional, comma-separated list that identifies the IPMP groups that are on each node. Each element in netiflist must be in the form of netif@node. netif can be given as an IPMP group name, such as sc_ipmp0. The node can be identified by the node name or node ID, such as sc_ipmp0@1 or sc_ipmp0@phys-schost-1.


    Note –

    Sun Cluster does not support the use of the adapter name for netif.


    network-resource

    Specifies an optional resource name of your choice.
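
    For example, assuming a shared address hostname of sa-host-1 and an IPMP group named sc_ipmp0 on each node (both hypothetical values chosen only for illustration), the command might look like the following sketch.

    # clressharedaddress create -g sa-resource-group -h sa-host-1 \
    -N sc_ipmp0@node1,sc_ipmp0@node2,sc_ipmp0@node3 network-resource-1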

  3. Create the scalable resource group.


    # clresourcegroup create -p Maximum_primaries=m -p Desired_primaries=n \
    -n node1,node2,node3 \
    -p RG_dependencies=sa-resource-group resource-group-1
    
    -p Maximum_primaries=m

    Specifies the maximum number of active primaries for this resource group.

    -p Desired_primaries=n

    Specifies the number of active primaries on which the resource group should attempt to start.

    resource-group-1

    Specifies your choice of the name of the scalable resource group to add. This name must begin with an ASCII character.
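
    For example, for a three-node cluster in which the scalable service is to run on all three nodes, a hypothetical invocation might set both values to 3, as in the following sketch.

    # clresourcegroup create -p Maximum_primaries=3 -p Desired_primaries=3 \
    -n node1,node2,node3 \
    -p RG_dependencies=sa-resource-group resource-group-1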

  4. Create the HAStoragePlus resource hastorageplus-1, and define the filesystem mount points.


    # clresource create -g resource-group-1 -t SUNW.HAStoragePlus \
    -p FilesystemMountPoints=/global/resource-group-1 hastorageplus-1
    

    The resource is created in the enabled state.
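
    You can verify that the resource was created and is enabled by checking its status, for example:

    # clresource status hastorageplus-1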

  5. Register the resource type for the application.


    # clresourcetype register resource-type
    
    resource-type

    Specifies the name of the resource type to add. See the release notes for your release of Sun Cluster to determine the predefined name to supply.
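
    For example, if the application is the Apache web server, the predefined resource type is SUNW.apache. The following line is only an illustration; substitute the resource type for your application.

    # clresourcetype register SUNW.apache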

  6. Add the application resource to resource-group-1, and set the dependency to hastorageplus-1.


    # clresource create -g resource-group-1 -t SUNW.application \
    [-p "extension-property[{node-specifier}]"=value, …] -p Scalable=True \
    -p Resource_dependencies=network-resource,hastorageplus-1 \
    -p Port_list=port-number/protocol resource
    
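    Continuing the Apache illustration from the previous step, a hypothetical invocation might look like the following sketch. The port 80/tcp, the binary directory /usr/apache/bin, and the resource name apache-resource-1 are assumptions chosen only for this example; network-resource-1 is the hypothetical shared address resource from the Step 2 sketch.

    # clresource create -g resource-group-1 -t SUNW.apache \
    -p Bin_dir=/usr/apache/bin -p Scalable=True \
    -p Resource_dependencies=network-resource-1,hastorageplus-1 \
    -p Port_list=80/tcp apache-resource-1
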
  7. Bring the failover resource group online.


    # clresourcegroup online sa-resource-group
    
  8. Bring the scalable resource group online on all the nodes.


    # clresourcegroup online resource-group-1
    
  9. Install and boot zone1 on each of the nodes: node1, node2, and node3.
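
    The following is a minimal sketch of creating, installing, and booting the non-voting node on one node, assuming a zone named zone1 with a zone path of /zones/zone1 (a hypothetical path); repeat the sequence on node2 and node3.

    # zonecfg -z zone1
    zonecfg:zone1> create
    zonecfg:zone1> set zonepath=/zones/zone1
    zonecfg:zone1> commit
    zonecfg:zone1> exit
    # zoneadm -z zone1 install
    # zoneadm -z zone1 boot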

  10. Bring the application resource group offline on two of the nodes (node1 and node2) by switching it to run only on node3.


    Note –

    Ensure the shared address is online on node3.



    # clresourcegroup switch -n node3 resource-group-1
    
    resource-group-1

    Specifies the name of the resource group to switch.
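
    To confirm that the application resource group is now online only on node3 and that the shared address remains online there, you can check the status of both groups, for example:

    # clresourcegroup status sa-resource-group resource-group-1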

  11. Update the nodelist property of the failover resource group to replace node1 and node2 with their corresponding global-cluster non-voting nodes, node1:zone1 and node2:zone1.


    # clresourcegroup set -n node1:zone1,node2:zone1,node3 sa-resource-group
    
  12. Update the nodelist property of the application resource group to replace node1 and node2 with their corresponding global-cluster non-voting nodes, node1:zone1 and node2:zone1.


    # clresourcegroup set -n node1:zone1,node2:zone1,node3 resource-group-1
    
  13. Bring the failover resource group and application resource group online only on the newly added zones.


    Note –

    The failover resource group will be online only on node1:zone1, and the application resource group will be online only on node1:zone1 and node2:zone1.



    # clresourcegroup switch -n node1:zone1 sa-resource-group
    

    # clresourcegroup switch -n node1:zone1,node2:zone1 resource-group-1
    
  14. Update the nodelist property of both resource groups to replace the global-cluster voting node node3 with its global-cluster non-voting node, node3:zone1.


    # clresourcegroup set -n node1:zone1,node2:zone1,node3:zone1 sa-resource-group
    

    # clresourcegroup set -n node1:zone1,node2:zone1,node3:zone1 resource-group-1
    
  15. Bring both resource groups online on the global-cluster non-voting nodes.


    # clresourcegroup switch -n node1:zone1 sa-resource-group
    

    # clresourcegroup switch -n node1:zone1,node2:zone1,node3:zone1 resource-group-1
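
    At this point, both resource groups should be mastered only by the global-cluster non-voting nodes. A final check such as the following (a verification sketch that uses this procedure's example names) confirms the migration.

    # clresourcegroup status sa-resource-group resource-group-1
    # clresource status -g sa-resource-group,resource-group-1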