Sun Cluster 2.2 System Administration Guide

Adding and Removing Cluster Nodes

When you add or remove cluster nodes, you must reconfigure the Sun Cluster software. When you installed the cluster originally, you specified the number of "active" nodes and the number of "potential" nodes in the cluster, using the scinstall(1M) command. Use the procedures in this section to add "potential" nodes and to remove "active" nodes.

To add nodes that were not already specified as potential nodes, you must halt and reconfigure the entire cluster.

How to Add a Cluster Node

Use this procedure only for nodes that were already specified as "potential" during initial installation.

  1. Use the scinstall(1M) command to install Sun Cluster 2.2 on the node you are adding.

    Use the installation procedures described in the Sun Cluster 2.2 Software Installation Guide, but note the following when responding to scinstall(1M) prompts:

    • When asked for the number of active nodes, include the node you are adding now in the total.

    • You will not be prompted for shared Cluster Configuration Database (CCD) information, since the resulting cluster will have more than two nodes.

    • (VxVM with direct attached devices only) When prompted for the node lock port, provide the designated node lock device and port.

    • (VxVM only) Do not select a quorum device when prompted. Instead, select complex mode and then N. Later, you will run the scconf -q command to configure the quorum device.

    • (VxVM only) Select ask when prompted to choose a cluster partitioning behavior.

  2. (Scalable Coherent Interface [SCI] only) Update the sm_config template file to include information about the new node.

    This step is not necessary for Ethernet configurations.

    Nodes that were specified as "potential" during initial installation should have been included in the sm_config file with their host names commented out by the characters _%. Uncomment the name of the node you will be activating now. Make sure the configuration information in the file matches the physical layout of the node.
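
    For example, assuming the new node is named phys-hahost3 (an illustrative name), change a commented entry such as:


    _% phys-hahost3


    so that the line reads:


    phys-hahost3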

  3. (SCI only) Run sm_config.
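
    The exact invocation depends on your SCI installation; as a sketch, assuming the template file you edited in the previous step is named template.sc (see the sm_config man page for the authoritative syntax):


    # sm_config -f template.sc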

  4. (VxVM only) Set up the root disk group.

    For details, see the VxVM appendix in the Sun Cluster 2.2 Software Installation Guide.
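
    As a minimal sketch, the root disk group (rootdg) is typically initialized on the new node by running the VxVM installation utility and answering its prompts; the appendix remains the authoritative procedure:


    # vxinstall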

  5. (SDS only) Set up Solstice DiskSuite disksets.

    For details, see the Solstice DiskSuite appendix in the Sun Cluster 2.2 Software Installation Guide.
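
    As a sketch, assuming an existing diskset named hahost1 and a new node named phys-hahost3 (illustrative names), you would add the new node as a diskset host by running metaset from a node that currently masters the diskset:


    # metaset -s hahost1 -a -h phys-hahost3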

  6. If you have a direct attached device connected to all nodes, set up the direct attached disk flag on the new node.

    To set the direct attached flag correctly in the cdb files of all nodes, run the following command on all nodes in the cluster. In this example, the cluster name is sc-cluster:


    # scconf sc-cluster +D
    

  7. (VxVM only) Select a common quorum device.

    If your volume manager is VxVM and you have a direct attached device connected to all nodes, run the following command on all nodes and select a common quorum device.


    # scconf sc-cluster -q -D
    

    If you do not have a direct attached disk connected to all nodes, run the following command for each pair of nodes that shares a quorum device with the new node.


    # scconf sc-cluster -q
    

  8. (VxVM only) Set the node lock port on the new node.

    If you just installed a direct attached disk, set the node lock port on all nodes.

    If the cluster contained a direct attached disk already, run the following command on only the new node. In this example, the cluster name is sc-cluster and the Terminal Concentrator is cluster-tc.


    # scconf sc-cluster -t cluster-tc -l port_number
    

  9. Stop the cluster.
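
    For example, you can stop cluster activity by running the following command on each active node:


    # scadmin stopnode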

  10. Run the scconf -A command on all nodes to update the number of active nodes.

    See the scconf(1M) man page for details. In this example, the cluster name is sc-cluster and the new total number of active nodes is three.


    # scconf sc-cluster -A 3
    

  11. (VxVM only) Remove the shared CCD if it exists, since it is necessary only with two-node clusters.

    Run the following command on all nodes.


    # scconf sc-cluster -S none
    

  12. Use ftp (in binary mode) to copy the cdb file from an existing node to the new node.

    The cdb file normally resides at /etc/opt/SUNWcluster/conf/clustername.cdb.
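
    For example, you might run the following from the new node; the node name phys-hahost1 and cluster name sc-cluster are illustrative:


    # ftp phys-hahost1
    ftp> binary
    ftp> get /etc/opt/SUNWcluster/conf/sc-cluster.cdb /etc/opt/SUNWcluster/conf/sc-cluster.cdb
    ftp> bye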

  13. Reboot the new node.

  14. Start the cluster.

    Run the following command from one node only. The first argument is the host name of the node from which you run the command; in this example, it is phys-hahost1.


    # scadmin startcluster phys-hahost1 sc-cluster
    

    Then run the following command on all other nodes.


    # scadmin startnode
    

How to Remove a Cluster Node

The scconf(1M) command enables you to remove nodes by decrementing the number of active nodes that you specified when you installed the cluster software with the scinstall(1M) command. For this procedure, you must run the scconf(1M) command on all nodes in the cluster.

  1. For an HA configuration, switch over all logical hosts currently mastered by the node to be removed.

    For parallel database configurations, skip this step.

    In this example, the logical host hahost1, currently mastered by the node being removed, is switched over to phys-hahost3.


    # haswitch phys-hahost3 hahost1
    

  2. Run the scconf -A command to exclude the node.

    Run the scconf(1M) command on all cluster nodes. See the scconf(1M) man page for details.


    Note -

    In this command, the number you specify does not represent a node number. Instead, this number represents the total number of cluster nodes that will be active after the scconf operation. The scconf operation always removes from the cluster the node with the highest node number. There is no procedure to remove, for example, node number 2 from a three-node cluster.


    In this example, the cluster name is sc-cluster and the total number of active nodes after the scconf operation is two.


    # scconf sc-cluster -A 2