Sun Cluster 3.0 12/01 Release Notes Supplement

Appendix G Scalable Cluster Topology

This appendix provides information and procedures for using the scalable cluster topology. This information supplements the Sun Cluster 3.0 12/01 System Administration Guide. Certain procedures have been updated and included here to accommodate this new Sun Cluster 3.x topology.

This appendix contains new information on the following topics:

  • Overview of Scalable Topology

  • Adding or Removing a Cluster Node

Overview of Scalable Topology

The scalable cluster topology allows connectivity of up to four nodes to a single storage array. Note the following considerations for this topology at this time:

Adding or Removing a Cluster Node

The following information and procedures supplement procedures in the Sun Cluster 3.0 12/01 System Administration Guide.

Adding a Cluster Node

The scalable topology does not introduce any changes to the standard procedure for adding cluster nodes. See the Sun Cluster 3.0 12/01 System Administration Guide for the procedure for adding a cluster node.

Figure G-1 shows a sample diagram of cabling for four-node connectivity with scalable topology.

Figure G-1 Sample Scalable Topology Cabling, Four-Node Connectivity


Removing a Cluster Node

The following task map is an update to the standard procedure in the Sun Cluster 3.0 12/01 System Administration Guide.

Table G-1 Task Map: Removing a Cluster Node (5/02)

Task: Move all resource groups and disk device groups off the node to be removed.
    - Use scswitch:

      # scswitch -S -h from-node

Task: Remove the node from all resource groups.
    - Use scrgadm.
For instructions, go to: Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide, the procedure for removing a node from an existing resource group.

Task: Remove the node from all disk device groups.
    - Use scconf, metaset, and scsetup.
For instructions, go to: Sun Cluster 3.0 12/01 System Administration Guide, the procedures for removing a node from a disk device group (separate procedures exist for Solstice DiskSuite, VERITAS Volume Manager, and raw disk device groups).

Task: Remove all quorum devices.
    - Use scsetup.
    Caution: Do not remove the quorum device if you are removing a node from a two-node cluster.
For instructions, go to: Sun Cluster 3.0 12/01 System Administration Guide, "How to Remove a Quorum Device."
    Note that although you must remove the quorum device before you remove the storage device in the next step, you can add the quorum device back immediately afterward.

Task: Remove the storage device from the node.
    - Use devfsadm and scdidadm.
For instructions, go to: "How to Remove Connectivity Between an Array and a Single Node, in a Cluster With Greater Than Two-Node Connectivity"

Task: Add the new quorum device (to only the nodes that are intended to remain in the cluster).
    - Use scconf -a -q globaldev=d[n],node=node1,node=node2,...
For instructions, go to: the scconf(1M) man page. A hedged example also follows this table.

Task: Place the node being removed into maintenance state.
    - Use scswitch, shutdown, and scconf.
For instructions, go to: Sun Cluster 3.0 12/01 System Administration Guide, "How to Put a Node Into Maintenance State"

Task: Remove all logical transport connections to the node being removed.
    - Use scsetup.
For instructions, go to: Sun Cluster 3.0 12/01 System Administration Guide, "How to Remove Cluster Transport Cables, Transport Adapters, and Transport Junctions"

Task: Remove the node from the cluster software configuration.
    - Use scconf.
For instructions, go to: Sun Cluster 3.0 12/01 System Administration Guide, "How to Remove a Node From the Cluster Software Configuration"
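
The quorum tasks in the table above reference scsetup and scconf. As a hedged sketch only (the DID device names d20 and d21 and the node names phys-schost-1 through phys-schost-3 are hypothetical; confirm the syntax against the scconf(1M) man page), removing the old quorum device and adding a replacement that is connected to only the remaining nodes might look like this:

# scconf -r -q globaldev=d20
# scconf -a -q globaldev=d21,node=phys-schost-1,node=phys-schost-2,node=phys-schost-3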

How to Remove Connectivity Between an Array and a Single Node, in a Cluster With Greater Than Two-Node Connectivity

Use this procedure to detach a storage array from a single cluster node in a cluster that has three-node or four-node connectivity.

  1. Back up all database tables, data services, and volumes that are associated with the storage array that you are removing.

  2. Determine the resource groups and device groups that are running on the node to be disconnected.


    # scstat
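
    The unqualified scstat output covers every cluster subsystem. To focus on only the information this step needs, scstat also accepts per-subsystem options; as a sketch, -g limits the output to resource group status and -D to device group status (confirm the options against the scstat(1M) man page):

    # scstat -g
    # scstat -D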
    
  3. If necessary, move all resource groups and device groups off the node to be disconnected.


    Caution -

    If your cluster is running OPS/RAC software, shut down the OPS/RAC database instance that is running on the node before you move the groups off the node. For instructions, see the Oracle Database Administration Guide.



    # scswitch -S -h from-node
    
  4. Put the device groups into maintenance state.

    For the procedure on quiescing I/O activity to VERITAS shared disk groups, see your VERITAS Volume Manager documentation.

    For the procedure on putting a device group in maintenance state, see the Sun Cluster 3.0 12/01 System Administration Guide.
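
    As a sketch only (the device group name nfs-dg is hypothetical, and the option should be confirmed against the scswitch(1M) man page for your release), a device group is typically placed in maintenance state with the -m form of scswitch:

    # scswitch -m -D nfs-dg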

  5. Remove the node from the device groups. A hedged sketch of both command forms follows this list.

    • If you use VERITAS Volume Manager or raw disk device groups, use the scconf command to remove the node from the device group.

    • If you use Solstice DiskSuite, use the metaset command to remove the node from the disk set.
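
    A sketch only, using hypothetical names (a device group dg-schost-1, a Solstice DiskSuite disk set nfs-set, and a departing node phys-schost-3); confirm the exact syntax against the scconf(1M) and metaset(1M) man pages for your release:

    # scconf -r -D name=dg-schost-1,nodelist=phys-schost-3
    # metaset -s nfs-set -d -h phys-schost-3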

  6. If the cluster is running HAStorage or HAStoragePlus, remove the node from the resource group's nodelist.


    # scrgadm -c -g resource-group -h nodelist
    

    See the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide for more information on changing a resource group's nodelist.
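
    As a concrete but hypothetical sketch, if a resource group named nfs-rg currently lists phys-schost-1, phys-schost-2, and phys-schost-3, and phys-schost-3 is the node being disconnected, the node list could be rewritten with the explicit Nodelist property:

    # scrgadm -c -g nfs-rg -y Nodelist=phys-schost-1,phys-schost-2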

  7. If the storage array that you are removing is the last storage array connected to the node, disconnect the fiber-optic cable between the node and the hub or switch that is connected to this storage array. Otherwise, skip this step.

  8. Do you want to remove the host adapter from the node you are disconnecting?

    • If yes, shut down and power off the node.

    • If no, skip to Step 11.

  9. Remove the host adapter from the node.

    For the procedure on removing host adapters, see the documentation that shipped with your node.

  10. Without allowing the node to boot, power on the node.

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  11. Boot the node into non-cluster mode.


    ok boot -x 
    

    Caution -

    The node must be in non-cluster mode before you remove the OPS/RAC software in the next step, or the node will panic and potentially cause a loss of data availability.


  12. If OPS/RAC software has been installed, remove the OPS/RAC software package from the node that you are disconnecting.


    # pkgrm SUNWscucm 
    

    Caution -

    If you do not remove the OPS/RAC software from the node that you disconnected, the node will panic when it is reintroduced to the cluster, potentially causing a loss of data availability.
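
    To confirm that the package is no longer installed after the pkgrm, pkginfo can be queried; it reports an error for a package that is not on the system (SUNWscucm is the package named in this step):

    # pkginfo SUNWscucm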


  13. Boot the node into cluster mode.


    ok boot
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  14. On the node, update the device namespace by updating the /devices and /dev entries.


    # devfsadm -C 
    # scdidadm -C
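
    To verify that the stale device paths were cleaned up, the DID instances that remain on the node can be listed; this is a sketch, and the option should be confirmed against the scdidadm(1M) man page:

    # scdidadm -l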
    
  15. Bring the device groups back online.

    For procedures on bringing a VERITAS shared disk group online, see your VERITAS Volume Manager documentation.

    For the procedure on bringing a device group online, see the procedure on putting a device group into maintenance state in the Sun Cluster 3.0 12/01 System Administration Guide.
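
    As a hedged sketch (the device group name nfs-dg and node name phys-schost-1 are hypothetical), a device group that was placed in maintenance state earlier in this procedure can be brought back online by switching it onto one of its remaining primary nodes:

    # scswitch -z -D nfs-dg -h phys-schost-1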