Sun Cluster 3.0 System Administration Guide

Chapter 6 Administering the Cluster

This chapter provides the procedures for administering items that affect the entire cluster.

Use the task maps in this chapter (Tables 6-1 through 6-3) to locate these procedures.

6.1 Administering the Cluster Overview

Table 6-1 Task Map: Administering the Cluster

Task: Change the name of the cluster.
For instructions, go to "6.1.1 How to Change the Cluster Name".

Task: List node IDs and their corresponding node names.
For instructions, go to "6.1.2 How to Map Node ID to Node Name".

Task: Permit or deny new nodes to add themselves to the cluster.
For instructions, go to "6.1.3 How to Work With New Cluster Node Authentication".

Task: Change the time for a cluster using the Network Time Protocol (NTP).
For instructions, go to "6.1.4 How to Reset the Time of Day in a Cluster".

Task: Bring down a node and enter the OpenBoot PROM.
For instructions, go to "6.1.5 How to Enter the OpenBoot PROM (OBP) on a Node".

6.1.1 How to Change the Cluster Name

If necessary, you can change the cluster name after initial installation.

  1. Become superuser on a node in the cluster.

  2. Enter the scsetup(1M) utility.


    # scsetup
    

    The Main Menu appears.

  3. To change the cluster name, enter 6 (Other cluster properties).

    The Other Cluster Properties menu appears.

  4. Make your selection from the menu and follow the onscreen instructions.

6.1.1.1 Example--Changing the Cluster Name

The following example shows the scconf(1M) command generated from the scsetup utility to change to the new cluster name, dromedary.


# scconf -c -C cluster=dromedary
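
To verify the new name, you can list the cluster configuration. This check is optional and not part of the documented procedure; the grep pattern assumes the "Cluster name" label that scconf -p prints.


# scconf -p | grep "Cluster name"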

6.1.2 How to Map Node ID to Node Name

During Sun Cluster installation, each node is automatically assigned a unique node ID number. The node ID number is assigned to a node in the order in which it joins the cluster for the first time; once assigned, the number cannot be changed. The node ID number is often used in error messages to identify which cluster node the message concerns. Use this procedure to determine the mapping between node IDs and node names.

You do not need to be superuser to list configuration information.

  1. Use scconf(1M) to list the cluster configuration information.


    % scconf -pv | grep "Node ID"
    

6.1.2.1 Example--Mapping the Node ID to the Node Name

The following example shows the node ID assignments.


% scconf -pv | grep "Node ID"
	(phys-schost-1) Node ID:     1
	(phys-schost-2) Node ID:     2
	(phys-schost-3) Node ID:     3

6.1.3 How to Work With New Cluster Node Authentication

Sun Cluster enables you to determine whether new nodes can add themselves to the cluster, and with what type of authentication. You can permit any new node to join the cluster over the public network, prevent new nodes from joining the cluster, or specify a particular node that can join. New nodes can be authenticated by using either standard UNIX or Diffie-Hellman (DES) authentication. If you select DES authentication, you must also configure all necessary encryption keys before a node can join. See the keyserv(1M) and publickey(4) man pages for more information.
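
Before changing the authentication settings, you might want to check the current setting, and, if you choose DES, prepare the encryption keys. The following sketch is optional; the node name phys-schost-4 is a placeholder, and the newkey(1M) options depend on your name service configuration.


[Check the current authentication setting for new nodes:]
# scconf -p | grep "Cluster new node authentication"
[Create a Diffie-Hellman key pair for the new node (prompts for the network password):]
# newkey -h phys-schost-4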

  1. Become superuser on a node in the cluster.

  2. Enter the scsetup(1M) utility.


    # scsetup
    

    The Main Menu appears.

  3. To work with cluster authentication, enter 5 (New nodes).

    The New Nodes menu appears.

  4. Make your selection from the menu and follow the onscreen instructions.

6.1.3.1 Example--Preventing New Machines From Being Added to the Cluster

The following example shows the scconf(1M) command generated from the scsetup utility that would prevent new machines from being added to the cluster.


# scconf -a -T node=.

6.1.3.2 Example--Permitting All New Machines to Be Added to the Cluster

The following example shows the scconf command generated from the scsetup utility that would enable all new machines to be added to the cluster.


# scconf -r -T all

6.1.3.3 Example--Specifying a New Machine to Be Added to the Cluster

The following example shows the scconf command generated from the scsetup utility to enable a single new machine to be added to the cluster.


# scconf -a -T node=phys-schost-4

6.1.3.4 Example--Setting the Authentication to Standard UNIX

The following example shows the scconf command generated from the scsetup utility to reset to standard UNIX authentication for new nodes joining the cluster.


# scconf -c -T authtype=unix

6.1.3.5 Example--Setting the Authentication to DES

The following example shows the scconf command generated from the scsetup utility to use DES authentication for new nodes joining the cluster.


# scconf -c -T authtype=des

Note -

When using DES authentication, you must also configure all necessary encryption keys before a node can join the cluster. See the keyserv(1M) and publickey(4) man pages for more information.


6.1.4 How to Reset the Time of Day in a Cluster

Sun Cluster uses the Network Time Protocol (NTP) to maintain time synchronization between cluster nodes. Adjustments in the cluster occur automatically as needed when nodes synchronize their time. See the Sun Cluster 3.0 Concepts document and the Network Time Protocol User's Guide for more information.
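
If you suspect the nodes have drifted apart, you can first query the NTP daemon from each node before resorting to the procedure below. This check is optional and assumes the standard Solaris ntpq(1M) utility is installed.


[On each node, list the NTP peers and their offsets:]
# ntpq -p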


Caution -

When using NTP, do not attempt to adjust the cluster time while the cluster is up and running. This includes using the date(1), rdate(1M), or xntpdate(1M) commands interactively or within cron(1M) scripts.


  1. Become superuser on a node in the cluster.

  2. Shut down the cluster.


    # scshutdown -g0 -y
    
  3. Boot each node into non-cluster mode.


    ok boot -x
    
  4. On a single node, set the time of day by running the date(1) command.


    # date HHMMSS
    
  5. On the other machines, synchronize the time to that node by running the rdate(1M) command.


    # rdate hostname
    
  6. Boot each node to restart the cluster.


    # reboot
    
  7. Verify that the change took place on all cluster nodes.

    On each node, run the date(1) command.


    # date
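
6.1.4.1 Example--Resetting the Time of Day in a Cluster

The following hypothetical session sketches the procedure on a two-node cluster. The node name phys-schost-1 and the timestamp (14:30:00, in the HHMMSS form shown above) are placeholders; see the date(1) man page for the formats your system accepts.


[Shut down the cluster from one node:]
# scshutdown -g0 -y
[Boot each node into non-cluster mode:]
ok boot -x
[On one node, set the time of day:]
# date 143000
[On each other node, synchronize to that node:]
# rdate phys-schost-1
[Reboot each node to restart the cluster, then verify the time:]
# reboot
# date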
    

6.1.5 How to Enter the OpenBoot PROM (OBP) on a Node

Use this procedure if you need to configure or change OpenBoot PROM settings.

  1. Connect to the terminal concentrator port.


    # telnet tc_name tc_port_number
    
    tc_name

    Specifies the name of the terminal concentrator.

    tc_port_number

    Specifies the port number on the terminal concentrator. Port numbers are configuration dependent. Typically, ports 2 and 3 (5002 and 5003) are used for the first cluster installed at a site.

  2. Shut down the cluster node gracefully. Use the scswitch(1M) command to evacuate any resource groups and disk device groups from the node, then use shutdown(1M) to bring the node to the OBP prompt.


    # scswitch -S -h node
    # shutdown -g 0 -y 
    
  3. Send a break to the node.


    telnet> send brk
    
  4. Execute the OpenBoot PROM commands.
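
6.1.5.1 Example--Entering the OpenBoot PROM on a Node

The following hypothetical session sketches the procedure. The terminal concentrator name tc0, port 5002, and node phys-schost-1 are placeholders for your configuration.


[Connect to the terminal concentrator port for the node:]
# telnet tc0 5002
[Evacuate resource groups and disk device groups, then shut down to the OBP prompt:]
# scswitch -S -h phys-schost-1
# shutdown -g 0 -y
[Send a break to the node to reach the ok prompt:]
telnet> send brk
ok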

6.2 Adding a Cluster Node

The following table lists the tasks to perform when adding a node to an existing cluster.

Table 6-2 Task Map: Adding a Node

Task: Add the cluster interconnects to the new node.
   - Install the host adapter, add the transport junction, cable the interconnect.
For instructions, go to Sun Cluster 3.0 Hardware Guide, "Adding and Replacing Cluster Interconnect and Public Network Hardware".

Task: Add shared storage.
For instructions, go to Sun Cluster 3.0 Hardware Guide:
   - "Installing and Replacing the StorEdge MultiPack Enclosure"
   - "Installing and Replacing the StorEdge D1000 Disk Array"
   - "Installing and Replacing the StorEdge A5x00 Disk Array"

Task: Add the node to the authorized node list.
   - Use scsetup.
For instructions, go to Sun Cluster 3.0 System Administration Guide, "6.2.1 How to Add a Cluster Node to the Authorized Node List".

Task: Install and configure the software on the new cluster node.
   - Install the Solaris Operating Environment and Sun Cluster software.
   - Configure the node as part of the cluster.
For instructions, go to Sun Cluster 3.0 Installation Guide, "Installing and Configuring Sun Cluster Software".

6.2.1 How to Add a Cluster Node to the Authorized Node List

Before adding a machine to an existing cluster, be sure the node has all of the necessary software and hardware correctly installed and configured, including a good physical connection to the private cluster interconnect, as indicated in the "Adding a Node" task map. Refer to the Sun Cluster 3.0 Installation Guide and the scinstall(1M) man page for more information regarding software installations. For hardware installations, refer to the Sun Cluster 3.0 Hardware Guide or the hardware documentation that shipped with your server.
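
Before you begin, you can confirm from an existing member that the cluster nodes and the private interconnect are healthy. This check is optional; see the scstat(1M) man page for the options available in your release.


[Verify cluster membership and transport paths:]
# scstat -n
# scstat -W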

  1. Become superuser on a current cluster member node.

  2. Execute the scsetup utility.


    # scsetup
    

    The Main Menu appears.

  3. Access the New Nodes Menu option by entering 5 at the Main Menu.

  4. Modify the authorized list by entering 3 (Specify the name of the machine) at the New Nodes Menu.

  5. Specify the name of a machine that can add itself.

    Follow the prompts to add the cluster node. You will be asked for the name of the node to be added.

  6. Verify that the node has been added to the authorized list.


    # scconf -p | grep "Cluster new node"
    

6.2.1.1 Example--Adding a Cluster Node

The following example shows how to add a node named phys-schost-3 to an existing cluster.


[Become superuser.]
[Execute the scsetup utility.]
# scsetup
*** Main Menu ***
    Please select from one of the following options:
    Option:  5
*** New Nodes Menu ***
    Please select from one of the following options:
      ...
      3) Specify the name of a machine which may add itself
      ...
    Option:  3
>>> Specify a Machine which may Install itself into the Cluster <<<
    ...
    Is it okay to continue (yes/no) [yes]? <Return>
    Name of the host to add to the list of recognized machines?  phys-schost-3
    Is it okay to proceed with the update (yes/no) [yes]? <Return>
 
scconf -a -T node=phys-schost-3
 
    Command completed successfully.
[Quit the scsetup New Nodes Menu and Main Menu:]
    ...
    Option:  q
[Verify the node has been added.]
# scconf -p | grep "Cluster new"
	Cluster new node authentication:      unix
	Cluster new node list:                phys-schost-3

6.2.1.2 Where to Go From Here

Sun Cluster 3.0 Installation Guide: Installing and Configuring Sun Cluster Software.

6.3 Removing a Cluster Node

The following table lists the tasks to perform when removing a node from an existing cluster.

Table 6-3 Task Map: Removing a Cluster Node

Task: Place the node being removed into maintenance state.
   - Use shutdown and scconf.
For instructions, go to Sun Cluster 3.0 System Administration Guide, Chapter 4, Administering Quorum, "How to Put a Cluster Node Into Maintenance State".

Task: Remove the node from all resource groups.
   - Use scrgadm.
For instructions, go to Sun Cluster 3.0 Data Services Installation and Configuration Guide, Chapter 9, Administering Data Service Resources, "How to Remove a Node from an Existing Resource Group".

Task: Remove the node from all device groups of which it is a member.
   - Use volume manager commands.
For instructions, go to Sun Cluster 3.0 System Administration Guide, Chapter 3, Administering Global Devices and Cluster File Systems:
   - "How to Remove a Node from a Disk Device Group (SDS)"
   - "How to Remove a Node from a Disk Device Group (VxVM)"

Task: Remove all logical transport connections to the node being removed.
   - Use scsetup.
For instructions, go to Sun Cluster 3.0 System Administration Guide, Chapter 5, Administering Cluster Networks, "How to Remove Cluster Transport Cables and Transport Adapters".
To remove the physical hardware from the node, see Sun Cluster 3.0 Hardware Guide, Chapter 3, Installing and Maintaining Cluster Interconnect and Public Network Hardware.

Task: Remove all quorum devices shared with the node being removed.
   - Use scsetup.
For instructions, go to Sun Cluster 3.0 System Administration Guide, Chapter 4, Administering Quorum, "How to Remove a Quorum Device".

Task: Remove the node from the cluster software configuration.
   - Use scconf.
For instructions, go to Sun Cluster 3.0 System Administration Guide, Chapter 6, Administering the Cluster, "6.3.1 How to Remove a Node From the Cluster Software Configuration".

Task: Remove required shared storage from the node and cluster.
   - Follow the procedures in your volume manager documentation and hardware guide.
For instructions, go to the SDS or VxVM administration guide, and to Sun Cluster 3.0 Hardware Guide:
   - "How to Remove a StorEdge MultiPack Enclosure"
   - "How to Remove a StorEdge D1000 Disk Array"
   - "How to Remove a StorEdge A5x00 Disk Array"

6.3.1 How to Remove a Node From the Cluster Software Configuration

This is the last software configuration procedure in the process of removing a node from a cluster. Before beginning, you must complete all the prerequisite tasks listed in the "Removing a Cluster Node" task map. When you finish this procedure, remove the hardware connections as described in the Sun Cluster 3.0 Hardware Guide.
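
Because this procedure assumes those prerequisites are complete, it can help to double-check the node's state first. The following optional checks use scstat(1M); the node name phys-schost-2 is a placeholder.


[Confirm the node holds no quorum votes (maintenance state):]
# scstat -q
[Confirm the node no longer appears in any resource group:]
# scstat -g | grep phys-schost-2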

  1. Become superuser on a node in the cluster.


    Note -

    Be sure you have placed the node in maintenance state and removed it from all resource groups, device groups, and quorum device configurations before continuing with this procedure.


  2. Determine the boot disks in the cluster.


    # scconf -pvv | grep Local_Disk	
    
  3. Identify which boot disks in the cluster are connected to the node being removed.


    # scdidadm -L did-name
    
  4. Remove the localonly flag from each boot disk connected to the node being removed.


    # scconf -c -D name=devicename,localonly=false
    

  5. Remove the node from each raw disk device group of which it is a member.

    This step must be completed for each disk device group.


    # scconf -pvv | grep nodename | grep Device	
    # scconf -r -D name=devicename,nodelist=nodename
    
  6. Remove the node from the cluster.


    # scconf -r -h node=nodename
    
  7. Verify the node removal using scstat.


    # scstat -n
    

Note -

After the node has been removed from the cluster, you must reinstall the Solaris operating environment on the removed host before it can be placed back into service in any capacity.


6.3.1.1 Example--Removing a Cluster Node

This example shows how to remove a node (phys-schost-2) from a cluster.


[Become superuser on any node.]
[Determine the boot disks on the node:]
# scconf -pvv | grep Local_Disk
	(dsk/d4) Device group type:          Local_Disk
	(dsk/d3) Device group type:          Local_Disk
# scdidadm -L d4
  ...
  4        phys-schost-2:/dev/rdsk/c1t3d0 /dev/did/rdsk/d4
[Remove the localonly flag:]
# scconf -c -D name=dsk/d4,localonly=false
[Remove the node from all raw disk device groups:]
# scconf -pvv | grep phys-schost-2 | grep Device
	(dsk/d4) Device group node list:  phys-schost-2
	(dsk/d2) Device group node list:  phys-schost-1, phys-schost-2
	(dsk/d1) Device group node list:  phys-schost-1, phys-schost-2
# scconf -r -D name=dsk/d4,nodelist=phys-schost-2
# scconf -r -D name=dsk/d2,nodelist=phys-schost-2
# scconf -r -D name=dsk/d1,nodelist=phys-schost-2
[Remove the node from the cluster:]
# scconf -r -h node=phys-schost-2
[Verify node removal:]
# scstat -n
 
-- Cluster Nodes --
 
                    Node name           Status
                    ---------           ------
  Cluster node:     phys-schost-1       Online

6.3.1.2 Where to Go From Here

Sun Cluster 3.0 Hardware Guide:

      How to Remove a StorEdge MultiPack Enclosure

      How to Remove a StorEdge D1000 Disk Array

      How to Remove a StorEdge A5x00 Disk Array