This chapter provides new system administration information that has been added to the Sun Cluster 3.0 5/02 update release. This information supplements the Sun Cluster 3.0 12/01 System Administration Guide.
This chapter contains new information for the following topics.
The following information applies to this update release and all subsequent updates.
The following two items were added to this section in the Sun Cluster 3.0 5/02 update release and apply to this update and all subsequent updates to Sun Cluster 3.0 software.
VxVM does not support the chmod command. To change global device permissions in VxVM, consult the VxVM administrator's guide.
Sun Cluster 3.0 software does not support VxVM Dynamic Multipathing (DMP) to manage multiple paths from the same node.
Use this procedure to remove a cluster node from all disk device groups that list the node in their lists of potential primaries.
Become superuser on the node you want to remove as a potential primary of all disk device groups.
Determine which disk device groups under volume manager control include the node to be removed as a member.
Look for the node name in the Device group node list for each disk device group.
# scconf -p | grep "Device group"
Are any of the disk device groups identified in Step 2 of the device group type SDS?
If yes, perform the procedures in "How to Remove a Node From a Disk Device Group (Solstice DiskSuite) (5/02)".
If no, go to Step 4.
Are any of the disk device groups identified in Step 2 of the device group type VxVM?
If yes, perform the procedures in "How to Remove a Node From a Disk Device Group (VERITAS Volume Manager) (5/02)".
If no, go to Step 5.
Determine the raw disk device groups of which the node to be removed is a member.
Note that the following command contains two "v"s in -pvv. The second "v" is needed to display raw disk device groups.
# scconf -pvv | grep "Device group"
Are any of the disk device groups listed in Step 5 of the device group types Disk, Local_Disk, or both?
If yes, perform the procedures in "How to Remove a Node From a Raw Disk Device Group (5/02)".
If no, go to Step 7.
Verify that the node has been removed from the potential primaries list of all disk device groups.
The command returns nothing if the node is no longer listed as a potential primary of any disk device group.
# scconf -pvv | grep "Device group" | grep nodename
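For scripted checks, this final verification can be wrapped in a small test. The sketch below runs against sample text standing in for `scconf -pvv` output rather than a live cluster; the node and group names are illustrative.

```shell
#!/bin/sh
# Sketch: test whether a node still appears in any "Device group node list".
# The sample text stands in for live `scconf -pvv` output; node and group
# names are illustrative.
node=phys-schost-2
sample='(dsk/d2) Device group node list:  phys-schost-1, phys-schost-2
(dsk/d1) Device group node list:  phys-schost-1'

remaining=$(printf '%s\n' "$sample" | grep "Device group node list" | grep "$node")

if [ -z "$remaining" ]; then
    echo "OK: $node is not a potential primary of any disk device group"
else
    echo "still listed:"
    printf '%s\n' "$remaining"
fi
```

On a live cluster, replace the sample variable with the real command output before trusting the result.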
Use this procedure to remove a cluster node from the list of potential primaries of a Solstice DiskSuite disk device group. A node can belong to more than one disk device group at a time, so repeat the metaset command for each disk device group from which you want to remove the node.
Determine the Solstice DiskSuite disk device group(s) of which the node to be removed is a member.
Device group type SDS indicates a Solstice DiskSuite disk device group.
# scconf -p | grep Device
Become superuser on the node that currently owns the disk device group you want to modify.
Delete the node's hostname from the disk device group.
# metaset -s setname -d -h nodelist

-s setname     Specifies the disk device group name
-d             Deletes from the disk device group the nodes identified with -h
-h nodelist    Removes the node from the list of nodes that can master the disk device group
The update can take several minutes to complete.
If the command fails, add the -f (Force) option to the command.
# metaset -s setname -d -f -h nodelist
Repeat Step 3 for each disk device group from which the node is being removed as a potential primary.
Verify that the node has been removed from the disk device group.
The disk device group name will match the diskset name specified with metaset.
# scstat -D
The following example shows the removal of the host name phys-schost-2 from a disk device group configuration. This eliminates phys-schost-2 as a potential primary for the designated disk device group. Verify removal of the node by running the scstat -D command and by checking that the removed node is no longer displayed in the screen text.
[Determine the Solstice DiskSuite disk device group(s) for the node:]
# scconf -p | grep Device
  Device group name:                 dg-schost-1
  Device group type:                 SDS
  Device group failback enabled:     no
  Device group node list:            phys-schost-1, phys-schost-2
  Device group ordered node list:    yes
  Device group diskset name:         dg-schost-1
[Determine the disk device group(s) for the node:]
# scstat -D
  -- Device Group Servers --
                          Device Group    Primary          Secondary
                          ------------    -------          ---------
  Device group servers:   dg-schost-1     phys-schost-1    phys-schost-2
[Become superuser.]
[Remove the host name from all disk device groups:]
# metaset -s dg-schost-1 -d -h phys-schost-2
[Verify removal of the node:]
# scstat -D
  -- Device Group Servers --
                          Device Group    Primary          Secondary
                          ------------    -------          ---------
  Device group servers:   dg-schost-1     phys-schost-1    -
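Because a node can belong to several disksets, the metaset removals can be scripted. The sketch below is a dry run with hypothetical diskset names: it only prints the commands. To execute them for real, drop the echo, run from the node that currently owns each diskset, and retry with -f only when the plain command fails.

```shell
#!/bin/sh
# Dry-run sketch: print the metaset removal command for each diskset the
# node is being removed from. Diskset names here are hypothetical.
node=phys-schost-2
disksets="dg-schost-1 dg-schost-2"

out=$(for set in $disksets; do
    # To run for real, replace the echo with:
    #   metaset -s "$set" -d -h "$node" || metaset -s "$set" -d -f -h "$node"
    echo "metaset -s $set -d -h $node"
done)
printf '%s\n' "$out"
```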
Use this procedure to remove a cluster node from the list of potential primaries of a VERITAS Volume Manager (VxVM) disk device group (disk group).
Determine the VxVM disk device group(s) of which the node to be removed is a member.
The device group type VxVM indicates a VxVM disk device group.
# scconf -p | grep Device
Become superuser on a current cluster member node.
Execute the scsetup utility.
# scsetup
The Main Menu is displayed.
To reconfigure a disk device group, type 4 (Device groups and volumes).
To remove the node from the VxVM disk device group, type 5 (Remove a node from a VxVM device group).
Follow the prompts to remove the cluster node from the disk device group. You will be asked for information about the following:
VxVM device group
Node name
Verify that the node has been removed from the VxVM disk device group(s).
# scconf -p | grep Device
This example shows removal of the node named phys-schost-1 from the dg1 VxVM disk device group.
[Determine the VxVM disk device group for the node:]
# scconf -p | grep Device
  Device group name:                dg1
  Device group type:                VxVM
  Device group failback enabled:    no
  Device group node list:           phys-schost-1, phys-schost-2
  Device group diskset name:        dg1
[Become superuser and execute the scsetup utility:]
# scsetup
  Select Device groups and volumes>Remove a node from a VxVM device group.
Answer the questions when prompted.
You Will Need:               Example:
  VxVM device group name       dg1
  node names                   phys-schost-1
[Verify that the scconf command executed properly:]
scconf -r -D name=dg1,nodelist=phys-schost-1
    Command completed successfully.
Quit the scsetup Device Groups Menu and Main Menu.
[Verify that the node was removed:]
# scconf -p | grep Device
  Device group name:                dg1
  Device group type:                VxVM
  Device group failback enabled:    no
  Device group node list:           phys-schost-2
  Device group diskset name:        dg1
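As the example output shows, the scsetup menus ultimately run scconf -r, so the removal can also be done non-interactively. The sketch below extracts the group name from sample text standing in for `scconf -p | grep Device` output and prints the equivalent command; the sample text and names are illustrative.

```shell
#!/bin/sh
# Sketch: build the non-interactive equivalent of the scsetup removal.
# The sample text stands in for `scconf -p | grep Device` output on a
# live cluster; names are illustrative.
node=phys-schost-1
sample='Device group name:               dg1
Device group type:               VxVM
Device group node list:          phys-schost-1, phys-schost-2'

group=$(printf '%s\n' "$sample" | awk '/Device group name:/ {print $NF}')
cmd="scconf -r -D name=$group,nodelist=$node"
echo "$cmd"    # print only; run the command itself on a live cluster
```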
Use this procedure to remove a cluster node from the list of potential primaries of a raw disk device group.
Become superuser on a node in the cluster other than the node to remove.
Identify the disk device groups that are connected to the node being removed.
Look for the node name in the Device group node list entry.
# scconf -pvv | grep nodename | grep "Device group node list"
Determine which disk device groups identified in Step 2 are raw disk device groups.
Raw disk device groups are of the Disk or Local_Disk device group type.
# scconf -pvv | grep "group type"
Disable the localonly property of each Local_Disk raw disk device group.
# scconf -c -D name=rawdisk-device-group,localonly=false
See the scconf_dg_rawdisk(1M) man page for more information about the localonly property.
Verify that you have disabled the localonly property of all raw disk device groups that are connected to the node being removed.
The Disk device group type indicates that the localonly property is disabled for that raw disk device group.
# scconf -pvv | grep "group type"
Remove the node from all raw disk device groups identified in Step 2.
You must complete this step for each raw disk device group that is connected to the node being removed.
# scconf -r -D name=rawdisk-device-group,nodelist=nodename
This example shows how to remove a node (phys-schost-2) from a raw disk device group. All commands are run from another node of the cluster (phys-schost-1).
[Identify the disk device groups connected to the node being removed:]
phys-schost-1# scconf -pvv | grep phys-schost-2 | grep "Device group node list"
    (dsk/d4) Device group node list:  phys-schost-2
    (dsk/d2) Device group node list:  phys-schost-1, phys-schost-2
    (dsk/d1) Device group node list:  phys-schost-1, phys-schost-2
[Identify the raw disk device groups:]
phys-schost-1# scconf -pvv | grep "group type"
    (dsk/d4) Device group type:  Local_Disk
    (dsk/d8) Device group type:  Local_Disk
[Disable the localonly flag for each local disk on the node:]
phys-schost-1# scconf -c -D name=dsk/d4,localonly=false
[Verify that the localonly flag is disabled:]
phys-schost-1# scconf -pvv | grep "group type"
    (dsk/d4) Device group type:  Disk
    (dsk/d8) Device group type:  Local_Disk
[Remove the node from all raw disk device groups:]
phys-schost-1# scconf -r -D name=dsk/d4,nodelist=phys-schost-2
phys-schost-1# scconf -r -D name=dsk/d2,nodelist=phys-schost-2
phys-schost-1# scconf -r -D name=dsk/d1,nodelist=phys-schost-2
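The cleanup shown in the example can be scripted as a dry run. The sample text below stands in for `scconf -pvv` output and the names are illustrative; for simplicity the sketch prints a localonly=false command for every matched group, although it is only needed for Local_Disk groups. Drop the echo statements to execute the commands.

```shell
#!/bin/sh
# Dry-run sketch of the raw disk device group cleanup. The sample text
# stands in for `scconf -pvv` output; names are illustrative. The
# localonly=false command is printed for every matched group for
# simplicity, though it applies only to Local_Disk groups.
node=phys-schost-2
sample='(dsk/d4) Device group node list:  phys-schost-2
(dsk/d2) Device group node list:  phys-schost-1, phys-schost-2
(dsk/d1) Device group node list:  phys-schost-1, phys-schost-2'

# Extract the device group names (e.g. dsk/d4) for lines listing the node.
groups=$(printf '%s\n' "$sample" | grep "$node" | \
    sed -n 's/^(\(dsk\/d[0-9]*\)).*/\1/p')

out=$(for g in $groups; do
    echo "scconf -c -D name=$g,localonly=false"
    echo "scconf -r -D name=$g,nodelist=$node"
done)
printf '%s\n' "$out"
```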
The following procedure was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
If you intend to create more than three disksets in the cluster, perform the following steps before you create the disksets. Follow these steps regardless of whether you are installing disksets for the first time or you are adding more disksets to a fully configured cluster.
Ensure that the value of the md_nsets variable is set high enough to accommodate the total number of disksets you intend to create in the cluster.
On any node of the cluster, check the value of the md_nsets variable in the /kernel/drv/md.conf file.
If the total number of disksets in the cluster will be greater than the existing value of md_nsets minus one, increase the value of md_nsets on each node to the desired value.
The maximum permissible number of disksets is one less than the value of md_nsets. The maximum possible value of md_nsets is 32.
Ensure that the /kernel/drv/md.conf file is identical on each node of the cluster.
Failure to follow this guideline can result in serious Solstice DiskSuite errors and possible loss of data.
From one node, shut down the cluster.
# scshutdown -g0 -y
Reboot each node of the cluster.
ok boot
On each node in the cluster, run the devfsadm(1M) command.
You can run this command on all nodes in the cluster at the same time.
From one node of the cluster, run the scgdevs(1M) command.
On each node, verify that the scgdevs command has completed before you attempt to create any disksets.
The scgdevs command calls itself remotely on all nodes, even when the command is run from just one node. To determine whether the scgdevs command has completed processing, run the following command on each node of the cluster.
% ps -ef | grep scgdevs
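The two checks in this procedure, that md_nsets is set high enough and that scgdevs has finished, can be sketched as a short script. The md.conf fragment and target diskset count below are illustrative; on a cluster node you would read /kernel/drv/md.conf itself.

```shell
#!/bin/sh
# Sketch: pre-diskset-creation checks. The md.conf line and target count
# are illustrative; read /kernel/drv/md.conf on a real node instead.
wanted=5                                           # total disksets planned
mdconf='name="md" parent="pseudo" nmd=128 md_nsets=4;'

current=$(printf '%s\n' "$mdconf" | sed -n 's/.*md_nsets=\([0-9]*\).*/\1/p')

# The maximum number of disksets is md_nsets minus one (md_nsets <= 32).
if [ "$wanted" -gt $((current - 1)) ]; then
    msg="increase md_nsets to at least $((wanted + 1)) on every node"
else
    msg="md_nsets=$current is sufficient"
fi
echo "$msg"

# Wait until no scgdevs process remains on this node before creating disksets.
while pgrep scgdevs >/dev/null 2>&1; do
    sleep 5
done
```

Remember that after changing md_nsets, the file must be identical on every node and the cluster must be shut down and rebooted as described above.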
The following information applies to this update release and all subsequent updates.
The following information was introduced in the Sun Cluster 3.0 12/01 update release and applies to that release and all subsequent updates to Sun Cluster 3.0 software.
The following VxFS features are not supported in a Sun Cluster 3.0 configuration.
Quick I/O
Snapshots
Storage checkpoints
Cache advisories (these can be used, but the effect will be observed on the given node only)
VERITAS CFS (requires VERITAS cluster feature & VCS)
VxFS-specific mount options
convosync (Convert O_SYNC)
mincache
qlog, delaylog, tmplog
All other VxFS features and options that are supported in a cluster configuration are supported by Sun Cluster 3.0 software. See VxFS documentation and man pages for details about VxFS options that are or are not supported in a cluster configuration.
The following guidelines for how to use VxFS to create highly available cluster file systems are specific to a Sun Cluster 3.0 configuration.
Create a VxFS file system by following procedures in VxFS documentation.
Globally mount and unmount a VxFS file system from the primary node (the node that masters the disk on which the VxFS file system resides) to ensure that the operation succeeds. A VxFS file system mount or unmount operation that is performed from a secondary node might fail.
Perform all VxFS administration commands from the primary node of the VxFS cluster file system.
The following guidelines for how to administer VxFS cluster file systems are not specific to Sun Cluster 3.0 software. However, they are different from the way you administer UFS cluster file systems.
You can access and administer files on a VxFS cluster file system from any node in the cluster, with the exception of ioctls, which you must issue only from the primary node. If you do not know whether an administration command involves ioctls, issue the command from the primary node.
If a VxFS cluster file system fails over to a secondary node, all standard-system-call operations that were in progress during failover are re-issued transparently on the new primary. However, any ioctl-related operation in progress during the failover will fail. After a VxFS cluster file system failover, check the state of the cluster file system. There might be administrative commands that were issued on the old primary before failover that require corrective measures. See VxFS documentation for more information.
The following note was added to Step 2 of this procedure in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
The newfs(1M) command is only valid for creating new UFS file systems. To create a new VxFS file system, follow procedures provided in your VxFS documentation.
The following information applies to this update release and all subsequent updates.
The following task map was changed in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software. Referenced procedures that are not provided in this task map are located in the Sun Cluster 3.0 12/01 System Administration Guide.
Table 6-1 Task Map: Removing a Cluster Node (5/02)

Task | For Instructions, Go To
---|---
Remove node from all resource groups - Use scrgadm | Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide: See the procedure for how to remove a node from an existing resource group.
Remove node from all disk device groups - Use scconf, metaset, and scsetup | "How to Remove a Node From All Disk Device Groups (5/02)"
Place node being removed into maintenance state - Use scswitch, shutdown, and scconf | "How to Put a Node Into Maintenance State"
Remove all logical transport connections to the node being removed - Use scsetup | "How to Remove Cluster Transport Cables, Transport Adapters, and Transport Junctions"
Remove all quorum devices shared with the node being removed - Use scsetup | "How to Remove a Quorum Device" or "How to Remove the Last Quorum Device From a Cluster"
Remove node from the cluster software configuration - Use scconf | "How to Remove a Node From the Cluster Software Configuration (5/02)"
(Optional) Uninstall Sun Cluster software from the removed node - Use scinstall | "How to Uninstall Sun Cluster Software From a Cluster Node (5/02)"
Disconnect required shared storage from the node and cluster - Follow the procedures in your volume manager documentation and hardware guide. To remove the physical hardware from the node, see the Sun Cluster 3.0 12/01 Hardware Guide section on installing and maintaining cluster interconnect and public network hardware. | Solstice DiskSuite or VxVM administration guide; Hardware documentation; Sun Cluster 3.0 12/01 Hardware Guide
The following information was changed in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
Steps to remove a node from a raw disk device group have been removed. Those instructions are now located in the new procedure "How to Remove a Node From a Raw Disk Device Group (5/02)".
After the node is removed from the cluster, you now have the option to uninstall Sun Cluster software from the removed node. To uninstall Sun Cluster software, go to "How to Uninstall Sun Cluster Software From a Cluster Node (5/02)".
The following procedure was added in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
Perform this procedure to uninstall Sun Cluster software from a cluster node before you disconnect it from a fully established cluster configuration. You can use this procedure to uninstall software from the last remaining node of a cluster.
To uninstall Sun Cluster software from a node that has not yet joined the cluster or is still in install mode, do not perform this procedure. Instead, go to "How to Uninstall Sun Cluster Software to Correct Installation Problems" in the Sun Cluster 3.0 12/01 Software Installation Guide.
Be sure you have correctly completed all prerequisite tasks listed in the task map for removing a cluster node.
See "Adding and Removing a Cluster Node" in the Sun Cluster 3.0 12/01 System Administration Guide.
Be sure you have removed the node from all resource groups, device groups, and quorum device configurations, placed it in maintenance state, and removed it from the cluster before you continue with this procedure.
Become superuser on an active cluster member other than the node you will uninstall.
From the active cluster member, add the node you intend to uninstall to the cluster's node authentication list.
# scconf -a -T node=nodename

-a                 Add
-T                 Specifies authentication options
node=nodename      Specifies the name of the node to add to the authentication list
Alternately, you can use the scsetup(1M) utility. See "How to Add a Cluster Node to the Authorized Node List" in the Sun Cluster 3.0 12/01 System Administration Guide for procedures.
Become superuser on the node to uninstall.
Reboot the node into non-cluster mode.
# shutdown -g0 -y -i0
ok boot -x
In the /etc/vfstab file, remove all globally mounted file system entries except the /global/.devices global mounts.
Uninstall Sun Cluster software from the node.
# cd /
# scinstall -r
See the scinstall(1M) man page for more information. If scinstall returns error messages, see "Troubleshooting a Node Uninstallation".
Disconnect the transport cables and the transport junction, if any, from the other cluster devices.
If the uninstalled node is connected to a storage device that uses a parallel SCSI interface, install a SCSI terminator to the open SCSI connector of the storage device after you disconnect the transport cables.
If the uninstalled node is connected to a storage device that uses Fibre Channel interfaces, no termination is necessary.
Follow the documentation that shipped with your host adapter and server for disconnection procedures.
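The vfstab edit earlier in this procedure (remove all globally mounted file system entries except the /global/.devices mounts) can be checked with a short filter before running scinstall -r. The sample vfstab lines below are illustrative; on the node you would read /etc/vfstab itself.

```shell
#!/bin/sh
# Sketch: list vfstab entries that must be removed before `scinstall -r`,
# i.e. global mounts other than the /global/.devices mounts. The sample
# lines are illustrative; read /etc/vfstab on the node itself.
vfstab='/dev/md/dg1/dsk/d100 /dev/md/dg1/rdsk/d100 /global/dg1 ufs 2 yes global,logging
/dev/did/dsk/d1s3 /dev/did/rdsk/d1s3 /global/.devices/node@1 ufs 2 no global
/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0 / ufs 1 no -'

# Field 7 holds the mount options and field 3 the mount point.
to_remove=$(printf '%s\n' "$vfstab" | \
    awk '$7 ~ /global/ && $3 !~ /^\/global\/\.devices/ {print $3}')

echo "entries to remove: $to_remove"
```

If this filter prints anything, scinstall -r will fail with the "unexpected global mounts" error described in the troubleshooting section below.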
This section describes error messages you might receive when you run the scinstall -r command and the corrective actions to take.
The following error messages indicate that the node you removed still has cluster file systems referenced in its vfstab file.
Verifying that no unexpected global mounts remain in /etc/vfstab ... failed
scinstall:  global-mount1 is still configured as a global mount.
scinstall:  global-mount1 is still configured as a global mount.
scinstall:  /global/dg1 is still configured as a global mount.
scinstall:  It is not safe to uninstall with these outstanding errors.
scinstall:  Refer to the documentation for complete uninstall instructions.
scinstall:  Uninstall failed.
To correct this error, return to "How to Uninstall Sun Cluster Software From a Cluster Node (5/02)" and repeat the procedure. Ensure that you successfully complete Step 6 in the procedure before you rerun the scinstall -r command.
The following error messages indicate that the node you removed is still listed with a disk device group.
Verifying that no device services still reference this node ... failed
scinstall:  This node is still configured to host device service "service".
scinstall:  This node is still configured to host device service "service2".
scinstall:  This node is still configured to host device service "service3".
scinstall:  This node is still configured to host device service "dg1".
scinstall:  It is not safe to uninstall with these outstanding errors.
scinstall:  Refer to the documentation for complete uninstall instructions.
scinstall:  Uninstall failed.
To correct this error, perform the following steps.
Attempt to rejoin the node to the cluster.
ok boot
Did the node successfully rejoin the cluster?
If no, proceed to Step 3.
If yes, perform the following steps to remove the node from disk device groups.
If the node successfully rejoins the cluster, remove the node from the remaining disk device group(s).
Follow procedures in "How to Remove a Node From All Disk Device Groups (5/02)".
After you remove the node from all disk device groups, return to "How to Uninstall Sun Cluster Software From a Cluster Node (5/02)" and repeat the procedure.
If the node could not rejoin the cluster, rename the node's /etc/cluster/ccr file to any other name you choose, for example, ccr.old.
# mv /etc/cluster/ccr /etc/cluster/ccr.old |
Return to "How to Uninstall Sun Cluster Software From a Cluster Node (5/02)" and repeat the procedure.