The following information applies to this update release and all subsequent updates.
The following two items were added to this section in the Sun Cluster 3.0 5/02 update release and apply to this update and all subsequent updates to Sun Cluster 3.0 software.
VxVM does not support the chmod command. To change global device permissions in VxVM, consult the VxVM administrator's guide.
Sun Cluster 3.0 software does not support VxVM Dynamic Multipathing (DMP) to manage multiple paths from the same node.
Use this procedure to remove a cluster node from all disk device groups that list the node in their lists of potential primaries.
Become superuser on the node you want to remove as a potential primary of all disk device groups.
Determine which disk device groups under volume-management control list the node to be removed as a member.
Look for the node name in the Device group node list for each disk device group.
# scconf -p | grep "Device group"
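As an illustration of how the node name appears in the filtered output, the following sketch pipes sample scconf -p output through grep. The device group and node names are hypothetical; on a live cluster you would pipe the real `scconf -p` output instead of the here-document.

```shell
# Sample scconf -p output piped through grep; on a real cluster, replace
# the here-document with `scconf -p`. Names below are examples only.
cat <<'EOF' | grep "Device group node list" | grep phys-schost-2
Device group name:        dg-schost-1
Device group type:        SDS
Device group node list:   phys-schost-1, phys-schost-2
EOF
```

Only the node-list lines that mention the node to be removed survive both grep filters, which makes it easy to spot the device groups that still reference the node.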
Are any of the disk device groups identified in Step 2 of the device group type SDS?
If yes, perform the procedures in "How to Remove a Node From a Disk Device Group (Solstice DiskSuite) (5/02)".
If no, go to Step 4.
Are any of the disk device groups identified in Step 2 of the device group type VxVM?
If yes, perform the procedures in "How to Remove a Node From a Disk Device Group (VERITAS Volume Manager) (5/02)".
If no, go to Step 5.
Determine the raw disk device groups of which the node to be removed is a member.
Note that the following command contains two "v"s in -pvv. The second "v" is needed to display raw disk device groups.
# scconf -pvv | grep "Device group"
Are any of the disk device groups listed in Step 5 of the device group types Disk, Local_Disk, or both?
If yes, perform the procedures in "How to Remove a Node From a Raw Disk Device Group (5/02)".
If no, go to Step 7.
Verify that the node has been removed from the potential primaries list of all disk device groups.
The command returns nothing if the node is no longer listed as a potential primary of any disk device group.
# scconf -pvv | grep "Device group" | grep nodename
Use this procedure to remove a cluster node from the list of potential primaries of a Solstice DiskSuite disk device group. A node can belong to more than one disk device group at a time, so repeat the metaset command for each disk device group from which you want to remove the node.
Determine the Solstice DiskSuite disk device group(s) of which the node to be removed is a member.
Device group type SDS indicates a Solstice DiskSuite disk device group.
# scconf -p | grep Device
Become superuser on the node that currently owns the disk device group you want to modify.
Delete the node's hostname from the disk device group.
# metaset -s setname -d -h nodelist

-s setname    Specifies the disk device group name
-d            Deletes from the disk device group the nodes identified with -h
-h nodelist   Removes the node from the list of nodes that can master the disk device group
The update can take several minutes to complete.
If the command fails, add the -f (Force) option to the command.
# metaset -s setname -d -f -h nodelist
Repeat Step 3 for each disk device group from which the node is being removed as a potential primary.
Verify that the node has been removed from the disk device group.
The disk device group name will match the diskset name specified with metaset.
# scstat -D
The following example shows the removal of the host name phys-schost-2 from a disk device group configuration. This eliminates phys-schost-2 as a potential primary for the designated disk device group. Verify removal of the node by running the scstat -D command and by checking that the removed node is no longer displayed in the screen text.
[Determine the Solstice DiskSuite disk device group(s) for the node:]
# scconf -p | grep Device
  Device group name:                 dg-schost-1
  Device group type:                 SDS
  Device group failback enabled:     no
  Device group node list:            phys-schost-1, phys-schost-2
  Device group ordered node list:    yes
  Device group diskset name:         dg-schost-1
[Determine the disk device group(s) for the node:]
# scstat -D
  -- Device Group Servers --
                           Device Group    Primary          Secondary
                           ------------    -------          ---------
  Device group servers:    dg-schost-1     phys-schost-1    phys-schost-2
[Become superuser.]
[Remove the hostname from all disk device groups:]
# metaset -s dg-schost-1 -d -h phys-schost-2
[Verify removal of the node:]
# scstat -D
  -- Device Group Servers --
                           Device Group    Primary          Secondary
                           ------------    -------          ---------
  Device group servers:    dg-schost-1     phys-schost-1    -
Use this procedure to remove a cluster node from the list of potential primaries of a VERITAS Volume Manager (VxVM) disk device group (disk group).
Determine the VxVM disk device group(s) of which the node to be removed is a member.
The device group type VxVM indicates a VxVM disk device group.

# scconf -p | grep Device
Become superuser on a current cluster member node.
Execute the scsetup utility.
# scsetup
The Main Menu is displayed.
To reconfigure a disk device group, type 4 (Device groups and volumes).
To remove the node from the VxVM disk device group, type 5 (Remove a node from a VxVM device group).
Follow the prompts to remove the cluster node from the disk device group. You will be asked for information about the following:
VxVM device group
Node name
Verify that the node has been removed from the VxVM disk device group(s).
# scconf -p | grep Device
This example shows removal of the node named phys-schost-1 from the dg1 VxVM disk device group.
[Determine the VxVM disk device group for the node:]
# scconf -p | grep Device
  Device group name:               dg1
  Device group type:               VxVM
  Device group failback enabled:   no
  Device group node list:          phys-schost-1, phys-schost-2
  Device group diskset name:       dg1
[Become superuser and execute the scsetup utility:]
# scsetup
Select Device groups and volumes > Remove a node from a VxVM device group.
Answer the questions when prompted. You will need the following information.

  You Will Need:            Example:
  VxVM device group name    dg1
  node names                phys-schost-1

[Verify that the scconf command executed properly:]
scconf -r -D name=dg1,nodelist=phys-schost-1
    Command completed successfully.
Quit the scsetup Device Groups Menu and Main Menu.
[Verify that the node was removed:]
# scconf -p | grep Device
  Device group name:               dg1
  Device group type:               VxVM
  Device group failback enabled:   no
  Device group node list:          phys-schost-2
  Device group diskset name:       dg1
Use this procedure to remove a cluster node from the list of potential primaries of a raw disk device group.
Become superuser on a node in the cluster other than the node to remove.
Identify the disk device groups that are connected to the node being removed.
Look for the node name in the Device group node list entry.
# scconf -pvv | grep nodename | grep "Device group node list"
Determine which disk device groups identified in Step 2 are raw disk device groups.
Raw disk device groups are of the Disk or Local_Disk device group type.
# scconf -pvv | grep "group type"
Disable the localonly property of each Local_Disk raw disk device group.
# scconf -c -D name=rawdisk-device-group,localonly=false
See the scconf_dg_rawdisk(1M) man page for more information about the localonly property.
Verify that you have disabled the localonly property of all raw disk device groups that are connected to the node being removed.
The Disk device group type indicates that the localonly property is disabled for that raw disk device group.
# scconf -pvv | grep "group type"
Remove the node from all raw disk device groups identified in Step 2.
You must complete this step for each raw disk device group that is connected to the node being removed.
# scconf -r -D name=rawdisk-device-group,nodelist=nodename
This example shows how to remove a node (phys-schost-2) from a raw disk device group. All commands are run from another node of the cluster (phys-schost-1).
[Identify the disk device groups connected to the node being removed:]
phys-schost-1# scconf -pvv | grep phys-schost-2 | grep "Device group node list"
    (dsk/d4) Device group node list:   phys-schost-2
    (dsk/d2) Device group node list:   phys-schost-1, phys-schost-2
    (dsk/d1) Device group node list:   phys-schost-1, phys-schost-2
[Identify the raw disk device groups:]
phys-schost-1# scconf -pvv | grep "group type"
    (dsk/d4) Device group type:        Local_Disk
    (dsk/d8) Device group type:        Local_Disk
[Disable the localonly flag for each local disk on the node:]
phys-schost-1# scconf -c -D name=dsk/d4,localonly=false
[Verify that the localonly flag is disabled:]
phys-schost-1# scconf -pvv | grep "group type"
    (dsk/d4) Device group type:        Disk
    (dsk/d8) Device group type:        Local_Disk
[Remove the node from all raw disk device groups:]
phys-schost-1# scconf -r -D name=dsk/d4,nodelist=phys-schost-2
phys-schost-1# scconf -r -D name=dsk/d2,nodelist=phys-schost-2
phys-schost-1# scconf -r -D name=dsk/d1,nodelist=phys-schost-2
The following procedure was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
If you intend to create more than three disksets in the cluster, perform the following steps before you create the disksets. Follow these steps regardless of whether you are installing disksets for the first time or you are adding more disksets to a fully configured cluster.
Ensure that the value of the md_nsets variable is set high enough to accommodate the total number of disksets you intend to create in the cluster.
On any node of the cluster, check the value of the md_nsets variable in the /kernel/drv/md.conf file.
If the total number of disksets in the cluster will exceed the existing value of md_nsets minus one, then on each node increase md_nsets to at least one more than the total number of disksets you intend to create.
The maximum permissible number of disksets is one less than the value of md_nsets. The maximum possible value of md_nsets is 32.
Ensure that the /kernel/drv/md.conf file is identical on each node of the cluster.
Failure to follow this guideline can result in serious Solstice DiskSuite errors and possible loss of data.
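As an illustrative sketch only, the md.conf property line might be edited as follows to permit up to 20 disksets (md_nsets=21, because the maximum number of disksets is one less than md_nsets). The nmd value shown is a hypothetical placeholder; keep whatever value your configuration already uses, and make the file identical on every node.

```
# /kernel/drv/md.conf (sketch; values are examples, not recommendations)
name="md" parent="pseudo" nmd=128 md_nsets=21;
```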
From one node, shut down the cluster.
# scshutdown -g0 -y
Reboot each node of the cluster.
ok boot
On each node in the cluster, run the devfsadm(1M) command.
You can run this command on all nodes in the cluster at the same time.
From one node of the cluster, run the scgdevs(1M) command.
On each node, verify that the scgdevs command has completed before you attempt to create any disksets.
The scgdevs command calls itself remotely on all nodes, even when the command is run from just one node. To determine whether the scgdevs command has completed processing, run the following command on each node of the cluster.
% ps -ef | grep scgdevs
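The check above can be wrapped in a small polling loop, run on each node, that waits until no scgdevs process remains before you proceed to create disksets. This is a sketch, not part of the product; the bracket pattern [s]cgdevs simply keeps grep from matching its own command line in the ps output.

```shell
# Poll until no scgdevs process remains on this node (sketch).
# The pattern [s]cgdevs prevents grep from matching itself in ps output.
while ps -ef | grep '[s]cgdevs' > /dev/null 2>&1; do
    sleep 5
done
echo "scgdevs has completed"
```

Once the loop prints its message on every node, it is safe to begin creating disksets.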