For conceptual information about quorum, quorum devices, global devices, and device IDs, see your Sun Cluster concepts documentation. For a list of Sun Cluster documentation, see Related Documentation.
Example 1–5 shows you how to apply this procedure.
This procedure assumes that your cluster is operational.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
Determine whether the disk drive that you want to remove is configured as a quorum device.
If the disk drive you want to remove is configured as a quorum device, choose and configure another device to be the new quorum device. Then remove the old quorum device.
For procedures about how to add and remove quorum devices, see Sun Cluster system administration documentation.
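As a sketch, on Sun Cluster 3.2 you can check and replace a quorum device with the clquorum command; the DID device names d4 (old) and d6 (replacement) here are hypothetical:

```shell
# List the configured quorum devices to see whether the disk is one of them.
clquorum list -v

# If the disk to be removed (d4) is a quorum device, configure a
# replacement quorum device (d6) first, then remove the old one.
clquorum add d6
clquorum remove d4
```

On Sun Cluster 3.1, the equivalent operations use scconf, for example `scconf -a -q globaldev=d6` followed by `scconf -r -q globaldev=d4`.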
If possible, back up the metadevice or volume.
For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
Perform volume management administration to remove the disk drive from the configuration.
For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
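For example, with Solstice DiskSuite/Solaris Volume Manager you might detach and clear the submirror that uses the failing drive; the metadevice names (d10, d11) and the replica slice here are hypothetical:

```shell
# Detach the submirror (d11) that contains the failing drive
# from its mirror (d10).
metadetach d10 d11

# Delete the detached submirror, then remove any state database
# replicas that reside on the drive being removed.
metaclear d11
metadb -d /dev/dsk/c1t32d0s7
```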
Identify the disk drive that needs to be removed.
If the disk error message reports the drive problem by DID device name, determine the corresponding Solaris device name.
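For example, if the error message reports DID device d4, you can map it to its Solaris device name as follows:

```shell
# Sun Cluster 3.2: list the full device paths for DID instance d4.
cldevice list -v d4

# Sun Cluster 3.1: the equivalent scdidadm query.
scdidadm -l d4
```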
On any node that is connected to the storage array, run the luxadm remove_device command.
Remove the disk drive. Press the Return key when prompted.
# luxadm remove_device -F /dev/rdsk/cNtXdYsZ
On all connected nodes, remove references to the disk drive.
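On each node that was connected to the drive, this typically means cleaning up the device tree and the DID namespace, as shown in the examples that follow:

```shell
# Remove stale /dev and /devices entries for the detached drive.
devfsadm -C

# Remove DID instances that no longer map to an attached device.
cldevice clear      # Sun Cluster 3.2
# scdidadm -C       # Sun Cluster 3.1 equivalent
```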
The following example shows how to remove a disk drive from a Sun StorEdge A5x00 storage array in a cluster running Sun Cluster version 3.2 software. The disk drive to be removed is d4 and is a virtual table of contents (VTOC) labeled device.
# cldevice list -v
=== DID Device Instances ===
DID Device Name:       /dev/did/rdsk/d4
  Full Device Path:      phys-schost1:/dev/rdsk/c1t1d0
  Full Device Path:      phys-schost2:/dev/rdsk/c1t1d0
  Replication:           none
  default_fencing:       global
# luxadm remove_device -F /dev/rdsk/c1t32d0s2

WARNING!!! Please ensure that no file systems are mounted on these device(s).
All data on these devices should have been backed up.

The list of devices that will be removed is:
  1: Box Name "venus1" front slot 0

Please enter 'q' to Quit or <Return> to Continue: <Return>

stopping:  Drive in "venus1" front slot 0....Done
offlining: Drive in "venus1" front slot 0....Done

Hit <Return> after removing the device(s). <Return>

Drive in Box Name "venus1" front slot 0
Logical Nodes being removed under /dev/dsk/ and /dev/rdsk:
        c1t32d0s0
        c1t32d0s1
        c1t32d0s2
        c1t32d0s3
        c1t32d0s4
        c1t32d0s5
        c1t32d0s6
        c1t32d0s7
# devfsadm -C
# cldevice clear
The following example shows how to remove a disk drive from a Sun StorEdge A5x00 storage array in a cluster running Sun Cluster version 3.1 software. The disk drive to be removed is d4 and is a virtual table of contents (VTOC) labeled device.
# scdidadm -l d4
4        phys-schost-2:/dev/rdsk/c1t32d0   /dev/did/rdsk/d4
# luxadm remove_device -F /dev/rdsk/c1t32d0s2

WARNING!!! Please ensure that no file systems are mounted on these device(s).
All data on these devices should have been backed up.

The list of devices that will be removed is:
  1: Box Name "venus1" front slot 0

Please enter 'q' to Quit or <Return> to Continue: <Return>

stopping:  Drive in "venus1" front slot 0....Done
offlining: Drive in "venus1" front slot 0....Done

Hit <Return> after removing the device(s). <Return>

Drive in Box Name "venus1" front slot 0
Logical Nodes being removed under /dev/dsk/ and /dev/rdsk:
        c1t32d0s0
        c1t32d0s1
        c1t32d0s2
        c1t32d0s3
        c1t32d0s4
        c1t32d0s5
        c1t32d0s6
        c1t32d0s7
# devfsadm -C
# scdidadm -C