Oracle Solaris Cluster 3.3 With Fibre Channel JBOD Storage Device Manual SPARC Platform Edition


Maintaining Storage Arrays

The maintenance procedures in FRUs That Do Not Require Oracle Solaris Cluster Maintenance Procedures are performed in the same way as in a noncluster environment. Table 1-2 lists the procedures that require cluster-specific steps.

Table 1-2 Task Map: Maintaining a Storage Array

  Task                       Information
  -------------------------  ------------------------------------
  Remove a storage array     How to Remove a Storage Array
  Replace a storage array    How to Replace a Storage Array
  Add a disk drive           How to Add a Disk Drive
  Remove a disk drive        How to Remove a Disk Drive
  Replace a disk drive       How to Replace a Disk Drive

FRUs That Do Not Require Oracle Solaris Cluster Maintenance Procedures

Each storage device has a different set of FRUs that do not require cluster-specific procedures.

Choose among the following storage devices:

Sun StorEdge A5x00 FRUs

The following administrative tasks require no cluster-specific procedures. See the Sun StorEdge A5000 Installation and Service Manual for these procedures.

How to Replace a Storage Array

Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
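
For example, the cldevice and clquorum commands that appear in this manual have the short forms cldev and clq, so the following two commands are equivalent:

# cldevice populate
# cldev populate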

Example 1-1 shows you how to apply this procedure.

  1. If possible, back up the metadevices or volumes that reside in the storage array.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  2. Perform volume management administration to remove the storage array from the configuration.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  3. On all nodes that are connected to the storage array, run the luxadm remove_device command.
    # luxadm remove_device -F boxname

    See Example 1-1 for an example of this command and its use.

  4. Disconnect the fiber-optic cables from the storage array.
  5. Power off and disconnect the storage array from the AC power source.

    For more information, see your storage documentation. For a list of storage documentation, see Related Documentation.

  6. Connect the fiber-optic cables to the new storage array.
  7. Connect the new storage array to an AC power source.
  8. One disk drive at a time, remove the disk drives from the old storage array. Insert the disk drives into the same slots in the new storage array.
  9. Power on the storage array.
  10. Use the luxadm insert_device command to find the new storage array.

    Repeat this step for each node that is connected to the storage array.

    # luxadm insert_device

    See Example 1-1 for an example of this command and its use.

  11. On all nodes that are connected to the new storage array, upload the new information to the DID driver.

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive that is connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.

    Use the following command:

    # cldevice populate
  12. Perform volume management administration to add the new storage array to the configuration.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

Example 1-1 Replacing a Sun StorEdge A5x00 Storage Array When Using Oracle Solaris Cluster 3.3 Software

The following example shows how to replace a Sun StorEdge A5x00 storage array when using Oracle Solaris Cluster 3.3 software. The storage array to be replaced is venus1.

# luxadm remove_device -F venus1

WARNING!!! Please ensure that no filesystems are mounted on these device(s).
 All data on these devices should have been backed up.

The list of devices that will be removed is:
  1: Box name:    venus1
     Node WWN:    123456789abcdeff
     Device Type: SENA (SES device)
     SES Paths:
            /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@0,0/   \
                ses@w123456789abcdf03,0:0
            /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@1,0/   \
                ses@w123456789abcdf00,0:0

Please verify the above list of devices and
then enter 'c' or <CR> to Continue or 'q' to Quit. [Default: c]: 
<Return>
Hit <Return> after removing the device(s). <Return>

# luxadm insert_device
Please hit <RETURN> when you have finished adding Fibre Channel 
Enclosure(s)/Device(s): <Return>
# cldevice populate

How to Remove a Storage Array

Use this procedure to remove a storage array from a cluster. Example 1-2 shows you how to apply this procedure. Use the procedures in your server hardware manual to identify the storage array.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  1. Perform volume management administration to remove the storage array from the configuration.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  2. On all nodes that are connected to the storage array, run the luxadm remove_device command.
    # luxadm remove_device -F boxname
  3. Remove the storage array and the fiber-optic cables that are connected to the storage array.

    For more information, see your storage documentation. For a list of storage documentation, see Related Documentation.


    Note - If you are using your storage arrays in a SAN-configured cluster, you must keep two FC switches configured in parallel. This configuration maintains cluster availability. See SAN Solutions in an Oracle Solaris Cluster Environment in Oracle Solaris Cluster 3.3 Hardware Administration Manual for more information.


  4. On all nodes, remove references to the storage array.
    # devfsadm -C
    # cldevice populate
  5. If necessary, remove any unused host adapters from the nodes.

    For the procedure about how to remove host adapters, see the documentation that shipped with your nodes.

Example 1-2 Removing a Sun StorEdge A5x00 Storage Array When Using Oracle Solaris Cluster 3.3 Software

The following example shows how to remove a Sun StorEdge A5x00 storage array from a cluster running Oracle Solaris Cluster version 3.3 software. The storage array to be removed is venus1.

# luxadm remove_device -F venus1
WARNING!!! Please ensure that no filesystems are mounted on these device(s).
 All data on these devices should have been backed up.

The list of devices that will be removed is:
  1: Box name:    venus1
     Node WWN:    123456789abcdeff
     Device Type: SENA (SES device)
     SES Paths:
      /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@0,0/    \
                ses@w123456789abcdf03,0:0
      /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@1,0/    \
                ses@w123456789abcdf00,0:0


Please verify the above list of devices and
then enter 'c' or <CR> to Continue or 'q' to Quit. [Default: c]: <Return>

Hit <Return> after removing the device(s). <Return>

# devfsadm -C
# cldevice populate  

How to Add a Disk Drive

For conceptual information about quorum, quorum devices, global devices, and device IDs, see your Oracle Solaris Cluster concepts documentation. For a list of Oracle Solaris Cluster documentation, see Related Documentation.

Before You Begin

This procedure assumes that your cluster is operational.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. On one node that is connected to the storage array, install the new disk drive.

    Press the Return key when prompted. You can insert multiple disk drives at the same time.

    # luxadm insert_device enclosure,slot
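
    For example, if the new drive goes in front slot 2 of an enclosure named venus1 (a hypothetical enclosure name), the command might look like the following:

    # luxadm insert_device venus1,f2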
  2. On all other nodes that are attached to the storage array, probe all devices. Write the new disk drive to the /dev/rdsk directory.

    The amount of time that the devfsadm command requires to complete its processing depends on the number of devices that are connected to the node. Expect at least five minutes.

    # devfsadm -C
  3. Ensure that entries for the disk drive have been added to the /dev/rdsk directory.
    # ls -l /dev/rdsk
  4. If necessary, partition the disk drive.

    You can either use the format(1M) command or copy the partitioning from another disk drive in the storage array.
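
    For example, a minimal way to copy the partitioning is to pipe the prtvtoc output for an existing drive into fmthard. The device names here are hypothetical; slice 2 is the backup slice that covers the entire disk:

    # prtvtoc /dev/rdsk/c1t1d0s2 | fmthard -s - /dev/rdsk/c1t32d0s2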

  5. From any node in the cluster, update the global device namespace.

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.

    Run the following command:

    # cldevice populate
  6. Verify that a device ID (DID) has been assigned to the disk drive.

    Note - The DID that was assigned to the new disk drive might not be in sequential order in the storage array.


    # cldevice list -v
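
    For example, to limit the listing to the new drive, you can filter the output by the drive's logical device name (a hypothetical name here):

    # cldevice list -v | grep c1t32d0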
  7. Perform necessary volume management administration actions on the new disk drive.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
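
    For example, with Solaris Volume Manager you might add the new drive to an existing diskset by its DID name. The diskset name dg-schost-1 and the DID name d5 are hypothetical:

    # metaset -s dg-schost-1 -a /dev/did/rdsk/d5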

How to Remove a Disk Drive

For conceptual information about quorum, quorum devices, global devices, and device IDs, see your Oracle Solaris Cluster concepts documentation. For a list of Oracle Solaris Cluster documentation, see Related Documentation.

Example 1-3 shows you how to apply this procedure.

Before You Begin

This procedure assumes that your cluster is operational.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. Determine whether the disk drive that you want to remove is configured as a quorum device.
    # clquorum show
  2. If the disk drive you want to remove is configured as a quorum device, choose and configure another device to be the new quorum device. Then remove the old quorum device.

    For procedures about how to add and remove quorum devices, see Oracle Solaris Cluster system administration documentation.
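
    The following is a minimal sketch of this swap. It assumes that the old quorum device is d4 and that a suitable replacement device d12 exists on another storage array; both DID names are hypothetical:

    # clquorum add d12
    # clquorum remove d4
    # clquorum status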

  3. If possible, back up the metadevice or volume.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  4. Perform volume management administration to remove the disk drive from the configuration.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  5. Identify the disk drive that needs to be removed.

    If the disk error message reports the drive problem by DID, determine the Oracle Solaris device name.

    # cldevice list -v
  6. On any node that is connected to the storage array, run the luxadm remove_device command.

    Remove the disk drive. Press the Return key when prompted.

    # luxadm remove_device -F /dev/rdsk/cNtXdYsZ
  7. On all connected nodes, remove references to the disk drive.
    # devfsadm -C
    # cldevice clear

Example 1-3 Removing a Disk Drive in a Sun StorEdge A5x00 Storage Array When Using Oracle Solaris Cluster 3.3 Software

The following example shows how to remove a disk drive from a Sun StorEdge A5x00 storage array in a cluster running Oracle Solaris Cluster 3.3 software. The disk drive to be removed is d4, which has a virtual table of contents (VTOC) label.

# cldevice list -v

=== DID Device Instances ===

DID Device Name:                            /dev/did/rdsk/d4
  Full Device Path:                           phys-schost1:/dev/rdsk/c1t32d0
  Full Device Path:                           phys-schost2:/dev/rdsk/c1t32d0
  Replication:                                none
  default_fencing:                            global

# luxadm remove_device -F /dev/rdsk/c1t32d0s2

WARNING!!! Please ensure that no file systems are mounted on these device(s).
All data on these devices should have been backed up.

The list of devices that will be removed is: 
            1: Box Name "venus1" front slot 0

Please enter 'q' to Quit or <Return> to Continue: <Return>

stopping:  Drive in "venus1" front slot 0....Done
offlining: Drive in "venus1" front slot 0....Done

Hit <Return> after removing the device(s). <Return>

Drive in Box Name "venus1" front slot 0
Logical Nodes being removed under /dev/dsk/ and /dev/rdsk:
        c1t32d0s0
        c1t32d0s1
        c1t32d0s2
        c1t32d0s3
        c1t32d0s4
        c1t32d0s5
        c1t32d0s6
        c1t32d0s7
# devfsadm -C
# cldevice clear

How to Replace a Disk Drive

For conceptual information about quorum, quorum devices, global devices, and device IDs, see your Oracle Solaris Cluster concepts documentation.

Before You Begin

This procedure assumes that your cluster is operational.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. Identify the disk drive that needs replacement.

    If the disk error message reports the drive problem by device ID (DID), determine the Oracle Solaris logical device name. If the disk error message reports the drive problem by the Oracle Solaris physical device name, use your Oracle Solaris documentation to map the Oracle Solaris physical device name to the Oracle Solaris logical device name. Use this Oracle Solaris logical device name and DID throughout this procedure.

    Run the following command:

    # cldevice list -v 
  2. Determine whether the disk drive that you want to replace is configured as a quorum device.
    # clquorum show
  3. If the disk drive that you want to replace is configured as a quorum device, add a new quorum device on a different storage array. Remove the old quorum device.

    For procedures about how to add and remove quorum devices, see Oracle Solaris Cluster system administration documentation.

  4. If possible, back up the metadevice or volume.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  5. Identify the failed disk drive's physical DID.

    Use this physical DID in Step 14 to verify that the failed disk drive has been replaced with a new disk drive. The DID and the world wide name (WWN) for the disk drive are the same.

    Use the following command:

    # cldevice list -v
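
    For example, you can save the listing to a temporary file (the file name is arbitrary) so that you can compare it with the output in Step 14:

    # cldevice list -v > /var/tmp/did-before-replacement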
  6. If you are using Veritas Volume Manager, proceed to Step 8.
  7. If you are using Solaris Volume Manager, save the disk partitioning information to partition the new disk drive.
    # prtvtoc /dev/rdsk/cNtXdYs2 > filename

    Note - You can also use the format utility to save the disk's partition information.


  8. On any node that is connected to the storage array, remove the disk drive when prompted.
    # luxadm remove_device -F /dev/rdsk/cNtXdYs2

    After you run the command, warning messages might be displayed. You can ignore these messages.

  9. On any node that is connected to the storage array, run the luxadm insert_device command. Add the new disk drive when prompted.
    # luxadm insert_device boxname,fslotnumber

    or

    # luxadm insert_device boxname,rslotnumber

    If you are inserting a front disk drive, use the fslotnumber parameter. If you are inserting a rear disk drive, use the rslotnumber parameter.
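
    For example, to insert a drive into front slot 3 or rear slot 3 of an enclosure named venus1 (hypothetical names), you would run one of the following commands:

    # luxadm insert_device venus1,f3
    # luxadm insert_device venus1,r3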

  10. On all other nodes that are attached to the storage array, probe all devices. Write the new disk drive to the /dev/rdsk directory.

    The amount of time that the devfsadm command requires to complete depends on the number of devices that are connected to the node. Expect at least five minutes.

    # devfsadm -C
  11. If you are using Veritas Volume Manager, proceed to Step 13.
  12. If you are using Solaris Volume Manager, on one node that is connected to the storage array, partition the new disk drive. Use the partitioning information that you saved in Step 7.
    # fmthard -s filename /dev/rdsk/cNtXdYs2

    Note - You can also use the format utility to partition the new disk drive.


  13. From all nodes that are connected to the storage array, update the DID database and driver.
    # cldevice repair
  14. On any node, confirm that the failed disk drive has been replaced. Compare the following physical DID to the physical DID in Step 5.

    If the following physical DID is different from the physical DID in Step 5, you successfully replaced the failed disk drive with a new disk drive.

    Use the following command:

    # cldevice list -v
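
    If you saved the earlier listing to a file in Step 5, a simple comparison highlights the change. The file names match the hypothetical ones used in that step:

    # cldevice list -v > /var/tmp/did-after-replacement
    # diff /var/tmp/did-before-replacement /var/tmp/did-after-replacement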
  15. Perform volume management administration to add the disk drive back to its diskset or disk group.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  16. If you want this new disk drive to be a quorum device, add the quorum device.

    For the procedure about how to add a quorum device, see Oracle Solaris Cluster system administration documentation.