This procedure assumes that you are installing one or more storage arrays at initial installation of an x86 based cluster. If you are adding arrays to a running cluster, use the procedure in SPARC: How to Add a Storage Array to an Existing SPARC Based Cluster.
Multi-host storage in clusters uses the multi-initiator capability of the small computer system interface (SCSI) specification. When installing arrays in your cluster, you must ensure that each device in each SCSI chain has a unique SCSI address. The procedure that follows has specific instructions for achieving this requirement. For additional information about multi-initiator capability, see Multi-Initiator SCSI in Sun Cluster Concepts Guide for Solaris OS.
On x86 based systems, setting SCSI initiator IDs is a two-step process. You first set the IDs in the BIOS and then in a configuration file. Until both steps are complete, the IDs are not set and the systems might not boot or the nodes might panic. Set the IDs on one node at a time, as instructed in the procedure.
This procedure relies on the following prerequisites and assumptions.
You have read the entire procedure.
You can access necessary patches, drivers, software packages, and hardware.
If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.
You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.
Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.
If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.
For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.
For required firmware, see the Sun System Handbook.
You have planned the SCSI address assignments for your host adapters.
Your nodes and arrays are powered off.
Your cluster interconnect hardware is connected to the nodes in your cluster.
No software is installed.
Verify that the storage arrays are set up correctly for your planned configuration.
If necessary, install the host adapters in the nodes that you plan to connect to the storage array.
If possible, put each host adapter on a separate bus to ensure maximum redundancy.
Power on one node.
On the first node, ensure that each device in the SCSI chain has a unique SCSI address by configuring the initiator IDs in the BIOS.
To avoid SCSI-chain conflicts, perform the following steps.
Perform these steps on only one cluster node.
Access your BIOS settings.
To access the BIOS on the Sun Fire V40z server with SG-XCPI2SCSI-LM320 Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI host adapter cards, press Ctrl-C when prompted during reboot.
Verify that the internal controller is set to the default value of 7.
Set the new host adapter scsi-initiator-id to 6.
Cable the storage arrays to all nodes.
For cabling diagrams, see Chapter 3, Cabling Diagrams.
Ensure that the bus length does not exceed SCSI-bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about SCSI-bus-length limitations, see your hardware documentation.
Connect the AC or DC power cords for each storage array to a different power source.
If your storage array has redundant power inputs, connect each power cord from the storage array to a different power source. If the arrays are not mirrors of each other, the arrays can share power sources.
Power on the storage array.
For the procedure about powering on a storage device, see the service manual that shipped with your storage device.
Install the operating system software on the node for which you configured the BIOS in Step 4.
Install the Solaris operating system.
See your Sun Cluster installation documentation for instructions.
Install any unbundled drivers required by your cluster configuration.
For driver installation procedures, see the host adapter documentation.
Apply any required Solaris patches.
For patch information, see the patch guidelines in the prerequisites at the beginning of this procedure.
On the node for which you configured the BIOS in Step 4, finish configuring the SCSI initiator IDs.
Get the information required for the mpt.conf file.
To create the mpt.conf entries, you need the path to your boot disk and the SCSI unit address.
To find this information on X4000 series servers with SG-XCPI2SCSI-LM320 Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI Host Adapter cards, use the following command:
# echo | format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <DEFAULT cyl 8938 alt 2 hd 255 sec 63>
          /pci@0,0/pci1022,7450@a/pci17c2,10@4/sd@0,0
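The parent and unit-address values that you enter in the mpt.conf file later in this procedure can be read directly from the device path that the format command reports. The following sketch shows one way to split the path; the device path shown is the example path from this procedure, and the field handling assumes a path of that shape.

```shell
# Derive the mpt.conf "parent" and "unit-address" values from the
# boot-disk device path reported by format. The path below is the
# example path from this procedure; substitute your own output.
DISKPATH='/pci@0,0/pci1022,7450@a/pci17c2,10@4/sd@0,0'

# parent: the path up to, but not including, the host adapter node.
PARENT=$(echo "$DISKPATH" | awk -F/ '{OFS="/"; print "", $2, $3}')

# unit-address: the text after the "@" in the host adapter node
# (pci17c2,10@4 yields 4).
UNIT=$(echo "$DISKPATH" | awk -F/ '{n=split($4, a, "@"); print a[n]}')

echo "parent=\"$PARENT\" unit-address=\"$UNIT\""
# prints: parent="/pci@0,0/pci1022,7450@a" unit-address="4"
```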
Create a /kernel/drv/mpt.conf file.
Include the following entries:
scsi-initiator-id=6;
name="mpt" parent="/pci@0,0/pci1022,7450@a"
        unit-address="4"
        scsi-initiator-id=7;
These entries are based on the example output of the format command shown above. Your entries must use the values from your own format output. Note also that the parent and unit-address values are strings; the quotation marks are required to form correct values in the mpt.conf file.
The entries in this example have the following meanings:

scsi-initiator-id=6;
Matches your setting in the BIOS for the host adapter ports.

name="mpt"
Indicates that these settings are for the mpt driver.

parent
Specifies the path to your local drive, which you discovered in Step a.

unit-address
Specifies the unit address of the local drive. In the example in Step a, this information derives from the pci17c2,10@4 portion of the output.

scsi-initiator-id=7;
Sets your node's local drive back to the default SCSI setting of 7.
Reboot the node to activate the mpt.conf file changes.
For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.
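After the reboot, you can confirm that both initiator IDs took effect by searching the device tree for the scsi-initiator-id property. This check is illustrative; the exact property listing varies with your hardware.

```
# prtconf -v | grep scsi-initiator-id
```

Both the value 6 for the host adapter ports and the default value 7 for the local drive should appear under the relevant device nodes.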
Ensure that each LUN has an associated entry in the /kernel/drv/sd.conf file.
For more information, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.
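As an illustration of the sd.conf entry syntax only, with hypothetical target and LUN numbers that you must replace with the values for your arrays, an entry that makes LUN 1 on SCSI target 2 visible to the sd driver looks like this:

```
name="sd" class="scsi" target=2 lun=1;
```

On many Solaris releases the shipped sd.conf already covers LUN 0 on common targets, so you typically add entries only for additional LUNs.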
To activate the changes to the /kernel/drv/sd.conf file, use one of the following methods.

On systems that run Solaris 8 Update 7 or earlier, perform a reconfiguration boot by adding -r to your boot command.

For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

On systems that run Solaris 9 or later, run the update_drv -f sd command and then the devfsadm command.
Power on the remaining nodes and install the operating system software on them.
Install the Solaris operating system.
See your Sun Cluster installation documentation for instructions.
Install any unbundled drivers required by your cluster configuration.
For driver installation procedures, see the host adapter documentation.
Apply any required Solaris patches.
For patch information, see the patch guidelines in the prerequisites at the beginning of this procedure.
Ensure that each LUN has an associated entry in the /kernel/drv/sd.conf file.
For more information, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.
To activate the changes to the /kernel/drv/sd.conf file, use one of the following methods.

On systems that run Solaris 8 Update 7 or earlier, perform a reconfiguration boot by adding -r to your boot command.

For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

On systems that run Solaris 9 or later, run the update_drv -f sd command and then the devfsadm command.
If you are using Sun StorEdge 3310 JBOD arrays with the Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI host adapter, you must throttle down the speed of the adapter to U160. Add the following entry to your /kernel/drv/mpt.conf file on each node:
scsi-options=0x1ff8;
Install the Sun Cluster software and volume management software on each node.
For software installation procedures, see the Sun Cluster installation documentation.
The following mpt.conf file shows all entries, assuming the following:
The output of the format command is that shown in Step a.
You are using Sun StorEdge 3320 JBOD arrays with the Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI host adapter.
# more /kernel/drv/mpt.conf
scsi-initiator-id=6;
name="mpt" parent="/pci@0,0/pci1022,7450@a"
        unit-address="4"
        scsi-initiator-id=7;
The following mpt.conf file shows all entries, assuming the following:
The output of the format command is that shown in Step a.
You are using Sun StorEdge 3310 JBOD arrays with the Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI host adapter.
# more /kernel/drv/mpt.conf
scsi-initiator-id=6;
name="mpt" parent="/pci@0,0/pci1022,7450@a"
        unit-address="4"
        scsi-initiator-id=7;
scsi-options=0x1ff8;
If needed, finish setting up your storage arrays, including partitioning. If you are using Solstice DiskSuite/Solaris Volume Manager as your volume manager, save the disk-partitioning information. You might need this information if you ever replace a failed disk drive.
Do not save disk-partitioning information in /tmp because the /tmp directory is cleared at reboot and the file would be lost. Instead, save the file in /usr/tmp.
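One way to capture this information is to save each disk's VTOC with the prtvtoc command, whose output fmthard can later replay onto a replacement disk. This example is illustrative: the device name c1t0d0 follows the earlier format example, and your controller, target, and disk numbers will differ.

```
# Save the partition table (VTOC) of the whole-disk slice (s2).
prtvtoc /dev/rdsk/c1t0d0s2 > /usr/tmp/c1t0d0.vtoc

# Later, to apply the saved layout to a replacement disk:
# fmthard -s /usr/tmp/c1t0d0.vtoc /dev/rdsk/cXtYdZs2
```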