Use this procedure to add RAID storage arrays to a running cluster. If you need to install a storage array in a new cluster, use the procedure in How to Install a RAID Storage Array in a New Cluster.
This procedure assumes that your nodes are not configured with dynamic reconfiguration functionality.
If your nodes are configured for dynamic reconfiguration, see your Sun Cluster Hardware Administration Manual for Solaris OS.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read RBAC (role-based access control) authorization.
Install any RAID storage array packages and patches on the nodes.
For the most current list of software, firmware, and patches that are required for the RAID storage array, refer to SunSolve. SunSolve is available online to Sun service providers and to customers with SunSolve service contracts at the SunSolve site: http://sunsolve.sun.com.
Power on the RAID storage array.
For procedures about how to power on the storage array, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.
Configure the RAID storage array.
For the procedure about how to create LUNs, see How to Create and Map a LUN.
On each node that is connected to the RAID storage array, ensure that each LUN has an associated entry in the /kernel/drv/sd.conf file.
For more information, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.
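As an illustration only, entries that expose four LUNs on a single SCSI target might look like the following. The target and lun values here are placeholders; use the values that match your configuration.

```
# Hypothetical /kernel/drv/sd.conf entries for LUNs 0-3 on target 2.
# Adjust the target and lun values to match your configuration.
name="sd" class="scsi" target=2 lun=0;
name="sd" class="scsi" target=2 lun=1;
name="sd" class="scsi" target=2 lun=2;
name="sd" class="scsi" target=2 lun=3;
```

A change to sd.conf does not take effect until the driver rereads the file, which is one reason a reconfiguration boot is performed later in this procedure.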
Shut down the first node.
For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.
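A typical sequence evacuates resource groups and device groups from the node before shutting it down. The following is a sketch only; the node name phys-schost-1 is a placeholder, and your Sun Cluster system administration documentation is the authoritative reference.

```
# Evacuate resource groups and device groups from the node
# (node name is a placeholder), then shut the node down.
# scswitch -S -h phys-schost-1
# shutdown -y -g0 -i0
```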
If you are installing new host adapters, power off the first node.
For the procedure about how to power off a node, see your Sun Cluster system administration documentation.
Install the host adapters in the first node.
For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.
Cable the RAID storage array to the first node.
Ensure the cable does not exceed bus length limitations. For more information on bus length limitations, see the documentation that shipped with your hardware.
For the procedure about how to cable the storage arrays, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.
Boot the first node.
For the procedure about how to boot cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.
Verify that the first node recognizes the new host adapters and disk drives.
If the node does not recognize the new hardware, check all hardware connections and repeat the installation steps that you performed in Step 7.
SPARC:

{0} ok show-disks
a) /pci@1f,4000/pci@2/scsi@5/sd
b) /pci@1f,4000/pci@2/scsi@4/sd
...
x86:

Adaptec AIC-7899 SCSI BIOS v2.57S4
(c) 2000 Adaptec, Inc. All Rights Reserved.
    Press <Ctrl><A> for SCSISelect(TM) Utility!

 Ch B,  SCSI ID: 0 SEAGATE  ST336605LC        160
        SCSI ID: 1 SEAGATE  ST336605LC        160
        SCSI ID: 6 ESG-SHV  SCA HSBP M18      ASYN
 Ch A,  SCSI ID: 2 SUN      StorEdge 3310     160
        SCSI ID: 3 SUN      StorEdge 3310     160

AMIBIOS (C)1985-2002 American Megatrends Inc.,
Copyright 1996-2002 Intel Corporation
SCB20.86B.1064.P18.0208191106
SCB2 Production BIOS Version 2.08
BIOS Build 1064

2 X Intel(R) Pentium(R) III CPU family 1400MHz
Testing system memory, memory size=2048MB
2048MB Extended Memory Passed
512K L2 Cache SRAM Passed
ATAPI CD-ROM SAMSUNG CD-ROM SN-124

SunOS - Intel Platform Edition    Primary Boot Subsystem, vsn 2.0

            Current Disk Partition Information

    Part#   Status    Type      Start       Length
    ================================================
      1     Active   X86 BOOT     2428       21852
      2              SOLARIS     24280    71662420
      3              <unused>
      4              <unused>
    Please select the partition you wish to boot: * *

Solaris DCB

    loading /solaris/boot.bin

SunOS Secondary Boot version 3.00

    Solaris Intel Platform Edition Booting System

Autobooting from bootpath: /pci@1,0/pci8086,340f@7,1/sd@0,0:a

If the system hardware has changed, or to boot from a different
device, interrupt the autoboot process by pressing ESC.

Press ESCape to interrupt autoboot in 2 seconds.
Initializing system
Please wait...
Warning: Resource Conflict - both devices are added

NON-ACPI device: ISY0050
    Port: 3F0-3F5, 3F7; IRQ: 6; DMA: 2
ACPI device: ISY0050
    Port: 3F2-3F3, 3F4-3F5, 3F7; IRQ: 6; DMA: 2

            <<< Current Boot Parameters >>>
Boot path: /pci@1,0/pci8086,340f@7,1/sd@0,0:a
Boot args:

Type   b [file-name] [boot-flags] <ENTER>   to boot with options
or     i <ENTER>                            to enter boot interpreter
or     <ENTER>                              to boot with defaults

            <<< timeout in 5 seconds >>>
Select (b)oot or (i)nterpreter:
If necessary, perform a reconfiguration boot on the first node to create the new Solaris device files and links.
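On a SPARC based node, a reconfiguration boot can be performed from the OpenBoot PROM prompt, for example:

```
{0} ok boot -r
```

The -r flag tells the kernel to probe for new devices and rebuild the device tree, creating the /dev and /devices entries for the new LUNs.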
Shut down the second node.
For the procedure about how to shut down a node, see your Sun Cluster system administration documentation.
If you are installing new host adapters, power off the second node.
For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.
Install the host adapters in the second node.
For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.
Cable the RAID storage array to the second node.
Ensure the cable does not exceed bus length limitations. For more information on bus length limitations, see the documentation that shipped with your hardware.
For the procedure about how to cable the storage arrays, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.
Boot the second node.
For the procedure about how to boot cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.
Verify that the second node recognizes the new host adapters and disk drives.
If the node does not recognize the new hardware, check all hardware connections and repeat the installation steps that you performed in Step 14.
SPARC:

{0} ok show-disks
a) /pci@1f,4000/pci@2/scsi@5/sd
b) /pci@1f,4000/pci@2/scsi@4/sd
...
x86:

Adaptec AIC-7899 SCSI BIOS v2.57S4
(c) 2000 Adaptec, Inc. All Rights Reserved.
    Press <Ctrl><A> for SCSISelect(TM) Utility!

 Ch B,  SCSI ID: 0 SEAGATE  ST336605LC        160
        SCSI ID: 1 SEAGATE  ST336605LC        160
        SCSI ID: 6 ESG-SHV  SCA HSBP M18      ASYN
 Ch A,  SCSI ID: 2 SUN      StorEdge 3310     160
        SCSI ID: 3 SUN      StorEdge 3310     160

AMIBIOS (C)1985-2002 American Megatrends Inc.,
Copyright 1996-2002 Intel Corporation
SCB20.86B.1064.P18.0208191106
SCB2 Production BIOS Version 2.08
BIOS Build 1064

2 X Intel(R) Pentium(R) III CPU family 1400MHz
Testing system memory, memory size=2048MB
2048MB Extended Memory Passed
512K L2 Cache SRAM Passed
ATAPI CD-ROM SAMSUNG CD-ROM SN-124

SunOS - Intel Platform Edition    Primary Boot Subsystem, vsn 2.0

            Current Disk Partition Information

    Part#   Status    Type      Start       Length
    ================================================
      1     Active   X86 BOOT     2428       21852
      2              SOLARIS     24280    71662420
      3              <unused>
      4              <unused>
    Please select the partition you wish to boot: * *

Solaris DCB

    loading /solaris/boot.bin

SunOS Secondary Boot version 3.00

    Solaris Intel Platform Edition Booting System

Autobooting from bootpath: /pci@1,0/pci8086,340f@7,1/sd@0,0:a

If the system hardware has changed, or to boot from a different
device, interrupt the autoboot process by pressing ESC.

Press ESCape to interrupt autoboot in 2 seconds.
Initializing system
Please wait...
Warning: Resource Conflict - both devices are added

NON-ACPI device: ISY0050
    Port: 3F0-3F5, 3F7; IRQ: 6; DMA: 2
ACPI device: ISY0050
    Port: 3F2-3F3, 3F4-3F5, 3F7; IRQ: 6; DMA: 2

            <<< Current Boot Parameters >>>
Boot path: /pci@1,0/pci8086,340f@7,1/sd@0,0:a
Boot args:

Type   b [file-name] [boot-flags] <ENTER>   to boot with options
or     i <ENTER>                            to enter boot interpreter
or     <ENTER>                              to boot with defaults

            <<< timeout in 5 seconds >>>
Select (b)oot or (i)nterpreter:
If necessary, perform a reconfiguration boot on the second node to create the new Solaris device files and links.
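On an x86 based node, a reconfiguration boot can be requested at the boot prompt shown in the preceding output, for example:

```
Select (b)oot or (i)nterpreter: b -r
```

As on SPARC, the -r flag causes the kernel to probe for new devices and create the corresponding device files and links.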
For all nodes that are attached to the RAID storage array, verify that the DIDs have been assigned to the LUNs.
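One way to check the device ID assignments is to list the DID mappings on each attached node. The following is a sketch; the instance number, node names, and device paths shown are placeholders, and your output will differ.

```
# scdidadm -L
...
6    phys-schost-1:/dev/rdsk/c1t2d0    /dev/did/rdsk/d6
6    phys-schost-2:/dev/rdsk/c1t2d0    /dev/did/rdsk/d6
...
```

Each LUN should appear with the same DID instance number on every node that is connected to the RAID storage array.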