The following sections give details on how hardware will be installed into racks in the data center and how the storage area network (SAN) will be configured. While these sections do not give exhaustive instructions for hardware installation, they include many topics you should consider when you plan your own instructions.
The following guidelines were adopted to determine mounting locations:
Install all Edge hardware with proper mounts in 900 mm depth racks.
Separate FE hardware and BE hardware.
Place Sun StorEdge 3510 mirrors in different rack cabinets.
Place SAN fabrics in different rack units.
Minimize the number of racks to reduce the footprint in the data center.
Consolidate all backup hardware on a single rack.
See 1.1.2 Racking Diagrams for the rack placement of all servers resulting from the guidelines above.
Racks should be installed in a predetermined footprint area of the data center and connected to redundant power distribution units (PDUs). The following specifications give an example of rack footprint and PDU assignments in a data center:
Rack 01: location C-01E, connect to PDU-3 circuit 26-28 and PDU-2 circuit 68-70.
Rack 02: location C-02E, connect to PDU-3 circuit 25-27 and PDU-2 circuit 67-69.
Rack 03: location C-03E, connect to PDU-3 circuit 29-31 and PDU-2 circuit 75-77.
Rack 04: location C-04E, connect to PDU-3 circuit 34-36 and PDU-2 circuit 80-82.
Rack 05: location C-05E, connect to PDU-4 circuit 80-82 and PDU-3 circuit 17-19.
The following sections list the components in each of the physical servers. All servers of the same type are configured identically.
The following components are installed in the Sun Fire V210 Servers:
DVD-ROM, part number X7410A
Sun Crypto Accelerator, part number X7405A
The following components are installed in the Sun Fire V240 servers used for the management and administration stations:
DVD-ROM, part number X7410A
Sun Crypto Accelerator, part number X7405A
2 additional disks, part number XRA-SC1GB-73G10K
The PCI slots on the back of Sun Fire V440 servers are numbered 0 through 5 from right to left. Components are installed in V440 systems as shown in the following table. HBA stands for host bus adapter, which allows the server to attach to external storage.
Table 2–1 PCI Slot Components in Sun Fire V440 Servers
| PCI Slot No | Frequency | Component | Description |
|---|---|---|---|
| 0 | 33MHz | SG-XPCI2FC-QF2 | PCI Dual HBA |
| 1 | 33MHz | SG-XPCI2FC-QF2 | PCI Dual HBA |
| 2 | 66MHz | SG-XPCI2FC-QF2 | PCI Dual HBA |
| 3 | 33MHz | X3151A | Gigaswift Ethernet |
| 4 | 66MHz | SG-XPCI2FC-QF2 | PCI Dual HBA |
| 5 | 66MHz | X444A | Quad Gigaswift Ethernet |
The following components are installed in the Sun Fire V880 server:
DVD-ROM, part number X6168A
Sun Crypto Accelerator, part number X7405A
6 additional disks, part number X6756A
The Sun StorEdge 3510 arrays used for primary storage are grouped into sets of two JBOD units and one RAID controller unit. The following diagram shows how to connect a 3510 set.
The following sections give an example of cabling guidelines. Details such as cabling are important to specify in your hardware plan to avoid any errors during installation. The following examples give typical information about cabling in a data center.
Power distribution units (PDUs) and circuits are associated with the location of a rack, as shown in 2.1 Rack Installation. Access to PDU circuits should be under or adjacent to labeled floor tiles.
Power is delivered in a power bar (rather than a power whip) with two rows, one for each PDU.
Circuit sockets are on the same row as their associated PDU label. Circuit label is either above or below the actual circuit socket.
Copper harmonica patch panels are labeled under the floor.
Convention to follow: RED is for network; BLACK is for console.
Patch panels for fiber connections are located under the floor tiles and labeled on the tile or on the patch panels themselves.
Switch fiber patch panels are on the right side of the switch location. You may need to reverse the fibers at the switch patch panels if you do not have connectivity.
You will need to connect fiber from the switch patch panel over to the switches once ports are assigned.
Do not leave drooping cables or fiber. Use cables of the correct length in all racking.
Outside the corporate network is purple (most cables on the network rack will be purple because most of our servers are outside the corporate network).
Backup is yellow.
Inside the corporate network is green.
Network cables between the server and the below-floor cable harmonica can be white or gray, and must be purple in the telecom rack; all are Cat 5e plenum rated.
Console cables must be blue, both in the telecom rack and between the server and the below-floor cable harmonica; all are Cat 5e plenum rated.
All network, console, and fiber cables must be labeled with the cable length in the middle. Network and console connections need the server name, harmonica port, switch name, and blade/port information on both ends. Fiber connections need the from and to footprint locations, device names, and board/slot information on both ends.
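As an illustration of the labeling rules above, a small shell sketch can compose the middle and end labels. The helper names and all field values here are invented examples, not part of the specification.

```shell
# Compose cable labels per the scheme above. Helper names and all
# field values are invented examples.
mid_label() {  # mid_label <length_in_meters>
  echo "len ${1}m"
}
end_label() {  # end_label <server> <harmonica_port> <switch> <blade_port>
  echo "$1 | harm $2 | $3 $4"
}

mid_label 3                                  # prints: len 3m
end_label phys-bedge1-1 A3 sw-net-01 1/12    # prints: phys-bedge1-1 | harm A3 | sw-net-01 1/12
```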
The following sections describe the storage area network implementation details for an Edge complex.
The design of the SAN provides the following benefits:
Has no single point of failure
Provides a minimum of 2 paths from each component to the SAN fabric
Minimizes the performance impact to the I/O subsystem from single device failures
Leverages multipathing to maximize I/O throughput
Uses standard configurations
Provides ease of maintenance
The SAN consists of 2 independent fabrics in a mesh topology. All storage and hosts connect to both fabrics. Each fabric is made up of 3 switches connected through inter-switch links (ISLs) to form a mesh configuration, as shown in the following figure.
The following table lists the domain and ISL ports for each switch in the two SAN fabrics.
Table 2–2 Switch Details for Each SAN Fabric
| Fabric Name | Switch Name | Domain ID | Switch Location | ISL Ports |
|---|---|---|---|---|
| SAN A | fs-amer-01 | 11 | rack-amer-03 | 0, 1, 2, 3 |
| SAN A | fs-amer-03 | 13 | rack-amer-03 | 0, 1, 2, 3 |
| SAN A | fs-amer-05 | 15 | rack-amer-03 | 0, 1, 2, 3 |
| SAN B | fs-amer-02 | 12 | rack-amer-04 | 0, 1, 2, 3 |
| SAN B | fs-amer-04 | 14 | rack-amer-04 | 0, 1, 2, 3 |
| SAN B | fs-amer-06 | 16 | rack-amer-04 | 0, 1, 2, 3 |
McData 4500 switches support only one zoneset configured and activated at any point.
All zones in a fabric are configured based on the world-wide names (WWNs) of the nodes and storage systems.
Default zones in the fabric are disabled, and all ports not in use are disabled, in order to avoid unintended access.
There are separate zones for each cluster, for backup, and for the management network.
All ports connecting to tape drives will have RSCN (Registered State Change Notification) disabled in order to avoid interruptions to backups.
All configurations are backed up periodically, and all changes to SAN configurations need careful evaluation and review.
The following table describes the port assignments for hosts and storage arrays.
Table 2–3 Port Numbers for Connections to SAN
| Host Name | fs-amer-01 (SAN A) | fs-amer-03 (SAN A) | fs-amer-05 (SAN A) | fs-amer-02 (SAN B) | fs-amer-04 (SAN B) | fs-amer-06 (SAN B) |
|---|---|---|---|---|---|---|
| ISLs | ports 0,1,2,3 | ports 0,1,2,3 | ports 0,1,2,3 | ports 0,1,2,3 | ports 0,1,2,3 | ports 0,1,2,3 |
| phys-bedge1-1 | port 4 slot 2 | port 5 slot 4-1 | | port 4 slot 4-0 | port 5 slot 5 | |
| phys-bedge2-1 | | port 4 slot 2 | port 5 slot 4-1 | | port 4 slot 4-0 | port 5 slot 5 |
| phys-bedge3-1 | port 5 slot 4-1 | | port 4 slot 2 | port 5 slot 5 | | port 4 slot 4-0 |
| phys-bedge4-1 | port 6 slot 2 | port 7 slot 4-1 | | port 6 slot 4-0 | port 7 slot 5 | |
| phys-bedge5-1 | | port 6 slot 2 | port 7 slot 4-1 | | port 6 slot 4-0 | port 7 slot 5 |
| phys-bedge1-2 | port 7 slot 4-1 | | port 6 slot 2 | port 7 slot 5 | | port 6 slot 4-0 |
| phys-bedge2-2 | port 8 slot 2 | port 9 slot 4-1 | | port 8 slot 4-0 | port 9 slot 5 | |
| phys-bedge3-2 | | port 8 slot 2 | port 9 slot 4-1 | | port 8 slot 4-0 | port 9 slot 5 |
| phys-bedge4-2 | port 9 slot 4-1 | | port 8 slot 2 | port 9 slot 5 | | port 8 slot 4-0 |
| phys-bedge5-2 | port 10 slot 2 | port 11 slot 4-1 | | port 10 slot 4-0 | port 11 slot 5 | |
| amer-minnow-01 | port 22 chl 0 | port 23 chl 1 | | port 20 chl 5 | port 21 chl 4 | |
| amer-minnow-03 | | port 22 chl 0 | port 23 chl 1 | | port 20 chl 5 | port 21 chl 4 |
| amer-minnow-05 | port 23 chl 1 | | port 22 chl 0 | port 21 chl 4 | | port 20 chl 5 |
| amer-minnow-02 | port 20 chl 0 | port 21 chl 1 | | port 22 chl 5 | port 23 chl 4 | |
| amer-minnow-04 | | port 20 chl 0 | port 21 chl 1 | | port 22 chl 5 | port 23 chl 4 |
| amer-minnow-06 | port 21 chl 1 | | port 20 chl 0 | port 23 chl 4 | | port 22 chl 5 |
The following procedures explain how to set up and configure the SAN hardware.
Connect a crossover Ethernet cable between the switch and management station.
Initialize the interface with the following IP:
ifconfig ce1 plumb
ifconfig ce1 10.1.1.11 netmask 255.0.0.0 broadcast + up
Log in to the management station, launch a web browser, and type the default IP address for the switch: 10.1.1.10.
When the web browser prompts with login and password screen, provide the following information to access the SAN pilot interface:
Login: Administrator
Password: password
On the SAN pilot interface, select the configure option on left panel and choose the Network tab. Assign the IP and netmask to the switch.
Select the Identification tab and update the switch name.
Select the Date and Time tab and update it with accurate information.
Select the Parameter tab and update the switch's domain ID based on Table 2–2.
Enable the persistent domain ID to ensure the switch will initialize with the proper domain id. Leave the default value for the other parameters.
Set the switch to Interop mode.
Click on Activate to enable the new settings.
Each switch comes with 8 ports activated. You need to activate the remaining ports on each switch by installing the activation keys:
Install the EFCM Lite management software on the management station mgmt-amer-01.
Insert the EFCM Lite CD into the CD-ROM drive.
Change directory to /cdrom/efcm81_solaris.
Run the EFCM Lite installer and specify the default location /opt/ when prompted:
# ./installer
Start the EFC manager over a secure connection:
```
# ssh -X mgmt-amer-01.us
# cd /opt/EFCM81/bin
# /opt/EFCM81/bin/EFC_Manager Start
# exit
```
Launch the EFC client with the following command:
/opt/EFCM81/bin/EFC_client
The following are some of the principles used in creating zones and zone sets:
Each fabric will have one zone set, named AMER_SAN_A or AMER_SAN_B.
All zones are created based on WWNs.
Each cluster will have a separate zone in each fabric, consisting of the WWNs of the HBAs of both cluster nodes and the corresponding Minnows.
All zones are named with following naming convention:
instance_zone_fabric
For example, cluster 1 will have 2 zones, one in each fabric, named edge1_zone_A and edge1_zone_B. The following table lists all of the zone names and the zone members.
| Zone Set | Zone Name | Members | Storage Channels |
|---|---|---|---|
| SAN-A | edge1_zone_A | phys-bedge1-1, phys-bedge1-2 | amer-minnow-01 chl0, chl1; amer-minnow-02 chl0, chl1 |
| | edge2_zone_A | phys-bedge2-1, phys-bedge2-2 | amer-minnow-03 chl0, chl1; amer-minnow-04 chl0, chl1 |
| | edge3_zone_A | phys-bedge3-1, phys-bedge3-2 | amer-minnow-05 chl0, chl1; amer-minnow-06 chl0, chl1 |
| | edge4_zone_A | phys-bedge4-1, phys-bedge4-2 | amer-minnow-01, 02, 03, 04 chl0; amer-minnow-05, 06 chl0, chl1 |
| | edge5_zone_A | phys-bedge5-1, phys-bedge5-2 | amer-minnow-01 chl0, chl1; amer-minnow-02 chl0, chl1 |
| | backup_zone_A | bu-amer-01, all cluster nodes | amer-minnow-01, 02, 03, 04, 05, 06 |
| SAN-B | edge1_zone_B | phys-bedge1-1, phys-bedge1-2 | amer-minnow-01 chl4, chl5; amer-minnow-02 chl4, chl5 |
| | edge2_zone_B | phys-bedge2-1, phys-bedge2-2 | amer-minnow-03 chl4, chl5; amer-minnow-04 chl4, chl5 |
| | edge3_zone_B | phys-bedge3-1, phys-bedge3-2 | amer-minnow-05 chl4, chl5; amer-minnow-06 chl4, chl5 |
| | edge4_zone_B | phys-bedge4-1, phys-bedge4-2 | amer-minnow-01, 02, 03, 04 chl4; amer-minnow-05, 06 chl4, chl5 |
| | edge5_zone_B | phys-bedge5-1, phys-bedge5-2 | amer-minnow-01 chl4, chl5; amer-minnow-02 chl4, chl5 |
| | backup_zone_B | bu-amer-01, all cluster nodes | amer-minnow-01, 02, 03, 04, 05, 06 |
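The instance_zone_fabric naming convention above is regular enough to regenerate in shell. The sketch below assumes instances 1 through 5 and fabrics A and B, as in the table (backup zones excluded); the function name is ours.

```shell
# Regenerate the edge zone names from the instance_zone_fabric convention.
gen_zone_names() {
  for i in 1 2 3 4 5; do
    for f in A B; do
      echo "edge${i}_zone_${f}"
    done
  done
}

gen_zone_names    # edge1_zone_A, edge1_zone_B, ... edge5_zone_B
```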
Obtain the hbamap script and the Solaris Device Path Decoder from your Sun representative. The hbamap script gathers the WWNs of the HBA ports.
Copy hbamap to a cluster node and run the script:
```
root@phys-bedge2-1:# /var/tmp/hbamap
FOUND PATH TO 6 LEADVILLE HBA PORTS
===================================
C#  INST  PORT WWN          MODEL    FCODE    STATUS         DEVICE PATH
------------------------------------------------
c3  qlc0  210000e08b1b08a6  ISP2312  1.14.09  NOT CONNECTED  /pci@1c,600000/SUNW,qlc@1
c4  qlc1  210100e08b3b08a6  ISP2312  1.14.09  CONNECTED      /pci@1c,600000/SUNW,qlc@1,1
c5  qlc2  210000e08b1b8ba3  ISP2312  1.14.09  CONNECTED      /pci@1d,700000/SUNW,qlc@1
c6  qlc3  210100e08b3b8ba3  ISP2312  1.14.09  CONNECTED      /pci@1d,700000/SUNW,qlc@1,1
c7  qlc4  210000e08b1bc5a4  ISP2312  1.14.09  NOT CONNECTED  /pci@1d,700000/SUNW,qlc@2
c8  qlc5  210100e08b3bc5a4  ISP2312  1.14.09  CONNECTED      /pci@1d,700000/SUNW,qlc@2,1
```
Map the controller with the slot number on the V440 system boards using the Solaris Device Path Decoder. For example:
```
/devices/pci@1c,600000/SUNW,qlc@1/fp@0,0:devctl   : PCI Slot 5 Port 0
/devices/pci@1c,600000/SUNW,qlc@1,1/fp@0,0:devctl : PCI Slot 5 Port 1
/devices/pci@1d,700000/SUNW,qlc@1/fp@0,0:devctl   : PCI Slot 4 Port 0
/devices/pci@1d,700000/SUNW,qlc@1,1/fp@0,0:devctl : PCI Slot 4 Port 1
/devices/pci@1d,700000/SUNW,qlc@2/fp@0,0:devctl   : PCI Slot 2 Port 0
/devices/pci@1d,700000/SUNW,qlc@2,1/fp@0,0:devctl : PCI Slot 2 Port 1
```
Login to the management station and launch a web browser. You may need to download and install one first.
In the URL field, enter the switch name and log in using the following credentials:
Login: Administrator
Password: password (default is password)
Go to Configure->Zoneset and change the Zoneset name.
Go to the Zones tab and create zones by entering a zone name and clicking Add Zones.
Click a zone name to bring up another window where you can enter WWNs and add them to zones. The WWNs must be entered in colon-separated form, like a MAC address; for example, 210000e08b1b08a6 should be entered as 21:00:00:e0:8b:1b:08:a6.
After entering all of the WWNs, go back to the Zoneset tab and click Save and Activate Zoneset. This forces an update of the zones to all switches in the fabric, and the changes should then be visible to the Solaris operating system.
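The colon-separated WWN form described in the step above can be produced with a one-line sed. The helper name below is ours; the sample WWN is from the hbamap output shown earlier.

```shell
# Insert a colon after every two hex digits, then drop the trailing colon.
wwn_colons() {
  echo "$1" | sed 's/../&:/g; s/:$//'
}

wwn_colons 210000e08b1b08a6   # prints: 21:00:00:e0:8b:1b:08:a6
```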
The following procedures configure the 3510 hardware (minnows) after rack installation.
Connect the serial port to a laptop and launch HyperTerminal, or connect from a terminal on a Solaris system.
When prompted by the menu-driven program, select vt100 mode.
Navigate to “View and Edit configuration parameters”, then select Communication Parameters and Select Internet Protocol (TCP/IP).
Enter the IP address and netmask that you have assigned to the array.
Exit the serial connection and make sure that you can now telnet to the array from the management station. Once this is established, all configuration will be done through the command-line interface.
Download the latest version of the command-line interface package, SUNWsccli, and install it on the management station.
Run the following command to connect to the 3510 with an interactive prompt:
```
# sccli amer-minnow-01
sccli: selected se3000://172.31.0.141:58632 [SUN StorEdge 3510 SN#084DCD]
```
Set the chassis ID on each controller and JBOD as shown in Figure 2–1, and verify that you can see all disks by running the show disks command.
Verify the following parameters:
| Parameter Group | Parameter | Value |
|---|---|---|
| Controller parameters | redundancy mode | active-active |
| | redundancy status | enabled |
| Drive parameters | auto-global-spare | enabled |
| Host parameters | fiber connection mode | point-to-point (SAN) |
| | controller name | amer-minnow-nn |
| Cache policy | mode | write-back |
| | optimization | sequential |
```
sccli> show redundancy-mode
Primary controller serial number: 8040703
Redundancy mode: Active-Active
Redundancy status: Enabled
Secondary controller serial number: 8040608

sccli> show drive-parameters
spin-up: disabled
reset-at-power-up: enabled
disk-access-delay: 15s
scsi-io-timeout: 30s
queue-depth: 32
polling-interval: 0ms
enclosure-polling-interval: 30s
auto-detect-swap-interval: 0ms
smart: disabled
auto-global-spare: disabled

sccli> set drive-parameters auto-global-spare enabled

sccli> set controller amer-minnow-01
sccli> show controller
controller-name: "amer-minnow-01"

sccli> show host-parameters
max-luns-per-id: 32
queue-depth: 1024
fibre connection mode: point to point

sccli> show cache-policy
mode: write-back
optimization: sequential
```
Gather the WWNs of the minnows and add them to zones accordingly:
```
sccli> show port-wwns
Ch  Id  WWPN
-------------------------
0   40  216000C0FF884DCD
1   42  226000C0FFA84DCD
4   44  256000C0FFC84DCD
5   46  266000C0FFE84DCD

sccli> show ses
Ch  Id  Chassis  Vendor/Product ID    Rev  PLD   WWNN              WWPN              Topology
----------------------------------------------------------------------------------------------
2   12  084DCD   SUN StorEdge 3510F A 1040 1000  204000C0FF084DCD  214000C0FF084DCD  loop(a)
2   28  08036B   SUN StorEdge 3510F D 1040 1000  205000C0FF08036B  215000C0FF08036B  loop(a)
2   44  07D493   SUN StorEdge 3510F D 1040 1000  205000C0FF07D493  215000C0FF07D493  loop(a)
3   12  084DCD   SUN StorEdge 3510F A 1040 1000  204000C0FF084DCD  224000C0FF084DCD  loop(b)
3   28  08036B   SUN StorEdge 3510F D 1040 1000  205000C0FF08036B  225000C0FF08036B  loop(b)
3   44  07D493   SUN StorEdge 3510F D 1040 1000  205000C0FF07D493  225000C0FF07D493  loop(b)
```
Each 3510 has 2 logical drives and 1 global spare. One logical drive consists of 6 drives (RAID5) and the other consists of 5 drives (RAID5), so you end up with one logical disk of 682GB and another of 545GB. Disk 11 is always the global spare. All minnows are configured identically.
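The 682GB and 545GB figures follow from RAID5's (n-1)-drive usable capacity with the 136.73GB disks shown later in the show disk output. The nominal arithmetic below lands slightly above the 682.39/545.91GB the array actually reports; the difference is controller overhead. The helper function is illustrative only.

```shell
# Nominal RAID5 usable capacity: (n - 1) data drives per logical drive.
raid5_gb() {  # raid5_gb <num_drives> <drive_size_gb>
  awk -v n="$1" -v s="$2" 'BEGIN { printf "%.2f\n", (n - 1) * s }'
}

raid5_gb 6 136.73   # prints: 683.65  (6-drive RAID5)
raid5_gb 5 136.73   # prints: 546.92  (5-drive RAID5)
```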
Each logical drive is further divided into 4 volumes. Logical drives ld0, ld2, and ld4, of size 682 GB, are divided into the following volume sizes:
ld0:00 = 20GB
ld0:01 = 227GB
ld0:02 = 227GB
ld0:03 = 227GB
ld0:04 = 5MB (leftover)
Logical drives ld1, ld3, and ld5, of size 545 GB, are divided into the following volume sizes:
ld1:00 = 20GB
ld1:01 = 180GB
ld1:02 = 180GB
ld1:03 = 180GB
ld1:04 = 5MB (leftover)
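The carve-up above is regular enough to script: one 20 GB volume plus three equal data volumes per logical drive. The sketch below emits sccli partition commands matching the sizes used later in the configuration; the function name is ours and the output is only a command list, not an applied change.

```shell
# Emit the four sccli "configure partition" commands for one logical drive.
carve_ld() {  # carve_ld <logical_drive> <data_volume_mb>
  ld="$1"; mb="$2"
  echo "configure partition ${ld}-00 20g"
  for p in 01 02 03; do
    echo "configure partition ${ld}-${p} ${mb}m"
  done
}

carve_ld ld0 226090   # the four ld0 partition commands
carve_ld ld1 179510   # the four ld1 partition commands
```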
Delete the existing logical drive configured by default:
```
sccli> show logical
sccli> unmap ld0
sccli> unmap ld1
sccli> delete logical-drive ld1
sccli> delete logical-drive ld0
```
Remove the unneeded global spare. By default, each 3510 brick will have 2 global spares, and you need to remove one of them:
```
sccli> show disk 2.5
Ch  Id  Size      Speed  LD  Status           IDs                     Rev
-------------------------------------------------------------------
2   5   136.73GB  200MB      GLOBAL STAND-BY  SEAGATE ST314FSUN146G  0307
                                              S/N 3HY87KSM00007445
sccli> unconfigure global-spare 2.5
```
Create the logical drives. This process is the same for all minnows; they are all carved up logically the same way. This will take a couple of hours.
```
sccli> create logical-drive raid5 2.0 2.1 2.2 2.3 2.4 2.5
sccli: created logical drive 08C6C8D4
sccli> create logical-drive raid5 2.6 2.7 2.8 2.9 2.10
sccli: created logical drive 55689384
sccli> create logical-drive raid5 2.16 2.17 2.18 2.19 2.20 2.21
sccli: created logical drive 68B4C07E
sccli> create logical-drive raid5 2.22 2.23 2.24 2.25 2.26
sccli: created logical drive 2B7F3FDA
sccli> create logical-drive raid5 2.32 2.33 2.34 2.35 2.36 2.37
sccli: created logical drive 014D9F13
sccli> create logical-drive raid5 2.38 2.39 2.40 2.41 2.42
sccli: created logical drive 189DAEFF
```
Configure the global spares as follows:
```
sccli> configure global-spare 2.43
sccli> configure global-spare 2.27
sccli> configure global-spare 2.11
```
After all logical drives are built, they need to be assigned to either the primary or the secondary controller. By default, all logical drives are assigned to the primary controller. For better distribution of I/O load, logical drives ld0, ld2, and ld4 stay assigned to the primary controller, while ld1, ld3, and ld5 need to be reassigned to the secondary controller. This process must be done through telnet.
```
telnet amer-minnow-01
CTL-L (select vt100 if prompted for a terminal type)
---
"View and Edit Logical Drives"
  select LD1
  "Logical Drive Assignments"
  "Redundant Controller Logical Drive Assign to Secondary Controller"
  "Yes"
  Query: "Do you want to Reset the Controller now?"
  "No"
```
Repeat the above sequence for LD3 and LD5, except you must reset the controller after reassigning LD5.
Verify the logical drive assignments by using the sccli interface:
```
# sccli amer-minnow-01
sccli: selected se3000://172.31.0.141:58632 [SUN StorEdge 3510 SN#084DCD]
sccli> show logical
LD   LD-ID     Size      Assigned   Type   Disks  Spare  Failed  Status
---------------------------------------------------------------------
ld0  08C6C8D4  682.39GB  Primary    RAID5  6      3      0       Good
ld1  55689384  545.91GB  Secondary  RAID5  5      3      0       Good
ld2  68B4C07E  682.39GB  Primary    RAID5  6      3      0       Good
ld3  2B7F3FDA  545.91GB  Secondary  RAID5  5      3      0       Good
ld4  014D9F13  682.39GB  Primary    RAID5  6      3      0       Good
ld5  189DAEFF  545.91GB  Secondary  RAID5  5      3      0       Good
```
Create the logical volumes, called partitions, with the following commands:
```
sccli> configure partition ld0-00 20g
sccli> configure partition ld0-01 226090m
sccli> configure partition ld0-02 226090m
sccli> configure partition ld0-03 226090m
sccli> configure partition ld2-00 20g
sccli> configure partition ld2-01 226090m
sccli> configure partition ld2-02 226090m
sccli> configure partition ld2-03 226090m
sccli> configure partition ld4-00 20g
sccli> configure partition ld4-01 226090m
sccli> configure partition ld4-02 226090m
sccli> configure partition ld4-03 226090m
sccli> configure partition ld1-00 20g
sccli> configure partition ld1-01 179510m
sccli> configure partition ld1-02 179510m
sccli> configure partition ld1-03 179510m
sccli> configure partition ld3-00 20g
sccli> configure partition ld3-01 179510m
sccli> configure partition ld3-02 179510m
sccli> configure partition ld3-03 179510m
sccli> configure partition ld5-00 20g
sccli> configure partition ld5-01 179510m
sccli> configure partition ld5-02 179510m
sccli> configure partition ld5-03 179510m
```
Verify the logical partitions created. Notice that the leftover disk space falls under the ld0-04 partition:
```
sccli> show part
LD/LV   ID-Partition  Size
-------------------------------
ld0-00  08C6C8D4-00   20.00GB
ld0-01  08C6C8D4-01   220.79GB
ld0-02  08C6C8D4-02   220.79GB
ld0-03  08C6C8D4-03   220.79GB
ld0-04  08C6C8D4-04   17MB
[...]
```
Reset the controller when configuration is complete and recheck partitions.
```
sccli> reset controller
WARNING: This is a potentially dangerous operation. The controller
will go offline for several minutes. Data loss may occur if the
controller is currently in use.
Are you sure? y
sccli: shutting down controller...
sccli: controller is shut down
sccli: resetting controller...
sccli> show part
[...]
```
The next step in configuring the 3510s is to map logical disks to controller channels. The following tasks need to be done:
Each logical partition needs to be mapped to a channel on both controllers in order for systems to see it as a disk.
Each logical partition is mapped to a specific set of hosts, as shown in Figure 2–5 below. We use LUN masking on the minnows to filter maps to specific host WWNs.
Each channel on the minnows needs one logical disk mapped to LUN 0, free of LUN masking. This is a requirement for each of the hosts to see the other volumes. We use the leftover partition for this purpose, mapping it to LUN 0 on each channel without any host filter.
Each logical partition must be mapped to channels on both the primary and secondary controllers, so the channel assignments vary with the logical drive. Drives assigned to the primary controller (ld0, ld2, and ld4) are mapped as follows:
Channel 0, target 40, is on controller 1
Channel 4, target 44, is on controller 2
Drives assigned to the secondary controller (ld1, ld3, and ld5) are mapped as follows:
Channel 1, target 42, is on controller 2
Channel 5, target 46, is on controller 1
All LUN mappings are listed in the following table and color coded in Figure 2–5 below.
Table 2–5 LUN Mappings
| 3510 Name | Logical Drives | Logical Volumes | Hosts Mapped |
|---|---|---|---|
| amer-minnow-01, amer-minnow-02 | ld0, ld2 | ld0-00, ld2-00 | phys-bedge5-1, phys-bedge5-2 |
| | ld4 | ld4-00, ld4-01, ld4-02, ld4-03 | phys-bedge4-1, phys-bedge4-2 |
| | ld0, ld1, ld2, ld3, ld5 | ld0-01, 02, 03; ld1-01, 02, 03; ld2-00, 01, 02, 03; ld3-00, 01, 02, 03; ld5-00, 01, 02, 03 | phys-bedge1-1, phys-bedge1-2 |
| | ld1 | ld1-00 | phys-bedge1-2 (ldap) |
| amer-minnow-03, amer-minnow-04 | ld0, ld2 | ld0-00, ld2-00 | phys-bedge6-1, phys-bedge6-2 |
| | ld4 | ld4-00, ld4-01, ld4-02, ld4-03 | phys-bedge4-1, phys-bedge4-2 |
| | ld0, ld1, ld2, ld3, ld5 | ld0-01, 02, 03; ld1-01, 02, 03; ld2-00, 01, 02, 03; ld3-00, 01, 02, 03; ld5-00, 01, 02, 03 | phys-bedge2-1, phys-bedge2-2 |
| | ld1 | ld1-00 | phys-bedge2-2 (ldap) |
| amer-minnow-05, amer-minnow-06 | ld0 | ld0-00 (ldap) | phys-bedge3-2 |
| | ld2 | ld2-00 (IM) | phys-bedge5-2 |
| | ld4, ld5 | ld4-00, 01, 02, 03; ld5-00, 01, 02, 03 | phys-bedge4-1, phys-bedge4-2 |
| | ld0, ld1, ld2, ld3 | ld0-01, 02, 03; ld1-00, 01, 02, 03; ld2-00, 01, 02, 03; ld3-00, 01, 02, 03 | phys-bedge3-1, phys-bedge3-2 |
Create an alias for each of the host WWNs on the minnows, which makes mapping disks much easier and helps with troubleshooting. We need to create 4 aliases for each system because each has 4 paths (WWNs):
```
sccli> create host-wwn-name 210100E08B3B4CA5 phys-bedge1-1-c4
sccli> create host-wwn-name 210000E08B1B58A4 phys-bedge1-1-c5
sccli> create host-wwn-name 210100E08B3B58A4 phys-bedge1-1-c6
sccli> create host-wwn-name 210100E08B3B8BA4 phys-bedge1-1-c8
[...]
```
Note: c4, c5, c6, and c8 are the controller names seen by the system for each HBA port, gathered as part of the hbamap script in To Create the Zones or by running cfgadm -al.
Verify that you have created all the aliases required:
```
sccli> show host-wwn
Host-ID/WWN       Name
--------------------------------------
210100E08B3B4CA5  phys-bedge1-1-c4
210000E08B1B58A4  phys-bedge1-1-c5
210100E08B3B58A4  phys-bedge1-1-c6
210000E08B1B8BA4  phys-bedge1-1-c7
210100E08B3B8BA4  phys-bedge1-1-c8
210100E08B3B66A5  phys-bedge1-2-c4
210000E08B1B70A9  phys-bedge1-2-c5
210100E08B3B70A9  phys-bedge1-2-c6
210000E08B1BB4A2  phys-bedge1-2-c7
210100E08B3BB4A2  phys-bedge1-2-c8
210100E08B3B08A6  phys-bedge2-1-c4
[...]
```
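The alias-creation step above can also be scripted from a simple "WWN name" list. The function name is ours, and the here-document entries are copied from the alias listing; the script only prints the sccli commands, it does not run them.

```shell
# Turn "wwn alias" pairs into sccli create host-wwn-name commands.
gen_aliases() {
  while read wwn name; do
    [ -n "$wwn" ] && echo "create host-wwn-name $wwn $name"
  done
}

gen_aliases <<EOF
210100E08B3B4CA5 phys-bedge1-1-c4
210000E08B1B58A4 phys-bedge1-1-c5
EOF
```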
Map all of the LUNs using the sccli interface, according to Table 2–5. The leftover LUNs ld0-04 and ld1-04 are mapped to channels without host filters:
```
sccli> map ld0-04 0.40.0
sccli> map ld0-01 0.40.1 phys-bedge1-1-c6
sccli> map ld0-01 0.40.1 phys-bedge1-1-c8
sccli> map ld0-01 0.40.1 phys-bedge1-2-c6
sccli> map ld0-01 0.40.1 phys-bedge1-2-c8
sccli> map ld0-02 0.40.2 phys-bedge1-1-c6
[...]
```
The complete list of mapping commands for each minnow is given in Appendix A, Logical Unit (LUN) Mapping.
Verify the mapping on each minnow by running the show map command:
```
sccli> show map
Ch  Tgt  LUN  ld/lv  ID-Partition  Assigned  Filter Map
--------------------------------------------------------------------------
0   40   0    ld0    08C6C8D4-04   Primary
0   40   1    ld0    08C6C8D4-01   Primary   210100E08B3B58A4 {phys-bedge1-1-c6}
0   40   1    ld0    08C6C8D4-01   Primary   210100E08B3B8BA4 {phys-bedge1-1-c8}
0   40   1    ld0    08C6C8D4-01   Primary   210100E08B3B70A9 {phys-bedge1-2-c6}
0   40   1    ld0    08C6C8D4-01   Primary   210100E08B3BB4A2 {phys-bedge1-2-c8}
0   40   2    ld0    08C6C8D4-02   Primary   210100E08B3B58A4 {phys-bedge1-1-c6}
0   40   2    ld0    08C6C8D4-02   Primary   210100E08B3B8BA4 {phys-bedge1-1-c8}
0   40   2    ld0    08C6C8D4-02   Primary   210100E08B3B70A9 {phys-bedge1-2-c6}
0   40   2    ld0    08C6C8D4-02   Primary   210100E08B3BB4A2 {phys-bedge1-2-c8}
[...]
```
Save the configuration of the Sun StorEdge 3510 FC array using the firmware application “Saving Configuration (NVRAM) to a Disk” and the Configuration Service Console's “Save Configuration” utility.
This procedure needs to be performed on each of the systems connected to the storage area network (SAN) created by the minnow systems.
Verify that all SAN foundation software packages and patches are installed. This is done through the puppet system.
Enable Solaris multipathing by editing /kernel/drv/scsi_vhci.conf and setting mpxio-disable="no". Reboot the system.
Use the cfgadm -al command to view all disks recognized by the operating system:
```
root[bash]@phys-bedge1-2# cfgadm -al
Ap_Id                 Type       Receptacle  Occupant      Condition
c0                    scsi-bus   connected   configured    unknown
c0::dsk/c0t0d0        CD-ROM     connected   configured    unknown
c1                    scsi-bus   connected   configured    unknown
c1::dsk/c1t0d0        disk       connected   configured    unknown
c1::dsk/c1t1d0        disk       connected   configured    unknown
c2                    scsi-bus   connected   unconfigured  unknown
c3                    fc         connected   unconfigured  unknown
c4                    fc-fabric  connected   configured    unknown
c4::210100e08b3b1fa9  unknown    connected   unconfigured  unknown
c4::256000c0ffc84dcd  disk       connected   unconfigured  unknown
c4::256000c0ffc84ddc  disk       connected   unconfigured  unknown
c4::266000c0ffe84dcd  disk       connected   unconfigured  unknown
c4::266000c0ffe84ddc  disk       connected   unconfigured  unknown
```
Configure each of the disks with the cfgadm -c configure option, for example:
```
root[bash]@phys-bedge1-2# cfgadm -c configure c4::266000c0ffe84dcd
```
Verify that each disk was configured, for example:
```
root[bash]@phys-bedge1-2# cfgadm -al | grep c4::266000c0ffe84dcd
c4::266000c0ffe84dcd  disk  connected  configured  unknown
```
Now run format, and you will be able to see the disks. Make sure that their paths contain /scsi_vhci/*.
```
Searching for disks...done

AVAILABLE DISK SELECTIONS:
  0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
     /pci@1f,700000/scsi@2/sd@0,0
  1. c1t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
     /pci@1f,700000/scsi@2/sd@1,0
  2. c1t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
     /pci@1f,700000/scsi@2/sd@2,0
  3. c1t3d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
     /pci@1f,700000/scsi@2/sd@3,0
  4. c9t600C0FF000000000084DCD2B7F3FDA00d0 <SUN-StorEdge3510-327R cyl 20478 alt 2 hd 64 sec 32>
     /scsi_vhci/ssd@g600c0ff000000000084dcd2b7f3fda00
```
Configure all disks seen on each of the HBAs. In this design, each disk will have 4 redundant paths. To verify the number of paths available, run the luxadm command:
```
root[bash]@phys-bedge1-2# luxadm display \
    /dev/rdsk/c9t600C0FF000000000084DDC30300FD702d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c9t600C0FF000000000084DDC30300FD702d0s2
  Vendor:               SUN
  Product ID:           StorEdge 3510
  Revision:             327R
  Serial Num:           084DDC30300F
  Unformatted capacity: 179510.000 MBytes
  Read Cache:           Enabled
    Minimum prefetch:   0x0
    Maximum prefetch:   0xffff
  Device Type:          Disk device
  Path(s):

  /dev/rdsk/c9t600C0FF000000000084DDC30300FD702d0s2
  /devices/scsi_vhci/ssd@g600c0ff000000000084ddc30300fd702:c,raw
   Controller           /devices/pci@1c,600000/SUNW,qlc@1,1/fp@0,0
    Device Address      266000c0ffe84ddc,6
    Host controller port WWN 210100e08b3b66a5
    Class               primary
    State               ONLINE
   Controller           /devices/pci@1d,700000/SUNW,qlc@1,1/fp@0,0
    Device Address      226000c0ffa84ddc,6
    Host controller port WWN 210100e08b3b70a9
    Class               primary
    State               ONLINE
   Controller           /devices/pci@1d,700000/SUNW,qlc@1/fp@0,0
    Device Address      266000c0ffe84ddc,6
    Host controller port WWN 210000e08b1b70a9
    Class               primary
    State               ONLINE
   Controller           /devices/pci@1d,700000/SUNW,qlc@2,1/fp@0,0
    Device Address      226000c0ffa84ddc,6
    Host controller port WWN 210100e08b3bb4a2
    Class               primary
    State               ONLINE
```
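A quick way to check the four-path claim is to count the Controller entries in luxadm display output. The helper name is ours, and the here-document below stands in for real luxadm output; in practice you would pipe luxadm display into it.

```shell
# Count path entries (Controller lines) in luxadm display output.
count_paths() {
  grep -c '^ *Controller'
}

# Sample input standing in for: luxadm display /dev/rdsk/... | count_paths
count_paths <<EOF
   Controller           /devices/pci@1c,600000/SUNW,qlc@1,1/fp@0,0
   Controller           /devices/pci@1d,700000/SUNW,qlc@1,1/fp@0,0
   Controller           /devices/pci@1d,700000/SUNW,qlc@1/fp@0,0
   Controller           /devices/pci@1d,700000/SUNW,qlc@2,1/fp@0,0
EOF
```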