Deployment Example: Sun Java System Communications Services for Access Anywhere (EdgeMail)

2.2.2 SAN Configuration

The following procedures explain how to set up and configure the SAN hardware.

Procedure: To Configure McData 4500 Switches

Steps
  1. Connect a crossover Ethernet cable between the switch and management station.

  2. Initialize the interface with the following commands:

    • ifconfig ce1 plumb

    • ifconfig ce1 10.1.1.11 netmask 255.0.0.0 broadcast + up
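
    A quick check that the interface is up (a sketch; Solaris ifconfig output, abbreviated):

    # ifconfig ce1
    ce1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500
            inet 10.1.1.11 netmask ff000000 broadcast 10.255.255.255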

  3. Log in to the management station, launch a web browser, and enter the switch's default IP address, 10.1.1.10.

  4. When the web browser prompts with the login and password screen, provide the following information to access the SANpilot interface:

    • Login: Administrator

    • Password: password

  5. On the SANpilot interface, select the Configure option on the left panel and choose the Network tab. Assign the IP address and netmask to the switch.

  6. Select the Identification tab and update the switch name.

  7. Select the Date and Time tab and update it with accurate information.

  8. Select the Parameter tab and update the switch's domain ID based on Table 2–2.

  9. Enable the persistent domain ID to ensure that the switch initializes with the proper domain ID. Leave the default values for the other parameters.

  10. Set the switch to Interop mode.

  11. Click on Activate to enable the new settings.

  12. Each switch ships with 8 ports activated. Activate the remaining ports on each switch by installing the activation keys:

    1. Select Operation on the navigation panel and choose the Feature Installation tab.

    2. Enter the activation key in the Feature Key field and click Activate. This enables an additional 8 ports on the switch.

Procedure: To Install the McData EFCM Lite Software

Install the EFCM Lite management software on the management station mgmt-amer-01.

Steps
  1. Insert the EFCM Lite CD into the CD-ROM drive.

  2. Change directory to /cdrom/efcm81_solaris.

  3. Run the EFCM Lite installer and specify the default location, /opt/, when prompted:

    # ./installer

  4. Start the EFC manager over a secure connection:


    # ssh -X mgmt-amer-01.us
    # cd /opt/EFCM81/bin
    # /opt/EFCM81/bin/EFC_Manager Start
    # exit
  5. Launch the EFC client with the following command:

    /opt/EFCM81/bin/EFC_client

2.2.2.1 Setting Up Zones

The following table lists the zone sets, zones, zone members, and storage channels used in this deployment:

Table 2–4 Zone Names and Zone Members

    Zone Set   Zone Name       Members                         Storage Channels
    ----------------------------------------------------------------------------------
    SAN-A      edge1_zone_A    phys-bedge1-1, phys-bedge1-2    amer-minnow-01 chl0, chl1
                                                               amer-minnow-02 chl0, chl1
               edge2_zone_A    phys-bedge2-1, phys-bedge2-2    amer-minnow-03 chl0, chl1
                                                               amer-minnow-04 chl0, chl1
               edge3_zone_A    phys-bedge3-1, phys-bedge3-2    amer-minnow-05 chl0, chl1
                                                               amer-minnow-06 chl0, chl1
               edge4_zone_A    phys-bedge4-1, phys-bedge4-2    amer-minnow-01, 02, 03, 04 chl0
                                                               amer-minnow-05, 06 chl0, chl1
               edge5_zone_A    phys-bedge5-1, phys-bedge5-2    amer-minnow-01 chl0, chl1
                                                               amer-minnow-02 chl0, chl1
               backup_zone_A   bu-amer-01, all cluster nodes   amer-minnow-01, 02, 03, 04, 05, 06
    SAN-B      edge1_zone_B    phys-bedge1-1, phys-bedge1-2    amer-minnow-01 chl4, chl5
                                                               amer-minnow-02 chl4, chl5
               edge2_zone_B    phys-bedge2-1, phys-bedge2-2    amer-minnow-03 chl4, chl5
                                                               amer-minnow-04 chl4, chl5
               edge3_zone_B    phys-bedge3-1, phys-bedge3-2    amer-minnow-05 chl4, chl5
                                                               amer-minnow-06 chl4, chl5
               edge4_zone_B    phys-bedge4-1, phys-bedge4-2    amer-minnow-01, 02, 03, 04 chl4
                                                               amer-minnow-05, 06 chl4, chl5
               edge5_zone_B    phys-bedge5-1, phys-bedge5-2    amer-minnow-01 chl4, chl5
                                                               amer-minnow-02 chl4, chl5
               backup_zone_B   bu-amer-01, all cluster nodes   amer-minnow-01, 02, 03, 04, 05, 06

Procedure: To Create the Zones

Steps
  1. Obtain the hbamap script and the Solaris Device Path Decoder from your Sun representative. The hbamap script gathers the WWNs of the HBAs.

  2. Copy hbamap to a cluster node and run the script:


    root@phys-bedge2-1:# /var/tmp/hbamap
    FOUND PATH TO 6 LEADVILLE HBA PORTS
    ===================================
    C# INST PORT WWN         MODEL   FCODE   STATUS
            DEVICE PATH
    ------------------------------------------------
    c3 qlc0 210000e08b1b08a6 ISP2312 1.14.09 NOT CONNECTED
            /pci@1c,600000/SUNW,qlc@1
    c4 qlc1 210100e08b3b08a6 ISP2312 1.14.09 CONNECTED
            /pci@1c,600000/SUNW,qlc@1,1
    c5 qlc2 210000e08b1b8ba3 ISP2312 1.14.09 CONNECTED
            /pci@1d,700000/SUNW,qlc@1
    c6 qlc3 210100e08b3b8ba3 ISP2312 1.14.09 CONNECTED
            /pci@1d,700000/SUNW,qlc@1,1
    c7 qlc4 210000e08b1bc5a4 ISP2312 1.14.09 NOT CONNECTED
            /pci@1d,700000/SUNW,qlc@2
    c8 qlc5 210100e08b3bc5a4 ISP2312 1.14.09 CONNECTED
            /pci@1d,700000/SUNW,qlc@2,1
  3. Map each controller to its slot number on the V440 system boards using the Solaris Device Path Decoder. For example:


    /devices/pci@1c,600000/SUNW,qlc@1/fp@0,0:devctl   : PCI Slot 5 Port 0
    /devices/pci@1c,600000/SUNW,qlc@1,1/fp@0,0:devctl : PCI Slot 5 Port 1
    /devices/pci@1d,700000/SUNW,qlc@1/fp@0,0:devctl   : PCI Slot 4 Port 0
    /devices/pci@1d,700000/SUNW,qlc@1,1/fp@0,0:devctl : PCI Slot 4 Port 1
    /devices/pci@1d,700000/SUNW,qlc@2/fp@0,0:devctl   : PCI Slot 2 Port 0
    /devices/pci@1d,700000/SUNW,qlc@2,1/fp@0,0:devctl : PCI Slot 2 Port 1
  4. Log in to the management station and launch a web browser. If a browser is not installed, download and install one first.

  5. In the URL field, enter the switch name and log in using the following credentials:

    • Login: Administrator

    • Password: password (the default)

  6. Go to Configure -> Zoneset and change the zone set name.

  7. Go to the Zones tab and create zones by entering a zone name and clicking Add Zones.

  8. Click a zone name to bring up another window where you can enter WWNs and add them to zones. The WWNs must be entered in colon-separated form (like a MAC address); for example, 210000e08b1b08a6 should be entered as 21:00:00:e0:8b:1b:08:a6.
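
    If you have many WWNs to enter, a small sed one-liner (a sketch using the standard sed utility) can insert the colons:

    $ echo 210000e08b1b08a6 | sed -e 's/../&:/g' -e 's/:$//'
    21:00:00:e0:8b:1b:08:a6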

  9. After entering all of the WWNs, go back to the Zoneset tab and click Save and Activate Zoneset. This forces an update of the zones on all switches in the fabric; the zones should then be visible to the Solaris Operating System.
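
    After the zone set is activated, you can confirm from a cluster node that the fabric devices are visible. The following sketch uses the standard Solaris cfgadm command; the controller number follows the hbamap output above:

    # cfgadm -al -o show_FCP_dev c4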

2.2.2.2 Setting Parameters on the 3510s

The following procedures configure the 3510 hardware (minnows) after rack installation.

Procedure: To Set an IP Address on the 3510s

Steps
  1. Connect the serial port to a laptop and launch HyperTerminal, or connect from a terminal on a Solaris system.

  2. When the menu-driven program prompts you, select vt100 mode.

  3. Navigate to “View and Edit Configuration Parameters”, then select Communication Parameters and Internet Protocol (TCP/IP).

  4. Enter the IP address and netmask that you have assigned to the array.

  5. Exit the serial connection and make sure that you can now telnet to the array from the management station. Once this is established, all configuration is done through the command-line interface.
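
    A quick reachability check from the management station (a sketch; the array name follows the naming used later in this chapter):

    # ping amer-minnow-01
    amer-minnow-01 is alive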

Procedure: To Configure the 3510s

Steps
  1. Download the latest version of the command-line interface (SUNWsccli) package and install it on the management station.
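
    For example, a typical Solaris package installation, assuming the package file has been downloaded into the current directory:

    # pkgadd -d . SUNWsccli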

  2. Run the following command to connect to the 3510 with an interactive prompt:


    # sccli amer-minnow-01
    sccli: selected se3000://172.31.0.141:58632 [SUN StorEdge 3510 SN#084DCD]
  3. Set the chassis ID on each controller and JBOD as shown in Figure 2–1, and verify that you can see all of the disks by running the show disks command.
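
    For example (output abbreviated; the column layout matches the show disk example later in this section):

    sccli> show disks
    Ch  Id      Size   Speed  LD     Status   IDs                   Rev
    -------------------------------------------------------------------
    [...]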

  4. Verify the following parameters:

    Parameter Group        Parameter              Value
    ------------------------------------------------------------
    Controller parameters  redundancy mode        active-active
                           redundancy status      enabled
    Drive parameters       auto-global-spare      enabled
    Host parameters        fibre connection mode  point-to-point (SAN)
                           controller name        amer-minnow-nn
    Cache policy           mode                   write-back
                           optimization           sequential

    sccli> show redundancy-mode
        Primary controller serial number: 8040703
        Redundancy mode: Active-Active
        Redundancy status: Enabled
        Secondary controller serial number: 8040608
    sccli> show drive-parameters
        spin-up: disabled
        reset-at-power-up: enabled
        disk-access-delay: 15s
        scsi-io-timeout: 30s
        queue-depth: 32
        polling-interval: 0ms
        enclosure-polling-interval: 30s
        auto-detect-swap-interval: 0ms
        smart: disabled
        auto-global-spare: disabled
    sccli> set drive-parameters auto-global-spare enabled
    sccli> set  controller amer-minnow-01 
    sccli> show controller
        controller-name: "amer-minnow-01"
    sccli> show host-parameters
        max-luns-per-id: 32
        queue-depth: 1024
        fibre connection mode: point to point
    sccli> show cache-policy
        mode: write-back
        optimization: sequential
  5. Gather the WWNs of the minnows and add them to zones accordingly:


    sccli> show port-wwns
        Ch  Id   WWPN
        -------------------------
         0  40   216000C0FF884DCD
         1  42   226000C0FFA84DCD
         4  44   256000C0FFC84DCD
         5  46   266000C0FFE84DCD
    sccli> show ses
     Ch Id Chassis Vendor/Product ID    Rev  PLD  WWNN              WWPN              Topology
     ------------------------------------------------------------------------------------------
      2 12 084DCD  SUN StorEdge 3510F A 1040 1000 204000C0FF084DCD  214000C0FF084DCD  loop(a)
      2 28 08036B  SUN StorEdge 3510F D 1040 1000 205000C0FF08036B  215000C0FF08036B  loop(a)
      2 44 07D493  SUN StorEdge 3510F D 1040 1000 205000C0FF07D493  215000C0FF07D493  loop(a)
      3 12 084DCD  SUN StorEdge 3510F A 1040 1000 204000C0FF084DCD  224000C0FF084DCD  loop(b)
      3 28 08036B  SUN StorEdge 3510F D 1040 1000 205000C0FF08036B  225000C0FF08036B  loop(b)
      3 44 07D493  SUN StorEdge 3510F D 1040 1000 205000C0FF07D493  225000C0FF07D493  loop(b)

2.2.2.3 Creating Logical Drives and Logical Partitions

Each 3510 has two logical drives and one global spare. One logical drive consists of six drives (RAID 5) and the other consists of five drives (RAID 5). Because RAID 5 provides n-1 disks of usable capacity, you end up with one logical drive of 682 GB and another of 545 GB. Disk 11 is always the global spare. All minnows are configured identically.

Figure 2–3 Logical Drives on each 3510

Description in text above.

Each logical drive is further divided into four volumes, plus a small leftover partition. Logical drives ld0, ld2, and ld4, of size 682 GB, are divided into the following volume sizes:

ld0:00 = 20GB 

ld0:01 = 227GB 

ld0:02 = 227GB 

ld0:03 = 227GB 

ld0:04 = 5MB (leftover) 

Logical drives ld1, ld3, and ld5, of size 545 GB, are divided into the following volume sizes:

ld1:00 = 20GB 

ld1:01 = 180GB 

ld1:02 = 180GB 

ld1:03= 180GB 

ld1:04 = 5MB (leftover) 
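
The 227-GB and 180-GB figures are approximate. The exact partition sizes used in the procedure below are 226090 MB for the large logical drives and 179510 MB for the small ones; a quick sanity check with bc shows where those values come from:

    $ echo "scale=2; (682.39 - 20) / 3" | bc    # large logical drives, GB per data volume
    220.79
    $ echo "scale=2; (545.91 - 20) / 3" | bc    # small logical drives, GB per data volume
    175.30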

Figure 2–4 Logical Volumes on each 3510

Description in text above.

Procedure: To Set Up Logical Drives and Volumes

Steps
  1. Delete the existing logical drives configured by default:


    sccli> show logical
    sccli> unmap ld0
    sccli> unmap ld1
    sccli> delete logical-drive ld1
    sccli> delete logical-drive ld0
    
  2. Remove the unneeded global spare. By default, each 3510 brick has two global spares, and you need to remove one of them:


    sccli> show disk 2.5
    Ch  Id      Size   Speed  LD     Status   IDs                   Rev
    -------------------------------------------------------------------
     2   5  136.73GB   200MB  GLOBAL STAND-BY SEAGATE ST314FSUN146G 0307
                                                  S/N 3HY87KSM00007445
    sccli> unconfigure global-spare 2.5
    
  3. Create the logical drives. This process is the same for all minnows, and they are all carved up logically in the same way. This will take a couple of hours.


    sccli> create logical-drive raid5 2.0 2.1 2.2 2.3 2.4 2.5
        sccli: created logical drive 08C6C8D4
    sccli> create logical-drive raid5 2.6 2.7 2.8 2.9 2.10
        sccli: created logical drive 55689384
    sccli> create logical-drive raid5 2.16 2.17 2.18 2.19 2.20 2.21
        sccli: created logical drive 68B4C07E
    sccli> create logical-drive raid5 2.22 2.23 2.24 2.25 2.26
        sccli: created logical drive 2B7F3FDA
    sccli> create logical-drive raid5 2.32 2.33 2.34 2.35 2.36 2.37
        sccli: created logical drive 014D9F13
    sccli> create logical-drive raid5 2.38 2.39 2.40 2.41 2.42
        sccli: created logical drive 189DAEFF
  4. Configure the global spares as follows:


    sccli> configure global-spare 2.43
    sccli> configure global-spare 2.27
    sccli> configure global-spare 2.11
    
  5. After all of the logical drives are built, they need to be assigned to either the primary or the secondary controller. By default, all logical drives are assigned to the primary controller. For better distribution of I/O load, logical drives ld0, ld2, and ld4 stay on the primary controller, while ld1, ld3, and ld5 need to be reassigned to the secondary controller. This reassignment is done through telnet.


    telnet amer-minnow-01
    CTL-L    (select vt100 if prompted for the terminal type)
    ---
    "View and Edit Logical Drives"
    select LD1
    "Logical Drive Assignments"
    "Redundant Controller Logical Drive Assign to Secondary Controller"
    "Yes"
    Query:  "Do you want to Reset the Controller now?"   
    "No"
    

    Repeat the above sequence for LD3 and LD5, except you must reset the controller after reassigning LD5.

  6. Verify the logical drive assignments by using the sccli interface:


    # sccli amer-minnow-01
    sccli: selected se3000://172.31.0.141:58632 [SUN StorEdge 3510 SN#084DCD]
    sccli> show logical
    LD    LD-ID     Size      Assigned    Type  Disks Spare Failed Status
    ---------------------------------------------------------------------
    ld0   08C6C8D4  682.39GB  Primary     RAID5   6     3     0    Good
    ld1   55689384  545.91GB  Secondary   RAID5   5     3     0    Good
    ld2   68B4C07E  682.39GB  Primary     RAID5   6     3     0    Good
    ld3   2B7F3FDA  545.91GB  Secondary   RAID5   5     3     0    Good
    ld4   014D9F13  682.39GB  Primary     RAID5   6     3     0    Good
    ld5   189DAEFF  545.91GB  Secondary   RAID5   5     3     0    Good
  7. Create the logical volumes, called partitions, with the following commands:


    sccli> configure partition ld0-00  20g
    sccli> configure partition ld0-01  226090m
    sccli> configure partition ld0-02  226090m
    sccli> configure partition ld0-03  226090m
    sccli> configure partition ld2-00  20g
    sccli> configure partition ld2-01  226090m
    sccli> configure partition ld2-02  226090m
    sccli> configure partition ld2-03  226090m
    sccli> configure partition ld4-00  20g
    sccli> configure partition ld4-01  226090m
    sccli> configure partition ld4-02  226090m
    sccli> configure partition ld4-03  226090m
    
    sccli> configure partition ld1-00  20g
    sccli> configure partition ld1-01  179510m
    sccli> configure partition ld1-02  179510m
    sccli> configure partition ld1-03  179510m
    sccli> configure partition ld3-00  20g
    sccli> configure partition ld3-01  179510m
    sccli> configure partition ld3-02  179510m
    sccli> configure partition ld3-03  179510m
    sccli> configure partition ld5-00  20g
    sccli> configure partition ld5-01  179510m
    sccli> configure partition ld5-02  179510m
    sccli> configure partition ld5-03  179510m
    
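    Because these commands are repetitive, a small shell loop can generate them for pasting into the sccli prompt (a sketch in POSIX sh):

    for ld in ld0 ld2 ld4; do
        echo "configure partition ${ld}-00 20g"
        for p in 01 02 03; do
            echo "configure partition ${ld}-${p} 226090m"
        done
    done
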
  8. Verify the logical partitions that were created. Notice that the leftover disk space appears under the ld0-04 partition:


    sccli> show part
    LD/LV    ID-Partition      Size
    -------------------------------
    ld0-00   08C6C8D4-00    20.00GB
    ld0-01   08C6C8D4-01   220.79GB
    ld0-02   08C6C8D4-02   220.79GB
    ld0-03   08C6C8D4-03   220.79GB
    ld0-04   08C6C8D4-04       17MB
    [...]
  9. Reset the controller when the configuration is complete, then recheck the partitions:


    sccli> reset controller
    WARNING: This is a potentially dangerous operation.
    The controller will go offline for several minutes.
    Data loss may occur if the controller is currently in use.
    Are you sure? y
    sccli: shutting down controller...
    sccli: controller is shut down
    sccli: resetting controller...
    sccli> show part
    [...]

2.2.2.4 Mapping the 3510 Logical Units (LUNs)

The next step in configuring the 3510s is to map the logical disks to controller channels. All of the LUN mappings are listed in the following table and are color-coded in Figure 2–5 below.

Table 2–5 LUN Mappings

    3510 Name        Logical Drives          Logical Volumes                   Hosts Mapped
    ------------------------------------------------------------------------------------------
    amer-minnow-01   ld0, ld2                ld0-00, ld2-00                    phys-bedge5-1, phys-bedge5-2
    amer-minnow-02   ld4                     ld4-00, ld4-01, ld4-02, ld4-03    phys-bedge4-1, phys-bedge4-2
                     ld0, ld1, ld2,          ld0-01, 02, 03; ld1-01, 02, 03;   phys-bedge1-1, phys-bedge1-2
                     ld3, ld5                ld2-00, 01, 02, 03;
                                             ld3-00, 01, 02, 03;
                                             ld5-00, 01, 02, 03
                     ld1                     ld1-00                            phys-bedge1-2 (ldap)
    amer-minnow-03   ld0, ld2                ld0-00, ld2-00                    phys-bedge6-1, phys-bedge6-2
    amer-minnow-04   ld4                     ld4-00, ld4-01, ld4-02, ld4-03    phys-bedge4-1, phys-bedge4-2
                     ld0, ld1, ld2,          ld0-01, 02, 03; ld1-01, 02, 03;   phys-bedge2-1, phys-bedge2-2
                     ld3, ld5                ld2-00, 01, 02, 03;
                                             ld3-00, 01, 02, 03;
                                             ld5-00, 01, 02, 03
                     ld1                     ld1-00                            phys-bedge2-2 (ldap)
    amer-minnow-05   ld0                     ld0-00 (ldap)                     phys-bedge3-2
    amer-minnow-06   ld2                     ld2-00 (IM)                       phys-bedge5-2
                     ld4, ld5                ld4-00, 01, 02, 03;               phys-bedge4-1, phys-bedge4-2
                                             ld5-00, 01, 02, 03
                     ld0, ld1, ld2, ld3      ld0-01, 02, 03;                   phys-bedge3-1, phys-bedge3-2
                                             ld1-00, 01, 02, 03;
                                             ld2-00, 01, 02, 03;
                                             ld3-00, 01, 02, 03

Figure 2–5 Logical Disk Mapping on Minnows

Color coded mapping of LUNs on each minnow to each cluster.

Procedure: To Map Logical Units (LUNs)

Steps
  1. Create an alias for each of the host WWNs on the minnows so that mapping disks and troubleshooting are much easier. You need to create four aliases for each system, as each system has four paths (WWNs):


    sccli> create host-wwn-name 210100E08B3B4CA5  phys-bedge1-1-c4
    sccli> create host-wwn-name 210000E08B1B58A4  phys-bedge1-1-c5
    sccli> create host-wwn-name 210100E08B3B58A4  phys-bedge1-1-c6
    sccli> create host-wwn-name 210100E08B3B8BA4  phys-bedge1-1-c8
    [...]

    Note: c4, c5, c6, and c8 are the controller names as seen by the system for each HBA port, gathered as part of the hbamap script in To Create the Zones or by running cfgadm -al.
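
    With four paths per node across all of the cluster nodes, a short loop can generate the alias commands from a WWN-and-controller list (a sketch in POSIX sh; the WWNs are from the example above):

    set -- 210100E08B3B4CA5 c4 210000E08B1B58A4 c5
    while [ $# -ge 2 ]; do
        echo "create host-wwn-name $1 phys-bedge1-1-$2"
        shift 2
    done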

  2. Verify that you have created all the aliases required:


    sccli> show host-wwn
      Host-ID/WWN       Name
    --------------------------------------
      210100E08B3B4CA5  phys-bedge1-1-c4
      210000E08B1B58A4  phys-bedge1-1-c5
      210100E08B3B58A4  phys-bedge1-1-c6
      210000E08B1B8BA4  phys-bedge1-1-c7
      210100E08B3B8BA4  phys-bedge1-1-c8
      210100E08B3B66A5  phys-bedge1-2-c4
      210000E08B1B70A9  phys-bedge1-2-c5
      210100E08B3B70A9  phys-bedge1-2-c6
      210000E08B1BB4A2  phys-bedge1-2-c7
      210100E08B3BB4A2  phys-bedge1-2-c8
      210100E08B3B08A6  phys-bedge2-1-c4
      [...]
  3. Map all of the LUNs using the sccli interface, according to Table 2–5. The leftover LUNs ld0-04 and ld1-04 are mapped to channels without host filters:


    sccli> map ld0-04 0.40.0
    sccli> map ld0-01 0.40.1 phys-bedge1-1-c6
    sccli> map ld0-01 0.40.1 phys-bedge1-1-c8
    sccli> map ld0-01 0.40.1 phys-bedge1-2-c6
    sccli> map ld0-01 0.40.1 phys-bedge1-2-c8
    sccli> map ld0-02 0.40.2 phys-bedge1-1-c6
    [...]

    The complete list of mapping commands for each minnow is given in Appendix A, Logical Unit (LUN) Mapping.
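
    The per-host filter commands are equally repetitive; a loop like the following prints the four filtered mappings for one partition (a sketch in POSIX sh, with hosts as in the example above):

    for host in phys-bedge1-1-c6 phys-bedge1-1-c8 phys-bedge1-2-c6 phys-bedge1-2-c8; do
        echo "map ld0-01 0.40.1 ${host}"
    done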

  4. Verify the mapping on each minnow by running the show map command:


    sccli> show map
    Ch Tgt LUN ld/lv ID-Partition Assigned Filter           Map
    --------------------------------------------------------------------------
     0  40   0  ld0  08C6C8D4-04  Primary
     0  40   1  ld0  08C6C8D4-01  Primary  210100E08B3B58A4 {phys-bedge1-1-c6}
     0  40   1  ld0  08C6C8D4-01  Primary  210100E08B3B8BA4 {phys-bedge1-1-c8}
     0  40   1  ld0  08C6C8D4-01  Primary  210100E08B3B70A9 {phys-bedge1-2-c6}
     0  40   1  ld0  08C6C8D4-01  Primary  210100E08B3BB4A2 {phys-bedge1-2-c8}
     0  40   2  ld0  08C6C8D4-02  Primary  210100E08B3B58A4 {phys-bedge1-1-c6}
     0  40   2  ld0  08C6C8D4-02  Primary  210100E08B3B8BA4 {phys-bedge1-1-c8}
     0  40   2  ld0  08C6C8D4-02  Primary  210100E08B3B70A9 {phys-bedge1-2-c6}
     0  40   2  ld0  08C6C8D4-02  Primary  210100E08B3BB4A2 {phys-bedge1-2-c8}
    [...]
  5. Save the configuration of the Sun StorEdge 3510 FC array using the firmware application's “Saving Configuration (NVRAM) to a Disk” function and the Configuration Service Console's “Save Configuration” utility.