Open HA Cluster Installation Guide

Procedure: How to Configure iSCSI Storage Using COMSTAR and Single Paths

Perform this procedure to configure OpenSolaris Common Multiprotocol SCSI TARget (COMSTAR) on locally attached storage so that access is shared among multiple cluster nodes. This procedure uses single paths between iSCSI initiators and iSCSI targets, and it also configures a mirrored ZFS storage pool to provide high availability.

Note –

If you use multiple paths between iSCSI initiators and iSCSI targets, instead go to How to Configure iSCSI Storage Using COMSTAR and Multiple Paths.

Before You Begin

Ensure that the storage configuration meets Open HA Cluster 2009.06 requirements. See iSCSI Storage.

  1. On each node, perform the required procedures from Configuring an iSCSI Storage Array With COMSTAR (Task Map) that are listed in the following table, observing the Special Instructions.



    1. Perform basic setup.

       Instructions: Getting Started with COMSTAR

       Special instructions: To create the SCSI logical unit, perform the procedure How to Create a Disk Partition SCSI Logical Unit. If you specify a whole disk instead of a slice to the sbdadm create-lu command, run the cldevice clear command afterward to clear the DID namespace.
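    The whole-disk case described above might look like the following sketch. The device name is hypothetical; substitute your own disk.

    ```shell
    # Create a SCSI logical unit backed by a whole disk
    # (device name is illustrative)
    sbdadm create-lu /dev/rdsk/c1t0d0

    # Because a whole disk, not a slice, was specified,
    # clear stale entries from the DID namespace afterward
    /usr/cluster/bin/cldevice clear
    ```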

    2. Configure iSCSI target ports.

       Instructions: How to Configure iSCSI Target Ports

       Special instructions: Create a target for each private-network adapter on each node.
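    As a sketch of this step, one target port group and target per private-network adapter might be created as follows. The group name and address are hypothetical; the exact syntax is covered in How to Configure iSCSI Target Ports.

    ```shell
    # Create a target port group bound to one private-network
    # adapter's address (name and address are illustrative)
    itadm create-tpg e1000g1-tpg 172.16.0.129

    # Create a target that listens only on that port group
    itadm create-target -t e1000g1-tpg
    ```

    Repeat the pair of commands for each private-network adapter on each node.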

    3. Configure the iSCSI target.

       Instructions: How to Configure an iSCSI Target for Discovery

       Special instructions: Use either static discovery or SendTargets. Do not use dynamic discovery.

    4. Make a logical unit available.

       Instructions: How to Make Logical Units Available for iSCSI and iSER
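    This step typically amounts to adding a view for each logical unit's GUID, as in the following sketch. The GUID value is illustrative; use the GUIDs that sbdadm reports for your logical units.

    ```shell
    # List logical units to obtain each GUID
    sbdadm list-lu

    # Make the logical unit visible to all hosts and all target
    # ports (GUID value is illustrative)
    stmfadm add-view 600144f05b4c460000004a1d9dd00001
    ```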


    5. Configure an initiator system to access target storage.

       Instructions: How to Configure an iSCSI Initiator

       Special instructions:

    • Specify the node's clprivnet IP address as the target system. To determine the IP address of the clprivnet interface, run the following command. Output is similar to the following:

      phys-schost# ifconfig clprivnet0
      clprivnet0: flags=... mtu 1500 index 5
          inet ... netmask fffffe00 broadcast ...
          ether 0:0:0:0:0:1
    • When completed, update and populate the global-devices namespace on each node.

      phys-schost# scdidadm -r
      phys-schost# cldevice populate

  2. Disable fencing for each newly created device.

    phys-schost# /usr/cluster/bin/cldevice set -p default_fencing=nofencing-noscrub device

    Alternatively, disable fencing globally for all devices in the cluster. Do this only if no shared device in the cluster is used as a quorum device.

    phys-schost# /usr/cluster/bin/cluster set -p global_fencing=nofencing-noscrub
  3. List the DID mappings for the devices in the cluster.

    Output is similar to the following, which shows a path from each node to each device:

    phys-schost# /usr/cluster/bin/cldevice list -v
    DID Device          Full Device Path
    ----------          ----------------
    d3                  phys-schost-1:/dev/rdsk/c14t1d0s4
    d3                  phys-schost-2:/dev/rdsk/c14t1d0s4
    d4                  phys-schost-1:/dev/rdsk/c15t8d0s4
    d4                  phys-schost-2:/dev/rdsk/c15t8d0s4
  4. From one node, create a mirrored ZFS storage pool from the DID devices that you created on each node.

    For the device path name, combine /dev/did/dsk/, the DID device name, and slice s2.

    phys-schost# zpool create pool mirror /dev/did/dsk/dNs2 /dev/did/dsk/dYs2
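    Before configuring HAStoragePlus, you can confirm that the pool is online and that both DID slices are present in the mirror. The pool name follows the command above.

    ```shell
    # Verify that the pool is healthy and that both DID slices
    # appear under the mirror vdev
    zpool status pool

    # Show the pool's capacity and health summary
    zpool list pool
    ```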
  5. Configure the mirrored ZFS storage pool as an HAStoragePlus resource.

    phys-schost# /usr/cluster/bin/clresourcegroup create resourcegroup
    phys-schost# /usr/cluster/bin/clresourcetype register HAStoragePlus
    phys-schost# /usr/cluster/bin/clresource create -g resourcegroup -t HAStoragePlus \
    -p Zpools=pool resource
    phys-schost# /usr/cluster/bin/clresourcegroup manage resourcegroup
    phys-schost# /usr/cluster/bin/clresourcegroup online resourcegroup

Example 3–2 Configuring iSCSI Storage Using COMSTAR and Single Paths

This example shows the steps to configure COMSTAR-based iSCSI storage and a mirrored ZFS storage pool, zpool-1. The locally attached disk for the node phys-schost-1 is /dev/rdsk/c1t0d0s4 and for phys-schost-2 is /dev/rdsk/c1t8d0s4. The IP address of the clprivnet0 interface is specified as the target system address.

Static discovery of the iSCSI target is configured. Procedures performed on phys-schost-1 to configure an iSCSI initiator and target are also performed on phys-schost-2. After the devfsadm command attaches the disks as iSCSI targets, /dev/rdsk/c1t0d0s4 becomes /dev/rdsk/c14t0d0s4 on the initiator side and /dev/rdsk/c1t8d0s4 becomes /dev/rdsk/c15t8d0s4.

The cluster does not use any shared disks, so fencing is turned off globally for all disks in the cluster. The resource group rg-1 is configured with the HAStoragePlus resource hasp-rs, which manages the mirrored ZFS storage pool zpool-1.

Enable and verify the STMF service
phys-schost-1# svcadm enable stmf
phys-schost-1# svcs stmf
STATE          STIME    FMRI
online         15:59:53 svc:/system/stmf:default
Repeat on phys-schost-2

Create and verify disk-partition SCSI logical units on each node
phys-schost-1# sbdadm create-lu /dev/rdsk/c1t0d0s4
Created the following LU:

              GUID                    DATA SIZE           SOURCE
--------------------------------  -------------------  ------------------
600144f05b4c460000004a1d9dd00001      73407800320      /dev/rdsk/c1t0d0s4

phys-schost-2# sbdadm create-lu /dev/rdsk/c1t8d0s4
Created the following LU:

              GUID                    DATA SIZE           SOURCE
--------------------------------  -------------------  ------------------
600144f07d15cd0000004a202e340001      73407800320      /dev/rdsk/c1t8d0s4

Enable the iSCSI target SMF service
phys-schost-1# svcadm enable -r svc:/network/iscsi/target:default
phys-schost-1# svcs -a | grep iscsi
online  14:21:25 svc:/network/iscsi/target:default
Repeat on phys-schost-2

Configure each iSCSI target for static discovery
phys-schost-1# itadm create-target
successfully created
phys-schost-1# itadm list-target
TARGET NAME                                                  STATE   SESSIONS
...                                                          online  0
Repeat on phys-schost-2 for the other iSCSI target

Make the logical units available
phys-schost-1# sbdadm list-lu
phys-schost-1# stmfadm add-view 600144f05b4c460000004a1d9dd00001
Repeat on phys-schost-2 for the other logical unit's GUID

Configure iSCSI initiators to access target storage
phys-schost-1# iscsiadm modify discovery --static enable
phys-schost-1# iscsiadm list discovery
Static: enabled
Send Targets: disabled
iSNS: disabled
phys-schost-1# ifconfig clprivnet0
    inet ... netmask fffffe00 broadcast ...
phys-schost-1# iscsiadm add static-config target-name,ip-address
phys-schost-1# iscsiadm list static-config
Static Configuration Target: target-name,ip-address
phys-schost-1# devfsadm -i iscsi
phys-schost-1# format -e
phys-schost-1# iscsiadm list target
Target: target-name
        Alias: -
        TPGT: 1
        ISID: 4000002a0000
        Connections: 1
Repeat on phys-schost-2 for this target
Repeat on both nodes for the other target

Update and populate the global-devices namespace on each node
phys-schost-1# scdidadm -r
phys-schost-1# cldevice populate
Repeat on phys-schost-2

Disable fencing for all disks in the cluster
phys-schost-1# /usr/cluster/bin/cluster set -p global_fencing=nofencing-noscrub

Create a mirrored ZFS storage pool
phys-schost-1# /usr/cluster/bin/cldevice list -v
DID Device          Full Device Path
----------          ----------------
d3                  phys-schost-1:/dev/rdsk/c14t0d0s4
d3                  phys-schost-2:/dev/rdsk/c14t0d0s4
d4                  phys-schost-1:/dev/rdsk/c15t8d0s4
d4                  phys-schost-2:/dev/rdsk/c15t8d0s4
phys-schost-1# zpool create zpool-1 mirror /dev/did/dsk/d3s2 /dev/did/dsk/d4s2

Configure the mirrored ZFS storage pool as an HAStoragePlus resource
phys-schost# /usr/cluster/bin/clresourcegroup create rg-1
phys-schost# /usr/cluster/bin/clresourcetype register HAStoragePlus
phys-schost# /usr/cluster/bin/clresource create -g rg-1 -t HAStoragePlus \
-p Zpools=zpool-1 hasp-rs
phys-schost# /usr/cluster/bin/clresourcegroup manage rg-1
phys-schost# /usr/cluster/bin/clresourcegroup online rg-1

Next Steps

If you want to use IP Security Architecture (IPsec) to provide secure TCP/IP communication on the cluster interconnect, go to How to Configure IP Security Architecture (IPsec) on the Cluster Private Interconnect.

Otherwise, configure the data services that you want to run on your cluster. Go to Configuring Data Services.