Sun Cluster System Administration Guide for Solaris OS

Administering EMC Symmetrix Remote Data Facility Replicated Devices

The following table lists the tasks you must perform to set up an EMC Symmetrix Remote Data Facility (SRDF) storage-based replicated device.

Table 5–3 Task Map: Administering an EMC SRDF Storage-Based Replicated Device

Task: Install the SRDF software on your storage device and nodes
Instructions: The documentation that shipped with your EMC storage device.

Task: Configure the EMC replication group
Instructions: How to Configure an EMC Symmetrix Remote Data Facility Replication Group

Task: Configure the DID device
Instructions: How to Configure DID Devices for Replication Using EMC Symmetrix Remote Data Facility (SRDF)

Task: Register the replicated group
Instructions: How to Add and Register a Device Group (Solaris Volume Manager) or SPARC: How to Register a Disk Group as a Device Group (Veritas Volume Manager)

Task: Verify the configuration
Instructions: How to Verify EMC Symmetrix Remote Data Facility (SRDF) Replicated Global Device Group Configuration

How to Configure an EMC Symmetrix Remote Data Facility Replication Group

Before You Begin

EMC Solutions Enabler software must be installed on all cluster nodes before you configure an EMC Symmetrix Remote Data Facility (SRDF) replication group. First, configure the EMC SRDF device groups on shared disks in the cluster. For more information about how to configure the EMC SRDF device groups, see your EMC SRDF product documentation.
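
As a quick check that Solutions Enabler is installed and can see your storage, you can list the Symmetrix units that are visible from each node. This is only a sanity-check sketch; the /usr/symcli/bin path matches the commands used later in this procedure.


# /usr/symcli/bin/symcfg list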

When using EMC SRDF, use dynamic devices instead of static devices. Static devices require several minutes to change the replication primary and can impact failover time.
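
One way to confirm that a device is dynamic-RDF capable is to inspect its symdev show output. The device number 0067 is taken from the examples later in this chapter, and the exact field label is an assumption that can vary by Solutions Enabler release; add the -sid option if more than one Symmetrix is visible to the node.


# /usr/symcli/bin/symdev show 0067 | grep -i dynamic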


Caution –

The name of the Sun Cluster device group that you create (Solaris Volume Manager, Veritas Volume Manager, or raw-disk) must be the same as the name of the replicated device group.
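
For example, if the SRDF device group is named dg1, a hypothetical raw-disk registration would reuse that name. The cldevicegroup options and the DID instance d217 shown here are only a sketch; see the registration procedures referenced in the task map for the authoritative steps.


# cldevicegroup create -t rawdisk -d d217 dg1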


  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on all nodes connected to the storage array.

  2. On each node configured with the replicated data, discover the Symmetrix device configuration.

    This might take a few minutes.


    # /usr/symcli/bin/symcfg discover
    
  3. If you have not already created the replica pairs, create them now.

    Use the symrdf command to create your replica pairs. For instructions on creating the replica pairs, refer to your SRDF documentation.
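
    For example, a minimal sketch that mirrors Example 5–15 and Example 5–16 later in this chapter, assuming a device group named dg1 and Symmetrix device 0067 (placeholders for your own configuration); run the symdg and symld commands on the nodes of each side, with -type RDF1 on the primary side and -type RDF2 on the secondary side, and then establish the pair:


    # symdg -type RDF1 create dg1
    # symld -g dg1 add dev 0067
    # symrdf -g dg1 establish
    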

  4. On each node configured with replicated devices, verify that data replication is set up correctly.


    # /usr/symcli/bin/symdg show group-name
    
  5. Perform a swap of the device group.

    1. Verify that the primary and secondary replicas are synchronized.


      # /usr/symcli/bin/symrdf -g group-name verify -synchronized
      
    2. Determine which node contains the primary replica and which node contains the secondary replica by using the symdg show command.


      # /usr/symcli/bin/symdg show group-name
      

      The node that contains the RDF1 device contains the primary replica, and the node that contains the RDF2 device contains the secondary replica.

    3. Enable the secondary replica.


      # /usr/symcli/bin/symrdf -g group-name failover
      
    4. Swap the RDF1 and RDF2 devices.


      # /usr/symcli/bin/symrdf -g group-name swap -refresh R1
      
    5. Enable the replica pair.


      # /usr/symcli/bin/symrdf -g group-name establish
      
    6. Verify that the primary and secondary replicas are synchronized.


      # /usr/symcli/bin/symrdf -g group-name verify -synchronized
      
  6. Repeat all of Step 5 on the node that originally contained the primary replica.
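
    For example, with a hypothetical device group named dg1, the repeated sequence on the original primary node would look like the following sketch:


    # /usr/symcli/bin/symrdf -g dg1 verify -synchronized
    # /usr/symcli/bin/symdg show dg1
    # /usr/symcli/bin/symrdf -g dg1 failover
    # /usr/symcli/bin/symrdf -g dg1 swap -refresh R1
    # /usr/symcli/bin/symrdf -g dg1 establish
    # /usr/symcli/bin/symrdf -g dg1 verify -synchronized
    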

Next Steps

After you have configured a device group for your EMC SRDF replicated device, you must configure the device identifier (DID) driver that the replicated device uses.

How to Configure DID Devices for Replication Using EMC Symmetrix Remote Data Facility (SRDF)

This procedure configures the device identifier (DID) driver that the replicated device uses.

Before You Begin

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on any node of the cluster.

  2. Determine which DID devices correspond to the configured RDF1 and RDF2 devices.


    # /usr/symcli/bin/symdg show group-name
    

    Note –

    If your system does not display the entire Solaris device path, set the environment variable SYMCLI_FULL_PDEVNAME to 1 and rerun the symdg show command.


  3. Determine which DID devices correspond to the Solaris devices.


    # cldevice list -v
    
  4. For each pair of matched DID devices, combine the instances into a single replicated DID device. Run the following command from the RDF2/secondary side.


    # cldevice combine -t srdf -g replication-device-group \
     -d destination-instance source-instance
    

    Note –

    The -T option is not supported for SRDF data replication devices.


    -t replication-type

    Specifies the replication type. For EMC SRDF, type srdf.

    -g replication-device-group

    Specifies the name of the device group as shown in the symdg show command.

    -d destination-instance

    Specifies the DID instance that corresponds to the RDF1 device.

    source-instance

    Specifies the DID instance that corresponds to the RDF2 device.


    Note –

    If you combine the wrong DID device, use the -b option for the scdidadm command to undo the combining of two DID devices.


    # scdidadm -b device 
    
    -b device

    The DID instance that corresponded to the destination instance (-d destination-instance) when the instances were combined.


  5. If the name of a replication device group changes, additional steps are required for Hitachi TrueCopy and SRDF. After you complete steps 1 through 4, perform the appropriate additional step.

    TrueCopy: If the name of the replication device group (and the corresponding global device group) changes, you must rerun the cldevice replicate command to update the replicated device information.

    SRDF: If the name of the replication device group (and the corresponding global device group) changes, you must update the replicated device information by first using the scdidadm -b command to remove the existing information, and then using the cldevice combine command to create a new, updated device, as shown in the sketch that follows.
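
    A hypothetical sketch of this SRDF recovery, reusing the DID instances from Example 5–18 later in this chapter (d217 as the combined destination instance, d108 as the source instance) and the group name dg1. Substitute your renamed replication device group, and confirm the scdidadm device argument form against the syntax shown in Step 4.


    # scdidadm -b d217
    # cldevice combine -t srdf -g dg1 -d d217 d108
    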

  6. Verify that the DID instances have been combined.


    # cldevice list -v device
    
  7. Verify that the SRDF replication is set.


    # cldevice show device
    
  8. On all nodes, verify that the DID devices for all combined DID instances are accessible.


    # cldevice list -v
    
Next Steps

After you have configured the device identifier (DID) driver that the replicated device uses, you must verify the EMC SRDF replicated global device group configuration.

How to Verify EMC Symmetrix Remote Data Facility (SRDF) Replicated Global Device Group Configuration

Before You Begin

Before you verify the global device group, you must first create it. For information about creating a Solaris Volume Manager device group, see How to Add and Register a Device Group (Solaris Volume Manager). For information about creating a Veritas Volume Manager device group, see SPARC: How to Create a New Disk Group When Encapsulating Disks (Veritas Volume Manager).

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Verify that the primary device group corresponds to the same node as the node that contains the primary replica.


    # symdg show group-name
    # cldevicegroup status -n nodename group-name
    
  2. Perform a trial switchover to ensure that the device groups are configured correctly and the replicas can move between nodes.

    If the device group is offline, bring it online.


    # cldevicegroup switch -n nodename group-name
    
    -n nodename

    The node to which the device group is switched. This node becomes the new primary.

  3. Verify that the switchover was successful by comparing the output of the following commands.


    # symdg show group-name
    # cldevicegroup status -n nodename group-name
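
    For example, a trial switchover sketch that uses the hypothetical device group dg1 and node pmoney3 from the examples that follow:


    # cldevicegroup switch -n pmoney3 dg1
    # cldevicegroup status -n pmoney3 dg1
    # symdg show dg1
    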
    

Example: Configuring an SRDF Replication Group for Sun Cluster

This example completes the Sun Cluster specific steps necessary to set up SRDF replication in your cluster. The example assumes that you have already installed the SRDF software on your storage devices and nodes, as listed in Table 5–3.

This example involves a four-node cluster where two nodes are connected to one Symmetrix and the other two nodes are connected to the second Symmetrix. The SRDF device group is called dg1.


Example 5–15 Creating Replica Pairs

Run the following command on all nodes.


# symcfg discover
! This operation might take up to a few minutes.
# symdev list pd

Symmetrix ID: 000187990182

        Device Name          Directors                   Device                
--------------------------- ------------ --------------------------------------
                                                                           Cap 
Sym  Physical               SA :P DA :IT  Config        Attribute    Sts   (MB)
--------------------------- ------------- -------------------------------------

0067 c5t600604800001879901* 16D:0 02A:C1  RDF2+Mir      N/Grp'd      RW    4315
0068 c5t600604800001879901* 16D:0 16B:C0  RDF1+Mir      N/Grp'd      RW    4315
0069 c5t600604800001879901* 16D:0 01A:C0  RDF1+Mir      N/Grp'd      RW    4315
...

On all nodes on the RDF1 side, type:


# symdg -type RDF1 create dg1
# symld -g dg1 add dev 0067

On all nodes on the RDF2 side, type:


# symdg -type RDF2 create dg1
# symld -g dg1 add dev 0067


Example 5–16 Verifying Data Replication Setup

From one node in the cluster, type:


# symdg show dg1

Group Name:  dg1

    Group Type                                   : RDF1     (RDFA)
    Device Group in GNS                          : No
    Valid                                        : Yes
    Symmetrix ID                                 : 000187900023
    Group Creation Time                          : Thu Sep 13 13:21:15 2007
    Vendor ID                                    : EMC Corp
    Application ID                               : SYMCLI

    Number of STD Devices in Group               :    1
    Number of Associated GK's                    :    0
    Number of Locally-associated BCV's           :    0
    Number of Locally-associated VDEV's          :    0
    Number of Remotely-associated BCV's (STD RDF):    0
    Number of Remotely-associated BCV's (BCV RDF):    0
    Number of Remotely-assoc'd RBCV's (RBCV RDF) :    0

    Standard (STD) Devices (1):
        {
        --------------------------------------------------------------------
                                                      Sym               Cap 
        LdevName              PdevName                Dev  Att. Sts     (MB)
        --------------------------------------------------------------------
        DEV001                /dev/rdsk/c5t6006048000018790002353594D303637d0s2 0067      RW      4315
        }

    Device Group RDF Information
...
# symrdf -g dg1 establish

Execute an RDF 'Incremental Establish' operation for device
group 'dg1' (y/[n]) ? y

An RDF 'Incremental Establish' operation execution is
in progress for device group 'dg1'. Please wait...

    Write Disable device(s) on RA at target (R2)..............Done.
    Suspend RDF link(s).......................................Done.
    Mark target (R2) devices to refresh from source (R1)......Started.
    Device: 0067 ............................................ Marked.
    Mark target (R2) devices to refresh from source (R1)......Done.
    Merge device track tables between source and target.......Started.
    Device: 0067 ............................................ Merged.
    Merge device track tables between source and target.......Done.
    Resume RDF link(s)........................................Started.
    Resume RDF link(s)........................................Done.

The RDF 'Incremental Establish' operation successfully initiated for
device group 'dg1'.

#  
# symrdf -g dg1 query  


Device Group (DG) Name             : dg1
DG's Type                          : RDF2
DG's Symmetrix ID                  : 000187990182


       Target (R2) View                 Source (R1) View     MODES           
--------------------------------    ------------------------ ----- ------------
             ST                  LI      ST                                    
Standard      A                   N       A                                   
Logical       T  R1 Inv   R2 Inv  K       T  R1 Inv   R2 Inv       RDF Pair    
Device  Dev   E  Tracks   Tracks  S Dev   E  Tracks   Tracks MDA   STATE       
-------------------------------- -- ------------------------ ----- ------------

DEV001  0067 WD       0        0 RW 0067 RW       0        0 S..   Synchronized

Total          -------- --------           -------- --------
  MB(s)             0.0      0.0                0.0      0.0

Legend for MODES:

 M(ode of Operation): A = Async, S = Sync, E = Semi-sync, C = Adaptive Copy
 D(omino)           : X = Enabled, . = Disabled
 A(daptive Copy)    : D = Disk Mode, W = WP Mode, . = ACp off

# 


Example 5–17 Displaying DIDs Corresponding to the Disks Used

The same procedure applies to the RDF1 and RDF2 sides.

You can look under the PdevName field in the output of the symdg show command.

On the RDF1 side, type:


# symdg show dg1

Group Name:  dg1

    Group Type                                   : RDF1     (RDFA)
...
    Standard (STD) Devices (1):
        {
        --------------------------------------------------------------------
                                                      Sym               Cap 
        LdevName              PdevName                Dev  Att. Sts     (MB)
        --------------------------------------------------------------------
        DEV001                /dev/rdsk/c5t6006048000018790002353594D303637d0s2 0067      RW      4315
        }

    Device Group RDF Information
...

To obtain the corresponding DID, type:


# scdidadm -L | grep c5t6006048000018790002353594D303637d0
217      pmoney1:/dev/rdsk/c5t6006048000018790002353594D303637d0 /dev/did/rdsk/d217   
217      pmoney2:/dev/rdsk/c5t6006048000018790002353594D303637d0 /dev/did/rdsk/d217 
#

To list the corresponding DID, type:


# cldevice show d217

=== DID Device Instances ===                   

DID Device Name:                                /dev/did/rdsk/d217
  Full Device Path:                                pmoney2:/dev/rdsk/c5t6006048000018790002353594D303637d0
  Full Device Path:                                pmoney1:/dev/rdsk/c5t6006048000018790002353594D303637d0
  Replication:                                     none
  default_fencing:                                 global

# 

On the RDF2 side, you can look under the PdevName field in the output of the symdg show command. Type:


# symdg show dg1

Group Name:  dg1

    Group Type                                   : RDF2     (RDFA)
...
    Standard (STD) Devices (1):
        {
        --------------------------------------------------------------------
                                                      Sym               Cap 
        LdevName              PdevName                Dev  Att. Sts     (MB)
        --------------------------------------------------------------------
        DEV001                /dev/rdsk/c5t6006048000018799018253594D303637d0s2 0067      WD      4315
        }

    Device Group RDF Information
...

To obtain the corresponding DID, type:


# scdidadm -L | grep c5t6006048000018799018253594D303637d0
108      pmoney4:/dev/rdsk/c5t6006048000018799018253594D303637d0 /dev/did/rdsk/d108   
108      pmoney3:/dev/rdsk/c5t6006048000018799018253594D303637d0 /dev/did/rdsk/d108   
# 

To list the corresponding DID, type:


# cldevice show d108

=== DID Device Instances ===                   

DID Device Name:                                /dev/did/rdsk/d108
  Full Device Path:                                pmoney3:/dev/rdsk/c5t6006048000018799018253594D303637d0
  Full Device Path:                                pmoney4:/dev/rdsk/c5t6006048000018799018253594D303637d0
  Replication:                                     none
  default_fencing:                                 global

# 


Example 5–18 Combining DID instances

From the RDF2 side, type:


# cldevice combine -t srdf -g dg1 -d d217 d108
# 


Example 5–19 Displaying the Combined DIDs

From any node in the cluster, type:


# cldevice show d217 d108
cldevice:  (C727402) Could not locate instance "108".

=== DID Device Instances ===                   

DID Device Name:                                /dev/did/rdsk/d217
  Full Device Path:                                pmoney1:/dev/rdsk/c5t6006048000018790002353594D303637d0
  Full Device Path:                                pmoney2:/dev/rdsk/c5t6006048000018790002353594D303637d0
  Full Device Path:                                pmoney4:/dev/rdsk/c5t6006048000018799018253594D303637d0
  Full Device Path:                                pmoney3:/dev/rdsk/c5t6006048000018799018253594D303637d0
  Replication:                                     srdf
  default_fencing:                                 global

#