
Oracle® Solaris Cluster 4.3 System Administration Guide


Updated: June 2017
 
 

Configuring and Administering Storage-Based Replicated Devices

You can configure an Oracle Solaris Cluster device group to contain devices that are replicated by using storage-based replication. Oracle Solaris Cluster software supports EMC Symmetrix Remote Data Facility software for storage-based replication.

Before you can replicate data with EMC Symmetrix Remote Data Facility software, you must be familiar with the storage-based replication documentation and have the storage-based replication product and the latest updates installed on your system. For information about installing the storage-based replication software, see the product documentation.

The storage-based replication software configures a pair of devices as replicas, with one device as the primary replica and the other device as the secondary replica. At any given time, the device attached to one set of nodes is the primary replica, and the device attached to the other set of nodes is the secondary replica.

In an Oracle Solaris Cluster configuration, the primary replica is automatically moved whenever the Oracle Solaris Cluster device group to which the replica belongs is moved. Therefore, never move the primary replica directly in an Oracle Solaris Cluster configuration. Instead, perform the takeover by moving the associated Oracle Solaris Cluster device group.
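
For example, to perform a takeover you switch the device group itself rather than the replica. The following is a minimal sketch that assumes a hypothetical device group named dg1 and a node named phys-schost-2 that is attached to the secondary replica:

# cldevicegroup switch -n phys-schost-2 dg1
# cldevicegroup status dg1

Switching the device group to phys-schost-2 causes the cluster software to make the replica on that side the primary replica.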



Caution  -  The name of the Oracle Solaris Cluster device group that you create (Solaris Volume Manager or raw-disk) must be the same as the name of the replicated device group.


Administering EMC Symmetrix Remote Data Facility Replicated Devices

The following table lists the tasks you must perform to set up and manage an EMC Symmetrix Remote Data Facility (SRDF) storage-based replicated device.

Table 6  Task Map: Administering an EMC SRDF Storage-Based Replicated Device

Task: Install the SRDF software on your storage device and nodes.
Instructions: See the documentation that shipped with your EMC storage device.

Task: Configure the EMC replication group.
Instructions: How to Configure an EMC SRDF Replication Group

Task: Configure the DID device.
Instructions: How to Configure DID Devices for Replication Using EMC SRDF

Task: Register the replicated group.
Instructions: How to Add and Register a Device Group (Solaris Volume Manager)

Task: Verify the configuration.
Instructions: How to Verify EMC SRDF Replicated Global Device Group Configuration

Task: Manually recover data after a campus cluster's primary room completely fails.
Instructions: How to Recover EMC SRDF Data after a Primary Room's Complete Failure
How to Configure an EMC SRDF Replication Group

Before You Begin

  • EMC Solutions Enabler software must be installed on all cluster nodes before you configure an EMC Symmetrix Remote Data Facility (SRDF) replication group. First, configure the EMC SRDF device groups on shared disks in the cluster. For more information about how to configure the EMC SRDF device groups, see your EMC SRDF product documentation.

  • When using EMC SRDF, use dynamic devices instead of static devices. Static devices require several minutes to change the replication primary and can impact failover time.



Caution  -  The name of the Oracle Solaris Cluster device group that you create (Solaris Volume Manager or raw-disk) must be the same as the name of the replicated device group.


  1. Assume a role that provides solaris.cluster.modify authorization on all nodes connected to the storage array.
  2. For a three-site or three-data-center implementation using concurrent SRDF or cascaded devices, set the SYMAPI_2SITE_CLUSTER_DG parameter.

    Add the following entry to the Solutions Enabler options file on all participating cluster nodes:

    SYMAPI_2SITE_CLUSTER_DG=device-group:rdf-group-number
    device-group

    Specifies the name of the device group.

    rdf-group-number

    Specifies the RDF group that connects the host's local symmetrix to the second site's symmetrix.

    This entry enables the cluster software to automate the movement of the application between the two SRDF synchronous sites. A hypothetical sample entry is sketched at the end of this procedure.

    For more information about three-data-center configurations, see Three-Data-Center (3DC) Topologies in Oracle Solaris Cluster 4.3 Geographic Edition Overview.

  3. On each node configured with the replicated data, discover the symmetrix device configuration.

    This might take a few minutes.

    # /usr/symcli/bin/symcfg discover
  4. If you have not already created the replica pairs, create them now.

    Use the symrdf command to create your replica pairs. For instructions on creating the replica pairs, refer to your SRDF documentation.


    Note -  If using concurrent RDF devices for a three-site or three-data-center implementation, add the following parameter to all symrdf commands:
    -rdfg rdf-group-number

    Specifying the RDF group number to the symrdf command ensures that the symrdf operation is directed to the correct RDF group. A hypothetical example is sketched at the end of this procedure.


  5. On each node configured with replicated devices, verify that data replication is set up correctly.
    # /usr/symcli/bin/symdg show group-name
  6. Perform a swap of the device group.
    1. Verify that the primary and secondary replicas are synchronized.
      # /usr/symcli/bin/symrdf -g group-name verify -synchronized
    2. Determine which node contains the primary replica and which node contains the secondary replica by using the symdg show command.
      # /usr/symcli/bin/symdg show group-name

      The node with the RDF1 device contains the primary replica, and the node with the RDF2 device contains the secondary replica.

    3. Enable the secondary replica.
      # /usr/symcli/bin/symrdf -g group-name failover
    4. Swap the RDF1 and RDF2 devices.
      # /usr/symcli/bin/symrdf -g group-name swap -refresh R1
    5. Enable the replica pair.
      # /usr/symcli/bin/symrdf -g group-name establish
    6. Verify that the primary and secondary replicas are synchronized.
      # /usr/symcli/bin/symrdf -g group-name verify -synchronized
  7. Repeat all of step 5 on the node which originally had the primary replica.
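
As a sketch of the options file entry described in step 2, assume a hypothetical device group named dg1 whose RDF group to the second site is number 6. The entry added to the Solutions Enabler options file (commonly /var/symapi/config/options; check your Solutions Enabler documentation for the exact location on your system) would then be:

SYMAPI_2SITE_CLUSTER_DG=dg1:6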
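
For the concurrent RDF case described in the note in step 4, every symrdf command also carries the -rdfg parameter. A hypothetical example, again assuming device group dg1 and RDF group number 6:

# symrdf -g dg1 -rdfg 6 establish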

Next Steps

After you have configured a device group for your EMC SRDF replicated device, you must configure the device identifier (DID) driver that the replicated device uses.

How to Configure DID Devices for Replication Using EMC SRDF

This procedure configures the device identifier (DID) driver that the replicated device uses. Ensure that the specified DID device instances are replicas of each other and that they belong to the specified replication group.

Before You Begin

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
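
For example, the short form of the cldevice command is cldev, so the following two commands are equivalent:

# cldevice list -v
# cldev list -v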

  1. Assume a role that provides solaris.cluster.modify authorization on any node of the cluster.
  2. Determine which DID devices correspond to the configured RDF1 and RDF2 devices.
    # /usr/symcli/bin/symdg show group-name

    Note -  If your system does not display the entire Oracle Solaris device path, set the environment variable SYMCLI_FULL_PDEVNAME to 1 and rerun the symdg show command.
  3. Determine which DID devices correspond to the Oracle Solaris devices.
    # cldevice list -v
  4. For each pair of matched DID devices, combine the instances into a single replicated DID device. Run the following command from the RDF2/secondary side.
    # cldevice combine -t srdf -g replication-device-group \
    -d destination-instance source-instance

    Note -  The -T option is not supported for SRDF data replication devices.
    -t replication-type

    Specifies the replication type. For EMC SRDF, type SRDF.

    -g replication-device-group

    Specifies the name of the device group as shown in the symdg show command.

    -d destination-instance

    Specifies the DID instance that corresponds to the RDF1 device.

    source-instance

    Specifies the DID instance that corresponds to the RDF2 device.


    Note -  If you combine the wrong DID device, use the -b option of the scdidadm command to undo the combining of the two DID devices.
    # scdidadm -b device
    -b device

    The DID instance that corresponded to the destination instance when the instances were combined.


  5. If the name of a replication device group changes, perform the following additional steps.

    If the name of the replication device group (and the corresponding global device group) changes, you must update the replicated device information. First use the scdidadm -b command to remove the existing information, then use the cldevice combine command to create a new, updated device. See the sketch at the end of this procedure.

  6. Verify that the DID instances have been combined.
    # cldevice list -v device
  7. Verify that the SRDF replication is set.
    # cldevice show device
  8. On all nodes, verify that the DID devices for all combined DID instances are accessible.
    # cldevice list -v
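
As a sketch of the rename handling described in step 5, assume that the replication device group was renamed from dg1 to dg2 and that the combined DID instances are d217 and d108 (all names hypothetical). You would first remove the existing combined information and then combine the instances again under the new group name:

# scdidadm -b d217
# cldevice combine -t srdf -g dg2 -d d217 d108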

Next Steps

After you have configured the device identifier (DID) driver that the replicated device uses, you must verify the EMC SRDF replicated global device group configuration.

How to Verify EMC SRDF Replicated Global Device Group Configuration

Before You Begin

Before you verify the global device group, you must first create it. You can use device groups from Solaris Volume Manager, ZFS, or raw-disk. For more information, see the instructions for creating your type of device group, for example How to Add and Register a Device Group (Solaris Volume Manager).
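
For example, a raw-disk device group whose name matches the replicated device group can be created with the cldevicegroup create command. The following sketch assumes a hypothetical SRDF device group named dg1, cluster nodes phys-campus-1 and phys-campus-2, and a combined DID instance d217:

# cldevicegroup create -n phys-campus-1,phys-campus-2 -t rawdisk -d d217 dg1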



Caution  -  The name of the Oracle Solaris Cluster device group that you created (Solaris Volume Manager or raw-disk) must be the same as the name of the replicated device group.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.

  1. Verify that the primary device group corresponds to the same node as the node that contains the primary replica.
    # symdg show group-name
    # cldevicegroup status -n nodename group-name
  2. Perform a trial switchover to ensure that the device groups are configured correctly and the replicas can move between nodes.

    If the device group is offline, bring it online.

    # cldevicegroup switch -n nodename group-name
    -n nodename

    The node to which the device group is switched. This node becomes the new primary.

  3. Verify that the switchover was successful by comparing the output of the following commands.
    # symdg show group-name
    # cldevicegroup status -n nodename group-name
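
Tying the verification steps together, the following sketch assumes a hypothetical device group named dg1 whose primary replica is currently on node phys-campus-1, with node phys-campus-2 attached to the secondary replica:

# symdg show dg1
# cldevicegroup status -n phys-campus-1 dg1
# cldevicegroup switch -n phys-campus-2 dg1
# symdg show dg1
# cldevicegroup status -n phys-campus-2 dg1

After a successful switchover, both outputs should agree that phys-campus-2 now hosts the primary replica.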

Example: Configuring an SRDF Replication Group for Oracle Solaris Cluster

This example completes the Oracle Solaris Cluster-specific steps necessary to set up SRDF replication in your cluster. The example assumes that you have already performed the following tasks:

  • Completed pairing LUNs for replication between arrays.

  • Installed the SRDF software on your storage device and cluster nodes.

Example 30  Creating Replica Pairs

This example involves a four-node cluster where two nodes are connected to one symmetrix and the other two nodes are connected to the second symmetrix. The SRDF device group is called dg1.

Run the following commands on all nodes
# symcfg discover
! This operation might take up to a few minutes.

# symdev list pd

Symmetrix ID: 000187990182

        Device Name          Directors                   Device                
--------------------------- ------------ --------------------------------------
                                                                           Cap 
Sym  Physical               SA :P DA :IT  Config        Attribute    Sts   (MB)
--------------------------- ------------- -------------------------------------

0067 c5t600604800001879901* 16D:0 02A:C1  RDF2+Mir      N/Grp'd      RW    4315
0068 c5t600604800001879901* 16D:0 16B:C0  RDF1+Mir      N/Grp'd      RW    4315
0069 c5t600604800001879901* 16D:0 01A:C0  RDF1+Mir      N/Grp'd      RW    4315
…
On all nodes on the RDF1 side, run the following commands
# symdg -type RDF1 create dg1
# symld -g dg1 add dev 0067

On all nodes on the RDF2 side, run the following commands
# symdg -type RDF2 create dg1
# symld -g dg1 add dev 0067
Example 31  Verifying Data Replication Setup

The following commands are performed on one node of the cluster.

# symdg show dg1

Group Name:  dg1

    Group Type                                   : RDF1     (RDFA)
    Device Group in GNS                          : No
    Valid                                        : Yes
    Symmetrix ID                                 : 000187900023
    Group Creation Time                          : Thu Sep 13 13:21:15 2007
    Vendor ID                                    : EMC Corp
    Application ID                               : SYMCLI

    Number of STD Devices in Group               :    1
    Number of Associated GK's                    :    0
    Number of Locally-associated BCV's           :    0
    Number of Locally-associated VDEV's          :    0
    Number of Remotely-associated BCV's (STD RDF):    0
    Number of Remotely-associated BCV's (BCV RDF):    0
    Number of Remotely-assoc'd RBCV's (RBCV RDF) :    0

    Standard (STD) Devices (1):
        {
        ------------------------------------------------------------------
                                                    Sym               Cap 
        LdevName            PdevName                Dev  Att. Sts     (MB)
        ------------------------------------------------------------------
        DEV001              /dev/rdsk/c5t6006048000018790002353594D303637d0s2 0067   RW   4315
        }

    Device Group RDF Information
...
# symrdf -g dg1 establish

Execute an RDF 'Incremental Establish' operation for device
group 'dg1' (y/[n]) ? y

An RDF 'Incremental Establish' operation execution is
in progress for device group 'dg1'. Please wait...

    Write Disable device(s) on RA at target (R2)..............Done.
    Suspend RDF link(s).......................................Done.
    Mark target (R2) devices to refresh from source (R1)......Started.
    Device: 0067 ............................................ Marked.
    Mark target (R2) devices to refresh from source (R1)......Done.
    Merge device track tables between source and target.......Started.
    Device: 0067 ............................................ Merged.
    Merge device track tables between source and target.......Done.
    Resume RDF link(s)........................................Started.
    Resume RDF link(s)........................................Done.

The RDF 'Incremental Establish' operation successfully initiated for
device group 'dg1'.

#
# symrdf -g dg1 query

Device Group (DG) Name             : dg1
DG's Type                          : RDF2
DG's Symmetrix ID                  : 000187990182


       Target (R2) View                 Source (R1) View     MODES           
--------------------------------    ------------------------ ----- ------------
             ST                  LI      ST                                    
Standard      A                   N       A                                   
Logical       T  R1 Inv   R2 Inv  K       T  R1 Inv   R2 Inv       RDF Pair    
Device  Dev   E  Tracks   Tracks  S Dev   E  Tracks   Tracks MDA   STATE       
-------------------------------- -- ------------------------ ----- ------------

DEV001  0067 WD       0        0 RW 0067 RW       0        0 S..   Synchronized

Total          -------- --------           -------- --------
  MB(s)             0.0      0.0                0.0      0.0

Legend for MODES:

 M(ode of Operation): A = Async, S = Sync, E = Semi-sync, C = Adaptive Copy
 D(omino)           : X = Enabled, . = Disabled
 A(daptive Copy)    : D = Disk Mode, W = WP Mode, . = ACp off

# 
Example 32  Displaying DIDs Corresponding to the Disks Used

The same procedure applies to the RDF1 and RDF2 sides.

You can find the physical device names under the PdevName field in the output of the symdg show command.

Run these commands on the RDF1 side
# symdg show dg1

Group Name:  dg1

    Group Type                                   : RDF1     (RDFA)
...
    Standard (STD) Devices (1):
        {
        -----------------------------------------------------------------
                                                   Sym               Cap 
        LdevName           PdevName                Dev  Att. Sts     (MB)
        -----------------------------------------------------------------
        DEV001             /dev/rdsk/c5t6006048000018790002353594D303637d0s2 0067    RW   4315
        }

    Device Group RDF Information
…
Obtain the corresponding DID
# cldevice list | grep c5t6006048000018790002353594D303637d0
217      pmoney1:/dev/rdsk/c5t6006048000018790002353594D303637d0 /dev/did/rdsk/d217   
217      pmoney2:/dev/rdsk/c5t6006048000018790002353594D303637d0 /dev/did/rdsk/d217 

List the corresponding DID
# cldevice show d217

=== DID Device Instances ===                   

DID Device Name:                      /dev/did/rdsk/d217
  Full Device Path:                      pmoney2:/dev/rdsk/c5t6006048000018790002353594D303637d0
  Full Device Path:                      pmoney1:/dev/rdsk/c5t6006048000018790002353594D303637d0
  Replication:                           none
  default_fencing:                       global

# 
Run these commands on the RDF2 side
# symdg show dg1

Group Name:  dg1

    Group Type                                   : RDF2     (RDFA)
...
    Standard (STD) Devices (1):
        {
        -----------------------------------------------------------------
                                                   Sym               Cap 
        LdevName           PdevName                Dev  Att. Sts     (MB)
        -----------------------------------------------------------------
        DEV001             /dev/rdsk/c5t6006048000018799018253594D303637d0s2 0067    WD   4315
        }

    Device Group RDF Information
…
Obtain the corresponding DID
# cldevice list | grep c5t6006048000018799018253594D303637d0
108      pmoney4:/dev/rdsk/c5t6006048000018799018253594D303637d0 /dev/did/rdsk/d108   
108      pmoney3:/dev/rdsk/c5t6006048000018799018253594D303637d0 /dev/did/rdsk/d108   

List the corresponding DID
# cldevice show d108

=== DID Device Instances ===                   

DID Device Name:            /dev/did/rdsk/d108
  Full Device Path:               pmoney3:/dev/rdsk/c5t6006048000018799018253594D303637d0
  Full Device Path:               pmoney4:/dev/rdsk/c5t6006048000018799018253594D303637d0
  Replication:                    none
  default_fencing:                global

# 
Example 33  Combining DID instances

From the RDF2 side, type:

# cldevice combine -t srdf -g dg1 -d d217 d108
Example 34  Displaying the Combined DIDs

From any node in the cluster, type:

# cldevice show d217 d108
cldevice:  (C727402) Could not locate instance "108".

=== DID Device Instances ===                   

DID Device Name:                      /dev/did/rdsk/d217
  Full Device Path:                      pmoney1:/dev/rdsk/c5t6006048000018790002353594D303637d0
  Full Device Path:                      pmoney2:/dev/rdsk/c5t6006048000018790002353594D303637d0
  Full Device Path:                      pmoney4:/dev/rdsk/c5t6006048000018799018253594D303637d0
  Full Device Path:                      pmoney3:/dev/rdsk/c5t6006048000018799018253594D303637d0
  Replication:                           srdf
  default_fencing:                       global

How to Recover EMC SRDF Data after a Primary Room's Complete Failure

This procedure recovers data after a campus cluster's primary room fails completely, the cluster fails over to a secondary room, and then the primary room comes back online. The campus cluster's primary room is the primary node and storage site. The complete failure of a room includes the failure of both the host and the storage in that room. If the primary room fails, Oracle Solaris Cluster automatically fails over to the secondary room, makes the secondary room's storage device readable and writable, and enables the failover of the corresponding device groups and resource groups.

When the primary room returns online, you can manually recover the data from the SRDF device group that was written to the secondary room and resynchronize the data. This procedure recovers the SRDF device group by synchronizing the data from the original secondary room (this procedure uses phys-campus-2 for the secondary room) to the original primary room (phys-campus-1). The procedure also changes the SRDF device group type to RDF1 on phys-campus-2 and to RDF2 on phys-campus-1.

Before You Begin

You must configure the EMC replication group and DID devices, as well as register the EMC replication group before you can perform a manual failover. For information about creating a Solaris Volume Manager device group, see How to Add and Register a Device Group (Solaris Volume Manager).


Note -  These instructions demonstrate one method you can use to manually recover SRDF data after the primary room fails over completely and then comes back online. Check the EMC documentation for additional methods.

Log into the campus cluster's primary room to perform these steps. In the procedure below, dg1 is the SRDF device group name. At the time of the failure, the primary room in this procedure is phys-campus-1 and the secondary room is phys-campus-2.

  1. Log into the campus cluster's primary room and assume a role that provides solaris.cluster.modify authorization.
  2. From the primary room, use the symrdf command to query the replication status of the RDF devices and view information about those devices.
    phys-campus-1# symrdf -g dg1 query

    Tip  -  A device group that is in the split state is not synchronized.
  3. If the RDF pair state is split and the device group type is RDF1, then force a failover of the SRDF device group.
    phys-campus-1# symrdf -g dg1 -force failover
  4. View the status of the RDF devices.
    phys-campus-1# symrdf -g dg1 query
  5. After the failover, you can swap the RDF personalities of the devices that failed over.
    phys-campus-1# symrdf -g dg1 swap
  6. Verify the status and other information about the RDF devices.
    phys-campus-1# symrdf -g dg1 query
  7. Establish the SRDF device group in the primary room.
    phys-campus-1# symrdf -g dg1 establish
  8. Confirm that the device group is in a synchronized state and that the device group type is RDF2.
    phys-campus-1# symrdf -g dg1 query
Example 35  Manually Recovering EMC SRDF Data after a Primary Site Failover

This example provides the Oracle Solaris Cluster-specific steps necessary to manually recover EMC SRDF data after a campus cluster's primary room fails over, a secondary room takes over and records data, and then the primary room comes back online. In the example, the SRDF device group is called dg1 and the standard logical device is DEV001. The primary room is phys-campus-1 at the time of the failure, and the secondary room is phys-campus-2. Perform the steps from the campus cluster's primary room, phys-campus-1.

phys-campus-1# symrdf -g dg1 query | grep DEV
DEV001  0012 RW  0  0 NR 0012 RW  2031  O S..  Split

phys-campus-1# symdg list | grep RDF
dg1 RDF1  Yes  000187990182  1  0  0  0  0

phys-campus-1# symrdf -g dg1 -force failover
...

phys-campus-1# symrdf -g dg1 query | grep DEV
DEV001  0012  WD  0  0 NR 0012 RW  2031  O S..  Failed Over

phys-campus-1# symdg list | grep RDF
dg1  RDF1  Yes  000187990182  1  0  0  0  0

phys-campus-1# symrdf -g dg1 swap
...

phys-campus-1# symrdf -g dg1 query | grep DEV
DEV001  0012 WD  0  0 NR 0012 RW  0  2031 S.. Suspended

phys-campus-1# symdg list | grep RDF
dg1  RDF2  Yes  000187990182  1  0  0  0  0

phys-campus-1# symrdf -g dg1 establish
...

phys-campus-1# symrdf -g dg1 query | grep DEV
DEV001  0012 WD  0  0 RW 0012 RW  0  0 S.. Synchronized

phys-campus-1# symdg list | grep RDF
dg1  RDF2  Yes  000187990182  1  0  0  0  0