Oracle Solaris Cluster System Administration Guide (Oracle Solaris Cluster 4.1)

5.  Administering Global Devices, Disk-Path Monitoring, and Cluster File Systems
Administering Storage-Based Replicated Devices

You can configure an Oracle Solaris Cluster device group to contain devices that are replicated by using storage-based replication. Oracle Solaris Cluster software supports EMC Symmetrix Remote Data Facility software for storage-based replication.

Before you can replicate data with EMC Symmetrix Remote Data Facility software, you must be familiar with the storage-based replication documentation and have the storage-based replication product and the latest updates installed on your system. For information about installing the storage-based replication software, see the product documentation.

The storage-based replication software configures a pair of devices as replicas, with one device as the primary replica and the other device as the secondary replica. At any given time, the device attached to one set of nodes is the primary replica, and the device attached to the other set of nodes is the secondary replica.

In an Oracle Solaris Cluster configuration, the primary replica is automatically moved whenever the Oracle Solaris Cluster device group to which the replica belongs is moved. Therefore, never move the primary replica directly in an Oracle Solaris Cluster configuration. Instead, accomplish the takeover by moving the associated Oracle Solaris Cluster device group, as shown in the sketch below.
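For example, the following minimal sketch moves a device group, and with it the primary replica, to another node. The device group name dg1 and the node name phys-schost-2 are hypothetical placeholders.

# cldevicegroup switch -n phys-schost-2 dg1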



Caution - The name of the Oracle Solaris Cluster device group that you create (Solaris Volume Manager or raw-disk) must be the same as the name of the replicated device group.


This section contains the following procedures:

    How to Configure an EMC SRDF Replication Group
    How to Configure DID Devices for Replication Using EMC SRDF
    How to Verify EMC SRDF Replicated Global Device Group Configuration
    Example: Configuring an SRDF Replication Group for Oracle Solaris Cluster
    How to Recover EMC SRDF Data after a Primary Room's Complete Failure

Administering EMC Symmetrix Remote Data Facility Replicated Devices

The following table lists the tasks you must perform to set up and manage an EMC Symmetrix Remote Data Facility (SRDF) storage-based replicated device.

Table 5-2 Task Map: Administering an EMC SRDF Storage-Based Replicated Device

Task: Install the SRDF software on your storage device and nodes
Instructions: The documentation that shipped with your EMC storage device

Task: Configure the EMC replication group
Instructions: How to Configure an EMC SRDF Replication Group

Task: Configure the DID device
Instructions: How to Configure DID Devices for Replication Using EMC SRDF

Task: Register the replicated group
Instructions: How to Add and Register a Device Group (Solaris Volume Manager), How to Add and Register a Device Group (Raw-Disk), or How to Add and Register a Replicated Device Group (ZFS)

Task: Verify the configuration
Instructions: How to Verify EMC SRDF Replicated Global Device Group Configuration

Task: Manually recover data after a campus cluster's primary room completely fails
Instructions: How to Recover EMC SRDF Data after a Primary Room's Complete Failure

How to Configure an EMC SRDF Replication Group

Before You Begin



Caution - The name of the Oracle Solaris Cluster device group that you create (Solaris Volume Manager or raw-disk) must be the same as the name of the replicated device group.


  1. Assume a role that provides solaris.cluster.modify RBAC authorization on all nodes connected to the storage array.
  2. On each node configured with the replicated data, discover the Symmetrix device configuration.

    This might take a few minutes.

    # /usr/symcli/bin/symcfg discover
  3. If you have not already created the replica pairs, create them now.

    Use the symrdf command to create your replica pairs. For instructions on creating the replica pairs, refer to your SRDF documentation.

  4. On each node configured with replicated devices, verify that data replication is set up correctly.
    # /usr/symcli/bin/symdg show group-name
  5. Perform a swap of the device group. (A consolidated command sketch follows this procedure.)
    1. Verify that the primary and secondary replicas are synchronized.
      # /usr/symcli/bin/symrdf -g group-name verify -synchronized
    2. Determine which node contains the primary replica and which node contains the secondary replica by using the symdg show command.
      # /usr/symcli/bin/symdg show group-name

      The node with the RDF1 device contains the primary replica and the node with the RDF2 device contains the secondary replica.

    3. Enable the secondary replica.
      # /usr/symcli/bin/symrdf -g group-name failover
    4. Swap the RDF1 and RDF2 devices.
      # /usr/symcli/bin/symrdf -g group-name swap -refresh R1
    5. Enable the replica pair.
      # /usr/symcli/bin/symrdf -g group-name establish
    6. Verify that the primary and secondary replicas are synchronized.
      # /usr/symcli/bin/symrdf -g group-name verify -synchronized
  6. Repeat all of Step 5 on the node that originally contained the primary replica.
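The following is a consolidated sketch of the swap in Step 5, using the commands from this procedure with a hypothetical device group named dg1. Verify synchronization before and after the swap, and check the symdg show output between commands to confirm which side holds the RDF1 device.

# /usr/symcli/bin/symrdf -g dg1 verify -synchronized
# /usr/symcli/bin/symdg show dg1
# /usr/symcli/bin/symrdf -g dg1 failover
# /usr/symcli/bin/symrdf -g dg1 swap -refresh R1
# /usr/symcli/bin/symrdf -g dg1 establish
# /usr/symcli/bin/symrdf -g dg1 verify -synchronized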

Next Steps

After you have configured a device group for your EMC SRDF replicated device, you must configure the device identifier (DID) driver that the replicated device uses.

How to Configure DID Devices for Replication Using EMC SRDF

This procedure configures the device identifier (DID) driver that the replicated device uses.

Before You Begin

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
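For example, the cldevice command has the documented short form cldev, so the following two commands are equivalent; check the Oracle Solaris Cluster man pages on your system if you prefer the short forms.

# cldevice list -v
# cldev list -v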

  1. Assume a role that provides solaris.cluster.modify RBAC authorization on any node of the cluster.
  2. Determine which DID devices correspond to the configured RDF1 and RDF2 devices.
    # /usr/symcli/bin/symdg show group-name

    Note - If your system does not display the entire Oracle Solaris device path, set the environment variable SYMCLI_FULL_PDEVNAME to 1 and rerun the symdg show command (see the sketch after this procedure).


  3. Determine which DID devices correspond to the Oracle Solaris devices.
    # cldevice list -v
  4. For each pair of matched DID devices, combine the instances into a single replicated DID device. Run the following command from the RDF2/secondary side.
    # cldevice combine -t srdf -g replication-device-group \
    -d destination-instance source-instance

    Note - The -T option is not supported for SRDF data replication devices.


    -t replication-type

    Specifies the replication type. For EMC SRDF, type srdf.

    -g replication-device-group

    Specifies the name of the device group as shown in the symdg show command.

    -d destination-instance

    Specifies the DID instance that corresponds to the RDF1 device.

    source-instance

    Specifies the DID instance that corresponds to the RDF2 device.


    Note - If you combine the wrong DID device, use the -b option for the scdidadm command to undo the combining of two DID devices.

    # scdidadm -b device
    -b device

      The DID instance that corresponded to the destination instance when the instances were combined.


  5. If the name of a replication device group changes, additional steps are required for SRDF. After you complete Steps 1 through 4, perform the following additional step.

    SRDF: If the name of the replication device group (and the corresponding global device group) changes, you must update the replicated device information. First use the scdidadm -b command to remove the existing information, then use the cldevice combine command to create a new, updated device.
  6. Verify that the DID instances have been combined.
    # cldevice list -v device
  7. Verify that the SRDF replication is set.
    # cldevice show device
  8. On all nodes, verify that the DID devices for all combined DID instances are accessible.
    # cldevice list -v
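As noted in Step 2, if the full Oracle Solaris device path is not displayed, set the SYMCLI_FULL_PDEVNAME environment variable and rerun the query. A minimal sketch, assuming a Bourne-compatible root shell and the hypothetical device group dg1:

# SYMCLI_FULL_PDEVNAME=1
# export SYMCLI_FULL_PDEVNAME
# /usr/symcli/bin/symdg show dg1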

Next Steps

After you have configured the device identifier (DID) driver that the replicated device uses, you must verify the EMC SRDF replicated global device group configuration.

How to Verify EMC SRDF Replicated Global Device Group Configuration

Before You Begin

Before you verify the global device group, you must first create it. You can use device groups from Solaris Volume Manager, ZFS, or raw-disk. For more information, consult the following procedures (a minimal raw-disk sketch also appears below):

    How to Add and Register a Device Group (Solaris Volume Manager)
    How to Add and Register a Replicated Device Group (ZFS)
    How to Add and Register a Device Group (Raw-Disk)
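For illustration only, a minimal sketch of creating a raw-disk device group whose name matches the replicated device group. The node names phys-campus-1 and phys-campus-2, the combined DID device d217, and the group name dg1 are hypothetical; see How to Add and Register a Device Group (Raw-Disk) for the authoritative steps and options.

# cldevicegroup create -n phys-campus-1,phys-campus-2 -t rawdisk -d d217 dg1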



Caution - The name of the Oracle Solaris Cluster device group that you created (Solaris Volume Manager or raw-disk) must be the same as the name of the replicated device group.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.

  1. Verify that the primary device group corresponds to the same node as the node that contains the primary replica.
    # symdg show group-name
    # cldevicegroup status -n nodename group-name
  2. Perform a trial switchover to ensure that the device groups are configured correctly and the replicas can move between nodes.

    If the device group is offline, bring it online.

    # cldevicegroup switch -n nodename group-name
    -n nodename

    The node to which the device group is switched. This node becomes the new primary.

  3. Verify that the switchover was successful by comparing the output of the following commands.
    # symdg show group-name
    # cldevicegroup status -n nodename group-name
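A minimal end-to-end sketch of this verification, assuming the hypothetical device group dg1 and nodes phys-campus-1 (the current primary) and phys-campus-2. The trial switchover makes phys-campus-2 the new primary.

# symdg show dg1
# cldevicegroup status -n phys-campus-1 dg1
# cldevicegroup switch -n phys-campus-2 dg1
# symdg show dg1
# cldevicegroup status -n phys-campus-2 dg1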

Example: Configuring an SRDF Replication Group for Oracle Solaris Cluster

This example walks through the Oracle Solaris Cluster-specific steps necessary to set up SRDF replication in your cluster. The example assumes that you have already installed the SRDF software on your storage devices and cluster nodes, as described in the documentation that shipped with your EMC storage device.

This example involves a four-node cluster in which two nodes are connected to one Symmetrix array and the other two nodes are connected to a second Symmetrix array. The SRDF device group is called dg1.

Example 5-1 Creating Replica Pairs

Run the following command on all nodes.

# symcfg discover
! This operation might take up to a few minutes.
# symdev list pd

Symmetrix ID: 000187990182

        Device Name          Directors                   Device                
--------------------------- ------------ --------------------------------------
                                                                           Cap 
Sym  Physical               SA :P DA :IT  Config        Attribute    Sts   (MB)
--------------------------- ------------- -------------------------------------

0067 c5t600604800001879901* 16D:0 02A:C1  RDF2+Mir      N/Grp'd      RW    4315
0068 c5t600604800001879901* 16D:0 16B:C0  RDF1+Mir      N/Grp'd      RW    4315
0069 c5t600604800001879901* 16D:0 01A:C0  RDF1+Mir      N/Grp'd      RW    4315
...

On all nodes on the RDF1 side, type:

# symdg -type RDF1 create dg1
# symld -g dg1 add dev 0067

On all nodes on the RDF2 side, type:

# symdg -type RDF2 create dg1
# symld -g dg1 add dev 0067

Example 5-2 Verifying Data Replication Setup

From one node in the cluster, type:

# symdg show dg1

Group Name:  dg1

    Group Type                                   : RDF1     (RDFA)
    Device Group in GNS                          : No
    Valid                                        : Yes
    Symmetrix ID                                 : 000187900023
    Group Creation Time                          : Thu Sep 13 13:21:15 2007
    Vendor ID                                    : EMC Corp
    Application ID                               : SYMCLI

    Number of STD Devices in Group               :    1
    Number of Associated GK's                    :    0
    Number of Locally-associated BCV's           :    0
    Number of Locally-associated VDEV's          :    0
    Number of Remotely-associated BCV's (STD RDF):    0
    Number of Remotely-associated BCV's (BCV RDF):    0
    Number of Remotely-assoc'd RBCV's (RBCV RDF) :    0

    Standard (STD) Devices (1):
        {
        --------------------------------------------------------------------
                                                      Sym               Cap 
        LdevName              PdevName                Dev  Att. Sts     (MB)
        --------------------------------------------------------------------
        DEV001                /dev/rdsk/c5t6006048000018790002353594D303637d0s2 0067      RW      4315
        }

    Device Group RDF Information
...
# symrdf -g dg1 establish

Execute an RDF 'Incremental Establish' operation for device
group 'dg1' (y/[n]) ? y

An RDF 'Incremental Establish' operation execution is
in progress for device group 'dg1'. Please wait...

    Write Disable device(s) on RA at target (R2)..............Done.
    Suspend RDF link(s).......................................Done.
    Mark target (R2) devices to refresh from source (R1)......Started.
    Device: 0067 ............................................ Marked.
    Mark target (R2) devices to refresh from source (R1)......Done.
    Merge device track tables between source and target.......Started.
    Device: 0067 ............................................ Merged.
    Merge device track tables between source and target.......Done.
    Resume RDF link(s)........................................Started.
    Resume RDF link(s)........................................Done.

The RDF 'Incremental Establish' operation successfully initiated for
device group 'dg1'.

#  
# symrdf -g dg1 query  


Device Group (DG) Name             : dg1
DG's Type                          : RDF2
DG's Symmetrix ID                  : 000187990182


       Target (R2) View                 Source (R1) View     MODES           
--------------------------------    ------------------------ ----- ------------
             ST                  LI      ST                                    
Standard      A                   N       A                                   
Logical       T  R1 Inv   R2 Inv  K       T  R1 Inv   R2 Inv       RDF Pair    
Device  Dev   E  Tracks   Tracks  S Dev   E  Tracks   Tracks MDA   STATE       
-------------------------------- -- ------------------------ ----- ------------

DEV001  0067 WD       0        0 RW 0067 RW       0        0 S..   Synchronized

Total          -------- --------           -------- --------
  MB(s)             0.0      0.0                0.0      0.0

Legend for MODES:

 M(ode of Operation): A = Async, S = Sync, E = Semi-sync, C = Adaptive Copy
 D(omino)           : X = Enabled, . = Disabled
 A(daptive Copy)    : D = Disk Mode, W = WP Mode, . = ACp off

# 

Example 5-3 Displaying DIDs Corresponding to the Disks Used

The same procedure applies to the RDF1 and RDF2 sides.

You can find the Oracle Solaris device names under the PdevName field in the output of the symdg show dg1 command.

On the RDF1 side, type:

# symdg show dg1

Group Name:  dg1

    Group Type                                   : RDF1     (RDFA)
...
    Standard (STD) Devices (1):
        {
        --------------------------------------------------------------------
                                                      Sym               Cap 
        LdevName              PdevName                Dev  Att. Sts     (MB)
        --------------------------------------------------------------------
        DEV001                /dev/rdsk/c5t6006048000018790002353594D303637d0s2 0067      RW      4315
        }

    Device Group RDF Information
...

To obtain the corresponding DID, type:

# scdidadm -L | grep c5t6006048000018790002353594D303637d0
217      pmoney1:/dev/rdsk/c5t6006048000018790002353594D303637d0 /dev/did/rdsk/d217   
217      pmoney2:/dev/rdsk/c5t6006048000018790002353594D303637d0 /dev/did/rdsk/d217 
#

To list the corresponding DID, type:

# cldevice show d217

=== DID Device Instances ===                   

DID Device Name:                                /dev/did/rdsk/d217
  Full Device Path:                                pmoney2:/dev/rdsk/c5t6006048000018790002353594D303637d0
  Full Device Path:                                pmoney1:/dev/rdsk/c5t6006048000018790002353594D303637d0
  Replication:                                     none
  default_fencing:                                 global

# 

On the RDF2 side, you can likewise find the device names under the PdevName field in the output of the symdg show dg1 command. Type:

# symdg show dg1

Group Name:  dg1

    Group Type                                   : RDF2     (RDFA)
...
    Standard (STD) Devices (1):
        {
        --------------------------------------------------------------------
                                                      Sym               Cap 
        LdevName              PdevName                Dev  Att. Sts     (MB)
        --------------------------------------------------------------------
        DEV001                /dev/rdsk/c5t6006048000018799018253594D303637d0s2 0067      WD      4315
        }

    Device Group RDF Information
...

To obtain the corresponding DID, type:

# scdidadm -L | grep c5t6006048000018799018253594D303637d0
108      pmoney4:/dev/rdsk/c5t6006048000018799018253594D303637d0 /dev/did/rdsk/d108   
108      pmoney3:/dev/rdsk/c5t6006048000018799018253594D303637d0 /dev/did/rdsk/d108   
# 

To list the corresponding DID, type:

# cldevice show d108

=== DID Device Instances ===                   

DID Device Name:            /dev/did/rdsk/d108
  Full Device Path:               pmoney3:/dev/rdsk/c5t6006048000018799018253594D303637d0
  Full Device Path:               pmoney4:/dev/rdsk/c5t6006048000018799018253594D303637d0
  Replication:                    none
  default_fencing:                global

# 

Example 5-4 Combining DID Instances

From the RDF2 side, type:

# cldevice combine -t srdf -g dg1 -d d217 d108
# 

Example 5-5 Displaying the Combined DIDs

From any node in the cluster, type the following. Because the source instance d108 was combined into d217, the query for d108 now returns an error, which is expected:

# cldevice show d217 d108
cldevice:  (C727402) Could not locate instance "108".

=== DID Device Instances ===                   

DID Device Name:                                /dev/did/rdsk/d217
  Full Device Path:                                pmoney1:/dev/rdsk/c5t6006048000018790002353594D303637d0
  Full Device Path:                                pmoney2:/dev/rdsk/c5t6006048000018790002353594D303637d0
  Full Device Path:                                pmoney4:/dev/rdsk/c5t6006048000018799018253594D303637d0
  Full Device Path:                                pmoney3:/dev/rdsk/c5t6006048000018799018253594D303637d0
  Replication:                                     srdf
  default_fencing:                                 global

# 

How to Recover EMC SRDF Data after a Primary Room's Complete Failure

This procedure performs data recovery when a campus cluster's primary room fails completely, the primary room fails over to a secondary room, and then the primary room comes back online. The campus cluster's primary room is the primary node and storage site. The complete failure of a room includes the failure of both the host and the storage in that room. If the primary room fails, Oracle Solaris Cluster automatically fails over to the secondary room, makes the secondary room's storage device readable and writable, and enables the failover of the corresponding device groups and resource groups.

When the primary room returns online, you can manually recover the data from the SRDF device group that was written to the secondary room and resynchronize the data. This procedure recovers the SRDF device group by synchronizing the data from the original secondary room (this procedure uses phys-campus-2 for the secondary room) to the original primary room (phys-campus-1). The procedure also changes the SRDF device group type to RDF1 on phys-campus-2 and to RDF2 on phys-campus-1.

Before You Begin

Before you can perform a manual failover, you must configure the EMC replication group and DID devices and register the replicated device group. For information about creating a Solaris Volume Manager device group, see How to Add and Register a Device Group (Solaris Volume Manager).


Note - These instructions demonstrate one method you can use to manually recover SRDF data after the primary room fails completely and then comes back online. Check the EMC documentation for additional methods.


Log into the campus cluster's primary room to perform these steps. In the procedure below, dg1 is the SRDF device group name. At the time of the failure, the primary room in this procedure is phys-campus-1 and the secondary room is phys-campus-2.

  1. Log into the campus cluster's primary room and assume a role that provides solaris.cluster.modify RBAC authorization.
  2. From the primary room, use the symrdf command to query the replication status of the RDF devices and view information about those devices.
    phys-campus-1# symrdf -g dg1 query

    Tip - A device group that is in the split state is not synchronized.


  3. If the RDF pair state is split and the device group type is RDF1, then force a failover of the SRDF device group.
    phys-campus-1# symrdf -g dg1 -force failover
  4. View the status of the RDF devices.
    phys-campus-1# symrdf -g dg1 query
  5. After the failover, you can swap the data on the RDF devices that failed over.
    phys-campus-1# symrdf -g dg1 swap
  6. Verify the status and other information about the RDF devices.
    phys-campus-1# symrdf -g dg1 query
  7. Establish the SRDF device group in the primary room.
    phys-campus-1# symrdf -g dg1 establish
  8. Confirm that the device group is in a synchronized state and that the device group type is RDF2.
    phys-campus-1# symrdf -g dg1 query

Example 5-6 Manually Recovering EMC SRDF Data after a Primary Site Failover

This example provides the Oracle Solaris Cluster-specific steps necessary to manually recover EMC SRDF data after a campus cluster's primary room fails over, a secondary room takes over and records data, and then the primary room comes back online. In the example, the SRDF device group is called dg1 and the standard logical device is DEV001. The primary room is phys-campus-1 at the time of the failure, and the secondary room is phys-campus-2. Perform the steps from the campus cluster's primary room, phys-campus-1.

phys-campus-1# symrdf -g dg1 query | grep DEV
DEV001  0012 RW  0  0 NR 0012 RW  2031  O S.. Split

phys-campus-1# symdg list | grep RDF
dg1 RDF1  Yes  00187990182  1  0  0  0  0

phys-campus-1# symrdf -g dg1 -force failover
...

phys-campus-1# symrdf -g dg1 query | grep DEV
DEV001  0012  WD  0  0 NR 0012 RW  2031  O S..  Failed Over

phys-campus-1# symdg list | grep RDF
dg1  RDF1  Yes  00187990182  1  0  0  0  0

phys-campus-1# symrdf -g dg1 swap
...

phys-campus-1# symrdf -g dg1 query | grep DEV
DEV001  0012 WD  0  0 NR 0012 RW  0  2031 S.. Suspended

phys-campus-1# symdg list | grep RDF
dg1  RDF2  Yes  000187990182  1  0  0  0  0

phys-campus-1# symrdf -g dg1 establish
...

phys-campus-1# symrdf -g dg1 query | grep DEV
DEV001  0012 WD  0  0 RW 0012 RW  0  0 S.. Synchronized

phys-campus-1# symdg list | grep RDF
dg1  RDF2  Yes  000187990182  1  0  0  0  0