Oracle Solaris Cluster System Administration Guide (Oracle Solaris Cluster 4.1)
You can configure an Oracle Solaris Cluster device group to contain devices that are replicated by using storage-based replication. Oracle Solaris Cluster software supports EMC Symmetrix Remote Data Facility software for storage-based replication.
Before you can replicate data with EMC Symmetrix Remote Data Facility software, you must be familiar with the storage-based replication documentation and have the storage-based replication product and the latest updates installed on your system. For information about installing the storage-based replication software, see the product documentation.
The storage-based replication software configures a pair of devices as replicas, with one device as the primary replica and the other device as the secondary replica. At any given time, the device attached to one set of nodes is the primary replica, and the device attached to the other set of nodes is the secondary replica.
In an Oracle Solaris Cluster configuration, the primary replica is automatically moved whenever the Oracle Solaris Cluster device group to which the replica belongs is moved. Therefore, never move the primary replica directly in an Oracle Solaris Cluster configuration. Instead, accomplish the takeover by moving the associated Oracle Solaris Cluster device group.
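For example, rather than running an SRDF failover yourself, you would switch the device group and let the cluster move the primary replica. A minimal sketch, assuming a device group named dg1 and a node named phys-node-2 (both placeholder names):

# cldevicegroup switch -n phys-node-2 dg1    # placeholder node and device group names
# cldevicegroup status dg1                   # confirm that the new primary is phys-node-2

The SRDF role change then occurs as part of the device group switchover.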
Caution - The name of the Oracle Solaris Cluster device group that you create (Solaris Volume Manager or raw-disk) must be the same as the name of the replicated device group.
This section contains the following procedures:

How to Configure an EMC SRDF Replication Group
How to Configure DID Devices for Replication Using EMC SRDF
How to Verify EMC SRDF Replicated Global Device Group Configuration
Example: Configuring an SRDF Replication Group for Oracle Solaris Cluster
The following table lists the tasks you must perform to set up and manage an EMC Symmetrix Remote Data Facility (SRDF) storage-based replicated device.
Table 5-2 Task Map: Administering an EMC SRDF Storage-Based Replicated Device
Task                                                       Instructions
Configure an EMC SRDF replication group                    How to Configure an EMC SRDF Replication Group
Configure DID devices for replication using EMC SRDF       How to Configure DID Devices for Replication Using EMC SRDF
Verify the replicated global device group configuration    How to Verify EMC SRDF Replicated Global Device Group Configuration
Before You Begin
EMC Solutions Enabler software must be installed on all cluster nodes before you configure an EMC Symmetrix Remote Data Facility (SRDF) replication group. First, configure the EMC SRDF device groups on shared disks in the cluster. For more information about how to configure the EMC SRDF device groups, see your EMC SRDF product documentation.
When using EMC SRDF, use dynamic devices instead of static devices. Static devices require several minutes to change the replication primary and can impact failover time.
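If you are not sure whether a device is configured as a dynamic RDF device, you can inspect its properties. This is a sketch only: it assumes that your Solutions Enabler release reports a Dynamic RDF Capability field in symdev show output, and it reuses the Symmetrix ID and device number from Example 5-1.

# symdev -sid 000187990182 show 0067 | grep -i dynamic    # SID and device number from Example 5-1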
Caution - The name of the Oracle Solaris Cluster device group that you create (Solaris Volume Manager or raw-disk) must be the same as the name of the replicated device group.
On all nodes, discover the Symmetrix devices. This might take a few minutes.
# /usr/symcli/bin/symcfg discover
Use the symrdf command to create your replica pairs. For instructions on creating the replica pairs, refer to your SRDF documentation.
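One common form of the pairing command is shown below as a sketch only; the device-pairs file, Symmetrix ID, and RA group number are placeholders, and the exact options depend on your configuration (see the SRDF documentation):

# symrdf createpair -file /tmp/devpairs.txt -sid 000187990182 -rdfg 1 -type RDF1 -establish    # placeholder file, SID, and RA group

In this sketch, /tmp/devpairs.txt lists one local and one remote Symmetrix device number per line (for example, 0067 0067).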
On each node configured with replicated devices, verify that data replication is set up correctly:
# /usr/symcli/bin/symdg show group-name
Verify that the primary and secondary replicas are synchronized:
# /usr/symcli/bin/symrdf -g group-name verify -synchronized
Determine which node contains the primary replica and which node contains the secondary replica:
# /usr/symcli/bin/symdg show group-name
The node with the RDF1 device contains the primary replica, and the node with the RDF2 device contains the secondary replica.
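A quick way to check which role a group currently holds on a given node is to list the device group type, as Example 5-6 also does (dg1 is the device group used in the examples in this section); the output resembles the following:

# symdg list | grep RDF
dg1  RDF1  Yes  00187990182  1  0  0  0  0

A group type of RDF1 indicates that this side holds the primary replica; RDF2 indicates the secondary replica.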
Enable the secondary replica by performing a failover:
# /usr/symcli/bin/symrdf -g group-name failover
Swap the RDF1 and RDF2 devices:
# /usr/symcli/bin/symrdf -g group-name swap -refresh R1
Reestablish the replica pair:
# /usr/symcli/bin/symrdf -g group-name establish
Verify that the primary and secondary replicas are synchronized:
# /usr/symcli/bin/symrdf -g group-name verify -synchronized
Next Steps
After you have configured a device group for your EMC SRDF replicated device, you must configure the device identifier (DID) driver that the replicated device uses.
This procedure configures the device identifier (DID) driver that the replicated device uses.
Before You Begin
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
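For example, the short form of the cldevice command is cldev and the short form of cldevicegroup is cldg, so the following two commands are equivalent:

# cldevice list -v
# cldev list -v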
From one node of the cluster, display the devices that are associated with the replication device group:
# /usr/symcli/bin/symdg show group-name
Note - If your system does not display the entire Oracle Solaris device path, set the environment variable SYMCLI_FULL_PDEVNAME to 1 and rerun the symdg show command.
Determine which DID devices correspond to the RDF1 and RDF2 devices:
# cldevice list -v
Combine the DID instance that corresponds to the RDF1 device with the DID instance that corresponds to the RDF2 device:
# cldevice combine -t srdf -g replication-device-group \
  -d destination-instance source-instance
Note - The -T option is not supported for SRDF data replication devices.
-t replication-type
    Specifies the replication type. For EMC SRDF, type SRDF.
-g replication-device-group
    Specifies the name of the device group as shown in the symdg show command.
-d destination-instance
    Specifies the DID instance that corresponds to the RDF1 device.
source-instance
    Specifies the DID instance that corresponds to the RDF2 device.
Note - If you combine the wrong DID device, use the -b option for the scdidadm command to undo the combining of two DID devices.
# scdidadm -b device
device
    The DID instance that corresponded to the destination instance when the instances were combined.
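For example, if the combine operation from Example 5-4 (destination instance d217, source instance d108) had paired the wrong devices, you would undo it by passing the destination instance to scdidadm:

# scdidadm -b d217    # d217 is the destination instance from Example 5-4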
Verify that the DID instances have been combined:
# cldevice list -v device
Verify that the replication type is set to srdf:
# cldevice show device
# cldevice list -v
Next Steps
After you have configured the device identifier (DID) driver that the replicated device uses, you must verify the EMC SRDF replicated global device group configuration.
Before You Begin
Before you verify the global device group, you must first create it. You can use device groups from Solaris Volume Manager, ZFS, or raw-disk. For more information, consult the following:

How to Add and Register a Device Group (Solaris Volume Manager)
How to Add and Register a Replicated Device Group (ZFS)
How to Add and Register a Device Group (Raw-Disk)
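As an illustration only, a raw-disk device group for the combined DID device could be created as follows; the node names, device group name, and DID instance are placeholders, and the full steps are in How to Add and Register a Device Group (Raw-Disk):

# cldevicegroup create -n phys-node-1,phys-node-2 -t rawdisk -d d217 rawdg    # placeholder nodes, DID instance, and group name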
Caution - The name of the Oracle Solaris Cluster device group that you created (Solaris Volume Manager or raw-disk) must be the same as the name of the replicated device group.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
Verify that the device group primary corresponds to the node that contains the primary replica:
# symdg show group-name
# cldevicegroup status -n nodename group-name
Perform a trial switchover to ensure that the device group is configured correctly and that the replicas can move between nodes. If the device group is offline, bring it online.
# cldevicegroup switch -n nodename group-name
-n nodename
    The node to which the device group is switched. This node becomes the new primary.
Verify that the switchover was successful by comparing the output of the following commands:
# symdg show group-name
# cldevicegroup status -n nodename group-name
This example completes the Oracle Solaris Cluster-specific steps necessary to set up SRDF replication in your cluster. The example assumes that you have already performed the following tasks:
Completed pairing LUNs for replication between arrays.
Installed the SRDF software on your storage device and cluster nodes.
This example involves a four-node cluster where two nodes are connected to one Symmetrix and the other two nodes are connected to the second Symmetrix. The SRDF device group is called dg1.
Example 5-1 Creating Replica Pairs
Run the following command on all nodes.
# symcfg discover
! This operation might take up to a few minutes.
# symdev list pd

Symmetrix ID: 000187990182

        Device Name          Directors                  Device
--------------------------- ------------ --------------------------------------
                                                                          Cap
 Sym  Physical               SA :P DA :IT  Config       Attribute    Sts  (MB)
--------------------------- ------------- -------------------------------------
0067  c5t600604800001879901* 16D:0 02A:C1  RDF2+Mir     N/Grp'd      RW   4315
0068  c5t600604800001879901* 16D:0 16B:C0  RDF1+Mir     N/Grp'd      RW   4315
0069  c5t600604800001879901* 16D:0 01A:C0  RDF1+Mir     N/Grp'd      RW   4315
...
On all nodes on the RDF1 side, type:
# symdg -type RDF1 create dg1
# symld -g dg1 add dev 0067
On all nodes on the RDF2 side, type:
# symdg -type RDF2 create dg1
# symld -g dg1 add dev 0067
Example 5-2 Verifying Data Replication Setup
From one node in the cluster, type:
# symdg show dg1

Group Name:  dg1

    Group Type                                   : RDF1     (RDFA)
    Device Group in GNS                          : No
    Valid                                        : Yes
    Symmetrix ID                                 : 000187900023
    Group Creation Time                          : Thu Sep 13 13:21:15 2007
    Vendor ID                                    : EMC Corp
    Application ID                               : SYMCLI

    Number of STD Devices in Group               : 1
    Number of Associated GK's                    : 0
    Number of Locally-associated BCV's           : 0
    Number of Locally-associated VDEV's          : 0
    Number of Remotely-associated BCV's (STD RDF): 0
    Number of Remotely-associated BCV's (BCV RDF): 0
    Number of Remotely-assoc'd RBCV's (RBCV RDF) : 0

    Standard (STD) Devices (1):
        {
        --------------------------------------------------------------------
                                                                 Sym       Cap
        LdevName      PdevName                                   Dev  Att. Sts  (MB)
        --------------------------------------------------------------------
        DEV001        /dev/rdsk/c5t6006048000018790002353594D303637d0s2 0067  RW  4315
        }

    Device Group RDF Information
...
# symrdf -g dg1 establish

Execute an RDF 'Incremental Establish' operation for device
group 'dg1' (y/[n]) ? y

An RDF 'Incremental Establish' operation execution is
in progress for device group 'dg1'. Please wait...

    Write Disable device(s) on RA at target (R2)..............Done.
    Suspend RDF link(s).......................................Done.
    Mark target (R2) devices to refresh from source (R1)......Started.
    Device: 0067 ............................................ Marked.
    Mark target (R2) devices to refresh from source (R1)......Done.
    Merge device track tables between source and target.......Started.
    Device: 0067 ............................................ Merged.
    Merge device track tables between source and target.......Done.
    Resume RDF link(s)........................................Started.
    Resume RDF link(s)........................................Done.

The RDF 'Incremental Establish' operation successfully initiated for
device group 'dg1'.

#
# symrdf -g dg1 query

Device Group (DG) Name             : dg1
DG's Type                          : RDF2
DG's Symmetrix ID                  : 000187990182

       Target (R2) View                 Source (R1) View     MODES
--------------------------------    ------------------------ ----- ------------
             ST                  LI      ST
Standard      A                   N       A
Logical       T  R1 Inv   R2 Inv  K       T  R1 Inv   R2 Inv       RDF Pair
Device  Dev   E  Tracks   Tracks  S Dev   E  Tracks   Tracks MDA   STATE
-------------------------------- -- ------------------------ ----- ------------

DEV001  0067 WD       0        0 RW 0067 RW       0        0 S..   Synchronized

Total       -------- --------           -------- --------
  MB(s)          0.0      0.0                0.0      0.0

Legend for MODES:

 M(ode of Operation): A = Async, S = Sync, E = Semi-sync, C = Adaptive Copy
 D(omino)           : X = Enabled, . = Disabled
 A(daptive Copy)    : D = Disk Mode, W = WP Mode, . = ACp off

#
Example 5-3 Displaying DIDs Corresponding to the Disks Used
The same procedure applies to the RDF1 and RDF2 sides.
You can look under the PdevName field in the output of the symdg show dg command.
On the RDF1 side, type:
# symdg show dg1

Group Name:  dg1

    Group Type                                   : RDF1     (RDFA)
...
    Standard (STD) Devices (1):
        {
        --------------------------------------------------------------------
                                                                 Sym       Cap
        LdevName      PdevName                                   Dev  Att. Sts  (MB)
        --------------------------------------------------------------------
        DEV001        /dev/rdsk/c5t6006048000018790002353594D303637d0s2 0067  RW  4315
        }

    Device Group RDF Information
...
To obtain the corresponding DID, type:
# scdidadm -L | grep c5t6006048000018790002353594D303637d0
217      pmoney1:/dev/rdsk/c5t6006048000018790002353594D303637d0 /dev/did/rdsk/d217
217      pmoney2:/dev/rdsk/c5t6006048000018790002353594D303637d0 /dev/did/rdsk/d217
#
To list the corresponding DID, type:
# cldevice show d217

=== DID Device Instances ===
DID Device Name:                                /dev/did/rdsk/d217
  Full Device Path:                                pmoney2:/dev/rdsk/c5t6006048000018790002353594D303637d0
  Full Device Path:                                pmoney1:/dev/rdsk/c5t6006048000018790002353594D303637d0
  Replication:                                     none
  default_fencing:                                 global

#
On the RDF2 side, type:
You can look under the PdevName field in the output of the symdg show dg command.
# symdg show dg1

Group Name:  dg1

    Group Type                                   : RDF2     (RDFA)
...
    Standard (STD) Devices (1):
        {
        --------------------------------------------------------------------
                                                                 Sym       Cap
        LdevName      PdevName                                   Dev  Att. Sts  (MB)
        --------------------------------------------------------------------
        DEV001        /dev/rdsk/c5t6006048000018799018253594D303637d0s2 0067  WD  4315
        }

    Device Group RDF Information
...
To obtain the corresponding DID, type:
# scdidadm -L | grep c5t6006048000018799018253594D303637d0
108      pmoney4:/dev/rdsk/c5t6006048000018799018253594D303637d0 /dev/did/rdsk/d108
108      pmoney3:/dev/rdsk/c5t6006048000018799018253594D303637d0 /dev/did/rdsk/d108
#
To list the corresponding DID, type:
# cldevice show d108

=== DID Device Instances ===
DID Device Name:                                /dev/did/rdsk/d108
  Full Device Path:                                pmoney3:/dev/rdsk/c5t6006048000018799018253594D303637d0
  Full Device Path:                                pmoney4:/dev/rdsk/c5t6006048000018799018253594D303637d0
  Replication:                                     none
  default_fencing:                                 global

#
Example 5-4 Combining DID Instances
From the RDF2 side, type:
# cldevice combine -t srdf -g dg1 -d d217 d108
#
Example 5-5 Displaying the Combined DIDs
From any node in the cluster, type:
# cldevice show d217 d108
cldevice:  (C727402) Could not locate instance "108".

=== DID Device Instances ===
DID Device Name:                                /dev/did/rdsk/d217
  Full Device Path:                                pmoney1:/dev/rdsk/c5t6006048000018790002353594D303637d0
  Full Device Path:                                pmoney2:/dev/rdsk/c5t6006048000018790002353594D303637d0
  Full Device Path:                                pmoney4:/dev/rdsk/c5t6006048000018799018253594D303637d0
  Full Device Path:                                pmoney3:/dev/rdsk/c5t6006048000018799018253594D303637d0
  Replication:                                     srdf
  default_fencing:                                 global

#
This procedure describes how to recover data after a campus cluster's primary room fails completely, the cluster fails over to a secondary room, and the primary room then comes back online. The campus cluster's primary room is the primary node and storage site. The complete failure of a room includes the failure of both the host and the storage in that room. If the primary room fails, Oracle Solaris Cluster automatically fails over to the secondary room, makes the secondary room's storage device readable and writable, and enables the failover of the corresponding device groups and resource groups.
When the primary room returns online, you can manually recover the data from the SRDF device group that was written to the secondary room and resynchronize the data. This procedure recovers the SRDF device group by synchronizing the data from the original secondary room (this procedure uses phys-campus-2 for the secondary room) to the original primary room (phys-campus-1). The procedure also changes the SRDF device group type to RDF1 on phys-campus-2 and to RDF2 on phys-campus-1.
Before You Begin
You must configure the EMC replication group and DID devices, and register the EMC replication group, before you can perform a manual failover. For information about creating a Solaris Volume Manager device group, see How to Add and Register a Device Group (Solaris Volume Manager).
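As a minimal sketch of the Solaris Volume Manager case, creating a disk set registers it with the cluster as a device group; the node names and DID device below are placeholders, and the complete steps are in the referenced procedure:

phys-campus-1# metaset -s dg1 -a -h phys-campus-1 phys-campus-2    # placeholder disk set and node names
phys-campus-1# metaset -s dg1 -a /dev/did/rdsk/d10                 # placeholder DID device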
Note - These instructions demonstrate one method you can use to manually recover SRDF data after the primary room fails over completely and then comes back online. Check the EMC documentation for additional methods.
Log into the campus cluster's primary room to perform these steps. In the procedure below, dg1 is the SRDF device group name. At the time of the failure, the primary room in this procedure is phys-campus-1 and the secondary room is phys-campus-2.
Check the status of the SRDF device group:
phys-campus-1# symrdf -g dg1 query
Tip - A device group that is in the split state is not synchronized.
If the device group is in the split state and its type is RDF1, force a failover of the SRDF device group:
phys-campus-1# symrdf -g dg1 -force failover
Verify that the failover completed:
phys-campus-1# symrdf -g dg1 query
Swap the RDF1 and RDF2 device group types:
phys-campus-1# symrdf -g dg1 swap
Verify that the swap completed:
phys-campus-1# symrdf -g dg1 query
Reestablish the replica pair to resynchronize the data:
phys-campus-1# symrdf -g dg1 establish
Verify that the device group is synchronized:
phys-campus-1# symrdf -g dg1 query
Example 5-6 Manually Recovering EMC SRDF Data after a Primary Site Failover
This example provides the Oracle Solaris Cluster-specific steps necessary to manually recover EMC SRDF data after a campus cluster's primary room fails over, a secondary room takes over and records data, and then the primary room comes back online. In the example, the SRDF device group is called dg1 and the standard logical device is DEV001. The primary room is phys-campus-1 at the time of the failure, and the secondary room is phys-campus-2. Perform the steps from the campus cluster's primary room, phys-campus-1.
phys-campus-1# symrdf -g dg1 query | grep DEV
DEV001  0012 RW      0       0 NR 0012 RW    2031        O S..  Split

phys-campus-1# symdg list | grep RDF
dg1  RDF1  Yes  00187990182  1  0  0  0  0

phys-campus-1# symrdf -g dg1 -force failover
...

phys-campus-1# symrdf -g dg1 query | grep DEV
DEV001  0012 WD      0       0 NR 0012 RW    2031        O S..  Failed Over

phys-campus-1# symdg list | grep RDF
dg1  RDF1  Yes  00187990182  1  0  0  0  0

phys-campus-1# symrdf -g dg1 swap
...

phys-campus-1# symrdf -g dg1 query | grep DEV
DEV001  0012 WD      0       0 NR 0012 RW       0     2031 S..  Suspended

phys-campus-1# symdg list | grep RDF
dg1  RDF2  Yes  000187990182  1  0  0  0  0

phys-campus-1# symrdf -g dg1 establish
...

phys-campus-1# symrdf -g dg1 query | grep DEV
DEV001  0012 WD      0       0 RW 0012 RW       0        0 S..  Synchronized

phys-campus-1# symdg list | grep RDF
dg1  RDF2  Yes  000187990182  1  0  0  0  0