Oracle® Solaris Cluster Geographic Edition Data Replication Guide for Hitachi TrueCopy and Universal Replicator

Updated: July 2016

Configuring Data Replication With Hitachi TrueCopy or Universal Replicator Software on the Primary Cluster

This section describes the tasks that you must perform on the primary cluster before you can configure Hitachi TrueCopy or Universal Replicator data replication in the Geographic Edition framework.

In all examples in this document, the “primary” cluster is the cluster on which the application data service is started during routine operations. The partner cluster is “secondary.” The primary cluster is named cluster-paris, and the secondary cluster is named cluster-newyork. The cluster-paris cluster consists of two nodes, phys-paris-1 and phys-paris-2. The cluster-newyork cluster also consists of two nodes, phys-newyork-1 and phys-newyork-2. Two device groups are configured on each cluster. The devgroup1 device group contains the paired devices pair1 and pair2. The devgroup2 device group contains the paired devices pair3 and pair4.

Configuration of the /etc/horcm.conf File

As used with the Geographic Edition configuration, a Hitachi TrueCopy or Universal Replicator data replication component is a named entity consisting of sets of paired Logical Unit Numbers (LUNs). One member of each pair of LUNs is located in local storage on the primary cluster and the other member is located in local storage on a Geographic Edition partner cluster. Data is written to one member of a pair of LUNs in local storage on the primary cluster and replicated to the other member of the pair in local storage on the secondary cluster. Both LUNs in a pair are assigned the same device name. Thus, data that is written to the LUN named pair1 on the primary cluster is replicated to the LUN named pair1 on the secondary cluster, and data that is written to the LUN named pair2 on the primary cluster is replicated to the LUN named pair2 on the secondary cluster.

On each storage-attached node of each cluster, pairs are given names and assigned to a data replication component in the /etc/horcm.conf file. Additionally, in this file, each data replication component is assigned a name that is the same on all storage-attached nodes of all clusters that are participating in a Geographic Edition partnership.

In the /etc/horcm.conf file, you configure each Hitachi TrueCopy or Universal Replicator data replication component as a property of either the HORCM_DEV parameter or the HORCM_LDEV parameter. Depending on their intended use, you might configure one data replication component in the /etc/horcm.conf file as a property of HORCM_DEV and another data replication component as a property of HORCM_LDEV. However, a single data replication component can only be configured as a property of HORCM_DEV or of HORCM_LDEV. For any one data replication component, the selected parameter, HORCM_DEV or HORCM_LDEV, must be consistent on all storage-attached nodes of all clusters that are participating in the Geographic Edition partnership.

Of the parameters that are configured in the /etc/horcm.conf file, only HORCM_DEV and HORCM_LDEV have requirements that are specific to the Geographic Edition configuration. For information about configuring other parameters in the /etc/horcm.conf file, see the documentation for Hitachi TrueCopy or Universal Replicator.

Journal Volumes

Entries in the /etc/horcm.conf file for Hitachi Universal Replicator data replication components can associate journal volumes with data LUNs. Journal volumes are specially configured LUNs on the storage system array. On both the primary and secondary arrays, local journal volumes store data that has been written to application data storage on the primary cluster, but not yet replicated to application data storage on the secondary cluster. Journal volumes thereby enable Hitachi Universal Replicator to maintain the consistency of data even if the connection between the paired clusters in a Geographic Edition partnership temporarily fails. A journal volume can be used by more than one data replication component on the local cluster, but typically is assigned to just one data replication component. Hitachi TrueCopy does not support journalling.

If you want to implement journalling, you must configure Hitachi Universal Replicator data replication components as properties of the HORCM_LDEV parameter because only that parameter supports the association of data LUNs with journal volumes in the Geographic Edition Hitachi Universal Replicator module. If you configure Hitachi Universal Replicator data replication components by using the HORCM_DEV parameter, no journalling occurs, and Hitachi Universal Replicator has no greater functionality than does Hitachi TrueCopy.

Configuring the /etc/horcm.conf File on the Nodes of the Primary Cluster

On each storage-attached node of the primary cluster, you configure Hitachi TrueCopy and Universal Replicator data replication components as properties of the HORCM_DEV or HORCM_LDEV parameter in the /etc/horcm.conf file, and associate them with LUNs and, if appropriate, journal volumes. All devices that are configured in this file, including journal volumes, must be in locally attached storage. The /etc/horcm.conf file is read by the HORCM daemon when it starts, which occurs during reboot or when the Geographic Edition framework is started. If you change the /etc/horcm.conf file on any node after the Geographic Edition framework is started, and you do not anticipate rebooting, you must restart the HORCM daemon on that node by using the following commands:

phys-paris-1# horcm-installation-directory/usr/bin/horcmshutdown.sh
phys-paris-1# horcm-installation-directory/usr/bin/horcmstart.sh

Table 3, Example HORCM_LDEV Section of the /etc/horcm.conf File on the Primary Cluster shows the configuration of one journalling Hitachi Universal Replicator data replication component in the /etc/horcm.conf file as a property of the HORCM_LDEV parameter. Each LUN in the data replication component is described on a single line consisting of four space-delimited entries. The LUNs in the devgroup1 data replication component are named pair1 and pair2. The administrator chooses the data replication component and paired device names. In the third field of the file, each LUN is described by its serial number, followed by a colon, followed by the journal ID of its associated journal volume. In the logical device number (ldev) field, the controller unit (CU) is followed by a colon (:), which is followed by the logical device number. Both values are in hexadecimal format.

All entries are supplied by the raidscan command, which is described in more detail in Hitachi TrueCopy and Universal Replicator documentation. The ldev value that is supplied by the raidscan command is in decimal format, so you must convert the value to base 16 to obtain the correct format for the entry in the ldev field.
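
For example (an illustrative sketch that uses the values from Table 3, not output from an actual array), if the raidscan command reports the decimal logical device number 19, converting it to base 16 gives 13, so with controller unit 00 the entry in the ldev field is 00:13. A standard shell command can perform the conversion:

phys-paris-1# printf "%02X\n" 19
13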

You can use the configuration shown in Table 3, Example HORCM_LDEV Section of the /etc/horcm.conf File on the Primary Cluster only with Hitachi Universal Replicator, because Hitachi TrueCopy does not support journalling.


Note -  If you want to ensure the consistency of replicated data with Hitachi Universal Replicator on both the primary cluster and the secondary cluster, you must specify a journal volume ID in the third property configuration field of HORCM_LDEV for each device in a Hitachi Universal Replicator data replication component. Otherwise, journalling does not occur and Hitachi Universal Replicator's functionality in Geographic Edition configurations is no greater than the functionality of Hitachi TrueCopy.
Table 3  Example HORCM_LDEV Section of the /etc/horcm.conf File on the Primary Cluster

# dev_group     dev_name     serial#:jid#     ldev
devgroup1       pair1        10136:0          00:12
devgroup1       pair2        10136:0          00:13

Table 4, Example HORCM_DEV Section of the /etc/horcm.conf File on the Primary Cluster shows the configuration of one non-journalling Hitachi TrueCopy or Universal Replicator data replication component in the /etc/horcm.conf file as a property of the HORCM_DEV parameter. Each LUN in the data replication component is described on a single line consisting of five space-delimited entries. The table describes a data replication component named devgroup2 that is composed of two LUNs in a single shared storage array that is attached to the nodes of the primary cluster. The LUNs have the device names pair3 and pair4 and are designated by their port (CL1-A), target ID (0), and LU numbers (3 and 4). The port number, target ID, and LU numbers are supplied by the raidscan command, which is described in more detail in the Hitachi TrueCopy and Universal Replicator documentation. For Hitachi TrueCopy and Universal Replicator, there is no entry in the MU number field.

Table 4  Example HORCM_DEV Section of the /etc/horcm.conf File on the Primary Cluster

# dev_group     dev_name     port number     TargetID     LU number     MU number
devgroup2       pair3        CL1-A           0            3             -
devgroup2       pair4        CL1-A           0            4             -
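
The port, target ID, and LU values in this example could be obtained by scanning the storage port with the raidscan command, for example as follows. This is a sketch only; the exact options and output format depend on your array model and Command Control Interface version, so consult the Hitachi documentation for the authoritative syntax.

phys-paris-1# raidscan -p CL1-A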

How to Set Up Raw-Disk Device Groups for Geographic Edition Systems

Geographic Edition supports the use of raw-disk device groups in addition to various volume managers. When you initially configure Oracle Solaris Cluster, device groups are automatically configured for each raw device in the cluster. Use this procedure to reconfigure these automatically created device groups for use with Geographic Edition.

  1. For the devices that you want to use, unconfigure the predefined device groups.

    The following commands remove the predefined device groups for d7 and d8.

    phys-paris-1# cldevicegroup disable dsk/d7 dsk/d8
    phys-paris-1# cldevicegroup offline dsk/d7 dsk/d8
    phys-paris-1# cldevicegroup delete dsk/d7 dsk/d8
  2. Create the new raw-disk device group, including the desired devices.

    Ensure that the device names that you specify do not contain any slashes (for example, use d7, not dsk/d7). The following command creates a global device group rawdg containing d7 and d8.

    phys-paris-1# cldevicegroup create -n phys-paris-1,phys-paris-2 \
    -t rawdisk -d d7,d8 rawdg
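
    To verify the new device group, you can display its configuration with the standard cldevicegroup subcommands. The following commands are a sketch that uses the names from this procedure; the output depends on your configuration.

    phys-paris-1# cldevicegroup show rawdg
    phys-paris-1# cldevicegroup status rawdg
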
Example 1  Configuring a Raw-Disk Device Group

The following commands illustrate configuring the device group on the primary cluster, configuring the same device group on the partner cluster, and adding the group to a Hitachi TrueCopy or Universal Replicator protection group.

Remove the automatically created device groups from the primary cluster.
phys-paris-1# cldevicegroup disable dsk/d7 dsk/d8
phys-paris-1# cldevicegroup offline dsk/d7 dsk/d8
phys-paris-1# cldevicegroup delete dsk/d7 dsk/d8

Create the raw-disk device group on the primary cluster.
phys-paris-1# cldevicegroup create -n phys-paris-1,phys-paris-2 \
-t rawdisk -d d7,d8 rawdg

Remove the automatically created device groups from the partner cluster.
phys-newyork-1# cldevicegroup disable dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup offline dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup delete dsk/d5 dsk/d6

Create the raw-disk device group on the partner cluster.
phys-newyork-1# cldevicegroup create -n phys-newyork-1,phys-newyork-2 \
-t rawdisk -d d5,d6 rawdg

Add the raw-disk device group to the protection group rawpg.
phys-paris-1# geopg create -d truecopy -p Nodelist=phys-paris-1,phys-paris-2 \
-o Primary -p Ctgid=5 -p Cluster_dgs=rawdg -s paris-newyork-ps rawpg
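
List the protection group to confirm the configuration. The geopg list command shown here is a sketch; its output depends on your configuration.
phys-paris-1# geopg list rawpg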

Next Steps

When you configure the partner cluster, create a raw-disk device group with the same name as the one that you created here. See How to Replicate the Configuration Information From the Primary Cluster When Using Raw-Disk Device Groups for instructions.

After you have configured the device group on both clusters, you can use its name wherever a device group name is required in Geographic Edition commands such as geopg.

How to Configure a Highly Available Local File System With ZFS for Hitachi Universal Replicator Replication

Follow this procedure to configure a highly available local file system that uses a ZFS storage pool (zpool).


Note -  Perform this procedure only if you are using Hitachi Universal Replicator. ZFS is not supported with Hitachi TrueCopy replication.

If you are not using ZFS, perform instead How to Configure a Highly Available Local File System for Hitachi TrueCopy or Universal Replicator Replication.


Before You Begin

Ensure that the Oracle Solaris Cluster application resource group has already been configured.

Observe the following requirements and restrictions for ZFS:

  • ZFS is not supported with Hitachi TrueCopy. Use ZFS only with Hitachi Universal Replicator.

  • Ensure that the zpool version on the cluster where you create the zpool is supported by the Oracle Solaris OS version on the partner cluster nodes. This is necessary so that the partner cluster nodes can import the zpool when that cluster becomes primary. You can meet this requirement by setting the zpool version to the default zpool version of the cluster that is running the earlier version of Oracle Solaris software, as shown in the sketch before Step 1.

  • Mirrored and unmirrored ZFS zpools are supported.

  • ZFS zpool spares are not supported with storage-based replication in a Geographic Edition configuration. The information about the spare that is stored in the zpool results in the zpool being incompatible with the remote system after it has been replicated.

  • Ensure that Hitachi Universal Replicator is configured to preserve write ordering, even after a rolling failure.

Do not configure a storage-based replicated volume as a quorum device. The Geographic Edition software does not support Hitachi Universal Replicator S-VOL and Command Device as an Oracle Solaris Cluster quorum device. See Using Storage-Based Data Replication Within a Campus Cluster in Oracle Solaris Cluster 4.3 System Administration Guide for more information.
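
If you need to pin the zpool version, you can use an alternative form of the zpool create command shown in Step 1. The following line is a minimal sketch; the version number 35 is a placeholder for the default zpool version of the partner cluster that runs the earlier Oracle Solaris release, and cNtXdY and cNtAdB are placeholder device names.

# zpool create -o version=35 appdataz mirror cNtXdY cNtAdB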

  1. Create a ZFS zpool.
    # zpool create appdataz mirror cNtXdY cNtAdB

    create appdataz

    Specifies the name of the zpool to create.

    mirror cNtXdY cNtAdB

    Specifies the LUNs to replicate with Hitachi Universal Replicator.

  2. Add an HAStoragePlus resource to the application resource group, app-rg.
    # clresource create -g app-rg \
    -t HAStoragePlus \
    -p zpools=appdataz \
    hasp4appdataz

    -g app-rg

    Specifies the application resource group.

    -p zpools=appdataz

    Specifies the zpool.

    hasp4appdataz

    Specifies the name of the HAStoragePlus resource to create.

Example 2  Configuring a Highly Available Local File System With ZFS

This example creates a locally mounted file system that uses a ZFS zpool and is managed by an HAStoragePlus resource. The file system is mounted locally every time the resource is brought online.

This example assumes that the app-rg1 resource group already exists.

  1. Create the zpool appdata1.

    # zpool create appdata1 mirror c6t6006048000018790002353594D313137d0 c6t6006048000018790002353594D313143d0
  2. Add the HAStoragePlus resource hasp4appdata-rs to the application resource group app-rg1.

    # clresource create -g app-rg1 \
    -t HAStoragePlus \
    -p zpools=appdata1 \
    hasp4appdata-rs
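
    When app-rg1 is online on a node, you can confirm that the HAStoragePlus resource is online and that the zpool is imported. The following commands are a sketch that uses the names from this example; the output depends on your configuration.

    # clresource status hasp4appdata-rs
    # zpool list appdata1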

How to Configure a Highly Available Local File System for Hitachi TrueCopy or Universal Replicator Replication


Note -  If you want to create a highly available local file system that uses a ZFS storage pool and you are using Hitachi Universal Replicator replication, do not perform this procedure. Instead, go to How to Configure a Highly Available Local File System With ZFS for Hitachi Universal Replicator Replication.

Before You Begin

Before you configure the file system on cluster-paris, ensure that the Oracle Solaris Cluster entities you require, such as application resource groups, device groups, and mount points, have already been configured.

If you are using storage-based replication, do not configure a replicated volume as a quorum device. The Geographic Edition framework does not support Hitachi TrueCopy or Universal Replicator S-VOL and Command Device as an Oracle Solaris Cluster quorum device. See Using Storage-Based Data Replication Within a Campus Cluster in Oracle Solaris Cluster 4.3 System Administration Guide for more information.

  1. Create the required file system on the vol1 volume at the command line.
  2. Add an entry to the /etc/vfstab file that contains information such as the mount location.

    Whether the file system is to be mounted locally or globally depends on various factors, such as your performance requirements or the type of application resource group that you are using.


    Note -  You must set the mount at boot field in this file to no. This value prevents the file system from being mounted on the secondary cluster at cluster startup. Instead, the Oracle Solaris Cluster software and the Geographic Edition framework handle mounting the file system by using the HAStoragePlus resource when the application is brought online on the primary cluster. The file system must not be mounted on the secondary cluster; otherwise, the data will not be replicated from the primary cluster to the secondary cluster.
  3. Add the HAStoragePlus resource to the application resource group, apprg1.

    Adding the resource to the application resource group ensures that the necessary file systems are remounted before the application is brought online.

    For more information about the HAStoragePlus resource type, refer to the Oracle Solaris Cluster 4.3 Data Services Planning and Administration Guide.

Example 3  Configuring a Highly Available Local File System

This example assumes that the apprg1 resource group and the oradg1 device group that uses DID d4 already exist.

  1. Create a UFS file system.

    phys-paris-1# newfs /dev/global/rdsk/d4s0
  2. Update the /etc/vfstab file on all nodes.

    # echo "/dev/global/dsk/d4s0 /dev/global/rdsk/d4s0 /mounts/sample ufs 2 no logging" >> /etc/vfstab
  3. Add the HAStoragePlus resource type.

    phys-paris-1# clresource create -g apprg1 -t SUNW.HAStoragePlus \
    -p FilesystemMountPoints=/mounts/sample -p Affinityon=TRUE \
    -p GlobalDevicePaths=oradg1 rs-has
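
    If the apprg1 resource group is not already online, bringing it online causes the HAStoragePlus resource to mount the file system at /mounts/sample. The following command is a sketch of the standard way to do this; the -e option enables the resources and the -M option places the resource group in the managed state.

    phys-paris-1# clresourcegroup online -eM apprg1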