
Oracle® Solaris Cluster Remote Replication Guide for Oracle ZFS Storage Appliance


Updated: February 2020
 
 

Configuring Remote Replication With Oracle ZFS Storage Appliance Software

This section describes the steps you must perform before you can configure Oracle ZFS Storage Appliance remote replication with the disaster recovery framework. The following procedures are in this section:

  • How to Create a Role and Associated User for the Primary and Secondary Appliances

  • How to Create a Replication Target on Each Appliance

  • How to Create a Project and Enable Replication for the Project

  • How to Configure Oracle Solaris Cluster Resources on the Primary Cluster

  • How to Configure Oracle Solaris Cluster Resources on the Secondary Cluster

How to Create a Role and Associated User for the Primary and Secondary Appliances

If a role and associated user do not yet exist on the source and target appliances, perform this procedure to create them.

  1. Log in to the Oracle ZFS Storage appliance.
  2. Create a role for remote replication.

    Configure the role with the following permissions:

    • Object nas.*.*.* with permissions clone, destroy, rrsource, rrtarget, createShare, and createProject.

    • Object workflow.*.* with permission read.

  3. Create a user for replication that is associated with the role you created in Step 2.
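    If you use the appliance CLI instead of the BUI, the role and user might be created with commands similar to the following sketch. The names repl_role and repl_user are illustrative, and the exact contexts and property names can vary by appliance software release. After creating the role, add authorizations in the role's authorizations context with scope nas (permissions clone, destroy, rrsource, rrtarget, createShare, and createProject) and scope workflow (permission read).

    appliance:> configuration roles role repl_role
    appliance:configuration roles repl_role (uncommitted)> set description="Remote replication role"
    appliance:configuration roles repl_role (uncommitted)> commit
    appliance:> configuration users user repl_user
    appliance:configuration users repl_user (uncommitted)> set fullname="Remote replication user"
    appliance:configuration users repl_user (uncommitted)> commit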

How to Create a Replication Target on Each Appliance

This is a one-time procedure that you perform when a pair of appliances is configured to replicate to each other. Run the procedure on each appliance.

  1. Log in to the Oracle ZFS Storage appliance.
  2. Navigate to Configuration > Services > Remote Replication.
  3. Click the button to add a target, enter the required information for the target appliance, and click Add.

    When complete, the appliance at the paris site lists the appliance at the newyork site as a target, and the appliance at the newyork site lists the appliance at the paris site as a target.
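
    If you use the appliance CLI, a replication target might be added with commands similar to the following sketch. The hostname and label values are illustrative, and the exact property names can vary by appliance software release.

    appliance:> configuration services replication targets
    appliance:configuration services replication targets> target
    appliance:configuration services replication target (uncommitted)> set hostname=newyork-nas
    appliance:configuration services replication target (uncommitted)> set root_password=********
    appliance:configuration services replication target (uncommitted)> set label=newyork
    appliance:configuration services replication target (uncommitted)> commit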

How to Create a Project and Enable Replication for the Project

  1. Log in to the Oracle ZFS Storage appliance on the primary cluster-paris site.
  2. Navigate to Shares > Projects and create the projects that you need for your application.
  3. In each project, create the file systems and LUNs that you need for your application.

    Ensure that NFS exceptions and LUN settings are identical on the primary and secondary storage appliances. For more information, see Copying and Editing Actions in Oracle ZFS Storage 7000 System Administration Guide (http://docs.oracle.com/cd/E26765_01/html/E26397/).

  4. For iSCSI LUNs, if you use nondefault targets and target groups, ensure that target groups and initiator groups used by LUNs within the project also exist on the replication target.

    These groups must use the same name in the replication target as in the source appliance.

  5. For each project, navigate to Replication, create an action, and enable the action with continuous mode.
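
    If you use the appliance CLI, a continuous replication action might be created for a project with commands similar to the following sketch. The project name app-project, target label newyork, and pool name pool-0 are illustrative.

    appliance:> shares select app-project replication action
    appliance:shares app-project action (uncommitted)> set target=newyork
    appliance:shares app-project action (uncommitted)> set pool=pool-0
    appliance:shares app-project action (uncommitted)> set continuous=true
    appliance:shares app-project action (uncommitted)> set enabled=true
    appliance:shares app-project action (uncommitted)> commit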

Troubleshooting

If you need to stop Oracle ZFS Storage Appliance replication directly from the Oracle ZFS Storage appliance, you must perform the following tasks in the order shown:

  • Set continuous=false.

  • Wait for the update to complete.

  • Set enabled=false to stop replication.
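
For example, in the appliance CLI the sequence might look similar to the following sketch, where app-project and action-000 are illustrative names:

  appliance:> shares select app-project replication select action-000
  appliance:shares app-project action-000> set continuous=false
  appliance:shares app-project action-000> commit
  (Wait for the in-progress update to complete.)
  appliance:shares app-project action-000> set enabled=false
  appliance:shares app-project action-000> commit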

The disaster recovery framework requires that the last_result value of the replication action be success. Otherwise, adding the project to a disaster recovery framework protection group fails, and protection group validation fails.

How to Configure Oracle Solaris Cluster Resources on the Primary Cluster

This procedure creates Oracle Solaris Cluster resources on the primary cluster for the application to be protected.

Before You Begin

Ensure that the following tasks are completed on the storage appliance:

  • Replication peers are configured by the storage administrator.

  • Projects are configured by the storage administrator.

  • Replication is enabled for the project.

  • For iSCSI LUNs, if you use nondefault target groups, the target groups and initiator groups used by LUNs within the project also exist on the replication target. In addition, these groups use the same names in the replication target as in the source appliance.

  • If you use file systems, NFS Exceptions exist for all nodes of both clusters. This ensures that either cluster can access the file systems when that cluster has the primary role.

  1. Create the Oracle Solaris Cluster device groups, file systems, or ZFS storage pools you want to use.

    Specify the LUNs or file systems in the Oracle ZFS Storage appliance to be replicated.

    For information about creating device groups, file systems, and ZFS storage pools in a cluster configuration, see Administering an Oracle Solaris Cluster 4.4 Configuration.
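
    For example, to use a ZFS storage pool, you might create the pool on one node of the primary cluster on the device that corresponds to a replicated LUN. The pool name app-pool and the device name in the following sketch are illustrative.

    phys-paris-1# zpool create app-pool \
    c0t600144F09876543200005BDE12340001d0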

  2. Create an HAStoragePlus resource or a scalable mount-point resource for the device group, file system, or ZFS storage pool you use.

    This resource manages bringing online the Oracle ZFS Storage Appliance storage on both the primary and secondary clusters.

    For information about creating an HAStoragePlus or scalable mount-point resource, see Planning and Administering Data Services for Oracle Solaris Cluster 4.4.
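
    For example, a failover resource group with an HAStoragePlus resource that manages an illustrative ZFS storage pool named app-pool might be created as follows. The group name app-rg and resource name app-hasp-rs are illustrative.

    phys-paris-1# clresourcetype register SUNW.HAStoragePlus
    phys-paris-1# clresourcegroup create app-rg
    phys-paris-1# clresource create -g app-rg -t SUNW.HAStoragePlus \
    -p Zpools=app-pool app-hasp-rs
    phys-paris-1# clresourcegroup online -eM app-rg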

How to Configure Oracle Solaris Cluster Resources on the Secondary Cluster

This procedure creates Oracle Solaris Cluster resources on the secondary cluster for the application to be protected.

Before You Begin

Ensure that the following tasks are completed on the storage appliance:

  • Replication peers are configured by the storage administrator.

  • Projects are configured by the storage administrator.

  • Replication is enabled for the project.

  • For iSCSI LUNs, if you use nondefault target groups, the target groups and initiator groups used by LUNs within the project also exist on the replication target. In addition, these groups must use the same names in the replication target as in the source appliance.

  • If you use file systems, NFS Exceptions exist for all nodes of both clusters. This ensures that either cluster can access the file systems when that cluster has the primary role.

  1. On one node of the cluster-newyork (partner) site, create the application group.

    The Auto_start_on_new_cluster property must be set to False.

    phys-newyork-1# clresourcegroup create -p Auto_start_on_new_cluster=False \
    application-resource-group
  2. Determine whether the replicated project contains any LUNs.
    1. On the cluster-paris (primary) site, access the Oracle ZFS Storage Appliance browser user interface (BUI).
    2. Navigate to Shares > Projects and select the project being replicated.
  3. If the project contains only file systems, perform the following tasks.

    If the project contains any LUNs, skip to Step 4.

    1. If replication is not in continuous mode, select Replication for the project and click Update Now or Sync Now.

      This executes a manual replication to synchronize the two sites.

    2. On the cluster-newyork (partner) site, access the appliance BUI.
    3. Navigate to Shares > Projects > Replica and select the project being replicated.
    4. Select Replication for the project and click Clone Most Recently Received Project Snapshot.

      Enter the same project name as on the primary appliance.

  4. If the replicated project contains LUNs, perform the following tasks.
    1. Create a protection group, and add the replicated project and resource groups to it.

      See How to Create and Configure an Oracle ZFS Storage Appliance Protection Group.


      Note -  Resource groups added to the protection group can be empty on the secondary cluster. The storage and application resources will be created on the secondary cluster in subsequent steps.
    2. From one node of either cluster, start the protection group globally.
      # geopg start -e global protection-group
    3. From one node of either cluster, switch over the protection group to the secondary cluster.
      # geopg switchover -f -m cluster-newyork protection-group

      The project is made local on the secondary storage.

    4. On the secondary cluster, map the iSCSI devices from the project on the secondary storage.
      1. Map the iSCSI devices to the corresponding DID numbers.
      2. Use the cldevice list command to find the DID devices that correspond to the LUNs exported from the appliance.
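
      For example, with the Solaris iSCSI initiator already configured for the appliance targets, commands similar to the following might be used to discover the LUNs and update the DID namespace:

      phys-newyork-1# devfsadm -i iscsi
      phys-newyork-1# cldevice populate
      phys-newyork-1# cldevice list -v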
    5. Create the Oracle Solaris Cluster device groups or file systems, or import the ZFS storage pools, that will use the LUNs in the project.

      Specify the LUNs or file systems in the project that is now local on the secondary appliance.

      For information about creating device groups and file systems and adding ZFS storage pools in a cluster configuration, see Administering an Oracle Solaris Cluster 4.4 Configuration.
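
      For example, if a ZFS storage pool named app-pool (an illustrative name) was created on the primary cluster, you might import it on one node of the secondary cluster:

      phys-newyork-1# zpool import app-pool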

  5. On cluster-newyork, create an HAStoragePlus resource or a scalable mount-point resource for the device group, file system, or ZFS storage pool you use.

    This resource manages bringing online the Oracle ZFS Storage Appliance storage on both the primary and secondary clusters.

    For information about creating an HAStoragePlus or scalable mount-point resource, see Planning and Administering Data Services for Oracle Solaris Cluster 4.4.
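
    For example, an HAStoragePlus resource for the illustrative ZFS storage pool app-pool might be created in the application resource group as follows. The resource name app-hasp-rs is illustrative.

    phys-newyork-1# clresource create -g application-resource-group \
    -t SUNW.HAStoragePlus -p Zpools=app-pool app-hasp-rs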

  6. Bring up the application on cluster-newyork that uses the replicated storage and create corresponding cluster resources.
  7. On cluster-newyork, confirm that the application resource group is correctly configured by bringing it online.
    phys-newyork-1# clresourcegroup online -emM application-resource-group
  8. If the replicated project contains only file systems, perform the following tasks.

    If the project contains any LUNs, skip to Step 9.

    1. On a node of the secondary cluster, put the application resource group in the unmanaged state.
      phys-newyork-1# clresource disable -g application-resource-group +
      phys-newyork-1# clresourcegroup offline application-resource-group
      phys-newyork-1# clresourcegroup unmanage application-resource-group
    2. If you created a file system and it is mounted, unmount the file system.
      phys-newyork-1# umount /mounts/file-system
    3. If the Oracle Solaris Cluster device group is online, take it offline.
      phys-newyork-1# cldevicegroup offline raw-disk-group
    4. Destroy the clone on the Oracle ZFS Storage appliance.
      1. Access the appliance BUI on the cluster-newyork site.
      2. Navigate to Shares > Projects and select the project that is cloned.
      3. Select the Remove or Destroy entry for the cloned project.

      Initial configuration on the secondary cluster is now complete.

  9. If the replicated project contains any LUNs, from one node of either cluster, switch over the protection group to the primary cluster.

    This step takes offline the configuration on the secondary cluster and brings it online on the primary cluster.

    # geopg switchover -f -m cluster-paris protection-group

Next Steps