
Oracle® Solaris Cluster Data Replication Guide for ZFS Snapshots


Updated: October 2018
 
 

Use Case: Configuring Oracle Solaris ZFS Snapshot Replication When Both Partner Clusters Are Global Zones

This example shows how to set up the protection group with Oracle Solaris ZFS snapshot replication to protect and manage the application and its ZFS datasets. Assume that the application resource group and the application user are already set up. For more information about the configuration assumptions for this example, see Configuration Assumptions of Use Cases.

The following figure shows the resource groups and resources that are created by setup actions performed by the zfs_snap_geo_register script.

Figure 2  Example Setup of Oracle Solaris ZFS Snapshot Replication in the Global Zone on Both the Partner Clusters

image: Setup of resource groups and resources in Oracle Solaris ZFS snapshot replication

This figure displays the infrastructure resource group, the replication agent resource group, and the replication status resource group in addition to the application resource group. The infrastructure resource group contains the logical hostname resource and the HAStoragePlus resource. The replication agent resource group contains a replication agent resource. The replication status resource group contains the replication status resource.

The setup actions performed by the zfs_snap_geo_register script also set the following extension properties on the resources:

  • The ZpoolsImportOnly property is set to True on the HAStoragePlus resource in the infrastructure resource group. This ensures that when the resource starts, it imports the ZFS storage pool without mounting the file systems.


    Note -  The ZpoolsImportOnly property of the SUNW.HAStoragePlus resource type is used by the cluster software internally for the replication infrastructure.
  • The ZpoolsExportOnStop property is set to False on the HAStoragePlus resource in the application resource group. This ensures that when the resource stops, it does not export the ZFS storage pool. This is required because the infrastructure HAStoragePlus resource manages exporting the ZFS storage pool after both the application and the replication that use the pool have stopped. An example of checking both property settings after registration follows this list.


    Note -  The ZpoolsExportOnStop property of the SUNW.HAStoragePlus resource type is used by the cluster software internally for the replication infrastructure.
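
After you run the registration script later in this procedure, you can confirm these property settings with the clresource command. The following is a minimal sketch that uses names from this example: pg1-srcpool1-infr-rg is the infrastructure resource group and par-app1-hasp1 is the application HAStoragePlus resource on the paris cluster. List the resources in the infrastructure resource group first, then substitute its HAStoragePlus resource name for the placeholder infra-hasp-rs.

    # /usr/cluster/bin/clresource list -g pg1-srcpool1-infr-rg
    # /usr/cluster/bin/clresource show -p ZpoolsImportOnly infra-hasp-rs
    # /usr/cluster/bin/clresource show -p ZpoolsExportOnStop par-app1-hasp1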

Perform the following steps to configure Oracle Solaris ZFS snapshot replication by using the zfs_snap_geo_register script:

  1. Decide the hostname to use on each cluster as the replication-related logical hostname. For this example, assume that the logical hostname on the paris cluster is paris-lh and the logical hostname on the newyork cluster is newyork-lh.

    Add the logical hostname entry for paris-lh to the /etc/hosts file on both nodes of the paris cluster. Similarly, add the logical hostname entry for newyork-lh to the /etc/hosts file on both nodes of the newyork cluster.
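
    For example, assuming the placeholder addresses 192.0.2.10 and 198.51.100.10 (substitute the addresses used in your environment), the entries might look like the following.

    On each node of the paris cluster:

    192.0.2.10     paris-lh

    On each node of the newyork cluster:

    198.51.100.10  newyork-lh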

  2. Create the replication user and set up SSH. For information about creating the replication user and setting up SSH, see Use Case: Setting Up Replication User and SSH.

  3. Grant ZFS permissions to the replication user on both clusters. The ZFS storage pool must be imported before you can grant the permissions. If the application's HAStoragePlus resource is online, log in as the root user on the node where it is online. If the zpool is not online on any cluster node, import it on any node and run the ZFS commands on that node (see the example at the end of this step). On each cluster, log in as the root user on the cluster node where the zpool is imported.

    Type the following commands on the node of the paris cluster where srcpool1 is imported:

    # /sbin/zfs allow zfsuser1 create,destroy,hold,mount,receive,release,send,rollback,snapshot \
    srcpool1/app1-ds1
    # /sbin/zfs allow srcpool1/app1-ds1
    ---- Permissions on srcpool1/app1-ds1 ---------------------------------
    Local+Descendent permissions:
    user zfsuser1 create,destroy,hold,mount,receive,release,rollback,send,snapshot
    #

    Type the following commands on the node of the newyork cluster where targpool1 is imported:

    # /sbin/zfs allow zfsuser2 create,destroy,hold,mount,receive,release,send,rollback,snapshot \
    targpool1/app1-ds1-copy
    # /sbin/zfs allow targpool1/app1-ds1-copy
    ---- Permissions on targpool1/app1-ds1-copy ---------------------------------
    Local+Descendent permissions:
    user zfsuser2 create,destroy,hold,mount,receive,release,rollback,send,snapshot
    #

    Note that the replication user on each cluster must have these Local+Descendent ZFS permissions on the ZFS dataset used on that cluster.
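
    The following is a minimal sketch of importing the pool manually when it is not online anywhere, using srcpool1 from this example on a paris node; use the equivalent commands with targpool1 on the newyork cluster:

    # /sbin/zpool list srcpool1      # reports an error if the pool is not imported on this node
    # /sbin/zpool import srcpool1    # import the pool so that the zfs allow commands can be run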

  4. Suppose you create the file /var/tmp/geo/zfs_snapshot/sbp_conf to use as the script-based plugin configuration file on both clusters, specifying the replication component node lists and evaluation rules. Add the following entry to the /var/tmp/geo/zfs_snapshot/sbp_conf file on each node of the paris cluster:

    repcom1|any|paris-node-1,paris-node-2

    Add the following entry to the /var/tmp/geo/zfs_snapshot/sbp_conf file on each node of the newyork cluster:

    repcom1|any|newyork-node-1,newyork-node-2
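
    One way to create this file, assuming the /var/tmp/geo/zfs_snapshot directory does not already exist, is shown in the following sketch for a newyork cluster node; use the paris entry in the same way on each node of the paris cluster:

    # mkdir -p /var/tmp/geo/zfs_snapshot
    # echo 'repcom1|any|newyork-node-1,newyork-node-2' > /var/tmp/geo/zfs_snapshot/sbp_conf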
    
  5. Copy /opt/ORCLscgrepzfssnap/etc/zfs_snap_geo_config to create a parameters file, /var/tmp/geo/zfs_snapshot/repcom1_conf, for the replication component. Set the configuration parameters in the file as follows:

    PS=paris-newyork
    PG=pg1
    REPCOMP=repcom1
    REPRS=repcom1-repstatus-rs
    REPRG=pg1-repstatus-rg
    DESC="Protect app1-rg1 using ZFS snapshot replication"
    APPRG=app1-rg1
    CONFIGFILE=/var/tmp/geo/zfs_snapshot/sbp_conf
    LOCAL_REP_USER=zfsuser1
    REMOTE_REP_USER=zfsuser2
    LOCAL_PRIV_KEY_FILE=
    REMOTE_PRIV_KEY_FILE=/export/home/zfsuser2/.ssh/zfsrep1 
    LOCAL_ZPOOL_RS=par-app1-hasp1
    REMOTE_ZPOOL_RS=ny-app1-hasp1
    LOCAL_LH=paris-lh
    REMOTE_LH=newyork-lh
    LOCAL_DATASET=srcpool1/app1-ds1
    REMOTE_DATASET=targpool1/app1-ds1-copy
    REPLICATION_INTERVAL=120
    NUM_OF_SNAPSHOTS_TO_STORE=2
    REPLICATION_STREAM_PACKAGE=false
    SEND_PROPERTIES=true
    INTERMEDIARY_SNAPSHOTS=false
    RECURSIVE=true
    MODIFY_PASSPHRASE=false
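
    Before you run the registration script, you can optionally confirm that the resources named in LOCAL_ZPOOL_RS and REMOTE_ZPOOL_RS manage the expected ZFS storage pools. This sketch uses the resource names from this example; run the first command on a paris node and the second on a newyork node:

    # /usr/cluster/bin/clresource show -p Zpools par-app1-hasp1
    # /usr/cluster/bin/clresource show -p Zpools ny-app1-hasp1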
  6. Run the zfs_snap_geo_register script from any one node of the paris cluster, passing the replication configuration file with the -f option. Specify the SSH passphrase for zfsuser1 of the paris cluster at the Password for property local_service_password prompt, and the SSH passphrase for zfsuser2 of the newyork cluster at the Password for property remote_service_password prompt.

    # /opt/ORCLscgrepzfssnap/util/zfs_snap_geo_register -f /var/tmp/geo/zfs_snapshot/repcom1_conf
    Password for property local_service_password :
    Password for property remote_service_password :
    

    The setup action performed by the zfs_snap_geo_register script creates the following components on the primary cluster, as shown in Example Setup of Oracle Solaris ZFS Snapshot Replication in the Global Zone on Both the Partner Clusters; an example of verifying the resource groups follows the list:

    • Protection group pg1

    • Replication component repcom1

    • Infrastructure resource group pg1-srcpool1-infr-rg and its resources as shown in the figure

    • Replication resource group repcom1-snap-rg, which contains the resource repcom1-snap-rs

    • Replication status resource group pg1-repstatus-rg and replication status resource repcom1-repstatus-rs
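
    You can check the status of these resource groups on the paris cluster with the clresourcegroup command; the group names below are the ones created in this example:

    # /usr/cluster/bin/clresourcegroup status pg1-srcpool1-infr-rg repcom1-snap-rg pg1-repstatus-rg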

  7. On any node of the paris cluster, check that the protection group and the replication component are created successfully.

    # /usr/cluster/bin/geopg show pg1
    # /usr/cluster/bin/geopg status pg1
  8. On any node of the newyork cluster, get the protection group that was created on the primary cluster.

    # /usr/cluster/bin/geopg get -s paris-newyork pg1

    This command creates the resource groups and resources in the secondary cluster, as shown in Example Setup of Oracle Solaris ZFS Snapshot Replication in the Global Zone on Both the Partner Clusters.

  9. From any node of the newyork cluster, check that the protection group and the replication component are available. Ensure that the protection group synchronization status between the paris and newyork clusters shows OK.

    # /usr/cluster/bin/geopg show pg1
    # /usr/cluster/bin/geopg status pg1

    Similarly, check the status from any node of the paris cluster:

    # /usr/cluster/bin/geopg show pg1
    # /usr/cluster/bin/geopg status pg1
  10. Activate the protection group to start the Oracle Solaris ZFS snapshot replication.

    # /usr/cluster/bin/geopg start -e global pg1
  11. Type the following command from one node of either partner cluster to confirm that the protection group started on both clusters.

    # /usr/cluster/bin/geopg status pg1
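
    Optionally, assuming the resource name from this example, you can also check the overall Geographic Edition status and the state of the replication status resource from either cluster:

    # /usr/cluster/bin/geoadm status
    # /usr/cluster/bin/clresource status repcom1-repstatus-rs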