Oracle® Solaris Cluster Data Replication Guide for ZFS Snapshots


Updated: October 2018
 
 

Use Case: Configuring Oracle Solaris ZFS Snapshot Replication With Both Partner Clusters as Zone Clusters

This example shows how to set up a protection group with Oracle Solaris ZFS snapshot replication to protect and manage an application and its ZFS datasets when both partner clusters are zone clusters. Assume that the application resource group and the application user are already set up. Run the commands that set up the logical hostname, the private string, and the ZFS permissions in the global zone of the zone cluster nodes. Run all disaster recovery framework commands within the zone cluster. You must create a private string to store the SSH passphrase on both partners, and you must create the logical hostname resource and resource group in the global zone of each zone cluster partner. In this example, both the primary cluster paris and the secondary cluster newyork are zone clusters.

Figure 4  Example Resource Group and Resource Setup for Oracle Solaris ZFS Snapshot Replication With Both Partner Clusters as Zone Clusters

[Image: Example setup for Oracle Solaris ZFS snapshot replication with both partner clusters as zone clusters]

Perform the following actions to configure the Oracle Solaris ZFS snapshot replication:

  1. Decide which hostname to use on each cluster as the replication-related logical hostname. For this example, assume that the logical hostname on the paris cluster is paris-lh and the logical hostname on the newyork cluster is newyork-lh.

    Add the logical hostname entries for paris-lh in the /etc/hosts file in the global zone on both nodes of the paris cluster. Similarly, add the logical hostname entries for newyork-lh in the /etc/hosts file in the global zone on both nodes of the newyork cluster.
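
    For illustration, the /etc/hosts entries could look like the following fragment. The IP addresses shown are documentation placeholders, not values from this example; substitute the addresses assigned in your environment.

```
# In /etc/hosts in the global zone of both paris cluster nodes:
192.0.2.10    paris-lh

# In /etc/hosts in the global zone of both newyork cluster nodes:
192.0.2.20    newyork-lh
```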

  2. Create the replication user and the SSH setup. For the steps to create the replication user and SSH setup, see Use Case: Setting Up Replication User and SSH.

  3. Grant ZFS permissions to the replication user on both clusters. The ZFS storage pool must be imported before you can grant permissions. If the application's HAStoragePlus resource is online, log in as the root user on the node where it is online. If the ZFS storage pool is not imported on any cluster node, import it on any node and run the ZFS commands on that node. On each cluster, log in as the root user on the cluster node where the zpool is imported.

    Type the following commands in the global zone of the node of the paris cluster where the srcpool1 zpool is imported.

    # /sbin/zfs allow zfsuser1 create,destroy,hold,mount,receive,release,send,rollback,snapshot \
    srcpool1/app1-ds1
    # /sbin/zfs allow srcpool1/app1-ds1
    ---- Permissions on srcpool1/app1-ds1 ---------------------------------
    Local+Descendent permissions:
    user zfsuser1 create,destroy,hold,mount,receive,release,rollback,send,snapshot
    #

    Type the following commands in the global zone of the node of the newyork cluster where the targpool1 zpool is imported.

    # /sbin/zfs allow zfsuser2 create,destroy,hold,mount,receive,release,send,rollback,snapshot \
    targpool1/app1-ds1-copy
    # /sbin/zfs allow targpool1/app1-ds1-copy
    ---- Permissions on targpool1/app1-ds1-copy ---------------------------------
    Local+Descendent permissions:
    user zfsuser2 create,destroy,hold,mount,receive,release,rollback,send,snapshot
    #

    Note that the replication user on each cluster must have the above Local+Descendent ZFS permissions on the ZFS dataset used on that cluster.
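
    The order in which you list the permissions in the zfs allow command does not matter; zfs displays them sorted. The following sketch uses plain shell string handling, with no cluster access needed, to confirm that the list you typed matches the nine permissions used in this example:

```shell
# Compare two comma-separated ZFS permission lists as sets (order-insensitive).
# Both lists below are taken from this example's zfs allow commands and output.
required='create,destroy,hold,mount,receive,release,rollback,send,snapshot'
typed='create,destroy,hold,mount,receive,release,send,rollback,snapshot'

# Normalize a list: split on commas, sort, and rejoin with commas.
norm() { printf '%s' "$1" | tr ',' '\n' | sort | paste -sd, -; }

if [ "$(norm "$required")" = "$(norm "$typed")" ]; then
    echo "permission sets match"
fi
```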

  4. Because the partners are zone clusters, create an Oracle Solaris Cluster private string in the global zone of each partner to store the SSH passphrase for that partner's replication user. The private string object name must be in the format zonename:replication-component:local_service_passphrase. The zone cluster name is always the same as the zone name by restriction.

    Because the paris cluster is a zone cluster, you must use paris as the zonename. Type the following command in the global zone of any one node of the paris cluster to create the private string for zfsuser1, and specify the SSH passphrase for zfsuser1 at the prompt:

    # /usr/cluster/bin/clpstring create -b paris:repcom1:local_service_passphrase \
    paris:repcom1:local_service_passphrase
    Enter string value:
    Enter string value again:

    Because newyork is the zone cluster name, you must use newyork as the zonename. Type the following command in the global zone of any one node of the newyork cluster to create the private string for zfsuser2, and specify the SSH passphrase for zfsuser2 at the prompt:

    # /usr/cluster/bin/clpstring create -b newyork:repcom1:local_service_passphrase \
    newyork:repcom1:local_service_passphrase
    Enter string value:
    Enter string value again:
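
    The required private string name format can be sketched as simple string assembly. The zonename and replication component name below are the ones used in this example:

```shell
# Build the private string object name in the required format:
# zonename:replication-component:local_service_passphrase
zonename=paris        # zone cluster name (same as the zone name)
repcomp=repcom1       # replication component name
pstring="${zonename}:${repcomp}:local_service_passphrase"
echo "$pstring"       # paris:repcom1:local_service_passphrase
```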
  5. Create a file, /var/tmp/geo/zfs_snapshot/sbp_conf, to use as the script-based plugin configuration file on both clusters. This file specifies the replication component node lists and evaluation rules. Add the following entry to the /var/tmp/geo/zfs_snapshot/sbp_conf file in each zone of the paris cluster:

    repcom1|any|paris-node-1,paris-node-2

    Add the following entry to the /var/tmp/geo/zfs_snapshot/sbp_conf file in each zone of the newyork zone cluster:

    repcom1|any|newyork-node-1,newyork-node-2
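
    Each sbp_conf entry has three fields separated by vertical bars: the replication component name, the evaluation rule, and a comma-separated node list. A minimal format check of such an entry, independent of the cluster, could look like the following sketch:

```shell
# Check that a script-based plugin configuration entry has exactly three
# |-separated fields: component|rule|node-list (entry from this example).
entry='repcom1|any|newyork-node-1,newyork-node-2'
fields=$(printf '%s\n' "$entry" | awk -F'|' '{ print NF }')
if [ "$fields" -eq 3 ]; then
    echo "sbp_conf entry format OK"
fi
```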
    
  6. On one node of the primary cluster paris, copy /opt/ORCLscgrepzfssnap/etc/zfs_snap_geo_config to create a parameters file, /var/tmp/geo/zfs_snapshot/repcom1_conf, for the replication component. Type the configuration parameters in the file:

    PS=paris-newyork
    PG=pg1
    REPCOMP=repcom1
    REPRS=repcom1-repstatus-rs
    REPRG=pg1-repstatus-rg
    DESC="Protect app1-rg1 using ZFS snapshot replication"
    APPRG=app1-rg1
    CONFIGFILE=/var/tmp/geo/zfs_snapshot/sbp_conf
    LOCAL_REP_USER=zfsuser1
    REMOTE_REP_USER=zfsuser2
    LOCAL_PRIV_KEY_FILE=
    REMOTE_PRIV_KEY_FILE=/export/home/zfsuser2/.ssh/zfsrep1 
    LOCAL_ZPOOL_RS=par-app1-hasp1
    REMOTE_ZPOOL_RS=ny-app1-hasp1
    LOCAL_LH=paris-lh
    REMOTE_LH=newyork-lh
    LOCAL_DATASET=srcpool1/app1-ds1
    REMOTE_DATASET=targpool1/app1-ds1-copy
    REPLICATION_INTERVAL=120
    NUM_OF_SNAPSHOTS_TO_STORE=2
    REPLICATION_STREAM_PACKAGE=false
    SEND_PROPERTIES=true
    INTERMEDIARY_SNAPSHOTS=false
    RECURSIVE=true
    MODIFY_PASSPHRASE=false
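
    Before running the registration script, you can sanity-check that the parameters file defines the keys you expect. The following sketch builds a sample file in a temporary location and checks a subset of the keys shown in this example; adjust the key list and path for your configuration:

```shell
# Verify that a replication parameters file defines a set of expected keys.
# Key names and values below come from this example's repcom1_conf file.
conf=$(mktemp)
cat > "$conf" <<'EOF'
PS=paris-newyork
PG=pg1
REPCOMP=repcom1
LOCAL_DATASET=srcpool1/app1-ds1
REMOTE_DATASET=targpool1/app1-ds1-copy
EOF

missing=0
for key in PS PG REPCOMP LOCAL_DATASET REMOTE_DATASET; do
    # Each key must appear at the start of a line, followed by '='.
    grep -q "^${key}=" "$conf" || { echo "missing key: $key"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all expected keys present"
rm -f "$conf"
```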
  7. Execute the zfs_snap_geo_register script, using the replication configuration file as the parameter, in any one zone of the primary cluster paris. Perform this step from the same node that you used in Step 6.

    # /opt/ORCLscgrepzfssnap/util/zfs_snap_geo_register -f /var/tmp/geo/zfs_snapshot/repcom1_conf
    

    The zfs_snap_geo_register script creates the following components in the primary cluster, as shown in Example Resource Group and Resource Setup for Oracle Solaris ZFS Snapshot Replication With Both Partner Clusters as Zone Clusters:

    • Protection group pg1

    • Replication component repcom1

    • Infrastructure resource group pg1-srcpool1-infr-rg and the storage resource pg1-srcpool1-stor-rs

    • Replication resource group repcom1-snap-rg, which contains the resource repcom1-snap-rs

    • Replication status resource group pg1-repstatus-rg and replication status resource repcom1-repstatus-rs

  8. From any zone of the paris cluster, verify that the protection group and replication component are created successfully.

    # /usr/cluster/bin/geopg show pg1
    # /usr/cluster/bin/geopg status pg1
  9. Configure a logical hostname resource and resource group in the global zone of the zone cluster partner paris to host the replication hostname.

    The zone cluster partner is paris, and the replication hostname to use for paris is paris-lh. The infrastructure resource group created automatically in the paris zone cluster is pg1-srcpool1-infr-rg, and the storage resource in that infrastructure resource group is pg1-srcpool1-stor-rs. Type the following commands in the global zone of any one node of the paris cluster:

    # clrg create paris-lh-rg
    # clrslh create -g paris-lh-rg -h paris-lh paris-lh-rs
    # clrg manage paris-lh-rg
    # clrg set -p RG_affinities=++paris:pg1-srcpool1-infr-rg paris-lh-rg
    (C538594) WARNING: resource group global:paris-lh-rg has a strong positive affinity on resource group
    paris:pg1-srcpool1-infr-rg with Auto_start_on_new_cluster=FALSE; global:paris-lh-rg will
    be forced to remain offline until its strong affinities are satisfied.
    # clrs set -p Resource_dependencies_offline_restart=paris:pg1-srcpool1-stor-rs paris-lh-rs
    # clrg show -p Auto_start_on_new_cluster paris-lh-rg
    === Resource Groups and Resources ===
    Resource Group:            paris-lh-rg
    Auto_start_on_new_cluster: True

    If the Auto_start_on_new_cluster property is not set to True, type the following command:

    # clrg set -p Auto_start_on_new_cluster=True paris-lh-rg
  10. From any one zone of the newyork zone cluster, retrieve the protection group that was created on the primary cluster.

    # /usr/cluster/bin/geopg get -s paris-newyork pg1

    This command creates the resource setup in the secondary cluster.

  11. Configure a logical hostname resource and resource group in the global zone of the zone cluster partner newyork to host the replication hostname.

    The zone cluster partner is newyork, and the replication hostname to use for newyork is newyork-lh. The infrastructure resource group created automatically in the newyork zone cluster is pg1-targpool1-infr-rg, and the storage resource in that infrastructure resource group is pg1-targpool1-stor-rs. Type the following commands in the global zone of any one node of the newyork cluster:

    # clrg create newyork-lh-rg
    # clrslh create -g newyork-lh-rg -h newyork-lh newyork-lh-rs
    # clrg manage newyork-lh-rg
    # clrg set -p RG_affinities=++newyork:pg1-targpool1-infr-rg newyork-lh-rg
    (C538594) WARNING: resource group global:newyork-lh-rg has a strong positive affinity on resource group
    newyork:pg1-targpool1-infr-rg with Auto_start_on_new_cluster=FALSE; global:newyork-lh-rg will
    be forced to remain offline until its strong affinities are satisfied.
    # clrs set -p Resource_dependencies_offline_restart=newyork:pg1-targpool1-stor-rs newyork-lh-rs
    # clrg show -p Auto_start_on_new_cluster newyork-lh-rg
    === Resource Groups and Resources ===
    Resource Group:            newyork-lh-rg
    Auto_start_on_new_cluster: True

    If the Auto_start_on_new_cluster property is not set to True, type the following command:

    # clrg set -p Auto_start_on_new_cluster=True newyork-lh-rg
  12. From any one zone of the newyork zone cluster, verify that the protection group and replication components are available. Ensure that the protection group synchronization status between the paris and newyork clusters shows OK.

    # /usr/cluster/bin/geopg show pg1
    # /usr/cluster/bin/geopg status pg1

    Similarly, run the same commands in any one zone of the paris zone cluster to check the status:

    # /usr/cluster/bin/geopg show pg1
    # /usr/cluster/bin/geopg status pg1
  13. From any one zone of the paris zone cluster, activate the protection group to start the Oracle Solaris ZFS snapshot replication.

    # /usr/cluster/bin/geopg start -e global pg1
  14. Type the following command from one node of either partner cluster to confirm that the protection group is started on both clusters.

    # /usr/cluster/bin/geopg status pg1