
Oracle® Solaris Cluster Geographic Edition Data Replication Guide for ZFS Snapshots


Updated: February 2017

How to Create and Configure an Oracle Solaris ZFS Snapshot Protection Group

Before You Begin

Ensure that the following conditions are met:

  1. Assume the root role or a role that is assigned the Geo Management RBAC rights profile.

    For more information about RBAC, see Securing Geographic Edition Software in Oracle Solaris Cluster 4.3 Geographic Edition Installation and Configuration Guide.


    Note -  If you use a role with Geo Management RBAC rights, ensure that the /var/cluster/geo ACLs are correct on each node of both partner clusters. If necessary, assume the root role on the cluster node and set the correct ACLs.
    # chmod A+user:username:rwx:allow /var/cluster/geo

    The /var/cluster/geo directory must have the correct access control lists (ACL) applied for compatibility between the Geo Management RBAC rights profile and Oracle Solaris ZFS snapshot software.
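
    For example, to confirm that the ACL entry was applied, you can display the directory's ACL with the Oracle Solaris ls -v option. The -d option lists the directory itself rather than its contents, and the entry that you added for your administrative user should appear in the output:

    # ls -dv /var/cluster/geo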


  2. On all nodes of both clusters, create or update the script-based plug-in configuration file for a protection group. This file contains the details of which nodes pertain to a replication component in the protection group.

    Update the file so that it contains one line with the rule information for the replication component, in the following format:

    replication-component|any|nodelist
    replication-component

    Name of the replication component provided in the replication configuration file.

    nodelist

    The names of one or more cluster nodes, separated by commas, on which the plug-in validates the configuration.

    Specify any as the evaluation rule in the second field.

    For example, suppose that you want to create and use a file /var/tmp/geo/zfs_snapshot/sbp_conf. Suppose that the nodes of cluster paris are paris-node-1 and paris-node-2. On each node of the cluster paris, type the following commands:

    paris-node-N# mkdir -p /var/tmp/geo/zfs_snapshot
    paris-node-N# echo "repcom1|any|paris-node-1,paris-node-2" > /var/tmp/geo/zfs_snapshot/sbp_conf

    Suppose that the nodes of the cluster newyork are newyork-node-1 and newyork-node-2. On each node of cluster newyork, type the following commands:

    newyork-node-N# mkdir -p /var/tmp/geo/zfs_snapshot
    newyork-node-N# echo "repcom1|any|newyork-node-1,newyork-node-2" > /var/tmp/geo/zfs_snapshot/sbp_conf
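
    To verify the rule on each node, you can display the file contents; the single line must match the rule that you created for that cluster, as in the paris example above:

    paris-node-N# cat /var/tmp/geo/zfs_snapshot/sbp_conf
    repcom1|any|paris-node-1,paris-node-2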

    For more information about configuration files, see configuration_file Property in Oracle Solaris Cluster 4.3 Geographic Edition System Administration Guide.

  3. Ensure that the Auto_start_on_new_cluster property of the application resource group is set to False.
    # clresourcegroup show -p Auto_start_on_new_cluster app-group

    If necessary, change the property value to False.

    # clresourcegroup set -p Auto_start_on_new_cluster=False app-group
  4. If either partner is a zone cluster, configure an Oracle Solaris Cluster private string in the global zone on each partner.

    This private string stores the SSH passphrase of the replication user on that partner. The name of the private string must have the following format:

    local-partner-zonename:replication-component:local_service_passphrase

    For example:

    • Partnership between a global zone and a zone cluster – Suppose the name of the zone cluster is zc1. The name of the replication component is repcom1. The replication user for the global zone partner is zfsuser1. The replication user for the zone cluster partner is zfsuser2.

      In one node of the global zone partner, type the following command to create a private string to store the SSH passphrase of zfsuser1:

      # clps create -b global:repcom1:local_service_passphrase \
      global:repcom1:local_service_passphrase 
      <Enter SSH passphrase for zfsuser1 at prompt>

      In the global zone of one node of the zone cluster partner zc1, type the following command to create a private string to store the SSH passphrase of zfsuser2:

      # clps create -b zc1:repcom1:local_service_passphrase \
      zc1:repcom1:local_service_passphrase 
      <Enter SSH passphrase for zfsuser2 at prompt>
    • Partnership between two zone clusters – Suppose the partnership is between zone clusters zc1 and zc2 and the replication component is repcom1. Suppose that the replication user for zc1 is zfsuser1 and that for zc2 is zfsuser2.

      In the global zone of one node of the zone cluster partner zc1, type the following command to create a private string to store the SSH passphrase of zfsuser1:

      # clps create -b zc1:repcom1:local_service_passphrase \
      zc1:repcom1:local_service_passphrase
      <Enter SSH passphrase for zfsuser1 at prompt>

      In the global zone of one node of the zone cluster partner zc2, type the following command to create a private string to store the SSH passphrase of zfsuser2:

      # clps create -b zc2:repcom1:local_service_passphrase \
      zc2:repcom1:local_service_passphrase
      <Enter SSH passphrase for zfsuser2 at prompt>
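
    To confirm that a private string exists on a partner, you can list the private strings in the global zone of that partner. This example assumes that the list subcommand of the clpstring (clps) utility is available in your release:

    # clps list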
  5. On one node of the primary cluster, copy the default replication configuration file to another location and specify the values in the file.

    For example, copy the file to the /var/tmp/geo/zfs_snapshot directory.

    # cp /opt/ORCLscgrepzfssnap/etc/zfs_snap_geo_config /var/tmp/geo/zfs_snapshot

    The following list uses sample values:

    PS=paris-newyork
    PG=pg1
    REPCOMP=repcom1
    REPRS=repcom1-repstatus-rs
    REPRG=pg1-repstatus-rg
    DESC="Protect app1-rg1 using ZFS snapshot replication"
    APPRG=app1-rg1
    CONFIGFILE=/var/tmp/geo/zfs_snapshot/sbp_conf
    LOCAL_REP_USER=zfsuser1
    REMOTE_REP_USER=zfsuser2
    LOCAL_PRIV_KEY_FILE=
    REMOTE_PRIV_KEY_FILE=
    LOCAL_ZPOOL_RS=par-app1-hasp1
    REMOTE_ZPOOL_RS=ny-app1-hasp1
    LOCAL_LH=paris-lh
    REMOTE_LH=newyork-lh
    LOCAL_DATASET=srcpool1/app1-ds1
    REMOTE_DATASET=targpool1/app1-ds1-copy
    REPLICATION_INTERVAL=120
    NUM_OF_SNAPSHOTS_TO_STORE=2
    REPLICATION_STREAM_PACKAGE=false
    SEND_PROPERTIES=true
    INTERMEDIARY_SNAPSHOTS=false
    RECURSIVE=true
    MODIFY_PASSPHRASE=false

    For more information about the zfs_snap_geo_config file, see Overview of the Oracle Solaris ZFS Snapshot Remote Replication Configuration File.
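
    Before you run the registration script in the next step, you can optionally confirm that the source dataset named in LOCAL_DATASET exists on the primary cluster node where its pool is currently imported. The names below are the sample values from the configuration file above:

    paris-node-1# zpool list srcpool1
    paris-node-1# zfs list srcpool1/app1-ds1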

  6. On the primary cluster node where the replication configuration file with replication parameter values is stored, execute the setup script /opt/ORCLscgrepzfssnap/util/zfs_snap_geo_register.

    For example:

    paris-node-1# /opt/ORCLscgrepzfssnap/util/zfs_snap_geo_register -f \
    /var/tmp/geo/zfs_snapshot/zfs_snap_geo_config

    The zfs_snap_geo_register script creates the following components:

    • Protection group pg1

    • Replication component repcom1

    • Infrastructure resource group pg1-app-rg1-infr-rg

    • Replication resource group repcom1-snap-rg, which contains the resource repcom1-snap-rs

    • Replication status resource group pg1-repstatus-rg and replication status resource repcom1-repstatus-rs

    For details about an example setup involving resource groups and resources, see Use Cases for Oracle Solaris ZFS Snapshot Replication.
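
    To review the objects that the script created, you can check their status on the primary cluster. The names below are the sample ones listed above:

    paris-node-1# clresourcegroup status pg1-app-rg1-infr-rg repcom1-snap-rg pg1-repstatus-rg
    paris-node-1# clresource status repcom1-snap-rs repcom1-repstatus-rs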

  7. Replicate the protection group to the partner cluster.

    The final messages of the setup script show the exact geopg get command that is required. You must log in to one node of the partner cluster and execute that command.

    For example, where paris-newyork is the partnership name and pg1 is the protection group name:

    newyork-node-1# geopg get --partnership paris-newyork pg1
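
    After the geopg get command completes, you can optionally validate the retrieved protection group configuration on the partner cluster:

    newyork-node-1# geopg validate pg1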
  8. If any partner is a zone cluster, configure a logical hostname resource and resource group in the global zone of that zone cluster partner to host the replication hostname.

    Perform this configuration for each zone cluster partner in a partnership. The names of the resource and resource group are not restricted to any specific format. After configuring the logical hostname resource and resource group, perform the following actions:

    • Add a strong positive affinity from the logical hostname resource group to the zpool's infrastructure resource group.


      Note -  Setting the strong positive resource group affinity prints a warning message if the logical hostname resource group has Auto_start_on_new_cluster=TRUE while the zpool's infrastructure resource group has Auto_start_on_new_cluster=FALSE. This configuration is allowed because the Geographic Edition software brings up the zpool's infrastructure resource group when required, which also brings up the logical hostname resource group due to the affinity.

      Because one infrastructure resource group is configured for each failover application resource group, one logical hostname resource group is required for each such infrastructure resource group.

      If an application resource group is scalable, one logical hostname resource group is configured for each of the zpools managed by the application resource group.


    • Add an offline-restart resource dependency from the logical hostname resource to the zpool's infrastructure storage HAStoragePlus resource.

    • Ensure that Auto_start_on_new_cluster is TRUE on the logical hostname resource group. This property is TRUE by default. If the property is FALSE, set it to TRUE.

    A strong positive affinity from each such logical hostname resource group to its associated Oracle Solaris ZFS snapshot infrastructure resource group is essential. It ensures that the replication logical hostname is online in the global zone of the same cluster node where the associated ZFS pool is imported by the infrastructure SUNW.HAStoragePlus resource.

    For example:

    Suppose that the local partner is a zone cluster zc1 and the local replication hostname is paris-lh. The zpool infrastructure resource group in zc1 is pg1-app-rg1-infr-rg. The storage resource is pg1-srcpool1-stor-rs. Type the following commands in the global zone of one node of zc1:

    # clrg create paris-lh-rg
    # clrslh create -g paris-lh-rg -h paris-lh paris-lh-rs
    # clrg manage paris-lh-rg
    # clrg set -p RG_affinities=++zc1:pg1-app-rg1-infr-rg paris-lh-rg
    (C538594) WARNING: resource group global:paris-lh-rg has a strong positive affinity on
    resource group zc1:pg1-app-rg1-infr-rg with Auto_start_on_new_cluster=FALSE;
    global:paris-lh-rg will be forced to remain offline until its strong affinities are satisfied.
    # clrs set -p Resource_dependencies_offline_restart=zc1:pg1-srcpool1-stor-rs paris-lh-rs
    # clrg show -p Auto_start_on_new_cluster paris-lh-rg

    === Resource Groups and Resources ===

    Resource Group:                               paris-lh-rg
      Auto_start_on_new_cluster:                      True

    If the property is not True, type the following command:

    # clrg set -p Auto_start_on_new_cluster=True paris-lh-rg
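
    To confirm the affinity and dependency settings that you configured in this step, you can display the relevant properties. The resource and resource group names are the sample ones used above:

    # clrg show -p RG_affinities paris-lh-rg
    # clrs show -p Resource_dependencies_offline_restart paris-lh-rs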
  9. Verify the protection group configuration.

    For example, suppose repcom1-repstatus-rs is the replication status resource name:

    paris-node-1# geoadm status
    paris-node-1# clresource status repcom1-repstatus-rs
    newyork-node-1# geoadm status
    newyork-node-1# clresource status repcom1-repstatus-rs

See Also


Note -  Save the /var/tmp/geo/zfs_snapshot/zfs_snap_geo_config file for possible future use. When you want to modify any properties for this replication component that you created, you can edit the desired parameters in this same file and re-run the zfs_snap_geo_register script. For more information, see How to Modify an Oracle Solaris ZFS Snapshot Replication Component.

Troubleshooting

If you experience failures while performing this procedure, enable debugging. See Debugging an Oracle Solaris ZFS Snapshot Protection Group.

Next Steps

For information about activating a protection group, see How to Activate a Protection Group in Oracle Solaris Cluster 4.3 Geographic Edition System Administration Guide.