
Oracle® Solaris Cluster Geographic Edition Data Replication Guide for ZFS Snapshots


Updated: February 2017
 
 

Adding a Replication Component to an Oracle Solaris ZFS Snapshot Protection Group

A protection group is the container for the application resource groups and replication components, which contain data for services that are protected from disaster. A ZFS snapshot replication component in a protection group protects the data by replicating it from the primary cluster to the secondary cluster. The software also monitors the replication status.

This section provides information about adding a replication component to an Oracle Solaris ZFS snapshot protection group:

How to Add a Replication Component to an Oracle Solaris ZFS Snapshot Protection Group

Perform this procedure to add a replication component to an existing Oracle Solaris ZFS snapshot protection group.


Note -  When the protection group is initially created, the replication component specified in the zfs_snap_geo_config configuration file is added to the protection group. Therefore, you need to perform this procedure only to add more replication components to an existing Oracle Solaris ZFS snapshot protection group.

Before You Begin

Before you add a replication component to a protection group, ensure that the following conditions are met:

  • The Geographic Edition software is installed on the primary and secondary clusters.

  • You have reviewed the information in Planning Remote Replication Using Oracle Solaris ZFS Snapshot.

  • You have performed the prerequisites mentioned in Prerequisites for Configuring Remote Replication Using Oracle Solaris ZFS Snapshot.

  • The local cluster is a member of a partnership.

  • The protection group is defined on the local cluster.

  • The protection group is offline on the local cluster and the partner cluster, if the partner cluster can be reached.

  • The zpool for the dataset to be put under replication is managed by a SUNW.HAStoragePlus resource in an application resource group. Ensure that the application resource group is in the unmanaged state on the local partner, and also on the remote partner cluster if it is reachable, as illustrated in the example after this list.
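
For example, assuming a protection group named pg1 and an application resource group named app-group (illustrative names that also appear in later examples in this procedure), commands similar to the following take the protection group offline and place the application resource group in the unmanaged state:

  # geopg stop -e global pg1
  # clresourcegroup offline app-group
  # clresourcegroup unmanage app-group

Run the clresourcegroup commands on one node of each partner cluster that is reachable.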

  1. Assume the root role or a role that is assigned the Geo Management RBAC rights profile.

    For more information about RBAC, see Securing Geographic Edition Software in Oracle Solaris Cluster 4.3 Geographic Edition Installation and Configuration Guide.


    Note -  If you use a role with Geo Management RBAC rights, ensure that the /var/cluster/geo ACLs are correct on each node of both partner clusters. If necessary, assume the root role on the cluster node and set the correct ACLs.
    # chmod A+user:username:rwx:allow /var/cluster/geo

    The /var/cluster/geo directory must have the correct access control lists (ACLs) applied for compatibility between the Geo Management RBAC rights profile and the Oracle Solaris ZFS snapshot software.
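
    To verify the ACL entries on a node, you can list them with the -v option of the ls command:

    # ls -dv /var/cluster/geo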


  2. On all nodes of both clusters, update the script-based plug-in configuration file for a protection group.

    This file is already associated with the protection group and is shown in the output of geopg show. It specifies which nodes pertain to each replication component in the protection group.

    Update the file so that it contains one line with the rule information for the new replication component, in the following format:

    replication-component|any|nodelist

    replication-component

    Name of the replication component provided in the replication configuration file.

    nodelist

    The names of one or more cluster nodes on which the plug-in validates the configuration.

    For example, suppose the configuration file is /var/tmp/geo/zfs_snapshot/sbp_conf and that the nodes of cluster paris are paris-node-1 and paris-node-2. On each node of cluster paris, type the following command:

    paris-node-N# echo "repcom1|any|paris-node-1,paris-node-2" >> /var/tmp/geo/zfs_snapshot/sbp_conf

    Suppose that the nodes of cluster newyork are newyork-node-1 and newyork-node-2. On each node of cluster newyork, type the following command:

    newyork-node-N# echo "repcom1|any|newyork-node-1,newyork-node-2" >> /var/tmp/geo/zfs_snapshot/sbp_conf
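
    To confirm that the rule was appended, display the file on each node; in addition to any rules that already existed, the output now includes the line for repcom1. For example, on a node of cluster paris:

    paris-node-N# cat /var/tmp/geo/zfs_snapshot/sbp_conf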

    For more information about configuration files, see configuration_file Property in Oracle Solaris Cluster 4.3 Geographic Edition System Administration Guide.

  3. Ensure that the Auto_start_on_new_cluster property of the application resource group is set to False.
    # clresourcegroup show -p Auto_start_on_new_cluster app-group

    If necessary, change the property value to False.

    # clresourcegroup set -p Auto_start_on_new_cluster=False app-group
  4. If either partner is a zone cluster, configure an Oracle Solaris Cluster private string in the global zone on each partner to store the SSH passphrase of the replication user on that partner.

    The name of the private string must have the following format:

    local-partner-zonename:replication-component:local_service_passphrase

    For example, suppose the partnership is between a global zone and a zone cluster zc1. The name of the replication component is repcom1. The replication user for the global zone partner is zfsuser1. The replication user for the zone cluster partner is zfsuser2. On one node of the global zone partner, type the following command to create a private string to store the SSH passphrase of zfsuser1:

    $ clps create -b global:repcom1:local_service_passphrase global:repcom1:local_service_passphrase
    <Enter SSH passphrase for zfsuser1 at prompt>

    In the global zone of one node of the zone cluster partner zc1, type the following command to create a private string to store the SSH passphrase of zfsuser2:

    $ clps create -b zc1:repcom1:local_service_passphrase zc1:repcom1:local_service_passphrase 
    <Enter SSH passphrase for zfsuser2 at prompt>

    Now suppose that the partnership is between two zone clusters, zc1 and zc2, and that the replication component is repcom1. The replication user for zc1 is zfsuser1 and the replication user for zc2 is zfsuser2. In the global zone of one node of the zone cluster partner zc1, type the following command to create a private string to store the SSH passphrase of zfsuser1:

    $ clps create -b zc1:repcom1:local_service_passphrase zc1:repcom1:local_service_passphrase
    <Enter SSH passphrase for zfsuser1 at prompt>

    In the global zone of one node of the zone cluster partner zc2, type the following command to create a private string to store the SSH passphrase of zfsuser2:

    $ clps create -b zc2:repcom1:local_service_passphrase zc2:repcom1:local_service_passphrase 
    <Enter SSH passphrase for zfsuser2 at prompt>
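
    To confirm that a private string exists, you can list the private strings configured on that partner (run the command where you created the string); the stored value itself is not displayed:

    $ clps list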
  5. Add a replication component to the protection group.

    On one node of the local cluster, copy the default replication configuration file to another location and specify the values for the replication component in the new file. Then run the zfs_snap_geo_register script with the new configuration file.

    For example, copy the file to the /var/tmp/geo/zfs_snapshot directory:

    paris-node-1# cp /opt/ORCLscgrepzfssnap/etc/zfs_snap_geo_config /var/tmp/geo/zfs_snapshot/repcom1_config

    After you fill in the values in the configuration file, execute the setup script.

    paris-node-1# /opt/ORCLscgrepzfssnap/util/zfs_snap_geo_register -f /var/tmp/geo/zfs_snapshot/repcom1_config

    This command adds a replication component to a protection group on the local cluster. If the partner cluster contains a protection group with the same name, the command also propagates the new configuration to the partner cluster.


    Note -  The add operation for the replication component is performed during the scripted registration. For details about scripted registration, see How to Create and Configure an Oracle Solaris ZFS Snapshot Protection Group.
  6. If either partner is a zone cluster, configure a logical hostname resource and resource group in the global zone of that zone cluster partner to host the replication hostname.

    Perform this configuration for each zone cluster partner in a partnership. The names of the resource and resource group are not restricted to any specific format.


    Note -  If another replication component that uses the same zpool already exists in the same protection group, then you have already configured such a logical hostname resource group for that replication component, and you do not need to configure another one. The existing replication component and the new replication component share the same infrastructure resource group that manages the common zpool, so the same logical hostname resource group in the global zone suffices to co-locate the replication logical hostname with the zpool.

    Because one infrastructure resource group is configured for each failover application resource group, one logical hostname resource group is required for each such infrastructure resource group.

    If an application resource group is scalable, one logical hostname resource group is configured for each of the zpools managed by the application resource group.


    After configuring the logical hostname resource and resource group, perform the following actions:

    • Add a strong positive affinity from the logical hostname resource group to the zpool's infrastructure resource group.


      Note -  Setting the strong positive resource group affinity prints a warning message if the logical hostname resource group has Auto_start_on_new_cluster=TRUE while the zpool's infrastructure resource group has Auto_start_on_new_cluster=FALSE. This configuration is allowed because the Geographic Edition software brings up the zpool's infrastructure resource group when required, which also brings up the logical hostname resource group due to the affinity.
    • Add an offline-restart resource dependency from the logical hostname resource to the zpool's infrastructure HAStoragePlus storage resource.

    • Ensure that Auto_start_on_new_cluster is TRUE on the logical hostname resource group. This property is TRUE by default. If the property is FALSE, set it to TRUE.

    The strong positive affinity from the logical hostname resource group to the associated Oracle Solaris ZFS snapshot infrastructure resource group is essential. It ensures that the replication logical hostname is online in the global zone of the same cluster node where the associated ZFS pool is imported by the infrastructure SUNW.HAStoragePlus resource.

    For example, suppose the local partner is a zone cluster zc1 and local replication hostname is paris-lh. The zpool infrastructure resource group in zc1 is pg1-app-rg1-infr-rg. The storage resource is pg1-srcpool1-stor-rs. Type the following commands in the global zone of one node of zc1:

    # clrg create paris-lh-rg 
    # clrslh create -g paris-lh-rg -h paris-lh paris-lh-rs 
    # clrg manage paris-lh-rg  
    # clrg set -p RG_affinities=++zc1:pg1-app-rg1-infr-rg paris-lh-rg 
    (C538594) WARNING: resource group global:paris-lh-rg has a strong positive affinity on
    resource group zc1:pg1-app-rg1-infr-rg with Auto_start_on_new_cluster=FALSE;
    global:paris-lh-rg will be forced to remain offline until its strong affinities are satisfied. 
    # clrs set -p Resource_dependencies_offline_restart=zc1:pg1-srcpool1-stor-rs paris-lh-rs 
    # clrg show -p Auto_start_on_new_cluster paris-lh-rg 
    === Resource Groups and Resources === 
    Resource Group:                               paris-lh-rg   
    Auto_start_on_new_cluster:                      True 

    If the property is not True, type the following command:

    # clrg set -p Auto_start_on_new_cluster=True paris-lh-rg 
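
    You can also verify the affinity and the offline-restart dependency that you set earlier in this step:

    # clrg show -p RG_affinities paris-lh-rg
    # clrs show -p Resource_dependencies_offline_restart paris-lh-rs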
  7. Verify the protection group configuration.

    For example, suppose repcom1-repstatus-rs is the replication status resource name:

    paris-node-1# geoadm status
    paris-node-1# clresource status repcom1-repstatus-rs
    newyork-node-1# geoadm status
    newyork-node-1# clresource status repcom1-repstatus-rs

See Also


Note -  Save the /var/tmp/geo/zfs_snapshot/repcom1_config file for possible future use. When you want to modify any properties of this replication component, you can edit the desired parameters in this same file and rerun the zfs_snap_geo_register script. For more information, see How to Modify an Oracle Solaris ZFS Snapshot Replication Component.

Troubleshooting

If you have difficulties adding the replication component to the protection group, see Debugging an Oracle Solaris ZFS Snapshot Protection Group.