
Oracle® Solaris Cluster Data Replication Guide for ZFS Snapshots


Updated: October 2018

Prerequisites for Configuring Remote Replication Using Oracle Solaris ZFS Snapshot

Perform the following actions before you run the setup script to configure the Oracle Solaris ZFS snapshot replication:

  • Because the application uses ZFS as its data store, the application resource group typically already contains a HAStoragePlus resource that manages the ZFS pool the application uses. If the application resource group does not have a HAStoragePlus resource, create one to manage that ZFS pool on both the primary and the secondary cluster. Provide the name of that HAStoragePlus resource in the configuration file when you configure replication of the datasets of the ZFS pool. Ensure that the HAStoragePlus resource is upgraded to at least version 11 of the HAStoragePlus resource type. For information about creating a HAStoragePlus resource, see Planning and Administering Data Services for Oracle Solaris Cluster 4.4.
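    A minimal sketch of creating such a resource, assuming hypothetical names: a resource group app-rg, a zpool apppool, and a resource app-hasp-rs. Adapt the names to your configuration.

    ```shell
    # Register the HAStoragePlus resource type, if it is not already registered.
    clrt register SUNW.HAStoragePlus

    # Create a HAStoragePlus resource that manages the application's zpool.
    # Assumed names: app-rg (resource group), apppool (zpool), app-hasp-rs (resource).
    clrs create -g app-rg -t SUNW.HAStoragePlus -p Zpools=apppool app-hasp-rs

    # Check the resource type version in use; it must be at least 11.
    clrs show -p Type_version app-hasp-rs
    ```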

  • To use zpools for globally mounted ZFS filesystems, you must configure a device group for each such zpool. The device group must have the same name as the zpool. The application resource group might or might not have a HAStoragePlus resource configured in it. If the application resource group does not have a HAStoragePlus resource, leave the corresponding entries empty in the configuration file when you use zpools for globally mounted ZFS filesystems.
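    A sketch of creating such a device group, assuming a hypothetical zpool gpool on cluster nodes pnode1 and pnode2. Verify the device group type and property names against the cldevicegroup(1CL) man page for your release.

    ```shell
    # Create a zpool device group for globally mounted ZFS filesystems.
    # The device group name must match the zpool name (here, assumed: gpool).
    cldevicegroup create -t zpool -p poolaccess=global -n pnode1,pnode2 gpool

    # Verify the device group configuration.
    cldevicegroup show gpool
    ```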

  • Decide which hostnames to use as the logical hostnames for the replication infrastructure on each partner cluster.

  • In the global zone on all nodes in the application's node list on both partner clusters, configure the replication user with the ZFS permissions that are required to perform ZFS operations on the ZFS datasets that are added to the protection group.

    The user must have the following Local+Descendent ZFS permissions on the ZFS dataset in the cluster where the user is set up: create, destroy, hold, mount, receive, release, rollback, send, and snapshot. For an example that illustrates how to set the ZFS permissions, see Step 3 in Use Case: Configuring Oracle Solaris ZFS Snapshot Replication When Both Partner Clusters Are Global Zones.

    For more information about ZFS dataset permissions, see Oracle Solaris ZFS Delegated Administration.

    The source and target datasets must already exist in the source and target zpools on the primary and secondary clusters.
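    The delegation above can be sketched with zfs allow, assuming a hypothetical replication user zrepuser and dataset srcpool1/data:

    ```shell
    # Grant the replication user (assumed: zrepuser) the required permissions
    # on the replicated dataset (assumed: srcpool1/data).
    # -ld limits the grant to Local+Descendent scope.
    zfs allow -ld zrepuser \
        create,destroy,hold,mount,receive,release,rollback,send,snapshot \
        srcpool1/data

    # Display the delegated permissions to confirm the grant.
    zfs allow srcpool1/data
    ```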

  • Configure SSH to enable the replication user to communicate between the global zones of the partner clusters. Perform the following actions to set up SSH:

    • Set up SSH keys for the replication user in the global zone of each partner cluster, and copy the public keys to the corresponding global zone of the remote partner cluster.

    • Specify the SSH passphrase as input to the replication setup. Note the following when providing the passphrase:

      • If both partners are global zones, the setup or register script prompts for the SSH passphrases when it is executed to create a snapshot replication component. If the MODIFY_PASSPHRASE parameter is set to True in the configuration file, the script also prompts for the passphrases when it is executed to modify an existing snapshot replication component. Provide the passphrases for the local cluster and the remote cluster replication users at the prompts.

      • If at least one partner is a zone cluster, you must configure an Oracle Solaris Cluster private string object in the global zone of each partner to store the passphrase for the private key of the replication user of the partner. Note that the Oracle Solaris Cluster private string must be created even for a global zone partner if its partner is a zone cluster.

        The name of the Oracle Solaris Cluster private string must have the following format:

        local-partner-zonename:replication-component:local_service_passphrase
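        As a sketch, assuming the local partner is a zone cluster named paris and the replication component is named pg1-srcpool1 (both hypothetical), the private string could be created as follows; see the clpstring(1CL) man page for the exact options in your release:

        ```shell
        # Create the private string in the global zone; the command prompts
        # for the string value (the SSH private-key passphrase).
        # Name format: local-partner-zonename:replication-component:local_service_passphrase
        clpstring create -b paris paris:pg1-srcpool1:local_service_passphrase
        ```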

      For information about generating a public/private key pair for use with secure shell, see How to Generate a Public/Private Key Pair for Use With Secure Shell. For an example illustrating how to set up SSH, see Use Case: Setting Up Replication User and SSH.
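    The SSH setup steps above can be sketched as follows, assuming a hypothetical replication user zrepuser and a remote global-zone host remote-gz:

    ```shell
    # Generate an SSH key pair for the replication user in the global zone
    # of the local partner cluster; supply a passphrase when prompted.
    ssh-keygen -t rsa -f ~/.ssh/id_rsa

    # Copy the public key to the replication user's authorized_keys file in
    # the corresponding global zone of the remote partner (assumed: remote-gz).
    ssh-copy-id zrepuser@remote-gz
    ```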

  • You must ensure that the application resource groups that are added to the protection group are in the unmanaged state on the primary and secondary clusters.

    When you configure remote replication using ZFS snapshots for an application, the setup software sets resource group affinities on the application resource group and sets resource properties on the HAStoragePlus resource that manages the application's zpool. The setup script also adds the application resource group to the protection group. To perform all the required configuration, the application resource group must be in the unmanaged state on the primary and secondary clusters when you create the replication component.
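    A sketch of bringing an application resource group (assumed name: app-rg) to the unmanaged state on each cluster:

    ```shell
    # Take the resource group offline, then move it to the unmanaged state.
    # Run on both the primary and the secondary cluster.
    clrg offline app-rg
    clrg unmanage app-rg

    # Verify that the resource group is unmanaged.
    clrg status app-rg
    ```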

  • For a zone cluster, before you upgrade Oracle Solaris Cluster from 4.3 to 4.4, you must remove the resource dependency of the logical hostname resource on the HAStoragePlus resource of the infrastructure resource group, and the resource group affinity of the logical hostname resource group on the infrastructure resource group.

    For example, suppose the local partner paris is a zone cluster, the logical hostname resource group in the global zone is paris-lh-rg, and the resource is paris-lh-rs. Type the following commands in the global zone of one node of paris to remove only the affinity and the dependency:

    $ clrg set -p RG_affinities-=++paris:pg1-srcpool1-infr-rg paris-lh-rg
    $ clrs set -p Resource_dependencies_offline_restart-=paris:pg1-srcpool1-stor-rs paris-lh-rs