Before You Begin
Ensure that the following conditions are met:
The Geographic Edition software is installed on the primary and secondary clusters.
You have reviewed the information in Planning Remote Replication Using Oracle Solaris ZFS Snapshot.
You have performed the prerequisites mentioned in Prerequisites for Configuring Remote Replication Using Oracle Solaris ZFS Snapshot.
The local cluster is a member of a partnership.
The protection group you are creating does not already exist on either partner cluster.
The application resource group containing the HAStoragePlus resource managing the zpool is in the unmanaged state on the primary and secondary clusters.
For more information about RBAC, see Securing Geographic Edition Software in Oracle Solaris Cluster 4.3 Geographic Edition Installation and Configuration Guide.
The /var/cluster/geo directory must have the correct access control list (ACL) applied for compatibility between the Geo Management RBAC rights profile and the Oracle Solaris ZFS snapshot software. For example:

# chmod A+user:username:rwx:allow /var/cluster/geo
Update the file so that it contains a single line with the rule information for the replication component.
replication-component|any|nodelist
replication-component – The name of the replication component provided in the replication configuration file.
any – The evaluation rule. Ensure that you specify the literal value any.
nodelist – The name of one or more cluster nodes where the plug-in is to validate the configuration.
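The three-field rule line can be sanity-checked with ordinary shell tools. The following sketch is only an illustration (the component and node names are examples, not fixed values): it splits a rule line on the | delimiter and confirms that the evaluation rule field is the literal value any.

```shell
# Example rule line; component and node names are illustrative only.
line="repcom1|any|paris-node-1,paris-node-2"

# Split the rule into its three fields on the | delimiter.
IFS='|' read -r component rule nodelist <<EOF
$line
EOF

# The second field must be the literal evaluation rule "any".
if [ "$rule" = "any" ]; then
    echo "rule OK: component=$component nodes=$nodelist"
else
    echo "invalid evaluation rule: $rule" >&2
    exit 1
fi
```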
For example, suppose that you want to create and use a file /var/tmp/geo/zfs_snapshot/sbp_conf. Suppose that the nodes of cluster paris are paris-node-1 and paris-node-2. On each node of the cluster paris, type the following commands:
paris-node-N# mkdir -p /var/tmp/geo/zfs_snapshot
paris-node-N# echo "repcom1|any|paris-node-1,paris-node-2" > /var/tmp/geo/zfs_snapshot/sbp_conf
Suppose that the nodes of the cluster newyork are newyork-node-1 and newyork-node-2. On each node of cluster newyork, type the following commands:
newyork-node-N# mkdir -p /var/tmp/geo/zfs_snapshot
newyork-node-N# echo "repcom1|any|newyork-node-1,newyork-node-2" > /var/tmp/geo/zfs_snapshot/sbp_conf
For more information about configuration files, see configuration_file Property in Oracle Solaris Cluster 4.3 Geographic Edition System Administration Guide.
# clresourcegroup show -p Auto_start_on_new_cluster app-group
If necessary, change the property value to False.
# clresourcegroup set -p Auto_start_on_new_cluster=False app-group
This private string stores the SSH passphrase of the replication user on that partner. The name of the private string must have the following format:
local-partner-zonename:replication-component:local_service_passphrase
For example:
Partnership between a global zone and a zone cluster – Suppose the name of the zone cluster is zc1. The name of the replication component is repcom1. The replication user for the global zone partner is zfsuser1. The replication user for the zone cluster partner is zfsuser2.
In one node of the global zone partner, type the following command to create a private string to store the SSH passphrase of zfsuser1:
# clps create -b global:repcom1:local_service_passphrase \
global:repcom1:local_service_passphrase
<Enter SSH passphrase for zfsuser1 at prompt>
In the global zone of one node of the zone cluster partner zc1, type the following command to create a private string to store the SSH passphrase of zfsuser2:
# clps create -b zc1:repcom1:local_service_passphrase \
zc1:repcom1:local_service_passphrase
<Enter SSH passphrase for zfsuser2 at prompt>
Partnership between two zone clusters – Suppose the partnership is between zone clusters zc1 and zc2 and the replication component is repcom1. Suppose that the replication user for zc1 is zfsuser1 and that for zc2 is zfsuser2.
In the global zone of one node of the zone cluster partner zc1, type the following command to create a private string to store the SSH passphrase of zfsuser1:
# clps create -b zc1:repcom1:local_service_passphrase \
zc1:repcom1:local_service_passphrase
<Enter SSH passphrase for zfsuser1 at prompt>
In the global zone of one node of the zone cluster partner zc2, type the following command to create a private string to store the SSH passphrase of zfsuser2:
# clps create -b zc2:repcom1:local_service_passphrase \
zc2:repcom1:local_service_passphrase
<Enter SSH passphrase for zfsuser2 at prompt>
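The private-string name used in the examples above is just the three parts joined with colons, so it can be assembled mechanically. The following is a minimal sketch (zc1 and repcom1 are hypothetical names, not fixed values):

```shell
# Assemble the private-string name from its three parts.
# zc1 and repcom1 are example names for illustration only.
zonename="zc1"
repcomp="repcom1"
psname=$(printf '%s:%s:local_service_passphrase' "$zonename" "$repcomp")
echo "$psname"
```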
For example, copy the file to the /var/tmp/geo/zfs_snapshot directory.
# cp /opt/ORCLscgrepzfssnap/etc/zfs_snap_geo_config /var/tmp/geo/zfs_snapshot
The following list uses sample values:
PS=paris-newyork
PG=pg1
REPCOMP=repcom1
REPRS=repcom1-repstatus-rs
REPRG=pg1-repstatus-rg
DESC="Protect app1-rg1 using ZFS snapshot replication"
APPRG=app1-rg1
CONFIGFILE=/var/tmp/geo/zfs_snapshot/sbp_conf
LOCAL_REP_USER=zfsuser1
REMOTE_REP_USER=zfsuser2
LOCAL_PRIV_KEY_FILE=
REMOTE_PRIV_KEY_FILE=
LOCAL_ZPOOL_RS=par-app1-hasp1
REMOTE_ZPOOL_RS=ny-app1-hasp1
LOCAL_LH=paris-lh
REMOTE_LH=newyork-lh
LOCAL_DATASET=srcpool1/app1-ds1
REMOTE_DATASET=targpool1/app1-ds1-copy
REPLICATION_INTERVAL=120
NUM_OF_SNAPSHOTS_TO_STORE=2
REPLICATION_STREAM_PACKAGE=false
SEND_PROPERTIES=true
INTERMEDIARY_SNAPSHOTS=false
RECURSIVE=true
MODIFY_PASSPHRASE=false
For more information about the zfs_snap_geo_config file, see Overview of the Oracle Solaris ZFS Snapshot Remote Replication Configuration File.
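Before running the registration script, you can sanity-check the edited file with ordinary shell tools. The sketch below is only an illustration, not a complete validation: the file path is a temporary example, and the key list is a small subset of the properties shown above.

```shell
# Hypothetical quick check that required keys have non-empty values
# in the edited configuration file; path and key list are examples.
conf=/tmp/zfs_snap_geo_config

# Create a minimal sample file for this illustration.
cat > "$conf" <<'EOF'
PS=paris-newyork
PG=pg1
REPCOMP=repcom1
LOCAL_DATASET=srcpool1/app1-ds1
EOF

missing=0
for key in PS PG REPCOMP LOCAL_DATASET; do
    # A key counts as set only if its line has a non-empty value.
    if ! grep -q "^${key}=..*" "$conf"; then
        echo "missing or empty: $key" >&2
        missing=1
    fi
done
[ "$missing" -eq 0 ] && echo "all required keys set"
```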
For example:
paris-node-1# /opt/ORCLscgrepzfssnap/util/zfs_snap_geo_register -f \
/var/tmp/geo/zfs_snapshot/zfs_snap_geo_config
The setup action performed by the zfs_snap_geo_register script creates the following components:
Protection group pg1
Replication component repcom1
Infrastructure resource group pg1-app-rg1-infr-rg
Replication resource group repcom1-snap-rg, which contains the resource repcom1-snap-rs
Replication status resource group pg1-repstatus-rg and replication status resource repcom1-repstatus-rs
For details about an example setup involving resource groups and resources, see Use Cases for Oracle Solaris ZFS Snapshot Replication.
The final messages of the setup script display the exact geopg get command that is required. Log in to one node of the partner cluster and run that command.
For example, where paris-newyork is the partnership name and pg1 is the protection group name:
newyork-node-1# geopg get --partnership paris-newyork pg1
Perform this configuration for each zone cluster partner in a partnership. The names of the resource and resource group are not restricted to any specific format. After configuring the logical hostname resource and resource group, perform the following actions:
Add a strong positive affinity from the logical hostname resource group to the zpool's infrastructure resource group.
Because one infrastructure resource group is configured for each failover application resource group, one logical hostname resource group is required for each such infrastructure resource group.
If an application resource group is scalable, one logical hostname resource group is configured for each of the zpools managed by the application resource group.
Add an offline-restart resource dependency from the logical hostname resource to the zpool's infrastructure storage HAStoragePlus resource.
Ensure that the Auto_start_on_new_cluster property is set to TRUE on the logical hostname resource group. The property is TRUE by default; if it is FALSE, set it to TRUE.
The strong positive affinity from such a logical hostname resource group to the associated Oracle Solaris ZFS snapshot infrastructure resource group is essential: it ensures that the replication logical hostname comes online in the global zone of the same cluster node where the associated ZFS pool is imported by the infrastructure SUNW.HAStoragePlus resource.
For example:
Suppose the local partner is a zone cluster zc1 and local replication hostname is paris-lh. The zpool infrastructure resource group in zc1 is pg1-app-rg1-infr-rg. The storage resource is pg1-srcpool1-stor-rs. Type the following commands in the global zone of one node of zc1:
# clrg create paris-lh-rg
# clrslh create -g paris-lh-rg -h paris-lh paris-lh-rs
# clrg manage paris-lh-rg
# clrg set -p RG_affinities=++zc1:pg1-app-rg1-infr-rg paris-lh-rg
(C538594) WARNING: resource group global:paris-lh-rg has a strong positive affinity on resource group zc1:pg1-app-rg1-infr-rg with Auto_start_on_new_cluster=FALSE; global:paris-lh-rg will be forced to remain offline until its strong affinities are satisfied.
# clrs set -p Resource_dependencies_offline_restart=zc1:pg1-srcpool1-stor-rs paris-lh-rs
# clrg show -p Auto_start_on_new_cluster paris-lh-rg

=== Resource Groups and Resources ===

Resource Group:                paris-lh-rg
  Auto_start_on_new_cluster:   True
If the property is not True, type the following command:

# clrg set -p Auto_start_on_new_cluster=True paris-lh-rg
For example, suppose repcom1-repstatus-rs is the replication status resource name:
paris-node-1# geoadm status
paris-node-1# clresource status repcom1-repstatus-rs
newyork-node-1# geoadm status
newyork-node-1# clresource status repcom1-repstatus-rs
See Also
Troubleshooting
If you experience failures while performing this procedure, enable debugging. See Debugging an Oracle Solaris ZFS Snapshot Protection Group.
Next Steps
For information about activating a protection group, see How to Activate a Protection Group in Oracle Solaris Cluster 4.3 Geographic Edition System Administration Guide.