Oracle Solaris Cluster Geographic Edition Remote Replication Guide for Sun ZFS Storage Appliance, Oracle Solaris Cluster 4.1
1. Configuring and Administering Sun ZFS Storage Appliance Protection Groups
This section contains the following topics:
Strategies for Creating Sun ZFS Storage Appliance Protection Groups
Configuring Remote Replication With Sun ZFS Storage Appliance Software
How to Create and Configure a Sun ZFS Storage Appliance Protection Group
Note - You can create protection groups that are not configured to use remote replication. To create a protection group that does not use a replication subsystem, omit the -d data-replication-type option when you use the geopg command. The geoadm status command shows a state of Degraded for these protection groups.
For more information, see Creating a Protection Group That Does Not Require Data Replication in Oracle Solaris Cluster Geographic Edition System Administration Guide.
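For example, a minimal sketch of creating such a protection group follows. The partnership name zfssa-ps is taken from this guide's sample values, and the protection group name example-pg and the primary role are placeholders to adjust for your configuration. Because the -d option is omitted, no replication subsystem is configured:
# geopg create -s zfssa-ps -o primary example-pg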
The following task map describes the steps to perform:
Table 1-2 Task Map: Creating a Protection Group
This section describes the steps you must perform before you can configure Sun ZFS Storage Appliance remote replication with Geographic Edition software. The following procedures are in this section:
How to Create a Role and Associated User for the Primary and Secondary Appliances
How to Create a Project and Enable Replication for the Project
How to Configure Oracle Solaris Cluster Resources on the Primary Cluster
How to Configure Oracle Solaris Cluster Resources on the Secondary Cluster
If a role and associated user do not yet exist on the source and target appliances, perform this procedure to create them.
Configure the role with the following permissions:
Object nas.*.*.* with permissions clone, destroy, rrsource, and rrtarget.
Object workflow.*.* with permission read.
Ensure that NFS exceptions and LUN settings are identical on the primary and secondary storage appliances. For more information, see Copying and Editing Actions in Sun ZFS Storage 7000 System Administration Guide.
The target groups and initiator groups must use the same names in the replication target as in the source appliance.
This procedure creates Oracle Solaris Cluster resources on the primary cluster for the application to be protected.
Before You Begin
Ensure that the following tasks are completed on the storage appliance:
Replication peers are configured by the storage administrator.
Projects are configured by the storage administrator.
Replication is enabled for the project.
For iSCSI LUNs, if you use nondefault target groups, the target groups and initiator groups used by LUNs within the project also exist on the replication target. In addition, these groups use the same names in the replication target as in the source appliance.
If you use file systems, NFS Exceptions exist for all nodes of both clusters. This ensures that either cluster can access the file systems when that cluster has the primary role.
Specify the LUNs or file systems in the Sun ZFS Storage Appliance to be replicated.
For information about creating device groups, file systems, and ZFS storage pools in a cluster configuration, see Oracle Solaris Cluster System Administration Guide.
This resource manages bringing the Sun ZFS Storage Appliance storage online on both the primary and secondary clusters.
For information about creating an HAStoragePlus or scalable mount-point resource, see Oracle Solaris Cluster Data Services Planning and Administration Guide.
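The following is a minimal sketch of creating a failover resource group with an HAStoragePlus resource for a file system mounted from the appliance. The node prompt phys-primary-1 and the resource name hasp-rs are placeholders, while the resource group name app-resource-group and the mount point /mounts/file-system match sample values used later in this guide; adjust all of them for your configuration:
phys-primary-1# clresourcetype register SUNW.HAStoragePlus
phys-primary-1# clresourcegroup create app-resource-group
phys-primary-1# clresource create -g app-resource-group -t SUNW.HAStoragePlus \
-p FilesystemMountPoints=/mounts/file-system hasp-rs
phys-primary-1# clresourcegroup online -emM app-resource-group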
This procedure creates Oracle Solaris Cluster resources on the secondary cluster for the application to be protected.
Before You Begin
Ensure that the following tasks are completed on the storage appliance:
Replication peers are configured by the storage administrator.
Projects are configured by the storage administrator.
Replication is enabled for the project.
For iSCSI LUNs, if you use nondefault target groups, the target groups and initiator groups used by LUNs within the project also exist on the replication target. In addition, these groups must use the same names in the replication target as in the source appliance.
If you use file systems, NFS Exceptions exist for all nodes of both clusters. This ensures that either cluster can access the file systems when that cluster has the primary role.
This step executes a manual replication to synchronize the two sites.
Specify the LUNs or file systems in the Sun ZFS Storage Appliance to be replicated.
For information about creating device groups, file systems, and ZFS storage pools in a cluster configuration, see Oracle Solaris Cluster System Administration Guide.
This resource manages bringing the Sun ZFS Storage Appliance storage online on both the primary and secondary clusters.
For information about creating an HAStoragePlus or scalable mount-point resource, see Oracle Solaris Cluster Data Services Planning and Administration Guide.
phys-newyork-1# clresourcegroup online -emM app-resource-group
phys-newyork-1# clresourcegroup offline app-resource-group
phys-newyork-1# umount /mounts/file-system
phys-newyork-1# cldevicegroup offline raw-disk-group
Initial configuration on the secondary cluster is now complete.
Before You Begin
Ensure that the following conditions are met:
The Geographic Edition software is installed on the primary and secondary clusters.
You have reviewed the information in Planning and Configuring Remote Replication With Sun ZFS Storage Appliance Software.
You have created a remote replication role and user on each appliance. See How to Create a Role and Associated User for the Primary and Secondary Appliances.
You have created the projects you need. See How to Create a Project and Enable Replication for the Project.
The local cluster is a member of a partnership.
The protection group you are creating does not already exist on either partner cluster.
Perform this procedure from a node of the primary cluster.
For more information about RBAC, see Geographic Edition Software and RBAC in Oracle Solaris Cluster Geographic Edition System Administration Guide.
Note - If you use a role with Geo Management RBAC rights, ensure that the /var/cluster/geo ACLs are correct on each node of both partner clusters. If necessary, assume the root role on the cluster node and set the correct ACLs.
# chmod A+user:username:rwx:allow /var/cluster/geo
The /var/cluster/geo directory must have the correct access control lists (ACL) applied for compatibility between the Geo Management RBAC rights profile and Sun ZFS Storage Appliance software.
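To confirm that the ACL entry is present on a node, you can list the directory's own ACL with the Solaris ls -d and -V options, for example:
# ls -dV /var/cluster/geo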
The /var/tmp/ directory is used as an example location in this step and the next step.
# cp /opt/ORCLscgrepzfssa/etc/zfssa_geo_config /var/tmp/
Update the file so that it contains a single line with the rule information for the replication component.
project-name|any|nodelist
project-name - The name of the project.
nodelist - The names of one or more cluster nodes on which the plug-in validates the configuration.
For example, assuming that the nodes of cluster cluster-newyork are phys-newyork-1 and phys-newyork-2, on each node of cluster cluster-newyork, you would issue the following commands:
phys-newyork-N# mkdir /etc/opt/SUNWscgrepsbp
phys-newyork-N# echo "trancos|any|phys-newyork-1,phys-newyork-2" > /etc/opt/SUNWscgrepsbp/configuration
Assuming that the nodes of cluster paris are phys-paris-3 and phys-paris-4, on each node of cluster paris, you would issue the following commands:
phys-paris-N# mkdir /etc/opt/SUNWscgrepsbp
phys-paris-N# echo "trancos|any|phys-paris-3,phys-paris-4" > /etc/opt/SUNWscgrepsbp/configuration
For more information about configuration files, see configuration_file Property in Oracle Solaris Cluster Geographic Edition System Administration Guide.
The following list uses sample values:
PS=zfssa-ps
PG=zfssa-pg
REPCOMP=trancos
REPRS=zfssa-rep-rs
REPRG=zfssa-rep-rg
DESC="ZFS Storage Appliance replication protection group"
APPRG=usa-rg
CONFIGFILE=/etc/opt/SUNWscgrepsbp/configuration
LOCAL_CONNECT_STRING=user@local-appliance.example.com
REMOTE_CONNECT_STRING=user@remote-appliance.example.com
CLUSTER_DGS=
Note - For the LOCAL_CONNECT_STRING and REMOTE_CONNECT_STRING variables, provide the user that you created in Step 3 of How to Create a Role and Associated User for the Primary and Secondary Appliances.
For more information about the zfssa_geo_config file, see Overview of the Sun ZFS Storage Appliance Configuration File.
For example:
phys-newyork-1# /opt/ORCLscgrepzfssa/util/zfssa_geo_register -f /var/tmp/zfssa_geo_config
The final messages of the registration script outline the required geopg get command. You must log in to one node of the partner cluster and execute that exact command.
For example, where zfssa-ps is the partnership name and zfssa-pg is the protection group name:
phys-newyork-1# geopg get --partnership zfssa-ps zfssa-pg
phys-newyork-1# geoadm status
phys-newyork-1# clresource status zfssa-rep-rs
Specifies the name of the replication resource.
Troubleshooting
If you experience failures while performing this procedure, enable debugging. See Debugging a Sun ZFS Storage Appliance Protection Group.
Before You Begin
Before modifying the configuration of your protection group, ensure that the protection group you want to modify exists locally.
For more information about RBAC, see Geographic Edition Software and RBAC in Oracle Solaris Cluster Geographic Edition System Administration Guide.
Note - If you use a role with Geo Management RBAC rights, ensure that the /var/cluster/geo ACLs are correct on each node of both partner clusters. If necessary, assume the root role on the cluster node and set the correct ACLs.
# chmod A+user:username:rwx:allow /var/cluster/geo
The /var/cluster/geo directory must have the correct access control lists (ACL) applied for compatibility between the Geo Management RBAC rights profile and Sun ZFS Storage Appliance software.
# geopg set-prop -p property [-p…] pg-name
-p property
Specifies a property of the protection group.
For more information about the properties you can set, see Appendix A, Standard Geographic Edition Properties, in Oracle Solaris Cluster Geographic Edition System Administration Guide.
pg-name
Specifies the name of the protection group.
This command modifies the properties of a protection group on all nodes of the local cluster. If the partner cluster contains a protection group of the same name, this command also propagates the new configuration information to the partner cluster.
For information about the properties you can set, see Property Descriptions for Script-Based Plug-Ins in Oracle Solaris Cluster Geographic Edition System Administration Guide.
For information about the names and values that are supported by the Geographic Edition software, see Appendix B, Legal Names and Values of Geographic Edition Entities, in Oracle Solaris Cluster Geographic Edition System Administration Guide.
For more information about the geopg command, refer to the geopg(1M) man page.
Example 1-1 Modifying the Configuration of a Protection Group
The following example modifies the Timeout property of the zfssa-pg protection group.
# geopg set-prop -p Timeout=300 zfssa-pg
Troubleshooting
The geopg set-prop command revalidates the protection group with the new configuration information. If the validation is unsuccessful on the local cluster, the configuration of the protection group is not modified. Otherwise, the configuration status is set to OK on the local cluster.
If the configuration status is OK on the local cluster but the validation is unsuccessful on the partner cluster, the configuration status is set to Error on the partner cluster.
During protection group validation, the Sun ZFS Storage Appliance remote replication layer of the Geographic Edition software validates that the following conditions are met:
The specified device group is a valid Oracle Solaris Cluster device group. The replication layer uses the cldevicegroup list command if the cluster_dgs property is specified, and it also verifies that the device group is of a valid type. An example of checking a device group manually follows this list.
The properties are valid for each Sun ZFS Storage component that has been added to the protection group.
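If validation reports a device group error, you can inspect the device group manually. The following is a quick check, where raw-disk-group stands for whatever device group name is listed in the cluster_dgs property:
# cldevicegroup show raw-disk-group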
Before You Begin
Ensure that the protection group you want to validate exists locally and that the common agent container is online on all nodes of both clusters in the partnership.
For more information about RBAC, see Geographic Edition Software and RBAC in Oracle Solaris Cluster Geographic Edition System Administration Guide.
Note - If you use a role with Geo Management RBAC rights, ensure that the /var/cluster/geo ACLs are correct on each node of both partner clusters. If necessary, assume the root role on the cluster node and set the correct ACLs.
# chmod A+user:username:rwx:allow /var/cluster/geo
The /var/cluster/geo directory must have the correct access control lists (ACL) applied for compatibility between the Geo Management RBAC rights profile and Sun ZFS Storage Appliance software.
Note - This command validates the configuration of the protection group on the local cluster only. To validate the protection group configuration on the partner cluster, run the command again on the partner cluster.
# geopg validate pg-name
Specifies a unique name that identifies a single protection group
Example 1-2 Validating the Configuration of a Protection Group
The following example validates the protection group zfssa-pg.
# geopg validate zfssa-pg
Troubleshooting
If the configuration status of a protection group is displayed as Error in the geoadm status output, you can validate the configuration by using the geopg validate command. This command checks the current state of the protection group and its entities.
If the protection group and its entities are valid, the configuration status of the protection group is set to OK.
If the geopg validate command finds an error in the configuration files, the command displays an error message and the configuration remains in the Error state. Fix the error in the configuration, then rerun the geopg validate command.
If you encounter problems when creating a protection group or replicating a protection group with the geopg get command, you can set the DEBUG property in the /opt/ORCLscgrepzfssa/etc/config file to generate trace logs. These logs are displayed on the terminal.
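For example, the DEBUG entry in that file might look like the following, assuming the file uses simple name=value assignments (check the file's existing format before you edit it):
DEBUG=2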
After a Sun ZFS Storage Appliance replication component is added to the protection group, enable debugging instead by setting the Debug_level property of the Sun ZFS Storage Appliance replication resource with the clresource set command. Debug messages are displayed on the terminal.
# clresource set -p Debug_level=N zfssa-rep-rs
The following values are valid for the DEBUG and Debug_level properties:
0 - No trace. This is the default.
1 - Function trace.
2 - Trace everything.
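For example, to trace everything for the replication resource used in this guide's examples:
# clresource set -p Debug_level=2 zfssa-rep-rs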
Additionally, logs of oscgeo7kcli calls and their results are recorded in /var/cluster/geo/zfssa/replication-component_logfile files on each cluster node.
Before You Begin
Before deleting a protection group, ensure that the following conditions are met:
The protection group you want to delete exists locally.
The protection group is offline on the local cluster.
Note - To keep the application resource groups online while deleting the protection group, you must remove the application resource groups from the protection group.
Perform this procedure from a node in the cluster where you want to delete the protection group.
For more information about RBAC, see Geographic Edition Software and RBAC in Oracle Solaris Cluster Geographic Edition System Administration Guide.
Note - If you use a role with Geo Management RBAC rights, ensure that the /var/cluster/geo ACLs are correct on each node of both partner clusters. If necessary, assume the root role on the cluster node and set the correct ACLs.
# chmod A+user:username:rwx:allow /var/cluster/geo
The /var/cluster/geo directory must have the correct access control lists (ACL) applied for compatibility between the Geo Management RBAC rights profile and Sun ZFS Storage Appliance software.
# geopg delete pg-name
Specifies the name of the protection group
This command deletes the configuration of the protection group from the local cluster. The command also removes the replication resource group for each Sun ZFS Storage Appliance component in the protection group. This command does not alter the replication state of the component.
Example 1-3 Deleting a Sun ZFS Storage Appliance Protection Group
The following example deletes the protection group zfssa-pg from both partner clusters. The protection group is offline on both partner clusters. In this example, cluster-paris is the primary cluster and cluster-newyork is the partner cluster.
# rlogin phys-paris-1 -l root
phys-paris-1# geopg delete zfssa-pg
# rlogin phys-newyork-1 -l root
phys-newyork-1# geopg delete zfssa-pg
Example 1-4 Deleting a Sun ZFS Storage Appliance Protection Group While Keeping Application Resource Groups Online
The following example keeps two application resource groups, apprg1 and apprg2, online while deleting their protection group, zfssa-pg, from both partner clusters. First the application resource groups are removed from the protection group, then the protection group is deleted from the primary cluster cluster-paris and the partner cluster cluster-newyork.
phys-paris-1# geopg remove-resource-group apprg1,apprg2 zfssa-pg
phys-paris-1# geopg stop -e global zfssa-pg
phys-paris-1# geopg delete zfssa-pg
phys-newyork-1# geopg delete zfssa-pg
Troubleshooting
If the deletion is unsuccessful, the configuration status is set to Error. Fix the cause of the error, and rerun the geopg delete command.