Creating, Modifying, Validating, and Deleting a Sun ZFS Storage Appliance Protection Group

This section contains the following topics:

  • Strategies for Creating Sun ZFS Storage Appliance Protection Groups

  • Configuring Remote Replication With Sun ZFS Storage Appliance Software

  • How to Create and Configure a Sun ZFS Storage Appliance Protection Group

  • How to Modify a Sun ZFS Storage Appliance Protection Group

  • Validating a Sun ZFS Storage Appliance Protection Group

  • Debugging a Sun ZFS Storage Appliance Protection Group

  • How to Delete a Sun ZFS Storage Appliance Protection Group


Note - You can create protection groups that are not configured to use remote replication. To create a protection group that does not use a replication subsystem, omit the -d data-replication-type option when you use the geopg command. The geoadm status command shows these protection groups in the Degraded state.

For more information, see Creating a Protection Group That Does Not Require Data Replication in Oracle Solaris Cluster Geographic Edition System Administration Guide.
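For example, the following sketch creates a protection group with no replication subsystem. The partnership name zfssa-ps is the sample name used in this chapter, and example-pg is a hypothetical protection group name:

# geopg create -s zfssa-ps -o primary example-pg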


Strategies for Creating Sun ZFS Storage Appliance Protection Groups

The following task map describes the steps to perform:

Table 1-2 Task Map: Creating a Protection Group

  1. Create a role and user for each storage appliance. Create projects and enable replication. Configure remote replication for both partner clusters.

    See How to Create a Role and Associated User for the Primary and Secondary Appliances and How to Create a Project and Enable Replication for the Project.

  2. Download and install the Sun ZFS Storage Appliance plug-in for Geographic Edition.

    See How to Install the Sun ZFS Storage Appliance Plug-In for Geographic Edition.

  3. Create the protection group from a cluster node.

    See How to Create and Configure a Sun ZFS Storage Appliance Protection Group.

  4. Add the replication component to the protection group.

    See How to Add a Remote Replication Component to a Sun ZFS Storage Appliance Protection Group.

  5. Start the protection group locally.

    See How to Activate a Sun ZFS Storage Appliance Protection Group.

  6. Add the application resource group to the protection group.

    See How to Add an Application Resource Group to a Sun ZFS Storage Appliance Protection Group.

  7. From the secondary cluster, retrieve the protection group configuration.

    See How to Replicate the Sun ZFS Storage Appliance Protection Group Configuration to a Partner Cluster.

  8. From the secondary cluster, activate the protection group locally.

    See How to Activate a Sun ZFS Storage Appliance Protection Group.
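At the command level, tasks 5 through 8 correspond to geopg operations such as the following sketch, which reuses the sample names from this chapter (protection group zfssa-pg, partnership zfssa-ps, application resource group usa-rg). The node prompts and cluster roles are assumptions for illustration only:

phys-newyork-1# geopg start -e local zfssa-pg
phys-newyork-1# geopg add-resource-group usa-rg zfssa-pg
phys-paris-3# geopg get --partnership zfssa-ps zfssa-pg
phys-paris-3# geopg start -e local zfssa-pg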

Configuring Remote Replication With Sun ZFS Storage Appliance Software

This section describes the steps you must perform before you can configure Sun ZFS Storage Appliance remote replication with Geographic Edition software. The following procedures are in this section:

  • How to Create a Role and Associated User for the Primary and Secondary Appliances

  • How to Create a Project and Enable Replication for the Project

  • How to Configure Oracle Solaris Cluster Resources on the Primary Cluster

  • How to Configure Oracle Solaris Cluster Resources on the Secondary Cluster

  • How to Install the Sun ZFS Storage Appliance Plug-In for Geographic Edition

How to Create a Role and Associated User for the Primary and Secondary Appliances

If a role and associated user do not yet exist on the source and target appliances, perform this procedure to create them.

  1. Log in to the Sun ZFS Storage appliance.
  2. Create a role for remote replication.

    Configure the role with the following permissions:

    • Object nas.*.*.* with permissions clone, destroy, rrsource, and rrtarget.

    • Object workflow.*.* with permission read.

  3. Create a user for replication that is associated with the role you created in Step 2.

How to Create a Project and Enable Replication for the Project

  1. Log in to the Sun ZFS Storage appliance on the primary cluster-paris site.
  2. Navigate to Shares > Projects and create the projects that you need for your application.
  3. In each project, create the file systems and LUNs that you need for your application.

    Ensure that NFS exceptions and LUN settings are identical on the primary and secondary storage appliances. For more information, see Copying and Editing Actions in Sun ZFS Storage 7000 System Administration Guide.

  4. For iSCSI LUNs, if you use nondefault targets and target groups, ensure that target groups and initiator groups used by LUNs within the project also exist on the replication target.

    These groups must use the same name in the replication target as in the source appliance.

  5. For each project, navigate to Replication, create an action, and enable the action with continuous mode.

How to Configure Oracle Solaris Cluster Resources on the Primary Cluster

This procedure creates Oracle Solaris Cluster resources on the primary cluster for the application to be protected.

Before You Begin

Ensure that the following tasks are completed on the storage appliance:

  • A role and associated user for replication exist, as described in How to Create a Role and Associated User for the Primary and Secondary Appliances.

  • The projects are created and replication is enabled for them, as described in How to Create a Project and Enable Replication for the Project.

  1. Create the Oracle Solaris Cluster device groups, file systems, or ZFS storage pools you want to use.

    Specify the LUNs or file systems in the Sun ZFS Storage appliance to be replicated.

    If you create a ZFS storage pool, observe the following requirements and restrictions:

    • Ensure that the zpool version on the cluster where you create the zpool is supported by the Oracle Solaris OS version of the partner cluster nodes. This is necessary so that the zpool can be imported by the partner cluster nodes when that cluster becomes primary. You can do this by setting the zpool version to the default zpool version of the cluster that is running the earlier version of Oracle Solaris software.

    • Mirrored and unmirrored ZFS storage pools are supported.

    • ZFS storage pool spares are not supported with storage-based replication in a Geographic Edition configuration. The information about the spare that is stored in the storage pool makes the storage pool incompatible with the remote system after it has been replicated.

    • ZFS can be used with either synchronous or asynchronous mode. If you use asynchronous mode, ensure that the remote replication is configured to preserve write ordering, even after a rolling failure.

    For information about creating device groups, file systems, and ZFS storage pools in a cluster configuration, see Oracle Solaris Cluster System Administration Guide.

  2. Create an HAStoragePlus resource or a scalable mount-point resource for the device group, file system, or ZFS storage pool you use.

    This resource manages bringing online the Sun ZFS Storage Appliance storage on both the primary and secondary clusters.

    For information about creating an HAStoragePlus or scalable mount-point resource, see Oracle Solaris Cluster Data Services Planning and Administration Guide.
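The following is a minimal sketch of the two preceding steps for the case of a ZFS storage pool. The pool name app-pool, the resource group app-rg, the resource app-hasp-rs, and the LUN device name are hypothetical placeholders:

phys-newyork-1# zpool create app-pool c0t600144F0ABCDEF01d0
phys-newyork-1# clresourcetype register SUNW.HAStoragePlus
phys-newyork-1# clresourcegroup create app-rg
phys-newyork-1# clresource create -g app-rg -t SUNW.HAStoragePlus \
-p Zpools=app-pool app-hasp-rs

If the partner cluster runs an older Oracle Solaris release, you can pin the pool to an older on-disk format by adding the -o version=N option to the zpool create command.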

How to Configure Oracle Solaris Cluster Resources on the Secondary Cluster

This procedure creates Oracle Solaris Cluster resources on the secondary cluster for the application to be protected.

Before You Begin

Ensure that the following tasks are completed on the storage appliance:

  • A role and associated user for replication exist, as described in How to Create a Role and Associated User for the Primary and Secondary Appliances.

  • The projects are created and replication is enabled for them, as described in How to Create a Project and Enable Replication for the Project.

Also ensure that the Oracle Solaris Cluster resources are configured on the primary cluster, as described in How to Configure Oracle Solaris Cluster Resources on the Primary Cluster.

  1. On the cluster-paris (primary) site, access the Sun ZFS Storage Appliance browser user interface (BUI).
  2. Navigate to Shares > Projects and select the project being replicated.
  3. Select Replication for the project and click Update Now.

    This performs a manual replication update to synchronize the two sites.

  4. On the cluster-newyork (partner) site, access the appliance BUI.
  5. Navigate to Projects > Replica and select the project being replicated.
  6. Select Replication for the project and click the Reverse the Direction of Replication icon.

    Replication is reversed.

  7. Create the Oracle Solaris Cluster device groups, file systems, or ZFS storage pools you want to use.

    Specify the LUNs or file systems in the Sun ZFS Storage appliance to be replicated.

    If you create a ZFS storage pool, observe the following requirements and restrictions:

    • Ensure that the zpool version on the cluster where you create the zpool is supported by the Oracle Solaris OS version of the partner cluster nodes. This is necessary so that the zpool can be imported by the partner cluster nodes when that cluster becomes primary. You can do this by creating the zpool on the cluster that runs the lowest version of Oracle Solaris software.

    • Mirrored and unmirrored ZFS storage pools are supported.

    • ZFS storage pool spares are not supported with storage-based replication in a Geographic Edition configuration. The information about the spare that is stored in the storage pool makes the storage pool incompatible with the remote system after it has been replicated.

    • ZFS can be used with either synchronous or asynchronous mode. If you use asynchronous mode, ensure that the remote replication is configured to preserve write ordering, even after a rolling failure.

    For information about creating device groups, file systems, and ZFS storage pools in a cluster configuration, see Oracle Solaris Cluster System Administration Guide.

  8. Create an HAStoragePlus resource or a scalable mount-point resource for the device group, file system, or ZFS storage pool you use.

    This resource manages bringing online the Sun ZFS Storage Appliance storage on both the primary and secondary clusters.

    For information about creating an HAStoragePlus or scalable mount-point resource, see Oracle Solaris Cluster Data Services Planning and Administration Guide.

  9. Confirm that the application resource group is correctly configured by bringing it online and taking it offline again.
    phys-newyork-1# clresourcegroup online -emM app-resource-group
    phys-newyork-1# clresourcegroup offline app-resource-group
  10. If you created a file system and it is mounted, unmount the file system.
    phys-newyork-1# umount /mounts/file-system
  11. If the Oracle Solaris Cluster device group is online, take it offline.
    phys-newyork-1# cldevicegroup offline raw-disk-group
  12. Reverse the replication on the primary site.
    1. Access the appliance BUI on the cluster-paris site.
    2. Navigate to Projects > Replica and select the project being replicated.
    3. Select Replication for the project and click the Reverse the Direction of Replication icon.

How to Install the Sun ZFS Storage Appliance Plug-In for Geographic Edition

Perform this procedure on all nodes of both clusters in the partnership.

  1. In a web browser, go to the Oracle ZFS Storage Appliance Plugin Download site, http://www.oracle.com/technetwork/server-storage/sun-unified-storage/downloads/zfssa-plugins-1489830.html.
  2. Click the Accept License Agreement button.
  3. Click the Download link for the latest Oracle Solaris Cluster Geographic Edition Plugin for Solaris 10.

    The zip file containing the ORCLscgezfssacli package is downloaded. Unzip the file to extract the package.

  4. As the root role, on all nodes of the global cluster, navigate to the directory containing the extracted ORCLscgezfssacli package and install it.

    Perform the installation in the global zone.

    # pkgadd -d . ORCLscgezfssacli
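    To confirm the installation on each node, you can query the SVR4 package database; this is an optional check, not a step from the original procedure:

    # pkginfo -l ORCLscgezfssacli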

How to Create and Configure a Sun ZFS Storage Appliance Protection Group

Before You Begin

Ensure that the following conditions are met:

  • The partnership between the two clusters is configured.

  • Remote replication is configured on both appliances, as described in Configuring Remote Replication With Sun ZFS Storage Appliance Software.

  • The Sun ZFS Storage Appliance plug-in for Geographic Edition is installed on all nodes of both clusters, as described in How to Install the Sun ZFS Storage Appliance Plug-In for Geographic Edition.

Perform this procedure from a node of the primary cluster.

  1. Assume the root role or assume a role that is assigned the Geo Management RBAC rights profile.

    For more information about RBAC, see Geographic Edition Software and RBAC in Oracle Solaris Cluster Geographic Edition System Administration Guide.


    Note - If you use a role with Geo Management RBAC rights, ensure that the /var/cluster/geo ACLs are correct on each node of both partner clusters. If necessary, assume the root role on the cluster node and set the correct ACLs.

    # chmod A+user:username:rwx:allow /var/cluster/geo

    The /var/cluster/geo directory must have the correct access control lists (ACL) applied for compatibility between the Geo Management RBAC rights profile and Sun ZFS Storage Appliance software.
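    To inspect the ACL entries that are currently applied to the directory, you can list it with the ls command, for example:

    # ls -dv /var/cluster/geo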


  2. Copy the default zfssa_geo_config file to another location.

    The /var/tmp/ directory is used as an example location in this step and the next step.

    # cp /opt/ORCLscgrepzfssa/etc/zfssa_geo_config /var/tmp/
  3. On all nodes of both clusters, create or update an /etc/opt/SUNWscgrepsbp/configuration file to contain the script-based plug-in evaluation rules.

    Update the file so that it contains one line with the rule information for the replication component, in the following format:

    project-name|any|nodelist
    project-name

    Name of the project.

    nodelist

    The name of one or more cluster nodes where the plug-in is to validate the configuration.

    For example, assuming that the nodes of cluster cluster-newyork are phys-newyork-1 and phys-newyork-2, on each node of cluster cluster-newyork, you would issue the following commands:

    phys-newyork-N# mkdir /etc/opt/SUNWscgrepsbp
    phys-newyork-N# echo "trancos|any|phys-newyork-1,phys-newyork-2" > /etc/opt/SUNWscgrepsbp/configuration

    Assuming that the nodes of cluster cluster-paris are phys-paris-3 and phys-paris-4, on each node of cluster cluster-paris, you would issue the following commands:

    phys-paris-N# mkdir /etc/opt/SUNWscgrepsbp
    phys-paris-N# echo "trancos|any|phys-paris-3,phys-paris-4" > /etc/opt/SUNWscgrepsbp/configuration

    For more information about configuration files, see configuration_file Property in Oracle Solaris Cluster Geographic Edition System Administration Guide.

  4. Specify the configuration values in the temporary /var/tmp/zfssa_geo_config file.

    The following list uses sample values:

    PS=zfssa-ps
    PG=zfssa-pg
    REPCOMP=trancos
    REPRS=zfssa-rep-rs
    REPRG=zfssa-rep-rg
    DESC="ZFS Storage Appliance replication protection group"
    APPRG=usa-rg
    CONFIGFILE=/etc/opt/SUNWscgrepsbp/configuration
    LOCAL_CONNECT_STRING=user@local-appliance.example.com
    REMOTE_CONNECT_STRING=user@remote-appliance.example.com
    CLUSTER_DGS=

    Note - For the LOCAL_CONNECT_STRING and REMOTE_CONNECT_STRING variables, provide the user that you created in Step 3 of How to Create a Role and Associated User for the Primary and Secondary Appliances.


    For more information about the zfssa_geo_config file, see Overview of the Sun ZFS Storage Appliance Configuration File.

  5. Execute the zfssa_geo_register script on the primary cluster.

    For example:

    phys-newyork-1# /opt/ORCLscgrepzfssa/util/zfssa_geo_register -f /var/tmp/zfssa_geo_config
  6. Replicate the protection group to the partner cluster.

    The final messages of the registration script display the exact geopg get command that is required. Log in to one node of the partner cluster and execute that command.

    For example, where zfssa-ps is the partnership name and zfssa-pg is the protection group name:

    phys-newyork-1# geopg get --partnership zfssa-ps zfssa-pg
  7. Verify the protection group configuration.
    phys-newyork-1# geoadm status
    phys-newyork-1# clresource status zfssa-rep-rs
    zfssa-rep-rs

    Specifies the name of the replication resource.

  8. Verify that you can switch over from one cluster to the other.

    See How to Switch Over Sun ZFS Storage Appliance Remote Replication From the Primary Cluster to the Secondary Cluster.
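    A minimal sketch of this switchover test follows, assuming that cluster-newyork is currently the primary cluster and that cluster-paris is to become the new primary; see the referenced procedure for the complete steps and prerequisites:

    phys-newyork-1# geopg switchover -m cluster-paris zfssa-pg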

Troubleshooting

If you experience failures while performing this procedure, enable debugging. See Debugging a Sun ZFS Storage Appliance Protection Group.

How to Modify a Sun ZFS Storage Appliance Protection Group

Before You Begin

Before modifying the configuration of your protection group, ensure that the protection group you want to modify exists locally.

  1. Assume the root role or assume a role that is assigned the Geo Management RBAC rights profile.

    For more information about RBAC, see Geographic Edition Software and RBAC in Oracle Solaris Cluster Geographic Edition System Administration Guide.


    Note - If you use a role with Geo Management RBAC rights, ensure that the /var/cluster/geo ACLs are correct on each node of both partner clusters. If necessary, assume the root role on the cluster node and set the correct ACLs.

    # chmod A+user:username:rwx:allow /var/cluster/geo

    The /var/cluster/geo directory must have the correct access control lists (ACL) applied for compatibility between the Geo Management RBAC rights profile and Sun ZFS Storage Appliance software.


  2. Modify the configuration of the protection group.
    # geopg set-prop -p property [-p…] pg-name
    -p property

    Specifies a property of the protection group.

    For more information about the properties you can set, see Appendix A, Standard Geographic Edition Properties, in Oracle Solaris Cluster Geographic Edition System Administration Guide.

    pg-name

    Specifies the name of the protection group.

    This command modifies the properties of a protection group on all nodes of the local cluster. If the partner cluster contains a protection group of the same name, this command also propagates the new configuration information to the partner cluster.

    For information about the properties you can set, see Property Descriptions for Script-Based Plug-Ins in Oracle Solaris Cluster Geographic Edition System Administration Guide.

    For information about the names and values that are supported by the Geographic Edition software, see Appendix B, Legal Names and Values of Geographic Edition Entities, in Oracle Solaris Cluster Geographic Edition System Administration Guide.

    For more information about the geopg command, refer to the geopg(1M) man page.

Example 1-1 Modifying the Configuration of a Protection Group

The following example modifies the Timeout property of the zfssa-pg protection group.

# geopg set-prop -p Timeout=300 zfssa-pg

Troubleshooting

The geopg set-prop command revalidates the protection group with the new configuration information. If the validation is unsuccessful on the local cluster, the configuration of the protection group is not modified. Otherwise, the configuration status is set to OK on the local cluster.

If the configuration status is OK on the local cluster but the validation is unsuccessful on the partner cluster, the configuration status is set to Error on the partner cluster.

Validating a Sun ZFS Storage Appliance Protection Group

During protection group validation, the Sun ZFS Storage Appliance remote replication layer of the Geographic Edition software validates that the application resource groups and the remote replication entities are configured correctly.

How to Validate a Sun ZFS Storage Appliance Protection Group

Before You Begin

Ensure that the protection group you want to validate exists locally and that the common agent container is online on all nodes of both clusters in the partnership.
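You can check the common agent container on each node with the cacaoadm utility, for example:

# cacaoadm status

If the common agent container is not running, start it with the cacaoadm start command.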

  1. Assume the root role or assume a role that is assigned the Geo Management RBAC rights profile.

    For more information about RBAC, see Geographic Edition Software and RBAC in Oracle Solaris Cluster Geographic Edition System Administration Guide.


    Note - If you use a role with Geo Management RBAC rights, ensure that the /var/cluster/geo ACLs are correct on each node of both partner clusters. If necessary, assume the root role on the cluster node and set the correct ACLs.

    # chmod A+user:username:rwx:allow /var/cluster/geo

    The /var/cluster/geo directory must have the correct access control lists (ACL) applied for compatibility between the Geo Management RBAC rights profile and Sun ZFS Storage Appliance software.


  2. Validate the configuration of the protection group.

    Note - This command validates the configuration of the protection group on the local cluster only. To validate the protection group configuration on the partner cluster, run the command again on the partner cluster.


    # geopg validate pg-name 
    pg-name

    Specifies a unique name that identifies a single protection group.

Example 1-2 Validating the Configuration of a Protection Group

The following example validates the protection group zfssa-pg.

# geopg validate zfssa-pg

Troubleshooting

If the configuration status of a protection group is displayed as Error in the geoadm status output, you can validate the configuration by using the geopg validate command. This command checks the current state of the protection group and its entities.

Debugging a Sun ZFS Storage Appliance Protection Group

If you encounter problems when creating a protection group or replicating a protection group with the geopg get command, you can set the DEBUG property in the /opt/ORCLscgrepzfssa/etc/config file to enable trace logging. The trace messages are displayed on the terminal.
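For example, assuming that the config file uses the same NAME=value format as the zfssa_geo_config file, a line such as the following enables full tracing:

DEBUG=2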

After a Sun ZFS Storage Appliance replication component is added to the protection group, you instead enable debugging by setting the Debug_level property of the Sun ZFS Storage Appliance replication resource with the clresource set command. Debug messages are displayed on the terminal.

# clresource set -p Debug_level=N zfssa-rep-rs

The following values are valid for the DEBUG and Debug_level properties:

0

No trace. This is the default.

1

Function trace.

2

Trace everything.

Additionally, logs of oscgeo7kcli calls and their results are recorded in /var/cluster/geo/zfssa/replication-component_logfile files on each cluster node.

How to Delete a Sun ZFS Storage Appliance Protection Group

Before You Begin

Before deleting a protection group, ensure that the following conditions are met:


Note - To keep the application resource groups online while deleting the protection group, you must first remove the application resource groups from the protection group.


Perform this procedure from a node in the cluster where you want to delete the protection group.

  1. Assume the root role or assume a role that is assigned the Geo Management RBAC rights profile.

    For more information about RBAC, see Geographic Edition Software and RBAC in Oracle Solaris Cluster Geographic Edition System Administration Guide.


    Note - If you use a role with Geo Management RBAC rights, ensure that the /var/cluster/geo ACLs are correct on each node of both partner clusters. If necessary, assume the root role on the cluster node and set the correct ACLs.

    # chmod A+user:username:rwx:allow /var/cluster/geo

    The /var/cluster/geo directory must have the correct access control lists (ACL) applied for compatibility between the Geo Management RBAC rights profile and Sun ZFS Storage Appliance software.


  2. Delete the protection group from the local cluster.
    # geopg delete pg-name
    pg-name

    Specifies the name of the protection group.

    This command deletes the configuration of the protection group from the local cluster. The command also removes the replication resource group for each Sun ZFS Storage Appliance component in the protection group. This command does not alter the replication state of the component.

  3. To delete the protection group on the secondary cluster, repeat Step 1 and Step 2 on cluster-newyork.

Example 1-3 Deleting a Sun ZFS Storage Appliance Protection Group

The following example deletes the protection group zfssa-pg from both partner clusters. The protection group is offline on both partner clusters. In this example, cluster-paris is the primary cluster and cluster-newyork is the partner cluster.

# rlogin phys-paris-1 -l root
phys-paris-1# geopg delete zfssa-pg
# rlogin phys-newyork-1 -l root
phys-newyork-1# geopg delete zfssa-pg

Example 1-4 Deleting a Sun ZFS Storage Appliance Protection Group While Keeping Application Resource Groups Online

The following example keeps two application resource groups, apprg1 and apprg2, online while deleting their protection group, zfssa-pg, from both partner clusters. The application resource groups are first removed from the protection group; the protection group is then deleted from the primary cluster cluster-paris and from the partner cluster cluster-newyork.

phys-paris-1# geopg remove-resource-group apprg1,apprg2 zfssa-pg
phys-paris-1# geopg stop -e global zfssa-pg 
phys-paris-1# geopg delete zfssa-pg
phys-newyork-1# geopg delete zfssa-pg

Troubleshooting

If the deletion is unsuccessful, the configuration status is set to Error. Fix the cause of the error, and rerun the geopg delete command.