This chapter contains the procedures for configuring and administering data replication with Hitachi TrueCopy software. The chapter contains the following sections:
Creating, Modifying, Validating, and Deleting a Hitachi TrueCopy Protection Group
Administering Hitachi TrueCopy Data Replication Device Groups
Replicating the Hitachi TrueCopy Protection Group Configuration to a Secondary Cluster
Checking the Runtime Status of Hitachi TrueCopy Data Replication
Before you begin creating protection groups, consider the following strategies:
Taking the application offline before creating the protection group.
This strategy is the most straightforward because you use a single command to create the protection group on one cluster, retrieve the information on the other cluster, and start the protection group. However, because the protection group is not brought online until the end of the process, you must take the application resource group offline to add it to the protection group.
Creating the protection group while the application remains online.
While this strategy allows you to create a protection group without any application outage, it requires issuing more commands.
The following sections describe the steps for each strategy.
To create a protection group while the application resource group is offline, complete the following steps.
Create the protection group from a cluster node.
For more information, see How to Create and Configure a Hitachi TrueCopy Protection Group That Does Not Use Oracle Real Application Clusters or How to Create a Protection Group for Oracle Real Application Clusters.
Add the data replication device group to the protection group.
For more information, see How to Add a Data Replication Device Group to a Hitachi TrueCopy Protection Group.
Take the application resource group offline.
Add the application resource group to the protection group.
For more information, see How to Add an Application Resource Group to a Hitachi TrueCopy Protection Group.
On the other cluster, retrieve the protection group configuration.
For more information, see How to Replicate the Hitachi TrueCopy Protection Group Configuration to a Secondary Cluster.
From either cluster, start the protection group globally.
For more information, see How to Activate a Hitachi TrueCopy Protection Group.
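The steps above can be sketched as a single command sequence. The following dry-run script only echoes each command, because geopg and clresourcegroup require a live Sun Cluster Geographic Edition node; the protection group, device group, partnership, and resource group names (tcpg, tcdg, paris-newyork-ps, apprg1) are the illustrative names used in the examples later in this chapter.

```shell
#!/bin/sh
# Dry-run sketch of the offline-application strategy. Replace the echo
# wrapper with direct execution on the appropriate cluster nodes.
run() { echo "+ $*"; }

# Steps 1-4, on a node of the primary cluster:
run geopg create -d truecopy -o Primary -s paris-newyork-ps tcpg
run geopg add-device-group -p fence_level=async tcdg tcpg
run clresourcegroup offline apprg1            # take the application offline
run geopg add-resource-group apprg1 tcpg

# Step 5, on a node of the partner cluster:
run geopg get -s paris-newyork-ps tcpg

# Step 6, from either cluster, start the protection group globally:
run geopg start -e global tcpg
```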
To add an existing application resource group to a new protection group without taking the application offline, complete the following steps on the cluster where the application resource group is online.
Create the protection group from a cluster node.
For more information, see How to Create and Configure a Hitachi TrueCopy Protection Group That Does Not Use Oracle Real Application Clusters or How to Create a Protection Group for Oracle Real Application Clusters.
Add the data replication device group to the protection group.
For more information, see How to Add a Data Replication Device Group to a Hitachi TrueCopy Protection Group.
Start the protection group locally.
For more information, see How to Activate a Hitachi TrueCopy Protection Group.
Add the application resource group to the protection group.
For more information, see How to Add an Application Resource Group to a Hitachi TrueCopy Protection Group.
Complete the following steps on the other cluster.
Retrieve the protection group configuration.
For more information, see How to Replicate the Hitachi TrueCopy Protection Group Configuration to a Secondary Cluster.
Activate the protection group locally.
For more information, see How to Activate a Hitachi TrueCopy Protection Group.
This example creates a protection group without taking the application offline.
In this example, the apprg1 resource group is online on the cluster-paris cluster.
Create the protection group on cluster-paris.
phys-paris-1# geopg create -d truecopy -p Nodelist=phys-paris-1,phys-paris-2 \
-o Primary -s paris-newyork-ps tcpg
Protection group "tcpg" has been successfully created
Add the device group, tcdg, to the protection group.
phys-paris-1# geopg add-device-group -p fence_level=async tcdg tcpg
Activate the protection group locally.
phys-paris-1# geopg start -e local tcpg
Processing operation.... this may take a while....
Protection group "tcpg" successfully started.
Add to the protection group an application resource group that is already online.
phys-paris-1# geopg add-resource-group apprg1 tcpg
Following resource groups were successfully inserted: "apprg1"
Verify that the application resource group was added successfully.
phys-paris-1# geoadm status
Cluster: cluster-paris

Partnership "paris-newyork-ps"   : OK
  Partner clusters               : newyork
  Synchronization                : OK
  ICRM Connection                : OK

  Heartbeat "hb_cluster-paris~cluster-newyork" monitoring \
      "paris-newyork-ps": OK
    Plug-in "ping-plugin"        : Inactive
    Plug-in "tcp_udp_plugin"     : OK

Protection group "tcpg"          : Degraded
  Partnership                    : paris-newyork-ps
  Synchronization                : OK

  Cluster cluster-paris          : Degraded
    Role                         : Primary
    Configuration                : OK
    Data replication             : Degraded
    Resource groups              : OK

  Cluster cluster-newyork        : Unknown
    Role                         : Unknown
    Configuration                : Unknown
    Data Replication             : Unknown
    Resource Groups              : Unknown
On a node of the partner cluster, retrieve the protection group.
phys-newyork-1# geopg get -s paris-newyork-ps tcpg
Protection group "tcpg" has been successfully created.
Activate the protection group locally on the partner cluster.
phys-newyork-1# geopg start -e local tcpg
Processing operation.... this may take a while....
Protection group "tcpg" successfully started.
Verify that the protection group was successfully created and activated.
Running the geoadm status command on cluster-paris produces the following output:
phys-paris-1# geoadm status
Cluster: cluster-paris

Partnership "paris-newyork-ps"   : OK
  Partner clusters               : newyork
  Synchronization                : OK
  ICRM Connection                : OK

  Heartbeat "hb_cluster-paris~cluster-newyork" monitoring \
      "paris-newyork-ps": OK
    Plug-in "ping-plugin"        : Inactive
    Plug-in "tcp_udp_plugin"     : OK

Protection group "tcpg"          : Degraded
  Partnership                    : paris-newyork-ps
  Synchronization                : OK

  Cluster cluster-paris          : Degraded
    Role                         : Primary
    Configuration                : OK
    Data replication             : Degraded
    Resource groups              : OK

  Cluster cluster-newyork        : Degraded
    Role                         : Secondary
    Configuration                : OK
    Data Replication             : Degraded
    Resource Groups              : OK
This section contains procedures for the following tasks:
Requirements to Support Oracle Real Application Clusters With Data Replication Software
How to Create a Protection Group for Oracle Real Application Clusters
How the Data Replication Subsystem Validates the Device Group
You can create protection groups that are not configured to use data replication. To create a protection group that does not use a data replication subsystem, omit the -d datareplicationtype option from the geopg create command. For such protection groups, the geoadm status command reports a state of Degraded.
For more information, see Creating a Protection Group That Does Not Require Data Replication in Sun Cluster Geographic Edition System Administration Guide.
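As a sketch, creating such a protection group simply omits the -d option. The partnership and protection group names below (paris-newyork-ps, nopg) are illustrative, and the command is echoed rather than executed because geopg requires a Geographic Edition cluster node.

```shell
# Dry-run sketch: a protection group with no data replication subsystem.
run() { echo "+ $*"; }

# Omitting -d datareplicationtype creates a protection group that does not
# manage replication; geoadm status then reports its state as Degraded.
run geopg create -s paris-newyork-ps -o primary nopg
```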
Use the steps in this task to create and configure a Hitachi TrueCopy protection group. If you want to use Oracle Real Application Clusters, see How to Create a Protection Group for Oracle Real Application Clusters.
Before you create a protection group, ensure that the following conditions are met:
The local cluster is a member of a partnership.
The protection group you are creating does not already exist.
Protection group names are unique in the global Sun Cluster Geographic Edition namespace. You cannot use the same protection group name in two partnerships on the same system.
You can also replicate the existing configuration of a protection group from a remote cluster to the local cluster. For more information, see Replicating the Hitachi TrueCopy Protection Group Configuration to a Secondary Cluster.
Log in to a cluster node.
You must be assigned the Geo Management RBAC rights profile to complete this procedure. For more information about RBAC, see Sun Cluster Geographic Edition Software and RBAC in Sun Cluster Geographic Edition System Administration Guide.
Create a new protection group by using the geopg create command.
This command creates a protection group on all nodes of the local cluster.
# geopg create -s partnershipname -o localrole -d truecopy [-p property [-p…]] \
protectiongroupname
Specifies the name of the partnership.
Specifies the role of this protection group on the local cluster as either primary or secondary.
Specifies that the protection group data is replicated by the Hitachi TrueCopy software.
Specifies the properties of the protection group.
You can specify the following properties:
Description – Describes the protection group.
Timeout – Specifies the timeout period for the protection group in seconds.
Nodelist – Lists the host names of the machines that can be primary for the replication subsystem.
Cluster_dgs – Lists the device groups where the data is written.
For more information about the properties you can set, see Appendix A, Standard Sun Cluster Geographic Edition Properties, in Sun Cluster Geographic Edition System Administration Guide.
Specifies the name of the protection group.
For information about the names and values that are supported by Sun Cluster Geographic Edition software, see Appendix B, Legal Names and Values of Sun Cluster Geographic Edition Entities, in Sun Cluster Geographic Edition System Administration Guide.
For more information about the geopg command, refer to the geopg(1M) man page.
This example creates a Hitachi TrueCopy protection group on cluster-paris, which is set as the primary cluster.
# geopg create -s paris-newyork-ps -o primary -d truecopy \
-p Nodelist=phys-paris-1,phys-paris-2 tcpg
This example creates a Hitachi TrueCopy protection group, tcpg, for an application resource group, resourcegroup1, that is currently online on cluster-newyork.
Create the protection group without the application resource group.
# geopg create -s paris-newyork-ps -o primary -d truecopy \
-p Nodelist=phys-paris-1,phys-paris-2 tcpg
Activate the protection group.
# geopg start -e local tcpg
Add the application resource group.
# geopg add-resource-group resourcegroup1 tcpg
Sun Cluster Geographic Edition software supports Oracle Real Application Clusters with Hitachi TrueCopy software. Observe the following requirements when you configure Oracle Real Application Clusters:
Each CRS OCR and Voting Disk Location must be in its own device group on each cluster and cannot be replicated.
Static data, such as CRS and database binaries, is not required to be replicated. However, this data must be accessible from all nodes of both clusters.
You must create a SUNW.ScalDeviceGroup resource in its own resource group for the device group that holds dynamic database files. This resource group must be separate from the resource group that holds the clusterware SUNW.ScalDeviceGroup resource.
To be able to leave RAC infrastructure resource groups outside of Sun Cluster Geographic Edition control, you must run Sun Cluster Geographic Edition binaries on both cluster partners and set the RAC protection group External_Dependency_Allowed property to true.
Do not add the CRS OCR and Voting Disk device group to the protection group's cluster_dgs property.
Do not add RAC infrastructure resource groups to the protection group. Add only the rac_server_proxy resource group and the resource groups for device groups that are replicated. Also, you must set the auto_start_on_new_cluster resource group property to false for the rac_server_proxy resource group and for the resource groups for replicated device groups.
When you use a cluster file system for an Oracle RAC file system, such as a flash recovery area or alert and trace log files, you must manually create, on both clusters, a separate resource group that uses an HAStoragePlus resource to bring these file systems online. You must set a strong resource dependency from each non-Clusterware SUNW.ScalDeviceGroup resource to this HAStoragePlus resource, and then add this HAStoragePlus resource group to the RAC protection group.
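One of the requirements above, setting auto_start_on_new_cluster to false, can be applied with the clresourcegroup command. The following dry-run sketch echoes the commands rather than executing them; the resource group names (rac_server_proxy-rg, hasp4rac-rg, scaldbdg-rg) are the illustrative ones used later in this chapter.

```shell
# Dry-run sketch: set Auto_start_on_new_cluster=False on each resource group
# that will be placed under protection group control (illustrative names).
run() { echo "+ $*"; }

for rg in rac_server_proxy-rg hasp4rac-rg scaldbdg-rg; do
  run clresourcegroup set -p Auto_start_on_new_cluster=False "$rg"
done
```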
Before you create a protection group for Oracle Real Application Clusters (RAC), ensure that the following conditions are met:
Read Requirements to Support Oracle Real Application Clusters With Data Replication Software.
The node list of the protection group must be the same as the node list of the RAC framework resource group.
If one cluster is running RAC on a different number of nodes than another cluster, ensure that all nodes on both clusters have the same resource groups defined.
If you are using the VERITAS Volume Manager cluster feature to manage data, you must specify the cluster feature disk group and Sun Cluster device groups for other data volumes in the cluster_dgs property.
When a cluster and the VERITAS Volume Manager cluster feature software restart, the RAC framework automatically tries to import all cluster feature device groups that were imported before the cluster went down. Therefore, the attempt to import the device groups to the original primary fails.
Log in to a cluster node on the primary cluster.
You must be assigned the Geo Management RBAC rights profile to complete this procedure. For more information about RBAC, see Sun Cluster Geographic Edition Software and RBAC in Sun Cluster Geographic Edition System Administration Guide.
Create a new protection group by using the geopg create command.
This command creates a protection group on all nodes of the local cluster.
# geopg create -s partnershipname -o localrole -d truecopy \
-p External_Dependency_Allowed=true [-p property [-p…]] protectiongroupname
Specifies the name of the partnership.
Specifies the role of this protection group on the local cluster as primary.
Specifies that the protection group data is replicated by the Hitachi TrueCopy software.
Specifies the properties of the protection group.
You can specify the following properties:
Description – Describes the protection group.
External_Dependency_Allowed - Specifies whether to allow any dependencies between resource groups and resources that belong to this protection group and resource groups and resources that do not belong to this protection group. For RAC, set this property to true.
Timeout – Specifies the timeout period for the protection group in seconds.
Nodelist – Lists the host names of the machines that can be primary for the replication subsystem.
Cluster_dgs – Specifies the VERITAS Volume Manager cluster feature disk group where the data is written.
For more information about the properties you can set, see Appendix A, Standard Sun Cluster Geographic Edition Properties, in Sun Cluster Geographic Edition System Administration Guide.
Specifies the name of the protection group.
For information about the names and values that are supported by Sun Cluster Geographic Edition software, see Appendix B, Legal Names and Values of Sun Cluster Geographic Edition Entities, in Sun Cluster Geographic Edition System Administration Guide.
For more information about the geopg command, refer to the geopg(1M) man page.
Add a Hitachi TrueCopy device group to the protection group.
# geopg add-device-group [-p property [-p…]] devicegroupname protectiongroupname
Specifies the properties of the protection group.
You can specify the Fence_level property, which defines the fence level that is used by the disk device group. The fence level determines the level of consistency among the primary and secondary volumes for that disk device group. You must set this property to never.
To avoid application failure on the primary cluster, specify a Fence_level of never or async. If the Fence_level parameter is not set to never or async, data replication might not function properly when the secondary site goes down.
If you specify a Fence_level of never, the data replication roles do not change after you perform a takeover.
Do not use programs that would prevent the Fence_level parameter from being set to data or status because these values might be required in special circumstances.
If you have special requirements to use a Fence_level of data or status, consult your Sun representative.
For more information about the properties you can set, see Appendix A, Standard Sun Cluster Geographic Edition Properties, in Sun Cluster Geographic Edition System Administration Guide.
Specifies the name of the protection group.
Add to the protection group only the rac_server_proxy resource group and resource groups for device groups that are replicated.
Do not add the RAC framework resource group to the protection group. This ensures that, if the protection group becomes secondary on the node, the framework resource group does not become unmanaged. In addition, multiple RAC databases can be on the cluster, and the databases can be under Sun Cluster Geographic Edition control or not under its control.
# geopg add-resource-group resourcegroup protectiongroupname
Specifies a comma-separated list of resource groups to add to or delete from the protection group. The specified resource groups must already be defined.
The protection group must be online before you add a resource group. The geopg add-resource-group command fails when a protection group is offline and the resource group that is being added is online.
If a protection group has already been started at the time that you add a resource group, the resource group remains unmanaged. You must start the resource group manually by running the geopg start command.
Specifies the name of the protection group.
This example creates the protection group pg1 which uses RAC and the cluster feature.
A cluster feature disk group racdbdg controls the data which is replicated by the Hitachi TrueCopy device group VG01. The node list of the RAC framework resource group is set to all nodes of the cluster.
Create the protection group on the primary cluster with the cluster feature disk group racdbdg.
# geopg create -s pts1 -o PRIMARY -d Truecopy \
-p cluster_dgs=racdbdg -p external_dependency_allowed=true pg1
Protection group "pg1" successfully created.
Add the Hitachi TrueCopy device group VG01 to protection group pg1.
# geopg add-device-group --property fence_level=never VG01 pg1
Device group "VG01" successfully added to the protection group "pg1".
Add the rac_server_proxy-rg resource group and the replicated device-group resource groups, hasp4rac-rg and scaldbdg-rg, to the protection group.
# geopg add-resource-group rac_server_proxy-rg,hasp4rac-rg,\
scaldbdg-rg pg1
Before creating the protection group, the data replication layer validates that the horcmd daemon is running.
The data replication layer validates that the horcmd daemon is running on at least one node that is specified in the Nodelist property. For more information about the horcmd daemon, see the Sun StorEdge SE 9900 V Series Command and Control Interface User and Reference Guide.
If the Cluster_dgs property is specified, then the data replication layer verifies that the device group specified is a valid Sun Cluster device group. The data replication layer also verifies that the device group is of a valid type.
The device groups that are specified in the Cluster_dgs property must be written to only by applications that belong to the protection group. This property must not specify device groups that receive information from applications outside the protection group.
A Sun Cluster resource group is automatically created when the protection group is created.
The resource in this resource group monitors data replication. The name of the Hitachi TrueCopy data replication resource group is rg-tc-protectiongroupname.
These automatically created replication resource groups are for Sun Cluster Geographic Edition internal implementation purposes only. Use caution when you modify these resource groups by using Sun Cluster commands.
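Following the naming pattern above, the replication resource group for a protection group named tcpg would be rg-tc-tcpg. It can be inspected read-only with the standard Sun Cluster commands, sketched here as a dry run (the protection group name is illustrative).

```shell
# Dry-run sketch: read-only inspection of the internal replication resource
# group. Do not modify it; it is managed by Geographic Edition software.
run() { echo "+ $*"; }

run clresourcegroup status rg-tc-tcpg      # current state on each node
run clresourcegroup show -v rg-tc-tcpg     # resources and properties
```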
Before modifying the configuration of your protection group, ensure that the protection group you want to modify exists locally.
Log in to a cluster node.
You must be assigned the Geo Management RBAC rights profile to complete this procedure. For more information about RBAC, see Sun Cluster Geographic Edition Software and RBAC in Sun Cluster Geographic Edition System Administration Guide.
Modify the configuration of the protection group.
This command modifies the properties of a protection group on all nodes of the local cluster. If the partner cluster contains a protection group of the same name, this command also propagates the new configuration information to the partner cluster.
# geopg set-prop -p property [-p...] protectiongroupname
Specifies the properties of the protection group.
For more information about the properties you can set, see Appendix A, Standard Sun Cluster Geographic Edition Properties, in Sun Cluster Geographic Edition System Administration Guide.
Specifies the name of the protection group.
For information about the names and values that are supported by Sun Cluster Geographic Edition software, see Appendix B, Legal Names and Values of Sun Cluster Geographic Edition Entities, in Sun Cluster Geographic Edition System Administration Guide.
For more information about the geopg command, refer to the geopg(1M) man page.
This example modifies the Timeout property of the protection group that was created in Example 2–2.
# geopg set-prop -p Timeout=400 tcpg
During protection group validation, the Hitachi TrueCopy data replication subsystem validates the following:
The horcmd daemon is running on at least one node that is specified in the Nodelist property of the protection group. The data replication layer also confirms that a path to a Hitachi TrueCopy storage device exists from the node on which the horcmd daemon is running.
For more information about the horcmd daemon, see the Sun StorEdge SE 9900 V Series Command and Control Interface User and Reference Guide.
The device group specified is a valid Sun Cluster device group or a CVM device group if the Cluster_dgs property is specified. The data replication layer also verifies that the device group is of a valid type.
The properties are validated for each Hitachi TrueCopy device group that has been added to the protection group.
When the geoadm status output displays that the Configuration status of a protection group is Error, you can validate the configuration by using the geopg validate command. This command checks the current state of the protection group and its entities.
If the protection group and its entities are valid, then the Configuration status of the protection groups is set to OK. If the geopg validate command finds an error in the configuration files, then the command displays a message about the error and the configuration remains in the error state. In such a case, you can fix the error in the configuration, and run the geopg validate command again.
Ensure that the protection group you want to validate exists locally and that the Common Agent Container is online on all nodes of both clusters in the partnership.
Log in to a cluster node.
You must be assigned the Geo Management RBAC rights profile to complete this procedure. For more information about RBAC, see Sun Cluster Geographic Edition Software and RBAC in Sun Cluster Geographic Edition System Administration Guide.
Validate the configuration of the protection group.
This command validates the configuration of the protection group on the local cluster only. To validate the protection group configuration on the partner cluster, run the command again on the partner cluster.
# geopg validate protectiongroupname
Specifies a unique name that identifies a single protection group
This example validates a protection group.
# geopg validate tcpg
If you want to delete the protection group everywhere, you must run the geopg delete command on each cluster where the protection group exists.
Before deleting a protection group, ensure that the following conditions are met:
The protection group you want to delete exists locally.
The protection group is offline on the local cluster.
To keep the application resource groups online while deleting the protection group, you must first remove the application resource groups from the protection group. See Example 2–8 and Example 2–10 for examples of this procedure.
Log in to a node on the primary cluster.
You must be assigned the Geo Management RBAC rights profile to complete this procedure. For more information about RBAC, see Sun Cluster Geographic Edition Software and RBAC in Sun Cluster Geographic Edition System Administration Guide.
Delete the protection group.
This command deletes the configuration of the protection group from the local cluster. The command also removes the replication resource group for each Hitachi TrueCopy device group in the protection group. This command does not alter the pair state of the Hitachi TrueCopy device group.
# geopg delete protectiongroupname
Specifies the name of the protection group
To delete the protection group on the secondary cluster, repeat step 1 and step 2 on cluster-newyork.
This example deletes a protection group from both partner clusters.
cluster-paris is the primary cluster. For a reminder of the sample cluster configuration, see Example Sun Cluster Geographic Edition Cluster Configuration in Sun Cluster Geographic Edition System Administration Guide.
# rlogin phys-paris-1 -l root
phys-paris-1# geopg delete tcpg
# rlogin phys-newyork-1 -l root
phys-newyork-1# geopg delete tcpg
This example keeps online two application resource groups, apprg1 and apprg2, while deleting their protection group, tcpg. Remove the application resource groups from the protection group, then delete the protection group.
# geopg remove-resource-group apprg1,apprg2 tcpg
# geopg stop -e global tcpg
# geopg delete tcpg
To make an application highly available, the application must be managed as a resource in an application resource group.
All the entities you configure for the application resource group on the primary cluster, such as application resources, installation, application configuration files, and resource groups, must be replicated to the secondary cluster. The resource group names must be identical on both clusters. Also, the data that the application resource uses must be replicated to the secondary cluster.
This section contains information about the following tasks:
How to Add an Application Resource Group to a Hitachi TrueCopy Protection Group
How to Delete an Application Resource Group From a Hitachi TrueCopy Protection Group
You can add an existing resource group to the list of application resource groups for a protection group. Before you add an application resource group to a protection group, ensure that the following conditions are met:
The protection group is defined.
The resource group exists on both clusters and is in an appropriate state.
The Auto_start_on_new_cluster property of the resource group is set to False. You can view this property by using the clresourcegroup command.
# clresourcegroup show -p auto_start_on_new_cluster apprg
When you bring a protection group online on the primary cluster, you should bring the application resource groups participating in that protection group online only on the same primary cluster. Setting the Auto_start_on_new_cluster property to False prevents the Sun Cluster resource group manager from automatically starting the application resource groups. In this case, the startup of resource groups is reserved to the Sun Cluster Geographic Edition software.
Application resource groups should be online only on the primary cluster when the protection group is activated.
Set the Auto_start_on_new_cluster property to False as follows:
# clresourcegroup set -p Auto_start_on_new_cluster=False apprg
The application resource group must not have dependencies on resource groups and resources outside of this protection group. To add several application resource groups that share dependencies, you must add the application resource groups to the protection group in a single operation. If you add the application resource groups separately, the operation fails.
The protection group can be activated or deactivated and the resource group can be either Online or Unmanaged.
If the resource group is Unmanaged and the protection group is Active after the configuration of the protection group has changed, the local state of the protection group becomes Degraded.
If the resource group to add is Online and the protection group is deactivated, the request is rejected. You must activate the protection group before adding an active resource group.
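In other words, when the resource group is already online, the order of operations matters: activate the protection group first, then add the resource group. Sketched as a dry run with the illustrative names used elsewhere in this chapter (tcpg, apprg1):

```shell
# Dry-run sketch: an online resource group can only be added to an activated
# protection group, so start the protection group locally first.
run() { echo "+ $*"; }

run geopg start -e local tcpg              # activate the protection group
run geopg add-resource-group apprg1 tcpg   # then add the online resource group
```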
Log in to a cluster node.
You must be assigned the Geo Management RBAC rights profile to complete this procedure. For more information about RBAC, see Sun Cluster Geographic Edition Software and RBAC in Sun Cluster Geographic Edition System Administration Guide.
Add an application resource group to the protection group.
This command adds an application resource group to a protection group on the local cluster. Then the command propagates the new configuration information to the partner cluster if the partner cluster contains a protection group of the same name.
# geopg add-resource-group resourcegrouplist protectiongroup
Specifies the name of the application resource group. You can specify more than one resource group in a comma-separated list.
Specifies the name of the protection group.
For information about the names and values that are supported by Sun Cluster Geographic Edition software, see Appendix B, Legal Names and Values of Sun Cluster Geographic Edition Entities, in Sun Cluster Geographic Edition System Administration Guide.
If the add operation is unsuccessful on the local cluster, the configuration of the protection group is not modified. Otherwise, the Configuration status is set to OK on the local cluster.
If the Configuration status is OK on the local cluster, but the add operation is unsuccessful on the partner cluster, the Configuration status is set to Error on the partner cluster.
After the application resource group is added to the protection group, the application resource group is managed as an entity of the protection group. Then the application resource group is affected by protection group operations such as start, stop, switchover, and takeover.
This example adds two application resource groups, apprg1 and apprg2, to tcpg.
# geopg add-resource-group apprg1,apprg2 tcpg
You can remove an application resource group from a protection group without altering the state or contents of an application resource group.
Ensure that the following conditions are met:
The protection group is defined on the local cluster.
The resource group to be removed is part of the application resource groups of the protection group. For example, you cannot remove a resource group that belongs to the data replication management entity.
Log in to a cluster node.
You must be assigned the Geo Management RBAC rights profile to complete this procedure. For more information about RBAC, see Sun Cluster Geographic Edition Software and RBAC in Sun Cluster Geographic Edition System Administration Guide.
Remove the application resource group from the protection group.
This command removes an application resource group from the protection group on the local cluster. If the partner cluster contains a protection group of the same name, then the command removes the application resource group from the protection group on the partner cluster.
# geopg remove-resource-group resourcegrouplist protectiongroup
Specifies the name of the application resource group. You can specify more than one resource group in a comma-separated list.
Specifies the name of the protection group.
If the remove operation is unsuccessful on the local cluster, the configuration of the protection group is not modified. Otherwise, the Configuration status is set to OK on the local cluster.
If the Configuration status is OK on the local cluster, but the remove operation is unsuccessful on the partner cluster, the Configuration status is set to Error on the partner cluster.
This example removes two application resource groups, apprg1 and apprg2, from tcpg.
# geopg remove-resource-group apprg1,apprg2 tcpg
This section provides the following information about administering Hitachi TrueCopy data replication device groups:
How to Add a Data Replication Device Group to a Hitachi TrueCopy Protection Group
How the State of the Hitachi TrueCopy Device Group Is Validated
How to Modify a Hitachi TrueCopy Data Replication Device Group
How to Delete a Data Replication Device Group From a Hitachi TrueCopy Protection Group
For details about configuring a Hitachi TrueCopy data replication protection group, see How to Create and Configure a Hitachi TrueCopy Protection Group That Does Not Use Oracle Real Application Clusters.
Log in to a cluster node.
You must be assigned the Geo Management RBAC rights profile to complete this procedure. For more information about RBAC, see Sun Cluster Geographic Edition Software and RBAC in Sun Cluster Geographic Edition System Administration Guide.
Create a data replication device group in the protection group.
This command adds a device group to a protection group on the local cluster and propagates the new configuration to the partner cluster if the partner cluster contains a protection group of the same name.
# geopg add-device-group -p property [-p...] devicegroupname protectiongroupname

-p property
    Specifies the properties of the data replication device group.
    You can specify the Fence_level property, which defines the fence level that is used by the device group. The fence level determines the level of consistency between the primary and secondary volumes for that device group.
    You can set this property to data, status, never, or async. When you use a Fence_level of never or async, the application can continue to write to the primary cluster even after failure on the secondary cluster. However, when you set the Fence_level property to data or status, the application on the primary cluster might fail because the secondary cluster is unavailable for reasons such as the following:
    Data replication link failure
    The secondary cluster and its storage are down
    The storage on the secondary cluster is down
    To avoid application failure on the primary cluster, specify a Fence_level of never or async.
    If you specify a Fence_level of never, the data replication roles do not change after you perform a takeover.
    If you have special requirements to use a Fence_level of data or status, consult your Sun representative.
    For more information about application errors associated with different fence levels, see the Sun StorEdge SE 9900 V Series Command and Control Interface User and Reference Guide.
    The other properties that you can specify depend on the type of data replication you are using. For details about these properties, see Appendix A, Standard Sun Cluster Geographic Edition Properties, in Sun Cluster Geographic Edition System Administration Guide.
devicegroupname
    Specifies the name of the new data replication device group.
protectiongroupname
    Specifies the name of the protection group that will contain the new data replication device group.
For information about the names and values that are supported by Sun Cluster Geographic Edition software, see Appendix B, Legal Names and Values of Sun Cluster Geographic Edition Entities, in Sun Cluster Geographic Edition System Administration Guide.
For more information about the geopg command, refer to the geopg(1M) man page.
This example creates a Hitachi TrueCopy data replication device group in the tcpg protection group.
# geopg add-device-group -p Fence_level=data devgroup1 tcpg
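The Fence_level behavior described above can be sketched as a small, hypothetical decision function (not part of the product; the function name and parameters are illustrative, and the model simplifies "might fail" to a failure):

```python
# Hypothetical sketch of the Fence_level semantics described above: with
# data or status, writes on the primary can fail when the secondary cannot
# acknowledge them; with never or async, the primary keeps accepting writes.

def primary_write_succeeds(fence_level, secondary_reachable):
    """Return True if a write on the primary cluster can complete."""
    if fence_level in ("never", "async"):
        return True  # primary is never fenced by the secondary's state
    if fence_level in ("data", "status"):
        # Simplified: treats any secondary outage as a write failure.
        return secondary_reachable
    raise ValueError("unknown Fence_level: " + fence_level)
```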
When the Hitachi TrueCopy device group, configured as dev_group in the /etc/horcm.conf file, is added to a protection group, the data replication layer performs the following validations:
Validates that the horcmd daemon is running on at least one node in the Nodelist property of the protection group.
For more information about the horcmd daemon, see the Sun StorEdge SE 9900 V Series Command and Control Interface User and Reference Guide.
Checks that a path to the storage device that controls the new Hitachi TrueCopy device group exists from all of the nodes that are specified in the Nodelist property.
The Hitachi TrueCopy device group properties that are specified in the geopg add-device-group command are validated as described in the following table.
Hitachi TrueCopy Device Group Property | Validation
---|---
devicegroupname | Checks that the specified Hitachi TrueCopy device group is configured on all of the cluster nodes that are specified in the Nodelist property.
Fence_level | If a pair is already established for this Hitachi TrueCopy device group, the data replication layer checks that the specified Fence_level matches the already established fence level. If a pair is not yet established, for example, if a pair is in the SMPL state, any Fence_level is accepted.
When a Hitachi TrueCopy device group is added to a protection group, a Sun Cluster resource is automatically created by this command. This resource monitors data replication. The name of the resource is r-tc-protectiongroupname-devicegroupname. This resource is placed in the corresponding Sun Cluster resource group, which is named rg-tc-protectiongroupname.
You must use caution before you modify these replication resources with Sun Cluster commands. These resources are for internal implementation purposes only.
For validation purposes, Sun Cluster Geographic Edition gives each Hitachi TrueCopy device group a state according to the current state of its pair. This state is returned by the pairvolchk -g devicegroup -ss command.
The remainder of this section describes the individual device group states and how these states are validated against the local role of the protection group.
An individual Hitachi TrueCopy device group can be in one of the following states:
SMPL
Regular Primary
Regular Secondary
Takeover Primary
Takeover Secondary
The state of a particular device group is determined by using the value that is returned by the pairvolchk -g devicegroup -ss command. The following table describes the device group state associated with the values returned by the pairvolchk command.
Table 2–1 Individual Hitachi TrueCopy Device Group States
Output of pairvolchk | Individual Device Group State
---|---
11 = SMPL | SMPL
22 / 42 = PVOL_COPY, 23 / 43 = PVOL_PAIR, 26 / 46 = PVOL_PDUB, 47 = PVOL_PFUL, 48 = PVOL_PFUS | Regular Primary
24 / 44 = PVOL_PSUS, 25 / 45 = PVOL_PSUE. For these return codes, determining the individual device group state requires that the horcmd process be active on the remote cluster so that the remote pair state for this device group can be obtained. | Regular Primary, if the remote-cluster-state is not SSWS, or Takeover Secondary, if the remote-cluster-state is SSWS. The SSWS state is reported by the pairdisplay -g devicegroup -fc command.
32 / 52 = SVOL_COPY, 33 / 53 = SVOL_PAIR, 35 / 55 = SVOL_PSUE, 36 / 56 = SVOL_PDUB, 57 = SVOL_PFUL, 58 = SVOL_PFUS | Regular Secondary
34 / 54 = SVOL_PSUS | Regular Secondary, if the local-cluster-state is not SSWS, or Takeover Primary, if the local-cluster-state is SSWS. The SSWS state is reported by the pairdisplay -g devicegroup -fc command.
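Table 2–1 amounts to a lookup with two conditional rows. A minimal, hypothetical sketch (not a product interface) of that mapping, with the SSWS pair states passed in explicitly:

```python
# Sketch of Table 2-1: maps a pairvolchk -ss return code to the individual
# device group state. Codes 24/44/25/45 depend on the remote pair state and
# codes 34/54 depend on the local pair state (SSWS or not).

PRIMARY_CODES = {22, 42, 23, 43, 26, 46, 47, 48}
SECONDARY_CODES = {32, 52, 33, 53, 35, 55, 36, 56, 57, 58}

def individual_state(code, remote_state=None, local_state=None):
    if code == 11:
        return "SMPL"
    if code in PRIMARY_CODES:
        return "Regular Primary"
    if code in SECONDARY_CODES:
        return "Regular Secondary"
    if code in (24, 44, 25, 45):  # PVOL_PSUS / PVOL_PSUE
        return "Takeover Secondary" if remote_state == "SSWS" else "Regular Primary"
    if code in (34, 54):          # SVOL_PSUS
        return "Takeover Primary" if local_state == "SSWS" else "Regular Secondary"
    raise ValueError("unexpected pairvolchk return code: %d" % code)
```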
If a protection group contains only one Hitachi TrueCopy device group, then the aggregate device group state is the same as the individual device group state.
When a protection group contains multiple Hitachi TrueCopy device groups, the aggregate device group state is obtained as described in the following table.
Table 2–2 Conditions That Determine the Aggregate Device Group State
Condition | Aggregate Device Group State
---|---
All individual device group states are SMPL | SMPL
All individual device group states are either Regular Primary or SMPL | Regular Primary
All individual device group states are either Regular Secondary or SMPL | Regular Secondary
All individual device group states are either Takeover Primary or SMPL | Takeover Primary
All individual device group states are either Takeover Secondary or SMPL | Takeover Secondary
The aggregate device group state cannot be obtained for any other combination of individual device group states. This is considered a pair-state validation failure.
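The aggregation rule above reduces to: at most one distinct non-SMPL state may appear. A hypothetical sketch (illustrative only):

```python
# Sketch of Table 2-2: derives the aggregate device group state from the
# individual states. Any mix of two or more distinct non-SMPL states is a
# pair-state validation failure.

def aggregate_state(individual_states):
    non_smpl = set(individual_states) - {"SMPL"}
    if not non_smpl:
        return "SMPL"
    if len(non_smpl) == 1:
        return non_smpl.pop()
    raise ValueError("pair-state validation failure: %s" % sorted(non_smpl))
```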
The local role of a Hitachi TrueCopy protection group is validated against the aggregate device group state as described in the following table.
Table 2–3 Validating the Aggregate Device Group State Against the Local Role of a Protection Group
Aggregate Device Group State | Valid Local Protection Group Role
---|---
SMPL | primary or secondary
Regular Primary | primary
Regular Secondary | secondary
Takeover Primary | primary
Takeover Secondary | secondary
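Table 2–3 can be encoded directly as a lookup, sketched here hypothetically (not a product interface):

```python
# Sketch of Table 2-3: validates the protection group's local role against
# the aggregate device group state.

VALID_LOCAL_ROLES = {
    "SMPL": {"primary", "secondary"},
    "Regular Primary": {"primary"},
    "Regular Secondary": {"secondary"},
    "Takeover Primary": {"primary"},
    "Takeover Secondary": {"secondary"},
}

def role_is_valid(aggregate_state, local_role):
    return local_role in VALID_LOCAL_ROLES[aggregate_state]
```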
This example validates the state of a Hitachi TrueCopy device group against the role of the Hitachi TrueCopy protection group to which it belongs.
First, the protection group is created as follows:
phys-paris-1# geopg create -s paris-newyork-ps -o primary -d truecopy tcpg
A device group, devgroup1, is added to the protection group, tcpg, as follows:
phys-paris-1# geopg add-device-group -p fence_level=async devgroup1 tcpg
The current state of a Hitachi TrueCopy device group, devgroup1, is provided in the output of the pairdisplay command as follows:
phys-paris-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-A , 0, 1) 12345 1..P-VOL PAIR ASYNC,54321 609 -
devgroup1 pair1(R) (CL1-C , 0, 20)54321 609..S-VOL PAIR ASYNC,----- 1 -
devgroup1 pair2(L) (CL1-A , 0, 2) 12345 2..P-VOL PAIR ASYNC,54321 610 -
devgroup1 pair2(R) (CL1-C , 0,21) 54321 610..S-VOL PAIR ASYNC,----- 2 -
The pairvolchk -g <DG> -ss command is run and returns a value of 23.
phys-paris-1# pairvolchk -g devgroup1 -ss
pairvolchk : Volstat is P-VOL.[status = PAIR fence = ASYNC]
phys-paris-1# echo $?
23
The output of the pairvolchk command is 23, which corresponds in Table 2–1 to an individual device group state of Regular Primary. Because the protection group contains only one device group, the aggregate device group state is the same as the individual device group state. The device group state is valid because the local role of the protection group, specified by the -o option, is primary, as specified in Table 2–3.
Log in to a cluster node.
You must be assigned the Geo Management RBAC rights profile to complete this procedure. For more information about RBAC, see Sun Cluster Geographic Edition Software and RBAC in Sun Cluster Geographic Edition System Administration Guide.
Modify the device group.
This command modifies the properties of a device group in a protection group on the local cluster. Then the command propagates the new configuration to the partner cluster if the partner cluster contains a protection group of the same name.
# geopg modify-device-group -p property [-p...] TCdevicegroupname protectiongroupname

-p property
    Specifies the properties of the data replication device group.
    For more information about the properties you can set, see Appendix A, Standard Sun Cluster Geographic Edition Properties, in Sun Cluster Geographic Edition System Administration Guide.
TCdevicegroupname
    Specifies the name of the data replication device group to modify.
protectiongroupname
    Specifies the name of the protection group that contains the device group.
This example modifies the properties of a data replication device group that is part of a Hitachi TrueCopy protection group.
# geopg modify-device-group -p fence_level=async tcdg tcpg
You might need to delete a data replication device group from a protection group, for example, if the device group was added to the protection group in error. Normally, after an application is configured to write to a set of disks, you would not change the disks.
Deleting a data replication device group does not stop replication or change the replication status of the data replication device group.
For information about deleting protection groups, refer to How to Delete a Hitachi TrueCopy Protection Group. For information about deleting application resource groups from a protection group, refer to How to Delete an Application Resource Group From a Hitachi TrueCopy Protection Group.
Log in to a cluster node.
You must be assigned the Geo Management RBAC rights profile to complete this procedure. For more information about RBAC, see Sun Cluster Geographic Edition Software and RBAC in Sun Cluster Geographic Edition System Administration Guide.
Remove the device group.
This command removes a device group from a protection group on the local cluster. Then the command propagates the new configuration to the partner cluster if the partner cluster contains a protection group of the same name.
# geopg remove-device-group devicegroupname protectiongroupname

devicegroupname
    Specifies the name of the data replication device group.
protectiongroupname
    Specifies the name of the protection group.
When a device group is deleted from a Hitachi TrueCopy protection group, the corresponding Sun Cluster resource, r-tc-protectiongroupname-devicegroupname, is removed from the replication resource group. As a result, the deleted device group is no longer monitored. The resource group is removed when the protection group is deleted.
This example removes a Hitachi TrueCopy data replication device group.
# geopg remove-device-group tcdg tcpg
After you have configured data replication, resource groups, and resources on your primary and secondary clusters, you can replicate the configuration of the protection group to the secondary cluster.
Before you replicate the configuration of a Hitachi TrueCopy protection group to a secondary cluster, ensure that the following conditions are met:
The protection group is defined on the remote cluster, not on the local cluster.
The device groups in the protection group on the remote cluster exist on the local cluster.
The application resource groups in the protection group on the remote cluster exist on the local cluster.
The Auto_start_on_new_cluster property of the resource group is set to False. You can view this property by using the clresourcegroup command.
# clresourcegroup show -p auto_start_on_new_cluster apprg
Setting the Auto_start_on_new_cluster property to False prevents the Sun Cluster resource group manager from automatically starting the resource groups in the protection group. After the Sun Cluster Geographic Edition software restarts, it communicates with the remote cluster to ensure that the remote cluster is running and that the remote cluster is the secondary cluster for that resource group. The Sun Cluster Geographic Edition software does not automatically start the resource group on the primary cluster.
Application resource groups should be online only on the primary cluster when the protection group is activated.
Set the Auto_start_on_new_cluster property to False as follows:
# clresourcegroup set -p Auto_start_on_new_cluster=False apprg1
Log in to phys-newyork-1.
You must be assigned the Geo Management RBAC rights profile to complete this procedure. For more information about RBAC, see Sun Cluster Geographic Edition Software and RBAC in Sun Cluster Geographic Edition System Administration Guide.
phys-newyork-1 is the only node on the secondary cluster. For a reminder of which node is phys-newyork-1, see Example Sun Cluster Geographic Edition Cluster Configuration in Sun Cluster Geographic Edition System Administration Guide.
Replicate the protection group configuration to the partner cluster by using the geopg get command.
This command retrieves the configuration information of the protection group from the remote cluster and creates the protection group on the local cluster.
phys-newyork-1# geopg get -s partnershipname [protectiongroup]

partnershipname
    Specifies the name of the partnership from which the protection group configuration information is retrieved and in which the protection group will be created locally.
protectiongroup
    Specifies the name of the protection group.
If no protection group is specified, then all protection groups that exist in the specified partnership on the remote partner are created on the local cluster.
The geopg get command replicates Sun Cluster Geographic Edition related entities. For information about how to replicate Sun Cluster entities, see Replicating and Upgrading Configuration Data for Resource Groups, Resource Types, and Resources in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
This example replicates the configuration of tcpg from cluster-paris to cluster-newyork.
# rlogin phys-newyork-1 -l root
phys-newyork-1# geopg get -s paris-newyork-ps tcpg
When you activate a protection group, the protection group assumes the role that you assigned to it during configuration. For more information about configuring protection groups, see How to Create and Configure a Hitachi TrueCopy Protection Group That Does Not Use Oracle Real Application Clusters.
You can activate a protection group in the following ways:
Globally – Activates a protection group on both clusters where the protection group is configured.
On the primary cluster only – Secondary cluster remains inactive.
On the secondary cluster only – Primary cluster remains inactive.
Activating a Hitachi TrueCopy protection group on a cluster has the following effect on the data replication layer:
The data replication configuration of the protection group is validated. During validation, the current local role of a protection group is compared with the aggregate device group state as described in Table 2–3. If validation is successful, data replication is started.
Data replication is started on the data replication device groups that are configured for the protection group, regardless of whether the activation occurs on the primary or secondary cluster. Data is always replicated from the cluster on which the local role of the protection group is primary to the cluster on which the local role of the protection group is secondary.
Application handling proceeds only after data replication has been started successfully.
Activating a protection group has the following effect on the application layer:
When a protection group is activated on the primary cluster, the application resource groups that are configured for the protection group are also started.
When a protection group is activated on the secondary cluster, the application resource groups are not started.
The Hitachi TrueCopy command that is used to start data replication depends on the following factors:
Aggregate device group state
Local role of the protection group
Current pair state
The following table describes the Hitachi TrueCopy command that is used to start data replication for each of the possible combinations of factors. In the commands, dg is the device group name and fl is the fence level that is configured for the device group.
Table 2–4 Commands Used to Start Hitachi TrueCopy Data Replication
Aggregate Device Group State | Valid Local Protection Group Role | Hitachi TrueCopy Start Command
---|---|---
SMPL | primary or secondary | paircreate -vl -g dg -f fl or paircreate -vr -g dg -f fl. Both commands require that the horcmd process is running on the remote cluster.
Regular Primary | primary | If the local state code is 22, 23, 25, 26, 29, 42, 43, 45, 46, or 47, no command is run because data is already being replicated. If the local state code is 24, 44, or 48, the following command is run: pairresync -g dg [-l]. If the local state code is 11, the following command is run: paircreate -vl -g dg -f fl. Both commands require that the horcmd process is running on the remote cluster.
Regular Secondary | secondary | If the local state code is 32, 33, 35, 36, 39, 52, 53, 55, 56, or 57, no command is run because data is already being replicated. If the local state code is 34, 54, or 58, the following command is run: pairresync -g dg. If the local state code is 11, the following command is run: paircreate -vr -g dg -f fl. Both commands require that the horcmd process is running on the remote cluster.
Takeover Primary | primary | If the local state code is 34 or 54, the following command is run: pairresync -swaps -g dg. If the local state code is 11, the following command is run: paircreate -vl -g dg -f fl. The paircreate command requires that the horcmd process is running on the remote cluster.
Takeover Secondary | secondary | If the local state code is 24, 44, 25, or 45, the following command is run: pairresync -swapp -g dg. If the local state code is 11, the following command is run: paircreate -vr -g dg -f fl. Both commands require that the horcmd process is running on the remote cluster.
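Table 2–4 can be encoded as a selection function. The following is a hypothetical sketch (not a product interface); for the SMPL row, where the table lists both paircreate forms, the sketch assumes -vl applies when the local role is primary, consistent with the geopg start example later in this section.

```python
# Sketch of Table 2-4: given the aggregate device group state, the local
# protection group role, and the local pairvolchk state code, returns the
# start command string from the table, or None when data is already being
# replicated. dg and fl stand for the device group name and fence level.

def start_command(agg_state, local_role, code, dg="dg", fl="fl"):
    create_local = "paircreate -vl -g %s -f %s" % (dg, fl)
    create_remote = "paircreate -vr -g %s -f %s" % (dg, fl)
    if agg_state == "SMPL":
        # Assumption: -vl when the local role is primary, -vr otherwise.
        return create_local if local_role == "primary" else create_remote
    if agg_state == "Regular Primary":
        if code in (24, 44, 48):
            return "pairresync -g %s [-l]" % dg
        return create_local if code == 11 else None
    if agg_state == "Regular Secondary":
        if code in (34, 54, 58):
            return "pairresync -g %s" % dg
        return create_remote if code == 11 else None
    if agg_state == "Takeover Primary":
        if code in (34, 54):
            return "pairresync -swaps -g %s" % dg
        return create_local if code == 11 else None
    if agg_state == "Takeover Secondary":
        if code in (24, 44, 25, 45):
            return "pairresync -swapp -g %s" % dg
        return create_remote if code == 11 else None
    raise ValueError("unknown aggregate state: " + agg_state)
```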
Log in to a cluster node.
You must be assigned the Geo Management RBAC rights profile to complete this procedure. For more information about RBAC, see Sun Cluster Geographic Edition Software and RBAC in Sun Cluster Geographic Edition System Administration Guide.
Activate the protection group.
When you activate a protection group, its application resource groups are also brought online.
# geopg start -e scope [-n] protectiongroupname

-e scope
    Specifies the scope of the command.
    If the scope is Local, the command operates on the local cluster only. If the scope is Global, the command operates on both clusters that deploy the protection group.
    The property values, such as Global and Local, are not case sensitive.
-n
    Prevents the start of data replication at protection group startup.
    If you omit this option, the data replication subsystem starts at the same time as the protection group.
protectiongroupname
    Specifies the name of the protection group.
The geopg start command uses Sun Cluster commands to bring resource groups and resources online.
This example illustrates how the Sun Cluster Geographic Edition determines the Hitachi TrueCopy command that is used to start data replication.
First, the Hitachi TrueCopy protection group is created.
phys-paris-1# geopg create -s paris-newyork-ps -o primary -d truecopy tcpg
A device group, devgroup1, is added to the protection group.
phys-paris-1# geopg add-device-group -p fence_level=async devgroup1 tcpg
The current state of a Hitachi TrueCopy device group, devgroup1, is provided in the output of the pairdisplay command:
phys-paris-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-A , 0, 1) 12345 1..SMPL ---- ----, ----- ---- -
devgroup1 pair1(R) (CL1-C , 0, 20)54321 609..SMPL ---- ----, ----- ---- -
devgroup1 pair2(L) (CL1-A , 0, 2) 12345 2..SMPL ---- ----, ----- ---- -
devgroup1 pair2(R) (CL1-C , 0,21) 54321 610..SMPL ---- ----, ----- ---- -
The aggregate device group state is SMPL.
Next, the protection group, tcpg, is activated by using the geopg start command.
phys-paris-1# geopg start -e local tcpg
The Sun Cluster Geographic Edition software runs the paircreate -g devgroup1 -vl -f async command at the data replication level. If the command is successful, the state of devgroup1 is provided in the output of the pairdisplay command:
phys-paris-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-A , 0, 1) 12345 1..P-VOL COPY ASYNC,54321 609 -
devgroup1 pair1(R) (CL1-C , 0, 20)54321 609..S-VOL COPY ASYNC,----- 1 -
devgroup1 pair2(L) (CL1-A , 0, 2) 12345 2..P-VOL COPY ASYNC,54321 610 -
devgroup1 pair2(R) (CL1-C , 0,21) 54321 610..S-VOL COPY ASYNC,----- 2 -
This example activates a protection group globally.
# geopg start -e global tcpg
The protection group, tcpg, is activated on both clusters where the protection group is configured.
This example activates a protection group on a local cluster only. This local cluster might be a primary cluster or a secondary cluster, depending on the role of the cluster.
# geopg start -e local tcpg
You can deactivate a protection group on the following levels:
Globally – Deactivates a protection group on both clusters where the protection group is configured
On the primary cluster only – Secondary cluster remains active
On the secondary cluster only – Primary cluster remains active
Deactivating a Hitachi TrueCopy protection group on a cluster has the following effect on the data replication layer:
The data replication configuration of the protection group is validated. During validation, the current local role of the protection group is compared with the aggregate device group state as described in Table 2–3. If validation is successful, data replication is stopped.
Data replication is stopped on the data replication device groups that are configured for the protection group, whether the deactivation occurs on a primary or secondary cluster.
Deactivating a protection group has the following effect on the application layer:
When a protection group is deactivated on the primary cluster, all of the application resource groups that are configured for the protection group are stopped and unmanaged.
When a protection group is deactivated on the secondary cluster, the resource groups on the secondary cluster are not affected. Application resource groups that are configured for the protection group might remain active on the primary cluster, depending on the activation state of the primary cluster.
The Hitachi TrueCopy command that is used to stop data replication depends on the following factors:
Aggregate device group state
Local role of the protection group
Current pair state
The following table describes the Hitachi TrueCopy command used to stop data replication for each of the possible combinations of factors. In the commands, dg is the device group name.
Table 2–5 Commands Used to Stop Hitachi TrueCopy Data Replication
Aggregate Device Group State | Valid Local Protection Group Role | Hitachi TrueCopy Stop Command
---|---|---
SMPL | primary or secondary | No command is run because no data is being replicated.
Regular Primary | primary | If the local state code is 22, 23, 26, 29, 42, 43, 46, or 47, the following command is run: pairsplit -g dg [-l]. If the local state code is 11, 24, 25, 44, 45, or 48, no command is run because no data is being replicated.
Regular Secondary | secondary | If the local state code is 32, 33, 35, 36, 39, 52, 53, 55, 56, or 57, the following command is run: pairsplit -g dg. However, if the local state code is 33 or 53 and the remote state is PSUE, no command is run to stop replication. If the local state code is 11, 34, 54, or 58, no command is run because no data is being replicated.
Takeover Primary | primary | No command is run because no data is being replicated.
Takeover Secondary | secondary | No command is run because no data is being replicated.
Log in to a cluster node.
You must be assigned the Geo Management RBAC rights profile to complete this procedure. For more information about RBAC, see Sun Cluster Geographic Edition Software and RBAC in Sun Cluster Geographic Edition System Administration Guide.
Deactivate the protection group.
When you deactivate a protection group, its application resource groups are also unmanaged.
# geopg stop -e scope [-D] protectiongroupname

-e scope
    Specifies the scope of the command.
    If the scope is Local, the command operates on the local cluster only. If the scope is Global, the command operates on both clusters where the protection group is deployed.
    The property values, such as Global and Local, are not case sensitive.
-D
    Specifies that only data replication should be stopped and that the protection group should remain online.
    If you omit this option, the data replication subsystem and the protection group are both stopped.
protectiongroupname
    Specifies the name of the protection group.
This example illustrates how the Sun Cluster Geographic Edition software determines the Hitachi TrueCopy command that is used to stop data replication.
The current state of the Hitachi TrueCopy device group, devgroup1, is provided in the output of the pairdisplay command:
phys-paris-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-A , 0, 1) 12345 1..P-VOL PAIR ASYNC,54321 609 -
devgroup1 pair1(R) (CL1-C , 0, 20)54321 609..S-VOL PAIR ASYNC,----- 1 -
devgroup1 pair2(L) (CL1-A , 0, 2) 12345 2..P-VOL PAIR ASYNC,54321 610 -
devgroup1 pair2(R) (CL1-C , 0,21) 54321 610..S-VOL PAIR ASYNC,----- 2 -
A device group, devgroup1, is added to the protection group as follows:
phys-paris-1# geopg add-device-group -p fence_level=async devgroup1 tcpg
The Sun Cluster Geographic Edition software runs the pairvolchk -g <DG> -ss command at the data replication level, which returns a value of 43.
# pairvolchk -g devgroup1 -ss
Volstat is P-VOL.[status = PAIR fence = ASYNC]
phys-paris-1# echo $?
43
Next, the protection group, tcpg, is deactivated by using the geopg stop command.
phys-paris-1# geopg stop -e local tcpg
The Sun Cluster Geographic Edition software runs the pairsplit -g devgroup1 command at the data replication level.
If the command is successful, the state of devgroup1 is provided in the output of the pairdisplay command:
phys-paris-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-A , 0, 1) 12345 1..P-VOL PSUS ASYNC,54321 609 -
devgroup1 pair1(R) (CL1-C , 0, 20)54321 609..S-VOL SSUS ASYNC,----- 1 -
devgroup1 pair2(L) (CL1-A , 0, 2) 12345 2..P-VOL PSUS ASYNC,54321 610 -
devgroup1 pair2(R) (CL1-C , 0,21) 54321 610..S-VOL SSUS ASYNC,----- 2 -
This example deactivates a protection group on all clusters.
# geopg stop -e global tcpg
This example deactivates a protection group on the local cluster.
# geopg stop -e local tcpg
This example stops only data replication on a local cluster.
# geopg stop -e local -D tcpg
If the administrator decides later to deactivate both the protection group and its underlying data replication subsystem, the administrator can rerun the command without the -D option:
# geopg stop -e local tcpg
This example keeps two application resource groups, apprg1 and apprg2, online while deactivating their protection group, tcpg, on both clusters.
Remove the application resource groups from the protection group.
# geopg remove-resource-group apprg1,apprg2 tcpg
Deactivate the protection group.
# geopg stop -e global tcpg
You can resynchronize the configuration information of the local protection group with the configuration information that is retrieved from the partner cluster. You need to resynchronize a protection group when its Synchronization status in the output of the geoadm status command is Error.
For example, you might need to resynchronize protection groups after booting the cluster. For more information, see Booting a Cluster in Sun Cluster Geographic Edition System Administration Guide.
Resynchronizing a protection group updates only entities that are related to Sun Cluster Geographic Edition software. For information about how to update Sun Cluster entities, see Replicating and Upgrading Configuration Data for Resource Groups, Resource Types, and Resources in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
The protection group must be deactivated on the cluster where you are running the geopg update command. For information about deactivating a protection group, see Deactivating a Hitachi TrueCopy Protection Group.
Log in to a cluster node.
You must be assigned the Geo Management RBAC rights profile to complete this procedure. For more information about RBAC, see Sun Cluster Geographic Edition Software and RBAC in Sun Cluster Geographic Edition System Administration Guide.
Resynchronize the protection group.
# geopg update protectiongroupname

protectiongroupname
    Specifies the name of the protection group.
This example resynchronizes a protection group.
# geopg update tcpg
You can obtain an overall view of the status of replication, as well as a more detailed runtime status of the Hitachi TrueCopy replication resource groups. The following sections describe the procedures for checking each status.
The status of each Hitachi TrueCopy data replication resource indicates the status of replication on a particular device group. The status of all the resources under a protection group is aggregated in the replication status. This replication status is the second component of the protection group state. For more information about the states of protection groups, refer to Monitoring the Runtime Status of the Sun Cluster Geographic Edition Software in Sun Cluster Geographic Edition System Administration Guide.
To view the overall status of replication, look at the protection group state as described in the following procedure.
Access a node of the cluster where the protection group has been defined.
You must be assigned the Basic Solaris User RBAC rights profile to complete this procedure. For more information about RBAC, see Sun Cluster Geographic Edition Software and RBAC in Sun Cluster Geographic Edition System Administration Guide.
Check the runtime status of replication.
# geoadm status
Refer to the Protection Group section of the output for replication information. The information that is displayed by this command includes the following:
Whether the local cluster is enabled for partnership participation
Whether the local cluster is involved in a partnership
Status of the heartbeat configuration
Status of the defined protection groups
Status of current transactions
Check the runtime status of data replication for each Hitachi TrueCopy device group.
# clresource status
Refer to the Status and Status Message fields for the data replication device group you want to check.
For more information about these fields, see Table 2–6.
The Sun Cluster Geographic Edition software internally creates and maintains one replication resource group for each protection group. The name of the replication resource group has the following format:
rg-tc_truecopyprotectiongroupname
If you add a Hitachi TrueCopy device group to a protection group, Sun Cluster Geographic Edition software creates a resource for each device group. This resource monitors the status of replication for its device group. The name of each resource has the following format:
r-tc-truecopyprotectiongroupname-truecopydevicegroupname
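The two naming formats above can be captured in a small helper, for example to build the exact resource name to pass to clresource status. This is an illustrative sketch, not part of the product; the names tcpg and devgroup1 are assumed examples. Note that the resource-group format uses an underscore (rg-tc_) while the resource format uses hyphens (r-tc-).

```shell
# Derive the replication resource-group and resource names from a
# protection group name and a device group name, following the formats
# documented above (illustrative helper only).
replication_rg_name() {
    echo "rg-tc_$1"
}

replication_rs_name() {
    echo "r-tc-$1-$2"
}

replication_rg_name tcpg            # -> rg-tc_tcpg
replication_rs_name tcpg devgroup1  # -> r-tc-tcpg-devgroup1
```

For instance, the output of the second call could be used as the operand of clresource status to narrow the display to a single device group's replication resource.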
You can monitor the status of replication of this device group by checking the Status and Status Message of this resource. Use the clresource status command to display the resource status and the status message.
The following table describes the Status and Status Message values that are returned by the clresource status command when the State of the Hitachi TrueCopy replication resource group is Online.
Table 2–6 Status and Status Messages of an Online Hitachi TrueCopy Replication Resource Group
Status   | Status Message
---------|---------------
Online   | P-Vol/S-Vol:PAIR
Online   | P-Vol/S-Vol:PAIR:Remote horcmd not reachable
Online   | P-Vol/S-Vol:PFUL
Online   | P-Vol/S-Vol:PFUL:Remote horcmd not reachable
Degraded | SMPL:SMPL
Degraded | SMPL:SMPL:Remote horcmd not reachable
Degraded | P-Vol/S-Vol:COPY
Degraded | P-Vol/S-Vol:COPY:Remote horcmd not reachable
Degraded | P-Vol/S-Vol:PSUS
Degraded | P-Vol/S-Vol:PSUS:Remote horcmd not reachable
Degraded | P-Vol/S-Vol:PFUS
Degraded | P-Vol/S-Vol:PFUS:Remote horcmd not reachable
Faulted  | P-Vol/S-Vol:PDUB
Faulted  | P-Vol/S-Vol:PDUB:Remote horcmd not reachable
Faulted  | P-Vol/S-Vol:PSUE
Faulted  | P-Vol/S-Vol:PSUE:Remote horcmd not reachable
Degraded | S-Vol:SSWS:Takeover Volumes
Faulted  | P-Vol/S-Vol:Suspicious role configuration. Actual Role=x, Config Role=y
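The mapping in Table 2–6 from TrueCopy pair states to resource Status values can be summarized in a small helper, for example when scanning clresource status output in a monitoring script. This is an illustrative sketch of the table only; the replication resource itself reports these values, and the helper takes just the pair-state token (PAIR, COPY, and so on) without the P-Vol/S-Vol prefix or any ":Remote horcmd not reachable" suffix.

```shell
# Classify a TrueCopy pair state into the resource Status values listed
# in Table 2-6 (illustrative helper, not part of the product).
status_for_pair_state() {
    case "$1" in
        PAIR|PFUL)                echo Online ;;
        SMPL|COPY|PSUS|PFUS|SSWS) echo Degraded ;;
        PDUB|PSUE)                echo Faulted ;;
        *)                        echo Unknown ;;
    esac
}

status_for_pair_state PAIR  # -> Online
status_for_pair_state PSUE  # -> Faulted
```

A monitoring script could alert on Faulted immediately while treating Degraded states such as COPY (resynchronization in progress) as transient.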
For more information about these values, refer to the Hitachi TrueCopy documentation.
For more information about the clresource status command, see the clresource(1CL) man page.