8 Disaster Recovery
This chapter explains how an administrator configures disaster recovery so that two Oracle Private Cloud Appliance systems in different physical locations operate as each other's fallback in case an outage occurs at one site.
Implementation details and technical background information for this feature can be found in the Oracle Private Cloud Appliance Concepts Guide. Refer to the section "Disaster Recovery" in the chapter Appliance Administration Overview.
Enabling Disaster Recovery on the Appliances
This section explains how to connect the systems that participate in the disaster recovery setup. It requires two Oracle Private Cloud Appliance systems installed at different sites, and a third system running an Oracle Enterprise Manager installation with Oracle Site Guard.
Collecting System Parameters for Disaster Recovery
To set up disaster recovery for your environment, you must collect certain information in advance. To fill out the parameters required to run the setup commands, you need the following details:
- Region ID
  Each Private Cloud Appliance system is defined as an independent region. You need the region IDs of the two systems that will become each other's fallback. To retrieve the region ID, use the show pcaSystem command in the Service CLI (see the sketch after this list).
- IP addresses in the data center network
  Each of the two ZFS Storage Appliances needs at least one IP address in the data center network. This IP address is assigned to the storage controller interface that is physically connected to the data center network. If your environment also contains optional high-performance storage, then two pairs of data center IP addresses are required.
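For reference, a quick check of the region ID from the Service CLI could look like the sketch below. The output is truncated and the field label is illustrative; the actual show pcaSystem output contains many more properties:
PCA-ADMIN> show pcaSystem
Command: show pcaSystem
Status: Success
Data:
  ...
  Region Name = pca1region      <-- illustrative label; look for the region property in the real output
  ...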
To complete the Oracle Site Guard configuration, you need the following details:
- The endpoints of both Private Cloud Appliance systems, where API calls are received. These are URIs, which are formatted as follows:
  http(s)://<myRegion>.<myDomain>:<port>
  The required ports are 30004 (https) and 30005 (http). For example:
  https://myprivatecloud.example.com:30004
  http://myprivatecloud.example.com:30005
- An administrative user name and password for authentication with the Private Cloud Appliance services and authorization of the disaster recovery API calls. These credentials are securely stored within Oracle Enterprise Manager.
Connecting the Components in the Disaster Recovery Setup
The ZFS Storage Appliances installed in the two Oracle Private Cloud Appliance racks must be connected to each other, in order to replicate the data that must be protected by the disaster recovery setup. This is a direct connection through the data center network; it does not use the uplinks from the spine switches to the data center.
To create the redundant replication connection, four cable connections are required at each of the two sites. The ZFS Storage Appliance has two controllers; you must connect both 25Gbit SFP28 interfaces of each controller's first dual-port Ethernet expansion card to the next-level data center switches. At the other site, the same four ports must also be cabled this way.
The replication connection must be used exclusively for data under the control of disaster recovery configurations. Any other data replicated over this connection might be automatically destroyed.
In the next phase, the network configuration is created on top of the interfaces you cabled into the data center network. On each storage controller the two interfaces are aggregated into a redundant 25Gbit connection. The aggregation interface is assigned an IP address: one controller owns the replication IP address for the standard performance storage pool; the other controller owns the replication IP for the high-performance storage pool, if one is present.
Note:
Link aggregation needs to be configured on the data center switches as well. The MTU of the ZFS Storage Appliance data links is 9000 bytes; set the data center switch MTU to 9216 bytes.

The administrators at the two sites are not required to configure the replication network manually. The configuration of the ZFS Storage Appliance network interfaces is automated through the drSetupService command in the Service CLI. When executing the command, the administrator provides the IP addresses and other configuration settings as command parameters. Use of the drSetupService command is described in the next section.
Your Oracle Enterprise Manager does not require additional installations specific to Private Cloud Appliance in order to perform disaster recovery tasks. It only needs to be able to reach the two appliances over the network. Oracle Site Guard is available by default in the software library of Oracle Enterprise Manager.
To allow Oracle Site Guard to manage failover operations between the two Private Cloud Appliance systems, you must set up both appliances as sites. You identify the two sites by their endpoint URIs, which are used to configure the disaster recovery scripts in the failover operation plans. You also provide a user name and password to allow Oracle Site Guard to authenticate with the two appliances.
For additional information and instructions, please refer to the product documentation of Oracle Site Guard and Oracle Enterprise Manager.
Setting Up Peering Between the ZFS Storage Appliances
Once the physical connection between the ZFS Storage Appliances has been established, you set them up as peers using the drSetupService command in the Service CLI. You run this command from both systems so that they operate as each other's replica.
The replication parameters for standard storage are mandatory with the setup command. If your Private Cloud Appliance systems also include high-performance storage, then add the replication parameters for the high-performance storage pool to the setup command.
However, only set up replication for high-performance storage if the high-performance storage pool is actually available on the ZFS Storage Appliances. If the pool is not yet available, re-run the setup command to add it at a later time, after it has been configured on the ZFS Storage Appliances.
Caution:
Verify all parameters carefully before executing the command. They cannot be changed or corrected once the configuration scripts are launched.
Syntax (entered on a single line):
drSetupService region=<primary_system_region_id>
  localIp=<primary_system_standard_replication_ip>
  remoteIp=<replica_system_standard_replication_ip>
  localIpPerf=<primary_system_performance_replication_ip>
  remoteIpPerf=<replica_system_performance_replication_ip>
The localIp and localIpPerf addresses must be specified in CIDR notation.
Examples:
- With only standard storage configured:
  system 1
  PCA-ADMIN> drSetupService region=pca1region \
  localIp=10.50.7.31/23 remoteIp=10.50.7.33
  system 2
  PCA-ADMIN> drSetupService region=pca2region \
  localIp=10.50.7.33/23 remoteIp=10.50.7.31
- With both standard and high-performance storage configured:
  system 1
  PCA-ADMIN> drSetupService region=pca1region \
  localIp=10.50.7.31/23 remoteIp=10.50.7.33 \
  localIpPerf=10.50.7.32/23 remoteIpPerf=10.50.7.34
  system 2
  PCA-ADMIN> drSetupService region=pca2region \
  localIp=10.50.7.33/23 remoteIp=10.50.7.31 \
  localIpPerf=10.50.7.34/23 remoteIpPerf=10.50.7.32
The script configures both ZFS Storage Appliances. After successful configuration of the replication interfaces, two more actions must be performed:
- generating CA certificates and uploading them to the ZFS Storage Appliance
- enabling replication over the interfaces you just configured
Configuring ZFS Storage Appliance Certificates
The configuration of the replication interfaces for disaster recovery introduces new IP addresses to the core configuration of the system. To allow these new hosts to be authenticated and authorized correctly, an updated CA certificate must be uploaded to the ZFS Storage Appliances.
After configuring the replication interfaces, but before enabling replication between the two storage appliances, generate a new CA certificate and upload it to the storage appliances.
- Log in to one of the management nodes.
- Execute the following commands:
# python3 -c "from pca_foundation.manager.mgmt_node import upload_system_cert; upload_system_cert()"
# /var/lib/pca-foundation/scripts/pca_factory_init.py --restart-http-zfs
Enabling Replication for Disaster Recovery
Caution:
Before enabling replication, please refer to the Oracle Private Cloud Appliance Release Notes. Make sure you have read and understand this section: Enabling DR Replication Fails on Second System.
To enable replication between the two storage appliances, using the interfaces you configured earlier, re-run the same drSetupService command from the Service CLI, but this time followed by enableReplication=True. You must also provide the remotePassword parameter to authenticate with the other storage appliance and complete the peering setup.
Examples:
- With only standard storage configured:
  system 1
  PCA-ADMIN> drSetupService region=pca1region \
  localIp=10.50.7.31 remoteIp=10.50.7.33 \
  enableReplication=True remotePassword=********
  system 2
  PCA-ADMIN> drSetupService region=pca2region \
  localIp=10.50.7.33 remoteIp=10.50.7.31 \
  enableReplication=True remotePassword=********
- With both standard and high-performance storage configured:
  system 1
  PCA-ADMIN> drSetupService region=pca1region \
  localIp=10.50.7.31 remoteIp=10.50.7.33 \
  localIpPerf=10.50.7.32 remoteIpPerf=10.50.7.34 \
  enableReplication=True remotePassword=********
  system 2
  PCA-ADMIN> drSetupService region=pca2region \
  localIp=10.50.7.33 remoteIp=10.50.7.31 \
  localIpPerf=10.50.7.34 remoteIpPerf=10.50.7.32 \
  enableReplication=True remotePassword=********
At this stage, the ZFS Storage Appliances in the disaster recovery setup have been successfully peered. The storage appliances are ready to perform scheduled data replication every 5 minutes. The data to be replicated is based on the DR configurations you create. See Managing Disaster Recovery Configurations.
Managing Disaster Recovery Configurations
This section explains how to configure disaster recovery settings on the two Oracle Private Cloud Appliance systems you intend to set up as each other's fallback.
Creating a DR Configuration
A DR configuration is the parent object to which you add compute instances that you want to protect against system outages.
Using the Service CLI
- Gather the information that you need to run the command:
  - a unique name for the DR configuration
  - a unique name for the associated ZFS storage project
- Create an empty DR configuration with the drCreateConfig command.
Syntax (entered on a single line):
drCreateConfig configName=<DR_configuration_name> project=<ZFS_storage_project_name>
Example:
PCA-ADMIN> drCreateConfig configName=drConfig1 project=drProject1
Command: drCreateConfig configName=drConfig1 project=drProject1
Status: Success
Time: 2021-08-17 07:19:33,163 UTC
Data:
  Message = Successfully started job to create config drConfig1
  Job Id = 252041b1-ff44-4c8e-a3de-11c1e47d9217
- Use the job ID to check the status of the operation you started.
PCA-ADMIN> drGetJob jobid=252041b1-ff44-4c8e-a3de-11c1e47d9217
Command: drGetJob jobid=252041b1-ff44-4c8e-a3de-11c1e47d9217
Status: Success
Time: 2021-08-17 07:21:07,021 UTC
Data:
  Type = create_config
  Job Id = 252041b1-ff44-4c8e-a3de-11c1e47d9217
  Status = finished
  Start Time = 2021-08-17 07:19:33.507048
  End Time = 2021-08-17 07:20:16.783743
  Result = success
  Message = job successfully retrieved
  Response = Successfully created DR config drConfig1: 439ad078-7e6a-4908-affa-ac89210d76ac
- When the DR configuration is created, the storage project for data replication is set up on the ZFS Storage Appliances.
  Note the DR configuration ID. You need it for all subsequent commands to modify the configuration.
- To display a list of existing DR configurations, use the drGetConfigs command.
PCA-ADMIN> drGetConfigs
Command: drGetConfigs
Status: Success
Time: 2021-08-17 07:44:54,443 UTC
Data:
  id                                     configName
  --                                     ----------
  439ad078-7e6a-4908-affa-ac89210d76ac   drConfig1
  e8291afa-a413-4932-880a-abb8ac22c85d   drConfig2
  7ad05d9f-731c-41b8-b477-35da4b999071   drConfig3
- To display the status and details of a DR configuration, use the drGetConfig command.
Syntax:
drGetConfig drConfigId=<DR_configuration_id>
Example:
PCA-ADMIN> drGetConfig drConfigId=439ad078-7e6a-4908-affa-ac89210d76ac
Command: drGetConfig drConfigId=439ad078-7e6a-4908-affa-ac89210d76ac
Status: Success
Time: 2021-08-17 07:47:53,401 UTC
Data:
  Type = DrConfig
  Config State = ENABLED
  Config Name = drConfig1
  Config Id = 439ad078-7e6a-4908-affa-ac89210d76ac
  Project Id = drProject1
Adding Site Mappings to a DR Configuration
Site mappings are added to determine how and where on the replica system the instances should be brought back up in case the primary system experiences an outage and a failover is triggered. Each site mapping contains a source object for the primary system and a corresponding target object for the replica system. Make sure that these resources exist on both the primary and replica system before you add the site mappings to the DR configuration.
These are the site mapping types you can add to a DR configuration:
- Compartment: specifies that, if a failover occurs, instances from the source compartment must be brought up in the target compartment on the replica system
- Subnet: specifies that, if a failover occurs, instances connected to the source subnet must be connected to the target subnet on the replica system
- Network security group: specifies that, if a failover occurs, instances that belong to the source network security group must be included in the target security group on the replica system
Using the Service CLI
- Gather the information that you need to run the command:
  - DR configuration ID (drGetConfigs)
  - Mapping source and target object OCIDs
    Use the Compute Enclave UI or CLI on the primary and replica system respectively (a usage sketch follows at the end of this procedure). CLI commands:
    - oci iam compartment list
    - oci network subnet list --compartment-id "ocid1.compartment.....uniqueID"
    - oci network nsg list --compartment-id "ocid1.compartment.....uniqueID"
- Add a site mapping to the DR configuration with the drAddSiteMapping command.
Syntax (entered on a single line):
drAddSiteMapping drConfigId=<DR_configuration_id> objType=[compartment | subnet | networksecuritygroup] sourceId=<source_object_OCID> targetId=<target_object_OCID>
Examples:
PCA-ADMIN> drAddSiteMapping \
drConfigId=63b36a80-7047-42bd-8b97-8235269e240d \
objType=compartment \
sourceId="ocid1.compartment.....<region1>...uniqueID" \
targetId="ocid1.compartment.....<region2>...uniqueID"
Command: drAddSiteMapping drConfigId=63b36a80-7047-42bd-8b97-8235269e240d objType=compartment sourceId="ocid1.compartment.....<region1>...uniqueID" targetId="ocid1.compartment.....<region2>...uniqueID"
Status: Success
Time: 2021-08-17 09:07:24,957 UTC
Data:
  9244634e-431f-43a1-89ab-5d25905d43f9

PCA-ADMIN> drAddSiteMapping \
drConfigId=63b36a80-7047-42bd-8b97-8235269e240d \
objType=subnet \
sourceId="ocid1.subnet.....<region1>...uniqueID" \
targetId="ocid1.subnet.....<region2>...uniqueID"
Command: drAddSiteMapping drConfigId=63b36a80-7047-42bd-8b97-8235269e240d objType=subnet sourceId="ocid1.subnet.....<region1>...uniqueID" targetId="ocid1.subnet.....<region2>...uniqueID"
Status: Success
Time: 2021-08-17 09:07:24,957 UTC
Data:
  d1bf2cf2-d8c7-4271-b8b6-cdf757648175

PCA-ADMIN> drAddSiteMapping \
drConfigId=63b36a80-7047-42bd-8b97-8235269e240d \
objType=networksecuritygroup \
sourceId="ocid1.nsg.....<region1>...uniqueID" \
targetId="ocid1.nsg.....<region2>...uniqueID"
Command: drAddSiteMapping drConfigId=63b36a80-7047-42bd-8b97-8235269e240d objType=networksecuritygroup sourceId="ocid1.nsg.....<region1>...uniqueID" targetId="ocid1.nsg.....<region2>...uniqueID"
Status: Success
Time: 2021-08-17 09:07:24,957 UTC
Data:
  422f8892-ba0a-4a89-bc37-61b5c0fbbbaa
- Repeat the command with the OCIDs of all the source and target objects that you want to include in the site mappings of the DR configuration.
  Note:
  Mappings for compartments and subnets are always required in order to perform a failover or switchover. Missing mappings will be detected by the Oracle Site Guard scripts during a precheck on the replica system.
- To display the list of site mappings included in the DR configuration, use the drGetSiteMappings command. The DR configuration ID is a required parameter.
Syntax:
drGetSiteMappings drConfigId=<DR_configuration_id>
Example:
PCA-ADMIN> drGetSiteMappings drConfigId=63b36a80-7047-42bd-8b97-8235269e240d
Command: drGetSiteMappings drConfigId=63b36a80-7047-42bd-8b97-8235269e240d
Status: Success
Time: 2021-08-17 09:19:22,580 UTC
Data:
  id                                     name
  --                                     ----
  d1bf2cf2-d8c7-4271-b8b6-cdf757648175   null
  9244634e-431f-43a1-89ab-5d25905d43f9   null
  422f8892-ba0a-4a89-bc37-61b5c0fbbbaa   null
- To display the status and details of a site mapping included in the DR configuration, use the drGetSiteMapping command.
Syntax (entered on a single line):
drGetSiteMapping drConfigId=<DR_configuration_id> mappingId=<site_mapping_id>
Example:
PCA-ADMIN> drGetSiteMapping drConfigId=63b36a80-7047-42bd-8b97-8235269e240d mappingId=d1bf2cf2-d8c7-4271-b8b6-cdf757648175
Command: drGetSiteMapping drConfigId=63b36a80-7047-42bd-8b97-8235269e240d mappingId=d1bf2cf2-d8c7-4271-b8b6-cdf757648175
Status: Success
Time: 2021-08-17 09:25:53,148 UTC
Data:
  Type = DrSiteMapping
  Object Type = subnet
  Source Id = ocid1.subnet.....<region1>...uniqueID
  Target Id = ocid1.subnet.....<region2>...uniqueID
  Work State = Normal
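As a convenience when gathering the mapping OCIDs, the OCI CLI commands listed in the first step of this procedure can be combined with the CLI's generic --query (JMESPath) and --output options to print only names and OCIDs. This is an optional sketch; it assumes the Compute Enclave CLI is already configured with a profile for the region you are querying, and the compartment OCID is a placeholder:
oci iam compartment list \
  --query 'data[*].{name:name, ocid:id}' --output table

oci network subnet list --compartment-id "ocid1.compartment.....uniqueID" \
  --query 'data[*].{name:"display-name", ocid:id}' --output table
Run the same commands against the primary and the replica system to collect the source and target OCIDs for each mapping.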
Removing Site Mappings from a DR Configuration
You can remove a site mapping from the DR configuration if it is no longer required.
Using the Service CLI
- Gather the information that you need to run the command:
  - DR configuration ID (drGetConfigs)
  - Site mapping ID (drGetSiteMappings)
- Remove the selected site mapping from the DR configuration with the drRemoveSiteMapping command.
Syntax (entered on a single line):
drRemoveSiteMapping drConfigId=<DR_configuration_id> mappingId=<site_mapping_id>
Example:
PCA-ADMIN> drRemoveSiteMapping drConfigId=63b36a80-7047-42bd-8b97-8235269e240d mappingId=422f8892-ba0a-4a89-bc37-61b5c0fbbbaa
Command: drRemoveSiteMapping drConfigId=63b36a80-7047-42bd-8b97-8235269e240d mappingId=422f8892-ba0a-4a89-bc37-61b5c0fbbbaa
Status: Success
Time: 2021-08-17 09:41:43,319 UTC
- Repeat the command with the IDs of all the site mappings that you want to remove from the DR configuration.
Adding Instances to a DR Configuration
Once a DR configuration has been created and the relevant site mappings have been set up, you add the required compute instances. Their data and disks are stored in the ZFS storage project associated with the DR configuration, and replicated over the network connection between the ZFS Storage Appliances of both Private Cloud Appliance systems.
If your system contains optional high-performance disk shelves, you must set up peering accordingly between the ZFS Storage Appliances. As a result, two ZFS projects are created for each DR configuration: one in the standard pool and one in the high-performance pool. When you add instances to the DR configuration that have disks running on standard as well as high-performance storage, those storage resources are automatically added to the ZFS project in the appropriate pool.
Using the Service CLI
- Gather the information that you need to run the command:
  - DR configuration ID (drGetConfigs)
  - Instance OCIDs from the Compute Enclave UI or CLI (oci compute instance list --compartment-id <compartment_OCID>); a usage sketch follows at the end of this procedure
- Add a compute instance to the DR configuration with the drAddComputeInstance command.
Syntax (entered on a single line):
drAddComputeInstance drConfigId=<DR_configuration_id> instanceId=<instance_OCID>
Example:
PCA-ADMIN> drAddComputeInstance \
drConfigId=63b36a80-7047-42bd-8b97-8235269e240d \
instanceId=ocid1.instance.....<region1>...uniqueID
Command: drAddComputeInstance drConfigId=63b36a80-7047-42bd-8b97-8235269e240d instanceId=ocid1.instance.....<region1>...uniqueID
Status: Success
Time: 2021-08-17 07:24:35,186 UTC
Data:
  Message = Successfully started job to add instance ocid1.instance.....<region1>...uniqueID to DR config 63b36a80-7047-42bd-8b97-8235269e240d
  Job Id = 8dcbd22d-69b0-4319-b09f-1a4df847e9df
- Use the job ID to check the status of the operation you started.
PCA-ADMIN> drGetJob jobId=8dcbd22d-69b0-4319-b09f-1a4df847e9df
Command: drGetJob jobId=8dcbd22d-69b0-4319-b09f-1a4df847e9df
Status: Success
Time: 2021-08-17 07:36:27,719 UTC
Data:
  Type = add_computeinstance
  Job Id = 8dcbd22d-69b0-4319-b09f-1a4df847e9df
  Status = finished
  Start Time = 2021-08-17 07:24:36.776193
  End Time = 2021-08-17 07:26:59.406929
  Result = success
  Message = job successfully retrieved
  Response = Successfully added instance [ocid1.instance.....<region1>...uniqueID] to DR config [63b36a80-7047-42bd-8b97-8235269e240d]
- Repeat the drAddComputeInstance command with the OCIDs of all the compute instances that you want to add to the DR configuration.
- To display the list of instances included in the DR configuration, use the drGetComputeInstances command. The DR configuration ID is a required parameter.
Syntax:
drGetComputeInstances drConfigId=<DR_configuration_id>
Example:
PCA-ADMIN> drGetComputeInstances drConfigId=63b36a80-7047-42bd-8b97-8235269e240d
Command: drGetComputeInstances drConfigId=63b36a80-7047-42bd-8b97-8235269e240d
Status: Success
Time: 2021-08-17 08:33:39,586 UTC
Data:
  id                                                  name
  --                                                  ----
  ocid1.instance.....<region1>...instance1_uniqueID   null
  ocid1.instance.....<region1>...instance2_uniqueID   null
  ocid1.instance.....<region1>...instance3_uniqueID   null
- To display the status and details of an instance included in the DR configuration, use the drGetComputeInstance command.
Syntax (entered on a single line):
drGetComputeInstance drConfigId=<DR_configuration_id> instanceId=<instance_OCID>
Example:
PCA-ADMIN> drGetComputeInstance \
drConfigId=63b36a80-7047-42bd-8b97-8235269e240d \
instanceId=ocid1.instance.....<region1>...instance1_uniqueID
Command: drGetComputeInstance drConfigId=63b36a80-7047-42bd-8b97-8235269e240d instanceId=ocid1.instance.....<region1>...instance1_uniqueID
Status: Success
Time: 2021-08-17 08:34:42,413 UTC
Data:
  Type = ComputeInstance
  Compartment Id = ocid1.compartment........uniqueID
  Boot Volume Id = ocid1.bootvolume........uniqueID
  Compute Instance Shape = VM.PCAStandard1.8
  Work State = Normal
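As with site mappings, the instance OCIDs can be collected with the Compute Enclave CLI. The following sketch is optional and assumes a configured CLI profile; the compartment OCID is a placeholder and the --query filter merely trims the output to names and OCIDs:
oci compute instance list --compartment-id "ocid1.compartment.....uniqueID" \
  --query 'data[*].{name:"display-name", ocid:id}' --output table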
Removing Instances from a DR Configuration
Instances can only be part of a single DR configuration. You can remove a compute instance from the DR configuration to which it was added.
Using the Service CLI
- Gather the information that you need to run the command:
  - DR configuration ID (drGetConfigs)
  - Instance OCID (drGetComputeInstances)
- Remove the selected compute instance from the DR configuration with the drRemoveComputeInstance command.
Syntax (entered on a single line):
drRemoveComputeInstance drConfigId=<DR_configuration_id> instanceId=<instance_OCID>
Example:
PCA-ADMIN> drRemoveComputeInstance \
drConfigId=63b36a80-7047-42bd-8b97-8235269e240d \
instanceId=ocid1.instance.....<region1>...instance3_uniqueID
Command: drRemoveComputeInstance drConfigId=63b36a80-7047-42bd-8b97-8235269e240d instanceId=ocid1.instance.....<region1>...instance3_uniqueID
Status: Success
Time: 2021-08-17 08:45:59,718 UTC
Data:
  Message = Successfully started job to remove instance ocid1.instance.....<region1>...instance3_uniqueID from DR config 63b36a80-7047-42bd-8b97-8235269e240d
  Job Id = 303b42ff-077c-4504-ac73-25930652f73a
- Use the job ID to check the status of the operation you started.
PCA-ADMIN> drGetJob jobId=303b42ff-077c-4504-ac73-25930652f73a
Command: drGetJob jobId=303b42ff-077c-4504-ac73-25930652f73a
Status: Success
Time: 2021-08-17 08:56:27,719 UTC
Data:
  Type = remove_computeinstance
  Job Id = 303b42ff-077c-4504-ac73-25930652f73a
  Status = finished
  Start Time = 2021-08-17 08:46:00.641212
  End Time = 2021-08-17 08:47:19.142262
  Result = success
  Message = job successfully retrieved
  Response = Successfully removed instance [ocid1.instance.....<region1>...instance3_uniqueID] from DR config [63b36a80-7047-42bd-8b97-8235269e240d]
- Repeat the drRemoveComputeInstance command with the OCIDs of all the compute instances that you want to remove from the DR configuration.
Refreshing a DR Configuration
To ensure that the replication information stored in a DR configuration is updated with all the latest changes in your environment, you can refresh the DR configuration.
Using the Service CLI
- Look up the ID of the DR configuration you want to refresh (drGetConfigs).
- Refresh the data stored in the selected DR configuration with the drRefreshConfig command.
Syntax:
drRefreshConfig drConfigId=<DR_configuration_id>
Example:
PCA-ADMIN> drRefreshConfig drConfigId=63b36a80-7047-42bd-8b97-8235269e240d
Command: drRefreshConfig drConfigId=63b36a80-7047-42bd-8b97-8235269e240d
Status: Success
Time: 2021-08-17 10:43:33,241 UTC
Data:
  Message = Successfully started job to refresh DR config 63b36a80-7047-42bd-8b97-8235269e240d
  Job Id = 205eb34e-f416-41d3-95a5-506a1d891fdb
- Use the job ID to check the status of the operation you started.
PCA-ADMIN> drGetJob jobId=205eb34e-f416-41d3-95a5-506a1d891fdb
Command: drGetJob jobId=205eb34e-f416-41d3-95a5-506a1d891fdb
Status: Success
Time: 2021-08-17 10:51:27,719 UTC
Data:
  Type = refresh_config
  Job Id = 205eb34e-f416-41d3-95a5-506a1d891fdb
  Status = finished
  Start Time = 2021-08-17 10:43:34.264828
  End Time = 2021-08-17 10:45:12.718561
  Result = success
  Message = job successfully retrieved
  Response = Successfully refreshed DR config [63b36a80-7047-42bd-8b97-8235269e240d]
Deleting a DR Configuration
When you no longer need a DR configuration, you can remove it with a single command. Deleting a DR configuration also removes all of its site mappings and cleans up the associated storage projects on the ZFS Storage Appliances of the primary and replica system. However, you must stop all compute instances that are part of the DR configuration before you can delete it.
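If you prefer to stop the member instances from the Compute Enclave CLI rather than the UI, an invocation along these lines can be used for each instance. The instance OCID shown is a placeholder:
oci compute instance action --action STOP \
  --instance-id "ocid1.instance.....uniqueID"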
Using the Service CLI
- Stop all the compute instances that are part of the DR configuration you want to delete.
- Look up the ID of the DR configuration you want to delete (drGetConfigs).
- Delete the selected DR configuration with the drDeleteConfig command.
Syntax:
drDeleteConfig drConfigId=<DR_configuration_id>
Example:
PCA-ADMIN> drDeleteConfig drConfigId=63b36a80-7047-42bd-8b97-8235269e240d
Command: drDeleteConfig drConfigId=63b36a80-7047-42bd-8b97-8235269e240d
Status: Success
Time: 2021-08-17 14:45:19,634 UTC
Data:
  Message = Successfully started job to delete DR config 63b36a80-7047-42bd-8b97-8235269e240d
  Job Id = d2c1198d-f521-4b8d-a9f1-c36c7965d567
- Use the job ID to check the status of the operation you started.
PCA-ADMIN> drGetJob jobId=d2c1198d-f521-4b8d-a9f1-c36c7965d567
Command: drGetJob jobId=d2c1198d-f521-4b8d-a9f1-c36c7965d567
Status: Success
Time: 2021-08-17 16:18:33,462 UTC
Data:
  Type = delete_config
  Job Id = d2c1198d-f521-4b8d-a9f1-c36c7965d567
  Status = finished
  Start Time = 2021-08-17 14:45:20.105569
  End Time = 2021-08-17 14:53:32.405569
  Result = success
  Message = job successfully retrieved
  Response = Successfully deleted DR config [63b36a80-7047-42bd-8b97-8235269e240d]