8 Disaster Recovery

This chapter explains how an administrator configures disaster recovery so that two Oracle Private Cloud Appliance systems in different physical locations operate as each other's fallback in case an outage occurs at one site.

Implementation details and technical background information for this feature can be found in the Oracle Private Cloud Appliance Concepts Guide. Refer to the section "Disaster Recovery" in the chapter Appliance Administration Overview.

Enabling Disaster Recovery on the Appliances

This section explains how to connect the systems that participate in the disaster recovery setup. It requires two Oracle Private Cloud Appliance systems installed at different sites, and a third system running an Oracle Enterprise Manager installation with Oracle Site Guard.

Collecting System Parameters for Disaster Recovery

To set up disaster recovery for your environment, you must collect certain information in advance. To provide the parameters required by the setup commands, you need the following details:

  • IP addresses in the data center network

    Each of the two ZFS Storage Appliances needs at least one IP address in the data center network. This IP address is assigned to the storage controller interface that is physically connected to the data center network. If your environment also contains optional high-performance storage, then two pairs of data center IP addresses are required.

  • Data center subnet and gateway

    The ZFS Storage Appliances need to be able to exchange data over the network. Their network interfaces connect them to a local subnet. For each interface included in the disaster recovery configuration, the subnet address and gateway address are required.

To complete the Oracle Site Guard configuration, you need the following details:

  • The endpoints of both Private Cloud Appliance systems, where API calls are received. These are URIs, which are formatted as follows: https://<myRegion>.<myDomain>

    For example:

    https://myprivatecloud.example.com
  • An administrative user name and password for authentication with the Private Cloud Appliance services and authorization of the disaster recovery API calls. These credentials are securely stored within Oracle Enterprise Manager.

Connecting the Components in the Disaster Recovery Setup

The ZFS Storage Appliances installed in the two Oracle Private Cloud Appliance racks must be connected to each other, in order to replicate the data that must be protected by the disaster recovery setup. This is a direct connection through the data center network; it does not use the uplinks from the spine switches to the data center.

To create the redundant replication connection, four cable connections are required at each of the two sites. The ZFS Storage Appliance has two controllers; you must connect both 25Gbit SFP28 interfaces of each controller's first dual-port Ethernet expansion card to the next-level data center switches. At the other site, the same four ports must also be cabled this way.

The replication connection must be used exclusively for data under the control of disaster recovery configurations. Any other data replicated over this connection might be automatically destroyed.

In the next phase, the network configuration is created on top of the interfaces you cabled into the data center network. On each storage controller the two interfaces are aggregated into a redundant 25Gbit connection. The aggregation interface is assigned an IP address: one controller owns the replication IP address for the standard performance storage pool; the other controller owns the replication IP for the high-performance storage pool, if one is present.

Note:

Link aggregation needs to be configured on the data center switches as well. The MTU of the ZFS Storage Appliance data links is 9000 bytes; set the data center switch MTU to 9216 bytes.

The administrators at the two sites are not required to configure the replication network manually. The configuration of the ZFS Storage Appliance network interfaces is automated through the drSetupService command in the Service CLI. When executing the command, the administrator provides the IP addresses and other configuration settings as command parameters. Use of the drSetupService command is described in the next section.

Your Oracle Enterprise Manager does not require additional installations specific to Private Cloud Appliance in order to perform disaster recovery tasks. It only needs to be able to reach the two appliances over the network. Oracle Site Guard is available by default in the software library of Oracle Enterprise Manager.

To allow Oracle Site Guard to manage failover operations between the two Private Cloud Appliance systems, you must set up both appliances as sites. You identify the two sites by their endpoint URIs, which are used to configure the disaster recovery scripts in the failover operation plans. You also provide a user name and password to allow Oracle Site Guard to authenticate with the two appliances.

For additional information and instructions, please refer to the product documentation of Oracle Site Guard and Oracle Enterprise Manager.

Setting Up Peering Between the ZFS Storage Appliances

Once the physical connection between the ZFS Storage Appliances has been established, you set them up as peers using the drSetupService command in the Service CLI. You run this command from both systems so that they operate as each other's replica.

The replication parameters for standard storage are mandatory with the setup command. If your Private Cloud Appliance systems also include high-performance storage, then add the replication parameters for the high-performance storage pool to the setup command.

However, only set up replication for high-performance storage if the high-performance storage pool is actually available on the ZFS Storage Appliances. If it is not, re-run the setup command at a later time to add the high-performance storage pool, after it has been configured on the ZFS Storage Appliances.

When you set up the replication interfaces for the disaster recovery service, the system assumes that the gateway is the first host address in the subnet of the local IP address you specify. This applies to the replication interfaces for both standard and high-performance storage. For example, if you specify 10.50.7.31/23 as the local IP address, the subnet is 10.50.6.0/23 and the gateway is assumed to be 10.50.6.1. If your gateway address is different, you must add it to the drSetupService command using the gatewayIp and gatewayIpPerf parameters, as in the examples below.

Optionally, you can also set a maximum number of DR configurations and a retention period for disaster recovery job details.

Syntax (entered on a single line):

drSetupService
localIp=<primary_system_standard_replication_ip> (in CIDR notation)
remoteIp=<replica_system_standard_replication_ip>
localIpPerf=<primary_system_performance_replication_ip> (in CIDR notation)
remoteIpPerf=<replica_system_performance_replication_ip>
[Optional Parameters:]
  gatewayIp=<local_subnet_gateway_ip> (default: first host IP in localIp subnet)
  gatewayIpPerf=<local_subnet_gateway_ip> (default: first host IP in localIpPerf subnet)
  maxConfig=<number_DR_configs> (default and maximum is 20)
  jobRetentionHours=<hours> (default and minimum is 24)

Examples:

  • With only standard storage configured:

    system 1

    PCA-ADMIN> drSetupService \
    localIp=10.50.7.31/23 gatewayIp=10.50.7.10 remoteIp=10.50.7.33

    system 2

    PCA-ADMIN> drSetupService \
    localIp=10.50.7.33/23 gatewayIp=10.50.7.10 remoteIp=10.50.7.31
  • With both standard and high-performance storage configured:

    system 1

    PCA-ADMIN> drSetupService \
    localIp=10.50.7.31/23 gatewayIp=10.50.7.10 remoteIp=10.50.7.33 \
    localIpPerf=10.50.7.32/23 gatewayIpPerf=10.50.7.10 remoteIpPerf=10.50.7.34

    system 2

    PCA-ADMIN> drSetupService \
    localIp=10.50.7.33/23 gatewayIp=10.50.7.10 remoteIp=10.50.7.31 \
    localIpPerf=10.50.7.34/23 gatewayIpPerf=10.50.7.10 remoteIpPerf=10.50.7.32
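
The optional maxConfig and jobRetentionHours parameters can be combined with the required parameters in the same command. For example, to also limit the number of DR configurations to 12 and extend the job retention time to 48 hours on system 1 (illustrative values; enter the equivalent command on system 2):

PCA-ADMIN> drSetupService \
localIp=10.50.7.31/23 gatewayIp=10.50.7.10 remoteIp=10.50.7.33 \
maxConfig=12 jobRetentionHours=48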

The script configures both ZFS Storage Appliances.

After successful configuration of the replication interfaces, you must enable replication over the interfaces you just configured.

Enabling Replication for Disaster Recovery

To enable replication between the two storage appliances, using the interfaces you configured earlier, re-run the same drSetupService command from the Service CLI, but this time followed by enableReplication=True. You must also provide the remotePassword to authenticate with the other storage appliance and complete the peering setup.

Examples:

  • With only standard storage configured:

    system 1

    PCA-ADMIN> drSetupService \
    localIp=10.50.7.31/23 gatewayIp=10.50.7.10 remoteIp=10.50.7.33 \
    enableReplication=True remotePassword=********

    system 2

    PCA-ADMIN> drSetupService \
    localIp=10.50.7.33/23 gatewayIp=10.50.7.10 remoteIp=10.50.7.31 \
    enableReplication=True remotePassword=********
  • With both standard and high-performance storage configured:

    system 1

    PCA-ADMIN> drSetupService \
    localIp=10.50.7.31/23 gatewayIp=10.50.7.10 remoteIp=10.50.7.33 \
    localIpPerf=10.50.7.32/23 gatewayIpPerf=10.50.7.10 remoteIpPerf=10.50.7.34 \
    enableReplication=True remotePassword=********

    system 2

    PCA-ADMIN> drSetupService \
    localIp=10.50.7.33/23 gatewayIp=10.50.7.10 remoteIp=10.50.7.31 \
    localIpPerf=10.50.7.34/23 gatewayIpPerf=10.50.7.10 remoteIpPerf=10.50.7.32 \
    enableReplication=True remotePassword=********

At this stage, the ZFS Storage Appliances in the disaster recovery setup have been successfully peered. The storage appliances are ready to perform scheduled data replication every 5 minutes. The data to be replicated is based on the DR configurations you create. See Managing Disaster Recovery Configurations.
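
To verify the peering on each system, you can use the drShowService command. The following is a sketch of the expected output on system 1 of the standard-storage example above, assuming the default maxConfig and job retention values; the exact fields and values in your environment may differ.

PCA-ADMIN> drShowService
Command: drShowService
Status: Success
Data:
  Local Ip = 10.50.7.31/23
  Remote Ip = 10.50.7.33
  Replication = ENABLED
  Replication High = DISABLED
  Message = Successfully retrieved site configuration
  maxConfig = 20
  gateway IP = 10.50.7.10
  Job Retention Hours = 24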

Modifying the ZFS Storage Appliance Peering Setup

After you have set up the disaster recovery service and enabled replication between the systems, you can modify all parameters of the peering configuration, either individually or combined in a single command. You make changes to the service using the drUpdateService command in the Service CLI.

Syntax (entered on a single line):

drUpdateService
localIp=<primary_system_standard_replication_ip> (in CIDR notation)
remoteIp=<replica_system_standard_replication_ip>
localIpPerf=<primary_system_performance_replication_ip> (in CIDR notation)
remoteIpPerf=<replica_system_performance_replication_ip>
gatewayIp=<local_subnet_gateway_ip> (default: first host IP in localIp subnet)
gatewayIpPerf=<local_subnet_gateway_ip> (default: first host IP in localIpPerf subnet)
maxConfig=<number_DR_configs> (default and maximum is 20)
jobRetentionHours=<hours> (default and minimum is 24)

Example 1 – simple parameter change

This example shows how you change the job retention time from 24 to 48 hours and reduce the maximum number of DR configurations from 20 to 12.

PCA-ADMIN> drUpdateService jobRetentionHours=48 maxConfig=12
Command: drUpdateService jobRetentionHours=48 maxConfig=12
Status: Success
Time: 2022-08-11 09:20:48,570 UTC
Data:
  Message = Successfully started job to update DR admin service
  Job Id = ec64cef4-ba68-493d-89c8-22df51553cd8

Use the drShowService command to check the current configuration. Run the command to display the configuration parameters before you modify them. Run it again afterwards to confirm that your changes have been applied successfully.

PCA-ADMIN> drShowService
Command: drShowService
Status: Success
Time: 2022-08-11 09:23:54,951 UTC
Data:
  Local Ip = 10.50.7.31/23
  Remote Ip = 10.50.7.33
  Replication = ENABLED
  Replication High = DISABLED
  Message = Successfully retrieved site configuration
  maxConfig = 12
  gateway IP = 10.50.7.10
  Job Retention Hours = 48

Example 2 – replication IP change

There may be network changes in the data center that require you to use different subnets and IP addresses for the replication interfaces configured in the disaster recovery service. This configuration change must be applied in multiple commands on the two peer systems, and in a specific order. If your systems contain both standard and high-performance storage, as in the example below, change the replication interface settings for both storage types in the same order.

  1. Update the local IP and gateway parameters on system 1. Leave the remote IPs unchanged.

    PCA-ADMIN> drUpdateService \
    localIp=10.100.33.83/28 gatewayIp=10.100.33.81 \
    localIpPerf=10.100.33.84/28 gatewayIpPerf=10.100.33.81
  2. Update the local IP, gateway, and remote IP parameters on system 2.

    PCA-ADMIN> drUpdateService \
    localIp=10.100.33.88/28 gatewayIp=10.100.33.81 remoteIp=10.100.33.83 \
    localIpPerf=10.100.33.89/28 gatewayIpPerf=10.100.33.81 remoteIpPerf=10.100.33.84
  3. Update the remote IP parameters on system 1.

    PCA-ADMIN> drUpdateService \
    remoteIp=10.100.33.88 remoteIpPerf=10.100.33.89
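
After the final step, you can run the drShowService command on both systems to confirm that the new replication addresses are in effect. The following sketch shows the key fields expected on system 1 after the change in this example; the remaining fields and exact output may differ in your environment.

PCA-ADMIN> drShowService
Command: drShowService
Status: Success
Data:
  Local Ip = 10.100.33.83/28
  Remote Ip = 10.100.33.88
  Replication = ENABLED
  gateway IP = 10.100.33.81
  [...]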

Unconfiguring the ZFS Storage Appliance Peering Setup

If a reset has been performed on one or both of the systems in your disaster recovery solution, and you need to unconfigure the disaster recovery service to remove the entire peering setup between the ZFS Storage Appliances, use the drDeleteService command in the Service CLI.

Caution:

This command requires no additional parameters. Be careful when entering it at the PCA-ADMIN> prompt, to avoid executing it unintentionally.

You cannot unconfigure the disaster recovery service while DR configurations still exist. Proceed as follows:

  1. Remove all DR configurations from the two systems that have been configured as each other's replica.

  2. Log in to the Service CLI on one of the systems and enter the drDeleteService command.

  3. Log in to the Service CLI on the second system and enter the drDeleteService command there as well.
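
Because the command takes no additional parameters, the entry on each system is simply:

PCA-ADMIN> drDeleteService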

When the disaster recovery service is not configured, the drShowService command returns an error.

PCA-ADMIN> drShowService
Command: drShowService
Status: Failure
Time: 2022-08-11 12:31:22,840 UTC
Error Msg: PCA_GENERAL_000001: An exception occurred during processing: Operation failed. 
[...]
Error processing dr-admin.service.show response: dr-admin.service.show failed. Service not set up.

Managing Disaster Recovery Configurations

This section explains how to configure disaster recovery settings on the two Oracle Private Cloud Appliance systems you intend to set up as each other's fallback.

Creating a DR Configuration

A DR configuration is the parent object to which you add compute instances that you want to protect against system outages.

Using the Service CLI

  1. Gather the information that you need to run the command:

    • a unique name for the DR configuration

    • a unique name for the associated ZFS storage project

  2. Create an empty DR configuration with the drCreateConfig command.

    Syntax (entered on a single line):

    drCreateConfig 
    configName=<DR_configuration_name>
    project=<ZFS_storage_project_name>

    Example:

    PCA-ADMIN> drCreateConfig configName=drConfig1 project=drProject1
    Command: drCreateConfig configName=drConfig1 project=drProject1
    Status: Success
    Time: 2021-08-17 07:19:33,163 UTC
    Data:
      Message = Successfully started job to create config drConfig1
      Job Id = 252041b1-ff44-4c8e-a3de-11c1e47d9217
  3. Use the job ID to check the status of the operation you started.

    PCA-ADMIN> drGetJob jobid=252041b1-ff44-4c8e-a3de-11c1e47d9217
    Command: drGetJob jobid=252041b1-ff44-4c8e-a3de-11c1e47d9217
    Status: Success
    Time: 2021-08-17 07:21:07,021 UTC
    Data:
      Type = create_config
      Job Id = 252041b1-ff44-4c8e-a3de-11c1e47d9217
      Status = finished
      Start Time = 2021-08-17 07:19:33.507048
      End Time = 2021-08-17 07:20:16.783743
      Result = success
      Message = job successfully retrieved
      Response = Successfully created DR config drConfig1: 439ad078-7e6a-4908-affa-ac89210d76ac
  4. When the DR configuration is created, the storage project for data replication is set up on the ZFS Storage Appliances.

    Note the DR configuration ID. You need it for all subsequent commands to modify the configuration.

  5. To display a list of existing DR configurations, use the drGetConfigs command.

    PCA-ADMIN> drGetConfigs
    Command: drGetConfigs
    Status: Success
    Time: 2021-08-17 07:44:54,443 UTC
    Data:
      id configName
      -- ----------
      439ad078-7e6a-4908-affa-ac89210d76ac drConfig1
      e8291afa-a413-4932-880a-abb8ac22c85d drConfig2
      7ad05d9f-731c-41b8-b477-35da4b999071 drConfig3
  6. To display the status and details of a DR configuration, use the drGetConfig command.

    Syntax:

    drGetConfig drConfigId=<DR_configuration_id>

    Example:

    PCA-ADMIN> drGetConfig drConfigId=439ad078-7e6a-4908-affa-ac89210d76ac
    Command: drGetConfig drConfigId=439ad078-7e6a-4908-affa-ac89210d76ac
    Status: Success
    Time: 2021-08-17 07:47:53,401 UTC
    Data:
      Type = DrConfig
      Config State = ENABLED
      Config Name = drConfig1
      Config Id = 439ad078-7e6a-4908-affa-ac89210d76ac
      Project Id = drProject1

Adding Site Mappings to a DR Configuration

Site mappings are added to determine how and where on the replica system the instances should be brought back up in case the primary system experiences an outage and a failover is triggered. Each site mapping contains a source object for the primary system and a corresponding target object for the replica system. Make sure that these resources exist on both the primary and replica system before you add the site mappings to the DR configuration.

These are the site mapping types you can add to a DR configuration:

  • Compartment: specifies that, if a failover occurs, instances from the source compartment must be brought up in the target compartment on the replica system

  • Subnet: specifies that, if a failover occurs, instances connected to the source subnet must be connected to the target subnet on the replica system

  • Network security group: specifies that, if a failover occurs, instances that belong to the source network security group must be included in the target security group on the replica system

Using the Service CLI

  1. Gather the information that you need to run the command:

    • DR configuration ID (drGetConfigs)

    • Mapping source and target object OCIDs

      Use the Compute Enclave UI or CLI on the primary and replica system respectively. CLI commands:

      • oci iam compartment list

      • oci network subnet list --compartment-id "ocid1.compartment.....uniqueID"

      • oci network nsg list --compartment-id "ocid1.compartment.....uniqueID"

  2. Add a site mapping to the DR configuration with the drAddSiteMapping command.

    Syntax (entered on a single line):

    drAddSiteMapping 
    drConfigId=<DR_configuration_id>
    objType=[compartment | subnet | networksecuritygroup]
    sourceId=<source_object_OCID>
    targetId=<target_object_OCID>

    Examples:

    PCA-ADMIN> drAddSiteMapping \
    drConfigId=63b36a80-7047-42bd-8b97-8235269e240d \
    objType=compartment \
    sourceId="ocid1.compartment.....<region1>...uniqueID" \
    targetId="ocid1.compartment.....<region2>...uniqueID"
    Command: drAddSiteMapping drConfigId=63b36a80-7047-42bd-8b97-8235269e240d objType=compartment sourceId="ocid1.compartment.....<region1>...uniqueID" targetId="ocid1.compartment.....<region2>...uniqueID"
    Status: Success
    Time: 2021-08-17 09:07:24,957 UTC
    Data:
      9244634e-431f-43a1-89ab-5d25905d43f9
    
    PCA-ADMIN> drAddSiteMapping \
    drConfigId=63b36a80-7047-42bd-8b97-8235269e240d \
    objType=subnet \
    sourceId="ocid1.subnet.....<region1>...uniqueID" \
    targetId="ocid1.subnet.....<region2>...uniqueID"
    Command: drAddSiteMapping drConfigId=63b36a80-7047-42bd-8b97-8235269e240d objType=subnet sourceId="ocid1.subnet.....<region1>...uniqueID" targetId="ocid1.subnet.....<region2>...uniqueID"
    Status: Success
    Time: 2021-08-17 09:07:24,957 UTC
    Data:
      d1bf2cf2-d8c7-4271-b8b6-cdf757648175
    
    PCA-ADMIN> drAddSiteMapping \
    drConfigId=63b36a80-7047-42bd-8b97-8235269e240d \
    objType=networksecuritygroup \
    sourceId="ocid1.nsg.....<region1>...uniqueID" \
    targetId="ocid1.nsg.....<region2>...uniqueID"
    Command: drAddSiteMapping drConfigId=63b36a80-7047-42bd-8b97-8235269e240d objType=networksecuritygroup sourceId="ocid1.nsg.....<region1>...uniqueID" targetId="ocid1.nsg.....<region2>...uniqueID"
    Status: Success
    Time: 2021-08-17 09:07:24,957 UTC
    Data:
      422f8892-ba0a-4a89-bc37-61b5c0fbbbaa
  3. Repeat the command with the OCIDs of all the source and target objects that you want to include in the site mappings of the DR configuration.

    Note:

    Mappings for compartments and subnets are always required in order to perform a failover or switchover. Missing mappings will be detected by the Oracle Site Guard scripts during a precheck on the replica system.
  4. To display the list of site mappings included in the DR configuration, use the drGetSiteMappings command. The DR configuration ID is a required parameter.

    Syntax:

    drGetSiteMappings drConfigId=<DR_configuration_id>

    Example:

    PCA-ADMIN> drGetSiteMappings drConfigId=63b36a80-7047-42bd-8b97-8235269e240d
    Command: drGetSiteMappings drConfigId=63b36a80-7047-42bd-8b97-8235269e240d
    Status: Success
    Time: 2021-08-17 09:19:22,580 UTC
    Data:
      id                                     name
      --                                     ----
      d1bf2cf2-d8c7-4271-b8b6-cdf757648175   null
      9244634e-431f-43a1-89ab-5d25905d43f9   null
      422f8892-ba0a-4a89-bc37-61b5c0fbbbaa   null
  5. To display the status and details of a site mapping included in the DR configuration, use the drGetSiteMapping command.

    Syntax (entered on a single line):

    drGetSiteMapping 
    drConfigId=<DR_configuration_id>
    mappingId=<site_mapping_id>

    Example:

    PCA-ADMIN> drGetSiteMapping drConfigId=63b36a80-7047-42bd-8b97-8235269e240d mappingId=d1bf2cf2-d8c7-4271-b8b6-cdf757648175
    Command: drGetSiteMapping drConfigId=63b36a80-7047-42bd-8b97-8235269e240d mappingId=d1bf2cf2-d8c7-4271-b8b6-cdf757648175
    Status: Success
    Time: 2021-08-17 09:25:53,148 UTC
    Data:
      Type = DrSiteMapping
      Object Type = subnet
      Source Id = ocid1.subnet.....<region1>...uniqueID
      Target Id = ocid1.subnet.....<region2>...uniqueID
      Work State = Normal

Removing Site Mappings from a DR Configuration

You can remove a site mapping from the DR configuration if it is no longer required.

Using the Service CLI

  1. Gather the information that you need to run the command:

    • DR configuration ID (drGetConfigs)

    • Site mapping ID (drGetSiteMappings)

  2. Remove the selected site mapping from the DR configuration with the drRemoveSiteMapping command.

    Syntax (entered on a single line):

    drRemoveSiteMapping 
    drConfigId=<DR_configuration_id>
    mappingId=<site_mapping_id>

    Example:

    PCA-ADMIN> drRemoveSiteMapping drConfigId=63b36a80-7047-42bd-8b97-8235269e240d mappingId=422f8892-ba0a-4a89-bc37-61b5c0fbbbaa
    Command: drRemoveSiteMapping drConfigId=63b36a80-7047-42bd-8b97-8235269e240d mappingId=422f8892-ba0a-4a89-bc37-61b5c0fbbbaa
    Status: Success
    Time: 2021-08-17 09:41:43,319 UTC
  3. Repeat the command with the IDs of all the site mappings that you want to remove from the DR configuration.

Adding Instances to a DR Configuration

Once a DR configuration has been created and the relevant site mappings have been set up, you add the required compute instances. Their data and disks are stored in the ZFS storage project associated with the DR configuration, and replicated over the network connection between the ZFS Storage Appliances of both Private Cloud Appliance systems.

If your system contains optional high-performance disk shelves, you must set up peering accordingly between the ZFS Storage Appliances. As a result, two ZFS projects are created for each DR configuration: one in the standard pool and one in the high-performance pool. When you add instances to the DR configuration that have disks running on standard as well as high-performance storage, those storage resources are automatically added to the ZFS project in the appropriate pool.

Using the Service CLI

  1. Gather the information that you need to run the command:

    • DR configuration ID (drGetConfigs)

    • Instance OCIDs from the Compute Enclave UI or CLI (oci compute instance list --compartment-id <compartment_OCID>)

  2. Add a compute instance to the DR configuration with the drAddComputeInstance command.

    Syntax (entered on a single line):

    drAddComputeInstance 
    drConfigId=<DR_configuration_id> 
    instanceId=<instance_OCID>

    Example:

    PCA-ADMIN> drAddComputeInstance \
    drConfigId=63b36a80-7047-42bd-8b97-8235269e240d \
    instanceId=ocid1.instance.....<region1>...uniqueID
    
    Command: drAddComputeInstance drConfigId=63b36a80-7047-42bd-8b97-8235269e240d instanceId=ocid1.instance.....<region1>...uniqueID
    Status: Success
    Time: 2021-08-17 07:24:35,186 UTC
    Data:
      Message = Successfully started job to add instance ocid1.instance.....<region1>...uniqueID to DR config 63b36a80-7047-42bd-8b97-8235269e240d
      Job Id = 8dcbd22d-69b0-4319-b09f-1a4df847e9df
  3. Use the job ID to check the status of the operation you started.

    PCA-ADMIN> drGetJob jobId=8dcbd22d-69b0-4319-b09f-1a4df847e9df
    Command: drGetJob jobId=8dcbd22d-69b0-4319-b09f-1a4df847e9df
    Status: Success
    Time: 2021-08-17 07:36:27,719 UTC
    Data:
      Type = add_computeinstance
      Job Id = 8dcbd22d-69b0-4319-b09f-1a4df847e9df
      Status = finished
      Start Time = 2021-08-17 07:24:36.776193
      End Time = 2021-08-17 07:26:59.406929
      Result = success
      Message = job successfully retrieved
      Response = Successfully added instance [ocid1.instance.....<region1>...uniqueID] to DR config [63b36a80-7047-42bd-8b97-8235269e240d]
  4. Repeat the drAddComputeInstance command with the OCIDs of all the compute instances that you want to add to the DR configuration.

  5. To display the list of instances included in the DR configuration, use the drGetComputeInstances command. The DR configuration ID is a required parameter.

    Syntax:

    drGetComputeInstances drConfigId=<DR_configuration_id>

    Example:

    PCA-ADMIN> drGetComputeInstances drConfigId=63b36a80-7047-42bd-8b97-8235269e240d
    Command: drGetComputeInstances drConfigId=63b36a80-7047-42bd-8b97-8235269e240d
    Status: Success
    Time: 2021-08-17 08:33:39,586 UTC
    Data:
      id                                                           name
      --                                                           ----
      ocid1.instance.....<region1>...instance1_uniqueID            null
      ocid1.instance.....<region1>...instance2_uniqueID            null
      ocid1.instance.....<region1>...instance3_uniqueID            null
  6. To display the status and details of an instance included in the DR configuration, use the drGetComputeInstance command.

    Syntax (entered on a single line):

    drGetComputeInstance 
    drConfigId=<DR_configuration_id>
    instanceId=<instance_OCID>

    Example:

    PCA-ADMIN> drGetComputeInstance \
    drConfigId=63b36a80-7047-42bd-8b97-8235269e240d \
    instanceId=ocid1.instance.....<region1>...instance1_uniqueID
    Command: drGetComputeInstance drConfigId=63b36a80-7047-42bd-8b97-8235269e240d instanceId=ocid1.instance.....<region1>...instance1_uniqueID
    Status: Success
    Time: 2021-08-17 08:34:42,413 UTC
    Data:
      Type = ComputeInstance
      Compartment Id = ocid1.compartment........uniqueID
      Boot Volume Id = ocid1.bootvolume........uniqueID
      Compute Instance Shape = VM.PCAStandard1.8
      Work State = Normal

Removing Instances from a DR Configuration

An instance can be part of only one DR configuration. You can remove a compute instance from the DR configuration to which it was added.

Using the Service CLI

  1. Gather the information that you need to run the command:

    • DR configuration ID (drGetConfigs)

    • Instance OCID (drGetComputeInstances)

  2. Remove the selected compute instance from the DR configuration with the drRemoveComputeInstance command.

    Syntax (entered on a single line):

    drRemoveComputeInstance 
    drConfigId=<DR_configuration_id>
    instanceId=<instance_OCID>

    Example:

    PCA-ADMIN> drRemoveComputeInstance \
    drConfigId=63b36a80-7047-42bd-8b97-8235269e240d \
    instanceId=ocid1.instance.....<region1>...instance3_uniqueID
    Command: drRemoveComputeInstance drConfigId=63b36a80-7047-42bd-8b97-8235269e240d instanceId=ocid1.instance.....<region1>...instance3_uniqueID
    Status: Success
    Time: 2021-08-17 08:45:59,718 UTC
    Data:
      Message = Successfully started job to remove instance ocid1.instance.....<region1>...instance3_uniqueID from DR config 63b36a80-7047-42bd-8b97-8235269e240d
      Job Id = 303b42ff-077c-4504-ac73-25930652f73a
  3. Use the job ID to check the status of the operation you started.

    PCA-ADMIN> drGetJob jobId=303b42ff-077c-4504-ac73-25930652f73a
    Command: drGetJob jobId=303b42ff-077c-4504-ac73-25930652f73a
    Status: Success
    Time: 2021-08-17 08:56:27,719 UTC
    Data:
      Type = remove_computeinstance
      Job Id = 303b42ff-077c-4504-ac73-25930652f73a
      Status = finished
      Start Time = 2021-08-17 08:46:00.641212
      End Time = 2021-08-17 08:47:19.142262
      Result = success
      Message = job successfully retrieved
      Response = Successfully removed instance [ocid1.instance.....<region1>...instance3_uniqueID] from DR config [63b36a80-7047-42bd-8b97-8235269e240d]
  4. Repeat the drRemoveComputeInstance command with the OCIDs of all the compute instances that you want to remove from the DR configuration.

Refreshing a DR Configuration

To ensure that the replication information stored in a DR configuration is updated with all the latest changes in your environment, you can refresh the DR configuration.

Using the Service CLI

  1. Look up the ID of the DR configuration you want to refresh (drGetConfigs).

  2. Refresh the data stored in the selected DR configuration with the drRefreshConfig command.

    Syntax:

    drRefreshConfig drConfigId=<DR_configuration_id>

    Example:

    PCA-ADMIN> drRefreshConfig drConfigId=63b36a80-7047-42bd-8b97-8235269e240d
    Command: drRefreshConfig drConfigId=63b36a80-7047-42bd-8b97-8235269e240d
    Status: Success
    Time: 2021-08-17 10:43:33,241 UTC 
    Data:
      Message = Successfully started job to refresh DR config 63b36a80-7047-42bd-8b97-8235269e240d
      Job Id = 205eb34e-f416-41d3-95a5-506a1d891fdb
  3. Use the job ID to check the status of the operation you started.

    PCA-ADMIN> drGetJob jobId=205eb34e-f416-41d3-95a5-506a1d891fdb
    Command: drGetJob jobId=205eb34e-f416-41d3-95a5-506a1d891fdb
    Status: Success
    Time: 2021-08-17 10:51:27,719 UTC
    Data:
      Type = refresh_config
      Job Id = 205eb34e-f416-41d3-95a5-506a1d891fdb
      Status = finished
      Start Time = 2021-08-17 10:43:34.264828
      End Time = 2021-08-17 10:45:12.718561
      Result = success
      Message = job successfully retrieved
      Response = Successfully refreshed DR config [63b36a80-7047-42bd-8b97-8235269e240d]

Deleting a DR Configuration

When you no longer need a DR configuration, you can remove it with a single command. Deleting a DR configuration also removes all of its site mappings and cleans up the associated storage projects on the ZFS Storage Appliances of the primary and replica systems. However, you must stop all compute instances that are part of the DR configuration before you can delete it.

Using the Service CLI

  1. Stop all the compute instances that are part of the DR configuration you want to delete.

  2. Look up the ID of the DR configuration you want to delete (drGetConfigs).

  3. Delete the selected DR configuration with the drDeleteConfig command.

    Syntax:

    drDeleteConfig drConfigId=<DR_configuration_id>

    Example:

    PCA-ADMIN> drDeleteConfig drConfigId=63b36a80-7047-42bd-8b97-8235269e240d
    Command: drDeleteConfig drConfigId=63b36a80-7047-42bd-8b97-8235269e240d
    Status: Success
    Time: 2021-08-17 14:45:19,634 UTC 
    Data:
      Message = Successfully started job to delete DR config 63b36a80-7047-42bd-8b97-8235269e240d
      Job Id = d2c1198d-f521-4b8d-a9f1-c36c7965d567
  4. Use the job ID to check the status of the operation you started.

    PCA-ADMIN> drGetJob jobId=d2c1198d-f521-4b8d-a9f1-c36c7965d567
    Command: drGetJob jobId=d2c1198d-f521-4b8d-a9f1-c36c7965d567
    Status: Success
    Time: 2021-08-17 16:18:33,462 UTC
    Data:
      Type = delete_config
      Job Id = d2c1198d-f521-4b8d-a9f1-c36c7965d567
      Status = finished
      Start Time = 2021-08-17 14:45:20.105569
      End Time = 2021-08-17 14:53:32.405569
      Result = success
      Message = job successfully retrieved
      Response = Successfully deleted DR config [63b36a80-7047-42bd-8b97-8235269e240d]