Establishing a Peer Connection
The disaster recovery service requires a mutual, symmetrical peer connection between two Oracle Private Cloud Appliance racks. Dedicated cabling must be installed at each site, and the disaster recovery service on each rack must be configured to accept the other rack as its standby system.
Adding Cable Connections
A peer connection between Oracle Private Cloud Appliance racks requires additional physical connections. Dedicated cabling must be installed between the spine switches and the data center network.
Racks with factory-installed software version 3.0.2-b1261765 or later already have all the necessary internal network interfaces and connections. Only external cabling for the peer connection is required. The ZFS Storage Appliances use the same physical connections for their data replication.
In existing installations where first-generation disaster recovery is configured, an active replication network between the ZFS Storage Appliances exists. When upgrading or patching to the latest appliance software, the existing replication network remains active. The new physical connections from the spine switches are used for peering traffic only.
Data Center Cabling for Rack Peering
Direct peering between racks requires dedicated cabling for each participating system. The additional connections between the spine switches and the data center network are the physical basis on which the network tunnels of the peer connection are configured.
For the purpose of peering, port 6 on each spine switch must be connected to the data center network. To provide the required connection speed of 10 or 25 Gbps, a 4-way breakout cable is attached to spine port 6. From the breakout cable, one transceiver is connected to the data center network. Cabling must be identical for both spine switches.
Appliance Internal Cabling for Rack Peering
Appliance rack configurations shipped from the factory before the release of the native DR service do not have the required internal cabling to enable replication through the peer connection tunnels. They lack these crucial components:
- PCIe 25GbE network interface card in some models of the ZFS Storage Appliance Controllers
- Ethernet cabling between the ZFS Storage Appliance Controllers and the spine switches (port 27)
These components can be added to existing installations, so their hardware configuration is equivalent to racks with factory-installed software version 3.0.2-b1261765 or later. Contact Oracle for assistance and additional information.
Backward Compatibility
The native DR service supports both cabling layouts:
- Peering Topology: combined peer connection and storage replication network through the spine switches
- Compatibility Topology: peer connection with a physically separate, direct replication link between the ZFS Storage Appliances
The compatibility topology provides different options for existing installations after upgrading or patching to software version 3.0.2-b1261765 or later. If you have a first-generation DR setup, you can choose to continue with this configuration, on condition that you do not establish a peer connection at the appliance level. However, Oracle recommends migrating the existing configuration to the native DR service, in accordance with your infrastructure design and maintenance schedule. Data center cabling for the peer connection must be added, but you can continue to use the existing storage replication connection. For more information, see Migrating to the Native Disaster Recovery Service.
Creating a Local Endpoint
Traffic between peered Private Cloud Appliance systems flows through tunnels between endpoints. A rack must have a local endpoint configured before it can participate in a peer connection.
Assuming the network configuration remains the same, a local endpoint is set up once. It remains configured if a peer connection is deleted, so it can be reused for any new connection. These parameters are required to create the local endpoint:
- an IP address for each spine switch
- the IP addresses of the data center gateways to which the spine switches are connected
- the IP address of the capacity ZFS pool and, if present, the high-performance ZFS pool
- the ASN ID of your network environment, if applicable
Network Configuration Guidelines
A local endpoint requires a /29 address block, which provides 6 usable IP addresses. Within that /29 range, the spine switch pair is assigned 3 IPs (one of which is shared). Each ZFS pool is assigned 1 IP outside the spine switch subnet. To allow for additional future peer connections, Oracle recommends reserving a data center IP range of at least /25 in size, which corresponds to 16 address blocks of /29 size.
When setting up the local endpoint, provide the netmask with the peering network IPs, but not with IPs that have already been configured, such as the data center addresses. The gateway IPs must be provided by the network administrator and assigned to the data center switches to which the spine switches are connected. Note that spine switch 1 corresponds with gateway 1, and spine switch 2 with gateway 2.
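The subnet arithmetic in the guidelines above can be checked with the Python standard library. This is a minimal sketch; the concrete subnet values are illustrative assumptions, not addresses prescribed for any particular installation.

```python
# Sketch of the local-endpoint address math: a /29 block yields 6 usable
# host IPs, and a /25 reservation holds 16 blocks of /29 size.
# The subnets below are illustrative assumptions only.
import ipaddress

# A /29 block contains 8 addresses, 6 of them usable as host IPs.
spine_block = ipaddress.ip_network("10.212.128.0/29")
assert spine_block.num_addresses == 8
assert len(list(spine_block.hosts())) == 6

# A /25 reservation corresponds to 16 address blocks of /29 size.
reservation = ipaddress.ip_network("10.212.128.0/25")
blocks = list(reservation.subnets(new_prefix=29))
assert len(blocks) == 16
print(f"{reservation} -> {len(blocks)} x /29 blocks")
```

Reserving the full /25 up front keeps all future peer-connection /29 blocks contiguous, which simplifies data center routing.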
Caution:
If your systems are set up with a first-generation disaster recovery configuration, and you are migrating to the native disaster recovery service, perform these steps:
1. Gather the existing disaster recovery configuration details on both appliances, using the drShowService command.

   PCA-ADMIN> drShowService
   Data:
     Local Ip = 10.100.3.83/28
     Local Ip Perf = 10.100.3.84/28
     Remote Host = sn01-dr1.example.com
     Remote Host Perf = sn02-dr1.example.com
     Replication = ENABLED
     Replication High = ENABLED
     Message = Successfully retrieved site configuration
     maxConfig = 12
     gateway IP = 10.100.3.81
     gateway IP Perf = 10.100.3.81
     Job Retention Hours = 48

2. Remove the ZFS pool replication IPs from the existing network configuration.

   PCA-ADMIN> edit networkConfig ZFSCapacityPoolReplicationEndpoint=""
   PCA-ADMIN> edit networkConfig ZFSPerfPoolReplicationEndpoint=""

3. Use the storage IP addresses (and subnet mask) already assigned to the ZFS Storage Appliance Controllers for replication between the storage pools.
When you have obtained all required IP addresses, create the appliance local endpoint using either the Service CLI or Service Web UI.
Using the Service CLI
Enter the following command on a single line, replacing the sample IPs with the ones you obtained:

PCA-ADMIN> create LocalEndpoint \
  spine1Ip=<10.212.128.3/29> datacenterGateway1Ip=<10.212.128.1> \
  spine2Ip=<10.212.128.4/29> datacenterGateway2Ip=<10.212.128.2> \
  zfsCapacityPoolEndpointIp=<10.212.128.129/29> zfsPerformancePoolEndpointIp=<10.212.128.130/29> \
  localAsn=<136025>

Check the local endpoint configuration with the getLocalEndpoint command.

Using the Service Web UI
Under Disaster Recovery Service, open the Local Endpoint page. In the top-right corner, click Create.
In the pop-up window, enter the IP addresses in the respective fields. Click Create Local Endpoint to apply the settings.
In the Local Endpoint page, the Information tab indicates the endpoint is configured. Click the Configuration tab to display the details.
Deleting the Local Endpoint
The local endpoint cannot be deleted if it is part of an existing peer configuration. Delete the peer connection first.
Using the Service CLI
Enter the deleteLocalEndpoint command.

Using the Service Web UI
Under Disaster Recovery Service, open the Local Endpoint page. In the top-right corner, click Delete.
Creating the Peer Connection
When two Private Cloud Appliance systems have been cabled correctly, and their local endpoints have been configured, the peer connection can be created.
The peer connection is a symmetrical configuration, meaning the setup must be performed on each connected appliance. The administrators exchange the relevant configuration details of their system, so they can each include the peer details required for creating the connection. A trust relationship between the appliances is established through a CA chain stored in the Secret Service (Vault).
When the first appliance completes its side of the connection setup, it goes into a waiting state. By design, the appliance whose endpoint IP address ends in the lower value initiates the connection. As soon as the entry for the peer appliance is detected, the CA certificates are verified and the mutual trust relationship is confirmed. After successful peering, a pair of secure tunnels is established between the spine switches, allowing the administration services on the two appliances to exchange information.
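The initiator tie-break described above can be illustrated with a short sketch. This is an assumption-laden illustration of the rule as stated (lower-ending endpoint IP initiates), not Oracle's actual implementation; the function name and addresses are hypothetical.

```python
# Illustrative sketch of the initiator tie-break: of the two tunnel
# endpoints, the one with the numerically lower IP address initiates.
# Not Oracle's implementation; names and addresses are assumptions.
import ipaddress

def initiator(local_ip: str, remote_ip: str) -> str:
    """Return the endpoint IP that would initiate the connection."""
    local = ipaddress.ip_address(local_ip.split("/")[0])   # strip any netmask
    remote = ipaddress.ip_address(remote_ip.split("/")[0])
    return str(min(local, remote))

# With tunnel endpoints 172.16.21.1 (local) and 172.16.21.2 (remote),
# the .1 side initiates.
print(initiator("172.16.21.1/30", "172.16.21.2"))  # 172.16.21.1
```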
These parameters are required to create a peer connection:
- the IP addresses (4 in total) of the local and remote endpoints for each tunnel
- the IP addresses of the remote spine switches in the peer appliance
- properties of the peer appliance: domain name, system name, serial number, and ASN ID if applicable
- properties of the peer Admin Service: host name, admin user name, admin password, and CA chain
The network configuration must allow peer-to-peer connectivity between the replication endpoints, or use routable IPs when both systems are in separate address spaces. Ensure that the new network setup does not overlap with existing connections between the appliance and the data center.
A peer connection requires a /30 subnet for each of its two tunnels, so the local endpoint is assigned 2 IPs in total (one per tunnel). When setting up the connection, you include the netmask with the local endpoint IPs, but not with the remote endpoint IPs and remote spine switch IPs.
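The per-tunnel addressing can be sketched with the standard library: each /30 holds exactly two usable host addresses, one for each side of the tunnel. The subnets below match the sample configuration used later in this section but are otherwise illustrative assumptions.

```python
# Sketch of per-tunnel /30 addressing: each tunnel draws its local and
# remote endpoint IP from one /30 subnet, which has exactly 2 usable hosts.
# Subnets are illustrative, matching the sample values in this section.
import ipaddress

tunnel1 = ipaddress.ip_network("172.16.21.0/30")
tunnel2 = ipaddress.ip_network("172.16.21.4/30")

for net in (tunnel1, tunnel2):
    local_ip, remote_ip = net.hosts()  # unpack the two usable hosts
    print(f"{net}: local={local_ip} remote={remote_ip}")
```

Running this prints the local/remote pairs 172.16.21.1/172.16.21.2 and 172.16.21.5/172.16.21.6, the same values used in the create PeerConnection example below.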
When you have obtained all required parameters, create the peer connection using either the Service CLI or Service Web UI.
Using the Service CLI
Enter the following command on a single line, replacing the sample parameters with the ones you obtained:

PCA-ADMIN> create PeerConnection name=<peerconnection1> description=<"my peer connection"> \
  peerSerialNumber=<1654BF2465> peerSystemName=<mypca1> peerDomainName=<mydomain.com> \
  localEndpoint1Ip=<172.16.21.1/30> remoteEndpoint1Ip=<172.16.21.2> \
  localEndpoint2Ip=<172.16.21.5/30> remoteEndpoint2Ip=<172.16.21.6> \
  remoteSpine1Ip=<10.212.128.10> remoteSpine2Ip=<10.212.128.11> \
  peerAdminHostname=<mypca1.mydomain.com> peerAdminUserName=<admin> peerAdminPassword=<password> \
  peerAdminCaChain=<ca_string> remoteAsn=<136025>

Check the peer connection configuration using the following commands:

PCA-ADMIN> list PeerConnection
Data:
  id                                    Name             Peer Admin Hostname   Peer Rack Serial Number   Lifecycle State
  --                                    ----             -------------------   -----------------------   ---------------
  ocid1.drpeerconnection....unique_ID   peerconnection1  mypca1.mydomain.com   1654BF2465                ACTIVE

PCA-ADMIN> show peerConnection id=ocid1.drpeerconnection....unique_ID
Data:
  Id = ocid1.drpeerconnection....unique_ID
  Type = PeerConnection
  Lifecycle Sub State = ACTIVE
  Lifecycle State = ACTIVE
  Peer Rack Serial Number = 1654BF2465
  Local Endpoint 1 Ip = 172.16.21.1/30
  Local Endpoint 2 Ip = 172.16.21.5/30
  Remote Endpoint 1 Ip = 172.16.21.2
  Remote Endpoint 2 Ip = 172.16.21.6
  Remote Spine 1 Ip = 10.212.128.10
  Remote Spine 2 Ip = 10.212.128.11
  Peer Admin CaChain = -----BEGIN CERTIFICATE-----\nMIIFbjCCA1agAwIBAgIQfMPkn17+ZTNl/jZjYzbpn[...]
  Peer Admin Hostname = mypca1.mydomain.com
  Peer Rack Domain Name = mydomain.com
  Peer Rack System Name = mypca1
  Peer Rack Admin User Name = admin
  Peer Rack Admin User Password = *******
  Remote Asn = 136025
  ProgressRecordIds 1 = id:d39144d6-feef-4988-ba71-fac4b046fff8 type:ProgressRecord name:
  ProgressRecordIds 2 = id:940b397f-993c-4ab9-9708-909dabb65c47 type:ProgressRecord name:
  ProgressRecordIds 3 = id:64b31360-3d0d-4dc2-a925-35164143eb25 type:ProgressRecord name:
  ProgressRecordIds 4 = id:7e8d9e2e-74b1-4d31-9098-7a09d719ec6a type:ProgressRecord name:
  ProgressRecordIds 5 = id:2309bcdc-1689-410b-a93e-528444ada2a5 type:ProgressRecord name:
  ProgressRecordIds 6 = id:8a4d5747-d8fe-48e2-96e2-f4c797964cbe type:ProgressRecord name:
  Name = peerconnection1
  Work State = Normal
Using the Service Web UI
Under Disaster Recovery Service, open the Peer Connections page. In the top-right corner, click Create Peer Connection.
In the pop-up window, enter all parameters in the respective fields. Click Create Peer Connection to apply the settings.
In the Peer Connections page, the table displays a new entry for the connection you created. Click the name in the table to display the detail page of the peer connection, and review its configuration parameters.
Updating the Peer Connection
There is no CLI command or UI function to modify a peer connection once it is configured. To change the peer connection, delete it and create a new connection with the updated parameters.
Deleting the Peer Connection
If a peer connection is no longer used, you can delete it. Ensure that the peer configuration is removed from each connected appliance.
Using the Service CLI
Look up the ID of the peer connection you want to delete, then enter the delete command as shown:

PCA-ADMIN> list PeerConnection
Data:
  id                                    Name             Peer Admin Hostname   Peer Rack Serial Number   Lifecycle State
  --                                    ----             -------------------   -----------------------   ---------------
  ocid1.drpeerconnection....unique_ID   peerconnection1  mypca1.mydomain.com   1654BF2465                ACTIVE

PCA-ADMIN> delete peerConnection id=ocid1.drpeerconnection....unique_ID
Using the Service Web UI
Under Disaster Recovery Service, open the Peer Connections page. In the table, click the name of the connection you want to delete. The peer connection detail page is displayed. In the top-right corner, click Delete.