Network Connection Requirements

These sections describe the network connection requirements and data center network requirements to connect the Oracle Private Cloud Appliance to your existing network infrastructure.

Network Overview

For an overview of the appliance network infrastructure, see the following sections in the Hardware Overview chapter of the Oracle Private Cloud Appliance Concepts Guide.

Device Management Network

The device management network provides internal access to the management interfaces of all appliance components.

Data Network

The appliance data connectivity is built on redundant 100Gbit switches in a two-layer design similar to a leaf-spine topology. An Oracle Private Cloud Appliance rack contains two leaf and two spine switches. The leaf switches interconnect the rack hardware components, while the spine switches form the backbone of the network and provide a path for external traffic.

Uplinks are the connections between the Oracle Private Cloud Appliance and the customer data center. For external connectivity, 5 ports are reserved on each spine switch. Four ports are available to establish the uplinks between the appliance and the data center network; one port is reserved to optionally segregate the administration network from the data traffic. Use this section to plan your network topology and logical connection options.

Administration Network

You can optionally segregate administrative appliance access from the data traffic.

Reserved Network Resources

Oracle Private Cloud Appliance requires a large number of IP addresses and several VLANs for internal operation. See "Reserved Network Resources" in the Hardware Overview section for the IP address ranges reserved for internal use by Oracle Private Cloud Appliance.

Network Configuration Requirements

On each spine switch, ports 1-4 can be used for uplinks to the data center network. For speeds of 10Gbps or 25Gbps, the spine switch port must be split using a 4-way splitter or breakout cable. For higher speeds of 40Gbps or 100Gbps, each switch port uses a single direct cable connection. For detailed information about choosing the appropriate configuration, refer to "Uplinks" in the Network Infrastructure section of the Hardware Overview.

The uplinks are configured during system initialization, based on information you provide as part of the Initial Installation Checklist. Unused spine switch uplink ports, including unused breakout ports, are disabled for security reasons.

It is critical that both spine switches have the same connections to the next-level data center switches. This configuration provides redundancy and load splitting at the level of the spine switches, the ports and the data center switches. This outbound cabling depends on the network topology you deploy. The cabling pattern plays a key role in the continuation of service during failover scenarios.

For more information about the available topologies (Triangle, Square, and Mesh), refer to "Uplinks" in the Hardware Overview chapter of the Oracle Private Cloud Appliance Concepts Guide.

  • Before installation, you must run network cables from your existing network infrastructure to the Oracle Private Cloud Appliance installation site. For instructions see Connect the Appliance to Your Network.

  • Plan to connect at least 1 high-speed Ethernet port on each spine switch to your data center public Ethernet network.

  • Configuring the optional Administration network requires 2 additional cable connections (one each from port 5 on the two spine switches) to a pair of next-level data center switches.

  • Uplink connectivity is based on layer 3 of the OSI model.

  • When upgrading from build 302-b892153 or earlier on a system running vPC/HSRP, contact Oracle if you want to change the network configuration to support new features.

DNS Configuration for Oracle Private Cloud Appliance

To integrate the data of the Oracle Private Cloud Appliance's dedicated DNS zone into the data center DNS configuration, two options are supported: zone delegation or manual configuration. The preferred approach is to configure zone delegation, as described below.

However, if you select manual configuration, it is good practice to register network host names and IP addresses for the management network, client network, and additional public networks in the data center Domain Name System (DNS) prior to initial configuration. In particular, all public addresses, VIP addresses and infrastructure services endpoints should be registered in DNS prior to installation.

All addresses registered in DNS must be configured for forward resolution; reverse resolution is not supported in the Oracle Private Cloud Appliance services zone.

Zone Delegation (preferred)

For zone delegation to work, the data center's recursive caches must be able to reach TCP and UDP port 53 on the virtual IP address shared by the appliance management nodes. You may need to change your firewall configuration accordingly.
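
If a Linux-based firewall sits in that path, the required rules might look like the following sketch. It is purely illustrative: 192.0.2.102 is the example management node VIP used in the rest of this section, and the syntax of your actual data center firewall will differ.

# Allow the data center recursive caches to reach DNS (TCP and UDP port 53)
# on the appliance management node VIP (192.0.2.102 in this example).
iptables -A FORWARD -p udp -d 192.0.2.102 --dport 53 -j ACCEPT
iptables -A FORWARD -p tcp -d 192.0.2.102 --dport 53 -j ACCEPT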

Configure the data center DNS server so that it operates as the parent zone of the appliance DNS zone. Thus, all DNS requests for the child zone are delegated to the appliance internal DNS server. In the data center DNS configuration, add a name server record for the child zone and an address record for the authoritative server of that zone.

In the example it is assumed that the data center DNS domain is example.com, that the appliance is named mypca, and that the management node cluster virtual IP address is 192.0.2.102. The appliance internal DNS server host name is ns1.

$ORIGIN example.com.
[...]
mypca       IN    NS    ns1.mypca.example.com.
ns1.mypca   IN    A     192.0.2.102
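
Once the delegation is in place, you can verify it from a client in the data center. The following sketch uses the example names and address above and assumes the dig utility is available on the client.

# Confirm that the data center DNS server delegates the child zone to the appliance.
dig +short NS mypca.example.com
# Expected answer (using the example names): ns1.mypca.example.com.

# Confirm that a label in the appliance zone resolves through the delegation.
dig +short A iaas.mypca.example.com
# Expected answer (using the example addresses): 192.0.2.102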

Manual Configuration

Manually add DNS records for all labels or host names required by the appliance.

In the examples it is assumed that the data center DNS domain is example.com, that the appliance is named mypca, and that the management node cluster virtual IP address is 192.0.2.102 in the data network and 203.0.113.12 in the (optional) administration network.

Note:

For object storage you must point the DNS label to the Object Storage Public IP. This is the public IP address you assign specifically for this purpose when setting up the data center public IP ranges during Initial Setup. Refer to the Public IPs step near the end of the section "Complete the Initial Setup".

For each appliance infrastructure service and its DNS label, the following list shows the data center DNS record to create, and the record to create when the administration network is enabled.

Admin service (admin.mypca.example.com)
    Data center DNS record:               admin  A  192.0.2.102
    With administration network enabled:  admin  A  203.0.113.12

Networking, Compute, Block Storage, Work Requests services (iaas.mypca.example.com)
    Data center DNS record:               iaas  A  192.0.2.102
    With administration network enabled:  iaas  A  192.0.2.102

Identity and Access Management service (identity.mypca.example.com)
    Data center DNS record:               identity  A  192.0.2.102
    With administration network enabled:  identity  A  192.0.2.102

DNS service (dns.mypca.example.com)
    Data center DNS record:               dns  A  192.0.2.102
    With administration network enabled:  dns  A  192.0.2.102

Object storage (objectstorage.mypca.example.com)
    Data center DNS record:               objectstorage  A  198.51.100.33
    With administration network enabled:  objectstorage  A  198.51.100.33
    Note: use the Object Storage Public IP from the appliance Initial Setup.

File storage (filestorage.mypca.example.com)
    Data center DNS record:               filestorage  A  192.0.2.102
    With administration network enabled:  filestorage  A  192.0.2.102

Alert manager (alertmanager.mypca.example.com)
    Data center DNS record:               alertmanager  A  192.0.2.102
    With administration network enabled:  alertmanager  A  203.0.113.12

API (api.mypca.example.com)
    Data center DNS record:               api  A  192.0.2.102
    With administration network enabled:  api  A  203.0.113.12

OKE service (containermanager.mypca.example.com)
    Data center DNS record:               containermanager  A  192.0.2.102
    With administration network enabled:  containermanager  A  192.0.2.102

Resource principal service (rps.mypca.example.com)
    Data center DNS record:               rps  A  192.0.2.102
    With administration network enabled:  rps  A  203.0.113.12

Grafana (grafana.mypca.example.com)
    Data center DNS record:               grafana  A  192.0.2.102
    With administration network enabled:  grafana  A  203.0.113.12

Prometheus (prometheus.mypca.example.com)
    Data center DNS record:               prometheus  A  192.0.2.102
    With administration network enabled:  prometheus  A  203.0.113.12

Prometheus-gw (prometheus-gw.mypca.example.com)
    Data center DNS record:               prometheus-gw  A  192.0.2.102
    With administration network enabled:  prometheus-gw  A  203.0.113.12

Service Web UI (adminconsole.mypca.example.com)
    Data center DNS record:               adminconsole  A  192.0.2.102
    With administration network enabled:  adminconsole  A  203.0.113.12

Compute Web UI (console.mypca.example.com)
    Data center DNS record:               console  A  192.0.2.102
    With administration network enabled:  console  A  192.0.2.102
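
If you prefer to prepare the records as a zone file fragment, the following sketch shows a representative subset in the same style as the zone delegation example, using the example names and addresses from this section. It assumes the records are added to the example.com zone; adapt it to however your data center DNS is organized.

$ORIGIN example.com.
[...]
; Representative subset of the records listed above (no administration network).
admin.mypca            IN    A    192.0.2.102
iaas.mypca             IN    A    192.0.2.102
objectstorage.mypca    IN    A    198.51.100.33   ; Object Storage Public IP
console.mypca          IN    A    192.0.2.102
; With the administration network enabled, admin and the other endpoints listed
; under "Access to Service Endpoints" point to 203.0.113.12 instead.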

Data Center Switch Configuration Notes

When configuring the data center switches to accept incoming Oracle Private Cloud Appliance uplinks – the default uplinks as well as any custom uplinks you define – take these notes into account.

  • All uplinks, default and custom, are configured to use link aggregation (LACP). All switch ports included in an uplink configuration must belong to the same link aggregation group (LAG). The switch ports on the data center side of the uplinks must be configured accordingly; a configuration sketch is shown after this list.

  • The spine switches operate with the Virtual Port Channel (vPC) feature enabled in static routing configurations. For more information about configuration rules, see "Uplinks" in the Network Infrastructure section of the Hardware Overview.

  • Oracle Private Cloud Appliance supports layer 3 based uplink connectivity to the customer data center. Static routing and BGP4-based dynamic routing are supported.

  • Auto-negotiation is not available for uplink ports. The transfer speed must be specified explicitly on the data center switch ports. For the supported uplink port speeds, see "Uplinks" in the Network Infrastructure section of the Hardware Overview.
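
As an illustration only, the following sketch shows what the data center side of one uplink LAG might look like, written in Cisco NX-OS-style syntax. The interface name, port-channel number, speed, and the point-to-point address 10.0.0.1/31 are assumptions; the actual configuration depends on your switch vendor, the chosen topology, and whether you use static routing or BGP4.

feature lacp
!
! Routed (layer 3) LACP port-channel toward the appliance uplinks.
interface port-channel10
  description LAG toward Oracle Private Cloud Appliance
  no switchport
  ip address 10.0.0.1/31
  no shutdown
!
! Member port. Auto-negotiation is not available on the uplinks,
! so the speed must be set explicitly.
interface Ethernet1/1
  description Uplink to appliance spine switch
  no switchport
  speed 100000
  channel-group 10 mode active
  no shutdown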

Administration Network Configuration Notes

If you choose to segregate administrative appliance access from the data traffic, ensure that your data center network is configured accordingly, so that all traffic can be routed to the appropriate destinations in both the administration and the data network.

Access to Service Endpoints

When the administration network is enabled, some appliance infrastructure services are accessed through the Admin Management VIP, instead of the regular Management Node VIP. These service endpoints are:

  • 'admin'

  • 'adminconsole'

  • 'prometheus-gw'

  • 'prometheus'

  • 'grafana'

  • 'api'

  • 'alertmanager'

  • 'rps'

The following service endpoints are always accessed through the Management Node VIP in the data network:

  • 'console'

  • 'iaas'

  • 'identity'

  • 'filestorage'

  • 'objectstorage'

  • 'dns'

  • 'containermanager'

Ensure that the data center firewall is configured to allow this traffic. If you manage the DNS records required by the appliance in the data center DNS configuration, ensure that they point to the correct network and address, as shown in DNS Configuration for Oracle Private Cloud Appliance ("Manual Configuration").
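
As a quick check, you can resolve one endpoint from each group and compare the answers with the example addresses used earlier in this chapter (192.0.2.102 for the Management Node VIP in the data network, 203.0.113.12 for the Admin Management VIP). The dig utility is assumed to be available on the client.

# Endpoint expected to resolve to the Admin Management VIP (administration network).
dig +short admin.mypca.example.com
# Expected answer (using the example addresses): 203.0.113.12

# Endpoint expected to resolve to the Management Node VIP (data network).
dig +short console.mypca.example.com
# Expected answer (using the example addresses): 192.0.2.102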

OKE Cluster Management

When the Oracle Container Engine for Kubernetes (OKE) is used on a system configured with a separate administration network, the data center firewall must be configured to allow traffic between the OKE control plane and the OKE clusters deployed by Compute Enclave users.

The OKE control plane runs on the management nodes in the administration network, while the OKE clusters are deployed in the data network. The management interface of an OKE cluster is port 6443 on its load balancer public IP address. This address is assigned from the data center IP range you reserved and configured as public IPs during initial appliance setup.

Because of the network segregation, traffic from the OKE control plane must exit the appliance through the administration network, and reenter through the data network to reach the OKE cluster. The data center network infrastructure must allow traffic in both directions. Without the necessary firewall and routing rules, users cannot deploy OKE clusters.
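
The following sketch illustrates the kind of rules involved, using Linux iptables syntax purely for illustration. The administration network subnet 203.0.113.0/24 and the public IP range 198.51.100.0/24 are hypothetical placeholders for your actual administration network and the public IP range reserved during initial setup.

# Allow the OKE control plane (administration network) to reach the Kubernetes
# API endpoint (port 6443) on the OKE cluster load balancer public IPs.
iptables -A FORWARD -s 203.0.113.0/24 -d 198.51.100.0/24 -p tcp --dport 6443 -j ACCEPT
# Allow the return traffic for established connections.
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT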

Figure 3-1 Example of System Configured with a Separate Administration Network

Diagram showing packet flow when a system is configured with a separate administration network.