Preparing the Data Center Network Environment

The network architecture of Oracle Private Cloud Appliance relies on physical high-speed Ethernet connectivity. Prepare the data center network configuration so that it meets the requirements for incorporating the appliance.

The networking infrastructure in Private Cloud Appliance is integral to the system and must not be altered. It does not integrate into data center management or provisioning frameworks such as Cisco ACI, Network Director, or similar tools.

However, Private Cloud Appliance can communicate with the Cisco ACI fabric in your data center using the L3Out functionality (static routes or eBGP) provided by Cisco ACI. For more information about this Cisco feature, see the Cisco ACI Fabric L3Out Guide.

Caution

No changes to the networking switches in Private Cloud Appliance are supported, except when instructed by Oracle Support or through a knowledge article on the My Oracle Support website.

For important conceptual information about appliance networking, see Private Cloud Appliance Network Infrastructure. It describes the different networks and their roles, the uplinks that connect to the data center network, and the network resources reserved for appliance operation.

Data Center DNS Configuration for Private Cloud Appliance

To integrate the data of the dedicated Private Cloud Appliance DNS zone into the data center DNS configuration, two options are supported: zone delegation or manual configuration.

The preferred approach is to configure zone delegation. However, if you select manual configuration, it is good practice to register the network host names and IP addresses for the management network, client network, and any additional public networks in the data center Domain Name System (DNS) before initial configuration. In particular, all public addresses, VIP addresses, and infrastructure service endpoints should be registered in DNS before installation.

All addresses registered in DNS must be configured for forward resolution; reverse resolution is not supported in the Private Cloud Appliance services zone.

Zone Delegation (preferred)

For zone delegation to work, the data center's recursive caches must be able to reach TCP/UDP port 53 on the virtual IP address shared by the appliance management nodes. This may require changes to your firewall configuration.
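The following sketch is a minimal, optional check, not part of the appliance tooling, that you can run from a host in the data center to confirm that TCP port 53 on the management node VIP is reachable through the firewall. It assumes the example VIP 192.0.2.102 used in this section; UDP reachability must still be verified separately against your firewall rules.

#!/usr/bin/env python3
# Minimal reachability check for the appliance DNS delegation target.
# Assumption: 192.0.2.102 is the management node cluster VIP (example value).
import socket

MGMT_VIP = "192.0.2.102"   # replace with your management node VIP
DNS_PORT = 53

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    reachable = tcp_port_open(MGMT_VIP, DNS_PORT)
    print(f"TCP {MGMT_VIP}:{DNS_PORT} reachable: {reachable}")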

Configure the data center DNS server as the parent zone of the appliance DNS zone, so that all DNS requests for the child zone are delegated to the internal DNS server of the appliance. In the data center DNS configuration, add a name server (NS) record for the child zone and an address (A) record for the authoritative server of that zone.

In the example it is assumed that the data center DNS domain is example.com, that the appliance is named mypca, and that the management node cluster virtual IP address is 192.0.2.102. The appliance internal DNS server host name is ns1.

$ORIGIN example.com.
[...]
mypca       IN    NS    ns1.mypca.example.com.
ns1.mypca   IN    A     192.0.2.102
Caution

DNS lookup for service endpoints changed in controller software version 3.0.2-b1483396. Individual address records per service have been consolidated into CNAME records that reference a common services or adminservices record. With zone delegation of the appliance subdomain, lookups for a defined RTYPE return the CNAME record and the requested RTYPE record. Lookups for undefined RTYPEs, which previously returned no answer, now return only the CNAME record.
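Once delegation is configured, you can confirm it from a client that uses the data center resolvers. The sketch below is one possible check, assuming the example zone mypca.example.com from this section and the third-party dnspython package (an assumption; it is not shipped with the appliance). It looks up the NS record of the delegated subdomain and resolves one service name, which also shows the CNAME behavior described in the caution above.

#!/usr/bin/env python3
# Sketch: verify zone delegation for the appliance subdomain.
# Assumptions: dnspython is installed (pip install dnspython), and the
# example zone mypca.example.com from this section is used.
import dns.resolver

ZONE = "mypca.example.com"
SERVICE = "iaas." + ZONE

# The NS lookup should return the appliance internal DNS server (ns1.mypca.example.com).
for rdata in dns.resolver.resolve(ZONE, "NS"):
    print("delegated to:", rdata.target)

# An A lookup for a service name should follow the delegation (and, on controller
# software 3.0.2-b1483396 or later, the intermediate CNAME) down to an address.
answer = dns.resolver.resolve(SERVICE, "A")
print("canonical name:", answer.canonical_name)
for rdata in answer:
    print(SERVICE, "->", rdata.address)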

Manual Configuration

Manually add DNS records for all labels or host names required by the appliance.

In the examples it is assumed that the data center DNS domain is example.com, that the appliance is named mypca, and that the management node cluster virtual IP address is 192.0.2.102 in the data network and 203.0.113.12 in the (optional) administration network.

Note

For object storage you must point the DNS label to the Object Storage Public IP. This is the public IP address you assign specifically for this purpose when setting up the data center public IP ranges during Initial Setup.

Appliance Infrastructure Service        Data Center DNS Record           Data Center DNS Record with
and DNS Label                                                            Admin Network Enabled
--------------------------------------  -------------------------------  -------------------------------
Admin service                           admin  A  192.0.2.102            admin  A  203.0.113.12
admin.mypca.example.com

Networking, Compute, Block Storage,     iaas  A  192.0.2.102             iaas  A  192.0.2.102
Work Requests services
iaas.mypca.example.com

Identity and Access Management service  identity  A  192.0.2.102         identity  A  192.0.2.102
identity.mypca.example.com

DNS service                             dns  A  192.0.2.102              dns  A  192.0.2.102
dns.mypca.example.com

Object storage                          objectstorage  A  198.51.100.33  objectstorage  A  198.51.100.33
objectstorage.mypca.example.com
(use the Object Storage Public IP
from the Appliance Initial Setup)

File storage                            filestorage  A  192.0.2.102      filestorage  A  192.0.2.102
filestorage.mypca.example.com

Alert manager                           alertmanager  A  192.0.2.102     alertmanager  A  203.0.113.12
alertmanager.mypca.example.com

API                                     api  A  192.0.2.102              api  A  203.0.113.12
api.mypca.example.com

OKE service                             containerengine  A  192.0.2.102  containerengine  A  192.0.2.102
containerengine.mypca.example.com

Resource principal service              rps  A  192.0.2.102              rps  A  203.0.113.12
rps.mypca.example.com

Grafana                                 grafana  A  192.0.2.102          grafana  A  203.0.113.12
grafana.mypca.example.com

Prometheus                              prometheus  A  192.0.2.102       prometheus  A  203.0.113.12
prometheus.mypca.example.com

Prometheus-gw                           prometheus-gw  A  192.0.2.102    prometheus-gw  A  203.0.113.12
prometheus-gw.mypca.example.com

Service Web UI                          adminconsole  A  192.0.2.102     adminconsole  A  203.0.113.12
adminconsole.mypca.example.com

Compute Web UI                          console  A  192.0.2.102          console  A  192.0.2.102
console.mypca.example.com
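If you maintain these records manually, generating them from a single list helps ensure that no label is missed. The sketch below is not an Oracle-provided tool; it simply prints BIND-style A records for the labels in the table above, using the example names and addresses from this section. Adjust the appliance name, domain, VIP addresses, and the Object Storage Public IP to match your environment.

#!/usr/bin/env python3
# Sketch: print BIND-style A records for the appliance service labels.
# All names and addresses are the example values from this section.
APPLIANCE = "mypca"
DOMAIN = "example.com"
DATA_VIP = "192.0.2.102"             # management node VIP in the data network
ADMIN_VIP = "203.0.113.12"           # management node VIP in the admin network (if enabled)
OBJECT_STORAGE_IP = "198.51.100.33"  # Object Storage Public IP from initial setup

ADMIN_NETWORK_ENABLED = True

# Labels that move to the admin VIP when a separate administration network is enabled.
ADMIN_LABELS = {"admin", "adminconsole", "prometheus-gw", "prometheus",
                "grafana", "api", "alertmanager", "rps"}
DATA_LABELS = {"console", "iaas", "identity", "filestorage", "dns", "containerengine"}

def record_ip(label: str) -> str:
    if label == "objectstorage":
        return OBJECT_STORAGE_IP
    if ADMIN_NETWORK_ENABLED and label in ADMIN_LABELS:
        return ADMIN_VIP
    return DATA_VIP

if __name__ == "__main__":
    for label in sorted(ADMIN_LABELS | DATA_LABELS | {"objectstorage"}):
        fqdn = f"{label}.{APPLIANCE}.{DOMAIN}."
        print(f"{fqdn:45} IN  A  {record_ip(label)}")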

Data Center Network Configuration Guidelines

Follow these important guidelines for a smooth integration of Private Cloud Appliance into the data center network.

Data Center Switch Notes

  • All uplinks, default and customer, are configured to use link aggregation (LACP). All switch ports included in an uplink configuration must belong to the same link aggregation group (LAG). The switch ports on the data center side of the uplinks must be configured accordingly.

  • The spine switches operate with the Virtual Port Channel (vPC) feature enabled in static routing configurations.

  • Private Cloud Appliance supports layer 3 based uplink connectivity to the customer data center. Static routing and BGP4-based dynamic routing are supported in layer 3.

  • Autonegotiation is not available for the uplink ports. The transfer speed must be configured explicitly on the data center switch ports at the other end of the uplinks.

For more information, see Uplinks and Uplink Protocols.

Administration Network Guidelines

If you choose to segregate administrative appliance access from the data traffic, configure the data center network so that traffic can be routed to the appropriate destinations in both the administration network and the data network.

Access to Service Endpoints

When the administration network is enabled, some appliance infrastructure services are accessed through the Admin Management VIP, instead of the regular Management Node VIP. These service endpoints are:

  • 'admin'

  • 'adminconsole'

  • 'prometheus-gw'

  • 'prometheus'

  • 'grafana'

  • 'api'

  • 'alertmanager'

  • 'rps'

The following service endpoints are always accessed through the Management Node VIP in the data network:

  • 'console'

  • 'iaas'

  • 'identity'

  • 'filestorage'

  • 'objectstorage'

  • 'dns'

  • 'containerengine'

Ensure that the data center firewall is configured to allow this traffic. If you manage the DNS records required by the appliance in the data center DNS configuration, ensure that they point to the correct network and address, as shown in Data Center DNS Configuration for Private Cloud Appliance (Manual Configuration).
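A simple resolution check from a client in the data center can confirm that each endpoint resolves to the VIP of the expected network. The sketch below uses only the Python standard library and the example names and addresses from this section; it assumes the administration network is enabled.

#!/usr/bin/env python3
# Sketch: check that service endpoints resolve to the VIP of the correct network.
# Assumptions: example names/addresses from this section; admin network enabled.
import socket

SUFFIX = ".mypca.example.com"
EXPECTED = {
    # endpoints served through the Admin Management VIP
    "admin": "203.0.113.12", "adminconsole": "203.0.113.12",
    "prometheus-gw": "203.0.113.12", "prometheus": "203.0.113.12",
    "grafana": "203.0.113.12", "api": "203.0.113.12",
    "alertmanager": "203.0.113.12", "rps": "203.0.113.12",
    # endpoints always served through the Management Node VIP in the data network
    "console": "192.0.2.102", "iaas": "192.0.2.102", "identity": "192.0.2.102",
    "filestorage": "192.0.2.102", "dns": "192.0.2.102",
    "containerengine": "192.0.2.102",
}

for label, expected_ip in sorted(EXPECTED.items()):
    fqdn = label + SUFFIX
    try:
        resolved = socket.gethostbyname(fqdn)
    except OSError as exc:
        print(f"{fqdn:45} FAILED ({exc})")
        continue
    status = "ok" if resolved == expected_ip else f"unexpected (expected {expected_ip})"
    print(f"{fqdn:45} {resolved:15} {status}")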

OKE Cluster Management

When Kubernetes Engine is used on a system configured with a separate administration network, the data center firewall must be configured to allow traffic between the OKE control plane and the OKE clusters deployed by Compute Enclave users.

The OKE control plane runs on the management nodes in the administration network, while the OKE clusters are deployed in the data network. The management interface of an OKE cluster is port 6443 on its load balancer public IP address. This address is assigned from the data center IP range you reserved and configured as public IPs during initial appliance setup.

Because of the network segregation, traffic from the OKE control plane must exit the appliance through the administration network, and reenter through the data network to reach the OKE cluster. The data center network infrastructure must allow traffic in both directions. Without the necessary firewall and routing rules, users cannot deploy OKE clusters.

[Figure: packet flow when a system is configured with a separate administration network]
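The reachability requirement can be verified with a basic connection test once a cluster is deployed. The sketch below checks whether TCP port 6443 answers on an OKE cluster's load balancer public IP; the address shown is a hypothetical example, so replace it with the public IP assigned to your cluster's load balancer.

#!/usr/bin/env python3
# Sketch: confirm that the Kubernetes API endpoint of an OKE cluster is reachable.
# Assumption: 198.51.100.50 is a hypothetical load balancer public IP; port 6443
# is the cluster management interface described above.
import socket

LB_PUBLIC_IP = "198.51.100.50"  # replace with your cluster's load balancer public IP
K8S_API_PORT = 6443

try:
    with socket.create_connection((LB_PUBLIC_IP, K8S_API_PORT), timeout=5.0):
        print(f"{LB_PUBLIC_IP}:{K8S_API_PORT} is reachable")
except OSError as exc:
    print(f"{LB_PUBLIC_IP}:{K8S_API_PORT} is NOT reachable: {exc}")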

Default System IP Addresses

The management IP address represents a component's connection to the internal administration network.

Caution

For hardware management, Private Cloud Appliance uses a network internal to the system. It is not recommended to connect the management ports or the internal administration network switches to the data center network infrastructure.

The table in this section lists the default management IP addresses assigned to servers and other hardware components in a Private Cloud Appliance base configuration.

Rack Unit

Rack Component

Management IP Address Assigned During Manufacturing

32

Spine Switch

100.96.2.21

31

Spine Switch

100.96.2.20

26

Management Switch

100.96.2.1

100.96.0.1

25

Leaf/Data Switch

100.96.2.23

24

Leaf/Data Switch

100.96.2.22

Management Node VIP

100.96.2.32

ILOM: 100.96.0.32

7

Management Node

100.96.2.35

ILOM: 100.96.0.35

6

Management Node

100.96.2.34

ILOM: 100.96.0.34

5

Management Node

100.96.2.33

ILOM: 100.96.0.33

Storage VIPs

Performance pool 100.96.2.5

Capacity pool 100.96.2.4

3-4

Oracle ZFS Storage Appliance Controller Server (2 rack units)

100.96.2.3

ILOM: 100.96.0.3

1-2

Oracle ZFS Storage Appliance Controller Server (2 rack units)

100.96.2.2

ILOM: 100.96.0.2

Compute nodes are assigned an IP address in the internal administration network during the provisioning process. The host IP address is assigned through DHCP; the ILOM receives the same address with the third octet changed from 2 to 0. For example: if a compute node receives IP 100.96.2.64, then its ILOM has IP 100.96.0.64. When assigned to a host, these IP addresses are stored and persisted in the DHCP database.
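The ILOM address derivation described above is simple enough to express as a short helper. The sketch below is illustrative only; it applies the third-octet rule to a compute node host IP address.

#!/usr/bin/env python3
# Sketch: derive a compute node's ILOM address from its host address by
# changing the third octet from 2 to 0, as described above.
def ilom_address(host_ip: str) -> str:
    octets = host_ip.split(".")
    if len(octets) != 4 or octets[2] != "2":
        raise ValueError(f"unexpected compute node address: {host_ip}")
    octets[2] = "0"
    return ".".join(octets)

assert ilom_address("100.96.2.64") == "100.96.0.64"
print(ilom_address("100.96.2.64"))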