Configure Oracle SuperCluster

This section describes the following scenarios:

Overlapping IB Networks Enabled

One or more Oracle Engineered Systems (Oracle SuperCluster) can be discovered and managed by a single Oracle Enterprise Manager Ops Center instance, provided certain conditions are met.

Starting with Oracle Enterprise Manager Ops Center 12c Release 3 (12.3.1.0.0), overlapping IB networks is enabled by default.

The Oracle SuperCluster system must not already be discovered when you enable this feature. If the system is already discovered, remove it completely, then rediscover it after enabling overlapping networks.

Enabling Overlapping Networks

Procedure to enable overlapping networks.

  1. In the Navigation pane, under Administration, click Enterprise Controller.
  2. In the center pane, click Configuration.
  3. In the Configuration Management section, select Network/Fabric Manager from the Subsystem drop-down list.
  4. Set the value of the oem.oc.networkmgmt.ib.overlapping.enabled property to true.

Note:

Restart the Enterprise Controller for the changes to take effect.

Overlapping IB Networks not Enabled

One or more Oracle Engineered Systems can be discovered and managed by a single Oracle Enterprise Manager Ops Center instance only when all of the following conditions are met:

  • None of the Oracle Engineered System instances have overlapping private networks connected through IPoIB, that is, networks that have the same CIDR (Classless Inter-Domain Routing) or networks that are sub-blocks of the same CIDR. For example, 192.0.2.1/21 and 192.0.2.1/24 are overlapping.

  • None of the Oracle Engineered System instances or generic datacenter assets have overlapping management or client access networks connected through Ethernet, that is, networks that have the same CIDR or networks that are sub-blocks of the same CIDR. For example, 192.0.2.1/21 and 192.0.2.1/24 are overlapping. As an exception, you can use the same CIDR (not a sub-block) for multiple systems. For example, you can use 192.0.2.1/22 as the CIDR for an Ethernet network on one or more engineered systems and/or generic datacenter assets.

  • None of the Oracle Engineered System instances have overlapping public networks connected through EoIB, that is, networks that have the same CIDR or networks that are sub-blocks of the same CIDR. For example, 192.0.2.1/21 and 192.0.2.1/24 are overlapping. As an exception, you can use the same CIDR (not sub-block) for multiple systems. For example, you can use 192.2.0.0/22 as a CIDR for public EoIB network on multiple engineered systems.

  • None of the networks configured in Oracle Enterprise Manager Ops Center overlaps with any other network; overlapping networks are not supported by Oracle Enterprise Manager Ops Center.
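These overlap rules can be checked mechanically before discovery. The following sketch uses Python's standard ipaddress module to test whether two CIDR blocks conflict; the addresses are the example values from the rules above.

```python
import ipaddress

def networks_overlap(cidr_a, cidr_b):
    """Return True if two CIDR blocks overlap, i.e. they are the same
    network or one is a sub-block of the other."""
    # strict=False tolerates host bits in the address (e.g. 192.0.2.1/21)
    a = ipaddress.ip_network(cidr_a, strict=False)
    b = ipaddress.ip_network(cidr_b, strict=False)
    return a.overlaps(b)

# 192.0.2.1/24 is a sub-block of 192.0.2.1/21, so they overlap:
print(networks_overlap("192.0.2.1/21", "192.0.2.1/24"))       # True
# Two distinct /24 networks do not overlap:
print(networks_overlap("192.168.30.0/24", "192.168.31.0/24")) # False
```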

Note:

To manage two or more engineered systems that have overlapping networks or any networks already present in Oracle Enterprise Manager Ops Center, reconfigure one of the conflicting systems before it is discovered and managed by the same Oracle Enterprise Manager Ops Center. You can also enable the overlapping network feature to manage multiple systems with overlapping networks.

Example Oracle SuperCluster Network Configurations

Example Oracle SuperCluster network configurations.

The following are example Oracle SuperCluster network configurations that you can use when configuring the network to discover and manage Oracle SuperCluster systems. Status OK indicates a valid configuration and status FAIL indicates an invalid configuration.


Table 3-1 Example Oracle SuperCluster Network Configuration-1

                 1 GbE             10 GbE            IB
SuperCluster1    192.0.251.0/21    192.4.251.0/24    192.168.30.0/24
SuperCluster2    192.0.251.0/21    192.4.251.0/24    192.168.31.0/24
Status           OK                OK                OK


Status:

OK - SuperCluster1-1GbE and SuperCluster2-1GbE share the same network.

OK - SuperCluster1-10GbE and SuperCluster2-10GbE share the same network.

OK - SuperCluster1-IB does not overlap with SuperCluster2-IB.


Table 3-2 Example Oracle SuperCluster Network Configuration-2

                 1 GbE             10 GbE            IB
SuperCluster1    192.0.251.0/21    192.0.250.0/24    192.168.30.0/24 (IB fabric connected with SuperCluster2)
SuperCluster2    192.6.0.0/21      192.0.250.0/24    192.168.30.0/24 (IB fabric connected with SuperCluster1)
Status           OK                OK                OK


Status:

OK - SuperCluster1-1GbE and SuperCluster2-1GbE represent different non-overlapping networks.

OK - SuperCluster1-10GbE and SuperCluster2-10GbE share the same network.

OK - SuperCluster1-IB and SuperCluster2-IB represent the same network as they are interconnected.


Table 3-3 Example SuperCluster Network Configuration-3

                 1 GbE             10 GbE            IB
SuperCluster1    192.0.2.1/21      192.0.251.0/21    192.168.30.0/24
SuperCluster2    192.0.0.128/25    192.0.7.0/24      192.168.30.0/24
Status           FAIL              OK                FAIL


Status:

FAIL - SuperCluster1-1GbE and SuperCluster2-1GbE define overlapping networks.

OK - SuperCluster1-10GbE and SuperCluster2-10GbE represent different non-overlapping networks.

FAIL - SuperCluster1-1GbE and SuperCluster2-10GbE define overlapping networks.

FAIL - SuperCluster1-IB and SuperCluster2-IB do not define unique private networks (racks are not interconnected).
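The statuses above can be reproduced with the same kind of CIDR comparison, sketched here with Python's standard ipaddress module using the example values from Table 3-3. Note that a pure CIDR check cannot distinguish an interconnected IB fabric (valid, as in Table 3-2) from two separate racks that merely reuse the same private CIDR, which is the FAIL case here.

```python
import ipaddress

def overlap(a, b):
    """True if the CIDR blocks are the same network or one is a sub-block of the other."""
    return ipaddress.ip_network(a, strict=False).overlaps(
        ipaddress.ip_network(b, strict=False))

# Example values from Table 3-3
sc1 = {"1GbE": "192.0.2.1/21", "10GbE": "192.0.251.0/21", "IB": "192.168.30.0/24"}
sc2 = {"1GbE": "192.0.0.128/25", "10GbE": "192.0.7.0/24", "IB": "192.168.30.0/24"}

print(overlap(sc1["1GbE"], sc2["1GbE"]))    # True  -> FAIL (sub-block of the same /21)
print(overlap(sc1["10GbE"], sc2["10GbE"]))  # False -> OK
print(overlap(sc1["1GbE"], sc2["10GbE"]))   # True  -> FAIL (the 1 GbE /21 covers the 10 GbE /24)
print(overlap(sc1["IB"], sc2["IB"]))        # True  -> FAIL (same CIDR, racks not interconnected)
```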

Limitations

This section describes the limitations in configuring an Oracle SuperCluster engineered system.

Do not create server pools using private networks attached to members from two or more SuperCluster systems (racks). To create server pools with members from two or more SuperCluster systems, use public networks. Use private networks only in server pools with members belonging to the same SuperCluster system.

Oracle Solaris 11 Software Library Setup

When you discover an Oracle SuperCluster system, the default agent installation works only if the Oracle Solaris 11 Software Update Library is set up correctly, because Oracle Solaris relies on that library.

If Oracle Enterprise Manager Ops Center was just installed, initialize the Oracle Solaris 11 Software Update Library before starting the discovery; otherwise, agent installation fails. Ensure that the Oracle Solaris 11 Software Update Library contains the correct Oracle Solaris packages that your Enterprise Controller and Proxy Controllers use, as well as the SRUs that are used on the Oracle Engineered System.

To manage generic assets, you also need the correct Oracle Solaris packages for each generic Solaris 11 managed OS. Typically, you must create a full copy of the Oracle Solaris 11 support repository.

Starting with Oracle Enterprise Manager Ops Center 12c Release 2 (12.2.2.0.0), the Oracle SuperCluster Oracle Solaris 11 OS uses the default HMP packages delivered with Oracle SuperCluster/QFSDP, instead of the packages normally delivered by the Ops Center installation.

It is recommended to install agents in all domains.

Deploy Proxy Controller

Procedure to deploy the proxy controller on Oracle Enterprise Manager Ops Center.

Deploy the Proxy Controller only if you do not have a suitable Proxy Controller in Oracle Enterprise Manager Ops Center that can discover Oracle Engineered Systems.

Perform the following steps to deploy the Proxy Controller on Oracle Enterprise Manager Ops Center.

  1. In the Navigation pane, click Administration.
  2. In the Actions pane, click Deploy Proxy.
  3. Select Remote Proxies, then click Next.
  4. Enter the Proxy Hostname/IP, SSH User, SSH Password, Privileged Role, and Privileged Password in the respective fields.
  5. Click Next. The Remote Proxy Controller is deployed. This might take a few minutes.
  6. Review the Summary, then click Finish.

The Remote Proxy Controller is deployed on Oracle Enterprise Manager Ops Center. You can now perform discovery of the Oracle Engineered Systems.

Prepare Setup for Oracle SuperCluster Discovery

Prepare the setup for the Oracle SuperCluster system based on whether the network is identified by the Enterprise Controller.

The setup is based on the following options:

  • Network is identified by the Enterprise Controller

  • Network is not identified by the Enterprise Controller

Note:

Oracle SuperCluster can be discovered only by trained Oracle staff. See Discover Oracle SuperCluster.

Note:

Ensure you have a Proxy Controller deployed that can access the network. To deploy a Proxy Controller, see Deploy Proxy Controller.

Network is Identified by the Enterprise Controller

If the Enterprise Controller host identifies the management network of the Oracle SuperCluster (CIDR must be the same), ensure that the network is assigned to the Proxy Controller. If it is not assigned, assign the network to the Proxy Controller.

To assign the network to the Proxy Controller, perform the following steps:

  1. In the Navigation pane, select Administration.

  2. Select a Proxy Controller.

  3. In the Actions pane, click Associate Networks.

Network is not Identified by the Enterprise Controller


If the Enterprise Controller host does not identify the network (Oracle Engineered System management network must be routable from it), create a fabric definition and a network for the fabric.

Create Fabric Definition

Procedure to create a fabric definition.

  1. In the Navigation pane, under Networks, select Fabrics from the drop-down list.
  2. In the Actions pane, click Define Ethernet Fabric.
  3. In the Fabric Name field, enter a name for the fabric.
  4. (Optional) Enter a description.
  5. Click Next.
  6. Enter the VLAN ID Ranges, then click Next.
  7. Select the networks to be associated with the fabric, then click Next.
  8. Review the Summary, then click Finish.

    A new fabric is created.

Create Network for the Fabric

Procedure to create a network for the fabric.

After the fabric is created, you must create a network for the new fabric.

  1. In the Navigation pane, under Networks, select Networks from the drop-down list.
  2. In the Actions pane, click Define Network.
  3. In the Network IP field, enter the IP address (in CIDR format) of the network that represents the management network of the Oracle Engineered System you want to manage.
  4. Enter the Gateway IP address.
  5. In the Network Name field, enter a name for the network.
  6. Click Next.
  7. Assign the newly created fabric to the Proxy Controller, then click Next.

    The setup is now ready for Oracle Engineered System discovery.
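When filling in the Network IP and Gateway IP fields, the gateway address must fall inside the network block you define. A quick sanity check can be sketched with Python's standard ipaddress module; the address values below are placeholders, not values from this document.

```python
import ipaddress

def gateway_in_network(network_cidr, gateway_ip):
    """Return True if the gateway address falls inside the network block."""
    net = ipaddress.ip_network(network_cidr, strict=False)
    return ipaddress.ip_address(gateway_ip) in net

print(gateway_in_network("192.0.2.0/24", "192.0.2.1"))    # True
print(gateway_in_network("192.0.2.0/24", "192.0.251.1"))  # False: gateway outside the /24
```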

Ports for Oracle SuperCluster

This section summarizes the set of ports and their protocols used by Oracle SuperCluster.

The Proxy Controller for an Oracle SuperCluster engineered system does not have unique ports or protocols. The following table summarizes the set of ports and their protocols used by Oracle SuperCluster.


Table 3-4 Required Ports and Protocols for Oracle SuperCluster

Proxy Controller to Exadata's ILOM service processors
  Protocol and Port: SSH, TCP: Port 22; IPMI, TCP and UDP: Port 623
  Purpose: The Proxy Controller discovers, manages, and monitors the service processor of Exadata.

Proxy Controller to Exadata cells
  Protocol and Port: SSH, TCP: Port 22
  Purpose: The Proxy Controller discovers, manages, and monitors the compute nodes.

Proxy Controller to Oracle ZFS Storage Appliance
  Protocol and Port: SSH, TCP: Port 22; IPMI, TCP and UDP: Port 623
  Purpose: The Proxy Controller discovers, manages, and monitors the service processor of the storage appliance.

Proxy Controller to Oracle ZFS Storage Appliance
  Protocol and Port: SSH: Port 215
  Purpose: The Proxy Controller discovers the projects of the storage appliance: iSCSI volumes and NFS shares.

Proxy Controller to Cisco switch
  Protocol and Port: SSH version 2: Port 22; SNMP: Port 161
  Purpose: The Proxy Controller discovers and manages the switch.

Proxy Controller to InfiniBand switch
  Protocol and Port: SSH: Port 22; IPMI: Port 623
  Purpose: The Proxy Controller discovers and manages the switch.
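Before starting discovery, you can verify from the Proxy Controller host that the TCP ports above are reachable. The sketch below checks TCP reachability only, so UDP-based services (IPMI on port 623, SNMP on port 161) are not fully covered; the host names in the usage comment are placeholders, not values from this document.

```python
import socket

def tcp_port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def audit(targets):
    """Print the TCP reachability of each (host, [ports]) entry in targets."""
    for host, ports in targets.items():
        for port in ports:
            state = "open" if tcp_port_open(host, port) else "unreachable"
            print(f"{host}:{port} {state}")

# Example usage with placeholder host names; substitute your own asset addresses:
# audit({
#     "exadata-ilom.example.com": [22, 623],
#     "zfssa.example.com": [22, 215],
#     "ib-switch.example.com": [22],
# })
```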