6.6 Introduction to Oracle Exalogic Network Configuration

This section introduces the following topics:

6.6.1 InfiniBand Fabric

Exalogic machines use a unified quad data rate (QDR) InfiniBand fabric, operating at 40 Gb per second, for internal communication.

Applications running on compute nodes communicate with applications on other compute nodes over this InfiniBand network. Exalogic machines communicate with Oracle Exadata Database Machines for database connectivity using IP over InfiniBand (IPoIB). Exalogic machines can also be connected to an external network, for example to reach a standard database hosted on a machine outside the Exalogic machine, through the InfiniBand-to-10 Gb Ethernet gateways using Ethernet over InfiniBand (EoIB). Each Exalogic machine configuration includes at least two such gateways, which also act as InfiniBand switches connecting all compute nodes and the storage appliance within the Exalogic machine.
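
As a quick way to confirm the fabric state described above, you can check the HCA port state and link rate from a compute node. This is a minimal sketch that assumes the standard InfiniBand diagnostic tools (infiniband-diags) are available on Oracle Linux; output formatting varies by driver and firmware version.

    # Show HCA port state and link rate (40 Gb/sec for a QDR fabric)
    ibstat

    # Restrict the output to the state and rate fields
    ibstat | grep -E "State|Rate"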

6.6.2 InfiniBand Switches

Sun Network QDR InfiniBand Gateway Switches (part number NM2-GW) are used as the leaf switches in the Exalogic machine. They connect to the Host Channel Adapters (HCAs) of Exalogic compute nodes.

These switches (NM2-GW) also act as Ethernet gateways to connect your Exalogic machine to the external LAN over Ethernet. For more information, see Connectivity Between Exalogic Machine and External LAN Through Sun Network QDR InfiniBand Gateway Switch.

Sun Datacenter InfiniBand Switch 36 (part number NM2-36P) is used only in multirack configurations, that is, when connecting an Exalogic machine to another Exalogic machine or to an Oracle Exadata Database Machine. This switch is not connected or used in a single-rack Exalogic machine.

Note:

In the Exalogic machine, InfiniBand switches (both leaf and spine switches) are automatically configured to separate the IP over InfiniBand (IPoIB) traffic and the Ethernet over InfiniBand (EoIB) traffic.

6.6.3 Default Bonded Interfaces

After the Sun Network QDR InfiniBand Gateway Switches are connected to Exalogic compute nodes, the following bonded interfaces are configured:

  • IP over InfiniBand (IPoIB) - bond0 link (ib0/ib1 for Oracle Linux, and ibp0/ibp1 for Oracle Solaris)

    ib0 or ibp0 represents HCA port 0 of a compute node, and ib1 or ibp1 represents HCA port 1. A sample Oracle Linux interface configuration sketch is provided after this list.

    Note:

    Depending on your application deployment and isolation requirements, you can create additional bonded IP subnet interfaces over this default IPoIB link.

    For more information, see the "Application Isolation by Subnetting over IPoIB" topic in the Oracle Exalogic Enterprise Deployment Guide.

  • Ethernet over InfiniBand (EoIB) - bond1 link, which bonds two vNICs, such as vNIC0 over ib0 and vNIC1 over ib1 (vNIC0 over ibp0 and vNIC1 over ibp1 on Oracle Solaris).

    Note:

    Oracle Solaris uses IP network multipathing (IPMP) to provide IPMP groups, which offer the same functionality as bonded interfaces on Oracle Linux. If you are using Oracle Solaris on Exalogic compute nodes, you can assign any names to the IPMP groups. In this guide, BOND0 and BOND1 are used as example names to keep the terminology consistent with Oracle Linux.
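
A minimal sketch of how the default IPoIB bond is typically expressed on an Oracle Linux compute node follows, using the standard sysconfig network-scripts layout. The IP address and bonding options are illustrative placeholders rather than Exalogic defaults; verify the values generated for your machine before making changes.

    # /etc/sysconfig/network-scripts/ifcfg-bond0 (illustrative values)
    DEVICE=bond0
    IPADDR=192.168.10.1         # placeholder IPoIB address; use your assigned subnet
    NETMASK=255.255.255.0
    BOOTPROTO=none
    ONBOOT=yes
    BONDING_OPTS="mode=active-backup miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-ib0 (ifcfg-ib1 is identical except for DEVICE)
    DEVICE=ib0
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes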

6.6.4 IPMP Overview for Oracle Solaris Users

On the Oracle Solaris operating system, IP network multipathing (IPMP) provides physical interface failure detection and transparent network access failover for a system with multiple interfaces on the same IP link. IPMP also provides load spreading of packets for systems with multiple interfaces.

This section discusses the following topics:

6.6.4.1 IPMP Components

IPMP comprises the following components:

  • The in.mpathd daemon

  • The /etc/default/mpathd configuration file

  • ifconfig options for IPMP configuration

    Note:

    For information about the in.mpathd daemon and the /etc/default/mpathd configuration file, see the in.mpathd(1M) man page on the Oracle Solaris operating system installed on Exalogic compute nodes. For information about ifconfig, see the ifconfig(1M) man page.

6.6.4.2 IPMP Groups

An IP multipathing group, or IPMP group, consists of one or more physical interfaces on the same system that are configured with the same (non-null) IPMP group name. All interfaces in the group must be connected to the same IP link. You can place interfaces from NICs of different speeds within the same IPMP group, as long as the NICs are of the same type. In the Exalogic environment, IPMP groups on Oracle Solaris provide the same functionality as bonded interfaces on Oracle Linux. For example, the default IPMP group ipmp0 comprises the two physical interfaces that are connected to the default IPoIB link for internal communication in your Exalogic machine, and the other default IPMP group ipmp1 comprises the two virtual interfaces that are connected to the default EoIB link for external data center connectivity.

Note:

For information about administering and configuring IPMP groups on the Oracle Solaris operating system installed on Exalogic compute nodes, see Oracle Solaris 11.1 documentation.
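
As an illustration of how interfaces are placed into an IPMP group using the ifconfig options listed earlier, the following sketch assumes the legacy ifconfig-based IPMP configuration; the address is a placeholder, and on Oracle Solaris 11 the ipadm command is the preferred administrative interface.

    # Plumb the first IPoIB interface with a data address and add it to group ipmp0
    ifconfig ibp0 plumb 192.168.10.1 netmask 255.255.255.0 group ipmp0 up

    # Add the second IPoIB interface to the same group as a standby interface
    ifconfig ibp1 plumb group ipmp0 standby up

    # Display group membership and failure-detection state
    ipmpstat -g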

6.6.5 Connectivity Between Exalogic Compute Nodes

Compute nodes in the Exalogic machine are connected to one another through dual-ported InfiniBand quad data rate (QDR) host channel adapters (HCAs). Each HCA has an IP address, and active-passive bonding is configured across its two ports. The active port of the HCA connects to one Sun Network QDR InfiniBand Gateway Switch, and the passive port connects to another Sun Network QDR InfiniBand Gateway Switch in the Exalogic machine.

Note:

For more information about network connectivity in different Exalogic machine configurations, see Cabling Diagrams.
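
On Oracle Linux compute nodes, the active and passive HCA ports of the default bond can be inspected through the Linux bonding driver's status file. This is a generic, illustrative check rather than an Exalogic-specific tool.

    # Show which InfiniBand port (ib0 or ib1) is currently carrying traffic
    grep -E "Currently Active Slave|MII Status|Slave Interface" /proc/net/bonding/bond0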

6.6.6 Connectivity Between Exalogic Machine and External LAN Through Sun Network QDR InfiniBand Gateway Switch

The Sun Network QDR InfiniBand Gateway Switches also act as gateways to Ethernet networks, and each switch supports eight 10 Gb Ethernet ports. Exalogic compute nodes can access these ports over the InfiniBand network using EoIB. You can create multiple VLANs on each of these Ethernet ports.

Each Exalogic compute node can access one or more Ethernet ports on two Sun Network QDR InfiniBand Gateway Switches (NM2-GW) for high availability (HA). An Exalogic machine full rack includes four gateway switches, so a group of eight compute nodes in the full rack can access one Ethernet port on both the primary gateway switch and the secondary gateway switch to which that group is connected. Each port is represented as an EoIB vNIC on the compute nodes, and each compute node has two bonded vNICs (active/passive).

Note:

You can configure up to eight compute nodes to use a single 10 Gb Ethernet port.

For information about creating a vNIC for Ethernet connectivity, see Configure Ethernet Over InfiniBand.
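
To review the vNICs that have already been defined for your compute nodes, you can list them from the gateway switch itself. The commands below are assumptions based on the gateway switch command-line interface and may vary by firmware version; see Configure Ethernet Over InfiniBand for the supported procedure.

    # On the Sun Network QDR InfiniBand Gateway Switch, list the configured vNICs
    # and their state (assumed command names; verify against your firmware documentation)
    showvnics

    # List the VLANs associated with the gateway Ethernet connectors
    showvlan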

This section discusses the following topics:

6.6.6.1 Ethernet Device Requirements

Before you begin, ensure that you have a 10 Gb Ethernet switch, router, or NIC device that supports any of the following:

  • SFP+ 10G-Base-SR Module

  • XFP 10G-Base-SR Module

  • QSFP Optical Module

For example, Figure 6-2 shows how a QSFP module on the Exalogic machine's Sun Network QDR InfiniBand Gateway Switch (NM2-GW) is connected to the SFP+/XFP modules on the data center's 10 GbE switch.

Figure 6-2 Connectivity Between NM2-GW and External 10 Gb Ethernet Switch


6.6.6.2 Network Interface Configuration for Compute Nodes

By default, each Exalogic compute node is configured with one bonded EoIB interface, BOND1 (vNIC0/vNIC1), which connects to a single external LAN, such as LAN1. The vNICs are presented to the operating system as Ethernet (ethX) interfaces.

When a vNIC is created on one of the Sun Network QDR InfiniBand Gateway Switches, the corresponding ethX interface is associated with the vNIC automatically.

Note:

You can configure additional EoIB network interfaces for connecting to additional LANs, as required.
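
A minimal sketch of how the bonded EoIB interface might be expressed on an Oracle Linux compute node follows. The ethX device names (eth4, eth5) and the IP address are illustrative assumptions; the actual names assigned to the EoIB vNICs depend on your configuration.

    # /etc/sysconfig/network-scripts/ifcfg-bond1 (illustrative values)
    DEVICE=bond1
    IPADDR=10.0.10.1            # placeholder address on the external LAN (LAN1)
    NETMASK=255.255.255.0
    BOOTPROTO=none
    ONBOOT=yes
    BONDING_OPTS="mode=active-backup miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-eth4 (ifcfg-eth5 is identical except for DEVICE)
    DEVICE=eth4
    MASTER=bond1
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes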

6.6.6.3 Transceiver and Cable Requirements

Table 6-3 lists the transceivers and cables required to connect your Exalogic machine to your data center's 10 Gb Ethernet switch.

Table 6-3 Transceivers and Cables

  • Optical module on Exalogic's Sun Network QDR InfiniBand Gateway Switch: QSFP module

    Cable needed: QSFP MTP to 4 LC. A minimum of one optical cable per NM2-GW is needed, but two cables per NM2-GW are recommended.

    Ethernet switch vendor: A Sun Oracle switch or a 10 GbE standard switch from a third-party vendor.

    Transceiver needed: For a Sun Oracle switch, the x2129/3 SFP+/XFP SR module. For third-party switches, the SFP+/XFP module provided by the switch vendor.

  • Optical module on Exalogic's Sun Network QDR InfiniBand Gateway Switch: QSFP module

    Cable needed: QSFP to QSFP. A minimum of one optical cable per NM2-GW is needed, but two cables per NM2-GW are recommended.

    Ethernet switch vendor: A Sun Oracle switch or a 10 GbE standard switch from a third-party vendor.

    Transceiver needed: For a Sun Oracle switch, the x2124A QSFP module. For third-party switches, the QSFP module provided by the switch vendor.

Note: Exalogic ships with QSFP transceivers by default. Customers may use them on the data center switch side if they use a Sun Oracle 10 GbE switch, such as the Sun Network 10 GbE Switch 72p.

6.6.7 Additional InfiniBand Network Requirements and Specifications

Table 6-4 lists additional InfiniBand specifications and cable requirements.

Table 6-4 HCA, Port Specifications and Cable Requirements

  • InfiniBand quad data rate (QDR) host channel adapters (HCAs)

    Exalogic Machine Full Rack: 30
    Exalogic Machine Half Rack: 16
    Exalogic Machine Quarter Rack: 8
    Two Exalogic Machines: 60

  • Unused ports in Sun Network QDR InfiniBand Gateway Switches (NM2-GW leaf switches)

    Exalogic Machine Full Rack: 0
    Exalogic Machine Half Rack: 6
    Exalogic Machine Quarter Rack: 16
    Two Exalogic Machines: 6

  • Unused ports in Sun Datacenter InfiniBand Switch 36 (NM2-36P) spine switch

    Note: This switch is used in multirack configurations only.

    Exalogic Machine Full Rack: Not applicable
    Exalogic Machine Half Rack: Not applicable
    Exalogic Machine Quarter Rack: Not applicable
    Two Exalogic Machines: Not applicable