An Exalogic machine includes compute nodes, a storage appliance, and equipment to connect the compute nodes to your network. The network connections allow the servers to be administered remotely, enable clients to connect to the compute nodes, and enable client access to the storage appliance.
The following table describes the network components and interfaces for each compute node and the storage appliance:
Table 6-1 Available network components and interfaces on the compute nodes and storage appliance

Network Component | Compute Node | Storage Appliance (two server heads)
---|---|---
Gigabit Ethernet (GbE) ports | 4 (only NET0, or igb0 on Oracle Solaris, is connected and used) | 4 per server head (1 GbE and 10 GbE ports available for Exalogic X4 and newer systems; 1 GbE for earlier systems)
Dual-port QDR InfiniBand Host Channel Adapter | 1 | 1 per server head
Ethernet port for ILOM remote management | 1 (this port is not connected or used; sideband management through the NET0 port is used instead) | 4 per head (the ETH0 and ETH1 interfaces are used for active and passive clustering support; the dedicated ILOM port is not used; sideband management through the igb0 port is used instead)
Note:
These ports are pre-wired in the Exalogic machine at the time of manufacturing. Do not touch or modify the ports.
The Cisco Ethernet switch supplied with the Exalogic machine is minimally configured during installation. The minimal configuration disables IP routing, and sets the following:
Host name
IP address
Subnet mask
Default gateway
Domain name
Domain Name Server
NTP server
Time
Time zone
Additional configuration, such as defining multiple virtual local area networks (VLANs) or enabling routing, may be required for the switch to operate properly in your environment and is beyond the scope of the installation service.
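As an illustration only, the minimal settings listed above map to IOS-style commands along the following lines. This is a hedged sketch, not the configuration applied by the installation service; the host name, addresses, and domain below are placeholders:

```
! Illustrative IOS-style configuration; all values are site-specific placeholders
no ip routing                       ! IP routing is disabled
hostname exaswitch01                ! placeholder host name
interface vlan 1
 ip address 192.0.2.10 255.255.255.0
ip default-gateway 192.0.2.1
ip domain-name example.com          ! placeholder domain name
ip name-server 192.0.2.53           ! Domain Name Server
ntp server 192.0.2.123              ! NTP server
clock timezone UTC 0                ! time zone
```

The exact command syntax varies by switch model and software release; consult the documentation for the Cisco switch supplied with your machine.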
To deploy the Exalogic machine, verify that you meet the minimum network requirements. There are up to five networks for an Exalogic machine. Each network must be on a distinct and separate subnet from the others. The network descriptions are as follows:
Management network: This required network connects to your existing management network and is used for administrative work on all components of the Exalogic machine. It connects the ILOM interfaces, compute nodes, server heads in the storage appliance, and switches to the Ethernet switch in the Exalogic machine rack. This management network is on a single subnet. ILOM connectivity uses the NET0 (on Oracle Solaris, igb0) sideband interface.
For multirack configurations, you may have any of the following:
A single subnet per configuration
A single subnet per rack in the multirack configuration
Multiple subnets per configuration
Oracle recommends that you configure a single subnet per configuration.
With sideband management, only the NET0 (on Oracle Solaris, igb0) interface of each compute node is physically connected to the Ethernet switch on the rack. For the server heads in the storage appliance, the NET0 and NET1 interfaces (on Oracle Solaris, igb0 and igb1) are physically connected to support active-passive clustering.
Note:
Do not use the management network interface (NET0 on Oracle Linux, igb0 on Oracle Solaris) on compute nodes for client or application network traffic. Cabling or configuration changes to these interfaces on Exalogic compute nodes are not permitted.
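On an Oracle Linux compute node, you can confirm how the management interface is configured with standard tools. A diagnostic sketch (the interface name for NET0 is assumed to appear as eth0; it may differ on your system):

```
ip addr show eth0      # management address on the NET0 interface (name assumed)
ip -d link show eth0   # verify no client VLANs or bonds are layered on NET0
```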
InfiniBand private network: This required network connects the compute nodes and the storage appliance through the BOND0 interface to the InfiniBand switches/gateways on the Exalogic rack. It is the default IP over InfiniBand (IPoIB) subnet, created automatically during the initial configuration of the Exalogic machine.
Note:
This network is either based on the default InfiniBand partition or based on a partition allocated for the Exalogic machine. A single default partition is defined at the rack level. For more information, see Work with the Default Rack-Level InfiniBand Partition.
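To verify the default IPoIB bond on an Oracle Linux compute node, administrators typically inspect the bonding driver state. A command sketch (interface and tool availability assumed):

```
cat /proc/net/bonding/bond0   # bonding mode and currently active slave
ip addr show bond0            # IPoIB address on the private subnet
ibstat                        # HCA port state, if the InfiniBand tools are installed
```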
Client access network: This required network connects the compute nodes to your existing client network through the BOND1 interface and is used for client access to the compute nodes (primarily in a physical Exalogic deployment). Each Exalogic compute node has a single default client access (edge network) connection to an external 10 Gb Ethernet network through a Sun Network QDR InfiniBand Gateway Switch.
The logical network interface of each compute node for client access network connectivity is bonded. BOND1 consists of two Ethernet over InfiniBand (EoIB) vNICs. Each vNIC is mapped to a separate Sun Network QDR InfiniBand Gateway Switch for high availability (HA), and each host EoIB vNIC is associated with a different HCA InfiniBand port (on Oracle Linux, vNIC0 -> ib0 and vNIC1 -> ib1; on Oracle Solaris, vNIC0 -> ibp0 and vNIC1 -> ibp1).
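The vNIC mapping just described can be captured as data, for example to sanity-check that the two vNICs of BOND1 really land on different HCA ports and different gateway switches. A hypothetical check; the gateway switch names are placeholders, not values from an actual rack:

```python
# Documented EoIB vNIC mapping for BOND1 on Oracle Linux;
# gateway switch names are placeholders
bond1_vnics = {
    "vNIC0": {"hca_port": "ib0", "gateway_switch": "gw-switch-1"},
    "vNIC1": {"hca_port": "ib1", "gateway_switch": "gw-switch-2"},
}

def is_ha(vnics):
    """True if every vNIC uses a distinct HCA port and a distinct gateway switch."""
    ports = {v["hca_port"] for v in vnics.values()}
    switches = {v["gateway_switch"] for v in vnics.values()}
    return len(ports) == len(vnics) and len(switches) == len(vnics)

print(is_ha(bond1_vnics))  # True: each vNIC has its own port and switch
```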
Additional networks (optional): Each Sun Network QDR InfiniBand Gateway Switch has eight 10 Gb Ethernet ports. The number of ports used in an Exalogic deployment depends on your specific bandwidth requirements (how many 10 Gb ports are shared per compute node) and on your specific LAN/VLAN connection requirements. A group of 16 compute nodes shares two Sun Network QDR InfiniBand Gateway Switches in an active-passive bond, and each compute node is connected to the two separate gateway switches for HA.
Note that each compute node requires a bond for each external network (physical network or VLAN).
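Because each of the (up to) five networks must be on a distinct and separate subnet, a quick pre-deployment check with Python's standard ipaddress module can catch overlaps in a network plan. The subnet values below are placeholders, not recommendations:

```python
import ipaddress
from itertools import combinations

# Placeholder subnets for the Exalogic networks; substitute your own plan
planned_subnets = {
    "management": "192.0.2.0/24",
    "ipoib":      "192.168.10.0/24",
    "client":     "203.0.113.0/24",
}

def overlapping_pairs(subnets):
    """Return pairs of network names whose subnets overlap."""
    nets = {name: ipaddress.ip_network(cidr) for name, cidr in subnets.items()}
    return [(a, b) for a, b in combinations(nets, 2) if nets[a].overlaps(nets[b])]

print(overlapping_pairs(planned_subnets))  # [] when every network is on its own subnet
```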
Figure 6-1 shows the network diagram for the Exalogic machine with Oracle Linux operating system.
Figure 6-1 Network Diagram for Exalogic Machine
Note:
If you are using Oracle Solaris, you can assign the logical names of the IPMP groups to be ipmp0 or BOND0, and ipmp1 or BOND1, and have the name of the datalink corresponding to the NET0 Ethernet port displayed as igb0 or net0 in the Solaris administration commands. For more information, see IPMP Overview for Oracle Solaris Users.
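The IPMP group and datalink names mentioned above can be inspected with the standard Oracle Solaris administration commands. A sketch (the group and link names shown are examples from this chapter, not guaranteed output):

```
ipmpstat -g        # list IPMP groups (for example, ipmp0/BOND0 and ipmp1/BOND1)
dladm show-phys    # physical datalink names (for example, net0 for the NET0 port)
ipadm show-addr    # addresses configured on the IPMP interfaces
```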