2 Understanding the Network Requirements for Oracle Exadata
Review the network requirements for Oracle Exadata before installing or configuring the hardware.
Note:
For ease of reading, the name "Oracle Exadata Rack" is used when information refers to both Oracle Exadata and Oracle Exadata Storage Expansion Rack.

- Overview of Network Requirements
  In addition to the database and storage servers, Oracle Exadata includes equipment to connect the system to your network. The network connections allow clients to connect to the database servers and also enable remote system administration.
- Network Channel Bonding Support
- Network Partitioning on Oracle Exadata
- Configuring a Separate Network for ILOM
  When configuring or re-imaging an Oracle Exadata Rack, you can use Oracle Exadata Deployment Assistant (OEDA) to configure a separate network for Integrated Lights Out Manager (ILOM).
- Default IP Addresses
  Starting with Oracle Exadata System Software release 12.1.2.1.0, the default administration network IP addresses are assigned dynamically by the elastic configuration procedure during the first start of the system.
- Default Port Assignments
2.1 Overview of Network Requirements
In addition to the database and storage servers, Oracle Exadata includes equipment to connect the system to your network. The network connections allow clients to connect to the database servers and also enable remote system administration.
Use the information in this section in conjunction with Oracle Exadata Deployment Assistant (OEDA) to configure your Oracle Exadata environment.
To deploy Oracle Exadata, ensure that you meet the minimum network requirements. Oracle Exadata requires a minimum of three networks, and there are interfaces available for additional networks. Each network must be on a separate and distinct subnet. The network descriptions are as follows:
- Administration Network: Also known as the management network, this required network connects to your existing management network infrastructure, and is used for administrative work on all components of Oracle Exadata. By default, the administration network connects the database servers, storage servers, server Integrated Lights Out Manager (ILOM) interfaces, and RDMA Network Fabric switches to the Management Network Switch in the rack. One uplink is required from the Management Network Switch to your management network.
Each database server and storage server has two network interfaces for administration. One interface provides management access to the operating system through a dedicated Ethernet port. The other network interface is dedicated to ILOM. By default, Oracle Exadata is delivered with both interfaces connected to the Management Network Switch. Cabling or configuration changes to these interfaces is not permitted, except that starting with Oracle Exadata System Software release 19.1.0, the ILOM interfaces can be connected to a dedicated ILOM network, which is separate from the administration network. The administration network interfaces on the database servers should not be used for client or application network traffic.
Notes:
- Separate uplinks to your management network are also recommended for remote monitoring of each power distribution unit (PDU). This configuration enables you to easily differentiate between system outages caused by PDU failure as opposed to failure of the Management Network Switch.
- A properly secured configuration requires full isolation of the administration network from all other networks.
- Client Network: This required network connects the database servers to your existing client network and is used for client access to the database servers. Applications access databases through this network using Single Client Access Name (SCAN) and Oracle RAC Virtual IP (VIP) addresses. Database servers support channel bonding to provide higher bandwidth or availability for client connections to the database. Non-bonded network configurations are not supported on Oracle Exadata X7 and later systems.
- Private Network: Also known as the RDMA Network Fabric, storage network, or interconnect. This network connects the database servers and storage servers. Oracle Database uses this network for Oracle RAC cluster interconnect traffic and for accessing data on the Oracle Exadata Storage Servers. The private network is automatically configured during installation. It is non-routable, fully contained in Oracle Exadata, and does not connect to your existing networks.
Starting with Oracle Exadata X8M, the private network uses RDMA over Converged Ethernet (RoCE).
Previously, the private network was built using InfiniBand technology. RoCE Network Fabric uses different switches and cables from those used by InfiniBand Network Fabric.
- Additional Networks: Database servers can optionally connect to additional networks using the available open ports not used by the administration network and the client network.
By using the OEDA Web user interface, you can create up to two additional networks. In OEDA, the first additional network is known as the Backup Network, and the second additional network is known as the Other Network. You can create more additional networks by using the OEDA command-line interface (OEDACLI).
Like the client network, the additional networks support channel bonding to maximize bandwidth and availability. Non-bonded network configurations are not supported on Oracle Exadata X7 and later systems.
The following diagram displays how the various Oracle Exadata components connect to the different networks.
2.2 Network Channel Bonding Support
A pair of database server network ports can be bonded to provide higher network availability or bandwidth for the client network and additional networks.
Non-bonded network configurations are not supported on Oracle Exadata X7 and later systems.
In a bonded network configuration:
- Use Oracle Exadata Deployment Assistant (OEDA) to specify the physical network interfaces that you want to bond for the client network and the additional networks, if configured. OEDA generates bonded network interfaces that amalgamate two physical network interfaces.
- Manual changes to the OEDA-generated bonding configuration are allowed but discouraged. Oracle will not support questions or issues associated with non-standard bonding configurations. In any case, bonding configurations having fewer than two network interfaces are not permitted.
- The bonded client network interface name is bondeth0. The bonded interface name for the first additional network, also known in OEDA as the Backup Network, is bondeth1. The bonded interface name for the second additional network, also known in OEDA as the Other Network, is bondeth2, and so on.
- The XML configuration file generated by OEDA includes detailed information that maps the bonded network interfaces to the underlying Ethernet ports.
- During the initial configuration using OEDA, the Linux bonding module is configured to use active-backup mode (mode=active-backup) by default. Additional configuration of other bonding parameters is allowed but is outside the scope of installation services and must be performed by customer network engineers. Reconfiguration to enable a different bonding policy is permitted but is discouraged. For further details, refer to the "Linux Ethernet Bonding Driver HOWTO" at https://www.kernel.org/doc/Documentation/networking/bonding.txt.
- You must provide network infrastructure (switches) capable of supporting the chosen bonding mode. For example, if Link Aggregation Control Protocol (LACP) is enabled (mode=802.3ad), then you must supply and configure the network switches accordingly. Requirements for specific bonding policies are documented in the "Linux Ethernet Bonding Driver HOWTO" at https://www.kernel.org/doc/Documentation/networking/bonding.txt.
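The constraints above can be sketched as a small validation routine. This is an illustrative model only: the function name and configuration representation are assumptions, and real bonding settings live in operating system network configuration files, not in Python.

```python
# Sketch: validate a proposed bonded interface against the constraints
# described above. The dict-style inputs are a hypothetical representation;
# actual settings live in OS network configuration files.

SUPPORTED_MODES = {"active-backup", "802.3ad"}  # active-backup is the OEDA default

def validate_bond(name: str, slaves: list[str], mode: str) -> list[str]:
    """Return a list of problems with a proposed bonded interface."""
    problems = []
    if len(slaves) < 2:
        # Bonding configurations with fewer than two interfaces are not permitted.
        problems.append(f"{name}: needs at least 2 slave interfaces, got {len(slaves)}")
    if mode not in SUPPORTED_MODES:
        problems.append(f"{name}: unrecognized bonding mode {mode!r}")
    elif mode == "802.3ad":
        # LACP requires matching configuration on the customer-supplied switches.
        problems.append(f"{name}: reminder - 802.3ad requires LACP-capable switch ports")
    return problems

print(validate_bond("bondeth0", ["eth1", "eth2"], "active-backup"))  # []
```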
2.3 Network Partitioning on Oracle Exadata
Oracle Exadata supports network partitioning using a variety of mechanisms.
- VLAN Support on Customer-Facing Networks
  Oracle Exadata can use VLANs to implement network partitioning in conjunction with the client, backup, administration, and ILOM networks.
- Access VLAN Support with RoCE Network Fabric
  Oracle Exadata can use Access VLAN settings to implement server-level isolation across the RoCE Network Fabric.
- Using Exadata Secure RDMA Fabric Isolation
  Starting with Oracle Exadata System Software release 20.1.0, you can configure the RoCE Network Fabric to enable Exadata Secure RDMA Fabric Isolation.
- Using InfiniBand Partitioning for Network Isolation with InfiniBand Network Fabric
  An InfiniBand partition defines a group of InfiniBand nodes or members that are allowed to communicate with one another.
An InfiniBand partition defines a group of InfiniBand nodes or members that are allowed to communicate with one another.
2.3.1 VLAN Support on Customer-Facing Networks
Oracle Exadata can use VLANs to implement network partitioning in conjunction with the client, backup, administration, and ILOM networks.
By default, the network switches are minimally configured, without VLAN tagging. If VLAN tagging is to be used, then it can be configured by the customer during the initial deployment. Customers can also configure VLAN tagging after the initial deployment. This applies to both physical and virtual machine (VM) deployments.
Notes:
- Oracle Exadata Deployment Assistant (OEDA) supports VLAN tagging for both physical and VM deployments.
- Network VLAN tagging is supported for Oracle Real Application Clusters (Oracle RAC) on the public network.
- Client and backup VLAN networks must be bonded. The administration network is never bonded.
- If the backup network is on a tagged VLAN network, the client network must also be on a separate tagged VLAN network.
- The backup and client networks can share the same network cables.
- VLAN tagging on the client and backup networks is supported with IPv4 and IPv6 on all hardware models. For IPv6 support on Oracle Database version 12.1.0.2 and later, patch 22289350 is also required.
- VM deployments do not support IPv6 VLANs.
- VLAN tagging on the administration network is only supported with IPv4 addresses on X3-2 and later two-socket servers, and X4-8 and later eight-socket servers.
- If the client network uses VLAN tagging and your system uses more than 10 Oracle Clusterware virtual IP (VIP) addresses, then you must use 3-digit VLAN IDs. Do not use 4-digit VLAN IDs because the VLAN interface name can exceed the operating system interface name limit, which is 15 characters.
Related Topics
- Implementing InfiniBand Partitioning across Oracle VM Oracle RAC Clusters on Oracle Exadata
- Enabling 802.1Q VLAN Tagging in Exadata Database Machine over client networks (My Oracle Support Doc ID 1423676.1)
- Implementing Tagged VLAN Interfaces in Oracle VM Environments on Exadata (My Oracle Support Doc ID 2018550.1)
Parent topic: Network Partitioning on Oracle Exadata
2.3.2 Access VLAN Support with RoCE Network Fabric
Oracle Exadata can use Access VLAN settings to implement server-level isolation across the RoCE Network Fabric.
By default, Oracle Exadata uses Access VLAN ID 3888 for all RoCE Network Fabric private network traffic, on the server re0 and re1 interfaces. This setting enables all database servers and storage servers to communicate freely with each other, and is suitable for many system configurations. However, you can change the Access VLAN ID to a non-default value to implement server-level isolation.
You can use this capability to create isolated groups of servers in an Oracle Exadata X8M system. For example, in a Half Rack X8M-2 system you might want to create two isolated server groups:
- Database servers 1 and 2, and storage servers 1, 2, and 3 using VLAN ID 3888
- Database servers 3 and 4, and storage servers 4, 5, 6, and 7 using VLAN ID 3889
With this configuration:
- Database servers 1 and 2 can access only storage servers 1, 2, and 3; they cannot access storage servers 4, 5, 6, or 7.
- Database servers 3 and 4 can access only storage servers 4, 5, 6, and 7; they cannot access storage servers 1, 2, and 3.
- Oracle Linux KVM guests on database servers 1 and 2 can communicate with each other, but cannot communicate with guests on database servers 3 and 4.
- Oracle Linux KVM guests on database servers 3 and 4 can communicate with each other, but cannot communicate with guests on database servers 1 and 2.
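The reachability rules in the example above reduce to a single condition: two servers can communicate only when their Access VLAN IDs match. A minimal sketch of that rule (server names are illustrative, not a real inventory format):

```python
# Sketch: model the server-level isolation in the Half Rack example above.
# Servers communicate only when their re0/re1 Access VLAN IDs match.
access_vlan = {
    "db1": 3888, "db2": 3888, "cell1": 3888, "cell2": 3888, "cell3": 3888,
    "db3": 3889, "db4": 3889, "cell4": 3889, "cell5": 3889,
    "cell6": 3889, "cell7": 3889,
}

def can_communicate(a: str, b: str) -> bool:
    return access_vlan[a] == access_vlan[b]

print(can_communicate("db1", "cell3"))  # True: both on VLAN 3888
print(can_communicate("db1", "cell4"))  # False: VLAN 3888 vs 3889
```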
Parent topic: Network Partitioning on Oracle Exadata
2.3.3 Using Exadata Secure RDMA Fabric Isolation
Starting with Oracle Exadata System Software release 20.1.0, you can configure the RoCE Network Fabric to enable Exadata Secure RDMA Fabric Isolation.
Exadata Secure RDMA Fabric Isolation enables strict network isolation for Oracle Real Application Clusters (Oracle RAC) clusters on Oracle Exadata systems that use RDMA over Converged Ethernet (RoCE).
Secure Fabric provides critical infrastructure for secure consolidation of multiple tenants on Oracle Exadata, where each tenant resides in a dedicated virtual machine (VM) cluster. Using this feature ensures that:
- Database servers in separate clusters cannot communicate with each other. They are completely isolated from each other on the network.
- Database servers in multiple clusters can share all of the storage server resources. However, even though the different clusters share the same storage network, no cross-cluster network traffic is possible.
Exadata Secure RDMA Fabric Isolation uses RoCE VLANs to ensure that a VM cluster cannot see network packets from another VM cluster. Secure Fabric uses a double VLAN tagging system, where one tag identifies the network partition and the other tag specifies the membership level of the server in the partition. Within each network partition, a partition member with full membership can communicate with all other partition members, including other full and limited members. Partition members with limited membership cannot communicate with other limited membership partition members. However, a partition member with limited membership can communicate with other full membership partition members.
With Secure Fabric, each database cluster uses a dedicated network partition and VLAN ID for cluster networking between the database servers, which supports Oracle RAC inter-node messaging. In this partition, all of the database servers are full members. They can communicate freely within the partition but cannot communicate with database servers in other partitions.
Another partition, with a separate VLAN ID, supports the storage network partition. The storage servers are full members in the storage network partition, and every database server VM is also a limited member. By using the storage network partition:
- Each database server can communicate with all of the storage servers.
- Each storage server can communicate with all of the database servers that they support.
- Storage servers can communicate directly with each other to perform cell-to-cell operations.
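The membership rules above follow a simple pattern: within a partition, only a pairing of two limited members is blocked. A minimal sketch of that rule (function and constant names are illustrative):

```python
# Sketch: the double-tag membership rules described above. Full members can
# communicate with everyone in the partition; limited members can communicate
# only with full members.
FULL, LIMITED = "full", "limited"

def partition_allows(member_a: str, member_b: str) -> bool:
    # Only a limited-to-limited pairing is blocked.
    return not (member_a == LIMITED and member_b == LIMITED)

# Storage partition: storage servers are full members, database VMs are limited.
print(partition_allows(LIMITED, FULL))     # True: DB VM <-> storage server
print(partition_allows(FULL, FULL))        # True: storage <-> storage
print(partition_allows(LIMITED, LIMITED))  # False: DB VMs in different clusters
```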
The following diagram illustrates the network partitions that support Exadata Secure RDMA Fabric Isolation. In the diagram, the line connecting the Sales VMs illustrates the Sales cluster network. The Sales cluster network is the dedicated network partition that supports cluster communication between the Sales VMs. The line connecting the HR VMs illustrates the HR cluster network. The HR cluster network is another dedicated network partition that supports cluster communication between the HR VMs. The lines connecting the database server VMs (Sales and HR) to the storage servers illustrate the storage network. The storage network is the shared network partition that supports communications between the database server VMs and the storage servers. But, it does not allow communication between the Sales and HR clusters.
Figure 2-1 Secure Fabric Network Partitions

Description of "Figure 2-1 Secure Fabric Network Partitions"
As illustrated in the diagram, each database server (KVM host) can support multiple VMs in separate database clusters. However, Secure Fabric does not support configurations where one database server contains multiple VMs belonging to the same database cluster. In other words, using the preceding example, one database server cannot support multiple Sales VMs or multiple HR VMs.
To support the cluster network partition and the storage network partition, each database server VM is plumbed with 4 virtual interfaces:
- clre0 and clre1 support the cluster network partition.
- stre0 and stre1 support the storage network partition.
Corresponding stre0 and stre1 interfaces are also plumbed on each storage server.
On each server, the RoCE network interface card acts like a switch on the hypervisor, which performs VLAN tag enforcement. Since this is done at the KVM host level, cluster isolation cannot be bypassed by any software exploits or misconfiguration on the database server VMs.
You can only enable Secure Fabric as part of the initial system deployment using Oracle Exadata Deployment Assistant (OEDA). You cannot enable Secure Fabric on an existing system without wiping the system and re-deploying it using OEDA. When enabled, Secure Fabric applies to all servers and clusters that share the same RoCE Network Fabric.
To use Secure Fabric you must:
- Configure the RoCE Network Fabric switch hardware to enable Secure Fabric. After you complete the switch configuration, the leaf switch ports become trunk ports, which can carry network traffic with multiple VLAN IDs. The switch configuration must occur before initial system deployment using OEDA. See Configuring the RoCE Network Fabric Switches to Enable Exadata Secure RDMA Fabric Isolation.
- As part of initial system deployment using OEDA, select the option to enable Secure Fabric and specify VLAN IDs for all of the network partitions. This option is one of the advanced options located in the Cluster Networks page of the OEDA Web user interface. See Using the Browser-based Version of Oracle Exadata Deployment Assistant.
Parent topic: Network Partitioning on Oracle Exadata
2.3.4 Using InfiniBand Partitioning for Network Isolation with InfiniBand Network Fabric
An InfiniBand partition defines a group of InfiniBand nodes or members that are allowed to communicate with one another.
InfiniBand partitioning enables network separation between different clusters on systems with InfiniBand Network Fabric.
InfiniBand partitions are created and managed by the master subnet manager. Each partition is identified by a unique partition key, and partition members use the key for communication within the partition. Members within a partition can only communicate among themselves.
With Oracle Exadata, each database cluster uses a dedicated network partition for cluster networking between the database servers. All of the database servers can communicate freely within the partition but cannot communicate with database servers in other partitions. Another partition enables communication between each database cluster and the storage servers. By using this partition, database servers can communicate with all of the storage servers, storage servers can communicate with all of the database servers that they support, and storage servers can communicate directly with each other to perform cell-to-cell operations.
You can use InfiniBand partitioning on physical or virtual machine (VM) deployments.
For details see Configuring InfiniBand Partitioning.
Parent topic: Network Partitioning on Oracle Exadata
2.4 Configuring a Separate Network for ILOM
When configuring or re-imaging an Oracle Exadata Rack, you can use Oracle Exadata Deployment Assistant (OEDA) to configure a separate network for Integrated Lights Out Manager (ILOM).
Before Oracle Exadata System Software release 19.1.0, the Exadata servers and ILOM interfaces required network access to each other for certain features, such as alert notification. Starting with Oracle Exadata System Software release 19.1.0, this network dependency is removed while maintaining all of the previously supported features. Now, you can configure ILOM interfaces on a completely separate network.
2.5 Default IP Addresses
Starting with Oracle Exadata System Software release 12.1.2.1.0, the default administration network IP addresses are assigned dynamically by the elastic configuration procedure during the first start of the system.
The default administration network IP addresses are in the 172.16.2.1 to 172.16.7.254 range. In earlier releases, Oracle Exadata had default IP addresses set at the factory, and the range of IP addresses was 192.168.1.1 to 192.168.1.203.
Note:
Prior to connecting Oracle Exadata to the network, ensure these IP addresses do not conflict with other addresses on the network. Use the checkip.sh script generated by Oracle Exadata Deployment Assistant (OEDA) to check for conflicts. Run the checkip.sh script on the network after the DNS entries for the Oracle Exadata have been created, but before the Oracle Exadata is configured or connected to the network. Oracle recommends running the script to avoid configuration delays, even if a check was performed as part of the planning process before the machine was delivered. See Verifying the Network Configuration Prior to Configuring the Rack.
If you run OEDA on a Microsoft Windows system, then the generated script is checkip.bat.
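As a rough pre-check in the spirit of checkip.sh, you can flag existing host addresses that fall inside the default administration range. This sketch does not replace running the OEDA-generated script, which performs the authoritative checks.

```python
# Sketch: flag host addresses inside the default administration IP range
# (172.16.2.1 - 172.16.7.254). Not a substitute for the OEDA checkip.sh script.
import ipaddress

DEFAULT_START = ipaddress.ip_address("172.16.2.1")
DEFAULT_END = ipaddress.ip_address("172.16.7.254")

def conflicts_with_default_range(host_ip: str) -> bool:
    ip = ipaddress.ip_address(host_ip)
    return DEFAULT_START <= ip <= DEFAULT_END

print(conflicts_with_default_range("172.16.5.10"))  # True: inside default range
print(conflicts_with_default_range("10.0.0.5"))     # False: outside default range
```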
2.6 Default Port Assignments
The following table lists ports used by services on Oracle Exadata. The table shows default port assignments, which may vary from system to system based on implementation-specific customizations. Review the list and open the necessary ports to enable network communication through a firewall.
Table 2-1 Default Port Assignments
| Source | Target | Protocol | Port | Network | Application |
|---|---|---|---|---|---|
| Any | Database servers, Exadata Storage Servers, network switches, and Integrated Lights Out Manager (ILOM) interfaces | SSH over TCP | 22 | Administration | SSH |
| Exadata Storage Servers | SMTP e-mail server | SMTP | 25 | Administration | SMTP (Simple Mail Transfer Protocol) |
| Database servers | DNS servers | UDP or TCP | 53 | Client | DNS (Domain Name System) |
| Database servers, Exadata Storage Servers, network switches, and ILOM interfaces | DNS servers | UDP or TCP | 53 | Administration | DNS |
| Any | ILOM interfaces on database servers, Exadata Storage Servers, and ILOM-enabled network switches | HTTP | 80 | Administration | ILOM Web interface (user configurable; default: redirection to port 443) |
| Any | rpcbind | TCP | 111 | Administration | rpcbind |
| Database servers | NTP servers | NTP over UDP | 123 | Client | Outgoing Network Time Protocol (NTP) |
| Database servers, Exadata Storage Servers, network switches, and ILOM interfaces | NTP servers | NTP over UDP | 123 | Administration | Outgoing NTP |
| Any | ILOM interfaces on database servers, Exadata Storage Servers, and ILOM-enabled network switches | SNMP over UDP | 161 | Administration | SNMP (Simple Network Management Protocol) (user configurable) |
| Any | PDU | SNMP over UDP | 161 | Administration | SNMP (user configurable) |
| Exadata Storage Servers | SNMP subscriber such as Oracle Enterprise Manager Cloud Control or an SNMP manager | SNMP | 162 | Administration | SNMP version 1 (SNMPv1) outgoing traps (user configurable) |
| Database servers, Exadata Storage Servers, network switches, and ILOM interfaces | ASR Manager | SNMP | 162 | Administration | Telemetry messages sent to ASR Manager |
| ILOM interfaces on database servers, Exadata Storage Servers, and ILOM-enabled network switches | Any | IPMI over UDP | 162 | Administration | Outgoing Intelligent Platform Management Interface (IPMI) Platform Event Trap (PET) |
| Exadata Storage Server ILOMs | Management Server (MS) | SNMPv3 | 162 | Administration | Exadata Storage Server ILOM SNMP notification rules |
| PDU | SNMP trap receivers | SNMP over UDP | 162 | Administration | Outgoing SNMPv2 traps |
| Any | Management Server (MS) on Exadata Storage Servers | HTTPS | 443 | Administration | Requests from ExaCLI and/or RESTful API calls |
| Any | ILOM interfaces on database servers, Exadata Storage Servers, and ILOM-enabled network switches | HTTPS | 443 | Administration | ILOM Web interface (user configurable) |
| Any | PDU | HTTPS | 443 | Administration | PDU Web interface |
| Exadata Storage Servers | SMTPS client | SMTPS | 465 | Administration | Simple Mail Transfer Protocol, Secure (if configured) |
| Database servers, Exadata Storage Servers, network switches, and ILOM interfaces | Syslog server | Syslog over UDP | 514 | Administration | Outgoing Syslog |
| PDU | Syslog server | Syslog over UDP | 514 | Administration | Outgoing Syslog |
| Any | ILOM interfaces on database servers, Exadata Storage Servers, and ILOM-enabled network switches | IPMI over UDP | 623 | Administration | IPMI |
| Any | plathwsvcd | TCP | 723 | Administration | plathwsvcd |
| Any | evnd | TCP | 791 | Administration | evnd |
| Any | partitiond | TCP | 867 | Administration | partitiond |
| Any | Database servers | TCP | 1521 | Client | Database listener |
| Any | tgtd | TCP | 3260 | Administration | SCSI target daemon |
| Any | Database servers | TCP | 3872 | Administration | Java EM agent |
| Any | Exadata Storage Servers | TCP | 5053 | Administration | Fast node death detection (FNDD) on RDMA over Converged Ethernet (RoCE) systems |
| Any | ILOM interfaces on database servers and Exadata Storage Servers | TCP | 5120 | Administration | ILOM remote console: CD |
| Any | ILOM interfaces on database servers and Exadata Storage Servers | TCP | 5121 | Administration | ILOM remote console: keyboard and mouse |
| Any | ILOM interfaces on database servers and Exadata Storage Servers | TCP | 5123 | Administration | ILOM remote console: diskette |
| Any | ILOM interfaces on database servers and Exadata Storage Servers | TCP | 5555 | Administration | ILOM remote console: encryption |
| Any | ILOM interfaces on database servers and Exadata Storage Servers | TCP | 5556 | Administration | ILOM remote console: authentication |
| ASR Manager | ILOM interfaces on database servers and Exadata Storage Servers | HTTP | 6481 | Administration | Service tag listener for asset activation |
| Any | ILOM interfaces on database servers and Exadata Storage Servers | TCP | 6481 | Administration | ILOM remote console: Servicetag daemon |
| Any | ILOM interfaces on database servers and Exadata Storage Servers | TCP | 7578 | Administration | ILOM remote console: video |
| Any | ILOM interfaces on database servers and Exadata Storage Servers | TCP | 7579 | Administration | ILOM remote console: serial |
| Any | Database servers and Exadata Storage Servers | TCP | 7777 | Both | Oracle Enterprise Manager Grid Control HTTP console port |
| Any | Database servers and Exadata Storage Servers | TCP | 7799 | Both | Oracle Enterprise Manager Grid Control HTTPS console port |
| Any | Management Server (MS) on database servers and Exadata Storage Servers | TCP | 7878, 8888 | Administration | MS access through Oracle WebLogic (applies only to Oracle Exadata System Software before release 20.1.0) |
| Any | Management Server (MS) on database servers | HTTPS | 7879 | Administration | Requests from ExaCLI and/or RESTful API calls |
| Database servers and Exadata Storage Servers | ASR Manager | HTTPS | 8100, 16161 | Administration | Diagpack uploads |
| Database server ILOM interfaces | Management Server (MS) | SNMPv3 | 8162 | Administration | Database Server ILOM SNMP notification rules |
| Any | rpc.statd | TCP | 21408, 40801, 41460, 47431 | Administration | rpc.statd |
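When opening firewall ports from Table 2-1, it can help to group the entries by network. The sketch below copies a few rows from the table into a small structure and derives per-network port lists; a real deployment should work from the full table and any site-specific customizations.

```python
# Sketch: derive per-network firewall allow-lists from a small excerpt of
# Table 2-1. The tuples copy rows from the table; this is illustrative only.
RULES = [
    ("SSH", "TCP", 22, "Administration"),
    ("DNS", "TCP/UDP", 53, "Client"),
    ("NTP", "UDP", 123, "Administration"),
    ("ILOM Web interface", "TCP", 443, "Administration"),
    ("Database listener", "TCP", 1521, "Client"),
]

def ports_for(network: str) -> list[int]:
    """Return the sorted ports used on the given network."""
    return sorted(port for _, _, port, net in RULES if net == network)

print(ports_for("Client"))          # [53, 1521]
print(ports_for("Administration"))  # [22, 123, 443]
```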
See Also:
Managing Oracle Database Port Numbers in Oracle Real Application Clusters Installation Guide for Linux and UNIX.