Oracle® SuperCluster T5-8 Owner's Guide

Updated: May 2016
Understanding Large Domains (Half Rack)

These topics provide information on the Large Domain configuration for the Half Rack:

Percentage of CPU and Memory Resource Allocation

One domain is set up on each SPARC T5-8 server in this configuration and occupies the entire server. Therefore, 100% of the CPU and memory resources on each server (all four sockets) are allocated to this single domain.


Note - You can use the CPU/Memory tool (setcoremem) to change this default allocation after the initial installation of your system, if you want to have some CPU or memory resources parked (unused). See Configuring CPU and Memory Resources (osc-setcoremem) for more information.
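
For example, from the control domain you can confirm that the single domain owns all of the server's CPU and memory resources before deciding whether to park any of them. This is a minimal sketch; the osc-setcoremem path shown is typical for SuperCluster systems but may differ on your installation:

    # ldm list                                     # the single domain shows all of the server's vCPUs and memory
    # /opt/oracle.supercluster/bin/osc-setcoremem  # interactive tool to reallocate or park CPU and memory resources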

Management Network

Two of the four 1-GbE host management ports are part of one IPMP group for this domain (see the configuration sketch after this list):

  • NET0

  • NET3
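
For illustration only, a minimal Oracle Solaris 11 sketch of an IPMP group over these two ports follows. The group name and address are assumptions for this sketch; on a deployed system, this group is created by the SuperCluster installation tooling:

    # ipadm create-ip net0
    # ipadm create-ip net3
    # ipadm create-ipmp -i net0,net3 sc_ipmp0      # assumed group name
    # ipadm create-addr -T static -a 192.0.2.10/24 sc_ipmp0/v4
    # ipmpstat -g                                  # verify the group and interface state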

10-GbE Client Access Network

All four PCI root complex pairs, and therefore all four 10-GbE NICs, are associated with the logical domain on the server in this configuration. However, only two of the four 10-GbE NICs are used with this domain, with one port used on each of those dual-ported NICs. The two ports, one on each of the two separate NICs, are part of one IPMP group. Only those two ports are connected to the 10-GbE network; the remaining ports and 10-GbE NICs are left unconnected.

The following 10-GbE NICs and ports are used for connection to the client access network for this configuration:

  • PCIe slot 1, port 0 (active)

  • PCIe slot 14, port 1 (standby)

A single data address is used to access these two physical ports. The data address allows traffic to continue flowing to the ports in the IPMP group, even if one of the two 10-GbE NICs fails.
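
As a hedged sketch, the active-standby IPMP group and its single data address might look like the following on Oracle Solaris 11. The datalink names net4 and net5, the group name, and the address are assumptions; use dladm show-phys to find the links that map to PCIe slot 1, port 0 and PCIe slot 14, port 1:

    # dladm show-phys                              # identify the 10-GbE datalinks for slots 1 and 14
    # ipadm create-ip net4                         # assumed: PCIe slot 1, port 0 (active)
    # ipadm create-ip net5                         # assumed: PCIe slot 14, port 1 (standby)
    # ipadm set-ifprop -p standby=on -m ip net5    # mark the slot 14 port as the standby interface
    # ipadm create-ipmp -i net4,net5 client_ipmp0  # assumed group name
    # ipadm create-addr -T static -a 203.0.113.20/24 client_ipmp0/v4
    # ipmpstat -g                                  # confirm the failover state of the group

With this layout, the data address stays reachable through the active port during normal operation and fails over to the standby port if the active NIC or its link fails.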


Note - If you are limited in the number of 10-GbE connections that you can make to your 10-GbE network, you can connect just one port in each IPMP group rather than both ports. However, you will not have the redundancy and increased bandwidth in that case.

InfiniBand Network

The connections to the InfiniBand network vary depending on the type of domain (a verification sketch follows the list):

  • Database Domain:

    • Storage private network: Connections through P1 (active) on the InfiniBand HCA associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA associated with the last CPU in the domain.

      So, for a Large Domain in a Half Rack, these connections would be through P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in slot 16 (standby).

    • Exadata private network: Connections through P0 (active) and P1 (standby) on all InfiniBand HCAs associated with the domain.

      So, for a Large Domain in a Half Rack, connections will be made through all four InfiniBand HCAs, with P0 on each as the active connection and P1 on each as the standby connection.

  • Application Domain:

    • Storage private network: Connections through P1 (active) on the InfiniBand HCA associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA associated with the last CPU in the domain.

      So, for a Large Domain in a Half Rack, these connections would be through P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in slot 16 (standby).

    • Oracle Solaris Cluster private network: Connections through P0 (active) on the InfiniBand HCA associated with the second CPU in the domain and P1 (standby) on the InfiniBand HCA associated with the third CPU in the domain.

      So, for a Large Domain in a Half Rack, these connections would be through P0 on the InfiniBand HCA installed in slot 11 (active) and P1 on the InfiniBand HCA installed in slot 8 (standby).
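
As a hedged verification sketch, you can inspect the InfiniBand HCA ports and the IPMP groups built on them from within the domain. These are standard Oracle Solaris 11 commands; the datalink and group names in the output are system-specific:

    # dladm show-ib      # HCA GUID, port GUID, port state, and partition keys for each IB port
    # dladm show-part    # IPoIB partition datalinks created over those ports
    # ipmpstat -g        # active/standby state of the IPMP groups on the private networks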