Oracle® SuperCluster T5-8 Owner's Guide

Updated: May 2016

Understanding Medium Domains (Half Rack)

These topics describe the Medium Domain configuration for the Half Rack:

Percentage of CPU and Memory Resource Allocation

The amount of CPU and memory resources that you allocate to a Medium Domain varies depending on the size of the other domains that are also on the SPARC T5-8 server (see the example following these options):

  • Config H2-1 (Two Medium Domains): The following options are available for CPU and memory resource allocation:

    • Two sockets for each Medium Domain

    • One socket for the first Medium Domain, three sockets for the second Medium Domain

    • Three sockets for the first Medium Domain, one socket for the second Medium Domain

    • Four cores for the first Medium Domain, the remaining cores for the second Medium Domain (in this case, the first Medium Domain must be either a Database Domain or an Application Domain running Oracle Solaris 11)

  • Config H3-1 (One Medium Domain and two Small Domains): The following options are available for CPU and memory resource allocation:

    • Two sockets for the Medium Domain, one socket apiece for the two Small Domains

    • One socket for the Medium Domain, two sockets for the first Small Domain, one socket for the second Small Domain

    • One socket each for the Medium Domain and the first Small Domain, two sockets for the second Small Domain
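
As a point of reference, the following is a minimal sketch of how a domain's CPU and memory allocation might be viewed or set with Oracle VM Server for SPARC (ldm) commands from the control domain. The domain name ssccn1-dom1, the two-socket core count (32 cores on a SPARC T5-8, at 16 cores per socket), and the memory size are assumptions for illustration only; on Oracle SuperCluster, domain sizing is normally established by the configuration tools at installation rather than changed manually.

  # View the CPU and memory currently bound to the domain
  # (the domain name is hypothetical).
  ldm list -o cpu,memory ssccn1-dom1

  # Illustrative two-socket allocation (16 cores per socket on a T5-8)
  # with an assumed share of memory.
  ldm set-core 32 ssccn1-dom1
  ldm set-memory 512G ssccn1-dom1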

Management Network

Two 1-GbE host management ports are part of one IPMP group for each Medium Domain. The following 1-GbE host management ports are associated with each Medium Domain, depending on how many domains are on the SPARC T5-8 server in the Half Rack (an example IPMP configuration follows this list):

  • First Medium Domain: NET0 and NET1

  • Second Medium Domain, if applicable: NET2 and NET3
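
The following is a minimal sketch of how the two host management interfaces might be grouped into one IPMP group on Oracle Solaris 11. The interface names net0 and net1, the group name mgmt_ipmp0, and the address 192.0.2.10/24 are assumptions for illustration; on Oracle SuperCluster this grouping is normally configured during installation.

  # Create IP interfaces over the two 1-GbE host management ports,
  # group them into one IPMP interface, and assign the single data address.
  ipadm create-ip net0
  ipadm create-ip net1
  ipadm create-ipmp -i net0 -i net1 mgmt_ipmp0
  ipadm create-addr -T static -a 192.0.2.10/24 mgmt_ipmp0/v4

  # Verify the group and the state of its underlying interfaces.
  ipmpstat -g
  ipmpstat -i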

10-GbE Client Access Network

Two PCI root complex pairs, and therefore two 10-GbE NICs, are associated with each Medium Domain on the SPARC T5-8 server in the Half Rack. One port is used on each dual-ported 10-GbE NIC. The two ports on the two separate 10-GbE NICs are part of one IPMP group.

The following 10-GbE NICs and ports are used for connection to the client access network for this configuration:

  • First Medium Domain:

    • PCIe slot 1, port 0 (active)

    • PCIe slot 9, port 1 (standby)

  • Second Medium Domain, if applicable:

    • PCIe slot 6, port 0 (active)

    • PCIe slot 14, port 1 (standby)

A single data address is used to access these two physical ports. That data address allows traffic to continue flowing to the ports in the IPMP group even if one of the two 10-GbE NICs fails.
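
The following is a minimal sketch of how an active/standby IPMP group over the two 10-GbE ports might look on Oracle Solaris 11. The interface names net4 and net5, the group name client_ipmp0, and the address 192.0.2.20/24 are assumptions for illustration; the actual interface names depend on the PCIe slots in use, and on Oracle SuperCluster this grouping is normally configured during installation.

  # Place one port from each 10-GbE NIC in one IPMP group, marking the
  # second port as standby so it carries traffic only if the active port
  # or its NIC fails.
  ipadm create-ip net4
  ipadm create-ip net5
  ipadm create-ipmp -i net4 -i net5 client_ipmp0
  ipadm set-ifprop -p standby=on -m ip net5
  ipadm create-addr -T static -a 192.0.2.20/24 client_ipmp0/v4

  # Confirm which interface is active and which is standby.
  ipmpstat -i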


Note - If you are limited in the number of connections that you can make to your 10-GbE network, you can connect just one port in each IPMP group rather than both ports. However, you will not have the redundancy and increased bandwidth in this case.

InfiniBand Network

The connections to the InfiniBand network vary depending on the type of domain (an example of checking the InfiniBand HCA ports follows these descriptions):

  • Database Domain:

    • Storage private network: Connections through P1 (active) on the InfiniBand HCA associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA associated with the second CPU in the domain.

      So, for the first Medium Domain in a Half Rack, these connections would be through P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in slot 11 (standby).

    • Exadata private network: Connections through P0 (active) and P1 (standby) on all InfiniBand HCAs associated with the domain.

      So, for the first Medium Domain in a Half Rack, connections would be made through both InfiniBand HCAs (slot 3 and slot 11), with P0 on each as the active connection and P1 on each as the standby connection.

  • Application Domain:

    • Storage private network: Connections through P1 (active) on the InfiniBand HCA associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA associated with the second CPU in the domain.

      So, for the first Medium Domain in a Half Rack, these connections would be through P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in slot 11 (standby).

    • Oracle Solaris Cluster private network: Connections through P0 (active) on the InfiniBand HCA associated with the first CPU in the domain and P1 (standby) on the InfiniBand HCA associated with the second CPU in the domain.

      So, for the first Medium Domain in a Half Rack, these connections would be through P0 on the InfiniBand HCA installed in slot 3 (active) and P1 on the InfiniBand HCA installed in slot 11 (standby).
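
As a point of reference, the following is a minimal sketch of how the InfiniBand HCA ports and the IPoIB datalinks used by these private networks might be checked from Oracle Solaris 11 within a domain. No slot-to-interface mapping is assumed; the commands only display what the domain can see.

  # List the InfiniBand HCAs and the state of their ports (P0 and P1)
  # as seen by this domain.
  dladm show-ib

  # List the IPoIB partition datalinks that the private networks use.
  dladm show-part

  # Show the IPMP groups built on top of those datalinks.
  ipmpstat -g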