Oracle® SuperCluster T5-8 Owner's Guide

Updated: May 2016

Understanding Large Domains (Full Rack)

These topics provide information on the Large Domain configuration for the Full Rack:

Percentage of CPU and Memory Resource Allocation

The amount of CPU and memory resources that you allocate to each logical domain varies, depending on the size of the other domains that are also on the SPARC T5-8 server (an illustrative allocation sketch follows the configuration options below):

  • Config F2-1 (Two Large Domains): The following options are available for CPU and memory resource allocation:

    • Four sockets for each Large Domain

    • Two sockets for the first Large Domain, six sockets for the second Large Domain

    • Six sockets for the first Large Domain, two sockets for the second Large Domain

    • One socket for the first Large Domain, seven sockets for the second Large Domain

    • Seven sockets for the first Large Domain, one socket for the second Large Domain

  • Config F3-1 (One Large Domain and two Medium Domains): The following options are available for CPU and memory resource allocation:

    • Four sockets for the Large Domain, two sockets apiece for the two Medium Domains

    • Two sockets for the Large Domain, four sockets for the first Medium Domain, two sockets for the second Medium Domain

    • Two sockets apiece for the Large Domain and the first Medium Domain, four sockets for the second Medium Domain

    • Six sockets for the Large Domain, one socket apiece for the two Medium Domains

    • Five sockets for the Large Domain, two sockets for the first Medium Domain, one socket for the second Medium Domain

    • Five sockets for the Large Domain, one socket for the first Medium Domain, two sockets for the second Medium Domain

  • Config F4-2 (One Large Domain, two Small Domains, one Medium Domain): The following options are available for CPU and memory resource allocation:

    • Four sockets for the Large Domain, one socket apiece for the two Small Domains, two sockets for the Medium Domain

    • Three sockets for the Large Domain, one socket apiece for the two Small Domains, three sockets for the Medium Domain

    • Two sockets for the Large Domain, one socket apiece for the two Small Domains, four sockets for the Medium Domain

    • Five sockets for the Large Domain, one socket apiece for the two Small Domains and the Medium Domain

  • Config F5-2 (One Large Domain, four Small Domains): The following options are available for CPU and memory resource allocation:

    • Four sockets for the Large Domain, one socket apiece for the four Small Domains

    • Three sockets for the Large Domain, one socket apiece for the first three Small Domains, two sockets for the fourth Small Domain

    • Two sockets for the Large Domain, one socket apiece for the first and second Small Domains, two sockets apiece for the third and fourth Small Domains

    • Two sockets for the Large Domain, one socket apiece for the first three Small Domains, three sockets for the fourth Small Domain

    • Two sockets apiece for the Large Domain and the first Small Domain, one socket apiece for the second and third Small Domains, two sockets for the fourth Small Domain
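
The socket counts shown above map directly to core and memory assignments for each logical domain. The following is an illustrative sketch only, using generic Oracle VM Server for SPARC (ldm) commands, a hypothetical domain name, and an assumed memory share; each SPARC T5 socket provides 16 cores, so a four-socket Large Domain corresponds to 64 cores. Use the SuperCluster-supported resource-allocation procedures for actual changes.

  # Review the current core and memory assignments for all domains.
  ldm list

  # Hypothetical domain name; a four-socket Large Domain on a SPARC T5-8
  # corresponds to 4 x 16 = 64 cores.
  ldm set-core 64 ssccn1-dom1

  # Assign a proportional share of memory (the value shown is illustrative).
  ldm set-memory 2048G ssccn1-dom1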

Management Network

Two 1-GbE host management ports are part of one IPMP group for each Large Domain. The following 1-GbE host management ports are associated with each Large Domain, depending on how many domains are on the SPARC T5-8 server in the Full Rack (an illustrative IPMP sketch follows this list):

  • First Large Domain: NET0 and NET1

  • Second Large Domain, if applicable: NET2 and NET3
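
As a minimal sketch, the following Oracle Solaris 11 ipadm commands show how the two 1-GbE host management ports of the first Large Domain could be grouped into one IPMP group. The datalink names (net0 and net1), the IPMP group name, and the IP address are assumptions for illustration; confirm the actual datalinks with dladm show-phys.

  # Create IP interfaces over the two host management datalinks
  # (hypothetical names net0 and net1, backing NET0 and NET1).
  ipadm create-ip net0
  ipadm create-ip net1

  # Group them into one IPMP interface and assign the management address
  # (the address shown is a placeholder).
  ipadm create-ipmp -i net0 -i net1 mgmt_ipmp0
  ipadm create-addr -T static -a 203.0.113.10/24 mgmt_ipmp0/v4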

10-GbE Client Access Network

Four PCI root complex pairs, and therefore four 10-GbE NICs, are associated with each Large Domain on the SPARC T5-8 server in the Full Rack. However, only two of the four 10-GbE NICs are used for the client access network, with one port used on each of those dual-ported NICs. The two ports on the two separate 10-GbE NICs are part of one IPMP group.

The following 10-GbE NICs and ports are used for connection to the client access network for this configuration:

  • First Large Domain:

    • PCIe slot 1, port 0 (active)

    • PCIe slot 10, port 1 (standby)

  • Second Large Domain, if applicable:

    • PCIe slot 5, port 0 (active)

    • PCIe slot 14, port 1 (standby)

A single data address is used to access these two physical ports. That data address allows traffic to continue flowing to the ports in the IPMP group, even if one of the two 10-GbE NICs fails.


Note - You can also connect just one port in each IPMP group to the 10-GbE network rather than both ports, if you are limited in the number of 10-GbE connections that you can make to your 10-GbE network. However, you will not have the redundancy and increased bandwidth in this case.
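
The active/standby arrangement described above can be expressed as an IPMP group in which the standby port carries traffic only after a failure. The following Oracle Solaris 11 ipadm commands are a sketch under assumed datalink names (net4 for the PCIe slot 1 port and net5 for the PCIe slot 10 port of the first Large Domain) and a placeholder data address; confirm the actual names with dladm show-phys.

  # Create IP interfaces over the two 10-GbE ports (hypothetical datalink names).
  ipadm create-ip net4
  ipadm create-ip net5

  # Mark the PCIe slot 10 port as the standby interface in the group.
  ipadm set-ifprop -p standby=on -m ip net5

  # Create the IPMP group and the single data address used by clients.
  ipadm create-ipmp -i net4 -i net5 client_ipmp0
  ipadm create-addr -T static -a 198.51.100.20/24 client_ipmp0/v4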

InfiniBand Network

The connections to the InfiniBand network vary, depending on the type of domain (an illustrative sketch for the storage private network follows this list):

  • Database Domain:

    • Storage private network: Connections through P1 (active) on the InfiniBand HCA associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA associated with the last CPU in the domain.

      For example, for the first Large Domain in a Full Rack, these connections would be through P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in slot 12 (standby).

    • Exadata private network: Connections through P0 (active) and P1 (standby) on all InfiniBand HCAs associated with the domain.

      Thus, for a Large Domain in a Full Rack, connections are made through the four InfiniBand HCAs associated with the domain, with P0 on each HCA as the active connection and P1 on each HCA as the standby connection.

  • Application Domain:

    • Storage private network: Connections through P1 (active) on the InfiniBand HCA associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA associated with the last CPU in the domain.

      For example, for the first Large Domain in a Full Rack, these connections would be through P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in slot 12 (standby).

    • Oracle Solaris Cluster private network: Connections through P0 (active) on the InfiniBand HCA associated with the second CPU in the domain and P1 (standby) on the InfiniBand HCA associated with the third CPU in the domain.

      For example, for the first Large Domain in a Full Rack, these connections would be through P0 on the InfiniBand HCA installed in slot 11 (active) and P1 on the InfiniBand HCA installed in slot 4 (standby).
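
As an illustrative sketch, the following Oracle Solaris 11 commands show how the storage private network connections described above (P1 on the slot 3 HCA as active, P0 on the slot 12 HCA as standby) could appear as IPoIB partition datalinks grouped with IPMP. The datalink names, the partition key, and the address are placeholders; the actual partition keys and addresses are assigned during SuperCluster installation.

  # List the InfiniBand HCA ports and the partition keys available to them.
  dladm show-ib

  # Create IPoIB partition datalinks over the two HCA ports (the link names
  # and the 0xffff default partition key are placeholders).
  dladm create-part -l net_ib0 -P 0xffff stor_ibp0
  dladm create-part -l net_ib1 -P 0xffff stor_ibp1

  # Plumb the partitions, mark the slot 12 P0 path as standby, and group them.
  ipadm create-ip stor_ibp0
  ipadm create-ip stor_ibp1
  ipadm set-ifprop -p standby=on -m ip stor_ibp1
  ipadm create-ipmp -i stor_ibp0 -i stor_ibp1 stor_ipmp0
  ipadm create-addr -T static -a 192.168.28.10/24 stor_ipmp0/v4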