Determining SuperCluster M6-32 Configurations
Determine the Number of Compute Servers
Determine the Number of DCUs in Each Compute Server
Determine the Number of CMUs in Each DCU
Determine the Amount of Memory in Each DCU
Determine the PDomain Configuration on Each Compute Server
Determine the LDom Configuration for Each PDomain
Determining the Best Configuration for Your Situation
Understanding PDomain Configurations
Types of Configuration PDomains
Allocating CPU Resources for LDoms
CPU Resources for LDoms Flowchart
Compute Server Level Considerations
Allocating Memory Resources for LDoms
Memory Resources for LDoms Flowchart
Compute Server Level Considerations
Understanding PCIe Cards and Slots for LDoms
PCIe Cards and Slots for LDoms Flowchart
Compute Server Level Considerations
Understanding Storage for LDoms
Compute Server Level Considerations
Understanding SuperCluster M6-32
Identifying SuperCluster M6-32 Components
Understanding the Compute Server
Understanding DCU Configurations
PCIe Device Root Complexes Overview
PCIe Communication and Paths Overview
Understanding DCU PCIe and EMS Slot Locations
Understanding Half-Populated DCU Root Complexes
Half-Populated DCU 0 PCIe Slot Root Complexes
Half-Populated DCU 1 PCIe Slot Root Complexes
Half-Populated DCU 2 PCIe Slot Root Complexes
Half-Populated DCU 3 PCIe Slot Root Complexes
Understanding Fully-Populated DCU Root Complexes
Fully-Populated DCU 0 PCIe Slot Root Complexes
Fully-Populated DCU 1 PCIe Slot Root Complexes
Fully-Populated DCU 2 PCIe Slot Root Complexes
Fully-Populated DCU 3 PCIe Slot Root Complexes
Extended Configuration PDomain Overview
Understanding Extended Configuration PDomains
Understanding Four DCUs on One Compute Server (R1 Extended Configuration PDomains)
Understanding Four DCUs Across Two Compute Servers (R2 Extended Configuration PDomains)
Understanding Base Configuration PDomains
Understanding Four DCUs on One Compute Server (R3 Base Configuration PDomains)
Understanding Four DCUs Across Two Compute Servers (R4 Base Configuration PDomains)
Understanding Two DCUs on One Compute Server (R5 Base Configuration PDomains)
Understanding Two DCUs Across Two Compute Servers (R6 Base Configuration PDomains)
Understanding Compute Server Hardware and Networks
CPU and Memory Resources Overview
10GbE Client Access Network Overview
IB Network Data Paths for a Database Domain
IB Network Data Paths for an Application Domain
Understanding SR-IOV Domain Types
Understanding LDom Configurations for Extended Configuration PDomains
Understanding LDom Configurations for Fully-Populated DCUs (Extended Configuration PDomains)
Understanding LDom Configurations for Half-Populated DCUs (Extended Configuration PDomains)
Understanding LDom Configurations for Base Configuration PDomains
Understanding LDom Configurations for Fully-Populated DCUs (Base Configuration PDomains)
Understanding LDom Configurations for Half-Populated DCUs (Base Configuration PDomains)
Understanding Clustering Software
Cluster Software for the Database Domain
Cluster Software for the Oracle Solaris Application Domains
Understanding System Administration Resources
Understanding Platform-Specific Oracle ILOM Features
SPARC: Server-Specific and New Oracle ILOM Features and Requirements
Unsupported Oracle ILOM Features
Oracle ILOM Remote Console Plus Overview
Oracle Hardware Management Pack Overview
Time Synchronization and NTP Service
LDom configurations supported on SuperCluster M6-32 have the following characteristics:
One to four LDoms on one or two DCUs
Each LDom can be one of the following types:
Database Domain (dedicated domain)
Application Domain running Oracle Solaris 10 (dedicated domain)
Application Domain running Oracle Solaris 11 (dedicated domain)
Root Domain
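The rules above can be expressed as a simple check. The following is an illustrative sketch only (not an Oracle tool); the function and type names are assumptions made for this example:

```python
# Toy validator for the SuperCluster M6-32 LDom rules stated above:
# one to four LDoms spanning one or two DCUs, each LDom being one of
# the supported domain types. Names here are illustrative, not official.

SUPPORTED_TYPES = {
    "Database Domain",                  # dedicated domain
    "Application Domain (Solaris 10)",  # dedicated domain
    "Application Domain (Solaris 11)",  # dedicated domain
    "Root Domain",
}

def validate_ldom_layout(ldom_types, num_dcus):
    """Return a list of rule violations; an empty list means the layout passes."""
    problems = []
    if not 1 <= len(ldom_types) <= 4:
        problems.append("must have one to four LDoms")
    if num_dcus not in (1, 2):
        problems.append("LDoms must span one or two DCUs")
    for t in ldom_types:
        if t not in SUPPORTED_TYPES:
            problems.append("unsupported domain type: " + t)
    return problems

# Example: two Database Domains plus a Root Domain on two DCUs -- passes.
print(validate_ldom_layout(
    ["Database Domain", "Database Domain", "Root Domain"], num_dcus=2))
```

A layout that violates any rule (for example, five LDoms, or a domain type not in the list above) returns the corresponding violation messages instead of an empty list.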
Each DCU in the compute server has sixteen PCIe slots. The following cards are installed in certain PCIe slots and are used to connect to these networks:
One quad-port 1GbE NIC – Used to connect to the management network
Four IB HCAs – Used to connect to the private IB network
Each DCU also has four EMS cards, each with two 10GbE ports, which are used to connect to the 10GbE client access network.
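The per-DCU network connectivity described above can be tallied with simple arithmetic. This is an illustrative back-of-the-envelope sketch using only the counts stated in this section:

```python
# Per-DCU network hardware, as described above:
# one quad-port 1GbE NIC (management network), four IB HCAs (private IB
# network), and four EMS cards with two 10GbE ports each (client access).

MGMT_PORTS_PER_DCU = 4   # one quad-port 1GbE NIC
IB_HCAS_PER_DCU = 4      # private IB network
EMS_CARDS_PER_DCU = 4
PORTS_PER_EMS = 2        # 10GbE ports per EMS card

client_10gbe_ports = EMS_CARDS_PER_DCU * PORTS_PER_EMS
print("10GbE client access ports per DCU:", client_10gbe_ports)  # 8
print("1GbE management ports per DCU:", MGMT_PORTS_PER_DCU)
print("IB HCAs per DCU:", IB_HCAS_PER_DCU)
```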
Note - Optional Fibre Channel PCIe cards are also available to facilitate migration of data from legacy storage subsystems to the storage servers integrated with SuperCluster M6-32. The PCIe slots available for these optional Fibre Channel PCIe cards vary, depending on your configuration. Refer to the Oracle SuperCluster M6-32 Owner's Guide: Installation for more information.
The PCIe slots and EMS cards used for each configuration vary, depending on the type and number of LDoms in that configuration.