Determining SuperCluster M6-32 Configurations
Determine the Number of Compute Servers
Determine the Number of DCUs in Each Compute Server
Determine the Number of CMUs in Each DCU
Determine the Amount of Memory in Each DCU
Determine the PDomain Configuration on Each Compute Server
Determine the LDom Configuration for Each PDomain
Determining the Best Configuration for Your Situation
Understanding PDomain Configurations
Types of Configuration PDomains
Allocating CPU Resources for LDoms
CPU Resources for LDoms Flowchart
Compute Server Level Considerations
Allocating Memory Resources for LDoms
Memory Resources for LDoms Flowchart
Compute Server Level Considerations
Understanding PCIe Cards and Slots for LDoms
PCIe Cards and Slots for LDoms Flowchart
Compute Server Level Considerations
Understanding Storage for LDoms
Compute Server Level Considerations
Understanding SuperCluster M6-32
Identifying SuperCluster M6-32 Components
Understanding the Compute Server
Understanding DCU Configurations
PCIe Device Root Complexes Overview
PCIe Communication and Paths Overview
Understanding DCU PCIe and EMS Slot Locations
Understanding Half-Populated DCU Root Complexes
Half-Populated DCU 0 PCIe Slot Root Complexes
Half-Populated DCU 1 PCIe Slot Root Complexes
Half-Populated DCU 2 PCIe Slot Root Complexes
Half-Populated DCU 3 PCIe Slot Root Complexes
Understanding Fully-Populated DCU Root Complexes
Fully-Populated DCU 0 PCIe Slot Root Complexes
Fully-Populated DCU 1 PCIe Slot Root Complexes
Fully-Populated DCU 2 PCIe Slot Root Complexes
Fully-Populated DCU 3 PCIe Slot Root Complexes
Extended Configuration PDomain Overview
Understanding Extended Configuration PDomains
Understanding Four DCUs in One Compute Server (R1 Extended Configuration PDomains)
Understanding Four DCUs Across Two Compute Servers (R2 Extended Configuration PDomains)
Understanding Base Configuration PDomains
Understanding Four DCUs on One Compute Server (R3 Base Configuration PDomains)
Understanding Four DCUs Across Two Compute Servers (R4 Base Configuration PDomains)
Understanding Two DCUs on One Compute Server (R5 Base Configuration PDomains)
Understanding Two DCUs Across Two Compute Servers (R6 Base Configuration PDomains)
Understanding Compute Server Hardware and Networks
CPU and Memory Resources Overview
LDoms and the PCIe Slots Overview
10GbE Client Access Network Overview
Understanding SR-IOV Domain Types
Understanding LDom Configurations for Extended Configuration PDomains
Understanding LDom Configurations for Fully-Populated DCUs (Extended Configuration PDomains)
Understanding LDom Configurations for Half-Populated DCUs (Extended Configuration PDomains)
Understanding LDom Configurations for Base Configuration PDomains
Understanding LDom Configurations for Fully-Populated DCUs (Base Configuration PDomains)
Understanding LDom Configurations for Half-Populated DCUs (Base Configuration PDomains)
Understanding Clustering Software
Cluster Software for the Database Domain
Cluster Software for the Oracle Solaris Application Domains
Understanding System Administration Resources
Understanding Platform-Specific Oracle ILOM Features
SPARC: Server-Specific and New Oracle ILOM Features and Requirements
Unsupported Oracle ILOM Features
Oracle ILOM Remote Console Plus Overview
Oracle Hardware Management Pack Overview
Time Synchronization and NTP Service
Clustering software is typically used on multiple interconnected servers so that, to end users and applications, they appear as a single server. On SuperCluster M6-32, clustering software is used to cluster LDoms of the same domain type together across the compute servers. The benefits of clustering software include the following:
Reduce or eliminate system downtime caused by software or hardware failure
Ensure availability of data and applications to end users, regardless of the kind of failure that would normally take down a single-server system
Increase application throughput by enabling services to scale to additional processors by adding nodes to the cluster and balancing the load
Provide enhanced availability of the system by enabling you to perform maintenance without shutting down the entire cluster