Determining SuperCluster M6-32 Configurations
Determine the Number of Compute Servers
Determine the Number of DCUs in Each Compute Server
Determine the Number of CMUs in Each DCU
Determine the Amount of Memory in Each DCU
Determine the PDomain Configuration on Each Compute Server
Determine the LDom Configuration for Each PDomain
Determining the Best Configuration for Your Situation
Understanding PDomain Configurations
Types of Configuration PDomains
Allocating CPU Resources for LDoms
CPU Resources for LDoms Flowchart
Compute Server Level Considerations
Allocating Memory Resources for LDoms
Memory Resources for LDoms Flowchart
Compute Server Level Considerations
Understanding PCIe Cards and Slots for LDoms
PCIe Cards and Slots for LDoms Flowchart
Compute Server Level Considerations
Understanding Storage for LDoms
Compute Server Level Considerations
Understanding SuperCluster M6-32
Identifying SuperCluster M6-32 Components
Understanding the Compute Server
Understanding DCU Configurations
PCIe Device Root Complexes Overview
PCIe Communication and Paths Overview
Understanding DCU PCIe and EMS Slot Locations
Understanding Half-Populated DCU Root Complexes
Half-Populated DCU 0 PCIe Slot Root Complexes
Half-Populated DCU 1 PCIe Slot Root Complexes
Half-Populated DCU 2 PCIe Slot Root Complexes
Half-Populated DCU 3 PCIe Slot Root Complexes
Understanding Fully-Populated DCU Root Complexes
Fully-Populated DCU 0 PCIe Slot Root Complexes
Fully-Populated DCU 1 PCIe Slot Root Complexes
Fully-Populated DCU 2 PCIe Slot Root Complexes
Fully-Populated DCU 3 PCIe Slot Root Complexes
Extended Configuration PDomain Overview
Understanding Extended Configuration PDomains
Understanding Four DCUs on One Compute Server (R1 Extended Configuration PDomains)
Understanding Four DCUs Across Two Compute Servers (R2 Extended Configuration PDomains)
Understanding Base Configuration PDomains
Understanding Four DCUs on One Compute Server (R3 Base Configuration PDomains)
Understanding Four DCUs Across Two Compute Servers (R4 Base Configuration PDomains)
Understanding Two DCUs on One Compute Server (R5 Base Configuration PDomains)
Understanding Two DCUs Across Two Compute Servers (R6 Base Configuration PDomains)
Understanding Compute Server Hardware and Networks
CPU and Memory Resources Overview
LDoms and the PCIe Slots Overview
10GbE Client Access Network Overview
Understanding SR-IOV Domain Types
Understanding LDom Configurations for Extended Configuration PDomains
Understanding LDom Configurations for Fully-Populated DCUs (Extended Configuration PDomains)
Understanding LDom Configurations for Half-Populated DCUs (Extended Configuration PDomains)
Understanding LDom Configurations for Base Configuration PDomains
Understanding LDom Configurations for Fully-Populated DCUs (Base Configuration PDomains)
Understanding LDom Configurations for Half-Populated DCUs (Base Configuration PDomains)
Understanding Clustering Software
Cluster Software for the Database Domain
Cluster Software for the Oracle Solaris Application Domains
Understanding System Administration Resources
Understanding Platform-Specific Oracle ILOM Features
SPARC: Server-Specific and New Oracle ILOM Features and Requirements
Unsupported Oracle ILOM Features
Oracle ILOM Remote Console Plus Overview
Oracle Hardware Management Pack Overview
Time Synchronization and NTP Service
The following restrictions apply to hardware and software modifications to SuperCluster M6-32. Violating these restrictions can result in loss of warranty and support.
SuperCluster M6-32 hardware cannot be modified or customized, with one exception: the administrative Ethernet management switch included with SuperCluster M6-32. For that switch, you may choose to do one of the following:
Replace the Ethernet management switch with an equivalent 1U 48-port Ethernet management switch that conforms to your internal data center network standards. You must perform this replacement yourself, at your own expense and labor, after delivery of SuperCluster M6-32. Because of the numerous possible scenarios involved, Oracle cannot make or assist with this change, and it is not included as part of the standard installation. You must supply the replacement hardware, and make or arrange for the change through other means.
Remove the CAT5 cables connected to the Ethernet management switch, and connect them to your network through an external switch or patch panel. You must make these changes at your own expense and labor. In this case, the Ethernet management switch in the rack can be powered off and disconnected from the data center network.
The storage rack, containing nine storage servers and the Oracle ZFS Storage ZS3-ES storage appliance (ZFS storage appliance), is a required component of SuperCluster M6-32. For additional storage, up to 17 optional expansion racks, in Full, Half, or Quarter Rack configurations, can be added to SuperCluster M6-32.
The optional expansion rack can only be connected to SuperCluster M6-32 or Oracle Exadata Database Machine, and only supports databases running on the Database Domains in SuperCluster M6-32 or on the database servers in Oracle Exadata Database Machine.
Standalone storage servers can only be connected to SuperCluster M6-32 or Oracle Exadata Database Machine, and only support databases running on the Database Domains in SuperCluster M6-32 or on the database servers in Oracle Exadata Database Machine. The standalone storage servers must be installed in a separate rack.
Earlier Oracle Database releases can run in Application Domains running Oracle Solaris 10. Non-Oracle databases can run in Application Domains running either Oracle Solaris 10 or Oracle Solaris 11, depending on the Oracle Solaris version they support.
Oracle Exadata Storage Server Software and the operating systems cannot be modified, and you cannot install any additional software or agents on the storage servers.
You cannot update the firmware directly on the storage servers. The firmware is updated as part of a storage server patch.
You may load additional software on the Database Domains on the compute servers. However, for best performance, Oracle discourages adding software other than agents, such as backup agents and security monitoring agents, on the Database Domains. Loading nonstandard kernel modules into the Database Domain OS is allowed but discouraged, and Oracle does not support questions or issues with nonstandard modules. If a server crashes and Oracle suspects that a nonstandard module caused the crash, Oracle support may refer you to the vendor of the nonstandard module or ask that the issue be reproduced without it. Modifying the Database Domain OS other than by applying official patches and upgrades is not supported. IB-related packages must always be maintained at the officially supported release.
SuperCluster M6-32 supports separate domains dedicated to applications, with high-throughput, low-latency access to the Database Domains through IB. Because Oracle Database is client-server by nature, applications running in the Application Domains can connect to database instances running in the Database Domain. Applications can also run in the Database Domain, although this is discouraged.
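As a sketch of such a connection, an application in an Application Domain could reach a Database Domain instance over the IB network using a standard Oracle Net connect descriptor in its tnsnames.ora file. The host name, port, and service name below are hypothetical placeholders, not values from this guide:

```
DBDOM_IB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbdomain1-ib)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
    )
  )
```

Here `dbdomain1-ib` would resolve to the Database Domain's address on the IB network, so client traffic stays on the low-latency IB fabric rather than the client access network.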
You cannot connect USB devices to the storage servers except as documented in the Oracle Exadata Storage Server Software User's Guide and this guide. In those documented situations, the USB device should not draw more than 100 mA of power.
The network ports on the servers can be used to connect to external storage servers using iSCSI or NFS. However, the FCoE protocol is not supported.
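As a minimal sketch, external NFS storage could be reached over these network ports with a standard Oracle Solaris NFS mount; the server name and export path below are hypothetical placeholders, not values from this guide:

```shell
# Hypothetical example: mount an external NFS share on a server
# (nas1 and /export/backups are placeholder names).
mkdir -p /mnt/backups
mount -F nfs nas1:/export/backups /mnt/backups
```

An equivalent iSCSI configuration would use the Oracle Solaris iSCSI initiator tools instead; in either case the traffic uses the supported IP protocols, not FCoE.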
Only switches specified for use in SuperCluster M6-32, Oracle Exadata Rack, and Oracle Exalogic Elastic Cloud may be connected to the IB network. Connecting third-party switches, or any other switches not used in those systems, is not supported.