Determining SuperCluster M6-32 Configurations
Determine the Number of Compute Servers
Determine the Number of DCUs in Each Compute Server
Determine the Number of CMUs in Each DCU
Determine the Amount of Memory in Each DCU
Determine the PDomain Configuration on Each Compute Server
Determine the LDom Configuration for Each PDomain
Determining the Best Configuration for Your Situation
Understanding PDomain Configurations
Allocating CPU Resources for LDoms
Allocating Memory Resources for LDoms
Understanding PCIe Cards and Slots for LDoms
Understanding Storage for LDoms
Understanding SuperCluster M6-32
Identifying SuperCluster M6-32 Components
Understanding the Compute Server
Understanding DCU Configurations
Understanding Half-Populated DCU Root Complexes
Understanding Fully Populated DCU Root Complexes
Extended Configuration PDomain Overview
Understanding Extended Configuration PDomains
Understanding Base Configuration PDomains
Understanding Compute Server Hardware and Networks
Understanding LDom Configurations for Extended Configuration PDomains
Understanding LDom Configurations for Base Configuration PDomains
Understanding Clustering Software
Cluster Software for the Database Domain
Cluster Software for the Oracle Solaris Application Domains
Understanding System Administration Resources
Understanding Platform-Specific Oracle ILOM Features
Oracle ILOM Remote Console Plus Overview
Oracle Hardware Management Pack Overview
Time Synchronization and NTP Service
Multidomain Extensions to Oracle ILOM MIBs
Hardware Installation Overview
Hardware Installation Task Overview
Hardware Installation Documents
Preparing the Site (Storage Rack and Expansion Racks)
Prepare the Site for the Racks
Network Infrastructure Requirements
Compute Server Default Host Names and IP Addresses
Compute Server Network Components
Storage Rack Network Components
Cable the ZFS Storage Appliance
ZFS Appliance Power Cord Connection Reference
ZFS Storage Appliance Cabling Reference
Leaf Switch 1 Cabling Reference
Leaf Switch 2 Cabling Reference
IB Switch-to-Switch Cabling Reference
Cable the Ethernet Management Switch
Ethernet Management Switch Cabling Reference
Connect SuperCluster M6-32 to the Facility Networks
Expansion Rack Default IP Addresses
Understanding Internal Cabling (Expansion Rack)
Understanding SuperCluster Software
Identify the Version of SuperCluster Software
Controlling SuperCluster M6-32
Powering Off SuperCluster M6-32 Gracefully
Power Off SuperCluster M6-32 in an Emergency
Monitoring SuperCluster M6-32 (OCM)
Monitoring the System With ASR
Configure ASR on the Compute Servers (Oracle ILOM)
Configure SNMP Trap Destinations for Storage Servers
Configure ASR on the ZFS Storage Appliance
Configuring ASR on the Compute Servers (Oracle Solaris 11)
Approve and Verify ASR Asset Activation
Change ssctuner Properties and Disable Features
Configuring CPU and Memory Resources (osc-setcoremem)
Minimum and Maximum Resources (Dedicated Domains)
Supported Domain Configurations
Plan CPU and Memory Allocations
Display the Current Domain Configuration (osc-setcoremem)
Display the Current Domain Configuration (ldm)
Change CPU/Memory Allocations (Socket Granularity)
Change CPU/Memory Allocations (Core Granularity)
Access osc-setcoremem Log Files
Revert to a Previous CPU/Memory Configuration
Remove a CPU/Memory Configuration
Obtaining the EM Exadata Plug-in
Known Issues With the EM Exadata Plug-in
Configuring the Exalogic Software
Prepare to Configure the Exalogic Software
Enable Domain-Level Enhancements
Enable Cluster-Level Session Replication Enhancements
Configuring Grid Link Data Source for Dept1_Cluster1
Configuring SDP-Enabled JDBC Drivers for Dept1_Cluster1
Create an SDP Listener on the IB Network
Administering Oracle Solaris 11 Boot Environments
Advantages to Maintaining Multiple Boot Environments
Mount to a Different Boot Environment
Reboot to the Original Boot Environment
Create a Snapshot of a Boot Environment
Remove Unwanted Boot Environments
Monitor Write-through Caching Mode
The following restrictions apply to hardware and software modifications to SuperCluster M6-32. Violating these restrictions can result in loss of warranty and support.
SuperCluster M6-32 hardware cannot be modified or customized, with one exception: the administrative Ethernet management switch included with SuperCluster M6-32. For that switch, you may choose to do one of the following:
Replace the Ethernet management switch with an equivalent 1U 48-port Ethernet management switch that conforms to your internal data center network standards. You must perform this replacement, at your own expense and labor, after delivery of SuperCluster M6-32. Because of the many possible scenarios involved, Oracle cannot make or assist with this change, and it is not included as part of the standard installation. You must supply the replacement hardware and make or arrange for the change through other means.
Remove the CAT5 cables connected to the Ethernet management switch and connect them to your network through an external switch or patch panel. You must make these changes at your own expense and labor. In this case, the Ethernet management switch in the rack can be powered off and disconnected from the data center network.
The storage rack, which contains nine storage servers and the Oracle ZFS Storage ZS3-ES storage appliance (ZFS storage appliance), is a required component of SuperCluster M6-32. For additional storage, up to 17 optional expansion racks, in Full, Half, or Quarter Rack configurations, can be added to SuperCluster M6-32.
The optional expansion rack can only be connected to SuperCluster M6-32 or Oracle Exadata Database Machine, and only supports databases running on the Database Domains in SuperCluster M6-32 or on the database servers in Oracle Exadata Database Machine.
Standalone storage servers can only be connected to SuperCluster M6-32 or Oracle Exadata Database Machine, and only support databases running on the Database Domains in SuperCluster M6-32 or on the database servers in Oracle Exadata Database Machine. The standalone storage servers must be installed in a separate rack.
Earlier Oracle Database releases can be run in Application Domains running Oracle Solaris 10. Non-Oracle databases can be run in Application Domains running either Oracle Solaris 10 or Oracle Solaris 11, depending on the Oracle Solaris version they support.
Oracle Exadata Storage Server Software and the operating systems cannot be modified, and you cannot install any additional software or agents on the storage servers.
You cannot update the firmware directly on the storage servers. The firmware is updated as part of a storage server patch.
You may load additional software on the Database Domains on the compute servers. However, to ensure the best performance, Oracle discourages adding software other than agents, such as backup agents and security monitoring agents, on the Database Domains. Loading nonstandard kernel modules into the OS of the Database Domains is allowed but discouraged, and Oracle does not support questions or issues with nonstandard modules. If a server crashes and Oracle suspects that a nonstandard module caused the crash, Oracle support may refer you to the vendor of the nonstandard module or ask you to reproduce the issue without it. Modifying the Database Domain OS other than by applying official patches and upgrades is not supported. IB-related packages must always be kept at the officially supported release.
SuperCluster M6-32 supports separate domains dedicated to applications, with high-throughput, low-latency access to the Database Domains through IB. Because Oracle Database uses a client/server architecture, applications running in the Application Domains can connect to database instances running in the Database Domains. Applications can also be run in the Database Domains themselves, although this is discouraged.
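As an illustrative sketch only (not taken from this guide), an Application Domain client can reach a Database Domain over the IB fabric by using an SDP address in its Oracle Net connect descriptor. The net service name, host, port, and service name below are hypothetical placeholders.

```
# Hypothetical tnsnames.ora entry in an Application Domain; all names are placeholders.
# PROTOCOL=SDP routes the connection over the IB fabric rather than TCP.
DBM_IB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = SDP)(HOST = dbm01-ib-vip)(PORT = 1522))
    (CONNECT_DATA = (SERVICE_NAME = dbm.example.com))
  )
```

The same descriptor shape with PROTOCOL=TCP would route over the client access network instead; using SDP assumes an SDP listener is configured on the IB network, as covered in the Exalogic software sections of this document.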
You cannot connect USB devices to the storage servers except as documented in the Oracle Exadata Storage Server Software User's Guide and this guide. In those documented situations, the USB device should not draw more than 100 mA of power.
The network ports on the servers can be used to connect to external storage servers using iSCSI or NFS. However, the FCoE protocol is not supported.
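As a hedged illustration of the NFS case above, an external NFS share could be mounted persistently in a domain with an /etc/vfstab entry such as the following; the server name, export path, mount point, and options are hypothetical.

```
# Hypothetical /etc/vfstab entry (Oracle Solaris); server, export, and mount point are placeholders.
# device-to-mount          device-to-fsck  mount-point   FS-type  fsck-pass  mount-at-boot  options
nfs-server1:/export/data   -               /mnt/extdata  nfs      -          yes            rw,vers=3
```

The fields follow the standard vfstab order; the dash entries mark fields (device to fsck, fsck pass) that do not apply to NFS mounts.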
Only switches specified for use in SuperCluster M6-32, Oracle Exadata Rack, and Oracle Exalogic Elastic Cloud may be connected to the IB network. Connecting third-party switches or any other switches not used in these systems is not supported.