Determining SuperCluster M6-32 Configurations
Determine the Number of Compute Servers
Determine the Number of DCUs in Each Compute Server
Determine the Number of CMUs in Each DCU
Determine the Amount of Memory in Each DCU
Determine the PDomain Configuration on Each Compute Server
Determine the LDom Configuration for Each PDomain
Determining the Best Configuration for Your Situation
Understanding PDomain Configurations
Allocating CPU Resources for LDoms
Allocating Memory Resources for LDoms
Understanding PCIe Cards and Slots for LDoms
Understanding Storage for LDoms
Understanding SuperCluster M6-32
Identifying SuperCluster M6-32 Components
Understanding DCU Configurations
Understanding Half-Populated DCU Root Complexes
Understanding Fully-Populated DCU Root Complexes
Extended Configuration PDomain Overview
Understanding Extended Configuration PDomains
Understanding Base Configuration PDomains
Understanding Compute Server Hardware and Networks
Understanding LDom Configurations for Extended Configuration PDomains
Understanding LDom Configurations for Base Configuration PDomains
Understanding Clustering Software
Cluster Software for the Database Domain
Cluster Software for the Oracle Solaris Application Domains
Understanding System Administration Resources
Understanding Platform-Specific Oracle ILOM Features
Oracle ILOM Remote Console Plus Overview
Oracle Hardware Management Pack Overview
Time Synchronization and NTP Service
Multidomain Extensions to Oracle ILOM MIBs
Hardware Installation Overview
Hardware Installation Task Overview
Hardware Installation Documents
Preparing the Site (Storage Rack and Expansion Racks)
Prepare the Site for the Racks
Network Infrastructure Requirements
Compute Server Default Host Names and IP Addresses
Compute Server Network Components
Storage Rack Network Components
Cable the ZFS Storage Appliance
ZFS Storage Appliance Power Cord Connection Reference
ZFS Storage Appliance Cabling Reference
Leaf Switch 1 Cabling Reference
Leaf Switch 2 Cabling Reference
IB Switch-to-Switch Cabling Reference
Cable the Ethernet Management Switch
Ethernet Management Switch Cabling Reference
Connect SuperCluster M6-32 to the Facility Networks
Expansion Rack Default IP Addresses
Understanding Internal Cabling (Expansion Rack)
Understanding SuperCluster Software
Identify the Version of SuperCluster Software
Controlling SuperCluster M6-32
Powering Off SuperCluster M6-32 Gracefully
Power Off SuperCluster M6-32 in an Emergency
Monitoring SuperCluster M6-32 (OCM)
Monitoring the System With ASR
Configure ASR on the Compute Servers (Oracle ILOM)
Configure SNMP Trap Destinations for Storage Servers
Configure ASR on the ZFS Storage Appliance
Configuring ASR on the Compute Servers (Oracle Solaris 11)
Approve and Verify ASR Asset Activation
Change ssctuner Properties and Disable Features
Configuring CPU and Memory Resources (osc-setcoremem)
Minimum and Maximum Resources (Dedicated Domains)
Supported Domain Configurations
Plan CPU and Memory Allocations
Display the Current Domain Configuration (osc-setcoremem)
Display the Current Domain Configuration (ldm)
Change CPU/Memory Allocations (Socket Granularity)
Change CPU/Memory Allocations (Core Granularity)
Access osc-setcoremem Log Files
Revert to a Previous CPU/Memory Configuration
Remove a CPU/Memory Configuration
Obtaining the EM Exadata Plug-in
Known Issues With the EM Exadata Plug-in
Configuring the Exalogic Software
Prepare to Configure the Exalogic Software
Enable Domain-Level Enhancements
Enable Cluster-Level Session Replication Enhancements
Configuring Grid Link Data Source for Dept1_Cluster1
Configuring SDP-Enabled JDBC Drivers for Dept1_Cluster1
Create an SDP Listener on the IB Network
Administering Oracle Solaris 11 Boot Environments
Advantages to Maintaining Multiple Boot Environments
Mount to a Different Boot Environment
Reboot to the Original Boot Environment
Create a Snapshot of a Boot Environment
Remove Unwanted Boot Environments
When performing maintenance on storage servers, you might need to power down or reboot a cell. If a storage server must be shut down while one or more databases are running, first verify that taking it offline will not impact Oracle ASM disk group and database availability. Whether a storage server can be taken offline without affecting database availability depends on two factors:
Level of Oracle ASM redundancy used on the affected disk groups
Current status of disks in other storage servers that have mirror copies of data on the storage server to be taken offline
To check whether it is safe to take the storage server offline, run the following command:
CellCLI> LIST GRIDDISK ATTRIBUTES name WHERE asmdeactivationoutcome != 'Yes'
If any grid disks are returned, then it is not safe to take the storage server offline because proper Oracle ASM disk group redundancy would not be maintained. Taking a storage server offline while one or more grid disks are in this state forces Oracle ASM to dismount the affected disk group, which causes the databases to shut down abruptly.
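To run the same check on every storage server at once, you can broadcast the command with the dcli utility that ships with the storage server software. The following is a minimal sketch, assuming a cell_group file that lists the storage server host names and SSH equivalence configured for the celladmin user:

# dcli -g cell_group -l celladmin cellcli -e "LIST GRIDDISK ATTRIBUTES name WHERE asmdeactivationoutcome != 'Yes'"

Any cell that returns rows is not safe to take offline at that time.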
If no grid disks are returned, it is safe to proceed. Inactivate all grid disks on the storage server you intend to shut down:
CellCLI> ALTER GRIDDISK ALL INACTIVE
The preceding command completes once all disks are inactive and offline.
Verify that all grid disks are inactive:
CellCLI> LIST GRIDDISK WHERE status != 'inactive'
If all grid disks are inactive, then you can shut down the storage server without affecting database availability.
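For example, to halt the storage server, log in to its operating system as root and use the standard Linux shutdown command (shown here as an illustration; substitute -r for -h to reboot instead of powering off):

# shutdown -h now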
When the storage server is powered back on, the cell services start automatically.
After the storage server comes back online, reactivate the grid disks:
CellCLI> ALTER GRIDDISK ALL ACTIVE
When the grid disks become active, Oracle ASM automatically synchronizes the grid disks to bring them back into the disk group.
Verify that all grid disks have been successfully brought online:
CellCLI> LIST GRIDDISK ATTRIBUTES name, asmmodestatus
Wait until asmmodestatus is ONLINE or UNUSED for all grid disks. For example:
DATA_CD_00_dm01cel01 ONLINE
DATA_CD_01_dm01cel01 SYNCING
DATA_CD_02_dm01cel01 OFFLINE
DATA_CD_03_dm01cel01 OFFLINE
DATA_CD_04_dm01cel01 OFFLINE
DATA_CD_05_dm01cel01 OFFLINE
DATA_CD_06_dm01cel01 OFFLINE
DATA_CD_07_dm01cel01 OFFLINE
DATA_CD_08_dm01cel01 OFFLINE
DATA_CD_09_dm01cel01 OFFLINE
DATA_CD_10_dm01cel01 OFFLINE
DATA_CD_11_dm01cel01 OFFLINE
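Rather than rerunning the query by hand, you can poll from a root shell on the restarted storage server until synchronization finishes. This is a minimal sketch, not part of the standard procedure:

# Repeat the status query every 30 seconds until no grid disk
# reports a state other than ONLINE or UNUSED.
while cellcli -e "LIST GRIDDISK ATTRIBUTES name, asmmodestatus" | grep -qvE 'ONLINE|UNUSED'; do
    sleep 30
done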
Oracle ASM synchronization is complete only when all grid disks show asmmodestatus=ONLINE or asmmodestatus=UNUSED. Before taking another storage server offline, Oracle ASM synchronization must complete on the restarted storage server. If synchronization is not complete, the check performed on another storage server fails. For example:
CellCLI> LIST GRIDDISK ATTRIBUTES name WHERE asmdeactivationoutcome != 'Yes'
DATA_CD_00_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"
DATA_CD_01_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"
DATA_CD_02_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"
DATA_CD_03_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"
DATA_CD_04_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"
DATA_CD_05_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"
DATA_CD_06_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"
DATA_CD_07_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"
DATA_CD_08_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"
DATA_CD_09_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"
DATA_CD_10_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"
DATA_CD_11_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"