Understanding SuperCluster Software
Identify the Version of SuperCluster Software
Controlling SuperCluster M6-32
Powering Off SuperCluster M6-32 Gracefully
Shut Down the Oracle Solaris Cluster
Shut Down the Enterprise Controller (Ops Center)
Shut Down the OS on the Compute Servers
Shut Down the ZFS Storage Appliance
Power Off the Switches and Racks
Power Off SuperCluster M6-32 in an Emergency
Monitoring SuperCluster M6-32 (OCM)
Monitoring the System With ASR
Configure ASR on the Compute Servers (Oracle ILOM)
Configure SNMP Trap Destinations for Storage Servers
Configure ASR on the ZFS Storage Appliance
Configuring ASR on the Compute Servers (Oracle Solaris 11)
Enable the HTTP Receiver on the ASR Manager
Enable HTTPS on ASR Manager (Optional)
Register Compute Servers With Oracle Solaris 11 or Database Domains to ASR Manager
Approve and Verify ASR Asset Activation
Change ssctuner Properties and Disable Features
Configuring CPU and Memory Resources (osc-setcoremem)
Minimum and Maximum Resources (Dedicated Domains)
Supported Domain Configurations
Plan CPU and Memory Allocations
Display the Current Domain Configuration (osc-setcoremem)
Change CPU/Memory Allocations (Socket Granularity)
Change CPU/Memory Allocations (Core Granularity)
Access osc-setcoremem Log Files
Revert to a Previous CPU/Memory Configuration
Remove a CPU/Memory Configuration
Obtaining the EM Exadata Plug-in
Known Issues With the EM Exadata Plug-in
Configuring the Exalogic Software
Prepare to Configure the Exalogic Software
Enable Domain-Level Enhancements
Enable Cluster-Level Session Replication Enhancements
Configuring Grid Link Data Source for Dept1_Cluster1
Runtime Connection Load Balancing
Secure Communication With Oracle Wallet
Create a Grid Link Data Source on Dept1_Cluster1
Configuring SDP-Enabled JDBC Drivers for Dept1_Cluster1
Configure the Database to Support IB
Create an SDP Listener on the IB Network
Administering Oracle Solaris 11 Boot Environments
Advantages to Maintaining Multiple Boot Environments
Mount to a Different Boot Environment
Reboot to the Original Boot Environment
Create a Snapshot of a Boot Environment
Remove Unwanted Boot Environments
Monitor Write-through Caching Mode
This procedure describes how to display a compute node domain configuration using a series of ldm commands.
Note - Alternatively, you can use the osc-setcoremem command to display similar information. See Display the Current Domain Configuration (osc-setcoremem).
Root Domains are identified by IOV in the STATUS column.
In this example, ssccn3-dom2 and ssccn3-dom3 are Root Domains. The other domains are dedicated domains.
# ldm list-io | grep BUS
NAME           TYPE   BUS      DOMAIN        STATUS
pci_32         BUS    pci_32   primary
pci_33         BUS    pci_33   primary
pci_34         BUS    pci_34   primary
pci_35         BUS    pci_35   primary
pci_36         BUS    pci_36   ssccn3-dom2   IOV
pci_37         BUS    pci_37   ssccn3-dom2   IOV
pci_38         BUS    pci_38   ssccn3-dom2   IOV
pci_39         BUS    pci_39   ssccn3-dom2   IOV
pci_40         BUS    pci_40   ssccn3-dom1
pci_41         BUS    pci_41   ssccn3-dom1
pci_42         BUS    pci_42   ssccn3-dom1
pci_43         BUS    pci_43   ssccn3-dom1
pci_44         BUS    pci_44   ssccn3-dom3   IOV
pci_45         BUS    pci_45   ssccn3-dom3   IOV
pci_46         BUS    pci_46   ssccn3-dom3   IOV
pci_47         BUS    pci_47   ssccn3-dom3   IOV
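If you want to extract just the Root Domain names from output like the above, the STATUS column can be filtered with a small script. This helper is not part of the Oracle documentation; it is a hypothetical sketch that assumes the `ldm list-io` column layout shown in the example (NAME TYPE BUS DOMAIN STATUS, with STATUS populated only for IOV buses). On a live system you would pipe `ldm list-io` into the function instead of the sample here-document.

```shell
# Hypothetical helper (assumption, not an Oracle-provided tool): print the
# unique domain names whose buses carry IOV status, i.e. the Root Domains.
list_root_domains() {
  awk '$NF == "IOV" { print $4 }' | sort -u
}

# Excerpt of the example output above, used here as sample input.
list_root_domains <<'EOF'
pci_36 BUS pci_36 ssccn3-dom2 IOV
pci_40 BUS pci_40 ssccn3-dom1
pci_44 BUS pci_44 ssccn3-dom3 IOV
EOF
```

On the example system this prints ssccn3-dom2 and ssccn3-dom3, matching the Root Domains identified in the procedure.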
In this example, ssccn3-dom2 and ssccn3-dom3 are Root Domains (from Step 2). The resources listed for a Root Domain represent only the resources reserved for the Root Domain itself. Parked resources are not displayed.
# ldm list
NAME           STATE    FLAGS    CONS   VCPU   MEMORY     UTIL   NORM   UPTIME
primary        active   -n-cv-   UART   192    2095872M   0.1%   0.1%   12h 28m
ssccn3-dom1    active   -n----   5001   192    2T         0.1%   0.1%   12h 25m
ssccn3-dom2    active   -n----   5002   8      16G        0.1%   0.1%   2d 23h 34m
ssccn3-dom3    active   -n--v-   5003   16     32G        0.1%   0.1%   2d 23h 34m
In this example, the first command line reports the number of cores in the logical CPU repository. The second command line reports the amount of memory in the memory repository.
# ldm list-devices -p core | grep cid | wc -l
      45
# ldm list-devices memory
MEMORY
    PA                  SIZE
    0x100000000000      1008G
    0x180000000000      1T
    0x300000000000      1008G
    0x380000000000      1008G
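The memory repository sizes are reported per physical-address range, so totaling them takes a little arithmetic across the G and T suffixes. The helper below is a hypothetical sketch, not an Oracle-provided command; it assumes the `ldm list-devices memory` layout shown above (PA in the first column, a size with a G or T suffix in the second). On a live system you would pipe `ldm list-devices memory` into it instead of the sample here-document.

```shell
# Hypothetical helper (assumption, not an Oracle-provided tool): sum the
# SIZE column of the memory repository, converting T to gigabytes.
total_parked_gb() {
  awk '$1 ~ /^0x/ { sz = $2
                    if (sub(/T$/, "", sz)) gb += sz * 1024
                    else if (sub(/G$/, "", sz)) gb += sz }
       END { print gb "G" }'
}

# The example output above, used here as sample input.
total_parked_gb <<'EOF'
MEMORY
    PA                  SIZE
    0x100000000000      1008G
    0x180000000000      1T
    0x300000000000      1008G
    0x380000000000      1008G
EOF
# prints: 4048G
```

For the example system, 1008G + 1024G + 1008G + 1008G gives 4048G of parked memory available for reallocation.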