Oracle SuperCluster M6-32 Owner's Guide

Park Cores and Memory

Perform this procedure on each compute node to move CPU and memory resources from dedicated domains into logical CPU and memory repositories, making the resources available for I/O Domains.

If you are parking cores and memory, plan carefully. Once you park resources and create I/O Domains, you cannot move the resources back to dedicated domains.


Note - To find out if you can perform this procedure, see Supported Domain Configurations.
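
Before planning the new allocations, you can optionally check how many cores and how much memory are already unallocated on the compute node. This is an informal check that uses the same ldm commands shown in the verification steps later in this procedure:

    # ldm list-devices -p core | grep cid | wc -l     (count of unallocated cores)
    # ldm list-devices memory                         (unallocated memory segments)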


In this example, 12 cores and 1 TB of memory are parked from the primary domain, and 18 cores and 1536 GB of memory are parked from the ssccn3-dom1 domain.

This table shows the allocation plan (see Plan CPU and Memory Allocations).

Domain                  Domain Type   Cores Before   Cores After   Memory Before (GB)   Memory After (GB)
---------------------   -----------   ------------   -----------   ------------------   -----------------
primary                 Dedicated     18             6             1536                 512
ssccn3-dom1             Dedicated     30             12            2560                 1024
ssccn3-dom2             Root          n/a            n/a           n/a                  n/a
ssccn3-dom3             Root          n/a            n/a           n/a                  n/a
Unallocated Resources   --            45             75            4048                 6608
Total Resources         --            93             93            8144                 8144
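
As a quick sanity check, the unallocated totals after parking follow from simple arithmetic on the table values (informational only, not a procedure step):

    parked cores  = (18 - 6) + (30 - 12)         = 12 + 18     = 30 cores   ->  unallocated after: 45 + 30 = 75
    parked memory = (1536 - 512) + (2560 - 1024) = 1024 + 1536 = 2560 GB    ->  unallocated after: 4048 + 2560 = 6608 GB
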
  1. Log in as superuser on the compute node's control domain.
  2. Ensure that all applications are shut down and that there is no production activity running.
  3. Activate any inactive domains using the ldm bind command.

    The tool does not continue if any inactive domains are present.
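
    For example (informal sketch; domain-name is a placeholder for whatever domain ldm reports as inactive):

    # ldm list | grep inactive          (identify any inactive domains)
    # ldm bind domain-name              (bind each inactive domain that is listed)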

  4. Run osc-setcoremem to change resource allocations.

    In this example, some resources are left unallocated, which parks them.

    Respond to the prompts. Press Enter to accept the default value.

    # /opt/oracle.supercluster/bin/osc-setcoremem
     
                                  osc-setcoremem
                        v2.0  built on Aug 27 2015 23:09:35
     
     
     Current Configuration: SuperCluster Fully-Populated M6-32 Base
     
     +--------------------------------+-------+--------+-----------+--- MINIMUM ----+
     | DOMAIN                           | CORES | MEM GB |   TYPE    | CORES | MEM GB |
     +---------------------------------+-------+--------+-----------+-------+--------+
     | primary                          |    18 |   1536 | Dedicated |     2 |     32 |
     | ssccn3-dom1                      |    30 |   2560 | Dedicated |     2 |     32 |
     | ssccn3-dom2                      |     1 |     16 |   Root    |     1 |     16 |
     | ssccn3-dom3                      |     2 |     32 |   Root    |     2 |     32 |
     +---------------------------------+-------+--------+-----------+-------+--------+
     | unallocated or parked            |    45 |   4048 |    --     |    -- |   --   |
     +---------------------------------+-------+--------+-----------+-------+--------+
     
     [Note] Following domains will be skipped in this session.
     
     Root Domains
     ------------
     ssccn3-dom2
     ssccn3-dom3
     
     
     CPU allocation preference:
     
            1. Socket level
            2. Core level
     
     In case of Socket level granularity, proportional memory capacity is
      automatically selected for you.
     
     Choose Socket or Core level [S or C] c
     
     
     Step 1 of 2: Core Count
     
     primary      : specify number of cores [min: 2, max: 46. default: 18] : 6
                    you chose [6] cores for primary domain
     
     ssccn3-dom1  : specify number of cores [min: 2, max: 42. default: 30] : 12
                    you chose [12] cores for ssccn3-dom1 domain
     
     
     Configuration In Progress After Core Count Selection:
     
     +--------------------------------+-------+--------+-----------+--- MINIMUM ----+
     | DOMAIN                           | CORES | MEM GB |   TYPE    | CORES | MEM GB |
     +---------------------------------+-------+--------+-----------+-------+--------+
     | primary                          |     6 |   1536 | Dedicated |     2 |     32 |
     | ssccn3-dom1                      |    12 |   2560 | Dedicated |     2 |     64 |
     | *ssccn3-dom2                     |     1 |     16 |   Root    |     1 |     16 |
     | *ssccn3-dom3                     |     2 |     32 |   Root    |     2 |     32 |
     +---------------------------------+-------+--------+-----------+-------+--------+
     | unallocated or parked            |    75 |   4048 |    --     |    -- |   --   |
     +---------------------------------+-------+--------+-----------+-------+--------+
     
     
     Step 2 of 2: Memory Capacity
            (must be 16 GB aligned)
     
     primary: specify memory capacity in GB [min: 32, max: 2048. default: 2048] : 512
                    you chose [512 GB] memory for primary domain
     
     ssccn3-dom1:specify memory capacity in GB [min: 64, max: 2048. default: 2048] : 1024
                    you chose [1024 GB] memory for ssccn3-dom1 domain
     
     
     Configuration In progress After Memory Capacity Selection:
     
     +--------------------------------+-------+--------+-----------+--- MINIMUM ----+
     | DOMAIN                           | CORES | MEM GB |   TYPE    | CORES | MEM GB |
     +---------------------------------+-------+--------+-----------+-------+--------+
     | primary                          |     6 |    512 | Dedicated |     2 |     32 |
     | ssccn3-dom1                      |    12 |   1024 | Dedicated |     2 |     64 |
     | *ssccn3-dom2                     |     1 |     16 |   Root    |     1 |     16 |
     | *ssccn3-dom3                     |     2 |     32 |   Root    |     2 |     32 |
     +---------------------------------+-------+--------+-----------+-------+--------+
     | unallocated or parked            |    75 |   6608 |    --     |    -- |   --   |
     +--------------------------------+-------+--------+-----------+-------+--------+
     
     
     Following domains will be stopped and restarted:
     
            ssccn3-dom1
     
     This configuration requires rebooting the control domain.
     Do you want to proceed? Y/N : y
     
     IMPORTANT NOTE:
     +-                                                                                    -+
     |  After the reboot, osc-setcoremem attempts to complete CPU, memory re-configuration. |
     |  Please check syslog and the state of all domains before using the system.           |
     |  eg.,  dmesg | grep osc-setcoremem ; ldm list | grep -v active ; date                |
     +-                                                                                    -+
     
     All activity is being recorded in log file:
            /opt/oracle.supercluster/osc-setcoremem/log/osc-setcoremem_activity_08-28-2015_16:18:57.log 
    Please wait while osc-setcoremem is setting up the new CPU, memory configuration.
     It may take a while. Be patient and do not interrupt.
     
     0%    10    20    30    40    50    60    70    80    90   100%
     |-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
     *=====*=====*=====*=====*=====*=====*=====*=====*=====*=====*
     
    Broadcast Message from root (pts/1) on etc5mdbadm0301 Fri Aug 28 16:22:07...
    THE SYSTEM etc5mdbadm0301 IS BEING SHUT DOWN NOW ! ! !
    Log off now or risk your files being damaged
     
                    Task complete with no errors.
     
    #
  5. If the tool indicated that a reboot is required, log in as superuser on the compute node's control domain after the system reboots.
  6. Verify the new resource allocation.

    You can verify the resource allocation and check for possible osc-setcoremem errors in several ways, as described in the following steps.

  7. Check the log file to ensure that all reconfiguration steps were successful.
    # cd /opt/oracle.supercluster/osc-setcoremem/log
    # ls (identify the name of the log file)
    # tail -17 osc-setcoremem_activity_08-28-2015_16\:18\:57.log
     
     ::Post-reboot activity::
     
     Please wait while osc-setcoremem is setting up the new CPU, memory configuration.
     It may take a while. Be patient and do not interrupt.
     
     
     Executing ldm commands ..
     
     0%    10    20    30    40    50    60    70    80    90   100%
     |-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
     *=====*=====*=====*=====*=====*=====*=====*=====*=====*=====*
     
                    Task complete with no errors.
                    This concludes socket/core, memory reconfiguration.
                    You can continue using the system.
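
    You can also search the complete log for reported problems instead of viewing only the last lines. This is an optional check using plain grep, not an osc-setcoremem feature:

    # grep -i error osc-setcoremem_activity_08-28-2015_16\:18\:57.log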
  8. Verify the new resource allocation with the dmesg and ldm list commands.

    In the ldm list output, VCPU counts reflect eight hardware threads per core, so 48 VCPUs correspond to the 6 cores now assigned to primary and 96 VCPUs to the 12 cores assigned to ssccn3-dom1.

    Example:

    # dmesg | grep osc-setcoremem
    Aug 28 16:27:50 etc5mdbadm0301 root[1926]: [ID 702911 user.alert] osc-setcoremem: core, memory re-configuration complete. system can be used for regular work.
     
    # ldm list
    NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  NORM  UPTIME
    primary          active     -n-cv-  UART    48    523008M  0.4%  0.4%  6m
    ssccn3-dom1      active     -n----  5001    96    1T       0.2%  0.2%  3m
    ssccn3-dom2      active     -n----  5002    8     16G      0.1%  0.1%  3d 36m
    ssccn3-dom3      active     -n--v-  5003    16    32G      0.1%  0.1%  3d 36m
  9. Verify the parked cores.

    See Display the Current Domain Configuration (ldm). The count reported should match the number of unallocated cores in your plan (75 in this example):

    # ldm list-devices -p core | grep cid | wc -l
          75
  10. Verify the parked memory.

    See Display the Current Domain Configuration (ldm). The listed segment sizes should total the amount of unallocated memory in your plan (6608 GB in this example):

    # ldm list-devices memory
    MEMORY
        PA                   SIZE
        0x3c00000000         768G
        0x84000000000        768G
        0x100000000000       1008G
        0x180000000000       1T
        0x208000000000       512G
        0x288000000000       512G
        0x300000000000       1008G
        0x380000000000       1008G
  11. Repeat this procedure on the other compute node if you need to change its resource allocations.