12 Maintaining Oracle Big Data Appliance

This chapter describes how to monitor and maintain Oracle Big Data Appliance. Some procedures use the dcli utility to execute commands in parallel on all servers.

This chapter contains the following sections:

12.1 Monitoring the Ambient Temperature of Servers

Maintaining environmental temperature conditions within design specification for a server helps to achieve maximum efficiency and targeted component service lifetimes. Temperatures outside the ambient temperature range of 21 to 23 degrees Celsius (70 to 74 degrees Fahrenheit) affect all components within Oracle Big Data Appliance, possibly causing performance problems and shortened service lifetimes.

To monitor the ambient temperature:

  1. Connect to an Oracle Big Data Appliance server as root.

  2. Set up passwordless SSH for root by entering the setup-root-ssh command, as described in "Setting Up Passwordless SSH".

  3. Check the current temperature:

    dcli 'ipmitool sunoem cli "show /SYS/T_AMB" | grep value'
    
  4. If any temperature reading is outside the operating range, then investigate and correct the problem. See Table 2-14.

The following is an example of the command output:

bda1node01-adm.example.com: value = 22.000 degree C
bda1node02-adm.example.com: value = 22.000 degree C
bda1node03-adm.example.com: value = 22.000 degree C
bda1node04-adm.example.com: value = 23.000 degree C
          .
          .
          .
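
If you prefer to see only problem readings, the check in step 3 can be filtered on the server side. The following is a minimal sketch; the 21 to 23 degree range is taken from the start of this section, and the awk field positions assume the output format shown above.

dcli 'ipmitool sunoem cli "show /SYS/T_AMB" | grep value' \
  | awk '$4+0 < 21 || $4+0 > 23 { print $1, $4, "degree C (outside the 21-23 C range)" }'

Any line of output identifies a server whose reading falls outside the recommended ambient range.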

12.2 Powering On and Off Oracle Big Data Appliance

This section includes the following topics:

12.2.1 Nonemergency Power Procedures

This section contains the procedures for powering on and off the components of Oracle Big Data Appliance in an orderly fashion.

See Also:

Oracle Big Data Appliance Software User's Guide for powering on and off gracefully when the software is installed and running.

12.2.1.1 Powering On Oracle Big Data Appliance

To turn on Oracle Big Data Appliance:

  • Turn on all 12 breakers on both PDUs.

Oracle ILOM and the Linux operating system start automatically.

12.2.1.2 Powering Off Oracle Big Data Appliance

To turn off Oracle Big Data Appliance:

  1. Turn off the servers.
  2. Turn off all 12 breakers on both PDUs.

12.2.1.2.1 Powering Off the Servers

Use the Linux shutdown command to turn off or restart the servers. Enter this command as root to shut down a server immediately:

# shutdown -hP now

The following command restarts a server immediately:

# shutdown -r now

See Also:

Linux SHUTDOWN manual page for details

12.2.1.2.2 Powering Off Multiple Servers at the Same Time

Use the dcli utility to run the shutdown command on multiple servers at the same time. Do not run the dcli utility from a server that will be shut down. Set up passwordless SSH for root, as described in "Setting Up Passwordless SSH".

The following shows the syntax of the command:

# dcli -l root -g group_name shutdown -hP now

In this command, group_name is a file that contains a list of servers.

The following example shuts down all Oracle Big Data Appliance servers listed in the server_group file:

# dcli -l root -g server_group shutdown -hP now
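
Because the server running dcli must stay up, exclude it from the group file before shutting down the rest. The following is a minimal sketch; all_nodes is a hypothetical file that lists every server in the rack, one host name per line.

# grep -v "$(hostname)" all_nodes > /tmp/server_group
# dcli -l root -g /tmp/server_group shutdown -hP now

Review /tmp/server_group before running the dcli command to confirm that the local server is not listed.
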
12.2.1.3 Powering On and Off Network Switches

The network switches do not have power switches. They turn off when power is removed by turning off a PDU or a breaker in the data center.

12.2.2 Emergency Power-Off Considerations

In an emergency, halt power to Oracle Big Data Appliance immediately. The following emergencies may require powering off Oracle Big Data Appliance:

  • Natural disasters such as earthquake, flood, hurricane, tornado, or cyclone

  • Abnormal noise, smell, or smoke coming from the system

  • Threat to human safety

12.2.2.1 Emergency Power-Off Procedure

To perform an emergency power-off procedure for Oracle Big Data Appliance, turn off power at the circuit breaker or pull the emergency power-off switch in the computer room. After the emergency, contact Oracle Support Services to restore power to the system.

12.2.2.2 Emergency Power-Off Switch

Emergency power-off (EPO) switches are required when computer equipment contains batteries capable of supplying more than 750 volt-amperes for more than 5 minutes. Systems that have these batteries include internal EPO hardware for connection to a site EPO switch or relay. Use of the EPO switch removes power from Oracle Big Data Appliance.

12.2.3 Cautions and Warnings

The following cautions and warnings apply to Oracle Big Data Appliance:

WARNING:

Do not touch the parts of this product that use high-voltage power. Touching them might result in serious injury.

Caution:

  • Do not turn off Oracle Big Data Appliance unless there is an emergency. In that case, follow the "Emergency Power-Off Procedure".

  • Keep the front and rear cabinet doors closed. Failure to do so might cause system failure or result in damage to hardware components.

  • Keep the top, front, and back of the cabinets clear to allow proper airflow and prevent overheating of components.

  • Use only the supplied hardware.

12.3 Adding Memory to the Servers

You can add memory to all servers in the cluster or to specific servers.

12.3.1 Adding Memory to an Oracle Server X8-2L or X7-2L

Oracle Big Data Appliance X8-2 and X7-2 ship from the factory with 256 GB of memory in each server. Eight of the 24 slots are populated with 32 GB DIMMs. Memory is expandable up to 768 GB with 32 GB DIMMs in all 24 slots, or up to 1.5 TB with 64 GB DIMMs in all 24 slots.

Recommended configurations are 8, 12, or 24 DIMMs per server.

Note:

  • All DIMMs on a server must be of the same type. Mixing DIMM types (such as 32 GB and 64 GB DIMMs) on the same server is not supported.
  • Memory expansions other than to 12 or 24 DIMMs per server are possible, but are not recommended because they may negatively impact performance.

To order more memory, contact your Oracle sales representative.

See the Oracle Server X8-2L Service Manual or Oracle® Server X7-2L Service Manual for DIMM population scenarios and rules, installation, and other information.

12.3.2 Adding Memory to an Oracle Server X6-2L

Oracle Big Data Appliance X6-2L ships from the factory with 256 GB of memory in each server. Eight of the 24 slots are populated with 32 GB DIMMs. Memory is expandable up to 768 GB (in the case of 32 GB DIMMs in all 24 slots).

See the Oracle® Server X6-2L Service Manual for instructions on DIMM population scenarios and rules, installation, and other information.

12.3.3 Adding Memory to an Oracle Server X5-2L, Sun Server X4-2L, or Sun Server X3-2L

Oracle Big Data Appliance X5-2 ships from the factory with 128 GB of memory in each server. Eight of the 24 slots are populated with 16 GB DIMMs. Memory is expandable up to 768 GB (in the case of 32 GB DIMMs in all 24 slots).

Oracle Big Data Appliance X4-2 and X3-2 servers are shipped with 64 GB of memory. Eight of the slots are populated with 8 GB DIMMs. These servers support 8 GB, 16 GB, and 32 GB DIMMs. You can expand the memory to a maximum of 512 GB (16 x 32 GB) in a server. You can use the 8 x 32 GB memory kit.

You can mix DIMM sizes, but they must be installed in order from largest to smallest. You can achieve the best performance by preserving symmetry. For example, add four of the same size DIMMs, one for each memory channel, to each processor, and ensure that both processors have the same size DIMMs installed in the same order.

To add memory to an Oracle Server X5-2L, Sun Server X4-2L, or Sun Server X3-2L:

  1. If you are mixing DIMM sizes, then review the DIMM population rules in the Oracle Server X5-2L Service Manual at

    http://docs.oracle.com/cd/E41033_01/html/E48325/cnpsm.gnvje.html#scrolltoc

  2. Power down the server.

  3. Install the new DIMMs. If you are installing 16 or 32 GB DIMMs, then replace the existing 8 GB DIMMs first, and then replace the plastic fillers. You must install the largest DIMMs first, then the next largest, and so forth. You can reinstall the original DIMMs last.

    See the Oracle Server X5-2L Service Manual at

    http://docs.oracle.com/cd/E41033_01/html/E48325/cnpsm.ceiebfdg.html#scrolltoc

  4. Power on the server.

12.3.4 Adding Memory to Sun Fire X4270 M2 Servers

Oracle Big Data Appliance ships from the factory with 48 GB of memory in each server. Six of the 18 DIMM slots are populated with 8 GB DIMMs. You can populate the empty slots with 8 GB DIMMs to bring the total memory to either 96 GB (12 x 8 GB) or 144 GB (18 x 8 GB). An upgrade to 144 GB may slightly reduce performance because of lower memory bandwidth; memory frequency drops from 1333 MHz to 800 MHz.

To add memory to a Sun Fire X4270 M2 server:

  1. Power down the server.

  2. Replace the plastic fillers with the DIMMs. See the Sun Fire X4270 M2 Server Service Manual at

    http://docs.oracle.com/cd/E19245-01/E21671/motherboard.html#50503715_71311

  3. Power on the server.
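
After any memory expansion, you can confirm that the operating system sees the expected total on every server. This is a minimal sketch using the dcli utility described earlier; it assumes passwordless SSH for root is set up.

dcli "grep MemTotal /proc/meminfo"

Every server should report the same value, and the value should reflect the expanded amount of memory.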

12.4 Maintaining the InfiniBand Network

The InfiniBand network connects the servers through the bondib0 interface to the InfiniBand switches in the rack. This section describes how to perform maintenance on the InfiniBand switches.

This section contains the following topics:

12.4.1 Replacing a Failed InfiniBand Switch

Complete these steps to replace a Sun Network QDR InfiniBand Gateway switch or a Sun Datacenter InfiniBand Switch 36.

To replace a failed InfiniBand switch:

  1. Turn off both power supplies on the switch by removing the power plugs.

  2. Disconnect the cables from the switch. All InfiniBand cables have labels at both ends indicating their locations. If any cables do not have labels, then label them.

  3. Remove the switch from the rack.

  4. Install the new switch in the rack.

  5. Restore the switch settings using the backup file, as described in "Backing Up and Restoring Oracle ILOM Settings".

  6. Connect to the switch as ilom-admin and open the Fabric Management shell:

    -> show /SYS/Fabric_Mgmt
    

    The prompt changes from -> to FabMan@hostname->

  7. Disable the Subnet Manager:

    FabMan@bda1sw-02-> disablesm
    
  8. Connect the cables to the new switch, being careful to connect each cable to the correct port.

  9. Verify that there are no errors on any links in the fabric:

    FabMan@bda1sw-02-> ibdiagnet -c 1000 -r
    
  10. Enable the Subnet Manager:

    FabMan@bda1sw-02-> enablesm

    Note:

    If the replaced switch was the Sun Datacenter InfiniBand Switch 36 spine switch, then manually fail the master Subnet Manager back to the switch by disabling the Subnet Managers on the other switches until the spine switch becomes the master. Then reenable the Subnet Manager on all the other switches.
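
After the Subnet Manager is reenabled, you can confirm which switch now holds the master role. This is a minimal sketch from the Fabric Management shell opened in step 6; it assumes the getmaster command is available on the switch, and the host name in the prompt is the same example name used above.

FabMan@bda1sw-02-> getmaster

The output identifies the switch that is currently running the master Subnet Manager.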

12.4.2 Verifying InfiniBand Network Operation

If any component in the InfiniBand network has required maintenance, including replacing an InfiniBand Host Channel Adapter (HCA) on a server, an InfiniBand switch, or an InfiniBand cable, or if operation of the InfiniBand network is suspected to be substandard, then verify that the InfiniBand network is operating properly. The following procedure describes how to verify network operation:

Note:

Use this procedure any time the InfiniBand network is performing below expectations.

To verify InfiniBand network operation:

  1. Enter the ibdiagnet command to verify InfiniBand network quality:

    # ibdiagnet -c 1000
    

    Investigate all errors reported by this command. It generates a small amount of network traffic and can run during a normal workload.

    See Also:

    Sun Network QDR InfiniBand Gateway Switch Command Reference at

    http://docs.oracle.com/cd/E26699_01/html/E26706/gentextid-28027.html#scrolltoc

  2. Report switch port error counters and port configuration information. The LinkDowned, RcvSwRelayErrors, XmtDiscards, and XmtWait errors are ignored by this command:

    # ibqueryerrors.pl -rR -s LinkDowned,RcvSwRelayErrors,XmtDiscards,XmtWait
    

    See Also:

    Linux man page for ibqueryerrors.

  3. Check the status of the hardware:

    # bdacheckhw
    

    The following is an example of the output:

    [SUCCESS: Correct system model : SUN FIRE X4270 M2 SERVER
    [SUCCESS: Correct processor info : Intel(R) Xeon(R) CPU X5675 @ 3.07GHz
    [SUCCESS: Correct number of types of CPU : 1
    [SUCCESS: Correct number of CPU cores : 24
    [SUCCESS: Sufficient GB of memory (>=48): 48
    [SUCCESS: Correct GB of swap space : 24
    [SUCCESS: Correct BIOS vendor : American Megatrends Inc.
    [SUCCESS: Sufficient BIOS version (>=08080102): 08080102
    [SUCCESS: Recent enough BIOS release date (>=05/23/2011) : 05/23/2011
    [SUCCESS: Correct ILOM version : 3.0.16.10.a r68533
    [SUCCESS: Correct number of fans : 6
    [SUCCESS: Correct fan 0 status : ok
    [SUCCESS: Correct fan 1 status : ok
    [SUCCESS: Correct fan 2 status : ok
    [SUCCESS: Correct fan 3 status : ok
    [SUCCESS: Correct fan 4 status : ok
    [SUCCESS: Correct fan 5 status : ok
    [SUCCESS: Correct number of power supplies : 2
    [INFO: Detected Santa Clara Factory, skipping power supply checks
    [SUCCESS: Correct disk controller model : LSI MegaRAID SAS 9261-8i
    [SUCCESS: Correct disk controller firmware version : 12.12.0-0048
    [SUCCESS: Correct disk controller PCI address : 13:00.0
    [SUCCESS: Correct disk controller PCI info : 0104: 1000:0079
    [SUCCESS: Correct disk controller PCIe slot width : x8
    [SUCCESS: Correct disk controller battery type : iBBU08
    [SUCCESS: Correct disk controller battery state : Operational
    [SUCCESS: Correct number of disks : 12
    [SUCCESS: Correct disk 0 model : SEAGATE ST32000SSSUN2.0
    [SUCCESS: Sufficient disk 0 firmware (>=61A): 61A
    [SUCCESS: Correct disk 1 model : SEAGATE ST32000SSSUN2.0
    [SUCCESS: Sufficient disk 1 firmware (>=61A): 61A
              .
              .
              .
    [SUCCESS: Correct disk 10 status : Online, Spun Up No alert
    [SUCCESS: Correct disk 11 status : Online, Spun Up No alert
    [SUCCESS: Correct Host Channel Adapter model : Mellanox Technologies MT26428 ConnectX VPI PCIe 2.0
    [SUCCESS: Correct Host Channel Adapter firmware version : 2.9.1000
    [SUCCESS: Correct Host Channel Adapter PCI address : 0d:00.0
    [SUCCESS: Correct Host Channel Adapter PCI info : 0c06: 15b3:673c
    [SUCCESS: Correct Host Channel Adapter PCIe slot width : x8
    [SUCCESS: Big Data Appliance hardware validation checks succeeded
    
  4. Check the status of the software:

    # bdachecksw
    
    [SUCCESS: Correct OS disk sda partition info : 1 ext3 raid 2 ext3 raid 3 linux-swap 4 ext3 primary
    [SUCCESS: Correct OS disk sdb partition info : 1 ext3 raid 2 ext3 raid 3 linux-swap 4 ext3 primary
    [SUCCESS: Correct data disk sdc partition info : 1 ext3 primary
    [SUCCESS: Correct data disk sdd partition info : 1 ext3 primary
    [SUCCESS: Correct data disk sde partition info : 1 ext3 primary
    [SUCCESS: Correct data disk sdf partition info : 1 ext3 primary
    [SUCCESS: Correct data disk sdg partition info : 1 ext3 primary
    [SUCCESS: Correct data disk sdh partition info : 1 ext3 primary
    [SUCCESS: Correct data disk sdi partition info : 1 ext3 primary
    [SUCCESS: Correct data disk sdj partition info : 1 ext3 primary
    [SUCCESS: Correct data disk sdk partition info : 1 ext3 primary
    [SUCCESS: Correct data disk sdl partition info : 1 ext3 primary
    [SUCCESS: Correct software RAID info : /dev/md2 level=raid1 num-devices=2 /dev/md0 level=raid1 num-devices=2
    [SUCCESS: Correct mounted partitions : /dev/md0 /boot ext3 /dev/md2 / ext3 /dev/sda4 /u01 ext4 /dev/sdb4 /u02 ext4 /dev/sdc1 /u03 ext4 /dev/sdd1 /u04 ext4 /dev/sde1 /u05 ext4 /dev/sdf1 /u06 ext4 /dev/sdg1 /u07 ext4 /dev/sdh1 /u08 ext4 /dev/sdi1 /u09 ext4 /dev/sdj1 /u10 ext4 /dev/sdk1 /u11 ext4 /dev/sdl1 /u12 ext4
    [SUCCESS: Correct swap partitions : /dev/sdb3 partition /dev/sda3 partition
    [SUCCESS: Correct Linux kernel version : Linux 2.6.32-200.21.1.el5uek
    [SUCCESS: Correct Java Virtual Machine version : HotSpot(TM) 64-Bit Server 1.6.0_29
    [SUCCESS: Correct Ansible version : 2.9.6
    [SUCCESS: Correct MySQL version : 5.6
    [SUCCESS: All required programs are accessible in $PATH
    [SUCCESS: All required RPMs are installed and valid
    [SUCCESS: Big Data Appliance software validation checks succeeded
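
You can also run the hardware and software checks from steps 3 and 4 across every server at once with the dcli utility and reduce the output to anything that did not succeed. This is a minimal sketch; it assumes passwordless SSH for root is set up.

dcli -l root "bdacheckhw; bdachecksw" | grep -v SUCCESS

Any remaining lines identify the server and the check that needs attention.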

12.4.3 About the Network Subnet Manager Master

The Subnet Manager manages all operational characteristics of the InfiniBand network.

The functions of the Subnet Manager are:

  • Discover the network topology

  • Assign a local identifier to all ports connected to the network

  • Calculate and program switch forwarding tables

  • Monitor changes in the fabric

The InfiniBand network can have multiple Subnet Managers, but only one Subnet Manager is active at a time. The active Subnet Manager is the Master Subnet Manager. The other Subnet Managers are the Standby Subnet Managers. If a Master Subnet Manager is shut down or fails, then a Standby Subnet Manager automatically becomes the Master Subnet Manager.

Each Subnet Manager has a configurable priority. When multiple Subnet Managers are on the InfiniBand network, the Subnet Manager with the highest priority becomes the master Subnet Manager. On Oracle Big Data Appliance, the Subnet Managers on the leaf switches are configured as priority 5, and the Subnet Managers on the spine switches are configured as priority 8.

The following guidelines determine where the Subnet Managers run on Oracle Big Data Appliance:

  • Run the Subnet Managers only on the switches in Oracle Big Data Appliance. Running a Subnet Manager on any other device is not supported.

  • When the InfiniBand network consists of one, two, or three racks cabled together, all switches must run a Subnet Manager. The master Subnet Manager runs on a spine switch.

  • For multirack configurations joining different types of racks, such as Oracle Big Data Appliance and Exalogic, see My Oracle Support note 1682501.1.

  • When the InfiniBand network consists of four or more racks cabled together, only the spine switches run a Subnet Manager. The Subnet Manager must be disabled on the leaf switches.
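
For the four-rack and larger case, the Subnet Manager is turned off on each leaf switch and left running on the spine switches. The following is a minimal sketch; it assumes the disablesm command is available from the root shell of the gateway switches, and the switch host names are illustrative only.

for sw in bda1sw-ib2 bda1sw-ib3; do     # hypothetical leaf switch host names
    ssh root@$sw disablesm              # stop the Subnet Manager on each leaf switch
done

The spine switches keep their Subnet Managers, and the one with the highest priority becomes the master.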

12.5 Changing the Number of Connections to a Gateway Switch

If you change the number of 10 GbE connections to a Sun Network QDR InfiniBand Gateway switch, then you must run the bdaredoclientnet utility. See "bdaredoclientnet."

To re-create the VNICs in a rack:

  1. Verify that /opt/oracle/bda/network.json exists on all servers and correctly describes the custom network settings. This command identifies files that are missing or have different date stamps (a checksum-based comparison is also sketched after this procedure):

    dcli ls -l /opt/oracle/bda/network.json
    
  2. Connect to Node1 (bottom of rack) using the administrative network. The bdaredoclientnet utility shuts down the client network, so you cannot use it in this procedure.

  3. Remove passwordless SSH:

    /opt/oracle/bda/bin/remove-root-ssh
    

    See "Setting Up Passwordless SSH" for more information about this command.

  4. Change directories:

    cd /opt/oracle/bda/network
    
  5. Run the utility:

    bdaredoclientnet
    

    The output is similar to that shown in Example 7-2.

  6. Restore passwordless SSH (optional):

    /opt/oracle/bda/bin/setup-root-ssh
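
In step 1, identical date stamps do not guarantee identical contents. The following is a minimal sketch that compares the copies of network.json directly; run it while passwordless SSH for root is still configured (that is, before step 3).

dcli /usr/bin/md5sum /opt/oracle/bda/network.json | awk '{print $2}' | sort | uniq -c

A single line of output means every server has an identical copy; more than one line identifies servers with a different file.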

12.6 Changing the NTP Servers

The configuration information for Network Time Protocol (NTP) servers can be changed after the initial setup. The following procedure describes how to change the NTP configuration information for InfiniBand switches, Cisco switches, and Sun servers. Oracle recommends changing each server individually.

To update the Oracle Big Data Appliance servers:

  1. Stop NTP services on the server.

  2. Update the /etc/ntp.conf file with the IP address of the new NTP server.

  3. Repeat these steps for each server.
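
The following is a minimal sketch of steps 1 and 2 on one server; it assumes the ntpd service and the ntpdate utility provided with the appliance operating system, and new_IPaddress is a placeholder for the new NTP server address.

# service ntpd stop                    # step 1: stop the NTP service
# vi /etc/ntp.conf                     # step 2: point the "server" entry at the new address
# ntpdate new_IPaddress                # optional one-time sync before restarting
# service ntpd start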

To update the InfiniBand switches:

  1. Log in to the switch as the ilom-admin user.

  2. Follow the instructions in "Setting the Time Zone and Clock on an InfiniBand Switch".

To update the Cisco Ethernet switch:

  1. Use telnet to connect to the Cisco Ethernet switch.

  2. Delete the current setting:

    # configure terminal
    Enter configuration commands, one per line. End with CNTL/Z.
    (config)# no ntp server current_IPaddress
    
  3. Enter the new IP address:

    # configure terminal
    Enter configuration commands, one per line. End with CNTL/Z.
    (config)# ntp server new_IPaddress
    
  4. Save the current configuration:

    # copy running-config startup-config
    
  5. Exit from the session:

    # exit

Restart Oracle Big Data Appliance after changing the servers and switches.

12.7 Monitoring the PDU Current

The PDU current can be monitored directly. Configure threshold settings as a way to monitor the PDUs. The configurable threshold values for each metering unit module and phase are Info low, Pre Warning, and Alarm.

See Also:

Sun Rack II Power Distribution Units User's Guide for information about configuring and monitoring PDUs at

https://docs.oracle.com/cd/E19657-01/html/E23956/index.html

Table 12-1 lists the threshold values for the Oracle Big Data Appliance rack using a single-phase, low-voltage PDU.

Table 12-1 Threshold Values for Single-Phase, Low-Voltage PDU

PDU    Module/Phase         Info Low Threshold    Pre Warning Threshold    Alarm Threshold
A      Module 1, phase 1    0                     18                       23
A      Module 1, phase 2    0                     22                       24
A      Module 1, phase 3    0                     18                       23
B      Module 1, phase 1    0                     18                       23
B      Module 1, phase 2    0                     22                       24
B      Module 1, phase 3    0                     18                       23

Table 12-2 lists the threshold values for the Oracle Big Data Appliance rack using a three-phase, low-voltage PDU.

Table 12-2 Threshold Values for Three-Phase, Low-Voltage PDU

PDU        Module/Phase         Info Low Threshold    Pre Warning Threshold    Alarm Threshold
A and B    Module 1, phase 1    0                     32                       40
A and B    Module 1, phase 2    0                     34                       43
A and B    Module 1, phase 3    0                     33                       42

Table 12-3 lists the threshold values for the Oracle Big Data Appliance rack using a single-phase, high-voltage PDU.

Table 12-3 Threshold Values for Single-Phase, High-Voltage PDU

PDU    Module/Phase         Info Low Threshold    Pre Warning Threshold    Alarm Threshold
A      Module 1, phase 1    0                     16                       20
A      Module 1, phase 2    0                     20                       21
A      Module 1, phase 3    0                     16                       20
B      Module 1, phase 1    0                     16                       20
B      Module 1, phase 2    0                     20                       21
B      Module 1, phase 3    0                     16                       20

Table 12-4 lists the threshold values for the Oracle Big Data Appliance rack using a three-phase, high-voltage PDU.

Table 12-4 Threshold Values for Three-Phase, High-Voltage PDU

PDU        Module/Phase         Info Low Threshold    Pre Warning Threshold    Alarm Threshold
A and B    Module 1, phase 1    0                     18                       21
A and B    Module 1, phase 2    0                     18                       21
A and B    Module 1, phase 3    0                     17                       21

12.8 Node Migration

There is no automated failover for the cluster nodes. You must perform the recovery process manually, and the process differs for critical and non-critical nodes.

Nodes 1 through 4 host services that are essential to Oracle Big Data Appliance. Therefore, these are critical nodes. Recover from a critical node failure by migrating all of its services to another node. Nodes 5 and above are generally DataNodes. Since data is replicated across three nodes, the failure of a single DataNode is non-critical. To recover from a non-critical node failure, attempt to restart the node and, if that fails, replace it.

Use the bdacli admin_cluster command to decommission, commission, and migrate nodes.
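
A minimal sketch of the syntax follows; node_name is a placeholder, and the available subcommands may vary by software release.

# bdacli admin_cluster migrate node_name        # move a failing critical node's services to another node
# bdacli admin_cluster decommission node_name   # remove a failing non-critical node from the cluster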

A node migration only moves roles and should not result in any changes in private keys or trust stores. Renewed certificates should still be in place after a node migration.

See Also:

See Managing a Hardware Failure in the Oracle Big Data Appliance Software User’s Guide for instructions on how to replace a failing CDH or NoSQL cluster node.

12.9 Capping CPU Cores on Servers

The bdacli utility provides operations to set or get the number of active cores on servers.

Introduction

Customers may have reasons to reduce the number of active CPU cores on specific servers within the appliance. For example, in cases where licensing cost is determined by the number of CPU cores, you can selectively disable cores in order to maintain compliance with a licensing agreement.

Oracle Big Data Appliance servers ship with two CPUs. Each of these contains the same number of physical cores, so the total number of physical cores on a single server is twice the number of cores on a single CPU. By default, the number of virtual cores visible to the operating system is twice the total number of physical cores, because Intel hyper-threading is enabled by default for all Oracle Big Data Appliance CPUs.

For example, on X7-2 servers there are 24 cores on a single CPU so the total number of physical cores is 24 x 2 = 48. With hyper-threading 48 physical cores become 96 virtual cores.

The bdacli setinfo active_cores command lets you reduce the number of active cores or increase them, up to the number of available physical cores. Related bdacli getinfo parameters let you determine the number of available physical cores, the number of physical cores currently enabled in the BIOS, and the number of cores actually in use by the server.

All of the commands in the following table must be executed as root.

Table 12-5 bdacli Commands for CPU Core Capping

Command                                     Return or Result
bdacli getinfo server_all_cores             The total number of available physical cores on the server (both CPUs).
bdacli getinfo server_active_cores          The number of physical cores that are actually being used on the server.
bdacli getinfo server_enabled_cores         The number of physical cores enabled in the BIOS. This is normally the same as the number of physical cores actually in use, but may differ if bdacli setinfo active_cores has been called and the server has not yet rebooted.
bdacli setinfo active_cores <number>        Sets the number of active physical cores for the system. The number passed as a parameter must be an even number (Oracle Big Data Appliance servers have two sockets) and must be between the minimum supported value (16) and the maximum allowed for the architecture. The server must be rebooted for the change to the BIOS configuration file to take effect.

Note:

The server must be rebooted before a change to the number of active cores takes effect.

Example

This example shows the use of the commands on X7-2L servers. The number of physical cores on X6-2L and X5-2L servers is different, but the procedure is the same.

  1. Get the total number of available physical cores for the server:
    # bdacli getinfo server_all_cores
  2. Get the number of physical cores that are actually being used on the server:
    # bdacli getinfo server_active_cores
  3. Get the number of physical cores enabled in the BIOS. This is normally the same as the number of physical cores that are actually being used on the server, but may be different if bdacli setinfo active_cores has been called and the server has not yet rebooted. This is the number of physical cores that will be active after the next reboot:
    # bdacli getinfo server_enabled_cores
  4. Set the number of active physical cores for the system. The number passed as a parameter must be an even number (because Oracle Big Data Appliance servers have two sockets) and must be between the minimum supported value (16) and the maximum allowed for the architecture. The server must be rebooted for the change to the BIOS configuration file to take effect:
    # bdacli setinfo active_cores <number>
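
A hypothetical session on an X7-2L server (48 physical cores, per the example earlier in this section) might look like the following; the value 24 is illustrative only.

# bdacli getinfo server_all_cores        # total physical cores available (48 on an X7-2L)
# bdacli setinfo active_cores 24         # request 24 active cores (even, and at least 16)
# bdacli getinfo server_enabled_cores    # reports the count that takes effect at the next reboot
# reboot
# bdacli getinfo server_active_cores     # reports 24 after the server restarts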

Core Capping on Edge Nodes and on Cluster Nodes in Earlier Releases

The bdacli commands described above are built into Oracle Big Data Appliance as of Release 5.1. However, you can load a patch to enable them on servers running earlier releases of Oracle Big Data Appliance, including edge nodes. The patch is not needed on edge nodes running Oracle Big Data Appliance 5.1.

See Also:

My Oracle Support note 2473609.1 describes how to patch core capping functionality into earlier Oracle Big Data Appliance servers.