This chapter describes the utilities available on Oracle Big Data Appliance. Most of the utilities are for monitoring the health of the hardware and the network.
Checks the health of a CDH cluster, including the software, hardware, and network, and logs the results in a file in the /tmp directory.
If the cluster is protected by Kerberos authentication, then you must obtain a ticket for the hdfs user before running bdacheckcluster.

To obtain a ticket for hdfs:
Add the hdfs user to the key distribution center (KDC), using this kadmin command:
addprinc hdfs@REALM_NAME
Request the ticket as hdfs:
$ su hdfs -c "kinit hdfs@REALM_NAME"
This example shows the output from the utility:
# bdacheckcluster
INFO: Logging results to /tmp/bdacheckcluster_1373393815/
Enter CM admin password to enable check for CM services and hosts
Press ENTER twice to skip CM services and hosts checks
Enter password: password
Enter password again: password
SUCCESS: Mammoth configuration file is valid.
SUCCESS: hdfs is in good health
SUCCESS: mapreduce is in good health
SUCCESS: oozie is in good health
SUCCESS: zookeeper is in good health
SUCCESS: hive is in good health
SUCCESS: hue is in good health
SUCCESS: Cluster passed checks on all hadoop services health check
SUCCESS: bda1node01.us.oracle.com is in good health
SUCCESS: bda1node02.us.oracle.com is in good health
SUCCESS: bda1node03.us.oracle.com is in good health
.
.
.
SUCCESS: Cluster passed checks on all hosts health check
SUCCESS: All cluster host names are pingable
SUCCESS: All cluster hosts passed checks on last reboot
INFO: Starting cluster host hardware checks
SUCCESS: All cluster hosts pass hardware checks
INFO: Starting cluster host software checks
SUCCESS: All cluster hosts pass software checks
SUCCESS: All ILOM hosts are pingable
SUCCESS: All client interface IPs are pingable
SUCCESS: All admin eth0 interface IPs are pingable
SUCCESS: All private Infiniband interface IPs are pingable
SUCCESS: All cluster hosts resolve public hostnames to private IPs
.
.
.
INFO: Checking local reverse DNS resolve of private IPs on all cluster hosts
SUCCESS: All cluster hosts resolve private IPs to public hostnames
SUCCESS: 2 virtual NICs available on all cluster hosts
SUCCESS: NTP service running on all cluster hosts
SUCCESS: At least one valid NTP server accessible from all cluster servers.
SUCCESS: Max clock drift of 0 seconds is within limits
SUCCESS: Big Data Appliance cluster health checks succeeded
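Because the results are also logged to a file under /tmp, you can scan a saved run for failures afterward. The following sketch uses hand-made sample log lines (the ERROR line is invented for illustration; real runs print the actual log directory at startup):

```shell
# Stand-in for a saved bdacheckcluster log; the ERROR line is hypothetical
cat > /tmp/sample_check.log <<'EOF'
SUCCESS: hdfs is in good health
ERROR: hive is in bad health
SUCCESS: hue is in good health
EOF
# Count lines that did not pass; a non-zero count needs attention
FAILURES=$(grep -c '^ERROR' /tmp/sample_check.log)
echo "failures=$FAILURES"   # prints failures=1
```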
Checks the hardware profile of the server.
See "Configuring the Oracle Big Data Appliance Servers" for tips about using this utility.
This example shows the output from the utility:
# bdacheckhw
SUCCESS: Found BDA v2 server : SUN FIRE X4270 M3
SUCCESS: Correct processor info : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
SUCCESS: Correct number of types of CPU : 1
SUCCESS: Correct number of CPU cores : 32
SUCCESS: Sufficient GB of memory (>=63): 63
SUCCESS: Correct BIOS vendor : American Megatrends Inc.
SUCCESS: Sufficient BIOS version (>=08080102): 18021300
SUCCESS: Recent enough BIOS release date (>=05/23/2011):06/19/2012
SUCCESS: Correct ILOM major version : 3.1.2.12
SUCCESS: Sufficient ILOM minor version (>=74388): 74388
SUCCESS: Correct number of fans : 4
SUCCESS: Correct fan 0 status : ok
SUCCESS: Correct fan 1 status : ok
.
.
.
SUCCESS: Big Data Appliance hardware validation checks succeeded
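Each hardware check follows the same pattern: compare a measured value against a minimum threshold and report SUCCESS or FAILURE. A minimal sketch of that pattern, with both values hard-coded rather than probed from hardware:

```shell
# Illustration of the ">= threshold" check style used above;
# REQUIRED_GB and FOUND_GB are hard-coded sample values
REQUIRED_GB=63
FOUND_GB=63
if [ "$FOUND_GB" -ge "$REQUIRED_GB" ]; then
  echo "SUCCESS: Sufficient GB of memory (>=$REQUIRED_GB): $FOUND_GB"
else
  echo "FAILURE: Insufficient GB of memory (>=$REQUIRED_GB): $FOUND_GB"
fi
```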
Checks the InfiniBand cabling between the servers and switches of a single rack, when entered with no options.
Run this command after connecting as root to any server.
The same as running without options except that the network must still be configured with the factory default settings. You can use this option as soon as Oracle Big Data Appliance arrives at the site, even before the switches are configured.
Verifies that the InfiniBand switch-to-switch cabling among multiple racks is correct. To create json_file, see the -g option.
Generates a sample JSON file named sample-multi-rack.json. Use this file as an example of the format required by the -m option.
The network must be configured with custom settings as described by /opt/oracle/bda/BdaDeploy.json.
This example checks the switch-to-server InfiniBand cables:
[root@node01 network]# bdacheckib
LINK bda1sw-ib3.15A ... bda1node02.HCA-1.2 UP
LINK bda1sw-ib3.15B ... bda1node01.HCA-1.2 UP
LINK bda1sw-ib3.14A ... bda1node04.HCA-1.2 UP
LINK bda1sw-ib3.14B ... bda1node03.HCA-1.2 UP
.
.
.
The next example generates the JSON file and shows the output.
[root@bda1node01 bda]# bdacheckib -g
[root@bda1node01 bda]# cat sample-multi-rack.json
# This json multirack spec is generated. The array elements are sorted
# alphabetically. A properly arranged json spec representing racks from left to right
# can be used as input to bdacheckib (bdacheckib -m multi-rack.json)
# Note commas separating rack elements are optional.
[
{"SPINE_NAME": "bda1sw-ib1", "LEAF1_NAME": "bda1sw-ib2", "LEAF2_NAME": "bda1sw-ib3"}
{"SPINE_NAME": "bda2sw-ib1", "LEAF1_NAME": "bda2sw-ib2", "LEAF2_NAME": "bda2sw-ib3"}
{"SPINE_NAME": "dm01sw-ib1", "LEAF1_NAME": "dm01sw-ib2", "LEAF2_NAME": "dm01sw-ib3"}
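After editing the generated file to match the physical left-to-right rack order, a quick sanity check is to count the rack entries: each rack contributes exactly one SPINE_NAME key. A sketch using a hand-made two-rack stand-in for the generated file:

```shell
# Hand-made stand-in for an edited multi-rack JSON spec
cat > /tmp/sample-multi-rack.json <<'EOF'
[
{"SPINE_NAME": "bda1sw-ib1", "LEAF1_NAME": "bda1sw-ib2", "LEAF2_NAME": "bda1sw-ib3"}
{"SPINE_NAME": "bda2sw-ib1", "LEAF1_NAME": "bda2sw-ib2", "LEAF2_NAME": "bda2sw-ib3"}
]
EOF
# One SPINE_NAME per rack, so this count is the rack count
grep -c 'SPINE_NAME' /tmp/sample-multi-rack.json   # prints 2
```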
The final example checks all the racks on the InfiniBand network using the edited JSON file created in the previous example:
# bdacheckib -m sample-multi-rack.json
Rack #1 leaf to spines topology check
leaf: bda1sw-ib2
expected 2 links to rack 1, found 4 OK
expected 2 links to rack 2, found 4 OK
expected 2 links to rack 3, found 3 OK
expected 2 links to rack 4, found 3 OK
leaf: bda1sw-ib3
expected 2 links to rack 1, found 3 OK
expected 2 links to rack 2, found 4 OK
expected 2 links to rack 3, found 3 OK
expected 2 links to rack 4, found 3 OK
.
.
.
Rack #1 cabling details
leaf: bda1sw-ib2
LINK ... to rack2 ......... UP
LINK ... to rack2 ......... UP
LINK ... to rack1 ......... UP
LINK ... to rack1 ......... UP
LINK ... to rack3 ......... UP
LINK ... to rack3 ......... UP
LINK ... to rack4 ......... UP
LINK ... to rack4 ......... UP
.
.
.
Checks whether the network configuration is working properly.
This example shows the output from the utility:
[root@node01 network]# bdachecknet
bdachecknet: analyse /opt/oracle/bda/BdaDeploy.json
bdachecknet: passed
bdachecknet: checking for BdaExpansion.json
bdachecknet: ping test private infiniband ips (bondib0 40gbs)
bdachecknet: passed
bdachecknet: ping test admin ips (eth0 1gbs)
bdachecknet: passed
bdachecknet: test admin network resolve and reverse resolve
bdachecknet: passed
bdachecknet: test admin name array matches ip array
bdachecknet: passed
bdachecknet: test client network (eoib) resolve and reverse resolve
bdachecknet: passed
bdachecknet: test client name array matches ip array
bdachecknet: passed
bdachecknet: test ntp servers
bdachecknet: passed
bdachecknet: ping client gateway
bdachecknet: passed
bdachecknet: test arp -a
bdachecknet: passed
Checks the software profile of a server.
See "Configuring the Oracle Big Data Appliance Servers" for tips about using this utility.
This example shows the output from the utility:
# bdachecksw
SUCCESS: Correct OS disk s0 partition info : 1 ext3 raid 2 ext3 raid 3 linux-swap 4 ext3 primary
SUCCESS: Correct OS disk s1 partition info : 1 ext3 raid 2 ext3 raid 3 linux-swap 4 ext3 primary
SUCCESS: Correct data disk s2 partition info : 1 ext3 primary
SUCCESS: Correct data disk s3 partition info : 1 ext3 primary
SUCCESS: Correct data disk s4 partition info : 1 ext3 primary
SUCCESS: Correct data disk s5 partition info : 1 ext3 primary
SUCCESS: Correct data disk s6 partition info : 1 ext3 primary
SUCCESS: Correct data disk s7 partition info : 1 ext3 primary
SUCCESS: Correct data disk s8 partition info : 1 ext3 primary
SUCCESS: Correct data disk s9 partition info : 1 ext3 primary
SUCCESS: Correct data disk s10 partition info : 1 primary
SUCCESS: Correct data disk s11 partition info : 1 primary
SUCCESS: Correct software RAID info : /dev/md2 level=raid1 num-devices=2 /dev/md0 level=raid1 num-devices=2
SUCCESS: Correct mounted partitions : /dev/mapper/lvg1-lv1 /lv1 ext4 /dev/md0 /boot ext3 /dev/md2 / ext3 /dev/sd4 /u01 ext4 /dev/sd4 /u02 ext4 /dev/sd1 /u03 ext4 /dev/sd1 /u04 ext4 /dev/sd1 /u05 ext4 /dev/sd1 /u06 ext4 /dev/sd1 /u07 ext4 /dev/sd1 /u08 ext4 /dev/sd1 /u09 ext4 /dev/sd1 /u10 ext4
SUCCESS: Correct matching label and slot : symbolic link to `../../sda4'
SUCCESS: Correct matching label and slot : symbolic link to `../../sdb4'
.
.
.
SUCCESS: Correct Linux kernel version : Linux 2.6.32-200.21.1.el5uek
SUCCESS: Correct Java Virtual Machine version : HotSpot(TM) 64-Bit Server 1.6.0_37
SUCCESS: Correct puppet version : 2.6.11
SUCCESS: Correct MySQL version : 5.5.17
SUCCESS: All required programs are accessible in $PATH
SUCCESS: All required RPMs are installed and valid
SUCCESS: Correct bda-monitor status : bda monitor is running
SUCCESS: Big Data Appliance software validation checks succeeded
Synchronizes the time of all servers in a cluster.
To use this utility, you must log in as root to the first server in the cluster. Passwordless ssh must also be set up for the cluster. See the -C parameter for "setup-root-ssh."
This utility creates a log file named bdaclustersynctime.log in the directory identified in the output.
The following example successfully runs bdaclustersynctime:
# bdaclustersynctime
INFO: Logging results to /tmp/bdacluster_1373485952/
SUCCESS: Mammoth configuration file is valid.
SUCCESS: All cluster host names are pingable
SUCCESS: NTP service running on all cluster hosts
SUCCESS: At least one valid NTP server found
SUCCESS: No errors found syncing date and time on all nodes
SUCCESS: Max clock drift of 0 seconds is within limits
SUCCESS: Sync date and time of cluster succeeded
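The drift check in the output above boils down to taking the absolute difference between clocks and comparing it against a limit. A minimal sketch of that arithmetic, with two hard-coded epoch timestamps standing in for the clocks of two hosts and an assumed 5-second limit:

```shell
# Hard-coded sample timestamps; the real utility compares cluster hosts
T_REF=1373485952
T_HOST=1373485952
DRIFT=$(( T_HOST - T_REF ))
[ "$DRIFT" -lt 0 ] && DRIFT=$(( -DRIFT ))   # absolute value
if [ "$DRIFT" -le 5 ]; then
  echo "Max clock drift of $DRIFT seconds is within limits"
fi
```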
Collects diagnostic information about an individual server for Oracle Support.
Downloads diagnostics from Cloudera Manager. You must know the Cloudera Manager admin password to use this parameter.
Collects the output of a complete Hadoop Distributed File System (HDFS) fsck check.
Gathers ILOM data using ipmitool. You cannot use ilom in the same command as snapshot.
Collects Oracle OS Watcher logs, which include historical operating system performance and monitoring data. The output can consume several hundred megabytes of disk space.
Collects ILOM snapshot data over the network, and provides more useful output than the ilom option. You must know the server root password to use this parameter. You cannot use snapshot in the same command as ilom.
The name of the compressed file in the /tmp directory where bdadiag stored the data. The file name has the form bdadiag_server-name_server-serial-number_date.tar.bz2.
The logs are organized in subdirectories, including the following:
The optional parameters instruct bdadiag to collect additional diagnostics. You can enter the options together on the command line to collect the most information.
You run bdadiag at the request of Oracle Support and associate it with an open Service Request (SR). See the Oracle Big Data Appliance Software User's Guide for details about providing diagnostics to Oracle Support.
This example shows the basic output from the utility:
# bdadiag
Big Data Appliance Diagnostics Collection Tool v2.3.1
Gathering Linux information
Skipping ILOM collection. Use the ilom or snapshot options, or login to ILOM over the network and run Snapshot separately if necessary.
Generating diagnostics tarball and removing temp directory
===========================================================================
Done. The report files are bzip2 compressed in /tmp/bdadiag_bda1node0101_1216FM5497_2013_01_18_06_49.tar.bz2
===========================================================================
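Because the tarball name follows the bdadiag_server-name_serial_date pattern, you can recover its fields with shell parameter expansion. A sketch using the file name from the examples in this section:

```shell
# File name copied from the bdadiag examples in this section
F=bdadiag_bda1node01_1143FMM06E_2013_07_09_13_26.tar.bz2
BASE=${F#bdadiag_}        # drop the bdadiag_ prefix
BASE=${BASE%.tar.bz2}     # drop the extension
SERVER=${BASE%%_*}        # first underscore-delimited field is the server
echo "server=$SERVER"     # prints server=bda1node01
```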
The next example shows the additional output from the cm option.
# bdadiag cm
Big Data Appliance Diagnostics Collection Tool v2.2.0
Getting Cloudera Manager Diagnostics
Password for the Cloudera Manager admin user is needed
Enter password: password
Enter password again: password
Passwords match
Waiting for Cloudera Manager ...
Succeeded. Output in : /opt/oracle/BDAMammoth/bdaconfig/tmp/cm_commands.out
Collecting diagnostic data ...
{
  "startTime" : "2013-07-09T13:27",
  "endTime" : "2013-07-08T13:27"
}
Succeeded. Output in : /opt/oracle/BDAMammoth/bdaconfig/tmp/cm_commands_collectDiagnosticData.out
Command ID is 213
...
Command 213 finished after 20 seconds
Operation completed successfully
Diagnostic data successfully collected
Can be downloaded from URL http://bda1node03.example.com:7180/cmf/command/213/download
Downloading diagnostic data ...
Original Cloudera Manager Diagnostics Bundle Name : 3609df48-4930-11e1-9006-b8ac6f8061c1.cluster22.20130709-20-39.support-bundle.zip
Data successfully downloaded and placed in /tmp/bdadiag_bda1node01_1143FMM06E_2013_07_09_13_26/3609df48-4930-11e1-9006-b8ac6f8061c1.AK00023713.cluster22.20130709-20-39.support-bundle.zip
Gathering Linux information
.
.
.
The next example shows the additional output from the snapshot option:
# bdadiag snapshot
Big Data Appliance Diagnostics Collection Tool v2.3.1
Please enter Host OS root password required for snapshot: password
Gathering Linux information
Gathering ILOM Snapshot data - please be patient, this may take a long time
snapshot running: Tue Jul 9 13:42:28 PDT 2013
snapshot running: Tue Jul 9 13:43:29 PDT 2013
snapshot running: Tue Jul 9 13:44:32 PDT 2013
snapshot running: Tue Jul 9 13:45:35 PDT 2013
snapshot running: Tue Jul 9 13:46:39 PDT 2013
snapshot running: Tue Jul 9 13:47:43 PDT 2013
snapshot running: Tue Jul 9 13:48:47 PDT 2013
Snapshot Collection completed.
Generating diagnostics tarball and removing temp directory
==============================================================================
Done. The report files are bzip2 compressed in /tmp/bdadiag_bda1node01_1143FMM06E_2013_07_09_13_40.tar.bz2
==============================================================================
Deploys the HDFS, MapReduce, and Hive client configuration files from Cloudera Manager.
You must be connected to the server as root.
To deploy a new client configuration to all nodes of the cluster, use the dcli -C command.
This example shows the output from one node in the cluster:
# bdagetclientconfig
bdagetclientconfig : Download and deploy HDFS, Map-Reduce and Hive client configuration files
Logging to /tmp/bdagetclientconfig-1368541073.out
Downloading HDFS and Map-Reduce client configuration zipfile
Downloading Hive client configuration zipfile
Deploying HDFS, Map-Reduce and Hive client configurations
Successfully downloaded and deployed HDFS, Map-Reduce and Hive client configurations !
Returns information about an individual server.
If you need to contact Oracle Support about an issue with Cloudera's Distribution including Apache Hadoop, then run this command first.
Validates the hardware and software on a server by running bdacheckhw, and then bdachecksw.
Regenerates the bda_reboot_status and BDA_REBOOT_* files in /root, in addition to performing the validation checks. Use this parameter if the checks fail after restarting the server, such that either BDA_REBOOT_FAILED or BDA_REBOOT_WARNINGS exists, and the issue is resolved. Do not use this parameter for the initial set of checks, that is, if /root/bda_reboot_status does not exist.
# bdaimagevalidate
SUCCESS: Found BDA v2 server : SUN FIRE X4270 M3
SUCCESS: Correct processor info : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
SUCCESS: Correct number of types of CPU : 1
SUCCESS: Correct number of CPU cores : 32
SUCCESS: Sufficient GB of memory (>=63): 63
SUCCESS: Correct BIOS vendor : American Megatrends Inc.
SUCCESS: Sufficient BIOS version (>=08080102): 18021300
SUCCESS: Recent enough BIOS release date (>=05/23/2011):06/19/2012
SUCCESS: Correct ILOM major version : 3.1.2.12
.
.
.
SUCCESS: All required programs are accessible in $PATH
SUCCESS: All required RPMs are installed and valid
SUCCESS: Oracle R Connector for Hadoop is available : Oracle R Connector for Hadoop 2.3.0 (rev. 286)
SUCCESS: Correct bda-monitor status : bda monitor is running
SUCCESS: Big Data Appliance software validation checks succeeded
SUCCESS: All Big Data Appliance validation checks succeeded
Re-creates the virtual network interface cards (VNICs) for all servers in the rack and spreads them across the available 10 GbE ports.
Log in to server 1 and change to the /opt/oracle/bda/network directory to run this utility.
You must run this utility after changing the number of 10 GbE connections to a Sun Network QDR InfiniBand Gateway switch. See "Changing the Number of Connections to a Gateway Switch."
The bdaredoclientnet utility performs the following subset of tasks done by the networksetup-two script during the initial configuration of Oracle Big Data Appliance:
Verifies that the administrative network is working, the InfiniBand cabling is correct, and the InfiniBand switches are available
Determines how many 10 GbE connections are available and connects them to the InfiniBand Gateway switches
Deletes all VNICs and re-creates them
Connects to each server and updates the configuration files
Restarts the client network and verifies that it can connect to each server using the newly configured client network
The following example shows the output from the bdaredoclientnet utility:
# cd /opt/oracle/bda/network
# bdaredoclientnet
bdaredoclientnet: check syntax and static semantics of /opt/oracle/bda/BdaDeploy.json
bdaredoclientnet: passed
bdaredoclientnet: ping servers by name on admin network
bdaredoclientnet: passed
bdaredoclientnet: verify infiniband topology
bdaredoclientnet: passed
bdaredoclientnet: start setup client network (10gigE over Infiniband)
bdaredoclientnet: ping both gtw leaf switches
bdaredoclientnet: passed
bdaredoclientnet: verify existence of gateway ports
bdaredoclientnet: passed
bdaredoclientnet: removing existing eoib setup for this server
Shutting down interface bondeth0:  [  OK  ]
Shutting down interface bondib0:  [  OK  ]
Shutting down interface eth0:  [  OK  ]
Shutting down loopback interface:  [  OK  ]
Disabling IPv4 packet forwarding: net.ipv4.ip_forward = 0
[  OK  ]
Bringing up loopback interface:  [  OK  ]
Bringing up interface bondib0:  [  OK  ]
Bringing up interface eth0:  [  OK  ]
bdaredoclientnet: ping server ips on admin network
bdaredoclientnet: passed
bdaredoclientnet: ping servers by name on admin network
bdaredoclientnet: passed
bdaredoclientnet: test ssh server ips on admin network
hello from bda1node02.us.oracle.com
hello from bda1node03.us.oracle.com
.
.
.
bdaredoclientnet: passed
bdaredoclientnet: check existence of default vlan for port 0A-ETH-1 on bda1sw-ib2
bdaredoclientnet: use existing default vlan for port 0A-ETH-1 on bda1sw-ib2
bdaredoclientnet: check existence of default vlan for port 0A-ETH-1 on bda1sw-ib3
bdaredoclientnet: use existing default vlan for port 0A-ETH-1 on bda1sw-ib3
bdaredoclientnet: passed
bdaredoclientnet: apply eoib on each server
bdaredoclientnet: wait a few seconds for the network to restart on 10.111.22.001
bdaredoclientnet: wait a few seconds for the network to restart on 10.111.22.002
.
.
.
check and delete vNIC for bda1node02 on switch bda1sw-ib2
vNIC ID 757 deleted
IO Adapter for vNIC deleted
check and delete vNIC for bda1node02 on switch bda1sw-ib3
check and delete vNIC for bda1node02 on switch bda1sw-ib2
check and delete vNIC for bda1node02 on switch bda1sw-ib3
vNIC ID 707 deleted
IO Adapter for vNIC deleted
create vNIC eth9 bda1node02 on switch bda1sw-ib3
vNIC created
create vNIC eth8 bda1node02 on switch bda1sw-ib2
vNIC created
.
.
.
bdaredoclientnet: ping server ips on client network
bdaredoclientnet: passed
bdaredoclientnet: test ssh server ips on client network
hello from bda1node02.us.oracle.com
hello from bda1node03.us.oracle.com
.
.
.
bdaredoclientnet: passed
bdaredoclientnet: end setup client network
Returns the serial numbers and media access control (MAC) addresses for most components of the Oracle Big Data Appliance server that you are connected to.
This example shows the output from the utility:
# bdaserials
Rack serial number : AK00023713
System serial number : 1137FMM0BY
System UUID : 080020FF-FFFF-FFFF-FFFF-7E97D6282100
Motherboard serial number : 0338MSL-1131BA2194
Chassis serial number : 1137FMM0BY
Memory serial numbers : 87948175 87949173 87948163 8794816B 87948130 87948176
Infiniband HCA serial number : 1388FMH-1122501437
Disk controller serial number : SV11713731
Hard disk serial numbers :
SEAGATE ST32000SSSUN2.0T061A1125L6M89X
SEAGATE ST32000SSSUN2.0T061A1125L6LFH0
SEAGATE ST32000SSSUN2.0T061A1125L6M94J
SEAGATE ST32000SSSUN2.0T061A1125L6LLEZ
SEAGATE ST32000SSSUN2.0T061A1125L6M5S2
SEAGATE ST32000SSSUN2.0T061A1125L6LSD4
SEAGATE ST32000SSSUN2.0T061A1127L6M58L
SEAGATE ST32000SSSUN2.0T061A1127L6R40S
SEAGATE ST32000SSSUN2.0T061A1125L6M3WX
SEAGATE ST32000SSSUN2.0T061A1125L6M65D
SEAGATE ST32000SSSUN2.0T061A1127L6NW3K
SEAGATE ST32000SSSUN2.0T061A1127L6N4G1
MAC addresses :
bondeth0 Ethernet : CE:1B:4B:85:2A:63
bondib0 InfiniBand : 80:00:00:4A:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00
bond0 Ethernet : 00:00:00:00:00:00
eth0 Ethernet : 00:21:28:E7:97:7E
eth1 Ethernet : 00:21:28:E7:97:7F
eth2 Ethernet : 00:21:28:E7:97:80
eth3 Ethernet : 00:21:28:E7:97:81
eth8 Ethernet : CE:1B:4B:85:2A:63
eth9 Ethernet : CE:1B:4C:85:2A:63
ib0 InfiniBand : 80:00:00:4A:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00
ib1 InfiniBand : 80:00:00:4B:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00
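When scripting against bdaserials output, it can help to verify that a captured Ethernet MAC address is well formed before using it. A sketch using one of the addresses from the example above:

```shell
# MAC copied from the bdaserials example output above
MAC="00:21:28:E7:97:7E"
# Six colon-separated pairs of uppercase hex digits
if echo "$MAC" | grep -Eq '^([0-9A-F]{2}:){5}[0-9A-F]{2}$'; then
  echo "well-formed Ethernet MAC"
fi
```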
Turns off swapping by the operating system.
The bdaswapoff utility disables both swap partitions on a server, which disables all swapping by the operating system. This state persists when the server restarts; you must run bdaswapon to restore swapping. Swapping is turned off by default to improve performance and to allow high availability if a disk fails.
Use bdaswapoff instead of the Linux swapoff utility.
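On Linux, /proc/swaps lists active swap devices below a header line, so an empty listing after the header indicates swapping is off. The sketch below parses a hand-made sample file that mimics a server where swapping is disabled, rather than reading the live /proc/swaps:

```shell
# Sample mimicking /proc/swaps on a server with swapping disabled
cat > /tmp/swaps.sample <<'EOF'
Filename                                Type            Size    Used    Priority
EOF
# Skip the header; any remaining lines are active swap devices
ACTIVE=$(awk 'NR>1' /tmp/swaps.sample | wc -l)
echo "active swap devices: $ACTIVE"
```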
Turns on paging and swapping by the operating system.
Swapping is turned off by default to improve performance and the ability to recover from disk failure.
Use bdaswapon instead of the Linux swapon utility.
Updates the firmware of a particular component of a server, such as a replacement disk drive.
Updates the LSI disk firmware for the specified disk (N). Each server has 12 disks, which are numbered from 0 to 11.
Specifies the file path to the firmware. If the path is omitted, then bdaupdatefw uses the default firmware for the specified component from /opt/oracle/bda/firmware.
Displays syntax and usage information for bdaupdatefw.
Updates the Oracle ILOM firmware.
Updates the LSI disk controller firmware.
Updates the firmware for the Mellanox host channel adapter (InfiniBand card).
This utility is typically run by Oracle field engineers when installing or replacing hardware components, which may not be factory-installed with a supported firmware version. During a software installation, Mammoth copies the currently supported firmware to Oracle Big Data Appliance. The bdaupdatefw command uses those files when they are needed to update the firmware of a server component.
You can update only one firmware package in a single command. Thus, you can specify only one of the following parameters: -d, -i, -l, or -m.
Caution:
Only use the firmware provided in a Mammoth bundle. Do not attempt to install firmware downloaded from a third-party site. Doing so may result in the loss of warranty and support. See "Oracle Big Data Appliance Restrictions on Use."

This example shows the output from a command to update the Oracle ILOM firmware. To perform the update, you must execute the ipmiflash command provided in the output.
# bdaupdatefw -i
[INFO:GENERAL] No firmware file specified. Using default firmware file - /opt/oracle/bda/firmware/ILOM-3_2_0_r74388-Sun_Fire_X4270_M3.pkg
[INFO:GENERAL] Updating ILOM firmware with the firmware file /opt/oracle/bda/firmware/ILOM-3_2_0_r74388-Sun_Fire_X4270_M3.pkg
[INFO:GENERAL] Original version is: 3.1.2.12 r74388
[INFO:GENERAL]
[INFO:GENERAL] Please run the following command and enter the root password
[INFO:GENERAL] for the ILOM when requested
[INFO:GENERAL]
[INFO:GENERAL] Note that this command will shutdown the server after flashing.
[INFO:GENERAL] You will need to login to the ILOM to power on the server afterwards.
[INFO:GENERAL]
[INFO:GENERAL] ipmiflash -v -I lanplus -H 10.133.46.218 -U root write /opt/oracle/bda/firmware/ILOM-3_1_2_12_r74388-Sun_Fire_X4270_M3.pkg
[INFO:GENERAL]
Lists all InfiniBand connections in the InfiniBand network.
This example shows two Oracle Big Data Appliances and one Oracle Exadata Database Machine on the InfiniBand network:
[root@bda1node01 network]# iblinkinfo
Switch 0x002128df348ac0a0 SUN IB QDR GW switch bda1sw-ib2 10.133.43.36:
149 1[ ] ==( 4X 10.0 Gbps Active/ LinkUp)==> 130 2[ ] "SUN IB QDR GW switch bda1sw-ib2 10.133...
149 2[ ] ==( 4X 10.0 Gbps Active/ LinkUp)==> 127 1[ ] "SUN IB QDR GW switch bda1sw-ib2 10.133...
149 3[ ] ==( 4X 10.0 Gbps Active/ LinkUp)==> 111 2[ ] "SUN IB QDR GW switch bda1sw-ib2 10.133...
149 4[ ] ==( 4X 10.0 Gbps Active/ LinkUp)==> 109 1[ ] "SUN IB QDR GW switch bda1sw-ib2 10.133...
149 5[ ] ==( 4X 10.0 Gbps Active/ LinkUp)==> 143 1[ ] "bda1node02 BDA 192.168.41.20 HCA-1" ( )
149 6[ ] ==( 4X 10.0 Gbps Active/ LinkUp)==> 137 1[ ] "bda1node01 BDA 192.168.41.19 HCA-1" ( )
149 7[ ] ==( 4X 10.0 Gbps Active/ LinkUp)==> 141 1[ ] "bda1node04 BDA 192.168.41.22 HCA-1" ( )
149 8[ ] ==( 4X 10.0 Gbps Active/ LinkUp)==> 123 1[ ] "bda1node03 BDA 192.168.41.21 HCA-1" ( )
149 9[ ] ==( 4X 10.0 Gbps Active/ LinkUp)==> 151 1[ ] "bda1node06 BDA 192.168.41.24 HCA-1" ( )
149 10[ ] ==( 4X 10.0 Gbps Active/ LinkUp)==> 112 1[ ] "bda1node05 BDA 192.168.41.23 HCA-1" ( )
149 11[ ] ==( 4X 10.0 Gbps Active/ LinkUp)==> 139 1[ ] "bda1node07 BDA 192.168.41.25 HCA-1" ( )
149 12[ ] ==( Down/Disabled)==> [ ] "" ( )
149 13[ ] ==( Down/Disabled)==> [ ] "" ( )
149 14[ ] ==( 4X 10.0 Gbps Active/ LinkUp)==> 85 9[ ] "SUN DCS 36P QDR dm01sw-ib1 10.133.40.203" ( )
149 15[ ] ==( Down/Disabled)==> [ ] "" ( )
.
.
.
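When auditing saved iblinkinfo output, counting the Down/Disabled entries gives a quick picture of unused or faulty ports. The sketch below works on trimmed sample lines copied from the example above rather than a live fabric:

```shell
# Trimmed sample of iblinkinfo output copied from the example above
cat > /tmp/iblinkinfo.sample <<'EOF'
149 11[ ] ==( 4X 10.0 Gbps Active/ LinkUp)==> 139 1[ ] "bda1node07 BDA 192.168.41.25 HCA-1" ( )
149 12[ ] ==( Down/Disabled)==> [ ] "" ( )
149 13[ ] ==( Down/Disabled)==> [ ] "" ( )
EOF
grep -c 'Down/Disabled' /tmp/iblinkinfo.sample   # prints 2
```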
Displays a history of operating system upgrades.
This example shows that the appliance was imaged with version 2.3.1 with no upgrades:
$ imagehistory
Big Data Appliance Image History
IMAGE_VERSION : 2.3.1
IMAGE_CREATION_DATE : Sat Nov 2 00:28:57 UTC 2013
IMAGING_START_DATE : Mon Nov 4 07:47:40 UTC 2013
IMAGING_END_DATE : Mon Nov 4 00:42:27 PST 2013
DEPLOYMENT_VERSION : 2.3.1
DEPLOYMENT_START_DATE : Mon Nov 4 02:01:01 PST 2013
DEPLOYMENT_END_DATE : Mon Nov 4 03:18:46 PST 2013
Displays information about the Oracle Big Data Appliance operating system image currently running.
This example identifies the 2.3.1 image:
$ imageinfo
Big Data Appliance Image Info
IMAGE_CREATION_DATE : Fri Nov 01 17:14:03 PDT 2013
IMAGE_LABEL : BDA_2.3.1_LINUX.X64_RELEASE
IMAGE_VERSION : 2.3.1
LINUX_VERSION : Oracle Linux Server release 6.4
KERNEL_VERSION : 2.6.39-400.209.1.el6uek.x86_64
BDA_RPM_VERSION : bda-2.3.1-1.x86_64
OFED_VERSION : OFED-IOV-1.5.5-2.0.0088
JDK_VERSION : jdk-1.7.0_25-fcs.x86_64
HADOOP_VERSION : 2.0.0-cdh4.4.0
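Scripts often need a single field from this output, such as the image version. Because each line follows a "NAME : value" layout, awk can split on the separator. The sketch parses a hand-made sample mirroring the example above:

```shell
# Sample mirroring the imageinfo output format above
cat > /tmp/imageinfo.sample <<'EOF'
IMAGE_LABEL : BDA_2.3.1_LINUX.X64_RELEASE
IMAGE_VERSION : 2.3.1
EOF
# Split each line on " : " and print the value for IMAGE_VERSION
awk -F' : ' '$1 == "IMAGE_VERSION" {print $2}' /tmp/imageinfo.sample   # prints 2.3.1
```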
Shows the Ethernet bridge ports with active links.
This example shows three active ports (0A-ETH-1, 0A-ETH-3, and 0A-ETH-4) out of the eight available ports on switch bda1sw-ib3:
[root@bda1sw-ib3 ~]# listlinkup | grep Bridge
Bridge-0 Port 0A-ETH-1 (Bridge-0-2) up (Enabled)
Bridge-0 Port 0A-ETH-2 (Bridge-0-2) down (Enabled)
Bridge-0 Port 0A-ETH-3 (Bridge-0-1) up (Enabled)
Bridge-0 Port 0A-ETH-4 (Bridge-0-1) up (Enabled)
Bridge-1 Port 1A-ETH-1 (Bridge-1-2) down (Enabled)
Bridge-1 Port 1A-ETH-2 (Bridge-1-2) down (Enabled)
Bridge-1 Port 1A-ETH-3 (Bridge-1-1) down (Enabled)
Bridge-1 Port 1A-ETH-4 (Bridge-1-1) down (Enabled)
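To count the active ports in saved listlinkup output, filter for the "up" state. The sketch below uses sample lines copied from the example above instead of a live switch:

```shell
# Sample lines copied from the listlinkup example above
cat > /tmp/linkup.sample <<'EOF'
Bridge-0 Port 0A-ETH-1 (Bridge-0-2) up (Enabled)
Bridge-0 Port 0A-ETH-2 (Bridge-0-2) down (Enabled)
Bridge-0 Port 0A-ETH-3 (Bridge-0-1) up (Enabled)
EOF
# Match ") up (" so "up" in port names cannot cause false positives
grep -c ') up (' /tmp/linkup.sample   # prints 2
```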
Removes passwordless SSH previously established by the setup-root-ssh command.
Targets all servers in the cluster, using the list of servers in /opt/oracle/bda/cluster-hosts-infiniband.
Targets the servers specified as host1, host2, and so forth, on the command line.
Targets a user-defined set of servers listed in groupfile. You can enter either server names or IP addresses in the file, one per line.
Specifies the range of servers in a starter rack [1-6] or a starter rack and expansion kit [1-12]. This parameter is required in the 2.2.x base image when the utility is used before network configuration.
Displays Help.
Specifies the root password on the command line.
Oracle recommends that you omit this parameter. You will be prompted to enter the password, which the utility does not display on your screen.
You must know the root password to use this command.
If you do not specify the target servers, then remove-root-ssh uses all servers in the rack, as listed in /opt/oracle/bda/rack-hosts-infiniband.
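A groupfile is just a plain list of server names or IP addresses, one per line. The following sketch shows the format and how such a file is consumed line by line; the addresses are placeholders:

```shell
# Placeholder hosts; a real groupfile lists your servers, one per line
cat > /tmp/groupfile <<'EOF'
192.168.42.37
192.168.42.38
EOF
# Iterate over the listed targets
while read -r host; do
  echo "target: $host"
done < /tmp/groupfile
```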
Resets the boot order of the server to the factory defaults, as specified in the BIOS. By doing so, it clears any ILOM booting overrides.
The following example resets the boot order of the current server:
# reset-boot-order
Set Boot Device to none
Cleared ILOM boot override - Boot device: none
Found BDA v1 Hardware - setting boot order using biosconfig
Copyright (C) SUN Microsystems 2009.
BIOSconfig Utility Version 2.2.1
Build Date: Aug 24 2009
Build Time: 09:01:30
BIOSconfig Specification Version 2.4
Processing Input BIOS Data....
Success
Found USB device name : USB:02.82;01 Unigen PSA4000
.
.
.
BIOSconfig Specification Version 2.4
Processing Input BIOS Data....
Success
New BIOS boot order :
USB:02.82;01 Unigen PSA4000
RAID:Slot0.F0:(Bus 13 Dev 00)PCI RAID Adapter
PXE:IBA GE Slot 0100 v1331
PXE:IBA GE Slot 0101 v1331
PXE:IBA GE Slot 0700 v1331
PXE:IBA GE Slot 0701 v1331
Establishes passwordless SSH for the root user.
Targets all servers in the cluster, using the list of servers in /opt/oracle/bda/cluster-hosts-infiniband.
Targets the servers specified as host1, host2, and so forth, on the command line.
Targets a user-defined set of servers listed in groupfile. You can enter either server names or IP addresses in the file, one per line.
Specifies the range of servers in a starter rack [1-6] or a starter rack and expansion kit [1-12]. This parameter is required in the 2.2.x base image when the utility is used before network configuration.
Displays Help.
Specifies the root password on the command line.
Oracle recommends that you omit this parameter. You will be prompted to enter the password, which the utility does not display on your screen.
You must know the root password to use this command.
If you do not specify the target servers, then setup-root-ssh uses all servers in the rack, as listed in /opt/oracle/bda/rack-hosts-infiniband.
This example shows passwordless SSH being set up for root:
# setup-root-ssh
Enter root password: password
spawn /opt/oracle/bda/bin/dcli -c 192.168.42.37,192.168.42.38... -k
root@192.168.42.37's password:
root@192.168.42.38's password:
.
.
.
192.168.42.37: ssh key added
192.168.42.38: ssh key added
.
.
.
setup-root-ssh succeeded
Shows the device location of an inserted USB drive as it is known to the operating system, such as /dev/sdn.
Lists the VLANs configured on the switch.
Run this command after connecting as root to a Sun Network QDR InfiniBand Gateway switch.
Lists the virtual network interface cards (VNICs) created for the switch.
Run this command after connecting as root to a Sun Network QDR InfiniBand Gateway switch.
This example shows the VNICs created in a round-robin process for switch bda1sw-ib3:
# showvnics
ID STATE FLG IOA_GUID NODE IID MAC VLN PKEY GW
--- ----- --- ----------------- -------------------------------- ---- ----------------- --- ---- --------
561 UP N 0021280001CF4C23 bda1node13 BDA 192.168.41.31 0000 CE:4C:23:85:2B:0A NO ffff 0A-ETH-1
564 UP N 0021280001CF4C53 bda1node16 BDA 192.168.41.34 0000 CE:4C:53:85:2B:0D NO ffff 0A-ETH-1
567 UP N 0021280001CF4B58 bda1node01 BDA 192.168.41.19 0000 CE:4B:58:85:2A:FC NO ffff 0A-ETH-1
555 UP N 0021280001CF2A5C bda1node07 BDA 192.168.41.25 0000 CE:2A:5C:85:2B:04 NO ffff 0A-ETH-1
552 UP N 0021280001CF4C74 bda1node04 BDA 192.168.41.22 0000 CE:4C:74:85:2B:01 NO ffff 0A-ETH-1
558 UP N 0021280001CF179B bda1node10 BDA 192.168.41.28 0000 CE:17:9B:85:2B:07 NO ffff 0A-ETH-1
.
.
.