This chapter describes how to extend Oracle Exadata Database Machine.
Note:
For ease of reading, the name "Oracle Exadata Rack" is used when information refers to both Oracle Exadata Database Machine and Oracle Exadata Storage Expansion Rack.
See Also:
Oracle Exadata Database Machine System Overview for the cabling tables for each Exadata version
You can extend Oracle Exadata Database Machine as follows:
You can extend Oracle Exadata Database Machine from a fixed configuration to another fixed configuration, for example, from an eighth rack to a quarter rack, a quarter rack to a half rack, or a half rack to a full rack.
You can also extend Oracle Exadata Database Machine from a fixed or a custom configuration to another custom configuration by adding any combination of database or storage servers up to the allowed maximum. This is known as elastic configuration.
Any combination of Oracle Exadata Database Machine half racks and full racks can be cabled together.
A Sun Datacenter InfiniBand Switch 36 switch and cables must be ordered before extending Oracle Exadata Database Machine X4-2 racks.
Note:
The cable lengths shown in Multi-Rack Cabling Tables assume the racks are adjacent to each other. If the racks are not adjacent or use overhead cabling trays, then they may require longer cable lengths. Up to 100 meters is supported.
Only optical cables are supported for lengths greater than 5 meters.
Earlier Oracle Exadata Racks can be extended with later Oracle Exadata Racks.
When extending Oracle Exadata Database Machine Eighth Rack with Oracle Exadata Storage Expansion Rack, make sure that there are two separate disk groups: one disk group for the drives in the Oracle Exadata Database Machine Eighth Rack, and one disk group for the drives in Oracle Exadata Storage Expansion Rack.
Multiple Oracle Exadata Database Machines can be run as separate environments, and connect through the InfiniBand network. If you are planning to use multiple Oracle Exadata Database Machines in this manner, then note the following:
All servers on the InfiniBand network must have a unique IP address. When Oracle Exadata Database Machine is deployed, the default InfiniBand network is 192.168.10.1. You must modify the IP addresses before re-configuring the InfiniBand network. Failure to do so causes duplicate IP addresses. After modifying the network, run the verify-topology and infinicheck commands to verify the network is working properly. You need to create a file that contains the IP addresses for the Exadata Storage Servers, such as combined_cellip.ora. The following is an example of the commands:
# cd /opt/oracle.SupportTools/ibdiagtools
# ./verify-topology -t fattree
# ./infinicheck -c /tmp/combined_cellip.ora -b
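For reference, a combined_cellip.ora file is simply a list of the InfiniBand IP addresses of all Exadata Storage Servers across the racks, in the same format as cellip.ora. The following is a hypothetical example; the actual addresses depend on your deployment:
cell="192.168.10.3"
cell="192.168.10.4"
cell="192.168.10.5"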
When Oracle Exadata Database Machines run in separate clusters, do not modify the cellip.ora files. The cellip.ora file on a database server should only include the IP addresses for the Exadata Storage Servers used with that database server.
Cells with disk types different from what is already installed can be added, but the disk types cannot be mixed in the same Oracle ASM disk group. For example, if the existing disk groups all use high performance disks, and cells with high capacity disks are being added, then it is necessary to create new disk groups for the high capacity disks.
When adding the same type of disk, ensure that the grid disk sizes are exactly the same even if the new disks are larger than the existing ones. For example, if the existing disks are 3 TB, and the additional disks are 4 TB, then it is necessary to create grid disks that match the size on the 3 TB disks. A new disk group can be created using the extra 1 TB of disk space.
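As an illustration of matching grid disk sizes, you could list the grid disk size on an existing cell and then create the grid disks on the new cell with that explicit size, rather than letting the size default to the full disk. The size value below is hypothetical; use the value returned by the first command:
# cellcli -e "list griddisk attributes name, size where name like 'DATA.*'"
# cellcli -e "create griddisk all harddisk prefix=DATA, size=2208G"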
To allow one Oracle Exadata Database Machine to access the Exadata Storage Servers in another Oracle Exadata Database Machine when they are not running as a single cluster, the Exadata Storage Servers must have unique Oracle ASM disk group and failure group names on each Oracle Exadata Database Machine. For example, for two Oracle Exadata Database Machines cabled together but run as separate clusters, the following names should be unique:
Cell name
Cell disk name
Grid disk name
Oracle ASM failure group name
All equipment receives a Customer Support Identifier (CSI). Any new equipment for Oracle Exadata Database Machine has a new CSI. Contact Oracle Support Services to reconcile the new CSI with the existing Oracle Exadata Database Machine CSI. Have the original instance numbers or serial numbers available, as well as the new numbers when contacting Oracle Support Services.
The InfiniBand network can be used for external connectivity. The external connectivity ports in the Sun Datacenter InfiniBand Switch 36 switches can connect to media servers for tape backup, data loading, and client and application access. Use the available ports on the leaf switches for external connectivity. There are 12 ports per rack. The available ports are 5B, 6A, 6B, 7A, 7B, and 12A in each leaf switch. For high availability connections, connect one port to one leaf switch and the other port to the second leaf switch. The validated InfiniBand cable lengths are as follows:
Up to 5 meters passive copper 4X QDR QSFP cables
Up to 100 meters fiber optic 4X QDR QSFP cables
See Also:
Oracle Automatic Storage Management Administrator's Guide for information about renaming disk groups
Oracle Exadata Database Machine System Overview for details about elastic configuration
Before extending any rack hardware, review the safety precautions and cabling information, and collect information about the current rack in this section.
Before upgrading Oracle Exadata Database Machines, read Important Safety Information for Sun Hardware Systems included with the rack.
Note:
Contact your service representative or Oracle Advanced Customer Services to confirm that Oracle has qualified your equipment for installation and use in Oracle Exadata Database Machine. Oracle is not liable for any issues when you install or use non-qualified equipment.
See Also:
Oracle Exadata Database Machine Installation and Configuration Guide for safety guidelines
Oracle Engineered System Safety and Compliance Guide, Compliance Model No.: ESY27 for safety notices
Review the following InfiniBand cable precautions before working with the InfiniBand cables:
Fiber optic InfiniBand cables with laser transceivers must be of type Class 1.
Do not allow any copper core InfiniBand cable to bend to a radius tighter than 127 mm (5 inches). Tight bends can damage the cables internally.
Do not allow any optical InfiniBand cable to bend to a radius tighter than 85 mm (3.4 inches). Tight bends can damage the cables internally.
Do not use zip ties to bundle or support InfiniBand cables. The sharp edges of the ties can damage the cables internally. Use hook-and-loop straps.
Do not allow any InfiniBand cable to experience extreme tension. Do not pull or drag an InfiniBand cable. Pulling on an InfiniBand cable can damage the cable internally.
Unroll an InfiniBand cable for its full length.
Do not twist an InfiniBand cable more than one revolution for its entire length. Twisting an InfiniBand cable can damage the cable internally.
Do not route InfiniBand cables where they can be stepped on, or experience rolling loads. A crushing effect can damage the cable internally.
Cable paths should be as short as possible. When the length of a cable path has been calculated, select the shortest cable to satisfy the length requirement. When specifying a cable, consider the following:
Bends in the cable path increase the required length of the cable. Rarely does a cable travel in a straight line from connector to connector. Bends in the cable path are necessary, and each bend increases the total length.
Bundling increases the required length of the cables. Bundling causes one or more cables to follow a common path. However, the bend radius is different in different parts of the bundle. If the bundle is large and unorganized, and there are many bends, one cable might experience only the inner radius of bends, while another cable might experience the outer radius of bends. In this situation, the difference in the required lengths of the cables can be quite substantial.
If you are routing the InfiniBand cable under the floor, consider the height of the raised floor when calculating cable path length.
When bundling InfiniBand cables in groups, use hook-and-loop straps to keep cables organized. If possible, use color-coordinated straps to help identify cables and their routing. The InfiniBand splitter and 4X copper conductor cables are fairly thick and heavy for their length. Consider the retention strength of the hook-and-loop straps when supporting cables. Bundle as few cables as reasonably possible. If the InfiniBand cables break free of their straps and fall free, the cables might break internally when they strike the floor or from sudden changes in tension.
You can bundle the cables using many hook-and-loop straps. Oracle recommends that no more than eight cables be bundled together.
Place the hook-and-loop straps as close together as reasonably possible, for example, one strap every foot (0.3 m). If a cable breaks free from a strap, then the cable cannot fall far before it is retained by another strap.
The Sun Datacenter InfiniBand Switch 36 switch accepts InfiniBand cables from floor or underfloor delivery. Floor and underfloor delivery limits the tension in the InfiniBand cable to the weight of the cable for the rack height of the switch.
Note:
Overhead cabling details are not included in this guide. For details on overhead cabling, contact a certified service engineer.
Review the following cable management arm (CMA) guidelines before routing the cables:
Remove all required cables from the packaging, and allow cables to acclimate or reach operating temperature, if possible. The acclimation period is usually 24 hours. This improves the ability to manipulate the cables.
Label both ends of each cable using a label stock that meets the ANSI/TIA/EIA 606-A standard, if possible.
Begin the installation procedure in ascending order.
Only slide out one server at a time. Sliding out more than one server can cause cables to drop and can cause problems when sliding the servers back in.
Separate the installation by dressing cables with the least stringent bend radius requirements first. The following bend radius requirements are based on EIA/TIA 568-x standards, and may vary from the manufacturer's requirements:
CAT5e UTP: 4 x diameter of the cable or 1 inch/25.4 mm minimum bend radius
AC power cables: 4 x diameter of the cable or 1 inch/ 25.4 mm minimum bend radius
TwinAx: 5 x diameter of the cable or 1.175 inch/33 mm minimum bend radius
AC power cables: 4 x diameter of the cable or 1 inch/25.4 mm minimum bend radius
Quad Small Form-factor Pluggable (QSFP) InfiniBand cable: 6 x diameter of the cable or 2 inch/55 mm minimum bend radius
Fiber core cable: 10 x diameter of the cable or 1.22 inch/31.75 mm minimum bend radius for a 0.125 cable
Install the cables with the best longevity rate first.
The current configuration information is used to plan patching requirements, configure new IP addresses, and so on. The following information should be collected as described before extending the rack:
The Exachk report for the current rack.
Image history information using the following command:
dcli -g ~/all_group -l root "imagehistory" > imagehistory.txt
Current IP addresses defined for all Exadata Storage Servers and database servers using the following command:
dcli -g ~/all_group -l root "ifconfig" > ifconfig_all.txt
Information about the configuration of the cells, cell disks, flash logs, and IORM plans using the following commands:
dcli -g ~/cell_group -l root "cellcli -e list cell detail" > cell_detail.txt
dcli -g ~/cell_group -l root "cellcli -e list physicaldisk detail" > physicaldisk_detail.txt
dcli -g ~/cell_group -l root "cellcli -e list griddisk attributes name,offset,size,status,asmmodestatus,asmdeactivationoutcome" > griddisk.txt
dcli -g ~/cell_group -l root "cellcli -e list flashcache detail" > fc_detail.txt
dcli -g ~/cell_group -l root "cellcli -e list flashlog detail" > fl_detail.txt
dcli -g ~/cell_group -l root "cellcli -e list iormplan detail" > iorm_detail.txt
HugePages memory configuration on the database servers using the following command:
dcli -g ~/dbs_group -l root "cat /proc/meminfo | grep 'HugePages'" > hugepages.txt
InfiniBand switch information using the following command:
ibswitches > ibswitches.txt
Firmware version of the Sun Datacenter InfiniBand Switch 36 switches, using the nm2version command on each switch.
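For example, assuming switch host names such as dm01sw-ib2 and dm01sw-ib3 (placeholders for your switch names), the firmware versions could be collected from the first database server as follows:
# ssh root@dm01sw-ib2 nm2version > ibswitch_versions.txt
# ssh root@dm01sw-ib3 nm2version >> ibswitch_versions.txt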
The following network files from the first database server in the rack:
/etc/resolv.conf
/etc/ntp.conf
/etc/network
/etc/sysconfig/network-scripts/ifcfg-*
Any users, user identifiers, groups and group identifiers created for cluster-managed services that need to be created on the new servers, such as Oracle GoldenGate.
/etc/passwd
/etc/group
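For example, the relevant user and group definitions can be captured from the database servers in the same way as the other configuration files; the output file names are only suggestions:
dcli -g ~/dbs_group -l root "cat /etc/passwd" > passwd.txt
dcli -g ~/dbs_group -l root "cat /etc/group" > group.txt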
Output of current cluster status using the following command:
crsctl stat res -t > crs_stat.txt
Patch information from the Grid Infrastructure and Oracle homes using the following commands. The commands must be run as Grid Infrastructure home owner, and the Oracle home owner.
/u01/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch lsinventory -oh GRID_HOME -detail -all_nodes > opatch_grid.txt
/u01/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch lsinventory -oh ORACLE_HOME -detail -all_nodes >> opatch_oracle.txt
In the preceding commands, GRID_HOME is the path for the Grid Infrastructure home directory, and ORACLE_HOME is the path for the Oracle home directory.
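For example, assuming a Grid Infrastructure home of /u01/app/11.2.0/grid and an Oracle home of /u01/app/oracle/product/11.2.0/dbhome_1 (adjust both paths to match your environment), the commands would be similar to the following:
$ /u01/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch lsinventory -oh /u01/app/11.2.0/grid -detail -all_nodes > opatch_grid.txt
$ /u01/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch lsinventory -oh /u01/app/oracle/product/11.2.0/dbhome_1 -detail -all_nodes >> opatch_oracle.txt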
When adding servers or a rack to an existing rack, the IP addresses for the new servers are obtained using Oracle Exadata Deployment Assistant. If you are adding servers to an existing rack, then the Oracle Exadata Deployment Assistant configuration should only include the new servers. If you are adding a rack, then the new rack should use its own Oracle Exadata Deployment Assistant configuration. The exact Oracle ASM disk group configuration currently in use may not be reflected by the application. This is not an issue, because the grid disks and disk groups are configured manually. All other items, such as the Oracle home location and owner, should be defined exactly as in the existing configuration.
When adding Oracle Exadata Database Machine X4-2 database servers or later, or Oracle Exadata Storage Server X4-2L servers or later, the bonding configuration must match the existing servers in the rack. The Oracle Exadata Deployment Assistant InfiniBand configuration page has an option to select the type of bonding. Select the option for active-active bonding, or deselect the option for active-passive bonding.
The configuration file generated by the application is used by Oracle Exadata Deployment Assistant. After completing Oracle Exadata Deployment Assistant, use the checkip.sh and dbm.dat files to verify the network configuration. The only errors that should occur are from the ping command to the SCAN addresses, Cisco switch, and Sun Datacenter InfiniBand Switch 36 switches.
The files in the $GRID_HOME/rdbms/audit directory and the $GRID_HOME/log/diagnostics directory should be moved or deleted before extending a cluster. Oracle recommends moving or deleting the files a day or two before the planned extension because the operation can take a significant amount of time.
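One possible approach, shown here only as a sketch, is to archive the directories to a backup location and then clear them; the /backup destination is a placeholder, and $GRID_HOME must be set to your Grid Infrastructure home:
$ tar -czf /backup/grid_audit_$(hostname -s).tar.gz -C $GRID_HOME/rdbms audit
$ tar -czf /backup/grid_diag_$(hostname -s).tar.gz -C $GRID_HOME/log diagnostics
$ rm -rf $GRID_HOME/rdbms/audit/* $GRID_HOME/log/diagnostics/*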
The new rack or servers most likely have a later release or patch level than the current rack. In some cases, you may want to update the current rack to the later release. In other cases, you may want to stay at your current release, and choose to reimage the new rack to match the current rack. Whatever you choose to do, ensure that the existing and new servers and Sun Datacenter InfiniBand Switch 36 switches are at the same patch level. Note the following about the hardware and releases:
Tip:
Check My Oracle Support note 888828.1 for latest information on minimum releases.
When expanding Oracle Exadata Database Machine X4-2 with Sun Server X4-2 Oracle Database Servers and Oracle Exadata Storage Server X4-2L Servers, the minimum release for the servers is release 11.2.3.3.0.
When expanding with Oracle Exadata Database Machine X4-8 Full Rack, the minimum release for the servers is release 11.2.3.3.1.
When expanding Oracle Exadata Database Machine X2-2 (with X4170 and X4275 servers) or Oracle Exadata Database Machine X2-2 (with X4170 M2 and X4270 M2 servers) with Sun Server X3-2 Oracle Database Servers and Exadata Storage Server X3-2 Servers, the minimum release for servers is release 11.2.3.2.0.
When expanding Oracle Exadata Database Machine X2-2 (with X4170 and X4275 servers) with Sun Fire X4170 M2 Oracle Database Servers and Oracle Exadata Storage Server with Sun Fire X4270 M2 Servers, the minimum release for servers is release 11.2.2.2.0.
The earlier servers may need to be patched to a later release to meet the minimum. In addition, earlier database servers may use Oracle Linux release 5.3. Those servers need to be updated to the latest Oracle Linux release.
Additional patching considerations include the Grid Infrastructure and database home releases and bundle patch updates. If new patches will be applied, then Oracle recommends changing the existing servers so the new servers will inherit the releases as part of the extension procedure. This way, the number of servers being patched is lower. Any patching of the existing servers should be performed in advance so they are at the desired level when the extension work is scheduled, thereby reducing the total amount of work required during the extension.
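To confirm that the existing servers are at the desired software level before the extension is scheduled, you can, for example, compare the image versions across all servers using the group file created earlier:
dcli -g ~/all_group -l root "imageinfo -ver" > imageinfo_ver.txt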
Perform a visual check of Oracle Exadata Database Machine physical systems before extending the hardware.
Check the rack for damage.
Check the rack for loose or missing screws.
Check Oracle Exadata Database Machine for the ordered configuration.
Check that all cable connections are secure and well seated.
Check power cables.
Ensure the correct connectors have been supplied for the data center facility power source.
Check network data cables.
Check the site location tile arrangement for cable access and airflow.
Check the data center airflow into the front of Oracle Exadata Database Machine.
Perform the following tasks before adding the servers:
Unpack the Oracle Exadata Database Machine expansion kit.
Unpack all Oracle Exadata Database Machine server components from the packing cartons. The following items should be packaged with the servers:
Oracle Database servers or Exadata Storage Server
Power cord, packaged with country kit
Cable management arm with installation instructions
Rackmount kit containing rack rails and installation instructions
(Optional) Sun server documentation and media kit
Note:
If you are extending Oracle Exadata Database Machine X4-2, Oracle Exadata Database Machine X3-8 Full Rack, or Oracle Exadata Database Machine X2-2 (with X4170 and X4275 servers) half rack, then order the expansion kit that includes a Sun Datacenter InfiniBand Switch 36 switch.
Figure 1-1 shows the components in the server expansion kit.
Lay out the cables for the servers.
Unroll the cables and stretch them to remove the bends.
Apply the cable labels. Oracle recommends labeling all cables before installation.
Install the servers.
Cable the servers.
See Also:
Oracle Exadata Database Machine Maintenance Guide for information about cable labels
"Adding New Servers" for information about installing the servers
"Cabling Database Servers" and "Cabling Exadata Storage Servers" for information about cabling the servers
Oracle Exadata Database Machine can be extended from Oracle Exadata Database Machine Quarter Rack to Oracle Exadata Database Machine Half Rack, from Oracle Exadata Database Machine Half Rack to Oracle Exadata Database Machine Full Rack, and by cabling racks together.
Note:
All new equipment receives a Customer Support Identifier (CSI). Any new equipment for Oracle Exadata Database Machine has a new CSI. Contact Oracle Support Services to reconcile the new CSI with the existing Oracle Exadata Database Machine CSI. Have the original instance numbers or serial numbers available, as well as the new numbers when contacting Oracle Support Services.
Extending Oracle Exadata Database Machine X4-2 or X5-2 from an eighth rack to a quarter rack is done using software. No hardware modifications are needed to extend the rack.
However, hardware modifications may be needed for other Oracle Exadata Database Machine versions. See "For Oracle Exadata Database Machine X6-2: Adding High Capacity Disks and Flash Cards" and "For Oracle Exadata Database Machine X7-2: Upgrading Eighth Rack Systems to a Quarter Rack" for details.
This procedure can be done with no downtime or outages, other than a rolling database outage.
Note:
In the following procedures, the disk group names and sizes are examples. The values should be changed in the commands to match the actual system.
The procedures assume user equivalence exists between the root user on the first database server and all other database servers, and to the celladmin user on all storage cells.
The text files cell_group and db_group should be created to contain lists of cell host names and database server host names, respectively.
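As a minimal sketch, assuming host names of the form dm01db01, dm01db02, and dm01celadm01 through dm01celadm03 (placeholders for your host names), the group files can be created and the user equivalence verified as follows:
# cat > db_group <<EOF
dm01db01
dm01db02
EOF
# cat > cell_group <<EOF
dm01celadm01
dm01celadm02
dm01celadm03
EOF
# dcli -g db_group -l root hostname
# dcli -g cell_group -l celladmin hostname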
The following procedure describes how to review and validate the current configuration.
Log in as the root
user on the first database server.
Review the current configuration of the database servers using the following command:
# dcli -g db_group -l root 'dbmcli -e list dbserver attributes coreCount'
The following is an example of the output from the command for Oracle Exadata Database Machine X5-2 Eighth Rack:
dm01db01: 18 dm01db02: 18
Note:
The number of active cores in Oracle Exadata Database Machine X5-2 Eighth Rack database server is 18. The number of active cores in Oracle Exadata Database Machine X4-2 Eighth Rack database server is 12.
If the number of cores on a database server configured as an eighth rack differs, then contact Oracle Support Services.
Review the current configuration of the storage servers using the following command. The expected output is TRUE.
# dcli -g cell_group -l celladmin 'cellcli -e LIST CELL attributes eighthrack'
The following procedure describes how to activate the database server cores.
Log in as the root
user on the first database server.
Activate all the database server cores using the following dcli utility command on the database server group:
# dcli -g db_group -l root 'dbmcli -e \
ALTER DBSERVER pendingCoreCount = number_of_cores'
In the preceding command, number_of_cores is the total number of cores to activate. The value includes the existing core count and the additional cores to be activated. The following command shows how to activate all the cores in Oracle Exadata Database Machine X5-2 Eighth Rack:
# dcli -g db_group -l root 'dbmcli -e ALTER DBSERVER pendingCoreCount = 36'
Note:
The maximum number of total active cores for Oracle Exadata Database Machine X5-2 Eighth Rack is 36. The maximum number of total active cores for Oracle Exadata Database Machine X4-2 Eighth Rack is 24.
Restart each database server.
Note:
If this procedure is done in a rolling fashion with the database and Grid Infrastructure active, then ensure the following before restarting the database server:
All Oracle ASM grid disks are online.
No Oracle ASM rebalance operations are active. You can query the V$ASM_OPERATION view for the status of the rebalance operation.
Shut down the database and Grid Infrastructure in a controlled manner, failing over services as needed.
Verify the following items on the database server after the restart completes and before proceeding to the next server:
The database and Grid Infrastructure services are active. See "Using SRVCTL to Verify That Instances are Running" in Oracle Real Application Clusters Administration and Deployment Guide, and use the crsctl status resource -w "TARGET = ONLINE" -t command.
The number of active cores is correct. Use the dbmcli -e list dbserver attributes coreCount command to verify the number of cores.
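For example, the two checks can be run as follows on the restarted database server; run the crsctl command as the Grid Infrastructure software owner and the dbmcli command as root:
$ crsctl status resource -w "TARGET = ONLINE" -t
# dbmcli -e list dbserver attributes coreCount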
See Also:
"Changing a Disk to Offline or Online." in Oracle Exadata System Software User's Guide
"Stopping One or More Instances and Oracle RAC Databases Using SRVCTL" in Oracle Real Application Clusters Administration and Deployment Guide
"Stopping Oracle Clusterware" in Oracle Database 2 Day + Real Application Clusters Guide
Oracle Exadata Database Machine Maintenance Guide for additional information about activating a subset of cores
Oracle Exadata Database Machine Licensing Information User's Guide for information about licensing a subset of cores
Upgrade of Oracle Exadata Database Machine X6-2 Eighth Rack High Capacity systems requires hardware modification, but upgrade of X6-2 Extreme Flash systems does not.
Eighth Rack High Capacity storage servers have half the cores enabled, and half the disks and flash cards are removed. Eighth Rack Extreme Flash storage servers have half the cores and flash drives enabled.
Eighth Rack database servers have half the cores enabled.
On Oracle Exadata Database Machine X6-2 Eighth Rack systems with High Capacity disks, you can add high capacity disks and flash cards to extend the system to a Quarter Rack:
Install the six 8 TB disks in HDD slots 6 - 11.
Install the two F320 flash cards in PCIe slots 1 and 4.
Upgrade of Oracle Exadata Database Machine X7-2 Eighth Rack systems requires hardware modification. Eighth Rack database servers have one of the CPUs removed, and all of the memory for CPU1 is moved to CPU0. Storage servers have half the cores enabled, and half the disks and flash cards are removed.
On Oracle Exadata Database Machine X7-2 Eighth Rack systems, you can add CPUs, high capacity disks and flash cards to extend the system to a Quarter Rack:
On the Exadata X7 database server, install CPU1, move half of CPU0's memory to CPU1, and move the 10/25GbE PCI card to PCIe slot 1.
On Exadata High Capacity Storage Servers, install six 10 TB high capacity SAS disks in HDD6-11 and two F640 flash cards in PCIe slots 4 and 6.
On Exadata Extreme Flash Storage Servers, install four F640 flash cards in PCIe slots 2, 3, 8, and 9.
The following procedure describes how to activate the storage server cores and disks.
Log in as the root
user on the first database server.
Activate the cores on the storage server group using the following command. The command uses the dcli utility, and runs the command as the celladmin
user.
# dcli -g cell_group -l celladmin cellcli -e "alter cell eighthRack=false"
Create the cell disks using the following command:
# dcli -g cell_group -l celladmin cellcli -e "create celldisk all"
Recreate the flash log using the following commands:
# dcli -g cell_group -l celladmin cellcli -e "drop flashlog all force" # dcli -g cell_group -l celladmin cellcli -e "create flashlog all"
Expand the flash cache using the following command:
# dcli -g cell_group -l celladmin cellcli -e "alter flashcache all"
Grid disk creation must follow a specific order to ensure the proper offset.
The order of grid disk creation must follow the same sequence that was used during initial grid disk creation. For a standard deployment using Oracle Exadata Deployment Assistant, the order is DATA, RECO, and DBFS_DG. Create all DATA grid disks first, followed by the RECO grid disks, and then the DBFS_DG grid disks.
The following procedure describes how to create the grid disks:
Note:
The commands shown in this procedure use the standard deployment grid disk prefix names of DATA, RECO, and DBFS_DG. The sizes being checked are on cell disk 02. Cell disk 02 is used because the disk layout for cell disks 00 and 01 is different from the other cell disks in the server.
Check the size of the grid disks using the following commands. Each cell should return the same size for the grid disks starting with the same grid disk prefix.
# dcli -g cell_group -l celladmin cellcli -e "list griddisk attributes name, size where name like \'DATA.*_02_.*\'"
# dcli -g cell_group -l celladmin cellcli -e "list griddisk attributes name, size where name like \'RECO.*_02_.*\'"
# dcli -g cell_group -l celladmin cellcli -e "list griddisk attributes name, size where name like \'DBFS_DG.*_02_.*\'"
The sizes shown are used during grid disk creation.
Create the grid disks for the disk groups using the sizes shown in step 1. Table 1-1 shows the commands to create the grid disks based on rack type and disk group.
Table 1-1 Commands to Create Disk Groups When Extending Oracle Exadata Database Machine X4-2 Eighth Rack or Later
Rack | Commands |
---|---|
Extreme Flash Oracle Exadata Database Machine X5-2 and later |
dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DATA_FD_04_\'hostname -s\' celldisk=FD_04_\'hostname -s\',size=datasize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DATA_FD_05_\'hostname -s\' celldisk=FD_05_\'hostname -s\',size=datasize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DATA_FD_06_\'hostname -s\' celldisk=FD_06_\'hostname -s\',size=datasize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DATA_FD_07_\'hostname -s\' celldisk=FD_07_\'hostname -s\',size=datasize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ RECO_FD_04_\'hostname -s\' celldisk=FD_04_\'hostname -s\',size=recosize, \ cachingPolicy=none" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ RECO_FD_05_\'hostname -s\' celldisk=FD_05_\'hostname -s\',size=recosize, \ cachingPolicy=none" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ RECO_FD_06_\'hostname -s\' celldisk=FD_06_\'hostname -s\',size=recosize, \ cachingPolicy=none" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ RECO_FD_07_\'hostname -s\' celldisk=FD_07_\'hostname -s\',size=recosize, \ cachingPolicy=none" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DBFS_DG_FD_04_\'hostname -s\' celldisk=FD_04_\'hostname -s\',size=dbfssize, \ cachingPolicy=none" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DBFS_DG_FD_05_\'hostname -s\' celldisk=FD_05_\'hostname -s\',size=dbfssize, \ cachingPolicy=none" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DBFS_DG_FD_06_\'hostname -s\' celldisk=FD_06_\'hostname -s\',size=dbfssize, \ cachingPolicy=none" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DBFS_DG_FD_07_\'hostname -s\' celldisk=FD_07_\'hostname -s\',size=dbfssize, \ cachingPolicy=none" |
High Capacity Oracle Exadata Database Machine X5-2 or Oracle Exadata Database Machine X4-2 and later |
dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DATA_CD_06_\'hostname -s\' celldisk=CD_06_\'hostname -s\',size=datasize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DATA_CD_07_\'hostname -s\' celldisk=CD_07_\'hostname -s\',size=datasize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DATA_CD_08_\'hostname -s\' celldisk=CD_08_\'hostname -s\',size=datasize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DATA_CD_09_\'hostname -s\' celldisk=CD_09_\'hostname -s\',size=datasize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DATA_CD_10_\'hostname -s\' celldisk=CD_10_\'hostname -s\',size=datasize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DATA_CD_11_\'hostname -s\' celldisk=CD_11_\'hostname -s\',size=datasize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ RECO_CD_06_\'hostname -s\' celldisk=CD_06_\'hostname -s\',size=recosize, \ cachingPolicy=none" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ RECO_CD_07_\'hostname -s\' celldisk=CD_07_\'hostname -s\',size=recosize, \ cachingPolicy=none" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ RECO_CD_08_\'hostname -s\' celldisk=CD_08_\'hostname -s\',size=recosize, \ cachingPolicy=none" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ RECO_CD_09_\'hostname -s\' celldisk=CD_09_\'hostname -s\',size=recosize, \ cachingPolicy=none" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ RECO_CD_10_\'hostname -s\' celldisk=CD_10_\'hostname -s\',size=recosize, \ cachingPolicy=none" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ RECO_CD_11_\'hostname -s\' celldisk=CD_11_\'hostname -s\',size=recosize, \ cachingPolicy=none" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DBFS_DG_CD_06_\'hostname -s\' celldisk=CD_06_\'hostname -s\',size=dbfssize, \ cachingPolicy=none" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DBFS_DG_CD_07_\'hostname -s\' celldisk=CD_07_\'hostname -s\',size=dbfssize, \ cachingPolicy=none" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DBFS_DG_CD_08_\'hostname -s\' celldisk=CD_08_\'hostname -s\',size=dbfssize, \ cachingPolicy=none" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DBFS_DG_CD_09_\'hostname -s\' celldisk=CD_09_\'hostname -s\',size=dbfssize, \ cachingPolicy=none" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DBFS_DG_CD_10_\'hostname -s\' celldisk=CD_10_\'hostname -s\',size=dbfssize, \ cachingPolicy=none" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DBFS_DG_CD_11_\'hostname -s\' celldisk=CD_11_\'hostname -s\',size=dbfssize, \ cachingPolicy=none" |
The following procedure describes how to add the grid disks to Oracle ASM disk groups.
The grid disks created in "Creating Grid Disks in Eighth Rack Oracle Exadata Database Machine X4-2 or Later" must be added as Oracle ASM disks to their corresponding, existing Oracle ASM disk groups.
Validate the following:
No rebalance operation is currently running.
All Oracle ASM disks are active.
Log in to the first database server as the owner who runs the Grid Infrastructure software.
Set the environment to access the +ASM instance on the server.
Log in to the ASM instance as the sysasm
user using the following command:
$ sqlplus / as sysasm
Validate the current settings, as follows:
SQL> set lines 100
SQL> column attribute format a20
SQL> column value format a20
SQL> column diskgroup format a20
SQL> SELECT att.name attribute, upper(att.value) value, dg.name diskgroup
     FROM V$ASM_ATTRIBUTE att, V$ASM_DISKGROUP DG
     WHERE DG.group_number=att.group_number AND att.name LIKE '%appliance.mode%'
     ORDER BY att.group_number;
The output should be similar to the following:
ATTRIBUTE            VALUE                DISKGROUP
-------------------- -------------------- --------------------
appliance.mode       TRUE                 DATAC1
appliance.mode       TRUE                 DBFS_DG
appliance.mode       TRUE                 RECOC1
Disable the appliance.mode
attribute for any disk group that shows TRUE
using the following commands:
SQL> ALTER DISKGROUP data_diskgroup set attribute 'appliance.mode'='FALSE';
SQL> ALTER DISKGROUP reco_diskgroup set attribute 'appliance.mode'='FALSE';
SQL> ALTER DISKGROUP dbfs_dg_diskgroup set attribute 'appliance.mode'='FALSE';
In the preceding commands, data_diskgroup, reco_diskgroup, and dbfs_dg_diskgroup are the names of the DATA, RECO and DBFS_DG disk groups, respectively.
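For example, using the disk group names shown in the sample output above (DATAC1, RECOC1, and DBFS_DG), the statements would be:
SQL> ALTER DISKGROUP DATAC1 SET ATTRIBUTE 'appliance.mode'='FALSE';
SQL> ALTER DISKGROUP RECOC1 SET ATTRIBUTE 'appliance.mode'='FALSE';
SQL> ALTER DISKGROUP DBFS_DG SET ATTRIBUTE 'appliance.mode'='FALSE';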
Add the grid disks to the Oracle ASM disk groups. Table 1-2 shows the commands to add the grid disks based on rack type and disk group. Adding the new disks requires a rebalance of the system.
Table 1-2 Commands to Add Disk Groups When Extending Eighth Rack Oracle Exadata Database Machine X4-2 and Later
Rack | Commands |
---|---|
Extreme Flash Oracle Exadata Database Machine X5-2 and later |
SQL> ALTER DISKGROUP data_diskgroup ADD DISK 'o/*/DATA_FD_0[4-7]*' \ REBALANCE POWER 32; SQL> ALTER DISKGROUP reco_diskgroup ADD DISK 'o/*/RECO_FD_0[4-7]*' \ REBALANCE POWER 32; SQL> ALTER DISKGROUP dbfs_dg_diskgroup ADD DISK 'o/*/DBFS_DG_FD_0[4-7]*'\ REBALANCE POWER 32; |
High Capacity Oracle Exadata Database Machine X5-2 or Oracle Exadata Database Machine X4-2 and later |
SQL> ALTER DISKGROUP data_diskgroup ADD DISK 'o/*/DATA_CD_0[6-9]*',' \ o/*/DATA_CD_1[0-1]*' REBALANCE POWER 32; SQL> ALTER DISKGROUP reco_diskgroup ADD DISK 'o/*/RECO_CD_0[6-9]*',' \ o/*/RECO_CD_1[0-1]*' REBALANCE POWER 32; SQL> ALTER DISKGROUP dbfs_dg_diskgroup ADD DISK ' \ o/*/DBFS_DG_CD_0[6-9]*',' o/*/DBFS_DG_CD_1[0-1]*' REBALANCE POWER 32; |
The preceding commands return Diskgroup altered, if successful.
(Optional) Monitor the current rebalance operation using the following command:
SQL> SELECT * FROM gv$asm_operation;
Re-enable the appliance.mode attribute, if it was disabled in step 6, using the following commands:
SQL> ALTER DISKGROUP data_diskgroup set attribute 'appliance.mode'='TRUE';
SQL> ALTER DISKGROUP reco_diskgroup set attribute 'appliance.mode'='TRUE';
SQL> ALTER DISKGROUP dbfs_dg_diskgroup set attribute 'appliance.mode'='TRUE';
After adding the grid disks to the Oracle ASM disk groups, validate the configuration.
Log in as the root
user on the first database server.
Check the core count using the following command:
# dcli -g db_group -l root 'dbmcli -e list dbserver attributes coreCount'
Review the storage server configuration using the following command.
# dcli -g cell_group -l celladmin 'cellcli -e list cell attributes eighthrack'
The output should show FALSE.
Review the appliance mode for each disk group using the following commands:
SQL> set lines 100
SQL> column attribute format a20
SQL> column value format a20
SQL> column diskgroup format a20
SQL> SELECT att.name attribute, upper(att.value) value, dg.name diskgroup
     FROM V$ASM_ATTRIBUTE att, V$ASM_DISKGROUP DG
     WHERE DG.group_number = att.group_number AND att.name LIKE '%appliance.mode%'
     ORDER BY DG.group_number;
Validate the number of Oracle ASM disks using the following command:
SQL> SELECT g.name, d.failgroup, d.mode_status, count(*)
     FROM v$asm_diskgroup g, v$asm_disk d
     WHERE d.group_number=g.group_number
     GROUP BY g.name, d.failgroup, d.mode_status;

NAME                      FAILGROUP                     MODE_ST    COUNT(*)
------------------------- ----------------------------- ------- ----------
DATAC1                    EXA01CELADM01                 ONLINE          12
DATAC1                    EXA01CELADM02                 ONLINE          12
DATAC1                    EXA01CELADM03                 ONLINE          12
RECOC1                    EXA01CELADM01                 ONLINE          12
RECOC1                    EXA01CELADM02                 ONLINE          12
RECOC1                    EXA01CELADM03                 ONLINE          12
RECOC2                    EXA01CELADM01                 ONLINE          12
RECOC2                    EXA01CELADM02                 ONLINE          12
RECOC2                    EXA01CELADM03                 ONLINE          12
DBFS_DG                   EXA01CELADM01                 ONLINE          10
DBFS_DG                   EXA01CELADM02                 ONLINE          10
DBFS_DG                   EXA01CELADM03                 ONLINE          10
All two-socket systems (except eighth rack configurations) will have 12 disks per cell for any system model. Eighth rack configurations will have 6 disks per cell.
Extending Oracle Exadata Database Machine X3-2 or earlier rack from an eighth rack to a quarter rack is done using software. No hardware modifications are needed to extend the rack. This procedure can be done with no downtime or outages, other than a rolling database outage. The following procedures in this section describe how to extend an Oracle Exadata Database Machine X3-2 eighth rack to a quarter rack:
The following procedure describes how to review and validate the current configuration:
Log in as the root
user on the first database server.
Review the current configuration of the database servers using the following command:
# dcli -g db_group -l root /opt/oracle.SupportTools/resourcecontrol -show
The following is an example of the output from the command:
dm01db01: [INFO] Validated hardware and OS. Proceed.
dm01db01:
dm01db01: system_bios_version: 25010600
dm01db01: restore_status: Ok
dm01db01: config_sync_status: Ok
dm01db01: reset_to_defaults: Off
dm01db01: [SHOW] Number of cores active per socket: 4
dm01db02: [INFO] Validated hardware and OS. Proceed.
dm01db02:
dm01db02: system_bios_version: 25010600
dm01db02: restore_status: Ok
dm01db02: config_sync_status: Ok
dm01db02: reset_to_defaults: Off
dm01db02: [SHOW] Number of cores active per socket: 4
Note:
The number of active cores in Oracle Exadata Database Machine X3-2 Eighth Rack database server is 4.
If the number of cores on a database server configured as an eighth rack differs, then contact Oracle Support Services.
Ensure that restore_status and config_sync_status are shown as Ok before continuing this procedure.
Review the current configuration of the storage servers using the following command. The expected output is TRUE.
# dcli -g cell_group -l celladmin 'cellcli -e LIST CELL attributes eighthrack'
Ensure that flash disks are not used in Oracle ASM disk groups using the following command. Flash cache is dropped and recreated during this procedure:
# dcli -g cell_group -l celladmin cellcli -e "list griddisk attributes \ asmDiskgroupName,asmDiskName,diskType where diskType ='FlashDisk' \ and asmDiskgroupName !=null"
No rows should be returned by the command.
The following procedure describes how to activate the database server cores:
Log in as the root
user on the first database server.
Activate all the database server cores using the following dcli utility command on the database server group:
# dcli -g db_group -l root /opt/oracle.SupportTools/resourcecontrol \
-core number_of_cores
In the preceding command, number_of_cores is the total number of cores to activate. To activate all the cores, enter All for the number of cores.
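For example, to activate all cores on every database server in the group:
# dcli -g db_group -l root /opt/oracle.SupportTools/resourcecontrol -core All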
See Also:
Oracle Exadata Database Machine Maintenance Guide for additional information about activating a subset of cores
Oracle Exadata Database Machine Licensing Information for information about licensing a subset of cores
Restart the database servers in a rolling manner using the following command:
# reboot
Note:
Ensure that restore_status and config_sync_status are shown as Ok before activating the storage server cores and disks. Getting the status from the BIOS after restarting may take several minutes.
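The status can be rechecked with the same command that was used to review the configuration earlier:
# dcli -g db_group -l root /opt/oracle.SupportTools/resourcecontrol -show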
The following procedure describes how to activate the storage server cores and disks:
Log in as the root
user on the first database server.
Activate the cores on the storage server group using the following command. The command uses the dcli utility, and runs the command as the celladmin
user.
# dcli -g cell_group -l celladmin cellcli -e "alter cell eighthRack=false"
Create the cell disks using the following command:
# dcli -g cell_group -l celladmin cellcli -e "create celldisk all"
Recreate the flash log using the following commands:
# dcli -g cell_group -l celladmin cellcli -e "drop flashlog all force" # dcli -g cell_group -l celladmin cellcli -e "create flashlog all"
Expand the flash cache using the following command:
# dcli -g cell_group -l celladmin cellcli -e "alter flashcache all"
Grid disk creation must follow a specific order to ensure the proper offset. The order of grid disk creation must follow the same sequence that was used during initial grid disk creation. For a standard deployment using Oracle Exadata Deployment Assistant, the order is DATA, RECO, and DBFS_DG. Create all DATA grid disks first, followed by the RECO grid disks, and then the DBFS_DG grid disks.
The following procedure describes how to create the grid disks:
Note:
The commands shown in this procedure use the standard deployment grid disk prefix names of DATA, RECO, and DBFS_DG. The sizes being checked are on cell disk 02. Cell disk 02 is used because the disk layout for cell disks 00 and 01 is different from the other cell disks in the server.
Check the size of the grid disks using the following commands. Each cell should return the same size for the grid disks starting with the same grid disk prefix.
# dcli -g cell_group -l celladmin cellcli -e "list griddisk attributes name, size where name like \'DATA.*02.*\'"
# dcli -g cell_group -l celladmin cellcli -e "list griddisk attributes name, size where name like \'RECO.*02.*\'"
# dcli -g cell_group -l celladmin cellcli -e "list griddisk attributes name, size where name like \'DBFS_DG.*02.*\'"
The sizes shown are used during grid disk creation.
Create the grid disks for the disk groups using the sizes shown in step 1. Table 1-3 shows the commands to create the grid disks based on rack type and disk group.
Table 1-3 Commands to Create Disk Groups When Extending Oracle Exadata Database Machine X3-2 Eighth Rack
Rack | Commands |
---|---|
High Performance or High Capacity Oracle Exadata Database Machine X3-2 |
dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DATA_CD_06_\`hostname -s\` celldisk=CD_06_\`hostname -s\`,size=datasize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DATA_CD_07_\`hostname -s\` celldisk=CD_07_\`hostname -s\`,size=datasize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DATA_CD_08_\`hostname -s\` celldisk=CD_08_\`hostname -s\`,size=datasize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DATA_CD_09_\`hostname -s\` celldisk=CD_09_\`hostname -s\`,size=datasize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DATA_CD_10_\`hostname -s\` celldisk=CD_10_\`hostname -s\`,size=datasize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DATA_CD_11_\`hostname -s\` celldisk=CD_11_\`hostname -s\`,size=datasize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ RECO_CD_06_\`hostname -s\` celldisk=CD_06_\`hostname -s\`,size=recosize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ RECO_CD_07_\`hostname -s\` celldisk=CD_07_\`hostname -s\`,size=recosize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ RECO_CD_08_\`hostname -s\` celldisk=CD_08_\`hostname -s\`,size=recosize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ RECO_CD_09_\`hostname -s\` celldisk=CD_09_\`hostname -s\`,size=recosize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ RECO_CD_10_\`hostname -s\` celldisk=CD_10_\`hostname -s\`,size=recosize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ RECO_CD_11_\`hostname -s\` celldisk=CD_11_\`hostname -s\`,size=recosize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DBFS_DG_CD_06_\`hostname -s\` celldisk=CD_06_\`hostname -s\`,size=dbfssize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DBFS_DG_CD_07_\`hostname -s\` celldisk=CD_07_\`hostname -s\`,size=dbfssize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DBFS_DG_CD_08_\`hostname -s\` celldisk=CD_08_\`hostname -s\`,size=dbfssize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DBFS_DG_CD_09_\`hostname -s\` celldisk=CD_09_\`hostname -s\`,size=dbfssize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DBFS_DG_CD_10_\`hostname -s\` celldisk=CD_10_\`hostname -s\`,size=dbfssize" dcli -g cell_group -l celladmin "cellcli -e create griddisk \ DBFS_DG_CD_11_\`hostname -s\` celldisk=CD_11_\`hostname -s\`,size=dbfssize" |
The grid disks created in "Creating Grid Disks in Oracle Exadata Database Machine X3-2 Eighth Rack" must be added as Oracle ASM disks to their corresponding, existing Oracle ASM disk groups. The following procedure describes how to add the grid disks to Oracle ASM disk groups:
Validate the following:
No rebalance operation is currently running.
All Oracle ASM disks are active.
Log in to the first database server as the owner who runs the Grid Infrastructure software.
Set the environment to access the +ASM instance on the server.
Log in to the ASM instance as the sysasm
user using the following command:
$ sqlplus / as sysasm
Validate the current settings, as follows:
SQL> set lines 100
SQL> column attribute format a20
SQL> column value format a20
SQL> column diskgroup format a20
SQL> SELECT att.name attribute, upper(att.value) value, dg.name diskgroup
     FROM V$ASM_ATTRIBUTE att, V$ASM_DISKGROUP DG
     WHERE DG.group_number = att.group_number AND att.name LIKE '%appliance.mode%'
     ORDER BY att.group_number;
The output should be similar to the following:
ATTRIBUTE            VALUE                DISKGROUP
-------------------- -------------------- --------------------
appliance.mode       TRUE                 DATAC1
appliance.mode       TRUE                 DBFS_DG
appliance.mode       TRUE                 RECOC1
Disable the appliance.mode
attribute for any disk group that shows TRUE
using the following commands:
SQL> ALTER DISKGROUP data_diskgroup set attribute 'appliance.mode'='FALSE';
SQL> ALTER DISKGROUP reco_diskgroup set attribute 'appliance.mode'='FALSE';
SQL> ALTER DISKGROUP dbfs_dg_diskgroup set attribute 'appliance.mode'='FALSE';
In the preceding commands, data_diskgroup, reco_diskgroup, and dbfs_dg_diskgroup are the names of the DATA, RECO and DBFS_DG disk groups, respectively.
Add the grid disks to the Oracle ASM disk groups. Table 1-4 shows the commands to add the grid disks based on rack type and disk group. Adding the new disks requires a rebalance of the system.
Table 1-4 Commands to Add Disk Groups When Extending an Oracle Exadata Database Machine X3-2 Eighth Rack
Rack | Commands |
---|---|
High Capacity or High Performance Oracle Exadata Database Machine X3-2 |
SQL> ALTER DISKGROUP data_diskgroup ADD DISK 'o/*/DATA_CD_0[6-9]*',' \ o/*/DATA_CD_1[0-1]*' REBALANCE POWER 32; SQL> ALTER DISKGROUP reco_diskgroup ADD DISK 'o/*/RECO_CD_0[6-9]*',' \ o/*/RECO_CD_1[0-1]*' REBALANCE POWER 32; SQL> ALTER DISKGROUP dbfs_dg_diskgroup ADD DISK ' \ o/*/DBFS_DG_CD_0[6-9]*',' o/*/DBFS_DG_CD_1[0-1]*' REBALANCE POWER 32; |
The preceding commands return Diskgroup altered, if successful.
(Optional) Monitor the current rebalance operation using the following command:
SQL> SELECT * FROM gv$asm_operation;
Re-enable the appliance.mode attribute, if it was disabled in step 6, using the following commands:
SQL> ALTER DISKGROUP data_diskgroup set attribute 'appliance.mode'='TRUE';
SQL> ALTER DISKGROUP reco_diskgroup set attribute 'appliance.mode'='TRUE';
SQL> ALTER DISKGROUP dbfs_dg_diskgroup set attribute 'appliance.mode'='TRUE';
After adding the grid disks to the Oracle ASM disk groups, validate the configuration. The following procedure describes how to validate the configuration:
Log in as the root
user on the first database server.
Check the core count using the following command:
# dcli -g db_group -l root 'dbmcli -e list dbserver attributes coreCount'
Review the storage server configuration using the following command.
# dcli -g cell_group -l celladmin 'cellcli -e list cell attributes eighthrack'
The output should show FALSE.
Review the appliance mode for each disk group using the following commands:
SQL> set lines 100
SQL> column attribute format a20
SQL> column value format a20
SQL> column diskgroup format a20
SQL> SELECT att.name attribute, upper(att.value) value, dg.name diskgroup
     FROM V$ASM_ATTRIBUTE att, V$ASM_DISKGROUP DG
     WHERE DG.group_number = att.group_number AND att.name LIKE '%appliance.mode%'
     ORDER BY DG.group_number;
Validate the number of Oracle ASM disks using the following command:
SQL> SELECT g.name, d.failgroup, d.mode_status, count(*)
     FROM v$asm_diskgroup g, v$asm_disk d
     WHERE d.group_number=g.group_number
     GROUP BY g.name, d.failgroup, d.mode_status;
Extending Oracle Exadata Database Machine from quarter rack to half rack, or half rack to full rack consists of adding new hardware to the rack. The following sections describe how to extend Oracle Exadata Database Machine with new servers:
Note:
It is possible to extend the hardware while the machine is online, and with no downtime. However, extreme care should be taken. In addition, patch application to existing switches and servers should be done before extending the hardware.
The following procedure describes how to remove the doors on Oracle Exadata Database Machine.
Note:
For Exadata X7 systems, refer to "Remove the Doors" in Oracle Rack Cabinet 1242 User's Guide at https://docs.oracle.com/cd/E85660_01/html/E87280/gshfw.html#scrolltoc
Remove the Oracle Exadata Database Machine front and rear doors, as follows:
Unlock the front and rear doors. The key is in the shipping kit.
Open the doors.
Detach the grounding straps connected to the doors by pressing down on the tabs of the grounding strap's quick-release connectors, and pull the straps from the frame.
Lift the doors up and off their hinges.
Description of callouts in Figure 1-2:
1: Detaching the grounding cable.
2: Top rear hinge.
3: Bottom rear hinge.
4: Top front hinge.
5: Bottom front hinge.
Remove the filler panels where the servers will be installed using a No. 2 screwdriver to remove the M6 screws. The number of screws depends on the type of filler panel. Save the screws for future use.
Note:
If you are replacing the filler panels, then do not remove the Duosert cage-nuts from the RETMA (Radio Electronics Television Manufacturers Association) rail holes.
This procedure is necessary as follows:
Upgrading a rack with Sun Fire X4170 Oracle Database Servers to Oracle Exadata Database Machine Half Rack or Oracle Exadata Database Machine Full Rack.
Extending an Oracle Exadata Database Machine Quarter Rack or Oracle Exadata Database Machine Eighth Rack to another rack.
Extending an Oracle Exadata Database Machine X4-2 rack to another rack.
Note:
The steps in this procedure are specific to Oracle Exadata Database Machine. They are not the same as the steps in the Sun Datacenter InfiniBand Switch 36 manual.
Unpack the Sun Datacenter InfiniBand Switch 36 switch components from the packing cartons. The following items should be in the packing cartons:
Sun Datacenter InfiniBand Switch 36 switch
Cable bracket and rackmount kit
Cable management bracket and cover
Two rack rail assemblies
Assortment of screws and captive nuts
Sun Datacenter InfiniBand Switch 36 documentation
The service label procedure on top of the switch includes descriptions of the preceding items.
X5 racks only: Remove the trough from the rack in RU1 and put the cables aside while installing the IB switch. The trough can be discarded.
Install cage nuts in each rack rail in the appropriate holes.
Attach the brackets with cutouts to the power supply side of the switch.
Attach the C-brackets to the switch on the side of the InfiniBand ports.
Slide the switch halfway into the rack from the front. Keep the switch as far to the left side of the rack as possible while pulling the two power cords through the C-bracket on the right side.
Slide the server in rack location U2 out to the locked service position. This improves access to the rear of the switch during further assembly.
Install the slide rails from the rear of the rack into the C-brackets on the switch, pushing them up to the rack rail.
Attach an assembled cable arm bracket to the slide rail and using a No. 3 Phillips screwdriver, screw these together into the rack rail:
Install the lower screw loosely with the cable arm bracket rotated 90 degrees downward. This allows better finger access to the screw.
Rotate the cable arm bracket to the correct position.
Install the upper screw.
Tighten both screws.
If available, a long-shaft screwdriver (16-inch/400 mm) allows easier installation because the handle remains outside the rack and beyond the cabling.
Push the switch completely into the rack from the front, routing the power cords through the cutout on the rail bracket.
Secure the switch to the front rack rail with M6 16mm screws. Tighten the screws using the No. 3 Phillips screwdriver.
Install the lower part of the cable management arm across the back of the switch.
Connect the cables to the appropriate ports.
Install the upper part of the cable management arm.
Slide the server in rack location U2 back into the rack.
Install power cords to the InfiniBand switch power supply slots on the front.
Loosen the front screws to install the vented filler panel brackets. Tighten the screws, and snap on the vented filler panel in front of the switch.
See Also:
Oracle Exadata Database Machine System Overview to view the rack layout
Oracle Exadata Database Machine System Overview for information about InfiniBand networking cables
Oracle Exadata Database Machine Quarter Rack can be upgraded to Oracle Exadata Database Machine Half Rack, and Oracle Exadata Database Machine Half Rack can be upgraded to Oracle Exadata Database Machine Full Rack. The upgrade process includes adding new servers, cables, and, when upgrading to Oracle Exadata Database Machine X2-2 Full Rack, Sun Datacenter InfiniBand Switch 36 switch.
An Oracle Exadata Database Machine Quarter Rack to Oracle Exadata Database Machine Half Rack upgrade consists of installing the following:
Two Oracle Database servers
Four Exadata Storage Servers
One Sun Datacenter InfiniBand Switch 36 switch (for Oracle Exadata Database Machine X2-2 with Sun Fire X4170 M2 Oracle Database Server only)
Associated cables and hardware
An Oracle Exadata Database Machine Half Rack to Oracle Exadata Database Machine Full Rack upgrade consists of installing the following:
Four Oracle Database servers
Seven Exadata Storage Servers
Associated cables and hardware
Note:
If you are extending Oracle Exadata Database Machine X5-2, Oracle Exadata Database Machine X4-2, Oracle Exadata Database Machine X3-8 Full Rack, or Oracle Exadata Database Machine X2-2 (with X4170 and X4275 servers) half rack, then order the expansion kit that includes a Sun Datacenter InfiniBand Switch 36 switch.
The new servers need to be configured manually when extending Oracle Exadata Database Machine Quarter Rack to Oracle Exadata Database Machine Half Rack, or Oracle Exadata Database Machine Half Rack to Oracle Exadata Database Machine Full Rack. Refer to Setting Up New Servers for additional information.
Always load equipment into the rack from the bottom up, so that the rack does not become top-heavy and tip over. Extend the rack anti-tip bar to prevent the rack from tipping during equipment installation.
See Also:
Oracle Exadata Database Machine System Overview to view the rack layout
Installing the Server Into a Rack in Sun Server X4-2L Installation Guide
The following tasks describe how to add the servers and cables:
The following procedure describes the pre-installation steps:
Identify the rack unit where the server will be installed. Fill the first available unit, starting from the bottom of the rack.
Remove and discard the trough, which attaches the cable harness when no server is installed in the unit.
Remove and discard the solid filler.
The following procedure describes how to install the rack assembly:
Position a mounting bracket against the chassis so that the slide-rail lock is at the server front, and the five keyhole openings on the mounting bracket are aligned with the five locating pins on the side of the chassis.
Orient the slide-rail assembly so that the ball-bearing track is forward and locked in place.
Starting on either side of the rack, align the rear of the slide-rail assembly against the inside of the rear rack rail, and push until the assembly locks into place with an audible click.
Figure 1-4 Locking the Slide-Rail Assembly Against the Inside of the Rear Rack Rail
Align the front of the slide-rail assembly against the outside of the front rack rail, and push until the assembly locks into place and you hear the click.
WARNING:
Installing a server requires a minimum of two people or a lift because of the weight of each server. Attempting this procedure alone can result in equipment damage, personal injury, or both.
Always load equipment into the rack from the bottom up, so that the rack does not become top-heavy and tip over. Extend the rack anti-tip bar to prevent the rack from tipping during equipment installation.
The following procedure describes how to install the server:
Read the service label on the top cover of the server before installing a server into the rack.
Push the server into the slide rail assembly:
Push the slide rails into the slide rail assemblies as far as possible.
Position the server so the rear ends of the mounting brackets are aligned with the slide rail assemblies mounted in the equipment rack.
Figure 1-5 Aligning the Rear Ends of the Mounting Brackets with the Slide Rail Assemblies in the Rack
The callouts in the preceding image highlight the following:
1: Mounting bracket inserted into slide rail
2: Slide-rail release lever
Insert the mounting brackets into the slide rails, and push the server into the rack until the mounting brackets encounter the slide rail stops, approximately 30 cm (12 inches).
Simultaneously push down and hold the slide rail release levers on each mounting bracket while pushing the server into the rack.
Continue pushing until the slide rail locks on the front of the mounting brackets engage the slide rail assemblies, and you hear the click.
Cable the new server as described in "Cabling Exadata Storage Servers".
Note:
Oracle recommends that two people push the servers into the rack: one person to move the server in and out of the rack, and another person to watch the cables and CMA.
After the new database servers are installed, they need to be cabled to the existing equipment. The following procedure describes how to cable the new equipment in the rack. The images shown in the procedure are of a Sun Fire X4170 M2 Oracle Database Server.
Note:
The existing cable connections in the rack do not change.
The blue cables connect to Oracle Database servers, and the black cables connect to Exadata Storage Servers. These network cables are for the NET0 Ethernet interface port.
Attach and route the management cables on the CMA and rear panel one server at a time. Do not slide out more than one server at a time.
Start from the bottom of the rack, and work upward. Route the cables through the CMA with the dongle on the top and power cables on the bottom.
Longer hook and loop straps are needed when cabling three CAT5e cables or two TwinAx cables.
Connect the CAT5e cables, AC power cables, and USB to their respective ports on the rear of the server. Ensure the flat side of the dongle is flush against the CMA inner rail.
Figure 1-6 Cables at the Rear of the Server
Adjust the green cable management arm (CMA) brackets
Figure 1-7 Cable Management Arm (CMA) Brackets
Description of the CMA callouts in the preceding image:
Connector A
Front slide bar
Velcro straps (6)
Connector B
Connector C
Connector D
Slide-rail latching bracket (used with connector D)
Rear slide bar
Cable covers
Cable covers
Attach the CMA to the server.
Route the CAT5e and power cables through the wire clip.
Figure 1-8 Cables Routed Through the Cable Management Arm
Bend the CAT5e and power cables to enter the CMA, while adhering to the bend radius minimums.
Secure the CAT5e and power cables under the cable clasps.
Figure 1-9 Cables Secured under the Cable Clasps
Route the cables through the CMA, and secure them with hook and loop straps at equal intervals.
Figure 1-10 Cables Secured with Hook and Loop Straps at Regular Intervals
Connect the InfiniBand or TwinAx cables with the initial bend resting on the CMA. The TwinAx cables are for client access to the database servers.
Figure 1-11 InfiniBand or TwinAx Cables Positioned on the CMA
Secure the InfiniBand or TwinAx cables with hook and loop straps at equal intervals.
Figure 1-12 InfiniBand or TwinAx Cables Secured with Hook and Loop Straps at Regular Intervals
Route the fiber core cables.
Rest the InfiniBand cables over the green clasp on the CMA.
Attach the red ILOM cables to the database server.
Attach the network cables to the Oracle Database server.
Attach the InfiniBand cables from Oracle Database server to the Sun Datacenter InfiniBand Switch 36 switches.
Connect the orange Ethernet cable to the KVM switch.
Connect the red and blue Ethernet cables to the Cisco switch.
Verify operation of the slide rails and CMA for each server, as follows:
Note:
Oracle recommends that two people do this step: one person to move the server in and out of the rack, and another person to observe the cables and CMA.
Slowly pull the server out of the rack until the slide rails reach their stops.
Inspect the attached cables for any binding or kinks.
Verify the CMA extends fully from the slide rails.
Push the server back into the rack, as follows:
Release the two sets of slide rail stops.
The first set of stops consists of levers located on the inside of each slide rail, just behind the back panel of the server. The levers are labeled PUSH. Push in both levers simultaneously, and slide the server into the rack. The server slides approximately 46 cm (18 inches) and stops.
Verify the cables and CMA retract without binding.
The second set of stops consists of the slide rail release buttons located near the front of each mounting bracket. Simultaneously push or pull both slide rail release buttons, and push the server completely into the rack until both slide rails engage.
Dress the cables, and then tie off the cables with the straps. Oracle recommends that you dress the InfiniBand cables in bundles of eight or fewer.
Slide each server out and back fully to check cable travel and to ensure that the cables are not binding or catching.
Repeat the procedure for the rest of the servers.
Connect the power cables to the power distribution units (PDUs). Ensure the breaker switches are in the OFF position before connecting the power cables. Do not plug the power cables into the facility receptacles at this time.
See Also:
Oracle Exadata Database Machine System Overview for cabling tables
"Reviewing the Cable Management Arm Guidelines" for the bend radius minimums
After the new Exadata Storage Servers are installed, you need to connect them to the existing equipment.
The following procedure describes how to cable the new equipment in the rack.
Note:
The existing cable connections in the rack do not change.
The blue cables connect to Oracle Database servers, and the black cables connect to Exadata Storage Servers. These network cables are for the NET0 Ethernet interface port.
Attach and route the management cables on the CMA and rear panel one server at a time. Do not slide out more than one server at a time.
Start from the bottom of the rack, and work upward.
Longer hook and loop straps are needed when cabling three CAT5e cables or two TwinAx cables.
Attach a CMA to the server.
Insert the cables into their ports through the hook and loop straps, then route the cables into the CMA in this order:
Power
Ethernet
InfiniBand
Figure 1-13 Rear of the Server Showing Power and Network Cables
Route the cables through the CMA and secure them with hook and loop straps on both sides of each bend in the CMA.
Figure 1-14 Cables Routed Through the CMA and Secured with Hook and Loop Straps
Close the crossbar covers to secure the cables in the straightaway.
Verify operation of the slide rails and the CMA for each server:
Note:
Oracle recommends that two people do this step: one person to move the server in and out of the rack, and another person to watch the cables and the CMA.
Slowly pull the server out of the rack until the slide rails reach their stops.
Inspect the attached cables for any binding or kinks.
Verify that the CMA extends fully from the slide rails.
Push the server back into the rack:
Release the two sets of slide rail stops.
Locate the levers on the inside of each slide rail, just behind the back panel of the server. They are labeled PUSH.
Simultaneously push in both levers and slide the server into the rack, until it stops after approximately 46 cm (18 inches).
Verify that the cables and CMA retract without binding.
Locate the slide rail release buttons near the front of each mounting bracket.
Simultaneously push in both slide rail release buttons and slide the server completely into the rack, until both slide rails engage.
Dress the cables, and then tie off the cables with the straps. Oracle recommends that you dress the InfiniBand cables in bundles of eight or fewer.
Slide each server out and back fully to ensure that the cables are not binding or catching.
Repeat the procedure for all servers.
Connect the power cables to the power distribution units (PDUs). Ensure the breaker switches are in the OFF position before connecting the power cables. Do not plug the power cables into the facility receptacles now.
See Also:
Oracle Exadata Database Machine System Overview for the cabling tables for your system
The following procedure describes how to close the rack after installing new equipment.
Replace the rack front and rear doors as follows:
Retrieve the doors, and place them carefully on the door hinges.
Connect the front and rear door grounding strap to the frame.
Close the doors.
(Optional) Lock the doors. The keys are in the shipping kit.
(Optional) Replace the side panels, if they were removed for the upgrade, as follows:
Lift each side panel up and onto the side of the rack. The top of the rack should support the weight of the side panel. Ensure the panel fasteners line up with the grooves in the rack frame.
Turn each side panel fastener one-quarter turn clockwise using the side panel removal tool. Turn the fasteners next to the panel lock clockwise. There are 10 fasteners per side panel.
(Optional) Lock each side panel. The key is in the shipping kit. The locks are located on the bottom, center of the side panels.
Connect the grounding straps to the side panels.
After closing the rack, proceed to "Configuring the New Hardware" to configure the new hardware.
Extending Oracle Exadata Database Machine by adding another rack consists of cabling and configuring the racks together. Racks can be cabled together with no downtime. During the cabling procedure, the following should be noted:
There is some performance degradation while cabling the racks together. This degradation results from reduced network bandwidth, and the data retransmission due to packet loss when a cable is unplugged.
The environment is not a high-availability environment because one leaf switch will need to be off. All traffic goes through the remaining leaf switch.
Only the existing rack is operational, and any new rack that is added is powered down.
The software running on the systems cannot have problems related to InfiniBand restarts.
It is assumed that Oracle Exadata Database Machine Half Racks have three InfiniBand switches already installed.
The new racks have been configured with the appropriate IP addresses to be migrated into the expanded system prior to any cabling, and there are no duplicate IP addresses.
The existing spine switch is set to priority 10 during the cabling procedure. This setting gives the spine switch a higher priority than any other switch in the fabric, and is the first to take the Subnet Manager Master role whenever a new Subnet Manager Master is being set during the cabling procedure.
The following sections describe how to extend Oracle Exadata Database Machine with another rack:
The following procedure describes how to cable two racks together. This procedure assumes that the racks are adjacent to each other. In the procedure, the existing rack is R1, and the new rack is R2.
Set the priority of the current, active Subnet Manager Master to 10 on the spine switch, as follows:
Log in to any InfiniBand switch on the active system.
Use the getmaster command to determine that the Subnet Manager Master is running on the spine switch. If it is not, then follow the procedure in Oracle Exadata Database Machine Installation and Configuration Guide.
Log in to the spine switch.
Use the disablesm command to stop Subnet Manager.
Use the setsmpriority 10 command to set the priority to 10.
Use the enablesm command to restart Subnet Manager.
Repeat step 1.b to ensure the Subnet Manager Master is running on the spine switch.
Ensure the new rack is near the existing rack. The InfiniBand cables must be able to reach the servers in each rack.
Completely shut down the new rack (R2).
Cable the two leaf switches R2 IB2 and R2 IB3 in the new rack according to Two-Rack Cabling. Note that you must first remove the seven existing inter-switch connections between the leaf switches, as well as the two connections between the leaf switches and the spine switch, in the new rack R2 only, not in the existing rack R1.
Verify both InfiniBand interfaces are up on all database nodes and storage cells by running the ibstat command on each node and confirming that both interfaces are up.
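For example, assuming the standard group files from deployment are available on a database server in the existing rack and SSH equivalence for the root user is in place (the all_group file here is assumed to list every database server and storage cell), the check can be run across all nodes at once:
# dcli -g all_group -l root "ibstat | grep -E 'State|Rate'"
Both ports on every node should report State: Active.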
Power off leaf switch R1 IB2. This causes all the database servers and Exadata Storage Servers to fail over their InfiniBand traffic to R1 IB3.
Disconnect all seven inter-switch links between R1 IB2 and R1 IB3, as well as the one connection between R1 IB2 and the spine switch R1 IB1.
Cable leaf switch R1 IB2 according to Two-Rack Cabling.
Power on leaf switch R1 IB2.
Wait for three minutes for R1 IB2 to become completely operational.
To check the switch, log in to the switch and run the ibswitches command. The output should show three switches: R1 IB1, R1 IB2, and R1 IB3.
Verify both InfiniBand interfaces are up on all database nodes and storage cells by running the ibstat command on each node and confirming that both interfaces are up.
Power off leaf switch R1 IB3. This causes all the database servers and Exadata Storage Servers to fail over their InfiniBand traffic to R1 IB2.
Disconnect the one connection between R1 IB3 and the spine switch R1 IB1.
Cable leaf switch R1 IB3 according to Two-Rack Cabling.
Power on leaf switch R1 IB3.
Wait for three minutes for R1 IB3 to become completely operational.
To check the switch, log in to the switch and run the ibswitches command. The output should show three switches: R1 IB1, R1 IB2, and R1 IB3.
Power on all the InfiniBand switches in R2.
Wait for three minutes for the switches to become completely operational.
To check the switches, log in to each switch and run the ibswitches command. The output should show six switches: R1 IB1, R1 IB2, R1 IB3, R2 IB1, R2 IB2, and R2 IB3.
Ensure the Subnet Manager Master is running on R1 IB1 by running the getmaster command from any switch.
Power on all servers in R2.
Log in to spine switch R1 IB1, and lower its priority to 8, as follows:
Use the disablesm command to stop Subnet Manager.
Use the setsmpriority 8 command to set the priority to 8.
Use the enablesm command to restart Subnet Manager.
Ensure Subnet Manager Master is running on one of the spine switches.
After cabling the racks together, proceed to Configuring the New Hardware to configure the racks.
See Also:
Oracle Exadata Database Machine System Overview for the cabling tables for your system
The following procedure describes how to cable several racks together. This procedure assumes that the racks are adjacent to each other. In the procedure, the existing racks are R1, R2, ... Rn, the new rack is Rn+1, and the Subnet Manager Master is running on R1 IB1.
Set the priority of the current, active Subnet Manager Master to 10 on the spine switch, as follows:
Log in to any InfiniBand switch on the active system.
Use the getmaster command to determine that the Subnet Manager Master is running on the spine switch. If it is not, then follow the procedure in Oracle Exadata Database Machine Installation and Configuration Guide.
Log in to the spine switch.
Use the disablesm command to stop Subnet Manager.
Use the setsmpriority 10 command to set the priority to 10.
Use the enablesm command to restart Subnet Manager.
Repeat step 1.b to ensure the Subnet Manager Master is running on the spine switch.
Ensure the new rack is near the existing rack. The InfiniBand cables must be able to reach the servers in each rack.
Completely shut down the new rack (Rn+1).
Cable the leaf switch in the new rack according to the appropriate table in Multi-Rack Cabling Tables. For example, if rack Rn+1 is R4, then use Table 2-7.
Complete the following procedure for each of the original racks:
Power off leaf switch Rx IB2. This causes all the database servers and Exadata Storage Servers to fail over their InfiniBand traffic to Rx IB3.
Cable leaf switch Rx IB2 according to Multi-Rack Cabling Tables.
Power on leaf switch Rx IB2.
Wait for three minutes for Rx IB2 to become completely operational.
To check the switch, log in to the switch and run the ibswitches command. The output should show n*3 switches for IB1, IB2, and IB3 in racks R1, R2, ... Rn.
Power off leaf switch Rx IB3. This causes all the database servers and Exadata Storage Servers to fail over their InfiniBand traffic to Rx IB2.
Cable leaf switch Rx IB3 according to Multi-Rack Cabling Tables.
Power on leaf switch Rx IB3.
Wait for three minutes for Rx IB3 to become completely operational.
To check the switch, log in to the switch and run the ibswitches command. The output should show n*3 switches for IB1, IB2, and IB3 in racks R1, R2, ... Rn.
All racks should now be rewired according to Multi-Rack Cabling Tables.
Power on all the InfiniBand switches in Rn+1.
Wait for three minutes for the switches to become completely operational.
To check the switches, log in to each switch and run the ibswitches command. The output should show (n+1)*3 switches for IB1, IB2, and IB3 in racks R1, R2, ... Rn+1.
Ensure the Subnet Manager Master is running on R1 IB1 by running the getmaster command from any switch.
Power on all servers in Rn+1.
Log in to spine switch R1 IB1, and lower its priority to 8, as follows:
Use the disablesm command to stop Subnet Manager.
Use the setsmpriority 8 command to set the priority to 8.
Use the enablesm command to restart Subnet Manager.
Ensure the Subnet Manager Master is running on one of the spine switches using the getmaster command from any switch.
Ensure Subnet Manager is running on every spine switch using the following command from any switch:
ibdiagnet -r
Each spine switch should show as running in the Summary Fabric SM-state-priority section of the output. If a spine switch is not running, then log in to the switch and enable Subnet Manager using the enablesm command.
If there are now four or more racks, then log in to the leaf switches in each rack and disable Subnet Manager using the disablesm command.
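As a sketch only: if the leaf switch host names follow a convention such as dm01sw-ib2 and dm01sw-ib3 (substitute your actual switch names), Subnet Manager can be disabled from a database server over SSH rather than logging in to each switch interactively:
# ssh root@dm01sw-ib2 disablesm
# ssh root@dm01sw-ib3 disablesm
Repeat for the leaf switches in each rack.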
This section contains the following tasks needed to configure the new hardware:
Note:
The new and existing racks must be at the same patch level for Oracle Exadata Database Servers and Oracle Exadata Storage Servers, including the operating system. Refer to "Reviewing Release and Patch Levels" for additional information.
Earlier releases of Oracle Exadata Database Machine X2-2 (with X4170 and X4275 servers) used BOND0 and BOND1 as the names for the bonded InfiniBand and bonded Ethernet client networks, respectively. In the current release, BONDIB0 and BONDETH0 are used for the bonded InfiniBand and bonded Ethernet client networks.
If you are adding new servers to an existing Oracle Exadata Database Machine X2-2 (with X4170 and X4275 servers), then ensure the database servers use the same names for bonded configuration. You can either change the new database servers to match the existing server interface names, or change the existing server interface names and Oracle Cluster Registry (OCR) configuration to match the new servers.
Do the following after changing the interface names:
Edit the entries in the /etc/sysctl.conf file on the database servers so that the entries for the InfiniBand network match. The following is an example of the file entries before editing. One set of entries must be changed to match the other set.
Found in X2 node
net.ipv4.neigh.bondib0.locktime = 0
net.ipv4.conf.bondib0.arp_ignore = 1
net.ipv4.conf.bondib0.arp_accept = 1
net.ipv4.neigh.bondib0.base_reachable_time_ms = 10000
net.ipv4.neigh.bondib0.delay_first_probe_time = 1
Found in V2 node
net.ipv4.conf.bond0.arp_accept=1
net.ipv4.neigh.bond0.base_reachable_time_ms=10000
net.ipv4.neigh.bond0.delay_first_probe_time=1
Save the changes to the sysctl.conf file.
Use the oifcfg utility to change the OCR configuration, if the new names differ from what is currently in OCR. The interface names for Oracle Exadata Storage Servers do not have to be changed.
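The following is a minimal sketch of the oifcfg commands involved, assuming the interconnect interface is being renamed from bond0 to bondib0 and that oifcfg getif reports the InfiniBand subnet as 192.168.8.0; substitute the interface names and subnet shown in your own getif output:
$ oifcfg getif
$ oifcfg setif -global bondib0/192.168.8.0:cluster_interconnect
$ oifcfg delif -global bond0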
Continue configuring the new hardware, as follows:
If the hardware is new servers, then go to Setting Up New Servers to configure the servers.
If the hardware is a new rack, then go to Setting up a New Rack to configure the rack.
See Also:
Oracle Exadata Database Machine Maintenance Guide for information about changing the InfiniBand network information
New servers need to be configured when extending Oracle Exadata Database Machine Quarter Rack or Oracle Exadata Database Machine Half Rack.
The new servers do not have any configuration information, and you cannot use Oracle Enterprise Manager Cloud Control to configure them. The servers are configured using the Oracle Exadata Deployment Assistant (OEDA) or manually.
Configuring Servers Using OEDA
Note:
In order to configure the servers with OEDA, the new server information must be entered in OEDA, and configuration files generated.
Download the latest release of OEDA listed in My Oracle Support note 888828.1.
Enter the new server information in OEDA. Do not include information for the existing rack.
Note:
When extending an existing rack that has database servers earlier than Oracle Exadata Database Machine X4-2, be sure to deselect the active bonding option for the InfiniBand network so the new database servers are configured with active-passive bonded interfaces.
When extending an existing Oracle Exadata Database Machine X4-2 or later with active-active bonding, select the active bonding option to configure the new database servers for active-active bonding.
Generate the configuration files.
Prepare the servers as follows, starting with the first database server of the new servers:
Configure the servers as described in Preparing the Servers in Oracle Exadata System Software User's Guide.
Note:
OEDA checks the performance level of Oracle Exadata Storage Servers, so it is not necessary to check them using the CellCLI CALIBRATE command at this time.
Create the cell disks and grid disks as described in Configuring Cells, Cell Disks, and Grid Disks with CellCLI in Oracle Exadata System Software User's Guide.
Create the flash cache and flash log as described in Creating Flash Cache and Flash Grid Disks in Oracle Exadata System Software User's Guide.
Note:
When creating the flash cache, enable write back flash cache.
Ensure the InfiniBand and bonded client Ethernet interface names are the same on the new database servers as on the existing database servers.
When using the same earlier-style bonding names, such as BOND0, for the new database servers, update the /opt/oracle.cellos/cell.conf file to reflect the correct bond names.
Note:
If the existing servers use BONDIB0 as the InfiniBand bonding name, then this step can be skipped.
Install OEDA on the first new database server.
See Also:
My Oracle Support note 888828.1 for information about OEDA
Copy the configuration files to the first database server of the new servers, in the /opt/oracle.SupportTools/onecommand directory. This is the information completed in step 2.
Run OEDA up to, but not including, the CreateGridDisk step, and then run the SetupCellEmailAlerts step and the Oracle Auto Service Request (ASR) configuration steps.
Note:
The OEDA ValidateEnv step may display an error message about missing files, pXX.zip. This is expected behavior because the files are not used for this procedure. You can ignore the error message.
When using capacity-on-demand, OEDA has the SetUpCapacityOnDemand step. This step uses the resourcecontrol command to set up the cores correctly.
Configure the storage servers, cell disks and grid disks as described in Configuring Cells, Cell Disks and Grid Disks with CellCLI in Oracle Exadata System Software User's Guide.
Note:
Use the data collected from the existing system, as described in Obtaining Current Configuration Information, to determine the grid disk names and sizes.
Run reclaimdisks.sh on each database server.
The /opt/oracle.SupportTools/reclaimdisks.sh -free -reclaim command reclaims disk space reserved for the deployment type that was not selected. The command takes less than five minutes. Systems are imaged with disks configured with RAID5; a RAID rebuild is no longer part of the reclaimdisks.sh process.
Do not skip this step. Skipping this step results in unused space that can no longer be reclaimed by reclaimdisks.sh.
Verify the time is the same on the new servers as on the existing servers. This check is performed for the storage servers and database servers.
Ensure the NTP settings are the same on the new servers as on the existing servers. This check is performed for storage servers and database servers.
Configure HugePages on the new servers to match the existing servers.
Ensure the values in the /etc/security/limits.conf file for the new database servers match the existing database servers (a quick comparison is sketched after this procedure).
Go to Setting User Equivalence to continue the hardware configuration.
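As referenced in the limits.conf step above, a minimal way to compare those settings across the old and new database servers, assuming the dbs_group file already lists all database servers and root SSH equivalence is configured, is to checksum the file everywhere and confirm the output matches:
# dcli -g dbs_group -l root "md5sum /etc/security/limits.conf"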
Configuring Servers Manually
Prepare the servers using the procedure described in Preparing the Servers in Oracle Exadata System Software User's Guide.
Ensure the InfiniBand and bonded client Ethernet interface names are the same on the new database servers as on the existing database servers.
Configure the storage servers, cell disks and grid disks as described in Configuring Cells, Cell Disks and Grid Disks with CellCLI in Oracle Exadata System Software User's Guide.
Configure the database servers as described in Setting Up Configuration Files for a Database Server Host in Oracle Exadata System Software User's Guide.
Run reclaimdisks.sh on each database server.
The /opt/oracle.SupportTools/reclaimdisks.sh -free -reclaim command reclaims disk space reserved for the deployment type that was not selected. The command takes less than five minutes. Systems are imaged with disks configured with RAID5; a RAID rebuild is no longer part of the reclaimdisks.sh process.
Do not skip this step. Skipping this step results in unused space that can no longer be reclaimed by reclaimdisks.sh.
Verify the time is the same on the new servers as on the existing servers. This check is performed for the storage servers and database servers.
Ensure the NTP settings are the same on the new servers as on the existing servers. This check is performed for the storage servers and database servers.
Configure HugePages on the new servers to match the existing servers.
Go to Setting User Equivalence to continue the hardware configuration.
A new rack is configured at the factory. However, it is necessary to set up the network and configuration files for use with the existing rack.
Check the storage servers as described in Checking Exadata Storage Servers in Oracle Exadata Database Machine Installation and Configuration Guide.
Check the database servers as described in Checking Oracle Database Servers in Oracle Exadata Database Machine Installation and Configuration Guide.
Perform the checks as described in Performing Additional Checks and Configuration in Oracle Exadata Database Machine Installation and Configuration Guide.
Verify the InfiniBand network as described in Verifying the InfiniBand Network in Oracle Exadata Database Machine Installation and Configuration Guide.
Perform initial configuration as described in Performing Initial Elastic Configuration of Oracle Exadata Database Machine in Oracle Exadata Database Machine Installation and Configuration Guide.
Reclaim disk space as described in Configuring Oracle Database and Oracle ASM Instances for Oracle Exadata Database Machine Manually in Oracle Exadata Database Machine Installation and Configuration Guide.
Verify the time is the same on the new servers as on the existing servers. This check is performed for storage servers and database servers.
Ensure the NTP settings are the same on the new servers as on the existing servers. This check is performed for storage servers and database servers.
Configure HugePages on the new servers to match the existing servers.
Ensure the InfiniBand and bonded client Ethernet interface names on the new database servers match the existing database servers.
Configure the rack as described in Loading the Configuration Information and Installing the Software in Oracle Exadata Database Machine Installation and Configuration Guide. You can use either the Oracle Exadata Deployment Assistant (OEDA) or Oracle Enterprise Manager Cloud Control to configure the rack.
Note:
Only run OEDA up to the CreateGridDisks step, then configure storage servers as described in Configuring Cells, Cell Disks, and Grid Disks with CellCLI in Oracle Exadata System Software User's Guide.
When adding servers with 3 TB High Capacity (HC) disks to existing servers with 2 TB disks, it is recommended to follow the procedure in My Oracle Support note 1476336.1 to properly define the grid disks and disk groups. At this point of setting up the rack, it is only necessary to define the grid disks. The disk groups are created after the cluster has been extended onto the new nodes.
If the existing storage servers have High Performance (HP) disks and you are adding storage servers with High Capacity (HC) disks, or the existing storage servers have HC disks and you are adding storage servers with HP disks, then you must place the new disks in new disk groups. It is not permitted to mix HP and HC disks within the same disk group.
Go to Setting User Equivalence to continue the hardware configuration.
User equivalence can be configured to include all servers once the servers are online.
This procedure must be done before running the post-cabling utilities.
Log in to each new server manually using SSH to verify that each server can accept logins and that the passwords are correct.
Modify the dbs_group and cell_group files on all servers to include all servers.
Create the new directories on the first existing database server.
# mkdir /root/new_group_files
# mkdir /root/old_group_files
# mkdir /root/group_files
Copy the group files for the new servers to the /root/new_group_files directory.
Copy the group files for the existing servers to the /root/old_group_files directory.
Copy the group files for the existing servers to the /root/group_files directory.
Update the group files to include the existing and new servers.
cat /root/new_group_files/dbs_group >> /root/group_files/dbs_group
cat /root/new_group_files/cell_group >> /root/group_files/cell_group
cat /root/new_group_files/all_group >> /root/group_files/all_group
cat /root/new_group_files/dbs_ib_group >> /root/group_files/dbs_ib_group
cat /root/new_group_files/cell_ib_group >> /root/group_files/cell_ib_group
cat /root/new_group_files/all_ib_group >> /root/group_files/all_ib_group
Make the updated group files the default group files. The updated group files contain the existing and new servers.
cp /root/group_files/* /root
cp /root/group_files/* /opt/oracle.SupportTools/onecommand
Put a copy of the updated group files in the root user, oracle user, and Oracle Grid Infrastructure user home directories, and ensure that the files are owned by the respective users.
Modify the /etc/hosts file on the existing and new database servers to include the existing InfiniBand IP addresses for the database servers and storage servers. The existing and new priv_ib_hosts files can be used for this step.
Note:
Do not copy the /etc/hosts file from one server to the other servers. Edit the file on each server.
Run the setssh-Linux.sh script as the root user on one of the existing database servers to configure user equivalence for all servers, using the following command. Oracle recommends using the first database server.
# /opt/oracle.SupportTools/onecommand/setssh-Linux.sh -s -c N -h \
  /path_to_file/all_group -n N
In the preceding command, path_to_file is the directory path for the all_group file containing the names of the existing and new servers.
Note:
For Oracle Exadata Database Machine X2-2 (with X4170 and X4275 servers) systems, use the setssh.sh command to configure user equivalence.
The command line options for the setssh.sh command differ from those of the setssh-Linux.sh command. Run setssh.sh without parameters to see the proper syntax.
Add the known hosts using InfiniBand. This step requires that all database servers are accessible by way of their InfiniBand interfaces.
# /opt/oracle.SupportTools/onecommand/setssh-Linux.sh -s -c N -h \
  /path_to_file/all_ib_group -n N -p password
Verify equivalence is configured.
# dcli -g all_group -l root date
# dcli -g all_ib_group -l root date
Run the setssh-Linux.sh script as the oracle user on one of the existing database servers to configure user equivalence for all servers, using the following command. Oracle recommends using the first database server. If there are separate owners for the Oracle Grid Infrastructure software, then run a similar command for each owner.
$ /opt/oracle.SupportTools/onecommand/setssh-Linux.sh -s -c N -h \
  /path_to_file/dbs_group -n N
In the preceding command, path_to_file is the directory path for the dbs_group file. The file contains the names of the existing and new servers.
Note:
For Oracle Exadata Database Machine X2-2 (with X4170 and X4275 servers) systems, use the setssh.sh command to configure user equivalence.
It may be necessary to temporarily change the permissions on the setssh-Linux.sh file to 755 for this step. Change the permissions back to the original settings after completing this step.
Add the known hosts using InfiniBand. This step requires that all database servers are accessible by way of their InfiniBand interfaces.
$ /opt/oracle.SupportTools/onecommand/setssh-Linux.sh -s -c N -h \
  /root/group_files/dbs_ib_group -n N
Verify equivalence is configured.
$ dcli -g dbs_group -l oracle date
$ dcli -g dbs_ib_group -l oracle date
If there is a separate Oracle Grid Infrastructure user, then also run the preceding commands for that user, substituting the grid user name for the oracle user.
The following procedure describes how to start the cluster if it was stopped earlier for cabling an additional rack.
Note:
Oracle recommends you start one server, and let it come up fully before starting Oracle Clusterware on the rest of the servers.
It is not necessary to stop a cluster when extending Oracle Exadata Database Machine Half Rack to a Full Rack, or a Quarter Rack to a Half Rack or Full Rack.
Log in as the root user on the original cluster.
Start one server of the cluster.
# Grid_home/grid/bin/crsctl start cluster
Check the status of the server.
Grid_home/grid/bin/crsctl stat res -t
Run the preceding command until it shows that the first server has started.
Start the other servers in the cluster.
# Grid_home/grid/bin/crsctl start cluster -all
Check the status of the servers.
Grid_home/grid/bin/crsctl stat res -t
It may take several minutes for all servers to start and join the cluster.
Grid disks can be added to Oracle ASM disk groups before or after the new servers are added to the cluster. The advantage of adding the grid disks before adding the new servers is that the rebalance operation can start earlier. The advantage of adding the grid disks after adding the new servers is that the rebalance operation can be done on the new servers so less load is placed on the existing servers.
The following procedure describes how to add grid disks to existing Oracle ASM disk groups.
Note:
It is assumed in the following examples that the newly-installed storage servers have the same grid disk configuration as the existing storage servers, and that the additional grid disks will be added to existing disk groups.
The information gathered about the current configuration should be used when setting up the grid disks.
If the existing storage servers have High Performance (HP) disks and you are adding storage servers with High Capacity (HC) disks, or the existing storage servers have HC disks and you are adding storage servers with HP disks, then you must place the new disks in new disk groups. It is not permitted to mix HP and HC disks within the same disk group.
Ensure the new storage servers are running the same version of software as storage servers already in use. Run the following command on the first database server:
dcli -g dbs_group -l root "imageinfo -ver"
Note:
If the Oracle Exadata System Software on the storage servers does not match, then upgrade or patch the software to be at the same level. This could mean patching the existing servers or the new servers. Refer to Reviewing Release and Patch Levels for additional information.
Modify the /etc/oracle/cell/network-config/cellip.ora file on all database servers to have a complete list of all storage servers. This can be done by modifying the file on one database server, and then copying the file to the other database servers. The cellip.ora file should be identical on all database servers.
When adding Oracle Exadata Storage Server X4-2L servers, the cellip.ora file contains two IP addresses listed for each cell. Copy each line completely to include the two IP addresses, and merge the addresses into the cellip.ora file of the existing cluster.
The following is an example of the cellip.ora file after expanding Oracle Exadata Database Machine X3-2 Half Rack to Full Rack using Oracle Exadata Storage Server X4-2L servers:
cell="192.168.10.9"
cell="192.168.10.10"
cell="192.168.10.11"
cell="192.168.10.12"
cell="192.168.10.13"
cell="192.168.10.14"
cell="192.168.10.15"
cell="192.168.10.17;192.168.10.18"
cell="192.168.10.19;192.168.10.20"
cell="192.168.10.21;192.168.10.22"
cell="192.168.10.23;192.168.10.24"
cell="192.168.10.25;192.168.10.26"
cell="192.168.10.27;192.168.10.28"
cell="192.168.10.29;192.168.10.30"
In the preceding example, lines 1 through 7 are for the original servers, and lines 8 through 14 are for the new servers. Oracle Exadata Storage Server X4-2L servers have two IP addresses each.
Ensure the updated cellip.ora file is on all database servers. The updated file must include a complete list of all storage servers.
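One way to push the merged file to every database server and confirm it is identical everywhere is with the dcli utility; this sketch assumes the dbs_group file lists all database servers:
# dcli -g dbs_group -l root -f /etc/oracle/cell/network-config/cellip.ora \
  -d /etc/oracle/cell/network-config/
# dcli -g dbs_group -l root "md5sum /etc/oracle/cell/network-config/cellip.ora"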
Verify accessibility of all grid disks from one of the original database servers. The following command can be run as the root user or the oracle user.
$ Grid_home/grid/bin/kfod disks=all dscvgroup=true
The output from the command shows grid disks from the original and new storage servers.
Add the grid disks from the new storage servers to the existing disk groups using commands similar to the following. You cannot have both high performance disks and high capacity disks in the same disk group.
$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/oracle
$ sqlplus / as sysasm
SQL> ALTER DISKGROUP data ADD DISK
2> 'o/*/DATA*dm02*'
3> rebalance power 11;
In the preceding commands, a Full Rack was added to an existing Oracle Exadata Rack. The prefix for the new rack is dm02, and the grid disk prefix is DATA.
The following is an example in which an Oracle Exadata Database Machine Half Rack was upgraded to a Full Rack. The cell host names in the original system were dm01cel01 through dm01cel07. The new cell host names are dm01cel08 through dm01cel14.
$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/oracle
$ sqlplus / as sysasm
SQL> ALTER DISKGROUP data ADD DISK
2> 'o/*/DATA*dm01cel08*',
3> 'o/*/DATA*dm01cel09*',
4> 'o/*/DATA*dm01cel10*',
5> 'o/*/DATA*dm01cel11*',
6> 'o/*/DATA*dm01cel12*',
7> 'o/*/DATA*dm01cel13*',
8> 'o/*/DATA*dm01cel14*'
9> rebalance power 11;
Note:
If your system is running Oracle Database 11g release 2 (11.2.0.1), then Oracle recommends a power limit of 11 so that the rebalance completes as quickly as possible. If your system is running Oracle Database 11g release 2 (11.2.0.2), then Oracle recommends a power limit of 32. The power limit does have an impact on any applications that are running during the rebalance.
Ensure the ALTER DISKGROUP commands are run from different Oracle ASM instances so that the rebalance operations for multiple disk groups can run in parallel.
Add disks to all disk groups, including SYSTEMDG or DBFS_DG.
When adding servers with 3 TB High Capacity (HC) disks to existing servers with 2 TB disks, it is recommended to follow the procedure in My Oracle Support note 1476336.1 to properly define the grid disks and disk groups. At this point of setting up the rack, the new grid disks should already be defined, but they still need to be placed into disk groups. Refer to the steps in My Oracle Support note 1476336.1.
If the existing storage servers have High Performance (HP) disks and you are adding storage servers with High Capacity (HC) disks, or the existing storage servers have HC disks and you are adding storage servers with HP disks, then you must place the new disks in new disk groups. It is not permitted to mix HP and HC disks within the same disk group.
Monitor the status of the rebalance operation using a query similar to the following from any Oracle ASM instance:
SQL> SELECT * FROM GV$ASM_OPERATION WHERE STATE = 'RUN';
The remaining tasks can be done while the rebalance is in progress.
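If you want a completion estimate rather than the full row, the EST_MINUTES column of GV$ASM_OPERATION can be selected directly; for example:
SQL> SELECT INST_ID, OPERATION, STATE, POWER, SOFAR, EST_WORK, EST_MINUTES
2> FROM GV$ASM_OPERATION WHERE STATE = 'RUN';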
See Also:
Obtaining Current Configuration Information for information about the existing grid disks.
Setting Up New Servers for information about configuring the grid disks.
Oracle Automatic Storage Management Administrator's Guide for information about the ASM_POWER_LIMIT parameter.
This procedure describes how to add servers to a cluster.
For adding nodes to an Oracle VM cluster, refer to Expanding an Oracle VM RAC Cluster on Exadata in Oracle Exadata Database Machine Maintenance Guide.
Caution:
If Oracle Clusterware manages additional services that are not yet installed on the new nodes, such as Oracle GoldenGate, then note the following:
It may be necessary to stop those services on the existing node before running the addNode.sh script.
It is necessary to create any users and groups on the new database servers that run these additional services.
It may be necessary to disable those services from auto-start so that Oracle Clusterware does not try to start the services on the new nodes.
Note:
To prevent problems with transferring files between existing and new nodes, you need to set up SSH equivalence. See Step 4 in Expanding an Oracle VM Oracle RAC Cluster on Exadata in Oracle Exadata Database Machine Maintenance Guide for details.
Ensure the /etc/oracle/cell/network-config/*.ora files are correct and consistent on all database servers. The cellip.ora file on every database server should include the original and the new storage servers.
Ensure the ORACLE_BASE and diag destination directories have been created on the Oracle Grid Infrastructure destination home.
The following is an example for Oracle Grid Infrastructure 11g:
# dcli -g /root/new_group_files/dbs_group -l root mkdir -p \
  /u01/app/11.2.0/grid /u01/app/oraInventory /u01/app/grid/diag
# dcli -g /root/new_group_files/dbs_group -l root chown -R grid:oinstall \
  /u01/app/11.2.0 /u01/app/oraInventory /u01/app/grid
# dcli -g /root/new_group_files/dbs_group -l root chmod -R 770 \
  /u01/app/oraInventory
# dcli -g /root/new_group_files/dbs_group -l root chmod -R 755 \
  /u01/app/11.2.0 /u01/app/11.2.0/grid
The following is an example for Oracle Grid Infrastructure 12c:
# cd /
# rm -rf /u01/app/*
# mkdir -p /u01/app/12.1.0.2/grid
# mkdir -p /u01/app/oracle/product/12.1.0.2/dbhome_1
# chown -R oracle:oinstall /u01
Ensure the inventory directory and Grid home directories have been created and have the proper permissions. The directories should be owned by the Grid user and the OINSTALL group. The inventory directory should have 770 permissions, and the Oracle Grid Infrastructure home directories should have 755 permissions.
If you are running Oracle Grid Infrastructure 12c or later:
Make sure oraInventory does not exist inside /u01/app.
Make sure /etc/oraInst.loc does not exist.
Create users and groups on the new nodes with the same user identifiers and group identifiers as on the existing nodes.
Note:
If Oracle Exadata Deployment Assistant (OEDA) was used earlier, then these users and groups should have been created. Check that they do exist, and have the correct UID and GID values.Log in as the Grid user on an existing host.
Verify the Oracle Cluster Registry (OCR) backup exists.
ocrconfig -showbackup
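If no recent backup is listed, a manual backup can be taken as the root user before proceeding; this is a precaution rather than a required part of the procedure:
# ocrconfig -manualbackup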
Verify that the additional database servers are ready to be added to the cluster using commands similar to following:
$ cluvfy stage -post hwos -n \
dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08 \
-verbose
$ cluvfy comp peer -refnode dm01db01 -n \
dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08 \
-orainv oinstall -osdba dba | grep -B 3 -A 2 mismatched
$ cluvfy stage -pre nodeadd -n \
dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08 \
-verbose -fixup -fixupdir /home/grid_owner_name/fixup.d
In the preceding commands, grid_owner_name is the name of the Oracle Grid Infrastructure software owner, dm02db01 through dm02db08 are the new database servers, and refnode is an existing database server.
Note:
The second and third commands do not display output if the commands complete correctly.
An error about a voting disk, similar to the following, may be displayed:
ERROR:
PRVF-5449 : Check of Voting Disk location "o/192.168.73.102/DATA_CD_00_dm01cel07(o/192.168.73.102/DATA_CD_00_dm01cel07)" failed on the following nodes:
Check failed on nodes: dm01db01
dm01db01:No such file or directory
…
PRVF-5431 : Oracle Cluster Voting Disk configuration check
If such an error occurs:
- If you are running Oracle Grid Infrastructure 11g, set the environment variable as follows:
$ export IGNORE_PREADDNODE_CHECKS=Y
Setting the environment variable does not prevent the error when running the cluvfy command, but it does allow the addNode.sh script to complete successfully.
- If you are running Oracle Grid Infrastructure 12c or later, use the following addnode parameters: -ignoreSysPrereqs -ignorePrereq
In Oracle Grid Infrastructure 12c and later, addnode does not use the IGNORE_PREADDNODE_CHECKS environment variable.
If a database server was installed with a certain image and subsequently patched to a later image, then some operating system libraries may be older than the version expected by the cluvfy command. This causes the cluvfy command, and possibly the addNode.sh script, to fail.
It is permissible to have an earlier version as long as the difference in versions is minor. For example, glibc-common-2.5-81.el5_8.2 versus glibc-common-2.5-49. The versions are different, but both are at version 2.5, so the difference is minor, and it is permissible for them to differ.
Set the environment variable IGNORE_PREADDNODE_CHECKS=Y before running the addNode.sh script, or use the addnode parameters -ignoreSysPrereqs -ignorePrereq with the addNode.sh script, to work around this problem.
Ensure that all directories inside the Oracle Grid Infrastructure home on the existing server have their executable bits set. Run the following commands as the root user.
find /u01/app/11.2.0/grid -type d -user root ! -perm /u+x ! \
-perm /g+x ! -perm /o+x
find /u01/app/11.2.0/grid -type d -user grid_owner_name ! -perm /u+x ! \
-perm /g+x ! -perm /o+x
In the preceding commands, grid_owner_name is the name of the Oracle Grid Infrastructure software owner, and /u01/app/11.2.0/grid is the Oracle Grid Infrastructure home directory.
If any directories are listed, then ensure the group and others permissions are +x. The Grid_home/network/admin/samples, Grid_home/crf/admin/run/crfmond, and Grid_home/crf/admin/run/crflogd directories may need the +x permissions set.
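For example, to add the missing group and other execute permissions to one of those directories (substitute your actual Grid home path):
# chmod g+x,o+x /u01/app/11.2.0/grid/network/admin/samples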
If you are running Oracle Grid Infrastructure 12c or later, run commands similar to the following:
# chmod -R u+x /u01/app/12.1.0.2/grid/gpnp/gpnp_bcp*
# chmod -R o+rx /u01/app/12.1.0.2/grid/gpnp/gpnp_bcp*
# chmod o+r /u01/app/12.1.0.2/grid/bin/oradaemonagent /u01/app/12.1.0.2/grid/srvm/admin/logging.properties
# chmod a+r /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/*O
# chmod a+r /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/*0
# chown -f gi_owner_name:dba /u01/app/12.1.0.2/grid/OPatch/ocm/bin/emocmrsp
The Grid_home/network/admin/samples directory needs the +x permission:
chmod -R a+x /u01/app/12.1.0.2/grid/network/admin/samples
Run the following command. It is assumed that the Oracle Grid Infrastructure home is owned by the Grid user.
$ dcli -g old_db_nodes -l root chown -f grid_owner_name:dba \
/u01/app/11.2.0/grid/OPatch/ocm/bin/emocmrsp
This step is needed only if you are running Oracle Grid Infrastructure 11g. In Oracle Grid Infrastructure 12c, no response file is needed because the values are specified on the command line.
Create a response file, add-cluster-nodes.rsp, as the Grid user to add the new servers, similar to the following:
RESPONSEFILE_VERSION=2.2.1.0.0
CLUSTER_NEW_NODES={dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08}
CLUSTER_NEW_VIRTUAL_HOSTNAMES={dm0201-vip,dm0202-vip,dm0203-vip,dm0204-vip,dm0205-vip,dm0206-vip,dm0207-vip,dm0208-vip}
In the preceding file, the host names dm02db01 through dm02db08 are the new nodes being added to the cluster.
Note:
The lines listing the server names should appear on one continuous line. They are wrapped in the documentation due to page limitations.
Ensure most of the files in the Grid_home/rdbms/audit and Grid_home/log/diag/* directories have been moved or deleted before extending a cluster.
Refer to My Oracle Support note 744213.1 if the installer runs out of memory. The note describes how to edit the Grid_home/oui/oraparam.ini file, and change the JRE_MEMORY_OPTIONS parameter to -Xms512m -Xmx2048m.
Add the new servers by running the addNode.sh script from an existing server as the Grid user.
If you are running Oracle Grid Infrastructure 11g:
$ cd Grid_home/oui/bin
$ ./addNode.sh -silent -responseFile /path/to/add-cluster-nodes.rsp
If you are running Oracle Grid Infrastructure 12c or later, run the addnode.sh command with the CLUSTER_NEW_NODES and CLUSTER_NEW_VIRTUAL_HOSTNAMES parameters. The syntax is:
$ ./addnode.sh -silent "CLUSTER_NEW_NODES={comma_delimited_new_nodes}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={comma_delimited_new_node_vips}"
For example:
$ cd Grid_home/addnode/
$ ./addnode.sh -silent "CLUSTER_NEW_NODES={dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,
dm02db06,dm02db07,dm02db08}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={dm02db01-vip,dm02db02-vip,
dm02db03-vip,dm02db04-vip,dm02db05-vip,dm02db06-vip,dm02db07-vip,dm02db08-vip}"
-ignoreSysPrereqs -ignorePrereq
Verify the grid disks are visible from each of the new database servers.
$ Grid_home/grid/bin/kfod disks=all dscvgroup=true
Run the orainstRoot.sh script as the root user, when prompted, using the dcli utility.
$ dcli -g new_db_nodes -l root \
  /u01/app/oraInventory/orainstRoot.sh
Disable HAIP on the new servers.
Before running the root.sh script, on each new server, set the HAIP_UNSUPPORTED environment variable to TRUE.
$ export HAIP_UNSUPPORTED=true
Run the Grid_home/root.sh script on each server sequentially. This simplifies the process, and ensures that any issues can be clearly identified and addressed.
Note:
The node identifier is set in order of the nodes where the root.sh script is run. Typically, the script is run from the lowest numbered node name to the highest.
Check the log file from the root.sh script and verify there are no problems on the server before proceeding to the next server. If there are problems, then resolve them before continuing.
Check the status of the cluster after adding the servers.
$ cluvfy stage -post nodeadd -n \
  dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08 \
  -verbose
Check that all servers have been added and have basic services running.
crsctl stat res -t
Note:
It may be necessary to mount disk groups on the new servers. The following commands must be run as the oracle user.
$ srvctl start diskgroup -g data
$ srvctl start diskgroup -g reco
If you are running Oracle Grid Infrastructure releases 11.2.0.2 and later, then perform the following steps:
Manually add the CLUSTER_INTERCONNECTS parameter to the SPFILE for each Oracle ASM instance.
ALTER SYSTEM SET cluster_interconnects = '192.168.10.x'
  SID='+ASMx' SCOPE=spfile;
Restart the cluster on each new server.
Verify the parameters were set correctly.
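A quick way to verify the setting on every instance is to query GV$PARAMETER from any Oracle ASM instance; each new instance should show its own InfiniBand address:
SQL> SELECT inst_id, value FROM gv$parameter
2> WHERE name = 'cluster_interconnects';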
Cell alerts need to be configured for the new Oracle Exadata Storage Servers.
The configuration depends on the type of installation.
When extending Oracle Exadata Database Machine Quarter Rack to Half Rack, or Half Rack to Full Rack:
Manually configure cell alerts on the new storage servers. Use the settings on the original storage servers as a guide. To view the settings on the original storage servers, use a command similar to the following:
dcli -g new_cells_nodes -l celladmin cellcli -e list cell detail
To configure the alerts on the new storage servers, use a command similar to the following:
dcli -g new_cell_nodes -l root "cellcli -e ALTER CELL \
  smtpServer=\'mailserver.example.com\', \
  smtpPort=25, \
  smtpUseSSL=false, \
  smtpFrom=\'DBM dm01\', \
  smtpFromAddr=\'storecell@example.com\', \
  smtpToAddr=\'dbm-admins@example.com\', \
  notificationMethod=\'mail,snmp\', \
  notificationPolicy=\'critical,warning,clear\', \
  snmpSubscriber=\(\(host=\'snmpserver.example.com\',port=162\)\)"
Note:
The backslash character (\) is used as an escape character for the dcli utility, and as a line continuation character in the preceding command.
When cabling racks:
Use Oracle Exadata Deployment Assistant (OEDA), run as the root user from the original rack, to set up e-mail alerts for the storage servers in the new rack. The utility includes the SetupCellEmailAlerts step to configure alerts.
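Whichever method is used, the resulting alert configuration on the new cells can be spot-checked with a command similar to the following, where new_cell_nodes is the group file for the new storage servers:
dcli -g new_cell_nodes -l root "cellcli -e list cell attributes notificationMethod,notificationPolicy,smtpToAddr"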
It is necessary to add the Oracle Database software directory ORACLE_HOME
to the database servers after the cluster modifications are complete, and all the database servers are in the cluster.
Check the Oracle_home/bin directory for files ending in zero (0), such as nmb0, that are owned by the root user and do not have oinstall or world read privileges. Use the following command to modify the file privileges:
# chmod a+r $ORACLE_HOME/bin/*0
If you are running Oracle Database release 12c or later, you also have to change permissions for files ending in uppercase O, in addition to files ending in zero.
# chmod a+r $ORACLE_HOME/bin/*O
This step is required for Oracle Database 11g only. If you are running Oracle Database 12c, you can skip this step because the directory has already been created.
Create the ORACLE_BASE
directory for the database owner, if it is different from the Oracle Grid Infrastructure software owner (Grid user) using the following commands:
# dcli -g /root/new_group_files/dbs_group -l root mkdir -p /u01/app/oracle
# dcli -g /root/new_group_files/dbs_group -l root chown oracle:oinstall \
/u01/app/oracle
Run the following command to set ownership of the emocmrsp
file in the Oracle Database $ORACLE_HOME
directory:
# dcli -g old_db_nodes -l root chown -f oracle:dba \
/u01/app/11.2.0/grid/OPatch/ocm/bin/emocmrsp
This step is required for Oracle Database 11g only. If you are running Oracle Database 12c, then you can skip this step because the values are entered on the command line.
Create a response file, add-db-nodes.rsp
, as the oracle
owner to add the new servers similar to the following:
RESPONSEFILE_VERSION=2.2.1.0.0
CLUSTER_NEW_NODES={dm02db01,dm02db02,dm02db03,dm02db04,dm02db05, \
dm02db06,dm02db07,dm02db08}
Note:
The lines listing the server names should appear on one continuous line. They are wrapped in the document due to page limitations.
Add the Oracle Database ORACLE_HOME
directory to the new servers by running the addNode.sh
script from an existing server as the database owner user.
If you are running Oracle Grid Infrastructure 11g:
$ cd $ORACLE_HOME/oui/bin
$ ./addNode.sh -silent -responseFile /path/to/add-db-nodes.rsp
If you are running Oracle Grid Infrastructure 12c, then you specify the nodes on the command line. The syntax is:
./addnode.sh -silent "CLUSTER_NEW_NODES={comma_delimited_new_nodes}"
For example:
$ cd $Grid_home/addnode
$ ./addnode.sh -silent "CLUSTER_NEW_NODES={dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,
dm02db06,dm02db07,dm02db08}" -ignoreSysPrereqs -ignorePrereq
Ensure the $ORACLE_HOME/oui/oraparam.ini
file has the memory settings that match the parameters set in the Oracle Grid Infrastructure home.
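For example, the two files can be compared with a command similar to the following (the Grid home path shown is an example); reconcile any differences in the memory-related parameters:
$ diff $ORACLE_HOME/oui/oraparam.ini /u01/app/11.2.0/grid/oui/oraparam.ini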
When prompted, run the root.sh
script on each server as the root
user, using the dcli utility.
$ dcli -g new_db_nodes -l root $ORACLE_HOME/root.sh
In the preceding command, new_db_nodes is the file with the list of new database servers.
Verify the ORACLE_HOME
directories have been added to the new servers.
# dcli -g /root/all_group -l root du -sm \
/u01/app/oracle/product/11.2.0/dbhome_1
Before adding the database instances to the new servers, check the following:
Maximum file size: If any data files have reached their maximum file size, then the addInstance
command may fail with an ORA-00740 error. Oracle recommends you check that none of the files listed in DBA_DATA_FILES
have reached their maximum size. Correct any files that have reached their maximum size before adding the instances (see the sample query after this list).
Online redo logs: If the online redo logs are kept in the directory specified by the DB_RECOVERY_FILE_DEST
parameter, then ensure the space allocated is sufficient for the additional redo logs for the new instances being added. If necessary, then increase the size for the DB_RECOVERY_FILE_DEST_SIZE
parameter.
Total number of instances in the cluster: Set the value of the initialization parameter cluster_database_instances
in the SPFILE for each database to the total number of instances that will be in the cluster after adding the new servers.
HugePages settings: Ensure the HugePages settings on the new servers are configured to match the existing servers.
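The following sample query lists data files that have auto-extended to their maximum size; any files it returns should be corrected before adding the instances:
SELECT file_name, bytes, maxbytes
FROM dba_data_files
WHERE autoextensible = 'YES'
AND bytes >= maxbytes;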
Use a command similar to the following from an existing database server to add instances to the new servers. In the following command, the instance dbm9
is added for server dm02db01
.
dbca -silent -addInstance -gdbName dbm -nodeList dm02db01 -instanceName dbm9 \
-sysDBAUsername sys
The command must be run for all servers and instances, substituting the server name and instance name, as appropriate.
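For example, a simple shell loop similar to the following could be used to add one instance per server. The loop is an illustration only; it assumes instances dbm9 through dbm16 are mapped to servers dm02db01 through dm02db08:
for i in $(seq 1 8); do
  # add instance dbm9 on dm02db01, dbm10 on dm02db02, and so on
  dbca -silent -addInstance -gdbName dbm \
    -nodeList dm02db0${i} -instanceName dbm$((i+8)) \
    -sysDBAUsername sys
done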
Note:
If the command fails, then ensure any files that were created, such as redo log files, are cleaned up. The deleteInstance
command does not clean log files or data files that were created by the addInstance
command.
Add the CLUSTER_INTERCONNECTS
parameter to each new instance.
Manually add the CLUSTER_INTERCONNECTS
parameter to the SPFILE for each new database instance. The additions are similar to the existing entries, but use the InfiniBand addresses corresponding to the server that each instance runs on (see the example after these steps).
Restart the instance on each new server.
Verify the parameters were set correctly.
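For example, assuming the InfiniBand address of the server running instance dbm9 is 192.168.10.9 (an illustrative address only), the entry would be similar to the following:
ALTER SYSTEM SET cluster_interconnects = '192.168.10.9' \
sid='dbm9' scope=spfile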
Use the following procedure to ensure the new hardware is correctly configured and ready for use:
Run the /opt/oracle.SupportTools/ibdiagtools/verify-topology
command to ensure that all InfiniBand cables are connected and secure.
Run the Oracle Exadata Database Machine HealthCheck utility using the steps described in My Oracle Support note 1070954.1.
Verify the instance additions using the following commands:
srvctl config database -d dbm srvctl status database -d dbm
Check the cluster resources using the following command:
crsctl stat res -t
Ensure the configuration summary report from the original cluster deployment is updated to include all servers. This document should include the calibrate and network verifications for the new rack, and the InfiniBand cable checks (verify-topology
and infinicheck
).
Conduct a power-off test, if possible. If the new Exadata Storage Servers cannot be powered off, then verify that the new database servers with the new instances can be powered off and powered on, and that all processes start automatically.
Note:
Ensure the Oracle ASM disk rebalance process has completed for all disk groups by using the following command:
select * from gv$asm_operation
No rows should be returned by the query.
Review the configuration settings, such as the following:
All parallelism settings
Backup configurations
Standby site, if any
Service configuration
Oracle Database File System (DBFS) configuration, and mount points on new servers
Installation of Oracle Enterprise HugePage Manager agents on new database servers
HugePages settings
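As a quick check of the HugePages settings, a comparison similar to the following can be run across all database servers (using the /root/all_group file from the earlier example); the values on the new servers should match the existing servers:
# dcli -g /root/all_group -l root "grep HugePages_Total /proc/meminfo"
# dcli -g /root/all_group -l root "sysctl vm.nr_hugepages"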
Incorporate the new cell and database servers into Auto Service Request.
Update Oracle Enterprise Manager Cloud Control to include the new nodes.
See Also:
Oracle Exadata Database Machine Maintenance Guide for information about verifying the InfiniBand network configuration
Auto Service Request Quick Installation Guide for Oracle Exadata Database Machine