1 Extending Oracle Exadata Database Machine

This chapter describes how to extend Oracle Exadata Database Machine.

Note:

For ease of reading, the name "Oracle Exadata Rack" is used when information refers to both Oracle Exadata Database Machine and Oracle Exadata Storage Expansion Rack.

1.1 About Extending Oracle Exadata Database Machine

You can extend Oracle Exadata Database Machine as follows:

  • You can extend Oracle Exadata Database Machine from a fixed configuration to another fixed configuration, for example, from an eighth rack to a quarter rack, a quarter rack to a half rack, or a half rack to a full rack.

  • You can also extend Oracle Exadata Database Machine from a fixed or a custom configuration to another custom configuration by adding any combination of database or storage servers up to the allowed maximum. This is known as elastic configuration. See Oracle Exadata Database Machine System Overview for details.

  • Any combination of Oracle Exadata Database Machine half racks and full racks can be cabled together.

  • A Sun Datacenter InfiniBand Switch 36 switch and cables must be ordered before extending Oracle Exadata Database Machine X4-2 racks.

Note:

  • The cable lengths shown in Multi-Rack Cabling Tables assume the racks are adjacent to each other. If the racks are not adjacent or use overhead cabling trays, then they may require longer cable lengths. Up to 100 meters is supported.

    Only optical cables are supported for lengths greater than 5 meters.

  • Earlier Oracle Exadata Racks can be extended with later Oracle Exadata Racks.

  • When extending Oracle Exadata Database Machine Eighth Rack with Oracle Exadata Storage Expansion Rack, make sure that there are two separate disk groups: one disk group for the drives in Oracle Exadata Database Machine Eighth Rack and one disk group for the drives in Oracle Exadata Storage Expansion Rack.

Multiple Oracle Exadata Database Machines can be run as separate environments and connected through the InfiniBand network. If you are planning to use multiple Oracle Exadata Database Machines in this manner, then note the following:

  • All servers on the InfiniBand network must have a unique IP address. When Oracle Exadata Database Machine is deployed, the default InfiniBand network is 192.168.10.1. You must modify the IP addresses before re-configuring the InfiniBand network; failure to do so causes duplicate IP addresses. After modifying the network, run the verify-topology and infinicheck commands to verify that the network is working properly. The infinicheck command requires a file that contains the IP addresses of all Exadata Storage Servers, such as combined_cellip.ora (an illustrative layout of this file appears after this list). The following is an example of the commands:

    # cd /opt/oracle.SupportTools/ibdiagtools
    # ./verify-topology -t fattree
    # ./infinicheck -c /tmp/combined_cellip.ora -b
    
  • When Oracle Exadata Database Machines run in separate clusters, do not modify the cellip.ora files. The cellip.ora file on a database server should only include the IP addresses for the Exadata Storage Servers used with that database server.

  • Cells with disk types different from what is already installed can be added, but the disk types cannot be mixed in the same Oracle ASM disk group. For example, if the existing disk groups all use high performance disks, and cells with high capacity disks are being added, then it is necessary to create new disk groups for the high capacity disks.

    When adding the same type of disk, ensure that the grid disk sizes are exactly the same even if the new disks are larger than the existing ones. For example, if the existing disks are 3 TB, and the additional disks are 4 TB, then it is necessary to create grid disks on the 4 TB disks that match the size of the grid disks on the 3 TB disks. A new disk group can be created using the extra 1 TB of disk space.

  • To allow one Oracle Exadata Database Machine to access the Exadata Storage Servers of another Oracle Exadata Database Machine when they are not running as a single cluster, the Exadata Storage Servers must have unique Oracle ASM disk group and failure group names on each Oracle Exadata Database Machine. For example, for two Oracle Exadata Database Machines that are cabled together but run as separate clusters, the following names should be unique:

    • Cell name

    • Cell disk name

    • Grid disk name

    • Oracle ASM failure group name

    See Also:

    Oracle Automatic Storage Management Administrator's Guide for information about renaming disk groups

  • All equipment receives a Customer Support Identifier (CSI). Any new equipment for Oracle Exadata Database Machine has a new CSI. Contact Oracle Support Services to reconcile the new CSI with the existing Oracle Exadata Database Machine CSI. Have the original instance numbers or serial numbers available, as well as the new numbers when contacting Oracle Support Services.
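
The combined_cellip.ora file mentioned in the first item of the preceding list uses the same one-entry-per-line format as cellip.ora and lists the Exadata Storage Servers of all racks. A minimal illustrative layout (the addresses shown are placeholders, not defaults):

    cell="192.168.10.3"
    cell="192.168.10.4"
    cell="192.168.10.5"
    cell="192.168.10.6"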

The InfiniBand network can be used for external connectivity. The external connectivity ports in the Sun Datacenter InfiniBand Switch 36 switches can connect to media servers for tape backup, data loading, and client and application access. Use the available ports on the leaf switches for external connectivity. There are 12 available ports per rack: ports 5B, 6A, 6B, 7A, 7B, and 12A on each of the two leaf switches. For high availability connections, connect one port to one leaf switch and the other port to the second leaf switch. The validated InfiniBand cable lengths are as follows:

  • Up to 5 meters passive copper 4X QDR QSFP cables

  • Up to 100 meters fiber optic 4X QDR QSFP cables

1.2 Preparing to Extend Oracle Exadata Database Machine

Before extending any rack hardware, review the safety precautions and cabling information, and collect information about the current rack in this section.

1.2.1 Reviewing the Safety Precautions

Before upgrading Oracle Exadata Database Machines, read Important Safety Information for Sun Hardware Systems included with the rack.

Note:

Contact your service representative or Oracle Advanced Customer Services to confirm that Oracle has qualified your equipment for installation and use in Oracle Exadata Database Machine. Oracle is not liable for any issues when you install or use non-qualified equipment.

1.2.2 Reviewing the InfiniBand Cable Precautions

Review the following InfiniBand cable precautions before working with the InfiniBand cables:

  • Fiber optic InfiniBand cables with laser transceivers must be of type Class 1.

  • Do not allow any copper core InfiniBand cable to bend to a radius tighter than 127 mm (5 inches). Tight bends can damage the cables internally.

  • Do not allow any optical InfiniBand cable to bend to a radius tighter than 85 mm (3.4 inches). Tight bends can damage the cables internally.

  • Do not use zip ties to bundle or support InfiniBand cables. The sharp edges of the ties can damage the cables internally. Use hook-and-loop straps.

  • Do not allow any InfiniBand cable to experience extreme tension. Do not pull or drag an InfiniBand cable. Pulling on an InfiniBand cable can damage the cable internally.

  • Unroll an InfiniBand cable to its full length.

  • Do not twist an InfiniBand cable more than one revolution for its entire length. Twisting an InfiniBand cable can damage the cable internally.

  • Do not route InfiniBand cables where they can be stepped on, or experience rolling loads. A crushing effect can damage the cable internally.

1.2.3 Estimating InfiniBand Cable Path Lengths

Cable paths should be as short as possible. When the length of a cable path has been calculated, select the shortest cable to satisfy the length requirement. When specifying a cable, consider the following:

  • Bends in the cable path increase the required length of the cable. Rarely does a cable travel in a straight line from connector to connector. Bends in the cable path are necessary, and each bend increases the total length.

  • Bundling increases the required length of the cables. Bundling causes one or more cables to follow a common path. However, the bend radius is different in different parts of the bundle. If the bundle is large and unorganized, and there are many bends, one cable might experience only the inner radius of bends, while another cable might experience the outer radius of bends. In this situation, the difference in the required lengths of the cables can be quite substantial.

  • If you are routing the InfiniBand cable under the floor, consider the height of the raised floor when calculating cable path length.

1.2.4 Bundling InfiniBand Cables

When bundling InfiniBand cables in groups, use hook-and-loop straps to keep cables organized. If possible, use color-coordinated straps to help identify cables and their routing. The InfiniBand splitter and 4X copper conductor cables are fairly thick and heavy for their length. Consider the retention strength of the hook-and-loop straps when supporting cables. Bundle as few cables as reasonably possible. If the InfiniBand cables break free of their straps and fall free, the cables might break internally when they strike the floor or from sudden changes in tension.

You can bundle the cables using many hook-and-loop straps. Oracle recommends that no more than eight cables be bundled together.

Place the hook-and-loop straps as close together as reasonably possible, for example, one strap every foot (0.3 m). If a cable breaks free from a strap, then the cable cannot fall far before it is retained by another strap.

1.2.4.1 Floor and Underfloor Delivery of InfiniBand Cables

The Sun Datacenter InfiniBand Switch 36 switch accepts InfiniBand cables from floor or underfloor delivery. Floor and underfloor delivery limits the tension in the InfiniBand cable to the weight of the cable for the rack height of the switch.

Note:

Overhead cabling details are not included in this guide. For details on overhead cabling, contact a certified service engineer.

1.2.5 Reviewing the Cable Management Arm Guidelines

Review the following cable management arm (CMA) guidelines before routing the cables:

  • Remove all required cables from the packaging, and allow cables to acclimate or reach operating temperature, if possible. The acclimation period is usually 24 hours. This improves the ability to manipulate the cables.

  • Label both ends of each cable using a label stock that meets the ANSI/TIA/EIA 606-A standard, if possible.

  • Begin the installation procedure in ascending order.

  • Slide out only one server at a time. Sliding out more than one server can cause cables to drop and cause problems when sliding the servers back.

  • Separate the installation by dressing cables with the least stringent bend radius requirements first. The following bend radius requirements are based on EIA/TIA 568-x standards, and may vary from the manufacturer's requirements:

    • CAT5e UTP: 4 x diameter of the cable or 1 inch/25.4 mm minimum bend radius

    • AC power cables: 4 x diameter of the cable or 1 inch/ 25.4 mm minimum bend radius

    • TwinAx: 5 x diameter of the cable or 1.175 inch/33 mm.

    • Quad Small Form-factor Pluggable (QSFP) InfiniBand cable: 6 x diameter of the cable or 2 inch/55 mm.

    • Fiber core cable: 10 x diameter of the cable or 1.22 inch/31.75 mm for a 0.125 cable.

  • Install the cables with the best longevity rate first.

1.2.6 Obtaining Current Configuration Information

The current configuration information is used to plan patching requirements, configure new IP addresses, and so on. The commands shown assume dcli group files in the root user's home directory (all_group, dbs_group, and cell_group) that list the relevant server host names; a sketch for creating these files follows the list. Collect the following information before extending the rack:

  • The Exachk report for the current rack.

  • Image history information using the following command:

    dcli -g ~/all_group -l root "imagehistory" > imagehistory.txt
    
  • Current IP addresses defined for all Exadata Storage Servers and database servers using the following command:

    dcli -g ~/all_group -l root "ifconfig" > ifconfig_all.txt
    
  • Information about the configuration of the cells, cell disks, flash logs, and IORM plans using the following commands:

    dcli -g ~/cell_group -l root "cellcli -e list cell detail" > cell_detail.txt
    
    dcli -g ~/cell_group -l root "cellcli -e list physicaldisk detail" >   \
    physicaldisk_detail.txt
    
    dcli -g ~/cell_group -l root "cellcli -e list griddisk attributes      \
    name,offset,size,status,asmmodestatus,asmdeactivationoutcome" > griddisk.txt
    
    dcli -g ~/cell_group -l root "cellcli -e list flashcache detail" >     \
    fc_detail.txt
    
    dcli -g ~/cell_group -l root "cellcli -e list flashlog detail" > fl_detail.txt
    
    dcli -g ~/cell_group -l root "cellcli -e list iormplan detail" >       \
    iorm_detail.txt
    
  • HugePages memory configuration on the database servers using the following command:

    dcli -g ~/dbs_group -l root "cat /proc/meminfo | grep 'HugePages'" >    \
    hugepages.txt
    
  • InfiniBand switch information using the following command:

    ibswitches > ibswitches.txt
    
  • Firmware version of the Sun Datacenter InfiniBand Switch 36 switches using the nm2version command on each switch.

  • The following network files from the first database server in the rack:

    • /etc/resolv.conf

    • /etc/ntp.conf

    • /etc/network

    • /etc/sysconfig/network-scripts/ifcfg-*

  • Any users, user identifiers, groups, and group identifiers created for cluster-managed services (such as Oracle GoldenGate) that need to be created on the new servers. Review the following files:

    • /etc/passwd

    • /etc/group

  • Output of current cluster status using the following command:

    crsctl stat res -t > crs_stat.txt
    
  • Patch information from the Grid Infrastructure and Oracle homes using the following commands. Run the first command as the Grid Infrastructure home owner and the second command as the Oracle home owner.

    /u01/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch lsinventory -oh    \
    GRID_HOME -detail -all_nodes > opatch_grid.txt
    
    /u01/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch lsinventory -oh    \
    ORACLE_HOME -detail -all_nodes >> opatch_oracle.txt
    

    In the preceding commands, GRID_HOME is the path for the Grid Infrastructure home directory, and ORACLE_HOME is the path for the Oracle home directory.
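
The dcli group files referenced by these commands (all_group, dbs_group, and cell_group) are plain text files in the root user's home directory, with one server host name per line. If they do not already exist, they can be created as follows; the host names shown are illustrative only:

    printf "dm01db01\ndm01db02\n" > ~/dbs_group
    printf "dm01cel01\ndm01cel02\ndm01cel03\n" > ~/cell_group
    cat ~/dbs_group ~/cell_group > ~/all_group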

1.2.7 Preparing the Network Configuration

When adding servers or a rack to an existing rack, the IP addresses for the new servers are obtained using Oracle Exadata Deployment Assistant. If you are adding servers to an existing rack, then the Oracle Exadata Deployment Assistant configuration should include only the new servers. If you are adding a rack, then the new rack should use its own Oracle Exadata Deployment Assistant configuration. The exact Oracle ASM disk group configuration currently in use may not be reflected by the application. This is not an issue, because the grid disks and disk groups are configured manually. All other items, such as the Oracle home location and owner, should be defined exactly as in the existing configuration.

When adding Oracle Exadata X4-2 Database Server or later or Oracle Exadata Storage Server X4-2L or later, the bonding configuration must match the existing servers in the rack. The Oracle Exadata Deployment Assistant InfiniBand configuration page has an option to select the type of bonding. Select the option for active-active bonding, or deselect the option for active-passive bonding.

The configuration file generated by Oracle Exadata Deployment Assistant is used when configuring the new servers. After completing Oracle Exadata Deployment Assistant, use the checkip.sh and dbm.dat files to verify the network configuration. The only errors that should occur are from the ping command to the SCAN addresses, Cisco switch, and Sun Datacenter InfiniBand Switch 36 switches.
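
A minimal sketch of running the network checks, assuming checkip.sh and dbm.dat are in the current working directory; the mode argument shown is an assumption and may differ between Oracle Exadata Deployment Assistant versions:

    ./checkip.sh -m pre_applyconfig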

1.2.8 Moving Audit and Diagnostic Files

The files in the $GRID_HOME/rdbms/audit directory and the $GRID_HOME/log/diagnostics directory should be moved or deleted before extending a cluster. Oracle recommends moving or deleting the files a day or two before the planned extension because the process may take a long time.
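
A minimal sketch of one way to archive and then remove the old audit files ahead of time; the staging path is an assumption, and a similar approach applies to the diagnostic files:

    cd $GRID_HOME/rdbms/audit
    find . -name '*.aud' -print | tar -czf /tmp/grid_audit_backup.tar.gz -T -
    find . -name '*.aud' -delete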

1.2.9 Reviewing Release and Patch Levels

The new rack or servers most likely have a later release or patch level than the current rack. In some cases, you may want to update the current rack release to the later release. In other cases, you may want to stay at your current release, and choose to reimage the new rack to match the current rack. Whatever you choose to do, ensure that the existing and new servers and Sun Datacenter InfiniBand Switch 36 switches are at the same patch level. Note the following about the hardware and releases:

Tip:

Check My Oracle Support note 888828.1 for latest information on minimum releases.

  • When expanding Oracle Exadata Database Machine X4-2 with Sun Server X4-2 Oracle Database Servers and Oracle Exadata Storage Server X4-2L Servers, the minimum release for the servers is release 11.2.3.3.0.

  • When expanding with Oracle Exadata Database Machine X4-8 Full Rack, the minimum release for the servers is release 11.2.3.3.1.

  • When expanding Oracle Exadata Database Machine X2-2 (with X4170 and X4275 servers) or Oracle Exadata Database Machine X2-2 (with X4170 M2 and X4270 M2 servers) with Sun Server X3-2 Oracle Database Servers and Exadata Storage Server X3-2 Servers, the minimum release for servers is release 11.2.3.2.0.

  • When expanding Oracle Exadata Database Machine X2-2 (with X4170 and X4275 servers) with Sun Fire X4170 M2 Oracle Database Servers and Oracle Exadata Storage Server with Sun Fire X4270 M2 Servers, the minimum release for servers is release 11.2.2.2.0.

  • The earlier servers may need to be patched to a later release to meet the minimum. In addition, earlier database servers may use Oracle Linux release 5.3. Those servers need to be updated to the latest Oracle Linux release.

Additional patching considerations include the Grid Infrastructure and database home releases and bundle patch updates. If new patches will be applied, then Oracle recommends changing the existing servers so the new servers will inherit the releases as part of the extension procedure. This way, the number of servers being patched is lower. Any patching of the existing servers should be performed in advance so they are at the desired level when the extension work is scheduled, thereby reducing the total amount of work required during the extension.
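
To see how the existing servers compare with the new equipment, the Exadata software release on every existing server can be recorded ahead of time; a quick check using the same all_group file as in "Obtaining Current Configuration Information":

    dcli -g ~/all_group -l root "imageinfo -ver" > imageinfo_ver.txt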

1.2.10 Performing Preliminary Checks

Perform a visual check of Oracle Exadata Database Machine physical systems before extending the hardware.

  1. Check the rack for damage.

  2. Check the rack for loose or missing screws.

  3. Check Oracle Exadata Database Machine for the ordered configuration.

  4. Check that all cable connections are secure and well seated.

  5. Check power cables.

  6. Ensure the correct connectors have been supplied for the data center facility power source.

  7. Check network data cables.

  8. Check the site location tile arrangement for cable access and airflow.

  9. Check the data center airflow into the front of Oracle Exadata Database Machine.

1.2.11 Preparing to Add Servers

Perform the following tasks before adding the servers:

  1. Unpack the Oracle Exadata Database Machine expansion kit.

  2. Unpack all Oracle Exadata Database Machine server components from the packing cartons. The following items should be packaged with the servers:

    • Oracle Database servers or Exadata Storage Server

    • Power cord, packaged with country kit

    • Cable management arm with installation instructions

    • Rackmount kit containing rack rails and installation instructions

    • (Optional) Sun server documentation and media kit

    Note:

    If you are extending Oracle Exadata Database Machine X4-2, Oracle Exadata Database Machine X3-8 Full Rack, or Oracle Exadata Database Machine X2-2 (with X4170 and X4275 servers) half rack, then order the expansion kit that includes a Sun Datacenter InfiniBand Switch 36 switch.

    Figure 1-1 shows the components in the server expansion kit.

    Figure 1-1 Server Components for Upgrade

  3. Lay out the cables for the servers.

  4. Unroll the cables and stretch them to remove the bends.

  5. Apply the cable labels. Oracle recommends labeling all cables before installation.

    See Also:

    Oracle Exadata Database Machine Maintenance Guide for information about cable labels

  6. Install the servers as described in "Adding New Servers."

  7. Cable the servers as described in "Cabling Database Servers" and "Cabling Exadata Storage Servers."

1.3 Extending the Hardware

Oracle Exadata Database Machine can be extended from Oracle Exadata Database Machine Quarter Rack to Oracle Exadata Database Machine Half Rack, from Oracle Exadata Database Machine Half Rack to Oracle Exadata Database Machine Full Rack, and by cabling racks together.

Note:

All new equipment receives a Customer Support Identifier (CSI). Any new equipment for Oracle Exadata Database Machine has a new CSI. Contact Oracle Support Services to reconcile the new CSI with the existing Oracle Exadata Database Machine CSI. Have the original instance numbers or serial numbers available, as well as the new numbers when contacting Oracle Support Services.

1.3.1 Extending an Eighth Rack to a Quarter Rack in Oracle Exadata Database Machine SL6, X4-2 and Later

Extending Oracle Exadata Database Machine X4-2 or X5-2 from an eighth rack to a quarter rack is done using software. No hardware modifications are needed to extend the rack.

However, hardware modifications may be needed for Oracle Exadata Database Machine X6-2 and SL6. See "For Oracle Exadata Database Machine X6-2 and SL6: Adding High Capacity Disks and Flash Cards" for details.

This procedure can be done with no downtime or outages, other than a rolling database outage.

Note:

In the following procedures, the disk group names and sizes are examples. The values should be changed in the commands to match the actual system.

The procedures assume user equivalence exists between the root user on the first database server and all other database servers, and to the celladmin user on all storage cells.

The text files cell_group and db_group should be created to contain lists of cell host names and database server host names, respectively.

1.3.1.1 Reviewing and Validating Current Configuration of Eighth Rack Oracle Exadata Database Machine SL6 and X4-2 or Later

The following procedure describes how to review and validate the current configuration.

  1. Log in as the root user on the first database server.

  2. Review the current configuration of the database servers using the following command:

    # dcli -g db_group -l root 'dbmcli -e list dbserver attributes coreCount'
    

    The following is an example of the output from the command for Oracle Exadata Database Machine X5-2 Eighth Rack:

    dm01db01: 18
    dm01db02: 18
    

    Note:

    The number of active cores in Oracle Exadata Database Machine X5-2 Eighth Rack database server is 18. The number of active cores in Oracle Exadata Database Machine X4-2 Eighth Rack database server is 12.

    If the number of cores on a database server configured as an eighth rack differs, then contact Oracle Support Services.

  3. Review the current configuration of the storage servers using the following command. The expected output is TRUE.

    # dcli -g cell_group -l celladmin 'cellcli -e LIST CELL attributes eighthrack'
    

1.3.1.2 Activating Database Server Cores in Eighth Rack Oracle Exadata Database Machine SL6 and X4-2 or Later

The following procedure describes how to activate the database server cores.

  1. Log in as the root user on the first database server.

  2. Activate all the database server cores using the following dcli utility command on the database server group:

    # dcli -g db_group -l root  'dbmcli  -e    \
    ALTER DBSERVER pendingCoreCount = number_of_cores'
    

    In the preceding command, number_of_cores is the total number of cores to activate. The value includes the existing core count and the additional cores to be activated. The following command shows how to activate all the cores in Oracle Exadata Database Machine X5-2 Eighth Rack:

    # dcli -g db_group -l root 'dbmcli -e ALTER DBSERVER pendingCoreCount = 36'
    

    Note:

    The maximum number of total active cores for Oracle Exadata Database Machine X5-2 Eighth Rack is 36. The maximum number of total active cores for Oracle Exadata Database Machine X4-2 Eighth Rack is 24.

  3. Restart each database server.

    Note:

    If this procedure is done in a rolling fashion with the database and Grid Infrastructure active, then verify the items listed in the next step on each server before restarting the next database server.

  4. Verify the following items on the database server after the restart completes and before proceeding to the next server:

    • The database and Grid Infrastructure services are active.

      See "Using SRVCTL to Verify That Instances are Running" in Oracle Real Application Clusters Administration and Deployment Guide and the crsctl status resource -w "TARGET = ONLINE" -t command.
    • The number of active cores is correct. Use the dbmcli -e list dbserver attributes coreCount command to verify the number of cores.
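
For reference, the checks in step 4 might look like the following on each restarted server; run the crsctl command as the Grid Infrastructure software owner and the dbmcli command as the root user:

    $ crsctl status resource -w "TARGET = ONLINE" -t
    # dbmcli -e list dbserver attributes coreCount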

1.3.1.3 For Oracle Exadata Database Machine X6-2 and SL6: Adding High Capacity Disks and Flash Cards

Upgrading Oracle Exadata Database Machine X6-2 Eighth Rack High Capacity systems requires hardware modification, but upgrading X6-2 Extreme Flash and SL6 systems does not.

Eighth Rack High Capacity storage servers have half the cores enabled, and half the disks and flash cards are physically removed. Eighth Rack Extreme Flash storage servers have half the cores and half the flash drives enabled.

Eighth Rack database servers have half the cores enabled.

On Oracle Exadata Database Machine X6-2 Eighth Rack systems with High Capacity disks, you can add high capacity disks and flash cards to extend the system to a Quarter Rack:

  1. Install the six 8 TB disks in HDD slots 6 - 11.

  2. Install the two F320 flash cards in PCIe slots 1 and 4.

1.3.1.4 Activating Storage Server Cores and Disks in Eighth Rack Oracle Exadata Database Machine SL6 and X4-2 or Later

The following procedure describes how to activate the storage server cores and disks.

  1. Log in as the root user on the first database server.

  2. Activate the cores on the storage server group using the following command. The command uses the dcli utility, and runs the command as the celladmin user.

    # dcli -g cell_group -l celladmin cellcli -e "alter cell eighthRack=false"
    
  3. Create the cell disks using the following command:

    # dcli -g cell_group -l celladmin cellcli -e  "create celldisk all"
    
  4. Recreate the flash log using the following commands:

    # dcli -g cell_group -l celladmin cellcli -e  "drop flashlog all force"
    # dcli -g cell_group -l celladmin cellcli -e  "create flashlog all"
    
  5. Expand the flash cache using the following command:

    # dcli -g cell_group -l celladmin cellcli -e  "alter flashcache all"
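
(Optional) After expanding the flash cache, the flash cache and flash log on each cell can be confirmed with the following commands, which report the name, size, and status of each object:

    # dcli -g cell_group -l celladmin cellcli -e "list flashcache attributes name,size,status"
    # dcli -g cell_group -l celladmin cellcli -e "list flashlog attributes name,size,status"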
    

1.3.1.5 Creating Grid Disks in Eighth Rack Oracle Exadata Database Machine SL6 and X4-2 or Later

Grid disk creation must follow a specific order to ensure the proper offset.

The order of grid disk creation must follow the same sequence that was used during initial grid disks creation. For a standard deployment using Oracle Exadata Deployment Assistant, the order is DATA, RECO, and DBFS_DG. Create all DATA grid disks first, followed by the RECO grid disks, and then the DBFS_DG grid disks.
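
Because the offset of each grid disk is determined by the order in which it is created, it can be useful to review the offsets and sizes of the existing grid disks before creating new ones; a read-only check run from the first database server:

    # dcli -g cell_group -l celladmin cellcli -e "list griddisk attributes name,offset,size"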

The following procedure describes how to create the grid disks:

Note:

The commands shown in this procedure use the standard deployment grid disk prefix names of DATA, RECO and DBFS_DG. The sizes being checked are on cell disk 02. Cell disk 02 is used because the disk layout for cell disks 00 and 01 is different from that of the other cell disks in the server.

  1. Check the size of the grid disks using the following commands. Each cell should return the same size for the grid disks starting with the same grid disk prefix.

    # dcli -g cell_group -l celladmin cellcli -e    \
    "list griddisk attributes name, size where name like \'DATA.*_02_.*\'"
    
    # dcli -g cell_group -l celladmin cellcli -e    \
    "list griddisk attributes name, size where name like \'RECO.*_02_.*\'"
    
    # dcli -g cell_group -l celladmin cellcli -e    \
    "list griddisk attributes name, size where name like \'DBFS_DG.*_02_.*\'" 
    

    The sizes shown are used during grid disk creation.

  2. Create the grid disks for the disk groups using the sizes shown in step 1. Table 1-1 shows the commands to create the grid disks based on rack type and disk group.

Table 1-1 Commands to Create Disk Groups When Extending Oracle Exadata Database Machine X4-2 Eighth Rack or Later or SL6

Rack Commands

Extreme Flash Oracle Exadata Database Machine X5-2 and later or SL6

dcli -g cell_group -l celladmin "cellcli -e create griddisk         \
DATA_FD_04_\`hostname -s\` celldisk=FD_04_\`hostname -s\`,size=datasize"

dcli -g cell_group -l celladmin "cellcli -e create griddisk         \
DATA_FD_05_\`hostname -s\` celldisk=FD_05_\`hostname -s\`,size=datasize"

dcli -g cell_group -l celladmin "cellcli -e create griddisk         \
DATA_FD_06_\`hostname -s\` celldisk=FD_06_\`hostname -s\`,size=datasize"

dcli -g cell_group -l celladmin "cellcli -e create griddisk         \
DATA_FD_07_\`hostname -s\` celldisk=FD_07_\`hostname -s\`,size=datasize"

dcli -g cell_group -l celladmin "cellcli -e create griddisk          \
RECO_FD_04_\`hostname -s\` celldisk=FD_04_\`hostname -s\`,size=recosize, \
cachingPolicy=none"

dcli -g cell_group -l celladmin "cellcli -e create griddisk          \
RECO_FD_05_\`hostname -s\` celldisk=FD_05_\`hostname -s\`,size=recosize, \
cachingPolicy=none"

dcli -g cell_group -l celladmin "cellcli -e create griddisk          \
RECO_FD_06_\`hostname -s\` celldisk=FD_06_\`hostname -s\`,size=recosize, \
cachingPolicy=none"

dcli -g cell_group -l celladmin "cellcli -e create griddisk          \
RECO_FD_07_\`hostname -s\` celldisk=FD_07_\`hostname -s\`,size=recosize, \
cachingPolicy=none"

dcli -g cell_group -l celladmin "cellcli -e create griddisk           \
DBFS_DG_FD_04_\`hostname -s\` celldisk=FD_04_\`hostname -s\`,size=dbfssize"

dcli -g cell_group -l celladmin "cellcli -e create griddisk           \
DBFS_DG_FD_05_\`hostname -s\` celldisk=FD_05_\`hostname -s\`,size=dbfssize"

dcli -g cell_group -l celladmin "cellcli -e create griddisk           \
DBFS_DG_FD_06_\`hostname -s\` celldisk=FD_06_\`hostname -s\`,size=dbfssize"

dcli -g cell_group -l celladmin "cellcli -e create griddisk           \
DBFS_DG_FD_07_\`hostname -s\` celldisk=FD_07_\`hostname -s\`,size=dbfssize"

High Capacity Oracle Exadata Database Machine X5-2 or Oracle Exadata Database Machine X4-2 and later or SL6

dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
DATA_CD_06_\`hostname -s\` celldisk=CD_06_\`hostname -s\`,size=datasize"

dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
DATA_CD_07_\`hostname -s\` celldisk=CD_07_\`hostname -s\`,size=datasize"

dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
DATA_CD_08_\`hostname -s\` celldisk=CD_08_\`hostname -s\`,size=datasize"

dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
DATA_CD_09_\`hostname -s\` celldisk=CD_09_\`hostname -s\`,size=datasize"

dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
DATA_CD_10_\`hostname -s\` celldisk=CD_10_\`hostname -s\`,size=datasize"

dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
DATA_CD_11_\`hostname -s\` celldisk=CD_11_\`hostname -s\`,size=datasize"

dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
RECO_CD_06_\`hostname -s\` celldisk=CD_06_\`hostname -s\`,size=recosize, \
cachingPolicy=none"

dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
RECO_CD_07_\`hostname -s\` celldisk=CD_07_\`hostname -s\`,size=recosize, \
cachingPolicy=none"

dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
RECO_CD_08_\`hostname -s\` celldisk=CD_08_\`hostname -s\`,size=recosize, \
cachingPolicy=none"

dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
RECO_CD_09_\`hostname -s\` celldisk=CD_09_\`hostname -s\`,size=recosize, \
cachingPolicy=none"

dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
RECO_CD_10_\`hostname -s\` celldisk=CD_10_\`hostname -s\`,size=recosize, \
cachingPolicy=none"

dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
RECO_CD_11_\`hostname -s\` celldisk=CD_11_\`hostname -s\`,size=recosize, \
cachingPolicy=none"

dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
DBFS_DG_CD_06_\`hostname -s\` celldisk=CD_06_\`hostname -s\`,size=dbfssize"

dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
DBFS_DG_CD_07_\`hostname -s\` celldisk=CD_07_\`hostname -s\`,size=dbfssize"

dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
DBFS_DG_CD_08_\`hostname -s\` celldisk=CD_08_\`hostname -s\`,size=dbfssize"

dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
DBFS_DG_CD_09_\`hostname -s\` celldisk=CD_09_\`hostname -s\`,size=dbfssize"

dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
DBFS_DG_CD_10_\`hostname -s\` celldisk=CD_10_\`hostname -s\`,size=dbfssize"

dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
DBFS_DG_CD_11_\`hostname -s\` celldisk=CD_11_\`hostname -s\`,size=dbfssize"
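
After running the commands from Table 1-1, the new grid disks can be verified before they are added to Oracle ASM; each new grid disk should report the size chosen in step 1 and a status of active:

    # dcli -g cell_group -l celladmin cellcli -e "list griddisk attributes name,size,status"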

1.3.1.6 Adding Grid Disks to Oracle ASM Disk Groups in Eighth Rack Oracle Exadata Database Machine SL6 and X4-2 or Later

The following procedure describes how to add the grid disks to Oracle ASM disk groups.

The grid disks created in "Creating Grid Disks in Eighth Rack Oracle Exadata Database Machine SL6 and X4-2 or Later" must be added as Oracle ASM disks to their corresponding, existing Oracle ASM disk groups.
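
Step 1 below asks you to confirm that no rebalance operation is running and that all Oracle ASM disks are online; a minimal pair of checks from the +ASM instance (no rows returned by either query indicates it is safe to proceed):

    SQL> SELECT * FROM gv$asm_operation;
    SQL> SELECT group_number, name, mode_status FROM v$asm_disk    \
    WHERE mode_status <> 'ONLINE';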

  1. Validate the following:

    • No rebalance operation is currently running.

    • All Oracle ASM disks are active.

  2. Log in to the first database server as the owner who runs the Grid Infrastructure software.

  3. Set the environment to access the +ASM instance on the server.

  4. Log in to the ASM instance as the sysasm user using the following command:

    $ sqlplus / as sysasm
    
  5. Validate the current settings, as follows:

    SQL> set lines 100
    SQL> column attribute format a20
    SQL> column value format a20
    SQL> column diskgroup format a20
    SQL> SELECT att.name attribute, upper(att.value) value, dg.name diskgroup
    FROM V$ASM_ATTRIBUTE att, V$ASM_DISKGROUP DG
    WHERE DG.group_number=att.group_number AND att.name LIKE '%appliance.mode%'
    ORDER BY att.group_number;
    

    The output should be similar to the following:

    ATTRIBUTE            VALUE                DISKGROUP
    -------------------- -------------------- --------------------
    appliance.mode       TRUE                 DATAC1
    appliance.mode       TRUE                 DBFS_DG
    appliance.mode       TRUE                 RECOC1
    
  6. Disable the appliance.mode attribute for any disk group that shows TRUE using the following commands:

    SQL> ALTER DISKGROUP data_diskgroup set attribute 'appliance.mode'='FALSE';
    SQL> ALTER DISKGROUP reco_diskgroup set attribute 'appliance.mode'='FALSE';
    SQL> ALTER DISKGROUP dbfs_dg_diskgroup set attribute 'appliance.mode'='FALSE';
    

    In the preceding commands, data_diskgroup, reco_diskgroup, and dbfs_dg_diskgroup are the names of the DATA, RECO and DBFS_DG disk groups, respectively.

  7. Add the grid disks to the Oracle ASM disk groups. Table 1-2 shows the commands to add the grid disks based on rack type and disk group. Adding the new disks requires a rebalance of the system.

    Table 1-2 Commands to Add Disk Groups When Extending Eighth Rack Oracle Exadata Database Machine X4-2 and Later or SL6

    Rack Commands

    Extreme Flash Oracle Exadata Database Machine X5-2 and later or SL6

    SQL> ALTER DISKGROUP data_diskgroup ADD DISK 'o/*/DATA_FD_0[4-7]*'      \
    REBALANCE POWER 32;
     
    SQL> ALTER DISKGROUP reco_diskgroup ADD DISK 'o/*/RECO_FD_0[4-7]*'      \
    REBALANCE POWER 32;
     
    SQL> ALTER DISKGROUP dbfs_dg_diskgroup ADD DISK 'o/*/DBFS_DG_FD_0[4-7]*'\
    REBALANCE POWER 32; 
    

    High Capacity Oracle Exadata Database Machine X5-2 or Oracle Exadata Database Machine X4-2 and later or SL6

    SQL> ALTER DISKGROUP data_diskgroup ADD DISK 'o/*/DATA_CD_0[6-9]*','    \
    o/*/DATA_CD_1[0-1]*' REBALANCE POWER 32;
     
    SQL> ALTER DISKGROUP reco_diskgroup ADD DISK 'o/*/RECO_CD_0[6-9]*','    \
    o/*/RECO_CD_1[0-1]*' REBALANCE POWER 32;
     
    SQL> ALTER DISKGROUP dbfs_dg_diskgroup ADD DISK '                       \
    o/*/DBFS_DG_CD_0[6-9]*',' o/*/DBFS_DG_CD_1[0-1]*' REBALANCE POWER 32; 
    

    The preceding commands return Diskgroup altered, if successful.

  8. (Optional) Monitor the current rebalance operation using the following command:

    SQL> SELECT * FROM  gv$asm_operation;
    
  9. Re-enable the appliance.mode attribute, if it was disabled in step 6, using the following commands:

    SQL> ALTER DISKGROUP data_diskgroup set attribute 'appliance.mode'='TRUE';
    SQL> ALTER DISKGROUP reco_diskgroup set attribute 'appliance.mode'='TRUE';
    SQL> ALTER DISKGROUP dbfs_dg_diskgroup set attribute 'appliance.mode'='TRUE';
    

1.3.1.7 Validating New Quarter Rack Configuration for Oracle Exadata Database Machine SL6 and X4-2 or Later

After adding the grid disks to the Oracle ASM disk groups, validate the configuration.

  1. Log in as the root user on the first database server.

  2. Check the core count using the following command:

    # dcli -g db_group -l root 'dbmcli -e list dbserver attributes coreCount'
    
  3. Review the storage server configuration using the following command.

    # dcli -g cell_group -l celladmin 'cellcli -e list cell attributes eighthrack'
    

    The output should show FALSE.

  4. Review the appliance mode for each disk group using the following commands:

    SQL> set lines 100
    SQL> column attribute format a20
    SQL> column value format a20
    SQL> column diskgroup format a20
    SQL> SELECT att.name attribute, upper(att.value) value, dg.name diskgroup    \
    FROM V$ASM_ATTRIBUTE att, V$ASM_DISKGROUP DG                                 \
    WHERE DG.group_number = att.group_number AND                                 \
    att.name LIKE '%appliance.mode%' ORDER BY DG.group_number;
    
  5. Validate the number of Oracle ASM disks using the following command:

    SQL> SELECT g.name,d.failgroup,d.mode_status,count(*)                      \
    FROM v$asm_diskgroup g, v$asm_disk d                                       \
    WHERE d.group_number=g.group_number                                        \
    GROUP BY g.name,d.failgroup,d.mode_status;
    

1.3.2 Extending an Eighth Rack to a Quarter Rack in Oracle Exadata Database Machine X3-2

Extending an Oracle Exadata Database Machine X3-2 or earlier rack from an eighth rack to a quarter rack is done using software. No hardware modifications are needed to extend the rack. This procedure can be done with no downtime or outages, other than a rolling database outage. The following procedures in this section describe how to extend an Oracle Exadata Database Machine X3-2 eighth rack to a quarter rack:

1.3.2.1 Reviewing and Validating Current Configuration of Oracle Exadata Database Machine X3-2 Eighth Rack

The following procedure describes how to review and validate the current configuration:

  1. Log in as the root user on the first database server.

  2. Review the current configuration of the database servers using the following command:

    # dcli -g db_group -l root /opt/oracle.SupportTools/resourcecontrol -show
    

    The following is an example of the output from the command:

    dm01db01: [INFO] Validated hardware and OS. Proceed.
    dm01db01:
    dm01db01: system_bios_version:  25010600
    dm01db01: restore_status:  Ok
    dm01db01: config_sync_status:  Ok
    dm01db01: reset_to_defaults: Off
    dm01db01: [SHOW] Number of cores active per socket: 4
    dm01db02: [INFO] Validated hardware and OS. Proceed.
    dm01db02:
    dm01db02: system_bios_version:  25010600
    dm01db02: restore_status:  Ok
    dm01db02: config_sync_status:  Ok
    dm01db02: reset_to_defaults: Off
    dm01db02: [SHOW] Number of cores active per socket: 4
    

    Note:

    The number of active cores in Oracle Exadata Database Machine X3-2 Eighth Rack database server is 4.

    If the number of cores on a database server configured as an eighth rack differs, then contact Oracle Support Services.

    Ensure the output for restore_status and config_sync_status are shown as Ok before continuing this procedure.

  3. Review the current configuration of the storage servers using the following command. The expected output is TRUE.

    # dcli -g cell_group -l celladmin 'cellcli -e LIST CELL attributes eighthrack'
    
  4. Ensure that flash disks are not used in Oracle ASM disk groups using the following command. Flash cache is dropped and recreated during this procedure:

    # dcli -g cell_group -l celladmin cellcli -e  "list griddisk attributes   \
    asmDiskgroupName,asmDiskName,diskType where diskType ='FlashDisk'         \
    and asmDiskgroupName !=null"
    

    No rows should be returned by the command.

1.3.2.2 Activating Database Server Cores in Oracle Exadata Database Machine X3-2 Eighth Rack

The following procedure describes how to activate the database server cores:

  1. Log in as the root user on the first database server.

  2. Activate all the database server cores using the following dcli utility command on the database server group:

    # dcli -g db_group -l root /opt/oracle.SupportTools/resourcecontrol      \
    -core number_of_cores 
    

    In the preceding command, number_of_cores is the total number of cores to activate. To activate all the cores, enter All for the number of cores (see the sketch after this procedure).

  3. Restart the database servers in a rolling manner using the following command:

    # reboot
    

    Note:

    Ensure the output for restore_status and config_sync_status are shown as Ok before activating the storage server cores and disks. Getting the status from the BIOS after restarting may take several minutes.
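
For reference, the step 2 command to activate all cores on each Oracle Exadata Database Machine X3-2 Eighth Rack database server would look like the following, assuming the same db_group file:

    # dcli -g db_group -l root /opt/oracle.SupportTools/resourcecontrol -core All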

1.3.2.3 Activating Storage Server Cores and Disks in Oracle Exadata Database Machine X3-2 Eighth Rack

The following procedure describes how to activate the storage server cores and disks:

  1. Log in as the root user on the first database server.

  2. Activate the cores on the storage server group using the following command. The command uses the dcli utility, and runs the command as the celladmin user.

    # dcli -g cell_group -l celladmin cellcli -e "alter cell eighthRack=false"
    
  3. Create the cell disks using the following command:

    # dcli -g cell_group -l celladmin cellcli -e  "create celldisk all"
    
  4. Recreate the flash log using the following commands:

    # dcli -g cell_group -l celladmin cellcli -e  "drop flashlog all force"
    # dcli -g cell_group -l celladmin cellcli -e  "create flashlog all"
    
  5. Expand the flash cache using the following command:

    # dcli -g cell_group -l celladmin cellcli -e  "alter flashcache all"
    

1.3.2.4 Creating Grid Disks in Oracle Exadata Database Machine X3-2 Eighth Rack

Grid disk creation must follow a specific order to ensure the proper offset. The order of grid disk creation must follow the same sequence that was used during initial grid disks creation. For a standard deployment using Oracle Exadata Deployment Assistant, the order is DATA, RECO, and DBFS_DG. Create all DATA grid disks first, followed by the RECO grid disks, and then the DBFS_DG grid disks.

The following procedure describes how to create the grid disks:

Note:

The commands shown in this procedure use the standard deployment grid disk prefix names of DATA, RECO and DBFS_DG. The sizes being checked are on cell disk 02. Cell disk 02 is used because the disk layout for cell disks 00 and 01 is different from that of the other cell disks in the server.

  1. Check the size of the grid disks using the following commands. Each cell should return the same size for the grid disks starting with the same grid disk prefix.

    # dcli -g cell_group -l celladmin cellcli -e    \
    "list griddisk attributes name, size where name like \'DATA.*02.*\'"
    
    # dcli -g cell_group -l celladmin cellcli -e    \
    "list griddisk attributes name, size where name like \'RECO.*02.*\'"
    
    # dcli -g cell_group -l celladmin cellcli -e    \
    "list griddisk attributes name, size where name like \'DBFS_DG.*02.*\'" 
    

    The sizes shown are used during grid disk creation.

  2. Create the grid disks for the disk groups using the sizes shown in step 1. Table 1-3 shows the commands to create the grid disks based on rack type and disk group.

    Table 1-3 Commands to Create Disk Groups When Extending Oracle Exadata Database Machine X3-2 Eighth Rack

    Rack Commands

    High Performance or High Capacity Oracle Exadata Database Machine X3-2

    dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
    DATA_CD_06_\`hostname -s\` celldisk=CD_06_\`hostname -s\`,size=datasize"
    
    dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
    DATA_CD_07_\`hostname -s\` celldisk=CD_07_\`hostname -s\`,size=datasize"
    
    dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
    DATA_CD_08_\`hostname -s\` celldisk=CD_08_\`hostname -s\`,size=datasize"
    
    dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
    DATA_CD_09_\`hostname -s\` celldisk=CD_09_\`hostname -s\`,size=datasize"
    
    dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
    DATA_CD_10_\`hostname -s\` celldisk=CD_10_\`hostname -s\`,size=datasize"
    
    dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
    DATA_CD_11_\`hostname -s\` celldisk=CD_11_\`hostname -s\`,size=datasize"
    
    dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
    RECO_CD_06_\`hostname -s\` celldisk=CD_06_\`hostname -s\`,size=recosize"
    
    dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
    RECO_CD_07_\`hostname -s\` celldisk=CD_07_\`hostname -s\`,size=recosize"
    
    dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
    RECO_CD_08_\`hostname -s\` celldisk=CD_08_\`hostname -s\`,size=recosize"
    
    dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
    RECO_CD_09_\`hostname -s\` celldisk=CD_09_\`hostname -s\`,size=recosize"
    
    dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
    RECO_CD_10_\`hostname -s\` celldisk=CD_10_\`hostname -s\`,size=recosize"
    
    dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
    RECO_CD_11_\`hostname -s\` celldisk=CD_11_\`hostname -s\`,size=recosize"
    
    dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
    DBFS_DG_CD_06_\`hostname -s\` celldisk=CD_06_\`hostname -s\`,size=dbfssize"
    
    dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
    DBFS_DG_CD_07_\`hostname -s\` celldisk=CD_07_\`hostname -s\`,size=dbfssize"
    
    dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
    DBFS_DG_CD_08_\`hostname -s\` celldisk=CD_08_\`hostname -s\`,size=dbfssize"
    
    dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
    DBFS_DG_CD_09_\`hostname -s\` celldisk=CD_09_\`hostname -s\`,size=dbfssize"
    
    dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
    DBFS_DG_CD_10_\`hostname -s\` celldisk=CD_10_\`hostname -s\`,size=dbfssize"
    
    dcli -g cell_group -l celladmin "cellcli -e create griddisk            \
    DBFS_DG_CD_11_\`hostname -s\` celldisk=CD_11_\`hostname -s\`,size=dbfssize"
    

1.3.2.5 Adding Grid Disks to Oracle ASM Disk Groups in Oracle Exadata Database Machine X3-2 Eighth Rack

The grid disks created in "Creating Grid Disks in Oracle Exadata Database Machine X3-2 Eighth Rack" must be added as Oracle ASM disks to their corresponding, existing Oracle ASM disk groups. The following procedure describes how to add the grid disks to Oracle ASM disk groups:

  1. Validate the following:

    • No rebalance operation is currently running.

    • All Oracle ASM disks are active.

  2. Log in to the first database server as the owner who runs the Grid Infrastructure software.

  3. Set the environment to access the +ASM instance on the server.

  4. Log in to the ASM instance as the sysasm user using the following command:

    $ sqlplus / as sysasm
    
  5. Validate the current settings, as follows:

    SQL> set lines 100
    SQL> column attribute format a20
    SQL> column value format a20
    SQL> column diskgroup format a20
    SQL> SELECT att.name attribute, upper(att.value) value, dg.name diskgroup   \
    FROM V$ASM_ATTRIBUTE att, V$ASM_DISKGROUP  DG                               \
    WHERE DG.group_number = att.group_number AND                                \
    att.name LIKE '%appliance.mode%' ORDER BY att.group_number;
    

    The output should be similar to the following:

    ATTRIBUTE            VALUE                DISKGROUP
    -------------------- -------------------- --------------------
    appliance.mode       TRUE                 DATAC1
    appliance.mode       TRUE                 DBFS_DG
    appliance.mode       TRUE                 RECOC1
    
  6. Disable the appliance.mode attribute for any disk group that shows TRUE using the following commands:

    SQL> ALTER DISKGROUP data_diskgroup set attribute 'appliance.mode'='FALSE';
    SQL> ALTER DISKGROUP reco_diskgroup set attribute 'appliance.mode'='FALSE';
    SQL> ALTER DISKGROUP dbfs_dg_diskgroup set attribute 'appliance.mode'='FALSE';
    

    In the preceding commands, data_diskgroup, reco_diskgroup, and dbfs_dg_diskgroup are the names of the DATA, RECO and DBFS_DG disk groups, respectively.

  7. Add the grid disks to the Oracle ASM disk groups. Table 1-4 shows the commands to add the grid disks based on rack type and disk group. Adding the new disks requires a rebalance of the system.

    Table 1-4 Commands to Add Disk Groups When Extending an Oracle Exadata Database Machine X3-2 Eighth Rack

    Rack Commands

    High Capacity or High Performance Oracle Exadata Database Machine X3-2

    SQL> ALTER DISKGROUP data_diskgroup ADD DISK 'o/*/DATA_CD_0[6-9]*','    \
    o/*/DATA_CD_1[0-1]*' REBALANCE POWER 32;
     
    SQL> ALTER DISKGROUP reco_diskgroup ADD DISK 'o/*/RECO_CD_0[6-9]*','    \
    o/*/RECO_CD_1[0-1]*' REBALANCE POWER 32;
     
    SQL> ALTER DISKGROUP dbfs_dg_diskgroup ADD DISK '                       \
    o/*/DBFS_DG_CD_0[6-9]*',' o/*/DBFS_DG_CD_1[0-1]*' REBALANCE POWER 32; 
    

    The preceding commands return Diskgroup altered, if successful.

  8. (Optional) Monitor the current rebalance operation using the following command:

    SQL> SELECT * FROM  gv$asm_operation;
    
  9. Re-enable the appliance.mode attribute, if it was disabled in step 6, using the following commands:

    SQL> ALTER DISKGROUP data_diskgroup set attribute 'appliance.mode'='TRUE';
    SQL> ALTER DISKGROUP reco_diskgroup set attribute 'appliance.mode'='TRUE';
    SQL> ALTER DISKGROUP dbfs_dg_diskgroup set attribute 'appliance.mode'='TRUE';
    

1.3.2.6 Validating New Oracle Exadata Database Machine X3-2 Quarter Rack Configuration

After adding the grid disks to the Oracle ASM disk groups, validate the configuration. The following procedure describes how to validate the configuration:

  1. Log in as the root user on the first database server.

  2. Check the core count using the following command:

    # dcli -g db_group -l root 'dbmcli -e list dbserver attributes coreCount'
    
  3. Review the storage server configuration using the following command.

    # dcli -g cell_group -l celladmin 'cellcli -e list cell attributes eighthrack'
    

    The output should show FALSE.

  4. Review the appliance mode for each disk group using the following commands:

    SQL> set lines 100
    SQL> column attribute format a20
    SQL> column value format a20
    SQL> column diskgroup format a20
    SQL> SELECT att.name attribute, upper(att.value) value, dg.name diskgroup    \
    FROM V$ASM_ATTRIBUTE att, V$ASM_DISKGROUP  DG                                \
    WHERE DG.group_number =att.group_number AND                                  \
    att.name LIKE '%appliance.mode%' ORDER BY DG.group_number;
    
  5. Validate the number of Oracle ASM disks using the following command:

    SQL> SELECT g.name,d.failgroup,d.mode_status,count(*)                      \
    FROM v$asm_diskgroup g, v$asm_disk d                                       \
    WHERE d.group_number=g.group_number                                        \
    GROUP BY g.name,d.failgroup,d.mode_status;
    

1.3.3 Extending a Quarter Rack or Half Rack

Extending Oracle Exadata Database Machine from a quarter rack to a half rack, or from a half rack to a full rack, consists of adding new hardware to the rack. The following sections describe how to extend Oracle Exadata Database Machine with new servers:

Note:

It is possible to extend the hardware while the machine is online, and with no downtime. However, extreme care should be taken. In addition, patch application to existing switches and servers should be done before extending the hardware.

1.3.3.1 Removing the Doors

The following procedure describes how to remove the doors on Oracle Exadata Database Machine.

  1. Remove the Oracle Exadata Database Machine front and rear doors, as follows:

    1. Unlock the front and rear doors. The key is in the shipping kit.

    2. Open the doors.

    3. Detach the grounding straps connected to the doors by pressing down on the tabs of the grounding strap's quick-release connectors, and pull the straps from the frame.

    4. Lift the doors up and off their hinges.

    Figure 1-2 Removing the Rack Doors


    Description of callouts in Figure 1-2:

    1: Detaching the grounding cable.

    2: Top rear hinge.

    3: Bottom rear hinge.

    4: Top front hinge.

    5: Bottom front hinge.

  2. Remove the filler panels where the servers will be installed using a No. 2 screwdriver to remove the M6 screws. The number of screws depends on the type of filler panel. Save the screws for future use.

    Figure 1-3 Removing the Filler Panels


    Note:

    If you are replacing the filler panels, then do not remove the Duosert cage-nuts from the RETMA (Radio Electronics Television Manufacturers Association) rail holes.

1.3.3.2 Adding a Sun Datacenter InfiniBand Switch 36 Switch (Optional)

This procedure is necessary in the following cases:

  • Upgrading a rack with Sun Fire X4170 Oracle Database Servers to Oracle Exadata Database Machine Half Rack or Oracle Exadata Database Machine Full Rack.

  • Extending an Oracle Exadata Database Machine Quarter Rack or Oracle Exadata Database Machine Eighth Rack to another rack.

  • Extending an Oracle Exadata Database Machine X4-2 rack to another rack.

Note:

The steps in this procedure are specific to Oracle Exadata Database Machine. They are not the same as the steps in the Sun Datacenter InfiniBand Switch 36 manual.

See Also:

Oracle Exadata Database Machine System Overview to view the rack layout

  1. Unpack the Sun Datacenter InfiniBand Switch 36 switch components from the packing cartons. The following items should be in the packing cartons:

    • Sun Datacenter InfiniBand Switch 36 switch

    • Cable bracket and rackmount kit

    • Cable management bracket and cover

    • Two rack rail assemblies

    • Assortment of screws and captive nuts

    • Sun Datacenter InfiniBand Switch 36 documentation

    The service label procedure on top of the switch includes descriptions of the preceding items.

  2. X5 racks only: Remove the trough from the rack in RU1 and put the cables aside while installing the IB switch. The trough can be discarded.

  3. Install cage nuts in each rack rail in the appropriate holes.

  4. Attach the brackets with cutouts to the power supply side of the switch.

  5. Attach the C-brackets to the switch on the side of the InfiniBand ports.

  6. Slide the switch halfway into the rack from the front. Keep the switch as far to the left side of the rack as possible while pulling the two power cords through the C-bracket on the right side.

  7. Slide the server in rack location U2 out to the locked service position. This improves access to the rear of the switch during further assembly.

  8. Install the slide rails from the rear of the rack into the C-brackets on the switch, pushing them up to the rack rail.

  9. Attach an assembled cable arm bracket to the slide rail and using a No. 3 Phillips screwdriver, screw these together into the rack rail:

    1. Install the lower screw loosely with the cable arm bracket rotated 90 degrees downward. This allows better finger access to the screw.

    2. Rotate the cable arm bracket to the correct position.

    3. Install the upper screw.

    4. Tighten both screws.

    If available, a long-shaft screwdriver (16-inch / 400 mm) makes installation easier because the handle stays outside the rack and clear of the cabling.

  10. Push the switch completely into the rack from the front, routing the power cords through the cutout on the rail bracket.

  11. Secure the switch to the front rack rail with M6 16mm screws. Tighten the screws using the No. 3 Phillips screwdriver.

  12. Install the lower part of the cable management arm across the back of the switch.

  13. Connect the cables to the appropriate ports.

    See Also:

    Oracle Exadata Database Machine System Overview for information about InfiniBand networking cables

  14. Install the upper part of the cable management arm.

  15. Slide the server in rack location U2 back into the rack.

  16. Install power cords to the InfiniBand switch power supply slots on the front.

  17. Loosen the front screws to install the vented filler panel brackets. Tighten the screws, and snap on the vented filler panel in front of the switch.

1.3.3.3 Adding New Servers

Oracle Exadata Database Machine Quarter Rack can be upgraded to Oracle Exadata Database Machine Half Rack, and Oracle Exadata Database Machine Half Rack can be upgraded to Oracle Exadata Database Machine Full Rack. The upgrade process includes adding new servers, cables, and, when upgrading to Oracle Exadata Database Machine X2-2 Full Rack, a Sun Datacenter InfiniBand Switch 36 switch.

An Oracle Exadata Database Machine Quarter Rack to Oracle Exadata Database Machine Half Rack upgrade consists of installing the following:

  • Two Oracle Database servers

  • Four Exadata Storage Servers

  • One Sun Datacenter InfiniBand Switch 36 switch (for Oracle Exadata Database Machine X2-2 with Sun Fire X4170 M2 Oracle Database Server only)

  • Associated cables and hardware

An Oracle Exadata Database Machine Half Rack to Oracle Exadata Database Machine Full Rack upgrade consists of installing the following:

  • Four Oracle Database servers

  • Seven Exadata Storage Servers

  • Associated cables and hardware

Note:

  • If you are extending Oracle Exadata Database Machine X5-2, Oracle Exadata Database Machine X4-2, Oracle Exadata Database Machine X3-8 Full Rack, or Oracle Exadata Database Machine X2-2 (with X4170 and X4275 servers) half rack, then order the expansion kit that includes a Sun Datacenter InfiniBand Switch 36 switch.

  • The new servers need to be configured manually when extending Oracle Exadata Database Machine Quarter Rack to Oracle Exadata Database Machine Half Rack, or Oracle Exadata Database Machine Half Rack to Oracle Exadata Database Machine Full Rack. Refer to Task 2: Setting Up New Servers for additional information.

  • Always load equipment into the rack from the bottom up, so that the rack does not become top-heavy and tip over. Extend the rack anti-tip bar to prevent the rack from tipping during equipment installation.

The following tasks describe how to add the servers and cables:

Task 1   Preparing for Installation

The following procedure describes the pre-installation steps:

  1. Identify the rack unit where the server will be installed. Fill the first available unit, starting from the bottom of the rack.

  2. Remove and discard the trough, which attaches the cable harness when no server is installed in the unit.

  3. Remove and discard the solid filler.

Task 2   Installing the Rack Assembly

The following procedure describes how to install the rack assembly:

  1. Position a mounting bracket against the chassis so that the slide-rail lock is at the server front, and the five keyhole openings on the mounting bracket are aligned with the five locating pins on the side of the chassis.

  2. Orient the slide-rail assembly so that the ball-bearing track is forward and locked in place.

  3. Starting on either side of the rack, align the rear of the slide-rail assembly against the inside of the rear rack rail, and push until the assembly locks into place with an audible click.

    Figure 1-4 Locking the Slide-Rail Assembly Against the Inside of the Rear Rack Rail

  4. Align the front of the slide-rail assembly against the outside of the front rack rail, and push until the assembly locks into place and you hear the click.

  5. Repeat steps 2 to 4 on the other side on the rack.

Task 3   Installing the Server

WARNING:

  • Installing a server requires a minimum of two people or a lift because of the weight of each server. Attempting this procedure alone can result in equipment damage, personal injury, or both.

  • Always load equipment into the rack from the bottom up, so that the rack does not become top-heavy and tip over. Extend the rack anti-tip bar to prevent the rack from tipping during equipment installation.

The following procedure describes how to install the server:

  1. Read the service label on the top cover of the server before installing a server into the rack.

  2. Push the server into the slide rail assembly:

    1. Push the slide rails into the slide rail assemblies as far as possible.

    2. Position the server so the rear ends of the mounting brackets are aligned with the slide rail assemblies mounted in the equipment rack.

      Figure 1-5 Aligning the Rear Ends of the Mounting Brackets with the Slide Rail Assemblies in the Rack



      The callouts in the preceding image highlight the following:

      1: Mounting bracket inserted into slide rail

      2: Slide-rail release lever

    3. Insert the mounting brackets into the slide rails, and push the server into the rack until the mounting brackets encounter the slide rail stops, approximately 30 cm (12 inches).

    4. Simultaneously push down and hold the slide rail release levers on each mounting bracket while pushing the server into the rack.

    5. Continue pushing until the slide rail locks on the front of the mounting brackets engage the slide rail assemblies, and you hear the click.

  3. Cable the new server as described in "Cabling Exadata Storage Servers".

Note:

Oracle recommends that two people push the servers into the rack: one person to move the server in and out of the rack, and another person to watch the cables and CMA.

1.3.3.4 Cabling Database Servers

After the new database servers are installed, they need to be cabled to the existing equipment. The following procedure describes how to cable the new equipment in the rack. The images shown in the procedure are of a Sun Fire X4170 M2 Oracle Database Server.

Note:

  • The existing cable connections in the rack do not change.

  • The blue cables connect to Oracle Database servers, and the black cables connect to Exadata Storage Servers. These network cables are for the NET0 Ethernet interface port.

  • Attach and route the management cables on the CMA and rear panel one server at a time. Do not slide out more than one server at a time.

  • Start from the bottom of the rack, and work upward. Route the cables through the CMA with the dongle on the top and power cables on the bottom.

  • Longer hook and loop straps are needed when cabling three CAT5e cables or two TwinAx cables.

  1. Connect the CAT5e cables, AC power cables, and USB to their respective ports on the rear of the server. Ensure the flat side of the dongle is flush against the CMA inner rail.

    Figure 1-6 Cables at the Rear of the Server

  2. Adjust the green cable management arm (CMA) brackets.

    Figure 1-7 Cable Management Arm (CMA) Brackets


    Description of the CMA callouts in the preceding image:

    1. Connector A

    2. Front slide bar

    3. Velcro straps (6)

    4. Connector B

    5. Connector C

    6. Connector D

    7. Slide-rail latching bracket (used with connector D)

    8. Rear slide bar

    9. Cable covers

    10. Cable covers

  3. Attach the CMA to the server.

  4. Route the CAT5e and power cables through the wire clip.

    Figure 1-8 Cables Routed Through the Cable Management Arm

  5. Bend the CAT5e and power cables to enter the CMA, while adhering to the bend radius minimums.

    See Also:

    "Reviewing the Cable Management Arm Guidelines" for the bend radius minimums

  6. Secure the CAT5e and power cables under the cable clasps.

    Figure 1-9 Cables Secured under the Cable Clasps

  7. Route the cables through the CMA, and secure them with hook and loop straps at equal intervals.

    Figure 1-10 Cables Secured with Hook and Loop Straps at Regular Intervals

  8. Connect the InfiniBand or TwinAx cables with the initial bend resting on the CMA. The TwinAx cables are for client access to the database servers.

    Figure 1-11 InfiniBand or TwinAx Cables Positioned on the CMA

  9. Secure the InfiniBand or TwinAx cables with hook and loop straps at equal intervals.

    Figure 1-12 InfiniBand or TwinAx Cables Secured with Hook and Loop Straps at Regular Intervals

  10. Route the fiber core cables.

  11. Rest the InfiniBand cables over the green clasp on the CMA.

  12. Attach the red ILOM cables to the database server.

  13. Attach the network cables to the Oracle Database server.

  14. Attach the InfiniBand cables from Oracle Database server to the Sun Datacenter InfiniBand Switch 36 switches.

  15. Connect the orange Ethernet cable to the KVM switch.

  16. Connect the red and blue Ethernet cables to the Cisco switch.

  17. Verify operation of the slide rails and CMA for each server, as follows:

    Note:

    Oracle recommends that two people do this step. One person to move the server in and out of the rack, and another person to observe the cables and CMA.

    1. Slowly pull the server out of the rack until the slide rails reach their stops.

    2. Inspect the attached cables for any binding or kinks.

    3. Verify the CMA extends fully from the slide rails.

  18. Push the server back into the rack, as follows:

    1. Release the two sets of slide rail stops.

    2. Push in both levers simultaneously, and slide the server into the rack. The first set of stops consists of levers located on the inside of each slide rail, just behind the back panel of the server. The levers are labeled PUSH. The server slides approximately 46 cm (18 inches) and stops.

    3. Verify the cables and CMA retract without binding.

    4. Simultaneously push or pull both slide rail release buttons, and push the server completely into the rack until both slide rails engage. The second set of stops consists of the slide rail release buttons located near the front of each mounting bracket.

  19. Dress the cables, and then tie off the cables with the straps. Oracle recommends that you dress the InfiniBand cables in bundles of eight or fewer.

  20. Check cable travel by sliding each server out and back fully to ensure that the cables are not binding or catching.

  21. Repeat the procedure for the rest of the servers.

  22. Connect the power cables to the power distribution units (PDUs). Ensure the breaker switches are in the OFF position before connecting the power cables. Do not plug the power cables into the facility receptacles at this time.

1.3.3.5 Cabling Exadata Storage Servers

After the new Exadata Storage Servers are installed, they need to be cabled to the existing equipment. The following procedure describes how to cable the new equipment in the rack.

Note:

  • The existing cable connections in the rack do not change.

  • The blue cables connect to Oracle Database servers, and the black cables connect to Exadata Storage Servers. These network cables are for the NET0 Ethernet interface port.

  • Attach and route the management cables on the CMA and rear panel one server at a time. Do not slide out more than one server at a time.

  • Start from the bottom of the rack, and work upward.

  • Longer hook and loop straps are needed when cabling three CAT5e cables or two TwinAx cables.

  1. Attach a CMA to the server.

  2. Insert the cables into their ports through the hook and loop straps, then route the cables into the CMA in this order:

    1. Power

    2. Ethernet

    3. InfiniBand

    Figure 1-13 Rear of the Server Showing Power and Network Cables

  3. Route the cables through the CMA and secure them with hook and loop straps on both sides of each bend in the CMA.

    Figure 1-14 Cables Routed Through the CMA and Secured with Hook and Loop Straps

  4. Close the crossbar covers to secure the cables in the straightaway.

  5. Verify operation of the slide rails and the CMA for each server:

    Note:

    Oracle recommends that two people do this step: one person to move the server in and out of the rack, and another person to watch the cables and the CMA.

    1. Slowly pull the server out of the rack until the slide rails reach their stops.

    2. Inspect the attached cables for any binding or kinks.

    3. Verify that the CMA extends fully from the slide rails.

  6. Push the server back into the rack:

    1. Release the two sets of slide rail stops.

    2. Locate the levers on the inside of each slide rail, just behind the back panel of the server. They are labeled PUSH.

    3. Simultaneously push in both levers and slide the server into the rack, until it stops after approximately 46 cm (18 inches).

    4. Verify that the cables and CMA retract without binding.

    5. Locate the slide rail release buttons near the front of each mounting bracket.

    6. Simultaneously push in both slide rail release buttons and slide the server completely into the rack, until both slide rails engage.

  7. Dress the cables, and then tie off the cables with the straps. Oracle recommends that you dress the InfiniBand cables in bundles of eight or fewer.

  8. Slide each server out and back fully to ensure that the cables are not binding or catching.

  9. Repeat the procedure for all servers.

  10. Connect the power cables to the power distribution units (PDUs). Ensure the breaker switches are in the OFF position before connecting the power cables. Do not plug the power cables into the facility receptacles now.

1.3.3.6 Closing the Rack

The following procedure describes how to close the rack after installing new equipment.

  1. Replace the rack front and rear doors as follows:

    1. Retrieve the doors, and place them carefully on the door hinges.

    2. Connect the front and rear door grounding strap to the frame.

    3. Close the doors.

    4. (Optional) Lock the doors. The keys are in the shipping kit.

  2. (Optional) Replace the side panels, if they were removed for the upgrade, as follows:

    1. Lift each side panel up and onto the side of the rack. The top of the rack should support the weight of the side panel. Ensure the panel fasteners line up with the grooves in the rack frame.

    2. Turn each side panel fastener one-quarter turn clockwise using the side panel removal tool. Turn the fasteners next to the panel lock clockwise. There are 10 fasteners per side panel.

    3. (Optional) Lock each side panel. The key is in the shipping kit. The locks are located on the bottom, center of the side panels.

    4. Connect the grounding straps to the side panels.

After closing the rack, proceed to "Configuring the New Hardware" to configure the new hardware.

1.3.4 Extending a Rack by Adding Another Rack

Extending Oracle Exadata Database Machine by adding another rack consists of cabling and configuring the racks together. Racks can be cabled together with no downtime. During the cabling procedure, the following should be noted:

  • There is some performance degradation while cabling the racks together. This degradation results from reduced network bandwidth, and the data retransmission due to packet loss when a cable is unplugged.

  • The environment is not a high-availability environment during the procedure because one leaf switch must be powered off at a time. All traffic goes through the remaining leaf switch.

  • Only the existing rack is operational, and any new rack that is added is powered down.

  • The software running on the systems must be able to tolerate InfiniBand restarts.

  • It is assumed that Oracle Exadata Database Machine Half Racks have three InfiniBand switches already installed.

  • The new racks have been configured with the appropriate IP addresses to be migrated into the expanded system prior to any cabling, and there are no duplicate IP addresses.

  • The existing spine switch is set to priority 10 during the cabling procedure. This setting gives the spine switch a higher priority than any other switch in the fabric, so it is the first to take the Subnet Manager Master role whenever a new Subnet Manager Master is set during the cabling procedure. The switch commands involved are consolidated in the sketch that follows.
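
The following is a minimal sketch that consolidates the switch commands used for this priority change; it assumes you are logged in to the InfiniBand switch command-line interface, and the same commands appear step by step in the procedures that follow.

    # On any InfiniBand switch: confirm which switch holds the Subnet Manager Master role
    getmaster

    # On the spine switch: stop Subnet Manager, raise its priority, and restart it
    disablesm
    setsmpriority 10
    enablesm

    # Confirm the Subnet Manager Master is now running on the spine switch
    getmaster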

The following sections describe how to extend Oracle Exadata Database Machine with another rack:

1.3.4.1 Cabling Two Racks Together

The following procedure describes how to cable two racks together. This procedure assumes that the racks are adjacent to each other. In the procedure, the existing rack is R1, and the new rack is R2.

  1. Set the priority of the current, active Subnet Manager Master to 10 on the spine switch, as follows:

    1. Log in to any InfiniBand switch on the active system.

    2. Use the getmaster command to determine that the Subnet Manager Master is running on the spine switch. If it is not, then follow the procedure in Oracle Exadata Database Machine Installation and Configuration Guide.

    3. Log in to the spine switch.

    4. Use the disablesm command to stop Subnet Manager.

    5. Use the setsmpriority 10 command to set the priority to 10.

    6. Use the enablesm command to restart Subnet Manager.

    7. Repeat step 1.b to ensure the Subnet Manager Master is running on the spine switch.

  2. Ensure the new rack is near the existing rack. The InfiniBand cables must be able to reach the servers in each rack.

  3. Completely shut down the new rack (R2).

  4. Cable the two leaf switches R2 IB2 and R2 IB3 in the new rack according to Table 2-2. Note that you must first remove the seven existing inter-switch connections between the two leaf switches, as well as the two connections between the leaf switches and the spine switch, in the new rack R2 only, not in the existing rack R1.

  5. Verify both InfiniBand interfaces are up on all database nodes and storage cells. You can do this by running the ibstat command on each node and verifying both interfaces are up.

  6. Power off leaf switch R1 IB2. This causes all the database servers and Exadata Storage Servers to fail over their InfiniBand traffic to R1 IB3.

  7. Disconnect all seven inter-switch links between R1 IB2 and R1 IB3, as well as the one connection between R1 IB2 and the spine switch R1 IB1.

  8. Cable leaf switch R1 IB2 according to Table 2-1.

  9. Power on leaf switch R1 IB2.

  10. Wait for three minutes for R1 IB2 to become completely operational.

    To check the switch, log in to the switch and run the ibswitches command. The output should show three switches, R1 IB1, R1 IB2, and R1 IB3.

  11. Verify both InfiniBand interfaces are up on all database nodes and storage cells. You can do this by running the ibstat command on each node and verifying both interfaces are up.

  12. Power off leaf switch R1 IB3. This causes all the database servers and Exadata Storage Servers to fail over their InfiniBand traffic to R1 IB2.

  13. Disconnect the one connection between R1 IB3 and the spine switch R1 IB1.

  14. Cable leaf switch R1 IB3 according to Table 2-1.

  15. Power on leaf switch R1 IB3.

  16. Wait for three minutes for R1 IB3 to become completely operational.

    To check the switch, log in to the switch and run the ibswitches command. The output should show three switches, R1 IB1, R1 IB2, and R1 IB3.

  17. Power on all the InfiniBand switches in R2.

  18. Wait for three minutes for the switches to become completely operational.

    To check the switch, log in to the switch and run the ibswitches command. The output should show six switches, R1 IB1, R1 IB2, R1 IB3, R2 IB1, R2 IB2, and R2 IB3.

  19. Ensure the Subnet Manager Master is running on R1 IB1 by running the getmaster command from any switch.

  20. Power on all servers in R2.

  21. Log in to spine switch R1 IB1, and lower its priority to 8 as follows:

    1. Use the disablesm command to stop Subnet Manager.

    2. Use the setsmpriority 8 command to set the priority to 8.

    3. Use the enablesm command to restart Subnet Manager.

  22. Ensure Subnet Manager Master is running on one of the spine switches.

After cabling the racks together, proceed to Configuring the New Hardware to configure the racks.
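
As a quick post-cabling check, the interface and switch verifications from the preceding steps can be consolidated into two commands. This is only a sketch: it assumes the InfiniBand diagnostic tools are installed and that an all_group file lists every database server and storage cell in both racks.

    # Count the switches visible on the fabric (run on any switch or server);
    # expect six switches for two racks: one spine and two leaf switches per rack
    ibswitches | wc -l

    # From the first database server, confirm that both InfiniBand ports are
    # Active on every database server and storage cell
    dcli -g /root/all_group -l root "ibstat | grep -c 'State: Active'"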

1.3.4.2 Cabling Several Racks Together

The following procedure describes how to cable several racks together. This procedure assumes that the racks are adjacent to each other. In the procedure, the existing racks are R1, R2, ... Rn, the new rack is Rn+1, and the Subnet Manager Master is running on R1 IB1.

  1. Set the priority of the current, active Subnet Manager Master to 10 on the spine switch, as follows:

    1. Log in to any InfiniBand switch on the active system.

    2. Use the getmaster command to determine that the Subnet Manager Master is running on the spine switch. If it is not, then follow the procedure in Oracle Exadata Database Machine Installation and Configuration Guide.

    3. Log in to the spine switch.

    4. Use the disablesm command to stop Subnet Manager.

    5. Use the setsmpriority 10 command to set the priority to 10.

    6. Use the enablesm command to restart Subnet Manager.

    7. Repeat step 1.b to ensure the Subnet Manager Master is running on the spine switch.

  2. Ensure the new rack is near the existing rack. The InfiniBand cables must be able to reach the servers in each rack.

  3. Completely shut down the new rack (Rn+1).

  4. Cable the leaf switch in the new rack according to the appropriate table in Multi-Rack Cabling Tables. For example, if rack Rn+1 is R4, then use Table 2-9.

  5. Complete the following procedure for each of the original racks:

    1. Power off leaf switch Rx IB2. This causes all the database servers and Exadata Storage Servers to fail over their InfiniBand traffic to Rx IB3.

    2. Cable leaf switch Rx IB2 according to Multi-Rack Cabling Tables.

    3. Power on leaf switch Rx IB2.

    4. Wait for three minutes for Rx IB2 to become completely operational.

      To check the switch, log in to the switch and run the ibswitches command. The output should show n*3 switches for IB1, IB2, and IB3 in racks R1, R2, ... Rn.

    5. Power off leaf switch Rx IB3. This causes all the database servers and Exadata Storage Servers to fail over their InfiniBand traffic to Rx IB2.

    6. Cable leaf switch Rx IB3 according to Multi-Rack Cabling Tables.

    7. Power on leaf switch Rx IB3.

    8. Wait for three minutes for Rx IB3 to become completely operational.

      To check the switch, log in to the switch and run the ibswitches command. The output should show n*3 switches for IB1, IB2, and IB3 in racks R1, R2, ... Rn.

    All racks should now be rewired according to Multi-Rack Cabling Tables.

  6. Power on all the InfiniBand switches in Rn+1.

  7. Wait for three minutes for the switches to become completely operational.

    To check the switch, log in to the switch and run the ibswitches command. The output should show (n+1)*3 switches for IB1, IB2, and IB3 in racks R1, R2, ... Rn+1.

  8. Ensure the Subnet Manager Master is running on R1 IB1 by running the getmaster command from any switch.

  9. Power on all servers in Rn+1.

  10. Log in to spine switch R1 IB1, and lower its priority to 8 as follows:

    1. Use the disablesm command to stop Subnet Manager.

    2. Use the setsmpriority 8 command to set the priority to 8.

    3. Use the enablesm command to restart Subnet Manager.

  11. Ensure Subnet Manager Master is running on one of the spine switches using the getmaster command from any switch.

  12. Ensure Subnet Manager is running on every spine switch using the following command from any switch:

    ibdiagnet -r 
    

    Each spine switch should show as running in the Summary Fabric SM-state-priority section of the output. If a spine switch is not running, then log in to the switch and enable Subnet Manager using the enablesm command.

  13. If there are now four or more racks, then log in to the leaf switches in each rack and disable Subnet Manager using the disablesm command.
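
If there are many leaf switches, the disablesm calls can be scripted from a database server. The following is only a sketch: it assumes SSH access to the switches as root and a hypothetical ib_leaf_group file that lists the leaf switch host names, one per line.

    # Hypothetical helper: disable Subnet Manager on every leaf switch
    # (only applies when there are four or more racks)
    for sw in $(cat /root/ib_leaf_group); do
        ssh root@${sw} disablesm
    done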

1.4 Configuring the New Hardware

This section contains the following tasks needed to configure the new hardware:

Note:

The new and existing racks must be at the same patch level for Exadata Storage Servers and database servers, including the operating system. Refer to "Reviewing Release and Patch Levels" for additional information.
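
A quick way to compare software levels before proceeding is to query the image version on the existing and new servers. This is only a sketch; the group file names are assumptions that match the group files created later in Task 4: Setting User Equivalence.

    # Compare Exadata software versions on the existing and new storage servers
    dcli -g /root/old_group_files/cell_group -l root "imageinfo -ver"
    dcli -g /root/new_group_files/cell_group -l root "imageinfo -ver"

    # Compare the database server image versions the same way
    dcli -g /root/old_group_files/dbs_group -l root "imageinfo -ver"
    dcli -g /root/new_group_files/dbs_group -l root "imageinfo -ver"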

1.4.1 Task 1: Changing the Interface Names

Earlier releases of Oracle Exadata Database Machine X2-2 (with X4170 and X4275 servers) used BOND0 and BOND1 as the names for the bonded InfiniBand and bonded Ethernet client networks, respectively. In the current release, BONDIB0 and BONDETH0 are used for the bonded InfiniBand and bonded Ethernet client networks. If you are adding new servers to an existing Oracle Exadata Database Machine X2-2 (with X4170 and X4275 servers), then ensure the database servers use the same names for bonded configuration. You can either change the new database servers to match the existing server interface names, or change the existing server interface names and Oracle Cluster Registry (OCR) configuration to match the new servers.
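
To see which bonding names are currently in use before deciding which side to change, you can list the bonded interfaces on each database server. This is only a sketch, and it assumes a dbs_group file that lists the database servers to check.

    # List the bonded interface names (for example, bond0/bond1 or bondib0/bondeth0)
    # on each database server
    dcli -g /opt/oracle.SupportTools/onecommand/dbs_group -l root "ls /proc/net/bonding"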

Do the following after changing the interface names:

  1. Edit the entries in /etc/sysctl.conf file on the database servers so that the entries for the InfiniBand network match. The following is an example of the file entries before editing. One set of entries must be changed to match the other set.

    Found in X2 node
    
    net.ipv4.neigh.bondib0.locktime = 0
    net.ipv4.conf.bondib0.arp_ignore = 1
    net.ipv4.conf.bondib0.arp_accept = 1 
    net.ipv4.neigh.bondib0.base_reachable_time_ms = 10000
    net.ipv4.neigh.bondib0.delay_first_probe_time = 1
    
    Found in V2 node
    
    net.ipv4.conf.bond0.arp_accept=1
    net.ipv4.neigh.bond0.base_reachable_time_ms=10000
    net.ipv4.neigh.bond0.delay_first_probe_time=1
    
  2. Save the changes to the sysctl.conf file.

  3. Use the oifcfg utility to change the OCR configuration, if the new names differ from what is currently in OCR; a minimal example sketch appears at the end of this task. The interface names for Exadata Storage Servers do not have to be changed.

  4. Continue configuring the new hardware, as follows:

See Also:

Oracle Exadata Database Machine Maintenance Guide for information about changing the InfiniBand network information
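
The following is a minimal oifcfg sketch for step 3, run as the Grid Infrastructure owner from one database server. The interface names and the 192.168.10.0 subnet are examples only; use the values reported by oifcfg getif for your cluster.

    # Show the interfaces currently registered in the OCR
    $ oifcfg getif

    # Register the new bonded InfiniBand interface name and remove the old one
    $ oifcfg setif -global bondib0/192.168.10.0:cluster_interconnect
    $ oifcfg delif -global bond0/192.168.10.0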

1.4.2 Task 2: Setting Up New Servers

New servers need to be configured when extending Oracle Exadata Database Machine Quarter Rack or Oracle Exadata Database Machine Half Rack.

The new servers do not have any configuration information, and you cannot use Oracle Enterprise Manager Grid Control to configure them. The servers are configured using the Oracle Exadata Deployment Assistant or manually.

Configuring Servers Using Oracle Exadata Deployment Assistant

Note:

In order to configure the servers with Oracle Exadata Deployment Assistant, the new server information must be entered in Oracle Exadata Deployment Assistant, and configuration files generated. Refer to "Preparing the Network Configuration" for additional information.

  1. Download the latest release of Oracle Exadata Deployment Assistant listed in My Oracle Support note 888828.1.

  2. Enter the new server information in Oracle Exadata Deployment Assistant. Do not include information for the existing rack.

    Note:

    • When extending an existing rack that does not have Sun Server X4-2 Oracle Database Servers or later, be sure to deselect the active bonding option for the InfiniBand network so the new database servers are configured with active-passive bonded interfaces.

    • When extending an existing Oracle Exadata Database Machine X4-2 or later with active-active bonding, select the active bonding option to configure the new database servers for active-active bonding.

  3. Generate the configuration files.

  4. Prepare the servers as follows, starting with the first database server of the new servers:

    1. Configure the servers as described in "Preparing the Servers" in chapter 2 of Oracle Exadata Storage Server Software User's Guide.

      Note:

      Oracle Exadata Deployment Assistant checks the performance level of Exadata Storage Servers so it is not necessary to check them using the CellCLI CALIBRATE command at this time.

    2. Create the cell disks and grid disks as described in "Configuring Cells, Cell Disks, and Grid Disks with CellCLI" in chapter 2 of Oracle Exadata Storage Server Software User's Guide.

    3. Create the flash cache and flash log as described in "Creating Flash Cache and Flash Grid Disks" in chapter 2 of Oracle Exadata Storage Server Software User's Guide.

      Note:

      When creating the flash cache, enable write back flash cache.

  5. Ensure the InfiniBand and bonded client Ethernet interface names are the same on the new database servers as on the existing database servers.

  6. If you are using the same, earlier-style bonding names, such as BOND0, for the new database servers, then update the /opt/oracle.cellos/cell.conf file to reflect the correct bond names.

    Note:

    If the existing servers use BONDIB0 as the InfiniBand bonding name, then this step can be skipped.

  7. Install Oracle Exadata Deployment Assistant on the first new database server.

    See Also:

    My Oracle Support note 888828.1 for information about Oracle Exadata Deployment Assistant

  8. Copy the configuration files to the /opt/oracle.SupportTools/onecommand directory on the first of the new database servers. These are the configuration files generated from the information entered in step 2.

  9. Run Oracle Exadata Deployment Assistant up to, but not including, the CreateGridDisks step, and then run the SetupCellEmailAlerts step and the Automatic Service Request steps.

    Note:

    • The Oracle Exadata Deployment Assistant ValidateEnv step may display an error message about missing files, pXX.zip. This is expected behavior because the files are not used for this procedure. You can ignore the error message.

    • When using capacity-on-demand, Oracle Exadata Deployment Assistant has the SetUpCapacityOnDemand step. This step uses the resourcecontrol command to set up the cores correctly.

  10. Configure the Exadata Storage Servers, cell disks and grid disks as described in "Configuring Cells, Cell Disks and Grid Disks with CellCLI" in chapter 2 of Oracle Exadata Storage Server Software User's Guide.

    Note:

    Use the data collected from the existing system, as described in "Obtaining Current Configuration Information" to determine the grid disk names and sizes.

  11. Reclaim disk space as described in Oracle Exadata Database Machine Installation and Configuration Guide.

  12. Verify the time is the same on the new servers as on the existing servers. This check is performed for Exadata Storage Servers and database servers.

  13. Ensure the NTP settings are the same on the new servers as on the existing servers. This check is performed for Exadata Storage Servers and database servers.

  14. Configure HugePages on the new servers to match the existing servers. (A dcli comparison sketch for this step and the next follows this procedure.)

  15. Ensure the values in the /etc/security/limits.conf file for the new database servers match the existing database servers.

  16. Go to Task 4: Setting User Equivalence to continue the hardware configuration.
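
For steps 14 and 15, a quick way to compare settings across servers is a dcli sweep such as the following sketch; it assumes a dbs_group file that lists both the existing and the new database servers.

    # Compare the HugePages kernel setting and current allocation
    dcli -g dbs_group -l root "grep -i nr_hugepages /etc/sysctl.conf"
    dcli -g dbs_group -l root "grep HugePages_Total /proc/meminfo"

    # Compare the resource limits configured for the database owner
    dcli -g dbs_group -l root "grep -E 'soft|hard' /etc/security/limits.conf"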

Configuring Servers Manually

  1. Prepare the servers using the procedure described in "Preparing the Servers" in chapter 2 of Oracle Exadata Storage Server Software User's Guide.

  2. Ensure the InfiniBand and bonded client Ethernet interface names are the same on the new database servers as on the existing database servers.

  3. Configure the Exadata Storage Servers, cell disks and grid disks as described in "Configuring Cells, Cell Disks and Grid Disks with CellCLI" in chapter 2 of Oracle Exadata Storage Server Software User's Guide.

  4. Configure the database servers as described in "Setting Up Configuration Files for a Database Server Host" in chapter 2 of Oracle Exadata Storage Server Software User's Guide.

  5. Reclaim disk space as described in Oracle Exadata Database Machine Installation and Configuration Guide.

  6. Verify the time is the same on the new servers as on the existing servers. This check is performed for Exadata Storage Servers and database servers.

  7. Ensure the NTP settings are the same on the new servers as on the existing servers. This check is performed for Exadata Storage Servers and database servers.

  8. Configure HugePages on the new servers to match the existing servers.

  9. Go to Task 4: Setting User Equivalence to continue the hardware configuration.

1.4.3 Task 3: Setting up a New Rack

A new rack is configured at the factory. However, it is necessary to set up the network and configuration files for use with the existing rack. Use the following procedure to configure the rack:

  1. Check the Exadata Storage Servers as described in Oracle Exadata Database Machine Installation and Configuration Guide.

  2. Check the database servers as described in Oracle Exadata Database Machine Installation and Configuration Guide.

  3. Perform the checks as described in Oracle Exadata Database Machine Installation and Configuration Guide.

  4. Verify the InfiniBand network as described in Oracle Exadata Database Machine Installation and Configuration Guide.

  5. Perform initial configuration as described in Oracle Exadata Database Machine Installation and Configuration Guide.

  6. Reclaim disk space as described in Oracle Exadata Database Machine Installation and Configuration Guide.

  7. Verify the time is the same on the new servers as on the existing servers. This check is performed for Exadata Storage Servers and database servers.

  8. Ensure the NTP settings are the same on the new servers as on the existing servers. This check is performed for Exadata Storage Servers and database servers.

  9. Configure HugePages on the new servers to match the existing servers.

  10. Ensure the InfiniBand and bonded client Ethernet interface names on the new database servers match the existing database servers.

  11. Configure the rack as described in Oracle Exadata Database Machine Installation and Configuration Guide. You can use either the Oracle Exadata Deployment Assistant or Oracle Enterprise Manager Grid Control to configure the rack.

    Note:

    • Only run the Oracle Exadata Deployment Assistant up to the CreateGridDisks step, then configure Exadata Storage Servers as described in "Configuring Cells, Cell Disks, and Grid Disks with CellCLI" in Oracle Exadata Storage Server Software User's Guide.

    • When adding servers with 3 TB High Capacity (HC) disks to existing servers with 2 TB disks, Oracle recommends following the procedure in My Oracle Support note 1476336.1 to properly define the grid disks and disk groups. At this point of setting up the rack, it is only necessary to define the grid disks. The disk groups are created after the cluster has been extended onto the new nodes.

    • If the existing Exadata Cells have High Performance (HP) disks and you are adding Exadata Cells with High Capacity (HC) disks or the existing Exadata Cells have HC disks and you are adding Exadata Cell HP disks, then you must place the new disks in new disk groups. It is not permitted to mix HP and HC disks within the same disk group.

  12. Go to Task 4: Setting User Equivalence to continue the hardware configuration.

1.4.4 Task 4: Setting User Equivalence

User equivalence can be configured to include all servers once the servers are online. This procedure must be done before running the post-cabling utilities. The following procedure describes how to set user equivalence:

  1. Log in to each new server manually using SSH to verify that each server can accept logins and that the passwords are correct.

  2. Modify the dbs_group and cell_group files on all servers to include all servers as follows:

    1. Create the new directories on the first existing database server using the following commands:

      # mkdir /root/new_group_files
      # mkdir /root/old_group_files
      # mkdir /root/group_files
      
    2. Copy the group files for the new servers to the /root/new_group_files directory.

    3. Copy the group files for the existing servers to the /root/old_group_files directory.

    4. Copy the group files for the existing servers to the /root/group_files directory.

    5. Update the group files with existing and new servers using the following commands:

      cat /root/new_group_files/dbs_group >> /root/group_files/dbs_group
      cat /root/new_group_files/cell_group >> /root/group_files/cell_group
      cat /root/new_group_files/all_group >> /root/group_files/all_group
      cat /root/new_group_files/dbs_ib_group >> /root/group_files/dbs_ib_group
      cat /root/new_group_files/cell_ib_group >> /root/group_files/cell_ib_group
      cat /root/new_group_files/all_ib_group >> /root/group_files/all_ib_group
      
    6. Make the updated group files the default group files using the following commands. The updated group files contain the existing and new servers.

      cp /root/group_files/* /root
      cp /root/group_files/* /opt/oracle.SupportTools/onecommand
      
    7. Put a copy of the updated group files in the root user, oracle user, and Grid Infrastructure user home directories, and ensure that the files are owned by the respective users.

  3. Modify the /etc/hosts file on the existing and new database servers to include the existing InfiniBand IP addresses for the database servers and Exadata Storage Servers. The existing and new priv_ib_hosts files can be used for this step.

    Note:

    Do not copy the /etc/hosts file from one server to the other servers. Edit each host's file.

  4. Run the setssh-Linux.sh script as the root user on one of the existing database servers to configure user equivalence for all servers using the following command. Oracle recommends using the first database server.

    # /opt/oracle.SupportTools/onecommand/setssh-Linux.sh  -s -c N -h \
      /path_to_file/all_group -n N 
    

    In the preceding command, path_to_file is the directory path for the all_group file containing the names for the existing and new servers.

    Note:

    For Oracle Exadata Database Machine X2-2 (with X4170 and X4275 servers) systems, use the setssh.sh command to configure user equivalence.

    The command line options for the setssh.sh command differ from the setssh-Linux.sh command. Run setssh.sh without parameters to see the proper syntax.

  5. Add the known hosts using InfiniBand with the following command. This requires that all database servers are accessible by way of their InfiniBand interfaces.

    # /opt/oracle.SupportTools/onecommand/setssh-Linux.sh -s -c N -h    \
      /path_to_file/all_ib_group -n N -p password
    
  6. Verify equivalence using the following commands:

    # dcli -g all_group -l root date
    # dcli -g all_ib_group -l root date
    
  7. Run the setssh-Linux.sh script as the oracle user on one of the existing database servers to configure user equivalence for all servers using the following command. Oracle recommends using the first database server. If there are separate owners for the Grid Infrastructure, then run a similar command for each owner.

    $ /opt/oracle.SupportTools/onecommand/setssh-Linux.sh -s -c N -h   \
      /path_to_file/dbs_group -n N
    

    In the preceding command, path_to_file is the directory path for the dbs_group file. The file contains the names for the existing and new servers.

    Note:

    • For Oracle Exadata Database Machine X2-2 (with X4170 and X4275 servers) systems, use the setssh.sh command to configure user equivalence.

    • It may be necessary to temporarily change the permissions on the setssh-Linux.sh file to 755 for this step. Change the permissions back to the original settings after completing this step.

  8. Add the known hosts using InfiniBand with the following command. This requires that all database servers are accessible by way of their InfiniBand interfaces.

    $ /opt/oracle.SupportTools/onecommand/setssh-Linux.sh -s -c N -h    \
       /root/group_files/dbs_ib_group -n N
    
  9. Verify equivalence using the following commands:

    $ dcli -g dbs_group -l oracle date
    $ dcli -g dbs_ib_group -l oracle date
    

    If there is a Grid Infrastructure user, then also run the preceding commands for that user, substituting the Grid Infrastructure user name for the oracle user.

1.4.5 Task 5: Starting the Cluster

The following procedure describes how to start the cluster if it was stopped earlier for cabling an additional rack.

Note:

  • Oracle recommends you start one server, and let it come up fully before starting Oracle Clusterware on the rest of the servers.

  • It is not necessary to stop a cluster when extending a Oracle Exadata Database Machine Half Rack to Oracle Exadata Database Machine Full Rack, or Oracle Exadata Database Machine Quarter Rack to Oracle Exadata Database Machine Half Rack or Oracle Exadata Database Machine Full Rack.

  1. Log in as the root user on the original cluster.

  2. Start one server of the cluster using the following command:

    # GRID_HOME/grid/bin/crsctl start cluster
    
  3. Check the status of the server using the following command:

    GRID_HOME/grid/bin/crsctl stat res -t
    

    Run the preceding command until it shows the first server has started.

  4. Start the other servers in the cluster using the following command:

    # GRID_HOME/grid/bin/crsctl start cluster -all
    
  5. Check the status of the servers using the following command:

    GRID_HOME/grid/bin/crsctl stat res -t
    

    It may take several minutes for all servers to start and join the cluster.

1.4.6 Task 6: Adding Grid Disks to Oracle ASM Disk Groups

Grid disks can be added to Oracle ASM disk groups before or after the new servers are added to the grid infrastructure. The advantage of adding the grid disks before adding the new servers is that the rebalance operation can start earlier. The advantage of adding the grid disks after is that the rebalance operation can be done on the new servers so less load is placed on the existing servers.
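
To confirm the grid disk names and sizes that the new cells must match, you can query an existing cell before adding anything. This is only a sketch; the cell name dm01cel01 is an example.

    # List grid disk names, sizes, and disk group assignments on an existing cell
    # (run from a database server)
    dcli -c dm01cel01 -l root "cellcli -e list griddisk attributes name,size,asmDiskGroupName"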

The following procedure describes how to add grid disks to existing Oracle ASM disk groups.

Note:

  • It is assumed in the following examples that the newly-installed Exadata Storage Servers have the same grid disk configuration as the existing Exadata Storage Servers, and that the additional grid disks will be added to existing disk groups.

    The information gathered about the current configuration should be used when setting up the grid disks. Refer to "Obtaining Current Configuration Information" for information about the existing grid disks.

    Refer to Task 2: Setting Up New Servers for information about configuring the grid disks.

  • If the existing Exadata Cells have High Performance (HP) disks and you are adding Exadata Cells with High Capacity (HC) disks or the existing Exadata Cells have HC disks and you are adding Exadata Cell HP disks, then you must place the new disks in new disk groups. It is not permitted to mix HP and HC disks within the same disk group.

  1. Ensure the new Exadata Storage Servers are running the same version of software as the Exadata Storage Servers already in use, by running the following command on the first database server:

    dcli -g dbs_group -l root "imageinfo -ver"
    

    Note:

    If the software on the cells does not match, then upgrade or patch the software to be at the same level. This could be patching the existing servers or new servers. Refer to "Reviewing Release and Patch Levels" for additional information.

  2. Modify the /etc/oracle/cell/network-config/cellip.ora file on all database servers to have a complete list of all Exadata Storage Servers. This can be done by modifying the file for one database server, and then copying the file to the other servers. The cellip.ora file should be identical on all database servers.

    When adding Exadata Storage Server X4-2L servers, the cellip.ora file contains two IP addresses for each cell. Copy each line completely, including both IP addresses, and merge the new lines into the cellip.ora file of the existing cluster.

    The following is an example of the cellip.ora file after expanding Oracle Exadata Database Machine X3-2 Half Rack to a full rack using Exadata Storage Server X4-2L servers:

    cell="192.168.10.9"
    cell="192.168.10.10"
    cell="192.168.10.11"
    cell="192.168.10.12"
    cell="192.168.10.13"
    cell="192.168.10.14"
    cell="192.168.10.15"
    cell="192.168.10.17;192.168.10.18"
    cell="192.168.10.19;192.168.10.20"
    cell="192.168.10.21;192.168.10.22"
    cell="192.168.10.23;192.168.10.24"
    cell="192.168.10.25;192.168.10.26"
    cell="192.168.10.27;192.168.10.28"
    cell="192.168.10.29;192.168.10.30"
    

    In the preceding example, lines 1 through 7 are for the original servers, and lines 8 through 14 are for the new servers. Exadata Storage Server X4-2L servers have two IP addresses each.

  3. Ensure the updated cellip.ora file is on all database servers. The updated file must include a complete list of all Exadata Storage Servers. (A dcli distribution sketch follows this procedure.)

  4. Verify accessibility of all grid disks using the following command from one of the original database servers. The command can be run as the root user or the oracle user.

    $ GRID_HOME/grid/bin/kfod disks=all dscvgroup=true
    

    The output from the command shows grid disks from the original and new Exadata Storage Servers.

  5. Add the grid disks from the new Exadata Storage Servers to the existing disk groups using commands similar to the following. This step is only permitted when adding high performance disks to high performance disks, or high capacity disks to high capacity disks.

    $ . oraenv
    ORACLE_SID = [oracle] ? +ASM1
    The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/oracle
    
    $ sqlplus / as sysasm
    SQL> ALTER DISKGROUP data ADD DISK
      2> 'o/*/DATA*dm02*'
      3> rebalance power 11;
    

    In the preceding commands, a full rack was added to an existing system. The prefix for the new rack is dm02, and the grid disk prefix is DATA.

    The following is an example in which Oracle Exadata Database Machine Half Rack was upgraded to Oracle Exadata Database Machine Full Rack. The cell host names in the original system were named dm01cel01 through dm01cel07. The new cell host names are dm01cel08 through dm01cel14.

    $ . oraenv
    ORACLE_SID = [oracle] ? +ASM1
    The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/oracle
    
    $ sqlplus / as sysasm
    SQL> ALTER DISKGROUP data ADD DISK
      2> 'o/*/DATA*dm01cel08*',
      3> 'o/*/DATA*dm01cel09*',
      4> 'o/*/DATA*dm01cel10*',
      5> 'o/*/DATA*dm01cel11*',
      6> 'o/*/DATA*dm01cel12*',
      7> 'o/*/DATA*dm01cel13*',
      8> 'o/*/DATA*dm01cel14*'
      9> rebalance power 11;
    

    Note:

    • If your system is running Oracle Database 11g release 2 (11.2.0.1), then Oracle recommends a power limit of 11 so that the rebalance completes as quickly as possible. If your system is running Oracle Database 11g release 2 (11.2.0.2), then Oracle recommends a power limit of 32. The power limit does have an impact on any applications that are running during the rebalance. Refer to Oracle Automatic Storage Management Administrator's Guide for information about the ASM_POWER_LIMIT parameter.

    • Ensure the ALTER DISKGROUP commands are run from different Oracle ASM instances. That way, the rebalance operation for multiple disk groups can run in parallel.

    • Add disks to all disk groups including SYSTEMDG or DBFS_DG.

    • When adding servers with 3 TB High Capacity (HC) disks to existing servers with 2 TB disks, Oracle recommends following the procedure in My Oracle Support note 1476336.1 to properly define the grid disks and disk groups. At this point of setting up the rack, the new grid disks should already be defined, but they still need to be placed into disk groups. Refer to the steps in My Oracle Support note 1476336.1.

    • If the existing Exadata Cells have High Performance (HP) disks and you are adding Exadata Cells with High Capacity (HC) disks, or the existing Exadata Cells have HC disks and you are adding Exadata Cell HP disks, then you must place the new disks in new disk groups. It is not permitted to mix HP and HC disks within the same disk group.

  6. Monitor the status of the rebalance operation using a query similar to the following from any Oracle ASM instance:

    SQL> SELECT * FROM GV$ASM_OPERATION WHERE STATE = 'RUN';
    

    The remaining tasks can be done while the rebalance is in progress.
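
For step 3, the merged cellip.ora file can be pushed from the first database server and then verified with dcli. This is only a sketch; it assumes a dbs_group file that lists all database servers, old and new.

    # Copy the merged cellip.ora to every database server, then verify the copies match
    dcli -g dbs_group -l root -f /etc/oracle/cell/network-config/cellip.ora \
      -d /etc/oracle/cell/network-config/
    dcli -g dbs_group -l root "md5sum /etc/oracle/cell/network-config/cellip.ora"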

1.4.7 Task 7: Adding Servers to a Cluster

The following procedure describes how to add servers to a cluster:

Caution:

If Cluster Ready Services (CRS) manages additional services that are not yet installed on the new nodes, such as Oracle Golden Gate, then note the following:

  • It may be necessary to stop those services on the existing node before running the addNode.sh script.

  • It is necessary to create any users and groups on the new database servers that run these additional services.

  • It may be necessary to disable those services from auto-start so that CRS does not try to start the services on the new nodes.

Note:

To prevent problems with transferring files between existing and new nodes, you need to set up SSH equivalence. See "Setting up ssh equivalence from the newly added guest domain(s) to all the other nodes in the cluster" for details.

  1. Ensure the /etc/oracle/cell/network-config/*.ora files are correct and consistent on all database servers. The cellip.ora file on every database server should include both the existing and the new storage servers.

  2. Ensure the ORACLE_BASE and diag destination directories have been created on the Grid Infrastructure destination home.

    The following is an example for 11g:

    # dcli -g /root/new_group_files/dbs_group -l root mkdir -p   \
      /u01/app/11.2.0/grid  /u01/app/oraInventory /u01/app/grid/diag
    
    # dcli -g /root/new_group_files/dbs_group -l root chown -R grid:oinstall \
      /u01/app/11.2.0 /u01/app/oraInventory /u01/app/grid
    
    # dcli -g /root/new_group_files/dbs_group -l root chmod -R 770   \
      /u01/app/oraInventory 
    
    # dcli -g /root/new_group_files/dbs_group -l root chmod -R 755   \
      /u01/app/11.2.0  /u01/app/11.2.0/grid
    

    The following is an example for 12c:

    # cd /
    # rm -rf /u01/app/*
    # mkdir -p /u01/app/12.1.0.2/grid
    # mkdir -p /u01/app/oracle/product/12.1.0.2/dbhome_1
    # chown -R oracle:oinstall /u01
    
  3. Ensure the inventory directory and Grid Infrastructure home directories have been created and have the proper permissions. The directories should be owned by the Grid Infrastructure owner and the OINSTALL group. The inventory directory should have 770 permission, and the Grid Infrastructure home directories should have 755.

    If you are running 12c:

    • Make sure oraInventory does not exist inside /u01/app.

    • Make sure /etc/oraInst.loc does not exist.

  4. Create users and groups on the new nodes with the same user identifiers and group identifiers as on the existing nodes.

    Note:

    If Oracle Exadata Deployment Assistant was used earlier, then these users and groups should have been created. Check that they do exist, and have the correct UID and GID values.

  5. Log in as the Grid Infrastructure owner on an existing host.

  6. Verify the Oracle Cluster Registry (OCR) backup exists using the following command:

    ocrconfig -showbackup 
    
  7. Verify that the additional database servers are ready to be added to the cluster using commands similar to the following:

    $ cluvfy stage -post hwos -n \
      dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08 \
      -verbose
    
    $ cluvfy comp peer -refnode dm01db01 -n \
      dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08 \
      -orainv oinstall -osdba dba | grep -B 3 -A 2 mismatched
    
    $ cluvfy stage -pre nodeadd -n \
      dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08 \
      -verbose -fixup -fixupdir /home/gi_owner_name/fixup.d
    

    In the preceding commands, gi_owner_name is the name of the Grid Infrastructure owner, dm02db01 through dm02db08 are the new database servers, and refnode is an existing database server.

    Note:

    • The second and third commands do not display output if the commands complete correctly.

    • An error about a voting disk, similar to the following, may be displayed:

       ERROR: 
       PRVF-5449 : Check of Voting Disk location "o/192.168.73.102/ \
       DATA_CD_00_dm01cel07(o/192.168.73.102/DATA_CD_00_dm01cel07)" \
       failed on the following nodes:
       Check failed on nodes: 
               dm01db01
               dm01db01:No such file or directory
       …
       PRVF-5431 : Oracle Cluster Voting Disk configuration check
      

      If such an error occurs:

      - If you are running 11g, set the environment variable as follows:

      $ export IGNORE_PREADDNODE_CHECKS=Y
      

      Setting the environment variable does not prevent the error when running the cluvfy command, but it does allow the addNode.sh script to complete successfully.

      - If you are running 12c, use the following addnode parameters: -ignoreSysPrereqs -ignorePrereq

      In 12c, addnode does not use the IGNORE_PREADDNODE_CHECKS environment variable.

    • If a database server was installed with a certain image and subsequently patched to a later image, then some operating system libraries may be older than the version expected by the cluvfy command. This causes the cluvfy command and possibly the addNode.sh script to fail.

      It is permissible to have an earlier version as long as the difference in version is minor. For example, glibc-common-2.5-81.el5_8.2 versus glibc-common-2.5-49. The versions are different, but both are at version 2.5, so the difference is minor, and it is permissible for them to differ.

      Set the environment variable IGNORE_PREADDNODE_CHECKS=Y before running the addNode.sh script to work around this problem.

  8. Ensure that all directories inside the Grid Infrastructure home on the existing server have their executable bits set. To do this, run the following commands as the root user.

    find /u01/app/11.2.0/grid  -type d  -user root  ! -perm /u+x  !     \
    -perm /g+x ! -perm /o+x
    
    find /u01/app/11.2.0/grid  -type d  -user gi_owner_name  ! -perm /u+x  !   \
    -perm /g+x ! -perm /o+x
    

    In the preceding commands, gi_owner_name is the name of the Grid Infrastructure owner, and /u01/app/11.2.0/grid is the Grid Infrastructure home.

    If any directories are listed, then ensure the group and others permissions are +x; an example of fixing the listed directories follows. The Grid_home/network/admin/samples, Grid_home/crf/admin/run/crfmond, and Grid_home/crf/admin/run/crflogd directories may need the +x permissions set.
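
    For example, the listed directories could be fixed in a single pass by re-running each find command with an -exec action. This is a sketch only; review the list of directories before changing permissions, and repeat with -user gi_owner_name for the second command.

    find /u01/app/11.2.0/grid  -type d  -user root  ! -perm /u+x  !     \
    -perm /g+x ! -perm /o+x -exec chmod a+x {} \;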

    If you are running 12c, run the following commands:

    # chmod -R u+x /u01/app/12.1.0.2/grid/gpnp/gpnp_bcp*
    
    # chmod -R o+rx /u01/app/12.1.0.2/grid/gpnp/gpnp_bcp*
    
    # chmod o+r /u01/app/12.1.0.2/grid/bin/oradaemonagent /u01/app/12.1.0.2/grid/srvm/admin/logging.properties
    
    # chmod a+r /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/*O
    
    # chmod a+r /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/*0
    
    # chown -f gi_owner_name:dba /u01/app/12.1.0.2/grid/OPatch/ocm/bin/emocmrsp
    

    The Grid_home/network/admin/samples directory needs the +x permission:

    chmod -R a+x /u01/app/12.1.0.2/grid/network/admin/samples
    
  9. Run the following command. It is assumed that the Grid Infrastructure home is owned by the Grid Infrastructure user.

    $ dcli -g old_db_nodes -l root chown -f gi_owner_name:dba \
      /u01/app/11.2.0/grid/OPatch/ocm/bin/emocmrsp
    
  10. This step is needed only if you are running 11g. In 12c, no response file is needed because the values are specified on the command line.

    Create a response file, add-cluster-nodes.rsp, as the Grid Infrastructure user to add the new servers similar to the following:

    RESPONSEFILE_VERSION=2.2.1.0.0
    
    CLUSTER_NEW_NODES={dm02db01,dm02db02,   \
    dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08}
    
    CLUSTER_NEW_VIRTUAL_HOSTNAMES={dm0201-vip,dm0202-vip,dm0203-vip,dm0204-vip,  \
    dm0205-vip,dm0206-vip,dm0207-vip,dm0208-vip}
    

    In the preceding file, the host names dm02db01 through dm02db08 are the new nodes being added to the cluster.

    Note:

    The lines listing the server names should appear on one continuous line. They are wrapped in the document due to page limitations.

  11. Ensure most of the files in the Grid_home/rdbms/audit and Grid_home/log/diag/* directories have been moved or deleted before extending a cluster.

  12. If the installer runs out of memory, then refer to My Oracle Support note 744213.1. The note describes how to edit the Grid_home/oui/oraparam.ini file and change the JRE_MEMORY_OPTIONS parameter to -Xms512m -Xmx2048m.
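
    In practice this is a one-line edit: locate the JRE_MEMORY_OPTIONS entry in Grid_home/oui/oraparam.ini and change its value, keeping whatever quoting is already used in your copy of the file. For example:

    JRE_MEMORY_OPTIONS=" -Xms512m -Xmx2048m"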

  13. Add the new servers by running the addNode.sh script from an existing server as the Grid Infrastructure owner.

    • If you are running 11g:

      $ cd Grid_home/oui/bin
      $ ./addNode.sh -silent -responseFile /path/to/add-cluster-nodes.rsp
      
    • If you are running 12c, run the addnode.sh command with the CLUSTER_NEW_NODES and CLUSTER_NEW_VIRTUAL_HOSTNAMES parameters. The syntax is:

      $ ./addnode.sh -silent "CLUSTER_NEW_NODES={comma_delimited_new_nodes}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={comma_delimited_new_node_vips}"
      

      For example:

      $ cd Grid_home/addnode/
      
      $ ./addnode.sh -silent "CLUSTER_NEW_NODES={scaqaa04adm01}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={scaqaa04client01-vip}" -ignoreSysPrereqs -ignorePrereq
      
  14. Verify the grid disks are visible from each of the new database servers using the following command:

    $ Grid_home/grid/bin/kfod disks=all dscvgroup=true
    
  15. When prompted, run the orainstRoot.sh script as the root user on the new servers using the dcli utility.

    $ dcli -g new_db_nodes -l root \
      /u01/app/oraInventory/orainstRoot.sh
    
  16. Run the Grid_home/root.sh script on each server sequentially. This simplifies the process, and ensures that any issues can be clearly identified and addressed.

    Note:

    The node identifier is set in order of the nodes where the root.sh script is run. Typically, the script is run from the lowest numbered node name to the highest.
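
    For example, assuming the Grid Infrastructure home used elsewhere in this procedure, the script could be run on the first new server, allowed to complete, and only then started on the next one:

    # ssh dm02db01 /u01/app/11.2.0/grid/root.sh
    # ssh dm02db02 /u01/app/11.2.0/grid/root.sh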

  17. Check the log file from the root.sh script and verify there are no problems on the server before proceeding to the next server. If there are problems, then resolve them before continuing.

  18. Check the status of the cluster after adding the servers using a command similar to the following:

    $ cluvfy stage -post nodeadd -n \
      dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08 \
      -verbose
    
  19. Check that all servers have been added and have basic services running using the following command from any server:

    crsctl stat res -t
    

    Note:

    It may be necessary to mount disk groups on the new servers using commands similar to the following. The commands must be run as the oracle user.

    $ srvctl start diskgroup -g data
    $ srvctl start diskgroup -g reco
    
  20. (Releases 11.2.0.2 and later) Do the following procedure:

    1. Manually add the CLUSTER_INTERCONNECTS parameter to the SPFILE for each Oracle ASM instance using the following command:

      alter system set cluster_interconnects = '192.168.10.x'       \
      sid='+ASMx' scope=spfile
      
    2. Restart the cluster on each new server.

    3. Verify the parameters were set correctly.
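
    As a sketch, sub-step 2 can be performed on each new server as the root user, assuming the Grid Infrastructure home used elsewhere in this procedure:

      # /u01/app/11.2.0/grid/bin/crsctl stop crs
      # /u01/app/11.2.0/grid/bin/crsctl start crs

    Sub-step 3 can then be checked from any instance; each Oracle ASM instance should report its own 192.168.10.x InfiniBand address:

      select inst_id, value from gv$parameter where name = 'cluster_interconnects'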

For adding nodes to an Oracle VM Cluster, refer to "Expanding an Oracle VM RAC Cluster on Exadata" in the Oracle Exadata Database Machine Maintenance Guide.

1.4.8 Task 8: Configuring Cell Alerts for New Exadata Storage Servers

Cell alerts need to be configured for the new Exadata Storage Servers. The configuration depends on the type of installation, as follows:

  • When extending Oracle Exadata Database Machine Quarter Rack to Oracle Exadata Database Machine Half Rack, or Oracle Exadata Database Machine Half Rack to Oracle Exadata Database Machine Full Rack:

    Manually configure cell alerts on the new Exadata Storage Servers. Use the settings on the original Exadata Storage Servers as a guide. To view the settings on the original Exadata Storage Servers, use a command similar to the following, where original_cell_nodes is a file that lists the original Exadata Storage Servers:

    dcli -g original_cell_nodes -l celladmin cellcli -e list cell detail
    

    To configure cell alerts on the new Exadata Storage Servers, use a command similar to the following, where new_cell_nodes is a file that lists the new Exadata Storage Servers. A sketch for validating the new settings follows this list.

    dcli -g new_cell_nodes -l root "cellcli -e ALTER CELL            \
    smtpServer=\'mailserver.example.com\',                           \
    smtpPort=25,                                                     \
    smtpUseSSL=false,smtpFrom=\'DBM dm01\',                          \
    smtpFromAddr=\'storecell@example.com\',                          \
    smtpToAddr=\'dbm-admins@example.com\',                           \
    notificationMethod=\'mail,snmp\',                                \
    notificationPolicy=\'critical,warning,clear\',                   \
    snmpSubscriber=\(\(host=\'snmpserver.example.com\',port=162\)\)"
    

    Note:

    The backslash character (\) is used as an escape character for the dcli utility, and as a line continuation character in the preceding command.

  • When cabling racks:

    Run Oracle Exadata Deployment Assistant as the root user from the original rack to set up e-mail alerts for the Exadata Storage Servers in the new rack. The utility includes the SetupCellEmailAlerts step to configure the alerts.
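
After the alerts have been configured manually as described in the first item above, the e-mail settings on the new Exadata Storage Servers can be checked by sending a test message. As a sketch, where new_cell_nodes is the file listing the new Exadata Storage Servers:

    dcli -g new_cell_nodes -l root "cellcli -e ALTER CELL VALIDATE MAIL"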

1.4.9 Task 9: Adding Oracle Database ORACLE_HOME to the New Servers

It is necessary to add the Oracle Database software directory ORACLE_HOME to the servers after the cluster modifications are complete, and all the servers are in the cluster. The following procedure describes how to add the ORACLE_HOME directory to the servers:

  1. Check the $ORACLE_HOME/bin directory for files ending in zero (0), such as nmb0, that are owned by the root user and do not have oinstall or world read privileges. Use the following command to modify the file privileges:

    # chmod a+r $ORACLE_HOME/bin/*0
    

    If you are running 12c, you also have to change permissions for files ending in uppercase O, in addition to files ending in zero.

    # chmod a+r $ORACLE_HOME/bin/*O
    # chmod a+r $ORACLE_HOME/bin/*0
    
  2. This step is required for 11g only. If you are running 12c, you can skip this step because the directory has already been created.

    Create the ORACLE_BASE directory for the database owner, if it is different from the Grid Infrastructure owner using the following commands:

    # dcli -g /root/new_group_files/dbs_group -l root mkdir -p /u01/app/oracle
    # dcli -g /root/new_group_files/dbs_group -l root chown oracle:oinstall  \
     /u01/app/oracle
    
  3. Run the following command to set ownership of the emocmrsp file in the Oracle Database $ORACLE_HOME directory:

    # dcli -g old_db_nodes -l root chown -f oracle:dba \
    /u01/app/oracle/product/11.2.0/dbhome_1/OPatch/ocm/bin/emocmrsp
    
  4. This step is required for 11g only. If you are running 12c, you can skip this step because the values are entered on the command line.

    Create a response file, add-db-nodes.rsp, as the oracle owner to add the new servers similar to the following:

    RESPONSEFILE_VERSION=2.2.1.0.0
    
    CLUSTER_NEW_NODES={dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,   \
    dm02db06,dm02db07,dm02db08}
    

    Note:

    The lines listing the server names should appear on one continuous line. They are wrapped in the document due to page limitations.

  5. Add the Oracle Database ORACLE_HOME directory to the new servers by running the addNode.sh script from an existing server as the database owner user.

    • If you are running 11g:

      $ cd $ORACLE_HOME/oui/bin
      $ ./addNode.sh -silent -responseFile /path/to/add-db-nodes.rsp
      
    • If you are running 12c, you specify the nodes on the command line. The syntax is:

      ./addnode.sh -silent "CLUSTER_NEW_NODES={comma_delimited_new_nodes}"
      

      For example:

      $ cd $RDBMS_HOME/addnode
      
      $ ./addnode.sh -silent "CLUSTER_NEW_NODES={scaqaa04adm01}" -ignoreSysPrereqs -ignorePrereq
      
  6. Ensure the $ORACLE_HOME/oui/oraparam.ini file has the memory settings that match the parameters set in the Grid Infrastructure home.

  7. When prompted, run the root.sh script as the root user on each new server using the dcli utility.

    $ dcli -g new_db_nodes -l root $RDBMS_HOME/root.sh
    

    In the preceding command, new_db_nodes is the file with the list of new database servers.

  8. Verify the ORACLE_HOME directories have been added to the new servers using the following command:

    # dcli -g /root/all_group -l root du -sm \
      /u01/app/oracle/product/11.2.0/dbhome_1
    

1.4.10 Task 10: Adding Database Instance to the New Servers

Before adding the database instances to the new servers, check the following:

  • Maximum file size: If any data files have reached their maximum file size, then the addInstance command may fail with an ORA-740 error. Oracle recommends checking that none of the files listed in DBA_DATA_FILES have reached their maximum size, and correcting any that have before adding instances (see the example query after this list).

  • Online redo logs: If the online redo logs are kept in the directory specified by the DB_RECOVERY_FILE_DEST parameter, then ensure the space allocated is sufficient for the additional redo logs for the new instances being added. If necessary, increase the value of the DB_RECOVERY_FILE_DEST_SIZE parameter.

  • Total number of instances in the cluster: Set the value of the initialization parameter cluster_database_instances in the spfile for each database to the total number of instances that will be in the cluster after adding the new servers.

  • HugePages: Ensure the HugePages settings on the new servers are configured to match the existing servers.
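
For example, the following query is one way to list autoextensible data files that are at, or close to, their maximum size; the 100 MB threshold is only an illustration and can be adjusted:

    select file_name, bytes, maxbytes
      from dba_data_files
     where autoextensible = 'YES'
       and maxbytes - bytes < 100*1024*1024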

The following procedure describes how to add database instances to the new servers:

  1. Use a command similar to the following from an existing database server to add instances to the new servers. In the following example, the instance dbm9 is added on server dm02db01.

    dbca -silent -addInstance -gdbName dbm -nodeList dm02db01 -instanceName dbm9 \
    -sysDBAUsername sys
    
    
    

    The command must be run for all servers and instances, substituting the server name and instance name, as appropriate (see the sketch after the note).

    Note:

    If the command fails, then ensure any files that were created, such as redo log files, are cleaned up. The deleteInstance command does not clean log files or data files that were created by the addInstance command.
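
    For example, if instances dbm9 through dbm16 are being added on servers dm02db01 through dm02db08, the command could be scripted from an existing database server as a simple loop. This is a sketch only; it assumes the instance numbers line up with the server numbers, so adjust the names to your environment.

    i=9
    for node in dm02db01 dm02db02 dm02db03 dm02db04 dm02db05 dm02db06 dm02db07 dm02db08
    do
      dbca -silent -addInstance -gdbName dbm -nodeList $node \
        -instanceName dbm$i -sysDBAUsername sys
      i=$((i+1))
    done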

  2. Do the following procedure:

    1. Manually add the CLUSTER_INTERCONNECTS parameter to the spfile for each new Oracle instance. The additions are similar to the existing entries, but are the InfiniBand addresses corresponding to the server each instance runs on.

    2. Restart the instance on each new server.

    3. Verify the parameters were set correctly.

1.5 Returning the Rack to Service

Use the following procedure to ensure the new hardware is correctly configured and ready for use:

  1. Run the /opt/oracle.SupportTools/ibdiagtools/verify-topology command to ensure that all InfiniBand cables are connected and secure.

    See Also:

    Oracle Exadata Database Machine Maintenance Guide for information about verifying the InfiniBand network configuration

  2. Run the Oracle Exadata Database Machine HealthCheck utility using the steps described in My Oracle Support note 1070954.1.

  3. Verify the instance additions using the following commands:

    srvctl config database -d dbm
    srvctl status database -d dbm
    
  4. Check the cluster resources using the following command:

    crsctl stat res -t
    
  5. Ensure the configuration summary report from the original cluster deployment is updated to include all servers. This document should include the calibrate and network verifications for the new rack, and the InfiniBand cable checks (verify-topology and infinicheck).

  6. Conduct a power-off test, if possible. If the new Exadata Storage Servers cannot be powered off, then verify that the new database servers with the new instances can be powered off and powered on, and that all processes start automatically.

    Note:

    Ensure the Oracle ASM disk rebalance process has completed for all disk groups by using the following command:

    select * from gv$asm_operation
    

    No rows should be returned by the command.

  7. Review the configuration settings, such as the following:

    • All parallelism settings

    • Backup configurations

    • Standby site, if any

    • Service configuration

    • Oracle Database File System (DBFS) configuration, and mount points on new servers

    • Installation of Oracle Enterprise HugePage Manager agents on new database servers

    • HugePages settings

  8. Incorporate the new cell and database servers into Auto Service Request.

    See Also:

    Oracle Exadata Quick Installation Guide for ASR at

    http://www.oracle.com/technetwork/systems/asr/documentation/

  9. Update Grid Control to include the new nodes.