3.6 Adding Grid Disks to Oracle ASM Disk Groups

Grid disks can be added to Oracle ASM disk groups before or after the new servers are added to the cluster. The advantage of adding the grid disks before adding the new servers is that the rebalance operation can start earlier. The advantage of adding the grid disks after adding the new servers is that the rebalance operation can be done on the new servers so less load is placed on the existing servers.

The following procedure describes how to add grid disks to existing Oracle ASM disk groups.

Note:

  • It is assumed in the following examples that the newly-installed storage servers have the same grid disk configuration as the existing storage servers, and that the additional grid disks will be added to existing disk groups.

    The information gathered about the current configuration should be used when setting up the grid disks (see the example at the end of this note).

  • If the existing storage servers have High Performance (HP) disks and you are adding storage servers with High Capacity (HC) disks, or the existing storage servers have HC disks and you are adding storage servers with HP disks, then you must place the new disks in new disk groups. It is not permitted to mix HP and HC disks within the same disk group.
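    A minimal sketch for gathering the current grid disk configuration mentioned above, assuming a group file named cell_group that lists the existing storage servers (the group file name and the celladmin user are assumptions; use whatever group file and account your environment already has):

    # Hypothetical group file cell_group; lists each grid disk with its size
    # and the Oracle ASM disk group it currently belongs to.
    dcli -g cell_group -l celladmin "cellcli -e list griddisk attributes name,size,asmDiskGroupName"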

  1. Ensure the new storage servers are running the same version of software as storage servers already in use. Run the following command on the first database server:

    dcli -g dbs_group -l root "imageinfo -ver"
    

    Note:

    If the Oracle Exadata System Software on the storage servers does not match, then upgrade or patch the software to be at the same level. This could be patching the existing servers or new servers. Refer to Reviewing Release and Patch Levels for additional information.
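    A sketch for checking the storage servers directly, assuming a group file (for example, cell_group) that lists all storage servers, including the new ones; the group file name is an assumption:

    # Hypothetical group file cell_group containing all storage server host names.
    dcli -g cell_group -l root "imageinfo -ver"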
  2. Modify the /etc/oracle/cell/network-config/cellip.ora file on all database servers to have a complete list of all storage servers. The cellip.ora file should be identical on all database servers.

    When adding Oracle Exadata Storage Server X4-2L servers, the cellip.ora file contains two IP addresses listed for each cell. Copy each line completely to include the two IP addresses, and merge the addresses in the cellip.ora file of the existing cluster.

    1. From any database server, make a backup copy of the cellip.ora file.

      cd /etc/oracle/cell/network-config
      cp cellip.ora cellip.ora.orig
      cp cellip.ora cellip.ora-bak
    2. Edit the cellip.ora-bak file and add the IP addresses for the new storage servers.
    3. Copy the edited file to the cellip.ora file on all database nodes using dcli. Use a file named dbnodes that contains the names of every database server in the cluster, with each database server name on a separate line (an example of this file follows the command). Run the following command from the directory that contains the cellip.ora-bak file.

      /usr/local/bin/dcli -g dbnodes -l root -f cellip.ora-bak -d /etc/oracle/cell/network-config/cellip.ora
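      The dbnodes file is a plain list of database server host names, one per line; for example (hypothetical host names):

      db01
      db02
      db03
      db04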

    The following is an example of the cellip.ora file after expanding Oracle Exadata Database Machine X3-2 Half Rack to Full Rack using Oracle Exadata Storage Server X4-2L servers:

    cell="192.168.10.9"
    cell="192.168.10.10"
    cell="192.168.10.11"
    cell="192.168.10.12"
    cell="192.168.10.13"
    cell="192.168.10.14"
    cell="192.168.10.15"
    cell="192.168.10.17;192.168.10.18"
    cell="192.168.10.19;192.168.10.20"
    cell="192.168.10.21;192.168.10.22"
    cell="192.168.10.23;192.168.10.24"
    cell="192.168.10.25;192.168.10.26"
    cell="192.168.10.27;192.168.10.28"
    cell="192.168.10.29;192.168.10.30"
    

    In the preceding example, the first seven entries are for the original storage servers, and the last seven entries are for the new storage servers. Oracle Exadata Storage Server X4-2L servers have two IP addresses each.

  3. Ensure the updated cellip.ora file is on all database servers. The updated file must include a complete list of all storage servers.
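    A minimal sketch for confirming this, assuming the same dbnodes group file used in the previous step; identical checksums on every node indicate identical files:

    # Compare the cellip.ora checksum across all database servers.
    /usr/local/bin/dcli -g dbnodes -l root "md5sum /etc/oracle/cell/network-config/cellip.ora"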

  4. Verify accessibility of all grid disks from one of the original database servers. The following command can be run as the root user or the oracle user.

    $ Grid_home/grid/bin/kfod disks=all dscvgroup=true
    

    The output from the command shows grid disks from the original and new storage servers.
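    A sketch for confirming that the new cells are included, assuming the new cell host names contain dm01cel08 through dm01cel14 as in the later example; adjust the pattern to your own naming:

    # Grid disk paths on Exadata include the cell host name, so filtering on a
    # new cell name should return only that cell's grid disks.
    Grid_home/grid/bin/kfod disks=all dscvgroup=true | grep dm01cel08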

  5. Add the grid disks from the new storage servers to the existing disk groups using commands similar to the following. You cannot have both high performance disks and high capacity disks in the same disk group.

    $ . oraenv
    ORACLE_SID = [oracle] ? +ASM1
    The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/oracle
    
    $ sqlplus / as sysasm
    SQL> ALTER DISKGROUP data ADD DISK
      2> 'o/*/DATA*dm02*'
      3> rebalance power 11;
    

    In the preceding commands, a Full Rack was added to an existing Oracle Exadata Rack. The prefix for the new rack is dm02, and the grid disk prefix is DATA.

    The following is an example in which an Oracle Exadata Database Machine Half Rack was upgraded to a Full Rack. The cell host names in the original system were named dm01cel01 through dm01cel07. The new cell host names are dm01cel08 through dm01cel14.

    $ . oraenv
    ORACLE_SID = [oracle] ? +ASM1
    The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/oracle
    
    $ sqlplus / as sysasm
    SQL> ALTER DISKGROUP data ADD DISK
      2> 'o/*/DATA*dm01cel08*',
      3> 'o/*/DATA*dm01cel09*',
      4> 'o/*/DATA*dm01cel10*',
      5> 'o/*/DATA*dm01cel11*',
      6> 'o/*/DATA*dm01cel12*',
      7> 'o/*/DATA*dm01cel13*',
      8> 'o/*/DATA*dm01cel14*'
      9> rebalance power 11;

    Note:

    • If your system is running Oracle Database 11g release 2 (11.2.0.1), then Oracle recommends a power limit of 11 so that the rebalance completes as quickly as possible. If your system is running Oracle Database 11g release 2 (11.2.0.2), then Oracle recommends a power limit of 32. The power limit does have an impact on any applications that are running during the rebalance.

    • Ensure the ALTER DISKGROUP commands are run from different Oracle ASM instances. That way, the rebalance operation for multiple disk groups can run in parallel (see the sketch after this note).

    • Add disks to all disk groups including SYSTEMDG or DBFS_DG.

    • When adding servers with 3 TB High Capacity (HC) disks to existing servers with 2 TB disks, it is recommended to follow the procedure in My Oracle Support note 1476336.1 to properly define the grid disks and disk groups. At this point in setting up the rack, the new grid disks should already be defined, but they still need to be placed into disk groups. Refer to the steps in My Oracle Support note 1476336.1.

    • If the existing storage servers have High Performance (HP) disks and you are adding storage servers with High Capacity (HC) disks, or the existing storage servers have HC disks and you are adding storage servers with HP disks, then you must place the new disks in new disk groups. It is not permitted to mix HP and HC disks within the same disk group.
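    A sketch of adding the remaining disk groups in parallel from a second Oracle ASM instance (+ASM2 on another database server); the RECO and DBFS_DG grid disk prefixes and the dm02 rack prefix are assumptions carried over from the earlier example:

    $ . oraenv
    ORACLE_SID = [oracle] ? +ASM2
    The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/oracle

    $ sqlplus / as sysasm
    SQL> ALTER DISKGROUP reco ADD DISK 'o/*/RECO*dm02*' REBALANCE POWER 11;
    SQL> ALTER DISKGROUP dbfs_dg ADD DISK 'o/*/DBFS_DG*dm02*' REBALANCE POWER 11;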

  6. Monitor the status of the rebalance operation using a query similar to the following from any Oracle ASM instance:

    SQL> SELECT * FROM GV$ASM_OPERATION WHERE STATE = 'RUN';
    

    The remaining tasks can be done while the rebalance is in progress.
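    To estimate how long the rebalance will take, the same view can be queried for its progress columns; the column selection below is only a sketch:

    SQL> SELECT INST_ID, OPERATION, STATE, POWER, SOFAR, EST_WORK, EST_MINUTES
      2  FROM GV$ASM_OPERATION WHERE STATE = 'RUN';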

See Also: