3.8.1 Adding a Storage Server

This topic describes the step-by-step procedure to add a new storage server (or cell) to an existing Oracle Exadata elastic configuration.

  1. If adding a brand new storage server, perform these steps:
    1. Complete all necessary cabling requirements to make the new storage server available to the desired storage grid.
    2. Image the storage server with the appropriate Oracle Exadata System Software image and provide appropriate input when prompted for the IP addresses.
  2. If this is an existing storage server in the rack and you are allocating it to another cluster within the RDMA Network Fabric network, note the IP addresses assigned to the RDMA Network Fabric interfaces (such as ib0 and ib1 or re0 and re1) of the storage server being added.

    Add the IP addresses to the /etc/oracle/cell/network-config/cellip.ora file on every Oracle RAC node.

    1. cd /etc/oracle/cell/network-config
    2. cp cellip.ora cellip.ora.orig
    3. cp cellip.ora cellip.ora-bak
    4. Add the new entries to /etc/oracle/cell/network-config/cellip.ora-bak.
    5. Copy the edited file to the cellip.ora file on all database nodes using the following command, where database_nodes refers to a file containing the names of each database server in the cluster, with each name on a separate line:
      /usr/local/bin/dcli -g database_nodes -l root -f cellip.ora-bak -d /etc/oracle/cell/network-config/cellip.ora
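
    For reference, each line in cellip.ora identifies one storage server by the IP addresses of its RDMA Network Fabric interfaces. The entry for the new storage server might look like the following; the addresses are hypothetical, and the exact format (one or two addresses per entry) should match the existing lines in your file:

      cell="192.168.10.17;192.168.10.18"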
  3. If Oracle Auto Service Request (ASR) alerting was set up on the existing storage servers, configure cell Oracle ASR alerting for the storage server being added.
    1. From any existing storage server, list the cell snmpsubscriber attribute.
      CellCLI> LIST CELL ATTRIBUTES snmpsubscriber
    2. Apply the same snmpsubscriber attribute value to the new storage server by running the following command, replacing snmpsubscriber with the value from the previous command.
      CellCLI> ALTER CELL snmpsubscriber=snmpsubscriber

      Note:

      In the snmpsubscriber value, enclose the host name or IP address in quotation marks if it contains non-alphanumeric characters. For example:

      CellCLI> ALTER CELL snmpSubscriber=((host="asr-host.example.com",port=162,community=public,type=asr,asrmPort=16161))
    3. From any existing storage server, list the cell attributes required for configuring cell alerting.
      CellCLI> LIST CELL ATTRIBUTES -
      notificationMethod,notificationPolicy,mailServer,smtpToAddr, -
      smtpFrom,smtpFromAddr,smtpUseSSL,smtpPort
    4. Apply the same values to the new storage server by running the following command, substituting the placeholders with the values found from the existing storage server.
      CellCLI> ALTER CELL -
       notificationMethod='notificationMethod', -
       notificationPolicy='notificationPolicy', -
       mailServer='mailServer', -
       smtpToAddr='smtpToAddr', -
       smtpFrom='smtpFrom', -
       smtpFromAddr='smtpFromAddr', -
       smtpUseSSL=smtpUseSSL, -
       smtpPort=smtpPort
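
      For illustration only, a completed command might resemble the following. The values shown (notification settings, mail server, addresses, and port) are hypothetical; use the values reported by the existing storage server in the previous step:

      CellCLI> ALTER CELL -
       notificationMethod='mail,snmp', -
       notificationPolicy='critical,warning,clear', -
       mailServer='mail.example.com', -
       smtpToAddr='exadata-admins@example.com', -
       smtpFrom='Exadata cell', -
       smtpFromAddr='celadm05@example.com', -
       smtpUseSSL=FALSE, -
       smtpPort=25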
  4. If needed, create cell disks on the storage server being added.
    1. On the new cell, list any existing cell disks.
      CellCLI> LIST CELLDISK
      
    2. If the cell disks are not present, then create the cell disks.
      CellCLI> CREATE CELLDISK ALL
    3. If your system has PMEM devices, then check that the PMEM log was created by default.
      CellCLI> LIST PMEMLOG

      You should see the name of the PMEM log. It should look like cellnodename_PMEMLOG, and its status should be normal.

      If the PMEM log does not exist, create it.

      CellCLI> CREATE PMEMLOG ALL
    4. Check that the flash log was created by default.
      CellCLI> LIST FLASHLOG

      You should see the name of the flash log. It should look like cellnodename_FLASHLOG, and its status should be normal.

      If the flash log does not exist, create it.

      CellCLI> CREATE FLASHLOG ALL
    5. If the system contains PMEM devices, then check the current PMEM cache mode and compare it to the PMEM cache mode on existing cells.
      CellCLI> LIST CELL ATTRIBUTES pmemcachemode

      Note:

      Commencing with Oracle Exadata System Software release 23.1.0, PMEM cache operates only in write-through mode.

      If the PMEM cache mode on the new cell does not match the existing cells, change the PMEM cache mode as follows:

      1. If the PMEM cache exists and the cell is in WriteBack PMEM cache mode, you must first flush the PMEM cache.

        CellCLI> ALTER PMEMCACHE ALL FLUSH

        Wait for the command to return.

        If the PMEM cache mode is WriteThrough, then you do not need to flush the cache first.

      2. Drop the PMEM cache.

        CellCLI> DROP PMEMCACHE ALL
      3. Change the PMEM cache mode.

        The value of the pmemCacheMode attribute is either writeback or writethrough. The value has to match the PMEM cache mode of the other storage cells in the cluster.

        CellCLI> ALTER CELL pmemCacheMode=writeback_or_writethrough
      4. Re-create the PMEM cache.

        CellCLI> CREATE PMEMCACHE ALL
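
      For example, if the existing cells use write-through PMEM cache mode and the new cell is currently in write-back mode (a hypothetical scenario), the full sequence on the new cell would be:

        CellCLI> ALTER PMEMCACHE ALL FLUSH
        CellCLI> DROP PMEMCACHE ALL
        CellCLI> ALTER CELL pmemCacheMode=writethrough
        CellCLI> CREATE PMEMCACHE ALL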
    6. Check the current flash cache mode and compare it to the flash cache mode on existing cells.
      CellCLI> LIST CELL ATTRIBUTES flashcachemode

      If the flash cache mode on the new cell does not match the existing cells, change the flash cache mode as follows:

      1. If the flash cache exists and the cell is in WriteBack flash cache mode, you must first flush the flash cache.

        CellCLI> ALTER FLASHCACHE ALL FLUSH

        Wait for the command to return.

      2. Drop the flash cache.

        CellCLI> DROP FLASHCACHE ALL
      3. Change the flash cache mode.

        The value of the flashCacheMode attribute is either writeback or writethrough. The value has to match the flash cache mode of the other storage cells in the cluster.

        CellCLI> ALTER CELL flashCacheMode=writeback_or_writethrough
      4. Re-create the flash cache.

        CellCLI> CREATE FLASHCACHE ALL
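
      For example, if the existing cells use write-back flash cache and the new cell is currently in write-through mode (a hypothetical scenario), no flush is required and the sequence on the new cell would be:

        CellCLI> DROP FLASHCACHE ALL
        CellCLI> ALTER CELL flashCacheMode=writeback
        CellCLI> CREATE FLASHCACHE ALL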
  5. If needed, configure data security on the new storage server.
    1. On an existing cell, check for the existence of security keys.

      For example:

      CellCLI> LIST KEY DETAIL
      
      name:
      key:               8217035e5ac8ed64503020a40c520848
      type:              CELL

      name:              Cluster-c1
      key:               da88cbc5579d4179f89d00a44d0edae9
      type:              ASMCLUSTER

      name:              Cluster-c2
      key:               77fb637d4267913f40449fa2c57c6cf9
      type:              ASMCLUSTER
    2. If the existing cells contain security keys, then configure the same keys on the new cell so that its security configuration matches the existing cells.
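
      As a sketch only, assuming the ASSIGN KEY commands available in current Oracle Exadata System Software releases, the keys from the example listing above could be applied on the new cell as follows. Refer to the Oracle Exadata security documentation for the exact procedure for your release:

      CellCLI> ASSIGN KEY FOR CELL '8217035e5ac8ed64503020a40c520848'
      CellCLI> ASSIGN KEY FOR ASMCLUSTER 'Cluster-c1'='da88cbc5579d4179f89d00a44d0edae9'
      CellCLI> ASSIGN KEY FOR ASMCLUSTER 'Cluster-c2'='77fb637d4267913f40449fa2c57c6cf9'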
  6. Create the grid disks on the cell being added.
    1. Query the attributes of the existing grid disks from an existing cell.
      CellCLI> LIST GRIDDISK ATTRIBUTES name,asmDiskGroupName,cachingPolicy,size,offset,availableTo
    2. For each disk group found by the above command, create grid disks on the new cell that is being added.

      Match the attributes of the existing grid disks for the particular disk group as reported by the previous LIST GRIDDISK command.

      Create the grid disks in the order of increasing offset to ensure the same layout and performance characteristics as the existing cells.

      For example, if the LIST GRIDDISK command identifies grid disks with the following characteristics, then create grid disks for DATAC1 first, then RECOC1, and finally DBFS_DG.

      asmDiskGroupName          size            offset
      DATAC1                    2.15625T        32M
      RECOC1                    552.109375G     2.1562957763671875T
      DBFS_DG                   33.796875G      2.695465087890625T

      Use the following command as a template:

      CellCLI> CREATE GRIDDISK ALL HARDDISK -
       prefix=matching_prefix_of_the_corresponding_existing_diskgroup, -
       size=size_followed_by_G_or_T, -
       cachingPolicy='value_from_command_above_for_this_disk_group', -
       availableto='value_from_command_above_for_this_disk_group', -
       comment="Cluster cluster_name diskgroup diskgroup_name"

      Caution:

      Be sure to specify the EXACT size shown along with the unit (either T or G).
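
      For illustration, a command for the hypothetical DATAC1 disk group shown above might look like the following. Take the prefix, size, caching policy, availableTo value, and comment from your own LIST GRIDDISK output:

      CellCLI> CREATE GRIDDISK ALL HARDDISK -
       prefix=DATAC1, -
       size=2.15625T, -
       cachingPolicy='default', -
       availableTo='Cluster-c1', -
       comment="Cluster Cluster-c1 diskgroup DATAC1"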
  7. Log in to each Oracle RAC node and verify that the newly created grid disks are visible from the Oracle RAC nodes.

    Run the following command as the OS owner of the Oracle Grid Infrastructure software (typically grid or oracle). In the command, Grid_home refers to the installation home directory for the Oracle Grid Infrastructure software and cell_being_added refers to the name of the new cell being added.

    $ Grid_home/bin/kfod op=disks disks=all | grep cell_being_added

    The kfod command output should display all the grid disks on the newly added storage server.
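
    For example, with a hypothetical Grid home of /u01/app/19.0.0.0/grid and a new cell named celadm05:

    $ /u01/app/19.0.0.0/grid/bin/kfod op=disks disks=all | grep celadm05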

  8. Add the newly created grid disks to the corresponding Oracle ASM disk groups.

    In this example, comma_separated_disk_names refers to the disk names from step 6 corresponding to disk_group_name.

    SQL> ALTER DISKGROUP disk_group_name ADD DISK 'comma_separated_disk_names';

    This command kicks off an Oracle ASM rebalance at the default power level.
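
    As a hypothetical example, if the new cell is named celadm05 and its RDMA Network Fabric IP addresses are 192.168.10.17 and 192.168.10.18, the DATAC1 grid disks could be added with a wildcard pattern that matches all of the cell's grid disks for that disk group:

    SQL> ALTER DISKGROUP DATAC1 ADD DISK 'o/192.168.10.17;192.168.10.18/DATAC1_CD_*_celadm05';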

  9. Monitor the progress of the rebalance by querying GV$ASM_OPERATION.
    SQL> SELECT * FROM GV$ASM_OPERATION;

    When the rebalance completes, the addition of the cell to the Oracle RAC cluster is complete.

  10. Download and run the latest version of Exachk to ensure that the resulting configuration implements the latest best practices for Oracle Exadata.